title | abstract |
---|---|
Dependence of reaction kinetics and physical and mechanical properties on the reaction systems of acetylation II: physical and mechanical properties | Acetylation of wood was carried out in acetic anhydride only, acetic anhydride/xylene 1 : 1 (v/v), and acetic anhydride/pyridine 4 : 1 (v/v) solutions. The antishrink efficiency (ASE), hygroscopic properties, vibrational properties, and bending strength were compared among the three reaction solutions. The ASE was a simple function of weight gain (WG); the equilibrium moisture content at a given WG differed among the reaction solutions. Based on this fact and the results of repeated water soaking and oven-drying tests, it was found that the bulking effect was a major factor, and that decreased hygroscopicity contributes only slightly to the dimensional stabilization by acetylation. The difference in equilibrium moisture content among reaction solutions appears more significant in block samples than in wood meal, probably due to the fiber-to-fiber bonds in the former. The tendencies for change in the specific Young’s modulus and the loss tangent differed among reaction solutions, whereas in the static bending test the difference was not marked. |
Single-layer slotted post-wall waveguide array with compact feed-line structures for 77 GHz automotive radar | A single-layer antenna composed of four subarrays and compact low-loss feed-line structures connecting to an RF module for 77 GHz automotive radar systems is presented. This antenna is based on a slotted post-wall waveguide array, which is integrated in a ceramic-filled PTFE substrate. The antenna gain is higher than 16 dBi relative to the maximum available directivity of 20.4 dBi at 76.5 GHz. The proposed compact feed-line structures achieve a low transmission loss of less than 3.2 dB. |
Reliable, Built-in, High-Accuracy Force Sensing for Legged Robots | An approach for achieving reliable, built-in, high-accuracy force sensing for legged robots is presented, based on direct exploitation of the properties of a robot’s mechanical structure. The proposed methodology relies on taking account of force-sensing requirements at the robot design stage, with a view to embedding force-sensing capability within the mechanical structure of the robot itself. The test case is ROBOCLIMBER, a bulky, quadruped climbing and walking machine whose weighty legs enable it to carry out heavy-duty drilling operations. The paper shows that, with finite-element analysis of ROBOCLIMBER’s mechanical configuration during the design stage, candidate positions can be selected for the placement of force transducers to measure indirectly the contact forces between the feet and the ground. Force sensors are then installed at the theoretically best positions on the mechanical structure, and several experiments are carried out to calibrate all sensors within their operational range of interest. After calibration, the built-in sensors are subjected to experimental performance evaluation, and the final best sensor option is found. The built-in force-sensing capability thus implemented is subjected to its first test of usability when it is employed to compute the actual centre of gravity of ROBOCLIMBER. The method is shown to be useful for determining variation during a gait (due to the non-negligible weight of the legs). Afterwards the force sensors are shown to be useful for controlling foot–ground interaction, and several illustrative experiments confirm the high sensitivity, reliability and accuracy of the selected approach. Lastly, the built-in sensors are used to measure ground-reaction forces and to compute the zero-moment point for ROBOCLIMBER in real time, both while standing and while executing a dynamically balanced gait. (The International Journal of Robotics Research, Vol. 25, No. 9, September 2006, pp. 931-950, DOI: 10.1177/0278364906069173, ©2006 SAGE Publications.) KEY WORDS—climbing and walking robots, finite-element analysis, force sensor, foot–ground interaction, force-feedback control, zero-moment point |
The future of medicine. The role of the consumer. | As a representative of the universities and medical schools of the United States, I congratulate the University of Nebraska Medical Center on the important occasion of its 100th anniversary. But may I remind you that, when a university or a medical school achieves an age of 100 years, a century, it really only attains its majority. This can be compared, perhaps, with attaining the age of 21 for the individual. You see, therefore, that the University of Nebraska Medical Center has merely gone through infancy and part of the teens. It is an occasion to rejoice about, but it is not an occasion to feel any pangs of incipient senility. I am reminded of the fact that the span of years that has been covered by the history of this university and this medical school has been a very exciting century within the area of medicine and medical education. It has |
Induction of resistance against pathogens by β-aminobutyric acid | Among the many types of plant stressors, pathogen attack, mainly by fungi and bacteria, can cause particularly severe damage both to individual plants and, on a wider scale, to agricultural productivity. The magnitude of these pathogen-induced problems has stimulated rapid progress in green biotechnology research into plant defense mechanisms. Plants can develop local and systemic wide-spectrum resistance induced by their exposure to virulent pathogens (systemic acquired resistance—SAR) or to non-pathogenic microbes and various chemical elicitors (induced systemic resistance—ISR). β-Aminobutyric acid (BABA), a non-protein amino acid, is thought to be an important component of the signaling pathway regulating the ISR response in plants. After treatment with BABA or various chemicals, after infection by a necrotizing pathogen, or after colonization of the roots by beneficial microbes, many plants establish a unique physiological state that is called the “primed” state of the plant. This review will focus on recent knowledge about the role of BABA in the induction of ISR against pathogens, mainly fungi. |
Inferring Air Quality for Station Location Recommendation Based on Urban Big Data | This paper tries to answer two questions. First, how to infer real-time air quality of any arbitrary location given environmental data and historical air quality data from very sparse monitoring locations. Second, if one needs to establish a few new monitoring stations to improve the inference quality, how to determine the best locations for such a purpose? The problems are challenging since for most of the locations (>99%) in a city we do not have any air quality data to train a model from. We design a semi-supervised inference model utilizing existing monitoring data together with heterogeneous city dynamics, including meteorology, human mobility, structure of road networks, and points of interest (POIs). We also propose an entropy-minimization model to suggest the best locations to establish new monitoring stations. We evaluate the proposed approach using Beijing air quality data, resulting in clear advantages over a series of state-of-the-art and commonly used methods. |
Automatic Detection of Tone Mispronunciation in Mandarin | In this paper we present our study on detecting tone mispronunciations in Mandarin. Both template and HMM approaches are investigated. Schematic templates of pitch contours are shown to be impractical due to the large pitch range of inter- and even intra-speaker variation. Statistical hidden Markov models (HMMs) are used to generate a Goodness of Pronunciation (GOP) score for detection with an optimized threshold. To deal with the discontinuity issue of the F0 in speech, multi-space distribution (MSD) modeling is used for building the corresponding HMMs. Under an MSD-HMM framework, the detection performance of different choices of features, HMM types and GOP measures is evaluated. |
Online Display Advertising: Targeting and Obtrusiveness | We use data from a large-scale field experiment to explore what influences the effectiveness of online advertising. We find that matching an ad to website content and increasing an ad’s obtrusiveness independently increase purchase intent. However, in combination these two strategies are ineffective. Ads that both match website content and are obtrusive do worse at increasing purchase intent than ads that do only one or the other. This failure appears to be related to privacy concerns: The negative effect of combining targeting with obtrusiveness is strongest for people who refuse to give their income, and for categories where privacy matters most. Our results suggest a possible explanation for the growing bifurcation in internet advertising between highly targeted plain text ads and more visually striking but less targeted ads. |
A Taxonomy for Social Engineering Attacks | As the technology to secure information improves, hackers will employ less technical means to gain access to unauthorized data. The use of Social Engineering as a non-technical method of hacking has increased during the past few years. There are different types of social engineering methods reported, but what is lacking is a unifying effort to understand these methods in the aggregate. This paper aims to classify these methods through a taxonomy so that organizations can gain a better understanding of these attack methods and accordingly be vigilant against them. |
A Quantitative Analysis of Business Process Reengineering and Organizational Resistance: The Case of Uganda | Many organizations in African countries need to reengineer their business processes to improve efficiency. The general objective of this study was to identify the impact of different factors, including organizational resistance to change, on Business Process Reengineering (BPR). The study showed that only 30.4% of BPR projects in Uganda have delivered the intended usable Information Systems. The researchers have identified the factors impacting BPR and possible causes of BPR failures. The identified emotional responses of users towards BPR implementation range from Acceptance to Testing, Indifference and Anger. Based upon the study findings, the researchers have formulated a set of recommendations for organizations implementing BPR. This paper will be of interest to organizational managers, BPR implementers and future researchers in related areas of study. |
Learning to Teach | Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impart suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, the role of teaching has not been fully explored; most attention is paid to machine learning. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach “learning to teach”. In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding). |
On effective classification of strings with wavelets | In recent years, the technological advances in mapping genes have made it increasingly easy to store and use a wide variety of biological data. Such data are usually in the form of very long strings for which it is difficult to determine the most relevant features for a classification task. For example, a typical DNA string may be millions of characters long, and there may be thousands of such strings in a database. In many cases, the classification behavior of the data may be hidden in the compositional behavior of certain segments of the string which cannot be easily determined a priori. Another problem which complicates the classification task is that in some cases the classification behavior is reflected in the global behavior of the string, whereas in others it is reflected in local patterns. Given the enormous variation in the behavior of the strings over different data sets, it is useful to develop an approach which is sensitive to both the global and local behavior of the strings for the purpose of classification. For this purpose, we will exploit the multi-resolution property of wavelet decomposition in order to create a scheme which can mine classification characteristics at different levels of granularity. The resulting scheme turns out to be very effective in practice on a wide range of problems. |
Accelerating literature curation with text-mining tools: a case study of using PubTator to curate genes in PubMed abstracts | Today's biomedical research has become heavily dependent on access to the biological knowledge encoded in expert-curated biological databases. As the volume of biological literature grows rapidly, it becomes increasingly difficult for biocurators to keep up with the literature because manual curation is an expensive and time-consuming endeavour. Past research has suggested that computer-assisted curation can improve efficiency, but few text-mining systems have been formally evaluated in this regard. Through participation in the interactive text-mining track of the BioCreative 2012 workshop, we developed PubTator, a PubMed-like system that assists with two specific human curation tasks: document triage and bioconcept annotation. On the basis of evaluation results from two external user groups, we find that the accuracy of PubTator-assisted curation is comparable with that of manual curation and that PubTator can significantly increase human curatorial speed. These encouraging findings warrant further investigation with a larger number of publications to be annotated. Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/ |
Chaos Synchronization between Two Different Fractional Systems of Lorenz Family | This work investigates chaos synchronization between two different fractional order chaotic systems of the Lorenz family. The fractional order Lü system is controlled to be the fractional order Chen system, and the fractional order Chen system is controlled to be the fractional order Lorenz-like system. The analytical conditions for the synchronization of these pairs of different fractional order chaotic systems are derived by utilizing the Laplace transform. Numerical simulations are used to verify the theoretical analysis using different values of the fractional order parameter. |
Hand gesture recognition using a neural network shape fitting technique | A new method for hand gesture recognition that is based on a hand gesture fitting procedure via a new Self-Growing and Self-Organized Neural Gas (SGONG) network is proposed. Initially, the region of the hand is detected by applying a color segmentation technique based on a skin color filtering procedure in the YCbCr color space. Then, the SGONG network is applied on the hand area so as to approximate its shape. Based on the output grid of neurons produced by the neural network, palm morphologic characteristics are extracted. These characteristics, in accordance with powerful finger features, allow the identification of the raised fingers. Finally, the hand gesture recognition is accomplished through a likelihood-based classification technique. The proposed system has been extensively tested with success. |
DC dielectrophoretic particle-particle interactions and their relative motions. | When particles in an electrolyte subjected to an external electric field get close to each other, the presence of the particles can alter the local electric field and consequently induce mutual dielectrophoretic (DEP) forces on each other. In this paper, a transient, two-dimensional (2D) multiphysics model taking into account the particle-fluid-electric field interactions under a thin electrical double layer (EDL) assumption is developed to investigate the effects of the imposed electric field and the initial particle orientation and distance on the DEP particle-particle interaction between a pair of micro-sized particles and their relative motions. Prior to the study of the DEP particle-particle interaction, the magnitudes of the DEP particle-particle interaction and the Brownian motion are compared. When the DEP particle-particle interaction dominates the random Brownian motion, particle chaining along the direction of the imposed electric field is expected, independent of the initial particle orientation. The numerical predictions are in qualitative agreement with the experimental observations available from the literature. During the attraction motion of the particles, their velocities tend to decrease dramatically due to the rapid increase in the repulsive hydrodynamic pressure force when the particle distance decreases to a certain value. The one exception to particle chaining occurs when the initial connecting line of the particles is perpendicular to the imposed electric field, a configuration which is extremely unstable owing to the inevitable Brownian motion. |
Degeneration in VAE: in the Light of Fisher Information Loss | While enormous progress has been made on the Variational Autoencoder (VAE) in recent years, similar to other deep networks, a VAE with deep networks suffers from the problem of degeneration, which seriously weakens the correlation between the input and the corresponding latent codes, deviating from the goal of representation learning. To investigate how degeneration affects the VAE from a theoretical perspective, we illustrate the information transmission in the VAE and analyze the intermediate layers of the encoders/decoders. Specifically, we propose a Fisher Information measure for the layer-wise analysis. With this measure, we demonstrate that information loss is ineluctable in feed-forward networks and causes the degeneration in VAE. We show that skip connections in the VAE enable the preservation of information without changing the model architecture. We call this class of VAEs equipped with skip connections SCVAE and perform a range of experiments to show its advantages in information preservation and degeneration mitigation. |
Low-resource routing attacks against Tor | Tor has become one of the most popular overlay networks for anonymizing TCP traffic. Its popularity is due in part to its perceived strong anonymity properties and its relatively low-latency service. Low latency is achieved through Tor's ability to balance the traffic load by optimizing Tor router selection to probabilistically favor routers with high-bandwidth capabilities.
We investigate how Tor's routing optimizations impact its ability to provide strong anonymity. Through experiments conducted on PlanetLab, we show the extent to which routing performance optimizations have left the system vulnerable to end-to-end traffic analysis attacks from non-global adversaries with minimal resources. Further, we demonstrate that entry guards, added to mitigate path disruption attacks, are themselves vulnerable to attack. Finally, we explore solutions to improve Tor's current routing algorithms and propose alternative routing strategies that prevent some of the routing attacks used in our experiments. |
Auditory brainstem response latency in forward masking, a marker of sensory deficits in listeners with normal hearing thresholds | In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-spontaneous rate fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments. |
Flexibility in 21st Century Power Systems | All power systems have some inherent level of flexibility, designed to balance supply and demand at all times. Variability and uncertainty are not new to power systems because loads change over time and in sometimes unpredictable ways, and conventional resources fail unexpectedly. Variable renewable energy supply, however, can make this balance harder to achieve. Both wind and solar generation output vary significantly over the course of hours to days, sometimes in a predictable fashion, but often imperfectly forecast. |
The Role of Emotion Regulation and Children's Early Academic Success. | This study investigated the role of children's emotion regulation skills in academic success in kindergarten, using a sample of 325 five-year-old children. A mediational analysis addressed the potential mechanisms through which emotion regulation relates to children's early academic success. Results indicated that emotion regulation was positively associated with teacher reports of children's academic success and productivity in the classroom and standardized early literacy and math achievement scores. Contrary to predictions, child behavior problems and the quality of the student-teacher relationship did not mediate these relations. However, emotion regulation and the quality of the student-teacher relationship uniquely predicted academic outcomes even after accounting for IQ. Findings are discussed in terms of how emotion regulation skills facilitate children's development of a positive student-teacher relationship and cognitive processing and independent learning behavior, both of which are important for academic motivation and success. |
An architecture methodology for secure video conferencing | This paper describes how to enhance the security of VoIP applications using hardware security features on computing platforms such as notebooks, tablets and smartphones. Specifically, we explain how to develop such applications using the protection offered by a processor-based security technology, which provides the ability for software developers to maintain control of security by creating trusted domains within applications. Using this processor security technology, sensitive code and data can be hosted without the risk of being observed or modified by malware present in other parts of the system. The resulting VoIP applications would better meet the strong security needs of corporate and government sectors for real-time digital information sharing. We include results of a research project sponsored by the United States Department of Homeland Security and the United States Air Force Academy, where the team studied ways to enhance the security of a video conferencing application and implemented an experimental Video Chat application using security technologies provided by the processor and media processing hardware. |
A real world comparison of sulfonylurea and insulin vs. incretin-based treatments in patients not controlled on prior metformin monotherapy | AIMS
Metformin is the first line drug for patients diagnosed with type-2 diabetes; however, the impact of different treatment escalation strategies after metformin failure has thus far not been investigated in a real world situation. The registry described herein goes some way to clarifying treatment outcomes in such patients.
METHODS
DiaRegis is a multicentre registry including 3,810 patients with type-2 diabetes. For the present analysis we selected patients being treated with metformin monotherapy at baseline (n = 1,373), with the subsequent addition of incretin-based drugs (Met/Incr; n = 783), sulfonylureas (Met/SU; n = 255), or insulin (n = 220).
RESULTS
After two years 1,110 of the initial 1,373 patients had a complete follow-up (80.8%) and 726 of these were still on the initial treatment combination (65.4%). After treatment escalation, compared to Met/Incr (n = 421), Met/SU (n = 154) therapy resulted in a higher HbA1c reduction vs. baseline (-0.6 ± 1.4% vs. -0.5 ± 1.0%; p = 0.039). Insulin (n = 151) resulted in a stronger reduction in HbA1c (-0.9 ± 2.0% vs. -0.5 ± 1.0%; p = 0.003), and fasting plasma glucose (-24 ± 70 mg/dl vs. -19 ± 42 mg/dl; p = 0.001), but was associated with increased bodyweight (0.8 ± 9.0 kg vs. -1.5 ± 5.0 kg; p = 0.028). Hypoglycaemia rates (any with or without help and symptoms) were higher for patients receiving insulin (Odds Ratio [OR] 8.35; 95% Confidence Interval [CI] 4.84-14.4) and Met/SU (OR 2.70; 95% CI 1.48-4.92) versus Met/Incr. While there was little difference in event rates between Met/Incr and Met/SU, insulin was associated with higher rates of death, major cardiac and cerebrovascular events, and microvascular disease.
CONCLUSIONS
Taking the results of DiaRegis into consideration, it can be concluded that incretin-based treatment strategies appear to have a favourable balance between glycemic control and treatment-emergent adverse effects. |
Automatically characterizing large scale program behavior | Understanding program behavior is at the foundation of computer architecture and program optimization. Many programs have wildly different behavior on even the very largest of scales (over the complete execution of the program). This realization has ramifications for many architectural and compiler techniques, from thread scheduling, to feedback-directed optimizations, to the way programs are simulated. However, in order to take advantage of time-varying behavior, we must first develop the analytical tools necessary to automatically and efficiently analyze program behavior over large sections of execution. Our goal is to develop automatic techniques that are capable of finding and exploiting the Large Scale Behavior of programs (behavior seen over billions of instructions). The first step towards this goal is the development of a hardware-independent metric that can concisely summarize the behavior of an arbitrary section of execution in a program. To this end we examine the use of Basic Block Vectors. We quantify the effectiveness of Basic Block Vectors in capturing program behavior across several different architectural metrics, explore the large scale behavior of several programs, and develop a set of algorithms based on clustering capable of analyzing this behavior. We then demonstrate an application of this technology to automatically determine where to simulate for a program to help guide computer architecture research. |
Chronic Prostatitis: The Clinical Pharmacist Role and New Delivery Systems | It is well known that the prostate gland is frequently involved in different pathologies in adult and elderly patients, benign or malignant, anatomical or functional. Other conditions, such as bladder dysfunction, can frequently be associated with, added to, or caused by these pathologies. Some of these pathologies reduce patients' quality of life, and the malignant ones reduce life expectancy. Malignant pathologies start as local disease but can spread as metastases involving other organ systems (patients frequently show resistance to first-line therapy, for example hormonal blockade or various chemotherapeutics). |
Increased cisplatin cytotoxicity and cisplatin-induced DNA damage in HepG2 cells by XRCC1 abrogation-related mechanisms. | Cisplatin is one of the most potent chemotherapeutic anticancer drugs for the treatment of various cancers. The cytotoxic action of the drug is often thought to be associated with its ability to bind DNA to form cisplatin-DNA adducts. Impaired DNA repair processes, including base excision repair (BER), play important roles in its cytotoxicity. XRCC1 is a key protein known to play a central role at an early stage in the BER pathway. However, whether XRCC1 contributes to decreasing cisplatin cytotoxicity and cisplatin-induced DNA damage in HepG2 cells remains unknown. Hence, the purpose of this study was to explore whether abrogation of XRCC1 gene expression by short hairpin RNA (shRNA) could reduce DNA repair and thus sensitize liver cancer cells to cisplatin. We abrogated the XRCC1 gene in HepG2 cells using shRNA transfection. Cell viability was measured by MTT assay and clonogenicity assay. The comet assay was used to detect the DNA damage induced by cisplatin. Host cell reactivation was employed to assess the DNA repair capacity for a cisplatin-damaged luciferase reporter plasmid. Flow cytometry analysis was used to determine cisplatin-induced apoptosis, cell cycle distribution and reactive oxygen species (ROS). The results showed that abrogation of XRCC1 could sensitize HepG2 cells to cisplatin. This enhanced cytotoxicity could be attributed to the increased DNA damage and reduced DNA repair capacity. Increased cell cycle arrest and intracellular ROS production led to more tumor cell apoptosis and thereby enhanced cisplatin cytotoxicity. Our results suggest that cisplatin cytotoxicity may be increased by targeted inhibition of XRCC1. |
Effectiveness of physical therapy treatments on lateral epicondylitis. | OBJECTIVE
To analyze research literature that has examined the effectiveness of various physical therapy interventions on lateral epicondylitis.
DATA SOURCES
Evidence was compiled with data located using the PubMed, EBSCO, The Cochrane Library, and the Hooked on Evidence databases from 1994 to 2006 using the key words lateral epicondylitis, tennis elbow, modalities, intervention, management of, treatment for, radiohumeral bursitis, and experiment.
STUDY SELECTION
The literature used included peer-reviewed studies that evaluated the effectiveness of physical therapy treatments on lateral epicondylitis.
DATA SYNTHESIS
Shockwave therapy and Cyriax therapy protocol are effective physical therapy interventions.
CONCLUSIONS
There are numerous treatments for lateral epicondylitis and no single intervention has been proven to be the most efficient. Therefore, future research is needed to provide a better understanding of beneficial treatment options for people living with this condition. |
Principles of remote attestation | Remote attestation is the activity of making a claim about properties of a target by supplying evidence to an appraiser over a network. We identify five central principles to guide development of attestation systems. We argue that (i) attestation must be able to deliver temporally fresh evidence; (ii) comprehensive information about the target should be accessible; (iii) the target, or its owner, should be able to constrain disclosure of information about the target; (iv) attestation claims should have explicit semantics to allow decisions to be derived from several claims; and (v) the underlying attestation mechanism must be trustworthy. We illustrate how to acquire evidence from a running system, and how to transport it via protocols to remote appraisers. We propose an architecture for attestation guided by these principles. Virtualized platforms, which are increasingly well supported on stock hardware, provide a natural basis for our attestation architecture. |
Anonymity for Decentralized Electronic Cash Systems | In 2008 Bitcoin was introduced as the first decentralized electronic cash system and it has seen widespread adoption since it became fully functional in 2009. This thesis describes the Bitcoin system, anonymity aspects of Bitcoin, and how we can use cryptography to improve anonymity by a scheme called Zerocoin. The Bitcoin system will be described with focus on transactions and the blockchain where all transactions are recorded. We look more closely into anonymity in terms of address unlinkability and illustrate how the anonymity provided is insufficient by clustering addresses. Further, we describe Zerocoin, a decentralized electronic cash scheme designed to cryptographically improve the anonymity guarantees in Bitcoin by breaking the link between individual Bitcoin transactions. We detail the construction of Zerocoin, provide security analysis and describe how it integrates into Bitcoin. |
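The address clustering mentioned above is commonly based on the common-input-ownership heuristic: all input addresses of one transaction are presumed controlled by the same entity. A minimal union-find sketch of that idea follows; the transaction layout (dicts with an `"inputs"` list of address strings) is a hypothetical simplification, not the Bitcoin wire format.

```python
class UnionFind:
    """Disjoint-set structure used to merge addresses into clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_addresses(transactions):
    """Common-input-ownership heuristic: addresses that co-occur as
    inputs of the same transaction are merged into one cluster."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        uf.find(inputs[0])            # register even single-input transactions
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:            # group addresses by cluster root
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())
```

Transitive merging is what makes the heuristic powerful: two transactions sharing one address collapse all of their input addresses into a single presumed entity.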
International environmental law and global public health. | The environment continues to be a source of ill-health for many people, particularly in developing countries. International environmental law offers a viable strategy for enhancing public health through the promotion of increased awareness of the linkages between health and environment, mobilization of technical and financial resources, strengthening of research and monitoring, enforcement of health-related standards, and promotion of global cooperation. An enhanced capacity to utilize international environmental law could lead to significant worldwide gains in public health. |
Two storage inventory model of a deteriorating item with variable demand under partial credit period | In this paper, a two-warehouse inventory model for a deteriorating item with stock- and selling-price-dependent demand has been developed. Above a certain (fixed) order level, the supplier provides full permissible delay in payment per order to attract more customers. But an interest is charged by the supplier if payment is made after the said delay period. The supplier also offers a partial permissible delay in payment even if the order quantity is less than the fixed order level. For display of goods, the retailer has one warehouse of finite capacity at the heart of the market place and another warehouse of infinite capacity (that means the capacity of the second warehouse is sufficiently large) situated outside the market but near the first warehouse. Units are continuously transferred from the second warehouse to the first and sold from the first warehouse. Combining the features of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), a hybrid heuristic (named Particle Swarm-Genetic Algorithm (PSGA)) is developed and used to find solutions of the proposed model. To test the efficiency of the proposed algorithm, models are also solved using another two established heuristic techniques and results are compared with those obtained using the proposed PSGA. Here order quantity, refilling point at the first warehouse, and mark-up of selling price of fresh units are decision variables. Models are formulated for both crisp and fuzzy inventory parameters and illustrated with numerical examples. © 2012 Elsevier B.V. All rights reserved. Introduction: The inventory model is based on the assumption that the retailer pays for the units of the item immediately after the units are received. But, due to the introduction of multinationals and their franchises, nowadays suppliers frequently offer a permissible delay in payments to attract new customers and increase sales. Goyal [10] developed an economic ordered quantity (EOQ) model under the conditions of permissible delay in payments. Aggarwal and Jaggi [2] extended Goyal's [10] model considering deteriorating items. Chung and Liao [7] studied a lot-sizing problem under supplier's trade credit depending on the retailer's order quantity. Huang [13] established an EOQ model in which the supplier offers a partially permissible delay in payment when the order quantity is smaller than a predetermined quantity. Ouyang et al. [29] developed an EOQ model for deteriorating items with partially permissible delay in payments linked to order quantity. In their model the supplier gives full trade credit when the retailer's purchase quantity is more than or equal to a certain amount and allows partial trade credit for a lesser amount. Liang and Zhou [16] developed a two-warehouse inventory model for deteriorating items under conditionally permissible delay in payment. Singh et al. [30] developed a two-warehouse fuzzy inventory model under the conditions of permissible delay in payment. All of the above-mentioned studies on permissible delay in payments consider a single-warehouse facility, especially when a partial trade credit period is offered to retailers. To avail of the delay-in-payment facility the retailer has to purchase a large number of units, for which sufficient warehouse space is required to store the item. But in the present competitive market, due to high rent and scarcity of space, it is difficult to get an outlet having sufficient storage area at the heart of the market place. Due to the limited capacity of the outlet (viz., W1) at the market place, purchase of a large number of units (to get the facilities of permissible delay in payments, i.e., to maximize profit) of an item leads to a storage problem. As a result, excess units are normally stored in the second warehouse (W2), which has sufficiently large capacity. |
1-to-4 double-side slotted waveguide power divider/combiner for Ka-band power amplifiers | In this paper, a Ka-band power divider/combiner structure based on a double-side slotted waveguide technology is proposed. The novel structure uses slots on both sides of the waveguide to divide/combine the power, thus halving the size compared to previous power divider/combiner structures based on similar technology. The 4-way combiner design is the first step toward the design/development of a 40-W Ka-band spatial power amplifier, aimed to operate at the 29-31 GHz band. The validation of this design is performed by means of simulations using a commercially available Full-Wave Electromagnetic (FWEM) simulator, e.g., the CST CAD Tool. |
Design and Implementation of Computationally Efficient Image Compressor for Wireless Capsule Endoscopy | An image compressor inside a wireless capsule endoscope should have low power consumption, small silicon area, high compression rate and high reconstructed image quality. A simple and efficient image compression scheme, consisting of reversible color space transformation, quantization, subsampling, differential pulse code modulation (DPCM) and Golomb–Rice encoding, is presented in this paper. To optimize these methods and combine them optimally, the unique properties of human gastrointestinal tract images are exploited. Computationally simple and suitable color spaces for efficient compression of gastrointestinal tract images are proposed. Quantization and subsampling methods are optimally combined. A hardware-efficient, locally adaptive, Golomb–Rice entropy encoder is employed. The proposed image compression scheme gives an average compression rate of 90.35% and peak signal-to-noise ratio of 40.66 dB. An ASIC has been fabricated on the UMC 130 nm CMOS process using the Faraday high-speed standard cell library. The core of the chip occupies 0.018 mm2 and consumes 35 μW power. The experiment was performed at 2 frames per second on a 256 × 256 color image. The power consumption is further reduced from 35 to 9.66 μW by implementing the proposed image compression scheme using the Faraday low-leakage standard cell library on the UMC 130 nm process. As compared to the existing DPCM-based implementations, our realization achieves a significantly higher compression rate for similar area and power consumption. We achieve almost as high a compression rate as can be achieved with existing DCT-based image compression methods, but with an order of magnitude lower area and power consumption. |
How Current Android Malware Seeks to Evade Automated Code Analysis | First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic code-analysis techniques for Android applications. We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project. |
Ethical Dimensions of Counseling about the Clinical Management of Gestational Diabetes. | Gestational diabetes is associated with both short- and long-term adverse outcomes for the mother and the child. Glycemic control to improve perinatal outcomes is consistent with the best available evidence and should be recommended. The evidence for interventions to improve long-term outcomes is less robust. Therefore, patients need to be informed of the data, have the limitations explained, and be supported in decision-making. Theoretical risks do not need to be revealed to patients. Enthusiasm for interventions not supported by evidence should not be promoted. This article provides an ethical framework for counseling patients about the management of gestational diabetes. |
From social media to public health surveillance: Word embedding based clustering method for twitter classification | Social media provide a low-cost alternative source for public health surveillance and health-related classification plays an important role to identify useful information. In this paper, we summarized the recent classification methods using social media in public health. These methods rely on bag-of-words (BOW) model and have difficulty grasping the semantic meaning of texts. Unlike these methods, we present a word embedding based clustering method. Word embedding is one of the strongest trends in Natural Language Processing (NLP) at this moment. It learns the optimal vectors from surrounding words and the vectors can represent the semantic information of words. A tweet can be represented as a few vectors and divided into clusters of similar words. According to similarity measures of all the clusters, the tweet can then be classified as related or unrelated to a topic (e.g., influenza). Our simulations show a good performance and the best accuracy achieved was 87.1%. Moreover, the proposed method is unsupervised. It does not require labor to label training data and can be readily extended to other classification problems or other diseases. |
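A minimal sketch of the clustering idea described above, with toy 2-D word vectors standing in for real embeddings. The greedy clustering rule and the similarity thresholds are illustrative assumptions, not the paper's exact procedure.

```python
def cosine(u, v):
    """Cosine similarity of two vectors (0.0 if either is the zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def greedy_clusters(vectors, threshold=0.8):
    """Greedily group word vectors: a word joins the first cluster whose
    centroid it resembles closely enough, otherwise it starts a new cluster."""
    clusters = []
    for v in vectors:
        for c in clusters:
            centroid = [sum(x) / len(c) for x in zip(*c)]
            if cosine(v, centroid) >= threshold:
                c.append(v)
                break
        else:
            clusters.append([v])
    return clusters

def related_to_topic(tweet_words, embeddings, topic_vec, sim=0.7):
    """Classify a tweet as topic-related if any cluster centroid is close
    enough to the topic vector (e.g., an 'influenza' embedding)."""
    vecs = [embeddings[w] for w in tweet_words if w in embeddings]
    for c in greedy_clusters(vecs):
        centroid = [sum(x) / len(c) for x in zip(*c)]
        if cosine(centroid, topic_vec) >= sim:
            return True
    return False
```

The key property inherited from embeddings is that semantically related words ("flu", "fever") land in the same cluster even when the topic keyword itself never appears in the tweet, which a bag-of-words model cannot capture.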
Image-based visual navigation for mobile robots | We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form and present detailed simulation results for an aerial robot scenario using a spherical camera and a wide angle perspective camera, and present experimental results for a mobile ground robot. |
Multidimensional Divide-and-Conquer | Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems. |
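One classic instantiation of the paradigm is the planar all-points ECDF (dominance counting) problem mentioned above: split the points on x, recurse on the halves, then resolve cross-half dominances with a merge on y. A sketch of that scheme (non-strict dominance, ties resolved by the initial x-sort):

```python
def ecdf_counts(points):
    """All-points ECDF in the plane via multidimensional divide-and-conquer.
    counts[i] = number of other points p with p.x <= x_i and p.y <= y_i.
    The x-split plus y-merge gives O(n log n) for the 2-D case."""
    n = len(points)
    counts = [0] * n

    def solve(ids):
        if len(ids) <= 1:
            return list(ids)
        mid = len(ids) // 2
        left, right = solve(ids[:mid]), solve(ids[mid:])
        merged, li = [], 0
        for r in right:
            while li < len(left) and points[left[li]][1] <= points[r][1]:
                merged.append(left[li])
                li += 1
            counts[r] += li          # every left point seen so far dominates r
            merged.append(r)
        merged.extend(left[li:])
        return merged                 # ids sorted by y

    solve(sorted(range(n), key=lambda i: points[i]))  # initial sort by x
    return counts
```

The recursive calls account for dominances within each half; the merge only has to count left-half points against right-half points, since the x-split already guarantees their x-coordinates are no larger.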
CloudWave: Where adaptive cloud management meets DevOps | The transition to cloud computing offers a large number of benefits, such as lower capital costs and a highly agile environment. Yet, the development of software engineering practices has not kept pace with this change. Moreover, the design and runtime behavior of cloud based services and the underlying cloud infrastructure are largely decoupled from one another. This paper describes the innovative concepts being developed by CloudWave to utilize the principles of DevOps to create an execution analytics cloud infrastructure where, through the use of programmable monitoring and online data abstraction, much more relevant information for the optimization of the ecosystem is obtained. Required optimizations are subsequently negotiated between the applications and the cloud infrastructure to obtain coordinated adaption of the ecosystem. Additionally, the project is developing the technology for a Feedback Driven Development Standard Development Kit which will utilize the data gathered through execution analytics to supply developers with a powerful mechanism to shorten application development cycles. |
Hierarchical classification: combining Bayes with SVM | We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy. |
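The top-down evaluation scheme over a taxonomy can be sketched as follows, with the per-node binary classifiers abstracted as callables (the SVM training itself is omitted; the node names and the dict-based taxonomy encoding are illustrative). A node's classifier is consulted only if its parent fired, and an instance may be routed to several branches, matching the multi-label setting above.

```python
def topdown_classify(taxonomy, classifiers, x, root="root"):
    """Top-down hierarchical evaluation: starting from the root, test each
    child's binary classifier and descend only into children that fire.
    `taxonomy` maps node -> list of children; `classifiers` maps node ->
    callable returning True/False for an instance x. Returns the set of
    all predicted class nodes (possibly in several subtrees)."""
    predicted, frontier = set(), [root]
    while frontier:
        node = frontier.pop()
        for child in taxonomy.get(node, []):
            if classifiers[child](x):
                predicted.add(child)
                frontier.append(child)
    return predicted
```

The refined Bayes-based scheme in the paper replaces this hard thresholding at each node with a decision derived from the stochastic label model, but the traversal skeleton is the same.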
Leaky gut - concept or clinical entity? | PURPOSE OF REVIEW
This article evaluates the current status of the gut barrier in gastrointestinal disorders.
RECENT FINDINGS
The gut barrier is a complex, multicomponent, interactive, and bidirectional entity that includes, but is not restricted to, the epithelial cell layer. Intestinal permeability, the phenomenon most readily and commonly studied, reflects just one (albeit an important one) function of the barrier that is intimately related to and interacts with luminal contents, including the microbiota. The mucosal immune response also influences barrier integrity; effects of inflammation per se must be accounted for in the interpretation of permeability studies in disease states.
SUMMARY
Although several aspects of barrier function can be assessed in man, one must be aware of exactly what a given test measures, as well as of its limitations. The temptation to employ results from a test of paracellular flux to imply a role for barrier dysfunction in disorders thought to be based on bacterial or macromolecular translocation must be resisted. Although changes in barrier function have been described in several gastrointestinal disorders, their primacy remains to be defined. At present, few studies support efficacy for an intervention that improves barrier function in altering the natural history of a disease process. |
Validation of clinical scores predicting severe acute kidney injury after cardiac surgery. | BACKGROUND
Acute kidney injury (AKI) requiring renal replacement therapy (RRT) in patients undergoing cardiac surgery is associated strongly with adverse patient outcomes. Recently, 3 predictive risk models for RRT have been developed. The aims of our study are to validate the predictive scoring models for patients requiring postoperative RRT and test applicability to the broader spectrum of patients with postoperative severe AKI.
STUDY DESIGN
Diagnostic test study.
SETTING & PARTICIPANTS
12,096 patients undergoing cardiac surgery with cardiopulmonary bypass at Mayo Clinic, Rochester, MN, from 2000 through 2007.
INDEX TEST
Cleveland Clinic score, Mehta score, and Simplified Renal Index (SRI) score.
REFERENCE TEST OR OUTCOME
Incidence of postoperative RRT or composite outcome of severe AKI, defined as serum creatinine level >2.0 mg/dL, and a 2-fold increase compared with the preoperative baseline creatinine level or RRT.
RESULTS
RRT was used in 254 (2.1%) patients, whereas severe AKI was present in 467 (3.9%). Discrimination for the prediction of RRT and severe AKI was good for all scoring models measured using areas under the receiver operating characteristic curve (AUROCs): 0.86 (95% CI, 0.84-0.88) for RRT and 0.81 (95% CI, 0.79-0.83) for severe AKI using the Cleveland score, 0.81 (95% CI, 0.78-0.86) and 0.76 (95% CI, 0.73-0.80) using the Mehta score, and 0.79 (95% CI, 0.77-0.82) and 0.75 (95% CI, 0.72-0.77) using the SRI score. The Cleveland score and Mehta score consistently showed significantly better discrimination compared with the SRI score (P < 0.001). Despite lower AUROCs for the prediction of severe AKI, the Cleveland score AUROC was still >0.80. The Mehta score is applicable in only a subgroup of patients.
LIMITATIONS
Single-center retrospective cohort study.
CONCLUSIONS
The Cleveland scoring system offers the best discriminative value to predict postoperative RRT and covers most patients undergoing cardiac surgery. It also can be used for prediction of the composite end point of severe AKI, which enables broader application to patients at risk of postoperative kidney dysfunction. |
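AUROC values like those reported above can be computed directly from risk scores and binary outcomes via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (an O(n²) illustration, not the authors' statistical software):

```python
def auroc(scores, labels):
    """Area under the ROC curve by pairwise comparison: count how often a
    positive case outscores a negative case, with ties counted as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 means the score is no better than chance at ranking cases; the >0.80 values above indicate good discrimination between patients who did and did not need RRT.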
Differential Evolution Algorithm and Method of Moments for the Design of Low-RCS Antenna | This letter introduces a novel technique for low radar cross section (RCS) antenna designs. The differential evolution algorithm (DEA) and the method of moments (MoM) are combined to achieve RCS reduction and satisfactory radiation performance. The antenna geometric parameters are extracted to be optimized by DEA, and a fitness function is evaluated by MoM to represent the performance of each candidate design. The optimization example of the slot patch antenna is presented as a test of the proposed DEA/MoM algorithm. By optimizing the slot location, the given slot antenna realizes the RCS reduction in the broad frequency range of 2-12 GHz at different incident angles, which demonstrates the feasibility of applying the proposed method to antenna RCS reduction. |
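A minimal sketch of the classic DE/rand/1/bin loop that such a DEA/MoM pipeline could build on, with a toy analytic fitness standing in for an MoM evaluation of a candidate antenna geometry (population size and the F, CR values below are illustrative defaults, not the letter's settings):

```python
import random

def differential_evolution(fitness, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """DE/rand/1/bin: perturb each candidate by the scaled difference of two
    other population members, binomially cross over with the parent, and keep
    the trial only if the fitness (to be minimized) does not worsen."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)        # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
    return min(pop, key=fitness)
```

In the antenna setting each `fitness` call would be an expensive full MoM solve over the candidate's geometric parameters, which is why DE's modest population sizes are attractive.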
Changes in bone resorption across the menopause transition: effects of reproductive hormones, body size, and ethnicity. | OBJECTIVE
Our objective was to characterize changes in bone resorption in relation to the final menstrual period (FMP), reproductive hormones, body mass index (BMI), and ethnicity.
METHODS
Urinary type I collagen N-telopeptide (NTX), estradiol, and FSH levels were measured annually for up to 8 years spanning the menopause transition in 918 African American, Chinese, Japanese, or Caucasian women.
RESULTS
Urinary NTX began to increase sharply about 2 years before the FMP, reaching its peak level about 1 to 1.5 years after the FMP. NTX levels declined modestly from 2 to 6 years after the FMP but remained about 20% higher than before the menopause transition. The sharp rise in NTX occurred in conjunction with a sharp decline in estradiol and shortly after FSH levels began increasing rapidly. The mean increase in urinary NTX across the menopause transition was greatest in women with BMI <25 kg/m² and smallest in women with BMI >30 kg/m². Increases in NTX were greatest in Japanese women and smallest in African Americans. These differences were attenuated, but not eliminated, when analyses were adjusted for covariates, particularly BMI.
SUMMARY
During the menopause transition, a decline in ovarian function beginning about 2 years before the FMP is followed by an increase in bone resorption and subsequently by bone loss. The magnitude of the increase in bone resorption is inversely associated with BMI. Ethnic differences in changes in bone resorption are attenuated, but not eliminated, by adjustment for BMI. Ethnic differences in BMI, and corresponding ethnic differences in bone resorption, appear to account for much of the ethnic variation in perimenopausal bone loss. |
Describing Robotic Bat Flight with Stable Periodic Orbits | From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of the bat wingbeat cycle, the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight, such as that of insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed that bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2). |
Synthesizing Entity Matching Rules by Examples | Entity matching (EM) is a critical part of data integration. We study how to synthesize entity matching rules from positive-negative matching examples. The core of our solution is program synthesis, a powerful tool to automatically generate rules (or programs) that satisfy a given high-level specification, via a predefined grammar. This grammar describes a General Boolean Formula (GBF) that can include arbitrary attribute matching predicates combined by conjunctions (∧), disjunctions (∨), and negations (¬). |
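A GBF over attribute-matching predicates can be represented and evaluated with a small recursive interpreter. The tuple encoding, the Jaccard similarity, and the thresholds below are illustrative assumptions, not the paper's grammar or its synthesized rules.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def evaluate(rule, r1, r2):
    """Evaluate a General Boolean Formula over attribute-matching predicates.
    A rule is ("and", sub, ...), ("or", sub, ...), ("not", sub), or a leaf
    ("pred", attribute, similarity_fn, threshold)."""
    op = rule[0]
    if op == "and":
        return all(evaluate(sub, r1, r2) for sub in rule[1:])
    if op == "or":
        return any(evaluate(sub, r1, r2) for sub in rule[1:])
    if op == "not":
        return not evaluate(rule[1], r1, r2)
    _, attr, sim, thr = rule          # leaf predicate
    return sim(r1[attr], r2[attr]) >= thr
```

The synthesis task then amounts to searching this grammar for a formula consistent with the labeled positive and negative record pairs.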
A MODEL FOR STEADY-STATE, BALLISTIC CHARGE TRANSPORT THROUGH QUANTUM DOT LAYER SUPERLATTICES | We derive a model for one-dimensional charge transport through two-terminal semiconductor heterostructure nano-devices comprising stacked layers of quantum dots with transverse Bravais lattice layout. All dots in a layer are assumed to be identical. The stacked layers are assumed to be perfectly vertically aligned so that the entire device has a well-defined transverse unit cell. We allow for an arbitrary sequence of quantum dot layers and intervening spacer and wetting layers between two heavily-doped, ohmic contacts. The model naturally accounts for long-range, inter-dot correlations. We rigorously prove that the device behaves as a diffraction grating which distributes incident wavefunction phases into specific patterns of transmitted phases. This establishes several mathematical properties that allow efficient decomposition of the problem and permit feasible strategies for computational implementation. We motivate the study of this family of devices citing experimental developments. |
A Positive and Unlabeled Learning Algorithm for One-Class Classification of Remote-Sensing Data | In remote-sensing classification, there are situations when users are only interested in classifying one specific land-cover type, without considering other classes. These situations are referred to as one-class classification. Traditional supervised learning is inefficient for one-class classification because it requires all classes that occur in the image to be exhaustively assigned labels. In this paper, we investigate a new positive and unlabeled learning (PUL) algorithm, applying it to one-class classifications of two scenes of a high-spatial-resolution aerial photograph. The PUL algorithm trains a classifier on positive and unlabeled data, estimates the probability that a positive training sample has been labeled, and generates binary predictions for test samples using an adjusted threshold. Experimental results indicate that the new algorithm provides high classification accuracy, outperforming the biased support-vector machine (SVM), one-class SVM, and Gaussian domain descriptor methods. The advantages of the new algorithm are that it can use unlabeled data to help build classifiers, and it requires only a small set of positive data to be labeled by hand. Therefore, it can significantly reduce the effort of assigning labels to training data without losing predictive accuracy. |
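The PUL recipe described above (estimate the probability that a positive sample has been labeled, then adjust the decision threshold) matches the well-known Elkan-Noto construction; a minimal sketch under its "selected completely at random" assumption, where the classifier scores and the held-out set are hypothetical inputs:

```python
def estimate_label_frequency(probs_labeled):
    """Elkan-Noto estimator: under the 'selected completely at random'
    assumption, c = P(labeled | positive) equals the classifier's expected
    score P(labeled | x) on held-out known-positive samples, so we estimate
    it by their average score."""
    return sum(probs_labeled) / len(probs_labeled)

def pul_predict(prob_labeled, c, threshold=0.5):
    """Convert P(labeled | x) from a traditional (positive-vs-unlabeled)
    classifier into P(positive | x) = P(labeled | x) / c, then threshold."""
    p_pos = min(prob_labeled / c, 1.0)   # clip in case of estimation noise
    return 1 if p_pos >= threshold else 0
```

The division by c is what lets a small hand-labeled positive set stand in for a fully labeled training set: positives that the sampling process left unlabeled are recovered because their scores, once rescaled, clear the threshold.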
The Wide Margins of the Century@@@When Was Modernism: Essays on Contemporary Cultural Practice in India | A commitment to modernity is the underlying theme of this volume. The essays range from interpretive prose pieces to theoretical expositions in which the author seeks to situate modern and contemporary cultural practice. The essays here formalize polemic options that emerge from concerns developed across the book. |
Prospects of pharmaceuticals and biopharmaceuticals loaded microparticles prepared by double emulsion technique for controlled delivery. | Several methods and techniques are potentially useful for the preparation of microparticles in the field of controlled drug delivery. The type and the size of the microparticles, the entrapment, release characteristics and stability of drug in microparticles in the formulations are dependent on the method used. One of the most common methods of preparing microparticles is the single emulsion technique. Poorly soluble, lipophilic drugs are successfully retained within the microparticles prepared by this method. However, the encapsulation of highly water soluble compounds, including proteins and peptides, presents formidable challenges to researchers. The successful encapsulation of such compounds requires high drug loading in the microparticles, prevention of protein and peptide degradation by the encapsulation method involved, and predictable release, in both rate and extent, of the drug compound from the microparticles. The above mentioned problems can be overcome by using the double emulsion technique, alternatively called the multiple emulsion technique. Aiming to achieve this, various techniques have been examined to prepare stable formulations utilizing w/o/w, s/o/w, w/o/o, and s/o/o type double emulsion methods. This article reviews the current state of the art in double emulsion based technologies for the preparation of microparticles, including the investigation of various classes of substances that are pharmaceutically and biopharmaceutically active. |
Self-biting with multiple finger amputations following spinal cord injury | We have observed mutilative self-biting leading to multiple finger amputations in two patients following C4 complete spinal cord injury (SCI). Both men were of normal intelligence without psychosis and each had a neurotic personality and history of fingernail biting. They related the self-biting to anxiety and depression. We believe these to be the first English language reports of multiple finger amputations due to self-biting following SCI. |
Adoption of business continuity planning processes in IT service management | For any fault of the same severity level, traditional fault discovery and notification tools provide equal weighting from business points of view. To improve the fault correlation from business perspectives, we proposed a framework to automate network and system alerts with respect to their business service impact for proactive notification to IT operations management. This paper outlines the value of business continuity planning (BCP) during the course of service impact analysis, placing particular emphasis on the business perspective in the processes of IT service management. The framework explicitly employs BCP-relevant processes in order to identify the relationships between business services and IT resources. A practical case in IT operations to illustrate the concept was then conducted. |
Using Software Reliability Growth Models in Practice | The amount of software in consumer electronics has grown from thousands to millions of lines of source code over the past decade. Up to a million of these products are manufactured each month for a successful mobile phone or television. Development organizations must meet two challenging requirements at the same time: be predictable to meet market windows and provide nearly fault-free software. Software reliability is the probability of failure-free operation for a specified period of time in a specified environment. The process of finding and removing faults to improve the software reliability can be described by a mathematical relationship called a software reliability growth model (SRGM). Our goal is to assess the practical application of SRGMs during integration and test and compare them with other estimation methods. We empirically validated SRGMs' usability in a software development environment. During final test phases for three embedded software projects, software reliability growth models predicted remaining faults in the software, supporting management's decisions. |
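A concrete SRGM example is the Goel-Okumoto model, m(t) = a(1 − e^(−bt)), where a is the expected total number of faults and b the detection rate. The sketch below fits it by a coarse least-squares grid search (real tools typically use maximum likelihood) and predicts the faults still latent at the end of testing, which is the kind of estimate the abstract describes; the grid ranges are arbitrary assumptions.

```python
import math

def goel_okumoto(t, a, b):
    """Expected cumulative number of faults found by time t under the
    Goel-Okumoto SRGM: m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def fit_and_predict_remaining(times, cum_faults, a_grid, b_grid):
    """Least-squares fit of (a, b) over a coarse grid, then estimate the
    faults still latent at the end of the observation window as
    a_hat - m(t_last)."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((goel_okumoto(t, a, b) - y) ** 2
                      for t, y in zip(times, cum_faults))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    _, a_hat, b_hat = best
    return a_hat - goel_okumoto(times[-1], a_hat, b_hat)
```

A release decision can then hinge on whether the predicted remaining-fault count has dropped below a project-specific target, which is how such models support management decisions during final test.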
Decision making, the P3, and the locus coeruleus-norepinephrine system. | Psychologists and neuroscientists have had a long-standing interest in the P3, a prominent component of the event-related brain potential. This review aims to integrate knowledge regarding the neural basis of the P3 and to elucidate its functional role in information processing. The authors review evidence suggesting that the P3 reflects phasic activity of the neuromodulatory locus coeruleus-norepinephrine (LC-NE) system. They discuss the P3 literature in the light of empirical findings and a recent theory regarding the information-processing function of the LC-NE phasic response. The theoretical framework emerging from this research synthesis suggests that the P3 reflects the response of the LC-NE system to the outcome of internal decision-making processes and the consequent effects of noradrenergic potentiation of information processing. |
Symptom perception in childhood asthma: the role of anxiety and asthma severity. | This study tested the relationship of anxiety and asthma severity to symptom perception. Eighty-six children diagnosed with mild or moderate asthma had symptom perception and pulmonary function measured throughout methacholine challenge (to induce bronchoconstriction). Higher trait anxiety was associated with heightened symptom perception (controlling for pulmonary function) at baseline. Greater asthma severity was associated with blunted symptom perception (controlling for pulmonary function) at the end of methacholine challenge and with a slower rate of increase in symptom perception across methacholine challenge. These results suggest that anxiety plays a role when children's symptoms are mild, whereas medical variables such as severity play a role in perception of changes in asthma symptomatology as bronchoconstriction worsens. |
Fructus phyllanthi tannin fraction induces apoptosis and inhibits migration and
invasion of human lung squamous carcinoma cells in vitro via MAPK/MMP
pathways | Aim: Fructus phyllanthi tannin fraction (PTF) from the traditional Tibetan medicine Fructus phyllanthi has been found to inhibit lung and liver carcinoma in mice. In this study we investigated the anticancer mechanisms of PTF in human lung squamous carcinoma cells in vitro. Methods: The human lung squamous carcinoma cell line (NCI-H1703), human large-cell lung cancer cell line (NCI-H460), human lung adenocarcinoma cell line (A549) and human fibrosarcoma cell line (HT1080) were tested. Cell viability was detected with an MTT assay. Cell migration and invasion were assessed using a wound healing assay and a transwell chemotaxis chamber assay, respectively. Cell apoptosis was analyzed by flow cytometry. The levels of apoptosis-related and metastasis-related proteins were detected by Western blot and immunofluorescence. Results: PTF dose-dependently inhibited the viability of the 3 human lung cancer cell lines. The IC50 values of PTF for inhibition of NCI-H1703, NCI-H460, and A549 cells were 33, 203, and 94 mg/L, respectively. PTF (15, 30, and 60 mg/L) dose-dependently induced apoptosis of NCI-H1703 cells. Treatment of NCI-H1703 and HT1080 cells with PTF significantly inhibited cell migration and reduced the number of cells invading through Matrigel. Furthermore, PTF dose-dependently down-regulated the expression of phospho-ERK1/2, MMP-2 and MMP-9 and up-regulated the expression of phospho-JNK, but had no significant effect on the expression of total ERK1/2 or JNK. Conclusion: PTF induces apoptosis and inhibits the migration and invasion of NCI-H1703 cells by decreasing MMP expression through regulation of the MAPK pathway. |
Auditory event-related dynamics of the EEG spectrum and effects of exposure to tones. | A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds. |
What do software architects expect from requirements specifications? results of initial explorative studies | Software requirements specifications (SRS) serve as an important source of information for software architects with regard to deriving suitable architectural design decisions. However, from a software architect's viewpoint, using these documents efficiently and effectively is often difficult. One could attribute these observations to the fact that SRS also have to address and satisfy the information needs of other document consumers involved in downstream activities like interaction design, user interface design or testing - which is, indeed, very challenging for requirements engineers. In this position paper, we present goals and initial results of explorative studies aimed at investigating information needs that should be fulfilled in SRS from the viewpoint of software architects. In these studies, we gained first insights into the relevance of certain artifact types (like descriptions of interactions or system functionalities) and their suitable representation in terms of notations. Furthermore, the analysis of these initial results revealed variances within the group of software architects regarding information needs. This has motivated the planning and conduction of further studies in the near future that will investigate factors such as expertise or individual motivation, which might influence the information needs from software architects' viewpoints. |
Classifying the Political Leaning of News Articles and Users from User Votes | Social news aggregator services generate readers’ subjective reactions to news opinion articles. Can we use those as a resource to classify articles as liberal or conservative, even without knowing the self-identified political leaning of most users? We applied three semi-supervised learning methods that propagate classifications of political news articles and users as conservative or liberal, based on the assumption that liberal users will vote for liberal articles more often, and similarly for conservative users and articles. Starting from a few labeled articles and users, the algorithms propagate political leaning labels to the entire graph. In cross-validation, the best algorithm achieved 99.6% accuracy on held-out users and 96.3% accuracy on held-out articles. Adding social data such as users’ friendship or text features such as cosine similarity did not improve accuracy. The propagation algorithms, using the subjective liking data from users, also performed better than an SVM based text classifier, which achieved 92.0% accuracy on articles. |
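A minimal sketch of the propagation idea on a bipartite user-article vote graph; the user/article names, vote sets, and update rule below are invented for illustration and are not the paper's actual algorithms or data. Seed labels stay clamped while scores flow back and forth until they stabilize.

```python
# votes: user -> set of articles that user voted for
votes = {
    "u_lib": {"a1", "a2"},          # seed liberal user
    "u_con": {"a3", "a4"},          # seed conservative user
    "u_x":   {"a2", "a5"},          # unlabeled user
    "u_y":   {"a4", "a5", "a6"},    # unlabeled user
}
seeds = {"u_lib": +1.0, "u_con": -1.0}   # +1 liberal, -1 conservative

user_score = {u: seeds.get(u, 0.0) for u in votes}
art_score = {a: 0.0 for arts in votes.values() for a in arts}

for _ in range(50):
    # article score = mean score of the users who voted for it
    for a in art_score:
        voters = [u for u, arts in votes.items() if a in arts]
        art_score[a] = sum(user_score[u] for u in voters) / len(voters)
    # user score = mean score of voted articles (seed users stay clamped)
    for u in votes:
        user_score[u] = seeds.get(
            u, sum(art_score[a] for a in votes[u]) / len(votes[u]))
```

The sign of each final score gives the classification: `u_x` inherits a liberal leaning through shared votes with the liberal seed, and `u_y` a conservative one.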
Phase 3 randomized controlled study of gastroretentive gabapentin for the treatment of moderate-to-severe hot flashes in menopause. | OBJECTIVE
The goal of this study was to evaluate the efficacy and safety of gastroretentive gabapentin (G-GR) for the treatment of moderate-to-severe menopausal hot flashes.
METHODS
The primary endpoints of this randomized, placebo-controlled study of G-GR (600 mg am/1,200 mg pm) were the mean daily frequency and severity of hot flashes at weeks 4 and 12. Secondary endpoints included Patients' Global Impression of Change, Clinicians' Global Impression of Change, and daily sleep interference at week 24.
RESULTS
Six hundred women with 7 or more moderate-to-severe hot flashes/day enrolled; 66.2% completed 24 weeks of treatment. At weeks 4 and 12, G-GR-treated women experienced significantly greater reductions in mean hot flash frequency and severity than placebo-treated women (frequency: week 4, -1.7, P < 0.0001; week 12, -1.14, P = 0.0007; severity: week 4, -0.21, P < 0.0001; week 12, -0.19, P = 0.012). Similar reductions were maintained up to week 24. On the Patient Global Impression of Change, more women receiving G-GR than placebo were "much" or "very much" improved (week 12: 58% vs 44%, P = 0.0008; week 24: 76% vs 55%, P < 0.0001). G-GR significantly reduced sleep interference compared with placebo at week 12 (P = 0.0056) and week 24 (P = 0.0084). Approximately 5% more women taking G-GR withdrew because of adverse events (G-GR/placebo, 16.7%/11.5%). The most common adverse events were dizziness (12.7%/3.4%), headache (9.3%/8.1%), and somnolence (6.0%/2.7%); incidences dropped to sustained low levels after a few weeks.
CONCLUSIONS
G-GR is a modestly effective nonhormone therapy option for the treatment of moderate-to-severe hot flashes due to menopause and is well tolerated with titration. |
The effects of organizational climate and interorganizational coordination on the quality and outcomes of children's service systems. | OBJECTIVE
This study examines the effects of organizational characteristics, including organizational climate and interorganizational coordination, on the quality and outcomes of children's service systems.
METHOD
A quasi-experimental, longitudinal design was used to assess the effects of increasing interorganizational services coordination in public children's service agencies. The research team collected both qualitative and quantitative data over a 3-year period describing the services provided to 250 children by 32 public children's service offices in 24 counties in Tennessee.
RESULTS
Findings show that organizational climate (including low conflict, cooperation, role clarity, and personalization) is the primary predictor of positive service outcomes (the children's improved psychosocial functioning) and a significant predictor of service quality. In contrast, interorganizational coordination had a negative effect on service quality and no effect on outcomes.
CONCLUSIONS
Efforts to improve public children's service systems should focus on creating positive organizational climates rather than on increasing interorganizational services coordination. This is important because many large-scale efforts to improve children's service systems have focused on interorganizational coordination with little success and none to date have focused on organizational climate. |
Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System | Extensive electrophysiology studies have shown that many V1 simple cells have nonlinear response properties to stimuli within their classical receptive field (CRF) and receive contextual influence from stimuli outside the CRF modulating the cell's response. Models seeking to explain these non-classical receptive field (nCRF) effects in terms of circuit mechanisms, input-output descriptions, or individual visual tasks provide limited insight into the functional significance of these response properties, because they do not connect the full range of nCRF effects to optimal sensory coding strategies. The (population) sparse coding hypothesis conjectures an optimal sensory coding approach where a neural population uses as few active units as possible to represent a stimulus. We demonstrate that a wide variety of nCRF effects are emergent properties of a single sparse coding model implemented in a neurally plausible network structure (requiring no parameter tuning to produce different effects). Specifically, we replicate a wide variety of nCRF electrophysiology experiments (e.g., end-stopping, surround suppression, contrast invariance of orientation tuning, cross-orientation suppression, etc.) on a dynamical system implementing sparse coding, showing that this model produces individual units that reproduce the canonical nCRF effects. Furthermore, when the population diversity of an nCRF effect has also been reported in the literature, we show that this model produces many of the same population characteristics. These results show that the sparse coding hypothesis, when coupled with a biophysically plausible implementation, can provide a unified high-level functional interpretation to many response properties that have generally been viewed through distinct mechanistic or phenomenological models. |
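One neurally plausible dynamical system for sparse coding is the locally competitive algorithm (LCA), in which lateral inhibition proportional to dictionary overlap drives the population toward few active units. The two-element sketch below is an illustrative assumption of that general form (the dictionary, threshold, and step size are invented, not the paper's network): the better-matching element explains away the other, a miniature analogue of suppression effects.

```python
# Tiny LCA: two dictionary elements compete to represent the input; lateral
# inhibition via the Gram matrix (off-diagonal of phi'phi) yields a sparse code.
dt, lam, steps = 0.1, 0.1, 500
phi = [[1.0, 0.8],                 # columns = unit-norm dictionary elements
       [0.0, 0.6]]
x = [1.0, 0.0]                     # input matches the first element

b = [sum(phi[i][j] * x[i] for i in range(2)) for j in range(2)]       # phi'x
G = [[sum(phi[i][j] * phi[i][k] for i in range(2)) for k in range(2)]
     for j in range(2)]                                               # phi'phi

u = [0.0, 0.0]                     # membrane potentials
for _ in range(steps):
    a = [max(uj - lam, 0.0) for uj in u]          # thresholded activations
    for j in range(2):
        inhib = sum(G[j][k] * a[k] for k in range(2) if k != j)
        u[j] += dt * (b[j] - u[j] - inhib)
a = [max(uj - lam, 0.0) for uj in u]
```

At equilibrium only the first unit is active: the second unit's drive (0.8) is pushed below threshold by inhibition from its better-matching competitor.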
FOTS: Fast Oriented Text Spotting with a Unified Network | Incidental scene text spotting is considered one of the most difficult and valuable challenges in the document analysis community. Most existing methods treat text detection and recognition as separate tasks. In this work, we propose a unified end-to-end trainable Fast Oriented Text Spotting (FOTS) network for simultaneous detection and recognition, sharing computation and visual information between the two complementary tasks. Specifically, RoIRotate is introduced to share convolutional features between detection and recognition. Benefiting from the convolution-sharing strategy, FOTS has little computational overhead compared with the baseline text detection network, and the joint training makes our method perform better than two-stage methods. Experiments on the ICDAR 2015, ICDAR 2017 MLT, and ICDAR 2013 datasets demonstrate that the proposed method outperforms state-of-the-art methods significantly, which further allows us to develop the first real-time oriented text spotting system, surpassing all previous state-of-the-art results by more than 5% on the ICDAR 2015 text spotting task while running at 22.6 fps. |
Respiratory volume monitoring in an obese surgical population and the prediction of postoperative respiratory depression by the STOP-bang OSA risk score. | STUDY OBJECTIVE
To evaluate use of a respiratory volume monitor (RVM; ExSpiron, Respiratory Motion, Inc., Waltham, MA, USA) that provides minute ventilation (MV), tidal volume (TV) and respiratory rate (RR) measurements in obese surgical patients, hitherto undescribed.
DESIGN
Prospective, IRB-approved observational study of RVM parameter accuracy in obese surgical patients, designed to test the ability of the RVM to detect predefined postoperative respiratory depression (PORD) and apneic events (POA) and to correlate STOP-Bang scores with PORD and POA.
SETTING
Pre-, intra-, and post-op patient-care areas, including the post-anesthesia care unit (PACU) in 2 academic centers with bariatric populations.
PATIENTS
80 patients (aged 47±12 years; BMI 43±7 kg/m²) undergoing elective surgery were enrolled.
INTERVENTIONS
Data collected included patient characteristics, STOP-Bang scores and RVM data from immediately preoperatively through PACU completion, without affecting standard clinical care.
MEASUREMENTS
Low minute ventilation (LMV) was defined as MV below 40% of predicted MV, and PORD was defined as sustained LMV for 5 minutes. Appropriate parametric and non-parametric statistical analyses were performed, with P<.05 considered significant.
MAIN RESULTS
In 56 patients with complete intraoperative ventilator data, the correlation between RVM and ventilator MV measurements was r=0.89 (measurement bias 1.5%, accuracy 11%). Measurement error was 0.13 L/min (95% confidence interval: -0.93 to 1.20 L/min). In the PACU, 16.3% and 31% of patients had PORD and POA, respectively. There were no significant differences in the incidence of PORD and POA across the 3 STOP-Bang risk categories (P>.2).
CONCLUSIONS
There was excellent correlation and accuracy between the RVM and ventilator volumes in obese surgical patients. A considerable number of patients exhibited PORD and POA in the PACU. The STOP-Bang risk scores correlated poorly with PORD and POA which suggests that obese surgical patients remain at risk for early post-operative respiratory events irrespective of the STOP-Bang score. |
A phase I clinical and pharmacokinetic study of the camptothecin glycoconjugate, BAY 38-3441, as a daily infusion in patients with advanced solid tumors. | BACKGROUND
The aim of this study was to define the maximum tolerated dose (MTD), dose-limiting toxicity (DLT) and pharmacokinetics of the camptothecin glycoconjugate BAY 38-3441, administered as an infusion for 30 min on two separate schedules every 3 weeks.
PATIENTS AND METHODS
A total of 81 patients with advanced solid tumors were treated with BAY 38-3441 either at doses of 20, 40, 67, 100, 140, 210, 315, 470 and 600 mg/m²/day for 1 day every 3 weeks (single-dose schedule), or at doses of 126, 189, 246, 320 and 416 mg/m²/day once daily for three consecutive days every 3 weeks (3-day schedule). Plasma sampling was performed to characterize the pharmacokinetics of BAY 38-3441 and camptothecin with these schedules.
RESULTS
DLTs included renal toxicity, granulocytopenia and thrombocytopenia on the single-day schedule at doses ≥470 mg/m²/day, and diarrhea and thrombocytopenia on the 3-day schedule at doses ≥320 mg/m²/day. Other non-DLTs were gastrointestinal, dermatological and hematological. The pharmacokinetics of BAY 38-3441 and camptothecin appear to be dose-dependent, but not linear.
CONCLUSIONS
Renal toxicity was dose-limiting for BAY 38-3441 using 30-min infusions on the single-dose schedule. Dose escalation to 470 mg/m²/day is feasible using a 2-h infusion. However, because of the superior safety profile, we recommend the 3-day schedule for BAY 38-3441 at a dose of 320 mg/m²/day as 30-min infusions for further phase II studies.
Optimal design and tradeoffs analysis for planar transformer in high power DC-DC converters | A planar magnetic component is a low-profile transformer or inductor that uses planar windings instead of traditional windings made of Cu wires. In this paper, the important factors in planar transformer design, including winding loss, core loss, leakage inductance and stray capacitance, are investigated individually. The tradeoffs among these factors must be analyzed in order to achieve optimal parameters. For a given application, four typical winding arrangements are compared to illustrate their respective advantages and disadvantages. An improved interleaving structure with optimal behavior is proposed, which connects the top layer in parallel with the bottom layer and then in series with the remaining primary turns, so that a lower magnetomotive force (MMF) ratio m is obtained along with minimized AC resistance, leakage inductance and even stray capacitance. A 1.2-kW full-bridge DC-DC converter prototype employing the improved planar transformer structure has been constructed; over 96% efficiency is achieved, a 2.7% improvement compared with the non-interleaved structure. |
Lifelong Learning CRF for Supervised Aspect Extraction | This paper makes a focused contribution to supervised aspect extraction. It shows that if the system has performed aspect extraction from many past domains and retained their results as knowledge, Conditional Random Fields (CRF) can leverage this knowledge in a lifelong learning manner to extract in a new domain markedly better than the traditional CRF without using this prior knowledge. The key innovation is that even after CRF training, the model can still improve its extraction with experiences in its applications. |
Representation, Vision and Visualization: Cognitive Functions in Mathematical Thinking. Basic Issues for Learning. | Mathematics education has been very responsive to changing needs over the last fifty years. Research in developmental psychology, new technologies, and new requirements in assessment have supported these changes. But their impact has been more effective on the mathematics curriculum and on means of teaching than on explanations of the deep processes of understanding and learning in mathematics. The difficulty of such research stems from the necessity of defining a framework in which the epistemological constraints specific to mathematical activity and the cognitive functions of thought it involves are not separated. That requires going beyond local studies of concept acquisition at each level of the curriculum, beyond mere reference to very general theories of learning, and even beyond global descriptions of students' activity in the classroom. Representation and visualization are at the core of understanding in mathematics. But in which framework can their role in mathematical thinking and in the learning of mathematics be analyzed? Already in 1961, Piaget acknowledged the difficulty of understanding what mathematicians call "intuition", a way of understanding that has close links with representation and visualization: "nothing is more difficult for a psychologist to understand than what mathematicians mean by intuition". He distinguished "many forms of mathematical intuition" (1961, pp. 223-241), from the empirical to the symbolizing ones. From a cognitive viewpoint, the question is no easier. Representation refers to a large range of meaning-making activities: steady and holistic beliefs about something, various ways to evoke and to denote objects, how information is coded... On the contrary, visualization seems to emphasize images and empirical intuition of physical objects and actions.
Which of these are relevant for analyzing understanding in mathematics in order to bring out conditions of learning? Our purpose in this panel is to focus on some main distinctions that are necessary to analyze mathematical knowledge from a learning point of view, and to explain why many students come up against difficulties at each level of the curriculum and very often cannot get beyond them. Studies about reasoning, proving, using geometrical figures in problem solving, reading graphs... have made these distinctions necessary. They lead not only to emphasizing semiotic representations as an intrinsic process of thinking but also to relativizing some others, such as the distinction between internal and external representations. They also lead to pointing out the gap between vision and visualization. And from a learning point of view, visualization, the only relevant cognitive modality … |
Collaborative Filtering with Graph Information: Consistency and Scalable Methods | Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets. |
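The graph-regularized completion idea can be sketched in miniature with gradient descent on a rank-1 factorization; the ratings, friendship edges, and hyperparameters below are illustrative assumptions, and this toy loop stands in for the paper's far more scalable conjugate-gradient alternating minimization.

```python
# Rank-1 matrix completion with a graph regularizer over rows: rows linked by
# a "friendship" edge are pulled toward similar latent factors.
obs = [(0, 0, 5.0), (0, 1, 4.0), (1, 2, 1.0),       # (row, col, rating)
       (2, 0, 5.0), (3, 1, 4.0), (3, 2, 1.0)]
edges = [(0, 1), (2, 3)]                            # graph over rows
n_rows, n_cols, lam, lr = 4, 3, 0.1, 0.01

u = [1.0] * n_rows                                  # row factors
v = [1.0] * n_cols                                  # column factors

for _ in range(3000):
    gu, gv = [0.0] * n_rows, [0.0] * n_cols
    for i, j, m in obs:                             # squared-error data term
        e = u[i] * v[j] - m
        gu[i] += 2 * e * v[j]
        gv[j] += 2 * e * u[i]
    for i, k in edges:                              # graph smoothness term
        d = u[i] - u[k]
        gu[i] += 2 * lam * d
        gu[k] -= 2 * lam * d
    u = [ui - lr * g for ui, g in zip(u, gu)]
    v = [vj - lr * g for vj, g in zip(v, gv)]

pred_unseen = u[1] * v[0]   # row 1 never rated column 0; the graph fills it in
```

Row 1 has only a single observed rating, so without the edge to row 0 its factor would be underdetermined; the graph term transfers row 0's preferences to it.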
LIBSVX: A Supervoxel Library and Benchmark for Early Video Processing | Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study seven supervoxel algorithms, including both off-line and streaming methods, in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object/region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of seven quality metrics to measure these desirable supervoxel characteristics. In addition, we evaluate the methods in a supervoxel classification task as a proxy for subsequent high-level uses of the supervoxels in video analysis. We use six existing benchmark video datasets with a variety of content-types and dense human annotations. Our findings have led us to conclusive evidence that the hierarchical graph-based (GBH), segmentation by weighted aggregation (SWA) and temporal superpixels (TSP) methods are the top-performers among the seven methods. They all perform well in terms of segmentation accuracy, but vary in regard to the other desiderata: GBH captures object boundaries best; SWA has the best potential for region compression; and TSP achieves the best undersegmentation error. |
Opioid-Sparing Effect of Ketamine in Children: A Meta-Analysis and Trial Sequential Analysis of Published Studies. | INTRODUCTION
Reducing postoperative opioid consumption is a priority given its impact upon recovery, and the efficacy of ketamine as an opioid-sparing agent in children is debated. The goal of this study was to update a previous meta-analysis on the postoperative opioid-sparing effect of ketamine, adding trial sequential analysis (TSA) and four new studies.
MATERIALS AND METHODS
A comprehensive literature search was conducted to identify clinical trials that examined ketamine as a perioperative opioid-sparing agent in children and infants. Outcomes measured were postoperative opioid consumption to 48 h (primary outcome: postoperative opioid consumption to 24 h), postoperative pain intensity, postoperative nausea and vomiting and psychotomimetic symptoms. The data were combined to calculate the pooled mean difference, odds ratios or standard mean differences. In addition to this classical meta-analysis approach, a TSA was performed.
RESULTS
Eleven articles were identified, with four added to seven from the previous meta-analysis. Ketamine did not exhibit a global postoperative opioid-sparing effect to 48 postoperative hours, nor did it decrease postoperative pain intensity. This result was confirmed using TSA, which found a lack of power to draw any conclusion regarding the primary outcome of this meta-analysis (postoperative opioid consumption to 24 h). Ketamine did not increase the prevalence of either postoperative nausea and vomiting or psychotomimetic complications.
CONCLUSIONS
This meta-analysis did not find a postoperative opioid-sparing effect of ketamine. According to the TSA, this negative result might involve a lack of power of this meta-analysis. Further studies are needed in order to assess the postoperative opioid-sparing effects of ketamine in children. |
CloudSeer: Workflow Monitoring of Cloud Infrastructures via Interleaved Logs | Cloud infrastructures provide a rich set of management tasks that operate computing, storage, and networking resources in the cloud. Monitoring the executions of these tasks is crucial for cloud providers to promptly find and understand problems that compromise cloud availability. However, such monitoring is challenging because there are multiple distributed service components involved in the executions. CloudSeer enables effective workflow monitoring. It takes a lightweight non-intrusive approach that purely works on interleaved logs widely existing in cloud infrastructures. CloudSeer first builds an automaton for the workflow of each management task based on normal executions, and then it checks log messages against a set of automata for workflow divergences in a streaming manner. Divergences found during the checking process indicate potential execution problems, which may or may not be accompanied by error log messages. For each potential problem, CloudSeer outputs necessary context information including the affected task automaton and related log messages hinting where the problem occurs to help further diagnosis. Our experiments on OpenStack, a popular open-source cloud infrastructure, show that CloudSeer's efficiency and problem-detection capability are suitable for online monitoring. |
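The automaton-checking idea can be sketched as follows; the workflow, message names, and single-task simplification are invented for illustration (CloudSeer itself builds automata from normal executions and checks interleaved instances of multiple tasks in a streaming manner).

```python
# Minimal workflow automaton for one task: the expected log-message sequence.
# Any message that cannot extend the pending execution is a divergence.
WORKFLOW = ["boot.start", "net.attach", "disk.attach", "boot.done"]

def check(stream):
    """Return (complete, divergences) for one task's log stream."""
    pos, divergences = 0, []
    for msg in stream:
        if pos < len(WORKFLOW) and msg == WORKFLOW[pos]:
            pos += 1
        else:
            divergences.append(msg)
    complete = pos == len(WORKFLOW)
    return complete, divergences

ok, div = check(["boot.start", "net.attach", "disk.attach", "boot.done"])
bad, div2 = check(["boot.start", "disk.error", "boot.done"])
```

The divergent messages, together with the affected automaton, are exactly the context an operator would want for diagnosing where the execution went wrong.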
Small-Size LTE/WWAN Printed Loop Antenna With an Inductively Coupled Branch Strip for Bandwidth Enhancement in the Tablet Computer | The technique of using an inductively coupled branch strip for bandwidth enhancement of a simple printed loop antenna to achieve small size yet multiband operation to cover the LTE/WWAN bands (704-960 and 1710-2690 MHz) in the tablet computer is presented. The antenna's metal pattern occupies a small area of 10 × 34.5 mm² and is printed on a thin FR4 substrate. The branch strip is coupled to the loop antenna through a chip inductor. In the antenna's lower band, the branch strip can cause an additional resonance excited near a resonant mode contributed by the printed loop antenna and thereby greatly widens the antenna's low-band bandwidth to cover the 704-960 MHz band for the LTE700/GSM850/900 operations. While in the antenna's higher band, the chip inductor provides a high inductance and limits the excitation of the branch strip, thereby making the branch strip have negligible effects on the high-band operation. In this case, with the printed loop antenna excited in the higher band, the antenna can generate two higher order resonant modes to form a wide operating band for the GSM1800/1900/UMTS/LTE2300/2500 operations (1710-2690 MHz). Details of the proposed antenna are described, and the obtained results of the antenna are presented and discussed. |
Scene tunnels for seamless virtual tour | This paper proposes a visual representation named the scene tunnel to archive and visualize urban scenes for Internet-based virtual tours. We scan cityscapes using multiple cameras on a vehicle that moves along streets, and generate a scene archive more complete than a route panorama. The scene tunnel can cover tall architecture and various object aspects. It contains much less data than video, which makes it suitable for image transmission and rendering over the Internet. It has a uniform resolution along the camera path and provides continuous views for virtual traversal of a real city. We have developed image acquisition methods covering slit setting, view scanning, and image integration. We have also achieved city visualization with scene tunnels on the Internet by transforming views, streaming data, and providing interaction. |
Classifying imbalanced data sets using similarity based hierarchical decomposition | Classification of data is difficult if the data is imbalanced and classes are overlapping. In recent years, more research has focused on the classification of imbalanced data, since real-world data is often skewed. Traditional methods are more successful at classifying the class that has the most samples (the majority class) than the other classes (the minority classes). For the classification of imbalanced data sets, different methods are available, although each has advantages and shortcomings. In this study, we propose a new hierarchical decomposition method for imbalanced data sets which differs from previously proposed solutions to the class imbalance problem. Additionally, it does not require any data pre-processing step, as many other solutions do. The new method is based on clustering and outlier detection. The hierarchy is constructed using the similarity of labeled data subsets at each level of the hierarchy, with different levels being built from different data and feature subsets. Clustering is used to partition the data, while outlier detection is utilized to detect minority-class samples. A comparison of the proposed method with state-of-the-art methods using 20 public imbalanced data sets and 181 synthetic data sets showed that the proposed method's classification performance is better than that of the state-of-the-art methods. It is especially successful if the minority class is sparser than the majority class. It performs accurately even when classes have sub-varieties and the minority and majority classes overlap. Moreover, its performance is also good when the class imbalance ratio is low, i.e., when classes are more imbalanced. |
Source code retrieval on StackOverflow using LDA | Internet code search is a popular research area. StackOverflow allows developers to ask and answer questions about code. A previous approach to searching code on StackOverflow uses a tf-idf method, based on the number of occurrences of words, to recommend source code. This method has the disadvantage that variable or method identifiers are treated as normal words, even though identifiers are often a combination of two or more words. For example, given an identifier named "randomString", a search using the keyword "random" will probably not return "randomString" because the two words differ. Concept location can tackle this problem. Concept location has been used widely to obtain the correlation between code and specific concepts or features. Previous research on concept location focused only on source code comments and the relations among objects within the source code. This research proposes a mechanism for finding code on StackOverflow that uses Latent Dirichlet Allocation (LDA) with concept location in the preprocessing stage. Questions, answers, and code snippets about Java programming are downloaded from StackOverflow into a local repository. Corpora are generated by extracting the questions, answers and code snippets. Concept locations are inferred from source code using the LDA algorithm. Developers query concepts, and the system then recommends source code based on the relevant concepts. The results of the experiment show that the system is able to recommend source code with 48% average precision and 58% average recall. |
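The identifier-splitting step that motivates the concept-location preprocessing can be sketched as below; this is a generic camelCase/snake_case splitter, and the paper's exact preprocessing rules are an assumption here.

```python
import re

def split_identifier(tok):
    """Split camelCase / snake_case identifiers into lowercase word tokens,
    so that 'randomString' is indexed under both 'random' and 'string'."""
    parts = re.split(r"[_\W]+", tok)          # break on underscores/punctuation
    words = []
    for p in parts:
        # Acronym runs (HTML), capitalized or lowercase words, digit runs
        words += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", p)
    return [w.lower() for w in words if w]
```

With this step in the corpus pipeline, a query for "random" can match documents containing `randomString`, which a plain word-occurrence index would miss.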
Pictorial Structures for Object Recognition | In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images. |
Cross-sectional study of quality of life and symptoms in chronic renal disease patients: the Modification of Diet in Renal Disease Study. | The purposes of this study were to measure health-related quality of life in the Modification of Diet in Renal Disease clinical trial; correlate quality of life measures with demographic, medical, and laboratory variables; and compare quality of life in various chronic diseases. The 1,284 patients enrolled in the baseline period of the Modification of Diet in Renal Disease study who completed at least one measurement of quality of life or symptoms served as the subjects of this study. The Quality of Well-Being (QWB) scale, which was a general health-related quality of life index, the Symptom Checklist-90R (SCL-90R), which provided a global measure of mental health, and the Patient Symptom Form, which assessed the frequency of symptoms specific to this population, were used as measurements. The mean +/- SD QWB score was 0.74 +/- 0.09. Using multivariate analysis, there was a significant negative correlation between the overall QWB score and age and female gender, and a significant positive correlation between the QWB and level of education, income, and glomerular filtration rate (GFR). For the SCL-90R subscores, the mean normalized Global Symptom Index was 49.7 +/- 9.6, the Positive Symptom Total was 47.9 +/- 10.4, and the mean Positive Symptom Distress Index was 51.3 +/- 12.6. Using multivariate analysis, significant inverse relationships were seen between each of the SCL-90R subscores and income, serum albumin level, and GFR. The most commonly reported medical symptoms in this cohort included tiring easily, weakness, lack of pep or energy, difficulty sleeping, and abdominal bloating or gas. Symptoms in which the severity index score had a negative correlation with GFR included tiring easily, weakness, lack of pep and energy, muscle cramps, easy bruising or bleeding, bad taste in mouth, and hiccoughs. 
In conclusion, patients with moderate to advanced renal insufficiency have a reduced quality of life and an increased frequency and severity of both symptoms and psychological distress, with the magnitude of these changes negatively correlated with GFR. |
Flexible and efficient Gaussian process models for machine learning | Gaussian process (GP) models are widely used to perform Bayesian nonlinear regression and classification — tasks that are central to many machine learning problems. A GP is nonparametric, meaning that the complexity of the model grows as more data points are received. Another attractive feature is the behaviour of the error bars. They naturally grow in regions away from training data where we have high uncertainty about the interpolating function. In their standard form GPs have several limitations, which can be divided into two broad categories: computational difficulties for large data sets, and restrictive modelling assumptions for complex data sets. This thesis addresses various aspects of both of these problems. The training cost for a GP has O(N³) complexity, where N is the number of training data points. This is due to an inversion of the N × N covariance matrix. In this thesis we develop several new techniques to reduce this complexity to O(NM²), where M is a user-chosen number much smaller than N. The sparse approximation we use is based on a set of M ‘pseudo-inputs’ which are optimised together with hyperparameters at training time. We develop a further approximation based on clustering inputs that can be seen as a mixture of local and global approximations. Standard GPs assume a uniform noise variance. We use our sparse approximation described above as a way of relaxing this assumption. By making a modification of the sparse covariance function, we can model input dependent noise. To handle high dimensional data sets we use supervised linear dimensionality reduction. As another extension of the standard GP, we relax the Gaussianity assumption of the process by learning a nonlinear transformation of the output space. All these techniques further increase the applicability of GPs to real complex data sets.
We present empirical comparisons of our algorithms with various competing techniques, and suggest problem dependent strategies to follow in practice. |
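The O(N³) cost mentioned in the abstract comes from factorizing the N × N covariance matrix of an exact GP. The sketch below is a generic, minimal NumPy example of that exact computation (an RBF kernel and fixed hyperparameters are assumed for illustration; this is not the thesis's sparse pseudo-input method):

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, lengthscale=1.0, noise=0.1):
    """Exact GP regression mean; the Cholesky factorization of the
    N x N covariance matrix is the O(N^3) bottleneck."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    K = rbf(X, X) + noise ** 2 * np.eye(len(X))  # N x N covariance
    L = np.linalg.cholesky(K)                    # O(N^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf(X_star, X) @ alpha
```

Sparse approximations replace the N training inputs inside the factorized matrix with M pseudo-inputs, which is what brings the cost down to O(NM²).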
First Administration of the Fc-Attenuated Anti-β Amyloid Antibody GSK933776 to Patients with Mild Alzheimer’s Disease: A Randomized, Placebo-Controlled Study | OBJECTIVE
To assess the safety, tolerability, pharmacokinetics, and pharmacodynamics of the Fc-inactivated anti-β amyloid (Aβ) monoclonal antibody (mAb) GSK933776 in patients with mild Alzheimer's disease (AD) or mild cognitive impairment (MCI).
METHODS
This was a two-part, single blind, placebo-controlled, first-time-in-human (FTIH) study of single (n = 18) and repeat dose (n = 32) intravenous GSK933776 0.001-6 mg/kg (ClinicalTrials.gov: NCT00459550). Additional safety data from an open-label, uncontrolled, single dose study of intravenous GSK933776 1-6 mg/kg (n = 18) are included (ClinicalTrials.gov: NCT01424436).
RESULTS
There were no cases of amyloid-related imaging abnormalities-edema (ARIA-E) or -hemorrhage (ARIA-H) after GSK933776 administration in both studies. Three patients across the two studies developed anti-GSK933776 antibodies. Plasma GSK933776 half-life (t1/2) was 10-15 days after repeat dosing. After each of three administrations of GSK933776, plasma levels of total Aβ42 and Aβ increased whereas plasma levels of free Aβ decreased dose dependently; no changes were observed for placebo. For total Aβ42 the peak:trough ratio was ≤2 at doses ≥3 mg/kg; for total Aβ the ratio was ≤2 at 6 mg/kg. CSF concentrations of Aβ showed increases from baseline to week 12 for Aβ X-38 (week 12:baseline ratio: 1.65; 95%CI: 1.38, 1.93) and Aβ X-42 (week 12:baseline ratio: 1.18; 95%CI: 1.06, 1.30) for values pooled across doses.
CONCLUSION
In this FTIH study the Fc-inactivated anti-Aβ mAb GSK933776 engaged its target in plasma and CSF without causing brain ARIA-E/H in patients with mild AD or MCI.
TRIAL REGISTRATION
ClinicalTrials.gov NCT00459550. |
How stress influences the immune response. | In response to a stressor, physiological changes are set into motion to help an individual cope with the stressor. However, chronic activation of these stress responses, which include the hypothalamic–pituitary–adrenal axis and the sympathetic–adrenal–medullary axis, results in chronic production of glucocorticoid hormones and catecholamines. Glucocorticoid receptors expressed on a variety of immune cells bind cortisol and interfere with the function of NF-κB, which regulates the activity of cytokine-producing immune cells. Adrenergic receptors bind epinephrine and norepinephrine and activate the cAMP response element binding protein, inducing the transcription of genes encoding for a variety of cytokines. The changes in gene expression mediated by glucocorticoid hormones and catecholamines can dysregulate immune function. There is now good evidence (in animal and human studies) that the magnitude of stress-associated immune dysregulation is large enough to have health implications. |
Adversarial Frontier Stitching for Remote Neural Network Watermarking | The state of the art performance of deep learning models comes at a high cost for companies and institutions, due to the tedious data collection and the heavy processing requirements. Recently, Uchida et al. (2017) proposed to watermark convolutional neural networks by embedding information into their weights. While this is clear progress towards model protection, this technique solely allows extracting the watermark from a network that one accesses locally and entirely. This is a clear impediment, as leaked models can be re-used privately, and thus not released publicly for ownership inspection. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely and available through a service API. To this end, we propose to operate on the model's action itself, tweaking slightly its decision frontiers so that a set of specific queries convey the desired information. In the present paper, we formally introduce the problem and propose a novel zero-bit watermarking algorithm that makes use of adversarial model examples (called adversaries for short). While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only a few remote queries. We experiment with this approach on the MNIST dataset with three types of neural networks, demonstrating that, e.g., watermarking with 100 images incurs a slight accuracy degradation, while being resilient to most removal attacks. |
Aesthetic-Driven Image Enhancement by Adversarial Learning | We introduce EnhanceGAN, an adversarial learning based model that performs automatic image enhancement. Traditional image enhancement frameworks typically involve training models in a fully-supervised manner, which require expensive annotations in the form of aligned image pairs. In contrast to these approaches, our proposed EnhanceGAN only requires weak supervision (binary labels on image aesthetic quality) and is able to learn enhancement operators for the task of aesthetic-based image enhancement. In particular, we show the effectiveness of a piecewise color enhancement module trained with weak supervision, and extend the proposed EnhanceGAN framework to learning a deep filtering-based aesthetic enhancer. The full differentiability of our image enhancement operators enables the training of EnhanceGAN in an end-to-end manner. We further demonstrate the capability of EnhanceGAN in learning aesthetic-based image cropping without any groundtruth cropping pairs. Our weakly-supervised EnhanceGAN reports competitive quantitative results on aesthetic-based color enhancement as well as automatic image cropping, and a user study confirms that our image enhancement results are on par with or even preferred over professional enhancement. |
A vibration damping optimization algorithm for a parallel machines scheduling problem with sequence-independent family setup times | The parallel machine scheduling problem is a branch of production scheduling and is among the most difficult combinatorial optimization problems. This paper develops a meta-heuristic algorithm based on the concept of vibration damping in mechanical vibration, called the vibration damping optimization (VDO) algorithm, for optimizing the identical parallel machine scheduling problem with sequence-independent family setup times. The objective function of this problem is to minimize the total weighted completion time. Furthermore, the Taguchi experimental design method is applied to set and estimate the appropriate values of the parameters required in our proposed VDO. We computationally compare the results obtained by the proposed VDO with the results of a genetic algorithm (GA) and a branch-and-bound method. Consequently, the computational results validate the quality of the proposed algorithm. |
Speech Recognition using Artificial Neural Network | Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, humans also want to interact with machines using speech alone, so automatic speech recognition has gained a lot of popularity. Different approaches to speech recognition exist, such as the Hidden Markov Model (HMM), Dynamic Time Warping (DTW), and Vector Quantization (VQ). This paper uses a Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. MFCCs have been used for the feature extraction of speech, capturing the features of the waveform. For pattern matching, a feedforward neural network with the backpropagation algorithm has been applied. The paper analyzes the various training algorithms available for training the neural network and uses trainscg (scaled conjugate gradient) for the experiment. The work has been done in MATLAB, and experimental results show that the system is able to recognize words at sufficiently high accuracy. |
Can you program ethics into a self-driving car? | IT'S 2034. A drunken man walking along a sidewalk at night trips and falls directly in front of a driverless car, which strikes him square on, killing him instantly. Had a human been at the wheel, the death would have been considered an accident because the pedestrian was clearly at fault and no reasonable person could have swerved in time. But the "reasonable person" legal standard for driver negligence disappeared back in the 2020s, when the proliferation of driverless cars reduced crash rates by 90 percent. Now the standard is that of the reasonable robot. The victim's family sues the vehicle manufacturer on that ground, claiming that, although the car didn't have time to brake, it could have swerved around the pedestrian, crossing the double yellow line and colliding with the empty driverless vehicle in the next lane. A reconstruction of the crash using data from the vehicle's own sensors confirms this. The plaintiff's attorney, deposing the car's lead software designer, asks: "Why didn't the car swerve?" |
Brain Amyloid Deposition and Longitudinal Cognitive Decline in Nondemented Older Subjects: Results from a Multi-Ethnic Population | OBJECTIVE
We aimed to determine whether abnormally high amyloid-β (Aβ) levels in the brain among apparently healthy elders are related to subtle cognitive deficits and/or accelerated cognitive decline.
METHODS
A total of 116 dementia-free participants (mean age 84.5 years) of the Washington Heights Inwood Columbia Aging Project completed 18F-Florbetaben PET imaging. Positive or negative cerebral Aβ deposition was assessed visually. Quantitative cerebral Aβ burden was calculated as the standardized uptake value ratio in pre-established regions of interest using cerebellar cortex as the reference region. Cognition was determined using a neuropsychological battery, and selected test scores were combined into four composite scores (memory, language, executive/speed, and visuospatial) using exploratory factor analysis. We examined the relationship between cerebral Aβ level and longitudinal cognition change up to 20 years before the PET scan using latent growth curve models, controlling for age, education, ethnicity, and Apolipoprotein E (APOE) genotype.
RESULTS
Positive readings of Aβ were found in 41 of 116 (35%) individuals. Cognitive scores at the time of the scan were not related to Aβ. All cognitive scores declined over time. A positive Aβ reading (B = -0.034, p = 0.02) and higher Aβ burden in the temporal region (B = -0.080, p = 0.02) were associated with faster decline in executive/speed. Stratified analyses showed that higher Aβ deposition was associated with faster longitudinal declines in mean cognition, language, and executive/speed in African-Americans or in APOE ε4 carriers, and with faster memory decline in APOE ε4 carriers. The associations remained significant after excluding mild cognitive impairment participants.
CONCLUSIONS
High Aβ deposition in healthy elders was associated with decline in executive/speed in the decade before neuroimaging, and the association was observed primarily in African-Americans and APOE ε4 carriers. Our results suggest that measuring cerebral Aβ may give us important insights into the cognitive profile in the years prior to the scan in cognitively normal elders. |
Validating viral marketing strategies in Twitter via agent-based social simulation | A number of marketing phenomena are too complex for conventional analytical or empirical approaches. This makes marketing a costly process of trial and error: proposing, imagining, trying in the real world, and seeing results. Alternatively, Agent-based Social Simulation (ABSS) is becoming the most popular approach to model and study these phenomena. This research paradigm allows modeling a virtual market to design, understand, and evaluate marketing hypotheses before taking them to the real world. However, there are shortcomings in the specialized literature, such as the lack of methods, data, and implemented tools to deploy a realistic virtual market with ABSS. To advance the state of the art on this complex and interesting problem, this paper makes a seven-fold contribution based on (1) a method to design and validate viral marketing strategies in Twitter by ABSS. The method is illustrated with the widely studied problem of rumor diffusion in social networks. After (2) an extensive review of the related work on this problem, (3) an innovative spread model is proposed which rests on exploratory data analysis of two different rumor datasets in Twitter. Besides, (4) new strategies are proposed to control malicious gossip. (5) The experimental results validate the realism of this new propagation model with the datasets, and (6) the performance of the strategies is evaluated over this model. (7) Finally, the article is complemented by a free and open-source simulator. |
Influence of life-style choices on locomotor disability, arthritis and cardiovascular disease in older women: prospective cohort study. | BACKGROUND
Many chronic conditions have their roots in modifiable health-related behaviours.
METHODS
A total of 4,286 women aged 60-79 in the British Women's Heart and Health Study are followed up for incident cardiovascular disease (CVD), arthritis and locomotor disability over 7 years. Self-reported smoking, alcohol consumption, exercise and fruit intake at baseline are also available. Associations between these and each outcome, plus a composite outcome, are investigated in those without prevalent disease at baseline using logistic regression with multiple imputation.
RESULTS
Ex-smokers and current smokers showed increased odds of locomotor disability, CVD and the combined outcome. Less regular exercisers had increased odds of all outcomes, particularly locomotor disability. There was no evidence that alcohol or fruit intake was associated with any outcome. Population attributable fractions (PAFs) suggest that, in addition to the influence of smoking and alcohol, exercise accounts for 9% of incident locomotor disability, 5% of CVD and 4% of arthritis. All four lifestyle factors combined account for 17% of incident locomotor disability and 9% of incident conditions combined.
CONCLUSIONS
Never smokers and regular exercisers had substantially reduced odds of 7-year disability onset. Low PAFs suggest changes in health-related behaviours in older women would result in only modest reductions in common chronic conditions. |
Vocabulary Learning Strategies in an ESP Context | The paper focuses on vocabulary learning strategies as a subcategory of language learning strategies and their instruction within the ESP context at the Faculty of Maritime Studies and Transport in Portorož. Vocabulary strategy instruction will be implemented at our faculty as part of a broader PhD research into the effect of language learning strategy instruction on strategy use and subject-specific and general language acquisition. Additional variables that will be taken into consideration are language proficiency, motivation and learning styles of the students. The introductory section in which the situation that triggered my PhD research is presented is followed by a theoretical introduction to the concept of language and vocabulary learning strategies. The aspects that the paper focuses on are the central role of lexis within ESP, vocabulary learning strategy taxonomies, and the presentation of research studies made in the examined field to date. The final section presents the explicit vocabulary learning strategy instruction model. In the conclusion, some implications for teaching can be found. |
A Measure for Objective Evaluation of Image Segmentation Algorithms | Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset. |
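The NPR index described in the abstract is a modification of the classical Rand index. As background, the unnormalized Rand index between two labelings simply counts the fraction of element pairs on which the two segmentations agree; the sketch below shows that base measure only, not the paper's normalized variant:

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of element pairs on which two segmentations agree about
    being in the same segment or in different segments."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)
```

Because agreement is defined over pairs rather than labels, the measure is invariant to relabeling of segments, which is why it is a natural starting point for comparing automatic and hand-drawn segmentations.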
Cost Efficient Design of Fault Tolerant Geo-Distributed Data Centers | Many critical e-commerce and financial services are deployed on geo-distributed data centers for scalability and availability. Recent market surveys show that failure of a data center is inevitable, resulting in a huge financial loss. Fault-tolerance in distributed data centers is typically handled by provisioning spare capacity to mask failure at a site. We argue that the operating cost and data replication cost (for data availability) must be considered in spare capacity provisioning along with minimizing the number of servers. Since the operating cost and client demand vary across space and time, we propose cost-aware capacity provisioning to minimize the total cost of ownership (TCO) for fault-tolerant data centers. We formulate the problem of spare capacity provisioning in fault-tolerant distributed data centers using mixed integer linear programming (MILP), with an objective of minimizing the TCO. The model accounts for heterogeneous client demand, data replication strategies (single and multiple site), variation in electricity price and carbon tax, and delay constraints while computing the spare capacity. Solving the MILP using real-world data, we observed a saving in the TCO of 35% compared to a model that minimizes the total number of servers and 43% compared to a model that minimizes the average response time. We demonstrate that our model is beneficial when the cost of electricity, carbon tax, and bandwidth vary significantly across locations, which is the case for most operators. |
High-Selectivity Dual-Band Bandpass Filter Using a Quad-Mode Resonator With Source-Load Coupling | This letter presents a novel dual-band bandpass filter with controllable frequencies and bandwidths as well as a high out-of-band rejection level. The proposed filter utilizes a novel stub-loaded quad-mode resonator. Every two modes, which can be flexibly controlled, are utilized to form a passband with controllable frequency and bandwidth. Source-load coupling and hook-shape feed lines are introduced and high selectivity is achieved. An experimental filter is implemented and the experimental results are presented for validation. |
Everyday Life Information Seeking: Approaching Information Seeking in the Context of "Way of Life" | The study offers a framework for the study of everyday life information seeking (ELIS) in the context of way of life and mastery of life. Way of life is defined as the "order of things," manifesting itself, for example, in the relationship between work and leisure time, models of consumption, and the nature of hobbies. Mastery of life is interpreted as "keeping things in order;" four ideal types of mastery of life with their implications for ELIS, namely optimistic-cognitive, pessimistic-cognitive, defensive-affective and pessimistic-affective mastery of life, are outlined. The article reviews two major dimensions of ELIS, i.e., the seeking of orienting and practical information. The research framework was tested in an empirical study based on interviews with teachers and industrial workers, eleven of each. The main features of seeking orienting and practical information are reviewed, followed by suggestions for refinement of the research framework. |
An advanced data driven model for residential electric vehicle charging demand | As the electric vehicle (EV) is becoming a significant component of the loads, an accurate and valid model for the EV charging demand is key to enabling accurate load forecasting, demand response, system planning, and several other important applications. We propose a data-driven queuing model for residential EV charging demand by performing big data analytics on smart meter measurements. The data-driven model captures the non-homogeneity and periodicity of residential EV charging behavior through a self-service queue with a periodic and non-homogeneous Poisson arrival rate, an empirical distribution for charging duration, and a finite calling population. Upon parameter estimation, we further validate the model by comparing the simulated data series with real measurements. The hypothesis test shows the proposed model accurately captures the charging behavior. We further acquire the long-run average steady-state probabilities and simultaneous rate of the EV charging demand through simulation output analysis. |
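A periodic, non-homogeneous Poisson arrival process of the kind this abstract describes can be simulated generically by Lewis-Shedler thinning. The sketch below is a hypothetical illustration with a made-up evening-peak rate function; it is not the paper's fitted model or its queuing formulation:

```python
import math
import random

def nhpp_arrivals(rate_fn, t_max, rate_max, seed=0):
    """Simulate arrival times of a non-homogeneous Poisson process on
    [0, t_max] by thinning a homogeneous process of rate rate_max."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)            # candidate inter-arrival
        if t > t_max:
            return arrivals
        if rng.random() < rate_fn(t) / rate_max:  # keep with prob rate(t)/rate_max
            arrivals.append(t)

# hypothetical daily charging rate (arrivals/hour): peaks in the evening
rate = lambda t: 0.5 + 1.5 * max(0.0, math.sin(math.pi * ((t % 24) - 17) / 7))
evening_peak = nhpp_arrivals(rate, 48.0, 2.0)
```

Thinning requires only that `rate_max` bounds the rate function from above, so the same routine works for any empirically estimated periodic rate.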
Reranking and Self-Training for Parser Adaptation | Statistical parsers trained and tested on the Penn Wall Street Journal (WSJ) treebank have shown vast improvements over the last 10 years. Much of this improvement, however, is based upon an ever-increasing number of features to be trained on (typically) the WSJ treebank data. This has led to concern that such parsers may be too finely tuned to this corpus at the expense of portability to other genres. Such worries have merit. The standard “Charniak parser” checks in at a labeled precision-recall f-measure of 89.7% on the Penn WSJ test set, but only 82.9% on the test set from the Brown treebank corpus. This paper should allay these fears. In particular, we show that the reranking parser described in Charniak and Johnson (2005) improves performance of the parser on Brown to 85.2%. Furthermore, use of the self-training techniques described in (McClosky et al., 2006) raise this to 87.8% (an error reduction of 28%) again without any use of labeled Brown data. This is remarkable since training the parser and reranker on labeled Brown data achieves only 88.4%. |
Your botnet is my botnet: analysis of a botnet takeover | Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security problems on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet and study its operations for a period of ten days. During this time, we observed more than 180 thousand infections and recorded almost 70 GB of data that the bots collected. While botnets have been "hijacked" and studied previously, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of data from the infected victims. This data provides a new understanding of the type and amount of personal information that is stolen by botnets. |
Urban cycles and mobility patterns: Exploring and predicting trends in a bicycle-based public transport system | This paper provides an analysis of human mobility data in an urban area using the number of available bikes in the stations of the community bicycle program Bicing in Barcelona. Based on data sampled from the operator’s website, it is possible to detect temporal and geographic mobility patterns within the city. These patterns are applied to predict the number of available bikes for any station some minutes/hours ahead. The predictions could be used to improve the bicycle program and the information given to the users via the Bicing website. |
MAPMAKER: an interactive computer package for constructing primary genetic linkage maps of experimental and natural populations. | With the advent of RFLPs, genetic linkage maps are now being assembled for a number of organisms including both inbred experimental populations such as maize and outbred natural populations such as humans. Accurate construction of such genetic maps requires multipoint linkage analysis of particular types of pedigrees. We describe here a computer package, called MAPMAKER, designed specifically for this purpose. The program uses an efficient algorithm that allows simultaneous multipoint analysis of any number of loci. MAPMAKER also includes an interactive command language that makes it easy for a geneticist to explore linkage data. MAPMAKER has been applied to the construction of linkage maps in a number of organisms, including the human and several plants, and we outline the mapping strategies that have been used. |
Comparative proteomic analysis of methyl jasmonate-induced defense responses in different rice cultivars. | Jasmonate is an important endogenous chemical signal that plays a role in modulation of plant defense responses. To understand its mechanisms in regulation of rice resistance against the fungal pathogen Magnaporthe oryzae, comparative phenotype and proteomic analyses were undertaken using two near-isogenic cultivars with different levels of disease resistance. Methyl-jasmonate (MeJA) treatment significantly enhanced the resistance against M. oryzae in both cultivars but the treated resistant cultivar maintained a higher level of resistance than the same treated susceptible cultivars. Proteomic analysis revealed 26 and 16 MeJA-modulated proteins in resistant and susceptible cultivars, respectively, and both cultivars shared a common set of 13 proteins. Cumulatively, a total of 29 unique MeJA-influenced proteins were identified with many of them known to be associated with plant defense response and ROS accumulation. Consistent with the findings of proteomic analysis, MeJA treatment increased ROS accumulation in both cultivars with the resistant cultivar showing higher levels of ROS production and cell membrane damage than the susceptible cultivar. Taken together, our data add a new insight into the mechanisms of overall MeJA-induced rice defense response and provide a molecular basis of using MeJA to enhance fungal disease resistance in resistant and susceptible rice cultivars. |