title (stringlengths 8-300) | abstract (stringlengths 0-10k) |
---|---|
Cognitive Functioning and Academic Performance in Elementary School Children with Anxious/Depressed and Withdrawn Symptoms. | RATIONALE: Few studies have evaluated the relationship between depressive symptomatology and neuropsychological performance in children without symptomatic depression. OBJECTIVES: This study determined the relationship between anxious/depressed and withdrawn symptoms and performance on cognitive and academic achievement measures. METHODS: 335 Caucasian and Hispanic children aged 6 to 11 years who participated in the Tucson Children's Assessment of Sleep Apnea (TuCASA) study were administered a comprehensive neuropsychological battery measuring cognitive functioning and academic achievement. Their parents completed the Child Behavior Checklist (CBCL). Correlations between performance on the cognitive and academic achievement measures and two Internalizing scales from the CBCL were calculated. Comparisons were made between a "Clinical" referral group (using a T-score of ≥ 60 from the CBCL scales) and a "Normal" group, as well as between Caucasians and Hispanics. RESULTS: No differences were found between those participants with increased anxious/depressed or withdrawn symptoms on the CBCL and those without increased symptoms with respect to age, gender, ethnicity, or parental education level. However, significant negative correlations were found between these symptoms and general intellectual function, language, visual construction skills, attention, processing speed, executive functioning abilities, aspects of learning and memory, psychomotor speed and coordination, and basic academic skills. CONCLUSIONS: These findings support the hypothesis that depressive symptomatology negatively impacts performance on cognitive and academic achievement measures in school-aged children and these findings are not affected by ethnicity. The findings also reinforce the concept that the presence of anxious/depressed or withdrawn symptoms needs to be considered when evaluating poor neuropsychological performance in children. |
Congenital Anomalies of the Hand--Principles of Management. | Physicians who specialize in pediatric orthopedics and hand surgery frequently encounter congenital hand abnormalities, despite their relative rarity. The treating physician should be aware of the associated syndromes and malformations that may, in some cases, be fatal if not recognized and treated appropriately. Although these congenital disorders have a wide variability, their treatment principles are similar in that the physician should promote functional use and cosmesis for the hand. This article discusses syndactyly, preaxial polydactyly and post-axial polydactyly, and the hypoplastic thumb. |
Artificial neural networks applied to forecasting time series. | This study offers a description and comparison of the main models of Artificial Neural Networks (ANN) which have proved to be useful in time series forecasting, and also a standard procedure for the practical application of ANN in this type of task. The Multilayer Perceptron (MLP), Radial Basis Function (RBF), Generalized Regression Neural Network (GRNN), and Recurrent Neural Network (RNN) models are analyzed. With this aim in mind, we use a time series made up of 244 time points. A comparative study establishes that the error made by the four neural network models analyzed is less than 10%. In accordance with the interpretation criteria of this performance, it can be concluded that the neural network models show a close fit regarding their forecasting capacity. The model with the best performance is the RBF, followed by the RNN and MLP. The GRNN model is the one with the worst performance. Finally, we analyze the advantages and limitations of ANN, the possible solutions to these limitations, and provide an orientation towards future research. |
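As a concrete illustration of the lagged-input neural forecasting compared in the record above, here is a minimal Python sketch (not the authors' code; the synthetic series, network size, and error measure are assumptions) that fits an MLP to lagged values of a series and reports a hold-out percentage error:

```python
# Illustrative sketch only: one-step-ahead forecasting with an MLP on lagged
# values. The series is synthetic; the study's 244-point series is not available.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
series = 10 + np.sin(np.linspace(0, 24, 244)) + 0.1 * rng.standard_normal(244)

def lagged_matrix(y, n_lags):
    """Build (X, t) pairs where each row of X holds the n_lags previous values."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

X, t = lagged_matrix(series, n_lags=4)
split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:split], t[:split])
pred = model.predict(X[split:])
print("MAPE on hold-out:", mean_absolute_percentage_error(t[split:], pred))
```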
Statistical Models for Text Segmentation | This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms. |
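The error metric described in the record above combines precision and recall probabilistically; a minimal sketch of a metric in that spirit (the P_k-style measure commonly used for text segmentation, assumed here rather than copied from the paper) is:

```python
# Minimal sketch of a probabilistically motivated segmentation error metric:
# sample pairs of positions a fixed distance k apart and count disagreements
# between reference and hypothesis about whether the pair lies in one segment.
# Segmentations are given as lists of boundary indices (boundary after unit b).
def p_k(reference, hypothesis, n_units, k):
    """Fraction of position pairs (i, i+k) on which the two segmentations disagree."""
    def same_segment(boundaries, i, j):
        # True if no boundary falls strictly between positions i and j.
        return not any(i < b <= j for b in boundaries)

    errors = sum(
        same_segment(reference, i, i + k) != same_segment(hypothesis, i, i + k)
        for i in range(n_units - k)
    )
    return errors / (n_units - k)

# Example: 20 units, reference boundary after unit 10, hypothesis after unit 12.
print(p_k(reference=[10], hypothesis=[12], n_units=20, k=5))
```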
Psychometric evaluation of a new questionnaire measuring treatment satisfaction in hypothyroidism: the ThyTSQ. | OBJECTIVES
There is a clinical impression of dissatisfaction with treatment for hypothyroidism among some patients. Psychometric properties of the new ThyTSQ questionnaire are evaluated. The questionnaire, measuring patients' satisfaction with their treatment for hypothyroidism, has two parts: the seven-item ThyTSQ-Present and four-item ThyTSQ-Past, measuring satisfaction with present and past treatment, respectively, on scales from 6 (very satisfied) to 0 (very dissatisfied).
METHODS
The questionnaire was completed once by 103 adults with hypothyroidism, age (mean [SD]) 55.2 [14.4], range 23-84 years (all treated with thyroxine).
RESULTS
Completion rates were very high. Internal consistency reliability was excellent for both ThyTSQ-Present and ThyTSQ-Past (Cronbach's alpha = 0.91 and 0.90, respectively [N = 102 and 103]). Principal components analyses indicated that the seven items of the ThyTSQ-Present and the four items of the ThyTSQ-Past could be summed into separate Present Satisfaction and Past Satisfaction total scores. Mean Present Satisfaction was 32.5 (7.8), maximum range 0-42, and mean Past Satisfaction was 17.5 (6.1), maximum range 0-24, indicating considerable room for improvement. Patients were least satisfied with their present understanding of their condition, mean 4.2 (1.7) (maximum range 0-6), and with information provided about hypothyroidism around the time of diagnosis, mean 3.9 (1.8) (maximum range 0-6).
CONCLUSIONS
The ThyTSQ is highly acceptable to patients with hypothyroidism (excellent completion rates), and has established internal consistency reliability. It will assist health professionals in considering psychological outcomes when treating people with hypothyroidism, and is suitable for clinical trials and routine clinical monitoring. |
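For the internal-consistency figures reported in the ThyTSQ record above, this is a small sketch of how Cronbach's alpha is computed from an items-by-respondents matrix (the ThyTSQ item-level data are not reproduced; the toy matrix is hypothetical):

```python
# Hypothetical sketch of Cronbach's alpha; rows = respondents, columns = items.
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability of a set of questionnaire items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy example: 5 respondents answering 4 items on a 0-6 scale.
toy = [[6, 5, 6, 5], [4, 4, 3, 4], [2, 3, 2, 2], [5, 5, 6, 6], [3, 2, 3, 3]]
print(round(cronbach_alpha(toy), 2))
```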
Using fleets of electric-drive vehicles for grid support | Electric-drive vehicles can provide power to the electric grid when they are parked (vehicle-to-grid power). We evaluated the economic potential of two utility-owned fleets of battery electric-drive vehicles to provide power for a specific electricity market, regulation, in four US regional regulation services markets. The two battery-electric fleet cases are: (a) 100 Th!nk City vehicles and (b) 252 Toyota RAV4 vehicles. Important variables are: (a) the market value of regulation services, (b) the power capacity (kW) of the electrical connections and wiring, and (c) the energy capacity (kWh) of the vehicle's battery. With a few exceptions when the annual market value of regulation was low, we find that vehicle-to-grid power for regulation services is profitable across all four markets analyzed. Assuming no more than current Level 2 charging infrastructure (6.6 kW), the |
Paying Attention to Descriptions Generated by Image Captioning Models | To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliency-boosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating that explicit bottom-up boosting does not help when the task is well learnt and tuned on a dataset, and (4) a better generalization is, however, observed for the saliency-boosted model on unseen data. |
Overview of on-chip electrostatic discharge protection design with SCR-based devices in CMOS integrated circuits | An overview of electrostatic discharge (ESD) protection circuits using silicon controlled rectifier (SCR)-based devices in CMOS ICs is presented. The history and evolution of the SCR device used for on-chip ESD protection are introduced. Moreover, two practical problems (higher switching voltage and the transient-induced latchup issue) limiting the use of SCR-based devices in on-chip ESD protection are reported. Some modified device structures and trigger-assist circuit techniques to reduce the switching voltage of SCR-based devices are discussed. Solutions to overcome the latchup issue in SCR-based devices are also discussed, so that SCR-based devices can be safely applied for on-chip ESD protection in CMOS IC products. |
A randomized controlled trial to evaluate the effectiveness of a brief, behaviorally oriented intervention for cancer-related fatigue. | BACKGROUND
It has been shown that nonpharmacologic interventions are effective management techniques for cancer-related fatigue (CRF) in cancer survivors. However, few studies have investigated their effectiveness in patients who are receiving chemotherapy. In this study, the authors tested the effectiveness of a brief behaviorally oriented intervention in reducing CRF and improving physical function and associated distress in individuals who were receiving chemotherapy.
METHODS
For this randomized controlled trial, 60 patients with cancer were recruited and received either usual care or the intervention. The intervention was delivered on an individual basis on 3 occasions over a period from 9 weeks to 12 weeks, and the objective of the intervention was to alter fatigue-related thoughts and behavior. Primary outcomes were assessed as follows: CRF using the Visual Analogue Scale-Global Fatigue; physical functioning using the European Organization for Research and Treatment of Cancer Quality-of-Life Core 30 Questionnaire, and CRF-associated distress using the Fatigue Outcome Measure. Assessments were made on 4 occasions: at baseline (T0), at the end of chemotherapy (T1), 1 month after chemotherapy (T2), and 9 months after recruitment (T3). Normally distributed data were analyzed using t tests and random-slope/random-intercept mixed models.
RESULTS
The intervention demonstrated a trend toward improved CRF, although this effect was reduced once confounders had been controlled statistically. There was a significant improvement in physical functioning (coefficient, 10.0; 95% confidence interval, 2.5-17.5; P = .009), and this effect remained once the confounding effects of mood disturbance and comorbid disorders were controlled statistically. No decrease in fatigue-related distress was detected.
CONCLUSIONS
The behaviorally oriented intervention brought about significant improvements in physical functioning and showed a trend toward improved CRF, but no effect on fatigue-related distress was detected. |
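A hedged sketch of the random-slope/random-intercept mixed model mentioned in the methods of the record above, using statsmodels on simulated longitudinal data (the variable names and data are hypothetical, not the trial's analysis code):

```python
# Illustrative sketch only: random intercept and random slope on visit,
# grouped by patient, fit to simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_visits = 60, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_visits),
    "visit": np.tile(np.arange(n_visits), n_patients),
    "treated": np.repeat(rng.integers(0, 2, n_patients), n_visits),
})
df["physical_function"] = (
    50 + 5 * df["treated"] + 2 * df["visit"] + rng.normal(0, 5, len(df))
)

model = smf.mixedlm("physical_function ~ treated + visit", df,
                    groups=df["patient"], re_formula="~visit")
print(model.fit().summary())
```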
Mobile augmented reality techniques for GeoVisualisation | This paper presents the first prototype of an interactive visualisation framework specifically designed for presenting geographical information in both indoor and outdoor environments. The input to our system is ESRI Shapefiles representing 3D building geometry and land-use attributes. Participants can visualise 3D reconstructions of geographical information in real time through two visualisation clients: a mobile VR interface and a tangible AR interface. To demonstrate the functionality of our system, an educational application specifically designed for university students is illustrated with some initial results. Finally, our conclusions as well as future work are presented. |
Vygotsky, Piaget, and Education: a reciprocal assimilation of theories and educational practices | Seeking a rapprochement between Vygotskians and Piagetians, the theories of Piaget and Vygotsky are compared, and educational extensions by their followers are examined. A paradox in Vygotsky’s theory is highlighted, where evidence is found both for claiming that Vygotsky was a behaviorist and that he was a constructivist. Similarities in the two theories are presented: social factors as having a central role in child development, the transformative nature of internalization, and the individual as what develops. Differences in the theories pertain to the nature of the stimulus, nature, and origin of psychological instruments, nature of self-regulation and novelty in development, direction of development, the concept of social development, and the role of language in development. Because practical applications of theories often clarify the theories, some educational extensions of Vygotsky’s theory are critiqued from a Piagetian constructivist perspective, and, in contrast, constructivist educational interpretations of Vygotsky’s work are noted. Aspects of Piaget’s theory emphasized by educators are presented, and educational practices inspired by this theory are outlined. A rapprochement is sought, with consideration of convergences in educational practices of followers of Piaget and Vygotsky, sources of difficulty for rapprochement, and changes necessary in educational theories of followers of both Piaget and Vygotsky. |
Assessing assumptions in kinematic hand models: A review | The incredible complexity of the human hand makes accurate modeling difficult. When implementing a kinematic hand model, many simplifications are made, either to provide simpler analytical solutions, to ease implementation, or to speed up computation for real-time applications. However, it is important to understand the trade-offs that certain simplifications entail - the kinematic structure chosen can have important implications for the final model accuracy. This paper provides a brief overview of the biomechanics of the human hand, followed by an in-depth review of kinematic models presented in the literature. This review discusses some simplifications that may often be inappropriate, such as assuming no metacarpal bone motion or assuming orthogonal, intersecting thumb axes. This discussion should help researchers select appropriate kinematic models for applications including anthropomorphic hand design, human-computer interaction, surgery, rehabilitation, and ergonomics. Some modeling issues remain unclear in the current literature. Future work could compare thumb MCP models and better investigate unactuated compliant degrees of freedom in the hand. |
A Method for Automated Prediction of Defect Severity Using Ontologies | Assessing severity of software defects is essential for prioritizing fixing activities as well as for assessing whether the quality level of a software system is good enough for release. In practice, filling out defect reports is done manually and developers routinely fill out default values for the severity levels. Moreover, external factors are a reason for assigning wrong severity levels to defects. The purpose of this research is to automate the prediction of defect severity. We have researched how this severity prediction can be achieved through incorporating knowledge of the software development process using ontologies. In addition, we also employ an IEEE standard to create a uniform framework for the attributes of the defects. The thesis presents MAPDESO – a Method for Automated Prediction of DEfect Severity using Ontologies. It was developed using industrial case studies during an internship at Logica Netherlands B. V. The method is based on classification rules that consider the software quality properties affected by a defect, together with the defect’s type, insertion activity and detection activity. The results from its validation and comparison with the Weka machine learning workbench indicate that MAPDESO is a good predictor for defect severity levels and it can be especially useful for medium-to-large projects with many defects. |
Forecasting emerging technologies: Use of bibliometrics and patent analysis | It is rather difficult to forecast emerging technologies as there is no historical data available. In such cases, the use of bibliometrics and patent analysis has provided useful data. This paper presents the forecasts for three emerging technology areas by integrating the use of bibliometrics and patent analysis into well-known technology forecasting tools such as scenario planning, growth curves and analogies. System dynamics is also used to be able to model the dynamic ecosystem of the technologies and their diffusion. The technologies being forecasted are fuel cell, food safety and optical storage technologies. Results from these three applications help us to validate the proposed methods as appropriate tools to forecast emerging technologies. |
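As an illustration of the growth-curve tool named in the record above, here is a minimal sketch fitting a logistic curve to a synthetic cumulative publication count (the paper's actual bibliometric and patent data are not reproduced):

```python
# Minimal sketch with assumed synthetic data: fit a logistic growth curve and
# extrapolate it, as a stand-in for growth-curve technology forecasting.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Cumulative count: ceiling L, growth rate k, inflection time t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

years = np.arange(1995, 2006)
publications = np.array([3, 5, 9, 15, 26, 41, 60, 78, 90, 96, 99])  # synthetic

params, _ = curve_fit(logistic, years, publications, p0=[100, 0.5, 2000])
L, k, t0 = params
print(f"estimated ceiling={L:.0f}, growth rate={k:.2f}, inflection year={t0:.1f}")
print("forecast for 2010:", round(logistic(2010, *params), 1))
```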
Simple versus complex forecasting: The evidence | This article introduces the Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods, including those in this special issue, found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers' plans; and (3) forecasters' clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters' procedures using the questionnaire at simple-forecasting.com. |
FAB-MAP: Probabilistic Localization and Mapping in the Space of Appearance | This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment—identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mobile robotics. |
Economic Perceptions and Executive Approval in Comparative Perspective | Controversy exists over whether people use retrospective or prospective economic perceptions when evaluating their political leadership. In this article, I argue that the structure of the political-economic system affects which type of economic perception people employ. Specifically, in established democracies with developed economies, people will employ prospective assessments. In contrast, in nations with less well-established democratic systems and less developed economies, people will employ retrospective reasoning. They do so because under such conditions uncertainty about the future is too high for them to make reliable prospective assessments. I test this hypothesis on aggregate survey data taken from 41 nations in 2002. Support for the hypothesis is found. The conclusion puts the findings into perspective and discusses directions for future research. |
Text-to-speech conversion technology | The historical and theoretical bases of contemporary high-performance text-to-speech (TTS) systems and their current design are discussed. The major elements of a TTS system are described, with particular reference to vocal tract models. The stages involved in the process of converting text into speech parameters are examined, covering text normalization, word pronunciation, prosodies, phonetic rules, voice tables, and hardware implementation. Examples are drawn mainly from Berkeley Speech Technologies' proprietary text-to-speech system, T-T-S, but other approaches are indicated briefly. |
An energy minimization reconstruction algorithm for multivalued discrete tomography | We propose a new algorithm for multivalued discrete tomography that reconstructs images from few projections by approximating the minimum of a suitably constructed energy function with a deterministic optimization method. We also compare the proposed algorithm to other reconstruction techniques on software phantom images, in order to prove its applicability. |
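A rough sketch of the kind of energy function such a reconstruction minimizes, with a toy 2x2 phantom and a deliberately simplistic deterministic coordinate-descent optimizer (an assumed illustration, not the authors' algorithm):

```python
# Assumed illustration: projection-fit term plus a term pulling pixel values
# toward an allowed discrete label set, minimized by coordinate descent.
import numpy as np

labels = np.array([0.0, 0.5, 1.0])           # allowed intensity values
A = np.array([[1, 1, 0, 0],                   # toy projection matrix:
              [0, 0, 1, 1],                   # 2x2 image flattened to 4 pixels,
              [1, 0, 1, 0],                   # row sums and column sums
              [0, 1, 0, 1]], dtype=float)
b = A @ np.array([1.0, 0.0, 0.5, 1.0])        # projections of a known phantom

def energy(x, lam=0.5):
    fit = np.sum((A @ x - b) ** 2)
    discreteness = np.sum(np.min((x[:, None] - labels[None, :]) ** 2, axis=1))
    return fit + lam * discreteness

# Deterministic coordinate descent over the discrete label set.
x = np.full(4, 0.5)
for _ in range(20):
    for i in range(4):
        x[i] = min(labels, key=lambda v: energy(np.where(np.arange(4) == i, v, x)))
print("reconstruction:", x, "energy:", energy(x))
```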
What Additional Factors Beyond State-of-the-Art Analytical Methods Are Needed for Optimal Generation and Interpretation of Biomonitoring Data? | BACKGROUND
The routine use of biomonitoring (i.e., measurement of environmental chemicals, their metabolites, or specific reaction products in human biological specimens) to assess internal exposure (i.e., body burden) has gained importance in exposure assessment.
OBJECTIVES
Selection and validation of biomarkers of exposure are critical factors in interpreting biomonitoring data. Moreover, the strong relation between quality of the analytical methods used for biomonitoring and quality of the resulting data is well understood. However, the relevance of collecting, storing, processing, and transporting the samples to the laboratory to the overall biomonitoring process has received limited attention, especially for organic chemicals.
DISCUSSION
We present examples to illustrate potential sources of unintended contamination of the biological specimen during collection or processing procedures. The examples also highlight the importance of ensuring that the biological specimen analyzed both represents the sample collected for biomonitoring purposes and reflects the exposure of interest.
CONCLUSIONS
Besides using high-quality analytical methods and good laboratory practices for biomonitoring, evaluation of the collection and handling of biological samples should be emphasized, because these procedures can affect the samples' integrity and representativeness. Biomonitoring programs would be strengthened with the inclusion of field blanks. |
DeepSecure: Scalable Provably-Secure Deep Learning | This paper presents DeepSecure, a scalable and provably secure Deep Learning (DL) framework built upon automated design, efficient logic synthesis, and optimization methodologies. DeepSecure targets scenarios in which neither of the involved parties (the cloud servers that hold the DL model parameters or the delegating clients who own the data) is willing to reveal their information. Our framework is the first to empower accurate and scalable DL analysis of data generated by distributed clients without sacrificing security to maintain efficiency. The secure DL computation in DeepSecure is performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized realizations of various components used in DL. Our optimized implementation achieves up to 58-fold higher throughput per sample compared with the best prior solution. In addition to the optimized GC realization, we introduce a set of novel low-overhead pre-processing techniques which further reduce the GC overall runtime in the context of DL. Our extensive evaluations demonstrate up to two orders-of-magnitude additional runtime improvement achieved as a result of our pre-processing methodology. |
A classical but new kinetic equation for hydride transfer reactions. | A classical but new kinetic equation to estimate activation energies of various hydride transfer reactions was developed according to transition state theory, using the Morse-type free energy curves of hydride donors releasing a hydride anion and of hydride acceptors capturing a hydride anion; with it, the activation energies of 187 typical hydride self-exchange reactions and more than thirty thousand hydride cross-transfer reactions in acetonitrile were reliably estimated in this work. Since the development of the kinetic equation rests only on the changes of the chemical bonds involved for the hydride transfer reactants, the kinetic equation should also be suitable for proton transfer reactions, hydrogen atom transfer reactions, and all other chemical reactions involving the breaking and formation of chemical bonds. One of the most important contributions of this work is to have achieved the unification of the kinetic equation and the thermodynamic equation for hydride transfer reactions. |
Experimental characterization of components for active soft orthotics | In this paper, we present the characterization of soft pneumatic actuators currently under development for active soft orthotics. The actuators are tested statically and dynamically to characterize force and displacement properties needed for use in system design. They are shown to demonstrate remarkably repeatable performance, and very predictable behavior as actuator size (initial length) is varied to adjust total actuation. The results of the characterization are then used to inform design rules for selecting the size and number of actuators for a soft leg orthotic that can produce the force/moment/velocity properties required for a target wearer. The device was tested on a silicone model with inner frame and articulated ankle. Additionally, a portable pneumatic system was developed and characterized, that can actuate the pneumatic actuator over 800 times before refilling the gas supply. |
Degradation of cardiac troponin I: implication for reliable immunodetection. | We have analyzed by different immunological methods the proteolytic degradation of cardiac troponin I (cTnI) in human necrotic tissue and in serum. cTnI is susceptible to proteolysis, and its degradation leads to the appearance of a wide diversity of proteolytic peptides with different stabilities. N- and C-terminal regions were rapidly cleaved by proteases, whereas the fragment located between residues 30 and 110 demonstrated substantially higher stability, possibly because of its protection by TnC. We conclude that antibodies selected for cTnI sandwich immunoassays should preferentially recognize epitopes located in the region resistant to proteolysis. Such an approach can be helpful for a much needed standardization of cTnI immunoassays and can improve the sensitivity and reproducibility of cTnI assays. |
Epidermoid cyst of the clitoris: a case report. | We report a rare case of spontaneous clitoral epidermal cyst without any declared previous female genital mutilation. The cyst was successfully resected surgically, with good local and cosmetic results. |
Survey of Automated Vulnerability Detection and Exploit Generation Techniques in Cyber Reasoning Systems | Software is everywhere, from mission-critical systems such as industrial power stations and pacemakers to everyday household appliances. This growing dependence on technology and the increasing complexity of software have serious security implications, as it means we are potentially surrounded by software that contains exploitable vulnerabilities. These challenges have made binary analysis an important area of research in computer science and have emphasized the need for building automated analysis systems that can operate at scale, with speed and efficacy, all while performing with the skill of a human expert. Though great progress has been made in this area of research, there remain limitations and open challenges to be addressed. Recognizing this need, DARPA sponsored the Cyber Grand Challenge (CGC), a competition to showcase the current state of the art in systems that perform automated vulnerability detection, exploit generation, and software patching. This paper is a survey of the vulnerability detection and exploit generation techniques, underlying technologies, and related works of two of the winning systems, Mayhem and Mechanical Phish. Keywords—Cyber reasoning systems, automated binary analysis, automated exploit generation, dynamic symbolic execution, fuzzing |
Prospective evaluation of dietary treatment in childhood constipation: high dietary fiber and wheat bran intake are associated with constipation amelioration. | OBJECTIVES
The aim of the study was to evaluate, over 24 months, the intake of dietary fiber (DF) and the bowel habit (BH) of constipated children advised a DF-rich diet containing wheat bran.
PATIENTS AND METHODS
BH and dietary data of 28 children with functional constipation defined by the "Boston criteria" were obtained at visit 1 (V1, n = 28) and at 4 follow-up visits (V2-V5, n = 80). At each visit the BH was rated BAD (worse/unaltered; improved but still complications) or RECOVERY (REC) (improved, no complications; asymptomatic), and a food intake questionnaire was applied. DF intake was calculated according to age (year) + 5 to 10 g/day and bran intake according to international tables. Nonparametric statistics were used.
RESULTS
Median age (range) was 7.25 years (0.25-15.6 years); 21 children underwent bowel washout (most before V1/V2), and 14 had the last visit at V3/V4. DF intake, bran intake, and the BH rate significantly increased at V2 and remained higher than at V1 through V2 to V5. At V1, median DF intake was 29.9% below the minimum recommended and at the last visit 49.9% above it. Twenty-four children accepted bran at 60 visits, at which median bran intake was 20 g/day and median proportion of DF due to bran 26.9%. Children had significantly higher DF and higher bran intake at V2 to V5 at which they had REC than at those at which they presented BAD BH. DF intake > age +10 g/day was associated with bran acceptance and REC. At the last visit 21 children presented REC (75%); 20 of them were asymptomatic and 18 were off washout/laxatives.
CONCLUSIONS
High DF and bran intake are feasible in constipated children and contribute to amelioration of constipation. |
Evaluation of the bioequivalence of tablets and capsules containing the novel anticancer agent R115777 (Zarnestra) in patients with advanced solid tumors | R115777 (Zarnestra) is a novel anticancer agent, currently undergoing phase III clinical testing. An open, cross-over trial was performed in 24 patients with solid tumors to compare the bioavailability of a new tablet formulation with the standard capsule formulation. Both dosage forms were administered once daily in doses of 300 or 400 mg. Patients received R115777 as a capsule on day 1 and as a tablet on day 2, or vice versa. Blood samples were drawn up to 24 hours after drug intake and R115777 levels were measured using a validated high performance liquid chromatography (HPLC) method. The following pharmacokinetic parameters were determined and compared for the two formulations: time to maximal plasma concentration (Tmax), half-life (t1/2), maximal plasma concentration (Cmax) and area under the curve at twenty-four hours (AUC24h). For the latter two parameters, 90% classical confidence intervals of the tablet/capsule ratio were calculated after a log-transformation, using an Analysis of Variance (ANOVA). For t1/2 and Tmax, no statistically significant differences were found between tablet and capsule. The point estimates of the ratios of the log-normalized Cmax and AUC24h were 0.94 and 0.92, respectively, and the 90% confidence intervals were 0.81–1.09 and 0.83–1.03, which is within the critical range for bioequivalence of 0.80–1.25. In conclusion, the established pharmacokinetic parameters demonstrate that the capsule and tablet formulations of R115777 are interchangeable. |
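A hedged sketch of the bioequivalence calculation summarized in the record above: a 90% confidence interval for the tablet/capsule ratio of a log-transformed PK parameter compared against the 0.80-1.25 range. Simulated AUC values and a simple paired analysis stand in for the study's crossover ANOVA:

```python
# Illustrative sketch with hypothetical data, not the trial's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
auc_capsule = rng.lognormal(mean=7.0, sigma=0.3, size=24)
auc_tablet = auc_capsule * rng.lognormal(mean=-0.05, sigma=0.15, size=24)

log_diff = np.log(auc_tablet) - np.log(auc_capsule)
n = len(log_diff)
se = log_diff.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% CI

ratio = np.exp(log_diff.mean())
ci = np.exp(log_diff.mean() - t_crit * se), np.exp(log_diff.mean() + t_crit * se)
print(f"ratio={ratio:.2f}, 90% CI=({ci[0]:.2f}, {ci[1]:.2f})")
print("bioequivalent:", 0.80 <= ci[0] and ci[1] <= 1.25)
```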
Correlation Filters for Object Alignment | Alignment of 3D objects from 2D images is one of the most important and well studied problems in computer vision. A typical object alignment system consists of a landmark appearance model which is used to obtain an initial shape and a shape model which refines this initial shape by correcting the initialization errors. Since errors in landmark initialization from the appearance model propagate through the shape model, it is critical to have a robust landmark appearance model. While there has been much progress in designing sophisticated and robust shape models, there has been relatively less progress in designing robust landmark detection models. In this paper we present an efficient and robust landmark detection model which is designed specifically to minimize localization errors thereby leading to state-of-the-art object alignment performance. We demonstrate the efficacy and speed of the proposed approach on the challenging task of multi-view car alignment. |
Automatic Identification of Retinal Arteries and Veins in Fundus Images using Local Binary Patterns | Artery and vein (AV) classification of retinal images is key to necessary tasks such as the automated measurement of the arteriolar-to-venular diameter ratio (AVR). This paper comprehensively reviews the state of the art in AV classification methods. To improve on previous methods, a new Local Binary Pattern (LBP)-based method is proposed. Besides its simplicity, LBP is robust against low-contrast and low-quality fundus images, and it helps the process by including additional AV texture and shape information. Experimental results compare the performance of the new method with the state of the art, as well as with methods using different feature extraction and classification schemes. |
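A minimal sketch of the 8-neighbour Local Binary Pattern descriptor named in the record above, computed on a toy grayscale patch (the full vessel segmentation and AV classification pipeline is not reproduced):

```python
# Minimal LBP sketch on a toy patch; not the paper's full pipeline.
import numpy as np

def lbp_image(img):
    """Return the 8-bit LBP code of every interior pixel of a 2-D grayscale array."""
    img = np.asarray(img, dtype=float)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.uint8) << bit)
    return codes

patch = np.array([[52, 60, 61], [55, 58, 70], [40, 30, 66]])
print(lbp_image(patch))  # one LBP code for the 3x3 patch's centre pixel
```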
A neo-utilitarian theory of class? | Aage Sorensen aims to develop a structural theory of inequality that is equal in format and comprehensiveness to Marx's theory of class yet avoids the flaws of that theory, which derive from its grounding in the premises of classical economics. Applying neo-utilitarian theory to class analysis, Sorensen argues that access to enduring rents can inform a new conceptualization of "class as exploitation" and thereby put the sociological enterprise of class analysis on a sounder basis. The need for such conceptual reformulation grows out of the marginalist turn in economic theory and the related demise of the labor theory of value as well as, Sorensen suggests, the failure of subsequent class analysts to invent alternative theories of structural inequality in which exploitation generates antagonistic interests. Although many scholars, Marxist and non-Marxist, have proposed alternative class schemes, nearly all of these are based on an understanding of "class as life conditions." According to Sorensen, class as life conditions, defined by the total wealth controlled by similarly situated actors, does not necessarily create antagonistic interests and thus provides a poor foundation for understanding how class position generates mobilization and conflict. We agree that the labor theory of value of classical economics is indefensible, leaving Marxist theory without its primary basis for identifying class exploitation. |
The software and information services sector in Argentina: the pros and cons of an inward-oriented development strategy | This paper analyzes the evolution, present situation, and prospects for the Argentine software and information services (SIS) sector. Argentina has some advantages to exploit in order to make significant inroads in this sector. It has a relative abundance of well-educated people, a sizeable domestic market, and a cultural influence in Spanish-speaking Latin America. The currency devaluation of 2002 dramatically reduced costs measured in U.S. dollars. Nonetheless, SIS firms in Argentina have focused primarily on the domestic accountancy and management market, where they enjoy advantages derived from the specific requirements of the domestic regulations and their knowledge of the business culture and the needs of their local clients. This concentration in the domestic market has caused SIS firms to pay insufficient attention to some key issues for competitiveness in this sector. Hence, it is no surprise to find that they lack marketing and management capabilities and that the diffusion of quality certifications is almost null. The domestic environment also poses some obstacles, since firms often have difficulties accessing investment and working capital. Business networking mechanisms are weak, both among SIS firms as well as between them and their customers, R&D institutions, etc. Increasing the competitiveness of this sector requires intelligent public policies and actions aimed at improving the SIS firms' capabilities and endowments. |
Mutations affecting the secretory COPII coat component SEC23B cause congenital dyserythropoietic anemia type II | Congenital dyserythropoietic anemias (CDAs) are phenotypically and genotypically heterogeneous diseases. CDA type II (CDAII) is the most frequent CDA. It is characterized by ineffective erythropoiesis and by the presence of bi- and multinucleated erythroblasts in bone marrow, with nuclei of equal size and DNA content, suggesting a cytokinesis disturbance. Other features of the peripheral red blood cells are protein and lipid dysglycosylation and endoplasmic reticulum double-membrane remnants. Development of other hematopoietic lineages is normal. Individuals with CDAII show progressive splenomegaly, gallstones and iron overload potentially with liver cirrhosis or cardiac failure. Here we show that the gene encoding the secretory COPII component SEC23B is mutated in CDAII. Short hairpin RNA (shRNA)-mediated suppression of SEC23B expression recapitulates the cytokinesis defect. Knockdown of zebrafish sec23b also leads to aberrant erythrocyte development. Our results provide in vivo evidence for SEC23B selectivity in erythroid differentiation and show that SEC23A and SEC23B, although highly related paralogous secretory COPII components, are nonredundant in erythrocyte maturation. |
Artificial Intelligence and Cognitive Psychology: Applications and Models | How can we connect artificial intelligence with cognitive psychology? What kinds of models and approaches have been developed in these scientific fields? The main aim of this paper is to provide a broad summary and analysis of the relationships between psychology and artificial intelligence. I present state-of-the-art applications and human-like thinking and acting systems (Human-Computer Interface, Modelling Mental Processes, Data Mining Applications); these applications can be divided into several groups and aspects. The main goal of artificial intelligence was, and is, to develop human-level intelligence, but the technology transfer turned out to be much more comprehensive: these systems are used widely, and the research is blooming. The first part of the paper introduces the development, basic knowledge, and general models of cognitive psychology (including its relevant connecting points to artificial intelligence) and describes the information-processing model of the human brain. The second part provides an analysis of human-computer interaction: its tasks, application fields, the psychological models used for HCI, and the barriers of the field. In order to extend or overcome these barriers, the science has to face several scientific, pragmatic, and technical challenges (such as the problem of complexity, disturbing coefficients, etc.). Another important area demonstrated in this paper is mental modelling, used to prevent, prognose, manipulate, or support human mental processes such as learning. For prognosis (for example, prognosis of children affected by mental illnesses according to their environments, etc.), data mining, knowledge discovery, or expert systems are applied. The paper gives an outline of the coefficients used in such systems and analyses the missing attributes. The last part deals with expert systems used to help people with autism and their relatives, and with life simulation (an applied mental model) in virtual reality/virtual environments. |
Machine Learning DDoS Detection for Consumer Internet of Things Devices | An increasing number of Internet of Things (IoT) devices are connecting to the Internet, yet many of these devices are fundamentally insecure, exposing the Internet to a variety of attacks. Botnets such as Mirai have used insecure consumer IoT devices to conduct distributed denial of service (DDoS) attacks on critical Internet infrastructure. This motivates the development of new techniques to automatically detect consumer IoT attack traffic. In this paper, we demonstrate that using IoT-specific network behaviors (e.g., limited number of endpoints and regular time intervals between packets) to inform feature selection can result in high accuracy DDoS detection in IoT network traffic with a variety of machine learning algorithms, including neural networks. These results indicate that home gateway routers or other network middleboxes could automatically detect local IoT device sources of DDoS attacks using low-cost machine learning algorithms and traffic data that is flow-based and protocol-agnostic. |
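An illustrative sketch of the idea described in the record above, with simulated flow-level features (the specific features shown, such as packet size, inter-arrival time, rate, and endpoint count, are assumptions rather than the paper's exact feature set) feeding a standard classifier:

```python
# Illustrative sketch with simulated flows, not the paper's dataset or model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-flow features: mean packet size, mean inter-arrival time,
# packets per second, number of distinct endpoints contacted.
benign = np.column_stack([rng.normal(500, 150, n), rng.normal(2.0, 0.5, n),
                          rng.normal(5, 2, n), rng.integers(1, 5, n)])
attack = np.column_stack([rng.normal(100, 30, n), rng.normal(0.01, 0.005, n),
                          rng.normal(400, 100, n), rng.integers(1, 3, n)])
X = np.vstack([benign, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```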
Answer Set Programming for Procedural Content Generation: A Design Space Approach | Procedural content generators for games produce artifacts from a latent design space. This space is often only implicitly defined, an emergent result of the procedures used in the generator. In this paper, we outline an approach to content generation that centers on explicit description of the design space, using domain-independent procedures to produce artifacts from the described space. By concisely capturing a design space as an answer set program, we can rapidly define and expressively sculpt new generators for a variety of game content domains. We walk through the reimplementation of a reference evolutionary content generator in a tutorial example, and review existing applications of answer set programming to generative-content design problems in and outside of a game context. |
Insufficient Reason and Entropy in Quantum Theory | The objective of the consistent-amplitude approach to quantum theory has been to justify the mathematical formalism on the basis of three main assumptions: the first defines the subject matter, the second introduces amplitudes as the tools for quantitative reasoning, and the third is an interpretative rule that provides the link to the prediction of experimental outcomes. In this work we introduce a natural and compelling fourth assumption: if there is no reason to prefer one region of the configuration space over another, then they should be “weighted” equally. This is the last ingredient necessary to introduce a unique inner product in the linear space of wave functions. Thus, a form of the principle of insufficient reason is implicit in the Hilbert inner product. Armed with the inner product we obtain two results. First, we elaborate on an earlier proof of the Born probability rule. The implicit appeal to insufficient reason shows that quantum probabilities are not more objective than classical probabilities. Previously we had argued that the consistent manipulation of amplitudes leads to a linear time evolution; our second result is that time evolution must also be unitary. The argument is straightforward and hinges on the conservation of entropy. The only subtlety consists of defining the correct entropy; it is the array entropy, not von Neumann's. After unitary evolution has been established we proceed to introduce the useful notion of observables and we explore how von Neumann's entropy can be linked to Shannon's information theory. Finally, we discuss how various connections among the postulates of quantum theory are made explicit within this approach. |
Neonatal neurobehavioral abnormalities and MRI brain injury in encephalopathic newborns treated with hypothermia. | BACKGROUND
Neonatal Encephalopathy (NE) is a prominent cause of infant mortality and neurodevelopmental disability. Hypothermia is an effective neuroprotective therapy for newborns with encephalopathy. Post-hypothermia functional-anatomical correlation between neonatal neurobehavioral abnormalities and brain injury findings on MRI in encephalopathic newborns has not been previously described.
AIM
To evaluate the relationship between neonatal neurobehavioral abnormalities and brain injury on magnetic resonance imaging (MRI) in encephalopathic newborns treated with therapeutic hypothermia.
STUDY DESIGN
Neonates with hypoxic ischemic encephalopathy (HIE) referred for therapeutic hypothermia were prospectively enrolled in this observational study. Neurobehavioral functioning was assessed with the NICU network neurobehavioral scale (NNNS) performed at target age 14 days. Brain injury was assessed by MRI at target age 7-10 days. NNNS scores were compared between infants with and without severe MRI injury.
SUBJECTS & OUTCOME MEASURES
Sixty-eight term newborns (62% males) with moderate to severe encephalopathy underwent MRI at median 8 days (range 5-16) and NNNS at median 12 days of life (range 5-20). Fifteen (22%) had severe injury on MRI.
RESULTS
Overall Total Motor Abnormality Score and individual summary scores for Non-optimal Reflexes and Asymmetry were higher, while Total NNNS Z-score across cognitive/behavioral domains was lower (reflecting poorer performance) in infants with severe MRI injury compared to those without (p < 0.05).
CONCLUSIONS
Neonatal neurobehavioral abnormalities identified by the NNNS are associated with MRI brain injury in encephalopathic newborns post-hypothermia. The NNNS can provide an early functional assessment of structural brain injury in newborns, which may guide rehabilitative therapies in infants after perinatal brain injury. |
Towards Controlled Transformation of Sentiment in Sentences | An obstacle to the development of many natural language processing products is the vast amount of training examples necessary to get satisfactory results. The generation of these examples is often a tedious and time-consuming task. This paper proposes a method to transform the sentiment of sentences in order to limit the work necessary to generate more training data. This means that one sentence can be transformed into a sentence of the opposite sentiment, which should reduce by half the work required in the generation of text. The proposed pipeline consists of a sentiment classifier with an attention mechanism to highlight the short phrases that determine the sentiment of a sentence. Then, these phrases are changed to phrases of the opposite sentiment using a baseline model and an autoencoder approach. Experiments are run on both the separate parts of the pipeline as well as on the end-to-end model. The sentiment classifier is tested on its accuracy and is found to perform adequately. The autoencoder is tested on how well it is able to change the sentiment of an encoded phrase, and it was found that such a task is possible. We use human evaluation to judge the performance of the full (end-to-end) pipeline, and this reveals that a model using word vectors outperforms the encoder model. Numerical evaluation shows that a success rate of 54.7% is achieved on the sentiment change. |
Blog effects on brand attitude and purchase intention | This research investigates the effects of blog marketing on brand attitude and purchase intention. The elements of blog marketing are identified as community identification, interpersonal trust, message exchange, and two-way communication. The relationships among the variables are depicted in the fundamental research framework provided by this study. Data were collected via an online questionnaire; 727 usable responses were obtained and analyzed using AMOS 5.0. The empirical findings show that the blog marketing elements positively impact brand attitude, with the exception of community identification. Further, the analysis also verifies the moderating effects on the relationship between blog marketing elements and brand attitude. |
Dermal damage promoted by repeated low-level UV-A1 exposure despite tanning response in human skin. | IMPORTANCE
Solar UV irradiation causes photoaging, characterized by fragmentation and reduced production of type I collagen fibrils that provide strength to skin. Exposure to UV-B irradiation (280-320 nm) causes these changes by inducing matrix metalloproteinase 1 and suppressing type I collagen synthesis. The role of UV-A irradiation (320-400 nm) in promoting similar molecular alterations is less clear yet important to consider because it is 10 to 100 times more abundant in natural sunlight than UV-B irradiation and penetrates deeper into the dermis than UV-B irradiation. Most (approximately 75%) of solar UV-A irradiation is composed of UV-A1 irradiation (340-400 nm), which is also the primary component of tanning beds.
OBJECTIVE
To evaluate the effects of low levels of UV-A1 irradiation, as might be encountered in daily life, on expression of matrix metalloproteinase 1 and type I procollagen (the precursor of type I collagen).
DESIGN, SETTING, AND PARTICIPANTS
In vivo biochemical analyses were conducted after UV-A1 irradiation of normal human skin at an academic referral center. Participants included 22 healthy individuals without skin disease.
MAIN OUTCOMES AND MEASURES
Skin pigmentation was measured by a color meter (chromometer) under the L* variable (luminescence), which ranges from 0 (black) to 100 (white). Gene expression in skin samples was assessed by real-time polymerase chain reaction.
RESULTS
Lightly pigmented human skin (L* >65) was exposed up to 4 times (1 exposure/d) to UV-A1 irradiation at a low dose (20 J/cm2), mimicking UV-A levels from strong sun exposure lasting approximately 2 hours. A single exposure to low-dose UV-A1 irradiation darkened skin slightly and did not alter matrix metalloproteinase 1 or type I procollagen gene expression. With repeated low-dose UV-A1 irradiation, skin darkened incrementally with each exposure. Despite this darkening, 2 or more exposures to low-dose UV-A1 irradiation significantly induced matrix metalloproteinase 1 gene expression, which increased progressively with successive exposures. Repeated UV-A1 exposures did not suppress type I procollagen expression.
CONCLUSIONS AND RELEVANCE
A limited number of low-dose UV-A1 exposures, as commonly experienced in daily life, potentially promotes photoaging by affecting breakdown, rather than synthesis, of collagen. Progressive skin darkening in response to repeated low-dose UV-A1 exposures in lightly pigmented individuals does not prevent UV-A1-induced collagenolytic changes. Therefore, for optimal protection against skin damage, sunscreen formulations should filter all UV wavelengths, including UV-A1 irradiation. |
Reduced prefrontal and increased subcortical brain functioning assessed using positron emission tomography in predatory and affective murderers. | There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation. |
Applying OLSR routing in FANETs | MANETs have been gaining exposure due to their versatile applications over the last few years. New networking paradigms like VANETs and FANETs have evolved by using the concept of MANETs. FANETs provide a distinctive approach to tackling situations like emergencies, natural disasters, and military battlefields using UAVs. Due to the high mobility of FANET nodes and rapid topology changes, applying routing in FANETs is a big challenge for researchers. Mobility models also play a very important role in optimizing the performance of routing protocols in FANETs. The research presented aims to apply the OLSR routing protocol in FANETs and to study OLSR under different mobility models in order to optimize its performance in FANETs. |
Least-Squares Fitting of Circles and Ellipses | Fitting circles and ellipses to given points in the plane is a problem that arises in many application areas, e.g. computer graphics [1], coordinate metrology [2], petroleum engineering [11], statistics [7]. In the past, algorithms have been given which fit circles and ellipses in some least squares sense without minimizing the geometric distance to the given points [1], [6]. In this paper we present several algorithms which compute the ellipse for which the sum of the squares of the distances to the given points is minimal. These algorithms are compared with classical simple and iterative methods. Circles and ellipses may be represented algebraically, i.e., by an equation of the form F(x) = 0. If a point is on the curve then its coordinates x are a zero of the function F. Alternatively, curves may be represented in parametric form, which is well suited for minimizing the sum of the squares of the distances. 1 Preliminaries and Introduction. Ellipses for which the sum of the squares of the distances to the given points is minimal will be referred to as "best fit" or "geometric fit", and the algorithms will be called "geometric". Determining the parameters of the algebraic equation F(x) = 0 in the least squares sense will be denoted by "algebraic fit" and the algorithms will be called "algebraic". We will use the well known Gauss-Newton method to solve the nonlinear least squares problem (cf. [15]). Let u = (u_1, ..., u_n)^T be a vector of unknowns and consider the nonlinear system of m equations f(u) = 0. If m > n, then we want to minimize |
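A small sketch of the "algebraic fit" the record above contrasts with the geometric (Gauss-Newton) fit: fitting a circle by linear least squares on the equation x^2 + y^2 + a*x + b*y + c = 0 and recovering centre and radius (toy data; not the paper's algorithms):

```python
# Algebraic circle fit sketch; a geometric fit would refine this estimate.
import numpy as np

def algebraic_circle_fit(x, y):
    """Return (centre_x, centre_y, radius) minimizing the algebraic residual."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, radius

# Noisy points on a circle of radius 2 centred at (1, -1).
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
rng = np.random.default_rng(3)
x = 1 + 2 * np.cos(theta) + 0.05 * rng.standard_normal(30)
y = -1 + 2 * np.sin(theta) + 0.05 * rng.standard_normal(30)
print(algebraic_circle_fit(x, y))
```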
Wideband Pattern-Reconfigurable Antenna With Switchable Broadside and Conical Beams | A wideband, pattern-reconfigurable antenna is reported that is, for example, a good candidate for ceiling-mounted indoor wireless systems. Switchable linearly polarized broadside and conical radiation patterns are achieved by systematically integrating a wideband low-profile monopolar patch antenna with a wideband L-probe fed patch antenna. The monopolar patch acts as the ground for the L-probe fed patch, which is fed with a coaxial cable that replaces one shorting via of the monopolar patch to avoid deterioration of the conical-beam pattern. A simple switching feed network facilitates the pattern reconfigurability. A prototype was fabricated and tested. The measured results confirm the predicted wideband radiation performance. The operational impedance bandwidth, i.e., |S11| ≤ −10 dB, is obtained as the overlap of the bands associated with both pattern modalities. It is wide, from 2.25 to 2.85 GHz (23.5%). Switchable broadside and conical radiation patterns are observed across this entire operating bandwidth. The peak measured gain was 8.2 dBi for the broadside mode and 6.9 dBi for the conical mode. The overall profile of this antenna is 0.13λ0 at its lowest operating frequency. |
Social bonds and internet pornographic exposure among adolescents. | Concern has grown regarding possible harm to the social and psychological development of children and adolescents exposed to Internet pornography. Parents, academics and researchers have documented pornography from the supply side, assuming that its availability explains consumption satisfactorily. The current paper explored the user's dimension, probing whether pornography consumers differed from other Internet users, as well as the social characteristics of adolescent frequent pornography consumers. Data from a 2004 survey of a national representative sample of the adolescent population in Israel were used (n=998). Adolescent frequent users of the Internet for pornography were found to differ in many social characteristics from the group that used the Internet for information, social communication and entertainment. Weak ties to mainstream social institutions were characteristic of the former group but not of the latter. X-rated material consumers proved to be a distinct sub-group at risk of deviant behaviour. |
Irinotecan plus raltitrexed as first-line treatment in advanced colorectal cancer: a phase II study | To evaluate the efficacy and toxicity of irinotecan (CPT-11) in combination with raltitrexed as first-line treatment of advanced colorectal cancer (CRC). A total of 91 previously untreated patients with advanced CRC and measurable disease were enrolled in this phase II study. The median age was 62 years (range 31–77); male/female 54/37; ECOG performance status was 0 in 50 patients (55%), one in 39 (43%) and two in two (2%). Treatment consisted of CPT-11 350 mg m−2 in a 30-min intravenous infusion on day 1, followed after 30 min by a 15-min infusion of raltitrexed 3 mg m−2. Measurements of efficacy included the following: response rate, time to disease progression and overall survival. Of the 83 patients evaluable for objective response, there were five complete responses (6%) and 23 partial responses (28%), for an overall response rate of 34% (95% CI: 25.9–46.5%). In all, 36 patients (43%) had stable disease, whereas 19 (23%) had a progression. The median time to progression was 11.1 months and the median overall survival was 15.6 months. A total of 487 cycles of chemotherapy were delivered with a median of five per patient. Grade 3–4 WHO toxicities were as follows: diarrhoea in 13 patients (15%), nausea/vomiting in four (4%), transaminase increase in six (7%), stomatitis in two (2%), febrile neutropenia in three (3%), anaemia in five (6%) and asthenia in three (3%). The combination CPT-11–raltitrexed is an effective, well-tolerated and convenient regimen as front-line treatment of advanced CRC. |
WORKING PAPER NO. 10-13: WHAT "TRIGGERS" MORTGAGE DEFAULT? | This paper assesses the relative importance of two key drivers of mortgage default: negative equity and illiquidity. To do so, we combine loan-level mortgage data with detailed credit bureau information about the borrower's broader balance sheet. This gives us a direct way to measure illiquid borrowers: those with high credit card utilization rates. We find that both negative equity and illiquidity are significantly associated with mortgage default, with comparably sized marginal effects. Moreover, these two factors interact with each other: the effect of illiquidity on default generally increases with high combined loan-to-value ratios (CLTV), though it is significant even for low CLTV. County-level unemployment shocks are also associated with higher default risk (though less so than high utilization) and strongly interact with CLTV. In addition, having a second mortgage implies significantly higher default risk, particularly for borrowers who have a first-mortgage LTV approaching 100 percent.
The "option model" of mortgage default is traditionally interpreted as implying that borrowers should default if and only if they have negative equity in their home. However, numerous studies have found that many borrowers with negative equity do not default; and, conversely, default is often associated with "shocks," such as unemployment. One standard way of reconciling the model and the data is to introduce transaction costs of defaulting, such as moving costs, reputation costs (e.g., lost access to credit), and stigma. But such costs can be difficult to identify. Moreover, properly understood, the option model does not imply that negative equity alone is sufficient for default. By defaulting today, one gives up the option to default in the future; as a result, even with negative equity, one might prefer to wait and see if house prices recover (James Kau et al., 1994). This paper focuses on another (not mutually exclusive) explanation. The cost of continuing to pay one's mortgage also depends on one's idiosyncratic discount factor and thus on one's liquidity position. For someone who is very illiquid, it can be costly to wait for house prices to recover. Indeed, in the extreme, he might literally not be able to find the cash to make the next mortgage payment. See Peter Elmer and Steven Seelig (1999), Kristopher Gerardi et al. (2007), and Patrick Bajari et al. (2008). This paper assesses the relative importance of these two factors for mortgage default: negative equity and illiquidity. To do so, we combine loan-level mortgage data with detailed credit bureau information about the borrower's broader balance sheet. This gives us a direct way to identify illiquidity, using credit-card utilization rates. Sumit Agarwal et al. (2007) and David B. Gross and Nicholas S. Souleles (2002a) have shown that households who have "maxed-out" their credit cards display high propensities to spend in response to increases in income, consistent with their being liquidity constrained. Also, while illiquidity is conceptually distinct from shocks, high credit-card utilization may reflect prior shocks (e.g., James X. Sullivan, 2008), which otherwise might be hard to observe directly. Another benefit of using credit bureau data is that it allows us to measure total housing debt and thus the borrower's combined loan-to-value ratio (CLTV). By contrast, the mortgage data sets typically used in the literature (e.g., from Loan Performance and Lender Processing Services [LPS]) have spotty information on second liens, at best, and so mis-measure the contribution of negative equity to default. This effect can be economically significant. For example, for the 26 percent of borrowers in our sample with a second mortgage, using only the first-mortgage loan-to-value ratio (LTV) underestimates their total CLTV by 15 percentage points. Disentangling these two determinants of mortgage default (negative equity and illiquidity) is also important for the policy debate over loan modifications. If negative equity dominates, one might tend to focus more on reducing principal, ceteris paribus. By contrast, if illiquidity is also important, temporary reductions in payments may also be useful.
I. Data. Our mortgage data are from the LPS dataset. We focus on first mortgages originated in 2005 and 2006, since these cohorts are the most likely to have negative equity during our sample period. The LPS data cover about 70 percent of all mortgage originations in these years. For brevity, we limit our sample to fixed-rate mortgages (FRMs). We further restrict attention to owner-occupied houses and exclude multifamily properties. We consider the three most common maturities: 15, 30, and 40 years. This sample represents about three-quarters of all FRMs in the LPS data. We follow our borrowers through April 2009. Our credit bureau data are from Equifax, one of the three major credit reporting agencies in the United States. The dataset contains a random subsample of credit users. The data include comprehensive summaries of key characteristics of the different types of debt held by individual borrowers (e.g., total credit-card balances and limits). In addition, the dataset includes loan-level information on these borrowers' mortgage trades. We linked this dataset to the LPS dataset through the characteristics of the first mortgages, in particular, open date, initial balance and ZIP code. (To be conservative, we used only unique matches.) We matched about one-third of the potential overlap between the two datasets. Our final sample consists of approximately 364,000 FRMs. We also added MSA-level house price indexes from the Federal Housing Finance Agency and county-level unemployment rates from the Bureau of Labor Statistics. Since the house-price index and bureau data are available quarterly, we follow the mortgages quarterly.
Footnotes: (1) For example, Chester Foster and Robert Van Order (1984), and Neil Bhutta et al. (2010). (2) Some papers have noted that it might be the "double-trigger" combination of negative equity and shocks that leads to default. See, for example, the discussion in Kerry Vandell (1995). But few papers have actually allowed for an interaction between these variables in the estimation, as we do below; one exception is Christopher Foote et al. (2009). (3) Ethan Cohen-Cole and Jonathan Morse (2009) examine the choice between mortgage versus credit-card default. (4) Formerly known as McDash, this dataset has been used extensively to study mortgage default. See, for example, Ronel Elul (2009) and the references therein. |
Collaboration across private and public sector primary health care services: benefits, costs and policy implications. | Ongoing care for chronic conditions is best provided by interprofessional teams. There are challenges in achieving this where teams cross organisational boundaries. This article explores the influence of organisational factors on collaboration between private and public sector primary and community health services involved in diabetes care. It involved a case study using qualitative methods. Forty-five participants from 20 organisations were purposively recruited. Data were collected through semi-structured interviews and from content analysis of documents. Thematic analysis was used, employing a two-level coding system and cross-case comparisons. The patterns of collaborative patient care were influenced by a combination of factors relating to the benefits and costs of collaboration and the influence of support mechanisms. Benefits lay in achieving common or complementary health or organisational goals. Costs were incurred in bridging differences in organisational size, structure, complexity and culture. Collaboration was easier between private sector organisations than between private and public sectors. Financial incentives were not sufficient to overcome organisational barriers. To achieve more coordinated primary and community health care, structural changes are also needed to better align funding mechanisms, priorities and accountabilities of the different organisations. |
Effect of cigarette smoking on gastric emptying of solids in Japanese smokers: a crossover study using the 13C-octanoic acid breath test | Cigarette smoking is associated with an increased risk of peptic ulcer and gastroesophageal reflux disease. Gastric emptying disorders may play a role in the development of these upper gastrointestinal diseases. Thus, studies examining a link between smoking and gastric emptying disorders have clinical relevance. This study was conducted to investigate the effect of smoking on gastric emptying of solids in Japanese smokers. The 13C-octanoic acid breath test was performed in eight male habitual smokers on two randomized occasions (either sham smoking or actively smoking). The time vs 13CO2 excretion rate curve was mathematically fitted to the conventional formula y(t) = m·k·β·e^(−k·t)·(1 − e^(−k·t))^(β−1), and the parameters k and β were determined: under the crossover protocol, a larger (smaller) β indicates slower (faster) emptying in the early phase, and a larger (smaller) k indicates faster (slower) emptying in the later phase. The half 13CO2 excretion time, t1/2b = −ln(1 − 2^(−1/β))/k, and the time of maximal 13CO2 excretion rate, tmax = ln(β)/k, were also calculated. Between the two occasions, k, β, t1/2b, and tmax were compared by the Wilcoxon signed-rank test. After smoking, k was significantly increased. No significant differences were found in β, t1/2b, and tmax between the two occasions. The increase in k suggests the acceleration of gastric emptying in the later phase. For the first time, this study has revealed that acute smoking speeds the gastric emptying of solids in Japanese habitual smokers. |
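As a quick numerical check of the indices defined in the abstract, the snippet below evaluates t1/2b = −ln(1 − 2^(−1/β))/k and tmax = ln(β)/k, together with the fitted excretion-rate curve, for made-up parameter values; the numbers are illustrative only and are not taken from the study.

```python
# Worked illustration of the breath-test indices using the formulas stated above.
# k, beta, and m are hypothetical fit parameters chosen only for demonstration.
import numpy as np

k, beta, m = 0.025, 1.8, 60.0          # k in 1/min; m is the total recovered 13C dose

t_half = -np.log(1.0 - 2.0 ** (-1.0 / beta)) / k    # half 13CO2 excretion time
t_max = np.log(beta) / k                            # time of maximal 13CO2 excretion rate

t = np.linspace(0, 300, 601)                        # minutes
y = m * k * beta * np.exp(-k * t) * (1 - np.exp(-k * t)) ** (beta - 1)  # excretion-rate curve

print(f"t1/2b = {t_half:.1f} min, tmax = {t_max:.1f} min, "
      f"peak excretion rate = {y.max():.3f} (arbitrary units)")
```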
Quantitative Analysis of Penalty Kicks and Yellow Card Referee Decisions in Soccer | Soccer referees are required to make instant decisions during the game under non-optimal conditions such as an imperfect view of the incident and substantial pressure from the crowd, the teams, and the media. Some of the decisions can be subjective, such as a yellow card decision after a foul is called, where different referees might make different decisions. Here we perform a quantitative analysis of factors related to the reputation of the team, such as the team's rank, budget, and crowd attendance in home games, and correlate these factors with referee decisions such as penalty kicks and yellow cards. The calls were normalized by dividing the number of yellow cards by the number of fouls, and the number of penalty kicks by the number of shot attempts from the penalty box. Application of the analysis to the four major soccer leagues shows that certain referee decisions have a significant correlation with factors such as the team's rank, budget, and audience in home games, while for other decisions the Pearson correlation is not statistically significant, for example for budget or audience in home games. On the other hand, a significant Pearson correlation has been identified between the chance of a foul call resulting in a yellow card and the rank or budget of the team in the Bundesliga. The strongest correlation has been observed between the chance of a tackle resulting in a foul call and the budget and rank of the team. |
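A minimal sketch of the normalization and correlation step described above: yellow cards are divided by fouls, penalty kicks by shot attempts from the box, and the normalized rates are correlated with a team-reputation factor (budget). All team figures below are fabricated for illustration and are not the leagues' data.

```python
# Illustrative normalization and Pearson correlation on made-up team-level data.
import numpy as np
from scipy.stats import pearsonr

budget = np.array([720, 640, 310, 180, 150, 120, 95, 80])   # hypothetical budgets (M EUR)
yellow_cards = np.array([55, 60, 70, 78, 74, 82, 85, 88])
fouls = np.array([420, 430, 440, 450, 445, 455, 460, 452])
penalties = np.array([9, 8, 6, 5, 5, 4, 4, 3])
box_shots = np.array([310, 300, 240, 210, 200, 190, 180, 170])

card_rate = yellow_cards / fouls          # chance a foul call becomes a yellow card
penalty_rate = penalties / box_shots      # chance a box shot attempt yields a penalty

for name, rate in [("yellow-card rate", card_rate), ("penalty rate", penalty_rate)]:
    r, p = pearsonr(budget, rate)
    print(f"{name}: Pearson r = {r:.2f}, p = {p:.3f}")
```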
The influence of shallow trench isolation angle on hot carrier effect of STI-based LDMOS transistors | Hot carrier reliability poses challenges in the design of STI-based laterally diffused metal-oxide-semiconductor (LDMOS) devices as device features are miniaturized. Efforts to quantify the degradation are crucial for countering the device reliability risk. This paper investigates the effect of the shallow trench isolation (STI) angle on the hot carrier effect (HCI) in STI-based LDMOS devices. The effects on critical device parameters, specifically the saturation drain current (Idsat), the on-resistance (Ron), and the rate of impact ionization of the device, have been studied and discussed in detail. From the results obtained, it is found that the drain current for the device with a 100° STI angle is reduced by 58.78% compared to the device with a 45° STI angle. A larger STI angle shows higher HCI degradation, and the physical mechanism behind the results is analyzed from the Sentaurus 2D techplots. |
Entrainment of Country Rock during Basaltic Eruptions of the Lucero Volcanic Field, New Mexico | As magma rises through the lithosphere it may entrain wall rock debris. The entrainment process depends on the local hydrodynamic regime of the magma (e.g., velocity, temperature, bulk density), the extent of interaction of magma with groundwater, and the mechanical properties of the wall rocks. Wall rock entrainment results in local flaring of dikes and conduits, which in turn affects the hydrodynamics of magma ascent and eruption. We studied upper-crustal xenoliths erupted from small-volume basaltic volcanoes of the Lucero volcanic field (west-central New Mexico) in order to assess the relative importance of various entrainment mechanisms during a range of eruptive styles, including strongly hydrovolcanic, Strombolian, and effusive processes. Total xenolith volume fractions ranged from 0.3–0.9 in hydrovolcanic facies to <10⁻⁴–10⁻² in Strombolian facies. The volcanoes erupted through a thick, well-characterized sequence of Paleozoic and lower Mesozoic sedimentary rocks, so that erupted xenolith... |
A LITERATURE REVIEW ON IMMERSIVE VIRTUAL REALITY IN EDUCATION: STATE OF THE ART AND PERSPECTIVES | Since the term "Virtual Reality" (VR) was first used back in the 1960s, VR has evolved in different ways, becoming more and more similar to the real world. Two different kinds of VR can be identified: non-immersive and immersive. The former is a computer-based environment that can simulate places in the real or imagined worlds; the latter takes the idea even further by giving the perception of being physically present in the non-physical world. While non-immersive VR can be based on a standard computer, immersive VR is still evolving as the required devices become more user friendly and economically accessible. In the past, a major difficulty was the use of equipment such as a helmet with goggles, whereas new devices are now being developed to improve usability for the user. VR, which is based on three basic principles (Immersion, Interaction, and User involvement with the environment and narrative), offers great potential in education by making learning more motivating and engaging. Up to now, the use of immersive VR in educational games has been limited due to the high prices of the devices and their limited usability. Now, new tools like the commercial "Oculus Rift" make it possible to access immersive VR in many educational situations. This paper reports a survey of the scientific literature on the advantages and potential of the use of immersive virtual reality in education in the last two years (2013-14). It shows how VR in general, and immersive VR in particular, has been used mostly for adult training in special situations or for university students. It then focuses on the possible advantages and drawbacks of its use in education with reference to different classes of users, such as children and those with certain cognitive disabilities (with particular reference to Down syndrome). It concludes by outlining strategies that could be carried out to verify these ideas. |
Highly Bendable, Transparent Thin‐Film Transistors That Use Carbon‐Nanotube‐Based Conductors and Semiconductors with Elastomeric Dielectrics | We report the use of networks of single-walled carbon nanotubes (SWNTs) with high and moderate coverages (measured as number of tubes per unit area) for all of the conducting (i.e., source, drain, and gate electrodes) and semiconducting layers, respectively, of a type of transparent, mechanically flexible, thin-film transistor (TFT). The devices are fabricated on plastic substrates using layer-by-layer transfer printing of SWNT networks grown using optimized chemical vapor deposition (CVD) procedures. The unique properties of the SWNT networks lead to electrical (e.g., good performance on plastic), optical (e.g., transparent at visible wavelengths), and mechanical (e.g., extremely bendable) characteristics in this "all-tube" TFT that would be difficult, or impossible, to achieve with conventional materials. Invisible circuits based on transparent transistors have broad potential applications in consumer, military, and industrial electronic systems. In backlit display devices, for example, transparent active-matrix circuits can increase the aperture ratio and battery life. Transparent electronic materials that can be printed on low-cost, flexible, plastic substrates are potentially important for new applications, such as bendable heads-up display devices, see-through structural health monitors, sensors, and steerable antennas. More advanced systems, such as electronic artificial skins and canopy window displays, will require materials that can also tolerate the high degrees of mechanical flexing (i.e., high strains) needed for integration with complex curvilinear surfaces. Most examples of transparent TFTs (TTFTs) use thin films of inorganic oxides as the semiconducting and conducting layers. Although the electrical properties of these oxides can be good (mobilities and conductivities as high as 20 cm² V⁻¹ s⁻¹ and 4.8 × 10³ Ω⁻¹ cm⁻¹, respectively), their mechanical characteristics are not optimally suited for use in flexible and mechanically robust devices. For example, the tensile fracture strains for ZnO and indium tin oxide (ITO) thin films are less than 0.03% and 1%, respectively. Aligned arrays or random networks of individual SWNTs represent alternative classes of transparent semiconducting and conducting materials. In networks with high coverages of SWNTs, especially when in the form of small bundles, the metallic tubes (normally present with semiconducting tubes in a 1:2 ratio) form a percolating network that behaves like a conducting "film". At moderate coverages, only the semiconducting tubes form such a percolating network and the film shows semiconducting properties. Unlike the oxides, the SWNT films have excellent mechanical properties due to the high elastic moduli (1.36–1.76 TPa nm/tube diameter in nm) and fracture stresses (100–150 GPa) of the tubes. SWNT-based semiconductors have been used in flexible TFTs. In one case, solution-deposited SWNT networks also formed the gate electrodes. Although these TFTs can show good electrical properties, especially when CVD tubes are used, the metal (Au, Pd, etc.) source and drain electrodes limit their optical transparency. |
Standardised antibacterial Manuka honey in the management of persistent post-operative corneal oedema: a case series. | BACKGROUND
Corneal oedema is a common post-operative problem that delays or prevents visual recovery from ocular surgery. Honey is a supersaturated solution of sugars with an acidic pH, high osmolarity and low water content. These characteristics inhibit the growth of micro-organisms, reduce oedema and promote epithelialisation. This clinical case series describes the use of a regulatory approved Leptospermum species honey ophthalmic product, in the management of post-operative corneal oedema and bullous keratopathy.
METHODS
A retrospective review was conducted of 18 consecutive cases (30 eyes) with corneal oedema persisting beyond one month after single or multiple ocular surgical procedures (phacoemulsification cataract surgery and additional procedures), treated with Optimel Antibacterial Manuka Eye Drops two to three times daily as an adjunctive therapy to conventional topical management with corticosteroids, aqueous suppressants, hypertonic sodium chloride five per cent, eyelid hygiene and artificial tears. Visual acuity and central corneal thickness were measured before and at the conclusion of Optimel treatment.
RESULTS
A temporary reduction in corneal epithelial oedema lasting up to several hours was observed after the initial Optimel instillation and was associated with a reduction in central corneal thickness, resolution of epithelial microcysts, collapse of epithelial bullae, improved corneal clarity, improved visualisation of the intraocular structures and improved visual acuity. Additionally, with chronic use, reduction in punctate epitheliopathy, reduction in central corneal thickness and improvement in visual acuity were achieved. Temporary stinging after Optimel instillation was experienced. No adverse infectious or inflammatory events occurred during treatment with Optimel.
CONCLUSIONS
Optimel was a safe and effective adjunctive therapeutic strategy in the management of persistent post-operative corneal oedema and warrants further investigation in clinical trials. |
Beta-tricalcium phosphate shows superior absorption rate and osteoconductivity compared to hydroxyapatite in open-wedge high tibial osteotomy | The purpose of this study was to clinically and radiologically compare the utility, osteoconductivity, and absorbability of hydroxyapatite (HAp) and beta-tricalcium phosphate (TCP) spacers in medial open-wedge high tibial osteotomy (HTO). Thirty-eight patients underwent medial open-wedge HTO with a locking plate. In the first 19 knees, a HAp spacer was implanted in the opening space (HAp group). In the remaining 19 knees, a TCP spacer was implanted in the same manner (TCP group). All patients underwent clinical and radiological examinations before surgery and at 18 months after surgery. Concerning the background factors, there were no statistically significant differences between the two groups. Post-operatively, the knee score significantly improved in each group. Concerning the post-operative knee alignment and clinical outcome, there were no statistically significant differences in any parameter between the two groups. Regarding the osteoconductivity, the modified van Hemert's score of the TCP group was significantly higher (p = 0.0009) than that of the HAp group in the most medial osteotomy zone. The absorption rate was significantly greater in the TCP group than in the HAp group (p = 0.00039). The present study demonstrated that a TCP spacer was significantly superior to a HAp spacer concerning osteoconductivity and absorbability at 18 months after medial open-wedge HTO. Retrospective comparative study, Level III. |
The necessary and sufficient conditions of therapeutic personality change. | For many years I have been engaged in psychotherapy with individuals in distress. In recent years I have found myself increasingly concerned with the process of abstracting from that experience the general principles which appear to be involved in it. I have endeavored to discover any orderliness, any unity which seems to inhere in the subtle, complex tissue of interpersonal relationship in which I have so constantly been immersed in therapeutic work. One of the current products of this concern is an attempt to state, in formal terms, a theory of psychotherapy, of personality, and of interpersonal relationships which will encompass and contain the phenomena of my experience. What I wish to do in this paper is to take one very small segment of that theory, spell it out more completely, and explore its meaning and usefulness. |
Hepatoprotective and in vivo antioxidant effects of Byrsocarpus coccineus Schum. and Thonn. (Connaraceae). | ETHNOPHARMACOLOGICAL RELEVANCE
The leaf decoction of Byrsocarpus coccineus (Connaraceae) is drunk for the treatment of jaundice in West African traditional medicine.
AIM OF THE STUDY
To investigate the hepatoprotective and in vivo antioxidant effects of Byrsocarpus coccineus in carbon tetrachloride (CCl(4))-induced hepatotoxicity in rats.
MATERIALS AND METHODS
Group allotment in this study included vehicle, CCl(4), Byrsocarpus coccineus 1000 mg/kg alone, Byrsocarpus coccineus 200, 400, and 1000 mg/kg + CCl(4), and Livolin® 20 mg/kg + CCl(4), and treatment was carried out accordingly. On the 7th day, rats were sacrificed and blood was withdrawn by cardiac puncture. The levels and activities of serum biochemical parameters and antioxidant enzymes were then assayed using standard procedures.
RESULTS
CCl(4) significantly (P<0.05) increased the levels of ALT and AST and reduced total protein. In CCl(4) treated animals, Byrsocarpus coccineus (200, 400, and 1000 mg/kg) dose-dependently and significantly decreased ALT, AST and ALP levels, with the peak effect produced at the highest dose. Conversely, Byrsocarpus coccineus produced significant increases in albumin and total protein levels. The standard drug produced significant effects in respect of ALT (↓), albumin (↑), and total protein (↑). CCl(4) also produced significant (P<0.05) reductions in the activity of catalase, SOD, peroxidase and GSH, and conversely increased MDA level. Byrsocarpus coccineus produced significant and dose-dependent reversal of CCl(4)-diminished activity of the antioxidant enzymes and reduced CCl(4)-elevated level of MDA. The standard drug also significantly increased CCl(4)-diminished antioxidant enzymes activity and reduced CCl(4)-elevated MDA level. In general, the effects of the standard drug were comparable and not significantly different from those of Byrsocarpus coccineus.
CONCLUSION
The results obtained in this study suggest that the aqueous leaf extract of Byrsocarpus coccineus possesses hepatoprotective and in vivo antioxidant effects. This finding justifies the use of this preparation in West African traditional medicine for the treatment of liver disease. |
Protein contact prediction by integrating joint evolutionary coupling analysis and supervised learning | MOTIVATION
Protein contact prediction is important for protein structure and functional study. Both evolutionary coupling (EC) analysis and supervised machine learning methods have been developed, making use of different information sources. However, contact prediction is still challenging especially for proteins without a large number of sequence homologs.
RESULTS
This article presents a group graphical lasso (GGL) method for contact prediction that integrates joint multi-family EC analysis and supervised learning to improve accuracy on proteins without many sequence homologs. Different from existing single-family EC analysis that uses residue coevolution information in only the target protein family, our joint EC analysis uses residue coevolution in both the target family and its related families, which may have divergent sequences but similar folds. To implement this, we model a set of related protein families using Gaussian graphical models and then coestimate their parameters by maximum-likelihood, subject to the constraint that these parameters shall be similar to some degree. Our GGL method can also integrate supervised learning methods to further improve accuracy. Experiments show that our method outperforms existing methods on proteins without thousands of sequence homologs, and that our method performs better on both conserved and family-specific contacts.
AVAILABILITY AND IMPLEMENTATION
See http://raptorx.uchicago.edu/ContactMap/ for a web server implementing the method.
CONTACT
[email protected]
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online. |
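For readers unfamiliar with the precision-matrix view of contact prediction, the hedged sketch below estimates a sparse inverse covariance for a single family with the standard graphical lasso and ranks residue pairs by the magnitude of the resulting partial-correlation structure. It is only a stand-in for the paper's group graphical lasso, which additionally couples several related families, and the data here are synthetic.

```python
# Single-family sketch: sparse precision-matrix estimation as crude contact scoring.
# The "residue features" are synthetic Gaussian data, not a real multiple sequence alignment.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_sequences, n_residues = 500, 20
X = rng.normal(size=(n_sequences, n_residues))      # toy stand-in for encoded MSA columns

model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_                        # sparse conditional-dependence structure

# Rank residue pairs by |precision| as crude contact scores (excluding the diagonal).
scores = np.abs(precision)
np.fill_diagonal(scores, 0.0)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"top-scoring residue pair: ({i}, {j}), score = {scores[i, j]:.3f}")
```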
Probrain natriuretic peptide for assessment of efficacy in heart failure treatment. | N-terminal probrain natriuretic peptide (NT-proBNP) is elevated in patients with heart failure. Numerous clinical trials have evaluated the efficacy of spironolactone in heart failure; however, no studies have directly examined the effects of spironolactone treatment on NT-proBNP level. This study investigated whether NT-proBNP levels decrease with daily spironolactone treatment. The study consisted of 117 adult patients with heart failure. All participants were divided into 3 groups, class I, class II, and class III, according to the New York Heart Association classification system. Patients were randomly assigned to receive spironolactone or were treated with another drug (other than spironolactone) as placebo. NT-proBNP plasma samples were taken at baseline and after 6 mo of treatment. A total of 62 patients were treated with daily spironolactone; 55 patients were followed with available treatment without spironolactone. The baseline demographic and laboratory parameters were similar for patients in all groups. At the end of 6 mo, spironolactone-treated patients had significantly lower NT-proBNP levels and significantly better ejection fractions than did patients in all New York Heart Association classes who were not treated with spironolactone. Results suggest that spironolactone decreases plasma NT-proBNP concentrations, and that the measurement of plasma NT-proBNP levels may be helpful in assessing therapeutic efficacy in patients who are treated for heart failure. |
Subduction styles in the Precambrian: Insight from numerical experiments | Plate tectonics is a self-organizing global system driven by the negative buoyancy of the thermal boundary layer resulting in subduction. Although the signature of plate tectonics is recognized with some confidence in the Phanerozoic geological record of the continents, evidence for plate tectonics becomes less certain further back in time. To improve our understanding of plate tectonics on the Earth in the Precambrian we have to combine knowledge derived from the geological record with results from well-constrained numerical modeling. In a series of experiments using a 2D petrological–thermomechanical numerical model of oceanic subduction we have systematically investigated the dependence of tectono-metamorphic and magmatic regimes at an active plate margin on upper-mantle temperature, crustal radiogenic heat production, degree of lithospheric weakening and other parameters. We have identified a first-order transition from a "no-subduction" tectonic regime through a "pre-subduction" tectonic regime to the modern style of subduction. The first transition is gradual and occurs at upper-mantle temperatures between 250 and 200 K above the present-day values, whereas the second transition is more abrupt and occurs at 175–160 K. The link between geological observations and model results suggests that the transition to the modern plate tectonic regime might have occurred during the Mesoarchean–Neoarchean time (ca. 3.2–2.5 Ga). In the case of the "pre-subduction" tectonic regime (upper-mantle temperature 175–250 K above the present) the plates are weakened by intense percolation of melts derived from the underlying hot melt-bearing sub-lithospheric mantle. In such cases, convergence does not produce self-sustaining one-sided subduction, but rather results in shallow underthrusting of the oceanic plate under the continental plate. Further increase in the upper-mantle temperature (>250 K above the present) causes a transition to a "no-subduction" regime where horizontal movements of small deformable plate fragments are accommodated by internal strain and even shallow underthrusts do not form under the imposed convergence. Thus, based on the results of the numerical modeling, we suggest that the crucial parameter controlling the tectonic regime is the degree of lithospheric weakening induced by emplacement of sub-lithospheric melts into the lithosphere. A lower melt flux at upper-mantle temperatures <175–160 K results in a lesser degree of melt-related weakening leading to stronger plates, which stabilizes modern style subduction even at high mantle temperatures. |
The Oral-B CrossAction manual toothbrush: a 5-year literature review. | UNLABELLED
The design of the modern conventional manual toothbrush can be attributed to Dr. Robert Hutson, a Californian periodontist, who in the early 1950s developed the multitufted, flat-trimmed, end-rounded nylon filament brush that became known as the Oral-B manual toothbrush. The trademark Oral-B emphasized that this was an oral brush, designed to clean all parts of the oral cavity, not merely a toothbrush. Flat-trimmed conventional toothbrushes based on the original Oral-B design have good plaque-removing capability when used carefully. However, limitations in terms of patients' brushing technique and brushing time necessitated a radical change in bristle pattern to improve performance, especially at approximal sites and along the gumline.
RATIONALE FOR PRODUCT DEVELOPMENT
Detailed studies of the tooth-brushing process, using advanced scientific and ergonomic research methods, led to new toothbrush designs intended to maximize the efficacy of brushing efforts. These studies showed that the point of greatest interproximal penetration occurs when the direction of brushing changes; bristles angle back into the interproximal space, moving down and back up the adjoining approximal surface. These mechanics were further optimized on the basis of standardized evaluations of brush-design characteristics, including combinations of tuft lengths, insertion angles and tuft layout. With conventional vertical bristles these improvements yield limited benefits because only a few bristles are correctly positioned at the interproximal junction when the brush changes direction. Ultimately, a design with bristle tufts arranged at 16° from vertical along the horizontal brush head axis was identified, in which the maximum number of bristles operated at the optimum angle throughout the brushing cycle. This design was significantly more effective (p < 0.001) than others in terms of penetration (by 9.6%) and cleaning effectiveness per brush stroke (by 15.5%).
EFFECTIVENESS
This discovery paved the way for a new toothbrush design with a unique patented array of tufts, which became known as the Oral-B CrossAction brush. This design was selected for extensive independent studies designed to evaluate plaque removal at the gingival margins and in the approximal areas and longer-term control of gingivitis, relative to current standard designs. In a series of studies (published in 2000), 14 single-brushing comparisons and 2 longer-term studies demonstrated the consistent superiority of the Oral-B CrossAction brush over the equivalent commercial standards. Since then, several additional studies have contributed further positive performance data for the CrossAction brush. Two of the studies demonstrated that plaque removal by this brush was superior to that of 15 other manual toothbrushes, and further investigations contributed similarly positive data. Longer-term data have confirmed superior CrossAction performance and the long-term benefits of improved efficacy, particularly for gingivitis.
DISCUSSION
Novel approaches to toothbrush design have produced a toothbrush that, when tested in a large number of clinical studies, has consistently met or exceeded established standards of efficacy. The literature contains a wealth of performance data on various toothbrush designs, but none of these designs shows the year-on-year consistency and reproducibility of the Oral-B CrossAction. |
Polymorphisms, de novo lipogenesis, and plasma triglyceride response following fish oil supplementation. | Interindividual variability in the response of plasma triglyceride concentrations (TG) following fish oil consumption has been observed. Our objective was to examine the associations between single-nucleotide polymorphisms (SNPs) within genes encoding proteins involved in de novo lipogenesis and the relative change in plasma TG levels following a fish oil supplementation. Two hundred and eight participants were recruited in the greater Quebec City area. The participants completed a six-week fish oil supplementation (5 g fish oil/day: 1.9-2.2 g eicosapentaenoic acid and 1.1 g docosahexaenoic acid). SNPs within SREBF1, ACLY, and ACACA genes were genotyped using TAQMAN methodology. After correction for multiple comparisons, only two SNPs, rs8071753 (ACLY) and rs1714987 (ACACA), were associated with the relative change in plasma TG concentrations (P = 0.004 and P = 0.005, respectively). These two SNPs explained 7.73% of the variance in plasma TG relative change following fish oil consumption. Genotype frequencies of rs8071753 according to the TG response groups (responders versus nonresponders) were different (P = 0.02). We conclude that the presence of certain SNPs within genes, such as ACLY and ACACA, encoding proteins involved in de novo lipogenesis seems to influence the plasma TG response following fish oil consumption. |
Whole-tree level water balance and its implications on stomatal oscillations in orange trees [Citrus sinensis (L.) Osbeck] under natural climatic conditions. | Sustained cyclic oscillations in stomatal conductance, leaf water potential, and sap flow were observed in young orange trees growing under natural conditions. The oscillations had an average period of approximately 70 min. Water uptake by the roots and loss by the leaves was characterized by large time lags which led to imbalances between water supply and demand in the leaves. The bulk of the lag in response between stomatal movements and the upstream water balance resided downstream of the branch, with branch level sap flow lagging behind the stomatal conductance by approximately 20 min while the stem sap flow had a much shorter time lag of only 5 min behind the branch sap flow. This imbalance between water uptake and loss caused transient changes in internal water deficits which were closely correlated to the dynamics of the leaf water potential. The hydraulic resistance of the whole tree fluctuated throughout the day, suggesting transient changes in the efficiency of water supply to the leaves. A simple whole-tree water balance model was applied to describe the dynamics of water transport in the young orange trees, and typical values of the hydraulic parameters of the transpiration stream were estimated. In addition to the hydro-passive stomatal movements, whole-tree water balance appears to be an important factor in the generation of stomatal oscillations. |
A Reinforcement Learning Approach to Weaning of Mechanical Ventilation in Intensive Care Units | The management of invasive mechanical ventilation, and the regulation of sedation and analgesia during ventilation, constitutes a major part of the care of patients admitted to intensive care units. Both prolonged dependence on mechanical ventilation and premature extubation are associated with increased risk of complications and higher hospital costs, but clinical opinion on the best protocol for weaning patients off of a ventilator varies. This work aims to develop a decision support tool that uses available patient information to predict time-to-extubation readiness and to recommend a personalized regime of sedation dosage and ventilator support. To this end, we use off-policy reinforcement learning algorithms to determine the best action at a given patient state from sub-optimal historical ICU data. We compare treatment policies from fitted Q-iteration with extremely randomized trees and with feedforward neural networks, and demonstrate that the policies learnt show promise in recommending weaning protocols with improved outcomes, in terms of minimizing rates of reintubation and regulating physiological stability. |
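A minimal sketch of batch fitted Q-iteration with extremely randomized trees, the kind of off-policy learner the abstract compares against neural networks. The toy state dimension, action count, and random transitions below are hypothetical stand-ins for real ICU data, and no claim is made that this matches the authors' feature set or reward design.

```python
# Minimal fitted Q-iteration on a synthetic batch of (state, action, reward, next state) tuples.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(0)
n_transitions, state_dim, n_actions, gamma = 5000, 8, 4, 0.99

# Hypothetical one-step transitions drawn from historical (off-policy) data.
S = rng.normal(size=(n_transitions, state_dim))
A = rng.integers(0, n_actions, size=n_transitions)
R = rng.normal(size=n_transitions)
S_next = S + rng.normal(scale=0.1, size=S.shape)

def q_values(model, states):
    """Evaluate Q(s, a) for every action by appending the action index as a feature."""
    return np.column_stack([
        model.predict(np.column_stack([states, np.full(len(states), a)]))
        for a in range(n_actions)
    ])

X = np.column_stack([S, A])          # regression inputs: (state, action)
targets = R.copy()                   # first iteration: Q_1 = immediate reward
for _ in range(10):                  # fitted Q-iteration loop
    model = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, targets)
    targets = R + gamma * q_values(model, S_next).max(axis=1)

greedy_actions = q_values(model, S).argmax(axis=1)  # learnt policy evaluated on the batch
```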
MLBCD: a machine learning tool for big clinical data | BACKGROUND
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise.
METHODS
This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data.
RESULTS
The paper describes MLBCD's design in detail.
CONCLUSIONS
By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care. |
Optimization of a z-source DC circuit breaker | DC faults may cause severe disruptions in continuity of service to vital loads in a shipboard integrated power system, hence detection, isolation, and protection against such faults must be incorporated in both medium-voltage DC (MVDC) and low-voltage DC (LVDC) systems. Here we consider the effectiveness of existing z-source breakers and propose several new designs more appropriate for fault detection in MVDC and LVDC systems. In particular, we perform an optimization study that aims to minimize dissipation and weight and we identify the key parameters for use in MVDC and LVDC systems. Preliminary verification and validation studies are also included. |
An Algorithmic Framework to Control Bias in Bandit-based Personalization | Personalization is pervasive in the online space as it leads to higher efficiency and revenue by allowing the most relevant content to be served to each user. However, recent studies suggest that personalization methods can propagate societal or systemic biases and polarize opinions; this has led to calls for regulatory mechanisms and algorithms to combat bias and inequality. Algorithmically, bandit optimization has enjoyed great success in learning user preferences and personalizing content or feeds accordingly. We propose an algorithmic framework that allows for the possibility to control bias or discrimination in such bandit-based personalization. Our model allows for the specification of general fairness constraints on the sensitive types of the content that can be displayed to a user. The challenge, however, is to come up with a scalable and low regret algorithm for the constrained optimization problem that arises. Our main technical contribution is a provably fast and low-regret algorithm for the fairness-constrained bandit optimization problem. Our proofs crucially leverage the special structure of our problem. Experiments on synthetic and real-world data sets show that our algorithmic framework can control bias with only a minor loss to revenue. A short version of this paper appeared in the FAT/ML 2017 workshop (https://arxiv.org/abs/1707.02260). arXiv:1802.08674v1 [cs.LG] 23 Feb 2018 |
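The following is an illustrative sketch, not the paper's provably low-regret algorithm: a UCB bandit whose arm choice is restricted so that the running display share of each sensitive content type stays under a cap. The arm means, the two content types, and the caps are all hypothetical.

```python
# Constrained-UCB toy: cap the fraction of displays going to each sensitive content type.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.4, 0.7])      # hypothetical click-through rates
arm_type = np.array([0, 0, 1, 1])                 # sensitive type of each arm
type_cap = {0: 0.6, 1: 0.6}                       # max share of displays per type
T = 10_000

counts = np.zeros(4)
sums = np.zeros(4)
type_counts = {0: 0, 1: 0}

for t in range(1, T + 1):
    ucb = np.where(counts > 0,
                   sums / np.maximum(counts, 1)
                   + np.sqrt(2 * np.log(t) / np.maximum(counts, 1)),
                   np.inf)
    # An arm is eligible if pulling it keeps its type's display share under the cap.
    eligible = np.array([(type_counts[arm_type[a]] + 1) / t <= type_cap[arm_type[a]]
                         for a in range(4)])
    if not eligible.any():            # fall back if the constraint blocks every arm
        eligible[:] = True
    a = int(np.argmax(np.where(eligible, ucb, -np.inf)))
    reward = rng.random() < true_means[a]
    counts[a] += 1; sums[a] += reward; type_counts[arm_type[a]] += 1

print("display shares per type:", {g: c / T for g, c in type_counts.items()})
```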
Processing and visualization for diffusion tensor MRI | This paper presents processing and visualization techniques for Diffusion Tensor Magnetic Resonance Imaging (DT-MRI). In DT-MRI, each voxel is assigned a tensor that describes local water diffusion. The geometric nature of diffusion tensors enables us to quantitatively characterize the local structure in tissues such as bone, muscle, and white matter of the brain. This makes DT-MRI an interesting modality for image analysis. In this paper we present a novel analytical solution to the Stejskal-Tanner diffusion equation system whereby a dual tensor basis, derived from the diffusion sensitizing gradient configuration, eliminates the need to solve this equation for each voxel. We further describe decomposition of the diffusion tensor based on its symmetrical properties, which in turn describe the geometry of the diffusion ellipsoid. A simple anisotropy measure follows naturally from this analysis. We describe how the geometry or shape of the tensor can be visualized using a coloring scheme based on the derived shape measures. In addition, we demonstrate that human brain tensor data when filtered can effectively describe macrostructural diffusion, which is important in the assessment of fiber-tract organization. We also describe how white matter pathways can be monitored with the methods introduced in this paper. DT-MRI tractography is useful for demonstrating neural connectivity (in vivo) in healthy and diseased brain tissue. |
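As a concrete illustration of the tensor decomposition the abstract refers to, the snippet below eigen-decomposes a single synthetic diffusion tensor and computes the mean diffusivity and a familiar anisotropy index (fractional anisotropy); the paper derives its own shape measures, so FA is used here only as a common stand-in, and the example tensor is made up.

```python
# Eigen-decomposition of one synthetic diffusion tensor plus a common anisotropy index.
import numpy as np

D = np.array([[1.7, 0.1, 0.0],     # synthetic symmetric diffusion tensor (e.g. 1e-3 mm^2/s)
              [0.1, 0.4, 0.0],
              [0.0, 0.0, 0.3]])

eigvals, eigvecs = np.linalg.eigh(D)        # ascending eigenvalues, orthonormal eigenvectors
l1, l2, l3 = eigvals[::-1]                  # sort descending: principal diffusion first
md = (l1 + l2 + l3) / 3.0                   # mean diffusivity
fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
             / (l1**2 + l2**2 + l3**2))     # fractional anisotropy
principal_direction = eigvecs[:, -1]        # eigenvector of the largest eigenvalue
print(f"MD = {md:.3f}, FA = {fa:.3f}, principal direction = {principal_direction}")
```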
Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study | Neural sequence-to-sequence (seq2seq) approaches have proven to be successful in grammatical error correction (GEC). Based on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boosting learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boosting inference allows the model to correct a sentence incrementally with multiple inference steps. Combining fluency boost learning and inference with convolutional seq2seq models, our approach achieves the state-of-the-art performance: 75.72 (F0.5) on the CoNLL-2014 10-annotation dataset and 62.42 (GLEU) on the JFLEG test set, respectively, becoming the first GEC system that reaches human-level performance (72.58 for CoNLL and 62.37 for JFLEG) on both of the benchmarks. |
Development of Yes/No Arabic Question Answering System | Developing question answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic-based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms), which frequently improves the precision of the system. Employing the passage retrieval system achieves better precision by retrieving more paragraphs that contain relevant answers to the question; it also significantly reduces the amount of text to be processed by the system. |
Expression and Prognostic Significance of Macrophage Inflammatory Protein-3 Alpha and Cystatin A in Nasopharyngeal Carcinoma | This study aims to investigate the expression of macrophage inflammatory protein-3 alpha (MIP-3α) and cystatin A in nasopharyngeal carcinoma (NPC) and their association with clinical characteristics and prognosis. Primary tumor specimens from 114 NPC patients and associated clinical follow-up data were collected, and the expression of MIP-3α and cystatin A proteins was investigated by immunohistochemistry. Expression of MIP-3α was significantly associated with TNM stage in patients with NPC (P < 0.05). NPC patients with positive expression of MIP-3α exhibited shorter median overall survival (OS) and distant metastasis-free survival (DMFS), compared with patients with negative expression (OS: 50.5 months versus 59.0 months, P = 0.013; DMFS: 50.1 months versus 60.2 months, P = 0.003). NPC patients with positive expression of cystatin A exhibited shorter median OS, local recurrence-free survival (LRFS), and DMFS, compared with patients with negative expression (OS: 51.1 months versus 60.0 months, P = 0.004; LRFS: 54.5 months versus 59.5 months, P = 0.036; DMFS: 52.3 months versus 58.8 months, P = 0.036). Both MIP-3α and cystatin A overexpressions in NPC tumor tissues were strong independent factors of poor prognosis in NPC patients. MIP-3α and cystatin A expressions may be valuable prognostic markers in NPC patients. |
Clofarabine with high dose cytarabine and granulocyte colony-stimulating factor (G-CSF) priming for relapsed and refractory acute myeloid leukaemia. | This phase I/II study was conducted to determine the maximum tolerated dose, toxicity, and efficacy of clofarabine in combination with high dose cytarabine and granulocyte colony-stimulating factor (G-CSF) priming (GCLAC), in the treatment of patients with relapsed or refractory acute myeloid leukaemia (AML). Dose escalation of clofarabine occurred without dose-limiting toxicity, so most patients were treated at the maximum dose, 25 mg/m(2) per day with cytarabine 2 g/m(2) per day, each for 5 d, and G-CSF 5 μg/kg, beginning the day before chemotherapy and continuing daily until neutrophil recovery. The complete remission (CR) rate among the 46 evaluable patients was 46% (95% confidence interval [CI] 31-61%), and the combined rate of CR plus CR with a platelet count <100 × 10(9)/l was 61% (95% CI 45-75%). Multivariate analysis showed that responses to GCLAC were independent of age, cytogenetic risk category, and number of prior salvage regimens. GCLAC is highly active in relapsed and refractory AML and warrants prospective comparison to other regimens, as well as study in untreated patients. |
Publish/Subscribe in a mobile environment | A publish/subscribe system dynamically routes and delivers events from sources to interested users, and is an extremely useful communication service when it is not clear in advance who needs what information. In this paper we discuss how a publish/subscribe system can be extended to operate in a mobile environment, where events can be generated by moving sensors or users, and subscribers can request delivery at handheld and/or mobile devices. We describe how the publish/subscribe system itself can be distributed across multiple (possibly mobile) computers to distribute load, and how the system can be replicated to cope with failures, message loss, and disconnections. |
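A toy sketch, not the system described in the paper: a single-broker publish/subscribe core that buffers events for disconnected subscribers and flushes them on reconnection, one of the mobility concerns the abstract raises. The topic names and callback interface are hypothetical.

```python
# Minimal in-memory pub/sub broker with buffering for disconnected subscribers.
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

class Broker:
    def __init__(self) -> None:
        self.subscriptions: Dict[str, List[str]] = defaultdict(list)   # topic -> subscriber ids
        self.online: Dict[str, Callable[[str, str], None]] = {}        # subscriber id -> callback
        self.pending: Dict[str, List[Tuple[str, str]]] = defaultdict(list)  # buffered events

    def subscribe(self, sub_id: str, topic: str) -> None:
        self.subscriptions[topic].append(sub_id)

    def connect(self, sub_id: str, deliver: Callable[[str, str], None]) -> None:
        self.online[sub_id] = deliver
        for topic, event in self.pending.pop(sub_id, []):               # flush buffered events
            deliver(topic, event)

    def disconnect(self, sub_id: str) -> None:
        self.online.pop(sub_id, None)

    def publish(self, topic: str, event: str) -> None:
        for sub_id in self.subscriptions[topic]:
            if sub_id in self.online:
                self.online[sub_id](topic, event)
            else:
                self.pending[sub_id].append((topic, event))             # hold until reconnection

broker = Broker()
broker.subscribe("phone-1", "traffic/zone-42")
broker.publish("traffic/zone-42", "congestion")                        # buffered: phone-1 offline
broker.connect("phone-1", lambda t, e: print(f"{t}: {e}"))              # delivered on reconnect
```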
Semantic Image Synthesis via Adversarial Learning | In this paper, we propose a way of synthesizing realistic images directly with natural language description, which has many useful applications, e.g. intelligent image manipulation. We attempt to accomplish such synthesis: given a source image and a target text description, our model synthesizes images to meet two requirements: 1) being realistic while matching the target text description; 2) maintaining other image features that are irrelevant to the text description. The model should be able to disentangle the semantic information from the two modalities (image and text), and generate new images from the combined semantics. To achieve this, we propose an end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill the aforementioned two requirements. We have evaluated our model by conducting experiments on the Caltech-200 bird dataset and the Oxford-102 flower dataset, and have demonstrated that our model is capable of synthesizing realistic images that match the given descriptions, while still maintaining other features of the original images. |
Step-by-step design and simulation of a simple CPU architecture | This paper describes a sequence of assignments, each building upon the next, leading students to a working simulation of a simple 8-bit CPU (Central Processing Unit). The design features a classic Von Neumann architecture comprising a simple data path with a few registers, a simple ALU (Arithmetic Logic Unit), and a microprogram to direct all the control signals. The first step involves the design of the ALU which is capable of eight basic operations. The second step guides students to construct a datapath complete with several 8-bit registers. The third step involves the design and implementation of a control unit which uses a microprogram to implement machine code instructions. The microprogram implements nine basic machine language instructions which are sufficient for writing many simple programs. The final step involves adding program memory and an input and output device to form a simple working simulation of a computer. At this point, students may hand-assemble code for their CPU and simulate its execution. All simulations are performed using a free and open source simulator called Logisim which performs digital logic simulations with the ability to build larger circuits from smaller subcircuits. Students can set an adjustable clock rate and observe the internal CPU state and registers as it retrieves instructions and steps through the microcode. The basic CPU architecture provides many opportunities for more advanced exercises, such as adding an instruction fetch unit, adding pipelining, or adding more machine language instructions. The assignments were introduced in a second year course on computer organization, providing an effective hands-on approach to understanding how a CPU actually operates. |
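To make the fetch-decode-execute structure concrete, here is a minimal toy simulation of an 8-bit accumulator machine in the spirit of the assignments described above; the opcode set and encoding are hypothetical and much simpler than the nine-instruction microprogrammed design the paper's students build in Logisim.

```python
# Toy fetch-decode-execute loop for a hypothetical 8-bit accumulator machine.
def run(program, max_steps=100):
    mem = list(program) + [0] * (256 - len(program))
    acc, pc = 0, 0                                   # accumulator and program counter
    for _ in range(max_steps):
        opcode, operand = mem[pc], mem[pc + 1]       # fetch a two-byte instruction
        pc += 2
        if opcode == 0x00:                           # HLT: stop execution
            break
        elif opcode == 0x01:                         # LDI: load immediate into accumulator
            acc = operand
        elif opcode == 0x02:                         # ADD: add memory[operand] to accumulator
            acc = (acc + mem[operand]) & 0xFF
        elif opcode == 0x03:                         # STA: store accumulator to memory[operand]
            mem[operand] = acc
        elif opcode == 0x04:                         # JMP: unconditional jump
            pc = operand
        elif opcode == 0x05:                         # OUT: print accumulator (the "output device")
            print(acc)
    return acc, mem

# Compute 5 + 7: load 5, add the value stored at address 9, print the result, halt.
program = [0x01, 5, 0x02, 9, 0x05, 0, 0x00, 0, 0, 7]
run(program)
```

In a follow-on exercise, the same loop can be driven by a microprogram table instead of the if/elif chain, mirroring the control-unit step described in the abstract.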
Diagnostic accuracy of urine dipsticks for detecting albuminuria in indigenous and non-indigenous children in a community setting | Albuminuria predicts cardiovascular and end-stage kidney disease in indigenous populations. Early detection in indigenous children may identify those who could benefit from early treatment. Community-based detection of albuminuria needs to be performed using a reliable, inexpensive, and widely available test, such as a proteinuria dipstick. Dipstick accuracy for detecting albuminuria in a community setting has not been evaluated. We assessed the accuracy of Multistix 10 SG dipsticks to detect baseline albuminuria and predict for persistent albuminuria at a 2-year follow-up in a population-based cohort of Australian Aboriginal and non-Aboriginal elementary-school-aged children. Variability in the accuracy of dipsticks in subgroups of higher risk children was analyzed using the relative diagnostic odds ratio (RDOR). Using Multistix 10 SG dipsticks, index-test-positive cases were defined as ≥0.30 g/L (1+) proteinuria and index-test-negative cases as <0.30 g/L (negative or trace) proteinuria. Referent-test-positive cases were defined as spot albumin:creatinine (ACR) ≥3.4 mg/mmol, and referent-test-negative cases as ACR <3.4 mg/mmol. There were 2,266 children (55.1% Aboriginal, 51.0% boys, mean age 8.9 years) enrolled. At the 2-year follow-up, 1,432 (63.0%) children were retested (54.0% Aboriginal, 50.5% boys, mean age 10.5 years). Prevalence of baseline albuminuria was 7.3%, and persistent albuminuria was 1.5%. Dipsticks had a sensitivity of 62% and specificity of 97% at baseline. In predicting persistent albuminuria, sensitivity was 75% and specificity 93%. Accuracy did not vary with ethnicity, gender, or body mass index. Accuracy was less in younger children (4.0–7.9 years), and in those with hematuria. The performance characteristics of Multistix dipsticks make them suitable for albuminuria detection in Aboriginal and other higher-risk groups of children. More than two thirds of children detected at a single test will have transient rather than persistent albuminuria. Multistix dipsticks are particularly useful for detecting children who will have persistent albuminuria. |
Compositional Asymmetric Cooperations for Process Algebras with Probabilities, Priorities, and Time | The modeling and analysis experience with process algebras has shown the necessity of extending them with priority, probabilistic internal/external choice, and time in order to be able to faithfully model the behavior of real systems and capture the properties of interest. An important open problem in this scenario is how to obtain semantic compositionality in the presence of all these features, to allow for an efficient analysis. Starting from a Markovian process algebra, i.e. a process algebra incorporating exponentially distributed durations, the objective of this paper is to show how to add the expressive features above while preserving compositionality. Theoretically speaking, we argue that, when abandoning the classical nondeterministic setting by considering the features above, a natural solution is to break the symmetry of the roles of the processes participating in a synchronization. We accomplish this by distinguishing between master actions – the choice among which is carried out generatively according to their priorities/probabilities or exponentially distributed durations – and slave actions – the choice among which is carried out reactively according to their priorities/probabilities – and by imposing that a master action can synchronize with slave actions only. We show that such an asymmetric cooperation mechanism is natural and easy to understand by means of the novel cooperation structure model. Technically speaking, we define EMPAgr, a Markovian process algebra extended with probabilities, priorities, zero durations, and the generative master-reactive slaves synchronization mechanism. Then, we prove that the synchronization mechanism in EMPAgr is correct w.r.t. the cooperation structure model, we show that the Markovian bisimulation equivalence is a congruence w.r.t. all the operators of EMPAgr as well as recursion, and we present a sound and complete axiomatization of the Markovian bisimulation equivalence for nonrecursive process terms. As far as the Markovian bisimulation equivalence is concerned, we introduce a new notion of Markovian bisimulation up to Markovian bisimulation equivalence, which improves the previous definitions given in the literature, and a new proof technique for showing congruence w.r.t. recursion in Markovian process algebras, which repairs some inaccuracies in the proofs previously proposed in the literature. |
The Development of the Game Engagement Questionnaire: A Measure of Engagement in Video Game Playing: Response to Reviews | This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence. |
A Hierarchy-to-Sequence Attentional Neural Machine Translation Model | Although sequence-to-sequence attentional neural machine translation (NMT) has achieved great progress recently, it is confronted with two challenges: learning optimal model parameters for long parallel sentences and exploiting different scopes of context well. In this paper, partially inspired by the idea of segmenting a long sentence into short clauses, each of which can be easily translated by NMT, we propose a hierarchy-to-sequence attentional NMT model to handle these two challenges. Our encoder takes the segmented clause sequence as input and explores a hierarchical neural network structure to model words, clauses, and sentences at different levels, particularly with two layers of recurrent neural networks modeling semantic compositionality at the word and clause levels. Correspondingly, the decoder sequentially translates segmented clauses and simultaneously applies two types of attention models to capture interclause and intraclause contexts for translation prediction. In this way, we can not only improve parameter learning, but also better exploit different scopes of context for translation. Experimental results on Chinese–English and English–German translation demonstrate the superiority of the proposed model over the conventional NMT model. |
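A minimal sketch of the hierarchical encoding idea (a word-level RNN per clause, then a clause-level RNN over the resulting clause vectors), written in PyTorch for illustration. The layer sizes, the use of GRU cells, and the way clause vectors are taken from the final hidden state are assumptions for the sketch, not the paper's exact configuration, and the attention and decoder parts are omitted.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Two-level encoder: words -> clause vectors -> interclause states."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.clause_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)

    def forward(self, clauses):
        # clauses: list of LongTensors, one per clause, each of shape (clause_len,)
        clause_vecs = []
        for clause in clauses:
            emb = self.embed(clause.unsqueeze(0))        # (1, len, emb_dim)
            _, h = self.word_rnn(emb)                    # h: (1, 1, hid_dim)
            clause_vecs.append(h.squeeze(0))             # (1, hid_dim)
        clause_seq = torch.stack(clause_vecs, dim=1)     # (1, n_clauses, hid_dim)
        clause_states, _ = self.clause_rnn(clause_seq)   # interclause context
        return clause_vecs, clause_states                # intraclause / interclause

enc = HierarchicalEncoder(vocab_size=1000)
clauses = [torch.tensor([5, 17, 42]), torch.tensor([7, 9])]
intra, inter = enc(clauses)
print(inter.shape)  # torch.Size([1, 2, 128])
```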
STORING AND EXCHANGING SIMULATION RESULTS IN TELECOMMUNICATIONS | Although storing and exchanging simulation data is in principle a simple task for simulation practitioners, it is often a challenge in practice: huge quantities of data are not uncommon, and conversion between different formats can be very time-consuming. After examining some of the needs of the telecommunications simulation community, we describe the architecture of a working prototype – CostGlue – to be used as a general-purpose archiver and converter for large quantities of simulation data. The software architecture of the CostGlue tool is modular, allowing further development and contributions from other research communities. The core of the tool – CoreGlue – is responsible for communicating with the database. It acts as a unified interface for writing to the database and reading from it. Specific functions, such as the import and export of data and various mathematical calculations, are provided as a set of self-describing modules, which are loaded as necessary. The graphical user interface is implemented as a web application for simplicity of use and effective remote access. The software package CostGlue will be released as free software, with the possibility of further development. |
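The core-plus-modules layout described above (a core that mediates all database access, with import/export and calculation functions as self-describing modules loaded on demand) can be sketched in a few lines of Python. The class name, table schema, and plug-in naming scheme below are invented for illustration and are not CostGlue's actual interfaces.

```python
import importlib
import sqlite3

class CoreGlue:
    """Toy stand-in for a core that writes results to, and reads from, a database."""
    def __init__(self, path="results.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS samples (run TEXT, t REAL, value REAL)")

    def write(self, run, rows):
        self.db.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                            [(run, t, v) for t, v in rows])
        self.db.commit()

    def read(self, run):
        return self.db.execute(
            "SELECT t, value FROM samples WHERE run = ?", (run,)).fetchall()

def load_module(name):
    """Import a converter or calculation plug-in by module name, only when needed."""
    return importlib.import_module(name)

core = CoreGlue(":memory:")
core.write("run-1", [(0.0, 1.5), (0.1, 1.7)])
print(core.read("run-1"))
# A hypothetical exporter would be loaded lazily, e.g.:
# exporter = load_module("plugins.csv_export"); exporter.export(core, "run-1")
```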
A Blockchain-Based Authentication and Security Mechanism for IoT | The existing identity authentication of IoT devices mostly depends on an intermediary institution, i.e., a CA server, which constitutes a single point of failure. Even worse, the critical data of authenticated devices can be tampered with by insider attacks without being detected. To address these issues, we utilize blockchain technology, which serves as a secure, tamper-proof distributed ledger for IoT devices. In the proposed method, we assign a unique ID to each device and record it on the blockchain, so that devices can authenticate each other without a central authority. We also design a data protection mechanism by hashing significant data (e.g., firmware) into the blockchain, where any state change of the data can be detected immediately. Finally, we implement a prototype based on the open source blockchain platform Hyperledger Fabric to verify the proposed system. |
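The two ideas (registering device IDs on an append-only ledger and detecting firmware tampering by comparing a recorded hash) can be illustrated with a toy hash-chained ledger in plain Python rather than Hyperledger Fabric chaincode. Every class and function name here is invented for the sketch.

```python
import hashlib, json, time

class ToyLedger:
    """Append-only, hash-chained list standing in for a blockchain ledger."""
    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps({"prev": prev, "time": time.time(), "record": record},
                          sort_keys=True)
        self.blocks.append({"hash": hashlib.sha256(body.encode()).hexdigest(),
                            "body": body})

ledger = ToyLedger()

def register_device(device_id, firmware_bytes):
    fw_hash = hashlib.sha256(firmware_bytes).hexdigest()
    ledger.append({"device": device_id, "firmware_sha256": fw_hash})

def firmware_untampered(device_id, firmware_bytes):
    fw_hash = hashlib.sha256(firmware_bytes).hexdigest()
    for block in reversed(ledger.blocks):               # latest record wins
        record = json.loads(block["body"])["record"]
        if record["device"] == device_id:
            return record["firmware_sha256"] == fw_hash
    return False                                         # device never registered

register_device("sensor-01", b"firmware v1")
print(firmware_untampered("sensor-01", b"firmware v1"))       # True
print(firmware_untampered("sensor-01", b"firmware v1 evil"))  # False
```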
Link of dietary patterns with metabolic syndrome: analysis of the National Health and Nutrition Examination Survey | Background: Population-based interventions aimed at halting the increasing prevalence of metabolic syndrome (MetS) require a thorough understanding of dietary interplays. The objective is to identify the independent dietary nutrients associated with MetS and its components using dietary pattern identification and single-nutrient approaches in the United States. Methods: This is a cross-sectional observational study. Participants were selected from the National Health and Nutrition Examination Survey (NHANES) with available dietary intake, biochemical, and anthropometric data from 2001 to 2012. The exposure is diet, obtained from 24-h dietary recall. The main outcome measures are MetS and its components. Results: Overall, 23 157 eligible individuals including 6561 with MetS were included in the final analysis. Using principal component analysis, we identified three food patterns that explained 50.8% of the variance of dietary nutrient consumption. The highest quartile of the factor score representative of saturated/monounsaturated fatty acids (the first dietary pattern) was associated with 1.27-fold (95% confidence interval (CI): 1.10–1.46, P=0.001) higher odds of MetS when compared with the first quartile. The second pattern, representative of vitamins and trace elements, had an odds ratio of 0.79 (95% CI: 0.70–0.89, P<0.001) for association with MetS, and the third pattern, representative of polyunsaturated fatty acids, did not have any association with MetS. The nutrient-by-nutrient approach showed that mild alcohol intake and lower consumption of total saturated fatty acids and sodium were associated with lower risk of MetS. Conclusions: Application of multiple complementary analytic approaches reveals more comprehensive dietary determinants of MetS and its components as potential intervention targets. |
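The analytic pipeline described (principal component analysis to derive dietary patterns from nutrient intakes, then odds ratios for MetS across quartiles of a factor score) can be sketched on synthetic data as follows. Variable names, the number of components, and the simple unadjusted logistic model are assumptions for the example, not the NHANES analysis itself.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
nutrients = rng.normal(size=(n, 10))        # synthetic standardized nutrient intakes
mets = rng.binomial(1, 0.3, size=n)         # synthetic MetS indicator

# Derive dietary patterns as principal components of nutrient intake.
scores = PCA(n_components=3).fit_transform(nutrients)
pattern1 = scores[:, 0]                     # factor score for the first pattern

# Compare the highest vs. lowest quartile of the first pattern score.
q1, q4 = np.quantile(pattern1, [0.25, 0.75])
keep = (pattern1 <= q1) | (pattern1 >= q4)
x = (pattern1[keep] >= q4).astype(int).reshape(-1, 1)   # 1 = top quartile
y = mets[keep]

model = LogisticRegression().fit(x, y)
odds_ratio = np.exp(model.coef_[0, 0])      # unadjusted OR for Q4 vs. Q1
print(f"unadjusted OR, Q4 vs Q1: {odds_ratio:.2f}")
```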
Geosocial Media Data as Predictors in a GWR Application to Forecast Crime Hotspots (Short Paper) | In this paper we forecast hotspots of street crime in Portland, Oregon. Our approach uses geosocial media posts, which define the predictors in geographically weighted regression (GWR) models. We use two predictors that are both derived from Twitter data. The first one is the population at risk of being a victim of street crime. The second one is crime-related tweets. These two predictors were used in GWR to create models that depict future street crime hotspots. The predicted hotspots enclosed more than 23% of the future street crimes in 1% of the study area and also outperformed the prediction efficiency of a baseline approach. Future work will focus on optimizing the prediction parameters and testing the applicability of this approach to other mobile crime types. |
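Geographically weighted regression fits a separate weighted least-squares model at each location, with observation weights decaying with distance. The sketch below implements that idea directly in NumPy on synthetic grid-cell data rather than the Twitter-derived predictors or a GWR package; the Gaussian kernel bandwidth and the top-1% hotspot threshold are arbitrary assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))              # synthetic cell centroids
pop_at_risk = rng.poisson(20, n).astype(float)         # stand-ins for the two
crime_tweets = rng.poisson(5, n).astype(float)         #   Twitter-derived predictors
crimes = 0.3 * pop_at_risk + 1.2 * crime_tweets + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), pop_at_risk, crime_tweets])
bandwidth = 2.0                                        # assumed Gaussian bandwidth

def local_coefficients(i):
    """Weighted least squares centred on observation i (the GWR idea)."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ crimes)

betas = np.array([local_coefficients(i) for i in range(n)])
predicted = np.sum(X * betas, axis=1)                  # local prediction surface
hot = predicted >= np.quantile(predicted, 0.99)        # flag top 1% of cells as hotspots
print(f"cells flagged as hotspots: {hot.sum()}")
```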
Literature Survey on Sentiment Analysis of Twitter Data using Machine Learning Approaches | In today's world, micro-blogging sites have become a platform for individuals and organizations across the world to express their opinions, sentiments, and experiences in the form of tweets, status updates, blog posts, etc. This platform has no political or economic restrictions. This paper discusses an approach in which a stream of tweets on electronic products published on the Twitter micro-blogging site is preprocessed and classified based on emotional content as positive, negative, or neutral. The performance of the unsupervised algorithm is then analyzed. The paper concludes with a comparison of the existing system with the proposed system and with applications of the research. |
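As an illustration of an unsupervised, lexicon-based pass of the kind analyzed above, the sketch below scores tweets as positive, negative, or neutral by counting words from tiny hand-made lexicons. The word lists and preprocessing rules are invented for the example and are far smaller than any real sentiment lexicon.

```python
import re

POSITIVE = {"good", "great", "love", "excellent", "fast", "amazing"}
NEGATIVE = {"bad", "poor", "hate", "slow", "terrible", "broken"}

def preprocess(tweet):
    # Strip mentions, hashtags marks, and URLs, then tokenize to lowercase words.
    tweet = re.sub(r"@\w+|#|https?://\S+", "", tweet.lower())
    return re.findall(r"[a-z']+", tweet)

def classify(tweet):
    tokens = preprocess(tweet)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "Love my new phone, the camera is excellent! #happy",
    "Battery life is terrible and the screen is broken @support",
    "Just unboxed the tablet, setting it up now",
]
for t in tweets:
    print(classify(t), "-", t)
```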
Pravastatin: a potential cause for acute pancreatitis. | Acute pancreatitis (AP) secondary to drugs is uncommon, with an incidence ranging from 0.3% to 2.0% of AP cases. Drug-induced AP due to statins is rare, and only 12 cases have thus far been reported. We report the case of a 50-year-old woman who had been on pravastatin therapy for 3 days prior to developing symptoms of AP. The common etiological factors for AP were all excluded. The patient was admitted to the intensive care unit secondary to respiratory distress, though she subsequently improved and was discharged 14 days after admission. Although the incidence of drug-induced AP is low, clinicians should have a high index of suspicion for it in patients with AP due to an unknown etiology. Clinicians should be aware of the association of statins with AP. If a patient taking a statin develops abdominal pain, clinicians should consider the diagnosis of AP and conduct the appropriate laboratory and diagnostic evaluation if indicated. |
Implementing IPv6 as a Peer-to-Peer Overlay Network | This paper proposes to implement an IPv6 routing infrastructure as a self-organizing overlay network on top of the current IPv4 infrastructure. The overlay network builds upon a distributed IPv6 edge router with a master/slave architecture. We show how different slaves can be constructed to tunnel through NATs and firewalls, as well as to improve the robustness of the routing infrastructure and to provide efficient and resilient implementations for features such as multicast, anycast, and mobile IP, using currently available peer-to-peer (P2P) protocols. The resulting IPv6 overlay network would restore the end-to-end property of the original Internet, support evolution and dynamic updating of the protocols running on the overlay network, make available IPv6 and the associated features to network applications immediately, and provide an ideal underlying infrastructure for P2P applications, without changing networking hardware and software in the core Internet. |
A substrate integrated waveguide leaky wave antenna radiating from a slot in the broad wall | This paper presents the application of a substrate integrated waveguide (SIW) to the design of a leaky wave antenna radiating from a slot in the broad wall. The antenna radiates a beam split into two main lobes, and its gain is about 7 dB at 19 GHz. The characteristics and radiation aspects of the antenna are discussed here. The measured antenna characteristics are in good agreement with those predicted by the simulation. Due to the SIW technology, the antenna is suitable for integration into T/X circuits and antenna arrays. |
Sentiment Analysis of Suicide Notes: A Shared Task | This paper reports on a shared task involving the assignment of emotions to suicide notes. Two features distinguished this task from previous shared tasks in the biomedical domain. One is that it produced a corpus of fully anonymized, annotated clinical text: the suicide notes themselves. This resource is permanently available and will (we hope) facilitate future research. The other key feature of the task is that it required categorization with respect to a large set of labels. The number of participants was larger than in any previous biomedical challenge task. We describe the data production process and the evaluation measures, and give a preliminary analysis of the results. Many systems performed at levels approaching the inter-coder agreement, suggesting that human-like performance on this task is within the reach of currently available technologies. |
Performance Engineering of Software Systems |
Nitrous oxide anesthesia-associated myelopathy. | BACKGROUND
The role of nitrous oxide exposure in neurologic complications of subclinical cobalamin deficiency has been reported, but few cases are well documented.
OBSERVATION
Two weeks after surgery for prostatic adenoma, a 69-year-old man developed ascending paresthesia of the limbs, severe ataxia of gait, tactile sensory loss of the four limbs and trunk, and absent tendon reflexes. After a second surgical intervention, the patient became confused. Four months after onset, the patient had paraplegia, severe weakness of the upper limbs, cutaneous anesthesia sparing the head, and confusion. Moderate macrocytosis, low serum B12 levels, and a positive Schilling test result led to the diagnosis of pernicious anemia. Results of electrophysiologic examinations showed a diffuse demyelinating neuropathy. Magnetic resonance imaging of the spinal cord disclosed hyperintensities of the dorsal columns on T2-weighted images.
CONCLUSIONS
Pernicious anemia can result in severe neurologic symptoms with only mild hematologic changes. The role of nitrous oxide anesthesia in revealing subclinical B12 deficiency must be emphasized. Magnetic resonance imaging of the spinal cord might be helpful in making the diagnosis. |
Towards a Seamless Integration of Word Senses into Downstream NLP Applications | Lexical ambiguity can impede NLP systems from accurate understanding of semantics. Despite its potential benefits, the integration of sense-level information into NLP systems has remained understudied. By incorporating a novel disambiguation algorithm into a state-of-the-art classification model, we create a pipeline to integrate sense-level information into downstream NLP applications. We show that a simple disambiguation of the input text can lead to consistent performance improvement on multiple topic categorization and polarity detection datasets, particularly when the fine granularity of the underlying sense inventory is reduced and the document is sufficiently large. Our results also point to the need for sense representation research to focus more on in vivo evaluations which target the performance in downstream NLP applications rather than artificial benchmarks. |
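The "disambiguate, then feed the downstream model" pipeline can be illustrated with a toy example: ambiguous words are mapped to coarse senses by overlap with nearby context words, and the sense-tagged tokens can then go into any bag-of-words or neural classifier. The two-sense inventory and the context-overlap heuristic below are invented for the sketch; the paper's disambiguation algorithm and sense inventory are far richer.

```python
# Toy "disambiguate then classify" pipeline; inventory and heuristic are illustrative only.
SENSES = {
    "apple": {"apple%fruit":   {"pie", "tree", "juice", "eat"},
              "apple%company": {"iphone", "mac", "stock", "ceo"}},
}

def disambiguate(tokens):
    tagged = []
    for i, tok in enumerate(tokens):
        if tok in SENSES:
            context = set(tokens[max(0, i - 4):i + 5])           # small context window
            best = max(SENSES[tok], key=lambda s: len(SENSES[tok][s] & context))
            tagged.append(best)
        else:
            tagged.append(tok)
    return tagged

doc = "the new iphone boosted apple stock after the ceo keynote".split()
print(disambiguate(doc))
# ['the', 'new', 'iphone', 'boosted', 'apple%company', 'stock', ...]
# The sense-tagged tokens then replace raw words as input to the downstream classifier.
```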
Acute diffuse and total alopecia: A new subtype of alopecia areata with a favorable prognosis. | BACKGROUND
Alopecia areata (AA) appears in several clinical forms, all having different clinical courses and different prognoses. Acute diffuse and total alopecia (ADTA) has been reported to have a short clinical course ranging from acute hair loss to total baldness, followed by rapid recovery.
OBJECTIVE
To determine the clinical course and prognosis of ADTA through precise clinical observations.
METHODS
Thirty Korean patients who showed ADTA of the scalp within an average of 10 weeks after the onset of hair loss were studied.
RESULTS
Most patients were women who were older than 20 years of age. The histopathology of the lesion revealed infiltration of mononuclear cells around the hair follicles and prominent pigment incontinence. The patients experienced hair regrowth within about 6 months, without regard to the method of treatment.
LIMITATIONS
The duration of follow-up after remission ranged from 3 to 49 months, with a mean of 24 months.
CONCLUSIONS
These cases can be categorized as having "acute diffuse and total alopecia," a new subtype of AA that is associated with a favorable prognosis and rapid and spontaneous recovery even without treatment. |
RCS reduction of Antipodal Vivaldi Antenna | A novel Antipodal Vivaldi Antenna with low radar cross section (RCS) is proposed in this paper. By using a flat corrugated slotline to replace the exponential gradient curve on both sides of the antenna, the RCS in the endfire direction can be reduced in the operating band of 4.3 GHz-12 GHz when the incident wave is perpendicular to the antenna plane. Meanwhile, the lowest operating frequency of the antenna is reduced from 4.3 GHz to 3.8 GHz, and the radiation performance remains stable. |
Aspects in Effectiveness of Glass-and Polyethylene-Fibre Reinforced Composite Resin in Periodontal Splinting |
Wind turbine pitch faults prognosis using a-priori knowledge-based ANFIS | Keywords: wind turbine; fault prognosis; fault detection; pitch system; ANFIS; neuro-fuzzy; a-priori knowledge. The fast-growing wind industry has shown a need for more sophisticated fault prognosis analysis in the critical and high-value components of a wind turbine (WT). Current WT studies focus on improving their reliability and reducing the cost of energy, particularly when WTs are operated offshore. WT Supervisory Control and Data Acquisition (SCADA) systems contain alarms and signals that could provide an early indication of component fault and allow the operator to plan system repair prior to complete failure. Several research programmes have been undertaken for this purpose; however, the resulting cost savings are limited because of the data complexity and the relatively low number of failures that can be easily detected in early stages. A new fault prognosis procedure is proposed in this paper using an a-priori knowledge-based Adaptive Neuro-Fuzzy Inference System (ANFIS). The aim is to achieve automated detection of significant pitch faults, which are known to be significant failure modes. With the advantage of a-priori knowledge incorporation, the proposed system has an improved ability to interpret previously unseen conditions, and fault diagnoses are thus improved. In order to construct the proposed system, the data of the 6 known WT pitch faults were used to train the system with a-priori knowledge incorporated. The effectiveness of the approach was demonstrated using three metrics: (1) the trained system was tested in a new wind farm containing 26 WTs to show its prognosis ability; (2) the first test result was compared to a general alarm approach; (3) a Confusion Matrix analysis was made to demonstrate the accuracy of the proposed approach. The results of this research demonstrate that the proposed a-priori knowledge-based ANFIS (APK-ANFIS) approach has strong potential for WT pitch fault prognosis. Wind is currently the fastest growing renewable energy source for electrical generation around the world. It is expected that a large number of wind turbines (WTs), especially offshore, will be employed in the near future (EWEA, 2011; Krohn, Morthorst, & Awerbuch, 2009). Following a rapid acceleration of wind energy development in the early 21st century, WT manufacturers are beginning to focus on improving their cost of energy. WT operational performance is critical to the cost of energy. This is because Operation and Maintenance (O&M) costs constitute a significant share of the annual cost of a wind … |
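As a small illustration of the third evaluation step, the sketch below builds a confusion matrix from predicted versus actual pitch-fault labels and derives accuracy from it. The labels and counts are synthetic and the metric is just the standard definition, not the paper's full analysis.

```python
import numpy as np

def confusion_matrix(actual, predicted, labels):
    index = {label: i for i, label in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)), dtype=int)
    for a, p in zip(actual, predicted):
        m[index[a], index[p]] += 1        # rows: actual class, columns: predicted class
    return m

labels = ["healthy", "pitch fault"]
actual    = ["healthy"] * 20 + ["pitch fault"] * 6
predicted = ["healthy"] * 18 + ["pitch fault"] * 2 + ["pitch fault"] * 5 + ["healthy"] * 1

cm = confusion_matrix(actual, predicted, labels)
accuracy = np.trace(cm) / cm.sum()
print(cm)
print(f"accuracy = {accuracy:.2f}")       # (18 + 5) / 26 ~= 0.88
```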
Depth of field postprocessing for layered scenes using constant-time rectangle spreading | Control over what is in focus and what is not in focus in an image is an important artistic tool. The range of depth in a 3D scene that is imaged in sufficient focus through an optics system, such as a camera lens, is called depth of field. Without depth of field, the entire scene appears completely in sharp focus, leading to an unnatural, overly crisp appearance. Current techniques for rendering depth of field in computer graphics are either slow or suffer from artifacts, or restrict the choice of point spread function (PSF). In this paper, we present a new image filter based on rectangle spreading which is constant time per pixel. When used in a layered depth of field framework, our filter eliminates the intensity leakage and depth discontinuity artifacts that occur in previous methods. We also present several extensions to our rectangle spreading method to allow flexibility in the appearance of the blur through control over the PSF. |
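The core trick of rectangle spreading can be sketched compactly: each source pixel deposits four signed values at the corners of its spread rectangle in an accumulation buffer, and a single prefix-sum pass per axis then turns those corner marks into uniformly filled rectangles, so the cost per pixel is independent of the blur radius. The NumPy sketch below shows that mechanism for one layer and one channel with a per-pixel radius; the normalization and the layered compositing are simplified relative to the paper.

```python
import numpy as np

def spread_blur(image, radius):
    """Depth-of-field style blur by rectangle spreading (constant work per pixel).

    image  : 2D float array (a single layer, single channel)
    radius : 2D int array, per-pixel half-width of the spread rectangle
    """
    h, w = image.shape
    pad = int(radius.max()) + 1
    acc = np.zeros((h + 2 * pad + 1, w + 2 * pad + 1))   # signed corner buffer
    norm = np.zeros_like(acc)                            # total spread weight per pixel

    for y in range(h):
        for x in range(w):
            r = int(radius[y, x])
            weight = image[y, x] / (2 * r + 1) ** 2      # energy-preserving spread
            y0, y1 = y - r + pad, y + r + 1 + pad
            x0, x1 = x - r + pad, x + r + 1 + pad
            for buf, val in ((acc, weight), (norm, 1.0 / (2 * r + 1) ** 2)):
                buf[y0, x0] += val
                buf[y0, x1] -= val
                buf[y1, x0] -= val
                buf[y1, x1] += val

    # One prefix-sum pass per axis turns corner marks into filled rectangles.
    acc = acc.cumsum(axis=0).cumsum(axis=1)
    norm = norm.cumsum(axis=0).cumsum(axis=1)
    out = acc[pad:pad + h, pad:pad + w] / np.maximum(norm[pad:pad + h, pad:pad + w], 1e-8)
    return out

img = np.zeros((64, 64)); img[32, 32] = 1.0              # single bright point
radii = np.full((64, 64), 5, dtype=int)                  # uniform blur radius
print(spread_blur(img, radii)[32, 28:37].round(4))        # flat-topped (rectangular) response
```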
Clopidogrel and Aspirin in Acute Ischemic Stroke and High-Risk TIA. | BACKGROUND
Combination antiplatelet therapy with clopidogrel and aspirin may reduce the rate of recurrent stroke during the first 3 months after a minor ischemic stroke or transient ischemic attack (TIA). A trial of combination antiplatelet therapy in a Chinese population has shown a reduction in the risk of recurrent stroke. We tested this combination in an international population.
METHODS
In a randomized trial, we assigned patients with minor ischemic stroke or high-risk TIA to receive either clopidogrel at a loading dose of 600 mg on day 1, followed by 75 mg per day, plus aspirin (at a dose of 50 to 325 mg per day) or the same range of doses of aspirin alone. The dose of aspirin in each group was selected by the site investigator. The primary efficacy outcome in a time-to-event analysis was the risk of a composite of major ischemic events, which was defined as ischemic stroke, myocardial infarction, or death from an ischemic vascular event, at 90 days.
RESULTS
A total of 4881 patients were enrolled at 269 international sites. The trial was halted after 84% of the anticipated number of patients had been enrolled because the data and safety monitoring board had determined that the combination of clopidogrel and aspirin was associated with both a lower risk of major ischemic events and a higher risk of major hemorrhage than aspirin alone at 90 days. Major ischemic events occurred in 121 of 2432 patients (5.0%) receiving clopidogrel plus aspirin and in 160 of 2449 patients (6.5%) receiving aspirin plus placebo (hazard ratio, 0.75; 95% confidence interval [CI], 0.59 to 0.95; P=0.02), with most events occurring during the first week after the initial event. Major hemorrhage occurred in 23 patients (0.9%) receiving clopidogrel plus aspirin and in 10 patients (0.4%) receiving aspirin plus placebo (hazard ratio, 2.32; 95% CI, 1.10 to 4.87; P=0.02).
CONCLUSIONS
In patients with minor ischemic stroke or high-risk TIA, those who received a combination of clopidogrel and aspirin had a lower risk of major ischemic events but a higher risk of major hemorrhage at 90 days than those who received aspirin alone. (Funded by the National Institute of Neurological Disorders and Stroke; POINT ClinicalTrials.gov number, NCT00991029 .). |
Depression among Chinese University Students: Prevalence and Socio-Demographic Correlates | The purpose of the present study was to estimate the prevalence of depression in Chinese university students, and to identify the socio-demographic factors associated with depression in this population. A multi-stage stratified sampling procedure was used to select university students (N = 5245) in Harbin (Heilongjiang Province, Northeastern China), who were aged 16-35 years. The Beck Depression Inventory (BDI) was used to determine depressive symptoms of the participants. BDI scores of 14 or higher were categorized as depressive for logistic regression analysis. Depression was diagnosed by the Structured Clinical Interview (SCID) for the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (DSM-IV). 11.7% of the participants had a BDI score of 14 or higher. Major Depressive Disorder was seen in 4.0% of Chinese university students. There were no statistical differences in the incidence of depression when gender, ethnicity, and university classification were analyzed. Multivariate analysis showed that age, study year, satisfaction with major, family income situation, parental relationship and mother's education were significantly associated with depression. Moderate depression is prevalent in Chinese university students. The students who were older, were dissatisfied with their major, had a lower family income, had poor parental relationships, and had mothers with a lower level of education were more susceptible to depression. |