Dataset schema: title — string (8 to 300 characters); abstract — string (0 to 10k characters).
PET-CT image registration in the chest using free-form deformations
We have implemented and validated an algorithm for three-dimensional positron emission tomography transmission-to-computed tomography registration in the chest, using mutual information as a similarity criterion. Inherent differences in the two imaging protocols produce significant nonrigid motion between the two acquisitions. A rigid body deformation combined with localized cubic B-splines is used to capture this motion. The deformation is defined on a regular grid and is parameterized by potentially several thousand coefficients. Together with a spline-based continuous representation of images and Parzen histogram estimates, our deformation model allows closed-form expressions for the criterion and its gradient. A limited-memory quasi-Newton optimization algorithm is used in a hierarchical multiresolution framework to automatically align the images. To characterize the performance of the method, 27 scans from patients involved in routine lung cancer staging were used in a validation study. The registrations were assessed visually by two expert observers in specific anatomic locations using a split window validation technique. The visually reported errors are in the 0- to 6-mm range and the average computation time is 100 min on a moderate-performance workstation.
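As an illustration of the similarity criterion underlying this approach, the sketch below computes mutual information from a joint intensity histogram of two resampled volumes. It is a simplified stand-in (a plain histogram rather than the Parzen estimate and spline image model used in the paper), and the array names and bin count are illustrative assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Histogram-based mutual information between two equally shaped images.

    Simplified stand-in for the Parzen-window estimate used in the paper:
    hypothetical inputs are intensity arrays already resampled onto the same
    grid by the current deformation.
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# usage with random data standing in for resampled CT/PET slices
rng = np.random.default_rng(0)
ct = rng.normal(size=(64, 64))
pet = ct + 0.3 * rng.normal(size=(64, 64))         # correlated, so MI > 0
print(mutual_information(ct, pet))
```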
Self-Test and Diagnosis for Self-Aware Systems
Editor's note: Self-testing hardware has a long tradition as a complement to manufacturing testing based on test stimuli and response analysis. Today it is a mature field, and many complex SoCs have built-in self-test (BIST) structures. For self-aware SoCs this is a key technology, allowing the system to distinguish between correct and erroneous behavior. This survey article reviews the state of the art and shows how these techniques can be generalized to facilitate self-awareness. —Axel Jantsch, TU Wien; Nikil Dutt, University of California at Irvine
3D virtual "smart home" user interface
In contrast to the rapid development of home automation equipment and 'smart home' capabilities, comparatively little attention has been paid to the development of comprehensive, comfortable and self-explaining user interfaces. This is acknowledged to be an important obstacle to the broad success of smart home ideas and products. In order to make home automation desirable and economically feasible, it is vital to improve the attractiveness, intuitiveness and adaptivity of the user-home interaction by taking advantage of modern information technologies such as Virtual Reality and downloadable hypertext documents. The user interface is augmented by a virtual environment used in conjunction with the physical environment, adding information from the different systems and sensors in the home on the basis of virtual environment activities. In this paper we describe a powerful graphical interface, created with standard 3D programming tools, to control and supervise the household. A number of features implemented in a prototype and several implementation aspects are described.
The standardised mistletoe extract PS76A2 improves QoL in patients with breast cancer receiving adjuvant CMF chemotherapy: a randomised, placebo-controlled, double-blind, multicentre clinical trial.
Patients with breast cancer receiving adjuvant chemotherapy frequently suffer from a restricted quality of life (QoL) due to the side-effects of chemotherapy and the consequences of coping with the diagnosis. Therefore, the objective of this clinical study was to investigate the impact of PS76A2, an aqueous mistletoe extract standardised to the galactoside-specific mistletoe lectin, on QoL by performing a placebo-controlled trial. Overall, 272 patients with breast cancer receiving adjuvant CMF chemotherapy (cyclophosphamide-methotrexate-fluorouracil) were enrolled and randomised to groups receiving placebo or PS76A2 at concentrations of 10, 30 or 70 ng mistletoe lectin (ML) per ml. The patients received 0.5 ml study medication twice weekly subcutaneously for 15 consecutive weeks (4 CMF cycles). Primary variables were the self-assessment QoL scores GLQ-8 (Global Life Quality) and Spitzer's uniscale. As a result, statistically significant effects on QoL were obtained with the medium dose (15 ng ML/0.5 ml). The treatment difference between the medium dose and placebo with regard to the GLQ-8 sum was 60.8 mm (95% confidence interval: 19.3 to 102.0 mm). The treatment effect for Spitzer's uniscale between the medium dose and placebo was 16.4 mm (95% confidence interval: 6.3 to 26.6 mm). The results on QoL were supported by an increase of T helper lymphocytes (CD4+) and the CD4+/CD8+ ratio (p<0.05). Overall, PS76A2 was well tolerated. Local reactions at the injection sites occurred dose-dependently, but were mild at the low and medium dose levels.
On Clustering Graph Streams
In this paper, we will examine the problem of clustering massive graph streams. Graph clustering poses significant challenges because of the complex structures which may be present in the underlying data. The massive size of the underlying graph makes explicit structural enumeration very difficult. Consequently, most techniques for clustering multi-dimensional data are difficult to generalize to the case of massive graphs. Recently, methods have been proposed for clustering graph data, though these methods are designed for static data, and are not applicable to the case of graph streams. Furthermore, these techniques are especially not effective for the case of massive graphs, since a huge number of distinct edges may need to be tracked simultaneously. This results in storage and computational challenges during the clustering process. In order to deal with the natural problems arising from the use of massive disk-resident graphs, we will propose a technique for creating hash-compressed micro-clusters from graph streams. The compressed micro-clusters are designed by using a hash-based compression of the edges onto a smaller domain space. We will provide theoretical results which show that the hash-based compression continues to maintain bounded accuracy in terms of distance computations. We will provide experimental results which illustrate the accuracy and efficiency of the underlying method.
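To make the hash-compression idea concrete, here is a minimal sketch of a hash-compressed micro-cluster: edges are hashed onto a small domain and each incoming graph is assigned to the nearest cluster centroid in that compressed space. The class name, the MD5 hash, the domain size, and the Euclidean distance are illustrative assumptions; the paper's exact sketch construction and accuracy bounds differ.

```python
import hashlib
import numpy as np

class HashedMicroCluster:
    """Sketch of a hash-compressed micro-cluster for edge streams.

    Edges are hashed onto a small domain of size m (an assumption of this
    sketch, not the paper's exact construction).
    """
    def __init__(self, m=1024):
        self.m = m
        self.counts = np.zeros(m)
        self.n_graphs = 0

    def _slot(self, u, v):
        key = f"{u}|{v}".encode()
        return int(hashlib.md5(key).hexdigest(), 16) % self.m

    def add(self, edges):
        for u, v in edges:
            self.counts[self._slot(u, v)] += 1
        self.n_graphs += 1

    def distance(self, edges):
        vec = np.zeros(self.m)
        for u, v in edges:
            vec[self._slot(u, v)] += 1
        centroid = self.counts / max(self.n_graphs, 1)
        return float(np.linalg.norm(vec - centroid))

def assign(graph_edges, clusters):
    """Assign an incoming graph (edge list) to the closest micro-cluster."""
    best = min(clusters, key=lambda c: c.distance(graph_edges))
    best.add(graph_edges)
    return best
```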
Comment on "Observation of superluminal behaviors in wave propagation".
Two Comments on the Letter by D. Mugnai, A. Ranfagni, and R. Ruggeri, Phys. Rev. Lett. 84, 4830 (2000).
Fungal Evolution: Aquatic–Terrestrial Transitions
The coevolution of plants and fungi is of key importance to the development of life on land. Much of our understanding of this long shared evolutionary history comes from (1) the study of living species, in particular through molecular phylogenetics, and (2) direct fossil evidence of plant–fungus interactions and of fungal diversity in general. However, little is known about the aquatic–terrestrial transitions in fungal evolution. In this article we discuss some hypotheses that have arisen from molecular data and the fossil record, and we highlight particular traits that are helpful for understanding the origin of fungi and their evolutionary history.
Emotion control in collaborative learning situations: do students regulate emotions evoked by social challenges?
BACKGROUND During recent decades, self-regulated learning (SRL) has become a major research field. SRL successfully integrates the cognitive and motivational components of learning. Self-regulation is usually seen as an individual process, with the social aspects of regulation conceptualized as one aspect of the context. However, recent research has begun to investigate whether self-regulation processes are complemented by socially shared regulation processes. AIMS The presented study investigated what kind of socio-emotional challenges students experience during collaborative learning and whether the students regulate the emotions evoked during these situations. The interplay of the emotion regulation processes between the individual and the group was also studied. SAMPLE The sample for this study was 63 teacher education students who studied in groups of three to five during three collaborative learning tasks. METHOD Students' interpretations of experienced social challenges and their attempts to regulate emotions evoked by these challenges were collected following each task using the Adaptive Instrument for the Regulation of Emotions. RESULTS The results indicated that students experienced a variety of social challenges. Students also reported the use of shared regulation in addition to self-regulation. Finally, the results suggested that intrinsic group dynamics are derived from both individual and social elements of collaborative situations. CONCLUSION The findings of the study support the assumption that students can regulate emotions collaboratively as well as individually. The study contributes to our understanding of the social aspects of emotional regulation in collaborative learning contexts.
Robotic abdominal surgery.
As a whole, abdominal surgeons possess excellent videoendoscopic surgical skills. However, the limitations of laparoscopy-such as reduced range of motion and instrument dexterity and 2-dimensional view of the operative field-have inspired even the most accomplished laparoscopists to investigate the potential of surgical robotics to broaden their application of the minimally invasive surgery paradigm. This review discusses data obtained from articles indexed in the MEDLINE database written in English and mapped to the following key words: "surgical robotics," "robotic surgery," "robotics," "computer-assisted surgery," "da Vinci," "Zeus," "fundoplication," "morbid obesity," "hepatectomy," "pancreatectomy," "small intestine," "splenectomy," "colectomy," "adrenalectomy," and "pediatric surgery." A limited subset of 387 publications was reviewed to determine article relevance to abdominal robotic surgery. Particular emphasis was placed on reports that limited their discussion to human applications and surgical outcomes. Included are comments about the initial 202 robotic abdominal surgery cases performed at Johns Hopkins University Hospital (Baltimore, MD) from August 2000 to January 2004. Surgical robotic systems are being used to apply laparoscopy to the surgical treatment of diseases in virtually every abdominal organ. Procedures demanding superior visualization or requiring complex reconstruction necessitating extensive suturing obtain the greatest benefit from robotics over conventional laparoscopy. Whereas advanced surgical robotic systems offer the promise of a unique combination of advantages over open and conventional laparoscopic approaches, clinical data demonstrating improved outcomes are lacking for robotic surgical applications within the abdomen. Outcomes data for surgical robotics are essential given the exorbitant costs associated with the use of these tools.
Development Dynamics in the Philippines Historical Perspectives: 1950-2010
This paper explores the use of the OECD Multi-Dimensional Country Review (MDCR) framework in understanding the long-term development history of the Philippines. The MDCR recognizes the multiplicity of development objectives countries usually pursue and, therefore, the associated multiplicity of challenges and opportunities. Following the conventional dichotomy of explaining the country's development dynamics in terms of economic and non-economic factors, the paper reviews the historical economic record and examines more recent non-economic hypotheses. The latter consist mostly of political explanations; they try to link politics to economic outcomes, yet they are weak in tracing the mechanisms of that linkage despite using more rigorous methodologies. The paper then hypothesizes that the long-term (political) practice of breaking the country into ever finer geographical (and political) entities has been inimical to its sustainable long-term (economic) growth. The splitting of provinces, the creation of new ones, the legislating of more congressional districts, and the further break-up of even the lowest levels of government clearly fragment markets, raise real financial and transaction costs, bloat government budgets and the bureaucracy, and add burden to the private sector environment. Partial evidence of this behavior along the country's long-term development history is explored and some policy directions are suggested.
Anticoagulation after subcutaneous enoxaparin is time sensitive in STEMI patients treated with tenecteplase
The adequacy of anticoagulation with enoxaparin as an adjuvant to fibrinolytic therapy for STEMI is unclear and has implications for both efficacy and safety, especially in patients undergoing a pharmacoinvasive reperfusion strategy. A subset of fibrinolytic-treated patients in the WEST study was enrolled in a systematic anti-Xa substudy. All received ASA and subcutaneous (SQ) enoxaparin 1 mg/kg followed by TNK-tPA. Incremental IV dosing of enoxaparin (0.3–0.5 mg/kg) was allowed prior to percutaneous coronary intervention (PCI). Anti-Xa blood samples were drawn prior to and after angiography. Data are presented as percentages, medians and IQRs. Forty-five patients underwent angiography 2.8 h (2.5–14.6) after fibrinolytic therapy. The pre-angiography median anti-Xa acquired 179 min (153–875) after SQ enoxaparin was 0.48 U/ml (0.42–0.65); a relationship between anti-Xa activity and time from administration was evident (r = 0.418, p < 0.007). Without supplemental IV enoxaparin, the 2nd anti-Xa acquired 218 min (195–930) after SQ enoxaparin was 0.48 U/ml (0.41–0.80, n = 29). After supplemental IV enoxaparin, the 2nd anti-Xa was 0.92 U/ml (0.72–1.10, n = 16). A relationship between incremental IV enoxaparin dose and anti-Xa was demonstrated (r = 0.59, p = 0.001), i.e. no IV 0.48 U/ml (0.41–0.80, n = 29), 0.3 mg/kg IV 0.81 U/ml (0.63–1.00, n = 12), and 0.5 mg/kg IV 1.34 U/ml (1.16–1.54, n = 4). Most fibrinolytic-treated STEMI patients receiving weight-adjusted SQ enoxaparin (1 mg/kg) had subtherapeutic anti-Xa levels (<0.5 U/ml) after ~3 h. A strategy of supplemental 0.3 mg/kg IV enoxaparin at the time of PCI reliably achieved anti-Xa ≥ 0.5 U/ml. Our findings provide a rational novel strategy for antithrombotic management in STEMI patients undergoing a pharmacoinvasive reperfusion strategy.
Developing fully functional E-government: A four stage model
The literature reports experiences with e-government initiatives as chaotic and unmanageable, despite numerous recent initiatives at different levels of government and academic and practitioners' conferences on e-government. E-government presents a number of challenges for public administrators. To help public administrators think about e-government and their organizations, this article describes different stages of e-government development and proposes a 'stages of growth' model for fully functional e-government. Various government websites and related e-government initiatives help to ground and explain this model. These stages outline the multiperspective transformation within government structures and functions as they make the transition to e-government through each stage. Technological and organizational challenges for each stage accompany these descriptions. At the same time, this paper describes how e-government becomes amalgamated with the traditional public administrative structure.
The Hallmarks of Aging
Aging is characterized by a progressive loss of physiological integrity, leading to impaired function and increased vulnerability to death. This deterioration is the primary risk factor for major human pathologies, including cancer, diabetes, cardiovascular disorders, and neurodegenerative diseases. Aging research has experienced an unprecedented advance over recent years, particularly with the discovery that the rate of aging is controlled, at least to some extent, by genetic pathways and biochemical processes conserved in evolution. This Review enumerates nine tentative hallmarks that represent common denominators of aging in different organisms, with special emphasis on mammalian aging. These hallmarks are: genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis, deregulated nutrient sensing, mitochondrial dysfunction, cellular senescence, stem cell exhaustion, and altered intercellular communication. A major challenge is to dissect the interconnectedness between the candidate hallmarks and their relative contributions to aging, with the final goal of identifying pharmaceutical targets to improve human health during aging, with minimal side effects.
Solid state circuit breakers for DC microgrids: Current status and future trends
Short circuit protection remains one of the major technical barriers in DC microgrids. This paper reviews the state of the art of DC solid state circuit breakers (SSCBs). A new concept of a self-powered SSCB using normally-on wide-bandgap (WBG) semiconductor devices as the main static switch is described. The new SSCB detects short circuit faults by sensing the rise of its terminal voltage, and draws power from the fault condition itself to turn off and hold off the static switch. The new two-terminal SSCB can be placed directly in a circuit branch without requiring any external power supply or extra wiring. Challenges and future trends in protecting low voltage distribution microgrids against short circuit and other faults are discussed.
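As a rough illustration of the voltage-rise detection principle described above, the sketch below trips a breaker when the sensed terminal voltage stays above a threshold for a short hold time. It is a behavioral toy model only; the threshold, hold time, and waveform are assumptions, and the device physics, gate drive and self-powering circuitry are not modeled.

```python
def sscb_trip(terminal_voltage_samples, dt, v_threshold=2.0, hold_time=5e-6):
    """Minimal trip-decision sketch for a self-powered SSCB.

    Assumption-only model: the breaker trips when its terminal voltage stays
    above v_threshold (volts) for hold_time (seconds), mimicking the
    voltage-rise detection idea. Returns the trip time, or None if no fault.
    """
    above = 0.0
    for k, v in enumerate(terminal_voltage_samples):
        above = above + dt if v > v_threshold else 0.0
        if above >= hold_time:
            return k * dt           # time at which the switch is turned off
    return None                     # no fault detected

# usage: simulated step from 0.5 V (normal conduction drop) to 10 V (fault)
dt = 1e-6
samples = [0.5] * 50 + [10.0] * 50
print(sscb_trip(samples, dt))
```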
Linking WordNet to 3D Shapes
We describe a project to link the Princeton WordNet to 3D representations of real objects and scenes. The goal is to establish a dataset that helps us to understand how people categorize everyday common objects via their parts, attributes, and context. This paper describes the annotation and data collection effort so far as well as ideas for future work.
Knowledge, attitudes and practices of Ugandan men regarding prostate cancer.
BACKGROUND The incidence of prostate cancer in Uganda is one of the highest recorded in Africa. Prostate cancer is the most common cancer among men in Uganda. OBJECTIVE This study assessed the current knowledge, attitudes and practices of adult Ugandan men regarding prostate cancer. SUBJECTS AND METHODS We conducted a descriptive cross-sectional study using interviewer administered questionnaires and focus group discussions among 545 adult men aged 18-71 years, residing in Kampala, the capital of Uganda. Quantitative data were analyzed with SPSS version 20. Qualitative data were collected using audio recorded focus group discussions, transcribed and analyzed by clustering into themes. RESULTS The majority of the respondents (324, 59.4%) were aged 18-28 years, 295 (54.1%) had heard about prostate cancer and 250 (45.9%) had never heard about it. The commonest source of information about prostate cancer was the mass media. Only 12.5% of the respondents obtained information about prostate cancer from a health worker, 37.4% did not know the age group that prostate cancer affects and 50.2% could not identify any risk factor for prostate cancer. Participants in the focus group discussions confused prostate cancer with gonorrhea and had various misconceptions about the causes of prostate cancer. Only 10.3% of the respondents had good knowledge of the symptoms of prostate cancer and only 9% knew about serum prostate specific antigen (PSA) testing. Although 63.5% thought they were susceptible to prostate cancer, only 22.9% considered getting and only 3.5% had ever undergone a serum PSA test. CONCLUSION There was generally poor knowledge and several misconceptions regarding prostate cancer and screening in the study population. Community based health education programs about prostate cancer are greatly needed for this population.
T-drive: driving directions based on taxi trajectories
GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.
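A minimal sketch of the routing step is given below: a time-dependent Dijkstra search over a landmark graph whose edge costs are functions of departure time. The `travel_time_fn` callables stand in for the per-time-slot travel-time estimates produced by the variance-entropy-based clustering; the graph representation and names are assumptions of this sketch, not the paper's implementation.

```python
import heapq

def fastest_route(graph, source, target, depart):
    """Time-dependent Dijkstra over a landmark graph (sketch).

    graph[u] is a list of (v, travel_time_fn) pairs, where travel_time_fn(t)
    returns the expected traversal time when leaving landmark u at time t.
    """
    best = {source: depart}
    prev = {}
    heap = [(depart, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            break
        if t > best.get(u, float("inf")):
            continue                                   # stale heap entry
        for v, travel_time in graph.get(u, []):
            arrival = t + travel_time(t)
            if arrival < best.get(v, float("inf")):
                best[v], prev[v] = arrival, u
                heapq.heappush(heap, (arrival, v))
    if target not in best:
        return None, None                              # no route found
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), best[target]

# toy landmark graph: one edge is slower during a morning rush hour
rush = lambda t: 10.0 if 420 <= t % 1440 <= 540 else 4.0    # minutes
graph = {"A": [("B", rush), ("C", lambda t: 6.0)],
         "B": [("D", lambda t: 3.0)],
         "C": [("D", lambda t: 5.0)]}
print(fastest_route(graph, "A", "D", depart=430))           # avoids the congested A-B edge
```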
Know Your Mind: Adaptive Brain Signal Classification with Reinforced Attentive Convolutional Neural Networks
Electroencephalography (EEG) signals reflect activities in certain brain areas. Effective classification of time-varying EEG signals is still challenging. First, EEG signal processing and feature engineering are time-consuming and rely heavily on expert knowledge. In addition, most existing studies focus on domain-specific classification algorithms which may not be applicable to other domains. Moreover, the EEG signal usually has a low signal-to-noise ratio and can be easily corrupted. In this regard, we propose a generic EEG signal classification framework that accommodates a wide range of applications to address the aforementioned issues. The proposed framework develops a reinforced selective attention model to automatically choose the distinctive information among the raw EEG signals. A convolutional mapping operation is employed to dynamically transform the selected information to an over-complete feature space, wherein the implicit spatial dependency of the EEG sample distribution can be uncovered. We demonstrate the effectiveness of the proposed framework using three representative scenarios: intention recognition with motor imagery EEG, person identification, and neurological diagnosis. Three widely used public datasets and a local dataset are used for our evaluation. The experiments show that our framework outperforms the state-of-the-art baselines and achieves an accuracy of more than 97% on all the datasets with low latency and good resilience in handling complex EEG signals across various domains. These results confirm the suitability of the proposed generic approach for a range of problems in the realm of Brain-Computer Interface applications.
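The sketch below illustrates the two front-end steps named in the abstract: soft attentive selection over raw EEG channels followed by a convolutional mapping into an over-complete feature space. The attention logits are taken as given (the paper learns them with reinforcement learning, which is omitted here), and all shapes, filter counts and names are illustrative assumptions.

```python
import numpy as np

def attend_and_map(eeg, attn_logits, n_filters=64, ksize=5, rng=None):
    """Sketch of attentive selection plus convolutional mapping.

    eeg: (C, T) raw signal; attn_logits: (C,) unnormalized attention scores.
    Returns (n_filters, T) feature maps, with n_filters > C giving an
    over-complete representation. Random filters stand in for learned ones.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    w = np.exp(attn_logits - attn_logits.max())
    w /= w.sum()
    attended = w[:, None] * eeg                         # soft-selected channels
    filters = rng.normal(size=(n_filters, eeg.shape[0], ksize)) / np.sqrt(ksize)
    out = np.stack([
        sum(np.convolve(attended[c], filters[f, c], mode="same")
            for c in range(eeg.shape[0]))
        for f in range(n_filters)
    ])
    return out

# usage with random data standing in for a 14-channel EEG segment
rng = np.random.default_rng(0)
eeg = rng.normal(size=(14, 256))
logits = rng.normal(size=14)
print(attend_and_map(eeg, logits).shape)                # (64, 256)
```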
Continuous positive airway pressure with helmet versus mask in infants with bronchiolitis: an RCT.
BACKGROUND Noninvasive continuous positive airway pressure (CPAP) is usually applied with a nasal or facial mask to treat mild acute respiratory failure (ARF) in infants. A pediatric helmet has now been introduced in clinical practice to deliver CPAP. This study compared treatment failure rates during CPAP delivered by helmet or facial mask in infants with respiratory syncytial virus-induced ARF. METHODS In this multicenter randomized controlled trial, 30 infants with respiratory syncytial virus-induced ARF were randomized to receive CPAP by helmet (n = 17) or facial mask (n = 13). The primary endpoint was the treatment failure rate (failure being defined as intolerance or need for intubation). Secondary outcomes were CPAP application time, number of patients requiring sedation, and complications with each interface. RESULTS Compared with the facial mask, CPAP by helmet had a lower treatment failure rate due to intolerance (3/17 [17%] vs 7/13 [54%], P = .009), and fewer infants required sedation (6/17 [35%] vs 13/13 [100%], P = .023); the intubation rates were similar. In successfully treated patients, CPAP resulted in better gas exchange and breathing pattern with both interfaces. No major complications due to the interfaces occurred, but CPAP by mask had higher rates of cutaneous sores and leaks. CONCLUSIONS These findings confirm that CPAP delivered by helmet is better tolerated than CPAP delivered by facial mask and requires less sedation. In addition, it is safe to use and free from adverse events, even in a prolonged clinical setting.
Non-Invasive Blood Glucose Measurement
Diabetes has emerged as a major healthcare problem in India. Today, approximately 8.3% of the global adult population suffers from diabetes, and India is one of the most diabetes-affected countries in the world. The technologies currently available in the market are invasive. Because invasive methods cause pain, are time consuming and expensive, and carry a potential risk of spreading infectious diseases such as hepatitis and HIV, continuous monitoring is not practical with them. Nowadays there is a tremendous increase in the use of electrical and electronic equipment in the medical field for clinical and research purposes; biomedical equipment therefore has a greater role in solving medical problems and enhancing quality of life. Hence there is a great demand for a reliable, instantaneous, cost-effective and comfortable measurement system for the detection of blood glucose concentration. A non-invasive blood glucose measurement device is one such system, which can be used for continuous monitoring of glucose levels in the human body.
Evaluation of an intervention to reduce sun exposure in children: design and baseline results.
The Kidskin Study is a 5-year intervention study (1995-1999) involving 1,776 5- and 6-year-old children attending 33 primary schools in Perth, Western Australia. The aim of the study is to design, implement, and evaluate an intervention to reduce sun exposure in young children. There are three study groups: a control group, a "moderate intervention" group, and a "high intervention" group. The control schools receive the standard Western Australian health education curriculum, while the moderate and high intervention schools receive a specially designed curricular intervention. In addition, children in the high intervention group receive program materials over the summer holidays, when exposure is likely to be highest, and are offered sun-protective swimwear at low cost. The main outcome measure is the number of nevi on the back. Other outcomes include nevi on the chest (boys only), face, and arms, levels of suntanning, degree of freckling, and sun-related behaviors. At baseline, the three groups were similar with respect to nevi and freckling after adjustment for observer and month of observation. Sun exposure was slightly higher in the high intervention group. The groups were also similar with respect to most potential confounders, although they differed with respect to Southern European ethnicity and parental education.
Toward a Theory of Culturally Relevant Pedagogy
Power system blackouts - literature review
Increasing electrical energy demand, modern lifestyles and energy usage patterns have made the world fully dependent on power systems. This has placed mandatory requirements on operators to maintain high reliability and stability of the power grid. However, the power system is a highly nonlinear system whose operating conditions change continuously. It is therefore very challenging and uneconomical to make the system stable against all disturbances; the system is usually designed to handle a single outage at a time. Nevertheless, during the last decade several major blackouts were reported, and all of them started with single outages. Each major blackout was reported to the public in a mandatory and transparent manner. Properly written blackout reports help to minimize operational risk by strengthening the system and its operations based on selected high-risk contingencies. Over the last decade, several major blackouts have been reported separately in many research papers. This paper compiles a collection of properly reported literature on power system stability and reliability, including the history of blackouts. Critical comments on root causes, lessons learnt from the blackouts and possible solutions are addressed while briefly discussing the blackout events presented in the published literature.
Venous thromboembolism pharmacy intervention management program with an active, multifaceted approach reduces preventable venous thromboembolism and increases appropriate prophylaxis.
Two concepts relating to venous thromboembolism (VTE) prevention have recently emerged-"appropriate" prophylaxis and "preventable" VTE. We evaluated whether a human alert, as part of a pharmacy intervention program, can increase appropriate prophylaxis and decrease preventable symptomatic VTE in hospitalized patients. This prospective study with retrospective data collection was conducted utilizing data from 1879 patients in 2006 as a control cohort. The intervention cohort data were from 1646 patients during 2007, after program implementation. The rate of appropriate prophylaxis increased from 23.8% in 2006 to 37.9% in 2007 (odds ratio 1.8; 95% confidence interval [CI] = 1.6-2.1; P < .0001). Preventable VTE incidence was reduced by 74% (95% CI = 44%-88%) from 18.6 to 4.9 per 1000 patient discharges in 2006 and 2007, respectively (P = .0006). In conclusion, a pharmacy-led multifaceted intervention can significantly increase the rates of appropriate prophylaxis and significantly reduce the incidence of preventable VTE in hospitalized patients.
Combination of flavonoids with Centella asiatica and Melilotus for diabetic cystoid macular edema without macular thickening.
PURPOSE The purpose of this study was to evaluate the orally administered combination of the flavonoids desmin and troxerutin with Centella asiatica and Melilotus for the treatment of diabetic cystoid macular edema (CME) without macular thickening. METHODS In this prospective, interventional, controlled study, 40 consecutive patients with type 2 diabetes and CME without macular thickening at optical coherence tomography were randomized into 2 groups of 20 subjects each (treatment and control groups). The treatment group received an oral combination of desmin (300 mg/day) and troxerutin (300 mg/day) with C. asiatica (30 mg/day) and Melilotus (160 mg/day) for 14 months. Best-corrected visual acuity, central retinal thickness at optical coherence tomography, retinal sensitivity (RS), and stability of fixation at microperimetry were measured at baseline and monthly for 14 months. RESULTS In both groups, mean best-corrected visual acuity, central retinal thickness, and stability of fixation did not show differences during follow-up (P > 0.05). At month 14, the RS was greater in the treated group (P = 0.01) and was significantly reduced in the control group only (P < 0.001). Five eyes in the study group showed disappearance of the intraretinal cysts after a mean time of 3.5 ± 0.3 months, which persisted in the following months. These 5 eyes presented a greater RS at each follow-up visit when compared with the control group (P < 0.05). Anatomic improvement was never reported in the control group. CONCLUSIONS The orally administered combination of flavonoids, C. asiatica, and Melilotus could be beneficial in preserving RS in diabetic CME without macular thickening.
Squeeze-SegNet: A new fast Deep Convolutional Neural Network for Semantic Segmentation
Recent research on Deep Convolutional Neural Networks has focused on improving accuracy and has provided significant advances. While such networks were once limited to classification tasks, contributions from the scientific community have made them very useful in higher-level tasks such as object detection and pixel-wise semantic segmentation. Brilliant ideas in the field of semantic segmentation with deep learning have thus advanced the state of the art in accuracy; however, these architectures are very difficult to deploy in embedded systems, as is the case for autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture is based on an encoder-decoder style. We use a SqueezeNet-like encoder and a decoder formed by our proposed squeeze-decoder module and an upsampling layer that uses downsampling indices as in SegNet, and we add a deconvolution layer to provide the final multi-channel feature map. On datasets like CamVid and Cityscapes, our network achieves SegNet-level accuracy with roughly 10 times fewer parameters than SegNet.
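To make the encoder/decoder building blocks concrete, here is a hedged PyTorch sketch of a SqueezeNet-style fire module and a squeeze-decoder block that unpools with the encoder's max-pooling indices, in the spirit of the architecture described above. Channel sizes, module names and the exact layer ordering are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style fire module: squeeze with 1x1, expand with 1x1 and 3x3."""
    def __init__(self, c_in, squeeze, expand):
        super().__init__()
        self.squeeze = nn.Conv2d(c_in, squeeze, 1)
        self.e1 = nn.Conv2d(squeeze, expand, 1)
        self.e3 = nn.Conv2d(squeeze, expand, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        # concatenation along channels gives 2*expand output channels
        return torch.cat([self.act(self.e1(s)), self.act(self.e3(s))], dim=1)

class SqueezeDecoderBlock(nn.Module):
    """Hypothetical squeeze-decoder block: unpool with encoder indices, then fire."""
    def __init__(self, c_in, squeeze, expand):
        super().__init__()
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.fire = Fire(c_in, squeeze, expand)

    def forward(self, x, indices):
        return self.fire(self.unpool(x, indices))
```

In an encoder built from such fire modules, the pooling layers would be `nn.MaxPool2d(2, stride=2, return_indices=True)` so that their indices can be passed to the matching decoder blocks, mirroring the SegNet-style upsampling mentioned in the abstract.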
A Dual-Network Progressive Approach to Weakly Supervised Object Detection
A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels. We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.
Towards Better Text Understanding and Retrieval through Kernel Entity Salience Modeling
This paper presents a Kernel Entity Salience Model (KESM) that improves text understanding and retrieval by better estimating entity salience (importance) in documents. KESM represents entities by knowledge enriched distributed representations, models the interactions between entities and words by kernels, and combines the kernel scores to estimate entity salience. The whole model is learned end-to-end using entity salience labels. The salience model also improves ad hoc search accuracy, providing effective ranking features by modeling the salience of query entities in candidate documents. Our experiments on two entity salience corpora and two TREC ad hoc search datasets demonstrate the effectiveness of KESM over frequency-based and feature-based methods. We also provide examples showing how KESM conveys its text understanding ability learned from entity salience to search.
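The following sketch illustrates kernel-based salience scoring in the spirit of KESM: cosine similarities between an entity embedding and the document's word embeddings are pooled by a bank of RBF kernels, and the log kernel masses serve as features for a salience scorer. The kernel means, width and log1p pooling are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

def kernel_salience_features(entity_vec, word_vecs,
                             mus=(-0.5, 0.0, 0.5, 1.0), sigma=0.1):
    """Kernel pooling sketch: one feature per RBF kernel over cosine similarities.

    entity_vec: (d,) entity embedding; word_vecs: (n_words, d) word embeddings.
    The returned features would be fed to a linear (learned) salience scorer.
    """
    def norm(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-9)
    sims = norm(word_vecs) @ norm(entity_vec)            # (n_words,) cosine scores
    feats = []
    for mu in mus:
        mass = np.exp(-(sims - mu) ** 2 / (2 * sigma ** 2)).sum()
        feats.append(np.log1p(mass))
    return np.array(feats)

# usage with random embeddings standing in for learned representations
rng = np.random.default_rng(0)
print(kernel_salience_features(rng.normal(size=50), rng.normal(size=(200, 50))))
```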
Regulatory Effect of Iguratimod on the Balance of Th Subsets and Inhibition of Inflammatory Cytokines in Patients with Rheumatoid Arthritis
OBJECTIVE To expand upon the role of iguratimod (T-614) in the treatment of rheumatoid arthritis (RA), we investigated whether the imbalance of Th1, Th17, follicular helper T cells (Tfh), and regulatory T cells (Treg) could be reversed by iguratimod, and the clinical implications of this reversal. METHODS In this trial, 74 patients were randomized into an iguratimod-treated group (group A) and a control group (group B) for a 24-week treatment period. In the subsequent 28 weeks, both groups were given iguratimod. Frequencies of Th1, Th17, Tfh, and Treg cells were quantified using flow cytometry, and serum cytokines were detected by enzyme-linked immunosorbent assay. mRNA expression of cytokines and transcription factors was quantified by RT-PCR. The composite Disease Activity Score, erythrocyte sedimentation rate, and C-reactive protein were assessed at each visit. RESULTS The clinical scores demonstrated effective suppression of disease after treatment with iguratimod. In addition, iguratimod downregulated Th1- and Th17-type responses and upregulated Treg. Furthermore, the levels of Th1-, Th17-, and Tfh-associated inflammatory cytokines and transcription factors were reduced after treatment with iguratimod, while the levels of Treg-associated cytokines and transcription factors were increased.
Prognostic impact of the addition of ventilatory efficiency to the Seattle Heart Failure Model in patients with heart failure.
BACKGROUND The Seattle Heart Failure Model (SHFM) is a multivariable model with proven prognostic value. Cardiopulmonary exercise testing (CPX) and neurohormonal markers (eg, B-type natriuretic peptide [BNP]) are also well accepted assessment techniques in the HF population and have both demonstrated robust prognostic value. The purpose of this investigation was to assess the combined prognostic value of the SHFM and CPX. METHODS AND RESULTS This study included all 453 patients enrolled in the Multicenter In-Sync Randomized Clinical Evaluation (MIRACLE) trial. Baseline SHFM and CPX were used. Both peak oxygen consumption (VO(2)) and ventilatory efficiency (VE/VCO(2)) were determined. In a univariate Cox proportional model analysis, SHFM and log-transformed peak VE/VCO(2) were stronger predictors of 6-month mortality (both P < .001) than log-transformed BNP (P = .013) or peak VO(2) (P = .066). In a multivariable Cox proportional hazards model, neither peak VO(2) nor BNP were independent predictors when added to the SHFM (P > .1). Conversely, peak VE/VCO(2) was a strong independent predictor when added to the SHFM, with an increase in the Cox proportional hazards model Wald χ(2) from 22.7 for SHFM alone to 33.8 with inclusion of log-transformed peak VE/VCO(2) (P < .0001) and significant changes in the net reclassification improvement and integrated discrimination index (both P < .002). CONCLUSIONS These results indicate that the SHFM and peak VE/VCO(2) work synergistically to improve prognostic resolution. Further investigation is needed to continue to optimize multivariable prognostic models in patients with HF, a chronic disease population that continues to suffer from a high adverse event rate despite advances in medical care.
A Fuzzy Game Theoretic Approach to Multi-Agent Coordination
Game theoretic decision making is a practical approach to multiagent coordination. Rational agents may make decisions based on different principles of rationality, assumptions that usually involve knowledge of how other agents might move. After formulating a game matrix whose utility entries correspond to possible combinations of moves from both agents, the agents can reason about which combination is the equilibrium. Most previous game theoretic works treat the utility values qualitatively (i.e., they consider only the order of the utility values). This is not practical, since the utility values are usually approximate and the differences between utility values are somewhat vague. In this paper, we present a fuzzy game theoretic decision making mechanism that can deal with uncertain utilities. We construct a fuzzy game framework grounded in both fuzzy theory and game theory. The notions of fuzzy dominance relations, fuzzy Nash equilibrium, and fuzzy strategies are defined, and fuzzy reasoning is carried out in agent decision making. We show that a fuzzy strategy can perform better than a mixed strategy from traditional game theory when dealing with games that have more than one Nash equilibrium.
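As a toy illustration of reasoning with vague payoffs, the sketch below represents utilities as triangular fuzzy numbers and checks a fuzzy dominance relation by centroid ranking. The game, the payoff values and the centroid ordering are assumptions of this sketch; the paper's definitions of fuzzy dominance, fuzzy Nash equilibrium and fuzzy strategies are richer.

```python
def defuzzify(tri):
    """Centroid of a triangular fuzzy number (a, b, c) with peak at b."""
    a, b, c = tri
    return (a + b + c) / 3.0

def fuzzy_dominates(u1, u2):
    """True if fuzzy utility u1 is ranked above u2 under a simple centroid
    ordering -- one of many possible rankings, used here for illustration."""
    return defuzzify(u1) > defuzzify(u2)

# a 2x2 game with triangular fuzzy payoffs for the row player (illustrative)
payoffs = {("defect",    "cooperate"): (3.5, 4.0, 4.5),
           ("defect",    "defect"):    (0.5, 1.0, 1.5),
           ("cooperate", "cooperate"): (2.5, 3.0, 3.5),
           ("cooperate", "defect"):    (0.0, 0.5, 1.0)}

# does 'defect' fuzzily dominate 'cooperate' against every column move?
dominates = all(fuzzy_dominates(payoffs[("defect", col)],
                                payoffs[("cooperate", col)])
                for col in ("cooperate", "defect"))
print(dominates)    # True for these illustrative payoffs
```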
Aminophylline inhibits adaptation to ischaemia during angioplasty. Role of adenosine in ischaemic preconditioning.
UNLABELLED The ability of brief periods of ischaemia to protect the heart from subsequent ischaemia has been termed "ischaemic preconditioning". In order to assess the role of adenosine receptor stimulation in this phenomenon, we studied the ischaemic preconditioning effect during angioplasty in 10 control patients and in 10 patients pre-treated with 5 mg.kg-1 aminophylline, an adenosine receptor antagonist. The ischaemic response was assessed by analysis of the intracoronary electrocardiogram every 10 s during three consecutive inflations of 90 s with a reperfusion time of 180 s. The severity of transmural local ischaemia was expressed as the magnitude of the ST segment shift in relation to the time during each inflation. The control patients showed an improved tolerance to myocardial ischaemia: the ST segment shift decreased from 1.42 +/- 0.49 mV at the end of the first inflation to 1.03 +/- 0.44 mV at the end of the third inflation (P < 0.001). However, in patients pre-treated with aminophylline, the ischaemic response was not significantly different across the three inflations. CONCLUSION Aminophylline inhibits ischaemic preconditioning, as assessed by analysis of intracoronary ST segment changes during angioplasty. This suggests that ischaemic preconditioning is mediated by adenosine receptor stimulation in humans.
Response of Flax Plant (Linum usitatissimum L.) to Treatments with Mineral and Bio-Fertilizers from Nitrogen and Phosphorus
Field experiments were carried out at Sakha Agricultural Research Station, Kafr-El-Sheikh, Egypt during the two growing winter seasons of 2007/2008 and 2008/2009 in order to study the effect of different levels of mineral fertilizers from nitrogen and phosphorus (25, 50 and 100% of the recommended dose) alone or in combination with a mixture of biofertilizers containing nitrogen fixers (nitrobein) and phosphate dissolving bacteria (phosphorein) on morphological characters and yield of flax plant cv. Sakha 1 from seeds, oil, straw and fibers. Moreover, anatomy of the main stem was also investigated. The obtained results indicated that increasing level of the used mineral fertilizers induced significant increases in all investigated morphological and yield characters except that of number of seeds per capsule and seed oil percentage which showed no significant effect in this respect. The rate of promotion increased gradually as the rate of mineral fertilizers increased up to 100% of the recommended dose. It is clear that raising the level of the used mineral fertilizers from 25 to 100% of the recommended dose induced significant increases of 48.7, 46.6, 55.0, 14.1, 37.3, 19.1, 68.6, 45.4, 56.8, 44.5, 43.5, 42.3, 38.7 and 47.5% for plant height, technical length, length of fruiting zone, stem diameter, number of capsules / plant, weight of 1000 seeds, seed yield / plant, seed yield / feddan, seed oil yield / feddan, straw yield / plant , straw yield / feddan, fiber yield / plant, fiber yield / feddan and fiber length of flax plant cv. Sakha 1; respectively. Data also revealed that flax plants obtained from biofertilized seeds and grown in soil inoculated with biofertilizers (nitrobein + phosphorein) showed significant increases in all investigated morphological characters and in most of yield characters when compared with control plants which were obtained from uninoculated seeds and grown in uninoculated soil. The increments in the mentioned characters as a result of biofertilization treatment were 17.8, 17.6, 22.2, 6.2, 15.6, 8.5, 26.8, 20.5, 25.1, 17.3, 15.4, 15.9, 14.3 and 18.5% for plant height, technical length, length of fruiting zone, stem diameter, number of capsules / plant, weight of 1000 seeds, seed yield/plant, seed yield/feddan, seed oil yield/feddan , straw yield/plant, straw yield/feddan, fiber yield/plant, fiber yield/feddan and fiber length of flax plant cv. Sakha 1; respectively. The interaction between the used levels of mineral fertilizers and biofertilizers proved significant effect for the above mentioned characters. It is noted that the promotion induced by raising the level of the used mineral fertilizers was equal to that induced by biofertilizers treatment which substituted half of the recommended dose from the used NP and this decreased the environmental pollution caused by repeated application of mineral fertilizers. The effect of the used mineral and biofertilizers on anatomical structure of the main stem of flax plant cv. Sakha 1 was also investigated. [Journal of American Science. 2010;6(10):207-217]. (ISSN: 1545-1003).
ANALYSIS OF VITAL SIGNS MONITORING USING AN IR-UWB RADAR
Ultra-wide Band (UWB) technology is a new, useful and safe technology in the field of wireless body networks. This paper focuses on the feasibility of estimating vital signs — specifically breathing rate and heartbeat frequency — from the spectrum of recorded waveforms, using an impulse-radio (IR) UWB radar. To this end, an analytical model is developed to perform and interpret the spectral analysis. Both the harmonics and the intermodulation between respiration and heart signals are addressed. Simulations have been performed to demonstrate how they affect the detection of vital signs and also to analyze the influence of the pulse waveform. A filter to cancel out breathing harmonics is also proposed to improve heart rate detection. The results of the experiments are presented under different scenarios which demonstrate the accuracy of the proposed technique for determining respiration and heartbeat rates. It has been shown that an IR-UWB radar can meet the requirements of typical biomedical applications such as non-invasive heart and respiration rate monitoring.
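A minimal version of the spectral estimation described above is sketched below: the slow-time radar signal is transformed with an FFT, the respiration peak is picked in a low-frequency band, its harmonics are masked, and the heart-rate peak is then picked in a higher band. The band limits, harmonic mask width and the simulated waveform are illustrative assumptions.

```python
import numpy as np

def vital_rates(signal, fs, resp_band=(0.1, 0.5), heart_band=(0.8, 2.0)):
    """FFT-based respiration and heart rate estimation from a slow-time radar
    signal (sketch). Breathing harmonics falling inside the heart band are
    masked before picking the cardiac peak."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def peak(lo, hi, mask=None):
        band = (freqs >= lo) & (freqs <= hi)
        if mask is not None:
            band &= ~mask
        return freqs[band][np.argmax(spec[band])]

    f_resp = peak(*resp_band)
    harmonics = np.zeros_like(freqs, dtype=bool)        # mask breathing harmonics
    for k in range(2, 6):
        harmonics |= np.abs(freqs - k * f_resp) < 0.05
    f_heart = peak(*heart_band, mask=harmonics)
    return f_resp * 60.0, f_heart * 60.0                # breaths/min, beats/min

# usage with a synthetic 30 s record: 0.3 Hz breathing + 1.3 Hz heartbeat
fs = 20.0
t = np.arange(0, 30, 1 / fs)
x = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.sin(2 * np.pi * 1.3 * t)
print(vital_rates(x, fs))                               # approx. (18.0, 78.0)
```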
Is computer science science?
Computer science meets every criterion for being a science, but it has a self-inflicted credibility problem.
A New Multidisciplinary Home Care Telemedicine System to Monitor Stable Chronic Human Immunodeficiency Virus-Infected Patients: A Randomized Study
BACKGROUND Antiretroviral therapy has changed the natural history of human immunodeficiency virus (HIV) infection in developed countries, where it has become a chronic disease. This clinical scenario requires a new approach to simplify follow-up appointments and facilitate access to healthcare professionals. METHODOLOGY We developed a new internet-based home care model covering the entire management of chronic HIV-infected patients. This was called Virtual Hospital. We report the results of a prospective randomised study performed over two years, comparing standard care received by HIV-infected patients with Virtual Hospital care. HIV-infected patients with access to a computer and broadband were randomised to be monitored either through Virtual Hospital (Arm I) or through standard care at the day hospital (Arm II). After one year of follow up, patients switched their care to the other arm. Virtual Hospital offered four main services: Virtual Consultations, Telepharmacy, Virtual Library and Virtual Community. A technical and clinical evaluation of Virtual Hospital was carried out. FINDINGS Of the 83 randomised patients, 42 were monitored during the first year through Virtual Hospital (Arm I) and 41 through standard care (Arm II). Baseline characteristics of patients were similar in the two arms. The level of technical satisfaction with the virtual system was high: 85% of patients considered that Virtual Hospital improved their access to clinical data and they felt comfortable with the videoconference system. Neither clinical parameters [level of CD4+ T lymphocytes, proportion of patients with an undetectable level of viral load (p = 0.21) and compliance levels >90% (p = 0.58)] nor the evaluation of quality of life or psychological questionnaires changed significantly between the two types of care. CONCLUSIONS Virtual Hospital is a feasible and safe tool for the multidisciplinary home care of chronic HIV patients. Telemedicine should be considered as an appropriate support service for the management of chronic HIV infection. TRIAL REGISTRATION Clinical-Trials.gov: NCT01117675.
Information Security in Big Data: Privacy and Data Mining
The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way as to perform data mining algorithms effectively without compromising the security of sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact unwanted disclosure of sensitive information may also happen in the process of data collecting, data publishing, and information (i.e., data mining result) delivering. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss the privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review game theoretical approaches, which have been proposed for analyzing the interactions among the different users in a data mining scenario, each of whom has his own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.
Camouflage treatment of skeletal class III malocclusion with asymmetry using a bone-borne rapid maxillary expander.
This case report presents the successful use of palatal mini-implants for rapid maxillary expansion and mandibular distalization in a skeletal Class III malocclusion. The patient was a 13-year-old girl with the chief complaint of facial asymmetry and a protruded chin. Camouflage orthodontic treatment was chosen, acknowledging the possibility of need for orthognathic surgery after completion of her growth. A bone-borne rapid expander (BBRME) was used to correct the transverse discrepancy and was then used as indirect anchorage for distalization of the lower dentition with Class III elastics. As a result, a Class I occlusion with favorable inclination of the upper teeth was achieved without any adverse effects. The total treatment period was 25 months. Therefore, BBRME can be considered an alternative treatment in skeletal Class III malocclusion.
Compliance of IEEE 802.22 WRAN for field area network in smart grid
The distributed power system network is becoming complex, and it will require high-speed, reliable and secure communication systems for managing intermittent generation in coordination with centralised power generation, including load control. Cognitive Radio (CR) is highly favourable for providing communications in the Smart Grid by using spectrum resources opportunistically. The IEEE 802.22 Wireless Regional Area Network (WRAN), which has CR capabilities, uses vacant channels opportunistically in the 54 MHz to 862 MHz frequency range occupied by TV bands. This paper provides a comprehensive review of using IEEE 802.22 for the Field Area Network in the power system network using spectrum sensing (CR-based communication). The spectrum sensing technique(s) at the Base Station (BS) and Customer Premises Equipment (CPE) for detecting the presence of an incumbent, in order to mitigate interference, are also studied. The availability of backup and candidate channels is updated during the "Quiet Period" for further use (spectrum switching and management) with geolocation capabilities. The use of IEEE 802.22 for (a) radio-scene analysis, (b) channel identification, and (c) dynamic spectrum management is examined for applications in power management.
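As an illustration of the incumbent-detection step, here is a hedged energy-detection sketch of the kind a BS or CPE could run during sensing: the measured in-band energy is compared with a threshold set for a target false-alarm probability under a Gaussian approximation. This is a generic textbook detector with assumed parameters, not the normative 802.22 sensing procedure.

```python
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_power, pfa=0.01):
    """Energy-detection sketch for CR-based spectrum sensing.

    Decides whether an incumbent (e.g. a TV transmitter) occupies the channel
    by comparing the average sample energy with a threshold chosen for a
    target false-alarm probability pfa. The Gaussian approximation of the
    noise-only test statistic is an assumption of this sketch.
    """
    n = len(samples)
    energy = np.mean(np.abs(samples) ** 2)
    threshold = noise_power * (1.0 + norm.isf(pfa) * np.sqrt(2.0 / n))
    return energy > threshold        # True -> incumbent detected, vacate channel

# usage: noise-only samples should (mostly) stay below the threshold
rng = np.random.default_rng(0)
noise = rng.normal(scale=1.0, size=4096)
print(energy_detect(noise, noise_power=1.0))            # expected: False
```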
CLARE: A Joint Approach to Label Classification and Tag Recommendation
Data classification and tag recommendation are both important and challenging tasks in social media. These two tasks are often considered independently and most efforts have been made to tackle them separately. However, labels in data classification and tags in tag recommendation are inherently related. For example, a Youtube video annotated with NCAA, stadium, pac12 is likely to be labeled as football, while a video/image with the class label of coast is likely to be tagged with beach, sea, water and sand. The existence of relations between labels and tags motivates us to jointly perform classification and tag recommendation for social media data in this paper. In particular, we provide a principled way to capture the relations between labels and tags, and propose a novel framework CLARE, which fuses data CLAssification and tag REcommendation into a coherent model. With experiments on three social media datasets, we demonstrate that the proposed framework CLARE achieves superior performance on both tasks compared to the state-of-the-art methods.
3DTI-Net: Learn Inner Transform Invariant 3D Geometry Features using Dynamic GCN
Deep learning on point clouds has made a lot of progress recently. Many deep learning frameworks dedicated to point clouds, such as PointNet and PointNet++, have shown advantages in accuracy and speed compared to those using traditional 3D convolution algorithms. However, nearly all of these methods face a challenge: since the coordinates of the point cloud are determined by the coordinate system, they cannot handle the problem of 3D transform invariance properly. In this paper, we propose a general framework for point cloud learning. We achieve transform invariance by learning inner 3D geometry features based on a local graph representation, and propose a feature extraction network based on a graph convolution network. Through experiments on classification and segmentation tasks, our method achieves state-of-the-art performance in rotated 3D object classification, and achieves performance competitive with the state of the art in classification and segmentation tasks with fixed coordinate values.
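To illustrate what "inner" (transform-invariant) geometry features look like, the sketch below builds a k-nearest-neighbour graph and, for each point, keeps only quantities that are unchanged by rigid transforms: neighbour distances and the local Gram/covariance spectrum. This mirrors the general idea, not the paper's exact network input; k and the feature choices are assumptions.

```python
import numpy as np

def local_invariant_features(points, k=8):
    """Per-point rotation/translation-invariant local features (sketch).

    For each point, take its k nearest neighbours, centre them, and keep
    quantities that depend only on inner products, which any rigid transform
    of the cloud leaves unchanged. Returns an (n, k + 3) feature array.
    """
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]             # k nearest neighbours
    feats = []
    for i in range(n):
        nb = points[idx[i]] - points[i]                   # centred neighbourhood
        dists = np.sort(np.linalg.norm(nb, axis=1))       # invariant: distances
        eig = np.sort(np.linalg.eigvalsh(nb.T @ nb))      # invariant: Gram spectrum
        feats.append(np.concatenate([dists, eig]))
    return np.array(feats)

# usage: features are (numerically) unchanged by a random rigid rotation
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]              # random orthogonal matrix
print(np.allclose(local_invariant_features(cloud),
                  local_invariant_features(cloud @ R.T), atol=1e-8))  # expected: True
```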
A Blended Deep Learning Approach for Predicting User Intended Actions
User intended actions are widely seen in many areas. Forecasting these actions and taking proactive measures to optimize business outcomes is a crucial step towards sustaining steady business growth. In this work, we focus on predicting attrition, which is a typical user intended action. Conventional attrition predictive modeling strategies suffer from a few inherent drawbacks. To overcome these limitations, we propose a novel end-to-end learning scheme to keep track of the evolution of attrition patterns for predictive modeling. It integrates user activity logs and dynamic and static user profiles based on multi-path learning. It exploits historical user records by establishing a decaying multi-snapshot technique. And finally, it employs the preceding user intentions by guiding them into the subsequent learning procedure. As a result, it addresses all the disadvantages of conventional methods. We evaluate our methodology on two public data repositories and one private user usage dataset provided by Adobe Creative Cloud. The extensive experiments demonstrate that it offers appealing performance in comparison with several existing approaches, as rated by different popular metrics. Furthermore, we introduce an advanced interpretation and visualization strategy to effectively characterize the periodicity of user activity logs. It can help to pinpoint important factors that are critical to user attrition and retention and thus suggests actionable improvement targets for business practice. Our work will provide useful insights into the prediction and elucidation of other user intended actions as well.
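The decaying multi-snapshot idea can be sketched as follows: a user's per-period activity features are split into a few windows, and older windows are down-weighted exponentially before being concatenated into the model input. The window count, decay rate and array shapes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def decayed_snapshots(activity_counts, n_snapshots=4, decay=0.7):
    """Decaying multi-snapshot sketch for one user's activity log.

    activity_counts: (T, F) array of per-period activity features, most recent
    period last, with T >= n_snapshots assumed. The history is split into
    n_snapshots windows; older windows are down-weighted so recent behaviour
    dominates the concatenated (n_snapshots * F,) representation.
    """
    t, f = activity_counts.shape
    windows = np.array_split(np.arange(t), n_snapshots)
    snaps = []
    for j, w in enumerate(reversed(windows)):             # j = 0 is the most recent
        snaps.append((decay ** j) * activity_counts[w].mean(axis=0))
    return np.concatenate(snaps)

# usage: 12 periods of 5 activity features for one (synthetic) user
logs = np.random.default_rng(0).poisson(2.0, size=(12, 5))
print(decayed_snapshots(logs).shape)                       # (20,)
```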
Generalizing Data to Provide Anonymity when Disclosing Information (Abstract)
The proliferation of information on the Internet and access to fast computers with large storage capacities has increased the volume of information collected and disseminated about individuals. The existence of these other data sources makes it much easier to re-identify individuals whose private information is released in data believed to be anonymous. At the same time, increasing demands are made on organizations to release individualized data rather than aggregate statistical information. Even when explicit identifiers, such as name and phone number, are removed or encrypted when releasing individualized data, other characteristic data, which we term quasi-identifiers, can exist which allow the data recipient to re-identify individuals to whom the data refer. In this paper, we provide a computational disclosure technique for releasing information from a private table such that the identity of any individual to whom the released data refer cannot be definitively recognized. Our approach protects against linking to other data. It is based on the concepts of generalization, by which stored values can be replaced with semantically consistent and truthful but less precise alternatives, and of k-anonymity. A table is said to provide k-anonymity when the contained data do not allow the recipient to associate the released information to a set of individuals smaller than k. We introduce the notions of generalized table and of minimal generalization of a table with respect to a k-anonymity requirement. As an optimization problem, the objective is to minimally distort the data while providing adequate protection. We describe an algorithm that, given a table, efficiently computes a preferred minimal generalization to provide anonymity.
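A minimal Python sketch of the k-anonymity check described above (the table layout, column names, and the is_k_anonymous helper are illustrative, not taken from the paper; the generalization step is assumed to have already replaced precise values with coarser ones):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k rows of the released table."""
    counts = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(c >= k for c in counts.values())

# Toy example: ZIP code generalized to 3 digits, age generalized to a decade.
released = [
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
    {"zip": "021**", "age": "30-39", "diagnosis": "asthma"},
    {"zip": "021**", "age": "30-39", "diagnosis": "flu"},
]
print(is_k_anonymous(released, ["zip", "age"], k=3))  # True
```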
Factors affecting shrivelling and friction discolouration of pears (Pyrus communis L. )
The influence of variables (harvesting date, fruit size and storage duration) has been studied on the occurrence of and susceptibility to shrivelling in ‘Packham’s Triumph’, ‘Beurré Bosc’ and ‘Forelle’ pears. The periods during simulated postharvest handling that proved most conducive to shrivelling were those of high temperatures (i.e. after harvest, packing, recooling and shelf life). These short periods of relatively high temperatures increased the rate of transpiration to such a level that high rates of weight loss were recorded. The contribution of transpiration to the total degree of weight loss far outweighed that of respiration. Early harvested (i.e. fruit of inferior maturity) and small fruit were more susceptible to shrivel and showed visual signs of shrivel before larger, more mature fruit did. The surface area to volume ratio determined the rate at which transpiration could take place: the smaller the fruit, the bigger the ratio and the higher the risk of shrivel. This renders the pear neck very susceptible to weight loss and subsequent shrivel. The highest rate of weight loss, at 18 °C, was recorded for ‘Beurré Bosc’ (0.43 %/day) followed by ‘Forelle’ (0.35 %/day). However, ‘Packham’s Triumph’ required the least amount of time to display signs of shrivel (10 days). This can be attributed to the prominent shape of this particular pear’s neck. The average percentage moisture loss required before shrivel was visible was 2.5 %, 3.9 % and 4.4 % for ‘Packham’s Triumph’, ‘Beurré Bosc’ and ‘Forelle’, respectively. Sealing of the fruit stem reduced the amount of moisture lost through it and extended the possible shelf life of the pears. This was most notable in ‘Beurré Bosc’ and ‘Packham’s Triumph’.
Interaction between body mass index and hormone-receptor status as a prognostic factor in lymph-node-positive breast cancer
The aim of this study was to determine the relationship between the body mass index (BMI) at a breast cancer diagnosis and various factors including the hormone-receptor, menopause, and lymph-node status, and identify if there is a specific patient subgroup for which the BMI has an effect on the breast cancer prognosis. We retrospectively analyzed the data of 8,742 patients with non-metastatic invasive breast cancer from the research database of Asan Medical Center. The overall survival (OS) and breast-cancer-specific survival (BCSS) outcomes were compared among BMI groups using the Kaplan-Meier method and Cox proportional-hazards regression models with an interaction term. There was a significant interaction between BMI and hormone-receptor status for the OS (P = 0.029), and BCSS (P = 0.013) in lymph-node-positive breast cancers. Obesity in hormone-receptor-positive breast cancer showed a poorer OS (adjusted hazard ratio [HR] = 1.51, 95% confidence interval [CI] = 0.92 to 2.48) and significantly poorer BCSS (HR = 1.80, 95% CI = 1.08 to 2.99). In contrast, a high BMI in hormone-receptor-negative breast cancer revealed a better OS (HR = 0.44, 95% CI = 0.16 to 1.19) and BCSS (HR = 0.53, 95% CI = 0.19 to 1.44). Being underweight (BMI < 18.50 kg/m2) with hormone-receptor-negative breast cancer was associated with a significantly worse OS (HR = 1.98, 95% CI = 1.00-3.95) and BCSS (HR = 2.24, 95% CI = 1.12-4.47). There was no significant interaction found between the BMI and hormone-receptor status in the lymph-node-negative setting, and BMI did not interact with the menopause status in any subgroup. In conclusion, BMI interacts with the hormone-receptor status in a lymph-node-positive setting, thereby playing a role in the prognosis of breast cancer.
pH-Dependent Changes in the Mechanisms of Transport of Chlorine e6 and Its Derivatives in the Blood
We studied the effects of medium pH on the steady-state distribution of chlorine e6 and its derivatives between the main transport proteins of human blood plasma. The decrease in medium pH from weakly alkaline (pH 7.4) to acid (pH 5.0) was followed by an increase in the relative affinity of chlorines for lipoproteins and a reduction in their affinity for serum albumin. pH-dependent changes in the parameters of distribution of the photosensitizers between the plasma and blood cells were also revealed. We discuss the role of the charge and degree of polarity of the photosensitizer molecule in the mechanism of binding to serum albumin. A possible role of changes in hydrogen ion activity in the processes of selective accumulation of chlorines by tumor cells is discussed.
The effect of prior bisphosphonate therapy on the subsequent therapeutic effects of strontium ranelate over 2 years
Many osteoporotic women prescribed strontium ranelate have previously received bisphosphonates. Prior bisphosphonate use blunted the spinal bone mineral density (BMD) response for 6 months. Hip BMD was blunted to a degree for 2 years, although there was an overall increase in hip BMD in contrast to the heel where BMD did not increase. Many osteoporotic women commenced on strontium ranelate have already received treatment with bisphosphonates. This study investigates whether prior bisphosphonate use impairs the subsequent therapeutic response to strontium ranelate. Women were recruited who were either bisphosphonate naïve or currently receiving a bisphosphonate. All women received strontium ranelate and were followed up for 2 years. One hundred and twenty women were recruited. After 2 years, the bisphosphonate-naïve group had significant BMD increases of 8.9%, 6.0% and 6.4% at the spine, hip and heel, respectively. In the prior bisphosphonate group, BMD increased significantly at the spine (4.0%) and hip (2.5%) but not at the heel. At all time points at all sites, the BMD increase was greater in the bisphosphonate-naïve group. BMD at the spine did not increase during the first 6 months in the prior bisphosphonate group but then increased in parallel with the bisphosphonate-naïve group. In contrast, the difference between the two groups in hip BMD continued to increase throughout the 2 years. P1NP was suppressed in the prior bisphosphonate group for the first 6 months. After bisphosphonate exposure, the BMD response to strontium ranelate is blunted for only 6 months at the spine. At the hip, a degree of blunting was observed over 2 years, although there was an overall increase in hip BMD in contrast to the heel where no increase in BMD was observed.
Application of the new GOLD COPD staging system to a US primary care cohort, with comparison to physician and patient impressions of severity
BACKGROUND In 2011, the traditional Global Initiative for Chronic Obstructive Lung Disease (GOLD) COPD spirometry-based severity classification system was revised to also include exacerbation history and COPD Assessment Test (CAT) and modified Medical Research Council Dyspnea Scale (mMRC) scores. This study examined how COPD patients treated in primary care are reclassified by the new GOLD system compared to the traditional system, and each system's level of agreement with patient's or physician's severity assessments. METHODS In this US multicenter cross-sectional study, COPD patients were recruited by 83 primary care practitioners (PCPs) to complete spirometry testing and a survey. Patients were classified by the traditional spirometry-based system (stages 1-4) and under the new system (grades A, B, C, D) using spirometry, exacerbation history, mMRC, and/or CAT results. Concordance between physician and patient-reported severity, spirometry stage, and ABCD grade based on either mMRC or CAT scores was examined. RESULTS Data from 445 patients with spirometry-confirmed COPD were used. As compared to the traditional system, the GOLD mMRC system reclassifies 47% of patients, and GOLD CAT system reclassifies 41%, but the distributions are very different. The GOLD mMRC system resulted in relatively equal distributions by ABCD grade (33%, 22%, 19%, 26%, respectively), but the GOLD CAT system put most into either B or D groups (9%, 45%, 4%, and 42%). The addition of exacerbation history reclassified only 19 additional patients. Agreement between PCPs' severity rating or their patients' self-assessment and the new ABCD grade was very poor (κ=0.17 or less). CONCLUSION As compared to the traditional system, the GOLD 2011 multidimensional system reclassified nearly half of patients, but how they were reclassified varied greatly by whether the mMRC or CAT questionnaire was chosen. Either way, the new system had little correlation with the PCPs or their patients' impressions about the COPD severity.
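For context, a sketch of the GOLD 2011 A-D grading logic as commonly summarized (thresholds follow the widely cited 2011 GOLD criteria, not this study; the function name and argument defaults are illustrative):

```python
def gold_2011_group(fev1_percent, exacerbations_last_year, mmrc=None, cat=None):
    """Classify a COPD patient into GOLD 2011 group A-D.

    Risk is 'high' if airflow limitation is GOLD stage 3-4
    (FEV1 < 50% predicted) or there were >= 2 exacerbations in the past
    year; symptoms are 'high' if mMRC >= 2 or CAT >= 10.
    """
    high_risk = fev1_percent < 50 or exacerbations_last_year >= 2
    if cat is not None:
        high_symptoms = cat >= 10
    elif mmrc is not None:
        high_symptoms = mmrc >= 2
    else:
        raise ValueError("Need an mMRC or CAT score")
    if high_risk:
        return "D" if high_symptoms else "C"
    return "B" if high_symptoms else "A"

print(gold_2011_group(fev1_percent=65, exacerbations_last_year=0, cat=14))  # "B"
```

Because the symptom cut-points of the two questionnaires are not equivalent, the same patient can land in different groups depending on whether mMRC or CAT is used, which is consistent with the reclassification differences reported above.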
Gamification: What It Is and Why It Matters to Digital Health Behavior Change Developers
This editorial provides a behavioral science view on gamification and health behavior change, describes its principles and mechanisms, and reviews some of the evidence for its efficacy. Furthermore, this editorial explores the relation between gamification and behavior change frameworks used in the health sciences and shows how gamification principles are closely related to principles that have been proven to work in health behavior change technology. Finally, this editorial provides criteria that can be used to assess when gamification provides a potentially promising framework for digital health interventions.
Simultaneous Modeling of Multiple Complications for Risk Profiling in Diabetes Care
Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. The proposed method is favorable for healthcare applications because, in addition to improved prediction performance, relationships among the different risks and risk factors are also identified. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights.
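The hierarchical Bayesian model itself is not reproduced here; as a rough stand-in for the idea of sharing a sparse set of risk factors across complication-specific tasks, a joint-shrinkage multi-task lasso on synthetic data might look like this (all names and numbers are illustrative):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_patients, n_factors, n_complications = 200, 50, 4

# Synthetic stand-in for patient risk factors and per-complication outcomes.
X = rng.normal(size=(n_patients, n_factors))
true_coef = np.zeros((n_complications, n_factors))
true_coef[:, :5] = rng.normal(size=(n_complications, 5))  # shared informative factors
Y = X @ true_coef.T + 0.1 * rng.normal(size=(n_patients, n_complications))

# The joint L2,1 penalty selects the same small subset of factors for all tasks.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)
selected = np.nonzero(np.linalg.norm(model.coef_, axis=0))[0]
print("risk factors kept for all complications:", selected)
```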
Establishing reference values for central blood pressure and its amplification in a general healthy population and according to cardiovascular risk factors.
AIMS Estimated central systolic blood pressure (cSBP) and amplification (Brachial SBP-cSBP) are non-invasive measures potentially prognostic of cardiovascular (CV) disease. No worldwide, multiple-device reference values are available. We aimed to establish reference values for a worldwide general population standardizing between the different available methods of measurement. How these values were significantly altered by cardiovascular risk factors (CVRFs) was then investigated. METHODS AND RESULTS Existing data from population surveys and clinical trials were combined, whether published or not. Reference values of cSBP and amplification were calculated as percentiles for 'Normal' (no CVRFs) and 'Reference' (any CVRFs) populations. We included 45,436 subjects out of 82,930 that were gathered from 77 studies of 53 centres. Included subjects were apparently healthy, not treated for hypertension or dyslipidaemia, and free from overt CV disease and diabetes. Values of cSBP and amplification were stratified by brachial blood pressure categories and age decade in turn, both being stratified by sex. Amplification decreased with age and more so in males than in females. Sex was the most powerful factor associated with amplification with 6.6 mmHg (5.8-7.4) higher amplification in males than in females. Amplification was marginally but significantly influenced by CVRFs, with smoking and dyslipidaemia decreasing amplification, but increased with increasing levels of blood glucose. CONCLUSION Typical values of cSBP and amplification in a healthy population and a population free of traditional CVRFs are now available according to age, sex, and brachial BP, providing values included from different devices with a wide geographical representation. Amplification is significantly influenced by CVRFs, but differently in men and women.
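Reference values of this kind are essentially stratified percentiles; a small illustrative sketch with synthetic data (the numbers below are made up and are not the study's reference values) could be:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n),
    "age_decade": rng.choice(["40-49", "50-59", "60-69"], n),
    "cSBP": rng.normal(115, 12, n),          # estimated central systolic BP (mmHg)
    "amplification": rng.normal(12, 5, n),   # brachial SBP minus cSBP (mmHg)
})

# Percentiles of cSBP and amplification stratified by sex and age decade.
reference = (df.groupby(["sex", "age_decade"])[["cSBP", "amplification"]]
               .quantile([0.10, 0.50, 0.90])
               .unstack())
print(reference.round(1))
```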
Psychopathy, Machiavellianism, and Narcissism in the Five-Factor Model and the HEXACO model of personality structure ☆
We investigated the relations of the ‘‘Dark Triad’’ personality traits—Psychopathy, Machiavellianism, and Narcissism—with the variables of the Five-Factor Model and the HEXACO model of personality structure. Results (N = 164) indicated that all three Dark Triad traits were strongly negatively correlated (rs = −0.72, −0.57, and −0.53, respectively) with the HEXACO Honesty–Humility factor. Psychopathy and Machiavellianism showed moderate negative correlations with Big Five Agreeableness (rs = −0.39 and −0.44, respectively), but Narcissism did not (r = 0.04). However, Narcissism correlated positively with Big Five Extraversion (r = 0.46) and HEXACO Extraversion (r = 0.49). Correlations among the Dark Triad variables were explained satisfactorily by the HEXACO variables, but not by the Five-Factor Model variables.
Do personal stories make patient decision aids more effective? A critical review of theory and evidence
BACKGROUND Patient decision aids support people to make informed decisions between healthcare options. Personal stories provide illustrative examples of others' experiences and are seen as a useful way to communicate information about health and illness. Evidence indicates that providing information within personal stories affects the judgments and values people have, and the choices they make, differentially from facts presented in non-narrative prose. It is unclear if including narrative communications within patient decision aids enhances their effectiveness to support people to make informed decisions. METHODS A survey of primary empirical research employing a systematic review method investigated the effect of patient decision aids with or without a personal story on people's healthcare judgements and decisions. Searches were carried out between 2005-2012 of electronic databases (Medline, PsycINFO), and reference lists of identified articles, review articles, and key authors. A narrative analysis described and synthesised findings. RESULTS Of 734 citations identified, 11 were included describing 13 studies. All studies found participants' judgments and/or decisions differed depending on whether or not their decision aid included a patient story. Knowledge was equally facilitated when the decision aids with and without stories had similar information content. Story-enhanced aids may help people recall information over time and/or their motivation to engage with health information. Personal stories affected both "system 1" (e.g., less counterfactual reasoning, more emotional reactions and perceptions) and "system 2" (e.g., more perceived deliberative decision making, more stable evaluations over time) decision-making strategies. Findings exploring associations with narrative communications, decision quality measures, and different levels of literacy and numeracy were mixed. The pattern of findings was similar for both experimental and real-world studies. CONCLUSIONS There is insufficient evidence that adding personal stories to decision aids increases their effectiveness to support people's informed decision making. More rigorous research is required to elicit evidence about the type of personal story that a) encourages people to make more reasoned decisions, b) discourages people from making choices based on another's values, and c) motivates people equally to engage with healthcare resources.
Use of Fourier Series Analysis for Motion Artifact Reduction and Data Compression of Photoplethysmographic Signals
Pulse oximeters require artifact-free clean photoplethysmograph (PPG) signals obtained at red and infrared (IR) wavelengths for the estimation of the level of oxygen saturation ( SpO2) in the arterial blood of a patient. Movement of a patient corrupts a PPG signal with motion artifacts and introduces large errors in the computation of SpO2. A novel method for removing motion artifacts from corrupted PPG signals by applying Fourier series analysis on a cycle-by-cycle basis is presented in this paper. Aside from artifact reduction, the proposed method also provides data compression. Experimental results indicate that the proposed method is insensitive to heart rate variation, introduces negligible error in the processed PPG signals due to the additional processing, preserves all the morphological features of the PPG, provides 35 dB reduction in motion artifacts, and achieves a data compression factor of 12.
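A sketch of the general idea of representing one PPG cycle by a truncated Fourier series, which simultaneously smooths high-frequency artifact and compresses the cycle to a handful of coefficients (this is a generic illustration, not the paper's exact cycle-detection or reconstruction procedure):

```python
import numpy as np

def fourier_cycle_model(cycle, n_harmonics=10):
    """Fit one PPG cycle with a truncated Fourier series and return the
    reconstruction plus the coefficients kept (the compressed form)."""
    n = len(cycle)
    coeffs = np.fft.rfft(cycle)
    kept = coeffs[:n_harmonics + 1]           # DC + first n_harmonics terms
    truncated = np.zeros_like(coeffs)
    truncated[:n_harmonics + 1] = kept
    return np.fft.irfft(truncated, n=n), kept

# Toy cycle: a smooth pulse-like waveform plus high-frequency artifact.
t = np.linspace(0, 1, 250, endpoint=False)
clean = np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)
noisy = clean + 0.2 * np.random.default_rng(2).normal(size=t.size)
smooth, kept = fourier_cycle_model(noisy, n_harmonics=8)
print(f"samples per cycle: {t.size}, coefficients stored: {kept.size}")
```

Keeping only the low-order harmonics per cycle is what yields both the artifact reduction and the data compression the abstract reports.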
Visualizing the non-visual: spatial analysis and interaction with information from text documents
The paper describes an approach to information visualization (IV) that involves spatializing text content for enhanced visual browsing and analysis. The application arena is large text document corpora such as digital libraries, regulations and procedures, archived reports, etc. The basic idea is that text content from these sources may be transformed to a spatial representation that preserves informational characteristics from the documents. The spatial representation may then be visually browsed and analyzed in ways that avoid language processing and that reduce the analyst's mental workload. The result is an interaction with text that more nearly resembles perception and action with the natural world than with the abstractions of written language.
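The paper's own spatialization pipeline is not detailed in the abstract; a generic illustration of mapping text content to a 2D layout for visual browsing (vectorize, then project) might look like this:

```python
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "safety procedures for chemical storage and handling",
    "archived incident reports on chemical spills",
    "library regulations for digital document access",
    "report archiving procedures for the digital library",
]

# Turn text content into vectors, then project to 2D coordinates for browsing.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
coords = PCA(n_components=2).fit_transform(vectors.toarray())
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:40]}")
```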
Doing Feminist Research in Political and Social Science
Introduction to Feminist Research; The Feminist Research Ethic Explained; Feminist Roadmaps: Planning, Doing and Presenting Your Research; Question-Driven Research: Formulating a Good Question; Theory and Conceptualization including the Literature Review; The Personal and the Political: Constraints and Opportunities of Research Design; Designing and Timing a Research Project; Sampling Cases, Operationalizing Concepts and Variables and Selecting Data Requirements; Generating and Collecting Data; Common Techniques for Analysis; Structured Inquiry Research Designs; Methods for Data Management and Field Research; Writing and Publishing; Conclusion: Feminist Research Ethic, Review, and Evaluation
The effect of Gamified mHealth App on Exercise Motivation and Physical Activity
In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on the exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data at three different points in time, consistent with the approach commonly adopted in psychology and physical activity research, so as to reduce common method bias in testing the two hypotheses.
Learning Analytics in Massive Open Online Courses
Educational technology has obtained great importance over the last fifteen years. At present, the umbrella of educational technology incorporates multitudes of engaging online environments and fields. Learning analytics and Massive Open Online Courses (MOOCs) are two of the most relevant emerging topics in this domain. Since they are open to everyone at no cost, MOOCs excel in attracting numerous participants that can reach hundreds and hundreds of thousands. Experts from different disciplines have shown significant interest in MOOCs as the phenomenon has rapidly grown. In fact, MOOCs have been proven to scale education in disparate areas. Their benefits are crystallized in the improvement of educational outcomes, reduction of costs and accessibility expansion. Due to their unusual massiveness, the large datasets of MOOC platforms require advanced tools and methodologies for further examination. The key importance of learning analytics is reflected here. MOOCs offer diverse challenges and practices for learning analytics to tackle. In view of that, this thesis combines both fields in order to investigate further steps in the learning analytics capabilities in MOOCs. The primary research of this dissertation focuses on the integration of learning analytics in MOOCs, and thereafter looks into examining students' behavior on one side and bridging MOOC issues on the other side. The research was done on the Austrian iMooX xMOOC platform. We followed the prototyping and case studies research methodology to carry out the research questions of this dissertation. The main contributions incorporate designing a general learning analytics framework, learning analytics prototype, records of students' behavior in nearly every MOOC's variables (discussion forums, interactions in videos, self-assessment quizzes, login frequency), a cluster of student engagement...
Multi-target Tracking by Lagrangian Relaxation to Min-cost Network Flow
We propose a method for global multi-target tracking that can incorporate higher-order track smoothness constraints such as constant velocity. Our problem formulation readily lends itself to path estimation in a trellis graph, but unlike previous methods, each node in our network represents a candidate pair of matching observations between consecutive frames. Extra constraints on binary flow variables in the graph result in a problem that can no longer be solved by min-cost network flow. We therefore propose an iterative solution method that relaxes these extra constraints using Lagrangian relaxation, resulting in a series of problems that ARE solvable by min-cost flow, and that progressively improve towards a high-quality solution to our original optimization problem. We present experimental results showing that our method outperforms the standard network-flow formulation as well as other recent algorithms that attempt to incorporate higher-order smoothness constraints.
Knowledge, perceptions and ever use of modern contraception among women in the Ga East District, Ghana.
A survey of 332 women, ages 15-49 years, was carried out in the Ga East district of Ghana to identify community knowledge, perceptions, and factors associated with ever using modern family planning (FP). Knowledge of modern FP was almost universal (97 percent) although knowledge of more than three methods was 56 percent. About 60 percent of all and 65 percent of married respondents reported ever use of a modern method. Among ever users, 82 percent thought contraceptives were effective for birth control. However, one-third did not consider modern FP safe. About 20 percent indicated their male partner as a barrier, and 65 percent of users reported at least one side effect. In a multivariate model that controlled for age, education, religion, and occupation, being married remained significantly associated (OR = 2.14; p=0.01) with ever use of a modern contraceptive method. Interventions are needed to address service- and knowledge-related barriers to use.
Stargazer: Automated regression-based GPU design space exploration
Graphics processing units (GPUs) are of increasing interest because they offer massive parallelism for high-throughput computing. While GPUs promise high peak performance, their challenge is a less-familiar programming model with more complex and irregular performance trade-offs than traditional CPUs or CMPs. In particular, modest changes in software or hardware characteristics can lead to large or unpredictable changes in performance. In response to these challenges, our work proposes, evaluates, and offers usage examples of Stargazer1, an automated GPU performance exploration framework based on stepwise regression modeling. Stargazer sparsely and randomly samples parameter values from a full GPU design space and simulates these designs. Then, our automated stepwise algorithm uses these sampled simulations to build a performance estimator that identifies the most significant architectural parameters and their interactions. The result is an application-specific performance model which can accurately predict program runtime for any point in the design space. Because very few initial performance samples are required relative to the extremely large design space, our method can drastically reduce simulation time in GPU studies. For example, we used Stargazer to explore a design space of nearly 1 million possibilities by sampling only 300 designs. For 11 GPU applications, we were able to estimate their runtime with less than 1.1% average error. In addition, we demonstrate several usage scenarios of Stargazer.
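A much-simplified sketch of the sample-then-fit idea: randomly sample a synthetic design space and greedily pick the most predictive parameters with forward stepwise regression (Stargazer additionally models parameter interactions; everything below, including the design space and runtime model, is illustrative):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise(X, y, max_terms=5):
    """Greedy forward selection: at each step add the parameter that most
    improves the fit, a simple stand-in for stepwise regression modeling."""
    remaining, chosen = list(range(X.shape[1])), []
    for _ in range(max_terms):
        scores = []
        for j in remaining:
            cols = chosen + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            scores.append((r2, j))
        best_r2, best_j = max(scores)
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# Sparse random sample (300 points) of a synthetic 8-parameter design space.
rng = np.random.default_rng(3)
designs = rng.uniform(size=(300, 8))
runtime = 5 * designs[:, 0] + 2 * designs[:, 3] * designs[:, 0] + 0.1 * rng.normal(size=300)
print("most significant parameters:", forward_stepwise(designs, runtime, max_terms=3))
```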
Deep Learning for Recommender Systems
Deep Learning is one of the next big things in Recommendation Systems technology. The past few years have seen the tremendous success of deep neural networks in a number of complex machine learning tasks such as computer vision, natural language processing and speech recognition. After its relatively slow uptake by the recommender systems community, deep learning for recommender systems became widely popular in 2016. We believe that a tutorial on the topic of deep learning will do its share to further popularize the topic. Notable recent application areas are music recommendation, news recommendation, and session-based recommendation. The aim of the tutorial is to encourage the application of Deep Learning techniques in Recommender Systems, to further promote research in deep learning methods for Recommender Systems.
MOBILE ROBOT NAVIGATION IN NARROW AISLES WITH ULTRASONIC SENSORS
This paper describes an experimental obstacle avoidance system for mobile robots traveling through the narrow aisles of a warehouse. In our application the aisles are 91 cm (36") wide and the robot has a width of 64 cm (25"), but the method described here is generally applicable to a large class of narrow-aisle navigation applications. Our approach is based on the carefully designed placement of ultrasonic sensors at strategic locations around the robot. Both the sensor location and the associated navigation algorithms are designed in such a way that whenever accurate range data is needed (e.g., for servoing) a sensor is located so that its accurate radial measurements provide the required data.
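The paper's specific sensor placement and servoing algorithms are not given in the abstract; as a generic illustration of servoing on side-facing range readings to stay centered in a narrow aisle (the gains and limits are made-up values):

```python
def steering_command(left_range_cm, right_range_cm, gain=0.02, max_turn=0.3):
    """Proportional steering from two side-facing ultrasonic ranges:
    steer toward the side with more clearance to stay centered in the aisle."""
    error = left_range_cm - right_range_cm       # negative: too close to the left wall
    turn = max(-max_turn, min(max_turn, gain * error))
    return turn  # rad/s, positive = turn left, negative = turn right

# A 64 cm wide robot in a 91 cm aisle has roughly 13.5 cm of clearance per side when centered.
print(steering_command(left_range_cm=10.0, right_range_cm=17.0))  # -0.14: steer right, away from the left wall
```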
Prosthetic gingival reconstruction in fixed partial restorations. Part 3: laboratory procedures and maintenance.
Part 1 of the present series presented a rationale for including prosthetic gingiva in the planning of a fixed restoration to ensure an esthetic result for patients with severe horizontal and vertical ridge deficiencies. The second part focused on the diagnostic and treatment planning aspects of the use of artificial gingiva. This third and final installment in the series focuses on the laboratory and clinical procedures involved in fabricating a prosthesis with artificial gingiva and provides information on proper maintenance of these restorations.
Empowerment, Motivation, and Performance: Examining the Impact of Feedback and Incentives on Nonmanagement Employees
Motivated employees play a key role in organization success, and past research indicates a positive association between perceptions of empowerment and motivation. A prominent model put forth by Spreitzer (1995) suggests that two major components of control systems will positively affect employee feelings of empowerment—performance feedback and performance-based reward systems. This experimental study contributes to the behavioral accounting literature by examining how specific types of performance feedback and performance-based rewards affect three psychological dimensions of empowerment. Also, we use a relatively simple context to investigate whether predictions validated on surveys of managers also hold for lower-level workers. Our results suggest that feedback and rewards affect the dimensions of empowerment differently for lower-level workers than they do for managers. Namely, performance feedback was positively associated with only one dimension and performance-based rewards had negative effects on two out of the three dimensions. In addition, overall motivation was not significantly associated with two of the three empowerment dimensions. Implications of this study are that techniques that work to increase manager perceptions of empowerment may not work at lower organizational levels and, even if successful, the related increase in employee motivation may not be
A Multiband Slot Antenna for GPS/WiMAX/WLAN Systems
The design of a four-band slot antenna for the global positioning system (GPS), worldwide interoperability for microwave access (WiMAX), and wireless local area network (WLAN) is presented. The antenna consists of a rectangular slot with an area of 0.37λg × 0.14λg = 48 × 18 mm² (where λg is the guide wavelength), a T-shaped feed patch, an inverted T-shaped stub, and two E-shaped stubs to generate four frequency bands. The radiating portion and total size of the antenna are less than those of the tri-band antennas studied in the literature. A parametric study on the parameters for setting the four frequency bands is presented, and hence a methodology for applying the design to other frequency bands is proposed. The multiband slot antenna is studied and designed using computer simulation. For verification of the simulation results, the antenna is fabricated and measured. The simulated and measured return losses, radiation patterns, realized peak gains, and efficiencies of the antenna are presented. Measured results show that the antenna can be designed to cover the frequency bands from 1.575 to 1.665 GHz for the GPS system, 2.4-2.545 GHz for the IEEE 802.11b&g WLAN systems, 3.27-3.97 GHz for the WiMAX system, and 5.17-5.93 GHz for the IEEE 802.11a WLAN system. The effects of the feeding cable used in measurement and of the cover are also investigated.
Incremental Solid Modeling from Sparse and Omnidirectional Structure-from-Motion Data
We introduce a new incremental 2-manifold surface reconstruction method. Compared to previous works, its input is a sparse 3D point cloud estimated by a Structure-from-Motion (SfM) algorithm instead of the more common dense input. The main argument against such a method is that the lack of points implies an inaccurate scene surface. However, advantages such as point quality (thanks to the SfM machinery, including bundle adjustment and interest point detection) and a simpler resulting surface make it worth exploring. Our algorithm is incremental since the surface is locally updated for every new camera pose (and its 3D points) estimated by SfM. This is an advantage compared to global methods like [5] or [2] for applications which require a surface while reading the video sequence. Compared to [6], our method avoids prohibitive time complexity in the presence of loops in the camera trajectory. Last but not least, unlike other incremental methods like [3], the output surface is a 2-manifold, i.e. it is a list of triangles in 3D such that the neighborhood of every surface point is topologically a disk. This property is needed to define surface normal and curvature [1] and thus is used by many mesh processing and computational geometry algorithms. Now we introduce notations. Let P be a set of 3D points on the unknown scene surface. The 3D Delaunay triangulation of P is a list T of tetrahedra which partition the convex hull of P. A list O of tetrahedra (O ⊆ T) represents the reconstructed object, whose volume is |O|, the union of the O tetrahedra. The border δO is the list of triangles (tetrahedron faces) which are included in exactly one tetrahedron of O. The union of triangles |δO| is our target surface and should be a 2-manifold. SfM also provides visibility knowledge R_i: every point p_i ∈ P is computed from camera locations c_j where j ∈ R_i. This implies that |δO| should not intersect the rays (line segments) c_j p_i, j ∈ R_i, except at p_i. The tetrahedra intersected by a ray are labeled free-space, the others matter. Let F be the set of free-space tetrahedra. In practice, |δF| is not a 2-manifold. One iteration of our algorithm is shown in Fig. 1. At image (or time) t+1, we have the following input
The History of the Russian Army as Presented by the German Researcher D. Beyrau
The article analyzes the views of D. Beyrau, a German expert on Eastern European history, on the Russian pre-revolutionary army. The significance of the historian's fundamental work lies in presenting an interpretive model of the interaction between the army, the economy and society. The researcher concludes that the Russian army had a dualistic nature, manifested in the coexistence of advanced technologies and constant adaptation to regressive social structures.
SDN-based 5G mobile networks: architecture, functions, procedures and backward compatibility
In this paper, we describe an SDN-based plastic architecture for 5G networks, designed to fulfill the functional and performance requirements of new generation services and devices. The 5G logical architecture is presented in detail, and key procedures for dynamic control plane instantiation, device attachment, and service request and mobility management are specified. A key feature of the proposed architecture is flexibility, needed to efficiently support a heterogeneous set of services, including Machine Type Communication, Vehicle to X and Internet of Things traffic. These applications impose challenging targets in terms of end-to-end latency, dependability, reliability and scalability. Additionally, backward compatibility with legacy systems is guaranteed by the proposed solution, and the Control Plane and Data Plane are fully decoupled. The three levels of unified signaling bring together the Access, Non-Access and Management strata, and a clean-slate forwarding layer, designed according to the software-defined networking principle, replaces tunneling protocols for carrier-grade mobility.
Tagging Ingush - Language Technology For Low-Resource Languages Using Resources From Linguistic Field Work
This paper presents on-going work on creating NLP tools for under-resourced languages from very sparse training data coming from linguistic field work. In this work, we focus on Ingush, a Nakh-Daghestanian language spoken by about 300,000 people in the Russian republics Ingushetia and Chechnya. We present work on morphosyntactic taggers trained on transcribed and linguistically analyzed recordings and dependency parsers using English glosses to project annotation for creating synthetic treebanks. Our preliminary results are promising, supporting the goal of bootstrapping efficient NLP tools with limited or no task-specific annotated data resources available.
Impact of Low-Level-Viremia on HIV-1 Drug-Resistance Evolution among Antiretroviral Treated-Patients
BACKGROUND Drug-resistance mutations (DRAM) are frequently selected in patients with virological failure, defined as a viral load (pVL) above 500 copies/ml (c/mL), but few resistance data are available at low-level viremia (LLV). Our objective was to determine the emergence and evolution of DRAM during LLV in HIV-1-infected patients receiving antiretroviral therapy (ART). METHODS Retrospective analysis of patients presenting an LLV episode, defined as a pVL between 40 and 500 c/mL on at least 3 occasions during a period of 6 months or longer while on the same ART. Resistance genotypic testing was performed at the onset and at the end of the LLV period. An emerging DRAM was defined during LLV if never detected on the baseline genotype or before. RESULTS 48 patients, including 4 naive and 44 pretreated (median 9 years), presented an LLV episode with a median duration of 11 months. Current ART included 2 NRTI (94%), ritonavir-boosted PI (94%), NNRTI (23%), and/or raltegravir (19%). Median pVL during LLV was 134 c/mL. Successful resistance testing at both the onset and the end of the LLV episode was obtained for 37 patients (77%), among whom 11 (30%) acquired at least 1 DRAM during the LLV period: for NRTI in 6, for NNRTI in 1, for PI in 4, and for raltegravir in 2. During the LLV period, the number of drugs with genotypic resistance increased from a median of 4.5 to 6 drugs. Duration and pVL level of the LLV episode, duration of previous ART, current and nadir CD4 count, and number of baseline DRAM and GSS were not identified as predictive factors of resistance acquisition during LLV, probably due to the limited number of patients. CONCLUSION Persistent LLV episodes below 500 c/mL while receiving ART are associated with emerging DRAM for all drug classes and a decrease in further therapeutic options, suggesting that resistance monitoring and ART optimization should be considered earlier in this setting.
Sizing Equations for Electrical Machinery
Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.
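For context, one commonly cited textbook form of the machine output (sizing) equation, relating the rating to the main dimensions and the specific loadings (this is the standard form, not necessarily the exact formulation developed in the paper):

```latex
% Output (sizing) equation for an AC machine:
%   D    : stator bore (inner) diameter [m]      L   : active core length [m]
%   n_s  : synchronous speed [rev/s]             k_w : winding factor
%   B_av : specific magnetic loading [T]         ac  : specific electric loading [A-conductors/m]
\[
  S \;\approx\; 11\, k_w\, B_{av}\, (ac)\; D^{2} L\, n_s \times 10^{-3} \ \text{kVA}
\]
```

The D²L product is what ties the rating to machine volume, which is why the insights listed above center on diameters, loadings, and pole number.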
Test-retest reliability of Brazilian version of Memorial Symptom Assessment Scale for assessing symptoms in cancer patients
Objective To assess the test-retest reliability of the Memorial Symptom Assessment Scale translated and culturally adapted into Brazilian Portuguese. Methods The scale was applied in an interview format for 190 patients with various cancers type hospitalized in clinical and surgical sectors of the Instituto Nacional de Câncer José de Alencar Gomes da Silva and reapplied in 58 patients. Data from the test-retest were double typed into a Microsoft Excel spreadsheet and analyzed by the weighted Kappa. Results The reliability of the scale was satisfactory in test-retest. The weighted Kappa values obtained for each scale item had to be adequate, the largest item was 0.96 and the lowest was 0.69. The Kappa subscale was also evaluated and values were 0.84 for high frequency physic symptoms, 0.81 for low frequency physical symptoms, 0.81 for psychological symptoms, and 0.78 for Global Distress Index. Conclusion High level of reliability estimated suggests that the process of measurement of Memorial Symptom Assessment Scale aspects was adequate. Objetivo Avaliar a confiabilidade teste-reteste da versão traduzida e adaptada culturalmente para o português do Brasil do Memorial Symptom Assessment Scale. Métodos A escala foi aplicada em forma de entrevista em 190 pacientes com diversos tipos de câncer internados nos setores clínicos e cirúrgicos do Instituto Nacional de Câncer José de Alencar Gomes da Silva e reaplicada em 58 pacientes. Os dados dos testes-retestes foram inseridos num banco de dados por dupla digitação independente em Excel e analisados pelo Kappa ponderado. Resultados A confiabilidade da escala mostrou-se satisfatória nos testes-retestes. Os valores do Kappa ponderado obtidos para cada item da escala apresentaram-se adequados, sendo o maior item de 0,96 e o menor de 0,69. Também se avaliou o Kappa das subescalas, sendo de 0,84 para sintomas físicos de alta frequência, de 0,81 para sintomas físicos de baixa frequência, de 0,81 também para sintomas psicológicos, e de 0,78 para Índice Geral de Sofrimento. Conclusão Altos níveis de confiabilidade estimados permitem concluir que o processo de aferição dos itens do Memorial Symptom Assessment Scale foi adequado.
Indifferentiability of Single-Block-Length and Rate-1 Compression Functions
The security notion of indifferentiability was proposed by Maurer, Renner, and Holenstein in 2004. In 2005, Coron, Dodis, Malinaud, and Puniya discussed the indifferentiability of hash functions. They showed that the Merkle-Damgård construction is not secure in the sense of indifferentiability. In this paper, we analyze the security of single-block-length and rate-1 compression functions in the sense of indifferentiability. We formally show that all single-block-length and rate-1 compression functions, which include the Davies-Meyer compression function, are insecure. Furthermore, we show how to construct a secure single-block-length and rate-1 compression function in the sense of indifferentiability. This does not contradict our result above.
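For reference, the Davies-Meyer construction mentioned above compresses a chaining value h and a message block m as h' = E_m(h) XOR h; a toy sketch follows (the block cipher below is a placeholder purely to make the snippet runnable, not a secure primitive):

```python
def davies_meyer(block_cipher, h, m):
    """Single-block-length, rate-1 compression function in Davies-Meyer form:
    the message block keys the cipher and the chaining value is fed forward:
        h' = E_m(h) XOR h
    """
    return block_cipher(key=m, block=h) ^ h

# Toy 64-bit "cipher" used only to make the sketch runnable; a real
# construction would use an actual block cipher such as AES.
def toy_cipher(key, block, mask=(1 << 64) - 1):
    x = block
    for r in range(4):
        x = ((x * 6364136223846793005 + key + r) ^ (x >> 29)) & mask
    return x

print(hex(davies_meyer(toy_cipher, h=0x0123456789ABCDEF, m=0xFEDCBA9876543210)))
```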
Learning Class-specific Word Representations for Early Detection of Hoaxes in Social Media
As people increasingly use social media as a source for news consumption, its unmoderated nature enables the diffusion of hoaxes, which in turn jeopardises the credibility of information gathered from social media platforms. To mitigate this problem, we study the development of a hoax detection system that can distinguish true and false reports early on. We introduce a semi-automated approach that leverages the Wikidata knowledge base to build large-scale datasets for veracity classification, which enables us to create a dataset with 4,007 reports including over 13 million tweets, 15% of which are fake. We describe a method for learning class-specific word representations using word embeddings, which we call multiw2v. Our approach achieves competitive results with F1 scores over 72% within 10 minutes of the first tweet being posted, outperforming other baselines. Our dataset represents a realistic scenario with a real distribution of true and false stories, which we release for further use as a benchmark in future research.
Missing value imputation for gene expression data: computational techniques to recover missing data from available information
Microarray gene expression data generally suffers from missing value problem due to a variety of experimental reasons. Since the missing data points can adversely affect downstream analysis, many algorithms have been proposed to impute missing values. In this survey, we provide a comprehensive review of existing missing value imputation algorithms, focusing on their underlying algorithmic techniques and how they utilize local or global information from within the data, or their use of domain knowledge during imputation. In addition, we describe how the imputation results can be validated and the different ways to assess the performance of different imputation algorithms, as well as a discussion on some possible future research directions. It is hoped that this review will give the readers a good understanding of the current development in this field and inspire them to come up with the next generation of imputation algorithms.
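As one concrete example of the "local information" family of methods such a survey covers, a bare-bones k-nearest-neighbour imputation over a genes-by-samples matrix might look like this (illustrative only, not any specific published algorithm):

```python
import numpy as np

def knn_impute(expr, k=5):
    """Fill missing entries (NaN) in a genes x samples matrix using the
    average of the k most similar genes, a classic local imputation approach."""
    filled = expr.copy()
    col_means = np.nanmean(expr, axis=0)
    for i, row in enumerate(expr):
        missing = np.isnan(row)
        if not missing.any():
            continue
        observed = ~missing
        # Distance to other genes over the columns observed in this gene.
        diffs = expr[:, observed] - row[observed]
        dists = np.sqrt(np.nansum(diffs ** 2, axis=1))
        dists[i] = np.inf
        neighbours = np.argsort(dists)[:k]
        estimate = np.nanmean(expr[neighbours][:, missing], axis=0)
        # Fall back to column means where the neighbours are also missing.
        filled[i, missing] = np.where(np.isnan(estimate), col_means[missing], estimate)
    return filled

expr = np.array([[1.0, 2.0, np.nan], [1.1, 2.1, 3.1], [0.9, 1.9, 2.9], [5.0, 1.0, 0.5]])
print(knn_impute(expr, k=2))
```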
Toward a model of domain-specific search
We examine what makes a search system domain-specific and find that previous definitions are incomplete. We propose a new definition of domain specific search, together with a corresponding model, to assist researchers, systems designers and system beneficiaries in their analysis of their own domain. This model is then instantiated for two domains: intellectual property search (i.e. patent search) and medical or healthcare search. For each of the two we follow the theoretical model and identify outstanding issues. We find that the choice of dimensions is still an open issue, as linear independence is often absent and specific use-cases, particularly those related to interactive IR, still cannot be covered by the proposed model.
90% High Efficiency and 100-W/cm³ High Power Density Integrated DC-DC Converter for Cellular Phones
This paper describes a small-size buck-type dc–dc converter for cellular phones. Output power MOSFETs and control circuitry are monolithically integrated. The newly developed pulse frequency modulation control integrated circuit, mounted on a planar inductor within the converter package, has a low quiescent current below 10 μA and a small chip size of 1.4 mm × 1.1 mm in a 0.35-μm CMOS process. The converter achieves a maximum efficiency of 90% and a power density above 100 W/cm³.
HR practices and turnover intention: the mediating roles of organizational commitment and organizational engagement in a selected region in Malaysia
Nurita Juhdi, Fatimah Pa'wan and Ram Milah Kaur Hansaram. Department of Business Administration, Kulliyyah of Economics and Management Sciences, International Islamic University Malaysia, Kuala Lumpur, Malaysia; Faculty of Business Administration, UNITAR International University, Petaling Jaya, Selangor, Malaysia. Published online: 07 Feb 2013.
Gradient Descent Quantizes ReLU Network Features
Deep neural networks are often trained in the over-parametrized regime (i.e. with far more parameters than training examples), and understanding why the training converges to solutions that generalize remains an open problem [Zhang et al., 2017]. Several studies have highlighted the fact that the training procedure, i.e. mini-batch Stochastic Gradient Descent (SGD), leads to solutions that have specific properties in the loss landscape. However, even with plain Gradient Descent (GD) the solutions found in the over-parametrized regime are pretty good, and this phenomenon is poorly understood. We propose an analysis of this behavior for feedforward networks with a ReLU activation function under the assumption of small initialization and learning rate, and uncover a quantization effect: the weight vectors tend to concentrate at a small number of directions determined by the input data. As a consequence, we show that for given input data there are only finitely many, “simple” functions that can be obtained, independent of the network size. This puts these functions in analogy to linear interpolations (for given input data there are finitely many triangulations, each of which determines a function by linear interpolation). We ask whether this analogy extends to the generalization properties: while the usual distribution-independent generalization property does not hold, it could be that, e.g., for smooth functions with bounded second derivative, an approximation property holds which could “explain” generalization of networks (of unbounded size) to unseen inputs.
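A small numerical experiment in the spirit of the claim above: train a one-hidden-layer ReLU network with tiny initialization by plain gradient descent and count how many distinct hidden-weight directions remain (the data, sizes, learning rate, and thresholds are arbitrary choices; how clearly the concentration shows up depends on these settings):

```python
import numpy as np

# One hidden layer of ReLU units, squared loss, plain full-batch gradient descent.
rng = np.random.default_rng(5)
X = rng.normal(size=(64, 2)); y = np.sign(X[:, 0] + X[:, 1])
W = 1e-3 * rng.normal(size=(50, 2)); a = 1e-3 * rng.normal(size=50)
lr = 0.05
for _ in range(3000):
    H = np.maximum(X @ W.T, 0.0)                        # hidden activations
    err = H @ a - y                                     # residuals of f(x) = a . ReLU(Wx)
    grad_a = H.T @ err / len(X)
    grad_W = ((err[:, None] * (X @ W.T > 0)) * a).T @ X / len(X)
    a -= lr * grad_a; W -= lr * grad_W

# With small initialization the hidden weight vectors tend to concentrate on a
# few directions determined by the data (the quantization effect described above).
dirs = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)
unique = []
for d in dirs:
    if all(abs(d @ u) < 0.99 for u in unique):
        unique.append(d)
print("hidden units:", len(W), " distinct directions (approx.):", len(unique))
```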
Integration of fMRI and simultaneous EEG: towards a comprehensive understanding of localization and time-course of brain activity in target detection
fMRI and EEG are complementary methods for the analysis of brain activity, since each method has its strength where the other one has limits: the spatial resolution is in the range of millimeters with fMRI, and the time resolution is in the range of milliseconds with EEG. For a comprehensive understanding of brain activity in target detection, nine healthy subjects (age 24.2 +/- 2.9) were investigated with simultaneous EEG (27 electrodes) and fMRI using an auditory oddball paradigm. As a first step, event-related potentials measured inside the scanner were compared with the potentials recorded in a directly preceding session in front of the scanner. Attenuated amplitudes were found inside the scanner for the earlier N1/P2 component but not for the late P300 component. Second, an independent analysis of the localizations of the fMRI activations and the current source density as revealed by low resolution electromagnetic tomography (LORETA) was done. Concordant activations were found in most regions, including the temporoparietal junction (TPJ), the supplementary motor area (SMA)/anterior cingulate cortex (ACC), the insula, and the middle frontal gyrus, with a mean Euclidean distance of 16.0 +/- 6.6 mm between the BOLD centers of gravity and the LORETA maxima. Finally, a time-course analysis based on the current source density maxima was done. It revealed different time-course patterns in the left and right hemisphere, with earlier activations in frontal and parietal regions in the right hemisphere. The results suggest that the combination of EEG and fMRI permits an improved understanding of the spatiotemporal dynamics of brain activity.
Addressing supply chain risks of microelectronic devices through computer vision
Microelectronics are at the heart of nearly all modern devices, ranging from small embedded integrated circuits (ICs) inside household products to complex microprocessors that power critical infrastructure systems. Devices often consist of numerous ICs from a variety of different manufacturers and procured through different vendors, all of whom may be trusted to varying degrees. Ensuring the quality, safety, and security of these components is a critical challenge. One possible solution is to use automated imaging techniques to check devices' physical appearance against known reference models in order to detect counterfeit or malicious components. This analysis can be performed at both a macro level (i.e., ensuring that the packaging of the IC appears legitimate and undamaged) and a micro level (i.e., comparing microscopic, transistor-level imagery of the circuit itself to detect suspicious deviations from a reference model). This latter analysis in particular is very challenging, considering that modern devices can contain billions of transistors. In this paper, we review the problem of microelectronics counterfeiting, discuss the potential application of computer vision to microelectronics inspection, present initial results, and recommend directions for future work.
Positive Healthy Organizations: Promoting Well-Being, Meaningfulness, and Sustainability in Organizations
This contribution deals with the concept of healthy organizations and starts with a definition of healthy organizations and healthy business. In healthy organizations, culture, climate, and practices create an environment conducive to employee health and safety as well as organizational effectiveness (Lowe, 2010). A healthy organization thus leads to a healthy and successful business (De Smet et al., 2007; Grawitch and Ballard, 2016), underlining the strong link between organizational profitability and workers' well-being. Starting from a positive perspective focused on success and excellence, the contribution describes how positive organizational health psychology evolved from occupational health psychology to positive occupational health psychology stressing the importance of a primary preventive approach. The focus is not on deficiency and failure but on a positive organizational attitude that proposes interventions at different levels: individual, group, organization, and inter-organization. Healthy organizations need to find the right balance between their particular situation, sector, and culture, highlighting the importance of well-being and sustainability. This contribution discusses also the sustainability of work-life projects and the meaning of work in healthy organizations, stressing the importance of recognizing, respecting, and using the meaning of work as a key for growth and success. Finally, the contribution discusses new research and intervention opportunities for healthy organizations.
Hierarchical Relational Networks for Group Activity Recognition and Retrieval
Modeling structured relationships between people in a scene is an important step toward visual understanding. We present a Hierarchical Relational Network that computes relational representations of people, given graph structures describing potential interactions. Each relational layer is fed individual person representations and a potential relationship graph. Relational representations of each person are created based on their connections in this particular graph. We demonstrate the efficacy of this model by applying it in both supervised and unsupervised learning paradigms. First, given a video sequence of people doing a collective activity, the relational scene representation is utilized for multi-person activity recognition. Second, we propose a Relational Autoencoder model for unsupervised learning of features for action and scene retrieval. Finally, a Denoising Autoencoder variant is presented to infer missing people in the scene from their context. Empirical results demonstrate that this approach learns relational feature representations that can effectively discriminate person and group activity classes.
Combination therapy with nifedipine GITS 60 mg: subanalysis of a prospective, 12-week observational study (AdADOSE)
BACKGROUND AdADOSE was a 12-week, international, observational study conducted in the Middle East and Russia where patients received nifedipine gastrointestinal therapeutic system (GITS) at a daily dose of 30, 60, or 90 mg as part of an antihypertensive combination therapy. This subgroup analysis of the AdADOSE study assesses the efficacy and tolerability of nifedipine GITS combination therapy when used specifically at the 60-mg strength. METHODS Patients with hypertension who received a daily nifedipine GITS dose of 60 mg, either at constant dose (n = 686) or up-titrated from 30 mg (n = 392), were analyzed. Target blood pressure (BP) was <140/90 mmHg (or <130/80 mmHg for those at high/very high cardiovascular risk). RESULTS Following nifedipine GITS combination therapy, target BP was achieved by 33.7% patients in the 60 mg group (previously untreated, 42.5%; previously treated, 32.0%) and 32.4% patients in the 30-60 mg group (previously untreated, 45.2%; previously treated, 30.7%). Mean systolic BP/diastolic BP changes were -40.3/-20.7 mmHg and -35.6/-18.5 mmHg, respectively, and were similar regardless of previous antihypertensive treatment or the number of concomitant diseases. Incidences of drug-related adverse events (AEs) were low (3.2%, 60 mg; 2.0%, 30-60 mg group), few patients discontinued because of AEs (0.6% and 1.0%, respectively), and there were no serious AEs. CONCLUSION Combination therapy with nifedipine GITS 60 mg in a real-life observational setting was effective and well tolerated in hypertensive patients, with low rates of treatment-related AEs.
Adversarial Stain Transfer for Histopathology Image Analysis
It is generally recognized that color information is central to the automatic and visual analysis of histopathology tissue slides. In practice, pathologists rely on color, which reflects the presence of specific tissue components, to establish a diagnosis. Similarly, automatic histopathology image analysis algorithms rely on color or intensity measures to extract tissue features. With the increasing access to digitized histopathology images, color variation and its implications have become a critical issue. These variations are the result not only of a variety of factors involved in the preparation of tissue slides but also of the digitization process itself. Consequently, different strategies have been proposed to alleviate stain-related tissue inconsistencies in automatic image analysis systems. Such techniques generally rely on collecting color statistics to perform color matching across images. In this work, we propose a different approach for stain normalization that we refer to as stain transfer. We design a discriminative image analysis model equipped with a stain normalization component that transfers stains across datasets. Our model comprises a generative network that learns dataset-specific staining properties and image-specific color transformations as well as a task-specific network (e.g., classifier or segmentation network). The model is trained end-to-end using a multi-objective cost function. We evaluate the proposed approach in the context of automatic histopathology image analysis on three datasets and two different analysis tasks: tissue segmentation and classification. The proposed method achieves superior results in terms of accuracy and quality of normalized images compared to various baselines.
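The following is a hedged sketch of what such a multi-objective cost could look like: a task loss on the stain-normalized images plus an adversarial term that pushes them toward the target domain's staining statistics. The non-saturating GAN formulation, the cross-entropy task loss, and the weight lambda_adv are assumptions for illustration, not the paper's exact objective.

```python
# Sketch of a combined task + adversarial objective for stain transfer.
# Loss forms and lambda_adv are illustrative assumptions.
import torch
import torch.nn.functional as F

def stain_transfer_losses(task_logits, task_labels,
                          disc_real_logits, disc_fake_logits, lambda_adv=0.1):
    # Task objective (e.g. tissue classification) computed on normalized images.
    task_loss = F.cross_entropy(task_logits, task_labels)
    # Generator term: make normalized images look like the target stain domain.
    gen_adv_loss = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Discriminator term: separate genuine target-domain images from normalized ones.
    disc_loss = (
        F.binary_cross_entropy_with_logits(
            disc_real_logits, torch.ones_like(disc_real_logits)) +
        F.binary_cross_entropy_with_logits(
            disc_fake_logits, torch.zeros_like(disc_fake_logits)))
    generator_loss = task_loss + lambda_adv * gen_adv_loss
    return generator_loss, disc_loss

# Example with dummy logits for a batch of 4 patches and 3 tissue classes.
logits, labels = torch.randn(4, 3), torch.randint(0, 3, (4,))
d_real, d_fake = torch.randn(4, 1), torch.randn(4, 1)
g_loss, d_loss = stain_transfer_losses(logits, labels, d_real, d_fake)
print(g_loss.item(), d_loss.item())
```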
Efficient and Low Latency Detection of Intruders in Mobile Active Authentication
Active authentication (AA) refers to the problem of continuously verifying the identity of a mobile device user for the purpose of securing the device. We address the problem of quickly detecting intrusions with lower false detection rates in mobile AA systems with higher resource efficiency. Bayesian and MiniMax versions of the quickest change detection (QCD) algorithms are introduced to quickly detect intrusions in mobile AA systems. These algorithms are extended with an update rule to facilitate low-frequency sensing which leads to low utilization of resources. Effectiveness of the proposed framework is demonstrated using three publicly available unconstrained face and touch gesture-based AA datasets. It is shown that the proposed QCD-based intrusion detection methods can perform better than many state-of-the-art AA methods in terms of latency and low false detection rates. Furthermore, it is shown that employing the proposed resource-efficient extension further improves the performance of the QCD-based setup.
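As a minimal sketch of the minimax flavour of quickest change detection, the classic CUSUM recursion on per-observation log-likelihood ratios is shown below. The Gaussian score model, the mean shift, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal CUSUM (minimax quickest change detection) sketch on a stream of
# per-observation log-likelihood ratios derived from AA match scores.
import numpy as np

def cusum_intrusion_detector(llrs, threshold):
    """Return the first index at which the CUSUM statistic crosses `threshold`,
    or None if no change (intrusion) is declared."""
    stat = 0.0
    for t, llr in enumerate(llrs):
        stat = max(0.0, stat + llr)   # W_t = max(0, W_{t-1} + LLR_t)
        if stat > threshold:
            return t
    return None

# Example: genuine-user scores follow N(0.5, 1); an intruder shifts the mean to -0.5
# at t = 200. Under unit variance, the log-likelihood ratio reduces to -x.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.5, 1.0, 200), rng.normal(-0.5, 1.0, 200)])
llrs = (scores - 0.5) ** 2 / 2 - (scores + 0.5) ** 2 / 2
print(cusum_intrusion_detector(llrs, threshold=8.0))  # detects shortly after t = 200
```

The low-frequency-sensing extension described in the abstract would, roughly speaking, update this statistic only at sparse sensing instants rather than at every observation.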
Deep Learning
Every year, researchers publish new results and record figures for their learning methods in a wide range of fields, with which they approach or even surpass human capabilities. Deep learning features prominently as a buzzword in this context. The best-known example is certainly the AlphaGo computer developed by Google's DeepMind group, which for the first time defeated professional human players at Go, a game far more complex than chess (cf. [8]). The mainstream media in particular picked up the topic and proclaimed a new era of artificial intelligence. Despite the growing popularity of deep learning, the approach does not solve every unsolved or only unsatisfactorily solved problem of machine learning across the board. Rather, it should be understood as one of many tools that can be used for supervised or unsupervised learning, particularly on very large datasets. Deep learning is especially good at automatically capturing the layered construction of hierarchical features, which is why it plays an important role above all in image and speech processing. Other areas of machine learning in which deep learning is applied, such as iML [5] and OCR [9], have already been featured in earlier articles of this "current buzzword" column.
Learning to Collaborate for Question Answering and Asking
Question answering (QA) and question generation (QG) are closely related tasks that could improve each other; however, the connection between these two tasks is not well explored in the literature. In this paper, we give a systematic study that seeks to leverage the connection to improve both QA and QG. We present a training algorithm that generalizes both Generative Adversarial Networks (GAN) and Generative Domain-Adaptive Nets (GDAN) under the question answering scenario. The two key ideas are improving the QG model with QA by incorporating an additional QA-specific signal into the loss function, and improving the QA model with QG by adding artificially generated training instances. We conduct experiments on both document-based and knowledge-based question answering tasks. We have two main findings. First, the performance of a QG model (e.g., in terms of BLEU score) can easily be improved by a QA model via policy gradient. Second, directly applying a GAN that regards all generated questions as negative instances does not improve the accuracy of the QA model. Learning when to regard generated questions as positive instances can bring a performance boost.
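A minimal sketch of the policy-gradient signal mentioned above is given here: the QA model's confidence in the gold answer for a sampled question acts as the reward in a REINFORCE-style update of the generator. The reward definition, the baseline, and the function name are assumptions for illustration only.

```python
# Sketch of REINFORCE-style feedback from a QA model to a question generator.
# In practice the log-probabilities come from the generator, so gradients flow
# back into its parameters; the reward and baseline here are illustrative.
import torch

def qg_policy_gradient_loss(token_log_probs, qa_reward, baseline=0.0):
    """token_log_probs: (T,) log-probabilities of the sampled question tokens.
    qa_reward: scalar, e.g. the QA model's probability of the gold answer given
    the generated question. Returns a loss to minimize."""
    advantage = qa_reward - baseline
    return -(advantage * token_log_probs.sum())

# Example: a 5-token sampled question, QA reward 0.8, running baseline 0.5.
log_probs = torch.log(torch.tensor([0.4, 0.3, 0.5, 0.2, 0.6]))
loss = qg_policy_gradient_loss(log_probs, qa_reward=0.8, baseline=0.5)
print(loss.item())
```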
Is the Canadian childhood obesity epidemic related to physical inactivity?
OBJECTIVE: This study examined the relation among children's physical activity, sedentary behaviours, and body mass index (BMI), while controlling for sex, family structure, and socioeconomic status. DESIGN: Epidemiological study examining the relations among physical activity participation, sedentary behaviour (video game use and television (TV)/video watching), and BMI on a nationally representative sample of Canadian children. SUBJECTS: A representative sample of Canadian children aged 7–11 (N=7216) from the 1994 National Longitudinal Survey of Children and Youth was used in the analysis. MEASUREMENTS: Physical activity and sport participation, sedentary behaviour (video game use and TV/video watching), and BMI measured by parental report. RESULTS: Both organized and unorganized sport and physical activity are negatively associated with being overweight (10–24% reduced risk) or obese (23–43% reduced risk), while TV watching and video game use are risk factors for being overweight (17–44% increased risk) or obese (10–61% increased risk). Physical activity and sedentary behaviour partially account for the association of high socioeconomic status and two-parent family structure with the likelihood of being overweight or obese. CONCLUSION: This study provides evidence supporting the link between physical inactivity and obesity of Canadian children.
Complementary Phase Power Divider Feed for Dipole Antenna Specific to GSM 900 Base Station Applications
A power divider (PD) design intended for feeding a dipole antenna for Global System for Mobile Communications (GSM) 900 applications, with an antenna height of 22 mm and operating in the 880-960 MHz frequency range, is presented herein. The PD provides 3 dB power division with complementary phase at its outputs. The out-of-phase power division is obtained by utilizing the concept of defected ground structures. A slot line accompanied by a T-junction makes up the defected ground region, while coupled microstrip lines form the feed positions. The simulated and measured results are in good coherence. The PD shows good return loss and low insertion loss. It is observed that the dual-polarized base station antenna has a VSWR below 2 in the required frequency range, 10 dB gain, and port-to-port isolation of less than 22 dB. The antenna is dual polarized and maintains ±45° polarization.
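For context on the reported "VSWR below 2" specification, the standard conversion between return loss and VSWR is sketched below; the example return-loss values are illustrative, not measurements from this design.

```python
# Quick sanity-check sketch relating return loss (in dB) to VSWR.
def vswr_from_return_loss(rl_db):
    gamma = 10 ** (-rl_db / 20)          # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

for rl in (10, 15, 20):
    print(f"return loss {rl} dB -> VSWR {vswr_from_return_loss(rl):.2f}")
# 10 dB -> 1.92, 15 dB -> 1.43, 20 dB -> 1.22 (all within the VSWR < 2 target)
```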
Essential Oils as Feed Additives—Future Perspectives
The inconsistency of phytogenic feed additive (PFA) effects on the livestock industry poses a risk for their use as a replacement for antibiotic growth promoters. The livestock market is being encouraged to use natural growth promoters, but information is limited about the PFA mode of action. The aim of this paper is to present the complexity of compounds present in essential oils (EOs) and the factors that influence the biological effects of PFA. In this paper, we highlight various controls and optimization parameters that influence the processes for the standardization of these products. The chemical composition of EOs depends on plant genetics, growth conditions, development stage at harvest, and the processes of extracting active compounds. Their biological effects are further influenced by the interaction of phytochemicals and their bioavailability in the gastrointestinal tract of animals. PFA effects on animal health and production are also complex due to various EO antibiotic, antioxidant, anti-quorum sensing, anti-inflammatory, and digestive fluid-stimulating activities. Research must focus on reliable methods to identify and control the quality and effects of EOs. In this study, we focused on available microencapsulation techniques for EOs to increase the bioavailability of active compounds, as well as their application in the animal feed additive industry.
DESIGN OF MICROSTRIP ANTENNA FOR WIRELESS COMMUNICATION AT 2.4 GHz
The paper presents a broadband microstrip patch antenna for wireless communication. In its most basic form, a microstrip patch antenna consists of a radiating patch on one side of a dielectric substrate which has a ground plane on the other side. The patch is generally made of a conducting material such as copper or gold and can take any possible shape. A rectangular patch is used as the main radiator. There are several advantages to this type of broadband antenna: it is planar, small in size, simple in structure, low in cost, and easy to fabricate, which makes it attractive for practical applications. This rectangular microstrip patch antenna is designed for wireless communication applications operating at 2.4 GHz with a gain of 11 dB for outdoor use. It also has a wide beam angle in its radiation pattern. The results show that the microstrip patch antenna can be used as a client antenna in computers and as a workable antenna for wireless fidelity (Wi-Fi) applications.
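As a back-of-the-envelope illustration of how such a rectangular patch is dimensioned, the standard transmission-line design equations at 2.4 GHz are sketched below. The substrate (FR-4, eps_r = 4.4, h = 1.6 mm) is an assumption for the example; the paper's actual substrate may differ.

```python
# Standard transmission-line design equations for a rectangular microstrip patch.
# Substrate parameters are illustrative assumptions (FR-4, 1.6 mm).
import math

c = 3e8           # speed of light, m/s
f0 = 2.4e9        # design frequency, Hz
eps_r = 4.4       # assumed relative permittivity (FR-4)
h = 1.6e-3        # assumed substrate height, m

W = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))                        # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))                             # fringing extension
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL                       # patch length

print(f"W = {W*1000:.1f} mm, L = {L*1000:.1f} mm, eps_eff = {eps_eff:.2f}")
# Roughly W ≈ 38 mm and L ≈ 29 mm for this assumed substrate.
```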
Course Assessment Plan: A Tool For Integrated Curriculum Management
As we enter the 21st century in engineering education, a common desire exists to improve curriculum structure, integration, and assessment. Much has been written and discussed in workshops and professional journals concerning the top-down process for assessing and/or revising a program curriculum. Institutions are finally realizing they cannot afford to rely solely upon the senior capstone design experience to be the integrator of all previous engineering experiences. Studies are beginning to show the positive effects of well-integrated curricula where assessment methods are applied consistently. What is missing in many instances is a credible link between top-down curriculum management and bottom-up course assessment. At the United States Military Academy at West Point, a widely accepted assessment model provides the framework for program management. The Department of Civil and Mechanical Engineering at West Point has long prided itself on working hard to provide a rigorous and well-integrated undergraduate engineering program of study. Over the last five years we have developed and refined an integrating tool within the academy's assessment model called a course assessment plan. The course assessment plan provides that crucial link between the program curriculum and the individual courses. The plan process and content will be the major focus of this paper. To illustrate the impact of the course assessment plan in closing the assessment loop, we will discuss an example of a course change with implications at the program level that was initiated and completed through use of the plan.
The evolution of mating systems in insects and arachnids
Introduction 1. Evolutionary perspectives on insect mating Richard D. Alexander, David Marshall, and John Cooley 2. Sexual selection by cryptic female choice in insects and arachnids William G. Eberhard 3. Natural and sexual selection components of odonate mating patterns Ola M. Finke, Jonathan Waage, and Walter D. Koenig 4. Sexual selection in resource defense polygyny: lessons from territorial grasshoppers Michael D. Greenfield 5. Reproductive strategies of the crickets (Orthoptera: Gryllidae) Marlene Zuk and Leigh W. Simmons 6. The evolution of edible 'sperm sacs' and other forms of courtship feeding in crickets, katydids and their kin (Orthoptera: Ensifera) Darryl T. Gwynne 7. Sexual conflicts and the evolution of mating patterns in the Zoraptera Jae C. Choe 8. The evolution of water strider mating systems: causes and consequences of sexual conflicts Goran Arnqvist 9. Multiple mating, sperm competition, and cryptic female choice in the leaf beetles (Coleoptera: Chrysomelidae) Janis L. Dickinson 10. Firefly mating ecology, selection and evolution James E. Lloyd 11. Modern mating systems in archaic Holometabola: sexuality in neuropteroid insects Charles S. Henry 12. Mating systems of parasitoid wasps H. C. J. Godfray and J. M. Cook 13. Fig wasp mating systems: pollinators and parasites, sex ratio adjustment and male polymorphism, population structure and its consequences E. A. Herre, S. A. West, J. M. Cook, S. G. Compton and F. Kjellberg 14. Predictions from sexual selection on the evolution of mating systems in moths P. Larry Phelan 15. Sexual dimorphism, mating systems and ecology in butterflies Ronald L. Rutowski 16. Lek behaviour of insects Todd Shelly and Timothy S. Whittier 17. Mate choice and species isolation in swarming insects John Sivinski and Erik Petersson 18. Function and evolution of antlers and eye stalks in flies Gerald S. Wilkinson and Gary N. Dodson 19. Sex via the substrate: mating systems and sexual selection in pseudoscorpions David W. Zeh and Jeanne A. Zeh 20. Jumping spider mating strategies: sex with cannibals in and out of webs Robert R. Jackson and S. D. Pollard 21. Sexual conflict and the evolution of mating systems William D. Brown, Bernard J. Crespi and Jae C. Choe.
Exploring Semantic Representation in Brain Activity Using Word Embeddings
In this paper, we utilize distributed word representations (i.e., word embeddings) to analyse the representation of semantics in brain activity. The brain activity data were recorded using functional magnetic resonance imaging (fMRI) when subjects were viewing words. First, we analysed the functional selectivity of different cortex areas by calculating the correlations between neural responses and several types of word representations, including skipgram word embeddings, visual semantic vectors, and primary visual features. The results demonstrated consistency with existing neuroscientific knowledge. Second, we utilized behavioural data as the semantic ground truth to measure their relevance with brain activity. A method to estimate word embeddings under the constraints of brain activity similarities is further proposed based on the semantic word embedding (SWE) model. The experimental results show that the brain activity data are significantly correlated with the behavioural data of human judgements on semantic similarity. The correlations between the estimated word embeddings and the semantic ground truth can be effectively improved after integrating the brain activity data for learning, which implies that semantic patterns in neural representations may exist that have not been fully captured by state-of-the-art word embeddings derived from text corpora.
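One common way to relate embedding spaces to fMRI responses, in the spirit of the correlation analyses described above, is to compare their pairwise similarity structures (a representational-similarity-style analysis). The sketch below uses random data as stand-ins for real embeddings and voxel patterns and is illustrative only, not the authors' exact procedure.

```python
# Sketch: correlate the pairwise dissimilarity structure of word embeddings with
# that of fMRI voxel patterns for the same words. Random data are placeholders.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_words, emb_dim, n_voxels = 60, 300, 500
embeddings = rng.normal(size=(n_words, emb_dim))   # e.g. skip-gram vectors
brain = rng.normal(size=(n_words, n_voxels))       # fMRI pattern per word

emb_dissim = pdist(embeddings, metric="cosine")      # pairwise dissimilarities
brain_dissim = pdist(brain, metric="correlation")

rho, p = spearmanr(emb_dissim, brain_dissim)
print(f"embedding-brain similarity correlation: rho={rho:.3f}, p={p:.3f}")
```

With real data, a reliably positive rank correlation would indicate that the embedding space captures part of the similarity structure present in the neural responses.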
Randomized, double-blind, placebo-controlled study of carvedilol on the prevention of nitrate tolerance in patients with chronic heart failure.
OBJECTIVES This study was designed to evaluate the effect of carvedilol on nitrate tolerance in patients with chronic heart failure. BACKGROUND The attenuation of cyclic guanosine 5'-monophosphate (cGMP) production due to inactivation of guanylate cyclase by increased superoxide has been reported as a mechanism of nitrate tolerance. Carvedilol has been known to combine alpha/beta-blockade with antioxidant properties. METHODS To evaluate the effect of carvedilol on nitrate tolerance, 40 patients with chronic heart failure were randomized to four groups that received either carvedilol (2.5 mg once a day [carvedilol group, n=10]), metoprolol (30 mg once a day [metoprolol group, n=10]), doxazosin (0.5 mg once a day [doxazosin group, n=10]) or placebo (placebo group, n=10). Vasodilatory response to nitroglycerin (NTG) was assessed with forearm plethysmography by measuring the change in forearm blood flow (FBF) before and 5 min after sublingual administration of 0.3 mg NTG, and at the same time blood samples were taken from veins on the opposite side to measure platelet cGMP. Plethysmography and blood sampling were obtained serially at baseline (day 0); 3 days after carvedilol, metoprolol, doxazosin or placebo administration (day 3); and 3 days after application of a 10-mg/24-h NTG tape concomitantly with carvedilol, metoprolol, doxazosin or placebo (day 6). RESULTS There was no significant difference in the response of FBF (%FBF) and cGMP (%cGMP) to sublingual NTG on day 0 and day 3 among the four groups. On day 6, %FBF and %cGMP were significantly lower in the metoprolol, doxazosin and placebo groups than on day 0 and day 3, but these parameters in the carvedilol group were maintained. CONCLUSIONS These results indicated that carvedilol may prevent nitrate tolerance in patients with chronic heart failure during continuous therapy with NTG.
Addressing Safety and Security Contradictions in Cyber-Physical Systems
Modern cyber-physical systems are found in important domains such as automobiles, medical devices, building automation, and avionics. Hence, they are increasingly prone to security violations. Often such vulnerabilities occur as a result of contradictory requirements between the safety/real-time properties and the security needs of the system. In this paper we propose a formal framework that assists designers in detecting such conflicts early, thus increasing both the safety and the security of the overall system.
The efficacy and biobehavioural basis of baclofen in the treatment of alcoholic liver disease (BacALD): study protocol for a randomised controlled trial.
BACKGROUND Effective treatments for alcohol use disorders in those with significant liver disease are critically lacking. The primary aim of the current study is to explore the effectiveness and biobehavioural basis of low- and high-dose baclofen in improving treatment outcomes for alcohol dependence in people with alcoholic liver disease (the BacALD study). METHODS This double-blind, placebo-controlled study will randomise 180 participants to a 12-week regime of either baclofen (30 mg/day baclofen, 75 mg/day baclofen) or placebo. Participants must meet the ICD-10 criteria for alcohol dependence in addition to alcoholic liver disease (ALD), defined as the presence of symptoms and/or signs referable to liver disease or its complications, with or without cirrhosis. Primary outcome measures will include total abstinence duration, and time to lapse and relapse. Furthermore, 60 of the ALD patients enrolled in the trial will also participate in a pharmacokinetic and cue-reactivity component, along with an additional 30 healthy volunteers matched for age and gender randomised to a 1-week regime of either 30 mg/day baclofen or 75 mg/day baclofen. At week 1, plasma levels of baclofen and β-p-chlorophenyl-γ-hydroxybutyric acid will be measured at 0, 1 and 4 h following baclofen administration, and psychophysiological responses to alcohol-associated stimuli will be assessed in a cue-reactivity paradigm. Recruitment commenced in late March 2013. CONCLUSIONS This trial will demonstrate the efficacy and safety of two doses of baclofen in patients with alcoholic liver disease and will explore the biobehavioural mechanisms of the treatment effect.
Expression of FOXC2 in adipose and muscle and its association with whole body insulin sensitivity.
FOXC2 is a winged helix/forkhead transcription factor involved in PKA signaling. Overexpression of FOXC2 in the adipose tissue of transgenic mice protected against diet-induced obesity and insulin resistance. We examined the expression of FOXC2 in fat and muscle of nondiabetic humans with varying obesity and insulin sensitivity. There was no relation between body mass index (BMI) and FOXC2 mRNA in either adipose or muscle. There was a strong inverse relation between adipose FOXC2 mRNA and insulin sensitivity, using the frequently sampled intravenous glucose tolerance test (r = -0.78, P < 0.001). However, there was no relationship between muscle FOXC2 and any measure of insulin sensitivity. To separate insulin resistance from obesity, we examined FOXC2 expression in pairs of subjects who were matched for BMI but who were discordant for insulin sensitivity. Compared with insulin-sensitive subjects, insulin-resistant subjects had threefold higher levels of adipose FOXC2 mRNA (P = 0.03). In contrast, muscle FOXC2 mRNA expression was no different between insulin-resistant and insulin-sensitive subjects. There was no association of adipose or muscle FOXC2 mRNA with either circulating or adipose-secreted TNF-alpha, IL-6, leptin, adiponectin, or non-esterified fatty acids. Thus adipose FOXC2 is more highly expressed in insulin-resistant subjects, and this effect is independent of obesity. This association between FOXC2 and insulin resistance may be related to the role of FOXC2 in PKA signaling.