Studying Aesthetics in Photographic Images Using a Computational Approach
Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities of photographs is a highly subjective task. Hence, there is no unanimously agreed standard for measuring aesthetic value. In spite of the lack of firm rules, certain features in photographic images are believed, by many, to please humans more than certain others. In this paper, we treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, with a peer-rated online photo-sharing website as the data source. We extract certain visual features based on the intuition that they can discriminate between aesthetically pleasing and displeasing images. Automated classifiers are built using support vector machines and classification trees. Linear regression on polynomial terms of the features is also applied to infer numerical aesthetics ratings. The work attempts to explore the relationship between the emotions that pictures arouse in people and the pictures' low-level content. Potential applications include content-based image retrieval and digital photography.
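As a rough sketch of the learning setup this abstract describes (SVM and tree classifiers plus linear regression on polynomial feature terms), the snippet below runs on invented stand-in features and ratings; none of the feature definitions come from the paper.

```python
# Hypothetical sketch of the paper's learning setup: features and
# ratings are synthetic, not the authors' actual extraction code.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for visual features (e.g. brightness, colorfulness,
# composition scores) and peer ratings from a photo-sharing site.
X = rng.random((500, 8))
ratings = X @ rng.random(8) + 0.1 * rng.standard_normal(500)
y = (ratings > np.median(ratings)).astype(int)   # high vs. low aesthetics

X_tr, X_te, y_tr, y_te, r_tr, r_te = train_test_split(X, y, ratings, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("SVM acc:", svm.score(X_te, y_te), "tree acc:", tree.score(X_te, y_te))

# Linear regression on polynomial terms of the features to predict ratings.
poly = PolynomialFeatures(degree=2, include_bias=False)
reg = LinearRegression().fit(poly.fit_transform(X_tr), r_tr)
print("regression R^2:", reg.score(poly.transform(X_te), r_te))
```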
The Effect of Cabergoline on Sleep, Periodic Leg Movements in Sleep, and Early Morning Motor Function in Patients with Parkinson's Disease
To investigate the effect of the dopamine D2 and D1 receptor agonist cabergoline on sleep, periodic leg movements (PLMs) in sleep, and early morning motor performance in patients with Parkinson's disease (PD). It was hypothesized that cabergoline would have long-lasting beneficial effects on sleep and PLMs in sleep in patients with PD after a single evening intake. A total of 15 patients with idiopathic PD underwent two nights of polysomnography and motor tests (UPDRS, tapping test) before and after 6–8 weeks of treatment with cabergoline (dosage: 3–6 mg/day). Additionally, patients completed a subjective sleep visual analog scale (VAS) before and during cabergoline treatment. Compared to baseline values, treatment with cabergoline did not change sleep efficiency or the amount of stage 1 and stage 2 sleep. The number of awakenings (22.4±10.1 vs 32.5±13.3, p<0.05) and stage shifts (119±42 vs 148±46, p<0.05) were increased during treatment with cabergoline, and PLMs in sleep were reduced (PLM index 34.9±44.9 vs 6.7±4.2 per hour, p<0.05). Cabergoline significantly improved early morning motor function, and in spite of increased stage shifts and awakenings, patients felt significantly more refreshed in the morning during cabergoline therapy. Cabergoline slightly fragmented sleep, without altering its total amount. The functional significance of this finding is uncertain. The subjective quality of sleep improved, and periodic limb movements in sleep decreased.
Enterprise 2.0: the dawn of emergent collaboration
Aspectual universals of temporal anaphora
It has long been recognized that temporal anaphora in French and English depends on the aspectual distinction between events and states. For example, temporal location as well as temporal update depends on the aspectual type. This paper presents a general theory of aspect-based temporal anaphora, which extends from languages with grammatical tenses (like French and English) to tenseless languages (e.g. Kalaallisut). This theory also extends to additional aspect-dependent phenomena and to non-atomic aspectual types, processes and habits, which license anaphora to proper atomic parts (cf. nominal pluralities and kinds).
Real-time SVM-based emotion recognition algorithm
The ageing of the population is a global trend, and the growing number of elderly people requires the development of new techniques, especially in healthcare. A lot of research is already being conducted on the development of healthcare robots. However, these robots often focus on practical tasks and fall short at the level of social interaction. To enhance these social skills it is necessary to analyze both verbal and non-verbal communication. This paper focuses on the latter form of communication, more specifically on emotion detection. To accomplish this, the developed algorithm extracts specific facial cues, in the form of displacement ratios, and interprets these cues with a cascade of SVMs. In total there are 4 different steps to achieve the emotion detection. First, the face is detected with an adapted Histogram of Oriented Gradients (HOG) algorithm. Subsequently, 19 feature points are derived from the facial region. The next step comprises the calculation of 12 displacement ratios based on the distance between those feature points in successive frames. Finally, the displacement ratios are used as feature vectors for a multi-class SVM in cascade with a binary SVM. The developed algorithm is evaluated on the Extended Cohn-Kanade (CK+) dataset and has an overall accuracy of 89.78% with a detection speed of less than 30 ms, which makes it suitable for real-time applications.
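A minimal sketch of such an SVM cascade is given below. The gating order (a binary neutral-versus-emotional SVM before the multi-class stage) and the synthetic displacement-ratio features are assumptions for illustration, not the authors' exact design.

```python
# Toy SVM cascade on displacement-ratio features; the cascade order
# and the synthetic data are assumptions, not the paper's design.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((600, 12))                 # 12 displacement ratios per frame pair
emotion = rng.integers(0, 7, 600)         # 0 = neutral, 1..6 = basic emotions

gate = SVC(kernel="linear").fit(X, (emotion > 0).astype(int))
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(
    X[emotion > 0], emotion[emotion > 0])

def predict(frame_ratios):
    """Return 0 for neutral, otherwise the emotion label from the cascade."""
    x = np.asarray(frame_ratios).reshape(1, -1)
    if gate.predict(x)[0] == 0:
        return 0
    return int(clf.predict(x)[0])

print(predict(X[0]))
```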
Antiprotons in CR: What Do They Tell Us?
Recent measurements of the CR antiproton flux have been shown to pose a problem for conventional propagation models. In particular, models consistent with the secondary/primary nuclei ratio in CR produce too few antiprotons, while matching both the ratio and the antiproton flux requires ad hoc assumptions. This may indicate an additional local CR component or new phenomena in CR propagation in the Galaxy. We discuss several possibilities which may cause this problem.
An Assistive Navigation Framework for the Visually Impaired
This paper provides a framework for context-aware navigation services for vision-impaired people. Integrating advanced intelligence into navigation requires knowledge of the semantic properties of the objects in the user's environment. This interaction is required to enhance communication about objects and places to improve travel decisions. Our intelligent system is a human-in-the-loop cyber-physical system that interprets ubiquitous semantic entities by interacting with the physical world and the cyber domain, viz., 1) visual cues and distance sensing of material objects as line-of-sight interaction to interpret location-context information, and 2) data (tweets) from social media as event-based interaction to interpret situational vibes. The case study elaborates our proposed localization methods (viz., topological, landmark, metric, crowdsourced, and sound localization) for applications in wayfinding, way confirmation, user tracking, socialization, and situation alerts. Our pilot evaluation provides a proof of concept for an assistive navigation system.
Gender and Competition
Laboratory studies have documented that women often respond less favorably to competition than men. Conditional on performance, men are often more eager to compete, and the performance of men tends to respond more positively to an increase in competition. This means that fewer women enter and win competitions. We review studies that examine the robustness of these differences as well as the factors that may give rise to them. Both laboratory and field studies largely confirm these initial findings, showing that gender differences in competitiveness tend to result from differences in overconfidence and in attitudes toward competition. Gender differences in risk aversion, however, seem to play a smaller and less robust role. We conclude by asking what could and should be done to encourage qualified males and females to compete.
Slouching towards Bethlehem -- : and further psychoanalytic explorations
Nina Coltart is a member of the Independent Group. In her essays she explores the parameters of psychoanalysis against a background of broader cultural issues. The title essay is justly famous - an account of an apparently hopeless analysis, in which Dr Coltart's intervention produced remarkable results.
The boundaries of business: commentaries from the experts.
The World Leadership Survey, which began a worldwide dialogue on a set of important issues facing managers in the 1990s, continues with commentaries from four recognized experts, each of whom addresses the survey results from a different perspective. Kenichi Ohmae, chairman of McKinsey and Company in Tokyo, addresses "The Perils of Protectionism." Ohmae argues that the old definitions of national boundaries and corporate interests reflect obsolete economic theories. The real test of national well-being, Ohmae suggests, should be the economic welfare of a nation's citizens. Sylvia Ann Hewlett, economist and former director of the Economic Policy Council in New York, analyzes the survey in terms of "The Human Resource Deficit." According to Hewlett, four principles should guide corporate strategies in the 1990s: human resource development should move up the scale of corporate priorities; a family-friendly workplace will attract and keep talented workers; companies will take limited direct responsibility for training and education; the private sector will promote public investment in social issues. James E. Austin, the Richard P. Chapman Professor of Business at the Harvard Business School, writes about "The Developing-Country Difference." In developing countries, Austin observes, managers display attitudes and follow practices that diverge from those in developed nations. In particular, the role of government, investments in education and technology, and environmental concerns set these nations apart. Michel Crozier, president of the Centre de Sociologie des Organisations in Paris, writes about "The Changing Organization." In the 1990s, Crozier argues, managers need to break from old management theories and practice, questioning hierarchy, control, distance, access to information-the whole managerial system.
Training principles for fascial connective tissues: scientific foundation and suggested practical applications.
Conventional sports training emphasizes adequate training of muscle fibres, cardiovascular conditioning and/or neuromuscular coordination. However, most sports-associated overload injuries occur within elements of the body-wide fascial net, which are then loaded beyond their prepared capacity. This tensional network of fibrous tissues includes dense sheets such as muscle envelopes and aponeuroses, as well as specific local adaptations, such as ligaments or tendons. Fibroblasts continually but slowly adapt the morphology of these tissues to repeatedly applied challenging loading stimuli. Principles of a fascia-oriented training approach are introduced. These include utilization of elastic recoil, preparatory countermovement, slow and dynamic stretching, as well as rehydration practices and proprioceptive refinement. Such training should be practiced once or twice a week in order to yield a more resilient fascial body suit within a time frame of 6-24 months. Some practical examples of fascia-oriented exercises are presented.
Hierarchical Spatial Transformer Network
Computer vision researchers have long expected neural networks to acquire a spatial transformation ability that eliminates the interference caused by geometric distortion. The emergence of the spatial transformer network made this possible. The spatial transformer network and its variants can handle global displacement well, but lack the ability to deal with local spatial variance. How to achieve a better model of deformation within a neural network has therefore become a pressing question. To address this issue, we analyze the advantages and disadvantages of approximation theory and optical flow theory, then combine them to propose a novel way to achieve image deformation, implemented with a hierarchical convolutional neural network. This new approach solves for a linear deformation along with an optical flow field to model image deformation. In experiments on cluttered MNIST handwritten digit classification and image plane alignment, our method outperforms baseline methods by a large margin.
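The core idea of composing a global affine warp with a per-pixel flow field can be sketched in PyTorch as follows; the networks that would regress `theta` and `flow` are omitted, so this is an illustrative reading of the abstract rather than the authors' architecture.

```python
# Minimal sketch: compose a global affine warp with a local flow field.
# The regressors that would predict `theta` and `flow` are assumptions.
import torch
import torch.nn.functional as F

def deform(img, theta, flow):
    """img: (N,C,H,W); theta: (N,2,3) affine; flow: (N,H,W,2) local offsets."""
    grid = F.affine_grid(theta, img.shape, align_corners=False)  # global part
    grid = grid + flow                                           # local refinement
    return F.grid_sample(img, grid, align_corners=False)

img = torch.rand(1, 1, 28, 28)
identity = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])
flow = torch.zeros(1, 28, 28, 2)          # zero flow -> pure affine warp
out = deform(img, identity, flow)
print(out.shape)  # torch.Size([1, 1, 28, 28])
```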
Multicentre audit of inpatient management of acute exacerbations of chronic obstructive pulmonary disease: comparison with clinical guidelines.
BACKGROUND AND OBJECTIVE Chronic obstructive pulmonary disease (COPD) exacerbations are a major cause of hospital admission and clinical guidelines for optimised management are available. However, few data assessing concordance with these guidelines are available. We aimed to identify gaps and document variability in clinical practices for COPD admissions. METHODS Medical records of all admissions over a 3-month period coded as COPD with non-catastrophic or severe comorbidities or complications at eight acute-care hospitals within the Hunter New England region were retrospectively audited. RESULTS Mean (SD) length of stay was 6.3 (6.1) days for 221 admissions with mean age of 71 (10) years; 53% were female and 34% current smokers. Spirometry was performed in 34% of admissions with a wide inter-hospital range (4-58%, P < 0.0001): mean FEV1 was 36% (18) predicted. Arterial blood gases were performed on admission in 54% of cases (range 0-85%, P < 0.0001). Parenteral steroids were used in 82% of admissions, antibiotics in 87% and oxygen therapy during admission in 79% (with oxygen prescription in only 3% of these). Bronchodilator therapy was converted from nebuliser to an inhaler device early in the admission, at 1.6 (1.7) days, in 51% of cases. Only 22% of patients were referred to pulmonary rehabilitation (inter-hospital range of 0-50%, P = 0.002). Re-admission within 28 days was higher in rural hospitals compared with metropolitan (27% vs 7%, P < 0.0001). CONCLUSIONS We identified gaps in best practice service provision associated with wide inter-hospital variations, indicating disparity in access to services throughout the region.
Theories of First Language Acquisition
Language acquisition is the study of the processes through which individuals acquire language. In general, language acquisition refers to first language acquisition, which examines children's acquisition of their native language, while second language acquisition concerns the acquisition of additional languages by children and adults alike. The history of language learning theories can be pictured as a great pendulum swinging from Skinnerian environmentalism to Piagetian constructivism to Chomskian innatism. Consequently, much research in this field has revolved around debates about whether cognitive processes and structures are constrained by an innately predetermined mechanism or shaped by environmental input. Linguists Noam Chomsky and Eric Lenneberg have argued for half a century for the hypothesis that children have inborn, language-specific capabilities that enable and constrain language learning. Others, like Catherine Snow, Elizabeth Bates and Brian MacWhinney, have hypothesized that language acquisition is the product of general cognitive capacities and the interaction between children and their surrounding communities. William O'Grady suggests that complex syntactic phenomena stem from an efficiency-driven, linear computational system, and refers to his work as "nativism without Universal Grammar". Nevertheless, these basic theories of language acquisition cannot be entirely divorced from each other. The purpose of the present paper is to review some of the fundamental theories that describe how children acquire their native language. Accordingly, describing the strengths and weaknesses of Behaviorism, Mentalism, Rationalism, Empiricism, Emergentism, Chunking, Vygotsky's Sociocultural Theory, Piaget's theory of child language and thought, Statistical Language Learning, Relational Frame Theory and Activity Theory is among the objectives of this study. In general, these basic theories are largely complementary to each other, addressing different types of learning and different cases of language acquisition.
III-nitride micro-emitter arrays : development and applications
III-nitride micro-emitter array technology was developed in the authors' laboratory around 1999. Since its inception, much progress has been made by several groups and the technology has led to the invention of several novel devices. This paper provides an overview of recent progress in single-chip ac micro-size light emitting diodes (μLEDs) that can be plugged directly into standard high-voltage ac power outlets, self-emissive microdisplays, and interconnected μLEDs for boosting light emitting diodes' wall-plug efficiency, all of which evolved from III-nitride micro-emitter array technology. Finally, potential applications of III-nitride visible micro-emitter arrays as a light source for DNA microarrays, and future prospects of III-nitride deep ultraviolet micro-emitter arrays for label-free protein analysis in microarray format by taking advantage of the direct excitation of intrinsic protein fluorescence, are discussed.
Large vocabulary Russian speech recognition using syntactico-statistical language modeling
Speech is the most natural way of human communication, and convenient, efficient human–computer interaction requires state-of-the-art spoken language technology. Research in this area has traditionally focused on several main languages, such as English, French, Spanish, Chinese or Japanese, but other languages, particularly Eastern European languages, have received much less attention. Recently, however, research activities on speech technologies for Czech, Polish, Serbo-Croatian and Russian have been steadily increasing. In this paper, we describe our efforts to build an automatic speech recognition (ASR) system for the Russian language with a large vocabulary. Russian is a synthetic and highly inflected language with many roots and affixes. This greatly reduces the performance of ASR systems designed using traditional approaches. In our work, we have paid special attention to the specifics of the Russian language when developing the acoustic, lexical and language models. A special software tool for pronunciation lexicon creation was developed. For the acoustic model, we investigated a combination of knowledge-based and statistical approaches to create several different phoneme sets, the best of which was determined experimentally. For the language model (LM), we introduced a new method that combines syntactical and statistical analysis of the training text data in order to build better n-gram models. Evaluation experiments were performed using two different Russian speech databases and an internally collected text corpus. Among the phoneme sets we created, the one with 47 phonemes achieved the fewest word-level recognition errors, and we therefore used it in the subsequent language modeling evaluations. Experiments with a 204-thousand-word vocabulary ASR system were performed to compare the standard statistical n-gram LMs and the language models created using our syntactico-statistical method. The results demonstrated that the proposed language modeling approach is capable of reducing word recognition errors.
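The paper's syntactico-statistical method is not reproduced here; as a generic illustration of injecting syntactic information into an n-gram LM, the sketch below linearly interpolates a word bigram with a bigram over syntactic classes. The toy corpus, class map, and interpolation weight are all invented.

```python
# Generic word/class bigram interpolation; not the authors' exact method.
from collections import Counter

corpus = "мама мыла раму мама мыла окно".split()
cls = {"мама": "NOUN", "мыла": "VERB", "раму": "NOUN", "окно": "NOUN"}

def bigram_prob(tokens):
    uni, bi = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    return lambda w1, w2: (bi[(w1, w2)] + 1) / (uni[w1] + len(uni))  # add-one

p_word = bigram_prob(corpus)
p_class = bigram_prob([cls[w] for w in corpus])

lam = 0.7  # interpolation weight, tuned on held-out data in practice
def p(w2, w1):
    return lam * p_word(w1, w2) + (1 - lam) * p_class(cls[w1], cls[w2])

print(p("мыла", "мама"))
```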
Unsupervised spectral clustering for hierarchical modelling and criticality analysis of complex networks
Infrastructure networks are essential to the socioeconomic development of any country. This article applies clustering analysis to extract the inherent structural properties of realistic-size infrastructure networks. Network components with high criticality are identified, and a general hierarchical modelling framework is developed for representing the networked system as a scalable hierarchical structure of corresponding fictitious networks. This representation makes a multi-scale criticality analysis possible, beyond the widely used component-level criticality analysis, and its zoom-in results can support confident decision making.
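A minimal sketch of the clustering step might look like the following, where spectral clustering groups a synthetic network into communities that could serve as the fictitious super-nodes of a hierarchical model; the graph, cluster count, and the choice of betweenness centrality as a criticality measure are illustrative assumptions.

```python
# Illustrative sketch: spectral clustering of a synthetic network into
# communities; graph and parameters are made up for the example.
import networkx as nx
from sklearn.cluster import SpectralClustering

G = nx.connected_caveman_graph(4, 6)          # 4 dense communities of 6 nodes
A = nx.to_numpy_array(G)                      # adjacency as affinity matrix

labels = SpectralClustering(
    n_clusters=4, affinity="precomputed", random_state=0
).fit_predict(A)
print(labels)   # nodes grouped by community -> one fictitious node per group

# Betweenness centrality as one simple per-component criticality measure.
crit = nx.betweenness_centrality(G)
print(max(crit, key=crit.get))
```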
Seq2Seq forward and backward translation with sentiment prediction (encoder RNN, decoder RNN, embedded representation, prediction RNN)
Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities. The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities. However, existing work learns joint representations by requiring all modalities as input and as a result, the learned representations may be sensitive to noisy or missing modalities at test time. With the recent success of sequence to sequence (Seq2Seq) models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time. In this paper, we propose a method to learn robust joint representations by translating between modalities. Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input. We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities. Once our translation model is trained with paired multimodal data, we only need data from the source modality at test time for final sentiment prediction. This ensures that our model remains robust from perturbations or missing information in the other modalities. We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube. Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to missing or perturbed modalities.
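A hedged sketch of the coupled translation-prediction objective appears below: a forward translator maps the source modality to a target, a backward translator closes the cycle, and a prediction head reads sentiment off the joint representation. Model sizes, loss weights, and the Seq2Seq internals are assumptions.

```python
# Toy coupled translation-prediction objective; sizes and losses are
# assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Stand-in for a Seq2Seq encoder-decoder between two modalities."""
    def __init__(self, d_src, d_tgt, d_hid=32):
        super().__init__()
        self.enc = nn.GRU(d_src, d_hid, batch_first=True)
        self.dec = nn.Linear(d_hid, d_tgt)
    def forward(self, x):
        h, _ = self.enc(x)
        return self.dec(h), h        # translation + joint representation

fwd = Translator(300, 74)            # e.g. language -> acoustic features
bwd = Translator(74, 300)            # acoustic -> language (cycle direction)
head = nn.Linear(32, 1)              # sentiment from the joint representation

lang = torch.randn(8, 20, 300)
acoustic = torch.randn(8, 20, 74)
y = torch.randn(8, 1)

a_hat, h = fwd(lang)
l_hat, _ = bwd(a_hat)
pred = head(h.mean(dim=1))

mse = nn.MSELoss()
# translation + cycle consistency + sentiment prediction terms
loss = mse(a_hat, acoustic) + mse(l_hat, lang) + mse(pred, y)
loss.backward()
```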
A Maximum Entropy Framework for Semisupervised and Active Learning With Unknown and Label-Scarce Classes
We investigate semisupervised learning (SL) and pool-based active learning (AL) of a classifier for domains with label-scarce (LS) and unknown categories, i.e., defined categories for which there are initially no labeled examples. This scenario manifests, e.g., when a category is rare, or expensive to label. There are several learning issues when there are unknown categories: 1) it is a priori unknown which subset of (possibly many) measured features are needed to discriminate unknown from common classes and 2) label scarcity suggests that overtraining is a concern. Our classifier exploits the inductive bias that an unknown class consists of the subset of the unlabeled pool's samples that are atypical (relative to the common classes) with respect to certain key (albeit a priori unknown) features and feature interactions. Accordingly, we treat negative log-$p$-values on raw features as nonnegatively weighted derived feature inputs to our class posterior, with zero weights identifying irrelevant features. Through a hierarchical class posterior, our model accommodates multiple common classes, multiple LS classes, and unknown classes. For learning, we propose a novel semisupervised objective customized for the LS/unknown category scenarios. While several works minimize class decision uncertainty on unlabeled samples, we instead preserve this uncertainty [maximum entropy (maxEnt)] to avoid overtraining. Our experiments on a variety of UCI Machine Learning (ML) domains show: 1) the use of $p$-value features coupled with weight constraints leads to sparse solutions and gives significant improvement over the use of raw features and 2) for LS SL and AL, unlabeled samples are helpful, and should be used to preserve decision uncertainty (maxEnt), rather than to minimize it, especially during the early stages of AL. Our AL system, leveraging a novel sample-selection scheme, discovers unknown classes and discriminates LS classes from common ones, with sparing use of oracle labeling.
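The p-value feature construction can be illustrated as follows; the Gaussian null model per feature is my simplification of the paper's density estimates, and the data are synthetic.

```python
# Sketch of the p-value feature idea: score how atypical each raw
# feature value is relative to the common classes, then use -log p as a
# nonnegative derived feature. Gaussian nulls are a simplification.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
common = rng.normal(0.0, 1.0, size=(200, 5))    # labeled common-class pool
unlabeled = np.vstack([rng.normal(0, 1, (95, 5)),
                       rng.normal(4, 1, (5, 5))])  # 5 atypical samples hidden

mu, sd = common.mean(axis=0), common.std(axis=0)
# Two-sided tail p-value per feature; small p => atypical vs. common classes.
p = 2 * stats.norm.sf(np.abs(unlabeled - mu) / sd)
derived = -np.log(p + 1e-12)                    # nonnegative derived features

print(derived.sum(axis=1).argsort()[-5:])       # most atypical samples
```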
Advantage of a precurved fenestrated endograft for aortic arch disease: simplified arch aneurysm treatment in Japan 2010 and 2011.
OBJECTIVE We evaluated the results of our previous study investigating a precurved fenestrated endograft treatment for thoracic aortic aneurysms and aortic dissection extended to the aortic arch. METHODS From February 2010 to December 2011 at 35 Japanese centers, 383 patients (mean age, 75.7 ± 9.4 years) who required stent-graft landing in the aortic arch were treated with a precurved fenestrated endograft. The device has 19 3-dimensional curved stent skeleton types similar to aortic arch configurations and 8 graft fenestration types and is 24 to 44 mm in diameter and 16 to 20 cm long. The endografts were fabricated according to preoperative 3-dimensional computed tomographic images. RESULTS Technical and initial successes were achieved in 380 and 364 cases, respectively. Device proximal end was at zones 0 to 2 in 363, 15, and 2 patients, respectively. Lesions' proximal end ranged from zone 0 to 3 in 16, 125, 195, and 44 patients, respectively. The mean operative and fluoroscopic times were 161 ± 76 and 26 ± 13 min, respectively. The complications included stroke (7 patients), permanent paralysis (3), and perioperative death (6). No branch occlusion or proximal migration of the device occurred during follow-up. CONCLUSIONS A precurved fenestrated endograft for endovascular repair in aortic arch disease rendered catheter manipulation simple and minimized operative complication risks. Although most patients had inadequate proximal landing zone and severely angled complex configuration, low mortality and morbidity and satisfactory clinical success were early outcomes, suggesting that this simplified treatment may be effective for aortic arch disease.
Do Executive Stock Options Generate Incentives for Earnings Management? Evidence from Accounting
Using a sample of 224 firms that announced restatements of their financial statements between January 1997 and June 2002 due to accounting irregularities, and a control group of all non-restating firms with data on ExecuComp, we examine the effect of pay-for-performance incentives on earnings management. Controlling for the endogeneity of pay-for-performance incentives, we find a significant positive effect of incentives on the probability of restating. In our sample, the average value of executive option holdings increases by $21 for every $1000 change in equity value. Increasing incentives by 90 cents, or 4.3%, from this mean increases the probability of restatement by 1%. We find that stock and options differ in the incentives generated for earnings management. There is no evidence that equity holdings generate incentives for earnings management. Further, large managerial ownership mitigates the positive effect of stock options on the incentive to manage earnings.
Automatic road sign detection and classification based on support vector machines and HOG descriptors
This paper examines the detection and classification of road signs in color images acquired by a low-cost camera mounted on a moving vehicle. A new method for the detection and classification of road signs is proposed: color-based detection is first used to locate regions of interest. Then, a circular Hough transform is applied to complete the detection, taking advantage of the shape properties of the road signs. The regions of interest are finally represented using HOG descriptors and are fed into trained Support Vector Machines (SVMs) in order to be recognized. For the training procedure, a database with several training examples depicting Greek road signs has been developed. Many experiments have been conducted and are presented, measuring the efficiency of the proposed methodology, especially under adverse weather conditions and poor illumination. For the experiments, training datasets consisting of different numbers of examples were used, and the results are presented along with some possible extensions of this work.
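The pipeline stages (color mask, circular Hough transform, HOG features feeding an SVM) can be sketched with OpenCV as below; the color thresholds and Hough parameters are illustrative, and the SVM training step is omitted.

```python
# OpenCV sketch of the pipeline stages; thresholds are illustrative
# and the SVM stage is left untrained.
import cv2
import numpy as np

img = np.zeros((240, 320, 3), np.uint8)
cv2.circle(img, (160, 120), 40, (0, 0, 255), -1)       # fake red sign

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # red-ish regions of interest

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=20, minRadius=10, maxRadius=80)

hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
if circles is not None:
    for x, y, r in np.around(circles[0]).astype(int):
        patch = cv2.resize(gray[y - r:y + r, x - r:x + r], (64, 64))
        feat = hog.compute(patch)        # feature vector for the SVM
        print(feat.shape)                # would go to a trained cv2.ml.SVM
```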
Automated memory leak detection for production use
This paper presents Sniper, an automated memory leak detection tool for C/C++ production software. To track the staleness of allocated memory (which is a clue to potential leaks) with little overhead (mostly <3%), Sniper leverages instruction sampling using performance monitoring units available in commodity processors. It also offloads the time- and space-consuming analyses, and works on the original software without modifying the underlying memory allocator; it neither perturbs the application execution nor increases the heap size. Sniper can even deal with multithreaded applications with very low overhead. In particular, it performs a statistical analysis, which views memory leaks as anomalies, for automated and systematic leak determination. Consequently, it accurately detected real-world memory leaks with no false positives, and achieved an F-measure of 81% on average for 17 benchmarks stress-tested with various memory leaks.
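This is not Sniper's implementation, but the statistical view of leaks as staleness anomalies can be illustrated with a toy table; in the real tool, staleness would come from PMU-sampled memory accesses rather than a fabricated dictionary.

```python
# Toy illustration of treating allocation-site staleness as a
# statistical anomaly; the staleness table is fabricated.
import numpy as np

rng = np.random.default_rng(3)
# Seconds since each allocation site's memory was last accessed.
staleness = {f"site_{i}": rng.exponential(5.0) for i in range(50)}
staleness["leaky_site"] = 600.0          # never touched again -> very stale

vals = np.log1p(np.array(list(staleness.values())))
z = (vals - vals.mean()) / vals.std()    # z-score of log-staleness

for site, score in zip(staleness, z):
    if score > 3.0:                      # flag statistical outliers as leaks
        print(f"possible leak at {site} (z={score:.1f})")
```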
Printed Drug-Delivery Systems for Improved Patient Treatment.
The use of various types of printing technologies offers potential solutions for personalized medicine and tailored dosage forms to meet the needs of individual treatments in the future. Many scenarios for printed dosage forms exist, and the concepts include, on the simplest level, accurately deposited doses of drug substances. In addition, computer design allows endless opportunities to create suitable geometries with tailored functionality and different levels of complexity to control the release properties of one or multiple drug substances. It will take some time to convert these technological developments in printing into better treatments for patients, because challenges exist. However, printing technologies are developing fast and have the potential to allow the use of versatile materials to manufacture sophisticated drug-delivery systems and biofunctional constructs for personalized treatments.
Dielectric Properties of Epoxy Nanocomposites containing TiO2, Al2O3 and ZnO fillers
The paper presents results of dielectric spectroscopy and space charge (PEA) measurements on epoxy resin filled with 10% w/w micro- and nanosized particles of TiO2, Al2O3 and ZnO. The results appear to show that the material from which the nanoparticles are made does not strongly influence these results. The results support the proposition that the dielectric properties of such nano-filled composites are controlled by Stern-Gouy-Chapman layers ("interaction zones") around the particles.
Refinements of Vertical Scar Mammaplasty: Circumvertical Skin Excision Design With Limited Inferior Pole Subdermal Undermining and Liposculpture of the Inframammary Crease
Vertical scar mammaplasty, first described by Lötsch in 1923 and Dartigues in 1924 for mastopexy, was extended later to breast reduction by Arié in 1957. It was otherwise lost to surgical history until Lassus began experimenting with it in 1964. It then was extended by Marchac and de Olarte, finally to be popularized by Lejour. Despite initial skepticism, vertical reduction mammaplasty is becoming increasingly popular in recent years because it best incorporates the two concepts of minimal scarring and a satisfactory breast shape. At the moment, vertical scar techniques seem to be more popular in Europe than in the United States. A recent survey, however, has demonstrated that even in the United States, it has surpassed the rate of inverted T-scar breast reductions. The technique, however, is not without major drawbacks, such as long vertical scars extending below the inframammary crease and excessive skin gathering and “dog-ear” at the lower end of the scar that may require long periods for resolution, causing extreme distress to patients and surgeons alike. Efforts are being made to minimize these complications and make the procedure more user-friendly either by modifying it or by replacing it with an alternative that retains the same advantages. Although conceptually opposed to the standard vertical design, the circumvertical modification probably is the most important maneuver for shortening vertical scars. Residual dog-ears often are excised, resulting in a short transverse scar (inverted T- or L-scar). The authors describe limited subdermal undermining of the skin at the inferior edge of the vertical incisions with liposculpture of the inframammary crease, avoiding scar extension altogether. Simplified circumvertical drawing that uses the familiar Wise pattern also is described.
Loss-of-function DNA sequence variant in the CLCNKA chloride channel implicates the cardio-renal axis in interindividual heart failure risk variation.
Common heart failure has a strong undefined heritable component. Two recent independent cardiovascular SNP array studies identified a common SNP at 1p36 in intron 2 of the HSPB7 gene as being associated with heart failure. HSPB7 resequencing identified other risk alleles but no functional gene variants. Here, we further show no effect of the HSPB7 SNP on cardiac HSPB7 mRNA levels or splicing, suggesting that the SNP marks the position of a functional variant in another gene. Accordingly, we used massively parallel platforms to resequence all coding exons of the adjacent CLCNKA gene, which encodes the Ka renal chloride channel (ClC-Ka). Of 51 exonic CLCNKA variants identified, one SNP (rs10927887, encoding Arg83Gly) was common, in linkage disequilibrium with the heart failure risk SNP in HSPB7, and associated with heart failure in two independent Caucasian referral populations (n = 2,606 and 1,168; combined P = 2.25 × 10^-6). Individual genotyping of rs10927887 in the two study populations and a third independent heart failure cohort (combined n = 5,489) revealed an additive allele effect on heart failure risk that is independent of age, sex, and prior hypertension (odds ratio = 1.27 per allele copy; P = 8.3 × 10^-7). Functional characterization of recombinant wild-type Arg83 and variant Gly83 ClC-Ka chloride channel currents revealed ≈50% loss-of-function of the variant channel. These findings identify a common, functionally significant genetic risk factor for Caucasian heart failure. The variant CLCNKA risk allele, telegraphed by linked variants in the adjacent HSPB7 gene, uncovers a previously overlooked genetic mechanism affecting the cardio-renal axis.
Switched inductor boost converter for PV applications
This paper introduces a boost converter with high dc gain as a solution for partial shading of photovoltaic (PV) modules. The switched inductor boost converter (SIBC) is obtained by replacing the inductor of the boost converter with a switched inductor branch. As a result, the conversion gain ratio can be increased. The proposed converter is used as an interface between the PV system and the load. A Maximum Power Point Tracking (MPPT) control is applied to extract the maximum power of the PV module. Analyses, simulation, and experimental results are provided to validate the operation of the converter.
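For intuition, the ideal conversion ratios can be compared numerically: the classical boost converter gives 1/(1-D), while the switched-inductor boost is commonly derived as (1+D)/(1-D) in the textbook idealization; treat these formulas as the standard result, not necessarily this paper's exact analysis.

```python
# Compare ideal gains of a classical boost and a switched-inductor
# boost over a few duty cycles (textbook idealized formulas).
duty_cycles = [0.3, 0.5, 0.7]
for D in duty_cycles:
    boost = 1 / (1 - D)
    sibc = (1 + D) / (1 - D)
    print(f"D={D}: boost gain={boost:.2f}, switched-inductor gain={sibc:.2f}")
```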
PHOG: Probabilistic Model for Code
We introduce a new generative model for code called probabilistic higher order grammar (PHOG). PHOG generalizes probabilistic context free grammars (PCFGs) by allowing conditioning of a production rule beyond the parent non-terminal, thus capturing rich contexts relevant to programs. Even though PHOG is more powerful than a PCFG, it can be learned from data just as efficiently. We trained a PHOG model on a large JavaScript code corpus and show that it is more precise than existing models, while similarly fast. As a result, PHOG can immediately benefit existing programming tools based on probabilistic models of code.
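The step beyond a PCFG can be illustrated with a toy: estimate production probabilities conditioned on a richer context than the parent non-terminal alone. The hard-coded context (parent plus left sibling) and the tiny event list below are invented; PHOG actually learns the conditioning program from data.

```python
# Toy conditional production model: counts conditioned on (parent,
# left sibling) instead of the parent alone; the events are made up.
from collections import Counter, defaultdict

# (parent, left_sibling, production) triples harvested from ASTs.
events = [("Call", "Name", "Arg"), ("Call", "Name", "Arg"),
          ("Call", "Str", "Kwarg"), ("If", "Compare", "Body")]

counts = defaultdict(Counter)
for parent, sib, prod in events:
    counts[(parent, sib)][prod] += 1     # richer context than PCFG's parent

def p(prod, parent, sib):
    ctx = counts[(parent, sib)]
    return ctx[prod] / max(sum(ctx.values()), 1)

print(p("Arg", "Call", "Name"))   # 1.0
print(p("Kwarg", "Call", "Str"))  # 1.0
```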
Alcohol screening and brief counseling in a primary care hypertensive population: a quality improvement intervention.
AIMS To determine the effect of an intervention to improve alcohol screening and brief counseling for hypertensive patients in primary care. DESIGN Two-year randomized, controlled trial. SETTING/PARTICIPANTS Twenty-one primary care practices across the United States with a common electronic medical record. INTERVENTION To promote alcohol screening and brief counseling. Intervention practices received site visits from study personnel and were invited to annual network meetings to review the progress of the project and share improvement strategies. MEASUREMENTS Main outcome measures included rates of documented alcohol screening in hypertensive patients and brief counseling administered in those diagnosed with high-risk drinking, alcohol abuse or alcohol dependence. Secondary outcomes included change in blood pressure among patients with these diagnoses. FINDINGS Hypertensive patients in intervention practices were significantly more likely to have been screened after 2 years than hypertensive patients in control practices [64.5% versus 23.5%; adjusted odds ratio (OR) = 8.1; 95% confidence interval (CI) 1.7-38.2; P < 0.0087]. Patients in intervention practices diagnosed with high-risk drinking, alcohol abuse or alcohol dependence were more likely than those in control practices to have had alcohol counseling documented (50.5% versus 29.6%; adjusted OR = 5.5, 95% CI 1.3-23.3). Systolic (adjusted mean decline = 4.2 mmHg, P = 0.036) and diastolic (adjusted mean decline = 3.3 mmHg, P = 0.006) blood pressure decreased significantly among hypertensive patients receiving alcohol counseling. CONCLUSIONS Primary care practices receiving an alcohol-focused intervention over 2 years improved rates of alcohol screening for their hypertensive population. Implementation of alcohol counseling for high-risk drinking, alcohol abuse or alcohol dependence also improved and led to changes in patient blood pressures.
Ubiquitous GPS vehicle tracking and management system
Global Positioning System (GPS) technology is becoming widely used for tracking and monitoring vehicles. Many systems have been created to provide such services, which makes them more popular and needed than ever before. In this paper a "GPS vehicle tracking system" is proposed. This system is useful for fleet operators monitoring the driving behavior of employees, or for parents monitoring their teen drivers. Moreover, this system can be used for theft prevention as a retrieval device, in addition to working as a security system combined with car alarms. The main contribution of this paper is providing two types of end-user applications: a web application and a mobile application. This way the proposed system provides a ubiquitous vehicle tracking system with maximum accessibility for the user anytime and anywhere. The system's tracking services include acquiring the location and ground speed of a given vehicle at the current moment or on any previous date. It also monitors the vehicle by setting speed and geographical limits, sending SMS alerts when the vehicle exceeds these pre-defined limits. Additionally, all the movements and stops of a given vehicle can be monitored. Tracking vehicles in our system uses a wide range of technologies and communication networks, including General Packet Radio Service (GPRS), Global System for Mobile Communication (GSM), the Internet (World Wide Web), and the Global Positioning System (GPS).
Group Recommender Systems: Aggregation, Satisfaction and Group Attributes
A Trainable Spaced Repetition Model for Language Learning
We present half-life regression (HLR), a novel model for spaced repetition practice with applications to second language acquisition. HLR combines psycholinguistic theory with modern machine learning techniques, indirectly estimating the "half-life" of a word or concept in a student's long-term memory. We use data from Duolingo, a popular online language learning application, to fit HLR models, reducing error by 45%+ compared to several baselines at predicting student recall rates. HLR model weights also shed light on which linguistic concepts are systematically challenging for second language learners. Finally, HLR was able to improve Duolingo daily student engagement by 12% in an operational user study.
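The model's two defining equations are a recall probability that decays as p = 2^(-Δ/h) and an estimated half-life ĥ = 2^(Θ·x); the numeric sketch below plugs invented weights and features into them.

```python
# Half-life regression recall curve: p = 2^(-Delta/h), h = 2^(theta.x).
# The weights and feature values here are invented for illustration.
import numpy as np

theta = np.array([0.5, -0.2, 1.0])   # learned weights (illustrative)
x = np.array([3.0, 1.0, 2.0])        # e.g. correct recalls, lapses, bias term

h = 2.0 ** theta.dot(x)              # estimated half-life in days
for delta in [1, 7, 30]:             # days since last practice
    p = 2.0 ** (-delta / h)
    print(f"after {delta:2d} days: recall probability {p:.2f}")
```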
MCLUHAN AND THE "TORONTO SCHOOL OF COMMUNICATION"
How does one recognize a "school" of thought? And why should one? These are questions that, concerning a truly distinctive and now distinguished intellectual trend originating in Toronto, I have entertained since the death of Marshall McLuhan on the last day of 1980. At the time I was impressed by the fact that Harold Innis, Eric Havelock and McLuhan, the three main scholars who taught that communication systems create definite psychological and social "states", had all been at the University of Toronto. The most significant common thread was that all three had explored different implications of ancient Greek literacy to support their theoretical approach. Even if they had not directly collaborated with each other, they had known each other's work and been inspired by common perceptions.
SRILM - an extensible language modeling toolkit
SRILM is a collection of C++ libraries, executable programs, and helper scripts designed to allow both production of and experimentation with statistical language models for speech recognition and other applications. SRILM is freely available for noncommercial purposes. The toolkit supports creation and evaluation of a variety of language model types based on N-gram statistics, as well as several related tasks, such as statistical tagging and manipulation of N-best lists and word lattices. This paper summarizes the functionality of the toolkit and discusses its design and implementation, highlighting ease of rapid prototyping, reusability, and combinability of tools.
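In practice the toolkit is driven from the command line (for example, `ngram-count` trains N-gram models and `ngram -ppl` scores held-out text). The Python sketch below mimics the perplexity computation for a tiny bigram model; the add-one smoothing is chosen for brevity and differs from SRILM's defaults.

```python
# Tiny bigram perplexity computation, mimicking what an N-gram LM
# toolkit reports; add-one smoothing is a simplification.
import math
from collections import Counter

train = "the cat sat on the mat".split()
test = "the cat sat".split()

uni, bi = Counter(train), Counter(zip(train, train[1:]))
V = len(uni)

def logp(w1, w2):                    # add-one smoothed bigram log-prob
    return math.log2((bi[(w1, w2)] + 1) / (uni[w1] + V))

ll = sum(logp(w1, w2) for w1, w2 in zip(test, test[1:]))
ppl = 2 ** (-ll / (len(test) - 1))
print(f"perplexity: {ppl:.2f}")
```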
Learning Perceptual Aspects of Diagnosis in Medicine via Eye Movement Modeling Examples on Patient Video Cases
Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, require the acquisition not only of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts), clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on yet is perceptual skills, like visually searching for, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich but simple classification task, perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task, using example video cases that were verbally explained by an expert. In addition, the experimental groups saw a display of the expert's eye movements, recorded while he performed the task. Results show that blurring the areas the expert did not attend to enhances medical students' diagnostic performance for epileptic seizures, in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.
Battery Energy Storage System for Power Conditioning of Renewable Energy Sources
Renewable energy sources such as wind and hydro are intermittent in nature, and generators connected to the local grid may cause severe power quality problems. These issues include voltage dips during connection/disconnection of the generator, uncertainty of supply, and unbalanced and distorted power supply. In this paper, the power conditioning of a micro-hydro-driven induction generator connected to the local grid using a battery energy storage system (BESS) is simulated for voltage regulation, load leveling, harmonics elimination and power factor improvement.
Trends in Green Wireless Access
Reducing CO2 emissions is an important global environmental issue. Over the recent years, wireless and mobile communications have increasingly become popular with consumers. Today’s typical wireless access network consumes more than 50% of the total power consumption of mobile communications networks. Growth of mobile Internet service usage is expected to drive the growth in wireless access data rates and usage. The current rate of power consumption per unit of data cannot be sustained as we move towards broadband wireless access networks and anticipated increases in wireless data traffic. This paper, first, examines typical energy consumption in mobile communications networks and traffic trends, and it subsequently discusses target power consumption reduction for the evolving broadband wireless networks to be environmentally acceptable and sustainable. It describes several technologies that can contribute towards reaching this target.
Chronic osteomyelitis in Ilorin, Nigeria.
AIM To review cases of chronic osteomyelitis managed at a private health institution (Ela Memorial Medical Centre, Ilorin, Nigeria) between March 1995 and February 2005. PATIENTS AND METHODS Case notes and X-rays of the patients who presented at EMMC with chronic osteomyelitis were reviewed retrospectively. Age, sex, sites of bone involvement and outcome of treatment were recorded. Local surgical debridement (including saucerisation, sequestrectomy and curettage) was the cornerstone of treatment. All patients received antibiotics for at least 6 weeks. RESULTS Of the 107 cases, 71 (66.4%) were males, with a male-to-female ratio of 2:1. The mean age was 21.9 years (range 1.5-80 years). Chronic osteomyelitis was most common in the first and second decades of life (55.2%) and mostly affected people < 50 years of age (93.5%). Haematogenous osteomyelitis was the most common cause of chronic osteomyelitis (81.3%). The most common bone site was the tibia (32.7%). Nearly all patients (103) were adjudged cured; only 3 suffered a recurrence. CONCLUSION Chronic osteomyelitis is common in Nigeria. Most cases occur in the first and second decades of life, with haematogenous osteomyelitis being the most common cause. A high index of suspicion of osteomyelitis in children with septicaemia, and the proper treatment of patients with open fractures, will help to reduce the occurrence of the disease.
Stability and predictive utility, over 3 years, of the illness beliefs of individuals recently diagnosed with Type 2 diabetes mellitus.
AIM To determine the stability of beliefs of patients with Type 2 diabetes about their diabetes over 3 years, following diagnosis. METHODS Data were collected as part of a multicentre cluster randomized controlled trial of a 6-h self-management programme, across 207 general practices in the UK. Participants in the original trial were eligible for follow-up with biomedical data (HbA1c levels, blood pressure, weight, blood lipid levels) collected at the practice, and questionnaire data collected by postal distribution and return. Psychological outcome measures were depression (Hospital Anxiety and Depression Scale) and diabetes distress (Problem Areas in Diabetes scale). Illness beliefs were assessed using the Illness Perceptions Questionnaire-Revised and the Diabetes Illness Representations Questionnaire scales. RESULTS At 3-year follow-up, all post-intervention differences in illness beliefs between the intervention and the control group remained significant, with perceptions of the duration of diabetes, seriousness of diabetes and perceived impact of diabetes unchanged over the course of the 3-year follow-up. The control group reported a greater understanding of diabetes during the follow-up, and the intervention group reported decreased responsibility for diabetes outcomes during the follow-up. After controlling for 4-month levels of distress and depression, the perceived impact of diabetes at 4 months remained a significant predictor of distress and depression at 3-year follow-up. CONCLUSIONS People's beliefs about diabetes are formed quickly after diagnosis, and thereafter seem to be relatively stable over extended follow-up. These early illness beliefs are predictive of later psychological distress, and emphasize the importance of initial context and provision of diabetes care in shaping participants' future well-being.
Regularization of Linear Descriptor Systems with Variable Coefficients
We study linear descriptor control systems with rectangular variable coefficient matrices. We introduce condensed forms for such systems under equivalence transformations and use these forms to detect whether the system can be transformed to a uniquely solvable closed loop system via state or derivative feedback. We show that under some mild assumptions every such system consists of an underlying square subsystem that behaves essentially like a standard state space system, plus some solution components that are constrained to be zero.
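In symbols, a plausible rendering of the setup, following the standard descriptor-systems formulation rather than quoting this paper's notation, is:

```latex
E(t)\,\dot{x}(t) = A(t)\,x(t) + B(t)\,u(t), \qquad E(t)\in\mathbb{R}^{m\times n}
```

where $E(t)$ may be rectangular or rank-deficient; state feedback $u = F(t)x + w$ gives the closed loop $E\dot{x} = (A+BF)x + Bw$, derivative feedback $u = G(t)\dot{x} + w$ gives $(E-BG)\dot{x} = Ax + Bw$, and regularization asks for an $F$ or $G$ that makes the closed-loop system uniquely solvable.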
Towards gameplay analysis via gameplay metrics
User-oriented research in the game industry is undergoing a change from relying on informal user-testing methods adapted directly from productivity software development to integrating modern approaches to usability and user experience testing. Gameplay metrics analysis forms one of these techniques, based on instrumentation methods from HCI. Gameplay metrics are instrumentation data about user behavior and user-game interaction, and can be collected during testing, production and the live period of the lifetime of a digital game. The use of instrumentation data is relatively new to commercial game development, and remains a relatively unexplored method of user research. In this paper, the focus is on utilizing game metrics for informing the analysis of gameplay during commercial game production as well as in research contexts. A series of case studies are presented, focusing on the major commercial game titles Kane & Lynch and Fragile Alliance.
An open-source simulator for cognitive robotics research: the prototype of the iCub humanoid robot simulator
This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.
Automatic Detection and Language Identification of Multilingual Documents
Language identification is the task of automatically detecting the language(s) present in a document based on the content of the document. In this work, we address the problem of detecting documents that contain text from more than one language (multilingual documents). We introduce a method that is able to detect that a document is multilingual, identify the languages present, and estimate their relative proportions. We demonstrate the effectiveness of our method over synthetic data, as well as real-world multilingual documents collected from the web.
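One simple way to realize detection plus proportion estimation is with per-language character n-gram profiles; the sketch below uses two hand-made profiles and a soft vote per trigram, which is a simplification of the paper's method.

```python
# Character-trigram language profiles with a soft vote per trigram;
# a simplification of the paper's method, on a made-up document.
from collections import Counter

def trigram_profile(text):
    grams = Counter(text[i:i+3] for i in range(len(text) - 2))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

profiles = {
    "en": trigram_profile("the quick brown fox jumps over the lazy dog"),
    "de": trigram_profile("der schnelle braune fuchs springt über den hund"),
}

doc = "the fox springt über the dog"
scores = Counter()
for i in range(len(doc) - 2):
    g = doc[i:i+3]
    for lang, prof in profiles.items():
        scores[lang] += prof.get(g, 0.0)    # soft vote per trigram

total = sum(scores.values())
for lang, s in scores.items():
    print(f"{lang}: estimated proportion {s / total:.2f}")
```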
Test-retest reliability and validity of the Pittsburgh Sleep Quality Index in primary insomnia.
OBJECTIVE Psychometric evaluation of the Pittsburgh Sleep Quality Index (PSQI) for primary insomnia. METHODS The study sample consisted of 80 patients with primary insomnia (DSM-IV). The length of the test-retest interval was either 2 days or several weeks. Validity analyses were calculated for PSQI data and data from sleep diaries, as well as polysomnography. To evaluate the specificity of the PSQI, insomnia patients were compared with a control group of 45 healthy subjects. RESULTS In primary insomnia patients, the overall PSQI global score correlation coefficient for test-retest reliability was .87. Validity analyses showed high correlations between PSQI and sleep log data and lower correlations with polysomnography data. A PSQI global score > 5 resulted in a sensitivity of 98.7% and specificity of 84.4% as a marker for sleep disturbances in insomnia patients versus controls. CONCLUSION The PSQI has a high test-retest reliability and a good validity for patients with primary insomnia.
Analysis of The Indispensable Opposition
The Indispensable Opposition is an essay written by Walter Lippmann, a famous American columnist. In this essay, he puts forward a rather striking viewpoint about the freedom of speech. From a unique perspective, he explains why disagreement is indispensable to our society and what the real freedom of speech is. These opinions are thought-provoking and have practical significance.
Hypothermia in comatose survivors from out-of-hospital cardiac arrest: pilot trial comparing 2 levels of target temperature.
BACKGROUND It is recommended that comatose survivors of out-of-hospital cardiac arrest should be cooled to 32° to 34°C for 12 to 24 hours. However, the optimal level of cooling is unknown. The aim of this pilot study was to obtain initial data on the effect of different levels of hypothermia. We hypothesized that deeper temperatures will be associated with better survival and neurological outcome. METHODS AND RESULTS Patients were eligible if they had a witnessed out-of-hospital cardiac arrest from March 2008 to August 2011. Target temperature was randomly assigned to 32°C or 34°C. Enrollment was stratified on the basis of the initial rhythm as shockable or asystole. The target temperature was maintained during 24 hours followed by 12 to 24 hours of controlled rewarming. The primary outcome was survival free from severe dependence (Barthel Index score ≥60 points) at 6 months. Thirty-six patients were enrolled in the trial (26 shockable rhythm, 10 asystole), with 18 assigned to 34°C and 18 to 32°C. Eight of 18 patients in the 32°C group (44.4%) met the primary end point compared with 2 of 18 in the 34°C group (11.1%) (log-rank P=0.12). All patients whose initial rhythm was asystole died before 6 months in both groups. Eight of 13 patients with initial shockable rhythm assigned to 32°C (61.5%) were alive free from severe dependence at 6 months compared with 2 of 13 (15.4%) assigned to 34°C (log-rank P=0.029). The incidence of complications was similar in both groups except for the incidence of clinical seizures, which was lower (1 versus 11; P=0.0002) in patients assigned to 32°C compared with 34°C. On the contrary, there was a trend toward a higher incidence of bradycardia (7 versus 2; P=0.054) in patients assigned to 32°C. Although potassium levels decreased to a greater extent in patients assigned to 32°C, the incidence of hypokalemia was similar in both groups. CONCLUSIONS The findings of this pilot trial suggest that a lower cooling level may be associated with a better outcome in patients surviving out-of-hospital cardiac arrest secondary to a shockable rhythm. The benefits observed here merit further investigation in a larger trial in out-of-hospital cardiac arrest patients with different presenting rhythms. CLINICAL TRIAL REGISTRATION URL: http://www.clinicaltrials.gov. Unique identifier: NCT01155622.
Naltrexone versus acamprosate: one year follow-up of alcohol dependence treatment.
Naltrexone and acamprosate reduce relapse in alcohol dependence. They have not yet been compared in a published trial. The aim of this study was to compare the efficacy of these compounds in conditions similar to those in routine clinical practice. Random allocation to a year of treatment with naltrexone (50 mg/day) or acamprosate (1665-1998 mg/day) was made in 157 recently detoxified alcohol-dependent men with moderate dependence (evaluated using the Addictions Severity Index and Severity of Alcohol Dependence Scale). All were patients whom a member of the family would accompany regularly to appointments. Alcohol consumption, craving and adverse events were recorded weekly for the first 3 months, and then bi-weekly, by the treating psychiatrist who was not blinded. At 3-monthly intervals, investigators who were blinded to the treatment documented patients' alcohol consumption based on patients' accounts, information given by the psychiatrists when necessary, and reports from patients' families. Serum gamma-glutamyltransferase (GGT) was also measured. Efforts were made to sustain the blindness of the investigators. The same investigator did not assess the same patient twice. The integrity of the blindness was not checked. There was no difference between treatments in mean time to first drink (naltrexone 44 days, acamprosate 39 days) but the time to first relapse (five or more drinks in a day) was 63 days (naltrexone) versus 42 days (acamprosate) (P = 0.02). At the end of 1 year, 41% receiving naltrexone and 17% receiving acamprosate had not relapsed (P = 0.0009). The cumulative number of days of abstinence was significantly greater, and the number of drinks consumed at one time and severity of craving were significantly less, in the naltrexone group compared to the acamprosate group, as was the percentage of heavy drinking days (P = 0.038). More patients in the acamprosate than the naltrexone group were commenced on disulfiram during the study. Naltrexone patients attended significantly more group therapy sessions, though this could not explain their better outcome. There were non-significant trends for the naltrexone group to comply better with medication, to stay in the study longer, and to show greater improvement over baseline in serum GGT.
Investigation of the spark cycle on material removal rate in wire electrical discharge machining of advanced materials
The development of new, advanced engineering materials and the need for precise and flexible prototypes and low-volume production have made wire electrical discharge machining (EDM) an important manufacturing process for meeting such demands. This research investigates the effect of spark on-time duration and spark on-time ratio, two important EDM process parameters, on the material removal rate (MRR) and surface integrity of four types of advanced materials: porous metal foams, metal bond diamond grinding wheels, sintered Nd-Fe-B magnets, and carbon–carbon bipolar plates. An experimental procedure was developed. During wire EDM, five types of constraints on the MRR, due to short circuiting, wire breakage, the machine slide speed limit, and the spark on-time upper and lower limits, are identified. An envelope of feasible EDM process parameters is generated for each work-material. Applications of such a process envelope to select process parameters for maximum MRR and for machining of micro features are discussed. Results of scanning electron microscopy (SEM) analysis of surface integrity are presented.
Application of Random-Effects Pattern-Mixture Models for Missing Data in Longitudinal Studies
Random-effects regression models have become increasingly popular for analysis of longitudinal data. A key advantage of the random-effects approach is that it can be applied when subjects are not measured at the same number of timepoints. In this article we describe use of random-effects pattern-mixture models to further handle and describe the influence of missing data in longitudinal studies. For this approach, subjects are first divided into groups depending on their missing-data pattern and then variables based on these groups are used as model covariates. In this way, researchers are able to examine the effect of missing-data patterns on the outcome (or outcomes) of interest. Furthermore, overall estimates can be obtained by averaging over the missing-data patterns. A psychiatric clinical trials data set is used to illustrate the random-effects pattern-mixture approach to longitudinal data analysis with missing data.
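A minimal sketch of the pattern-mixture workflow described above, using statsmodels; the data file, the column names (subject, week, y), and the dropout rule are invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal.csv")  # hypothetical long-format data: subject, week, y

# Step 1: classify each subject by missing-data pattern (here simply completer vs dropout).
last_week = df.groupby("subject")["week"].transform("max")
df["pattern"] = (last_week < df["week"].max()).map({True: "dropout", False: "completer"})

# Step 2: enter the pattern indicator, and its interaction with time, as covariates
# in a random-intercept model, so the outcome trend may differ by pattern.
fit = smf.mixedlm("y ~ week * pattern", df, groups=df["subject"]).fit()
print(fit.summary())

# Step 3 (pattern-averaging): overall estimates weight each pattern's
# coefficients by the observed frequency of that pattern.
```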
ASIFT: A New Framework for Fully Affine Invariant Image Comparison
If a physical object has a smooth or piecewise smooth boundary, its images obtained by cameras in varying positions undergo smooth apparent deformations. These deformations are locally well approximated by affine transforms of the image plane. In consequence, the solid object recognition problem has often been reduced to the computation of affine invariant image local features. Such invariant features could be obtained by normalization methods, but no fully affine normalization method exists for the time being. Even scale invariance is dealt with rigorously only by the scale-invariant feature transform (SIFT) method. By simulating zoom-outs and normalizing translation and rotation, SIFT is invariant to four of the six parameters of an affine transform. The method proposed in this paper, affine-SIFT (ASIFT), simulates all image views obtainable by varying the two camera axis orientation parameters, namely the latitude and longitude angles, left over by the SIFT method. It then covers the other four parameters by using the SIFT method itself. The resulting method is mathematically proved to be fully affine invariant. Contrary to expectation, simulating all views depending on the two camera orientation parameters is feasible with no dramatic computational load. A two-resolution scheme further reduces the ASIFT complexity to about twice that of SIFT. A new notion, the transition tilt, measuring the amount of distortion from one view to another, is introduced. While an absolute tilt from a frontal to a slanted view exceeding 6 is rare, much higher transition tilts are common when two slanted views of an object are compared. The attainable transition tilt is measured for each affine image comparison method. The new method permits one to reliably identify features that have undergone transition tilts of large magnitude, up to 36 and higher. This fact is substantiated by many experiments which show that ASIFT significantly outperforms the state-of-the-art methods SIFT, maximally stable extremal region (MSER), Harris-affine, and Hessian-affine.
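The view-simulation step can be sketched with OpenCV: each simulated view applies a rotation (the longitude angle) followed by a directional subsampling (the latitude, i.e. the tilt) before SIFT runs on the distorted image. The sampling grid and the anti-aliasing constant below follow the general recipe only loosely and are illustrative rather than the authors' exact settings:

```python
import cv2
import numpy as np

def simulate_view(img, tilt, phi):
    """Approximate one camera view: rotate by phi degrees, then compress x by 1/tilt."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), phi, 1.0)
    view = cv2.warpAffine(img, M, (w, h))          # corners are cropped in this sketch
    if tilt > 1.0:
        # Directional blur before subsampling avoids aliasing along the tilted axis.
        view = cv2.GaussianBlur(view, (0, 0),
                                sigmaX=0.8 * np.sqrt(tilt ** 2 - 1), sigmaY=0.01)
        view = cv2.resize(view, (max(1, int(w / tilt)), h))
    return view

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
sift = cv2.SIFT_create()
features = []
for tilt in (1.0, np.sqrt(2.0), 2.0):                  # coarse latitude sampling
    for phi in np.arange(0.0, 180.0, 72.0 / tilt):     # longitude sampling densifies with tilt
        kp, desc = sift.detectAndCompute(simulate_view(img, tilt, phi), None)
        features.append((tilt, phi, kp, desc))
```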
Control Strategies for Microgrids With Distributed Energy Storage Systems: An Overview
This paper presents an overview of the state-of-the-art control strategies specifically designed to coordinate distributed energy storage (ES) systems in microgrids. Power networks are undergoing a transition from the traditional model of centralised generation towards a smart decentralised network of renewable sources and ES systems, organised into autonomous microgrids. ES systems can provide a range of services, particularly when distributed throughout the power network. The introduction of distributed ES represents a fundamental change for power networks, increasing the network control problem dimensionality and adding long time-scale dynamics associated with the storage systems’ state of charge levels. Managing microgrids with many small distributed ES systems requires new scalable control strategies that are robust to power network and communication network disturbances. This paper reviews the range of services distributed ES systems can provide, and the control challenges they introduce. The focus of this paper is a presentation of the latest decentralised, centralised and distributed multi-agent control strategies designed to coordinate distributed microgrid ES systems. Finally, multi-agent control with agents satisfying Wooldridge’s definition of intelligence is proposed as a promising direction for future research.
Framework for Traffic Congestion Management
Traffic congestion is a serious global problem in most large cities; it results from rapid urbanization and exerts negative externalities on society. Solutions to traffic congestion are highly location-specific, and because of its heterogeneous nature, curbing congestion is one of the hardest tasks facing transport planners. It is not possible to propose a single traffic congestion management framework that could be applied unchanged to every large city. It is quite feasible, however, to develop a framework that could be used, with or without minor adjustment, to deal with congestion problems. The main aim of this paper is therefore to prepare a traffic congestion mitigation framework that will be useful to urban planners, transport planners, civil engineers, transport policy makers, and congestion management researchers who are directly or indirectly involved, or willing to become involved, in the task of traffic congestion management. A literature review is the main source of information for this study. The paper first defines traffic congestion from a theoretical point of view and briefly describes its causes. It then describes common management measures in use worldwide and prepares frameworks for both supply-side and demand-side congestion management measures.
Counterfactual Estimation and Optimization of Click Metrics in Search Engines: A Case Study
Optimizing an interactive system against a predefined online metric is particularly challenging, especially when the metric is computed from user feedback such as clicks and payments. The key challenge is the counterfactual nature: in the case of Web search, any change to a component of the search engine may result in a different search result page for the same query, but we normally cannot infer reliably from search log how users would react to the new result page. Consequently, it appears impossible to accurately estimate online metrics that depend on user feedback, unless the new engine is actually run to serve live users and compared with a baseline in a controlled experiment. This approach, while valid and successful, is unfortunately expensive and time-consuming. In this paper, we propose to address this problem using causal inference techniques, under the contextual-bandit framework. This approach effectively allows one to run potentially many online experiments offline from search log, making it possible to estimate and optimize online metrics quickly and inexpensively. Focusing on an important component in a commercial search engine, we show how these ideas can be instantiated and applied, and obtain very promising results that suggest the wide applicability of these techniques.
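The core counterfactual estimator can be written in a few lines. This is a generic inverse-propensity-scoring sketch under the contextual-bandit framing; the log fields are hypothetical, and the paper's actual estimators involve further variance-control details:

```python
import numpy as np

# Each log entry: (context, action shown by the logging policy,
#                  probability the logging policy chose that action, observed click).
def ips_estimate(logs, new_policy):
    """new_policy(context, action) -> probability the candidate policy shows `action`."""
    values = [click * new_policy(ctx, a) / p for (ctx, a, p, click) in logs]
    return float(np.mean(values))  # unbiased if propensities p are correct and positive
```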
Gas6 and the Receptor Tyrosine Kinase Axl in Clear Cell Renal Cell Carcinoma
BACKGROUND The molecular biology of renal cell carcinoma (RCC) is complex and not fully understood. We have recently found that expression of the receptor tyrosine kinase Axl in RCC tumors independently correlates with patient survival. PRINCIPAL FINDINGS Here, we have investigated the role of Axl and its ligand Gas6, the vitamin K-dependent protein product of the growth arrest-specific gene 6, in clear cell RCC (ccRCC) derived cells. The Axl protein was highly expressed in ccRCC cells deficient in functional von Hippel-Lindau (VHL) protein, a tumor suppressor gene often inactivated in ccRCC. VHL-reconstituted cells expressed decreased levels of Axl protein, but not Axl mRNA, suggesting that VHL regulates Axl expression. Gas6-mediated activation of Axl in ccRCC cells resulted in Axl phosphorylation, receptor down-regulation, and decreased cell viability and migratory capacity. No effects of the Gas6/Axl system could be detected on invasion. Moreover, in ccRCC tumor tissues, Axl was phosphorylated and Gas6 gamma-carboxylated, suggesting that these molecules are active in vivo. SIGNIFICANCE These results provide novel information regarding the complex function of the Gas6/Axl system in ccRCC.
Dynamic Attention Deep Model for Article Recommendation by Learning Human Editors' Demonstration
As aggregators, online news portals face great challenges in continuously selecting a pool of candidate articles to be shown to their users. Typically, those candidate articles are recommended manually by platform editors from a much larger pool of articles aggregated from multiple sources. Such a hand-picking process is labor intensive and time-consuming. In this paper, we study the editors' article selection behavior and propose a learning-by-demonstration system to automatically select a subset of articles from the large pool. Our data analysis shows that (i) editors' selection criteria are non-explicit: they are based less on keywords or topics and more on the quality and attractiveness of the writing, which is hard to capture with a traditional bag-of-words article representation; and (ii) editors' article selection behaviors are dynamic: articles with different data distributions come into the pool every day and the editors' preferences vary, driven by underlying periodic or occasional patterns. To address these problems, we propose a meta-attention model across multiple deep neural nets to (i) automatically capture the editors' underlying selection criteria via automatic representation learning of each article and its interaction with the metadata, and (ii) adaptively capture changes in those criteria via a hybrid attention model. The attention model strategically incorporates multiple prediction models, which were trained on previous days. The system has been deployed on a commercial article feed platform. A 9-day A/B test has demonstrated the consistent superiority of our proposed model over several strong baselines.
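A minimal sketch of the attention-over-models idea: predictions from models trained on previous days are combined with softmax weights computed from a learned compatibility between each model and the current input. All shapes and names are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# scores[m]: selection score from the model trained on day m for the current article.
# model_keys[m]: learned embedding of model m; query: embedding of today's input.
def attention_ensemble(scores, model_keys, query):
    weights = softmax(model_keys @ query)   # attention over the per-day models
    return float(weights @ scores)          # adaptively weighted final score
```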
Dental erosion prevalence and associated risk indicators among preschool children in Athens, Greece
The aims of the study were to investigate dental erosion prevalence, distribution and severity in Greek preschool children attending public kindergartens in the prefecture of Attica, Greece and to determine the effect of dental caries, oral hygiene level, socio-economic factors, dental behavior, erosion-related medication and chronic illness. A random and stratified sample of 605 Greek preschool children was clinically examined for dental erosion using the Basic Erosive Wear Examination Index (BEWE). Dental caries (dmfs) and the Simplified Debris Index were also recorded. The data concerning possible risk indicators were derived from a questionnaire. A zero-inflated Poisson regression model was fitted to test the predictive effects of the independent variables on dental erosion. The prevalence of dental erosion was 78.8 %, and the mean ± SE of the BEWE index was 3.64 ± 0.15. High monthly family income was positively related to BEWE cumulative scores [RR = 1.204 (1.016–1.427)], while high maternal education level [RR = 0.872 (0.771–0.986)] and poor oral hygiene level [DI-s, RR = 0.584 (0.450–0.756)] showed a negative association. Dental erosion is a common oral disease in Greek preschool children in Attica, related to oral hygiene and socio-economic factors. Programs aimed at erosion prevention should begin at an early age for all children.
Score of Adherence to 2016 European Cardiovascular Prevention Guidelines Predicts Cardiovascular and All-Cause Mortality in the General Population.
BACKGROUND Guidelines on cardiovascular (CV) disease prevention promote healthy lifestyle behaviours and CV risk factor control to reduce CV risk. The effect of adherence to these guidelines on CV and all-cause mortality is not well known. METHODS We assessed the effect of baseline adherence to the "2016 European Guidelines on CV Disease Prevention in Clinical Practice" on long-term CV and all-cause mortality in a sample recruited from the French general population. The analysis was based on the Third French Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) population-based survey (recruitment period: 1994-1997). We built an adherence score to the European guidelines, considering adherence to recommendations for smoking, drinking, physical activity, body mass index, blood pressure, low-density and high-density lipoprotein cholesterol, fasting blood glucose, and diet at baseline. Vital status was obtained 18 years after inclusion. Statistical analysis was based on multivariate Cox modelling. RESULTS The adherence score was assessed in 1311 apparently healthy participants aged 35-64 years (73% men). During the follow-up, 186 deaths occurred (41 were due to a CV cause). Considering CV mortality, the adjusted hazard ratio for subjects in the fourth quartile of the adherence score (worst adherence) was 3.12 (95% confidence interval [CI], 1.62-6.01; P = 0.001), compared with subjects in the first, second, or third quartile (best adherence). Considering all-cause mortality, the adjusted hazard ratio for subjects in the fourth quartile of the adherence score was 2.27 (95% CI, 1.68-3.06; P < 0.001). CONCLUSIONS Better baseline adherence to European guidelines on CV disease prevention was associated with a significantly reduced long-term CV and all-cause mortality in a sample from the French general population.
Complexities in the quantitative assessment of patients with rheumatic diseases in clinical trials and clinical care.
Quantitative measurement has led to major advances in the diagnosis, prognosis and management of chronic diseases. Quantitative measures in rheumatic diseases differ from measures in many chronic diseases in several respects. There is no single "gold standard," such as blood pressure or cholesterol, in the diagnosis, management, and prognosis of any rheumatic disease. Laboratory tests are limited; for example, in rheumatoid arthritis more than 40% of patients have a normal erythrocyte sedimentation rate (ESR). Formal joint counts have poor reliability and are not performed at most visits of most patients. Radiographs are rarely read quantitatively, except in formal clinical trials. The optimal quantitative measures to monitor status and assess long-term prognosis are often derived from patient self-report questionnaires. Quantitative measures may reflect disease activity, e.g., swollen joint counts or C-reactive protein (CRP); long-term damage, e.g., radiographic damage; or poor outcomes, e.g., work disability and premature death. Disease activity measures used in clinical trials are primarily surrogates for long-term outcomes. As there is no single "gold standard" measure, indices of multiple measures are used in patient assessment. Indices used in rheumatoid arthritis assess primarily disease activity, but separate indices have been developed to assess disease activity versus damage in patients with ankylosing spondylitis, systemic lupus erythematosus, and vasculitis.
Folate, vitamin B6, vitamin B12, and methionine intakes and risk of stroke subtypes in male smokers.
The associations of dietary folate, vitamin B(6), vitamin B(12), and methionine intakes with risk of stroke subtypes were examined among 26,556 male Finnish smokers, aged 50-69 years, enrolled in the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study. Dietary intake was assessed at baseline by using a validated food frequency questionnaire. During a mean follow-up of 13.6 years, from 1985 through 2004, 2,702 cerebral infarctions, 383 intracerebral hemorrhages, and 196 subarachnoid hemorrhages were identified from national registers. In analyses adjusting for age and cardiovascular risk factors, a high folate intake was associated with a statistically significant lower risk of cerebral infarction but not intracerebral or subarachnoid hemorrhages. The multivariate relative risk of cerebral infarction was 0.80 (95% confidence interval: 0.70, 0.91; p(trend) = 0.001) for men in the highest versus lowest quintile of folate intake. Vitamin B(6), vitamin B(12), and methionine intakes were not significantly associated with any subtype of stroke. These findings in men suggest that a high dietary folate intake may reduce the risk of cerebral infarction.
Hetero-Manifold Regularisation for Cross-Modal Hashing
Recently, cross-modal search has attracted considerable attention but remains a very challenging task because of the integration complexity and heterogeneity of multi-modal data. To address both challenges, in this paper, we propose a novel method termed hetero-manifold regularisation (HMR) to supervise the learning of hash functions for efficient cross-modal search. A hetero-manifold integrates multiple sub-manifolds defined by homogeneous data with the help of cross-modal supervision information. Taking advantage of the hetero-manifold, the similarity between each pair of heterogeneous data points can be naturally measured by three-order random walks on this hetero-manifold. Furthermore, a novel cumulative distance inequality defined on the hetero-manifold is introduced to avoid the computational difficulty induced by the discreteness of hash codes. By using the inequality, cross-modal hashing is transformed into a problem of hetero-manifold regularised support vector learning. Therefore, the performance of cross-modal search can be significantly improved by seamlessly combining the integrated information of the hetero-manifold and the strong generalisation of the support vector machine. Comprehensive experiments show that the proposed HMR achieves advantageous results over state-of-the-art methods in several challenging cross-modal tasks.
Ethical hacking
This course will explore the various means that an intruder has available to gain access to computer resources. We will investigate weaknesses by discussing the theoretical background, and whenever possible, actually performing the attack. We will then discuss methods to prevent/reduce the vulnerabilities. This course is targeted specifically for Certified Ethical Hacking (CEH) exam candidates, matching the CEH exam objectives with the effective and popular Cert Guide method of study.
Fourier and wavelet descriptors for shape recognition using neural networks - a comparative study
This paper presents the application of three different types of neural networks to 2-D pattern recognition on the basis of shape. They include the multilayer perceptron (MLP), the Kohonen self-organizing network, and a hybrid structure composed of a self-organizing layer and an MLP subnetwork connected in cascade. The recognition is based on features extracted from the Fourier and wavelet transformations of the data describing the shape of the pattern. Application of different neural network structures associated with different preprocessing of the data results in different accuracy of recognition and classification. The numerical experiments performed for the recognition of simulated shapes of airplanes have shown the superiority of the wavelet preprocessing associated with the self-organizing neural network structure. The integration of the individual classifiers based on the weighted summation of the signals from the neural networks has been proposed and checked in numerical experiments.
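For the Fourier branch, a standard construction of translation-, scale-, and rotation-invariant shape descriptors from a closed boundary looks as follows; this is a generic textbook version, not necessarily the exact normalisation used in the paper:

```python
import numpy as np

def fourier_descriptors(contour, k=16):
    """contour: (N, 2) array of ordered boundary points of a closed shape."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    Z = np.fft.fft(z)
    Z[0] = 0.0                               # drop the DC term -> translation invariance
    Z = Z / np.abs(Z[1])                     # normalise by |Z_1|  -> scale invariance
    return np.abs(Z[1:k + 1])                # magnitudes         -> rotation invariance
```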
Some new directions in control theory inspired by systems biology.
This paper, addressed primarily to engineers and mathematicians with an interest in control theory, argues that entirely new theoretical problems arise naturally when addressing questions in the field of systems biology. Examples from the author's recent work are used to illustrate this point.
Agile SYSTEMS ENGINEERING versus AGILE SYSTEMS engineering
This paper explores recent developments in agile systems engineering. We draw a distinction between agility in the systems engineering process versus agility in the resulting system itself. In the first case the emphasis is on carefully exploring the space of design alternatives and on delaying the freeze point as long as possible as new information becomes available during product development. In the second case we are interested in systems that can respond to changed requirements after initial fielding of the system. We provide a list of known and emerging methods in both domains and explore a number of illustrative examples such as the case of the Iridium satellite constellation or recent developments in the automobile industry.
A double blind lipase for lipase comparison of a high lipase and standard pancreatic enzyme preparation in cystic fibrosis.
A standard acid resistant microsphere pancreatic enzyme preparation was compared with identical capsules half filled with mini-tablets of a new high lipase preparation in a randomised double blind crossover study in children with cystic fibrosis. Each patient received his/her usual number of capsules and the same dose of lipase during each period of the study. Eighteen patients completed the study. There were fewer gastrointestinal symptoms when pancreatic enzyme was supplied as the high lipase preparation. There was also a significant improvement in fat absorption (17%, 95% confidence interval (CI) 6 to 27), reduction in faecal fat output (15.8 g/day, 95% CI 6.4 to 22.5), and faecal energy loss (789 kJ/day, 95% CI 211 to 1384). It is concluded that half filled capsules of the new high lipase preparation are more effective than the standard preparation and it is likely that filled capsules would allow patients to use fewer than half the number of pancreatic enzyme capsules.
Are South African financial advisors addressing the estate planning objectives that are important to their clients?
Estate planning is an important aspect of any effective financial plan. When preparing an estate plan, several objectives identified by the individual planner, as well as several pieces of legislation, have to be considered. In South Africa the actions of financial advisors are regulated by the Financial Advisory and Intermediary Services Act. The act aims to ensure that financial advisors act in the best interests of their clients. If the act meets its objectives, the objectives set by a financial advisor and those of his or her client will be aligned. This study investigates the existence of an expectation gap between the estate planning objectives considered important by financial advisors and the importance allocated to these objectives by their clients. The study found that an expectation gap exists for three of the objectives that should be considered in an estate plan.
Detecting Abnormal Machine Characteristics in Cloud Infrastructures
In the cloud computing environment resources are accessed as services rather than as products. Monitoring this system for performance is crucial because of the typical pay-per-use packages bought by users for their jobs. With the huge number of machines currently in the cloud, it is often extremely difficult for system administrators to keep track of all machines using distributed monitoring programs such as Ganglia (ganglia.sourceforge.net), which lack system health assessment and summarization capabilities. To overcome this problem, we propose a technique for automated anomaly detection using machine performance data in the cloud. Our algorithm is entirely distributed and runs locally on each computing machine in the cloud in order to rank the machines by their anomalous behavior for given jobs. There is no need to centralize any of the performance data for the analysis, and at the end of the analysis our algorithm generates error reports, thereby allowing system administrators to take corrective actions. Experiments performed on real data sets collected for different jobs validate the fact that our algorithm has a low overhead for tracking anomalous machines in a cloud infrastructure.
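A minimal sketch of the kind of scoring such a system might run locally on each machine; the robust z-score and the max-aggregation are illustrative stand-ins for the paper's actual algorithm:

```python
import numpy as np

def local_anomaly_score(history, current):
    """history: (T, d) past performance vectors (CPU, memory, I/O, ...);
    current: (d,) latest vector. Returns this machine's self-reported score."""
    med = np.median(history, axis=0)
    mad = np.median(np.abs(history - med), axis=0) + 1e-9   # robust spread per metric
    z = np.abs(current - med) / mad
    return float(z.max())   # the worst-behaved metric dominates the machine's rank
```

Machines then need only exchange these scalar scores to produce a global ranking, which is what keeps such a scheme fully distributed.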
Comparison of three-day and five-day courses of azithromycin in the treatment of atypical pneumonia
This open, randomised clinical study, conducted from June 1988 to December 1989, included 84 patients with clinical and radiological findings of atypical pneumonia. All patients were treated with a total dose of 1.5 g azithromycin, a new azalide antibiotic. In Group I, azithromycin was administered for three days (500 mg once daily). In Group II, azithromycin was administered for five days (250 mg b.i.d. on day 1, followed by 250 mg once daily on days 2 to 5). Causative pathogens were identified by serological methods. Of the 41 patients in Group I, Mycoplasma pneumoniae, Chlamydia psittaci and Coxiella burnetii were identified in 19, 4 and 3 patients, respectively. In Group II there were 43 patients; Mycoplasma pneumoniae was identified in 24, Chlamydia psittaci in 4 and Coxiella burnetii in 3. Only patients with known causative pathogens were included in the evaluation of clinical efficacy. All patients were clinically cured by day 5; most of the patients became afebrile within 48 h of starting treatment. Side effects were observed in one patient in Group I and in one patient in Group II. The results suggest that a 1.5 g total dose of azithromycin is equally effective when administered as a three- or five-day regimen for the treatment of atypical pneumonia.
NeuroScale: Novel Topographic Feature Extraction using RBF Networks
Dimension-reducing feature extraction neural network techniques which also preserve neighbourhood relationships in data have traditionally been the exclusive domain of Kohonen self-organising maps. Recently, we introduced a novel dimension-reducing feature extraction process, which is also topographic, based upon a Radial Basis Function architecture. It has been observed that the generalisation performance of the system is broadly insensitive to model order complexity and other smoothing factors such as the kernel widths, contrary to intuition derived from supervised neural network models. In this paper we provide an effective demonstration of this property and give a theoretical justification for the apparent 'self-regularising' behaviour of the 'NEUROSCALE' architecture. 1 'NeuroScale': A Feed-forward Neural Network Topographic Transformation. Recently an important class of topographic neural network based feature extraction approaches, which can be related to the traditional statistical methods of Sammon Mappings (Sammon, 1969) and Multidimensional Scaling (Kruskal, 1964), have been introduced (Mao and Jain, 1995; Lowe, 1993; Webb, 1995; Lowe and Tipping, 1996). These novel alternatives to Kohonen-like approaches for topographic feature extraction possess several interesting properties. For instance, the NEUROSCALE architecture has the empirically observed property that the generalisation performance does not seem to depend critically on model order complexity, contrary to intuition based upon knowledge of its supervised counterparts. This paper presents evidence for their 'self-regularising' behaviour and provides an explanation in terms of the curvature of the trained models. We now provide a brief introduction to the NEUROSCALE philosophy of nonlinear topographic feature extraction. Further details may be found in (Lowe, 1993; Lowe and Tipping, 1996). We seek a dimension-reducing, topographic transformation of data for the purposes of visualisation and analysis. By 'topographic', we imply that the geometric structure of the data be optimally preserved in the transformation, and the embodiment of this constraint is that the inter-point distances in the feature space should correspond as closely as possible to those distances in the data space. The implementation of this principle by a neural network is very simple. A Radial Basis Function (RBF) neural network is utilised to predict the coordinates of the data point in the transformed feature space. The locations of the feature points are indirectly determined by adjusting the weights of the network. The transformation is determined by optimising the network parameters in order to minimise a suitable error measure that embodies the topographic principle. The specific details of this alternative approach are as follows. Given an m-dimensional input space of N data points x_q, an n-dimensional feature space of points y_q is generated such that the relative positions of the feature space points minimise the error, or 'STRESS', term

E = \sum_{p}^{N} \sum_{q>p} (d^*_{qp} - d_{qp})^2,   (1)

where the d^*_{qp} are the inter-point Euclidean distances in the data space, d^*_{qp} = \sqrt{(x_q - x_p)^T (x_q - x_p)}, and the d_{qp} are the corresponding distances in the feature space, d_{qp} = \sqrt{(y_q - y_p)^T (y_q - y_p)}. The points y are generated by the RBF, given the data points as input. That is, y_q = f(x_q; W), where f is the nonlinear transformation effected by the RBF with parameters (weights and any kernel smoothing factors) W. The distances in the feature space may thus be given by d_{qp} = \| f(x_q) - f(x_p) \|.
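Equation (1) translates directly into code. A minimal NumPy sketch of the STRESS computation; in NeuroScale the feature points y_q are produced by the RBF network, whose weights are adjusted to minimise this quantity:

```python
import numpy as np
from scipy.spatial.distance import pdist

def stress(X, Y):
    """X: (N, m) data points; Y: (N, n) feature-space points. Eq. (1)."""
    d_star = pdist(X)   # inter-point distances in the data space
    d = pdist(Y)        # corresponding distances in the feature space
    return float(np.sum((d_star - d) ** 2))
```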
Novel Flux-Weakening Control of an IPMSM for Quasi-Six-Step Operation
This paper proposes a novel flux-weakening control algorithm for an interior permanent-magnet synchronous motor for “quasi” six-step operation. The proposed method is composed of feedforward and feedback paths. The feedforward path consists of a 1-D lookup table, and the feedback is based on the difference between the reference voltage updated by the current regulator and the output voltage limited by the overmodulation. Using this method, flux-weakening and antiwindup control can be achieved simultaneously. In addition, quasi-six-step operation can be obtained: the maximum available output torque in the flux-weakening region is close to that in six-step operation while the ability of the current control is maintained. The effectiveness of this method is proved by experimental results.
Exploration-Exploitation Trade-off in Deep Reinforcement Learning
A fundamental dilemma in reinforcement learning is the exploration-exploitation trade-off. Deep reinforcement learning enables agents to act and learn in complex environments, but also introduces new challenges to both exploration and exploitation. Concepts like intrinsic motivation, hierarchical learning or curriculum learning all inspire different methods for exploration, while other agents profit from better methods to exploit current knowledge. In this work a survey of a variety of different approaches to exploration and exploitation in deep reinforcement learning is presented.
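As a concrete baseline for the trade-off the survey covers, an epsilon-greedy selector explores with probability epsilon and otherwise exploits current value estimates; this is the simplest of the methods discussed, shown only to fix ideas:

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon=0.1):
    """Random action with probability epsilon, otherwise the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# Toy 5-armed bandit: value estimates only improve for arms we actually try.
true_means = rng.normal(0.0, 1.0, 5)
q, counts = np.zeros(5), np.zeros(5)
for t in range(1000):
    a = epsilon_greedy(q)
    reward = rng.normal(true_means[a], 1.0)
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]   # incremental mean update
```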
Concept mapping assessment in a problem-based medical curriculum.
BACKGROUND In the problem-based learning (PBL) medical curriculum at the Arabian Gulf University in Bahrain, students construct concept maps related to each case they study in PBL tutorials. AIM To evaluate the interrater reliability and predictive validity of concept map scores using a structured assessment tool. METHODS We examined concept maps of the same cohort of students at the beginning (year 2) and end (year 4) of the pre-clerkship phase, where PBL is the main method of instruction. Concept maps were independently evaluated by five raters based on valid selection of concepts, hierarchical arrangement of concepts, integration, relationship to the context of the problem, and degree of student creativity. A 5-point Likert scale was used to evaluate each criterion. Interrater reliability of the instrument was determined using the intraclass correlation coefficient (ICC) and predictive validity was measured by testing the correlations of concept map scores with summative examination scores. RESULTS The ICC of the concept map scores in year 2 was 0.75 (95% CI, 0.67-0.81) and in year 4 was 0.69 (95% CI, 0.59-0.77). Overall concept maps scores of year 4 students were significantly higher compared with year 2 students (p < 0.001, effect size = 0.5). The relationship between the students' scores in concept maps and their scores in summative examination varied from no to mild correlation. CONCLUSION The interrater reliability of concept map scores in this study is good to excellent. However, further studies are required to test the generalizability and validity of assessment using this tool.
Sequential radiation through host-race formation: herbivore diversity leads to diversity in natural enemies
Motion-dependent levels of order in a relativistic universe.
Consider a generally closed system of continuous three-space coordinates x with a differentiable amplitude function ψ(x). What is its level of order R? Define R by the property that it decreases (or stays constant) after the system is coarse grained. Then R turns out to obey R = 8^{-1} L^2 I, where the quantity I = 4∫dx ∇ψ* · ∇ψ is the classical Fisher information in the system and L is the longest chord that can connect two points on the system surface. In general, order R is (i) unitless, and (ii) invariant to uniform stretch or compression of the system. On this basis, the order R in the Universe was previously found to be invariant in time despite its Hubble expansion, and with value R = 26.0×10^60 for flat space. By comparison, here we model the Universe as a string-based "holostar," with amplitude function ψ(x) ∝ 1/r over the radial interval r = (r_0, r_H). Here r_0 is of order the Planck length and r_H is the radial extension of the holostar, estimated as the known value of the Hubble radius. Curvature of space and relative motion of the observer must now be taken into account. It results that a stationary observer observes a level of order R = (8/9)(r_H/r_0)^{3/2} = 0.42×10^90, while for a free-falling observer R = 2^{-1}(r_H/r_0)^2 = 0.85×10^120. Both order values greatly exceed the above flat-space value. Interestingly, they are purely geometric measures, depending solely upon the ratio r_H/r_0. Remarkably, the free-fall value ~10^120 of R approximates the negentropy of a universe modeled as discrete. This might mean that the Universe contains about equal amounts of continuous and discrete structure.
Write-Optimized B-Trees
Large writes are beneficial both on individual disks and on disk arrays, e.g., RAID-5. The presented design enables large writes of internal B-tree nodes and leaves. It supports both in-place updates and large append-only (“log-structured”) write operations within the same storage volume, within the same B-tree, and even at the same time. The essence of the proposal is to make page migration inexpensive, to migrate pages while writing them, and to make such migration optional rather than mandatory as in log-structured file systems. The inexpensive page migration also aids traditional defragmentation as well as consolidation of free space needed for future large writes. These advantages are achieved with a very limited modification to conventional B-trees that also simplifies other B-tree operations, e.g., key range locking and compression. Prior proposals and prototypes implemented transacted B-tree on top of log-structured file systems and added transaction support to log-structured file systems. Instead, the presented design adds techniques and performance characteristics of log-structured file systems to traditional B-trees and their standard transaction support, notably without adding a layer of indirection for locating B-tree nodes on disk. The result retains fine-granularity locking, full transactional ACID guarantees, fast search performance, etc. expected of a modern B-tree implementation, yet adds efficient transacted page relocation and large, high-bandwidth writes.
The Evolution of Agent-based Simulation Platforms: A Review of NetLogo 5.0 and ReLogo
We review and evaluate two recently evolved agent-based simulation platforms: version 5.0 of NetLogo and the ReLogo component of Repast. Subsequent to the similar review we published in 2006, NetLogo has evolved into a powerful platform for scientific modeling while retaining its basic conceptual design, ease of use, and excellent documentation. ReLogo evolved both from NetLogo and Repast; it implements NetLogo’s basic design and its primitives in the Groovy programming language embedded in the Eclipse development environment, and provides access to the Repast library. We implemented the “StupidModel” series of 16 pseudo-models in both platforms; these codes contain many elements of basic agent-based models and can serve as templates for programming real models. ReLogo successfully reimplements much of NetLogo, and its translator was generally successful in converting NetLogo codes into ReLogo. Overall we found ReLogo considerably more challenging to use and a less productive development environment. Using ReLogo requires learning Groovy and Eclipse and becoming familiar with Repast’s complex organization; documentation and learning materials are far less abundant and mature than NetLogo’s. Though we did not investigate thoroughly, it is not clear what kinds of models could readily be implemented in ReLogo but not NetLogo. On average, NetLogo executed our example models approximately 20 times faster than ReLogo.
Integrating intimate partner violence assessment and intervention into healthcare in the United States: a systems approach.
The Institute of Medicine, United States Preventive Services Task Force (USPSTF), and national healthcare organizations recommend screening and counseling for intimate partner violence (IPV) within the US healthcare setting. The Affordable Care Act includes screening and brief counseling for IPV as part of required free preventive services for women. Thus, IPV screening and counseling must be implemented safely and effectively throughout the healthcare delivery system. Health professional education is one strategy for increasing screening and counseling in healthcare settings, but studies on improving screening and counseling for other health conditions highlight the critical role of making changes within the healthcare delivery system to drive desired improvements in clinician screening practices and health outcomes. This article outlines a systems approach to the implementation of IPV screening and counseling, with a focus on integrated health and advocacy service delivery to support identification and interventions, use of electronic health record (EHR) tools, and cross-sector partnerships. Practice and policy recommendations include (1) ensuring staff and clinician training in effective, client-centered IPV assessment that connects patients to support and services regardless of disclosure; (2) supporting enhancement of EHRs to prompt appropriate clinical care for IPV and facilitate capturing more detailed and standardized IPV data; and (3) integrating IPV care into quality and meaningful use measures. Research directions include studies across various health settings and populations, development of quality measures and patient-centered outcomes, and tests of multilevel approaches to improve the uptake and consistent implementation of evidence-informed IPV screening and counseling guidelines.
Localized Supervised Metric Learning on Temporal Physiological Data
Effective patient similarity assessment is important for clinical decision support. It enables the capture of past experience as manifested in the collective longitudinal medical records of patients to help clinicians assess the likely outcomes resulting from their decisions and actions. However, it is challenging to devise a patient similarity metric that is clinically relevant and semantically sound. Patient similarity is highly context sensitive: it depends on factors such as the disease, the particular stage of the disease, and co-morbidities. One way to discern the semantics in a particular context is to take advantage of physicians’ expert knowledge as reflected in labels assigned to some patients. In this paper we present a method that leverages localized supervised metric learning to effectively incorporate such expert knowledge to arrive at semantically sound patient similarity measures. Experiments using data obtained from the MIMIC II database demonstrate the effectiveness of this approach.
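As a generic illustration of supervised metric learning (not the paper's localized method), scikit-learn's NeighborhoodComponentsAnalysis learns a linear transform under which same-label patients fall close together; distances in the learned space then serve as the similarity measure:

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X = np.random.randn(200, 30)          # hypothetical patient feature vectors
y = np.random.randint(0, 2, 200)      # hypothetical physician-assigned labels

nca = NeighborhoodComponentsAnalysis(n_components=8, random_state=0)
Z = nca.fit_transform(X, y)           # supervised embedding

# Patient similarity = (negative) distance in the learned space.
dist_01 = np.linalg.norm(Z[0] - Z[1])
```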
Energy trade-offs in the IBM Wristwatch computer
We recently demonstrated a high function wrist watch computer prototype that runs the Linux operating system and also X11 graphics libraries. In this paper we describe the unique energy related challenges and tradeoffs we encountered while building this watch. We show that the usage duty factor for the device heavily dictates which of the powers, active power or sleep power, needs to be minimized more aggressively in order to achieve the longest perceived battery life. We also describe the energy issues that percolate through several layers of software all the way from device usage scenarios, applications, user interfaces, system level software to device drivers and the need to systematically address all of them to achieve the battery life dictated by the hardware components and the capacity of the battery in the device.
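The duty-factor argument reduces to simple arithmetic: average power is the duty-weighted mix of active and sleep power, and perceived battery life is capacity divided by that average. All numbers below are invented for illustration:

```python
P_ACTIVE, P_SLEEP = 250e-3, 1e-3   # watts (illustrative)
BATTERY_WH = 0.3                   # small wristwatch cell (illustrative)

for duty in (0.01, 0.05, 0.20):    # fraction of time the watch is actively used
    p_avg = duty * P_ACTIVE + (1 - duty) * P_SLEEP
    hours = BATTERY_WH / p_avg
    print(f"duty {duty:4.0%}: avg {p_avg * 1e3:6.2f} mW -> {hours:6.1f} h")

# At 1% duty the sleep power dominates the average and must be minimized;
# at 20% duty the active power dominates instead, which is the trade-off described.
```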
Construction and evaluation of a robust multifeature speech/music discriminator
We report on the construction of a real-time computer system capable of distinguishing speech signals from music signals over a wide range of digital audio input. We have examined 13 features intended to measure conceptually distinct properties of speech and/or music signals, and combined them in several multidimensional classification frameworks. We provide extensive data on system performance and the cross-validated training/test setup used to evaluate the system. For the datasets currently in use, the best classifier classifies with 5.8% error on a frame-by-frame basis, and 1.4% error when integrating long (2.4 second) segments of sound.
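Two conceptually distinct frame-level features of the kind the system combines can be sketched as follows; these are generic textbook definitions rather than the authors' exact ones, and for speech/music discrimination the variance of such features over a long (e.g. 2.4 s) window is often more telling than the frame values themselves:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ (high for unvoiced speech)."""
    return float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))

def spectral_centroid(frame, fs):
    """Magnitude-weighted mean frequency of the frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
```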
Anterior open-bite treatment by means of zygomatic miniplates: a case report
This case report presents the treatment of a patient with a skeletal Class II malocclusion and anterior open-bite who was treated with zygomatic miniplates through the intrusion of the maxillary posterior teeth. A 16-year-old female patient with a chief complaint of anterior open-bite had a symmetric face, incompetent lips, convex profile, and retrusive lower lip and chin. Intraoral examination showed that the buccal segments were in Class II relationship, and there was anterior open-bite (overbite -6.5 mm). The cephalometric analysis showed a Class II skeletal relationship with increased lower facial height. The treatment plan included intrusion of the maxillary posterior teeth using zygomatic miniplates followed by fixed orthodontic treatment. At the end of treatment Class I canine and molar relationships were achieved, the anterior open-bite was corrected and a normal smile line was obtained. Skeletal anchorage using zygomatic miniplates is an effective method for open-bite treatment through the intrusion of maxillary posterior teeth.
Deterministic nanoassembly of a coupled quantum emitter–photonic crystal cavity system
On the topology of the Lü attractor and related systems
We use well-established methods of knot theory to study the topological structure of the set of periodic orbits of the Lü attractor. We show that, for a specific set of parameters, the Lü attractor is topologically different from the classical Lorenz attractor, whose dynamics is formed by a double cover of the simple horseshoe. This argues against the ‘similarity’ between the Lü and Lorenz attractors, claimed, for these parameter values, by some authors on the basis of non-topological observations. However, we show that the Lü system belongs to the Lorenz-like family, since by changing the values of the parameters, the behaviour of the system follows the behaviour of all members of this family. An attractor of the Lü kind with higher order symmetry is constructed and some remarks on the Chen attractor are also presented.
Medical Image Segmentation Using Genetic Algorithms
Genetic algorithms (GAs) have been found to be effective in the domain of medical image segmentation, since the problem can often be mapped to one of search in a complex and multimodal landscape. The challenges in medical image segmentation arise due to poor image contrast and artifacts that result in missing or diffuse organ/tissue boundaries. The resulting search space is therefore often noisy with a multitude of local optima. Not only does the genetic algorithmic framework prove to be effective in coming out of local optima, it also brings considerable flexibility into the segmentation procedure. In this paper, an attempt has been made to review the major applications of GAs to the domain of medical image segmentation.
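A toy instance of the idea: an evolutionary search (selection plus mutation only) over intensity thresholds, with an Otsu-style between-class-variance fitness. The GA segmenters reviewed in the paper use far richer encodings, but the escape-from-local-optima mechanics are the same:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(t, pixels):
    """Between-class variance of the split at threshold t (higher is better)."""
    fg, bg = pixels[pixels >= t], pixels[pixels < t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / pixels.size, bg.size / pixels.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

# Synthetic two-class "image" intensities.
pixels = np.concatenate([rng.normal(100, 10, 3000), rng.normal(160, 12, 2000)])

pop = rng.uniform(pixels.min(), pixels.max(), 20)         # population of thresholds
for generation in range(30):
    scores = np.array([fitness(t, pixels) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]               # selection
    pop = rng.choice(parents, 20) + rng.normal(0, 2, 20)  # mutation
best = max(pop, key=lambda t: fitness(t, pixels))
```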
Symmetric Offset Stack Balun in Standard 0.13-μm CMOS Technology for Three Broadband and Low-Loss Balanced Passive Mixer Designs
This paper presents symmetric offset stack Marchand single and dual baluns that are designed, analyzed, and implemented in a 0.18-μm CMOS process to verify the feasibility. Both single and dual baluns achieve measured bandwidths (BWs) of over 110% and 90%, and insertion losses of less than 4.4 and 7.4 dB at 38 GHz. The amplitude imbalance and phase imbalance of the single and dual baluns are less than 1 dB and 5° from 10 to 67 and 11 to 50 GHz, respectively. The baluns were used in three broadband balanced passive mixers, i.e., a single-balanced resistive mixer (SBRM), a star mixer, and a subharmonic gate pumped resistive mixer (SHPRM), designed in a 0.13-μm CMOS technology. These mixers exhibit wide BWs over 14-45 GHz (105%), 18-54 GHz (100%), and 28-50 GHz (56%). The 14-45-GHz SBRM achieves a conversion loss of better than 12 dB at 7 dBm of local oscillator (LO) power. The LO to RF and LO to IF isolations are better than 30 dB. The chip area is 0.53 mm2. The star mixer achieves a conversion loss of better than 12 dB from 18 to 54 GHz, and LO to RF, LO to IF, and RF to IF isolations better than 35 dB at LO frequencies spanning 10-60 GHz. The chip area is 0.6 mm2. The SHPRM has a conversion loss of better than 11 dB from 28 to 50 GHz. The isolations are better than 31 dB and it occupies a chip area of 0.61 mm2.
A reconfigurable flight control system based on the EMMAE method
The ability of the multiple model adaptive estimation (MMAE) method to detect faults based on a predefined hypothesis, combined with the parameter-estimating ability of an extended Kalman filter (EKF), results in an efficient fault detection approach. This extended multiple model adaptive estimation method (EMMAE) has been investigated on a nonlinear model of an aircraft to estimate on-line the state vector of the system and the control surface deflection in the case of failed actuators. A supervision module has been designed to enhance the performance of the EMMAE method and to appropriately change settings in a control allocation module. The results show that this reconfigurable flight control system is capable of detecting, isolating, and compensating for actuator faults of various types, without any need for additional sensors to measure control-surface deflections or changes to the flight controller.
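The hypothesis-weighting step at the heart of any MMAE scheme can be sketched as follows: each EKF in the bank assumes one failure mode, and the Gaussian likelihood of its residual reweights the hypothesis probabilities at every step (a generic rendering, not the paper's full supervision logic):

```python
import numpy as np

def update_probabilities(p, residuals, covariances):
    """p: prior hypothesis probabilities; residuals[i], covariances[i]: innovation
    r_i and its predicted covariance S_i from the EKF assuming failure mode i."""
    likes = []
    for r, S in zip(residuals, covariances):
        k = len(r)
        norm = 1.0 / np.sqrt((2 * np.pi) ** k * np.linalg.det(S))
        likes.append(norm * np.exp(-0.5 * r @ np.linalg.solve(S, r)))
    p = p * np.array(likes)
    return p / p.sum()   # the dominant hypothesis identifies the failed actuator
```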
Application of the Taguchi Method for Design of Experiments in Turning Gray Cast Iron
In order to produce any product of the desired quality by machining, proper selection of process parameters is essential. Taguchi's parameter design is an important tool for robust design, offering a simple and systematic approach to optimizing a design for performance, quality and cost. The Taguchi method of off-line quality control encompasses all stages of product/process development. However, the key element for achieving high quality at low cost is Design of Experiments (DOE). Quality achieved by means of process optimization is found by many manufacturers to be cost effective in gaining and maintaining a competitive position in the world market. This paper describes the use and steps of Taguchi design of experiments and orthogonal arrays to find a specific range and combination of turning parameters, such as cutting speed, feed rate and depth of cut, to achieve optimal values of response variables such as surface finish, tool wear and material removal rate in turning of a brake drum of FG 260 gray cast iron.
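The ranking of parameter levels in a Taguchi analysis rests on signal-to-noise ratios. The two standard forms relevant here are larger-is-better (material removal rate) and smaller-is-better (surface roughness, tool wear); the replicate values below are invented:

```python
import numpy as np

def sn_larger_is_better(y):
    """S/N = -10 log10(mean(1/y^2)); for responses to be maximised."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(y):
    """S/N = -10 log10(mean(y^2)); for responses to be minimised."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

print(sn_smaller_is_better([1.82, 1.75, 1.90]))   # one run's roughness replicates
```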
Evaluation of Anterior Cervical Reconstruction with Titanium Mesh Cages versus Nano-Hydroxyapatite/Polyamide66 Cages after 1- or 2-Level Corpectomy for Multilevel Cervical Spondylotic Myelopathy: A Retrospective Study of 117 Patients
OBJECTIVE To retrospectively compare the efficacy of the titanium mesh cage (TMC) and the nano-hydroxyapatite/polyamide66 cage (n-HA/PA66 cage) for 1- or 2-level anterior cervical corpectomy and fusion (ACCF) to treat multilevel cervical spondylotic myelopathy (MCSM). METHODS A total of 117 consecutive patients with MCSM who underwent 1- or 2-level ACCF using a TMC or an n-HA/PA66 cage were studied retrospectively at a mean follow-up of 45.28 ± 12.83 months. The patients were divided into four groups according to the level of corpectomy (1- or 2-level corpectomy) and cage type used (TMC or n-HA/PA66 cage). Clinical and radiological parameters were used to evaluate outcomes. RESULTS At the one-year follow-up, the fusion rate in the n-HA/PA66 group was higher, albeit non-significantly, than that in the TMC group for both 1- and 2-level ACCF, but the fusion rates of the procedures were almost equal at the final follow-up. The incidence of cage subsidence at the final follow-up was significantly higher in the TMC group than in the n-HA/PA66 group for the 1-level ACCF (24% vs. 4%, p = 0.01), and the difference was greater for the 2-level ACCF between the TMC group and the n-HA/PA66 group (38% vs. 5%, p = 0.01). Meanwhile, a much greater loss of fused height was observed in the TMC group compared with the n-HA/PA66 group for both the 1- and 2-level ACCF. All four groups demonstrated increases in C2-C7 Cobb angle and JOA scores and decreases in VAS at the final follow-up compared with preoperative values. CONCLUSION The lower incidence of cage subsidence, better maintenance of the height of the fused segment and similar excellent bony fusion indicate that the n-HA/PA66 cage may be a superior alternative to the TMC for cervical reconstruction after cervical corpectomy, in particular for 2-level ACCF.
Effects of climate change on crop production in Cameroon
This study involves an assessment of the potential effects of greenhouse gas climate change, as well as the direct fertilization effect of CO2 on crop yields in Cameroon. The methodology involves coupling the transient diagnostics of 2 atmosphere–ocean general circulation models, namely NASA/Goddard Institute GISS and the Hadley Centre’s HadCM3, to the CropSyst crop model to simulate current and future (2020, 2080) crop yields (bambara nut, groundnut, maize, sorghum and soybean) in 8 agricultural regions of Cameroon. For the future we estimate substantial yield increases for bambara groundnut, soybean and groundnut, and little or no change and even decreases of maize and sorghum yields, varying according to the climate scenario and the agricultural region. Maize and sorghum (both C4 crops) yields are expected to decrease by 14.6 and 39.9%, respectively, across the whole country under GISS 2080 scenarios. The results also show that the effect of temperature patterns on climate change is much more important than that of precipitation. Findings call for monitoring of climate change/variability and dissemination of information to farmers, to encourage adaptation to climate change.
A wireless slanted optrode array with integrated micro leds for optogenetics
This paper presents a wireless-enabled, flexible optrode array with multichannel micro light-emitting diodes (μ-LEDs) for bi-directional wireless neural interface. The array integrates wirelessly addressable μ-LED chips with a slanted polymer optrode array for precise light delivery and neural recording at multiple cortical layers simultaneously. A droplet backside exposure (DBE) method was developed to monolithically fabricate varying-length optrodes on a single polymer platform. In vivo tests in rat brains demonstrated that the μ-LEDs were inductively powered and controlled using a wireless switched-capacitor stimulator (SCS), and light-induced neural activity was recorded with the optrode array concurrently.
Application of a two-state kinetic model to the heterogeneous kinetics of reaction between cysteine and hydrogen peroxide in amorphous lyophiles.
The bimolecular reaction between cysteine (CSH) and hydrogen peroxide (H(2)O(2)) in amorphous PVP and trehalose lyophiles has been examined at different reactant and excipient concentrations and at varying pH and temperature. Initial rates of product formation and complete reactant and product concentration-time profiles were generated by HPLC analyses of reconstituted solutions of lyophiles stored for various periods of time. While only cystine (CSSC) forms in aqueous solutions, cysteine sulfinic (CSO(2)H) and sulfonic (CSO(3)H) acids are significant degradants in amorphous solids. The formation of alternative degradants was consistent with the solution reaction mechanism, which involves a reactive sulfenic acid (CSOH) intermediate, coupled with the restricted mobility in the amorphous solid-state, which favors reaction of CSOH with the smaller, mobility-advantaged H(2)O(2) over its reaction with cysteine. Complex rate laws (i.e., deviations from 1st order for each reactant) observed in initial rate studies and biphasic concentration-time profiles in PVP were successfully fitted by a two-state kinetic model assuming two reactant populations with different reactivities. The highly reactive population forms CSSC preferentially while the less reactive population generates primarily sulfinic and sulfonic acids. Reactions in trehalose could be described by a simple one-state model. In contrast to the reaction in aqueous solutions, the 'pH' effect was minimal in amorphous solids, suggesting a change in the rate-determining step to diffusion control for the model reaction occurring in amorphous lyophiles.
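A minimal numerical rendering of the two-state idea: two cysteine populations consume H2O2 at different rates, and the fast population is the one channelled toward cystine. The fraction and rate constants are illustrative, not fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, c, k_fast, k_slow):
    c_fast, c_slow, h2o2 = c
    return [-k_fast * c_fast * h2o2,                      # reactive population
            -k_slow * c_slow * h2o2,                      # restricted population
            -(k_fast * c_fast + k_slow * c_slow) * h2o2]  # peroxide consumption

f, c0, h0 = 0.3, 1.0, 0.5    # fraction "fast", initial CSH and H2O2 (arbitrary units)
sol = solve_ivp(rhs, (0.0, 100.0), [f * c0, (1 - f) * c0, h0],
                args=(0.5, 0.01), dense_output=True)
# The sum of the two populations gives the biphasic CSH decay profile.
```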
Noisy Sparse Subspace Clustering
This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabelled input data points, which are assumed to lie in a union of low-dimensional subspaces. We show that a modified version of SSC is provably effective in correctly identifying the underlying subspaces, even with noisy data. This extends the theoretical guarantee of this algorithm to the practical setting and provides justification for the success of SSC in a class of real applications.
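The self-expression step of SSC, with the l1 relaxation that absorbs noise, can be sketched as one Lasso solve per point followed by spectral clustering of the resulting affinity (a generic rendering of the algorithm analysed here):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, lam=0.05, n_clusters=3):
    """X: (d, n) data matrix, one column per point."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=1)
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(others, X[:, i])          # sparse self-expression of point i
        C[np.arange(n) != i, i] = model.coef_
    W = np.abs(C) + np.abs(C).T             # symmetric affinity matrix
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(W)
```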
Radar waveform design and multi-target detection in vehicular applications
In traffic environments, conventional FMCW radar with a triangular transmit waveform may produce many false targets in multi-target situations, resulting in a high false-alarm rate. An improved FMCW waveform and a multi-target detection algorithm for vehicular applications are presented. The designed waveform in each small cycle is composed of two segments: an LFM section and a constant-frequency section. The two segments have the same duration, but in two adjacent small cycles the LFM slopes have opposite signs and different magnitudes, so the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a PN code sequence that is unique to each automotive radar over a large period. Matched to the improved waveform, which combines the advantages of the FSK and FMCW formats, a judgment algorithm applied over consecutive small cycles further eliminates false targets: the combination of unambiguous ranges and relative velocities confirms true targets and cancels most false targets across two adjacent small cycles.
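The underlying range-velocity pairing can be illustrated with the textbook FMCW beat-frequency relations (a generic sketch under a simple sign convention; the paper's PN-code modulation and judgment algorithm are not reproduced here):

    # Two LFM segments with different slopes S1, S2 (Hz/s) and carrier fc (Hz)
    # give two beat frequencies; under the convention
    #     f_b = (2*S/c)*R + (2*fc/c)*v
    # they form a 2x2 linear system in range R (m) and relative velocity v (m/s).
    import numpy as np

    C = 3e8  # speed of light, m/s

    def solve_range_velocity(fb1, fb2, S1, S2, fc):
        A = np.array([[2 * S1 / C, 2 * fc / C],
                      [2 * S2 / C, 2 * fc / C]])
        return np.linalg.solve(A, np.array([fb1, fb2]))  # -> (R, v)

Because S1 and S2 differ (opposite signs, unequal magnitudes), the system is well conditioned, and ghost pairings produced by wrong beat-frequency combinations yield (R, v) values that fail the cross-cycle consistency check.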
Geodesic flow kernel for unsupervised domain adaptation
In real-world applications of visual recognition, many factors - such as pose, illumination, or image quality - can cause a significant mismatch between the source domain on which classifiers are trained and the target domain to which those classifiers are applied. As such, the classifiers often perform poorly on the target domain. Domain adaptation techniques aim to correct the mismatch. Existing approaches have concentrated on learning feature representations that are invariant across domains, and they often do not directly exploit low-dimensional structures that are intrinsic to many vision datasets. In this paper, we propose a new kernel-based method that takes advantage of such structures. Our geodesic flow kernel models domain shift by integrating an infinite number of subspaces that characterize changes in geometric and statistical properties from the source to the target domain. Our approach is computationally advantageous, automatically inferring important algorithmic parameters without requiring extensive cross-validation or labeled data from either domain. We also introduce a metric that reliably measures the adaptability between a pair of source and target domains. For a given target domain and several source domains, the metric can be used to automatically select the optimal source domain to adapt and avoid less desirable ones. Empirical studies on standard datasets demonstrate the advantages of our approach over competing methods.
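A compact reconstruction of the closed-form kernel computation is sketched below (this is my own reading of the construction: the sign and scaling conventions may differ from the paper, and principal angles near zero are handled only crudely via an eps guard):

    # Geodesic flow kernel between two orthonormal subspace bases
    # Ps, Pt of shape (D, d), e.g. PCA bases of source and target data.
    import numpy as np
    from scipy.linalg import null_space

    def gfk(Ps, Pt, eps=1e-12):
        Rs = null_space(Ps.T)                 # (D, D-d), complement of Ps
        A, B = Ps.T @ Pt, Rs.T @ Pt
        U1, cos_t, Vt = np.linalg.svd(A)      # cosines of principal angles
        cos_t = np.clip(cos_t, -1.0, 1.0)
        theta = np.maximum(np.arccos(cos_t), eps)
        sin_t = np.sqrt(np.clip(1.0 - cos_t**2, 0.0, None))
        U2 = -B @ Vt.T / np.maximum(sin_t, eps)   # shares V with A's SVD
        l1 = 1 + np.sin(2 * theta) / (2 * theta)  # closed-form integrals
        l2 = (np.cos(2 * theta) - 1) / (2 * theta)
        l3 = 1 - np.sin(2 * theta) / (2 * theta)
        Q = np.hstack([Ps @ U1, Rs @ U2])         # (D, 2d)
        L = np.block([[np.diag(l1), np.diag(l2)],
                      [np.diag(l2), np.diag(l3)]])
        return Q @ L @ Q.T                        # x.T @ G @ y is the kernel

The resulting matrix G can then be plugged into a nearest-neighbor or kernel classifier through the induced inner product x^T G y.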
A Review on Entity Relation Extraction
Because of the large amount of unstructured data generated on the Internet, entity relation extraction is believed to have high commercial value. Entity relation extraction is a case of information extraction, and it builds on entity recognition. This paper first gives a brief overview of relation extraction. On the basis of reviewing the history of relation extraction, the research status of the field is analyzed. The paper then divides this research into three categories: supervised, semi-supervised and unsupervised machine learning methods, and discusses the trend toward deep-learning-based approaches.
Next-Generation Intrusion Detection Expert System (NIDES): A Summary
What is NIDES? NIDES is an intrusion detection system that performs real-time monitoring of user activity. It performs two types of analysis: statistical and rule-based. The statistical analysis maintains a historical profile of each user and raises an alarm when observed behavior deviates from established patterns of use.
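The statistical component can be illustrated with a minimal profile-and-deviation check (a generic z-score scheme in the spirit of profile-based detection, not SRI's actual NIDES measures):

    # Keep a running per-user profile of some activity metric and flag
    # observations that deviate too far from the established pattern.
    from collections import defaultdict
    import math

    class UserProfile:
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0  # Welford accumulators

        def update(self, x):
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.m2 += d * (x - self.mean)

        def is_anomalous(self, x, threshold=3.0):
            if self.n < 30:                    # not enough history yet
                return False
            std = math.sqrt(self.m2 / (self.n - 1))
            return std > 0 and abs(x - self.mean) / std > threshold

    profiles = defaultdict(UserProfile)

    def observe(user, metric_value):
        p = profiles[user]
        alarm = p.is_anomalous(metric_value)   # compare against history
        p.update(metric_value)                 # then fold into the profile
        return alarm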
Automatic Control of Students' Attendance in Classrooms Using RFID
Radio frequency identification (RFID) is one of the automatic identification technologies most in vogue nowadays. There is wide-ranging research and development in this area trying to take maximum advantage of the technology, and in coming years many new applications and research areas will continue to appear. This sudden interest in RFID also brings about some concerns, mainly the security and privacy of those who work with tags or use them in their everyday life. RFID has, for some time, been used for access control in many different areas, from asset tracking to limiting access to restricted areas. In this paper we propose an architecture and a prototype of a system that uses distributed RFID over Ethernet, and we demonstrate how to automate an entire students' attendance registration system in an educational institution environment. Although the use of RFID systems in educational institutions is not new, we show how it can solve daily problems in our university.
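A minimal sketch of the attendance flow such a system might implement is shown below (the table schema and function names are hypothetical; the paper's actual design uses distributed RFID readers reporting over Ethernet):

    # Each networked reader reports (tag, room); the server timestamps and
    # stores the event as an attendance record.
    import sqlite3, datetime

    db = sqlite3.connect("attendance.db")
    db.execute("""CREATE TABLE IF NOT EXISTS attendance
                  (tag_id TEXT, room TEXT, ts TEXT)""")

    def on_tag_read(tag_id: str, reader_room: str) -> None:
        """Called whenever a reader reports a student's tag."""
        db.execute("INSERT INTO attendance VALUES (?, ?, ?)",
                   (tag_id, reader_room,
                    datetime.datetime.now().isoformat()))
        db.commit()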
Homomorphic Secret Sharing: Optimizations and Applications
We continue the study of Homomorphic Secret Sharing (HSS), recently introduced by Boyle et al. (Crypto 2016, Eurocrypt 2017). A (2-party) HSS scheme splits an input x into shares (x^0, x^1) such that (1) each share computationally hides x, and (2) there exists an efficient homomorphic evaluation algorithm Eval such that for any function (or "program") P from a given class it holds that Eval(x^0, P) + Eval(x^1, P) = P(x). Boyle et al. show how to construct an HSS scheme for branching programs, with an inverse polynomial error, using discrete-log type assumptions such as DDH. We make two types of contributions. Optimizations: We introduce new optimizations that speed up the previous optimized implementation of Boyle et al. by more than a factor of 30, significantly reduce the share size, and reduce the rate of leakage induced by selective failure. Applications: Our optimizations are motivated by the observation that there are natural application scenarios in which HSS is useful even when applied to simple computations on short inputs. We demonstrate the practical feasibility of our HSS implementation in the context of such applications.
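The share-and-evaluate syntax can be illustrated with a toy additive scheme restricted to linear programs (this only demonstrates the correctness equation Eval(x^0,P) + Eval(x^1,P) = P(x); it is not the DDH-based construction for branching programs):

    # Additive secret sharing over a prime field; linear P commutes with
    # sharing, so local evaluations of the two shares sum to P(x).
    import secrets

    Q = 2**61 - 1  # a prime modulus

    def share(x):
        x0 = secrets.randbelow(Q)
        return x0, (x - x0) % Q            # x0 + x1 = x (mod Q)

    def eval_share(share_vec, coeffs):
        # P(x) = sum_i coeffs[i] * x[i]
        return sum(c * s for c, s in zip(coeffs, share_vec)) % Q

    x = [5, 7, 9]
    shares = [share(v) for v in x]
    x0 = [s[0] for s in shares]
    x1 = [s[1] for s in shares]
    P = [2, 3, 1]                          # P(x) = 2*5 + 3*7 + 1*9 = 40
    assert (eval_share(x0, P) + eval_share(x1, P)) % Q == 40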
Effects on health of air pollution: a narrative review.
Air pollution is a complex and ubiquitous mixture of pollutants, including particulate matter, chemical substances and biological materials. There is growing awareness of the adverse health effects of air pollution following both acute and chronic exposure, with a rapidly expanding body of evidence linking air pollution to an increased risk of respiratory disease (e.g., asthma, chronic obstructive pulmonary disease, lung cancer) and cardiovascular disease (e.g., myocardial infarction, heart failure, cerebrovascular accidents). Elderly subjects, pregnant women, infants and people with pre-existing disease appear especially susceptible to the deleterious effects of ambient air pollution. This narrative review summarizes the main diseases associated with exposure to air pollutants.