Analyzing Users’ Sentiment Towards Popular Consumer Industries and Brands on Twitter
Social media serves as a unified platform for users to express their thoughts on subjects ranging from their daily lives to their opinions on consumer brands and products. These users wield enormous influence in shaping the opinions of other consumers and in shaping brand perception, brand loyalty and brand advocacy. In this paper, we analyze the opinion of 19M Twitter users towards 62 popular industries, encompassing 12,898 enterprise and consumer brands, as well as associated subject matter topics, via sentiment analysis of 330M tweets over a period spanning a month. We find that users tend to be most positive towards manufacturing and most negative towards service industries. In addition, they tend to be more positive or negative when interacting with brands than they are generally on Twitter. We also find that sentiment towards brands within an industry varies greatly, and we demonstrate this using two industries as use cases. In addition, we discover that there is no strong correlation between topic sentiments of different industries, demonstrating that topic sentiments are highly dependent on the context of the industry in which they are mentioned. We demonstrate the value of such an analysis in assessing the impact of brands on social media. We hope that this initial study will prove valuable for both researchers and companies in understanding users' perception of industries, brands and associated topics and encourage more research in this field.
Bootstrap: A Statistical Method
This paper attempts to introduce readers to the concept and methodology of bootstrap in Statistics, which is placed under a larger umbrella of resampling. The major portion of the discussion should be accessible to anyone who has had a couple of college-level applied statistics courses. Towards the end, we attempt to provide glimpses of the vast literature published on the topic, which should be helpful to someone aspiring to go into the depth of the methodology. A section is dedicated to illustrating real data examples. A technical appendix is included, which contains a short proof of the "bootstrap Central Limit Theorem" for the means. It should inspire a mathematically minded reader to study further. We think the selected set of references covers the greater part of the developments on this subject matter. 1. Introduction and the Idea. B. Efron introduced a statistical method, which is called the bootstrap, published in 1979 (Efron 1979). It spread like brush fire in statistical sciences within a couple of decades. Now if one conducts a "Google search" for the above title, an astounding 1.86 million records will be mentioned; scanning through even a fraction of these records is a daunting task. We attempt first to explain the idea behind the method and its purpose at a rather rudimentary level. The primary task of a statistician is to summarize a sample-based study and generalize the finding to the parent population in a scientific manner. A technical term for a sample summary number is (sample) statistic. Some basic sample statistics are, for instance, the sample mean, sample median, sample standard deviation, etc. Of course, a summary statistic like the sample mean will fluctuate from sample to sample, and a statistician would like to know the magnitude of these fluctuations around the corresponding population parameter in an overall sense. This is then used in assessing the margin of error. The entire picture of all possible values of a sample statistic, presented in the form of a probability distribution, is called a sampling distribution. There is a
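To make the resampling idea concrete, here is a minimal sketch (not from the paper) of the bootstrap estimate of the standard error of a sample mean in Python; the data, the number of resamples, and the percentile interval are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se_of_mean(sample, n_resamples=2000):
    """Estimate the standard error of the sample mean by resampling
    the observed data with replacement (the bootstrap idea)."""
    sample = np.asarray(sample)
    n = sample.size
    # Each bootstrap replicate is the mean of a resample of size n
    # drawn with replacement from the original sample.
    replicates = np.array([
        rng.choice(sample, size=n, replace=True).mean()
        for _ in range(n_resamples)
    ])
    return replicates.std(ddof=1), replicates

# Illustrative data: 30 observations from a skewed distribution.
data = rng.exponential(scale=2.0, size=30)
se, reps = bootstrap_se_of_mean(data)
print(f"sample mean = {data.mean():.3f}, bootstrap SE = {se:.3f}")
# A simple 95% percentile interval from the bootstrap distribution.
lo, hi = np.percentile(reps, [2.5, 97.5])
print(f"95% percentile interval: ({lo:.3f}, {hi:.3f})")
```

The spread of the replicates approximates the sampling distribution of the mean without any distributional assumption about the parent population.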
Computer Based Assessment Systems Evaluation via the ISO9126 Quality Model
Many commercial products, as well as freeware and shareware tools, are the result of studies and research in this field made by companies and public institutions. This noteworthy growth in the market raises the problem of identifying a set of criteria that may be useful to an educational team wishing to select the most appropriate tool for their assessment needs. The scientific literature is very sparse on this issue. Important help is provided in this direction by a number of research studies in the field of Software Engineering providing general criteria that may be used to evaluate software systems. Furthermore, a relevant effort has been made in this field by the International Standard Organization, which in 1991 defined the ISO9126 standard for "Information Technology – Software Quality Characteristics and Sub-characteristics" (ISO, 1991). It is important to note that a typical CBA system is composed of:
Turbojet and Turbofan Engine Performance Increases Through Turbine Burners
In a conventional turbojet and turbofan engine, fuel is burned in the main combustor before the heated high-pressure gas expands through the turbine. A turbine-burner concept was proposed in a previous paper in which combustion is continued inside the turbine to increase the efficiency and specific thrust of the turbojet engine. This concept is extended to include not only continuous burning in the turbine but also "discrete" interstage turbine burners as an intermediate option. A thermodynamic cycle analysis is performed to compare the relative performances of the conventional engine and the turbine-burner engine with different combustion options for both turbojet and turbofan configurations. Turbine-burner engines are shown to provide significantly higher specific thrust with no or only small increases in thrust specific fuel consumption compared to conventional engines. Turbine-burner engines also widen the operational range of flight Mach number and compressor pressure ratio. The performance gain of turbine-burner engines over conventional engines increases with compressor pressure ratio, fan bypass ratio, and flight Mach number.
Technological aids to support choice strategies by three girls with Rett syndrome.
This study was aimed at extending the use of assistive technology (i.e., photocells, interface and personal computer) to support choice strategies by three girls with Rett syndrome and severe to profound developmental disabilities. A second purpose of the study was to reduce stereotypic behaviors exhibited by the participants involved (i.e., body rocking, hand washing and hand mouthing). Finally, a third goal of the study was to monitor the effects of such a program on the participants' indices of happiness. The study was carried out according to a multiple probe design across responses for each participant. Results showed that the three girls increased the adaptive responses and decreased the stereotyped behaviors during intervention phases compared to baseline. Moreover, during intervention phases, the indices of happiness increased for each girl as well. Clinical, psychological and rehabilitative implications of the findings are discussed.
Computer-Aided Diagnosis of Mammographic Masses Using Scalable Image Retrieval
Computer-aided diagnosis of masses in mammograms is important to the prevention of breast cancer. Many approaches tackle this problem through content-based image retrieval techniques. However, most of them fall short of scalability in the retrieval stage, and their diagnostic accuracy is, therefore, restricted. To overcome this drawback, we propose a scalable method for retrieval and diagnosis of mammographic masses. Specifically, for a query mammographic region of interest (ROI), scale-invariant feature transform (SIFT) features are extracted and searched in a vocabulary tree, which stores all the quantized features of previously diagnosed mammographic ROIs. In addition, to fully exert the discriminative power of SIFT features, contextual information in the vocabulary tree is employed to refine the weights of tree nodes. The retrieved ROIs are then used to determine whether the query ROI contains a mass. The presented method has excellent scalability due to the low spatial-temporal cost of vocabulary tree. Extensive experiments are conducted on a large dataset of 11 553 ROIs extracted from the digital database for screening mammography, which demonstrate the accuracy and scalability of our approach.
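The abstract describes retrieval with quantized local features followed by a diagnosis decision over the retrieved ROIs. As a rough, hedged sketch of that pipeline, the example below substitutes a flat k-means visual vocabulary for the hierarchical vocabulary tree and random vectors for SIFT descriptors; the vocabulary size, descriptor dimension, and majority-vote rule are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
VOCAB_SIZE = 50

def make_roi(label):
    """Illustrative 'ROI': a bag of 128-D descriptors (SIFT-like) plus a label."""
    center = 1.0 if label else -1.0
    return rng.normal(center, 1.0, size=(40, 128)), label

# Previously "diagnosed" ROIs forming the retrieval database.
database = [make_roi(rng.random() < 0.5) for _ in range(60)]
vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=4, random_state=0).fit(
    np.vstack([d for d, _ in database]))

def bow_histogram(descriptors):
    """Quantise descriptors against the vocabulary and build a normalised histogram."""
    words = vocab.predict(descriptors)
    h = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return h / (np.linalg.norm(h) + 1e-12)

db_hists = np.array([bow_histogram(d) for d, _ in database])
db_labels = np.array([lbl for _, lbl in database])

def diagnose(query_descriptors, k=7):
    """Retrieve the k most similar ROIs and take a majority vote on 'mass'."""
    q = bow_histogram(query_descriptors)
    sims = db_hists @ q                      # cosine similarity (vectors normalised)
    top = np.argsort(sims)[::-1][:k]
    return db_labels[top].mean() > 0.5

query, truth = make_roi(True)
print("predicted mass:", diagnose(query), "| ground truth:", truth)
```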
A passivity based Cartesian impedance controller for flexible joint robots - part I: torque feedback and gravity compensation
In this paper a novel approach to the Cartesian impedance control problem for robots with flexible joints is presented. The proposed controller structure is based on simple physical considerations, which motivate extending classical position feedback with an additional feedback of the joint torques. The torque feedback action can be interpreted as a scaling of the apparent motor inertia. Furthermore, the problem of gravity compensation is addressed. Finally, it is shown that the closed loop system can be seen as a feedback interconnection of passive systems. Based on this passivity property, a proof of asymptotic stability is presented.
Spray Characteristics of a Multi-hole Injector for Direct-Injection Gasoline Engines
The sprays from a high-pressure multi-hole nozzle injected into a constant volume chamber have been visualised and quantified in terms of droplet velocity and diameter with a two-component phase Doppler anemometry (PDA) system at injection pressures up to 200 bar and chamber pressures varying from atmospheric to 12 bar. The flow characteristics within the injection system were quantified by means of an FIE 1-D model, providing the injection rate and the injection velocity in the presence of hole cavitation, by an in-house 3-D CFD model providing the detailed flow distribution for various combinations of nozzle hole configurations, and by a fuel atomisation model giving estimates of the droplet size very near to the nozzle exit. The overall spray angle relative to the axis of the injector was found to be almost independent of injection and chamber pressure, a significant advantage relative to swirl pressure atomisers. Temporal droplet velocities were found to increase sharply at the start of injection and then to remain unchanged during the main part of injection before decreasing rapidly towards the end of injection. The spatial droplet velocity profiles were jet-like at all axial locations, with the local velocity maximum found at the centre of the jet. Within the measured range, the effect of injection pressure on droplet size was rather small, while the increase in chamber pressure from atmospheric to 12 bar resulted in much smaller droplet velocities, by up to fourfold, and larger droplet sizes, by up to 40%.
Evolving plastic neural networks with novelty search
Biological brains can adapt and learn from past experience. Yet neuroevolution, i.e. automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e. non-adaptive) heuristics. This paper analyzes this inherent deceptiveness in a variety of different dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs.
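A minimal sketch of the novelty measure typically used in novelty search: the mean distance of an individual's behaviour descriptor to its k nearest neighbours among the current population and an archive of past behaviours. The 2-D descriptors and the value of k are illustrative, not taken from the paper.

```python
import numpy as np

def novelty_score(behavior, others, k=15):
    """Novelty of one behaviour descriptor: the mean Euclidean distance
    to its k nearest neighbours among the other observed behaviours
    (current population plus an archive of past novel behaviours)."""
    others = np.asarray(others, dtype=float)
    behavior = np.asarray(behavior, dtype=float)
    dists = np.linalg.norm(others - behavior, axis=1)
    k = min(k, len(dists))
    return np.sort(dists)[:k].mean()

# Illustrative 2-D behaviour descriptors (e.g. an agent's final position).
population = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.5, 0.5]]
archive = [[0.12, 0.22], [0.88, 0.79]]
for b in population:
    others = [o for o in population + archive if o != b]
    print(b, round(novelty_score(b, others, k=3), 3))
```

Individuals whose behaviour lies far from everything seen so far score highest, regardless of how well they perform on the task objective.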
Contraceptive practices among married women of reproductive age in Bangladesh: a review of the evidence
BACKGROUND Bangladesh has experienced a sevenfold increase in its contraceptive prevalence rate (CPR) in less than forty years, from 8% in 1975 to 62% in 2014. However, despite this progress, almost one-third of pregnancies are still unintended, which may be attributed to unmet need for family planning and to discontinuation and switching of methods after initiation of their use. METHODS We conducted an extensive literature review on contraceptive use among married women of reproductive age (MWRA) in Bangladesh. A total of 263 articles were identified through database search and, after final screening, ten articles were included in this synthesis. RESULTS Findings showed that method discontinuation and switching, method failure, and method mix may offset achievements in the CPR. Most of the women know of at least one contraceptive method. The oral pill is the most widely used (27%) method, followed by injectables (12.4%), condoms (6.4%), female sterilization (4.6%), male sterilization (1.2%), implants (1.7%), and IUDs (0.6%). There has been a decline in the use of long acting and permanent methods over the last two decades. Within 12 months of initiation, the rate of method discontinuation, particularly for short-acting methods, remains high at 36%. It is important to recognize these trends, as married Bangladeshi women wanted 1.6 children on average but actually had 2.3. CONCLUSIONS A renewed commitment from government bodies and independent organizations is needed to implement and monitor family planning strategies in order to ensure the adherence to and provision of the most appropriate contraceptive method for couples.
Cannabidiol: an overview of some pharmacological aspects.
Over the past few years, considerable attention has focused on cannabidiol (CBD), a major nonpsychotropic constituent of cannabis. The authors present a review on the chemistry of CBD and discuss the anticonvulsive, antianxiety, antipsychotic, antinausea, and antirheumatoid arthritic properties of CBD. CBD does not bind to the known cannabinoid receptors, and its mechanism of action is yet unknown. It is possible that, in part at least, its effects are due to its recently discovered inhibition of anandamide uptake and hydrolysis and to its antioxidative effect.
Gigascope: A Stream Database for Network Applications
We have developed Gigascope, a stream database for network applications including traffic analysis, intrusion detection, router configuration analysis, network research, network monitoring, and performance monitoring and debugging. Gigascope is undergoing installation at many sites within the AT&T network, including at OC48 routers, for detailed monitoring. In this paper we describe our motivation for and constraints in developing Gigascope, the Gigascope architecture and query language, and performance issues. We conclude with a discussion of stream database research problems we have found in our application.
Designing focused crawler based on improved genetic algorithm
As a focused crawler searches the Internet for pages conforming to a specific topic, it has good prospects in the field of vertical search. A good search strategy is the core to improving the accuracy and coverage of a focused crawler. The Best-First search strategy is often applied but easily falls into local optima. In order to improve the global search capability, this paper proposes a focused crawler based on an improved genetic algorithm. In this algorithm, the fitness function considers both topic correlation and importance. Topic correlation is analyzed by a vector space model and topic importance is calculated by an improved PageRank algorithm. Genetic operations are optimized based on user browsing behavior. The selection operation chooses webpages with high fitness, the crossover operation sorts links by topic importance, and the mutation operation searches for combined keywords through a search engine. Compared with existing genetic algorithms, the experimental results show that the improved genetic algorithm can enhance the precision and recall of the focused crawler and enlarge its search scope.
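As a hedged illustration of the fitness described above — topic correlation from a vector space model combined with topic importance — the sketch below uses cosine similarity over term counts and a precomputed importance score standing in for the improved PageRank; the 0.7/0.3 weighting is an assumption, not the paper's setting.

```python
import math
from collections import Counter

def cosine_similarity(text, topic_terms):
    """Topic correlation: cosine similarity between a page's term vector
    and the topic's term vector (a simple vector space model)."""
    doc = Counter(text.lower().split())
    topic = Counter(topic_terms)
    num = sum(doc[t] * topic[t] for t in set(doc) & set(topic))
    den = math.sqrt(sum(v * v for v in doc.values())) * \
          math.sqrt(sum(v * v for v in topic.values()))
    return num / den if den else 0.0

def fitness(page_text, topic_terms, importance, w_topic=0.7, w_importance=0.3):
    """Fitness of a candidate page: weighted sum of topic correlation and
    topic importance (here a precomputed PageRank-style score in [0, 1])."""
    return w_topic * cosine_similarity(page_text, topic_terms) + \
           w_importance * importance

# Illustrative usage: an on-topic page beats an off-topic but "important" one.
topic = ["solar", "energy", "panel"]
print(fitness("cheap solar panel installation and solar energy tips", topic, importance=0.4))
print(fitness("celebrity gossip and movie reviews", topic, importance=0.9))
```

In a full genetic algorithm this score would drive selection, while crossover and mutation would recombine and perturb the crawler's frontier of candidate links.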
Augmented Reality through Wearable Computing
Wearable computing moves computation from the desktop to the user. We are forming a community of networked, wearable-computer users to explore, over a long period, the augmented realities that these systems can provide. By adapting its behavior to the user's changing environment, a body-worn computer can assist the user more intelligently, consistently, and continuously than a desktop system. A text-based augmented reality, the Remembrance Agent, is presented to illustrate this approach. Video cameras are used both to warp the visual input (mediated reality) and to sense the user's world for graphical overlay. With a camera, the computer could track the user's finger to act as the system's mouse; perform face recognition; and detect passive objects to overlay 2.5D and 3D graphics onto the real world. Additional apparatus such as audio systems, infrared beacons for sensing location, and biosensors for learning about the wearer's affect are described. With the use of input from these interface devices and sensors, a long-term goal of this project is to model the user's actions, anticipate his or her needs, and perform a seamless interaction between the virtual and physical environments.
Colostrum avoidance, prelacteal feeding and late breast-feeding initiation in rural Northern Ethiopia.
OBJECTIVE To identify specific cultural and behavioural factors that might be influenced to increase colostrum feeding in a rural village in Northern Ethiopia to improve infant health. DESIGN Background interviews were conducted with six community health workers and two traditional birth attendants. A semi-structured tape-recorded interview was conducted with twenty mothers, most with children under the age of 5 years. Variables were: parental age and education; mother's ethnicity; number of live births and children's age; breast-feeding from birth through to weaning; availability and use of formula; and descriptions of colostrum v. other stages of breast milk. Participant interviews were conducted in Amharic and translated into English. SETTING Kossoye, a rural Amhara village with high prevalence rates of stunting: inappropriate neonatal feeding is thought to be a factor. SUBJECTS Women (20-60 years of age) reporting at least one live birth (range: 1-8, mean: ∼4). RESULTS Colostrum (inger) and breast milk (yetut wotet) were seen as different substances. Colostrum was said to cause abdominal problems, but discarding a portion was sufficient to mitigate this effect. Almost all (nineteen of twenty) women breast-fed and twelve (63 %) reported ritual prelacteal feeding. A majority (fifteen of nineteen, 79 %) reported discarding colostrum and breast-feeding within 24 h of birth. Prelacteal feeding emerged as an additional factor to be targeted through educational intervention. CONCLUSIONS To maximize neonatal health and growth, we recommend culturally tailored education delivered by community health advocates and traditional health practitioners that promotes immediate colostrum feeding and discourages prelacteal feeding.
Cost sensitive modeling of credit card fraud using neural network strategy
Due to the rapid growth in e-business and electronic payment systems, fraud is rising in banking transactions associated with credit cards. This paper develops a credit card fraud detection (CCFD) model based on Artificial Neural Networks (ANN) and the MetaCost procedure to reduce reputational risk and risk of loss. The ANN strategy has been used for credit card fraud prevention and detection. Because of the unbalanced nature of the data (fraud and non-fraud cases), the detection of fraudulent transactions is difficult to achieve. To deal with the problem of imbalanced data, the MetaCost procedure is added. The proposed model, which is called Cost Sensitive Neural Network (CSNN), is based on a misuse detection approach. Compared to a model based on an Artificial Immune System (AIS), this model showed cost savings and an increased detection rate. Data for this study are taken from real transactional data provided by a large Brazilian credit card issuer.
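The paper's exact MetaCost setup is not reproduced here; as a minimal sketch of the underlying idea — penalising missed fraud cases more heavily than false alarms — the example below trains a tiny one-hidden-layer network with a cost-weighted cross-entropy loss on synthetic imbalanced data. The cost ratio, network size, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_cost_sensitive_net(X, y, cost_fn=5.0, cost_fp=1.0,
                             hidden=8, lr=0.1, epochs=500):
    """Train a one-hidden-layer network with a weighted cross-entropy loss:
    missing a fraud case (false negative) costs `cost_fn`, flagging a
    legitimate transaction (false positive) costs `cost_fp`."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    w = np.where(y == 1, cost_fn, cost_fp)        # per-example cost weights
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # fraud probability
        g = w * (p - y) / n                       # grad of weighted loss w.r.t. logits
        grad_W2 = h.T @ g
        grad_b2 = g.sum()
        g_h = np.outer(g, W2) * (1 - h ** 2)      # backprop through tanh
        grad_W1 = X.T @ g_h
        grad_b1 = g_h.sum(axis=0)
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
    return lambda Xq: 1.0 / (1.0 + np.exp(-(np.tanh(Xq @ W1 + b1) @ W2 + b2)))

# Illustrative imbalanced data: roughly 2% "fraud" cases in a 2-D feature space.
X = rng.normal(size=(1000, 2))
y = (rng.random(1000) < 0.02).astype(float)
X[y == 1] += 2.0                                  # shift fraud cases
predict = train_cost_sensitive_net(X, y)
print("mean predicted fraud probability on fraud cases:", predict(X[y == 1]).mean())
```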
Progressive Analytics: A Computation Paradigm for Exploratory Data Analysis
Exploring data requires a fast feedback loop from the analyst to the system, with a latency below about 10 seconds because of human cognitive limitations. When data becomes large or analysis becomes complex, sequential computations can no longer be completed in a few seconds and data exploration is severely hampered. This article describes a novel computation paradigm called Progressive Computation for Data Analysis, or more concisely Progressive Analytics, that brings a low-latency guarantee to the programming-language level by performing computations in a progressive fashion. Moving progressive computation to the language level relieves the programmer of exploratory data analysis systems from implementing the whole analytics pipeline in a progressive way from scratch, streamlining the implementation of scalable exploratory data analysis systems. This article describes the new paradigm through a prototype implementation called ProgressiVis, and explains the requirements it implies through examples.
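A minimal sketch of the progressive-computation idea (this is not the ProgressiVis API): an aggregate that consumes data in chunks and yields a refined partial result after each chunk, so the analyst always has an estimate after one scheduling quantum. The chunk size and data are illustrative.

```python
import numpy as np

def progressive_mean(data, chunk_size=500_000):
    """Yield successively refined estimates of the mean after each chunk;
    chunk_size would be tuned so one chunk fits the latency budget."""
    total, count = 0.0, 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        total += chunk.sum()
        count += len(chunk)
        yield count, total / count          # partial (eventually exact) result

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=5_000_000)
for seen, estimate in progressive_mean(data):
    print(f"after {seen:>9,d} rows: mean ≈ {estimate:.4f}")
```

The point of the paradigm is that such chunked, resumable execution is provided by the runtime for a whole pipeline, rather than hand-coded per operator as in this toy example.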
A framework for adapting agile development methodologies
Erotic Deception into Agapeic Truth
Kierkegaard's radical, existential understanding of the person implies a new approach to 'Truth and Method' and also, therefore, to the philosophy of education. He understands the person as 'a relation that relates itself to itself and in relating itself to itself relates itself to another'. This definition, moving away from the concept of the person as 'an individual substance of a rational nature,' stresses rather that the person is 'a relational agent of loving transcendence.' Implied by this revolutionary understanding of personal being is the new understanding of truth as 'objective uncertainty held fast in the appropriation process of passionate inwardness'. But I shall not deal here with the notion of person or truth. Instead, I will concentrate on Kierkegaard's existential method and pedagogy, and clarify his way of thinking and teaching as 'a deceiving into truth.'
Incentives for expressing opinions in online polls
Prediction markets efficiently extract and aggregate the private information held by individuals about events and facts that can be publicly verified. However, facts such as the effects of raising or lowering interest rates can never be publicly verified, since only one option will be implemented. Online opinion polls can still be used to extract and aggregate private information about such questions. This paper addresses incentives for truthful reporting in online opinion polls. The challenge lies in designing reward schemes that do not require a-priori knowledge of the participants' beliefs. We survey existing solutions, analyze their practicality and propose a new mechanism that extracts accurate information from rational participants.
An Integrated Laser Radar Receiver Channel Utilizing a Time-Domain Walk Error Compensation Scheme
An integrated receiver channel for a pulsed time-of-flight (TOF) laser rangefinder has been designed and fabricated in a 0.35-μm SiGe BiCMOS process. The receiver channel generates a timing mark for the time-to-digital converter (TDC) by means of a leading-edge timing discriminator that detects the crossover of the received pulse with respect to a set reference level. The walk error generated by the amplitude variation is compensated in the time domain on the basis of the measured dependence of the walk on the length of the received pulse. The measurement accuracy is ±15 ps with compensation within a dynamic range of 1:100,000, and the single-shot precision and power consumption are 120 ps for a minimum detectable signal of ~1 μA and 115 mW, respectively.
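As a hedged illustration of time-domain walk compensation, the sketch below corrects a leading-edge timestamp by an amount interpolated from a calibration table of walk versus measured pulse width; the calibration values are made up, since the real table comes from characterising the fabricated chip.

```python
import numpy as np

# Illustrative calibration: timing walk (ps) versus received pulse width (ns).
# In the real receiver this table comes from chip characterisation.
cal_width_ns = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
cal_walk_ps  = np.array([600., 420., 280., 170., 90., 40., 0.])

def compensate(leading_edge_ps, pulse_width_ns):
    """Time-domain walk compensation: subtract the walk predicted from the
    measured pulse width from the leading-edge timestamp."""
    walk = np.interp(pulse_width_ns, cal_width_ns, cal_walk_ps)
    return leading_edge_ps - walk

# A weak echo (narrow pulse, large walk) and a strong echo (wide pulse, small
# walk) from the same target should give nearly the same compensated timestamp.
print(compensate(10_600.0, 2.0))   # weak signal
print(compensate(10_010.0, 4.8))   # strong signal
```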
Modelling infrastructures as socio-technical systems
The conceptualization of the notion of a system in systems engineering, as exemplified in, for instance, the engineering standard IEEE Std 1220-1998, is problematic when applied to the design of socio-technical systems. This is argued using Intelligent Transportation Systems as an example. A preliminary conceptualization of socio-technical systems is introduced which includes technical and social elements and actors, as well as four kinds of relations. Current systems engineering practice incorporates technical elements and actors in the system but sees social elements exclusively as contextual. When designing socio-technical systems, however, social elements and the corresponding relations must also be considered as belonging to the system.
Semi-autonomous Car Control Using Brain Computer Interfaces
In this paper we present an approach to controlling a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario, the car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We describe the control interface which is necessary for smooth, brain-controlled driving. In the second scenario, decisions for path selection at intersections and forks are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and present results on accuracy, reaction times and usability.
Customer churn prediction and marketing retention strategies. An application of support vector machines based on the AUC parameter-selection technique in the B2B e-commerce industry
E-commerce has provided new opportunities for both businesses and consumers to easily share information and find and buy a product, increasing the ease of movement from one company to another as well as the risk of churn. In this study we develop a churn prediction model tailored for the B2B e-commerce industry by testing the forecasting capability of a new model, the support vector machine (SVM) based on the AUC parameter-selection technique (SVMauc). The predictive performance of SVMauc is benchmarked against logistic regression, neural networks and the classic support vector machine. Our study shows that the parameter optimization procedure plays an important role in the predictive performance, and SVMauc shows good generalization performance when applied to noisy, imbalanced and nonlinear marketing data, outperforming the other methods. Thus, our findings confirm that the data-driven approach to churn prediction and the development of retention strategies outperforms commonly used managerial heuristics in the B2B e-commerce industry.
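As a hedged sketch of AUC-based parameter selection for an SVM (using scikit-learn on synthetic imbalanced data, not the study's B2B dataset), the grid of C and gamma values below is an illustrative assumption.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced data standing in for customer features (10% churners).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# AUC-based parameter selection: choose C and gamma of an RBF SVM by
# cross-validated area under the ROC curve rather than accuracy.
grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10, 100],
                                "gamma": ["scale", 0.01, 0.1, 1.0]},
                    scoring="roc_auc", cv=5)
grid.fit(X_tr, y_tr)
print("best parameters:", grid.best_params_)
print("cross-validated AUC:", round(grid.best_score_, 3))
print("held-out AUC:", round(grid.score(X_te, y_te), 3))
```

Selecting by AUC rather than accuracy matters on imbalanced churn data, where a trivial "nobody churns" classifier already achieves high accuracy.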
High-dose chemotherapy and peripheral blood stem cell support in refractory gestational trophoblastic neoplasia
We present retrospectively our experience in the use of high-dose chemotherapy and haematopoietic stem cell support (HSCS) for refractory gestational trophoblastic neoplasia (GTN) in the largest series so far reported. In all, 11 patients have been treated at three Trophoblast Centres between 1993 and 2004. The conditioning regimens comprised either Carbop-EC-T (carboplatin, etoposide, cyclophosphamide, paclitaxel and prednisolone) or CEM (carboplatin, etoposide and melphalan) or ICE (ifosfamide, carboplatin, etoposide). Two patients had complete human chorionic gonadotrophin responses, one for 4 and the other for 12 months. Three patients had partial tumour marker responses for 1–2 months. High-dose chemotherapy and HSCS for GTN is still unproven. Further studies are needed, perhaps in high-risk patients who fail their first salvage treatment.
A survey of FPGA-based accelerators for convolutional neural networks
Deep convolutional neural networks (CNNs) have recently shown very high accuracy in a wide range of cognitive tasks, and due to this, they have received significant interest from the researchers. Given the high computational demands of CNNs, custom hardware accelerators are vital for boosting their performance. The high energy efficiency, computing capabilities and reconfigurability of FPGA make it a promising platform for hardware acceleration of CNNs. In this paper, we present a survey of techniques for implementing and optimizing CNN algorithms on FPGA. We organize the works in several categories to bring out their similarities and differences. This paper is expected to be useful for researchers in the area of artificial intelligence, hardware architecture and system design.
The timing of puberty (oocyte quality and management)
This review aims at giving an overview of the physiological events leading to puberty onset in mammals and more specifically in cattle. Puberty is an important developmental milestone in mammals involving numerous changes in various physiological regulations and behaviors. It is a unique physiological event integrating several important central regulations at the crossroad of adaptation to the environment: the reproductive axis, feeding behavior and nutritional controls, growth, seasonal rhythm and stress. Puberty onset is also an important economic parameter in replacement heifer programs and in genomic selection (genomic bulls). The quest for earlier puberty onset should be carefully balanced against its impact on physiological parameters of the animal and its offspring. Thus one has to carefully consider each step leading to puberty onset and set up a strategy that will lead to early puberty without being detrimental in the long term. In this review, major contributions to the understanding of the puberty process obtained in rodents, primates and farm animals such as sheep and cattle are discussed. In the first part we detail the endocrine events leading to puberty onset with a special focus on the regulation of GnRH secretion. In the second part we describe the neural mechanisms involved in silencing and reactivating the GnRH neuronal network. These central mechanisms are at the crossroad of the integration of environmental factors such as nutritional status, stress and photoperiod, which are discussed in the third part. In the fourth part, we discuss the genetic determinants of puberty onset, particularly in humans, where several pathologies are associated with puberty delay or advance, and in cattle, where several groups have now identified genomic regions or gene networks associated with puberty traits. Finally, in the last part we focus on the embryologist's point of view: how to get good oocytes for in vitro fertilization and embryo development from younger animals.
Identifying and meeting the challenges of insulin therapy in type 2 diabetes
Type 2 diabetes mellitus (T2DM) is a chronic illness that requires clinical recognition and treatment of the dual pathophysiologic entities of altered glycemic control and insulin resistance to reduce the risk of long-term micro- and macrovascular complications. Although insulin is one of the most effective and widely used therapeutic options in the management of diabetes, it is used by less than one-half of patients for whom it is recommended. Clinician-, patient-, and health care system-related challenges present numerous obstacles to insulin use in T2DM. Clinicians must remain informed about new insulin products, emerging technologies, and treatment options that have the potential to improve adherence to insulin therapy while optimizing glycemic control and mitigating the risks of therapy. Patient-related challenges may be overcome by actively listening to the patient's fears and concerns regarding insulin therapy and by educating patients about the importance, rationale, and evolving role of insulin in individualized self-treatment regimens. Enlisting the services of Certified Diabetes Educators and office personnel can help in addressing patient-related challenges. Self-management of diabetes requires improved patient awareness regarding the importance of lifestyle modifications, self-monitoring, and/or continuous glucose monitoring, improved methods of insulin delivery (eg, insulin pens), and the enhanced convenience and safety provided by insulin analogs. Health care system-related challenges may be improved through control of the rising cost of insulin therapy while making it available to patients. To increase the success rate of treatment of T2DM, the 2012 position statement from the American Diabetes Association and the European Association for the Study of Diabetes focused on individualized patient care and provided clinicians with general treatment goals, implementation strategies, and tools to evaluate the quality of care.
Tryptamine levels are low in plasma of chronic migraine and chronic tension-type headache
The primary aim of this study (TA-CH, Tryptophan Amine in Chronic Headache) was to investigate a possible role of tryptophan (TRP) metabolism in chronic migraine (CM) and chronic tension-type headache (CTTH). It is not known if TRP metabolism plays any role in CM and/or CTTH. Plasma levels of serotonin (5-HT), 5-hydroxyindolacetic acid (5-HIAA), metabolite of 5-HT, and tryptamine (TRY) were tested in 73 patients with CM, 15 patients with CTTH and 37 control subjects. Of these, plasmatic TRY was significantly lower in CM (p < 0.001) and in CTTH (p < 0.002) patients with respect to control subjects, while 5-HIAA levels in plasma were within the same range in all groups. 5-HT was undetectable in the plasma of almost all subjects. Our results support the hypothesis that TRP metabolism is altered in CM and CTTH patients, leading to a reduction in plasma TRY. As TRY modulates the function of pain matrix serotonergic system, this may affect modulation of incoming nociceptive inputs from the trigeminal endings and posterior horns of the spinal cord. We suggest that these biochemical abnormalities play a role in the chronicity of CM and CTTH.
Implementation of inverse perspective mapping algorithm for the development of an automatic lane tracking system
A vision-based automatic lane tracking system requires information such as lane markings, road curvature and the leading vehicle to be detected before the next image frame is captured. Placing a camera on the vehicle dashboard and capturing the forward view results in a perspective view of the road image. The perspective view of the captured image distorts the actual shape of the road, which involves the width, height, and depth; respectively, these parameters represent the x, y and z components. As such, the image needs to go through a pre-processing stage to remedy the distortion using a transformation technique known as inverse perspective mapping (IPM). This paper outlines the procedures involved.
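A minimal sketch of inverse perspective mapping via a planar homography: four points on the road plane are mapped to the corners of a top-down image. The synthetic frame, source points, and output size are illustrative, not calibration values from the paper.

```python
import cv2
import numpy as np

def inverse_perspective_map(frame, src_pts, dst_size=(400, 600)):
    """Warp a dashboard-camera road image into a bird's-eye (top-down) view.
    `src_pts` are four image points lying on the road plane (e.g. corners of
    a lane segment), listed bottom-left, bottom-right, top-right, top-left;
    they are mapped to the corners of the output image."""
    w, h = dst_size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))

# Synthetic stand-in for a dashboard frame: two converging lane markings.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
cv2.line(frame, (220, 700), (580, 450), (255, 255, 255), 8)   # left marking
cv2.line(frame, (1060, 700), (700, 450), (255, 255, 255), 8)  # right marking

# Illustrative trapezoid covering the lane ahead (not calibrated values).
src = [(120, 700), (1160, 700), (740, 450), (540, 450)]
birds_eye = inverse_perspective_map(frame, src)
cv2.imwrite("road_ipm.png", birds_eye)   # markings appear near-parallel here
```

In the warped view, lane markings that converge towards the vanishing point become roughly parallel, which simplifies lane detection and curvature estimation.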
The biology and evolution of music: A comparative perspective
Studies of the biology of music (as of language) are highly interdisciplinary and demand the integration of diverse strands of evidence. In this paper, I present a comparative perspective on the biology and evolution of music, stressing the value of comparisons both with human language, and with those animal communication systems traditionally termed "song". A comparison of the "design features" of music with those of language reveals substantial overlap, along with some important differences. Most of these differences appear to stem from semantic, rather than structural, factors, suggesting a shared formal core of music and language. I next review various animal communication systems that appear related to human music, either by analogy (bird and whale "song") or potential homology (great ape bimanual drumming). A crucial comparative distinction is between learned, complex signals (like language, music and birdsong) and unlearned signals (like laughter, ape calls, or bird calls). While human vocalizations clearly build upon an acoustic and emotional foundation shared with other primates and mammals, vocal learning has evolved independently in our species since our divergence with chimpanzees. The convergent evolution of vocal learning in other species offers a powerful window into psychological and neural constraints influencing the evolution of complex signaling systems (including both song and speech), while ape drumming presents a fascinating potential homology with human instrumental music. I next discuss the archeological data relevant to music evolution, concluding on the basis of prehistoric bone flutes that instrumental music is at least 40,000 years old, and perhaps much older. I end with a brief review of adaptive functions proposed for music, concluding that no one selective force (e.g., sexual selection) is adequate to explain all aspects of human music. I suggest that questions about the past function of music are unlikely to be answered definitively and are thus a poor choice as a research focus for biomusicology. In contrast, a comparative approach to music promises rich dividends for our future understanding of the biology and evolution of music.
Full virtualization based ARINC 653 partitioning
As the number of electronic components of avionics systems is significantly increasing, it is desirable to run several avionics software applications on a single computing device. In such a system, providing a seamless way to integrate separate applications on a computing device is a very critical issue, as the Integrated Modular Avionics (IMA) concept addresses. In this context, the ARINC 653 standard defines resource partitioning of avionics application software. Virtualization technology has very high potential for providing an optimal implementation of the partition concept. In this paper, we study support for full virtualization based ARINC 653 partitioning. This support includes an extension of the XML-based configuration file format and a hierarchical scheduler for temporal partitioning. We show that our implementation can support well-known VMMs, such as VirtualBox and VMware, and present basic performance numbers.
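As a rough illustration of temporal partitioning (not the paper's configuration format or scheduler code), the sketch below replays a fixed major frame of partition windows, each of which would hand the CPU to one guest VM; the partition names and durations are invented.

```python
from dataclasses import dataclass

@dataclass
class PartitionWindow:
    partition: str      # e.g. the name of a guest VM hosting one application
    duration_ms: int    # window length within the major frame

# Illustrative major frame: each partition gets exclusive CPU time in turn.
MAJOR_FRAME = [
    PartitionWindow("flight_control", 20),
    PartitionWindow("navigation", 10),
    PartitionWindow("display", 10),
]

def run(major_frames=2):
    """Top-level (inter-partition) scheduler: replay the major frame
    cyclically; within each window, the partition's own (intra-partition)
    scheduler would run its processes."""
    t = 0
    for frame in range(major_frames):
        for w in MAJOR_FRAME:
            print(f"t={t:3d}ms  frame {frame}  run partition '{w.partition}' "
                  f"for {w.duration_ms}ms")
            t += w.duration_ms

run()
```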
Radiological features of osteoarthritis of the acromioclavicular joint and its association with clinical symptoms.
PURPOSE To determine whether increasing age is associated with increased radiological features of osteoarthritis of the acromioclavicular joint (ACJ) in a general population, and whether clinical symptoms correlate with radiological features. METHODS Anteroposterior and axillary shoulder radiographs of 240 patients aged 20 to 80 years were randomly selected. The presence of stigmata of osteoarthritis of the ACJ including sclerosis, cysts, lysis, and osteophytes were recorded, and the width of the ACJ was measured. To determine the correlation between clinical symptoms and radiological features, the same radiological features were assessed for 100 further patients who had undergone either arthroscopic subacromial decompression (ASD) alone (n=50) or ASD plus ACJ excision (n=50, age-matched controls) based on clinical examination. RESULTS Radiological features of osteoarthritis of the ACJ increased significantly with increasing age but were not related to gender or the side affected. Of the 10 features, only medial acromial sclerosis and superior clavicular osteophytes were more prevalent in patients with ASD plus ACJ excision than in those with ASD alone (p=0.016). The sensitivity, specificity, positive and negative predictive values of these features were poor. Therefore, clinical symptoms were not associated with radiological features of osteoarthritis of the ACJ. CONCLUSION Radiological features should only be used as an adjunct in the decision to excise the ACJ. A thorough clinical examination is crucial in the assessment of ACJ pathology.
DiSCO: Distributed Optimization for Self-Concordant Empirical Loss
We propose a new distributed algorithm for empirical risk minimization in machine learning. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, where the n data points are i.i.d. sampled and the regularization parameter scales as 1/√n, we show that the proposed algorithm is communication efficient: the required number of communication rounds does not increase with the sample size n, and only grows slowly with the number of machines.
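The distributed algorithm itself is beyond a short sketch, but the two ingredients named above can be illustrated on a single machine: an inexact Newton direction obtained from conjugate gradients driven by Hessian-vector products, and a damped step based on the Newton decrement, here for L2-regularised logistic regression. The data and hyperparameters are illustrative assumptions, not the paper's setting.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def damped_newton_logreg(X, y, lam=1e-2, iters=10, cg_iters=50):
    """Inexact damped Newton for L2-regularised logistic regression.
    Each Newton direction is computed approximately with CG, using only
    Hessian-vector products (no explicit Hessian is formed)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / n + lam * w
        s = p * (1 - p) / n                        # logistic curvature weights
        hvp = lambda v: X.T @ (s * (X @ v)) + lam * v
        H = LinearOperator((d, d), matvec=hvp)
        v, _ = cg(H, grad, maxiter=cg_iters)       # inexact Newton direction
        delta = np.sqrt(v @ hvp(v))                # Newton decrement
        w -= v / (1.0 + delta)                     # damped update
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + 0.5 * rng.normal(size=500) > 0).astype(float)
w = damped_newton_logreg(X, y)
print(f"training accuracy: {((X @ w > 0) == y).mean():.3f}")
```

In the distributed setting, the gradient and the Hessian-vector products would be aggregated across machines, which is where the communication rounds analysed in the paper arise.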
Proposals for implementing a new degree on electronics and automation engineering
A new academic curriculum for the Electronics and Automation Engineering Degree at the University of Zaragoza, Spain, has been developed in the context of the European Space for Higher Education. We analyse the distribution and weight of basic contents (subjects related to Mathematics, Physics, Chemistry, Mechanics, Fluids...) and specialized contents (electronics and automation), pointing out strengths and weaknesses. Finally, we propose some actions and methodologies for the implementation of this new academic curriculum, intended to achieve a balance between basic and specialized contents oriented to educating engineers for the European market.
Nonresonating Mode Waveguide Filters
An overview of the main contributions that introduced the use of nonresonating modes for the realization of pseudoelliptic narrowband waveguide filters is presented. The following are also highlighted: early work using asymmetric irises; oversized H-plane cavity; transverse magnetic cavity; TM dual-mode cavity; and multiple cavity filters.
Bioactive Compounds from Macroalgae in the New Millennium: Implications for Neurodegenerative Diseases
Marine environment has proven to be a rich source of structurally diverse and complex compounds exhibiting numerous interesting biological effects. Macroalgae are currently being explored as novel and sustainable sources of bioactive compounds for both pharmaceutical and nutraceutical applications. Given the increasing prevalence of different forms of dementia, researchers have been focusing their attention on the discovery and development of new compounds from macroalgae for potential application in neuroprotection. Neuroprotection involves multiple and complex mechanisms, which are deeply related. Therefore, compounds exerting neuroprotective effects through different pathways could present viable approaches in the management of neurodegenerative diseases, such as Alzheimer's and Parkinson's. In fact, several studies had already provided promising insights into the neuroprotective effects of a series of compounds isolated from different macroalgae species. This review will focus on compounds from macroalgae that exhibit neuroprotective effects and their potential application to treat and/or prevent neurodegenerative diseases.
Broadband Polarization-Independent Perfect Absorber Using a Phase-Change Metamaterial at Visible Frequencies
We report a broadband polarization-independent perfect absorber with wide-angle near unity absorbance in the visible regime. Our structure is composed of an array of thin Au squares separated from a continuous Au film by a phase change material (Ge2Sb2Te5) layer. It shows that the near perfect absorbance is flat and broad over a wide-angle incidence up to 80° for either transverse electric or magnetic polarization due to a high imaginary part of the dielectric permittivity of Ge2Sb2Te5. The electric field, magnetic field and current distributions in the absorber are investigated to explain the physical origin of the absorbance. Moreover, we carried out numerical simulations to investigate the temporal variation of temperature in the Ge2Sb2Te5 layer and to show that the temperature of amorphous Ge2Sb2Te5 can be raised from room temperature to >433 K (amorphous-to-crystalline phase transition temperature) in just 0.37 ns with a low light intensity of 95 nW/μm², owing to the enhanced broadband light absorbance through strong plasmonic resonances in the absorber. The proposed phase-change metamaterial provides a simple way to realize a broadband perfect absorber in the visible and near-infrared (NIR) regions and is important for a number of applications including thermally controlled photonic devices, solar energy conversion and optical data storage.
The feasibility of nipple aspiration and duct lavage to evaluate the breast duct epithelium of women with increased breast cancer risk.
AIM Nipple aspiration (NA) and duct lavage (DL) are modalities for obtaining breast duct fluid for biomarker analyses. The aim of this study was to assess the feasibility of obtaining serial NA and DL samples at consecutive patient visits for cytology assessment and the creation of a biobank. METHODS Seventy eligible subjects were enrolled at a single institution in the United Kingdom as part of an international multicentre study. Entry criteria were based on a 5-year Gail model risk of ≥2% or a Claus score lifetime risk of ≥26%. Women underwent NA and DL in an outpatient clinic under local anaesthesia. RESULTS The mean patient age was 48 (range 41-69) years. Sixty-seven of 70 women (96%) attended three consecutive 6-monthly visits and follow-up for 2 years. Three women withdrew due to intolerance of the DL procedure. 56/67 (83%) women produced NA fluid from at least one duct. 204/264 (77%) of ducts declared by NA were cannulated for DL. 170/204 (83%) produced DL samples with adequate cellularity. By the final visit 52/67 (78%) women produced DL, 28/52 (54%) of whom were premenopausal and 24/52 (46%) postmenopausal. 50/52 women (96%) underwent repeated DL of 81 ducts on 3 consecutive visits. CONCLUSION NA and DL are well tolerated for repeated assessment to obtain material for cytology and to create a biobank for future biomarker studies in women at high breast cancer risk.
Aliskiren, a novel orally effective renin inhibitor, exhibits similar pharmacokinetics and pharmacodynamics in Japanese and Caucasian subjects.
AIMS Aliskiren is the first in a new class of orally effective renin inhibitors for the treatment of hypertension. This study compared the pharmacokinetic and pharmacodynamic properties of aliskiren in Japanese and Caucasian subjects. METHODS In this open-label, single-centre, parallel-group, single- and multiple-dose study, 19 Japanese and 19 Caucasian healthy young male subjects received a single 300-mg oral dose of aliskiren on day 1 and then aliskiren 300 mg once daily on days 4-10. Blood samples were collected for the measurement of plasma aliskiren concentration, plasma renin concentration (PRC) and plasma renin activity (PRA). RESULTS Pharmacokinetic parameters were comparable in Japanese and Caucasian subjects following administration of a single dose of aliskiren {ratio of geometric means: C(max) 1.12 [90% confidence interval (CI) 0.88, 1.43]; AUC(0-72 h) 1.19 [90% CI 1.02, 1.39]} and at steady state [mean ratio: C(max) 1.30 (90% CI 1.00, 1.70); AUC(0-tau) 1.16 (90% CI 0.95, 1.41)]. There was no notable difference in the plasma half-life of aliskiren between Japanese and Caucasian groups (29.7 +/- 10.2 h and 32.0 +/- 6.6 h, respectively). At steady state, peak PRC level and AUC for the concentration-time plot were not significantly different between Japanese and Caucasian subjects (P = 0.64 and P = 0.80, respectively). A single oral dose of aliskiren significantly reduced PRA to a similar extent in Japanese and Caucasian subjects (by 87.5% and 85.7%, respectively, compared with baseline; P < 0.01). Aliskiren was well tolerated by both ethnic groups. CONCLUSIONS The oral renin inhibitor aliskiren demonstrated similar pharmacokinetic and pharmacodynamic properties in Japanese and Caucasian subjects.
Sofosbuvir plus ribavirin for the treatment of chronic genotype 4 hepatitis C virus infection in patients of Egyptian ancestry.
BACKGROUND & AIMS We conducted an open-label phase 2 study to assess the efficacy and safety of the oral nucleotide polymerase inhibitor sofosbuvir in combination with ribavirin in patients of Egyptian ancestry, chronically infected with genotype 4 hepatitis C virus (HCV). METHODS Treatment-naive and previously treated patients with genotype 4 HCV were randomly allocated in a 1:1 ratio to receive sofosbuvir 400 mg and weight-based ribavirin, for 12 or 24 weeks. The primary efficacy endpoint was the proportion of patients with sustained virologic response (HCV RNA <25 IU/ml) 12 weeks after cessation of therapy (SVR12). RESULTS Thirty treatment-naive and thirty previously treated patients were enrolled and treated for 12 weeks (n=31) or 24 weeks (n=29). Overall, 23% of patients had cirrhosis and 38% had diabetes. 14% of treatment-naive patients were interferon ineligible and 63% of treatment-experienced patients had prior non-response. SVR12 was achieved by 68% of patients (95% CI, 49-83%) in the 12-week group, and by 93% of patients (95% CI, 77-99%) in the 24-week group. The most common adverse events were headache, insomnia, and fatigue. No patient discontinued treatment due to an adverse event. CONCLUSIONS The findings from the present study suggest that 24 weeks of sofosbuvir plus ribavirin is an efficacious and well tolerated treatment in patients with HCV genotype 4 infection.
Trade and Environment: Bargaining Outcomes from Linked Negotiations
Recent literature has explored both physical and policy linkage between trade and environment. Here we explore linkage through leverage in bargaining, whereby developed countries can use trade policy threats to achieve improved developing country environmental management, while developing countries can use environmental concessions to achieve trade disciplines in developed countries. We use a global numerical simulation model to compute bargaining outcomes from linked trade and environment negotiations, comparing developed-developing country bargaining only on trade policy with joint bargaining on both trade and domestic environmental policies. Results indicate joint gains from expanding the trade bargaining set to include environment, opposite to the current developing country reluctance to negotiate in the World Trade Organization on this issue. However, compared to bargaining with cash side payments, linking trade and environment through negotiation on policy instruments provides significantly inferior developing country outcomes. Thus, a trade and environment policy-linked negotiation may be better than an environment-only negotiation, but negotiating compensation to developing countries for environmental restraint would be better. We provide sensitivity and further analysis of our results and indicate what other factors could qualify our main finding, including the erosion of the MFN principle involved with environmentally based trade actions.
Reinforcement Learning: A Review from a Machine Learning Perspective
Machine Learning is the study of methods for programming computers to learn. Reinforcement Learning (RL) is a type of Machine Learning and refers to a class of learning problems in which machines and software agents automatically determine the ideal behaviour within a specific context in order to maximize performance. RL is inspired by behaviourist psychology and based on the mechanism of learning from rewards; it does not require prior knowledge and obtains an optimal policy using knowledge gained through trial-and-error and continuous interaction with a dynamic environment. This paper provides an overview of Reinforcement Learning from a Machine Learning perspective. It discusses the fundamental principles and techniques used to solve RL problems. It presents the nature of RL problems, with a focus on some influential model-free RL algorithms, challenges, and recent trends in the theory and practice of RL. It concludes with the future scope of RL.
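As a hedged example of one influential model-free algorithm of the kind such a review covers, here is tabular Q-learning with ε-greedy exploration on a tiny chain environment; the environment and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 5-state chain: action 1 moves right, action 0 moves left;
# reaching the right end gives reward +1 and ends the episode.
N_STATES = 5

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

Q = np.zeros((N_STATES, 2))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection (trial-and-error exploration)
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

print(np.round(Q, 2))   # learned action values; moving right dominates
```

The update uses only sampled transitions and rewards, with no model of the environment, which is the defining feature of model-free RL.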
Modeling naturalistic argumentation in research literatures: Representation and interaction design issues
This paper characterises key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to co-evolve the modelling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modelling scheme for use by untrained researchers, describe the resulting ontology, contrasting it with other domain modelling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation and analysis of scholarly argument structures in a Web-based environment.
Liberty v. Libel: Disparity and Reconciliation in Freedom of Expression Theory
Diverse theoretical positions in freedom of expression have established varying values relevant to democracy. This essay offers a broad-based critique of the state of freedom of expression theory, which it illuminates in a synthesis of libertarian, utilitarian, conservative, critical-cultural and postmodern positions. It argues in favor of reconciling the disparate theoretical positions in order to move the information society toward a common jurisdiction for libel lawsuits so that any chilling effect, caused by a difficulty in predicting law, procedure, and location, is dissipated. It finally identifies contours of the proposed theoretical reconciliation.
Towards Changes in Information Security Education
In the ACM guidelines for curricula at educational institutions, the recommendations for Information Security Assurance (ISA) education do not specify the topics, courses, or sequence of courses. As a consequence, there are numerous ISA education models and curricula in existence at educational institutions around the world. Organizations employing ISA professionals generally base their assessment of an individual's skill level on academic qualifications or certifications. While academic qualifications support broad knowledge and skills in general, professional certifications may be effective in a limited area of operations. Academic programs exposing students to theoretical concepts and problem-solving experience are critical for preparing graduates for jobs in information security. The critical importance of the information security curriculum at universities is stressed. Therefore, it is appropriate to evaluate the quality of academic information security programs and suggest changes or improvements in the curricula to ensure that undergraduates and graduates have gained the required skills after completing their studies.
Addictive use of social networking sites can be explained by the interaction of Internet use expectancies, Internet literacy, and psychopathological symptoms
BACKGROUND AND AIMS Most people use the Internet in a functional way to achieve certain goals and needs. However, there is an increasing number of people who experience negative consequences like loss of control and distress based on an excessive use of the Internet and its specific online applications. Some approaches postulate similarities with behavioral addictions as well as substance dependencies. They differentiate between a generalized and a specific Internet addiction, such as the pathological use of social networking sites (SIA-SNS). Prior studies particularly identified the use of applications, personal characteristics, and psychopathological symptoms as significant predictors for the development and maintenance of this phenomenon. So far, it remains unclear how psychopathological symptoms like depression and social anxiety interact with individual expectancies of Internet use and capabilities of handling the Internet, summarized as Internet literacy. METHODS The current study (N = 334) investigated the interaction of these components in a structural equation model. RESULTS The results indicate that the effects of depression and social anxiety on SIA-SNS were mediated by Internet use expectancies and self-regulation. DISCUSSION Thus, Internet use expectancies seem to be crucial for SIA-SNS, which is in line with prior models. CONCLUSIONS SNS use may be reinforced by experienced gratification and relief from negative feelings. Individual competences in handling the Internet may be preventive for the development of SIA-SNS.
Modules for Maple
In control theory the problem of computing greatest common divisors of polynomial matrices arises when trying to compute coprime matrix factorizations of a given transfer function (see Kailath [2]). Much work has been done on this topic in the numerical case, where the coefficients of the polynomials are floating point numbers. We are investigating the extension of these algorithms to the symbolic
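A minimal illustrative sketch (my own Python/SymPy example, not the authors' Maple modules): the scalar building block of such symbolic algorithms is an exact polynomial GCD computed over the rationals, so no floating-point round-off enters the factorization.

import sympy as sp

s = sp.symbols('s')
# Two hypothetical polynomial entries of a transfer-function factorization
p = sp.Poly(s**3 - s, s, domain='QQ')
q = sp.Poly(s**2 - 1, s, domain='QQ')
g = sp.gcd(p, q)        # exact symbolic GCD: s**2 - 1
print(g)

A matrix-level analogue would replace the scalar GCD with a greatest common right (or left) divisor extracted, for example, from a triangularization of the stacked polynomial matrices.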
A double-blind, placebo-controlled study of adjunctive calcitonin nasal spray in acute refractory mania.
OBJECTIVES Calcitonin, a neuropeptide, has been shown in preliminary double-blind trials to reduce agitation in patients with acute mania. Given that it has effects similar to those of lithium and anticonvulsants on modulation of second-messenger signaling pathways and stabilization of neuronal membranes, this study examined the efficacy of calcitonin nasal spray in treating acute manic symptoms in patients with treatment-resistant mania using a double-blind, placebo-controlled design. METHODS A total of 46 hospitalized patients experiencing either a manic or a mixed episode, who were refractory to treatment with adequate doses of either a mood stabilizer or an antipsychotic, or a mood stabilizer/antipsychotic combination, and had a score of ≥16 on the Young Mania Rating Scale (YMRS), were randomized to receive adjunctive nasal calcitonin 200 IU (n = 24) or saline (n = 22) spray for three weeks. The primary efficacy measure was the change in YMRS scores using the last observation carried forward (LOCF) method. RESULTS The clinical and demographic characteristics were similar between the groups. Patients had a mean YMRS score of 26 in the placebo group and a mean score of 25 in the calcitonin group. There were no significant differences in YMRS scores or percentage responders at three weeks between patients who received calcitonin and those who received placebo. There were also no significant differences in change scores on any other scales. Few patients experienced any adverse events. CONCLUSIONS This study does not support the use of nasal calcitonin in the treatment of treatment-resistant mania.
Detecting deception: the scope and limits
With the increasing interest in the neuroimaging of deception and its commercial application, there is a need to pay more attention to methodology. The weakness of studying deception in an experimental setting has been discussed intensively for over half a century. However, even though much effort has been put into their development, paradigms are still inadequate. The problems that bedevilled the old technology have not been eliminated by the new. Advances will only be possible if experiments are designed that take account of the intentions of the subject and the context in which these occur.
Once-Weekly Exenatide Versus Once- or Twice-Daily Insulin Detemir
OBJECTIVE This multicenter, open-label, parallel-arm study compared the efficacy and safety of exenatide once weekly (EQW) with titrated insulin detemir in patients with type 2 diabetes inadequately controlled with metformin (with or without sulfonylureas). RESEARCH DESIGN AND METHODS Patients were randomized to EQW (2 mg) or detemir (once or twice daily, titrated to achieve fasting plasma glucose ≤5.5 mmol/L) for 26 weeks. The primary outcome was proportion of patients achieving A1C ≤7.0% and weight loss ≥1.0 kg at end point, analyzed by means of logistic regression. Secondary outcomes included measures of glycemic control, cardiovascular risk factors, and safety and tolerability. RESULTS Of 216 patients (intent-to-treat population), 111 received EQW and 105 received detemir. Overall, 44.1% (95% CI, 34.7-53.9) of EQW-treated patients compared with 11.4% (6.0-19.1) of detemir-treated patients achieved the primary outcome (P < 0.0001). Treatment with EQW resulted in significantly greater reductions than detemir in A1C (least-square mean ± SE, -1.30 ± 0.08% vs. -0.88 ± 0.08%; P < 0.0001) and weight (-2.7 ± 0.3 kg vs. +0.8 ± 0.4 kg; P < 0.0001). Gastrointestinal-related and injection site-related adverse events occurred more frequently with EQW than with detemir. There was no major hypoglycemia in either group. Five (6%) patients in the EQW group and six (7%) patients in the detemir group experienced minor hypoglycemia; only one event occurred without concomitant sulfonylureas (detemir group). CONCLUSIONS Treatment with EQW resulted in a significantly greater proportion of patients achieving target A1C and weight loss than treatment with detemir, with a low risk of hypoglycemia. These results suggest that EQW is a viable alternative to insulin detemir treatment in patients with type 2 diabetes with inadequate glycemic control using oral antidiabetes drugs.
Multi-attribute decision making: A simulation comparison of select methods
Several methods have been proposed for solving multi-attribute decision making problems (MADM). A major criticism of MADM is that different techniques may yield different results when applied to the same problem. The problem considered in this study consists of a decision matrix input of N criteria weights and ratings of L alternatives on each criterion. The comparative performance of some methods has been investigated in a few, mostly field, studies. In this simulation experiment we investigate the performance of eight methods: ELECTRE, TOPSIS, Multiplicative Exponential Weighting (MEW), Simple Additive Weighting (SAW), and four versions of AHP (original vs. geometric scale and right eigenvector vs. mean transformation solution). Simulation parameters are the number of alternatives, criteria and their distribution. The solutions are analyzed using twelve measures of similarity of performance. Similarities and differences in the behavior of these methods are investigated. Dissimilarities in weights produced by these methods become stronger in problems with few alternatives; however, the corresponding final rankings of the alternatives vary across methods more in problems with many alternatives. Although less significant, the distribution of criterion weights affects the methods differently. In general, all AHP versions behave similarly and closer to SAW than the other methods. ELECTRE is the least similar to SAW (except for closer matching the top-ranked alternative), followed by MEW. TOPSIS behaves closer to AHP and differently from ELECTRE and MEW, except for problems with few criteria. A similar rank-reversal experiment produced the following performance order of methods: SAW and MEW (best), followed by TOPSIS, AHPs and ELECTRE. It should be noted that the ELECTRE version used was adapted to the common MADM problem and therefore it did not take advantage of the method’s capabilities in handling problems with ordinal or imprecise information.
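As a hedged illustration of the simplest of the compared methods (toy data of my own, not the simulation's), Simple Additive Weighting (SAW) normalizes the decision matrix and ranks alternatives by their weighted sum of ratings:

import numpy as np

ratings = np.array([[7., 9., 4.],    # L = 3 alternatives (rows) rated on
                    [8., 6., 5.],    # N = 3 criteria (columns); hypothetical values
                    [6., 8., 9.]])
weights = np.array([0.5, 0.3, 0.2])  # criterion weights summing to 1

norm = ratings / ratings.max(axis=0) # linear-scale normalization per criterion
scores = norm @ weights              # SAW score per alternative
ranking = np.argsort(scores)[::-1]   # best alternative first
print(scores, ranking)

Methods such as TOPSIS or AHP start from the same decision matrix but differ in the normalization and aggregation steps, which is where the rank disagreements studied here originate.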
The state-of-the-art in personalized recommender systems for social networking
With the explosion of Web 2.0 applications such as blogs, social and professional networks, and various other types of social media, the rich online information and various new sources of knowledge flood users and hence pose a great challenge in terms of information overload. It is critical to use intelligent agent software systems to assist users in finding the right information from an abundance of Web data. Recommender systems can help users deal with the information overload problem efficiently by suggesting items (e.g., information and products) that match users' personal interests. Recommender technology has been successfully employed in many applications such as recommending films, music, and books. The purpose of this report is to give an overview of existing technologies for building personalized recommender systems in a social networking environment, and to propose a research direction for addressing the user profiling and cold-start problems by exploiting user-generated content newly available in Web 2.0.
System-on-package ultra-wideband transmitter using CMOS impulse generator
In this paper, a low-cost CMOS ultra-wideband (UWB) impulse transmitter module with a compact form factor is proposed for impulse-radio communications. The module consists of a CMOS impulse generator, a compact bandpass filter (BPF), and a printed planar UWB antenna. The impulse generator is designed using a Samsung 0.35-μm CMOS process for low-cost and low-power fabrication. The measurement shows the fabricated chip makes a train of sharp triangular pulses with a peak voltage of about 2.8 V under the supply voltage of 3.3 V. To make an impulse fit the Federal Communications Commission (FCC) spectrum mask, the compact BPF is developed using a coupled strip line and a tapered stub. Also, the compact planar UWB antenna is developed. All of the components of the UWB transmitter module are fabricated on a single package using system-on-package technology for miniaturization. The proposed UWB transmitter is tested in an office environment. The measured results show that the generated UWB signal meets the FCC regulation, and the peak-to-peak amplitude of received UWB signal at 1-m distance on line of sight is 16 mVpp with a 10-dB-gain low-noise amplifier in the receiver.
Cost effectiveness of a new strategy to identify HNPCC patients.
BACKGROUND Distinguishing hereditary non-polyposis colorectal cancer (HNPCC) from non-hereditary colorectal cancer (CRC) can increase the life expectancy of HNPCC patients and their close relatives. AIM To determine the effectiveness, efficiency, and feasibility of a new strategy for the detection of HNPCC, using simple criteria for microsatellite instability (MSI) analysis of newly detected tumours that can be applied by pathologists. Criteria for MSI analysis are: (1) CRC before age 50 years; (2) second CRC; (3) CRC and HNPCC associated cancer; or (4) adenoma before age 40 years. METHODS The efficacy and cost effectiveness of the new strategy was evaluated against current practice. Decision analytic models were constructed to estimate the number of extra HNPCC mutation carriers and the costs of this strategy. The incremental costs and gain in life expectancy for a HNPCC mutation carrier were evaluated by Markov modelling. Feasibility was explored in five hospitals. RESULTS Using the new strategy, 2.2 times more HNPCC patients can be identified among a CRC population compared with current practice. This new strategy was found to be cost effective with an expected cost effectiveness ratio of 3801 per life year gained. When including the group of siblings and children, the cost effectiveness ratio became 2184 per life year gained. Sensitivity analysis showed these findings to be robust. CONCLUSIONS MSI testing in a selection of newly diagnosed CRC patients was shown to be cost effective and a feasible method to identify patients at risk for HNPCC who are not recognised by family history.
Treatment of neonatal abstinence syndrome in preterm and term infants.
OBJECTIVE Neonatal abstinence syndrome (NAS) is treated with a variety of drug preparations. For the optional treatment of NAS with chloral hydrate, phenobarbital, or morphine, the cumulative consumption of the mentioned drugs, the length of hospital stay, and the treatment duration were evaluated in preterm and term neonates. METHODS Retrospective, uncontrolled study evaluating different therapies of neonatal abstinence syndrome (NAS) in preterm and term neonates. RESULTS Over the past 16 years, data were obtained from the medical records of 51 neonates with NAS; 9 preterm and 35 term neonates were evaluated, and 7 were excluded because of incomplete data sets. 31 (72.1%) received a pharmacological treatment (6 preterm and 25 term neonates). Treatment started at 4.3 [3.3-5.3] d. Mean duration of treatment was 11.7 [6.6-16.7] d. In our study, chloral hydrate (ch) and phenobarbital (pb) were the first-line medication, escalated with the morphine (mp) solution. Mean cumulative dosage of ch was 643.5 [260.3-1026.7] mg, of pb 53.2 [19.7-86.8] mg, and of mp 4.22 [0-8.99] mg. CONCLUSION Our study group showed similar treatment duration and length of hospital stay compared to other studies. The cumulative dose of mp was lower compared to most studies. This benefit came at the expense of further medication with pb and ch. However, 6 of 9 preterm neonates needed significantly less pharmacological therapy compared to term neonates, indicating a lower susceptibility of the immature brain to abstinence from maternal opioids.
Action Schema Networks: Generalised Policies With Deep Learning
In this paper, we introduce the Action Schema Network (ASNet): a neural network architecture for learning generalised policies for probabilistic planning problems. By mimicking the relational structure of planning problems, ASNets are able to adopt a weight sharing scheme which allows the network to be applied to any problem from a given planning domain. This allows the cost of training the network to be amortised over all problems in that domain. Further, we propose a training method which balances exploration and supervised training on small problems to produce a policy which remains robust when evaluated on larger problems. In experiments, we show that ASNet’s learning capability allows it to significantly outperform traditional non-learning planners in several challenging domains.
Ten Simple Rules for Developing Usable Software in Computational Biology
The rise of high-throughput technologies in molecular biology has led to a massive amount of publicly available data. While computational method development has been a cornerstone of biomedical research for decades, the rapid technological progress in the wet lab makes it difficult for software development to keep pace. Wet lab scientists rely heavily on computational methods, especially since more research is now performed in silico. However, suitable tools do not always exist, and not everyone has the skills to write complex software. Computational biologists are required to close this gap, but they often lack formal training in software engineering. To alleviate this, several related challenges have been previously addressed in the Ten Simple Rules series, including reproducibility [1], effectiveness [2], and open-source development of software [3, 4]. Here, we want to shed light on issues concerning software usability. Usability is commonly defined as “a measure of interface quality that refers to the effectiveness, efficiency, and satisfaction with which users can perform tasks with a tool” [5]. Considering the subjective nature of this topic, a broad consensus may be hard to achieve. Nevertheless, good usability is imperative for achieving wide acceptance of a software tool in the community. In many cases, academic software starts out as a prototype that solves one specific task and is not geared for a larger user group. As soon as the developer realizes that the complexity of the problems solved by the software could make it widely applicable, the software will grow to meet the new demands. At least by this point, if not sooner, usability should become a priority. Unfortunately, efforts in scientific software development are constrained by limited funding, time, and rapid turnover of group members. As a result, scientific software is often poorly documented, non-intuitive, non-robust with regards to input data and parameters, and hard to install. For many use cases, there is a plethora of tools that appear very similar and make it difficult for the user to select the one that best fits their needs. Not surprisingly, a substantial fraction of these tools are probably abandonware; i.e., these are no longer actively developed or supported in spite of their potential value to the scientific community. To our knowledge, software development as part of scientific research is usually carried out by individuals or small teams with no more than two or three members. Hence, the responsibility of designing, implementing, testing, and documenting the code rests on few shoulders. Additionally, there is pressure to produce publishable results or, at least, to contribute analysis work to ongoing projects. Consequently, academic software is typically released as a prototype. We acknowledge that such a tool cannot adhere to and should not be judged by the standards
Text to 3D Scene Generation with Rich Lexical Grounding
The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments.
Current knowledge of Schisandra chinensis (Turcz.) Baill. (Chinese magnolia vine) as a medicinal plant species: a review on the bioactive components, pharmacological properties, analytical and biotechnological studies
Schisandra chinensis Turcz. (Baill.) is a plant species whose fruits have been well known in Far Eastern medicine for a long time. However, schisandra seems to be a plant still underestimated in contemporary therapy in the countries of East Asia. The article presents the latest available information on the chemical composition of this plant species. Special attention is given to dibenzocyclooctadiene lignans. In addition, recent studies of the biological activity of dibenzocyclooctadiene lignans and schisandra fruit extracts are recapitulated. The paper gives a short resume of their beneficial effects in biological systems in vitro, in animals, and in humans, thus underlining their medicinal potential. The cosmetic properties are depicted, too. The analytical methods used for assaying schisandra lignans in scientific studies and also in industry are presented as well. Moreover, special attention is given to information on the latest biotechnological studies of this plant species. The intention of this review is to contribute to a better understanding of the huge potential of the pharmacological relevance of S. chinensis.
1D sweep-and-prune self-collision detection for deforming cables
Detecting self-collision for cables and similar objects is an important part of numerous models in computational biology (protein chains), robotics (electric cables), hair modeling, computer graphics, etc. In this paper the 1D sweep-and-prune algorithm for detecting self-collisions of a deforming cable comprising linear segments is investigated. The sweep-and-prune algorithm is compared with other state-of-the-art self-collision detection algorithms for deforming cables and is shown to be up to an order of magnitude faster than existing algorithms for cables with a high proportion of segments moving. We also present a multi-threaded version of the algorithm and investigate its performance. In addition, we present worst-case bounds for 1D sweep-and-prune algorithms whereby the colliding objects do not exceed a certain object density, and apply these results to deforming cables.
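A rough sketch of the 1D sweep-and-prune idea (a simplified Python illustration under my own assumptions about the data layout, not the paper's implementation): segment bounding intervals are sorted along one axis, and only pairs whose intervals overlap on that axis are passed to the narrow-phase collision test.

def sweep_and_prune_1d(intervals):
    # intervals: list of (lo, hi, segment_id) axis-aligned bounds, one per cable segment
    events = sorted(intervals, key=lambda iv: iv[0])
    active, candidates = [], []
    for lo, hi, sid in events:
        # prune segments whose interval ended before this one starts
        active = [(alo, ahi, aid) for alo, ahi, aid in active if ahi >= lo]
        for _, _, aid in active:
            if abs(aid - sid) > 1:             # skip segments adjacent along the cable
                candidates.append((aid, sid))  # candidate pair for the narrow phase
        active.append((lo, hi, sid))
    return candidates

In the worst case (all intervals overlapping) this degenerates to testing all pairs, which is where the object-density bounds mentioned in the abstract become relevant.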
Validation of dynamic contrast-enhanced magnetic resonance imaging-derived vascular permeability measurements using quantitative autoradiography in the RG2 rat brain tumor model.
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is widely used to evaluate tumor permeability, yet measurements have not been directly validated in brain tumors. Our purpose was to compare estimates of forward leakage K(trans) derived from DCE-MRI to the estimates K(i) obtained using [(14)C]aminoisobutyric acid quantitative autoradiography ([(14)C]AIB QAR), an established method of evaluating blood-tumor barrier permeability. Both DCE-MRI and [(14)C]AIB QAR were performed in five rats 9 to 11 days following tumor implantation. K(trans) in the tumor was estimated from DCE-MRI using the three-parameter general kinetic model and a measured vascular input function. K(i) was estimated from QAR data using regions of interest (ROI) closely corresponding to those used to estimate K(trans). K(trans) and K(i) correlated with each other for two independent sets of central tumor ROI (R = 0.905, P = .035; R = 0.933, P = .021). In an additional six rats, K(trans) was estimated on two occasions to show reproducibility (intraclass coefficient = 0.9993; coefficient of variance = 6.07%). In vivo blood-tumor permeability parameters derived from DCE-MRI are reproducible and correlate with the gold standard for quantifying blood-tumor barrier permeability, [(14)C]AIB QAR.
Ridges and Valleys Detection in Images Using Difference of Rotating Half Smoothing Filters
In this paper we propose a new ridge/valley detection method in images based on the difference of rotating Gaussian semi-filters. The novelty of this approach resides in the mixing of ideas coming both from directional filters and the DoG method. We obtain a new anisotropic ridge/valley DoG detector enabling very precise detection of ridge/valley points. Moreover, this detector performs correctly at crest lines even if they are highly bent, and it is precise on junctions. This detector has been tested successfully on various image types presenting difficult problems for classical ridge/valley detection methods.
An Online Algorithm for Maximizing Submodular Functions
We consider the following two problems. We are given as input a set of activities and a set of jobs to complete. Our goal is to devise a schedule for allocating time to the various activities so as to achieve one of two objectives: minimizing the average time required to complete each job, or maximizing the number of jobs completed within a fixed time T . Formally, a schedule is a sequence 〈(v1, τ1), (v2, τ2), . . .〉, where each pair (v, τ) represents investing time τ in activity v. We assume that the fraction of jobs completed, f , is a monotone submodular function of the sequence of pairs that appear in a schedule. In the offline setting in which we have oracle access to f , these two objectives give us, respectively, what we call the MIN SUM SUBMODULAR COVER problem (which is a generalization of the MIN SUM SET COVER problem and the related PIPELINED SET COVER problem) and what we call BUDGETED MAXIMUM SUBMODULAR COVERAGE (which generalizes the problem of maximizing a monotone, submodular function subject to a knapsack constraint). We consider these problems in the online setting, in which the jobs arrive one at a time and we must finish each job (via some schedule) before moving on to the next. We give an efficient online algorithm for this problem whose worst-case asymptotic performance is simultaneously optimal for both objectives (unless P = NP), in the sense that its performance ratio (with respect to the optimal static schedule) converges to the best approximation ratios for the corresponding offline problems. Finally, we evaluate this algorithm experimentally by using it to learn, online, a schedule for allocating CPU time to the solvers entered in the 2007 SAT solver competition.
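As a hedged point of reference (not the online algorithm itself, which handles jobs arriving one at a time), the classical offline greedy rule for budgeted maximization of a monotone submodular f repeatedly invests time in the activity with the best marginal gain per unit time; the online algorithm's performance ratio converges to that of this kind of greedy approximation.

def greedy_schedule(activities, f, budget):
    # activities: dict mapping activity v -> time step tau to invest per pick (assumed interface)
    schedule, spent = [], 0.0
    while True:
        best, best_gain = None, 0.0
        for v, tau in activities.items():
            if spent + tau > budget:
                continue
            gain = (f(schedule + [(v, tau)]) - f(schedule)) / tau   # marginal gain per unit time
            if gain > best_gain:
                best, best_gain = (v, tau), gain
        if best is None:
            return schedule          # no remaining pick improves f within the budget
        schedule.append(best)
        spent += best[1]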
USING JTAG BASED ON HOL / SCALA / LMS / JIKESRVM / JVM INFORMATICS FRAMEWORK – AN INSIGHT INTO FORENSIC IMAGING OF EMBEDDED SYSTEMS IN THE CONTEXT OF SMART DEVICES
In this technical note the author is interested in exploiting the advantages of Higher Order Logic (HOL), Scala, JVM, JikesRVM & LMS in the IoT scenario. The title is highly self-explanatory, hence a detailed explanation is skipped. Index words: IoT/JVM/Scala/HOL/LMS (Lightweight Modular Staging)/Forensic Imaging/Firmware/JTAG. Introduction & Inspiration: [i] "JTAG Explained (finally!): Why "IoT", Software Security Engineers, and Manufacturers Should Care", source: http://blog.senr.io/blog/jtag-explained [ii] Forensic imaging of embedded systems using JTAG (boundary-scan), https://doi.org/10.1016/j.diin.2006.01.003 [iii] Scala Based FPGA Design Flow (Abstract Only), https://doi.org/10.1145/3020078.3021762 Informatics Framework Implementation: Figure I shows the approximate informatics framework (HOL/Scala/JTAG/JikesRVM/LMS/IoT); readers please note that the actual implementation might vary to some extent. Analysis of the Informatics Framework: the informatics framework was explored based on HOL/Scala/JTAG/JVM/IoT to perform forensic imaging of embedded systems in the context of smart devices [Refs. 1 to 9 are used]. Figure I illustrates the main idea to help those who are interested in this domain of embedded systems. Conclusions with future perspectives: a simple HOL/Scala/LMS/JVM/IoT based "Informatics Framework" was designed and presented in this technical note. It is sincerely hoped that many newcomers in this promising area will explore the challenging domain of "Forensic Imaging of Embedded Systems" using JTAG. Additional Information on Software Used: [a] https://isabelle.in.tum.de/ HOL [b] https://www.isa-afp.org/entries/AWN.html HOL-based wireless library [c] http://www.scala-lang.org/ Scala programming language [d] http://scala-lms.github.io/ Lightweight Modular Staging (LMS), a runtime code generation approach [e] http://www.spiral.net/software/spiral-scala.html Spiral for DSP, based in Scala [f] http://www.jikesrvm.org/ Java virtual machine [g] https://xdk.bosch-connectivity.com/ IoT environment [h] https://wilsonmar.github.io/intel-iot-setup/ IoT environment.
Thermodynamics of Ti in Ag-Cu alloys
The thermodynamic activities of Ti at dilution in a series of Ag-Cu alloys and eutectic Ag-Cu alloys containing In or Sn were measured using a galvanic cell technique employing a ThO2-8 pct Y2O3 electrolyte. The equilibrium oxide phase formed by the reaction of Ti (XTi > 0.004) in the Ag-Cu alloy melts with an Al2O3 or ZrO2 crucible was Ti2O (s). The free energy of formation of Ti2O (s) was estimated from available thermodynamic data. Titanium activities were calculated from measured oxygen potentials and the free energy of formation of Ti2O (s). Titanium in the eutectic Ag-Cu melt showed a positive deviation from ideal solution behavior at 1000°C, and its activity coefficient at infinite dilution was about 6.5 relative to pure solid Ti. Indium and Sn did not increase the activity coefficient of Ti in eutectic Ag-Cu melts. Silver increased the Ti activity coefficient in the Ag-Cu-Ti melts significantly. The Ti activity coefficient value in liquid Ag was about 20 times higher than in eutectic Ag-Cu melt at 1000 °C.
Follow-up measurements of Nevirapine plasma levels over a prolonged period.
Over a period of more than four years of treatment, 177 Nevirapine plasma levels were taken from 27 patients. The values showed a high inter-patient variability and a lower intra-patient variability. Differences in body weight turned out to be the main reason for inter-patient variability. Treatment over a prolonged period did not result in any change in plasma concentrations. Adjusting dosage by means of therapeutic drug monitoring would appear to be a reasonable way of maximising patient benefit from treatment.
Prevalence of cervical human papilloma virus infection among married women in Vietnam, 2011.
The burden of cervical cancer is increasing in Vietnam in the recent years, infection with high risk HPV being the cause. This study aimed to examine the prevalence of HPV and the distribution of HPV specific types among the general population in 5 big cities in Vietnam. Totals of 1500 women in round 1 and 3000 in round 2 were interviewed and underwent gynecological examination. HPV infection status, and HPV genotyping test were performed for all participants. Results indicated that the prevalence of HPV infection in 5 cities ranged from 6.1% to 10.2% with Can Tho having highest prevalence. The most common HPV types in all 5 cities were HPV 16, 18 and 58. Most of the positive cases were infected with high risk HPV, especially in Hanoi and Can Tho where more than 90% positive cases were high risk HPV. Furthermore, in Can Tho more than 60% of women were infected with multiple HPV types. The information from this study can be used to provide updated data for planning preventive activities for cervical cancer in the studied cities.
A Web of Things-Based Emerging Sensor Network Architecture for Smart Control Systems
The Web of Things (WoT) plays an important role in representing the objects connected to the Internet of Things in a more transparent and effective way. Thus, it enables seamless and ubiquitous web communication between users and smart things. Considering the importance of WoT, we propose a WoT-based emerging sensor network (WoT-ESN), which collects data from sensors, routes sensor data to the web, and integrates smart things into the web employing a representational state transfer (REST) architecture. A smart home scenario is introduced to evaluate the proposed WoT-ESN architecture. The smart home scenario is tested through computer simulation of the energy consumption of various household appliances, device discovery, and response time performance. The simulation results show that the proposed scheme significantly optimizes the energy consumption of the household appliances and the response time of the appliances.
Irregularities in the Distribution of Primes and Twin Primes
The maxima and minima of ⟨L(x)⟩ - π(x), ⟨R(x)⟩ - π(x), and ⟨L_2(x)⟩ - π_2(x) in various intervals up to x = 8 × 10^10 are tabulated. Here π(x) and π_2(x) are respectively the number of primes and twin primes not exceeding x, L(x) is the logarithmic integral, R(x) is Riemann's approximation to π(x), and L_2(x) is the Hardy-Littlewood approximation to π_2(x). The computation of the sum of inverses of twin primes less than 8 × 10^10 gives a probable value 1.9021604 ± 5 × 10^-7 for Brun's constant. 1. Approximations to π(x). Let P = {2, 3, 5, ...} be the set of primes, and let π(x) be the number of primes not exceeding x. Two well-known approximations to π(x) for x > 1 are the logarithmic integral (1.1) L(x) = ∫_0^x dt / log t, (1.2) = γ + log(log x) + Σ_{k=1}^∞ (log x)^k / (k · k!), and Riemann's approximation (1.3) R(x) = Σ_{k=1}^∞ (μ(k)/k) L(x^{1/k}), (1.4) = 1 + Σ_{k=1}^∞ (log x)^k / (k! · k · ζ(k+1)). Note that (1.1) differs by L(2) = 1.04516378... from the frequently used approximation ∫_2^x dt / log t. We are interested in the errors (1.5) r_1(x) = ⟨L(x)⟩ - π(x) and (1.6) r_2(x) = ⟨R(x)⟩ - π(x), where ⟨y⟩ denotes the integer closest to y (i.e., the integer part of y + 1/2).
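A small numerical check of these approximations (my own Python sketch, not part of the paper's tables; it uses mpmath's logarithmic integral and SymPy's prime-counting and Möbius functions):

from mpmath import li, mpf
from sympy import primepi, mobius

def L(x):
    # logarithmic integral from 0, matching L(x) above
    return li(x)

def R(x, terms=50):
    # Riemann's approximation R(x) = sum_{k>=1} mu(k)/k * L(x**(1/k)), truncated
    return sum(mobius(k) / mpf(k) * li(mpf(x) ** (mpf(1) / k)) for k in range(1, terms + 1))

x = 10**6
print(int(primepi(x)), L(x), R(x))   # pi(10^6) = 78498; L and R overshoot by roughly 130 and 29

Even at this modest x, R(x) is visibly closer to π(x) than L(x); the paper tabulates the fluctuations of these error terms at much larger x.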
Perceived emotional intelligence and life satisfaction among university teachers.
This study examined the relationship between Perceived Emotional Intelligence (PEI) and Life Satisfaction in university teachers. To assess the nature of these relationships and to predict the factors implied on life satisfaction, positive and negative affect, work satisfaction and alexithymia measures were used. 52 university teachers (30 men and 22 women) completed the Spanish version of the Trait Meta-Mood Scale for emotional intelligence (TMMS, Fernández-Berrocal, Extremera & Ramos, 2004). Alexithymia was measured by the Spanish version of the TAS-20 (Martínez-Sánchez, 1996), and life satisfaction was measured by SWLS (Díaz Morales, 2001). Also, Work Satisfaction Scale was used (JWS, Grajales & Araya, 2001). Our results yield a strong correlation between life satisfaction and TMMS subscales (emotional Clarity and emotional Repair), TAS-20 subscales (difficulty to describe emotions and external oriented thinking), and Work Satisfaction Scale. Further analyses show that the life satisfaction most significant predictors were positive and negative affect and emotional Clarity. These results support the incremental validity of self-report measures, as the TMMS, and the capacity of constructs related to emotional intelligence to explain the differences on life satisfaction independently from personality traits and mood states constructs.
On an Arithmetic Inequality
We obtain an arithmetic proof and a refinement of the inequality φ(n) + σ_k(n) < 2n, where n ≥ 2 and k ≥ 2. An application to another inequality is also provided.
Directly Addressable Variable-Length Codes
We introduce a symbol reordering technique that implicitly synchronizes variable-length codes, such that it is possible to directly access the i-th codeword without need of any sampling method. The technique is practical and has many applications to the representation of ordered sets, sparse bitmaps, partial sums, and compressed data structures for suffix trees, arrays, and inverted indexes, to name just a few. We show experimentally that the technique offers a competitive alternative to other data structures that handle this problem.
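A simplified sketch of the reordering idea (my own small Python illustration with assumed parameters, not the authors' bitmap-based implementation): each integer is split into b-bit chunks, chunk j of every number is stored contiguously in level j, and a flag per stored chunk says whether that number continues into the next level. Direct access to the i-th codeword then walks down the levels; the prefix sums over the flags below play the role of the rank operations on the level bitmaps in the real structure.

def dac_encode(numbers, b=2):
    chunks = []
    for n in numbers:
        c = []
        while True:
            c.append(n & ((1 << b) - 1))
            n >>= b
            if n == 0:
                break
        chunks.append(c)
    depth = max(len(c) for c in chunks)
    levels, flags = [], []
    for j in range(depth):
        lv = [c[j] for c in chunks if len(c) > j]
        fl = [1 if len(c) > j + 1 else 0 for c in chunks if len(c) > j]
        levels.append(lv)
        flags.append(fl)
    return levels, flags

def dac_access(levels, flags, i, b=2):
    value, shift, pos = 0, 0, i
    for lv, fl in zip(levels, flags):
        value |= lv[pos] << shift
        if fl[pos] == 0:
            return value                      # codeword ends at this level
        pos = sum(fl[:pos + 1]) - 1           # = rank1(fl, pos): position in next level
        shift += b

levels, flags = dac_encode([5, 1, 10, 300])
print([dac_access(levels, flags, i) for i in range(4)])   # [5, 1, 10, 300]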
Low-Temperature Two-Phase Microchannel Cooling for High-Heat-Flux Thermal Management of Defense Electronics
For a given heat sink thermal resistance and ambient temperature, the temperature of an electronic device rises fairly linearly with increasing device heat flux. This relationship is especially problematic for defense electronics, where heat dissipation is projected to exceed 1000 W/cm2 in the near future. Direct and indirect low-temperature refrigeration cooling facilitate appreciable reduction in the temperature of both coolant and device. This paper explores the benefits of cooling the device using direct and indirect refrigeration cooling systems. In the direct cooling system, a microchannel heat sink serves as an evaporator in a conventional vapor compression cycle using R134a as working fluid. In the indirect cooling system, HFE 7100 is used to cool the heat sink in a primary pumped liquid loop that rejects heat to a secondary refrigeration loop. Two drastically different flow behaviors are observed in these systems. Because of compressor performance constraints, mostly high void fraction two-phase patterns are encountered in the R134a system, dominated by saturated boiling. On the other hand, the indirect refrigeration cooling system facilitates highly subcooled boiling inside the heat sink. Both systems are shown to provide important cooling benefits, but the indirect cooling system is far more effective at dissipating high heat fluxes. Tests with this system yielded cooling heat fluxes as high as 840 W/cm2 without incurring critical heat flux (CHF). Results from both systems are combined to construct an overall map of performance trends relative to mass velocity, subcooling, pressure, and surface tension. Extreme conditions of near-saturated flow, low mass velocity, and low pressure produce "micro" behavior, where macrochannel flow pattern maps simply fail to apply, instabilities are prominent, and CHF is quite low. On the other hand, systems with high mass velocity, high subcooling, and high pressure are far more stable and yield very high CHF values; two-phase flow in these systems follows the fluid flow and heat transfer behavior as well as the flow pattern maps of macrochannels.
Neural Network Bottleneck Features for Language Identification
This paper presents the application of Neural Network Bottleneck (BN) features in Language Identification (LID). BN features are generally used for Large Vocabulary Speech Recognition in conjunction with conventional acoustic features, such as MFCC or PLP. We compare the BN features to several common types of acoustic features used in state-of-the-art LID systems. The test set is from the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state-of-the-art detection capabilities on audio from highly degraded radio communication channels. On this type of noisy data, we show that, on average, the BN features provide a 45% relative improvement in the Cavg or Equal Error Rate (EER) metrics across several test duration conditions, with respect to our single best acoustic features.
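A hedged sketch of a bottleneck extractor (layer sizes and targets are my assumptions, not the paper's configuration): a feed-forward network with one narrow layer is trained on frame-level classification targets, and after training the activations of that narrow layer are taken as the BN features fed to the LID back end.

import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    def __init__(self, n_in=440, n_hidden=1024, n_bn=80, n_out=3000):
        super().__init__()
        self.front = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid(),
                                   nn.Linear(n_hidden, n_bn))      # narrow bottleneck layer
        self.back = nn.Sequential(nn.Sigmoid(),
                                  nn.Linear(n_bn, n_out))          # frame-level training targets
    def forward(self, x):
        return self.back(self.front(x))
    def bottleneck_features(self, x):
        with torch.no_grad():
            return self.front(x)                                   # features for the LID back end

feats = BottleneckNet().bottleneck_features(torch.randn(8, 440))   # 8 frames of stacked acoustic features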
Large Scale Labelled Video Data Augmentation for Semantic Segmentation in Driving Scenarios
In this paper we present an analysis of the effect of large-scale video data augmentation for semantic segmentation in driving scenarios. Our work is motivated by a strong correlation between the high performance of most recent deep learning based methods and the availability of large volumes of ground truth labels. To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm [8]. As a result we increase the availability of high-resolution labelled frames by a factor of 20, yielding a 6.8% to 10.8% rise in average classification accuracy and/or IoU scores for several semantic segmentation networks. Our key contributions include: (a) augmented CityScapes and CamVid datasets providing 56.2K and 6.5K additional labelled frames of object classes respectively, (b) detailed empirical analysis of the effect of the use of augmented data, as well as (c) extension of the proposed framework to instance segmentation.
Automatic procedural model generation for 3D object variation
3D objects are used for numerous applications. In many cases not only single objects but also variations of objects are needed. Procedural models can be represented in many different forms, but generally excel in content generation. Therefore this representation is well suited for variation generation of 3D objects. However, the creation of a procedural model can be time-consuming on its own. We propose an automatic generation of a procedural model from a single exemplary 3D object. The procedural model consists of a sequence of parameterizable procedures and represents the object construction process. Changing the parameters of the procedures changes the surface of the 3D object. By linking the surface of the procedural model to the original object surface, we can transfer the changes and enable the possibility of generating variations of the original 3D object. The user can adapt the derived procedural model to easily and intuitively generate variations of the original object. We allow the user to define variation parameters within the procedures to guide a process of generating random variations. We evaluate our approach by computing procedural models for various object types, and we generate variations of all objects using the automatically generated procedural model.
Provable Subspace Clustering: When LRR meets SSC
An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of "Self-Expressiveness". The main difference is that SSC minimizes the vector ℓ1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the "Self-Expressiveness Property" and "Graph Connectivity" at the same time. A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are "nearly independent".
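A hedged toy instance of the combined objective (my own cvxpy sketch with random data and an assumed trade-off weight, not the authors' solver): minimize ||C||_1 + λ||C||_* subject to X = XC and diag(C) = 0, after which spectral clustering on |C| + |C|^T would produce the subspace segmentation.

import cvxpy as cp
import numpy as np

np.random.seed(0)
X = np.random.randn(10, 30)                  # columns are data points (hypothetical data)
lam = 1.0                                    # assumed trade-off between sparsity and low rank

C = cp.Variable((30, 30))
objective = cp.Minimize(cp.norm1(C) + lam * cp.normNuc(C))
constraints = [X @ C == X, cp.diag(C) == 0]  # self-expressiveness without trivial self-representation
cp.Problem(objective, constraints).solve()
print(np.round(C.value, 3))

Setting λ = 0 gives an SSC-style program, while letting λ grow emphasizes the LRR-style nuclear-norm term; this spectrum of behaviors is what the paper analyzes.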
PZT-Actuated and -Sensed Resonant Micromirrors with Large Scan Angles Applying Mechanical Leverage Amplification for Biaxial Scanning
This article presents the design, fabrication and characterization of lead zirconate titanate (PZT)-actuated micromirrors, which simultaneously enable an extremely large scan angle of up to 106° and a high frequency of 45 kHz. Besides the high driving torque delivered by PZT actuators, mechanical leverage amplification has been applied in the micromirrors in this work to reach large displacements while consuming low power. Additionally, the fracture strength and failure behavior of poly-Si, which is the basic material of the micromirrors, have been studied to optimize the designs and prevent the devices from breaking due to high mechanical stress. Since realizing biaxial scanning with two independent single-axial micromirrors shows considerable advantages compared to using a biaxial micromirror, a setup combining two single-axial micromirrors for biaxial scanning will also be presented in this work, together with the corresponding results. Moreover, integrated piezoelectric position sensors are implemented within the micromirrors, based on which closed-loop control has been developed and studied.
Repeatable Reverse Engineering with PANDA
We present PANDA, an open-source tool that has been purpose-built to support whole system reverse engineering. It is built upon the QEMU whole system emulator, and so analyses have access to all code executing in the guest and all data. PANDA adds the ability to record and replay executions, enabling iterative, deep, whole system analyses. Further, the replay log files are compact and shareable, allowing for repeatable experiments. A nine billion instruction boot of FreeBSD, e.g., is represented by only a few hundred MB. PANDA leverages QEMU's support of thirteen different CPU architectures to make analyses of those diverse instruction sets possible within the LLVM IR. In this way, PANDA can have a single dynamic taint analysis, for example, that precisely supports many CPUs. PANDA analyses are written in a simple plugin architecture which includes a mechanism to share functionality between plugins, increasing analysis code re-use and simplifying complex analysis development. We demonstrate PANDA's effectiveness via a number of use cases, including enabling an old but legitimately purchased game to run despite a lost CD key, in-depth diagnosis of an Internet Explorer crash, and uncovering the censorship activities and mechanisms of an IM client.
Cooperative flood detection using GSMD via SMS
This paper proposes an architecture for an early-warning flood system to alert the public against flood disasters. An effective early-warning system must be developed with linkages between four elements: accurate data collection to undertake risk assessments, development of hazard monitoring services, communication of risk-related information, and existence of community response capabilities. This project focuses on monitoring water level remotely using a wireless sensor network. The project also utilizes the Global System for Mobile communication (GSM) and short message service (SMS) to relay data from sensors to computers or to directly alert the respective victims through their mobile phones. It is hoped that the proposed architecture can be further developed into a functioning system, which would be beneficial to the community and act as a precautionary measure to save lives in the case of a flood disaster.
Active Sampler: Light-weight Accelerator for Complex Data Analytics at Scale
Recent years have witnessed amazing outcomes from “Big Models” trained by “Big Data”. Most popular algorithms for model training are iterative. Due to the surging volumes of data, we can usually afford to process only a fraction of the training data in each iteration. Typically, the data are either uniformly sampled or sequentially accessed. In this paper, we study how the data access pattern can affect model training. We propose an Active Sampler algorithm, where training data with more “learning value” to the model are sampled more frequently. The goal is to focus training effort on valuable instances near the classification boundaries, rather than evident cases, noisy data or outliers. We show the correctness and optimality of Active Sampler in theory, and then develop a light-weight vectorized implementation. Active Sampler is orthogonal to most approaches optimizing the efficiency of large-scale data analytics, and can be applied to most analytics models trained by stochastic gradient descent (SGD) algorithm. Extensive experimental evaluations demonstrate that Active Sampler can speed up the training procedure of SVM, feature selection and deep learning, for comparable training quality by 1.6-2.2x.
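A minimal sketch in the spirit of loss-aware sampling (my own simplification with a hinge-loss model and per-epoch sampling probabilities, not the paper's Active Sampler): examples with a larger current loss are drawn more often, and importance weights keep the SGD gradient estimate unbiased.

import numpy as np

def sgd_with_active_sampling(X, y, epochs=5, lr=0.1, eps=1e-3):
    # X: (n, d) features, y: (n,) labels in {-1, +1}; linear model trained with hinge loss
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        losses = np.maximum(0.0, 1.0 - y * (X @ w)) + eps   # "learning value" per example
        p = losses / losses.sum()                           # sampling distribution over examples
        for i in np.random.choice(n, size=n, p=p):
            if y[i] * (X[i] @ w) < 1.0:
                w += lr * y[i] * X[i] / (n * p[i])          # importance-weighted hinge gradient step
    return w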
MAC Essentials for Wireless Sensor Networks
The wireless medium being inherently broadcast in nature and hence prone to interferences requires highly optimized medium access control (MAC) protocols. This holds particularly true for wireless sensor networks (WSNs) consisting of a large amount of miniaturized battery-powered wireless networked sensors required to operate for years with no human intervention. There has hence been a growing interest on understanding and optimizing WSN MAC protocols in recent years, where the limited and constrained resources have driven research towards primarily reducing energy consumption of MAC functionalities. In this paper, we provide a comprehensive state-of-the-art study in which we thoroughly expose the prime focus of WSN MAC protocols, design guidelines that inspired these protocols, as well as drawbacks and shortcomings of the existing solutions and how existing and emerging technology will influence future solutions. In contrast to previous surveys that focused on classifying MAC protocols according to the technique being used, we provide a thematic taxonomy in which protocols are classified according to the problems dealt with. We also show that a key element in selecting a suitable solution for a particular situation is mainly driven by the statistical properties of the generated traffic.
Delayed sodium 18F-fluoride PET/CT imaging does not improve quantification of vascular calcification metabolism: results from the CAMONA study.
BACKGROUND This study aimed to determine if delayed sodium (18)F-fluoride (Na(18)F) PET/CT imaging improves quantification of vascular calcification metabolism. Blood-pool activity can disturb the arterial Na(18)F signal. With time, blood-pool activity declines. Therefore, delayed imaging can potentially improve quantification of vascular calcification metabolism. METHODS AND RESULTS Twenty healthy volunteers and 18 patients with chest pain were prospectively assessed by triple time-point PET/CT imaging at approximately 45, 90, and 180 minutes after Na(18)F administration. For each time point, global uptake of Na(18)F was determined in the coronary arteries and thoracic aorta by calculating the blood-pool-corrected maximum standardized uptake value (cSUV(MAX)). A target-to-background ratio (TBR) was calculated to determine the contrast resolution at 45, 90, and 180 minutes. Furthermore, we assessed whether the acquisition time-point affected the relation between cSUV(MAX) and the estimated 10-year risk for fatal cardiovascular disease (SCORE %). Coronary cSUV(MAX) (P = .533) and aortic cSUV(MAX) (P = .654) remained similar with time, whereas the coronary TBR (P < .0001) and aortic TBR (P < .0001) significantly increased with time. Even though the contrast resolution improved with time, positive correlations between SCORE % and coronary cSUV(MAX) (P < .020) and aortic cSUV(MAX) (P < .005) were observed at all investigated time points. CONCLUSIONS Delayed Na(18)F PET/CT imaging does not improve quantification of vascular calcification metabolism. Although contrast resolution improves with time, arterial Na(18)F avidity is invariant to the time between Na(18)F administration and PET/CT acquisition. Therefore, the optimal PET/CT acquisition time-point to quantify vascular calcification metabolism is achieved as early as 45 minutes after Na(18)F administration.
Prospective study on the association between diet quality and depression in mid-aged women over 9 years.
PURPOSE To examine the longitudinal association between diet quality and depression using prospective data from the Australian Longitudinal Study on Women's Health. METHODS Women born in 1946-1951 (n = 7877) were followed over 9 years starting from 2001. Dietary intake was assessed using the Dietary Questionnaire for Epidemiological Studies (version 2) in 2001 and a shortened form in 2007 and 2010. Diet quality was summarised using the Australian Recommended Food Score. Depression was measured using the 10-item Centre for Epidemiologic Depression Scale and self-reported physician diagnosis. Pooled logistic regression models including time-varying covariates were used to examine associations between diet quality tertiles and depression. Women were also categorised based on changes in diet quality during 2001-2007. Analyses were adjusted for potential confounders. RESULTS The highest tertile of diet quality was associated marginally with lower odds of depression (OR 0.94; 95 % CI 0.83, 1.00; P = 0.049) although no significant linear trend was observed across tertiles (OR 1.00; 95 % CI 0.94, 1.10; P = 0.48). Women who maintained a moderate or high score over 6 years had a 6-14 % reduced odds of depression compared with women who maintained a low score (moderate vs low score-OR 0.94; 95 % CI 0.80, 0.99; P = 0.045; high vs low score-OR 0.86; 95 % CI 0.77, 0.96; P = 0.01). Similar results were observed in analyses excluding women with prior history of depression. CONCLUSION Long-term maintenance of good diet quality may be associated with reduced odds of depression. Randomised controlled trials are needed to eliminate the possibility of residual confounding.
Extraperitoneal leakage as a possible explanation for failure of one-time intraperitoneal treatment in ovarian cancer.
We conducted a single-arm study to determine the biodistribution of intraperitoneally (i.p.) administered 90yttrium-labeled murine monoclonal antibody HMFG1 (90Y-muHMFG1) in patients with advanced stage ovarian cancer. Seventeen (17) patients in complete clinical remission for epithelial ovarian cancer were included. After completion of chemotherapy, a mixture of 111indium-labeled muHMFG1 (imaging) and 90Y-muHMFG1 (therapy) was i.p. administered by a surgically placed, indwelling i.p. catheter. Planar and single-photon emission computed tomography images were recorded to determine the distribution of the study medication during the first 6 days postinjection. Of the first 3 patients, 2 patients had extraperitoneal leakage of up to 50% of the injected dose within 24 hours after injection of the study medication. Extraperitoneal leakage was mainly seen in the retroperitoneal spaces covering the upper and lower quadrant of the abdomen. After adjustments in the procedure, leakage was observed in 2 of the remaining 14 patients. Extraperitoneal leakage of i.p. administered therapy does occur. Such leakage would reduce the locally delivered dose of a drug and could potentially have a negative impact on therapeutic efficacy. Given the potential attraction of developing i.p. treatments for intra-abdominal cancer, the observations in this study need to be taken into consideration.
Gradient Boosting for Conditional Random Fields
In this paper, we present a gradient boosting algorithm for tree-shaped conditional random fields (CRF). Conditional random fields are an important class of models for accurate structured prediction, but effective design of the feature functions is a major challenge when applying CRF models to real world data. Gradient boosting, which can induce and select functions, is a natural candidate solution for the problem. However, it is non-trivial to derive gradient boosting algorithms for CRFs, due to the dense Hessian matrices introduced by variable dependencies. We address this challenge by deriving a Markov Chain mixing rate bound to quantify the dependencies, and introduce a gradient boosting algorithm that iteratively optimizes an adaptive upper bound of the objective function. The resulting algorithm induces and selects features for CRFs via functional space optimization, with provable convergence guarantees. Experimental results on three real world datasets demonstrate that the mixing rate based upper bound is effective for training CRFs with non-linear potentials.
A survey on motion prediction and risk assessment for intelligent vehicles
With the objective to improve road safety, the automotive industry is moving toward more “intelligent” vehicles. One of the major challenges is to detect dangerous situations and react accordingly in order to avoid or mitigate accidents. This requires predicting the likely evolution of the current traffic situation, and assessing how dangerous that future situation might be. This paper is a survey of existing methods for motion prediction and risk assessment for intelligent vehicles. The proposed classification is based on the semantics used to define motion and risk. We point out the tradeoff between model completeness and real-time constraints, and the fact that the choice of a risk assessment method is influenced by the selected motion model.
Boxy types: inference for higher-rank types and impredicativity
Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidirectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.
Could mindfulness decrease anger, hostility, and aggression by decreasing rumination?
Research suggests that rumination increases anger and aggression. Mindfulness, or present-focused and intentional awareness, may counteract rumination. Using structural equation modeling, we examined the relations between mindfulness, rumination, and aggression. In a pair of studies, we found a pattern of correlations consistent with rumination partially mediating a causal link between mindfulness and hostility, anger, and verbal aggression. The pattern was not consistent with rumination mediating the association between mindfulness and physical aggression. Although it is impossible with the current nonexperimental data to test causal mediation, these correlations support the idea that mindfulness could reduce rumination, which in turn could reduce aggression. These results suggest that longitudinal work and experimental manipulations of mindfulness would be worthwhile approaches for further study of rumination and aggression. We discuss possible implications of these results.
Entity-aware Image Caption Generation
Current image captioning approaches generate descriptions which lack specific information, such as named entities that are involved in the images. In this paper we propose a new task which aims to generate informative image captions, given images and hashtags as input. We propose a simple but effective approach to tackle this problem. We first train a convolutional neural network plus long short-term memory (CNN-LSTM) model to generate a template caption based on the input image. Then we use a knowledge graph based collective inference algorithm to fill in the template with specific named entities retrieved via the hashtags. Experiments on a new benchmark dataset collected from Flickr show that our model generates news-style image descriptions with much richer information. Our model outperforms unimodal baselines significantly with various evaluation metrics.
Software Defined Networks : The New Norm for Networks
Our paper deals with Software Defined Networking (SDN), which is in extensive use at present because its programmability helps in initializing, controlling, and managing network dynamics. It allows network administrators to work with a centralized network configuration and improve data center network efficiency. SDN is becoming popular as a replacement for the static architecture of traditional networks and for the limited computing and storage of modern computing environments such as data centers. Operations are performed by the controllers together with the static switches. Due to the imbalance caused by dynamic traffic, some controllers are underutilized, while overloaded controllers may cause switches to suffer time delays. Wireless networks involve no cabling and are therefore cost-effective, efficient, easily installable, manageable, and adaptable. We present how SDN makes it easy to achieve endpoint security by checking a device's status. Local agents collect device information and send it to a cloud service to check for vulnerabilities. The results of those checks are sent to the SDN controller through published Application Program Interfaces (APIs). The SDN controller instructs OpenFlow switches to direct vulnerable devices to a quarantine network, thus containing suspicious traffic. The implementation is done using the data network mathematical model.
Follow-up after transanal endoscopic microsurgery or transanal excision of large benign rectal polyps
Methods: Between January 1986 and December 1995, 238 patients with benign rectal polyps underwent either transanal endoscopic microsurgery (n = 226) or transanal excision (n = 12) at the Clinic of General and Abdominal Surgery, Johannes Gutenberg-University, Mainz. Results: Mean polyp size was 4.2 cm; 89.1% of polyps measured more than 2 cm in diameter. In 89.1% of cases, histological analysis revealed polyps containing tubulovillous or villous adenomas. Synchronous colonic polyps were detected in 12.5% of patients. Follow-up data are available on 222 patients (94%). At follow-up examination, 169 of the 193 surviving patients (87.6%) were recurrence free. Seven of 193 patients (3.6%) had developed neoplastic colonic polyps and, in 17 patients (8.8%), metachronous polyps were detected. Conclusions: Transanal endoscopic microsurgical polypectomy was shown to be a low-risk procedure with a low recurrence rate for the complete resection of large rectal polyps. At a follow-up rate of 61.1%, the incidence of metachronous carcinoma was 3.1%, which is markedly below the rate of 8–18% cited in the literature for tubulovillous or villous adenomas larger than 1 cm in diameter.
The effect of walnut intake on factors related to prostate and vascular health in older men
BACKGROUND Tocopherols may protect against prostate cancer and cardiovascular disease (CVD). METHODS We assessed the effect of walnuts, which are rich in tocopherols, on markers of prostate and vascular health in men at risk for prostate cancer. We conducted an 8-week walnut supplement study to examine effects of walnuts on serum tocopherols and prostate-specific antigen (PSA). Subjects (n = 21) consumed (in random order) their usual diet +/- a walnut supplement (75 g/d) that was isocalorically incorporated in their habitual diets. Prior to the supplement study, 5 fasted subjects participated in an acute timecourse experiment and had blood taken at baseline and 1, 2, 4, and 8 h after consuming walnuts (75 g). RESULTS During the timecourse experiment, triglycerides peaked at 4 h, and gamma-tocopherol (gamma-T) increased from 4 to 8 h. Triglyceride-normalized gamma-T was two-fold higher (P = 0.01) after 8 versus 4 h. In the supplement study, change from baseline was +0.83 +/- 0.52 micromol/L for gamma-T, -2.65 +/- 1.30 micromol/L for alpha-tocopherol (alpha-T) and -3.49 +/- 1.99 for the tocopherol ratio (alpha-T:gamma-T). A linear mixed model showed that, although PSA did not change, the ratio of free PSA:total PSA increased and approached significance (P = 0.07). The alpha-T:gamma-T ratio decreased significantly (P = 0.01), partly reflecting an increase in serum gamma-T, which approached significance (P = 0.08). CONCLUSION The significant decrease in the alpha-T:gamma-T ratio with an increase in serum gamma-T and a trend towards an increase in the ratio of free PSA:total PSA following the 8-week supplement study suggest that walnuts may improve biomarkers of prostate and vascular status.
Analysing Political Discourse: Toward a Cognitive Approach
The critical study of political discourse has until very recently rested solely within the domain of the social sciences. Working within a linguistic framework, Critical Discourse Analysis (CDA), in particular Fairclough (Fairclough 1989, 1995a, 1995b, 2001; Fairclough and Wodak 1997), has been heavily influenced by Foucault. The linguistic theory that CDA, and critical linguistics especially (which CDA subsumes), has traditionally drawn upon is Halliday's Systemic-Functional Grammar, which is largely concerned with the function of language in the social structure (Fowler et al. 1979; Fowler 1991; Kress and Hodge 1979).
Performance of Lung-RADS in the National Lung Screening Trial: a retrospective assessment.
Editors' Notes. Context: The definitions used to classify low-dose computed tomography findings may markedly influence the benefits and harms of lung cancer screening. Contribution: This analysis of data from a large screening trial found that using the recently proposed Lung-RADS approach to classifying low-dose computed tomography findings substantially decreased the false-positive result rate but with a concomitant decrease in sensitivity. Implication: Adopting the Lung-RADS classification system may improve the results of lung cancer screening programs. The U.S. Preventive Services Task Force recently recommended (grade B) lung cancer screening with low-dose computed tomography (LDCT) for high-risk current and former smokers (1). The primary evidence used by the Task Force was the National Lung Screening Trial (NLST), which reported a 20% reduction in lung cancer-specific death associated with LDCT screening (2). Important considerations for widespread use of LDCT lung cancer screening in clinical practice include the definition of a positive result in computed tomography (CT) screening and the appropriate management of positive screening results. Much knowledge has accumulated since the NLST was designed in 2002. In this trial, the definition of a positive screening result was a nodule of 4 mm or greater in the longest diameter that had no specific benign calcification patterns. In addition, the NLST achieved its results without a trial-wide specified protocol for diagnostic management for positive screening results. A recent reanalysis of the NLST examined the effect of different cutoffs defining a positive screening result and found that increasing the threshold to 6 or 8 mm would have resulted in substantial decreases in the false-positive result rate with only small corresponding decreases in sensitivity (3). The International Early Lung Cancer Action Program reported similar results for baseline LDCT screenings, showing a substantial reduction in the positivity rate of screening results with increasing size cutoffs and only a few resultant missed cancer cases (4). Over this period, several professional organizations have promulgated lung cancer screening guidelines, many of which define a positive screening result and include nodule management (5–7). The American College of Radiology recently began efforts to standardize the reporting of LDCT screening results in a manner analogous to the use of the Breast Imaging Reporting and Data System for mammography, based on the best available data. This effort included defining a positive result on lung cancer screening CT in the most effective manner, attempting to reduce the substantial false-positive result rate while having the least possible effect on test sensitivity, and suggesting management recommendations based on lung cancer risk. Published data from several LDCT screening studies, including the NLST, the International Early Lung Cancer Action Program, and the European NELSON (Nederlands-Leuvens Longkanker Screenings Onderzoek) trial, were used by a consensus panel to help derive positivity criteria for Lung-RADS (4, 8–10), which was officially released in May 2014 (11). Compared with the NLST criteria, Lung-RADS increases the size threshold for a positive baseline screening result from a 4-mm greatest transverse diameter to a 6-mm transverse bidimensional average (and to 20 mm for nonsolid nodules) and requires growth for preexisting nodules.
Although data from the NLST, in part, were used to develop the Lung-RADS criteria, only published summary-level data were considered. These data were sufficient to give an approximate positivity rate for Lung-RADS as applied to the NLST but not to give an exact distribution of Lung-RADS scores. This is especially the case for screenings after baseline, where the individual nodule history over time is critical in defining the Lung-RADS category. We used participant- and nodule-level data to retrospectively apply the Lung-RADS criteria to the NLST. We evaluate the effect of Lung-RADS on the performance characteristics of LDCT screening, including sensitivity, false-positive result rate, positive predictive value (PPV), and negative predictive value (NPV). In addition, we compare the characteristics of the cancer cases detectable by Lung-RADS with those that it would have missed. Methods NLST Design The NLST randomly assigned participants aged 55 to 74 years to LDCT or chest radiography screening. Eligibility criteria included 30 pack-years of smoking or greater and current smoking status or having quit within the past 15 years (12). Participants were recruited at 33 U.S. centers from 2002 to 2004 and received either LDCT or chest radiography over 3 annual screening rounds (denoted T0, T1, and T2). The NLST was approved by the institutional review board at each screening center, and all participants provided informed consent. The NLST study protocol defined a noncalcified nodule (NCN) of 4 mm or greater in the longest transverse diameter as a positive screening result. For each NCN that was 4 mm or greater, radiologists used standardized forms to report location, greatest transverse and perpendicular diameters, margins, and attenuation characteristics. At T1 and T2, they reported whether the abnormality was preexisting or new based on examinations of previous images and, if preexisting, whether it had grown and whether a suspicious change in attenuation had occurred since past screenings. Noncalcified nodules that were unchanged from T0 to T2, representing stability for 2 years, could be considered benign and constitute a negative screening result at the radiologist's discretion. Other abnormalities, including adenopathy or effusion, could also trigger positive screening results. Positive results were tracked for resultant diagnostic procedures and lung cancer diagnoses. In addition, participants were followed with annual surveys to ascertain incident cancer cases. All reported cancer cases were verified with medical records, with stage and histologic characteristics recorded. Deaths were tracked with the annual surveys and supplemented by National Death Index searches. Lung-RADS Table 1 describes the primary criteria for defining Lung-RADS categories. Categories 1 (negative) and 2 (benign appearance) correspond to negative screening results, and categories 3 (probably benign) and 4 (suspicious) correspond to positive screening results. Category 4 is further divided into 4A, 4B, and 4X (8). In the context of annual screening, a negative screening result assumes that reevaluation will occur at the next annual screening, whereas a positive screening result means that additional evaluation is recommended before the next annual screening. The distinctions between the positive screening categories are important because Lung-RADS management guidelines differ substantially across categories, ranging from follow-up CT at 6 months for category 3 to positron emission tomography and CT or biopsy for 4B. 
In addition, category 2 involves tracking small nodules on the next annual screening, and category 1 does not involve tracking nodules. Table 1. Summary of Lung-RADS Classification Lung-RADS criteria distinguish between baseline (first) and subsequent screenings. For baseline screenings (generally lacking comparison examinations), the criteria are based on nodule size, as measured by average diameter, and nodule attenuation (solid, part-solid, or nonsolid). For subsequent screenings, the criteria also consider the preexistence and growth of the nodule. For baseline screenings, positive screening results for solid and part-solid nodules require a size of 6 mm, and 20 mm is required for nonsolid (that is, ground-glass) nodules. For positivity on subsequent screenings, 4 mm is required for new (solid or part-solid) nodules, and preexisting nodules must show growth, defined as an increase in size of greater than 1.5 mm. New or growing nonsolid nodules still must meet the 20-mm size requirement. For part-solid nodules, the size and/or growth of the solid component is also considered. The overall Lung-RADS screening category is determined by the nodule with the highest individual Lung-RADS score. Category 3 or 4 nodules with additional features (such as spiculation) or imaging findings that increase suspicion for cancer (such as enlarged lymph nodes) can qualify as category 4X. Applying Lung-RADS to the NLST The average diameter for NLST nodules was computed as the mean of the longest diameter and the longest perpendicular diameter. The NLST attenuation classifications of soft tissue, ground glass, and mixed were mapped to the Lung-RADS classifications of solid, nonsolid, and part-solid, respectively. The NLST did not report the amount of growth but only whether growth occurred; therefore, report of growth in the NLST was considered nodule growth for Lung-RADS. For part-solid nodules, the NLST did not report the size of the solid component, which may be required to distinguish among categories 3, 4A, and 4B. Therefore, if NLST data for a part-solid nodule were consistent with 2 or more categories of 3 or higher, a range (such as 3 to 4B) instead of a single category was denoted for our analysis. These category ranges for part-solid nodules were used only if they constituted (at the upper limit of their range) the nodule with the highest degree of suspicion. In addition, the solid component was assumed to be growing if the nodule as a whole was reported as growing or there was a suspicious change in attenuation; growth specifically of the solid component was not recorded in the NLST. If other suspicious findings, in the absence of any nodules measuring 4 mm or greater, constituted a positive screening result in the NLST, this was classified as category 4X for Lung-RADS. Quantitative Methods Lung cancer was deemed to be present at a screening if it was diagnosed within 1 year or before the next screening (whichever came first) or, for po
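A much-simplified sketch of the nodule-level positivity rules summarized above. It encodes only the size, attenuation and growth thresholds mentioned in the text (6 mm solid/part-solid and 20 mm nonsolid at baseline; 4 mm for new nodules and reported growth for preexisting nodules at subsequent screens) and deliberately ignores the finer 3/4A/4B/4X distinctions and the part-solid special cases.

```python
# Simplified positive/negative call for Lung-RADS-style screening of nodules.
from dataclasses import dataclass

@dataclass
class Nodule:
    longest_mm: float
    perpendicular_mm: float
    attenuation: str            # "solid", "part-solid", or "nonsolid"
    preexisting: bool = False
    grew: bool = False          # reported growth (> 1.5 mm under Lung-RADS)

    @property
    def avg_diameter(self) -> float:
        # Lung-RADS uses the average of the two transverse diameters.
        return (self.longest_mm + self.perpendicular_mm) / 2

def is_positive(nodule: Nodule, baseline: bool) -> bool:
    d = nodule.avg_diameter
    if nodule.attenuation == "nonsolid":
        # Nonsolid nodules must reach 20 mm (and be new or growing after baseline).
        return d >= 20 and (baseline or nodule.grew or not nodule.preexisting)
    if baseline:
        return d >= 6
    if nodule.preexisting:
        return nodule.grew
    return d >= 4               # new solid/part-solid nodule at a subsequent screen

def screen_positive(nodules, baseline):
    # The overall screening result is driven by the most suspicious nodule.
    return any(is_positive(n, baseline) for n in nodules)

print(screen_positive([Nodule(5.0, 4.0, "solid")], baseline=True))    # False
print(screen_positive([Nodule(7.0, 6.0, "solid")], baseline=True))    # True
```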
Globally Coherent Text Generation with Neural Checklist Models
Recurrent neural networks can generate locally coherent text but often have difficulties representing what has already been generated and what still needs to be said – especially when constructing long texts. We present the neural checklist model, a recurrent neural network that models global coherence by storing and updating an agenda of text strings which should be mentioned somewhere in the output. The model generates output by dynamically adjusting the interpolation among a language model and a pair of attention models that encourage references to agenda items. Evaluations on cooking recipes and dialogue system responses demonstrate high coherence with greatly improved semantic coverage of the agenda.
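A schematic of the interpolation step only: the next-token distribution is a convex combination of a language-model distribution and two attention distributions over agenda items (new versus already used), with mixture weights predicted from the decoder state. Dimensions, the item-to-vocabulary projection and all names are illustrative stand-ins, not the authors' exact parameterization.

```python
# Mix a language-model distribution with two agenda-attention distributions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
vocab, hidden, n_items = 50, 8, 4

h = rng.normal(size=hidden)                       # decoder hidden state
p_lm = softmax(rng.normal(size=vocab))            # language-model distribution

# Attention over agenda items (new vs. used), projected onto the vocabulary.
attn_new = softmax(rng.normal(size=n_items))
attn_used = softmax(rng.normal(size=n_items))
item_to_vocab = rng.dirichlet(np.ones(vocab), size=n_items)   # stand-in projection
p_new = attn_new @ item_to_vocab
p_used = attn_used @ item_to_vocab

# Mixture weights predicted from the hidden state (three-way switch).
W = rng.normal(size=(3, hidden))
alpha = softmax(W @ h)
p_next = alpha[0] * p_lm + alpha[1] * p_new + alpha[2] * p_used
assert abs(p_next.sum() - 1.0) < 1e-9
print("most likely next token id:", int(p_next.argmax()))
```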
Linking GIS and water resources management models: an object-oriented method
Many challenges are associated with the integration of geographic information systems (GISs) with models in specific applications. One of them is adapting models to the environment of GISs. Unique aspects of water resource management problems require a special approach to the development of GIS data structures. Expanded development of GIS applications for water resources management analysis can be assisted by the use of an object-oriented approach. In this paper, we model a river basin water allocation problem as a collection of spatial and thematic objects. A conceptual GIS data model is formulated to integrate the physical and logical components of the modeling problem into an operational framework, based on which extended GIS functions are developed to implement a tight linkage between the GIS and the water resources management model. Through the object-oriented approach, data, models and user interfaces are integrated in the GIS environment, creating great flexibility for modeling and analysis. The concept and methodology described in this paper are also applicable to connecting GIS with models in other fields that have a spatial dimension and to which GIS can therefore provide a powerful additional component of the modeler's tool kit.
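An illustrative sketch of the object-oriented idea: river-basin components are represented as objects that carry both spatial attributes (for the GIS) and thematic attributes (for the allocation model). The class names, attributes and the toy priority-based allocation are assumptions, not the paper's schema.

```python
# River-basin components as objects combining spatial and thematic attributes.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DemandNode:
    name: str
    location: Tuple[float, float]      # spatial attribute (x, y)
    demand: float                      # thematic attribute (volume per period)
    priority: int = 1

@dataclass
class Reservoir:
    name: str
    location: Tuple[float, float]
    storage: float
    inflow: float

@dataclass
class RiverBasin:
    reservoirs: List[Reservoir] = field(default_factory=list)
    demands: List[DemandNode] = field(default_factory=list)

    def allocate(self) -> dict:
        """Toy priority-based allocation, standing in for the management model."""
        available = sum(r.storage + r.inflow for r in self.reservoirs)
        allocation = {}
        for node in sorted(self.demands, key=lambda d: d.priority):
            served = min(node.demand, available)
            allocation[node.name] = served
            available -= served
        return allocation

basin = RiverBasin(
    reservoirs=[Reservoir("Upper", (0.0, 10.0), storage=50.0, inflow=20.0)],
    demands=[DemandNode("City", (5.0, 2.0), demand=40.0, priority=1),
             DemandNode("Irrigation", (7.0, 1.0), demand=45.0, priority=2)],
)
print(basin.allocate())   # {'City': 40.0, 'Irrigation': 30.0}
```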
Multi-objective UAV mission planning using evolutionary computation
This investigation develops an innovative algorithm for routing multiple autonomous unmanned aerial vehicles (UAVs). The UAV Swarm Routing Problem (SRP), a new combinatorics problem, is developed as a variant of the Vehicle Routing Problem with Time Windows (VRPTW). Solutions of the SRP model yield route assignments per vehicle that reach all targets on time and within distance constraints. A complexity analysis and a multi-objective formulation of the VRPTW indicate the necessity of a stochastic solution approach, leading to a multi-objective evolutionary algorithm. A full problem definition of the SRP, as well as a multi-objective formulation, parallels that of the VRPTW. Benchmark problems for the VRPTW are modified in order to create SRP benchmarks. The SRP solutions are comparable to or better than the corresponding VRPTW solutions, while also representing a more realistic UAV swarm routing scenario.
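A small sketch of how candidate SRP solutions might be scored under two objectives (targets reached within their time windows, total distance flown), together with the Pareto-dominance test a multi-objective evolutionary algorithm would use to compare them. Routes, speeds and time windows are invented for illustration.

```python
# Two-objective evaluation of UAV swarm routes plus a Pareto-dominance check.
import math

def evaluate(routes, targets, speed=1.0):
    """routes: list of waypoint lists per UAV; targets: {pos: (open, close)} windows."""
    served, total_dist = 0, 0.0
    for route in routes:
        t, pos = 0.0, (0.0, 0.0)                  # each UAV starts at the depot
        for wp in route:
            d = math.dist(pos, wp)
            total_dist += d
            t += d / speed
            window = targets.get(wp)
            if window and window[0] <= t <= window[1]:
                served += 1
            pos = wp
    # Objective 1: maximize served targets; objective 2: minimize distance.
    return served, total_dist

def dominates(a, b):
    """True if solution a Pareto-dominates b for (served, distance)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

targets = {(3.0, 4.0): (0.0, 10.0), (6.0, 8.0): (0.0, 12.0)}
sol_a = evaluate([[(3.0, 4.0), (6.0, 8.0)]], targets)     # one UAV visits both
sol_b = evaluate([[(3.0, 4.0)], [(6.0, 8.0)]], targets)   # two UAVs, one target each
print(sol_a, sol_b, dominates(sol_a, sol_b))
```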
Working memory, psychiatric symptoms, and academic performance at school
Previous studies of the relationship among working memory function, academic performance, and behavior in children have focused mainly on clinical populations. In the present study, the associations between performance in audio- and visuospatial working memory tasks and teacher-reported academic achievement and psychiatric symptoms were evaluated in a sample of fifty-five 6- to 13-year-old schoolchildren. Working memory function was measured by visual and auditory n-back tasks. Information on incorrect responses, reaction times, and multiple and missed responses was collected during the tasks. The children's academic performance and behavioral and emotional status were evaluated by the Teacher Report Form. The results showed that good spatial working memory performance was associated with academic success at school. Children with low working memory performance, especially audiospatial memory, were reported to have more academic and attentional/behavioral difficulties at school than children with good working memory performance. An increased number of multiple and missed responses in the auditory and visual tasks was associated with teacher-reported attentional/behavioral problems, and in the visual tasks with teacher-reported anxiety/depressive symptoms. The results suggest that working memory deficits may underlie some learning difficulties and behavioral problems related to impulsivity, difficulties in concentration, and hyperactivity. On the other hand, it is possible that anxiety/depressive symptoms affect working memory function, as well as the ability to concentrate, leading to a lower level of academic performance at school.
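A toy scoring routine for a single visual 2-back run of the kind described above, returning counts of hits, missed and incorrect responses plus mean reaction time; the stimulus and response encoding is invented for illustration.

```python
# Score an n-back run: hits, missed targets, incorrect responses, mean RT for hits.
def score_nback(stimuli, responses, n=2):
    """stimuli: list of items; responses: list of (reaction time in s, or None) per trial."""
    incorrect = missed = hits = 0
    rts = []
    for i, rt in enumerate(responses):
        is_target = i >= n and stimuli[i] == stimuli[i - n]
        if is_target and rt is not None:
            hits += 1
            rts.append(rt)
        elif is_target and rt is None:
            missed += 1
        elif not is_target and rt is not None:
            incorrect += 1
    mean_rt = sum(rts) / len(rts) if rts else None
    return {"hits": hits, "missed": missed, "incorrect": incorrect, "mean_rt": mean_rt}

stimuli = ["A", "B", "A", "C", "C", "E", "C"]
responses = [None, None, 0.62, None, None, 0.80, 0.71]
print(score_nback(stimuli, responses))
# -> {'hits': 2, 'missed': 0, 'incorrect': 1, 'mean_rt': 0.665}
```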