Applications of Deep Learning and Reinforcement Learning to Biological Data
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)–machine interfaces. These advances have generated novel opportunities for the development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promises to revolutionize the future of artificial intelligence. The growth in computational power, accompanied by faster and larger data storage and declining computing costs, has already allowed scientists in various fields to apply these techniques to data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey of the application of DL, RL, and deep RL techniques to mining biological data. In addition, we compare the performance of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
The Measurement of Work Engagement With a Short Questionnaire: A Cross-National Study.
This article reports on the development of a short questionnaire to measure work engagement—a positive work-related state of fulfillment that is characterized by vigor, dedication, and absorption. Data were collected in 10 different countries (N = 14,521), and results indicated that the original 17-item Utrecht Work Engagement Scale (UWES) can be shortened to 9 items (UWES-9). The factorial validity of the UWES-9 was demonstrated using confirmatory factor analyses, and the three scale scores have good internal consistency and test-retest reliability. Furthermore, a two-factor model with a reduced Burnout factor (including exhaustion and cynicism) and an expanded Engagement factor (including vigor, dedication, absorption, and professional efficacy) fit the data best. These results confirm that work engagement may be conceived as the positive antipode of burnout. It is concluded that the UWES-9 has acceptable psychometric properties and that the instrument can be used in studies on positive organizational behavior.
Combining Residual Networks with LSTMs for Lipreading
We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system is a combination of spatiotemporal convolutional, residual, and bidirectional Long Short-Term Memory networks. We train and evaluate it on the Lipreading In-The-Wild benchmark, a challenging database of 500 target words consisting of 1.28-second video excerpts from BBC TV broadcasts. The proposed network attains a word accuracy of 83.0%, a 6.8% absolute improvement over the current state of the art, without using information about word boundaries during training or testing.
How are substance use disorders addressed in VA psychiatric and primary care settings? Results of a national survey.
OBJECTIVE This study examined interventions for substance use disorders within the Department of Veterans Affairs (VA) psychiatric and primary care settings. METHODS National random samples of 83 VA psychiatry program directors and 102 primary care practitioners were surveyed by telephone. The survey assessed screening practices to detect substance use disorders, protocols for treating patients with substance use disorders, and available treatments for substance use disorders. RESULTS Respondents reported extensive contact with patients with substance use problems. However, a majority reported being ill equipped to treat substance use disorders themselves; they usually referred such patients to specialty substance use disorder treatment programs. CONCLUSIONS Offering fewer specialty substance use disorder services within the VA may be problematic: providers can refer patients to specialty programs only if such programs exist. Caring for veterans with substance use disorders may require increasing the capacity of and establishing new specialty programs or expanding the ability of psychiatric programs and primary care practitioners to provide such care.
A safe generic adaptation mechanism for smart cars
Today's vehicles are evolving towards smart cars, which will be able to drive autonomously and adapt to changing contexts. Incorporating self-adaptation into these cyber-physical systems (CPS) promises great benefits, such as cheaper software-based redundancy or optimised resource utilisation. As promising as these advantages are, a considerable proportion of a vehicle's functionality poses safety hazards when confronted with fault and failure situations. Consequently, a system's safety has to be ensured with respect to the availability of multiple software applications, which often results in redundant hardware resources, such as dedicated backup control units. To benefit from self-adaptation by creating efficient and safe systems, this work introduces a safety concept in the form of a generic adaptation mechanism (GAM). In detail, this generic adaptation mechanism is introduced and analysed with respect to generally known and newly created safety hazards, in order to determine the minimal set of system properties and architectural limitations required to perform adaptation safely. Moreover, the approach is applied to the ICT architecture of a smart e-car, thereby highlighting the soundness, general applicability, and advantages of this safety concept and forming the foundation for the ongoing implementation of the GAM within a real prototype vehicle.
Anomaly Detection in Elderly Daily Behavior in Ambient Sensing Environments
Current ubiquitous computing applications for smart homes aim to enhance people's daily living across the age span. Among the target groups, the elderly are a population eager for "choices for living arrangements" that would allow them to continue living in their homes while receiving the health care they need. Given the growing elderly population, there is a need for statistical models able to capture the recurring patterns of daily activity and reason on the basis of this information. We present an analysis of real-life sensor data collected from 40 different households of elderly people, using motion, door, and pressure sensors. Our objective is to automatically observe and model the daily behavior of the elderly and detect anomalies that could occur in the sensor data. For this purpose, we first introduce an abstraction layer to create a common ground for home sensor configurations. Next, we build a probabilistic spatio-temporal model to summarize daily behavior. Anomalies are then defined as significant deviations from the learned behavioral model and detected using a cross-entropy measure. We have compared the detected anomalies with manually collected annotations, and the results show that the presented approach is able to detect significant behavioral changes of the elderly.
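The learned-model-plus-cross-entropy idea above can be sketched with a toy frequency model: estimate P(location | hour of day) from normal days, then score a new day by its average negative log-likelihood. The locations, hours, and smoothing constant below are illustrative assumptions, not the paper's actual sensor configuration or model.

```python
import math
from collections import Counter

def learn_model(days, alpha=1.0, locations=("bed", "kitchen", "bathroom", "door")):
    """Estimate P(location | hour) from per-day lists of (hour, location) sensor
    events, with additive smoothing so unseen pairs keep non-zero probability."""
    counts = {h: Counter() for h in range(24)}
    for day in days:
        for hour, loc in day:
            counts[hour][loc] += 1
    model = {}
    for h in range(24):
        total = sum(counts[h].values()) + alpha * len(locations)
        model[h] = {loc: (counts[h][loc] + alpha) / total for loc in locations}
    return model

def cross_entropy(day, model):
    """Average negative log-likelihood of one day's events under the model;
    high values flag behaviour that deviates from the learned routine."""
    nll = [-math.log(model[hour][loc]) for hour, loc in day]
    return sum(nll) / len(nll)

normal = [[(7, "bathroom"), (8, "kitchen"), (13, "kitchen"), (23, "bed")] for _ in range(20)]
model = learn_model(normal)
routine = cross_entropy([(7, "bathroom"), (8, "kitchen"), (23, "bed")], model)
odd = cross_entropy([(3, "door"), (4, "door"), (4, "kitchen")], model)
```

The unusual night-time door activity scores a markedly higher cross-entropy than the routine day; thresholding that score yields the anomaly flag.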
Retrieving chronological age from dental remains of early fossil hominins to reconstruct human growth in the past.
A chronology of dental development in Pan troglodytes is arguably the best available model with which to compare and contrast reconstructed dental chronologies of the earliest fossil hominins. Establishing a time scale for growth is a requirement for making further comparative observations about timing and rate during both dento-skeletal growth and brain growth. The absolute timing of anterior tooth crown and root formation appears not to reflect the period of somatic growth. In contrast, the molar dentition best reflects changes to the total growth period. Earlier initiation of molar mineralization, shorter crown formation times, and less root length formed at gingival emergence into functional occlusion are cumulatively expressed as earlier ages at molar eruption. Features that are similar in modern humans and Pan, such as the total length of time taken to form individual teeth, raise expectations that these would also have been the same in fossil hominins. The best available evidence from the youngest fossil hominin specimens suggests a close resemblance to the model for Pan, but also hints that Gorilla may be a better developmental model for some. A mosaic of great ape-like features currently best describes the timing of early hominin dental development.
An effective way of word-level language identification for code-mixed facebook comments using word-embedding via character-embedding
Individuals use social networking sites like Facebook and Twitter to express their interests, opinions, and reviews. In earlier days, users relied on English as their medium of communication. Although content can now be written in Unicode characters, people often find it easier to mix two or more languages together, or prefer to write their native language in Roman script. Such data are called code-mixed text. When processing such social media data, recognizing the language of each word is an important task. In this work, we have developed a system for word-level language identification of code-mixed social media text, applied to Tamil-English and Malayalam-English code-mixed Facebook comments. The methodology is a novel approach based on features obtained from a character-based embedding technique together with context information, and uses a Support Vector Machine classifier for training and testing. An accuracy of 93% was obtained for Malayalam-English and 95% for Tamil-English code-mixed text.
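As a stdlib-only illustration of the word-level pipeline, the sketch below substitutes character-trigram counts and a nearest-centroid rule for the paper's character-embedding features and SVM classifier; the toy Tamil/English training words and language codes are invented for the example.

```python
from collections import Counter

def char_ngrams(word, n=3):
    """Character trigrams with boundary markers, a crude stand-in for
    character embeddings."""
    w = f"^{word.lower()}$"
    return Counter(w[i:i + n] for i in range(len(w) - n + 1))

class CentroidLangID:
    """Word-level language tagger: each language is summarised by the sum of
    its words' trigram count vectors; a word is tagged with the profile of
    highest cosine similarity."""
    def fit(self, labelled_words):
        self.profiles = {}
        for word, lang in labelled_words:
            self.profiles.setdefault(lang, Counter()).update(char_ngrams(word))
        return self

    def predict(self, word):
        grams = char_ngrams(word)
        def cosine(prof):
            dot = sum(grams[g] * prof[g] for g in grams)
            norm = (sum(v * v for v in grams.values()) ** 0.5) * \
                   (sum(v * v for v in prof.values()) ** 0.5)
            return dot / norm if norm else 0.0
        return max(self.profiles, key=lambda lang: cosine(self.profiles[lang]))

train = [("vanakkam", "ta"), ("nandri", "ta"), ("enna", "ta"),
         ("hello", "en"), ("thanks", "en"), ("what", "en")]
tagger = CentroidLangID().fit(train)
```

In the actual system, the per-word features would also include context from neighbouring words before being passed to the SVM.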
The role of genes in life
In this paper two widely held ideas are criticized: (1) that chromosomal genes are the fundamental units of biological machinery, and (2) that organisms have arisen from assemblies of "free" genes. Instead it is suggested that (3) the information encoded in the chromosomes may better be compared with the instructions contained in a textbook or leaflet, and that (4) basic cell activities may have preceded the instructions dealing with them, in the same way in which human skills have preceded any texts dealing with them. This analogy is elaborated in certain details and some of its limitations are shown.
Table-to-Text: Describing Table Region With Natural Language
In this paper, we present a generative model that produces a natural language sentence describing a table region, e.g., a row. The model maps a row from a table to a continuous vector and then generates a natural language sentence by leveraging the semantics of the table. To deal with rare words appearing in a table, we develop a flexible copying mechanism that selectively replicates contents from the table in the output sequence. Extensive experiments demonstrate the accuracy of the model and the power of the copying mechanism. On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to 39.12, respectively. Furthermore, we introduce an open-domain dataset, WIKITABLETEXT, including 13,318 explanatory sentences for 4,962 tables. Our model achieves a BLEU-4 score of 38.23, outperforming template-based and language-model-based approaches.
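A minimal caricature of the copying idea: rare values (names, dates) are copied verbatim from the table row, while common words come from the output vocabulary. The template, slot names, and row below are hypothetical; the real model learns a soft copy/generate gate rather than a fixed template.

```python
def describe_row(row, vocab):
    """Toy analogue of the copying mechanism: the 'decoder' emits a template
    whose slots are filled by copying cell contents from the table row, so
    rare words absent from the output vocabulary still appear correctly."""
    out = []
    for token in ["<name>", "was", "born", "in", "<birth_place>", "in", "<birth_year>", "."]:
        if token.startswith("<"):
            out.append(row[token.strip("<>")])  # copy from the table
        elif token in vocab:
            out.append(token)                   # generate from the vocabulary
    return " ".join(out)

row = {"name": "Ada Lovelace", "birth_place": "London", "birth_year": "1815"}
vocab = {"was", "born", "in", "."}
sentence = describe_row(row, vocab)
# → "Ada Lovelace was born in London in 1815 ."
```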
Bundle-branch block as a risk factor in noncardiac surgery.
BACKGROUND Despite extensive data examining perioperative risk in patients with coronary artery disease, little attention has been devoted to the implications of conduction system abnormalities. OBJECTIVE To define the clinical significance of bundle-branch block (BBB) as a perioperative risk factor. METHODS Retrospective, cohort-controlled study of all noncardiac, nonophthalmologic, adult patients with BBB seen in our preoperative evaluation center. Medical charts were reviewed for data regarding cardiovascular disease, surgical procedure, type of anesthesia, intravascular monitoring, and perioperative complications. RESULTS Bundle-branch block was present in 455 patients. Right BBB (RBBB) was more common than left BBB (LBBB) (73.8% vs 26.2%). Three patients with LBBB and 1 patient with RBBB died; 1 patient had a supraventricular tachyarrhythmia. Three of the 4 deaths were sepsis related. There were 2 (0.4%) deaths in the control group. There was no difference in mortality between BBB and control groups (P = .32). Subgroup analysis suggested an increased risk for death in patients with LBBB vs controls (P = .06; odds ratio, 6.0; 95% confidence interval, 1.2-100.0) and vs RBBB (P = .06; odds ratio, 8.7; 95% confidence interval, 1.2-100.0). CONCLUSIONS The presence of BBB is not associated with a high incidence of postoperative cardiac complications. Perioperative mortality is not increased in patients with RBBB and not directly attributable to cardiac complications in patients with LBBB. These data suggest that the presence of BBB does not significantly increase the likelihood of cardiac complications following surgery, but that patients with LBBB may not tolerate the stress of perioperative noncardiac complications.
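The reported odds ratios can be reproduced in spirit from a 2×2 table. The cell counts below are only illustrative reconstructions from the abstract's percentages (about 119 LBBB patients with 3 deaths; a control group assumed to number roughly 500, given 2 deaths at 0.4%); the published analysis may have used different counts and methods.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96, corr=0.5):
    """Odds ratio for a 2x2 table (a,b = deaths/survivors in group 1;
    c,d = deaths/survivors in group 2), with a Haldane-Anscombe correction
    (adds 0.5 to every cell so zero counts do not break the estimate) and a
    Woolf logit 95% confidence interval."""
    a, b, c, d = (x + corr for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# illustrative counts: 3 deaths among ~119 LBBB patients vs 2 among ~500 controls
or_, lo, hi = odds_ratio_ci(3, 116, 2, 498)
```

With these assumed counts the point estimate comes out near the abstract's reported odds ratio of 6.0 for LBBB versus controls.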
Prioritizing Attention in Fast Data: Principles and Promise
While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can help prioritize attention over fast data that is too large for manual inspection. We present a set of design principles for fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.
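The classify-then-summarize pattern can be sketched in a few lines: flag outlying readings with a robust (median/MAD) rule, then report which attribute values are over-represented among outliers relative to inliers. This is a simplification, not MacroBase's actual streaming implementation; the stream and attribute tags are invented.

```python
import statistics

def explain_outliers(stream, threshold=3.0):
    """MacroBase-style pipeline sketch: (1) classify points whose distance
    from the median exceeds `threshold` MADs as outliers, then (2) rank
    attribute values by their risk ratio (outlier rate / inlier rate)."""
    values = [v for v, _ in stream]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values) or 1e-9
    outliers = [attrs for v, attrs in stream if abs(v - med) / mad > threshold]
    inliers = [attrs for v, attrs in stream if abs(v - med) / mad <= threshold]
    scores = {}
    for attr in {a for attrs in outliers for a in attrs}:
        out_rate = sum(attr in x for x in outliers) / len(outliers)
        in_rate = sum(attr in x for x in inliers) / max(len(inliers), 1)
        scores[attr] = out_rate / max(in_rate, 1e-9)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# 50 normal readings tagged firmware 1.0, plus two spikes tagged firmware 2.0
stream = [(10 + i % 3, {"fw:1.0"}) for i in range(50)] + [(90, {"fw:2.0"}), (95, {"fw:2.0"})]
explanation = explain_outliers(stream)
```

Here the summarizer surfaces `fw:2.0` as the attribute most associated with the outlying readings, which is the kind of interpretable explanation the engine aims for.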
Cyclophosphamide plus granulocyte colony stimulating factor (G-CSF) is more effective than G-CSF alone in mobilizing hemopoietic progenitor cells in severe, refractory rheumatoid arthritis.
Autologous hematopoietic stem cell transplantation (AHSCT) has been proposed as a treatment for severe, refractory rheumatoid arthritis (RA).1 Although this needs to be clarified by controlled studies in this setting, AHSCT with ex vivo T-cell depletion might be more effective than an unmanipulated graft, by reducing the reinfusion of autoreactive T-cells.2 Peripheral blood stem cells are now replacing bone marrow cells as a source of hematopoietic progenitors. Release of the large number of hematopoietic stem cells required for engraftment can be obtained with granulocyte colony-stimulating factor (G-CSF), either alone or in combination with chemotherapy.1,3 In patients with malignancies, mobilization with cyclophosphamide and G-CSF allows the collection of a larger number of stem cells than G-CSF alone.3 This may be useful when T-cell depletion is planned, as ex vivo graft manipulation is associated with a reduction of stem cell content.4 Previous studies have looked at the combination of cyclophosphamide plus G-CSF in RA,1,5 but direct comparisons between combination treatment and G-CSF alone have not been formally undertaken. After signing informed consent, seven patients with severe RA refractory to conventional therapy were enrolled in a clinical trial approved by the ethical committee of the Maugeri Foundation Institute for Clinical Research. Bone marrow examination was normal in all cases; one patient (#6) had developed stage I non-Hodgkin's lymphoma (treated with surgery and radiotherapy) 3 years before this study. Three patients (#1-3) received I.V. cyclophosphamide (4 g/m²) followed by lenograstim 10 μg/kg/d starting from day 4 until stem cell collection. Four patients (#4-7), matched for sex, age, disease activity and duration, received lenograstim 10 μg/kg/d until stem cell collection.
CD34+ cell count in the peripheral blood was monitored daily by FACSCalibur (Becton Dickinson Immunocytometry Systems, San José, CA, USA); when CD34+ cells exceeded 20/μL, stem cell collection was performed with a Cobe Spectra cell separator (Cobe, Denver, CO, USA). Positive immunoselection was performed using an Isolex 300i cell separator device (Baxter Health Care, Deerfield, IL, USA). Since the number of CD34+ cells required for safe hematopoietic recovery is ≥2.5×10⁶/kg, and considering that immunoselection may cause a loss of hematopoietic progenitors, this procedure was performed only in case of harvesting ≥5×10⁶ CD34+ cells/kg. An aliquot of the CD34+ cells exceeding 5×10⁶/kg was stored unselected as a back-up graft. Stem cell mobilization was highly effective in patients receiving cyclophosphamide plus growth factor as compared to G-CSF alone, so that CD34+ cell immunoselection could be performed only in the former patients (Table 1). No patient experienced flares of the disease; neutropenic fever was observed in patient #3, but no infections or bleeding were documented and no patient required blood transfusions. Patients who received cyclophosphamide had complete hair loss lasting 3 months. The clinical outcomes of RA following mobilization are shown in Table 2 according to the core set criteria of the American College of Rheumatology.6
Table 1. Stem cell mobilization, collection and immunoselection in seven patients with severe, refractory rheumatoid arthritis after cyclophosphamide plus G-CSF (patients #1-3) or G-CSF alone (patients #4-7). CD34+ cell positive selection to obtain partial T-cell depletion was performed only in case of harvesting at least 5×10⁶ CD34+ cells/kg.
Full-Duplex Cooperative Non-Orthogonal Multiple Access With Beamforming and Energy Harvesting
In this paper, a novel communication scheme that combines beamforming, energy harvesting, and cooperative non-orthogonal multiple access (NOMA) is introduced. In the proposed scheme, NOMA is first combined with beamforming to serve more than one user in each beamforming vector. Then, the simultaneous wireless information and power transfer (SWIPT) technique is exploited to encourage the strong user to relay the weak user's information messages in a full-duplex (FD) manner. However, one major drawback of FD communications is the self-interference (SI) signal caused by signal leakage from a terminal's output to its input. Three different cases are considered. In the first case, the SI signal is assumed to be fully cancelled at the strong user using SI cancellation techniques. In the second case, SI is only cancelled at the information decoding circuits, while being harvested in the energy harvester to provide extra energy. In the third case, the impact of imperfect SI cancellation on system performance is investigated. It is shown that introducing SWIPT not only motivates users to collaborate, but also mitigates the impact of the SI signal. In addition, the problem of total sum rate maximisation is formulated and solved locally by means of a two-step difference-of-convex-functions (DC)-based procedure. Furthermore, the outage probabilities of both the weak and the strong user are derived in closed-form expressions. Numerical simulations illustrate the sum rate gain of the proposed scheme compared with existing schemes and verify the derived formulas.
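The achievable-rate expressions underlying such a sum-rate analysis can be illustrated for a single two-user downlink beam under the perfect SI cancellation case. The power split, channel gains, and noise level below are arbitrary example values, not the paper's simulation parameters.

```python
import math

def noma_rates(p_total, a_weak, g_strong, g_weak, noise=1.0):
    """Downlink NOMA rate sketch for one beam serving two users. The weak
    user gets the power fraction `a_weak`; the strong user first decodes
    (and removes, via successive interference cancellation) the weak user's
    signal, so its own rate sees no intra-beam interference."""
    p_w, p_s = a_weak * p_total, (1 - a_weak) * p_total
    # weak user decodes its signal treating the strong user's signal as interference
    r_weak = math.log2(1 + p_w * g_weak / (p_s * g_weak + noise))
    # strong user applies SIC, then decodes its own signal interference-free
    r_strong = math.log2(1 + p_s * g_strong / noise)
    return r_weak, r_strong

r_w, r_s = noma_rates(p_total=10.0, a_weak=0.8, g_strong=4.0, g_weak=0.5)
```

Sweeping `a_weak` over (0, 1) and maximising `r_weak + r_strong` subject to a minimum rate for the weak user is the simplest version of the sum-rate problem the paper solves with a DC procedure.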
Gaps as characters in sequence-based phylogenetic analyses.
In the analysis of sequence-based data matrices, the use of different methods of treating gaps has been demonstrated to influence the resulting phylogenetic hypotheses (e.g., Eernisse and Kluge, 1993; Vogler and DeSalle, 1994; Simons and Mayden, 1997). Despite this influence, a well-justified, uniformly applied method of treating gaps is lacking in sequence-based phylogenetic studies. Treatment of gaps varies widely, from secondarily mapping gaps onto the tree inferred from base characters to treating all gaps as separate characters or character states (Gonzalez, 1996). This diversity of approaches demonstrates the need for a comprehensive discussion of indel (insertion or deletion) coding and a robust method with which to incorporate gap characters into tree searches. We use the term "indel coding" instead of "gap coding" because the term "gap coding" has already been applied to coding quantitative characters (Mickevich and Johnson, 1976; Archie, 1985). Although "indel coding" undesirably refers to processes that are not observed (insertions and deletions) instead of patterns that are observed (gaps), the term is unambiguous and does not co-opt established terminology. The purpose of this paper is to discuss the implications of each of the methods of treating gaps in phylogenetic analyses, to allow workers to make informed choices among them. We suggest that gaps should be coded as characters in phylogenetic analyses, and we propose two indel-coding methods. We discuss four main points: (1) the logical independence of alignment and tree search; (2) why gaps are properly coded as characters; (3) how gaps should be coded as characters; and (4) problems with a priori weighting of gap characters during tree search.
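One way to code gaps as characters can be sketched as follows, in the spirit of a simple indel-coding scheme: every gap with distinct start/end coordinates becomes one absence/presence character. The toy alignment is invented, and scoring overlapping but non-identical gaps as missing ("?") is a simplification of how inapplicable states are usually handled.

```python
import re

def simple_indel_coding(alignment):
    """Each gap with unique (start, end) coordinates becomes one character:
    '1' if the taxon has exactly that gap, '0' if it has bases there, and
    '?' (missing) if it has a different, overlapping gap."""
    gaps = sorted({(m.start(), m.end()) for seq in alignment.values()
                   for m in re.finditer(r"-+", seq)})
    matrix = {}
    for taxon, seq in alignment.items():
        own = {(m.start(), m.end()) for m in re.finditer(r"-+", seq)}
        row = []
        for start, end in gaps:
            if (start, end) in own:
                row.append("1")
            elif "-" in seq[start:end]:
                row.append("?")  # overlapping, non-identical gap: scored as missing
            else:
                row.append("0")
        matrix[taxon] = "".join(row)
    return gaps, matrix

aln = {"taxA": "ACGT--GT", "taxB": "ACGT--GT", "taxC": "ACGTACGT", "taxD": "AC----GT"}
gaps, chars = simple_indel_coding(aln)
```

The resulting binary characters can be appended to the base-character matrix before the tree search, keeping alignment and search logically independent.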
Beyond Budgeting and Agile Software Development: A Conceptual Framework for the Performance Management of Agile Software Development Teams
Around the same time as the emergence of agile methods as a formalized concept, the management accounting literature introduced the concept of Beyond Budgeting as a performance management model for changing business environments. The two concepts share many similarities, each having a distinctly agile or adaptive perspective. The Beyond Budgeting model promises to enable companies to keep pace with changing business environments, quickly create and adapt strategy, and empower people throughout the organization to make effective changes. This research-in-progress paper attempts to develop the Beyond Budgeting model within the context of agile software development teams. The twelve Beyond Budgeting principles are discussed and a research framework is presented. This framework is being used in two case studies to investigate the organizational issues and challenges that affect the performance of agile software development teams.
Mental balance and well-being: building bridges between Buddhism and Western psychology.
Clinical psychology has focused primarily on the diagnosis and treatment of mental disease, and only recently has scientific attention turned to understanding and cultivating positive mental health. The Buddhist tradition, on the other hand, has focused for over 2,500 years on cultivating exceptional states of mental well-being as well as identifying and treating psychological problems. This article attempts to draw on centuries of Buddhist experiential and theoretical inquiry as well as current Western experimental research to highlight specific themes that are particularly relevant to exploring the nature of mental health. Specifically, the authors discuss the nature of mental well-being and then present an innovative model of how to attain such well-being through the cultivation of four types of mental balance: conative, attentional, cognitive, and affective.
Breast cancer histopathological image classification using Convolutional Neural Networks
The performance of most conventional classification systems relies on appropriate data representation, and much of the effort is dedicated to feature engineering, a difficult and time-consuming process that uses prior expert domain knowledge of the data to create useful features. Deep learning, on the other hand, can extract and organize discriminative information from the data without requiring the design of feature extractors by a domain expert. Convolutional Neural Networks (CNNs) are a particular type of deep, feedforward network that has gained attention from the research community and industry, achieving empirical successes in tasks such as speech recognition, signal processing, object recognition, natural language processing, and transfer learning. In this paper, we conduct preliminary experiments using the deep learning approach to classify breast cancer histopathological images from BreaKHis, a publicly available dataset (http://web.inf.ufpr.br/vri/breast-cancer-database). We propose a method based on the extraction of image patches for training the CNN and the combination of these patches for final classification. This method allows using the high-resolution histopathological images from BreaKHis as input to an existing CNN, avoiding adaptations of the model that could lead to a more complex and computationally costly architecture. The CNN performance is better than previously reported results obtained by other machine learning models trained with hand-crafted textural descriptors. Finally, we also investigate the combination of different CNNs using simple fusion rules, achieving some improvement in recognition rates.
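The patch-extraction and patch-combination steps can be sketched as below. The patch size, stride, toy image, and dummy per-patch softmax scores are placeholders: in the actual method each patch would be scored by the trained CNN, and the fusion rule shown is a simple sum rule.

```python
def extract_patches(image, size, stride):
    """Slide a size x size window over a 2D image (list of lists) so a
    high-resolution slide can be fed to a fixed-input CNN patch by patch."""
    h, w = len(image), len(image[0])
    return [[row[x:x + size] for row in image[y:y + size]]
            for y in range(0, h - size + 1, stride)
            for x in range(0, w - size + 1, stride)]

def fuse(patch_predictions):
    """Sum-rule fusion of per-patch class probabilities into one
    image-level decision."""
    n_classes = len(patch_predictions[0])
    totals = [sum(p[c] for p in patch_predictions) for c in range(n_classes)]
    return max(range(n_classes), key=totals.__getitem__)

image = [[(x + y) % 7 for x in range(8)] for y in range(8)]
patches = extract_patches(image, size=4, stride=2)
# a trained CNN would score each patch; here three dummy softmax outputs:
label = fuse([[0.2, 0.8], [0.6, 0.4], [0.1, 0.9]])
```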
3D Integration technology: Status and application development
As predicted by the ITRS roadmap, semiconductor industry development dominated by shrinking transistor gate dimensions alone will not be able to overcome the performance and cost problems of future IC fabrication. Today, 3D integration based on through silicon vias (TSV) is a well-accepted approach to overcome the performance bottleneck and simultaneously shrink the form factor. Several full 3D process flows have been demonstrated; however, there are still no microelectronic products based on 3D TSV technologies on the market, except for CMOS image sensors. 3D chip stacking of memory and logic devices without TSVs has already been widely introduced in the market. Applying TSV technology for memory on logic will increase the performance of these advanced products and simultaneously shrink the form factor. In addition to enabling further improvement of transistor integration densities, 3D integration is a key technology for the integration of heterogeneous technologies. Miniaturized MEMS/IC products represent a typical example of such heterogeneous systems, demanding smart system integration rather than extremely high transistor integration densities. The European 3D technology platform established within the EC-funded e-CUBES project focuses on the requirements coming from heterogeneous systems. The selected 3D integration technologies are optimized with respect to the availability of devices (packaged dies, bare dies, or wafers) and the requirements of performance and form factor. There are specific technology requirements for the integration of MEMS/NEMS devices which differ from those of 3D integrated ICs (3D-IC). While 3D-ICs typically need high interconnect densities and conductivities, TSV technologies for the integration of MEMS to ICs may result in lower electrical performance but have to fulfill other requirements, e.g. mechanical stability. 3D integration of multiple MEMS/IC stacks was successfully demonstrated for the fabrication of miniaturized sensor systems (e-CUBES), for automotive, health & fitness, and aeronautic applications.
DisSent: Sentence Representation Learning from Explicit Discourse Relations
Sentence vectors represent an appealing approach to meaning: learn an embedding that encompasses the meaning of a sentence in a single vector that can be used for a variety of semantic tasks. Existing models for learning sentence embeddings either require extensive computational resources to train on large corpora, or are trained on costly, manually curated datasets of sentence relations. We observe that humans naturally annotate the relations between their sentences with discourse markers like "but" and "because". These words are deeply linked to the meanings of the sentences they connect. Using this natural signal, we automatically collect a classification dataset from unannotated text. We evaluate our sentence embeddings on a variety of transfer tasks, including discourse-related tasks using the Penn Discourse Treebank. We demonstrate that training a model to predict discourse markers yields high quality sentence embeddings.
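The automatic dataset-collection step can be sketched with a small marker list, a naive sentence splitter, and a single extraction pattern. The regex and marker set below are simplifications of the paper's dependency-based extraction patterns, and the example text is invented.

```python
import re

DISCOURSE_MARKERS = ("but", "because", "although", "so")

def extract_pairs(text):
    """Harvest (S1, marker, S2) training triples from raw text: a sentence of
    the form 'S1, because S2.' yields the pair labelled 'because'. A model
    trained to predict the marker from (S1, S2) learns sentence embeddings
    as a by-product."""
    pairs = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for marker in DISCOURSE_MARKERS:
            m = re.search(rf"^(.*?),\s+{marker}\s+(.*?)[.!?]?$", sentence, re.IGNORECASE)
            if m:
                pairs.append((m.group(1).strip(), marker, m.group(2).strip()))
                break
    return pairs

text = ("She stayed home, because the storm was severe. "
        "He was tired, but he kept working.")
pairs = extract_pairs(text)
```

Run over a large unannotated corpus, this yields a self-supervised marker-classification dataset with no manual labelling.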
The neurophysiology of unmyelinated tactile afferents
CT (C tactile) afferents are a distinct type of unmyelinated, low-threshold mechanoreceptive units existing in the hairy but not glabrous skin of humans and other mammals. Evidence from patients lacking myelinated tactile afferents indicates that signaling in these fibers activates the insular cortex. Since this system is poor at encoding discriminative aspects of touch, but well suited to encoding slow, gentle touch, CT fibers in hairy skin may be part of a system for processing pleasant and socially relevant aspects of touch. CT fiber activation may also play a role in pain inhibition. This review outlines the growing evidence for the unique properties and pathways of CT afferents.
Fibre Optic Sensors for Structural Health Monitoring of Aircraft Composite Structures: Recent Advances and Applications
In-service structural health monitoring of composite aircraft structures plays a key role in the assessment of their performance and integrity. In recent years, Fibre Optic Sensors (FOS) have proved to be a potentially excellent technique for real-time in-situ monitoring of these structures due to their numerous advantages, such as immunity to electromagnetic interference, small size, light weight, durability, and high bandwidth, which allows a great number of sensors to operate in the same system, and the possibility to be integrated within the material. However, more effort is still needed to bring the technology to a fully mature readiness level. In this paper, recent research and applications in structural health monitoring of composite aircraft structures using FOS have been critically reviewed, considering both the multi-point and distributed sensing techniques.
Another Look at Causality: Discovering Scenario-Specific Contingency Relationships with No Supervision
Contingency discourse relations play an important role in natural language understanding. In this paper we propose an unsupervised learning model to automatically identify contingency relationships between scenario-specific events in web news articles (on the Iraq war and on hurricane Katrina). The model generates ranked contingency relationships by identifying appropriate candidate event pairs for each scenario of a particular domain. Scenario-specific events, contributing towards the same objectives in a domain, are likely to be dependent on each other, and thus form good candidates for contingency relationships. In order to evaluate the ranked contingency relationships, we rely on the manipulation theory of causation and a comparison of precision-recall performance curves. We also perform various tests which bring insights into how people perceive causality. For example, our findings show that the larger the distance between two events, the more likely it becomes for the annotators to identify them as non-causal.
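A minimal stand-in for the candidate-ranking step above: score scenario-specific event pairs by pointwise mutual information over within-document co-occurrence. PMI is an assumed proxy here, not necessarily the paper's exact ranking function, and the event lists are invented.

```python
import math
from collections import Counter

def rank_event_pairs(documents, window=3):
    """Rank candidate contingency pairs by PMI over co-occurrence of events
    within `window` positions of each other in a document. Events that keep
    appearing together across documents rank highest."""
    pair_counts, event_counts, total_pairs = Counter(), Counter(), 0
    for events in documents:
        for i, e1 in enumerate(events):
            event_counts[e1] += 1
            for e2 in events[i + 1:i + 1 + window]:
                pair_counts[(e1, e2)] += 1
                total_pairs += 1
    total_events = sum(event_counts.values())
    def pmi(pair):
        p_xy = pair_counts[pair] / total_pairs
        p_x = event_counts[pair[0]] / total_events
        p_y = event_counts[pair[1]] / total_events
        return math.log(p_xy / (p_x * p_y))
    return sorted(pair_counts, key=pmi, reverse=True)

docs = [["bombing", "casualties", "evacuation"],
        ["bombing", "casualties"],
        ["storm", "flooding", "evacuation"],
        ["storm", "flooding"]]
ranking = rank_event_pairs(docs)
```

Pairs that always co-occur (bombing/casualties, storm/flooding) outrank looser associations, giving a ranked list of contingency candidates for annotation.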
Wind speed modeling and energy production simulation with Weibull sampling
This paper describes the maximum likelihood estimation (MLE) method and the method of moments (MOM) for wind speed modeling. Weibull wind speed distribution models are fitted using the two methods and wind data from a tall tower in the Midwestern United States, with seasonal wind speed variations also considered in the modeling. Both methods provide very similar results with comparable accuracy. Monte Carlo simulation with a Weibull sampling technique is then used to obtain the expected wind energy production.
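The fit-then-sample workflow the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses the common empirical method-of-moments shape approximation k ≈ (σ/μ)^(-1.086) and a simplified mean-of-v³ energy proxy (a real study would apply a turbine power curve); the synthetic wind regime (k = 2, c = 8 m/s) is invented for the example.

```python
import math, random

def weibull_mom_fit(speeds):
    """Method-of-moments Weibull fit using the empirical approximation
    k ~ (sigma/mean)^-1.086 (an assumption, not the paper's exact estimator)."""
    n = len(speeds)
    mean = sum(speeds) / n
    var = sum((v - mean) ** 2 for v in speeds) / (n - 1)
    k = (math.sqrt(var) / mean) ** -1.086      # shape parameter
    c = mean / math.gamma(1.0 + 1.0 / k)       # scale parameter (m/s)
    return k, c

def weibull_sample(k, c, n, rng):
    """Inverse-CDF Weibull sampling: v = c * (-ln(1 - u))^(1/k)."""
    return [c * (-math.log(1.0 - rng.random())) ** (1.0 / k) for _ in range(n)]

rng = random.Random(0)
true_k, true_c = 2.0, 8.0                      # synthetic Rayleigh-like wind regime
data = weibull_sample(true_k, true_c, 20000, rng)
k_hat, c_hat = weibull_mom_fit(data)
# Simplified energy proxy: Monte Carlo mean of v^3 over sampled speeds.
mean_v3 = sum(v ** 3 for v in weibull_sample(k_hat, c_hat, 20000, rng)) / 20000
print(round(k_hat, 2), round(c_hat, 2))
```

With enough samples the fitted shape and scale recover the generating parameters closely, which is the essence of the paper's validation of Weibull sampling against measured data.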
Focused ultrasound modulates region-specific brain activity
We demonstrated the in vivo feasibility of using focused ultrasound (FUS) to transiently modulate (through either stimulation or suppression) the function of regional brain tissue in rabbits. FUS was delivered in a train of pulses at low acoustic energy, far below the cavitation threshold, to the animal's somatomotor and visual areas, as guided by anatomical and functional information from magnetic resonance imaging (MRI). The temporary alterations in brain function induced by the sonication were characterized by both electrophysiological recordings and functional brain mapping using functional MRI (fMRI). The modulatory effects were bimodal: brain activity could either be stimulated or selectively suppressed. Histological analysis of the excised brain tissue after sonication showed that FUS did not elicit any tissue damage. Unlike transcranial magnetic stimulation, FUS can be applied to deep structures in the brain with greater spatial precision. Transient modulation of brain function using image-guided and anatomically targeted FUS would enable the investigation of functional connectivity between brain regions and may eventually lead to a better understanding of localized brain functions. It is anticipated that this technology will have an impact on brain research and may offer novel therapeutic interventions in various neurological conditions and psychiatric disorders.
Assessing The Feasibility Of Self-organizing Maps For Data Mining Financial Information
Analyzing financial performance in today's information-rich society can be a daunting task. With the evolution of the Internet, access to massive amounts of financial data, typically in the form of financial statements, is widespread. Managers and stakeholders need a data-mining tool that allows them to quickly and accurately analyze this data. An emerging technique that may be suited for this application is the self-organizing map. The purpose of this study was to evaluate the performance of self-organizing maps for analyzing the financial performance of international pulp and paper companies. Financial data, in the form of seven financial ratios, was collected using the Internet as the primary source of information. A total of 77 companies and six regional averages were included in the study, covering the period 1995-2000. An example analysis was performed, and the results were interpreted based on information contained in the annual reports. The results of the study indicate that self-organizing maps can be feasible tools for the financial analysis of large amounts of financial data.
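The clustering idea behind the study can be illustrated with a minimal self-organizing map. This is a sketch only, not the software used in the study: the grid size, learning schedule, and the two synthetic clusters standing in for "profitable" versus "unprofitable" firms are all invented for the example (real inputs would be the seven standardized financial ratios).

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: rows of `data` are companies, columns are ratios."""
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(grid_h, grid_w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        frac = t / epochs
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        for x in rng.permutation(data):
            # best-matching unit: node whose weight vector is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # neighbourhood-weighted update pulls nearby nodes toward x
            h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

rng = np.random.default_rng(1)
a = rng.normal([2, 2], 0.2, size=(30, 2))     # one synthetic cluster of firms
b = rng.normal([-2, -2], 0.2, size=(30, 2))   # a second, distinct cluster
w = train_som(np.vstack([a, b]))
bmu_a = np.unravel_index(np.argmin(np.linalg.norm(w - a.mean(0), axis=-1)), (4, 4))
bmu_b = np.unravel_index(np.argmin(np.linalg.norm(w - b.mean(0), axis=-1)), (4, 4))
print(bmu_a, bmu_b)
```

After training, companies with similar ratio profiles map to nearby map units, which is what makes the SOM usable as a visual financial-benchmarking tool.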
Conservative and operative treatment in cervical burst fractures
The aim of this study is to compare the results of non-operative and anterior operative treatment of cervical burst and flexion teardrop fractures. Sixty-nine consecutive patients treated from 1980 to 1995 were reviewed retrospectively. Thirty-four of them had been treated with skull traction or halo-vest and 35 with anterior decompression, bone grafting, and fixation with an anterior Caspar plate. Neurological function on admission and at the end of follow-up was assessed using Frankel's classification. Kyphosis and spinal canal encroachment by retropulsed fragments were measured radiographically. Operatively treated patients more often improved by at least one Frankel grade (P = 0.027) and presented less narrowing of the spinal canal (P = 0.0006) and less kyphotic deformity (P = 0.00003) at the end of follow-up. In comparison with the conservative methods, the operative Caspar technique provided superior decompression and fixation and promoted the healing of cord injuries caused by burst and flexion teardrop fractures.
Design Guidelines for Mobile Augmented Reality: User Experience
Mobile augmented reality (MAR) enabled devices have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. Location and proximity technologies combined with detailed mapping allow effective navigation. Visual analysis software and growing image databases enable object recognition. Advanced graphics capabilities bring sophisticated presentation of the user interface. Together, these capabilities allow for real-time melding of the physical and virtual worlds and can be used for information overlay of the user's environment for various purposes such as entertainment, tourist assistance, navigation assistance, and education [1]. In designing MAR applications it is very important to understand the context in which the information is to be presented. Past research on information presentation on small-form-factor computing has highlighted the importance of presenting the right information in the right way to effectively engage the user [2-4]. The screen space available on a small form factor is limited, and having augmented information presented as an overlay poses very interesting challenges. MAR usages involve devices that are able to perceive the context of the user based on location and other sensor-based information. In their paper on "Context-Aware Pervasive Systems: Architectures for a New Breed of Applications", Loke [5],
Computation for Metaphors, Analogy, and Agents
As an introduction to the papers in this book we review the notion of metaphor in language, of metaphor as conceptual, and of metaphor as primary to understanding. Yet the view of metaphor here is more general. We propose a constructive view of metaphor as mapping or synthesis of meaning between domains, which need not be conceptual ones. These considerations have implications for artificial intelligence (AI), human-computer interaction (HCI), algebraic structure-preservation, constructive biology, and agent design. In this larger setting for metaphor, the contributions of the selected papers are overviewed and key aspects of computation for metaphors, analogy and agents are highlighted. 1 Metaphor beyond Language and Concepts. Metaphor and analogy had traditionally been considered the strict domain of rhetoric, poetics and linguistics. Their study goes back in long scholarly histories at least to the ancient Greece of Aristotle and the India of Panini. More recently it has been realized that human metaphor in language is primarily conceptual, and moreover that metaphor transcends language, going much deeper into the roots of human concepts, epistemologies, and cultures. Seen as a major component in human thought, metaphor has come to be understood and studied as belonging also to the realm of the cognitive sciences. Lakoff and Johnson's and Ortony's landmark volumes [22,36] cast metaphor in cognitive terms (for humans with their particular type of embodiment) and shed much light on the constructive nature of metaphorical understanding and creation of conceptual worlds. Our thesis is that these ideas on metaphor have a power extending beyond the human realm, not only beyond language and into human cognition, but to the realm of animals, as well as robots and other constructed agents. In building robots and agents, we are engaging in a kind of constructive biology, working to realize the mechanism-as-creature metaphor, which has guided and inspired much work on robots and agents.
Such agents may have to deal with aspects of time, space, mapping, history and adaptation to their respective Umwelt ("world around"). (C. Nehaniv (Ed.): Computation for Metaphors, Analogy, and Agents, LNCS 1562, pp. 1-11, Springer-Verlag Berlin Heidelberg, 1999.) By looking at the linguistic and cognitive understanding of metaphor and analogy, and at formal and computational instantiations of our understanding of metaphor and analogy, the constructors of agents, robots and creatures may have much to gain. Understanding through building is a powerful way to validate theories and uncover explanatory mechanisms. Moreover, building can open one's eyes to the light of new understanding in both theory and practice. An intriguing metaphorical blend is the notion of a Robot. The concept of a robot is understood as a cognitive blend of the concepts "machine" and "human" (or "animal"). Attempting to build such a mechanism, one is led to the question of 'transferring' (realizing analogues of) human or animal-like abilities in a new medium. Moreover, if this new mechanism should act like an animal, how will it need to interact with, adapt to, and perhaps interpret the world around it? How is this agent to 'compute' in this way? And how is it to be engineered in order to meet either or both of these mutually reflective goals? Scientific advances (and delays) have often rested on metaphors and analogies, and paradigmatic shifts may be largely based on them [20]. But computation employing conceptual metaphors has mostly been carried out via human thought.
In the realms of human-computer interaction (HCI), artificial intelligence (AI), artificial life, agent technology, constructive biology, cognitive science, linguistics, robotics, and computer science, we may ask for means to employ the powerful tool of metaphor for the synthesis and analysis of systems for which meaning makes sense, for which a correspondence exists between inside and outside, among behaviors, embodiments and environments. Richards [37] formulated a metaphor as a mapping from a topic source domain ('tenor') to a target domain ('vehicle'), by means of which something is asserted (or understood) about the topic. [Footnote 1: The machine itself as tool and metaphor has had a long and creative history [11]. Indeed the conceptualization of what we consider mechanistic explanations in physics, biology and engineering has changed very much in the course of the history of ideas. For instance, Newton's physics was criticized as being non-mechanistic, since it required action at a distance without interconnecting parts (Toulmin [43]). Modern mechanistic scientific explanations were not necessarily mechanistic in the older sense of the term. 'Mechanistic' represents a refined, blended concept that has evolved over many centuries.] [Footnote 2: A related blend is the notion of 'cyborg', a 'cybernetic organism', which is more proximal for us than 'robot' in that it entails a physical blending of biological life, including ourselves, with the machine. Indeed, our use of tools such as eyeglasses, hammers, numerical notations, contact lenses, and other prosthetics to augment our bodies and minds has already made us cyborgs. This can be taken as an empowering metaphor when one takes control of and responsibility for our own cybernetic augmentation (Haraway [12], Nehaniv [28]).] [Footnote 3: See the discussion paper "The Second Person — Meaning and Metaphors" [30] at the end of this book for an outline of a theory of meaning in a setting extending Shannon-Weaver information theory to situated agents and observers and addressing the origin, evolution and maintenance of interaction channels for perception and action.]
Cognitive theories realized that metaphor is not an exceptional decorative occurrence in language, but is a main mechanism by which humans understand abstract concepts and carry out abstract reasoning (e.g. Lakoff and Johnson [22], Lakoff [21], Johnson [18]). On this view, metaphors are structure-preserving mappings (partial homomorphisms) between conceptual domains, rather than linguistic constructions. Common metaphorical schemas in our cultures are grounded in embodied perception. Correspondences in experience (rather than just abstract similarity) structure our cognition. Common conceptual root metaphors in English are studied by Lakoff and Johnson [22], as also extended with detailed attention to root analogies in the English lexicon by Goatly [9]. An important extension for conceptual metaphors is the framework of Mark Turner and Gilles Fauconnier (see Turner's paper in this volume), who argue that metaphors and analogies are not sufficiently accounted for by mappings between pre-existing static domains, but are actually better understood as constructs in forged conceptual spaces, which are blends of conceptual domains, over some common space, with projections from the blend space back to the constituent factors (e.g. a 'tenor' and a 'vehicle'), affording recruitment of features from the blend space in which much inference and new structure may be generated. We shall not restrict ourselves to concepts or language.
A more general, not necessarily symbolic view is also possible if one conceives of metaphor and analogy in the study of 'meaning transfer' between domains, or, in light of the theory of cognitive blending, as the realm of 'meaning synthesis': putting things together that already share something to create a new domain guiding thought, perception or action. Other types of meaning can be seen for instance in Dawkins' notion of memes as replicators in minds and cultures [7], transmitted by imitation and learning, propagating, often in difficult circumstances, via motion through behavioral or linguistic media. Still another type of meaning is comprised by agent behavior in response to sensory stimuli to effect changes in its environment. 1.1 Human-Computer Interaction. The idea of metaphor has been applied in Human-Computer Interaction (HCI), Cognitive Ergonomics, and Software Engineering. For example, building user interfaces based on metaphors is now standard engineering practice. Examples are windows, widgets, menus, desktops, synthetic worlds (e.g. nanomolecular manipulation via a virtual reality (VR) and force-feedback haptic interface), and personal assistant agents. The search for improved interaction metaphors is an active research and development area (e.g. [41]). Here we are in a realm of metaphor in human-tool interaction, which is clearly primarily conceptual (and at times merely sensorimotorial) rather than linguistic. Language games have become interaction games, with the meaning of artifacts defined by the actions they afford. [Footnote 4: The understanding of readers with a knowledge of basic category theory may be enhanced by the suggestion that Fauconnier-Turner blends may be considered as category-theoretic pushouts or, more generally, as colimits of conceptual domains.] A particular case is the area of 'intelligent software agents'.
This has grown into a large arena of research and application, concerned with realizing the software-as-agent metaphor in interfaces, entertainment and synthetic worlds, as well as for workload and information overload reduction (cf. [38]). As with other types of semantic change in human language and cultures, what may at first have been marked as strange may become common: these metaphors become definitional identities; rather than conceptual mappings, they become realities. Some pieces of software really are agents. 1.2 Algebraic Engineering: Preserving Structure. Can the creativity of human metaphor be understood in formal terms? How do humans understand each other's metaphors and analogies? What if the humans liv
RAPID SITUATION ASSESSMENTS OF ALCOHOL AND SUBSTANCE USE AMONG COMMERCIAL VEHICLE DRIVERS IN NIGERIA.
OBJECTIVES: To describe the current situation with respect to substance use and related harms among commercial vehicle drivers, and to identify a range of interventions that could feasibly be implemented to minimise harms related to substance use. STUDY DESIGN: Observational and group interviews. SETTING: Four different motor parks in Ibadan, Nigeria. SUBJECTS: Data were obtained from a sample of commercial vehicle drivers, community members, and members of the law enforcement agencies. RESULTS: Widespread use of psychoactive substances was reported. A new type of local alcoholic beverage, generally called 'sepe', appeared to have replaced older ones such as palm wine. All substances of abuse except cocaine and narcotics were freely available and openly displayed at motor parks. Provision and enforcement of laws prohibiting sale and use around motor parks or while driving were poor. CONCLUSIONS: This study shows the feasibility and value of conducting rapid assessments among commercial vehicle drivers in Nigeria. One outcome of this study is the development of a guide on rapid assessment of alcohol and other substance use among commercial vehicle drivers and a measure of brief intervention for them. Presentation of these findings should contribute to increased awareness and an improved response from the government.
DECISION BOUNDARY ANALYSIS OF ADVERSARIAL EXAMPLES
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are carefully crafted instances aiming to cause prediction errors for DNNs. Recent research on adversarial examples has examined local neighborhoods in the input space of DNN models. However, previous work has restricted the regions considered, focusing either on low-dimensional subspaces or on small balls around inputs. In this paper, we argue that information from larger neighborhoods, such as from more directions and from greater distances, will better characterize the relationship between adversarial examples and the DNN models. First, we introduce an attack, OPTMARGIN, which generates adversarial examples robust to small perturbations. These examples successfully evade a defense that only considers a small ball around an input instance. Second, we analyze a larger neighborhood around input instances by looking at properties of surrounding decision boundaries, namely the distances to the boundaries and the adjacent classes. We find that the boundaries around these adversarial examples do not resemble the boundaries around benign examples. Finally, we show that, under scrutiny of the surrounding decision boundaries, our OPTMARGIN examples do not convincingly mimic benign examples. Although our experiments are limited to a few specific attacks, we hope these findings will motivate new, more evasive attacks and ultimately, effective defenses.
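The boundary-distance analysis can be sketched independently of any particular attack. The following is an illustrative toy only, not the paper's code or the OPTMARGIN attack: it walks outward from a point along random directions on a hand-made linear classifier and records where the predicted class first changes, showing why a point sitting unusually close to boundaries in many directions looks suspicious.

```python
import numpy as np

def boundary_distances(predict, x, n_dirs=50, step=0.05, max_steps=200, seed=0):
    """For each random unit direction, return the first distance from x
    at which predict() changes class (inf if no change within range)."""
    rng = np.random.default_rng(seed)
    base = predict(x)
    dists = []
    for _ in range(n_dirs):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        dist = np.inf
        for k in range(1, max_steps + 1):
            if predict(x + k * step * d) != base:
                dist = k * step
                break
        dists.append(dist)
    return np.array(dists)

# Toy model: linear decision boundary w.x = 0 in 2-D (an assumption here).
w = np.array([1.0, 1.0])
predict = lambda x: int(w @ x > 0)
x_benign = np.array([2.0, 2.0])   # far from the boundary
x_adv = np.array([0.1, 0.1])      # barely on the positive side
d_benign = boundary_distances(predict, x_benign)
d_adv = boundary_distances(predict, x_adv)
print(d_adv.min(), d_benign.min())
```

The point near the boundary hits a class change after a tiny step in some direction, while the well-separated point needs a much longer walk; the paper applies the same kind of measurement to DNN decision surfaces.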
Text feature extraction based on deep learning: a review
Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional feature extraction methods require handcrafted features. Hand-designing an effective feature is a lengthy process, whereas deep learning can acquire new effective feature representations from training data and adapt to new applications. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which depend mainly on the prior knowledge of designers and can hardly exploit the advantage of big data. Deep learning can automatically learn feature representations from big data, with models containing millions of parameters. This paper first outlines the common methods used in text feature extraction, then surveys the deep learning methods frequently used in text feature extraction and their applications, and finally discusses prospects for the application of deep learning in feature extraction.
Decreases in psychological well-being among American adolescents after 2012 and links to screen time during the rise of smartphone technology.
In nationally representative yearly surveys of United States 8th, 10th, and 12th graders 1991-2016 (N = 1.1 million), psychological well-being (measured by self-esteem, life satisfaction, and happiness) suddenly decreased after 2012. Adolescents who spent more time on electronic communication and screens (e.g., social media, the Internet, texting, gaming) and less time on nonscreen activities (e.g., in-person social interaction, sports/exercise, homework, attending religious services) had lower psychological well-being. Adolescents spending a small amount of time on electronic communication were the happiest. Psychological well-being was lower in years when adolescents spent more time on screens and higher in years when they spent more time on nonscreen activities, with changes in activities generally preceding declines in well-being. Cyclical economic indicators such as unemployment were not significantly correlated with well-being, suggesting that the Great Recession was not the cause of the decrease in psychological well-being, which may instead be at least partially due to the rapid adoption of smartphones and the subsequent shift in adolescents' time use.
Inside-Outside and Forward-Backward Algorithms Are Just Backprop (tutorial paper)
A probabilistic or weighted grammar implies a posterior probability distribution over possible parses of a given input sentence. One often needs to extract information from this distribution, by computing the expected counts (in the unknown parse) of various grammar rules, constituents, transitions, or states. This requires an algorithm such as inside-outside or forward-backward that is tailored to the grammar formalism. Conveniently, each such algorithm can be obtained by automatically differentiating an “inside” algorithm that merely computes the log-probability of the evidence (the sentence). This mechanical procedure produces correct and efficient code. As for any other instance of back-propagation, it can be carried out manually or by software. This pedagogical paper carefully spells out the construction and relates it to traditional and nontraditional views of these algorithms.
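The paper's central identity, that differentiating the log-probability computed by an "inside"/forward pass yields expected counts, can be checked numerically on a tiny example. The 2-state weighted chain below and all its weights are invented for illustration; the check compares a forward-backward expected transition count against a central finite-difference gradient of log Z with respect to that transition's log-weight.

```python
import math

# Tiny HMM-like lattice: 2 states, 3 observations, unnormalized weights.
T = [[0.6, 0.4], [0.3, 0.7]]   # transition weights
E = [[0.5, 0.1], [0.2, 0.8]]   # emission weights for observation symbols 0/1
obs = [0, 1, 1]
init = [0.5, 0.5]

def log_Z(T):
    """Forward ('inside') pass: log of the total weight of all state paths."""
    a = [init[s] * E[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        a = [sum(a[p] * T[p][s] for p in range(2)) * E[s][o] for s in range(2)]
    return math.log(sum(a))

def expected_count(i, j):
    """Forward-backward expected count of transition i -> j."""
    alphas = [[init[s] * E[s][obs[0]] for s in range(2)]]
    for o in obs[1:]:
        prev = alphas[-1]
        alphas.append([sum(prev[p] * T[p][s] for p in range(2)) * E[s][o]
                       for s in range(2)])
    betas = [[1.0, 1.0]]
    for o in reversed(obs[1:]):
        nxt = betas[0]
        betas.insert(0, [sum(T[s][q] * E[q][o] * nxt[q] for q in range(2))
                         for s in range(2)])
    Z = sum(alphas[-1])
    return sum(alphas[t][i] * T[i][j] * E[j][obs[t + 1]] * betas[t + 1][j]
               for t in range(len(obs) - 1)) / Z

# d log Z / d log T[0][1] should equal the expected count of that transition.
eps = 1e-6
T_hi = [row[:] for row in T]; T_hi[0][1] *= math.exp(eps)
T_lo = [row[:] for row in T]; T_lo[0][1] *= math.exp(-eps)
grad = (log_Z(T_hi) - log_Z(T_lo)) / (2 * eps)
print(abs(grad - expected_count(0, 1)))
```

In an autodiff framework the finite-difference step is unnecessary: back-propagating through `log_Z` alone produces all expected counts at once, which is exactly the paper's point.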
The spatio-temporal generalized additive model for criminal incidents
Law enforcement agencies need to model spatio-temporal patterns of criminal incidents. With well-developed models, they can study the causality of crimes, predict future criminal incidents, and use the results to help prevent crimes. In this paper, we describe our newly developed spatio-temporal generalized additive model (S-T GAM) for discovering underlying factors related to crimes and predicting future incidents. The model can fully utilize many different types of data, such as spatial, temporal, geographic, and demographic data, to make predictions. We efficiently estimate the parameters for S-T GAM using iteratively re-weighted least squares and maximum likelihood, and the resulting estimates provide for model interpretability. We evaluate S-T GAM on actual criminal incident data from Charlottesville, Virginia; the results show that S-T GAM outperforms previous spatial prediction models in predicting future criminal incidents.
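The fitting machinery named in the abstract, iteratively re-weighted least squares for maximum likelihood, can be sketched for the Poisson log-linear core of a GAM. This is a minimal illustration, not the S-T GAM itself: smooth spatio-temporal terms would enter as additional basis columns of the design matrix, and the synthetic "incident counts" are generated from an invented model just to verify parameter recovery.

```python
import numpy as np

def irls_poisson(X, y, iters=25):
    """IRLS for a Poisson log-linear model (the GLM backbone of a GAM)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)          # current mean incident counts
        W = mu                         # Poisson working weights
        z = X @ beta + (y - mu) / mu   # working response
        # weighted least-squares step: solve (X' W X) beta = X' W z
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + 1 covariate
true_beta = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ true_beta))                     # synthetic counts
beta_hat = irls_poisson(X, y)
print(np.round(beta_hat, 2))
```

Each IRLS pass is just a weighted least-squares solve, which is why the fit stays efficient even with many basis columns.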
Production and Breeding of Chilli Peppers (Capsicum spp.)
The cultivation of pepper has great importance in all regions of Brazil, due to its profitability, especially when the producer and processing industry add value to the product, and due to its social importance, because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require an adequate level of nitrogen, which favors plant and fruit growth. Most of the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolongs the harvest of fruits over longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices, including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general, for most cultivars the first harvest starts 90 days after sowing and can be prolonged for a couple of months depending on the plant physiological condition.
A Multiresolution 3D Morphable Face Model and Fitting Framework
3D Morphable Face Models are a powerful tool in computer vision. They consist of a PCA model of face shape and colour information and allow the reconstruction of a 3D face from a single 2D image. 3D Morphable Face Models are used for 3D head pose estimation, face analysis, face recognition, and, more recently, facial landmark detection and tracking. However, they are not as widely used as 2D methods because the process of building and using a 3D model is much more involved. In this paper, we present the Surrey Face Model, a multi-resolution 3D Morphable Model that we make available to the public for non-commercial purposes. The model contains different mesh resolution levels and landmark point annotations as well as metadata for texture remapping. Accompanying the model is a lightweight open-source C++ library designed with simplicity and ease of integration as its foremost goals. In addition to basic functionality, it contains pose estimation and face frontalisation algorithms. With the tools presented in this paper, we aim to close two gaps. First, by offering different model resolution levels and fast fitting functionality, we enable the use of a 3D Morphable Model in time-critical applications like tracking. Second, the software library makes it easy for the community to adopt the 3D Morphable Face Model in their research, and it offers a public place for collaboration.
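The PCA idea at the heart of a morphable model can be shown in a few lines. This sketch only illustrates the linear shape model; the Surrey Face Model's actual file format, texture handling, and fitting algorithms are richer. The synthetic "shapes" (stacked vertex coordinates drawn from an invented 2-mode generator) stand in for registered 3-D face scans.

```python
import numpy as np

rng = np.random.default_rng(0)
n_verts = 50
mean_shape = rng.normal(size=3 * n_verts)            # flattened x,y,z per vertex
basis_true = rng.normal(size=(3 * n_verts, 2))       # 2 generating modes
coeffs = rng.normal(size=(200, 2))
shapes = mean_shape + coeffs @ basis_true.T          # 200 training "faces"

# Build the model: mean + principal components of the training shapes.
mu = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
components = Vt[:2]                                  # retained shape modes

def synthesize(alpha):
    """Generate a new shape from model coefficients alpha."""
    return mu + alpha @ components

def project(shape):
    """Recover coefficients for a shape; in the noise-free linear case,
    fitting reduces to projection onto the basis."""
    return (shape - mu) @ components.T

err = np.linalg.norm(synthesize(project(shapes[0])) - shapes[0])
print(err)
```

Because the synthetic data is exactly rank-2 around its mean, projecting and re-synthesizing reproduces a training shape to floating-point precision; with real scans, the retained modes capture most but not all of the variance, and fitting from a 2D image adds camera and landmark terms.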
Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment
A thorough review has been performed on interpolation methods to fill gaps in time series, efficiency criteria, and uncertainty quantification. On the one hand, numerous methods are available: interpolation, regression, autoregressive, and machine learning methods, among others. On the other hand, many methods and criteria exist to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when they are estimated according to standard methods, the prediction uncertainty is not taken into account; a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
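A simple gap-filling routine with an attached uncertainty estimate illustrates the kind of prediction-uncertainty reporting the review argues is usually missing. This is a sketch under stated assumptions: linear interpolation as the filling method, and leave-one-out error on the observed points as a crude stand-in for prediction uncertainty (the review discusses more principled estimates).

```python
import numpy as np

def fill_gaps_linear(t, y):
    """Fill NaN gaps by linear interpolation; return a crude uncertainty:
    the RMSE of leave-one-out interpolation over interior observed points."""
    known = ~np.isnan(y)
    filled = np.interp(t, t[known], y[known])
    tk, yk = t[known], y[known]
    errs = []
    for i in range(1, len(tk) - 1):
        # predict each interior known point from the remaining points
        pred = np.interp(tk[i], np.delete(tk, i), np.delete(yk, i))
        errs.append(pred - yk[i])
    rmse = float(np.sqrt(np.mean(np.square(errs))))
    return filled, rmse

t = np.arange(10.0)
y = np.sin(t / 2)
y[[3, 4, 7]] = np.nan            # artificial gaps in the series
filled, rmse = fill_gaps_linear(t, y)
print(np.round(filled[3], 3), round(rmse, 3))
```

Reporting the filled series together with an error scale, rather than the filled values alone, is the practice the review advocates; swapping in cross-validated errors for other interpolators allows the methods to be compared on the same footing.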
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
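The compression technique rests on training the student against temperature-softened teacher outputs. Below is a minimal sketch of that loss; the toy logits and the particular values of T and alpha are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy between temperature-softened teacher and student
    distributions, mixed with ordinary hard-label cross-entropy."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    # T^2 rescaling keeps soft-target gradient magnitudes comparable
    soft = -np.sum(p_t * np.log(p_s), axis=-1).mean() * T * T
    probs = softmax(student_logits)
    hard = -np.log(probs[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

teacher = np.array([[5.0, 1.0, -2.0]])
labels = np.array([0])
aligned = distillation_loss(np.array([[5.0, 1.0, -2.0]]), teacher, labels)
shifted = distillation_loss(np.array([[-2.0, 5.0, 1.0]]), teacher, labels)
print(aligned < shifted)
```

Raising the temperature spreads probability onto the wrong-but-plausible classes, so the student is trained on the teacher's full similarity structure rather than just its top prediction; the loss is smaller when the student's logits match the teacher's.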
Hardening soft information sources
" $#"% & &' ()% & % &% ! *+ (, . " & 0/213 . '4 5 6 7 98: 5 & ; < !%; & = ' >"& ! 5 & % &%; ! ?*@ (, . " & 06A % & ! &% & *; 5 " ! 7B& (, C;D0 & ') 7 & 3 ? E *F*G%;8 " & H6 * 5 JI$% ; #"% "6K & & '"-,% *L M ? N *; & O ! / PK>G . 8; % *; ! " Q ; 5 ! 8 ; R S*; 5 " H !T" ? *U(R ! . ! ! & O 78 8: ! F V*; 5 " F & % * ' . ! W ? ! 5 % BX !* C$*; 5 " /$YN ! ! 9 A(, . ' . G* S W (, * " 5 N 5 H E '7T5 ! & Z ( . [% IG 7 !*7* " 5 "/ \$ 7 & ] & *; ! 7 & U !* 8 ! . 6^ ,/ "/ 6: & A8 ! . (Q (, & . 5 E I_ '`% *; '" !* * " 5 7 T5 W8 ! & !%; W (, F*; 5 " "/$abIG ?'`(, ? " &% ! W ([ % c 8 8; ! 5 " ) & " S !*; ! ; W ) Qd . ' % ! ) ( ?T" *; A(e < T" f !*L(, 5 W ! IG 4 5 % /g\ (, . % " !* Z " 8 & . h? " & ]8; ! ; . *Z T5 A ) & T" 'W ) & . F 5 & . (, EO *i W A G ? H 8 & . % . / Categories and Subject Descriptors YF/ j / .gk l?mon p:q r s t u3p:m<v^w:x t y r x z D {W ? % General Terms * " 7 ! " & 1. INTRODUCTION ! " $#"% & &' ()% & % &% ! *+ (, . " & 0/213 . '4 5 6 7 98: 5 & ; < !%; & = ' >"& ! 5 H & % &%; ! ?*U (, . " & 06G ;% H & Q ! &% & )* " 5 Q ! B| (, C D^ & '} 7*G%;8 5 & 3 W *N 5 JI)% #"% ? & & '"% * ;M~ *; ! & O ! / a} 7  >G . 8 () $ (e A* " 5 "6S & * Z & c ! " W 5 ! 8; L*; 5 " !T" ? *@(R ! . ! € ! & O 8 8: ! 5 %; ! ] "* *i ! D‚\ƒ h 2„0 &-&…S ? ! ^6 PK " 6cj5†;‡?† Y} !'WˆG 6 ‰H & %; ! ^60‰:aЇŒ‹"i‡?Ž;/  % ! Z "* *i ! D]N 8 / (  . 8;% $ˆG "6}E T /f ( \$ " & " 06‘ˆi 5 & A\ aF/ KDD ’00 Boston, Massachusetts USA ’”“X•?– — ˜ ™)šG›G˜,›Gœ ›"•? ž Ÿ ›G ˜=¡ –G¢?£ “X¤S›G¢ ˜:¥: ¦ §A›G ̈G™ ©U“ aH¢ «R˜X«R¬ ›"¦:œ!!¡ ›G­ «3–G¢S«3 ̈W• ›G˜,«=•R® ›Gœ «=¦ « ˜, ̄;™&°5± › 2ƒ¦ «R›G˜X«R–G ̈:£;“X¤[›"¢!˜o¥: ¦ §F›G ̈i™ ©U“ aH–G¢! 
̈  ¦3¦ 3H ̈ «3­  ¢ •?«3˜, ̄;™ °5± ›G ˜=¡ –G¢?£ “X¤S± ¥: ¦ §A›G ̈G™ ©F“ a‘¢ «3˜,«3¬!›"¦:œ!!¡ ›G­ «R–G¢o—?–G¢E•?›G˜X« •3® ›Gœ «=¦ «R˜3 ̄;™ °5± ›G ˜=¡ –G¢?£ “X¤S± ¥: ¦ §A›G ̈G™ ©F“X¤ ́:’7a^μN¤V¶^· ̧˜ ¡ !–G¢& §410¢&–G­ «3 ̈Go ™ °5± ’”“3¡ ›G¢|š ™)šG›G˜,›Gœ ›"•? » Ÿ ›G ˜=¡ –G¢?£ œ ›G¢!˜ •? ¦ §A›G ̈;©‘¬ ¢ «3˜,«3¬!›"¦ °5± › 2ƒ¦ «R›G˜X«R–G ̈:£Jœ!›"¢!˜ •? ¦ §A›G ̈ ©H¬!–G¢ ̈  ¦3¦ °5± ›G ˜=¡ –G¢?£ œ ›G¢!˜ •? ¦ §A›G ̈;©‘œ ¦ ›G¬ 1⁄4Gœ!– 1⁄2 °"± 3⁄4 u3¿:ÀHq yLÁ; ÃÅÄix?p;n t ÆÈÇHu,ÇHÉ,uep ¿:q s ÊHËHu=Ì ÍHs t s Ç^s;x?yLs m^ÍftŒË^y ÀHm^ÍHy q É3w^u,m^¿<Ä Ë^s qŒÍHÆ]ÍHs;t s '$ 'G . F &%  " F…E ? ! :13 * > k ‡"‡ z  ! k j z /Z * " " ? E ? (, . " & $ : % S & % *; S (‘ ; 8 8: ! /ÎYN T5 6: N ) (, $ !*$ W* . 7 ( | W & H ! (, K } & S . Q8 8: 6G H (0 & . SÏ, "/ ;/)B|\ƒ .  CA *B&\+/_PE/  CGÐQ ! (e ) & N . E8Ñ ! 0/K [ . IG Q *i Òc !%; S )8: (, . " & N 8: ! 5 & N &% J Z " % &9 & E G% . : } (K 5 & N F A8 ! & % )8 8: ! N 8: ! ^/ YN ! ! 7 )(, . h W c (, )*; 5 " c " ) 7 ] ; <*G & * & O ! ! . 'Ó ! !(, S F & ) . ) & |'"/K\ N(, . h? U !*G] " 7 & 98 ! . (V*; ? . < & Ï . 5 U I_ ' Ð9 ! (, ! ! ? : & & N (, Q * & O ! &d & : " N 6 *; ! . ; ; 48 ! Î () (, ) *; & O ! c ! (, Î & W . 9 ! ? -| * M ? /K [ E ! ? &%; (H & U !*$Ï, T5 & RÐS*; 5 " "/ aE  >G . 8 W 7 & < LÔ^ %; ! ‡5/ ÕN 5 & W & " . ! Z % & 7 : Î*i ! (R ! . & A !*]*; 5 " Z»Ö & ] & (e E* " 5 cžQ/S13 W8 ! & % 6×»Ø . 8 A & " } Ù % & ) ( & E8 8: ÓB&Ú „0a  Û ÚSÜSÝL & ? ! . 8; ! T" "C7 ÒÎ " * &  ,6: ! ! ) " }žƒ* G 5 / aE & % W N% W ; " ! "8; R *; 5 " c % ; < >". 8; "60 F & c . 8 " & h c & " N (, N* " c U . ' *G Þ: ! $ ?>G / ̧Ô Z ? "6F (, Z*; 5 . W : ! ! " ?*ƒ '4 % . " & ? ' >G & ! " & (, " ? Ó(R ! . 5 & O *È "* k  z 6^ ) S ! % 8ƒ8Ñ " & " k ‹ z /7aß (, )* " 5 . A (3 ! .”. ! < & ? Z () ?T" ! ) ! ! " " ! % 6 % . % & '"-& ! ! " *ÈBX !* C9* " 5 / Ô . '"6_ Q " &% . (, o(, 5 ^ ! Q T" Ó N & Î . . 5 &%; ! c ( 2 IG ' | 9 * & O ! c ! 7 W Ñ 7 -, ! (e ! / Ï,Ô S >G . 8; "6 & } & 5 cB&\+/ PS/  ! C7 *<B|\ƒ R .  C ! c -,8 8 % & 'ƒ ? -, ! (, ! ! 6H ! ? 
" F & 7 & " B|\ƒ .  CA *]B&Ú ! ˆG . CF ! " / ÐA\ & c(, . h } & 8 ! . ( O *i 7 & [ Ñ AB, !*;C . G* : (: ) ( (, (, " ? E8 ! 3 & " R'"/F\$ A* O 7 78 )8 ! &' *G & % & 4 T5 N8: 5 & ; 7 !* * " 5 ) * F8; )8; ! ;; |'+*G & % & " 2 T5 W . $ -X ! !(, ! / E T" È & *G & % & : 6K & F " &I$ ( & % & & . " S IG ' !* * " 5 Q : ? . H S ! -&* O ?*7 8 & . h 5 & c8; ! ; . /K [ 8 & . h " & f8; ! ; . T" T" . . h Ñ L & &% . (F & G% . : Î (} &%;8 )  & A !*]*; 5 " "6H8 % 7 ? " ) " G ! 5 * & & ) S (K -, ! (, ! ) 5 &% . 8 & N ! #"% ! *:/ 2. A FORMAL VIEW OF HARDENING \$ ) K " &% . F 7 S (FB, ! (, ! C 6: " (K ; N . & 9 ! !(, 9 c . N ! -| * M /Saf &% 8 A (Q ! (e ! ? U F UÏ ÐN ! ! " ? A T" ! 7 . O >G * ? N ( ! " & 7 * 6 6 ! F ! (, ! /ΐE IG ? T" ! & N !*<* " 5 "6 c S g*G & 7 ! (e ! ? N ! (, K ) & N . ! -| *c & &'"/K\$ N* O } 7• – — ˜ šG›G˜,›Gœ!›"• F 9 : c ) (} &%;8 7 (S & 7(, . UÏ Ð 6 ! @ ! 5 S T5 } . NO >G * S (^ ! 5 & ) * 6 6 W ! N ! (, ! ! ? / 13 < & Ó " ! 8 3 * . L & c . c & . A* " *G Þ: ! ! 9 ! & & c [ <% * L*G Þ: ! c >G 6 "/ ;/ 6H & & 4B|{W J " ! " !*; C . )*; " F c*i ÞÑ ! N8: ! c ? E ) !' & Ó8 8Ñ 7 ! 8 ; ? . G* /c H c !% . T" Î & ; )8; ! ; . c < & % A ! (, ! ! ? U '$ ;? " " & 7 N & F [ & Z . *G " & $ (: & S ?>G ×(3 ! . J & " c & < " c >G & ! 5 * N(, c "6} & % & . ƒB|{W J " ! " !*; C ?>G & ! " *$(3 ! . 98Ñ " 8 A8 8: % *9 : N ! !8; ! ! *W 5 E & E ! !(, ! B&{W " " !*; C 6 *c %; *c : S*i & S(3 ! . & [ ! !(, ! WB|{W J " ! ! " !* C#" ?>G & ! " *W(R ! . A S S !'%$ / H F ! &%; Ù 7 & ) >G . 8 F (KÔ^ % ! c‡"6 &% 8 8: " F U " 8 ! ? *W | )8: " 8 E8 8Ñ ! ‡N * i/K N(, 7(, 5 T" ) : Ù >G & ! " ? *W(3 ! . & ) & & )8 5 " F ( ‡"D ›G ˜ ¡ –G¢ £ “X¤S›G¢!˜o¥: ¦ §F›G ̈G™& ' !©Î“ aH¢!«3˜,«3¬ ›"¦0œ!!¡ ›G­ «R–G¢ «3 ̈W•?›G˜X« •3® ›Gœ «=¦ «R˜3 ̄;™& ( 3°5± › 2<¦R£;“X¤[›G¢ ˜^¥: ¦ §F›G ̈G™& ' !©Î“ aH–G¢! ̈  ¦3¦S3H ̈ «3­  ¢ • «3˜, ̄ ™) ( ,°"± * &%;8 8: 5 & c(, (, 5 A T" 9 Ñ L ?>G & ! " * (3 ! . & N ; 5 ! "8 ' ? 
& $ ( iD ›G ˜ ¡ –G¢ £ “X¤S± ¥: ¦ §F›G ̈i™& * ©Î“ aH¢ «R˜X«R¬ ›"¦0œ! ¡ ›G­ «3–G¢ — –G¢N• ›G˜,«=•R® ›Gœ «=¦ «3˜, ̄;™) * °5± ›G ˜ ¡ –G¢ £ “X¤S± ¥: ¦ §F›G ̈i™& * ©Î“X¤ ́:’7a^μ}¤)¶0·AŸ0’[1G1:¦ ̄"«R ̈GoA˜ ¡ !–G¢& § 10¢&–G­ «3 ̈Go7˜,–N10¢&–Gœ ¦  §Š•?–"¦ ­ «3 ̈Go ™& *|°"± H 5 " & } & U(e " (, . 7 (, S*; 5 " ΞQ/ Y} !* )* . H & Q ? -, ! (, ! ! H ! 5 & & ; 8 Q : | ! !(, ! S $ & F (, S* " 5 "/S F -, ! (e ! N ! " & 8Ñ 8 . " E " &% ! ' ! 8 ! * 'c $ #"% T c ! &  ] & c (e N ! (, ! /ZYN ?T" 6^ ! A ) F J ' . ! ) T" N F IW & Ù c 8 ! " & (3% & d W(R%; & . 8 8; 5 J ] (e U ! (e ! 7 9 !*$ 8; ! !5 & 0/ 1e  !*; Ó ! ! . 'L & % ? Ó ! 8; ! 5 & (R%; & c 7(, . % " W < 8; ! ? " & L(R%; &  5 7 (c«3 ̈ ˜X ¢310¢& ˜,›G˜,«3–G ̈ ›G¢&¬ •c 5 J (} ; 4 c  ! W (} & 9(, . + , * ! * * ! A*G & ? U ! (e ! 6: * -@ } -X 5 " & T" N ! ? G% . Ñ NÏ, ? " !Ð^*G % ?*9 . ! * K : )/)aE «R ̈ ˜, ¢310¢& ˜,›G˜,«3–G ̈/.Î ) 5 'G c } (K 8 ! " & 4 ! 6H &% J & 5 U(, 7 Q ! !(, ! % A 8;8Ñ ` .W & ! 7 A 5 . 5 A c ! 9 (S & Ó(e . + , 0*213.;/c [ * O; & S E 8 ! " & < ! ? E F : A *o6 ,/ "/ 6; S 8: 5 & ; $ & 5 + , * 13.` * * + 4 , 05617.;/$13 < & 7 " N S *G ! & '$ 8; ! ? * " 8 5 /}Ô V ' T" $ 8 ! & 9.) *c 'A ! (e ! "6G S* O 8.;Ï ÐH N : } & S%; & . 5 8 ! " & < (: "D .;Ï Ð<;>= .;Ï ? Ð (@.U + , ? " & ! ÕN 5 A & " } 8 ! 5 & .7* O; ) 7 !* * " " .;Ï|žHÐ * T" * '9 ! !8; 5 $ 5 J ! (e ! U ]žƒ 'c E% & . " 7 8 ! " & % *; 8.;/ ÜE(} % ! 7 5 7 8; ! ? " & W ! 78: " & Wd 7 %; * " W 8 ! +B|ÚN/}ˆG . CL " LB|YA/ Û % h C / ̧ 6-b & ! >+ , * 9 & 5 `Ï, c%; IG !Ð ( 8; ! ? & ) 5 0*G/AÔ . ' F " &% . c & 7 ? " ) ! 8 ! T" * * & N(, . (‘ A1 –G˜X ̈ ˜,«3›"¦ «R ̈ ˜, ¢310¢& ˜,›G˜,«3–G ̈A•? ˜ 6" ,/ "/=6 .0A'B C) ()Ï, ! * ÐV8Ñ " & E ! 8; ! 5 & ! /9 E T" F (, * " 5 7ž$ *W F D.0A'B CK (08: " ! & H 8 ! " & % H 5 " [ : K O; *A ) 8 ! " & /. E .FA'B C . ; . h & W 5 A(3% & HG"Ï .GÐ)* O; * " A(, E c ! 6I Î *HIJ* ! W8 ! . ! c () & " A(R%; & %K .;Ï|žHÐ K^ 9 & 9 G% . : ([*i & F &%;8 ) < & F !* * " 5 2.;Ï|žHÐ0 ‘ *LK .MK; ) & G% . : ! ) (H ! 8; ! 5 & ] ! 
.;/ G5Ï .GÐON -UÏ .GÐ PQI K .;Ï|žHÐ K PQI * K .MK Ï ‡ŒÐ -)Ï .GÐON R SUTWV X S&Y Z\[ ÕN 5 ) & 5 S " . ! ) ! S ! ) "* *; * .A E T" ) & " D-UÏ .GÐ]P IJ*WK .MKi ! " ? 7 ; /I K .;Ï|žHÐ K * ! 5 /7 7 " N(R%; & ? Ó ! !8; ! ! 7 W & ! 5*; Þ4 : ? & ! & 7%; IG W (S & 8 ! " & ß *+ & ? . 8 " & $ (Î & ! &% & 4 !* * " " 5/) ; } M & T5 A(3% ? & $ ) 8 8; ! >" . 5 ' * T" * (3 ! . A8; ! ; & R . G* ^ $ & N(, W & ^/ 3. A PROBABILISTIC MODEL 1e Ù & ; } & $ ) T5 A 8 8 ! >" . " A8 ! & c* T & $ (K & ) ? " (3% & 4Ï ‡ŒÐ /K„^ D^ "Ï|»2 F.J žHÐS 7*G & ;% & !* * " 5 λ 6; 8 ! " & .;6 * (, N* " 5 žQ/KaE Z 8 & . ^ !* ; c (‘ž )8 )»2 F.) & " . 5>" . h ^ "Ï|»2 F.MK žHÐ /‘ÕN " & ? N & " H(, UO >G *Zž & } 8 & . : !* ; : N(, % *W '9 & . 8; ' . 5>" . h & M~ E8 ! &'$ ( »2 F.A *$žQ/ H N* O _^ _Ï|»2 F.J JžHÐ^ 5 &% . E & " H & ! Q K EO E ? `4 ( 8: " & : ! -| *9 M H *7 NO E ? H ( ! 5 & 6 5 J < ( O >G * |'"/c ? 9 5 &% . 8 & A . 8; ' & " F & ! 7 WO ; F (S8: 5 & ; &% 8 /a J  %; * 8 8: 7 » / \$ W /b : & W G% . : Ù (N8Ñ " & !*L &%;8 /È1e & 8 ! & . G* : S " &% . } & 5 × : 5 & Z & !*c*; 5 U " » *+ & $ (, c* " " žb È*i% 8 " < &% 8; / \$ W K »cKH *>K ždK‘ Ñ & W G% . : Z (F &%;8 9 È» * ž ! &8: ? & T" 'Î [ ! *G & G % ! K (: & . &% 8; S ! & ! ? " *Z " S 8 ! " ) ! . / \ } 7 K .MK : ) & S G% . : (: ! ? ^ c 7 ! 8; ! 5 & 9.Ed 5 & " K*i% 8; ? " & Ù (: ! 9.N × % *Z % × '7 & ! #"% ! . & " T" !') ! (, ! ! ? E " 5 . " K E % 5 A ! "/K\ ƒ 5 &% . N & " ^ _Ï `d !.J žHÐ ? W Ñ A* . 8: " ?* " (, S N ! V6 [ 6; * 7 ! N ! G% . : N8 ! . ! S $ & N ! !T SÏ J ‡ŒÐ * & N G% . Ñ } ?* 7 & 5 D^ _Ï .GÐ &% . S W‡"/ ^ "Ï|»2 F.J !žHÐ ; ^ "Ï|ždK .J J» Ð (^ "Ï .GÐ ^ "Ï|» Ð Ï|"Ð ^ "Ï|» ÐO; Ï ‡ Ð ‡ b Ï,ŽGÐ ^ "Ï .GÐO; ‡ [ Ï ‡ [ Ð [ S T V X S Y Z\[ + Ï,jGÐ ^ "Ï|ždK .J J» ÐO; Ï ‡ ! Ð ‡ K . Ï|» Ð K" Ï|‹"Ð 13 <Ï|‹"ÐS N T5 A & " 8. Ï|» ÐQ ) & A S ( (, S &%;8 a+ &% J & " D.;Ï aSÐ<1$» *QK . Ï|» Ð K" N & ) !*G &'$ (K & N ? / Ñ T5 #"% " & ? ! &8: *c S N ? ! 78 ! G ^(e K " ! ;! ! " & ) & & 8 )»2 .J !žQ/KÔ^ ! K»Š H " ! 5 *A 'N ! 8: ? " *G ' ! 
& 7 ( & %b€8: " & 9 !* &% 8 ) " N ! * . /ÎaS ? " 4 ! 5 & & W ? & L8 ! G c 8 7 & 48 ! &' b *< & G% c & 8 ! 3 |'+‡ # N/$a ̧ & . c8 ! ? E % * 7 " ! ! 5 .A 'c ? & Z ! S(3 ! . .0A(B C ! & 98 ! &'L (} & Z ! 6 + , 0*c A8; ! 8: ! & F + *Z ! 56; 5 J Ù & ! " & ^6: & N ! & 8 ! G ! . 5 ) & 8; ! ; |'$ [ /[1e $ & ) " F (K 5 ! ! " & 6.;6 T5 6^ (, F & A ! N T5 A : ! < ?* F J I$ & 5 . S ! o(e . *:6G ,/ 5/=6 & " Q Q S 5 'G F *9 & " S 5 J c ! (e ! " S 5 . " S ) % " c * " "/[1e(:.U 5 E ^(, . *Z & ) ! N T5 /S13(K & F E
Treatment of superficial cutaneous vascular lesions: experience with the KTP 532 nm laser
Whilst most facial telangiectasias respond well to short-pulse-duration pulsed dye laser therapy, studies have shown that for the treatment of larger vessels these short-duration pulses are sub-optimal. Long-pulse frequency-doubled neodymium:YAG lasers have been introduced with pulse durations ranging from 1–50 ms and treatment beam diameters of up to 4 mm. We report the results of KTP/532 nm laser treatment for superficial vascular skin lesions. The aim was to determine the efficacy of the KTP/532 nm laser in the treatment of superficial cutaneous vascular lesions at a regional dermatology centre in a 2 year retrospective analysis. Patients were referred from general dermatology clinics to a purpose-built laser facility. A test dose was performed at the initial consultation and thereafter patients were reviewed and treated at 6 week intervals. Outcome was graded into five classifications by the patient and operator independently based on photographic records: clear, marked improvement, partial response, poor response, and no change or worsening. Over the 2 year period, 204 patients with 246 diagnoses were treated [156 female; median age 41 (range 1–74) years; Fitzpatrick skin types I–III]. Equal numbers of spider angioma (102) and facial telangiectasia (102) were treated. Of those patients who completed treatment and follow up, 57/58 (98%) of spider angiomas and 44/49 (90%) of facial telangiectasia markedly improved or cleared. Satisfactory treatment outcomes, with one clearance and two partial responses, occurred in three of five patients with port-wine stain. Few patients experienced adverse effects: two declined further treatment due to pain, and a small area of minimal superficial scarring developed in one case. Two patients developed mild persistent post-inflammatory hyperpigmentation, and one subject experienced an episode of acute facial erythema, swelling and blistering after one treatment. 
The KTP/532 nm frequency-doubled neodymium:YAG laser is a safe and effective treatment for common superficial cutaneous vascular lesions in patients with Fitzpatrick skin types I–III.
A novel microstrip line backward directional coupler with high directivity
A uni-planar backward directional coupler is analyzed and designed. Microstrip parallel coupled lines and asymmetrical delay lines form a multi-sectioned coupler, which enhances the directivity. In this paper, 20 dB multi-sectioned couplers with single and double delay lines are designed and measured to validate the analysis. The coupler with two delay lines achieves a directivity above 30 dB and a fractional bandwidth of 30% at a center frequency of 1.8 GHz.
Improving multi-label classification using scene cues
Multi-label classification is one of the most challenging tasks in the computer vision community, owing to the varied composition and interaction (e.g. partial visibility or occlusion) among objects in multi-label images. Intuitively, some objects usually co-occur with specific scenes, e.g. the sofa often appears in a living room. Therefore, the scene of a given image may provide informative cues for identifying the embedded objects. In this paper, we propose a novel scene-aware deep framework for addressing the challenging multi-label classification task. In particular, we incorporate two sub-networks that are pre-trained for different tasks (i.e. object classification and scene classification) into a unified framework, so that informative scene-aware cues can be leveraged to benefit multi-label object classification. In addition, we also present a novel one vs. all multiple-cross-entropy (MCE) loss for optimizing the proposed scene-aware deep framework by independently penalizing the classification error for each label. The proposed method can be learned in an end-to-end manner, and extensive experimental results on Pascal VOC 2007 and MS COCO demonstrate that our approach achieves a noticeable improvement on the multi-label classification task.
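The abstract does not spell out the MCE formulation. One plausible reading of a one vs. all loss that "independently penalizes the classification error for each label" is an independent sigmoid cross-entropy per label; the numpy sketch below illustrates that reading (function name and tensor shapes are my own assumptions, not the paper's):

```python
import numpy as np

def mce_loss(logits, labels):
    """One-vs-all multi-label loss: an independent sigmoid
    cross-entropy per label, summed over labels and averaged over the
    batch.  `logits` and `labels` have shape (batch, n_labels), with
    labels in {0, 1}."""
    probs = 1.0 / (1.0 + np.exp(-logits))       # per-label sigmoid
    eps = 1e-12                                 # numerical floor for the logs
    per_label = -(labels * np.log(probs + eps)
                  + (1.0 - labels) * np.log(1.0 - probs + eps))
    return per_label.sum(axis=1).mean()

logits = np.array([[2.0, -1.0], [-0.5, 3.0]])
labels = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = mce_loss(logits, labels)
```

Because each label gets its own binary term, the gradient for one label is unaffected by the scores of the others, which is the "independent penalization" the abstract describes.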
Test Results and Torque Improvement of the 50-kW Switched Reluctance Motor Designed for Hybrid Electric Vehicles
A switched reluctance motor (SRM) has been developed as one of the possible candidates for rare-earth-free electric motors. A prototype machine has been built and tested. It has competitive dimensions, torque, power, and efficiency with respect to the 50-kW interior permanent magnet synchronous motor employed in hybrid electric vehicles (Toyota Prius 2003). A competitive power rating of 50 kW and an efficiency of 95% are achieved. The prototype motor provided 85% of the target torque. Except for the maximum torque, most of the speed-torque region is covered by the test SRM. The cause of the discrepancy between the measured and calculated torque values is examined. An improved design is attempted, and a new experimental switched reluctance machine is designed and built for testing. The results are given in this paper.
Analysis of Ultra Wideband Antipodal Vivaldi Antenna Design
The characteristics of antipodal Vivaldi antennas with different structures and the key factors that affect the performance of the antipodal antennas have been investigated. The return loss, radiation pattern and current distribution with various elliptical loading sizes, opening rates and ground plane sizes have been analyzed and compared using full wave simulation tool. Based on the parameter study, a design that is capable of achieving a bandwidth from 2.1 GHz to 20 GHz has been identified.
A Large-Scale Study of Mobile Web App Security
Mobile apps that use an embedded web browser, or mobile web apps, make up 85% of the free apps on the Google Play store. The security concerns for developing mobile web apps go beyond just those for developing traditional web apps or mobile apps. In this paper we develop scalable analyses for finding several classes of vulnerabilities in mobile web apps and analyze a large dataset of 998,286 mobile web apps, representing a complete snapshot of all of the free mobile web apps on the Google Play store as of June 2014. We find that 28% of the studied apps have at least one vulnerability. We explore the severity of these vulnerabilities and identify trends in the vulnerable apps. We find that severe vulnerabilities are present across the entire Android app ecosystem, even in popular apps and libraries. Finally, we offer several changes to the Android APIs to mitigate these vulnerabilities.
Oil content, tocopherol composition and fatty acid patterns of the seeds of 51 Cannabis sativa L. genotypes
The oil content, the tocopherol composition, the plastochromanol-8 (P-8) content and the fatty acid composition (19 fatty acids) of the seed of 51 hemp (Cannabis sativa L.) genotypes were studied in the 2000 and 2001 seasons. The oil content of the hemp seed ranged from 26.25% (w/w) to 37.50%. Analysis of variance revealed significant effects of genotype, year and of the interaction (genotype × year) on the oil content. The oil contents of the 51 genotypes in 2000 and 2001 were correlated (r = 0.37**) and averaged 33.19 ± 1.45% in 2000 and 31.21 ± 0.96% in 2001. The γ-tocopherol, α-tocopherol, δ-tocopherol, P-8- and β-tocopherol contents of the 51 genotypes averaged 21.68 ± 3.19, 1.82 ± 0.49, 1.20 ± 0.40, 0.18 ± 0.07 and 0.16 ± 0.04 mg 100g−1 of seeds, respectively (2000 and 2001 data pooled). Hierarchical clustering of the fatty acid data did not group the hemp genotypes according to their geographic origin. The γ-linolenic acid yield of hemp (3–30 kg ha−1) was similar to the γ-linolenic acid yield of plant species that are currently used as sources of γ-linolenic acid (borage (19–30 kg ha−1), evening primrose (7–30 kg ha−1)). The linoleic acid yield of hemp (129–326 kg ha−1) was similar to flax (102–250 kg ha−1), but less than in sunflower (868–1320 kg ha−1). Significant positive correlations were detected between some fatty acids and some tocopherols. Even though the average content of P-8 in hemp seeds was only 1/120th of the average γ-tocopherol content, P-8 content was more closely correlated with the unsaturated fatty acid content than γ-tocopherol or any other tocopherol fraction. The average broad-sense heritabilities of the oil content, the antioxidants (tocopherols and P-8) and the fatty acids were 0.53, 0.14 and 0.23, respectively. The genotypes Fibrimon 56, P57, Juso 31, GB29, Beniko, P60, FxT, Félina 34, Ramo and GB18 were capable of producing the largest amounts of high quality hemp oil.
Giant Penile Lymphedema Caused by Chronic Penile Strangulation with Rubber Band: A Case Report and Review of the Literature
We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the largest reported.
Fully Bayesian spatio-temporal modeling of FMRI data
We present a fully Bayesian approach to modeling in functional magnetic resonance imaging (FMRI), incorporating spatio-temporal noise modeling and haemodynamic response function (HRF) modeling. A fully Bayesian approach allows for the uncertainties in the noise and signal modeling to be incorporated together to provide full posterior distributions of the HRF parameters. The noise modeling is achieved via a nonseparable space-time vector autoregressive process. Previous FMRI noise models have either been purely temporal, separable or modeling deterministic trends. The specific form of the noise process is determined using model selection techniques. Notably, this results in the need for a spatially nonstationary and temporally stationary spatial component. Within the same full model, we also investigate the variation of the HRF in different areas of the activation, and for different experimental stimuli. We propose a novel HRF model made up of half-cosines, which allows distinct combinations of parameters to represent characteristics of interest. In addition, to adaptively avoid over-fitting we propose the use of automatic relevance determination priors to force certain parameters in the model to zero with high precision if there is no evidence to support them in the data. We apply the model to three datasets and observe matter-type dependence of the spatial and temporal noise, and a negative correlation between activation height and HRF time to main peak (although we suggest that this apparent correlation may be due to a number of different effects).
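The exact parameterization of the half-cosine HRF appears in the paper, not the abstract. The sketch below is a hedged reconstruction of the idea: four half-cosine segments join an initial dip, a main peak, an undershoot and a return to baseline, so that each duration and amplitude parameter maps to one interpretable feature of the response. All parameter names and default values here are illustrative assumptions:

```python
import numpy as np

def half_cosine_hrf(t, durs=(2.0, 4.0, 4.0, 6.0), amps=(0.2, 1.0, 0.3)):
    """HRF built from four half-cosine segments joining the control
    values 0 -> -dip -> peak -> -undershoot -> 0, each segment lasting
    one of the four durations.  Each parameter controls a single
    characteristic of interest (dip depth, peak height, etc.)."""
    dip, peak, under = amps
    vals = [0.0, -dip, peak, -under, 0.0]            # segment endpoints
    edges = np.concatenate([[0.0], np.cumsum(durs)]) # segment boundaries
    out = np.zeros_like(t, dtype=float)
    for i in range(4):
        m = (t >= edges[i]) & (t < edges[i + 1])
        frac = (t[m] - edges[i]) / durs[i]           # position within segment
        # half-cosine ramp from vals[i] to vals[i+1]
        out[m] = vals[i] + (vals[i + 1] - vals[i]) * (1 - np.cos(np.pi * frac)) / 2
    return out

t = np.linspace(0.0, 16.0, 1601)
h = half_cosine_hrf(t)
```

Because each half-cosine starts and ends with zero slope, the segments join smoothly without extra continuity constraints.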
Non-small cell lung cancer staging: proposed revisions to the TNM system
Patients with non-small cell lung cancer (NSCLC) require careful staging at the time of diagnosis to determine prognosis and guide treatment recommendations. The seventh edition of the TNM Classification of Malignant Tumors is scheduled to be published in 2009 and the International Association for the Study of Lung Cancer (IASLC) created the Lung Cancer Staging Project (LCSP) to guide revisions to the current lung cancer staging system. These recommendations will be submitted to the American Joint Committee on Cancer (AJCC) and to the Union Internationale Contre le Cancer (UICC) for consideration in the upcoming edition of the staging manual. Data from over 100,000 patients with lung cancer were submitted for analysis and several modifications were suggested for the T descriptors and the M descriptors although the current N descriptors remain unchanged. These recommendations will further define homogeneous patient subsets with similar survival rates. More importantly, these revisions will help guide clinicians in making optimal, stage-specific, treatment recommendations.
Circuit Analysis and Modeling of a Phase-Shifted Pulsewidth Modulation Full-Bridge-Inverter-Fed Ozone Generator With Constant Applied Electrode Voltage
This paper proposes a circuit analysis and system modeling of an ozone generator using a phase-shifted pulsewidth modulation full-bridge inverter connected to two electrodes via a step-up transformer. The circuit operation of the inverter is fully described. Approximate models of the high-frequency transformer and the ozone-generating tube are given. Model parameter values are obtained from electrical characteristic measurement in conjunction with physical dimension calculation. In order to ensure that a zero-voltage soft-switching mode always operates over a certain range of a frequency variation, a series compensated resonant inductor is included. The advantage of the proposed system is a capability of varying ozone gas production quantity by varying the frequency of the inverter while the applied electrode voltage is kept constant, in order to overcome a high-frequency effect on the transformer voltage regulation. As a consequence, the absolute ozone production affected by the frequency is possibly achieved. The correctness and validity of the proposed system are verified by both simulation and experimental results.
Berkeley DB: A Retrospective
Berkeley DB is an open-source, embedded transactional data management system that has been in wide deployment since 1992. In those fifteen years, it has grown from a simple, non-transactional key-data store to a highly reliable, scalable, flexible data management solution. We trace the history of Berkeley DB and discuss how a small library provides data management solutions appropriate to disparate environments ranging from cell phones to servers.
Journal of Theoretical and Applied Information Technology
Research on software architecture has been carried out from different perspectives for several years. However, Architectural Knowledge (AK), a relatively new field, has gained increasing interest in the community. In this regard, various topics devoted to architectural knowledge, such as reusing, sharing, managing, and communicating, are being studied. Among them, AK sharing brings new challenges and issues not present when studying other topics in architectural knowledge. Therefore, this paper surveys current research on AK sharing (pertaining to software architecture), the approaches and models that are being proposed, and the issues that arise when different AKs are shared by different parties. This survey of AK sharing approaches provides a better understanding of the approaches and related issues, so that it can serve as a resource for fast training and education in this area of software engineering. In addition, conclusions about the current state of research in this area and future trends for AK sharing are identified.
A random variable shape parameter strategy for radial basis function approximation methods
Several variable shape parameter methods have been successfully used in Radial Basis Function approximation methods. In many cases variable shape parameter strategies produced more accurate results than if a constant shape parameter had been used. We introduce a new random variable shape parameter strategy and give numerical results showing that the new random strategy often outperforms both existing variable shape and constant shape strategies.
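As a hedged illustration of what a random variable shape parameter strategy can look like (not necessarily the paper's exact formulation), the sketch below performs 1-D multiquadric RBF interpolation where each basis function draws its own shape parameter uniformly at random; the sampling interval, kernel and node count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_interpolate(centers, values, eval_pts, eps_min=1.0, eps_max=3.0):
    """Multiquadric RBF interpolation in which every basis function
    draws its own shape parameter uniformly from [eps_min, eps_max]
    (a random variable shape parameter strategy)."""
    eps = rng.uniform(eps_min, eps_max, size=len(centers))   # one eps per center
    phi = lambda r, e: np.sqrt(1.0 + (e * r) ** 2)           # multiquadric kernel
    A = phi(np.abs(centers[:, None] - centers[None, :]), eps[None, :])
    coeffs = np.linalg.solve(A, values)                      # interpolation conditions
    B = phi(np.abs(eval_pts[:, None] - centers[None, :]), eps[None, :])
    return B @ coeffs

x = np.linspace(0.0, 1.0, 15)            # interpolation nodes
f = np.sin(2 * np.pi * x)                # samples of a smooth test function
xe = np.linspace(0.0, 1.0, 101)
approx = rbf_interpolate(x, f, xe)
```

One motivation reported for variable-shape strategies is that distinct shape parameters break the symmetry of the interpolation matrix's entries, which can improve its conditioning relative to a single constant parameter.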
Ad-Blocking and Counter Blocking: A Slice of the Arms Race
Adblocking tools like Adblock Plus continue to rise in popularity, potentially threatening the dynamics of advertising revenue streams. In response, a number of publishers have ramped up efforts to develop and deploy mechanisms for detecting and/or counter-blocking adblockers (which we refer to as anti-adblockers), effectively escalating the online advertising arms race. In this paper, we develop a scalable approach for identifying third-party services shared across multiple websites and use it to provide a first characterization of antiadblocking across the Alexa Top-5K websites. We map websites that perform anti-adblocking as well as the entities that provide anti-adblocking scripts. We study the modus operandi of these scripts and their impact on popular adblockers. We find that at least 6.7% of websites in the Alexa Top-5K use anti-adblocking scripts, acquired from 12 distinct entities – some of which have a direct interest in nourishing the online advertising industry.
Employing Event Inference to Improve Semi-Supervised Chinese Event Extraction
Although a semi-supervised model can extract the event mentions matching frequent event patterns, it suffers on event mentions that match infrequent patterns or have no matching pattern. To solve this issue, this paper introduces various kinds of linguistic knowledge-driven event inference mechanisms into semi-supervised Chinese event extraction. These event inference mechanisms capture linguistic knowledge from four aspects, i.e. semantics of argument role, compositional semantics of trigger, consistency on coreference events, and relevant events, to further recover missing event mentions from unlabeled texts. Evaluation on the ACE 2005 Chinese corpus shows that our event inference mechanisms significantly outperform the refined state-of-the-art semi-supervised Chinese event extraction system in F1-score by 8.5%.
Familial Risk and Heritability of Cancer Among Twins in Nordic Countries.
IMPORTANCE Estimates of familial cancer risk from population-based studies are essential components of cancer risk prediction. OBJECTIVE To estimate familial risk and heritability of cancer types in a large twin cohort. DESIGN, SETTING, AND PARTICIPANTS Prospective study of 80,309 monozygotic and 123,382 same-sex dizygotic twin individuals (N = 203,691) within the population-based registers of Denmark, Finland, Norway, and Sweden. Twins were followed up a median of 32 years between 1943 and 2010. There were 50,990 individuals who died of any cause, and 3804 who emigrated and were lost to follow-up. EXPOSURES Shared environmental and heritable risk factors among pairs of twins. MAIN OUTCOMES AND MEASURES The main outcome was incident cancer. Time-to-event analyses were used to estimate familial risk (risk of cancer in an individual given a twin's development of cancer) and heritability (proportion of variance in cancer risk due to interindividual genetic differences) with follow-up via cancer registries. Statistical models adjusted for age and follow-up time, and accounted for censoring and competing risk of death. RESULTS A total of 27,156 incident cancers were diagnosed in 23,980 individuals, translating to a cumulative incidence of 32%. Cancer was diagnosed in both twins among 1383 monozygotic (2766 individuals) and 1933 dizygotic (2866 individuals) pairs. Of these, 38% of monozygotic and 26% of dizygotic pairs were diagnosed with the same cancer type. There was an excess cancer risk in twins whose co-twin was diagnosed with cancer, with estimated cumulative risks that were an absolute 5% (95% CI, 4%-6%) higher in dizygotic (37%; 95% CI, 36%-38%) and an absolute 14% (95% CI, 12%-16%) higher in monozygotic twins (46%; 95% CI, 44%-48%) whose twin also developed cancer compared with the cumulative risk in the overall cohort (32%). For most cancer types, there were significant familial risks and the cumulative risks were higher in monozygotic than dizygotic twins. 
Heritability of cancer overall was 33% (95% CI, 30%-37%). Significant heritability was observed for the cancer types of skin melanoma (58%; 95% CI, 43%-73%), prostate (57%; 95% CI, 51%-63%), nonmelanoma skin (43%; 95% CI, 26%-59%), ovary (39%; 95% CI, 23%-55%), kidney (38%; 95% CI, 21%-55%), breast (31%; 95% CI, 11%-51%), and corpus uteri (27%; 95% CI, 11%-43%). CONCLUSIONS AND RELEVANCE In this long-term follow-up study among Nordic twins, there was significant excess familial risk for cancer overall and for specific types of cancer, including prostate, melanoma, breast, ovary, and uterus. This information about hereditary risks of cancers may be helpful in patient education and cancer risk counseling.
An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a non-square matrix
In this paper, an iterative scheme is proposed to find the roots of a nonlinear equation. It is shown that this iterative method has fourth-order convergence in the neighborhood of the root. Based on this iterative scheme, we propose the main contribution of this paper: a new high-order computational algorithm for finding an approximate inverse of a square matrix. The analytical discussion shows that this algorithm has fourth-order convergence as well. Next, the iterative method is extended by theoretical analysis to find the pseudo-inverse (also known as the Moore–Penrose inverse) of a singular or rectangular matrix. Numerical examples on some practical problems reveal the efficiency of the new algorithm for computing a robust approximate inverse of a real (or complex) matrix.
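The abstract does not reproduce the paper's specific iteration. A standard fourth-order scheme of the same family is the hyperpower iteration X_{k+1} = X_k (I + R_k + R_k² + R_k³) with R_k = I − A X_k, which from a suitably scaled X₀ = α Aᵀ also converges to the Moore-Penrose inverse; the sketch below uses that scheme purely as an illustration (the scaling choice is a common textbook one, not necessarily the paper's):

```python
import numpy as np

def hyperpower4_inverse(A, iters=12):
    """Fourth-order hyperpower iteration: with R_k = I - A @ X_k,
    X_{k+1} = X_k @ (I + R_k + R_k^2 + R_k^3), so the residual
    contracts like ||R_{k+1}|| <= ||R_k||^4.  Starting from
    X_0 = A.T / (||A||_1 * ||A||_inf) the same recursion also
    converges to the Moore-Penrose inverse of a rectangular A."""
    m = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(m)
    for _ in range(iters):
        R = I - A @ X
        X = X @ (I + R @ (I + R @ (I + R)))   # Horner form of I+R+R^2+R^3
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = hyperpower4_inverse(A)   # approximate inverse of A
```

The scaling ||A||₁·||A||∞ bounds σ_max(A)² from above, which keeps the spectrum of the initial residual inside the unit interval and guarantees convergence.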
The cost-effectiveness of cardiac computed tomography for patients with stable chest pain.
OBJECTIVE To assess the cost-effectiveness of cardiac CT compared with exercise stress testing (EST) in improving the health-related quality of life of patients with stable chest pain. METHODS A cost-utility analysis alongside a single-centre randomised controlled trial carried out in Northern Ireland. Patients with stable chest pain were randomised to undergo either cardiac CT assessment or EST (standard care). The main outcome measure was cost per quality adjusted life year (QALY) gained at 1 year. RESULTS Of the 500 patients recruited, 250 were randomised to cardiac CT and 250 were randomised to EST. Cardiac CT was the dominant strategy as it was both less costly (incremental total costs -£50.45; 95% CI -£672.26 to £571.36) and more effective (incremental QALYs 0.02; 95% CI -0.02 to 0.05) than EST. At a willingness-to-pay threshold of £20 000 per QALY the probability of cardiac CT being cost-effective was 83%. Subgroup analyses indicated that cardiac CT appears to be most cost-effective in patients with a likelihood of coronary artery disease (CAD) of <30%, followed by 30%-60% and then >60%. CONCLUSIONS Cardiac CT is cost-effective compared with EST and cost-effectiveness was observed to vary with likelihood of CAD. This finding could have major implications for how patients with chest pain in the UK are assessed, however it would need to be validated in other healthcare systems. TRIAL REGISTRATION NUMBER ISRCTN52480460.
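The dominance logic behind the conclusion (less costly and more effective, so no ICER needs computing) can be sketched as follows. The function is illustrative, with the £20 000/QALY willingness-to-pay threshold and the trial's incremental point estimates taken from the abstract:

```python
def compare_strategies(delta_cost, delta_qaly, wtp=20000.0):
    """Classify an intervention against standard care from its
    incremental cost and incremental QALYs.  'dominant' = cheaper and
    at least as effective; 'dominated' = costlier and no more
    effective; otherwise compute the ICER and compare it with the
    willingness-to-pay threshold."""
    if delta_cost <= 0 and delta_qaly >= 0 and (delta_cost < 0 or delta_qaly > 0):
        return "dominant"
    if delta_cost >= 0 and delta_qaly <= 0 and (delta_cost > 0 or delta_qaly < 0):
        return "dominated"
    icer = delta_cost / delta_qaly   # incremental cost-effectiveness ratio
    return "cost-effective" if icer <= wtp else "not cost-effective"

# Trial point estimates: cardiac CT vs exercise stress testing
verdict = compare_strategies(-50.45, 0.02)
```

With both increments favourable at the point estimate, cardiac CT is classified as dominant; the 83% probability in the abstract comes from propagating the uncertainty in those increments, which this sketch does not model.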
Contextually Customized Video Summaries Via Natural Language
The best summary of a long video differs among different people due to its highly subjective nature. Even for the same person, the best summary may change with time or mood. In this paper, we introduce the task of generating contextually customized video summaries through simple text. First, we train a deep architecture to effectively learn semantic embeddings of video frames by leveraging the abundance of image-caption data in a progressive manner, whereby our algorithm is able to select semantically relevant video segments for a contextually meaningful video summary, given a user-specific text description or even a single sentence. In order to evaluate our customized video summaries, we conduct an experimental comparison with baseline methods that utilize ground-truth information. Despite the challenging baselines, our method still manages to show comparable or even superior performance. We also demonstrate that our method is able to automatically generate semantically diverse video summaries even without any text input.
Analysis of Dual Functions
The purpose of this paper is to develop a theory, inspired by complex analysis, of dual functions. Specifically, we introduce the notion of holomorphic dual functions and establish a general representation of holomorphic dual functions. As an application, we generalize some usual real functions to the dual plane. Finally, we define the integral along curves of arbitrary dual functions as well as the dual primitive.
Fronthauling for 5G LTE-U Ultra Dense Cloud Small Cell Networks
Ultra dense cloud small cell network (UDCSNet), which combines cloud computing and massive deployment of small cells, is a promising technology for 5G LTE-U mobile communications because it can accommodate the anticipated explosive growth of mobile users’ data traffic. As a result, fronthauling becomes a challenging problem in 5G LTE-U UDCSNet. In this article, we present an overview of the challenges and requirements of the fronthaul technology in 5G LTE-U UDCSNets. We survey the advantages and challenges for various candidate fronthaul technologies such as optical fiber, millimeter-wave-based unlicensed spectrum, Wi-Fi-based unlicensed spectrum, sub-6-GHz-based licensed spectrum, and free-space optical-based unlicensed spectrum.
Modulation of Visual Responses by Behavioral State in Mouse Visual Cortex
Studies of visual processing in rodents have conventionally been performed on anesthetized animals, precluding examination of the effects of behavior on visually evoked responses. We have now studied the response properties of neurons in primary visual cortex of awake mice that were allowed to run on a freely rotating spherical treadmill with their heads fixed. Most neurons showed more than a doubling of visually evoked firing rate as the animal transitioned from standing still to running, without changes in spontaneous firing or stimulus selectivity. Tuning properties in the awake animal were similar to those measured previously in anesthetized animals. Response magnitude in the lateral geniculate nucleus did not increase with locomotion, demonstrating that the striking change in responsiveness did not result from peripheral effects at the eye. Interestingly, some narrow-spiking cells were spontaneously active during running but suppressed by visual stimuli. These results demonstrate powerful cell-type-specific modulation of visual processing by behavioral state in awake mice.
Drug-eluting stent, but not bare metal stent, accentuates the systematic inflammatory response in patients.
OBJECTIVE The systematic pro-inflammatory responses after percutaneous coronary intervention with drug-eluting stents (DES) remain poorly defined. Therefore, we compared the systematic pro-inflammatory state of circulating mononuclear cells (MNCs) between DES and bare metal stent (BMS) implantation. METHODS Patients with indications for treatment with stents were randomized in a 1:1 ratio to placement of DES or BMS. The primary endpoint was a change of pro-inflammatory state at 12 weeks post-procedure. RESULTS Thirty-six consecutive patients received DES or BMS. At 12 weeks after stent implantation, the lipid profile and high-sensitivity C-reactive protein (hs-CRP) improved significantly in both groups. The mRNA levels and plasma concentrations of interleukin-6, tumor necrosis factor-α and matrix metalloproteinase-9 were significantly elevated in the DES group, which was not observed in the BMS group. An increase in NF-κB binding activity and a decrease in PPAR-γ expression in MNCs were observed in the DES group, along with increases in IκB phosphorylation and p50 expression. However, similar changes were not observed in the BMS group. CONCLUSIONS Systematic inflammatory responses were accentuated after the patients were treated percutaneously with DES, despite their improved lipid profile and hs-CRP. These data may provide fundamental information for optimizing therapeutic strategy in the era of DES.
Intertrack Interference Cancellation for Shingled Magnetic Recording
The intertrack interference in squeezed tracks of a magnetic hard disk drive is investigated based on captured waveforms. Specifically, extracted intertrack interference impulses and statistics are presented as a function of track squeeze and read offset. The bit-error-rate improvement of intertrack interference cancellation is shown based on simulations with captured drive waveforms. Architecture and implementation aspects for hard disk drives with intertrack interference cancellation are described.
Socioeconomic status and obesity.
The objective of this review was to update Sobal and Stunkard's exhaustive review of the literature on the relation between socioeconomic status (SES) and obesity (Psychol Bull 1989;105:260-75). Diverse research databases (including CINAHL, ERIC, MEDLINE, and Social Science Abstracts) were comprehensively searched during the years 1988-2004 inclusive, using "obesity," "socioeconomic status," and synonyms as search terms. A total of 333 published studies, representing 1,914 primarily cross-sectional associations, were included in the review. The overall pattern of results, for both men and women, was of an increasing proportion of positive associations and a decreasing proportion of negative associations as one moved from countries with high levels of socioeconomic development to countries with medium and low levels of development. Findings varied by SES indicator; for example, negative associations (lower SES associated with larger body size) for women in highly developed countries were most common with education and occupation, while positive associations for women in medium- and low-development countries were most common with income and material possessions. Patterns for women in higher- versus lower-development countries were generally less striking than those observed by Sobal and Stunkard; this finding is interpreted in light of trends related to globalization. Results underscore a view of obesity as a social phenomenon, for which appropriate action includes targeting both economic and sociocultural factors.
Automatically generating features for learning program analysis heuristics for C-like languages
We present a technique for automatically generating features for data-driven program analyses. Recently, data-driven approaches to building program analyses have been developed that mine existing codebases and automatically learn heuristics for finding a cost-effective abstraction for a given analysis task. Such approaches reduce the burden on analysis designers, but they do not remove it completely; they still leave the nontrivial task of designing so-called features in the designers' hands. Our technique aims at automating this feature design process. The idea is to use programs as features after reducing and abstracting them. Our technique goes through selected program-query pairs in codebases, and it reduces and abstracts the program in each pair to a few lines of code, while ensuring that the analysis behaves similarly for the original and the reduced programs with respect to the query. Each reduced program then serves as a boolean feature for program-query pairs: the feature evaluates to true for a given program-query pair when it (as a program) is included in the program part of the pair. We have implemented our approach for three real-world static analyses. The experimental results show that these analyses with automatically generated features are cost-effective and consistently perform well on a wide range of programs.
Defining and assessing professional competence.
CONTEXT Current assessment formats for physicians and trainees reliably test core knowledge and basic skills. However, they may underemphasize some important domains of professional medical practice, including interpersonal skills, lifelong learning, professionalism, and integration of core knowledge into clinical practice. OBJECTIVES To propose a definition of professional competence, to review current means for assessing it, and to suggest new approaches to assessment. DATA SOURCES We searched the MEDLINE database from 1966 to 2001 and reference lists of relevant articles for English-language studies of reliability or validity of measures of competence of physicians, medical students, and residents. STUDY SELECTION We excluded articles of a purely descriptive nature, duplicate reports, reviews, and opinions and position statements, which yielded 195 relevant citations. DATA EXTRACTION Data were abstracted by 1 of us (R.M.E.). Quality criteria for inclusion were broad, given the heterogeneity of interventions, complexity of outcome measures, and paucity of randomized or longitudinal study designs. DATA SYNTHESIS We generated an inclusive definition of competence: the habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served. Aside from protecting the public and limiting access to advanced training, assessments should foster habits of learning and self-reflection and drive institutional change. Subjective, multiple-choice, and standardized patient assessments, although reliable, underemphasize important domains of professional competence: integration of knowledge and skills, context of care, information management, teamwork, health systems, and patient-physician relationships. 
Few assessments observe trainees in real-life situations, incorporate the perspectives of peers and patients, or use measures that predict clinical outcomes. CONCLUSIONS In addition to assessments of basic skills, new formats that assess clinical reasoning, expert judgment, management of ambiguity, professionalism, time management, learning strategies, and teamwork promise a multidimensional assessment while maintaining adequate reliability and validity. Institutional support, reflection, and mentoring must accompany the development of assessment programs.
What can we learn from Facebook activity?: using social learning analytics to observe new media literacy skills
Social media platforms such as Facebook are now a ubiquitous part of everyday life for many people. New media scholars posit that the participatory culture encouraged by social media gives rise to new forms of literacy skills that are vital to learning. However, there have been few attempts to use analytics to understand the new media literacy skills that may be embedded in an individual's participation in social media. In this paper, I collect raw activity data shared by an exploratory sample of Facebook users. I then utilize factor analysis and regression models to show (a) how Facebook members' online activities coalesce into distinct categories of social media behavior and (b) how these participatory behaviors correlate with and predict measures of new media literacy skills. The study demonstrates the use of analytics to understand the literacies embedded in people's social media activity. The implications speak to the potential of social learning analytics to identify and predict new media literacy skills from data streams in social media platforms.
ARBITRAGE PRICING WITH STOCHASTIC VOLATILITY
We address the problem of pricing contingent claims in the presence of stochastic volatility. Former works claim that, as volatility itself is not a traded asset, no riskless hedge can be established, so equilibrium arguments have to be invoked and risk premia specified. We show that if instead of trying to find the prices of standard options we take these prices as exogenous, we can derive arbitrage prices of more complicated claims indexed on the Spot (and possibly on the volatility itself). This paper is an expansion of a former version presented at the AFFI conference in Paris, June 1992. I am grateful to Nicole El Karoui, Darrell Duffie, John Hull, and my colleagues from SORT (Swaps and Options Research Team) at Paribas for enriching conversations. All errors are indeed mine.
Remote control laboratory via Internet using Matlab and Simulink
This article describes the general architecture and application of a remote laboratory for teaching control theory based on Matlab/Simulink. The proposed system overcomes the time and space limitations of laboratories that rely on the real physical systems used in control courses. In this way, control lab assignments involving the various physical processes present in the remote laboratories can be performed. Some examples that show the validity and applicability of the presented architecture are also introduced. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 694–702, 2010; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20274
Human Tracking with a Mobile Robot using a Laser Range-Finder
Human tracking is a fundamental research issue for mobile robots, since the coexistence of humans and robots is expected in the near future. In this paper, we present a new method for real-time tracking of humans walking around a robot using a laser range-finder. The method converts range data in r-θ coordinates to a 2D image in x-y coordinates. Human tracking is then performed using block matching between templates, i.e., appearances of human legs, and the input range data. This view-based human tracking method has the advantage of simplicity over conventional methods, which extract local minima in the range data. In addition, the proposed tracking system employs a particle filter to robustly track humans in the case of occlusions. Experimental results using a real robot demonstrate the usefulness of the proposed method.
Response to Alcoff, Ferguson, and Bergoffen
This paper responds to comments, queries, and criticisms offered by Alcoff, Bergoffen, and Ferguson at a scholar's session on my work held at the annual meeting of the Society for Phenomenology and Existential Philosophy in October 2001. Responding to Alcoff, I highlight my understanding of liberation in the context of a Nietzschean and a Latin American feminism and the politics of conceptualizing "resistance" in postcolonial theory. Responding to Ferguson, I address, among other issues, the often misunderstood distinction between postcolonialism and postmodernism, as well as related implications regarding some postcolonial feminists' qualified appeals to universals and women's rights. Responding to Bergoffen, I advocate on behalf of cultural formations supportive of the feminist affirmation of life and of radical subjectivities that challenge gender orthodoxies.
Prognostic impact of day 15 blast clearance in risk-adapted remission induction chemotherapy for younger patients with acute myeloid leukemia: long-term results of the multicenter prospective LAM-2001 trial by the GOELAMS study group.
Early response to chemotherapy has a major prognostic impact in acute myeloid leukemia patients treated with a double induction strategy. Less is known about patients treated with standard-dose cytarabine and anthracycline. We designed a risk-adapted remission induction regimen in which a second course of intermediate-dose cytarabine was delivered after standard "7+3" only if patients had 5% or more bone marrow blasts 15 days after chemotherapy initiation (d15-blasts). Of 823 included patients, 795 (96.6%) were evaluable. Five hundred and forty-five patients (68.6%) had less than 5% d15-blasts. Predictive factors for high d15-blasts were white blood cell count (P<0.0001) and cytogenetic risk (P<0.0001). Patients with fewer than 5% d15-blasts had a higher complete response rate (91.7% vs. 69.2%; P<0.0001) and a lower induction death rate (1.8% vs. 6.8%; P=0.001). Five-year event-free (48.4% vs. 25%; P<0.0001), relapse-free (52.7% vs. 36.9%; P=0.0016) and overall survival (55.3% vs. 36.5%; P<0.0001) were significantly higher in patients with d15-blasts lower than 5%. Multivariate analyses identified d15-blasts and cytogenetic risk as independent prognostic factors for the three end points. Failure to achieve early blast clearance remains a poor prognostic factor even after early salvage. By contrast, early responding patients have a favorable outcome without any additional induction course. (ClinicalTrials.gov identifier NCT01015196).
Early Catholic Responses to Darwin's Theory of Evolution
There are no simple conclusions to be drawn when exploring the historical relationships between religion and science. Those that present either a straightforward conflict or a perfectly harmonious collaboration run contrary to the evidence of history. This is especially true for analyses of the initial reactions of the Catholic Church to Darwin's theory of evolution, as first thoroughly expounded in the Origin of Species. This paper will show that throughout the 19th
Partitioning Well-Clustered Graphs with k-Means and Heat Kernel
We study a suitable class of well-clustered graphs that admit good k-way partitions and present the first almost-linear time algorithm with almost-optimal approximation guarantees for partitioning such graphs. A good k-way partition is a partition of the vertices of a graph into disjoint clusters (subsets) S1, ..., Sk, such that each cluster is better connected on the inside than towards the outside. This problem is a key building block in algorithm design, and has wide applications in community detection and network analysis. Key to our result is a theorem on the multi-cut and eigenvector structure of the graph Laplacians of these well-clustered graphs. Based on this theorem, we give the first rigorous guarantees on the approximation ratios of the widely used k-means clustering algorithms. We also give an almost-linear time algorithm based on heat kernel embeddings and approximate nearest neighbor data structures.
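The spectral step at the heart of such partitioning algorithms can be sketched as follows. This is a minimal two-cluster illustration of the classical Laplacian-eigenvector idea, with a sign-based split standing in for the k-means step; it is not the paper's almost-linear-time algorithm, and the example graph is an assumption for illustration:

```python
import numpy as np

def fiedler_partition(A):
    """Two-way spectral partition: split by the sign of the normalized
    Laplacian's second eigenvector (the k = 2 special case of the
    k-eigenvector embedding followed by k-means)."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    fiedler = vecs[:, 1]                 # second-smallest eigenvalue's vector
    return fiedler >= 0

# A well-clustered graph: two 5-node cliques joined by a single edge.
n = 10
A = np.zeros((n, n))
for block in (range(0, 5), range(5, 10)):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[4, 5] = A[5, 4] = 1.0                  # the weak inter-cluster connection
side = fiedler_partition(A)
print(side)                              # nodes 0-4 on one side, 5-9 on the other
```

For k > 2 clusters, the same embedding would use the k smallest eigenvectors and a genuine k-means step, which is where the paper's approximation guarantees apply.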
Impact of User Pairing on 5G Non-Orthogonal Multiple Access
Non-orthogonal multiple access (NOMA) represents a paradigm shift from conventional orthogonal multiple access (MA) concepts, and has been recognized as one of the key enabling technologies for 5G systems. In this paper, the impact of user pairing on the performance of two NOMA systems, NOMA with fixed power allocation (F-NOMA) and cognitive radio inspired NOMA (CR-NOMA), is characterized. For F-NOMA, both analytical and numerical results are provided to demonstrate that F-NOMA can offer a larger sum rate than orthogonal MA, and the performance gain of F-NOMA over conventional MA can be further enlarged by selecting users whose channel conditions are more distinctive. For CR-NOMA, the quality of service (QoS) for users with the poorer channel condition can be guaranteed since the transmit power allocated to other users is constrained following the concept of cognitive radio networks. Because of this constraint, CR-NOMA behaves differently from F-NOMA. For example, for the user with the best channel condition, CR-NOMA prefers to pair it with the user with the second best channel condition, whereas the user with the worst channel condition is preferred by F-NOMA. I. INTRODUCTION Multiple access in 5G mobile networks is an emerging research topic, since it is key for the next generation network to keep pace with the exponential growth of mobile data and multimedia traffic [1] and [2]. Non-orthogonal multiple access (NOMA) has recently received considerable attention as a promising candidate for 5G multiple access [3]–[6]. Particularly, NOMA uses the power domain for multiple access, where different users are served at different power levels. The users with better channel conditions employ successive interference cancellation (SIC) to remove the messages intended for other users before decoding their own [7]. The benefit of using NOMA can be illustrated by the following example.
Consider a user close to the edge of its cell, denoted by A, whose channel condition is very poor. For conventional MA, an orthogonal bandwidth channel, e.g., a time slot, will be allocated to this user, and the other users cannot use this time slot. The key idea of NOMA is to squeeze another user with a better channel condition, denoted by B, into this time slot. Since A's channel condition is very poor, the interference from B will not cause much performance degradation to A, but the overall system throughput can be significantly improved since additional information can be delivered between the base station (BS) and B. The design of NOMA for uplink transmissions has been proposed in [4], and the performance of NOMA with randomly deployed mobile stations has been characterized in [5]. The combination of cooperative diversity with NOMA has been considered in [8]. (Z. Ding and H. V. Poor are with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA. Z. Ding is also with the School of Computing and Communications, Lancaster University, LA1 4WA, UK. Pingzhi Fan is with the Institute of Mobile Communications, Southwest Jiaotong University, Chengdu, China.) Since multiple users are admitted at the same time, frequency and spreading code, co-channel interference will be strong in NOMA systems, i.e., a NOMA system is interference limited. As a result, it may not be realistic to ask all the users in the system to perform NOMA jointly. A promising alternative is to build a hybrid MA system, in which NOMA is combined with conventional MA. In particular, the users in the system can be divided into multiple groups, where NOMA is implemented within each group and different groups are allocated orthogonal bandwidth resources. Obviously, the performance of this hybrid MA scheme depends heavily on which users are grouped together, and the aim of this paper is to investigate the effect of this grouping.
Particularly, in this paper we focus on a downlink communication scenario with one BS and multiple users, where the users are ordered according to their connections to the BS, i.e., the m-th user has the m-th worst connection to the BS. Consider that two users, the m-th user and the n-th user, are selected for performing NOMA jointly, where m < n. The impact of user pairing on the performance of NOMA will be characterized in this paper, where two types of NOMA will be considered. One is based on fixed power allocation, termed F-NOMA, and the other is cognitive radio inspired NOMA, termed CR-NOMA. For the F-NOMA scheme, the probability that F-NOMA can achieve a larger sum rate than conventional MA is first studied, where an exact expression for this probability as well as its high signal-to-noise ratio (SNR) approximation are obtained. These analytical results demonstrate that it is almost certain for F-NOMA to outperform conventional MA, and the channel quality of the n-th user is critical to this probability. In addition, the gap between the sum rates achieved by F-NOMA and conventional MA is also studied, and it is shown that this gap is determined by how different the two users' channel conditions are, as initially reported in [8]. For example, if n = M, it is preferable to choose m = 1, i.e., pairing the user with the best channel condition with the user with the worst channel condition. The reason for this phenomenon can be explained as follows. When m is small, the m-th user's channel condition is poor, and the data rate supported by this user's channel is also small. Therefore the spectral efficiency of conventional MA is low, since the bandwidth allocated to this user cannot be accessed by other users. The use of F-NOMA ensures that the n-th user will have access to the resource allocated to the m-th user. If (n − m) is small, the n-th user's channel quality is similar to the m-th user's, and the benefit of using NOMA is limited.
But if n ≫ m, the n-th user can use the bandwidth resource much more efficiently than the m-th user, i.e., a larger (n − m) will result in a larger performance gap between F-NOMA and conventional MA. The key idea of CR-NOMA is to opportunistically serve the n-th user on the condition that the m-th user's quality of service (QoS) is guaranteed. Particularly, the transmit power
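The F-NOMA versus orthogonal-MA comparison sketched above can be checked numerically with the standard two-user downlink rate expressions. The channel gains and power split below are illustrative assumptions, not values from the paper:

```python
import math

def noma_sum_rate(g_m, g_n, P, a_m, a_n):
    """Two-user downlink power-domain NOMA sum rate (bits/s/Hz).
    g_m, g_n: channel gains |h|^2 of the weak (m-th) and strong (n-th) user;
    a_m + a_n = 1 are the power fractions, with a_m > a_n.
    The weak user treats the strong user's signal as noise; the strong
    user removes the weak user's signal by SIC before decoding its own."""
    r_m = math.log2(1 + a_m * P * g_m / (a_n * P * g_m + 1))
    r_n = math.log2(1 + a_n * P * g_n)
    return r_m + r_n

def oma_sum_rate(g_m, g_n, P):
    """Orthogonal MA baseline: each user gets half the degrees of freedom."""
    return 0.5 * math.log2(1 + P * g_m) + 0.5 * math.log2(1 + P * g_n)

# Distinctive channel conditions (g_n >> g_m), the pairing the paper favors.
g_m, g_n, P = 0.1, 10.0, 10.0
print(round(noma_sum_rate(g_m, g_n, P, a_m=0.8, a_n=0.2), 3))  # 5.129
print(round(oma_sum_rate(g_m, g_n, P), 3))                     # 3.829
```

With these (assumed) numbers the NOMA pair gains about 1.3 bits/s/Hz over orthogonal MA; shrinking the gap between g_m and g_n shrinks that gain, matching the (n − m) discussion above.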
Crossing the Divide: Representations of Deafness in Biography
This remarkable volume examines the process by which three deaf, French biographers from the 19th and 20th centuries attempted to cross the cultural divide between deaf and hearing worlds through their work. The very different approach taken by each writer sheds light on determining at what point an individual's assimilation into society endangers his or her sense of personal identity. Author Hartig begins by assessing the publications of Jean-Ferdinand Berthier (1803-1886). Berthier wrote about Auguste Bébian, Abbé de l'Épée, and Abbé Sicard, all of whom taught at the National Institute for the Deaf in Paris. Although Berthier presented compelling portraits of their entire lives, he paid special attention to their political and social activism, his main interest. Yvonne Pitrois (1880-1937) pursued her particular interest in the lives of deaf-blind people. Her biography of Helen Keller focused on her subject's destiny in conjunction with her unique relationship with Anne Sullivan. Corinne Rocheleau-Rouleau (1881-1963) recounted the historical circumstances that led French-Canadian pioneer women to leave France. The true value of her work resides in her portraits of these pioneer women: maternal women, warriors, religious women, with an emphasis on their lives and the choices they made. "Crossing the Divide" reveals clearly the passion these biographers shared for narrating the lives of those they viewed as heroes of an emerging French deaf community. All three used the genre of biography not only as a means of external exploration but also as a way to plumb their innermost selves and to resolve ambivalence about their own deafness.
Carvacrol ameliorates thioacetamide-induced hepatotoxicity by abrogation of oxidative stress, inflammation, and apoptosis in liver of Wistar rats.
The present study was designed to investigate the protective effects of carvacrol against thioacetamide (TAA)-induced oxidative stress, inflammation and apoptosis in liver of Wistar rats. In this study, rats were subjected to concomitant prophylactic oral pretreatment of carvacrol (25 and 50 mg kg(-1) body weight (b.w.)) against the hepatotoxicity induced by intraperitoneal administration of TAA (300 mg kg(-1) b.w.). Efficacy of carvacrol against the hepatotoxicity was evaluated in terms of biochemical estimation of antioxidant enzyme activities, histopathological changes, and expressions of inflammation and apoptosis. Carvacrol pretreatment prevented deteriorative effects induced by TAA through a protective mechanism in a dose-dependent manner that involved reduction of oxidative stress, inflammation and apoptosis. We found that the protective effect of carvacrol pretreatment is mediated by its inhibitory effect on nuclear factor kappa B activation, Bax and Bcl-2 expression, as well as by restoration of histopathological changes against TAA administration. We may suggest that carvacrol efficiently ameliorates liver injury caused by TAA.
Gray level image processing using contrast enhancement and watershed segmentation with quantitative evaluation
Image enhancement and image segmentation are among the most practical approaches in virtually all automated image recognition systems. Feature extraction and recognition have numerous applications in telecommunications, weather forecasting, environment exploration, and medical diagnosis. Adaptive contrast stretching is a typical image enhancement approach, and watershed segmentation is a typical image segmentation approach. Under improper or disturbed illumination, adaptive contrast stretching, which adapts to the intensity distribution, should be applied. Watershed segmentation is a feasible approach to separating different objects automatically, where watershed lines separate the catchment basins. Erosion and dilation operations are essential procedures in watershed segmentation. To avoid over-segmentation, markers for foreground and background can be selected accordingly. Quantitative measures (gray-level energy, discrete entropy, relative entropy, and mutual information) are proposed to evaluate the actual improvement achieved by the two techniques. These methodologies can be easily extended to many other image processing approaches.
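The contrast-stretching step and the discrete-entropy measure can be sketched as follows. The percentile limits and the synthetic low-contrast image are illustrative assumptions, and the watershed stage is omitted:

```python
import numpy as np

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Adaptive linear contrast stretching: map the [lo, hi] percentile
    range of the input onto the full [0, 255] gray-level range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

def discrete_entropy(img):
    """Shannon entropy of the gray-level histogram: H = -sum p * log2(p)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic low-contrast image: gray levels squeezed into [100, 140].
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
stretched = contrast_stretch(img)
print(img.min(), img.max(), "->", stretched.min(), stretched.max())
print("entropy:", round(discrete_entropy(stretched), 3))
```

After stretching, the gray levels span the full 0–255 range, and the entropy measure quantifies how much information the histogram carries before and after enhancement.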
Non-invasive blood oxygen saturation monitoring for neonates using reflectance pulse oximeter
Blood oxygen saturation is one of the key parameters for health monitoring of premature infants in the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate the design of a wearable wireless blood-saturation monitoring system. A reflectance pulse oximeter based on Near Infrared Spectroscopy (NIRS) techniques is applied to enhance the flexibility of measurement at different locations on a neonate's body and the compatibility for integration into a non-invasive monitoring platform, such as a neonatal smart jacket. Prototypes with the reflectance sensors embedded in soft fabrics were built, and the thickness of the device was minimized to optimize comfort. To evaluate the performance of the prototype, experiments on premature babies were carried out at the NICU of Máxima Medical Centre (MMC) in Veldhoven, the Netherlands. The results show that the heart rate and SpO2 measured by the proposed design correspond to the readings of the standard monitor.
Self-Configuration Fuzzy System for Inverse Kinematics of Robot Manipulators
Kinematics is the study of motion without regard to the forces that create it. Generally, kinematics for robot manipulators includes two problems: the forward kinematics problem and the inverse kinematics problem. Because of the complexity of inverse kinematics, it is difficult to find solutions for it. This paper applies a self-configuration fuzzy system to finding solutions for the inverse kinematics of robot manipulators. In this paper, the problem of the fuzzy approach to inverse kinematics is described first. Then a self-configuration fuzzy system is introduced. Based on a small group of given input-output pairs selected by covering the workspace, an initially simple fuzzy model for inverse kinematics is built, including some basic rules and the number and parameters of the membership functions. Then, by applying the given input-output data pairs to this simple fuzzy model, the approximation error can be calculated. Consequently, the optimum rule conclusions, optimized membership functions, and a new structure can be obtained. Furthermore, an overall analysis in the domain of the whole function is carried out instead of concentrating on a subspace. After the optimization problems are solved, a fuzzy system is well defined to solve the inverse kinematics. Finally, simulation verifies this self-configuration fuzzy system for the inverse kinematics of robot manipulators.
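As a point of contrast with the fuzzy approach, the classic two-link planar arm is one of the few manipulators whose inverse kinematics has a closed form. This baseline sketch (the link lengths and target point are arbitrary assumptions) shows the forward/inverse problem the abstract describes:

```python
import math

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics of a planar two-link arm
    (elbow-down branch): returns joint angles (theta1, theta2)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))     # clamp for safety
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def fk_two_link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics: joint angles -> end-effector position."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

target = (1.2, 0.7)                 # an arbitrary reachable point
angles = ik_two_link(*target)
print(fk_two_link(*angles))         # recovers the target up to rounding
```

For arms with more joints or redundancy, no such closed form exists in general, which is exactly the gap that approximation schemes like the paper's fuzzy system aim to fill.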
A Self-Supervised Decision Fusion Framework for Building Detection
In this study, a new building detection framework for monocular satellite images, called self-supervised decision fusion (SSDF), is proposed. The model is based on the idea of self-supervision, which aims to generate training data automatically from each individual test image, without human interaction. This approach allows us to use the advantages of supervised classifiers in a fully automated framework. We combine our previous supervised and unsupervised building detection frameworks to suggest a self-supervised learning architecture. Hence, we borrow the major strength of the unsupervised approach to obtain one of the most important clues, the relation between a building and its cast shadow. This important information is then used to satisfy the requirement of training sample selection. Finally, an ensemble learning algorithm, called fuzzy stacked generalization (FSG), fuses a set of supervised classifiers trained on the automatically generated dataset with various shape, color, and texture features. We assessed the building detection performance of the proposed approach over 19 test sites and compared our results with state-of-the-art algorithms. Our experiments show that the supervised building detection method requires more than 30% of the ground truth (GT) training data to reach the performance of the proposed SSDF method. Furthermore, the SSDF method increases the F-score by 2 percentage points (p.p.) on average compared to the performance of the unsupervised method.
A Circuit-Compatible SPICE model for Enhancement Mode Carbon Nanotube Field Effect Transistors
This paper presents a circuit-compatible compact model for short-channel-length (5 nm–100 nm), quasi-ballistic single-walled carbon nanotube field-effect transistors (CNFETs). For the first time, a universal circuit-compatible CNFET model was implemented in HSPICE. This model includes practical device non-idealities, e.g., the quantum confinement effects in both the circumferential and channel-length directions, acoustic/optical phonon scattering in the channel region, and the resistive source/drain, as well as the real-time dynamic response with a transcapacitance array. This model is valid for CNFETs over a wide diameter range and various chiralities as long as the carbon nanotube (CNT) is semiconducting.
DNW-Controllable triggered voltage of the integrated diode triggered SCR (IDT-SCR) ESD protection device
A novel Integrated Diode Triggered SCR (IDT-SCR), with low capacitance (< 200 fF) and a controllable VT1, is proposed for 1.8 V applications. The triggering is determined by the DNW biasing plus one integrated diode. The fail-current density is 16 times higher, and the leakage 20 times lower, compared to a traditional capacitive-triggered bigFET.
Using Support Vector Machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review
Standard univariate analysis of neuroimaging data has revealed a host of neuroanatomical and functional differences between healthy individuals and patients suffering from a wide range of neurological and psychiatric disorders. Significant only at the group level, however, these findings have had limited clinical translation, and recent attention has turned toward alternative forms of analysis, including the Support Vector Machine (SVM). A type of machine learning, SVM allows categorisation of an individual's previously unseen data into a predefined group using a classification algorithm developed on a training data set. In recent years, SVM has been successfully applied in the context of disease diagnosis, transition prediction and treatment prognosis, using both structural and functional neuroimaging data. Here we provide a brief overview of the method and review those studies that applied it to the investigation of Alzheimer's disease, schizophrenia, major depression, bipolar disorder, presymptomatic Huntington's disease, Parkinson's disease and autistic spectrum disorder. We conclude by discussing the main theoretical and practical challenges associated with the implementation of this method in the clinic and possible future directions.
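The core idea of training a classifier on labeled data and then categorizing unseen cases can be sketched with a tiny linear SVM trained by hinge-loss sub-gradient descent (a Pegasos-style stand-in for a full SVM library; the toy "patient"/"control" feature vectors below are invented for illustration):

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent on the regularized hinge loss:
    minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x + b)))."""
    rng = random.Random(seed)
    samples = list(data)
    w, b, t = [0.0, 0.0], 0.0, 0
    for _ in range(epochs):
        rng.shuffle(samples)
        for x, y in samples:
            t += 1
            eta = 1.0 / (lam * t)                    # decaying step size
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            w = [wi * (1 - eta * lam) for wi in w]   # shrink (regularizer)
            if margin < 1:                           # hinge is active
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy 2-D stand-ins for "patient" (+1) vs "control" (-1) feature vectors.
patients = [((2.0 + 0.1 * i, 2.0 - 0.1 * i), 1) for i in range(5)]
controls = [((-2.0 - 0.1 * i, -2.0 + 0.1 * i), -1) for i in range(5)]
w, b = train_linear_svm(patients + controls)
correct = sum(predict(w, b, x) == y for x, y in patients + controls)
print(correct, "of 10 training points classified correctly")
```

In the neuroimaging setting the two-dimensional toy features would be replaced by high-dimensional voxel or region features, and the reported accuracy would of course come from held-out data rather than the training set.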
Distribution System Planning With Reliability
This paper presents a model for solving the multistage planning problem of a distribution network. The objective function to be minimized is the net present value of the investment cost to add, reinforce or replace feeders and substations, losses cost, and operation and maintenance cost. The model considers three levels of load in each node and two investment alternatives for each resource to be added, reinforced or replaced. The nonlinear objective function is approximated by a piecewise linear function, resulting in a mixed integer linear model that is solved using standard mathematical programming. The model allows us to find multiple solutions in addition to the optimal one, helping the decision maker to analyze and choose from a pool of solutions. In addition to the optimization problem, reliability indexes and associated costs are computed for each solution, based on the regulation model used in Brazil. Numerical results and discussion are presented for an illustrative 27-node test network.
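The linearization step described above can be sketched generically. The quadratic cost and the four segments below are illustrative assumptions, not the paper's model; the point is only how a piecewise linear surrogate replaces a nonlinear term so that standard mixed integer linear programming applies:

```python
def piecewise_linear(f, lo, hi, n_seg):
    """Approximate f on [lo, hi] by linear interpolation between
    n_seg + 1 evenly spaced breakpoints, as done when a nonlinear
    cost is linearized for a mixed integer linear program."""
    h = (hi - lo) / n_seg
    xs = [lo + i * h for i in range(n_seg + 1)]
    ys = [f(x) for x in xs]

    def approx(x):
        i = min(int((x - lo) / h), n_seg - 1)   # segment index
        t = (x - xs[i]) / h                     # position within segment
        return (1 - t) * ys[i] + t * ys[i + 1]

    return approx

def losses(x):
    return x * x      # losses grow with the square of the flow

approx = piecewise_linear(losses, 0.0, 1.0, 4)
worst = max(abs(approx(k / 1000) - losses(k / 1000)) for k in range(1001))
print(worst)          # max error h^2/4 = 0.015625 with 4 segments
```

Doubling the number of segments quarters the worst-case error for a quadratic cost, which is the usual trade-off between model size and accuracy in such planning formulations.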
An Efficient Method for Vehicle License Plate Detection in Complex Scenes
In this paper, we propose an efficient method for license plate localization in images with varied conditions and complex backgrounds. First, to mitigate problems such as low quality and low contrast in vehicle images, image contrast is enhanced by two different methods and the better result is selected for further processing. Second, vertical edges of the enhanced image are extracted with a Sobel mask. Most of the noise and background edges are then removed by an effective algorithm. The output of this stage is passed to a morphological filter to extract candidate regions, and finally we use several geometrical features, such as region area, aspect ratio, and edge density, to eliminate non-plate regions and segment the plate from the input car image. The method was evaluated on real images captured under different imaging conditions. Experimental results show that the proposed method is nearly independent of environmental conditions such as lighting, camera angle, camera distance from the vehicle, and license plate rotation.
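The vertical-edge stage of the pipeline can be sketched as follows (a minimal pure-Python illustration on a tiny synthetic image, not the paper's implementation; the threshold value is arbitrary).

```python
# Vertical-edge extraction with the Sobel x-kernel, the second step of the
# plate-localisation pipeline: plates are rich in vertical strokes, so strong
# horizontal-gradient responses mark text-like candidate regions.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def sobel_vertical(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = abs(s)
    return out

# A bright stripe (columns 2..4) on a dark background: its two sides
# produce strong vertical-edge responses.
img = [[255 if 2 <= x <= 4 else 0 for x in range(8)] for _ in range(6)]
edges = sobel_vertical(img)

# Edge density: the fraction of strong responses is one of the geometrical
# features later used to keep plate-like regions.
strong = sum(v > 300 for row in edges for v in row)
```

Inside the stripe the gradient is zero, so only its left and right borders respond, which is exactly the behaviour that makes vertical-edge maps useful for isolating character-dense plate regions.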
Biocomplexity: adaptive behavior in complex stochastic dynamical systems.
Existing methods of complexity research are capable of describing certain specifics of biological systems over a given narrow range of parameters, but often they cannot account for the initial emergence of complex biological systems, their evolution, state changes, and sometimes-abrupt state transitions. Chaos tools have the potential of reaching the essential driving mechanisms that organize matter into living substances. Our basic thesis is that while established chaos tools are useful in describing complexity in physical systems, they lack the power to grasp the essence of the complexity of life. We illustrate this thesis with the sensory perception of vertebrates and the operation of the vertebrate brain. The study of complexity at the level of biological systems cannot be accomplished with the analytical tools developed for non-living systems. We propose a new approach to chaos research that has the potential of characterizing biological complexity. Our study is biologically motivated and solidly based in the biodynamics of higher brain function. Our biocomplexity model has the following features: (1) it is high-dimensional, but the dimensionality is not rigid; rather, it changes dynamically; (2) it is not autonomous and continuously interacts and communicates with individual environments that are selected by the model from the infinitely complex world; (3) as a result, it is adaptive and modifies its internal organization in response to environmental factors, changing them to meet its own goals; (4) it is a distributed object that evolves both in space and time towards goals that are continually re-shaped in the light of cumulative experience stored in memory; (5) it is driven and stabilized by noise of internal origin through self-organizing dynamics. The resulting theory of stochastic dynamical systems is a mathematical field at the interface of dynamical system theory and stochastic differential equations.
This paper outlines several possible avenues for analyzing these systems. Of special interest are input-induced and noise-generated (spontaneous) state transitions and related stability issues.
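A noise-generated state transition of the kind mentioned above can be illustrated with a minimal sketch (ours, not the paper's model): Euler–Maruyama integration of a double-well stochastic differential equation, where internal noise occasionally kicks the trajectory from one stable state to the other. All parameter values are arbitrary.

```python
import math
import random

# Euler-Maruyama integration of the double-well SDE
#     dx = (x - x**3) dt + sigma dW,
# a toy model of noise-driven transitions between two stable states
# (the wells near x = -1 and x = +1).
def simulate(sigma, dt=0.01, steps=20000, x0=-1.0, seed=1):
    random.seed(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        drift = x - x**3                       # deterministic restoring force
        noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x += drift * dt + noise
        path.append(x)
    return path

path = simulate(sigma=0.6)
# With sufficient noise the trajectory, started in the left well,
# spontaneously visits the right well as well.
visited_left = any(p < -0.8 for p in path)
visited_right = any(p > 0.8 for p in path)
```

Without noise (`sigma=0`) the trajectory relaxes into one well and stays there forever; the transitions are entirely noise-generated, which is the qualitative point of feature (5) in the abstract.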
Performance of Diamond and CBN Single-Layered Grinding Wheels in Grinding Titanium
Pathways for irony detection in tweets
Posts on Twitter allow users to express ideas and opinions in a very dynamic way. The volume of data available on this medium is extremely high, and it may provide relevant clues regarding the public's judgement of a certain product, event, service, etc. While in standard sentiment analysis the most common task is to classify utterances according to their polarity, detecting ironic senses clearly represents a big challenge for Natural Language Processing. By observing a corpus constituted of tweets, we propose a set of patterns that might suggest ironic/sarcastic statements. We thus developed special clues for irony detection through the implementation and evaluation of this set of patterns.
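Pattern-based clues of this kind can be sketched as a handful of regular expressions (these particular patterns and names are illustrative, not the paper's actual clue set).

```python
import re

# A few surface patterns of the kind used as irony/sarcasm clues in tweets:
# self-labelling hashtags, heavy punctuation, "air quotes", and laughter.
CLUES = {
    "hashtag":  re.compile(r"#(irony|ironic|sarcasm|sarcastic)\b", re.I),
    "punct":    re.compile(r"[!?]{2,}"),            # e.g. "!!!", "?!?"
    "quotes":   re.compile(r'"[^"]+"'),             # quoted "praise"
    "laughter": re.compile(r"\b((?:ha){2,}h?|lo+l)\b", re.I),
}

def irony_clues(tweet):
    """Return the sorted names of all clue patterns that fire on a tweet."""
    return sorted(name for name, pat in CLUES.items() if pat.search(tweet))
```

A classifier would then treat the fired clues as features rather than as a verdict, since none of these surface signals is decisive on its own.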
Deploying secure multi-party computation for financial data analysis
We show how to collect and analyze financial data for a consortium of ICT companies using secret sharing and secure multiparty computation (MPC). This is the first time that an actual MPC computation on real data was performed over the internet with computing nodes spread geographically apart. We describe the technical solution and present user feedback revealing that MPC techniques give sufficient assurance for data donors to submit their sensitive information.
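The secret-sharing idea underlying such a deployment can be sketched in a few lines (a generic additive-sharing illustration, not the system described in the paper; the modulus and figures are arbitrary).

```python
import random

# Additive secret sharing over a prime field: each company splits its
# confidential figure into n random-looking shares, one per computing node;
# the nodes aggregate shares locally and only the total is reconstructed.
P = 2**61 - 1  # a Mersenne prime used as the field modulus

def share(secret, n=3):
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)  # shares sum to the secret mod P
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Three data donors, each with a confidential revenue figure.
secrets = [1200, 3400, 560]
all_shares = [share(s) for s in secrets]

# Node i sums the i-th share of every donor -- it never sees any secret,
# because each individual share is uniformly random.
node_sums = [sum(col) % P for col in zip(*all_shares)]
total = reconstruct(node_sums)  # equals sum(secrets)
```

Sums (and by extension means and other linear statistics) come essentially for free in this scheme; multiplications require interaction between the nodes, which is where full MPC protocols come in.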
Structural modeling and analysis of dengue-mediated inhibition of interferon signaling pathway.
Dengue virus (DENV) belongs to the family Flaviviridae and can cause major health problems worldwide, including dengue fever and dengue shock syndrome. The DENV replicon in human cells inhibits interferon α and β with the help of its non-structural proteins. Non-structural protein 5 (NS5) of DENV is responsible for the proteasome-mediated degradation of signal transducer and activator of transcription (STAT) 2 protein, which has been implicated in the development of resistance against the interferon-mediated antiviral effect. This degradation of STAT2 primarily occurs with the help of E3 ubiquitin ligases. Seven in absentia homologue (SIAH) 2 is a host protein that can mediate the ubiquitination of proteins and is known for its interaction with NS5. In this study, a comprehensive computational analysis was performed to characterize the protein-protein interactions between NS5, SIAH2, and STAT2 to gain insight into the residues and sites of interaction between these proteins. The objective of the study was to structurally characterize the NS5-STAT2, SIAH2-STAT2, and NS5-SIAH2 interactions and to determine the possible reaction pattern for the degradation of STAT2. Docking and physicochemical studies indicated that DENV NS5 may first interact with the host SIAH2, which can then proceed to bind STAT2 from the SIAH2 side. These implications are reported for the first time and require validation by wet-lab studies.
Light-wave mixing and scattering with quantum gases.
We present a semiclassical theoretical framework on light-wave mixing and scattering with single-component quantum gases. We show that these optical processes originating from elementary excitations with dominant collective atomic recoil motion are stimulated Raman or hyper-Raman in nature. In the forward direction the wave-mixing process, which is the most efficient process in normal gases, is strongly reduced by the condensate structure factor even though the Bogoliubov dispersion relation automatically compensates the optical-wave phase mismatch. In the backward direction, however, the free-particle-like condensate structure factor and Bogoliubov dispersion result in highly efficient light-wave mixing and collective atomic recoil motion that are enhanced by a stimulated hyper-Raman gain and a very narrow two-photon motional state resonance.
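The abstract's argument turns on two standard quantities of Bogoliubov theory, which in common textbook notation (symbols not defined in the abstract: n is the condensate density, g the contact-interaction strength, m the atomic mass) read:

```latex
% Bogoliubov dispersion of an elementary excitation at wave number k:
E(k) = \sqrt{\frac{\hbar^2 k^2}{2m}\left(\frac{\hbar^2 k^2}{2m} + 2 g n\right)}

% Static structure factor of the condensate, which suppresses the
% small-k (forward) response even when the dispersion compensates the
% optical-wave phase mismatch:
S(k) = \frac{\hbar^2 k^2 / 2m}{E(k)} \;\xrightarrow{\;k \to 0\;}\; 0
```

At large k (the backward direction, where the momentum transfer is largest) E(k) approaches the free-particle energy and S(k) approaches 1, consistent with the abstract's contrast between suppressed forward mixing and efficient backward scattering.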
Formal representations of uncertainty
The recent development of uncertainty theories that account for the notion of belief is linked to the emergence, in the twentieth century, of Decision Theory and Artificial Intelligence. Nevertheless, this topic was dealt with very differently by each area. Decision Theory insisted on the necessity of founding representations on the empirical observation of individuals choosing between courses of action, regardless of any other type of information. Any axiom in the theory should be amenable to empirical validation. Probabilistic representations of uncertainty can then be justified from a subjectivist point of view, without any necessary reference to frequency. Degrees of probability then evaluate the extent to which an agent believes in the occurrence of an event or in the truth of a proposition. In contrast, Artificial Intelligence adopted a more introspective approach, aiming to formalize intuitions and reasoning processes through the statement of reasonable axioms, often without reference to probability. Indeed, until the nineties Artificial Intelligence essentially focused on purely qualitative and ordinal (in fact, logical) representations.
Security Issues for Cloud Computing
In this paper we discuss security issues for cloud computing, including storage security, data security, network security, and secure virtualization, and then select some topics and describe them in more detail. In particular, we first discuss a scheme for secure third-party publication of documents in a cloud. Next, we discuss secure federated query processing with MapReduce and Hadoop. We then discuss the use of secure coprocessors for cloud computing. Finally, we discuss an XACML implementation for Hadoop. We believe that building trusted applications from untrusted components will be a major aspect of secure cloud computing.
A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains
In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between the clinical care and research domains, this paper introduces a unified methodology and a supporting framework that bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework that extends the ISO/IEC 11179 standard and enables the integration of data element registries through Linked Open Data (LOD) principles, whereby each Common Data Element (CDE) can be uniquely referenced, queried, and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources, enabling semantic links with other CDEs, terminology systems, and implementation-dependent content models, hence facilitating semantic search, more effective reuse, and semantic interoperability across different application domains. Several important efforts address semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE, and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability to a greater extent. The open-source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems.
Dynamic pose graph SLAM: Long-term mapping in low dynamic environments
Maintaining a map of an environment that changes over time is a critical challenge in the development of persistently autonomous mobile robots. Many previous approaches to mapping assume a static world. In this work we incorporate the time dimension into the mapping process to enable a robot to maintain an accurate map while operating in dynamic environments. This paper presents Dynamic Pose Graph SLAM (DPG-SLAM), an algorithm designed to enable a robot to remain localized in an environment that changes substantially over time. Using incremental smoothing and mapping (iSAM) as the underlying SLAM state estimation engine, the Dynamic Pose Graph evolves over time as the robot explores new places and revisits previously mapped areas. The approach has been implemented for planar indoor environments, using laser scan matching to derive constraints for SLAM state estimation. Laser scans for the same portion of the environment at different times are compared to perform change detection; when sufficient change has occurred in a location, the dynamic pose graph is edited to remove old poses and scans that no longer match the current state of the world. Experimental results are shown for two real-world dynamic indoor laser data sets, demonstrating the ability to maintain an up-to-date map despite long-term environmental changes.
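The change-detection-and-pruning step can be sketched as follows (a toy illustration in the spirit of the approach, not the paper's implementation: the scan representation, graph structure, and threshold are all invented for clarity).

```python
# Toy change detection and pose pruning in the spirit of DPG-SLAM: two laser
# "scans" of the same place are compared as sets of occupied grid cells, and
# when too little of an old scan is re-observed, its pose is removed from the
# pose graph so the map tracks the current state of the world.
def changed(old_scan, new_scan, keep_ratio=0.5):
    """True if less than keep_ratio of the old scan is still observed."""
    overlap = len(old_scan & new_scan) / len(old_scan)
    return overlap < keep_ratio

def prune(pose_graph, place, pose_id, new_scan):
    kept = []
    for old_id, scan in pose_graph[place]:
        if not changed(scan, new_scan):
            kept.append((old_id, scan))   # still matches the world: keep it
    kept.append((pose_id, new_scan))      # the fresh observation is added
    pose_graph[place] = kept

# One old pose observing a corridor wall as four occupied cells.
graph = {"corridor": [("p0", {(0, 0), (0, 1), (0, 2), (0, 3)})]}

# On revisit, a door has opened: only one old cell survives, new ones appear,
# so the stale pose "p0" is pruned and replaced by the new one.
prune(graph, "corridor", "p1", {(0, 0), (1, 1), (1, 2), (1, 3)})
```

In the real system the comparison is done by laser scan matching and the edit removes poses and constraints from the iSAM estimation problem, but the bookkeeping pattern is the same: old observations are discarded only once the evidence of change is sufficient.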
Self-categorization with a novel mixed-race group moderates automatic social and racial biases.
People perceive and evaluate others according to social categories. Yet social perception is complicated by the fact that people have multiple social identities, and self-categorization with these identities shifts from one situation to another. Two experiments examined whether self-categorization with a novel mixed-race group would override automatic racial bias. Participants assigned to a mixed-race group had more positive automatic evaluations of Black ingroup than Black outgroup members. Comparing these evaluations to Black and White faces unaffiliated with either group indicated this preference was driven by ingroup bias rather than outgroup derogation. Moreover, both outgroup and unaffiliated faces elicited automatic racial bias (White > Black), suggesting that automatic evaluations are sensitive to both the current intergroup context (positive evaluations of novel ingroup members) and race (racial bias toward outgroup and unaffiliated faces). These experiments provide evidence that self-categorization can override automatic racial bias and that automatic evaluations shift between and within social contexts.
Hepatoprotective effect of tocopherol against isoniazid and rifampicin induced hepatotoxicity in albino rabbits.
Antitubercular drug-induced hepatotoxicity is a major hurdle for the effective treatment of tuberculosis. The present study was undertaken to assess the hepatoprotective potential of tocopherol (50 mg/kg and 100 mg/kg, ip) and to compare it with cimetidine (120 mg/kg, ip). Hepatotoxicity was produced by giving a combination of isoniazid (INH, 50 mg/kg, po) and rifampicin (RMP, 100 mg/kg, po) to albino rabbits for 7 days. Liver injury was assessed by estimating the levels of alanine transaminase (ALT) and argininosuccinic acid lyase (ASAL) in serum and by histopathological examination of the liver. Results revealed that pretreatment with the high dose of tocopherol (100 mg/kg) prevented both the biochemical and the histopathological evidence of hepatic damage induced by the INH and RMP combination. Moreover, tocopherol (100 mg/kg) was found to be a more effective hepatoprotective agent than cimetidine.