Exploring Human-Like Attention Supervision in Visual Question Answering
Attention mechanisms have been widely applied in the Visual Question Answering (VQA) task, as they help to focus on the area-of-interest of both visual and textual information. To answer the questions correctly, the model needs to selectively target different areas of an image, which suggests that an attention-based model may benefit from explicit attention supervision. In this work, we aim to address the problem of adding attention supervision to VQA models. Since there is a lack of human attention data, we first propose a Human Attention Network (HAN) to generate human-like attention maps, trained on a recently released dataset called the Human ATtention Dataset (VQA-HAT). Then, we apply the pre-trained HAN to the VQA v2.0 dataset to automatically produce human-like attention maps for all image-question pairs. The generated human-like attention map dataset for VQA v2.0 is named the Human-Like ATtention (HLAT) dataset. Finally, we apply human-like attention supervision to an attention-based VQA model. The experiments show that adding human-like supervision yields more accurate attention together with better performance, demonstrating a promising future for human-like attention supervision in VQA.
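One plausible way to realize the attention supervision described above is a divergence penalty between the model's attention map and the human-like map. The sketch below is illustrative only, assuming both maps are already aligned on the same spatial grid; the function names and the KL-divergence choice are assumptions, not the paper's exact loss.

```python
import numpy as np

def attention_supervision_loss(model_att, human_att, eps=1e-8):
    """KL divergence between a model attention map and a human-like map.

    Both inputs are 2-D arrays over the same spatial grid; they are
    normalized to probability distributions before comparison. This is
    one plausible supervision term, not necessarily the paper's exact loss.
    """
    p = human_att.flatten() + eps
    q = model_att.flatten() + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical usage: combine with the usual VQA answer loss.
# total_loss = answer_loss + lambda_att * attention_supervision_loss(att, hlat_map)
```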
Modeling and Design of Multilevel Bang–Bang CDRs in the Presence of ISI and Noise
Multilevel clock-and-data recovery (CDR) systems are analyzed, modeled, and designed. A stochastic analysis provides probability density functions that are used to estimate the effect of intersymbol interference (ISI) and additive white noise on the characteristics of the phase detector (PD) in the CDR. A novel slope-detector-based multilevel bang-bang CDR architecture is proposed and modeled using the stochastic analysis, and its performance is compared with a typical multilevel Alexander PD-based CDR for equal loop bandwidths. The rms jitter of the CDRs is predicted using a linear jitter model and a Markov chain, and verified using behavioral simulations. Jitter tolerance simulations are also employed to compare the two CDRs. Both analytical calculations and behavioral simulations predict that, at equal loop bandwidths, the proposed architecture is superior to the Alexander-type CDR at large ISI and low signal-to-noise ratios.
Adapting the Lesk Algorithm for Word Sense Disambiguation to WordNet by Satanjeev Banerjee
All human languages have words that can mean different things in different contexts. Word sense disambiguation is the process of automatically figuring out the intended meaning of such a word when used in a sentence. One of the several approaches proposed in the past is Michael Lesk’s 1986 algorithm. This algorithm is based on two assumptions. First, when two words are used in close proximity in a sentence, they must be talking of a related topic and second, if one sense each of the two words can be used to talk of the same topic, then their dictionary definitions must use some common words. For example, when the words ”pine cone” occur together, they are talking of ”evergreen trees”, and indeed one meaning each of these two words has the words ”evergreen” and ”tree” in their definitions. Thus we can disambiguate neighbouring words in a sentence by comparing their definitions and picking those senses whose definitions have the most number of common words. The biggest drawback of this algorithm is that dictionary definitions are often very short and just do not have enough words for this algorithm to work well. We deal with this problem by adapting this algorithm to the semantically organized lexical database called WordNet. Besides storing words and their meaning like a normal dictionary, WordNet also ”connects” related words together. We overcome the problem of short definitions by looking for common words not only between the definitions of the words being disambiguated, but also between the definitions of words that are closely related to them in WordNet. We show that our algorithm achieves an 83% improvement in accuracy over the standard Lesk algorithm, and that it compares favourably with other systems evaluated on the same data.
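The extended-gloss idea can be made concrete with a small sketch using NLTK's WordNet interface: score sense pairs by counting words shared between glosses augmented with the glosses of related synsets. This is a simplification for illustration (only hypernyms and hyponyms are pulled in, and overlaps are counted word by word), not Banerjee's exact scoring scheme.

```python
from itertools import product
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet corpus

def extended_gloss(synset):
    """Gloss of a synset plus glosses of a few closely related synsets."""
    words = synset.definition().split()
    for rel in synset.hypernyms() + synset.hyponyms():
        words += rel.definition().split()
    return set(w.lower() for w in words)

def lesk_pair(word1, word2):
    """Pick the sense pair whose extended glosses share the most words."""
    best, best_overlap = None, -1
    for s1, s2 in product(wn.synsets(word1), wn.synsets(word2)):
        overlap = len(extended_gloss(s1) & extended_gloss(s2))
        if overlap > best_overlap:
            best, best_overlap = (s1, s2), overlap
    return best

# e.g. lesk_pair('pine', 'cone') should favour the tree and fruit senses.
```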
Sensitivity to change of AIMS2 and AIMS2-SF components in comparison to M-HAQ and VAS-pain.
OBJECTIVE To examine sensitivity to change of Dutch versions of AIMS2 (arthritis impact measurement scales-2) and AIMS2-SF (short form) components, in comparison with M-HAQ (modified health assessment questionnaire) and the 100 mm visual analogue scale for pain (VAS-pain) in patients with rheumatoid arthritis. METHODS 218 patients participated in a study on patient education. Participants completed the Dutch AIMS2, M-HAQ, and VAS-pain at baseline and after one year; 165 completed both assessments. The education programme did not have any effect on health status. Patients were classified according to change over one year in their responses to the AIMS2 question about general health perception: improved health (n = 32), no change (n = 101), and poorer health (n = 32). Changes in scores over one year were tested with paired t tests, and standardised response means were calculated for AIMS2 and AIMS2-SF components, M-HAQ total score, and VAS-pain in the three classifications of change in health perception. RESULTS AIMS2 and AIMS2-SF physical, symptom, and affect components showed similar sensitivity to change. The physical and symptom components performed better than M-HAQ and VAS-pain. AIMS2 and AIMS2-SF social interaction and role components were not sensitive to changes in general health perception. The role component was only applicable in 63 patients, because the others were unemployed, disabled, or retired. CONCLUSIONS AIMS2-SF is a good alternative to the AIMS2 long form for the assessment of health status in rheumatoid arthritis, and is preferable to M-HAQ and VAS-pain. Use of the AIMS2-SF makes it easier and less costly to collect data and reduces the burden on patients.
Potential of compost tea for suppressing plant diseases
Numerous studies have demonstrated that water-based compost preparations, referred to as compost tea and compost-water extract, can suppress phytopathogens and plant diseases. Despite its potential, compost tea has generally been considered inadequate for use as a biocontrol agent in conventional cropping systems but important to organic producers who have limited disease control options. The major impediments to the use of compost tea have been the less-than-desirable and inconsistent levels of plant disease suppression as influenced by compost tea production and application factors, including compost source and maturity, brewing time and aeration, dilution and application rate, and application frequency. Although the mechanisms involved in disease suppression are not fully understood, sterilization of compost tea has generally resulted in a loss of disease suppressiveness. This indicates that the mechanisms of suppression are often, or predominantly, biological, although physico-chemical factors have also been implicated. The increasing use of molecular approaches, such as metagenomics, metaproteomics, metatranscriptomics and metaproteogenomics, should prove useful in better understanding the relationships between microbial abundance, diversity, functions and the disease-suppressive efficacy of compost tea. Such investigations are crucial in developing protocols for optimizing the compost tea production process so as to maximize the disease-suppressive effect without exposing the manufacturer or user to the risk of human pathogens. To this end, it is recommended that compost tea be used as part of an integrated disease management system.
UAV Formation Control: Theory and Application
Unmanned airborne vehicles (UAVs) are finding use in military operations and starting to find use in civilian operations. UAVs often fly in formation, meaning that the distances between individual pairs of UAVs stay fixed, and the formation of UAVs in a sense moves as a rigid entity. In order to maintain the shape of a formation, it is enough to maintain the distance between a certain number of the agent pairs; this will result in the distance between all pairs being constant. We describe how to characterize the choice of agent pairs to secure this shape-preserving property for a planar formation, and we describe decentralized control laws which will stably restore the shape of a formation when the distances between nominated agent pairs become unequal to their prescribed values. A mixture of graph theory, nonlinear systems theory and linear algebra is relevant. We also consider a particular practical problem of flying a group of three UAVs in an equilateral triangle, with the centre of mass following a nominated trajectory reflecting constraints on turning radius, and with a requirement that the speeds of the UAVs are constant, and nearly (but not necessarily exactly) equal.
Sparsity regularization for image reconstruction with Poisson data
This work investigates three penalized-likelihood expectation maximization (EM) algorithms for image reconstruction with Poisson data where the images are known a priori to be sparse in the space domain. The penalty functions considered are the ℓ1 norm, the ℓ0 "norm," and a penalty function based on the sum of logarithms of pixel values, $R(x) = \sum_{j=1}^{n_p} \log\left(\frac{x_j}{\delta} + 1\right)$. Our results show that the ℓ1-penalized algorithm reconstructs scaled versions of the maximum-likelihood (ML) solution, which does not improve the sparsity over the traditional ML estimate. Due to the singularity of the Poisson log-likelihood at zero, the ℓ0-penalized EM algorithm is equivalent to the maximum-likelihood EM algorithm. We demonstrate that the penalty based on the sum of logarithms produces sparser images than the ML solution. We evaluated these algorithms using experimental data from a position-sensitive Compton-imaging detector, where the spatial distribution of photon-emitters is known to be sparse.
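For concreteness, the log-based penalty above can be written directly from the formula; the short NumPy sketch below is illustrative only and just evaluates the penalty for a given image and δ.

```python
import numpy as np

def log_sum_penalty(x, delta):
    """R(x) = sum_j log(x_j / delta + 1): the sparsity-promoting penalty
    discussed above; x is a non-negative image flattened to a vector."""
    x = np.asarray(x, dtype=float).ravel()
    return float(np.sum(np.log(x / delta + 1.0)))

# Smaller delta penalizes small non-zero pixels more strongly relative to
# large ones, which is what pushes the penalized EM estimate toward sparsity.
```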
Visual attention: control, representation, and time course.
Three central problems in the recent literature on visual attention are reviewed. The first concerns the control of attention by top-down (or goal-directed) and bottom-up (or stimulus-driven) processes. The second concerns the representational basis for visual selection, including how much attention can be said to be location- or object-based. Finally, we consider the time course of attention as it is directed to one stimulus after another.
Learning Extraction Patterns for Subjective Expressions
This paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective (opinionated) expressions. High-precision classifiers label unannotated data to automatically create a large training set, which is then given to an extraction pattern learning algorithm. The learned patterns are then used to identify more subjective sentences. The bootstrapping process learns many subjective patterns and increases recall while maintaining high precision.
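The bootstrapping loop described above can be summarized in a few lines of control flow. The sketch below is a toy version with placeholder components (the real system uses high-precision subjectivity classifiers and a syntactic extraction-pattern learner); it is included only to make the loop concrete, not to reproduce the paper's method.

```python
def bootstrap_subjectivity(unlabeled, hp_subjective, hp_objective,
                           learn_patterns, match, n_iter=5):
    """Toy bootstrapping loop: high-precision classifiers label sentences,
    patterns are learned from the subjective pool (learn_patterns returns a
    set), and the learned patterns label more sentences on the next pass."""
    patterns = set()
    labeled = {}  # sentence -> 'subj' or 'obj'
    for _ in range(n_iter):
        for sent in unlabeled:
            if sent in labeled:
                continue
            if hp_subjective(sent) or any(match(p, sent) for p in patterns):
                labeled[sent] = 'subj'
            elif hp_objective(sent):
                labeled[sent] = 'obj'
        new_patterns = learn_patterns(
            [s for s, y in labeled.items() if y == 'subj'])
        if new_patterns <= patterns:  # no new patterns learned: stop
            break
        patterns |= new_patterns
    return patterns, labeled
```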
Collision Detection and Response for Computer Animation
When several objects are moved about by computer animation, there is the chance that they will interpenetrate. This is often an undesired state, particularly if the animation is seeking to model a realistic world. Two issues are involved: detecting that a collision has occurred, and responding to it. The former is fundamentally a kinematic problem, involving the positional relationship of objects in the world. The latter is a dynamic problem, in that it involves predicting behavior according to physical laws. This paper discusses collision detection and response in general, presents two collision detection algorithms, describes modeling collisions of arbitrary bodies using springs, and presents an analytical collision response algorithm for articulated rigid bodies that conserves linear and angular momentum.
SemSearch: A Search Engine for the Semantic Web
Semantic search promises to produce precise answers to user queries by taking advantage of the availability of explicit semantics of information in the context of the semantic web. Existing tools have been primarily designed to enhance the performance of traditional search technologies but with little support for naive users, i.e., ordinary end users who are not necessarily familiar with domain specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine, which pays special attention to this issue by hiding the complexity of semantic search from end users and making it easy to use and effective. In contrast with existing semantic-based keyword search engines which typically compromise their capability of handling complex user queries in order to overcome the problem of knowledge overhead, SemSearch not only overcomes the problem of knowledge overhead but also supports complex queries. Further, SemSearch provides comprehensive means to produce precise answers that on the one hand satisfy user queries and on the other hand are self-explanatory and understandable by end users. A prototype of the search engine has been implemented and applied in the semantic web portal of our lab. An initial evaluation shows promising results.
Deep Learning for Photoacoustic Tomography from Sparse Data
The development of fast and accurate image reconstruction algorithms is a central aspect of computed tomography. In this paper, we investigate this issue for the sparse data problem in photoacoustic tomography (PAT). We develop a direct and highly efficient reconstruction algorithm based on deep learning. In our approach, image reconstruction is performed with a deep convolutional neural network (CNN), whose weights are adjusted prior to the actual image reconstruction based on a set of training data. The proposed reconstruction approach can be interpreted as a network that uses the PAT filtered backprojection algorithm for the first layer, followed by the U-net architecture for the remaining layers. Actual image reconstruction with deep learning consists of one evaluation of the trained CNN, which does not require time-consuming solution of the forward and adjoint problems. At the same time, our numerical results demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative approaches for PAT from sparse data. Keywords: photoacoustic tomography, sparse data, image reconstruction, deep learning, convolutional neural networks, inverse problems.
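A compact PyTorch sketch of the two-stage idea: a (given) filtered backprojection produces an initial image from sparse data, and a trained CNN refines it. The tiny residual network below merely stands in for the U-net used in the paper, and `fbp_reconstruct` is a placeholder assumed to exist; this is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyRefiner(nn.Module):
    """Small residual CNN standing in for the U-net post-processing stage."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # learn a correction to the FBP image

# Assumed pipeline (fbp_reconstruct is a hypothetical PAT FBP step):
#   x_fbp = fbp_reconstruct(sparse_measurements)   # tensor of shape (B, 1, H, W)
#   x_rec = TinyRefiner()(x_fbp)                   # one evaluation of the CNN
```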
AILA - design of an autonomous mobile dual-arm robot
This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates, in a single platform, the means to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, a 4-DOF torso, a 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. In addition, an adjustable body sustains the dual-arm system, providing an extended workspace, and mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8 kg and weigh 5.5 kg, thus achieving a payload-to-weight ratio of 1.45. The paper provides an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.
ReSuMe-New Supervised Learning Method for Spiking Neural Networks
In this report I introduce ReSuMe, a new supervised learning method for Spiking Neural Networks. The research on ReSuMe has been primarily motivated by the need for an efficient learning method for control of movement for the physically disabled. However, thorough analysis of the ReSuMe method reveals its suitability not only to the task of movement control, but also to other real-life applications including modeling, identification and control of diverse non-stationary, nonlinear objects. ReSuMe integrates the idea of learning windows, known from the spike-based Hebbian rules, with a novel concept of remote supervision. A general overview of the method, the basic definitions, the network architecture and the details of the learning algorithm are presented. The properties of ReSuMe such as locality, computational simplicity and suitability for online processing are discussed. ReSuMe's learning abilities are illustrated in a verification experiment.
High-Voltage Spark Gap in a Regime of Subnanosecond Switching
This paper describes an investigation of a high-voltage spark gap under conditions of subnanosecond switching. The high-voltage pulsed generator "Sinus" is used to charge a coaxial line loaded with a high-pressure spark gap. The typical charging time for the coaxial line is in the range of 1 to 2 ns, the maximum charging voltage is up to 250 kV, and the range of pressures for the gap is from 2 to 9 MPa. Theoretical models of the switching process on the subnanosecond time scale are examined. A general equation for the temporal behavior of the gap voltage, which is applicable to both the avalanche model and the Rompe-Weitzel model, has been obtained. It is revealed that the approach based on the avalanche model offers a possibility to describe the switching process only at extremely high overvoltages. The Rompe-Weitzel model demonstrates good agreement with the experimental data both for the conditions of static breakdown and for the regime of a highly overvolted gap. Comparison of experimental and calculated voltage waveforms has made it possible to estimate an empirical constant in the Rompe-Weitzel model (the price of ionization). This constant varies from 420 to 920 eV, depending on the initial electric field in the gap.
Curvature preserving fingerprint ridge orientation smoothing using Legendre polynomials
Smoothing fingerprint ridge orientation involves a fundamental trade-off: too little smoothing results in noisy orientation fields (OF), while too much smoothing harms high-curvature areas, especially singular points (SP). In this paper we present a fingerprint ridge orientation model based on Legendre polynomials. The motivation for the proposed method can be found by analysing the almost exclusively used method in the literature for orientation smoothing, proposed by Witkin and Kass (1987) more than two decades ago. The main contribution of this paper argues that the vectorial data (sine and cosine data) should be smoothed in a coupled way and that the approximation error should not be evaluated employing vectorial data. For evaluating the proposed method we use a Poincare-index-based SP detection algorithm. The experiments show that, in comparison to competing methods, the proposed method has improved orientation smoothing capabilities, especially in high-curvature areas.
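A rough sketch of the flavour of coupled smoothing argued for above: a 2-D Legendre basis is fitted jointly, by least squares, to the doubled-angle vector field (cos 2θ, sin 2θ). The basis degree and the exact coupling below are illustrative assumptions, not the paper's model.

```python
import numpy as np
from numpy.polynomial.legendre import legvander2d

def smooth_orientation(theta, deg=(6, 6)):
    """Fit Legendre polynomials to cos(2*theta) and sin(2*theta) jointly
    and return a smoothed orientation field with the same shape as theta."""
    h, w = theta.shape
    # Map pixel coordinates to [-1, 1], the natural Legendre domain.
    y, x = np.mgrid[0:h, 0:w]
    x = 2 * x / (w - 1) - 1
    y = 2 * y / (h - 1) - 1
    V = legvander2d(x.ravel(), y.ravel(), deg)            # shared design matrix
    targets = np.stack([np.cos(2 * theta).ravel(),
                        np.sin(2 * theta).ravel()], axis=1)
    coeffs, *_ = np.linalg.lstsq(V, targets, rcond=None)  # coupled fit of both fields
    cos2, sin2 = (V @ coeffs).T
    return 0.5 * np.arctan2(sin2, cos2).reshape(h, w)
```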
HSV and RGB color histograms comparing for objects tracking among non overlapping FOVs, using CBTF
Object tracking over wide areas, such as an airport, the downtown of a large city, or any large public area, is performed by multiple cameras. Especially in realistic applications, those cameras have non-overlapping fields of view (FOVs). Multi-camera tracking relies on establishing correspondence among detected objects across different cameras. In this paper we investigate color histogram techniques to evaluate an inter-camera tracking algorithm based on object appearances. We compute HSV and RGB color histograms in order to evaluate their performance in establishing correspondence between object appearances in different FOVs, before and after applying a Cumulative Brightness Transfer Function (CBTF).
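A minimal OpenCV sketch of the appearance comparison described above: compute an RGB or HSV histogram for two object crops and compare them. The bin counts and the correlation metric are illustrative choices, and the brightness transfer step is deliberately left out.

```python
import cv2
import numpy as np

def color_histogram(img_bgr, use_hsv=False, bins=(8, 8, 8)):
    """Normalized 3-D color histogram of an object crop (BGR input)."""
    img = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV) if use_hsv else img_bgr
    ranges = [0, 180, 0, 256, 0, 256] if use_hsv else [0, 256, 0, 256, 0, 256]
    hist = cv2.calcHist([img], [0, 1, 2], None, list(bins), ranges)
    return cv2.normalize(hist, hist).flatten()

def appearance_similarity(crop_a, crop_b, use_hsv=False):
    """Histogram correlation between two detections from different cameras.
    A brightness transfer function (e.g. CBTF) would be applied to one crop
    before this step; that mapping is omitted in this sketch."""
    ha = color_histogram(crop_a, use_hsv)
    hb = color_histogram(crop_b, use_hsv)
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)
```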
Xenopus sonic hedgehog as a potential morphogen during embryogenesis and thyroid hormone-dependent metamorphosis
The hedgehog family of proteins has been implicated as important signaling molecules in establishing cell positional information and tissue patterning. Here we present the cloning and characterization of a hedgehog homologue from Xenopus laevis similar to the sonic class of vertebrate hedgehog genes. We isolated Xenopus hedgehog (Xhh) from a subtractive hybridization screen designed to identify genes induced by thyroid hormone during metamorphosis of the X. laevis gastrointestinal tract. In the intestine, Xhh mRNA expression was up-regulated at the climax of metamorphosis (stage 62), when the intestinal epithelium underwent morphogenesis. Treatment of pre-metamorphic tadpoles with exogenous thyroid hormone (TH) resulted in a similar pattern of Xhh induction. Furthermore, TH induction was resistant to inhibitors of protein synthesis, suggesting that Xhh is a direct thyroid hormone response gene. The expression and TH regulation of Xhh were not limited to the intestine, but were also observed in the limb and in a mixture of pancreas and stomach. Throughout development, Xhh mRNA was present at varying levels, with the earliest expression being detected at the neurula stage. The highest levels of Xhh were observed between stages 33 and 40, shortly before tadpole feeding begins. Whole mount in situ hybridization analysis of Xhh expression in pre-hatching, stage 32 tadpoles demonstrated staining in the notochord and floor plate similar to that observed for other vertebrate hedgehog genes. Together, these data suggest a putative role for Xhh in organ development during both amphibian embryogenesis and metamorphosis.
Where to Add Actions in Human-in-the-Loop Reinforcement Learning
In order for reinforcement learning systems to learn quickly in vast action spaces such as the space of all possible pieces of text or the space of all images, leveraging human intuition and creativity is key. However, a human-designed action space is likely to be initially imperfect and limited; furthermore, humans may improve at creating useful actions with practice or new information. Therefore, we propose a framework in which a human adds actions to a reinforcement learning system over time to boost performance. In this setting, however, it is key that we use human effort as efficiently as possible, and one significant danger is that humans waste effort adding actions at places (states) that aren’t very important. Therefore, we propose Expected Local Improvement (ELI), an automated method which selects states at which to query humans for a new action. We evaluate ELI on a variety of simulated domains adapted from the literature, including domains with over a million actions and domains where the simulated experts change over time. We find ELI demonstrates excellent empirical performance, even in settings where the synthetic “experts” are quite poor.
The impact of consumer trust on attitudinal loyalty and purchase intentions in B2C e-marketplaces: Intermediary trust vs. seller trust
The online merchant of an e-marketplace consists of an intermediary, providing the market infrastructure, and the community of sellers conducting business within that infrastructure. Typically, consumers willingly buy from unknown sellers within an e-marketplace, despite the apparent risk, since they trust the institutional mechanisms furnished by the relatively well-known intermediary. Consumers' trust in one component of the e-marketplace merchant may not only affect their trust in the other, but also influence the way consumers make online purchases. This paper explores the impact of trust on consumer behavior in e-marketplaces. An empirical study has been conducted to accomplish our research objectives, using a questionnaire survey of 222 active e-marketplace shoppers in Korea. The results reveal that consumer trust in an intermediary has a strong influence upon both attitudinal loyalty and purchase intentions, although consumer trust in the community of sellers has no significant effect on the two constructs representing consumer behavior. In addition, it was found that trust is transferred from an intermediary to the community of sellers, implying that the trustworthiness of the intermediary plays a critical role in determining the extent to which consumers trust the community of sellers. The paper concludes by offering some implications of these findings.
FingerSound: Recognizing unistroke thumb gestures using a ring
We introduce FingerSound, an input technology to recognize unistroke thumb gestures, which are easy to learn and can be performed through eyes-free interaction. The gestures are performed using a thumb-mounted ring comprising a contact microphone and a gyroscope sensor. A K-Nearest-Neighbor (KNN) model with a Dynamic Time Warping (DTW) distance function is built to recognize up to 42 common unistroke gestures. A user study, in which real-time classification results were given, shows an accuracy of 92%-98% from a machine learning model built with only 3 training samples per gesture. Based on the user study results, we further discuss the opportunities, challenges and practical limitations of FingerSound when deploying it to real-world applications in the future.
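A small self-contained sketch of the classifier family described above: dynamic time warping as the distance and a K-nearest-neighbor vote over stored gesture templates. The sensor specifics (microphone and gyroscope channels) are abstracted into generic 1-D sequences, so this is an illustration rather than the paper's pipeline.

```python
import numpy as np
from collections import Counter

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, templates, k=3):
    """templates: list of (sequence, gesture_label) pairs; returns the majority
    label among the k templates closest to the query under DTW."""
    dists = sorted((dtw_distance(query, seq), label) for seq, label in templates)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```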
Crowdsourcing Linked Open Data for Disaster Management
This paper shows how Linked Open Data can ease the challenges of information triage in disaster response efforts. Recently, disaster management has seen a revolution in data collection. Local victims as well as people all over the world collect observations and make them available on the web. Yet, this crucial and timely information source comes unstructured, which hinders its processing and integration, and often its consideration altogether. Linked Open Data is supported by a number of freely available technologies, is backed by a large community in academia, and offers the opportunity to create flexible mash-up solutions. Using the Ushahidi Haiti platform as an example, this paper suggests crowdsourced Linked Open Data. We take a look at the requirements, the tools that exist to meet these requirements, and suggest an architecture to enable non-experts to contribute Linked Open Data.
Incidence rate and risk factors for vaginal vault prolapse repair after hysterectomy
Our objective was to estimate the incidence and identify the risk factors for vaginal vault prolapse repair after hysterectomy. We conducted a case control study among 6,214 women who underwent hysterectomy from 1982 to 2002. Cases (n = 32) were women who required vaginal vault suspension following the hysterectomy through December 2005. Controls (n = 236) were women, randomly selected from the same cohort, who did not require pelvic organ prolapse surgery. The incidence of vaginal vault prolapse repair was 0.36 per 1,000 women-years. The cumulative incidence was 0.5%. Risk factors included preoperative prolapse (odds ratio (OR) 6.6; 95% confidence interval (CI) 1.5–28.4) and sexual activity (OR 1.3; 95% CI 1.0–1.5). Vaginal hysterectomy was not a risk factor when preoperative prolapse was taken into account (OR 0.9; 95% CI 0.5–1.8). Vaginal vault prolapse repair after hysterectomy is an infrequent event and is due to preexisting weakness of pelvic tissues.
Stylizing animation by example
Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.
Classification of the Pigmented Skin lesions in Dermoscopic Images by Shape Features Extraction
Differentiation of benign and malignant (melanoma) pigmented skin lesions is difficult even for dermatologists; thus, in this paper a new analysis of dermatoscopic images is proposed. Segmentation, feature extraction and classification are the major steps of the image analysis. In the segmentation step we use an improved FFCM-based segmentation method (our previous work) to obtain a binary segmented image. In the feature extraction step, shape features are extracted from the binary segmented image. After normalizing the features, in the classification step, the feature vectors are classified into two groups (benign and malignant) by an SVM classifier. The classification accuracy is 71.39%, the specificity is 85.95%, and the results for the sensitivity metric are also satisfactory.
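A hedged sketch of the feature-extraction and classification steps: a few shape descriptors are computed from the binary segmented mask with scikit-image and fed to a scikit-learn SVM. The particular descriptors below are common shape features, not necessarily the paper's exact feature set.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def shape_features(binary_mask):
    """Shape descriptors of the largest region in a binary lesion mask."""
    regions = regionprops(label(binary_mask.astype(int)))
    r = max(regions, key=lambda p: p.area)
    circularity = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-8)
    return [r.area, r.perimeter, r.eccentricity, r.solidity, circularity]

def train_classifier(masks, labels):
    """masks: list of binary lesion masks; labels: 0 = benign, 1 = malignant."""
    X = np.array([shape_features(m) for m in masks])
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    clf.fit(X, labels)
    return clf
```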
Abstract syntax graphs for domain specific languages
This paper presents a representation for embedded domain specific languages (EDSLs) using abstract syntax graphs (ASGs). The purpose of this representation is to deal with the important problem of defining operations that require observing or preserving sharing and recursion in EDSLs in an expressive, yet easy-to-use way. In contrast to more conventional representations based on abstract syntax trees, ASGs represent sharing and recursion explicitly as binder constructs. We use a functional representation of ASGs based on structured graphs, where binders are encoded with parametric higher-order abstract syntax. We show how to adapt this representation to well-typed ASGs. This is especially useful for EDSLs, which often reuse the type system of the host language. We also show an alternative class-based encoding of (well-typed) ASGs that enables extensible and modular well-typed EDSLs while allowing the manipulation of sharing and recursion.
Assessing dimensions of perceived visual aesthetics of web sites
Despite its centrality to human thought and practice, aesthetics has for the most part played a petty role in human–computer interaction research. Increasingly, however, researchers attempt to strike a balance between the traditional concerns of human–computer interaction and considerations of aesthetics. Thus, recent research suggests that the visual aesthetics of computer interfaces is a strong determinant of users' satisfaction and pleasure. However, the lack of appropriate concepts and measures of aesthetics may severely constrain future research in this area. To address this issue, we conducted four studies in order to develop a measurement instrument of perceived web site aesthetics. Using exploratory and confirmatory factor analyses we found that users' perceptions consist of two main dimensions, which we termed "classical aesthetics" and "expressive aesthetics". The classical aesthetics dimension pertains to aesthetic notions that presided from antiquity until the 18th century. These notions emphasize orderly and clear design and are closely related to many of the design rules advocated by usability experts. The expressive aesthetics dimension is manifested by the designers' creativity and originality and by the ability to break design conventions. While both dimensions of perceived aesthetics are drawn from a pool of aesthetic judgments, they are clearly distinguishable from each other. Each of the aesthetic dimensions is measured by a five-item scale. The reliabilities, factor structure and validity tests indicate that these items reflect the two perceived aesthetics dimensions adequately.
Morfessor 2.0: Toolkit for statistical morphological segmentation
Morfessor is a family of probabilistic machine learning methods for finding the morphological segmentation from raw text data. Recent developments include semi-supervised methods for utilizing annotated data. Morfessor 2.0 is a rewrite of the original, widely-used Morfessor 1.0 software, with well-documented command-line tools and a library interface. It includes new features such as semi-supervised learning, online training, and integrated evaluation code.
Cyber attacking tactical radio networks
Cyber attacks on the Internet are common knowledge even for non-technical people. The same attack techniques can also be used against military radio networks in the battlefield. This paper describes a test setup that can be used to test tactical radio networks for cyber vulnerabilities. The test setup created is versatile and can be adapted to any command and control system on any level of the OSI model. The test setup uses publicly or commercially available tools as much as possible. The need for custom-made components is minimized to decrease costs, decrease deployment time and increase usability. With the architecture described, the same tools used in IP network testing can be used in tactical radio networks. Problems found at any level of the system can be fixed in co-operation with the vendors of the system. Cyber testing should be adopted as part of the acceptance tests of any new military communication system.
YAGO3: A Knowledge Base from Multilingual Wikipedias
We present YAGO3, an extension of the YAGO knowledge base that combines the information from the Wikipedias in multiple languages. Our technique fuses the multilingual information with the English WordNet to build one coherent knowledge base. We make use of the categories, the infoboxes, and Wikidata, and learn the meaning of infobox attributes across languages. We run our method on 10 different languages, and achieve a precision of 95%-100% in the attribute mapping. Our technique enlarges YAGO by 1m new entities and 7m new facts.
Color map optimization for 3D reconstruction with consumer depth cameras
We present a global optimization approach for mapping color images onto geometric reconstructions. Range and color videos produced by consumer-grade RGB-D cameras suffer from noise and optical distortions, which impede accurate mapping of the acquired color data to the reconstructed geometry. Our approach addresses these sources of error by optimizing camera poses in tandem with non-rigid correction functions for all images. All parameters are optimized jointly to maximize the photometric consistency of the reconstructed mapping. We show that this optimization can be performed efficiently by an alternating optimization algorithm that interleaves analytical updates of the color map with decoupled parameter updates for all images. Experimental results demonstrate that our approach substantially improves color mapping fidelity.
From goods to service(s): Divergences and convergences of logics
There are two logics or mindsets from which to consider and motivate a transition from goods to service(s). The first, "goods-dominant (G-D) logic", views services in terms of a type of (e.g., intangible) good and implies that goods production and distribution practices should be modified to deal with the differences between tangible goods and services. The second logic, "service-dominant (S-D) logic", considers service – a process of using one's resources for the benefit of and in conjunction with another party – as the fundamental purpose of economic exchange and implies the need for a revised, service-driven framework for all of marketing. This transition to a service-centered logic is consistent with and partially derived from a similar transition found in the business-marketing literature — for example, its shift to understanding exchange in terms of value rather than products and networks rather than dyads. It also parallels transitions in other sub-disciplines, such as service marketing. These parallels and the implications for marketing theory and practice of a full transition to a service logic are explored.

Over the last several decades, leading-edge firms, as well as many business scholars and consultants, have advocated the need for refocusing substantial firm activity or transforming the entire firm orientation from producing output, primarily manufactured goods, to a concern with service(s) (see, e.g., Davies, Brady, & Hobday, 2007; Gebauer & Fleisch, 2007). These initiatives can be found in both business-to-business (e.g., IBM, GE) and business-to-consumer enterprises (e.g., Lowe's, Kodak, Apple) and in some cases entire industries (e.g., software-as-a-service). The common justification is that these initiatives are analogous with the shift from a manufacturing to a service economy in developed countries, if not globally. That is, it is based on the idea that virtually all economies are producing and exchanging more services than they are goods; thus, services require increased attention. This perception suggests that firms need to redirect the production and marketing strategies that they have adopted for manufactured goods by adjusting them for the distinguishing characteristics of services. This logic of the need for a shift in the activities of the enterprise and/or industry to match the analogous shift in the economy is so intuitively compelling that it is an apparent truism. It is a logic that follows naturally from marketing's traditional foundational thought. But is it the only logic; is it the correct logic? Does it move business-to-business (B2B) firms and/or academic marketing thought in a desirable and enhanced direction? While we agree that a shift to a service focus is desirable, if not essential to a firm's well-being and the advancement of academic thought, we question the conventional, underlying rationale and the associated, implied approach.
The purpose of this commentary is to explore this traditional logical foundation, with its roots in the manufacturing and provision of tangible output, and to propose an alternative logic, one grounded in a revised understanding of the meaning of service as a process and its central role in economic exchange. It is a logic that represents the convergence and extension of divergent marketing thought by sub-disciplines and other research initiatives. We argue that this more service-centric logic not only amplifies the necessity for the development of a service focus but also provides a stronger foundation for theory development and, consequently, application. It is a logic that provides a framework for elevating knowledge discovery in business marketing (as well as other "sub-disciplines") beyond the identification and explanation of B2B marketing differences from other forms of marketing, to a level capable of informing not only the business-marketing firm but "mainstream" marketing in general. Thus, we argue a service-centered focus is enriching and unifying.

1. Alternative logics

Broadly speaking, there are two perspectives for the consideration of service(s). One views goods (tangible output embedded with value) as the primary focus of economic exchange and "services" (usually plural) as either (1) a restricted type of (intangible) good (i.e., as units of output) or (2) an add-on that enhances the value of a good. We (Vargo & Lusch, 2004a; Lusch & Vargo, 2006a) call this logic goods-dominant (G-D) logic. Others have referred to it as the "neoclassical economics research tradition" (e.g., Hunt, 2000), "manufacturing logic" (e.g., Normann, 2001), "old enterprise logic" (Zuboff & Maxmin, 2002), and "marketing management" (Webster, 1992). Regardless of the label, G-D logic points toward using principles developed to manage goods production to manage services "production" and "delivery". The second logic considers "service" (singular) – a process of doing something for another party – in its own right, without reference to goods, and identifies service as the primary focus of exchange activity. We (Vargo & Lusch, 2004a, 2006) call this logic service-dominant (S-D) logic. In S-D logic, goods continue to play an important, service-delivery role, at least in a subset of economic exchange. In contrast to implying the modification of goods-based models of exchange to fit a transition to service, S-D logic provides a service-based foundation centered on service-driven principles. We show that this transition is highly consistent with many contemporary business-marketing models.

2. Goods-dominant logic

As the label implies, G-D logic is centered on the good – or, more recently, the "product", to include both tangible (goods) and intangible (services) units of output – as the archetypical unit of exchange. The essence of G-D logic is that economic exchange is fundamentally concerned with units of output (products) that are embedded with value during the manufacturing (or farming, or extraction) process. For efficiency, this production ideally takes place in isolation from the customer and results in standardized, inventoriable goods.
The roots of G-D logic are found in the work of Smith (1776) and took firmer, paradigmatic grasp in the context of the Industrial Revolution during the quest for a science of economics, at a time when "science" meant Newtonian mechanics, a paradigm for which the idea of goods embedded with value was particularly amenable. Management and mainstream academic marketing, as well as society in general, inherited this logic from economics (see Vargo, Lusch, & Morgan, 2006; Vargo & Morgan, 2005). However, since formal marketing thought emerged over 100 years ago, G-D logic and its associated concept of embedded value (or utility) have caused problems for marketers. For example, in the mid 20th century, it caused Alderson (1957, p. 69) to declare: "What is needed is not an interpretation of the utility created by marketing, but a marketing interpretation of the whole process of creating utility". But the G-D-logic-based economic theory, with its co-supportive concepts of embedded value (production) and value destruction (consumption), was itself deeply embedded in marketing thought. It was not long after this period, we believe for related reasons, that academic marketing started becoming fragmented, with various marketing concerns taking on an increasingly separate, or sub-disciplinarian, identity.

3. Subdividing and breaking free from G-D logic

Arguably, the establishment of many of the sub-disciplines of marketing, such as business-to-business marketing, services marketing, and international marketing, is a response to the limitations and lack of robustness of G-D logic as a foundation for understanding value creation and exchange. That is, while G-D logic might have been reasonably adequate as a foundation when marketing was primarily concerned with the distribution of commodities, the foundation was severely restricted as marketing expanded its scope to the more general issues of value creation and exchange.

3.1. Business-to-business marketing

Initial sub-disciplinary approaches have typically involved trying to fit the models of mainstream marketing to the particular phenomena of concern. For example, as marketers (both academic and applied) began to address issues of industrial marketing and found that many mainstream marketing models did not seem to apply, the natural course of action was not to question the paradigmatic foundation but rather first to identify how B2B marketing was different from mainstream, consumer marketing and then to identify the ways that business marketers needed to adjust. Thus, early attempts led to the identification of prototypical characteristics of business marketing — derived demand, fluctuating demand, professional buyers, etc. (see Fern & Brown, 1984). But we suggest that the creation of business-to-business marketing as a sub-discipline was more because of the inability of the G-D-logic-grounded mainstream marketing to provide a suitable foundation for understanding inter-enterprise exchange phenomena than it was because of any real and essential difference compared to enterprise-to-individual exchange. Support for this contention can be found in the fact that busin
Low-dose propranolol and exercise capacity in postural tachycardia syndrome: a randomized study.
OBJECTIVE To determine the effect of low-dose propranolol on maximal exercise capacity in patients with postural tachycardia syndrome (POTS). METHODS We compared the effect of placebo vs a single low dose of propranolol (20 mg) on peak oxygen consumption (VO2max), an established measure of exercise capacity, in 11 patients with POTS and 7 healthy subjects in a randomized, double-blind study. Subjects exercised on a semirecumbent bicycle, with increasing intervals of resistance to maximal effort. RESULTS Maximal exercise capacity was similar between groups following placebo. Low-dose propranolol improved VO2max in patients with POTS (24.5 ± 0.7 placebo vs 27.6 ± 1.0 mL/min/kg propranolol; p = 0.024), but not healthy subjects. The increase in VO2max in POTS was associated with attenuated peak heart rate responses (142 ± 8 propranolol vs 165 ± 4 bpm placebo; p = 0.005) and improved stroke volume (81 ± 4 propranolol vs 67 ± 3 mL placebo; p = 0.013). In a separate cohort of POTS patients, neither high-dose propranolol (80 mg) nor metoprolol (100 mg) improved VO2max, despite similar lowering of heart rate. CONCLUSIONS These findings suggest that nonselective β-blockade with propranolol, when used at the low doses frequently used for treatment of POTS, may provide a modest beneficial effect to improve heart rate control and exercise capacity. CLASSIFICATION OF EVIDENCE This study provides Class II evidence that a single low dose of propranolol (20 mg) as compared with placebo is useful in increasing maximum exercise capacity measured 1 hour after medication.
A longitudinal study of follow predictors on twitter
Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors -- like social behavior, message content, and network structure -- lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and message content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies.
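To illustrate the kind of model mentioned above, the statsmodels sketch below fits a negative binomial regression of the follower count at time t on a lagged count plus behavioral, content, and network covariates. The column names and the default dispersion setting are illustrative assumptions, not the study's specification.

```python
import statsmodels.api as sm

def fit_follow_model(df):
    """df columns (illustrative): followers_t (count outcome), followers_lag,
    tweets_per_day, informational_ratio, network_overlap."""
    X = sm.add_constant(df[['followers_lag', 'tweets_per_day',
                            'informational_ratio', 'network_overlap']])
    model = sm.GLM(df['followers_t'], X,
                   family=sm.families.NegativeBinomial())  # count model with overdispersion
    return model.fit()

# result = fit_follow_model(panel_df); result.summary() shows which predictor
# groups (behavior, content, network) contribute to follower growth.
```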
Different types of cold adaptation in humans.
Human adaptation to cold may occur through acclimatization or acclimation and includes genetic, physiologic, morphological or behavioural responses. It has been studied in indigenous populations, during polar or ski expeditions, sporting activities, military training, in urban people, or under controlled conditions involving exposures to cold air or water. Although divergent results exist between the studies, the main cold adaptation responses are either insulative (circulatory adjustments, increase of fat layer) or metabolic (shivering or non-shivering thermogenesis) and may be positive (enhanced) or negative (blunted). The pattern of cold adaptation depends on the type (air, water) and intensity (continuous, intermittent) of the cold exposure. In addition, several individual factors like age, sex, body composition, exercise, diet, fitness and health modify the responses to cold. Habituation of thermal sensations to cold develops first, followed by cardiovascular, metabolic and endocrinological responses. If the repeated cold stimulus is discontinued, adaptation will gradually disappear. The functional significance of physiological cold adaptation is unclear, and some of the responses can even be harmful and predispose to cold injuries. The article summarises recent research concerning the thermoregulatory responses related to repeated exposures to cold (air or water), and also discusses the determinants of cold adaptation, as well as its functional significance.
Automatic feature extraction and selection for condition monitoring and related datasets
In this paper, a combination of methods for feature extraction and selection is proposed that is suitable for extracting highly relevant features for machine condition monitoring and related applications from the time domain, the frequency domain, the time-frequency domain and the statistical distribution of the measurement values. The approach is fully automated and suitable for multiple condition monitoring tasks such as vibration- and process-sensor-based analysis. This versatility is demonstrated by evaluating two condition monitoring datasets from our own experiments plus multiple freely available time series classification tasks, and comparing the achieved results with the results of algorithms previously suggested or even specifically designed for these datasets.
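A compact sketch of the overall idea: compute candidate features from the time domain, frequency domain, and value distribution of each signal, then keep the most relevant ones with a simple filter-based selector. The specific features and the ANOVA-based selector are illustrative stand-ins for the paper's much larger automated feature pool.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.feature_selection import SelectKBest, f_classif

def candidate_features(signal, fs):
    """A few time-, frequency- and distribution-domain features of one signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return [
        signal.mean(), signal.std(),            # time domain
        np.sqrt(np.mean(signal ** 2)),          # RMS
        kurtosis(signal), skew(signal),         # distribution shape
        freqs[np.argmax(spectrum)],             # dominant frequency
        spectrum.sum(),                         # overall spectral energy
    ]

def select_features(signals, labels, fs, k=4):
    """Build the feature matrix and keep the k most class-relevant columns."""
    X = np.array([candidate_features(s, fs) for s in signals])
    selector = SelectKBest(f_classif, k=k).fit(X, labels)
    return selector.transform(X), selector.get_support()
```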
Lymphangiogenesis and cancer metastasis
Metastasis is a characteristic trait of most tumour types and the cause of the majority of cancer deaths. Many tumour types, including melanoma and breast and prostate cancers, first metastasize via lymphatic vessels to their regional lymph nodes. Although the connection between lymph node metastases and shorter survival times of patients was made decades ago, the active involvement of the lymphatic system in cancer metastasis has been unravelled only recently, after molecular markers of lymphatic vessels were identified. A growing body of evidence indicates that tumour-induced lymphangiogenesis is a predictive indicator of metastasis to lymph nodes and might also be a target for prevention of metastasis. This article reviews the current understanding of lymphangiogenesis in cancer, anti-lymphangiogenic strategies for prevention and therapy of metastatic disease, quantification of lymphangiogenesis for the prognosis and diagnosis of metastasis, and in vivo imaging technologies for the assessment of lymphatic vessels, drainage and lymph nodes.
An integrated telehealth system for remote administration of an adult autism assessment.
We developed a telehealth system to administer an autism assessment remotely. The remote assessment system integrates videoconferencing, stimuli presentation, recording, image and video presentation, and electronic assessment scoring into an intuitive software platform. This is an advancement over existing technologies used in telemental health, which currently require several devices. The number of children, adolescents, and adults with autism spectrum disorders (ASDs) has increased dramatically over the past 20 years and is expected to continue to increase in coming years. In general, there are not many clinicians trained in either the diagnosis or treatment of adults with ASD. Given the number of adults with autism in need, a remote assessment system can potentially provide a solution to the lack of trained clinicians. The goal is to make the remote assessment system as close to face-to-face assessment as possible, yet versatile enough to support deployment in underserved areas. The primary challenge to achieving this goal is that the assessment requires social interaction that appears natural and fluid, so the remote system needs to be able to support fluid natural interaction. For this study we developed components to support this type of interaction and integrated these components into a system capable of supporting the entire autistic assessment protocol. We then implemented the system and evaluated the system on real patients. The results suggest that we have achieved our goal in developing a system with high-quality interaction that is easy to use.
A new model for Machine Comprehension via multi-perspective context matching and bidirectional attention flow
To answer a question about a context paragraph, a model needs to capture the complex interactions between the two. Previous Machine Comprehension (MC) datasets were either not large enough to train end-to-end deep neural networks, or too easy. Recently, after the release of the SQuAD dataset, several adept models have been proposed for the task of MC. In this work we try to combine the ideas of two state-of-the-art models (BiDAF and MPCM) with our new ideas to obtain a new model for the question answering task. Promising experimental results on the test set of SQuAD encourage us to continue working on the proposed model.
A 12-b, 1-GS/s 6.1 mW current-steering DAC in 14 nm FinFET with 80 dB SFDR for 2G/3G/4G cellular application
A 14 nm FinFET CMOS 12-b current-steering digital-to-analog converter (DAC) for 2G/3G/4G cellular applications is presented. A bit segmentation of 6-bit thermometer and 6-bit binary is adopted, and the design utilizes the dynamic element matching (DEM) technique to suppress the spurious tones caused by current source mismatches in 3-D FinFETs. In addition, to keep the voltage drop across each transistor within the long-term reliability limit, the output switches are designed with shielding transistors while achieving make-before-break operation with the proposed low-crossing-point level shifter. The active area of a single DAC is 0.036 mm2, and its power consumption is 6.1 mW with an SFDR of 80 dBc.
CHOPPER: Optimizing Data Partitioning for In-memory Data Analytics Frameworks
The performance of in-memory based data analytic frameworks such as Spark is significantly affected by how data is partitioned. This is because the partitioning effectively determines task granularity and parallelism. Moreover, different phases of a workload execution can have different optimal partitions. However, in the current implementations, the tuning knobs controlling the partitioning are either configured statically or involve a cumbersome programmatic process for affecting changes at runtime. In this paper, we propose CHOPPER, a system for automatically determining the optimal number of partitions for each phase of a workload and dynamically changing the partition scheme during workload execution. CHOPPER monitors the task execution and DAG scheduling information to determine the optimal level of parallelism. CHOPPER repartitions data as needed to ensure efficient task granularity, avoids data skew, and reduces shuffle traffic. Thus, CHOPPER allows users to write applications without having to hand-tune for optimal parallelism. Experimental results show that CHOPPER effectively improves workload performance by up to 35.2% compared to standard Spark setup.
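To make the repartitioning idea concrete, the PySpark sketch below picks a partition count for a phase from a simple size-based heuristic and repartitions before a shuffle-heavy step. The heuristic, the input path and the column name are purely illustrative; CHOPPER itself derives the partition count from monitored task and DAG scheduling information at runtime.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-tuning-sketch").getOrCreate()

def choose_partitions(num_rows, target_rows_per_partition=1_000_000):
    """Naive size-based heuristic; CHOPPER uses runtime task/DAG metrics instead."""
    return max(1, num_rows // target_rows_per_partition)

df = spark.read.parquet("events.parquet")   # hypothetical input path
n = choose_partitions(df.count())

# Repartition before an expensive shuffle phase so task granularity matches the data.
result = (df.repartition(n, "user_id")      # "user_id" is an assumed key column
            .groupBy("user_id")
            .count())
result.write.mode("overwrite").parquet("user_counts.parquet")
```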
Predicting the course of functional limitation among older adults with knee pain: do local signs, symptoms and radiographs add anything to general indicators?
OBJECTIVE To determine the additional prognostic value of clinical history, physical examination and x-ray findings to a previously derived simple generic model (age, body mass index, anxiety and pain severity) in a cohort of older adults with knee pain. METHODS Prospective cohort study in community-dwelling adults in North Staffordshire. 621 participants (aged >or=50 years) reporting knee pain who attended a research clinic at recruitment and were followed up by postal questionnaire at 18 months. Poor functional outcome was measured by the Physical Functioning Scale of the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) at 18-month follow-up defined in 60% of participants. RESULTS Three clinical history variables (bilateral knee pain, duration of morning stiffness and inactivity gelling) were independently associated with poor outcome. The addition of the "clinical history" model to the "generic" model led to a statistical improvement in model fit (likelihood ratio (LR) = 24.84, p = 0.001). Two physical examination variables (knee tender point count and single-leg balance) were independently associated with poor outcome but did not lead to a significant improvement when added to the "clinical history and generic" model (LR = 6.34, p = 0.50). Functional outcome was significantly associated with severity of knee radiographic osteoarthritis (OA), but did not lead to any improvement in fit when added to the "generic, clinical history and physical examination" model (LR = 1.86, p = 0.39). CONCLUSIONS Clinical history, physical examination and severity of radiographic knee OA are of limited value over generic factors when trying to predict which older adults with knee pain will experience progressive or persistent functional difficulties.
CLAMS: Bringing Quality to Data Lakes
With the increasing incentive of enterprises to ingest as much data as they can in what is commonly referred to as "data lakes", and with the recent development of multiple technologies to support this "load-first" paradigm, the new environment presents serious data management challenges. Among them, the assessment of data quality and the cleaning of large volumes of heterogeneous data sources become essential tasks in unveiling the value of big data. The widespread use of unstructured and semi-structured data in large volumes makes current data cleaning tools (primarily designed for relational data) not directly adoptable. We present CLAMS, a system to discover and enforce expressive integrity constraints from large amounts of lake data with very limited schema information (e.g., represented as RDF triples). This demonstration shows how CLAMS is able to discover the constraints and the schemas they are defined on simultaneously. CLAMS also introduces a scale-out solution to efficiently detect errors in the raw data. CLAMS interacts with human experts to both validate the discovered constraints and suggest data repairs. CLAMS has been deployed in a real large-scale enterprise data lake and was evaluated on a real data set of 1.2 billion triples. It has been able to spot multiple obscure data inconsistencies and errors early in the data processing stack, providing huge value to the enterprise.
Goodman versus Rawls: Politics, Philosophy, and Theology
Lenn Goodman's book, Religious Pluralism and Values in the Public Sphere, is a tour de force, challenging the most important liberal thinker of the past 50 years, the late John Rawls. In this book, Goodman challenges Rawls' political intolerance of religious voices, Rawls' anti-metaphysical philosophy, and Rawls' disdain for theology. The author basically agrees with Goodman's political critique, and mostly agrees with his philosophical position. The difference between Goodman and the author lies in the different schools of thought in Jewish theology to which each of them subscribes. That difference is especially seen in their differing approaches to the question of revelation.
The prevalence of secondary dentinal lesions in cheek teeth from horses with clinical signs of pulpitis compared to controls.
REASONS FOR PERFORMING STUDY With the advent of detailed oral examination in horses using dental mirrors and rigid endoscopy, secondary dentinal lesions are observed more frequently. More information regarding the association of secondary dentinal defects with apical dental disease would improve the sensitivity of oral examination as a diagnostic aid for pulpitis. OBJECTIVES To assess prevalence and severity of secondary dentinal defects observed on examination of occlusal surfaces of cheek teeth (CT) from horses showing clinical signs of pulpitis compared to asymptomatic controls. METHODS Records from all cases of equine CT exodontia at the University of Bristol over a 4 year period were examined. Case selection criteria included the presence of clinical signs of pulpitis, an intact extracted tooth and availability of a complete history and follow up. Cases where coronal fracture or periodontal pocketing featured were excluded. CT from cadavers with no history of dental disease served as normal controls. Triadan positions and eruption ages of control teeth were matched with those of teeth extracted from cases. CT from selected cases and control teeth were examined occlusally. Secondary dentinal defects were identified and graded. Prevalence of occlusal lesions in CT with pulpitis and controls was compared. RESULTS From the records of 120 horses where exodontia was performed, 40 cases matched selection criteria. Twenty-three mandibular and 21 maxillary CT were extracted from cases. The controls consisted of 60 mandibular and 60 maxillary CT from 7 cadaver skulls. Secondary dentinal defects were significantly over-represented in CT extracted from cases of pulpitis (P < 0.001). Of diseased mandibular CT, 56.5% had defects compared to none of the controls. Of diseased maxillary CT, 57% had defects compared with 1.6% of controls. Multiple defective secondary dentinal areas and severe lesions were more prevalent in diseased mandibular CT compared with diseased maxillary CT. CONCLUSIONS AND PRACTICAL SIGNIFICANCE: Careful examination of occlusal secondary dentine is an essential component in investigation of suspected pulpitis in equine CT.
Beatmap generator for Osu Game using machine learning approach
Rhythm games, one of the most-played game genres, have their own attractiveness. Each song in the game gives the player new excitement to try another song or another difficulty level. However, behind every playable song lies a lot of work: a beatmap must be created before a song can be played in the game. This paper presents an alternative way to create a beatmap that is considered playable for the Osu Game, using beat and melody detection with a machine learning approach and SVM as the learning method. The steps consist of note detection and note placement. Note detection consists of feature extraction from an audio file using the DSP Java Library and a learning process using Weka and LibSVM. However, detecting the presence of notes alone is not sufficient: the notes must also be placed in the game using PRAAT and a Note Placement Algorithm. With this process, a beatmap can be created from a song in about 3 minutes, and the accuracy of the note detection is 86%.
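The paper's pipeline uses a DSP Java library with Weka/LibSVM; as a rough, hedged equivalent, the sketch below frames note detection as per-frame SVM classification using librosa features and scikit-learn. File names, labels, and hyperparameters are placeholders, not the authors' setup.

```python
# Illustrative note-detection sketch: classify each audio frame as note / no note.
import numpy as np
import librosa
from sklearn.svm import SVC

def frame_features(path, sr=22050, hop=512):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, hop_length=hop, n_mfcc=13)
    onset = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
    n = min(mfcc.shape[1], onset.shape[0])
    return np.vstack([mfcc[:, :n], onset[np.newaxis, :n]]).T   # one row per frame

# X_train/y_train: frame features and 0/1 "note present" labels from existing beatmaps.
X_train = frame_features("training_song.mp3")        # hypothetical audio file
y_train = np.load("training_labels.npy")             # hypothetical per-frame labels
n = min(len(X_train), len(y_train))

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train[:n], y_train[:n])

# Predicted note frames would then feed a placement step (timing -> x/y grid).
note_frames = clf.predict(frame_features("new_song.mp3"))
```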
Using Argument Mining to Assess the Argumentation Quality of Essays
Argument mining aims to determine the argumentative structure of texts. Although it is said to be crucial for future applications such as writing support systems, the benefit of its output has rarely been evaluated. This paper puts the analysis of the output into focus. In particular, we investigate to what extent the mined structure can be leveraged to assess the argumentation quality of persuasive essays. We find insightful statistical patterns in the structure of essays. From these, we derive novel features that we evaluate in four argumentation-related essay scoring tasks. Our results reveal the benefit of argument mining for assessing argumentation quality. Among others, we improve the state of the art in scoring an essay’s organization and its argument strength.
Expression of thymidylate synthase in gastric cancer patients treated with 5-fluorouracil and doxorubicin-based adjuvant chemotherapy after curative resection
We evaluated the expression of thymidylate synthase (TS) in locally advanced gastric cancer patients treated with adjuvant chemotherapy after curative resection and investigated the association between TS expression and clinicopathologic characteristics including prognosis of the patients. TS expression was evaluated by immunohistochemical staining using TS106 monoclonal antibody in 103 locally advanced gastric cancer patients (stage IB–IV) who underwent 5-fluorouracil (5-FU) and doxorubicin-based adjuvant chemotherapy after curative resection. 65 patients (63%) had primary tumours with high TS expression (≥ 25% of tumour cells positive), and 38 patients (37%) demonstrated low TS expression (< 25% of tumour cells positive or no staining). High TS expression was associated with male gender (P=0.002), poorly differentiated histology (P=0.015), and mixed type in Lauren's classification (P=0.027). There were no statistically significant differences in 4-year disease-free survival (60.0% vs 57.2%, P=0.548) and overall survival (59.6% vs 59.3%, P=0.792) between the high-TS and low-TS groups. In conclusion, although high TS expression was associated with poorly differentiated histology and mixed type in Lauren's classification, it did not predict poor disease-free and overall survival in gastric cancer patients treated with 5-FU and doxorubicin-based adjuvant chemotherapy after curative resection. Further prospective studies including the evaluation of other biological markers associated with resistance to 5-FU and doxorubicin are necessary.
On the Benefits of Pre-Equalization for ACO-OFDM and Flip-OFDM Indoor Wireless Optical Transmissions Over Dispersive Channels
This paper analyzes the performance of indoor optical wireless data transmissions based on unipolar orthogonal frequency division multiplexing (OFDM). In particular, it is shown that using frequency-domain pre-equalization can provide benefits in terms of reducing the required optical transmit power for a given desired bit error rate (BER) for uncoded transmissions. Known for its power efficiency, asymmetrically clipped optical OFDM (ACO-OFDM) is considered as a unipolar modulation scheme for intensity modulation with direct detection (IM/DD). In addition, flip-OFDM is also considered as an alternative unipolar modulation scheme which is known to be as power efficient as ACO-OFDM. For both ACO-OFDM and flip-OFDM, analytical and simulation results show that using pre-equalization can save up to 2 dB of transmit optical power for a typical indoor optical wireless transmission scenario with a bit rate of 10 Mbps and a BER target of 10^-5.
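A minimal sketch of the frequency-domain pre-equalization idea, assuming the channel frequency response is known at the transmitter: each subcarrier is divided by its channel gain and the result is rescaled to keep the transmit power unchanged. ACO-OFDM/flip-OFDM specifics such as Hermitian symmetry, odd-subcarrier mapping, and clipping are omitted for brevity.

```python
import numpy as np

def pre_equalize(symbols, H):
    """Invert the channel per subcarrier, then rescale to keep transmit power fixed."""
    x = symbols / H                       # channel inversion in the frequency domain
    scale = np.sqrt(np.sum(np.abs(symbols) ** 2) / np.sum(np.abs(x) ** 2))
    return x * scale

rng = np.random.default_rng(0)
N = 64
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
H = np.fft.fft(np.array([0.8, 0.15, 0.05]), N)   # toy 3-tap dispersive channel

tx_freq = pre_equalize(qpsk, H)
rx_freq = tx_freq * H                    # ideal, noise-free channel

# Up to a common gain, the received constellation is flat again.
recovered = rx_freq / (np.abs(rx_freq).mean() / np.abs(qpsk).mean())
print(np.allclose(recovered, qpsk))      # True
```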
Simulink based VoIP Analysis
Voice communication over the Internet is not possible without a reliable data network, which first became feasible when distributed network topologies were used in conjunction with data packets. Early networks used a single-centre-node topology in which one workstation (the server) was responsible for all communication; a fault at that centre node brought everything down. Distributed systems solved this problem by spreading the load across many nodes, which increased reliability. Combining packet switching with distributed networks improved reliability and speed, and made voice communication over the Internet possible. In Voice-over-IP (VoIP), speech is encoded into data packets that travel through a packet-switched network such as the Internet and arrive at their destination, where they are decompressed using a compatible codec (audio coder/decoder) and converted back to analogue audio. This paper deals with a Simulink architecture for a VoIP network. Keywords: VoIP, G.711, Wave file
Angiotensin-(1-7)-induced renal vasodilation in hypertensive humans is attenuated by low sodium intake and angiotensin II co-infusion.
Current evidence suggests that angiotensin-(1-7) plays an important role in the regulation of tissue blood flow. This evidence, however, is restricted to studies in animals and human forearm. Therefore, we studied the effects of intrarenal angiotensin-(1-7) infusion on renal blood flow in hypertensive humans. To assess the influence of renin-angiotensin system activity, sodium intake was varied and co-infusion with angiotensin II was performed in a subgroup. In 57 hypertensive patients who were scheduled for renal angiography, renal blood flow was measured ((133)Xenon washout method) before and during intrarenal infusion of angiotensin-(1-7) (3 incremental doses: 0.27, 0.9, and 2.7 ng/kg per minute). Patients were randomized into low or high sodium intake. These 2 groups of patients received angiotensin-(1-7), with or without intrarenal co-infusion of angiotensin II (0.3 ng/kg per minute). Angiotensin-(1-7) infusion resulted in intrarenal vasodilation in patients adhering to a sodium-rich diet. This vasodilatory effect of angiotensin-(1-7) was clearly attenuated by low sodium intake, angiotensin II co-infusion, or both. Regression analyses showed that the prevailing renin concentration was the only independent predictor of angiotensin-(1-7)-induced renal vasodilation. In conclusion, angiotensin-(1-7) induces renal vasodilation in hypertensive humans, but the effect of angiotensin-(1-7) is clearly attenuated by low sodium intake and co-infusion of angiotensin II. This supports the hypothesis that angiotensin-(1-7) induced renal vasodilation depends on the degree of renin-angiotensin-system activation.
Recent Progress in Inflationary Cosmology
We discuss two important modifications of inflationary paradigm. Until very recently we believed that inflation automatically leads to flatness of the universe, Omega = 1. We also thought that post-inflationary phase transitions in GUTs may occur only after thermalization, which made it very difficult to have baryogenesis in GUTs and to obtain superheavy topological defects after inflation. We will describe a very simple version of chaotic inflation which leads to a division of the universe into infinitely many open universes with all possible values of Omega from 1 to 0. We will show also that in many inflationary models quantum fluctuations of scalar and vector fields produced during reheating are much greater than they would be in a state of thermal equilibrium. This leads to cosmological phase transitions of a new type, which may result in an efficient GUT baryogenesis, in a copious production of topological defects and in a secondary stage of inflation after reheating.
Fertility and ovarian function are preserved in women treated with an intensified regimen of cyclophosphamide, adriamycin, vincristine and prednisone (Mega-CHOP) for non-Hodgkin lymphoma.
BACKGROUND Intensive chemotherapy is widely used to improve the outcome of aggressive non-Hodgkin lymphoma (NHL). Since these regimens may cause premature ovarian failure (POF), ovarian function was studied in 13 consecutive women aged ≤40 years, treated with four cycles of intensified CHOP (cyclophosphamide 2000-3000 mg/m2 per cycle, doxorubicin 50 mg/m2, vincristine 1.4 mg/m2 (maximum 2 mg) and prednisone 100 mg/day, given every 3 weeks). METHODS Patients aged <60 years with aggressive NHL were eligible for participation in a non-randomized phase II study if they had stage I, II, B, bulky, or stages III, IV disease with an age-adjusted international prognostic index of low-intermediate to high-risk score. Seven patients were concomitantly treated with D-TRP6-GnRH analogue (Decapeptyl; Ferring, Germany) to minimize gonadal toxicity. RESULTS With a median follow-up of 70 months only one patient had POF, while 12 patients retained fertility and eight conceived spontaneously, delivering 12 healthy babies. CONCLUSION It appears that high-dose cyclophosphamide does not affect ovarian function or fertility in patients exposed to this medication during four consecutive cycles of intensified CHOP.
Medical evaluation of children with chronic abdominal pain: Impact of diagnosis, physician practice orientation, and maternal trait anxiety on mothers’ responses to the evaluation
This study examined the effects of diagnosis (functional versus organic), physician practice orientation (biomedical versus biopsychosocial), and maternal trait anxiety (high versus low) on mothers' responses to a child's medical evaluation for chronic abdominal pain. Mothers selected for high (n=80) and low (n=80) trait anxiety imagined that they were the mother of a child with chronic abdominal pain described in a vignette. They completed questionnaires assessing their negative affect and pain catastrophizing. Next, mothers were randomly assigned to view one of four video vignettes of a physician-actor reporting results of the child's medical evaluation. Vignettes varied by diagnosis (functional versus organic) and physician practice orientation (biomedical versus biopsychosocial). Following presentation of the vignettes, baseline questionnaires were re-administered and mothers rated their satisfaction with the physician. Results indicated that mothers in all conditions reported reduced distress pre- to post-vignette; however, the degree of the reduction differed as a function of diagnosis, presentation, and anxiety. Mothers reported more post-vignette negative affect, pain catastrophizing, and dissatisfaction with the physician when the physician presented a functional rather than an organic diagnosis. These effects were significantly greater for mothers with high trait anxiety who received a functional diagnosis presented by a physician with a biomedical orientation than for mothers in any other condition. Anxious mothers of children evaluated for chronic abdominal pain may be less distressed and more satisfied when a functional diagnosis is delivered by a physician with a biopsychosocial rather than a biomedical orientation.
Super Mario as a String: Platformer Level Generation Via LSTMs
The procedural generation of video game levels has existed for at least 30 years, but only recently have machine learning approaches been used to generate levels without specifying the rules for generation. A number of these have looked at platformer levels as a sequence of characters and performed generation using Markov chains. In this paper we examine the use of Long Short-Term Memory recurrent neural networks (LSTMs) for the purpose of generating levels trained from a corpus of Super Mario Brothers levels. We analyze a number of different data representations and how the generated levels fit into the space of human-authored Super Mario Brothers levels.
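A minimal PyTorch sketch of the character-level LSTM idea, assuming levels have already been flattened into strings over a tile vocabulary; the architecture and sampling loop below are illustrative rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class LevelLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=256, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

def sample_level(model, stoi, itos, seed="X", length=200, temperature=1.0):
    """Generate a level character by character from a seed string."""
    model.eval()
    x = torch.tensor([[stoi[c] for c in seed]])
    state, out = None, list(seed)
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(x, state)
            probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
            idx = torch.multinomial(probs, 1).item()
            out.append(itos[idx])
            x = torch.tensor([[idx]])
    return "".join(out)
```

Training would proceed as ordinary next-character prediction with cross-entropy over the level corpus; generated strings are then reshaped back into level columns.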
Health literacy and its influencing factors in Iranian diabetic patients
BACKGROUND Health literacy is the ability to obtain, read, understand and use healthcare information to make appropriate health decisions and follow instructions for treatment. The aim of this study was to identify the effect of various factors on health literacy in patients with diabetes. METHODS 407 patients with diabetes older than 15 years of age were identified from the Diabetes Clinic affiliated to the Institute of Endocrinology and Metabolism (IEM) of Iran University of Medical Sciences. We assessed patients' health literacy using the Persian version of the Test of Functional Health Literacy in Adults (TOFHLA) questionnaire. RESULTS Mean age of the patients was 55.8 ± 11.3 years, and 61.7% of the participants were female. Overall, 18.2% of the patients had adequate health literacy skills, 11.8% had marginal and 70.0% inadequate health literacy skills. Male participants performed better than females (p< 0.01) and older patients had lower health literacy scores than younger patients (p< 0.001). Furthermore, patients with higher educational and occupational levels had higher functional health literacy scores than those with lower levels (p< 0.001). CONCLUSION The health literacy score of Iranian patients with diabetes seems inadequate. Therefore, effective interventions should be designed and implemented for this group of patients to improve diabetes outcomes.
Intensity of Proliferative Processes and Degree of Oxidative Stress in the Mucosa of the Ileum in Crohn’s Disease
The intensity of proliferative processes (estimated from Ki-67 expression) and the degree of oxidative stress (chemiluminescence assay) in biopsy specimens from the terminal portion of the ileal mucosa were studied in patients with Crohn’s disease. Crohn’s disease is characterized by hyper-regenerative processes in the ileal mucosa. The labeling index (Ki-67 expression) in biopsy specimens from the intact ileal mucosa in patients with irritable bowel syndrome (reference group) was 10.64 ± 0.62%. The corresponding values in patients receiving monotherapy with mesalazine (group 1) and combination therapy with mesalazine and dalargin (group 2) were 24.05 ± 1.17 and 22.90 ± 0.92%, respectively. Analysis of free radical oxidation showed that this state is accompanied by oxidative stress. Spontaneous and H2O2-induced luminol-dependent chemiluminescence in biopsy specimens from the ileal mucosa was 1.8-2.3-fold higher compared to the reference group. After therapy, the labeling index in groups 1 and 2 decreased to 18.60 ± 1.18 and 14.38 ± 0.81%, respectively. Histologically, normalization of the disease symptoms was more pronounced after combination therapy. The decrease in free radical oxidation in this group of patients was also more pronounced than after mesalazine monotherapy. Our results suggest that oxidative stress plays a role in the hyper-regenerative reaction.
Speaker Recognition System Based On MFCC and DCT
This paper examines and presents an approach to the recognition of speech signals using frequency spectral information on the Mel frequency scale, a dominant feature for speech recognition. Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The performance of MFCC is affected by the number of filters, the shape of the filters, the way the filters are spaced, and the way the power spectrum is warped. In this paper the optimum values of the above parameters are chosen to obtain an accuracy of 99.5% over a very small length of audio file.
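A small sketch of an MFCC-based recognizer, assuming librosa for feature extraction and a simple nearest-centroid match in place of the paper's exact filter-bank settings and classifier; file names are placeholders.

```python
import numpy as np
import librosa

def speaker_embedding(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # DCT of log mel energies
    return mfcc.mean(axis=1)                                  # average over frames

# Enrollment: one averaged MFCC vector per known speaker (hypothetical files).
enrolled = {name: speaker_embedding(f"{name}_enroll.wav")
            for name in ["alice", "bob"]}

def identify(path):
    probe = speaker_embedding(path)
    return min(enrolled, key=lambda n: np.linalg.norm(enrolled[n] - probe))

print(identify("unknown_utterance.wav"))
```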
Renal function in long-term survivors of stem cell transplantation in childhood. A prospective trial
The aim of this prospective study was to assess glomerular and tubular renal function before, and 1 and 2 years after, hematopoietic stem cell transplantation (HSCT) in children and adolescents. 137 consecutive patients undergoing HSCT for malignant diseases were included in a prospective trial. Forty-four patients were followed for up to 1 year after HSCT and 36 for up to 2 years, without relapse. Ninety healthy school children were used as a control group. The following parameters were investigated: inulin clearance (GFR), urinary excretion of albumin, α1-microglobulin (α1-MG), calcium, β-N-acetylglucosaminidase (β-NAG) and Tamm–Horsfall protein (THP), tubular phosphate reabsorption (TP/Clcr) and percent reabsorption of amino acids (TAA). Significantly lower GFR was found 1 and 2 years after HSCT, but values were within the normal range in the period before HSCT. There was no correlation between GFR within the first month after HSCT and the long-term outcome of GFR. Tubular dysfunction was found in 14–45% of patients 1 and 2 years after HSCT, depending on the parameter investigated. Pathological values 1 and 2 years after HSCT were found for α1-MG excretion in 40% and 39%, respectively, for TP/Clcr in 44% and 45%, and for β-NAG in 26% and 19%. Median TP/Clcr was significantly lower 2 years after HSCT than before. TAA was mildly impaired in 7/14 patients before, in 5/29 one year and in 9/29 two years after HSCT, but median TAA was within the normal range at all times. The median excretion of albumin, THP and calcium was within the normal range at all investigations. No influence of ifosfamide pre-treatment on the severity of tubulopathy was found. The investigation of tubular renal function should be part of long-term follow-up in children after HSCT. Bone Marrow Transplantation (2001) 27, 319–327.
A survey and comparison of peer-to-peer overlay network schemes
Over the Internet today, computing and communications environments are significantly more complex and chaotic than classical distributed systems, lacking any centralized organization or hierarchical control. There has been much interest in emerging Peer-to-Peer (P2P) network overlays because they provide a good substrate for creating large-scale data sharing, content distribution, and application-level multicast applications. These P2P overlay networks attempt to provide a long list of features, such as: selection of nearby peers, redundant storage, efficient search/location of data items, data permanence or guarantees, hierarchical naming, trust and authentication, and anonymity. P2P networks potentially offer an efficient routing architecture that is self-organizing, massively scalable, and robust in the wide-area, combining fault tolerance, load balancing, and explicit notion of locality. In this article we present a survey and comparison of various Structured and Unstructured P2P overlay networks. We categorize the various schemes into these two groups in the design spectrum, and discuss the application-level network performance of each group.
High performance true random number generator based on FPGA block RAMs
This paper presents a new method for creating TRNGs in Xilinx FPGAs. Due to its simplicity and ease of implementation, the design constitutes a valuable alternative to existing methods for creating single-chip TRNGs. Its main advantages are the high throughput, the portability and the low amount of resources it occupies inside the chip. Therefore, it could further extend the use of FPGA chips in cryptography. Our primary source of entropy is a True Dual-Port Block-RAM operating at high frequency, which is used in a special architecture that creates a concurrent write conflict. The paper also describes the practical issues which make it possible to convert that conflict into a strong entropy source. Depending on the users' requirements, it is possible to connect many units of this generator in parallel on a single FPGA device, thus increasing the bit generation throughput up to the Gbps level. The generator has successfully passed the major statistical test batteries.
How Big Are the Tax Benefits of Debt ?
I integrate under firm-specific benefit functions to estimate that the capitalized tax benefit of debt equals 9.7 percent of firm value (or as low as 4.3 percent, net of personal taxes). The typical firm could double tax benefits by issuing debt until the marginal tax benefit begins to decline. I infer how aggressively a firm uses debt by observing the shape of its tax benefit function. Paradoxically, large, liquid, profitable firms with low expected distress costs use debt conservatively. Product market factors, growth options, low asset collateral, and planning for future expenditures lead to conservative debt usage. Conservative debt policy is persistent. DO THE TAX BENEFITS of debt affect corporate financing decisions? How much do they add to firm value? These questions have puzzled researchers since the work of Modigliani and Miller (1958, 1963). Recent evidence indicates that tax benefits are one of the factors that affect financing choice (e.g., MacKie-Mason (1990), Graham (1996a)), although opinion is not unanimous on which factors are most important or how they contribute to firm value (Shyam-Sunder and Myers (1998), Fama and French (1998)). Researchers face several problems when they investigate how tax incentives affect corporate financial policy and firm value. Chief among these problems is the difficulty of calculating corporate tax rates due to data problems and the complexity of the tax code. Other challenges include quantifying the effects of interest taxation at the personal level and understanding the bankruptcy process and the attendant costs of financial distress.
QPass: a Merit-based Evaluation of Soccer Passes
Quantitative analysis of soccer players’ passing ability focuses on descriptive statistics without considering the players’ real contribution to the passing and ball possession strategy of their team. Which player is able to help the build-up of an attack, or to maintain possession of the ball? We introduce a novel methodology called QPass to answer questions like these quantitatively. Based on the analysis of an entire season, we rank the players based on the intrinsic value of their passes using QPass. We derive an album of pass trajectories for different playing styles. Our methodology reveals a quite counterintuitive paradigm: losing ball possession could lead to better chances of winning a game.
Modeling Under-dispersed Count Data Using Generalized Poisson Regression Approach
This paper models household fertility decisions using a generalized Poisson regression model. Since the fertility data used in the paper exhibit under-dispersion, the generalized Poisson regression model has statistical advantages over both standard Poisson regression and negative binomial regression models, and is suitable for analysis of count data that exhibit either over-dispersion or under-dispersion. The model is estimated by a maximum likelihood procedure. Approximate tests for the dispersion and goodness-of-fit measures for comparing alternative models are discussed. Based on observations from the Bangladesh Demographic and Health Survey 2011, the empirical results support the use of generalized Poisson regression to model the under-dispersed data.
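As a hedged sketch of the modeling approach, the snippet below fits a generalized Poisson regression with statsmodels on synthetic counts (the survey data are not reproduced); the dispersion parameter and a likelihood-ratio comparison against standard Poisson mirror the tests discussed above.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.discrete_model import GeneralizedPoisson

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))                      # placeholder covariates
mu = np.exp(0.5 + 0.3 * x[:, 0] - 0.2 * x[:, 1])
y = rng.poisson(mu)                              # placeholder counts

X = sm.add_constant(x)
gp = GeneralizedPoisson(y, X, p=1).fit(disp=False)
poisson = sm.Poisson(y, X).fit(disp=False)

print(gp.summary())
# A dispersion parameter (alpha) significantly below zero indicates under-dispersion;
# a likelihood-ratio statistic compares the generalized and standard Poisson fits.
print("LR statistic:", 2 * (gp.llf - poisson.llf))
```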
On Quaternions and Octonions : Their Geometry , Arithmetic , and Symmetry
Conway and Smith’s book is a wonderful introduction to the normed division algebras: the real numbers (R), the complex numbers (C), the quaternions (H), and the octonions (O). The first two are well-known to every mathematician. In contrast, the quaternions and especially the octonions are sadly neglected, so the authors rightly concentrate on these. They develop these number systems from scratch, explore their connections to geometry, and even study number theory in quaternionic and octonionic versions of the integers. Conway and Smith warm up by studying two famous subrings of C: the Gaussian integers and Eisenstein integers. The Gaussian integers are the complex numbers x + iy for which x and y are integers. They form a square lattice:
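A quick sketch of the quaternion arithmetic the book builds on: the Hamilton product and the multiplicative norm, the property that makes H a normed division algebra, verified numerically on a small example.

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qnorm2(p):
    return sum(x * x for x in p)

p, q = (1, 2, 3, 4), (2, 0, -1, 1)
print(qnorm2(qmul(p, q)) == qnorm2(p) * qnorm2(q))   # True: |pq|^2 = |p|^2 |q|^2
print(qmul(p, q) == qmul(q, p))                      # False: multiplication is not commutative
```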
Simultaneous transcutaneous electrical nerve stimulation mitigates simulator sickness symptoms in healthy adults: a crossover study
BACKGROUND Flight simulators have been used to train pilots to experience and recognize spatial disorientation, a condition in which pilots incorrectly perceive the position, location, and movement of their aircraft. However, during or after simulator training, simulator sickness (SS) may develop. Spatial disorientation and SS share common symptoms and signs and may involve a similar mechanism of dys-synchronization of neural inputs from the vestibular, visual, and proprioceptive systems. Transcutaneous electrical nerve stimulation (TENS), a maneuver used for pain control, was found to influence autonomic cardiovascular responses and enhance visuospatial abilities, postural control, and cognitive function. The purpose of the present study was to investigate the protective effects of TENS on SS. METHODS Fifteen healthy young men (age: 28.6 ± 0.9 years, height: 172.5 ± 1.4 cm, body weight: 69.3 ± 1.3 kg, body mass index: 23.4 ± 1.8 kg/m2) participated in this within-subject crossover study. SS was induced by a flight simulator. TENS treatment involved 30 minutes of simultaneous electrical stimulation of the posterior neck and the right Zusanli acupoint. Each subject completed 4 sessions (control, SS, TENS, and TENS + SS) in a randomized order. Outcome indicators included SS symptom severity and cognitive function, evaluated with the Simulator Sickness Questionnaire (SSQ) and the d2 test of attention, respectively. Sleepiness was rated using the Visual Analogue Scales for Sleepiness Symptoms (VAS-SS). Autonomic and stress responses were evaluated by heart rate, heart rate variability (HRV) and salivary stress biomarkers (salivary alpha-amylase activity and salivary cortisol concentration). RESULTS Simulator exposure increased SS symptoms (SSQ and VAS-SS scores) and decreased task response speed and concentration. The heart rate, salivary stress biomarker levels, and the sympathetic parameter of HRV increased with simulator exposure, but parasympathetic parameters decreased (p < 0.05). After TENS treatment, SS symptom severity significantly decreased and the subjects were better able to concentrate and made fewer cognitive test errors (p < 0.05). CONCLUSIONS Sympathetic activity increased and parasympathetic activity decreased after simulator exposure. TENS was effective in reducing SS symptoms and alleviating cognitive impairment. TRIAL REGISTRATION NUMBER Australia and New Zealand Clinical Trials Registry: ACTRN12612001172897.
Computerized classification of intraductal breast lesions using histopathological images
In the diagnosis of preinvasive breast cancer, some of the intraductal proliferations pose a special challenge. The continuum of intraductal breast lesions includes usual ductal hyperplasia (UDH), atypical ductal hyperplasia (ADH), and ductal carcinoma in situ (DCIS). The current standard of care is to perform percutaneous needle biopsies for diagnosis of palpable and image-detected breast abnormalities. UDH is considered benign and patients diagnosed with UDH undergo routine follow-up, whereas ADH and DCIS are considered actionable and patients diagnosed with these two subtypes receive additional surgical procedures. About 250 000 new cases of intraductal breast lesions are diagnosed every year. A conservative estimate would suggest that at least 50% of these patients are needlessly undergoing unnecessary surgeries. Thus, improvement in diagnostic reproducibility and accuracy is critically important for effective clinical management of these patients. In this study, a prototype system for automatically classifying breast microscopic tissues to distinguish between UDH and actionable subtypes (ADH and DCIS) is introduced. This system automatically evaluates digitized slides of tissues for certain cytological criteria and classifies the tissues based on quantitative features derived from the images. The system is trained using a total of 327 regions of interest (ROIs) collected across 62 patient cases and tested with a sequestered set of 149 ROIs collected across 33 patient cases. An overall accuracy of 87.9% is achieved on the entire test data. The test accuracy of 84.6% obtained on borderline cases only (26 of the 33 test cases), compared against the diagnostic accuracies of nine pathologists on the same set (81.2% on average), indicates that the system is highly competitive with expert pathologists as a stand-alone diagnostic tool and has great potential for improving diagnostic accuracy and reproducibility when used as a “second reader” in conjunction with pathologists.
Intuitionistic fuzzy based DEMATEL method for developing green practices and performances in a green supply chain
Environmental topics have gained much consideration in corporate green operations. Globalization, stakeholder pressures, and stricter environmental regulations have made organizations develop environmental practices. Thus, green supply chain management (GSCM) is now a proactive approach for organizations to enhance their environmental performance and achieve competitive advantages. This study pioneers the use of the decision-making trial and evaluation laboratory (DEMATEL) method with intuitionistic fuzzy sets to handle the importance and causal relationships between GSCM practices and performances. DEMATEL evaluates GSCM practices to find the main practices that improve both environmental and economic performance. This study uses intuitionistic fuzzy set theory to handle the linguistic imprecision and ambiguity of human judgment. A case study from the automotive industry is presented to evaluate the efficiency of the proposed method. The results reveal that "internal management support", "green purchasing" and "ISO 14001 certification" are the most significant GSCM practices. The practical results of this study offer useful insights for managers to become more environmentally responsible while improving their economic and environmental performance goals. Further, a sensitivity analysis of the results, managerial implications, conclusions, limitations and future research opportunities are provided.
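Assuming the intuitionistic fuzzy judgments have already been aggregated and defuzzified into a crisp direct-relation matrix, the following sketch shows the core DEMATEL computation (normalization, total relation matrix, prominence and cause/effect classification); the matrix and practice names are illustrative, not the case-study data.

```python
import numpy as np

A = np.array([[0, 3, 2, 4],
              [1, 0, 2, 3],
              [2, 1, 0, 2],
              [1, 2, 1, 0]], dtype=float)        # pairwise influence scores (toy values)

# Normalize so the spectral radius stays below 1.
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s

# Total relation matrix: T = D (I - D)^-1
T = D @ np.linalg.inv(np.eye(len(A)) - D)

r = T.sum(axis=1)        # total influence given by each practice
c = T.sum(axis=0)        # total influence received by each practice
prominence, relation = r + c, r - c
names = ["mgmt support", "green purchasing", "ISO 14001", "eco-design"]
for i, name in enumerate(names):
    role = "cause" if relation[i] > 0 else "effect"
    print(f"{name:18s} prominence={prominence[i]:.2f} ({role})")
```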
Behavior-based robotics as a tool for synthesis of artificial behavior and analysis of natural behavior
Work in behavior-based systems focuses on functional modeling, that is, the synthesis of life-like and/or biologically inspired behavior that is robust, repeatable and adaptive. Inspiration from cognitive science, neuroscience and biology drives the development of new methods and models in behavior-based robotics, and the results tie together several related fields including artificial life, evolutionary computation, and multi-agent systems. Ideas from artificial intelligence and engineering continue to be explored actively and applied to behavior-based robots as their role in animal modeling and practical applications is being developed.
Semantic Annotation of Data Processing Pipelines in Scientific Publications
Data processing pipelines are a core object of interest for data scientists and practitioners operating in a variety of data-related application domains. To effectively capitalise on the experience gained in the creation and adoption of such pipelines, the need arises for mechanisms able to capture knowledge about datasets of interest, data processing methods designed to achieve a given goal, and the performance achieved when applying such methods to the considered datasets. However, due to its distributed and often unstructured nature, this knowledge is not easily accessible. In this paper, we use (scientific) publications as a source of knowledge about data processing pipelines. We describe a method designed to classify sentences according to the nature of the contained information (i.e. scientific objective, dataset, method, software, result), and to extract relevant named entities. The extracted information is then semantically annotated and published as linked data in open knowledge repositories according to the DMS ontology for data processing metadata. To demonstrate the effectiveness and performance of our approach, we present the results of a quantitative and qualitative analysis performed on four different conference series.
A survey of context data distribution for mobile ubiquitous systems
The capacity to gather and timely deliver to the service level any relevant information that can characterize the service-provisioning environment, such as computing resources/capabilities, physical device location, user preferences, and time constraints, usually defined as context-awareness, is widely recognized as a core function for the development of modern ubiquitous and mobile systems. Much work has been done to enable context-awareness and to ease the diffusion of context-aware services; at the same time, several middleware solutions have been designed to transparently implement context management and provisioning in the mobile system. However, to the best of our knowledge, an in-depth analysis of the context data distribution, namely, the function in charge of distributing context data to interested entities, is still missing. Starting from the core assumption that only effective and efficient context data distribution can pave the way to the deployment of truly context-aware services, this article aims at putting together current research efforts to derive an original and holistic view of the existing literature. We present a unified architectural model and a new taxonomy for context data distribution by considering and comparing a large number of solutions. Finally, based on our analysis, we draw some of the research challenges still unsolved and identify some possible directions for future work.
Role of endothelin-1 in exposure to high altitude: Acute Mountain Sickness and Endothelin-1 (ACME-1) study.
BACKGROUND The degree of pulmonary hypertension in healthy subjects exposed to acute hypobaric hypoxia at high altitude was found to be related to increased plasma endothelin (ET)-1. The aim of the present study was to investigate the effects of ET-1 antagonism on pulmonary hypertension, renal water, and sodium balance under acute and prolonged exposure to high-altitude-associated hypoxia. METHODS AND RESULTS In a double-blind fashion, healthy volunteers were randomly assigned to receive bosentan (62.5 mg for 1 day and 125 mg for the following 2 days; n=10) or placebo (n=10) at sea level and after rapid ascent to high altitude (4559 m). At sea level, bosentan did not induce any significant changes in hemodynamic or renal parameters. At altitude, bosentan induced a significant reduction of systolic pulmonary artery pressure (21+/-7 versus 31+/-7 mm Hg, P<0.03) and a mild increase in arterial oxygen saturation versus placebo after just 1 day of treatment. However, both urinary volume and free water clearance (H2OCl/glomerular filtration rate) were significantly reduced versus placebo after 2 days of ET-1 antagonism (1100+/-200 versus 1610+/-590 mL; -6.7+/-3.5 versus -1.8+/-4.8 mL/min, P<0.05 versus placebo for both). Sodium clearance and segmental tubular function were not significantly affected by bosentan administration. CONCLUSIONS The present results indicate that the early beneficial effect of ET-1 antagonism on pulmonary blood pressure is followed by an impairment in volume adaptation. These findings must be considered for the prevention and treatment of acute mountain sickness.
Optimizing Image Steganography using Particle Swarm Optimization Algorithm
Image steganography is the computing field of hiding information from a source into a target image in such a way that it becomes almost imperceptible to the human eye. Despite their high capacity for hiding information, the usual Least Significant Bit (LSB) techniques can be easily discovered. In order to hide information in more significant bits, the target image should be optimized. In this paper, we propose an optimization solution based on the Standard Particle Swarm Optimization 2011 (PSO), which has been compared with a previous genetic algorithm-based approach and shows promising results. Specifically, we show an adaptation of the solution that keeps the essence of PSO while leaving the message-hosting bits unchanged.
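As a hedged illustration of the optimization loop, the sketch below runs a plain PSO (not the full Standard PSO 2011 variant) over a per-block intensity offset, with fitness defined as the distortion introduced when the message bits are forced into a more significant bit plane; all values and parameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=64).astype(float)   # one 8x8 block, flattened
message = rng.integers(0, 2, size=64)                 # bits to hide
BIT = 2                                               # hide in the 3rd least significant bit

def embed(block, bits, bit=BIT):
    b = block.astype(int)
    return (b & ~(1 << bit)) | (bits << bit)          # force the chosen bit plane

def fitness(offset):
    adjusted = np.clip(cover + offset, 0, 255)
    stego = embed(adjusted, message)
    return np.mean((stego - cover) ** 2)              # distortion w.r.t. the original cover

# Plain PSO over a scalar per-block offset.
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(-8, 8, n)
vel = np.zeros(n)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("best offset:", gbest, "distortion:", fitness(gbest))
```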
Compact and Computationally Efficient Representation of Deep Neural Networks
At the core of any inference procedure in deep neural networks are dot product operations, which are the components that require the most computational resources. For instance, deep neural networks such as VGG-16 require up to 15 giga-operations to perform the dot products present in a single forward pass, which results in significant energy consumption and therefore limits their use in resource-limited environments, e.g., on embedded devices or smartphones. A common approach to reduce the cost of inference is to reduce its memory complexity by lowering the entropy of the weight matrices of the neural network, e.g., by pruning and quantizing their elements. However, the quantized weight matrices are then usually represented either by a dense or sparse matrix storage format, whose associated dot product complexity is not bounded by the entropy of the matrix. This means that the associated inference complexity ultimately depends on the implicit statistical assumptions that these matrix representations make about the weight distribution, which can in many cases be suboptimal. In this paper we address this issue and present new efficient representations for matrices with low-entropy statistics. These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, consequently implying that they are guaranteed to become more efficient as the entropy of the matrix is reduced. In our experiments we show that performing the dot product under these new matrix formats can indeed be more energy and time efficient under practically relevant assumptions. For instance, we are able to attain up to 42x compression ratios, 5x speed-ups and 90x energy savings when we convert, in a lossless manner, the weight matrices of state-of-the-art networks such as AlexNet, VGG-16, ResNet152 and DenseNet into the new matrix formats and benchmark their respective dot product operations. Keywords: neural network compression, computationally efficient deep learning, data structures, sparse matrices, lossless coding.
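The paper's matrix formats are not reproduced here; as a simplified illustration of why low-entropy weights help, the sketch below groups the positions of each distinct quantized value in a row so the dot product needs only one multiplication per distinct value rather than one per nonzero entry.

```python
import numpy as np
from collections import defaultdict

def build_grouped_rows(W):
    """For each row, map distinct nonzero weight value -> list of column indices."""
    rows = []
    for row in W:
        groups = defaultdict(list)
        for j, w in enumerate(row):
            if w != 0:
                groups[w].append(j)
        rows.append(groups)
    return rows

def grouped_matvec(rows, x):
    y = np.zeros(len(rows))
    for i, groups in enumerate(rows):
        # One multiplication per distinct value: sum the inputs first, multiply once.
        y[i] = sum(w * x[cols].sum() for w, cols in groups.items())
    return y

rng = np.random.default_rng(0)
W = rng.choice([0.0, -0.5, 0.5], size=(4, 16), p=[0.7, 0.15, 0.15])  # quantized, sparse
x = rng.normal(size=16)
print(np.allclose(grouped_matvec(build_grouped_rows(W), x), W @ x))   # True
```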
Phase II Trial of High-Dose , Intermittent Calcitriol ( 1 , 25 Dihydroxyvitamin D 3 ) and Dexamethasone in Androgen-Independent Prostate Cancer
Perceptions of social media on students' academic engagement in tertiary education
Social media has been gaining popularity among university students, who use social media at higher rates than the general population. Students consequently spend a significant amount of time on social media, which may inevitably have an effect on their academic engagement. Subsequently, scholars have been intrigued to examine the impact of social media on students' academic engagement. Research that has directly explored the use of social media and its impact on students in tertiary institutions has revealed limited and mixed findings, particularly within a South African context; thus leaving a window of opportunity to further investigate the impact that social media has on students' academic engagement. This study therefore aims to investigate the use of social media in tertiary institutions, the impact that the use thereof has on students' academic engagement and to suggest effective ways of using social media in tertiary institutions to improve students' academic engagement from students' perspectives. This study used an interpretivist (inductive) approach in order to determine and comprehend students' perspectives and experiences towards the use of social media and the effects thereof on their academic engagement. A single case study design at Rhodes University was used to determine students' perceptions and data was collected using an online survey. The findings reveal that students use social media for both social and academic purposes. Students further perceived that social media has a positive impact on their academic engagement and suggest that using social media at tertiary level could be advantageous and could enhance students' academic engagement.
Informed consent: how much does the patient understand?
Comprehension and recall of the information contained in the informed consent statement was tested in clinically hypertensive patients entering a controlled trial comparing hydrochlorothiazide and propranolol. The consent statement was the primary vehicle for conveying the information to the patient. The average of correct answers to a multiple-choice quiz was 71.6% at 2 hr and 61.2% at 3 mo after the consent procedure. The effectiveness of recall did not correlate with level of education. Patients exhibited greater comprehension of the action of the drugs than of their side effects. Nearly all patients indicated their belief that they would receive the best possible care. While 95% wanted to be informed about the trial, 75% stated they would have given their consent even without this information.
Metabolic effects of dialyzate glucose in chronic hemodialysis: results from a prospective, randomized crossover trial.
BACKGROUND There is no agreement concerning dialyzate glucose concentration in hemodialysis (HD) and 100 and 200 mg/dL (G100 and G200) are frequently used. G200 may result in diffusive glucose flux into the patient, with consequent hyperglycemia and hyperinsulinism, and electrolyte alterations, in particular potassium (K) and phosphorus (P). This trial compared metabolic effects of G100 versus G200. METHODS Chronic HD patients participated in this randomized, single masked, controlled crossover trial (www.clinicaltrials.gov: #NCT00618033) consisting of two consecutive 3-week segments with G100 and G200, respectively. Intradialytic serum glucose (SG) and insulin concentrations (SI) were measured at 0, 30, 60, 120, 180, 240 min and immediately post-HD; P and K were measured at 0, 120, 180 min and post-HD. Hypoglycemia was defined as an SG<70 mg/dL. Mean SG and SI were computed as area under the curve divided by treatment time. RESULTS Fourteen diabetic and 15 non-diabetic subjects were studied. SG was significantly higher with G200 as compared to G100, both in diabetic {G200: 192.8±48.1 mg/dL; G100: 154.0±27.3 mg/dL; difference 38.8 [95% confidence interval (CI): 21.2-56.4] mg/dL; P<0.001} and non-diabetic subjects [G200: 127.0±11.2 mg/dL; G100 106.5±10.8 mg/dL; difference 20.6 (95% CI: 15.3-25.9) mg/dL; P<0.001]. SI was significantly higher with G200 in non-diabetic subjects. Frequency of hypoglycemia, P and K serum levels, interdialytic weight gain and adverse intradialytic events did not differ significantly between G100 and G200. CONCLUSION G200 may exert unfavorable metabolic effects in chronic HD patients, in particular hyperglycemia and hyperinsulinism, the latter in non-diabetic subjects.
TaintDroid: an information flow tracking system for real-time privacy monitoring on smartphones
Today's smartphone operating systems frequently fail to provide users with adequate control over and visibility into how third-party applications use their privacy-sensitive data. We address these shortcomings with TaintDroid, an efficient, systemwide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid provides real-time analysis by leveraging Android's virtualized execution environment. Using TaintDroid to monitor the behavior of 30 popular third-party Android applications, we found 68 instances of misappropriation of users' location and device identification information across 20 applications. Monitoring sensitive data with TaintDroid provides informed use of third-party applications for phone users and valuable input for smartphone security service firms seeking to identify misbehaving applications.
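As a conceptual sketch only (not Android/Dalvik-level instrumentation), the snippet below shows the basic dynamic taint-tracking idea that systems like TaintDroid rely on: values carry taint tags, tags propagate through operations, and a sink reports which sensitive sources reach it.

```python
class Tainted:
    def __init__(self, value, tags=frozenset()):
        self.value, self.tags = value, frozenset(tags)

    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_tags = other.tags if isinstance(other, Tainted) else frozenset()
        return Tainted(self.value + o_val, self.tags | o_tags)   # taint union

def network_send(data):                      # taint sink
    if isinstance(data, Tainted) and data.tags:
        print("ALERT: leaking data tainted by", set(data.tags))
    else:
        print("sending untainted data")

location = Tainted("52.1,4.3", {"GPS"})      # taint source: location API
imei = Tainted("356938035643809", {"IMEI"})  # taint source: device identifier
payload = location + "," + imei

network_send(payload)                        # ALERT: {'GPS', 'IMEI'}
```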
Analysis of a polling system for telephony traffic with application to wireless LANs
Recently, polling has been included as a resource sharing mechanism in the medium access control (MAC) protocol of several communication systems, such as the IEEE 802.11 wireless local area network, primarily to support real-time traffic. Furthermore, to allow these communication systems to support multimedia traffic, the polling scheme often coexists with other MAC schemes such as random access. Motivated by these systems, we develop a model for a polling system with vacations, where the vacations represent the time periods in which the resource sharing mechanism used is a non-polling mode. The real-time traffic served by the polling mode in our study is telephony. We use an on-off Markov modulated fluid (MMF) model to characterize telephony sources. Our analytical study and a counterpart validating simulation study show the following. Since voice codec rates are much smaller than link transmission rates, the queueing delay that arises from waiting for a poll dominates the total delay experienced by a voice packet. To keep delays low, the number of telephone calls that can be admitted must be chosen carefully according to delay tolerance, loss tolerance, codec rates, protocol overheads and the amount of bandwidth allocated to the polling mode. The effect of statistical multiplexing gain obtained by exploiting the on-off characteristics of telephony traffic is more noticeable when the impact of polling overhead is small.
Fast visibility restoration from a single color or gray level image
One source of difficulty when processing outdoor images is the presence of haze, fog or smoke, which fades the colors and reduces the contrast of the observed objects. We introduce a novel algorithm and variants for visibility restoration from a single image. The main advantage of the proposed algorithm compared with others is its speed: its complexity is a linear function of the number of image pixels only. This speed allows visibility restoration to be applied for the first time within real-time processing applications such as sign, lane-marking and obstacle detection from an in-vehicle camera. Another advantage is the ability to handle both color and gray-level images, since the ambiguity between the presence of fog and objects with low color saturation is resolved by assuming that only small objects can have colors with low saturation. The algorithm is controlled by only a few parameters and consists of three steps: atmospheric veil inference, image restoration and smoothing, and tone mapping. A comparative study and quantitative evaluation against a few other state-of-the-art algorithms demonstrates that similar or better quality results are obtained. Finally, an application to lane-marking extraction in gray-level images is presented, illustrating the interest of the approach.
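A simplified sketch of the single-image restoration idea under the standard haze model I = J*t + A*(1 - t), with the atmospheric veil estimated from a smoothed local minimum; the paper's exact veil inference, smoothing, and tone mapping are not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def dehaze_gray(I, p=0.9, window=15):
    I = I.astype(float) / 255.0
    A = I.max()                                   # crude airlight estimate
    local_min = minimum_filter(I, size=window)    # dark-channel-like lower envelope
    veil = p * uniform_filter(local_min, size=window)
    veil = np.clip(veil, 0, I)                    # the veil cannot exceed the observation
    t = np.maximum(1.0 - veil / A, 0.1)           # transmission, floored for stability
    J = (I - A * (1.0 - t)) / t
    return np.clip(J * 255.0, 0, 255).astype(np.uint8)

# Usage (hypothetical file): restored = dehaze_gray(imageio.imread("foggy_road.png"))
```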
Face Alignment via Regressing Local Binary Features
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research, which is the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% relative. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
Folate and vitamin B12 levels in levodopa-treated Parkinson's disease patients: their relationship to clinical manifestations, mood and cognition.
We tested the hypothesis that mood, clinical manifestations and cognitive impairment of levodopa-treated Parkinson's disease (PD) patients are associated with vitamin B12 and folate deficiency. To this end, we performed this cross-sectional study by measuring serum folate and vitamin B12 blood levels in 111 consecutive PD patients. Levodopa-treated PD patients showed significantly lower serum levels of folate and vitamin B12 than neurological controls, while depressed patients had significantly lower serum folate levels as compared to non-depressed. Cognitively impaired PD patients exhibited significantly lower serum vitamin B12 levels as compared to cognitively non-impaired. In conclusion, lower folate levels were associated with depression, while lower vitamin B12 levels were associated with cognitive impairment. The effects of vitamin supplementation merit further attention and investigation.
Engineering the CA(2+)-activated photoprotein aequorin with reduced affinity for calcium.
Two stage PCR has been used to introduce single amino acid substitutions into the EF hand structures of the Ca(2+)-activated photoprotein aequorin. Transcription of PCR products, followed by cell free translation of the mRNA, allowed characterisation of recombinant proteins in vitro. Substitution of D to A at position 119 produced an active photoprotein with a Ca2+ affinity reduced by a factor of 20 compared to the wild type recombinant aequorin. This recombinant protein will be suitable for measuring Ca2+ inside the endoplasmic reticulum, the mitochondria, endosomes and the outside of live cells.
Delphin2: an over actuated autonomous underwater vehicle for manoeuvring research
Delphin2 is a hover-capable, torpedo-style Autonomous Underwater Vehicle (AUV) developed at the University of Southampton to provide a test bed for research in marine robotics, primarily to enhance the manoeuvring capability of AUVs. This paper describes the mechanical design of the vehicle and its software architecture. The performance of the vehicle is presented, as well as preliminary findings from the vehicle's first fully autonomous video survey missions in Lough Erne, Northern Ireland. It is interesting to note that the low cost of the vehicle and its development by a succession of MEng and PhD students has provided an excellent training environment for specialists in the growing area of marine autonomous vehicles.
Free-form shape design using triangulated surfaces
We present an approach to modeling with truly mutable yet completely controllable free-form surfaces of arbitrary topology. Surfaces may be pinned down at points and along curves, cut up and smoothly welded back together, and faired and reshaped in the large. This style of control is formulated as a constrained shape optimization, with minimization of squared principal curvatures yielding graceful shapes that are free of the parameterization worries accompanying many patch-based approaches. Triangulated point sets are used to approximate these smooth variational surfaces, bridging the gap between patch-based and particle-based representations. Automatic refinement, mesh smoothing, and re-triangulation maintain a good computational mesh as the surface shape evolves, and give sample points and surface features much of the freedom to slide around in the surface that oriented particles enjoy. The resulting surface triangulations are constructed and maintained in real time.
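The "minimization of squared principal curvatures" mentioned above is commonly written as a fairness energy over the surface; the following form is a plausible reading of that objective rather than a quotation of the paper's exact functional:

```latex
E_{\mathrm{fair}}(S) \;=\; \int_{S} \left( \kappa_1^{2} + \kappa_2^{2} \right)\, \mathrm{d}A
\qquad \text{subject to point and curve constraints on } S,
```

where $\kappa_1$ and $\kappa_2$ are the principal curvatures and $\mathrm{d}A$ is the area element; the triangulated point set then serves as a discrete approximation of the constrained minimizer.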
Etiology and Treatment Modalities of Anterior Open Bite Malocclusion
The complexity of anterior open bite is attributed to a combination of skeletal, dental, soft tissue, and habitual factors. Multiple treatment strategies aimed at different etiologies of anterior open bite have been proposed. However, the tendency toward relapse after conventional or surgical orthodontic treatment has been indicated. Therefore, anterior open bite is considered one of the most challenging dentofacial deformities to treat. The aim of this article is to review the etiologies, dentofacial morphology, treatment modalities, retention, and stability of anterior open bite. The etiology of anterior open bite malocclusions is multifactorial and numerous theories have been proposed, including genetic, anatomic and environmental factors. The diagnosis and treatment modalities vary according to the etiology. Failure of tongue posture adaptation subsequent to orthodontic and/or surgical treatment might be the primary reason for relapse of anterior open bite. Prolonged retention with fixed or removable retainers is advisable and necessary in most cases of open bite treatment. The treatment of anterior open bite remains a tough challenge to the clinician; careful diagnosis and timely intervention with proper treatment modalities and appliance selection will improve treatment outcomes and long-term stability.
Compressive Sensing Based Positioning Using RSS of WLAN Access Points
The sparse nature of the location finding problem makes the theory of compressive sensing desirable for indoor positioning in Wireless Local Area Networks (WLANs). In this paper, we address the received signal strength (RSS)-based localization problem in WLANs using the theory of compressive sensing (CS), which offers accurate recovery of sparse signals from a small number of measurements by solving an $\ell_1$-minimization problem. A pre-processing procedure of orthogonalization is used to induce the incoherence needed in the CS theory. In order to mitigate the effects of RSS variations due to channel impediments, the proposed positioning system consists of two steps: coarse localization by exploiting affinity propagation, and fine localization by the CS theory. In the fine localization stage, the access point selection problem is studied to further increase the accuracy. We implement the positioning system on a WiFi-integrated mobile device (HP iPAQ hx4700 with Windows Mobile 2003 Pocket PC) to evaluate the performance. Experimental results indicate that the proposed system leads to substantial improvements in localization accuracy and complexity over the widely used traditional fingerprinting methods.
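The $\ell_1$-minimization at the core of the fine-localization step can be sketched as a basis-pursuit linear program; the snippet below uses a random radio map and a noise-free measurement, and it omits the orthogonalization pre-processing, the affinity-propagation coarse stage and the access point selection described above, so it is only a toy illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_grid, n_aps = 100, 20                    # candidate grid points, WLAN access points

# Radio map: expected RSS from every AP at every grid point (random stand-in here).
radio_map = rng.normal(size=(n_aps, n_grid))

# 1-sparse location indicator: the user stands at exactly one grid point.
x_true = np.zeros(n_grid)
x_true[37] = 1.0
y = radio_map @ x_true                     # measured RSS vector

# Basis pursuit: minimize ||x||_1 subject to radio_map @ x = y, written as an LP
# over the split x = x_pos - x_neg with x_pos, x_neg >= 0.
c = np.ones(2 * n_grid)
A_eq = np.hstack([radio_map, -radio_map])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n_grid] - res.x[n_grid:]
print("estimated grid index:", int(np.argmax(np.abs(x_hat))))
```

With 20 measurements and a 1-sparse unknown over 100 grid points, the LP typically recovers the correct index; real RSS noise is what motivates the coarse/fine two-step design in the paper.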
Critical Failure Factors in ERP Implementation
This study firstly examines the current literature concerning ERP implementation problems during implementation phases and causes of ERP implementation failure. A multiple case study research methodology was adopted to understand “why” and “how” these ERP systems could not be implemented successfully. Different stakeholders (including top management, project manager, project team members and ERP consultants) from these case studies were interviewed, and ERP implementation documents were reviewed for triangulation. An ERP life cycle framework was applied to study the ERP implementation process and the associated problems in each phase of ERP implementation. Fourteen critical failure factors were identified and analyzed, and three common critical failure factors (poor consultant effectiveness, project management effectiveness and poor quality of business process re-engineering) were examined and discussed. Future research on ERP implementation and critical failure factors is discussed. It is hoped that this research will help to bridge the current literature gap and provide practical advice for both academics and practitioners.
Kinect Identity: Technology and Experience
Kinect Identity, a key component of Microsoft's Kinect for the Xbox 360, combines multiple technologies and careful user interaction design to achieve the goal of recognizing and tracking player identity.
What-and-where to match: Deep spatially multiplicative integration networks for person re-identification
Matching pedestrians across disjoint camera views, known as person re-identification (re-id), is a challenging problem that is of importance to visual recognition and surveillance. Most existing methods exploit local regions within spatial manipulation to perform matching in local correspondence. However, they essentially extract fixed representations from pre-divided regions for each image and subsequently perform matching based on the extracted representation. For models in this pipeline, local finer patterns that are crucial to distinguish positive pairs from negative ones cannot be captured, thus making them underperform. In this paper, we propose a novel deep multiplicative integration gating function, which answers the question of what-and-where to match for effective person re-id. To address what to match, our deep network emphasizes common local patterns by learning joint representations in a multiplicative way. The network comprises two Convolutional Neural Networks (CNNs) to extract convolutional activations, and generates relevant descriptors for pedestrian matching. This leads to flexible representations for pair-wise images. To address where to match, we combat spatial misalignment by performing spatially recurrent pooling via a four-directional recurrent neural network to impose spatial dependency over all positions with respect to the entire image. The proposed network is designed to be end-to-end trainable to characterize local pairwise feature interactions in a spatially aligned manner. To demonstrate the superiority of our method, extensive experiments are conducted over three benchmark data sets: VIPeR, CUHK03 and Market-1501.
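A minimal, hedged PyTorch sketch of multiplicative feature integration over an image pair is given below; the siamese backbone, layer sizes and sigmoid gate are illustrative assumptions, and the four-directional spatially recurrent pooling stage is omitted entirely.

```python
import torch
import torch.nn as nn

class MultiplicativeIntegration(nn.Module):
    """Toy fusion of two CNN activation maps by elementwise (multiplicative) gating."""

    def __init__(self, channels=64):
        super().__init__()
        # Shared backbone applied to both images of a pair (siamese-style).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.gate = nn.Conv2d(channels, channels, 1)   # per-channel mixing before gating

    def forward(self, img_a, img_b):
        fa, fb = self.backbone(img_a), self.backbone(img_b)
        # Multiplicative integration: activations shared by both views are emphasized,
        # mismatched local patterns are damped.
        return torch.sigmoid(self.gate(fa)) * fb

model = MultiplicativeIntegration()
fused = model(torch.randn(2, 3, 128, 64), torch.randn(2, 3, 128, 64))
print(fused.shape)   # torch.Size([2, 64, 128, 64])
```

In the paper the fused maps would then be pooled recurrently in four directions before matching; here they are left as raw activation maps.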
Erratum to: Long-term efficacy and safety of subcutaneous pasireotide in acromegaly: results from an open-ended, multicenter, Phase II extension study
Pasireotide has a broader somatostatin receptor binding profile than other somatostatin analogues. A 16-week, Phase II trial showed that pasireotide may be an effective treatment for acromegaly. An extension to this trial assessed the long-term efficacy and safety of pasireotide. This study was an open-label, single-arm, open-ended extension study (primary efficacy and safety evaluated at month 6). Patients could enter the extension if they achieved biochemical control (GH ≤ 2.5 µg/L and normal IGF-1) or showed clinically relevant improvements during the core study. Thirty of the 60 patients who received pasireotide (200–900 µg bid) in the core study entered the extension. At extension month 6, of the 26 evaluable patients, six were biochemically controlled, of whom five had achieved control during the core study. Normal IGF-1 was achieved by 13/26 patients and GH ≤ 2.5 µg/L by 12/26 at month 6. Nine patients received pasireotide for ≥24 months in the extension; three who were biochemically controlled at month 24 had achieved control during the core study. Of 29 patients with MRI data, nine had significant (≥20%) tumor volume reduction during the core study; an additional eight had significant reduction during the extension. The most common adverse events were transient gastrointestinal disturbances. Clinical Trials Registration Number: NCT00171730.
Security Shortcomings and Countermeasures for the SAE J1939 Commercial Vehicle Bus Protocol
In recent years, countless security concerns related to automotive systems have been revealed, either by academic research or by real-life attacks. While current attention has largely focused on passenger cars, due to their ubiquity, the reported bus-related vulnerabilities are applicable to all industry sectors where the same bus technology is deployed, i.e., the CAN bus. The SAE J1939 specification extends and standardizes the use of CAN for commercial vehicles, where security plays an even greater role. In contrast to empirical results that attest such vulnerabilities in commercial vehicles by practical experiments, here we determine that existing shortcomings in the SAE J1939 specifications open the road to several new attacks, e.g., impersonation, denial of service (DoS), distributed DoS, etc. Taking advantage of an industry-standard CANoe-based simulation, we demonstrate attacks with potential safety-critical effects that are mounted while still conforming to the SAE J1939 standard specification. We discuss countermeasures and security enhancements based on message authentication mechanisms. Finally, we evaluate and discuss the impact of employing these mechanisms on the overall network communication.
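One of the countermeasures discussed above, message authentication, can be sketched as a truncated AES-CMAC computed over a counter, the parameter group number and the data field; the key handling, field layout, counter width and tag placement below are illustrative assumptions and are not taken from the SAE J1939 specification.

```python
import os
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers import algorithms

# Hypothetical pre-shared key and a J1939-style payload (PGN plus 8-byte data field).
key = os.urandom(16)
pgn = (0xFECA).to_bytes(3, "big")          # example parameter group number
data = bytes([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08])

def authenticate(key: bytes, pgn: bytes, data: bytes, counter: int) -> bytes:
    """Compute a truncated AES-CMAC tag over counter || PGN || data.

    The monotonic counter is meant to guard against replay; the tag is truncated
    so that it can be carried alongside the payload, e.g. in an extra frame.
    """
    mac = CMAC(algorithms.AES(key))
    mac.update(counter.to_bytes(4, "big") + pgn + data)
    return mac.finalize()[:4]              # 32-bit truncated tag

tag = authenticate(key, pgn, data, counter=1)
print(tag.hex())
```

The impact mentioned at the end of the abstract is largely the extra bus load and latency that such tags and counters introduce once every protected parameter group carries them.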
General Embedded Quantization for Wavelet-Based Lossy Image Coding
Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.
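The USDQ+BPC baseline referred to above can be sketched in a few lines: uniform scalar deadzone quantization produces sign-magnitude indices, and decoding only the most significant bitplanes of the magnitudes amounts to using a coarser quantizer, which is what makes the bitstream embedded. The step size, bit depth and Laplacian test data below are illustrative assumptions.

```python
import numpy as np

def usdq_indices(coeffs, delta):
    """Uniform scalar deadzone quantization: signed integer indices."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / delta)

def decode_bitplanes(indices, delta, kept_planes, total_planes=8):
    """Embedded decoding: keep only the most significant bitplanes of |index|."""
    mag = np.abs(indices).astype(np.int64)
    step = 1 << (total_planes - kept_planes)       # implied coarser step size
    coarse = mag & ~(step - 1)                     # drop the low-order bitplanes
    # Mid-point reconstruction within the surviving quantization interval.
    recon = np.where(coarse > 0, coarse + step / 2.0, 0.0)
    return np.sign(indices) * recon * delta

coeffs = np.random.default_rng(2).laplace(scale=4.0, size=10_000)  # wavelet-like data
idx = usdq_indices(coeffs, delta=0.5)
for planes in (4, 6, 8):
    mse = np.mean((coeffs - decode_bitplanes(idx, 0.5, planes)) ** 2)
    print(f"{planes} bitplanes kept -> MSE {mse:.4f}")
```

Roughly speaking, GEQ generalizes this construction by allowing the successive refinement stages to use interval subdivisions other than the power-of-two splits implied by bitplanes.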
Lessons from the Cancer Genome
Systematic studies of the cancer genome have exploded in recent years. These studies have revealed scores of new cancer genes, including many in processes not previously known to be causal targets in cancer. The genes affect cell signaling, chromatin, and epigenomic regulation; RNA splicing; protein homeostasis; metabolism; and lineage maturation. Still, cancer genomics is in its infancy. Much work remains to complete the mutational catalog in primary tumors and across the natural history of cancer, to connect recurrent genomic alterations to altered pathways and acquired cellular vulnerabilities, and to use this information to guide the development and application of therapies.
Bidirectional LSTM-RNN for Improving Automated Assessment of Non-Native Children's Speech
Recent advances in ASR and spoken language processing have led to improved systems for the automated assessment of spoken language. However, it is still challenging for automated scoring systems to achieve high performance, in terms of agreement with human experts, when applied to non-native children's spontaneous speech. The subpar performance is mainly caused by the relatively low recognition rate on non-native children's speech. In this paper, we investigate different neural network architectures for improving non-native children's speech recognition and the impact of the features extracted from the corresponding ASR output on the automated assessment of speaking proficiency. Experimental results show that a bidirectional LSTM-RNN can outperform a feed-forward DNN in ASR, with an overall relative WER reduction of 13.4%. The improved speech recognition can then boost the language proficiency assessment performance. Correlations between the rounded automated scores and expert scores range from 0.66 to 0.70 for the three speaking tasks studied, similar to the human-human agreement levels for these tasks.
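A minimal PyTorch sketch of the bidirectional LSTM-RNN component is given below; the feature dimension, hidden size, layer count and output targets are assumptions for illustration and do not reflect the exact acoustic features or training recipe used in the paper.

```python
import torch
import torch.nn as nn

class BiLSTMAcousticModel(nn.Module):
    """Frame-level acoustic model: stacked bidirectional LSTM plus a linear output layer."""

    def __init__(self, n_feats=40, hidden=320, n_layers=3, n_targets=3000):
        super().__init__()
        self.blstm = nn.LSTM(n_feats, hidden, num_layers=n_layers,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_targets)   # 2x: forward + backward states

    def forward(self, feats):                 # feats: (batch, frames, n_feats)
        h, _ = self.blstm(feats)              # h:     (batch, frames, 2 * hidden)
        return self.out(h)                    # per-frame target scores

model = BiLSTMAcousticModel()
fbank = torch.randn(4, 200, 40)               # a small batch of filterbank sequences
print(model(fbank).shape)                      # torch.Size([4, 200, 3000])
```

The bidirectional pass lets each frame's score condition on both left and right context, which is the plausible source of the WER gain over a feed-forward DNN reported above.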
Total quality management and sustainable competitive advantage
Although it is generally accepted that Total Quality Management (TQM) can generate a sustainable competitive advantage, there is, surprisingly, little or no theory to underpin that belief. Therefore, the primary purpose of this paper is to explore the validity of the claim. By drawing on the market-based theory of competitive advantage, resource-based theory of the firm, and systems theory, we are able to conclude that the belief is warranted. We deduce that the content of TQM is capable of producing a cost- or differentiation-based advantage, and that the tacitness and complexity that are inherent in the process of TQM have the potential to generate the barriers to imitation that are necessary for sustainability.
Failure mode and effects analysis using intuitionistic fuzzy hybrid weighted Euclidean distance operator
Reflectance Capture Using Univariate Sampling of BRDFs
We propose the use of a lightweight setup consisting of a collocated camera and light source – commonly found on mobile devices – to reconstruct surface normals and spatially-varying BRDFs of near-planar material samples. A collocated setup provides only a 1-D “univariate” sampling of a 3-D isotropic BRDF. We show that a univariate sampling is sufficient to estimate the parameters of commonly used analytical BRDF models. Subsequently, we use a dictionary-based reflectance prior to derive a robust technique for per-pixel normal and BRDF estimation. We demonstrate real-world shape and reflectance capture, and its application to material editing and classification, using real data acquired with a mobile phone.
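The claim that a univariate (collocated) sampling suffices to fit common analytical BRDF models can be illustrated by fitting a simple Blinn-Phong-style lobe to synthetic measurements along the collocated slice; the model, parameter values and noise level below are assumptions, and the paper's dictionary-based reflectance prior and per-pixel normal estimation are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def collocated_blinn_phong(theta, kd, ks, n):
    """Reflected intensity for a collocated camera/light (view direction == light direction).

    With the half vector equal to the view direction, this Blinn-Phong-style lobe
    collapses to a 1-D function of the angle theta between view and surface normal.
    """
    cos_t = np.cos(theta)
    return (kd + ks * cos_t ** n) * cos_t

# Synthetic "univariate" measurements along the collocated slice of the BRDF.
rng = np.random.default_rng(3)
theta = np.linspace(0.0, np.deg2rad(80), 60)
obs = collocated_blinn_phong(theta, kd=0.3, ks=0.6, n=40.0)
obs += rng.normal(scale=0.005, size=obs.shape)

params, _ = curve_fit(collocated_blinn_phong, theta, obs,
                      p0=(0.5, 0.5, 10.0), bounds=(0.0, [1.0, 1.0, 500.0]))
print("recovered kd, ks, n:", np.round(params, 3))
```

Per-pixel fits of this kind become unstable under noise, which is where the dictionary-based prior mentioned in the abstract comes in.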
Effects of extreme obliquity variations on the habitability of exoplanets.
We explore the impact of obliquity variations on planetary habitability in hypothetical systems with high mutual inclination. We show that large-amplitude, high-frequency obliquity oscillations on Earth-like exoplanets can suppress the ice-albedo feedback, increasing the outer edge of the habitable zone. We restricted our exploration to hypothetical systems consisting of a solar-mass star, an Earth-mass planet at 1 AU, and 1 or 2 larger planets. We verified that these systems are stable for 10^8 years with N-body simulations and calculated the obliquity variations induced by the orbital evolution of the Earth-mass planet and a torque from the host star. We ran a simplified energy balance model on the terrestrial planet to assess surface temperature and ice coverage on the planet's surface, and we calculated differences in the outer edge of the habitable zone for planets with rapid obliquity variations. For each hypothetical system, we calculated the outer edge of habitability for two conditions: (1) the full evolution of the planetary spin and orbit and (2) the eccentricity and obliquity fixed at their average values. We recovered previous results that higher values of fixed obliquity and eccentricity expand the habitable zone, but we also found that obliquity oscillations further expand habitable orbits in all cases. Terrestrial planets near the outer edge of the habitable zone may be more likely to support life in systems that induce rapid obliquity oscillations as opposed to fixed-spin planets. Such planets may be the easiest to directly characterize with space-borne telescopes.
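A zero-dimensional caricature of such an energy balance model is sketched below; all constants and the albedo ramp are illustrative assumptions rather than the paper's latitudinally resolved model, but the bistability it exhibits is the ice-albedo feedback that rapid obliquity oscillations are argued to suppress.

```python
import numpy as np

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0           # stellar flux at 1 AU, W m^-2
EPS = 0.61            # effective emissivity (crude greenhouse stand-in)

def albedo(temp_k):
    """Ice-albedo feedback: high albedo when frozen, low when warm, linear ramp between."""
    return np.interp(temp_k, [260.0, 290.0], [0.62, 0.25])

def equilibrium_temperature(flux, t0, steps=200):
    """Fixed-point iteration of EPS * SIGMA * T**4 = flux * (1 - albedo(T)) / 4."""
    t = t0
    for _ in range(steps):
        t = (flux * (1.0 - albedo(t)) / (4.0 * EPS * SIGMA)) ** 0.25
    return t

# Warm and cold initial states settle on different climate branches at the same flux,
# the bistability that large insolation swings (e.g. from obliquity cycles) can break.
print("warm start:", round(equilibrium_temperature(S0, t0=300.0), 1), "K")
print("cold start:", round(equilibrium_temperature(S0, t0=220.0), 1), "K")
```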