Analytical model for effects of twisting on litz-wire losses
Litz wire uses complex twisting to balance currents between strands. Most models assume that the twisting works perfectly to accomplish this balancing, and thus are not helpful in choosing the details of the twisting configuration. A complete model that explicitly models the effect of twisting on loss is introduced. Skin effect and proximity effect are each modeled at the level of the individual strands and at each level of the twisting construction. Comparisons with numerical simulations are used to verify the model. The results are useful for making design choices for the twisting configuration and the pitches of each twisting step. Windings with small numbers of turns are the most likely to have significant bundle-level effects that are not captured by conventional models, and are the most important to model and optimize with this approach.
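For concreteness, the kind of per-strand term such a model sums over strands and twisting levels can be illustrated with the standard low-frequency approximation for proximity-effect loss (notation mine, not the paper's): for one round strand of diameter d and resistivity ρ in a sinusoidal transverse flux density of peak value B̂ at angular frequency ω,

```latex
% Time-average proximity-effect loss per unit length of one round strand;
% valid when d is small compared with the skin depth. Notation is
% illustrative, not the paper's.
P_{\mathrm{prox}} \;\approx\; \frac{\pi\, \omega^{2} \hat{B}^{2} d^{4}}{128\, \rho}
```

Twisting determines the field B̂ each strand actually sees at every level of the construction, which is how the choice of configuration and pitches enters the loss.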
Spectral analysis of surface electromyography (EMG) of upper esophageal sphincter-opening muscles during head lift exercise.
Although recent studies have shown enhancement of deglutitive upper esophageal sphincter opening in healthy elderly patients performing an isometric/isotonic head lift exercise (HLE), the muscle groups affected by this process are not known. A shift in the spectral analysis of surface EMG activity seen with muscle fatigue can be used to identify muscles affected by an exercise. The objective of this study was to use spectral analysis to evaluate surface EMG activities in the suprahyoid (SHM), infrahyoid (IHM), and sternocleidomastoid (SCM) muscle groups during the HLE. Surface EMG signals were recorded continuously on a TECA Premiere II during two phases of the HLE protocol in eleven control subjects. In the first phase of the protocol, surface EMG signals were recorded simultaneously from the three muscle groups for a period of 20 s. In the second phase, a 60 s recording was obtained for each of three successive trials with individual muscle groups. The mean frequency (MNF), median frequency (MDF), root mean square (RMS), and average rectified value (ARV) were used as spectral variables to assess the fatigue of the three muscle groups during the exercise. Least squares regression lines were fitted to each variable data set. Our findings suggest that during the HLE the SHM, IHM, and SCM muscle groups all show signs of fatigue; however, the SCM muscle group fatigued faster than the SHM and IHM muscle groups. Because of its higher fatigue rate, the SCM muscle group may play a limiting role in the HLE.
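As a sketch of how the spectral variables above are computed (a minimal illustration assuming a raw EMG array `emg` sampled at `fs` Hz; the Welch window length is an arbitrary choice, not the study's):

```python
# Minimal sketch: spectral fatigue variables of a surface-EMG recording.
import numpy as np
from scipy.signal import welch

def spectral_fatigue_variables(emg, fs):
    f, pxx = welch(emg, fs=fs, nperseg=1024)    # power spectral density
    mnf = np.sum(f * pxx) / np.sum(pxx)         # mean frequency (MNF)
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]  # median frequency (MDF)
    rms = np.sqrt(np.mean(emg ** 2))            # root mean square (RMS)
    arv = np.mean(np.abs(emg))                  # average rectified value (ARV)
    return mnf, mdf, rms, arv
```

Fatigue appears as a downward drift of MNF/MDF (and changes in RMS/ARV) across successive analysis windows, which is why least-squares regression lines are fitted to each variable over time.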
The Hitchhiker's Guide to Testing Statistical Significance in Natural Language Processing
Statistical significance testing is a standard statistical tool designed to ensure that experimental results are not coincidental. In this opinion/theoretical paper we discuss the role of statistical significance testing in Natural Language Processing (NLP) research. We establish the fundamental concepts of significance testing and discuss the specific aspects of NLP tasks, experimental setups and evaluation measures that affect the choice of significance tests in NLP research. Based on this discussion, we propose a simple practical protocol for statistical significance test selection in NLP setups and accompany this protocol with a brief survey of the most relevant tests. We then survey recent empirical papers published in ACL and TACL during 2017 and show that while our community assigns great value to experimental results, statistical significance testing is often ignored or misused. We conclude with a brief discussion of open issues that should be properly addressed so that this important tool can be applied in NLP research in a statistically sound manner.
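For illustration, one widely used sampling-based test for comparing two NLP systems is the paired bootstrap; a minimal sketch of one common variant (the per-example score arrays and resample count are hypothetical inputs):

```python
# Paired bootstrap test for two systems scored on the same test set.
# Assumes system A is the higher-scoring one on the full set.
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    delta = a.mean() - b.mean()            # observed advantage of A
    n, exceed = len(a), 0
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)   # resample test examples with replacement
        if a[idx].mean() - b[idx].mean() > 2 * delta:
            exceed += 1                    # doubled advantage suggests the original was luck
    return exceed / n_resamples            # approximate p-value
```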
Reducing vasomotor symptoms with acupuncture in breast cancer patients treated with adjuvant tamoxifen: a randomized controlled trial
To compare true acupuncture with control acupuncture (CTRL) (non-insertive stimulation at non-acupuncture points) in breast cancer patients treated with adjuvant tamoxifen suffering from hot flushes and sweating. Eighty-four patients were randomized to receive either true acupuncture or CTRL twice a week for 5 weeks. Seventy-four patients were treated according to the protocol. In the true acupuncture group 42% (16/38) reported improvements in hot flushes after 6 weeks compared to 47% (17/36) in the CTRL group (95% CI, −28 to 18%). Both groups reported improvements in the severity and frequency of hot flushes and sweating, but no statistically significant difference was found between the groups. In a subanalysis of the severity of sweating at night, a statistically significant difference (P = 0.03) was found in the true acupuncture group. Former experience of true acupuncture did not influence the perception of true acupuncture or CTRL. No significant differences in hormonal levels were found before and after treatment. In conclusion, convincing data that true acupuncture is more effective than CTRL in reducing vasomotor symptoms are still lacking. Our study shows that both true acupuncture and CTRL reduce vasomotor symptoms in breast cancer patients treated with adjuvant tamoxifen.
A Simplified 8 × 8 Transformation and Quantization Real-Time IP-Block for MPEG-4 H.264/AVC Applications: a New Design Flow Approach
Current multimedia design processes suffer from the excessive time spent testing new IP-blocks against references based on large video encoder specifications (usually several thousand lines of code). The appropriate testing of a single IP-block may require the conversion of the overall encoder from software to hardware, which is difficult to complete within the short, competition-driven time-to-market demanded for the adoption of a new video coding standard. This paper presents a new design flow to accelerate the conformance testing of an IP-block using the H.264/AVC software reference model. An example block of the simplified 8 × 8 transformation and quantization, which is adopted in FRExt, is provided as a case study demonstrating the effectiveness of the approach.
Towards a Dynamic Theory of Strategy – Michael Porter (SMJ 1991)
• Develop and implement an internally consistent set of goals and functional policies (that is, a solution to the agency problem) • This internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)
Unicorn: Continual Learning with a Universal, Off-policy Agent
Some real-world domains are best characterized as a single task, but for others this perspective is limiting. Instead, some tasks continually grow in complexity, in tandem with the agent’s competence. In continual learning, also referred to as lifelong learning, there are no explicit task boundaries or curricula. As learning agents have become more powerful, continual learning remains one of the frontiers that has resisted quick progress. To test continual learning capabilities we consider a challenging 3D domain with an implicit sequence of tasks and sparse rewards. We propose a novel agent architecture called Unicorn, which demonstrates strong continual learning and outperforms several baseline agents on the proposed domain. The agent achieves this by jointly representing and learning multiple policies efficiently, using a parallel off-policy learning setup.
Citation count prediction as a link prediction problem
The citation count is an important factor to estimate the relevance and significance of academic publications. However, it is not possible to use this measure for papers that are too new. A solution to this problem is to estimate the future citation counts. Existing works point out that graph mining techniques lead to the best results. We aim at improving the prediction of future citation counts by introducing a new feature. This feature is based on frequent graph pattern mining in the so-called citation network constructed on the basis of a dataset of scientific publications. Our new feature improves the accuracy of citation count prediction and outperforms the state-of-the-art features in many cases, as we show with experiments on two real datasets.
On Poisson Graphical Models
Undirected graphical models, such as Gaussian graphical models, Ising, and multinomial/categorical graphical models, are widely used in a variety of applications for modeling distributions over a large number of variables. These standard instances, however, are ill-suited to modeling count data, which are increasingly ubiquitous in big-data settings such as genomic sequencing data, user-ratings data, spatial incidence data, climate studies, and site visits. Existing classes of Poisson graphical models, which arise as the joint distributions that correspond to Poisson distributed node-conditional distributions, have a major drawback: they can only model negative conditional dependencies, for reasons of normalizability over the distribution's infinite domain. In this paper, our objective is to modify the Poisson graphical model distribution so that it can capture a rich dependence structure between count-valued variables. We begin by discussing two strategies for truncating the Poisson distribution and show that only one of these leads to a valid joint distribution. While this model can accommodate a wider range of conditional dependencies, some limitations still remain. To address this, we investigate two additional novel variants of the Poisson distribution and their corresponding joint graphical model distributions. Our three novel approaches provide classes of Poisson-like graphical models that can capture both positive and negative conditional dependencies between count-valued variables. One can learn the graph structure of our models via penalized neighborhood selection, and we demonstrate the performance of our methods by learning simulated networks as well as a network from microRNA-sequencing data.
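For reference, the node-conditional construction referred to above induces a joint distribution of the following form (notation mine, not the paper's):

```latex
% Joint distribution induced by Poisson node-conditionals over graph
% edges E (illustrative notation):
P(x) \;\propto\; \exp\Bigg( \sum_{s} \theta_{s} x_{s}
      + \sum_{(s,t) \in E} \theta_{st}\, x_{s} x_{t}
      - \sum_{s} \log (x_{s}!) \Bigg),
\qquad x_{s} \in \{0, 1, 2, \dots\}.
```

Normalizability over this infinite count domain forces θ_st ≤ 0, i.e., only negative conditional dependencies; truncating or otherwise modifying the domain is exactly what lifts this restriction.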
System usage behavior as a proxy for user satisfaction: an empirical investigation
Organizations are increasingly recognizing that user satisfaction with information systems is one of the most important determinants of the success of those systems. However, current satisfaction measures involve an intrusion into the users' worlds, and are frequently deemed to be too cumbersome to be justified financially and practically. This paper describes a methodology designed to solve this contemporary problem. Based on theory which suggests that behavioral observations can be used to measure satisfaction, system usage statistics from an information system were captured around the clock for 6 months to determine users' satisfaction with the system. A traditional satisfaction evaluation instrument, a validated survey, was applied in parallel, to verify that the analysis of the behavioral data yielded similar results. The final results were analyzed statistically to demonstrate that behavioral analysis is a viable alternative to the survey in satisfaction measurement.
Round-Robin Arbiter Design and Generation
In this paper, we introduce a Round-robin Arbiter Generator (RAG) tool. The RAG tool can generate a design for a Bus Arbiter (BA). The BA is able to handle the exact number of bus masters for both on-chip and off-chip buses. RAG can also generate a distributed and parallel hierarchical Switch Arbiter (SA). The first contribution of this paper is the automated generation of a round-robin token-passing BA to reduce the time spent on arbiter design. The generated arbiter is fair, fast, and has a low and predictable worst-case wait time. The second contribution of this paper is the design and integration of a distributed fast arbiter, e.g., for a terabit switch, based on 2×2 and 4×4 switch arbiters (SAs). Using a 0.25 µm TSMC standard cell library from LEDA Systems [10, 14], we show the arbitration time of a 256×256 SA for a terabit switch and demonstrate that the SA generated by RAG meets the time constraint to achieve approximately six terabits of throughput in a typical network switch design. Furthermore, our generated SA performs better than the Ping-Pong Arbiter and Programmable Priority Encoder by factors of 1.9× and 2.4×, respectively.
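A behavioural sketch of round-robin token passing may help make the fairness property concrete; this is an illustrative software model, not RAG's generated RTL:

```python
# Round-robin token passing: the master just served drops to lowest
# priority on the next cycle, so the worst-case wait is bounded by
# n - 1 grants.
def round_robin_grant(requests, last_granted):
    """requests: list of bools, one per bus master; last_granted: index."""
    n = len(requests)
    for offset in range(1, n + 1):
        candidate = (last_granted + offset) % n  # rotated priority order
        if requests[candidate]:
            return candidate                     # grant this master
    return None                                  # no requests pending

# Master 2 was served last, so master 0 (next in rotation) wins here.
assert round_robin_grant([True, False, True, False], last_granted=2) == 0
```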
Annotating Opinions in the World Press
In this paper we present a detailed scheme for annotating expressions of opinions, beliefs, emotions, sentiment and speculation (private states) in the news and other discourse. We explore inter-annotator agreement for individual private state expressions, and show that these low-level annotations are useful for producing higher-level subjective sentence annotations.
Why Doesn't This Feel Empowering? Working Through the Repressive Myths of Critical Pedagogy
Elizabeth Ellsworth finds that critical pedagogy, as represented in her review of the literature, has developed along a highly abstract and Utopian line which does not necessarily sustain the daily workings of the education its supporters advocate. The author maintains that the discourse of critical pedagogy is based on rationalist assumptions that give rise to repressive myths. Ellsworth argues that if these assumptions, goals, implicit power dynamics, and issues of who produces valid knowledge remain untheorized and untouched, critical pedagogues will continue to perpetuate relations of domination in their classrooms.The author paints a complex portrait of the practice of teaching for liberation. She reflects on her own role as a White middle-class woman and professor engaged with a diverse group of students developing an antiracist course. Grounded in a clearly articulated political agenda and her experience as a feminist teacher, Ellsworth provides a critique of "empowerment," "student voice," "dialog...
A Mathematical Model of the Effect of Immunogenicity on Therapeutic Protein Pharmacokinetics
A mathematical pharmacokinetic/anti-drug-antibody (PK/ADA) model was constructed for quantitatively assessing immunogenicity for therapeutic proteins. The model is inspired by traditional pharmacokinetic/pharmacodynamic (PK/PD) models, and is based on the observed impact of ADA on protein drug clearance. The hypothesis for this work is that altered drug PK contains information about the extent and timing of ADA generation. By fitting drug PK profiles while accounting for ADA-mediated drug clearance, the model provides an approach to characterize ADA generation during the study, including the maximum ADA response, sensitivity of ADA response to drug dose level, affinity maturation rate, time lag to observe an ADA response, and the elimination rate for the ADA–drug complex. The model also provides a means to estimate putative concentration–time profiles for ADA, the ADA–drug complex, and the ADA binding affinity–time profile. When simulating ADA responses to various drug dose levels, bell-shaped dose–response curves were generated. The model enables simultaneous quantitative estimation of the characteristics of therapeutic protein drug PK and ADA responses in vivo. With further experimental validation, the model may be applied to the simulation of ADA responses to therapeutic protein drugs in silico, or be applied in subsequent PK/PD models.
How does pycnogenol® influence oxidative damage to DNA and its repair ability in elderly people?
Our purpose in this randomized, double blind, placebo controlled study was to find out the possible effect of a polyphenolic pine bark extract, Pycnogenol® (Pyc), on the level of 8-oxo-7,8-dihydroguanine (8-oxoG), as representative of oxidative damage to DNA, and on the DNA repair ability of elderly people. According to our results, three months of Pyc administration had no effect on the level of oxidative damage to DNA or on repair ability, but we found a relationship between the level of 8-oxoG and the DNA repair ability in this group. To conclude, even though the positive effect of Pyc was not confirmed in elderly people, it is important to highlight the necessity of further investigations into the mechanisms by which Pyc acts on different age groups.
Closed Kinetic Chain Upper Extremity Stability test (CKCUES test): a reliability study in persons with and without shoulder impingement syndrome
BACKGROUND The Closed Kinetic Chain Upper Extremity Stability Test (CKCUES test) is a low-cost shoulder functional test that could be considered a complementary and objective clinical outcome for shoulder performance evaluation. However, its reliability has been tested only in male recreational athletes, and there are no studies comparing scores between sedentary and active samples. The purpose was to examine the inter- and intrasession reliability of the CKCUES Test in samples of sedentary males and females with shoulder impingement syndrome (SIS), sedentary healthy males and females, and male and female healthy recreational athletes practicing upper extremity sports. A further purpose was to compare scores between sedentary and recreational athlete samples of the same gender. METHODS A sample of 108 subjects with and without SIS was recruited. Subjects were tested twice, seven days apart. Each subject performed four test repetitions, with 45 seconds of rest between them. The last three repetitions were averaged and used for statistical analysis. The Intraclass Correlation Coefficient ICC2,1 was used to assess intrasession reliability of the number-of-touches score, and ICC2,3 was used to assess intersession reliability of the number of touches, normalized score, and power score. Test scores within groups of the same gender were also compared. Measurement error was determined by calculating the Standard Error of Measurement (SEM) and Minimum Detectable Change (MDC) for all scores. RESULTS The CKCUES Test showed excellent intersession reliability for scores in all samples. Results also showed excellent intrasession reliability of the number of touches for all samples. Scores were greater in active than in sedentary subjects, with the exception of the power score. All scores were greater in active subjects than in sedentary and SIS males and females. SEM ranged from 1.45 to 2.76 touches (based on a 95% CI) and MDC ranged from 2.05 to 3.91 (based on a 95% CI) in subjects with and without SIS. A difference of at least three touches is needed to be considered a real improvement in CKCUES Test scores. CONCLUSION Results suggest the CKCUES Test is a reliable tool to evaluate upper extremity functional performance for sedentary subjects, for recreational athletes of upper extremity sports, and for sedentary males and females with SIS.
Comparison of group recommendation algorithms
In recent years recommender systems have become the common tool to handle the information overload problem of educational and informative web sites, content delivery systems, and online shops. Although most recommender systems make suggestions for individual users, in many circumstances the selected items (e.g., movies) are not intended for personal usage but rather for consumption in groups. This paper investigates how effective group recommendations for movies can be generated by combining the group members’ preferences (as expressed by ratings) or by combining the group members’ recommendations. These two grouping strategies, which convert traditional recommendation algorithms into group recommendation algorithms, are combined with five commonly used recommendation algorithms to calculate group recommendations for different group compositions. The group recommendations are assessed not only in terms of accuracy, but also in terms of other qualitative aspects that are important for users, such as diversity, coverage, and serendipity. In addition, the paper discusses the influence of the size and composition of the group on the quality of the recommendations. The results show that the grouping strategy which produces the most accurate results depends on the algorithm that is used for generating individual recommendations. Therefore, the paper proposes a combination of grouping strategies which outperforms each individual strategy in terms of accuracy. The results also show that the accuracy of the group recommendations increases as the similarity between members of the group increases. The diversity, coverage, and serendipity of the group recommendations are likewise to a large extent dependent on the grouping strategy and recommendation algorithm used. Consequently, for (commercial) group recommender systems, the grouping strategy and algorithm have to be chosen carefully in order to optimize the desired quality metrics of the group recommendations. The conclusions of this paper can be used as guidelines for this selection process.
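A minimal sketch may clarify the distinction between the two grouping strategies; here `ratings[u]` maps items to a member's known ratings and `predict(user, item)` stands in for any of the individual recommendation algorithms (hypothetical API):

```python
import numpy as np

def aggregate_ratings(group, items, ratings):
    """Merge members' ratings into a pseudo-user, then rank items."""
    merged = {}
    for item in items:
        vals = [ratings[u][item] for u in group if item in ratings[u]]
        if vals:
            merged[item] = np.mean(vals)  # average strategy; min() would be least misery
    return sorted(merged, key=merged.get, reverse=True)

def aggregate_recommendations(group, items, predict):
    """Predict per member first, then merge the predicted scores."""
    scores = {item: np.mean([predict(u, item) for u in group]) for item in items}
    return sorted(scores, key=scores.get, reverse=True)
```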
Anomaly Detection with Generative Adversarial Networks for Multivariate Time Series
Today’s Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we propose a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We use an LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor’s and actuator’s time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deploy the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks, with a high detection rate and a low false positive rate compared to existing methods.
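A minimal sketch of the scoring step described above, treating the trained generator and discriminator as black boxes (the function names and the weighting `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def anomaly_score(window, generator, discriminator, lam=0.5):
    """window: multivariate readings, shape (time_steps, n_channels)."""
    reconstruction = generator(window)                   # reconstruction of normal behaviour
    residual = np.mean(np.abs(window - reconstruction))  # distance from the normal manifold
    d_score = 1.0 - discriminator(window)                # low discriminator output -> suspicious
    return lam * residual + (1.0 - lam) * d_score        # flag windows above a threshold
```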
Filtering Corrupted Image and Edge Detection in Restored Grayscale Image Using Derivative Filters
In this paper, different first and second derivative filters are investigated for finding the edge map after denoising a corrupted grayscale image. We propose a new first-order derivative filter and describe a novel approach to edge finding, with the aim of producing a better edge map in a restored grayscale image. A subjective evaluation has been carried out by visually comparing the performance of the proposed derivative filter with other existing first- and second-order derivative filters. The root mean square error and the root mean square of the signal-to-noise ratio have been used for objective evaluation of the derivative filters. Finally, to validate the efficiency of the filtering schemes, different algorithms are proposed, and the simulation study has been carried out using MATLAB 5.0.
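The paper's experiments were run in MATLAB 5.0; as a rough illustration of the pipeline (denoise, apply a first-order derivative filter, score restoration by RMSE), here is a Python analogue with assumed filter choices, not the paper's filters:

```python
import numpy as np
from scipy.ndimage import median_filter, sobel

def edge_map_after_denoising(noisy, clean, threshold=50.0):
    restored = median_filter(noisy, size=3).astype(float)  # suppress impulse noise
    gx, gy = sobel(restored, axis=0), sobel(restored, axis=1)
    magnitude = np.hypot(gx, gy)                           # first-order gradient magnitude
    rmse = np.sqrt(np.mean((restored - np.asarray(clean, float)) ** 2))
    return magnitude > threshold, rmse                     # binary edge map + error score
```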
The Influence of Coral Reef Benthic Condition on Associated Fish Assemblages
Accumulative disturbances can erode a coral reef's resilience, often leading to replacement of scleractinian corals by macroalgae or other non-coral organisms. These degraded reef systems have been mostly described based on changes in the composition of the reef benthos, and there is little understanding of how such changes are influenced by, and in turn influence, other components of the reef ecosystem. This study investigated the spatial variation in benthic communities on fringing reefs around the inner Seychelles islands. Specifically, relationships between benthic composition and the underlying substrata, as well as the associated fish assemblages were assessed. High variability in benthic composition was found among reefs, with a gradient from high coral cover (up to 58%) and high structural complexity to high macroalgae cover (up to 95%) and low structural complexity at the extremes. This gradient was associated with declining species richness of fishes, reduced diversity of fish functional groups, and lower abundance of corallivorous fishes. There were no reciprocal increases in herbivorous fish abundances, and relationships with other fish functional groups and total fish abundance were weak. Reefs grouping at the extremes of complex coral habitats or low-complexity macroalgal habitats displayed markedly different fish communities, with only two species of benthic invertebrate feeding fishes in greater abundance in the macroalgal habitat. These results have negative implications for the continuation of many coral reef ecosystem processes and services if more reefs shift to extreme degraded conditions dominated by macroalgae.
Cognitive, social, and physiological determinants of emotional state.
The problem of which cues, internal or external, permit a person to label and identify his own emotional state has been with us since the days that James (1890) first tendered his doctrine that "the bodily changes follow directly the perception of the exciting fact, and that our feeling of the same changes as they occur is the emotion" (p. 449). Since we are aware of a variety of feeling and emotion states, it should follow from James' proposition that the various emotions will be accompanied by a variety of differentiable bodily states. Following James' pronouncement, a formidable number of studies were undertaken in search of the physiological differentiators of the emotions. The results, in these early days, were almost uniformly negative. All of the emotional states experi-
Query-based debugging
Abstract data structure queries (A). Some queries check properties of abstract data structures [11][131] such as stacks, hash tables, trees, and so on. These queries are not domain-specific, because the data structures can hold data of any domain. These queries also differ from the programming construct queries, because they check the constraints of well-defined abstract data structures. For example, a query about a binary tree may find the number of its nodes that have only one child. On the other hand, programming construct queries usually span different data structures. Abstract data structure queries can usually be expressed as class invariants that could be packaged with the class that implements an ADT. However, the queries that provide information rather than detect violations are best answered by dynamic queries. For example, monitoring B+ trees using queries may indicate whether this data structure is efficient for the underlying problem. Program construct queries (P). Program construct queries verify object relationships that are related to the program implementation and not directly to the problem domain. Such queries verify and visualize groups of objects that have to conform to some constraints because of the lower level of program design and implementation. For example, in a graphical user interface implementation, every window object has a parent window, and this window references its children widgets through the widget_collection collection (section 5.2.2). Such construct is n…
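The binary-tree example above can be written as an executable query; a sketch in plain Python rather than any particular query language:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def one_child_nodes(root):
    """Count nodes that have exactly one child."""
    if root is None:
        return 0
    here = 1 if (root.left is None) != (root.right is None) else 0
    return here + one_child_nodes(root.left) + one_child_nodes(root.right)

# Only node 2 has a single child in this tree.
tree = Node(1, Node(2, Node(4)), Node(3))
assert one_child_nodes(tree) == 1
```

A high count on a B+-tree-like structure would suggest the structure is degenerate, i.e., inefficient for the underlying problem.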
Real Lives and White Lies in the Funding of Scientific Research
It is a summer day in 2009 in Cambridge, England, and K. (39) looks out of his lab window, wondering why he chose the life of a scientist [1]. Yet it had all begun so well! His undergraduate studies in Prague had excited him about biomedical research, and he went on to a PhD at an international laboratory in Heidelberg. There, he had every advantage, technical and intellectual, and his work had gone swimmingly. He had moved to a Wellcome-funded research institute in England in 1999. And although his postdoc grant, as is typical, was for only two years, he won a rare career development award that gave him some independence for four more years. A six-year postdoc was an unusual opportunity, and it allowed him to define his own research field. By 2004, he had published six experimental papers in good journals, and on four of these, he was first author. It was the high point in his career, and when he applied for posts in Cambridge, London, Stanford, and Tubingen, he was short-listed for them all. He chose Cambridge University and a Royal Society Research Fellowship that offered him up to ten years’ salary. This should have brought the peace of mind to plan projects that would take five years, or even longer. So, what went wrong? It had taken almost a year to prepare, submit, and be awarded his initial research grant (for late 2005–late 2008) from a publicly funded agency in the UK, the Biotechnology and Biological Sciences Research Council (BBSRC). Immediately, he hired a technician and started to train her carefully. During early 2006, he took on a postdoc, Frieda, and a student, and they both settled in well. However, by mid 2007, K. began to worry about his future: although the BBSRC grant had run for less than two years, it was already high time to apply for another. He submitted a new application in October 2007 and, although it was well reviewed and received a high rating, he found out in spring 2008 that it was not funded. As an insurance, he had concocted a different project (you cannot submit to different agencies with the same plans) and sent it to Cancer Research, UK, in February 2008—this application was also excellently reviewed but also turned down in August 2008. Now he was near the end of his initial grant, and his technician, who had a family to support, left to take a more secure job in a nearby research institute—laying waste to all her specialised knowledge that they had both worked so hard to build. Soon, Frieda’s postdoc grant was about to end, so K. applied to several local colleges and trusts to keep her going, but won her only another 6 months’ salary. Becoming anxious and not sleeping well, he had approached the Wellcome Trust in the autumn of 2008 with a rewritten and updated version of the rejected BBSRC project. But, with her extra 6 months over in April 2009 and no security (she has two small children), Frieda had been concentrating less and less on her work; she reluctantly abandoned her lifelong ambition to be a researcher. She looked for a job in science publishing or in the granting agencies— both of which, ironically, offer better working conditions and much better security of employment than research. That morning, Frieda had been offered a post as assistant editor of a journal, and it was time to organise her leaving party. So, on this summer day in 2009, K. has one student left in his group and has one grant application outstanding. 
If he gets the grant, the salary dedicated for Frieda could be given to a new postdoc, but that person would have to be trained all over again. Yes, the second half of 2007 and all of 2008 had been a nightmare—14 of these 18 months had been almost entirely devoted to writing grant applications. K. now sees how he has changed from being an enthusiastic scientist into an insecure bureaucrat. He feels he has lost much of his last 3 years and wasted his BBSRC grant, despite doing his very best (see Box 1). K.’s plight (an authentic one) illustrates how the present funding system in science eats its own seed corn [2]. To expect a young scientist to recruit and train students and postdocs as well as producing and publishing new and original work within two years (in order to fuel the next grant application) is preposterous. It is neither right nor sensible to ask scientists to become astrologists and predict precisely the path their research will follow—and then to judge them on how persuasively they can put over this fiction. It takes far too long to write a grant because the requirements are so complex and demanding. Applications have become so detailed
A survey of document image classification: problem statement, classifier architecture and performance evaluation
Document image classification is an important step in Office Automation, Digital Libraries, and other document image analysis applications. There is great diversity in document image classifiers: they differ in the problems they solve, in the use of training data to construct class models, and in the choice of document features and classification algorithms. We survey this diverse literature using three components: the problem statement, the classifier architecture, and performance evaluation. This brings to light important issues in designing a document classifier, including the definition of document classes, the choice of document features and feature representation, and the choice of classification algorithm and learning mechanism. We emphasize techniques that classify single-page typeset document images without using OCR results. Developing a general, adaptable, high-performance classifier is challenging due to the great variety of documents, the diverse criteria used to define document classes, and the ambiguity that arises due to ill-defined or fuzzy document classes.
Autologous hematopoietic stem cell transplantation as salvage treatment for advanced B cell chronic lymphocytic leukemia
Given the generally poor outcome of advanced B cell chronic lymphocytic leukemia, experimental approaches are warranted, especially for younger patients in whom classical treatments have failed. We therefore conducted a prospective single-center study, using polychemotherapy (ESHAP) to prepare patients for hematopoietic stem cell collection and autologous stem cell transplantation as consolidation therapy. Twenty patients entered the study. An adequate response to ESHAP was obtained in 13 patients, and sufficient stem cells for grafting were obtained in eight of the 12 patients who underwent the collection procedure. Six of these grafted patients are alive in complete clinical remission a median of 30 months after transplantation. It should be noted that we were only able to graft 40% of the patients enrolled in this study, either because a new remission could not be obtained or because not enough hematopoietic stem cells could be collected. This argues for stem cell collection as soon as a first remission is obtained, even if the autograft is done later in the course of the disease.
Structural optimization using sensitivity analysis and a level-set method
In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess.
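For reference, the front-propagation step typically takes the form of a Hamilton–Jacobi transport equation for a level-set function φ (notation mine; sign conventions vary across papers):

```latex
% The shape boundary is the zero level set of \phi, advected with normal
% velocity V derived from the adjoint-computed shape derivative:
\frac{\partial \phi}{\partial t} + V \,\lvert \nabla \phi \rvert = 0,
\qquad \partial \Omega(t) = \{\, x : \phi(t, x) = 0 \,\}.
```

Taking V from the shape derivative makes each propagation step a descent direction for the objective while the shape stays captured on the fixed Eulerian mesh.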
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping
We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit.
Traffic sign detection and classification using colour feature and neural network
Automatic traffic sign detection and recognition is a field of computer vision that is very important for advanced driver support systems. This paper proposes a framework that detects and classifies different types of traffic signs from images. The technique consists of two main modules: road sign detection, and classification and recognition. In the first step, colour space conversion and colour-based segmentation are applied to find out whether a traffic sign is present. If present, the sign is highlighted, normalized in size, and then classified. A neural network is used for classification. For evaluation, four types of traffic signs are used: Stop Sign, No Entry Sign, Give Way Sign, and Speed Limit Sign. Altogether 300 sets of images, 75 sets for each type, are used for training, and 200 images are used for testing. The experimental results show that the detection rate is above 90% and the accuracy of recognition is more than 88%.
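A minimal sketch of the colour-based segmentation stage (OpenCV; the red-hue bands and minimum area are illustrative thresholds, not the paper's values):

```python
import cv2

def detect_red_sign_regions(bgr_image, min_area=400):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)  # colour space conversion
    # Red wraps around hue 0 in HSV, so combine two bands.
    mask = cv2.inRange(hsv, (0, 100, 60), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 60), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Each surviving region would then be size-normalized and passed to
    # the neural network classifier.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```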
Intelligent Lighting Control for Vision-Based Robotic Manipulation
The ability of a robot vision system to capture informative images is greatly affected by the lighting conditions in the scene. This paper reveals the importance of active lighting control for robotic manipulation and proposes novel strategies for good visual interpretation of objects in the workspace. Good illumination helps to obtain images with a large signal-to-noise ratio, a wide range of linearity, high image contrast, and true color rendering of the object's natural properties; it should also avoid highlights and extreme intensity imbalance. If only passive illumination is used, the robot often gets poor images from which no appropriate algorithm can extract useful information. A fuzzy controller is further developed to maintain a lighting level suitable for robotic manipulation and guidance in dynamic environments. As demonstrated in this paper with both numerical simulations and practical experiments, the proposed idea of active lighting control yields satisfactory results.
Decision Forests, Convolutional Networks and the Models in-Between
This paper investigates the connections between two state of the art classifiers: decision forests (DFs, including decision jungles) and convolutional neural networks (CNNs). Decision forests are computationally efficient thanks to their conditional computation property (computation is confined to only a small region of the tree, the nodes along a single branch). CNNs achieve state of the art accuracy, thanks to their representation learning capabilities. We present a systematic analysis of how to fuse conditional computation with representation learning and achieve a continuum of hybrid models with different ratios of accuracy vs. efficiency. We call this new family of hybrid models conditional networks. Conditional networks can be thought of as: i) decision trees augmented with data transformation operators, or ii) CNNs, with block-diagonal sparse weight matrices, and explicit data routing functions. Experimental validation is performed on the common task of image classification on both the CIFAR and Imagenet datasets. Compared to state of the art CNNs, our hybrid models yield the same accuracy with a fraction of the compute cost and much smaller number of parameters.
Low-CO(2) electricity and hydrogen: a help or hindrance for electric and hydrogen vehicles?
The title question was addressed using an energy model that accounts for projected global energy use in all sectors (transportation, heat, and power) of the global economy. Global CO(2) emissions were constrained to achieve stabilization at 400-550 ppm by 2100 at the lowest total system cost (equivalent to perfect CO(2) cap-and-trade regime). For future scenarios where vehicle technology costs were sufficiently competitive to advantage either hydrogen or electric vehicles, increased availability of low-cost, low-CO(2) electricity/hydrogen delayed (but did not prevent) the use of electric/hydrogen-powered vehicles in the model. This occurs when low-CO(2) electricity/hydrogen provides more cost-effective CO(2) mitigation opportunities in the heat and power energy sectors than in transportation. Connections between the sectors leading to this counterintuitive result need consideration in policy and technology planning.
Security of biometric authentication systems
This overview paper outlines our views on the actual security of biometric authentication and encryption systems. The attractiveness of some novel approaches, such as cryptographic key generation from biometric data, is in some respects understandable, yet so far it has led to various shortcuts and compromises on security. Our paper starts with an introductory section that is followed by a section about the variability of biometric characteristics, with particular attention paid to biometrics used in large systems. The following sections then discuss the potential of biometric authentication systems, and the use of biometrics in support of cryptographic applications as they are typically used in computer systems.
Efficient Data Structures for Massive N-Gram Datasets
The efficient indexing of large and sparse N-gram datasets is crucial in several applications in Information Retrieval, Natural Language Processing and Machine Learning. Because of the stringent efficiency requirements, dealing with billions of N-grams poses the challenge of introducing a compressed representation that preserves the query processing speed. In this paper we study the problem of reducing the space required by the representation of such datasets, maintaining the capability of looking up a given N-gram within microseconds. For this purpose we describe compressed, exact and lossless data structures that achieve, at the same time, high space reductions and no time degradation with respect to state-of-the-art software packages. In particular, we present a trie data structure in which each word following a context of fixed length k, i.e., its preceding k words, is encoded as an integer whose value is proportional to the number of words that follow such context. Since the number of words following a given context is typically very small in natural languages, we are able to lower the space of representation to compression levels that were never achieved before. Despite the significant savings in space, we show that our technique introduces a negligible penalty at query time.
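The context-relative encoding can be sketched as follows; this illustrates the idea only, not the paper's compressed trie:

```python
# A word is re-coded by its rank among the words observed after its
# context, so the stored integers stay small and compress well.
from collections import defaultdict

def remap_ngrams(ngrams):
    """ngrams: iterable of word-ID tuples -> dict of context-relative codes."""
    ngrams = list(ngrams)
    successors = defaultdict(list)
    for ng in ngrams:
        context, word = ng[:-1], ng[-1]
        if word not in successors[context]:
            successors[context].append(word)
    # Codes range over the (typically tiny) successor set of each context.
    return {ng: successors[ng[:-1]].index(ng[-1]) for ng in ngrams}
```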
AENet: Learning Deep Audio Features for Video Analysis
We propose a new deep network for audio event recognition, called AENet. In contrast to speech, sounds coming from audio events may be produced by a wide variety of sources. Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of clear subword units that are present in speech. In order to incorporate this long-time frequency structure of audio events, we introduce a convolutional neural network (CNN) operating on a large temporal input. In contrast to previous works, this allows us to train an audio event detection system end to end. The combination of our network architecture and a novel data augmentation outperforms previous methods for audio event detection by 16%. Furthermore, we perform transfer learning and show that our model learned generic audio features, similar to the way CNNs learn generic features on vision tasks. In video analysis, combining visual features and traditional audio features, such as mel frequency cepstral coefficients, typically only leads to marginal improvements. Instead, combining visual features with our AENet features, which can be computed efficiently on a GPU, leads to significant performance improvements on action recognition and video highlight detection. In video highlight detection, our audio features improve the performance by more than 8% over visual features alone.
Assessment of older people: self-maintaining and instrumental activities of daily living.
THE use of formal devices for assessing function is becoming standard in agencies serving the elderly. In the Gerontological Society's recent contract study on functional assessment (Howell, 1968), a large assortment of rating scales, checklists, and other techniques in use in applied settings was easily assembled. The present state of the trade seems to be one in which each investigator or practitioner feels an inner compulsion to make his own scale and to cry that other existent scales cannot possibly fit his own setting. The authors join this company in presenting two scales first standardized on their own population (Lawton, 1969). They take some comfort, however, in the fact that one scale, the Physical Self-Maintenance Scale (PSMS), is largely a scale developed and used by other investigators (Lowenthal, 1964), which was adapted for use in our own institution. The second of the scales, the Instrumental Activities of Daily Living Scale (IADL), taps a level of functioning heretofore inadequately represented in attempts to assess everyday functional competence. Both of the scales have been tested further for their usefulness in a variety of types of institutions and other facilities serving community-resident older people. Before describing in detail the behavior measured by these two scales, we shall briefly describe the schema of competence into which these behaviors fit (Lawton, 1969). Human behavior is viewed as varying in the degree of complexity required for functioning in a variety of tasks. The lowest level is called life maintenance, followed by the successively more complex levels of func-
Finding the WRITE Stuff: Automatic Identification of Discourse Structure in Student Essays
automated feedback that helps them revise their work and ultimately improve their writing skills. These applications also address educational researchers’ interest in individualized instruction. Specifically, feedback that refers explicitly to students’ own writing is more effective than general feedback [3]. Our discourse analysis software, which is embedded in Criterion (www.etstechnologies.com), an online essay evaluation application, uses machine learning to identify discourse elements in student essays. The system makes decisions that exemplify how teachers perform this task. For instance, when grading student essays, teachers comment on the discourse structure. Teachers might explicitly state that the essay lacks a thesis statement or that an essay’s single main idea has insufficient support. Training the systems to model this behavior requires human judges to annotate a data sample of student essays. The annotation schema reflects the highly structured discourse of genres such as persuasive writing. Our discourse analysis system uses a voting algorithm that takes into account the discourse labeling decisions of three independent systems. The three systems employ natural language processing methods to extract essay-based features that help predict the discourse labels. They also use machine learning to classify the sentences in an essay as particular discourse elements. Our tool automatically labels discourse elements in student essays written on any topic and across writing genres.
The -48 C/T polymorphism in the presenilin 1 promoter is associated with an increased risk of developing Alzheimer's disease and an increased Abeta load in brain.
Mutations in the presenilin 1 gene (PS1) account for the majority of early onset, familial, autosomal dominant forms of Alzheimer's disease (AD), whereas its role in other late onset forms of AD remains unclear. A -48 C/T polymorphism in the PS1 promoter has been associated with an increased genetic risk in early onset complex AD and moreover has been shown to influence the expression of the PS1 gene. This raises the possibility that previous conflicting findings from association studies with homozygosity for the PS1 intron 8 polymorphism might be the result of linkage disequilibrium with the -48 CC genotype. Here we provide further evidence of increased risk of AD associated with homozygosity for the -48 CC genotype (odds ratio=1.6). We also report a phenotypic correlation with Abeta(40), Abeta(42(43)), and total Abeta load in AD brains. The -48 CC genotype was associated with 47% greater total Abeta load (p<0.003) compared to CT + TT genotype bearers. These results suggest that the -48 C/T polymorphism in the PS1 promoter may increase the risk of AD, perhaps by altering PS1 gene expression and thereby influencing Abeta load.
Smart Electricity Meter Data Intelligence for Future Energy Systems: A Survey
Smart meters have been deployed in many countries across the world since the early 2000s. The smart meter, as a key element of the smart grid, is expected to provide economic, social, and environmental benefits for multiple stakeholders. There has been much debate over the real value of smart meters. One of the key factors that will determine the success of smart meters is smart meter data analytics, which deals with data acquisition, transmission, processing, and interpretation that bring benefits to all stakeholders. This paper presents a comprehensive survey of smart electricity meters and their utilization, focusing on key aspects of the metering process, different stakeholder interests, and the technologies used to satisfy stakeholder interests. Furthermore, the paper highlights challenges as well as opportunities arising due to the advent of big data and the increasing popularity of cloud environments.
Molecular mechanisms and animal models of spinal muscular atrophy.
Spinal muscular atrophy (SMA), the leading genetic cause of infant mortality, is characterized by the degeneration of spinal motor neurons and muscle atrophy. Although the genetic cause of SMA has been mapped to the Survival Motor Neuron1 (SMN1) gene, mechanisms underlying selective motor neuron degeneration in SMA remain largely unknown. Here we review the latest developments and our current understanding of the molecular mechanisms underlying SMA pathogenesis, focusing on the animal model systems that have been developed, as well as new diagnostic and treatment strategies that have been identified using these model systems. This article is part of a special issue entitled: Neuromuscular Diseases: Pathology and Molecular Pathogenesis.
A Fully-Convolutional Framework for Semantic Segmentation
In this paper we propose a deep learning technique to improve the performance of semantic segmentation tasks. Previously proposed algorithms generally suffer from over-dependence on a single modality as well as a lack of training data. We make three contributions to improve the performance. Firstly, we adopt two models which are complementary in our framework, to enrich the fields of view and features and make segmentation more reliable. Secondly, we repurpose datasets from other tasks for the segmentation task by training the two models in our framework on different datasets. This brings the benefits of data augmentation while saving the cost of image annotation. Thirdly, the number of parameters in our framework is minimized to reduce the complexity of the framework and to avoid overfitting. Experimental results show that our framework significantly outperforms the current state-of-the-art methods with a smaller number of parameters and better generalization ability.
Efficacy and acceptability of reduced intensity constraint-induced movement therapy for children aged 9-11 years with hemiplegic cerebral palsy: a pilot study.
OBJECTIVE Assess the efficacy and acceptability of reduced intensity constraint-induced movement therapy (CIMT) in children with cerebral palsy (CP). METHODS Single-subject research design and semi-structured interviews. Children (9-11y) with hemiplegia underwent five baseline assessments followed by two weeks of CIMT. Six further assessments were performed during the treatment and follow-up phases. The primary outcome was the Melbourne Assessment of Unilateral Upper Limb Function (MUUL). Quantitative data were analysed using standard single-subject methods and qualitative data by thematic analysis. RESULTS Four of the seven participants demonstrated statistically significant improvements in MUUL (3-11%, p < .05). Two participants achieved significant improvements in active range of motion, but strength and tone remained largely unchanged. Qualitative interviews highlighted limitations of the restraint, the importance of family involvement, and coordination of treatment with education. CONCLUSIONS Reduced intensity CIMT may be effective for some children in this population; however, it is not suitable for all children with hemiplegia.
The Effects of Experimentally Induced Low Back Pain on Spine Rotational Stiffness and Local Dynamic Stability
Local dynamic stability, quantified using the maximum finite-time Lyapunov exponent (λmax), and the muscular contributions to spine rotational stiffness can provide pertinent information regarding the neuromuscular control of the spine during movement tasks. The primary goal of the present study was to assess whether experimental capsaicin-induced low back pain (LBP) affects spine stability and the neuromuscular control of repetitive trunk movements in a group of healthy participants with no history of LBP. Fourteen healthy males were recruited for this investigation. Each participant was asked to complete three trials (baseline, in pain, and recovery) of 35 cycles of a repetitive trunk flexion/extension task at a rate of 0.25 Hz. Local dynamic stability and the muscular contributions to lumbar spine rotational stiffness were significantly impaired during the LBP trial compared to the baseline trial (p < 0.05); however, there was a trend for these measures to recover after a 1 h rest. This study provides evidence that capsaicin can effectively induce LBP, thereby altering spine rotational stiffness and local dynamic stability. Future research should directly compare the effects of capsaicin-induced LBP and intramuscular/intraligamentous induced LBP on these same variables.
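A sketch of how λmax is commonly estimated from a kinematic time series, by tracking the divergence of nearest neighbours in a delay-embedded state space (Rosenstein-style; the embedding parameters here are illustrative assumptions, not the study's, and the recording must be much longer than `min_sep` samples):

```python
import numpy as np

def lambda_max(x, dim=5, tau=10, horizon=50, min_sep=100):
    n = len(x) - (dim - 1) * tau
    # Delay-embed the signal into state-space vectors.
    Y = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
    usable = n - horizon
    logs = np.zeros(horizon)
    for i in range(usable):
        d = np.linalg.norm(Y[:usable] - Y[i], axis=1)
        d[max(0, i - min_sep): i + min_sep] = np.inf    # exclude temporal neighbours
        j = int(np.argmin(d))                           # nearest state-space neighbour
        track = np.linalg.norm(Y[i: i + horizon] - Y[j: j + horizon], axis=1)
        logs += np.log(track + 1e-12)                   # follow the pair's divergence
    logs /= usable
    slope, _ = np.polyfit(np.arange(horizon), logs, 1)  # slope of <ln d(k)> vs. k
    return slope
```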
Dynamic Privacy Pricing: A Multi-Armed Bandit Approach With Time-Variant Rewards
Recently, the conflict between exploiting the value of personal data and protecting individuals' privacy has attracted much attention. The personal data market provides a promising solution to this conflict, but determining the price of privacy is a tough issue. In this paper, we study the pricing problem in a setting where a data collector sequentially buys data from multiple data owners whose valuations of privacy are randomly drawn from an unknown distribution. To maximize the total payoff, the collector needs to dynamically adjust the prices offered to owners. We model the sequential decision-making problem of the collector as a multi-armed bandit problem with each arm representing a candidate price. Specifically, the privacy protection technique adopted by the collector is taken into account. Protecting privacy generally causes a negative effect on the value of data, and this effect is embodied by the time-variant distributions of the rewards associated with arms. Based on the classic upper confidence bound policy, we propose two learning policies for the bandit problem. The first policy estimates the expected reward of a price by counting how many times the price has been accepted by data owners. The second policy treats the time-variant data value as a context and uses ridge regression to estimate the rewards in different contexts. Simulation results on real-world data demonstrate that by applying the proposed policies, the collector can obtain a payoff close to what he could get by offering all data owners the fixed price that is best in hindsight.
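A minimal sketch of the first (counting-based) policy in the spirit of classic UCB: each candidate price is an arm, and its estimated reward is the acceptance frequency. The price grid, the `accepts` oracle, and the unit reward per accepted offer are illustrative assumptions:

```python
import math

def ucb_price_posting(prices, owners, accepts):
    counts = [0] * len(prices)     # offers made at each price
    payoff = [0.0] * len(prices)   # accepted offers at each price
    total = 0.0
    for t, owner in enumerate(owners, start=1):
        ucb = [payoff[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
               if counts[a] else float("inf")  # try each price once first
               for a in range(len(prices))]
        a = ucb.index(max(ucb))                # optimistic price choice
        counts[a] += 1
        if accepts(prices[a], owner):          # owner's privacy valuation <= offered price
            payoff[a] += 1.0
            total += 1.0
    return total
```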
Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos
Single modality action recognition on RGB or depth sequences has been extensively explored recently. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Therefore, analysis of the RGB+D videos can help us to better study the complementary properties of these two types of modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder based shared-specific feature factorization network to separate input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. Our experimental results show the effectiveness of our cross-modality feature analysis framework by achieving state-of-the-art accuracy for action classification on five challenging benchmark datasets.
The 1928–1929 eruption of Kammourta volcano — evidence of tectono-magmatic activity in the Manda-Inakir rift and comparison with the Asal Rift, Afar depression, Republic of Djibouti
There are two rift zones in the Republic of Djibouti: the active Asal rift (birthplace of the Ardoukôba basaltic volcano in 1978) and the poorly known Manda-Inakir rift described here. The most recent volcanic event in the Manda-Inakir rift was the formation of the Kammourta basaltic cone, probably in 1928, accompanied by strong seismic activity. This historic eruption and related tectonic features show that the Manda-Inakir rift, like Asal, is presently active. The Kammourta basalt, of transitional alkaline type, belongs to the Manda-Inakir differentiated series, which ranges from basalt to rhyolite. In contrast, volcanic rocks of the Asal rift are entirely transitional tholeiitic basalt. The differences in magmatic affinity and tectonics between these two rift zones reflect the more advanced evolution of rifting in the Asal zone than in Manda-Inakir.
CoDraw: Visual Dialog for Collaborative Drawing
In this work, we propose a goal-driven collaborative task that contains vision, language, and action in a virtual environment as its core components. Specifically, we develop a collaborative ‘Image Drawing’ game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. Two players, Teller and Drawer, are involved. The Teller sees an abstract scene containing multiple clip arts in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip arts. The two players communicate using natural language. We collect the CoDraw dataset of ∼10K dialogs consisting of 138K messages exchanged between a Teller and a Drawer from Amazon Mechanical Turk (AMT). We analyze our dataset and present three models of the players’ behaviors, including an attention model to describe and draw multiple clip arts at each round. The attention models are quantitatively compared to the other models to show how conventional approaches work for this new task. We also present qualitative visualizations.
Design of a new, light and portable mechanism for knee CPM machine with a user-friendly interface
After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee becomes harder and knee stiffness occurs, which may cause many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus used for patient recovery, restoring the knee's range of motion, and reducing tissue swelling after knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues and promoting the flow of synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface is designed and manufactured. The machine can rotate the knee joint through a range of -15° to 120° at a pace of 0.1 degree/sec to 1 degree/sec. One of the most important advantages of this new machine is its user-friendly interface. The apparatus is controlled via an Android-based application; therefore, users can operate the machine easily via their own smartphones without the need for an extra controlling device. Besides, because of its apt size, the machine is portable. Smooth movement without any vibration and adjustability for different anatomies are other merits of this new CPM machine.
Prosthetic gingival reconstruction in a fixed partial restoration. Part 1: introduction to artificial gingiva as an alternative therapy.
The Class III defect environment entails a vertical and horizontal deficiency in the edentulous ridge. Often, bone and soft tissue surgical procedures fall short of achieving a natural esthetic result. Alternative surgical and restorative protocols for these types of prosthetic gingival restorations are presented in this three-part series, which highlights the diagnostic and treatment aspects as well as the lab and maintenance challenges. A complete philosophical approach involves both a biologic understanding of the limitations of the hard and soft tissue healing process as well as that of multiple adjacent implants in the esthetic zone. These limitations may often necessitate the use of gingiva-colored "pink" restorative materials and essential preemptive planning via three-dimensional computer-aided design/computer-assisted manufacture to achieve the desired esthetic outcome. The present report outlines a rationale for consideration of artificial gingiva when planning dental prostheses. Prosthetic gingiva can overcome the limitations of grafting and should be a consideration in the initial treatment plan. (Int J Periodontics Restorative Dent 2009;29:471-477.).
Intersecting Parallel-Plate Waveguide Loaded Cavities for Dual-Mode and Dual-Band Filters
Dual-mode and/or dual-band microwave filters often employ high quality factor (Q), physically large, and frequency-static cavity resonators, or low-Q, compact, and tunable planar resonators. While each resonator type has advantages, choosing a dual-mode and/or dual-band resonator type is often limited by these extremes. In this paper, a new dual-mode and/or dual-band resonator is shown with Q (360-400) that is higher than that of planar resonators while being frequency tunable (6.7% tuning range) and compact relative to standard cavity resonators. In addition, both degenerate modes of the resonator are tunable using a single actuator. The resonator is used in a single-resonator two-pole filter design and a double-resonator dual-band filter design. An analytical model is developed and design techniques are given for both designs. Measured results confirm that the proposed resonator fits between the design spaces of established dual-mode and/or dual-band resonator types and could find application in systems that require a combination of relatively high Q, tuning capability, and ease of integration.
What Do New Views of Knowledge and Thinking Have to Say about Research on Teacher Learning?
The education and research communities are abuzz with new (or at least re-discovered) ideas about the nature of cognition and learning. Terms like "situated cognition," "distributed cognition," and "communities of practice" fill the air. Recent dialogue in Educational Researcher (Anderson, Reder, & Simon, 1996, 1997; Greeno, 1997) typifies this discussion. Some have argued that the shifts in world view that these discussions represent are even more fundamental than the now-historical shift from behaviorist to cognitive views of learning (Shuell, 1986). These new ideas about the nature of knowledge, thinking, and learning--which are becoming known as the "situative perspective" (Greeno, 1997; Greeno, Collins, & Resnick, 1996)--are interacting with, and sometimes fueling, current reform movements in education. Most discussions of these ideas and their implications for educational practice have been cast primarily in terms of students. Scholars and policymakers have considered, for example, how to help students develop deep understandings of subject matter, situate students' learning in meaningful contexts, and create learning communities in which teachers and students engage in rich discourse about important ideas (e.g., National Council of Teachers of Mathematics, 1989; National Education Goals Panel, 1991; National Research Council, 1993). Less attention has been paid to teachers--either to their roles in creating learning experiences consistent with the reform agenda or to how they themselves learn new ways of teaching. In this article we focus on the latter. Our purpose in considering teachers' learning is twofold. First, we use these ideas about the nature of learning and knowing as lenses for understanding recent research on teacher learning. Second, we explore new issues about teacher learning and teacher education that this perspective brings to light. We begin with a brief overview of three conceptual themes that are central to the situative perspective--that cognition is (a) situated in particular physical and social contexts; (b) social in nature; and (c) distributed across the individual, other persons, and tools.
When poor solubility becomes an issue: from early stage to proof of concept.
Drug absorption and sufficient, reproducible bioavailability and/or pharmacokinetic profile in humans are recognized today as among the major challenges in oral delivery of new drug substances. The issue arose especially when drug discovery and medicinal chemistry moved from wet chemistry to combinatorial chemistry and high throughput screening in the mid-1990s. Taking into account drug product development times of 8-12 years, the apparent R&D productivity gap, as determined by the number of products in late-stage clinical development today, is the result of the drug discovery and formulation development of the late 1990s, the early and enthusiastic times of combinatorial chemistry and high throughput screening. In parallel to the implementation of these new technologies, tremendous knowledge has been accumulated on biological factors like transporters, metabolizing enzymes and efflux systems, as well as on physicochemical characteristics of drug substances like crystal structures and salt formation, which impact oral bioavailability. Research tools and technologies have been, are and will be developed to assess the impact of these factors on drug absorption for new chemical entities. The conference focused specifically on the impact of compounds with poor solubility on analytical evaluation, prediction of oral absorption, substance selection, material and formulation strategies, and development. The existing tools and technologies, their potential utilization throughout the drug development process, and the directions for further research to overcome existing gaps and influence these drug characteristics were discussed in detail.
A wideband monolithically integrated photonic receiver in 0.25-µm SiGe:C BiCMOS technology
This work presents a 54 Gb/s monolithically integrated silicon photonics receiver (Rx). A germanium photodiode (Ge-PD) is monolithically integrated with a transimpedance amplifier (TIA) and a low-frequency feedback loop to compensate for the DC input overload current. Bandwidth enhancement techniques are used to extend the bandwidth compared to previously published monolithically integrated receivers. Implemented in a 0.25 μm SiGe:C BiCMOS electronic/photonic integrated circuit (EPIC) technology, the Rx operates at λ=1.55 μm, achieves an optical/electrical (O/E) bandwidth of 47 GHz with only ±5 ps group delay variation, and a sensitivity of 0.2 dBm for 4.5×10^-11 BER at 40 Gb/s and 0.97 dBm for 1.05×10^-6 BER at 54 Gb/s. It dissipates 73 mW of power while occupying 1.6 mm² of area. To the best of the authors' knowledge, this work presents the state-of-the-art bandwidth and bit rate in monolithically integrated photonic receivers.
Wireless Relay Communications with Unmanned Aerial Vehicles: Performance and Optimization
In this paper, we investigate a communication system in which unmanned aerial vehicles (UAVs) are used as relays between ground-based terminals and a network base station. We develop an algorithm for optimizing the performance of the ground-to-relay links through control of the UAV heading angle. To quantify link performance, we define the ergodic normalized transmission rate (ENTR) for the links between the ground nodes and the relay, and derive a closed-form expression for it in terms of the eigenvalues of the channel correlation matrix. We show that the ENTR can be approximated as a sinusoid with an offset that depends on the heading of the UAV. Using this observation, we develop a closed-form expression for the UAV heading that maximizes the uplink network data rate while keeping the rate of each individual link above a certain threshold. When the current UAV relay assignments cannot meet the minimum link requirements, we investigate the deployment and heading control problem for new UAV relays as they are added to the network, and propose a smart handoff algorithm that updates node and relay assignments as the topology of the network evolves.
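The sinusoid-plus-offset structure of the per-link ENTR makes heading selection easy to visualize. The sketch below searches headings numerically under that model with invented coefficients; the paper itself derives the maximizing heading in closed form.

```python
import numpy as np

# per-link sinusoidal ENTR approximations: r_i(psi) = a_i + b_i*sin(psi + phi_i)
# (all coefficients here are made up for illustration)
a   = np.array([1.0, 0.8, 0.9])
b   = np.array([0.3, 0.4, 0.2])
phi = np.array([0.2, 1.5, -0.7])
r_min = 0.7                          # per-link rate threshold

psi = np.linspace(0, 2 * np.pi, 3600)            # candidate headings
rates = a[:, None] + b[:, None] * np.sin(psi[None, :] + phi[:, None])
feasible = (rates >= r_min).all(axis=0)          # every link above threshold
total = rates.sum(axis=0)
total[~feasible] = -np.inf                       # discard infeasible headings
best = psi[np.argmax(total)]
print(f"best heading: {np.degrees(best):.1f} deg, sum rate: {total.max():.3f}")
```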
Development of the quadruped walking robot, TITAN-IX -- mechanical design concept and application for the humanitarian de-mining robot
This paper proposes a quadruped walking robot that has high performance as a working machine. This robot is needed for various tasks controlled by tele-operation, especially for humanitarian mine detection and removal. Since numerous anti-personnel landmines remain in place from many wars, it is desirable to provide a safe and inexpensive tool that civilians can use to remove those mines. The authors have been working on the concept of humanitarian demining robot systems for 4 years and have performed basic experiments with the first prototype VK-I using the modified quadruped walking robot, TITAN-VIII. After those experiments, it was possible to refine some concepts, and the new robot now has a tool (end-effector) changing system on its back, so that by utilizing the legs as manipulation arms and connecting various tools to the foot, it can perform mine detection and removal tasks. To accomplish these tasks, we developed various end-effectors that can be attached to the working leg. In this paper we will discuss the mechanical design of the new walking robot called TITAN-IX to be applied to the new system VK-II.
Infant growth patterns in the slums of Dhaka in relation to birth weight, intrauterine growth retardation, and prematurity.
BACKGROUND Relations between size and maturity at birth and infant growth have been studied inadequately in Bangladesh, where the incidence of low birth weight is high and most infants are breast-fed. OBJECTIVE This study was conducted to describe infant growth patterns and their relations to birth weight, intrauterine growth retardation, and prematurity. DESIGN A total of 1654 infants born in selected low-socioeconomic areas of Dhaka, Bangladesh, were enrolled at birth. Weight and length were measured at birth and at 1, 3, 6, 9, and 12 mo of age. RESULTS The infants' mean birth weight was 2516 g, with 46.4% weighing <2500 g; 70% were small for gestational age (SGA) and 17% were premature. Among the SGA infants, 63% had adequate ponderal indexes. The mean weight of the study infants closely tracked the -2 SD curve of the World Health Organization pooled breast-fed sample. Weight differences by birth weight, SGA, or preterm categories were retained throughout infancy. Mean z scores based on the pooled breast-fed sample were -2.38, -1.72, and -2.34 at birth, 3 mo, and 12 mo. Correlation analysis showed greater plasticity of growth in the first 3 mo of life than later in the first year. CONCLUSIONS Infant growth rates were similar to those observed among breast-fed infants in developed countries. Most study infants experienced chronic intrauterine undernourishment. Catch-up growth was limited and weight at 12 mo was largely a function of weight at birth. Improvement of birth weight is likely to lead to significant gains in infant nutritional status in this population, although interventions in the first 3 mo are also likely to be beneficial.
Nonlinear Systems Identification Using Deep Dynamic Neural Networks
Neural networks are known to be effective function approximators. Recently, deep neural networks have proven to be very effective in pattern recognition, classification tasks, human-level control, and modeling highly nonlinear real-world systems. This paper investigates the effectiveness of deep neural networks in the modeling of dynamical systems with complex behavior. Three deep neural network structures are trained on sequential data, and we investigate the effectiveness of these networks in modeling associated characteristics of the underlying dynamical systems. We carry out similar evaluations on select publicly available system identification datasets. We demonstrate that deep neural networks are effective model estimators from input-output data.
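As a minimal illustration of the setup, the following PyTorch sketch trains a small LSTM to map an input sequence to the output of a toy nonlinear system. The two-layer network, hidden size, and synthetic first-order system are arbitrary choices for the example, not the structures evaluated in the paper.

```python
import torch
import torch.nn as nn

class SysIdLSTM(nn.Module):
    """Sequence-to-sequence model: predict the system output y_t from
    the input history u_1..u_t."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, u):                 # u: (batch, time, 1)
        h, _ = self.lstm(u)
        return self.head(h)               # y_hat: (batch, time, 1)

# toy data: a first-order nonlinear system driven by random input
torch.manual_seed(0)
u = torch.randn(32, 200, 1)
y = torch.zeros_like(u)
for t in range(1, 200):
    y[:, t] = 0.8 * torch.tanh(y[:, t - 1]) + 0.5 * u[:, t - 1]

model = SysIdLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u), y)
    loss.backward()
    opt.step()
```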
A Hybrid Algorithm Based on ACO and PSO for Capacitated Vehicle Routing Problems
The vehicle routing problem (VRP) is a well-known combinatorial optimization problem. It has been studied for several decades because finding effective vehicle routes is an important issue of logistic management. This paper proposes a new hybrid algorithm based on two main swarm intelligence (SI) approaches, ant colony optimization (ACO) and particle swarm optimization (PSO), for solving capacitated vehicle routing problems (CVRPs). In the proposed algorithm, each artificial ant, like a particle in PSO, is allowed to memorize the best solution ever found. After solution construction, only elite ants can update pheromone according to their own best-so-far solutions. Moreover, a pheromone disturbance method is embedded into the ACO framework to overcome the problem of pheromone stagnation. Two sets of benchmark problems were selected to test the performance of the proposed algorithm. The computational results show that the proposed algorithm performs well in comparison with existing swarm intelligence approaches.
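The two ingredients the abstract highlights, elite-ant pheromone deposition from best-so-far solutions and pheromone disturbance against stagnation, can be sketched compactly. The snippet below shows only those update rules with made-up sizes; tour construction and the CVRP capacity checks are omitted.

```python
import numpy as np

n_nodes = 21                             # depot + 20 customers (illustrative)
tau = np.ones((n_nodes, n_nodes))        # pheromone matrix

def deposit(tau, tour, cost, rho=0.1):
    """Elite-ant update: only best-so-far tours reinforce pheromone,
    mirroring a PSO particle's personal-best memory."""
    tau *= (1 - rho)                     # global evaporation
    for i, j in zip(tour, tour[1:]):
        tau[i, j] += 1.0 / cost
        tau[j, i] += 1.0 / cost
    return tau

def disturb(tau, strength=0.3):
    """Pheromone disturbance: blend toward the mean level so the search
    can escape stagnation."""
    return (1 - strength) * tau + strength * tau.mean()

# e.g., after each iteration: every elite ant deposits its best-so-far
# tour; if the best cost has not improved for a while, disturb the matrix
tau = deposit(tau, tour=[0, 3, 7, 0], cost=42.0)
```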
A cooperative intrusion detection system for ad hoc networks
Mobile ad hoc networking (MANET) has become an exciting and important technology in recent years because of the rapid proliferation of wireless devices. MANETs are highly vulnerable to attacks due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management point, and lack of a clear line of defense. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANET. Building on our prior work on anomaly detection, we investigate how to improve the anomaly detection approach to provide more details on attack types and sources. For several well-known attacks, we can apply a simple rule to identify the attack type when an anomaly is reported. In some cases, these rules can also help identify the attackers. We address the run-time resource constraint problem using a cluster-based detection scheme where periodically a node is elected as the ID agent for a cluster. Compared with the scheme where each node is its own ID agent, this scheme is much more efficient while maintaining the same level of effectiveness. We have conducted extensive experiments using the ns-2 and MobiEmu environments to validate our research.
Pulse pair beamforming and the effects of reflectivity field variations on imaging radars
Coherent radar imaging (CRI), which is fundamentally a beamforming process, has been used to create images of microscale reflectivity structures within the resolution volume of atmospheric Doppler radars. This powerful technique has the potential to unlock many new discoveries in atmospheric studies. The Turbulent Eddy Profiler (TEP) is a unique 915 MHz boundary layer radar consisting of a maximum of 91 independent receivers. The TEP configuration allows sophisticated CRI algorithms to be implemented, providing significant improvement in angular resolution. The present work includes a thorough simulation study of some of the capabilities of the TEP system. The pulse pair processor, used for radial velocity and spectral width estimation with meteorological radars, is combined with beamforming and applied, in an efficient manner, to the imaging radar case. Numerical simulation shows that the new technique provides robust and computationally efficient estimates of the spectral moments. For this study, a recently developed atmospheric radar simulation method is employed that supports the ten thousand scattering points necessary for high-resolution imaging simulation; previous methods were limited in the number of scatterers due to complexity issues. Radial velocity images from the beamforming radar are used to estimate the three-dimensional wind field map within the resolution volume. It is shown that a large root mean square (RMS) error in the estimated three-dimensional wind fields can occur using standard Fourier imaging, and this RMS error does not improve even as SNR is increased. The cause of the error is reflectivity variations within the resolution volume: the finite beamwidth of the beamformer skews the radial velocity estimate, and this results in poor wind field estimates. Adaptive Capon beamforming consistently outperforms the Fourier method in the quantitative study.
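For reference, the pulse-pair estimators themselves are short. The sketch below computes radial velocity and spectral width from the lag-0 and lag-1 autocorrelations of the I/Q series at one beam; sign conventions and noise correction vary by system, so treat this as a schematic rather than the paper's exact processing chain.

```python
import numpy as np

def pulse_pair(iq, wavelength, prt):
    """Pulse-pair estimates of radial velocity and spectral width from a
    time series of complex I/Q samples at one range gate / beam."""
    r0 = np.mean(np.abs(iq) ** 2)                  # lag-0 power
    r1 = np.mean(np.conj(iq[:-1]) * iq[1:])        # lag-1 autocorrelation
    v = -wavelength / (4 * np.pi * prt) * np.angle(r1)
    width = (wavelength / (4 * np.pi * prt * np.sqrt(2))
             * np.sqrt(np.abs(np.log(r0 / np.abs(r1)))))
    return v, width

# in an imaging context, the same estimator is applied to each beamformed
# output y_b = w_b^H x, where w_b are Fourier or Capon beamformer weights
```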
Short-term traffic flow prediction with Conv-LSTM
Accurate short-term traffic flow prediction can provide timely traffic condition information, which helps one make travel decisions and mitigate traffic jams. Deep learning (DL) provides a new paradigm for the analysis of the big data generated by daily urban traffic. In this paper, we propose a novel end-to-end deep learning architecture which consists of two modules. We combine convolution and LSTM to form a Conv-LSTM module which can extract the spatial-temporal information of the traffic flow. Furthermore, a bi-directional LSTM module is also adopted to analyze historical traffic flow data at the prediction point to capture the periodicity feature of the traffic flow. The experimental results on a real dataset show that the proposed approach achieves better prediction accuracy compared with existing approaches.
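A minimal Keras sketch of this two-module layout might look as follows; every shape (frame count, grid size, history length, filter counts) is invented for illustration and does not reflect the paper's configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

# assumed inputs: 10 past frames of a 20x20 traffic-flow grid, plus 48
# historical flow values at the prediction point
spatial_in = keras.Input(shape=(10, 20, 20, 1))
hist_in = keras.Input(shape=(48, 1))

# Conv-LSTM module: spatial-temporal features from the grid sequence
x = layers.ConvLSTM2D(32, kernel_size=3, padding="same")(spatial_in)
x = layers.Flatten()(x)

# bi-directional LSTM module: periodicity feature from the history
h = layers.Bidirectional(layers.LSTM(32))(hist_in)

out = layers.Dense(1)(layers.Concatenate()([x, h]))
model = keras.Model([spatial_in, hist_in], out)
model.compile(optimizer="adam", loss="mse")
```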
Impaired High‐Density Lipoprotein Anti‐Oxidative Function Is Associated With Outcome in Patients With Chronic Heart Failure
BACKGROUND Oxidative stress is mechanistically linked to the pathogenesis of chronic heart failure (CHF). Antioxidative functions of high-density lipoprotein (HDL) particles have been found impaired in patients with ischemic cardiomyopathy; however, the impact of antioxidative HDL capacities on clinical outcome in CHF patients is unknown. We therefore investigated the predictive value of antioxidative HDL function on mortality in a representative cohort of patients with CHF. METHODS AND RESULTS We prospectively enrolled 320 consecutive patients admitted to our outpatient department for heart failure and determined antioxidative HDL function using the HDL oxidative index (HOI). During a median follow-up time of 2.8 (IQR: 1.8-4.9) years, 88 (27.5%) patients reached the combined cardiovascular endpoint defined as the combination of death due to cardiovascular events and heart transplantation. An HOI ≥1 was significantly associated with survival free of cardiovascular events in Cox regression analysis with a hazard ratio (HR) of 2.28 (95% CI 1.48-3.51, P<0.001). This association remained significant after comprehensive multivariable adjustment for potential confounders with an adjusted HR of 1.83 (95% CI 1.1-2.92, P=0.012). Determination of HOI significantly enhanced risk prediction beyond that achievable with N-terminal pro-B-type natriuretic peptide indicated by improvements in net reclassification index (32.4%, P=0.009) and integrated discrimination improvement (1.4%, P=0.04). CONCLUSIONS Impaired antioxidative HDL function represents a strong and independent predictor of mortality in patients with CHF. Implementation of HOI leads to a substantial improvement of risk prediction in patients with CHF.
Parallel Local Algorithms for Core, Truss, and Nucleus Decompositions
Finding the dense regions of a graph and relations among them is a fundamental task in network analysis. Nucleus decomposition is a principled framework of algorithms that generalizes the k-core and k-truss decompositions. It can leverage the higher-order structures to locate the dense subgraphs with hierarchical relations. Computation of the nucleus decomposition is performed in multiple steps, known as the peeling process, and it requires global information about the graph at any time. This prevents the scalable parallelization of the computation. Also, it is not possible to compute approximate and fast results by the peeling process, because it does not produce the densest regions until the algorithm is complete. In a previous work, Lu et al. proposed to iteratively compute the h-indices of vertex degrees to obtain the core numbers and prove that the convergence is obtained after a finite number of iterations. In this work, we generalize the iterative h-index computation for any nucleus decomposition and prove convergence bounds. We present a framework of local algorithms to obtain the exact and approximate nucleus decompositions. Our algorithms are pleasingly parallel and can provide approximations to explore time and quality trade-offs. Our shared-memory implementation verifies the efficiency, scalability, and effectiveness of our algorithms on real-world networks. In particular, using 24 threads, we obtain up to 4.04x and 7.98x speedups for k-truss and (3, 4) nucleus decompositions.
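The iterative h-index computation that this work generalizes is simple to state for the k-core case: initialize each vertex with its degree, then repeatedly replace each value with the h-index of its neighbors' values until nothing changes. A small self-contained sketch:

```python
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    values = sorted(values, reverse=True)
    h = 0
    while h < len(values) and values[h] >= h + 1:
        h += 1
    return h

def core_numbers(adj):
    """Iterated h-index computation of k-core numbers; adj maps each
    vertex to its set of neighbors. Convergence is guaranteed after a
    finite number of sweeps."""
    core = {v: len(nbrs) for v, nbrs in adj.items()}   # start from degrees
    changed = True
    while changed:
        changed = False
        for v, nbrs in adj.items():
            new = h_index([core[u] for u in nbrs])
            if new != core[v]:
                core[v] = new
                changed = True
    return core

# example: a triangle with a pendant vertex
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(adj))   # {0: 2, 1: 2, 2: 2, 3: 1}
```

The local character of the update (each vertex looks only at its neighbors) is what makes the generalized nucleus-decomposition algorithms pleasingly parallel.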
Beta-cell dysfunction in subjects with impaired glucose tolerance and early type 2 diabetes: comparison of surrogate markers with first-phase insulin secretion from an intravenous glucose tolerance test.
OBJECTIVE Methods to assess beta-cell function in clinical studies are limited. The aim of the current study was to compare a direct measure of insulin secretion with fasting surrogate markers in relation to glucose tolerance status. RESEARCH DESIGN AND METHODS In 1,380 individuals from the Insulin Resistance Atherosclerosis Study, beta-cell function was assessed using a frequently sampled intravenous glucose tolerance test (first-phase insulin secretion; acute insulin response [AIR]), homeostasis model assessment of beta-cell function (HOMA-B), proinsulin levels, and the proinsulin-to-insulin ratio. Beta-cell function was cross-sectionally analyzed by glucose tolerance categories (normal glucose tolerance [NGT], n = 712; impaired glucose tolerance [IGT], n = 353; newly diagnosed diabetes by 2-h glucose from an oral glucose tolerance test [OGTT] [DM2h], n = 80; newly diagnosed diabetes by fasting glucose [DMf], n = 135; or newly diagnosed diabetes by fasting and 2-h glucose and established diabetes on diet/exercise only [DM], n = 100). RESULTS In Spearman correlation analyses, proinsulin and the proinsulin-to-insulin ratio were only modestly inversely related to AIR (r values from -0.02 to -0.27), and AIR was strongly related to HOMA-B (r values 0.56 and 0.58). HOMA-B markedly underestimated the magnitude of the beta-cell defect across declining glucose tolerance, especially for IGT and new DM by OGTT compared with AIR. Analyses adjusting for insulin sensitivity showed that beta-cell function was compromised in IGT, DM2h, DMf, and DM, relative to NGT, by 13, 12, 59, and 62% (HOMA-B) and by as much as 40, 60, 80, and 75%, using AIR. CONCLUSIONS Subjects with IGT and early-stage, asymptomatic type 2 diabetic patients have more pronounced beta-cell defects than previously estimated from epidemiological studies using homeostasis model assessment.
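For orientation, HOMA-B is a closed-form surrogate; the classic Matthews et al. formula can be computed in one line, assuming fasting insulin in µU/mL and fasting glucose in mmol/L. This is a generic illustration, not the study's analysis code, and the formula should be checked against the original reference before use.

```python
def homa_b(fasting_insulin_uU_ml, fasting_glucose_mmol_l):
    """HOMA of beta-cell function (Matthews et al.): 20*I0/(G0 - 3.5)."""
    return 20.0 * fasting_insulin_uU_ml / (fasting_glucose_mmol_l - 3.5)

print(homa_b(10.0, 5.5))   # 100.0, i.e., roughly "normal" beta-cell function
```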
Reconstructing/Deconstructing the Earliest Eukaryotes: How Comparative Genomics Can Help
We could reconstruct the evolution of eukaryote-specific molecular and cellular machinery if some living eukaryotes retained primitive cellular structures and we knew which eukaryotes these were. It's not clear that either is the case, but the expanding protist genomic database could help us in several ways.
Functional Entity Relationship Model and Update Operations
A data model, called the entity-relationship model, is proposed. This model incorporates some of the important semantic information about the real world. A special diagrammatic technique is introduced as a tool for data base design. An example of data base design and description using the model and the diagrammatic technique is given. Some implications for data integrity, information retrieval, and data manipulation are discussed. The entity-relationship model can be used as a basis for unification of different views of data: the network model, the relational model, and the entity set model. Semantic ambiguities in these models are analyzed. Possible ways to derive their views of data from the entity-relationship model are presented.
Extending ACL2 with SMT Solvers
We present our extension of ACL2 with Satisfiability Modulo Theories (SMT) solvers using ACL2's trusted clause processor mechanism. We are particularly interested in the verification of physical systems including Analog and Mixed-Signal (AMS) designs. ACL2 offers strong induction abilities for reasoning about sequences and SMT complements deduction methods like ACL2 with fast nonlinear arithmetic solving procedures. While SAT solvers have been integrated into ACL2 in previous work, SMT methods raise new issues because of their support for a broader range of domains including real numbers and uninterpreted functions. This paper presents Smtlink, our clause processor for integrating SMT solvers into ACL2. We describe key design and implementation issues and describe our experience with its use.
Identifying reasons for ERP system customization in SMEs: a multiple case study
Purpose – The purpose of this article is to investigate possible reasons for ERP system customization in small and medium-sized enterprises (SMEs), with a particular focus on distinguishing influential factors of the SME context. Design/methodology/approach – An exploratory qualitative research approach was employed, as the study aims to identify new insights within the SME context. A multiple case study of four SMEs was conducted. Data were collected through 34 qualitative interviews with multiple informants across the four cases. Findings – The study reports findings from four SMEs where ERP customization has been applied to match organizational needs. First, the level and type of ERP system customization applied by the case organizations were investigated. Then, the reasons for ERP system customization were explored. The analysis identified seven possible reasons leading to ERP system customization, classified according to two phases of the ERP life-cycle (prior to "going-live", after "going-live"). Reasons specific to the SME context include unique business processes, ownership type, and organizational stage of growth. Research limitations/implications – The study is based on four cases only. Further research is needed to investigate the applicability of our findings in different contexts. Practical implications – The study findings are believed to be valuable for organizations about to implement an ERP system as well as for ERP vendors. By identifying the reasons leading to ERP system customization and investigating the effect of the SME context, the study contributes to better understanding of ERP system implementation in SMEs. Originality/value – The article contributes to the scarce literature on reasons for ERP system customization in SMEs. By classifying the reasons into two phases of the ERP life-cycle, the study also contributes by exploring ERP system customization practice in different phases of the ERP life-cycle.
Chapter Eighteen. British marxist history
Given the undoubted importance of Thompson's contribution to radical history, this chapter concentrates on surveying the debates occasioned by his work. In 1935, Comintern General Secretary George Dimitrov noted that across Europe fascists were writing national historical myths through which they hoped to justify their contemporary political project. Against 'economistic' models of historical materialism, Thompson sought to re-emphasise human agency at the heart of his Marxism, and in particular to reaffirm the importance of ideas as the basis for action. This reinterpretation of Marxism allowed him to conceptualise both the rise of Stalinism and the revolt against it in 1956. Keywords: British Marxist history; Marxism; Stalinism
Book recommendation system for digital library based on user profiles by using association rule
With the wide application of management systems, information data grows rapidly. On one hand, people have access to a large number of information resources; on the other hand, the time cost and difficulty of finding the proper information increase. To tackle these problems, book recommendation is one solution for university libraries, which possess huge volumes of books and reading-intensive users. This paper proposes a library book recommendation system based on users' loaning profiles and applies association rules to create the model. The results show that the association rule algorithm is suitable for recommending books in a library.
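The core of such a system is the support/confidence computation over loaning records. A self-contained toy sketch, with made-up loan histories and thresholds:

```python
from itertools import combinations
from collections import Counter

# toy loaning profiles: the set of books each user has borrowed
loans = [{"db", "ai", "ml"}, {"db", "ai"}, {"ai", "ml"},
         {"db", "ml"}, {"db", "ai", "ml"}]
min_support, min_conf = 0.4, 0.7
n = len(loans)

counts = Counter()
for basket in loans:
    for size in (1, 2):
        for items in combinations(sorted(basket), size):
            counts[items] += 1

rules = []
for pair, c in counts.items():
    if len(pair) != 2 or c / n < min_support:
        continue
    x, y = pair
    for ante, cons in ((x, y), (y, x)):
        conf = c / counts[(ante,)]
        if conf >= min_conf:              # "borrowed ante => suggest cons"
            rules.append((ante, cons, c / n, conf))

for ante, cons, sup, conf in rules:
    print(f"{ante} -> {cons}  support={sup:.2f}  confidence={conf:.2f}")
```

A deployed system would mine rules with Apriori or FP-growth over the full circulation log and recommend the consequent whenever a user borrows the antecedent.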
Generalizing Top Trading Cycles for Housing Markets with Fractional Endowments
The housing market setting constitutes a fundamental model of exchange economies of goods. In most of the work concerning housing markets, it is assumed that agents own and are allocated discrete houses. The drawback of this assumption is that it does not cater for randomized assignments or the allocation of time-shares. Recently, house allocation with fractional endowment of houses was considered by Athanassoglou and Sethuraman (2011), who posed the open problem of generalizing Gale's Top Trading Cycles (TTC) algorithm to the case of housing markets with fractional endowments. In this paper, we address the problem and present a generalization of TTC called FTTC that is polynomial-time as well as core stable and Pareto optimal with respect to stochastic dominance even if there are indifferences in the preferences. We prove that if each agent owns one discrete house, FTTC coincides with a state-of-the-art strategyproof mechanism for housing markets with discrete endowments and weak preferences. We show that FTTC satisfies a maximal set of desirable properties by proving two impossibility theorems. Firstly, we prove that with respect to stochastic dominance, core stability and no justified envy are incompatible. Secondly, we prove that there exists no individual rational, Pareto optimal and weak strategyproof mechanism, thereby answering another open problem posed by Athanassoglou and Sethuraman (2011). The second impossibility implies a number of results in the literature.
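For readers who have not seen it, the discrete TTC baseline that FTTC reduces to (one house per agent, strict preferences) fits in a few lines; the fractional generalization in the paper is substantially more involved and is not reproduced here.

```python
def top_trading_cycles(prefs):
    """Gale's TTC for the discrete housing market (house i is agent i's
    endowment; prefs[a] ranks houses, best first). Returns agent -> house;
    the result is the unique core allocation under strict preferences."""
    remaining = set(prefs)
    allocation = {}
    while remaining:
        # each agent points at the owner of its favorite remaining house
        target = {a: next(h for h in prefs[a] if h in remaining)
                  for a in remaining}
        # walk pointers from an arbitrary agent until a cycle repeats
        a = next(iter(remaining))
        seen = []
        while a not in seen:
            seen.append(a)
            a = target[a]
        cycle = seen[seen.index(a):]           # the cycle through a
        for agent in cycle:                    # trade along the cycle
            allocation[agent] = target[agent]
        remaining -= set(cycle)
    return allocation

prefs = {0: [1, 0, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
print(top_trading_cycles(prefs))   # {0: 1, 1: 0, 2: 2}
```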
Industrie 4.0: Enabling technologies
With the development of industries, we have realized the third industrial revolution. Following the development of Cyber-Physical Systems (CPS), industrial wireless networks and some other enabling technologies, the fourth industrial revolution is being gradually rolled out. This paper presents an overview of the background, concept, basic methods, major technologies and application scenarios for Industrie 4.0. In our view, Industrie 4.0 as an abstract concept can closely integrate the physical world with the virtual world. This strategy of Industrie 4.0 will lead more and more people to participate in the manufacturing process and will further popularize products through CPS technology. The typical approach for Industrie 4.0 is social manufacturing. In fact, social manufacturing can directly link customers' needs and industries, but it must be based on enabling technologies such as embedded systems, wireless sensor networks, industrial robots, 3D printing, cloud computing, and big data. Therefore, this paper explains these concepts in detail, together with their advantages and relations to industries. We can foresee that our lives will become more efficient, fast, safe and convenient due to the development of Industrie 4.0 in the near future.
A survey of e-government business models in the Netherlands
As most countries are moving their public services online, governments are trying different business models to harness the Internet for service delivery improvement. Yet, it is not entirely clear what value-added aspects should be included and what a business model brings to governments. This paper examines the current state of development of e-government business models in the Netherlands at the municipality level and identifies recommendations to further expand these business models. First, we examined a sample of websites to identify the underlying business models, and then we compared them with references to business models in the e-commerce literature. The cross-referencing aimed to refine our search and to yield an initial set of e-government business models. Next, we surveyed the status of e-government business models around the port of Rotterdam based on our current understanding of business models. Against this, we then proposed recommendations for improved e-government business models. These recommendations aim to help governments design the next generation of e-government business models, not only to increase efficiency but also to pump-prime new technological development and to enlarge wider social participation.
New York Before Chinatown: Orientalism and the Shaping of American Culture, 1776-1882
From George Washington's desire (in the heat of the Revolutionary War) for a proper set of Chinese porcelains for afternoon tea, to the lives of Chinese-Irish couples in the 1830s, to the commercial success of Cheng and Eng (the "Siamese twins"), to rising fears of the "heathen Chinee", this work offers a look at the role Chinese people, things and ideas played in the fashioning of American culture and politics. Piecing together various historical fragments and anecdotes from the years before Chinatown emerged in the 1870s, historian John Kuo Wei Tchen redraws Manhattan's historical landscape and seeks to broaden our understanding of the role of port cultures in the making of American identities. Tchen tells his story in three parts. In the first, he explores America's fascination with Asia as a source of luxury items, cultural taste and lucrative trade. In the second, he explains how Chinese people and things became objects of curiosity in the expansive commercial marketplace. In the third part, Tchen focuses on how Americans' attitude toward the Chinese changed from fascination to demonization.
Relative Wulst volume is correlated with orbit orientation and binocular visual field in birds
In mammals, species with more frontally oriented orbits have broader binocular visual fields and relatively larger visual regions in the brain. Here, we test whether a similar pattern of correlated evolution is present in birds. Using both conventional statistics and modern comparative methods, we tested whether the relative sizes of the Wulst and optic tectum (TeO) were significantly correlated with orbit orientation, binocular visual field width and eye size in birds using a large, multi-species data set. In addition, we tested whether relative Wulst and TeO volumes were correlated with axial length of the eye. The relative size of the Wulst was significantly correlated with orbit orientation and the width of the binocular field such that species with more frontal orbits and broader binocular fields have relatively large Wulst volumes. Relative TeO volume, however, was not significantly correlated with either variable. In addition, both relative Wulst and TeO volume were weakly correlated with relative axial length of the eye, but these correlations were not corroborated by independent contrasts. Overall, our results indicate that relative Wulst volume reflects orbit orientation and possibly binocular visual field, but not eye size.
A Formal System for Euclid's Elements
We present a formal system, E, which provides a faithful model of the proofs in Euclid's Elements, including the use of diagrammatic reasoning.
Game theory applications in wireless networks: A
Game theory is a set of tools developed to model interactions between agents with conflicting interests [5]. It is a field of applied mathematics that defines and evaluates interactive decision situations. It provides analytical tools to predict the outcome of complicated interactions between rational entities, where rationality demands strict adherence to a strategy based on observed or measured results [13]. Originally developed to model problems in the field of economics, game theory has recently been applied to network problems, in most cases to solve resource allocation problems in a competitive environment. There are several reasons why game theory is an apt choice for studying cooperative communications. Nodes in the network are independent agents, making decisions only in their own interests, and game theory provides sufficient theoretical tools to analyze the network users' behaviors and actions. Game theory also primarily deals with distributed optimization, which often requires local information only, and thus enables us to design distributed algorithms [14]. This article surveys the literature on game theory as it applies to wireless networks. First, a brief overview of classifications of games, important definitions used in games (Nash equilibrium, Pareto efficiency; pure, mixed and fully mixed strategies) and game models is presented. Then, we identify five areas of application of game theory in wireless networks; accordingly, we discuss work related to game theory in communication networks, cognitive radio networks, wireless sensor networks, resource allocation and power control. Finally, we discuss the limitations of the application of game theory in wireless networks.
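A standard concrete example of these ideas in wireless settings is distributed power control viewed as an iterated best response. The sketch below runs a Foschini-Miljanic style update with invented link gains; under a feasibility condition on the gains and target SINR, it converges to the Nash equilibrium of the underlying game.

```python
import numpy as np

# each user i updates its power so that its SINR meets a common target,
# given the interference the others currently create (values illustrative)
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.2, 1.0]])      # link gains g[i, j]: j's power seen by i
noise = 0.1
gamma = 2.0                          # common target SINR

p = np.ones(3)                       # initial transmit powers
for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    p = gamma * interference / np.diag(G)   # simultaneous best responses

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print(p, sinr)                       # powers settle at the Nash equilibrium
```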
The influence of neuropsychological rehabilitation on symptomatology and quality of life following brain injury: a controlled long-term follow-up.
PRIMARY OBJECTIVE To establish whether, following acquired brain injury, intensive post-acute neuropsychological rehabilitation could have long-term beneficial effects. METHODS AND PROCEDURES A group of 37 adults who had suffered cerebrovascular accidents or traumatic brain injuries and who had undergone a rehabilitation programme were followed up 12-22 years post-injury, together with a non-rehabilitated control group of 13 adults, matched for brain-injury and demographics characteristics. Both groups completed a set of questionnaires concerning broad aspects of psychological well-being. Significant others completed similar questionnaires. MAIN OUTCOMES AND RESULTS The rehabilitation group showed significantly lower levels of brain injury symptoms and higher levels of competency at follow-up. They also rated internal locus of control and general self-efficacy as significantly higher than the control group. Anxiety and depression levels were significantly lower and quality of life significantly higher in the rehabilitation group for both the subjects themselves and for their significant others. CONCLUSIONS Within methodological limitations this study suggests that post-acute neuropsychological rehabilitation can have long-term beneficial effects.
Coronary flow reserve by contrast enhanced transesophageal coronary sinus Doppler measurements can evaluate diabetic microvascular dysfunction.
BACKGROUND This study was undertaken to investigate whether coronary flow reserve (CFR) using coronary sinus flow (CSF), which can be measured by transesophageal Doppler echocardiography (TEDE), especially when contrast enhanced, is useful in evaluating microvascular dysfunction in patients with diabetes mellitus (DM). METHODS AND RESULTS CSF recordings using contrast-enhanced TEDE were performed before and after adenosine triphosphate infusion (0.15 mg·kg⁻¹·min⁻¹) in 16 patients with type 2 DM and diabetic retinopathy and in 13 non-DM patients (control). Coronary angiography revealed normal epicardial coronary arteries. CFR was defined as the ratio of the antegrade flow velocity time integral in hyperemic conditions to that at basal levels. Clear envelopes of CSF were obtained in all DM patients using contrast-enhanced TEDE. CFR using CSF in the DM group was significantly decreased compared with the control group (1.4±0.4 vs 2.1±0.5, p<0.01), but there were no significant differences in age, ejection fraction, or rates of hypertension and hypercholesterolemia between the 2 groups. Using 1.7 as the CFR cut-off value, diabetic microvascular dysfunction could be detected with 82% sensitivity and 83% specificity. CONCLUSIONS CFR calculated by CSF using contrast-enhanced TEDE may be useful for evaluating diabetic microvascular dysfunction.
Recursive Spatial Transformer (ReST) for Alignment-Free Face Recognition
Convolutional Neural Network (CNN) has led to significant progress in face recognition. Currently most CNN-based face recognition methods follow a two-step pipeline, i.e. a detected face is first aligned to a canonical one predefined by a mean face shape, and then it is fed into a CNN to extract features for recognition. The alignment step transforms all faces to the same shape, which can cause loss of geometrical information which is helpful in distinguishing different subjects. Moreover, it is hard to define a single optimal shape for the following recognition, since faces have large diversity in facial features, e.g. poses, illumination, etc. To be free from the above problems with an independent alignment step, we introduce a Recursive Spatial Transformer (ReST) module into CNN, allowing face alignment to be jointly learned with face recognition in an end-to-end fashion. The designed ReST has an intrinsic recursive structure and is capable of progressively aligning faces to a canonical one, even those with large variations. To model non-rigid transformation, multiple ReST modules are organized in a hierarchical structure to account for different parts of faces. Overall, the proposed ReST can handle large face variations and non-rigid transformation, and is end-to-end learnable and adaptive to input, making it an effective alignment-free face recognition solution. Extensive experiments are performed on LFW and YTF datasets, and the proposed ReST outperforms those two-step methods, demonstrating its effectiveness.
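The recursive structure is easy to sketch: a small localization network predicts an affine update from the current (partially aligned) image, the image is warped, and the procedure repeats. The PyTorch module below is a simplified schematic with an invented localization network and step count, not the paper's hierarchical ReST.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReSTBlock(nn.Module):
    """Recursive spatial transformer sketch: predict a small affine
    update from the current image, warp, and repeat."""
    def __init__(self, steps=3):
        super().__init__()
        self.steps = steps
        self.loc = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, 6))
        # start from the identity transform so early training is stable
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):                       # x: (N, 3, H, W) face crops
        for _ in range(self.steps):             # progressive alignment
            theta = self.loc(x).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            x = F.grid_sample(x, grid, align_corners=False)
        return x                                # fed to the recognition CNN
```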
Engineering Design Performance Management - from Alchemy to Science through ISTa (Invited Talk)
The drive for performance is omnipresent in modern society. We believe this to be true, although we only have a vague idea of what "performance" really means. The demand for management is omnipresent in modern society. We accept this to be true, although management theory is a science barely out of its infancy (who wishes to be supervised by an infant?). Performance management is considered to be the need of the hour in modern society. We are told this is true, although we feel that we are trying to cope with something that we have very little comprehension of. Engineering is the omnipresent backbone of modern society. We experience this to be true, although we acknowledge that design is at least as much an art as it is a science, a world where uncertainty rules. The impact of Information Society Technology (IST) is omnipresent in modern society. We understand this to be true, although we know that there is no point in automating something we don't understand. Somewhat ironically, one could conclude that Engineering Design Performance Management (EDPM) is about the challenge of handling the uncertain and appraising the unknown. Not to forget IST, embarked on a mission to automate everything it possibly could, pretending that there are ready answers. This is like alchemy, but for performance. Alchemy hovered between worlds; so does contemporary performance management, hovering between fiction and reality. Alchemists proposed to use the philosopher's stone (materia prima), a mysterious, unknown substance that they believed to have the power to transmute base metals into gold; so does contemporary performance management by hailing IST as its "Magnum Opus". Without a doubt, it is high time to rebuild a firm foundation for performance management. We need a consistent framework addressing the relevant aspects of performance management from the abstract level to the concrete level. Only then will IST be able to unfold its full potential and deliver on its promises. The strategic potential of IST does not lie in empty automation that enforces unrealistic and oppressive processes. It lies in enabling better decision making in a highly complex environment of change, uncertainty, risk, and urgency.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the desired syntax. We show it is possible to create training data for this task by first doing backtranslation at a very large scale, and then using a parser to label the syntactic transformations that naturally occur during this process. Such data allows us to train a neural encoder-decoder model with extra inputs to specify the target syntax. A combination of automated and human evaluations shows that SCPNs generate paraphrases that follow their target specifications without decreasing paraphrase quality when compared to baseline (uncontrolled) paraphrase systems. Furthermore, they are more capable of generating syntactically adversarial examples that both (1) "fool" pretrained models and (2) improve the robustness of these models to syntactic variation when used to augment their training data.
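The data-generation recipe can be summarized as a short pipeline. In the sketch below, the two callables (a round-trip machine translation step and a constituency parser) are hypothetical stand-ins that the reader must supply; only the shape of the pipeline follows the abstract.

```python
from typing import Callable

def build_scpn_training_data(sentences: list[str],
                             backtranslate: Callable[[str], str],
                             parse: Callable[[str], str]) -> list[tuple]:
    """Assemble (sentence, target parse, paraphrase) triples; the two
    callables are user-supplied, hypothetical stand-ins here."""
    triples = []
    for s in sentences:
        p = backtranslate(s)      # round-trip MT yields a paraphrase
        target = parse(p)         # label the syntax that naturally occurred
        triples.append((s, target, p))
    # an encoder-decoder is then trained on (sentence, target parse) -> paraphrase
    return triples
```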
W-Band Waveguide Filters Fabricated by Laser Micromachining and 3-D Printing
This paper presents two W-band waveguide bandpass filters, one fabricated using laser micromachining and the other 3-D printing. Both filters are based on coupled resonators and are designed to have a Chebyshev response. The first filter is for laser micromachining and it is designed to have a compact structure allowing the whole filter to be made from a single metal workpiece. This eliminates the need to split the filter into several layers and therefore yields an enhanced performance in terms of low insertion loss and good durability. The second filter is produced from polymer resin using a stereolithography 3-D printing technique and the whole filter is plated with copper. To facilitate the plating process, the waveguide filter consists of slots on both the broadside and narrow side walls. Such slots also reduce the weight of the filter while still retaining the filter's performance in terms of insertion loss. Both filters are fabricated and tested and have good agreement between measurements and simulations.
Biochemical markers of bone turnover in patients with spinal metastases after resistance training under radiotherapy – a randomized trial
To compare the effects of resistance training versus passive physical therapy on bone turnover markers (BTM) in the metastatic bone during radiation therapy (RT) in patients with spinal bone metastases, and secondly, to evaluate the association of BTM with local response, skeletal-related events (SRE), and number of metastases. In this randomized trial, 60 patients were allocated from September 2011 to March 2013 into one of two arms: resistance training (Arm A) or passive physical therapy (Arm B), with thirty patients in each arm during RT. Biochemical markers such as pyridinoline (PYD), deoxypyridinoline (DPD), bone alkaline phosphatase (BAP), total amino-terminal propeptide of type I collagen (PINP), beta-isomer of carboxy-terminal telopeptide of type I collagen (CTX-I), and cross-linked N-telopeptide of type I collagen (NTX) were analyzed at baseline and three months after RT. Mean change values of PYD and CTX-I were significantly lower at 3 months after RT (p = 0.035 and p = 0.043) in Arm A. Importantly, all markers decreased in both arms, except PYD and CTX-I in Arm B, although significance was not reached for some biomarkers. In Arm A, the local response was significantly higher (p = 0.003) and PINP could be identified as a predictor for survivors (OR 0.968, 95%CI 0.938–0.999, p = 0.043). BAP (OR 0.974, 95%CI 0.950–0.998, p = 0.034) and PINP (OR 1.025, 95%CI 1.001–1.049, p = 0.044) were related to the avoidance of SRE. In this group of patients with spinal bone metastases, we were able to show that guided resistance training of the paravertebral muscles can influence BTM. PYD and CTX-I decreased significantly in Arm A. PINP can be considered a complementary tool for prediction of local response, and PINP as well as BAP for avoidance of SRE. Clinical trial identifier NCT 01409720. August 2, 2011.
Systemic-attachment formulation for families of children with autism
Purpose – Case formulation has gained increasing prominence as a guide to intervention across a range of clinical problems. It offers a contrasting orientation to diagnosis, and its value is considered in the context of clinical work with autistic spectrum disorders (ASD). The purpose of this paper is to argue that case formulation integrating attachment, systemic and narrative perspectives offers a valuable way forward in assisting people with the diagnosis and their families. Design/methodology/approach – The literature on ASD and related conditions is reviewed to examine levels of co-morbidity, consider the role of parental mental health difficulties and explore the issues inherent in current approaches to diagnosis. Findings – ASD is found to have a high level of co-morbidity with other difficulties, such as anxiety and insecure attachment. Research findings, alongside the authors' own clinical experience, are developed to suggest that formulation can allow the possibility of early intervention based on a holistic appraisal of the array of difficulties present prior to a diagnosis. Originality/value – It is argued that the use of this systemic-attachment formulation approach could offset the exacerbation of ASD and related conditions, and the deterioration in families' mental health, whilst they face long waiting times for a diagnosis.
Error patterns in MLC NAND flash memory: Measurement, characterization, and analysis
As NAND flash memory manufacturers scale down to smaller process technology nodes and store more bits per cell, the reliability and endurance of flash memory decrease. Wear-leveling and error correction coding can improve both reliability and endurance, but finding effective algorithms requires a strong understanding of flash memory error patterns. To enable such understanding, we have designed and implemented a framework for fast and accurate characterization of flash memory throughout its lifetime. This paper examines the complex flash errors that occur at 30-40 nm flash technologies. We demonstrate distinct error patterns, such as cycle-dependency, location-dependency and value-dependency, for various types of flash operations. We analyze the discovered error patterns and explain why they exist from a circuit and device standpoint. Our hope is that the understanding developed from this characterization serves as a building block for new error tolerance algorithms for flash memory.
BlinDar: An invisible eye for the blind people making life easy for the blind with Internet of Things (IoT)
Blindness is a condition in which an individual loses ocular perception. Mobility and self-reliance for the visually impaired and blind have always been a problem. In this paper a smart Electronic Traveling Aid (ETA) called BlinDar is proposed. This smart guiding ETA improves the lives of the blind: it is equipped for the Internet of Things (IoT) and is meant to help the visually impaired and blind walk without constraint in closed as well as open environments. BlinDar is a highly efficient, reliable, fast-responding, lightweight, low-power, and cost-effective device for the blind. Ultrasonic sensors are used to detect obstacles and potholes within a range of 2 m. A GPS module and an ESP8266 Wi-Fi module are used for sharing location with the cloud. An MQ2 gas sensor detects fire in the path, and an RF Tx/Rx module helps find the stick when it is misplaced. The microcontroller is an Arduino Mega2560, whose 54 digital I/O pins make interfacing the components easy.
The impact of modified Hatha yoga on chronic low back pain: a pilot study.
PURPOSE The purpose of this randomized pilot study was to evaluate a possible design for a 6-week modified hatha yoga protocol to study the effects on participants with chronic low back pain. PARTICIPANTS Twenty-two participants (M = 4; F = 17), between the ages of 30 and 65, with chronic low back pain (CLBP) were randomized to either an immediate yoga-based intervention, or to a control group with no treatment during the observation period that received yoga training afterward. METHODS A specific CLBP yoga protocol designed and modified for this population by a certified yoga instructor was administered for one hour, twice a week, for 6 weeks. Primary functional outcome measures included the forward reach (FR) and sit and reach (SR) tests. All participants completed Oswestry Disability Index (ODI) and Beck Depression Inventory (BDI) questionnaires. Guiding questions were used for qualitative data analysis to ascertain how yoga participants perceived the instructor, group dynamics, and the impact of yoga on their life. ANALYSIS To account for dropouts, the data were divided into better or not categories, and analyzed using chi-square to examine differences between the groups. Qualitative data were analyzed through frequency of positive responses. RESULTS Potentially important trends in the functional measurement scores showed improved balance and flexibility and decreased disability and depression for the yoga group, but this pilot was not powered to reach statistical significance. Significant limitations included a high dropout rate in the control group and large baseline differences in the secondary measures. In addition, analysis of the qualitative data revealed the following frequency of responses: (1) the group intervention motivated the participants and (2) yoga fostered relaxation and new awareness/learning. CONCLUSION A modified yoga-based intervention may benefit individuals with CLBP, but a larger study is necessary to provide definitive evidence. Also, the impact on depression and disability could be considered as important outcomes for further study. Additional functional outcome measures should be explored. This pilot study supports the need for more research investigating the effect of yoga for this population.
Biliary exosomes influence cholangiocyte regulatory mechanisms and proliferation through interaction with primary cilia.
Exosomes are small extracellular vesicles that are thought to participate in intercellular communication. Recent work from our laboratory suggests that, in normal and cystic liver, exosome-like vesicles accumulate in the lumen of intrahepatic bile ducts, presumably interacting with cholangiocyte cilia. However, direct evidence for exosome-ciliary interaction is limited and the physiological relevance of such interaction remains unknown. Thus, in this study, we tested the hypothesis that biliary exosomes are involved in intercellular communication by interacting with cholangiocyte cilia and inducing intracellular signaling and functional responses. Exosomes were isolated from rat bile by differential ultracentrifugation and characterized by scanning, transmission, and immunoelectron microscopy. The exosome-ciliary interaction and its effects on ERK1/2 signaling, expression of the microRNA miR-15A, and cholangiocyte proliferation were studied on ciliated and deciliated cultured normal rat cholangiocytes (NRCs). Our results show that bile contains vesicles identified as exosomes by their size, characteristic "saucer-shaped" morphology, and specific markers, CD63 and Tsg101. When NRCs were exposed to isolated biliary exosomes, the exosomes attached to cilia, inducing a decrease of the phosphorylated-to-total ERK1/2 ratio, an increase of miR-15A expression, and a decrease of cholangiocyte proliferation. All these effects of biliary exosomes were abolished by the pharmacological removal of cholangiocyte cilia. Our findings suggest that bile contains exosomes functioning as signaling nanovesicles and influencing intracellular regulatory mechanisms and cholangiocyte proliferation through interaction with primary cilia.
Daily Routine Recognition through Activity Spotting
This paper explores the possibility of using low-level activity spotting for daily routine recognition. Using occurrence statistics of low-level activities and simple classifiers based on these statistics makes it possible to train a discriminative classifier for daily routine activities such as working and commuting. Using a recently published data set, we find that the number of required low-level activities is surprisingly low, thus enabling efficient algorithms for daily routine recognition through low-level activity spotting. More specifically, we employ the JointBoosting framework using low-level activity spotters as weak classifiers. By using certain low-level activities as support, we achieve an overall recall rate of over 90% and a precision rate of over 88%. Tuning down the weak classifiers to use only 2.61% of the original data still yields recall and precision rates of 80% and 83%.
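To make the pipeline concrete, here is a hedged stand-in: occurrence counts of spotted low-level activities as features and boosted decision stumps as the routine classifier. AdaBoost replaces the paper's JointBoosting (which shares weak classifiers across classes), and the data below is synthetic.

```python
# Stand-in for the routine-recognition pipeline; not the authors' code.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Rows: time windows; columns: occurrence counts of spotted low-level
# activities (e.g., walking, typing, phone use) within each window.
X = rng.poisson(lam=[3, 1, 2], size=(200, 3))
y = (X[:, 0] > X[:, 1] + X[:, 2]).astype(int)  # toy routine labels

# Decision stumps as weak classifiers, boosted into a routine classifier.
clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                         n_estimators=50)
clf.fit(X, y)
print(clf.score(X, y))
```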
Dual-Polarized Array Antenna for S/X-Band Active Phased Array Radar Applications
A dual-band dual-polarized microstrip antenna array for an advanced multi-function radio function concept (AMRFC) radar application operating at S and X bands is proposed. Two stacked planar arrays with three different thin substrates (RT/Duroid 5880 substrates with εr = 2.2 and three different thicknesses of 0.253 mm, 0.508 mm and 0.762 mm) are integrated to provide simultaneous operation at S band (3~3.3 GHz) and X band (9~11 GHz). To allow similar scan ranges for both bands, the S-band elements are selected as perforated patches to enable the placement of the X-band elements within them. Square patches are used as the radiating elements for the X band. Good agreement exists between the simulated and the measured results. The measured impedance bandwidth (VSWR ≤ 2) of the prototype array reaches 9.5% and 25% for the S- and X-bands, respectively. The measured isolation between the two orthogonal polarizations for both bands is better than 15 dB. The measured cross-polarization level is ≤ −21 dB for the S-band and ≤ −20 dB for the X-band.
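For reference, the reported 9.5% S-band figure is consistent with the usual fractional impedance bandwidth definition, worked here with the stated 3-3.3 GHz range and its center frequency:

```latex
\[
\text{BW}_{\%} \;=\; \frac{f_H - f_L}{f_c}\times 100
\;=\; \frac{3.3\,\text{GHz} - 3.0\,\text{GHz}}{(3.3 + 3.0)/2\,\text{GHz}}\times 100
\;\approx\; 9.5\,\%
\]
```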
Integrating NAND flash devices onto servers
Flash is a widely used storage device in portable mobile devices such as smartphones, digital cameras, and MP3 players. It provides high density and low power, properties that are appealing for other computing domains. In this paper, we examine its use in the server domain. Wear-out has the potential to limit the use of Flash in this domain. For Flash to be seriously considered in the server domain, architectural support must exist to address this lack of reliability. This paper first provides a survey of current and potential Flash usage models in a data center. We then advocate using Flash under an extended system memory usage model (an OS-managed disk cache) and describe the necessary architectural changes. Specifically, we propose two key changes. The first improves performance and reliability by splitting Flash-based disk caches into separate read and write regions. The second improves reliability by employing a programmable Flash memory controller. It changes the error code strength (number of correctable bits) and the number of bits that a memory cell can store (cell density) in response to the demands of the application.
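A rough sketch of the first change, as I read it (this is my illustration, not the authors' design): a disk cache whose read and write regions are kept separate, so that write-heavy blocks are confined to a region a programmable controller could protect with stronger ECC or lower cell density.

```python
# Illustrative split read/write Flash disk cache with LRU eviction.
from collections import OrderedDict

class SplitFlashCache:
    def __init__(self, read_capacity: int, write_capacity: int):
        self.read_capacity = read_capacity
        self.write_capacity = write_capacity
        self.read_region = OrderedDict()   # read-mostly blocks
        self.write_region = OrderedDict()  # write-heavy blocks

    def read(self, lba):
        for region in (self.write_region, self.read_region):
            if lba in region:
                region.move_to_end(lba)    # refresh LRU position
                return region[lba]
        return None  # miss: caller fetches from disk, then fills below

    def fill_after_miss(self, lba, data):
        self.read_region[lba] = data
        if len(self.read_region) > self.read_capacity:
            self.read_region.popitem(last=False)  # evict LRU block

    def write(self, lba, data):
        # Writes land only in the write region, concentrating wear there.
        self.read_region.pop(lba, None)
        self.write_region[lba] = data
        self.write_region.move_to_end(lba)
        if len(self.write_region) > self.write_capacity:
            self.write_region.popitem(last=False)
```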
Comparison of SVM and ANN for classification of eye events in EEG
Eye events (eye blinks, eyes closing, and eyes opening) are usually considered biological artifacts in the electroencephalographic (EEG) signal. With proper training, however, a person can control their eye blinks, so blinks can be used as a control signal in Brain-Computer Interface (BCI) applications. Support vector machines (SVMs) have in recent years proved to be among the best classification tools, and a comparison of SVMs with Artificial Neural Networks (ANNs) often provides fruitful results. A one-against-all SVM and a multilayer ANN are trained to detect the eye events, and a comparison of both is made in this paper.
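A hedged sketch of such a comparison: a one-against-all linear SVM versus a multilayer perceptron on synthetic stand-in EEG feature vectors for the three eye events. Feature extraction from raw EEG is out of scope here, and the random data means the accuracies are meaningless except as a demonstration of the setup.

```python
# One-vs-rest SVM vs. MLP on synthetic "EEG" features (illustration only).
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))         # stand-in EEG feature vectors
y = rng.integers(0, 3, size=300)      # 0: blink, 1: eyes close, 2: eyes open

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = OneVsRestClassifier(SVC(kernel="linear")).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("SVM:", svm.score(X_te, y_te), "ANN:", ann.score(X_te, y_te))
```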
Critical Success Factors of Total Quality Management and Their Impact on the Performance of the Iranian Automotive Industry
This paper presents a model for an empirical study of the Iranian automotive industry aimed at improving its performance. Many factors are effective in improving the performance of the Iranian automobile industry, namely leadership, customer focus, training, supplier quality management, product design, process management, and teamwork. Quality improvement plays a fundamental role in determining performance in Iranian manufacturing industries. In this research, a model has been developed that includes quality culture, critical success factors of Total Quality Management, and quality improvement, in order to study their influence on the performance of the Iranian automotive industry. It is hoped that this paper can serve as an academic source for both academicians and managers who investigate the relationship between quality culture, critical success factors of Total Quality Management, quality improvement, and performance in a systematic manner, so as to increase the success rate of Total Quality Management implementation.
Italian managers: fidelity or performance
Is there a link between Italy's disappointing productivity growth and the way Italian firms select and develop managerial talent? We collect extensive information on the characteristics of Italian managers and of the firms that employ them. In particular, we analyze the incentive structure that managers face, their career profiles, and their use of time. Our data indicate that a fraction of firms – especially non-family firms and multinationals – adopt a performance model, whereby managers are hired through formal channels (business contacts, head-hunters, ads), are assessed regularly, and are rewarded, promoted, and dismissed on the basis of the assessment results. Other firms – especially family firms and firms that operate on the national market only – instead adopt a fidelity model of managerial talent development: they hire managers on the basis of personal or family contacts, they do not assess their performance formally, and they reward them based on the quality of their relationship with the firm's owners.
Cluster By: a new SQL extension for spatial data aggregation
The development of areas such as remote and airborne sensing, location-based services, and geosensor networks enables the collection of large volumes of spatial data. These datasets necessitate the wide application of spatial databases. Queries on these geo-referenced data often require the aggregation of isolated data points into spatial clusters and the computation of cluster properties. However, the current SQL standard does not provide an effective way to form and query spatial clusters. In this paper, we aim at introducing Cluster By into spatial databases to allow a broad range of interesting queries to be posed over spatial clusters. We also provide a language construct to specify spatial clustering algorithms. The extension is demonstrated with several motivating examples.
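The SQL extension itself is the paper's contribution; as an illustration of the cluster-then-aggregate semantics, this Python sketch emulates what a hypothetical query of the form `SELECT cluster_id, COUNT(*), AVG(temp) FROM readings CLUSTER BY location` might compute, using DBSCAN as the pluggable clustering algorithm. The query syntax and the data are made up.

```python
# Emulating cluster-then-aggregate over spatial points (illustration only).
import numpy as np
from sklearn.cluster import DBSCAN

points = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0],   # cluster A
                   [5.0, 5.0], [5.1, 4.9]])              # cluster B
temps = np.array([20.1, 20.4, 19.8, 31.0, 30.6])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
for cid in sorted(set(labels) - {-1}):                    # -1 marks noise
    mask = labels == cid
    # One aggregate row per spatial cluster: id, count, mean temperature.
    print(cid, int(mask.sum()), float(temps[mask].mean()))
```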
Android-based heart rate monitoring and automatic notification system
In this paper, the design of an integrated portable device that can monitor heart rate (HR) continuously and send notifications through the short message service (SMS) over the cellular network using an Android application is presented. The primary goal of our device is to ensure medical attention within the first few critical hours of an ailment for a patient with a poor heart condition, and hence boost the chances of survival. Our system is of paramount importance in situations where no doctor or clinic is nearby (e.g., in rural areas) and where patients cannot recognize their actual poor heart condition. The designed system continuously shows the real-time HR on the mobile screen through the Android application, and if any abnormal HR is detected, the system immediately sends a message to the concerned doctors and relatives whose numbers were previously saved in the Android application. This device ensures nonstop monitoring for cardiac events and immediate notification when an emergency strikes.
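A minimal sketch of the alert logic described above, in Python rather than the authors' Android code: the normal-range thresholds, the `send_sms` hook, and the placeholder contact number are all assumptions.

```python
# Threshold-based abnormal-HR notification (illustrative sketch).
NORMAL_HR_RANGE = (60, 100)  # assumed resting range, beats per minute

def send_sms(number: str, text: str) -> None:
    # Placeholder for the device's actual SMS/GSM hook.
    print(f"SMS to {number}: {text}")

def monitor(hr_bpm: int, contacts: list[str]) -> None:
    lo, hi = NORMAL_HR_RANGE
    if not lo <= hr_bpm <= hi:
        # Notify every saved doctor/relative, as the abstract describes.
        for number in contacts:
            send_sms(number, f"Abnormal heart rate detected: {hr_bpm} bpm")

monitor(142, ["+8801XXXXXXXXX"])  # example: tachycardia triggers the alert
```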
Cats Rule and Dogs Drool!: Classifying Stance in Online Debate
A growing body of work has highlighted the challenges of identifying the stance a speaker holds towards a particular topic, a task that involves identifying a holistic subjective disposition. We examine stance classification on a corpus of 4873 posts across 14 topics on ConvinceMe.net, ranging from the playful to the ideological. We show that ideological debates feature a greater share of rebuttal posts, and that rebuttal posts are significantly harder to classify for stance, for both humans and trained classifiers. We also demonstrate that the number of subjective expressions varies across debates, a fact correlated with the performance of systems sensitive to sentiment-bearing terms. We present results for identifying rebuttals with 63% accuracy, and for identifying stance on a per-topic basis that range from 54% to 69%, as compared to unigram baselines that vary between 49% and 60%. Our results suggest that methods that take into account the dialogic context of such posts might be fruitful.
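For orientation, here is the kind of unigram baseline the reported 49-60% figures refer to, sketched with a bag-of-words featurizer and a linear classifier; the two example posts are invented, and the paper's own models are more sophisticated.

```python
# Unigram stance baseline: bag-of-words + logistic regression (sketch).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["cats are independent and clean",
         "dogs are loyal and friendly"]
stance = ["pro-cat", "pro-dog"]

baseline = make_pipeline(CountVectorizer(), LogisticRegression())
baseline.fit(posts, stance)
print(baseline.predict(["loyal dogs drool but rule"]))
```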
Statistical base value of 24-hour blood pressure distribution in patients with essential hypertension.
The purpose of this study was to calculate statistically the minimum (base) blood pressure (BP) of nighttime (sleep-time) BP values obtained by ambulatory BP monitoring (ABPM) and to investigate its clinical significance. Twenty-four-hour recording of ECG with ABPM was performed directly (n=89) or indirectly (n=117) in 206 patients with essential hypertension. A telemeter was used for the direct method and a multi-biomedical recorder (TM2425) was used for indirect measurement. First, the minimum heart rate (HR0 = 60/RR0) was determined from the sleep-time ECG. The mean product of sleep-time diastolic BP (DBP) and pulse interval (RR) was divided by RR0 to obtain DBP0 [DBP0 = (DBP × RR)s/RR0, where the subscript s denotes the sleep-time mean]. The correlation between systolic BP (SBP) and DBP was used to determine SBP0 corresponding to DBP0. Statistical base mean BP (MBP0) was calculated from these values, and its reproducibility and relation to hypertension severity were investigated. MBP0 values were similar to true base values of sleep-time MBP obtained by the direct method (mean ± SD difference, 2.0 ± 4.2 mm Hg). Direct MBP0 criteria predicted hypertension severity (mild, moderate, or severe target organ damage) more accurately (predictive accuracy, 89%) than daytime MBP criteria (53%, P<0.01). Almost the same results were obtained using indirect MBP0 criteria. Day-to-day indirect MBP0 variation (mean absolute difference) was smaller (2.4 ± 1.8 mm Hg) than day-to-day daytime and nighttime MBP variation (6.3 ± 5.3 and 5.4 ± 3.4 mm Hg, respectively; n=61, P<0.01), and the correlation coefficient between day-to-day variations of daytime MBP and physical activity (measured by an acceleration sensor) was 0.38 (P<0.05). In conclusion, statistical base BP was almost equal to the true base (minimum) BP of the sleep-time BP distribution. It was closely related to the severity of hypertensive organ damage, was highly reproducible, and is considered likely to serve stochastically and physiologically as a representative BP value in an individual subject.
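A worked sketch of the base-BP arithmetic above on made-up sleep-time data: HR0 = 60/RR0 uses the longest pulse interval RR0, DBP0 is the sleep-time mean of DBP × RR divided by RR0, and SBP0 comes from the SBP-on-DBP regression line. The final MBP0 step uses the conventional mean-BP formula as an assumption, since the abstract does not spell it out.

```python
# Base-BP computation sketch on synthetic sleep-time measurements.
import numpy as np

rr = np.array([0.9, 1.0, 1.1, 1.2])        # pulse intervals RR (seconds)
dbp = np.array([72.0, 70.0, 68.0, 66.0])   # diastolic BP (mm Hg)
sbp = np.array([118.0, 115.0, 112.0, 109.0])  # systolic BP (mm Hg)

rr0 = rr.max()
hr0 = 60.0 / rr0                           # minimum heart rate HR0 (bpm)
dbp0 = np.mean(dbp * rr) / rr0             # base diastolic pressure DBP0

slope, intercept = np.polyfit(dbp, sbp, 1) # SBP-DBP correlation line
sbp0 = slope * dbp0 + intercept            # SBP0 corresponding to DBP0
mbp0 = dbp0 + (sbp0 - dbp0) / 3.0          # assumed conventional MBP formula
print(hr0, dbp0, sbp0, mbp0)
```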
Antiviral effect, safety, and pharmacokinetics of five-day oral administration of Deleobuvir (BI 207127), an investigational hepatitis C virus RNA polymerase inhibitor, in patients with chronic hepatitis C.
Deleobuvir (BI 207127) is an investigational oral nonnucleoside inhibitor of hepatitis C virus (HCV) NS5B RNA polymerase. Antiviral activity, virology, pharmacokinetics, and safety were assessed in HCV genotype 1-infected patients receiving 5 days' deleobuvir monotherapy. In this double-blind phase 1b study, treatment-naive (TN; n = 15) and treatment-experienced (TE; n = 45) patients without cirrhosis received placebo or deleobuvir at 100, 200, 400, 800, or 1,200 mg every 8 h (q8h) for 5 days. Patients with cirrhosis (n = 13) received deleobuvir at 400 or 600 mg q8h for 5 days. Virologic analyses included NS5B genotyping and phenotyping of individual isolates. At day 5, patients without cirrhosis had dose-dependent median HCV RNA reductions of up to 3.8 log10 (with no placebo response); patients with cirrhosis had median HCV RNA reductions of approximately 3.0 log10. Three patients discontinued due to adverse events (AEs). The most common AEs were gastrointestinal, nervous system, and skin/cutaneous tissue disorders. Plasma exposure of deleobuvir was supraproportional at doses ≥ 400 mg q8h and approximately 2-fold higher in patients with cirrhosis than in patients without cirrhosis. No virologic breakthrough was observed. NS5B substitutions associated with deleobuvir resistance in vitro were detected in 9/59 patients; seven encoded P495 substitutions, including P495L, which conferred 120- to 310-fold-decreased sensitivity to deleobuvir. P495 variants did not persist in follow-up without selective drug pressure. Deleobuvir monotherapy was generally well tolerated and demonstrated dose-dependent antiviral activity against HCV genotype 1 over 5 days.
Augmented reality in education: a meta-review and cross-media analysis
Augmented reality (AR) is an educational medium increasingly accessible to young users such as elementary school and high school students. Although previous research has shown that AR systems have the potential to improve student learning, the educational community remains unclear about the educational usefulness of AR and about the contexts in which this technology is more effective than other educational mediums. This paper addresses these topics by analyzing 26 publications that have previously compared student learning in AR versus non-AR applications. It identifies a list of positive and negative impacts of AR experiences on student learning and highlights factors that potentially underlie these effects. This set of factors is argued to cause differences in educational effectiveness between AR and other media. Furthermore, based on the analysis, the paper presents a heuristic questionnaire generated for judging the educational potential of AR experiences.