title | abstract
---|---
Performance Evaluation of GNSS for Train Localization | Global Navigation Satellite Systems (GNSS) can deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated consistently with the railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of a GNSS receiver for train localization. First, the GNSS performance properties and the railway RAMS properties are compared by definition. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on a railway track in the High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states. |
Improving Semantic Parsing via Answer Type Inference | In this work, we show the possibility of inferring the answer type before solving a factoid question and leveraging the type information to improve semantic parsing. By replacing the topic entity in a question with its type, we are able to generate an abstract form of the question, whose answer corresponds to the answer type of the original question. A bidirectional LSTM model is built to train over the abstract form of questions and infer their answer types. It is also observed that if we convert a question into a statement form, our LSTM model achieves better accuracy. Using the predicted type information to rerank the logical forms returned by AgendaIL, one of the leading semantic parsers, we are able to improve the F1-score from 49.7% to 52.6% on the WEBQUESTIONS data. |
An electro-fluid-dynamic simulator for the cardiovascular system. | This work presents the initial studies and the proposal for a cardiovascular system electro-fluid-dynamic simulator to be applied in the development of left ventricular assist devices (LVADs). The simulator, which is being developed at University Sao Judas Tadeu and at Institute Dante Pazzanese of Cardiology, is composed of three modules: (i) an electrical analog model of the cardiovascular system operating in the PSpice electrical simulator environment; (ii) an electronic controller, based on the laboratory virtual instrumentation engineering workbench (LabVIEW) acquisition and control tool, which will act on the physical simulator; and (iii) the physical simulator: fluid-dynamic equipment composed of pneumatic actuators and compliance tubes for the simulation of active cardiac chambers and great vessels. The physical simulator (iii) is based on results obtained from the electrical analog model (i) and on physiological parameters. |
Simulation and control of an electro-hydraulic actuated clutch | The basic function of any type of automotive transmission is to transfer the engine torque to the vehicle with the desired ratio smoothly and efficiently, and the most common control devices inside the transmission are clutches and hydraulic pistons. The automatic control of the clutch engagement plays a crucial role in Automatic Manual Transmission (AMT) vehicles, being seen as an increasingly important enabling technology for the automotive industry. It has a major role in automatic gear shifting and traction control for improved safety, drivability and comfort and, at the same time, for fuel economy. In this paper, a model for a wet clutch actuated by an electro-hydraulic valve used by Volkswagen for automatic transmissions is presented. Starting from the developed model, a simulator was implemented in Matlab/Simulink and the model was validated against data obtained from a test-bench provided by Continental Automotive Romania, which includes the Volkswagen wet clutch actuated by the electro-hydraulic valve. Then, a predictive control strategy is applied to the model of the electro-hydraulic actuated clutch with the aims of controlling the clutch piston displacement and decreasing the influence of the network-induced delays on the control performance. The simulation results obtained with the proposed method are compared with the ones obtained with different networked controllers, and it is shown that the strategy proposed in this paper can indeed improve the performance of the control system. |
The Evolution of Multiple Memory Systems | The existence of multiple memory systems has been proposed in a number of areas, including cognitive psychology, neuropsychology, and the study of animal learning and memory. We examine whether the existence of such multiple systems seems likely on evolutionary grounds. Multiple systems adapted to serve seemingly similar functions, which differ in important ways, are a common evolutionary outcome. The evolution of multiple memory systems requires memory systems to be specialized to such a degree that the functional problems each system handles cannot be handled by another system. We define this condition as functional incompatibility and show that it occurs for a number of the distinctions that have been proposed between memory systems. The distinction between memory for song and memory for spatial locations in birds, and between incremental habit formation and memory for unique episodes in humans and other primates provide examples. Not all memory systems are highly specialized in function, however, and the conditions under which memory systems could evolve to serve a wide range of functions are also discussed. |
AGA: Attribute-Guided Augmentation | We consider the problem of data augmentation, i.e., generating artificial samples to extend a given corpus of training data. Specifically, we propose attribute-guided augmentation (AGA), which learns a mapping that allows us to synthesize data such that an attribute of a synthesized sample is at a desired value or strength. This is particularly interesting in situations where little data with no attribute annotation is available for learning, but we have access to a large external corpus of heavily annotated samples. While prior work primarily augments in the space of images, we propose to perform augmentation in feature space instead. We implement our approach as a deep encoder-decoder architecture that learns the synthesis function in an end-to-end manner. We demonstrate the utility of our approach on the problems of (1) one-shot object recognition in a transfer-learning setting where we have no prior knowledge of the new classes, as well as (2) object-based one-shot scene recognition. As external data, we leverage 3D depth and pose information from the SUN RGB-D dataset. Our experiments show that attribute-guided augmentation of high-level CNN features considerably improves one-shot recognition performance on both problems. |
Droop control in LV-grids | Remote electrification with island supply systems, the increasing acceptance of the microgrid concept, and the penetration of the interconnected grid with DER and RES require the application of inverters and the development of new control algorithms. One promising approach is the implementation of conventional f/U-droops in the respective inverters, thus scaling the conventional grid control concept down to the low-voltage grid. Although the line parameters of low-voltage grids contradict the assumptions of the conventional concept, the applicability of this approach is outlined and the boundary conditions are derived. |
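For reference, the conventional f/U-droop laws mentioned in the abstract above, written in their usual textbook form; the symbols, sign convention, and gains below are generic placeholders rather than notation taken from the paper.

```latex
% Conventional frequency/voltage droop: each inverter lowers its frequency in
% proportion to its active-power output and its voltage in proportion to its
% reactive-power output (k_p, k_q are droop gains; f_0, U_0, P_0, Q_0 set points).
\begin{aligned}
  f &= f_0 - k_p \, (P - P_0) \\
  U &= U_0 - k_q \, (Q - Q_0)
\end{aligned}
```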
Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation | Image segmentation is a fundamental problem in biomedical image analysis. Recent advances in deep learning have achieved promising results on many biomedical image segmentation benchmarks. However, due to large variations in biomedical images (different modalities, image settings, objects, noise, etc.), utilizing deep learning on a new application usually requires a new set of training data. This can incur a great deal of annotation effort and cost, because only biomedical experts can annotate effectively, and often there are too many instances in images (e.g., cells) to annotate. In this paper, we aim to address the following question: With limited effort (e.g., time) for annotation, what instances should be annotated in order to attain the best performance? We present a deep active learning framework that combines fully convolutional network (FCN) and active learning to significantly reduce annotation effort by making judicious suggestions on the most effective annotation areas. We utilize uncertainty and similarity information provided by FCN and formulate a generalized version of the maximum set cover problem to determine the most representative and uncertain areas for annotation. Extensive experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node ultrasound image segmentation dataset show that, using annotation suggestions by our method, state-of-the-art segmentation performance can be achieved by using only 50% of training data. |
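A minimal sketch of the selection step outlined in the abstract above: from a pool of candidate annotation areas, keep the most uncertain ones and then greedily pick a representative subset, in the spirit of a generalized maximum set cover heuristic. The uncertainty scores, the similarity matrix, and all parameter values below are synthetic stand-ins, not quantities from the paper or its FCN.

```python
import numpy as np

def suggest_areas(uncertainty, similarity, k, top_uncertain=50):
    """uncertainty: (n,) scores; similarity: (n, n) matrix in [0, 1]; k: areas to pick."""
    # Step 1: keep only the most uncertain candidate areas.
    candidates = list(np.argsort(uncertainty)[-top_uncertain:])
    # Step 2: greedily maximize coverage of the whole pool, where a chosen area
    # "covers" every other area in proportion to its similarity to it.
    n = len(uncertainty)
    covered = np.zeros(n)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for c in candidates:
            if c in chosen:
                continue
            gain = np.maximum(covered, similarity[c]).sum() - covered.sum()
            if gain > best_gain:
                best, best_gain = c, gain
        chosen.append(best)
        covered = np.maximum(covered, similarity[best])
    return chosen

rng = np.random.default_rng(0)
unc = rng.random(200)                           # stand-in FCN uncertainty scores
feats = rng.random((200, 16))                   # stand-in FCN descriptors
norms = np.linalg.norm(feats, axis=1)
sim = feats @ feats.T / np.outer(norms, norms)  # cosine similarity in [0, 1]
print(suggest_areas(unc, sim, k=8))
```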
Curing regular expressions matching algorithms from insomnia, amnesia, and acalculia | The importance of network security has grown tremendously, and a collection of devices has been introduced to improve the security of a network. Network intrusion detection systems (NIDS) are among the most widely deployed such systems; popular NIDS use a collection of signatures of known security threats and viruses, which are used to scan each packet's payload. Today, signatures are often specified as regular expressions; thus the core of the NIDS comprises a regular expression parser, and such parsers are traditionally implemented as finite automata. Deterministic Finite Automata (DFA) are fast and therefore often desirable at high network link rates. DFAs for the signatures used in current security devices, however, require prohibitive amounts of memory, which limits their practical use.
In this paper, we argue that traditional DFA-based NIDS have three main limitations: first, they fail to exploit the fact that normal data streams rarely match any virus signature; second, DFAs are extremely inefficient in following multiple partially matching signatures and explode in size; and third, finite automata are incapable of efficiently keeping track of counts. We propose mechanisms to solve each of these drawbacks and demonstrate that our solutions can implement a NIDS much more securely and economically, and at the same time substantially improve the packet throughput. |
Incorporating Semantic and Geometric Priors in Deep Pose Regression | Deep learning has enabled recent breakthroughs across a wide spectrum of scene understanding tasks; however, its applicability to camera pose regression has been unfruitful due to the direct formulation that renders it incapable of encoding scene-specific constraints. In this work, we propose the VLocNet++ architecture that overcomes this limitation by simultaneously embedding geometric and semantic knowledge of the world into the pose regression network. We employ a multitask learning approach to exploit the inter-task relationship between learning semantics, regressing 6-DoF global pose, and odometry for the mutual benefit of each of these tasks. Furthermore, in order to enforce global consistency during camera pose regression, we propose the novel Geometric Consistency Loss function that leverages the predicted relative motion estimated from odometry to constrict the search space while training. Extensive experiments on the challenging Microsoft 7-Scenes benchmark and our DeepLoc dataset demonstrate that our approach exceeds the state of the art, outperforming local feature-based methods while simultaneously performing multiple tasks and exhibiting substantial robustness in challenging scenarios. |
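A hedged sketch of a geometric-consistency term in the spirit of the loss described above: the globally regressed pose at time t is penalized for disagreeing with the previous global pose composed with the predicted relative motion from odometry. The 2-D (x, y, theta) pose representation, the squared-error form, and the example numbers are simplifying assumptions, not the paper's 6-DoF formulation.

```python
import numpy as np

def compose(pose, delta):
    """pose, delta: (x, y, theta); apply delta expressed in the frame of `pose`."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([x + np.cos(th) * dx - np.sin(th) * dy,
                     y + np.sin(th) * dx + np.cos(th) * dy,
                     th + dth])

def geometric_consistency_loss(pose_t, pose_prev, odometry):
    """Penalize disagreement between the global pose at t and the previous pose composed with the predicted relative motion."""
    expected = compose(pose_prev, odometry)
    diff = pose_t - expected
    diff[2] = (diff[2] + np.pi) % (2 * np.pi) - np.pi   # wrap the heading error
    return float(diff @ diff)

prev = np.array([1.0, 2.0, 0.3])       # previous global pose estimate
odom = np.array([0.5, 0.0, 0.05])      # predicted relative motion
curr = np.array([1.49, 2.14, 0.36])    # current global pose estimate
print(geometric_consistency_loss(curr, prev, odom))
```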
Structuring Computer Generated Proofs | One of the main disadvantages of computer generated proofs of mathematical theorems is their complexity and incomprehensibility. Proof transformation procedures have been designed in order to state these proofs in a formalism that is more familiar to a human mathematician. But usually the essential idea of a proof is still not easily visible. We describe a procedure to transform proofs represented as abstract refutation graphs into natural deduction proofs. During this process topological properties of the refutation graphs can be exploited in order to obtain structured proofs. |
Effects of peer education intervention on HIV/AIDS related sexual behaviors of secondary school students in Addis Ababa, Ethiopia: a quasi-experimental study | BACKGROUND
Worldwide, about 50% of all new cases of HIV occur in youth between age 15 and 24 years. Studies in various sub-Saharan African countries show that both out of school and in school adolescents and youth are engaged in risky sexual behaviors. School-based health education has been a cornerstone of youth-focused HIV prevention efforts since the early 1990s. In addition, peer-based interventions have become a common method to effect important health-related behavior changes and address the HIV/AIDS pandemic. Thus, the aim of this study was to evaluate efficacy of peer education on changing HIV related risky sexual behaviors among school youth in Addis Ababa, Ethiopia.
METHODS
A quasi experimental study with peer education intervention was conducted in purposively selected four secondary schools (two secondary schools for the intervention and other two for the control group) in Addis Ababa, Ethiopia. Five hundred sixty students from randomly selected sections of grade 11 were assessed through anonymous questionnaires conducted in pre- and post-intervention periods. Pertinent data on socio-demographic and sexual behavior related factors were collected. The statistical packages used for data entry and analysis were epi-info version 3.5.4 and SPSS version 20.0 respectively. Chi-square test and multivariable logistic regressions were used for testing association between peer education intervention and sexual behaviors of students. In addition to testing association between dependent and independent variables, multi-variable analysis was employed to control for the effects of confounding variables.
RESULTS
When the pre- and post-intervention data of each group were compared, comprehensive knowledge of HIV (P = 0.004) and willingness to go for HIV counseling and testing (P = 0.01) showed significant differences among intervention-group students during the post-intervention period. Moreover, students in the intervention group were more likely to use condoms during the post-intervention period compared to students of the control group [AOR = 4.73; 95% CI: 1.40-16.0].
CONCLUSION
Despite the short follow-up period, students in the intervention group demonstrated positive changes in HIV-related comprehensive knowledge and showed greater willingness to go for HIV testing in the near future. Furthermore, positive changes in risky sexual behaviors were reported from the intervention group. Implementing peer education targeted at secondary schools, with appropriate amounts of resources allocated (money, manpower, materials and time), could play a significant role in preventing and controlling HIV/AIDS among school youth. |
A durable and energy efficient main memory using phase change memory technology | Using nonvolatile memories in the memory hierarchy has been investigated as a way to reduce its energy consumption, because nonvolatile memories consume zero leakage power in memory cells. One of the difficulties, however, is that the endurance of most nonvolatile memory technologies is much shorter than that of conventional SRAM and DRAM technologies. This has limited their usage to only the low levels of a memory hierarchy, e.g., disks, that are far from the CPU.
In this paper, we study the use of a new type of nonvolatile memory -- Phase Change Memory (PCM) -- as the main memory for a 3D stacked chip. The main challenges we face are the limited PCM endurance, longer access latencies, and higher dynamic power compared to the conventional DRAM technology. We propose techniques to extend the endurance of the PCM to an average of 13 years (for MLC PCM cells) to 22 years (for SLC PCM cells). We also study the design choices of implementing PCM to achieve the best tradeoff between energy and performance. Our design reduced the total energy of an already low-power DRAM main memory of the same capacity by 65%, and the energy-delay² product by 60%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency. |
Practice Pattern of Gastroenterologists for the Management of GERD Under the Minimal Influence of the Insurance Reimbursement Guideline: A Multicenter Prospective Observational Study | The objective of the study was to document practice pattern of gastroenterologists for the management of gastroesophageal reflux disease (GERD) under the minimal influence of the insurance reimbursement guideline. Data on management for 1,197 consecutive patients with typical GERD symptoms were prospectively collected during 16 weeks. In order to minimize the influence of reimbursement guideline on the use of proton pump inhibitors (PPIs), rabeprazole was used for the PPI treatment. A total of 861 patients (72%) underwent endoscopy before the start of treatment. PPIs were most commonly prescribed (87%). At the start of treatment, rabeprazole 20 mg daily was prescribed to 94% of the patients who received PPI treatment and 10 mg daily to the remaining 6%. At the third visits, rabeprazole 20 mg daily was prescribed to 70% of those who were followed and 10 mg daily for the remaining 30%. Continuous PPI treatment during the 16-week period was performed in 63% of the study patients. In conclusion, a full-dose PPI is preferred for the initial and maintenance treatment of GERD under the minimal influence of the insurance reimbursement guideline, which may reflect a high proportion of GERD patients requiring a long-term treatment of a full-dose PPI. |
Neural Combinatorial Optimization with Reinforcement Learning | This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. The best results are obtained when the network is first optimized on a training set and then refined on individual test graphs. Without any supervision and with minimal engineering, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. |
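A toy illustration of the training signal described above: sample tours from a stochastic policy, use negative tour length as the reward, and update the policy with REINFORCE. The policy here is a simple learned city-to-city preference matrix rather than the paper's pointer-style recurrent network, so it only illustrates the gradient estimator, not the architecture.

```python
import numpy as np

def tour_length(coords, tour):
    d = coords[tour] - coords[np.roll(tour, -1)]
    return np.sqrt((d ** 2).sum(axis=1)).sum()

def sample_tour_and_grad(logits, rng):
    """Sample a tour city by city; accumulate the gradient of its log-probability."""
    n = logits.shape[0]
    grad = np.zeros_like(logits)
    tour, visited = [0], {0}
    for _ in range(n - 1):
        cur = tour[-1]
        mask = np.array([j not in visited for j in range(n)])
        z = np.where(mask, logits[cur], -np.inf)
        p = np.exp(z - z.max())
        p /= p.sum()
        nxt = rng.choice(n, p=p)
        # d log p(nxt | state) / d logits[cur, j] = 1[j == nxt] - p[j] over unvisited j
        grad[cur, mask] -= p[mask]
        grad[cur, nxt] += 1.0
        tour.append(nxt)
        visited.add(nxt)
    return np.array(tour), grad

rng = np.random.default_rng(0)
coords = rng.random((10, 2))                  # one small random TSP instance
logits = np.zeros((10, 10))                   # toy policy parameters
baseline = None
for _ in range(2000):
    tour, grad = sample_tour_and_grad(logits, rng)
    reward = -tour_length(coords, tour)       # negative tour length as reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    logits += 0.05 * (reward - baseline) * grad   # REINFORCE ascent step
print("sampled tour length after training:", round(tour_length(coords, tour), 3))
```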
HIV-1 Vaccine-Induced T-Cell Responses Cluster in Epitope Hotspots that Differ from Those Induced in Natural Infection with HIV-1 | Several recent large clinical trials evaluated HIV vaccine candidates that were based on recombinant adenovirus serotype 5 (rAd-5) vectors expressing HIV-derived antigens. These vaccines primarily elicited T-cell responses, which are known to be critical for controlling HIV infection. In the current study, we present a meta-analysis of epitope mapping data from 177 participants in three clinical trials that tested two different HIV vaccines: MRKAd-5 HIV and VRC-HIVAD014-00VP. We characterized the population-level epitope responses in these trials by generating population-based epitope maps, and also designed such maps using a large cohort of 372 naturally infected individuals. We used these maps to address several questions: (1) Are vaccine-induced responses randomly distributed across vaccine inserts, or do they cluster into immunodominant epitope hotspots? (2) Are the immunodominance patterns observed for these two vaccines in three vaccine trials different from one another? (3) Do vaccine-induced hotspots overlap with epitope hotspots induced by chronic natural infection with HIV-1? (4) Do immunodominant hotspots target evolutionarily conserved regions of the HIV genome? (5) Can epitope prediction methods be used to identify these hotspots? We found that vaccine responses clustered into epitope hotspots in all three vaccine trials and some of these hotspots were not observed in chronic natural infection. We also found significant differences between the immunodominance patterns generated in each trial, even comparing two trials that tested the same vaccine in different populations. Some of the vaccine-induced immunodominant hotspots were located in highly variable regions of the HIV genome, and this was more evident for the MRKAd-5 HIV vaccine. Finally, we found that epitope prediction methods can partially predict the location of vaccine-induced epitope hotspots. Our findings have implications for vaccine design and suggest a framework by which different vaccine candidates can be compared in early phases of evaluation. |
American Photography and the American Dream | James Guimond's powerful study reveals how documentary photographers have expressed or contested the idea of the American Dream throughout the twentieth century. In Guimond's formulation, issues like growth, equality, and national identity came under the rubric of the Dream as it has been used to measure how well the nation is living up to its social and political ideals. A pathbreaking book, American Photography and the American Dream examines the most important photographers and developments in the documentary genre during this century. It encompasses the reform-era images of Frances Benjamin Johnston and Lewis Hine; the work of Farm Security Administration (FSA) photographers like Walker Evans and Dorothea Lange during both the 1930s and the 1940s after the FSA photography unit broke up; the American-Way-of-Life pictures published by Life, Look, and the United States Information Agency during the 1940s and 1950s; the iconoclastic images of William Klein, Diane Arbus, and Robert Frank; and the work of four photographers of the 1970s and 1980s: Bill Owens, Chauncey Hare, Susan Meiselas, and Michael Williamson. Guimond pays close attention to the specific historical circumstances in which the pictures were made, to the roles the photographers played in making their images, to their intentions, stated and unstated, and to the original contexts in which the images were published or exhibited. These images, he shows, are not merely pictures on museum walls but revelations that can help us understand how we as Americans have seen ourselves, one another, and the world around us. |
Hepatoprotective effects of the dual peroxisome proliferator-activated receptor alpha/delta agonist, GFT505, in rodent models of nonalcoholic fatty liver disease/nonalcoholic steatohepatitis. | UNLABELLED
Nonalcoholic fatty liver disease (NAFLD) covers a spectrum of liver damage ranging from simple steatosis to nonalcoholic steatohepatitis (NASH), fibrosis, and cirrhosis. To date, no pharmacological treatment is approved for NAFLD/NASH. Here, we report on preclinical and clinical data with GFT505, a novel dual peroxisome proliferator-activated receptor alpha/delta (PPAR-α/δ) agonist. In the rat, GFT505 concentrated in the liver with limited extrahepatic exposure and underwent extensive enterohepatic cycling. The efficacy of GFT505 was assessed in animal models of NAFLD/NASH and liver fibrosis (Western diet [WD]-fed human apolipoprotein E2 [hApoE2] transgenic mice, methionine- and choline-deficient diet-fed db/db mice, and CCl4 -induced fibrosis in rats). GFT505 demonstrated liver-protective effects on steatosis, inflammation, and fibrosis. In addition, GFT505 improved liver dysfunction markers, decreased hepatic lipid accumulation, and inhibited proinflammatory (interleukin-1 beta, tumor necrosis factor alpha, and F4/80) and profibrotic (transforming growth factor beta, tissue inhibitor of metalloproteinase 2, collagen type I, alpha 1, and collagen type I, alpha 2) gene expression. To determine the role of PPAR-α-independent mechanisms, the effect of GFT505 was assessed in hApoE2 knock-in/PPAR-α knockout mice. In these mice, GFT505 also prevented WD-induced liver steatosis and inflammation, indicating a contribution of PPAR-α-independent mechanisms. Finally, the effect of GFT505 on liver dysfunction markers was assessed in a combined analysis of four phase II clinical studies in metabolic syndrome patients. GFT505 treatment decreased plasma concentrations of alanine aminotransferase, gamma-glutamyl transpeptidase, and alkaline phosphatase.
CONCLUSION
The dual PPAR-α/δ agonist, GFT505, is a promising liver-targeted drug for treatment of NAFLD/NASH. In animals, its protective effects are mediated by both PPAR-α-dependent and -independent mechanisms. |
Head-of-bed elevation and early outcomes of gastric reflux, aspiration and pressure ulcers: a feasibility study. | BACKGROUND
Guidelines recommending head of bed (HOB) elevation greater than 30° to prevent ventilator-associated pneumonia conflict with guidelines to prevent pressure ulcers, which recommend HOB elevation less than 30°.
OBJECTIVES
To examine the feasibility of 45° HOB elevation and describe and compare the occurrence of reflux, aspiration, and pressure ulcer development at 30° and 45° HOB elevation.
METHODS
A randomized 2-day crossover trial was conducted. HOB angle was measured every 30 seconds. Oral and tracheal secretions were analyzed for pepsin presence. Skin was assessed for pressure ulcers. Wilcoxon signed rank tests and Kendall τ correlations were conducted.
RESULTS
Fifteen patients were enrolled; 11 completed both days. Patients were maintained at 30° (mean, 30°) for 96% of minutes and at 45° (mean, 39°) for 77% of minutes. No patients showed signs of pressure ulcers. A total of 188 oral secretions were obtained, of which 82 (44%) were pepsin-positive; 174 tracheal secretions were obtained, of which 108 (62%) were pepsin-positive. The median percentage of pepsin-positive oral secretions was not significantly higher (P = .11) at 30° elevation (54%) than at 45° elevation (20%). The median percentage of pepsin-positive tracheal secretions was not significantly higher (P = .37) at 30° elevation (71%) than at 45° elevation (67%). Deeper sedation correlated with increased reflux (P = .03).
CONCLUSIONS
HOB elevation greater than 30° is feasible and preferred to 30° for reducing oral secretion volume, reflux, and aspiration without pressure ulcer development in gastric-fed patients receiving mechanical ventilation. More deeply sedated patients may benefit from higher HOB elevations. |
Algorithm selection by rational metareasoning as a model of human strategy selection | Selecting the right algorithm is an important problem in computer science, because the algorithm often has to exploit the structure of the input to be efficient. The human mind faces the same challenge. Therefore, solutions to the algorithm selection problem can inspire models of human strategy selection and vice versa. Here, we view the algorithm selection problem as a special case of metareasoning and derive a solution that outperforms existing methods in sorting algorithm selection. We apply our theory to model how people choose between cognitive strategies and test its prediction in a behavioral experiment. We find that people quickly learn to adaptively choose between cognitive strategies. People’s choices in our experiment are consistent with our model but inconsistent with previous theories of human strategy selection. Rational metareasoning appears to be a promising framework for reverse-engineering how people select between cognitive strategies and translating the results into better solutions to the algorithm selection problem. |
Analysis of Hydroxy Fatty Acids from the Pollen of Brassica campestris L. var. oleifera DC. by UPLC-MS/MS | Ultraperformance liquid chromatography coupled with negative electrospray tandem mass spectrometry (UPLC-ESI-MS/MS) was used to determine 7 hydroxy fatty acids in the pollen of Brassica campestris L. var. oleifera DC. All the investigated hydroxy fatty acids showed strong deprotonated molecular ions [M-H](-), which underwent two major fragment pathways of the allyl scission and the β-fission of the alcoholic hydroxyl group. By comparison of their molecular ions and abundant fragment ions with those of reference compounds, they were tentatively assigned as 15,16-dihydroxy-9Z,12Z-octadecadienoic acid (1), 10,11,12-trihydroxy-(7Z,14Z)-heptadecadienoic acid (2), 7,15,16-trihydroxy-9Z,12Z-octadecadienoic acid (3), 15,16-dihydroxy-9Z,12Z-octadecadienoic acid (4), 15-hydroxy-6Z,9Z,12Z-octadecatrienoic acid (5), 15-hydroxy-9Z,12Z- octadecadienoic acid (6), and 15-hydroxy-12Z-octadecaenoic acid (7), respectively. Compounds 3, 5, and 7 are reported for the first time. |
Broadcast Encryption with Traitor Tracing | In this thesis, we look at definitions and black-box constructions with efficient instantiations for broadcast encryption and traitor tracing. We begin by looking at the security notions for broadcast encryption found in the literature. Since there is no easy way to compare these existing notions, we propose a framework of security notions for which we establish relationships. We then show where existing notions fit within this framework. Second, we present a black-box construction of a decentralized dynamic broadcast encryption scheme. This scheme does not rely on any trusted authorities, and new users can join at any time. It achieves the strongest security notion based on the security of its components and has an efficient instantiation that is fully secure under the DDH assumption in the standard model. Finally, we give a black-box construction of a message-based traitor tracing scheme, which allows tracing not only based on pirate decoders but also based on watermarks contained in a message. Our scheme is the first one to obtain the optimal ciphertext rate of 1 asymptotically. We then show that at today’s data rates, the scheme is already practical for standard choices of values. |
Strategic integration of knowledge management and customer relationship management | Purpose – The purpose of this paper is to introduce the concept of strategic integration of knowledge management (KM) and customer relationship management (CRM). The integration is a strategic issue that has strong ramifications for the long-term competitiveness of organizations. It is not limited to CRM; the concept can also be applied to supply chain management (SCM), product development management (PDM), enterprise resource planning (ERP) and retail network management (RNM), which offer different perspectives into knowledge management adoption. Design/methodology/approach – Through literature review and establishing new perspectives with examples, the components of knowledge management, customer relationship management, and strategic planning are amalgamated. Findings – Findings include crucial details in the various components of knowledge management, customer relationship management, and strategic planning, i.e. strategic planning process, value formula, intellectual capital measure, different levels of CRM and their core competencies. Practical implications – Although the strategic integration of knowledge management and customer relationship management is highly conceptual, a case example has been provided where the concept is applied. The same concept could also be applied to other industries that focus on customer service. Originality/value – The concept of strategic integration of knowledge management and customer relationship management is new. There are other areas, yet to be explored in terms of additional integration such as SCM, PDM, ERP, and RNM. The concept of integration would be useful for future research as well as for KM and CRM practitioners. |
Self-compassion and intuitive eating in college women: examining the contributions of distress tolerance and body image acceptance and action. | Self-compassion has been linked to higher levels of psychological well-being. The current study evaluated whether this effect also extends to a more adaptive food intake process. More specifically, this study investigated the relationship between self-compassion and intuitive eating among 322 college women. In order to further clarify the nature of this relationship this research additionally examined the indirect effects of self-compassion on intuitive eating through the pathways of distress tolerance and body image acceptance and action using both parametric and non-parametric bootstrap resampling analytic procedures. Results based on responses to the self-report measures of the constructs of interest indicated that individual differences in body image acceptance and action (β = .31, p < .001) but not distress tolerance (β = .00, p = .94) helped explain the relationship between self-compassion and intuitive eating. This effect was retained in a subsequent model adjusted for body mass index (BMI) and self-esteem (β = .19, p < .05). Results provide preliminary support for a complementary perspective on the role of acceptance in the context of intuitive eating to that of existing theory and research. The present findings also suggest the need for additional research as it relates to the development and fostering of self-compassion as well as the potential clinical implications of using acceptance-based interventions for college-aged women currently engaging in or who are at risk for disordered eating patterns. |
PCA for large data sets with parallel data summarization | Parallel processing is essential for large-scale analytics. Principal Component Analysis (PCA) is a well known model for dimensionality reduction in statistical analysis, which requires a demanding number of I/O and CPU operations. In this paper, we study how to compute PCA in parallel. We extend a previous sequential method to a highly parallel algorithm that can compute PCA in one pass on a large data set based on summarization matrices. We also study how to integrate our algorithm with a DBMS; our solution is based on a combination of parallel data set summarization via user-defined aggregations and calling the MKL parallel variant of the LAPACK library to solve Singular Value Decomposition (SVD) in RAM. Our algorithm is theoretically shown to achieve linear speedup, linear scalability on data size, quadratic time on dimensionality (but in RAM), spending most of the time on data set summarization, despite the fact that SVD has cubic time complexity on dimensionality. Experiments with large data sets on multicore CPUs show that our solution is much faster than the R statistical package as well as solving PCA with SQL queries. Benchmarking on multicore CPUs and a parallel DBMS running on multiple nodes confirms linear speedup and linear scalability. |
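A minimal sketch of the summarization idea described above: each partition of the data contributes the sufficient statistics n, L = sum(x) and Q = sum(x x^T); the summaries simply add across partitions, and PCA is then obtained from the small d x d matrix in RAM. The chunking below stands in for parallel workers; it is not the paper's DBMS/user-defined-aggregation or MKL/LAPACK implementation.

```python
import numpy as np

def summarize(chunk):
    """One pass over a partition: counts, linear sum, and quadratic sum."""
    n = chunk.shape[0]
    L = chunk.sum(axis=0)              # d-vector
    Q = chunk.T @ chunk                # d x d matrix
    return n, L, Q

def pca_from_summaries(summaries, k):
    """Combine partition summaries and solve the small eigenproblem in RAM."""
    n = sum(s[0] for s in summaries)
    L = sum(s[1] for s in summaries)
    Q = sum(s[2] for s in summaries)
    mean = L / n
    cov = Q / n - np.outer(mean, mean)          # covariance from the summaries
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigendecomposition of d x d matrix
    order = np.argsort(eigvals)[::-1][:k]
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 8)) @ rng.normal(size=(8, 8))
chunks = np.array_split(X, 16)                  # stand-in for parallel workers
vals, vecs = pca_from_summaries([summarize(c) for c in chunks], k=3)
print(vals)
```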
Expertise recommender: a flexible recommendation system and architecture | Locating the expertise necessary to solve difficult problems is a nuanced social and collaborative problem. In organizations, some people assist others in locating expertise by making referrals. People who make referrals fill key organizational roles that have been identified by CSCW and affiliated research. Expertise locating systems are not designed to replace people who fill these key organizational roles. Instead, expertise locating systems attempt to decrease workload and support people who have no other options. Recommendation systems are collaborative software that can be applied to expertise locating. This work describes a general recommendation architecture that is grounded in a field study of expertise locating. Our expertise recommendation system details the work necessary to fit expertise recommendation to a work setting. The architecture and implementation begin to tease apart the technical aspects of providing good recommendations from social and collaborative concerns. |
A randomised trial of single-dose radiotherapy to prevent procedure tract metastasis by malignant mesothelioma | A single 9-MeV electron treatment, following invasive thoracic procedures in patients with malignant pleural mesothelioma, was examined. In all, 58 sites were randomised to prophylactic radiotherapy or not. There was no statistically significant difference in tract metastasis. A single 10-Gy treatment with 9-MeV electrons appears ineffective. |
Classification with Hybrid Generative/Discriminative Models | Although discriminatively trained classifiers are usually more accurate when labeled training data is abundant, previous work has shown that when training data is limited, generative classifiers can out-perform them. This paper describes a hybrid model in which a high-dimensional subset of the parameters are trained to maximize generative likelihood, and another, small, subset of parameters are discriminatively trained to maximize conditional likelihood. We give a sample complexity bound showing that in order to fit the discriminative parameters well, the number of training examples required depends only on the logarithm of the number of feature occurrences and feature set size. Experimental results show that hybrid models can provide lower test error and can produce better accuracy/coverage curves than either their purely generative or purely discriminative counterparts. We also discuss several advantages of hybrid models, and advocate further work in this area. |
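One simple way to realize the generative/discriminative split described above, offered as a hedged sketch rather than the paper's model: the bulk of the parameters (per-feature class-conditional statistics) are fit generatively, and only a small set of weights over the resulting per-class scores is trained discriminatively. The dataset and split sizes are arbitrary illustrations.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Generative part: many parameters (per-class means/variances), fit on a small labeled set.
gen = GaussianNB().fit(X_tr[:200], y_tr[:200])
ll_tr = gen.predict_log_proba(X_tr[:200])     # low-dimensional per-class scores
ll_te = gen.predict_log_proba(X_te)

# Discriminative part: only a handful of weights over those per-class scores.
disc = LogisticRegression(max_iter=1000).fit(ll_tr, y_tr[:200])
print("generative only   :", gen.score(X_te, y_te))
print("hybrid reweighting:", disc.score(ll_te, y_te))
```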
Facial expression recognition based on Local Binary Patterns: A comprehensive study | Automatic facial expression analysis is an interesting and challenging problem, and impacts important applications in many areas such as human–computer interaction and data-driven animation. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. In this paper, we empirically evaluate facial representation based on statistical local features, Local Binary Patterns, for person-independent facial expression recognition. Different machine learning methods are systematically examined on several databases. Extensive experiments illustrate that LBP features are effective and efficient for facial expression recognition. We further formulate Boosted-LBP to extract the most discriminant LBP features, and the best recognition performance is obtained by using Support Vector Machine classifiers with Boosted-LBP features. Moreover, we investigate LBP features for low-resolution facial expression recognition, which is a critical problem but seldom addressed in the existing work. We observe in our experiments that LBP features perform stably and robustly over a useful range of low resolutions of face images, and yield promising performance in compressed low-resolution video sequences captured in real-world environments. |
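A minimal sketch of the basic descriptor behind the study above: the 8-neighbour Local Binary Pattern code of each pixel, pooled into a 256-bin histogram that could then be fed to a classifier such as an SVM. The multi-scale, uniform-pattern, and Boosted-LBP variants examined in the paper are not reproduced here, and the random image is a placeholder for a face region.

```python
import numpy as np

def lbp_histogram(img):
    """img: 2-D grayscale array; returns a normalized 256-bin LBP histogram."""
    c = img[1:-1, 1:-1]
    # 8 neighbours of each interior pixel, ordered clockwise from the top-left.
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:,   2:],   img[2:,   1:-1],
                  img[2:,   0:-2], img[1:-1, 0:-2]]
    code = np.zeros(c.shape, dtype=int)
    for bit, nb in enumerate(neighbours):
        # Set the bit when the neighbour is at least as bright as the center.
        code |= (nb >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

face = np.random.default_rng(0).integers(0, 256, size=(64, 64))  # placeholder image
print(lbp_histogram(face)[:8])
```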
Web usability: A user-centered design approach | |
A wideband substrate integrated waveguide slotted array antenna with multimode and multidirectional characteristics | This paper presents the design of a substrate integrated waveguide slotted array antenna (SIW-SAA) with wideband, multimode and multidirectional characteristics. The slotted array consists of three transverse slots in the top wall of SIW which are inclined at 45° with respect to a single longitudinal slot. The number of slots and the position of the slotted array determine the impedance bandwidth. It is found from the simulated and measured results that the impedance bandwidth of the antenna is 92% (10 GHz–27 GHz) for S11 ≤ −10 dB. Multiple modes get excited in the SIW-SAA resulting in multidirectional characteristics and an equivalent circuit model is used to explain the physics of this phenomenon. In the X band, the lowest mode (TE10) of SIW exists and the combination of slotted array and SIW is responsible for unidirectional radiation. Bidirectional and omnidirectional patterns are observed in the elevation plane for Ku and K bands, respectively due to higher order modes that are excited in the bottom wall of SIW. With these multidirectional characteristics, the proposed antenna is an attractive choice in airborne control systems, long distance communications and broadcasting. |
Automatic Feature Engineering for Answer Selection and Extraction | This paper proposes a framework for automatically engineering features for two important tasks of question answering: answer sentence selection and answer extraction. We represent question and answer sentence pairs with linguistic structures enriched by semantic information, where the latter is produced by automatic classifiers, e.g., question classifier and Named Entity Recognizer. Tree kernels applied to such structures enable a simple way to generate highly discriminative structural features that combine syntactic and semantic information encoded in the input trees. We conduct experiments on a public benchmark from TREC to compare with previous systems for answer sentence selection and answer extraction. The results show that our models greatly improve on the state of the art, e.g., up to 22% on F1 (relative improvement) for answer extraction, while using no additional resources and no manual feature engineering. |
Development of PZT and PZN-PT Based Unimorph Actuators for Micromechanical Flapping Mechanisms | This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT-5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180° stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators. |
Knowledge management capability, customer relationship management, and service quality | Shu-Mei Tseng (2016), "Knowledge management capability, customer relationship management, and service quality", Journal of Enterprise Information Management, Vol. 29, Iss. 2. http://dx.doi.org/10.1108/JEIM-04-2014-0042 |
Control Barrier Function Based Quadratic Programs with Application to Automotive Safety Systems | Safety critical systems involve the tight coupling between potentially conflicting control objectives and safety constraints. As a means of creating a formal framework for controlling systems of this form, and with a view toward automotive applications, this paper develops a methodology that allows safety conditions—expressed as control barrier functions—to be unified with performance objectives—expressed as control Lyapunov functions—in the context of real-time optimization-based controllers. Safety conditions are specified in terms of forward invariance of a set, and are verified via two novel generalizations of barrier functions; in each case, the existence of a barrier function satisfying Lyapunov-like conditions implies forward invariance of the set, and the relationship between these two classes of barrier functions is characterized. In addition, each of these formulations yields a notion of control barrier function (CBF), providing inequality constraints in the control input that, when satisfied, again imply forward invariance of the set. Through these constructions, CBFs can naturally be unified with control Lyapunov functions (CLFs) in the context of a quadratic program (QP); this allows for the achievement of control objectives (represented by CLFs) subject to conditions on the admissible states of the system (represented by CBFs). The mediation of safety and performance through a QP is demonstrated on adaptive cruise control and lane keeping, two automotive control problems that present both safety and performance considerations coupled with actuator bounds. |
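An illustrative CLF-CBF quadratic program in the spirit of the construction above, on a scalar single integrator x' = u: a relaxed CLF condition drives x toward a target while a hard CBF condition keeps x >= 0 (forward invariance of the safe set). The toy dynamics, gains, relaxation weight, and use of a generic solver are assumptions for the sketch, not the paper's adaptive cruise control or lane keeping setup.

```python
import numpy as np
from scipy.optimize import minimize

def clf_cbf_control(x, x_target, gamma=1.0, alpha=1.0, relax_weight=100.0):
    V = (x - x_target) ** 2                    # control Lyapunov function
    h = x                                      # control barrier function (safe set: x >= 0)

    def cost(z):                               # z = [u, delta]
        u, delta = z
        return u ** 2 + relax_weight * delta ** 2

    cons = [
        # CLF condition (relaxed by delta):  dV/dt <= -gamma * V + delta
        {"type": "ineq", "fun": lambda z: -2 * (x - x_target) * z[0] - gamma * V + z[1]},
        # CBF condition (hard):              dh/dt >= -alpha * h
        {"type": "ineq", "fun": lambda z: z[0] + alpha * h},
    ]
    return minimize(cost, x0=[0.0, 0.0], method="SLSQP", constraints=cons).x[0]

x, dt = 2.0, 0.05
for _ in range(100):                           # the target (-1) is unsafe; the barrier keeps x >= 0
    x += dt * clf_cbf_control(x, x_target=-1.0)
print("final state (settles near 0, not at -1):", round(x, 3))
```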
Hemorrhagic complications in a phase II study of sunitinib in patients with nasopharyngeal carcinoma who have previously received high-dose radiation. | BACKGROUND
We aimed to evaluate the safety and efficacy of single-agent sunitinib in nasopharyngeal carcinoma (NPC).
METHODS
Eligible patients had progressive disease after prior platinum-based chemotherapy. Sunitinib was given as continuous once-daily dosing of 37.5 mg in 4-week cycles until progression.
RESULTS
Thirteen patients were enrolled. Recruitment was stopped after two patients died of hemorrhagic events. All patients had previously received curative radiotherapy (RT) to nasopharynx/neck (including nine patients who had chemoradiotherapy). Patients received a median of three cycles of sunitinib. One patient was still on sunitinib with stable disease after 24 cycles. Hemorrhagic events occurred in nine patients (64%), including epistaxis in six, hemoptyses in three and hematemesis in two patients. Prior RT to thorax was significantly associated with hemoptyses (P = 0.03). Two patients with local tumor invasion into the carotid sheath developed fatal epistaxis/hematemesis within the first cycle of sunitinib, likely due to internal carotid blowout after tumor shrinkage.
CONCLUSIONS
Sunitinib demonstrated modest clinical activity in heavily pretreated NPC patients. However, the high incidence of hemorrhage from the upper aerodigestive tract in NPC patients who received prior high-dose RT to the region is of concern. Direct vascular invasion by tumors appeared to increase the risk of serious bleeding. |
Parametric Classification Techniques for Theater Ballistic Missile Defense | One of the fundamental challenges in theater ballistic missile defense (TBMD) is ascertaining which element in the threat complex is the lethal object. To classify the lethal object and other objects in the complex, it is necessary to model how these objects will appear to TBMD sensors. This article describes a generic parametric approach to building classifier models. The process is illustrated with an example of building a classifier for an infrared sensor. The formulas for probability of classification error are derived. The probability of error for a proposed classification scheme is vital to assessing its efficacy in system trade studies. |
A systematic review of complementary and alternative medicine interventions for the management of cancer-related fatigue. | Fatigue, experienced by patients during and following cancer treatment, is a significant clinical problem. It is a prevalent and distressing symptom, yet pharmacological interventions are used little and confer limited benefit for patients. However, many cancer patients use some form of complementary and alternative medicine (CAM), and some evidence suggests it may relieve fatigue. A systematic review was conducted to appraise the effectiveness of CAM interventions in ameliorating cancer-related fatigue. Systematic searches of biomedical, nursing, and specialist CAM databases were conducted, including Medline, Embase, and AMED. Included papers described interventions classified as CAM by the National Centre of Complementary and Alternative Medicine and evaluated through randomized controlled trial (RCT) or quasi-experimental design. Twenty studies were eligible for the review, of which 15 were RCTs. Forms of CAM interventions examined included acupuncture, massage, yoga, and relaxation training. The review identified some limited evidence suggesting that hypnosis and ginseng may prevent rises in cancer-related fatigue in people undergoing treatment for cancer, and that acupuncture and biofield healing may reduce cancer-related fatigue following cancer treatments. Evidence to date suggests that multivitamins are ineffective at reducing cancer-related fatigue. However, trials incorporated within the review varied greatly in quality; most were methodologically weak and at high risk of bias. Consequently, there is currently insufficient evidence to conclude with certainty the effectiveness or otherwise of CAM in reducing cancer-related fatigue. The design and methods employed in future trials of CAM should be more rigorous; increasing the strength of evidence should be a priority. |
A Chebyshev-Davidson Algorithm for Large Symmetric Eigenproblems | A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach. |
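A sketch of the Chebyshev filtering step that replaces the Davidson correction equation in the approach above: a degree-m Chebyshev polynomial mapped onto the unwanted part of the spectrum [a, b] damps those eigencomponents and thereby amplifies the wanted ones (here, the smallest eigenpair). This is only the filter applied to a single vector, not the full Chebyshev-Davidson subspace iteration, and the test matrix is a random symmetric matrix.

```python
import numpy as np

def chebyshev_filter(A, v, degree, a, b):
    """Apply T_m((2A - (a+b)I) / (b-a)) to v via the three-term recurrence."""
    e = (b - a) / 2.0
    c = (b + a) / 2.0
    y_prev = v
    y = (A @ v - c * v) / e
    for _ in range(2, degree + 1):
        y_next = 2.0 * (A @ y - c * y) / e - y_prev
        y_prev, y = y, y_next
    return y / np.linalg.norm(y)

rng = np.random.default_rng(0)
n = 400
A = rng.normal(size=(n, n)); A = (A + A.T) / 2           # random symmetric test matrix
evals, evecs = np.linalg.eigh(A)
a, b = evals[1], evals[-1]                                # interval to damp (all but the smallest)

v = rng.normal(size=n); v /= np.linalg.norm(v)
filtered = chebyshev_filter(A, v, degree=20, a=a, b=b)
print("overlap with smallest eigenvector, before:", abs(evecs[:, 0] @ v))
print("overlap with smallest eigenvector, after :", abs(evecs[:, 0] @ filtered))
```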
An assessment of the impact of the NHS Health Technology Assessment Programme. | OBJECTIVES
To consider how the impact of the NHS Health Technology Assessment (HTA) Programme should be measured. To determine what models are available and their strengths and weaknesses. To assess the impact of the first 10 years of the NHS HTA programme from its inception in 1993 to June 2003 and to identify the factors associated with HTA research that are making an impact.
DATA SOURCES
Main electronic databases from 1990 to June 2005. The documentation of the National Coordinating Centre for Health Technology Assessment (NCCHTA). Questionnaires to eligible researchers. Interviews with lead investigators. Case study documentation.
REVIEW METHODS
A literature review of research programmes was carried out, the work of the NCCHTA was reviewed, lead researchers were surveyed and 16 detailed case studies were undertaken. Each case study was written up using the payback framework. A cross-case analysis informed the analysis of factors associated with achieving payback. Each case study was scored for impact before and after the interview to assess the gain in information due to the interview. The draft write-up of each study was checked with each respondent for accuracy and changed if necessary.
RESULTS
The literature review identified a highly diverse literature but confirmed that the 'payback' framework pioneered by Buxton and Hanney was the most widely used and most appropriate model available. The review also confirmed that impact on knowledge generation was more easily quantified than that on policy, behaviour or especially health gain. The review of the included studies indicated a higher level of impact on policy than is often assumed to occur. The survey showed that data pertinent to payback exist and can be collected. The completed questionnaires showed that the HTA Programme had considerable impact in terms of publications, dissemination, policy and behaviour. It also showed, as expected, that different parts of the Programme had different impacts. The Technology Assessment Reports (TARs) for the National Institute for Health and Clinical Excellence (NICE) had the clearest impact on policy in the form of NICE guidance. Mean publications per project were 2.93 (1.98 excluding the monographs), above the level reported for other programmes. The case studies revealed the large diversity in the levels and forms of impacts and the ways in which they arise. All the NICE TARs and more than half of the other case studies had some impact on policy making at the national level whether through NICE, the National Screening Committee, the National Service Frameworks, professional bodies or the Department of Health. This underlines the importance of having a customer or 'receptor' body. A few case studies had very considerable impact in terms of knowledge production and in informing national and international policies. In some of these the principal investigator had prior expertise and/or a research record in the topic. The case studies confirmed the questionnaire responses but also showed how some projects led to further research.
CONCLUSIONS
This study concluded that the HTA Programme has had considerable impact in terms of knowledge generation and perceived impact on policy and to some extent on practice. This high impact may have resulted partly from the HTA Programme's objectives, in that topics tend to be of relevance to the NHS and have policy customers. The required use of scientific methods, notably systematic reviews and trials, coupled with strict peer reviewing, may have helped projects publish in high-quality peer-reviewed journals. Further research should cover more detailed, comprehensive case studies, as well as enhancement of the 'payback framework'. A project that collated health research impact studies in an ongoing manner and analysed them in a consistent fashion would also be valuable. |
Tramadol/paracetamol combination tablet for postoperative pain following ambulatory hand surgery: a double-blind, double-dummy, randomized, parallel-group trial | This randomized, double-blind, double-dummy, multicenter trial compared efficacy and safety of tramadol HCL 37.5 mg/paracetamol 325 mg combination tablet with tramadol HCL 50 mg capsule in the treatment of postoperative pain following ambulatory hand surgery with iv regional anesthesia. Patients received trial medication at admission, immediately after surgery, and every 6 hours after discharge until midnight of the first postoperative day. Analgesic efficacy was assessed by patients (n = 128 in each group, full analysis set) and recorded in a diary on the evening of surgery day and of the first postoperative day. They also documented the occurrence of adverse events. By the end of the first postoperative day, the proportion of treatment responders based on treatment satisfaction (primary efficacy variable) was comparable between the groups (78.1% combination, 71.9% tramadol; P = 0.24) and mean pain intensity (rated on a numerical scale from 0 = no pain to 10 = worst imaginable pain) had been reduced to 1.7 ± 2.0 for both groups. Under both treatments, twice as many patients experienced no pain (score = 0) on the first postoperative day compared to the day of surgery (35.9% vs 16.4% for tramadol/paracetamol and 36.7% vs 18% for tramadol treatment). Rescue medication leading to withdrawal (diclofenac 50 mg) was required by 17.2% patients with tramadol/paracetamol and 13.3% with tramadol. Adverse events (mainly nausea, dizziness, somnolence, vomiting, and increased sweating) occurred less frequently in patients under combination treatment (P = 0.004). Tramadol/paracetamol combination tablets provided comparable analgesic efficacy with a better safety profile to tramadol capsules in patients experiencing postoperative pain following ambulatory hand surgery. |
Architectural considerations for playback of quality adaptive video over the Internet | Lack of QoS support in the Internet has not prevented rapid growth of streaming applications. However many of these applications do not perform congestion control effectively. Thus there is significant concern about the effects on co-existing well-behaved traffic and the potential for congestion collapse. In addition, most such applications are unable to perform quality adaptation on-the-fly as available bandwidth changes during a session. This paper aims to provide some architectural insights on the design of video playback applications in the Internet. We present fundamental design principles for Internet applications and identify end-to-end congestion control, quality adaptation and error control as the three major building blocks for Internet video playback applications. We discuss the design space for each of these components, and within that space, present an end-to-end architecture suited for playback of layered-encoded stored video streams. Our architecture reconciles congestion control and quality adaptation which occur on different timescales. It exhibits a TCP-friendly behavior by adopting the RAP protocol for end-to-end congestion control. Additionally, it uses a layered framework for quality adaptation with selective retransmission to maximize the quality of the delivered stream as available bandwidth changes. We argue that the architecture can be generalized by replacing the suggested mechanism for each component by another from the same design space as long as all components remain compatible. |
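A toy illustration of the layered quality-adaptation idea discussed above: the sender adds an enhancement layer when the congestion-controlled rate comfortably exceeds the cost of the next layer, and drops one when it falls below the cost of the current set. The layer rates, hysteresis margin, and bandwidth trace are assumptions for the sketch, not parameters of the paper's RAP-based architecture.

```python
LAYER_RATES = [128, 128, 256, 512]     # kbps per cumulative layer (base layer first)

def adapt_layers(active, available_kbps, margin=1.2):
    """Return the number of layers to send given the congestion-controlled rate."""
    need_current = sum(LAYER_RATES[:active])
    need_next = sum(LAYER_RATES[:active + 1]) if active < len(LAYER_RATES) else float("inf")
    if available_kbps >= margin * need_next:
        return active + 1                       # enough headroom: add a layer
    if available_kbps < need_current and active > 1:
        return active - 1                       # below current cost: drop a layer
    return active

bandwidth_trace = [300, 350, 700, 1200, 1100, 500, 260, 900]
layers = 1
for bw in bandwidth_trace:
    layers = adapt_layers(layers, bw)
    print(f"available={bw:4d} kbps -> sending {layers} layer(s)")
```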
Environmental Scanning-The Impact of the Stakeholder Concept | This paper discusses the advantages of the use of the stakeholder framework as a basis for focusing an organization's environmental scanning effort. Arising from the discussion, a contingency model for environmental scanning is developed to relate the focus and method used for environmental scanning to the dynamism of the environment and the power of the stakeholder relative to the organization. Steps for implementing the environmental scanning system are then discussed. Increasing environmental turbulence in the 1950's led practicing managers and theoreticians to the realization that organizations could no longer be regarded as closed systems--organizations had to be regarded as open systems. The open system concept of organizations necessarily led to the inclusion of environmental considerations in the planning process. In so doing, strategic planning came into being (Ansoff & Thanheiser, 1978). The mere fact that environmental considerations had to be included in the planning process necessitated the development of a process whereby information about the environment could be collected, analyzed, and acted upon. The concept of environmental scanning thus came into being. But concepts are paper tigers--it is practice that counts. Aguilar (1967) recognized four modes of environmental scanning--undirected viewing, conditioned viewing, informal search, and formal search. This did not help the manager to determine what to scan, nor how to scan it. And some eleven years later, Pfeffer remarked that "... the allusion to the environment is frequently pro forma and seldom follows up the open systems perspective with anything remotely useful from a managerial or theoretical perspective... Scanning systems face two problems: (1) how to register needed information, and (2) how to act upon the information" (Pfeffer & Salancik, 1978). King overcame these problems to some extent by analyzing the type of information needed for strategic planning in order to derive some sort of focus for the scanner (King & Cleland, 1978). Thus, he identified six environmental information sub-systems--the image, the customer, the potential customer, the competitive information, the regulatory information, and the critical intelligence information sub-systems. However, the focus and the method to be used in the environmental scanning process were still ill-defined. Subsequently, Fahey developed a typology of scanning processes based on the impetus for scanning (Fahey, King, Vadake, 1981). Thus crisis-initiated scanning gave rise to irregular scanning. Scanning initiated by the need for problem solving was to be done on a periodic basis, while scanning for opportunity finding and problem avoidance was to be done continuously. Clearly at this juncture there is still no method of
Classification and Phylogenetic Analysis of African Ternary Rhythm Timelines | A combinatorial classification and a phylogenetic analysis of the ten 12/8 time, seven-stroke bell rhythm timelines in African and Afro-American music are presented. New methods for rhythm classification are proposed based on measures of rhythmic oddity and off-beatness. These combinatorial classifications reveal several new uniqueness properties of the Bembé bell pattern that may explain its widespread popularity and preference among the other patterns in this class. A new distance measure called the swap-distance is introduced to measure the non-similarity of two rhythms that have the same number of strokes. A swap in a sequence of notes and rests of equal duration is the location interchange of a note and a rest that are adjacent in the sequence. The swap distance between two rhythms is defined as the minimum number of swaps required to transform one rhythm to the other. A phylogenetic analysis using Splits Graphs with the swap distance shows that each of the ten bell patterns can be derived from one of two “canonical” patterns with at most four swap operations, or from one with at most five swap operations. Furthermore, the phylogenetic analysis suggests that for these ten bell patterns there are no “ancestral” rhythms not contained in this set. |
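The swap distance defined above can be computed directly for two binary timelines with the same number of onsets: since each swap exchanges an adjacent note and rest (moving one onset by one pulse), matching the onsets in order and summing their position differences gives the minimum number of swaps. The sketch below illustrates this under that reading of the definition; the example patterns are illustrative, not taken from the paper's data set.

```python
# Minimal sketch of the swap distance between two binary rhythm timelines of the
# same length and the same number of onsets (onsets marked with 1). Each swap
# exchanges an adjacent note and rest, i.e. moves one onset by one pulse, so the
# minimum number of swaps is the sum of distances between onsets matched in order.

def swap_distance(rhythm_a, rhythm_b):
    onsets_a = [i for i, x in enumerate(rhythm_a) if x == 1]
    onsets_b = [i for i, x in enumerate(rhythm_b) if x == 1]
    if len(onsets_a) != len(onsets_b):
        raise ValueError("swap distance requires the same number of onsets")
    return sum(abs(a - b) for a, b in zip(onsets_a, onsets_b))

# Example: a 12/8, seven-stroke pattern vs. a hypothetical variant with one
# stroke shifted by a single pulse.
pattern = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
variant = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(swap_distance(pattern, variant))  # -> 1
```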
A Survey of Text Similarity Approaches | Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity by partitioning them into three approaches: String-based, Corpus-based, and Knowledge-based similarities. Furthermore, samples of combinations of these similarities are presented.
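As a concrete illustration of two of the surveyed families, the sketch below computes a set-based string similarity (Jaccard over word sets) and a vector-space cosine over term counts. This is a generic example written for this summary, not code from the survey itself.

```python
# Two simple text-similarity measures: Jaccard over word sets (string/set-based)
# and cosine similarity over raw term counts (vector-space). Tokenization here
# is a naive whitespace split, chosen only for illustration.
from collections import Counter
import math

def jaccard(s1, s2):
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def cosine(s1, s2):
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[t] * c2[t] for t in c1)
    norm = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

print(jaccard("the cat sat on the mat", "a cat sat on a mat"))
print(cosine("the cat sat on the mat", "a cat sat on a mat"))
```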
Mechanisms of suicidal erythrocyte death. | Erythrocyte injury such as osmotic shock, oxidative stress or energy depletion stimulates the formation of prostaglandin E2 through activation of cyclooxygenase which in turn activates a Ca2+ permeable cation channel. Increasing cytosolic Ca2+ concentrations activate Ca2+ sensitive K+ channels leading to hyperpolarization, subsequent loss of KCl and (further) cell shrinkage. Ca2+ further stimulates a scramblase shifting phosphatidylserine from the inner to the outer cell membrane. The scramblase is sensitized for the effects of Ca2+ by ceramide which is formed by a sphingomyelinase following several stressors including osmotic shock. The sphingomyelinase is activated by platelet activating factor PAF which is released by activation of phospholipase A2. Phosphatidylserine at the erythrocyte surface is recognised by macrophages which engulf and degrade the affected cells. Moreover, phosphatidylserine exposing erythrocytes may adhere to the vascular wall and thus interfere with microcirculation. Erythrocyte shrinkage and phosphatidylserine exposure ('eryptosis') mimic features of apoptosis in nucleated cells which however, involves several mechanisms lacking in erythrocytes. In kidney medulla, exposure time is usually too short to induce eryptosis despite high osmolarity. Beyond that high Cl- concentrations inhibit the cation channel and high urea concentrations the sphingomyelinase. Eryptosis is inhibited by erythropoietin which thus extends the life span of circulating erythrocytes. Several conditions trigger premature eryptosis thus favouring the development of anemia. On the other hand, eryptosis may be a mechanism of defective erythrocytes to escape hemolysis. Beyond their significance for erythrocyte survival and death the mechanisms involved in 'eryptosis' may similarly contribute to apoptosis of nucleated cells. |
Comparative Constitutional Federalism: Europe and America | Foreword by Lord Mackenzie-Stuart Preface The First Phases of American Federalism by Jack N. Rakove Economic Integration and Interregional Migration in the United States Federal System by Jonathan D. Varat Preservation of Cultural Diversity in a Federal System: The Role of the Regions by Giandomenico Majone Putting Up and Putting Down: Tolerance Reconsidered by Martha Minow Protecting Human Rights in a Federal System by A. E. Dick Howard Conclusion by Mark Tushnet Bibliographical Essay References Index |
High Sensitivity MEMS Strain Sensor: Design and Simulation | In this article, we report on the new design of a miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single crystal silicon. Employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs different levels of signal amplification, including the geometric, material, and electronic levels. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to resistance change, which can be transformed to bridge imbalance voltage. An analog output with high sensitivity (0.03 mV/με), high absolute resolution (1 με) and low power consumption (100 μA) over a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50 °C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing applications under harsh environmental conditions. Moreover, the sensor has been designed and verified, and can easily be modified to measure other quantities such as force and torque. In this work, the sensor design is achieved using the Finite Element Method (FEM) with the application of piezoresistivity theory. The design process and the microfabrication process flow to prototype the design are presented.
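The strain-to-resistance-to-voltage chain mentioned above can be illustrated with a standard quarter-bridge calculation. The gauge factor and excitation voltage in the sketch are assumed values chosen for illustration, not parameters reported for this sensor.

```python
# Quarter-bridge Wheatstone conversion from strain to imbalance voltage:
# delta_R/R = GF * strain, and V_out ~= V_exc * (delta_R/R) / 4 for one active arm.
# The gauge factor and excitation voltage below are illustrative assumptions.
GAUGE_FACTOR = 100.0   # doped single-crystal Si piezoresistor (assumed)
V_EXCITATION = 1.2     # bridge excitation in volts (assumed)

def bridge_output_mV(strain_microstrain):
    strain = strain_microstrain * 1e-6
    delta_R_over_R = GAUGE_FACTOR * strain           # piezoresistive change
    return 1e3 * V_EXCITATION * delta_R_over_R / 4   # quarter-bridge output in mV

for eps in (1, 1000, 4000):   # resolution, mid-range, full-scale (microstrain)
    print(eps, "ustrain ->", round(bridge_output_mV(eps), 3), "mV")
```

With these assumed values the sensitivity works out to 0.03 mV per microstrain, i.e. the same order as the figure quoted in the abstract, which is why they were chosen for the example.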
A survey on ontology mapping | Ontology is increasingly seen as a key factor for enabling interoperability across heterogeneous systems and semantic web applications. Ontology mapping is required for combining distributed and heterogeneous ontologies. Developing such ontology mapping has been a core issue of recent ontology research. This paper presents ontology mapping categories, describes the characteristics of each category, compares these characteristics, and surveys tools, systems, and related work based on each category of ontology mapping. We believe this paper provides readers with a comprehensive understanding of ontology mapping and points to various research topics about the specific roles of ontology mapping. |
Abstract Bodies: Sixties Sculpture in the Expanded Field of Gender | David Getsy, Abstract Bodies: Sixties Sculpture in the Expanded Field of Gender, New Haven, CT, Yale University Press, 2015, 256 pp., 50 colour + 50 b/w illustrations, hardcover, £35.55. ISBN 978-0-30019-675-7. David Getsy's Abstract Bodies: Sixties Sculpture in the Expanded Field of Gender presents a rigorous art-historical account and a theoretically bold reinterpretation of North American abstract sculpture from the 1960s. This richly illustrated volume comprises four main chapters, each one focusing on selected case studies by canonical artists, including David Smith, John Chamberlain, Nancy Grossman and Dan Flavin. The book's conclusion brings into the discussion post-1980s sculpture by two contemporary artists, Heather Cassils and Scott Burton, successfully carrying the author's theoretical and political concerns on the category of the 'transgender' into their contemporary context. Getsy's reading is set against the grain of the majority of the critical literature on abstract sculpture that typically pits abstraction against figuration as two opposing and mutually exclusive formal categories. Instead, the author undertakes an ambitious interdisciplinary analysis that recasts celebrated examples of abstract sculpture in bodily terms, not by seeking out any formal resemblance with the human figure, or even by evoking the body as metaphor, but by mobilizing the category of transgender, 'an umbrella term used to refer to all individuals who live outside of normative sex/gender relations', as a potent interpretative framework (p. xv). More specifically, Abstract Bodies explores how 'the emerging public recognition of the presence of transformable genders and bodies in the 1960s correlate[s] with the sculpture's contentious relationship to figuration and the body in that decade' (p. xii). While this correlation may seem tenuous at best, especially with his choice of canonical male artists, Getsy is not oblivious of this fact; instead, he expressly states that he resolves to 'infect the canon' by 'finding ways to re-read [these artists] to find capacitating sites in their work ... making sure no one can ever look at a John Chamberlain again without thinking about questions of gender that were not his politics, but are maybe ours'. Throughout the book, Getsy posits abstract sculpture as an 'open and contested category', employing a set of recurring operative terms such as openness, variability, possibility and polyvalence. As he writes in the introduction: The cultivation of possibility is an ethical and political, not just a theoretical, aim. The artists I discuss offered abstract bodies and, with them, open accounts of personhood's variability and possibility. Their sculptures do this by moving away from the human form and the rendering of the body. Rather, they figure it in the abstract. That is, these works evoke the concept of the body without mimesis, producing a gap between that calling forth of the human and the presentation of the artworks that resolutely refuse to provide an anchoring image of the body. In that gap, there grew new versions of genders, new bodily morphologies, and a new attention to the shifting and successive potentials of these categories. Activated by the conventions of sculpture's attachment to the human body, these abstractions posited unforeclosed sites for identifying and cultivating polyvalence. (p. 41) In other words, Getsy identifies a semantic openness or ambiguity in these sculptures that he purposefully reads through the interpretative lens of shifting gender identities. Even though Getsy casts his book in terms of interdisciplinary research - in 'the expanded field of gender', as the book's subtitle indicates - his main methods remain decidedly art historical, encompassing close visual and formal analysis of the artworks, sustained archival research, and an exhaustive discursive analysis of textual material such as artists' statements, interviews, titles of artworks, as well as a rigorous reinterpretation of secondary critical literature. …
High-Sensitivity Software-Configurable 5.8-GHz Radar Sensor Receiver Chip in 0.13-μm CMOS for Noncontact Vital Sign Detection | In this paper, analyses on sensitivity and link budget have been presented to guide the design of high-sensitivity noncontact vital sign detector. Important design issues such as flicker noise, baseband bandwidth, and gain budget have been discussed with practical considerations of analog-to-digital interface and signal processing methods in noncontact vital sign detection. Based on the analyses, a direct-conversion 5.8-GHz radar sensor chip with 1-GHz bandwidth was designed and fabricated. This radar sensor chip is software configurable to set the operation point and detection range for optimal performance. It integrates all the analog functions on-chip so that the output can be directly sampled for digital signal processing. Measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement. Experiments have been performed successfully in laboratory environment to detect the vital signs of human subjects.
Experimental Demonstration of Vertical Cu-SiO2-Si Hybrid Plasmonic Waveguide Components on an SOI Platform | Vertical Cu-SiO2-Si hybrid plasmonic waveguides (HPWs) along with various passive components are fabricated on a silicon-on-insulator platform using standard complementary metal-oxide-semiconductor (CMOS) technology and characterized at 1550-nm telecommunication wavelengths. The HPW exhibits relatively low propagation loss of ~0.12 dB/μm and high coupling efficiency of ~86% with the conventional Si strip waveguide. A plasmonic waveguide-ring resonator with 1.09-μm radius exhibits extinction ratio of ~13.7 dB, free spectral range of ~106 nm, and Q-factor of ~63. These superior performances together with the CMOS compatibility make the HPW an attractive candidate for future dense Si nanophotonic integrated circuits.
A framework for classifying and comparing software architecture evaluation methods | Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture leads to the desired quality attributes. Recently, a number of evaluation methods have been proposed. There is, however, little consensus on the technical and nontechnical issues that a method should comprehensively address and on which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organize them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight SA evaluation methods.
Predicting conversion from MCI to AD with FDG-PET brain images at different prodromal stages | Early diagnosis of Alzheimer disease (AD), while still at the stage known as mild cognitive impairment (MCI), is important for the development of new treatments. However, brain degeneration in MCI evolves with time and differs from patient to patient, making early diagnosis a very challenging task. Despite these difficulties, many machine learning techniques have already been used for the diagnosis of MCI and for predicting MCI to AD conversion, but the MCI group used in previous works is usually very heterogeneous containing subjects at different stages. The goal of this paper is to investigate how the disease stage impacts on the ability of machine learning methodologies to predict conversion. After identifying the converters and estimating the time of conversion (TC) (using neuropsychological test scores), we devised 5 subgroups of MCI converters (MCI-C) based on their temporal distance to the conversion instant (0, 6, 12, 18 and 24 months before conversion). Next, we used the FDG-PET images of these subgroups and trained classifiers to distinguish between the MCI-C at different stages and stable non-converters (MCI-NC). Our results show that MCI to AD conversion can be predicted as early as 24 months prior to conversion and that the discriminative power of the machine learning methods decreases with the increasing temporal distance to the TC, as expected. These findings were consistent for all the tested classifiers. Our results also show that this decrease arises from a reduction in the information contained in the regions used for classification and by a decrease in the stability of the automatic selection procedure. |
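The per-subgroup evaluation described above (separating converters at a fixed temporal distance to conversion from stable non-converters) can be sketched with a standard classification pipeline. The feature-loading function and the SVM choice below are illustrative placeholders, not the exact classifiers or features used in the paper.

```python
# Sketch of the per-subgroup setup: train a classifier to separate MCI converters
# at a given number of months before conversion (MCI-C) from stable MCI-NC.
# `load_features` is a hypothetical placeholder for FDG-PET feature extraction.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_subgroup(X_converters, X_stable):
    X = np.vstack([X_converters, X_stable])
    y = np.r_[np.ones(len(X_converters)), np.zeros(len(X_stable))]
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# for months in (0, 6, 12, 18, 24):
#     auc = evaluate_subgroup(load_features(f"MCI-C_{months}m"), load_features("MCI-NC"))
#     print(months, "months before conversion: AUC =", round(auc, 3))
```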
The frequent and conserved DR3-B8-A1 extended haplotype confers less diabetes risk than other DR3 haplotypes. | AIM
The goal of this study was to develop and implement methodology that would aid in the analysis of extended high-density single nucleotide polymorphism (SNP) major histocompatibility complex (MHC) haplotypes combined with human leucocyte antigen (HLA) alleles in relation to type 1 diabetes risk.
METHODS
High-density SNP genotype data (2918 SNPs) across the MHC from the Type 1 Diabetes Genetics Consortium (1240 families), in addition to HLA data, were processed into haplotypes using PedCheck and Merlin, and extended DR3 haplotypes were analysed.
RESULTS
With this large dense set of SNPs, the conservation of DR3-B8-A1 (8.1) haplotypes spanned the MHC (≥99% SNP identity). Forty-seven individuals homozygous for the 8.1 haplotype also shared the same homozygous genotype at four 'sentinel' SNPs (rs2157678 'T', rs3130380 'A', rs3094628 'C' and rs3130352 'T'). Conservation extended from HLA-DQB1 to the telomeric end of the SNP panels (3.4 Mb total). In addition, we found that the 8.1 haplotype is associated with lower risk than other DR3 haplotypes by both haplotypic and genotypic analyses [haplotype: p = 0.009, odds ratio (OR) = 0.65; genotype: p = 6.3 × 10⁻⁵, OR = 0.27]. The 8.1 haplotype (from genotypic analyses) is associated with lower risk than the high-risk DR3-B18-A30 haplotype (p = 0.01, OR = 0.23), but the DR3-B18-A30 haplotype did not differ from other non-8.1 DR3 haplotypes relative to diabetes association.
CONCLUSION
The 8.1 haplotype demonstrates extreme conservation (>3.4 Mb) and is associated with significantly lower risk for type 1 diabetes than other DR3 haplotypes. |
Frictions in Shadow Banking: Evidence from the Lending Behavior of Money Market Mutual Funds | We document frictions in money market mutual fund lending that lead to the transmission of distress across borrowers. Using novel security-level holdings data, we show that funds with exposure to Eurozone banks suffered large outflows in mid-2011. These outflows had significant spillovers: non-European issuers relying on such funds raised less short-term debt financing. Issuer characteristics do not explain the results: holding fixed the issuer, funds with higher Eurozone exposure cut lending more. Due to credit market frictions, funds with low Eurozone exposure provided substitute financing only to issuers they had pre-existing relationships with, even though issuers are large, highly-rated firms. JEL G01, G18, G21, G28, G32
Group Atmosphere, Shared Understanding, and Perceived Conflict in Virtual Teams: Findings from an Experiment | In this paper, we present a theoretical model on the relationships among group atmosphere, shared understanding, and perceived task conflict in virtual teams. We validate the theoretical model by analyzing data that was collected in a laboratory experiment on virtual teams. We find that the cultural diversity of a virtual team adversely affects group atmosphere and that group atmosphere has a positive influence on the development of shared understanding in these teams. We also find that the development of shared understanding weakens perceived task conflict in virtual teams. However, we do not find strong support for the moderating effect of avoidance conflict management style on the relationship between shared understanding and perceived task conflict.
Long-term prognosis of HIV-infected patients with Kaposi sarcoma treated with pegylated liposomal doxorubicin. | INTRODUCTION
Incidence of Kaposi sarcoma (KS) in human immunodeficiency virus (HIV)-infected persons has dramatically decreased in the highly active antiretroviral therapy era. However, this tumor still represents the most common cancer in this population.
OBJECTIVES
The objectives of this study were to evaluate long-term prognosis of HIV-infected patients with KS who had received pegylated liposomal doxorubicin (PLD) and, more specifically, to assess tumor relapse rate, mortality, and cause of death in these subjects.
DESIGN
This study was a retrospective review of all patients with KS who had received PLD in centers belonging to the Caelyx/KS Spanish Group. Kaplan-Meier analysis and univariate and multivariate Cox-regression analysis were used to assess the rate of and factors associated with relapse and death through January 2006.
RESULTS
A total of 98 patients received PLD from September 1997 through June 2002. Median follow-up after initiation of treatment was 28.7 months (interquartile range, 6.6-73.2 months); during follow-up, 29 patients died (a mortality rate of 14.6% per year). In 9 patients (31%), the cause of death was related to the appearance of other tumors (including 7 lymphomas, 1 gastrointestinal adenocarcinoma, and 1 tongue epidermoid cancer). Death caused by progression of KS occurred in 3 cases. Death risk was inversely related to CD4(+) cell counts at the end of follow-up (hazard ratio for every increase in CD4(+) cell count of 100 cells/microL, 0.7; 95% confidence interval, 0.5-0.9). A relapse study was performed for 61 patients who had complete or partial response to PLD and who attended a control visit after treatment completion. After a median follow-up of 50 months (interquartile range, 17.2-76 months), 8 patients (13%) had experienced relapse; 5 of these patients experienced relapse within the first year after stopping PLD. The only factor that was independently related to risk of relapse was having a CD4(+) cell count >200 cells/microL at baseline (hazard ratio, 6.2; 95% confidence interval, 1.2-30). Lower CD4(+) cell count at the end of follow-up was marginally associated with relapse (hazard ratio for every increase in CD4(+) cell count of 100 cells/microL, 0.7; 95% confidence interval, 0.6-1.01).
CONCLUSIONS
Treatment of KS with PLD in HIV-infected patients is followed by a low relapse rate, with most relapses occurring during the first year after stopping chemotherapy. However, the mortality rate in this population was high, in part because of an unexpectedly high incidence of other tumors, mainly lymphomas. |
International crime and justice | International crime and justice is an emerging field that covers international and transnational crimes that have not been the focus of mainstream criminology or criminal justice. This book examines the field from a global perspective. It provides an introduction to the nature of international and transnational crimes and the theoretical perspectives that assist in understanding the relationship between social change and crime opportunities resulting from globalization, migration, and culture conflicts. Written by a team of world experts, International Crime and Justice examines the central role of victims’ rights in the development of legal frameworks for the prevention and control of transnational and international crimes. It also discusses the challenges in delivering justice and obtaining international cooperation in efforts to deter, detect, and respond to these crimes. Arranged in nine parts, International Crime and Justice provides readers with an understanding of the main concepts relevant to the topic and the complex nature of the problems. |
Unsupervised learning of models for object recognition | A method is presented to learn object class models from unlabeled and unsegmented cluttered scenes for the purpose of visual object recognition. The variability across a class of objects is modeled in a principled way, treating objects as flexible constellations of rigid parts (features). Variability is represented by a joint probability density function (pdf) on the shape of the constellation and the output of part detectors. Corresponding "constellation models" can be learned in a completely unsupervised fashion. In a first stage, the learning method automatically identifies distinctive parts in the training set by applying a clustering algorithm to patterns selected by an interest operator. It then learns the statistical shape model using expectation maximization. Mixtures of constellation models can be defined and applied to "discover" object categories in an unsupervised manner. The method achieves very good classification results on human faces, cars, leaves, handwritten letters, and cartoon characters.
Humanoid robosoccer goal detection using Hough transform | Goal post detection is a critical ability for a soccer robot and needs to be accurate, robust, and efficient. A goal detection method using the Hough transform to extract detailed goal features is presented in this paper. First, the image preprocessing and the Hough transform implementation are described in detail. A new modification of the θ parameter range in the Hough transform is explained and applied to speed up the detection process. A line processing algorithm is used to classify the detected lines, and then the goal feature extraction, including the line intersection calculation, is performed. Finally, the goal distance from the robot body is estimated using triangle similarity. The experiment is performed on our university humanoid robot with a yellow goal measuring 225 cm in width and 110 cm in height. The results show that the goal detection method, including the modification of the Hough transform, correctly extracts the goal features seen by the robot, at a minimum speed of 5 frames per second. Additionally, the goal distance estimation is accomplished with a maximum error of 20 centimeters.
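Two of the steps named above, a Hough accumulator restricted to a narrow θ range and a triangle-similarity distance estimate, can be sketched as follows. The focal length and the ±10° window are assumed values for illustration, not the paper's calibration.

```python
# Sketch of a Hough-line accumulator restricted to a narrow theta range (e.g.
# near-vertical goal posts), plus the triangle-similarity distance estimate.
# The focal length and theta window are assumed calibration values.
import numpy as np

def hough_lines(edge_img, thetas_deg):
    h, w = edge_img.shape
    thetas = np.deg2rad(thetas_deg)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(len(thetas))] += 1
    return acc  # peaks in acc give (rho, theta) of detected lines

def goal_distance_cm(real_height_cm, pixel_height, focal_length_px=600.0):
    # Triangle similarity: distance = focal_length * real_height / image_height.
    return focal_length_px * real_height_cm / pixel_height

# Restrict theta to +/-10 degrees around vertical instead of the full 0..180 range:
# acc = hough_lines(edges, thetas_deg=np.arange(-10, 11))
print(goal_distance_cm(110, 275))  # a crossbar 275 px tall would be ~240 cm away
```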
DeepGRU: Deep Gesture Recognition Utility | We introduce DeepGRU, a deep learning-based gesture and action recognizer. Our method is intuitive and easy to implement, yet versatile and powerful for various application scenarios. Using only raw pose and vector data, DeepGRU can achieve high recognition accuracy regardless of the dataset size, the number of training samples or the choice of the input device. At the heart of our method lies a set of stacked gated recurrent units (GRU), two fully connected layers and a global attention model. We demonstrate that even in the absence of powerful hardware, and using only the CPU, our method can still be trained in a short period of time, making it suitable for rapid prototyping and development scenarios. We evaluate our proposed method on 7 publicly available datasets, spanning over a broad range of interactions as well as dataset sizes. In many cases we outperform the state-of-the-art pose-based methods. For instance, we achieve a recognition accuracy of 84.9% and 92.3% on cross-subject and cross-view tests of the NTU RGB+D dataset respectively, and also 100% recognition accuracy on the UT-Kinect dataset. |
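A simplified PyTorch sketch of the ingredients named in the abstract (a stacked GRU encoder, a global attention summary over time, and two fully connected layers) is shown below. The hidden sizes and the exact attention form are assumptions for illustration, not the authors' released DeepGRU architecture.

```python
# Simplified sketch: stacked GRUs over a pose sequence, a global attention
# summary, and two fully connected layers producing class logits. Layer sizes
# and the attention form are assumptions, not the exact DeepGRU design.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=3, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # scores each time step
        self.fc = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.gru(x)                         # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention weights over time
        context = (w * h).sum(dim=1)               # global weighted summary
        return self.fc(context)                    # class logits

model = GestureNet(in_dim=75, num_classes=60)      # e.g. 25 joints x 3 coordinates
logits = model(torch.randn(2, 40, 75))             # two sequences of 40 frames
print(logits.shape)                                # torch.Size([2, 60])
```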
Extracting blood flow parameters from Photoplethysmograph signals: A review | Infrared sensors are used in photoplethysmography (PPG) to obtain blood flow parameters in the vascular system. PPG is a simple, low-cost, non-invasive optical technique, with the sensor commonly placed on a finger or toe, that detects blood volume changes in the micro-vascular bed of tissue. The sensor uses an infrared source and a photodetector to detect the infrared light that is not absorbed. The waveform recorded at the detector side is called the PPG signal. This paper reviews the various blood flow parameters that can be extracted from the PPG signal, including the detection of endothelial dysfunction as an early indicator of vascular disease.
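One of the most basic parameters extractable from a PPG waveform is the pulse rate, obtained by detecting systolic peaks and averaging the inter-beat intervals. The sketch below uses a synthetic signal and an assumed sampling rate purely for illustration.

```python
# Minimal sketch: estimate pulse rate from a PPG waveform by peak detection.
# The synthetic signal, sampling rate and minimum peak spacing are assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)  # ~72 bpm pulse

peaks, _ = find_peaks(ppg, distance=0.4 * fs)  # enforce at most ~150 bpm
beat_intervals = np.diff(peaks) / fs           # seconds between systolic peaks
heart_rate_bpm = 60.0 / beat_intervals.mean()
print(round(heart_rate_bpm, 1))
```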
3D Mesh Compression: Survey, Comparisons, and Emerging Trends | 3D meshes are commonly used to represent virtual surfaces and volumes. However, their raw data representations take a large amount of space. Hence, 3D mesh compression has been an active research topic since the mid 1990s. In 2005, two very good review articles describing the pioneering works were published. Yet, new technologies have emerged since then. In this article, we summarize the early works and put the focus on these novel approaches. We classify and describe the algorithms, evaluate their performance, and provide synthetic comparisons. We also outline the emerging trends for future research.
Interferon dimers: IFN-PEG-IFN. | Increasingly complex proteins can be made by a recombinant chemical approach where proteins that can be made easily can be combined by site-specific chemical conjugation to form multifunctional or more active protein therapeutics. Protein dimers may display increased avidity for cell surface receptors. The increased size of protein dimers may also increase circulation times. Cytokines bind to cell surface receptors that dimerise, so much of the solvent accessible surface of a cytokine is involved in binding to its target. Interferon (IFN) homo-dimers (IFN-PEG-IFN) were prepared by two methods: site-specific bis-alkylation conjugation of PEG to the two thiols of a native disulphide or to two imidazoles on a histidine tag of two His8-tagged IFN (His8IFN). Several control conjugates were also prepared to assess the relative activity of these IFN homo-dimers. The His8IFN-PEG20-His8IFN obtained by histidine-specific conjugation displayed marginally greater in vitro antiviral activity compared to the IFN-PEG20-IFN homo-dimer obtained by disulphide re-bridging conjugation. This result is consistent with previous observations in which enhanced retention of activity was made possible by conjugation to an N-terminal His-tag on the IFN. Comparison of the antiviral and antiproliferative activities of the two IFN homo-dimers prepared by disulphide re-bridging conjugation indicated that IFN-PEG10-IFN was more biologically active than IFN-PEG20-IFN. This result suggests that the size of PEG may influence the antiviral activity of IFN-PEG-IFN homo-dimers. |
Motion Planning in Dynamic Environments: Obstacles Moving Along Arbitrary Trajectories | This paper generalizes the concept of velocity obstacles [3] to obstacles moving along arbitrary trajectories. We introduce the non-linear velocity obstacle, which takes into account the shape, velocity and path curvature of the moving obstacle. The non-linear v-obstacle allows selecting a single avoidance maneuver (if one exists) that avoids any number of obstacles moving on any known trajectories. For unknown trajectories, the non-linear v-obstacles can be used to generate local avoidance maneuvers based on the current velocity and path curvature of the moving obstacle. This elevates the planning strategy to a second order method, compared to the first order avoidance using the linear v-obstacle, and zero order avoidance using only position information. Analytic expressions for the non-linear v-obstacle are derived for general trajectories in the plane. The nonlinear v-obstacles are demonstrated in a complex traffic example. |
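For context, the linear (first-order) velocity obstacle that the paper generalizes can be written as a simple collision-cone test for circular objects: a relative velocity causes a future collision if it points toward the obstacle within the half-angle asin((r_a + r_b)/distance). The sketch below implements only that linear test; it does not model the path-curvature term of the non-linear v-obstacle.

```python
# Linear velocity-obstacle (collision cone) test for two circular objects.
# The non-linear v-obstacle in the paper additionally accounts for the obstacle's
# path curvature, which this simple check does not model.
import numpy as np

def in_velocity_obstacle(p_robot, v_robot, p_obs, v_obs, r_robot, r_obs):
    rel_p = np.asarray(p_obs, float) - np.asarray(p_robot, float)
    rel_v = np.asarray(v_robot, float) - np.asarray(v_obs, float)
    dist = np.linalg.norm(rel_p)
    if dist <= r_robot + r_obs:
        return True                                   # already overlapping
    half_angle = np.arcsin((r_robot + r_obs) / dist)  # cone half-angle
    speed = np.linalg.norm(rel_v)
    if speed == 0:
        return False
    angle = np.arccos(np.clip(rel_p.dot(rel_v) / (dist * speed), -1.0, 1.0))
    return angle < half_angle                         # heading into the cone

print(in_velocity_obstacle((0, 0), (1, 0), (5, 0), (0, 0), 0.5, 0.5))  # True
print(in_velocity_obstacle((0, 0), (0, 1), (5, 0), (0, 0), 0.5, 0.5))  # False
```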
Tinea capitis among children at one suburban primary school in the City of Maputo, Mozambique. | This study evaluated the prevalence of Tinea capitis among schoolchildren at one primary school and also identified the causative agents. Scalp flakes were collected from children presenting clinical signs suggestive of Tinea capitis. Dermatophytes were identified by following standard mycological procedures. This study found a clinical prevalence of Tinea capitis of 9.6% (110/1149). The dermatophytes isolated were Microsporum audouinii, Trichophyton violaceum, and Trichophyton mentagrophytes. The most prevalent causative agent in this study was Microsporum audouinii, thus confirming the findings from previous cross-sectional studies carried out in the city of Maputo. |
Strange Foreign Bodies | I curated this group exhibition, which includes works by seven internationally acclaimed artists, as a response to The Hunterian's tercentennial celebration 'William Hunter and the Anatomy of the Modern Museum.' It reconsiders the aesthetic and political legacies of Hunter's Enlightenment inquiry into human and animal embodiment, and his encyclopaedic collecting of art and ethnography. The works included newly-commissioned prints and sculptures by Claire Barclay along with films, prints and sculptures by Christine Borland, Sarah Browne, Alex Impey, Shahryar Nashat and Phillip Warnell. An essay was published to accompany the exhibition. |
Goal Profiles, Mental Toughness and its Influence on Performance Outcomes among Wushu Athletes. | This study examined the association between goal orientations and mental toughness and its influence on performance outcomes in competition. Wushu athletes (n = 40) competing in Intervarsity championships in Malaysia completed the Task and Ego Orientations in Sport Questionnaire (TEOSQ) and the Psychological Performance Inventory (PPI). Using cluster analysis techniques, including hierarchical methods and the non-hierarchical method (k-means cluster), to examine goal profiles, a three-cluster solution emerged, viz. cluster 1 - high task and moderate ego (HT/ME), cluster 2 - moderate task and low ego (MT/LE), and cluster 3 - moderate task and moderate ego (MT/ME). Analysis of the fundamental areas of mental toughness based on goal profiles revealed that athletes in cluster 1 scored significantly higher on negative energy control than athletes in cluster 2. Further, athletes in cluster 1 also scored significantly higher on positive energy control than athletes in cluster 3. A chi-square (χ(2)) test revealed no significant differences among athletes with different goal profiles on performance outcomes in the competition. However, significant differences were observed between athletes (medallists and non-medallists) in self-confidence (p = 0.001) and negative energy control (p = 0.042). Medallists scored significantly higher on self-confidence (mean = 21.82 ± 2.72) and negative energy control (mean = 19.59 ± 2.32) than non-medallists (self-confidence mean = 18.76 ± 2.49; negative energy control mean = 18.14 ± 1.91). Key points: Mental toughness can be influenced by certain goal profile combinations. Athletes with successful outcomes in performance (medallists) displayed greater mental toughness.
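The two analysis steps described above, k-means clustering of goal-orientation scores and a chi-square test of cluster membership against competition outcome, can be sketched generically as follows. The data here are synthetic placeholders, not the study's TEOSQ/PPI scores.

```python
# Sketch of the analysis steps: k-means on standardized task/ego orientation
# scores to derive goal-profile clusters, then a chi-square test of cluster
# membership against medallist status. All data below are synthetic placeholders.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scores = rng.normal(size=(40, 2))          # columns: task orientation, ego orientation
medallist = rng.integers(0, 2, size=40)    # 1 = medallist, 0 = non-medallist

z = StandardScaler().fit_transform(scores)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)

table = np.zeros((3, 2))                   # contingency table: cluster x outcome
for c, m in zip(clusters, medallist):
    table[c, m] += 1
chi2, p, _, _ = chi2_contingency(table)
print("chi-square =", round(chi2, 2), "p =", round(p, 3))
```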
A model for the neuronal implementation of selective visual attention based on temporal correlation among neurons | We propose a model for the neuronal implementation of selective visual attention based on temporal correlation among groups of neurons. Neurons in primary visual cortex respond to visual stimuli with a Poisson distributed spike train with an appropriate, stimulus-dependent mean firing rate. The spike trains of neurons whose receptive fields do not overlap with the "focus of attention" are distributed according to a homogeneous (time-independent) Poisson process with no correlation between action potentials of different neurons. In contrast, spike trains of neurons with receptive fields within the focus of attention are distributed according to non-homogeneous (time-dependent) Poisson processes. Since the short-term average spike rates of all neurons with receptive fields in the focus of attention covary, correlations between these spike trains are introduced which are detected by inhibitory interneurons in V4. These cells, modeled as modified integrate-and-fire neurons, function as coincidence detectors and suppress the response of V4 cells associated with non-attended visual stimuli. The model reproduces quantitatively experimental data obtained in cortical area V4 of monkey by Moran and Desimone (1985).
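The two statistical ingredients of the model, homogeneous versus rate-modulated (non-homogeneous) Poisson spike trains and the detection of near-coincident spikes across a group, can be illustrated numerically. The rates, modulation depth and coincidence window below are illustrative choices, not the paper's parameters, and the coincidence count is a crude stand-in for the integrate-and-fire coincidence detector.

```python
# Homogeneous vs. rate-modulated Poisson spike trains, and a crude count of
# near-coincident spikes pooled across a group of neurons. All parameters are
# illustrative; the threshold stands in for a coincidence-detecting interneuron.
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 2.0                                   # 1 ms bins, 2 s simulation
t = np.arange(0, T, dt)

def poisson_train(rate_hz):
    """rate_hz may be a scalar (homogeneous) or an array over t (non-homogeneous)."""
    return rng.random(t.size) < np.asarray(rate_hz) * dt

base = 40.0
modulation = base * (1 + 0.8 * np.sin(2 * np.pi * 5 * t))    # shared 5 Hz rate covariation

attended   = [poisson_train(modulation) for _ in range(10)]  # correlated via common rate
unattended = [poisson_train(base)       for _ in range(10)]  # independent, homogeneous

def coincidences(trains, window_bins=3, threshold=4):
    pooled = np.sum(trains, axis=0)                  # spikes per bin across the group
    smoothed = np.convolve(pooled, np.ones(window_bins), mode="same")
    return int(np.sum(smoothed >= threshold))        # bins with many near-coincident spikes

print("attended:", coincidences(attended), " unattended:", coincidences(unattended))
```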
Spectral Sparse Representation for Clustering: Evolved from PCA, K-means, Laplacian Eigenmap, and Ratio Cut | Dimensionality reduction, cluster analysis, and sparse representation are basic components in machine learning. However, their relationships have not yet been fully investigated. In this paper, we find that the spectral graph theory underlies a series of these elementary methods and can unify them into a complete framework. The methods include PCA, K-means, Laplacian eigenmap (LE), ratio cut (Rcut), and a new sparse representation method developed by us, called spectral sparse representation (SSR). Further, extended relations to conventional over-complete sparse representations (e.g., method of optimal directions, KSVD), manifold learning (e.g., kernel PCA, multidimensional scaling, Isomap, locally linear embedding), and subspace clustering (e.g., sparse subspace clustering, low-rank representation) are incorporated. We show that, under an ideal condition from the spectral graph theory, PCA, K-means, LE, and Rcut are unified together. And when the condition is relaxed, the unification evolves to SSR, which lies in the intermediate between PCA/LE and K-means/Rcut. An efficient algorithm, NSCrt, is developed to solve the sparse codes of SSR. SSR combines merits of both sides: its sparse codes reduce the dimensionality of data while revealing cluster structure. Owing to its inherent relation to cluster analysis, the codes of SSR can be directly used for clustering. Scut, a clustering approach derived from SSR, reaches state-of-the-art performance in the spectral clustering family. The one-shot solution obtained by Scut is comparable to the optimal result of K-means run many times. Experiments on various data sets demonstrate the properties and strengths of SSR, NSCrt, and Scut.
The "syncope and dementia" study: a prospective, observational, multicenter study of elderly patients with dementia and episodes of "suspected" transient loss of consciousness. | BACKGROUND AND AIM
Syncope and related falls are among the main causes, and the predominant cause, of hospitalization in elderly patients with dementia. However, the diagnostic protocol for syncope is difficult to apply to patients with dementia. Thus, we developed a "simplified" protocol to be used in a prospective, observational, and multicenter study in elderly patients with dementia and transient loss of consciousness suspected to be syncope or unexplained falls. Here, we describe the protocol, its feasibility and the characteristics of the patients enrolled in the study.
METHODS
Patients aged ≥65 years with a diagnosis of dementia and one or more episodes of transient loss of consciousness during the previous 3 months, subsequently referred to a Geriatric Department in different regions of Italy, from February 2012 to May 2014, were enrolled. A simplified protocol was applied in all patients. Selected patients underwent a second-level evaluation.
RESULTS
Three hundred and three patients were enrolled; 52.6% presented with episodes suspected to be syncope, 44.5% with unexplained falls, and 2.9% with both. Vascular dementia had been previously diagnosed in 53.6% of participants, Alzheimer's disease in 23.5%, and mixed forms in 12.6%. Patients presented with high comorbidity (CIRS score = 3.6 ± 2), severe functional impairment (BADL lost = 3 ± 2), and polypharmacy (6 ± 3 drugs).
CONCLUSION
Elderly patients with dementia enrolled for suspected syncope and unexplained falls have high comorbidity and disability. The clinical presentation is often atypical and the presence of unexplained falls is particularly frequent. |
An Embedded Declarative Language for Hierarchical Object Structure Traversal | A common challenge in processing large domain-specific models and in-memory object structures (e.g., complex XML documents) is writing traversals and queries on them. Object-oriented (OO) designs, particularly those based on the Visitor pattern, are commonly used for developing traversals. However, such OO designs limit the reusability and independent evolution of visitation actions (i.e., the actions to be performed at each traversed node) due to tight coupling between the traversal logic and visitation actions, particularly when a variety of different traversals are needed. Code generators developed for traversal specification languages alleviate some of these problems but their high cost of development is often prohibitive. This paper presents Language for Embedded quEry and traverSAl (LEESA), which provides a generative programming approach for embedding object structure queries and traversal specifications within a host language, C++. By virtue of being declarative, LEESA significantly reduces the development cost of programs operating on complex object structures compared to the traditional techniques. |
Block-wise construction of tree-like relational features with monotone reducibility and redundancy | We describe an algorithm for constructing a set of tree-like conjunctive relational features by combining smaller conjunctive blocks. Unlike traditional level-wise approaches which preserve the monotonicity of frequency, our block-wise approach preserves monotonicity of feature reducibility and redundancy, which are important in propositionalization employed in the context of classification learning. With pruning based on these properties, our block-wise approach efficiently scales to features including tens of first-order atoms, far beyond the reach of state-of-the art propositionalization or inductive logic programming systems. |
Can the Quality of Pearls from the Japanese Pearl Oyster (Pinctada fucata) be Explained by the Gene Expression Patterns of the Major Shell Matrix Proteins in the Pearl Sac? | For pearl culture, the pearl oyster is forced open and a nucleus is implanted into the gonad with a mantle graft. The outer mantle epithelial cells of the implanted mantle graft elongate and, surrounding the nucleus, a pearl sac is formed. Shell matrix proteins secreted by the pearl sac play an important role in the regulation of pearl formation. Recently, seven shell matrix proteins were identified from the pearl oyster Pinctada fucata. However, there is a paucity of information on the function of these proteins and their gene expression patterns. Our study aims to elucidate the relationship between pearl type, quality, and gene expression patterns of six shell matrix proteins (msi60, n16, nacrein, msi31, prismalin-14, and aspein) in the pearl sac based on real-time PCR analysis. After culturing for about 2 months, the pearl sac tissues were collected from 22 individuals: 12 with high quality (HP), nine with low quality (LP), and one with organic (ORG) pearl formation. The surface of each of the 12 HP pearls was composed only of a nacreous layer; in contrast, that of the nine LP pearls was composed of nacreous and prismatic layers. The six target gene expressions were detected in all individuals. However, the delta threshold cycle (ΔCT) for msi31 was significantly higher in the HP than in the LP individuals (Mann–Whitney's U test, p = 0.02). This means that the relative expression level of msi31, which constitutes the framework of the prismatic layer, was higher in the LP than in the HP individuals.
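The expression comparison described above (ΔCT of a target gene compared between HP and LP groups with a Mann-Whitney U test) can be sketched as follows. The ΔCT values below are made-up placeholders, not the study's measurements.

```python
# Sketch of the group comparison: compare delta-CT values of a target gene
# (e.g. msi31) between high-quality (HP) and low-quality (LP) pearl-sac samples
# with a Mann-Whitney U test. The values are made-up placeholders.
from scipy.stats import mannwhitneyu

delta_ct_hp = [6.1, 5.8, 6.4, 6.0, 6.3, 5.9, 6.2, 6.5, 6.1, 6.0, 6.3, 6.2]  # 12 HP samples
delta_ct_lp = [4.9, 5.2, 5.0, 5.3, 4.8, 5.1, 5.4, 5.0, 5.2]                 # 9 LP samples

u, p = mannwhitneyu(delta_ct_hp, delta_ct_lp, alternative="two-sided")
print("U =", u, "p =", p)
# A higher delta-CT means lower relative expression, so this toy pattern mirrors
# the reported result of higher msi31 expression in LP than in HP pearl sacs.
```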
Real-Time Convex Optimization in Signal Processing | This article shows the potential for convex optimization methods to be much more widely used in signal processing. In particular, automatic code generation makes it easier to create convex optimization solvers that are made much faster by being designed for a specific problem family. The disciplined convex programming framework that has been shown useful in transforming problems to a standard form may be extended to create solvers themselves. Much work remains to be done in exploring the capabilities and limitations of automatic code generation. As computing power increases, and as automatic code generation improves, the authors expect convex optimization solvers to be found more and more often in real-time signal processing applications. |
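As a small illustration of the disciplined convex programming style discussed in the article, the sketch below states a box-constrained least-squares problem in CVXPY, an illustrative modeling tool chosen for this example rather than the article's own code-generation toolchain.

```python
# Tiny disciplined-convex-programming example: box-constrained least squares.
# CVXPY is used here only as an illustrative modeling layer.
import cvxpy as cp
import numpy as np

A = np.random.randn(20, 5)
b = np.random.randn(20)
x = cp.Variable(5)

problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                     [x >= -1, x <= 1])
problem.solve()
print(problem.status, np.round(x.value, 3))
```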
Why the Diagnosis of Attention Deficit Hyperactivity Disorder Matters | BACKGROUND
Attention Deficit Hyperactivity disorder (ADHD) is one of the most common and challenging childhood neurobehavioral disorders. ADHD is known to negatively impact children, their families, and their community. About one-third to one-half of patients with ADHD will have persistent symptoms into adulthood. The prevalence in the United States is estimated at 5-11%, representing 6.4 million children nationwide. The variability in the prevalence of ADHD worldwide and within the US may be due to the wide range of factors that affect accurate assessment of children and youth. Because of these obstacles to assessment, ADHD is under-diagnosed, misdiagnosed, and undertreated.
OBJECTIVES
We examined factors associated with making and receiving the diagnosis of ADHD. We sought to review the consequences of a lack of diagnosis and treatment for ADHD on children's and adolescent's lives and how their families and the community may be involved in these consequences.
METHODS
We reviewed scientific articles looking for factors that impact the identification and diagnosis of ADHD and articles that demonstrate naturalistic outcomes of diagnosis and treatment. The databases PubMed and Google Scholar were searched from 1995 to 2015 using the search terms "ADHD, diagnosis, outcomes." We then reviewed abstracts and reference lists within those articles to rule these or other articles in or out.
RESULTS
Multiple factors have significant impact in the identification and diagnosis of ADHD including parents, healthcare providers, teachers, and aspects of the environment. Only a few studies detailed the impact of not diagnosing ADHD, with unclear consequences independent of treatment. A more significant number of studies have examined the impact of untreated ADHD. The experience around receiving a diagnosis described by individuals with ADHD provides some additional insights.
CONCLUSION
ADHD diagnosis is influenced by perceptions of many different members of a child's community. A lack of clear understanding of ADHD and the importance of its diagnosis and treatment still exists among many members of the community including parents, teachers, and healthcare providers. More basic and clinical research will improve methods of diagnosis and information dissemination. Even before further advancements in science, strong partnerships between clinicians and patients with ADHD may be the best way to reduce the negative impacts of this disorder. |
An Approach to the Use of Depth Cameras for Weed Volume Estimation | The use of depth cameras in precision agriculture is increasing day by day. This type of sensor has been used for the plant structure characterization of several crops. However, the discrimination of small plants, such as weeds, is still a challenge within agricultural fields. Improvements in the new Microsoft Kinect v2 sensor can capture the details of plants. The use of a dual methodology using height selection and RGB (Red, Green, Blue) segmentation can separate crops, weeds, and soil. This paper explores the possibilities of this sensor by using Kinect Fusion algorithms to reconstruct 3D point clouds of weed-infested maize crops under real field conditions. The processed models showed good consistency among the 3D depth images and soil measurements obtained from the actual structural parameters. Maize plants were identified in the samples by height selection of the connected faces and showed a correlation of 0.77 with maize biomass. The lower height of the weeds made RGB recognition necessary to separate them from the soil microrelief of the samples, achieving a good correlation of 0.83 with weed biomass. In addition, weed density showed good correlation with volumetric measurements. The canonical discriminant analysis showed promising results for classification into monocots and dicots. These results suggest that estimating volume using the Kinect methodology can be a highly accurate method for crop status determination and weed detection. It offers several possibilities for the automation of agricultural processes by the construction of a new system integrating these sensors and the development of algorithms to properly process the information provided by them.
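The dual segmentation idea described above, a height threshold to isolate the taller crop plants plus an RGB greenness test to separate low-lying weeds from soil, can be sketched generically. The excess-green index and the thresholds are common illustrative choices, not necessarily the exact criteria used in the paper.

```python
# Sketch of dual segmentation: height threshold for crop, excess-green (ExG)
# threshold on RGB for green vegetation, and their combination to label weeds
# and soil. Thresholds are illustrative assumptions.
import numpy as np

def segment(height_map, rgb, crop_min_height=0.15, exg_threshold=0.05):
    """height_map: (H, W) heights in metres; rgb: (H, W, 3) floats in [0, 1]."""
    crop_mask = height_map > crop_min_height           # tall vegetation -> crop
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                                # excess-green index
    green_mask = exg > exg_threshold
    weed_mask = green_mask & ~crop_mask                # green but short -> weed
    soil_mask = ~green_mask & ~crop_mask
    return crop_mask, weed_mask, soil_mask

h = np.random.rand(4, 4) * 0.3
img = np.random.rand(4, 4, 3)
crop, weed, soil = segment(h, img)
print(crop.sum(), weed.sum(), soil.sum())
```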
Package characterization on advanced NBA-QFN structure | In recent years, electronic products have demanded more functionality, miniaturization, higher performance, reliability, and low cost. Therefore, the IC chip is required to deliver more signal I/O and better electrical characteristics under the same package footprint. The None-Lead Bump Array (NBA) Chip Scale Structure is then developed to meet those requirements, offering better electrical performance, more I/O accommodation, and high transmission speed. To evaluate the NBA package capability, the solder joint life, package warpage, die corner stress, and thermal performance are characterized. Firstly, investigations on the warpage, die corner stress, and thermal performance of the NBA-QFN structure are performed by the use of the Finite Element Method (FEM). Secondly, experiments are conducted on the solder joint reliability performance with different solder coverage and standoff height. In conclusion, the simulation results indicate that NBA-QFN has no warpage risk, lower die corner stress, and better thermal performance than TFBGA, and the simulation results show good agreement with the experimental data. From the drop test study, solder coverage of less than 50% and standoff height lower than 40 μm yield better solder joint life than other configurations.
Limited Preemptive Scheduling for Real-Time Systems. A Survey | The question whether preemptive algorithms are better than nonpreemptive ones for scheduling a set of real-time tasks has been debated for a long time in the research community. In fact, especially under fixed priority systems, each approach has advantages and disadvantages, and no one dominates the other when both predictability and efficiency have to be taken into account in the system design. Recently, limited preemption models have been proposed as a viable alternative between the two extreme cases of fully preemptive and nonpreemptive scheduling. This paper presents a survey of the existing approaches for reducing preemptions and compares them under different metrics, providing both qualitative and quantitative performance evaluations. |
Advanced Signal Processing for Vital Sign Extraction With Applications in UWB Radar Detection of Trapped Victims in Complex Environments | Ultra-wideband (UWB) radar plays an important role in search and rescue at disaster relief sites. Identifying vital signs and locating buried survivors are two important research topics in this field. In general, it is hard to identify a human's vital signs (breathing and heartbeat) in complex environments due to the low signal-to-noise ratio of the vital sign in radar signals. In this paper, advanced signal-processing approaches are used to identify and to extract human vital signs in complex environments. First, we apply the Curvelet transform to remove the source-receiver direct coupling wave and background clutter. Next, singular value decomposition is used to denoise the life signals. Finally, the results are presented based on FFT and the Hilbert-Huang transform to separate and extract the human vital sign frequencies, as well as the micro-Doppler shift characteristics. The proposed processing approach is first tested on a set of synthetic data generated by FDTD simulation of UWB radar detection of two trapped victims under debris at an earthquake site of collapsed buildings. Then, it is validated with laboratory experiment data. The results demonstrate that the combination of UWB radar as the hardware and advanced signal-processing algorithms as the software has potential for efficient vital sign detection and location in search and rescue for trapped victims in complex environments.
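Two of the later steps in the chain summarized above, low-rank SVD denoising of the slow-time/range data and an FFT along slow time to read off the respiration frequency, can be sketched on synthetic data. The sampling rate, signal strength, and rank choice are illustrative assumptions.

```python
# Sketch (after clutter removal): low-rank SVD denoising of a slow-time x range
# matrix, then an FFT along slow time to estimate the respiration frequency.
# All data and parameters below are synthetic/illustrative.
import numpy as np

fs_slow = 20.0                                    # slow-time sampling rate (assumed)
t = np.arange(0, 30, 1 / fs_slow)                 # 30 s observation
n_range = 64
data = 0.5 * np.random.randn(t.size, n_range)     # noise over all range bins
data[:, 30] += 2.0 * np.sin(2 * np.pi * 0.3 * t)  # 0.3 Hz breathing at one range bin

# Keep only the strongest singular components (rank-2 approximation).
U, s, Vt = np.linalg.svd(data, full_matrices=False)
denoised = (U[:, :2] * s[:2]) @ Vt[:2, :]

spectrum = np.abs(np.fft.rfft(denoised[:, 30] - denoised[:, 30].mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs_slow)
print("estimated respiration rate:", round(freqs[spectrum.argmax()], 2), "Hz")
```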
A linguistic ontology of space for natural language processing | We present a detailed semantics for linguistic spatial expressions supportive of computational processing that draws substantially on the principles and tools of ontological engineering and formal ontology. We cover language concerned with space, actions in space and spatial relationships and develop an ontological organization that relates such expressions to general classes of fixed semantic import. The result is given as an extension of a linguistic ontology, the Generalized Upper Model, an organization which has been used for over a decade in natural language processing applications. We describe the general nature and features of this ontology and show how we have extended it for working particularly with space. Treating the semantics of natural language expressions concerning space in this way offers a substantial simplification of the general problem of relating natural spatial language to its contextualized interpretation. Example specifications based on natural language examples are presented, as well as an evaluation of the ontology's coverage, consistency, predictive power, and applicability.
Class-E LCCL for capacitive power transfer system | This paper presents the design of a capacitive power transfer (CPT) system implementing a Class-E inverter, chosen for its high efficiency, which in theory can approach 100%. However, the Class-E inverter is highly sensitive to its circuit parameters when the capacitance at the coupling plate is small. As a solution, an additional capacitor can be integrated into the Class-E inverter to increase the coupling capacitance for better performance. Both simulation and experimental investigations were carried out to verify the high-efficiency CPT system based on a Class-E inverter with an additional capacitor. The investigation exhibited 96% overall DC-DC power transfer efficiency with 0.97 W output at a 1 MHz operating frequency.
ASR Confidence Estimation with Speaker-Adapted Recurrent Neural Networks | Confidence estimation for automatic speech recognition has been very recently improved by using Recurrent Neural Networks (RNNs), and also by speaker adaptation (on the basis of Conditional Random Fields). In this work, we explore how to obtain further improvements by combining RNNs and speaker adaptation. In particular, we explore different speaker-dependent and -independent data representations for Bidirectional Long Short Term Memory RNNs of various topologies. Empirical tests are reported on the LibriSpeech dataset showing that the best results are achieved by the proposed combination of RNNs and speaker adaptation.
Formal method integration via heterogeneous notations | Formal Method Integration via Heterogeneous Notations. Richard Freeman Paige, Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto, 1997. Method integration is the procedure of combining multiple methods to form a new technique. In the context of software engineering, this can involve combining specification techniques, rules and guidelines for design and implementation, and sequences of steps for managing an entire development. In current practice, method integration is often an ad-hoc process, where links between methods are defined on a case-by-case basis. In this dissertation, we examine an approach to formal method integration based on so-called heterogeneous notations: compositions of compatible notations. We set up a basis that can be used to formally define the meaning of compositions of formal and semiformal notations. Then, we examine how this basis can be used in combining methods used for system specification, design, and implementation. We demonstrate integration using this basis in a pure formal setting, and in a setting involving both formal and semiformal methods. This culminates in a description of a meta-method for formal method integration based on heterogeneous notations, and a discussion of the properties and benefits obtained through the use of such a method. A number of examples are provided that suggest how heterogeneous notations and integrated methods can be used, and how formal method integration based on the meta-method can occur. Acknowledgements: First and primary thanks must go to my advisor, Rick Hehner. With Rick's guidance, wisdom, good humour, and enthusiasm, this dissertation has grown from the germ of a rather silly idea to a final product. Rick is responsible (directly and indirectly) for much of the clarity and conciseness of the dissertation; ambiguity, waffling, and errors are my responsibility alone. Special acknowledgement goes to my second reader, Pamela Zave, for her detailed, thorough, and careful reading of the dissertation. Her good advice and perceptive comments made the dissertation stronger. I'm also grateful to my committee for 'torturing' me for three and a half years. In particular: Ric Holt, John Mylopoulos, Alberto Mendelzon, and Dave Wortman. Their suggestions, criticisms, and thorough grilling sessions helped immensely. Many other people commented on the research or presentation of the dissertation, especially when I was out giving job talks. In particular: François Pitt, Victor Ng, Jonathan Ostroff, Nick Graham, Kasi Periysamy, Peter King, Hanan Lutfiyya, Mike Bauer, Paul Sorenson, Jim Hoover, Piotr Rudnicki, Phil Brooke, and many others who attended Marktoberdorf 1996. I would especially like to thank those at Marktoberdorf who showed me that many (dubious) ideas sound good after a few rounds of beer. My friends provided much-needed support and distraction during my four and a half years of graduate school. Thanks to: Rob and Caitlin Jarrett, Matt "Plague Boy" Wilson, Ed Sykes, Leo and André Robichaud, François Pitt, Victor Ng, Arnold Rosenbloom, the CSGSBS, Wayne "The King" Hayes, Ian Maione, Wally Omiotek, Todd Ford, Brian Vaughan, the Commish, Kent Epp, and the rest of the Rangers/Mudhens. I thank NSERC, the Ontario government, NATO, and the Department of Computer Science for their support. Finally, special thanks to my family for putting up with me, and all this strange computer stuff. |
IoT enabled environmental monitoring system for smart cities | A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring, and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity, and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity, and CO2. In the developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an Excel sheet on a personal computer (PC) through a Graphical User Interface (GUI) made in LabVIEW. An Android application has also been developed, through which data is transferred from LabVIEW to a smartphone for monitoring data remotely. The results and the performance of the proposed system are discussed. |
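The sketch below stands in for the transmitter-to-receiver data path described in this abstract, using a UDP socket and a CSV file instead of the paper's RF nodes and LabVIEW GUI; the sensor readings are simulated values, not real driver output.

```python
# Minimal stand-in for the transmitter -> receiver data path described above.
# The paper uses dedicated RF nodes and a LabVIEW GUI; here a UDP socket and a
# CSV file are illustrative substitutes, and the sensor values are simulated.
import csv, json, random, socket, time

ADDR = ("127.0.0.1", 9999)

def read_sensors():
    # Placeholder for real temperature/humidity/CO2 drivers.
    return {"ts": time.time(),
            "temp_c": round(random.uniform(18, 32), 1),
            "humidity_pct": round(random.uniform(30, 70), 1),
            "co2_ppm": random.randint(400, 1200)}

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

with open("readings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["ts", "temp_c", "humidity_pct", "co2_ppm"])
    writer.writeheader()
    for _ in range(5):                                   # five sample packets
        tx.sendto(json.dumps(read_sensors()).encode(), ADDR)   # transmitter node
        packet, _ = rx.recvfrom(1024)                          # receiver node
        writer.writerow(json.loads(packet.decode()))
```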
Computer-aided diagnosis of solid breast nodules: use of an artificial neural network based on multiple sonographic features | A computer-aided diagnosis (CAD) algorithm identifying breast nodule malignancy using multiple ultrasonography (US) features and an artificial neural network (ANN) classifier was developed from a database of 584 histologically confirmed cases containing 300 benign and 284 malignant breast nodules. The features determining whether a breast nodule is benign or malignant were extracted from US images through digital image processing, with a relatively simple segmentation algorithm applied to the manually preselected region of interest. An ANN then distinguished malignant nodules in US images based on five morphological features representing the shape, edge characteristics, and darkness of a nodule. The structure of the ANN was selected using the k-fold cross-validation method with k=10. The ANN trained with a randomly selected half of the breast nodule images showed a normalized area under the receiver operating characteristic curve of 0.95. With the trained ANN, 53.3% of biopsies on benign nodules can be avoided with 99.3% sensitivity. The performance of the developed classifier was reexamined with new US mass images in a generalized patient population of 266 total cases (167 benign and 99 malignant). The developed CAD algorithm has the potential to increase the specificity of US for characterization of breast lesions. |
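A hedged sketch of the classification stage only: a small MLP over five morphological features evaluated with 10-fold cross-validation of ROC AUC, mirroring the k=10 setup mentioned above. The synthetic features and network size are assumptions; the segmentation and feature-extraction steps are not reproduced.

```python
# Hedged sketch of the classification stage: an MLP over five morphological
# features, with 10-fold cross-validation of ROC AUC. The synthetic features
# and network size are assumptions, not the paper's data or architecture.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
y = np.array([0] * 300 + [1] * 284)              # 300 benign, 284 malignant, as above
X = rng.normal(size=(y.size, 5)) + y[:, None] * 0.8   # 5 synthetic morphological features

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```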
Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States? | US voters shared large volumes of polarizing political news and information in the form of links to content from Russian, WikiLeaks and junk news sources. Was this low quality political information distributed evenly around the country, or concentrated in swing states and particular parts of the country? In this data memo we apply a tested dictionary of sources about political news and information being shared over Twitter over a ten day period around the 2016 Presidential Election. Using self-reported location information, we place a third of users by state and create a simple index for the distribution of polarizing content around the country. We find that (1) nationally, Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news. (2) Users in some states, however, shared more polarizing political news and information than users in other states. (3) Average levels of misinformation were higher in swing states than in uncontested states, even when weighted for the relative size of the user population in each state. We conclude with some observations about the impact of strategically disseminated polarizing information on public life. COMPUTATIONAL PROPAGANDA AND THE 2016 US ELECTION Social media plays an important role in the circulation of ideas about public policy and politics. Political actors and governments worldwide are deploying both people and algorithms to shape public life. Bots are pieces of software intended to perform simple, repetitive, and robotic tasks. They can perform legitimate tasks on social media like delivering news and information—real news as well as junk—or undertake malicious activities like spamming, harassment and hate speech. Whatever their uses, bots on social media platforms are able to rapidly deploy messages, replicate themselves, and pass as human users. They are also a pernicious means of spreading junk news over social networks of family and friends. Computational propaganda flourished during the 2016 US Presidential Election. There were numerous examples of misinformation distributed online with the intention of misleading voters or simply earning a profit. Multiple media reports have investigated how “fake news” may have propelled Donald J. Trump to victory. What kinds of political news and information were social media users in the United States sharing in advance of voting day? How much of it was extremist, sensationalist, conspiratorial, masked commentary, fake, or some other form of junk news? Was this misleading information concentrated in the battleground states where the margins of victory for candidates had big consequences for electoral outcomes? SOCIAL MEDIA AND JUNK NEWS Junk news, widely distributed over social media platforms, can in many cases be considered to be a form of computational propaganda. Social media platforms have served significant volumes of fake, sensational, and other forms of junk news at sensitive moments in public life, though most platforms reveal little about how much of this content there is or what its impact on users may be. The World Economic Forum recently identified the rapid spread of misinformation online as among the top 10 perils to society. Prior research has found that social media favors sensationalist content, regardless of whether the content has been fact checked or is from a reliable source. 
When junk news is backed by automation, either through dissemination algorithms that the platform operators cannot fully explain or through political bots that promote content in a preprogrammed way, political actors have a powerful set of tools for computational propaganda. Both state and non-state political actors deliberately manipulate and amplify non-factual information online. Fake news websites deliberately publish misleading, deceptive or incorrect information purporting to be real news for political, economic or cultural gain. These sites often rely on social media to attract web traffic and drive engagement. Both fake news websites and political bots are crucial tools in digital propaganda attacks—they aim to influence conversations, demobilize opposition and generate false support. SAMPLING AND METHOD Our analysis is based on a dataset of 22,117,221 tweets collected between November 1-11, 2016, that contained hashtags related to politics and the election in the US. Our previous analyses have been based on samples of political conversation, over Twitter that used hashtags that were relevant to the US election as a whole. In this analysis, we selected users who provided some evidence of physical location across the United States in their profiles. Within our initial sample, approximately 7,083,691 tweets, 32 percent of the total |
Severity Analyses of Single-Vehicle Crashes Based on Rough Set Theory | A single-vehicle crash is a typical pattern of traffic accidents and tends to cause heavy loss. The purpose of this study is to identify the factors significantly influencing single-vehicle crash injury severity, using data selected from Beijing city for a 4-year period. Rough set theory was applied to carry out the injury severity analysis, followed by a cross-validation method to estimate the prediction accuracy of the extracted rules. Results show that rough set theory is effective for analyzing the severity of single-vehicle crashes. |
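To make the rough-set step concrete, the sketch below builds indiscernibility classes over a handful of made-up crash records and computes the lower approximation of the "severe" class; the attributes and records are illustrative, not the Beijing data.

```python
# Illustrative rough-set computation (not the paper's dataset): given a decision
# table of crash records, compute the indiscernibility classes for a set of
# condition attributes and the lower approximation of the "severe" class.
from collections import defaultdict

records = [
    {"road": "dry", "light": "day",   "speeding": "yes", "severity": "severe"},
    {"road": "dry", "light": "day",   "speeding": "yes", "severity": "severe"},
    {"road": "wet", "light": "night", "speeding": "no",  "severity": "slight"},
    {"road": "wet", "light": "night", "speeding": "yes", "severity": "severe"},
    {"road": "dry", "light": "night", "speeding": "no",  "severity": "slight"},
    {"road": "dry", "light": "day",   "speeding": "no",  "severity": "slight"},
]
conditions = ("road", "light", "speeding")

# Indiscernibility classes: records identical on all condition attributes.
classes = defaultdict(list)
for i, r in enumerate(records):
    classes[tuple(r[a] for a in conditions)].append(i)

# Lower approximation of "severe": classes whose members are all severe.
severe = {i for i, r in enumerate(records) if r["severity"] == "severe"}
lower = {i for members in classes.values() if set(members) <= severe for i in members}
print("lower approximation of 'severe':", sorted(lower))   # certainly-severe records
```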
Ensemble Named Entity Recognition (NER): Evaluating NER Tools in the Identification of Place Names in Historical Corpora | The field of Spatial Humanities has advanced substantially in the past years. The identification and extraction of toponyms and spatial information mentioned in historical text collections has allowed its use in innovative ways, making possible the application of spatial analysis and the mapping of these places with geographic information systems. For instance, automated place name identification is possible with Named Entity Recognition (NER) systems. Statistical NER methods based on supervised learning, in particular, are highly successful with modern datasets. However, there are still major challenges to address when dealing with historical corpora. These challenges include language changes over time, spelling variations, transliterations, OCR errors, and sources written in multiple languages among others. In this article, considering a task of place name recognition over two collections of historical correspondence, we report an evaluation of five NER systems and an approach that combines these through a voting system. We found that although individual performance of each NER system was corpus dependent, the ensemble combination was able to achieve consistent measures of precision and recall, outperforming the individual NER systems. In addition, the results showed that these NER systems are not strongly dependent on preprocessing and translation to Modern English. |
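A minimal sketch of the voting idea: token-level place-name tags from several NER systems are combined by majority vote. The three toy taggers and the sentence are stand-ins for the five systems evaluated in the article.

```python
# Hedged sketch of the ensemble step: token-level place-name tags from several
# NER systems are combined by majority vote. The toy taggers and sentence are
# stand-ins for the five real systems evaluated in the article.
from collections import Counter

tokens = ["He", "wrote", "from", "Lisbon", "to", "Porto", "in", "1755"]
system_tags = [                        # one tag sequence per NER system
    ["O", "O", "O", "LOC", "O", "LOC", "O", "O"],
    ["O", "O", "O", "LOC", "O", "O",   "O", "O"],
    ["O", "O", "O", "LOC", "O", "LOC", "O", "DATE"],
]

def majority_vote(tag_sequences, min_votes=2):
    """Keep a non-O tag only if at least min_votes systems agree on it."""
    voted = []
    for column in zip(*tag_sequences):             # tags proposed for one token
        non_o = [t for t in column if t != "O"]
        if not non_o:
            voted.append("O")
            continue
        tag, count = Counter(non_o).most_common(1)[0]
        voted.append(tag if count >= min_votes else "O")
    return voted

print(list(zip(tokens, majority_vote(system_tags))))
# Lisbon and Porto survive the vote; the single-system DATE tag does not.
```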
Efficacy and safety of hyaluronan treatment in combination therapy with home exercise for knee osteoarthritis pain. | OBJECTIVE
To assess the efficacy and safety of intra-articular injections of sodium hyaluronate combined with a home exercise program (HEP) in the management of pain associated with osteoarthritis (OA) of the knee.
DESIGN
Single-blinded, parallel-design, 1-year clinical study with sequential enrollment.
SETTING
University-based outpatient physiatric practice.
PARTICIPANTS
Sixty patients (18 men, 42 women; age ≥50 y) with moderate-to-severe pain associated with OA of the knee.
INTERVENTIONS
(1) Five weekly intra-articular hyaluronate injections (5-HYL); (2) 3 weekly intra-articular hyaluronate injections (3-HYL); or (3) a combination of an HEP with 3 weekly intra-articular hyaluronate injections (3-HYL+HEP).
MAIN OUTCOME MEASURES
The primary outcome measure was a 100-mm visual analog scale for pain after a 50-foot walk (15.24 m). Secondary measures included the Western Ontario and McMaster Universities Osteoarthritis Index subscales.
RESULTS
The 3-HYL+HEP group had significantly faster onset of pain relief compared with the 3-HYL (P<.01) and 5-HYL groups (P=.01). All groups showed a mean symptomatic improvement from baseline (reduction in baseline pain at 3 mo was 59%, 49%, and 48% for the 3-HYL+HEP, 3-HYL, and 5-HYL groups, respectively) that was clinically and statistically significant. There were no between-group differences in the incidence or nature of adverse events.
CONCLUSIONS
The combined use of hyaluronate injections with HEP should be considered for management of moderate-to-severe pain in patients with knee OA. |
Inductive-deductive systems: a mathematical logic and statistical learning perspective | This document was only presented in a conference with no copyrighted proceedings. The theorems about incompleteness of arithmetic have often been cited as an argument against automatic theorem proving and expert systems. However, these theorems rely on a worst-case analysis, which might happen to be overly pessimistic with respect to real-world domain applications. For this reason, a new framework for a probabilistic analysis of logical complexity is presented in this paper. Specifically, the rate of non-decidable clauses and the convergence of a set of axioms toward the target one when the latter exists in the language are studied, by combining results from mathematical logic and from statistical learning. Two theoretical settings are considered, where learning relies respectively on Turing oracles guessing the provability of a statement from a set of statements, and computable approximations thereof. Interestingly, both settings lead to similar results regarding the convergence rate towards completeness. |
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents | Psychlab is a simulated psychology laboratory inside the first-person 3D game world of DeepMind Lab (Beattie et al., 2016). Psychlab enables implementations of classical laboratory psychological experiments so that they work with both human and artificial agents. Psychlab has a simple and flexible API that enables users to easily create their own tasks. As examples, we are releasing Psychlab implementations of several classical experimental paradigms including visual search, change detection, random dot motion discrimination, and multiple object tracking. We also contribute a study of the visual psychophysics of a specific state-of-the-art deep reinforcement learning agent: UNREAL (Jaderberg et al., 2016). This study leads to the surprising conclusion that UNREAL learns more quickly about larger target stimuli than it does about smaller stimuli. In turn, this insight motivates a specific improvement in the form of a simple model of foveal vision that turns out to significantly boost UNREAL’s performance, both on Psychlab tasks, and on standard DeepMind Lab tasks. By open-sourcing Psychlab we hope to facilitate a range of future such studies that simultaneously advance deep reinforcement learning and improve its links with cognitive science. |
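As a rough illustration of one of the paradigms mentioned above, the sketch below generates update steps of a random-dot-motion stimulus with a given coherence. It does not use the Psychlab API; dot count, speed, and field size are assumptions.

```python
# Illustrative stand-in (not the Psychlab API): one way to generate update steps
# of a random-dot-motion stimulus with a given coherence, the kind of paradigm
# the paper re-implements. Dot count, speed, and field size are assumptions.
import numpy as np

def step_dots(positions, rng, coherence=0.3, speed=0.02):
    """Move a fraction `coherence` of dots rightward; the rest move randomly."""
    n = positions.shape[0]
    coherent = rng.random(n) < coherence
    angles = np.where(coherent, 0.0, rng.uniform(0.0, 2 * np.pi, n))
    step = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    return np.mod(positions + step, 1.0)             # wrap around the unit square

rng = np.random.default_rng(0)
dots = rng.random((100, 2))                          # 100 dots in the unit square
for _ in range(10):
    dots = step_dots(dots, rng)
print(dots.shape)                                    # (100, 2)
```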
A Compact Embedding for Facial Expression Similarity | Most of the existing work on automatic facial expression analysis focuses on discrete emotion recognition, or facial action unit detection. However, facial expressions do not always fall neatly into pre-defined semantic categories. Also, the similarity between expressions measured in the action unit space need not correspond to how humans perceive expression similarity. Different from previous work, our goal is to describe facial expressions in a continuous fashion using a compact embedding space that mimics human visual preferences. To achieve this goal, we collect a large-scale faces-in-the-wild dataset with human annotations in the form: Expressions A and B are visually more similar when compared to expression C, and use this dataset to train a neural network that produces a compact (16-dimensional) expression embedding. We experimentally demonstrate that the learned embedding can be successfully used for various applications such as expression retrieval, photo album summarization, and emotion recognition. We also show that the embedding learned using the proposed dataset performs better than several other embeddings learned using existing emotion or action unit datasets. |
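A hedged sketch of the training objective implied by the triplet annotations: a small encoder maps faces to a 16-dimensional embedding trained with a triplet margin loss, so the anchor lands closer to the "more similar" expression than to the third one. The architecture, margin, and random batches are assumptions; the paper's network is substantially larger.

```python
# Hedged sketch (not the paper's network): a tiny encoder producing a 16-d
# expression embedding, trained with a triplet margin loss over (anchor,
# similar, dissimilar) faces. Architecture, margin, and data are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 16),                      # 16-dimensional expression embedding
)

triplet = nn.TripletMarginLoss(margin=0.2)
anchor, similar, dissimilar = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = triplet(encoder(anchor), encoder(similar), encoder(dissimilar))
loss.backward()
print(float(loss))
```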
Evolving an expert checkers playing program without using human expertise | An evolutionary algorithm has taught itself how to play the game of checkers without using features that would normally require human expertise. Using only the raw positions of pieces on the board and the piece differential, the evolutionary program optimized artificial neural networks to evaluate alternative positions in the game. Over the course of several hundred generations, the program taught itself to play at a level that is competitive with human experts (one level below human masters). This was verified by playing the best-evolved neural network against 165 human players on an Internet gaming zone. The neural network’s performance earned a rating that is better than 99.61 percent of all registered players at the website. Control experiments between the best-evolved neural network and a program that relies on material advantage indicate the superiority of the neural network both at equal levels of look-ahead and CPU time. The results suggest that the principles of Darwinian evolution may be usefully applied to solving problems that have not yet been solved by human expertise. |
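The skeleton below illustrates the kind of evolutionary loop described here: a population of weight vectors for a small board-evaluation network is mutated and truncation-selected each generation. The fitness function is only a placeholder; in the paper, fitness comes from games played between the evolved networks.

```python
# Skeleton of the evolutionary loop described above (not the authors' code): a
# population of weight vectors for a small board-evaluation network is mutated
# and selected each generation. The fitness function here is a placeholder; in
# the paper, fitness comes from games played between the evolved networks.
import numpy as np

rng = np.random.default_rng(0)
N_WEIGHTS = 32 * 10 + 10                 # toy net: 32 board inputs -> 10 hidden -> 1
POP, GENERATIONS, SIGMA = 20, 50, 0.1

def evaluate(weights, board):
    """Toy evaluation net: tanh hidden layer, linear output."""
    w_in = weights[:320].reshape(32, 10)
    w_out = weights[320:]
    return float(np.tanh(board @ w_in) @ w_out)

def fitness(weights):
    # Placeholder: reward agreement with the piece differential on random boards.
    boards = rng.uniform(-1, 1, size=(64, 32))
    scores = np.array([evaluate(weights, b) for b in boards])
    return float(np.corrcoef(scores, boards.sum(axis=1))[0, 1])

population = rng.normal(0, 0.5, size=(POP, N_WEIGHTS))
for gen in range(GENERATIONS):
    offspring = population + rng.normal(0, SIGMA, size=population.shape)
    combined = np.vstack([population, offspring])
    ranked = combined[np.argsort([-fitness(w) for w in combined])]
    population = ranked[:POP]            # keep the best half (mu + lambda selection)
print("best placeholder fitness:", fitness(population[0]))
```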
Hoaxy: A Platform for Tracking Online Misinformation | Massive amounts of misinformation have been observed to spread in uncontrolled fashion across social media. Examples include rumors, hoaxes, fake news, and conspiracy theories. At the same time, several journalistic organizations devote significant efforts to high-quality fact checking of online claims. The resulting information cascades contain instances of both accurate and inaccurate information, unfold over multiple time scales, and often reach audiences of considerable size. All these factors pose challenges for the study of the social dynamics of online news sharing. Here we introduce Hoaxy, a platform for the collection, detection, and analysis of online misinformation and its related factchecking efforts. We discuss the design of the platform and present a preliminary analysis of a sample of public tweets containing both fake news and fact checking. We find that, in the aggregate, the sharing of fact-checking content typically lags that of misinformation by 10–20 hours. Moreover, fake news are dominated by very active users, while fact checking is a more grass-roots activity. With the increasing risks connected to massive online misinformation, social news observatories have the potential to help researchers, journalists, and the general public understand the dynamics of real and fake news sharing. |
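A small illustration of the lag measurement reported above: given two hourly sharing time series, the lag is estimated as the shift that best aligns fact-checking activity with the earlier misinformation cascade. The synthetic series and the 13-hour true lag are assumptions.

```python
# Illustrative calculation of the lag between two hourly sharing time series
# (misinformation vs. fact-checking), in the spirit of the 10-20 hour lag the
# platform reports. The synthetic series and 13-hour true lag are assumptions.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(240)                               # ten days of hourly counts
burst = np.exp(-0.5 * ((hours - 60) / 6.0) ** 2)     # a misinformation cascade
misinfo = rng.poisson(5 + 200 * burst)
factcheck = rng.poisson(2 + 80 * np.roll(burst, 13)) # same shape, 13 h later

def estimated_lag(a, b, max_lag=48):
    """Lag (in hours) at which shifting b backwards best matches a."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    scores = [np.dot(a[: len(a) - k], b[k:]) for k in range(max_lag + 1)]
    return int(np.argmax(scores))

print("estimated fact-checking lag:", estimated_lag(misinfo, factcheck), "hours")
```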
Self-powered wireless temperature sensors exploit RFID technology | Emerging RFID technology lets us embed sensors into a very small chip, creating a wireless sensing device. So, we set out to develop such a single-chip versatile temperature sensor. We also wanted to be able to transfer our design to an implantable temperature sensor for an animal healthcare application with minimal structural modification. We discuss the implementation of the temperature sensor. The fully integrated complementary metal-oxide semiconductor (CMOS) batteryless device measures temperature and performs calibration to compensate for the sensor's inherent imperfections. An RF link using passive RFID's backscattering technique wirelessly transmits the data to a reading device while extracting power from the same "airwave," letting the device operate anywhere and last almost forever. The entire microchip, including the temperature sensor, consumes less than a few microamperes over half a second, so the scanning device can capture data from longer read distances. |
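A hedged sketch of the calibration step mentioned above: a two-point linear correction mapping raw sensor codes to temperature. The reference points are made-up example values, not the chip's actual characteristics.

```python
# Hedged sketch of the calibration step mentioned above: a two-point linear
# correction that maps a raw on-chip reading to temperature. The raw codes and
# reference temperatures are made-up example values, not the chip's data.
def two_point_calibration(raw_lo, temp_lo, raw_hi, temp_hi):
    """Return a function converting a raw reading to degrees Celsius."""
    gain = (temp_hi - temp_lo) / (raw_hi - raw_lo)
    offset = temp_lo - gain * raw_lo
    return lambda raw: gain * raw + offset

# Example: raw code 512 observed at 0 degC, 790 at 40 degC (assumed values).
to_celsius = two_point_calibration(512, 0.0, 790, 40.0)
print(round(to_celsius(651), 1))   # ~20.0 degC for a mid-scale reading
```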
When in Rome ... Learn why the Romans do what they do: how multicultural learning experiences facilitate creativity. | Research suggests that living in and adapting to foreign cultures facilitates creativity. The current research investigated whether one aspect of the adaptation process-multicultural learning-is a critical component of increased creativity. Experiments 1-3 found that recalling a multicultural learning experience: (a) facilitates idea flexibility (e.g., the ability to solve problems in multiple ways), (b) increases awareness of underlying connections and associations, and (c) helps overcome functional fixedness. Importantly, Experiments 2 and 3 specifically demonstrated that functional learning in a multicultural context (i.e., learning about the underlying meaning or function of behaviors in that context) is particularly important for facilitating creativity. Results showed that creativity was enhanced only when participants recalled a functional multicultural learning experience and only when participants had previously lived abroad. Overall, multicultural learning appears to be an important mechanism by which foreign living experiences lead to creative enhancement. |
Stabilization at upright equilibrium position of a double inverted pendulum with unconstrained bat optimization | The double inverted pendulum plant has long been used by control researchers as an established model for studies on stability. Stabilizing such a system using the linearized plant dynamics has yielded satisfactory results for many researchers applying classical control techniques. In this work, the established model was tested under the influence of time delay, with the controller fine-tuned using a bat (BAT) algorithm whose fitness function is the square of the error. The proposed method gave better results when time delay was absent, and the calculated values indicate the issues that arise once time delay is incorporated. |
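The sketch below shows a simplified, Yang-style bat algorithm minimizing a toy quadratic stand-in for the squared-error fitness used to tune the controller. The gain bounds, loudness and pulse-rate settings, and the "ideal" gains are assumptions, and the pendulum dynamics are not simulated.

```python
# Hedged sketch of bat-algorithm tuning (a simplified variant of Yang-style
# update rules, not the authors' implementation). The objective is a toy
# quadratic stand-in for the squared-error fitness; bounds and settings are
# assumptions, and no pendulum dynamics are simulated.
import numpy as np

rng = np.random.default_rng(0)

def cost(gains):
    # Stand-in for the squared-error objective: pretend the ideal gains are (10, 2, 5).
    return float(np.sum((gains - np.array([10.0, 2.0, 5.0])) ** 2))

n_bats, dim, iters = 20, 3, 100
f_min, f_max, alpha, gamma = 0.0, 2.0, 0.9, 0.9
loudness = np.full(n_bats, 1.0)
pulse_rate = np.full(n_bats, 0.5)
x = rng.uniform(0.0, 20.0, size=(n_bats, dim))       # candidate gain vectors
best = min(x, key=cost).copy()

for t in range(1, iters + 1):
    for i in range(n_bats):
        freq = f_min + (f_max - f_min) * rng.random()
        candidate = x[i] + (best - x[i]) * freq        # pull toward the best bat
        if rng.random() > pulse_rate[i]:               # occasional local walk
            candidate = best + 0.3 * loudness.mean() * rng.normal(size=dim)
        if cost(candidate) < cost(x[i]) and rng.random() < loudness[i]:
            x[i] = candidate
            loudness[i] *= alpha                       # quieter after a success
            pulse_rate[i] = 0.5 * (1.0 - np.exp(-gamma * t))
        if cost(x[i]) < cost(best):
            best = x[i].copy()

print("tuned gains:", np.round(best, 2))               # approaches [10, 2, 5]
```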
Getting a grip on tangible interaction: a framework on physical space and social interaction | Our current understanding of human interaction with hybrid or augmented environments is very limited. Here we focus on 'tangible interaction', denoting systems that rely on embodied interaction, tangible manipulation, physical representation of data, and embeddedness in real space. This synthesis of prior 'tangible' definitions enables us to address a larger design space and to integrate approaches from different disciplines. We introduce a framework that focuses on the interweaving of the material/physical and the social, contributes to understanding the (social) user experience of tangible interaction, and provides concepts and perspectives for considering the social aspects of tangible interaction. This understanding lays the ground for evolving knowledge on collaboration-sensitive tangible interaction design. Lastly, we analyze three case studies, using the framework, thereby illustrating the concepts and demonstrating their utility as analytical tools. |