Combinatorial Optimization on Gate Model Quantum Computers: A Survey
The advent of quantum computing processors with the potential to scale beyond experimental capacities magnifies the importance of studying their applications. Combinatorial optimization problems are among the most promising applications of these new devices: they recur throughout industrial applications, and they are in general difficult for classical computing hardware. In this work, we provide a survey of approaches to solving different types of combinatorial optimization problems, in particular quadratic unconstrained binary optimization (QUBO) problems, on a gate model quantum computer. We focus mainly on four approaches: digitized adiabatic quantum computing, global quantum optimization algorithms, quantum algorithms that approximate the ground state of a general QUBO problem, and quantum sampling. We also discuss quantum algorithms that are custom designed to solve certain types of QUBO problems.
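As a point of reference for what these quantum approaches compete against, a QUBO instance can be stated and solved classically by exhaustive search in a few lines. The matrix below is a made-up toy instance for illustration, not one drawn from the survey:

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy x^T Q x of binary assignment x under QUBO matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_ground_state(Q):
    """Exhaustive classical baseline: the minimum-energy assignment."""
    n = len(Q)
    return min(product([0, 1], repeat=n), key=lambda x: qubo_energy(Q, x))

# Toy 3-variable instance (hypothetical coefficients, upper triangular)
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
best = brute_force_ground_state(Q)   # -> (1, 0, 1), energy -2
```

Exhaustive search is exponential in the number of variables, which is precisely why approximate quantum (and classical) heuristics are of interest.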
Inferring human activities from GPS tracks
The collection of huge amounts of tracking data, made possible by the widespread use of GPS devices, has enabled the analysis of such data for several application domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data detected by GPS devices lacks semantic information, since it does not natively provide any additional context, such as the places people visited or the activities they performed. Traditionally, this information is collected through hand-filled questionnaires in which a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of obtaining large amounts of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stop points, trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure derived from the gravity law, we infer the activity performed. We experimented with and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future work.
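The gravity-law scoring alluded to above can be sketched as follows. The mass-over-squared-distance form, the candidate tuples, and the function names are illustrative assumptions, not the authors' exact formulation:

```python
def gravity_score(popularity, distance_m):
    """Gravity-law attractiveness: mass over squared distance (one common form)."""
    return popularity / max(distance_m, 1.0) ** 2

def infer_poi(stop_candidates):
    """Given (poi_name, category, popularity, distance_m) candidates around a
    stop point, return the most likely POI and normalized probabilities."""
    scores = {name: gravity_score(pop, d) for name, cat, pop, d in stop_candidates}
    total = sum(scores.values())
    probs = {name: s / total for name, s in scores.items()}
    return max(probs, key=probs.get), probs

# Hypothetical candidates near one stop point
candidates = [("cafe", "food", 100, 50.0), ("museum", "culture", 400, 200.0)]
best_poi, probs = infer_poi(candidates)   # the nearer cafe wins despite lower mass
```

The POI's category (here "food") would then map to an activity label such as "eating", which is the annotation the algorithm attaches to the trajectory.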
Texture brush: an interactive surface texturing interface
This paper presents Texture Brush, an interactive interface for texturing 3D surfaces. We extend the conventional exponential map to a more general setting, in which the generator can be an arbitrary curve. Based on our extended exponential map, we develop a local parameterization method which naturally supports anisotropic texture mapping. With Texture Brush, the user can easily specify such local parameterization with a free-form stroke on the surface. We also propose a set of intuitive operations which are mainly based on 3D painting metaphor, including texture painting, texture cloning, texture animation design, and texture editing. Compared to the existing surface texturing techniques, our method enables a smoother and more natural work flow so that the user can focus on the design task itself without switching back and forth among different tools or stages. The encouraging experimental results and positive evaluation by artists demonstrate the efficacy of our Texture Brush for interactive texture mapping.
A comprehensive study of hybrid neural network hidden Markov model for offline handwritten Chinese text recognition
This paper proposes an effective segmentation-free approach using a hybrid neural network hidden Markov model (NN-HMM) for offline handwritten Chinese text recognition (HCTR). In the general Bayesian framework, the handwritten Chinese text line is sequentially modeled by HMMs, each representing one character class, while the NN-based classifier is adopted to calculate the posterior probability of all HMM states. The key issues in feature extraction, character modeling, and language modeling are comprehensively investigated to show the effectiveness of the NN-HMM framework for offline HCTR. First, a conventional deep neural network (DNN) architecture is studied with a well-designed feature extractor. As for the training procedure, label refinement using forced alignment and sequence training can yield significant gains on top of the frame-level cross-entropy criterion. Second, a deep convolutional neural network (DCNN) with automatically learned discriminative features demonstrates its superiority to the DNN in the HMM framework. Moreover, to solve the challenging problem of distinguishing highly confusable classes in the large vocabulary of Chinese characters, the NN-based classifier outputs 19,900 HMM states as the classification units via high-resolution modeling within each character. On the ICDAR 2013 competition task of the CASIA-HWDB database, DNN-HMM yields a promising character error rate (CER) of 5.24% by making a good trade-off between computational complexity and recognition accuracy. To the best of our knowledge, DCNN-HMM achieves the best published CER of 3.53%.
Joint Optimization of User-desired Content in Multi-document Summaries by Learning from User Feedback
In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing high-quality summaries with a minimum number of iterations and rounds of feedback. We conduct multiple simulation-based experiments and analyze the effect of feedback-based concept selection in the ILP setup in order to maximize the user-desired content in the summary.
Privacy-preserving Network Provenance
Network accountability, forensic analysis, and failure diagnosis are becoming increasingly important for network management and security. Network provenance significantly aids network administrators in these tasks by explaining system behavior and revealing the dependencies between system states. Although resourceful, network provenance can sometimes be too rich, revealing potentially sensitive information that was involved in system execution. In this paper, we propose a cryptographic approach to preserve the confidentiality of provenance (sub)graphs while allowing users to query and access the parts of the graph for which they are authorized. Our proposed solution is a novel application of searchable symmetric encryption (SSE) and more generally structured encryption (SE). Our SE-enabled provenance system allows a node to enforce access control policies over its provenance data even after the data has been shipped to remote nodes (e.g., for optimization purposes). We present a prototype of our design and demonstrate its practicality, scalability, and efficiency for both provenance maintenance and querying.
Quad-Active-Bridge DC–DC Converter as Cross-Link for Medium-Voltage Modular Inverters
One of the main challenges of the solid-state transformer (SST) lies in the implementation of the dc–dc stage. In this paper, a quadruple-active-bridge (QAB) dc–dc converter is investigated for use as a basic module of a modular three-stage SST. Besides high power density and soft-switching operation (also found in other converters), the QAB converter provides a solution with a reduced number of high-frequency transformers, since more bridges are connected to the same multiwinding transformer. To ensure soft switching over the entire operating range of the QAB converter, the triangular current-mode modulation strategy, previously adopted for the dual-active-bridge converter, is extended to the QAB converter. The theoretical analysis is developed considering balanced (equal power processed by the medium-voltage (MV) cells) and unbalanced (unequal power processed by the MV cells) conditions. In order to validate the theoretical analysis developed in the paper, a 2-kW prototype is built and experimentally tested.
Payments system reform: the Australian experience
Most central banks have some type of broad responsibility for oversight of the payments system. Often this responsibility is coupled with regulatory powers relating to high-value payments. In Australia, however, the responsibility runs much broader than this, encompassing the efficiency and competitiveness of the payments system as a whole, including retail payments. This responsibility was given to the RBA following a wide-ranging inquiry into the structure of financial regulation in the mid-1990s—the so-called Wallis Inquiry. This inquiry recommended that bank supervision be moved from the RBA to a stand-alone prudential regulator—today known as the Australian Prudential Regulation Authority (APRA)—but also recommended that the RBA be given responsibility for the overall efficiency of the payments system. This recommendation reflected, in part, recognition of the fact that the RBA was already highly enmeshed in the payments system and had considerable expertise in what are often highly technical matters. It nevertheless came as a surprise to us. We had not been arguing for an extension of our powers, and we had not been seeking responsibility for payments system efficiency. In accepting the committee's recommendations, the government took the rather unusual step of establishing a second board within the RBA—the Payments System Board, which is chaired by the governor.
Novel Control Method for Multimodule PV Microinverter With Multiple Functions
This paper presents a novel control method for a multimodule photovoltaic microinverter (MI). The proposed MI employs a two-stage topology with an active-clamped current-fed push–pull converter cascaded with a full-bridge inverter. This system can operate in grid-connected mode to feed power to the grid with a programmable power factor. This system can also operate in line-interactive mode, i.e., share load power without feeding power to the grid. In the event of grid power failure, the MI can operate in standalone mode to supply uninterruptible power to the load. This paper presents a multiloop control scheme with power programmable capability for achieving the above multiple functions. In addition, the proposed control scheme embeds a multimodule parallel capability, so that multiple MI modules can be paralleled to enlarge capacity with autonomous control in all operation modes. Finally, three 250-W MI modules are adopted to demonstrate the effectiveness of the proposed control method in simulations as well as experiments.
Thoughts and Reflections
SUMMARY The author presents his hopes for the future, shaped by a lifetime of dedication to the field of religion and disability.
Modern features for systems programming languages
This paper presents a case for the design and implementation of a modern programming language for systems programming. It shows that traditional systems languages like C and Fortran possess features no longer relevant to the modern world. The paper also demonstrates how many of these features have a negative impact on the practice of systems programming. Finally, it proposes alternatives to these features that promote better practice. Additionally, the paper presents a number of features that should be included in a modern systems language and argues in favor of their inclusion. It shows that these features have a beneficial impact on the expressive power of the language or the practices it promotes. The paper also demonstrates that these features do not compromise the objectives of simplicity, efficiency, and direct control that characterize a systems language.
A complete U-V-disparity study for stereovision based 3D driving environment analysis
Reliable understanding of the 3D driving environment is vital for obstacle detection and adaptive cruise control (ACC) applications. Laser or millimeter wave radars have shown good performance in measuring relative speed and distance in a highway driving environment. However the accuracy of these systems decreases in an urban traffic environment as more confusion occurs due to factors such as parked vehicles, guardrails, poles and motorcycles. A stereovision based sensing system provides an effective supplement to radar-based road scene analysis with its much wider field of view and more accurate lateral information. This paper presents an efficient solution using a stereovision based road scene analysis algorithm which employs the "U-V-disparity" concept. This concept is used to classify a 3D road scene into relative surface planes and characterize the features of road pavement surfaces, roadside structures and obstacles. Real-time implementation of the disparity map calculation and the "U-V-disparity" classification is also presented.
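The U- and V-disparity images named above are simply per-column and per-row histograms of the disparity map: planar road surfaces project to slanted lines in V-disparity, while vertical obstacles project to near-vertical segments. A minimal sketch (the tiny synthetic disparity map is illustrative):

```python
def v_disparity(disp_map, max_disp):
    """Per-row disparity histogram: row r, disparity d -> count of pixels
    in row r with disparity d. Road planes appear as slanted lines here."""
    return [[row.count(d) for d in range(max_disp)] for row in disp_map]

def u_disparity(disp_map, max_disp):
    """Per-column disparity histogram, used to localize obstacles laterally."""
    return [[col.count(d) for d in range(max_disp)] for col in zip(*disp_map)]

# Tiny synthetic disparity map: top row at disparity 1, bottom row at 2,
# as a fronto-parallel road patch might produce
disp = [[1, 1],
        [2, 2]]
```

A real pipeline would then fit lines to these histogram images (e.g. with a Hough transform) to separate the road profile from obstacle segments.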
Texture overlay onto deformable surface for virtual clothing
In this article, we describe a method for overlaying an arbitrary texture image onto the surface of a T-shirt worn by a user. In this method, the texture image is divided in advance into a number of patches. On the T-shirt, markers are printed at the positions corresponding to the vertices of the patches. The markers on the surface of the T-shirt are tracked in the motion image taken by a camera. The texture image is warped according to the tracked positions of the markers and overlaid onto the captured image. This article presents experimental results with a pilot virtual clothing system implemented based on the proposed method.
Are labour-intensive efforts to prevent pressure ulcers cost-effective?
BACKGROUND Pressure ulcers are a major problem in Danish healthcare, with a prevalence of 13-43% among hospitalized patients. The associated costs to the Danish Health Care Sector are estimated to be €174.5 million annually. In 2010, The Danish Society for Patient Safety introduced the Pressure Ulcer Bundle (PUB) in order to reduce hospital-acquired pressure ulcers by a minimum of 50% in five hospitals. The PUB consists of evidence-based preventive initiatives implemented by ward staff using the Model for Improvement. OBJECTIVE To investigate the cost-effectiveness of labour-intensive efforts to reduce pressure ulcers in the Danish Health Care Sector, comparing the PUB with standard care. METHODS A decision analytic model was constructed to assess the costs and consequences of hospital-acquired pressure ulcers during an average hospital admission in Denmark. The model inputs were based on a systematic review of clinical efficacy data combined with local cost and effectiveness data from the Thy-Mors Hospital, Denmark. A probabilistic sensitivity analysis (PSA) was conducted to assess the uncertainty. RESULTS Prevention of hospital-acquired pressure ulcers by implementing labour-intensive efforts according to the PUB was cost-saving and resulted in an improved effect compared to standard care. The incremental cost of the PUB was -€38.62. The incremental effects were 9.3% more pressure ulcers prevented and 0.47% more deaths prevented. The PSAs confirmed the dominance of the incremental cost-effectiveness ratio (ICER) for both prevented pressure ulcers and saved lives with the PUB. CONCLUSION This study shows that labour-intensive efforts to reduce pressure ulcers on hospital wards can be cost-effective and lead to savings in total costs of hospital and social care. KEY LIMITATIONS The data included in the study regarding costs and effects of the PUB in Denmark were based on preliminary findings from a pilot study at Thy-Mors Hospital and on the literature.
Single Image Super-Resolution via Multiple Mixture Prior Models
Example learning-based single image super-resolution (SR) is a promising method for reconstructing a high-resolution (HR) image from a single input low-resolution (LR) image. Many popular SR approaches tend to be either time- or space-intensive, which limits their practical application. Hence, some research has focused on a subspace view and delivered state-of-the-art results. In this paper, we utilize mixture prior models to transform the large nonlinear feature space of LR images into a group of linear subspaces in the training phase. In particular, we first partition image patches into several groups by a novel selective patch processing method based on the difference curvature of LR patches, and then learn the mixture prior models in each group. Moreover, different prior distributions have varying effectiveness in SR, and in this case, we find that the Student-t prior shows stronger performance than the well-known Gaussian prior. In the testing phase, we adopt the learned multiple mixture prior models to map the input LR features into the appropriate subspace, and finally reconstruct the corresponding HR image in a novel mixed matching way. Experimental results indicate that the proposed approach is both quantitatively and qualitatively superior to several state-of-the-art SR methods.
DrAcc: a DRAM based Accelerator for Accurate CNN Inference
Modern Convolutional Neural Networks (CNNs) are computation and memory intensive. Thus it is crucial to develop hardware accelerators to achieve high performance as well as power/energy efficiency on resource-limited embedded systems. DRAM-based CNN accelerators exhibit great potential but face inference accuracy and area overhead challenges. In this paper, we propose DrAcc, a novel DRAM-based processing-in-memory CNN accelerator. DrAcc achieves high inference accuracy by implementing a ternary weight network using in-DRAM bit operations with simple enhancements. The data partition and mapping strategies can be flexibly configured for the best trade-off among performance, power and energy consumption, and DRAM data reuse factors. Our experimental results show that DrAcc achieves 84.8 FPS (frames per second) at 2 W and a 2.9× power efficiency improvement over the process-near-memory design.
The Cambridge Economic History of Modern Europe by Stephen Broadberry
Unlike most existing textbooks on the economic history of modern Europe, which offer a country-by-country approach, The Cambridge Economic History of Modern Europe rethinks Europe's economic history since 1700 as unified and pan-European, with the material organized by topic rather than by country. This second volume tracks Europe's economic history through three major phases since 1870. The first phase was an age of globalization and of European economic and political dominance that lasted until the First World War. The second, from 1914 to 1945, was one of war, deglobalization, and depression and the third was one of growing integration not only within Europe but also between Europe and the global economy. Leading authors offer comprehensive and accessible introductions to these patterns of globalization and deglobalization as well as to key themes in modern economic history such as economic growth, business cycles, sectoral developments, and population and living standards.
Comparative Analysis and Simulation of Different CMOS Full Adders Using Cadence in 90 nm Technology
This paper analyzes and compares four full adders with different logic styles (conventional, transmission gate, 14-transistor (14T), and GDI-based) in terms of transistor count, power dissipation, delay, and power-delay product. The analysis is performed on the Virtuoso platform, using the Cadence tool with the available GPDK 90-nm kit. The widths of the NMOS and PMOS transistors are set at 120 nm and 240 nm, respectively. The transmission gate full adder has a clear advantage in speed but consumes more power. The GDI full adder gives a reduced voltage swing, being unable to pass logic 1 and logic 0 completely, which degrades its output. The transmission gate full adder shows the best performance in terms of delay (0.417530 ns), whereas the 14T full adder shows better performance across all three aspects.
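For reference, the Boolean behaviour that all four transistor-level designs must realize is the standard one-bit full adder, which can be checked exhaustively in software:

```python
def full_adder(a, b, cin):
    """Gate-level one-bit full adder: two XORs for the sum,
    two ANDs and an OR for the carry-out."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

# Exhaustive functional check against integer addition
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

The logic-style comparison in the paper concerns how these same eight input cases are realized in transistors, trading transistor count against speed, power, and output swing.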
Withdrawal of immunosuppression in Crohn's disease treated with scheduled infliximab maintenance: a randomized trial.
BACKGROUND & AIMS The benefit to risk ratio of concomitant immunosuppressives with scheduled infliximab (IFX) maintenance therapy for Crohn's disease is an issue of debate. We aimed to study the influence of immunosuppressives discontinuation in patients in remission with combination therapy in an open-label, randomized, controlled trial. METHODS Patients with controlled disease ≥6 months after the start of IFX (5 mg/kg intravenously) combined with immunosuppressives were randomized to continue (Con) or to interrupt (Dis) immunosuppressives, while all patients received scheduled IFX maintenance therapy for 104 weeks. Primary end point was the proportion of patients who required a decrease in IFX dosing interval or stopped IFX therapy. Secondary end points included IFX trough levels, safety, and mucosal healing. RESULTS A similar proportion of patients (24/40, 60% Con; 22/40, 55% Dis) needed a change in IFX dosing interval or stopped IFX therapy (11/40 Con, 9/40 Dis). C-reactive protein (CRP) was higher and IFX trough levels were lower in the Dis group (Dis: CRP, 2.8 mg/L; interquartile range [IQR], 1.0-8.0; Con: CRP, 1.6 mg/L; IQR, 1.0-5.6, P < .005; trough IFX: Dis: 1.65 μg/mL; IQR, 0.54-3.68; Con: 2.87 μg/mL; IQR, 1.35-4.72, P < .0001). Low IFX trough levels correlated with increased CRP and clinical score. Mucosal ulcers were absent at week 104 in 64% (Con) and 61% (Dis) of evaluated patients with ongoing response to IFX. CONCLUSIONS Continuation of immunosuppressives beyond 6 months offers no clear benefit over scheduled IFX monotherapy but is associated with higher median IFX trough and decreased CRP levels. The impact of these observations on long-term outcomes needs to be explored further.
A power assessment of machining tools
Energy conservation is becoming increasingly important in today's society, due to growing awareness of environmental and economic impacts. This project experimentally measures the power consumption, which is related to the energy consumption, of machines in the Laboratory for Manufacturing and Productivity, in order to determine the energy cost of the machines. This project then compares the results found experimentally to the theoretical minimum energy consumption in order to reference the measurements to the ideal energy consumption. Finally, this project attempts to find documentation of these energy costs in order to project the results found experimentally onto machines not physically available for measurement. This project found that the machines in the Laboratory for Manufacturing and Productivity used more energy than was necessary while running, due to the sometimes large amount of power needed to run the idle machines. The specifications given by the machines' manufacturers were adequate to estimate the maximum power requirements. Combining these estimates with the motor properties allowed one to estimate the power requirements of both unloaded operation (while the machine was idle) and loaded operation. Thesis Supervisor: Timothy G. Gutowski Title: Professor
Facile synthesis of cadmium sulphide-polyaniline (CdS-PANI) nanocomposite and its field emission investigations
A simple two-step chemical route has been employed to synthesize a cadmium sulphide-polyaniline (CdS-PANI) nanocomposite. In the first step, PANI nanotubes are synthesized by oxidative polymerization, followed by deposition of CdS nanoparticles on them. In order to investigate the structural, chemical, and optical properties of the CdS-PANI nanocomposite, various characterization techniques have been used. The microscopic analysis reveals that the PANI nanotubes, with lengths of several microns and cavity diameters of ~10 to 20 nm, are decorated with CdS nanoparticles of 5 to 10 nm. The XRD spectrum indicates formation of the crystalline hexagonal phase of CdS, supported by the SAED pattern obtained during TEM analysis. The FTIR and UV-visible spectra confirm formation of the conducting phase of PANI. The field emission studies of the CdS-PANI nanocomposite indicate a low turn-on field of 1.4 V/μm, corresponding to an emission current density of ~1 μA/cm2, and an emission current density of ~5.5 mA/cm2 has been drawn at an applied field of 3.8 V/μm. Furthermore, the emission current is observed to be fairly stable at the preset values over long durations. The enhanced field emission properties, exhibited in terms of the low turn-on field and the delivery of very high emission current density at a relatively low applied field, are attributed to the nanometric dimensions of the PANI nanotubes and CdS nanoparticles, and to modulation of electronic properties due to formation of a heterojunction. The overall field emission results establish the CdS-PANI nanocomposite as a promising material for field emission based devices.
Chinese Dialect Identification Using Tone Features Based on Pitch Flux
This paper presents a method to extract tone-relevant features based on pitch flux from continuous speech signals. The autocorrelations of two adjacent frames are calculated, and the covariance between them is estimated to extract multi-dimensional pitch flux features. These features, together with MFCCs, are modeled in a 2-stream GMM model and are tested in a 3-dialect identification task for Chinese. The pitch flux features have proven very effective in identifying tonal languages from short speech segments. For test speech segments of 3 seconds, the 2-stream model achieves more than 30% error reduction over the MFCC-based model.
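The core idea of correlating the autocorrelations of adjacent frames can be sketched as follows. The scalar covariance shown is a simplification of the paper's multi-dimensional pitch flux features, and the toy signal is invented for illustration:

```python
def autocorr(frame, max_lag):
    """Autocorrelation sequence of one frame for lags 0..max_lag-1."""
    return [sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
            for lag in range(max_lag)]

def pitch_flux(frame_a, frame_b, max_lag=32):
    """Covariance between the autocorrelation sequences of two adjacent
    frames -- a rough scalar stand-in for the paper's pitch-flux feature."""
    ra, rb = autocorr(frame_a, max_lag), autocorr(frame_b, max_lag)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    return sum((x - ma) * (y - mb) for x, y in zip(ra, rb)) / len(ra)

# Toy alternating signal, 32 samples, standing in for one voiced frame
frame = [1.0, -1.0] * 16
```

In the paper these features are then concatenated with MFCCs and modeled by the 2-stream GMM; frames with steady pitch yield smoothly varying autocorrelations, while tonal contours change them between adjacent frames.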
Top-down control of visual attention
Top-down visual attention improves perception of selected stimuli and that improvement is reflected in the neural activity at many stages throughout the visual system. Recent studies of top-down attention have elaborated on the signatures of its effects within visual cortex and have begun identifying its causal basis. Evidence from these studies suggests that the correlates of spatial attention exhibited by neurons within the visual system originate from a distributed network of structures involved in the programming of saccadic eye movements. We summarize this evidence and discuss its relationship to the neural mechanisms of spatial working memory.
The optimization of success probability for software projects using genetic algorithms
The software development process is usually affected by many risk factors that may cause loss of control and failure, and which thus need to be identified and mitigated by project managers. Software development companies are currently improving their processes by adopting internationally accepted practices, with the aim of avoiding risks and demonstrating the quality of their work. This paper aims to develop a method to identify which risk factors are most influential in determining project outcome. This method must also propose a cost-effective investment of project resources to improve the probability of project success. To achieve these aims, we use the probability of success relative to cost to calculate the efficiency of the probable project outcome. The definition of efficiency used in this paper was proposed by researchers in the field of education. We then use this efficiency as the fitness function in an optimization technique based on genetic algorithms. This method maximizes the success probability output of a prediction model relative to cost. The optimization method was tested with several software risk prediction models that have been developed based on the literature and using data from a survey which collected information from in-house and outsourced software development projects in the Chilean software industry. These models predict the probability of success of a project based on the activities undertaken by the project manager and development team. The results show that the proposed method is very useful for identifying those activities needing greater allocation of resources, and which of these will have a higher impact on the project's success probability. Therefore, using the measure of efficiency has allowed a modular approach to identifying those activities in software development on which to focus the project's limited resources to improve its probability of success. The genetic algorithm and the measure of efficiency presented in this paper permit model independence, in both prediction of success and cost evaluation.
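A sketch of the optimization loop described above, a genetic algorithm whose fitness is predicted success probability relative to cost, might look like the following. The elitist selection scheme and the toy success and cost models are invented for illustration and are not the paper's survey-derived prediction models:

```python
import random

def efficiency(ind, success_prob, cost):
    """Fitness: predicted probability of project success relative to cost."""
    return success_prob(ind) / max(cost(ind), 1e-9)

def genetic_search(success_prob, cost, n_activities=8, pop=30, gens=50, seed=0):
    """Elitist GA over binary activity vectors (1 = invest in that activity)."""
    rng = random.Random(seed)
    fit = lambda ind: efficiency(ind, success_prob, cost)
    population = [[rng.randint(0, 1) for _ in range(n_activities)]
                  for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fit, reverse=True)[:pop // 2]
        children = []
        while len(parents) + len(children) < pop:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_activities)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                    # bit-flip mutation
                child[rng.randrange(n_activities)] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fit)

# Hypothetical model: only the first four activities raise success probability,
# while every activity adds to the cost
success_prob = lambda ind: 0.1 + 0.2 * sum(ind[:4])
cost = lambda ind: 1 + sum(ind)
best = genetic_search(success_prob, cost)
```

Because the fitness only consumes two callables, any risk prediction model and any cost model can be dropped in, which is the model independence the abstract claims.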
Elderly people at home: technological help in everyday activities
The aim of this paper is to understand to what extent elderly people are likely to accept a technological aid in performing everyday activities. In this perspective, the present research focused on elderly people's strategies in performing everyday activities at home, in order to understand in what domestic domains technology can be considered an acceptable help. We administered a questionnaire focusing on preferred strategies in carrying out common domestic tasks, and on attitudes towards new technologies and home modification to a sample of 123 elderly people living in Rome. Results show that the adoption of a strategy, including the introduction of technological devices, is highly problem-specific, while personal factors are relevant only in particular situations. With increasing age, people are more inclined to give up, and higher educational levels correspond to more frequent technological solutions.
Single channel audio source separation using convolutional denoising autoencoders
Deep learning techniques have recently been used to tackle the audio source separation problem. In this work, we propose to use deep fully convolutional denoising autoencoders (CDAEs) for monaural audio source separation. We use as many CDAEs as there are sources to be separated from the mixed signal. Each CDAE is trained to separate one source and treats the other sources as background noise. The main idea is to allow each CDAE to learn spectral-temporal filters and features suited to its corresponding source. Our experimental results show that CDAEs perform source separation slightly better than deep feedforward neural networks (FNNs), even with fewer parameters.
Neural correlates of individual differences in pain-related fear and anxiety
Although individual differences in fear and anxiety modulate the pain response and may even cause more suffering than the initiating physical stimulus, little is known about the neural systems mediating this relationship. The present study provided the first examination of the neural correlates of individual differences in the tendency to (1) feel anxious about the potentially negative implications of physical sensations, as measured by the anxiety sensitivity index (ASI), and (2) fear various types of physical pain, as indexed by the fear of pain questionnaire (FPQ). In separate sessions, participants completed these questionnaires and experienced alternating blocks of noxious thermal stimulation (45-50 degrees C) and neutral thermal stimulation (38 degrees C) during the collection of whole-brain fMRI data. Regression analyses demonstrated that during the experience of pain, ASI scores predicted activation of a medial prefrontal region associated with self-focused attention, whereas FPQ scores predicted activation of a ventral lateral frontal region associated with response regulation and anterior and posterior cingulate regions associated with monitoring and evaluation of affective responses. These functional relationships cannot be wholly explained by generalized anxiety (indexed by STAI-T scores), which did not significantly correlate with activation of any regions. The present findings may help clarify both the impact of individual differences in emotion on the neural correlates of pain, and the roles in anxiety, fear, and pain processing played by medial and orbitofrontal systems.
Fault maintenance trees: Reliability centered maintenance via statistical model checking
The current trend in infrastructural asset management is towards risk-based (a.k.a. reliability centered) maintenance, promising better performance at lower cost. By maintaining crucial components more intensively than less important ones, dependability increases while costs decrease. This requires good insight into the effect of maintenance on the dependability and associated costs. To gain these insights, we propose a novel framework that integrates fault tree analysis with maintenance. We support a wide range of maintenance procedures and dependability measures, including the system reliability, availability, mean time to failure, as well as the maintenance and failure costs over time, split into different cost components. Technically, our framework is realized via statistical model checking, a state-of-the-art tool for flexible modelling and simulation. Our compositional approach is flexible and extendible. We deploy our framework to two cases from industrial practice: insulated joints, and train compressors.
The Influence of Information Overload on the Development of Trust and Purchase Intention Based on Online Product Reviews in a Mobile vs. Web Environment: A Research Proposal
Information overload has been studied extensively by decision science researchers, particularly in the context of task-based optimization decisions. Media selection research has similarly investigated the extent to which task characteristics influence media choice and use. This paper outlines a proposed study, which would compare the effectiveness of web-based online product review systems in facilitating trust and purchase intention to that of mobile product review systems. We propose that since web-based systems are more effective in fostering focus and are less prone to navigation frustration, information overload is less likely to influence the extent to which a consumer trusts an online product review.
3D solid fin model construction from 2D shapes using non-uniform rational B-spline surfaces
A computer aided design (CAD) tool has been specifically developed for rapid and easy design of solid models for surfboard and sailboard fins. This tool simplifies the lofting of advanced fin cross-sectional foils, in this instance based upon the family of standard airfoil series set by the National Advisory Committee for Aeronautics (NACA), whilst retaining a basic parametric description at each cross-section. This paper describes the way in which non-uniform rational B-spline (NURBS) surfaces are created from 2D profile splines, and are then used to generate 3D geometrical surfaces of the fins, which can be imported directly into commercial software packages for finite element stress analysis (FEA) and computational fluid dynamics (CFD). Pressure distributions, lift and drag forces are determined from a CFD flow analysis for various fins designed with this tool, and the results suggest that the incorporation of advanced foils into surfboard fins could indeed lead to increased performance over fins foiled using current standard techniques.
The 12-item General Health Questionnaire (GHQ-12): translation and validation study of the Iranian version
BACKGROUND The objective of this study was to translate and to test the reliability and validity of the 12-item General Health Questionnaire (GHQ-12) in Iran. METHODS Using a standard 'forward-backward' translation procedure, the English language version of the questionnaire was translated into Persian (Iranian language). Then a sample of young people aged 18 to 25 years old completed the questionnaire. In addition, a short questionnaire containing demographic questions and a single measure of global quality of life was administered. To test reliability the internal consistency was assessed by Cronbach's alpha coefficient. Validity was performed using convergent validity. Finally, the factor structure of the questionnaire was extracted by performing principal component analysis using oblique factor solution. RESULTS In all 748 young people entered into the study. The mean age of respondents was 21.1 (SD = 2.1) years. Employing the recommended method of scoring (ranging from 0 to 12), the mean GHQ score was 3.7 (SD = 3.5). Reliability analysis showed satisfactory result (Cronbach's alpha coefficient = 0.87). Convergent validity indicated a significant negative correlation between the GHQ-12 and global quality of life scores as expected (r = -0.56, P < 0.0001). The principal component analysis with oblique rotation solution showed that the GHQ-12 was a measure of psychological morbidity with two-factor structure that jointly accounted for 51% of the variance. CONCLUSION The study findings showed that the Iranian version of the GHQ-12 has a good structural characteristic and is a reliable and valid instrument that can be used for measuring psychological well being in Iran.
T-cell enumeration from dried blood spots by quantifying rearranged T-cell receptor-beta genes.
Significant hurdles remain to large-scale implementation of medical interventions in the developing world due to the lack of a modern diagnostic infrastructure. This is especially pertinent to the international roll-out of antiretroviral drugs to treat HIV, which ideally includes a CD4 T-cell count to determine eligibility. We designed a novel technique to estimate mature T-cell numbers by calculating the amount of rearranged T-cell receptor beta genes from dried blood spots of HIV-infected individuals in the United States and Uganda. It was observed that the rearranged T-cell receptor beta count correlated well with total lymphocyte counts from both study populations (Baltimore R=0.602, Uganda R=0.497; p<0.001) and the ability for this measurement to determine antiretroviral initiation was similar to that of total lymphocyte counts, which can be used to determine eligibility in HIV+ children. This technique as well as other dried blood spot based technologies could increase the diagnostic and monitoring capabilities in resource-limited settings.
A Convolutional Neural Network for Leaves Recognition Using Data Augmentation
Recently, convolutional neural networks (ConvNets) have achieved marvellous results in different fields of recognition, especially in computer vision. In this paper, a seven-layer ConvNet using data augmentation is proposed for leaves recognition. First, we implement multiform transformations (e.g., rotation and translation) to enlarge the dataset without changing the labels. This technique has recently made a tremendous contribution to the performance of ConvNets, as it reduces over-fitting and enhances the generalization ability of the ConvNet. Moreover, in order to capture the shapes of leaves, we sharpen all the images with a random parameter. This method is similar to edge detection, which has been proved useful in image classification. Then we train a deep convolutional neural network to classify the augmented leaves data with three groups of test sets, and find that the method is quite feasible and effective. The accuracy achieved by our algorithm outperforms other supervised learning methods on the popular leaf dataset Flavia.
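A minimal sketch of the label-preserving augmentation idea above: each training image is expanded into several transformed copies that keep the original label, multiplying the dataset size. The tiny binary "image", the transforms chosen (flip and shift), and the label string are all invented for illustration; the paper itself uses rotations, translations, and sharpening on real leaf photos.

```python
# Label-preserving data augmentation: every transformed variant of an image
# inherits the original label, enlarging the training set without relabeling.

def hflip(img):
    """Mirror each row horizontally."""
    return [row[::-1] for row in img]

def shift_right(img):
    """Shift one pixel right, padding the left column with zeros."""
    return [[0] + row[:-1] for row in img]

def augment(dataset):
    out = []
    for img, label in dataset:
        for variant in (img, hflip(img), shift_right(img)):
            out.append((variant, label))  # label is unchanged
    return out

# A hypothetical 2x2 binary "leaf" image with an invented label.
leaf = [[0, 1],
        [1, 1]]
augmented = augment([(leaf, "flavia_leaf_3")])  # 1 example becomes 3
```

The transforms must be ones the class is invariant to (a flipped leaf is still the same species), which is why they can enlarge the dataset "without changing the labels".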
Security Assessment of Code Obfuscation Based on Dynamic Monitoring in Android Things
Android-based Internet-of-Things devices with excellent compatibility and openness are constantly emerging. A typical example is Android Things, which Google supports. Compatibility based on the same platform can provide more convenient personalization services centering on mobile devices, while this uniformity-based computing environment can expose many security vulnerabilities. For example, new mobile malware running on Android can instantly transition to all connected devices. In particular, the Android platform has a structural weakness that makes it easy to repackage applications. This can lead to malicious behavior. To protect mobile apps that are vulnerable to malicious activity, various code obfuscation techniques are applied to key logic. The most effective of these involves safely concealing application programming interfaces (APIs). It is very important to ensure that obfuscation is applied to the appropriate API with an adequate degree of resistance to reverse engineering. Because there is no objective evaluation method, this depends on the developer's judgment. Therefore, in this paper, we propose a scheme that can quantitatively evaluate the level of hiding of APIs, which represent the function of the Android application, based on machine learning theory. To perform the quantitative evaluation, the API information is obtained by static analysis of a DEX file, and the API-calling code executed in Dalvik in the Android platform is dynamically extracted. Moreover, the sensitive APIs are classified using the extracted API and Naive Bayes classification. The proposed scheme yields a high score according to the level of hiding of the classified API. We tested the proposed scheme on representative applications of the Google Play Store. We believe it can be used as a model for obfuscation assessment schemes, because it can evaluate the level of obfuscation in general without relying on specific obfuscation tools.
Attribute-based encryption with non-monotonic access structures
We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes.
Hemodynamics of cerebral aneurysm initiation: the role of wall shear stress and spatial wall shear stress gradient.
BACKGROUND AND PURPOSE Cerebral aneurysms are preferentially located at arterial curvatures and bifurcations that are exposed to major hemodynamic forces, increasingly implicated in the life cycle of aneurysms. By observing the natural history of aneurysm formation from its preaneurysm state, we aimed to examine the hemodynamic microenvironment related to aneurysm initiation at certain arterial segments later developing an aneurysm. MATERIALS AND METHODS The 3 patients included in the study underwent cerebral angiography with 3D reconstruction before a true aneurysm developed. The arterial geometries obtained from the 3D-DSA models were used for flow simulation by using finite-volume modeling. The WSS and SWSSG at the site of the future aneurysm and the flow characteristics of the developed aneurysms were analyzed. RESULTS The analyzed regions of interest demonstrated significantly increased WSS, accompanied by an increased positive SWSSG in the adjacent proximal region. The WSS reached values of >5 times the temporal average values of the parent vessel, whereas the SWSSG approximated or exceeded peaks of 40 Pa/mm in all 3 cases. All patients developed an aneurysm within 2 years, 1 of which ruptured. CONCLUSIONS The results of this hemodynamic study, in accordance with the clinical follow-up, suggest that the combination of high WSS and high positive SWSSG focused on a small segment of the arterial wall may have a role in the initiation process of aneurysm formation.
A field usability evaluation of a wearable system
This empirical field study describes the wearable system, the study method used, and summarizes the experiences of aircraft maintenance specialists who participated in a field usability evaluation conducted at their United States Air Force Reserve facility in the summer of 1996. This usability evaluation aimed at collecting accurate, detailed information for use in the evolutionary design and development of wearable systems for vehicle maintenance workers. All participants were able to complete their tasks using the wearable prototype and they expressed willingness to use such a system in the future. Also, we report on future requirements for wearable maintenance aids elicited from these users.
Defining architecture components of the Big Data Ecosystem
Big Data are becoming a new technology focus both in science and in industry and motivate a technology shift to data centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses the nature of Big Data that may originate from different scientific, industry and social activity domains and proposes an improved Big Data definition that includes the following parts: Big Data properties (also called Big Data 5V: Volume, Velocity, Variety, Value and Veracity), data models and structures, data analytics, infrastructure and security. The paper discusses the paradigm change from traditional host or service based to data centric architecture and operational models in Big Data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, Data structures and models, Big Data Lifecycle Management, and Big Data Security. The paper analyses the requirements on the above components and suggests how they can address the main Big Data challenges. The presented work intends to provide a consolidated view of the Big Data phenomenon and related challenges to modern technologies, and to initiate a wide discussion.
A Recommender System Combining Social Networks for Tourist Attractions
The fast development of Web technologies has introduced a world of big data. Retrieving the information that users really want from this ocean of data, efficiently and effectively, is an important topic. Recommendation systems have become a popular approach to personalized information retrieval. On the other hand, social media have quickly entered everyday life. The information from social networks can be an effective indicator for recommender systems. In this paper we present a recommendation mechanism which calculates similarity among users and users' trustability, and analyzes information collected from social networks. To validate our method, an information system for tourist attractions built on this recommender system is presented. We further evaluate our system by experiments. The results show our method is feasible and effective.
MonoFusion: Real-time 3D reconstruction of small scenes with a single web camera
MonoFusion allows a user to build dense 3D reconstructions of their environment in real-time, utilizing only a single, off-the-shelf web camera as the input sensor. The camera could be one already available in a tablet, phone, or a standalone device. No additional input hardware is required. This removes the need for power intensive active sensors that do not work robustly in natural outdoor lighting. Using the input stream of the camera we first estimate the 6DoF camera pose using a sparse tracking method. These poses are then used for efficient dense stereo matching between the input frame and a key frame (extracted previously). The resulting dense depth maps are directly fused into a voxel-based implicit model (using a computationally inexpensive method) and surfaces are extracted per frame. The system is able to recover from tracking failures as well as filter out geometrically inconsistent noise from the 3D reconstruction. Our method is both simple to implement and efficient, making such systems even more accessible. This paper details the algorithmic components that make up our system and a GPU implementation of our approach. Qualitative results demonstrate high quality reconstructions even visually comparable to active depth sensor-based systems such as KinectFusion.
A Study on the Design Optimization of an AUV by Using Computational Fluid Dynamic Analysis
Autonomous Underwater Vehicles (AUVs) provide an important means for collecting detailed scientific information from the ocean depths. The hull resistance of an AUV is an important factor in determining the power requirements and range of the vehicle. This paper describes a design method using Computational Fluid Dynamics (CFD) for determining the hull resistance of an AUV under development. The CFD results reveal the distribution of hydrodynamic values (velocity, pressure, etc.) of the AUV with a ducted propeller. The optimization of the AUV hull profile for reducing the total resistance is also discussed in this paper. This paper demonstrates that shape optimization in conceptual design is possible by using a commercial CFD package. The optimum designs to minimize the drag force of the AUV were carried out for a given objective function and constraints.
Mortality rates among Kanyawara chimpanzees.
Demographic data from wild chimpanzees are of considerable interest for understanding the evolution of the human life history. Published mortality data, however, come primarily from chimpanzee populations that have recently suffered dramatic, human-induced declines, and exhibit rates of reproduction well below replacement. Here we present a life table for chimpanzees living in the Kanyawara community of Kibale National Park, comprising 1129 individual risk years and 56 deaths. This community has shown modest growth over the past 25 years, avoiding some of the worst impacts of human contact. Sex differences in mortality at Kanyawara appeared similar to those reported from other sites. However, overall mortality rates were significantly lower than those reported from the long-term study sites of Gombe, Taï and Mahale. Kanyawara chimpanzees in this sample had a life expectancy at birth of 19 years, and individuals living to age 14 could expect to live for another 24 years. Life table data from Kanyawara indicate a mean mortality rate of 3.9% per year over the ages of 10-35, substantially less than the equivalent figure of 6.8% from a sample of other long-term chimpanzee study sites. The comparable adult mortality rate from a range of human foraging societies is ∼2%. The Kanyawara data thus suggest an important downward revision in adult mortality rates for wild chimpanzees, but they do not challenge the existence of an important difference in adult mortality between humans and chimpanzees.
Endosymbiotic gene transfer: organelle genomes forge eukaryotic chromosomes
Genome sequences reveal that a deluge of DNA from organelles has constantly been bombarding the nucleus since the origin of organelles. Recent experiments have shown that DNA is transferred from organelles to the nucleus at frequencies that were previously unimaginable. Endosymbiotic gene transfer is a ubiquitous, continuing and natural process that pervades nuclear DNA dynamics. This relentless influx of organelle DNA has abolished organelle autonomy and increased nuclear complexity.
Can characters reveal your native language? A language-independent approach to native language identification
A common approach in text mining tasks such as text categorization, authorship identification or plagiarism detection is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. In this work, an approach that uses character n-grams as features is proposed for the task of native language identification. Instead of doing standard feature selection, the proposed approach combines several string kernels using multiple kernel learning. Kernel Ridge Regression and Kernel Discriminant Analysis are independently used in the learning stage. The empirical results obtained in all the experiments conducted in this work indicate that the proposed approach achieves state-of-the-art performance in native language identification, reaching an accuracy that is 1.7% above the top scoring system of the 2013 NLI Shared Task. Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral. In the cross-corpus experiment, the proposed approach shows that it can also be topic independent, improving on the state-of-the-art system by 32.3%.
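A sketch of the character n-gram "spectrum" representation underlying string kernels of the kind described above: two texts are compared via the dot product of their character n-gram count vectors, which requires no tokenization or linguistic resources (hence language independence). The example strings and the choice of n=3 are illustrative only; the paper combines several such kernels with multiple kernel learning.

```python
# Character n-gram spectrum kernel: K(a, b) = dot product of the texts'
# n-gram count vectors. No word segmentation or POS tagging is needed,
# which is what makes the feature space language independent.
from collections import Counter

def char_ngrams(text, n):
    """Count all overlapping character n-grams in the text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def spectrum_kernel(a, b, n):
    ca, cb = char_ngrams(a, n), char_ngrams(b, n)
    return sum(ca[g] * cb[g] for g in ca if g in cb)

# Texts sharing character patterns score higher than unrelated ones.
k_same = spectrum_kernel("the theory", "the theatre", 3)
k_diff = spectrum_kernel("the theory", "zzzzzz", 3)
```

In a full system, such kernel values would populate the Gram matrix fed to Kernel Ridge Regression or Kernel Discriminant Analysis.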
Haematoma in a hydrocele of the canal of Nuck mimicking a Richter’s hernia
We report a haematoma in a hydrocele of the canal of Nuck in a 69-year-old female. She presented with a right-sided groin swelling, the differential for which included an irreducible inguinal hernia or haematoma given her aspirin and clopidogrel use. Successful treatment involved evacuation of the haematoma with excision of the sac. Despite a high index of suspicion for a haematoma, these swellings should ideally be explored given the potential for co-existence of a hernia.
The Road Ahead - Shared red bricks
Requirements such as those in the International Technology Roadmap for Semiconductors (ITRS) drive the electronic design automation (EDA) industry (see http://public.itrs.net), as well as other semiconductor supplier industries, such as lithography, front-end processing, and assembly and packaging. This new department will explore the semiconductor roadmap and its implications for EDA. In particular, future columns will attempt to answer three important questions:
Evidence-based interventions in dementia: A pragmatic cluster-randomised trial of an educational intervention to promote earlier recognition and response to dementia in primary care (EVIDEM-ED)
BACKGROUND The National Dementia Strategy seeks to enhance general practitioners' diagnostic and management skills in dementia. Early diagnosis in dementia within primary care is important as this allows those with dementia and their family care networks to engage with support services and plan for the future. There is, however, evidence that dementia remains under-detected and sub-optimally managed in general practice. An earlier unblinded, cluster randomised controlled study tested the effectiveness of educational interventions in improving detection rates and management of dementia in primary care. In this original trial, a computer decision support system and practice-based educational workshops were effective in improving rates of detecting dementia although not in changing clinical management. The challenge therefore is to find methods of changing clinical management. Our aim in this new trial is to test a customised educational intervention developed for general practice, promoting both earlier diagnosis and concordance with management guidelines. DESIGN/METHOD The customised educational intervention combines practice-based workshops and electronic support material. Its effectiveness will be tested in an unblinded cluster randomised controlled trial with a pre-post intervention design, with two arms: normal care versus the educational intervention. Twenty primary care practices have been recruited with the aim of gaining 200 patient participants. We will examine whether the intervention is effective, pragmatic and feasible within the primary care setting. Our primary outcome measure is an increase in the proportion of patients with dementia who receive at least two dementia-specific management reviews per year. We will also examine important secondary outcomes such as practice concordance with management guidelines and benefits to patients and carers in terms of quality of life and carer strain.
DISCUSSION The EVIDEM-ED trial builds on the earlier study but the intervention is different in that it is specifically customised to the educational needs of each practice. If this trial is successful it could have implications for the implementation of the National Dementia Strategy. TRIAL REGISTRATION NCT00866099.
Metropolis procedural modeling
Procedural representations provide powerful means for generating complex geometric structures. They are also notoriously difficult to control. In this article, we present an algorithm for controlling grammar-based procedural models. Given a grammar and a high-level specification of the desired production, the algorithm computes a production from the grammar that conforms to the specification. This production is generated by optimizing over the space of possible productions from the grammar. The algorithm supports specifications of many forms, including geometric shapes and analytical objectives. We demonstrate the algorithm on procedural models of trees, cities, buildings, and Mondrian paintings.
TRM-IoT: A trust management model based on fuzzy reputation for internet of things
Since a large scale Wireless Sensor Network (WSN) is to be completely integrated into the Internet as a core part of the Internet of Things (IoT) or Cyber Physical System (CPS), it is necessary to consider various security challenges that come with IoT/CPS, such as the detection of malicious attacks. Sensors or sensor embedded things may establish direct communication between each other using the 6LoWPAN protocol. A trust and reputation model is recognized as an important approach to defend large distributed sensor networks in IoT/CPS against malicious node attacks, since trust establishment mechanisms can stimulate collaboration among distributed computing and communication entities, facilitate the detection of untrustworthy entities, and assist the decision-making process of various protocols. In this paper, based on an in-depth understanding of the trust establishment process and a quantitative comparison among trust establishment methods, we present a trust and reputation model, TRM-IoT, to enforce cooperation between things in a network of IoT/CPS based on their behaviors. The accuracy, robustness and lightness of the proposed model are validated through a wide set of simulations.
Some physicochemical parameters of potable water supply in Warri, Niger Delta area of Nigeria
Some physicochemical parameters of potable water supply in relation to surface water, shallow well and borehole water in the Warri area of the Niger Delta region of Nigeria were assessed to determine their pollution profiles. Parameters such as pH, turbidity, total suspended solids (TSS), chemical oxygen demand (COD), dissolved oxygen (DO), sulphate, phosphate, chloride and bicarbonate in the different water supplies were determined. With few exceptions, the ranking of physicochemical parameters is in the order: surface water > shallow well water > borehole water. Some of the parameters were above the WHO standard for drinking water, so there is a need for strict monitoring to ensure quality water supply for human health.
Tightly coupled UWB/IMU pose estimation
In this paper we propose a 6DOF tracking system combining Ultra-Wideband measurements with low-cost MEMS inertial measurements. A tightly coupled system is developed which estimates position as well as orientation of the sensor-unit while being reliable in case of multipath effects and NLOS conditions. The experimental results show robust and continuous tracking in a realistic indoor positioning scenario.
Nonlinear Dynamic Modeling of a Single-Phase Permanent-Magnet Brushless DC Motor Using 2-D Static Finite-Element Results
A nonlinear dynamic model is proposed for a single-phase permanent-magnet brushless DC (SP PM BLDC) motor to reduce simulation time compared to transient finite-element model and finite-element co-simulation model. The model uses a lookup table of EMF constant and cogging torque obtained from two-dimensional static finite-element analyses. A machine model based on a lookup table is coupled to an inverter model and a load model to form a complete motor model. Nonlinear effects such as cogging torque, magnetic saturation, armature reaction, and switching transients are considered in the model. The simulated results are validated experimentally with a prototype SP PM BLDC external rotor ceiling fan motor.
Translation in Prokaryotes.
This review summarizes our current understanding of translation in prokaryotes, focusing on the mechanistic and structural aspects of each phase of translation: initiation, elongation, termination, and ribosome recycling. The assembly of the initiation complex provides multiple checkpoints for messenger RNA (mRNA) and start-site selection. Correct codon-anticodon interaction during the decoding phase of elongation results in major conformational changes of the small ribosomal subunit and shapes the reaction pathway of guanosine triphosphate (GTP) hydrolysis. The ribosome orchestrates proton transfer during peptide bond formation, but requires the help of elongation factor P (EF-P) when two or more consecutive Pro residues are to be incorporated. Understanding the choreography of transfer RNA (tRNA) and mRNA movements during translocation helps to place the available structures of translocation intermediates onto the time axis of the reaction pathway. The nascent protein begins to fold cotranslationally, in the constrained space of the polypeptide exit tunnel of the ribosome. When a stop codon is reached at the end of the coding sequence, the ribosome, assisted by termination factors, hydrolyzes the ester bond of the peptidyl-tRNA, thereby releasing the nascent protein. Following termination, the ribosome is dissociated into subunits and recycled into another round of initiation. At each step of translation, the ribosome undergoes dynamic fluctuations between different conformation states. The aim of this article is to show the link between ribosome structure, dynamics, and function.
Differential susceptibility of blue catfish, Ictalurus furcatus (Valenciennes), channel catfish, I. punctatus (Rafinesque), and blue × channel catfish hybrids to channel catfish virus.
Channel catfish virus (CCV), also known as ictalurid herpesvirus-1 (IHV-1), primarily affects juvenile channel catfish, Ictalurus punctatus (Rafinesque), that are less than 6 months old and was first reported by Fijan (1968). CCV outbreaks can be sporadic, and are usually associated with fry and fingerlings when the water temperature is above 25 °C (Plumb 1978). The virus has been reported to be transmitted vertically (Wise, Harrell, Busch & Boyle 1988) and horizontally (reviewed in Plumb 1978). The external signs of CCV disease (CCVD) include exophthalmia, distended abdomen and haemorrhages at the bases of fins. The trunk kidney may exhibit oedema and necrosis, and this tissue is commonly used to confirm the presence of the virus using a tissue culture assay (reviewed in Wolf 1988). In addition, the virus appears to maintain a latent state in leucocytes (Bowser, Munson, Jarboe, Francis-Floyd & Waterstrat 1985), which raises the possibility that latent CCV infection may alter the immune response to other pathogens. It has been over 30 years since it was determined that different strains of catfish exhibited differential resistance to CCV when the virus was mixed with their feed (Plumb, Green, Smitherman & Pardue 1975). A subsequent study by Plumb & Chappell (1978) examined the relative susceptibility of blue catfish, I. furcatus (Valenciennes), and reciprocal blue × channel hybrids to CCV. Since Plumb's (1978) study, there have been no reports on the relative susceptibility of blue catfish or hybrids to CCV. The present study was conducted to determine the relative susceptibility of four different groups of fish: blue catfish, a blue × channel hybrid, a group of channel catfish obtained from 10 farms in the Mississippi Delta, hereafter referred to as the industry pool (IP), and a new strain of catfish produced by the Catfish Genetics Research Unit of USDA at Stoneville, MS (USDA 102 × 103).
For each strain of fish, nine replicate tanks were stocked with 40 fish per tank; eight tanks were used for virus challenge while the remaining tank was used as an uninfected control. Fish were placed in 38 L tanks that were filled to 11 L and had a flow-through rate of 1.8 L min^-1, and allowed to acclimatize for 8 days. Fish were fed to satiation twice per day beginning the day after stocking and feeding continued throughout the course of the study. The blue catfish (avg. wt. 3.45 ± 0.18 g) used in this study were of the D and B strain. Fry from seven different spawns were pooled and raised communally in tanks until used in the challenge. The hybrid catfish (avg. wt. 4.10 ± 0.21 g) were produced by crossing female USDA 103 strain channel catfish and D and B strain blue catfish. Twelve hybrid spawns were pooled and used in this Journal of Fish Diseases 2008, 31, 77–79
t-FFD: free-form deformation by using triangular mesh
A new method of free-form deformation, t-FFD, is proposed. An original shape of a large-scale polygonal mesh or point cloud is deformed using a control mesh, which consists of a set of triangles with arbitrary topology and geometry, including cases of disconnection or self-intersection. For modeling purposes, a designer can handle the shape directly or indirectly, and also locally or globally. The method works through a simple mapping mechanism. First, each point of the original shape is parametrized by the local coordinate system of each triangle of the control mesh. After the control mesh is modified, the point is mapped according to each modified triangle. Finally, the mapped locations are blended into a new position of the original point, yielding a smoothly deformed shape. Details of t-FFD are discussed and examples are shown.
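The parametrize-map-blend mechanism described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the blending weights are simply normalized user-supplied values (e.g. inverse distances to the control triangles), which is an assumption.

```python
import numpy as np

def local_coords(tri, p):
    """Express point p in the local frame of triangle tri (3x3 vertex array):
    two edge vectors plus the unit normal."""
    a, b, c = tri
    e1, e2 = b - a, c - a
    n = np.cross(e1, e2)
    n = n / np.linalg.norm(n)
    # Solve p - a = u*e1 + v*e2 + w*n for (u, v, w)
    M = np.column_stack([e1, e2, n])
    return np.linalg.solve(M, p - a)

def map_point(tri, uvw):
    """Re-embed local coordinates (u, v, w) in a (possibly modified) triangle."""
    a, b, c = tri
    e1, e2 = b - a, c - a
    n = np.cross(e1, e2)
    n = n / np.linalg.norm(n)
    u, v, w = uvw
    return a + u * e1 + v * e2 + w * n

def tffd(point, tris_before, tris_after, weights):
    """Parametrize a point w.r.t. each control triangle, map it through the
    modified triangles, then blend the mapped positions."""
    coords = [local_coords(t, point) for t in tris_before]
    mapped = np.array([map_point(t, c) for t, c in zip(tris_after, coords)])
    w = np.asarray(weights, float)
    w = w / w.sum()
    return (w[:, None] * mapped).sum(axis=0)
```

With a single control triangle, translating the triangle translates the point rigidly; an unmodified control mesh leaves the shape fixed.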
Accuracy of diagnostic tests for Cushing's syndrome: a systematic review and meta-analyses.
CONTEXT The diagnosis of Cushing's syndrome (CS) requires the use of tests of unregulated hypercortisolism that have unclear accuracy. OBJECTIVE Our objective was to summarize evidence on the accuracy of common tests for diagnosing CS. DATA SOURCES We searched electronic databases (MEDLINE, EMBASE, Web of Science, Scopus, and citation search for key articles) from 1975 through September 2007 and sought additional references from experts. STUDY SELECTION Eligible studies reported on the accuracy of urinary free cortisol (UFC), dexamethasone suppression test (DST), and midnight cortisol assays vs. reference standard in patients suspected of CS. DATA EXTRACTION Reviewers working in duplicate and independently extracted study characteristics and quality and data to estimate the likelihood ratio (LR) and the 95% confidence interval (CI) for each result. DATA SYNTHESIS We found 27 eligible studies, with a high prevalence [794 (9.2%) of 8631 patients had CS] and severity of CS. The tests had similar accuracy: UFC (n = 14 studies; LR+ 10.6, CI 5.5-20.5; LR- 0.16, CI 0.08-0.33), salivary midnight cortisol (n = 4; LR+ 8.8, CI 3.5-21.8; LR- 0.07, CI 0-1.2), and the 1-mg overnight DST (n = 14; LR+ 16.4, CI 9.3-28.8; LR- 0.06, CI 0.03-0.14). Combined testing strategies (e.g. a positive result in both UFC and 1-mg overnight DST) had similar diagnostic accuracy (n = 3; LR+ 15.4, CI 0.7-358; LR- 0.11, CI 0.007-1.57). CONCLUSIONS Commonly used tests to diagnose CS appear highly accurate in referral practices with samples enriched with patients with CS. Their performance in usual clinical practice remains unclear.
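The likelihood ratios reported above can be converted into post-test probabilities via Bayes' rule in odds form. The 5% pre-test probability in the example below is an illustrative assumption, not a figure from the review.

```python
def post_test_probability(pretest_prob, lr):
    """Convert a pre-test probability to a post-test probability
    using a likelihood ratio, working in odds."""
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# With the pooled LR+ of 16.4 reported for the 1-mg overnight DST,
# an assumed 5% pre-test probability of CS rises to about 46%:
p_pos = post_test_probability(0.05, 16.4)   # ~0.463
# With the pooled LR- of 0.06, it falls to about 0.3%:
p_neg = post_test_probability(0.05, 0.06)
```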
SALMA: Standard Arabic Language Morphological Analysis
Morphological analyzers are preprocessors for text analysis. Many text analytics applications need them to perform their tasks. This paper reviews the SALMA-Tools (Standard Arabic Language Morphological Analysis) [1]. The SALMA-Tools is a collection of open-source standards, tools and resources that widen the scope of Arabic word structure analysis - particularly morphological analysis - to process Arabic text corpora of different domains, formats and genres, of both vowelized and non-vowelized text. Tag assignment is significantly more complex for Arabic than for many languages. A morphological analyzer should add the appropriate linguistic information to each part or morpheme of the word (proclitic, prefix, stem, suffix and enclitic); in effect, instead of a tag for a word, we need a subtag for each part. Very fine-grained distinctions may cause problems for automatic morphosyntactic analysis - particularly for probabilistic taggers which require training data - if some words can change grammatical tag depending on function and context; on the other hand, fine-grained distinctions may actually help to disambiguate other words in the local context. The SALMA-Tagger is a fine-grained morphological analyzer which mainly depends on linguistic information extracted from traditional Arabic grammar books and a prior-knowledge broad-coverage lexical resource, the SALMA-ABCLexicon. More fine-grained tag sets may be more appropriate for some tasks. The SALMA-Tag Set is a standard tag set for encoding, which captures long-established traditional fine-grained morphological features of Arabic in a notation format intended to be compact yet transparent.
Design of a compact high power phased array for 5G FD-MIMO system at 29 GHz
This paper presents a new design concept of a beam-steerable high-gain phased array antenna based on WR28 waveguide at 29 GHz for fifth generation (5G) full-dimension multiple-input multiple-output (FD-MIMO) systems. The 8×8 planar phased array is fed by a three-dimensional beamformer to obtain volumetric beam scanning from −60 to +60 degrees in both azimuth and elevation. The beamforming network (BFN) is designed using 16 sets of 8×8 Butler matrix beamformers to obtain 64 beam states, which control the horizontal and vertical scan angles. This is a new concept for designing a waveguide-based high-power three-dimensional beamformer for volumetric multibeam operation in the Ka band for 5G applications. The maximum gain of the phased array is 28.5 dBi, covering the 28.9 GHz to 29.4 GHz frequency band.
Food Recognition for Dietary Assessment Using Deep Convolutional Neural Networks
Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature
Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.
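A patch-wise recognition pipeline of this kind ends with a vote across per-patch predictions. The two schemes below (hard majority voting and probability summing) are generic illustrations of such voting, not necessarily the paper's exact variants.

```python
import numpy as np

def vote(patch_probs, scheme="majority"):
    """Combine per-patch class probabilities (n_patches x n_classes)
    into a single image-level class prediction."""
    patch_probs = np.asarray(patch_probs, float)
    if scheme == "majority":           # each patch casts one hard vote
        votes = np.bincount(patch_probs.argmax(axis=1),
                            minlength=patch_probs.shape[1])
        return int(votes.argmax())
    elif scheme == "sum":              # soft voting: average the probabilities
        return int(patch_probs.mean(axis=0).argmax())
    raise ValueError(f"unknown scheme: {scheme}")
```

The two schemes can disagree: many weakly confident patches can outvote one very confident patch under hard voting, while soft voting lets confidence magnitudes matter.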
Clinical experience with the Palmaz-Schatz coronary stent.
Complications that occurred in 247 patients who underwent successful elective stenting of native coronary arteries with the Palmaz-Schatz balloon-expandable stent included subacute thrombosis in 7 patients (2.8%), myocardial infarction in 3 (1.2%), death in 3 (1.2%), urgent bypass surgery in 4 (1.6%) and major bleeding events in 24 (9.7%). Angiographic restenosis occurred in 21 (20%) of 103 patients who received a single stent. Subgroup analysis, however, revealed that restenosis of a single stent occurred in 3 (7%) of 45 patients without prior angioplasty compared with 25 (27%) of 91 patients with prior angioplasty. Patients with "suboptimal" angioplasty results (dissection) who received a single stent seemed to have a higher thrombosis rate perioperatively (4 [4%] of 98), but no higher incidence of restenosis (7 [15%] of 46) than that of the total group of patients who received a single stent. Coronary stenting may be a valuable adjunct to coronary angioplasty in carefully selected patients. Complication rates are similar to those of routine angioplasty; however, angiographic restenosis may be reduced in certain subsets of patients.
Data Driven Reduced Order Modeling of Fluid Dynamics Using Linear Multistep Network
In this effort we propose a data-driven learning framework for reduced order modeling of fluid dynamics. Designing accurate and efficient reduced order models for nonlinear fluid dynamic problems is challenging for many practical engineering applications. Classical projection-based model reduction methods generate reduced systems by projecting full-order differential operators into low-dimensional subspaces. However, these techniques usually lead to severe instabilities in the presence of highly nonlinear dynamics, which dramatically deteriorates the accuracy of the reduced-order models. In contrast, our new framework exploits linear multistep networks, based on implicit Adams-Moulton schemes, to construct the reduced system. The advantage is that the method optimally approximates the full order model in the low-dimensional space with a given supervised learning task. Moreover, our approach is non-intrusive, such that it can be applied to other complex nonlinear dynamical systems with sophisticated legacy codes. We demonstrate the performance of our method through the numerical simulation of a two-dimensional flow past a circular cylinder with Reynolds number Re = 100. The results reveal that the new data-driven model is significantly more accurate than standard projection-based approaches.
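The idea of a linear multistep network can be illustrated with the residual of the second-order (trapezoidal) Adams-Moulton scheme: training fits a learned dynamics function f so that this residual vanishes on the observed reduced-state snapshots. The sketch below is a minimal illustration of the training objective, not the authors' architecture.

```python
import numpy as np

def am2_residuals(snapshots, f, h):
    """Residual of the 2nd-order Adams-Moulton (trapezoidal) scheme,
        x_{n+1} - x_n - (h/2) * (f(x_n) + f(x_{n+1})),
    over a sequence of reduced-state snapshots sampled with step h.
    Training would minimize the norm of this residual over f's parameters."""
    x = np.asarray(snapshots, float)
    return x[1:] - x[:-1] - (h / 2.0) * (f(x[1:]) + f(x[:-1]))
```

As a sanity check, snapshots generated exactly by the trapezoidal rule for dx/dt = -x give an identically zero residual when f(x) = -x.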
Devices, systems, and methods for automated monitoring enabling precision agriculture
Addressing the challenges of feeding the burgeoning world population with limited resources requires innovation in sustainable, efficient farming. The practice of precision agriculture offers many benefits towards addressing these challenges, such as improved yield and efficient use of such resources as water, fertilizer and pesticides. We describe the design and development of a light-weight, multi-spectral 3D imaging device that can be used for automated monitoring in precision agriculture. The sensor suite consists of a laser range scanner, multi-spectral cameras, a thermal imaging camera, and navigational sensors. We present techniques to extract four key data products - plant morphology, canopy volume, leaf area index, and fruit counts - using the sensor suite. We demonstrate its use with two systems: multi-rotor micro aerial vehicles and on a human-carried, shoulder-mounted harness. We show results of field experiments conducted in collaboration with growers and agronomists in vineyards, apple orchards and orange groves.
Patent analysis for competitive technical intelligence and innovative thinking
Patents are a very useful source of technical information. The public availability of patents over the Internet, with, for some databases (e.g. Espacenet), the assurance of a constant format, allows the development of high-value-added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries, providing a way to use patents as an ideas resource and thus improving technological innovation.
Mobility Metric based LEACH-Mobile Protocol
Cluster-based protocols like LEACH were found to be best suited for routing in wireless sensor networks. In mobility-centric environments some improvements have been suggested to the basic scheme; LEACH-Mobile is one such protocol. The basic LEACH protocol is improved in the mobile scenario by confirming whether a sensor node is able to communicate with its cluster head. Since all the nodes, including the cluster head, are moving, it is better to elect as cluster head a node with low mobility relative to its neighbours. In this paper, the LEACH-Mobile protocol has been enhanced based on a mobility metric, "remoteness", for cluster head election. This ensures a high success rate in data transfer between the cluster head and the collector nodes even while nodes are moving. We have simulated and compared our LEACH-Mobile-Enhanced protocol with LEACH-Mobile. Results show that inclusion of neighbouring node information improves the routing protocol.
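A minimal sketch of mobility-aware cluster head election: each node scores its mobility relative to the other nodes from two consecutive position samples, and the least mobile node is elected. The metric here (mean change in inter-node distance) is an illustrative stand-in for the paper's "remoteness", whose exact definition is not given in the abstract.

```python
import math

def mobility(node_tracks, nid):
    """Average absolute change in distance from node `nid` to every other
    node between two consecutive time steps (lower = more stable)."""
    (x1, y1), (x2, y2) = node_tracks[nid]
    total, count = 0.0, 0
    for other, ((ox1, oy1), (ox2, oy2)) in node_tracks.items():
        if other == nid:
            continue
        d_before = math.hypot(x1 - ox1, y1 - oy1)
        d_after = math.hypot(x2 - ox2, y2 - oy2)
        total += abs(d_after - d_before)
        count += 1
    return total / count

def elect_cluster_head(node_tracks):
    """Elect the node whose position is most stable relative to its peers."""
    return min(node_tracks, key=lambda nid: mobility(node_tracks, nid))
```

A fast-moving node thus loses the election to nodes that stay put relative to their neighbours, which is the intuition behind mobility-aware head selection.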
Coreference Resolution Using Competition Learning Approach
In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the single-candidate model. Nevertheless, the preference relationship between antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noise during training and resolution. The experimental results on the MUC-6 and MUC-7 data sets show that our approach can outperform those based on the single-candidate model.
Cognitive PHY and MAC layers for dynamic spectrum access and sharing of TV bands
Research in the physical (PHY) and medium access control (MAC) layers for dynamic spectrum access (DSA) and dynamic spectrum sharing (DSS) is still in its infancy. Aspects such as spectrum sensing, coexistence, measurement and spectrum management, network reliability and QoS support in the face of the need to avoid causing harmful interference to incumbents, to name a few, are key to the success of future cognitive radio (CR) systems and have received little attention so far. In addition, it is critical to understand the interplay of these various cognitive radio concepts and how they impact overall network performance. In this paper we address these questions by presenting the design and performance evaluation of a CR-based PHY and MAC for DSA and DSS of vacant television (TV) channels. The air interface described here forms the baseline of the current IEEE 802.22 draft standard, and features a number of key PHY and MAC CR-based components for use by license-exempt devices in spectrum that is currently allocated primarily to TV services. Through simulations and prototyping, we analyze the performance of this first CR-based wireless network with respect to spectrum sensing, system capacity, QoS support, coexistence, and network reliability.
Simple BM25 extension to multiple weighted fields
This paper describes a simple way of adapting the BM25 ranking formula to deal with structured documents. In the past it has been common to compute scores for the individual fields (e.g. title and body) independently and then combine these scores (typically linearly) to arrive at a final score for the document. We highlight how this approach can lead to poor performance by breaking the carefully constructed non-linear saturation of term frequency in the BM25 function. We propose a much more intuitive alternative which weights term frequencies <i>before</i> the non-linear term frequency saturation function is applied. In this scheme, a structured document with a title weight of two is mapped to an unstructured document with the title content repeated twice. This more verbose unstructured document is then ranked in the usual way. We demonstrate the advantages of this method with experiments on Reuters Vol1 and the TREC dotGov collection.
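The proposed scheme can be sketched as follows: per-field term frequencies are combined with field weights before the BM25 saturation is applied, so a title weight of two behaves exactly like repeating the title text twice. This is a simplified single-term sketch; for brevity the document length is left unweighted here, whereas the full method would scale it consistently with the field weights.

```python
def bm25_weighted_fields(tf_by_field, field_weights, doc_len, avg_len,
                         idf, k1=1.2, b=0.75):
    """Score one term: weight field term frequencies *before* the
    non-linear BM25 saturation, instead of mixing per-field BM25 scores."""
    tf = sum(field_weights[f] * tf_by_field.get(f, 0) for f in field_weights)
    denom = tf + k1 * (1 - b + b * doc_len / avg_len)
    return idf * tf * (k1 + 1) / denom
```

With a title weight of two, one title occurrence scores identically to two body occurrences, and doubling the weighted frequency less than doubles the score, preserving BM25's saturation.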
Developing Academic Language in English Language Learners Through Sheltered Instruction
This article describes a study examining the effects of Sheltered Instruction Observation Protocol (SIOP) model instruction on the academic language performance of middle and high school English language learners. The SIOP model is an approach for teaching content curriculum to students learning through a new language. Teachers employ techniques that make the content concepts accessible and also develop students' skills in the new language. Using a quasi-experimental design, the research was conducted in content area and English as a second language classes in two districts in northern New Jersey over 2 years. The analysis presents student achievement data from state-mandated language proficiency tests in the final year of the intervention, after most of the treatment teachers had completed their professional development in the SIOP model. There were statistically significant differences in the average mean scores in favor of the treatment student group on Writing, Oral Language, and Total English scores of the IDEA Language Proficiency Tests with small to medium effect sizes. The results from this study show that the SIOP model offers a promising approach to professional development that can improve the quality of instruction to English language learners and increase their English language achievement. doi: 10.1002/tesq.20
A Low-Profile Wideband Circularly Polarized Crossed-Dipole Antenna With Wide Axial-Ratio and Gain Beamwidths
A low-profile wideband circularly polarized (CP) crossed-dipole antenna that has both wide axial-ratio beamwidth (ARBW) and half-power beamwidth (HPBW) is presented in this paper. The crossed dipoles are composed of four trapezoidal patch arms, and they are fed by a pair of vacant-quarter printed rings to generate CP radiation. Four identical parasitic elements that consist of a horizontal triangle patch and a vertical metallic plate are symmetrically intervened between the crossed dipoles and the ground plane. It has been found that the parasitic elements can effectively decrease the antenna profile, increase the operating bandwidth, and simultaneously enhance the ARBW as well as the HPBW. A prototype was fabricated and measured to verify the design. The measured results show that the prototype has a low profile of $0.1\lambda _{0}$ , a −10 dB impedance bandwidth of 78.3% and a 3 dB AR bandwidth of 63.4%. Moreover, a 3 dB ARBW of more than 120° and an HPBW of more than 110° are achieved simultaneously within a wide passband of 50.7%.
OCNet: Object Context Network for Scene Parsing
In this paper, we address the problem of scene parsing with deep learning and focus on the context aggregation strategy for robust segmentation. Motivated by the fact that the label of a pixel is the category of the object that the pixel belongs to, we introduce an object context pooling (OCP) scheme, which represents each pixel by exploiting the set of pixels that belong to the same object category as that pixel; we call this set of pixels the object context. Our implementation, inspired by the self-attention approach, consists of two steps: (i) compute the similarities between each pixel and all the pixels, forming a so-called object context map for each pixel that serves as a surrogate for the true object context, and (ii) represent the pixel by aggregating the features of all the pixels weighted by the similarities. The resulting representation is more robust compared to existing context aggregation schemes, e.g., the pyramid pooling module (PPM) in PSPNet and atrous spatial pyramid pooling (ASPP), which do not differentiate between context pixels that belong to the same object category and those that do not, limiting the reliability of the contextually aggregated representations. We empirically demonstrate our approach and two pyramid extensions with state-of-the-art performance on three semantic segmentation benchmarks: Cityscapes, ADE20K and LIP. Code has been made available at: https://github.com/PkuRainBow/OCNet.pytorch.
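The two steps of object context pooling map directly onto a softmax self-attention computation. The sketch below uses plain dot-product similarity on flattened pixel features and omits the learned query/key/value projections that a real implementation would include.

```python
import numpy as np

def object_context_pooling(features):
    """Self-attention-style context aggregation. features: (N, C) array of
    pixel features with spatial dimensions flattened to N positions.
    Step (i): similarity of each pixel to all pixels (the object context map).
    Step (ii): similarity-weighted aggregation of all pixel features."""
    feats = np.asarray(features, float)
    sims = feats @ feats.T / np.sqrt(feats.shape[1])  # scaled dot products
    w = np.exp(sims - sims.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # row-wise softmax
    return w @ feats                                  # aggregated representation
```

Each output row is a convex combination of the input features, so pixels with similar features pull each other's representations together, which is the intended object-context effect.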
Experimental Results for Dexterous Quadruped Locomotion Planning with RoboSimian
RoboSimian is a quadruped robot inspired by an ape-like morphology, with four symmetric limbs that provide a large dexterous workspace and high torque output capabilities. Advantages of using RoboSimian for rough terrain locomotion include (1) its large, stable base of support, and (2) existence of redundant kinematic solutions, toward avoiding collisions with complex terrain obstacles. However, these same advantages provide significant challenges in experimental implementation of walking gaits. Specifically: (1) a wide support base results in high variability of required body pose and foothold heights, in particular when compared with planning for humanoid robots, (2) the long limbs on RoboSimian have a strong proclivity for self-collision and terrain collision, requiring particular care in trajectory planning, and (3) having rear limbs outside the field of view requires adequate perception with respect to a world map. In our results, we present a tractable means of planning statically stable and collision-free gaits, which combines practical heuristics for kinematics with traditional randomized (RRT) search algorithms. In planning experiments, our method outperforms other tested methodologies. Finally, real-world testing indicates that perception limitations provide the greatest challenge in real-world implementation.
Efficient Nash equilibrium approximation through Monte Carlo counterfactual regret minimization
Recently, there has been considerable progress towards algorithms for approximating Nash equilibrium strategies in extensive games. One such algorithm, Counterfactual Regret Minimization (CFR), has proven to be effective in two-player zero-sum poker domains. While the basic algorithm is iterative and performs a full game traversal on each iteration, sampling-based approaches are possible. For instance, chance-sampled CFR considers just a single chance outcome per traversal, resulting in faster but less precise iterations. While more iterations are required, chance-sampled CFR requires less time overall to converge. In this work, we present new sampling techniques that consider sets of chance outcomes during each traversal to produce slower, more accurate iterations. By sampling only the public chance outcomes seen by all players, we take advantage of the imperfect information structure of the game to (i) avoid recomputation of strategy probabilities, and (ii) achieve an algorithmic speed improvement, performing O(n²) work at terminal nodes in O(n) time. We demonstrate that this new CFR update converges more quickly than chance-sampled CFR in the large domains of poker and Bluff.
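All CFR variants, including the sampled ones discussed here, share the same per-information-set strategy update: regret matching, which plays each action in proportion to its positive cumulative counterfactual regret. The sampling machinery itself is beyond an abstract-level example, but the core update can be sketched:

```python
import numpy as np

def regret_matching(cumulative_regrets):
    """Current strategy from cumulative counterfactual regrets: each action
    is played in proportion to its positive regret; if no action has
    positive regret, fall back to the uniform strategy."""
    pos = np.maximum(cumulative_regrets, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.full(len(cumulative_regrets), 1.0 / len(cumulative_regrets))
```

Averaging the strategies produced by this update over many iterations is what converges to an approximate Nash equilibrium in two-player zero-sum games.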
Gamification in theory and action: A survey
Gamification has drawn the attention of academics, practitioners and business professionals in domains as diverse as education, information studies, human–computer interaction, and health. As yet, the term remains mired in diverse meanings and contradictory uses, while the concept faces division on its academic worth, underdeveloped theoretical foundations, and a dearth of standardized guidelines for application. Despite widespread commentary on its merits and shortcomings, little empirical work has sought to validate gamification as a meaningful concept and provide evidence of its effectiveness as a tool for motivating and engaging users in non-entertainment contexts. Moreover, no work to date has surveyed gamification as a field of study from a human–computer studies perspective. In this paper, we present a systematic survey on the use of gamification in published theoretical reviews and research papers involving interactive systems and human participants. We outline current theoretical understandings of gamification and draw comparisons to related approaches, including alternate reality games (ARGs), games with a purpose (GWAPs), and gameful design. We present a multidisciplinary review of gamification in action, focusing on empirical findings related to purpose and context, design of systems, approaches and techniques, and user impact. Findings from the survey show that a standard conceptualization of gamification is emerging against a growing backdrop of empirical, participant-based research. However, definitional subjectivity, diverse or unstated theoretical foundations, incongruities among empirical findings, and inadequate experimental design remain matters of concern. We discuss how gamification may be more usefully presented as a subset of a larger effort to improve the user experience of interactive systems through gameful design. We end by suggesting points of departure for continued empirical investigations of gamified practice and its effects.
The Application of MIMO to Non-Orthogonal Multiple Access
This paper considers the application of multiple-input multiple-output (MIMO) techniques to nonorthogonal multiple access (NOMA) systems. A new design of precoding and detection matrices for MIMO-NOMA is proposed and its performance is analyzed for the case with a fixed set of power allocation coefficients. To further improve the performance gap between MIMO-NOMA and conventional orthogonal multiple access schemes, user pairing is applied to NOMA and its impact on the system performance is characterized. More sophisticated choices of power allocation coefficients are also proposed to meet various quality-of-service requirements. Finally, computer simulation results are provided to facilitate the performance evaluation of MIMO-NOMA and also demonstrate the accuracy of the developed analytical results.
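For a two-user downlink, power-domain NOMA superimposes both users' signals: the weak user treats the strong user's signal as noise, while the strong user first cancels the weak user's signal via successive interference cancellation (SIC). A sketch of the resulting achievable rates; the channel gains and unit noise power in the example are illustrative assumptions, not values from the paper.

```python
import math

def noma_rates(p_total, alpha, g_strong, g_weak, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user downlink power-domain NOMA.
    alpha: power fraction allocated to the strong (good-channel) user;
    g_*: channel power gains |h|^2. The strong user removes the weak user's
    signal by SIC; the weak user treats the strong user's signal as noise."""
    r_strong = math.log2(1 + alpha * p_total * g_strong / noise)
    r_weak = math.log2(1 + (1 - alpha) * p_total * g_weak /
                       (alpha * p_total * g_weak + noise))
    return r_strong, r_weak
```

Giving the larger power fraction to the weak user (small alpha) is what lets both users decode reliably, and sweeping alpha traces out the rate trade-off that power allocation schemes optimize.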
Job Satisfaction and Organizational Commitment in the Public Sector: A Study of a 'Closed' Government Agency
This paper provides findings from the qualitative stage of a study on organizational commitment and job satisfaction. Whilst a number of studies report empirical research in the private sector, as yet there is little research published on the public sector, especially 'closed' government departments. Interviews were conducted to explore the influence of job satisfaction and the level of organizational commitment in a Malaysian public sector organization. The findings disclosed why employees chose to stay even when they were not satisfied and lacked commitment. Low commitment and lack of satisfaction may lead to low morale and a lack of sense of belonging. These issues may benefit from being addressed in order to reduce negative impacts on performance and wellbeing. Findings from this study provide insights into ways of developing a sustainable public sector.
Algorithms for Advanced Battery-Management Systems
Lithium-ion (Li-ion) batteries are ubiquitous sources of energy for portable electronic devices. Compared to alternative battery technologies, Li-ion batteries provide one of the best energy-to-weight ratios, exhibit no memory effect, and have low self-discharge when not in use. These beneficial properties, as well as decreasing costs, have established Li-ion batteries as a leading candidate for the next generation of automotive and aerospace applications. In the automotive sector, increasing demand for hybrid electric vehicles (HEVs), plug-in HEVs (PHEVs), and EVs has pushed manufacturers to the limits of contemporary automotive battery technology. This limitation is gradually forcing consideration of alternative battery technologies, such as Li-ion batteries, as a replacement for existing lead-acid and nickel-metal-hydride batteries. Unfortunately, this replacement is a challenging task since automotive applications demand large amounts of energy and power and must operate safely, reliably, and durably at these scales. The article presents a detailed description and model of a Li-ion battery. The section "Intercalation-Based Batteries" provides an intuitive explanation of the fundamentals behind storing energy in a Li-ion battery. The sections "Modeling Approach" and "Li-Ion Battery Model" present equations that describe a Li-ion cell's dynamic behavior. This modeling uses electrochemical principles to develop a physics-based model, in contrast to equivalent circuit models. A goal of this article is to present the electrochemical model from a controls perspective.
A DUAL-BAND RF ENERGY HARVESTING USING FREQUENCY LIMITED DUAL-BAND IMPEDANCE MATCHING
In this paper, a novel dual-band RF-harvesting RF-DC converter with a frequency-limited impedance matching network (M/N) is proposed. The proposed RF-DC converter consists of a dual-band impedance matching network, a rectifier circuit with a Villard structure, a wideband harmonic suppression low-pass filter (LPF), and a termination load. The proposed dual-band M/N can match two receiving-band signals and suppress out-of-band signals effectively, so the back-scattered nonlinear frequency components from the nonlinear rectifying diodes to the antenna can be blocked. The fabricated circuit provides a maximum RF-DC conversion efficiency of 73.76% with an output voltage of 7.09 V at 881 MHz, and 69.05% with 6.86 V at 2.4 GHz, for an individual input signal power of 22 dBm. Moreover, a conversion efficiency of 77.13% and an output voltage of 7.25 V are obtained when two RF waves with input dual-band signal power of 22 dBm are fed simultaneously.
The second wave of synthetic biology: from modules to systems
Synthetic biology is a research field that combines the investigative nature of biology with the constructive nature of engineering. Efforts in synthetic biology have largely focused on the creation and perfection of genetic devices and small modules that are constructed from these devices. But to view cells as true 'programmable' entities, it is now essential to develop effective strategies for assembling devices and modules into intricate, customizable larger scale systems. The ability to create such systems will result in innovative approaches to a wide range of applications, such as bioremediation, sustainable energy production and biomedical therapies.
Circadian rhythm of cortisol and neighborhood characteristics in a population-based sample: the Multi-Ethnic Study of Atherosclerosis.
Although stress is often hypothesized to contribute to the effects of neighborhoods on health, very few studies have investigated associations of neighborhood characteristics with stress biomarkers. This study helps address this gap in the literature by examining whether neighborhood characteristics are associated with cortisol profiles. Analyses were based on data from the Multi-Ethnic Study of Atherosclerosis Stress study, which collected multiple measures of salivary cortisol over three days from a population-based sample of approximately 800 adults. Multilevel models with splines were used to examine associations of cortisol levels with neighborhood poverty, violence, disorder, and social cohesion. Neighborhood violence was significantly associated with lower cortisol values at wakeup and with a slower decline in cortisol over the earlier part of the day, after sociodemographic controls. Associations were weaker and less consistent for neighborhood poverty, social cohesion, and disorder. Results revealed suggestive, though limited, evidence linking neighborhood contexts to cortisol circadian rhythms.
Red Kant, or the Persistence of the Third "Critique" in Adorno and Jameson
slogan-the one absolute and we may even say 'transhistorical' imperative of all dialectical thought-will unsurprisingly turn out to be the moral of The Political Unconscious as well."' A great deal of critical theory, since Jameson issued his mandate, has assumed an identity between the aesthetic (particularly in its Kantian and modernist versions) and the process of ideological deformation of the material, the real, the sociopolitical; ultimately, of the historical. For many critics working during the last two decades from the Marxian or Marxian-derived premises of the "critique of aesthetic ideology," it has been axiomatic that Kantian aesthetics and the art contemporaneous with it establish an essentialist or transcendental ideology of literary-cultural value whose Other, from romanticism through modernism, will be the material, the social, and the historical; whose Other, to formulate it more precisely, will be the critical attempt to engage the material, social, and historical from a political, interventionist
Metaphor is like analogy
A mind is a computer.
A multiscale retinex for bridging the gap between color images and the human observation of scenes
Direct observation and recorded color images of the same scenes are often strikingly different because human visual perception computes the conscious representation with vivid color and detail in shadows, and with resistance to spectral shifts in the scene illuminant. A computation for color images that approaches fidelity to scene observation must combine dynamic range compression, color consistency (a computational analog for human vision color constancy), and color and lightness tonal rendition. In this paper, we extend a previously designed single-scale center/surround retinex to a multiscale version that achieves simultaneous dynamic range compression, color consistency, and lightness rendition. This extension fails to produce good color rendition for a class of images that violate the gray-world assumption implicit in the theoretical foundation of the retinex. Therefore, we define a method of color restoration that corrects for this deficiency at the cost of a modest dilution in color consistency. Extensive testing of the multiscale retinex with color restoration on several test scenes and over a hundred images did not reveal any pathological behavior.
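A single-scale retinex output is the log of the image minus the log of its Gaussian-blurred surround, and the multiscale version averages several such scales. The sketch below is a minimal grayscale reconstruction (the paper additionally applies color restoration); the default scale values are common choices, not the paper's exact settings, and must be small relative to the image size.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure NumPy with reflect padding."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, ((r, r), (0, 0)), mode="reflect")
    img = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, pad)
    pad = np.pad(img, ((0, 0), (r, r)), mode="reflect")
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 1, pad)

def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
    """Average of single-scale retinex outputs:
    log(image) minus log of its Gaussian surround, at several scales."""
    out = np.zeros_like(channel, dtype=float)
    for s in sigmas:
        out += np.log(channel + eps) - np.log(gaussian_blur(channel, s) + eps)
    return out / len(sigmas)
```

On a uniform image the surround equals the image itself, so the output is zero; structure only appears where a pixel differs from its local surround, which is what yields the dynamic range compression.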
Can Google nowcast the market trend of Iranian mobile games?
The Internet has become an integral part of everyday life. In this paper, we investigate whether the Internet behavior of consumers correlates with their actual behavior in the computer games market. In particular, we aim to investigate to what extent Web search data can be exploited to predict the ranking of mobile games in the world. Based on our findings, which show the existence of such correlations, we use Web search data (from Google) about mobile games in Iran to nowcast (predict the present status of) the ranking of Iranian mobile games.
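The kind of correlation check underlying such a nowcasting study can be illustrated with a toy computation. The weekly figures below are invented for the example, not taken from the paper's data.

```python
import numpy as np

# Hypothetical weekly Google search-volume index for one game, and its
# app-store rank in the same weeks (a lower rank means more popular).
search_index = np.array([12, 18, 25, 31, 40, 55, 61, 70], dtype=float)
store_rank   = np.array([90, 74, 60, 52, 41, 30, 22, 15], dtype=float)

# Pearson correlation; a strong negative value means rising search
# interest coincides with a better (lower) store rank.
r = np.corrcoef(search_index, store_rank)[0, 1]
```

A correlation near -1 on historical weeks is what would justify using the current week's search volume to nowcast the current (not-yet-published) rank.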
Understanding the cognitive impact on children who are treated for medulloblastoma.
OBJECTIVE Risk-adapted treatment approaches employed within contemporary medulloblastoma treatment protocols aim to reduce the neurotoxicity directed at the central nervous system. Despite these important steps to reduce radiation dose exposure, an overwhelming majority of medulloblastoma survivors continue to experience academic failure and significant learning delays. METHODS A review of the current literature is presented. RESULTS Deficits in intellectual function, academic achievement, memory, attention, and processing speed are reported. A review of neuroimaging studies shows changes in brain tissue following chemotherapy and radiation treatment. Finally, intervention programs, including pharmacotherapy and experimental cognitive intervention studies, are discussed. CONCLUSIONS Declining IQ and academic struggles may be predated by difficulties with attention, memory, and processing speed. More clinical trials directed at treating and preventing neurocognitive late effects through cognitive rehabilitation are needed.
Managerial perceptions of political risk in international projects
This paper examines the vulnerability of international projects to political risks. A brief review of the literature on general risks – natural, financial, cultural and political – is undertaken and then a more detailed review of the literature on political risk is presented. It was found that relatively few studies of political risk, particularly in the context of international projects, have been carried out; moreover, the focus has been almost exclusively on developed rather than developing countries. Questionnaires were therefore distributed to the entire target population of Jordanian international projects. The findings suggest that the political risk associated with international projects poses a threat to the majority of respondents and that vulnerability to political risk is related to a firm's degree of internationalisation. Firms engaged in international projects are more concerned about host-society and interstate-related risks than host-government-related risks.
Flexible capacitive sensors for high resolution pressure measurement
Thin, flexible, robust capacitive pressure sensors have been the subject of research in many fields where axial strain sensing with high spatial resolution and pressure resolution is desirable for small loads, such as tactile robotics and biomechanics. Simple capacitive pressure sensors have been designed and implemented on flexible substrates in general agreement with performance predicted by an analytical model. Two designs are demonstrated for comparison. The first design uses standard flex circuit technology, and the second design uses photolithography techniques to fabricate capacitive sensors with higher spatial and higher pressure resolution. Sensor arrays of varying sensor size and spacing are tested with applied loads from 0 to 1 MPa. Pressure resolution and linearity of the sensors are significantly improved with the miniaturized, custom fabricated sensor array compared to standard flexible circuit technology.
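The analytical model behind such sensors is essentially the parallel-plate capacitor: applied pressure compresses the dielectric, shrinking the gap and raising the capacitance. A minimal sketch, with the dielectric constant and elastic modulus chosen only for illustration (not the paper's materials):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def capacitance_under_pressure(area_m2, gap_m, pressure_pa,
                               modulus_pa=1e6, eps_r=3.0):
    """Axial compression of the dielectric under load, assuming a
    linear-elastic dielectric: the gap shrinks by the strain P/E."""
    strain = pressure_pa / modulus_pa
    return capacitance(area_m2, gap_m * (1.0 - strain), eps_r)
```

Reading pressure back from a measured capacitance inverts this relation, which is why gap compression (rather than area change) dominates the sensitivity of axial-load capacitive sensors.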
A foundation for the study of behavior change support systems
The emerging field of ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software design. This theory-creating article suggests that the concept of a behavior change support system (BCSS), whether a web-based, mobile, ubiquitous, or more traditional information system, be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral change.
Automatic Teacher Modeling from Live Classroom Audio
We investigate automatic analysis of teachers' instructional strategies from audio recordings collected in live classrooms. We collected a data set of teacher audio and human-coded instructional activities (e.g., lecture, question and answer, group work) in 76 middle school literature, language arts, and civics classes from eleven teachers across six schools. We automatically segment teacher audio to analyze speech vs. rest patterns, generate automatic transcripts of the teachers' speech to extract natural language features, and compute low-level acoustic features. We train supervised machine learning models to identify occurrences of five key instructional segments (Question & Answer, Procedures and Directions, Supervised Seatwork, Small Group Work, and Lecture) that collectively comprise 76% of the data. Models are validated independently of teacher in order to increase generalizability to new teachers from the same sample. We were able to identify the five instructional segments above chance levels with F1 scores ranging from 0.64 to 0.78. We discuss key findings in the context of teacher modeling for formative assessment and professional development.
StreamScope: Continuous Reliable Distributed Processing of Big Data Streams
STREAMSCOPE (or STREAMS) is a reliable distributed stream computation engine that has been deployed in shared 20,000-server production clusters at Microsoft. STREAMS provides a continuous temporal stream model that allows users to express complex stream processing logic naturally and declaratively. STREAMS supports business-critical streaming applications that can process tens of billions (or tens of terabytes) of input events per day continuously with complex logic involving tens of temporal joins, aggregations, and sophisticated user-defined functions, while maintaining tens of terabytes of in-memory computation state on thousands of machines. STREAMS introduces two abstractions, rVertex and rStream, to manage the complexity in distributed stream computation systems. The abstractions allow efficient and flexible distributed execution and failure recovery, make it easy to reason about correctness even with failures, and facilitate the development, debugging, and deployment of complex multi-stage streaming applications.
Silk - Generating RDF Links while Publishing or Consuming Linked Data
The central idea of the Web of Data is to interlink data items using RDF links. However, in practice most data sources are not sufficiently interlinked with related data sources. The Silk Link Discovery Framework addresses this problem by providing tools to generate links between data items based on user-provided link specifications. It can be used by data publishers to generate links between data sets as well as by Linked Data consumers to augment Web data with additional RDF links. In this poster we present the Silk Link Discovery Framework and report on two usage examples in which we employed Silk to generate links between two data sets about movies as well as to find duplicate persons in a stream of data items that is crawled from the Web.
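A link specification of this kind boils down to comparing property values of candidate pairs with a similarity metric and keeping pairs above a threshold. The sketch below is a toy stand-in for Silk's declarative link specifications, using Python's difflib string similarity; the URIs and labels are invented for the example.

```python
from difflib import SequenceMatcher

def link(source_items, target_items, threshold=0.85):
    """Emit owl:sameAs candidate links between two data sets.
    Each item is a (uri, label) pair; pairs whose lowercased labels
    are similar enough are linked. Illustrative only, not Silk's API."""
    links = []
    for s_uri, s_label in source_items:
        for t_uri, t_label in target_items:
            sim = SequenceMatcher(None, s_label.lower(), t_label.lower()).ratio()
            if sim >= threshold:
                links.append((s_uri, "owl:sameAs", t_uri, sim))
    return links
```

Real link specifications would restrict candidate pairs by type and use blocking to avoid the quadratic comparison loop, but the accept-above-threshold structure is the same.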
Action MACH a spatio-temporal Maximum Average Correlation Height filter for action recognition
In this paper we introduce a template-based method for recognizing human actions called Action MACH. Our approach is based on a maximum average correlation height (MACH) filter. A common limitation of template-based methods is their inability to generate a single template from a collection of examples. MACH is capable of capturing intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (a 3D spatiotemporal volume) and to vector-valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector-valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data. Finally, we perform an extensive set of experiments and compare our method with some of the most recent approaches in the field using publicly available datasets, and two new annotated human action datasets which include actions performed in classic feature films and sports broadcast television.
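The frequency-domain synthesize-and-correlate idea can be illustrated with a much-simplified 2D stand-in for the MACH filter: build one filter from the average spectrum of the training examples, normalized by their average power spectrum, and correlate against a test image via the FFT. This omits the full MACH optimality criteria (and the 3D/Clifford generalizations) and is illustrative only.

```python
import numpy as np

def synthesize_filter(examples, eps=1e-8):
    """One filter from many examples: average spectrum over average power."""
    specs = np.fft.fft2(examples)                   # spectrum of each example
    mean_spec = specs.mean(axis=0)                  # numerator: average spectrum
    avg_power = (np.abs(specs) ** 2).mean(axis=0)   # denominator: average power
    return mean_spec / (avg_power + eps)

def correlate(filt, image):
    """Circular correlation of the filter with a test image via the FFT."""
    return np.real(np.fft.ifft2(np.conj(filt) * np.fft.fft2(image)))
```

The peak of the correlation response localizes the match: for a (circularly) shifted copy of the template, the response peaks exactly at the shift, and the whole operation costs only a few FFTs, which is the computational saving the abstract refers to.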
Modeling Restaurant Context for Food Recognition
Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such a scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about restaurant menus and the locations of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply this model to three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple sources of evidence (visual, location, and external knowledge) our system can boost performance in all tasks.
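A probabilistic model of this shape can be sketched as marginalizing over candidate restaurants: the GPS location gives a distribution over nearby restaurants, each menu gives a distribution over dishes, and the visual classifier supplies likelihoods. All numbers below are invented for illustration; this is not the paper's exact model.

```python
import numpy as np

# Toy instance: two candidate restaurants near the photo's location, three dishes.
p_rest_given_loc = np.array([0.7, 0.3])            # p(restaurant | location)
p_dish_given_rest = np.array([[0.5, 0.5, 0.0],     # menu of restaurant 0
                              [0.0, 0.4, 0.6]])    # menu of restaurant 1
p_visual_given_dish = np.array([0.2, 0.6, 0.2])    # visual classifier likelihoods

# p(dish | visual, location) is proportional to
#   p(visual | dish) * sum_r p(dish | r) * p(r | location)
location_prior = p_rest_given_loc @ p_dish_given_rest
posterior = p_visual_given_dish * location_prior
posterior /= posterior.sum()
```

Note how the menu prior zeroes out dishes no nearby restaurant serves, which is exactly the benefit of geocontext: the visual classifier only has to discriminate among plausible candidates.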
Evaluation of nail abnormalities.
Knowledge of the anatomy and function of the nail apparatus is essential when performing the physical examination. Inspection may reveal localized nail abnormalities that should be treated, or may provide clues to an underlying systemic disease that requires further workup. Excessive keratinaceous material under the nail bed in a distal and lateral distribution should prompt an evaluation for onychomycosis. Onychomycosis may be diagnosed through potassium hydroxide examination of scrapings. If potassium hydroxide testing is negative for the condition, a nail culture or nail plate biopsy should be performed. A proliferating, erythematous, disruptive mass in the nail bed should be carefully evaluated for underlying squamous cell carcinoma. Longitudinal melanonychia (vertical nail bands) must be differentiated from subungual melanomas, which account for 50 percent of melanomas in persons with dark skin. Dystrophic longitudinal ridges and subungual hematomas are local conditions caused by trauma. Edema and erythema of the proximal and lateral nail folds are hallmark features of acute and chronic paronychia. Clubbing may suggest an underlying disease such as cirrhosis, chronic obstructive pulmonary disease, or celiac sprue. Koilonychia (spoon nail) is commonly associated with iron deficiency anemia. Splinter hemorrhages may herald endocarditis, although other causes should be considered. Beau lines can mark the onset of a severe underlying illness, whereas Muehrcke lines are associated with hypoalbuminemia. A pincer nail deformity is inherited or acquired and can be associated with beta-blocker use, psoriasis, onychomycosis, tumors of the nail apparatus, systemic lupus erythematosus, Kawasaki disease, and malignancy.
Efficacy and safety of torcetrapib, a novel cholesteryl ester transfer protein inhibitor, in individuals with below-average high-density lipoprotein cholesterol levels.
OBJECTIVES This study was designed to evaluate the efficacy and safety of torcetrapib, a cholesteryl ester transfer protein (CETP) inhibitor, in subjects with low high-density lipoprotein cholesterol (HDL-C) levels. BACKGROUND Evidence suggests HDL-C is atheroprotective. A proven mechanism for increasing the level of HDL-C is the inhibition of CETP. METHODS A total of 162 subjects with below-average HDL-C (men <44 mg/dl; women <54 mg/dl) who were not taking lipid-modifying therapy were randomized to double-blind treatment with torcetrapib 10, 30, 60, or 90 mg/day or placebo (approximately 30 subjects per group). RESULTS The percent change from baseline to Week 8 with torcetrapib (least-squares mean difference from placebo) was dose-dependent and ranged from 9.0% to 54.5% for HDL-C (p ≤ 0.0001 for 30 mg and higher doses) and from 3.0% to -16.5% for low-density lipoprotein cholesterol (LDL-C) (p < 0.01 for 90-mg dose). Low-density lipoprotein cholesterol lowering was less in subjects with higher (>150 mg/dl) versus lower levels of baseline triglycerides; at 60 mg, the change in LDL-C was 0.1% versus -22.2% (p < 0.0001), respectively. Particle size for both HDL and LDL increased with torcetrapib. There were no dose-related increases in the frequency of adverse events. Significant blood pressure increases were noted in 2 of 140 subjects. CONCLUSIONS Torcetrapib resulted in substantial dose-dependent elevations in HDL-C, accompanied by moderate decreases in LDL-C at the higher doses. Torcetrapib was generally well tolerated.
Association of the preoperative neutrophil-to-lymphocyte and platelet-to-lymphocyte ratios with lymph node metastasis and recurrence in patients with medullary thyroid carcinoma
The preoperative neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) are known to be prognostic factors in several cancers. However, no previous investigation has been performed to evaluate the significance of the NLR and PLR in medullary thyroid carcinoma (MTC). The aim of this study was to identify the ability of the preoperative NLR or PLR to predict lymph node metastasis and recurrence in patients with MTC. Data from all patients with MTC who had undergone surgery at our institution from May 2009 to May 2016 were retrospectively evaluated. Receiver operating characteristic (ROC) analysis was performed to identify optimal NLR and PLR cutoff points, and we assessed independent predictors of lymph node metastasis and recurrence using univariate and multivariate analyses. Based on the inclusion and exclusion criteria, a total of 70 patients were enrolled in this study. The ideal cutoff points for predicting lymph node involvement were 2.7 for the NLR and 105.3 for the PLR. The optimal cutoff points of the NLR and PLR for predicting recurrence were 2.8 and 129.8, respectively. Using the cutoff values, we found that PLR>105.3 (odds ratio [OR] 4.782, 95% confidence interval [CI] 1.4-16.7) was an independent predictor of lymph node metastasis and that PLR>129.8 (OR 3.838, 95% CI 1.1-13.5) was an independent predictor of recurrence. Our study suggests that the preoperative PLR, but not the NLR, was significantly associated with lymph node metastasis and recurrence in patients with MTC.
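ROC-based cutoff selection of this kind is typically done by maximizing Youden's J statistic (sensitivity + specificity - 1) over candidate thresholds. A minimal sketch with invented marker values, where the binary label stands in for an event such as lymph node metastasis:

```python
import numpy as np

def youden_cutoff(values, labels):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1.
    values: marker values (e.g., preoperative PLR); labels: 1 = event occurred.
    A subject is predicted positive when their value exceeds the cutoff."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    best_cut, best_j = None, -1.0
    for cut in np.unique(values):
        pred = values > cut
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)            # true positive rate
        spec = tn / (tn + fp)            # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

This mirrors the "PLR > 105.3" style of threshold reported in the abstract: the chosen cutoff is the marker value above which a subject is classified as at risk.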
An agent-assisted QoS-based routing algorithm for wireless sensor networks
Existing routing algorithms are not effective in supporting the dynamic characteristics of wireless sensor networks (WSNs) and cannot ensure sufficient quality of service in WSN applications. This paper proposes a novel agent-assisted QoS-based routing algorithm for wireless sensor networks. In the proposed algorithm, the synthetic QoS of WSNs is chosen as the fitness value of a Particle Swarm Optimization algorithm to improve the overall performance of the network. Intelligent software agents are used to monitor changes in network topology, network communication flow, and each node's routing state. These agents can then participate in network routing and network maintenance. Experiment results show that the proposed algorithm can ensure better quality of service in wireless sensor networks compared with traditional algorithms.
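The optimization core can be sketched as a standard particle swarm minimizing a QoS cost. The fitness function below is an invented quadratic stand-in for the paper's synthetic QoS, and the inertia/acceleration constants are common textbook values, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def qos_cost(x):
    """Illustrative synthetic-QoS cost over (delay, energy) decision variables,
    minimized at (1, 2). Not the paper's actual fitness function."""
    delay, energy = x[..., 0], x[..., 1]
    return (delay - 1.0) ** 2 + 0.5 * (energy - 2.0) ** 2

# Standard PSO: inertia term plus cognitive (personal best) and social
# (global best) attraction terms.
n, dim, iters = 30, 2, 200
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), qos_cost(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = qos_cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

In the routing setting, each particle would encode a candidate route (or routing parameters) and the agents' monitored metrics would feed the fitness evaluation; the loop structure is unchanged.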
Cardiorheumatology: cardiac involvement in systemic rheumatic disease
Autoimmune rheumatic diseases can affect the cardiac vasculature, valves, myocardium, pericardium, and conduction system, leading to a plethora of cardiovascular manifestations that can remain clinically silent or lead to substantial cardiovascular morbidity and mortality. Although the high risk of cardiovascular pathology in patients with autoimmune inflammatory rheumatological diseases is not owing to atherosclerosis alone, this particular condition contributes substantially to cardiovascular morbidity and mortality: the degree of coronary atherosclerosis observed in patients with rheumatic diseases can be as accelerated, diffuse, and extensive as in patients with diabetes mellitus. The high risk of atherosclerosis is not solely attributable to traditional cardiovascular risk factors: dysfunctional immune responses, a hallmark of patients with rheumatic disorders, are thought to cause chronic tissue-destructive inflammation. Prompt recognition of cardiovascular abnormalities is needed for timely and appropriate management, and aggressive control of traditional risk factors remains imperative in patients with rheumatic diseases. Moreover, therapies directed towards the inflammatory process are crucial to reduce cardiovascular disease morbidity and mortality. In this Review, we examine the multiple cardiovascular manifestations in patients with rheumatological disorders, their underlying pathophysiology, and available management strategies, with particular emphasis on the vascular aspects of the emerging field of 'cardiorheumatology'.
Texture-aware ASCII art synthesis with proportional fonts
We present a fast structure-based ASCII art generation method that accepts arbitrary images (real photographs or hand-drawings) as input. Our method supports not only fixed-width fonts, but also the visually more pleasant and computationally more challenging proportional fonts, which allows us to represent challenging images with a variety of structures by characters. We take human perception into account and develop a novel feature extraction scheme based on a multi-orientation phase congruency model. Different from most existing contour detection methods, our scheme does not attempt to remove textures as much as possible. Instead, it aims at faithfully capturing visually sensitive features, including both main contours and textural structures, while suppressing visually insensitive features, such as minor texture elements and noise. Together with a deformation-tolerant image similarity metric, we can generate lively and meaningful ASCII art, even when the choices of character shapes and placement are very limited. A dynamic programming based optimization is proposed to simultaneously determine the optimal proportional-font characters for matching and their optimal placement. Experimental results show that our method outperforms state-of-the-art methods in terms of visual quality.
Construct validation of a triangular love scale.
This article presents a construct validation of a love scale based upon a triangular theory of love. The article opens with a review of some of the major theories of love, and with a discussion of some of the major issues in love research. Next it briefly reviews selected elements of the triangular theory of love, according to which love can be understood as comprising three components: intimacy, passion, and decision/commitment. Then the article presents two studies constituting the construct validation of the love scale. The construct validation comprises aspects of internal validation (determination of whether the internal structure of the data is consistent with the theory) and external validation (determination of whether the scale based on the theory shows sensible patterns of correlations with external measures). The data are generally, but not completely, supportive of the utility of the triangular love scale.
Efficient delivery of sticky siRNA and potent gene silencing in a prostate cancer model using a generation 5 triethanolamine-core PAMAM dendrimer.
Successful achievement of RNA interference in therapeutic applications requires safe and efficient vectors for siRNA delivery. In the present study, we demonstrate that a triethanolamine (TEA)-core PAMAM dendrimer of generation 5 (G(5)) is able to deliver sticky siRNAs bearing complementary A(n)/T(n) 3'-overhangs effectively to a prostate cancer model in vitro and in vivo and produce potent gene silencing of the heat shock protein 27, leading to a notable anticancer effect. The complementary A(n)/T(n) (n = 5 or 7) overhangs characteristic of these sticky siRNA molecules help the siRNA molecules self-assemble into "gene-like" longer double-stranded RNAs thus endowing a low generation dendrimer such as G(5) with greater delivery capacity. In addition, the A(n)/T(n) (n = 5 or 7) overhangs act as protruding molecular arms that allow the siRNA molecule to enwrap the dendrimer and promote a better interaction and stronger binding, ultimately contributing toward the improved delivery activity of G(5). Consequently, the low generation dendrimer G(5) in combination with sticky siRNA therapeutics may constitute a promising gene silencing-based approach for combating castration-resistant prostate tumors or other cancers and diseases, for which no effective treatment currently exists.