What is the function of the claustrum?
The claustrum is a thin, irregular, sheet-like neuronal structure hidden beneath the inner surface of the neocortex in the general region of the insula. Its function is enigmatic. Its anatomy is quite remarkable in that it receives input from almost all regions of cortex and projects back to almost all regions of cortex. We here briefly summarize what is known about the claustrum, speculate on its possible relationship to the processes that give rise to integrated conscious percepts, propose mechanisms that enable information to travel widely within the claustrum and discuss experiments to address these questions.
Immersion in Digital Games: A Review of Gaming Experience Research
Immersion is a widely valued experience when playing digital games; however, it is only one component of the gaming experience. In this chapter, we review our specific approach to immersion in relation to other concepts that are used to describe gaming experiences. These include: concepts that are not specific to games, such as flow and attention; generic conceptualizations of the gaming experience of which immersion may form part, such as incorporation; and specific concepts around immersion, engagement and involvement, such as presence and other formulations of immersion. To illustrate the sorts of studies being done in this area, we describe in detail one experiment that aims to position immersion in relation to presence. The chapter makes three contributions: a clear formulation of immersion as one aspect of gaming experience; an overview of the state of the art of gaming experience research; and a demonstration that an empirically founded understanding of these rich, subjective experiences is achievable.
Improving efficiency and accuracy in multilingual entity extraction
There has recently been increased interest in named entity recognition and disambiguation systems at major conferences such as WWW, SIGIR, ACL, KDD, etc. However, most work has focused on algorithms and evaluations, leaving little space for implementation details. In this paper, we discuss some implementation and data processing challenges we encountered while developing a new multilingual version of DBpedia Spotlight that is faster, more accurate and easier to configure. We compare our solution to the previous system, considering time performance, space requirements and accuracy in the context of the Dutch and English languages. Additionally, we report results for 9 additional languages among the largest Wikipedias. Finally, we present challenges and experiences to foster discussion with other developers interested in recognition and disambiguation of entities in natural language text.
Analysis of Audio-Visual Features for Unsupervised Speech Recognition
Research on “zero resource” speech processing focuses on learning linguistic information from unannotated, or raw, speech data, in order to bypass the expensive annotations required by current speech recognition systems. While most recent zero-resource work has made use of only speech recordings, here, we investigate the use of visual information as a source of weak supervision, to see whether grounding speech in a visual context can provide additional benefit for language learning. Specifically, we use a dataset of paired images and audio captions to supervise learning of low-level speech features that can be used for further “unsupervised” processing of any speech data. We analyze these features and evaluate their performance on the Zero Resource Challenge 2015 evaluation metrics, as well as standard keyword spotting and speech recognition tasks. We show that features generated with a joint audiovisual model contain more discriminative linguistic information and are less speaker-dependent than traditional speech features. Our results show that visual grounding can improve speech representations for a variety of zero-resource tasks.
Automatic Interlinking of Music Datasets on the Semantic Web
In this paper, we describe current efforts towards interlinking music-related datasets on the Web. We first explain some initial interlinking experiences, and the poor results obtained by taking a naïve approach. We then detail a particular interlinking algorithm, taking into account both the similarities of web resources and of their neighbours. We detail the application of this algorithm in two contexts: to link a Creative Commons music dataset to an editorial one, and to link a personal music collection to corresponding web identifiers. The latter provides a user with personally meaningful entry points for exploring the web of data, and we conclude by describing some concrete tools built to generate and use such links.
Scene Recognition by Manifold Regularized Deep Learning Architecture
Scene recognition is an important problem in computer vision because it helps to narrow the gap between computers and humans in scene understanding. Semantic modeling is a popular technique used to fill the semantic gap in scene recognition. However, most semantic modeling approaches learn shallow, one-layer representations for scene recognition and ignore the structural information relating images, which often results in poor performance. Modeled after the human visual system, and intended to inherit human-like judgment, a manifold regularized deep architecture is proposed for scene recognition. The proposed deep architecture exploits the structural information of the data when learning the mapping between the visible layer and the hidden layer. With the proposed approach, a deep architecture can be designed to learn high-level features for scene recognition in an unsupervised fashion. Experiments on standard data sets show that our method outperforms state-of-the-art methods for scene recognition.
SMART LIVING USING BLUETOOTH-BASED ANDROID SMARTPHONE
With the development of modern technology and Android Smartphones, Smart Living is gradually changing people's lives. Bluetooth technology, which aims to exchange data wirelessly over short distances using short-wavelength radio transmissions, provides the necessary technology to create convenience, intelligence and controllability. In this paper, a new Smart Living system, a home lighting control system using a Bluetooth-based Android Smartphone, is proposed and prototyped. First, Smartphones, Smart Living and Bluetooth technology are reviewed. Second, the system architecture, communication protocol and hardware design are described. Then the design of a Bluetooth-based Smartphone application and the prototype are presented. It is shown that an Android Smartphone can provide a platform for implementing Bluetooth-based applications for Smart Living.
Can concept sorting provide a reliable, valid and sensitive measure of medical knowledge structure?
CONTEXT Evolution from novice to expert is associated with the development of expert-type knowledge structure. The objectives of this study were to examine reliability and validity of concept sorting (ConSort) as a measure of static knowledge structure and to determine the relationship between concepts in static knowledge structure and concepts used during diagnostic reasoning. METHOD ConSort was used to identify static knowledge concepts and analysis of think-aloud protocols was used to identify dynamic knowledge concepts (used during diagnostic reasoning). Intra- and inter-rater reliability, and correlation across cases, were evaluated. Construct validity was evaluated by comparing proportions of nephrologists and students with expert-type knowledge structure. Sensitivity and specificity of static knowledge concepts as a predictor of dynamic knowledge concepts were estimated. RESULTS Thirteen first-year medical students and 19 nephrologists participated. Intra- and inter-rater agreement for determination of static knowledge concepts were 1.0 and 0.90, respectively. Reliability across cases was 0.45. The proportions of nephrologists and students identified as having expert-type knowledge structure were 82.9% and 55.8%, respectively (p=0.001). Sensitivity and specificity of ConSort© in predicting concepts that were used during diagnostic reasoning were 96.8% and 27.8% for nephrologists and 87.2% and 55.1% for students. CONCLUSIONS ConSort is a reliable, valid and sensitive tool for studying static knowledge structure. The applicability of tools that evaluate static knowledge structure should be explored as an addition to existing tools that evaluate dynamic tasks such as diagnostic reasoning.
Convolutional Neural Network based Malignancy Detection of Pulmonary Nodule on Computer Tomography
Without performing a biopsy, which can cause physical damage to nerves and vessels, computerized tomography (CT) is widely used to diagnose lung cancer because of its high sensitivity for pulmonary nodule detection. However, distinguishing between malignant and benign pulmonary nodules is still not an easy task. Because CT scans are mostly of relatively low resolution, it is not easy for radiologists to read the details of the scan images. In the past few years, the continuing rapid growth of CT scan analysis systems has generated a pressing need for advanced computational tools that extract useful features to assist radiologists in the reading process. Computer-aided detection (CAD) systems have been developed to reduce observational oversights by identifying the suspicious features that a radiologist looks for during case review. Most previous CAD systems rely on low-level, non-texture imaging features such as intensity, shape, size or volume of the pulmonary nodules. However, pulmonary nodules vary widely in shape and size, and benign and malignant patterns are visually very similar, so relying on non-texture imaging features makes diagnosis of the nodule type difficult. To overcome this problem, more recent CAD systems have adopted supervised or unsupervised learning schemes to translate the content of the nodules into discriminative features, enabling high-level imaging features that are highly correlated with shape and texture. Convolutional neural networks (ConvNets), supervised methods related to deep learning, have improved rapidly in recent years. Owing to their great success in computer vision tasks, they are also expected to be helpful in medical imaging. In this thesis, a CAD system based on a deep convolutional neural network (ConvNet) is designed and evaluated for malignant pulmonary nodules on computerized tomography. The proposed ConvNet, which is the core component of the proposed CAD system, is trained on the LUNGx challenge database to classify benign and malignant pulmonary nodules on CT. The architecture of the proposed ConvNet consists of 3 convolutional layers with maximum pooling operations and rectified linear unit (ReLU) activations, followed by 2 dense, fully connected layers, and the architecture is carefully tailored for pulmonary nodule classification by considering the
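As a rough illustration of the architecture described above (3 convolutional layers with max pooling and ReLU activations, followed by 2 fully connected layers), here is a minimal PyTorch sketch; the filter counts, kernel sizes, patch size and layer widths are assumptions, not the thesis's actual hyperparameters.

```python
import torch
import torch.nn as nn

class NoduleConvNet(nn.Module):
    """Sketch of the described architecture: 3 conv layers with ReLU and max
    pooling, then 2 fully connected layers for benign-vs-malignant output.
    Channel counts, kernel sizes and the 64x64 patch size are assumptions."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, num_classes),  # benign vs malignant logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of 4 single-channel 64x64 CT patches.
logits = NoduleConvNet()(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```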
Merging Occupancy Grid Maps From Multiple Robots
Mapping can potentially be sped up significantly by using multiple robots exploring different parts of the environment. But the core question of multirobot mapping is how to integrate the data of the different robots into a single global map. A significant amount of research in multirobot mapping deals with techniques to estimate the robots' relative poses at the start of, or during, the mapping process. With map merging, in contrast, the robots individually build local maps without any knowledge of their relative positions. The goal is then to identify regions of overlap at which the local maps can be joined together. A concrete approach to this idea is presented in the form of a special similarity metric and a stochastic search algorithm. Given two maps m and m', the search algorithm transforms m' by rotations and translations to find a maximum overlap between m and m'. In doing so, the heuristic similarity metric guides the search algorithm toward optimal solutions. Results from experiments with up to six robots are presented, based on simulated as well as real-world map data.
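A minimal Python sketch of the map-merging idea follows: a heuristic agreement score between two occupancy grids, and a stochastic search over rotations and translations of m' that maximizes it. The pure random search, the agreement measure and the occupancy encoding (1 occupied, 0 free, -1 unknown) are simplifying assumptions; the paper uses a more elaborate similarity metric and a guided stochastic search.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def agreement(m1, m2):
    """Count cells where both maps agree on occupied (1) or free (0);
    unknown cells (-1) are ignored. A stand-in for the paper's metric."""
    known = (m1 != -1) & (m2 != -1)
    return np.sum((m1 == m2) & known)

def random_search_merge(m1, m2, iters=2000, seed=0):
    """Stochastic search for the rotation/translation of m2 that best
    aligns it with m1 (plain random search here, for brevity)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -1
    for _ in range(iters):
        angle = rng.uniform(0, 360)
        dx, dy = rng.integers(-20, 21, size=2)
        cand = rotate(m2, angle, reshape=False, order=0, mode='constant', cval=-1)
        cand = shift(cand, (dy, dx), order=0, mode='constant', cval=-1)
        score = agreement(m1, cand)
        if score > best_score:
            best, best_score = (angle, dx, dy), score
    return best, best_score
```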
Training an Active Random Field for Real-Time Image Denoising
Many computer vision problems can be formulated in a Bayesian framework based on Markov random fields (MRF) or conditional random fields (CRF). Generally, the MRF/CRF model is learned independently of the inference algorithm that is used to obtain the final result. In this paper, we observe considerable gains in speed and accuracy by training the MRF/CRF model together with a fast and suboptimal inference algorithm. An active random field (ARF) is defined as a combination of an MRF/CRF-based model and a fast inference algorithm for that model. This combination is trained through the optimization of a loss function on a training set consisting of pairs of input images and desired outputs. We apply the ARF concept to image denoising, using the Fields of Experts MRF together with a 1-4 iteration gradient descent algorithm for inference. Experimental validation on unseen data shows that the ARF approach obtains an improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF. Using the ARF approach, image denoising can be performed in real time, at 8 fps on a single CPU for a 256×256 image sequence, with close to state-of-the-art accuracy.
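To make the "train the model together with its inference algorithm" idea concrete, the PyTorch sketch below unrolls a fixed, small number of gradient-descent denoising steps over a bank of learned filters and treats the whole unrolled procedure as the trainable function; the filter count, penalty function and step size are assumptions rather than the paper's Fields of Experts settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledDenoiser(nn.Module):
    """Sketch of the Active Random Field idea: a Fields-of-Experts-style prior
    (a bank of learned filters) trained jointly with a fixed, small number of
    gradient-descent inference steps."""
    def __init__(self, n_filters=8, ksize=5, n_steps=3):
        super().__init__()
        self.filters = nn.Parameter(0.01 * torch.randn(n_filters, 1, ksize, ksize))
        self.alpha = nn.Parameter(torch.ones(n_filters))   # expert weights
        self.step = nn.Parameter(torch.tensor(0.2))        # learned step size
        self.n_steps = n_steps

    def forward(self, noisy):
        x = noisy
        for _ in range(self.n_steps):                # 1-4 unrolled iterations
            r = F.conv2d(x, self.filters, padding=2)
            # gradient of sum_k alpha_k * log(1 + r_k^2) with respect to x
            g = F.conv_transpose2d(
                self.alpha.view(1, -1, 1, 1) * 2 * r / (1 + r ** 2),
                self.filters, padding=2)
            x = x - self.step * (g + (x - noisy))    # prior term + data term
        return x

# Training would minimize e.g. the MSE between forward(noisy) and the clean image,
# so the prior is optimized for exactly this truncated inference procedure.
```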
Concepts and technology development for the autonomous assembly and reconfiguration of modular space systems
This thesis will present concepts of modular space systems, including definitions and specific examples of how modularity has been incorporated into past and present space missions. In addition, it will present two architectures that utilize modularity in more detail to serve as examples of possible applications. The first example is a fully modular spacecraft design, which has standardized and reconfigurable components with multiple decoupled subsystems. This concept was developed into a testbed called Self-assembling Wireless Autonomous and Reconfigurable Modules (SWARM). This project sought to demonstrate the use of modular spacecraft in a laboratory environment, and to investigate the “cost,” or penalty, of modularity. The second example investigates the on-orbit assembly of a segmented primary mirror, which is part of a large space-based telescope. The objective is to compare two methods for assembling the mirror. The first method uses a propellant-based spacecraft to move the segments from a central stowage stack to the mirror assembly. The second is an electromagnetic-based method that uses superconducting electromagnetic coils as a means of applying force and torque between two assembling vehicles to produce the same results as the propellant-based system. Fully modular systems could have the ability to autonomously assemble and reconfigure in space. This ability will certainly involve very complex rendezvous and docking maneuvers that will require advanced docking ports and sensors. To this end, this thesis investigates the history of docking ports, and presents a comprehensive list of functional requirements. It then describes the design and implementation of the Universal Docking Port (UDP). Lastly, it explores the development of an optical docking sensor called the Miniature Video Docking Sensor (MVDS), which uses a set of infrared LEDs, a miniature CCD-based video camera, and an Extended Kalman Filter to determine the six relative degrees of freedom of two docking vehicles. It uses the Synchronized Position Hold Engage and Reorient Experimental Satellites (SPHERES) to demonstrate this fully integrated docking system.
AutoExtend: Combining Word Embeddings with Semantic Resources
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.
Personality Traits , Emotional Intelligence and Academic Achievements of University Students
This research investigated the relationships between personality traits, emotional intelligence and academic achievement among 160 university students in Malaysia. The Big Five Inventory (BFI) was used to measure the five dimensions of personality traits (extraversion, agreeableness, conscientiousness, neuroticism and openness); the Schutte Emotional Intelligence Scale (SEIS) was used to measure emotional intelligence; and students' academic achievement was measured by Cumulative Grade Point Average (CGPA). Bivariate analysis using the Pearson correlation method indicated that extraversion (r=.311, p<.05), agreeableness (r=.378, p<.05), conscientiousness (r=.315, p<.05) and openness (r=.497, p<.05) were positively and significantly correlated with emotional intelligence. Neuroticism (r=-.303, p<.05) was negatively and significantly associated with emotional intelligence. However, emotional intelligence (r=.002, p>.05) was not significantly associated with academic achievement. Future research is recommended to employ structural equation modeling to determine how both personality traits and emotional intelligence affect academic achievement.
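As a minimal illustration of the analysis reported above, the Python snippet below computes bivariate Pearson correlations with scipy; the arrays are random stand-ins for the study's measures, not the actual data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical stand-ins for the study's measures (n = 160 students):
# one BFI trait score, the SEIS total, and the CGPA per student.
rng = np.random.default_rng(0)
extraversion = rng.normal(3.5, 0.6, 160)
seis_total = rng.normal(120, 15, 160)
cgpa = rng.normal(3.2, 0.4, 160)

r, p = pearsonr(extraversion, seis_total)   # bivariate correlation, as in the paper
print(f"extraversion vs EI: r={r:.3f}, p={p:.3f}")
r, p = pearsonr(seis_total, cgpa)
print(f"EI vs CGPA: r={r:.3f}, p={p:.3f}")
```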
The quality of life of middle-aged women varies according to menopausal status
Stator/rotor slot and winding pole pair combinations of DC biased sinusoidal vernier reluctance machines
The recent novel DC-biased sinusoidal current vernier reluctance machines (DC-biased VRMs) are similar to switched reluctance machines (SRMs) in terms of structure, as both adopt a doubly salient structure and concentrated windings. However, the special characteristic of DC-biased VRMs is that their phase currents contain both a DC component and an AC component. It can therefore be inferred that vibration and noise can be much smaller than in SRMs, as the phase current waveform is smooth. Besides, compared with variable flux reluctance machines with specialized field windings, DC-biased VRMs exhibit better performance, such as lower copper loss and higher torque density. In this paper, the stator/rotor slot combinations and armature winding configurations with different pole pairs of DC-biased VRMs are investigated in depth. First, the relationships among stator/rotor slots and armature winding pole pairs are derived from the working principles. Then, several feasible stator/rotor slot combinations are obtained, and their electromagnetic performances are compared by finite element analysis (FEA). The results show that the 8-rotor-slot, 5-armature-pole-pair machine exhibits the highest torque at rated load together with the highest torque density, the 10-rotor-slot, 4-armature-pole-pair machine shows the highest torque under overload conditions, and the 12/11 and 12/13 machines present the minimum torque pulsation.
State-Space Constrained Model Predictive Control
Constrained state-space model predictive control is presented in this paper. A predictive controller based on an incremental linear state-space process model and a quadratic criterion is derived. Typical types of constraints are considered: limits on manipulated, state and controlled variables. Control experiments with a nonlinear model of a multivariable laboratory process are first simulated, and a real experiment is then carried out.
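As a rough sketch of constrained model predictive control in this spirit, the Python snippet below minimizes a quadratic criterion over a prediction horizon for a toy state-space model, with hard bounds on the manipulated variable and a soft penalty for an output limit; the model, horizon, weights and the use of a generic optimizer (rather than the incremental formulation and a dedicated QP solver) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 2-state model and tuning; not the paper's laboratory process.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
N, Q, R = 10, 1.0, 0.01
u_min, u_max, y_max = -1.0, 1.0, 1.2

def cost(u_seq, x0, ref):
    """Quadratic tracking criterion over the horizon, with a soft output limit."""
    x, J = x0.copy(), 0.0
    for u in u_seq:
        x = A @ x + B.flatten() * u
        y = (C @ x).item()
        J += Q * (y - ref) ** 2 + R * u ** 2
        J += 1e4 * max(0.0, y - y_max) ** 2   # soft penalty for the controlled-variable limit
    return J

x0, ref = np.array([0.0, 0.0]), 1.0
res = minimize(cost, np.zeros(N), args=(x0, ref),
               bounds=[(u_min, u_max)] * N)   # hard limits on the manipulated variable
u_now = res.x[0]                              # apply first move, then re-solve (receding horizon)
```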
The deep kernelized autoencoder
Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological properties of the input data. In this paper, we enhance the autoencoder's ability to learn effective data representations by aligning inner products between codes with respect to a kernel matrix. By doing so, the proposed kernelized autoencoder allows learning similarity-preserving embeddings of input data, where the notion of similarity is explicitly controlled by the user and encoded in a positive semi-definite kernel matrix. Experiments evaluate both reconstruction and kernel alignment performance in classification tasks and in the visualization of high-dimensional data. Additionally, we show that our method is capable of emulating kernel principal component analysis on a denoising task, obtaining competitive results at a much lower computational cost.
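A minimal PyTorch sketch of the idea: the usual reconstruction loss plus a term that aligns the Gram matrix of the codes with a user-supplied positive semi-definite kernel matrix. Layer sizes, the normalized-alignment form of the penalty and the trade-off weight are assumptions.

```python
import torch
import torch.nn as nn

def kernel_alignment_loss(codes, K_prior):
    """Align inner products between codes with a prior PSD kernel matrix
    (1 minus the normalized Frobenius inner product of the two matrices)."""
    K_code = codes @ codes.t()
    num = (K_code * K_prior).sum()
    return 1.0 - num / (K_code.norm() * K_prior.norm() + 1e-12)

class KernelAE(nn.Module):
    """Sketch of a kernelized autoencoder; layer sizes are assumptions."""
    def __init__(self, d_in=784, d_code=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, d_code))
        self.dec = nn.Sequential(nn.Linear(d_code, 256), nn.ReLU(), nn.Linear(256, d_in))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def loss_fn(model, x, K_prior, lam=0.5):
    # Reconstruction term plus kernel-alignment term on the batch codes.
    x_hat, z = model(x)
    return nn.functional.mse_loss(x_hat, x) + lam * kernel_alignment_loss(z, K_prior)
```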
Face recognition based on convolutional neural network and support vector machine
Face recognition is an important embodiment of human-computer interaction and has been widely used in access control systems, monitoring systems and identity verification. However, since face images vary with expression, age, pose and illumination conditions, the face images of the same subject can differ substantially, which makes face recognition difficult. There are two main requirements in face recognition: a high recognition rate and short training time. In this paper, we combine a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) to recognize face images. The CNN is used as a feature extractor that acquires discriminative features automatically. We first pre-train our CNN on ancillary data to obtain initial weights, and then train the CNN on the target dataset to extract more hidden facial features. Finally, we use an SVM as our classifier, instead of the CNN's own output layer, to recognize all the classes. With facial features extracted by the CNN as input, the SVM recognizes face images more accurately. In our experiments, face images from the CASIA-WebFace database are used for pre-training, and the FERET database is used for training and testing. The experimental results demonstrate the efficiency of the approach, with a high recognition rate and reduced training time.
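A minimal scikit-learn sketch of the final stage, with the CNN replaced by a placeholder feature extractor: features from a (hypothetical) trained CNN are fed to an SVM that acts as the classifier. The kernel, regularization constant and data shapes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cnn_features(imgs):
    """Placeholder for the penultimate-layer activations of the trained CNN."""
    return imgs.reshape(len(imgs), -1)

# Toy data standing in for face images and identity labels.
X_train = cnn_features(np.random.rand(200, 32, 32))
y_train = np.random.randint(0, 10, 200)
X_test = cnn_features(np.random.rand(50, 32, 32))

# The SVM replaces the CNN's own softmax layer as the final classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
```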
Protein-flexible chain polymer interactions to explain protein partition in aqueous two-phase systems and the protein-polyelectrolyte complex formation.
Complex formation between two model proteins (catalase and chymotrypsin) and polyelectrolytes (polyvinyl sulphonate and polyacrylic acid) and a non-charged flexible-chain polymer (PCF), polyethylene propylene oxide (molecular mass 8400), was studied by a combination of spectroscopic techniques: UV absorption, fluorescence emission and circular dichroism. All the polymers increase the protein surface hydrophobicity (S(0)) parameter value, evidence of modification of the protein surface exposed to the solvent. Chymotrypsin showed an increase in its biological activity in the presence of polymer, which suggests a change in the surface microenvironment. The decrease in the biological activity of catalase might be due to competition between the polymer and the substrate. This result agrees with the polymer effect on the catalase surface hydrophobic area. It was found that, when flexible-chain polymers increase protein stability and enzymatic activity, they could be used to isolate the enzyme without inducing loss of enzymatic activity. Our findings suggest that the interactions depend on the protein's physico-chemical parameters, such as isoelectric pH and hydrophobic surface area.
Fragment-based lead discovery using X-ray crystallography.
Fragment screening offers an alternative to traditional screening for discovering new leads in drug discovery programs. This paper describes a fragment screening methodology based on high throughput X-ray crystallography. The method is illustrated against five proteins (p38 MAP kinase, CDK2, thrombin, ribonuclease A, and PTP1B). The fragments identified have weak potency (>100 microM) but are efficient binders relative to their size and may therefore represent suitable starting points for evolution to good quality lead compounds. The examples illustrate that a range of molecular interactions (i.e., lipophilic, charge-charge, neutral hydrogen bonds) can drive fragment binding and also that fragments can induce protein movement. We believe that the method has great potential for the discovery of novel lead compounds against a range of targets, and the companion paper illustrates how lead compounds have been identified for p38 MAP kinase starting from fragments such as those described in this paper.
Air-ground localization and map augmentation using monocular dense reconstruction
We propose a new method for the localization of a Micro Aerial Vehicle (MAV) with respect to a ground robot. We solve the problem of registering the 3D maps computed by the robots using different sensors: a dense 3D reconstruction from the MAV monocular camera is aligned with the map computed from the depth sensor on the ground robot. Once aligned, the dense reconstruction from the MAV is used to augment the map computed by the ground robot, by extending it with the information conveyed by the aerial views. The overall approach is novel, as it builds on recent developments in live dense reconstruction from moving cameras to address the problem of air-ground localization. The core of our contribution is constituted by a novel algorithm integrating dense reconstructions from monocular views, Monte Carlo localization, and an iterative pose refinement. In spite of the radically different vantage points from which the maps are acquired, the proposed method achieves high accuracy whereas appearance-based, state-of-the-art approaches fail. Experimental validation in indoor and outdoor scenarios reported an accuracy in position estimation of 0.08 meters and real time performance. This demonstrates that our new approach effectively overcomes the limitations imposed by the difference in sensors and vantage points that negatively affect previous techniques relying on matching visual features.
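The pipeline combines monocular dense reconstruction, Monte Carlo localization and iterative pose refinement. As a rough illustration of just the Monte Carlo localization ingredient, the sketch below implements one generic particle-filter step (predict, weight, resample) in Python; the motion noise, the measurement-likelihood callback and the resampling threshold are assumptions, and the dense-reconstruction alignment and pose refinement are not shown.

```python
import numpy as np

def monte_carlo_localization(particles, weights, motion, measure_ll, n_eff_frac=0.5):
    """One generic MCL step: predict, weight by measurement likelihood, resample."""
    rng = np.random.default_rng()
    # Predict: propagate particles through a noisy motion model.
    particles = particles + motion + rng.normal(0, 0.05, particles.shape)
    # Update: weight each particle by the likelihood of the current measurement.
    weights = weights * measure_ll(particles)
    weights = weights / weights.sum()
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < n_eff_frac * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: 500 2-D particles, commanded motion (0.2, 0.0), observation near (1.0, 0.0).
p = np.random.default_rng(0).normal(0, 1, (500, 2))
w = np.full(500, 1 / 500)
obs = np.array([1.0, 0.0])
p, w = monte_carlo_localization(p, w, np.array([0.2, 0.0]),
                                lambda q: np.exp(-np.sum((q - obs) ** 2, axis=1) / 0.1))
```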
Porphyria in Switzerland, 15 years experience.
BACKGROUND The porphyrias, a group of seven metabolic disorders of haem biosynthesis, can be classified into acute and non-acute porphyrias. A common symptom of acute porphyrias is severe acute abdominal pain, whereas cutaneous photosensitivity can occur in both acute and non-acute porphyrias. All porphyrias, except for sporadic porphyria cutanea tarda (sPCT), are hereditary disorders caused by mutations in the respective genes. We present porphyria cases documented in our porphyria centre during the past 15 years. METHODS Diagnosis was based on clinical symptoms and biochemical analyses. Mutation analysis was performed in patients/families with a confirmed hereditary porphyria. RESULTS AND CONCLUSIONS As the porphyria specialist centre of Switzerland, we perform the specialized analyses required for the diagnosis of all types of porphyrias, and give advice to patients, physicians and other laboratories. We therefore estimated that our data cover 80-90% of all diagnosed Swiss cases. A total of 217 patients from 170 families were diagnosed, including 111 acute intermittent porphyria, 45 erythropoietic protoporphyria (EPP), 30 variegate porphyria, 21 sPCT, five congenital erythropoietic porphyria, four hereditary coproporphyria and one hepatoerythropoietic porphyria patient. Systematic monitoring of the patients would allow early detection of potentially life-threatening complications such as hepatocellular carcinoma and renal insufficiency in acute porphyrias, and liver failure in EPP. Seventy-five percent of all families underwent genetic testing. Identification of pre-symptomatic mutation carriers, so that these individuals and their physicians can be advised on safe drug use and other preventive measures, is important in managing acute porphyrias. The unique phenomenon of founder mutations in the Swiss population is also discussed.
What we know about the learning behaviour of our students. Which factors influence learning success?
In the ZEITLast project [1], the workload of bachelor's students was recorded in detail using a time-budget method, daily over five months, in 27 samples from different subjects and universities. The minute-by-minute recording and analysis of the workload led to the surprising insight that the subjective perception of time and load in no way corresponds to the objectively measured time. The analysis reveals not only the weakness of unsupervised self-study, but also opens the view to the enormous diversity in motivation and learning behaviour. While some students investing fewer than 20 hours per week passed none, some or all of their examinations, students investing more than 40 hours per week failed quite a few examinations, and vice versa. It could be shown that the time students spend on their studies has no correlative relationship with examination success. The high individual and inter-individual variance, even in classroom study but especially in self-study, leads to the insight that there is neither a "typical" student nor a normal, uniform course of study. If, however, students are differentiated by their motivation, group profiles of students become recognizable that do differ in workload and study success. For universities and for teaching and learning in higher education, it is therefore important to look for study structures that take this heterogeneity into account. To corroborate these findings, a meta-study [2] reviewed 300 empirical studies from learning research on workload and study success. The amount of workload does not prove to be a predictor of study success. Compared with learning-behaviour variables such as conscientiousness, attention, concentration and persistent goal pursuit, the well-known demographic variables lose prognostic power. [1] http://www.zhw.uni-hamburg.de/zhw/?page_id=419 [2] A detailed account of the meta-study on learning behaviour will appear in August 2014: Schulmeister, R.: Auf der Suche nach Determinanten des Studienerfolgs. In J. Brockmann and A. Pilniok (eds.): Studieneingangsphase in der Rechtswissenschaft. Nomos, 2014, pp. 72-204.
Digital Asset Management with Distributed Permission over Blockchain and Attribute-Based Access Control
Digital asset management (DAM) has increasing benefits in the booming global Internet economy, but providing an effective way to manage, store, ingest, organize and retrieve digital assets remains a great challenge. To address this, we present a new digital asset management platform, called DAM-Chain, with Transaction-Based Access Control (TBAC), which integrates the distributed ABAC model and blockchain technology. In this platform, ABAC provides flexible and diverse authorization mechanisms for digital assets escrowed into the blockchain, while the blockchain's transactions serve as a verifiable and traceable medium for the access request procedure. We also present four types of transactions that describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request, and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as a verifiable and transparent access authorization process in an open decentralized environment.
EPID dosimetry for pretreatment quality assurance with two commercial systems
This study compares the EPID dosimetry algorithms of two commercial systems for pretreatment QA, and analyzes dosimetric measurements made with each system alongside the results obtained with a standard diode array. 126 IMRT fields are examined with both EPID dosimetry systems (EPIDose by Sun Nuclear Corporation, Melbourne FL, and Portal Dosimetry by Varian Medical Systems, Palo Alto CA) and the diode array, MapCHECK (also by Sun Nuclear Corporation). Twenty-six VMAT arcs of varying modulation complexity are examined with the EPIDose and MapCHECK systems. Optimization and commissioning testing of the EPIDose physics model is detailed. Each EPID IMRT QA system is tested for sensitivity to critical TPS beam model errors. Absolute dose gamma evaluation (3%, 3 mm, 10% threshold, global normalization to the maximum measured dose) yields similar results (within 1%-2%) for all three dosimetry modalities, except in the case of off-axis breast tangents. For these off-axis fields, the Portal Dosimetry system does not adequately model EPID response, though a previously-published correction algorithm improves performance. Both MapCHECK and EPIDose are found to yield good results for VMAT QA, though limitations are discussed. Both the Portal Dosimetry and EPIDose algorithms, though distinctly different, yield similar results for the majority of clinical IMRT cases, in close agreement with a standard diode array. Portal dose image prediction may overlook errors in beam modeling beyond the calculation of the actual fluence, while MapCHECK and EPIDose include verification of the dose calculation algorithm, albeit in simplified phantom conditions (and with limited data density in the case of the MapCHECK detector). Unlike the commercial Portal Dosimetry package, the EPIDose algorithm (when sufficiently optimized) allows accurate analysis of EPID response for off-axis, asymmetric fields, and for orthogonal VMAT QA. Other forms of QA are necessary to supplement the limitations of the Portal Vision Dosimetry system.
Comparison of the OxyMask and Venturi mask in the delivery of supplemental oxygen: pilot study in oxygen-dependent patients.
BACKGROUND The OxyMask (Southmedic Inc, Canada) is a new face mask for oxygen delivery that uses a small 'diffuser' to concentrate and direct oxygen toward the mouth and nose. The authors hypothesized that this unique design would enable the OxyMask to deliver oxygen more efficiently than a Venturi mask (Hudson RCI, USA) in patients with chronic hypoxemia. METHODS Oxygen-dependent patients with chronic, stable respiratory disease were recruited to compare the OxyMask and Venturi mask in a randomized, single-blind, cross-over design. Baseline blood oxygen saturation (SaO2) was established breathing room air, followed in a random order by supplemental oxygen through the OxyMask or Venturi mask. Oxygen delivery was titrated to maintain SaO2 4% to 5% and 8% to 9% above baseline for two separate 30 min periods of stable breathing. Oxygen flow rate, partial pressure of inspired and expired oxygen (PO2) and carbon dioxide (PCO2), minute ventilation, heart rate, nasal and oral breathing, SaO2 and transcutaneous PCO2 were collected continuously. The study was repeated following alterations to the OxyMask design, which improved clearance of carbon dioxide. RESULTS Thirteen patients, aged 28 to 79 years, were studied initially using the original OxyMask. Oxygen flow rate was lower, inspired PO2 was higher and expired PO2 was lower while using the OxyMask. Minute ventilation and inspired and expired PCO2 were significantly higher while using the OxyMask, whereas transcutaneous PCO2, heart rate and the ratio of nasal to oral breathing did not change significantly throughout the study. Following modification of the OxyMask, 13 additional patients, aged 18 to 79 years, were studied using the same protocol. The modified OxyMask provided a higher inspired PO2 at a lower flow rate, without evidence of carbon dioxide retention. CONCLUSIONS Oxygen is delivered safely and more efficiently by the OxyMask than by the Venturi mask in stable oxygen-dependent patients.
A Self-Organizing Spatial Vocabulary
Language is a shared set of conventions for mapping meanings to utterances. This paper explores self-organization as the primary mechanism for the formation of a vocabulary. It reports on a computational experiment in which a group of distributed agents develop ways to identify each other using names or spatial descriptions. It is also shown that the proposed mechanism copes with the acquisition of an existing vocabulary by new agents entering the community and with an expansion of the set of meanings.
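The abstract does not spell out the protocol, so the following Python sketch shows only the general flavour of such a self-organizing vocabulary via a minimal naming game: agents invent, adopt and align names for objects until conventions emerge. The invention rule, the alignment-on-success rule and all parameters are assumptions; the paper's experiment additionally covers spatial descriptions and agents joining an existing community.

```python
import random

def naming_game(n_agents=20, n_objects=5, rounds=5000, seed=1):
    """Minimal naming game: a speaker names an object (inventing a word if it
    has none); on failure the hearer adopts the word, on success both agents
    discard competing words, so a shared vocabulary self-organizes."""
    rng = random.Random(seed)
    vocab = [{} for _ in range(n_agents)]            # agent -> {object: set(words)}
    for _ in range(rounds):
        s, h = rng.sample(range(n_agents), 2)
        obj = rng.randrange(n_objects)
        words = vocab[s].setdefault(obj, set())
        if not words:
            words.add('w%d' % rng.randrange(10 ** 6))    # invent a new name
        word = rng.choice(sorted(words))
        if word in vocab[h].get(obj, set()):
            vocab[s][obj] = {word}                       # success: align on the word
            vocab[h][obj] = {word}
        else:
            vocab[h].setdefault(obj, set()).add(word)    # failure: hearer adopts it
    return vocab

# After enough rounds, most agents share a single name per object.
```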
Increased olfactory sensitivity in euthymic patients with bipolar disorder with event-related episodes compared with patients with bipolar disorder without such episodes.
OBJECTIVE Some patients with bipolar disorder experience mood episodes following emotional life events, whereas others do not. There is evidence that orbitofrontal hypoactivity may be related to this, because the orbitofrontal cortex is involved in the regulation of emotional and behavioural responses to external events. The close anatomical and functional connection between the orbitofrontal cortex and olfactory processing suggests that patients with bipolar disorder and heightened emotional reactivity may exhibit altered olfactory function compared with patients with bipolar disorder who do not exhibit this sensitivity. METHODS In this pilot study, olfactory function was assessed in patients with bipolar disorder and a history of event-triggered episodes (n = 7) and in patients with bipolar disorder without such a history (n = 9) at the Department of Psychiatry and the Taste and Smell Clinic of the University of Dresden, Germany. Each patient's bipolar disorder was in remission at study entry, and they were on monotherapy with mood stabilizers. Assessment included olfactory event-related potentials (ERP) and psychophysical tests for odour threshold, odour identification and olfactory quality discrimination. RESULTS Odour thresholds were lower in patients with bipolar disorder and event-triggered episodes compared with the other patient group. In addition, patients with event-triggered episodes exhibited shorter N1 peak latencies of the olfactory ERP. CONCLUSIONS Our findings indicate disinhibition of orbitofrontal areas involved in the processing of emotional events in a subset of patients with bipolar illness.
The so-called Hoffman's lymphangitis of the penis: Is it a lymphangitis or a phlebitis?
Hoffman's plastic lymphangitis of the penis is a benign, uncommon entity whose aetiology is still unknown. Microscopically there is a fibrous thickening of the involved lymph vessels but the primary localization of the lesion—lymphatic or venous—is still debated. Anatomo-clinical and histological findings suggest a primary involvement of the lymphatic system of the penis, probably related to a prolonged period of sexual excitement.
Toward constructing evidence-based legal arguments using legal decision documents and machine learning
This paper explores how to extract argumentation-relevant information automatically from a corpus of legal decision documents, and how to build new arguments using that information. For decision texts, we use the Vaccine/Injury Project (V/IP) Corpus, which contains default-logic annotations of argument structure. We supplement this with presuppositional annotations about entities, events, and relations that play important roles in argumentation, and about the level of confidence that arguments would be successful. We then propose how to integrate these semantic-pragmatic annotations with syntactic and domain-general semantic annotations, such as those generated in the DeepQA architecture, and outline how to apply machine learning and scoring techniques similar to those used in the IBM Watson system for playing the Jeopardy! question-answer game. We replace this game-playing goal, however, with the goal of learning to construct legal arguments.
Aligning Product Categories using Anchor Products
E-commerce sites group similar products into categories, and these categories are further organized in a taxonomy. Since different sites have different products and cater to a variety of shoppers, the taxonomies differ both in the categorization of products and in the textual representation used for these categories. In this paper, we propose a technique to align categories across sites, which is useful information to have in product graphs. We use breadcrumbs present on the product pages to infer a site's taxonomy. We generate a list of candidate category pairs for alignment using anchor products: products present on two or more sites. We use multiple similarity and distance metrics to compare these candidates. To generate the final set of alignments, we propose a model that combines these metrics with a set of structural constraints. The model is based on probabilistic soft logic (PSL), a scalable probabilistic programming framework. We run experiments on data extracted from Amazon, Ebay, Staples and Target, and show that the distance metric based on products, together with the use of PSL to combine various metrics and structural constraints, leads to improved alignments.
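As a rough Python illustration of the candidate-generation step only: anchor products (product ids present on both sites) vote for category pairs, which are then scored by a simple string-similarity metric. The breadcrumb format and the metric are assumptions, and the PSL model that combines metrics with structural constraints is not shown.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Hypothetical input: for each site, a map from product id to its breadcrumb
# category path (as inferred from the product page).
site_a = {'p1': 'Electronics > Laptops', 'p2': 'Office > Paper'}
site_b = {'p1': 'Computers > Notebooks', 'p3': 'Office Supplies > Copy Paper'}

def candidate_pairs(a, b):
    """Anchor products (ids present on both sites) vote for category pairs."""
    votes = defaultdict(int)
    for pid in a.keys() & b.keys():
        votes[(a[pid], b[pid])] += 1
    return votes

def name_similarity(c1, c2):
    """One of several possible similarity metrics; a string-based stand-in."""
    return SequenceMatcher(None, c1.lower(), c2.lower()).ratio()

for (c1, c2), n in candidate_pairs(site_a, site_b).items():
    print(c1, '<->', c2, '| anchors:', n, '| name sim: %.2f' % name_similarity(c1, c2))
```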
Towards a decision-making structure for selecting a research design in empirical software engineering
Several factors make empirical research in software engineering particularly challenging, as it requires studying not only technology but its stakeholders' activities while drawing concepts and theories from social science. Researchers, in general, agree that selecting a research design in empirical software engineering research is challenging, because the implications of using individual research methods are not well recorded. The main objective of this article is to make researchers aware of these issues and to support them in their research design, by providing a foundation of knowledge about empirical software engineering research decisions, in order to ensure that researchers make well-founded and informed decisions about their research designs. This article provides a decision-making structure containing a number of decision points, each one of them representing a specific aspect of empirical software engineering research. The article provides an introduction to each decision point and its constituents, as well as to the relationships between the different parts in the decision-making structure. The intention is that the structure should act as a starting point for the research design before going into the details of the research design chosen. The article provides an in-depth discussion of decision points in relation to the research design when conducting empirical research.
Enhanced Feeding and Diminished Postnatal Growth Failure in Very-Low-Birth-Weight Infants
OBJECTIVE The aim of the present study was to determine whether an increased supply of energy, protein, essential fatty acids, and vitamin A reduces postnatal growth failure in very-low-birth-weight infants. METHODS Fifty infants with birth weight <1500 g were randomized to an intervention (n = 24) or a control (n = 26) feeding protocol within 24 hours after birth. Forty-four infants were included in the final analysis. This study was discontinued because of an increased occurrence of septicemia in the intervention group. RESULTS The intervention group had a lower mean birth weight (P = 0.03) and a higher proportion of infants small-for-gestational age (P = 0.04) than the control group. Other baseline characteristics were similar. The median (interquartile range) energy and protein supplies during the first 4 weeks of life were higher in the intervention group: 139 (128-145) versus 126 (121-128) kcal · kg⁻¹ · day⁻¹ (P < 0.001) and 4.0 (3.9-4.2) versus 3.2 (3.1-3.3) g · kg⁻¹ · day⁻¹ (P < 0.001). The infants in the intervention group regained birth weight faster (P = 0.001) and maintained their z scores for weight and head circumference from birth to 36 weeks' postmenstrual age (both P < 0.001). The median (interquartile range) growth velocity was 17.4 (16.3-18.6) g · kg⁻¹ · day⁻¹ in the intervention group and 13.8 (13.2-15.5) g · kg⁻¹ · day⁻¹ in the control group (P < 0.001). In line with the improved growth in the intervention group, the proportion of growth-restricted infants was 11 of 23 both at birth and at 36 weeks' postmenstrual age, whereas this proportion increased among the controls from 4 of 21 to 13 of 21 (P = 0.04). CONCLUSIONS Enhanced supply of energy, protein, essential fatty acids, and vitamin A caused postnatal growth along the birth percentiles for both weight and head circumference.
Insulin detemir causes increased symptom awareness during hypoglycaemia compared to human insulin.
AIM The long-acting insulin analogue detemir (Levemir) has structural and physicochemical properties that differ from those of human insulin. The aim of the present study was to test whether this leads to an altered hormone and symptom response during hypoglycaemia. METHODS 12 healthy subjects [6f/6m, age 32 +/- 6 years (mean +/- s.d.), body mass index (BMI) 24.2 +/- 2.5 kg/m²] underwent a 200-min stepwise hypoglycaemic clamp (45 min steps of 4.4, 3.7, 3.0 and 2.3 mmol/l) with either detemir or human insulin in random order. A bolus of detemir (660 mU/kg) or human insulin (60 mU/kg) was given before insulin was infused at a rate of 5 (detemir) or 2 (human insulin) mU/kg/min. Blood was drawn and a semi-quantitative symptom questionnaire was administered before and after each plateau of the hypoglycaemic clamp. Cognitive function was assessed during each step. RESULTS Blood glucose levels and glucose infusion rates were comparable with detemir and human insulin. The total symptom score was higher with detemir during the 3.0 and 2.3 mmol/l glucose steps compared to human insulin (p = 0.048). Sweating, in particular, was increased with detemir (p = 0.02), with an earlier and faster increase during the clamp (interaction insulin x time: p = 0.04). No significant differences between detemir and human insulin in cortisol, norepinephrine, epinephrine, glucagon, growth hormone, lactate or free fatty acid (FFA) levels during hypoglycaemia were observed, and there were no significant differences in cognitive function tests. CONCLUSIONS Insulin detemir increased symptom awareness during hypoglycaemia compared to human insulin in healthy individuals, whereas counter-regulatory hormone response and cognitive function were unaltered.
Organizational Culture and Leadership in ERP Implementation
This paper theorizes how leadership affects ERP implementation by fostering the desired organizational culture. We contend that ERP implementation success is positively related with organizational culture along the dimensions of learning and development, participative decision making, power sharing, support and collaboration, and tolerance for risk and conflicts. In addition, we identify the strategic and tactical actions that the top management can take to influence organizational culture and foster a culture conducive to ERP implementation. The theoretical contributions and managerial implications of this study are discussed. © 2007 Elsevier B.V. All rights reserved.
Human Motion Segmentation via Robust Kernel Sparse Subspace Clustering
Studies on human motion have attracted a lot of attention. Human motion capture data, which records human motion much more precisely than video does, has been widely used in many areas. Motion segmentation is an indispensable step for many related applications, but current segmentation methods for motion capture data do not effectively model some important characteristics of such data, such as its Riemannian manifold structure and non-Gaussian noise. In this paper, we convert the segmentation of motion capture data into a temporal subspace clustering problem. Under the framework of sparse subspace clustering, we propose to use the geodesic exponential kernel to model the Riemannian manifold structure, use correntropy to measure the reconstruction error, use a triangle constraint to guarantee temporal continuity in each cluster, and use multi-view reconstruction to extract the relations between different joints. Exploiting these special characteristics of motion capture data, we thus propose a new segmentation method that is robust to non-Gaussian noise, since correntropy is a localized similarity measure. We also develop an efficient optimization algorithm based on the block coordinate descent method to solve the proposed model. Our optimization algorithm has linear complexity, whereas sparse subspace clustering is originally a quadratic problem. Extensive experimental results on both a simulated noisy data set and a real noisy data set demonstrate the advantage of the proposed method.
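Two of the ingredients are easy to illustrate in a short numpy sketch: a geodesic exponential kernel built from pairwise geodesic distances, and correntropy as a bounded, outlier-insensitive similarity. The kernel form exp(-d/sigma) and the kernel width are assumptions; the full model (triangle constraint, multi-view reconstruction, block coordinate descent) is not shown.

```python
import numpy as np

def geodesic_exponential_kernel(D, sigma=1.0):
    """Kernel matrix k(i, j) = exp(-d_g(i, j) / sigma) built from a matrix D
    of pairwise geodesic distances (e.g., between poses on the motion manifold)."""
    return np.exp(-D / sigma)

def correntropy(a, b, sigma=1.0):
    """Correntropy: a localized similarity measure; large errors are bounded
    by the Gaussian kernel, which is why it is robust to non-Gaussian noise."""
    return np.mean(np.exp(-((a - b) ** 2) / (2 * sigma ** 2)))

# Compared with squared error, a single gross outlier barely changes correntropy:
clean = np.zeros(100)
noisy = clean.copy(); noisy[0] = 50.0           # one gross outlier
print(np.mean((clean - noisy) ** 2))            # 25.0, dominated by the outlier
print(1 - correntropy(clean, noisy))            # about 0.01, barely affected
```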
A controlled study of formal thought disorder in children with autism and multiple complex developmental disorders.
UNLABELLED Along with well-defined categories in classification systems (e.g., autistic disorders and attention-deficit/hyperactivity disorder (ADHD)), practitioners are confronted with many children showing mixed forms of developmental psychopathology. These clusters of symptoms are on the borderlines of more defined categories. The late Donald Cohen proposed heuristic criteria to study a group defined by impaired social sensitivity, impaired regulation of affect, and thinking disorders under the name multiple complex developmental disorders (MCDD). Although these children meet criteria for pervasive developmental disorder--not otherwise specified (PDD-NOS), they have additional important clinical features, such as thought disorder. After highlighting similarities and differences between MCDD and comparable groups (e.g., multidimensionally impaired children), this paper presents the findings of a study comparing formal thought disorder scores in children with MCDD to children with autism spectrum diagnoses, such as autistic disorder (AD), and to children with nonspectrum diagnoses, such as ADHD and anxiety disorders. METHODS Videotaped speech samples of four groups of high-functioning, latency-aged children with MCDD, AD, ADHD, and anxiety disorders were compared to a control group of normal children using the Kiddie Formal Thought Disorder Rating Scale (K-FTDS). RESULTS High formal thought disorder scores were found both in the AD and MCDD groups, low rates in the ADHD groups, and no thought disorder in the anxiety disorder and normal control groups. The severity of formal thought disorder was related to verbal IQ scores within the AD and MCDD groups. CONCLUSION High formal thought scores in children with complex developmental disorders, such as AD and MCDD, appear to reflect impaired communication skills rather than early signs of psychosis.
Learn from the Past, Prepare for the Future: Impacts of Education and Experience on Disaster Preparedness in the Philippines and Thailand
Summary — This study aims at understanding the role of education in promoting disaster preparedness. Strengthening resilience to climate-related hazards is an urgent target of Goal 13 of the Sustainable Development Goals. Preparing for a disaster, such as stockpiling emergency supplies or having a family evacuation plan, can substantially minimize loss and damage from natural hazards. However, the levels of household disaster preparedness are often low, even in disaster-prone areas. Focusing on determinants of personal disaster preparedness, this paper investigates: (1) pathways through which education enhances preparedness; and (2) the interplay between education and experience in shaping preparedness actions. Data analysis is based on face-to-face surveys of adults aged 15 years in Thailand (N = 1,310) and the Philippines (N = 889, female only). Controlling for socio-demographic and contextual characteristics, we find that formal education raises the propensity to prepare against disasters. Using the KHB method to further decompose the education effects, we find that the effect of education on disaster preparedness is mainly mediated through social capital and disaster risk perception in Thailand, whereas there is no evidence that education is mediated through observable channels in the Philippines. This suggests that the underlying mechanisms explaining the education effects are highly context-specific. Controlling for the interplay between education and disaster experience, we show that education raises disaster preparedness only for those households that have not been affected by a disaster in the past. Education improves abstract reasoning and anticipation skills such that the better educated undertake preventive measures without needing to first experience the harmful event. In line with recent efforts of various UN agencies in promoting education for sustainable development, this study provides solid empirical evidence of the positive externalities of education in disaster risk reduction.
Recurrent Neural Networks and Pitch Representations for Music Tasks
We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of recurrent networks for this purpose, and have found that Long Short-term Memory networks provide the best results. We show that a new pitch representation called Circles of Thirds works as well as two other published representations for these tasks, yet it is more succinct and enables faster learning.
Recurrent Neural Networks and Music
Many researchers are familiar with feedforward neural networks consisting of 2 or more layers of processing units, each with weighted connections to the next layer. Each unit passes the sum of its weighted inputs through a nonlinear sigmoid function. Each layer's outputs are fed forward through the network to the next layer, until the output layer is reached. Weights are initialized to small random values. Via the back-propagation algorithm (Rumelhart et al. 1986), outputs are compared to targets, and the errors are propagated back through the connection weights. Weights are updated by gradient descent. Through an iterative training procedure, examples (inputs) and targets are presented repeatedly; the network learns a nonlinear function of the inputs. It can then generalize and produce outputs for new examples. These networks have been explored by the computer music community for classifying chords (Laden and Keefe 1991) and other musical tasks (Todd and Loy 1991, Griffith and Todd 1999). A recurrent network uses feedback from one or more of its units as input in choosing the next output. This means that values generated by units at time step t-1, say y(t-1), are part of the inputs x(t) used in selecting the next set of outputs y(t). A network may be fully recurrent; that is, all units are connected back to each other and to themselves. Or part of the network may be fed back in recurrent links. Todd (Todd 1991) uses a Jordan recurrent network (Jordan 1986) to reproduce classical songs and then to produce new songs. The outputs are recurrently fed back as inputs as shown in Figure 1. In addition, self-recurrence on the inputs provides a decaying history of these inputs. The weight update algorithm is back-propagation, using teacher forcing (Williams and Zipser 1988). With teacher forcing, the target outputs are presented to the recurrent inputs from the output units (instead of the actual outputs, which are not correct yet during training). Pitches (on output or input) are represented in a localized binary representation, with one bit for each of the 12 chromatic notes. More bits can be added for more octaves. C is represented as 100000000000. C# is 010000000000, D is 001000000000. Time is divided into 16th note increments. Note durations are determined by how many increments a pitch's output unit is on (one). E.g. an eighth note lasts for two time increments. Rests occur when all outputs are off (zero). Figure 1. Jordan network, with outputs fed back to inputs. Mozer's (1994) CONCERT uses a backpropagation-through-time (BPTT) recurrent network to learn various musical tasks and to learn melodies with harmonic accompaniment. Then, CONCERT can run in generation mode to compose new music. The BPTT algorithm (Williams and Zipser 1992, Werbos 1988, Campolucci 1998) can be used with a fully recurrent network where the outputs of all units are connected to the inputs of all units, including themselves.
The network can include external inputs and optionally, may include a regular feedforward output network (see Figure 2). The BPTT weight updates are proportional to the gradient of the sum of errors over every time step in the interval between start time t0 and end time t1, assuming the error at time step t is affected by the outputs at all previous time steps, starting with t0. BPTT requires saving all inputs, states, and errors for all time steps, and updating the weights in a batch operation at the end, time t1. One sequence (each example) requires one batch weight update. Figure 2. A fully self-recurrent network with external inputs, and optional feedforward output attachment. If there is no output attachment, one or more recurrent units are designated as output units. CONCERT is a combination of BPTT with a layer of output units that are probabilistically interpreted, and a maximum likelihood training criterion (rather than a squared error criterion). There are two sets of outputs (and two sets of inputs), one set for pitch and the other for duration. One pass through the network corresponds to a note, rather than a slice of time. We present only the pitch representation here since that is our focus. Mozer uses a psychologically based representation of musical notes. Figure 3 shows the chromatic circle (CC) and the circle of fifths (CF), used with a linear octave value for CONCERT’s pitch representation. Ignoring octaves, we refer to the rest of the representation as CCCF. Six digits represent the position of a pitch on CC and six more its position on CF. C is represented as 000000 000000, C# as 000001 111110, D as 000011 111111, and so on. Mozer uses -1,1 rather than 0,1 because of implementation details. Figure 3. Chromatic Circle on Left, Circle of Fifths on Right. Pitch position on each circle determines its representation. For chords, CONCERT uses the overlapping subharmonics representation of (Laden and Keefe, 1991). Each chord tone starts in Todd’s binary representation, but 5 harmonics (integer multiples of its frequency) are added. C3 is now C3, C4, G4, C5, E5 requiring a 3 octave representation. Because the 7th of the chord does not overlap with the triad harmonics, Laden and Keefe use triads only. C major triad C3, E3, G3, with harmonics, is C3, C4, G4, C5, E5, E3, E4, B4, E5, G#5, G3, G4, D4, G5, B5. The triad pitches and harmonics give an overlapping representation. Each overlapping pitch adds 1 to its corresponding input. CONCERT excludes octaves, leaving 12 highly overlapping chord inputs, plus an input that is positive when certain key-dependent chords appear, and learns waltzes over a harmonic chord structure. Eck and Schmidhuber (2002) use Long Short-term Memory (LSTM) recurrent networks to learn and compose blues music (Hochreiter and Schmidhuber 1997, and see Gers et al., 2000 for succinct pseudo-code for the algorithm). An LSTM network consists of input units, output units, and a set of memory blocks, each of which includes one or more memory cells. Blocks are connected to each other recurrently. Figure 4 shows an LSTM network on the left, and the contents of one memory block (this one with one cell) on the right. There may also be a direct connection from external inputs to the output units. This is the configuration found in Gers et al., and the one we use in our experiments. Eck and Schmidhuber also add recurrent connections from output units to memory blocks. Each block contains one or more memory cells that are self-recurrent. 
All other units in the block gate the inputs, outputs, and the memory cell itself. A memory cell can "cache" errors and release them for weight updates much later in time. The gates can learn to delay a block's outputs, to reset the memory cells, and to inhibit inputs from reaching the cell or to allow inputs in. Figure 4. An LSTM network on the left and a one-cell memory block on the right, with input, forget, and output gates. Black squares on gate connections show that the gates can control whether information is passed to the cell, from the cell, or even within the cell. Weight updates are based on gradient descent, with multiplicative gradient calculations for gates, and approximations from the truncated BPTT (Williams and Peng 1990) and Real-Time Recurrent Learning (RTRL) (Robinson and Fallside 1987) algorithms. LSTM networks are able to perform counting tasks in time-series. Eck and Schmidhuber's model of blues music is a 12-bar chord sequence over which music is composed/improvised. They successfully trained an LSTM network to learn a sequence of blues chords, with varying durations. Splitting time into 8th note increments, each chord's duration is either 8 or 4 time steps (whole or half durations). Chords are sets of 3 or 4 tones (triads or triads plus sevenths), represented in a 12-bit localized binary representation with values of 1 for a chord pitch, and 0 for a non-chord pitch. Chords are inverted to fit in 1 octave. For example, C7 is represented as 100010010010 (C,E,G,B-flat), and F7 is 100101000100 (F,A,C,E-flat inverted to C,E-flat,F,A). The network has 4 memory blocks, each containing 2 cells. The outputs are considered probabilities of whether the corresponding note is on or off. The goal is to obtain an output of more than .5 for each note that should be on in a particular chord, with all other outputs below .5. Eck and Schmidhuber's work includes learning melody and chords with two LSTM networks containing 4 blocks each. Connections are made from the chord network to the melody network, but not vice versa. The authors composed short 1-bar melodies over each of the 12 possible bars. The network is trained on concatenations of the short melodies over the 12-bar blues chord sequence. The melody network is trained until the chords network has learned according to the criterion. In music generation mode, the network can generate new melodies using this training. In a system called CHIME (Franklin 2000, 2001), we first train a Jordan recurrent network (Figure 1) to produce 3 Sonny Rollins jazz/blues melodies. The current chord and index number of the song are non-recurrent inputs to the network. Chords are represented as sets of 4 note values of 1 in a 12-note input layer, with non-chord note inputs set to 0 just as in Eck and Schmidhuber's chord representation. Chords are also inverted to fit within one octave. 24 (2 octaves) of the outputs are notes, and the 25th is a rest. Of these 25, the unit with the largest value
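As a concrete illustration of the encodings described in this entry, the following minimal Python sketch (our own code, not taken from the papers) builds Todd's localized binary pitch vectors, unrolls a melody into 16th-note time steps, and constructs the 12-bit, one-octave chord vectors used by Eck and Schmidhuber and by CHIME.

```python
# A minimal sketch (ours, not from the papers above) of two of the representations
# discussed in this entry: Todd's localized binary pitch vectors and the 12-bit
# chord vectors, inverted into one octave, used by Eck & Schmidhuber and CHIME.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def todd_pitch(note):
    """One bit per chromatic note: C -> 100000000000, C# -> 010000000000, ..."""
    vec = [0] * 12
    vec[NOTE_NAMES.index(note)] = 1
    return vec

def todd_roll(melody):
    """Unroll (note, n_sixteenths) pairs into per-timestep vectors; a rest is all zeros."""
    roll = []
    for note, n in melody:
        vec = [0] * 12 if note == "rest" else todd_pitch(note)
        roll.extend([vec] * n)
    return roll

def chord_vector(chord_tones):
    """12-bit chord code: chord pitches are 1, others 0; octave information is
    discarded, i.e. the chord is inverted to fit within one octave."""
    vec = [0] * 12
    for tone in chord_tones:
        vec[NOTE_NAMES.index(tone)] = 1
    return vec

if __name__ == "__main__":
    print(todd_pitch("D"))                          # 001000000000 as a list
    print(len(todd_roll([("C", 2), ("rest", 2)])))  # eighth note + eighth rest = 4 steps
    print(chord_vector(["C", "E", "G", "A#"]))      # C7 -> 100010010010 (B-flat written as A#)
    print(chord_vector(["F", "A", "C", "D#"]))      # F7 -> 100101000100 after inversion
```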
Rethinking ADHD and LD in DSM-5: proposed changes in diagnostic criteria.
The Diagnostic and Statistical Manual of Mental Disorders (DSM) is currently undergoing revision that will lead to a fifth edition (DSM-5) in 2013. This article first provides a brief synopsis of the DSM-5 administrative structure, procedures, and guiding principles to enhance understanding of how changes are made in the DSM. The next two sections (on attention-deficit/hyperactivity disorder and learning disorders, respectively) highlight the major concerns and controversies surrounding the DSM-IV diagnostic criteria for these two disorders and provide a rationale for the proposed changes to the criteria, along with a commentary on the empirical evidence on which the proposed changes were based.
Personality Traits as Predictors of Leadership Style Preferences : Investigating the Relationship Between Social Dominance Orientation and Attitudes Towards Authentic Leaders
FitNets: Hints for Thin Deep Nets
While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.
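To make the hint mechanism concrete, here is a small NumPy sketch (ours; the dimensions and variable names are invented) of the extra regressor that maps a student's intermediate layer into the teacher's intermediate feature space, together with the associated squared-error hint loss.

```python
import numpy as np

# A conceptual sketch (ours, with made-up dimensions) of the "hint" idea: the student's
# intermediate layer is mapped through an extra regressor into the teacher's intermediate
# feature space, and the mismatch is penalised during training.

rng = np.random.default_rng(0)
d_student, d_teacher, batch = 64, 256, 32

# Pretend intermediate activations from one minibatch (in practice these come from a
# forward pass through the guided student layer and the teacher's hint layer).
h_student = rng.standard_normal((batch, d_student))
h_teacher = rng.standard_normal((batch, d_teacher))

# Extra parameters introduced only for training: a regressor from student to teacher space.
W_r = rng.standard_normal((d_student, d_teacher)) * 0.01
b_r = np.zeros(d_teacher)

def hint_loss(h_s, h_t, W, b):
    """0.5 * mean squared error between regressed student features and the teacher hint."""
    pred = h_s @ W + b
    return 0.5 * np.mean((pred - h_t) ** 2)

# One plain gradient step on the regressor (the student weights would be updated too).
pred = h_student @ W_r + b_r
grad_pred = (pred - h_teacher) / (batch * d_teacher)
W_r -= 0.1 * h_student.T @ grad_pred
b_r -= 0.1 * grad_pred.sum(axis=0)

print("hint loss:", hint_loss(h_student, h_teacher, W_r, b_r))
```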
Severe Traumatic Brain Injury in Austria I: Introduction to the study
OBJECTIVES: The goals of the Austrian Severe Traumatic Brain Injury Study were to investigate the current management of patients with severe traumatic brain injury in Austria and to assess the effects of introducing guidelines for the management of severe traumatic brain injury upon the outcome of these patients. The purpose of this paper is to give a detailed description of the goals, methods, and overall results of the study, and to provide an introduction to a series of papers where the results of the study will be presented and discussed. STUDY DESIGN: The study included patients with severe traumatic brain injury from five centers in Austria. Data on accident, pre-hospital treatment, hospital treatment, and patient status were collected prospectively. Patient data was entered daily for the first 10 days in hospital and then up to a year after discharge from intensive care. All data was entered into an internet-based database. The data was evaluated to describe epidemiology, pre-hospital treatment, medical management, and surgical management; the evaluation also assessed the effects of guideline-based management on traumatic brain injury patients. 
RESULTS: The data set comprises a total of 492 patient records from the 5 participating hospitals; this data was collected over a 3-year period. Data quality is considered good; the number of missing data items is low. ICU mortality was 31.6%. Final outcome: 23% of the patients had a good recovery, 10% had moderate disabilities, 8% had severe disabilities, 6% remained in a persistent vegetative state, and 38% died. Final outcome was unknown in 16% of patients. CONCLUSIONS: This study showed that an internet-based database may be a valuable tool for prospective multicenter studies if many variables have to be collected for a high number of patients. The results of our study provide enough evidence to initiate further research on many aspects of the management of traumatic brain injury patients.
Experiential learning: AMEE Guide No. 63.
This Guide provides an overview of educational theory relevant to learning from experience. It considers experience gained in clinical workplaces from early medical student days through qualification to continuing professional development. Three key assumptions underpin the Guide: learning is 'situated'; it can be viewed either as an individual or a collective process; and the learning relevant to this Guide is triggered by authentic practice-based experiences. We first provide an overview of the guiding principles of experiential learning and significant historical contributions to its development as a theoretical perspective. We then discuss socio-cultural perspectives on experiential learning, highlighting their key tenets and drawing together common threads between theories. The second part of the Guide provides examples of learning from experience in practice to show how theoretical stances apply to clinical workplaces. Early experience, student clerkships and residency training are discussed in turn. We end with a summary of the current state of understanding.
An Approach to Data Analysis in 5G Networks
5G networks are expected to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented in different use cases within the SELFNET Project, proving its effectiveness.
FUSION OF LIDAR DATA AND AERIAL IMAGERY FOR A MORE COMPLETE SURFACE DESCRIPTION
Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.
Sraffa’s Contributions to the Methodology of Economics
This article highlights some of Sraffa’s main contributions to the methodology of economics. It argues that Sraffa rejected counterfactual reasoning and hence the ‘marginal method’ of analysis. Sraffa’s theory is built solely on factual and objective information and hence it removes psychology from economics as well. Sraffa’s theory shows that considerations of demand and the condition of equilibrium of demand and supply are irrelevant to a logical theory of prices. The article goes on to show how Sraffa’s novel approach to economic theory was able to expose the logical errors of the ‘marginal method’ in the Austrian and Neo-classical theories of distribution. Finally, it also raises some questions for Sraffa’s theory that need to be resolved. JEL: B2, B3, B4, B5
Designing Business Models for the Internet of Things
According to Gershenfeld and Vasseur (2014) the impressive growth of the Internet in the past two decades is about to be overshadowed as the "things" that surround us start going online. The "Internet of Things" (IOT), a term coined by Kevin Ashton of Procter & Gamble in 1998, has become a new paradigm that views all objects around us connected to the network, providing anyone with “anytime, anywhere” access to information (ITU, 2005; Gomez et al., 2013). The IOT describes the interconnection of objects or “things” for various purposes including identification, communication, sensing, and data collection (Oriwoh et al., 2013). “Things” range from mobile devices to general household objects embedded with capabilities for sensing or communication through the use of technologies such as radio frequency identification (RFID) (Oriwoh et al., 2013; Gomez et al., 2013). The IOT represents the future of computing and communications. This article investigates challenges pertaining to business model design in the emerging context of the Internet of Things (IOT). The evolution of business perspectives to the IOT is driven by two underlying trends: i) the change of focus from viewing the IOT primarily as a technology platform to viewing it as a business ecosystem; and ii) the shift from focusing on the business model of a firm to designing ecosystem business models. An ecosystem business model is a business model composed of value pillars anchored in ecosystems and focuses on both the firm's method of creating and capturing value as well as any part of the ecosystem's method of creating and capturing value. The article highlights three major challenges of designing ecosystem business models for the IOT, including the diversity of objects, the immaturity of innovation, and the unstructured ecosystems. Diversity refers to the difficulty of designing business models for the IOT due to a multitude of different types of connected objects combined with only modest standardization of interfaces. Immaturity suggests that quintessential IOT technologies and innovations are not yet products and services but a "mess that runs deep". The unstructured ecosystems mean that it is too early to tell who the participants will be and which roles they will have in the evolving ecosystems. The study argues that managers can overcome these challenges by using a business model design tool that takes into account the ecosystemic nature of the IOT. The study concludes by proposing the grounds for a new design tool for ecosystem business models and suggesting that "value design" might be a more appropriate term when talking about business models in ecosystems. New web-based business models being hatched for the Internet of Things are bringing together market players who previously had no business dealings with each other. Through partnerships and acquisitions, [...] they have to sort out how they will coordinate their business development efforts with customers and interfaces with other stakeholders.
Dust Removal with Boundary and Spatial Constraint for Videos Captured in Car
Videos captured in cars often suffer from dust on the windscreen glass. The dust particles on the windscreen decrease video quality and make the frames blurry. Removing dust and restoring a high-quality, dust-free video is a challenging task in the field of video stream processing. In this paper, we propose an improved iterative optimization pipeline to remove dust from the videos. Our method employs a boundary constraint to keep the transmission map in a reasonable range and a spatial constraint on the transmission map to avoid introducing significant halo artifacts into the resultant video. With the optimized transmission map as an initial condition, our method can separate the dust layer and the background layer from the input video frames and keeps the background frames color-faithful with fine details. Test results demonstrate that our proposed method can recover dust-free videos from dust-contaminated input videos and keep the resultant video color faithful.
The Asymmetric Business Cycle
The business cycle is a fundamental yet elusive concept in macroeconomics. In this paper, we consider the problem of measuring the business cycle. First, we argue for the output-gap view that the business cycle corresponds to transitory deviations in economic activity away from a permanent, or trend, level. Then we investigate the extent to which a general model-based approach to estimating trend and cycle for the U.S. economy leads to measures of the business cycle that reflect models versus the data. We find empirical support for a nonlinear time series model that produces a business cycle measure with an asymmetric shape across NBER expansion and recession phases. Specifically, this business cycle measure suggests that recessions are periods of relatively large and negative transitory fluctuations in output. However, several close competitors to the nonlinear model produce business cycle measures of widely differing shapes and magnitudes. Given this model-based uncertainty, we construct a model-averaged measure of the business cycle. This measure also displays an asymmetric shape and is closely related to other measures of economic slack such as the unemployment rate and capacity utilization.
Sensor-driven agenda for intelligent home care of the elderly
Combining human and machine intelligence in large-scale crowdsourcing
We show how machine learning and inference can be harnessed to leverage the complementary strengths of humans and computational agents to solve crowdsourcing tasks. We construct a set of Bayesian predictive models from data and describe how the models operate within an overall crowdsourcing architecture that combines the efforts of people and machine vision on the task of classifying celestial bodies defined within a citizens’ science project named Galaxy Zoo. We show how learned probabilistic models can be used to fuse human and machine contributions and to predict the behaviors of workers. We employ multiple inferences in concert to guide decisions on hiring and routing workers to tasks so as to maximize the efficiency of large-scale crowdsourcing processes based on expected utility.
INDOOR NAVIGATION FROM POINT CLOUDS: 3D MODELLING AND OBSTACLE DETECTION
In recent years, indoor modelling and navigation has become a research topic of interest because many stakeholders require navigation assistance in various application scenarios. Navigational assistance for blind or wheelchair-bound people, building crisis management such as fire protection, augmented reality for gaming, tourism, and the training of emergency assistance units are just some of the direct applications of indoor modelling and navigation. Navigational information is traditionally extracted from 2D drawings or layouts. The real state of indoor spaces, including the position and geometry of openings for both windows and doors, and the presence of obstacles, is commonly ignored. In this work, a real indoor-path planning methodology based on 3D point clouds is developed. The value and originality of the approach consist in considering point clouds not only for reconstructing semantically rich 3D indoor models, but also for detecting potential obstacles during route planning and using these to readapt the routes according to the real state of the indoor environment depicted by the laser scanner.
Managing meiotic recombination in plant breeding.
Crossover recombination is a crucial process in plant breeding because it allows plant breeders to create novel allele combinations on chromosomes that can be used for breeding superior F1 hybrids. Gaining control over this process, in terms of increasing crossover incidence, altering crossover positions on chromosomes or silencing crossover formation, is essential for plant breeders to effectively engineer the allelic composition of chromosomes. We review the various means of crossover control that have been described or proposed. By doing so, we sketch a field of science that uses both knowledge from classic literature and the newest discoveries to manage the occurrence of crossovers for a variety of breeding purposes.
Ki67 index is an independent prognostic factor in epithelioid but not in non-epithelioid malignant pleural mesothelioma: a multicenter study
Background:Estimating the prognosis in malignant pleural mesothelioma (MPM) remains challenging. Thus, the prognostic relevance of Ki67 was studied in MPM.Methods:Ki67 index was determined in a test cohort of 187 cases from three centres. The percentage of Ki67-positive tumour cells was correlated with clinical variables and overall survival (OS). The prognostic power of Ki67 index was compared with other prognostic factors and re-evaluated in an independent cohort (n=98).Results:Patients with Ki67 higher than median (>15%) had significantly (P<0.001) shorter median OS (7.5 months) than those with low Ki67 (19.1 months). After multivariate survival analyses, Ki67 proved to be—beside histology and treatment—an independent prognostic marker in MPM (hazard ratio (HR): 2.1, P<0.001). Interestingly, Ki67 was prognostic exclusively in epithelioid (P<0.001) but not in non-epithelioid subtype. Furthermore, Ki67 index was significantly lower in post-chemotherapy samples when compared with chemo-naive cases. The prognostic power was comparable to other recently published prognostic factors (CRP, fibrinogen, neutrophil-to-leukocyte ratio (NLR) and nuclear grading score) and was recapitulated in the validation cohort (P=0.048).Conclusion:This multicentre study demonstrates that Ki67 is an independent and reproducible prognostic factor in epithelioid but not in non-epithelioid MPM and suggests that induction chemotherapy decreases the proliferative capacity of MPM.
HPV testing as an adjunct to cytology in the follow up of women treated for cervical intraepithelial neoplasia.
OBJECTIVE To evaluate human papillomavirus (HPV) testing in combination with cytology in the follow up of treated women. DESIGN A prospective study. SETTING Three UK centres: Manchester, Aberdeen and London. POPULATION OR SAMPLE Women treated for cervical intraepithelial neoplasia (CIN). METHODS Women were recruited at 6 months of follow up, and cytology and HPV testing was carried out at 6 and 12 months. If either or both results were positive, colposcopy and if appropriate, a biopsy and retreatment was performed. At 24 months, cytology alone was performed. MAIN OUTCOME MEASURES Cytology and histology at 6, 12 and 24 months. RESULTS Nine hundred and seventeen women were recruited at 6 months of follow up, with 778 (85%) and 707 (77.1%) being recruited at 12 and 24 months, respectively. At recruitment, 700 women had had high-grade CIN (grades 2 or 3) and 217 had CIN1. At 6 months, 14.6% were HPV positive and 10.7% had non-negative cytology. Of those with negative cytology, 9% were HPV positive. Of the 744 women who were cytology negative/HPV negative at baseline, 3 women with CIN2, 1 with CIN3, 1 with cancer and 1 with vaginal intraepithelial neoplasia (VAIN)1 were identified at 24 months. Nine of 10 cases of CIN3/cervical glandular intraepithelial neoplasia (CGIN) occurred in HPV-positive women. At 23 months, cancer was identified in a woman treated for CGIN with clear resection margins, who had been cytology negative/HPV negative at both 6 and 12 months. CONCLUSIONS Women who are cytology negative and HPV negative at 6 months after treatment for CIN can safely be returned to 3-year recall.
Measurement of Layer Thickness and Permittivity Using a New Multilayer Model From GPR Data
The conventional method, i.e., the common middle point (CMP) method, has been used for many years for estimating the depth and permittivity of layered media from ground-penetrating radar (GPR) data. However, the CMP method results in noticeable errors in thickness and permittivity readings with the increase of antenna separation. To improve the measurement accuracy, a new mathematical model is presented, covering GPR measurement in one- and two-layer cases. In this model, we first check all the possible wave paths when the GPR signal propagates in the multilayer environment. We not only consider the effects from the air-ground interface but also introduce a ray-path-searching process in the GPR measurement using Fermat's shortest path law. The shortest path is then used in the process of GPR data inversion in order to calculate the depth and permittivity of each layer. Finally, we use the transmission-line matrix (TLM) method to simulate the propagation of a GPR signal in the multilayered formation. A time-sequence image that was produced by the finite-difference time-domain method has also been used to explain this presented model. By comparing the numerical simulation results with the measured results, it is found that the estimated layer thickness and permittivity by the new model agree well with the simulated results. It proves that the new model is more accurate and closer to the real measured situation.
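To illustrate the ray-path-searching step, the following Python sketch (our own, with made-up layer parameters) applies Fermat's principle to a two-layer medium: it grid-searches the interface crossing point that minimizes the two-way travel time for a given antenna separation.

```python
import numpy as np

# A worked sketch (ours, with invented numbers) of the ray-path-searching idea:
# for a two-layer medium, Fermat's principle says the true GPR ray crosses the first
# interface at the horizontal offset that minimises total travel time.

C0 = 0.3  # speed of light in vacuum, m/ns

def travel_time(x, sep, d1, d2, eps1, eps2):
    """Two-way travel time for a ray crossing the layer-1/layer-2 interface at offset x
    (and, by symmetry, at sep - x on the way back up to the receiver)."""
    v1, v2 = C0 / np.sqrt(eps1), C0 / np.sqrt(eps2)
    leg1 = np.sqrt(x ** 2 + d1 ** 2) / v1                 # one leg inside layer 1
    leg2 = np.sqrt((sep / 2 - x) ** 2 + d2 ** 2) / v2     # one leg inside layer 2
    return 2.0 * (leg1 + leg2)

def fermat_crossing(sep, d1, d2, eps1, eps2):
    """Grid-search the interface crossing offset that minimises the travel time."""
    xs = np.linspace(0.0, sep / 2, 2001)
    times = travel_time(xs, sep, d1, d2, eps1, eps2)
    i = int(np.argmin(times))
    return xs[i], times[i]

if __name__ == "__main__":
    # 1 m antenna separation, 0.3 m dry layer (eps ~ 4) over a 0.2 m wetter layer (eps ~ 9)
    x_star, t_star = fermat_crossing(sep=1.0, d1=0.3, d2=0.2, eps1=4.0, eps2=9.0)
    print(f"interface crossing at x = {x_star:.3f} m, two-way time = {t_star:.2f} ns")
```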
Template-Based Information Extraction without the Templates
Standard algorithms for template-based information extraction (IE) require predefined template schemas, and often labeled data, to learn to extract their slot fillers (e.g., an embassy is the Target of a Bombing template). This paper describes an approach to template-based IE that removes this requirement and performs extraction without knowing the template structure in advance. Our algorithm instead learns the template structure automatically from raw text, inducing template schemas as sets of linked events (e.g., bombings include detonate, set off, and destroy events) associated with semantic roles. We also solve the standard IE task, using the induced syntactic patterns to extract role fillers from specific documents. We evaluate on the MUC-4 terrorism dataset and show that we induce template structure very similar to hand-created gold structure, and we extract role fillers with an F1 score of .40, approaching the performance of algorithms that require full knowledge of the templates.
User feedback in the appstore: An empirical study
Application distribution platforms - or app stores - such as Google Play or the Apple AppStore allow users to submit feedback in the form of ratings and reviews for downloaded applications. In the last few years, these platforms have become very popular with both application developers and users. However, their real potential for and impact on requirements engineering processes are not yet well understood. This paper reports on an exploratory study, which analyzes over one million reviews from the Apple AppStore. We investigated how and when users provide feedback, inspected the feedback content, and analyzed its impact on the user community. We found that most of the feedback is provided shortly after new releases, with a quickly decreasing frequency over time. Reviews typically contain multiple topics, such as user experience, bug reports, and feature requests. The quality and constructiveness vary widely, from helpful advice and innovative ideas to insulting offenses. Feedback content has an impact on download numbers: positive messages usually lead to better ratings and vice versa. Negative feedback such as shortcomings is typically destructive and misses context details and user experience. We discuss our findings and their impact on software and requirements engineering teams.
String similarity measures and joins with synonyms
A string similarity measure quantifies the similarity between two text strings for approximate string matching or comparison. For example, the strings "Sam" and "Samuel" can be considered similar. Most existing work that computes the similarity of two strings only considers syntactic similarities, e.g., number of common words or q-grams. While these are indeed indicators of similarity, there are many important cases where syntactically different strings can represent the same real-world object. For example, "Bill" is a short form of "William". Given a collection of predefined synonyms, the purpose of the paper is to explore such existing knowledge to evaluate string similarity measures more effectively and efficiently, thereby boosting the quality of string matching. In particular, we first present an expansion-based framework to measure string similarities efficiently while considering synonyms. Because using synonyms in similarity measures is, while expressive, computationally expensive (NP-hard), we propose an efficient algorithm, called selective-expansion, which guarantees the optimality in many real scenarios. We then study a novel indexing structure called SI-tree, which combines both signature and length filtering strategies, for efficient string similarity joins with synonyms. We develop an estimator to approximate the size of candidates to enable an online selection of signature filters to further improve the efficiency. This estimator provides strong low-error, high-confidence guarantees while requiring only logarithmic space and time costs, thus making our method attractive both in theory and in practice. Finally, the results from an empirical study of the algorithms verify the effectiveness and efficiency of our approach.
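The expansion idea can be illustrated with a deliberately naive Python sketch (ours): each applicable synonym rule is either applied or skipped, and the best Jaccard similarity over all expansions is returned. The paper's selective-expansion algorithm and SI-tree index exist precisely to avoid this exponential enumeration, and here synonym rules are applied as substitutions as a further simplification.

```python
from itertools import product

# A naive brute-force sketch (ours) of expansion-based similarity with synonyms:
# try every combination of applying or skipping the synonym rules and keep the
# best Jaccard score. Synonyms are substituted rather than appended, for brevity.

SYNONYMS = {"bill": "william", "sam": "samuel", "dept": "department"}

def expansions(tokens):
    """Yield every token set obtainable by applying or skipping each synonym rule."""
    choices = [(t, SYNONYMS[t]) if t in SYNONYMS else (t,) for t in tokens]
    for combo in product(*choices):
        yield frozenset(combo)

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def expanded_similarity(s1, s2):
    t1, t2 = s1.lower().split(), s2.lower().split()
    return max(jaccard(e1, e2) for e1 in expansions(t1) for e2 in expansions(t2))

if __name__ == "__main__":
    print(jaccard(set("bill gates".split()), set("william gates".split())))  # 1/3, purely syntactic
    print(expanded_similarity("Bill Gates", "William Gates"))                # 1.0 with the synonym applied
```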
The Airport Gate Assignment Problem: Mathematical Model and a Tabu Search Algorithm
In this paper, we consider an Airport Gate Assignment Problem that dynamically assigns airport gates to scheduled flights based on passengers' daily origin and destination flow data. The objective of the problem is to minimize the overall connection times that passengers walk to catch their connection flights. We formulate this problem as a mixed 0-1 quadratic integer programming problem and then reformulate it as a mixed 0-1 integer problem with a linear objective function and constraints. We design a simple tabu search meta-heuristic to solve the problem. The algorithm exploits the special properties of different types of neighborhood moves, and creates highly effective candidate list strategies. We also address issues of tabu short-term memory, dynamic tabu tenure, aspiration rule, and various intensification and diversification strategies. Preliminary computational experiments are conducted and the results are presented and analyzed.
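A toy Python sketch (ours, with random data) of the objective and of a bare-bones tabu-style swap search follows; the paper's algorithm uses richer neighborhood moves, candidate lists, dynamic tenure and aspiration rules that are not reproduced here.

```python
import itertools, random

# Toy instance (ours): assign flights to gates so that total passenger transfer flow
# times the walking distance between assigned gates is small.

random.seed(1)
N_FLIGHTS, N_GATES = 6, 4
flow = [[0 if i == j else random.randint(0, 20) for j in range(N_FLIGHTS)] for i in range(N_FLIGHTS)]
dist = [[abs(a - b) * 50 for b in range(N_GATES)] for a in range(N_GATES)]  # metres between gates

def cost(assign):
    """Total passenger walking effort for a gate assignment (flight index -> gate index)."""
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i, j in itertools.permutations(range(N_FLIGHTS), 2))

def tabu_swap_search(assign, iters=200, tenure=7):
    best, best_cost = list(assign), cost(assign)
    current = list(assign)
    tabu = {}  # move -> iteration until which it stays tabu
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(N_FLIGHTS), 2):
            neighbour = list(current)
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            c = cost(neighbour)
            # a tabu move is still allowed if it improves the best solution (aspiration)
            is_tabu = tabu.get((i, j), -1) >= it and c >= best_cost
            if not is_tabu:
                candidates.append((c, (i, j), neighbour))
        c, move, neighbour = min(candidates)
        current, tabu[move] = neighbour, it + tenure
        if c < best_cost:
            best, best_cost = neighbour, c
    return best, best_cost

if __name__ == "__main__":
    start = [f % N_GATES for f in range(N_FLIGHTS)]
    print("initial cost:", cost(start))
    print("after tabu search:", tabu_swap_search(start)[1])
```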
An Improved Algorithm for Harris Corner Detection
Corner points are formed from two or more edges, and edges usually define the boundary between two different objects or parts of the same object. In this paper we discuss the theory of Harris corner detection and point out its disadvantages. We then propose an improved Harris detection algorithm based on a neighboring-point elimination method. It reduces detection time and makes the detected corners more homogeneously distributed, avoiding many corners crowding together. Experimental results show that the algorithm detects corners with a more uniform distribution and can be used well in practical applications such as image registration. Keywords: corner point; Harris corner; neighboring point; homogeneous distribution. 1. Introduction. A corner is a point of maximum curvature in the two-dimensional image brightness, or a point on an edge with sharp curvature; corners are widely used in computer vision tasks such as camera calibration, virtual scene reconstruction, motion estimation and image registration. These points preserve important image features while greatly reducing the amount of data, which makes real-time processing possible. There are currently two families of corner detection methods: those based on image edges and those based on image gray levels. The former needs to encode the image edges, depends heavily on image segmentation and edge extraction, is difficult and computationally expensive, and fails if the detected area changes; it is therefore not widely used. The latter detects corners by calculating curvature and gradients and avoids these disadvantages; representative operators include the Moravec, Harris and SUSAN operators. Experiments show that the Harris corner detector gives the best results among these. In practical applications, however, the Harris algorithm must be given a proper threshold T, and it detects the ideal corners based on the provided threshold. T depends on the image and is difficult to determine, particularly across images with different content; users typically arrive at a usable T only after repeated blind attempts. In addition, points with larger eigenvalues are concentrated in a few regions, so the detected corners are non-uniformly distributed; if T is lowered, the overall distribution of corners becomes more reasonable, but detected points end up close together and cluster, which affects later analysis and processing. 2. Harris Corner Detection Algorithm. 2.1 Detection Theory. The Harris corner detection algorithm is based on point feature extraction from the image signal. A window w (usually a rectangular area) is shifted by an infinitesimal displacement (x, y) in any direction, and the resulting variation of gray values can be defined as
$E_{x,y} = \sum_{u,v} w_{u,v}\,[I_{x+u,\,y+v} - I_{u,v}]^2 \approx \sum_{u,v} w_{u,v}\,[xX + yY + O(x^2, y^2)]^2 = Ax^2 + 2Cxy + By^2 = (x, y)\, M \,(x, y)^T \qquad (1)$
where X and Y are the first-order gray-level gradients, obtained by convolving the image with gradient masks.
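The following Python sketch (ours, not the authors' code) computes the standard Harris response det(M) - k*trace(M)^2 and then applies one plausible form of neighboring-point elimination: candidates are taken in order of decreasing response and any candidate too close to an already accepted corner is discarded.

```python
import numpy as np

# A compact sketch (ours) of the Harris response followed by a simple neighbouring-point
# elimination step that spreads the kept corners out; this illustrates the idea only and
# is not the authors' exact algorithm.

def smooth(a, k=3):
    """Separable box smoothing, standing in for the window w."""
    kernel = np.ones(k) / k
    a = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, a)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, a)

def harris_response(img, k=0.04):
    Iy, Ix = np.gradient(img.astype(float))          # first-order gray-level gradients
    A, B, C = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    return (A * B - C * C) - k * (A + B) ** 2        # det(M) - k * trace(M)^2

def detect(img, threshold, min_dist=4, max_corners=50):
    R = harris_response(img)
    ys, xs = np.where(R > threshold)
    order = np.argsort(R[ys, xs])[::-1]              # strongest responses first
    kept = []
    for i in order:
        p = np.array([ys[i], xs[i]])
        if all(np.hypot(*(p - q)) >= min_dist for q in kept):
            kept.append(p)                           # eliminate close neighbouring points
        if len(kept) >= max_corners:
            break
    return kept

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                          # a bright square with four ideal corners
    print([tuple(p) for p in detect(img, threshold=1e-4)])
```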
Minimizing Execution Costs when Using Globally Distributed Cloud Services
Cloud computing is an emerging technology that allows users to utilize on-demand computation, storage, data and services from around the world. However, Cloud service providers charge users for these services. Specifically, to access data from their globally distributed storage edge servers, providers charge users depending on the user’s location and the amount of data transferred. When deploying data-intensive applications in a Cloud computing environment, optimizing the cost of transferring data to and from these edge servers is a priority, as data play the dominant role in the application’s execution. In this paper, we formulate a non-linear programming model to minimize the data retrieval and execution cost of data-intensive workflows in Clouds. Our model retrieves data from Cloud storage resources such that the amount of data transferred is inversely proportional to the communication cost. We take an example of an ‘intrusion detection’ application workflow, where the data logs are made available from globally distributed Cloud storage servers. We construct the application as a workflow and experiment with Cloud based storage and compute resources. We compare the cost of multiple executions of the workflow given by a solution of our non-linear program against that given by Amazon CloudFront’s ‘nearest’ single data source selection. Our results show a savings of three-quarters of total cost using our model.
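A tiny Python sketch (ours, with invented prices) of the stated intuition: fetch more data from cheaper sites, with the fraction retrieved from each site inversely proportional to its transfer cost.

```python
# Illustrative sketch (ours): split the data a task needs across candidate storage sites
# so that the share fetched from each site is inversely proportional to its per-GB cost.

def split_by_inverse_cost(total_gb, cost_per_gb):
    weights = [1.0 / c for c in cost_per_gb]
    scale = total_gb / sum(weights)
    return [w * scale for w in weights]

if __name__ == "__main__":
    costs = [0.12, 0.19, 0.20]                          # hypothetical $/GB from three edge locations
    share = split_by_inverse_cost(90.0, costs)
    print([round(s, 1) for s in share])                 # cheaper sites serve more data
    print("transfer cost: $", round(sum(s * c for s, c in zip(share, costs)), 2))
```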
Design of a Scalable Event Notification Service : Interface and Architecture
Event-based distributed systems are programmed to operate in response to events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems. While numerous technologies have been developed for supporting event-based interactions over local-area networks, these technologies do not scale well to wide-area networks such as the Internet. Wide-area networks pose new challenges that have to be attacked with solutions that specifically address issues of scalability. This paper presents Siena, a scalable event notification service that is based on a distributed architecture of event servers. We first present a formally defined interface that is based on an extension to the publish/subscribe protocol. We then describe and compare several different server topologies and routing algorithms. We conclude by briefly discussing related work, our experience with an initial implementation of Siena, and a framework for evaluating the scalability of event notification services such as Siena.
Metric learning with spectral graph convolutions on brain connectivity networks
Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods.
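As a minimal illustration, the sketch below (ours) implements a single simplified first-order graph convolution layer over a toy connectivity matrix; the paper's model stacks spectral convolutions based on Chebyshev polynomials inside a siamese architecture, which is not reproduced here.

```python
import numpy as np

# A minimal sketch (ours) of a graph convolution layer of the kind such networks stack:
# node features are propagated through a symmetrically normalised adjacency matrix and
# mixed by a learned weight matrix (the simplified first-order spectral form).

def normalised_adjacency(A):
    """D^{-1/2} (A + I) D^{-1/2} with self-loops added."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, H, W):
    """One propagation step: ReLU(A_norm @ H @ W)."""
    return np.maximum(A_norm @ H @ W, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 1, 0],                      # a toy 4-node connectivity graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    H = rng.standard_normal((4, 8))                  # per-node (per-region) features
    W = rng.standard_normal((8, 16)) * 0.1
    emb = gcn_layer(normalised_adjacency(A), H, W)
    print(emb.shape)                                 # (4, 16) node embeddings
```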
Attribute-Based Functional Encryption on Lattices
We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions. Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.
Contribution of Motivational Management to Employee Performance
The relationship between motivation and performance is a universal concern and is often talked about, but many organizations do not make concrete efforts to study it in detail. These organizations blindly apply the popular motivational theories without the findings and insights that an intensive study of motivation would provide. In today's hyper-competitive marketplace, understanding what fosters and forwards employee motivation, and thus organizational performance, is critical. Motivation is about stimulating people to action and to achieve a desired task. For organizations of all sorts to be efficient and successful, enough of every person's drives must be stimulated and satisfied to ensure effective performance. There is growing emphasis on excellent management as a major contributor to personal effectiveness, group efficiency and business success. The concept of motivational management has been in practice since the age of slavery; however, motivation then was the result of fear, suffering and intimidation. Today, managers have at their disposal management instruments by which they can stimulate the latent effort and performance of their teams. The research set out to investigate the contribution of motivational management to employee performance in the Vehicle Body Building industry. Dodi Autotech (K) Limited and Two M Autotech (K) Limited were identified for the study. The researcher first reviewed the relevant literature on motivation, the role of management in motivation and, in particular, motivational management as a factor of motivation. Questionnaires were used to collect information from the organizations' employees. The researcher interviewed the senior managers and also spent a day on the organizations' premises observing the workers as they worked and interacted with one another. This was to gather information on behaviour and attitudes at work and towards work in the organizations. The data collected was entered and analyzed with the SPSS software, after which it was presented and interpreted using a combination of tables, bar charts and continuous prose. The Chi-square test of association was used in testing the hypothesis of the study. The results showed that employees in the two organizations of study were to a very large extent influenced to perform by a combination of intrinsic and extrinsic motivational factors applied through management initiatives. The research found the following motivational variables to have significantly influenced employee retention in both organizations: challenging/interesting work; awareness of the relationship between work, organization goals and priorities; performance progress review; performance discussions; and rewards for good performance. Motivational management can influence workplace behaviour and attitudes both positively and negatively. The researcher intends to create awareness of the importance of designing and maintaining an environment that is stress free and conducive to optimum employee performance. The research concludes by indicating areas of improvement and recommending management methods that enhance the employee motivation that leads to increased employee performance.
Challenges of introducing and implementing mobile payments : A Qualitative study of the Swedish mobile payment application WyWallet
Citizens of Sweden are now facing a large shift in their habits of managing money transferring using mobile phones. They are now forced to use an application called WyWallet when making purchases w ...
Kabbalah Logic and Semantic Foundations for a Postmodern Fuzzy Set and Fuzzy Logic Theory
Despite half a century of fuzzy sets and fuzzy logic progress, as fuzzy sets address complex and uncertain information through the lens of human knowledge and subjectivity, more progress is needed in the semantics of fuzzy sets and in exploring the multi-modal aspect of fuzzy logic due to the different cognitive, emotional and behavioral angles of assessing truth. We lay here the foundations of a postmodern fuzzy set and fuzzy logic theory addressing these issues by deconstructing fuzzy truth values and fuzzy set membership functions to re-capture the human knowledge and subjectivity structure in membership function evaluations. We formulate a fractal multi-modal logic of Kabbalah which integrates the cognitive, emotional and behavioral levels of humanistic systems into epistemic and modal, deontic and doxastic and dynamic multi-modal logic. This is done by creating a fractal multi-modal Kabbalah possible worlds semantic frame of Kripke model type. The Kabbalah possible worlds semantic frame integrates together both the multi-modal logic aspects and their Kripke possible worlds model. We will not focus here on modal operators and axiom sets. We constructively define a fractal multi-modal Kabbalistic L-fuzzy set as the central concept of the postmodern fuzzy set theory based on Kabbalah logic and semantics.
Radiometric CCD camera calibration and noise estimation
Changes in measured image irradiance have many physical causes and are the primary cue for several visual processes, such as edge detection and shape from shading. Using physical models for charge-coupled device (CCD) video cameras and material reflectance, we quantify the variation in digitized pixel values that is due to sensor noise and scene variation. This analysis forms the basis of algorithms for camera characterization and calibration and for scene description. Specifically, algorithms are developed for estimating the parameters of camera noise and for calibrating a camera to remove the effects of fixed pattern nonuniformity and spatial variation in dark current. While these techniques have many potential uses, we describe in particular how they can be used to estimate a measure of scene variation. This measure is independent of image irradiance and can be used to identify a surface from a single sensor band over a range of situations. Experimental results confirm that the models presented in this paper are useful for modeling the different sources of variation in real images obtained from video cameras. Index Terms: CCD cameras, computer vision, camera calibration, noise estimation, reflectance variation, sensor modeling.
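The calibration steps can be sketched in a few lines of Python (ours, with a simulated sensor): estimate dark current and fixed-pattern nonuniformity from calibration frames, correct a raw image with them, and estimate per-pixel noise variance from repeated exposures of a static scene.

```python
import numpy as np

# A small sketch (ours) of the calibration ideas discussed above: dark-frame subtraction,
# flat-field (fixed-pattern) correction, and temporal noise-variance estimation.

def average_frames(frames):
    return np.mean(np.stack(frames), axis=0)

def calibrate(raw, dark_frames, flat_frames):
    """Subtract the mean dark frame, then divide out the normalised flat field."""
    dark = average_frames(dark_frames)
    flat = average_frames(flat_frames) - dark
    flat /= flat.mean()                               # unit-gain fixed-pattern map
    return (raw - dark) / np.clip(flat, 1e-6, None)

def noise_variance(frames):
    """Per-pixel temporal variance from repeated exposures of the same scene."""
    return np.var(np.stack(frames), axis=0, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.full((32, 32), 100.0)
    gain = 1.0 + 0.05 * rng.standard_normal((32, 32))   # fixed-pattern nonuniformity
    dark = 10.0 + 2.0 * rng.random((32, 32))             # spatially varying dark current
    shoot = lambda: truth * gain + dark + rng.normal(0, 3, (32, 32))
    darks = [dark + rng.normal(0, 3, (32, 32)) for _ in range(16)]
    flats = [50.0 * gain + dark + rng.normal(0, 3, (32, 32)) for _ in range(16)]
    corrected = calibrate(shoot(), darks, flats)
    print(round(corrected.mean(), 1))                     # close to the true level of 100
    print(round(noise_variance([shoot() for _ in range(16)]).mean(), 1))  # about 9 (sigma = 3)
```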
Applications of Linear Algebra in Information Retrieval and Hypertext Analysis
Information retrieval is concerned with representing content in a form that can be easily accessed by users with information needs [61, 65]. A definition at this level of generality applies equally well to any index-based retrieval system or database application; so let us focus the topic a little more carefully. Information retrieval, as a field, works primarily with highly unstructured content, such as text documents written in natural language; it deals with information needs that are generally not formulated according to precise specifications; and its criteria for success are based in large part on the demands of a diverse set of human users. Our purpose in this short article is not to provide a survey of the field of information retrieval — for this we refer the reader to texts and surveys such as [25, 29, 51, 60, 61, 62, 63, 65, 70]. Rather, we wish to discuss some specific applications of techniques from linear algebra to information retrieval and hypertext analysis. In particular, we focus on spectral methods — the use of eigenvectors and singular vectors of matrices — and their role in these areas. After briefly introducing the use of vector-space models in information retrieval [52, 65], we describe the application of the singular value decomposition to dimension reduction, through the Latent Semantic Indexing technique [14]. We contrast this with several other approaches to clustering and dimension reduction based on vector-space models.
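A short Python sketch (ours, with a toy corpus) of the LSI pipeline mentioned above: build a term-document matrix, take a truncated SVD, and compare documents in the latent space instead of raw term space.

```python
import numpy as np

# A short sketch (ours) of Latent Semantic Indexing: term-document counts, truncated SVD,
# and document comparison in the resulting low-dimensional latent space.

docs = ["ships sail the sea", "boats sail the ocean", "cars drive the road", "trucks drive the road"]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)  # terms x documents

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                                  # keep the two largest singular values
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T                 # k-dimensional document representations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    print(round(cosine(doc_vecs[0], doc_vecs[1]), 2))  # nautical pair: close together in latent space
    print(round(cosine(doc_vecs[0], doc_vecs[2]), 2))  # nautical vs road pair: markedly lower
```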
The Use of Gamification in Higher Education : An Empirical Study
The use of gamification in higher education has increased considerably over the past decades. An empirical study was conducted in Hungary with two groups of students to investigate their behaviour while interacting with Kahoot! The results were analyzed based on the technology acceptance model. They indicate that positive attitude, good experience and ease of availability contributed to improving student performance, which strengthened the intention to use the application. Besides these, perceived utility was positively influenced by ease of use as a consequence. Keywords—Gamification; education; Hungary; technology acceptance model; university student
Manganese(V) Porphycene Complex Responsible for Inert C-H Bond Hydroxylation in a Myoglobin Matrix.
A mechanistic study of H2O2-dependent C-H bond hydroxylation by myoglobin reconstituted with a manganese porphycene was carried out. The X-ray crystal structure of the reconstituted protein obtained at 1.5 Å resolution reveals tight incorporation of the complex into the myoglobin matrix at pH 8.5, the optimized pH value for the highest turnover number of hydroxylation of ethylbenzene. The protein generates a spectroscopically detectable two-electron oxidative intermediate in a reaction with peracid, which has a half-life up to 38 s at 10 °C. Electron paramagnetic resonance spectra of the intermediate with perpendicular and parallel modes are silent, indicating formation of a low-spin MnV-oxo species. In addition, the MnV-oxo species is capable of promoting the hydroxylation of sodium 4-ethylbenzenesulfonate under single turnover conditions with an apparent second-order rate constant of 2.0 M-1 s-1 at 25 °C. Furthermore, the higher bond dissociation enthalpy of the substrate decreases the rate constant, in support of the proposal that the H-abstraction is one of the rate-limiting steps. The present engineered myoglobin serves as an artificial metalloenzyme for inert C-H bond activation via a high-valent metal species similar to the species employed by native monooxygenases such as cytochrome P450.
Pharmacokinetic linearity of i.v. vinorelbine from an intra-patient dose escalation study design
As pharmacokinetics represents a bridge between pharmacological concentrations and clinical regimens, the pharmacokinetic exploration of the therapeutic dose range is a major outcome. This study was aimed at assessing pharmacokinetic linearity of i.v. vinorelbine through an open design with intra-patient dose escalation (3 doses/group). Three groups of six patients received either 20–25–30 mg/m2; or 25–30–35 mg/m2; or 30–35–40 mg/m2. The inclusion criteria were: histologically confirmed tumour with at least one assessable target lesion, age 25–75 years, WHO PS ≤2, normal haematology and biochemistry, life expectancy ≥3 months. The pharmacokinetics was evaluated in both whole blood and plasma over 120 h. Twenty-six patients were recruited and 18 were evaluable for pharmacokinetics. The toxicity consisted in grade ≤3 leucopenia and neutropenia (<20% of courses) and two grade 4 constipation with rapid recovery (2/54 courses). Compared to blood, plasma was demonstrated to underestimate the pharmacokinetic parameters. In blood, the drug total clearance was about 0.6 l/h/kg, with minor contribution of renal clearance, steady state volume of distribution close to 13 l/h/kg, and elimination half-life at about 40 h. A pharmacokinetic linearity was demonstrated up to 40 mg/m2, and even up to 45 mg/m2 when pooling data from another study. A pharmacokinetic–pharmacodynamic relationship was evidenced on leucopenia and neutropenia when pooling the data from the two studies.
Association of frontal QRS-T angle--age risk score on admission electrocardiogram with mortality in patients admitted with an acute coronary syndrome.
Risk assessment is central to the management of acute coronary syndromes. Often, however, assessment is not complete until the troponin concentration is available. Using 2 multicenter prospective observational studies (Evaluation of Methods and Management of Acute Coronary Events [EMMACE] 2, test cohort, 1,843 patients; and EMMACE-1, validation cohort, 550 patients) of unselected patients with acute coronary syndromes, a point-of-admission risk stratification tool using frontal QRS-T angle derived from automated measurements and age for the prediction of 30-day and 2-year mortality was evaluated. Two-year mortality was lowest in patients with frontal QRS-T angles <38° and highest in patients with frontal QRS-T angles >104° (44.7% vs 14.8%, p <0.001). Increasing frontal QRS-T angle-age risk (FAAR) scores were associated with increasing 30-day and 2-year mortality (for 2-year mortality, score 0 = 3.7%, score 4 = 57%; p <0.001). The FAAR score was a good discriminator of mortality (C statistics 0.74 [95% confidence interval 0.71 to 0.78] at 30 days and 0.77 [95% confidence interval 0.75 to 0.79] at 2 years), maintained its performance in the EMMACE-1 cohort at 30 days (C statistics 0.76 (95% confidence interval 0.71 to 0.8] at 30 days and 0.79 (95% confidence interval 0.75 to 0.83] at 2 years), in men and women, in ST-segment elevation myocardial infarction and non-ST-segment elevation myocardial infarction, and compared favorably with the Global Registry of Acute Coronary Events (GRACE) score. The integrated discrimination improvement (age to FAAR score at 30 days and at 2 years in EMMACE-1 and EMMACE-2) was p <0.001. In conclusion, the FAAR score is a point-of-admission risk tool that predicts 30-day and 2-year mortality from 2 variables across a spectrum of patients with acute coronary syndromes. It does not require the results of biomarker assays or rely on the subjective interpretation of electrocardiograms.
SPECTRE: A Fast and Scalable Cryptocurrency Protocol
A growing body of research on Bitcoin and other permissionless cryptocurrencies that utilize Nakamoto’s blockchain has shown that they do not easily scale to process a high throughput of transactions, or to quickly approve individual transactions; blocks must be kept small, and their creation rates must be kept low in order to allow nodes to reach consensus securely. As of today, Bitcoin processes a mere 3-7 transactions per second, and transaction confirmation takes at least several minutes. We present SPECTRE, a new protocol for the consensus core of crypto-currencies that remains secure even under high throughput and fast confirmation times. At any throughput, SPECTRE is resilient to attackers with up to 50% of the computational power (up until the limit defined by network congestion and bandwidth constraints). SPECTRE can operate at high block creation rates, which implies that its transactions confirm in mere seconds (limited mostly by the round-trip-time in the network). Key to SPECTRE’s achievements is the fact that it satisfies weaker properties than classic consensus requires. In the conventional paradigm, the order between any two transactions must be decided and agreed upon by all non-corrupt nodes. In contrast, SPECTRE only satisfies this with respect to transactions performed by honest users. We observe that in the context of money, two conflicting payments that are published concurrently could only have been created by a dishonest user, hence we can afford to delay the acceptance of such transactions without harming the usability of the system. Our framework formalizes this weaker set of requirements for a crypto-currency’s distributed ledger. We then provide a formal proof that SPECTRE satisfies these requirements.
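To make the weaker requirement concrete, the toy sketch below illustrates the acceptance rule the abstract alludes to; it is not the SPECTRE block-DAG or voting procedure itself. Non-conflicting payments are accepted quickly, while concurrently published conflicting payments, which only a dishonest user could have created, are deferred.

```python
# Toy illustration (assumed data model, not SPECTRE itself): transactions that spend
# the same output conflict; the first non-conflicting payment is accepted quickly,
# any later conflicting payment is deferred for slower resolution.
from collections import defaultdict

accepted, deferred = [], []
spent_outputs = defaultdict(list)  # output id -> transactions trying to spend it

def submit(tx_id, spends_output):
    spent_outputs[spends_output].append(tx_id)
    if len(spent_outputs[spends_output]) == 1:
        accepted.append(tx_id)   # no conflict observed: fast acceptance
    else:
        deferred.append(tx_id)   # concurrent double-spend: delay acceptance

submit("tx1", "utxo-A")  # honest payment, accepted quickly
submit("tx2", "utxo-B")  # honest payment, accepted quickly
submit("tx3", "utxo-A")  # conflicts with tx1 -> deferred
print(accepted, deferred)
```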
Algorithmic Framework for Model-based Reinforcement Learning with Theoretical Guarantees
Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplifications gives a variant of model-based RL algorithms, Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves state-of-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.
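The loop below is a minimal, self-contained sketch of the meta-algorithm as described above, with illustrative simplifications: a 1-D toy environment, a linear dynamics model fit by least squares, and a surrogate equal to the model-based return minus a model-error penalty standing in for the lower bound. The joint maximization is reduced here to refitting the model and then searching over a scalar policy gain; none of these choices are the paper's actual SLBO implementation.

```python
# Toy sketch of the iterate: collect real trajectories -> refit the dynamics model
# -> maximize a penalized model-based surrogate of the expected reward.
import numpy as np

rng = np.random.default_rng(0)

def true_step(s, a):                    # unknown 1-D dynamics (toy assumption)
    return 0.9 * s + 0.5 * a + 0.01 * rng.normal()

def reward(s, a):
    return -(s ** 2) - 0.1 * a ** 2     # drive the state to zero cheaply

def rollout(policy_gain, horizon=30):
    s, traj = 1.0, []
    for _ in range(horizon):
        a = policy_gain * s
        s_next = true_step(s, a)
        traj.append((s, a, s_next))
        s = s_next
    return traj

def fit_model(data):
    # least-squares fit of s' ~ A*s + B*a
    X = np.array([[s, a] for s, a, _ in data])
    y = np.array([sn for _, _, sn in data])
    A, B = np.linalg.lstsq(X, y, rcond=None)[0]
    return A, B

def surrogate(policy_gain, model, data, lam=5.0, horizon=30):
    A, B = model
    s, ret = 1.0, 0.0
    for _ in range(horizon):            # model-based return
        a = policy_gain * s
        ret += reward(s, a)
        s = A * s + B * a
    # model-error penalty stands in for the discrepancy term of the lower bound
    err = np.mean([(A * s + B * a - sn) ** 2 for s, a, sn in data])
    return ret - lam * err

policy_gain, data = 0.0, []
for it in range(10):
    data += rollout(policy_gain)                     # (1) collect real data
    model = fit_model(data)                          # (2) refit the model
    gains = np.linspace(-2.0, 0.5, 101)              # (3) maximize the surrogate
    policy_gain = max(gains, key=lambda g: surrogate(g, model, data))
    print(it, round(float(policy_gain), 3))
```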
School leadership effects revisited : a review of empirical studies guided by indirect-effect models
Fourteen leadership effect studies that used indirect-effect models were quantitatively analysed to explore the most promising mediating variables. The results indicate that total effect sizes based on indirect-effect studies appear to be low, quite comparable to the results of some meta-analyses of direct-effect studies. Whereas the earlier indirect-effect studies tended to include a broad range of mainly school organisational conditions as intermediary variables, more recent studies focus more sharply on instructional conditions. The results of the conceptual analysis and the quantitative research synthesis would seem to support conceptualising educational leadership as a detached and 'lean' kind of meta-control, which would make maximum use of the available substitutes and self-organisation offered by the school staff and school organisational structural provisions. The coupling of conceptual analysis and systematic review of studies driven by indirect-effect models provides a new perspective on leadership effectiveness.
Point-of-Interest Recommendation in Location Based Social Networks with Topic and Location Awareness
The widespread use of location-based social networks (LBSNs) has enabled opportunities for better location-based services through Point-of-Interest (POI) recommendation. Indeed, the problem of POI recommendation is to provide personalized recommendations of places of interest. Unlike traditional recommendation tasks, POI recommendation is personalized, location-aware, and context-dependent. In light of this difference, this paper proposes a topic- and location-aware POI recommender system that exploits associated textual and context information. Specifically, we first exploit an aggregated latent Dirichlet allocation (LDA) model to learn the interest topics of users and to infer POIs of interest by mining the textual information associated with POIs. Then, a Topic and Location-aware probabilistic matrix factorization (TL-PMF) method is proposed for POI recommendation. A unique perspective of TL-PMF is to consider both the extent to which a user's interests match the POI in terms of topic distribution and the word-of-mouth opinions of the POIs. Finally, experiments on real-world LBSN data show that the proposed recommendation method outperforms state-of-the-art probabilistic latent factor models by a significant margin. We have also studied the impact of personalized interest topics and word-of-mouth opinions on POI recommendations.
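The snippet below is a hypothetical illustration of the kind of scoring TL-PMF describes; the exact factorization is not given in the abstract. A latent-factor term is blended with the match between the user's and the POI's LDA topic distributions and with a word-of-mouth opinion term.

```python
# Hypothetical sketch, not the paper's TL-PMF: score = latent-factor term
# + alpha * topic match (user vs POI topic distributions) + beta * word-of-mouth.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_pois, k, n_topics = 5, 8, 4, 3

U = rng.normal(scale=0.1, size=(n_users, k))          # user latent factors
V = rng.normal(scale=0.1, size=(n_pois, k))           # POI latent factors
theta_u = rng.dirichlet(np.ones(n_topics), n_users)   # user interest topics (e.g., from LDA)
theta_p = rng.dirichlet(np.ones(n_topics), n_pois)    # POI topic distributions (e.g., from LDA)
wom = rng.uniform(0, 1, n_pois)                       # word-of-mouth opinion score (assumed)

def score(u, p, alpha=1.0, beta=0.5):
    topic_match = theta_u[u] @ theta_p[p]             # topic-distribution agreement
    return U[u] @ V[p] + alpha * topic_match + beta * wom[p]

# rank POIs for user 0
ranked = sorted(range(n_pois), key=lambda p: score(0, p), reverse=True)
print(ranked[:3])
```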
Adolescent sexual networking and HIV transmission in rural Uganda.
Information on 861 adolescents shows that in 1991, 36 per cent reported having been sexually active in the previous 12 months, but only 6.2 per cent had ever used a condom (11% males, 2.4% females). The HIV infection rate was 5.9 per cent overall, 0.8 per cent in males and 9.9 per cent in females. The proportion sexually active and the rate of HIV infection rise with age. The annual incidence of HIV infection was 2.0 per 100 person-years of follow-up among all adolescents, 0.8 in males and 3.0 in females. The annual mortality rate among HIV-negative adolescents was 0.37 per cent versus 3.92 per cent among the HIV-positive adolescents, a rate ratio of 10.6. Sexual network data were collected on 389 adolescents aged 15-19 years, of whom 55 per cent were sexually active. The median age of first sexual intercourse was 15 years in either sex. The 214 adolescents reported 339 sexual relationships, of which 38.5 per cent were with spouses, 36 per cent with boy or girl friends and 21 per cent with 'friends'. There were 52 concurrent sexual relationships reported by 35 adolescents. Males report higher rates of concurrent sexual relationships than females. The sexual partners of boys were mainly younger students and housemaids, while the girls' partners were mainly older traders and salaried workers. Adolescents in this community report high rates of sexual activity and have complex sexual networks. They probably are important in the dynamics of HIV infection.
A Computational Interpretation of Forcing in Type Theory
In a previous work, we showed the uniform continuity of definable functionals in intuitionistic type theory as an application of forcing with dependent types. The argument was constructive, and so implicitly contains an algorithm that computes a witness that a given functional is uniformly continuous. We present here such an algorithm, which provides a possible computational interpretation of forcing.
Comparing Reward Shaping, Visual Hints, and Curriculum Learning
When considering how to reduce the learning effort required for Reinforcement Learning (RL) agents on complex tasks, designers can apply several common approaches. Reward shaping boosts the immediate reward provided by the environment, effectively encouraging (or discouraging) specific actions. Curriculum learning (Bengio et al. 2009) aims to help an agent learn a complex task by learning a sequence of simpler tasks. Hints may also be provided (e.g., a yellow brick road), which fall outside the notion of shaping or a curriculum. Despite the prevalence of these approaches, few studies examine how they compare to (or complement) each other, or when one approach is better than another. As a first step in this direction, we analyze shaping, hints, and curricula for a Deep RL agent in Malmo (Johnson et al. 2016), a research platform for Minecraft. Figure 1 (left) shows the layouts used in our study, which are distinguished by the number of rooms, the placement of the target, and whether color is included. For all rooms, the starting position of the agent is selected from five blocks at the bottom of the room (highlighted gray). In one-room situations and the right-most two-room situation, the target is always chosen from the five blocks at the top of the room (highlighted gray). In the left-most two-room situation, the target is set just beyond the doorway. Visual hints are provided in some situations by coloring some of the floor blocks blue. We seek to answer whether shaping, hints, or the curriculum has the most impact on performance, which we measure as the time to reach the target, the distance from the target, the cumulative reward, or the number of actions taken. For this task, performance was most affected by the curriculum used and by hints, while shaping had less impact, suggesting that designing an effective curriculum and providing appropriate hints deserve more attention for similar navigation tasks with Deep RL agents. Our methodology provides an evaluation protocol, serving as a foundation for further studies that tease apart when (and why) methods excel or fail.
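As an illustration of the shaping component, the sketch below uses standard potential-based shaping with a distance-to-target potential; the exact shaping signal used in the study is an assumption here, not taken from the abstract.

```python
# Minimal sketch (assumed form) of potential-based reward shaping for a grid
# navigation task: the environment reward is augmented with the discounted change
# in a potential phi(s) = -distance to the target.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def shaped_reward(env_reward, state, next_state, target, gamma=0.99):
    phi = lambda s: -manhattan(s, target)
    return env_reward + gamma * phi(next_state) - phi(state)

# example: moving one step closer to the target yields a small positive bonus
print(shaped_reward(0.0, (0, 0), (0, 1), target=(0, 5)))
```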
Explaining Entrepreneurial Intentions by Means of the Theory of Planned Behaviour
Purpose – This paper sets out to present a detailed empirical investigation of the entrepreneurial intentions of business students. The authors employ the theory of planned behaviour (TPB), in which intentions are regarded as resulting from attitudes, perceived behavioural control, and subjective norms. Design/methodology/approach – The methodology used was a replication study among samples of undergraduate students of business administration at four different universities (total n = 1,225). Five operationalisations of intentions are used as well as a composite measure. Prior to the main study, qualitative research was conducted at two other universities (total n = 373) to operationalise the components of the TPB. Findings – The results show that the two most important variables to explain entrepreneurial intentions are entrepreneurial alertness and the importance attached to financial security. Research limitations/implications – Various research design features are used that result in better and more detailed explanations of entrepreneurial intentions. Practical implications – Should one want to stimulate entrepreneurship in educational or training settings, then this paper's results provide guidance. Several suggestions are offered on how entrepreneurial alertness can be improved and financial security concerns can be reduced. Originality/value – The study provides detailed and solid results on entrepreneurial intentions which are positioned in the career literature.
A Particle Swarm Optimization (PSO)-based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments
Cloud computing environments facilitate applications by providing virtualized resources that can be provisioned dynamically. However, users are charged on a pay-per-use basis. User applications may incur large data retrieval and execution costs when they are scheduled taking into account only the 'execution time'. In addition to optimizing execution time, the cost arising from data transfers between resources as well as execution costs must also be taken into account. In this paper, we present a particle swarm optimization (PSO) based scheduling heuristic for data-intensive applications that takes into account both computation cost and data transmission cost. We experiment with a workflow application by varying its computation and communication costs. We analyze the cost savings when using PSO as compared to the existing 'Best Resource Selection' (BRS) algorithm. Our results show that we can achieve: (a) as much as 3 times cost savings compared to BRS, and (b) good distribution of workload onto resources, when using the PSO-based scheduling heuristic.
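A simplified, hypothetical PSO-style scheduler in the spirit described above might look as follows; it is not the paper's exact heuristic. Particles encode task-to-resource assignments, and fitness sums execution cost and the data-transfer cost along task dependencies.

```python
# Hypothetical sketch: PSO over task-to-resource assignments; cost matrices,
# dependency edges, and PSO coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_tasks, n_resources, n_particles, iters = 6, 3, 20, 50

exec_cost = rng.uniform(1, 10, (n_tasks, n_resources))         # cost of task t on resource r
transfer_cost = rng.uniform(0, 2, (n_resources, n_resources))  # cost of moving data r -> r'
np.fill_diagonal(transfer_cost, 0.0)
deps = [(0, 2), (1, 2), (2, 4), (3, 5)]                         # task dependency edges

def fitness(assignment):
    cost = exec_cost[np.arange(n_tasks), assignment].sum()
    cost += sum(transfer_cost[assignment[i], assignment[j]] for i, j in deps)
    return cost

def to_assignment(p):
    # continuous particle positions are rounded and clipped to resource indices
    return np.clip(np.rint(p), 0, n_resources - 1).astype(int)

pos = rng.uniform(0, n_resources - 1, (n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(to_assignment(p)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_resources - 1)
    for i, p in enumerate(pos):
        val = fitness(to_assignment(p))
        if val < pbest_val[i]:
            pbest_val[i], pbest[i] = val, p.copy()
    gbest = pbest[pbest_val.argmin()].copy()

best = to_assignment(gbest)
print("assignment:", best, "cost:", round(float(fitness(best)), 2))
```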
A cyclical evaluation model of information security maturity
Purpose – The lack of a security evaluation method might expose organizations to several risky situations. This paper aims at presenting a cyclical evaluation model of information security maturity. Design/methodology/approach – This model was developed through the definition of a set of steps to be followed in order to obtain periodical evaluation of maturity and continuous improvement of controls. Findings – This model is based on the controls present in ISO/IEC 27002, provides a means to measure the current situation of information security management through the use of a maturity model, and provides a subsidy for taking appropriate and feasible improvement actions based on risks. A case study is performed and the results indicate that the method is efficient for evaluating the current state of information security, supporting information security management, risk identification, and business and internal control processes. Research limitations/implications – It is possible that modifications to the process may be needed where there is less understanding of security requirements, such as in a less mature organization. Originality/value – This paper presents a generic model applicable to all kinds of organizations. The main contribution of this paper is the use of a maturity scale allied to the cyclical process of evaluation, providing the generation of immediate indicators for the management of information security.
A Linear Approximation Method for Probabilistic Inference
There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest in which the variables are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.
Multiword Expressions in NLP: General Survey and a Special Case of Verb-Noun Constructions
This chapter presents a survey of contemporary NLP research on Multiword Expressions (MWEs). MWEs pose a huge problem to precise language processing due to their idiosyncratic nature and the diversity of their semantic, lexical, and syntactic properties. The chapter begins by considering MWE definitions, describes some MWE classes, indicates problems that MWEs cause in language applications and their possible solutions, and presents methods of MWE encoding in dictionaries and their automatic detection in corpora. The chapter goes into more detail on a particular MWE class called Verb-Noun Constructions (VNCs). Due to their frequency in corpora and their unique characteristics, VNCs present a research problem in their own right. Having outlined several approaches to VNC representation in lexicons, the chapter explains the formalism of Lexical Functions as a possible VNC representation. Such a representation may serve as a tool for the automatic detection of VNCs in a corpus. The latter is illustrated on Spanish material by applying some supervised learning methods commonly used for NLP tasks.
Analysis of Metastability Errors in Conventional, LSB-First, and Asynchronous SAR ADCs
A practical model for characterizing comparator metastability errors in SAR ADCs is presented, and is used to analyze not only the conventional SAR but also LSB-first and asynchronous versions. This work makes three main contributions: first, it is shown that for characterizing metastability it is more reasonable to use input signals with normal or Laplace distributions. Previous work used uniformly-distributed signals in the interest of making derivations easier, but this simplifying assumption overestimated SMR by as much as 18 dB compared to the more reasonable analysis presented here. Second, this work shows that LSB-first SAR ADCs achieve SMR performance equal to or better than conventional SARs with the same metastability window, depending on bandwidth. Finally, the analysis is used to develop a framework for calculating the maximum effective sample rate for asynchronous SAR ADCs, and in doing so demonstrates that proximity detectors are not effective solutions to improving metastability performance.
Design Guidelines for Object-Oriented Deductive Systems
This paper presents a set of design guidelines for the construction of complex, real-world problem-solving systems using a hybrid object-oriented deductive formalism. These guidelines address implementation issues in a multi-paradigm environment. Examples are provided in the context of a large-scale knowledge based system (KBS).
Model Predictive Control of semi-active and active suspension systems with available road preview
Semi-active and active suspensions influencing the vertical dynamics of vehicles enable improvements in ride comfort and handling characteristics compared to vehicles with passive suspensions. If the road height profile in front of the car is measured using vehicle sensors, more efficient control strategies can be applied. First, for vehicles with continuously variable dampers, a Model Predictive Controller incorporating the nonlinear constraints of the damper characteristic is set up and compared to a controller using linear constraints. The approximate linear constraints are obtained by predicting the passive vehicle behavior over the preview horizon using a linearized model. Additionally, a controller without input constraints, followed by clipping of the damper force, is applied. This results in a quadratic program without constraints, which can be solved efficiently. The result of applying the optimal force without constraints is also evaluated, which corresponds to an ideal high-bandwidth actuator. Next, two controllers for vehicles with a low-bandwidth active suspension and variable dampers are proposed. While the first approach optimizes only the actuator displacement combined with the damper's soft characteristic, the second approach optimizes both the damper force and the actuator displacement. Simulation results of the controllers and the active and semi-active suspensions over real road height profiles are presented.
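The "no input constraints plus clipping" variant admits a particularly compact implementation, sketched below under assumed problem data: the unconstrained quadratic program over the preview horizon is solved in closed form and the resulting force is clipped to the range the damper can realize.

```python
# Minimal sketch (illustrative data, not the paper's full MPC formulation):
# minimize 0.5*u'Hu + f'u without constraints, then clip the force to the
# range a semi-active damper can actually deliver.
import numpy as np

H = np.array([[2.0, 0.3], [0.3, 1.5]])   # positive-definite Hessian of the QP (assumed)
f = np.array([-1.0, 2.5])                # linear term from the preview/tracking objective (assumed)
u_min, u_max = -1.0, 1.0                 # realizable damper force limits (assumed)

u_unconstrained = -np.linalg.solve(H, f) # closed-form minimizer of the unconstrained QP
u_applied = np.clip(u_unconstrained, u_min, u_max)
print(u_unconstrained, u_applied)
```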
Decoding the "Free/Open Source (F/OSS) Software Puzzle": a survey of theoretical and empirical contributions
F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.
Effect of nutrition education intervention based on Pender's Health Promotion Model in improving the frequency and nutrient intake of breakfast consumption among female Iranian students.
OBJECTIVE To determine the effectiveness of nutrition education intervention based on Pender's Health Promotion Model in improving the frequency and nutrient intake of breakfast consumption among female Iranian students. DESIGN The quasi-experimental study based on Pender's Health Promotion Model was conducted during April-June 2011. Information (data) was collected by self-administered questionnaire. In addition, a 3 d breakfast record was analysed. P < 0·05 was considered significant. SETTING Two middle schools in average-income areas of Qom, Iran. SUBJECTS One hundred female middle-school students. RESULTS There was a significant reduction in immediate competing demands and preferences, perceived barriers and negative activity-related affect constructs in the experimental group after education compared with the control group. In addition, perceived benefit, perceived self-efficacy, positive activity-related affect, interpersonal influences, situational influences, commitment to a plan of action, frequency and intakes of macronutrients and most micronutrients of breakfast consumption were also significantly higher in the experimental group compared with the control group after the nutrition education intervention. CONCLUSIONS Constructs of Pender's Health Promotion Model provide a suitable source for designing strategies and content of a nutrition education intervention for improving the frequency and nutrient intake of breakfast consumption among female students.
Viewing Purpose Through an Eriksonian Lens
One theme in Erik Erikson's work is the importance of finding a purpose for life. This article discusses the role of purpose in Erikson's writings and uses this review as a foundation for investigating Erikson's claims. Using a longitudinal sample of adolescents, Study 1 shows that identity and purpose development are intertwined processes insofar as increased commitment on one dimension corresponds to increased commitment on the other. Study 2 demonstrates that, although identity and purpose commitment are correlated, purpose commitment uniquely predicts Big Five personality trait levels, particularly for those traits related to maturity. Results are discussed as an impetus for future purpose development research and as support for largely unexamined Eriksonian claims.
Two faces of narcissism on SNS: The distinct effects of vulnerable and grandiose narcissism on SNS privacy control
This study suggests narcissism as an important psychological factor that predicts one's behavioral intention to control information privacy on SNS. In particular, we approach narcissism as a two-dimensional construct, vulnerable and grandiose narcissism, to provide a better understanding of the role of narcissism in SNS usage. As one of the first studies to apply a two-dimensional approach to narcissism in computer-mediated communication, our results show that vulnerable narcissism has a significant positive effect on behavioral intention to control privacy on SNS, while grandiose narcissism has no effect. This effect was found even when other personality traits, including self-esteem, computer anxiety, and concern for information privacy, were taken into account. The results indicate that unidimensional approaches to narcissism cannot solely predict SNS behaviors, and the construct of narcissism should be broken down into two orthogonal constructs.
Visual Dynamics: Probabilistic Future Frame Synthesis via Cross Convolutional Networks
We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose to model future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, namely a Cross Convolutional Network; this network encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. We also show that our model can be applied to visual analogy-making, and present an analysis of the learned network representations.
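The toy snippet below illustrates the cross-convolution idea with assumed shapes; it is not the paper's architecture. Feature maps come from the image branch, kernels come from the motion branch, and the kernels are applied to the feature maps to propagate motion (a same-padding correlation is used here, with the kernel flip omitted for simplicity).

```python
# Toy sketch of cross convolution: a per-sample predicted kernel is applied to an
# encoded feature map. Shapes and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
feature_map = rng.normal(size=(8, 8))    # one channel of the encoded image
motion_kernel = rng.normal(size=(3, 3))  # one kernel predicted from the sampled motion code

def apply_kernel_same(x, k):
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

transformed = apply_kernel_same(feature_map, motion_kernel)
print(transformed.shape)
```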
Teaching and Research: The Human Capital Paradigm.
The authors analyze why research is rewarded in academic institutions where teaching is the primary concern and suggest why even publication in obscure journals may serve as a measure of the worth of an instructor to teaching institutions.
Camel: Content-Aware and Meta-path Augmented Metric Learning for Author Identification
In this paper, we study the problem of author identification in big scholarly data, which is to effectively rank potential authors for each anonymous paper by using historical data. Most of the existing de-anonymization approaches predict the relevance score of a paper-author pair via feature engineering, which is not only time- and storage-consuming, but also introduces irrelevant and redundant features or misses important attributes. Representation learning can automate the feature generation process by learning node embeddings in an academic network to infer the correlation of a paper-author pair. However, the learned embeddings are often for general purpose (independent of the specific task), or based on network structure only (without considering the node content). To address these issues and make further progress in solving the author identification problem, we propose Camel, a content-aware and meta-path augmented metric learning model. Specifically, first, the directly correlated paper-author pairs are modeled based on distance metric learning by introducing a push loss function. Next, the paper content embedding encoded by a gated recurrent neural network is integrated into the distance loss. Moreover, the historical bibliographic data of papers is utilized to construct an academic heterogeneous network, wherein a meta-path guided walk integrative learning module based on the task-dependent and content-aware Skip-gram model is designed to formulate the correlations between each paper and its indirect author neighbors, further augmenting the model. Extensive experiments demonstrate that Camel outperforms the state-of-the-art baselines. It achieves an average improvement of 6.3% over the best baseline method.
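A push-style loss of the kind mentioned above could take the following form (the exact Camel loss is not specified in the abstract, so this is a hedged sketch): a paper embedding should be closer to its true author than to a sampled negative author by at least a margin.

```python
# Hedged sketch of a margin-based push loss for paper-author metric learning;
# the embedding dimensionality and margin are illustrative assumptions.
import numpy as np

def push_loss(paper_vec, pos_author_vec, neg_author_vec, margin=1.0):
    d_pos = np.sum((paper_vec - pos_author_vec) ** 2)   # distance to the true author
    d_neg = np.sum((paper_vec - neg_author_vec) ** 2)   # distance to a negative author
    return max(0.0, margin + d_pos - d_neg)             # push the negative beyond the margin

rng = np.random.default_rng(4)
p, a_pos, a_neg = rng.normal(size=(3, 16))
print(push_loss(p, a_pos, a_neg))
```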
The Power of Revealed Preference Tests: Ex-Post Evaluation of Experimental Design
Revealed preference tests are elegant nonparametric tools that ask whether choice data conform to optimizing behavior. These tests present a vexing tension between goodness-of-fit and power. If the test finds violations, is there an acceptable tolerance for goodness-of-fit? If no violations are found, was the test demanding enough to be powerful? This paper complements the many papers on goodness-of-fit by presenting several new indices of power. By focusing on the underlying probability model induced by sampling, we attempt to unify the two approaches. We illustrate applications of the indices, and provide a field guide to applying them to experimental data.
Video abstraction: A systematic review and classification
The demand for various multimedia applications is rapidly increasing due to the recent advances in computing and network infrastructure, together with the widespread use of digital video technology. Among the key elements for the success of these applications is how to effectively and efficiently manage and store a huge amount of audiovisual information, while at the same time providing user-friendly access to the stored data. This has fueled a quickly evolving research area known as video abstraction. As the name implies, video abstraction is a mechanism for generating a short summary of a video, which can either be a sequence of stationary images (keyframes) or moving images (video skims). In terms of browsing and navigation, a good video abstract will enable the user to gain maximum information about the target video sequence within a specified time constraint, or sufficient information in the minimum time. Over the past years, various ideas and techniques have been proposed towards the effective abstraction of video contents. The purpose of this article is to provide a systematic classification of these works. We identify and detail, for each approach, the underlying components and how they are addressed in specific works.