Entrepreneurship in UK higher education: The fictive entrepreneur and the fictive student
This article posits the idea of the ‘fictive entrepreneur’ and the ‘fictive student’ to explore how the historical masculinisation of entrepreneurship has informed UK policy and higher education (HE) approaches to entrepreneurship education, and the implications of this for female students. Using a Bourdieuian perspective, discourse analysis is employed to critically analyse policy and research documents and identify entrepreneurship discourses that construct both a ‘fictive entrepreneur’ that students should aspire to become, and a ‘fictive student’ who will benefit from HE entrepreneurship education. It argues that rather than being gender neutral or meritocratic, these discourses of entrepreneurship are saturated with gendered meanings which position HE students and entrepreneurs in potentially damaging ways.
RAVLT and nonverbal analog: French forms and clinical findings.
BACKGROUND Objective clinical evaluation of memory frequently requires serial testing, but the issue of whether multiple forms of the same test are equivalent and can be used interchangeably is seldom examined. An added problem in bilingual Canadian settings is the extent to which it is appropriate to measure French speakers' performance on translations of English tests. The present work used the Rey Auditory Verbal Learning Test (RAVLT) and a nonverbal analog, the Aggie Figures Learning Test (AFLT), to examine whether a) different forms of the same test are equivalent, b) performance on the two tests is comparable, c) two language groups perform similarly, and d) the RAVLT can detect dysfunction in patients with temporal lobe epilepsy (TLE). METHODS We compared three French versions of the RAVLT and three forms of the AFLT in 114 healthy francophone adults. We subsequently compared the performance of the same francophone subjects to a previously obtained sample of anglophones on both tests, and then administered the RAVLT to anglophone or francophone patients with TLE. RESULTS For both tasks the three forms were equivalent and performance on the RAVLT was comparable to that on the AFLT. Francophone subjects performed slightly worse on the RAVLT compared to anglophones but performance of the two language groups did not differ on the AFLT. Finally, left TLE patients were impaired compared to right on the RAVLT, but no performance differences were observed across the two language groups in the patient sample. CONCLUSIONS The RAVLT and AFLT are useful tools for examination of learning and memory in French and English speaking populations. On the RAVLT, the lesion effect in patients is not affected by differences in performance between language groups.
Performance analysis of the Janus WebRTC gateway
This paper takes an in-depth look at the performance of the Janus WebRTC gateway. Janus is a modular, open-source gateway allowing WebRTC clients to seamlessly interact with legacy real-time communication technologies, both standard and proprietary, and with each other. This is achieved by attaching technology-specific plugins on top of a barebones core implementing all of the functions and protocols mandated by the RTCWEB/WebRTC specification suites. The paper focuses on assessing the scalability of the Janus architecture, by selecting three representative use cases, followed by a detailed analysis of a real-world scenario associated with multi-point audio conferencing.
How useful are your comments?: analyzing and predicting YouTube comments and comment ratings
An analysis of the social video sharing platform YouTube reveals a high amount of community feedback through comments for published videos as well as through meta ratings for these comments. In this paper, we present an in-depth study of commenting and comment rating behavior on a sample of more than 6 million comments on 67,000 YouTube videos for which we analyzed dependencies between comments, views, comment ratings and topic categories. In addition, we studied the influence of sentiment expressed in comments on the ratings for these comments using the SentiWordNet thesaurus, a lexical WordNet-based resource containing sentiment annotations. Finally, to predict community acceptance for comments not yet rated, we built different classifiers for the estimation of ratings for these comments. The results of our large-scale evaluations are promising and indicate that community feedback on already rated comments can help to filter new unrated comments or suggest particularly useful but still unrated comments.
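To make the sentiment feature concrete, here is a minimal sketch of a lexicon-based comment score using SentiWordNet through NLTK. The tokenization, first-sense heuristic, and averaging are illustrative assumptions, not the paper's exact pipeline, and the required NLTK corpora must be downloaded beforehand.

```python
# Minimal lexicon-based sentiment score in the spirit of the paper's
# SentiWordNet feature. Assumes the NLTK corpora 'punkt',
# 'averaged_perceptron_tagger', 'wordnet' and 'sentiwordnet' are installed.
from nltk import pos_tag, word_tokenize
from nltk.corpus import sentiwordnet as swn

def sentiment_score(comment: str) -> float:
    """Average (positive - negative) SentiWordNet score over scorable tokens."""
    scores = []
    for token, tag in pos_tag(word_tokenize(comment.lower())):
        # Map Penn Treebank tags to WordNet POS; skip unmapped tokens.
        wn_pos = {"J": "a", "N": "n", "R": "r", "V": "v"}.get(tag[:1])
        if wn_pos is None:
            continue
        synsets = list(swn.senti_synsets(token, wn_pos))
        if synsets:  # crude heuristic: take the most frequent sense
            scores.append(synsets[0].pos_score() - synsets[0].neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment_score("this video is absolutely wonderful"))  # > 0 expected
```

In the study, a score like this would be one input among others (views, category, prior ratings) to the rating classifiers.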
Microgrid: a conceptual solution
Application of individual distributed generators can cause as many problems as it may solve. A better way to realize the emerging potential of distributed generation is to take a system approach which views generation and associated loads as a subsystem or a "microgrid". During disturbances, the generation and corresponding loads can separate from the distribution system to isolate the microgrid's load from the disturbance (providing UPS services) without harming the transmission grid's integrity. This ability to island generation and loads together has a potential to provide a higher local reliability than that provided by the power system as a whole. In this model it is also critical to be able to use the waste heat by placing the sources near the heat load. This implies that a unit can be placed at any point on the electrical system as required by the location of the heat load.
Efficient Multioutput Gaussian Processes through Variational Inducing Kernels
Interest in multioutput kernel methods is increasing, whether under the guise of multitask learning, multisensor networks or structured output data. From the Gaussian process perspective a multioutput Mercer kernel is a covariance function over correlated output functions. One way of constructing such kernels is based on convolution processes (CP). A key problem for this approach is efficient inference. Álvarez and Lawrence recently presented a sparse approximation for CPs that enabled efficient inference. In this paper, we extend this work in two directions: we introduce the concept of variational inducing functions to handle potential non-smooth functions involved in the kernel CP construction and we consider an alternative approach to approximate inference based on variational methods, extending the work by Titsias (2009) to the multiple output case. We demonstrate our approaches on prediction of school marks, compiler performance and financial time series.
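For orientation, the convolution-process (CP) construction the abstract builds on can be sketched as follows; this is the standard form from the multioutput GP literature, with notation assumed rather than taken from the paper. Each output f_d is a smoothed version of shared latent functions u_q, which is what induces correlation across outputs:

```latex
% Convolution process: each output smooths shared latent functions u_q.
f_d(\mathbf{x}) = \sum_{q=1}^{Q} \int G_{d,q}(\mathbf{x}-\mathbf{z})\, u_q(\mathbf{z})\, \mathrm{d}\mathbf{z},
\qquad
\operatorname{cov}\!\left[f_d(\mathbf{x}), f_{d'}(\mathbf{x}')\right]
  = \sum_{q=1}^{Q} \iint G_{d,q}(\mathbf{x}-\mathbf{z})\,
    k_q(\mathbf{z},\mathbf{z}')\, G_{d',q}(\mathbf{x}'-\mathbf{z}')\,
    \mathrm{d}\mathbf{z}\,\mathrm{d}\mathbf{z}'.
```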
Application of Deep Belief Networks for opcode based malware detection
Deep belief nets (DBNs) have been successfully applied in various fields ranging from image classification and audio recognition to information retrieval. Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better solve the overfitting problem in neural network training. In this study we represent malware as opcode sequences and use DBNs to detect malware. We compare the performance of DBNs with three widely used classification algorithms: Support Vector Machines (SVM), Decision Tree and k-Nearest Neighbor algorithm (KNN). The DBN model gives detection accuracy that is equal to the best of the other models. When using additional unlabeled data for DBN pre-training, DBNs performed better than the compared classification algorithms. We also use the DBNs as an autoencoder to extract the feature vectors of the input data. The experiments show that the autoencoder can effectively model the underlying structure of the input data and can significantly reduce the dimensionality of the feature vectors.
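A rough, runnable stand-in for the described setup is sketched below: opcode n-gram features feeding the three baseline classifiers, with a single BernoulliRBM layer approximating DBN pretraining (scikit-learn has no full DBN; the opcode documents and labels are toy placeholders):

```python
# Opcode-sequence n-grams + baseline classifiers; an RBM + logistic
# regression pipeline loosely stands in for a pretrained DBN.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

opcode_docs = ["push mov call ret", "xor jmp push push xor",   # toy
               "mov mov add ret", "jmp xor xor call"]          # placeholders
labels = [0, 1, 0, 1]

vect = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+", binary=True)
X = vect.fit_transform(opcode_docs)

dbn_like = Pipeline([
    ("rbm", BernoulliRBM(n_components=16, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
for name, model in [("RBM+LR (DBN stand-in)", dbn_like), ("SVM", SVC()),
                    ("DecisionTree", DecisionTreeClassifier()),
                    ("KNN", KNeighborsClassifier(n_neighbors=3))]:
    model.fit(X, labels)
    print(name, model.score(X, labels))   # training accuracy on the toy data
```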
Norms as a basis for governing sociotechnical systems
We understand a sociotechnical system as a multistakeholder cyber-physical system. We introduce governance as the administration of such a system by the stakeholders themselves. In this regard, governance is a peer-to-peer notion and contrasts with traditional management, which is a top-down hierarchical notion. Traditionally, there is no computational support for governance and it is achieved through out-of-band interactions among system administrators. Not surprisingly, traditional approaches simply do not scale up to large sociotechnical systems. We develop an approach for governance based on a computational representation of norms in organizations. Our approach is motivated by the Ocean Observatory Initiative, a thirty-year $400 million project, which supports a variety of resources dealing with monitoring and studying the world's oceans. These resources include autonomous underwater vehicles, ocean gliders, buoys, and other instrumentation as well as more traditional computational resources. Our approach has the benefit of directly reflecting stakeholder needs and assuring stakeholders of the correctness of the resulting governance decisions while yielding adaptive resource allocation in the face of changes in both stakeholder needs and physical circumstances.
100 W GaN HEMT power amplifier module with > 60% efficiency over 100–1000 MHz bandwidth
We have demonstrated a decade bandwidth 100 W GaN HEMT power amplifier module with 15.5–18.6 dB gain, 104–121 W CW output power and 61.4–76.6 % drain efficiency over the 100–1000 MHz band. The 2 × 2 inch compact power amplifier module combines four 30 W lossy matched broadband GaN HEMT PAs packaged in a ceramic SO8 package. Each of the 4 devices is fully matched to 50 Ω and obtains 30.8–35.7 W with 68.6–79.6 % drain efficiency over the band. The packaged amplifiers contain a GaN on SiC device operating at 48V drain voltage, alongside GaAs integrated passive matching circuitry. The four devices are combined using a broadband low loss coaxial balun. We believe this combination of output power, bandwidth and efficiency is the best reported to date. These amplifiers are targeted for use in multi-band public mobile radios and for instrumentation applications.
The Enterprise Ontology
This document presents the Enterprise Ontology, a collection of terms and definitions relevant to business enterprises. It was developed as part of the Enterprise Project, a collaborative effort to provide a framework for enterprise modelling. The Enterprise Ontology will serve as a basis for this framework, which includes methods and a computer toolset for enterprise modelling. We give an overview of the Enterprise Project, elaborate on the intended use of the Ontology, and discuss the process we went through to build it. The scope of the Enterprise Ontology is limited to those core concepts required for the project; however, it is expected that it will appeal to a wider audience. It should not be considered static: during the course of the project the Enterprise Ontology will be further refined and extended.
‘Fine-tuned’ correction of tibial slope with a temporary external fixator in opening wedge high-tibial osteotomy
The authors describe a surgical procedure to ‘fine-tune’ the tibial slope during high-tibial osteotomy. Fifteen consecutive patients were treated for medial compartment osteoarthritis of the knee using a temporary unilateral external fixator and an accompanying internal fixator composed of two plates (with different-sized spacers). All 15 patients were evaluated by measuring femoro-tibial angles (FTAs) in the frontal plane, and using the proximal tibial anatomical axis (PTAA) and the posterior tibial cortex (PTC) methods to assess tibial slope in the sagittal plane. FTA, PTAA, and PTC angles were measured using: (1) radiographs taken before surgery, (2) fluoroscopic images taken after inserting the first plate during surgery, (3) fluoroscopic images taken after tibial slope restoration using an external fixator system during surgery, and finally, (4) radiographs taken after surgery. In all patients, preoperative PTAA and PTC angles increased significantly after inserting the first plate posteromedially at the osteotomy site. After the tibial slope had been accurately restored using the external fixator system, PTAA and PTC angles decreased to the preoperative tibial slope level without changing femorotibial angles in the coronal plane. The authors were able to achieve a consistent and reproducible natural tibial slope by ‘fine-tuning’ the tibial slope with an external fixator and a stable internal fixator.
Thin SiGe buffer layer growth by in situ low energy hydrogen plasma preparation
A new method to relax thin constant composition molecular beam epitaxy (MBE) grown SiGe buffer layers on silicon (100) substrates has been studied. A low energy plasma cleaning process (LEPC) using hydrogen prior to deposition can reduce the Si1−xGex layer thickness to approximately 10% of a standard graded buffer. The layers were characterised by secondary ion mass spectroscopy (SIMS), high resolution X-ray diffraction (HRXRD), transmission electron microscopy (TEM), Rutherford backscattering (RBS) and atomic force microscopy (AFM). The threading dislocation density is 10^5 cm^−2 (TEM measurement) and is comparable to or even lower than in standard buffers. Ge concentration and Si1−xGex layer thickness are limited by spontaneous relaxation during growth. The critical thickness on hydrogen prepared wafers is reduced to nearly 50% of that prepared in the standard manner. A complete post-epitaxial relaxation has been obtained for a 160-nm-thick Si0.83Ge0.17 layer at temperatures as low as 600°C. AFM investigations of such samples show significantly reduced area-RMS values of 0.3–0.9 nm. By means of multistep growth the Ge content can be increased. In a two-step mode thin buffers up to 34% Ge were grown with high crystal quality.
Discourse analysis.
Discourse Analysis Introduction. Discourse analysis is the study of language in use. It rests on the basic premise that language cannot be understood without reference to the context, both linguistic and extra-linguistic, in which it is used. It draws from the findings and methodologies of a wide range of fields, such as anthropology, philosophy, sociology, social and cognitive psychology, and artificial intelligence. It is itself a broad field comprised of a large number of linguistic subfields and approaches, including speech act theory, conversation analysis, pragmatics, and the ethnography of speaking. At the same time, the lines between certain linguistic subfields, in particular psycholinguistics, anthropological linguistics, and cognitive linguistics, and discourse analysis overlap, and approaches to the study of discourse are informed by these subfields, and in many cases findings are independently corroborated. As a very interdisciplinary approach, the boundaries of this field are fuzzy. [Footnote 1: It is interesting, in this light, to compare the contents of several standard handbooks of discourse analysis. Brown and Yule (1986) focus heavily on pragmatics and information structure, while Schiffrin (1994) includes several chapters directly related to sociolinguistic methodologies (i.e. chapters on interactional sociolinguistics, ethnomethodology and variation analysis). Mey (1993) has three chapters on conversation analysis (a topic which Schiffrin also covers) and a chapter on "societal pragmatics."] The fundamental assumption underlying all approaches to discourse analysis is that language must be studied as it is used, in its context of production, and so the object of analysis is very rarely in the form of a sentence. Instead, written or spoken texts, usually larger than one sentence or one utterance, provide the data. In other words, the discourse analyst works with naturally occurring corpora, and with such corpora come a wide variety of features such as hesitations, non-standard forms, self-corrections, repetitions, incomplete clauses, words, and so on—all linguistic material which would be relegated to performance by Chomsky (1965) and so stand outside the scope of analysis for many formal linguists. But for the discourse analyst, such "performance" data are indeed relevant and may in fact be the focus of research. The focus on actual instances of language use also means that the analysis does not look at language only as an abstract system; this is a fundamental difference between formal work on syntax versus discourse analysis. This paper first provides an overview of discourse analysis and …
A meta-analysis of the technology acceptance model
A statistical meta-analysis of the technology acceptance model (TAM) as applied in various fields was conducted using 88 published studies that provided sufficient data to be credible. The results show TAM to be a valid and robust model that has been widely used, but which potentially has wider applicability. A moderator analysis involving user types and usage types was performed to investigate conditions under which TAM may have different effects. The study confirmed the value of using students as surrogates for professionals in some TAM studies, and perhaps more generally. It also revealed the power of meta-analysis as a rigorous alternative to qualitative and narrative literature review methods.
How to Fit when No One Size Fits
While “no one size fits all” is a sound philosophy for system designers to follow, it poses multiple challenges for application developers and system administrators. It can be hard for an application developer to pick one system when the needs of her application match the features of multiple “one size” systems. The choice becomes considerably harder when different components of an application fit the features of different “one size” systems. Considerable manual effort goes into creating and tuning such multi-system applications. An application’s data and workload properties may change over time, often in unpredictable and bursty ways. Consequently, the “one size” system that is best for an application can change over time. Adapting to change can be hard when application development is coupled tightly with any individual “one size” system. In this paper, we make the case for developing a new breed of Database Management Systems that we term DBMS+. A DBMS+ contains multiple “one size” systems internally. An application specifies its execution requirements on aspects like performance, availability, consistency, change, and cost to the DBMS+ declaratively. For all requests (e.g., queries) made by the application, the DBMS+ will select the execution plan that meets the application’s requirements best. A unique aspect of the execution plan in a DBMS+ is that the plan includes the selection of one or more “one size” systems. The plan is then deployed and managed automatically on the selected system(s). If application requirements change beyond what was planned for originally by the DBMS+, then the application can be reoptimized and redeployed, usually with no additional effort required from the application developer. The DBMS+ approach has the potential to address the challenges that application developers and system administrators face from the vast and growing number of “one size” systems today. However, this approach poses many research challenges that we discuss in this paper. We are taking the DBMS+ approach in a platform, called Cyclops, that we are building for continuous query execution. We will use Cyclops throughout the paper to give concrete illustrations of the benefits and challenges of the DBMS+ approach.
Multidimensional Process Mining Using Process Cubes
Process mining techniques enable the analysis of processes using event data. For structured processes without too many variations, it is possible to show a relatively simple model and project performance and conformance information on it. However, if there are multiple classes of cases exhibiting markedly different behaviors, then the overall process will be too complex to interpret. Moreover, it will be impossible to see differences in performance and conformance for the different process variants. The different process variations should be analysed separately and compared to each other from different perspectives to obtain meaningful insights about the different behaviors embedded in the process. This paper formalizes the notion of process cubes where the event data is presented and organized using different dimensions. Each cell in the cube corresponds to a set of events which can be used as an input by any process mining technique. This notion is related to the well-known OLAP (Online Analytical Processing) data cubes, adapting the OLAP paradigm to event data through multidimensional process mining. This adaptation is far from trivial given the nature of event data which cannot be easily summarized or aggregated, conflicting with classical OLAP assumptions. For example, multidimensional process mining can be used to analyze the different versions of a sales process, where each version can be defined according to different dimensions such as location or time, and then the different results can be compared. This new way of looking at processes may provide valuable insights for process optimization.
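A minimal sketch of the process-cube idea in pandas (column names and dimensions are illustrative, not the paper's notation): the event log is organized along dimensions, and every cell yields a sub-log that any process mining technique can consume.

```python
# Each (location, year) cell of the "cube" is a sub-log of events.
import pandas as pd

events = pd.DataFrame({
    "case_id":  [1, 1, 2, 2, 3, 3],
    "activity": ["create", "ship", "create", "cancel", "create", "ship"],
    "location": ["EU", "EU", "US", "US", "EU", "EU"],
    "year":     [2023, 2023, 2023, 2023, 2024, 2024],
})

# "Dice": materialize one cell per combination of dimension values.
for (loc, yr), cell in events.groupby(["location", "year"]):
    print(f"cell ({loc}, {yr}): {len(cell)} events, "
          f"{cell['case_id'].nunique()} cases")
```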
Extracting Complex Biological Events with Rich Graph-Based Feature Sets
We describe a system for extracting complex events among genes and proteins from biomedical literature, developed in the context of the BioNLP’09 Shared Task on Event Extraction. For each event, its text trigger, class, and arguments are extracted. In contrast to the prevailing approaches in the domain, events can be arguments of other events, resulting in a nested structure that better captures the underlying biological statements. We divide the task into independent steps which we approach as machine learning problems. We define a wide array of features and in particular make extensive use of dependency parse graphs. A rule-based post-processing step is used to refine the output in accordance with the restrictions of the extraction task. In the shared task evaluation, the system achieved an F-score of 51.95% on the primary task, the best performance among the participants.
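The central feature idea, shortest paths in the dependency parse graph, can be sketched as follows; the toy sentence, edges, and feature encoding are made up for illustration:

```python
# Shortest dependency path between a trigger and a candidate argument,
# encoded as simple path features.
import networkx as nx

# Dependency parse as an undirected graph: nodes are tokens,
# edges carry the dependency label.
g = nx.Graph()
g.add_edge("inhibits", "protein_A", dep="nsubj")
g.add_edge("inhibits", "expression", dep="dobj")
g.add_edge("expression", "protein_B", dep="nmod")

path = nx.shortest_path(g, source="inhibits", target="protein_B")
# Features over the token/edge-label sequence along the path.
features = ["-".join([a, g[a][b]["dep"], b]) for a, b in zip(path, path[1:])]
print(features)  # ['inhibits-dobj-expression', 'expression-nmod-protein_B']
```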
Autoencoder-based feature learning for cyber security applications
This paper presents a novel feature learning model for cyber security tasks. We propose to use Auto-encoders (AEs), as a generative model, to learn latent representations of different feature sets. We show how well the AE is capable of automatically learning a reasonable notion of semantic similarity among input features. Specifically, the AE accepts a feature vector, obtained from cyber security phenomena, and extracts a code vector that captures the semantic similarity between the feature vectors. This similarity is embedded in an abstract latent representation. Because the AE is trained in an unsupervised fashion, much of this success comes from the appropriate original feature set used in this paper. It can also provide more discriminative features in contrast to other feature engineering approaches. Furthermore, the scheme can reduce the dimensionality of the features, thereby significantly minimising the memory requirements. We selected two different cyber security tasks: network-based anomaly intrusion detection and malware classification. We have analysed the proposed scheme with various classifiers using publicly available datasets for network anomaly intrusion detection and malware classification. Several appropriate evaluation metrics show improvement compared to prior results.
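A minimal one-hidden-layer autoencoder of the kind described can be sketched as below, here in PyTorch since the paper does not name a framework; dimensions, data, and hyperparameters are placeholders:

```python
# Unsupervised autoencoder: train to reconstruct the input, then keep the
# low-dimensional codes as learned features.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.rand(256, 40)              # placeholder feature vectors
model = AE(n_features=40, code_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                 # unsupervised: reconstruct the input
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()

with torch.no_grad():
    codes = model.encoder(X)         # 8-dim codes replace 40-dim features
print(codes.shape)                   # torch.Size([256, 8])
```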
Night work and breast cancer in women: a Swedish cohort study
OBJECTIVES Recent research has suggested a moderate link between night work and breast cancer in women, mainly through case-control studies, but non-significant studies are also common and cohort studies are few. The purpose of the present study was to provide new information from cohort data through investigating the association between the number of years with night work and breast cancer among women. DESIGN Cohort study of individuals exposed to night shift work in relation to incidence of breast cancer in women. SETTING Individuals in the Swedish Twin Registry, with follow-up in the Swedish Cancer Registry. PARTICIPANTS 13,656 women from the Swedish Twin Registry, with 3404 exposed to night work. OUTCOME MEASURES Breast cancer from the Swedish Cancer Registry (463 cases) during a follow-up time of 12 years. RESULTS A Cox proportional hazards regression analysis with control for a large number of confounders showed a hazard ratio (HR) of 1.68 (95% CI 0.98 to 2.88) for the group with >20 years of night work. When the follow-up time was limited to ages below 60 years, those exposed >20 years showed an HR of 1.77 (95% CI 1.03 to 3.04). Shorter exposure to night work showed no significant effects. CONCLUSIONS The present results, together with previous work, suggest that night work is associated with an increased risk of breast cancer in women, but only after relatively long-term exposure.
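The hazard ratios above come from Cox proportional hazards regression. The snippet below shows the same type of model with the lifelines library, fitted on lifelines' bundled recidivism dataset as stand-in data, since the Twin Registry cohort is not public:

```python
# Cox proportional hazards regression of the kind used in the study,
# demonstrated on lifelines' bundled dataset (stand-in data).
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                 # duration column: 'week', event: 'arrest'
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()               # exp(coef) column gives HRs with 95% CIs
```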
Biodentine-a novel dentinal substitute for single visit apexification
Use of an apical plug in management of cases with open apices has gained popularity in recent years. Biodentine, a new calcium silicate-based material, has recently been introduced as a dentine substitute, whenever original dentine is damaged. This case report describes single visit apexification in a maxillary central incisor with necrotic pulp and open apex using Biodentine as an apical barrier, and a synthetic collagen material as an internal matrix. Following canal cleaning and shaping, calcium hydroxide was placed as an intracanal medicament for 1 month. This was followed by placement of a small piece of absorbable collagen membrane beyond the root apex to serve as a matrix. An apical plug of Biodentine of 5 mm thickness was placed against the matrix using pre-fitted hand pluggers. The remainder of the canal was back-filled with thermoplasticized gutta-percha, and the access cavity was restored with composite resin followed by an all-ceramic crown. One year follow-up revealed restored aesthetics and function, absence of clinical signs and symptoms, resolution of periapical rarefaction, and a thin layer of calcific tissue formed apical to the Biodentine barrier. The positive clinical outcome in this case is encouraging for the use of Biodentine as an apical plug in single visit apexification procedures.
Cosine Siamese Models for Stance Detection
Fake news detection has received much attention since the 2016 United States Presidential election, where election outcomes are thought to have been influenced by the unregulated abundance of fake news articles. The recently released Fake News Challenge (FNC) aims to address the fake news problem by decomposing the problem into distinct NLP tasks, the first of which is stance detection. We employ a neural architecture consisting of two homologous subnetworks for headline and body processing and a subsequent node for headline/body comparison and stance prediction. Headlines and bodies are represented with a weighted bag-of-words combination of word vectors passed through a ReLU, where the weights are learned. Stance is quantified by computing the cosine similarity of these weighted bag-of-words representations, and the score is regressed to a relaxed, continuous label space in which the true discrete labels are posited to lie. Our model, which outperforms other recurrent methods, achieves an FNC score of 0.891 out of 1.00, a 0.10 increase from the published 0.79 FNC baseline. The cosine similarity function induces a natural geometry among the learned headline and body representations, with unrelated inputs generally orthogonal to each other and agreeing inputs nearly collinear. Our findings implicate the importance of the optimization objective, as opposed to the architecture of the subnetwork models, to success in stance detection, echoing recent work demonstrating the competitiveness of weighted bag-of-words models for textual similarity tasks.
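A numerical sketch of the headline/body comparison just described: weighted bag-of-words through a ReLU, then cosine similarity. The vocabulary, embeddings, and uniform weights are placeholders; in the model the weights are learned and the score is regressed to the relaxed label space:

```python
# Weighted bag-of-words encoder + cosine similarity, numerically.
import numpy as np

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       "police find mass graves story unrelated body".split()}
weights = {w: 1.0 for w in emb}            # learned in the real model

def encode(tokens):
    v = sum(weights[t] * emb[t] for t in tokens)
    return np.maximum(v, 0.0)              # ReLU

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

headline = encode("police find mass graves".split())
body = encode("story unrelated body".split())
print(cosine(headline, body))   # this score is regressed to the label space
```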
Rolling shutter motion deblurring
Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.
Mr. Bennet, his coachman, and the Archbishop walk into a bar but only one of them gets recognized: On The Difficulty of Detecting Characters in Literary Texts
Characters are fundamental to literary analysis. Current approaches are heavily reliant on NER to identify characters, causing many to be overlooked. We propose a novel technique for character detection, achieving significant improvements over state of the art on multiple datasets.
Performance of X-ray detectors with SiPM readout in cargo accelerator-based inspection systems
Existing requirements for high throughput cargo radiography inspection include high resolution images (better than 5 mm line pair resolution), penetration beyond 400 mm steel equivalent, material discrimination (organic, inorganic, high Z), high scan speeds (>10 kph, up to 60 kph), low dose and a small radiation exclusion zone; all in a cost-effective system. To meet and exceed these requirements, research into a number of new radiography methods has been initiated. Novel concepts relying on intrapulse modulated energy X-ray sources, mono-energetic gamma-ray sources, and new types of fast X-ray detectors, Scintillation-Cherenkov Detectors, are expected to be most beneficial when combined with the unique features of Silicon Photomultiplier (SiPM) technology.
Connecting the Dots Between MLE and RL for Sequence Generation
Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency. A variety of other algorithms, such as RAML, SPG, and data noising, have also been developed from different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configuration of the reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency. Moreover, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning. Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm.
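A sketch of the generalized entropy-regularized policy optimization formulation the abstract describes, with notation reconstructed for illustration (the paper's exact parameterization may differ). Particular choices of the reward R and the weights (α, β) are what recover MLE, RAML, SPG, and data noising as special cases:

```latex
% Generalized objective over an auxiliary distribution q and model p_theta:
\mathcal{L}(q, \theta) =
  \mathbb{E}_{q}\!\left[ R(\mathbf{y} \mid \mathbf{y}^{*}) \right]
  - \alpha\, \mathrm{KL}\!\left( q(\mathbf{y} \mid \mathbf{x}) \,\|\, p_\theta(\mathbf{y} \mid \mathbf{x}) \right)
  + \beta\, \mathbb{H}(q),
% solved by EM-style alternation:
q^{n+1}(\mathbf{y} \mid \mathbf{x}) \propto
  \exp\!\left\{ \frac{\alpha \log p_{\theta^{n}}(\mathbf{y} \mid \mathbf{x}) + R(\mathbf{y} \mid \mathbf{y}^{*})}{\alpha + \beta} \right\},
\qquad
\theta^{n+1} = \arg\max_{\theta}\, \mathbb{E}_{q^{n+1}}\!\left[ \log p_\theta(\mathbf{y} \mid \mathbf{x}) \right].
```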
The global prevalence of common mental disorders: a systematic review and meta-analysis 1980-2013.
BACKGROUND Since the introduction of specified diagnostic criteria for mental disorders in the 1970s, there has been a rapid expansion in the number of large-scale mental health surveys providing population estimates of the combined prevalence of common mental disorders (most commonly involving mood, anxiety and substance use disorders). In this study we undertake a systematic review and meta-analysis of this literature. METHODS We applied an optimized search strategy across the Medline, PsycINFO, EMBASE and PubMed databases, supplemented by hand searching to identify relevant surveys. We identified 174 surveys across 63 countries providing period prevalence estimates (155 surveys) and lifetime prevalence estimates (85 surveys). Random effects meta-analysis was undertaken on logit-transformed prevalence rates to calculate pooled prevalence estimates, stratified according to methodological and substantive groupings. RESULTS Pooling across all studies, approximately 1 in 5 respondents (17.6%, 95% confidence interval: 16.3-18.9%) were identified as meeting criteria for a common mental disorder during the 12 months preceding assessment; 29.2% (25.9-32.6%) of respondents were identified as having experienced a common mental disorder at some time during their lifetimes. A consistent gender effect in the prevalence of common mental disorder was evident; women having higher rates of mood (7.3%:4.0%) and anxiety (8.7%:4.3%) disorders during the previous 12 months and men having higher rates of substance use disorders (2.0%:7.5%), with a similar pattern for lifetime prevalence. There was also evidence of consistent regional variation in the prevalence of common mental disorder. Countries within North and South East Asia in particular displayed consistently lower one-year and lifetime prevalence estimates than other regions. One-year prevalence rates were also low in Sub-Saharan Africa, whereas English-speaking countries returned the highest lifetime prevalence estimates. CONCLUSIONS Despite a substantial degree of inter-survey heterogeneity in the meta-analysis, the findings confirm that common mental disorders are highly prevalent globally, affecting people across all regions of the world. This research provides an important resource for modelling population needs based on global regional estimates of mental disorder. The reasons for regional variation in mental disorder require further investigation.
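For readers unfamiliar with the pooling step: prevalences are logit-transformed, pooled with inverse-variance random-effects weights, and back-transformed. The sketch below shows the standard machinery; the paper's exact estimator of the between-study variance τ² is not stated in the abstract:

```latex
% Logit transform of a study prevalence p_i estimated from n_i respondents:
\ell_i = \operatorname{logit}(p_i) = \ln\frac{p_i}{1-p_i},
\qquad
\widehat{\mathrm{var}}(\ell_i) \approx \frac{1}{n_i p_i} + \frac{1}{n_i (1-p_i)}.
% Random-effects pooling and back-transformation:
\bar{\ell} = \frac{\sum_i w_i \ell_i}{\sum_i w_i},
\quad w_i = \frac{1}{\widehat{\mathrm{var}}(\ell_i) + \hat{\tau}^2},
\qquad
\bar{p} = \frac{e^{\bar{\ell}}}{1 + e^{\bar{\ell}}}.
```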
L-carnitine supplementation for the management of fatigue in patients with hypothyroidism on levothyroxine treatment: a randomized, double-blind, placebo-controlled trial.
Hypothyroid patients experience fatigue-related symptoms despite adequate thyroid hormone replacement. Thyroid hormone plays an essential role in carnitine-dependent fatty acid import and oxidation. We investigated the effects of L-carnitine supplementation on fatigue in patients with hypothyroidism. In total, 60 patients (age 50.0 ± 9.2 years, 3 males, 57 females) who still experienced fatigue (fatigue severity scale [FSS] score ≥ 36) were given L-carnitine (n = 30, 990 mg L-carnitine twice daily) or placebo (n = 30) for 12 weeks. After 12 weeks, although neither the FSS score nor the physical fatigue score (PFS) changed significantly, the mental fatigue score (MFS) was significantly decreased by treatment with L-carnitine compared with placebo (from 4.5 ± 1.9 to 3.9 ± 1.5 vs. from 4.2 ± 1.8 to 4.6 ± 1.6, respectively; P < 0.01). In the L-carnitine group, 75.0%, 53.6%, and 50.0% of patients showed improvement in the FSS score, PFS, and MFS, respectively, but only 20.0%, 24.0%, and 24.0%, respectively, did so in the placebo group (all P < 0.05). Both the PFS and MFS were significantly improved in patients younger than 50 years and those with free T3 ≥ 4.0 pg/mL by treatment with L-carnitine compared with placebo. Additionally, the MFS was significantly improved in patients taking thyroid hormone after thyroid cancer surgery. These results suggest that L-carnitine supplementation may be useful in alleviating fatigue symptoms in hypothyroid patients, especially in those younger than 50 years and those who have hypothyroidism after thyroidectomy for thyroid cancer (ClinicalTrials.gov: NCT01769157).
Low-power manycore accelerator for personalized biomedical applications
Wearable personal health monitoring systems can offer a cost-effective solution for human healthcare. These systems must provide highly accurate, secure, and quick processing and delivery of vast amounts of data. In addition, wearable biomedical devices are used in inpatient, outpatient, and at-home e-Patient care, and must constantly monitor the patient's biomedical and physiological signals 24/7. These biomedical applications require sampling and processing multiple streams of physiological signals within a strict power and area footprint. The processing typically consists of feature extraction, data fusion, and classification stages that require a large number of digital signal processing and machine learning kernels. In response to these requirements, in this paper, a low-power, domain-specific many-core accelerator named Power Efficient Nano Clusters (PENC) is proposed to map and execute the kernels of these applications. Experimental results show that the manycore is able to reduce energy consumption by up to 80% and 14% for DSP and machine learning kernels, respectively, when optimally parallelized. The performance of the proposed PENC manycore when acting as a coprocessor to an Intel Atom processor is compared with existing commercial off-the-shelf embedded processing platforms including Intel Atom, Xilinx Artix-7 FPGA, and NVIDIA TK1 ARM-A15 with GPU SoC. The results show that the PENC manycore architecture reduces the energy by as much as 10X while outperforming all off-the-shelf embedded processing platforms across all studied machine learning classifiers.
Efficient implementation of sorting on multi-core SIMD CPU architecture
Sorting a list of input numbers is one of the most fundamental problems in the field of computer science in general and high-throughput database applications in particular. Although literature abounds with various flavors of sorting algorithms, different architectures call for customized implementations to achieve faster sorting times. This paper presents an efficient implementation and detailed analysis of MergeSort on current CPU architectures. Our SIMD implementation with 128-bit SSE is 3.3X faster than the scalar version. In addition, our algorithm performs an efficient multiway merge, and is not constrained by the memory bandwidth. Our multi-threaded, SIMD implementation sorts 64 million floating point numbers in less than 0.5 seconds on a commodity 4-core Intel processor. This measured performance compares favorably with all previously published results. Additionally, the paper demonstrates performance scalability of the proposed sorting algorithm with respect to certain salient architectural features of modern chip multiprocessor (CMP) architectures, including SIMD width and core-count. Based on our analytical models of various architectural configurations, we see excellent scalability of our implementation with SIMD width scaling up to 16X wider than current SSE width of 128-bits, and CMP core-count scaling well beyond 32 cores. Cycle-accurate simulation of Intel’s upcoming x86 many-core Larrabee architecture confirms scalability of our proposed algorithm.
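The multiway merge at the heart of the algorithm can be illustrated conceptually in a few lines of Python; this only shows the k-way merge structure, not the SIMD kernels or the SSE-register merge network the paper actually implements:

```python
# k-way merge of pre-sorted runs, the structure behind multiway MergeSort.
import heapq

sorted_runs = [[1, 5, 9], [2, 6, 10], [3, 4, 11]]   # toy sorted chunks
merged = list(heapq.merge(*sorted_runs))
print(merged)   # [1, 2, 3, 4, 5, 6, 9, 10, 11]
```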
The effect of YOCAS©® yoga for musculoskeletal symptoms among breast cancer survivors on hormonal therapy
Up to 50 % of breast cancer survivors on aromatase inhibitor therapy report musculoskeletal symptoms such as joint and muscle pain, significantly impacting treatment adherence and discontinuation rates. We conducted a secondary data analysis of a nationwide, multi-site, phase II/III randomized, controlled, clinical trial examining the efficacy of yoga for improving musculoskeletal symptoms among breast cancer survivors currently receiving hormone therapy (aromatase inhibitors [AI] or tamoxifen [TAM]). Breast cancer survivors currently receiving AI (N = 95) or TAM (N = 72) with no participation in yoga during the previous 3 months were randomized into 2 arms: (1) standard care monitoring and (2) standard care plus the 4-week yoga intervention (2x/week; 75 min/session) and included in this analysis. The yoga intervention utilized the UR Yoga for Cancer Survivors (YOCAS©®) program consisting of breathing exercises, 18 gentle Hatha and restorative yoga postures, and meditation. Musculoskeletal symptoms were assessed pre- and post-intervention. At baseline, AI users reported higher levels of general pain, muscle aches, and total physical discomfort than TAM users (all P ≤ 0.05). Among all breast cancer survivors on hormonal therapy, participants in the yoga group demonstrated greater reductions in musculoskeletal symptoms such as general pain, muscle aches and total physical discomfort from pre- to post-intervention than the control group (all P ≤ 0.05). The severity of musculoskeletal symptoms was higher for AI users compared to TAM users. Among breast cancer survivors on hormone therapy, the brief community-based YOCAS©® intervention significantly reduced general pain, muscle aches, and physical discomfort.
Adding saxagliptin to extended-release metformin vs. uptitrating metformin dosage.
AIM To investigate whether patients taking metformin for type 2 diabetes mellitus (T2DM) have improved glycaemic control without compromising tolerability by adding an agent with a complementary mechanism of action vs. uptitrating metformin. METHODS Adults with T2DM and glycated haemoglobin (HbA1c) between 7.0 and 10.5% receiving metformin extended release (XR) 1500 mg/day for ≥8 weeks were randomized to receive saxagliptin 5 mg added to metformin XR 1500 mg (n = 138) or metformin XR uptitrated to 2000 mg/day (n = 144). Endpoints were change from baseline to week 18 in HbA1c (primary), 120-min postprandial glucose (PPG), fasting plasma glucose (FPG) and the proportion of patients achieving HbA1c <7%. RESULTS At week 18, the adjusted mean reduction from baseline HbA1c was -0.88% for saxagliptin + metformin XR and -0.35% for uptitrated metformin XR (difference, -0.52%; p < 0.0001). For 120-min PPG and FPG, differences in adjusted mean change from baseline between saxagliptin + metformin XR and uptitrated metformin XR were -1.3 mmol/l (-23.32 mg/dl) (p = 0.0013) and -0.73 mmol/l (-13.18 mg/dl) (p = 0.0030), respectively. More patients achieved HbA1c <7.0% with saxagliptin + metformin XR than with uptitrated metformin XR (37.2 vs. 26.1%; p = 0.0459). The proportions of patients experiencing any adverse events (AEs) were generally similar between groups; neither group showed any notable difference in hypoglycaemia or gastrointestinal AEs. CONCLUSION Adding saxagliptin to metformin XR provided superior glycaemic control compared with uptitrating metformin XR without the emergence of additional safety concerns.
A comparison of short-term treatment with inhaled fluticasone propionate and zafirlukast for patients with persistent asthma.
PURPOSE To compare the short-term efficacy and safety of low-dose fluticasone propionate with that of oral zafirlukast therapy for patients previously treated with beta-2-agonists alone, and to evaluate the potential therapeutic benefit of switching from zafirlukast to a low-dose inhaled corticosteroid. SUBJECTS AND METHODS This study consisted of a 4-week randomized, double-blind treatment period followed by a 4-week open-label period. Two hundred ninety-four patients ≥12 years old with asthma previously uncontrolled with beta-2-agonists alone were randomly assigned to treatment with low-dose inhaled fluticasone (88 microg twice daily) or oral zafirlukast (20 mg twice daily). After 4 weeks, all patients discontinued their double-blind therapy and received open-label fluticasone (88 microg twice daily). Outcomes included pulmonary function, asthma symptoms, albuterol use, asthma exacerbations, and adverse events. RESULTS During the double-blind treatment period, fluticasone patients had significantly greater improvements in morning peak flow (29.3 L/min vs. 18.3 L/min), percentage of symptom-free days (19.8% vs. 11.6%), and daily albuterol use (-1.8 puffs per day vs. -1.1 puffs per day) compared with zafirlukast patients (P ≤ 0.025, each comparison). During the open-label treatment period, patients switched from zafirlukast to fluticasone experienced additional improvements in morning peak flow (17.2 L/min), evening peak flow (13.6 L/min), FEV(1) (0.11 liter), and daily albuterol use (-0.9 puffs daily) compared with values obtained at the end of the double-blind treatment period (P ≤ 0.001, each comparison). CONCLUSION Low-dose fluticasone was more effective than zafirlukast in improving pulmonary function and symptoms in patients with persistent asthma. In addition, switching patients from zafirlukast to fluticasone further improved clinical outcomes.
Association of Adiponectin with Adolescent Cardiovascular Health in a Dietary Intervention Study.
OBJECTIVE To investigate whether an infancy-onset, low saturated fat-oriented dietary intervention influences serum adiponectin concentration in adolescents, and to study the association of adiponectin with subclinical markers of vascular health and cardio-metabolic risk factors. STUDY DESIGN The longitudinal, randomized Special Turku Coronary Risk Factor Intervention Project aimed to modify the child's dietary fat quality, replacing saturated fat with unsaturated fat. Serum adiponectin (n = 521) along with weight, height, high-density lipoprotein cholesterol, C-reactive protein (CRP), triglycerides, and insulin were measured at age 15 years. Adiposity was assessed using body mass index, waist circumference, and abdominal fat thickness measured with ultrasound. Metabolic syndrome was defined according to International Diabetes Foundation criteria. Vascular ultrasound measures including carotid intima-media thickness (IMT) were assessed. RESULTS Adiponectin concentrations were similar in the intervention and control groups (P = .16). Adiponectin was associated with carotid IMT (r = -0.13, P = .005), high-density lipoprotein cholesterol (r = 0.18, P < .0001), triglycerides (r = -0.16, P = .0004), CRP (r = -0.10, P = .02), insulin (r = -0.14, P = .002), and adiposity (r = -0.18 to -0.24, P ≤ .0001). When adjusted for adiposity indices, the association with carotid IMT was only marginally diluted (P = .03-.06), but the associations with insulin and CRP became nonsignificant. Adolescents with adiponectin ≤ median had a 4-fold risk of metabolic syndrome compared with peers with adiponectin > median (CI 1.8-10.2, P = .0001). CONCLUSIONS In healthy adolescents, low serum adiponectin is related to carotid IMT and metabolic syndrome. We found no evidence that repeated low saturated fat-oriented dietary counseling would influence serum adiponectin in adolescence. TRIAL REGISTRATION Registered with ClinicalTrials.gov: NCT00223600.
ABRIR at NTCIR-9 GeoTime Task Usage of Wikipedia and GeoNames for Handling Named Entity Information
In the previous NTCIR-8 GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with geographic and temporal constraints. However, failure analysis showed that the identification of named entities, and of relationships between these entities and the query, is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.
Predicting Price Changes in Ethereum
The market capitalization of publicly traded cryptocurrencies is currently above $230 billion (Bovaird, 2017). Bitcoin, the most valuable cryptocurrency, serves primarily as a digital store of value (Van Alstyne, 2014), and its price predictability has been well-studied (Hegazy and Mumford, 2016). However, Ethereum has the second-highest market capitalization and supports much more functionality than Bitcoin. While its price predictability is sparsely covered in published literature, the technology’s additional functionality may cause Ether’s price predictability to differ significantly from that of Bitcoin. These characteristics are outlined in the following subsection; the underlying details of Bitcoin (Nakamoto, 2008) and Ethereum (Buterin, 2013) are elided, as they are described in depth in the cited papers.
An Experiment in Software Prototyping Productivity
We describe the results of an experiment in which several conventional programming languages, together with the functional language Haskell, were used to prototype a Naval Surface Warfare Center (NSWC) requirement for a Geometric Region Server. The resulting programs and development metrics were reviewed by a committee chosen by the Navy. The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++.
Equality and the Inter-American Court of Human Rights: what is the Ideology?
This work analyzes the jurisprudence of the Inter-American Court of Human Rights in connection with equality and non-discrimination. The thesis has two major parts. The first part discusses a possible pattern or standard to predict the results of the Inter-American Court in connection with equality and non-discrimination. The author reviewed cases from this Court in order to predict when the Court will find that certain behaviour violates the rights to equality and non-discrimination of the American Convention on Human Rights. The work claims that the identification by the Court of certain groups as “vulnerable” is the key to finding a violation of the rights to equality and non-discrimination. The second part aims to make a theoretical analysis in connection with these findings. The author studies the possible ideological constructions that can be found in the Court’s jurisprudence. This part uses concepts and ideas developed by Slavoj Žižek in relation to Ideology. After explaining what ideology is for Žižek, the author analyzes whether this ideology as constructed by Žižek is present in the work of the Court. The theoretical analysis is carried out through three sections. The first section analyzes ideological constructions found in the work of the Inter-American Court. The second attempts to explain the ideological constructions found in the Court’s jurisprudence connected with equality and non-discrimination. The last section analyzes the ideology found in the research and the claims put forward in the first section of the thesis.
Mindfulness based stress reduction (MBSR(BC)) in breast cancer: evaluating fear of recurrence (FOR) as a mediator of psychological and physical symptoms in a randomized control trial (RCT)
To investigate the mechanism(s) of action of mindfulness-based stress reduction (MBSR(BC)), including reductions in fear of recurrence and other potential mediators. Eighty-two post-treatment breast cancer survivors (stages 0–III) were randomly assigned to a 6-week MBSR(BC) program (n = 40) or to a usual care group (UC) (n = 42). Psychological and physical variables were assessed as potential mediators at baseline and at 6 weeks. Compared with UC, MBSR(BC) participants experienced favorable changes for five potential mediators, including: (1) change in fear of recurrence problems, which mediated the effect of MBSR(BC) on 6-week change in perceived stress (z = 2.12, p = 0.03) and state anxiety (z = 2.03, p = 0.04); and (2) change in physical functioning, which mediated the effect of MBSR(BC) on 6-week change in perceived stress (z = 2.27, p = 0.02) and trait anxiety (z = 1.98, p = 0.05). MBSR(BC) reduces fear of recurrence and improves physical functioning, which reduces perceived stress and anxiety. Findings support the beneficial effects of MBSR(BC) and provide insight into the possible cognitive mechanism of action.
Genome-wide DNA methylation profile implicates potential cartilage regeneration at the late stage of knee osteoarthritis.
OBJECTIVE The aim of this work was to characterize the genome-wide DNA methylation profile of cartilage from three regions of the tibial plateau isolated from patients with primary knee osteoarthritis (OA), providing the first DNA methylation study that reflects OA progression. METHODS The unique model system was used to section three regions of the tibial plateau: the outer lateral tibial plateau (oLT), the inner lateral tibial plateau (iLT) and the inner medial tibial plateau (iMT) regions, which represented the early, intermediate and late stages of OA, respectively. The genome-wide DNA methylation profile was examined using the Illumina Infinium HumanMethylation450 BeadChip array. Comparisons of the iLT/oLT and iMT/oLT groups were carried out to identify differentially methylated (DM) probes (DMPs) associated with OA progression. DM genes were analyzed to identify gene ontologies (GO), pathways, upstream regulators and networks. RESULTS No significant DMPs were identified in the iLT/oLT group, while 519 DMPs were identified in the iMT/oLT group. Over half of them (68.2%) were hypo-methylated and enriched in enhancers and OpenSea. Upstream regulator analysis identified many microRNAs. DM genes were enriched in transcription factors, especially homeobox genes, and in the Wnt/β-catenin signaling pathway. These genes also showed changes in expression when analyzed with expression profiles generated from previous studies. CONCLUSION Our data suggest that the changes in DNA methylation occurred at the late stage of OA. Pathways and networks enriched in the identified DM genes highlight a potential etiologic mechanism and implicate potential cartilage regeneration at the late stage of knee OA.
Pedestrian Behaviour Monitoring: Methods and Experiences
The investigation of pedestrian spatio-temporal behaviour is of particular interest in many different research fields. Disciplines like travel behaviour research and tourism research, social sciences, artificial intelligence, geoinformation and many others have approached this subject from different perspectives. Depending on the particular research questions, various methods of data collection and analysis have been developed and applied in order to gain insight into specific aspects of human motion behaviour and the determinants influencing spatial activities. In this contribution, we provide a general overview of the methods most commonly used for monitoring and analysing human spatio-temporal behaviour. After discussing frequently used empirical methods of data collection and emphasising their related advantages and limitations, we present seven case studies concerning the collection and analysis of human motion behaviour, each pursuing a different purpose.
Clinical management where medicine meets management. All of a rush.
The time between a myocardial infarction patient arriving at hospital and receiving life-saving drugs is decreasing. Schemes to speed up care include paramedics administering thrombolytic drugs. Hospitals are also finding ways of speeding up the admission of patients.
Short term use of oral corticosteroids and related harms among adults in the United States: population based cohort study
Objective To determine the frequency of prescriptions for short term use of oral corticosteroids, and adverse events (sepsis, venous thromboembolism, fractures) associated with their use. Design Retrospective cohort study and self controlled case series. Setting Nationwide dataset of private insurance claims. Participants Adults aged 18 to 64 years who were continuously enrolled from 2012 to 2014. Main outcome measures Rates of short term use of oral corticosteroids defined as less than 30 days duration. Incidence rates of adverse events in corticosteroid users and non-users. Incidence rate ratios for adverse events within 30 day and 31-90 day risk periods after drug initiation. Results Of 1 548 945 adults, 327 452 (21.1%) received at least one outpatient prescription for short term use of oral corticosteroids over the three year period. Use was more frequent among older patients, women, and white adults, with significant regional variation (all P<0.001). The most common indications for use were upper respiratory tract infections, spinal conditions, and allergies. Prescriptions were provided by a diverse range of specialties. Within 30 days of drug initiation, there was an increase in rates of sepsis (incidence rate ratio 5.30, 95% confidence interval 3.80 to 7.41), venous thromboembolism (3.33, 2.78 to 3.99), and fracture (1.87, 1.69 to 2.07), which diminished over the subsequent 31-90 days. The increased risk persisted at prednisone equivalent doses of less than 20 mg/day (incidence rate ratio 4.02 for sepsis, 3.61 for venous thromboembolism, and 1.83 for fracture; all P<0.001). Conclusion One in five American adults in a commercially insured plan were given prescriptions for short term use of oral corticosteroids during a three year period, with an associated increased risk of adverse events.
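As a worked example of the incidence rate ratio (IRR) reported above, computed from made-up counts (the abstract does not give the underlying counts), with a standard Wald confidence interval on the log scale:

```python
# IRR = (events / person-time in the risk window) / (events / person-time at baseline)
import math

events_risk, persontime_risk = 60, 25_000.0   # hypothetical 30-day window
events_base, persontime_base = 40, 90_000.0   # hypothetical baseline

irr = (events_risk / persontime_risk) / (events_base / persontime_base)
se_log = math.sqrt(1 / events_risk + 1 / events_base)   # SE of log(IRR)
lo, hi = (irr * math.exp(s * 1.96 * se_log) for s in (-1, 1))
print(f"IRR = {irr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```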
Ordered arrays of nanoporous gold nanoparticles
A combination of a "top-down" approach (substrate-conformal imprint lithography) and two "bottom-up" approaches (dewetting and dealloying) enables fabrication of perfectly ordered 2-dimensional arrays of nanoporous gold nanoparticles. The dewetting of Au/Ag bilayers on periodically prepatterned substrates leads to the interdiffusion of Au and Ag and the formation of an array of Au-Ag alloy nanoparticles. The array of alloy nanoparticles is transformed into an array of nanoporous gold nanoparticles by a subsequent dealloying step. Large areas of this new type of material arrangement can be realized with this technique. In addition, the technique allows for control of particle size, particle spacing, and ligament size (or pore size) by varying the period of the structure, the total metal layer thickness, and the thickness ratio of the as-deposited bilayers.
A Modified AES Based Algorithm for Image Encryption
With the fast evolution of digital data exchange, the security of information becomes increasingly important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. In this paper, we analyze the Advanced Encryption Standard (AES) and add a key stream generator (A5/1, W7) to AES to improve encryption performance, mainly for images characterised by reduced entropy. Both techniques have been implemented for experimental purposes. Detailed results in terms of security analysis and implementation are given. A comparative study with traditional encryption algorithms shows the superiority of the modified algorithm. Keywords—Cryptography, Encryption, Advanced Encryption Standard (AES), ECB mode, statistical analysis, key stream generator.
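The combination the abstract describes, a key stream layered onto block encryption, can be illustrated with a short sketch. The generator below is a SHA-256 counter stand-in for the A5/1 or W7 generators named in the paper, and the 16-byte key and ECB mode are illustrative choices, so this is a reading of the idea rather than the authors' implementation.

```python
# Whiten low-entropy image bytes with a key stream before AES-ECB
# encryption. A hash-counter keystream stands in for A5/1 or W7 here.
import hashlib
from Crypto.Cipher import AES  # PyCryptodome

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key via a hash counter."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def encrypt_image(pixels: bytes, key: bytes) -> bytes:
    pad = (-len(pixels)) % 16                    # pad to AES block size
    data = pixels + bytes(pad)
    # Pre-whitening: XOR with the keystream raises apparent entropy,
    # which is what the added generator contributes in the paper.
    whitened = bytes(p ^ k for p, k in zip(data, keystream(key, len(data))))
    return AES.new(key, AES.MODE_ECB).encrypt(whitened)

ciphertext = encrypt_image(b"\x00" * 64, b"0123456789abcdef")  # 16-byte key
```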
Particle Numbers of Lipoprotein Subclasses and Arterial Stiffness among Middle-aged men from the ERA JUMP study
We examined the association between serum lipoprotein subclasses and three measures of arterial stiffness, that is, (i) carotid-femoral pulse wave velocity (cfPWV), which is the gold standard measure of central arterial stiffness, (ii) brachial-ankle PWV (baPWV), which is emerging as a combined measure of central and peripheral arterial stiffness, and (iii) femoral-ankle PWV (faPWV), which is a measure of peripheral arterial stiffness. Among a population-based sample of 701 apparently healthy Caucasian, Japanese American and Korean men aged 40-49 years, concentrations of lipoprotein particles were assessed by nuclear magnetic resonance (NMR) spectroscopy, and PWV was assessed with an automated waveform analyzer (VP2000, Omron, Japan). Multiple linear regressions were performed to analyse the association between each NMR lipoprotein subclass and the PWV measures, after adjusting for cardiovascular risk factors and other confounders. A cutoff of P<0.01 was used for determining significance. All PWV measures had significant correlations with total and small low-density lipoprotein particle number (LDL-P) (all P<0.0001) but not LDL cholesterol (LDL-C) (all P>0.1), independent of race and age. In multivariate regression analysis, no NMR lipoprotein subclass was significantly associated with cfPWV (all P>0.01). However, most NMR lipoprotein subclasses had significant associations with both baPWV and faPWV (P<0.01). In this study of healthy middle-aged men, compared with cfPWV, both baPWV and faPWV had stronger associations with particle numbers of lipoprotein subclasses. Our results may suggest that both baPWV and faPWV are related to arterial stiffness and atherosclerosis, whereas cfPWV may represent arterial stiffness alone.
TORCS, The Open Racing Car Simulator
The open racing car simulator (TORCS [14]) is a modern, modular, highly portable multi-player, multi-agent car simulator. Its high degree of modularity and portability render it ideal for artificial intelligence research. Indeed, a number of research-oriented competitions and papers have already appeared that make use of the TORCS engine. The purpose of this document is to introduce the structure of TORCS to the general artificial intelligence and machine learning community and explain how it is possible to test agents on the platform. TORCS can be used to develop artificially intelligent (AI) agents for a variety of problems. At the car level, new simulation modules can be developed, which include intelligent control systems for various car components. At the driver level, a low-level API gives detailed (but only partial) access to the simulation state. This could be used to develop anything from mid-level control systems to complex driving agents that find optimal racing lines, react successfully in unexpected situations and make good tactical race decisions. Finally, for researchers who like a challenge and are also interested in visual processing, a 3D projection interface is available.
Mixed-Mode S-Parameter Characterization of Differential Structures
Combined differential-mode and common-mode (mixed-mode) scattering parameters (s-parameters) are well adapted to accurate measurements of linear networks at RF and microwave frequencies. The relationships between standard s-parameters measured with a two-port vector network analyzer (VNA) and mixed-mode s-parameters measured with a four-port VNA are derived in this paper. An example differential structure was measured with a standard two-port VNA and a mixed-mode four-port VNA. The correlation of standard s-parameters and mixed-mode s-parameters is presented as well.
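The standard-to-mixed-mode relationship the abstract derives can be written as a similarity transform. The sketch below assumes the common port ordering in which physical ports 1 and 2 form differential pair 1 and ports 3 and 4 form pair 2; other port mappings only permute the rows of M.

```python
# Standard 4-port S-matrix to mixed-mode form, Smm = M S M^-1.
import numpy as np

M = (1 / np.sqrt(2)) * np.array([
    [1, -1, 0,  0],   # differential mode, pair 1
    [0,  0, 1, -1],   # differential mode, pair 2
    [1,  1, 0,  0],   # common mode, pair 1
    [0,  0, 1,  1],   # common mode, pair 2
])

def mixed_mode(S: np.ndarray) -> np.ndarray:
    """Convert a 4x4 single-ended S-matrix to mixed-mode form.

    M is orthogonal, so M^-1 = M.T; the result is ordered as the
    2x2 blocks [Sdd, Sdc; Scd, Scc].
    """
    return M @ S @ M.T

Smm = mixed_mode(np.eye(4, dtype=complex) * 0.1)  # toy reflective network
```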
LPM: lightweight progressive meshes towards smooth transmission of Web3D media over internet
Transmission of Web3D media over the Internet can be slow, especially when downloading huge 3D models through relatively limited bandwidth. Currently, 3D compression and progressive meshes are used to alleviate the problem, but these schemes do not consider similarity among the 3D components, leaving room for improvement in terms of efficiency. This paper proposes a similarity-aware 3D model reduction method, called Lightweight Progressive Meshes (LPM). The key idea of LPM is to search for similar components in a 3D model and reuse them through the construction of a Lightweight Scene Graph (LSG). The proposed LPM offers three significant benefits. First, the size of 3D models can be reduced for transmission with almost no loss of precision relative to the original models. Second, when rendering, decompression is not needed to restore the original model, and instanced rendering can be fully exploited. Third, it is extremely efficient under very limited bandwidth, especially when transmitting large 3D scenes. Performance on real data justifies the effectiveness of our LPM, which improves the state-of-the-art in Web3D media transmission.
Subspace Clustering of Categorical and Numerical Data With an Unknown Number of Clusters
In clustering analysis, data attributes may have different contributions to the detection of various clusters. To solve this problem, the subspace clustering technique has been developed, which aims at grouping the data objects into clusters based on subsets of attributes rather than the entire data space. However, most existing subspace clustering methods are only applicable to either numerical or categorical data, but not both. This paper therefore studies the soft subspace clustering of data with both numerical and categorical attributes (also simply called mixed data). Specifically, an attribute-weighted clustering model based on the definition of object-cluster similarity is presented. Accordingly, a unified weighting scheme for the numerical and categorical attributes is proposed, which quantifies the attribute-to-cluster contribution by taking into account both intercluster difference and intracluster similarity. Moreover, a rival penalized competitive learning mechanism is further introduced into the proposed soft subspace clustering algorithm so that the subspace cluster structure as well as the most appropriate number of clusters can be learned simultaneously in a single learning paradigm. In addition, an initialization-oriented method is also presented, which can effectively improve the stability and accuracy of $k$-means-type clustering methods on numerical, categorical, and mixed data. The experimental results on different benchmark data sets show the efficacy of the proposed approach.
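To make the weighting idea concrete, here is a minimal sketch of an attribute-weighted object-cluster dissimilarity for mixed data: squared differences for numerical attributes, 0/1 mismatches for categorical ones, each scaled by a learned attribute weight. The exponent beta and the weight values are illustrative assumptions; the paper's actual similarity definition and update rules are not reproduced.

```python
# Weighted distance of one mixed-type object to one cluster center.
import numpy as np

def mixed_dissimilarity(x_num, x_cat, center_num, center_cat,
                        w_num, w_cat, beta=2.0):
    """w_num, w_cat: per-attribute weights (summing to 1 overall);
    beta controls how sharply weights concentrate on informative attributes."""
    num_part = np.sum((w_num ** beta) * (x_num - center_num) ** 2)
    cat_part = np.sum((w_cat ** beta) * (x_cat != center_cat).astype(float))
    return num_part + cat_part

d = mixed_dissimilarity(
    np.array([1.0, 0.5]), np.array(["a", "x"]),      # object
    np.array([0.8, 0.4]), np.array(["a", "y"]),      # cluster center
    np.array([0.3, 0.3]), np.array([0.2, 0.2]),      # attribute weights
)
```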
Variational Inference for Crowdsourcing
Crowdsourcing has become a popular paradigm for labeling large datasets. However, it has given rise to the computational task of aggregating the crowdsourced labels provided by a collection of unreliable annotators. We approach this problem by transforming it into a standard inference problem in graphical models, and applying approximate variational methods, including belief propagation (BP) and mean field (MF). We show that our BP algorithm generalizes both majority voting and a recent algorithm by Karger et al. [1], while our MF method is closely related to a commonly used EM algorithm. In both cases, we find that the performance of the algorithms critically depends on the choice of a prior distribution on the workers’ reliability; by choosing the prior properly, both BP and MF (and EM) perform surprisingly well on both simulated and real-world datasets, competitive with state-of-the-art algorithms based on more complicated modeling assumptions.
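The EM connection the abstract draws can be made concrete with a minimal worker-reliability model. The sketch below implements a binary "one-coin" variant, where each worker is correct with a single probability regardless of the true label; this is a simplification for illustration, not the graphical model the paper analyzes.

```python
# One-coin EM for aggregating binary crowdsourced labels.
import numpy as np

def em_crowd(L, n_iter=50):
    """L: (workers x items) matrix with entries in {+1, -1, 0}; 0 = no label.
    Returns posterior P(true label = +1) per item and worker reliabilities."""
    n_workers, n_items = L.shape
    p = 0.7 * np.ones(n_workers)        # worker reliabilities, init > 0.5
    q = 0.5 * np.ones(n_items)
    for _ in range(n_iter):
        # E-step: log-odds that each item's true label is +1.
        log_odds = np.zeros(n_items)
        for j in range(n_workers):
            mask = L[j] != 0
            agree = (L[j][mask] == 1)
            w = np.log(p[j] / (1 - p[j]))
            log_odds[mask] += np.where(agree, w, -w)
        q = 1 / (1 + np.exp(-log_odds))
        # M-step: reliability = expected fraction of correct labels.
        for j in range(n_workers):
            mask = L[j] != 0
            correct = np.where(L[j][mask] == 1, q[mask], 1 - q[mask])
            p[j] = np.clip(correct.mean(), 1e-3, 1 - 1e-3)
    return q, p
```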
What is Social Informatics and Why Does it Matter?
Social informatics is a field that is defined by its topic (and fundamental questions about it) rather than by a family of methods, much like urban studies or gerontology. Social informatics has been a subject of systematic analytical and critical research for the last 25 years. This body of research has developed theories and findings that are pertinent to understanding the design, development, and operation of usable information systems, including intranets, electronic forums, digital libraries and electronic journals.
Enabling the Sharing Economy: Privacy Respecting Contract based on Public Blockchain
Blockchain is a novel way to construct fully distributed systems and has the potential to disrupt many businesses within the sharing economy, including Uber and Airbnb. Specifically, blockchain provides a method to enforce the agreement between a user and the physical property owner without using any trusted party; e.g., if a user pays the agreed money, the blockchain guarantees that he/she has access to the property. While a blockchain based system has many desirable features, it may leak private information of the involved parties due to its openness to the public. To mitigate the privacy concern, we propose a privacy respecting approach for blockchain-based sharing economy applications, which leverages a zero-knowledge scheme. We also analyze the security features and performance of the proposed approach to demonstrate its effectiveness in these applications.
Retrospective correction of MR intensity inhomogeneity by information minimization
In this paper, the problem of retrospective correction of intensity inhomogeneity in magnetic resonance (MR) images is addressed. A novel model-based correction method is proposed, based on the assumption that an image corrupted by intensity inhomogeneity contains more information than the corresponding uncorrupted image. The image degradation process is described by a linear model, consisting of a multiplicative and an additive component which are modeled by a combination of smoothly varying basis functions. The degraded image is corrected by the inverse of the image degradation model. The parameters of this model are optimized such that the information of the corrected image is minimized while the global intensity statistic is preserved. The method was quantitatively evaluated and compared to other methods on a number of simulated and real MR images and proved to be effective, reliable, and computationally attractive. The method can be widely applied to different types of MR images because it solely uses the information that is naturally present in an image, without making assumptions on its spatial and intensity distribution. Moreover, the method requires no preprocessing, parameter setting, or user interaction. Consequently, the proposed method may be a valuable tool in MR image analysis.
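As a rough illustration of the information-minimization idea, the sketch below parameterizes a smooth multiplicative field with a low-order polynomial basis and minimizes the entropy of the corrected image while preserving the mean intensity. It omits the additive component and uses an off-the-shelf optimizer, so it is a reduced reading of the method, not the authors' implementation.

```python
# Entropy-minimizing correction of a smooth multiplicative bias field.
import numpy as np
from scipy.optimize import minimize

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def correct(img):
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    yy = yy / ny - 0.5
    xx = xx / nx - 0.5
    basis = np.stack([xx, yy, xx * yy, xx**2, yy**2])   # smooth basis

    def cost(coef):
        field = 1.0 + np.tensordot(coef, basis, axes=1)
        corrected = img / np.clip(field, 0.1, None)
        corrected *= img.mean() / corrected.mean()      # preserve global statistic
        return entropy(corrected)

    res = minimize(cost, np.zeros(5), method="Nelder-Mead")
    field = 1.0 + np.tensordot(res.x, basis, axes=1)
    return img / np.clip(field, 0.1, None)

corrected = correct(np.random.default_rng(0).normal(10.0, 1.0, (64, 64)))
```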
Knowledge Graph Representation with Jointly Structural and Textual Encoding
The objective of knowledge graph embedding is to encode both entities and relations of knowledge graphs into continuous low-dimensional vector spaces. Previously, most works focused on the symbolic representation of knowledge graphs with structure information, which cannot handle new entities or entities with few facts well. In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities. Specifically, we introduce three neural models to encode the valuable information from the text description of an entity, among which an attentive model can select related information as needed. Then, a gating mechanism is applied to integrate representations of structure and text into a unified architecture. Experiments show that our models outperform the baseline by a margin on link prediction and triplet classification tasks. Source code for this paper will be available on GitHub.
Krivine's classical realisability from a categorical perspective
In a sequence of papers (Krivine 2001; Krivine 2003; Krivine 2009) J.-L. Krivine has introduced his notion of Classical Realizability for classical second order logic and Zermelo-Fraenkel set theory. Moreover, in more recent work (Krivine 2008) he has considered forcing constructions on top of it with the ultimate aim of providing a realizability interpretation for the axiom of choice. The aim of this paper is to show how Krivine’s classical realizability can be understood as an instance of the categorical approach to realizability as started by Hyland in (Hyland 1982) and described in detail in (van Oosten 2008). Moreover, we will give an intuitive explanation of the iteration of realizability as can be found in (Krivine 2008).
Patient-reported outcomes in multiple sclerosis: Relationships among existing scales and the development of a brief measure.
Several patient-reported outcome (PRO) measures are commonly used in multiple sclerosis (MS) research, but the relationship among items across measures is uncertain. We proposed to evaluate the associations between items from a standard battery of PRO measures used in MS research and to develop a brief, reliable and valid instrument by combining these items into a single measure. Subjects (N = 537) enrolled in CLIMB completed a PRO battery that includes the Center for Epidemiologic Studies Depression Scale, Medical Outcomes Study Modified Social Support Survey, Modified Fatigue Impact Scale and Multiple Sclerosis Quality of Life-54. Subjects were randomly divided into two samples: calibration (n = 269) and validation (n = 268). In the calibration sample, an Exploratory Factor Analysis (EFA) was used to identify latent constructs within the battery. The model constructed based on the EFA was evaluated in the validation sample using Confirmatory Factor Analysis (CFA), and reliability and validity were assessed for the final measure. The EFA in the calibration sample revealed an eight-factor solution, and a final model with one second-order factor along with the eight first-order factors provided the best fit. The model combined items from each of the four parent measures, showing important relationships among the parent measures. When the model was fit using the validation sample, the results confirmed the validity and reliability of the model. A brief PRO for MS (BPRO-MS) that combines MS-related psychosocial and quality of life domains can be used to assess overall functioning in mildly disabled MS patients.
Your Chi-Square Test is Statistically Significant: Now What?
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow up a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data from two recent journal articles were used to illustrate these approaches. A call is made for greater consideration of foundational techniques such as the chi-square tests.
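The first of the four approaches, calculating residuals, is easy to demonstrate. The sketch below computes standardized residuals after a significant chi-square test; the contingency table is made up for illustration.

```python
# Standardized residuals flag the cells driving a significant chi-square.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 10],
                     [20, 40]])
chi2, p, dof, expected = chi2_contingency(observed)

# Pearson residuals: values beyond roughly +-2 indicate cells
# contributing substantially to the overall statistic.
residuals = (observed - expected) / np.sqrt(expected)
print(f"chi2={chi2:.2f}, p={p:.4f}")
print(residuals)
```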
Aiming at Theory Building : Adopting and Adapting the Case Study Design
Although the advantages of case study design are widely recognised, its original positivist underlying assumptions may mislead interpretive researchers aiming at theory building. The paper discusses the limitations of the case study design for theory building and explains how the grounded theory systematic process adds to the case study design. The author reflects upon his experience in conducting research on the articulation of both traditional social networks and new virtual networks in six rural communities in Peru, using both case study design and grounded theory in a combined fashion in order to discover an emergent theory.
Apparel Classification with Style
We introduce a complete pipeline for recognizing and classifying people's clothing in natural scenes. This has several interesting applications, including e-commerce, event and activity recognition, online advertising, etc. The stages of the pipeline combine a number of state-of-the-art building blocks such as upper body detectors, various feature channels and visual attributes. The core of our method consists of a multi-class learner based on a Random Forest that uses strong discriminative learners as decision nodes. To make the pipeline as automatic as possible we also integrate automatically crawled training data from the web in the learning process. Typically, multi-class learning benefits from more labeled data. Because the crawled data may be noisy and contain images unrelated to our task, we extend Random Forests to be capable of transfer learning from different domains. For evaluation, we define 15 clothing classes and introduce a benchmark data set for the clothing classification task consisting of over 80,000 images, which we make publicly available. We report experimental results, where our classifier outperforms an SVM baseline with 41.38% vs 35.07% average accuracy on challenging benchmark data.
Work climate, work values and professional commitment as predictors of job satisfaction in nurses.
AIM To investigate the effect of some psychosocial variables on nurses' job satisfaction. BACKGROUND Nurses' job satisfaction is one of the most important factors in determining individuals' intention to stay or leave a health-care organisation. Literature shows a predictive role of work climate, professional commitment and work values on job satisfaction, but their conjoint effect has rarely been considered. METHODS A cross-sectional questionnaire survey was adopted. Participants were hospital nurses and data were collected in 2011. RESULTS Professional commitment and work climate positively predicted nurses' job satisfaction. The effect of intrinsic vs. extrinsic work value orientation on job satisfaction was completely mediated by professional commitment. CONCLUSIONS Nurses' job satisfaction is influenced by both contextual and personal variables, in particular work climate and professional commitment. According to a more recent theoretical framework, work climate, work values and professional commitment interact with each other in determining nurses' job satisfaction. IMPLICATIONS FOR NURSING MANAGEMENT Nursing management must be careful to keep the context of work tuned to individuals' attitude and vice versa. Improving the work climate can have a positive effect on job satisfaction, but its effect may be enhanced by favouring strong professional commitment and by promoting intrinsic more than extrinsic work values.
Towards Automated Analysis of Connectomes: The Configurable Pipeline for the Analysis of Connectomes (C-PAC)
Using web corpus statistics for program analysis
Several program analysis tools - such as plagiarism detection and bug finding - rely on knowing a piece of code's relative semantic importance. For example, a plagiarism detector should not bother reporting two programs that have an identical simple loop counter test, but should report programs that share more distinctive code. Traditional program analysis techniques (e.g., finding data and control dependencies) are useful, but do not say how surprising or common a line of code is. Natural language processing researchers have encountered a similar problem and addressed it using an n-gram model of text frequency, derived from statistics computed over text corpora. We propose and compute an n-gram model for programming languages, computed over a corpus of 2.8 million JavaScript programs we downloaded from the Web. In contrast to previous techniques, we describe a code n-gram as a subgraph of the program dependence graph that contains all nodes and edges reachable in n steps from the statement. We can count n-grams in a program and count the frequency of n-grams in the corpus, enabling us to compute tf-idf-style measures that capture the differing importance of different lines of code. We demonstrate the power of this approach by implementing a plagiarism detector with accuracy that beats previous techniques, and a bug-finding tool that discovered over a dozen previously unknown bugs in a collection of real deployed programs.
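The tf-idf computation the abstract describes can be illustrated at token level. The paper's real units are subgraphs of the program dependence graph; the sketch below substitutes token n-grams so it stays self-contained.

```python
# tf-idf over code n-grams: frequent here, rare in the corpus = distinctive.
import math
from collections import Counter

def ngrams(tokens, n=3):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus = [
    "for i in range ( n ) : total += a [ i ]".split(),
    "for i in range ( n ) : print ( i )".split(),
    "result = quick_sort ( data , lo , hi )".split(),
]
doc_freq = Counter()
for program in corpus:
    doc_freq.update(set(ngrams(program)))

def tfidf(program_tokens):
    """Weight each n-gram by term frequency times inverse document frequency."""
    tf = Counter(ngrams(program_tokens))
    return {
        g: count * math.log(len(corpus) / (1 + doc_freq[g]))
        for g, count in tf.items()
    }

weights = tfidf(corpus[0])   # boilerplate loop headers score near zero
```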
Geo-semantic segmentation
The availability of GIS (Geographical Information System) databases for many urban areas provides a valuable source of information for improving the performance of many computer vision tasks. In this paper, we propose a method which leverages information acquired from GIS databases to perform semantic segmentation of the image along with geo-referencing each semantic segment with its address and geo-location. First, the image is segmented into a set of initial super-pixels. Then, by projecting the information from GIS databases, a set of priors is obtained about the approximate location of the semantic entities such as buildings and streets in the image plane. However, there are significant inaccuracies (misalignments) in the projections, mainly due to inaccurate GPS-tags and camera parameters. In order to address this misalignment issue, we perform data fusion such that it improves the segmentation and GIS projection accuracy simultaneously with an iterative approach. At each iteration, the projections are evaluated and weighted in terms of reliability, and then fused with the super-pixel segmentations. First, segmentation is performed using random walks, based on the GIS projections. Then the global transformation which best aligns the projections to their corresponding semantic entities is computed and applied to the projections to further align them to the content of the image. The iterative approach continues until the projections and segments are well aligned.
Text processing for text-to-speech systems in Indian languages
To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.
A system failure framework for innovation policy design
This article sets out a policy framework for implementing 'system of innovation' (SI)-based strategies. On the basis of a literature review on system failures, the study designs an SI-policy framework that can provide policy makers with practical leads on how to design, analyse and evaluate policy measures in the field of innovation. The functioning of the framework is illustrated on the basis of an evaluation of Dutch cluster policy. From this illustration, it can be concluded that the SI-framework provides helpful leads for policy design and evaluation and renders more specific policy recommendations than the generally used market failure approach.
Generating query substitutions
We introduce the notion of query substitution, that is, generating a new query to replace a user's original search query. Our technique uses modifications based on typical substitutions web searchers make to their queries. In this way the new query is strongly related to the original query, containing terms closely related to all of the original terms. This contrasts with query expansion through pseudo-relevance feedback, which is costly and can lead to query drift. This also contrasts with query relaxation through boolean or TFIDF retrieval, which reduces the specificity of the query. We define a scale for evaluating query substitution, and show that our method performs well at generating new queries related to the original queries. We build a model for selecting between candidates, by using a number of features relating the query-candidate pair, and by fitting the model to human judgments of relevance of query suggestions. This further improves the quality of the candidates generated. Experiments show that our techniques significantly increase coverage and effectiveness in the setting of sponsored search.
Social-Aware Stateless Routing in Pocket Switched Networks
Existing social-aware routing protocols for pocket switched networks make use of information about the social structure of the network, deduced from state information of nodes (e.g., history of past encounters), to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation, which we validate on a real-world trace, that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem of existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.
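The stateless forwarding rule can be sketched in a few lines: a relay decision depends only on the similarity between interest profiles, with no per-node encounter history. The profile vectors and threshold below are illustrative.

```python
# SANE-style forwarding: hand over a message when the encountered node's
# interest profile is similar enough to the destination's profile.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def should_forward(encountered_profile, destination_profile, threshold=0.7):
    """No per-node history is consulted: the decision uses only the two
    interest profiles, which is what makes the scheme stateless."""
    return cosine(encountered_profile, destination_profile) >= threshold

dest = np.array([0.9, 0.1, 0.4])      # destination's interest profile
peer = np.array([0.8, 0.2, 0.5])      # profile of the node just met
forward = should_forward(peer, dest)  # True: profiles are close
```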
Phase 1 clinical results with tandutinib (MLN518), a novel FLT3 antagonist, in patients with acute myelogenous leukemia or high-risk myelodysplastic syndrome: safety, pharmacokinetics, and pharmacodynamics.
Tandutinib (MLN518/CT53518) is a novel quinazoline-based inhibitor of the type III receptor tyrosine kinases: FMS-like tyrosine kinase 3 (FLT3), platelet-derived growth factor receptor (PDGFR), and KIT. Because of the correlation between FLT3 internal tandem duplication (ITD) mutations and poor prognosis in acute myelogenous leukemia (AML), we conducted a phase 1 trial of tandutinib in 40 patients with either AML or high-risk myelodysplastic syndrome (MDS). Tandutinib was given orally in doses ranging from 50 mg to 700 mg twice daily. The principal dose-limiting toxicity (DLT) of tandutinib was reversible generalized muscular weakness, fatigue, or both, occurring at doses of 525 mg and 700 mg twice daily. Tandutinib's pharmacokinetics were characterized by slow elimination, with achievement of steady-state plasma concentrations requiring greater than 1 week of dosing. Western blotting showed that tandutinib inhibited phosphorylation of FLT3 in circulating leukemic blasts. Eight patients had FLT3-ITD mutations; 5 of these were evaluable for assessment of tandutinib's antileukemic effect. Two of the 5 patients, treated at 525 mg and 700 mg twice daily, showed evidence of antileukemic activity, with decreases in both peripheral and bone marrow blasts. Tandutinib at the MTD (525 mg twice daily) should be evaluated more extensively in patients with AML with FLT3-ITD mutations to better define its antileukemic activity.
A Triangulation study: Bahraini nursing students' perceptions of nursing as a career
Background: There is a broad international literature examining the perceptions, experiences and values of nursing students, with very little investigative work from the Gulf region and no published work on the perceptions of student nurses from Bahrain. The literature shows that students have a wide range of pre-existing perceptions about nursing and that those early perceptions have a profound influence on their decision to continue with their nursing studies. Historically, in a context of migration, Bahrain has been attractive to expatriate nurses, and this has created an overreliance on external manpower, to the detriment of developing an indigenous nursing profession. This study aims to identify the perceptions and experiences of student nurses in Bahrain about nursing as a career choice and to generate an understanding of the factors influencing recruitment to nursing from the Bahraini population. Methods: A triangulation research design engaging quantitative and qualitative data collection methods was used in the study. Data were obtained through student nurses' written reflections, self-reporting questionnaires and focus groups collected during their nursing programme. The study participants were the first ever cohort of 38 Bahraini nursing students attending the first private university in Bahrain, where the study took place. Qualitative data were analyzed using Colaizzi's methodology and quantitative data were analyzed using SPSS Version 17. Results: The participants perceived nursing as caring, helping people and a humanitarian job. Nursing was considered to be a tough job and not well accepted socially, with cultural issues impacting on the values attached to nursing as a career choice. Prior to entering nursing, participants used the internet as the most potent source of information, and they were also motivated by their parents and friends to join nursing. Participants stated their commitment to a nursing career and their plans to continue with participation in higher education. Conclusions: Some of the issues raised in the study are reflective of the international literature; however, there are fundamental issues particular to the Gulf region which will require attention in the context of an overall national nursing recruitment strategy.
IN-SILICO INTERACTION STUDIES ON INHIBITORY ACTION OF TETRANORTRITERPENOIDS ON ACTIN
In-silico interaction studies on forty-two tetranortriterpenoids, which include four classes of compounds (azadirachtins, salannins, nimbins and intact limonoids), with actin have been carried out using AutoDock Vina and Surflex-Dock. The docking scores and predicted hydrogen bonds, along with the spatial conformation of the molecules, indicate that actin could be a possible target for insect antifeedant studies, and a good correlation has been obtained between the percentage feeding index (PFI) and the binding energy of these molecules. The enhancement of the activity in the photoproducts and its reduction in microwave products observed in in-vivo studies are well brought out by this study. The study reveals Arg 183 in actin to be the most favoured residue for binding in most compounds, whereas Tyr 69 is favoured additionally for salannin and nimbin types of compounds. In the case of limonoids, Gln 59 seems to have hydrogen bonding interactions with most of the compounds. The present study reveals that the fit for PFI vs. binding energy is better for individual classes of compounds and can be attributed to the binding of ligands with different residues. This comprehensive in-silico analysis of the interaction between actin as a receptor and tetranortriterpenoids may help in understanding the mode of action of bioinsecticides and in designing better lead molecules.
Association between exposure to environmental tobacco smoke and the development of acute coronary syndromes: the CARDIO2000 case-control study.
OBJECTIVE To investigate the association between environmental tobacco smoke (ETS) exposure (at least 30 minutes a day) and the risk of developing acute coronary syndromes (ACS). DESIGN AND SETTING The CARDIO2000 is a case-control study which was conducted in Greece from 2000 to 2001. Cases included 847 individuals with a first event of ACS and 1078 cardiovascular disease-free controls. Cases and controls were frequency matched on age (within three years of age), sex, and region. MAIN OUTCOME MEASURES ACS was defined as a diagnosis of first acute myocardial infarction or unstable angina. Main independent variable: Exposure to ETS was measured by self report as follows: after the second day of hospitalisation for the cases, and at entry for the controls, participants were asked whether they were currently exposed to tobacco smoke from other people (at home and/or work) for more than 30 minutes a day. The responses were categorised into three levels: no exposure, occasional exposure (< 3 times per week), and regular exposure. In addition, participants were asked how many years they had been exposed. Because these were self reported assessments and prone to bias, the results were compared to reports obtained from subjects' relatives or friends using Kendall's tau coefficient, which showed high agreement. RESULTS 731 (86%) of the patients and 605 (56%) of the controls reported current exposure of 30 minutes per day or more to ETS. Among current non-smokers, cases were 47% more likely to report regular exposure to ETS (odds ratio (OR) 1.47, 95% confidence interval (CI) 1.26 to 1.80) compared to controls. Exposure to ETS at work was associated with a greater risk of ACS compared to home exposure (+97% v +33%). The risk of ACS was also raised in active smokers (OR 2.83, 95% CI 2.07 to 3.31) regularly exposed to ETS. CONCLUSIONS This study supports the hypothesis that exposure to ETS increases the risk of developing ACS. The consistency of these findings with the existing totality of evidence presented in the literature supports the role of ETS in the aetiology of ACS.
ENGINEERED CEMENTITIOUS COMPOSITES (ECC) – TAILORED COMPOSITES THROUGH MICROMECHANICAL MODELING
This article provides a brief overview of several classes of fiber reinforced cement based composites and suggests future directions in FRC development. Special focus is placed on the micromechanics based design methodology of strain-hardening cement based composites. As an example, a particular engineered cementitious composite newly developed at the ACE-MRL at the University of Michigan is described in detail with regard to its design, material composition, processing, and mechanical properties. Three potential applications which utilize the unique properties of such composites are cited in this paper, and future research needs are identified.
Signal Processing for Big Data [From the Guest Editors]
The articles in this special section delineate the theoretical and algorithmic underpinnings along with the relevance of signal processing tools to the emerging field of big data and introduce readers to the challenges and opportunities for SP research on (massive-scale) data analytics. The latter entails an extended and continuously refined technological wish list, which is envisioned to encompass high-dimensional, decentralized, parallel, online, and robust statistical signal processing as well as large, distributed, fault-tolerant, and intelligent systems engineering. The goal is to selectively sample a diverse gamut of big data challenges and opportunities through surveys of methodological advances, as well as more focused- and application-oriented contributions chosen on the basis of timeliness, importance, and relevance to signal processing.
Verifying Properties of Parallel Programs: An Axiomatic Approach
An axiomatic method for proving a number of properties of parallel programs is presented. Hoare has given a set of axioms for partial correctness, but they are not strong enough in most cases. This paper defines a more powerful deductive system which is in some sense complete for partial correctness. A crucial axiom provides for the use of auxiliary variables, which are added to a parallel program as an aid to proving it correct. The information in a partial correctness proof can be used to prove such properties as mutual exclusion, freedom from deadlock, and program termination. Techniques for verifying these properties are presented and illustrated by application to the dining philosophers problem.
Stochastic Variance Reduction for Nonconvex Optimization
We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we obtain non-asymptotic rates of convergence of SVRG for nonconvex optimization, showing that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants, showing (theoretical) linear speedup due to minibatching in parallel settings.
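To make the update concrete, here is a minimal SVRG sketch for a least-squares finite sum. The step size, epoch length, and problem are illustrative; the paper's nonconvex analysis applies the same variance-reduced gradient construction.

```python
# SVRG for f(w) = (1/n) * sum_i (x_i . w - y_i)^2.
import numpy as np

def svrg(X, y, lr=0.05, epochs=20, inner=None):
    n, d = X.shape
    inner = inner or n
    w = np.zeros(d)
    grad = lambda w, i: 2 * (X[i] @ w - y[i]) * X[i]   # per-example gradient
    for _ in range(epochs):
        w_snap = w.copy()
        mu = 2 * X.T @ (X @ w_snap - y) / n            # full gradient at snapshot
        for _ in range(inner):
            i = np.random.randint(n)
            # Variance-reduced direction: stochastic gradient at w, corrected
            # by the snapshot's stochastic gradient and the full gradient.
            w -= lr * (grad(w, i) - grad(w_snap, i) + mu)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
w_hat = svrg(X, X @ w_true)   # converges toward w_true
```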
The International Prevalence Study on Physical Activity: results from 20 countries
BACKGROUND Physical activity (PA) is one of the most important factors for improving population health, but no standardised systems exist for international surveillance. The International Physical Activity Questionnaire (IPAQ) was developed for international surveillance. The purpose of this study was to compare population physical activity prevalence across 20 countries. METHODS Between 2002-2004, a standardised protocol using IPAQ was used to assess PA participation in 20 countries [total N = 52,746, aged 18-65 years]. The median survey response rate was 61%. Physical activity levels were categorised as "low", "moderate" and "high". Age-adjusted prevalence estimates are presented by sex. RESULTS The prevalence of "high PA" varied from 21-63%; in eight countries high PA was reported for over half of the adult population. The prevalence of "low PA" varied from 9% to 43%. Males more frequently reported high PA than females in 17 of 20 countries. The prevalence of low PA ranged from 7-41% among males, and 6-49% among females. Gender differences were noted, especially for younger adults, with males more active than females in most countries. Markedly lower physical activity prevalence (a 10% difference) with increasing age was noted in 11 of 19 countries for males, but only in three countries for women. The ways populations accumulated PA differed, with some reporting mostly vigorous intensity activities and others mostly walking. CONCLUSION This study demonstrated the feasibility of international PA surveillance, and showed that IPAQ is an acceptable surveillance instrument, at least within countries. If assessment methods are used consistently over time, trend data will inform countries about the success of their efforts to promote physical activity.
View-based and modular eigenspaces for face recognition
In this work we describe experiments with eigenfaces for recognition and interactive search in a large-scale face database. Accurate visual recognition is demonstrated using a database of O(10^3) faces. The problem of recognition under general viewing orientation is also examined. A view-based multiple-observer eigenspace technique is proposed for use in face recognition under variable pose. In addition, a modular eigenspace description technique is used which incorporates salient features such as the eyes, nose and mouth, in an eigenfeature layer. This modular representation yields higher recognition rates as well as a more robust framework for face recognition. An automatic feature extraction technique using feature eigentemplates is also demonstrated.
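The underlying eigenface machinery is compact enough to sketch: PCA on vectorized faces, then nearest-neighbor matching in the reduced space. The view-based and modular variants in the paper apply the same steps per pose or per facial feature; the data below are random stand-ins for real images.

```python
# Eigenfaces: PCA on vectorized faces plus nearest-neighbor matching.
import numpy as np

def fit_eigenfaces(faces, k=20):
    """faces: (n_images, n_pixels) matrix of training faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data: rows of Vt are the eigenfaces.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:k]
    coords = centered @ eigenfaces.T        # training projections
    return mean, eigenfaces, coords

def recognize(face, mean, eigenfaces, coords):
    proj = (face - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(coords - proj, axis=1)))

rng = np.random.default_rng(1)
train = rng.normal(size=(50, 32 * 32))      # stand-in for real face images
mean, ef, coords = fit_eigenfaces(train)
match = recognize(train[7], mean, ef, coords)   # returns 7
```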
DoA Estimation and Capacity Analysis for 3-D Millimeter Wave Massive-MIMO/FD-MIMO OFDM Systems
With the promise of meeting future capacity demands, 3-D massive-MIMO/full dimension multiple-input-multiple-output (FD-MIMO) systems have gained much interest in recent years. Apart from the huge spectral efficiency gain, 3-D massive-MIMO/FD-MIMO systems can also lead to significant reduction of latency, a simplified multiple access layer, and robustness to interference. However, in order to completely extract the benefits of the system, accurate channel state information is critical. In this paper, a channel estimation method based on direction of arrival (DoA) estimation is presented for 3-D millimeter wave massive-MIMO orthogonal frequency division multiplexing (OFDM) systems. To be specific, the DoA is estimated using the estimation of signal parameters via rotational invariance technique (ESPRIT) method, and the root mean square error of the DoA estimation is analytically characterized for the corresponding MIMO-OFDM system. An ergodic capacity analysis of the system in the presence of DoA estimation error is also conducted, and an optimum power allocation algorithm is derived. Furthermore, it is shown that the DoA-based channel estimation achieves a better performance than the traditional linear minimum mean squared error estimation in terms of ergodic throughput and minimum chordal distance between the subspaces of the downlink precoders obtained from the underlying channel and the estimated channel.
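The ESPRIT step can be sketched for the simplest case, a uniform linear array with half-wavelength spacing; the paper's 3-D setting applies the same rotational-invariance idea per array dimension. Array size, angles, and SNR below are arbitrary.

```python
# ESPRIT DoA estimation for a uniform linear array (d = lambda/2).
import numpy as np

def esprit_doa(X, n_sources):
    """X: (n_antennas, n_snapshots) received matrix. Returns angles in degrees."""
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -n_sources:]                  # signal subspace
    # Rotational invariance between the two staggered subarrays.
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / np.pi))

# Two sources at -20 and 35 degrees, 8-element array, 200 snapshots.
rng = np.random.default_rng(2)
angles = np.radians([-20, 35])
A = np.exp(1j * np.pi * np.outer(np.arange(8), np.sin(angles)))
S = rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))
X = A @ S + 0.1 * (rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200)))
print(np.sort(esprit_doa(X, 2)))   # approximately [-20, 35]
```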
Training Structured Efficient Convolutional Layers
Typical recent neural network designs are primarily convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting. We present a method to express the weight tensor in a convolutional layer using diagonal matrices, discrete cosine transforms (DCTs) and permutations that can be optimised using standard stochastic gradient methods. A network composed of such structured efficient convolutional layers (SECL) outperforms existing low-rank networks and demonstrates competitive computational efficiency.
Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning
In this paper, we propose a new framework for training a vision-based agent for First-Person Shooter (FPS) games, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (the Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)]) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents' information [Lample & Chaplot (2016)]. On a known map, our agent won 10 out of the 11 attended games and was the champion of Track 1 in the ViZDoom AI Competition 2016 by a large margin, with a 35% higher score than the second place.
Struggling to organize across national borders: The case of global resource management in professional service firms
A growing body of research has challenged the commonly accepted view that multinationals have evolved into globally integrated networks, demonstrating instead that such organizations are sites of conflict between competing rationalities emerging from distinctive national institutional contexts. However, this research has neglected professional service firms (PSFs) in spite of them often being held as exemplars of the integrated network model. This article redresses this imbalance by focusing, in particular, on how PSFs seek to coordinate the horizontal flow of their human resources as a mechanism of inter-unit knowledge sharing. Drawing upon interviews in four PSFs, I show that these organizations have developed resource management systems that cannot simply be reduced to national institutional contexts. However, I also demonstrate that the process of resource management is subject to inter-unit conflicts that undermine its raison d'être. I argue that these tensions are symptomatic of both the Anglo-American model of multinational management and cross-national differences in market conditions.
Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology.
Achieving Ultra-Low Latency in 5G Millimeter Wave Cellular Networks
The IMT 2020 requirements of 20 Gb/s peak data rate and 1 ms latency present significant engineering challenges for the design of 5G cellular systems. Systems that make use of the mmWave bands above 10 GHz, where large regions of spectrum are available, are a promising 5G candidate that may be able to rise to the occasion. However, although the mmWave bands can support massive peak data rates, delivering these data rates for end-to-end services while maintaining reliability and ultra-low-latency performance to support emerging applications and use cases will require rethinking all layers of the protocol stack. This article surveys some of the challenges and possible solutions for delivering end-to-end, reliable, ultra-low-latency services in mmWave cellular systems in terms of the MAC layer, congestion control, and core network architecture.
Going the Distance for TLB Prefetching: An Application-Driven Study
The importance of the Translation Lookaside Buffer (TLB) on system performance is well known. There have been numerous prior efforts addressing TLB design issues for cutting down access times and lowering miss rates. However, it was only recently that the first exploration [26] on prefetching TLB entries ahead of their need was undertaken and a mechanism called Recency Prefetching was proposed. There is a large body of literature on prefetching for caches, and it is not clear how they can be adapted (or if the issues are different) for TLBs, how well suited they are for TLB prefetching, and how they compare with the recency prefetching mechanism. This paper presents the first detailed comparison of different prefetching mechanisms (previously proposed for caches), arbitrary stride prefetching and Markov prefetching, for TLB entries, and evaluates their pros and cons. In addition, this paper proposes a novel prefetching mechanism, called Distance Prefetching, that attempts to capture patterns in the reference behavior in a smaller space than earlier proposals. Using detailed simulations of a wide variety of applications (56 in all) from different benchmark suites and all the SPEC CPU2000 applications, this paper demonstrates the benefits of distance prefetching.
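Distance prefetching as described can be modeled in a few lines: a table keyed by the last observed distance (the delta between consecutive addresses) records the distances that followed it, and those distances generate prefetch candidates. Table sizing and replacement policy are ignored in this toy version.

```python
# Toy distance-based prefetcher: predicts future deltas from past deltas.
from collections import defaultdict

class DistancePrefetcher:
    def __init__(self):
        self.table = defaultdict(list)   # distance -> distances seen next
        self.prev_addr = None
        self.prev_dist = None

    def access(self, addr):
        """Record one address; return predicted addresses to prefetch."""
        predictions = []
        if self.prev_addr is not None:
            dist = addr - self.prev_addr
            if self.prev_dist is not None and dist not in self.table[self.prev_dist]:
                self.table[self.prev_dist].append(dist)
            # Prefetch targets: current address plus each distance that has
            # historically followed the current distance.
            predictions = [addr + d for d in self.table[dist]]
            self.prev_dist = dist
        self.prev_addr = addr
        return predictions

pf = DistancePrefetcher()
for a in [0, 4, 8, 12, 16]:            # constant stride of 4
    predicted = pf.access(a)           # learns to predict a+4 after warm-up
```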
Quasicontraction Mappings in Modular Spaces without Δ2-Condition
As a generalization of the Banach contraction principle, Ćirić introduced the concept of quasi-contraction mappings. In this paper, we investigate these kinds of mappings in modular function spaces without the Δ2-condition. In particular, we prove the existence of fixed points and discuss their uniqueness.
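For reference, Ćirić's quasicontraction condition that the paper carries over to modular function spaces can be stated as follows; in the modular setting, the metric d is replaced by a modular, without assuming the Δ2-condition.

```latex
% A self-map T on a metric space (X, d) is a quasicontraction if there
% exists k in [0, 1) such that for all x, y in X:
\[
d(Tx, Ty) \le k \, \max\bigl\{ d(x, y),\ d(x, Tx),\ d(y, Ty),\ d(x, Ty),\ d(y, Tx) \bigr\}.
\]
```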
Procedural Level Design for Platform Games
Although other genres have used procedural level generation to extend gameplay and replayability, platformer games have not yet seen successful level generation. This paper proposes a new four-layer hierarchy to represent platform game levels, with a focus on representing repetition, rhythm, and connectivity. It also proposes a way to use this model to procedurally generate new levels.
Learning Mixtures of Product Distributions Using Correlations and Independence
We study the problem of learning mixtures of distributions, a natural formalization of clustering. A mixture of distributions is a collection of distributions D = {D_1, ..., D_T} and mixing weights {w_1, ..., w_T} such that sum_{i=1}^{T} w_i = 1.
The Growing Importance of Social Skills in the Labor Market
The slow growth of high-paying jobs in the U.S. since 2000 and rapid advances in computer technology have sparked fears that human labor will eventually be rendered obsolete. Yet while computers perform cognitive tasks of rapidly increasing complexity, simple human interaction has proven difficult to automate. In this paper, I show that the labor market increasingly rewards social skills. Since 1980, jobs with high social skill requirements have experienced greater relative growth throughout the wage distribution. Moreover, employment and wage growth has been strongest in jobs that require high levels of both cognitive skill and social skill. To understand these patterns, I develop a model of team production where workers “trade tasks” to exploit their comparative advantage. In the model, social skills reduce coordination costs, allowing workers to specialize and trade more efficiently. The model generates predictions about sorting and the relative returns to skill across occupations, which I test and confirm using data from the NLSY79. The female advantage in social skills may have played some role in the narrowing of gender gaps in labor market outcomes since 1980.
Analyzing College Students' Social Media Communication Apprehension.
Research has shown that college students are heavy users of social media. Yet very little research has examined the connection between college students' social media use and communication apprehension (CA). Due to the shortage of research concerning CA and social media, this study tests the relationship between social media CA and introversion in relation to social media use and social media addiction. To test these relationships, 396 undergraduate students were surveyed. The survey consisted of instruments used to measure individuals' levels of social media use, social media addiction, introversion, and CA. Multiple linear regressions determined that there was a negative relationship of social media CA and introversion with (1) social media use and (2) social media addiction. Results indicated that social media CA was significantly related to social media addiction. These findings suggest that college students might gravitate toward social media rather than face-to-face channels to communicate.
Protection, regeneration and replacement of hair cells in the cochlea: implications for the future treatment of sensorineural hearing loss.
In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the "animal stage" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.
Robotics & Constructivism in Education : the TERECoP project
This paper presents the European project "Teacher Education on Robotics-Enhanced Constructivist Pedagogical Methods" (TERECoP). A first premise of this project concerns the implementation of constructivist-constructionist methods not only in the classroom, but in teacher education as well. A second premise refers to technology-enhanced learning as it occurs in the implementation of different kinds of curriculum innovation in classrooms. A third is related to the emerging need for teaching as a research-based profession and for the creation of a culture in which researchers and teachers can create a shared body of knowledge. Although the role of the teacher is crucial for the successful introduction of robotics in classrooms, only a few projects have been undertaken to train school teachers in using this technology, which is completely new to them. The TERECoP project's aim and ambition is to contribute to filling this gap by suggesting a constructivist model of teacher training in these new technologies. Learning theories, modelling, technology and languages are the main aspects we will have to deal with. The main questions we are currently trying to answer (probably in this order) are: what is "Robotics" at school? Which methodology should we use to apply "Robotics" at school and in teacher education? How can we design educational activities (within students' curricula and teacher training courses) once we have answered the two previous questions? Our work within the TERECoP project tries to give some answers to these questions. The paper describes the starting point of this project, focusing on the context, the aims of the project and the different partner countries' experiences, and outlines the stages through which the project will be implemented, describing each one. Finally some preliminary conclusions are presented.
Effects of Whole-Body Cryotherapy in Comparison with Other Physical Modalities Used with Kinesitherapy in Rheumatoid Arthritis
Whole-body cryotherapy (WBC) has been frequently used to supplement the rehabilitation of patients with rheumatoid arthritis (RA). The aim of this study was to compare the effect of WBC and traditional rehabilitation (TR) on clinical parameters and systemic levels of IL-6 and TNF-α in patients with RA. The study group comprised 25 patients who were subjected to WBC (-110 °C) and 19 patients who underwent a traditional rehabilitation program. Selected clinical variables and levels of interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α) were used to assess the outcomes. After therapy both groups exhibited similar improvement in pain, disease activity, fatigue, time of walking, and the number of steps over a distance of 50 m. Significantly better results were observed only for the HAQ, in the TR group (p < 0.05). However, a similar significant reduction in IL-6 and TNF-α levels was observed in both groups. The results showed positive effects of a 2-week rehabilitation program for patients with RA regardless of the kind of physical procedure applied.
Sequential Data Cleaning: A Statistical Approach
Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods for cleaning sequential data employ a constraint on the speed of value changes and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth, and indeed satisfy the speed constraints, can hardly be identified and repaired. To handle such small errors, in this paper we propose a statistics-based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.
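The likelihood objective lends itself to a compact illustration. The sketch below (Python, with hypothetical toy data and an assumed Gaussian model of speed changes) trades the likelihood of speed changes against fidelity to the observations via a generic optimizer; the paper's exact and approximate algorithms are more sophisticated, so this only conveys the shape of the objective.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy sequence with a small error around index 5 (hypothetical data).
observed = np.array([0.0, 1.0, 2.1, 3.0, 4.0, 5.8, 6.0, 7.1, 8.0])

sigma = 0.5  # assumed std of speed changes (the paper learns this distribution)
lam = 1.0    # assumed trade-off between likelihood and fidelity to observations

def neg_log_likelihood(x):
    speeds = np.diff(x)        # first differences act as "speeds"
    changes = np.diff(speeds)  # speed changes, modeled as N(0, sigma^2)
    ll = norm.logpdf(changes, scale=sigma).sum()
    fidelity = ((x - observed) ** 2).sum()  # keep repairs close to observations
    return -ll + lam * fidelity

# Repair = the sequence maximizing likelihood, seeded at the observations.
res = minimize(neg_log_likelihood, observed, method="L-BFGS-B")
print("repaired:", np.round(res.x, 2))
```

The fidelity term is a stand-in for the repair-cost considerations in the paper; without some such anchor, the optimizer would simply flatten the sequence.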
A semantic network-based evolutionary algorithm for modeling memetic evolution and creativity
We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. To enable this, we establish new formulations of the EA variation operators, crossover and mutation, adapted to work on semantic networks. The algorithm employs commonsense reasoning to ensure that all operations preserve the meaningfulness of the networks, using the ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics introduced by Dawkins; and (2) this differs from existing MAs, where the word "memetic" has been used as a synonym for local refinement after global optimization. To evaluate the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.
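As a rough illustration of a meaning-preserving variation operator, the sketch below (Python with networkx; a hardcoded relation table stands in for real ConceptNet/WordNet queries, and the function name `mutate` is our own) grows a network only along edges licensed by the knowledge base, so every mutant stays meaningful by construction. The project's actual operators and reasoning are considerably richer.

```python
import random
import networkx as nx

# Tiny stand-in for ConceptNet/WordNet lookups (hypothetical entries);
# the actual system queries full commonsense knowledge bases.
COMMONSENSE = {
    "bird":  [("CapableOf", "fly"), ("HasA", "wing"), ("IsA", "animal")],
    "plane": [("CapableOf", "fly"), ("HasA", "wing"), ("IsA", "vehicle")],
    "fish":  [("CapableOf", "swim"), ("IsA", "animal")],
}

def mutate(net: nx.DiGraph) -> nx.DiGraph:
    """Grow the network by attaching one commonsense-licensed edge,
    so the mutation preserves meaningfulness by construction."""
    child = net.copy()
    concept = random.choice([n for n in child if n in COMMONSENSE])
    rel, target = random.choice(COMMONSENSE[concept])
    child.add_edge(concept, target, relation=rel)
    return child

base = nx.DiGraph()
base.add_edge("bird", "animal", relation="IsA")
print(mutate(base).edges(data=True))
```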
Ticagrelor versus prasugrel in acute coronary syndrome patients with high on-clopidogrel platelet reactivity following percutaneous coronary intervention: a pharmacodynamic study.
OBJECTIVES The study aimed to compare the antiplatelet action of ticagrelor with that of prasugrel in acute coronary syndrome (ACS) patients with high on-treatment platelet reactivity (HTPR) while on clopidogrel after percutaneous coronary intervention (PCI). BACKGROUND Newer P2Y12 inhibitors such as prasugrel and ticagrelor provide stronger platelet inhibition than clopidogrel. Both agents are efficacious in patients with HTPR while on clopidogrel, but a direct comparison between them has not yet been reported. METHODS In a prospective, single-center, single-blind study, 44 (of 139 screened, 31.7%) ACS patients with HTPR while on clopidogrel 24 h post-PCI were randomized to either ticagrelor 90 mg twice daily or prasugrel 10 mg once daily for 15 days, with a direct crossover to the alternate treatment for another 15 days. HTPR was defined as platelet reactivity units (PRU) ≥ 235 as assessed by the VerifyNow P2Y12 function assay. RESULTS The primary endpoint of platelet reactivity at the end of the 2 treatment periods was lower for ticagrelor (32.9 PRU, 95% confidence interval [CI]: 18.7 to 47.2) compared with prasugrel (101.3 PRU, 95% CI: 86.8 to 115.7), with a least squares mean difference of -68.3 PRU (95% CI: -88.6 to -48.1; p < 0.001). The secondary endpoint of HTPR rate was 0% for ticagrelor and 2.4% for prasugrel (1 of 42, p = 0.5). No patient exhibited a major bleeding event in either treatment group. CONCLUSIONS In patients with ACS exhibiting HTPR while on clopidogrel 24 h post-PCI, ticagrelor produces significantly greater platelet inhibition compared with prasugrel. (Ticagrelor Versus Prasugrel in Acute Coronary Syndromes After Percutaneous Coronary Intervention; NCT01360437).
Forces acting on a biped robot. Center of pressure-zero moment point
In the area of biped robot research, much progress has been made in the past few years. However, some difficulties remain to be dealt with, particularly the implementation of fast, dynamic walking gaits (in other words, anthropomorphic gaits), especially on uneven terrain. From this perspective, the concepts of center of pressure (CoP) and zero moment point (ZMP) are both clearly useful. In this paper, the two concepts are strictly defined: the CoP with respect to ground-feet contact forces, the ZMP with respect to gravity plus inertia forces. Then, the coincidence of the CoP and ZMP is proven, and related control aspects are examined. Finally, a virtual CoP-ZMP is defined, allowing us to extend the concept to walking on uneven terrain. This paper is a theoretical study. Experimental results are presented in a companion paper, analyzing the evolution of the ground contact forces obtained from a human walker wearing robot feet as shoes.
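For flat, horizontal ground, the CoP can be recovered from a measured contact wrench with the standard force-plate formula. The sketch below (Python, with hypothetical numbers) illustrates that computation; the paper's strict definitions and its uneven-terrain extension go beyond this flat-ground case.

```python
import numpy as np

def center_of_pressure(force, moment):
    """CoP on a flat horizontal ground (z = 0), from the total contact
    force and the contact moment taken about the ground-plane origin.
    Standard force-plate formula: x = -My/Fz, y = Mx/Fz."""
    fx, fy, fz = force
    mx, my, mz = moment
    if fz <= 0:
        raise ValueError("no supporting contact (Fz must be positive)")
    return np.array([-my / fz, mx / fz, 0.0])

# Hypothetical single-support snapshot: a 60 kg robot leaning slightly forward.
F = (5.0, 0.0, 588.6)   # total ground reaction force, N
M = (0.0, -29.4, 0.0)   # moment about the ground origin, N*m
print(center_of_pressure(F, M))  # -> [0.05, 0.0, 0.0]: CoP 5 cm ahead
```

In flat-ground single support this point coincides with the ZMP, which is exactly the coincidence the paper proves from the gravity-plus-inertia side.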
National computerization and road map to information technology research and development
In 1981, Singapore launched a national computerization program. Several educational, training, civil-service computerization and computer-industry promotion programs have been carried out. In 1985, encouraged by the positive results achieved, a new mission was added: to create an R&D culture and the capability for transferring information technology to Singapore, both for productivity gains and to support a successful information and software industry in Singapore for economic growth. Two R&D organizations, the Institute of Systems Science (ISS) and the Information Technology Institute (ITI), were hence established. This paper gives a historical perspective on Singapore's national computerization program. Some of the information technology (IT) R&D projects currently underway are described. The requirements, difficulties, progress and potential impact of these ambitious R&D plans are also discussed.
Unsupervised Language Acquisition
Children are exposed to speech and other environmental evidence, from which they learn language. How do they do this? More specifically, how do children map from complex, physical signals to grammars that enable them to generate and interpret new utterances from their language? This thesis presents a computational theory of unsupervised language acquisition. By computational we mean that the theory precisely defines procedures for learning language, procedures that have been implemented and tested in the form of computer programs. By unsupervised we mean that the theory explains how language learning can take place with no explicit help from a teacher, but only exposure to ordinary spoken or written utterances. The theory requires very little of the learning environment. For example, it predicts that much knowledge of language can be acquired even in situations where the learner has no access to the meaning of utterances. In this way the theory is extremely conservative, making few or no assumptions that are not obviously true of the situation children learn in. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Thus, the goal of the learner is to acquire a grammar under which the evidence is "typical", in a statistical sense. Much of the thesis is devoted to explaining conditions that must hold for this learning strategy to arrive at the desired form of grammar. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars that has many linguistically and statistically desirable properties. In this representation, both utterances and parameters in the grammar are represented by composing parameters. A second contribution is a learning strategy that separates the "content" of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning lexicons (vocabularies) and stochastic grammars from unsegmented text and continuous speech signals, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same linguistic structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language. Thesis Supervisor: Robert C. Berwick (Professor of Computer Science and Engineering).
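As a toy illustration of "fitting a stochastic, generative model of language to the evidence", the sketch below (Python, with a hypothetical corpus and seed lexicon) runs hard EM: it segments unsegmented strings under current word probabilities, then re-estimates those probabilities from the segmentations. The thesis's representation and search machinery are far more general than this.

```python
import math
from collections import defaultdict

# Unsegmented "utterances" and a seed lexicon of candidate words; both
# are hypothetical, whereas the thesis learns lexicons from raw input.
corpus = ["thedogseesthecat", "thecatseesthedog", "thedogthedog"]
lexicon = {"the", "dog", "cat", "sees", "th", "edog", "ecat", "e"}
prob = {w: 1.0 / len(lexicon) for w in lexicon}  # uniform start

def best_segmentation(s):
    """Viterbi: most probable split of s into lexicon words."""
    best = [(0.0, [])] + [(-math.inf, None)] * len(s)
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - 5), i):  # words up to 5 letters
            w = s[j:i]
            if w in lexicon and best[j][1] is not None:
                score = best[j][0] + math.log(prob[w])
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [w])
    return best[len(s)][1]

for _ in range(10):                 # hard-EM iterations
    counts = defaultdict(float)
    for s in corpus:                # E-step: segment with the current model
        for w in best_segmentation(s):
            counts[w] += 1
    total = sum(counts.values())    # M-step: re-estimate word probabilities
    prob = {w: (counts[w] + 0.01) / (total + 0.01 * len(lexicon))
            for w in lexicon}

print(best_segmentation("thedogseesthecat"))
```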
Context-aware citation recommendation
When writing a paper, how often do you want to cite something at a particular place but are unsure which papers to cite? Would you like a recommendation system that can suggest a small number of good candidates for every place where a citation is needed? In this paper, we present our initiative of building a context-aware citation recommendation system. High-quality citation recommendation is challenging: not only should the recommended citations be relevant to the paper under composition, they should also match the local contexts of the places where the citations are made. Moreover, it is far from trivial to model how the topic of the whole paper and the contexts of the citation places should affect the selection and ranking of citations. To tackle the problem, we develop a context-aware approach. The core idea is to design a novel non-parametric probabilistic model which can measure the context-based relevance between a citation context and a document. Our approach can recommend citations for a context effectively. Moreover, it can recommend a set of citations for a paper with high quality. We implemented a prototype system in CiteSeerX. An extensive empirical evaluation in the CiteSeerX digital library against many baselines demonstrates the effectiveness and scalability of our approach.
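To make the task concrete, the sketch below (Python with scikit-learn, hypothetical documents and context) ranks candidate papers against a local citation context using plain TF-IDF cosine similarity, a crude stand-in for the paper's non-parametric probabilistic relevance model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical candidate papers (title + abstract text) and one
# citation context from a manuscript in progress.
candidates = [
    "latent dirichlet allocation topic models for document collections",
    "pagerank link analysis ranking for web search engines",
    "collaborative filtering recommender systems matrix factorization",
]
context = "we rank web pages using link-based importance scores"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(candidates)   # index the candidate pool
ctx_vector = vec.transform([context])        # embed the local context

# Score each candidate against the citation context and rank.
scores = cosine_similarity(ctx_vector, doc_matrix)[0]
for score, doc in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {doc}")
```

A bag-of-words baseline like this captures only local lexical overlap; the point of the paper's model is to combine such local context signals with the topic of the whole manuscript.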
Quantifying Differential Privacy under Temporal Correlations
Differential Privacy (DP) has received increasing attention as a rigorous privacy framework. Many existing studies employ traditional DP mechanisms (e.g., the Laplace mechanism) as primitives, which assume that the data are independent or that adversaries have no knowledge of the data correlations. However, continuously generated data in the real world tend to be temporally correlated, and such correlations can be acquired by adversaries. In this paper, we investigate the potential privacy loss of a traditional DP mechanism under temporal correlations in the context of continuous data release. First, we model the temporal correlations using a Markov model and analyze the privacy leakage of a DP mechanism when adversaries have knowledge of such temporal correlations. Our analysis reveals that the privacy loss of a DP mechanism may accumulate and increase over time; we call this temporal privacy leakage. Second, to measure such privacy loss, we design an efficient algorithm for calculating it in polynomial time. Although the temporal privacy leakage may increase over time, we also show that its supremum may exist in some cases. Third, to bound the privacy loss, we propose mechanisms that convert any existing DP mechanism into one that protects against temporal privacy leakage. Experiments with synthetic data confirm that our approach is efficient and effective.
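As a minimal point of reference, the sketch below (Python, toy data) releases a correlated count stream through the standard Laplace mechanism and reports the classic sequential-composition bound. The paper's actual contribution, quantifying how temporal correlations inflate the per-release leakage under a Markov model, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(value, sensitivity, eps):
    """Standard eps-DP Laplace mechanism for a single release."""
    return value + rng.laplace(scale=sensitivity / eps)

# Continuous release of a temporally correlated count stream (toy data).
true_counts = [100, 103, 104, 102, 105]
eps_per_release = 0.5
noisy = [laplace_mechanism(c, sensitivity=1.0, eps=eps_per_release)
         for c in true_counts]

# Classic sequential composition bounds the loss of releasing all T
# values by T * eps when releases are analyzed independently. The
# paper's sharper point: under known temporal correlations, even the
# per-release leakage grows over time, and it can be computed in
# polynomial time.
naive_bound = len(true_counts) * eps_per_release
print(np.round(noisy, 1), "naive composition bound:", naive_bound)
```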