Neural Networks: Tricks of the Trade
The convergence of back-propagation learning is analyzed so as to explain common phenomena observed by practitioners. Many undesirable behaviors of backprop can be avoided with tricks that are rarely exposed in serious technical publications. This paper gives some of those tricks, and offers explanations of why they work. Many authors have suggested that second-order optimization methods are advantageous for neural net training. It is shown that most “classical” second-order methods are impractical for large neural networks. A few methods are proposed that do not have these limitations.
Evaluating the impact of a computerized surveillance algorithm and decision support system on sepsis mortality
OBJECTIVE We created a system using a triad of change management, electronic surveillance, and algorithms to detect sepsis and deliver highly sensitive and specific decision support to the point of care using a mobile application. The investigators hypothesized that this system would result in a reduction in sepsis mortality. METHODS A before-and-after model was used to study the impact of the interventions on sepsis-related mortality. All patients admitted to the study units were screened per the Institute for Healthcare Improvement Surviving Sepsis Guidelines using real-time electronic surveillance. Sepsis surveillance algorithms that adjusted clinical parameters based on comorbid medical conditions were deployed for improved sensitivity and specificity. Nurses received mobile alerts for all positive sepsis screenings as well as severe sepsis and shock alerts over a period of 10 months. Advice was given for early goal-directed therapy. Sepsis mortality during a control period from January 1, 2011 to September 30, 2013 was used as the baseline for comparison. RESULTS The primary outcome, sepsis mortality, decreased by 53% (P = 0.03; 95% CI, 1.06-5.25). The 30-day readmission rate fell from 19.08% during the control period to 13.21% during the study period (P = 0.05; 95% CI, 0.97-2.52). No significant change in length of hospital stay was noted. The system had an observed sensitivity of 95% and a specificity of 82% for detecting sepsis compared with gold-standard physician chart review. CONCLUSION A program consisting of change management and electronic surveillance with highly sensitive and specific decision support delivered to the point of care resulted in a significant reduction in deaths from sepsis.
Big Data Analytics in Heart Attack Prediction
Introduction: Acute myocardial infarction (heart attack) is one of the deadliest diseases patients face. The key to cardiovascular disease management is to evaluate large collections of datasets, comparing and mining them for information that can be used to predict, prevent, manage and treat chronic diseases such as heart attacks. Big Data analytics, known in the corporate world for its valuable use in controlling, contrasting and managing large datasets, can be applied with much success to the prediction, prevention, management and treatment of cardiovascular disease. Data mining, visualization and Hadoop are technologies or tools of big data in mining the voluminous datasets for
Survey on distance metric learning and dimensionality reduction in data mining
Distance metric learning is a fundamental problem in data mining and knowledge discovery. Many representative data mining algorithms, such as the k-nearest neighbor classifier, hierarchical clustering and spectral clustering, heavily rely on the underlying distance metric for correctly measuring relations among input data. In recent years, many studies have demonstrated, either theoretically or empirically, that learning a good distance metric can greatly improve the performance of classification, clustering and retrieval tasks. In this survey, we overview existing distance metric learning approaches according to a common framework. Specifically, depending on the available supervision information during the distance metric learning process, we categorize each distance metric learning algorithm as supervised, unsupervised or semi-supervised. We compare these different types of metric learning methods and point out their strengths and limitations. Finally, we summarize open challenges in distance metric learning and propose future directions for distance metric learning.
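To make the dependence on the metric concrete, here is a minimal sketch of k-nearest-neighbor classification under a learned Mahalanobis-style metric d(a, b) = ||L(a - b)||. The transform L, the toy data, and the majority-vote rule are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def mahalanobis_knn_predict(X_train, y_train, x_query, L, k=3):
    """Classify x_query by k-NN under the metric d(a, b) = ||L(a - b)||_2.

    L is a (hypothetical) linear transform produced by some metric-learning
    method; L = identity recovers plain Euclidean k-NN.
    """
    diffs = (X_train - x_query) @ L.T          # transform the difference vectors
    dists = np.linalg.norm(diffs, axis=1)      # distances under the learned metric
    nearest = np.argsort(dists)[:k]            # indices of the k closest points
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]           # majority vote

# Toy usage with an identity metric (plain Euclidean distance).
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
print(mahalanobis_knn_predict(X, y, np.array([4.8, 5.0]), L=np.eye(2)))
```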
Will the real Creative City please stand up
Something has been troubling me. It is that, apparently, the cities we live in have become creative. What I mean by this is that the various attempts to tackle social and economic decline in urban areas over the last few decades have recently found new expression through discourses of creativity and the 'creative city'. These discourses have made their way into the centre of urban policy debates; commentaries have also emerged on how the creative city concept has been put into practice by urban authorities (e.g. see Tony Harcup in issue 4(2) of City in relation to Leeds). As a result of this new vocabulary, certain cities such as Barcelona, Cologne, Bologna and even Huddersfield have gained the tag 'creative city'. But what does this rather nebulous term mean? It refers to a new 'method of strategic urban planning and examines how people can think, plan and act creatively' (Landry, 2000, p. xii). A city is being creative, then, when people adopt new and different ways of looking at the problems which they face. In what follows, I want to explore some of the ideas associated with the creative city concept which were explored at an eponymously named conference in Huddersfield in May 2000. I then want to take a closer look at some issues which arise from a more critical analysis of the creative city concept. In particular, I want to explore in more detail the links between ethics, values and creativity, and the types of creativity which are tolerated within the creative city. The most recent Creative Cities conference occurred in Huddersfield, West Yorkshire, near where I was born amongst the mill towns of the east Pennines, which these days I didn't much associate with either cityness or creativity. While people's perceptions of Huddersfield may extend to cloth mills and its role at the centre of textile production during the industrial revolution, Lowry's matchstick people, or perhaps the new McAlpine football stadium, for fewer people, I imagine, it would extend to that of a creative urban milieu. Nevertheless, several significant shifts have occurred in this town and it has experienced a partial cultural, and in some ways economic, turnaround. Indeed, it often comes as a surprise to learn that Huddersfield includes 1660 listed buildings, an amount second only to Bristol and Westminster in the whole of England. While much of the recent upturn may be a result …
Invariant Scattering Convolution Networks
A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.
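As a rough illustration of the cascade described above (wavelet convolution, complex modulus, then averaging), the toy 1D sketch below computes a single first-order scattering-style coefficient with a hand-rolled Gabor filter; the filter parameters and test signals are assumptions for demonstration, not the paper's filter bank.

```python
import numpy as np

def first_order_scattering(x, xi=0.25, sigma=8.0):
    """Toy 1D first-order scattering coefficient: average of |x * psi|,
    where psi is a simple complex Gabor wavelet standing in for one filter
    of the wavelet filter bank used in a scattering network."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    psi = np.exp(2j * np.pi * xi * t) * np.exp(-t**2 / (2 * sigma**2))
    psi -= psi.mean()                              # roughly zero-mean band-pass filter
    u = np.abs(np.convolve(x, psi, mode="same"))   # wavelet modulus layer
    return u.mean()                                # low-pass averaging gives invariance

# Two signals with similar energy but different frequency content.
t = np.linspace(0, 1, 512)
print(first_order_scattering(np.sin(2 * np.pi * 10 * t)))
print(first_order_scattering(np.sin(2 * np.pi * 60 * t)))
```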
Variable Speed Wind Turbines for Power System Stability Enhancement
This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that, using only the hardware already available in variable-speed turbines, improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation.
Transformation from EPC to BPMN
Business process modeling plays a large role in industry, mainly to document, analyze, and optimize workflows. The EPC notation is currently in wide use because of its excellent integration into the ARIS Toolset and the long history of the language. As times change, however, BPMN is becoming popular and interest from industry and companies is growing: it is standardized, more expressive than EPC, and its tool support is increasing rapidly. With the large number of existing EPC process models, industry has a strong need for an automated transformation from EPC to BPMN. This paper specifies a direct approach for transforming EPC process model elements to BPMN, attempting to map every EPC construct to BPMN fully automatically. As described, this does not work out for every process element, so some extensions and semantic rules are also defined.
Voxel-Based Virtual Multi-Axis Machining
Although multi-axis CNC machines are well known for complicated part machining involving few time-consuming set-ups, difficulties arise in generating collision-free tool paths and determining cutting parameters. Machining simulation is used to validate the generated information for multi-axis CNC machines before it is physically downloaded and executed. The objective of this research is to develop a voxel-based simulator for multi-axis CNC machining. The simulator displays the machining process in which the initial workpiece is incrementally converted into the finished part. The voxel representation is used to model efficiently the state of the in-process workpiece, which is generated by successively subtracting tool swept volumes from the workpiece. The voxel representation also simplifies the computation of regularised Boolean set operations and of material removal volumes. By using the material removal rate measured by the number of removed voxels, the feedrate can be adjusted adaptively to increase the machining productivity.
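A minimal sketch of the core update described above: the in-process workpiece and the tool swept volume are boolean voxel grids, material removal is a Boolean subtraction, and the number of removed voxels drives a simple feedrate adjustment. Grid sizes, the target removal rate, and the update rule are illustrative assumptions, not the simulator's actual parameters.

```python
import numpy as np

work = np.ones((50, 50, 50), dtype=bool)        # in-process workpiece as a voxel grid

def machining_step(work, swept, feedrate, target_mrr=5000):
    removed = work & swept                      # voxels cut in this step
    work &= ~swept                              # Boolean subtraction of the swept volume
    mrr = int(removed.sum())                    # removal rate, in voxels per step
    feedrate *= target_mrr / max(mrr, 1)        # slow down if too much material was removed
    return work, feedrate, mrr

# Example swept volume: a slab passing through the top of the workpiece.
swept = np.zeros_like(work)
swept[:, :, 45:] = True
work, feedrate, mrr = machining_step(work, swept, feedrate=100.0)
print(mrr, round(feedrate, 1))                  # 12500 voxels removed -> feedrate reduced
```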
Chronic obstructive pulmonary disease in the older adult: what defines abnormal lung function?
BACKGROUND The Global Initiative on Obstructive Lung Disease stages for chronic obstructive pulmonary disease (COPD) uses a fixed ratio of the post-bronchodilator forced expiratory volume in 1 second (FEV1)/forced vital capacity (FVC) of 0.70 as a threshold. Since the FEV1/FVC ratio declines with age, using the fixed ratio to define COPD may "overdiagnose" COPD in older populations. OBJECTIVE To determine morbidity and mortality among older adults whose FEV1/FVC is less than 0.70 but more than the lower limit of normal (LLN). METHODS The severity of COPD was classified in 4965 participants aged ≥65 years in the Cardiovascular Health Study using these two methods and the age-adjusted proportion of the population who had died or had a COPD-related hospitalisation in up to 11 years of follow-up was determined. RESULTS 1621 (32.6%) subjects died and 935 (18.8%) had at least one COPD-related hospitalisation during the follow-up period. Subjects (n = 1134) whose FEV1/FVC fell between the LLN and the fixed ratio had an increased adjusted risk of death (hazard ratio (HR) 1.3, 95% CI 1.1 to 1.5) and COPD-related hospitalisation (HR 2.6, 95% CI 2.0 to 3.3) during follow-up compared with asymptomatic individuals with normal lung function. CONCLUSION In this cohort, subjects classified as "normal" using the LLN but abnormal using the fixed ratio were more likely to die and to have a COPD-related hospitalisation during follow-up. This suggests that a fixed FEV1/FVC ratio of <0.70 may identify at-risk patients, even among older adults.
Analysis of hereditary and medical risk factors in Achilles tendinopathy and Achilles tendon ruptures: a matched pair analysis
In Achilles tendon injuries, it has been suggested that a pathological continuum may run from the healthy Achilles tendon to Achilles tendinopathy to Achilles tendon rupture. If so, risk factors for both tendinopathy and rupture should be the same. The hypothesis was that hereditary and medical risk factors for Achilles tendinopathy and Achilles tendon rupture are shared to a similar extent in a matched pair analysis. Study design: matched pair study; level of evidence: 3. Participants were recreational sportsmen as well as athletes at national level. 566 questionnaires were analysed. 310 subjects were allocated to 3 groups (A, B, C) after matching the pairs for age, weight, height and gender: (A) healthy Achilles tendons (n = 89, age 39 ± 11 years, BMI 25.1 ± 3.9, females 36%), (B) chronic Achilles tendinopathy (n = 161, age 41 ± 11 years, BMI 24.4 ± 3.7, females 34%), (C) acute Achilles tendon rupture (n = 60, age 40 ± 9 years, BMI 25.2 ± 3.2, females 27%). We found a positive family history of Achilles tendinopathy as a risk factor for Achilles tendinopathy (OR: 4.8, 95% CI: 1.1–21.4; p = 0.023), but not for Achilles tendon rupture (OR: 4.0, 95% CI 0.7–21.1, p = 0.118). Smoking and cardiac diseases had a lower incidence in Achilles tendinopathy than in healthy subjects (both p = 0.001), while cardiovascular medication did not change the risk profile. Identifying risk factors associated with Achilles tendon disorders has high clinical relevance for the development and implementation of prevention strategies and programs. This cross-sectional study identified a positive family history as a significant solitary risk factor for Achilles tendinopathy, increasing the risk fivefold. However, in this matched pair analysis, which excluded age, weight, height and gender as risk factors, no further factor was found to increase the risk for either Achilles tendinopathy or Achilles tendon rupture.
Multimodal Dialogue for Ambient Intelligence and Smart Environments
Ambient Intelligence (AmI) and Smart Environments (SmE) are based on three foundations: ubiquitous computing, ubiquitous communication and intelligent adaptive interfaces [41]. This type of system consists of a series of interconnected computing and sensing devices which surround the user pervasively in his environment and are invisible to him, providing a service that is dynamically adapted to the interaction context, so that users can naturally interact with the system and thus perceive it as intelligent. To ensure such a natural and intelligent interaction, it is necessary to provide an effective, easy, safe and transparent interaction between the user and the system. With this objective, as an attempt to enhance and ease human-to-computer interaction, in recent years there has been an increasing interest in simulating human-to-human communication, employing the so-called multimodal dialogue systems [46]. These systems go beyond both the desktop metaphor and the traditional speech-only interfaces by incorporating several communication modalities, such as speech, gaze, gestures or facial expressions. Multimodal dialogue systems offer several advantages. Firstly, they can make use of automatic recognition techniques to sense the environment, allowing the user to employ different input modalities; these technologies include automatic speech recognition [62], natural language processing [12], face location and tracking [77], gaze tracking [58], lipreading recognition [13], gesture recognition [39], and handwriting recognition [78].
The clinical efficacy of 3 mg/day prednisone in patients with rheumatoid arthritis: evidence from a randomized, double-blind, placebo-controlled withdrawal clinical trial.
A randomised, double-blind, placebo-controlled, withdrawal clinical trial was conducted of prednisone <5 mg/day versus placebo in 31 patients with rheumatoid arthritis (RA). These patients had been treated with long-term 1-4 mg/day of prednisone, 22 with 3 mg/day, in usual clinical care at a single academic clinical setting. Stable clinical status over 12 weeks prior to screening for the trial was documented quantitatively by patient questionnaire scores. The protocol involved three phases: a) 'equivalence' - 1-4 study prednisone 1-mg tablets taken for 12 weeks, to ascertain their efficacy versus the patient's usual prednisone tablets prior to randomisation; b) 'transfer' - substitution of a 1-mg prednisone or identical placebo tablet at a rate of a single 1-mg tablet every 4 weeks (over 0-12 weeks) to the same number as baseline prednisone; c) 'comparison' - observation over 24 subsequent weeks taking the same number of either placebo or prednisone tablets as at baseline. The primary outcome was withdrawal due to patient-reported lack of efficacy versus continuation in the trial for 24 weeks. Thirty-one patients were randomised, 15 to prednisone and 16 to placebo, with 3 administrative discontinuations. In 'intent-to-treat' analyses, 3/15 prednisone and 11/16 placebo participants withdrew (p=0.03). Among participants eligible for the primary outcome of withdrawal for lack of efficacy, 3/13 prednisone versus 11/15 placebo participants withdrew (p=0.02). No meaningful adverse events were reported, as anticipated. These data document statistically significant differences between the efficacy of 1-4 mg prednisone vs. placebo in only 31 patients, which may suggest a robust treatment effect.
Pentostatin and rituximab in the treatment of patients with B-cell malignancies.
Both pentostatin (Nipent) and rituximab (Rituxan) have single-agent activity in B-cell malignancies, including indolent and intermediate-grade non-Hodgkin's lymphoma (NHL). Pentostatin is also active in pretreated patients with chronic lymphocytic leukemia (CLL). In spite of current treatment modalities, few patients with these diseases are cured. The combination of rituximab and pentostatin is an attractive treatment option because both drugs have a limited toxicity profile and can be delivered on an outpatient basis. We describe the design of a phase II multicenter study to evaluate the safety and efficacy of pentostatin in combination with rituximab in patients with previously treated and untreated low-grade NHL and CLL.
A billion keys, but few locks: the crisis of web single sign-on
OpenID and InfoCard are two mainstream Web single sign-on (SSO) solutions intended for Internet-scale adoption. While they are technically sound, the business model of these solutions does not provide content-hosting and service providers (CSPs) with sufficient incentives to become relying parties (RPs). In addition, the pressure from users and identity providers (IdPs) is not strong enough to drive CSPs toward adopting Web SSO. As a result, there are currently over one billion OpenID-enabled user accounts provided by major CSPs, but only a few relying parties. In this paper, we discuss the problem of Web SSO adoption for RPs and argue that solutions in this space must offer RPs sufficient business incentives and trustworthy identity services in order to succeed. We suggest future Web SSO development should investigate and fulfill RPs' business needs, identify IdP business models, and build trust frameworks. Moreover, we propose that Web SSO technology should build identity support into browsers in order to facilitate RPs' adoption.
Neonatal haemoglobinopathy screening in Belgium.
BACKGROUND A neonatal haemoglobinopathy screening programme was implemented in Brussels more than a decade ago and in Liège 5 years ago; the programme was adapted to the local situation. METHODS Neonatal screening for haemoglobinopathies was universal, performed using liquid cord blood and an isoelectric focusing technique. All samples with abnormalities underwent confirmatory testing. Major and minor haemoglobinopathies were reported. Affected children were referred to a specialist centre. A central database in which all screening results were stored was available and accessible to local care workers. A central clinical database to monitor follow-up is under construction. RESULTS A total of 191,783 newborns were screened. One hundred and twenty-three (1:1559) newborns were diagnosed with sickle cell disease, seven (1:27,398) with beta thalassaemia major, five (1:38,357) with haemoglobin H disease, and seven (1:27,398) with haemoglobin C disease. All major haemoglobinopathies were confirmed, and follow-up of the infants was undertaken except for three infants who did not attend the first medical consultation despite all efforts. CONCLUSIONS The universal neonatal screening programme was effective because no case of major haemoglobinopathy was identified after the neonatal period. The affected children received dedicated medical care from birth. The screening programme, and specifically the reporting of minor haemoglobinopathies, has been an excellent health education tool in Belgium for more than 12 years.
Emotion Estimation and Reasoning Based on Affective Textual Interaction
This paper presents a novel approach to Emotion Estimation that assesses the affective content of textual messages. Our main goals are to detect emotion from chat or other dialogue messages and to employ animated agents capable of emotional reasoning based on the textual interaction. In this paper, the emotion estimation module is applied to a chat system, where avatars associated with chat partners act out the assessed emotions of messages through multiple modalities, including synthetic speech and associated affective gestures.
An Experimental Study on Pedestrian Classification
Detecting people in images is key for several important application domains in computer vision. This paper presents an in-depth experimental study on pedestrian classification; multiple feature-classifier combinations are examined with respect to their ROC performance and efficiency. We investigate global versus local and adaptive versus nonadaptive features, as exemplified by PCA coefficients, Haar wavelets, and local receptive fields (LRFs). In terms of classifiers, we consider the popular support vector machines (SVMs), feedforward neural networks, and the k-nearest neighbor classifier. Experiments are performed on a large data set consisting of 4,000 pedestrian and more than 25,000 nonpedestrian (labeled) images captured in outdoor urban environments. Statistically meaningful results are obtained by analyzing performance variances caused by varying training and test sets. Furthermore, we investigate how classification performance and training sample size are correlated. Sample size is adjusted by increasing the number of manually labeled training data or by employing automatic bootstrapping or cascade techniques. Our experiments show that the novel combination of SVMs with LRF features performs best. A boosted cascade of Haar wavelets can, however, reach quite competitive results, at a fraction of computational cost. The data set used in this paper is made public, establishing a benchmark for this important problem.
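For readers who want a concrete starting point, the sketch below trains one of the feature-classifier combinations discussed (PCA coefficients fed into an SVM) using scikit-learn; the random arrays merely stand in for labeled pedestrian and non-pedestrian crops, and all sizes and hyperparameters are assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Illustrative only: random arrays stand in for 18x36 pedestrian / non-pedestrian
# image crops; PCA coefficients feed an SVM, one of the examined combinations.
rng = np.random.default_rng(0)
X_ped = rng.normal(0.6, 0.1, size=(200, 18 * 36))     # fake "pedestrian" crops
X_bg = rng.normal(0.4, 0.1, size=(200, 18 * 36))      # fake "non-pedestrian" crops
X = np.vstack([X_ped, X_bg])
y = np.array([1] * 200 + [0] * 200)

clf = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", C=1.0))
clf.fit(X[::2], y[::2])                                # even rows for training
print("test accuracy:", clf.score(X[1::2], y[1::2]))   # odd rows for testing
```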
Planar distributed structures with negative refractive index
Planar distributed periodic structures of microstrip-line and stripline types, which support left-handed (LH) waves are presented and their negative refractive index (NRI) properties are shown theoretically, numerically, and experimentally. The supported LH wave is fully characterized based on the composite right/left-handed transmission-line theory and the dispersion characteristics, refractive indexes, and Bloch impedance are derived theoretically. In addition, formulas to extract equivalent-circuit parameters from full-wave simulation are given. Open (microstrip) and closed (stripline) structures with a 5 × 5 mm² unit cell operating at approximately 4 GHz are designed and characterized by full-wave finite-element-method simulations. A 20 × 6 unit-cell NRI lens structure interfaced with two parallel-plate waveguides is designed. The focusing/refocusing effect of the lens is observed by both circuit theory and full-wave simulations. Focusing in the NRI lens is also observed experimentally in excellent agreement with circuit theory and numerical predictions. This result represents the first experimental demonstration of NRI property using a purely distributed planar structure.
The mole concept using MS Excel
A program has been developed to assist chemistry students in understanding the mole concept. The mole concept is a fundamental concept in chemistry that students need to understand before they can solve problems such as calculating the number of atoms in an element, the mass of a substance, or the stoichiometric amounts of reactants or products in a given chemical reaction. This program can calculate the number of moles of a substance, the empirical formula of a compound, and solve problems related to reaction stoichiometry. In addition, there are short notes, a glossary of terms and a quiz in the form of true/false and multiple-choice questions. Another feature of this program is the periodic table of elements, where the physical properties of the elements can be displayed. The program is written in Visual Basic using the Visual Basic editor of the Microsoft Excel software. The data required for developing this program are stored in Excel worksheets and the user interface is linked to the data using Visual Basic programming. The program received positive feedback in a perception survey of 50 students.
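To illustrate the kind of calculation the program automates (n = m / M, and atom counts via Avogadro's number), here is a minimal sketch; the small molar-mass table and the example quantities are assumptions for demonstration, and the program itself is written in Visual Basic rather than Python.

```python
# Minimal sketch of mole-concept calculations, assuming a small table of molar masses (g/mol).
MOLAR_MASS = {"H": 1.008, "C": 12.011, "O": 15.999, "Na": 22.990, "Cl": 35.453}
AVOGADRO = 6.022e23

def moles_from_mass(mass_g, molar_mass_g_per_mol):
    """n = m / M"""
    return mass_g / molar_mass_g_per_mol

def atoms_in_sample(mass_g, element):
    """Number of atoms = n * N_A for a monatomic sample."""
    return moles_from_mass(mass_g, MOLAR_MASS[element]) * AVOGADRO

# Example: moles in 58.44 g of NaCl and atoms in 12 g of carbon.
m_nacl = MOLAR_MASS["Na"] + MOLAR_MASS["Cl"]
print(round(moles_from_mass(58.44, m_nacl), 3))   # ~1.0 mol
print(f"{atoms_in_sample(12.0, 'C'):.3e}")        # ~6.02e+23 atoms
```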
Clinical and Biological Characterization of Skin Pigmentation Diversity and Its Consequences on UV Impact
Skin color diversity is the most variable and noticeable phenotypic trait in humans, resulting from constitutive pigmentation variability. This paper reviews the characterization of skin pigmentation diversity with a focus on the most recent data on the genetic basis of skin pigmentation, and the various methodologies for skin color assessment. Then, melanocyte activity and the amount, type and distribution of melanins, which are the main drivers of skin pigmentation, are described. Paracrine regulators of the melanocyte microenvironment are also discussed. Skin response to sun exposure is also highly dependent on color diversity. Thus, sensitivity to solar wavelengths is examined in terms of acute effects such as sunburn/erythema or induced pigmentation, but also long-term consequences such as skin cancers, photoageing and pigmentary disorders. Finally, the more pronounced sun sensitivity of lighter or darker skin types, depending on the detrimental effect and the wavelengths involved, is reviewed.
Capturing user reading behaviors for personalized document summarization
We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of several peer algorithms and software packages. The results of our comparative study show that our algorithm produces better personalized document summaries than all the other methods, in that the summaries it generates better satisfy a user's personal preferences.
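As a toy illustration of preference-aware sentence selection (not the authors' model), the sketch below scores sentences with a generic salience term plus a personalization term derived from hypothetical behavioral signals such as gaze duration and an expression-based weight; all weights and signals are assumptions.

```python
# Toy sketch: behaviour maps sentence index -> (gaze_seconds, expression_weight),
# standing in for the reading-behaviour signals described above.
def personalized_summary(sentences, behaviour, top_k=2):
    scores = []
    for i, sent in enumerate(sentences):
        salience = len(set(sent.lower().split()))          # crude content proxy
        gaze, expr = behaviour.get(i, (0.0, 0.0))
        scores.append((salience + 2.0 * gaze + 5.0 * expr, i))
    chosen = sorted(i for _, i in sorted(scores, reverse=True)[:top_k])
    return [sentences[i] for i in chosen]                  # keep original order

docs = ["The market fell sharply today.",
        "Analysts blamed rising rates.",
        "In sports, the local team won."]
# The user lingered on sentence 1, so it is boosted into the summary.
print(personalized_summary(docs, {1: (3.5, 0.8)}, top_k=2))
```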
Design of a multi-DOF cable-driven mechanism of a miniature serial manipulator for robot-assisted minimally invasive surgery
While multi-fingered robotic hands have been developed for decades, none has been used for surgical operations. μAngelo is an anthropomorphic master-slave system for teleoperated robot-assisted surgery. As part of this system, this paper focuses on its slave instrument, a miniature three-digit hand. The design of the mechanism of such a manipulator poses a challenge due to the required miniaturization and the many active degrees of freedom. As the instrument has a human-centered design, its relation to the human hand is discussed. Two ways of routing its cable-driven mechanism are investigated and the method of deriving the input-output functions that drive the mechanism is presented.
PIK3CA-related overgrowth spectrum (PROS): diagnostic and testing eligibility criteria, differential diagnosis, and evaluation.
Somatic activating mutations in the phosphatidylinositol-3-kinase/AKT/mTOR pathway underlie heterogeneous segmental overgrowth phenotypes. Because of the extreme differences among patients, we sought to characterize the phenotypic spectrum associated with different genotypes and mutation burdens, including a better understanding of associated complications and natural history. Historically, the clinical diagnoses in patients with PIK3CA activating mutations have included Fibroadipose hyperplasia or Overgrowth (FAO), Hemihyperplasia Multiple Lipomatosis (HHML), Congenital Lipomatous Overgrowth, Vascular Malformations, Epidermal Nevi, Scoliosis/Skeletal and Spinal (CLOVES) syndrome, macrodactyly, Fibroadipose Infiltrating Lipomatosis, and the related megalencephaly syndromes, Megalencephaly-Capillary Malformation (MCAP or M-CM) and Dysplastic Megalencephaly (DMEG). A workshop was convened at the National Institutes of Health (NIH) to discuss and develop a consensus document regarding diagnosis and treatment of patients with PIK3CA-associated somatic overgrowth disorders. Participants in the workshop included a group of researchers from several institutions who have been studying these disorders and have published their findings, as well as representatives from patient-advocacy and support groups. The umbrella term of "PIK3CA-Related Overgrowth Spectrum (PROS)" was agreed upon to encompass both the known and emerging clinical entities associated with somatic PIK3CA mutations including macrodactyly, FAO, HHML, CLOVES, and related megalencephaly conditions. Key clinical diagnostic features and criteria for testing were proposed, and testing approaches summarized. Preliminary recommendations for a uniform approach to assessment of overgrowth and molecular diagnostic testing were determined. Future areas to address include the surgical management of overgrowth tissue and vascular anomalies, the optimal approach to thrombosis risk, and the testing of potential pharmacologic therapies.
3-D Thermal Component Model for Electrothermal Analysis of Multichip Power Modules With Experimental Validation
This paper presents for the first time a full three-dimensional (3-D), multilayer, and multichip thermal component model, based on finite differences, with asymmetrical power distributions for dynamic electrothermal simulation. Finite difference methods (FDMs) are used to solve the heat conduction equation in three dimensions. The thermal component model is parameterized in terms of structural and material properties so it can be readily used to develop a library of component models for any available power module. The FDM model is validated with a full analytical Fourier series-based model in two dimensions. Finally, the FDM thermal model is compared against measured data acquired from a newly developed high-speed transient coupling measurement technique. By using the device threshold voltage as a time-dependent temperature-sensitive parameter (TSP), the thermal transient of a single device, along with the thermal coupling effect among nearby devices sharing common direct bond copper (DBC) substrates, can be studied under a variety of pulsed power conditions.
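To make the finite-difference idea concrete, here is a deliberately simplified 1D analogue of the heat-conduction update (the paper's model is fully 3D, multilayer, and parameterized by structural and material properties); the grid size, diffusivity, and boundary treatment below are illustrative assumptions chosen to keep the explicit scheme stable.

```python
import numpy as np

# 1D explicit finite-difference step for dT/dt = alpha * d2T/dx2,
# a toy stand-in for the 3D heat-conduction solver described above.
def fdm_step(T, alpha, dx, dt):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn                                    # boundary nodes held fixed

T = np.full(51, 25.0)                            # 25 C substrate
T[25] = 125.0                                    # localized heat pulse under a chip
alpha, dx, dt = 1.2e-4, 1e-3, 2e-3               # stable: alpha*dt/dx^2 = 0.24 < 0.5
for _ in range(200):
    T = fdm_step(T, alpha, dx, dt)
print(round(T[25], 2), round(T[30], 2))          # the pulse spreads to neighbouring nodes
```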
Social Bias in Elicited Natural Language Inferences
We analyze the Stanford Natural Language Inference (SNLI) corpus in an investigation of bias and stereotyping in NLP data. The human-elicitation protocol employed in the construction of the SNLI makes it prone to amplifying bias and stereotypical associations, which we demonstrate statistically (using pointwise mutual information) and with qualitative examples.
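For concreteness, the snippet below computes the pointwise mutual information statistic mentioned above, PMI(x, y) = log2( p(x, y) / (p(x) p(y)) ), over a made-up list of premise-word/hypothesis-word co-occurrences; the example pairs are illustrative, not drawn from SNLI.

```python
import math
from collections import Counter

def pmi(pairs, x, y):
    """PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ) over observed pairs."""
    n = len(pairs)
    px = sum(1 for a, _ in pairs if a == x) / n
    py = sum(1 for _, b in pairs if b == y) / n
    pxy = Counter(pairs)[(x, y)] / n
    return math.log2(pxy / (px * py)) if pxy > 0 else float("-inf")

# Hypothetical (premise-word, hypothesis-word) co-occurrences.
pairs = [("woman", "cooking")] * 6 + [("woman", "working")] * 2 + \
        [("man", "cooking")] * 2 + [("man", "working")] * 6
print(round(pmi(pairs, "woman", "cooking"), 2))   # > 0: over-associated
print(round(pmi(pairs, "man", "cooking"), 2))     # < 0: under-associated
```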
Assessing the Value of Investments in Government Interoperability
Government investments in enhancing the interoperability of ICT systems have the potential to improve services and help governments respond to the diverse and often incompatible needs and interests of individual citizens, organizations, and society at large. These diverse needs and interests encompass a broad range of value propositions and demands that can seldom be met by single programs or assessed by simple metrics. The diversity of stakeholder needs and the complexity inherent in interoperable systems for connected government require an architecture that is up to the task. Such an architecture must include the reference models and components that can accommodate and integrate large portfolios of applications and support multiple kinds of performance assessments. The value propositions that underlie the architecture’s performance assessment or reference model are fundamental. The propositions must be broad enough to span the full scope of the government program’s goals, a substantial challenge. In recognition of that challenge, this chapter puts forward two perspectives for assessing the value of interoperable ICT investments, incorporating outcomes beyond financial metrics. The first is the network value approach to assessment of investments in interoperable ICT systems for government. The second is the public value framework developed by the Center for Technology in Government, which expands on the network value approach to include a broader range of public value outcomes. These approaches are illustrated in two case studies: the I-Choose project designed to produce interoperable government and private sector data about a specific agricultural market and the government of Colombia’s interoperability efforts with expanded metrics based on the expansion of interoperability networks. DOI: 10.4018/978-1-4666-1824-4.ch019
Creating Rapport with Virtual Agents
Recent research has established the potential for virtual characters to establish rapport with humans through simple contingent nonverbal behaviors. We hypothesized that the contingency, not just the frequency, of positive feedback is crucial when it comes to creating rapport. The primary goal in this study was evaluative: can an agent generate behavior that engenders feelings of rapport in human speakers, and how does this compare to human generated feedback? A secondary goal was to answer the question: is contingency (as opposed to frequency) of agent feedback crucial when it comes to creating feelings of rapport? Results suggest that contingency matters when it comes to creating rapport and that agent-generated behavior was as good as human listeners in creating rapport. A “virtual human listener” condition performed worse than other conditions.
A Digital Input Controller for Audio Class-D Amplifiers with 100W 0.004% THD+N and 113dB DR
A digital input controller for audio class-D amplifiers is presented. The controller utilizes a specially configured integrated DAC and power stage feedback loop to suppress distortion components coming from power-stage switching while providing digital input capability. The class-D amplifier system with the controller and an existing power stage achieves 113 dB DR, 0.0018% THD+N with 10 W output power, and 0.004% THD+N with 100 W output power into a 4 Ω load.
Microstepping Mode for Stepper Motor Control
The paper presents a high-performance system for stepper motor control in microstepping mode, designed and implemented with the L292 specialized integrated circuit made by the SGS-THOMSON Microelectronics company. The microstepping control system improves positioning accuracy and eliminates low-speed ripple and resonance effects in a stepper motor electrical drive.
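The sketch below illustrates the basic principle behind microstepping: the two motor phases are driven with quadrature (sine/cosine) current references so the rotor settles at intermediate positions between full steps. The step resolution and current values are illustrative assumptions, not L292 register settings or the paper's design values.

```python
import math

def phase_currents(microstep, steps_per_cycle=32, i_max=1.0):
    """Quadrature current references for phases A and B at a given microstep."""
    theta = 2 * math.pi * microstep / steps_per_cycle
    return i_max * math.cos(theta), i_max * math.sin(theta)   # (phase A, phase B)

# Print one electrical quarter-cycle of microstep current setpoints.
for k in range(0, 9):
    ia, ib = phase_currents(k)
    print(f"microstep {k}: Ia={ia:+.3f}  Ib={ib:+.3f}")
```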
Software validation using power profiles
While historically software validation focused on the functional requirements, recent approaches also encompass the validation of quality requirements; for example, system reliability, performance or usability. Application development for mobile platforms opens an additional area of quality: power consumption. In PDAs or mobile phones, power consumption varies depending on the hardware resources used, making it possible to specify and validate correct or incorrect executions. Consider an application that downloads a video stream from the network and displays it on the mobile device's display. In the test scenario the viewing of the video is paused at a certain point. If the specification does not allow video prefetching, the user expects the network card activity to stop when the video is paused. How can a test engineer check this expectation? Simply running a test suite or even tracing the software execution does not detect the network activity. However, the extraneous network activity can be detected by power measurements and power model application (Figure 1). Tools to find the power inconsistencies and to validate software from the energy point of view are needed.
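A minimal sketch of that idea: compare a measured power trace for the "video paused" interval against a simple model of expected idle consumption and flag sustained excursions, which would reveal the extraneous network activity described above. The sample values, thresholds, and detection rule are assumptions for illustration, not the paper's power model.

```python
# Flag time indices where measured power stays above the modeled idle level
# for several consecutive samples (a sustained excess, not a single spike).
def flag_power_anomalies(trace_mw, expected_idle_mw=350, margin_mw=100, min_run=3):
    anomalies, run = [], 0
    for t, p in enumerate(trace_mw):
        run = run + 1 if p > expected_idle_mw + margin_mw else 0
        if run == min_run:
            anomalies.append(t - min_run + 1)    # start of the anomalous interval
    return anomalies

paused_trace = [360, 355, 900, 910, 905, 890, 370, 365]   # mW samples while "paused"
print(flag_power_anomalies(paused_trace))                  # -> [2]: network still active
```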
A Compact Microstrip Slot Triple-Band Antenna for WLAN/WiMAX Applications
A compact triple-band microstrip slot antenna for WLAN/WiMAX applications is proposed in this letter. This antenna has a simpler structure than other antennas designed to realize triple-band characteristics. It is composed simply of a microstrip feed line, a substrate, and a ground plane on which some simple slots are etched. To validate the design, a prototype is fabricated and measured. The experimental data show that the antenna can provide three impedance bandwidths of 600 MHz centered at 2.7 GHz, 430 MHz centered at 3.5 GHz, and 1300 MHz centered at 5.6 GHz.
Python and Java: The Best of Both Worlds
This paper describes a new working implementation of the Python language, built on top of the Java language and run-time environment. This is in contrast to the existing implementation of Python, which has been built on top of the C language and run-time environment. Implementing Python in Java has a number of limitations when compared to the current implementation of Python in C. These include about 1.7X slower performance, portability limited by Java VM availability, and lack of compatibility with existing C extension modules. The advantages of Java over C as an implementation language include portability of binary executables, object-orientation in the implementation language to match object-orientation in Python, true garbage collection, run-time exceptions instead of segmentation faults, and the ability to automatically generate wrapper code for arbitrary Java libraries.
Modeling readability to improve unit tests
Writing good unit tests can be tedious and error prone, but even once they are written, the job is not done: Developers need to reason about unit tests throughout software development and evolution, in order to diagnose test failures, maintain the tests, and to understand code written by other developers. Unreadable tests are more difficult to maintain and lose some of their value to developers. To overcome this problem, we propose a domain-specific model of unit test readability based on human judgements, and use this model to augment automated unit test generation. The resulting approach can automatically generate test suites with both high coverage and also improved readability. In human studies users prefer our improved tests and are able to answer maintenance questions about them 14% more quickly at the same level of accuracy.
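As a rough illustration of what a feature-based readability model might look like (the actual model in the paper is fit to human judgements over a richer, domain-specific feature set), the sketch below scores a candidate test from a few surface features with hand-picked weights; every feature, weight, and the example test are assumptions.

```python
# Hypothetical linear readability model over simple surface features of a test;
# higher scores would be preferred when choosing among generated candidates.
FEATURES = {
    "line_count": -0.10,        # longer tests tend to read worse
    "max_line_length": -0.02,
    "identifier_length": 0.05,  # descriptive names tend to read better
    "magic_numbers": -0.30,
}

def readability(test_source: str) -> float:
    lines = test_source.strip().splitlines()
    idents = [w for w in test_source.replace("(", " ").split() if w.isidentifier()]
    feats = {
        "line_count": len(lines),
        "max_line_length": max(len(l) for l in lines),
        "identifier_length": sum(map(len, idents)) / max(len(idents), 1),
        "magic_numbers": sum(tok.isdigit() for tok in test_source.split()),
    }
    return sum(FEATURES[k] * v for k, v in feats.items())

candidate = "def test_withdraw_reduces_balance():\n    account = Account(100)\n    account.withdraw(30)\n    assert account.balance == 70"
print(round(readability(candidate), 2))
```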
An experimental study of interactions between droplets and a nonwetting microfluidic capillary.
We present a detailed experimental study of water drops coming into contact with the end of vertical polytetrafluoroethylene (PTFE) capillary tubes. The drops, supported on a superhydrophobic substrate, were between 0.06 and 1.97 mm in radius, and the inner radius of the vertical tube was 0.15 mm. These experiments expand on our recent work, which demonstrated that small water droplets can spontaneously penetrate non-wetting capillaries, driven by the action of Laplace pressure within the droplet, and that the dynamics of microfluidic capillary uptake are strongly dependent on the size of the incident drop. Here we quantitatively bound the critical drop radius at which droplets can penetrate a pre-filled capillary to the narrow range between 0.43 and 0.50 mm. This value is consistent with a water-PTFE contact angle between 107.8 degrees and 110.6 degrees. Capillary uptake dynamics were not significantly affected by the initial filling height, but other experimental factors have been identified as important to the dynamics of this process. In particular, interactions between the droplet, the substrate and the tubing are unavoidable prior to and during droplet uptake in a real microfluidic system. Such interactions are classified and discussed for the experimental set-up used, and the difficulties and requirements for droplet penetration of a dry capillary are outlined. These results are relevant to research into microfluidic devices, inkjet printing, and the penetration of fluids in porous materials.
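As a rough consistency check of the quoted numbers (not the authors' analysis), equating the drop's Laplace pressure 2*gamma/R with the entry pressure of a non-wetting tube, -2*gamma*cos(theta)/r, gives a critical drop radius R = r/(-cos(theta)); with the 0.15 mm tube radius and the contact angles above, this lands in the reported 0.43-0.50 mm window.

```python
import math

# Textbook Young-Laplace balance (an assumption of this sketch):
# penetration requires 2*gamma/R > -2*gamma*cos(theta)/r, i.e. R < r / (-cos(theta)).
r_tube_mm = 0.15
for theta_deg in (107.8, 110.6):
    r_crit_mm = r_tube_mm / (-math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg} deg -> critical drop radius ~ {r_crit_mm:.2f} mm")
# Prints ~0.49 mm and ~0.43 mm, bracketing the 0.43-0.50 mm range reported above.
```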
Effects of a whey protein supplementation on intrahepatocellular lipids in obese female patients.
BACKGROUND & AIMS High protein diets have been shown to improve hepatic steatosis in rodent models and in high-fat fed humans. We therefore evaluated the effects of a protein supplementation on intrahepatocellular lipids (IHCL) and fasting plasma triglycerides in obese non-diabetic women. METHODS Eleven obese women received a 60 g/day whey protein supplement (WPS) for 4 weeks, while otherwise nourished on a spontaneous diet. IHCL concentrations, visceral body fat, total liver volume (MR), fasting total triglyceride and cholesterol concentrations, glucose tolerance (standard 75 g OGTT), insulin sensitivity (HOMA IS index), creatinine clearance, blood pressure and body composition (bio-impedance analysis) were assessed before and after the 4-week WPS. RESULTS IHCL were positively correlated with visceral fat and total liver volume at inclusion. WPS significantly decreased IHCL by 20.8 ± 7.7%, fasting total TG by 15 ± 6.9%, and total cholesterol by 7.3 ± 2.7%. WPS slightly increased fat-free mass from 54.8 ± 2.2 kg to 56.7 ± 2.5 kg (p = 0.005). Visceral fat, total liver volume, glucose tolerance, creatinine clearance and insulin sensitivity were not changed. CONCLUSIONS WPS improves hepatic steatosis and plasma lipid profiles in obese non-diabetic patients, without adverse effects on glucose tolerance or creatinine clearance. TRIAL NUMBER NCT00870077, ClinicalTrials.gov.
Transanal irrigation for the treatment of neuropathic bowel dysfunction.
INTRODUCTION Children with spinal cord lesions very often experience bowel dysfunction, with a significant impact on their social activities and quality of life. Our aim was to evaluate the efficacy of the Peristeen transanal irrigation (TI) system in patients with neuropathic bowel dysfunction (NBD). MATERIAL AND METHODS We prospectively reviewed 40 children with spina bifida and NBD who did not respond satisfactorily to conventional bowel management and were treated with the Peristeen TI system. Dysfunctional bowel symptoms, patient opinion and level of satisfaction were analysed before and during TI treatment using a specific questionnaire. RESULTS Thirty-five children completed the study. Mean patient age and follow-up were 12.5 years (6-25) and 12 months (4-18), respectively. Average irrigation frequency and instillation volume were once every 3 days and 616 ml (200-1000), respectively. Bowel dysfunction symptoms including faecal incontinence improved significantly in all children. Patient opinion of intestinal functionality improved from 2.3±1.4 to 8.2±1.5 (P<0.0001) and the mean grade of satisfaction with the Peristeen system was 7.3. Patient independence also improved from 28 to 46% and no adverse events were recorded. CONCLUSIONS TI should be used as a first therapeutic approach in those children with NBD who do not respond to conservative or medical bowel management, before other more invasive treatment modalities are considered. The Peristeen system is as effective as other TI methods, but it is easy to learn, safe and increases the patient's independence.
Effective Exploration for MAVs Based on the Expected Information Gain
Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for building a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach, and the experiments show that it offers improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.
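A minimal sketch of this style of selection rule: score each candidate viewpoint by expected information gain minus a travel-cost penalty, and bias toward views near a home position when the battery runs low. The weights, the 2D geometry, and the battery threshold are illustrative assumptions rather than the paper's formulation.

```python
import math

def next_best_view(candidates, pose, home, battery_frac, w_cost=0.5, w_home=2.0):
    """Pick the candidate view with the best gain-minus-cost utility."""
    def utility(view):
        gain, position = view["gain"], view["pos"]
        score = gain - w_cost * math.dist(pose, position)   # travel-cost penalty
        if battery_frac < 0.25:                              # end-of-battery behaviour
            score -= w_home * math.dist(position, home)      # prefer views near home
        return score
    return max(candidates, key=utility)

views = [{"pos": (10.0, 2.0), "gain": 9.0},
         {"pos": (1.0, 1.0), "gain": 5.0}]
# Low battery: the closer, slightly less informative view wins.
print(next_best_view(views, pose=(0.0, 0.0), home=(0.0, 0.0), battery_frac=0.2))
```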
Solving Verbal Crypto-Arithmetic Problem by Parallel Genetic Algorithm (PGA)
Cryptarithmetic puzzles are quite old and their inventor is not known. An example in The American Agriculturist of 1864 disproves the popular notion that they were invented by Sam Loyd. The name cryptarithmetic was coined by puzzlist Minos (pseudonym of Maurice Vatriquant) in the May 1931 issue of Sphinx, a Belgian magazine of recreational mathematics. In 1955, J. A. H. Hunter introduced the word "alphametic" to designate cryptarithms, such as Dudeney's, whose letters form meaningful words or phrases. Solving a cryptarithm by hand usually involves a mix of deductions and exhaustive tests of possibilities. Cryptarithmetic is a puzzle consisting of an arithmetic problem in which the digits have been replaced by letters of the alphabet. The goal is to decipher the letters (i.e. map them back onto the digits) using the constraints provided by arithmetic and the additional constraint that no two letters can have the same numerical value. Cryptarithmetic is a class of constraint satisfaction problems which includes making mathematical relations between meaningful words using simple arithmetic operators such as 'plus' in a way that the result is conceptually true, and assigning digits to the letters of these words and generating numbers in order to make correct arithmetic operations as well.
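The constraints above translate directly into a brute-force solver: assign distinct digits to the letters, forbid leading zeros, and test the arithmetic relation. The classic SEND + MORE = MONEY instance below is a standard textbook example, not one taken from this paper.

```python
from itertools import permutations

def solve(a="SEND", b="MORE", total="MONEY"):
    """Brute-force cryptarithm solver: distinct digits, no leading zeros."""
    letters = sorted(set(a + b + total))
    assert len(letters) <= 10, "more than ten distinct letters"
    for digits in permutations(range(10), len(letters)):
        env = dict(zip(letters, digits))
        if env[a[0]] == 0 or env[b[0]] == 0 or env[total[0]] == 0:
            continue                                   # reject leading zeros
        to_int = lambda w: int("".join(str(env[c]) for c in w))
        if to_int(a) + to_int(b) == to_int(total):
            return env                                 # first satisfying assignment
    return None

print(solve())   # 9567 + 1085 = 10652, i.e. S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2
```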
Reverse Engineering CAPTCHAs
CAPTCHAs are automated Turing tests used to determine if the end-user is human and not an automated program. Users are asked to read and answer Visual CAPTCHAs, which often appear as bitmaps of text characters, in order to gain access to a low-cost resource such as webmail or a blog. CAPTCHAs are generated by software, and the structure of a CAPTCHA gives hints to its implementation. Because of these properties of image processing and image composition, the process that creates CAPTCHAs can often be reverse engineered. Once the implementation strategy of a family of CAPTCHAs has been reverse engineered, the CAPTCHA instances may be solved automatically by leveraging weaknesses in the creation process or by comparing a CAPTCHA's output against itself. In this paper, we present a case study where we reverse engineer and solve real-world CAPTCHAs using simple image processing techniques such as bitmap comparison, thresholding, flood-fill segmentation, dilation, and erosion. We present black-box and white-box methodologies for reverse engineering and solving CAPTCHAs. We also provide an open source toolkit for solving CAPTCHAs that we have used with success rates of 99%, 95%, 61%, 30%, and 27% on hundreds of CAPTCHAs from five real-world examples.
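To illustrate two of the listed steps, the sketch below applies a global threshold and then connected-component labeling (a flood-fill style segmentation) to a tiny synthetic image, using NumPy and SciPy rather than the paper's toolkit; the threshold value and the synthetic "glyphs" are assumptions.

```python
import numpy as np
from scipy import ndimage

def segment_captcha(gray, threshold=128):
    """Threshold dark glyph pixels, then return the x-range of each component."""
    mask = gray < threshold                     # dark pixels assumed to be glyphs
    labels, n = ndimage.label(mask)             # connected-component segmentation
    boxes = ndimage.find_objects(labels)        # one bounding slice per component
    return [(sl[1].start, sl[1].stop) for sl in boxes]

# Tiny synthetic "CAPTCHA": two dark blobs on a light background.
img = np.full((10, 20), 255, dtype=np.uint8)
img[2:8, 2:6] = 10
img[3:9, 12:17] = 20
print(segment_captcha(img))    # [(2, 6), (12, 17)] -> two candidate characters
```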
Matrix representation for genetic algorithms
Many problems have a structure with an inherently two (or higher) dimensional nature. Unfortunately, the classical method of representing problems when using Genetic Algorithms (GAs) is of a linear nature. We develop a genome representation with a related crossover mechanism which preserves spatial relationships for two dimensional problems. We then explore how crossover disruption rates relate to the spatial structure of the problem space.
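A minimal sketch of the kind of spatially aware operator this implies (not necessarily the paper's exact mechanism): a crossover that swaps a random rectangular block between two 2D genomes, so that genes that are neighbours in the grid tend to be inherited together.

```python
import random

def block_crossover(parent_a, parent_b, rng=random):
    """Swap a random rectangular patch between two 2D genomes (lists of lists)."""
    rows, cols = len(parent_a), len(parent_a[0])
    r0, r1 = sorted(rng.sample(range(rows + 1), 2))
    c0, c1 = sorted(rng.sample(range(cols + 1), 2))
    child_a = [row[:] for row in parent_a]
    child_b = [row[:] for row in parent_b]
    for r in range(r0, r1):
        child_a[r][c0:c1], child_b[r][c0:c1] = parent_b[r][c0:c1], parent_a[r][c0:c1]
    return child_a, child_b

random.seed(1)
A = [[0] * 5 for _ in range(4)]
B = [[1] * 5 for _ in range(4)]
child, _ = block_crossover(A, B)
for row in child:
    print(row)          # a contiguous block of 1s embedded in the 0 genome
```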
3D CAD model generation of mechanical parts using coded-pattern projection and laser triangulation systems
Purpose – Investigate the use of two imaging-based methods – coded pattern projection and laser-based triangulation – to generate 3D models as input to a rapid prototyping pipeline. Design/methodology/approach – Discusses structured lighting technologies as suitable imaging-based methods. Two approaches, coded-pattern projection and laser-based triangulation, are specifically identified and discussed in detail. Two commercial systems are used to generate experimental results. These systems include the Genex Technologies 3D FaceCam and the Integrated Vision Products Ranger System. Findings – Presents 3D reconstructions of objects from each of the commercial systems. Research limitations/implications – Provides background in imaging-based methods for 3D data collection and model generation. A practical limitation is that imaging-based systems do not currently meet accuracy requirements, but continued improvements in imaging systems will minimize this limitation. Practical implications – Imaging-based approaches to 3D model generation offer potential to increase scanning time and reduce scanning complexity. Originality/value – Introduces imaging-based concepts to the rapid prototyping pipeline.
Passive in the world's languages
In this chapter we shall examine the characteristic properties of a construction widespread in the world’s languages, the passive. In section 1 below we discuss defining characteristics of passives, contrasting them with other foregrounding and backgrounding constructions. In section 2 we present the common syntactic and semantic properties of the most widespread types of passives, and in section 3 we consider passives which differ in one or more ways from these. In section 4, we survey a variety of constructions that resemble passive constructions in one way or another. In section 5, we briefly consider differences between languages with regard to the roles passives play in their grammars. Specifically, we show that passives are a more essential part of the grammars of some languages than of others.
ConceptNet 5.5: An Open Multilingual Graph of General Knowledge
Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.
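To show what "words and phrases connected with labeled edges" looks like in practice, here is a tiny sketch using ConceptNet-style term and relation URIs; the specific triples and weights are made up for illustration and are not taken from the actual resource.

```python
# Toy edge list in the spirit of ConceptNet's (start, relation, end, weight) triples.
edges = [
    ("/c/en/dog", "/r/IsA", "/c/en/animal", 3.46),
    ("/c/en/dog", "/r/CapableOf", "/c/en/bark", 2.0),
    ("/c/en/puppy", "/r/FormOf", "/c/en/dog", 1.0),
]

def related(term):
    """Return (relation, other term, weight) for every edge touching `term`."""
    out = []
    for start, rel, end, w in edges:
        if start == term:
            out.append((rel, end, w))
        elif end == term:
            out.append((rel, start, w))
    return sorted(out, key=lambda e: -e[2])      # strongest connections first

print(related("/c/en/dog"))
```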
A Comparison of BPMN and UML 2.0 Activity Diagrams
Interest in evaluating Business Process Modeling Languages has become widespread, in part due to the increase in the number of languages available for this purpose. Several works on the evaluation of BPMLs are available; their evaluations are mainly based on perspectives centered on modeling experts. In this paper, we address the readability perspective of two BPMLs (UML 2.0 and BPMN) for people not familiar with process modeling. The UML can be tailored for purposes beyond software modeling and offers Activity Diagrams for business process modeling. BPMN was designed for modeling business processes and has a primary goal of being understandable by all business stakeholders. We compared undergraduates' (freshmen) understanding of business processes modeled in BPMN and UML 2.0 Activity Diagrams. Our results are interesting, since we found that these two languages do not differ significantly, despite BPMN's readability design goals.
On Graph Problems in a Semi-streaming Model
Massive graphs arise naturally in a lot of applications, especially in communication networks like the Internet. The size of these graphs makes it very hard or even impossible to store the set of edges in main memory. Thus, random access to the edges cannot be realized, which makes most offline algorithms unusable. This essay investigates efficient algorithms that read the edges only in a fixed sequential order. Since even basic graph problems often need at least linear space in the number of vertices to be solved, the storage space bounds are relaxed compared to the classic streaming model, such that the bound is O(n · polylog n). The essay describes algorithms for approximations of the unweighted and weighted matching problem and gives an o(log^{1-ε} n) lower bound for approximations of the diameter. Finally, some results for further graph problems are discussed.
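A standard example of an algorithm that fits this model is the one-pass greedy matching below: it stores only which vertices are currently matched, so its memory stays within the relaxed O(n · polylog n) bound, and it returns a maximal matching, i.e. a 1/2-approximation of a maximum matching. This is the textbook baseline, not necessarily the specific algorithm analyzed in the essay.

```python
def greedy_streaming_matching(edge_stream):
    """One pass over the edge stream; keep an edge iff both endpoints are free."""
    matched = set()
    matching = []
    for u, v in edge_stream:          # edges arrive in a fixed sequential order
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

stream = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
print(greedy_streaming_matching(stream))   # [(1, 2), (3, 4), (5, 6)]
```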
A functional representation for aiding biomimetic and artificial inspiration of new ideas
Inspiration is useful for exploration and discovery of new solution spaces. Systems in natural and artificial worlds and their functionality are seen as rich sources of inspiration for idea generation. However, unlike in the artificial domain where existing systems are often used for inspiration, those from the natural domain are rarely used in a systematic way for this purpose. Analogy is long regarded as a powerful means for inspiring novel idea generation. One aim of the work reported here is to initiate similar work in the area of systematic biomimetics for product development, so that inspiration from both natural and artificial worlds can be used systematically to help develop novel, analogical ideas for solving design problems. A generic model for representing causality of natural and artificial systems has been developed, and used to structure information in a database of systems from both the domains. These are implemented in a piece of software for automated analogical search of relevant ideas from the databases to solve a given problem. Preliminary experiments at validating the software indicate substantial potential for the approach.
Transfusion management of sickle cell disease.
Although it is likely that sickle cells have been responsible for morbidity and mortality in humans for thousands of years, it was not until 1910 that the first description of sickle cell anemia appeared. Subsequently, sickle cell anemia has become the prototype of modern molecular disease, its history tracing the evolution of science and medicine in the 20th century. Sickle cell disease (SCD), including homozygous hemoglobin (Hb) S and doubly heterozygous HbS/C and HbS/β-thalassemia, is among the most common of inherited hemoglobinopathies. The incidence of SCD is greatest in equatorial Africa, where the frequency of heterozygous carriage is as high as 35%. The incidence of sickle trait in African-Americans is approximately 8%, with 1 in 200 to 500 newborns affected with SCD. Although full discussion of the pathophysiology of SCD is beyond the scope of this report (the reader is directed to recent reviews), certain features of HbS merit emphasis. Via direct and indirect mechanisms such as polymerization, instability, and membrane binding, HbS induces a cascade of alterations in red blood cell (RBC) structure and function. Consequently, RBCs from patients with SCD are both rigid and adherent, resulting in marked abnormalities in microrheology. It should be emphasized that although such properties both promote and are exacerbated by cellular sickling, the adverse effects of HbS come into play even in the absence of this characteristic shape change. Hyperviscosity, cellular adherence, sickling, and possibly hypercoagulability may all contribute to the vascular effects associated with SCD. Vaso-occlusion, intimal hyperplasia, impaired vasomotor activity, marrow fat embolism, thrombosis, and thromboembolism have in turn all been postulated to contribute to the tissue ischemia responsible for the clinical symptoms of SCD. Finally, infarction-induced hyposplenism leads to a high incidence of severe infection in young children, particularly septicemia with encapsulated organisms. The major manifestations of SCD include hemolytic anemia and complications of vascular occlusion. Although there is significant variability in clinical severity, recent data suggest that most African-Americans with SCD have crises and require hospitalization at some point. Furthermore, chronic vasculopathy leads to irreversible organ damage in at least a third of patients, and as such is the most frequent cause of death beyond early childhood. Survival in SCD has improved dramatically around the globe, due in large part to changing living conditions, early diagnosis, and improvements in supportive care, especially the judicious use of antibiotics. Nonetheless, SCD remains a debilitating and life-threatening disorder.
Speech segmentation and spoken document processing
Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.
Curve-Skeleton Extraction Using Iterative Least Squares Optimization
A curve skeleton is a compact representation of 3D objects and has numerous applications. It can be used to describe an object's geometry and topology. In this paper, we introduce a novel approach for computing curve skeletons for volumetric representations of the input models. Our algorithm consists of three major steps: 1) using iterative least squares optimization to shrink models and, at the same time, preserving their geometries and topologies, 2) extracting curve skeletons through the thinning algorithm, and 3) pruning unnecessary branches based on shrinking ratios. The proposed method is less sensitive to noise on the surface of models and can generate smoother skeletons. In addition, our shrinking algorithm requires little computation, since the optimization system can be factorized and stored in the precomputational step. We demonstrate several extracted skeletons that help evaluate our algorithm. We also experimentally compare the proposed method with other well-known methods. Experimental results show advantages when using our method over other techniques.
Personalized News Recommendation Based on Collaborative Filtering
Because of the abundance of news on the web, news recommendation is an important problem. We compare three approaches for personalized news recommendation: collaborative filtering at the level of news items, content-based system recommending items with similar topics, and a hybrid technique. We observe that recommending items according to the topic profile of the current browsing session seems to give poor results. Although news articles change frequently and thus data about their popularity is sparse, collaborative filtering applied to individual articles provides the best results.
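To make the item-level collaborative filtering concrete, here is a minimal, hypothetical sketch: a toy item-item cosine-similarity recommender over a binary click matrix. The matrix, function name, and neighborhood size are illustrative assumptions, not details of the system evaluated above.

```python
import numpy as np

def recommend(clicks, user, k=5):
    """clicks: (n_users, n_items) binary matrix; returns top-k unseen items for `user`."""
    # Item-item cosine similarity.
    norms = np.linalg.norm(clicks, axis=0) + 1e-12
    sim = (clicks.T @ clicks) / np.outer(norms, norms)
    # Score each item by its similarity to the items this user already read.
    scores = sim @ clicks[user]
    scores[clicks[user] > 0] = -np.inf        # do not re-recommend read items
    return np.argsort(scores)[::-1][:k]

# Usage on a random toy click matrix (50 users, 200 articles).
clicks = np.random.binomial(1, 0.1, size=(50, 200))
print(recommend(clicks, user=0))
```

Restricting `clicks[user]` to the articles of the current browsing session would give a session-level variant of the same scoring, which relates to the comparison discussed above.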
A Survey of Modeling and Analysis Approaches for Architecting Secure Software Systems
There has been a growing interest in the software engineering research community in investigating methodologies to support the development of secure systems. Recently, much attention has been focused on the modelling and analysis of security properties for systems at the software architecture design level. The potential benefits of this architecture-level work are substantial: security flaws can be detected and removed earlier in the software development life-cycle, which reduces development time, reduces development cost, and improves the quality of the resulting system. As a result of this attention, a wide variety of approaches have been proposed in the literature. At this point, a survey of approaches for systematically modelling and analyzing software architecture designs with security properties would be of value to researchers in the community. This paper presents such a survey; it includes a discussion of semi-formal, formal, integrated semi-formal and formal, and aspect-oriented approaches. Comparison criteria are defined, including: the kinds of notations used to model the security properties (e.g., Petri nets, temporal logic, etc.), whether the approach supports manual or automated analysis of security properties, the specific security property modelled (e.g., authentication, role-based access control, etc.), and the kind of example system that has been used to illustrate the approach (information, distributed, etc.).
Study on Secret Sharing Schemes (SSS) and their applications
Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, etc. Information can be secured with encryption, but the secret key used for that encryption must be secured as well. Imagine you encrypt your important files with one secret key; if that key is lost, all the important files become inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is the secret sharing scheme (SSS), which lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper studies these schemes and explains the need for them and their security. Over the years, various schemes have been presented. This paper surveys some of them, varying from trivial schemes to threshold-based ones. Explanations of the constructions of these schemes are presented. The paper also looks at some applications of SSS.
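As an illustration of the threshold-based schemes surveyed here, the sketch below implements a Shamir-style (k, n) scheme over a prime field: a random polynomial of degree k-1 hides the secret in its constant term, and any k shares recover it by Lagrange interpolation. The prime, function names, and toy secret are assumptions made for the example, not details taken from the paper.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for toy secrets

def split(secret: int, n: int, k: int):
    """Shamir-style (k, n) threshold scheme: any k of the n shares recover the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):       # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
print(recover(shares[:3]))  # 123456789
```

Any subset of fewer than k shares reveals nothing about the constant term, which is what makes such schemes attractive for the key management problem described above.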
Personality factors and adult attachment affecting job mobility
Past research has revealed that individuals’ job mobility is affected by factors such as job satisfaction, specific career enhancing attributes and job availability. This study examined personality factors predicting voluntary internal and external job mobility. Three types of voluntary job mobility measures were studied: dissatisfaction changes, job improvement changes and job rotations within companies. These mobility measures were related to the Big Five personality factors, sensation seeking and adult attachment. Results showed that demographic variables and sensation seeking contributed to the variance in external job changes. Internal job rotations were not related to any of the demographic and personality variables.
The social strategy cone: Towards a framework for evaluating social media strategies
Social media is growing rapidly, providing both risks and opportunities for organizations. The social strategy cone is developed for evaluating social media strategies. This framework comprises seven key elements of social media strategies, based on a systematic literature review and case studies. The results of 21 interviews contributed to the construction of the social media strategy cone for analyzing social media strategies. Three levels of maturity of social media strategy are proposed, the first being initiation.
Interleukin-21-Producing CD4(+) T Cells Promote Type 2 Immunity to House Dust Mites.
Asthma is a T helper 2 (Th2)-cell-mediated disease; however, recent findings implicate Th17 and innate lymphoid cells also in regulating airway inflammation. Herein, we have demonstrated profound interleukin-21 (IL-21) production after house dust mite (HDM)-driven asthma by using T cell receptor (TCR) transgenic mice reactive to Dermatophagoides pteronyssinus 1 and an IL-21GFP reporter mouse. IL-21-producing cells in the mediastinal lymph node (mLN) bore characteristics of T follicular helper (Tfh) cells, whereas IL-21(+) cells in the lung did not express CXCR5 (a chemokine receptor expressed by Tfh cells) and were distinct from effector Th2 or Th17 cells. Il21r(-/-) mice developed reduced type 2 responses and the IL-21 receptor (IL-21R) enhanced Th2 cell function in a cell-intrinsic manner. Finally, administration of recombinant IL-21 and IL-25 synergistically promoted airway eosinophilia primarily via effects on CD4(+) lymphocytes. This highlights an important Th2-cell-amplifying function of IL-21-producing CD4(+) T cells in allergic airway inflammation.
Perceived environmental housing quality and wellbeing of movers.
STUDY OBJECTIVE To examine whether changes in environmental housing quality influence the wellbeing of movers taking into account other dimensions of housing quality and sociodemographic factors. DESIGN Cross sectional telephone survey. Associations between changes in satisfaction with 40 housing quality indicators (including environmental quality) and an improvement in self rated health (based on a standardised question) were analysed by multiple logistic regression adjusting for sociodemographic variables. Objective measures of wellbeing or environmental quality were not available. SETTING North western region of Switzerland including the city of Basel. PARTICIPANTS Random sample of 3870 subjects aged 18-70 who had moved in 1997, participation rate 55.7%. RESULTS A gain in self rated health was most strongly predicted by an improved satisfaction with indicators related to the environmental housing quality measured as "location of building" (adjusted odds ratio (OR) =1.58, 95% confidence intervals (CI) =1.28, 1.96) and "perceived air quality" (OR=1.58, 95% CI=1.24, 2.01) and to the apartment itself, namely "suitability" (OR=1.77, 95% CI=1.41, 2.23), "relationship with neighbours" (OR=1.46, 95% CI=1.19, 1.80) and "noise from neighbours" (OR=1.32, 95% CI=1.07, 1.64). The destination of moving and the main reason to move modified some of the associations with environmental indicators. CONCLUSION An improvement in perceived environmental housing quality was conducive to an increase in wellbeing of movers when other dimensions of housing quality and potential confounders were taken into account.
Learning compound multi-step controllers under unknown dynamics
Applications of reinforcement learning for robotic manipulation often assume an episodic setting. However, controllers trained with reinforcement learning are often situated in the context of a more complex compound task, where multiple controllers might be invoked in sequence to accomplish a higher-level goal. Furthermore, training such controllers typically requires resetting the environment between episodes, which is usually handled manually. We describe an approach for training chains of controllers with reinforcement learning. This requires taking into account the state distributions induced by preceding controllers in the chain, as well as automatically training reset controllers that can reset the task between episodes. The initial state of each controller is determined by the controller that precedes it, resulting in a non-stationary learning problem. We demonstrate that a recently developed method that optimizes linear-Gaussian controllers under learned local linear models can tackle this sort of non-stationary problem, and that training controllers concurrently with a corresponding reset controller only minimally increases training time. We also demonstrate this method on a complex tool use task that consists of seven stages and requires using a toy wrench to screw in a bolt. This compound task requires grasping and handling complex contact dynamics. After training, the controllers can execute the entire task quickly and efficiently. Finally, we show that this method can be combined with guided policy search to automatically train nonlinear neural network controllers for a grasping task with considerable variation in target position.
Does Colorspace Transformation Make Any Difference on Skin Detection?
Skin detection is an important process in many computer vision algorithms. It usually starts at the pixel level and involves a pre-processing step of colorspace transformation followed by a classification process. A colorspace transformation is assumed to increase separability between skin and non-skin classes, to increase similarity among different skin tones, and to bring robust performance under varying illumination conditions, yet these assumptions are rarely backed by sound reasoning. In this work, we examine whether colorspace transformation does bring those benefits by computing four separability measurements on a large dataset of 805 images with different skin tones and illumination. Surprisingly, the results indicate that most colorspace transformations do not bring the benefits that have been assumed.
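As a concrete picture of the pixel-level pipeline examined above, the following sketch converts RGB pixels to YCbCr and thresholds the chrominance channels. The conversion is the common JPEG-style transform, and the threshold ranges are illustrative assumptions, not the separability measures or dataset used in the paper.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """img: float array in [0, 1] with shape (H, W, 3). Returns Y, Cb, Cr channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 0.5
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 0.5
    return y, cb, cr

def skin_mask(img, cb_range=(0.30, 0.50), cr_range=(0.52, 0.68)):
    """Rough per-pixel skin classifier on chrominance only (illustrative thresholds)."""
    _, cb, cr = rgb_to_ycbcr(img)
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

# Usage on a random image stand-in.
img = np.random.rand(64, 64, 3)
mask = skin_mask(img)
print(mask.sum(), "pixels classified as skin")
```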
JavaScript as an Embedded DSL
Developing rich web applications requires mastering different environments on the client and server sides. While there is considerable choice on the server-side, the client-side is tied to JavaScript, which poses substantial software engineering challenges, such as moving or sharing pieces of code between the environments. We embed JavaScript as a DSL in Scala, using Lightweight Modular Staging. DSL code can be compiled to JavaScript or executed as part of the server application. We use features of the host language to make client-side programming safer and more convenient. We use gradual typing to interface typed DSL programs with existing JavaScript APIs. We exploit a selective CPS transform already available in the host language to provide a compelling abstraction over asynchronous callback-driven programming in our DSL.
How Many Subjects Does It Take To Do A Regression Analysis.
Numerous rules-of-thumb have been suggested for determining the minimum number of subjects required to conduct multiple regression analyses. These rules-of-thumb are evaluated by comparing their results against those based on power analyses for tests of hypotheses of multiple and partial correlations. The results did not support the use of rules-of-thumb that simply specify some constant (e.g., 100 subjects) as the minimum number of subjects or a minimum ratio of number of subjects (N) to number of predictors (m). Some support was obtained for a rule-of-thumb that N ≥ 50 + 8m for the multiple correlation and N ≥ 104 + m for the partial correlation. However, the rule-of-thumb for the multiple correlation yields values of N that are too large when m ≥ 7, and both rules-of-thumb assume all studies have a medium-size relationship between criterion and predictors. Accordingly, a slightly more complex rule-of-thumb is introduced that estimates minimum sample size as a function of effect size as well as the number of predictors. It is argued that researchers should use methods to determine sample size that incorporate effect size.
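For illustration, the following sketch encodes the two supported rules-of-thumb quoted above so that the suggested minimum N can be compared for different numbers of predictors m (the function names are hypothetical).

```python
def min_n_multiple(m: int) -> int:
    # Rule-of-thumb for testing the multiple correlation: N >= 50 + 8m
    return 50 + 8 * m

def min_n_partial(m: int) -> int:
    # Rule-of-thumb for testing a partial correlation: N >= 104 + m
    return 104 + m

if __name__ == "__main__":
    for m in (3, 7, 15):
        print(f"m={m:2d}  multiple R: N >= {min_n_multiple(m):3d}  "
              f"partial r: N >= {min_n_partial(m):3d}")
```

For example, with m = 7 predictors the first rule suggests at least 106 subjects and the second at least 111, which is where the two rules begin to diverge as noted above.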
Noninvasive detection of brushless exciter rotating diode failure
A noninvasive method for detecting failure of brushless exciter rotating diodes is presented in this paper. Diodes can fail in one of two ways: either open circuit or short circuit. It is shown by actual test, on a 31.5-kVA alternator, that the exciter field current waveform changes distinctly when a diode fails. Harmonic analysis of the exciter field current waveform is performed. The results of this analysis form the basis of a method for real-time detection of diode failure on a microprocessor-based platform.
Encyclopedia of aesthetics
This reference surveys the breadth of critical thought on art, culture, and society - from classical philosophy to contemporary critical theory. Featuring 600 original articles by distinguished scholars from many fields and countries, it is a comprehensive survey of major concepts, thinkers, and debates about the meaning, uses, and value of all the arts - from painting and sculpture to literature, music, theatre, dance, television, film, and popular culture. Of special interest are in-depth surveys of Western aesthetics and broad coverage of non-Western traditions and theories of art.
Trajectory tracking control law for a tractor-trailer wheeled mobile robot
The trajectory tracking problem is one of the most important subjects studied by many researchers in recent years. In this paper, a tractor-trailer type robot subject to two nonholonomic constraints is analyzed. First, the robot kinematic equations are obtained and transformed into chained form. Next, the controllability of the robot along the reference trajectory is evaluated, and appropriate reference trajectories for the tractor-trailer robot are generated. Finally, a controller based on the feedback linearization method is designed in order to stabilize the tracking errors about the origin. The obtained results show that the designed controller performs quite effectively.
Neighborhood predictors of concealed firearm carrying among children and adolescents: results from the project on human development in Chicago neighborhoods.
BACKGROUND Previous studies of concealed firearm carrying among children and adolescents have focused on individual risk factors. OBJECTIVE To identify features of neighborhoods associated with concealed firearm carrying among a representative sample of youth from Chicago, Ill. DESIGN Cross-sectional analysis of individual- and neighborhood-level data collected by the Project on Human Development in Chicago Neighborhoods. SETTING Families and neighborhoods in Chicago. PARTICIPANTS Population-based sample of 1842 multiethnic youth aged 9 to 19 years and the 218 neighborhoods in which they resided. Main Outcome Measure Whether youth had ever carried a concealed firearm. RESULTS Lifetime estimates for concealed firearm carrying were 4.9% for males and 1.1% for females. We found that youth in safer and less disordered neighborhoods were less likely than youth in unsafe and more disordered neighborhoods to carry concealed firearms. Specifically, multilevel nonlinear regression models identified a positive association between concealed firearm carrying and (1) community members' ratings of neighborhoods as unsafe for children; (2) neighborhood social disorder; and (3) neighborhood physical disorder. Neighborhood collective efficacy was negatively associated with concealed firearm carrying. Models controlled for neighborhood economic indicators and individual and family factors associated with the carrying of concealed firearms by youth. CONCLUSIONS Youth are less likely to carry concealed firearms in areas where there is less violence and increased safety. Interventions to improve neighborhood conditions such as increasing safety, improving collective efficacy, and reducing social and physical disorder may be efficacious in preventing firearm use and its associated injuries and death among youth.
A Vision for All-Spin Neural Networks: A Device to System Perspective
Spin-transfer torque (STT) mechanisms in vertical and lateral spin valves and magnetization reversal/domain wall motion with spin-orbit torque (SOT) have opened up new possibilities of efficiently mimicking “neural” and “synaptic” functionalities with much lower area and energy consumption compared to CMOS implementations. In this paper, we review various STT/SOT devices that can provide a compact and area-efficient implementation of artificial neurons and synapses. We provide a device-circuit-system perspective and envision design of an All-Spin neuromorphic processor (with different degrees of bio-fidelity) that can be potentially appealing for ultra-low power cognitive applications.
Question classification for medical domain Question Answering system
Question classification plays an important role in question answering systems. It helps in finding or constructing accurate answers and hence improves the quality of question answering systems. The question classification approaches generally used are rule based, machine learning and hybrid. This paper presents our research work on question classification through a rule based approach. The question processing module helps in assigning a suitable question category and identifying the keywords from the given input question. A prototype system based on the proposed method has been constructed, and an experiment on 500 medical questions collected from patients and doctors has been carried out. Using the two-layered taxonomy of 6 coarse-grained and 50 fine-grained categories developed by Li and Roth, we have classified the questions into various categories. We have also studied the syntactic structure of the questions and suggest syntactic patterns for particular categories of questions. Using these question patterns we classify each question into a particular category. In this paper we propose a compact and effective method for question classification. The experimental output shows that even with a small set of question categories we can classify the questions with satisfactory results.
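A rule-based classifier of the kind described can be pictured as a small table of surface patterns mapped to categories, as in the sketch below; the patterns and category names are hypothetical illustrations rather than the rules or taxonomy labels of the prototype system.

```python
import re

# Hypothetical surface patterns mapped to coarse question categories.
RULES = [
    (re.compile(r"^\s*what (is|are)\b", re.I),      "DEFINITION"),
    (re.compile(r"^\s*how (do|does|can)\b", re.I),  "PROCEDURE"),
    (re.compile(r"\bsymptom", re.I),                "SYMPTOM"),
    (re.compile(r"\b(dose|dosage|mg)\b", re.I),     "DOSAGE"),
    (re.compile(r"\bside effects?\b", re.I),        "SIDE_EFFECT"),
]

def classify(question: str) -> str:
    """Return the category of the first matching rule, or OTHER."""
    for pattern, category in RULES:
        if pattern.search(question):
            return category
    return "OTHER"

print(classify("What is hypertension?"))            # DEFINITION
print(classify("How can I lower my blood sugar?"))  # PROCEDURE
```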
A Teachable-Agent Arithmetic Game's Effects on Mathematics Understanding, Attitude and Self-efficacy
A teachable-agent arithmetic game is presented and evaluated in terms of student performance, attitude and self-efficacy. An experimental pre-post study design was used, enrolling 153 3rd and 5th grade students in Sweden. The playing group showed significantly larger gains in math performance and self-efficacy beliefs, but not in general attitude towards math, compared to control groups. The contributions in relation to previous work include a novel educational game being evaluated, and an emphasis on self-efficacy in the study as a strong predictor of math achievements.
Multi-atlas segmentation of biomedical images: A survey
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
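As a minimal sketch of the simplest fusion step in MAS, the code below performs per-voxel majority voting over atlas segmentations that are assumed to be already registered to the target image; the registration step and the more sophisticated fusion strategies surveyed in the paper are not shown.

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse already-registered atlas segmentations by per-voxel majority vote.

    label_maps: list of integer label arrays of identical shape (one per atlas).
    """
    stack = np.stack(label_maps, axis=0)                 # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)          # count atlases voting for lbl
    return votes.argmax(axis=0)                          # most-voted label per voxel

# Toy example: three 4x4 "atlas" segmentations with labels 0-2.
atlases = [np.random.randint(0, 3, size=(4, 4)) for _ in range(3)]
print(majority_vote(atlases))
```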
Improved layout implementation of Mini-Mips in terms of power, performance and chip footprint
This paper proposes a customized implementation of the Mini-Mips processor, proceeding from the Register Transfer Level (RTL) down to the GDSii layout level. The implementation is customized for a string-matching application, the given problem of the second Iranian National Digital Systems Design Contest 2015 at Amirkabir University of Technology. We apply various techniques to make reasonable trade-offs between area, delay, and power consumption, ranging from loop unrolling at the algorithm level to low-power techniques such as clock gating at the circuit level. Moreover, we utilize compiler optimization techniques to reduce the execution time. Our simulations show that execution time is reduced by more than 63% on average. Overall, we arrive at an optimized design in terms of chip area, power consumption and total execution time.
How New Zealand has contained expenditure on drugs.
The recent economic crisis has forced Western countries to examine how they contain health spending and improve value for money. Spending on drugs averages around 15% of total health spending for countries in the Organisation for Economic Cooperation and Development (OECD). Improved management of spending on drugs can therefore make an important contribution to containing health budgets. In recent years, increases in drug costs in New Zealand have been below those experienced in other countries while public coverage has improved. We discuss the Pharmaceutical Management Agency’s (PHARMAC) role in achieving this, its processes for setting priorities, criticisms about its work, and implications for other healthcare systems.
A Design Approach for Collaboration Processes: A Multimethod Design Science Study in Collaboration Engineering
Collaboration Engineering is an approach for the design and deployment of repeatable collaboration processes that can be executed by practitioners without the support of collaboration professionals such as facilitators. A critical challenge in Collaboration Engineering concerns how the design activities have to be executed and which design choices have to be made to create a process design. We report on a four year design science study, in which we developed a design approach for Collaboration Engineering that
Evaluation Datasets for Twitter Sentiment Analysis: A survey and a new dataset, the STS-Gold
Sentiment analysis over Twitter offers organisations and individuals a fast and effective way to monitor the public’s feelings towards them and their competitors. To assess the performance of sentiment analysis methods over Twitter, a small number of evaluation datasets have been released in the last few years. In this paper we present an overview of eight publicly available and manually annotated evaluation datasets for Twitter sentiment analysis. Based on this review, we show that a common limitation of most of these datasets, when assessing sentiment analysis at target (entity) level, is the lack of distinct sentiment annotations for the tweets and the entities contained in them. For example, the tweet “I love iPhone, but I hate iPad” can be annotated with a mixed sentiment label, but the entity iPhone within this tweet should be annotated with a positive sentiment label. Aiming to overcome this limitation, and to complement current evaluation datasets, we present STS-Gold, a new evaluation dataset where tweets and targets (entities) are annotated individually and therefore may present different sentiment labels. This paper also provides a comparative study of the various datasets along several dimensions including: total number of tweets, vocabulary size and sparsity. We also investigate the pair-wise correlation among these dimensions as well as their correlations to the sentiment classification performance on different datasets.
Efficient Design and Decoding of Polar Codes
Polar codes are shown to be instances of both generalized concatenated codes and multilevel codes. It is shown that the performance of a polar code can be improved by representing it as a multilevel code and applying the multistage decoding algorithm with maximum likelihood decoding of outer codes. Additional performance improvement is obtained by replacing polar outer codes with other ones with better error correction performance. In some cases this also results in complexity reduction. It is shown that Gaussian approximation for density evolution enables one to accurately predict the performance of polar codes and concatenated codes based on them.
A Deep Learning Approach with an Attention Mechanism for Automatic Sleep Stage Classification
Automatic sleep staging is a challenging problem, and state-of-the-art algorithms have not yet reached satisfactory performance to be used instead of manual scoring by a sleep technician. Much research has been done to find good feature representations that extract the useful information needed to classify each epoch into the correct sleep stage. While many useful features have been discovered, the number of features has grown to the extent that a feature reduction step is necessary in order to avoid the curse of dimensionality. One reason such a large feature set is needed is that many features are good for discriminating only one of the sleep stages and are less informative during other stages. This paper explores how a second feature representation over a large set of pre-defined features can be learned using an auto-encoder with selective attention for the current sleep stage in the training batch. This selective attention allows the model to learn feature representations that focus on the more relevant inputs without having to perform any dimensionality reduction of the input data. The performance of the proposed algorithm is evaluated on a large data set of polysomnography (PSG) night recordings of patients with sleep-disordered breathing. The performance of the auto-encoder with selective attention is compared with a regular auto-encoder and previous works using a deep belief network (DBN).
Corneal biomechanical properties and glaucoma-related quantitative traits in the EPIC-Norfolk Eye Study.
PURPOSE We examined the association of corneal hysteresis (CH) with Heidelberg retina tomograph (HRT)- and Glaucoma Detection with Variable Corneal Compensation scanning laser polarimeter (GDxVCC)-derived measures in a British population. METHODS The EPIC-Norfolk Eye Study is nested within a multicenter cohort study--the European Prospective Investigation of Cancer. Ocular response analyzer (ORA), HRT3, and GDxVCC measurements were taken at the research clinic. Three ORA measurements were taken per eye, and the single best value used. Participants meeting predefined criteria were referred for a second examination, including Goldmann applanation tonometry (GAT) and central corneal thickness (CCT) measurement. Generalized estimating equation models were used to examine the associations of CH with HRT and GDxVCC parameters, adjusted for disc area. The GDxVCC analyses were adjusted further for typical scan score to handle atypical retardation. RESULTS There were complete research clinic data from 5134 participants. Corneal hysteresis was associated positively with HRT rim area (P < 0.001), and GDxVCC retinal nerve fiber layer (RNFL) average thickness (P = 0.006) and modulation (P = 0.003), and associated negatively with HRT linear cup-to-disc ratio (LCDR, P < 0.001), after adjustment for Goldmann-correlated IOP and other possible confounders. In the 602 participants undergoing the second examination, CH was associated negatively with LCDR (P = 0.008) after adjustment for GAT, CCT, and other possible confounders. CONCLUSIONS Lower CH was associated with HRT and GDxVCC parameters in a direction that is seen in glaucoma and with ageing. Further research is required to establish if this is a causal relationship, or due to residual confounding by age, IOP, or CCT.
THE CHARITY: PIVOTAL SOCIAL POLICY ROLE
Sometimes an issue can remain dormant for a long period of time before receiving governmental and legislative attention. Debate on corporate governance has coincided with a number of measures impacting on the charitable sector which, taken together, have the effect of bringing about improvements in the overall corporate governance climate for the charity, and re‐inforcing the centrality of the charity as an important instrument of social policy. The aim of this article is to explore this battery of measures, their historical context, and the varying fortunes of the charitable sector in its social policy role.
Association between Wegener's granulomatosis and Staphylococcus aureus infection?
Two patients are presented with Wegener's granulomatosis (WG) and lower respiratory tract infections with Staphylococcus aureus (SA). It is postulated that there is a relationship between the infection and the induction or relapse of the disease. We suggest that bronchoalveolar lavages should be performed in cases of suspected WG to identify SA infections. The co-existence of WG and SA supports the reported beneficial effects of sulfamethoxazole/trimethoprim, but needs further evaluation in patients with and without SA infection of the airways.
Imposing higher-level Structure in Polyphonic Music Generation using Convolutional Restricted Boltzmann Machines and Constraints
We introduce a method for imposing higher-level structure on generated, polyphonic music. A Convolutional Restricted Boltzmann Machine (C-RBM) as a generative model is combined with gradient descent constraint optimisation to provide further control over the generation process. Among other things, this allows for the use of a “template” piece, from which some structural properties can be extracted, and transferred as constraints to the newly generated material. The sampling process is guided with Simulated Annealing to avoid local optima, and to find solutions that both satisfy the constraints, and are relatively stable with respect to the C-RBM. Results show that with this approach it is possible to control the higher-level self-similarity structure, the meter, and the tonal properties of the resulting musical piece, while preserving its local musical coherence.
Comparison of Different Classification Techniques Using WEKA for Breast Cancer
The development of data-mining applications such as classification and clustering has shown the need for machine learning algorithms to be applied to large scale data. In this paper we present a comparison of different classification techniques using the Waikato Environment for Knowledge Analysis, or in short, WEKA. WEKA is an open source software package which consists of a collection of machine learning algorithms for data mining tasks. The aim of this paper is to investigate the performance of different classification or clustering methods on a set of large data. The algorithms or methods tested are Bayes Network, Radial Basis Function, Pruned Tree, Single Conjunctive Rule Learner and Nearest Neighbors Algorithm. A fundamental review of each selected technique is presented for introduction purposes. The breast cancer data, comprising 699 rows and 9 columns (6291 data values in total), will be used to test and justify the differences between the classification methods or algorithms. Subsequently, the classification technique that has the potential to significantly improve the common or conventional methods will be suggested for use in large scale data, bioinformatics or other general applications. Keywords— Machine Learning, Data Mining, WEKA, Classification, Bioinformatics.
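A comparable experiment can be sketched outside WEKA; the code below uses scikit-learn stand-ins for the learners named above (Gaussian naive Bayes for Bayes Network, an RBF-kernel SVM for the Radial Basis Function network, a depth-limited decision tree for the pruned tree, and k-nearest neighbors), evaluated by 10-fold cross-validation on scikit-learn's bundled breast cancer dataset, which is not the same dataset used in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # illustrative dataset only
models = {
    "NaiveBayes (for Bayes Network)": GaussianNB(),
    "RBF-SVM (for RBF Network)":      SVC(kernel="rbf", gamma="scale"),
    "PrunedTree (for J48)":           DecisionTreeClassifier(max_depth=5),
    "kNN (for IBk)":                  KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name:32s} 10-fold CV accuracy: {acc:.3f}")
```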
The Current State of Intervention Research for Posttraumatic Stress Disorder Within the Primary Care Setting
Posttraumatic Stress Disorder (PTSD) is common among primary care patients and is associated with significant functional impairment, physical health concerns, and mental health comorbidities. Significant barriers to receiving adequate treatment often exist for primary care patients with PTSD. Mental health professionals operating as part of the primary care team have the potential to provide effective brief intervention services. While good PTSD screening and assessment measures are available for the primary care setting, there are currently no empirically supported primary care-based brief interventions for PTSD. This article reviews early research on the development and testing of primary care-based PTSD treatments and also reviews other brief PTSD interventions (i.e., telehealth and early intervention) that could be adapted to the primary care setting. Cognitive and behavioral therapies currently have the strongest evidence base for establishing an empirically supported brief intervention for PTSD in primary care. Recommendations are made for future research and clinical practice.
Open-Domain Audio-Visual Speech Recognition: A Deep Learning Approach
Automatic speech recognition (ASR) on video data naturally has access to two modalities: audio and video. In previous work, audio-visual ASR, which leverages visual features to help ASR, has been explored on restricted domains of videos. This paper aims to extend this idea to open-domain videos, for example videos uploaded to YouTube. We achieve this by adopting a unified deep learning approach. First, for the visual features, we propose to apply segment-level (utterance-level) features, instead of highly restrictive frame-level features. These visual features are extracted using deep learning architectures which have been pre-trained on computer vision tasks, e.g., object recognition and scene labeling. Second, the visual features are incorporated into ASR under deep learning based acoustic modeling. In addition to simple feature concatenation, we also apply an adaptive training framework to incorporate visual features in a more flexible way. On a challenging video transcribing task, audio-visual ASR using our proposed approach gets notable improvements in terms of word error rates (WERs), compared to ASR merely using speech features.
Addressing bias in machine learning algorithms: A pilot study on emotion recognition for intelligent systems
Recently, there has been an explosion of cloud-based services that enable developers to include a spectrum of recognition services, such as emotion recognition, in their applications. The recognition of emotions is a challenging problem, and research has been done on building classifiers to recognize emotion in the open world. Often, learned emotion models are trained on data sets that may not sufficiently represent a target population of interest. For example, many of these on-line services have focused on training and testing using a majority representation of adults and thus are tuned to the dynamics of mature faces. For applications designed to serve an older or younger age demographic, using the outputs from these pre-defined models may result in lower performance rates than when using a specialized classifier. Similar challenges with biases in performance arise in other situations where datasets in these large-scale on-line services have a non-representative ratio of the desired class of interest. We consider the challenge of providing application developers with the power to utilize pre-constructed cloud-based services in their applications while still ensuring satisfactory performance for their unique workload of cases. We focus on biases in emotion recognition as a representative scenario to evaluate an approach to improving recognition rates when an on-line pre-trained classifier is used for recognition of a class that may have a minority representation in the training set. We discuss a hierarchical classification approach to address this challenge and show that the average recognition rate associated with the most difficult emotion for the minority class increases by 41.5% and the overall recognition rate for all classes increases by 17.3% when using this approach.
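The hierarchical idea — route inputs believed to belong to the under-represented group to a specialized classifier and fall back to the generic cloud model otherwise — can be sketched as below; every component here is a hypothetical placeholder, not one of the services evaluated in the study.

```python
class HierarchicalEmotionClassifier:
    """Route minority-group inputs to a specialized model, others to a generic one."""

    def __init__(self, group_detector, specialized_model, generic_model):
        self.group_detector = group_detector        # e.g. predicts child vs. adult face
        self.specialized_model = specialized_model  # trained on the minority group
        self.generic_model = generic_model          # pre-trained generic classifier

    def predict(self, image):
        if self.group_detector(image) == "minority":
            return self.specialized_model(image)
        return self.generic_model(image)

# Usage with trivial stand-in callables in place of real models.
clf = HierarchicalEmotionClassifier(
    group_detector=lambda img: "minority",
    specialized_model=lambda img: "happy",
    generic_model=lambda img: "neutral",
)
print(clf.predict(object()))  # routed to the specialized model
```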
Ontology-Based Information Extraction for Knowledge Enrichment and Validation
Ontology is widely used as a means to represent and share common concepts and knowledge from a particular domain or specialisation. As a knowledge representation, the knowledge within an ontology must be able to evolve along with recent changes and updates within the community of practice. In this paper, we propose a new Ontology-based Information Extraction (OBIE) system that extends existing systems in order to enrich and validate an ontology. Our model enables the ontology to find related recent knowledge in the domain from communities, by exploiting their underlying knowledge as keywords. The knowledge extraction process uses ontology-based and pattern-based information extraction techniques. Not only does the extracted knowledge enrich the ontology, it also validates contradictory instance-related statements within the ontology that are no longer relevant to recent practices. We determine a confidence value during the enrichment and validation process to ensure the stability of the enriched ontology. We implement the model and present a case study in the herbal medicine domain. The enrichment and validation process shows promising results. Moreover, we analyse how our proposed model contributes to the achievement of a richer and more stable ontology.
A heuristic approach for detection of obfuscated malware
Obfuscated malware has become popular because of the clear benefits obfuscation brings: obfuscation tools are cheap and readily available, they are effective at evading signature-based anti-virus detection, and they prevent reverse engineers from understanding a malware's true nature. Regardless of the obfuscation method, a malware must deobfuscate its core code back to clear executable machine code so that the malicious portion can be executed. Thus, analyzing the obfuscation pattern before unpacking provides a chance to prevent the malware from further execution. In this paper, we propose a heuristic detection approach that targets obfuscated Windows binary files being loaded into memory, prior to execution. We perform a series of static checks on a binary file's PE structure for common traces of a packer or obfuscation, and gauge the binary's maliciousness with a simple risk rating mechanism. As a result, a newly created process, if flagged as possibly malicious by the static screening, will be prevented from further execution. This paper explores the foundation of this research, as well as the testing methodology and current results.
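In the spirit of the static screening described above, the sketch below computes a toy risk score from a PE file using the third-party pefile library: high section entropy (a common sign of packing) and unusual section names each add to the score. The specific indicators, thresholds, and the scoring itself are illustrative assumptions, not the paper's rules.

```python
import math
import sys
import pefile  # third-party: pip install pefile

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts if c)

def risk_score(path: str) -> int:
    """Toy risk rating: +1 per suspicious indicator (thresholds are illustrative)."""
    pe = pefile.PE(path)
    score = 0
    for section in pe.sections:
        if shannon_entropy(section.get_data()) > 7.0:          # packed/encrypted data is high-entropy
            score += 1
        if not section.Name.rstrip(b"\x00").startswith(b"."):  # unusual section name
            score += 1
    if pe.OPTIONAL_HEADER.AddressOfEntryPoint == 0:            # odd entry point
        score += 1
    return score

if __name__ == "__main__":
    print(risk_score(sys.argv[1]))
```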
Coagulation and ablation patterns of high-intensity focused ultrasound on a tissue-mimicking phantom and cadaveric skin
High-intensity focused ultrasound (HIFU) can be applied noninvasively to create focused zones of tissue coagulation on various skin layers. We performed a comparative study of HIFU, evaluating patterns of focused tissue coagulation and ablation upon application thereof. A tissue-mimicking (TM) phantom was prepared with bovine serum albumin and polyacrylamide hydrogel to evaluate the geometric patterns of HIFU-induced thermal injury zones (TIZs) for five different HIFU devices. Additionally, for each device, we investigated histologic patterns of HIFU-induced coagulation and ablation in serial sections of cadaveric skin of the face and neck. All HIFU devices generated remarkable TIZs in the TM phantom, with different geometric values of coagulation for each device. Most of the TIZs seemed to be separated into two or more tiny parts. In cadaveric skin, characteristic patterns of HIFU-induced ablation and coagulation were noted along the mid to lower dermis at the focal penetration depth of 3 mm and along subcutaneous fat to the superficial musculoaponeurotic system or the platysma muscle of the neck at 4.5 mm. Additionally, remarkable pre-focal areas of tissue coagulation were observed in the upper and mid dermis at the focal penetration depth of 3 mm and mid to lower dermis at 4.5 mm. For five HIFU devices, we outlined various patterns of HIFU-induced TIZ formation along pre-focal, focal, and post-focal areas of TM phantom and cadaveric skin of the face and neck.
Understanding VAEs in Fisher-Shannon Plane
In information theory, Fisher information and Shannon information (entropy) are used, respectively, to quantify the uncertainty associated with distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are, in most cases, jointly applied to the analysis of information behavior. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which illuminates the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the Fisher-Shannon plane, and demonstrate that representation learning and log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide a better comprehension of VAEs in tasks such as high-resolution reconstruction and representation learning, from the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed the Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results demonstrate its promise in improving reconstruction accuracy and avoiding the non-informative latent code that occurred in previous works.
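For reference, the two quantities discussed above have the standard textbook forms below (generic notation, not necessarily the paper's):

```latex
% Fisher information of a parametric family p_\theta and differential (Shannon) entropy of X:
I(\theta) \;=\; \mathbb{E}_{x \sim p_\theta}\!\left[\left(\frac{\partial}{\partial\theta}\,\log p_\theta(x)\right)^{\!2}\right],
\qquad
H(X) \;=\; -\int p(x)\,\log p(x)\,\mathrm{d}x .
```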
Hough-Transform and Extended RANSAC Algorithms for Automatic Detection of 3D Building Roof Planes from Lidar Data
Airborne laser scanning is broadly the most appropriate technique for rapidly acquiring high-density 3D data over a city. Once the 3D Lidar data are available, the next task is automatic data processing, with the major aim of constructing 3D building models. Among the numerous automatic reconstruction methods, techniques allowing the detection of 3D building roof planes are of crucial importance. Three main methods arise from the literature: region growing, Hough-transform and the Random Sample Consensus (RANSAC) paradigm. Since region growing algorithms are sometimes not very transparent and not homogeneously applied, this paper focuses only on the Hough-transform and the RANSAC algorithm. Their principles, their pseudocode (rarely detailed in the related literature) and complete analyses of both are presented in this paper. An analytic comparison of the two algorithms, in terms of processing time and sensitivity to point cloud characteristics, shows that despite the limitations encountered in both methods, the RANSAC algorithm is more efficient than the Hough-transform. Among other advantages, its processing time is negligible even when the input data size is very large. On the other hand, the Hough-transform is very sensitive to the segmentation parameter values. Therefore, the RANSAC algorithm has been chosen and extended to overcome its limitations. Its major limitation is that it searches for the best mathematical plane in the 3D building point cloud, even though this plane does not always represent a roof plane. The proposed extension therefore harmonizes the mathematical aspect of the algorithm with the geometry of a roof. Finally, it is shown that the extended approach provides very satisfying results, even in the case of very weak point density and for different levels of building complexity. Once the roof planes are successfully detected, automatic building modelling can be carried out.
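The RANSAC step referred to above can be pictured with the following generic plane-fitting sketch on a point cloud; the iteration count, distance threshold, and function name are assumptions for illustration, and the roof-aware extension proposed in the paper is not included.

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.05, rng=None):
    """Fit a single plane to an (N, 3) point cloud with basic RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.array([], dtype=int)
    best_model = None
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        dist = np.abs(points @ normal + d)     # point-to-plane distances
        inliers = np.flatnonzero(dist < dist_thresh)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Usage on a noisy synthetic cloud.
pts = np.random.rand(1000, 3) * [10, 10, 0.1]
model, inliers = ransac_plane(pts)
print(len(inliers), "inliers")
```

Run repeatedly on the points left after removing each detected plane's inliers, this yields one candidate plane per pass; the extension described above adds geometric checks so that accepted planes actually correspond to roof facets.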
Duration prediction using multi-level model for GPR-based speech synthesis
This paper introduces frame-based Gaussian process regression (GPR) into phone/syllable duration modeling for Thai speech synthesis. The GPR model is designed for predicting frame-level acoustic features using corresponding frame information, which includes the relative position in each unit of the utterance structure and linguistic information such as tone type and part of speech. Although GPR-based prediction can be applied to a phone duration model, using a phone duration model alone is not always sufficient to generate natural sounding speech. Specifically, in some languages including Thai, syllable durations affect the perception of sentence structure. In this paper, we propose a duration prediction technique using a multi-level model which includes syllable and phone levels for prediction. In the technique, syllable durations are predicted first, and then used as additional contexts in the phone-level model to generate phone durations for synthesis. Objective and subjective evaluation results show that GPR-based modeling with a multi-level model for duration prediction outperforms conventional HMM-based speech synthesis.
Harnessing the Crowdsourcing Power of Social Media for Disaster Relief
This article briefly describes the advantages and disadvantages of crowdsourcing applications applied to disaster relief coordination. It also discusses several challenges that must be addressed to make crowdsourcing a useful tool that can effectively facilitate the relief progress in coordination, accuracy, and security.
Characterization of apoptosis and autophagy through Bcl-2 and Beclin-1 immunoexpression in gestational trophoblastic disease
BACKGROUND The pathogenesis of Gestational Trophoblastic Disease (GTD) is not clearly known. OBJECTIVE In this study, immunoexpression of the proteins Bcl-2 and Beclin-1 in trophoblastic lesions and normal trophoblastic tissue was examined to study the mechanisms of apoptotic and autophagic cell death, with the aim of complementing the study of GTD pathogenesis. MATERIALS AND METHODS Bcl-2 and Beclin-1 immunoexpression were studied on complete hydatidiform mole, partial hydatidiform mole, invasive mole, choriocarcinoma and normal placenta slides. RESULTS The average total scores of Bcl-2 immunoexpression showed a decreasing trend, from partial hydatidiform mole (3.09), complete hydatidiform mole (2.36) and invasive mole (1.18) to choriocarcinoma (0), when compared to normal placenta (6). The results showed no significant difference in Beclin-1 immunoexpression total score between complete hydatidiform mole, partial hydatidiform mole and invasive mole, where the average total scores of Beclin-1 were low (2.27, 2.45 and 2.36); in contrast, choriocarcinoma showed strong Beclin-1 expression with an average total score of 4.57. CONCLUSION Bcl-2 expression decreases in line with the excessive proliferation of trophoblast cells in hydatidiform mole and leads to malignancy in invasive mole and choriocarcinoma. The decreased expression of Beclin-1, which leads to autophagy defects in complete hydatidiform mole, partial hydatidiform mole and invasive mole, shows the role of autophagy as a tumor suppressor, whereas strong Beclin-1 expression shows the survival role of autophagy in choriocarcinoma. The change of Bcl-2 activity as an anti-apoptotic factor and of Beclin-1 as a pro-autophagic factor plays a role in the pathogenesis of GTD.
Application-aware traffic scheduling for workload offloading in mobile clouds
Mobile Cloud Computing (MCC) bridges the gap between limited capabilities of mobile devices and the increasing complexity of mobile applications, by offloading the computational workloads from local devices to the cloud. Current research supports workload offloading through appropriate application partitioning and remote method execution, but generally ignores the impact of wireless network characteristics on such offloading. Wireless data transmissions incurred by remote method execution consume a large amount of additional energy during transmission intervals when the network interface stays in the high-power state, and deferring these transmissions increases the response delay of mobile applications. In this paper, we adaptively balance the tradeoff between energy efficiency and responsiveness of mobile applications by developing application-aware wireless transmission scheduling algorithms. We take both causality and run-time dynamics of application method executions into account when deferring wireless transmissions, so as to minimize the wireless energy cost and satisfy the application delay constraint with respect to the practical system contexts. Systematic evaluations show that our scheme significantly improves the energy efficiency of workload offloading over realistic smartphone applications.
The effects of surgically implanted acoustic transmitters on laboratory growth, survival and tag retention in hatchery yearling Chinook salmon
Telemetry has proven an effective means for studying the movement of fishes; however, biases associated with tagged animals require careful scrutiny if accurate conclusions are to be made from field studies. The objective of this study was to evaluate growth, survival, and tag retention in hatchery yearling Chinook salmon (Oncorhynchus tshawytscha) juveniles with intracoelomic surgically implanted acoustic transmitters representing 2.6 to 5.6% of body weight. The first trial consisted of three treatments: passive integrated transponder (PIT) tag-only (25 fish), acoustic-tag+PIT-tag (25 fish), and sham-surgery+PIT-tag (25 fish). There were no significant differences in relative growth rate (% change in weight day−1) among treatments over the 221 day trial. Survival in the acoustic-tag treatment (80%) was not significantly different from the PIT-tag-only and sham treatments (92 and 88%, respectively). The second trial consisted of three treatments: PIT-tag-only (22 fish), acoustic-tag+PIT-tag with absorbable sutures (12 fish) and acoustic-tag+PIT-tag with non-absorbable sutures (12 fish). There were no significant differences in relative growth rate among treatments over the 160 day trial. Survival in the second trial was 100%. Fish with absorbable sutures healed sooner and with less inflammation compared to fish with non-absorbable sutures. Tag retention was 100% in both trials. The results of this study suggest that acoustic transmitters of less than 5.6% of body weight can be effectively used in 1-year-old Chinook salmon.
Locality Preserving Projections
Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.
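For readers who want the formulation in compact form, the sketch below builds a k-nearest-neighbor adjacency graph with 0/1 weights and solves the resulting generalized eigenvalue problem; the neighborhood size, the unweighted graph, and the small regularizer are simplifying assumptions rather than the exact construction used in the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5):
    """Locality Preserving Projections. X has shape (n_samples, n_features)."""
    # Symmetric 0/1 adjacency graph over the samples.
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))                 # degree matrix
    L = D - W                                  # graph Laplacian
    # Generalized eigenproblem: (X^T L X) a = lambda (X^T D X) a
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # small regularizer for stability
    vals, vecs = eigh(A, B)
    # Projection directions correspond to the smallest eigenvalues.
    return vecs[:, :n_components]

# Usage: project data to 2 dimensions while preserving local neighborhoods.
X = np.random.rand(100, 10)
P = lpp(X, n_components=2)
Y = X @ P
print(Y.shape)  # (100, 2)
```

Because the projection matrix P is linear, it applies to unseen points as well, which is the property contrasted above with nonlinear methods defined only on the training data.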
An Arabic-Hebrew parallel corpus of TED talks
We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT, the Web inventory that repurposes the original content of the TED website in a way which is more convenient for MT researchers. The benchmark consists of about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately aligned and rearranged in sentences, for a total of about 3.5M tokens per language. Talks have been partitioned in train, development and test sets similarly in all respects to the MT tasks of the IWSLT 2016 evaluation campaign. In addition to describing the benchmark, we list the problems encountered in preparing it and the novel methods designed to solve them. Baseline MT results and some measures on sentence length are provided as an extrinsic evaluation of the quality of the benchmark.
A Pattern-Recognition Approach for Detecting Power Islands Using Transient Signals—Part II: Performance Evaluation
Part I of this paper describes the design and implementation of an islanding detection method based on transient signals. The proposed method utilizes discrete wavelet transform to extract features from transient current and voltage signals. A decision-tree classifier uses the energy content in the wavelet coefficients to distinguish islanding events from other transient generating events. The verification tests performed in Part I, for a two generator test system having a synchronous generator and a wind farm, showed more than 98% classification accuracy with 95% confidence and a response time of less than two cycles. In Part II, the proposed methodology is applied to an extended test system with a voltage-source converter-based dc source. The proposed relay's performance is compared with the existing passive islanding detection methods under different scenarios. Furthermore, the effect of noise on the performance of the proposed method is studied. The transient-based islanding detection methodology exhibits very high reliability and fast response compared to all other passive islanding detection methods and shows that the relay can be designed with a zero nondetection zone for a particular system.
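The feature pipeline described across Parts I and II can be pictured with the hedged sketch below: wavelet-decomposition energies of a transient waveform feed a decision-tree classifier. The wavelet family, decomposition depth, feature definition, and the random stand-in data are illustrative assumptions, not the relay's actual configuration.

```python
import numpy as np
import pywt  # third-party: PyWavelets
from sklearn.tree import DecisionTreeClassifier

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Energy of the detail coefficients at each decomposition level (illustrative features)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # skip the approximation band

# Hypothetical training data: rows are transient waveforms; label 1 = islanding, 0 = other event.
signals = np.random.randn(40, 1024)
labels = np.random.randint(0, 2, size=40)
X = np.vstack([wavelet_energy_features(s) for s in signals])
clf = DecisionTreeClassifier(max_depth=4).fit(X, labels)
print(clf.predict(X[:5]))
```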
Prevalence and attitudes on female genital mutilation/cutting in Egypt since criminalisation in 2008.
Female genital mutilation/cutting (FGM/C), which can result in severe pain, haemorrhage and poor birth outcomes, remains a major public health issue. The extent to which prevalence of and attitudes toward the practice have changed in Egypt since its criminalisation in 2008 is unknown. We analysed data from the 2005, 2008 and 2014 Egypt Demographic and Health Surveys to assess trends related to FGM/C. Specifically, we determined whether FGM/C prevalence among ever-married, 15-19-year-old women had changed from 2005 to 2014. We also assessed whether support for FGM/C continuation among ever-married reproductive-age (15-49 years) women had changed over this time period. The prevalence of FGM/C among adolescent women statistically significantly decreased from 94% in 2008 to 88% in 2014 (standard error [SE] = 1.5), after adjusting for education, residence and religion. Prevalence of support for the continuation of FGM/C also statistically significantly decreased from 62% in 2008 to 58% in 2014 (SE = 0.6). The prevalence of FGM/C among ever-married women aged 15-19 years in Egypt has decreased since its criminalisation in 2008, but continues to affect the majority of this subgroup. Likewise, support of FGM/C continuation has also decreased, but continues to be held by a majority of ever-married women of reproductive age.
Growth mindset development pattern
In this paper we present a pattern for growth mindset development. We believe that students can be taught to positively change their mindset, where experience, training, and personal effort can add to a student's unique genetic endowment. We draw on our many years of experience and synthesized facilitation methods and techniques to assess insight mentoring and to improve it through growth mindset development. These can help students make creative changes in their lives and see the world with new eyes in a new way. The pattern allows developing a growth mindset and improving our lives and the lives of those around us.
Design of Boost-Flyback Single-Stage PFC converter for LED power supply without electrolytic capacitor for energy-storage
Light emitting diodes (LEDs) are likely to be used for general lighting applications due to their high efficiency and longer life. This paper presents the concept of allowing large voltage ripple on the energy-storage element of the Boost-Flyback Single-Stage PFC converter in order to eliminate the electrolytic capacitor. Following the proposed design procedure, the single-stage PFC circuit with small energy storage capacitance can still achieve good output voltage regulation while preserving the desired input power factor. The detailed theoretical analysis and design procedure of the Single-Stage PFC converter are presented. The experimental results obtained on a 60W prototype converter, along with waveforms, are presented.
Crowdsourcing user studies with Mechanical Turk
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved in engaging users often requires practitioners to trade off between sample size, time requirements, and monetary costs. Micro-task markets, such as Amazon's Mechanical Turk, offer a potential paradigm for engaging a large number of users for low time and monetary costs. Here we investigate the utility of a micro-task market for collecting user measurements, and discuss design considerations for developing remote micro user evaluation tasks. Although micro-task markets have great potential for rapidly collecting user measurements at low costs, we found that special care is needed in formulating tasks in order to harness the capabilities of the approach.