βη-Complete Models for System F
We show that Friedman's proof of the existence of non-trivial βη-complete models of λ→ can be extended to System F. We isolate a set of conditions that are sufficient to ensure βη-completeness for a model of F (and α-completeness at the level of types), and we discuss which class of models we obtain. In particular, the model introduced in Barbanera and Berardi (1997), having as polymorphic maps exactly all possible Scott-continuous maps, is βη-complete, and is hence the first known complete non-syntactic model of F. In order to have a suitable framework in which to express the conditions and develop the proof, we also introduce the very natural notion of ‘polymax models’ of System F.
Issue of the genesis of recent sediments on the Barents Sea shelf
This paper is a reply to the comments of Epshtein et al. (2011a, 2011b) on our two previous publications (Krapivner, 2009a, 2009b) devoted to the genesis of recent sediments on the Barents Sea shelf. It is substantiated that the physical nature of the reflection of recent sediments in seismoacoustic records, a point of central importance for lithofacies analysis, is incorrectly interpreted by these opponents. The paper presents geochronological and paleomagnetic data confirming the invalidity of the popular concept linking the cover of poorly consolidated sediments to the epoch of the last deglaciation. We show that the opponents' claims about the redeposited character of the Pliocene-Quaternary marine biota of the diamictons and the glacial processing of their coarse-clastic material are incorrect. The diverse properties of recent sediments and their topography, artificially united into a complex of indicators of glacial paragenesis, are either simply explained in terms of ice-marine sedimentation natural for the Barents Sea or attributed to postsedimentary processes during intense neotectonic activity of the Barents Sea shelf.
Image Colorization Using Generative Adversarial Networks
Over the last decade, the process of automatic image colorization has been of significant interest for several application areas including restoration of aged or degraded images. This problem is highly ill-posed due to the large degrees of freedom during the assignment of color information. Many of the recent developments in automatic colorization involve images that contain a common theme or require highly processed data such as semantic maps as input. In our approach, we attempt to fully generalize the colorization procedure using a conditional Deep Convolutional Generative Adversarial Network (DCGAN). The network is trained over datasets that are publicly available such as CIFAR-10 and Places365. The results between the generative model and traditional deep neural networks are compared.
Information Flow Control Models in Peer-to-Peer Publish/Subscribe Systems
A publish/subscribe (PS) model is an event-driven model of a distributed system. In this paper, we consider a peer-to-peer (P2P) type of PS model in which each peer (process) can both publish and subscribe to events. Here, a peer publishes an event message, which is then notified to each target peer interested in the event. Publications and subscriptions are specified in terms of topics, as in topic-based PS systems. We newly discuss a topic-based access control (TBAC) model to prevent illegal information flow among peers in PS systems. Here, an access right is a pair "t, op" of a topic t and an operation op, which is either publish or subscribe. A peer is allowed to publish an event message with given topics, or to subscribe to given topics, only if those topics are granted to the peer. An event message e is notified to a peer pi if the publication of e and the subscription of pi share some common topic. If a peer pi publishes an event message e2 after receiving an event message e1, the message e2 may carry the event of e1, which pi is not allowed to publish; information in pi then illegally flows to another peer. We define the legal flow relation among peers. We then newly propose a subscription-based synchronization (SBS) protocol to prevent illegal information flow: a notification is banned if it may cause illegal information flow. We evaluate the SBS protocol in terms of the number of banned notifications.
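The banning rule described above can be sketched in a few lines. This is an illustrative interpretation, not the paper's protocol: the `Peer` class, the `carried` set, and the rule that a notification is banned when the event would carry topics the target is not subscribed to are all assumptions made for the sketch.

```python
# Toy sketch of a subscription-based banning rule for a topic-based
# publish/subscribe system. All names and data structures are assumptions.

class Peer:
    def __init__(self, name, pub_topics, sub_topics):
        self.name = name
        self.pub_topics = set(pub_topics)   # topics the peer may publish
        self.sub_topics = set(sub_topics)   # topics the peer subscribes to
        self.carried = set()                # topics of events received so far

def notify(publisher, event_topics, peers):
    """Deliver an event only where it causes no illegal information flow."""
    # The event implicitly carries topics the publisher has received
    # but is not itself allowed to publish.
    hidden = publisher.carried - publisher.pub_topics
    delivered = []
    for p in peers:
        if p is publisher or not (p.sub_topics & event_topics):
            continue  # not a target of this publication
        if hidden - p.sub_topics:
            continue  # banned: would leak topics not granted to p
        p.carried |= event_topics | publisher.carried
        delivered.append(p.name)
    return delivered
```

For example, if a peer that received topic t1 later publishes on t2, the notification to a peer granted only t2 is banned, because it would implicitly carry t1.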
Energy-Efficient Base-Stations Sleep-Mode Techniques in Green Cellular Networks: A Survey
Due to global climate change as well as the economic concerns of network operators, energy consumption of cellular network infrastructure, or “Green Cellular Networking,” has become a popular research topic. While energy savings can be achieved by adopting renewable energy resources or improving the design of certain hardware (e.g., the power amplifier) to make it more energy-efficient, the cost of purchasing, replacing, and installing new equipment (including manpower, transportation, disruption to normal operation, as well as associated energy and direct cost) is often prohibitive. By comparison, approaches that work on the operating protocols of the system do not require changes to the current network architecture, making them far less costly and easier to test and implement. In this survey, we first present facts and figures that highlight the importance of green mobile networking and then review existing green cellular networking research, with particular focus on techniques that incorporate the concept of the “sleep mode” in base stations. Sleep-mode techniques take advantage of changing traffic patterns on a daily or weekly basis and selectively switch lightly loaded base stations to low-energy-consumption modes. As base stations account for a large share of the energy consumed in cellular networks, these approaches have the potential to save a significant amount of energy, as shown in various studies. However, certain simplifying assumptions made in the published papers introduce inaccuracies. This review discusses these assumptions, in particular an assumption that ignores the effect of traffic-load-dependent factors on energy consumption; we show that accounting for this effect may lead to noticeably lower estimated savings than in models that ignore it. Finally, potential future research directions are discussed.
Second Nature once Removed: Time, Space and Representations
Taking the example of large-scale technological and organizational change in the nineteenth century, it is argued that time and space are reconfigured socially by infrastructural change, and that these reconfigurations are then mirrored in cultural and scientific texts. The key claim is that organization and technology need to be looked at together: dropping out one or the other prevents the uncovering of causal links.
A compact stair-climbing wheelchair with two 3-DOF legs and a 1-DOF base
Purpose – The purpose of this paper is to describe a compact wheelchair, which has two 3-DOF legs and a 1-DOF base (the total DOF of the leg system is 7) for stair-climbing, and wheels for driving on flat surfaces. Design/methodology/approach – The proposed wheelchair climbs stairs using the two 3-DOF legs with boomerang-shaped feet. The leg mechanisms fold into the compact wheelchair body when the wheelchair moves over flat surfaces. The paper also proposes a simple method for estimating stair shape using laser distance sensors, and a dual-motor driving system to increase joint power. Findings – The proposed wheelchair can climb stairs of arbitrary height and width by itself, even when they are slightly curved. During climbing, the trajectory of the seat position is linear to guarantee the comfort of the rider, and the wheelchair always maintains a stable posture to ensure stability in an emergency stop. Originality/value – The wheelchair mechanism with foldable legs and driving wheels enables smooth stair climbing, efficient flat-surface driving, and additional useful motions such as standing and tilting.
The ethics of human-chicken relationships in video games: the origins of the digital chicken
In this paper, we look at the historical place that chickens have held in media depictions and as entertainment, analyse several types of representations of chickens in video games, and draw out reflections on society in the light of these representations. We also look at real-life, modern historical, and archaeological evidence of chicken treatment and the evolution of social attitudes with regard to animal rights, and deconstruct the depiction of chickens in video games in this light.
ResearchGate versus Google Scholar: Which finds more early citations?
ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to that of other citation indexes, using 2675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate citation counts correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data from Google Scholar. Nevertheless, preprint sharing on ResearchGate is substantial enough for authors to take seriously.
Automatic Road Anomaly Detection Using Smart Mobile Device
Maintaining the quality of roadways is a major challenge for governments around the world. In particular, poor road surfaces pose a significant safety threat to motorists, especially when motorbikes make up a significant portion of roadway traffic. According to statistics from the Ministry of Justice in Taiwan, there were 220 claims for state compensation caused by road quality problems between 2005 and 2007, and the government paid a total of 113 million NTD in compensation. This research explores using a mobile phone with a tri-axial accelerometer to collect acceleration data while riding a motorcycle. The data is analyzed to detect road anomalies and to evaluate road quality. Motorcycle-based acceleration data was collected on twelve stretches of road, with a data log spanning approximately three hours and a total road length of about 60 kilometers. Both supervised and unsupervised machine learning methods are used to recognize road conditions. SVM learning is used to detect road anomalies and to identify their corresponding positions from labeled acceleration data; this method of road anomaly detection achieves a precision of 78.5%. Furthermore, to construct a model of smooth roads, unsupervised learning is used to learn anomaly thresholds by clustering the accelerometer data. The results are used to rank the quality of the road segments in the experiment. We compare the ranked list from the learned evaluator with the ranked list from human evaluators who rode along the same roadways during the test phase. Based on the Kendall tau rank correlation coefficient, the automatically ranked result exhibited excellent performance. Keywords: mobile device; machine learning; accelerometer; road surface anomaly; pothole.
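The agreement measure used above, the Kendall tau rank correlation coefficient, compares two rankings of the same items by counting concordant and discordant pairs. A minimal, self-contained sketch (the function name and ranking format are illustrative, and no tie handling is included):

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall tau between two rankings of the same items, each given as a
    list ordered from best to worst. Returns a value in [-1, 1]."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    concordant = discordant = 0
    for x, y in combinations(rank_a, 2):
        # A pair is concordant if both rankings order x and y the same way.
        agree = (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y])
        if agree > 0:
            concordant += 1
        elif agree < 0:
            discordant += 1
    n = len(rank_a)
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identical rankings give 1.0, fully reversed rankings give -1.0, so values near 1 indicate that the learned evaluator's ranking closely matches the human one.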
Batch Processing for Incremental FP-tree Construction
Frequent patterns are very important in knowledge discovery and data mining processes such as the mining of association rules and correlations. The prefix-tree-based approach is one of the contemporary approaches for mining frequent patterns. An FP-tree is a compact representation of a transaction database that contains frequency information for all relevant frequent patterns (FP) in a dataset. Since the introduction of the FP-growth algorithm for FP-tree construction, three major algorithms have been proposed, namely AFPIM, CATS tree, and CanTree, that adopt the FP-tree for incremental mining of frequent patterns. All three methods perform incremental mining by processing one transaction of the incremental database at a time and updating it into the FP-tree of the initial (original) database. In this paper, we propose a novel method that takes advantage of the FP-tree representation of the incremental transaction database for incremental mining. We propose the "Batch Incremental Tree (BIT)" algorithm to merge the FP-trees of two small consecutive durations into an FP-tree equivalent to the one obtained when the entire database is processed at once from the beginning of the first duration.
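The core of such a batch merge, summing counts along shared prefixes of two prefix trees, can be illustrated with a toy sketch. This is not the BIT algorithm itself: a real FP-tree merge must also reconcile item order, header tables, and node links, all of which are omitted here, and the class and function names are assumptions.

```python
# Toy prefix-tree merge illustrating the batch-merge idea (not BIT itself).

class Node:
    def __init__(self, item, count=0):
        self.item = item
        self.count = count
        self.children = {}  # item -> Node

    def insert(self, transaction):
        """Insert one transaction; items assumed already in canonical order."""
        node = self
        for item in transaction:
            child = node.children.setdefault(item, Node(item))
            child.count += 1
            node = child

def merge(a, b):
    """Merge tree b into tree a by summing counts along shared prefixes."""
    for item, b_child in b.children.items():
        a_child = a.children.setdefault(item, Node(item))
        a_child.count += b_child.count
        merge(a_child, b_child)
    return a
```

Merging the tree of a second duration into the tree of the first this way yields the same counts as inserting all transactions of both durations into one tree from the start.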
Individualness and Determinantal Point Processes for Pedestrian Detection
In this paper, we introduce individualness of detection candidates as a complement to objectness for pedestrian detection. Individualness assigns a single detection to each object out of the raw detection candidates given by either object proposals or sliding windows. We show that conventional approaches, such as non-maximum suppression, are sub-optimal since they suppress nearby detections using only detection scores. We use a determinantal point process combined with individualness to optimally select the final detections. It models each detection using its quality and its similarity to other detections based on the individualness. Detections with high detection scores and low correlations are then selected by measuring their probability using the determinant of a matrix, which is composed of quality terms on the diagonal entries and similarities on the off-diagonal entries. For concreteness, we focus on the pedestrian detection problem, as it is one of the most challenging problems due to frequent occlusions and unpredictable human motions. Experimental results demonstrate that the proposed algorithm works favorably against existing methods, including non-maximum suppression and a quadratic unconstrained binary optimization based method.
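The determinant-based scoring described above can be sketched with a brute-force subset search over a small candidate set. The kernel construction (quality-scaled similarities off-diagonal, squared qualities on the diagonal) follows the common DPP L-ensemble form; the specific quality and similarity values and the brute-force search are illustrative assumptions, not the paper's algorithm.

```python
from itertools import combinations

def det(m):
    """Determinant by Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    n = len(m)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def best_subset(quality, similarity, k):
    """Pick the size-k subset of detections maximizing det(L), where
    L[i][j] = quality[i] * similarity[i][j] * quality[j] (similarity
    has ones on its diagonal, so L's diagonal holds squared qualities)."""
    def score(idx):
        L = [[quality[i] * similarity[i][j] * quality[j] for j in idx]
             for i in idx]
        return det(L)
    return max(combinations(range(len(quality)), k), key=score)
```

With two near-duplicate detections and one distinct detection, the determinant rewards picking one of the duplicates plus the distinct one: high quality raises the diagonal, high similarity makes rows nearly dependent and drives the determinant toward zero.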
oPass: A User Authentication Protocol Resistant to Password Stealing and Password Reuse Attacks
Text passwords are the most popular form of user authentication on websites due to their convenience and simplicity. However, users' passwords are prone to being stolen and compromised under different threats and vulnerabilities. First, users often select weak passwords and reuse the same passwords across different websites. Routinely reusing passwords causes a domino effect: when an adversary compromises one password, she can exploit it to gain access to more websites. Second, typing passwords into untrusted computers exposes them to theft; an adversary can launch several password-stealing attacks, such as phishing, keyloggers, and malware, to snatch passwords. In this paper, we design a user authentication protocol named oPass which leverages a user's cellphone and the short message service to thwart password-stealing and password-reuse attacks. oPass requires only that each participating website possess a unique phone number, and it involves a telecommunication service provider in the registration and recovery phases. Through oPass, users need to remember only a single long-term password for login on all websites. After evaluating the oPass prototype, we believe oPass is efficient and affordable compared with conventional web authentication mechanisms.
Trimethoprim-sulfamethoxazole prophylaxis against urinary tract infection in the chronic spinal cord injury patient
Suppressive therapy with antibiotics has long been thought to decrease the number of complications from the neuropathic bladder in spinal cord injury patients, but it may also induce resistance to antibiotics which subsequently causes difficulties in treating symptomatic urinary tract infections. Forty-three chronic spinal cord injury patients were randomized to continue to receive daily trimethoprim-sulfamethoxazole (TMP-SMX) urinary tract prophylaxis versus discontinuing antibiotic prophylaxis. Patients were all at least 6 months after spinal cord injury. Patients were followed for a minimum of 3 months, with weekly catheter urine cultures. The difference in the colonization rate at onset and after 3 months (percent of cultures with asymptomatic bacteriuria) between the control and prophylaxis group was not statistically significant (P > 0.1). There was a significant decrease in the percentage of TMP-SMX resistant asymptomatic bacteriuria in the control group, 78.8%, compared to 94.1% in the suppressive group (P < 0.05). There was no significant difference in the number of symptomatic urinary tract infections following the withdrawal of suppressive therapy between the control group, 0.035/week, and the prophylaxis group, 0.043/week (P > 0.5). There was a larger percentage of TMP-SMX resistant symptomatic urinary tract infections in the treated group, 42.5% versus 37.5% in the control group, but the difference was not significant (P > 0.5). Irrespective of the method of bladder management, suppressive therapy with TMP-SMX did not reduce the incidence of symptomatic bacteriuria and did increase the percentage of cultures resistant to TMP-SMX in asymptomatic patients.
Comparison of the efficacy and systemic effects of 4 mg and 8 mg formulations of salbutamol controlled release in patients with asthma
The purpose of the present study was to compare the efficacy and systemic effects of 4 mg and 8 mg doses of salbutamol controlled release (SCR) after single dosing and at steady state in patients with asthma. Fifteen asthmatic patients (Age 36 y, FEV1 85% predicted) were given SCR 4 mg and 8 mg twice daily for 7 days in a randomised double-blind cross-over design, with at least 7 days washout between treatments. There were no differences between the bronchodilator effects of 4 mg and 8 mg doses. There was no evidence of tolerance to the bronchodilator effects after chronic dosing. Morning and evening PEFR measurements also showed improvements during treatment with SCR 4 mg and SCR 8 mg, although there were no differences between the two formulations. Both doses of SCR caused significant objective tremor responses which were maintained after chronic dosing. The 8 mg dose produced a larger tremor response after single dosing, but not at steady-state. Subjective tremor occurred in 7 patients with SCR 8 mg, and in 2 patients with SCR 4 mg. There were no cardiac arrhythmias on Holter ECG monitoring. These results suggest that the 8 mg dose of SCR was no more effective than the 4 mg formulation, and was associated with more systemic adverse effects.
Patient Privacy in Paralinguistic Tasks
Recent developments in cryptography, and in particular in fully homomorphic encryption (FHE), have allowed for the development of new privacy-preserving machine learning schemes. In this paper, we show how these schemes can be applied to the automatic assessment of speech affected by medical conditions, allowing for patient privacy in diagnosis and monitoring scenarios. More specifically, we present results for the assessment of the degree of Parkinson's disease, the detection of a cold, and both the detection and the assessment of the degree of depression. To this end, we use a neural network in which all operations are performed in an FHE context. This implies replacing the activation functions with linear and second-degree polynomials, as only additions and multiplications are viable. Furthermore, to guarantee that the inputs of these activation functions fall within the convergence interval of the approximation, a batch normalization layer is introduced before each activation function. After training the network with unencrypted data, the resulting model is employed in an encrypted version of the network to produce encrypted predictions. Our tests show that the use of this framework yields results with little to no performance degradation in comparison to the baselines produced for the same datasets.
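Since FHE admits only additions and multiplications, a non-polynomial activation must be replaced by a low-degree polynomial fit over a bounded interval (which the batch normalization layer helps enforce). A self-contained sketch of such a degree-2 least-squares fit; the target function (ReLU), the interval, and all names are illustrative assumptions, not the paper's procedure.

```python
# Least-squares fit of a + b*x + c*x**2 to a function f on [lo, hi],
# via the 3x3 normal equations solved by Gaussian elimination.

def fit_quadratic(f, lo, hi, samples=200):
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    # Normal equations A p = v for the monomial basis {1, x, x^2}.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    v = [sum(f(x) * x ** i for x in xs) for i in range(3)]
    # Forward elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            fac = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= fac * A[i][c]
            v[r] -= fac * v[i]
    # Back substitution.
    coeff = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        coeff[i] = (v[i] - sum(A[i][j] * coeff[j]
                               for j in range(i + 1, 3))) / A[i][i]
    return coeff  # [a, b, c]

# Example: a quadratic stand-in for ReLU on an assumed interval [-4, 4].
relu_coeff = fit_quadratic(lambda x: max(0.0, x), -4.0, 4.0)
```

The resulting polynomial can be evaluated under encryption, since it uses only additions and multiplications, provided the normalized inputs stay inside the fitted interval.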
Impact of bronchiectasis and trapped air on quality of life and exacerbations in cystic fibrosis.
Cystic fibrosis (CF) is primarily characterised by bronchiectasis and trapped air on chest computed tomography (CT). The revised Cystic Fibrosis Questionnaire respiratory symptoms scale (CFQ-R RSS) measures health-related quality of life. To validate bronchiectasis, trapped air and CFQ-R RSS as outcome measures, we investigated correlations and predictive values for pulmonary exacerbations. CF patients (aged 6-20 years) underwent CT, CFQ-R RSS and 1-year follow-up. Bronchiectasis and trapped air were scored using the CF-CT scoring system. Correlation coefficients and backward multivariate modelling were used to identify predictors of pulmonary exacerbations. 40 children and 32 adolescents were included. CF-CT bronchiectasis (r = -0.38, p<0.001) and CF-CT trapped air (r = -0.35, p = 0.003) correlated with CFQ-R RSS. Pulmonary exacerbations were associated with: bronchiectasis (rate ratio 1.10, 95% CI 1.02-1.19; p = 0.009), trapped air (rate ratio 1.02, 95% CI 1.00-1.05; p = 0.034) and CFQ-R RSS (rate ratio 0.95, 95% CI 0.91-0.98; p = 0.002). The CFQ-R RSS was an independent predictor of pulmonary exacerbations (rate ratio 0.96, 95% CI 0.94-0.97; p<0.001). Bronchiectasis, trapped air and CFQ-R RSS were associated with pulmonary exacerbations. The CFQ-R RSS was an independent predictor. This study further validated bronchiectasis, trapped air and CFQ-R RSS as outcome measures in CF.
Liking vs. wanting food: Importance for human appetite control and weight regulation
The current train of thought in appetite research favours an interest in non-homeostatic, or hedonic (reward), mechanisms in relation to overconsumption and energy balance. This tendency is supported by advances in neurobiology that precede the emergence of a new conceptual approach to reward, in which affect and motivation (liking and wanting) can be seen as the major forces guiding human eating behaviour. In this review, current progress in applying the processes of liking and wanting to the study of human appetite is examined by discussing the following issues: How can these concepts be operationalised for use in human research so as to reflect the neural mechanisms by which they may be influenced? Do liking and wanting operate independently to produce functionally significant changes in behaviour? Can liking and wanting be truly separated experimentally, or will an expression of one inevitably contain elements of the other? The review re-examines selected human appetite research before exploring more recent methodological approaches to the study of liking and wanting in appetite control. In addition, some theoretical developments are described in four diverse models that may enhance current understanding of the role of these processes in guiding ingestive behaviour. Finally, the implications of a dual-process modulation of food reward for weight gain and obesity are discussed. The review concludes that the processes of liking and wanting are likely to have independent roles in characterising susceptibility to weight gain. Further research into the dissociation of liking and wanting through implicit and explicit levels of processing would help to disclose the relative importance of these components of reward for appetite control and weight regulation.
Listen to the Natives
Educators have slid into the 21st century—and into the digital age—still doing a great many things the old way. It's time for education leaders to raise their heads above the daily grind and observe the new landscape that's emerging. Recognizing and analyzing its characteristics will help define the education leadership with which we should be providing our students, both now and in the coming decades.
THE INSTITUTIONAL FOUNDATIONS OF PUBLIC POLICY: A TRANSACTIONS APPROACH WITH APPLICATION TO ARGENTINA
Public policies are the outcomes of complex intertemporal exchanges among politicians. The basic institutional characteristics of a country constitute the framework within which those transactions are accomplished. We develop a transactions theory to understand the ways in which political institutions affect the transactions that political actors are able to undertake, and hence the policies that emerge. We argue that Argentina is a case in which the functioning of political institutions has been such that it prevented the capacity to undertake efficient intertemporal political exchanges. We use positive political theory and transaction cost economics to explain the workings of Argentine political institutions, and to show how that maps into low-quality policies.
The "booty call": a compromise between men's and women's ideal mating strategies.
Traditionally, research on romantic and sexual relationships has focused on one-night stands and monogamous pairs. However, as the result of men and women pursuing their ideal relationship types, various compromise relationships may emerge. One such compromise is explored here: the "booty call." The results of an act-nomination and frequency study of college students provided an initial definition and exploration of this type of relationship. Booty calls tend to utilize various communication mediums to facilitate sexual contact among friends who, for men, may represent low-investment, attractive sexual partners and, for women, may represent attractive test-mates. The relationship is discussed as a compromise between men's and women's ideal mating strategies that allows men greater sexual access and women an ongoing opportunity to evaluate potential long-term mates.
ENTREPRENEURSHIP IN THE LARGE CORPORATION: A LONGITUDINAL STUDY OF HOW ESTABLISHED FIRMS CREATE BREAKTHROUGH INVENTIONS
We present a model that explains how established firms create breakthrough inventions. We identify three organizational pathologies that inhibit breakthrough invention: the familiarity trap – favoring the familiar; the maturity trap – favoring the mature; and the propinquity trap – favoring search for solutions near to existing solutions. We argue that by experimenting with novel technologies (those in which the firm lacks prior experience), emerging technologies (those recent or newly developed in the industry), and pioneering technologies (those that do not build on any existing technologies), firms can overcome these traps and create breakthrough inventions. Empirical evidence from the chemicals industry supports our model. Copyright © 2001 John Wiley & Sons, Ltd.
Detecting Anomalies in Activities of Daily Living of Elderly Residents via Energy Disaggregation and Cox Processes
Monitoring the health of the elderly living independently in their own homes is a key issue in building sustainable healthcare models which support a country's ageing population. Existing approaches have typically proposed remotely monitoring the behaviour of a household's occupants through the use of additional sensors. However the costs and privacy concerns of such sensors have significantly limited their potential for widespread adoption. In contrast, in this paper we propose an approach which detects Activities of Daily Living, which we use as a proxy for the health of the household residents. Our approach detects appliance usage from existing smart meter data, from which the unique daily routines of the household occupants are learned automatically via a log Gaussian Cox process. We evaluate our approach using two real-world data sets, and show it is able to detect over 80% of kettle uses while generating less than 10% false positives. Furthermore, our approach allows earlier interventions in households with a consistent routine and fewer false alarms in the remaining households, relative to a fixed-time intervention benchmark.
Multitask Learning for Mental Health Conditions with Limited Social Media Data
Language contains information about the author’s demographic attributes as well as their mental state, and has been successfully leveraged in NLP to predict either one alone. However, demographic attributes and mental states also interact with each other, and we are the first to demonstrate how to use them jointly to improve the prediction of mental health conditions across the board. We model the different conditions as tasks in a multitask learning (MTL) framework, and establish for the first time the potential of deep learning in the prediction of mental health from online user-generated text. The framework we propose significantly improves over all baselines and single-task models for predicting mental health conditions, with particularly significant gains for conditions with limited data. In addition, our best MTL model can predict the presence of conditions (neuroatypicality) more generally, further reducing the error of the strong feed-forward baseline.
Sequence-to-Sequence Voice Conversion with Similarity Metric Learned Using Generative Adversarial Networks
We propose a training framework for sequence-to-sequence voice conversion (SVC). A well-known problem with conventional VC frameworks is that acoustic-feature sequences generated by a converter tend to be over-smoothed, resulting in buzzy-sounding speech. This is because a particular form of similarity metric or distribution is assumed for parameter training of the acoustic model, so that the generated feature sequence that fits the training target example on average is considered optimal. This over-smoothing occurs as long as a manually constructed similarity metric is used. To overcome this limitation, our proposed SVC framework uses a similarity metric implicitly derived from a generative adversarial network, enabling the measurement of distance in a high-level abstract space. This enables the model to mitigate the over-smoothing problem that arises in the low-level data space. Furthermore, we use convolutional neural networks to model long-range context dependencies. This also gives the similarity metric a shift-invariant property, making the model robust against misalignment errors in the parallel data. We tested our framework on a non-native-to-native VC task. The experimental results revealed that the use of the proposed framework had a certain effect in improving naturalness, clarity, and speaker individuality.
A Discrete-Time Direct Torque Control for Direct-Drive PMSG-Based Wind Energy Conversion Systems
This paper proposes a novel flux-space-vector-based direct torque control (DTC) scheme for permanent-magnet synchronous generators (PMSGs) used in variable-speed direct-drive wind energy conversion systems (WECSs). The discrete-time control law, which is derived from the perspective of flux space vectors and load angle, predicts the desired stator flux vector for the next time-step with the torque and stator flux information only. The space vector modulation (SVM) is then employed to generate the reference voltage vector, leading to a fixed switching frequency, as well as lower flux and torque ripples, when compared to the conventional DTC. Compared with other SVM-based DTC methods in the literature, the proposed DTC scheme eliminates the use of proportional-integral regulators and is less dependent on machine parameters, e.g., stator inductances and permanent-magnet flux linkage, while the main advantages of the DTC, e.g., fast dynamic response and no need of coordinate transform, are preserved. The proposed DTC scheme is applicable for both nonsalient-pole and salient-pole PMSGs. The overall control scheme is simple to implement and is robust to parameter uncertainties and variations of the PMSGs. The effectiveness of the proposed discrete-time DTC scheme is verified by simulation and experimental results on a 180-W salient-pole PMSG and a 2.4-kW nonsalient-pole PMSG used in variable-speed direct-drive WECSs.
Image feature detection and matching in underwater conditions
The main challenge in underwater imaging and image analysis is to overcome the effects of blurring due to the strong scattering of light by the water and its constituents. This blurring adds complexity to already challenging problems like object detection and localization. The current state-of-the-art approaches for object detection and localization normally involve two components: (a) a feature detector that extracts a set of feature points from an image, and (b) a feature matching algorithm that tries to match the feature points detected from a target image to a set of template features corresponding to the object of interest. A successful feature matching indicates that the target image also contains the object of interest. For underwater images, the target image is taken in underwater conditions while the template features are usually extracted from one or more training images that are taken out-of-water or in different underwater conditions. In addition, the objects in the target image and the training images may show different poses, including rotation, scaling, translation transformations, and perspective changes. In this paper we investigate the effects of various underwater point spread functions on the detection of image features using many different feature detectors, and how these functions affect the capability of these features when they are used for matching and object detection. This research provides insight to further develop robust feature detectors and matching algorithms that are suitable for detecting and localizing objects from underwater images.
InTml: a description language for VR applications
We present the Interaction Technique Markup Language (InTml), a profile on top of the core X3D that describes 3D interaction techniques (InTs) and hardware platforms. InTml makes 3D InTs easier to understand, compare, and integrate in complete virtual reality (VR) applications. InTml can be used as a front end for any VR toolkit, so InTml documents that plug together 3D InTs, VR objects, and devices can be fully described and executed.
Reach and grasp by people with tetraplegia using a neurally controlled robotic arm
Paralysis following spinal cord injury, brainstem stroke, amyotrophic lateral sclerosis and other disorders can disconnect the brain from the body, eliminating the ability to perform volitional movements. A neural interface system could restore mobility and independence for people with paralysis by translating neuronal activity directly into control signals for assistive devices. We have previously shown that people with long-standing tetraplegia can use a neural interface system to move and click a computer cursor and to control physical devices. Able-bodied monkeys have used a neural interface system to control a robotic arm, but it is unknown whether people with profound upper extremity paralysis or limb loss could use cortical neuronal ensemble signals to direct useful arm actions. Here we demonstrate the ability of two people with long-standing tetraplegia to use neural interface system-based control of a robotic arm to perform three-dimensional reach and grasp movements. Participants controlled the arm and hand over a broad space without explicit training, using signals decoded from a small, local population of motor cortex (MI) neurons recorded from a 96-channel microelectrode array. One of the study participants, implanted with the sensor 5 years earlier, also used a robotic arm to drink coffee from a bottle. Although robotic reach and grasp actions were not as fast or accurate as those of an able-bodied person, our results demonstrate the feasibility for people with tetraplegia, years after injury to the central nervous system, to recreate useful multidimensional control of complex devices directly from a small sample of neural signals.
On the Superiority of Anglo-American Literature
In Dialogues, Deleuze contrasts French and Anglo-American literatures, arguing that the French are tied to hierarchies, origins, manifestos and personal disputes, whereas the English and Americans discover a line of flight that escapes hierarchies, and abandons questions of origins, schools and personal alliances, instead discovering a collective process of ongoing invention, without beginning or determinate end. Deleuze especially appreciates American writers, and above all Herman Melville. What ultimately distinguishes American from English literature is its pragmatic, democratic commitment to sympathy and camaraderie on the open road. For Deleuze, the American literary line of flight is toward the West, but this orientation reflects his almost exclusive focus on writers of European origins. If one turns to Chinese-American literature, the questions of a literary geography become more complex. Through an examination of works by Maxine Hong Kingston and Tao Lin, some of these complexities are detailed.
The Myth of Social Action.
1. Introduction 2. Action reported missing in action theory 3. Action and social action 4. Action versus social action 5. The rise of social situationalism 6. The argument by denial 7. Accounts and actions 8. The argument by exclusion 9. The argument by incorporation 10. The 'learning everything from others' thesis 11. The communicative act paradigm 12. The linguistic turn for the worse 13. The myth of social action 14. The obstacle which is social situationalism 15. Bringing action back in.
Large-Scale Video Classification with Convolutional Neural Networks
Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in the time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).
Efficient terahertz electro-absorption modulation employing graphene plasmonic structures
We propose and discuss terahertz (THz) electro-absorption modulators based on graphene plasmonic structures. The active device consists of a self-gated pair of graphene layers, which are patterned into structures supporting THz plasmonic resonances. These structures allow efficient control of the effective THz optical conductivity, and thus absorption, even at frequencies much higher than the Drude roll-off in graphene, where most previously proposed graphene-based devices become inefficient. Our analysis shows that reflectance-based device configurations, engineered so that the electric field is enhanced in the active graphene pair, could achieve very high modulation depth, approaching 100%, over a wide frequency range up to tens of THz.
Relationships not leadership sustain successful organisations
For over a century, managers and academics have been captivated by the relationship between organisational leadership styles and success. The quest to uncover the secret of successful leadership continues, however. Academics have debated the attributes of various leadership styles and the feasibility of training leaders. Today, the literature suggests the concept of leadership throughout the organisation, implying a move from ‘leaders and followers’ to leaders as inspirational players. This paper will argue that the success of an organisation is vested in the formation of sustainable relationships, with the primary purpose of leadership being to influence the feelings and emotions of those associated with the organisation; to create the emotional heart of the organisation and thus to determine the tenor of the relationships between the people inside and outside the organisation.
Patient no-show predictive model development using multiple data sources for an effective overbooking approach.
BACKGROUND Patient no-shows in outpatient delivery systems remain problematic. The negative impacts include underutilized medical resources, increased healthcare costs, decreased access to care, and reduced clinic efficiency and provider productivity. OBJECTIVE To develop an evidence-based predictive model for patient no-shows, and thus improve overbooking approaches in outpatient settings to reduce the negative impact of no-shows. METHODS Ten years of retrospective data were extracted from a scheduling system and an electronic health record system from a single general pediatrics clinic, consisting of 7,988 distinct patients and 104,799 visits along with variables regarding appointment characteristics, patient demographics, and insurance information. Descriptive statistics were used to explore the impact of variables on show or no-show status. Logistic regression was used to develop a no-show predictive model, which was then used to construct an algorithm to determine the no-show threshold that calculates a predicted show/no-show status. This approach aims to overbook an appointment where a scheduled patient is predicted to be a no-show. The approach was compared with two commonly-used overbooking approaches to demonstrate the effectiveness in terms of patient wait time, physician idle time, overtime and total cost. RESULTS From the training dataset, the optimal error rate is 10.6% with a no-show threshold of 0.74. This threshold successfully predicts the validation dataset with an error rate of 13.9%. The proposed overbooking approach demonstrated a significant reduction of at least 6% on patient waiting, 27% on overtime, and 3% on total costs compared to other common flat-overbooking methods. CONCLUSIONS This paper demonstrates an alternative way to accommodate overbooking, accounting for the prediction of an individual patient's show/no-show status.
The predictive no-show model leads to a dynamic overbooking policy that could improve patient waiting, overtime, and total costs in a clinic day while maintaining a full scheduling capacity.
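The threshold-based overbooking decision described above can be sketched in a few lines. This is an illustrative toy, not the authors' model: the two features, the synthetic data, and the training loop are invented for the example; only the 0.74 no-show threshold comes from the abstract.

```python
import math
import random

# Toy sketch of the abstract's idea: fit a logistic model of no-show
# probability, then flag appointments whose predicted probability exceeds
# the reported 0.74 threshold as overbooking candidates. Features and
# data below are hypothetical.
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical features in [0, 1]: normalized lead time, prior no-show rate
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if 0.6 * a + 0.8 * b + random.gauss(0, 0.2) > 0.7 else 0
     for a, b in X]

w, b0, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(300):  # plain stochastic gradient descent on log-loss
    for xi, yi in zip(X, y):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b0) - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b0 -= lr * err

THRESHOLD = 0.74  # no-show threshold reported in the abstract

def predict_no_show(x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b0) >= THRESHOLD

overbook = [i for i, xi in enumerate(X) if predict_no_show(xi)]
print(f"{len(overbook)} of {len(X)} slots flagged for overbooking")
```

In a clinic schedule, each flagged slot would then be double-booked, which is the dynamic overbooking policy the abstract evaluates against flat overbooking.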
Enabling Public Auditability and Data Dynamics for Storage Security in Cloud Computing
Cloud Computing has been envisioned as the next-generation architecture of IT Enterprise. It moves the application software and databases to centralized large data centers, where the management of the data and services may not be fully trustworthy. This unique paradigm brings about many new security challenges, which have not been well understood. This work studies the problem of ensuring the integrity of data storage in Cloud Computing. In particular, we consider the task of allowing a third party auditor (TPA), on behalf of the cloud client, to verify the integrity of the dynamic data stored in the cloud. The introduction of the TPA relieves the client of the burden of auditing whether the data stored in the cloud are indeed intact, which can be important in achieving economies of scale for Cloud Computing. The support for data dynamics via the most general forms of data operation, such as block modification, insertion, and deletion, is also a significant step toward practicality, since services in Cloud Computing are not limited to archive or backup data only. While prior works on ensuring remote data integrity often lack support for either public auditability or dynamic data operations, this paper achieves both. We first identify the difficulties and potential security problems of direct extensions with fully dynamic data updates from prior works and then show how to construct an elegant verification scheme for the seamless integration of these two salient features in our protocol design. In particular, to achieve efficient data dynamics, we improve the existing proof of storage models by manipulating the classic Merkle Hash Tree construction for block tag authentication. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signature to extend our main result into a multiuser setting, where the TPA can perform multiple auditing tasks simultaneously.
Extensive security and performance analysis show that the proposed schemes are highly efficient and provably secure.
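The Merkle Hash Tree mechanism mentioned above can be illustrated with a minimal sketch. This is not the paper's protocol (which authenticates block tags, supports dynamic updates, and adds bilinear aggregate signatures); it only shows the core idea that a verifier holding just the root hash can check any block against a logarithmic-size proof. The block contents and the power-of-two block count are assumptions of the example.

```python
import hashlib

# Toy Merkle Hash Tree: the verifier keeps only the root; the prover
# supplies the sibling hash on each level of the path to a leaf.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the tree as a list of levels, leaves first.
    Assumes len(blocks) is a power of two."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Collect (sibling hash, sibling-is-left) for each level below the root."""
    path = []
    for level in levels[:-1]:
        sib = idx ^ 1
        path.append((level[sib], sib < idx))
        idx //= 2
    return path

def verify(root, block, path):
    node = h(block)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

blocks = [b"block0", b"block1", b"block2", b"block3"]
levels = build_tree(blocks)
root = levels[-1][0]
assert verify(root, b"block2", prove(levels, 2))        # intact block passes
assert not verify(root, b"tampered", prove(levels, 2))  # modified block fails
```

A proof for one block among n blocks contains only log2(n) hashes, which is what makes third-party auditing of large outsourced files practical.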
Superior cardiovascular effect of aerobic interval training versus moderate continuous training in heart failure patients: a randomized study.
BACKGROUND Exercise training reduces the symptoms of chronic heart failure. Which exercise intensity yields maximal beneficial adaptations is controversial. Furthermore, the incidence of chronic heart failure increases with advanced age; it has been reported that 88% and 49% of patients with a first diagnosis of chronic heart failure are >65 and >80 years old, respectively. Despite this, most previous studies have excluded patients with an age >70 years. Our objective was to compare training programs with moderate versus high exercise intensity with regard to variables associated with cardiovascular function and prognosis in patients with postinfarction heart failure. METHODS AND RESULTS Twenty-seven patients with stable postinfarction heart failure who were undergoing optimal medical treatment, including beta-blockers and angiotensin-converting enzyme inhibitors (aged 75.5±11.1 years; left ventricular [LV] ejection fraction 29%; VO2peak 13 mL·kg⁻¹·min⁻¹) were randomized to either moderate continuous training (70% of highest measured heart rate, i.e., peak heart rate) or aerobic interval training (95% of peak heart rate) 3 times per week for 12 weeks or to a control group that received standard advice regarding physical activity. VO2peak increased more with aerobic interval training than moderate continuous training (46% versus 14%, P<0.001) and was associated with reverse LV remodeling. LV end-diastolic and end-systolic volumes declined with aerobic interval training only, by 18% and 25%, respectively; LV ejection fraction increased 35%, and pro-brain natriuretic peptide decreased 40%. Improvement in brachial artery flow-mediated dilation (endothelial function) was greater with aerobic interval training, and mitochondrial function in lateral vastus muscle increased with aerobic interval training only. The MacNew global score for quality of life in cardiovascular disease increased in both exercise groups. No changes occurred in the control group.
CONCLUSIONS Exercise intensity was an important factor for reversing LV remodeling and improving aerobic capacity, endothelial function, and quality of life in patients with postinfarction heart failure. These findings may have important implications for exercise training in rehabilitation programs and future studies.
Information Classification Enablers
Data warehouse architecture classification
The purpose of this study is to give an overview and comparison of the best-known data warehouse architectures. Single-layer, two-layer, and three-layer architectures are structure-oriented, distinguished by the number of layers the architecture uses. In the independent data marts, bus, hub-and-spoke, centralized, and distributed architectures, the main layers are combined in different ways. The listed data warehouse architectures are compared on the basis of organizational structure, with their similarities and differences. A second comparison examines information quality (consistency, completeness, accuracy) and system quality (integration, flexibility, scalability). The bus, hub-and-spoke and centralized data warehouse architectures received the highest scores in the information and system quality assessment.
Image quality and radiation exposure with prospectively ECG-triggered axial scanning for coronary CT angiography: the multicenter, multivendor, randomized PROTECTION-III study.
OBJECTIVES The purpose of this study was to evaluate image quality and radiation dose using a prospectively electrocardiogram (ECG)-triggered axial scan protocol compared with standard retrospective ECG-gated helical scanning for coronary computed tomography angiography. BACKGROUND Concerns have been raised regarding radiation exposure during coronary computed tomography angiography. Although the use of prospectively ECG-triggered axial scan protocols may effectively lower radiation dose compared with helical scanning, it is unknown whether image quality is maintained in a clinical setting. METHODS In a prospective, multicenter, multivendor trial, 400 patients with low and stable heart rates were randomized to either an axial or a helical coronary computed tomography angiography scan protocol. The primary endpoint was to demonstrate noninferiority in image quality with the axial scan protocol, which was assessed on a 4-point scale (1 = nondiagnostic, 4 = excellent image quality). Secondary endpoints included radiation dose and the rate of downstream testing during 30-day follow-up. RESULTS Image quality in patients scanned with the axial scan protocol (score 3.36 ± 0.59) was not inferior compared with helical scan protocols (3.37 ± 0.59) (p for noninferiority <0.004). Axial scanning was associated with a 69% reduction in radiation exposure (dose-length product [estimated effective dose] 252 ± 147 mGy · cm [3.5 ± 2.1 mSv] vs. 802 ± 419 mGy · cm [11.2 ± 5.9 mSv] for axial vs. helical scan protocols, p < 0.001). The rate of downstream testing did not differ (13.8% vs. 15.9% for axial vs. helical scan protocols, p = 0.555). CONCLUSIONS In patients with stable and low heart rates, the prospectively ECG-triggered axial scan protocol maintained image quality but reduced radiation exposure by 69% compared with helical scanning. Axial computed tomography data acquisition should be strongly recommended in suitable patients to avoid unnecessarily high radiation exposure.
Balance of power: dynamic thermal management for Internet data centers
Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impacts the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers.
Financial forecasting using ANFIS networks with Quantum-behaved Particle Swarm Optimization
To be successful in financial market trading it is necessary to correctly predict future market trends. Most professional traders use technical analysis to forecast future market prices. In this paper, we present a new hybrid intelligent method to forecast financial time series, especially for the Foreign Exchange Market (FX). To emulate the way real traders make predictions, this method uses both historical market data and chart patterns to forecast market trends. First, a full wavelet decomposition of the time series provides the input data for an Adaptive Network-based Fuzzy Inference System (ANFIS) that forecasts future market prices, with Quantum-behaved Particle Swarm Optimization (QPSO) used to tune the ANFIS membership functions. The second part of this paper proposes a novel hybrid Dynamic Time Warping (DTW)-Wavelet Transform (WT) method for automatic pattern extraction. The results indicate that the presented hybrid method is very useful and effective for financial price forecasting and financial pattern extraction.
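The Dynamic Time Warping component mentioned above is a classic dynamic-programming algorithm; a minimal version is sketched below. The wavelet-transform stage of the paper's hybrid method is omitted, and the toy price sequences are invented for illustration.

```python
# Classic O(n*m) dynamic-programming DTW distance with absolute-difference
# cost. DTW aligns sequences that are locally stretched or compressed in
# time, which is why it suits chart-pattern matching.

def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = best cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# a pattern matches a time-stretched copy of itself far better than noise
pattern = [1.0, 2.0, 3.0, 2.0, 1.0]
stretched = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 2.0, 1.0]
assert dtw(pattern, stretched) < dtw(pattern, [3.0, 1.0, 3.0, 1.0, 3.0])
```

In a pattern-extraction pipeline, a candidate window of prices would be flagged as a chart pattern when its DTW distance to a stored template falls below a chosen threshold.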
SSR markers associated for late leaf spot disease resistance by bulked segregant analysis in groundnut (Arachis hypogaea L.)
Late leaf spot (LLS) caused by Phaeoisariopsis personata is the major foliar disease that reduces the pod yield and severely affects the fodder and seed quality in groundnut. Molecular markers linked with LLS resistance can improve the process of identifying resistant genotypes. In the present study, an LLS-susceptible genotype (TMV 2) and an LLS-resistant genotype (COG 0437) were crossed and their F2 population was used for marker analysis. The phenotypic mean data on F2:3 progenies were used as the phenotype. The parents were surveyed with 77 SSR (Simple Sequence Repeat) primers to identify polymorphic markers. Nine SSR primers were found to be polymorphic between the parents TMV 2 and COG 0437. These markers were utilized for bulked segregant analysis (BSA). Among the polymorphic SSR markers, three primers, viz., PM 375₁₆₂, pPGPseq5D5₂₂₀ and PM 384₁₀₀, were able to distinguish the resistant and susceptible bulks and individuals for LLS. In single marker analysis, the markers PM 375, PM 384, pPGPseq5D5, PM 137, PM 3, PMc 588 and Ah 4-26 were linked with the LLS severity score. The phenotypic variation explained by these markers ranged from 32 to 59%. The markers identified through BSA were also confirmed by single marker analysis. While validating the three primers over a set of resistant and susceptible genotypes, the PM 384₁₀₀ allele showed an association with resistance. Hence PM 384 could be utilized in marker-assisted breeding programmes over a wide range of genetic backgrounds.
Trading Strategies to Exploit Blog and News Sentiment
We use quantitative media (blogs, and news as a comparison) data generated by a large-scale natural language processing (NLP) text analysis system to perform a comprehensive and comparative study on how a company’s reported media frequency, sentiment polarity and subjectivity anticipates or reflects its stock trading volumes and financial returns. Our analysis provides concrete evidence that media data is highly informative, as previously suggested in the literature – but never studied on our scale of several large collections of blogs and news for over five years. Building on our findings, we give a sentiment-based market-neutral trading strategy which gives consistently favorable returns with low volatility over a five year period (2005-2009). Our results are significant in confirming the performance of general blog and news sentiment analysis methods over broad domains and sources. Moreover, several remarkable differences between news and blogs are also identified in this paper.
DropConnected neural network trained with diverse features for classifying heart sounds
A fully-connected, two-hidden-layer neural network trained by error backpropagation and regularized with DropConnect is used to classify heart sounds as normal or abnormal. The heart sounds are segmented using an open-source algorithm based on a hidden semi-Markov model. Features are extracted from the heart sounds using a wavelet transform, mel-frequency cepstral coefficients, inter-beat properties, and signal complexity. Features are normalized by subtracting their means and dividing by their standard deviations across the whole training set. Any feature which is not significantly different between normal and abnormal recordings in the training data is removed, as are highly correlated features. The dimensionality of the feature vector is reduced by projecting it onto its first 70 principal components. A 10-fold cross-validation study gives a mean classification score of 84.1% with a variance of 2.9%. The final score on the test data was 85.2%.
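Two of the preprocessing steps described above (z-score normalization over the training set and removal of highly correlated features) can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the 0.95 correlation cutoff and the toy data are assumptions of the example.

```python
import math

# Z-score normalization per feature column, then greedy removal of any
# feature highly correlated with one already kept.

def zscore(columns):
    """Normalize each feature column to zero mean and unit variance."""
    out = []
    for col in columns:
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / len(col)) or 1.0
        out.append([(v - mu) / sd for v in col])
    return out

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def drop_correlated(columns, cutoff=0.95):
    """Keep a feature only if it is not highly correlated with any kept one."""
    kept = []
    for col in columns:
        if all(abs(pearson(col, k)) < cutoff for k in kept):
            kept.append(col)
    return kept

# toy feature matrix: three columns, the third a scaled copy of the first
f1 = [1.0, 2.0, 3.0, 4.0, 5.0]
f2 = [2.0, 1.0, 4.0, 3.0, 5.0]
f3 = [2.0, 4.0, 6.0, 8.0, 10.0]  # perfectly correlated with f1
features = drop_correlated(zscore([f1, f2, f3]))
print(len(features))  # the duplicate column is dropped
```

The remaining columns would then be projected onto their leading principal components, the step the abstract describes next.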
Degree of Modularity in Engineering Systems and Products with Technical and Business Constraints
There is consensus that modularity has many benefits from cost savings due to increased commonality to enabling a higher variety of products. Full modularity is, however, not always achievable. How engineering systems and products whose design is heavily influenced by technical constraints, such as weight or size limitations, tend to exhibit rather integral architectures is shown in this study. For this, two metrics are defined on the basis of a binary design structure matrix (DSM) representation of a system or product. The non-zero fraction (NZF) captures the sparsity of the interrelationships between components between zero and one, while the singular value modularity index (SMI) captures the degree of internal coupling, also between zero and one. These metrics are first developed using idealized canonical architectures and are then applied to two different product pairs that are functionally equivalent, but different in terms of technical constraints. Empirical evidence is presented that the lightweight variant of the same product tends to be more integral, presumably to achieve higher mass efficiency. These observations are strengthened by comparing the results to another, previously published, modularity metric as well as by comparing sparsity and modularity of a set of 15 products against a control population of randomly generated architectures of equivalent size and density. The results suggest that, indeed, some products are inherently less modular than others due to technological factors. The main advantage of SMI is that it enables analysis of the degree of modularity of any architecture independent of subjective module choices.
Acute administration of the cannabinoid CB1 antagonist rimonabant impairs positive affective memory in healthy volunteers
Emotional processing measures are sensitive to acute administration of clinically useful antidepressant drugs. We wished to test the hypothesis that these models would also be able to detect agents likely to cause depression as an adverse effect. The anti-obesity drug and cannabinoid type 1 receptor antagonist, rimonabant, is associated with significant rates of depression and anxiety in clinical use. Thirty healthy adult volunteers were randomly assigned to receive a single dose of rimonabant (20 mg) or lactose placebo in a double-blind, between-groups design. Three hours after medication administration, subjects undertook an emotional processing test battery including facial emotion recognition, emotional word attentional dot probe, self-relevant word classification, emotional and declarative memory and the emotion-potentiated acoustic startle response. Subjective state was assessed via self-report measures. A single dose of rimonabant did not alter subjective mood. However, rimonabant selectively reduced incidental recall of positive self-relevant adjectives, an effect contrary to that seen following the administration of antidepressants. There were no effects of rimonabant on the other measures of emotional processing. These results suggest that a single dose of rimonabant decreases positive emotional memory in the absence of changes in subjective state. Further studies are required to examine whether rimonabant might produce a wider range of negative emotional biases with repeated treatment.
A survey of mobile and wireless technologies for augmented reality systems
Recent advances in hardware and software for mobile computing have enabled a new breed of mobile AR systems and applications. A new kind of computing called “augmented ubiquitous computing” has resulted from the convergence of wearable computing, wireless networking and mobile AR interfaces. In this paper we provide a survey of different mobile and wireless technologies and how they have impacted AR. Our goal is to place them into different categories so that it becomes easier to understand the state of the art and to help identify new directions of research.
Sequential analysis of bone marrow and peripheral blood after stem cell transplant for myeloma shows disparate tumor involvement
Treatment with combination chemotherapy has not resulted in long-term remissions in multiple myeloma (MM) despite advances in drug discovery and protocol improvement over the last 25 years. Increasingly, peripheral blood (PB) stem cell transplants (PBSCT) are being used along with chemotherapy and total body irradiation as treatment for multiple myeloma. Although the majority of tumor cells are found within the bone marrow (BM), tumor cells circulate in the PB of patients with MM. Therefore, one potential problem with PBSCT is contamination of the stem cell harvests with tumor cells. Although substantial reduction in BM tumor load is achieved after chemotherapy and autologous transplantation, most patients still relapse. In an attempt to identify and quantitate the residual tumor within sequential BM and PB samples of patients with MM following autologous PB stem cell transplants, we used a tumor-specific detection assay, allele-specific oligonucleotide PCR (ASO-PCR). We found that while the BM tumor burden may fluctuate in some patients by as much as 4 logs after transplant, the PB tumor burden remains quite stable and does not reflect the tumor burden in the BM. Moreover, analysis of PB involvement over time was not predictive of marrow involvement or of potential relapse. These results suggest that the PB is frequently involved in MM and further indicate that it represents a compartment that is only minimally altered by intensive therapy.
Early-Life Social Isolation Stress Increases Kappa Opioid Receptor Responsiveness and Downregulates the Dopamine System
Chronic early-life stress increases vulnerability to alcoholism and anxiety disorders during adulthood. Similarly, rats reared in social isolation (SI) during adolescence exhibit augmented ethanol intake and anxiety-like behaviors compared with group housed (GH) rats. Prior studies suggest that disruption of dopamine (DA) signaling contributes to SI-associated behaviors, although the mechanisms underlying these alterations are not fully understood. Kappa opioid receptors (KORs) have an important role in regulating mesolimbic DA signaling, and other kinds of stressors have been shown to augment KOR function. Therefore, we tested the hypothesis that SI-induced increases in KOR function contribute to the dysregulation of NAc DA and the escalation in ethanol intake associated with SI. Our ex vivo voltammetry experiments showed that the inhibitory effects of the kappa agonist U50,488 on DA release were significantly enhanced in the NAc core and shell of SI rats. Dynorphin levels in NAc tissue were observed to be lower in SI rats. Microdialysis in freely moving rats revealed that SI was also associated with reduced baseline DA levels, and pretreatment with the KOR antagonist nor-binaltorphimine (nor-BNI) increased DA levels selectively in SI subjects. Acute ethanol elevated DA in SI and GH rats and nor-BNI pretreatment augmented this effect in SI subjects, while having no effect on ethanol-stimulated DA release in GH rats. Together, these data suggest that KORs may have increased responsiveness following SI, which could lead to hypodopaminergia and contribute to an increased drive to consume ethanol. Indeed, SI rats exhibited greater ethanol intake and preference and KOR blockade selectively attenuated ethanol intake in SI rats. 
Collectively, the findings that nor-BNI reversed SI-mediated hypodopaminergic state and escalated ethanol intake suggest that KOR antagonists may represent a promising therapeutic strategy for the treatment of alcohol use disorders, particularly in cases linked to chronic early-life stress.
Serum Brain-derived neurotrophic factor (BDNF): the severity and symptomatic dimensions of depression.
INTRODUCTION The aim of this study was to compare the concentration of serum brain-derived neurotrophic factor (BDNF) in patients suffering from major depressive disorder (MDD) with respect to the severity of the MDD episode as defined by the Hamilton rating scale for depression (HAMD-17). The second aim was to examine the relationship between serum BDNF and the symptomatic dimensions of MDD. SUBJECTS AND METHODS The study included 139 participants with major depressive disorder (MDD). The diagnosis of MDD was made by DSM-IV-TR criteria. The severity of MDD was rated with the HAMD-17 such that a mild episode was diagnosed if the HAMD-17 score was up to 18, moderately severe if 18-25, and severe if over 25. The concentration of BDNF was determined by the ELISA method. RESULTS This research could not find a difference in BDNF concentration with respect to the severity of the depressive disorder in groups suffering from mild, moderately severe and severe episodes of MDD (F=1.816; p=0.169). Factor analysis of the HAMD-17 extracted four dimensions of depressive symptoms. None of the symptomatic dimensions was significantly related to BDNF concentration. CONCLUSION The results of this study indicate that serum BDNF levels are not related to the severity of depression or its specific symptomatic dimensions. These findings support the idea of a complex relationship between BDNF concentration at the periphery and in the CNS.
Stranger on the internet: Online self-disclosure and the role of visual anonymity
This paper deals with the phenomenon of online self-disclosure. Two qualitative data analyses of YouTube videos were conducted. The studies revealed emerging forms of self-disclosure online, which are not necessarily bound to conditions of visual anonymity. This finding calls into question previous research results that stress the strong correlation between self-disclosure and visual anonymity. The results of both qualitative studies showed that people also tend to disclose information in (visually) non-anonymous settings. The paper concludes by presenting a revised model of online self-disclosure and describing enhancing factors for self-disclosing behaviour on the internet based on the latest research results.
Political Polarization on Twitter
In this study we investigate how social media shape the networked public sphere and facilitate communication between communities with different political orientations. We examine two networks of political communication on Twitter, comprised of more than 250,000 tweets from the six weeks leading up to the 2010 U.S. congressional midterm elections. Using a combination of network clustering algorithms and manually annotated data, we demonstrate that the network of political retweets exhibits a highly segregated partisan structure, with extremely limited connectivity between left- and right-leaning users. Surprisingly, this is not the case for the user-to-user mention network, which is dominated by a single politically heterogeneous cluster of users in which ideologically opposed individuals interact at a much higher rate compared to the network of retweets. To explain the distinct topologies of the retweet and mention networks, we conjecture that politically motivated individuals provoke interaction by injecting partisan content into information streams whose primary audience consists of ideologically opposed users. We conclude with statistical evidence in support of this hypothesis.
MapReduce-based fuzzy c-means clustering algorithm: implementation and scalability
The management and analysis of big data has been identified as one of the most important emerging needs in recent years. This is because of the sheer volume and increasing complexity of data being created or collected. Current clustering algorithms cannot handle big data, and therefore scalable solutions are necessary. Since fuzzy clustering algorithms have been shown to outperform hard clustering approaches in terms of accuracy, this paper investigates the parallelization and scalability of a common and effective fuzzy clustering algorithm, the Fuzzy C-Means (FCM) algorithm. The algorithm is parallelized using the MapReduce paradigm, outlining how the Map and Reduce primitives are implemented. A validity analysis is conducted in order to show that the implementation works correctly, achieving competitive purity results compared to state-of-the-art clustering algorithms. Furthermore, a scalability analysis is conducted to demonstrate the performance of the parallel FCM implementation with an increasing number of computing nodes.
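The per-point membership update and the weighted centroid update that a MapReduce FCM distributes across nodes can be sketched serially in plain Python. This is a toy illustration, not the paper's implementation: the 1-D data, the two initial centroids, and the fuzzifier m=2 are all invented for the example.

```python
# Standard FCM update equations: the "map" side computes memberships
# u[i][j] per point, the "reduce" side aggregates u^m-weighted sums
# into new centroids. Serial toy version on made-up 1-D data.

def fcm_memberships(points, centroids, m=2.0):
    """Membership u[i][j] of point i in cluster j: u_ij = 1 / sum_k (d_ij/d_ik)^(2/(m-1))."""
    u = []
    for x in points:
        d = [abs(x - c) or 1e-12 for c in centroids]  # avoid division by zero
        row = []
        for j in range(len(centroids)):
            s = sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(len(centroids)))
            row.append(1.0 / s)
        u.append(row)
    return u

def fcm_update_centroids(points, u, m=2.0):
    """New centroid j = weighted mean of all points with weights u[i][j]^m."""
    k = len(u[0])
    return [
        sum((u[i][j] ** m) * points[i] for i in range(len(points)))
        / sum(u[i][j] ** m for i in range(len(points)))
        for j in range(k)
    ]

points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]   # two obvious groups near 1 and 8
centroids = [0.0, 10.0]
for _ in range(20):                        # alternate updates until (near) convergence
    u = fcm_memberships(points, centroids)
    centroids = fcm_update_centroids(points, u)
```

In the MapReduce formulation, each mapper emits the partial numerator and denominator sums of `fcm_update_centroids` for its data shard, and a reducer adds them per cluster; the serial loop above shows the arithmetic those primitives distribute.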
Permission based Android security: Issues and countermeasures
Android security has been a hot spot recently in both academic research and public concern due to numerous instances of security attacks and privacy leakage on the Android platform. Android security has been built upon a permission-based mechanism which restricts access by third-party Android applications to critical resources on an Android device. This permission-based mechanism is widely criticized for its coarse-grained control of application permissions and the difficulty of managing permissions for developers, marketers, and end-users. In this paper, we investigate the arising issues in Android security, including coarse granularity of permissions, incompetent permission administration, insufficient permission documentation, over-claiming of permissions, permission escalation attacks, and TOCTOU (Time of Check to Time of Use) attacks. We illustrate the relationships among these issues, and investigate the existing countermeasures that address them. In particular, we provide a systematic review of the development of these countermeasures, and compare them according to their technical features. Finally, we propose several methods to further mitigate risk in Android security.
Wireless battery charging: E-bike application
Nowadays, Inductive Power Transfer (IPT) is a widely investigated topic among modern battery charging methods, as it provides a wireless solution. IPT is applied across a large variety of applications, from watt to kilowatt power levels. Although IPT offers great benefits in terms of safety and comfort, its most significant drawback is a relatively poor power conversion efficiency. In this paper, a 100 W wireless charging system for E-bikes that improves efficiency is proposed. The complete magnetic structure design, as well as efficient transmitter and receiver architectures, are presented in detail. The efficiency of the designed solution is demonstrated by simulation results.
An Attack Graph-Based Probabilistic Security Metric
To protect critical resources in today’s networked environments, it is desirable to quantify the likelihood of potential multi-step attacks that combine multiple vulnerabilities. This now becomes feasible due to a model of causal relationships between vulnerabilities, namely, attack graph. This paper proposes an attack graph-based probabilistic metric for network security and studies its efficient computation. We first define the basic metric and provide an intuitive and meaningful interpretation to the metric. We then study the definition in more complex attack graphs with cycles and extend the definition accordingly. We show that computing the metric directly from its definition is not efficient in many cases and propose heuristics to improve the efficiency of such computation.
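The basic metric described above can be sketched for the acyclic case: each exploit carries an individual success probability, and the graph combines them through conjunctive ("AND") and disjunctive ("OR") preconditions. The graph shape and probabilities below are invented for illustration, the toy skips the paper's treatment of cycles, and the OR combination assumes independent branches (the shared-dependency correction is exactly the kind of refinement the full metric addresses).

```python
# Hedged sketch of an attack-graph probability metric on a DAG.
# AND nodes need all predecessors satisfied; OR nodes need any one.
# Each node's own exploit probability multiplies the incoming probability.

def attack_probability(graph, probs, node, cache=None):
    cache = {} if cache is None else cache
    if node in cache:
        return cache[node]
    kind, preds = graph[node]
    if not preds:                        # initial condition: attacker starts here
        p_in = 1.0
    elif kind == "AND":                  # all preconditions must hold
        p_in = 1.0
        for q in preds:
            p_in *= attack_probability(graph, probs, q, cache)
    else:                                # "OR": any one precondition suffices
        p_in = 1.0
        for q in preds:
            p_in *= 1.0 - attack_probability(graph, probs, q, cache)
        p_in = 1.0 - p_in
    cache[node] = p_in * probs.get(node, 1.0)
    return cache[node]

# Invented example: two alternative remote exploits lead to a foothold,
# then a local privilege escalation reaches root.
graph = {
    "start":    ("OR", []),
    "cve_a":    ("AND", ["start"]),
    "cve_b":    ("AND", ["start"]),
    "foothold": ("OR", ["cve_a", "cve_b"]),
    "root":     ("AND", ["foothold"]),
}
probs = {"cve_a": 0.6, "cve_b": 0.5, "root": 0.9}
p = attack_probability(graph, probs, "root")   # (1 - 0.4*0.5) * 0.9 = 0.72
```

The memoization cache is what makes this linear in graph size; the cycle handling and the heuristics the paper proposes replace this naive recursion when the graph is not a DAG.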
Caudal duplication syndrome.
OBJECTIVE To present the clinical and roentgenographic features of caudal duplication syndrome. DESIGN Retrospective review of the medical records and all available imaging studies. SETTING Two university-affiliated teaching hospitals. PARTICIPANTS Six children with multiple anomalies and duplications of distal organs derived from the hindgut, neural tube, and adjacent mesoderm. INTERVENTIONS None. RESULTS Spinal anomalies (myelomeningocele in two patients, sacral duplication in three, diplomyelia in two, and hemivertebrae in one) were present in all our patients. Duplications or anomalies of the external genitalia and/or the lower urinary and reproductive structures were also seen in all our patients. Ventral herniation (in one patient), intestinal obstructions (in one patient), and bowel duplications (in two patients) were the most common gastrointestinal abnormalities. CONCLUSIONS We believe that the above constellation of abnormalities resulted from an insult to the caudal cell mass and hindgut at approximately the 23rd through the 25th day of gestation. We propose the term caudal duplication syndrome to describe the association between gastrointestinal, genitourinary, and distal neural tube malformations.
Fuzzy Logic
Fuzzy Logic was initiated in 1965 [1], [2], [3], by Lotfi A. Zadeh, professor of computer science at the University of California, Berkeley. Basically, Fuzzy Logic (FL) is a multivalued logic that allows intermediate values to be defined between conventional evaluations like true/false, yes/no, high/low, etc. Notions like rather tall or very fast can be formulated mathematically and processed by computers, in order to apply a more human-like way of thinking in the programming of computers [4]. Fuzzy systems are an alternative to traditional notions of set membership and logic that has its origins in ancient Greek philosophy. The precision of mathematics owes its success in large part to the efforts of Aristotle and the philosophers who preceded him. In their efforts to devise a concise theory of logic, and later mathematics, the so-called "Laws of Thought" were posited [5]. One of these, the "Law of the Excluded Middle," states that every proposition must be either True or False. Even when Parmenides proposed the first version of this law (around 400 B.C.) there were strong and immediate objections: for example, Heraclitus proposed that things could be simultaneously True and not True. It was Plato who laid the foundation for what would become fuzzy logic, indicating that there was a third region (beyond True and False) where these opposites "tumbled about." Other, more modern philosophers echoed his sentiments, notably Hegel, Marx, and Engels. But it was Lukasiewicz who first proposed a systematic alternative to the bi-valued logic of Aristotle [6]. Even in the present time some Greeks remain outstanding examples of fussiness and fuzziness (note: the connection to logic got lost somewhere during the last two millennia [7]).
Fuzzy Logic has emerged as a profitable tool for the control and steering of systems and complex industrial processes, as well as for household and entertainment electronics, and for other expert systems and applications such as the classification of SAR data.
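The "rather tall" example above can be made concrete with a membership function and Zadeh's standard fuzzy operators. The 160-190 cm ramp is an arbitrary choice for the illustration, not a value from the text.

```python
# Toy illustration of multivalued logic: "tall" as a fuzzy set with a
# ramp membership function, plus the standard min/max/complement operators.

def tall(height_cm):
    """Degree of membership in the fuzzy set 'tall', in [0, 1]."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0   # linear ramp between the two bounds

def fuzzy_and(a, b):   # Zadeh's standard t-norm
    return min(a, b)

def fuzzy_or(a, b):    # standard t-conorm
    return max(a, b)

def fuzzy_not(a):      # complement
    return 1.0 - a

# A 175 cm person is 'tall' to degree 0.5 -- neither fully True nor fully
# False, which is precisely what the Law of the Excluded Middle forbids:
# here x AND (NOT x) evaluates to 0.5 rather than 0.
degree = tall(175)
contradiction = fuzzy_and(degree, fuzzy_not(degree))
```

Modifiers such as "very" are conventionally modeled by powering the membership value (e.g. `tall(h) ** 2`), which is how notions like "very fast" become processable, as the passage notes.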
Oxidative Stress Biomarkers and Incidence of Postoperative Atrial Fibrillation in the Omega-3 Fatty Acids for Prevention of Postoperative Atrial Fibrillation (OPERA) Trial
BACKGROUND Animal study results point to oxidative stress as a key mechanism triggering postoperative atrial fibrillation (PoAF), yet the extent to which specific biomarkers of oxidative stress might relate to PoAF risk in humans remains speculative. METHODS AND RESULTS We assessed the association of validated, fatty acid-derived oxidative stress biomarkers (F2-isoprostanes, isofurans, and F3-isoprostanes) in plasma and urine with incident PoAF among 551 cardiac surgery patients. Biomarkers were measured at enrollment, at the end of surgery, and on postoperative day 2. PoAF lasting ≥30 seconds was confirmed with rhythm strip or electrocardiography and centrally adjudicated. Outcomes were assessed until hospital discharge or postoperative day 10, whichever occurred first. The urine level of each oxidative stress biomarker rose at the end of surgery (2- to 3-fold over baseline, P<0.001) and subsequently declined to concentrations comparable to baseline by postoperative day 2. In contrast, plasma concentrations remained relatively stable throughout the perioperative course. Urine F2-isoprostanes and isofurans at the end of surgery were 20% and 50% higher in subjects who developed PoAF (P≤0.009). While baseline biomarker levels were not significantly associated with PoAF, end-of-surgery and postoperative day 2 isoprostanes and isofurans demonstrated relatively linear associations with PoAF. For example, the end-of-surgery extreme-quartile multivariate-adjusted ORs (95% CIs) for urine isofurans and F3-isoprostanes were 1.95 (1.05 to 3.62; P for trend=0.01) and 2.10 (1.04 to 2.25; P for trend=0.04), respectively. The associations of biomarkers with PoAF varied little by demographics, surgery type, and medication use (P≥0.29 for each). CONCLUSIONS These novel results add to accumulating evidence supporting a likely key pathogenic role of elevated oxidative stress in PoAF. CLINICAL TRIAL REGISTRATION URL: Clinicaltrials.gov Unique identifier: NCT00970489.
Forecasting exchange rates using machine learning models with time-varying volatility
This thesis is focused on investigating the predictability of exchange rate returns at monthly and daily frequency using models that have mostly been developed in the machine learning field. The forecasting performance of these models is compared to the Random Walk, which is the benchmark model for financial returns, and to the popular autoregressive process. The machine learning models used are the Least Absolute Shrinkage and Selection Operator (LASSO) and Bayesian Additive Regression Trees (BART). A characterizing feature of financial returns data is the presence of volatility clustering, i.e. the tendency toward persistent periods of low or high variance in the time series. This is in disagreement with the machine learning models, which implicitly assume a constant variance. We therefore extend these models with the most widely used model for volatility clustering, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) process. This allows us to jointly estimate the time-varying variance and the parameters of the machine learning models using an iterative procedure. These GARCH-extended machine learning models are then applied to make one-step-ahead predictions via recursive estimation, so that the parameters are updated as new information arrives. In order to predict returns, information on economic variables and lagged returns is used. The study is repeated on three different exchange rate returns, EUR/SEK, EUR/USD and USD/SEK, in order to obtain robust results. Our results show that machine learning models are capable of forecasting exchange rate returns at both daily and monthly frequency. The results were mixed, however. Overall, it was the GARCH-extended SVR that showed great potential for improving the predictive performance of exchange rate return forecasts.
Acknowledgement: It gives me great pleasure to acknowledge my supervisor, Prof. Mattias Villani, Linköping University, for his guidance, patience and encouragement in the completion of this thesis. His support from the initial to the final stage has enabled me to gain a better understanding of this topic. I would also like to thank Marianna Blix Grimaldi, Sveriges Riksbank, for introducing such an interesting topic and providing me the opportunity to pursue it. Her comments and suggestions were helpful in improving this study.
Heavy Slow Resistance Versus Eccentric Training as Treatment for Achilles Tendinopathy: A Randomized Controlled Trial.
BACKGROUND Previous studies have shown that eccentric training has a positive effect on Achilles tendinopathy, but few randomized controlled trials have compared it with other loading-based treatment regimens. PURPOSE To evaluate the effectiveness of eccentric training (ECC) and heavy slow resistance training (HSR) among patients with midportion Achilles tendinopathy. STUDY DESIGN Randomized controlled trial; Level of evidence, 1. METHODS A total of 58 patients with chronic (>3 months) midportion Achilles tendinopathy were randomized to ECC or HSR for 12 weeks. Function and symptoms (Victorian Institute of Sports Assessment-Achilles), tendon pain during activity (visual analog scale), tendon swelling, tendon neovascularization, and treatment satisfaction were assessed at 0 and 12 weeks and at the 52-week follow-up. Analyses were performed on an intention-to-treat basis. RESULTS Both groups showed significant (P < .0001) improvements in Victorian Institute of Sports Assessment-Achilles and visual analog scale from 0 to 12 weeks, and these improvements were maintained at the 52-week follow-up. Concomitant with the clinical improvement, there was a significant reduction in tendon thickness and neovascularization. None of these robust clinical and structural improvements differed between the ECC and HSR groups. However, patient satisfaction tended to be greater after 12 weeks with HSR (100%) than with ECC (80%; P = .052) but not after 52 weeks (HSR, 96%; ECC, 76%; P = .10), and the mean training session compliance rate was 78% in the ECC group and 92% in the HSR group, with a significant difference between groups (P < .005). CONCLUSION The results of this study show that both traditional ECC and HSR yield positive, equally good, lasting clinical results in patients with Achilles tendinopathy and that the latter tends to be associated with greater patient satisfaction after 12 weeks but not after 52 weeks.
Handling class imbalance in customer churn prediction
Customer churn is often a rare event in service industries, but of great interest and great value. Until recently, however, class imbalance has not received much attention in the context of data mining [Weiss, G. M. (2004). Mining with rarity: A unifying framework. SIGKDD Explorations, 6(1), 7–19]. In this study, we investigate how we can better handle class imbalance in churn prediction. Using more appropriate evaluation metrics (AUC, lift), we investigated the increase in performance of sampling (both random and advanced under-sampling) and two specific modelling techniques (gradient boosting and weighted random forests) compared to some standard modelling techniques. AUC and lift prove to be good evaluation metrics. AUC does not depend on a threshold, and is therefore a better overall evaluation metric compared to accuracy. Lift is very much related to accuracy, but has the advantage of being well used in marketing practice [Ling, C., & Li, C. (1998). Data mining for direct marketing: Problems and solutions. In Proceedings of the fourth international conference on knowledge discovery and data mining (KDD-98). New York, NY: AAAI Press]. Results show that under-sampling can lead to improved prediction accuracy, especially when evaluated with AUC. Unlike Ling and Li, we find that there is no need to under-sample so that there are as many churners in the training set as non-churners. Results show no increase in predictive performance when using the advanced sampling technique CUBE in this study.
This is in line with the findings of Japkowicz [Japkowicz, N. (2000). The class imbalance problem: Significance and strategies. In Proceedings of the 2000 international conference on artificial intelligence (IC-AI'2000): Special track on inductive learning, Las Vegas, Nevada], who noted that using sophisticated sampling techniques did not give any clear advantage. Weighted random forests, as a cost-sensitive learner, performs significantly better than random forests, and is therefore advised. It should, however, always be compared to logistic regression. Boosting is a very robust classifier, but never outperforms any other technique.
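The two core ideas above, under-sampling the majority class and evaluating with a threshold-free metric, can be sketched on synthetic data. This is a minimal illustration, not the study's pipeline: the scores and the 1:1 sampling ratio are invented, and AUC is computed directly as the Mann-Whitney rank statistic (the probability that a random positive outranks a random negative).

```python
# Random under-sampling plus AUC evaluation on made-up model scores.
import random

def undersample(majority, minority, ratio=1.0, seed=0):
    """Keep all minority (churner) rows; sample majority down to ratio * len(minority)."""
    rng = random.Random(seed)
    k = min(len(majority), int(ratio * len(minority)))
    return rng.sample(majority, k) + list(minority)

def auc(scores_pos, scores_neg):
    """P(score of random positive > score of random negative); ties count 1/2."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

churners = [0.9, 0.8, 0.6]                 # scores for the rare class
non_churners = [0.7, 0.4, 0.3, 0.2, 0.1]   # scores for the majority class
a = auc(churners, non_churners)            # 14 of 15 pairs ranked correctly
balanced = undersample(list(range(100)), [1, 2, 3])
```

Because AUC depends only on the ranking of scores, it is unaffected by the classification threshold, which is exactly why the study prefers it to accuracy under class imbalance.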
On the extra factor of two in the phase of neutrino oscillations
Attempts to modify the standard expression for the phase of neutrino oscillations by an extra factor of two are based on a misuse of quantum mechanics. Claims presenting Bruno Pontecorvo and his coauthors as "godfathers" of this "extra 2" factor are easily disproved by an unbiased reading of their articles.
Perception and practice of Kangaroo Mother Care after discharge from hospital in Kumasi, Ghana: A longitudinal study
BACKGROUND The practice of Kangaroo Mother Care (KMC) is life saving for babies weighing less than 2000 g. Little is known about mothers' continued unsupervised practice after discharge from hospital. This study aimed to evaluate its in-hospital and continued practice in the community among mothers of low birth weight (LBW) infants discharged from two hospitals in Kumasi, Ghana. METHODS A longitudinal study of 202 mothers and their inpatient LBW neonates was conducted from November 2009 to May 2010. Mothers were interviewed at recruitment to ascertain their knowledge of KMC, and then oriented on its practice. After discharge, the mothers reported at weekly intervals for four follow-up visits where data about their perceptions, attitudes and practices of KMC were recorded. A repeated-measures logistic regression analysis was done to assess variability in the binary responses across the review visits. RESULTS At recruitment 23 (11.4%, 95%CI: 7.4 to 16.6%) mothers knew about KMC. At discharge 95.5% were willing to continue KMC at home, with 93.1% willing to practice at night. 95.5% thought KMC was beneficial to them and 96.0% beneficial to their babies. 98.0% would recommend KMC to other mothers, with 71.8% willing to practice KMC outdoors. At the first follow-up visit 99.5% (181) were still practicing either intermittent or continuous KMC. This proportion did not change significantly over the four weeks (OR: 1.4, 95%CI: 0.6 to 3.3, p-value: 0.333). Over the four weeks, increasingly more mothers practiced KMC at night (OR: 1.7, 95%CI: 1.2 to 2.6, p = 0.005), outside their homes (OR: 2.4, 95%CI: 1.7 to 3.3, p < 0.001) and received spousal help (OR: 1.6, 95%CI: 1.1 to 2.4, p = 0.007). Household chores and potentially negative community perceptions of KMC did not affect its practice, with odds of 0.8 (95%CI: 0.5 to 1.2, p = 0.282) and 1.0 (95%CI: 0.6 to 1.7, p = 0.934) respectively. During the follow-up period the neonates gained 23.7 g (95%CI: 22.6 g to 24.7 g) per day.
CONCLUSION Maternal knowledge of KMC was low at the outset. Once initiated, mothers continued practicing KMC in hospital and at home, with their infants gaining optimal weight. Continued KMC practice was not affected by perceived community attitudes.
Maternal filicide in Québec.
In an eight-year review (1991 to 1998) of all consecutive coroners' files in Québec, Canada, the authors identified a total of 34 cases of victims who were killed by their mothers. Most victims were less than six years of age, and there were several cases in which multiple siblings were murdered. There were 27 mothers in the sample, and 15 of those women committed suicide after the filicide. A psychiatric motive was determined for more than 85 percent of the mothers, and most of the mothers had received previous treatment for a depressive or psychotic disorder. Based on the characteristics of this sample, the authors developed a filicide classification system that is flexible and simple to use but must be standardized to become a useful tool for clinicians.
Large-scale prediction of long disordered regions in proteins using random forests
Many proteins contain disordered regions that lack fixed three-dimensional (3D) structure under physiological conditions but have important biological functions. Prediction of disordered regions in protein sequences is important for understanding protein function and in high-throughput determination of protein structures. Machine learning techniques, including neural networks and support vector machines, have been widely used in such predictions. Predictors designed for long disordered regions are usually less successful in predicting short disordered regions. Combining prediction of short and long disordered regions would dramatically increase the complexity of the prediction algorithm and make the predictor unsuitable for large-scale applications. Efficient batch prediction of long disordered regions alone is of greater interest in large-scale proteome studies. A new algorithm, IUPforest-L, for predicting long disordered regions using the random forest learning model is proposed in this paper. IUPforest-L is based on the Moreau-Broto auto-correlation function of amino acid indices (AAIs) and other physicochemical features of the primary sequences. In 10-fold cross-validation tests, IUPforest-L achieves an area of 89.5% under the receiver operating characteristic (ROC) curve. Compared with existing disorder predictors, IUPforest-L has high prediction accuracy and is efficient for predicting long disordered regions in large-scale proteomes. The random forest model based on the auto-correlation functions of the AAIs within a protein fragment and other physicochemical features can effectively detect long disordered regions in proteins. A new predictor, IUPforest-L, was developed to batch predict long disordered regions in proteins, and the server can be accessed from http://dmg.cs.rmit.edu.au/IUPforest/IUPforest-L.php
Plant cell cultures for the production of recombinant proteins
The use of whole plants for the synthesis of recombinant proteins has received a great deal of attention recently because of advantages in economy, scalability and safety compared with traditional microbial and mammalian production systems. However, production systems that use whole plants lack several of the intrinsic benefits of cultured cells, including the precise control over growth conditions, batch-to-batch product consistency, a high level of containment and the ability to produce recombinant proteins in compliance with good manufacturing practice. Plant cell cultures combine the merits of whole-plant systems with those of microbial and animal cell cultures, and already have an established track record for the production of valuable therapeutic secondary metabolites. Although no recombinant proteins have yet been produced commercially using plant cell cultures, there have been many proof-of-principle studies and several companies are investigating the commercial feasibility of such production systems.
A collaboration platform for data sharing among heterogeneous relief organizations for disaster management
Recently, we have witnessed a progressive increase in the occurrence of large-scale disasters, characterized by overwhelming scale and numbers of casualties. From 72 hours after the disaster occurrence, the damaged area is the subject of assessment, reconstruction and recovery actions by several heterogeneous organizations, which need to collaborate and be orchestrated by a centralized authority. This situation requires effective data sharing by means of a proper middleware platform that lets such organizations interoperate despite their differences. Although international organizations have defined collaboration frameworks at the higher level, there is no supporting ICT platform at the operational level able to realize the data sharing demanded by such collaborative frameworks. This work proposes a layered architecture and a preliminary implementation of such a middleware for messaging, data and knowledge management. We also illustrate a demonstration of the usability of this implementation, so as to show the achievable interoperability.
Spin-3 gravity in three-dimensional flat space.
We present the first example of a nontrivial higher spin theory in three-dimensional flat space. We propose flat-space boundary conditions and prove their consistency for this theory. We find that the asymptotic symmetry algebra is a (centrally extended) higher spin generalization of the Bondi-Metzner-Sachs algebra, which we describe in detail. We also address higher spin analogues of flat space cosmology solutions and possible generalizations.
Deep Learning for Multi-task Plant Phenotyping
Plant phenotyping has continued to pose a challenge to computer vision for many years. There is a particular demand to accurately quantify images of crops, and the natural variability and structure of these plants presents unique difficulties. Recently, machine learning approaches have shown impressive results in many areas of computer vision, but these rely on large datasets that are at present not available for crops. We present a new dataset, called ACID, that provides hundreds of accurately annotated images of wheat spikes and spikelets, along with image-level class annotation. We then present a deep learning approach capable of accurately localising wheat spikes and spikelets, despite the varied nature of this dataset. As well as locating features, our network offers near-perfect counting accuracy for spikes (95.91%) and spikelets (99.66%). We also extend the network to perform simultaneous classification of images, demonstrating the power of multi-task deep architectures for plant phenotyping. We hope that our dataset will be useful to researchers in the continued improvement of plant and crop phenotyping. With this in mind, alongside the dataset we will make all code and trained models available online.
Summary of the UAA-AAUS guidelines for urinary tract infections.
Urinary tract infections, genital tract infections and sexually transmitted infections are the most prevalent infectious diseases, and the establishment of locally optimized guidelines is critical to provide appropriate treatment. The Urological Association of Asia has planned to develop Asian guidelines for all urological fields, and the present urinary tract infections, genital tract infections and sexually transmitted infections guideline was the second project of the Urological Association of Asia guideline development, carried out by the Asian Association of Urinary Tract Infection and Sexually Transmitted Infection. The members meticulously reviewed relevant references, retrieved via the PubMed and MEDLINE databases, published from 2009 through 2015. Information identified through the literature review was supplemented by the authors from other resources. Levels of evidence and grades of recommendation for each management option were assigned according to the relevant strategy. If the judgment was made on the basis of insufficient or inadequate evidence, the grade of recommendation was determined on the basis of committee discussions and resultant consensus statements. Here, we present a short English version of the original guideline, and provide an overview of its key clinical issues.
Turn-in Folding of the Cephalic Portion of the Lateral Crus to Support the Alar Rim in Rhinoplasty
The hypoplastic, weak lateral crus of the nose may cause concave alar rim deformity, and in severe cases, even alar rim collapse. These deformities may lead to both aesthetic disfigurement and functional impairment of the nose. The cephalic part of the lateral crus was folded and fixed to reinforce the lateral crus. The study included 17 women and 15 men with a median age of 24 years. The average follow-up period was 12 months. For 23 patients, the described technique was used to treat concave alar rim deformity, whereas for 5 patients, who had thick and sebaceous skin, it was used to prevent weakness of the alar rim. The remaining 4 patients underwent surgery for correction of a collapsed alar valve. Satisfactory results were achieved without any complications. Turn-in folding of the cephalic portion of lateral crus not only functionally supports the lateral crus, but also provides aesthetic improvement of the nasal tip as successfully as cephalic excision of the lateral crura.
People Tracking with Mobile Robots Using Sample-based Joint Probabilistic Data Association Filters
One of the goals in the field of mobile robotics is the development of mobile platforms which operate in populated environments. For many tasks it is therefore highly desirable that a robot can track the positions of the humans in its surrounding. In this paper we introduce sample-based joint probabilistic data association filters as a new algorithm to track multiple moving objects. Our method applies Bayesian filtering to adapt the tracking process to the number of objects in the perceptual range of the robot. The approach has been implemented and tested on a real robot using laser-range data. We present experiments illustrating that our algorithm is able to robustly keep track of multiple people. The experiments furthermore show that the approach outperforms other techniques developed so far. KEY WORDS—multi-target tracking, data association, particle filters, people tracking, mobile robot perception
Monitoring Vital Signs: Development of a Modified Early Warning Scoring (Mews) System for General Wards in a Developing Country
OBJECTIVE The aim of the study was to develop and validate, by consensus, the construct and content of an observations chart for nurses incorporating a modified early warning scoring (MEWS) system for physiological parameters to be used for bedside monitoring on general wards in a public hospital in South Africa. METHODS Delphi and modified face-to-face nominal group consensus methods were used to develop and validate a prototype observations chart that incorporated an existing UK MEWS. This informed the development of the Cape Town ward MEWS chart. PARTICIPANTS One specialist anaesthesiologist, one emergency medicine specialist, two critical care nurses and eight senior ward nurses with expertise in bedside monitoring (N = 12) were purposively sampled for consensus development of the MEWS. One general surgeon declined, and one neurosurgeon replaced the emergency medicine specialist in the final round. RESULTS Five consensus rounds achieved ≥70% agreement for cut points in five of seven physiological parameters: respiratory and heart rates, systolic BP, temperature and urine output. For conscious level and oxygen saturation a relaxed rule of <70% agreement was applied. A reporting algorithm was established and incorporated in the MEWS chart, representing decision rules determining the degree of urgency. Parameters and cut points differed from those in MEWS used in developed countries. CONCLUSIONS A MEWS for developing countries should record at least seven parameters. Experts from developing countries are best placed to stipulate cut points in physiological parameters. Further research is needed to explore the ability of the MEWS chart to identify physiological and clinical deterioration.
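Mechanically, a MEWS-style chart maps each vital sign into a scored band and sums the points, with a reporting rule triggered above a threshold. The bands and threshold below are invented purely to show the structure; the study's central point is that cut points must be set locally by consensus, so these numbers are not clinical values.

```python
# Toy MEWS-style scorer: each parameter has (low, high, points) bands,
# the per-parameter scores are summed, and an escalation rule fires
# above an example threshold. All bands here are illustrative only.

BANDS = {
    "resp_rate":   [(9, 14, 0), (15, 20, 1), (21, 29, 2), (30, 99, 3)],
    "heart_rate":  [(51, 100, 0), (101, 110, 1), (111, 129, 2), (130, 250, 3)],
    "systolic_bp": [(101, 199, 0), (81, 100, 1), (71, 80, 2), (0, 70, 3)],
}

def band_score(value, bands):
    """Points for one observation; ranges are inclusive, checked in order."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3   # outside every listed band: treat as maximally abnormal

def mews(observations):
    """Aggregate score across all charted parameters."""
    return sum(band_score(v, BANDS[k]) for k, v in observations.items())

obs = {"resp_rate": 24, "heart_rate": 115, "systolic_bp": 85}
score = mews(obs)            # 2 + 2 + 1 = 5
escalate = score >= 4        # example reporting rule, not the study's
```

The reporting algorithm the study embeds in the chart is exactly this kind of decision rule; what the Delphi rounds determine is the content of `BANDS` and the escalation thresholds, not the scoring mechanics.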
Cost Description and Comparative Cost Efficiency of Post-Exposure Prophylaxis and Canine Mass Vaccination against Rabies in N’Djamena, Chad
Rabies claims approximately 59,000 human lives annually and is a potential risk to 3.3 billion people in over 100 countries worldwide. Despite being fatal in almost 100% of cases, human rabies can be prevented by vaccinating dogs, the most common vector, and the timely administration of post-exposure prophylaxis (PEP) to exposed victims. For the control and prevention of human rabies in N'Djamena, the capital city of Chad, a free mass vaccination campaign for dogs was organized in 2012 and 2013. The campaigns were monitored by parallel studies on the incidence of canine rabies based on diagnostic testing of suspect animals and the incidence of human bite exposure recorded at selected health facilities. Based on the cost description of the campaign and the need for PEP registered in health centers, three cost scenarios were compared: the cumulative cost-efficiency of (1) PEP alone, (2) dog mass vaccination and PEP, and (3) dog mass vaccination, PEP, and maximal communication between human health and veterinary workers (One Health communication). Assuming ideal One Health communication, the cumulative prospective cost of dog vaccination and PEP breaks even with the cumulative prospective cost of PEP alone in the 10th year from the start of the calculation (2012). The cost efficiency expressed in cost per human exposure averted is much higher with canine vaccination and One Health communication than with PEP alone. As shown in other studies, our cost-effectiveness analysis highlights that canine vaccination is financially the best option for animal rabies control and rabies prevention in humans. This study also provides evidence of the beneficial effect of One Health communication. Only with close communication between the human and animal health sectors will the decrease in animal rabies incidence be translated into a decline for PEP.
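The break-even comparison between the cost scenarios can be sketched as a cumulative-cost loop. All figures below are hypothetical placeholders, not the study's Chadian cost data; the function only illustrates the mechanism by which declining PEP demand after dog vaccination eventually offsets the campaign cost.

```python
# Break-even sketch with HYPOTHETICAL costs (not the paper's figures).
# Scenario A: PEP alone, exposures (and PEP cost) constant.
# Scenario B: an up-front mass dog vaccination campaign, after which
# annual PEP need declines as canine rabies incidence falls.

def break_even_year(pep_annual, campaign_cost, pep_decline, horizon=50):
    """Return the first year in which the cumulative cost of
    (vaccination + declining PEP) drops below PEP alone, or None."""
    cum_a, cum_b = 0.0, campaign_cost
    pep_b = pep_annual
    for year in range(1, horizon + 1):
        cum_a += pep_annual
        cum_b += pep_b
        pep_b *= (1.0 - pep_decline)  # PEP demand shrinks with dog rabies
        if cum_b < cum_a:
            return year
    return None
```

With placeholder inputs (annual PEP cost 100, campaign cost 300, 30% yearly decline in PEP demand) the crossover lands within a decade, mirroring the shape of the paper's 10-year result.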
An efficiently applied One Health concept would largely reduce the cost of PEP in resource poor countries and should be implemented for zoonosis control in general.
CHORUS DETECTION WITH COMBINED USE OF MFCC AND CHROMA FEATURES AND IMAGE PROCESSING FILTERS
A computationally efficient method for detecting a chorus section in popular and rock music is presented. The method utilizes a distance matrix representation that is obtained by summing two separate distance matrices calculated using the mel-frequency cepstral coefficient and pitch chroma features. The benefit of computing two separate distance matrices is that different enhancement operations can be applied on each; an enhancement operation is found beneficial only for the chroma distance matrix. This is followed by detection of the off-diagonal low-distance segments in the distance matrix. From the detected segments, an initial chorus section is selected using a scoring mechanism based on several heuristics, and subjected to further processing. This further processing applies image processing filters in a neighborhood of the distance matrix surrounding the initial chorus section. The final position and length of the chorus are selected based on the filtering results. On a database of 206 popular and rock music pieces an average F-measure of 86% is obtained. It takes about ten seconds to process a song with an average duration of three to four minutes on a Windows XP computer with a 2.8 GHz Intel Xeon processor.
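The core representation can be sketched in a few lines: compute a self-distance matrix per feature stream, normalise, and sum. The arrays below are stand-ins for MFCC and chroma frames; the actual feature extraction and the enhancement/segment-detection stages are not shown.

```python
import numpy as np

def self_distance(frames):
    """Pairwise Euclidean self-distance matrix for a (T, d) feature array.
    Low off-diagonal stripes indicate repeated sections (chorus candidates)."""
    diff = frames[:, None, :] - frames[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def combined_matrix(mfcc, chroma):
    """Sum the two per-feature matrices, max-normalising each first so
    that neither feature dominates the combination."""
    d1, d2 = self_distance(mfcc), self_distance(chroma)
    return d1 / (d1.max() + 1e-9) + d2 / (d2.max() + 1e-9)
```

Keeping the matrices separate until this point is what allows an enhancement filter to be applied to the chroma matrix only, as the method requires.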
A Wideband Base Station Antenna Element With Stable Radiation Pattern and Reduced Beam Squint
This paper presents the design procedure, optimization strategy, theoretical analysis, and experimental results of a wideband dual-polarized base station antenna element with superior performance. The proposed antenna element consists of four electric folded dipoles arranged in an octagon shape that are excited simultaneously for each polarization. It provides ±45° slant-polarized radiation that meets all the requirements for base station antenna elements, including stable radiation patterns, low cross polarization level, high port-to-port isolation, and excellent matching across the wide band. The problem of beam squint for beam-tilted arrays is discussed and it is found that the geometry of this element serves to reduce beam squint. Experimental results show that this element has a wide bandwidth of 46.4% from 1.69 to 2.71 GHz with ≥15-dB return loss and 9.8 ± 0.9-dBi gain. Across this wide band, the variations of the half-power-beamwidths of the two polarizations are all within 66.5° ± 5.5°, the port-to-port isolation is >28 dB, the cross-polarization discrimination is >25 dB, and most importantly, the beam squint is <4° with a maximum 10° down-tilt.
Beyond weight loss: a review of the therapeutic uses of very-low-carbohydrate (ketogenic) diets
Very-low-carbohydrate diets or ketogenic diets have been in use since the 1920s as a therapy for epilepsy and can, in some cases, completely remove the need for medication. From the 1960s onwards they have become widely known as one of the most common methods for obesity treatment. Recent work over the last decade or so has provided evidence of the therapeutic potential of ketogenic diets in many pathological conditions, such as diabetes, polycystic ovary syndrome, acne, neurological diseases, cancer and the amelioration of respiratory and cardiovascular disease risk factors. The possibility that modifying food intake can be useful for reducing or eliminating pharmaceutical methods of treatment, which are often lifelong with significant side effects, calls for serious investigation. This review revisits the meaning of physiological ketosis in the light of this evidence and considers possible mechanisms for the therapeutic actions of the ketogenic diet on different diseases. The present review also questions whether there are still some preconceived ideas about ketogenic diets, which may be presenting unnecessary barriers to their use as therapeutic tools in the physician’s hand.
Big Data Driven Mobile Traffic Understanding and Forecasting: A Time Series Approach
Understanding and forecasting the mobile traffic of large-scale cellular networks is extremely valuable for service providers in controlling and managing explosive mobile data growth, for example in network planning, load balancing, and data pricing mechanisms. This paper targets extracting and modeling the traffic patterns of 9,000 cellular towers deployed in a metropolitan city. To achieve this goal, we design, implement, and evaluate a time series analysis approach that is able to decompose large-scale mobile traffic into regularity and randomness components. Then, we use time series prediction to forecast the traffic patterns based on the regularity components. Our study verifies the effectiveness of the time series decomposition method we use, and shows the geographical distribution of the regularity and randomness components. Moreover, we reveal that high predictability of the regularity component can be achieved, and demonstrate that prediction of the randomness component of mobile traffic data is impossible.
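A minimal sketch of the regularity/randomness split, assuming a simple periodic-profile decomposition (the paper's actual decomposition method may differ): the regularity component is the mean profile over a fixed period (e.g. 24 hours), and the randomness component is whatever remains.

```python
import numpy as np

def decompose(traffic, period):
    """Split a traffic series into a periodic 'regularity' component
    (the mean profile over `period` samples) and a 'randomness' residual."""
    traffic = np.asarray(traffic, dtype=float)
    n = len(traffic) // period * period          # whole periods only
    profile = traffic[:n].reshape(-1, period).mean(axis=0)
    regularity = np.tile(profile, len(traffic) // period + 1)[:len(traffic)]
    randomness = traffic - regularity
    return regularity, randomness
```

Forecasting then operates on the regularity component alone; for a perfectly periodic input the residual is zero, and in real traffic its magnitude indicates how much of the signal is inherently unpredictable.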
Printed MIMO Antenna Systems: Performance Metrics, Implementations and Challenges
Multiple-input-multiple-output (MIMO) technology has become an integral part of wireless systems nowadays. This technology depends on the use of multiple antenna elements at the mobile terminal as well as the base station. The design of compact printed MIMO antenna systems is a challenging task, especially for small-form-factor mobile terminals. The introduction of MIMO brought with it several performance metrics and measurement methods that allow designers to gauge the performance of their antenna systems in real environments. This paper gives an overview of these new metrics, presents several recent examples of printed MIMO antenna systems, and sheds light on some current challenges that designers face when dealing with such multi-antenna systems. Some future perspectives are also presented.
ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra
Industrial systems, e.g., wind turbines, generate large amounts of data from reliable sensors at high velocity. As it is infeasible to store and query such large amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur, and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215
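As a toy illustration of error-bounded, model-based storage (a single constant model, not ModelarDB's adaptive multi-model algorithm), the sketch below greedily fits piecewise-constant segments whose midpoint stays within a user-defined error bound of every covered value.

```python
def compress_constant(values, bound):
    """Greedy piecewise-constant modelling: extend a segment while the
    running min/max span stays within 2*bound, so the segment midpoint
    approximates every covered point to within `bound`."""
    segments = []                      # list of (length, model_value)
    lo = hi = values[0]
    count = 1
    for v in values[1:]:
        if max(hi, v) - min(lo, v) <= 2 * bound:
            lo, hi = min(lo, v), max(hi, v)
            count += 1
        else:                          # bound violated: close the segment
            segments.append((count, (lo + hi) / 2))
            lo = hi = v
            count = 1
    segments.append((count, (lo + hi) / 2))
    return segments

def decompress(segments):
    """Reconstruct an approximate series from (length, value) segments."""
    out = []
    for length, value in segments:
        out.extend([value] * length)
    return out
```

With bound = 0, each segment only extends over exactly repeated values, so the reconstruction is lossless, matching the "possibly zero" error bound in the abstract.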
The influence of personality on HE students' confidence in their academic abilities
Students’ confidence in their academic abilities, measured with the Individual Learning Profile (ILP) scale, was examined in relation to their personality traits and grades. To validate the ILP, in Study 1, factor analysis of data from 3003 students extracted six factors (Reading and Writing, Hard IT, Numeracy, Time Management, Speaking, and Easy IT) with good internal reliability. Subsequently, in Study 2, 130 students completed the refined ILP, and scales measuring the Big Five, Perfectionism, Anxiety, and Self-Esteem. Between 10% and 31% of the variance in four ILP factors, but not IT skills, could be predicted by personality traits, but Self-Esteem and Anxiety were not influential. Higher conscientiousness and openness positively predicted higher confidence in reading and writing, while agreeableness and three aspects of perfectionism predicted confidence in numeracy skills. Being introverted and female was predictive of lower confidence in speaking, as were low conscientiousness and the perfectionistic desire to be organised. Conscientiousness, Extraversion, and the perfectionistic desire to be organised were strong predictors of confidence in time-management skills, which in turn predicted first-year GPA. The reliability of the ILP was examined over a one-year interval.
Mechanism-Based Inactivation of Human Cytochrome P 450 Enzymes and the Prediction of Drug-Drug Interactions
The ability to use in vitro inactivation kinetic parameters in scaling to in vivo drug-drug interactions (DDIs) for mechanism-based inactivators of human cytochrome P450 (P450) enzymes was examined using eight human P450-selective marker activities in pooled human liver microsomes. These data were combined with other parameters (systemic Cmax, estimated hepatic inlet Cmax, fraction unbound, in vivo P450 enzyme degradation rate constants estimated from clinical pharmacokinetic data, and fraction of the affected drug cleared by the inhibited enzyme) to predict increases in exposure to drugs, and the predictions were compared with in vivo DDIs gathered from clinical studies reported in the scientific literature. In general, the use of unbound systemic Cmax as the inactivator concentration in vivo yielded the most accurate predictions of DDI, with a mean fold-error of 1.64. Abbreviated in vitro approaches to identifying mechanism-based inactivators were developed. Testing potential inactivators at a single concentration (IC25) in a 30-min preincubation with human liver microsomes in the absence and presence of NADPH, followed by assessment of P450 marker activities, readily identified those compounds known to be mechanism-based inactivators and represents an approach that can be used with greater throughput. Measurement of decreases in IC50 occurring with a 30-min preincubation with liver microsomes and NADPH was also useful in identifying mechanism-based inactivators, and the IC50 measured after such a preincubation was highly correlated with the kinact/KI ratio measured after a full characterization of inactivation. Overall, these findings support the conclusion that P450 in vitro inactivation data are valuable in predicting clinical DDIs that can occur via this mechanism. The prediction of drug-drug interactions (DDIs) using in vitro enzyme kinetic data has been an area of increasing advances and sophistication.
This has proven to be a valuable endeavor because DDIs remain an important issue in clinical practice and the discovery and development of new drugs. The earlier that the potential for DDIs can be identified in new compounds being studied as potential drugs, the greater the likelihood that this deleterious property can be removed through improved design of the molecule. Also, for those compounds already undergoing clinical trials, in vitro DDI data can be leveraged in the design of adequate and appropriate clinical DDI studies. With our increased understanding of drug-metabolizing enzymes and their roles in the metabolism of specific drugs, a mechanistic approach to assessing DDIs can be taken. The results of clinical DDI studies with one drug can be extrapolated to other drugs that are cleared by the same enzyme. The alteration of drug-metabolizing enzyme activities can occur by three main mechanisms: reversible inhibition, mechanism-based inactivation, and induction. Confidence in quantitatively extrapolating in vitro results to in vivo varies with these mechanisms. For reversible inhibition mechanisms, recent advances in our ability to predict the magnitude of DDIs from in vitro inhibition data have been made such that for cytochrome P450 (P450) enzymes, increases in exposure can be predicted within 2-fold (Brown et al., 2005; Ito et al., 2005; Obach et al., 2006). Through the use of human hepatocytes in culture, enzyme inducers can be readily identified (Silva and Nicholl-Griffith, 2002). However, the magnitude of predicted DDIs for inducers varies with the source of individual hepatocytes. Recently, in vitro induction data from an immortalized human hepatocyte line, which gives a robust and reliable response, have been shown to yield quantitative predictions of CYP3A induction-based DDIs in vivo (Ripp et al.,
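The scaling described above is commonly done with a steady-state enzyme turnover model; the sketch below implements that generic form, in which the inactivation rate λ = kinact·I/(KI + I) is balanced against the enzyme degradation rate constant kdeg. This is a generic textbook-style equation offered for illustration, and the parameter values in the test are invented, not taken from the paper.

```python
def fold_auc_increase(kinact, KI, I, kdeg, fm):
    """Predicted fold increase in exposure (AUC ratio) of a victim drug
    caused by a mechanism-based inactivator at concentration I.

    kinact : maximal inactivation rate constant (1/time)
    KI     : inactivator concentration giving half-maximal inactivation
    I      : in vivo inactivator concentration (e.g. unbound systemic Cmax)
    kdeg   : first-order degradation rate constant of the enzyme
    fm     : fraction of victim-drug clearance via the inhibited enzyme
    """
    lam = kinact * I / (KI + I)        # inactivation rate at concentration I
    return 1.0 / (fm / (1.0 + lam / kdeg) + (1.0 - fm))
```

When I = 0 the prediction is 1.0 (no interaction), and as λ greatly exceeds kdeg the fold increase approaches 1/(1 − fm), the ceiling set by the fraction metabolized.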
Sensitivity analysis of multi-attribute decision making methods in Clinical Group Decision Support System
A clinical group decision support system (CGDSS) has been developed for diagnosing both neurosis and personality disorders. The knowledge stored in the knowledge base was generated from the aggregated preferences given by decision makers. Two types of preferences are used here: the preference for a mental evidence given a mental condition, and the preference for a mental disorder given a mental condition. An ordered weighted averaging operator was adopted to aggregate these preferences, after transforming the selected subset to fuzzy preference relation format. Bayes' theorem was then adopted to compute the probability of evidence given a particular disorder. After developing the knowledge base, the next step is to develop an inference engine. The method used for the inference engine is the multi-attribute decision making (MADM) concept, because the system is directed to choose the best disorder when a particular condition is given. Many methods have been developed to solve MADM problems; however, only SAW, WP, and TOPSIS were appropriate for the problem here. In this knowledge base, the relation between each disorder and evidence is represented by an m × n matrix X of probability values, where Xij is the probability of the jth mental evidence given the ith mental disorder (i = 1, 2, ..., m; j = 1, 2, ..., n). The sensitivity analysis was aimed at determining the degree of sensitivity of each attribute to the ranking outcome of each method; this degree indicates a relationship between an attribute and the ranking outcome, quantified by the influence degree of attribute Cj on ranking outcome fj. The relation between the sensitivity degree and the influence degree for each attribute can then be found by computing Pearson's correlation coefficient.
The largest correlation coefficient indicates the best result. This research shows that the TOPSIS method always has the highest correlation coefficient, and that it grows as the change in the ranking increases. The experimental results show that TOPSIS is the appropriate method for the clinical group decision support system for the above purposes.
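For reference, a minimal sketch of the standard TOPSIS procedure used as one of the three MADM methods: vector-normalise the decision matrix, weight it, and rank alternatives by relative closeness to the ideal solution. The matrix, weights, and benefit flags in the test are illustrative, not the paper's clinical data.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal.

    X       : (m, n) decision matrix of attribute values
    weights : length-n attribute weights
    benefit : length-n booleans; True if the attribute is better when larger
    Returns a length-m score array; higher means a better alternative.
    """
    X = np.asarray(X, dtype=float)
    R = X / np.sqrt((X ** 2).sum(axis=0))          # vector-normalise columns
    V = R * np.asarray(weights, dtype=float)       # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

In the CGDSS setting, rows would be candidate disorders and columns the evidence probabilities Xij; SAW and WP differ only in how they aggregate the weighted matrix into a score.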
From noble metal to Nobel Prize: palladium-catalyzed coupling reactions as key methods in organic synthesis.
Palladium is known to a broad audience as a beautiful, but expensive jewellery metal. In addition, it is nowadays found in nearly every car as part of the automotive catalysts, where palladium is used to eliminate harmful emissions produced by internal combustion engines. On the other hand, and not known to the general public, is the essential role of palladium catalysts in contemporary organic chemistry, a topic which has now been recognized with the Nobel Prize for Chemistry 2010. Have a look at any recent issue of a chemical journal devoted to organic synthesis and you will discover the broad utility of palladium-based catalysts. Among these different palladium-catalyzed reactions, the so-called cross-coupling reactions have become very powerful methods for the creation of new C–C bonds. In general, bond formation takes place here between less-reactive organic electrophiles, typically aryl halides, and different carbon nucleophiles with the help of palladium. Remember the situation 50 years ago, when palladium began to make its way into organic chemistry. At that time C–C bond formation in organic synthesis was typically achieved by stoichiometric reactions of reactive nucleophiles with electrophiles or by pericyclic reactions. Ironically, however, oxidation catalysis was the start of today's carbon–carbon bond-forming methods: The oxidation of olefins to carbonyl compounds, specifically the synthesis of acetaldehyde from ethylene (Wacker process) by applying palladium(II) catalysts, was an important inspiration for further applications. Probably also for Richard Heck, who worked in the 1960s as an industrial chemist with Hercules Corporation. There, in the late 1960s, he developed several coupling reactions of arylmercury compounds in the presence of either stoichiometric or catalytic amounts of palladium(II). Some of this work was published in 1968 in a remarkable series of seven consecutive articles, with Heck as the sole author!
Based on the reaction of phenylmercuric acetate and lithium tetrachloropalladate under an atmosphere of ethylene, which afforded styrene in 80% yield and 10% trans-stilbene, he described in 1972 a protocol for the coupling of iodobenzene with styrene, which today is known as the “Heck reaction”. A very similar reaction had already been published by Tsutomu Mizoroki in 1971. However, Mizoroki didn't follow up on the reaction and died too young from cancer. The coupling protocol for aryl halides with olefins can be considered a milestone for the development and application of organometallic catalysis in organic synthesis, and it set the stage for numerous further applications. Hence, palladium-catalyzed coupling reactions were disclosed continuously during the 1970s (Scheme 1). One of the related reactions is the Sonogashira coupling of aryl halides with alkynes, typically in the presence of catalytic amounts of palladium and copper salts.
Dual-5α-Reductase Inhibition Promotes Hepatic Lipid Accumulation in Man
CONTEXT 5α-Reductase 1 and 2 (SRD5A1, SRD5A2) inactivate cortisol to 5α-dihydrocortisol in addition to their role in the generation of DHT. Dutasteride (a dual SRD5A1 and SRD5A2 inhibitor) and finasteride (a selective SRD5A2 inhibitor) are commonly prescribed, but their potential metabolic effects have only recently been identified. OBJECTIVE Our objective was to provide a detailed assessment of the metabolic effects of SRD5A inhibition and in particular the impact on hepatic lipid metabolism. DESIGN We conducted a randomized study in 12 healthy male volunteers with detailed metabolic phenotyping performed before and after a 3-week treatment with finasteride (5 mg od) or dutasteride (0.5 mg od). Hepatic magnetic resonance spectroscopy (MRS) and two-step hyperinsulinemic euglycemic clamps incorporating stable isotopes with concomitant adipose tissue microdialysis were used to evaluate carbohydrate and lipid flux. Analysis of the serum metabolome was performed using ultra-HPLC-mass spectrometry. SETTING The study was performed in the Wellcome Trust Clinical Research Facility, Queen Elizabeth Hospital, Birmingham, United Kingdom. MAIN OUTCOME MEASURE Incorporation of hepatic lipid was measured with MRS. RESULTS Dutasteride, but not finasteride, increased hepatic insulin resistance. Intrahepatic lipid increased on MRS after dutasteride treatment and was associated with increased rates of de novo lipogenesis. Adipose tissue lipid mobilization was decreased by dutasteride. Analysis of the serum metabolome demonstrated that in the fasted state, dutasteride had a significant effect on lipid metabolism. CONCLUSIONS Dual-SRD5A inhibition with dutasteride is associated with increased intrahepatic lipid accumulation.
Information-theoretic Local Non-malleable Codes and their Applications
Error correcting codes, though powerful, are only applicable in scenarios where the adversarial channel does not introduce “too many” errors into the codewords. Yet, the question of having guarantees even in the face of many errors is well-motivated. Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs (ICS 2010), address precisely this question. Such codes guarantee that even if an adversary completely over-writes the codeword, he cannot transform it into a codeword for a related message. Not only is this a creative solution to the problem mentioned above, it is also a very meaningful one. Indeed, nonmalleable codes have inspired a rich body of theoretical constructions as well as applications to tamper-resilient cryptography, CCA2 encryption schemes and so on. Another remarkable variant of error correcting codes were introduced by Katz and Trevisan (STOC 2000) when they explored the question of decoding “locally”. Locally decodable codes are coding schemes which have an additional “local decode” procedure: in order to decode a bit of the message, this procedure accesses only a few bits of the codeword. These codes too have received tremendous attention from researchers and have applications to various primitives in cryptography such as private information retrieval. More recently, Chandran, Kanukurthi and Ostrovsky (TCC 2014) explored the converse problem of making the “re-encoding” process local. Locally updatable codes have an additional “local update” procedure: in order to update a bit of the message, this procedure accesses/rewrites only a few bits of the codeword. At TCC 2015, Dachman-Soled, Liu, Shi and Zhou initiated the study of locally decodable and updatable non-malleable codes, thereby combining all the important properties mentioned above into one tool. Achieving locality and non-malleability is non-trivial. Yet, Dachman-Soled et al. provide a meaningful definition of local non-malleability and provide a construction that satisfies it. 
Unfortunately, their construction is secure only in the computational setting. In this work, we construct information-theoretic non-malleable codes which are locally updatable and decodable. Our codes are non-malleable against F_half, the class of tampering functions where each function is arbitrary but acts (independently) on two separate parts of the codeword. This is one of the strongest adversarial models for which explicit constructions of standard non-malleable codes (without locality) are known. Our codes have O(1) rate and locality O(λ), where λ is the security parameter. We also show a rate-1 code with locality ω(1) that is non-malleable against bit-wise tampering functions. Finally, similar to Dachman-Soled et al., our work finds applications to information-theoretic secure RAM computation.
The BellKor Solution to the Netflix Grand Prize
This article describes part of our contribution to the “BellKor's Pragmatic Chaos” final solution, which won the Netflix Grand Prize. The other portion of the contribution was created while working at AT&T with Robert Bell and Chris Volinsky, as reported in our 2008 Progress Prize report [3]. The final solution includes all the predictors described there. In this article we describe only the newer predictors. So what is new over last year's solution? First, we further improved the baseline predictors (Sec. III). This in turn improves our other models, which incorporate those predictors, like the matrix factorization model (Sec. IV). In addition, an extension of the neighborhood model that addresses temporal dynamics was introduced (Sec. V). On the Restricted Boltzmann Machines (RBM) front, we use a new RBM model with superior accuracy by conditioning the visible units (Sec. VI). The final addition is the introduction of a new blending algorithm, which is based on gradient boosted decision trees (GBDT) (Sec. VII).
A multiyear estimate of the effective pollen donor pool for Albizia julibrissin
Studies of pollen movement in plant populations are often limited to a single reproductive event, despite concerns about the adequacy of single-year measures for perennial organisms. In this study, we estimate the effective number of pollen donors per tree from a multiyear study of Albizia julibrissin Durazz (mimosa, Fabaceae), an outcrossing, insect-pollinated tree. We determined 40 seedling genotypes for each of 15 seed trees during 4 successive years. A molecular analysis of variance of the pollen gametes fertilizing the sampled seeds was used to partition variation in pollen pools among seed trees, among years, and within single tree-year collections. Using these variance components, we demonstrate significant male gametic variability among years for individual trees. However, results indicate that yearly variation in the ‘global pollen pool’, averaged over all 15 seed trees for these 4 years, is effectively zero. We estimate the effective number of pollen donors for a single mimosa tree (Nep) to be 2.87. Single-season analyses yield Nep ≈ 2.05, which is 40% less than the value of Nep estimated from 4 years of data. We discuss optimal sampling for future studies designed to estimate Nep. Studies should include more trees, each sampled over at least a few years, with fewer seeds per tree per year than are needed for a traditional parentage study.
Systematic Planning for ICT Integration in Topic Learning
Integrating Information and Communication Technology (ICT) into teaching and learning is a growing area that has attracted many educators’ efforts in recent years. Based on the scope of content covered, ICT integration can happen in three different areas: curriculum, topic, and lesson. This paper elaborates upon the concept of ICT integration, and presents a systematic planning model for guiding ICT integration in the topic area. A sample of an ICT integration plan is described in this paper to demonstrate how this model can be applied in practice.
Organic livestock production: an emerging opportunity with new challenges for producers in tropical countries.
Agrochemicals, veterinary drugs, antibiotics and improved feeds can increase the food supply while minimising production costs in various livestock production systems around the world. However, these days, quality-conscious consumers are increasingly seeking environmentally safe, chemical-residue-free healthy foods, along with product traceability and a high standard of animal welfare, which organic production methods are said to ensure. Organic production is not only a challenge for producers in developing countries; it offers new export opportunities as well. Organic agriculture is practised by 1.8 million producers in 160 countries, and production of organically grown food continues to increase steadily by 15% per year. Most tropical countries are now exporting organic agricultural products but, apart from organic beef from Brazil and Argentina, organic livestock products are yet to take off. Most trade in organic livestock products is restricted to the European Union and other developed nations. Nevertheless, tropical countries cannot afford to neglect this emerging system of animal production. Organic production is knowledge- and management-intensive. Producers must be well versed in organic production standards, principles and practices, which require a high degree of knowledge and skill. In organic production, it is not simply the final product but the whole production process that must be inspected and approved by the accredited certification bodies. Organic livestock farming is still evolving, and further research is needed to make it sustainable. In this paper, the authors review the prospects of organic animal husbandry and its possible constraints in developing and tropical countries.
Impacts of Road Grade on Fuel Consumption of Light Vehicles by Use of Google Earth DEM
Recently, more and more vehicles are running on the road, causing fuel consumption and environmental pollution problems. Although road grade is a significant factor in fuel consumption, it is often neglected in vehicle energy consumption models and in the choice of optimal paths by vehicle navigation systems because of the difficulty of its collection and measurement. This work demonstrates the impact of road grade on fuel consumption at different speeds, and the change in vehicle fuel consumption due to changes in road gradient, based on light vehicles' driving data in Beijing and elevation data available from the Google Earth DEM. In addition, we characterize the influence of driving styles on fuel consumption before and on the slope. The results indicate that the effects of road grade should not be excluded from vehicle energy consumption evaluations and path choosing, i.e., "eco-routing".
Over 450-GHz ft and fmax InP/InGaAs DHBTs With a Passivation Ledge Fabricated by Utilizing SiN/SiO2 Sidewall Spacers
This paper describes InP/InGaAs double heterojunction bipolar transistor (HBT) technology that uses SiN/SiO2 sidewall spacers. This technology enables the formation of ledge passivation and narrow base metals by i-line lithography. With this process, HBTs with various emitter sizes and emitter-base (EB) spacings can be fabricated on the same wafer. The impact of the emitter size and EB spacing on the current gain and high-frequency characteristics is investigated. The reduction of the current gain is <5% even though the emitter width decreases from 0.5 to 0.25 μm. A high current gain of over 40 is maintained even for a 0.25-μm emitter HBT. The HBTs with emitter widths ranging from 0.25 to 0.5 μm also provide peak ft of over 430 GHz. On the other hand, peak fmax greatly increases from 330 to 464 GHz with decreasing emitter width from 0.5 to 0.25 μm. These results indicate that the 0.25-μm emitter HBT with the ledge passivation exhibits balanced high-frequency performance (ft = 452 GHz and fmax = 464 GHz), while maintaining a current gain of over 40.
Discovering informative content blocks from Web documents
In this paper, we propose a new approach to discover informative content from a set of tabular documents (or Web pages) of a Web site. Our system, InfoDiscoverer, first partitions a page into several content blocks according to the HTML <TABLE> tags in the page. Based on the occurrence of features (terms) in the set of pages, it calculates the entropy value of each feature. From the entropy values of the features in a content block, the entropy value of the block is defined. By analyzing this information measure, we propose a method to dynamically select an entropy threshold that classifies blocks as either informative or redundant. Informative content blocks are the distinguished parts of a page, whereas redundant content blocks are the parts common across pages. Based on the answer set generated from 13 manually tagged news Web sites with a total of 26,518 Web pages, experiments show that both recall and precision rates are greater than 0.956. That is, using this approach, the informative blocks (news articles) of these sites can be automatically separated from semantically redundant content such as advertisements, banners, navigation panels, news categories, etc. By adopting InfoDiscoverer as a preprocessor for information retrieval and extraction applications, retrieval and extraction precision will increase, and indexing size and extraction complexity will be reduced.
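The entropy-based classification described above can be sketched as follows. This is a simplified illustration under assumed normalisation choices (entropy scaled to [0, 1] by log of the page count, block entropy as the mean of its term entropies, a fixed threshold); InfoDiscoverer's exact weighting and dynamic threshold selection differ:

```python
import math

def term_entropies(pages):
    """pages: list of pages, each a list of term-lists (one per content block).
    A term spread evenly across many pages (navigation, ads) gets entropy
    near 1; a page-specific term gets entropy near 0."""
    n = len(pages)
    counts = {}                          # term -> list of per-page counts
    for page in pages:
        freq = {}
        for block in page:
            for t in block:
                freq[t] = freq.get(t, 0) + 1
        for t, c in freq.items():
            counts.setdefault(t, []).append(c)
    ents = {}
    for t, cs in counts.items():
        total = sum(cs)
        h = -sum((c / total) * math.log(c / total) for c in cs)
        ents[t] = h / math.log(n) if n > 1 else 0.0   # normalise to [0, 1]
    return ents

def classify_blocks(page, ents, threshold=0.5):
    """Block entropy = mean entropy of its terms; high-entropy blocks are
    'redundant' (shared boilerplate), low-entropy blocks 'informative'."""
    labels = []
    for block in page:
        h = sum(ents.get(t, 0.0) for t in block) / len(block) if block else 0.0
        labels.append("redundant" if h > threshold else "informative")
    return labels

# Toy site: a navigation block repeated on every page plus a unique article.
pages = [
    [["home", "news", "sports"], ["election", "results", "announced"]],
    [["home", "news", "sports"], ["storm", "hits", "coast"]],
    [["home", "news", "sports"], ["markets", "rally", "today"]],
]
ents = term_entropies(pages)
print(classify_blocks(pages[0], ents))  # → ['redundant', 'informative']
```

The key design point survives the simplification: redundancy is defined relative to the page set, so no hand-written template for the site is needed.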
Reducing hospital-acquired pressure ulcer prevalence through a focused prevention program.
OBJECTIVE To provide health care organizations with strategies for decreasing the prevalence of hospital-acquired pressure ulcers. DESIGN Hospital-acquired pressure ulcer prevalence was measured every 6 months for 4.5 years while multiple strategies were implemented. SETTING The study took place in a not-for-profit, 548-bed, 2-hospital system in Southwest Florida. PATIENTS All adult patients, with the exception of those admitted for obstetric or mental health care. INTERVENTIONS An assortment of interventions was implemented, including electronic medical records, risk assessment tied to automatic consults, pressure relief measures including new equipment and personnel augmentation, and an interdisciplinary team to decide on protocols. MAIN RESULTS The hospital-acquired prevalence rate for all pressure ulcers was reduced by 81%. The rate for heel ulcers alone was reduced by 90%. CONCLUSION A pressure ulcer prevention program has been developed, which has shown a trend toward improved patient outcomes with a resultant cost savings.
Semantics of the placebo
With increased interest in the placebo effect and clinical methodology, definitions of the placebo have proliferated. But there is little agreement among lexicographers, historians, clinicians, and researchers about how to define this word. Derivations of the word placebo from the Hebrew and Latin and its use in the Bible and literature were reviewed. Historical inaccuracies in classical references to the placebo's origin, history, and development were corrected. An explanation was given for the introduction of the word in 1785, and for the changes in definition that have appeared since that time. Definitions of the placebo and placebo effect, based on historical considerations and heuristic principles, were then proposed.
Endoscopic ultrasound elastography for evaluation of lymph nodes and pancreatic masses: a multicenter study.
AIM To evaluate the ability of endoscopic ultrasound (EUS) elastography to distinguish benign from malignant pancreatic masses and lymph nodes. METHODS A multicenter study was conducted and included 222 patients who underwent EUS examination with assessment of a pancreatic mass (n = 121) or lymph node (n = 101). The classification as benign or malignant, based on the real-time elastography pattern, was compared with the classification based on the B-mode EUS images and with the final diagnosis obtained by EUS-guided fine needle aspiration (EUS-FNA) and/or by surgical pathology. An interobserver study was performed. RESULTS The sensitivity and specificity of EUS elastography for differentiating benign from malignant pancreatic lesions were 92.3% and 80.0%, respectively, compared to 92.3% and 68.9%, respectively, for the conventional B-mode images. The sensitivity and specificity of EUS elastography for differentiating benign from malignant lymph nodes were 91.8% and 82.5%, respectively, compared to 78.6% and 50.0%, respectively, for the B-mode images. The kappa coefficient was 0.785 for the pancreatic masses and 0.657 for the lymph nodes. CONCLUSION EUS elastography is superior to conventional B-mode imaging and appears to be able to distinguish benign from malignant pancreatic masses and lymph nodes with high sensitivity, specificity and accuracy. It might be reserved as a second-line examination to help characterise pancreatic masses after negative EUS-FNA and might increase the yield of EUS-FNA for lymph nodes.
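For readers comparing the reported figures, sensitivity and specificity follow directly from the four cells of a diagnostic confusion table. This sketch uses hypothetical counts chosen only to illustrate the arithmetic, not the study's raw data:

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP / (TP + FN): share of malignant lesions correctly
    flagged.  Specificity = TN / (TN + FP): share of benign lesions
    correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration (not taken from the study):
# 90 malignant lesions with 83 flagged; 31 benign lesions with 25 cleared.
sens, spec = sens_spec(tp=83, fn=7, tn=25, fp=6)
print(f"{sens:.1%}, {spec:.1%}")  # → 92.2%, 80.6%
```

Because the two rates are computed on disjoint denominators (diseased vs. healthy cases), one can improve while the other worsens, which is why the abstract reports both for each modality.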
Profiting from technological innovation: Implications for integration, collaboration, licensing and public policy
This paper attempts to explain why innovating firms often fail to obtain significant economic returns from an innovation, while customers, imitators and other industry participants benefit. Business strategy, particularly as it relates to the firm's decision to integrate and collaborate, is shown to be an important factor. The paper demonstrates that when imitation is easy, markets don't work well, and the profits from innovation may accrue to the owners of certain complementary assets, rather than to the developers of the intellectual property. This speaks to the need, in certain cases, for the innovating firm to establish a prior position in these complementary assets. The paper also indicates that innovators with new products and processes which provide value to consumers may sometimes be so ill positioned in the market that they will necessarily fail. The analysis provides a theoretical foundation for the proposition that manufacturing often matters, particularly to innovating nations. Innovating firms without the requisite manufacturing and related capacities may die, even though they are the best at innovation. Implications for trade policy and domestic economic policy are examined.
An Exploration of Maternal Intimate Partner Violence Experiences and Infant General Health and Temperament
While the women’s health consequences of intimate partner violence have received much research attention, less is known about how maternal abuse experiences affect infant health and well-being. Existing studies have also been unable to examine specific types of intimate partner violence such as psychological aggression, physical abuse, and sexual coercion. This secondary data analysis explored the prevalence, patterns, and types of intimate partner violence within a large cohort of mothers and explored the relationship between maternal intimate partner violence experiences and infant’s general health and temperament at 1 year of age. Existing data were drawn from the Fragile Families and Child Wellbeing study which collected data through surveys conducted shortly after the infant’s birth (baseline) and at 1 year of age (follow-up). Records from 4,141 mothers recruited from 75 hospitals, in 20 cities, in the US were used. Bivariate and multivariate regression analyses were conducted. Results show high rates of intimate partner violence. Maternal reports of any intimate partner violence at baseline or follow-up were both significantly associated with increased odds of less than excellent infant general health and difficult temperament. Independent examination of psychological, physical, and sexual abuse revealed differential relationships between the types of intimate partner violence and infant health outcomes. Results from this study contribute to our understanding of the infant health threats associated with maternal intimate partner violence experiences. Additional research addressing the complex relationship between maternal abuse experiences and infant health and specific intervention implications is warranted.