Parasitic twin with gastroschisis is one of the rarest variants of conjoined twins: a case report.
We report a case of parasitic twin, or incomplete (heteropagus) twinning, comprising extra portions of a pelvis, lower and upper limbs, duplicated genitalia, and herniation of the intestinal tract with spleen. This variant of conjoined twinning (CT) is consistent with fusion of two embryos followed by resorption of the caudal half of one of them, resulting in a normal male baby with the upper half of a male parasitic twin fused to his chest.
Development and Application of Electrochemical Sensor Based on Molecularly Imprinted Polymer and Carbon Nanotubes for the Determination of Carvedilol
This work describes the preparation of a glassy carbon electrode (GCE) modified with molecularly imprinted polymer (MIP) and multiwalled carbon nanotubes (MWCNTs) for determination of carvedilol (CAR). Electrochemical behavior of CAR on the modified electrode was evaluated using cyclic voltammetry. The best composition was found to be 65% (m/m) of MIP. Under optimized conditions (pH 8.5 in 0.25 mol·L−1 Britton–Robinson buffer and 0.1 mol·L−1 KCl) the voltammetric method showed a linear response for CAR in the range of 50–325 μmol·L−1 (R = 0.9755), with detection and quantification limits of 16.14 μmol·L−1 and 53.8 μmol·L−1, respectively. The developed method was successfully applied for determination of CAR in real samples of pharmaceuticals. The sensor presented good sensitivity, rapid detection of CAR, and quick and easy preparation. Furthermore, the material used as modifier has a simple synthesis and its amount utilized is very small, thus illustrating the economic feasibility of this sensor.
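The abstract reports detection and quantification limits without giving the formulas; the minimal sketch below shows how such limits are commonly estimated from a linear calibration curve, assuming the usual 3.3σ/slope and 10σ/slope conventions (the paper may define them differently) and using illustrative data rather than the paper's measurements.

```python
import numpy as np

# Hypothetical calibration data: CAR concentration (umol/L) vs. peak current (uA).
# Values are illustrative, not taken from the paper.
conc = np.array([50, 100, 150, 200, 250, 300, 325], dtype=float)
current = np.array([0.52, 1.01, 1.55, 2.02, 2.49, 3.05, 3.28])

# Least-squares calibration line: current = slope * conc + intercept
slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # standard error of the regression (2 fitted parameters)

# Common IUPAC-style estimates (an assumption; the paper may compute them differently):
lod = 3.3 * sigma / slope   # limit of detection
loq = 10.0 * sigma / slope  # limit of quantification
print(f"slope={slope:.4f} uA*L/umol, LOD={lod:.1f} umol/L, LOQ={loq:.1f} umol/L")
```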
Development of algorithms and software for forecasting, nowcasting and variability of TEC
Department of Electrical and Electronics Engineering, Middle East Technical University, Balgat, Ankara, Turkey; TUBITAK Marmara Research Center, Information Technologies Research Institute, Gebze, Kocaeli, Turkey; Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, U.K.; Faculty of Aeronautics and Astronautics, Istanbul Technical University (İTÜ), Maslak, Istanbul, Turkey; Department of Mathematics, Istanbul Technical University, Maslak, Istanbul, Turkey; Department of Electrical and Computer Engineering, Aristotelian University of Thessaloniki, Greece; Deutsches Zentrum für Luft und Raumfahrt (DLR), Institut für Kommunikation und Navigation (IKN), Neustrelitz, Germany
A new medium voltage PWM inverter topology for adjustable speed drives
In this paper a new PWM inverter topology suitable for medium voltage (2300/4160 V) adjustable speed drive (ASD) systems is proposed. The modular inverter topology is derived by combining three standard 3-phase inverter modules and a 0.33 pu output transformer. The output voltage is high quality, multistep PWM with low dv/dt. Further, the approach also guarantees balanced operation and 100% utilization of each 3-phase inverter module over the entire speed range. These features enable the proposed topology to be suitable for powering constant torque as well as variable torque type loads. Clean power utility interface of the proposed inverter system can be achieved via an 18-pulse input transformer. Analysis, simulation, and experimental results are shown to validate the concepts.
Organic Carbon Cycling in Forested Watersheds: A Carbon Isotope Approach
Dissolved organic carbon (DOC) is important in the acid-base chemistry of acid-sensitive freshwater systems; in the complexation, mobility, persistence, and toxicity of metals and other pollutants; and in lake carbon metabolism. Carbon isotopes (13C and 14C) are used to study the origin, transport, and fate of DOC in a softwater catchment in central Ontario. Precipitation, soil percolates, groundwaters, stream, beaver pond, and lake waters, and lake sediment pore water were characterized chemically and isotopically. In addition to total DOC, isotopic measurements were made on the humic and fulvic DOC fractions. The lake is a net sink for DOC. The 14C results indicate that the turnover time of most of the DOC in streams, lakes, and wetlands is fast, less than 40 years, and on the same time scale as changes in acidic deposition. DOC in groundwaters is composed of older carbon than surface waters, indicating extensive cycling of DOC in the upper soil zone or aquifer.
Antisense Oligonucleotide against hTERT (Cantide) Inhibits Tumor Growth in an Orthotopic Primary Hepatic Lymphoma Mouse Model
BACKGROUND Human xenograft models, resulting from orthotopic transplantation (implantation into the anatomically correct site) of histologically intact tissue into animals, are important for investigating local tumor growth, vascular and lymphatic invasion at the primary tumor site and metastasis. METHODOLOGY/PRINCIPAL FINDINGS We used surgical orthotopic transplantation to establish a nude mouse model of primary hepatic lymphoma (PHL), HLBL-0102. We performed orthotopic transfer of the HLBL-0102 tumor for 42 generations and characterized the tumor cells. The maintenance of PHL characteristics was supported by immunohistochemical and cytogenetic analyses. We also report the antitumor effect of Cantide, an antisense phosphorothioate oligonucleotide against hTERT, on the growth of HLBL-0102 tumors. Cantide produced a significant, dose-dependent inhibition of tumor weight and serum LDH activity in the orthotopically transplanted animals. Importantly, survival was prolonged in Cantide-treated HLBL-0102 tumor-bearing mice when compared to mock-treated mice. CONCLUSIONS/SIGNIFICANCE Our study provided the basis for the development of a clinical trial protocol to treat PHL.
The Lung Image Database Consortium (LIDC) data collection process for nodule detection and annotation
RATIONALE AND OBJECTIVES The Lung Image Database Consortium (LIDC) is developing a publicly available database of thoracic computed tomography (CT) scans as a medical imaging research resource to promote the development of computer-aided detection or characterization of pulmonary nodules. To obtain the best estimate of the location and spatial extent of lung nodules, expert thoracic radiologists reviewed and annotated each scan. Because a consensus panel approach was neither feasible nor desirable, a unique two-phase, multicenter data collection process was developed to allow multiple radiologists at different centers to asynchronously review and annotate each CT scan. This data collection process was also intended to capture the variability among readers. MATERIALS AND METHODS Four radiologists reviewed each scan using the following process. In the first or "blinded" phase, each radiologist reviewed the CT scan independently. In the second or "unblinded" review phase, results from all four blinded reviews were compiled and presented to each radiologist for a second review, allowing the radiologists to review their own annotations together with the annotations of the other radiologists. The results of each radiologist's unblinded review were compiled to form the final unblinded review. An XML-based message system was developed to communicate the results of each reading. RESULTS This two-phase data collection process was designed, tested, and implemented across the LIDC. More than 500 CT scans have been read and annotated using this method by four expert readers; these scans either are currently publicly available at http://ncia.nci.nih.gov or will be in the near future. CONCLUSIONS A unique data collection process was developed, tested, and implemented that allowed multiple readers at distributed sites to asynchronously review CT scans multiple times. This process captured the opinions of each reader regarding the location and spatial extent of lung nodules.
Pragmatism, Policy Science, and the State
The philosophy of pragmatism is often cited as the source of the theoretical underpinnings of the contemporary policy sciences. However, an examination of the work of John Dewey reveals that pragmatism is incompatible with the conception of knowledge which now prevails in these sciences as well as the relationship currently established between this form of inquiry and the state. Fidelity to pragmatism requires a fundamental reconceptualization of the practice of social science and a reconsideration of the organization of knowledge and power in a democratic society.
Diuretic response in acute heart failure: an analysis from ASCEND-HF.
BACKGROUND Diuretic unresponsiveness often occurs during hospital admission for acute heart failure (AHF) and is associated with adverse outcome. This study aims to investigate determinants, clinical outcome, and the effects of nesiritide on diuretic response early after admission for AHF. METHODS Diuretic response, defined as weight loss per 40 mg of furosemide or equivalent, was examined from hospital admission to 48 hours in 4,379 patients from the ASCEND-HF trial. As an additional analysis, a urinary diuretic response metric was investigated in 5,268 patients using urine volume from hospital admission to 24 hours per 40 mg of furosemide or equivalent. RESULTS Mean diuretic response was -0.42 kg/40 mg of furosemide (interquartile range -1.0, -0.05). Poor responders had lower blood pressure, more frequent diabetes, long-term use of loop diuretics, poorer baseline renal function, and lower urine output (all P < .01). Randomized nesiritide treatment was not associated with diuretic response (P = .987). Good diuretic response was independently associated with a significantly decreased risk of 30-day all-cause mortality or heart failure rehospitalization (odds ratio 0.44, 95% CI 0.29-0.65, highest vs lowest quintile, P < .001). Diuretic response based on urine output per 40 mg of furosemide showed similar results in terms of clinical predictors, association with outcome, and the absence of an effect of nesiritide. CONCLUSIONS Poor diuretic response early after hospital admission for AHF is associated with low blood pressure, renal impairment, low urine output, and an increased risk of death or rehospitalization early after discharge. Nesiritide had a neutral effect on diuretic response.
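As a worked illustration of the metric defined above (weight change per 40 mg of furosemide or equivalent), a minimal sketch:

```python
def diuretic_response(weight_admission_kg: float,
                      weight_48h_kg: float,
                      furosemide_equiv_mg: float) -> float:
    """Weight change (kg) per 40 mg of furosemide or equivalent.

    Negative values mean weight loss, matching the sign convention in the
    abstract (mean response was -0.42 kg per 40 mg).
    """
    if furosemide_equiv_mg <= 0:
        raise ValueError("diuretic dose must be positive")
    weight_change = weight_48h_kg - weight_admission_kg
    return weight_change / (furosemide_equiv_mg / 40.0)

# Example: 1.5 kg lost on 120 mg furosemide -> -0.5 kg per 40 mg
print(diuretic_response(82.0, 80.5, 120.0))
```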
Narrow assessments misrepresent development and misguide policy: comment on Steinberg, Cauffman, Woolard, Graham, and Banich (2009).
Intellectual and psychosocial functioning develop along complex learning pathways. Steinberg, Cauffman, Woolard, Graham, and Banich (see record 2009-18110-001) measured these two classes of abilities with narrow, biased assessments that captured only a segment of each pathway and created misleading age patterns based on ceiling and floor effects. It is a simple matter to shift the assessments to produce the opposite pattern, with cognitive abilities appearing to develop well into adulthood and psychosocial abilities appearing to stop developing at age 16. Their measures also lacked a realistic connection to the lived behaviors of adolescents, abstracting too far from messy realities and thus lacking ecological validity and the nuanced portrait that the authors called for. A drastically different approach to assessing development is required that (a) includes the full age-related range of relevant abilities instead of a truncated set and (b) examines the variability and contextual dependence of abilities relevant to the topics of murder and abortion.
THE INFLUENCE OF CONSUMERS' PERCEPTION OF GREEN PRODUCTS ON GREEN PURCHASE INTENTION
Green consumerism has received increasing attention as consumer awareness of green products has grown. Therefore, the aim of this paper was to examine the influence of consumer perception of green products on green purchase intention. In this study, perception of green products was conceptualized as a multidimensional variable comprising green corporate perception, eco-label, green advertising, green packaging, and green product value. Using a survey, a total of 159 questionnaires were collected from respondents aged above 18 in Sabah. The results demonstrated that, within consumer perception, green corporate perception, eco-label, and green product value had significant positive influences on green purchase intention. The findings also revealed that eco-label and green product value made the largest contribution to influencing green purchase intention among consumers. In contrast, both green advertising and green packaging had no significant impact on consumer intention to purchase green products.
Reservoir oil bubblepoint pressures revisited; solution gas–oil ratios and surface gas specific gravities
A large number of recently published bubblepoint pressure correlations have been checked against a large, diverse set of service company fluid property data with worldwide origins. The accuracy of the correlations is dependent on the precision with which the data are measured. In this work a bubblepoint pressure correlation is proposed which is as accurate as the data permit. Certain correlations, for bubblepoint pressure and other fluid properties, require use of stock-tank gas rate and specific gravity. Since these data are seldom measured in the field, additional correlations are presented in this work, requiring only data usually available in field operations. These correlations could also have usefulness in estimating stock-tank vent gas rate and quality for compliance purposes.
A system of portable ECG monitoring based on Bluetooth mobile phone
Aiming at diagnosing and preventing cardiovascular disease, a portable ECG monitoring system based on Bluetooth mobile phones is presented. The system consists of novel dry skin electrodes, an ECG monitoring circuit, and a smart phone. The weak ECG signal extracted from the dry electrodes is amplified, band-pass filtered, and analog-to-digital converted, then sent to the mobile phone over Bluetooth for real-time display on screen. The core ECG monitoring circuit is composed of a CMOS preamplifier ASIC of our own design, a band-pass filter, a microcontroller and a Bluetooth module. The module measures 5.5 cm × 3.4 cm × 1.6 cm, weighs only 20.76 g (without batteries), and consumes 115 mW. Tests show that the system operates stably and precisely and displays the ECG in real time.
Literacy acquisition reduces the influence of automatic holistic processing of faces and houses
Writing was invented too recently to have influenced the human genome. Consequently, reading acquisition must rely on partial recycling of pre-existing brain systems. Prior fMRI evidence showed that in literates a left-hemispheric visual region increases its activation to written strings relative to illiterates and reduces its response to faces. Increasing literacy also leads to a stronger right-hemispheric lateralization for faces. Here, we evaluated whether this reorganization of the brain's face system has behavioral consequences for the processing of non-linguistic visual stimuli. Three groups of adult illiterates, ex-illiterates and literates were tested with the sequential composite face paradigm that evaluates the automaticity with which faces are processed as wholes. Illiterates were consistently more holistic than participants with reading experience in dealing with faces. A second experiment replicated this effect with both faces and houses. Brain reorganization induced by literacy seems to reduce the influence of automatic holistic processing of faces and houses by enabling the use of a more analytic and flexible processing strategy, at least when holistic processing is detrimental to the task.
Supporting public health priorities: recommendations for physical education and physical activity promotion in schools.
Physical activity (PA) provides numerous physiological and psychosocial benefits. However, lifestyle changes, including reduced PA opportunities in multiple settings, have resulted in an escalation of overweight and obesity and related health problems. Poor physical and mental health, including metabolic and cardiovascular problems, is seen at progressively younger ages, and the systematic decline in school PA has contributed to this trend. Of note, the crowded school curriculum with an intense focus on academic achievement, lack of school leadership support, funding and resources, plus poor-quality teaching are barriers to PA promotion in schools. The school setting, and physical educators in particular, must embrace their role in public health by adopting a comprehensive school PA program. We provide an overview of key issues and challenges in the area, plus best bets and recommendations for physical education and PA promotion in the school system moving forward.
Treatment-related changes in brain activation in patients with fibromyalgia syndrome
Little is known about the effects of successful treatment on brain function in chronic pain. This study examined changes in pain-evoked brain activation following behavioral extinction training in fibromyalgia patients. Using functional magnetic resonance imaging, brain activation to painful mechanical stimuli applied to the 2nd phalanx of the left 2nd digit (m. flexor digitorum) was assessed in 10 patients with fibromyalgia syndrome (FM) before and after behavioral extinction training. The behavioral treatment significantly reduced interference from pain in the FM patients. Mechanical pain threshold and pain tolerance increased significantly after treatment. Activation in the insula shifted bilaterally from a more anterior site before treatment to a more posterior location after treatment. The pre- to post-treatment reduction in both interference related to pain and pain severity were significantly associated with bilateral activation in pain-evoked activity in the posterior insula, the ipsilateral caudate nucleus/striatum, the contralateral lenticular nucleus, the left thalamus and the primary somatosensory cortex contralateral to the stimulated side. These data show a relation between successful behavioral treatment and higher activation bilaterally in the posterior insula and in the contralateral primary somatosensory cortex. Future studies should compare responders and non-responders for differential treatment effects and examine in more detail the mechanisms underlying these changes.
An overlapping module identification method in protein-protein interaction networks
Previous studies have shown modular structures in PPI (protein-protein interaction) networks. More recently, many genome and metagenome investigations have focused on identifying modules in PPI networks. However, most existing methods are insufficient when applied to networks with overlapping modular structures. In this study, we describe a novel overlapping module identification method (OMIM) to address this problem. Our method is an agglomerative clustering method that merges modules according to their contributions to modularity. Nodes that have positive effects on more than two modules are defined as overlapping parts. In addition, we designed de-noising steps based on a clustering coefficient and hub-finding steps based on nodal weight. The low computational complexity and few control parameters make our method suitable for large-scale PPI network analysis. First, we verified OMIM on a small artificial word association network, which provided a comprehensive evaluation. Then experiments on real PPI networks from the MIPS Saccharomyces cerevisiae dataset were carried out. The results show that OMIM outperforms several other popular methods in identifying high-quality modular structures.
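The abstract does not spell out the merge criterion; assuming the standard Newman modularity gain for agglomerative merging (an assumption standing in for OMIM's exact contribution score), a sketch of how merging two candidate modules can be scored:

```python
import numpy as np

def modularity_merge_gain(adj, comm_a, comm_b):
    """Gain in Newman modularity from merging two modules.

    adj: symmetric 0/1 adjacency matrix; comm_a, comm_b: node index lists.
    Delta Q = 2 * (e_ab - a_a * a_b), where e_ab is the fraction of edges
    between the modules and a_x the fraction of edge ends inside module x.
    """
    two_m = adj.sum()                          # each edge counted twice
    e_ab = adj[np.ix_(comm_a, comm_b)].sum() / two_m
    a_a = adj[comm_a].sum() / two_m
    a_b = adj[comm_b].sum() / two_m
    return 2 * (e_ab - a_a * a_b)

# Two triangles joined by one edge: merging them decreases modularity,
# so an agglomerative method would keep them as separate modules.
adj = np.zeros((6, 6), int)
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[u, v] = adj[v, u] = 1
print(modularity_merge_gain(adj, [0, 1, 2], [3, 4, 5]))  # negative gain
```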
Understanding the Effects of Mobile Gamification on Learning Performance
With the development of mobile technology, mobile learning, which is not bound by time and space, has become one of the most essential learning methods for modern people. However, previous studies have suggested that improvements are still needed in mobile learning performance. One of the means to achieve this goal is situated learning, which enhances learners’ autonomous motivation and makes them more enthusiastic about learning. Nonetheless, few studies have attempted to increase learning performance through situated learning. This study attempts to explore the development of a productive learning atmosphere in the context of mobile situated learning. An experiment is conducted with university-level students having a homogeneous background and coursework by applying heterogeneous pedagogies, including textual pedagogy, video pedagogy, collaborative pedagogy, and gamification pedagogy. Moreover, uses and gratifications theory and cognitive load theory were adopted to identify the cognitive factors that influence learning performance in situated learning. It is hoped that education service providers can benefit from the insights discovered in this study and implement more effective learning strategies in mobile education.
I2oTology - Tracking-Oriented Ontology
Combining Internet of Things (IoT) and ontology concepts is becoming a good strategy for storing sensor and Smart Object (SO) information, using semantic capabilities and ontology inference to improve information manipulation and make it more intelligent; IoT-Lite and SSN (Semantic Sensor Network) are examples of ontologies for IoT. This paper presents I2oTology, a tracking-oriented ontology. The purpose of I2oTology is to provide semantics for tracking smart objects, based on some IoT-Lite classes. A simple test of the ontology was performed, but some classes, properties, and situations remain to be tested to establish how sound the ontology is; these topics will be addressed in future work.
Characterization of Porous Silicon Using Terahertz Differential Time-Domain Spectroscopy
Porous silicon (PS) films of different porosities are investigated using Terahertz Differential Time-Domain spectroscopy (THz-DTDS). Preliminary measurements indicate a power law type of behavior in the PS conductivity response.
A TEM study of oxides formed on ultrafine Fe, Cr and FeCr particles
High resolution electron microscopy (HREM), electron diffraction (ED) and energy dispersive X-ray analysis (EDX) are used to analyze surface oxides on ultrafine Fe, Cr and FeCr particles. The following oxides are observed: Fe3O4 on Fe particles, CrO2 on Cr particles, and FeCr2O4 or γ-Fe2O3 on FeCr particles depending on the oxidation process. FeCr2O4 is only observed when the particles are thoroughly passivated and no further oxidation is possible. Some passivated FeCr alloy particles show the σ-phase. When a very strong oxidation occurs, i.e. the particles burn, holes are observed in the interior of the FeCr particles.
Large scale metric learning from equivalence constraints
In this paper, we raise important issues on scalability and the required degree of supervision of existing Mahalanobis metric learning methods. Often, rather tedious optimization procedures are applied that become computationally intractable on a large scale. Further, considering the constantly growing amount of data, it is often infeasible to specify fully supervised labels for all data points. Instead, it is easier to specify labels in the form of equivalence constraints. We introduce a simple though effective strategy to learn a distance metric from equivalence constraints, based on a statistical inference perspective. In contrast to existing methods, we do not rely on complex optimization problems requiring computationally expensive iterations. Hence, our method is orders of magnitude faster than comparable methods. Results on a variety of challenging benchmarks of rather diverse nature demonstrate the power of our method. These include faces in unconstrained environments, matching of previously unseen object instances, and person re-identification across spatially disjoint cameras. In the latter two benchmarks we clearly outperform the state of the art.
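The abstract describes learning a metric from equivalence constraints via statistical inference, without iterative optimization. A minimal sketch of one such likelihood-ratio construction is below; the specific form, including the final projection onto the PSD cone, is an assumption going beyond the abstract.

```python
import numpy as np

def metric_from_equivalence_constraints(X, pairs_similar, pairs_dissimilar):
    """Learn a Mahalanobis matrix M from equivalence constraints.

    Sketch of a likelihood-ratio approach: fit zero-mean Gaussians to the
    pairwise differences of similar and dissimilar pairs, then take
    M = inv(Cov_similar) - inv(Cov_dissimilar), clipped back to the PSD
    cone so M defines a valid (pseudo-)metric. No iterations are needed.
    """
    d_sim = np.array([X[i] - X[j] for i, j in pairs_similar])
    d_dis = np.array([X[i] - X[j] for i, j in pairs_dissimilar])
    cov_sim = d_sim.T @ d_sim / len(d_sim)
    cov_dis = d_dis.T @ d_dis / len(d_dis)
    M = np.linalg.inv(cov_sim) - np.linalg.inv(cov_dis)
    w, V = np.linalg.eigh(M)                      # eigen-decompose M
    return V @ np.diag(np.clip(w, 0, None)) @ V.T  # drop negative eigenvalues

def mahalanobis_sq(M, x, y):
    diff = x - y
    return float(diff @ M @ diff)

# Toy usage with random features and a few labeled pairs.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
M = metric_from_equivalence_constraints(
    X, [(0, 1), (2, 3), (4, 5)], [(0, 4), (1, 5), (2, 6), (3, 7)])
print(mahalanobis_sq(M, X[0], X[1]))
```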
Probability Estimates for Multi-class Classification by Pairwise Coupling
Pairwise coupling is a popular multi-class classification method that combines all pairwise comparisons between classes. This paper presents two approaches for obtaining class probabilities. Both methods can be reduced to linear systems and are easy to implement. We show conceptually and experimentally that the proposed approaches are more stable than two existing popular methods: voting and the method of [3].
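A minimal sketch of a least-squares coupling of pairwise estimates that reduces to a linear system, in the spirit of the approaches described (the paper's exact formulation may differ):

```python
import numpy as np

def couple_probabilities(R):
    """Class probabilities from pairwise estimates via one linear solve.

    R[i, j] estimates P(class i | class i or j), with R[i,j] + R[j,i] = 1.
    We minimize sum_{i<j} (R[j,i]*p[i] - R[i,j]*p[j])^2 subject to
    sum(p) = 1, whose KKT conditions form a (k+1)x(k+1) linear system.
    """
    k = R.shape[0]
    Q = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if i == j:
                Q[i, i] = sum(R[m, i] ** 2 for m in range(k) if m != i)
            else:
                Q[i, j] = -R[j, i] * R[i, j]
    # Stack the equality constraint onto the normal equations.
    A = np.block([[Q, np.ones((k, 1))],
                  [np.ones((1, k)), np.zeros((1, 1))]])
    b = np.zeros(k + 1)
    b[-1] = 1.0
    p = np.linalg.solve(A, b)[:k]
    p = np.clip(p, 0, None)
    return p / p.sum()

# Example with 3 classes: class 0 wins both of its pairwise comparisons.
R = np.array([[0.0, 0.7, 0.8], [0.3, 0.0, 0.6], [0.2, 0.4, 0.0]])
print(couple_probabilities(R))  # largest probability on class 0
```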
Positive attitude in cancer: patients' perspectives.
PURPOSE/OBJECTIVES To describe what being positive means for patients undergoing treatment for cancer. RESEARCH APPROACH Qualitative, descriptive approach. SETTING Specialist cancer clinic in a large metropolitan hospital in Sydney, Australia. PARTICIPANTS 11 patients with cancer currently being treated at a cancer clinic for a variety of cancers. METHODOLOGIC APPROACH Semistructured interviews that were audiotaped, transcribed, and thematically analyzed for content related to being positive. MAIN RESEARCH VARIABLES Patients' definitions of positive and negative attitude, their perceptions of the importance of attitude during their cancer journey, and any factors that influenced their perceived attitude. FINDINGS For patients, positive attitude was defined as optimism for the day and getting through everyday events of the journey by taking control rather than focusing on the future. Factors that affected patients' positive attitude were their relationships with their specialists, people around them being positive and supportive, and having a pleasant environment at home and at the treatment center. Patients perceived others' expectations that they be positive as detrimental. CONCLUSIONS Patients with cancer must be positive for the present rather than the future. INTERPRETATION Nurses need to inspire and support patients' positivity while they undergo treatment for cancer. Nurses should not force their own value systems on patients nor treat them differently if they do not conform to societal expectations to be positive and optimistic about the future.
The Role of Business Intelligence Tools in Decision Making Process
Business organizations focus on developments in marketing so that they can keep pace with the latest trends and manage their markets. Organizations base their business decisions and operations on business intelligence. They can do this through knowledge and by conveying the correct information. As a result, business intelligence has become a main criterion of strategic performance in the modern organization seeking a dominant position. This study shows the impact of using a business intelligence strategy on the decision-making process through a study of the Jordanian customs department.
Circadian Adaptation to Night Shift Work Influences Sleep, Performance, Mood and the Autonomic Modulation of the Heart
Our aim was to investigate how circadian adaptation to night shift work affects psychomotor performance, sleep, subjective alertness and mood, melatonin levels, and heart rate variability (HRV). Fifteen healthy police officers on patrol working rotating shifts participated in a bright light intervention study, with 2 participants studied under two conditions. The participants entered the laboratory for 48 h before and after a series of 7 consecutive night shifts in the field. The nighttime and daytime sleep periods were scheduled during the first and second laboratory visit, respectively. The subjects were considered "adapted" to night shifts if their peak salivary melatonin occurred during their daytime sleep period during the second visit. The sleep duration and quality were comparable between laboratory visits in the adapted group, whereas they were reduced during visit 2 in the non-adapted group. Reaction speed was higher at the end of the waking period during the second laboratory visit in the adapted compared to the non-adapted group. Sleep onset latency (SOL) and subjective mood levels were significantly reduced and the LF∶HF ratio during daytime sleep was significantly increased in the non-adapted group compared to the adapted group. Circadian adaptation to night shift work led to better performance, alertness and mood levels, longer daytime sleep, and lower sympathetic dominance during daytime sleep. These results suggest that the degree of circadian adaptation to night shift work is associated with different health indices. Longitudinal studies are required to investigate long-term clinical implications of circadian misalignment to atypical work schedules.
Robust face recognition based on weighted DeepFace
In this paper, we present a new face recognition algorithm based on weighted deep face learning. Our proposed method comprises two steps: face detection and face feature extraction. The aim of face detection is to find an accurate face position. Face alignment is then applied by finding the facial landmarks in the face rectangle. With the help of face alignment, the error rate of face recognition can be reduced. Deep learning is employed to extract distinctive features of the face components. We create weights for each face feature by optimizing the within-class variations with respect to between-class similarity measures. We achieved the lowest total error rate of 0.01429% on the XM2VTS database. We also achieved 98.61% accuracy on the LFW database. For real-time face recognition we achieved 99.17% on our own video database.
Asynchronous Subgradient-Push
We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functions are strongly-convex with Lipschitz-continuous gradients, we show that a subsequence of the iterates at each agent converges to a neighbourhood of the global minimum, where the size of the neighbourhood depends on the degree of asynchrony in the multi-agent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that Asynchronous Subgradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
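The paper's algorithm is asynchronous; as a hedged illustration of the underlying idea only, here is a minimal synchronous subgradient-push sketch (push-sum mixing plus local gradient steps), not the authors' asynchronous protocol:

```python
import numpy as np

def subgradient_push(grads, A, x0, steps=500):
    """Minimal synchronous subgradient-push sketch.

    grads: list of per-agent gradient functions of the local convex f_i
    A: column-stochastic mixing matrix (A[i, j] = weight agent j sends to i)
    x0: (n_agents, dim) array of initial points
    """
    n, dim = x0.shape
    x = x0.copy()
    y = np.ones(n)                  # push-sum weights correct the bias
    for t in range(1, steps + 1):
        w = A @ x                   # mix numerators
        y = A @ y                   # mix weights
        z = w / y[:, None]          # de-biased estimates of the average
        alpha = 1.0 / t             # diminishing step size
        x = w - alpha * np.array([grads[i](z[i]) for i in range(n)])
    return z                        # rows approach the global minimizer

# Example: 3 agents, f_i(x) = ||x - c_i||^2, so the sum is minimized
# at the mean of the c_i (here 2.0).
c = np.array([[0.0], [1.0], [5.0]])
grads = [lambda x, ci=ci: 2 * (x - ci) for ci in c]
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])     # column-stochastic
print(subgradient_push(grads, A, np.zeros((3, 1))))  # rows near 2.0
```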
Filaria control and elimination: diagnostic, monitoring and surveillance needs.
Gold standard diagnosis using blood films or skin snips has diminished relevance as mass drug distribution programmes for control of filaria infections expand. The view of 'diagnosis' and its relevance at the individual level has changed, as it has been recognised that the spectrum of programmatic processes (mapping, mass drug interventions, monitoring and evaluation, and surveillance) requires different approaches, as different questions are asked at each stage. The feasibility and relevance of skin biopsy or blood film examination is challenged when mass drug distribution seeks to treat all eligibles in communities. The need to expand programmes rapidly by identifying the highest-risk communities has seen the development of rapid assessment methods, such as rapid epidemiological mapping of onchocerciasis (REMO) and rapid epidemiological assessment (REA) for onchocerciasis, immunochromatographic test (ICT)-based mapping for lymphatic filariasis (LF), and the Rapid Assessment Procedure for Loiasis (RAPLOA) for Loa, to reduce the risk of serious adverse events and to guide projects in high-risk communities. As programmes reduce the prevalence through mass drug distribution, more sensitive techniques are required to define endpoints, for LF in particular where the programmatic goal is elimination; for onchocerciasis, sensitive surveillance tools are required particularly in those areas where the risks of recrudescence are high. Whilst much progress has been made in the development and deployment of rapid methods, there are still specific needs for antigen detection in onchocerciasis, whilst standardisation of a panel of tools for LF will allow the definition of endpoint parameters so that countries can decide when mass drug administration (MDA) can be stopped and have a sensitive post-MDA surveillance system.
Consequences of changing biodiversity
Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.
Sound-source recognition: a theory and computational model
The ability of a normal human listener to recognize objects in the environment from only the sounds they produce is extraordinarily robust with regard to characteristics of the acoustic environment and of other competing sound sources. In contrast, computer systems designed to recognize sound sources function precariously, breaking down whenever the target sound is degraded by reverberation, noise, or competing sounds. Robust listening requires extensive contextual knowledge, but the potential contribution of sound-source recognition to the process of auditory scene analysis has largely been neglected by researchers building computational models of the scene analysis process. This thesis proposes a theory of sound-source recognition, casting recognition as a process of gathering information to enable the listener to make inferences about objects in the environment or to predict their behavior. In order to explore the process, attention is restricted to isolated sounds produced by a small class of sound sources, the non-percussive orchestral musical instruments. Previous research on the perception and production of orchestral instrument sounds is reviewed from a vantage point based on the excitation and resonance structure of the sound-production process, revealing a set of perceptually salient acoustic features. A computer model of the recognition process is developed that is capable of “listening” to a recording of a musical instrument and classifying the instrument as one of 25 possibilities. The model is based on current models of signal processing in the human auditory system. It explicitly extracts salient acoustic features and uses a novel improvisational taxonomic architecture (based on simple statistical pattern-recognition techniques) to classify the sound source. The performance of the model is compared directly to that of skilled human listeners, using
Acupuncture for Cancer-Induced Bone Pain?
Bone pain is the most common type of pain in cancer. Bony metastases are common in advanced cancers, particularly in multiple myeloma, breast, prostate or lung cancer. Current pain-relieving strategies include the use of opioid-based analgesia, bisphosphonates and radiotherapy. Although patients experience some pain relief, these interventions may produce unacceptable side-effects which inevitably affect the quality of life. Acupuncture may represent a potentially valuable adjunct to existing strategies for pain relief, and it is known to be relatively free of harmful side-effects. Although acupuncture is used in palliative care settings for all types of cancer pain, the evidence base is sparse and inconclusive, and there is very little evidence to show its effectiveness in relieving cancer-induced bone pain (CIBP). The aim of this critical review is to consider the known physiological effects of acupuncture and discuss these in the context of the pathophysiology of malignant bone pain. The aim of future research should be to produce an effective protocol for treating CIBP with acupuncture based on a sound, evidence-based rationale. The physiological mechanisms presented in this review suggest that this is a realistic objective.
Spiking Deep Networks with LIF Neurons
We train spiking deep networks using leaky integrate-and-fire (LIF) neurons, and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically plausible spiking LIF neurons can be integrated into deep networks that perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications.
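A sketch of the softening idea: the hard LIF rate curve has an unbounded derivative at the firing threshold, and replacing the rectification with a softplus keeps the derivative bounded so the unit can be trained by backpropagation. Parameter values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_lif_rate(j, tau_ref=0.004, tau_rc=0.02, gamma=0.03):
    """Softened LIF firing-rate curve.

    The hard LIF rate 1 / (tau_ref + tau_rc * log(1 + 1/(j - 1))) blows up
    in slope at the threshold j = 1. Replacing the rectification (j - 1)
    with a softplus rho(x) = gamma * log(1 + exp(x / gamma)) smooths the
    curve while leaving it nearly unchanged away from threshold.
    """
    j = np.asarray(j, dtype=float)
    rho = gamma * np.log1p(np.exp((j - 1.0) / gamma))  # smooth max(j - 1, 0)
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / rho))

# Rates now rise smoothly through the old hard threshold at j = 1.
print(soft_lif_rate([0.9, 1.0, 1.1, 2.0]).round(2))
```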
A cutting plane method for solving linear generalized disjunctive programming problems
Raman and Grossmann [Raman, R., & Grossmann, I.E. (1994). Modeling and computational techniques for logic based integer programming. Computers and Chemical Engineering, 18(7), 563–578] and Lee and Grossmann [Lee, S., & Grossmann, I.E. (2000). New algorithms for nonlinear generalized disjunctive programming. Computers and Chemical Engineering, 24, 2125–2141] have developed a reformulation of Generalized Disjunctive Programming (GDP) problems that is based on determining the convex hull of each disjunction.
Fake news on Twitter during the 2016 U.S. presidential election
The spread of fake news on social media became a public concern in the United States after the 2016 presidential election. We examined exposure to and sharing of fake news by registered voters on Twitter and found that engagement with fake news sources was extremely concentrated. Only 1% of individuals accounted for 80% of fake news source exposures, and 0.1% accounted for nearly 80% of fake news sources shared. Individuals most likely to engage with fake news sources were conservative leaning, older, and highly engaged with political news. A cluster of fake news sources shared overlapping audiences on the extreme right, but for people across the political spectrum, most political news exposure still came from mainstream media outlets.
In vitro Cr(VI) reduction by cell-free extracts of chromate-reducing bacteria isolated from tannery effluent irrigated soil.
Four efficient Cr(VI)-reducing bacterial strains were isolated from rhizospheric soil of plants irrigated with tannery effluent and investigated for in vitro Cr(VI) reduction. Based on 16S rRNA gene sequencing, the isolated strains SUCR44, SUCR140, SUCR186, and SUCR188 were identified as Bacillus sp. (JN674188), Microbacterium sp. (JN674183), Bacillus thuringiensis (JN674184), and Bacillus subtilis (JN674195), respectively. All four isolates could completely reduce Cr(VI) in culture media at 0.2 mM concentration within a period of 24-120 h; SUCR140 completely reduced Cr(VI) within 24 h. Assays with permeabilized cells (treated with Triton X-100 and Tween 80) and cell-free assays demonstrated that the Cr(VI) reduction activity was mainly associated with the soluble fraction of cells. Since most of the chromium was reduced within 24-48 h, these fractions could also have been released extracellularly during growth. At the temperature optimum of 28 °C and pH 7.0, the specific activity of Cr(VI) reduction was determined to be 0.32, 0.42, 0.34, and 0.28 μmol Cr(VI)min(-1)mg(-1) protein for isolates SUCR44, SUCR140, SUCR186, and SUCR188, respectively. Addition of 0.1 mM NADH enhanced the Cr(VI) reduction in the cell-free extracts of all four strains. The Cr(VI) reduction activity in cell-free extracts of all the isolates was stable in the presence of the different metal ions tested except Hg(2+). Besides this, urea and thiourea also reduced the activity of chromate reduction to significant levels.
Cell-free protein synthesis: applications come of age.
Cell-free protein synthesis has emerged as a powerful technology platform to help satisfy the growing demand for simple and efficient protein production. While used for decades as a foundational research tool for understanding transcription and translation, recent advances have made possible cost-effective microscale to manufacturing-scale synthesis of complex proteins. Protein yields exceed grams of protein produced per liter of reaction volume, batch reactions last for multiple hours, costs have been reduced by orders of magnitude, and reaction scale has reached the 100-liter milestone. These advances have inspired new applications in the synthesis of protein libraries for functional genomics and structural biology, the production of personalized medicines, and the expression of virus-like particles, among others. In the coming years, cell-free protein synthesis promises new industrial processes where short protein production timelines are crucial as well as innovative approaches to a wide range of applications.
A checklist for testing measurement invariance
The analysis of measurement invariance of latent constructs is important in research across groups, or across time. By establishing whether factor loadings, intercepts and residual variances are equivalent in a factor model that measures a latent concept, we can ensure that comparisons made on the latent variable are valid across groups or time. Establishing measurement invariance involves running a set of increasingly constrained Structural Equation Models and testing whether differences between these models are significant. This paper provides a step-by-step guide to analyzing measurement invariance.
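The comparison step in this checklist, testing whether an increasingly constrained model fits significantly worse, is a chi-square difference test between nested models. A minimal sketch, taking the fit statistics as given from any SEM package:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained,
                          chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test between nested SEMs,
    e.g. a metric-invariance model vs. the configural model."""
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p = chi2.sf(delta_chisq, delta_df)  # survival function = upper tail
    return delta_chisq, delta_df, p

# Example (made-up fit statistics): configural chi2=120, df=48;
# metric model with equal loadings chi2=131, df=54.
print(chi_square_difference(131.0, 54, 120.0, 48))
# A non-significant p suggests the equality constraints hold,
# i.e. metric invariance is tenable.
```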
Adenovirus-mediated Foxp3 expression in lung epithelial cells ameliorates acute radiation-induced pneumonitis in mice
Forkhead transcription factor 3 (Foxp3) has a critical role in regulatory T cells (Treg). An increasing number of studies concern the functions of Foxp3 in cells other than Treg, including lung epithelial cells. However, the roles of Foxp3 in lung epithelial cells remain poorly understood. To examine the potential therapeutic benefits of Foxp3 for lung inflammation, this study investigates the effect of adenovirus-mediated Foxp3 overexpression in a radiation-induced lung damage model. Foxp3-EGFP expressing adenovirus was administered by intratracheal injection three times over 14 days after focal X-ray irradiation. To evaluate the effects of Foxp3 overexpression in radiation-induced lung inflammation, immune cell profiles of bronchoalveolar lavage (BAL) fluid were analyzed. Foxp3 gene-delivered mice showed significant inhibition of infiltration by immune cells such as eosinophils, lymphocytes, macrophages and neutrophils in BAL fluid. Histopathological analysis also showed that Foxp3 overexpression inhibits inflammatory cell recruitment and collagen deposition in lung tissues. In addition, expression of inflammatory and fibrosis-related genes was decreased in the Foxp3-expressing adenovirus-infected group. These results suggest that Foxp3 expression in the lungs holds considerable therapeutic potential for attenuating inflammation and fibrosis in radiation-induced lung injury.
Learning patterns of university student retention
Learning predictors for student retention is very difficult. After reviewing the literature, it is evident that there is considerable room for improvement in the current state of the art. As shown in this paper, improvements are possible if we (a) explore a wide range of learning methods; (b) take care when selecting attributes; (c) assess the efficacy of the learned theory not just by its median performance, but also by the variance in that performance; (d) study the delta of student factors between those who leave and those who are retained. Using these techniques, for the goal of predicting whether students will remain for the first three years of an undergraduate degree, the following factors were found to be informative: family background and the family's socio-economic status, and high school GPA.
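Point (c), judging a learner by both the median and the variance of its performance, can be illustrated with a short cross-validation sketch; the dataset and learner here are stand-in assumptions, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; the real attributes would be family background,
# socio-economic status, high school GPA, and so on.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(f"median accuracy={np.median(scores):.3f}, variance={scores.var():.5f}")
```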
An incremental approach to software systems re-engineering
Software re-engineering can dramatically improve an organization’s ability to maintain and upgrade its legacy production systems. But the risks that accompany traditional re-engineering tend to offset the potential benefits. Incremental software re-engineering is the practice of re-engineering a system’s software components on a phased basis, and then re-incorporating those components into production also on a phased basis. Incremental software re-engineering allows for safer re-engineering, increased flexibility and more immediate return on investment. But commercial automation to support incremental software re-engineering is currently weak. In addition, project managers need a methodology to plan and implement software re-engineering projects based on the incremental approach. This paper covers the advantages of incremental software re-engineering and what is available concerning support technology. The paper describes a process methodology for planning and implementing incremental software re-engineering projects. Finally, gaps in the support technology are identified with suggestions for future tools from vendors.
International genetic evaluation for direct longevity in dairy bulls.
The aims of this study were to document, present, and discuss the procedure used to calculate the international estimated breeding value (EBV) for longevity for Brown Swiss, Guernsey, Holstein, Jersey, Red Dairy Cattle, and Simmental breeds. Data from 19 countries and 123,833 national sires' breeding value were used for this purpose. Trait definitions and national genetic evaluation procedures were first summarized; and this showed that differences among countries existed. International breeding values for direct longevity were calculated using a multi-trait across-country evaluation model. The data editing method was identical to the one used for the February 2007 routine international genetic evaluation. Estimated genetic correlations presented in this study were similar to those presented in the literature and, in general, differed from unity because of differences in trait definitions, culling reasons, data included, evaluation procedures, genotype-environment interactions, and weak genetic ties among countries. The average genetic correlations for Holstein ranged from 0.49 to 0.76. The genetic correlations for Brown Swiss and Guernsey ranged from 0.29 to 0.95 and from 0.30 to 0.89, respectively. For Jersey and Red Dairy Cattle the genetic correlations ranged from 0.39 to 0.61 and from 0.30 to 0.96, respectively. For Simmental the genetic correlation was 0.59. Different predictors were used at national levels to define combined longevity. These predictors were combined using economic and empirical weights. Three out of 15 countries published international EBV of direct longevity only and 12 out of 15 countries combined direct longevity with predictors (combined longevity). International breeding values for longevity were combined into the total merit index by most of the member organizations and made available to breeders across the world through magazines and Web sites. Even if some breeders are not familiar with longevity EBV, they will select for this trait automatically if they use the published total merit indexes.
Allograph errors and impaired access to graphic motor codes in a case of unilateral agraphia of the dominant left hand.
This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.
My hands shake--classification and treatment of tremor.
BACKGROUND Tremor is the most common movement disorder in the community and is defined as a rhythmic oscillatory movement of a body part. Classification of tremors is helpful for accurate diagnosis, prognosis and treatment. Most tremors can be separated according to the state in which they occur, that is, during rest or action. Other clinical features, including frequency, amplitude and associated neurological signs, further define tremor. OBJECTIVE This article describes some of the important clinical clues that reliably separate tremors, including the rest tremors of Parkinson disease and vascular midbrain lesions, or the action tremors of enhanced physiological tremor, essential tremor and dystonic tremor. DISCUSSION Numerous treatment strategies exist for tremor, but focused, selective use of appropriate medications requires accurate clinical diagnosis. Diagnostic certainty is essential as functional neurosurgery (deep brain stimulation) offers a realistic treatment option for many patients with severe tremor.
Unsupervised and Efficient Vocabulary Expansion for Recurrent Neural Network Language Models in ASR
In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLMs) are used to rescore a word lattice or an N-best hypotheses list. Due to the expensive training, the RNNLM’s vocabulary accommodates only a small shortlist of the most frequent words. This leads to suboptimal performance if the input speech contains many out-of-shortlist (OOS) words. An effective solution is to increase the shortlist size and retrain the entire network, which is highly inefficient. Therefore, we propose an efficient method to expand the shortlist of a pretrained RNNLM without incurring expensive retraining or requiring additional training data. Our method exploits the structure of the RNNLM, which can be decoupled into three parts: the input projection layer, the middle layers, and the output projection layer. Specifically, our method expands the word embedding matrices in the projection layers and keeps the middle layers unchanged. In this approach, the functionality of the pretrained RNNLM is correctly maintained as long as OOS words are properly modeled in the two embedding spaces. We propose to model the OOS words by borrowing linguistic knowledge from appropriate in-shortlist words. Additionally, we propose to generate the list of OOS words for vocabulary expansion in an unsupervised manner by automatically extracting them from ASR output.
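A minimal sketch of expanding the two projection (embedding) matrices while leaving the middle layers untouched; modeling each OOS word as the average of donor in-shortlist embeddings is an assumption standing in for the paper's borrowing scheme:

```python
import numpy as np

def expand_embeddings(emb_in, emb_out, word2id, oos_to_donors):
    """Expand a pretrained RNNLM's input/output embedding matrices
    without retraining.

    Each out-of-shortlist (OOS) word gets the mean of the embedding rows
    of linguistically similar in-shortlist "donor" words; how donors are
    chosen (synonyms, same-lemma forms, ...) is up to the caller. The
    middle (recurrent) layers are not touched.
    """
    new_in, new_out = [emb_in], [emb_out]
    for oos, donors in oos_to_donors.items():
        ids = [word2id[w] for w in donors]
        new_in.append(emb_in[ids].mean(axis=0, keepdims=True))
        new_out.append(emb_out[ids].mean(axis=0, keepdims=True))
        word2id[oos] = len(word2id)          # next free row index
    return np.vstack(new_in), np.vstack(new_out), word2id

# Example: borrow "cardiologist" from two related in-shortlist words.
emb_in = np.random.randn(3, 4)
emb_out = np.random.randn(3, 4)
w2i = {"doctor": 0, "surgeon": 1, "nurse": 2}
emb_in2, emb_out2, w2i = expand_embeddings(
    emb_in, emb_out, w2i, {"cardiologist": ["doctor", "surgeon"]})
print(emb_in2.shape, w2i["cardiologist"])  # (4, 4) 3
```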
The VEINES-QOL/Sym questionnaire is a reliable and valid disease-specific quality of life measure for deep vein thrombosis in elderly patients
To prospectively evaluate the psychometric properties of the Venous Insufficiency Epidemiological and Economic Study (VEINES-QOL/Sym) questionnaire, an instrument to measure disease-specific quality of life and symptoms in elderly patients with deep vein thrombosis (DVT), and to validate a German version of the questionnaire. In a prospective multicenter cohort study of patients aged ≥65 years with acute venous thromboembolism, we used standard psychometric tests and criteria to evaluate the reliability, validity, and responsiveness of the VEINES-QOL/Sym in patients with acute symptomatic DVT. We also performed an exploratory factor analysis. Overall, 352 French- and German-speaking patients were enrolled (response rate of 87 %). Both language versions of the VEINES-QOL/Sym showed good acceptability (missing data, floor and ceiling effects), reliability (internal consistency, item-total and inter-item correlations), validity (convergent, discriminant, known-groups differences), and responsiveness to clinical change over time in elderly patients with DVT. The exploratory factor analysis of the VEINES-QOL/Sym suggested three underlying dimensions: limitations in daily activities, DVT-related symptoms, and psychological impact. The VEINES-QOL/Sym questionnaire is a practical, reliable, valid, and responsive instrument to measure quality of life and symptoms in elderly patients with DVT and can be used with confidence in prospective studies to measure outcomes in such patients.
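One of the standard reliability criteria the abstract mentions, internal consistency, is typically quantified with Cronbach's alpha; a minimal sketch with toy scores:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: (n_respondents, n_items) array of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy example: 4 respondents x 3 questionnaire items.
scores = np.array([[3, 4, 3], [2, 2, 3], [5, 5, 4], [1, 2, 1]])
print(round(cronbach_alpha(scores), 3))
```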
Pulmonary imaging of pandemic influenza H1N1 infection: relationship between clinical presentation and disease burden on chest radiography and CT.
The potential for pulmonary involvement among patients presenting with novel swine-origin influenza A (H1N1) is high. To investigate the utility of chest imaging in this setting, we correlated clinical presentation with chest radiographic and CT findings in patients with proven H1N1 cases. Subjects included all patients presenting with laboratory-confirmed H1N1 between 1 May and 10 September 2009 to one of three urban hospitals. Clinical information was gathered retrospectively, including symptoms, possible risk factors, treatment and hospital survival. Imaging studies were re-read for study purposes, and CXR findings compared with CT scans when available. During the study period, 157 patients presented with subsequently proven H1N1 infection. Hospital admission was necessary for 94 (60%) patients, 16 (10%) were admitted to intensive care and 6 (4%) died. An initial CXR, carried out for 123 (78%) patients, was abnormal in only 40 (33%) cases. Factors associated with increased likelihood for radiographic lung abnormalities were dyspnoea (p<0.001), hypoxaemia (p<0.001) and diabetes mellitus (p = 0.023). Chest CT was performed in 21 patients, and 19 (90%) showed consolidation, ground-glass opacity, nodules or a combination of these findings. 4 of 21 patients had negative CXR and positive CT. Compared with CT, plain CXR was less sensitive in detecting H1N1 pulmonary disease among immunocompromised hosts than in other patients (p = 0.0072). A normal CXR is common among patients presenting to the hospital for H1N1-related symptoms without evidence of respiratory difficulties. The CXR may significantly underestimate lung involvement in the setting of immunosuppression.
How much ambiguity aversion?: Finding indifferences between Ellsberg's risky and ambiguous bets
Experimental results on the Ellsberg paradox typically reveal behavior that is commonly interpreted as ambiguity aversion. The experiments reported in the current paper find the objective probabilities for drawing a red ball that make subjects indifferent between various risky and uncertain Ellsberg bets. They allow us to examine the predictive power of alternative principles of choice under uncertainty, including the objective maximin and Hurwicz criteria, the sure-thing principle, and the principle of insufficient reason. Contrary to our expectations, the principle of insufficient reason performed substantially better than rival theories in our experiment, with ambiguity aversion appearing only as a secondary phenomenon.
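As a worked illustration of two of the criteria under test: the principle of insufficient reason predicts indifference at the uniform probability 1/n, while the Hurwicz criterion mixes the worst and best cases. The mapping to indifference probabilities below is a simplified assumption for exposition, not the paper's experimental design.

```python
def insufficient_reason_value(n_colors: int) -> float:
    """Predicted indifference probability for a bet on one colour drawn
    from an urn of unknown composition over n_colors colours.

    The principle of insufficient reason weights each composition equally,
    which is equivalent to probability 1/n per colour, so a subject should
    be indifferent to a risky bet that wins with p = 1/n.
    """
    return 1.0 / n_colors

def hurwicz_value(alpha: float) -> float:
    """Hurwicz criterion for a bet whose winning chance can range from 0
    to 1: alpha weights the worst case, (1 - alpha) the best case."""
    return alpha * 0.0 + (1.0 - alpha) * 1.0

# Two-colour Ellsberg urn: insufficient reason predicts indifference at 0.5,
# while a pessimist with alpha = 0.7 values the ambiguous bet like p = 0.3.
print(insufficient_reason_value(2), hurwicz_value(0.7))
```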
BabelDomains: Large-Scale Domain Labeling of Lexical Resources
In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80% in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.
Sources of growth. The entrepreneurial versus the managed economy
The purpose of this paper is to suggest that a fundamental shift in Europe, along with the other OECD countries, is taking place. This shift is from the managed economy to the entrepreneurial economy. While politicians and policy makers have made a plea for guidance in the era of entrepreneurship, scholars have been slow to respond. The purpose of this paper is to make a first step identifying and articulating these differences. We do this by contrasting the most fundamental elements of the newly emerging entrepreneurial economy with those of the managed economy. We identify fifteen trade-offs confronting these two polar worlds. The common thread throughout these trade-offs is the increased role of new and small enterprises in the entrepreneurial economy. A particular emphasis is placed on changes in economic policy demanded by the entrepreneurial economy vis-à-vis the managed economy. We then explore whether restructuring towards the entrepreneurial economy has been conducive to economic growth and job creation. Our empirical analysis links the stage of the transition towards an entrepreneurial economy to the growth rates of European countries over a recent period. We find that those countries which have introduced a greater element of entrepreneurship have been rewarded with additional growth. JEL Classification: O0, L0
Recommending News Based on Hybrid User Profile, Popularity, Trends, and Location
Reading the news is a favorite hobby for many people anywhere in the world. With the popularity of the Internet and social media, users are constantly provided, or even bombarded, with the latest news around the world. With numerous sources of news, it has become a real challenge for users to follow the news that they are interested. Previous work used user profile to recommend personalized news; and used RSS feeds and latest tweets to provide popular, trendy news. In this work we combine these two methods with three enhancements. First, to personalize news recommendation we used a hybrid approach, which involved the analyses of click through, user tweets, and user Twitter friends list to build user profile, this method significantly improves the accuracy of user profile. Second, to address the importance of temporal dynamics, we add a unique new feature of location preference to the news recommendation system. Third, we allow users to choose the ratio of popular news vs. trendy news they desire. The resulting system is then evaluated based on user satisfaction and accuracy. The results show that the average user satisfaction increases from 8.6 to 9.4 when location preference is added, while the accuracy of the recommendation system is around 92–95%. We believe that the proposed system is a successful example of incorporating temporal dynamics to recommendation systems; the combination of using hybrid user profile, popularity, trends and location would have significant impact on other recommendation systems in the future.
Stochastic Expectation Propagation
Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of datapoints, N , which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N . SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large dataset setting.
INTERACTIONAL FEEDBACK AND INSTRUCTIONAL COUNTERBALANCE
This comparative analysis of teacher-student interaction in two different instructional settings at the elementary-school level (18.3 hr in French immersion and 14.8 hr Japanese immersion) investigates the immediate effects of explicit correction, recasts, and prompts on learner uptake and repair. The results clearly show a predominant provision of recasts over prompts and explicit correction, regardless of instructional setting, but distinctively varied student uptake and repair patterns in relation to feedback type, with the largest proportion of repair resulting from prompts in French immersion and from recasts in Japanese immersion. Based on these findings and supported by an analysis of each instructional setting’s overall communicative orientation, we introduce the counterbalance hypothesis, which states that instructional activities and interactional feedback that act as a counterbalance to a classroom’s predominant communicative orientation are likely to prove more effective than instructional activities and interactional feedback that are congruent with its predominant communicative orientation.
Top-down and bottom-up cues for scene text recognition
Scene text recognition has gained significant attention from the computer vision community in recent years. Recognizing such text is a challenging problem, even more so than the recognition of scanned documents. In this work, we focus on the problem of recognizing text extracted from street images. We present a framework that exploits both bottom-up and top-down cues. The bottom-up cues are derived from individual character detections from the image. We build a Conditional Random Field model on these detections to jointly model the strength of the detections and the interactions between them. We impose top-down cues obtained from a lexicon-based prior, i.e. language statistics, on the model. The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. We show significant improvements in accuracies on two challenging public datasets, namely Street View Text (over 15%) and ICDAR 2003 (nearly 10%).
Research and Marketing Issues Facing Commodity Promotion Programs
Commodity promotion check-off programs are now an integral component of the set of marketing tools used by the nations' farmers to influence (and understand) the market for their output. Nearly $1 billion are invested annually by U.S. producers in such collective demandexpansion activities as generic advertising and promotion, new product development, public relations and product research. Most of the funds used for these generic promotion and research activities are collected from farmers under the authority of federal and/or state legislation. Some programs, authorize refunds and some exempt small producers.
Advances in data stream mining
Mining data streams has been a focal point of research interest over the past decade. Hardware and software advances have contributed to the significance of this area of research by introducing faster than ever data generation. This rapidly generated data has been termed as data streams. Credit card transactions, Google searches, phone calls in a city, and many others\are typical data streams. In many important applications, it is inevitable to analyze this streaming data in real time. Traditional data mining techniques have fallen short in addressing the needs of data stream mining. Randomization, approximation, and adaptation have been used extensively in developing new techniques or adopting exiting ones to enable them to operate in a streaming environment. This paper reviews key milestones and state of the art in the data stream mining area. Future insights are also be presented. C © 2011 Wiley Periodicals, Inc.
Assessing "Transaction Climate" Influencing the Adoption of Innovative ICT and e-Business In the Greek Agri-food Sector
Recently, a large number of innovative ICT systems and network tools facilitate the use of e-business frameworks. Modern organizations through innovative ICT models can confront competition, uncertainty and complexity. Supply chain faces organizations as a chain of interrelated entities, and provides a complete aspect of their prospects. A survey has been contacted to test the impact of the factor "transaction climate" on agri-food firms in Greece. A total of 20 variables was initially proposed to determine the factor "transaction climate" related to the four organizations that companies deal with, customers, suppliers, carriers and 3 rd Party logistics provider companies, wh ile for each one of the four were investigated separately 5 features: Commitment, Reliability, Firm's Satisfaction, Satisfactory Information Exchange and Long-lasting Relationships. Finally, through factor analysis, were expressed all 10 of the original 20 variables that describe the "transaction climate" in an agri-food firm, as linear combinations of the fewer and derived 2 component constructs/factors, leading firms in agri-food sector in Greece to adopt innovative IT and web-based technologies aiming to enhance e-business, supply chain management, organizational productivity, flexibility and competitiveness. Each factor is described with 5 questions of the questionnaire that load highly in each factor. The 2-factor model has to be further confirmed in a second sample.
Emerging potential of transposons for gene therapy and generation of induced pluripotent stem cells.
Effective gene therapy requires robust delivery of the desired genes into the relevant target cells, long-term gene expression, and minimal risks of secondary effects. The development of efficient and safe nonviral vectors would greatly facilitate clinical gene therapy studies. However, nonviral gene transfer approaches typically result in only limited stable gene transfer efficiencies in most primary cells. The use of nonviral gene delivery approaches in conjunction with the latest generation transposon technology based on Sleeping Beauty (SB) or piggyBac transposons may potentially overcome some of these limitations. In particular, a large-scale genetic screen in mammalian cells yielded a novel hyperactive SB transposase, resulting in robust and stable gene marking in vivo after hematopoietic reconstitution with CD34(+) hematopoietic stem/progenitor cells in mouse models. Moreover, the first-in-man clinical trial has recently been approved to use redirected T cells engineered with SB for gene therapy of B-cell lymphoma. Finally, induced pluripotent stem cells could be generated after genetic reprogramming with piggyBac transposons encoding reprogramming factors. These recent developments underscore the emerging potential of transposons in gene therapy applications and induced pluripotent stem generation for regenerative medicine.
The hamstring syndrome. A new diagnosis of gluteal sciatic pain.
A series of 59 patients was treated and operated on for pain felt over the area of the ischial tuberosity and radiating down the back of the thigh. This condition was labeled as the "hamstring syndrome." Pain was typically incurred by assuming a sitting position, stretching the affected posterior thigh, and running fast. The patients usually had a history of recurrent hamstring "tears." Their symptoms were caused by the tight, tendinous structures of the lateral insertion area of the hamstring muscles to the ischial tuberosity. Upon division of these structures, complete relief was obtained in 52 of the 59 patients.
Reasons for therapeutic inertia when managing hypertension in clinical practice in non-Western countries
Insufficient awareness of hypertension guidelines by physicians may be an impediment to achieving adequate blood pressure (BP) control rates in clinical practice. We therefore conducted an open intervention survey among primary care physicians in 1596 centres from 16 countries in four different continents to prospectively assess what is the BP goal defined by physicians for individual patients and what are the reasons for not intensifying antihypertensive treatment when BP goals are not achieved. Enrolled patients (N=35 302) were either not treated to goal (N=22 887) or previously untreated (N=12 250). Baseline systolic and diastolic BP averaged 159/95±15/12 mm Hg. BP goals defined by physicians averaged 136±6 mm Hg for systolic and 86±5 mm Hg for diastolic BP. Patients' individual risk stratification determined BP goals. At last visit BP averaged 132/81±11/8 mm Hg and values of ⩽140/90 were reached in 92% of untreated and 80% of previously uncontrolled treated hypertensives. The main reasons for not intensifying antihypertensive treatment when BP remained above goal were the assumption that the time after starting the new drug was too short to attain its full effect, the satisfaction with a clear improvement of BP or with a BP nearing the goal, and the acceptance of good self-measurements. In this open intervention program in primary care, a large proportion of patients achieved recommended BP goals. The belief that a clear improvement in BP is acceptable and that the full drug effect may take up to several weeks to be reached are frequent reasons for treatment inertia when goals are not achieved.
0 BLIS : A Framework for Rapid Instantiation of BLAS Functionality
The BLAS Libray Instantiation Software (BLIS) is a new framework for the rapid instantiation of Basic Linear Algebra Subprograms (BLAS) functionality. The fundamental innovation is the insight that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed in terms of very simple kernels. While others had made similar insights, BLIS brings this set down to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in standard C so that it can be reused and/or re-parameterized for different operations (as well as different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any and all BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the BLIS native interface (which is a C interface that corrects a few known idiosyncracies of the BLAS interface), the traditional Fortran BLAS interface, or through any other higher level interface that chooses to build upon the BLIS interface. Preliminary experimental performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL).
Subsea: an efficient heuristic algorithm for subgraph isomorphism
We present a novel approach to the problem of finding all subgraphs and induced subgraphs of a (target) graph which are isomorphic to another (pattern) graph. To attain efficiency we use a special representation of the pattern graph. We also combine our search algorithm with some known bisection algorithms. Experimental comparison with other algorithms was performed on several types of graphs. The comparison results suggest that the approach provided here is most effective when all instances of a subgraph need to be found.
Image Copy Move Forgery Detection using Block Representing Method
As one of the most successful applications of image analysis and understanding, digital image forgery detection has recently received significant attention, especially during the past few years. At least two trend account for this: the first accepting digital image as official document has become a common practice, and the second the availability of low cost technology in which the image could be easily manipulated. Even though there are many systems to detect the digital image forgery, their success is limited by the conditions imposed by many applications. Most existing techniques to detect such tampering are mainly at the cost of higher computational complexity. In this paper, we present an efficient and robust approach to detect such specific artifact. Firstly, the original image is divided into fixed-size blocks, and discrete cosine transform (DCT) is applied to each block, thus, the DCT coefficients represent each block. Secondly, each cosine transformed block is represented by a circle block and four features are extracted to reduce the dimension of each block. Finally, the feature vectors are lexicographically sorted, and duplicated image blocks will be matched by a preset threshold value. In order to make the algorithm more robust, some parameters are proposed to remove the wrong similar blocks. Experiment results show that our proposed scheme is not only robust to multiple copy-move forgery, but also to blurring or nosing adding and with low computational complexity. KeywordsDidgital forencics copy-move forgery circle block duplicated region
Prevention of non-contact anterior cruciate ligament injuries in soccer players. Part 2: A review of prevention programs aimed to modify risk factors and to reduce injury rates
Soccer is the most commonly played sport in the world, with an estimated 265 million active soccer players participating in the game as on 2006. Inherent to this sport is the higher risk of injury to the anterior cruciate ligament (ACL) relative to other sports. ACL injury causes a significant loss of time from competition in soccer, which has served as the strong impetus to conduct research that focuses to determine the risk factors for injury, and more importantly, to identify and teach techniques to reduce this injury in the sport. This research emphasis has afforded a rapid influx of literature aimed to report the effects of neuromuscular training on the risk factors and the incidence of non-contact ACL injury in high-risk soccer populations. The purpose of the current review is to sequence the most recent literature relating the effects of prevention programs that were developed to alter risk factors associated with non-contact ACL injuries and to reduce the rate of non-contact ACL injuries in soccer players. To date there is no standardized intervention program established for soccer to prevent non-contact ACL injuries. Multi-component programs show better results than single-component preventive programs to reduce the risk and incidence of non-contact ACL injuries in soccer players. Lower extremity plyometrics, dynamic balance and strength, stretching, body awareness and decision-making, and targeted core and trunk control appear to be successful training components to reduce non-contact ACL injury risk factors (decrease landing forces, decrease varus/valgus moments, and increase effective muscle activation) and prevent non-contact ACL injuries in soccer players, especially in female athletes. Pre-season injury prevention combined with an in-season maintenance program may be advocated to prevent injury. Compliance may in fact be the limiting factor to the overall success of ACL injury interventions targeted to soccer players regardless of gender. Thus, interventional research must also consider techniques to improve compliance especially at the elite levels which will likely influence trickle down effects to sub-elite levels. Future research is also needed for male soccer athletes to help determine the most effective intervention to reduce the non-contact ACL injury risk factors and to prevent non-contact ACL injuries.
Automatic Detection and Classification of Brain Hemorrhages
Computer-aided diagnosis systems have been the focus of many research endeavors. They are based on the idea of processing and analyzing images of different parts of the human body for a quick and accurate diagnosis. In this paper, the aforementioned approach is followed to detect whether a brain hemorrhage exists or not in a Computed Topography (CT) scans of the brain. Moreover, the type of the hemorrhage is identified. The implemented system consists of several stages that include image preprocessing, image segmentation, feature extraction, and classification. The results of the conducted experiments are very promising. A recognition rate of 100% is attained for detecting whether a brain hemorrhage exists or not. For the hemorrhage type classification, more than 92% accuracy is achieved. Key–Words:brain hemorrhage, brain ct scans, machine learning, image processing, image segmentation
A MEMS-Based Flow Rate and Flow Direction Sensing Platform with Integrated Temperature Compensation Scheme
This study develops a MEMS-based low-cost sensing platform for sensing gas flow rate and flow direction comprising four silicon nitride cantilever beams arranged in a cross-form configuration, a circular hot-wire flow meter suspended on a silicon nitride membrane, and an integrated resistive temperature detector (RTD). In the proposed device, the flow rate is inversely derived from the change in the resistance signal of the flow meter when exposed to the sensed air stream. To compensate for the effects of the ambient temperature on the accuracy of the flow rate measurements, the output signal from the flow meter is compensated using the resistance signal generated by the RTD. As air travels over the surface of the cross-form cantilever structure, the upstream cantilevers are deflected in the downward direction, while the downstream cantilevers are deflected in the upward direction. The deflection of the cantilever beams causes a corresponding change in the resistive signals of the piezoresistors patterned on their upper surfaces. The amount by which each beam deflects depends on both the flow rate and the orientation of the beam relative to the direction of the gas flow. Thus, following an appropriate compensation by the temperature-corrected flow rate, the gas flow direction can be determined through a suitable manipulation of the output signals of the four piezoresistors. The experimental results have confirmed that the resulting variation in the output signals of the integrated sensors can be used to determine not only the ambient temperature and the velocity of the air flow, but also its direction relative to the sensor with an accuracy of ± 7.5° error.
Early memory phenotypes drive T cell proliferation in patients with pediatric malignancies
Engineered T cell therapies have begun to demonstrate impressive clinical responses in patients with B cell malignancies. Despite this efficacy, many patients are unable to receive T cell therapy because of failure of in vitro expansion, a necessary component of cell manufacture and a predictor of in vivo activity. To evaluate the biology underlying these functional differences, we investigated T cell expansion potential and memory phenotype during chemotherapy in pediatric patients with acute lymphoblastic leukemia (ALL) and non-Hodgkin lymphoma (NHL). We found that patients with T cell populations enriched for early lineage cells expanded better in vitro and that patients with ALL had higher numbers of these cells with a corresponding enhancement in expansion as compared to cells from patients with NHL. We further demonstrated that early lineage cells were selectively depleted by cyclophosphamide and cytarabine chemotherapy and that culture with interleukin-7 (IL-7) and IL-15 enriched select early lineage cells and rescued T cell expansion capability. Thus, early lineage cells are essential to T cell fitness for expansion, and enrichment of this population either by timing of T cell collection or culture method can increase the number of patients eligible to receive highly active engineered cellular therapies.
A Brief Survey of Bandwidth Selection for Density Estimation
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.
SLIQ: A Fast Scalable Classifier for Data Mining
Classi cation is an important problem in the emerging eld of data mining. Although classi cation has been studied extensively in the past, most of the classi cation algorithms are designed only for memory-resident data, thus limiting their suitability for data mining large data sets. This paper discusses issues in building a scalable classier and presents the design of SLIQ, a new classi er. SLIQ is a decision tree classi er that can handle both numeric and categorical attributes. It uses a novel pre-sorting technique in the tree-growth phase. This sorting procedure is integrated with a breadthrst tree growing strategy to enable classi cation of disk-resident datasets. SLIQ also uses a new tree-pruning algorithm that is inexpensive, and results in compact and accurate trees. The combination of these techniques enables SLIQ to scale for large data sets and classify data sets irrespective of the number of classes, attributes, and examples (records), thus making it an attractive tool for data mining.
ATLAS: A Small but Complete SQL Extension for Data Mining and Data Streams
DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:
Human abilities in cultural context
Preface Acknowledgements Part I. Human Abilities in Theoretical Cultures Section 1. Holistic Theories: 1. The abilities of mankind: a revaluation S. H. Irvine and J. W. Berry 2. A triatchic view of intelligence in cross-cultural perspective Robert J. Sternberg Section 2. Biometric Fundamentalism: 3. The biological basis of intelligence H. J. Eysenck 4. Speed of information processing and population differences Arthur R. Jensen Section 3. Structural Psychometrics: 5. The factor model as a theoretical basis for individual differences Joseph R. Royce 6. The meaning of item bias in ability tests Ype H. Poortinga and Henk van der Flier Part II. Cultural Responses to Ability Measurement Section 4. Europe and North America: 7. The British 'cultural influence' on ability testing Paul Kline 8. Cultural influences on patterns of abilities in North America Philip Anthony Vernon, Douglas N. Jackson and Samuel Messick 9. Human abilities in the Eastern Mediterranean Cigdem Kagitcibasi and Isik Savasir 10. The Norwegian tests and measurements in cultural context Knut A. Hagtvet and Johan O. Undheim Section 5. Africa, Asia, and Australia: 11. Human assessment in Australia Daphne M. Keats and John A. Keats 12. Test performance of blacks in Southern Africa I. M. Kendall, Mary Ann Vester, and J. W. Von Mollendorf 13. Individual differences among the peoples of China J. W. C. Chan and Philip E. Vernon 14. Japanese abilities and achievements Saburo Iwawaki and Philip E. Vernon Part III. Cultural Limits Upon Human Assessment Section 6. Minorities and Enclaves: 15. Native North Americans: Indian and Inuit abilities Damian McShane and J. W. Berry 16. Aboriginal cognition and psychological nescience L. Z. Klich 17. Testing Bushmen in the Central Kalahari Helmut Reuning 18. Caste and cognitive processes J. P. Das and Amulya Kanti Satpathy Khurana 19. Educational adaptation and achievement of ethnic minority adolescents in Britain Gajendra K. Verma 20. The diminishing test performance gap between English speakers and Afrikaans speakers in South Africa J. M. Verster and R. J. Prinsloo Author index Subject index.
Segmentation, Feature Extraction, and Multiclass Brain Tumor Classification
Multiclass brain tumor classification is performed by using a diversified dataset of 428 post-contrast T1-weighted MR images from 55 patients. These images are of primary brain tumors namely astrocytoma (AS), glioblastoma multiforme (GBM), childhood tumor-medulloblastoma (MED), meningioma (MEN), secondary tumor-metastatic (MET), and normal regions (NR). Eight hundred fifty-six regions of interest (SROIs) are extracted by a content-based active contour model. Two hundred eighteen intensity and texture features are extracted from these SROIs. In this study, principal component analysis (PCA) is used for reduction of dimensionality of the feature space. These six classes are then classified by artificial neural network (ANN). Hence, this approach is named as PCA-ANN approach. Three sets of experiments have been performed. In the first experiment, classification accuracy by ANN approach is performed. In the second experiment, PCA-ANN approach with random sub-sampling has been used in which the SROIs from the same patient may get repeated during testing. It is observed that the classification accuracy has increased from 77 to 91 %. PCA-ANN has delivered high accuracy for each class: AS—90.74 %, GBM—88.46 %, MED—85 %, MEN—90.70 %, MET—96.67 %, and NR—93.78 %. In the third experiment, to remove bias and to test the robustness of the proposed system, data is partitioned in a manner such that the SROIs from the same patient are not common for training and testing sets. In this case also, the proposed system has performed well by delivering an overall accuracy of 85.23 %. The individual class accuracy for each class is: AS—86.15 %, GBM—65.1 %, MED—63.36 %, MEN—91.5 %, MET—65.21 %, and NR—93.3 %. A computer-aided diagnostic system comprising of developed methods for segmentation, feature extraction, and classification of brain tumors can be beneficial to radiologists for precise localization, diagnosis, and interpretation of brain tumors on MR images.
Globalizing knowledge : intellectuals, universities, and publics in transformation
Heralding a push for higher education to adopt a more global perspective, the term "globalizing knowledge" is today a popular catchphrase among academics and their circles. The complications and consequences of this desire for greater worldliness, however, are rarely considered critically. In this groundbreaking cultural-political sociology of knowledge and change, Michael D. Kennedy rearticulates questions, approaches, and case studies to clarify intellectuals' and institutions' responsibilities in a world defined by transformation and crisis. Globalizing Knowledge introduces the stakes of globalizing knowledge before examining how intellectuals and their institutions and networks shape and are shaped by globalization and world-historical events from 2001 through the uprisings of 2011-13. But Kennedy is not only concerned with elaborating how wisdom is maintained and transmitted, he also asks how we can recognize both interconnectedness and inequalities, and possibilities for more knowledgeable change within and beyond academic circles. Subsequent chapters are devoted to issues of public engagement, the importance of recognizing difference and the local's implication in the global, and the specific ways in which knowledge, images, and symbols are shared globally. Kennedy considers numerous case studies, from historical happenings in Poland, Kosova, Ukraine, and Afghanistan, to today's energy crisis, Pussy Riot, the Occupy Movement, and beyond, to illuminate how knowledge functions and might be used to affect good in the world.
Boosting complete-code tool for partial program
To improve software quality, researchers and practitioners have proposed static analysis tools for various purposes (e.g., detecting bugs, anomalies, and vulnerabilities). Although many such tools are powerful, they typically need complete programs where all the code names (e.g., class names, method names) are resolved. In many scenarios, researchers have to analyze partial programs in bug fixes (the revised source files can be viewed as a partial program), tutorials, and code search results. As a partial program is a subset of a complete program, many code names in partial programs are unknown. As a result, despite their syntactical correctness, existing complete-code tools cannot analyze partial programs, and existing partial-code tools are limited in both their number and analysis capability. Instead of proposing another tool for analyzing partial programs, we propose a general approach, called GRAPA, that boosts existing tools for complete programs to analyze partial programs. Our major insight is that after unknown code names are resolved, tools for complete programs can analyze partial programs with minor modifications. In particular, GRAPA locates Java archive files to resolve unknown code names, and resolves the remaining unknown code names from resolved code names. To illustrate GRAPA, we implement a tool that leverages the state-of-the-art tool, WALA, to analyze Java partial programs. We thus implemented the first tool that is able to build system dependency graphs for partial programs, complementing existing tools. We conduct an evaluation on 8,198 partial-code commits from four popular open source projects. Our results show that GRAPA fully resolved unknown code names for 98.5% bug fixes, with an accuracy of 96.1% in total. Furthermore, our results show the significance of GRAPA's internal techniques, which provides insights on how to integrate with more complete-code tools to analyze partial programs.
Forecasting of ozone concentration in smart city using deep learning
Clean air is one of the most important needs for the well-being of human being health. In smart cities, timely and precise air pollution levels knowledge is vital for the successful setup of smart pollution systems. Recently, pollution and weather data in smart city have been bursting, and we have truly got into the era of big data. Ozone is considered as one of the most air pollutants with hurtful impact to human health. Existing methods used to predict the level of ozone uses shallow pollution prediction models and are still unsatisfactory in their accuracy to be used in many real-world applications. In order to increase the accuracy of prediction models we come up with the concept of using deep architecture models tested on big pollution and weather data. In this paper, a new deep learning-based ozone level prediction model is proposed, which considers the pollution and weather correlations integrally. This deep learning model is used to learn ozone level features, and it is trained using a grid search technique. A deep architecture model is utilized to represent ozone level features for prediction. Moreover, experiments demonstrate that the proposed method for ozone level prediction has superior performance. The outcome of this study can be helpful in predicting the ozone level pollution in Aarhus city as a model of smart cities for improving accuracy of ozone forecasting tools.
What do online behavioral advertising privacy disclosures communicate to users?
Online Behavioral Advertising (OBA), the practice of tailoring ads based on an individual's online activities, has led to privacy concerns. In an attempt to mitigate these privacy concerns, the online advertising industry has proposed the use of OBA disclosures: icons, accompanying taglines, and landing pages intended to inform users about OBA and provide opt-out options. We conducted a 1,505-participant online study to investigate Internet users' perceptions of OBA disclosures. The disclosures failed to clearly notify participants about OBA and inform them about their choices. Half of the participants remembered the ads they saw but only 12% correctly remembered the disclosure taglines attached to ads. When shown the disclosures again, the majority mistakenly believed that ads would pop up if they clicked on disclosures, and more participants incorrectly thought that clicking the disclosures would let them purchase advertisements than correctly understood that they could then opt out of OBA. "AdChoices", the most commonly used tagline, was particularly ineffective at communicating notice and choice. A majority of participants mistakenly believed that opting out would stop all online tracking, not just tailored ads. We dicuss challenges in crafting disclosures and provide suggestions for improvement.
Design of quadrifilar spiral antenna with integrated module for UHF RFID reader
In this paper, a quadrifilar spiral antenna (QSA) with an integrated module for UHF radio frequency identification (RFID) reader is presented. The proposed QSA consists of four spiral antennas with short stubs and a microstrip feed network. Also, the shielded module is integrated on the center of the ground inside the proposed QSA. In order to match the proposed QSA with the integrated module, we adopt a short stub connected from each spiral antenna to ground. Experimental result shows that the QSA of size 80 × 80 × 11.2 mm3 with the integrated module (40 × 40 × 3 mm3) has a peak gain of 3.5 dBic, an axial ratio under 2.5 dB and a 3-dB beamwidth of about 130o.
Digital Literacy: A Conceptual Framework for Survival Skills in the Digital Era
Digital literacy involves more than the mere ability to use software or operate a digital device; it includes a large variety of complex cognitive, motor, sociological, and emotional skills, which users need in order to function effectively in digital environments. The tasks required in this context include, for example, “reading” instructions from graphical displays in user interfaces; using digital reproduction to create new, meaningful materials from existing ones; constructing knowledge from a nonlinear, hypertextual navigation; evaluating the quality and validity of information; and have a mature and realistic understanding of the “rules” that prevail in the cyberspace. This newly emerging concept of digital literacy may be used as a measure of the quality of learners’ work in digital environments, and provide scholars and developers with a more effective means of communication in designing better user-oriented environments. This article proposes a holistic, refined conceptual framework for digital literacy, which includes photo-visual literacy; reproduction literacy; branching literacy; information literacy; and socioemotional literacy.
Tribological behavior of plasma-sprayed carbon nanotube-reinforced hydroxyapatite coating in physiological solution.
Wear behavior of plasma-sprayed carbon nanotube (CNT)-reinforced hydroxyapatite (HA) coating is evaluated in the simulated body fluid environment. Apart from enhancing the fracture toughness and providing biocompatibility, CNT-reinforced HA coating demonstrated superior wear resistance compared with that of hydroxyapatite coating without CNT. Initiation and propagation of microcracks during abrasive wear of plasma-sprayed hydroxyapatite coatings was suppressed by CNT reinforcement. Surface characterization and wear studies have shown that in addition to acting as underprop lubricant, CNTs provide reinforcement via stretching and splat-bridging for enhanced abrasion resistance in vitro.
Capturing location-privacy preferences: quantifying accuracy and user-burden tradeoffs
We present a 3-week user study in which we tracked the locations of 27 subjects and asked them to rate when, where, and with whom they would have been comfortable sharing their locations. The results of analysis conducted on over 7,500 h of data suggest that the user population represented by our subjects has rich location-privacy preferences, with a number of critical dimensions, including time of day, day of week, and location. We describe a methodology for quantifying the effects, in terms of accuracy and amount of information shared, of privacy-setting types with differing levels of complexity (e.g., setting types that allow users to specify location- and/or time-based rules). Using the detailed preferences we collected, we identify the best possible policy (or collection of rules granting access to one’s location) for each subject and privacy-setting type. We measure the accuracy with which the resulting policies are able to capture our subjects’ preferences under a variety of assumptions about the sensitivity of the information and user-burden tolerance. One practical implication of our results is that today’s location-sharing applications may have failed to gain much traction due to their limited privacy settings, as they appear to be ineffective at capturing the preferences revealed by our study.
Evaluation of MPEG-7 shape descriptors against other shape descriptors
Shape is an important image feature - it is one of the primary low level image features exploited in content-based image retrieval (CBIR). There are generally two types of shape descriptors in the literature: contour-based and region-based. In MPEG-7, the curvature scale space descriptor (CSSD) and Zernike moment descriptor (ZMD) have been adopted as the contour-based shape descriptor and region-based shape descriptor, respectively. In this paper, the two shape descriptors are evaluated against other shape descriptors, and the two shape descriptors are also evaluated against each other. Standard methodology is used in the evaluation. Specifically, we use standard databases, large data sets and query sets, commonly used performance measurement and guided principles. A Java-based client-server retrieval framework has been implemented to facilitate the evaluation. Results show that Fourier descriptor (FD) outperforms CSSD, and that CSSD can be replaced by either FD or ZMD.
The fragment assembly string graph
We present a concept and formalism, the string graph, which represents all that is inferable about a DNA sequence from a collection of shotgun sequencing reads collected from it. We give time and space efficient algorithms for constructing a string graph given the collection of overlaps between the reads and, in particular, present a novel linear expected time algorithm for transitive reduction in this context. The result demonstrates that the decomposition of reads into kmers employed in the de Bruijn graph approach described earlier is not essential, and exposes its close connection to the unitig approach we developed at Celera. This paper is a preliminary piece giving the basic algorithm and results that demonstrate the efficiency and scalability of the method. These ideas are being used to build a next-generation whole genome assembler called BOA (Berkeley Open Assembler) that will easily scale to mammalian genomes.
Deep Context: End-to-end Contextual Speech Recognition
In automatic speech recognition (ASR) what a user says depends on the particular context she is in. Typically, this context is represented as a set of word n-grams. In this work, we present a novel, all-neural, end-to-end (E2E) ASR system that utilizes such context. Our approach, which we refer to as Contextual Listen, Attend and Spell (CLAS) jointly-optimizes the ASR components along with embeddings of the context n-grams. During inference, the CLAS system can be presented with context phrases which might contain-of-vocabulary (OOV) terms not seen during training. We compare our proposed system to a more traditional contextualization approach, which performs shallow-fusion between independently trained LAS and contextual n-gram models during beam search. Across a number of tasks, we find that the proposed CLAS system outperforms the baseline method by as much as 68% relative WER, indicating the advantage of joint optimization over individually trained components.
Household Financial Assets Allocation and Behaviour of Art Collection Holding
Financial demands of household show a typical characteristic of hierarchy, which is reflected by financial assets allocation of household. The collection itself has dual-attributes of consumption and investment goods, which is becoming an important choice of household financial assets allocation. Using micro-survey data of China households and analyzing empirically, it is found that the higher the hierarchy of financial assets allocation, the stronger the tendency to hold the art collection for household. It is believed that financial assets allocation should be optimized further and the hierarchy of household financial demands should be enhanced based on the satisfaction of household transaction requirements in order to develop the art collection markets of China and absorb more households to participate in.
PIRAT - A System for Quantitative Sewer Pipe Assessment
Sewers are aging, expensive assets that attract public attention only when they fail. Sewer operators are under increasing pressure to minimise their maintenance costs, while preventing sewer failures. Inspection can give early warning of failures and allow economical repair under noncrisis conditions. Current inspection techniques are subjective and detect only gross defects reliably. They cannot provide the data needed to confidently plan long-term maintenance. This paper describes PIRAT, a quantitative technique for sewer inspection. PIRAT measures the internal geometry of the sewer and then analyses these data to detect, classify, and rate defects automatically using artificial intelligence techniques. We describe the measuring system and present and discuss geometry results for different types of sewers. The defect analysis techniques are outlined and a sample defect report presented. PIRAT’s defect reports are compared with reports from the conventional technique and the discrepancies discussed. We relate PIRAT to other work in sewer robotics. KEY WORDS—sewer inspection robot, sewer condition assessment, neural network
High-performance speed measurement by suppression of systematic resolver and encoder errors
The subject of this paper is a method which suppresses systematic errors of resolvers and optical encoders with sinusoidal line signals. The proposed method does not require any additional hardware and the computational efforts are minimal. Since this method does not cause any time delay, the dynamic of the speed control is not affected. By means of this new scheme, dynamic and smooth running characteristics of drive systems are improved considerably.
BlindBox: Deep Packet Inspection over Encrypted Traffic
Many network middleboxes perform deep packet inspection (DPI), a set of useful tasks which examine packet payloads. These tasks include intrusion detection (IDS), exfiltration detection, and parental filtering. However, a long-standing issue is that once packets are sent over HTTPS, middleboxes can no longer accomplish their tasks because the payloads are encrypted. Hence, one is faced with the choice of only one of two desirable properties: the functionality of middleboxes and the privacy of encryption. We propose BlindBox, the first system that simultaneously provides {\em both} of these properties. The approach of BlindBox is to perform the deep-packet inspection {\em directly on the encrypted traffic. BlindBox realizes this approach through a new protocol and new encryption schemes. We demonstrate that BlindBox enables applications such as IDS, exfiltration detection and parental filtering, and supports real rulesets from both open-source and industrial DPI systems. We implemented BlindBox and showed that it is practical for settings with long-lived HTTPS connections. Moreover, its core encryption scheme is 3-6 orders of magnitude faster than existing relevant cryptographic schemes.
A specification paradigm for the design and implementation of tangible user interfaces
Tangible interaction shows promise to significantly enhance computer-mediated support for activities such as learning, problem solving, and design. However, tangible user interfaces are currently considered challenging to design and build. Designers and developers of these interfaces encounter several conceptual, methodological, and technical difficulties. Among others, these challenges include: the lack of appropriate interaction abstractions, the shortcomings of current user interface software tools to address continuous and parallel interactions, as well as the excessive effort required to integrate novel input and output technologies. To address these challenges, we propose a specification paradigm for designing and implementing Tangible User Interfaces (TUIs), that enables TUI developers to specify the structure and behavior of a tangible user interface using high-level constructs which abstract away implementation details. An important benefit of this approach, which is based on User Interface Description Language (UIDL) research, is that these specifications could be automatically or semi-automatically converted into concrete TUI implementations. In addition, such specifications could serve as a common ground for investigating both design and implementation concerns by TUI developers from different disciplines. Thus, the primary contribution of this article is a high-level UIDL that provides developers from different disciplines means for effectively specifying, discussing, and programming a broad range of tangible user interfaces. There are three distinct elements to this contribution: a visual specification technique that is based on Statecharts and Petri nets, an XML-compliant language that extends this visual specification technique, as well as a proof-of-concept prototype of a Tangible User Interface Management System (TUIMS) that semi-automatically translates high-level specifications into a program controlling specific target technologies.
When Cognitive Radio meets the Internet of Things?
Internet of Things (IoT) is a world wide network of interconnected objects. IoT capable objects will be interconnected through wired and wireless communication technologies. However, cost-effectiveness issues and accessibility to remote users make wireless communication as a feasible solution. A majority of possibilities have been proposed but many of these suffer from vulnerabilities to dynamic environmental conditions, ease of access, bandwidth allocation and utilization, and cost to purchase spectrum. Thus trends are shifting to the adaptability of Cognitive Radio Networks (CRNs) into IoT. Additionally, ubiquitous objects with cognitive capabilities will be able to make intelligent decisions to achieve interference-free and on-demand services. The main goal of this paper is to discuss how CR technology can be helpful for the IoT paradigm. More precisely, in this paper, we highlight CR functionalities, specially spectrum sensing in conjunction with cloud services to serve as self-reconfigurable IoT solutions for a number of applications.
e-Justice Implementation at a National Scale: The Ugandan Case
The use of information and communications technologies has been identified as one of the means suitable for supplementing the various reforms in convalescing the performance of the justice sector. The Government of Uganda has made strides in the implementation of e-Government to effectively utilize information and communications technologies in governance. The justice players are manifested in a justice, law and order sector which is based on the the Sector Wide Approach whose basic principle is that communication, cooperation and coordination between institutions can greatly add value to service delivery within a sector. Although a subset of e-Government, e-Justice aims at improving service delivery and collaboration between all justice players through the use of ICTs and needs to be spear-headed at a sector level. This work proposes ways of harnessing the existing opportunities and methods to implement e-Justice in Uganda that will culminate into a generic framework that can be applied in similar countries.
Impacts of land use on selected physicochemical properties of soils of Abobo area, western Ethiopia
Assessing land use-induced changes in soil properties are essential for addressing issues of agro-ecosystem transformation and sustainable land productivity. In view of this, a study was conducted to assess the impact of land use/land cover on the physicochemical properties of soils of Abobo area, western Ethiopia. Three adjacent land use types, namely forest, grazing and cultivated lands each falling under four land mapping units (1Ac, 1Bc, 2Cc and 3Cl) were considered for the study. A total of 40 random soil samples (0-20 cm depth) were collected to make three composite samples for each land use type across the land mapping units and analyzed for selected soil physical and chemical properties. The results of the study, on one hand, revealed that soil OM, total N, CEC, PBS and available micronutrients (Fe, Mn, Zn and Cu) contents of the cultivated land was significantly (P < 0.001) lower than the adjacent forest land. For instance, soil OM, total N, CEC, PBS, exchangeable Mg and available micronutrients (Mn, Zn and Cu) contents of cultivated land was significantly lower than the adjacent forest land by 32.98, 33.33, 16.16, 17.81, 21.88, 29.47, 40.05 and 53.92%, respectively. On the other hand, the results of the study revealed that exchangeable cations (Mg, K and Na), PBS and available micronutrients (Fe, Mn, Zn and Cu) contents of the gazing land was significantly (P < 0.001) lower than the adjacent forest land. However, significant differences were not observed between the forests and grazing lands in soil OM, total N, CEC and available P. From the present study, it could be concluded that the soil quality and health were maintained relatively under the forest, whereas the influence on most parameters were negative on the soils of the cultivated land, indicating the need for employing integrated soil fertility management in sustainable manner to optimize and maintain the favorable soil physicochemical properties.
Recent Trends in Driver Safety Monitoring Systems: State of the Art and Challenges
Driving in busy highways and roads is becoming complex and challenging, as more cars are hitting the roads. Safe driving requires attentive drivers, quality perception of the environment, awareness of the situation, and critical decision making to react properly in emergency situations. This paper provides an overview on driver safety monitoring systems. We study various driver sources of inattention while providing a comprehensive taxonomy. Then, different safety systems that tackle driver inattention are reported. Furthermore, we present the new generation of driver monitoring systems within the context of Internet of Cars. Thus, we introduce the concept of integrated safety, where smart cars collect information from the driver, the car, the road, and, most importantly, the surrounding cars to build an efficient environment for the driver. We conclude by highlighting issues and emerging trends envisioned by the research community.
N-ary Relation Extraction using Graph-State LSTM
Cross-sentence n-ary relation extraction detects relations among n entities across multiple sentences. Typical methods formulate an input as a document graph, integrating various intra-sentential and inter-sentential dependencies. The current state-of-the-art method splits the input graph into two DAGs, adopting a DAG-structured LSTM for each. Though being able to model rich linguistic knowledge by leveraging graph edges, important information can be lost in the splitting procedure. We propose a graph-state LSTM model, which uses a parallel state to model each word, recurrently enriching state values via message passing. Compared with DAG LSTMs, our graph LSTM keeps the original graph structure, and speeds up computation by allowing more parallelization. On a standard benchmark, our model shows the best result in the literature.
Quality of life in patients with pancreatic cancer.
Pancreatic cancer is a rapidly fatal disease, with palliation often serving as the main goal of treatment. The end of life is often marked by severe symptoms and poor quality of life (QoL). Several studies presented at the 2012 ASCO Gastrointestinal Cancers Symposium addressed the importance of symptom identification and management for patients with pancreatic cancer: 1) a study evaluated the correlation between patient-reported symptoms, disease burden and treatment duration in patients with advanced pancreatic cancer undergoing gemcitabine-based chemotherapy (Abstract #370); 2) a Japanese study found that patients without worsening of pain or sleep symptoms at one month of chemotherapy had a higher frequency of disease control (Abstract #195); and 3) a study showed that fear of cancer recurrence is a substantial problem following resection and should be targeted (Abstract #289). The authors summarize the findings and discuss the importance of QoL in these patients. The results of these studies may help identify symptom changes as predictive markers and improve care and QoL for patients with this devastating disease.
Developing a holistic model for quality in higher education
Attempts to apply Quality Management models from industry to higher education have not been successful. There is a rationale for separately addressing the service and education functions with appropriate sets of criteria. TQM is an appropriate model for the former (service); for the latter (education), a number of models of excellence centered on learning are reviewed. The effectiveness of any composite model in addressing the multifarious elements of higher education depends on the organisational culture. The typical current culture is bureaucratic in nature and prone to conflict. It is argued that, in the current literature, the Learning Communities concept best describes an ideal organisational behaviour that addresses the core values of higher education. Such a Holistic Model for Quality in Higher Education can serve as the ideal for addressing the service, education and implementation aspects synergistically.
Compressive demosaicing for periodic color filter arrays
The utility of Compressed Sensing (CS) for demosaicing images captured using random panchromatic color filter arrays (CFAs) was investigated in [1]. Meanwhile, most camera manufacturers employ periodic CFAs such as the popular Bayer CFA. In this paper, we derive a CS-based solution for demosaicing images captured using the general class of periodic CFAs. It is well known that periodic CFAs can be designed to effectively separate luminance and chrominance frequency bands [2, 3]. We exploit this ability to reduce artifacts associated with luminance-chrominance overlap at the solver side. We show that the modified compressive demosaicing method, coupled with the additional constraint that chrominance channels have smooth surfaces, achieves further improved results for most periodic CFAs.
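As a toy illustration of the compressed-sensing view, and not the paper's algorithm, the sketch below recovers a single CFA-sampled channel by assuming sparsity in a 2-D DCT basis and running ISTA: a gradient step on the sampled pixels followed by soft-thresholding of the coefficients. The mask pattern, step size, and threshold lam are illustrative assumptions.

```python
# ISTA recovery of a subsampled channel under an assumed DCT sparsity prior.
import numpy as np
from scipy.fft import dctn, idctn

def ista_demosaic(y, mask, lam=0.05, steps=200):
    """Recover x from y = mask * x, assuming dctn(x) is sparse."""
    alpha = dctn(y, norm="ortho")            # initialise coefficients from y
    for _ in range(steps):
        x = idctn(alpha, norm="ortho")
        resid = mask * x - y                 # mismatch on sampled pixels only
        alpha -= dctn(mask * resid, norm="ortho")   # gradient step (step = 1)
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam, 0.0)  # shrink
    return idctn(alpha, norm="ortho")

# Toy example: smooth test image, periodic mask keeping 1 pixel in 4.
n = 64
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
img = np.cos(4 * np.pi * u) * np.cos(2 * np.pi * v)
mask = np.zeros((n, n))
mask[::2, ::2] = 1.0
rec = ista_demosaic(mask * img, mask)
print("MSE:", np.mean((rec - img) ** 2))
```

A full demosaicer would apply this jointly to luminance and chrominance channels; the point here is only the measurement model (periodic sampling) plus a sparsity prior solved by iterative shrinkage.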
Teaching Classification Boundaries to Humans
Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and we explore a variety of mechanisms to find what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that the greatest gains come from a combination of shaping and the coverage model.
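The abstract does not spell out the coverage model, so the following is a hedged sketch of one plausible form: the subject's mastery around each taught example is modelled as an RBF kernel bump, and the next example shown is the one the current coverage explains worst. The bandwidth gamma and the max-similarity scoring are assumptions.

```python
# Assumed kernel-coverage heuristic for picking the next teaching example.
import numpy as np

def coverage(x, taught, gamma=1.0):
    """Coverage of point x = max RBF similarity to any taught example."""
    if not taught:
        return 0.0
    t = np.asarray(taught)
    return float(np.max(np.exp(-gamma * np.sum((t - x) ** 2, axis=1))))

def pick_next(pool, taught, gamma=1.0):
    """Choose the pool example with the lowest current coverage."""
    scores = [coverage(x, taught, gamma) for x in pool]
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
pool = rng.uniform(-1, 1, size=(50, 2))   # candidate teaching examples
taught = []
for _ in range(5):                        # teach 5 maximally novel points
    i = pick_next(pool, taught)
    taught.append(pool[i])
print(np.round(np.asarray(taught), 2))
```

Combined with shaping, such a score could also be restricted to easy examples first and harder ones later.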
Twitter as a Corpus for Sentiment Analysis and Opinion Mining
Sentiment Analysis (SA) and summarization have recently become the focus of many researchers, because analysis of online text is beneficial and in demand in many different applications. One such application is product-based sentiment summarization of multiple documents, with the purpose of informing users about the pros and cons of various products. This paper introduces a novel solution to target-oriented sentiment summarization and SA of short informal texts, with a main focus on Twitter posts known as “tweets”. We compare different algorithms and methods for SA polarity detection and sentiment summarization. We show that our hybrid polarity detection system not only outperforms the unigram state-of-the-art baseline, but can also be an advantage over other methods when used as part of a sentiment summarization system. Additionally, we illustrate that our SA and summarization system exhibits high performance with various useful functionalities and features.
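For reference, the unigram baseline such systems are compared against can be as simple as bag-of-words counts feeding a linear classifier. The sketch below is a generic version of that baseline, not the paper's system; the tiny inline tweets and labels are invented for illustration.

```python
# Generic unigram polarity baseline (assumed setup, toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["love this phone, battery is great",
          "worst update ever, so slow",
          "camera quality is amazing",
          "screen cracked after a day, terrible"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)),  # unigrams only
                    LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["battery life is terrible"]))  # polarity on toy data
```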
Detecting and Characterizing Mental Health Related Self-Disclosure in Social Media
Self-disclosure is an important element facilitating improved psychological wellbeing in individuals with mental illness. As social media is increasingly adopted in health-related discourse, we examine how these new platforms might allow honest and candid expression of thoughts, experiences and beliefs. Specifically, we seek to detect the levels of self-disclosure manifested in posts shared on different mental health forums on Reddit. We develop a classifier for this purpose based on content features. The classifier can characterize a Reddit post as exhibiting high, low, or no self-disclosure with 78% accuracy. Applying this classifier to general mental health discourse on Reddit, we find that the bulk of such discourse is characterized by high self-disclosure, and that the community responds distinctively to posts that disclose less or more. We conclude by discussing the potential of harnessing our proposed self-disclosure detection algorithm in psychological therapy via social media. We also discuss design considerations for improved community moderation and support in these vulnerable self-disclosing communities.
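As a hedged sketch of a content-feature classifier for the three disclosure levels (the authors' actual feature set is richer and not detailed here), TF-IDF n-grams with a linear SVM can stand in. The example posts and labels below are invented for illustration.

```python
# Assumed three-class self-disclosure classifier on toy posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

posts = ["I was diagnosed last year and still struggle daily",
         "has anyone tried this therapist in town?",
         "article: new study on sleep and mood",
         "I relapsed again and I feel ashamed telling anyone",
         "what time does the support group meet?",
         "sharing a news link about anxiety research"]
levels = ["high", "low", "none", "high", "low", "none"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),  # word uni/bigrams
                    LinearSVC())
clf.fit(posts, levels)
print(clf.predict(["I have never told anyone this before"]))
```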
Convolutional Neural Pyramid for Image Processing
We propose a principled convolutional neural pyramid (CNP) framework for general low-level vision and image processing tasks. It is based on the essential finding that many applications require large receptive fields for structure understanding, yet the corresponding regression networks either stack many layers or apply large kernels to achieve them, which is computationally very costly. Our pyramid structure can greatly enlarge the receptive field without sacrificing computational efficiency. Extra benefits include adaptive network depth and progressive upsampling for quasi-real-time inference on VGA-size input. Our method benefits a broad set of applications, such as depth/RGB image restoration, completion, noise/artifact removal, edge refinement, image filtering, image enhancement and colorization.
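A minimal PyTorch sketch of the pyramid idea, not the paper's exact architecture, is shown below: the input is processed at several scales so that coarse levels see a large receptive field cheaply, and the levels are merged by progressive upsampling. Channel counts, depths, and the fusion scheme are assumptions.

```python
# Assumed multi-scale pyramid network with progressive upsampling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidNet(nn.Module):
    def __init__(self, ch=16, levels=3):
        super().__init__()
        # One small conv branch per scale, each taking the RGB input.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
            for _ in range(levels))
        # Fusion convs applied after each upsample-and-add.
        self.fuse = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=1) for _ in range(levels - 1))
        self.head = nn.Conv2d(ch, 3, 3, padding=1)  # regress an RGB output

    def forward(self, x):
        # Run each branch on a progressively downsampled copy of the input.
        feats = []
        for i, branch in enumerate(self.branches):
            xi = F.avg_pool2d(x, 2 ** i) if i > 0 else x
            feats.append(branch(xi))
        # Merge coarse-to-fine: upsample, add the finer feature, fuse.
        out = feats[-1]
        for i in range(len(feats) - 2, -1, -1):
            out = F.interpolate(out, size=feats[i].shape[-2:],
                                mode="bilinear", align_corners=False)
            out = torch.relu(self.fuse[i](out + feats[i]))
        return self.head(out)

net = PyramidNet()
y = net(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 3, 64, 64])
```

The coarsest branch sees the image at 1/4 resolution here, so its 3x3 kernels cover a 4x larger effective area at a fraction of the full-resolution cost, which is the efficiency argument behind the pyramid.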