In-body experiences: embodiment, control, and trust in robot-mediated communication
Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.
The intelligent street: responsive sound environments for social interaction
The Intelligent Street is a music installation that responds intelligently to the collective requests of users interacting together. The performance it creates is largely shaped by the collective set of text commands sent from users' mobile phones. In this way, users in shared environments, subjected for so long to uncontrollable and often undesired 'Muzak', can now directly influence their sonic environment and collectively create the aural soundscape they desire. We see our project as transforming the inhabitants of any given space from passive consumers into active creators, and anticipate that it has significant commercial, social and educational potential. In this paper we present a description of the installation, its software architecture and implementation, as well as a report on subsequent user evaluation of it as a musical public playground and, moreover, on our over-arching goals as musicians and software engineers.
Switchable Deep Network for Pedestrian Detection
In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.
SeaGlass: Enabling City-Wide IMSI-Catcher Detection
Cell-site simulators, also known as IMSI-catchers and stingrays, are used around the world by governments and criminals to track and eavesdrop on cell phones. Despite extensive public debate surrounding their use, few hard facts about them are available. For example, the richest sources of information on U.S. government cell-site simulator usage are from anonymous leaks, public records requests, and court proceedings. This lack of concrete information and the difficulty of independently obtaining such information hampers the public discussion. To address this deficiency, we build, deploy, and evaluate SeaGlass, a city-wide cell-site simulator detection network. SeaGlass consists of sensors that measure and upload data on the cellular environment to find the signatures of portable cell-site simulators. SeaGlass sensors are designed to be robust, low-maintenance, and deployable in vehicles for long durations. The data they generate is used to learn a city’s network properties to find anomalies consistent with cell-site simulators. We installed SeaGlass sensors into 15 ridesharing vehicles across two cities, collecting two months of data in each city. Using this data, we evaluate the system and show how SeaGlass can be used to detect signatures of portable cell-site simulators. Finally, we evaluate our signature detection methods and discuss anomalies discovered in the data.
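The abstract does not specify SeaGlass's detection algorithm. A minimal sketch of the general approach it describes, learning a city's baseline network properties and flagging deviations consistent with a portable cell-site simulator, might look like the following; the per-tower signal-strength model and the z-score threshold are illustrative assumptions, not the authors' method.

```python
from statistics import mean, stdev

def build_baseline(measurements):
    """Learn a per-tower baseline (mean, std dev of signal strength) from
    historical sensor readings, keyed by tower id."""
    return {tower: (mean(vals), stdev(vals)) for tower, vals in measurements.items()}

def flag_anomalies(baseline, new_readings, z_threshold=3.0):
    """Flag readings from unknown towers, or readings that deviate strongly
    from a tower's learned baseline."""
    anomalies = []
    for tower, value in new_readings:
        if tower not in baseline:
            anomalies.append((tower, value, "unknown tower"))
            continue
        mu, sigma = baseline[tower]
        if sigma > 0 and abs(value - mu) / sigma > z_threshold:
            anomalies.append((tower, value, "signal deviation"))
    return anomalies

# Hypothetical history of signal-strength readings (dBm) per tower.
history = {"tower_a": [-70, -72, -71, -69, -70], "tower_b": [-85, -84, -86, -85, -85]}
base = build_baseline(history)
anoms = flag_anomalies(base, [("tower_a", -40), ("tower_b", -85), ("tower_x", -50)])
```

Here a sudden strong signal on tower_a and a never-before-seen tower_x are both flagged, while tower_b's reading matches its baseline.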
Summary of European Association of Urology (EAU) Guidelines on Neuro-Urology.
CONTEXT Most patients with neuro-urological disorders require life-long medical care. The European Association of Urology (EAU) regularly updates guidelines for the diagnosis and treatment of these patients. OBJECTIVE To provide a summary of the 2015 updated EAU Guidelines on Neuro-Urology. EVIDENCE ACQUISITION Structured literature searches in several databases were carried out to update the 2014 guidelines. Levels of evidence and grades of recommendation were assigned where possible. EVIDENCE SYNTHESIS Neurological disorders often cause urinary tract, sexual, and bowel dysfunction. Most neuro-urological patients need life-long care for optimal life expectancy and quality of life. Timely diagnosis and treatment are essential to prevent upper and lower urinary tract deterioration. Clinical assessment should be comprehensive and usually includes a urodynamic investigation. The neuro-urological management must be tailored to the needs of the individual patient and may require a multidisciplinary approach. Sexuality and fertility issues should not be ignored. Numerous conservative and noninvasive possibilities of management are available and should be considered before a surgical approach is chosen. Neuro-urological patients require life-long follow-up and particular attention has to be paid to this aspect of management. CONCLUSIONS The current EAU Guidelines on Neuro-Urology provide an up-to-date overview of the available evidence for adequate diagnosis, treatment, and follow-up of neuro-urological patients. PATIENT SUMMARY Patients with a neurological disorder often suffer from urinary tract, sexual, and bowel dysfunction and life-long care is usually necessary. The update of the EAU Guidelines on Neuro-Urology, summarized in this paper, enables caregivers to provide optimal support to neuro-urological patients. Conservative, noninvasive, or minimally invasive approaches are often possible.
Combining Word2Vec with Revised Vector Space Model for Better Code Retrieval
API example code search is an important application in software engineering. Traditional approaches to API code search are based on information retrieval. Recent advances in Word2Vec have been applied to support the retrieval of API examples. In this work, we perform a preliminary study showing that combining traditional IR with Word2Vec achieves better retrieval accuracy. More experiments need to be done to study different types of combination between the two lines of approaches.
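The abstract does not give the combination formula. One common way to combine a vector-space (lexical) score with Word2Vec (semantic) similarity is weighted linear interpolation, sketched below; the token-overlap score and the tiny hand-made word vectors are illustrative stand-ins for a trained TF-IDF model and real Word2Vec embeddings.

```python
import math
from collections import Counter

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def lexical_score(query, doc):
    """Bag-of-words overlap, a simple stand-in for a TF-IDF/VSM score."""
    q, d = Counter(query.split()), Counter(doc.split())
    return sum((q & d).values()) / max(len(query.split()), 1)

def embed(text, vectors, dim):
    """Average word vectors -- the usual way Word2Vec is applied to a text."""
    acc, n = [0.0] * dim, 0
    for t in text.split():
        if t in vectors:
            acc = [a + b for a, b in zip(acc, vectors[t])]
            n += 1
    return [a / n for a in acc] if n else acc

def combined_score(query, doc, vectors, dim, alpha=0.5):
    """Linear interpolation of lexical and semantic similarity."""
    sem = cosine(embed(query, vectors, dim), embed(doc, vectors, dim))
    return alpha * lexical_score(query, doc) + (1 - alpha) * sem

# Toy vectors: "read" and "open" are made semantically close on purpose.
vectors = {"read": [1.0, 0.0], "file": [0.0, 1.0], "open": [1.0, 0.0]}
score = combined_score("read file", "open file", vectors, dim=2)
```

The semantic term rescues the query/document pair "read file" vs "open file", which pure token overlap would score at only 0.5.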
Treatment of adults with acute lymphoblastic leukemia: do the specifics of the regimen matter?: Results from a prospective randomized trial.
BACKGROUND Induction therapy for adults with acute lymphoblastic leukemia (ALL) is similar across essentially all regimens, comprised of vincristine, corticosteroids, and anthracyclines intensified with cyclophosphamide, asparaginase, or both. Given the lack of randomized data, to date, no regimen has emerged as standard. The authors previously evaluated cytarabine 3 g/m(2) daily for 5 days with mitoxantrone 80 mg/m(2) (the ALL-2 regimen) as a novel induction regimen. Compared with historic controls, the ALL-2 regimen was superior in terms of incidence of complete remission, failure with resistant disease, and activity in patients with Philadelphia chromosome (Ph)-positive ALL. METHODS The authors conducted a multicenter, prospective, randomized trial of the ALL-2 regimen compared with a standard 4-drug induction (the L-20 regimen). Patients also received consolidation, maintenance therapy, and central nervous system prophylaxis. The trial accrued patients from August 1996 to October 2004. RESULTS The median follow-up for survivors was 7 years, and the median patient age was 43 years. Responses were evaluated in 164 patients. The treatment arms were balanced in terms of pretreatment characteristics. The frequency of complete remission for the ALL-2 regimen versus the L-20 regimen was 83% versus 71% (P = .06). More patients on the L-20 arm failed with resistant disease (21% vs 8%; P = .02). Induction deaths were comparable at 9% (ALL-2) versus 7% (L-20). The median survival was similar; and, at 5 years, the survival rate was 33% alive on the ALL-2 arm versus 27% on the L-20. CONCLUSIONS Despite superior results of induction therapy with the ALL-2 regimen, this treatment did not improve long-term outcomes. When coupled to the reported experience of other studies in adults with ALL, the results of this randomized trial raise the possibility that ultimate outcomes in adult ALL may be independent of the specific regimen chosen. Cancer 2013. 
Predestination: Inferring Destinations from Partial Trajectories
We describe a method called Predestination that uses a history of a driver’s destinations, along with data about driving behaviors, to predict where a driver is going as a trip progresses. Driving behaviors include types of destinations, driving efficiency, and trip times. Beyond considering previously visited destinations, Predestination leverages an open-world modeling methodology that considers the likelihood of users visiting previously unobserved locations based on trends in the data and on the background properties of locations. This allows our algorithm to smoothly transition between “out of the box” with no training data to more fully trained with increasing numbers of observations. Multiple components of the analysis are fused via Bayesian inference to produce a probabilistic map of destinations. Our algorithm was trained and tested on hold-out data drawn from a database of GPS driving data gathered from 169 different subjects who drove 7,335 different trips.
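The Bayesian fusion step described above can be sketched as follows, under the simplifying assumption that the likelihood of the observed partial trajectory has already been computed for each candidate destination; Predestination's actual components (open-world priors, driving-efficiency and trip-time likelihoods) are richer than this illustration.

```python
def destination_posterior(priors, trip_likelihoods):
    """Fuse a destination prior (from history plus background properties of
    locations) with the likelihood of the partial trajectory observed so far,
    via Bayes' rule: P(dest | trip) is proportional to P(trip | dest) * P(dest)."""
    unnorm = {d: priors[d] * trip_likelihoods.get(d, 0.0) for d in priors}
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()} if z else dict(priors)

# Hypothetical example: history favours "home", but the trip so far is far
# more consistent with driving toward "work".
priors = {"home": 0.5, "work": 0.3, "gym": 0.2}
likelihoods = {"home": 0.1, "work": 0.6, "gym": 0.3}
posterior = destination_posterior(priors, likelihoods)
```

As more of the trip is observed, the likelihoods sharpen and the posterior shifts from the history-driven prior toward the trajectory-consistent destination.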
Liability and Computer Security: Nine Principles
The conventional wisdom is that security priorities should be set by risk analysis. However, reality is subtly different: many computer security systems are at least as much about shedding liability as about minimising risk. Banks use computer security mechanisms to transfer liability to their customers; companies use them to transfer liability to their insurers, or (via the public prosecutor) to the taxpayer; and they are also used to shift the blame to other departments (“we did everything that GCHQ/the internal auditors told us to”). We derive nine principles which might help designers avoid the most common pitfalls.
A Koch-Like Sided Fractal Bow-Tie Dipole Antenna
A novel Koch-like fractal curve is proposed to transform an ultra-wideband (UWB) bow-tie into a so-called Koch-like sided fractal bow-tie dipole. A small isosceles triangle is cut from the center of each side of the initial isosceles triangle, and the procedure then iterates along the sides as the Koch curve does, forming the Koch-like fractal bow-tie geometry. The fractal bow-tie at each iteration is first investigated without a feedline in free space to unveil its fractal traits, followed by a detailed treatment of the four-iteration pragmatic fractal bow-tie dipole fed by a 50-Ω coaxial SMA connector through coplanar stripline (CPS), and a comparison with the Sierpinski gasket. The fractal bow-tie dipole can operate in multiple bands with moderate gain (3.5-7 dBi) and high efficiency (60%-80%), corresponding to certain shape parameters, such as the notch ratio α, notch angle φ, and base angles θ of the isosceles triangle. Compared with a conventional bow-tie dipole and a Sierpinski gasket of the same size, this fractal-like antenna has almost the same operating properties at low frequency and a better radiation pattern at high frequency in multi-band operation, which makes it a better candidate for applications in PCS, WLAN, WiFi, WiMAX, and other communication systems.
Threshold Phenomena in Epistemic Networks
A small consortium of philosophers has begun work on the implications of epistemic networks (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming), building on theoretical work in economics, computer science, and engineering (Bala and Goyal 1998; Kleinberg 2001; Amaral et al., 2004) and on some experimental work in social psychology (Mason, Jones, and Goldstone, 2008). This paper outlines core philosophical results and extends those results to the specific question of thresholds. Epistemic maximization of certain types does show clear threshold effects. Intriguingly, however, those effects appear to be importantly independent of more familiar threshold effects in networks. 1. Epistemology and Scientific Networks Epistemology is defined as the study of knowledge. The traditional focus in the field, however, has long been limited to the study of the individual epistemic agent. Traditional epistemology treats knowledge acquisition as an individual endeavor. In Hume, Descartes, and Kant, epistemology is told as the story of a single individual trying to figure out what the world is like: an attempt to answer the question of how an individual agent figures out the structure of the world. A small consortium of contemporary philosophers has begun work on a different approach (Zollman 2008 and forthcoming; Grim 2006, 2007; Weisberg and Muldoon forthcoming). In this recent work the essential emphasis is on communities of epistemic agents rather than on the individual. How does an individual figure out the structure of the world? The truth is that no individual does. It is cultures and communities that plumb the structure of reality; individuals figure out the structure of the world only as they participate in the epistemic networks in which they are embedded. Science is undoubtedly our pre-eminent example of knowledge acquisition.
But what characterizes contemporary scientific research is not a catalog of isolated investigators but coordinated interactive networks of investigation. To understand knowledge acquisition in science one must understand more than the work of individual participants. One must understand the structure and dynamics of the enterprise as a whole. Here the questions are importantly different from those in traditional epistemology. A scientific community can be envisaged as a network of interactive agents attempting to limn reality on the basis of uneven, conflicting, and sometimes ambiguous data. How does the network structure of collaboration and competition, of data sharing and information transfer, affect knowledge acquisition in the community at large? What kinds of network structures, of what kinds of agents, will best achieve scientific goals, such as accuracy? In what ways will those structures be sensitive to the specific form of the problem, or to the distribution or uncertainty of data? Those are questions central to this new approach, and questions for which the work outlined below offers some early and partial clues. Given what we know of networks in general, it is to be expected that the dynamics of information acquisition and exchange across an epistemic network will not be reducible to any simple aggregate measure across individuals. The modeling results offered here substantiate that expectation in full. One of the implications of epistemic networks, tracked here in terms of thresholds, is the robust and surprising finding that a scientific community may learn more when its individual scientists learn less. In terms of central scientific goals such as accuracy, increased informational linkages between scientists may not always be a good thing. 17th-century science was characterized by distributed informational networks with limited linkages between investigators.
21st-century science is characterized by totally connected networks across the internet. One way of phrasing a central result in what follows is that for some central scientific goals, including accuracy, and for some topics of investigation, the network structure of 17th-century science appears to be superior to our own. Section 2 outlines the notion of an epistemic landscape crucial to the model, with details in section 3 of the initial networks surveyed. The core result that a scientific community can learn more when individual scientists learn less is presented in section 4. Sections 5 and 6 further explore the question of precisely what properties of networks are important for that result. Here results show clear thresholds for epistemic maximization of certain types with increasing numbers of links in random networks. Epistemic maximization on networks of the type at issue, it turns out, exhibits clear threshold phenomena. But it also turns out that the epistemic thresholds at issue are surprisingly independent of other network properties; they do not correlate cleanly with thresholds in any of the other network properties one might expect. Results here are intended as an introduction, with first hints regarding some of the surprises and subtleties of informational dynamics across epistemic networks. These are offered as a first word on the topic, rather than the last word; it quickly becomes clear how much we do not yet understand, and how much more work remains to be done. 2. Epistemic Landscapes We can envisage an epistemic landscape as a topology of ideal data: data regarding alternative medical treatments, for example (Fig. 1). In such a graph, points in the xz plane represent particular combinations of radiation and chemotherapy, or particular hypotheses regarding the best combination. Graph height on the y axis represents some measure of success: the proportion of patients in fact cured with combinations of radiation and chemotherapy at those rates.
If you use radiation therapy at rate x and chemotherapy at rate z, you will get the proportion of cures represented on the y axis above that point. Fig. 1 A three-dimensional epistemic landscape. Points on the xz plane represent hypotheses regarding the optimal combination of radiation and chemotherapy; graph height on the y axis represents some measure of success. This first epistemic landscape is a medical one, but the specific topic of investigation is unimportant for our broader epistemic concerns. One might have an epistemic landscape of magnetometry readings for different hypothetical locations of a shipwreck, or of iridium stratigraphy world-wide as feedback on different hypotheses regarding the timing of the K-T asteroid collision, or of any measurable variable y that confirms some hypotheses rather than others regarding the interplay of variables x and z. It is important to emphasize, however, that the concept of an epistemic landscape represents ideal data across a full range of possible hypotheses. Different investigators will test different hypotheses and will get differential feedback regarding those hypotheses. As an individual investigator, however, one will not be able to see the epistemic landscape as a whole. One will see results only at a point in the graph, in a small area, or in a scattering of points. Despite those limitations, our job description as epistemic agents is to find the theory that is best supported by the data. The goal of investigation is to find the highest points in the epistemic landscape: the best confirmed hypotheses, the most warranted predictions, the most reliable medical treatments. Fortunately, we do not work alone: we are linked to other investigators as part of a larger network. The model at issue here employs simpler two-dimensional epistemic landscapes (Fig. 2). Fig. 2 Two-dimensional epistemic landscapes. Values on the x axis represent alternative hypotheses.
Values on the y axis represent the ideal epistemic payoff for particular hypotheses. In the first landscape the data converge smoothly to a single best hypothesis or medical treatment. The second represents a slightly more complex landscape, in which particular combinations of drugs do well, perhaps, but combinations in between do worse. The third is a still more complex landscape, in which some peaks are smooth and easily climbed, but represent inferior outcomes. The hypotheses or medical treatments they lead to are not the best. The best outcome, however (the hypothesis that would be best confirmed, or the medical treatment that would be most effective), is hidden in a spike with a narrow base, and is thus harder to find. 3. Modeling Epistemic Networks Suppose we have a population of agents, each of whom starts with a hypothesis. Here that hypothesis is represented by a single point on the x-axis landscape. In testing their hypotheses, our agents accumulate data as feedback: a percentage of patient cures, for example. But our agents are also networked to others; they can see not only the success rate of their own hypothesis but the success rates for the hypotheses of those to whom they are linked. Agents change their hypotheses based on the success rates of those to whom they are linked. As an agent in this model, you can see how well the hypotheses of some other agents are doing; if their hypotheses are better supported by the data than yours, you shift your hypothesis in their direction. If your hypothesis is the best of those visible to you, on the other hand, you stick with it. Even with a network model this simple there are a number of intriguing parameters. One of those built into this model is a 'shaking hand': when you aim to duplicate another's hypothesis, you may be slightly off. Your lab conditions may be slightly different from the other agent's, or your chemicals impure, or your sample slightly biased.
You therefore end up with a hypothesis that is not a precise match of the one you are imitating but is merely close by. One result, of course, is that you thereby explore more of the epistemic landscape. The model used here builds in a 'shaking hand' that puts one in a random region within 4 points on either side of a target hypothesis.
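A minimal sketch of the dynamics just described: agents on a network imitate the best-performing visible hypothesis, with a 'shaking hand' of up to 4 points on either side of the target. The population size, step count, ring network, and single-peaked landscape below are illustrative assumptions, not the paper's exact configuration.

```python
import random

def simulate(landscape, neighbors, n_agents=20, steps=50, shake=4, seed=0):
    """Each agent holds a hypothesis (an index into a 1-D epistemic landscape),
    sees the payoffs of its networked neighbors, and moves toward the best
    visible hypothesis, landing within +/- `shake` points of the target."""
    rng = random.Random(seed)
    size = len(landscape)
    hyps = [rng.randrange(size) for _ in range(n_agents)]
    for _ in range(steps):
        new = []
        for i, h in enumerate(hyps):
            visible = [hyps[j] for j in neighbors(i, n_agents)] + [h]
            best = max(visible, key=lambda x: landscape[x])
            if landscape[best] > landscape[h]:
                # imperfect imitation: the 'shaking hand'
                h = min(size - 1, max(0, best + rng.randint(-shake, shake)))
            new.append(h)
        hyps = new
    return hyps

# Ring network: each agent sees only its two immediate neighbours.
def ring(i, n):
    return [(i - 1) % n, (i + 1) % n]

# Smooth single-peaked landscape with its best hypothesis at index 50.
landscape = [100 - abs(x - 50) for x in range(101)]
final = simulate(landscape, ring)
```

Swapping `ring` for a totally connected network, or sharpening the landscape's peak, is how threshold questions like those in the paper can be probed.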
Fast Context Adaptation via Meta-Learning
We propose CAVIA, a meta-learning method for fast adaptation that is scalable, flexible, and easy to implement. CAVIA partitions the model parameters into two parts: context parameters that serve as additional input to the model and are adapted on individual tasks, and shared parameters that are meta-trained and shared across tasks. At test time, the context parameters are updated with one or several gradient steps on a task-specific loss that is backpropagated through the shared part of the network. Compared to approaches that adjust all parameters on a new task (e.g., MAML), CAVIA can be scaled up to larger networks without overfitting on a single task, is easier to implement, and is more robust to the inner-loop learning rate. We show empirically that CAVIA outperforms MAML on regression, classification, and reinforcement learning problems.
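CAVIA's test-time adaptation (gradient steps on the context parameters only, with the shared parameters frozen) can be sketched for a toy linear model. The shared weights below are assumed to be already meta-trained, and the meta-training outer loop is omitted; the task and parameter values are illustrative.

```python
import numpy as np

def predict(x, phi, W, b):
    """Model output given input x and context parameters phi.
    Only phi is adapted per task; W and b are shared across tasks."""
    return W @ np.concatenate([x, phi]) + b

def adapt_context(xs, ys, W, b, n_ctx=2, lr=0.1, steps=10):
    """Inner loop: gradient descent on the task loss w.r.t. phi only.
    Context parameters start at zero for every new task."""
    phi = np.zeros(n_ctx)
    for _ in range(steps):
        grad = np.zeros_like(phi)
        for x, y in zip(xs, ys):
            err = predict(x, phi, W, b) - y
            grad += 2 * err * W[len(x):]  # d(err^2)/d(phi): shared W is frozen
        phi -= lr * grad / len(xs)
    return phi

# Hypothetical shared weights such that prediction = x + phi[0]:
# the task family is y = x + c, and phi[0] must absorb the task offset c.
W = np.array([1.0, 1.0, 0.0])
phi = adapt_context([np.array([0.0]), np.array([1.0])], [3.0, 4.0], W, b=0.0)
```

After a few inner-loop steps, `phi[0]` approaches the task offset 3 while the unused second context parameter stays at zero, mirroring how CAVIA adapts only a small context vector per task.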
Nonlinear dispersive regime of cavity QED: The dressed dephasing model
Maxime Boissonneault, Jay Gambetta, and Alexandre Blais. Département de Physique et Regroupement Québécois sur les Matériaux de Pointe, Université de Sherbrooke, Sherbrooke, Québec, Canada J1K 2R1; Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada; Centre for Quantum Computer Technology, Centre for Quantum Dynamics, Griffith University, Brisbane, Queensland 4111, Australia. (Dated: June 17, 2008)
Recent Advances in Radio Resource Management for Heterogeneous LTE/LTE-A Networks
As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. However, recent research mainly concentrates on interference mitigation; other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. We also identify a number of potential research directions for future RRM development. Finally, we discuss the gaps in current RRM research and the importance of multi-objective RRM studies.
A 24GS/s 6b ADC in 90nm CMOS
This paper presents a 24 GS/s 6 b ADC in 90 nm CMOS with the highest ENOB up to 12 GHz input frequency and lowest power consumption of 1.2 W compared to ADCs with similar performance. It uses an interleaved architecture of SAR type self-calibrating converters operating from 1 V supply combined with an array of 2.5 V T/Hs with delay, gain and offset-calibration capability.
Hyperspectral image classification and dimensionality reduction: an orthogonal subspace projection approach
Most applications of hyperspectral imagery require processing techniques which achieve two fundamental goals: 1) detect and classify the constituent materials for each pixel in the scene; 2) reduce the data volume/dimensionality, without loss of critical information, so that it can be processed efficiently and assimilated by a human analyst. In this paper, we describe a technique which simultaneously reduces the data dimensionality, suppresses undesired or interfering spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel vector onto a subspace which is orthogonal to the undesired signatures. This operation is an optimal interference suppression process in the least squares sense. Once the interfering signatures have been nulled, projecting the residual onto the signature of interest maximizes the signal-to-noise ratio and results in a single-component image that represents a classification for the signature of interest. The orthogonal subspace projection (OSP) operator can be extended to k signatures of interest, thus reducing the data dimensionality to k while simultaneously classifying the hyperspectral image. The approach is applicable to both spectrally pure and mixed pixels.
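The projection described above can be written compactly: with the undesired signatures as columns of a matrix U, the annihilating projector is P = I - U U^# (U^# the pseudoinverse of U), and the classifier for a target signature d is q(r) = d^T P r for each pixel vector r. A small numerical sketch with hypothetical three-band signatures:

```python
import numpy as np

def osp_classifier(d, U):
    """Orthogonal subspace projection: build P = I - U U^# , which annihilates
    the undesired signatures in U, then return the operator q = d^T P."""
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
    return d @ P

# One undesired signature and one signature of interest (illustrative values).
U = np.array([[1.0], [0.0], [0.0]])   # undesired / interfering signature
d = np.array([1.0, 1.0, 0.0])         # signature of interest
q = osp_classifier(d, U)

# A mixed pixel: 2 parts undesired signature + 3 parts target signature.
pixel = 2.0 * U[:, 0] + 3.0 * d
response = q @ pixel
```

The interferer's contribution is nulled exactly, so the response depends only on the target abundance, which is the least-squares-optimal behaviour the abstract claims.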
Planning, Learning and Coordination in Multiagent Decision Processes
There has been a growing interest in AI in the design of multiagent systems, especially in multiagent cooperative planning. In this paper, we investigate the extent to which methods from single-agent planning and learning can be applied in multiagent settings. We survey a number of different techniques from decision-theoretic planning and reinforcement learning and describe a number of interesting issues that arise with regard to coordinating the policies of individual agents. To this end, we describe multiagent Markov decision processes as a general model in which to frame this discussion. These are special n-person cooperative games in which agents share the same utility function. We discuss coordination mechanisms based on imposed conventions (or social laws) as well as learning methods for coordination. Our focus is on the decomposition of sequential decision processes so that coordination can be learned (or imposed) locally, at the level of individual states. We also discuss the use of structured problem representations and their role in the generalization of learned conventions and in approximation.
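Because all agents share a single utility function, a multiagent MDP can in principle be solved like a single-agent MDP over joint actions; the residual problem is coordinating on which optimal joint action to execute. A minimal value-iteration sketch on a hypothetical two-agent coordination game (not an example from the paper):

```python
def value_iteration(states, joint_actions, transition, reward, gamma=0.9, eps=1e-6):
    """Standard value iteration, with 'actions' ranging over joint actions
    of all agents; valid because agents share one utility function."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in joint_actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Hypothetical coordination problem: two agents reach the goal (reward 1)
# only if they choose the SAME action; mismatched choices leave them stuck.
states = ["start", "goal"]
joint_actions = [("left", "left"), ("left", "right"),
                 ("right", "left"), ("right", "right")]

def transition(s, a):
    if s == "start" and a[0] == a[1]:
        return {"goal": 1.0}
    return {s: 1.0}

def reward(s, a):
    return 1.0 if s == "start" and a[0] == a[1] else 0.0

V = value_iteration(states, joint_actions, transition, reward)
```

The optimal value is the same for the joint actions ("left","left") and ("right","right"), which is exactly where conventions or learned coordination mechanisms must break the tie.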
A survey on privacy issues in digital forensics
Privacy has always been a major concern in computer forensics and security: privacy issues arise in virtually any investigation, whether or not it pertains to a computer. Protecting privacy in the physical world requires legislation, but in the digital world, where rapidly evolving technology and an ever-growing number of digital devices generate huge amounts of private data, it is impossible to provide a fully protected space while data are transferred, stored, and collected. Since the field's inception, forensic investigators and developers have faced the challenge of balancing the retrieval of key evidence against infringement of user privacy. This paper examines developmental trends in computer forensics and security across various aspects of achieving such a balance. In addition, the paper analyses each scenario to determine the trend of solutions in these aspects and evaluates their effectiveness in resolving the aforementioned issues.
Facilitating HIV testing, care and treatment for orphans and vulnerable children aged five years and younger through community-based early childhood development playcentres in rural Zimbabwe
INTRODUCTION Early diagnosis of children living with HIV is a prerequisite for accessing timely paediatric HIV care and treatment services and for optimizing treatment outcomes. Testing of HIV-exposed infants at 6 weeks and later is part of the national prevention of mother to child transmission (PMTCT) of HIV programme in Zimbabwe, but many opportunities to test infants and children are being missed. Early childhood development (ECD) playcentres can act as an entry point providing multiple health and social services for orphans and vulnerable children (OVC) under 5 years, including facilitating access to HIV treatment and care. METHODS Sixteen rural community-based, community-run ECD playcentres were established to provide health, nutritional and psychosocial support for OVC aged 5 years and younger exposed to or living with HIV, coupled with family support groups (FSGs) for their families/caregivers. These centres were located in close proximity to health centres giving access to nurse-led monitoring of 697 OVC and their caregivers. Community mobilisers identified OVC within the community, supported their registration process and followed up defaulters. Records profiling each child's attendance, development and health status (including illness episodes), vaccinations and HIV status were compiled at the playcentres and regularly reviewed, updated and acted upon by nurse supervisors. Through FSGs, community cadres and a range of officers from local services established linkages and built the capacity of parents/caregivers and communities to provide protection, aid psychosocial development and facilitate referral for treatment and support. RESULTS Available data as of September 2011 for 16 rural centres indicate that 58.8% (n=410) of the 697 children attending the centres were tested for HIV; 18% (n=74) tested positive and were initiated on antibiotic prophylaxis. 
All those deemed eligible for antiretroviral therapy were commenced on treatment and adherence was monitored. CONCLUSIONS This community-based playcentre model strengthens comprehensive care (improving emotional, cognitive and physical development) for OVC younger than 5 years and provides opportunities for caregivers to access testing, care and treatment for children exposed to, affected by and infected with HIV in a secure and supportive environment. More research is required to evaluate barriers to counselling and testing of young children and the long-term impact of playcentres upon specific health and developmental outcomes.
A KINEMATIC ANALYSIS AND SIMULATION BASED ON ADAMS FOR EGGPLANT PICKING ROBOT
The eggplant picking robot is a complex opto-mechatronic system operating in the greenhouse environment, and its structural and control requirements are more exacting than those of a traditional industrial robot. An optimization design method was used to determine the body structure parameters of the robot in accordance with eggplant growth and spatial distribution. In order to determine the spatial position relationship between the robot's components and the end effector, a theoretical model of the robot was established using the Denavit-Hartenberg (D-H) approach, and the forward solution of the kinematic equation was obtained. The inverse kinematic solution was derived, with the help of Matlab software, by premultiplying the kinematic equation by the inverse link transformation matrices A_i^{-1} to decouple the matrix T_4. Pro/E software was used to establish a 3-D simulation model, which was imported into ADAMS (Automatic Dynamic Analysis of Mechanical Systems) for kinematic simulation analysis. The simulation results indicated that the kinematic model established by the D-H approach reflects the real motion of the robot, and that both the forward and inverse kinematic solutions are correct. The structure of the four-degree-of-freedom eggplant picking robot was reasonable and could meet the requirements of eggplant picking under the greenhouse cultivation pattern.
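As a sketch of the kind of forward-kinematics computation the D-H approach involves, the following builds the link transforms A_i and chains them into the end-effector pose T_4 (the parameter values below are illustrative placeholders, not the actual dimensions of the eggplant picking robot):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links for standard
    Denavit-Hartenberg parameters: joint angle theta, link offset d,
    link length a, link twist alpha."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the link transforms A_1..A_n into the end-effector pose T_n."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Illustrative 4-DOF parameter table (placeholder values).
params = [(0.3, 0.1, 0.25, 0.0), (0.5, 0.0, 0.2, np.pi / 2),
          (-0.2, 0.0, 0.15, 0.0), (0.1, 0.05, 0.1, 0.0)]
T04 = forward_kinematics(params)
```

The inverse-kinematics step in the abstract then works backwards from a given T_4, premultiplying by the A_i^{-1} matrices to isolate individual joint variables.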
A comparison of salivary pH in sympatric wild lemurs (Lemur catta and Propithecus verreauxi) at Beza Mahafaly Special Reserve, Madagascar.
Chemical deterioration of teeth is common among modern humans, and has been suggested for some extinct primates. Dental erosion caused by acidic foods may also obscure microwear signals of mechanical food properties. Ring-tailed lemurs at the Beza Mahafaly Special Reserve (BMSR), Madagascar, display frequent severe tooth wear and subsequent tooth loss. In contrast, sympatric Verreaux's sifaka display far less tooth wear and infrequent tooth loss, despite both species regularly consuming acidic tamarind fruit. We investigated the potential impact of dietary acidity on tooth wear, collecting data on salivary pH from both species, as well as salivary pH from ring-tailed lemurs at Tsimanampesotse National Park, Madagascar. We also collected salivary pH data from ring-tailed lemurs at the Indianapolis Zoo, none of which had eaten for at least 12 hr before data collection. Mean salivary pH for the BMSR ring-tailed lemurs (8.098, n=41, SD=0.550) was significantly more alkaline than Verreaux's sifaka (7.481, n=26, SD=0.458). The mean salivary pH of BMSR (8.098) and Tsimanampesotse (8.080, n=25, SD=0.746) ring-tailed lemurs did not differ significantly. Salivary pH for the Indianapolis Zoo sample (8.125, n=16, SD=0.289) did not differ significantly from either the BMSR or Tsimanampesotse ring-tailed lemurs, but was significantly more alkaline than the BMSR Verreaux's sifaka sample. Regardless of the time between feeding and collection of pH data (from several minutes to nearly 1 hr), salivary pH for each wild lemur was above the "critical" pH of 5.5, below which enamel demineralization occurs. Thus, the high pH of lemur saliva suggests a strong buffering capacity, indicating the impact of acidic foods on dental wear is short-lived, likely having a limited effect. However, tannins in tamarind fruit may increase friction between teeth, thereby increasing attrition and wear in lemurs. 
These data also suggest that salivary pH varies between lemur species, corresponding to broad dietary categories.
Searching for Learner-Centered, Constructivist, and Sociocultural Components of Collaborative Educational Learning Tools
strategies and tools must be based on some theory of learning and cognition. Of course, crafting well-articulated views that clearly answer the major epistemological questions of human learning has exercised psychologists and educators for centuries. What is a mind? What does it mean to know something? How is our knowledge represented and manifested? Many educators prefer an eclectic approach, selecting “principles and techniques from the many theoretical perspectives in much the same way we might select international dishes from a smorgasbord, choosing those we like best and ending up with a meal which represents no nationality exclusively and a design technology based on no single theoretical base” (Bednar et al., 1995, p. 100). It is certainly the case that research within collaborative educational learning tools has drawn upon behavioral, cognitive information processing, humanistic, and sociocultural theory, among others, for inspiration and justification. Problems arise, however, when tools developed in the service of one epistemology, say cognitive information processing, are integrated within instructional systems designed to promote learning goals inconsistent with it. When concepts, strategies, and tools are abstracted from the theoretical viewpoint that spawned them, they are too often stripped of meaning and utility. In this chapter, we embed our discussion in learner-centered, constructivist, and sociocultural perspectives on collaborative technology, with a bias toward the third. The principles of these perspectives, in fact, provide the theoretical rationale for much of the research and ideas presented in this book.
Improved segmentation of abnormal cervical nuclei using a graph-search based approach
Reliable segmentation of abnormal nuclei in cervical cytology is of paramount importance in automation-assisted screening techniques. This paper presents a general method for improving the segmentation of abnormal nuclei using a graph-search based approach. More specifically, the proposed method focuses on the improvement of coarse (initial) segmentation. The improvement relies on a transform that maps round-like border in the Cartesian coordinate system into lines in the polar coordinate system. The costs consisting of nucleus-specific edge and region information are assigned to the nodes. The globally optimal path in the constructed graph is then identified by dynamic programming. We have tested the proposed method on abnormal nuclei from two cervical cell image datasets, Herlev and H&E stained liquid-based cytology (HELBC), and the comparative experiments with recent state-of-the-art approaches demonstrate the superior performance of the proposed method.
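The core step described above, finding a globally optimal round-like border after the polar transform, can be sketched as a dynamic program over per-node costs (a simplified generic version; the `max_step` smoothness constraint and the toy cost map are illustrative assumptions, not the paper's exact cost design):

```python
import numpy as np

def optimal_polar_border(cost, max_step=1):
    """Dynamic-programming search for a minimum-cost border in polar space.
    cost[i, j] is the node cost at angle i, radius j; the returned path
    picks one radius per angle, with consecutive radii differing by at
    most max_step (enforcing border smoothness)."""
    n_ang, n_rad = cost.shape
    acc = cost.astype(float).copy()           # accumulated path costs
    back = np.zeros((n_ang, n_rad), dtype=int)  # backpointers
    for i in range(1, n_ang):
        for j in range(n_rad):
            lo, hi = max(0, j - max_step), min(n_rad, j + max_step + 1)
            k = lo + int(np.argmin(acc[i - 1, lo:hi]))
            acc[i, j] = cost[i, j] + acc[i - 1, k]
            back[i, j] = k
    # Trace the globally optimal path back from the cheapest end node.
    path = [int(np.argmin(acc[-1]))]
    for i in range(n_ang - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

# Toy example: a zero-cost ring at radius 2 in a 3x4 polar cost map.
demo = np.ones((3, 4))
demo[:, 2] = 0.0
border = optimal_polar_border(demo)  # picks radius 2 at every angle
```

Mapping the round-like nuclear border to polar coordinates is what makes this one-radius-per-angle formulation, and hence a globally optimal solution by dynamic programming, possible.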
Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
We propose to use a biologically motivated learning rule based on neural intrinsic plasticity to optimize reservoirs of analog neurons. This rule is based on an information maximization principle; it is local in time and space and thus computationally efficient. We show experimentally that it can drive the neurons' output activities to approximate exponential distributions, thereby implementing sparse codes in the reservoir. Because of its incremental nature, intrinsic plasticity learning is well suited for joint application with online backpropagation-decorrelation or least mean squares reservoir learning, whose performance can be strongly improved. We further show that classical echo state regression can also benefit from reservoirs that are pre-trained on the given input signal with the intrinsic plasticity rule.
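A sketch of the kind of per-neuron update such a rule performs (assuming a Triesch-style intrinsic plasticity rule for sigmoid neurons with an exponential target distribution of mean `mu`; the gain `a` and bias `b` are per-neuron parameters, and all constants below are illustrative):

```python
import numpy as np

def ip_step(a, b, x, mu=0.2, eta=0.001):
    """One intrinsic-plasticity update for a sigmoid neuron y = f(a*x + b),
    driving the output distribution toward an exponential with mean mu.
    The rule is local: it uses only the neuron's own input x and output y."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)
    da = eta / a + db * x
    return a + da, b + db, y

# Drive a single neuron with Gaussian input and adapt online.
rng = np.random.default_rng(0)
a, b = 1.0, 0.0
outputs = []
for _ in range(20000):
    a, b, y = ip_step(a, b, rng.standard_normal())
    outputs.append(y)
mean_out = np.mean(outputs[-5000:])  # drifts toward the target mean mu
```

Because each step touches only one neuron's own gain and bias, the rule can run online alongside the reservoir readout training, which is what makes the joint application described above practical.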
Twitter as a Lifeline: Human-annotated Twitter Corpora for NLP of Crisis-related Messages
Microblogging platforms such as Twitter provide active communication channels during mass convergence and emergency events such as earthquakes and typhoons. During the sudden onset of a crisis, affected people post useful information on Twitter that can support situational awareness and other humanitarian disaster-response efforts, if processed in a timely and effective manner. Processing social media information poses multiple challenges, such as parsing noisy, brief and informal messages, learning information categories from the incoming stream of messages, and classifying messages into different classes, among others. A basic necessity for many of these tasks is the availability of data, in particular human-annotated data. In this paper, we present human-annotated Twitter corpora collected during 19 different crises that took place between 2013 and 2015. To demonstrate the utility of the annotations, we train machine learning classifiers. Moreover, we publish the first large-scale word2vec word embeddings trained on 52 million crisis-related tweets. To deal with the language issues of tweets, we also present human-annotated normalized lexical resources for different lexical variations.
PhD training in simulation: NATCOR
To provide a broader education for Operational Research PhD students in the UK, the Engineering and Physical Sciences Research Council funds the National Taught Course Centre for Operational Research (NATCOR). This is an initiative led by six UK universities and includes a one-week, residential simulation module taught for the first time in July 2009. We describe the background to NATCOR, summarize its content and reflect on its further development.
Analysis of Temperature Distribution in Cast-resin Dry-type Transformers
The non-flammable characteristics of dry-type cast-resin transformers make them suitable for residential and hospital applications. However, because of the thermal properties of the resin, the thermal behavior of these transformers is unfavorable, so it is important to analyze it. The temperature distribution of cast-resin transformers is mathematically modeled in this paper. The model was solved successfully by the finite difference method. In order to validate the model, the simulation results were compared with experimental data measured on an 800 kVA transformer. Finally, the influence of several construction parameters and environmental conditions on the temperature distribution of cast-resin transformers is discussed.
Yoga as an adjunctive treatment for posttraumatic stress disorder: a randomized controlled trial.
BACKGROUND More than a third of the approximately 10 million women with histories of interpersonal violence in the United States develop posttraumatic stress disorder (PTSD). Currently available treatments for this population have a high rate of incomplete response, in part because problems in affect and impulse regulation are major obstacles to resolving PTSD. This study explored the efficacy of yoga to increase affect tolerance and to decrease PTSD symptomatology. METHOD Sixty-four women with chronic, treatment-resistant PTSD were randomly assigned to either trauma-informed yoga or supportive women's health education, each as a weekly 1-hour class for 10 weeks. Assessments were conducted at pretreatment, midtreatment, and posttreatment and included measures of DSM-IV PTSD, affect regulation, and depression. The study ran from 2008 through 2011. RESULTS The primary outcome measure was the Clinician-Administered PTSD Scale (CAPS). At the end of the study, 16 of 31 participants (52%) in the yoga group no longer met criteria for PTSD compared to 6 of 29 (21%) in the control group (n = 60, χ²₁ = 6.17, P = .013). Both groups exhibited significant decreases on the CAPS, with the decrease falling in the large effect size range for the yoga group (d = 1.07) and the medium to large effect size decrease for the control group (d = 0.66). Both the yoga (b = -9.21, t = -2.34, P = .02, d = -0.37) and control (b = -22.12, t = -3.39, P = .001, d = -0.54) groups exhibited significant decreases from pretreatment to the midtreatment assessment. However, a significant group × quadratic trend interaction (d = -0.34) showed that the pattern of change in Davidson Trauma Scale significantly differed across groups. The yoga group exhibited a significant medium effect size linear (d = -0.52) trend. In contrast, the control group exhibited only a significant medium effect size quadratic trend (d = 0.46) but did not exhibit a significant linear trend (d = -0.29). 
Thus, both groups exhibited significant decreases in PTSD symptoms during the first half of treatment, but these improvements were maintained in the yoga group, while the control group relapsed after its initial improvement. DISCUSSION Yoga significantly reduced PTSD symptomatology, with effect sizes comparable to well-researched psychotherapeutic and psychopharmacologic approaches. Yoga may improve the functioning of traumatized individuals by helping them to tolerate physical and sensory experiences associated with fear and helplessness and to increase emotional awareness and affect tolerance. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT00839813.
Cryptanalysis and improvement of an elliptic curve Diffie-Hellman key agreement protocol
In SAC'05, Strangio proposed protocol ECKE- 1 as an efficient elliptic curve Diffie-Hellman two-party key agreement protocol using public key authentication. In this letter, we show that protocol ECKE-1 is vulnerable to key-compromise impersonation attacks. We also present an improved protocol - ECKE-1N, which can withstand such attacks. The new protocol's performance is comparable to the well-known MQV protocol and maintains the same remarkable list of security properties.
Generative Adversarial Network for Abstractive Text Summarization
In this paper, we propose an adversarial process for abstractive text summarization, in which we simultaneously train a generative model G and a discriminative model D. In particular, we build the generator G as a reinforcement learning agent, which takes the raw text as input and generates the abstractive summary. We also build a discriminator that attempts to distinguish the generated summary from the ground truth summary. Extensive experiments demonstrate that our model achieves ROUGE scores competitive with state-of-the-art methods on the CNN/Daily Mail dataset. Qualitatively, we show that our model is able to generate more abstractive, readable and diverse summaries.
A multi-level framework to identify HTTPS services
The development of TLS-based encrypted traffic comes with new challenges for the management and security analysis of encrypted traffic. There is an essential need for new methods to investigate, with a proper level of identification, the increasing volume of HTTPS traffic that may hold security breaches. In fact, although many approaches detect the type of application (Web, P2P, SSH, etc.) running in secure tunnels, and others identify a few specific encrypted web pages through website fingerprinting, this paper proposes a robust technique to precisely identify the services run within HTTPS connections, i.e. to name the services, without relying on specific header fields that can be easily altered. We have defined dedicated features for HTTPS traffic that are used as input to a multi-level identification framework based on machine learning algorithms. Our evaluation on real traffic shows that we can identify encrypted web services with high accuracy.
Inference rules for RDF(S) and OWL in N3Logic
This paper presents inference rules for the Resource Description Framework (RDF), RDF Schema (RDFS) and the Web Ontology Language (OWL). Our formalization is based on Notation 3 Logic, which extends RDF with logical symbols and thereby provides a Semantic Web logic for deductive RDF graph stores. We also propose OWL-P, a lightweight formalism of OWL that supports soft inferences by omitting complex language constructs.
Use of dynamic 3-dimensional transvaginal and transrectal ultrasonography to assess posterior pelvic floor dysfunction related to obstructed defecation.
BACKGROUND New ultrasound techniques may complement current diagnostic tools, and combined techniques may help to overcome the limitations of individual techniques for the diagnosis of anorectal dysfunction. A high degree of agreement has been demonstrated between echodefecography (dynamic 3-dimensional anorectal ultrasonography) and conventional defecography. OBJECTIVE Our aim was to evaluate the ability of a combined approach consisting of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a 3-dimensional biplane endoprobe to assess posterior pelvic floor dysfunctions related to obstructed defecation syndrome in comparison with echodefecography. DESIGN AND SETTING This was a prospective, observational cohort study conducted at a tertiary-care hospital. PATIENTS Consecutive female patients with symptoms of obstructed defecation were eligible. INTERVENTION Each patient underwent assessment of posterior pelvic floor dysfunctions with a combination of dynamic 3-dimensional transvaginal and transrectal ultrasonography by using a biplane transducer and with echodefecography. MAIN OUTCOME MEASURES Kappa (κ) was calculated as an index of agreement between the techniques. Diagnostic accuracy (sensitivity, specificity, and positive and negative predictive values) of the combined technique in detection of posterior dysfunctions was assessed with echodefecography as the standard for comparison. RESULTS A total of 33 women were evaluated. Substantial agreement was observed regarding normal relaxation and anismus. In detecting the absence or presence of rectocele, the 2 methods agreed in all cases. Near-perfect agreement was found for rectocele grade I, grade II, and grade III. Perfect agreement was found for entero/sigmoidocele, with near-perfect agreement for rectal intussusception. 
Using echodefecography as the standard for comparison, we found high diagnostic accuracy of transvaginal and transrectal ultrasonography in the detection of posterior dysfunctions. LIMITATIONS This combined technique should be compared with other dynamic techniques and validated with conventional defecography. CONCLUSIONS Dynamic 3-dimensional transvaginal and transrectal ultrasonography is a simple and fast ultrasound technique that shows strong agreement with echodefecography and may be used as an alternative method to assess patients with obstructed defecation syndrome.
A Distributed Publisher-Driven Secure Data Sharing Scheme for Information-Centric IoT
In Information-Centric Internet of Things (ICIoT), Internet of Things (IoT) data can be cached throughout a network for close data copy retrievals. Such a distributed data caching environment, however, poses a challenge to flexible authorization in the network. To address this challenge, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been identified as a promising approach. However, in the existing CP-ABE scheme, publishers need to retrieve attributes from a centralized server for encrypting data, which leads to high communication overhead. To solve this problem, we incorporate CP-ABE and propose a novel Distributed Publisher-Driven secure data sharing for ICIoT (DPD-ICIoT) to enable only authorized users to retrieve IoT data from distributed cache. In DPD-ICIoT, newly introduced attribute manifest is cached in the network, through which publishers can retrieve the attributes from nearby copy holders instead of a centralized attribute server. In addition, a key chain mechanism is utilized for efficient cryptographic operations, and an automatic attribute self-update mechanism is proposed to enable fast updates of attributes without querying centralized servers. According to the performance evaluation, DPD-ICIoT achieves lower bandwidth cost compared to the existing CP-ABE scheme.
Optical properties of human skin, subcutaneous and mucous tissues in the wavelength range from 400 to 2000 nm
The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.
Wiener-Hopf Solution for Impenetrable Wedges at Skew Incidence
A new Wiener-Hopf approach for the solution of impenetrable wedges at skew incidence is presented. Mathematical aspects are described in a unified and consistent theory for angular region problems. Solutions are obtained using analytical and numerical-analytical approaches. Several numerical tests from the scientific literature validate the new technique, and new solutions for anisotropic surface impedance wedges are solved at skew incidence. The solutions are presented considering the geometrical and uniform theory of diffraction coefficients, total fields, and possible surface wave contributions
Breast cancer recurrence following radioguided seed localization and standard wire localization of nonpalpable invasive and in situ breast cancers: 5-Year follow-up from a randomized controlled trial.
BACKGROUND This study compared 5-year breast cancer (BC) recurrence rates in patients randomized to radioguided seed localization (RSL) or wire localization (WL) for non-palpable BC undergoing breast conserving surgery. METHODS Chart review of follow-up visits and surveillance imaging was conducted. Data collected included patient and tumour factors, adjuvant therapies and BC recurrence (local recurrence (LR), regional recurrence (RR), and distant metastasis (DM)). Univariate analysis was used. RESULTS Follow-up data were available for 298 patients (98%) and median follow-up time was 65 months. There were 11 (4%) cases of BC recurrence and median time to recurrence was 26 months. LR occurred in 8 patients (6 WL and 2 RSL; p = 0.28). Positive margins at first surgery (p = 0.024) and final surgery (p = 0.004) predicted for BC recurrence. CONCLUSIONS There was no detectable difference in BC recurrence between WL and RSL groups and positive margins at initial or final surgery both predicted for BC recurrence.
Impact of Frequency Ramp Nonlinearity, Phase Noise, and SNR on FMCW Radar Accuracy
One of the main disturbances in a frequency-modulated continuous wave (FMCW) radar system for range measurement is nonlinearity in the frequency ramp. The intermediate frequency (IF) signal, and consequently the target range accuracy, depend on the type of nonlinearity present in the frequency ramp. Moreover, the type of frequency ramp nonlinearity cannot be directly specified, which makes the problem even more challenging. In this paper, the frequency ramp nonlinearity is investigated with high accuracy using a modified short-time Fourier transform method based on the short-time chirp-Z transform. The random and periodic nonlinearities are characterized and their sources are identified as phase noise and spurious signals. These types of frequency deviations are intentionally increased, and their influence on the linearity and the IF signal is investigated. The dependence of target range estimation accuracy on the frequency ramp nonlinearity, phase noise, spurious signals, and signal-to-noise ratio in the IF signal is described analytically and verified on the basis of measurements.
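The ideal-ramp relationship that this accuracy analysis perturbs can be sketched as follows (a textbook FMCW range model under the assumption of a perfectly linear ramp, not the authors' measurement setup; B is the ramp bandwidth, T the ramp duration):

```python
C = 299_792_458.0  # speed of light, m/s

def beat_frequency(range_m, bandwidth_hz, ramp_s):
    """IF (beat) frequency for an ideal linear ramp: f_b = 2*B*R / (c*T).
    The round-trip delay 2R/c, multiplied by the sweep slope B/T,
    gives the constant frequency offset between TX and RX signals."""
    return 2.0 * bandwidth_hz * range_m / (C * ramp_s)

def range_from_beat(f_beat_hz, bandwidth_hz, ramp_s):
    """Invert the relation: R = f_b * c * T / (2*B)."""
    return f_beat_hz * C * ramp_s / (2.0 * bandwidth_hz)

# Example: 1 GHz ramp over 1 ms, target at 30 m -> f_b of roughly 200 kHz.
f_b = beat_frequency(30.0, 1e9, 1e-3)
r = range_from_beat(f_b, 1e9, 1e-3)
```

Ramp nonlinearity breaks the assumption that the sweep slope B/T is constant, which smears the IF tone and degrades the range estimate, the effect the paper quantifies.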
Posterolateral versus posterior interbody fusion in isthmic spondylolisthesis.
Spondylolisthesis is a heterogeneous disorder characterized by subluxation of a vertebral body over another in the sagittal plane. Its most common form is isthmic spondylolisthesis (IS). This study aims to compare clinical outcomes of posterolateral fusion (PLF) with posterior lumbar interbody fusion (PLIF) with posterior instrumentation in the treatment of IS. We performed a randomized prospective study in which 80 patients out of a total of 85 patients with IS were randomly allocated to one of two groups: PLF with posterior instrumentation (group I) or PLIF with posterior instrumentation (group II). Posterior decompression was performed in the patients. The Oswestry low back pain disability (OLBP) scale and Visual Analogue Scale (VAS) were used to evaluate the quality of life (QoL) and pain, respectively. Fisher's exact test was used to evaluate fusion rate and the Mann-Whitney U test was used to compare categorical data. Fusion in group II was significantly better than in group I (p=0.012). Improvement in low back pain was statistically more significant in group I (p=0.001). The incidence of neurogenic claudication was significantly lower in group I than in group II (p=0.004). In group I, there was no significant correlation between slip Meyerding grade and disc space height, radicular pain, and low back pain. There was no significant difference in post-operative complications at 1-year follow-up. Our data showed that PLF with posterior instrumentation provides better clinical outcomes and more improvement in low back pain compared to PLIF with posterior instrumentation despite the low fusion rate.
Ant colonies for the travelling salesman problem.
We describe an artificial ant colony capable of solving the travelling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.
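The tour-construction and pheromone-update loop described above can be sketched as a minimal Ant System (parameter values are illustrative defaults; this is a generic sketch of the metaphor, not the authors' exact algorithm):

```python
import numpy as np

def ant_colony_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, rng=None):
    """Minimal Ant System for the TSP: each ant builds a tour edge by edge,
    choosing the next city with probability proportional to
    pheromone^alpha * (1/distance)^beta; pheromone then evaporates
    (factor rho) and each ant deposits q / tour_length on its tour."""
    rng = np.random.default_rng(rng)
    n = len(dist)
    tau = np.ones((n, n))            # pheromone trails on edges
    eta = 1.0 / (dist + np.eye(n))   # heuristic visibility (eye avoids /0)
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False   # never revisit a city
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(int(rng.choice(n, p=w / w.sum())))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= 1.0 - rho                    # evaporation
        for tour, length in tours:          # deposit: shorter tour, more trail
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += q / length
                tau[b, a] += q / length
    return best_tour, best_len
```

Evaporation prevents early tours from dominating forever, while the 1/length deposit is the positive feedback that lets successively shorter tours accumulate pheromone.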
Rethinking BPM in a Cognitive World: Transforming How We Learn and Perform Business Processes
If we are to believe the technology hype cycle, we are entering a new era of Cognitive Computing, enabled by advances in natural language processing, machine learning, and more broadly artificial intelligence. These advances, combined with evolutionary progress in areas such as knowledge representation, automated planning, user experience technologies, software-as-a-service and crowdsourcing, have the potential to transform many industries. In this paper, we discuss the transformations of BPM that advances in Cognitive Computing will bring. We focus on three of the most significant aspects of this transformation, namely: (a) Cognitive Computing will enable “knowledge acquisition at scale”, which will lead to a transformation in Knowledge-intensive Processes (KiPs); (b) we envision that a new process meta-model will emerge, centered around a “Plan-Act-Learn” cycle; and (c) Cognitive Computing can enable learning about processes from implicit descriptions (at both design- and run-time), opening opportunities for new levels of automation and business process support, for both traditional business processes and KiPs. We use the term cognitive BPM to refer to a new BPM paradigm encompassing all aspects of BPM that are impacted and enabled by Cognitive Computing. We argue that a fundamental understanding of cognitive BPM requires a new research framing of the business process ecosystem. The paper presents a conceptual framework for cognitive BPM, a brief survey of the state of the art in emerging areas of cognitive BPM, and a discussion of key directions for further research.
Best practices in business process redesign: use and impact
Purpose – This paper seeks to provide business process redesign (BPR) practitioners and academics with insight into the most popular heuristics to derive improved process designs. Design/methodology/approach – An online survey was carried out in the years 2003-2004 among a wide range of experienced BPR practitioners in the UK and The Netherlands. Findings – The survey indicates that this “top ten” of best practices is indeed extensively used in practice. Moreover, indications for their business impact have been collected and classified. Research limitations/implications – The authors’ estimations of best practices effectiveness differed from feedback obtained from respondents, possibly caused by the design of the survey instrument. This is food for further research. Practical implications – The presented framework can be used by practitioners to keep the various aspects of a redesign in perspective. The presented list of BPR best practices is directly applicable to derive new process designs. Originality/value – This paper addresses the subject of process redesign, rather than the more popular subject of process reengineering. As such, it fills in part an existing gap in knowledge.
Influenza vaccine effectiveness in the community and the household.
BACKGROUND There is a recognized need to determine influenza vaccine effectiveness on an annual basis and a long history of studying respiratory illnesses in households. METHODS We recruited 328 households with 1441 members, including 839 children, and followed them during the 2010-2011 influenza season. Specimens were collected from subjects with reported acute respiratory illnesses and tested by real-time reverse transcriptase polymerase chain reaction. Receipt of influenza vaccine was defined based on documented evidence of vaccination in medical records or an immunization registry. The effectiveness of 2010-2011 influenza vaccination in preventing laboratory-confirmed influenza was estimated using Cox proportional hazards models adjusted for age and presence of high-risk condition, and stratified by prior season (2009-2010) vaccination status. RESULTS Influenza was identified in 78 (24%) households and 125 (9%) individuals; the infection risk was 8.5% in the vaccinated and 8.9% in the unvaccinated (P = .83). Adjusted vaccine effectiveness in preventing community-acquired influenza was 31% (95% confidence interval [CI], -7% to 55%). In vaccinated subjects with no evidence of prior season vaccination, significant protection (62% [95% CI, 17%-82%]) against community-acquired influenza was demonstrated. Substantially lower effectiveness was noted among subjects who were vaccinated in both the current and prior season. There was no evidence that vaccination prevented household transmission once influenza was introduced; adults were at particular risk despite vaccination. CONCLUSIONS Vaccine effectiveness estimates were lower than those demonstrated in other observational studies carried out during the same season. The unexpected findings of lower effectiveness with repeated vaccination and no protection given household exposure require further study.
The impact of multiple low-level BCR-ABL1 mutations on response to ponatinib.
The third-generation tyrosine kinase inhibitor (TKI) ponatinib shows activity against all common BCR-ABL1 single mutants, including the highly resistant BCR-ABL1-T315I mutant, improving outcome for patients with refractory chronic myeloid leukemia (CML). However, responses are variable, and causal baseline factors have not been well-studied. The type and number of low-level BCR-ABL1 mutations present after imatinib resistance has prognostic significance for subsequent treatment with nilotinib or dasatinib as second-line therapy. We therefore investigated the impact of low-level mutations detected by sensitive mass spectrometry before ponatinib initiation (baseline) on treatment response in 363 TKI-resistant patients enrolled in the PONATINIB for Chronic Myeloid Leukemia Evaluation and Ph(+) Acute Lymphoblastic Leukemia trial, including 231 patients in chronic phase (CP-CML). Low-level mutations were detected in 53 patients (15%, including low-level T315I in 14 patients); most, however, did not undergo clonal expansion during ponatinib treatment and, moreover, no specific individual mutations were associated with inferior outcome. We demonstrate, however, that the number of mutations detectable by mass spectrometry after TKI resistance is associated with response to ponatinib treatment and could be used to refine the therapeutic approach. Although CP-CML patients with T315I (63/231, 27%) had superior responses overall, those with multiple mutations detectable by mass spectrometry (20, 32%) had substantially inferior responses compared with those with T315I as the sole mutation detected (43, 68%). In contrast, for CP-CML patients without T315I, the inferior responses previously observed with nilotinib/dasatinib therapy for imatinib-resistant patients with multiple mutations were not seen with ponatinib treatment, suggesting that ponatinib may prove to be particularly advantageous for patients with multiple mutations detectable by mass spectrometry after TKI resistance.
Australian Aboriginal words in English: their origin and meaning
Words like boomerang and woomera, kangaroo and koala, mallee and mulga are quintessentially Australian. Australian Aboriginal Words in English is the definitive account of the history of these and other words borrowed from Aboriginal languages into English. It is also a fascinating insight into the contacts between the first Australians and the European settlers. In 1788 there were some 250 languages in use in Australia. This book begins with a general account of the nature and history of Aboriginal languages, and with profiles of those languages that have been most significant as sources of borrowings. The words are then grouped according to subject - birds, fish, flora, religion, implements, dwellings, and so on. For each word there is an entry, like that in a dictionary, giving the language the word comes from, the original pronunciation and meaning in that language, the date of its first written use in English, and its present meaning. Illustrative quotations are given in many instances. This second edition of the book has been fully revised, adding more than 30 new words and bringing the book fully up to date with contemporary scholarship. Chapter 6 - on the way English words have taken on new meanings and forms to describe Australia's indigenous inhabitants - is completely new. Research Background: This book is based on decades of studies of Australian Aboriginal languages by R. M. W. Dixon, and on lexicographic work by Moore and Ramson within the OED Australia.
Research Contribution: This book is the definitive account of the history of numerous words borrowed from Aboriginal languages into English. It also provides insight into contacts between the first Australians and the European settlers. This new and thoroughly revised edition supplies scientific labels for each entry. It is highly innovative in that it provides a comprehensive analysis of words borrowed from Aboriginal languages into English and their meanings. It offers a new perspective on contact between the indigenous population and white Australian settlers as reflected in the language. Research Significance: This book represents a new milestone in the exploration of the influence of Australian Aboriginal languages on English, and in the investigation of language and culture interaction since the White Invasion of Australia. The major significance of this book lies in its encyclopaedic breadth and depth. It is a unique resource for linguists, anthropologists and anyone interested in Australian languages and culture. The book received laudatory reviews in journals and has been reissued several times, reflecting its appreciation by the public.
Blending liquids
We present a method for smoothly blending between existing liquid animations. We introduce a semi-automatic method for matching two existing liquid animations, which we use to create new fluid motion that plausibly interpolates the input. Our contributions include a new space-time non-rigid iterative closest point algorithm that incorporates user guidance, a subsampling technique for efficient registration of meshes with millions of vertices, and a fast surface extraction algorithm that produces 3D triangle meshes from a 4D space-time surface. Our technique can be used to instantly create hundreds of new simulations, or to interactively explore complex parameter spaces. Our method is guaranteed to produce output that does not deviate from the input animations, and it generalizes to multiple dimensions. Because our method runs at interactive rates after the initial precomputation step, it has potential applications in games and training simulations.
Canine and Human Dirofilariosis in the Rostov Region (Southern Russia)
Epidemiological data on canine and human dirofilariosis in the Rostov Region (Southern Russia) are presented. Prevalence of Dirofilaria spp. infections in 795 autochthonous dogs, assessed by the Knott test, was 20.25%. The highest prevalences were found in Novocherkassk (38.3%) and Rostov-on-Don (18.5%), while prevalences were lower elsewhere in the region. Prevalence of D. repens was 44.7%, prevalence of D. immitis was 30.3%, and coinfections were observed in 25.0% of the infected dogs. A case-finding study carried out over 9 years (2000-2009) revealed 131 cases of human dirofilariosis in the Rostov Region: 129 of subcutaneous dirofilariosis and 2 of pulmonary dirofilariosis. Seroprevalence among 317 healthy blood donors from the Rostov Region was 10.4%, while seroprevalence in policemen living in Rostov city and working as dog trainers was 19%. These data show high infection rates of Dirofilaria spp. in both the human and dog populations of Rostov, probably because of the existence of favorable conditions for transmission in this region.
Artificial intelligence applications in the intensive care unit.
OBJECTIVE To review the history and current applications of artificial intelligence in the intensive care unit. DATA SOURCES The MEDLINE database, bibliographies of selected articles, and current texts on the subject. STUDY SELECTION The studies that were selected for review used artificial intelligence tools for a variety of intensive care applications, including direct patient care and retrospective database analysis. DATA EXTRACTION All literature relevant to the topic was reviewed. DATA SYNTHESIS Although some of the earliest artificial intelligence (AI) applications were medically oriented, AI has not been widely accepted in medicine. Despite this, patient demographic, clinical, and billing data are increasingly available in an electronic format and therefore susceptible to analysis by intelligent software. Individual AI tools are specifically suited to different tasks, such as waveform analysis or device control. CONCLUSIONS The intensive care environment is particularly suited to the implementation of AI tools because of the wealth of available data and the inherent opportunities for increased efficiency in inpatient care. A variety of new AI tools have become available in recent years that can function as intelligent assistants to clinicians, constantly monitoring electronic data streams for important trends, or adjusting the settings of bedside devices. The integration of these tools into the intensive care unit can be expected to reduce costs and improve patient outcomes.
Shaping particle simulations with interaction forces
Difference Target Propagation
Back-propagation has been the workhorse of recent successes of deep learning but it relies on infinitesimal effects (partial derivatives) in order to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions, e.g., consider the extreme case of non-linearity where the relation between parameters and cost is actually discrete. Inspired by the biological implausibility of back-propagation, a few approaches have been proposed in the past that could play a similar credit assignment role as backprop. In this spirit, we explore a novel approach to credit assignment in deep networks that we call target propagation. The main idea is to compute targets rather than gradients at each layer. Like gradients, they are propagated backwards. In contrast to previously proposed proxies for back-propagation, which rely on a backwards network with symmetric weights, target propagation relies on auto-encoders at each layer. Unlike back-propagation, it can be applied even when units exchange stochastic bits rather than real numbers. We show that a linear correction for the imperfection of the auto-encoders is very effective in making target propagation actually work, along with adaptive learning rates.
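The target computation with the linear correction can be sketched in a few lines; this is a minimal illustration of the difference correction described in the abstract, with layer sizes, weights, and the top-level target step chosen arbitrarily for the example (they are not the paper's setup):

```python
import numpy as np

# Difference target propagation (sketch): instead of backpropagating
# gradients, propagate *targets* through an approximate inverse g of
# each layer f, with a linear correction for the imperfect autoencoder:
#   h_prev_target = h_prev + g(h_target) - g(h)
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1   # forward weights (layer f)
V = rng.standard_normal((4, 4)) * 0.1   # decoder weights (approx. inverse g)

f = lambda h: np.tanh(h @ W)
g = lambda h: np.tanh(h @ V)

h_prev = rng.standard_normal(4)
h = f(h_prev)
# Some target for this layer, e.g. a small step toward a higher-level target:
h_target = h - 0.1 * (h - rng.standard_normal(4))

# Linear correction: even if g is an imperfect inverse of f, the
# *difference* g(h_target) - g(h) cancels the reconstruction error
# to first order; when h_target == h, the propagated target is h_prev.
h_prev_target = h_prev + g(h_target) - g(h)
```

Note the self-consistency built into the difference form: a layer that has already reached its target propagates an unchanged target downwards.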
Randomised trial spacer v nebuliser for acute asthma.
Sixty hospitalised children with asthma aged 1-5 years were randomised to spacer or nebuliser. A clinical score was measured at baseline and every 12 hours. There were no differences between groups in the score over time, or secondary outcome measures. The spacer is an effective delivery method for young hospitalised asthmatic children.
Movement rehabilitation: are the principles of re-learning in the recovery of function the same as those of original learning?
This paper addresses the change in movement dynamics in rehabilitation by discussing issues that pertain to the question of whether the principles of re-learning in functional recovery are the same as those of original learning. The many varieties of disease and injury states lead to significant differences in the constraints to action, and these impairments in turn influence the pathway of change in re-learning and/or recovery of function. These altered constraints channel the effectiveness of many conditions and strategies of practice that influence learning and performance. Nevertheless, it is proposed that there is a small set of principles for the change in dynamics of motor learning, which drive the continuously evolving stability and instability of movement forms through the lifespan. However, this common set of dynamical principles is realized in individual pathways of change in the movement dynamics of learning, re-learning and recovery of function. The inherent individual differences of humans and environments ensure that the coordination, control and skill of movement rehabilitation are challenged in distinct ways by the changing constraints arising from the many manifestations of disease and injury. Implications for rehabilitation The many varieties of disease and injury states lead to significant differences in the constraints to action that in turn influence the pathway of change in re-learning and/or recovery of function, and the effectiveness of the many conditions/strategies of practice to influence learning and performance. There is a small set of principles for the change in dynamics of motor learning that drive the continuously evolving ebb and flow of stability and instability of movement forms through the lifespan.
The inherent individual differences of humans and environments ensure that the coordination, control and skill of movement rehabilitation are uniquely challenged by the changing constraints arising from the many manifestations of disease and injury.
A Feature Selection Method Based on Information Gain and Genetic Algorithm
With the rapid development of computer science and technology, quickly finding useful or needed information has become a major problem for users. Text categorization can help solve this problem, and feature selection has become one of the most critical techniques in automatic text categorization. This paper proposes a new text feature selection method based on Information Gain and Genetic Algorithm. The method selects features using information gain together with term frequency. For information filtering systems, the fitness function is further improved to fully consider characteristics such as feature weight, text-vector similarity, and dimensionality. Experiments show that the method can reduce the dimensionality of the text vector and improve the precision of text classification.
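As a rough illustration of the information-gain criterion the abstract builds on (the toy corpus and function names here are invented for the example, not taken from the paper), information gain scores a term by how much knowing its presence reduces the entropy of the class labels:

```python
import math
from collections import Counter

def information_gain(docs, labels, term):
    """IG(t) = H(C) - [P(t) * H(C|t present) + P(~t) * H(C|t absent)].

    docs: list of token sets; labels: parallel list of class labels.
    """
    def entropy(lbls):
        n = len(lbls)
        if n == 0:
            return 0.0
        return -sum((c / n) * math.log2(c / n) for c in Counter(lbls).values())

    with_t = [l for d, l in zip(docs, labels) if term in d]
    without_t = [l for d, l in zip(docs, labels) if term not in d]
    p = len(with_t) / len(docs)
    return entropy(labels) - (p * entropy(with_t) + (1 - p) * entropy(without_t))

# Toy corpus: "ball" separates the two classes perfectly, "the" does not.
docs = [{"the", "ball"}, {"the", "ball"}, {"the", "vote"}, {"the", "poll"}]
labels = ["sport", "sport", "politics", "politics"]
print(information_gain(docs, labels, "ball") > information_gain(docs, labels, "the"))  # → True
```

Ranking the vocabulary by this score and keeping the top-k terms is the standard way such a criterion shrinks the text vector before classification.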
Augmented Reality 2.0
Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Camera-equipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.
Affordances in Grounded Language Learning
We present a novel methodology involving mappings between different modes of semantic representations. We propose distributional semantic models as a mechanism for representing the kind of world knowledge inherent in the system of abstract symbols characteristic of a sophisticated community of language users. Then, motivated by insight from ecological psychology, we describe a model approximating affordances, by which we mean a language learner’s direct perception of opportunities for action in an environment. We present a preliminary experiment involving mapping between these two representational modalities, and propose that our methodology can become the basis for a cognitively inspired model of grounded language learning.
Criteria for measuring KM performance outcomes in organisations
Purpose – This research attempts to comprehensively examine the criteria for measuring knowledge management (KM) performance outcomes in organisations. To date, no studies have provided a set of widely accepted measurement criteria associated with KM efforts. This paper, therefore, aims to fill the gap. Design/methodology/approach – This study was carried out by systematically reviewing the literature on KM performance outcomes. Case studies were carried out in two organisations identified to have a KM programme in place. Findings – A review of the literature indicates that there are 38 outcomes from KM implementation which have garnered impressive theoretical and empirical support. Based on this, a comprehensive set of performance outcomes is proposed and grouped into five key dimensions. The findings from the case studies indicate that this proposition is relevant. Research limitations/implications – The use of case studies limits the generalisability of the findings, but it opens up new questions to be explored by further researching into the relationships between KM efforts and performance outcomes. Practical implications – Such significant findings will have important implications to organisations on how their KM efforts can be systematically measured for business success. To the academics, this paper provides insights into the relationship between KM efforts and organisational performance. Originality/value – This study is probably one of the first to comprehensively explain the criteria for measuring KM efforts in organisations. It is hoped that the findings of this study will encourage organisations to practise KM from the right perspective in order to reap the outcomes from KM initiatives.
Dialog Act Modeling for Conversational Speech
We describe an integrated approach for statistical modeling of discourse structure for natural conversational speech. Our model is based on 42 'dialog acts' which were hand-labeled in 1155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We developed several models and algorithms to automatically detect dialog acts from transcribed or automatically recognized words and from prosodic properties of the speech signal, and by using a statistical discourse grammar. All of these components were probabilistic in nature and estimated from data, employing a variety of techniques (hidden Markov models, N-gram language models, maximum entropy estimation, decision tree classifiers, and neural networks). In preliminary studies, we achieved a dialog act labeling accuracy of 65% based on recognized words and prosody, and an accuracy of 72% based on word transcripts. Since humans achieve 84% on this task (with chance performance at 35%) we find these results encouraging.
Investigating Success of Open Source Software Projects: A Social Network Perspective
To date, numerous open source projects are hosted on many online repositories, reflecting the popularity of open source projects. While some of these projects are active and thriving, some of them are either languishing or showing no development activities at all. This phenomenon thus begs the important question of what are the influential factors that affect the success of open source projects. In a quest to deepen our understanding of the evolution of open source projects, this research aims to analyze the survival of open source projects by using the theoretical lens of social network analysis. Based on extensive analyses of empirical data collected from online open source repositories, we study the impact of the communication patterns of the development teams on the outcomes of these projects, while accounting for project-specific characteristics. Using panel data analysis, we find significant impacts of communication patterns on project outcomes over the long term.
Acceleration and execution of relational queries using general purpose graphics processing unit (GPGPU)
LogiQL abstracts away much detail about the actual execution of a program from application developers: developers only specify logical relationships between data. Compared to emerging distributed programming languages for GPUs such as Map-Reduce [24], LogiQL expresses a richer set of relational and database operations that are cumbersome to implement in Map-Reduce. 2.1.2 Relational Algebra Primitives Database programming languages are mainly derived from primitive operations common to first-order logic. They are also declarative, in that they specify the expected result of the computation rather than a list of steps required to determine it. Due to their roots in first-order logic, many database programming languages such as SQL and Datalog can be mostly or completely represented using RA. RA itself consists of a small number of fundamental operations, including PROJECT, SELECT, PRODUCT, SET operations (UNION, INTERSECT, and DIFFERENCE), and JOIN. These fundamental operators are themselves complex applications that are composed of multi-level algorithms and complex data structures. Given kernel-granularity implementations of these operations it is possible to compile many high-level database applications into a CFG of RA kernels. RA consists of a set of fundamental transformations that are applied to sets of primitive elements. Primitive elements consist of n-ary tuples that map attributes to values. Each attribute consists of a finite set of possible values, and an n-ary tuple is a list of n values, one for each attribute. Another way to think about tuples is as coordinates in an n-dimensional space. An unordered set of tuples of this type specifies a region in this n-dimensional space and is termed a "relation". Each transformation in RA performs an operation on a relation, producing a new relation. Many operators divide the tuple attributes into key attributes and value attributes.
In these operations, the key attributes are considered by the operator and the value attributes are treated as payload data that are not considered by the operation. Table 1 lists the common RA operators and a few simple examples. In general, these operators perform simple tasks on a large amount of data. A typical complex relational
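The fundamental RA operators described above are straightforward to sketch over Python sets of n-ary tuples (a toy reference model only, useful for checking the semantics a GPU kernel must reproduce; the relation contents and key positions are made up for the example):

```python
# Relations as unordered sets of n-ary tuples, per the definition above.

def select(rel, pred):       # SELECT: keep tuples satisfying a predicate
    return {t for t in rel if pred(t)}

def project(rel, idxs):      # PROJECT: keep only the listed attributes
    return {tuple(t[i] for i in idxs) for t in rel}

def product(r, s):           # PRODUCT: concatenate every pair of tuples
    return {a + b for a in r for b in s}

def join(r, s, kr, ks):      # JOIN: pair tuples whose key attributes match,
    return {a + tuple(v for j, v in enumerate(b) if j != ks)   # key kept once
            for a in r for b in s if a[kr] == b[ks]}

# Example relations: emp(id, dept) and dept(dept, name)
emp = {(1, "eng"), (2, "hr")}
dept = {("eng", "Engineering"), ("hr", "People")}
print(join(emp, dept, 1, 0))
```

UNION, INTERSECT, and DIFFERENCE fall out of Python's set operators directly; the interesting engineering, as the text notes, is in the multi-level algorithms needed to make JOIN and friends fast on a GPU, not in their set-theoretic definitions.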
A phase-I trial of the epidermal growth factor receptor directed bispecific antibody MDX-447 without and with recombinant human granulocyte-colony stimulating factor in patients with advanced solid tumors
MDX-447 is a bispecific antibody directed against the epidermal growth factor receptor (EGFR) and the high affinity Fc receptor (FcγRI). Preclinical data suggest that co-administration of granulocyte-colony stimulating factor (G-CSF) may enhance the tumor cytotoxicity of bispecific antibodies. In group 1, patients received MDX-447 intravenously (IV) weekly. Dose levels of MDX-447 evaluated in group 1 were 1, 3.5, 7, 10, 15, 20, 30, and 40 mg/m2. In group 2, patients received MDX-447 IV weekly with G-CSF (3 mcg/kg/day) subcutaneously (days −3 to +2, 5–9, 12–16, etc.). Dose levels of MDX-447 evaluated in group 2 were 1, 3.5, 7, 10, and 15 mg/m2. Sixty-four patients with advanced solid tumors were treated. Forty-one patients received MDX-447 alone (group 1); 23 patients received MDX-447 + G-CSF (group 2). Hypotension was the predominant dose-limiting toxicity (DLT) in both treatment groups, with seven patients experiencing ≥grade 3 events. MDX-447 half-life (T1/2) ranged from 1.9 to 8.4 h, with no obvious differences between the two treatment groups. MDX-447 binding to neutrophils and peak levels of circulating tumor necrosis factor α (TNFα) and interleukin-6 (IL-6) were higher in group 2. The MTD for MDX-447 alone was 30 mg/m2. When G-CSF was given with MDX-447, treatment was not well tolerated and group 2 was closed early because of safety concerns, with the last patient being treated at the 7 mg/m2 dose level. There were no objective complete or partial responses in either group. MDX-447 alone was generally well tolerated, but did not achieve objective tumor responses. The MTD for MDX-447 alone was 30 mg/m2 weekly. Co-administration of G-CSF with MDX-447 precluded meaningful dose escalation.
Markers of endothelial and platelet activation are associated with high on-aspirin platelet reactivity in patients with stable coronary artery disease.
INTRODUCTION Aspirin inhibits the cyclooxygenase-1 (COX-1) mediated thromboxane A2 synthesis. Despite COX-1 inhibition, in patients with coronary artery disease (CAD), platelets can be activated through other mechanisms, like activation by thrombin. MATERIALS AND METHODS At baseline in this cross-sectional substudy of the ASCET trial, 1001 stable CAD patients, all on single aspirin treatment, were classified by the PFA100® method as having high on-aspirin residual platelet reactivity (RPR) or not. Markers of hypercoagulability, endothelial and platelet activation as related to RPR were evaluated to explore the potential mechanisms behind high on-aspirin RPR. RESULTS Altogether, 25.9% (n=259) of the patients were found to have high on-aspirin RPR. S-thromboxane B(2) levels were very low and did not differ between patients having high on-aspirin RPR or not. Patients with high on-aspirin RPR had significantly higher levels of von Willebrand Factor (vWF) (124 vs 100%, p<0.001), platelet count (236 vs 224 × 10(9)/l, p=0.008), total TFPI (68.4 vs 65.5 ng/ml, p=0.005) and β-thromboglobulin (β-TG) (33.3 vs 31.3 IU/ml, p=0.041) compared to patients with low on-aspirin RPR. No significant differences between the groups were observed in levels of endogenous thrombin generation (ETP), pro-thrombin fragment 1+2 (F1+2), D-dimer, soluble TF (sTF) or P-selectin (all p>0.05). CONCLUSIONS The high on-aspirin RPR as defined by PFA100® seems not to be due to increased thrombin activity as evaluated with ETP, sTF, F1+2 or D-dimer. The elevated levels of platelet count, β-TG, TFPI and especially vWF might be explained by increased endothelial and platelet activation in these patients.
Efficient identification of Web communities
MUSCLE DAMAGE AFTER A TENNIS MATCH IN YOUNG PLAYERS
The present study investigated changes in indirect markers of muscle damage following a simulated tennis match play using nationally ranked young (17.6 ± 1.4 years) male tennis players. Ten young athletes played a 3-hour simulated match play on outdoor red clay courts following the International Tennis Federation rules. Muscle soreness, plasma creatine kinase activity (CK), serum myoglobin concentration (Mb), one repetition maximum (1RM) squat strength, and squat jump (SJ) and counter movement jump (CMJ) heights were assessed before, immediately after, and 24 and 48 h after the simulated match play. All parameters were also evaluated in a non-exercised group (control group). A small increase in the indirect markers of muscle damage (muscle soreness, CK and Mb) was detected at 24-48 hours post-match (p < 0.05). A marked acute decrement in neuromuscular performance (1RM squat strength: -35.2 ± 10.4%, SJ: -7.0 ± 6.0%, CMJ: -10.0 ± 6.3%) was observed immediately post-match (p < 0.05). At 24 h post-match, the 1RM strength and jump heights were not significantly different from the baseline values. However, several players showed a decrease of these measures at 24 h after the match play. The simulated tennis match play induced mild muscle damage in young players. Coaches could monitor changes in the indirect markers of muscle damage to assess athletes' recovery status during training and competition.
Surgical Simulation Training Systems : Box Trainers , Virtual Reality and Augmented Reality Simulators
In contrast to traditional surgical training, simulation-based training in Minimally Invasive Surgery (MIS) using Virtual Reality (VR), together with Augmented Reality (AR) technology, constitutes a relatively new application field in surgery. However, between simulation and AR there is a direct link: the utilization of state-of-the-art computer graphics tools and techniques. Nevertheless, an integrated research system that introduces AR properties into simulation-based surgical training scenarios has not yet been proposed in the current literature. In this article we provide a survey on the simulation techniques and systems employed for surgical skills training based on traditional as well as more advanced VR technologies. Moreover, we summarize recent experimental results with regard to the design of a novel AR-based simulator developed in our lab for basic skills training in MIS.
HMDB: A large video database for human motion recognition
With nearly one billion online videos viewed every day, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large, scalable static image datasets containing thousands of image categories, human action datasets lag far behind. Current action recognition databases contain on the order of ten different action categories collected under fairly controlled conditions. State-of-the-art performance on these datasets is now near ceiling and thus there is a need for the design and creation of new benchmarks. To address this issue we collected the largest action video database to date, with 51 action categories, which in total contain around 7,000 manually annotated clips extracted from a variety of sources ranging from digitized movies to YouTube. We use this database to evaluate the performance of two representative computer vision systems for action recognition and explore the robustness of these methods under various conditions such as camera motion, viewpoint, video quality and occlusion.
Topical treatments for cutaneous warts.
BACKGROUND Viral warts are a common skin condition, which can range in severity from a minor nuisance that resolves spontaneously to a troublesome, chronic condition. Many different topical treatments are available. OBJECTIVES To evaluate the efficacy of local treatments for cutaneous non-genital warts in healthy, immunocompetent adults and children. SEARCH METHODS We updated our searches of the following databases to May 2011: the Cochrane Skin Group Specialised Register, CENTRAL in The Cochrane Library, MEDLINE (from 2005), EMBASE (from 2010), AMED (from 1985), LILACS (from 1982), and CINAHL (from 1981). We searched reference lists of articles and online trials registries for ongoing trials. SELECTION CRITERIA Randomised controlled trials (RCTs) of topical treatments for cutaneous non-genital warts. DATA COLLECTION AND ANALYSIS Two authors independently selected trials and extracted data; a third author resolved any disagreements. MAIN RESULTS We included 85 trials involving a total of 8815 randomised participants (26 new studies were included in this update). There was a wide range of different treatments and a variety of trial designs. Many of the studies were judged to be at high risk of bias in one or more areas of trial design. Trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (RR (risk ratio) 1.56, 95% CI (confidence interval) 1.20 to 2.03). Subgroup analysis for different sites, hands (RR 2.67, 95% CI 1.43 to 5.01) and feet (RR 1.29, 95% CI 1.07 to 1.55), suggested it might be more effective for hands than feet. A meta-analysis of cryotherapy versus placebo for warts at all sites favoured neither intervention nor control (RR 1.45, 95% CI 0.65 to 3.23). Subgroup analysis for different sites, hands (RR 2.63, 95% CI 0.43 to 15.94) and feet (RR 0.90, 95% CI 0.26 to 3.07), again suggested better outcomes for hands than feet.
One trial showed cryotherapy to be better than both placebo and SA, but only for hand warts. There was no significant difference in cure rates between cryotherapy at 2-, 3-, and 4-weekly intervals. Aggressive cryotherapy appeared more effective than gentle cryotherapy (RR 1.90, 95% CI 1.15 to 3.15), but with increased adverse effects. Meta-analysis did not demonstrate a significant difference in effectiveness between cryotherapy and SA at all sites (RR 1.23, 95% CI 0.88 to 1.71) or in subgroup analyses for hands and feet. Two trials with 328 participants showed that SA and cryotherapy combined appeared more effective than SA alone (RR 1.24, 95% CI 1.07 to 1.43). The benefit of intralesional bleomycin remains uncertain as the evidence was inconsistent. The most informative trial with 31 participants showed no significant difference in cure rate between bleomycin and saline injections (RR 1.28, 95% CI 0.92 to 1.78). Dinitrochlorobenzene was more than twice as effective as placebo in 2 trials with 80 participants (RR 2.12, 95% CI 1.38 to 3.26). Two trials of clear duct tape with 193 participants demonstrated no advantage over placebo (RR 1.43, 95% CI 0.51 to 4.05). We could not combine data from trials of the following treatments: intralesional 5-fluorouracil, topical zinc, silver nitrate (which demonstrated possible beneficial effects), topical 5-fluorouracil, pulsed dye laser, photodynamic therapy, 80% phenol, 5% imiquimod cream, intralesional antigen, and topical alpha-lactalbumin-oleic acid (which showed no advantage over placebo). We did not identify any RCTs that evaluated surgery (curettage, excision), formaldehyde, podophyllotoxin, cantharidin, diphencyprone, or squaric acid dibutylester. AUTHORS' CONCLUSIONS Data from two new trials comparing SA and cryotherapy have allowed a better appraisal of their effectiveness. The evidence remains more consistent for SA, but only shows a modest therapeutic effect.
Overall, trials comparing cryotherapy with placebo showed no significant difference in effectiveness, but the same was also true for trials comparing cryotherapy with SA. Only one trial showed cryotherapy to be better than both SA and placebo, and this was only for hand warts. Adverse effects, such as pain, blistering, and scarring, were not consistently reported but are probably more common with cryotherapy. None of the other reviewed treatments appeared safer or more effective than SA and cryotherapy. Two trials of clear duct tape demonstrated no advantage over placebo. Dinitrochlorobenzene (and possibly other similar contact sensitisers) may be useful for the treatment of refractory warts.
Unsupervised Representation Learning with Prior-Free and Adversarial Mechanism Embedded Autoencoders
Most state-of-the-art methods for representation learning are supervised, which require a large number of labeled data. This paper explores a novel unsupervised approach for learning visual representation. We introduce an image-wise discrimination criterion in addition to a pixel-wise reconstruction criterion to model both individual images and the difference between original images and reconstructed ones during neural network training. These criteria induce networks to focus on not only local features but also global high-level representations, so as to provide a competitive alternative to supervised representation learning methods, especially in the case of limited labeled data. We further introduce a competition mechanism to drive each component to increase its capability to win its adversary. In this way, the identity of representations and the likeness of reconstructed images to original ones are alternately improved. Experimental results on several tasks demonstrate the effectiveness of our approach.
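These two criteria can be made concrete as loss terms. The sketch below (NumPy; the tensors and discriminator scores are made up for illustration, and this is a generic adversarial formulation rather than the authors' exact one) shows a pixel-wise MSE combined with an image-wise adversarial term:

```python
import numpy as np

def pixel_wise_loss(x, x_hat):
    """Pixel-wise reconstruction criterion: mean squared error over pixels."""
    return float(np.mean((x - x_hat) ** 2))

def image_wise_loss(d_fake, eps=1e-9):
    """Image-wise discrimination criterion seen by the autoencoder: push the
    discriminator's scores on reconstructions (d_fake, in (0, 1)) toward 1."""
    return float(-np.mean(np.log(d_fake + eps)))

def discriminator_objective(d_real, d_fake, eps=1e-9):
    """The adversary's objective: score originals high, reconstructions low."""
    return float(-np.mean(np.log(d_real + eps))
                 - np.mean(np.log(1.0 - d_fake + eps)))

# Toy tensors standing in for a batch of images and their reconstructions.
x = np.ones((2, 8, 8))
recon_loss = pixel_wise_loss(x, 0.5 * x)   # imperfect reconstruction
adv_loss = image_wise_loss(np.array([0.4, 0.6]))
total = recon_loss + adv_loss              # joint training signal
```

Training then alternates between minimising `total` for the autoencoder and `discriminator_objective` for the adversary, which is the competition mechanism described above.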
N-acetylcysteine does not prevent renal dysfunction after off-pump coronary artery bypass surgery.
BACKGROUND AND OBJECTIVE Coronary artery bypass graft surgery in high-risk patients may be associated with postoperative renal dysfunction. N-Acetylcysteine is a powerful antioxidant and has been used to prevent contrast-induced renal dysfunction. The efficacy of N-acetylcysteine in preventing postoperative renal dysfunction following off-pump coronary artery bypass graft surgery was studied. METHODS A prospective, randomized, controlled study was conducted in patients undergoing off-pump coronary artery bypass graft. The study group (37 patients) received N-acetylcysteine in the perioperative period, whereas the control group (37 patients) did not. The data obtained were analysed using the independent sample t-test (Student's t-test) and the χ²-test. RESULTS There was no significant difference in the incidence of renal dysfunction between the two groups. Three patients (8.6%) in the N-acetylcysteine group and four (11.4%) in the control group developed renal dysfunction (P value was 1.00). CONCLUSION N-Acetylcysteine does not have any beneficial effect on renal function in high-risk patients undergoing off-pump coronary artery bypass graft.
The evolutionary dynamics of grammar acquisition.
Grammar is the computational system of language. It is a set of rules that specifies how to construct sentences out of words. Grammar is the basis of the unlimited expressibility of human language. Children acquire the grammar of their native language without formal education, simply by hearing a number of sample sentences. Children could not solve this learning task if they did not have some pre-formed expectations. In other words, children have to evaluate the sample sentences and choose one grammar out of a limited set of candidate grammars. The restricted search space and the mechanism that allows children to evaluate the sample sentences are called universal grammar. Universal grammar cannot be learned; it must be in place when the learning process starts. In this paper, we design a mathematical theory that places the problem of language acquisition into an evolutionary context. We formulate equations for the population dynamics of communication and grammar learning. We ask how accurately children have to learn the grammar of their parents' language for a population of individuals to evolve and maintain a coherent grammatical system. It turns out that there is a maximum error tolerance for which a predominant grammar is stable. We calculate the maximum size of the search space that is compatible with coherent communication in a population. Thus, we specify the conditions for the evolution of universal grammar.
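Population dynamics of this kind are usually written in a replicator-learner form; the following is a sketch of that standard formulation (the notation here is an assumption, not taken from the paper):

```latex
% x_i: frequency of speakers of grammar G_i; F_{ij}: communicative payoff for
% a speaker of G_i talking to a speaker of G_j; Q_{ji}: probability that a
% child of a G_j speaker acquires grammar G_i (Q encodes learning accuracy).
\begin{align}
  f_i       &= \sum_{j} F_{ij}\, x_j
              && \text{(fitness of grammar } G_i\text{)} \\
  \dot{x}_i &= \sum_{j} f_j\, Q_{ji}\, x_j \;-\; \phi\, x_i,
              && \phi = \sum_{j} f_j\, x_j \quad \text{(average fitness)}
\end{align}
```

The coherence threshold then follows from asking when a predominant grammar (large \(x_1\)) remains a stable fixed point of these equations as the learning accuracy encoded in \(Q\) decreases.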
Stochastic Tracking of 3D Human Figures Using 2D Image Motion
A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D articulated motion model. It also extends previous work on human motion tracking by including a perspective camera model, by modeling limb self-occlusion, and by recovering 3D motion from a monocular sequence. The explicit posterior probability distribution represents ambiguities due to image matching, model singularities, and perspective projection. The method relies only on a frame-to-frame assumption of brightness constancy and hence is able to track people under changing viewpoints, in grayscale image sequences, and with complex unknown backgrounds.
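The propagate-weight-resample loop at the core of such a tracker can be sketched in a few lines (a 1D toy with a random-walk motion model and a Gaussian likelihood; the real tracker uses pose/joint-angle state vectors and an image-based likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         motion_std=0.1, obs_std=0.2):
    # 1. Propagate: sample each particle from the temporal (motion) model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 2. Weight: evaluate the observation likelihood for each sample.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / np.sum(weights)
    # 3. Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a fixed true state from noisy observations.
true_state = 1.0
particles = rng.uniform(-3.0, 3.0, size=500)
weights = np.full(500, 1.0 / 500)
for _ in range(30):
    obs = true_state + rng.normal(0.0, 0.2)
    particles, weights = particle_filter_step(particles, weights, obs)

estimate = float(np.mean(particles))
```

The discrete sample set plays the role of the paper's explicit posterior: multimodal ambiguities simply appear as separated clusters of particles.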
Feature Ranking and Selection for Intrusion Detection Using Artificial Neural Networks and Statistical Methods
This paper describes results concerning the robustness and generalization capabilities of artificial neural networks in detecting intrusions using network audit trails. Through a variety of comparative experiments, it is found that neural networks perform best for intrusion detection. Feature selection is as important for intrusion detection as it is for many other problems. We present our work on identifying features pertinent to intrusions and normal traffic and on evaluating the applicability of these features in detecting intrusions. We also present different feature selection methods for intrusion detection. It is demonstrated that, with appropriately chosen features, intrusions can be detected in real time or near real time.
Individual Voltage Balancing Strategy for PWM Cascaded H-Bridge Converter-Based STATCOM
This paper presents a new control method for cascaded connected H-bridge converter-based static compensators. These converters have classically been commutated at fundamental line frequencies, but the evolution of power semiconductors has allowed the increase of switching frequencies and power ratings of these devices, permitting the use of pulsewidth modulation techniques. This paper mainly focuses on dc-bus voltage balancing problems and proposes a new control technique (individual voltage balancing strategy), which solves these balancing problems, maintaining the delivered reactive power equally distributed among all the H-bridges of the converter.
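The core balancing idea, trimming each H-bridge's active-power exchange in proportion to its dc-voltage error so that all capacitor voltages converge to the reference, can be illustrated with a toy discrete-time simulation (a schematic stand-in with an invented gain, not the paper's actual controller or plant model):

```python
import numpy as np

def balance_step(v_dc, v_ref, gain=0.2):
    """One control period: each bridge absorbs or releases a small amount of
    active power proportional to its deviation from the reference, which
    drives every individual dc-bus voltage toward v_ref."""
    return v_dc - gain * (v_dc - v_ref)

v_ref = 100.0
v_dc = np.array([95.0, 102.0, 108.0])  # unbalanced capacitor voltages
for _ in range(40):
    v_dc = balance_step(v_dc, v_ref)

spread = float(np.max(v_dc) - np.min(v_dc))
```

Because each bridge is corrected individually, the voltages converge geometrically and the spread between the best- and worst-balanced bridges shrinks toward zero.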
General and Interval Type-2 Fuzzy Face-Space Approach to Emotion Recognition
Facial expressions of a person representing similar emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion have wide variations. In the presence of two or more facial features, the variation of the attributes together makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n-subjects, each having l-instances of facial expressions for a given emotion. The GT2FS in addition to employing the primary membership functions mentioned above also involves the secondary memberships for individual primary membership curve, which has been obtained here by formulating and solving an optimization problem. The optimization problem here attempts to minimize the difference between two decoded signals: the first one being the type-1 defuzzification of the average primary membership functions obtained from the n-subjects, while the second one refers to the type-2 defuzzified signal for a given primary membership function with secondary memberships as unknown. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333% in comparison to 91.667% obtained by its interval type-2 counterpart. 
A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing measurements using the well-known interval approach.
Automatic Panoramic Image Stitching using Invariant Features
This paper concerns the problem of fully automated panoramic image stitching. Though the 1D problem (single axis of rotation) is well studied, 2D or multi-row stitching is more difficult. Previous approaches have used human input or restrictions on the image sequence in order to establish matching images. In this work, we formulate stitching as a multi-image matching problem, and use invariant local features to find matches between all of the images. Because of this our method is insensitive to the ordering, orientation, scale and illumination of the input images. It is also insensitive to noise images that are not part of a panorama, and can recognise multiple panoramas in an unordered image dataset. In addition to providing more detail, this paper extends our previous work in the area (Brown and Lowe, 2003) by introducing gain compensation and automatic straightening steps.
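The multi-image matching step rests on nearest-neighbour descriptor matching with a distance-ratio test (Lowe's ratio test). A minimal NumPy version, with made-up descriptor arrays, looks like:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only when the nearest distance is clearly smaller than
    the second-nearest (the ratio test rejects ambiguous matches)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# The distinctive descriptor matches; the ambiguous one (equidistant
# from two candidates) is rejected by the ratio test.
a = np.array([[1.0, 0.0], [0.5, 0.5]])
b = np.array([[1.0, 0.0], [0.6, 0.4], [0.4, 0.6]])
matches = ratio_test_matches(a, b)
```

Running this test over all image pairs, rather than over a user-supplied ordering, is what makes the method insensitive to the order and orientation of the inputs.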
Digital surgery: current trends and techniques.
Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.
Reduction in visceral adiposity is associated with an improved metabolic profile in HIV-infected patients receiving tesamorelin.
BACKGROUND Tesamorelin, a growth hormone-releasing hormone analogue, decreases visceral adipose tissue (VAT) by 15%-20% over 6-12 months in individuals with human immunodeficiency virus (HIV)-associated abdominal adiposity, but it is unknown whether VAT reduction is directly associated with endocrine and metabolic changes. METHODS In 2 phase III, randomized, double-blind studies, men and women with HIV-associated abdominal fat accumulation were randomly assigned (ratio, 2:1) to receive tesamorelin or placebo for 26 weeks. At week 26, patients initially receiving tesamorelin were randomly assigned to continue receiving tesamorelin or to receive placebo for an additional 26 weeks. In per-protocol analysis of 402 subjects initially randomly assigned to receive tesamorelin, those with ≥8% reduction in VAT were defined a priori as responders per the statistical analysis plan. Post hoc analyses were performed to assess differences between responders and nonresponders. RESULTS Compared with tesamorelin nonresponders, responders experienced greater mean (±SD) reduction in triglyceride levels (26 weeks: -0.6 ± 1.7 mmol/L vs -0.1 ± 1.2 mmol/L [P = .005]; 52 weeks: -0.8 ± 1.8 mmol/L vs 0.0 ± 1.1 mmol/L [P = .003]) and attenuated changes in fasting glucose levels (26 weeks: 1 ± 16 mg/dL vs 5 ± 14 mg/dL [P = .01]; 52 weeks: -1 ± 14 mg/dL vs 8 ± 17 mg/dL [P < .001]), hemoglobin A1c levels (26 weeks: 0.1 ± 0.3% vs 0.3 ± 0.4% [P < .001]; 52 weeks: 0.0 ± 0.3% vs 0.2 ± 0.5% [P = .003]), and other parameters of glucose homeostasis. Similar patterns were seen for adiponectin levels, with significant improvement in responders vs nonresponders. Changes in lipid levels and glucose homeostasis were significantly associated with percentage change in VAT. 
CONCLUSIONS In contrast to nonresponders, HIV-infected patients receiving tesamorelin with ≥8% reduction in VAT have significantly improved triglyceride levels, adiponectin levels, and preservation of glucose homeostasis over 52 weeks of treatment. CLINICALTRIALS.GOV REGISTRATION: NCT00123253, NCT00435136, NCT00608023.
Occurrence of Anti-Drug Antibodies against Interferon-Beta and Natalizumab in Multiple Sclerosis: A Collaborative Cohort Analysis
Immunogenicity of biopharmaceutical products in multiple sclerosis is a frequent side effect which has a multifactorial etiology. Here we study associations between anti-drug antibody (ADA) occurrence and demographic and clinical factors. Retrospective data from routine ADA test laboratories in Sweden, Denmark, Austria and Germany (Dusseldorf group) and from one research study in Germany (Munich group) were gathered to build a collaborative multi-cohort dataset within the framework of the ABIRISK project. A subset of 5638 interferon-beta (IFNβ)-treated and 3440 natalizumab-treated patients having data on at least the first two years of treatment were eligible for interval-censored time-to-event analysis. In multivariate Cox regression, IFNβ-1a subcutaneous and IFNβ-1b subcutaneous treated patients were at higher risk of ADA occurrence compared to IFNβ-1a intramuscular-treated patients (pooled HR = 6.4, 95% CI 4.9-8.4 and pooled HR = 8.7, 95% CI 6.6-11.4 respectively). Patients older than 50 years at start of IFNβ therapy developed ADA more frequently than adult patients younger than 30 (pooled HR = 1.8, 95% CI 1.4-2.3). Men developed ADA more frequently than women (pooled HR = 1.3, 95% CI 1.1-1.6). Interestingly we observed that in Sweden and Germany, patients who started IFNβ in April were at higher risk of developing ADA (HR = 1.6, 95% CI 1.1-2.4 and HR = 2.4, 95% CI 1.5-3.9 respectively). This result is not confirmed in the other cohorts and warrants further investigations. Concerning natalizumab, patients older than 45 years had a higher ADA rate (pooled HR = 1.4, 95% CI 1.0-1.8) and women developed ADA more frequently than men (pooled HR = 1.4, 95% CI 1.0-2.0). We confirmed previously reported differences in immunogenicity of the different types of IFNβ. Differences in ADA occurrence by sex and age are reported here for the first time. These findings should be further investigated taking into account other exposures and biomarkers.
Characteristics and Principles of Scaled Agile
The Agile Manifesto and Agile Principles are typically referred to as the definitions of "agile" and "agility". There is research on agile values and agile practices, but how should "Scaled Agility" be defined, and what might be the characteristics and principles of Scaled Agile? This paper examines the characteristics of scaled agile and the principles that are used to build up such agility. It also suggests principles upon which Scaled Agility can be built.
Design and modeling of a micromachined high-Q tunable capacitor with large tuning range and a vertical planar spiral inductor
In wireless communication systems, passive elements including tunable capacitors and inductors often need a high quality factor (Q-factor). In this paper, we present the design and modeling of a novel high-Q tunable capacitor with large tuning range and a high-Q vertical planar spiral inductor implemented in microelectromechanical system (MEMS) technology. Different from conventional two-parallel-plate tunable capacitors, the novel tunable capacitor consists of one suspended top plate and two fixed bottom plates. One of the two fixed plates and the top plate form a variable capacitor, while the other fixed plate and the top plate are used to provide electrostatic actuation for capacitance tuning. For the fabricated prototype tunable capacitors, a maximum controllable tuning range of 69.8% has been achieved, exceeding the theoretical tuning range limit (50%) of conventional two-parallel-plate tunable capacitors. This tunable capacitor also exhibits a very low return loss of less than 0.6 dB in the frequency range from 45 MHz to 10 GHz. The high-Q planar coil inductor is first fabricated on a silicon substrate and then assembled to the vertical position by using a novel three-dimensional microstructure assembly technique called plastic deformation magnetic assembly (PDMA). Inductors of different dimensions are fabricated and tested. The S-parameters of the inductors before and after PDMA are measured and compared, demonstrating superior performance due to reduced substrate loss and parasitics. The new vertical planar spiral inductor also has the advantage of occupying much smaller silicon areas than conventional planar spiral inductors.
Simulation of bone tissue formation within a porous scaffold under dynamic compression.
A computational model of mechanoregulation is proposed to predict bone tissue formation stimulated mechanically by overall dynamical compression within a porous polymeric scaffold rendered by micro-CT. Dynamic compressions of 0.5-5% at 0.0025-0.025 s(-1) were simulated. A force-controlled dynamic compression was also performed by imposing a ramp of force from 1 to 70 N. The model predicts homogeneous mature bone tissue formation under strain levels of 0.5-1% at strain rates of 0.0025-0.005 s(-1). Under higher levels of strain and strain rates, the scaffold shows heterogeneous mechanical behaviour which leads to the formation of a heterogeneous tissue with a mixture of mature bone and fibrous tissue. A fibrous tissue layer was also predicted under the force-controlled dynamic compression, although the same force magnitude was found promoting only mature bone during a strain-controlled compression. The model shows that the mechanical stimulation of bone tissue formation within a porous scaffold closely depends on the loading history and on the mechanical behaviour of the scaffold at local and global scales.
Information gathering actions over human internal state
Much of estimation of human internal state (goal, intentions, activities, preferences, etc.) is passive: an algorithm observes human actions and updates its estimate of human state. In this work, we embrace the fact that robot actions affect what humans do, and leverage it to improve state estimation. We enable robots to do active information gathering, by planning actions that probe the user in order to clarify their internal state. For instance, an autonomous car will plan to nudge into a human driver's lane to test their driving style. Results in simulation and in a user study suggest that active information gathering significantly outperforms passive state estimation.
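Choosing a probing action amounts to picking the action whose observation is expected to shrink the belief's uncertainty the most. The toy two-hypothesis driver-style example below illustrates this (all probabilities are invented for illustration; the paper's planner operates over continuous trajectories):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = np.asarray(p)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

def expected_posterior_entropy(belief, obs_lik):
    """obs_lik[a][o][s] = P(observation o | internal state s, action a).
    Returns, for each action, the expected entropy of the Bayes posterior."""
    out = {}
    for a, lik in obs_lik.items():
        h = 0.0
        for o_probs in lik:                       # iterate over observations
            p_o = float(np.dot(o_probs, belief))  # marginal P(o | a)
            if p_o == 0.0:
                continue
            posterior = np.asarray(o_probs) * belief / p_o
            h += p_o * entropy(posterior)
        out[a] = h
    return out

belief = np.array([0.5, 0.5])  # P(aggressive), P(timid)
obs_lik = {
    # "nudge" probes the driver: aggressive drivers rarely brake, timid often do
    "nudge": [[0.1, 0.9],      # P(brake    | style, nudge)
              [0.9, 0.1]],     # P(no-brake | style, nudge)
    # "stay" is uninformative: both styles respond identically
    "stay":  [[0.5, 0.5],
              [0.5, 0.5]],
}
h = expected_posterior_entropy(belief, obs_lik)
best_action = min(h, key=h.get)
```

The probing action wins because its possible observations split the hypotheses apart, which is exactly the "nudge into the lane to test driving style" behaviour described above.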
A Scalable Geospatial Web Service for Near Real-Time, High-Resolution Land Cover Mapping
A land cover classification service is introduced toward addressing current challenges on the handling and online processing of big remote sensing data. The geospatial web service has been designed, developed, and evaluated toward the efficient and automated classification of satellite imagery and the production of high-resolution land cover maps. The core of our platform consists of the Rasdaman array database management system for raster data storage and the open geospatial consortium web coverage processing service for data querying. Currently, the system is fully covering Greece with Landsat 8 multispectral imagery, from the beginning of its operational orbit. Datasets are stored and preprocessed automatically. A two-stage automated classification procedure was developed which is based on a statistical learning model and a multiclass support vector machine classifier, integrating advanced remote sensing and computer vision tools like Orfeo Toolbox and OpenCV. The framework has been trained to classify pansharpened images at 15-m ground resolution toward the initial detection of 31 spectral classes. The final product of our system is delivering, after a postclassification and merging procedure, multitemporal land cover maps with 10 land cover classes. The performed intensive quantitative evaluation has indicated an overall classification accuracy above 80%. The system in its current alpha release, once receiving a request from the client, can process and deliver land cover maps, for a 500-$\text{km}^2$ region, in about 20 s, allowing near real-time applications.
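The post-classification merging step, collapsing the 31 detected spectral classes into the 10 delivered land cover classes, amounts to a lookup-table relabelling of the classified raster. A minimal sketch (the class codes and mapping here are hypothetical, not the system's actual legend):

```python
import numpy as np

# Hypothetical mapping: spectral class id -> merged land cover class id.
SPECTRAL_TO_LANDCOVER = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}

def merge_classes(classified, mapping):
    """Relabel a classified raster through a lookup table (vectorised,
    so it scales to full scenes rather than looping per pixel)."""
    lut = np.zeros(max(mapping) + 1, dtype=np.int64)
    for spectral, landcover in mapping.items():
        lut[spectral] = landcover
    return lut[classified]

raster = np.array([[0, 2, 4],
                   [1, 3, 0]])
merged = merge_classes(raster, SPECTRAL_TO_LANDCOVER)
```

Vectorised lookup keeps this step negligible next to the SVM classification itself, which matters for the near real-time delivery target quoted above.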
Inference Under Progressively Type II Right-Censored Sampling for Certain Lifetime Distributions
In this paper, estimation of the parameters of a certain family of two-parameter lifetime distributions based on progressively Type II right censored samples (including ordinary Type II right censoring) is studied. This family, of reverse hazard distributions, includes the Weibull, Gompertz and Lomax distributions. A new type of parameter estimation, named inverse estimation, is introduced for both parameters. Exact confidence intervals for one of the parameters and generalized confidence intervals for the other are explored; inference for the first parameter can be accomplished by our methodology independently of the unknown value of the other parameter in this family of distributions. Derivation of the estimation method uses properties of order statistics. A simulation study in the particular context of the Weibull distribution illustrates the accuracy of these confidence intervals and compares inverse estimators favorably with maximum likelihood estimators. A numerical example is used to illustrate the proposed procedures.
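As a baseline for comparison, ordinary (uncensored) maximum likelihood estimation for the Weibull case can be computed by solving the profile likelihood equation for the shape parameter by bisection (this sketch is the standard complete-sample MLE, not the paper's inverse estimators under progressive censoring):

```python
import numpy as np

def weibull_mle(x, k_lo=0.05, k_hi=50.0, iters=200):
    """Maximum-likelihood estimates (shape k, scale lam) for an uncensored
    Weibull sample; the profile equation g(k) = 0 is monotone in k, so
    simple bisection suffices."""
    x = np.asarray(x, dtype=float)
    logx = np.log(x)

    def g(k):
        xk = x ** k
        return float(np.sum(xk * logx) / np.sum(xk) - 1.0 / k - np.mean(logx))

    lo, hi = k_lo, k_hi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = float(np.mean(x ** k) ** (1.0 / k))
    return k, lam

rng = np.random.default_rng(1)
sample = rng.weibull(2.0, size=5000) * 1.5   # true shape 2.0, scale 1.5
k_hat, lam_hat = weibull_mle(sample)
```

Under progressive Type II censoring the likelihood gains survival-function terms for the removed units, which is where the paper's inverse estimators and exact interval results depart from this baseline.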
Objective Function Zero for the Routing Protocol for Low-Power and Lossy Networks (RPL)
The Routing Protocol for Low-Power and Lossy Networks (RPL) specification defines a generic Distance Vector protocol that is adapted to a variety of network types by the application of specific Objective Functions (OFs). An OF states the outcome of the process used by a RPL node to select and optimize routes within a RPL Instance based on the Information Objects available; an OF is not an algorithm. This document specifies a basic Objective Function that relies only on the objects that are defined in the RPL and does not use any protocol extensions.
Inverse Combinatorial Optimization: A Survey on Problems, Methods, and Results
Given a (combinatorial) optimization problem and a feasible solution to it, the corresponding inverse optimization problem is to find a minimal adjustment of the cost function such that the given solution becomes optimum. Several such problems have been studied in the last twelve years. After formalizing the notion of an inverse problem and its variants, we present various methods for solving them. Then we discuss the problems considered in the literature and the results that have been obtained. Finally, we formulate some open problems.
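In symbols, for a problem \(\min\{c^{\mathsf T} x : x \in X\}\) with a given feasible solution \(x^{*}\), the inverse problem under a norm \(\|\cdot\|\) reads (a generic formulation; the notation here is ours, not the survey's):

```latex
\begin{equation}
  \min_{d}\; \| d - c \|
  \quad\text{subject to}\quad
  d^{\mathsf T} x^{*} \;\le\; d^{\mathsf T} x
  \quad \forall\, x \in X ,
\end{equation}
```

i.e., find the cost vector \(d\) closest to \(c\) under which \(x^{*}\) becomes optimum; the variants discussed in the literature mostly differ in the choice of norm (typically \(\ell_1\) or \(\ell_\infty\)) and in which cost components may be adjusted.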
GlobeSen: an open interconnection framework based on named sensory data for IoT
How to efficiently interconnect ubiquitous wireless sensor networks (WSNs) and the Internet is an important challenge for the Internet of Things. In this paper, we explore the route of information-centric networking (ICN) to solve this challenge and propose an open interconnection framework named GlobeSen. To overcome the problem that traditional ICN solutions (such as NDN) are not suitable for resource-constrained WSNs, we present a new implementation of NDN, NDNs, for WSNs. Specifically, by extracting the spatio-temporal information and data type information of an interest, we construct a globally unique name structure and exploit the spatio-temporal relation operation as the matching method. Based on the new naming strategy and matching method, we further design packet forwarding and routing schemes. Moreover, we also develop a gateway system, NDNt, for protocol translation and an application, SenBrowser, to provide a user-friendly interface for generating interests and displaying the returned sensory data. We implement a proof-of-concept prototype based on TelosB sensor nodes and an ARM development board, and conduct a series of experiments to evaluate the performance of GlobeSen.
Breast cancer molecular subtypes respond differently to preoperative chemotherapy.
PURPOSE Molecular classification of breast cancer has been proposed based on gene expression profiles of human tumors. Luminal, basal-like, normal-like, and erbB2+ subgroups were identified and were shown to have different prognoses. The goal of this research was to determine if these different molecular subtypes of breast cancer also respond differently to preoperative chemotherapy. EXPERIMENTAL DESIGN Fine needle aspirations of 82 breast cancers were obtained before starting preoperative paclitaxel followed by 5-fluorouracil, doxorubicin, and cyclophosphamide chemotherapy. Gene expression profiling was done with Affymetrix U133A microarrays and the previously reported "breast intrinsic" gene set was used for hierarchical clustering and multidimensional scaling to assign molecular class. RESULTS The basal-like and erbB2+ subgroups were associated with the highest rates of pathologic complete response (CR), 45% [95% confidence interval (95% CI), 24-68] and 45% (95% CI, 23-68), respectively, whereas the luminal tumors had a pathologic CR rate of 6% (95% CI, 1-21). No pathologic CR was observed among the normal-like cancers (95% CI, 0-31). Molecular class was not independent of conventional clinicopathologic predictors of response such as estrogen receptor status and nuclear grade. None of the 61 genes associated with pathologic CR in the basal-like group were associated with pathologic CR in the erbB2+ group, suggesting that the molecular mechanisms of chemotherapy sensitivity may vary between these two estrogen receptor-negative subtypes. CONCLUSIONS The basal-like and erbB2+ subtypes of breast cancer are more sensitive to paclitaxel- and doxorubicin-containing preoperative chemotherapy than the luminal and normal-like cancers.
FREQUENCY-TUNABLE INTERNAL ANTENNA FOR MOBILE PHONES
New approaches are needed for improving the performance of small antennas to fulfill the multiband operation requirements of future wireless mobile communications terminals. In this paper, a novel low-loss frequency-tuning circuit for mobile handset antennas is proposed. The presented design takes into account several factors that affect practical mobile handset antenna design, such as the biasing limitations and distortion of the switching component as well as the effect of the mobile handset-sized ground plane. An antenna prototype capable of switching between the US cellular and GSM systems in the 800-900 MHz frequency range was designed and measured. The antenna was positioned on a metallized printed circuit board (PCB) of a size equal to that of a typical mobile phone. The tuning circuit, consisting of transmission line sections and an SPDT (single-pole, double-throw) FET switch, was fabricated directly on the substrate of the PCB. The designed antenna has high radiation efficiency and low distortion in both system bands.
Adaptive Image Segmentation Using a Genetic Algorithm
In this paper we present an Adaptive approach for image segmentation using a genetic algorithm, a natural evolutionary approach for optimisation problems. In addition, the method of implementing the genetic algorithm has also been reviewed, with a summary of research work on Adaptive approaches to image segmentation. We have proposed an efficient color image segmentation using the Adaptive approach along with a genetic algorithm. The advantage of the proposed method lies in its utilisation of prior knowledge of the RGB image to segment the image efficiently.
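A minimal genetic algorithm for one common segmentation sub-task, choosing a global threshold that maximises between-class variance (Otsu's criterion), gives the flavour of the approach (a toy sketch on synthetic intensities, not the paper's full adaptive colour segmenter):

```python
import numpy as np

rng = np.random.default_rng(3)

def between_class_variance(pixels, t):
    """Otsu fitness: weighted separation of the two classes split at t."""
    lo, hi = pixels[pixels <= t], pixels[pixels > t]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    return float(w0 * w1 * (lo.mean() - hi.mean()) ** 2)

def ga_threshold(pixels, pop_size=20, generations=40, mut_std=5.0):
    """Evolve a population of candidate thresholds: selection keeps the
    fitter half, crossover averages parent pairs, mutation adds jitter."""
    pop = rng.uniform(pixels.min(), pixels.max(), size=pop_size)
    for _ in range(generations):
        fitness = np.array([between_class_variance(pixels, t) for t in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]
        children = (rng.choice(parents, pop_size // 2)
                    + rng.choice(parents, pop_size // 2)) / 2.0
        children = children + rng.normal(0.0, mut_std, size=len(children))
        pop = np.concatenate([parents, children])
    fitness = np.array([between_class_variance(pixels, t) for t in pop])
    return float(pop[np.argmax(fitness)])

# Bimodal "image": dark pixels near 50, bright pixels near 200.
pixels = np.concatenate([rng.normal(50, 10, 500), rng.normal(200, 10, 500)])
threshold = ga_threshold(pixels)
```

Exhaustive search would also work for a single scalar threshold; the GA pays off when the chromosome encodes many interacting segmentation parameters at once, which is the adaptive setting the paper targets.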
The Psychological Problems of North Korean Adolescent Refugees Living in South Korea
OBJECTIVE As the number of North Korean adolescent refugees in South Korea has drastically increased, there is growing interest in them. Our study was conducted to evaluate the mental health of North Korean adolescent refugees residing in South Korea. METHODS The subjects of this study were 102 North Korean adolescent refugees at Hangyeore middle and high school, the public educational institution for North Korean adolescent refugees residing in South Korea, and 766 general adolescents in the same region. The Korean version of the Child Behavior Check List (K-CBCL) standardized in South Korea was employed as the mental health evaluation tool. RESULTS The adolescent refugee group showed scores significantly different from those of the normal control group on the K-CBCL subscales for sociality (t=29.67, p=0.000), academic performance (t=17.79, p=0.000), total social function (t=35.52, p=0.000), social withdrawal (t=18.01, p=0.000), somatic symptoms (t=28.85, p=0.000), depression/anxiety (t=13.08, p=0.000), thought problems (t=6.24, p=0.013), attention problems (t=4.14, p=0.042), internalized problems (t=26.54, p=0.000) and total problems (t=5.23, p=0.022). CONCLUSION The mental health problems of the North Korean adolescent refugees were severe, particularly internalized problems, when compared with those of the general adolescents in South Korea. This result indicates the need for attention not only to the behavior of North Korean adolescent refugees but also to their emotional problems.
Clitoral and penile sizes of full term newborns in two different ethnic groups.
Standards of penile and clitoral sizes are useful for diagnosis of genital abnormalities. In order to verify whether ethnicity has an effect on the size of external genitalia in newborns, 570 full term infants, Jews (221) and Bedouins (349), at the neonatal department of the Soroka Medical Center were examined. Clitoral length, the distance between the center of the anus to the fourchette (AF) and the distance between the center of the anus and the base of the clitoris (AC) were measured, and the AF/AC ratio was calculated for the females. Penile length was measured in the males. Significant differences in clitoral length (12.6%) between the Jewish group (5.87 +/- 1.48 mm) and the Bedouin group (6.61 +/- 1.72 mm) (p < 0.01) and in the ratio of AF to AC between the two ethnic groups (p < 0.01) were found. To the best of our knowledge, our study is the first to report ethnic differences in genital sizes of newborns.
Does absorptive capacity determine collaborative research returns to innovation? A geographical dimension
This paper aims to estimate the impact of research collaboration with partners in different geographical areas on innovative performance. Using the Spanish Technological Innovation Panel, this study provides evidence that the benefits of research collaboration differ across geographical dimensions. We find that the impact of extra-European cooperation on innovation performance is larger than that of national and European cooperation, indicating that firms tend to benefit more from interaction with international partners as a way to access new technologies or specialized and novel knowledge that they are unable to find locally. We also find evidence of the positive role played by absorptive capacity, concluding that it confers a higher premium on the innovation returns to international cooperation, mainly in the European case.
Acetaminophen reduces social pain: behavioral and neural evidence.
Pain, whether caused by physical injury or social rejection, is an inevitable part of life. These two types of pain-physical and social-may rely on some of the same behavioral and neural mechanisms that register pain-related affect. To the extent that these pain processes overlap, acetaminophen, a physical pain suppressant that acts through central (rather than peripheral) neural mechanisms, may also reduce behavioral and neural responses to social rejection. In two experiments, participants took acetaminophen or placebo daily for 3 weeks. Doses of acetaminophen reduced reports of social pain on a daily basis (Experiment 1). We used functional magnetic resonance imaging to measure participants' brain activity (Experiment 2), and found that acetaminophen reduced neural responses to social rejection in brain regions previously associated with distress caused by social pain and the affective component of physical pain (dorsal anterior cingulate cortex, anterior insula). Thus, acetaminophen reduces behavioral and neural responses associated with the pain of social rejection, demonstrating substantial overlap between social and physical pain.
Deep Convolutional Neural Networks for efficient vision based tunnel inspection
The inspection, assessment, maintenance and safe operation of the existing civil infrastructure constitutes one of the major challenges facing engineers today. Such work requires either manual approaches, which are slow and yield subjective results, or automated approaches, which depend upon complex handcrafted features. Yet, for the latter case, it is rarely known in advance which features are important for the problem at hand. In this paper, we propose a fully automated tunnel assessment approach; using the raw input from a single monocular camera, we hierarchically construct complex features, exploiting the advantages of deep learning architectures. The obtained features are used to train an appropriate defect detector. In particular, we exploit a Convolutional Neural Network to construct high-level features, and as a detector we choose a Multi-Layer Perceptron due to its global function approximation properties. Such an approach achieves very fast predictions due to the feedforward nature of Convolutional Neural Networks and Multi-Layer Perceptrons.
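The CNN-plus-MLP pipeline described above can be sketched as follows. This is a minimal, untrained illustration of the general architecture (convolutional feature maps flattened and fed to an MLP head), not the authors' implementation; the filters, layer sizes, and function names are assumptions chosen for readability.

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2-D cross-correlation, the building block of a
    convolutional feature map."""
    kh, kw = kern.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def relu(x):
    return np.maximum(0.0, x)

def defect_probability(patch, kernels, W1, b1, W2, b2):
    """Convolutional feature extraction followed by an MLP head that
    outputs a defect probability for one image patch."""
    feats = np.concatenate([relu(conv2d_valid(patch, k)).ravel() for k in kernels])
    hidden = relu(feats @ W1 + b1)        # MLP hidden layer
    logit = hidden @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logit))   # sigmoid -> probability in (0, 1)

# Illustrative fixed edge filters and randomly initialized (untrained) weights.
rng = np.random.default_rng(0)
kernels = [np.array([[1., 0., -1.]] * 3),              # vertical edges
           np.array([[1.] * 3, [0.] * 3, [-1.] * 3])]  # horizontal edges
W1 = rng.normal(scale=0.1, size=(18, 4))  # 2 maps x (3x3) = 18 features
b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=4)
b2 = 0.0
p = defect_probability(rng.random((5, 5)), kernels, W1, b1, W2, b2)
```

In practice the filters and weights would be learned from labeled tunnel imagery rather than fixed by hand; the sketch only shows why inference is fast: a single feedforward pass per patch.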
A Framework for Truthful Online Auctions in Cloud Computing with Heterogeneous User Demands
Auction-style pricing policies can effectively reflect the underlying trends in demand and supply for cloud resources, and have thereby attracted research interest recently. In particular, a desirable cloud auction design should be (1) online, to timely reflect the fluctuation of supply-demand relations, (2) expressive, to support heterogeneous user demands, and (3) truthful, to discourage users from cheating behaviors. Meeting these requirements simultaneously is non-trivial, and most existing auction mechanism designs do not directly apply. To meet these goals, this paper conducts the first work on a framework for truthful online cloud auctions where users with heterogeneous demands could come and leave on the fly. Concretely speaking, we first design a novel bidding language, wherein users' heterogeneous requirements on their desired allocation time, application type, and even how they value different possible allocations can be flexibly and concisely expressed. Besides, building on top of our bidding language, we propose COCA, an incentive-Compatible (truthful) Online Cloud Auction mechanism. To ensure truthfulness with heterogeneous and online user demand, the design of COCA is driven by a monotonic payment rule and a utility-maximizing allocation rule. Moreover, our theoretical analysis shows that the worst-case performance of COCA can be well-bounded, and our further discussion shows that COCA performs well when some other important factors in online auction design are taken into consideration. Finally, in simulations the performance of COCA is seen to be comparable to the well-known off-line Vickrey-Clarke-Groves (VCG) mechanism [19].
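The truthfulness argument above rests on pairing a monotonic allocation rule with a critical-value payment rule. A minimal sketch of that principle, for a single resource slot, is the classic second-price auction below; this is not COCA itself (which handles online arrivals and expressive bids), only the simplest mechanism exhibiting the same monotonicity-plus-critical-payment structure, with illustrative names.

```python
def allocate_slot(bids):
    """Single-slot second-price auction: the highest bidder wins and
    pays the critical value (the second-highest bid), i.e. the minimum
    bid at which it would still win. Monotonic allocation plus
    critical-value payment makes truthful bidding a dominant strategy."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, payment

winner, payment = allocate_slot({"alice": 5.0, "bob": 3.0, "carol": 4.0})
```

Here alice wins and pays 4.0, carol's bid: raising her own bid above 5.0 would not change her payment, and lowering it below 4.0 would only make her lose, so misreporting cannot help her.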
Smartphone use can be addictive? A case report
Background and aims The use of mobile phones has become an integral part of everyday life. Young people in particular can be observed using their smartphones constantly, and they not only make or receive calls but also use different applications or just tap touch screens for several minutes at a time. The opportunities provided by smartphones are attractive, and the cumulative time spent using smartphones per day is very high for many people, so the question arises: can we really speak of mobile phone addiction? In this study, our aim is to describe and analyze a possible case of smartphone addiction. Methods We present the case of Anette, an 18-year-old girl, who is characterized by excessive smartphone use. We compare Anette's symptoms to Griffiths's conception of technological addictions, Goodman's criteria of behavioral addictions, and the DSM-5 criteria of gambling disorder. Results Anette fulfills almost all the criteria of Griffiths, Goodman, and the DSM-5, and she spends about 8 hours a day using her smartphone. Discussion Anette's excessive mobile phone usage includes different types of addictive behaviors: taking selfies and editing them for hours, watching movies, surfing the Internet, and, above all, visiting social sites. The cumulative time of these activities results in a very high level of smartphone use. The device in her case is a tool that provides these activities for her whole day. Most of Anette's activities with a mobile phone are connected to community sites, so her main problem may be a community site addiction.
Comparison of Dimension Reduction Methods for Automated Essay Grading
Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial: it is better to divide learning materials into sentences rather than paragraphs.
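The LSA-based document comparison underlying such a system can be sketched as follows: a term-document matrix is factored with a truncated SVD, both the training essays and a new essay are projected into the reduced space, and the new essay inherits the grade of its most similar neighbor. This is a minimal illustration of the general LSA technique, not AEA's actual scoring mechanism; the toy matrix, grades, and function names are assumptions.

```python
import numpy as np

def lsa_similarity(term_doc, query_vec, k=2):
    """Project documents and a query essay into a k-dimensional LSA
    space via truncated SVD and return cosine similarities."""
    U, s, _ = np.linalg.svd(term_doc, full_matrices=False)
    Uk, sk = U[:, :k], s[:k]
    docs_k = (term_doc.T @ Uk) / sk     # training essays in LSA space
    query_k = (query_vec @ Uk) / sk     # new essay in LSA space
    norms = np.linalg.norm(docs_k, axis=1) * np.linalg.norm(query_k)
    return (docs_k @ query_k) / np.where(norms == 0, 1.0, norms)

# Toy 4-term x 3-essay count matrix and teacher grades (illustrative data).
X = np.array([[2., 0., 1.],
              [1., 1., 0.],
              [0., 2., 1.],
              [0., 1., 2.]])
grades = [5, 3, 4]
new_essay = np.array([2., 1., 0., 0.])  # term counts of an ungraded essay
sims = lsa_similarity(X, new_essay)
predicted_grade = grades[int(np.argmax(sims))]
```

A real system would build the matrix from the learning materials (split into sentences, per the finding above) and combine several graded neighbors rather than taking a single nearest one.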
Hate Speech: Asian American Students’ Justice Judgments and Psychological Responses
Two experiments using Asian American university student participants examined the distinctive characteristics of responses to racist hate speech relative to responses to other forms of offense. The studies varied the target of insulting speech (Asian, African, and overweight person) or the nature of the offense (petty theft vs. insulting speech). Participant variables included collective self-esteem and social identification. Results indicate that hate speech directed at ethnic targets is judged to deserve more severe punishment than other forms of offensive speech and petty theft. Hate speech also produces more extreme emotional responses and, in the case of an Asian target, has a depressing influence on collective self-esteem. Ethnic identification moderated punishment responses in Study 1 only. The theoretical and practical implications of the results are discussed.
Genetic association study of circadian genes with seasonal pattern in bipolar disorders
About one fourth of patients with bipolar disorders (BD) have depressive episodes with a seasonal pattern (SP) coupled to a more severe disease. However, the underlying genetic influence on a SP in BD remains to be identified. We studied 269 BD Caucasian patients, with and without SP, recruited from university-affiliated psychiatric departments in France and performed a genetic single-marker analysis followed by a gene-based analysis on 349 single nucleotide polymorphisms (SNPs) spanning 21 circadian genes and 3 melatonin pathway genes. A SP in BD was nominally associated with 14 SNPs identified in 6 circadian genes: NPAS2, CRY2, ARNTL, ARNTL2, RORA and RORB. After correcting for multiple testing, using a false discovery rate approach, the associations remained significant for 5 SNPs in NPAS2 (chromosome 2:100793045-100989719): rs6738097 (pc = 0.006), rs12622050 (pc = 0.006), rs2305159 (pc = 0.01), rs1542179 (pc = 0.01), and rs1562313 (pc = 0.02). The gene-based analysis of the 349 SNPs showed that rs6738097 (NPAS2) and rs1554338 (CRY2) were significantly associated with the SP phenotype (respective Empirical p-values of 0.0003 and 0.005). The associations remained significant for rs6738097 (NPAS2) after Bonferroni correction. The epistasis analysis between rs6738097 (NPAS2) and rs1554338 (CRY2) suggested an additive effect. Genetic variations in NPAS2 might be a biomarker for a seasonal pattern in BD.
Therapeutic Exercise Training to Reduce Chronic Headache in Working Women: Design of a Randomized Controlled Trial.
BACKGROUND Cervicogenic headache and migraine are common causes of visits to physicians and physical therapists. Few randomized trials utilizing active physical therapy and progressive therapeutic exercise have been previously published. The existing evidence on active treatment methods supports a moderate effect on cervicogenic headache. OBJECTIVE The aim of this study is to investigate whether a progressive, group-based therapeutic exercise program decreases the intensity and frequency of chronic headache among women compared with a control group receiving a sham dose of transcutaneous electrical nerve stimulation (TENS) and stretching exercises. DESIGN A randomized controlled trial with 6-month intervention and follow-up was developed. The participants were randomly assigned to either a treatment group or a control group. SETTING The study is being conducted at 2 study centers. PATIENTS The participants are women aged 18 to 60 years with chronic cervicogenic headache or migraine. INTERVENTION The treatment group's exercise program consisted of 6 progressive therapeutic exercise modules, including proprioceptive low-load progressive craniocervical and cervical exercises and high-load exercises for the neck muscles. The participants in the control group received 6 individually performed sham TENS treatment sessions. MEASUREMENTS The primary outcome is the intensity of headache. The secondary outcomes are changes in frequency and duration of headache, neck muscle strength, neck and shoulder flexibility, impact of headache on daily life, neck disability, fear-avoidance beliefs, work ability, and quality of life. Between-group differences will be analyzed separately at 6, 12, and 24 months with generalized linear mixed models. In the case of count data (eg, frequency of headache), Poisson or negative binomial regression will be used. LIMITATIONS The therapists are not blinded. 
CONCLUSIONS The effects of specific therapeutic exercises on frequency, intensity, and duration of chronic headache and migraine will be reported.