title | abstract
---|---
Minimally invasive, non-ablative Er:YAG laser treatment of stress urinary incontinence in women—a pilot study | The study presents an assessment of the mechanism of action and a pilot clinical study of the efficacy and safety of the Er:YAG laser for the treatment of stress urinary incontinence (SUI). The subject of this study is treatment of SUI with a 2940 nm Er:YAG laser operating in a special SMOOTH mode designed to raise the temperature of the vaginal mucosa to at most 60–65 °C without ablating the epidermis. Numerical modelling of the temperature distribution within mucosa tissue following irradiation with the SMOOTH mode Er:YAG laser was performed in order to determine the appropriate range of laser parameters. The laser treatment parameters were further confirmed by measuring in vivo temperatures of the vaginal mucosa using a thermal camera. To investigate the clinical efficacy and safety of the SMOOTH mode Er:YAG laser SUI treatment, a pilot clinical study was performed. The study recruited 31 female patients suffering from SUI. Follow-ups were scheduled at 1, 2, and 6 months post treatment. ICIQ-UI questionnaires were collected as the primary trial endpoint. Secondary endpoints included perineometry and residual urine volume measurements at baseline and all follow-ups. Thermal camera measurements showed the optimal increase in temperature of the vaginal mucosa following treatment of SUI with a SMOOTH mode Er:YAG laser. The primary endpoint, the change in ICIQ-UI score, showed clinically relevant and statistically significant improvement at all follow-ups compared to baseline scores. There was also improvement in the secondary endpoints. Only mild and transient adverse events and no serious adverse events were reported. The results indicate that non-ablative Er:YAG laser therapy is a promising minimally invasive non-surgical option for treating women with SUI symptoms. |
Integrated feature selection and higher-order spatial feature extraction for object categorization | In computer vision, the bag-of-visual words image representation has been shown to yield good results. Recent work has shown that modeling the spatial relationship between visual words further improves performance. Previous work extracts higher-order spatial features exhaustively. However, these spatial features are expensive to compute. We propose a novel method that simultaneously performs feature selection and feature extraction. Higher-order spatial features are progressively extracted based on selected lower order ones, thereby avoiding exhaustive computation. The method can be based on any additive feature selection algorithm such as boosting. Experimental results show that the method is computationally much more efficient than previous approaches, without sacrificing accuracy. |
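To make the selection-extraction interleaving concrete, here is an illustrative greedy loop (not the authors' implementation; `extract_pairs` is a hypothetical callback that materializes a second-order spatial feature from two already-selected base features):

```python
# Illustrative sketch: interleave additive feature selection with on-demand
# extraction of higher-order features built only from selected features.
import numpy as np

def progressive_selection(X, y, extract_pairs, n_rounds=10):
    """X: (n_samples, n_base_features) base visual-word features.
    extract_pairs(i, j) -> column encoding a second-order spatial
    co-occurrence of features i and j (hypothetical, supplied by caller)."""
    pool = {f: X[:, f] for f in range(X.shape[1])}   # candidate features
    selected, residual = [], y.astype(float).copy()
    for _ in range(n_rounds):
        # pick the pool feature most correlated with the current residual
        best = max(pool, key=lambda f: abs(np.corrcoef(pool[f], residual)[0, 1]))
        col = pool.pop(best)
        selected.append(best)
        # simple stagewise update (stands in for a boosting round)
        beta = np.dot(col, residual) / np.dot(col, col)
        residual -= beta * col
        # expand the pool with higher-order features derived from selected
        # ones only -- this is what avoids exhaustive enumeration
        for prev in selected[:-1]:
            pool[(prev, best)] = extract_pairs(prev, best)
    return selected
```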
Dogs do look at images: eye tracking in canine cognition research | Despite intense research on the visual communication of domestic dogs, their cognitive capacities have not yet been explored by eye tracking. The aim of the current study was to expand knowledge on the visual cognition of dogs using contact-free eye movement tracking under conditions where social cueing and associative learning were ruled out. We examined whether dogs spontaneously look at actual objects within pictures and can differentiate between pictures according to their novelty or categorical information content. Eye movements of six domestic dogs were tracked during presentation of digital color images of human faces, dog faces, toys, and alphabetic characters. We found that dogs focused their attention on the informative regions of the images without any task-specific pre-training and their gazing behavior depended on the image category. Dogs preferred the facial images of conspecifics over other categories and fixated on a familiar image longer than on novel stimuli regardless of the category. Dogs’ attraction to conspecifics over human faces and inanimate objects might reflect their natural interest, but further studies are needed to establish whether dogs possess picture object recognition. Contact-free eye movement tracking is a promising method for the broader exploration of processes underlying special socio-cognitive skills in dogs previously found in behavioral studies. |
Valuing Diversity: A Group-Value Approach to Understanding the Importance of Organizational Efforts to Support Diversity | Using Leventhal’s rules as well as the group-value model of procedural justice, we first examined how the negative effects of perceived racial discrimination on procedural justice judgments can be attenuated by perceived organizational efforts to support diversity. Second, we examined how these effects ultimately impact affective commitment and organizational citizenship behavior. We found that employees who believe some individuals in the workplace are discriminating against them on the basis of race tend to report lower levels of procedural justice from the organization. However, this negative relationship was attenuated when employees perceived that their organization was making efforts to support diversity. Results suggest that individuals’ perceptions of organizational efforts to support diversity can help restore perceptions of procedural justice for employees who experience racial discrimination at work. Improving procedural justice also positively impacts affective commitment and organizational citizenship behavior directed at the organization. Copyright © 2009 John Wiley & Sons, Ltd. |
Joint Learning of Sentence Embeddings for Relevance and Entailment | Will Andre Iguodala win NBA Finals MVP in 2015? Should Andre Iguodala have won the NBA Finals MVP award over LeBron James? 12.12am ET Andre Iguodala was named NBA Finals MVP, not LeBron. Will Donald Trump run for President in 2016? Donald Trump released “Immigration Reform that will make America Great Again” last weekend — his first, ...detailed position paper since announcing his campaign for the Republican nomination for president. The Fix: A brief history of Donald Trump blaming everything on President Obama DONALD TRUMP FOR PRESIDENT OF PLUTO! |
The American census : a social history | This book, published on the eve of the bicentennial of the American census, is the first social history of this remarkably important institution, from its origins in 1790 to the present. Margo Anderson argues that the census has always been an influential policymaking tool, used not only to determine the number of representatives apportioned to each state but also to allocate tax dollars to states, and, in the past, to define groups-such as slaves and immigrants-who were to be excluded from the American polity. "As a history of the census, this study is a delight. It is thoroughly researched and richly detailed. Anderson is to be commended for covering such an expansive chronology with such skill. . . . Anderson has woven together not only social history but also intellectual, institutional, political, and military history into a thoroughly readable book that examines not only changes in the census but also the remarkable changes that have taken place in the US."-Choice "This book is valuable, clearly written and contains many interesting facts. It should be read not only by national policymakers and the statistical community, but by all who are interested in American society."-Bryant Robey, Population Today "A solid and readable piece of social, political, and institutional history. It will be essential reading not only for historians of American politics but also for census and population experts, for any public policy formulators who rely on census figures, and for those interested in the history of numeracy and statistics."-Patricia Cline Cohen, University of California, Santa Barbara |
Fifty Years of Classification and Regression Trees 1 | Fifty years have passed since the publication of the first regression tree algorithm. New techniques have added capabilities that far surpass those of the early methods. Modern classification trees can partition the data with linear splits on subsets of variables and fit nearest neighbor, kernel density, and other models in the partitions. Regression trees can fit almost every kind of traditional statistical model, including least-squares, quantile, logistic, Poisson, and proportional hazards models, as well as models for longitudinal and multiresponse data. Greater availability and affordability of software (much of which is free) have played a significant role in helping the techniques gain acceptance and popularity in the broader scientific community. This article surveys the developments and briefly reviews the key ideas behind some of the major algorithms. |
STRUCTURED WIKI WITH ANNOTATION FOR KNOWLEDGE MANAGEMENT: AN APPLICATION TO CULTURAL HERITAGE | In this paper, we highlight how semantic wikis can be relevant solutions for building cooperative data-driven applications in domains characterized by a rapid evolution of knowledge. We point out the semantic capabilities of annotated databases and structured wikis to provide better quality of content, to support complex queries, and to serve different types of users. We then compare database application development with wikis for domains that encompass evolving knowledge. We detail the architecture of WikiBridge, a semantic wiki that integrates template-based forms and allows complex annotations as well as consistency checking. We describe the archaeological CARE project and explain the conceptual modeling approach. A specific section is dedicated to ontology design, which provides the compulsory foundational knowledge for the application. We finally review related work on the use of semantic wikis in archaeological projects. |
Optimal Planning of Electric-Vehicle Charging Stations in Distribution Systems | With the progressive exhaustion of fossil energy and the enhanced awareness of environmental protection, more attention is being paid to electric vehicles (EVs). Inappropriate siting and sizing of EV charging stations could have negative effects on the development of EVs, the layout of the city traffic network, and the convenience of EV drivers, and could lead to an increase in network losses and a degradation in voltage profiles at some nodes. Given this background, the optimal sites of EV charging stations are first identified by a two-step screening method that considers environmental factors and the service radius of EV charging stations. Then, a mathematical model for the optimal sizing of EV charging stations is developed, with the minimization of the total cost associated with the planned EV charging stations as the objective function, and solved by a modified primal-dual interior point algorithm (MPDIPA). Finally, simulation results on the IEEE 123-node test feeder demonstrate that the developed model and method can not only attain a reasonable planning scheme for EV charging stations but also reduce network losses and improve the voltage profile. |
Polynuclear Aromatic Hydrocarbons (PAHs) in fish from the Red Sea Coast of Yemen | A detailed analytical study of Polynuclear Aromatic Hydrocarbons (PAHs) in fish from the Red Sea was undertaken using combined normal phase high pressure liquid chromatography (HPLC), gas chromatography (GC), and gas chromatography/mass spectrometry (GC/MS). This investigation involves a preliminary assessment of the sixteen parent compounds issued by the U.S. Environmental Protection Agency (EPA). The study revealed measurable levels of Σ PAHs (the sum of three- to five- or six-ring parent compounds) (49.2 ng g−1 dry weight) and total PAHs (all PAHs detected) (422.1 ng g−1 dry weight) in the edible muscle of fishes collected from the Red Sea. These concentrations are within the range of values reported for other comparable regions of the world. Mean concentrations for individual parent PAHs in fish muscles were: naphthalene 19.5, biphenyl 4.6, acenaphthylene 1.0, acenaphthene 1.2, fluorene 5.5, phenanthrene 14.0, anthracene 0.8, fluoranthene 1.5, pyrene 1.8, benz(a)anthracene 0.4, chrysene 1.9, benzo(b)fluoranthene 0.5, benzo(k)fluoranthene 0.5, benzo(e)pyrene 0.9, benzo(a)pyrene 0.5, perylene 0.2, and indeno(1,2,3-cd)pyrene 0.1 ng g−1 dry weight. The Red Sea fish extracts exhibit the low molecular weight aromatics as well as discernible alkyl-substituted species of naphthalene, fluorene, phenanthrene, and dibenzothiophene. Thus, it is suggested that the most probable source of PAHs is oil contamination originating from spillages and/or heavy ship traffic. It was concluded that the presence of PAHs in the fish muscles is not responsible for the reported fish kill phenomenon. However, the high concentrations of carcinogenic chrysene encountered in these fishes should be taken seriously as a hazard to human health. Based on fish consumption by the Yemeni population, the daily intake of total carcinogens was calculated to be 0.15 µg/person/day. |
Microstrip Stepped Impedance Resonator Bandpass Filter With an Extended Optimal Rejection Bandwidth | Bandpass filters with an optimal rejection bandwidth are designed using parallel-coupled stepped impedance resonators (SIRs). The fundamental (f0) and higher order resonant harmonics of an SIR are analyzed against the length ratio of the high- and low-impedance segments. It is found that an optimal length ratio can be obtained for each high- to low-impedance ratio to maximize the upper rejection bandwidth. A tapped-line input/output structure is exploited to create two extra transmission zeros in the stopband. The singly loaded Q of a tapped SIR is derived. With the aid of this Q, the two zeros can be independently tuned over a wide frequency range. When the positions of the two zeros are purposely located at the two leading higher order harmonics, the upper rejection band can be greatly extended. Chebyshev bandpass filters with spurious resonances pushed up to 4.4f0, 6.5f0, and 8.2f0 are fabricated and measured to demonstrate the idea. |
PALSAR Radiometric and Geometric Calibration | This paper summarizes the results obtained from geometric and radiometric calibrations of the Phased-Array L-Band Synthetic Aperture Radar (PALSAR) on the Advanced Land Observing Satellite, which has been in space for three years. All of the imaging modes of the PALSAR, i.e., single, dual, and full polarimetric strip modes and scanning synthetic aperture radar (SCANSAR), were calibrated and validated using a total of 572 calibration points collected worldwide and distributed targets selected primarily from the Amazon forest. Through raw-data characterization, antenna-pattern estimation using the distributed target data, and polarimetric calibration using the Faraday rotation-free area in the Amazon, we performed the PALSAR radiometric and geometric calibrations and confirmed that the geometric accuracy of the strip mode is 9.7-m root mean square (rms), the geometric accuracy of SCANSAR is 70 m, and the radiometric accuracy is 0.76 dB from a corner-reflector analysis and 0.22 dB from the Amazon data analysis (standard deviation). Polarimetric calibration was successful, resulting in a VV/HH amplitude balance of 1.013 (0.0561 dB) with a standard deviation of 0.062 and a phase balance of 0.612deg with a standard deviation of 2.66deg . |
Denial-of-Service in Wireless Sensor Networks: Attacks and Defenses | This survey of denial-of-service threats and countermeasures considers wireless sensor platforms' resource constraints as well as the denial-of-sleep attack, which targets a battery-powered device's energy supply. Here, we update the survey of denial-of-service threats with current threats and countermeasures. In particular, we more thoroughly explore the denial-of-sleep attack, which specifically targets the energy-efficient protocols unique to sensor network deployments. We start by exploring such networks' characteristics and then discuss how researchers have adapted general security mechanisms to account for these characteristics. |
The Data-Information-Knowledge-Wisdom Hierarchy and its Antithesis | The now taken-for-granted notion that data lead to information, which leads to knowledge, which in turn leads to wisdom was first specified in detail by R. L. Ackoff in 1988. The Data-Information-Knowledge-Wisdom hierarchy is based on filtration, reduction, and transformation. Besides being causal and hierarchical, the scheme is pyramidal, in that data are plentiful while wisdom is almost nonexistent. Ackoff’s formula linking these terms together this way permits us to ask what the opposite of knowledge is and whether analogous principles of hierarchy, process, and pyramiding apply to it. The inversion of the Data-Information-Knowledge-Wisdom hierarchy produces a series of opposing terms (including misinformation, error, ignorance, and stupidity) but not exactly a chain or a pyramid. Examining the connections between these phenomena contributes to our understanding of the contours and limits of knowledge. This presentation will revisit the Data-Information-Knowledge-Wisdom hierarchy linking these concepts together as stages of a single developmental process, with the aim of building a taxonomy for a postulated opposite of knowledge, which I will call ‘nonknowledge’. Concepts of data, information, knowledge, and wisdom are the building blocks of library and information science. Discussions and definitions of these terms pervade the literature from introductory textbooks to theoretical research articles (see Zins, 2007). Expressions linking some of these concepts predate the development of information science as a field of study (Sharma 2008). But the first to put all the terms into a single formula was Russell Lincoln Ackoff, in 1989. Ackoff posited a hierarchy at the top of which lay wisdom, and below that understanding, knowledge, information, and data, in that order. Furthermore, he wrote that “each of these includes the categories that fall below it,” and estimated that “on average about forty percent of the human mind consists of data, thirty percent information, twenty percent knowledge, ten percent understanding, and virtually no wisdom” (Ackoff, 1989, 3). This phraseology allows us to view his model as a pyramid, and indeed it has been likened to one ever since (Rowley, 2007; see figure 1). (‘Understanding’ is omitted, since subsequent formulations have not picked up on it.) Ackoff was a management consultant and former professor of management science at the Wharton School specializing in operations research and organizational theory. His article formulating what is now commonly called the Data-Information-Knowledge-Wisdom hierarchy (or DIKW for short) was first given in 1988 as a presidential address to the International Society for General Systems Research. This background may help explain his approach. Data in his terms are the product of observations, and are of no value until they are processed into a usable form to become information. Information is contained in answers to questions. Knowledge, the next layer, further refines information by making “possible the transformation of information into instructions. It makes control of a system possible” (Ackoff, 1989, 4), and that enables one to make it work efficiently. A managerial rather than scholarly perspective runs through Ackoff’s entire hierarchy, so that “understanding” for him |
Power Supply for a High-Voltage Application | In this paper, guidelines are established for designing a high-voltage power converter based on the hybrid series-parallel resonant topology, PRC-LCC, with a capacitor as output filter. As a consequence of the selection of this topology, the transformer ratio, and therefore the secondary volume, is reduced. The mathematical analysis provides an original equivalent circuit for the steady-state and dynamic behavior of the topology. A new way to construct high-voltage transformers is also proposed, pointing out its advantages and establishing an original method to evaluate the stray components of the transformer before construction. How to make the characteristics of the topology and the transformer compatible is illustrated in the context of a practical application. To demonstrate the feasibility of this solution, a high-voltage, high-power prototype was assembled and tested, showing good performance and behavior similar to that predicted by the models. Experimental results for this prototype are presented. |
Bipolar pulse generator for very high frequency (> 100 MHz) ultrasound applications | The design of a cost-effective and compact monocycle bipolar pulse generator for very high frequency (> 100 MHz) transducers and cell applications is presented. The designed pulse generator provides a monocycle bipolar pulse with a center frequency of 220 MHz, a -6 dB bandwidth of 100 %, and a peak-to-peak amplitude of 2 V. Moreover, it generates less ringing, which improves the axial resolution of ultrasound systems. For transducers driven by the pulse generator, the -6 dB bandwidth of received echo signals was improved considerably, by 122 % compared to the Panametrics 5900 transmitter. In addition, wire phantom measurements are presented. |
Common Mode Noise Cancellation for Electrically Non-Contact ECG Measurement System on a Chair | An electrically non-contact ECG measurement system on a chair can be applied in various fields for continuous health monitoring in daily life. However, in such a system the body is electrically floating due to the capacitive electrodes, and the floating body is very sensitive to external noise and motion artifacts, which enter the measurement system as common mode noise. In this paper, a driven-seat-ground circuit, similar to the driven-right-leg circuit, is proposed to reduce the common mode noise. An analysis of the equivalent circuit is performed, and output signal waveforms are compared between the driven-seat-ground and capacitive-ground configurations. As a result, the driven-seat-ground circuit, acting as negative feedback, significantly improves the performance of the fully capacitive ECG measurement system. |
Angle Estimation of Simultaneous Orthogonal Rotations from 3D Gyroscope Measurements | A 3D gyroscope provides measurements of angular velocities around its three intrinsic orthogonal axes, enabling angular orientation estimation. Because the measured angular velocities represent simultaneous rotations, it is not appropriate to consider them sequentially. Rotations in general are not commutative, and each possible rotation sequence has a different resulting angular orientation. None of these angular orientations is the correct simultaneous rotation result. However, every angular orientation can be represented by a single rotation. This paper presents an analytic derivation of the axis and angle of the single rotation equivalent to three simultaneous rotations around orthogonal axes when the measured angular velocities or their proportions are approximately constant. Based on the resulting expressions, a vector called the simultaneous orthogonal rotations angle (SORA) is defined, with components equal to the angles of three simultaneous rotations around coordinate system axes. The orientation and magnitude of this vector are equal to the equivalent single rotation axis and angle, respectively. As long as the orientation of the actual rotation axis is constant, given the SORA, the angular orientation of a rigid body can be calculated in a single step, thus making it possible to avoid computing the iterative infinitesimal rotation approximation. The performed test measurements confirm the validity of the SORA concept. SORA is simple and well-suited for use in the real-time calculation of angular orientation based on angular velocity measurements derived using a gyroscope. Moreover, because of its demonstrated simplicity, SORA can also be used in general angular orientation notation. |
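The closed-form construction described above is compact enough to sketch directly. Below is a minimal illustration, assuming an (approximately) constant angular-velocity reading over the sample interval; variable names are ours:

```python
# Minimal sketch of the SORA idea: under constant angular velocity over dt,
# the three simultaneous rotation angles form a vector whose direction is
# the equivalent single rotation axis and whose norm is the rotation angle.
import numpy as np

def sora(omega, dt):
    """omega: gyroscope reading [wx, wy, wz] in rad/s; dt: interval in s."""
    phi = np.asarray(omega) * dt          # SORA vector: per-axis angles
    angle = np.linalg.norm(phi)           # equivalent single rotation angle
    axis = phi / angle if angle > 0 else np.array([1.0, 0.0, 0.0])
    return axis, angle

def rotation_matrix(axis, angle):
    """Rodrigues' formula for the equivalent single rotation."""
    x, y, z = axis
    K = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])  # skew-symmetric form
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
```

Given a 3D gyroscope sample, `rotation_matrix(*sora(omega, dt))` yields the orientation update in a single step, with no iterative infinitesimal-rotation approximation.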
VELNET (Virtual Environment for Learning Networking) | The problems of providing a real, physical specialist laboratory to teach computer networking, such as the lack of funding and physical space and the risks and threats to the network environment and infrastructure, can be solved by the use of a virtual learning environment. Velnet is such a virtual learning environment, which we have developed and used successfully. Velnet consists of one or more host machines and operating systems, commercial virtual machine software, virtual machines and their operating systems, a virtual network connecting the virtual machines, and remote desktop display software. In order to present more computer networking concepts and to improve on our original version of Velnet, we have been developing a virtual reality overlay. This virtual reality overlay allows students to build virtual networking topologies in a virtual lab. This paper describes Velnet, our virtual environment for learning networking, and the virtual reality overlay under development. |
Univalence for inverse diagrams and homotopy canonicity | We describe a homotopical version of the relational and gluing models of type theory, and generalize it to inverse diagrams and oplax limits. Our method uses the Reedy homotopy theory on inverse diagrams, and relies on the fact that Reedy fibrant diagrams correspond to contexts of a certain shape in type theory. This has two main applications. First, by considering inverse diagrams in Voevodsky's univalent model in simplicial sets, we obtain new models of univalence in a number of (∞,1)-toposes; this answers a question raised at the Oberwolfach workshop on homotopical type theory. Second, by gluing the syntactic category of univalent type theory along its global sections functor to groupoids, we obtain a partial answer to Voevodsky's homotopy-canonicity conjecture: in 1-truncated type theory with one univalent universe of sets, any closed term of natural number type is homotopic to a numeral. |
Applying WebTables in Practice | We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com. |
Specimen Box: A tangible interaction technique for world-fixed virtual reality displays | Object selection and manipulation in world-fixed displays such as CAVE-type systems are typically achieved with tracked input devices, which lack the tangibility of real-world interactions. Conversely, due to the visual blockage of the real world, head-mounted displays allow the use of many types of real world objects that can convey realistic haptic feedback. To bridge this gap, we propose Specimen Box, an interaction technique that allows users to naturally hold a plausible physical object while manipulating virtual content inside it. This virtual content is rendered based on the tracked position of the box in relation to the user's point of view. Specimen Box provides the weight and tactile feel of an actual object and does not occlude rendered objects in the scene. The end result is that the user sees the virtual content as if it exists inside the clear physical box. We hypothesize that the effect of holding a physical box, which is a valid part of the overall scenario, would improve user performance and experience. To verify this hypothesis, we conducted a user study which involved a cognitively loaded inspection task requiring extensive manipulation of the box. We compared Specimen Box to Grab-and-Twirl, a naturalistic bimanual manipulation technique that closely mimics the mechanics of our proposed technique. Results show that in our specific task, performance was significantly faster and rotation rate was significantly lower with Specimen Box. Further, performance of the control technique was positively affected by experience with Specimen Box. |
Streaming feature selection using alpha-investing | In Streaming Feature Selection (SFS), new features are sequentially considered for addition to a predictive model. When the space of potential features is large, SFS offers many advantages over traditional feature selection methods, which assume that all features are known in advance. Features can be generated dynamically, focusing the search for new features on promising subspaces, and overfitting can be controlled by dynamically adjusting the threshold for adding features to the model. We describe α-investing, an adaptive complexity penalty method for SFS which dynamically adjusts the threshold on the error reduction required for adding a new feature. α-investing gives false discovery rate-style guarantees against overfitting. It differs from standard penalty methods such as AIC, BIC or RIC, which always drastically over- or under-fit in the limit of infinite numbers of non-predictive features. Empirical results show that SFS is competitive with much more compute-intensive feature selection methods such as stepwise regression, and allows feature selection on problems with over a million potential features. |
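To make the wealth dynamics concrete, here is a deliberately simplified sketch of the α-investing idea. The exact bid and payout rules in the paper differ in their constants, so treat `w0`, `payout`, and the bid `alpha_i` below as illustrative assumptions:

```python
# Simplified sketch of alpha-investing for streaming feature selection.
# Wealth is spent to test each arriving feature and earned back when a
# feature is accepted, which bounds the false discovery rate.
def alpha_investing(feature_pvalues, w0=0.5, payout=0.5):
    wealth, selected = w0, []
    for i, p in enumerate(feature_pvalues, start=1):
        alpha_i = wealth / (2 * i)        # bid a fraction of current wealth
        if p < alpha_i:                   # feature reduces error enough
            selected.append(i - 1)
            wealth += payout - alpha_i    # earn back more than was spent
        else:
            wealth -= alpha_i             # lose the bid; controls overfitting
    return selected
```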
Gender in Twitter: Styles, stances, and social networks | We present a study of the relationship between gender, linguistic style, and social networks, using a novel corpus of 14,000 users of Twitter. Prior quantitative work on gender often treats this social variable as a binary; we argue for a more nuanced approach. By clustering Twitter feeds, we find a range of styles and interests that reflects the multifaceted interaction between gender and language. Some styles mirror the aggregated language-gender statistics, while others contradict them. Next, we investigate individuals whose language better matches the other gender. We find that such individuals have social networks that include significantly more individuals from the other gender, and that in general, social network homophily is correlated with the use of same-gender language markers. Pairing computational methods and social theory thus offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms. |
CASMACAT: An Open Source Workbench for Advanced Computer Aided Translation | We describe an open source workbench that offers advanced computer aided translation (CAT) functionality: post-editing machine translation (MT), interactive translation prediction (ITP), visualization of word alignment, extensive logging with replay mode, integration with eye trackers and e-pen. |
Economic history and the modern economist | In this book, nine economists and economic historians express their concern for the present state of economics and its development in the future. From a variety of different approaches, these scholars reassert the importance of economic history in the training of economists and urge that this perspective should not be lost in the future. |
Exploring the relationship between knowledge management practices and innovation performance | The process of innovation depends heavily on knowledge, and the management of knowledge and human capital should be an essential element of running any type of business. Recent research indicates that organisations are not consistent in their approach to knowledge management (KM), with KM approaches being driven predominantly within an information technology (IT) or humanist framework, with little if any overlap. This paper explores the relationship between KM approaches and innovation performance through a preliminary study focusing on the manufacturing industry. The most significant implication that has emerged from the study is that managers in manufacturing firms should place more emphasis on human resource management (HRM) practices when developing innovation strategies for product and process innovations. The study shows that KM contributes to innovation performance when a simultaneous approach of “soft HRM practices” and “hard IT practices” are implemented. |
Clinical, radiographic and immunogenic effects after 1 year of tocilizumab-based treatment strategies in rheumatoid arthritis: the ACT-RAY study | OBJECTIVE
To assess the 1-year efficacy and safety of a regimen of tocilizumab plus methotrexate or placebo, which was augmented by a treat-to-target strategy from week 24.
METHODS
ACT-RAY was a double-blind, 3-year trial. Adults with active rheumatoid arthritis despite methotrexate were randomised to add tocilizumab to ongoing methotrexate (add-on strategy) or to switch to tocilizumab plus placebo (switch strategy). Tocilizumab 8 mg/kg was administered every 4 weeks. Conventional open-label disease-modifying antirheumatic drugs (DMARDs) other than methotrexate were added at week 24 or later in patients with DAS28>3.2.
RESULTS
556 patients were randomised; 85% completed 52 weeks. The proportion of patients receiving open-label DMARDs was comparable in the add-on (29%) and switch (33%) arms. Overall, week 24 results were maintained or further improved at week 52 in both arms. Some endpoints favoured the add-on strategy. Mean changes in Genant-modified Sharp scores were small; more add-on (92.8%) than switch patients (86.1%) had no radiographic progression. At week 52, comparable numbers of patients had antidrug antibodies (ADAs; 1.5% and 2.2% of add-on and switch patients, respectively) and neutralising ADAs (0.7% and 1.8%). Rates of serious adverse events and serious infections per 100 patient-years (PY) were 11.3 and 4.5 in add-on and 16.8 and 5.5 in switch patients. In patients with normal baseline values, alanine aminotransferase elevations >3× the upper limit of normal were observed in 11% of add-on and 3% of switch patients.
CONCLUSIONS
Despite a trend favouring the add-on strategy, these data suggest that both tocilizumab add-on and switch strategies led to meaningful clinical and radiographic responses. |
Bee Colony Optimization with local search for traveling salesman problem | Many real world industrial applications involve finding a Hamiltonian path with minimum cost. Some instances that belong to this category are transportation routing problem, scan chain optimization and drilling problem in integrated circuit testing and production. This paper presents a bee colony optimization (BCO) algorithm for traveling salesman problem (TSP). The BCO model is constructed algorithmically based on the collective intelligence shown in bee foraging behaviour. The model is integrated with 2-opt heuristic to further improve prior solutions generated by the BCO model. Experimental results comparing the proposed BCO model with existing approaches on a set of benchmark problems are presented. |
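The 2-opt heuristic mentioned above is standard and easy to sketch. The following is a generic implementation of the improvement step (not the paper's BCO code), applied to a tour such as one produced by the bee colony model:

```python
# Sketch of the 2-opt local search step: repeatedly reverse a tour segment
# whenever doing so shortens the tour, until no improving reversal exists.
def two_opt(tour, dist):
    """tour: list of city indices; dist[a][b]: distance between cities."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # gain from replacing edges (a,b) and (c,d) with (a,c),(b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```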
MACHINE TRANSLATION The process of translating comprises in its essence the whole secret of human understanding and social communication | The process of translating comprises in its essence the whole secret of human understanding and social communication. This chapter introduces techniques for machine translation (MT), the use of computers to automate some or all of the process of translating from one language to another. Translation, in its full generality, is a difficult, fascinating, and intensely human endeavor, as rich as any other area of human creativity. Consider the following passage from the end of Chapter 45 of the 18th-century novel The Story of the Stone, also called Dream of the Red Chamber, by Cao Xue Qin (Cao, 1792), transcribed in the Mandarin dialect: Fig. 24.1 shows the English translation of this passage by David Hawkes, in sentences labeled E1-E4. For ease of reading, instead of giving the Chinese, we have shown the English glosses of each Chinese word IN SMALL CAPS. Words in blue are Chinese words not translated into English, or English words not in the Chinese. We have shown alignment lines between words that roughly correspond in the two languages. Consider some of the issues involved in this translation. First, the English and Chinese texts are very different structurally and lexically. The four English sentences (notice the periods in blue) correspond to one long Chinese sentence. The word order of the two texts is very different, as we can see by the many crossed alignment lines in Fig. 24.1. The English has many more words than the Chinese, as we can see by the large number of English words marked in blue. Many of these differences are caused by structural differences between the two languages. For example, because Chinese rarely marks verbal aspect or tense, the English translation has additional words like as, turned to, and had begun, and Hawkes had to decide to translate Chinese tou as penetrated, rather than say was penetrating or had penetrated. Chinese has fewer articles than English, explaining the large number of blue thes. Chinese also uses far fewer pronouns than English, so Hawkes had to insert she and her in many places into the |
Distributed analytics in fog computing platforms using tensorflow and kubernetes | Modern Internet-of-Things (IoT) applications produce large amount of data and require powerful analytics approaches, such as Deep Learning to extract useful information. Existing IoT applications transmit the data to resource-rich data centers for analytics. However, it may congest networks, overload data centers, and increase security vulnerability. In this paper, we implement a platform, which integrates resources from data centers (servers) to end devices (IoT devices). We launch distributed analytics applications among the devices without sending everything to the data centers. We analyze challenges to implement such a platform and carefully adopt popular open-source projects to overcome the challenges. We then conduct comprehensive experiments on the implemented platform. The results show: (i) the benefits/limitations of distributed analytics, (ii) the importance of decisions on distributing an application across multiple devices, and (iii) the overhead caused by different components in our platform. |
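As one concrete illustration of distributing the same analytics job across data-center and edge nodes, the sketch below uses TensorFlow's multi-worker strategy. The host names and the choice of this particular API are our assumptions for illustration, not details from the paper:

```python
# Illustrative only: launching the same training script on several fog
# nodes with TensorFlow's multi-worker strategy. Host names are hypothetical
# and TF_CONFIG must be set before the strategy is created.
import json
import os
import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["server:2222", "iot-gateway:2222"]},  # hypothetical hosts
    "task": {"type": "worker", "index": 0},   # index differs per device
})
strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():                        # variables replicated across nodes
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) then trains cooperatively without shipping all raw data
# to a central data center.
```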
What Is Social Software? | The first third (roughly) of the XXth century saw two important developments. One of these was Ramsey's tracing of personal probabilities to an agent's choices. This was a precursor to the work of de Finetti, von Neumann and Morgenstern, and Savage. The other one was Turing's invention of the Turing machine and the formulation of the Church-Turing thesis according to which all computable functions on natural numbers were recursive or Turing computable. Game theory has depended heavily on the first of these developments, since of course von Neumann and Morgenstern can be regarded as the fathers of Game theory. But the other development has received less attention (within Game theory itself). This development led to the design of computers and also to fields like logic of programs, complexity theory and analysis of algorithms. It also resulted in much deeper understanding of algorithms, but only computer algorithms. Social algorithms remain largely unanalyzed except in special subfields like social choice theory or fair division [5]. These fields do not tend to analyze complex social algorithms as is done in computer science. This paper illustrates the different ingredients that make up social software: logical structure, knowledge and incentives. |
Ensuring the Quality of the Findings of Qualitative Research: Looking at Trustworthiness Criteria | This paper identifies that master of education students at the University of Dar es Salaam School of Education and the Open University of Tanzania face the challenge of deciding which trustworthiness criteria to employ, between the qualitative and the quantitative criteria, in ensuring the genuineness of a qualitative enquiry. This paper examined 323 master of education dissertations employing the qualitative research methodology and the trustworthiness criteria used by students to ensure the rigour of the findings of those dissertations. The findings indicated that most of the students in their dissertations incorrectly employed quantitative trustworthiness criteria, such as reliability and validity, to ensure the rigour of findings produced under the qualitative research methodology. Of the sampled master of education dissertations, only 21 out of 323 employed the correct qualitative trustworthiness criteria, such as credibility, transferability, confirmability, and dependability. These findings suggest that the trustworthiness of the findings of some dissertations submitted for the master's degree award is questionable. The study recommends that research methodology course lecturers strengthen the teaching of the qualitative research approach, as well as dissertation supervision, to guide students in applying the correct trustworthiness criteria for qualitative research methodologies. |
The Teaching Scholars Program: a proposed approach for promoting research integrity. | All research environments are not created equal. They possess their own unique communication style, culture, and professional mores. Coupled with these distinct professional nuances is the fact that research collaborations today span not only a campus, but also the globe. While the opportunities for cross cultural collaborations are invaluable, they may present challenges that result in misunderstandings about how a research idea should be studied and the findings presented. Such misunderstandings are sometimes found at the center of research misconduct cases. And yet in light of highly visible cases of research misconduct, the attitude about ensuring research integrity remains rather opaque. This paper discusses the merits of the Teaching Scholars Program as a mechanism by which to promote research integrity. This paper will examine this education program against the backdrop of the US Office of Research Integrity (ORI), as an established office responsible for ensuring the integrity of federally funded biomedical and behavioral research. |
Sentiment Analysis: Detecting Valence, Emotions, and Other Affectual States from Text | The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, & |
Head tracking for the Oculus Rift | We present methods for efficiently maintaining human head orientation using low-cost MEMS sensors. We particularly address gyroscope integration and compensation of dead reckoning errors using gravity and magnetic fields. Although these problems have been well-studied, our performance criteria are particularly tuned to optimize user experience while tracking head movement in the Oculus Rift Development Kit, which is the most widely used virtual reality headset to date. We also present novel predictive tracking methods that dramatically reduce effective latency (time lag), which further improves the user experience. Experimental results are shown, along with ongoing research on positional tracking. |
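The core of such a tracker is quaternion dead reckoning from gyroscope readings plus prediction over the display latency. Below is a schematic version (Hamilton convention, quaternions as [w, x, y, z]; the Rift's production filter additionally fuses gravity and magnetometer corrections, which are omitted here):

```python
# Schematic gyroscope dead reckoning for head orientation.
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions q and r, each [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(q, omega, dt):
    """First-order integration of angular velocity omega (rad/s) into q."""
    dq = quat_mul(q, np.array([0.0, *omega])) * 0.5 * dt
    q = q + dq
    return q / np.linalg.norm(q)   # re-normalize to stay a unit quaternion

# A simple predictor in the spirit of the paper: advance the current
# orientation by the measured angular velocity over the display latency.
def predict(q, omega, latency):
    return integrate_gyro(q, omega, latency)
```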
Evaluating the Cost of Software Quality | There is some confusion about the business value of quality even outside the software development context. On the one hand, there are those who believe that it is economical to maximize quality. This is the “quality is free” perspective espoused by Crosby [7], Juran and Gryna [8], and others. Their key argument is that as the voluntary costs of defect prevention are increased, the involuntary costs of rework decrease by much more than the increase in prevention costs. The net result is lower total costs, and thus quality is free. On the other hand, there are those who believe it is uneconomical to have high levels of quality and assume they must sacrifice quality to achieve other objectives such as reduced development cycles. For example, a study of adoption of the Software Engineering Institute’s Capability Maturity Model (CMM) reports the following quote from a software manager: “I’d rather have it wrong than have it late. We can always fix it later” [11]. Experiences in manufacturing relating to the cost |
Tracking speech-presence uncertainty to improve speech enhancement in non-stationary noise environments | Speech enhancement algorithms which are based on estimating the short-time spectral amplitude of the clean speech have better performance when a soft-decision gain modification, depending on the a priori probability of speech absence, is used. In reported works a fixed probability, q, is assumed. Since speech is non-stationary and may not be present in every frequency bin when voiced, we propose a method for estimating distinct values of q for different bins which are tracked in time. The estimation is based on a decision-theoretic approach for setting a threshold in each bin followed by short-time averaging. The estimated q's are used to control both the gain and the update of the estimated noise spectrum during speech presence in a modified MMSE log-spectral amplitude estimator. Subjective tests resulted in higher scores than for the IS-127 standard enhancement algorithm, when pre-processing noisy speech for a coding application. |
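Schematically, a per-bin speech-absence probability q enters the enhancement gain as a soft decision. The sketch below shows one common way such a blend can be written; the simplified likelihood ratio and the constants are our illustrative assumptions, not the paper's exact estimator:

```python
# Schematic per-bin soft-decision gain: blend the speech-present gain with
# a floor according to the speech-presence probability implied by the
# tracked a priori absence probability q. Constants are illustrative.
import numpy as np

def soft_gain(gain_h1, snr_post, q, g_min=0.1):
    """gain_h1: per-bin gain assuming speech present (e.g., MMSE-LSA);
    snr_post: a posteriori SNR per bin; q: a priori speech-absence prob."""
    # simplified likelihood-ratio form of the presence probability
    lam = (1 - q) / np.maximum(q, 1e-6) * np.exp(np.minimum(snr_post, 50))
    p = lam / (1 + lam)
    return gain_h1 ** p * g_min ** (1 - p)   # geometric blend toward the floor
```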
A grounded coplanar waveguide resonator based in-line material characterization sensor | Microwave Materials such as Rogers RO3003 are subject to process-related fluctuations in terms of the relative permittivity. The behavior of high frequency circuits like patch-antenna arrays and their distribution networks is dependent on the effective wavelength. Therefore, fluctuations of the relative permittivity will influence the resonance frequency and antenna beam direction. This paper presents a grounded coplanar wave-guide based sensor, which can measure the relative permittivity at 77 GHz, as well as at other resonance frequencies, by applying it on top of the manufactured depaneling. In addition, the sensor is robust against floating ground metallizations on inner printed circuit board layers, which are typically distributed over the entire surface below antennas. |
A Duality Based Approach for Realtime TV-L1 Optical Flow | Variational methods are among the most successful approaches to calculate the optical flow between two image frames. A particularly appealing formulation is based on total variation (TV) regularization and the robust L1 norm in the data fidelity term. This formulation can preserve discontinuities in the flow field and offers an increased robustness against illumination changes, occlusions and noise. In this work we present a novel approach to solve the TV-L1 formulation. Our method results in a very efficient numerical scheme, which is based on a dual formulation of the TV energy and employs an efficient point-wise thresholding step. Additionally, our approach can be accelerated by modern graphics processing units. We demonstrate the real-time performance (30 fps) of our approach for video inputs at a resolution of 320 × 240 pixels. |
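For readers who want the flavor of the point-wise thresholding step, here is a sketch in the spirit of the dual formulation; the array shapes and the small-residual guard are our assumptions:

```python
# Point-wise thresholding step for TV-L1 flow: given the current flow u,
# the linearized data residual rho, and the image gradient grad_I, the
# auxiliary field v has a closed-form, per-pixel update.
import numpy as np

def threshold_step(u, rho, grad_I, lam, theta):
    """u, grad_I: (..., 2) arrays; rho: (...) linearized data residual."""
    g2 = np.sum(grad_I ** 2, axis=-1, keepdims=True)      # |grad I|^2
    th = lam * theta * g2
    r = rho[..., None]
    # small residual: project onto the constraint rho(v) = 0
    v = u - r * grad_I / np.maximum(g2, 1e-9)
    # large residual: take a fixed step along +/- the image gradient
    v = np.where(r < -th, u + lam * theta * grad_I, v)
    v = np.where(r > th, u - lam * theta * grad_I, v)
    return v
```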
Predicting overall survivability in comorbidity of cancers: A data mining approach | Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges. |
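A minimal sketch of this kind of model comparison follows (generic scikit-learn pipeline; the paper's exact features, algorithms, and SEER preprocessing are not reproduced here):

```python
# Illustrative comparison of classifiers on a comorbid data set via
# cross-validated AUC, in the spirit of the study's model comparison.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def compare_models(X, y):
    """X: patient features incl. comorbidity status; y: survivability label."""
    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "tree": DecisionTreeClassifier(),
        "forest": RandomForestClassifier(n_estimators=200),
    }
    return {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
            for name, m in models.items()}
```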
Zygomatic-maxillary buttress reconstruction of midface defects with the osteocutaneous radial forearm free flap. | BACKGROUND
The purpose of this study was to evaluate morbidity, functional, and aesthetic outcomes in midface zygomatic-maxillary buttress reconstruction using the osteocutaneous radial forearm free flap (OCRFFF).
METHODS
A retrospective review of 24 consecutive patients who underwent midface reconstruction using the OCRFFF was performed. All patients had maxillectomy defects of variable extent that required restoration of the zygomatic-maxillary buttress. After harvest, the OCRFFF was fixed transversely with miniplates connecting the remaining zygoma to the anterior maxilla. When needed, orbital support was provided by titanium mesh fixed to the radial forearm bone anteriorly and placed on the remaining orbital floor posteriorly. The skin paddle was used for intraoral lining, external skin coverage, or both. The main outcome measures were flap success, donor-site morbidity, and orbital and oral complications. Facial contour, speech understandability, swallowing, oronasal separation, and socialization were also analyzed.
RESULTS
There were 6 women and 18 men, with an average age of 66 years (range, 34-87). The resulting defects after maxillectomy were (according to the Cordeiro classification; Disa et al, Ann Plast Surg 2001;47:612-619; Santamaria and Cordeiro, J Surg Oncol 2006;94:522-531): type I (8.3%), type II (33.3%), type III (45.8%), and type IV (12.5%). There were no flap losses. Donor-site complications included partial loss of the split thickness skin graft (25%) and 1 radial bone fracture. The most significant recipient-site complications were severe ectropion (24%), dystopia (8%), and oronasal fistula (12%). All the complications occurred in patients with defects that required orbital floor reconstruction and/or cheek skin coverage. The average follow-up was 11.5 months, and over 80% of the patients had adequate swallowing, speech, and reincorporation into normal daily activities.
CONCLUSIONS
The OCRFFF is an excellent alternative for midface reconstruction of the zygomatic-maxillary buttress. Complications were more common in patients who underwent resection of the orbital rim and floor (type III and IV defects) or external cheek skin. |
Accuracy of iPhone Locations: A Comparison of Assisted GPS, WiFi and Cellular Positioning | The 3G iPhone was the first consumer device to provide a seamless integration of three positioning technologies: Assisted GPS (A-GPS), WiFi positioning and cellular network positioning. This study presents an evaluation of the accuracy of locations obtained using these three positioning modes on the 3G iPhone. A-GPS locations were validated using surveyed benchmarks and compared to a traditional low-cost GPS receiver running simultaneously. WiFi and cellular positions for indoor locations were validated using high resolution orthophotography. Results indicate that A-GPS locations obtained using the 3G iPhone are much less accurate than those from regular autonomous GPS units (average median error of 8 m for ten 20-minute field tests) but appear sufficient for most Location Based Services (LBS). WiFi locations using the 3G iPhone are much less accurate (median error of 74 m for 58 observations) and fail to meet the published accuracy specifications. Positional errors in WiFi also reveal erratic spatial patterns resulting from the design of the calibration effort underlying the WiFi positioning system. Cellular positioning using the 3G iPhone is the least accurate positioning method (median error of 600 m for 64 observations), consistent with previous studies. Pros and cons of the three positioning technologies are presented in terms of coverage, accuracy and reliability, followed by a discussion of the implications for LBS using the 3G iPhone and similar mobile devices. |
Attitudes of People in the UK with HIV Who Are Antiretroviral (ART) Naïve to Starting ART at High CD4 Counts for Potential Health Benefit or to Prevent HIV Transmission | OBJECTIVE
To assess if a strategy of early ART to prevent HIV transmission is acceptable to ART naïve people with HIV with high CD4 counts.
DESIGN
ASTRA is a UK multicentre, cross sectional study of 3258 HIV outpatients in 2011/12. A self-completed questionnaire collected sociodemographic, behavioral and health data, and attitudes to ART; CD4 count was recorded from clinical records.
METHODS
ART naïve participants with CD4 ≥350 cells/µL (n = 281) were asked to agree/disagree/undecided with the statements (i) I would want to start treatment now if this would slightly reduce my risk of getting a serious illness, and (ii) I would want to start treatment now if this would make me less infectious to a sexual partner, even if there was no benefit to my own health.
RESULTS
Participants were 85% MSM, 76% white, 11% women. Of 281 participants, 49.5% and 45.2% agreed they would start ART for reasons (i) and (ii) respectively; 62.6% agreed with either (i) or (ii); 12.5% agreed with neither; 24.9% were uncertain. Factors independently associated (p<0.1) with agreement to (i) were: lower CD4, more recent HIV diagnosis, physical symptoms, not being depressed, greater financial hardship, and with agreement to (ii) were: being heterosexual, more recent HIV diagnosis, being sexually active.
CONCLUSIONS
A strategy of starting ART at high CD4 counts is likely to be acceptable to the majority of HIV-diagnosed individuals. Almost half of those with CD4 ≥350 would start ART to reduce infectiousness, even if treatment did not benefit their own health. However, a significant minority would not want to start ART either for a modest health benefit or to reduce infectivity. Any change in approach to ART initiation must take account of individual preferences. Transmission models of the potential benefit of early ART should consider that ART uptake may be lower than that seen at low CD4 counts. |
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data | This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. |
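A minimal sketch of the driver-layer idea, assuming hypothetical class and method names (the abstract does not spell out PyEHR's actual API): every storage backend implements one common contract, so the rest of the data access layer stays storage-agnostic:

```python
from abc import ABC, abstractmethod

class DriverInterface(ABC):
    """Hypothetical common contract implemented by every storage driver."""

    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def add_record(self, record: dict) -> str: ...

    @abstractmethod
    def execute_query(self, query: dict) -> list: ...

class MongoDBDriver(DriverInterface):
    """MongoDB backend (requires the pymongo package)."""

    def __init__(self, uri: str, database: str, collection: str):
        self.uri, self.database, self.collection = uri, database, collection

    def connect(self) -> None:
        from pymongo import MongoClient
        self._coll = MongoClient(self.uri)[self.database][self.collection]

    def add_record(self, record: dict) -> str:
        return str(self._coll.insert_one(record).inserted_id)

    def execute_query(self, query: dict) -> list:
        return list(self._coll.find(query))

# An Elasticsearch driver would implement the same three methods,
# so callers never depend on a concrete backend.
```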
Green Cognitive Mobile Networks With Small Cells for Multimedia Communications in the Smart Grid Environment | High-data-rate mobile multimedia applications can greatly increase energy consumption, leading to an emerging trend of addressing the “energy efficiency” aspect of mobile networks. Cognitive mobile networks with small cells are important techniques for meeting the high-data-rate requirements and improving the energy efficiency of mobile multimedia communications. However, most existing works do not consider the power grid, which provides electricity to mobile networks. Currently, the power grid is experiencing a significant shift from the traditional grid to the smart grid. In the smart grid environment, only considering energy efficiency may not be sufficient since the dynamics of the smart grid will have significant impacts on mobile networks. In this paper, we study green cognitive mobile networks with small cells in the smart grid environment. Unlike most existing studies on cognitive networks, where only the radio spectrum is sensed, our cognitive networks sense not only the radio spectrum environment but also the smart grid environment, based on which power allocation and interference management for multimedia communications are performed. We formulate the problems of electricity price decision, energy-efficient power allocation, and interference management as a three-stage Stackelberg game. A homogeneous Bertrand game with asymmetric costs is used to model price decisions made by the electricity retailers. A backward induction method is used to analyze the proposed Stackelberg game. Simulation results show that our proposed scheme can significantly reduce operational expenditure and CO2 emissions in cognitive mobile networks with small cells for multimedia communications. |
Detection of depression in low resource settings: validation of the Patient Health Questionnaire (PHQ-9) and cultural concepts of distress in Nepal. | BACKGROUND
Despite recognition of the burden of disease due to mood disorders in low- and middle-income countries, there is a lack of consensus on best practices for detecting depression. Self-report screening tools, such as the Patient Health Questionnaire (PHQ-9), require modification for low literacy populations and to assure cultural and clinical validity. An alternative approach is to employ idioms of distress that are locally salient, but these are not synonymous with psychiatric categories. Therefore, our objectives were to evaluate the validity of the PHQ-9, assess the added value of using idioms of distress, and develop an algorithm for depression detection in primary care.
METHODS
We conducted a transcultural translation of the PHQ-9 in Nepal using qualitative methods to achieve semantic, content, technical, and criterion equivalence. Researchers administered the Nepali PHQ-9 to randomly selected patients in a rural primary health care center. Trained psychosocial counselors administered a validated Nepali depression module of the Composite International Diagnostic Interview (CIDI) to validate the Nepali PHQ-9. Patients were also assessed for local idioms of distress including heart-mind problems (Nepali, manko samasya).
RESULTS
Among 125 primary care patients, 17 (14 %) were positive for a major depressive episode in the prior 2 weeks based on CIDI administration. With a Nepali PHQ-9 cutoff ≥ 10: sensitivity = 0.94, specificity = 0.80, positive predictive value (PPV) = 0.42, negative predictive value (NPV) = 0.99, positive likelihood ratio = 4.62, and negative likelihood ratio = 0.07. For heart-mind problems: sensitivity = 0.94, specificity = 0.27, PPV = 0.17, NPV = 0.97. With an algorithm comprising two screening questions (1. presence of heart-mind problems and 2. functional impairment due to heart-mind problems) to determine who should receive the full PHQ-9, the number of patients requiring administration of the PHQ-9 could be reduced by 50 %, PHQ-9 false positives would be reduced by 18 %, and 88 % of patients with depression would be correctly identified.
CONCLUSION
Combining idioms of distress with a transculturally-translated depression screener increases efficiency and maintains accuracy for high levels of detection. The algorithm reduces the time needed for primary healthcare staff to verbally administer the tool for patients with limited literacy. The burden of false positives is comparable to rates in high-income countries and is a limitation for universal primary care screening. |
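A minimal sketch of the two-stage algorithm described above; the function and parameter names are illustrative, while the cutoff (≥ 10) is the validated Nepali PHQ-9 threshold reported in the results:

```python
def screen_for_depression(heart_mind_problem: bool,
                          function_impaired: bool,
                          phq9_items=None) -> bool:
    """Two-stage screening: idioms of distress first, full PHQ-9 second."""
    # Stage 1: two screening questions; skip the PHQ-9 when either is negative,
    # which is what halves the number of full administrations.
    if not (heart_mind_problem and function_impaired):
        return False  # screened out without administering the PHQ-9

    # Stage 2: administer the nine PHQ-9 items (each scored 0-3)
    if phq9_items is None or len(phq9_items) != 9:
        raise ValueError("expected the nine PHQ-9 item scores")
    return sum(phq9_items) >= 10  # Nepali PHQ-9 cutoff from the validation study
```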
Clinical pragmatism: John Dewey and clinical ethics. | |
An Interval Classifier for Database Mining Applications | We are given a large population database that contains information about population instances. The population is known to comprise of m groups, but the population instances are not labeled with the group identi cation. Also given is a population sample (much smaller than the population but representative of it) in which the group labels of the instances are known. We present an interval classi er (IC) which generates a classi cation function for each group that can be used to e ciently retrieve all instances of the specied group from the population database. To allow IC to be embedded in interactive loops to answer adhoc queries about attributes with missing values, IC has been designed to be e cient in the generation of classi cation functions. Preliminary experimental results indicate that IC not only has retrieval and classi er generation e ciency advantages, but also compares favorably in the classi cation accuracy with current tree classi ers, such as ID3, which were primarily designed for minimizing classi cation errors. We also describe some new applications that arise from encapsulating the classi cation capability in database systems and discuss extensions to IC for it to be used in these new application domains. Current address: Computer Science Department, Rutgers University, NJ 08903 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 18th VLDB Conference Vancouver, British Columbia, Canada 1992 |
A tutorial on support vector regression | In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective. |
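For reference, the standard ε-insensitive SVR primal that such tutorials build on (a textbook formulation, not a quotation from this tutorial):

```latex
\min_{w,\,b,\,\xi,\,\xi^*} \; \frac{1}{2}\lVert w \rVert^2 + C\sum_{i=1}^{\ell}\left(\xi_i + \xi_i^*\right)
\quad \text{s.t.} \quad
\begin{cases}
y_i - \langle w, x_i \rangle - b \le \varepsilon + \xi_i \\
\langle w, x_i \rangle + b - y_i \le \varepsilon + \xi_i^* \\
\xi_i,\; \xi_i^* \ge 0
\end{cases}
```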
Self-Assembly of Semiconducting Polymers and Fullerenes for Photovoltaic Applications | In this thesis we present methodologies for control and characterization of nanoscale film morphology and self-assembly in systems containing semiconducting polymers and fullerenes for use in photovoltaic devices. These materials are of interest to the photovoltaic community due to their facile processing and relatively low cost. Organic photovoltaics consist of a photo-absorbing electron-donating polymer and an electron-accepting fullerene; upon exposure to light an electron-hole pair (exciton) is formed. This exciton can travel 10-20 nm before it finds a polymer-fullerene interface, or it will recombine. Due to this small exciton diffusion length, the study of the nanoscale morphology is pivotal in understanding and improving device properties. Here we first explore how the crystallinity of different molecular components of a blended film affects device performance. Using grazing incidence wide-angle X-ray scattering (GIWAXS), we find that different device fabrication techniques are optimized by polymers with different crystallinities. Additionally we studied all-polymer solar films through GIWAXS, which shows that these blends are approximately an addition of the two polymers; however, shifts of the polymer peaks elucidated how the polymers are mixing. To further these X-ray studies we used time-resolved microwave conductivity to study the local mobilities of fullerenes. In the second half of this thesis, we examine a hydrogel network formed from a charged amphiphilic polymer, poly(fluorene-alt-thiophene) (PFT). This polymer self-assembles into rod-like structures in water and also shows improved conductivity in dried films due to its assembled structure. Here we use small angle X-ray scattering (SAXS) and TEM to confirm the nanoscale rod-like assembly, and employ rheology to study how the three-dimensional network is held together. Finally, we examine photophysical changes upon the addition of a water-soluble fullerene, C60-N,N-dimethylpyrrolidinium iodide, to PFT, as a step towards water-processable organic solar cells. Photoexcitation of aqueous assemblies of cationic polymers and fullerenes results in the formation of free charge carriers (polarons). These separated charge carriers are stable for days to weeks, which is unprecedented in polymer/fullerene assemblies. We have shown that through these fundamental studies of device architectures and intelligent molecular design, self-assembly has the power to provide a pathway towards improved photovoltaic device properties. |
Hierarchical Bayesian Inference and Recursive Regularization for Large-Scale Classification | In this article, we address open challenges in large-scale classification, focusing on how to effectively leverage the dependency structures (hierarchical or graphical) among class labels, and how to make the inference scalable in jointly optimizing all model parameters. We propose two main approaches, namely the hierarchical Bayesian inference framework and the recursive regularization scheme. The key idea in both approaches is to reinforce the similarity among parameters across the nodes in a hierarchy or network based on the proximity and connectivity of the nodes. For scalability, we develop hierarchical variational inference algorithms and fast dual coordinate descent training procedures with parallelization. In our experiments for classification problems with hundreds of thousands of classes and millions of training instances with terabytes of parameters, the proposed methods show consistent and statistically significant improvements over other competing approaches, and the best results on multiple benchmark datasets for large-scale classification. |
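A schematic of the recursive regularization idea (notation assumed here, not quoted from the article): every class node n keeps its weight vector w_n close to that of its parent π(n) in the hierarchy, alongside the ordinary training loss L over instances:

```latex
\min_{\{w_n\}} \;\sum_{i=1}^{N} L\!\left(y_i,\; w_{n(i)}^{\top} x_i\right) \;+\; \frac{\lambda}{2} \sum_{n} \bigl\lVert w_n - w_{\pi(n)} \bigr\rVert^{2}
```

Sibling nodes are thereby pulled toward a shared parent, which is what lets rarely observed classes borrow statistical strength from their neighbors.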
Integrating Web Services with Competitive Strategies: A Balanced Scorecard Approach | The significance of aligning IT with corporate strategy is widely recognized, but the lack of appropriate methodologies prevented practitioners from integrating IT projects with competitive strategies effectively. This article addresses the issue of deploying Web services strategically using the concept of a widely accepted management tool, the balanced scorecard. A framework is developed to match potential benefits of Web services with corporate strategy in four business dimensions: innovation and learning, internal business process, customer, and financial. It is argued that the strategic benefits of implementing Web services can only be realized if the Web services initiatives are planned and implemented within the framework of an IT strategy that is designed to support the business strategy of a firm. |
Efficacy of maternal tenofovir disoproxil fumarate in interrupting mother-to-infant transmission of hepatitis B virus. | UNLABELLED
The efficacy and safety of maternal tenofovir disoproxil fumarate (TDF) in reducing mother-to-infant hepatitis B virus (HBV) transmissions is not clearly understood. We conducted a prospective, multicenter trial and enrolled 118 hepatitis B surface antigen (HBsAg)- and hepatitis B e antigen-positive pregnant women with HBV DNA ≥7.5 log10 IU/mL. The mothers received no medication (control group, n = 56, HBV DNA 8.22 ± 0.39 log10 IU/mL) or TDF 300 mg daily (TDF group, n = 62, HBV DNA 8.18 ± 0.47 log10 IU/mL) from 30-32 weeks of gestation until 1 month postpartum. Primary outcome was infant HBsAg at 6 months old. At delivery, the TDF group had lower maternal HBV DNA levels (4.29 ± 0.93 versus 8.10 ± 0.56 log10 IU/mL, P < 0.0001). Of the 121/123 newborns, the TDF group had lower rates of HBV DNA positivity at birth (6.15% versus 31.48%, P = 0.0003) and HBsAg positivity at 6 months old (1.54% versus 10.71%, P = 0.0481). Multivariate analysis revealed that the TDF group had lower risk (odds ratio = 0.10, P = 0.0434) and amniocentesis was associated with higher risk (odds ratio 6.82, P = 0.0220) of infant HBsAg positivity. The TDF group had less incidence of maternal alanine aminotransferase (ALT) levels above two times the upper limit of normal for ≥3 months (3.23% versus 14.29%, P = 0.0455), a lesser extent of postpartum elevations of ALT (P = 0.007), and a lower rate of ALT over five times the upper limit of normal (1.64% versus 14.29%, P = 0.0135) at 2 months postpartum. Maternal creatinine and creatinine kinase levels, rates of congenital anomaly, premature birth, and growth parameters in infants were comparable in both groups. At 12 months, one TDF-group child newly developed HBsAg positivity, presumably due to postnatal infection and inefficient humoral responses to vaccines.
CONCLUSIONS
Treatment with TDF for highly viremic mothers decreased infant HBV DNA at birth and infant HBsAg positivity at 6 months and ameliorated maternal ALT elevations. (Hepatology 2015;62:375-386.) |
Adaptive Estimation Approach for Parameter Identification of Photovoltaic Modules | This paper presents a novel identification technique for estimation of unknown parameters in photovoltaic (PV) systems. A single-diode model is considered for the PV system, which consists of five unknown parameters. Using information about standard test conditions, three unknown parameters are written as functions of the other two parameters in a reduced model. An objective function and a set of inequality constraints are defined for the reduced model, considering limitations of the physical system. It is shown that the nonconvex optimization problem of PV systems is converted to a convex optimization one. The constraints are enforced using a modified barrier function that generates an augmented convex objective function. An adaptive identification technique is utilized to find the optimal values of the augmented cost. Unlike most identification techniques, the proposed algorithm has a precise and unique solution, which is easy to implement. The effectiveness of the proposed approach is confirmed through simulations and experiments. |
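For context, the standard single-diode model the abstract refers to, in its textbook form; its five unknown parameters are the photocurrent I_ph, the diode saturation current I_0, the ideality factor a, the series resistance R_s, and the shunt resistance R_sh (V_t is the thermal voltage):

```latex
I \;=\; I_{ph} \;-\; I_0\!\left[\exp\!\left(\frac{V + I R_s}{a\,V_t}\right) - 1\right] \;-\; \frac{V + I R_s}{R_{sh}}
```

The paper's reduced model expresses three of these parameters as functions of the remaining two using standard-test-condition data.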
Applying Clustering of Hierarchical K-means-like Algorithm on Arabic Language | In this study a clustering technique has been implemented which is K-Means-like with a hierarchical initial set (HKM). The goal of this study is to prove that clustering document sets enhances precision in information retrieval systems, as was proved by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one. Also the effect of increasing the number of clusters on precision is studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significant results compared with the traditional information retrieval system without clustering. Additionally, it has been found that it is not necessary to increase the number of clusters to improve precision further. Keywords—Hierarchical K-Means-like clustering (HKM), K-means, cluster centroids, initial partition, document distances |
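The indexing scheme named above is the standard TF*IDF weighting; in its common textbook form, with N documents in the collection and df(t) the number of documents containing term t:

```latex
w_{t,d} \;=\; \mathrm{tf}(t,d) \,\times\, \log\frac{N}{\mathrm{df}(t)}
```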
A Multi-tenant Web Application Framework for SaaS | Software as a Service (SaaS) is a software delivery model in which software resources are accessed remotely by users. Enterprises find SaaS attractive because of its low cost. SaaS requires sharing of application servers among multiple tenants for low operational costs. Besides the sharing of application servers, customizations are needed to meet the requirements of each tenant. Supporting various levels of configuration and customization is desirable for SaaS frameworks. This paper describes a multi-tenant web application framework for SaaS. The proposed framework supports runtime customization of user interfaces and business logic by use of file-level namespaces, inheritance, and polymorphism. It supports various client-side web application technologies. |
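A minimal sketch of runtime customization through inheritance and polymorphism as described above, with a hypothetical per-tenant registry standing in for the framework's file-level namespaces:

```python
class InvoicePage:
    """Shared default implementation used by tenants without customizations."""
    def render(self) -> str:
        return "standard invoice layout"

class TenantAInvoicePage(InvoicePage):
    """Tenant-specific override, resolved polymorphically at runtime."""
    def render(self) -> str:
        return "tenant A layout with extra tax field"

# Hypothetical stand-in for a file-level namespace lookup: tenant id -> class
CUSTOMIZATIONS = {"tenant_a": TenantAInvoicePage}

def resolve_page(tenant_id: str) -> InvoicePage:
    # Fall back to the shared implementation when no override is registered
    return CUSTOMIZATIONS.get(tenant_id, InvoicePage)()

print(resolve_page("tenant_a").render())  # customized layout
print(resolve_page("tenant_b").render())  # default layout
```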
Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments | We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data. |
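The multiclass labeling energy has the usual Markov-random-field shape (schematic form; the paper's actual unary and pairwise costs come from its priors and its wall and opening detectors):

```latex
E(\ell) \;=\; \sum_{p \in \mathcal{P}} D_p(\ell_p) \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(\ell_p, \ell_q)
```

Here D_p is the cost of assigning label ℓ_p to element p, and V_pq penalizes label disagreement between neighboring elements.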
Retrospective analysis of treatment outcome of pediatric ependymomas in Korea: analysis of Korean multi-institutional data | We analyzed the treatment outcomes of intracranial ependymomas in Korean children aged <18 years. Data for 96 patients were collected from five hospitals. Survival rates were calculated using the Kaplan–Meier method. Log-rank tests for univariate analyses and Cox regression model for multivariate analysis were conducted to identify prognostic factors for survival. The median age of the patients was 4 years (range, 0.3–17.9 years). The median follow-up was 55 months (range, 2–343 months). Age <3 years was an important factor for selecting adjuvant therapy after surgery. Among children aged <3 and ≥3 years, adjuvant radiotherapy (RT) was applied to 55 and 84 %, respectively, and adjuvant chemotherapy to 52 and 10 %, respectively. The 5 year local progression-free survival (LPFS), disease-free survival (DFS), and overall survival (OS) rates were 54, 52, and 79 %, respectively. Gross total resection was the most significant prognostic factor for all survival endpoints. Age ≥3 years and RT were significant prognostic factors for superior LPFS and DFS. However, the significance of age was lost in multivariate analysis for DFS. LPFS, DFS, and OS were superior in patients who started RT within 44 days after surgery (the median time) compared with patients who started RT later, among those aged ≥3 years. Postoperative RT was a strong prognostic factor for intracranial ependymomas. Our results suggest that early use of RT is an essential component of treatment, and should be considered in young children. |
A Study on the CBOW Model's Overfitting and Stability | Word vectors are distributed representations of word features. The Continuous Bag-of-Words model (CBOW) is a state-of-the-art model for learning word vectors, yet it can be improved, because we find that CBOW is prone to overfitting and instability. We use two methods to solve these two problems so that CBOW can learn better word vectors. In this study, we add a regularized structural risk term to the objective function of the CBOW model and propose inverse word frequency encoding for the CBOW model. Our proposed methods substantially improve the quality of word vectors, boosting r from 0.638 to 0.696 for word relatedness and total accuracy from 30.80% to 38.43% for word-pair relationship relatedness, on 52 million training words with 200 dimensions. |
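A schematic of the amended objective (an assumed form; the paper's exact regularizer may differ): the usual CBOW negative log-likelihood over contexts of radius c, plus an L2 structural-risk penalty on the parameters θ:

```latex
J(\theta) \;=\; -\sum_{t=1}^{T} \log p\!\left(w_t \,\middle|\, w_{t-c}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+c}\right) \;+\; \frac{\lambda}{2}\,\lVert \theta \rVert^{2}
```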
Wikify!: linking documents to encyclopedic knowledge | This paper introduces the use of Wikipedia as a resource for automatic keyword extraction and word sense disambiguation, and shows how this online encyclopedia can be used to achieve state-of-the-art results on both these tasks. The paper also shows how the two methods can be combined into a system able to automatically enrich a text with links to encyclopedic knowledge. Given an input document, the system identifies the important concepts in the text and automatically links these concepts to the corresponding Wikipedia pages. Evaluations of the system show that the automatic annotations are reliable and hardly distinguishable from manual annotations. |
What Actually Wins Soccer Matches: Prediction of the 2011-2012 Premier League for Fun and Profit | Sports analytics is a fascinating problem area in which to apply statistical learning techniques. This thesis brings new data to bear on the problem of predicting the outcome of a soccer match. We use frequency counts of in-game events, sourced from the Manchester City Analytics program, to predict the 380 matches of the 2011-2012 Premier League season. We generate prediction models with multinomial regression and rigorously test them with betting simulations. An extensive review of prior efforts is presented, as well as a novel theoretically optimal betting strategy. We measure performance across different feature sets and betting strategies. Accuracy and simulated profit far exceeding those of all earlier efforts are achieved. |
Background Subtraction via 3D Convolutional Neural Networks | Background subtraction can be treated as the binary classification problem of highlighting the foreground region in a video whilst masking the background region, and has been broadly applied in various vision tasks such as video surveillance and traffic monitoring. However, it still remains a challenging task due to complex scenes and the lack of prior knowledge about temporal information. In this paper, we propose a novel background subtraction model based on 3D convolutional neural networks (3D CNNs) which combines temporal and spatial information to effectively separate the foreground from all the sequences in an end-to-end manner. Different from conventional models, we view background subtraction as a three-class classification problem, i.e., the foreground, the background and the boundary. This design can obtain more reasonable results than existing baseline models. Experiments on the Change Detection 2012 dataset verify the potential of our model both quantitatively and qualitatively. |
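A minimal PyTorch sketch of the idea, not the paper's architecture: a small 3D CNN consumes a short clip (batch, channels, frames, height, width) and emits per-pixel logits over the three classes named above:

```python
import torch
import torch.nn as nn

class BgSub3DCNN(nn.Module):
    """Tiny 3-class (foreground / background / boundary) 3D CNN sketch."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # spatio-temporal conv
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Predict one label per pixel after collapsing the temporal axis
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        f = self.features(clip)   # (B, 32, T, H, W)
        f = f.mean(dim=2)         # temporal average -> (B, 32, H, W)
        return self.head(f)       # (B, 3, H, W) per-pixel class logits

logits = BgSub3DCNN()(torch.randn(1, 3, 8, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64])
```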
Efficient Realization of Householder Transform Through Algorithm-Architecture Co-Design for Acceleration of QR Factorization | QR factorization is a ubiquitous operation in many engineering and scientific applications. In this paper, we present efficient realization of Householder Transform (HT) based QR factorization through algorithm-architecture co-design, where we achieve performance improvement of 3-90x in terms of Gflops/watt over state-of-the-art multicore, General Purpose Graphics Processing Units (GPGPUs), Field Programmable Gate Arrays (FPGAs), and ClearSpeed CSX700. Theoretical and experimental analysis of classical HT is performed for opportunities to exhibit a higher degree of parallelism, where parallelism is quantified as the number of parallel operations per level in the Directed Acyclic Graph (DAG) of the transform. Based on theoretical analysis of classical HT, an opportunity to re-arrange computations in the classical HT is identified that results in Modified HT (MHT), where it is shown that MHT exhibits 1.33x higher parallelism than classical HT. Experiments on off-the-shelf multicore and General Purpose Graphics Processing Units (GPGPUs) for HT and MHT suggest that MHT is capable of achieving slightly better or equal performance compared to classical HT based QR factorization realizations in the optimized software packages for Dense Linear Algebra (DLA). We implement MHT on a customized platform for Dense Linear Algebra (DLA) and show that MHT achieves 1.3x better performance than a native implementation of classical HT on the same accelerator. For custom realization of HT and MHT based QR factorization, we also identify macro operations in the DAGs of HT and MHT that are realized on a Reconfigurable Data-path (RDP). We also observe that, due to the re-arrangement of computations in MHT, custom realization of MHT is capable of achieving 12 percent better performance improvement over multicore and GPGPUs than the performance improvement reported by General Matrix Multiplication (GEMM) over highly tuned DLA software packages for multicore and GPGPUs, which is counter-intuitive. |
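For reference, a textbook NumPy implementation of classical Householder-based QR (the baseline being accelerated; the paper's MHT re-arrangement itself is not reproduced here):

```python
import numpy as np

def householder_qr(A):
    """Classical Householder QR: A = Q @ R (textbook sketch)."""
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = R[k:, k]
        # Reflect x onto a multiple of e1; sign choice avoids cancellation
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        v = x.copy()
        v[0] -= alpha                      # v = x - alpha * e1
        if np.linalg.norm(v) < 1e-15:      # column already triangular
            continue
        v /= np.linalg.norm(v)
        # Apply H = I - 2 v v^T to the trailing blocks of R and Q
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.random.rand(5, 3)
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A))  # True
```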
Real-Time Bidding by Reinforcement Learning in Display Advertising | The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods. |
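A toy dynamic-programming sketch of the sequential-bidding formulation on a tiny state space (auctions left, budget left); the win probabilities, impression value, and pay-your-bid cost model are illustrative stand-ins, whereas the paper learns the transition model from auction data and approximates the value function with neural networks:

```python
import numpy as np

# Illustrative market model: bidding a wins with probability a / (a + 2).
MAX_BID, VALUE = 5, 1.0
win_prob = {a: a / (a + 2.0) for a in range(MAX_BID + 1)}

def optimal_policy(T: int, B: int):
    """V[t][b]: expected reward with t auctions and integer budget b left."""
    V = np.zeros((T + 1, B + 1))
    policy = np.zeros((T + 1, B + 1), dtype=int)
    for t in range(1, T + 1):
        for b in range(B + 1):
            best, best_a = -1.0, 0
            for a in range(min(b, MAX_BID) + 1):   # cannot bid beyond budget
                q = (win_prob[a] * (VALUE + V[t - 1][b - a])   # win, pay a
                     + (1 - win_prob[a]) * V[t - 1][b])        # lose, keep b
                if q > best:
                    best, best_a = q, a
            V[t][b], policy[t][b] = best, best_a
    return V, policy

V, pi = optimal_policy(T=20, B=30)
print(pi[20][30], V[20][30])  # first bid to place and expected total reward
```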
CellSDN: Software-Defined Cellular Networks | Existing cellular networks suffer from inflexible and expensive equipment, complex control-plane protocols, and vendor-specific configuration interfaces. In this paper, we argue that software defined networking (SDN) can simplify the design and management of cellular data networks, while enabling new services. However, supporting many subscribers, frequent mobility, fine-grained measurement and control, and real-time adaptation introduces new scalability challenges that future SDN architectures should address. As a first step, we present a software-defined cellular network architecture that (i) allows controller applications to express policies based on the attributes of subscribers, rather than network addresses and locations, (ii) enables real-time, fine-grained control via a local agent on each switch, (iii) extends switches to support features like deep packet inspection and header compression to meet the needs of cellular data services, and (iv) supports flexible “slicing” of network resources based on the attributes of subscribers, rather than the packet header fields, and flexible “slicing” of base stations and radio resources by having the controller handle radio resource management, admission control and mobility in each slice. |
Low power radiofrequency electromagnetic radiation for the treatment of pain due to osteoarthritis of the knee. | OBJECTIVE
To investigate the analgesic effect of low power radiofrequency electromagnetic radiation (RF) in osteoarthritis (OA) of the knee.
METHODS
In a randomized study on 40 patients the analgesic effect of RF was compared with the effect of transcutaneous electrical nerve stimulation (TENS). RF and TENS applications were repeated every day for a period of 5 days. The therapeutic effect was evaluated by a visual analogue scale (VAS) and by Lequesne's index: tests were performed before, immediately after and 30 days after therapy.
RESULTS
RF therapy induced a statistically significant and long lasting decrease of VAS and of Lequesne's index; TENS induced a decrease of VAS and of Lequesne's index which was not statistically significant.
CONCLUSIONS
A therapeutic effect of RF was therefore demonstrated on pain and disability due to knee OA. This effect was better than the effect of TENS, which is a widely used analgesic technique. Such a difference in therapeutic effect may be due to the fact that TENS acts only on superficial tissues and nerve terminals, while RF acts by increasing superficial and deep tissue temperature. |
Alcohol-facilitated ankylosis of the distal intertarsal and tarsometatarsal joints in horses with osteoarthritis. | OBJECTIVE
To assess the safety and efficacy of alcohol-facilitated ankylosis of the distal intertarsal (DIT) and tarsometatarsal (TMT) joints in horses with osteoarthritis (bone spavin).
DESIGN
Prospective clinical trial.
ANIMALS
21 horses with DIT or TMT joint-associated hind limb lameness and 5 nonlame horses.
PROCEDURES
11 horses (group 1) underwent lameness, force-plate, and radiographic examinations; following intra-articular analgesia, lameness and force-plate examinations were repeated. Nonlame horses were used for force-plate data acquisition only. Following localization of lameness to the DIT and TMT joints, contrast arthrographic evaluation was performed; when communication with the tibiotarsal joint was not evident or suspected, 70% ethyl alcohol (3 mL) was injected. Group 1 horses underwent lameness, force-plate, and radiographic examinations every 3 months for 1 year. Ten other horses (group 2) underwent lameness and radiographic examinations followed by joint injection with alcohol; follow-up information was obtained from owners or via clinical examination.
RESULTS
Significant postinjection reduction in lameness (after 3 days to 3 months) was evident for all treated horses. Twelve months after injection, 10 of 11 group 1 horses were not lame; lameness grade was 0.5 in 1 horse. Follow-up information was available for 9 of 10 group 2 horses; 7 were not lame, and 2 remained mildly lame (1 had a concurrent problem in the injected limb, and the other had DIT joint collapse that precluded needle entry).
CONCLUSIONS AND CLINICAL RELEVANCE
Intra-articular alcohol injection in horses with bone spavin resulted in a rapid (usually within 3 months) reduction in lameness and joint space collapse. |
Synthesis Lectures on Artificial Intelligence and Machine Learning | Learning from Demonstration (LfD) explores techniques for learning a task policy from examples provided by a human teacher. The field of LfD has grown into an extensive body of literature over the past 30 years, with a wide variety of approaches for encoding human demonstrations and modeling skills and tasks. Additionally, we have only recently seen a focus on gathering data from non-expert human teachers (i.e., domain experts but not robotics experts). In this book, we provide an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers. We begin, in the introduction, with a unification of the various terminology seen in the literature as well as an outline of the design choices one has in designing an LfD system. Chapter 2 gives a brief survey of the psychology literature that provides insights from human social learning that are relevant to designing robotic social learners. Chapter 3 walks through an LfD interaction, surveying the design choices one makes and state of the art approaches in prior work. First is the choice of input: how the human teacher interacts with the robot to provide demonstrations. Next is the choice of modeling technique. Currently, there is a dichotomy in the field between approaches that model low-level motor skills and those that model high-level tasks composed of primitive actions. We devote a chapter to each of these. Chapter 7 covers interactive and active learning approaches that allow the robot to refine an existing task model. Finally, Chapter 8 provides best practices for evaluation of LfD systems, with a focus on how to approach experiments with human subjects for this domain. |
Classifying the findings in qualitative studies. | A key task in conducting research integration studies is determining what features to account for in the research reports eligible for inclusion. In the course of a methodological project, the authors found a remarkable uniformity in the way findings were produced and presented, no matter what the stated or implied frame of reference or method. They describe a typology of findings, which they developed to bypass the discrepancy between method claims and the actual use of methods, and efforts to ascertain its utility and reliability. The authors propose that the findings in journal reports of qualitative studies in the health domain can be classified on a continuum of data transformation as no finding, topical survey, thematic survey, conceptual/thematic description, or interpretive explanation. |
Mastering Erosion of Software Architecture in Automotive Software Product Lines | Most automobile manufacturers maintain many vehicle types to keep a successful position on the market. Through further development, all vehicle types gain a diverse amount of new functionality. Additional features have to be supported by the car’s software. For time-efficient accomplishment, usually the existing electronic control unit (ECU) code is extended. In the majority of cases this evolutionary development process is accompanied by a constant decay of the software architecture. This effect, known as software erosion, leads to an increasing deviation from the requirements specifications. To counteract the erosion it is necessary to continuously restore the architecture with respect to the specification. Automobile manufacturers cope with the erosion of their ECU software with varying degrees of success. We successfully applied a methodical and structured approach of architecture restoration in the specific case of the brake servo unit (BSU). Software product lines were extracted from existing BSU variants by explicit projection of the architecture variability and decomposition of the original architecture. After initial application, this approach was capable of restoring the BSU architecture recurrently. |
Standoff Detection of Weapons and Contraband in the 100 GHz to 1 THz Region | The techniques and technologies currently being investigated to detect weapons and contraband concealed on persons under clothing are reviewed. The basic phenomenology of the atmosphere and materials that must be understood in order to realize such a system are discussed. The component issues and architectural designs needed to realize systems are outlined. Some conclusions with respect to further technology developments are presented. |
Anonymity and Online Commenting: The Broken Windows Effect and the End of Drive-by Commenting | In this study we ask how regulations about commenter identity affect the quantity and quality of discussion on commenting fora. In December 2013, the Huffington Post changed the rules for its comment forums to require participants to authenticate their accounts through Facebook. This enabled a large-scale 'before and after' analysis. We collected over 42m comments on 55,000 HuffPo articles published in the period January 2013 to June 2014 and analysed them to determine how changes in identity disclosure impacted on discussions in the publication's comment pages. We first report our main results on the quantity of online commenting, where we find both a reduction and a shift in its distribution from politicised to blander topics. We then discuss the quality of discussion. Here we focus on the subset of 18.9m commenters who were active both before and after the change, in order to disentangle the effects of the worst offenders withdrawing and the remaining commenters modifying their tone. We find a 'broken windows' effect, whereby comment quality improves even when we exclude interaction with trolls and spammers. |
MIMO Precoding and Combining Solutions for Millimeter-Wave Systems | Millimeter-wave communication is one way to alleviate the spectrum gridlock at lower frequencies while simultaneously providing high-bandwidth communication channels. MmWave makes use of MIMO through large antenna arrays at both the base station and the mobile station to provide sufficient received signal power. This article explains how beamforming and precoding are different in MIMO mmWave systems than in their lower-frequency counterparts, due to different hardware constraints and channel characteristics. Two potential architectures are reviewed: hybrid analog/digital precoding/combining and combining with low-resolution analog- to-digital converters. The potential gains and design challenges for these strategies are discussed, and future research directions are highlighted. |
Environmental Science & Policy | |
Exploring Markov Logic Networks for Question Answering | Elementary-level science exams pose significant knowledge acquisition and reasoning challenges for automatic question answering. We develop a system that reasons with knowledge derived from textbooks, represented in a subset of first-order logic. Automatic extraction, while scalable, often results in knowledge that is incomplete and noisy, motivating use of reasoning mechanisms that handle uncertainty. Markov Logic Networks (MLNs) seem a natural model for expressing such knowledge, but the exact way of leveraging MLNs is by no means obvious. We investigate three ways of applying MLNs to our task. First, we simply use the extracted science rules directly as MLN clauses and exploit the structure present in hard constraints to improve tractability. Second, we interpret science rules as describing prototypical entities, resulting in a drastically simplified but brittle network. Our third approach, called Praline, uses MLNs to align lexical elements as well as define and control how inference should be performed in this task. Praline demonstrates a 15% accuracy boost and a 10x reduction in runtime as compared to other MLN-based methods, and comparable accuracy to word-based baseline approaches. |
An intrusion detection system against malicious attacks on the communication network of driverless cars | Vehicular ad hoc networks (VANETs) have become a significant technology in recent years because of the emerging generation of self-driving cars such as Google driverless cars. VANETs have more vulnerabilities compared to other networks such as wired networks, because these networks are an autonomous collection of mobile vehicles with no fixed security infrastructure, a highly dynamic topology, and an open wireless medium, which makes them more vulnerable to attacks. It is important to design new approaches and mechanisms to raise the security of these networks and protect them from attacks. In this paper, we design an intrusion detection mechanism for VANETs using Artificial Neural Networks (ANNs) to detect Denial of Service (DoS) attacks. The main role of the IDS is to detect attacks using data generated from the network behavior, such as a trace file. The IDS uses the features extracted from the trace file as auditable data. In this paper, we propose anomaly and misuse detection to detect malicious attacks. |
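A minimal sketch of the detection stage under stated assumptions: the four trace-file features and the labels are synthetic stand-ins, and scikit-learn's MLPClassifier plays the role of the ANN:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative features extracted from a simulation trace file, e.g.
# [packets_sent, packets_dropped, avg_inter_arrival, route_changes];
# labels: 0 = normal behavior, 1 = DoS attack. Synthetic stand-in data.
rng = np.random.default_rng(0)
X_normal = rng.normal([50, 2, 0.10, 1], [10, 1, 0.02, 1], size=(200, 4))
X_dos = rng.normal([400, 60, 0.01, 8], [50, 10, 0.005, 2], size=(200, 4))
X = np.vstack([X_normal, X_dos])
y = np.array([0] * 200 + [1] * 200)

ids = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
ids.fit(X, y)
print(ids.predict([[420, 70, 0.008, 9]]))  # -> [1], flagged as DoS
```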
Fleets of robots for precision agriculture: a simulation environment | Purpose – The purpose of this paper is to propose going one step further in the simulation tools related to agriculture by integrating fleets of mobile robots for the execution of precision agriculture techniques. The proposed new simulation environment allows the user to define different mobile robots and agricultural implements. Design/methodology/approach – With this computational tool, the crop field, the fleet of robots and the different sensors and actuators that are incorporated into each robot can be configured by means of two interfaces: a configuration interface and a graphical interface, which interact with each other. Findings – The system presented in this article unifies two very different areas – robotics and agriculture – to study and evaluate the implementation of precision agriculture techniques in a 3D virtual world. The simulation environment allows the users to represent realistic characteristics from a defined location and to model different variabilities that may affect the task performance accuracy of the fleet of robots. Originality/value – This simulation environment, the first to incorporate fleets of heterogeneous mobile robots, provides realistic 3D simulations and videos, which grant a good representation and a better understanding of the robot labor in agricultural activities for researchers and engineers from different areas, who could be involved in the design and application of precision agriculture techniques. The environment is available on the Internet, which is an added value for its expansion in the agriculture/robotics community. |
Autophagy and the Cell Cycle: A Complex Landscape | Autophagy is a self-degradation pathway, in which cytoplasmic material is sequestered in double-membrane vesicles and delivered to the lysosome for degradation. Under basal conditions, autophagy plays a homeostatic function. However, in response to various stresses, the pathway can be further induced to mediate cytoprotection. Defective autophagy has been linked to a number of human pathologies, including neoplastic transformation, even though autophagy can also sustain the growth of tumor cells in certain contexts. In recent years, a considerable correlation has emerged between autophagy induction and stress-related cell-cycle responses, as well as unexpected roles for autophagy factors and selective autophagic degradation in the process of cell division. These advances have obvious implications for our understanding of the intricate relationship between autophagy and cancer. In this review, we will discuss our current knowledge of the reciprocal regulation connecting the autophagy pathway and cell-cycle progression. Furthermore, key findings involving nonautophagic functions for autophagy-related factors in cell-cycle regulation will be addressed. |
Square Monopole Antenna for UWB Applications With Novel Rod-Shaped Parasitic Structures and Novel V-Shaped Slots in the Ground Plane | A novel band-notched ultrawideband monopole antenna is presented. The proposed antenna consists of a stepped square radiating patch, two rod-shaped parasitic structures for generating the band-notched function instead of changing the patch or feeding shapes, and a notched ground plane with two novel V-shaped slots that provide a wide usable bandwidth of more than 200% (3-22.5 GHz). With the two rod-shaped parasitic structures, frequency band-stop performance is generated, and characteristics such as band-notch frequency and bandwidth can be controlled. The designed antenna has a small size of 12 × 18 mm2 while showing band-rejection performance in the 5.1-5.9 GHz frequency band. The bandwidth of the proposed antenna is wider than those of the antennas in the references. |
Personalization of image enhancement | We address the problem of incorporating user preference in automatic image enhancement. Unlike generic tools for automatically enhancing images, we seek to develop methods that can first observe user preferences on a training set, and then learn a model of these preferences to personalize enhancement of unseen images. The challenge of designing such a system lies at the intersection of computer vision, learning, and usability; we use techniques such as active sensor selection and distance metric learning in order to solve the problem. The experimental evaluation based on user studies indicates that different users do have different preferences in image enhancement, which suggests that personalization can further help improve the subjective quality of generic image enhancements. |
Laban Movement Analysis Using a Bayesian Model and Perspective Projections | Human movement is essentially the process of moving one or more body parts to a specific location along a certain trajectory. A person observing the movement might be able to recognize it through the spatial pathway alone. Kendon (Kendon, 2004) holds the view that, willingly or not, humans, when in co-presence, continuously inform one another about their intentions, interests, feelings and ideas by means of visible bodily action. Analysis of face-to-face interaction has shown that bodily action can play a crucial role in the process of interaction and communication. Kendon states that expressive actions like greeting, threat and submission often play a central role in social interaction. In order to access the expressive content of movements theoretically, a notational system is needed. Rudolf Laban (1879-1958) was a notable central European dance artist and theorist, whose work laid the foundations for Laban Movement Analysis (LMA). Used as a tool by dancers, athletes, physical and occupational therapists, it is one of the most widely used systems of human movement analysis. Robotics has already acknowledged the evidence that human movements could be an important cue for Human-Robot Interaction. Sato et al. (Sato et al., 1996), while defining the requirements for 'human symbiosis robotics', state that those robots should be able to use non-verbal media to communicate with humans and exchange information. As input modalities on a higher abstraction level they define channels for language, gesture and unconscious behavior. This skill could enable the robot to actively perceive human behavior, whether conscious or unconscious. Human intention could be understood simply by observation, allowing the system to achieve a certain level of friendliness, hospitality and reliance. Fong, Nourbakhsh and Dautenhahn (Fong et al., 2003) state in their survey on 'socially interactive robots' that the design of sociable robots needs input from research concerning social learning and imitation, gesture and natural language communication, emotion and recognition of interaction patterns. Otero et al. (Otero et al., 2006) suggest that the interpretation of a person’s motion within its environment can enhance Human-Robot Interaction in several ways. They point out that a recognized action can help the robot to plan its future tasks and goals, that the information flow during interaction can be extended, and that additional cues, like speech recognition, can be supported. Otero et al. state that body motion and context provide, in many situations, enough information to derive the person’s current activity. |
Suicide and Suicide Risk in Lesbian, Gay, Bisexual, and Transgender Populations: Review and Recommendations | Despite strong indications of elevated risk of suicidal behavior in lesbian, gay, bisexual, and transgender people, limited attention has been given to research, interventions or suicide prevention programs targeting these populations. This article is a culmination of a three-year effort by an expert panel to address the need for better understanding of suicidal behavior and suicide risk in sexual minority populations, and stimulate the development of needed prevention strategies, interventions and policy changes. This article summarizes existing research findings, and makes recommendations for addressing knowledge gaps and applying current knowledge to relevant areas of suicide prevention practice. |
Yet another MicroArchitectural Attack: exploiting I-Cache | MicroArchitectural Attacks (MA), which can be considered as a special form of Side-Channel Analysis, exploit microarchitectural functionalities of processor implementations and can compromise the security of computational environments even in the presence of sophisticated protection mechanisms like virtualization and sandboxing. This newly evolving research area has attracted significant interest due to the broad application range and the potentials of these attacks. Cache Analysis and Branch Prediction Analysis were the only types of MA that had been known publicly. In this paper, we introduce Instruction Cache (I-Cache) as yet another source of MA and present our experimental results which clearly prove the practicality and danger of I-Cache Attacks. |
Error correction, sensory prediction, and adaptation in motor control. | Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used to both produce a lifetime of calibrated movements, and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors. |
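In schematic notation (assumed, following the abstract): a forward model f_θ maps the current state estimate x_t and motor command u_t to predicted sensory consequences, and adaptation nudges θ in proportion to the sensory prediction error:

```latex
\hat{y}_{t+1} = f_{\theta}(x_t, u_t), \qquad \theta \;\leftarrow\; \theta \;-\; \eta\, \nabla_{\theta}\, \bigl\lVert y_{t+1} - \hat{y}_{t+1} \bigr\rVert^{2}
```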
Infants’ developing understanding of the link between looker and object | Three studies investigated infants’ understanding that gaze involves a relation between a person and the object of his or her gaze. Infants were habituated to an event in which an actor turned and looked at one of two toys. Then, infants saw test events in which (1) the actor turned to the same side as during habituation to look at a different toy, or (2) the actor turned to the other side to look at the same toy as during habituation. The first of these involved a change in the relation between actor and object. The second involved a new physical motion on the part of the actor but no change in the relation between actor and object. Seven- and 9-month-old infants did not respond to the change in relation between actor and object, although infants at both ages followed the actor’s gaze to the toys. In contrast, 12-month-old infants responded to the change in the actor–object relation. Control conditions verified that the paradigm was a sensitive index of the younger infants’ representations of action: 7- and 9-month-olds responded to a change in the actor–object relation when the actor’s gaze was accompanied by a grasp. Taken together, these findings indicate that gaze-following does not initially go hand in hand with understanding the relation between a person who looks and the object of his or her gaze, and that infants begin to understand this relation between 9 and 12 months of age. |
Inferring Unusual Crowd Events From Mobile Phone Call Detail Records | The pervasiveness and availability of mobile phone data offer the opportunity of discovering usable knowledge about crowd behaviors in urban environments. Cities can leverage such knowledge in order to provide better services (e.g., public transport planning, optimized resource allocation) and safer cities. Call Detail Record (CDR) data represents a practical data source to detect and monitor unusual events considering the high level of mobile phone penetration, compared with GPS-equipped and open devices. In this paper, we provide a methodology that is able to detect unusual events from CDR data that typically has low accuracy in terms of space and time resolution. Moreover, we introduce a concept of unusual event that involves a large amount of people who expose an unusual mobility behavior. Our careful consideration of the issues that come from coarse-grained CDR data ultimately leads to a completely general framework that can detect unusual crowd events from CDR data effectively and efficiently. Through extensive experiments on real-world CDR data for a large city in Africa, we demonstrate that our method can detect unusual events with 16% higher recall and over 10 times higher precision, compared to state-of-the-art methods. We implement a visual analytics prototype system to help end users analyze detected unusual crowd events to best suit different application scenarios. To the best of our knowledge, this is the first work on the detection of unusual events from CDR data with considerations of its temporal and spatial sparseness and distinction between user unusual activities and daily routines. |
Design, Manufacturing and Performance Test of a Solar Tracker Made by an Embedded Control | This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller that was designed as an embedded controller. It has a database of the orientation angles of the horizontal axle, therefore it has no sensor input signal and it functions as an open-loop control system. The combination of the above-mentioned characteristics makes the tracker system a new technique of the active type. It is also a rotational robot with 1 degree of freedom. |
Bayesian color constancy. | The problem of color constancy may be solved if we can recover the physical properties of illuminants and surfaces from photosensor responses. We consider this problem within the framework of Bayesian decision theory. First, we model the relation among illuminants, surfaces, and photosensor responses. Second, we construct prior distributions that describe the probability that particular illuminants and surfaces exist in the world. Given a set of photosensor responses, we can then use Bayes's rule to compute the posterior distribution for the illuminants and the surfaces in the scene. There are two widely used methods for obtaining a single best estimate from a posterior distribution. These are maximum a posteriori (MAP) and minimum mean-square-error (MMSE) estimation. We argue that neither is appropriate for perception problems. We describe a new estimator, which we call the maximum local mass (MLM) estimate, that integrates local probability density. The new method uses an optimality criterion that is appropriate for perception tasks: It finds the most probable approximately correct answer. For the case of low observation noise, we provide an efficient approximation. We develop the MLM estimator for the color-constancy problem in which flat matte surfaces are uniformly illuminated. In simulations we show that the MLM method performs better than the MAP estimator and better than a number of standard color-constancy algorithms. We note conditions under which even the optimal estimator produces poor estimates: when the spectral properties of the surfaces in the scene are biased. |
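In schematic notation (assumed, following the abstract): with x = (illuminants, surfaces) and y = photosensor responses, the posterior follows from Bayes's rule, and the MLM estimate maximizes the posterior mass in a local neighborhood rather than its pointwise maximum, which is what makes it prefer the most probable approximately correct answer:

```latex
p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}, \qquad
\hat{x}_{\mathrm{MLM}} = \arg\max_{x} \int_{\lVert x' - x \rVert \le \delta} p(x' \mid y)\, \mathrm{d}x'
```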
A Dual Coordinate Descent Algorithm for SVMs Combined with Rational Kernels | This paper presents a novel application of automata algorithms to machine learning. It introduces the first optimization solution for support vector machines used with sequence kernels that is purely based on weighted automata and transducer algorithms, without requiring any specific solver. The algorithms presented apply to a family of kernels covering all those commonly used in text and speech processing or computational biology. We show that these algorithms have significantly better computational complexity than previous ones and report the results of large-scale experiments demonstrating a dramatic reduction of the training time, typically by several orders of magnitude. |
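For context, the core dual coordinate descent (DCD) update for the L1-loss SVM dual is sketched below against a generic precomputed Gram matrix. The paper's contribution is to compute the kernel products with weighted automata and transducers for rational kernels, which this plain-matrix sketch deliberately does not reproduce; epoch count and C are assumed defaults.

```python
import numpy as np

def svm_dual_coordinate_descent(K, y, C=1.0, n_epochs=20):
    """Dual coordinate descent for the (bias-free) L1-loss SVM dual:
    min_a 0.5*a^T Q a - sum(a), 0 <= a_i <= C, with Q_ij = y_i y_j K_ij.
    K must be a positive semi-definite Gram matrix with nonzero diagonal.
    """
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            # Gradient of the dual objective w.r.t. alpha_i.
            g = y[i] * (K[i] @ (alpha * y)) - 1.0
            # Projected Newton step, clipped to the box [0, C].
            alpha[i] = min(max(alpha[i] - g / K[i, i], 0.0), C)
    return alpha  # decision function: f(x) = sum_j alpha_j y_j K(x_j, x)
```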
A phase Ib dose-escalation study of everolimus combined with cisplatin and etoposide as first-line therapy in patients with extensive-stage small-cell lung cancer. | BACKGROUND
This phase Ib study aimed to establish the feasible everolimus dose given with standard-dose etoposide plus cisplatin (EP) for extensive-stage small-cell lung cancer (SCLC).
PATIENTS AND METHODS
An adaptive Bayesian dose-escalation model and investigator opinion were used to identify feasible daily or weekly everolimus doses given with EP in adults with treatment-naive extensive-stage SCLC. A protocol amendment mandated prophylactic granulocyte colony-stimulating factor (G-CSF). Primary end point was cycle 1 dose-limiting toxicity (DLT) rate. Secondary end points included safety, relative EP dose intensity, pharmacokinetics, and tumor response.
RESULTS
Patients received everolimus 2.5 or 5 mg/day without G-CSF (n=10; cohort A), 20 or 30 mg/week without G-CSF (n=18; cohort B), or 2.5 or 5 mg/day with G-CSF (n=12; cohort C); all received EP. Cycle 1 DLT rates were 50.0%, 22.2%, and 16.7% in cohorts A, B, and C, respectively. Cycle 1 DLTs were neutropenia (cohorts A and B), febrile neutropenia (all cohorts), and thrombocytopenia (cohorts A and C). The most common grade 3/4 adverse events were hematologic. Best overall response was partial response (40.0%, 61.1%, and 58.3% in cohorts A, B, and C, respectively).
CONCLUSIONS
Everolimus 2.5 mg/day plus G-CSF was the only feasible dose given with standard-dose EP in untreated extensive-stage SCLC. |
Three Kinds of Network Security Situation Awareness Model Based on Big Data | In this paper, we propose three kinds of network security situation awareness (NSSA) model. In the era of big data, traditional NSSA methods cannot analyze the problem effectively, so the three models are designed for big data. The models are large in structure and are integrated into a distributed platform. Each model includes three modules: network security situation detection (NSSD), network security situation understanding (NSSU), and network security situation projection (NSSP). Each module employs different machine learning algorithms to realize different functions. We conducted a comprehensive study of the safety of these models and compared the three models with each other. The experimental results show that these models can improve the efficiency and accuracy of data processing when dealing with different problems, and that each model has its own advantages and disadvantages. |
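The three-module decomposition can be pictured as a simple pipeline. The skeleton below only mirrors the module structure named in the abstract; the concrete per-module algorithms are assumed placeholders, not the paper's implementations.

```python
class NSSAPipeline:
    """Skeleton of the detection -> understanding -> projection structure."""

    def __init__(self, detector, understander, projector):
        self.detect = detector          # NSSD: e.g., an anomaly/attack classifier
        self.understand = understander  # NSSU: e.g., event correlation or clustering
        self.project = projector        # NSSP: e.g., a time-series forecaster

    def run(self, traffic_batch):
        events = self.detect(traffic_batch)        # raw traffic -> security events
        situation = self.understand(events)        # events -> situation assessment
        return self.project(situation)             # assessment -> projected trend
```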
Cardiac surgery is successful in heart transplant recipients. | BACKGROUND
Improved survival of heart transplant (HTx) recipients and increased acceptance of higher-risk donors allow late pathology to develop. However, there are few data to guide surgical options. We evaluated short-term outcomes and mortality to guide pre-operative assessment, planning, and post-operative care.
METHODS
A single-centre, retrospective review of 912 patients who underwent HTx from February 1984 to June 2012 identified 22 patients who underwent subsequent cardiac surgery. Data are presented as median (IQR).
RESULTS
Indications for surgery were coronary allograft vasculopathy (CAV) (n=10), valvular disease (n=6), infection (n=3), ascending aortic aneurysm (n=1), and constrictive pericarditis (n=2). There was one intraoperative death (myocardial infarction). Hospital stay was 10 (8-21) days. Four patients (18%) returned to theatre for complications. After cardiac surgery, survival at one, five and 10 years was 91±6%, 79±10% and 59±15%, with a follow-up of 4.6 (1.7-10.2) years. High pre-operative creatinine was a univariate risk factor for mortality, HR=1.028 (95% CI 1.00-1.056; p=0.05). A time-dependent Cox proportional hazards model of the risk of cardiac surgery post-HTx showed no significant hazard; HR=0.87 (95% CI 0.37-2.00; p=0.74).
CONCLUSIONS
Our experience shows cardiac surgery post-HTx is associated with low mortality, and confirms that cardiac surgery is appropriate for selected HTx recipients. |
Vietnamese spelling detection and correction using Bi-gram, Minimum Edit Distance, SoundEx algorithms with some additional heuristics | The spelling-checking problem is considered to contain two main phases: the detecting phase and the correcting phase. In this paper, we present a new approach to Vietnamese spelling checking based on Vietnamese characteristics for each phase. Our approach uses a syllable bi-gram in combination with parts of speech (POS) to identify suspect syllables. In the correcting phase, we rely on minimum edit distance, the SoundEx algorithm, and some heuristics to build a weighting function for assessing suggestion candidates. The training corpus and the test set were collected from e-newspapers. |
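A minimal sketch of the edit-distance component and a candidate-weighting function of the general shape the abstract describes. The weight values, the bigram-probability term, and the omission of the SoundEx and POS heuristics are all simplifying assumptions, not the paper's actual function.

```python
def edit_distance(a, b):
    # Standard dynamic-programming minimum edit distance (Levenshtein).
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def rank_candidates(suspect, candidates, bigram_prob, w_edit=1.0, w_lm=1.0):
    # Hypothetical weighting: low edit distance and high bigram probability
    # (of the candidate in its sentence context) give a better (lower) score.
    scored = [(w_edit * edit_distance(suspect, c) - w_lm * bigram_prob.get(c, 0.0), c)
              for c in candidates]
    return [c for _, c in sorted(scored)]
```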
On the Expansion of Environmental Protection Themes in the Journal Environment and Behavior | Bibliometric statistics are employed to study the changes in environmental protection themes in the U.S. journal Environment and Behavior since its first publication. Two main changes are found: from the initial concern with environmental awareness, attitudes, and environmental education to a recent concern with environmental behavior; and from a focus on local environmental problems to global ecological problems. |
Gentle AdaBoost algorithm for weld defect classification | In this paper, we present a new strategy for automatic classification of weld defects in radiographs based on the Gentle AdaBoost algorithm. Radiographic images were segmented, and moment-based features were extracted and given as input to a Gentle AdaBoost classifier. The performance of our classification system is evaluated on hundreds of radiographic images. The classifier is trained to assign each defect pattern to one of four classes: crack, lack of penetration, porosity, and solid inclusion. The experimental results show that the Gentle AdaBoost classifier is an efficient automatic weld defect classifier, achieving high accuracy while being faster than the support vector machine (SVM) algorithm on the tested data. |
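For reference, a compact sketch of binary Gentle AdaBoost with regression stumps, following Friedman, Hastie and Tibshirani's formulation. The four-class problem in the paper would additionally need, e.g., a one-vs-rest reduction, and the stump depth and round count below are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gentle_adaboost_fit(X, y, n_rounds=50):
    """Fit Gentle AdaBoost; y must be in {-1, +1}. Returns the fitted stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)  # example weights
    stumps = []
    for _ in range(n_rounds):
        # Each round fits f_m(x) ~ E_w[y | x] by weighted least squares.
        stump = DecisionTreeRegressor(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        f = stump.predict(X)
        # Multiplicative update: misclassified examples gain weight.
        w *= np.exp(-y * f)
        w /= w.sum()
        stumps.append(stump)
    return stumps

def gentle_adaboost_predict(stumps, X):
    F = sum(s.predict(X) for s in stumps)  # additive model F(x)
    return np.sign(F)
```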
DocFace: Matching ID Document Photos to Selfies | Numerous activities in our daily life, including purchases, travel, and access to services, require us to verify who we are by showing ID documents containing face images, such as passports and driver's licenses. An automatic system for matching ID document photos to live face images in real time with high accuracy would speed up the verification process and reduce the burden on human operators. In this paper, we propose a new method, DocFace, for ID document photo matching using transfer learning. We propose to use a pair of sibling networks to learn domain-specific parameters from heterogeneous face pairs. Cross-validation testing on an ID-Selfie dataset shows that while the best CNN-based general face matcher achieves only TAR=61.14% at FAR=0.1% on this problem, DocFace improves the TAR to 92.77%. Experimental results also indicate that, given sufficiently large training data, a viable system for automatic ID document photo matching can be developed and deployed. |
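To make the sibling-network idea concrete, here is a heavily simplified PyTorch sketch: one embedding branch per domain (ID photo vs. selfie), trained on heterogeneous pairs. The tiny CNN, the cosine-margin loss, and the margin value are illustrative assumptions; the paper instead fine-tunes a pretrained face CNN with its own loss.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Small stand-in CNN producing L2-normalized 128-d face embeddings.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128))

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

# Sibling networks: one branch per domain, so each can learn
# domain-specific parameters while producing comparable embeddings.
doc_branch, selfie_branch = Encoder(), Encoder()

def pair_loss(doc_imgs, selfie_imgs, labels, margin=0.5):
    # Cosine-similarity hinge loss on heterogeneous pairs;
    # labels are +1 for genuine pairs, -1 for impostor pairs.
    sim = (doc_branch(doc_imgs) * selfie_branch(selfie_imgs)).sum(dim=1)
    return torch.clamp(margin - labels * sim, min=0).mean()
```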
Coverage criteria for GUI testing | A widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not directly apply to GUIs. This paper's focus is on coverage criteria for GUIs, important rules that provide an objective measure of test quality. We present new coverage criteria to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any non-trivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. A GUI is decomposed into GUI components, each of which is used as a basic unit of testing. A representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component, and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are given to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates the usefulness of the coverage report to guide further testing and an important correlation between event-based coverage of a GUI and statement coverage of its software's underlying code. |
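A small sketch of event-sequence coverage over an event-flow graph may help fix the idea: required sequences of a given length are enumerated from the graph, and a test suite's coverage is the fraction of them it executes. The toy graph, event names, and functions are illustrative, not the paper's algorithms.

```python
# A toy event-flow graph: event -> set of events that may follow it.
event_flow = {
    "open_menu": {"click_save", "click_cancel"},
    "click_save": {"confirm"},
    "click_cancel": set(),
    "confirm": set(),
}

def event_sequences(graph, length):
    """Enumerate all legal event sequences of a given length."""
    if length == 1:
        return [(e,) for e in graph]
    return [seq + (nxt,)
            for seq in event_sequences(graph, length - 1)
            for nxt in graph[seq[-1]]]

def coverage(test_suite, graph, length):
    required = set(event_sequences(graph, length))
    executed = {tuple(t[i:i + length])
                for t in test_suite for i in range(len(t) - length + 1)}
    return len(required & executed) / len(required)

suite = [("open_menu", "click_save", "confirm")]
print(coverage(suite, event_flow, 2))  # 2 of 3 length-2 sequences -> ~0.67
```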
Knowledge management, codification and tacit knowledge | Introduction. This article returns to a theme addressed in Vol. 8(1) October 2002 of the journal: knowledge management and the problem of managing tacit knowledge. Method. The article is primarily a review and analysis of the literature associated with the management of knowledge. In particular, it focuses on the works of a group of economists who have studied the transformation of knowledge into information through the process of codification and the knowledge transaction topography they have developed to describe this process. Analysis. The article explores the theoretical and philosophical antecedents of the economists' views. It uses this as a basis for examining the dominant views of knowledge that appear in much of the literature on knowledge management and for performing a critical evaluation of their work. Results. The results of the analysis centre upon the question of when is it appropriate to codify knowledge. They present a basic summary of the costs and benefits of codification before looking in more detail at its desirability. Conclusions. The conclusions concern the implications of the above for knowledge management and the management of tacit knowledge. They deal with the nature of knowledge management, some of the reasons why knowledge management projects fail to achieve their expectations and the potential problems of codification as a strategy for knowledge management. |
British social attitudes : the 22nd report : two terms of new labour: the public's reaction | Contents: Give and Take: Attitudes to Redistribution - Tom Sefton; Work-Life Balance: Still a 'Women's Issue'? - Alice Bell and Caroline Bryson; Home Sweet Home - Alun Humphrey and Catherine Bromley; Higher Education: A Class Act - Ted Wragg and Mark Johnson; Public Responses to NHS Reform - John Appleby and Arturo Alvarez-Rosete; Transport: Are Policymakers and the Public on the Same Track? - Peter Jones, Georgina Christodoulou and David Whibley; Planning for Retirement: Realism or Denial? - Miranda Phillips and Ruth Hancock; Leaders or Followers? Parties and Public Opinion on the European Union - Geoffrey Evans and Sarah Butt; Are Trade Unionists Left-Wing Any More? - John Curtice and Ann Mair |