title | abstract
---|---
Optimal Data-Dependent Hashing for Approximate Near Neighbors | We show an optimal data-dependent hashing scheme for the approximate near neighbor problem. For an n-point dataset in a d-dimensional space our data structure achieves query time O(d · n^{ρ+o(1)}) and space O(n^{1+ρ+o(1)} + d · n), where ρ = 1/(2c² − 1) for the Euclidean space and approximation c > 1. For the Hamming space, we obtain an exponent of ρ = 1/(2c − 1). Our result completes the direction set forth in (Andoni, Indyk, Nguyen, Razenshteyn 2014), who gave a proof-of-concept that data-dependent hashing can outperform classic Locality Sensitive Hashing (LSH). In contrast to (Andoni, Indyk, Nguyen, Razenshteyn 2014), the new bound is not only optimal, but in fact improves over the best (optimal) LSH data structures (Indyk, Motwani 1998) (Andoni, Indyk 2006) for all approximation factors c > 1.
From the technical perspective, we proceed by decomposing an arbitrary dataset into several subsets that are, in a certain sense, pseudo-random. |
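A quick numeric check makes the improvement concrete. The sketch below (plain Python, exponents straight from the abstract; the baselines ρ = 1/c² and ρ = 1/c are the optimal LSH exponents of the cited data-independent schemes) compares the two for a few approximation factors.

```python
# Query-time exponents: data-dependent hashing (this paper) vs. classic LSH.
# Euclidean: rho = 1/(2c^2 - 1)  vs. optimal LSH rho = 1/c^2 (Andoni, Indyk 2006)
# Hamming:   rho = 1/(2c - 1)    vs. optimal LSH rho = 1/c   (Indyk, Motwani 1998)
for c in (1.5, 2.0, 3.0):
    print(f"c = {c}: Euclidean {1/(2*c*c - 1):.3f} vs {1/c**2:.3f}, "
          f"Hamming {1/(2*c - 1):.3f} vs {1/c:.3f}")
# For c = 2 in Euclidean space the query time drops from ~n^0.25 to ~n^0.143.
```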
Modular converter architecture for medium voltage ultra fast EV charging stations: Global system considerations | Electric Vehicles (EVs) are expected to contribute significantly to the reduction of CO2 emissions and other types of pollution in the coming years. However, the concept of their ultra fast charging still poses demanding requirements, both in terms of the EV battery and the impact on the power grid. To this end, emerging power electronics interfaces as well as energy storage technologies will play a significant role in making EV charging stations competitive with the existing gas station infrastructure. This paper proposes a power converter architecture for the interface between the three-phase medium voltage AC grid and the Electric Vehicle (EV) batteries. The transformerless AC/DC conversion stage motivates the choice of a suitable multilevel topology, namely the Cascaded H-Bridge Converter (CHB), where the medium voltage is split into several dedicated low voltage DC buses. At each of these levels, integrated stationary Battery Energy Storage Systems (BESS) play the role of power buffers, thus reducing the influence of the charging station on the distribution grid. The EV battery is then charged through parallel-connected isolated DC/DC converters, in order to achieve high currents and meet the standards for galvanic isolation. |
Toward a Fully Autonomous UAV: Research Platform for Indoor and Outdoor Urban Search and Rescue | Urban search and rescue missions raise special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. A limited payload of small aerial systems poses a great challenge to the system design. The optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication to external computers cannot be guaranteed; therefore, all processing and decision making has to be done on board. In this article, we present an unmanned aircraft system design fulfilling these requirements. The components of our system are structured into groups to encapsulate their functionality and interfaces. We use both laser and stereo vision odometry to enable seamless indoor and outdoor navigation. The odometry is fused with an inertial measurement unit in an extended Kalman filter. Navigation is supported by a module that recognizes known objects in the environment. A distributed computation approach is adopted to address the computational requirements of the used algorithms. The capabilities of the system are validated in flight experiments, using a quadrotor. |
A training-based speech regeneration approach with cascading mapping models | Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria. |
Comment on "Reduced system dynamics from the N-body Schrödinger equation" | We argue that the "reduced wave function", proposed recently [Phys. Rev. Lett. 75, 2255 (1995)], contains conditional and restricted information on the reduced system. The concept of "reduced wave function" can thus not represent a relevant alternative to the common reduced dynamics methods. |
New species of Phallodrilus (Oligochaeta, Tubificidae) from caves of northern Spain and southwestern France | A taxonomic account of a collection of Phallodrilus species inhabiting caves is given. The following three new species are described: P. subterraneus, P. crypticus and P. labouichensis. New material of P. aquaedulcis Hrabe, 1960 from northern Spain and southwestern France is reported. This species was previously known from West Germany. The relationship between Phallodrilus cave species and littoral and deep-sea species is discussed. |
Religious coping and use of intensive life-prolonging care near death in patients with advanced cancer. | CONTEXT
Patients frequently rely on religious faith to cope with cancer, but little is known about the associations between religious coping and the use of intensive life-prolonging care at the end of life.
OBJECTIVE
To determine the way religious coping relates to the use of intensive life-prolonging end-of-life care among patients with advanced cancer.
DESIGN, SETTING, AND PARTICIPANTS
A US multisite, prospective, longitudinal cohort of 345 patients with advanced cancer, who were enrolled between January 1, 2003, and August 31, 2007. The Brief RCOPE assessed positive religious coping. Baseline interviews assessed psychosocial and religious/spiritual measures, advance care planning, and end-of-life treatment preferences. Patients were followed up until death, a median of 122 days after baseline assessment.
MAIN OUTCOME MEASURES
Intensive life-prolonging care, defined as receipt of mechanical ventilation or resuscitation in the last week of life. Analyses were adjusted for demographic factors significantly associated with positive religious coping and any end-of-life outcome at P < .05 (ie, age and race/ethnicity). The main outcome was further adjusted for potential psychosocial confounders (eg, other coping styles, terminal illness acknowledgment, spiritual support, preference for heroics, and advance care planning).
RESULTS
A high level of positive religious coping at baseline was significantly associated with receipt of mechanical ventilation compared with patients with a low level (11.3% vs 3.6%; adjusted odds ratio [AOR], 2.81 [95% confidence interval {CI}, 1.03-7.69]; P = .04) and intensive life-prolonging care during the last week of life (13.6% vs 4.2%; AOR, 2.90 [95% CI, 1.14-7.35]; P = .03) after adjusting for age and race. In the model that further adjusted for other coping styles, terminal illness acknowledgment, support of spiritual needs, preference for heroics, and advance care planning (do-not-resuscitate order, living will, and health care proxy/durable power of attorney), positive religious coping remained a significant predictor of receiving intensive life-prolonging care near death (AOR, 2.90 [95% CI, 1.07-7.89]; P = .04).
CONCLUSIONS
Positive religious coping in patients with advanced cancer is associated with receipt of intensive life-prolonging medical care near death. Further research is needed to determine the mechanisms for this association. |
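For readers less used to odds ratios, a crude check of the headline numbers is shown below (a sketch only: the paper's AOR of 2.81 is additionally adjusted for age and race/ethnicity, so it will not match this unadjusted figure).

```python
# Unadjusted odds ratio for mechanical ventilation from the reported
# proportions: 11.3% (high positive religious coping) vs 3.6% (low).
def odds(p):
    return p / (1 - p)

print(f"crude OR = {odds(0.113) / odds(0.036):.2f}")  # ~3.41, before adjustment
```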
Testing and validating high level components for automated driving: simulation framework for traffic scenarios | Current advances in the research field of autonomous driving demand advanced simulation methods for testing and validation. By combining the complementary strengths of different simulations, we can provide a larger number and greater diversity of realistic traffic scenarios, which are relevant to the development and verification of high level automated driving functions. The present paper proposes a concept for realistic simulation scenarios that is capable of running at different integration levels, from software- to vehicle-in-the-loop. Its application is demonstrated by exposing an experimental vehicle, used for autonomous driving development, to a traffic scenario with virtual vehicles on a real road network. |
Cooperative Path Prediction in Vehicular Environments | The prediction of the future path of the ego vehicle and of other vehicles in the road environment is very important for safety applications, especially for collision avoidance systems. Today's available advanced driver assistance systems are mainly based on sensors that are installed in the vehicle. Due to the evolution of wireless networks, the current trend is to exploit cooperation among vehicles to enhance road safety. In this paper a cooperative path prediction algorithm is presented. The algorithm gathers position, velocity and yaw rate measurements from all vehicles in order to calculate their future paths. Specific care is taken to handle the latency of the wireless vehicular network. Map data describing the road geometry are also used to enhance the path prediction estimates. This work shows both the benefits of using communication among road users and the corresponding challenges. |
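The abstract does not name its motion model; a common choice when position, velocity and yaw rate are exchanged is the constant turn rate and velocity (CTRV) model. The sketch below is a hypothetical illustration of that idea, including rolling each remote vehicle's state forward by the measured network latency before predicting its path.

```python
import math

def ctrv_predict(x, y, heading, v, yaw_rate, dt):
    """One step of the constant turn rate and velocity (CTRV) model --
    a plausible stand-in for the paper's unnamed motion model."""
    if abs(yaw_rate) < 1e-6:  # straight-line limit
        return x + v * dt * math.cos(heading), y + v * dt * math.sin(heading), heading
    x += v / yaw_rate * (math.sin(heading + yaw_rate * dt) - math.sin(heading))
    y += v / yaw_rate * (math.cos(heading) - math.cos(heading + yaw_rate * dt))
    return x, y, heading + yaw_rate * dt

def predict_path(state, latency, horizon, step=0.1):
    """Compensate communication latency first, then roll the path forward."""
    x, y, heading, v, yaw_rate = state
    x, y, heading = ctrv_predict(x, y, heading, v, yaw_rate, latency)
    path, t = [], 0.0
    while t < horizon:
        x, y, heading = ctrv_predict(x, y, heading, v, yaw_rate, step)
        path.append((x, y))
        t += step
    return path
```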
Quantification of Model Uncertainty in RANS Simulations: A Review | In computational fluid dynamics simulations of industrial flows, models based on the Reynolds-averaged Navier–Stokes (RANS) equations are expected to play an important role in decades to come. However, model uncertainties are still a major obstacle for the predictive capability of RANS simulations. This review examines both the parametric and structural uncertainties in turbulence models. We review recent literature on data-free (uncertainty propagation) and data-driven (statistical inference) approaches for quantifying and reducing model uncertainties in RANS simulations. Moreover, the fundamentals of uncertainty propagation and Bayesian inference are introduced in the context of RANS model uncertainty quantification. Finally, the literature on uncertainties in scale-resolving simulations is briefly reviewed with particular emphasis on large eddy simulations. |
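As a minimal taste of the data-free (forward propagation) approach the review covers: sample an uncertain closure coefficient and push it through an output of interest. Everything numeric below (the ±20% interval, the turbulence state) is illustrative only, not taken from the review.

```python
import numpy as np

# Monte Carlo propagation of parametric uncertainty in the k-epsilon
# coefficient C_mu through the eddy viscosity nu_t = C_mu * k^2 / epsilon.
rng = np.random.default_rng(0)
k, eps = 1.0, 10.0                                   # illustrative turbulence state
c_mu = rng.uniform(0.8 * 0.09, 1.2 * 0.09, 100_000)  # +/-20% around the nominal 0.09
nu_t = c_mu * k**2 / eps
print(f"nu_t: mean {nu_t.mean():.4f}, 95% interval "
      f"({np.quantile(nu_t, 0.025):.4f}, {np.quantile(nu_t, 0.975):.4f})")
```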
Body contouring using 635-nm low level laser therapy. | Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols. |
What's mine is yours: Pretrained CNNs for limited training sonar ATR | Finding mines in Sonar imagery is a significant problem with a great deal of relevance for seafaring military and commercial endeavors. Unfortunately, the lack of enormous Sonar image data sets has prevented automatic target recognition (ATR) algorithms from some of the same advances seen in other computer vision fields. Namely, the boom in convolutional neural nets (CNNs) which have been able to achieve incredible results — even surpassing human actors — has not been an easily feasible route for many practitioners of Sonar ATR. We demonstrate the power of one avenue to incorporating CNNs into Sonar ATR: transfer learning. We first show how well a straightforward, flexible CNN feature-extraction strategy can be used to obtain impressive if not state-of-the-art results. Secondly, we propose a way to utilize the powerful transfer learning approach towards multiple instance target detection and identification within a provided synthetic aperture Sonar data set. |
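The "straightforward, flexible CNN feature-extraction strategy" is presumably of the following familiar shape: freeze an ImageNet-pretrained backbone, use its penultimate activations as features, and train a light classifier on the small sonar set. The backbone and classifier below are our assumptions for illustration, not the authors' exact choices.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Fixed-feature transfer learning: a pretrained ResNet-18 as the extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # expose the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(batch):             # batch: (N, 3, 224, 224) image tensor
    return backbone(batch).numpy()

clf = SVC(kernel="rbf")                  # shallow classifier for the small dataset
# clf.fit(extract_features(sonar_train), train_labels)
```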
An EV SRM Drive Powered by Battery/Supercapacitor With G2V and V2H/V2G Capabilities | This paper develops an electric vehicle switched-reluctance motor (SRM) drive powered by a battery/supercapacitor having grid-to-vehicle (G2V) and vehicle-to-home (V2H)/vehicle-to-grid (V2G) functions. The power circuit of the motor drive is formed by a bidirectional two-quadrant front-end dc/dc converter and an SRM asymmetric bridge converter. Through proper control and setting of key parameters, good acceleration/deceleration, reversible driving, and braking characteristics are obtained. In idle condition, the proposed motor drive schematic can be rearranged to construct the integrated power converter to perform the following functions: 1) G2V charging mode: a single-phase two-stage switch-mode rectifier based charger is formed with power factor correction capability; 2) autonomous V2H discharging mode: the 60-Hz 220-V/110-V ac sources are generated by the developed single-phase three-wire inverter to power home appliances. Through the developed differential mode and common mode control schemes, well-regulated output voltages are achieved; 3) grid-connected V2G discharging mode: the programmed real power can be sent back to the utility grid. |
Modeling software design diversity | Design diversity has been used for many years now as a means of achieving a degree of fault tolerance in software-based systems. While there is clear evidence that the approach can be expected to deliver some increase in reliability compared to a single version, there is no agreement about the extent of this. More importantly, it remains difficult to evaluate exactly how reliable a particular diverse fault-tolerant system is. This difficulty arises because assumptions of independence of failures between different versions have been shown to be untenable: assessment of the actual level of dependence present is therefore needed, and this is difficult. In this tutorial, we survey the modeling issues here, with an emphasis upon the impact these have upon the problem of assessing the reliability of fault-tolerant systems. The intended audience is one of designers, assessors, and project managers with only a basic knowledge of probabilities, as well as reliability experts without detailed knowledge of software, who seek an introduction to the probabilistic issues in decisions about design diversity. |
Designing 1V Op Amps Using Standard Digital CMOS Technology | This paper addresses the difficulty of designing 1-V capable analog circuits in standard digital complementary metal–oxide–semiconductor (CMOS) technology. Design techniques for facilitating 1-V operation are discussed and 1-V analog building block circuits are presented. Most of these circuits use the bulk-driving technique to circumvent the metal–oxide–semiconductor field-effect transistor turn-on (threshold) voltage requirement. Finally, techniques are combined within a 1-V CMOS operational amplifier with rail-to-rail input and output ranges. While consuming 300 μW, the 1-V rail-to-rail CMOS op amp achieves 1.3-MHz unity-gain frequency and 57° phase margin for a 22-pF load capacitance. |
Prostate cancer in renal transplant recipients | As patients with end-stage renal disease are receiving renal allografts at older ages, the number of male renal transplant recipients (RTRs) being diagnosed with prostate cancer (CaP) is increasing. Historically, the literature regarding the management of CaP in RTRs is limited to case reports and small case series. To date, there are no standardized guidelines for screening or management of CaP in these complex patients. To better understand the unique characteristics of CaP in the renal transplant population, we performed a literature review of PubMed, without date limitations, using a combination of search terms including prostate cancer, end stage renal disease, renal transplantation, prostate cancer screening, prostate specific antigen kinetics, immunosuppression, prostatectomy, and radiation therapy. Of special note, teams facilitating the care of these complex patients must carefully and meticulously consider the altered anatomy for surgical and radiotherapeutic planning. Active surveillance, though gaining popularity in the general low-risk prostate cancer population, needs further study in this group, as does the management of advanced disease. This review provides a comprehensive and contemporary understanding of the incidence, screening measures, risk stratification, and treatment options for CaP in RTRs. |
Reasoning with !-Graphs | The aim of this thesis is to present an extension to the string graphs of Dixon, Duncan and Kissinger that allows the finite representation of certain infinite families of graphs and graph rewrite rules, and to demonstrate that a logic can be built on this to allow the formalisation of inductive proofs in the string diagrams of compact closed and traced symmetric monoidal categories. String diagrams provide an intuitive method for reasoning about monoidal categories. However, this does not negate the ability for those using them to make mistakes in proofs. To this end, there is a project (Quantomatic) to build a proof assistant for string diagrams, at least for those based on categories with a notion of trace. The development of string graphs has provided a combinatorial formalisation of string diagrams, laying the foundations for this project. The prevalence of commutative Frobenius algebras (CFAs) in quantum information theory, a major application area of these diagrams, has led to the use of variable-arity nodes as a shorthand for normalised networks of Frobenius algebra morphisms, so-called “spider notation”. This notation greatly eases reasoning with CFAs, but string graphs are inadequate to properly encode this reasoning. This dissertation firstly extends string graphs to allow for variable-arity nodes to be represented at all, and then introduces !-box notation – and structures to encode it – to represent string graph equations containing repeated subgraphs, where the number of repetitions is arbitrary. This can be used to represent, for example, the “spider law” of CFAs, allowing two spiders to be merged, as well as the much more complex generalised bialgebra law that can arise from two interacting CFAs. This work then demonstrates how we can reason directly about !-graphs, viewed as (typically infinite) families of string graphs. Of particular note is the presentation of a form of graph-based induction, allowing the formal encoding of proofs that previously could only be represented as a mix of string diagrams and explanatory text. |
The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a Screening Tool | The DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure was developed to aid clinicians with a dimensional assessment of psychopathology; however, this measure resembles a screening tool for several symptomatic domains. The objective of the current study was to examine the basic parameters of sensitivity, specificity, positive and negative predictive power of the measure as a screening tool. One hundred and fifty patients in a correctional community center filled out the measure prior to a psychiatric evaluation, including the Mini International Neuropsychiatric Interview screen. The above parameters were calculated for the domains of depression, mania, anxiety, and psychosis. The results showed that the sensitivity and positive predictive power of the studied domains was poor because of a high rate of false positive answers on the measure. However, when the lowest threshold on the Cross-Cutting Symptom Measure was used, the sensitivity of the anxiety and psychosis domains and the negative predictive values for mania, anxiety and psychosis were good. In conclusion, while it is foreseeable that some clinicians may use the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure as a screening tool, it should not be relied on to identify positive findings. It functioned well in the negative prediction of mania, anxiety and psychosis symptoms. |
Person re-identification by probabilistic relative distance comparison | Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and the large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. However, most visual features and their combinations under realistic conditions are neither stable nor distinctive, and thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that maximises matching accuracy regardless of the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising inter-class variation, it aims to maximise the probability that a true match pair has a smaller distance than a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement in matching accuracy can be obtained over conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank. |
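The heart of PRDC can be written in a few lines. The sketch below is a simplified rendering (a diagonal-weighted squared distance and a sigmoid model of the relative comparison); the paper's actual parameterisation and optimisation procedure differ.

```python
import numpy as np

def prdc_loss(w, pos_a, pos_b, neg_a, neg_b):
    """Negative log-probability that true match pairs (pos_a[i], pos_b[i])
    are closer than wrong match pairs (neg_a[i], neg_b[i]) under a
    diagonal-weighted squared distance -- a simplified PRDC objective."""
    d_pos = np.sum(w * (pos_a - pos_b) ** 2, axis=1)
    d_neg = np.sum(w * (neg_a - neg_b) ** 2, axis=1)
    prob = 1.0 / (1.0 + np.exp(d_pos - d_neg))   # P(true match is closer)
    return -np.mean(np.log(prob))
```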
Do all birds tweet the same?: characterizing twitter around the world | Social media services have spread throughout the world in just a few years. They have become not only a new source of information, but also new mechanisms for societies world-wide to organize themselves and communicate. Therefore, social media has a very strong impact in many aspects -- at the personal level, in business, and in politics, among many others. In spite of its fast adoption, little is known about social media usage in different countries, and whether patterns of behavior remain the same or not. A deep understanding of the differences between countries can be useful in many ways, e.g., to improve the design of social media systems (which features work best for which country?) and to inform marketing and political campaigns. Moreover, this type of analysis can provide relevant insight into how societies might differ. In this paper we present a summary of a large-scale analysis of Twitter for an extended period of time. We analyze in detail various aspects of social media for the ten countries we identified as most active. We collected one year's worth of data and report differences and similarities in terms of activity, sentiment, use of languages, and network structure. To the best of our knowledge, this is the first on-line social network study of such characteristics. |
Characterisation of Plasmodium falciparum populations selected on the human endothelial receptors P-selectin, E-selectin, CD9 and CD151 | The ability of the parasite Plasmodium falciparum to evade the immune system and be sequestered within human small blood vessels is responsible for severe forms of malaria. The sequestration depends on the interaction between human endothelial receptors and P. falciparum erythrocyte membrane protein 1 (PfEMP1) exposed on the surface of the infected erythrocytes (IEs). In this study, the transcriptomes of parasite populations enriched for parasites that bind to human P-selectin, E-selectin, CD9 and CD151 receptors were analysed. IT4_var02 and IT4_var07 were specifically expressed in IT4 parasite populations enriched for P-selectin-binding parasites; eight var genes (IT4_var02/07/09/13/17/41/44/64) were specifically expressed in isolate populations enriched for CD9-binding parasites. Interestingly, IT4 parasite populations enriched for E-selectin- and CD151-binding parasites showed identical expression profiles to those of a parasite population exposed to wild-type CHO-745 cells. The same phenomenon was observed for the 3D7 isolate population enriched for binding to P-selectin, E-selectin, CD9 and CD151. This implies that the corresponding ligands for these receptors have either weak binding capacity or do not exist on the IE surface. Conclusively, this work expanded our understanding of P. falciparum adhesive interactions, through the identification of var transcripts that are enriched within the selected parasite populations. |
The role of self-construal in consumers' electronic word of mouth (eWOM) in social networking sites: A social cognitive approach | The current study reconceptualized self-construal as a social cognitive indicator of self-observation that individuals employ for developing and maintaining social relationships with others. From the social cognitive perspective, this study investigated how consumers’ self-construal can affect consumers’ electronic word of mouth (eWOM) behavior through two cognitive factors (online community engagement self-efficacy and social outcome expectations) in the context of a social networking site. This study conducted an online experiment that directed 160 participants to visit a newly created online community. The results demonstrated that consumers’ relational view became salient when the consumers’ self-construal was primed to be interdependent rather than independent. Further, the results showed that such interdependent self-construal positively influenced consumers’ eWOM behavioral intentions through their community engagement self-efficacy and their social outcome expectations. |
Conducting an acute intense interval exercise session during the Ramadan fasting month: what is the optimal time of the day? | This study examines the effects of Ramadan fasting on performance during an intense exercise session performed at three different times of the day, i.e., 08:00, 18:00, and 21:00 h. The purpose was to determine the optimal time of the day to perform an acute high-intensity interval exercise during the Ramadan fasting month. After familiarization, nine trained athletes performed six 30-s Wingate anaerobic test (WAnT) cycle bouts followed by a time-to-exhaustion (T(exh)) cycle on six separate randomized and counterbalanced occasions. The three time-of-day nonfasting (control, CON) exercise sessions were performed before the Ramadan month, and the three corresponding time-of-day Ramadan fasting (RAM) exercise sessions were performed during the Ramadan month. Note that the 21:00 h session during Ramadan month was conducted in the nonfasted state after the breaking of the day's fast. Total work (TW) completed during the six WAnT bouts was significantly lower during RAM compared to CON for the 08:00 and 18:00 h (p < .017; effect size [d] = .55 [small] and .39 [small], respectively) sessions, but not for the 21:00 h (p = .03, d = .18 [trivial]) session. The T(exh) cycle duration was significantly shorter during RAM than CON in the 18:00 (p < .017, d = .93 [moderate]) session, but not in the 08:00 (p = .03, d = .57 [small]) and 21:00 h (p = .96, d = .02 [trivial]) sessions. In conclusion, Ramadan fasting had a small to moderate, negative impact on quality of performance during an acute high-intensity exercise session, particularly during the period of the daytime fast. The optimal time to conduct an acute high-intensity exercise session during the Ramadan fasting month is in the evening, after the breaking of the day's fast. |
Denying humanness to others: a newly discovered mechanism by which violent video games increase aggressive behavior. | Past research has provided abundant evidence that playing violent video games increases aggressive behavior. So far, these effects have been explained mainly as the result of priming existing knowledge structures. The research reported here examined the role of denying humanness to other people in accounting for the effect that playing a violent video game has on aggressive behavior. In two experiments, we found that playing violent video games increased dehumanization, which in turn evoked aggressive behavior. Thus, it appears that video-game-induced aggressive behavior is triggered when victimizers perceive the victim to be less human. |
Feasibility study of an optimised person-centred intervention to improve mental health and reduce antipsychotics amongst people with dementia in care homes: study protocol for a randomised controlled trial | BACKGROUND
People living in care homes often have complex mental and physical health problems, disabilities and social needs which are compounded by the use of psychiatric and other drugs. In the UK dementia care is a national priority with a vast impact on services. WHELD combines the most effective elements of existing approaches to develop a comprehensive but practical intervention. This will be achieved by training care staff to provide care that is focused on an understanding of the individual and their needs; and by using additional components such as exercise, activities and social interaction to improve mental health and quality of life (QoL) and reduce the use of sedative drugs.
DESIGN
Work Package 3 (WP3) is the pilot randomised trial and qualitative evaluation to help develop a future definitive randomised controlled clinical trial. The study design is a cluster-randomised 2 × 2 × 2 factorial design with two replications in 16 care homes. Each care home is randomised to receive one of the eight possible permutations of the four key interventions, with each possible combination delivered in two of the 16 homes. Each cluster includes a minimum of 12 participants (depending upon the size of the care home, the number of people with dementia and the number consenting).
DISCUSSION
The overarching goal of the programme is to provide an effective, simple and practical intervention which improves the mental health of, and reduces sedative drug use in, people with dementia in care homes and which can be implemented nationally in all UK care homes as an NHS intervention.
TRIAL REGISTRATION
Current controlled trials ISRCTN40313497. |
Capacity of Wireless Multi-hop Networks Using Physical Carrier Sense and Transmit Power Control | In this paper, we investigate the capacity of CSMA (Carrier Sense Multiple Access) based wireless multi-hop networks with random topologies described by the hop length distribution. First we develop an analytical model for the effective link capacity as a function of the transmit power adaptation policy, the physical carrier sense threshold, the hop length distribution and the medium access probability of p-persistent CSMA. Secondly, we devise an optimal transmit power control scheme that maximizes the network capacity by adjusting the transmit power and the corresponding physical carrier sense threshold. Thereafter, it is extended to the joint optimization of these parameters with the medium access probability of CSMA. Finally we compare the optimal power control scheme with the minimum transmit power policy in [1]. Results show that the proposed power control scheme optimally trades off the spatial reuse (number of interfering links) against the link SIR (Signal-to-Interference Ratio) and achieves an increase of approximately 15% in network capacity when both schemes employ joint optimization. |
Classification and Adaptive Novel Class Detection of Feature-Evolving Data Streams | Data stream classification poses many challenges to the data mining community. In this paper, we address four such major challenges, namely, infinite length, concept-drift, concept-evolution, and feature-evolution. Since a data stream is theoretically infinite in length, it is impractical to store and use all the historical data for training. Concept-drift is a common phenomenon in data streams, which occurs as a result of changes in the underlying concepts. Concept-evolution occurs as a result of new classes evolving in the stream. Feature-evolution is a frequently occurring process in many streams, such as text streams, in which new features (i.e., words or phrases) appear as the stream progresses. Most existing data stream classification techniques address only the first two challenges, and ignore the latter two. In this paper, we propose an ensemble classification framework, where each classifier is equipped with a novel class detector, to address concept-drift and concept-evolution. To address feature-evolution, we propose a feature set homogenization technique. We also enhance the novel class detection module by making it more adaptive to the evolving stream, and enabling it to detect more than one novel class at a time. Comparison with state-of-the-art data stream classification techniques establishes the effectiveness of the proposed approach. |
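One simple reading of the feature set homogenization step (our sketch; the paper's construction may differ in its details): project every instance onto the union of the feature spaces the ensemble has seen, filling features a given model never observed with zeros.

```python
def homogenize(instance, union_vocab):
    """Map a sparse instance (feature -> value) onto the union of all
    feature spaces in the ensemble; unseen features become 0."""
    return [instance.get(f, 0.0) for f in sorted(union_vocab)]

chunk1_vocab, chunk2_vocab = {"price", "goal"}, {"match", "goal"}
union_vocab = chunk1_vocab | chunk2_vocab
print(homogenize({"goal": 2.0, "match": 1.0}, union_vocab))
# -> dense vector over ['goal', 'match', 'price'], with 0.0 for 'price'
```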
Clustering Point Trajectories with Various Life-Spans | Motion-based segmentation of a sequence of images is an essential step for many applications of video analysis, including action recognition and surveillance. This paper introduces a new approach to motion segmentation operating on point trajectories. Each of these trajectories has its own start and end instants, hence its own life-span, depending on the pose and appearance changes of the object it belongs to. A set of such trajectories is obtained by tracking sparse interest points. Based on an adaptation of the recently proposed J-linkage method, these trajectories are then clustered using series of affine motion models estimated between consecutive instants, and an appropriate residual that can handle trajectories with various life-spans. Our approach does not require any completion of trajectories whose life-span is shorter than the sequence of interest. We evaluate the performance of the single cue of motion, without considering spatial priors or appearance. Using a standard test set, we validate our new algorithm and compare it to existing ones. Experimental results on a variety of challenging real sequences demonstrate the potential of our approach. |
Learning to Classify Documents According to Formal and Informal Style | This paper discusses an important issue in computational linguistics: classifying texts as formal or informal style. Our work describes a genre-independent methodology for building classifiers for formal and informal texts. We used machine learning techniques to do the automatic classification, and performed the classification experiments at both the document level and the sentence level. First, we studied the main characteristics of each style, in order to train a system that can distinguish between them. We then built two datasets: the first dataset represents general-domain documents of formal and informal style, and the second represents medical texts. We tested on the second dataset at the document level, to determine whether our model is sufficiently general and works on any type of text. The datasets are built by collecting documents for both styles from different sources. After collecting the data, we extracted features from each text. The features that we designed represent the main characteristics of both styles. Finally, we tested several classification algorithms, namely Decision Trees, Naïve Bayes, and Support Vector Machines, in order to choose the classifier that generates the best classification results. |
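The described pipeline maps naturally onto a few lines of scikit-learn. The features below are a crude stand-in (character n-grams rather than the paper's hand-crafted style features), so treat this as a shape sketch only.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

docs = ["gonna grab some food, u coming?",
        "We hereby request your attendance at the meeting."]
labels = ["informal", "formal"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # stand-in features
    LinearSVC(),                      # one of the three classifiers compared
)
model.fit(docs, labels)
print(model.predict(["Please find the attached report."]))
```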
Could autoimmunity be induced by vaccination? | Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. Autoimmune reactions reliably considered vaccine-associated include Guillain-Barré syndrome after the 1976 swine influenza vaccine, immune thrombocytopenic purpura after the measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination. In contrast, the suspected association between hepatitis B vaccine and multiple sclerosis has not been confirmed, even though it has recently been reconsidered, and the suggested link between childhood immunization and type 1 diabetes appears by now to have been definitively dismissed. Larger epidemiological studies are needed to obtain more reliable data on most of the suggested associations. |
Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. | BACKGROUND
Type 2 diabetes affects approximately 8 percent of adults in the United States. Some risk factors--elevated plasma glucose concentrations in the fasting state and after an oral glucose load, overweight, and a sedentary lifestyle--are potentially reversible. We hypothesized that modifying these factors with a lifestyle-intervention program or the administration of metformin would prevent or delay the development of diabetes.
METHODS
We randomly assigned 3234 nondiabetic persons with elevated fasting and post-load plasma glucose concentrations to placebo, metformin (850 mg twice daily), or a lifestyle-modification program with the goals of at least a 7 percent weight loss and at least 150 minutes of physical activity per week. The mean age of the participants was 51 years, and the mean body-mass index (the weight in kilograms divided by the square of the height in meters) was 34.0; 68 percent were women, and 45 percent were members of minority groups.
RESULTS
The average follow-up was 2.8 years. The incidence of diabetes was 11.0, 7.8, and 4.8 cases per 100 person-years in the placebo, metformin, and lifestyle groups, respectively. The lifestyle intervention reduced the incidence by 58 percent (95 percent confidence interval, 48 to 66 percent) and metformin by 31 percent (95 percent confidence interval, 17 to 43 percent), as compared with placebo; the lifestyle intervention was significantly more effective than metformin. To prevent one case of diabetes during a period of three years, 6.9 persons would have to participate in the lifestyle-intervention program, and 13.9 would have to receive metformin.
CONCLUSIONS
Lifestyle changes and treatment with metformin both reduced the incidence of diabetes in persons at high risk. The lifestyle intervention was more effective than metformin. |
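A back-of-the-envelope reading of the numbers-needed-to-treat follows (illustrative only: the published 6.9 and 13.9 are based on estimated three-year cumulative incidences, so a crude rate-difference calculation, which ignores censoring, lands in the same ballpark but not on the same values).

```python
# Crude NNT from the reported incidence rates (cases per 100 person-years).
placebo, metformin, lifestyle = 11.0, 7.8, 4.8
years = 3
nnt_lifestyle = 100 / ((placebo - lifestyle) * years)   # ~5.4 (paper: 6.9)
nnt_metformin = 100 / ((placebo - metformin) * years)   # ~10.4 (paper: 13.9)
print(f"crude NNT over 3 years: lifestyle {nnt_lifestyle:.1f}, "
      f"metformin {nnt_metformin:.1f}")
```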
Oral sarpogrelate can improve endothelial dysfunction as effectively as oral cilostazol, with fewer headaches, in active young male smokers | Sarpogrelate and cilostazol are two commonly used adjunctive antiplatelet agents that also can be used to improve endothelial dysfunction. We compared the effects of sarpogrelate and cilostazol on endothelial dysfunction in active male smokers with flow-mediated dilatation (FMD). We enrolled and compared baseline and follow-up FMD in 20 young male smokers without any known cardiovascular diseases. Two participants who were initially medicated with cilostazol dropped out because of severe headache after taking the medication. However, they continued the other experiment with sarpogrelate medication. Baseline endothelium-dependent dilatation (EDD) after reactive hyperemia was 7.5 % ± 1.9 % and endothelium-independent dilatation (EID) after sublingual administration of nitroglycerin was 13.3 % ± 3.4 %. After a 2-week treatment of cilostazol, follow-up EDD significantly increased (7.7 % ± 1.9 % to 8.8 % ± 2.0 %, P = 0.016), but follow-up EID changed insignificantly (13.2 % ± 3.5 % to 12.5 % ± 3.9 %, P = 0.350). With the sarpogrelate treatment, follow-up EDD was significantly increased (7.4 % ± 1.9 % to 8.8 % ± 1.9 %, P = 0.021), but follow-up EID was similar (13.5 % ± 3.5 % to 14.0 % ± 3.2 %, P = 0.427). There was no significant difference between the two groups in follow-up EDD and EID (P = 0.984 and 0.212, respectively). However, the mean score of headache intensity was significantly higher in the cilostazol group than in the sarpogrelate group (3.8 ± 2.5 vs 1.4 ± 2.2, P = 0.005). EDD showed a similar significant increase with the 2-week treatment of cilostazol and sarpogrelate. However, the intensity of headaches was significantly higher in the cilostazol group. |
Relating Reputation and Money in On-line Markets | Reputation in online economic systems is typically quantified using counters that specify positive and negative feedback from past transactions and/or some form of transaction network analysis that aims to quantify the likelihood that a network user will commit a fraudulent transaction. These approaches can be deceiving to honest users from numerous perspectives. We take a radically different approach with the goal of guaranteeing to a buyer that a fraudulent seller cannot disappear from the system with profit following a set of fabricated transactions that total a certain monetary limit. Even in the case of stolen identity, such an adversary cannot produce illegal profit unless a buyer decides to pay over the suggested limit. |
Retrieve and Refine: Improved Sequence Generation Models For Dialogue | Sequence generation models for dialogue are known to have several problems: they tend to produce short, generic sentences that are uninformative and unengaging. Retrieval models on the other hand can surface interesting responses, but are restricted to the given retrieval set leading to erroneous replies that cannot be tuned to the specific context. In this work we develop a model that combines the two approaches to avoid both their deficiencies: first retrieve a response and then refine it – the final sequence generator treating the retrieval as additional context. We show on the recent CONVAI2 challenge task our approach produces responses superior to both standard retrieval and generation models in human evaluations. |
Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising | The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing. |
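The residual learning idea reduces to a few lines of PyTorch: the network is trained to predict the noise, and the clean estimate is the input minus that prediction. The sketch below shrinks the architecture (the paper uses 17-20 layers) but keeps the Conv + BN + ReLU pattern and the residual output.

```python
import torch.nn as nn

class TinyDnCNN(nn.Module):
    """Minimal residual-learning denoiser in the spirit of DnCNN
    (depth and width cut down from the paper's models)."""
    def __init__(self, channels=1, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),      # BN speeds up training
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # the network predicts the residual (noise)
```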
Transponder using SIW based negative and zeroth order resonance dual-band antenna and sub-harmonic self-oscillating mixer | A simple active transponder using a substrate integrated waveguide (SIW) based dual-band antenna and a self-oscillating mixer is proposed. Making use of the negative first mode and zeroth order mode of the SIW based slot antenna, we designed the dual-band antenna operating at around 2.45 GHz and 5.85 GHz (industrial-scientific-medical bands). Due to the resonance of the diplexer and antenna, the self-oscillating mixer oscillates at around 1.7 GHz, providing a second harmonic which can be used to up-convert the 2.45 GHz input signal into a 5.85 GHz output. Furthermore, the proposed transponder only needs one DC bias for the drain at 1.5 V, which should be useful for portable devices with a single battery. |
Magnetic Equivalent Circuit Modeling of the AC Homopolar Machine for Flywheel Energy Storage | This paper develops a magnetic equivalent circuit model suitable to the design and optimization of the synchronous ac homopolar machine. The ac homopolar machine is of particular interest in the application of grid-based flywheel energy storage, where it has the potential to significantly reduce self-discharge associated with magnetic losses. The ac homopolar machine features both axial and radial magnetizing flux paths, which requires finite element analysis to be conducted in 3-D. The computation time associated with 3-D finite element modeling is highly prohibitive in the design process. The magnetic equivalent circuit model developed in this paper is shown to be a viable alternative for calculating several design performance parameters and has a computation time which is orders of magnitude less than that of 3-D finite element analysis. Results obtained from the developed model are shown to be in good agreement with finite element and experimental results for varying levels of saturation. |
Scene Text Localization and Recognition with Oriented Stroke Detection | An unconstrained end-to-end text localization and recognition method is presented. The method introduces a novel approach for character detection and recognition which combines the advantages of sliding-window and connected component methods. Characters are detected and recognized as image regions which contain strokes of specific orientations in a specific relative position, where the strokes are efficiently detected by convolving the image gradient field with a set of oriented bar filters. Additionally, a novel character representation efficiently calculated from the values obtained in the stroke detection phase is introduced. The representation is robust to shift at the stroke level, which makes it less sensitive to intra-class variations and the noise induced by normalizing character size and positioning. The effectiveness of the representation is demonstrated by the results achieved in the classification of real-world characters using a Euclidean nearest-neighbor classifier trained on synthetic data in a plain form. The method was evaluated on a standard dataset, where it achieves state-of-the-art results in both text localization and recognition. |
Gender identity and sport: is the playing field level? | This review examines gender identity issues in competitive sports, focusing on the evolution of policies relating to female gender verification and transsexual participation in sport. The issues are complex and continue to challenge sport governing bodies, including the International Olympic Committee, as they strive to provide a safe environment in which female athletes may compete fairly and equitably. |
Semantic Path based Personalized Recommendation on Weighted Heterogeneous Information Networks | Recently, heterogeneous information network (HIN) analysis has attracted a lot of attention, and many data mining tasks have been explored on HINs. As an important data mining task, a recommender system involves many object types (e.g., users, movies, actors, and interest groups in movie recommendation) and rich relations among these object types, which naturally constitute a HIN. The comprehensive information integration and rich semantic information of HINs make them promising for generating better recommendations. However, conventional HINs do not consider the attribute values on links, and the widely used meta path in HINs may fail to accurately capture semantic relations among objects, due to the existence of rating scores (usually ranging from 1 to 5) between users and items in recommender systems. In this paper, we are the first to propose the weighted HIN and weighted meta path concepts to subtly depict the path semantics by distinguishing different link attribute values. Furthermore, we propose a semantic path based personalized recommendation method, SemRec, to predict the rating scores of users on items. By setting meta paths, SemRec not only flexibly integrates heterogeneous information but also obtains prioritized and personalized weights representing user preferences on paths. Experiments on two real datasets illustrate that SemRec achieves better recommendation performance by flexibly integrating information with the help of weighted meta paths. |
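The point of a weighted meta path is easy to show on a toy example: two users are connected along user-(rate r)-item-(rate r)-user only when the rating values agree, a distinction an unweighted meta path cannot express. A hypothetical miniature:

```python
from collections import defaultdict

ratings = {("u1", "i1"): 5, ("u2", "i1"): 5, ("u3", "i1"): 2}

def weighted_path_count(u, v, ratings):
    """Count user-(rate r)-item-(rate r)-user path instances where both
    links carry the same rating value r."""
    by_item = defaultdict(dict)
    for (user, item), r in ratings.items():
        by_item[item][user] = r
    return sum(1 for users in by_item.values()
               if u in users and v in users and users[u] == users[v])

print(weighted_path_count("u1", "u2", ratings))  # 1: both rated i1 with 5
print(weighted_path_count("u1", "u3", ratings))  # 0: same item, different rating
```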
Visual and statistical comparison of metagenomes | BACKGROUND
Metagenomics is the study of the genomic content of an environmental sample of microbes. Advances in the throughput and cost-efficiency of sequencing technology are fueling a rapid increase in the number and size of metagenomic datasets being generated. Bioinformatics is faced with the problem of how to handle and analyze these datasets in an efficient and useful way. One goal of these metagenomic studies is to get a basic understanding of the microbial world both surrounding us and within us. One major challenge is how to compare multiple datasets. Furthermore, there is a need for bioinformatics tools that can process many large datasets and are easy to use.
RESULTS
This article describes two new and helpful techniques for comparing multiple metagenomic datasets. The first is a visualization technique for multiple datasets and the second is a new statistical method for highlighting the differences in a pairwise comparison. We have developed implementations of both methods that are suitable for very large datasets and provide these in Version 3 of our standalone metagenome analysis tool MEGAN.
CONCLUSION
These new methods are suitable for the visual comparison of many large metagenomes and the statistical comparison of two metagenomes at a time. Nevertheless, more work needs to be done to support the comparative analysis of multiple metagenome datasets.
AVAILABILITY
Version 3 of MEGAN, which implements all ideas presented in this article, can be obtained from our web site at: www-ab.informatik.uni-tuebingen.de/software/megan.
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online. |
Accurate tracking in NLOS environments using integrated IMU and fixed lag smoother | Multi-sensor data fusion using Inertial Measurement Units (IMUs) is a promising technique for improving the performance of positioning systems. However, the performance of conventional sensor fusion algorithms based on the Kalman Filter (KF) is compromised in indoor environments due to non-line-of-sight (NLOS) propagation. In this paper, we propose a semi-real time tracking algorithm which uses a fixed lag smoother for sensor fusion and achieves high accuracy in NLOS environments. The computational complexity of the algorithm is taken into consideration and is reduced by decreasing the operating rate of the smoother. The performance of the proposed algorithm is validated experimentally using a real indoor positioning platform. It is shown that the 90th percentile positioning error for a pedestrian is reduced by 42% using the proposed semi-real time tracking algorithm with 10 s lag, compared with using a KF-based real time tracking algorithm. |
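The essence of the approach: run a forward Kalman filter, keep a buffer of the last L filtered states, and apply a backward (Rauch-Tung-Striebel) pass before emitting the delayed, smoothed estimate. The 1-D constant-velocity toy below shows the mechanics; it is not the paper's IMU fusion model.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity transition (dt = 1)
H = np.array([[1.0, 0.0]])                # position-only measurement
Q, R = np.eye(2) * 0.01, np.array([[1.0]])

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                          # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)           # Kalman gain
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P    # update

def fixed_lag_output(xs, Ps):
    """Backward RTS pass over the buffered window; returns the smoothed
    state leaving the window (delayed by the lag, hence 'semi-real time')."""
    xs, Ps = [x.copy() for x in xs], [P.copy() for P in Ps]
    for k in range(len(xs) - 2, -1, -1):
        Pp = F @ Ps[k] @ F.T + Q
        G = Ps[k] @ F.T @ np.linalg.inv(Pp)
        xs[k] = xs[k] + G @ (xs[k + 1] - F @ xs[k])
        Ps[k] = Ps[k] + G @ (Ps[k + 1] - Pp) @ G.T
    return xs[0]
```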
PSYCHOLINGUISTIC APPROACHES TO SLA | These are exciting times for research into the psychological processes underlying second language acquisition (SLA). In the 1970s, SLA emerged as a field of inquiry in its own right (Brown 1980), and in the 1980s, a number of different approaches to central questions in the field began to develop in parallel and in relative isolation (McLaughlin and Harrington 1990). In the 1990s, however, these different approaches began to confront one another directly. Now we are entering a period reminiscent, in many ways, of the intellectually turbulent times following the Chomskyan revolution (Chomsky 1957; 1965). Now, as then, researchers are debating basic premises of a science of mind, language, and learning. Some might complain, not entirely without reason, that we are still debating the same issues after 30-40 years. However, there are now new conceptual and research tools available to test hypotheses in ways previously thought impossible. Because of this, many psychologists believe there will soon be significant advancement on some SLA issues that have resisted closure for decades. We outline some of these developments and explore where the field may be heading. More than ever, it appears possible that psychological theory and SLA theory are converging on solutions to common issues. |
Aging and wisdom: culture matters. | People from different cultures vary in the ways they approach social conflicts, with Japanese being more motivated to maintain interpersonal harmony and avoid conflicts than Americans are. Such cultural differences have developmental consequences for reasoning about social conflict. In the study reported here, we interviewed random samples of Americans from the Midwest United States and Japanese from the larger Tokyo area about their reactions to stories of intergroup and interpersonal conflicts. Responses showed that wisdom (e.g., recognition of multiple perspectives, the limits of personal knowledge, and the importance of compromise) increased with increasing age among Americans, but older age was not associated with wiser responses among Japanese. Younger and middle-aged Japanese showed greater use of wise-reasoning strategies than younger and middle-aged Americans did. This cultural difference was weaker for older participants' reactions to interpersonal conflicts and was actually reversed for intergroup conflicts. This research has important implications for the study of aging, cultural psychology, and wisdom. |
On Traffic-Aware Partition and Aggregation in MapReduce for Big Data Applications | The MapReduce programming model simplifies large-scale data processing on commodity cluster by exploiting parallel map tasks and reduce tasks. Although many efforts have been made to improve the performance of MapReduce jobs, they ignore the network traffic generated in the shuffle phase, which plays a critical role in performance enhancement. Traditionally, a hash function is used to partition intermediate data among reduce tasks, which, however, is not traffic-efficient because network topology and data size associated with each key are not taken into consideration. In this paper, we study to reduce network traffic cost for a MapReduce job by designing a novel intermediate data partition scheme. Furthermore, we jointly consider the aggregator placement problem, where each aggregator can reduce merged traffic from multiple map tasks. A decomposition-based distributed algorithm is proposed to deal with the large-scale optimization problem for big data application and an online algorithm is also designed to adjust data partition and aggregation in a dynamic manner. Finally, extensive simulation results demonstrate that our proposals can significantly reduce network traffic cost under both offline and online cases. |
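To make the contrast concrete: the default partitioner below ignores data sizes and topology, while the traffic-aware one weighs per-key byte counts against inter-node distances. The greedy rule is a simplified, hypothetical stand-in for the paper's decomposition-based optimisation.

```python
def hash_partition(key, n_reducers):
    return hash(key) % n_reducers            # topology- and size-oblivious default

def traffic_aware_partition(key_bytes, distance, n_reducers):
    """key_bytes: {key: {node: bytes of that key emitted at node}};
    distance[node][reducer]: network cost of moving one byte."""
    assignment = {}
    for key, per_node in sorted(key_bytes.items(),
                                key=lambda kv: -sum(kv[1].values())):
        def cost(r):
            return sum(b * distance[n][r] for n, b in per_node.items())
        assignment[key] = min(range(n_reducers), key=cost)   # cheapest reducer
    return assignment
```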
An Ensemble Model for Classification of Attacks with Feature Selection based on KDD99 and NSL-KDD Data Set | Information security is an extremely critical issue for every organization seeking to protect its information from unauthorized access. Intrusion detection systems play an important role in preventing data or information from being compromised by malicious behaviour. Basically, an intrusion detection system is a classifier that can classify data as normal or attack. In this research paper, we propose the ANN-Bayesian Net-GR technique, an ensemble of an Artificial Neural Network (ANN) and a Bayesian Net with the Gain Ratio (GR) feature selection technique. We apply various individual classification techniques and their ensemble models on the KDD99 and NSL-KDD data sets to check the robustness of the model. Because the data sets contain irrelevant features, we also apply the Gain Ratio feature selection technique to the best model. Finally, our proposed model produces the highest accuracy compared to the others.
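A minimal sketch of what such an ensemble-plus-feature-selection pipeline could look like in scikit-learn. This is illustrative only: scikit-learn has no Bayesian-network classifier or gain-ratio scorer, so GaussianNB and mutual information stand in for them, and synthetic data stands in for KDD99/NSL-KDD.

```python
# Illustrative stand-in pipeline, not the paper's exact setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 41 features, matching the KDD99 feature count; labels are synthetic here.
X, y = make_classification(n_samples=2000, n_features=41, n_informative=12,
                           random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),  # stand-in for Gain Ratio ranking
    VotingClassifier(
        estimators=[("ann", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
                    ("bayes", GaussianNB())],
        voting="soft"),                      # ensemble of ANN + (naive) Bayes
)
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())
```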
Graspit! A versatile simulator for robotic grasping | A robotic grasping simulator, called Graspit!, is presented as a versatile tool for the grasping community. The focus of the grasp analysis has been on force-closure grasps, which are useful for pick-and-place type tasks. This work discusses the different types of world elements and the general robot definition, and presents the robot library. The paper also describes the user interface of Graspit! and presents the collision detection and contact determination system. Grasp analysis and visualization methods are also presented that allow a user to evaluate a grasp and compute optimal grasping forces. A brief overview of the dynamic simulation system is provided.
The HAWKwood Database | We present a database consisting of wood pile images, which can be used as a benchmark to evaluate the performance of wood pile detection and surveying algorithms. We distinguish six database categories, which can be used for different types of algorithms. Images of both real and synthetic scenes are provided: 7655 images in total, divided into 354 data sets. Depending on the category, the data sets include either ground truth data or forestry-specific measurements against which algorithms may be compared.
Computational Complexity of Linear Large Margin Classification With Ramp Loss | Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypothesis class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypothesis class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin bound is NP-hard when considering linear hypotheses and the ρ-margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization.
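For reference, a sketch of the losses involved under their common textbook definitions (the paper's exact notation may differ), writing z for the margin y f(x):

```latex
% Hinge loss, ramp loss, and the rho-margin loss that generalizes it.
\[
\ell_{\mathrm{hinge}}(z) = \max(0,\, 1 - z), \qquad
\ell_{\mathrm{ramp}}(z) = \min\bigl(1,\, \max(0,\, 1 - z)\bigr),
\]
\[
\ell_{\rho}(z) = \min\Bigl(1,\, \max\bigl(0,\, 1 - \tfrac{z}{\rho}\bigr)\Bigr),
\qquad \rho > 0,
\]
% so \ell_{\rho} reduces to the ramp loss at \rho = 1, which is why the
% NP-hardness result transfers to ramp loss minimization.
```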
Segmented snake for contour detection | The active contour model, called snake, has been proved to be an effective method in contour detection. This method has been successfully employed in the areas of object recognition, computer vision, computer graphics and biomedical images. However, this model suffers from a great limitation, that is, it is difficult to locate concave parts of an object. In view of such a limitation, a segmented snake is designed and proposed in this paper. The basic idea of the proposed method is to convert the global optimization of a closed snake curve into local optimization on a number of open snake curves. The segmented snake algorithm consists of two steps. In the first step, the original snake model is adopted to locate the initial contour near the object boundary. In the second step, a recursive split-and-merge procedure is developed to determine the final object contour. The proposed method is able to locate all convex, concave and high curvature parts of an object accurately. A number of images are selected to evaluate the capability of the proposed algorithm and the results are encouraging.
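The two-step scheme lends itself to a recursive skeleton. The sketch below is only a structural outline under assumed helpers: the open-snake optimization and the split test are stubs standing in for the paper's energy minimization and boundary-error check.

```python
# Structural sketch of the split-and-merge step; helper functions are stubs.
import numpy as np

def optimize_open_snake(points, image):
    """Stub: locally optimize an open snake segment. Returns the points
    unchanged here, standing in for greedy/variational energy minimization."""
    return points

def split_at_high_error(points, image, tol):
    """Stub split test: report whether the segment still needs refinement.
    A real test would compare snake points against local edge evidence."""
    return False

def segmented_snake(contour, image, tol=1.0, max_depth=8):
    """Step two of the method: recursive split-and-merge over open snakes."""
    def refine(segment, depth):
        segment = optimize_open_snake(segment, image)
        if depth >= max_depth or not split_at_high_error(segment, image, tol):
            return [segment]
        mid = len(segment) // 2
        # Split into two open curves sharing the midpoint, refine each half.
        return refine(segment[: mid + 1], depth + 1) + refine(segment[mid:], depth + 1)
    pieces = refine(contour, 0)
    # Merge: drop the duplicated shared endpoints when re-joining pieces.
    return np.vstack([p[:-1] for p in pieces[:-1]] + [pieces[-1]])

theta = np.linspace(0, 2 * np.pi, 40)
contour = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # initial closed curve
final = segmented_snake(contour, image=None)
```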
A study in two-handed input | Two experiments were run to investigate two-handed input. The experimental tasks were representative of those found in CAD and office information systems.
Experiment one involved the performance of a compound selection/positioning task. The two sub-tasks were performed by different hands using separate transducers. Without prompting, novice subjects adopted strategies that involved performing the two sub-tasks simultaneously. We interpret this as a demonstration that, in the appropriate context, users are capable of simultaneously providing continuous data from two hands without significant overhead. The results also show that the speed of performing the task was strongly correlated to the degree of parallelism employed.
Experiment two involved the performance of a compound navigation/selection task. It compared a one-handed versus two-handed method for finding and selecting words in a document. The two-handed method significantly outperformed the commonly used one-handed method by a number of measures. Unlike experiment one, only two subjects adopted strategies that used both hands simultaneously. The benefits of the two-handed technique, therefore, are interpreted as being due to efficiency of hand motion. However, the two subjects who did use parallel strategies had the two fastest times of all subjects. |
The Use of Phrases and Structured Queries in Information Retrieval | Both phrases and Boolean queries have a long history in information retrieval, particularly in commercial systems. In previous work, Boolean queries have been used as a source of phrases for a statistical retrieval model. This work, like the majority of research on phrases, resulted in little improvement in retrieval effectiveness. In this paper, we describe an approach where phrases identified in natural language queries are used to build structured queries for a probabilistic retrieval model. Our results show that using phrases in this way can improve performance, and that phrases that are automatically extracted from a natural language query perform nearly as well as manually selected phrases. 1 Introduction The use of phrases as part of a text representation or indexing language has been investigated since the early days of information retrieval research. Cleverdon, for example, included phrase-based indexing in the Cranfield studies (1966). Salton (1968) also described a variety of experiments using phrases in the SMART system. Certainly, there has always been the feeling that phrases, if used correctly, should improve the specificity of the indexing language and, consequently, the quality of the text representation. The experimental results obtained with phrases do not, however, support this intuition. These results have been very mixed, ranging from small improvements in some collections to decreases in effectiveness in others. Fagan's recent thesis (1987) is one of the most comprehensive studies of automatic indexing using phrases, in that he used both "statistical" and "syntactic" phrases and varied a number of factors in the phrase formation process. A statistical phrase is defined by constraints on the number of occurrences and co-occurrences of its component words and/or the proximity between occurrences of components in a document. A syntactic phrase may be characterized by some of the same criteria as a statistical phrase, but in addition must obey some constraint on the syntactic relationships among its component words. Fagan's results showed …
The legal, social and ethical controversy of the collection and storage of fingerprint profiles and DNA samples in forensic science | The collection and storage of fingerprint profiles and DNA samples in the field of forensic science for nonviolent crimes is highly controversial. While biometric techniques such as fingerprinting have been used in law enforcement since the early 1900s, DNA presents a more invasive and contentious technique, as most sampling is of an intimate nature (e.g. a buccal swab). A fingerprint is a pattern residing on the surface of the skin, while a DNA sample needs to be extracted in the vast majority of cases (at times the extraction even implies the breaking of the skin). This paper aims to balance the need to collect DNA samples where direct evidence is lacking in violent crimes against the systematic collection of DNA from citizens who have committed acts such as petty crimes. The legal, ethical and social issues surrounding the proliferation of DNA collection and storage are explored, with a view to outlining the threats that such a regime may pose to citizens in the not-too-distant future, especially persons belonging to ethnic minority groups.
Potentiality of Bacterial Cellulose as the Scaffold of Tissue Engineering of Cornea | The bacterial cellulose (BC) secreted by Gluconacetobacter xylinus was explored as a novel scaffold material due to its unusual biocompatibility, light transmittance and material properties. The specific surface area of the freeze-dried BC sheet based on the BET isotherm was 22.886 m²/g, and the porosity was around 90%. SEM images show that a significant difference in porosity and pore size exists between the two sides of air-dried BC sheets. The width of the cellulose ribbons was 10 nm to 100 nm, as determined by AFM imaging. The examination of the growth of human corneal stromal cells on BC demonstrated that the material supported the growth and proliferation of human corneal stromal cells. The ingrowth of corneal stromal cells into the scaffold was verified by laser scanning confocal microscopy. The results suggest the potential of this biomaterial as a scaffold for tissue engineering of an artificial cornea.
Feed-Forward Networks with Attention Can Solve Some Long-Term Memory Problems | We propose a simplified model of attention which is applicable to feed-forward neural networks and demonstrate that the resulting model can solve the synthetic “addition” and “multiplication” long-term memory problems for sequence lengths which are both longer and more widely varying than the best published results for these tasks. 1 MODELS FOR SEQUENTIAL DATA Many problems in machine learning are best formulated using sequential data and appropriate models for these tasks must be able to capture temporal dependencies in sequences, potentially of arbitrary length. One such class of models are recurrent neural networks (RNNs), which can be considered a learnable function f whose output h_t = f(x_t, h_{t-1}) at time t depends on input x_t and the model's previous state h_{t-1}. Training of RNNs with backpropagation through time (Werbos, 1990) is hindered by the vanishing and exploding gradient problem (Pascanu et al., 2012; Hochreiter & Schmidhuber, 1997; Bengio et al., 1994), and as a result RNNs are in practice typically only applied in tasks where sequential dependencies span at most hundreds of time steps. Very long sequences can also make training computationally inefficient due to the fact that RNNs must be evaluated sequentially and cannot be fully parallelized.
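The simplified attention described here collapses a whole sequence into a single context vector, with no recurrence. A minimal numpy sketch of one plausible instantiation (the tanh scoring function and the parameter shapes are assumptions):

```python
# Each time step's state h_t gets a scalar score from a small learned
# function, the scores are softmax-normalized over time, and the sequence
# collapses to a weighted average c. W and v are assumed learned parameters.
import numpy as np

def feed_forward_attention(H, W, v):
    """H: (T, d) sequence of hidden states; W: (d, d); v: (d,)."""
    e = np.tanh(H @ W) @ v             # (T,) unnormalized scores
    a = np.exp(e - e.max())
    alpha = a / a.sum()                # attention weights over time steps
    return alpha @ H                   # (d,) context vector

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 16))          # a toy 50-step sequence of 16-d states
c = feed_forward_attention(H, rng.normal(size=(16, 16)), rng.normal(size=16))
```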
Epistasis — the essential role of gene interactions in the structure and evolution of genetic systems | Epistasis, or interactions between genes, has long been recognized as fundamentally important to understanding the structure and function of genetic pathways and the evolutionary dynamics of complex genetic systems. With the advent of high-throughput functional genomics and the emergence of systems approaches to biology, as well as a new-found ability to pursue the genetic basis of evolution down to specific molecular changes, there is a renewed appreciation both for the importance of studying gene interactions and for addressing these questions in a unified, quantitative manner. |
MAC Protocols for IEEE 802.11ax: Avoiding Collisions on Dense Networks | Wireless networks have become the main form of Internet access. Statistics show that global mobile Internet penetration should exceed 70% by 2019. Wi-Fi is an important player in this change. Based on IEEE 802.11, this technology has a crucial impact on how we share broadband access in both domestic and corporate networks. However, recent works have indicated performance issues in Wi-Fi networks, mainly when they have been deployed without planning and under high user density. Hence, different collision avoidance techniques and Medium Access Control (MAC) protocols have been designed in order to improve Wi-Fi performance. Analyzing the collision problem, this work strengthens the claims found in the literature about low Wi-Fi performance under dense scenarios. Then, in particular, this article overviews the MAC protocols used in the IEEE 802.11 standard and discusses solutions to mitigate collisions. Finally, it contributes by presenting future trends in MAC protocols. This assists in foreseeing expected improvements for the next generation of Wi-Fi devices.
SLO-aware colocation of data center tasks based on instantaneous processor requirements | In a cloud data center, a single physical machine simultaneously executes dozens of highly heterogeneous tasks. Such colocation results in more efficient utilization of machines, but, when tasks' requirements exceed available resources, some of the tasks might be throttled down or preempted. We analyze version 2.1 of the Google cluster trace, which shows short-term (1-second) task CPU usage. Contrary to the assumptions made by many theoretical studies, we demonstrate that the empirical distributions do not follow any single distribution. However, high percentiles of the total processor usage (summed over at least 10 tasks) can be reasonably estimated by the Gaussian distribution. We use this result for a probabilistic fit test, called the Gaussian Percentile Approximation (GPA), for standard bin-packing algorithms. To check whether a new task will fit into a machine, GPA checks whether the resulting distribution's percentile corresponding to the requested service level objective (SLO) is still below the machine's capacity. In our simulation experiments, GPA resulted in colocations exceeding the machines' capacity with a frequency similar to the requested SLO.
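Under the stated Gaussian approximation, the GPA fit test reduces to a single percentile check. A small sketch, assuming per-task means and variances of short-term CPU usage are available and that tasks are treated as independent:

```python
# Sketch of a GPA-style admission test: approximate the summed short-term
# CPU usage of colocated tasks by a Gaussian and admit the new task only if
# the SLO percentile of the resulting distribution stays below capacity.
from math import sqrt
from statistics import NormalDist

def gpa_fits(task_means, task_vars, new_mean, new_var, capacity, slo=0.99):
    mu = sum(task_means) + new_mean
    sigma = sqrt(sum(task_vars) + new_var)   # independence assumed
    # The usage level exceeded only with probability 1 - slo must fit.
    return NormalDist(mu, sigma).inv_cdf(slo) <= capacity

# Usage: place a task (mean 0.10 cores, variance 0.01) on a machine with
# 12 colocated tasks and 4 cores of capacity, at a 0.99 SLO.
print(gpa_fits([0.2] * 12, [0.02] * 12, 0.10, 0.01, capacity=4.0, slo=0.99))
```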
Tests of general relativity in the solar system | Tests of gravity performed in the solar system show a good agreement with general relativity. The latter is however challenged by observations at larger, galactic and cosmic, scales which are presently cured by introducing "dark matter" or "dark energy". A few measurements in the solar system, particularly the so-called "Pioneer anomaly", might also be pointing at a modification of the gravity law at ranges of the order of the size of the solar system. The present lecture notes discuss the current status of tests of general relativity in the solar system. They describe metric extensions of general relativity which have the capability to preserve compatibility with existing gravity tests while opening free space for new phenomena. They present arguments for new mission designs and new space technologies as well as for having a new look at data of existing or future experiments.
Identification of 20(S)-protopanaxadiol metabolites in human liver microsomes and human hepatocytes. | 20(S)-Protopanaxadiol (PPD, 1) is one of the aglycones of the ginsenosides and has a wide range of pharmacological activities. At present, PPD has progressed to early clinical trials as an antidepressant. In this study, its fate in mixed human liver microsomes (HLMs) and human hepatocytes was examined for the first time. By using liquid chromatography-electrospray ionization ion trap mass spectrometry, 24 metabolites were found. Four metabolites were isolated, and their structures were elucidated as (20S,24S)-epoxydammarane-3,12,25-triol (2), (20S,24R)-epoxydammarane-3,12,25-triol (3), (20S,24S)-epoxydammarane-12,25-diol-3-one (4), and (20S,24R)-epoxydammarane-12,25-diol-3-one (5) based on a detailed analysis of their spectroscopic data. The predominant metabolic pathway of PPD observed was the oxidation of the 24,25-double bond to yield 24,25-epoxides, followed by hydrolysis and rearrangement to form the corresponding 24,25-vicinal diol derivatives (M6) and the 20,24-oxide form (2 and 3). Further sequential metabolites (M2-M5) were also detected through the hydroxylation and dehydrogenation of 2 and 3. All of the phase I metabolites except for M1-1 possess a hydroxyl group at C-25 of the side chain, which was newly formed by biotransformation. Two glucuronide conjugates (M7) attributed to 2 and 3 were detected in human hepatocyte incubations, and their conjugation sites were tentatively assigned to the 25-hydroxyl group. The findings of this study strongly suggested that the formation of the 25-hydroxyl group is very important for the elimination of PPD. |
Serendipitous recommendation for scholarly papers considering relations among researchers | Serendipity occurs when one finds an interesting discovery while searching for something else. While search engines seek to report work relevant to a targeted query, recommendation engines are particularly well-suited for serendipitous recommendations as such processes do not need to fulfill a targeted query. Junior researchers can use such an engine to broaden their horizon and learn new areas, while senior researchers can discover interdisciplinary frontiers to apply integrative research. We adapt a state-of-the-art scholarly paper recommendation system's user profile construction to make use of information drawn from 1) dissimilar users and 2) co-authors to specifically target serendipitous recommendation. |
TOSThreads: thread-safe and non-invasive preemption in TinyOS | Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOSThreads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOSThreads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.
We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services. |
Secure Agreement Protocols: Reliable and Atomic Group Multicast in Rampart | Reliable and atomic group multicast have been proposed as fundamental communication paradigms to support secure distributed computing in systems in which processes may behave maliciously. These protocols enable messages to be multicast to a group of processes, while ensuring that all honest group members deliver the same messages and, in the case of atomic multicast, deliver these messages in the same order. We present new reliable and atomic group multicast protocols for asynchronous distributed systems. We also describe their implementation as part of Rampart, a toolkit for building high-integrity distributed services, i.e., services that remain correct and available despite the corruption of some component servers by an attacker. To our knowledge, Rampart is the first system to demonstrate reliable and atomic group multicast in asynchronous systems subject to process corruptions. |
A MultiPath Network for Object Detection | The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges. |
Potentials of Plasma NGAL and MIC-1 as Biomarker(s) in the Diagnosis of Lethal Pancreatic Cancer | Pancreatic cancer (PC) is a lethal malignancy with a very high mortality rate. The absence of sensitive and specific marker(s) is one of the major factors for the poor prognosis of PC patients. In pilot studies using small sets of patients, the secreted acute-phase protein neutrophil gelatinase-associated lipocalin (NGAL) and the TGF-β family member macrophage inhibitory cytokine-1 (MIC-1) have been proposed as the most promising biomarkers specifically elevated in the blood of PC patients. However, their performance as diagnostic markers for PC, particularly in pre-treatment patients, remains unknown. In order to evaluate the diagnostic efficacy of NGAL and MIC-1, their levels were measured in plasma samples from pre-treatment PC patients (n = 91) and compared with those in healthy control (HC) individuals (n = 24) and patients with chronic pancreatitis (CP, n = 23). The diagnostic performance of these two proteins was further compared with that of CA19-9, a tumor marker commonly used to follow PC progression. The levels of all three biomarkers were significantly higher in PC compared to HCs. The mean (± standard deviation, SD) plasma NGAL, CA19-9 and MIC-1 levels in PC patients were 111.1 ng/mL (2.2), 219.2 U/mL (7.8) and 4.5 ng/mL (4.1), respectively. In comparing resectable PC to healthy controls, all three biomarkers were found to have comparable sensitivities (between 64%-81%), but CA19-9 and NGAL had a higher specificity (92% and 88%, respectively). For distinguishing resectable PC from CP patients, CA19-9 and MIC-1 were most specific (74% and 78%, respectively). CA19-9 at an optimal cut-off of 54.1 U/ml is highly specific in differentiating resectable (stage 1/2) pancreatic cancer patients from controls in comparison to its clinical cut-off (37.1 U/ml). Notably, the addition of MIC-1 to CA19-9 significantly improved the ability to distinguish resectable PC cases from CP (p = 0.029). Overall, MIC-1 in combination with CA19-9 improved the diagnostic accuracy of differentiating PC from CP and HCs.
Taking Digital Ecosystems to SMEs A European case study | The digital business ecosystem (DBE) project experimented with new ways of bringing small and medium-sized enterprises (SMEs) into a major R&D project. Such involvement is an important factor in projects where the R&D performers are not the exploiters. The project utilised the social capital of regional actors, and conceptualised a certain kind of SME, known as driver SMEs. The strategy adopted has worked for this project and has provided a basis for self-sustained post-project expansion. In doing so it has created a model that could easily be adopted by other projects, both in Digital Ecosystems and beyond. |
Werdy: Recognition and Disambiguation of Verbs and Verb Phrases with Syntactic and Semantic Pruning | Word-sense recognition and disambiguation (WERD) is the task of identifying word phrases and their senses in natural language text. Though it is well understood how to disambiguate noun phrases, this task is much less studied for verbs and verbal phrases. We present Werdy, a framework for WERD with particular focus on verbs and verbal phrases. Our framework first identifies multi-word expressions based on the syntactic structure of the sentence; this allows us to recognize both contiguous and non-contiguous phrases. We then generate a list of candidate senses for each word or phrase, using novel syntactic and semantic pruning techniques. We also construct and leverage a new resource of pairs of senses for verbs and their object arguments. Finally, we feed the so-obtained candidate senses into standard word-sense disambiguation (WSD) methods, and boost their precision and recall. Our experiments indicate that Werdy significantly increases the performance of existing WSD methods. |
Safety Approach to Otoplasty: A Surgical Algorithm. | An algorithm was developed through an evolution of refinements in surgical technique with the goal of minimizing risk and morbidity in otoplasty. Key principles were the avoidance of cartilage incisions and transections and the use of multiple surgical techniques to distribute the "surgical load" evenly among these techniques. The present retrospective study was designed to test the safety and efficacy of the concept in 100 consecutive patients and to discuss the results in light of the literature. Data detailing the surgery and the preoperative and postoperative period were extracted from the records and during patient interviews. Patients were contacted to complete a questionnaire rating postoperative pain and their satisfaction with the final outcome on a 6-point visual analog scale (VAS). An expert and a lay panel assessed preoperative and postoperative frontal-view photographs using the same VAS. Postoperative pain was rated as minor (average pain VAS score, 2.33) and patient satisfaction was excellent (average satisfaction VAS score, 1.82). The assessment by the panels of expert and lay evaluators paralleled these outcomes, with postoperative average VAS scores of 1.69 and 1.87, respectively. Cartilage incision and transection can be effectively avoided in otoplasty. Even distribution of the surgical load among multiple techniques avoids the problems associated with "overload" of a single technique. The innovative technique of cortical mastoid drill-out is described. High satisfaction with the results, excellent patient comfort, and a favorable safety profile are associated with the present algorithm.
Multimodal Attention for Neural Machine Translation | The attention mechanism is an important part of neural machine translation (NMT), where it was reported to produce richer source representations compared to fixed-length encoding sequence-to-sequence models. Recently, the effectiveness of attention has also been explored in the context of image captioning. In this work, we assess the feasibility of a multimodal attention mechanism that simultaneously focuses on an image and its natural language description for generating a description in another language. We train several variants of our proposed attention mechanism on the Multi30k multilingual image captioning dataset. We show that a dedicated attention mechanism for each modality achieves up to 1.6 points in BLEU and METEOR compared to a text-only NMT baseline.
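One plausible way to write down a dedicated attention per modality (the notation and the additive fusion below are assumptions for illustration, not necessarily the paper's exact formulation):

```latex
% At decoding step t, separate attention distributions are computed over
% the source-text states h_j and the image feature vectors v_k, and the two
% context vectors are combined before predicting the next target word.
\[
\alpha_{tj} = \mathrm{softmax}_j\!\bigl(a(s_{t-1}, h_j)\bigr), \qquad
\beta_{tk}  = \mathrm{softmax}_k\!\bigl(a'(s_{t-1}, v_k)\bigr),
\]
\[
c_t^{\text{txt}} = \sum_j \alpha_{tj}\, h_j, \qquad
c_t^{\text{img}} = \sum_k \beta_{tk}\, v_k, \qquad
c_t = W_{\text{txt}}\, c_t^{\text{txt}} + W_{\text{img}}\, c_t^{\text{img}}.
\]
```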
Effect of the humidification device on the work of breathing during noninvasive ventilation | Abstract Objective. Heat and moisture exchangers (HME) increase circuitry deadspace compared to heated humidifiers (HH). This study compared the effect of HH and HME during noninvasive ventilation (NIV) on arterial blood gases and patient's effort assessed by respiratory muscles pressure-time product and by work of breathing (WOB). Design and setting. Randomized cross-over study in a medical intensive care unit. Patients. Nine patients receiving NIV for moderate to severe acute hypercapnic respiratory failure. Measurements. HME was randomly compared to HH during periods of 20 min. Each device was studied without (ZEEP) and with a PEEP of 5 cmH2O. At the end of each period arterial blood gases, ventilatory parameters, oesophageal and gastric pressures were recorded and indexes of patient's effort calculated. Results. Minute ventilation was significantly higher with HME than with HH (ZEEP: 15.8±3.7 vs. 12.8±3.6 l/min) despite a similar PaCO2 (60±16 vs. 57±16 mmHg). HME was associated with a greater increase in WOB (ZEEP: 15.5±7.7 vs. 8.4±4.5 J/min and PEEP: 11.3±5.7 vs. 7.3±3.8 J/min) and indexes of patient effort. NIV with HME failed to decrease WOB compared to baseline. Addition of PEEP reduced the level of effort, but similar differences between HME and HH were observed. Conclusions. In patients receiving NIV for moderate to severe acute hypercapnic respiratory failure, the use of HME lessens the efficacy of NIV in reducing effort compared to HH. |
High Platelet Reactivity in Patients with Acute Coronary Syndromes Undergoing Percutaneous Coronary Intervention: Randomised Controlled Trial Comparing Prasugrel and Clopidogrel | BACKGROUND
Prasugrel is more effective than clopidogrel in reducing platelet aggregation in acute coronary syndromes. Data available on prasugrel reloading in clopidogrel treated patients with high residual platelet reactivity (HRPR) i.e. poor responders, is limited.
OBJECTIVES
To determine the effects of prasugrel loading on platelet function in patients on clopidogrel with high platelet reactivity undergoing percutaneous coronary intervention for acute coronary syndrome (ACS).
PATIENTS
Patients with ACS on clopidogrel who were scheduled for PCI and found to have a platelet reactivity ≥40 AUC with the Multiplate Analyzer, i.e. "poor responders", were randomised to prasugrel (60 mg loading and 10 mg maintenance dose) or clopidogrel (600 mg reloading and 150 mg maintenance dose). The primary outcome measure was the proportion of patients with platelet reactivity <40 AUC 4 hours after loading with the study medication, and also at one hour (secondary outcome). 44 patients were enrolled and the study was terminated early as clopidogrel use decreased sharply due to the introduction of newer P2Y12 inhibitors.
RESULTS
At 4 hours after study medication, 100% of patients treated with prasugrel compared to 91% of those treated with clopidogrel had platelet reactivity <40 AUC (p = 0.49), while at 1 hour the proportions were 95% and 64%, respectively (p = 0.02). Mean platelet reactivity at 4 and 1 hours after study medication was 12 versus 22 (p = 0.005) and 19 versus 34 (p = 0.01) in the prasugrel and clopidogrel groups, respectively.
CONCLUSIONS
Routine platelet function testing identifies patients with high residual platelet reactivity ("poor responders") on clopidogrel. A strategy of prasugrel rather than clopidogrel reloading results in earlier and more sustained suppression of platelet reactivity. Future trials need to identify if this translates into clinical benefit.
TRIAL REGISTRATION
ClinicalTrials.gov NCT01339026. |
Outcomes of Cardiac Resynchronization Therapy With or Without Defibrillation in Patients With Nonischemic Cardiomyopathy. | BACKGROUND
Recent studies have cast doubt on the benefit of cardiac resynchronization therapy (CRT) with defibrillation (CRT-D) versus pacing (CRT-P) for patients with nonischemic cardiomyopathy (NICM). Left ventricular myocardial scar portends poor clinical outcomes.
OBJECTIVES
The aim of this study was to determine whether CRT-D is superior to CRT-P in patients with NICM either with (+) or without (-) left ventricular midwall fibrosis (MWF), detected by cardiac magnetic resonance.
METHODS
Clinical events were quantified in patients with NICM who were +MWF (n = 68) or -MWF (n = 184) who underwent cardiac magnetic resonance prior to CRT device implantation.
RESULTS
In the total study population, +MWF emerged as an independent predictor of total mortality (adjusted hazard ratio [aHR]: 2.31; 95% confidence interval [CI]: 1.45 to 3.68), total mortality or heart failure hospitalization (aHR: 2.02; 95% CI: 1.32 to 3.09), total mortality or hospitalization for major adverse cardiac events (aHR: 2.02; 95% CI: 1.32 to 3.07), death from pump failure (aHR: 1.95; 95% CI: 1.11 to 3.41), and sudden cardiac death (aHR: 3.75; 95% CI: 1.26 to 11.2) over a maximum follow-up period of 14 years (median 3.8 years [interquartile range: 2.0 to 6.1 years] for +MWF and 4.6 years [interquartile range: 2.4 to 8.3 years] for -MWF). In separate analyses of +MWF and -MWF, total mortality (aHR: 0.23; 95% CI: 0.07 to 0.75), total mortality or heart failure hospitalization (aHR: 0.32; 95% CI: 0.12 to 0.82), and total mortality or hospitalization for major adverse cardiac events (aHR: 0.30; 95% CI: 0.12 to 0.78) were lower after CRT-D than after CRT-P in +MWF but not in -MWF.
CONCLUSIONS
In patients with NICM, CRT-D was superior to CRT-P in +MWF but not -MWF. These findings have implications for the choice of device therapy in patients with NICM. |
Comparison of deep learning methods to traditional methods for network intrusion detection | Recently, deep learning has gained prominence due to the potential it portends for machine learning. For this reason, deep learning techniques have been applied in many fields, such as pattern recognition and classification. Intrusion detection analyzes data from monitored security events to assess the security situation of a network. Many traditional machine learning methods have been applied to intrusion detection, but detection performance and accuracy still need to be improved. This paper discusses different methods used to classify network traffic. We applied these methods to an open data set and experimented with them to find the best approach to intrusion detection.
Culture-sensitive neural substrates of human cognition: a transcultural neuroimaging approach | Our brains and minds are shaped by our experiences, which mainly occur in the context of the culture in which we develop and live. Although psychologists have provided abundant evidence for diversity of human cognition and behaviour across cultures, the question of whether the neural correlates of human cognition are also culture-dependent is often not considered by neuroscientists. However, recent transcultural neuroimaging studies have demonstrated that one's cultural background can influence the neural activity that underlies both high- and low-level cognitive functions. The findings provide a novel approach by which to distinguish culture-sensitive from culture-invariant neural mechanisms of human cognition. |
Misesian Praxeology: An Illustration from the Field of Sociology of Delinquency | The two main principles of the praxeological system elaborated by Mises are his concept of action and his epistemological apriorism. This paper illustrates these principles in the field of the sociology of delinquency. It first shows that the Misesian concept of action is very helpful in order to (1) understand why a praxeological turn occurred in the 1950s with the critique of the culturalist approach to criminality, and (2) analyze some of the main theoretical developments that took place afterwards, such as the social control theory, the low self-control theory, and the routine activity theory. The paper then shows that, behind their positivist façade, …
Network traffic characteristics of data centers in the wild | Although there is tremendous interest in designing improved networks for data centers, very little is known about the network-level traffic characteristics of data centers today. In this paper, we conduct an empirical study of the network traffic in 10 data centers belonging to three different categories, including university, enterprise campus, and cloud data centers. Our definition of cloud data centers includes not only data centers employed by large online service providers offering Internet-facing applications but also data centers used to host data-intensive (MapReduce style) applications. We collect and analyze SNMP statistics, topology and packet-level traces. We examine the range of applications deployed in these data centers and their placement, the flow-level and packet-level transmission properties of these applications, and their impact on network and link utilizations, congestion and packet drops. We describe the implications of the observed traffic patterns for data center internal traffic engineering as well as for recently proposed architectures for data center networks.
Generating Text Summaries through the Relative Importance of Topics | This work proposes a new extractive text-summarization algorithm based on the importance of the topics contained in a document. The basic ideas of the proposed algorithm are as follows. At first the document is partitioned by using the TextTiling algorithm, which identifies topics (coherent segments of text) based on the TF-IDF metric. Then for each topic the algorithm computes a measure of its relative relevance in the document. This measure is computed by using the notion of TF-ISF (Term Frequency Inverse Sentence Frequency), which is our adaptation of the well-known TF-IDF (Term Frequency Inverse Document Frequency) measure in information retrieval. Finally, the summary is generated by selecting from each topic a number of sentences proportional to the importance of that topic.
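Following the stated analogy with TF-IDF, TF-ISF can be sketched directly by treating sentences as the "documents". The tokenization and length normalization below are simplifications, and the topic-proportional selection step is omitted:

```python
# A term is informative when it is frequent within a sentence but rare
# across sentences; sentence scores are length-normalized sums of TF-ISF.
import math
import re
from collections import Counter

def tf_isf_scores(sentences):
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(tokenized)
    sent_freq = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = sum(freq * math.log(n / sent_freq[w]) for w, freq in tf.items())
        scores.append(score / max(len(toks), 1))
    return scores

sents = ["the cat sat on the mat", "dogs chase cats", "the mat was red"]
print(tf_isf_scores(sents))  # higher scores mark more topic-specific sentences
```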
Vibration and motion control design and trade-off for high-performance mechatronic systems | This paper discusses the H∞-based design of a vibration controller for an industrial pick-and-place machine. A vibration controller is added to the classical motion control scheme with the purpose of improving positioning behavior by reducing the vibration level and settling time. It is shown that a trade-off is required between vibration reduction and motion control. The approach is validated experimentally and the results clearly illustrate the benefit of the proposed method. |
Melatonin deficiency and its implications for the treatment of insomnia in elderly subjects | |
Modifications of interfacial proteins in oil-in-water emulsions prior to and during lipid oxidation. | Lipid oxidation is a major cause for the degradation of biological systems and foods, but the intricate relationship between lipid oxidation and protein modifications in these complex multiphase systems remains unclear. The objective of this work was to have a spatial and temporal insight of the modifications undergone by the interfacial or the unadsorbed proteins in oil-in-water emulsions during lipid oxidation. Tryptophan fluorescence and oxygen uptake were monitored simultaneously during incubation in different conditions of protein-stabilized oil-in-water emulsions. Kinetic parameters demonstrated that protein modifications, highlighted by decrease of protein fluorescence, occurred as an early event in the sequence of the reactions. They concerned more specifically the proteins adsorbed at the oil/water interface. The reactions led in a latter stage to protein aggregation, carbonylation, and loss of protein solubility. |
Were there Designed Landscapes in Medieval Ireland? | Relatively little has been written about the non-pragmatic aspects of medieval Irish landscapes. The dominance of political and ethnic conflict in the traditional view of medieval Irish history has tempted archaeologists away from any interpretation of medieval landscape that stresses pleasure and intellect. This article attempts to redress that imbalance, principally by a critique of the current literature but also by the brief presentation of two designed landscapes of c. 1300.
Strange mesons as a probe for dense nuclear matter | The production and propagation of kaons and antikaons has been studied in symmetric nucleus-nucleus collisions in the SIS energy range. The ratio of the excitation functions of K^+ production in Au+Au and C+C collisions increases with decreasing beam energy. This effect was predicted for a soft nuclear equation-of-state. In noncentral Au+Au collisions, the K^+ mesons are preferentially emitted perpendicular to the reaction plane. The K^-/K^+ ratio from A+A collisions at beam energies which are equivalent with respect to the threshold is found to be about two orders of magnitude larger than the corresponding ratio from proton-proton collisions. Both effects are considered to be experimental signatures for a modification of kaon properties in the dense nuclear medium. |
Logistic Regression and Collaborative Filtering for Sponsored Search Term Recommendation | Sponsored search advertising is largely based on bidding on individual terms. The richness of natural languages permits web searchers to express their information needs in myriad ways. Advertisers have difficulty discovering all the terms that are relevant to their products or services. We examine the performance of logistic regression and collaborative filtering models on two different data sources to predict terms relevant to a set of seed terms describing an advertiser’s product or service. |
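A hypothetical sketch of the logistic-regression side of such a recommender; the features relating candidate terms to the seed set and the tiny training set are made up for illustration, since the abstract does not specify them:

```python
# Each candidate term is described by assumed features relating it to the
# advertiser's seed terms, and the model scores its relevance for ranking.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [log co-occurrence count with seeds, shares a word with a seed,
#           character trigram overlap] -- hypothetical feature design.
X_train = np.array([[3.2, 1, 0.40], [0.1, 0, 0.05], [2.5, 1, 0.30], [0.3, 0, 0.10]])
y_train = np.array([1, 0, 1, 0])          # editorially judged relevance

model = LogisticRegression().fit(X_train, y_train)
candidates = np.array([[2.8, 1, 0.35], [0.2, 0, 0.08]])
print(model.predict_proba(candidates)[:, 1])  # relevance scores for ranking
```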
Algorithms and bounds for energy-based multi-source localization in log-normal fading | The problem of multi-source localization in log-normal fading is addressed. A Gaussian-Mixture Model (GMM) is proposed as an approximation of the measured signal strengths from various sensors in space, parameterized by the positions and the transmission (Tx) powers of the sources. The iterative Expectation-Maximization algorithm is invoked for the estimation of the relevant GMM parameter set. The complexity of the proposed algorithm grows only linearly with the number of sources, a major advantage in comparison with the direct Maximum Likelihood approach. When the number of sources is unknown, both the Akaike Information Criterion and the Minimum-Description-Length method are employed to estimate the proper number of sources. In addition, approximate Cramer-Rao lower bounds are derived for the estimation of the source positions and Tx powers. Simulations and laboratory experiments are presented, which show close agreement with theory.
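A sketch of the measurement model that such a formulation typically assumes (the symbols here are assumptions, not necessarily the paper's notation):

```latex
% Sensor i measures the superposed power of sources k at distances d_{ik}
% under log-normal shadowing, so the received power in dB carries Gaussian
% noise; positions and Tx powers \Pi_k parameterize the fitted mixture.
\[
P_i \;=\; 10\log_{10}\!\Bigl(\sum_{k} \frac{\Pi_k}{d_{ik}^{\,\gamma}}\Bigr) + w_i,
\qquad w_i \sim \mathcal{N}(0, \sigma^2),
\]
% with \gamma the path-loss exponent; EM then estimates the per-component
% (per-source) parameters of the GMM approximation to these measurements.
```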
Snoezelen, structured reminiscence therapy and 10-minutes activation in long term care residents with dementia (WISDE): study protocol of a cluster randomized controlled trial | BACKGROUND
People with dementia are often inapproachable due to symptoms of their illness. Therefore nurses should establish relationships with dementia patients via their remaining resources and facilitate communication. In order to achieve this, different targeted non-pharmacological interventions are recommended and practiced. However there is no sufficient evidence about the efficacy of most of these interventions. A number of publications highlight the urgent need for methodological sound studies so that more robust conclusions may be drawn.
METHODS/DESIGN
The trial is designed as a cluster randomized controlled trial with 20 nursing homes in Saxony and Saxony-Anhalt (Germany) as the units of randomization. Nursing homes will be randomly allocated into 4 study groups consisting of 5 clusters and 90 residents: snoezelen, structured reminiscence therapy, 10-minutes activation or unstructured verbal communication (control group). The purpose is to determine whether the interventions are effective to reduce apathy in long-term care residents with dementia (N = 360) as the main outcome measure. Assessments will be done at baseline, 3, 6 and 12 months after beginning of the interventions.
DISCUSSION
This trial will particularly contribute to the evidence on efficacy of non-pharmacological interventions in dementia care.
TRIAL REGISTRATION
ClinicalTrials.gov NCT00653731. |
Using enterprise systems to realize digital business strategies | Purpose – Organizations invest in enterprise systems (ESs) with an expectation to share digital information from disparate sources to improve organizational effectiveness. This study aims to examine how organizations realize digital business strategies using an ES. It does so by evaluating the ES data support activities for knowledge creation, particularly how ES data are transformed into corporate knowledge relevant to the business strategies sought, and how this knowledge leads to the realization of business benefits. The linkage between establishing a digital business strategy, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the rationale for this study. Design/methodology/approach – This study develops and utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the role of digital business strategies in achieving benefits using an ES. Semi-structured interviews are first conducted with ES vendors, consultants and IT research firms to understand the process of ES data transformation for realizing business strategies from their perspective. This is followed by three in-depth cases (two large and one medium-sized organization) that have implemented ESs. The empirical data are analyzed using the condensation approach. This method condenses the data into multiple groups according to pre-defined categories, which follow the scope of the research questions. Findings – The key findings emphasize that strategic benefit realization from an ES implementation is a holistic process that includes not only the essential data and technology factors, but also factors such as digital business strategy deployment, people and process management, and skills and competency development. Although many companies are mature with their ES implementation, these firms have only recently started aligning their ES capabilities with digital business strategies, correlating data, decisions, and actions to maximize business value from their ES investment. Research limitations/implications – The findings reflect the views of two large and one medium-sized organization in the manufacturing sector. Although the evidence of the success of the benefit realization process and its results is more prominent in larger organizations than in the medium-sized one, it cannot be generalized that smaller firms cannot achieve these results. Exploration of these aspects in smaller firms or a different industry sector such as retail/service would be of value. Practical implications – The paper highlights the importance of tools and practices for accessing relevant information through an integrated ES so that competent decisions can be established towards achieving digital business strategies and optimizing organizational performance. Knowledge is a key factor in this process. Originality/value – The paper evaluates a holistic framework for the utilization of ES data in realizing digital business strategies. Thus, it develops an enhanced transformational cycle model for ES data transformation into knowledge and results, which helps sustain the success of the transformation process in the long term.
A PERSONALIZED WEB SEARCH BASED ON USER PROFILE AND USER CLICKS | Generally, web search engines are built to serve all users, independent of the individual needs of any user. Personalization of web search means carrying out retrieval for each user by incorporating his or her interests. This has become an important factor in daily usage, as it improves retrieval effectiveness on topics the user would look for. Several studies have been done in this field. This paper proposes a new idea for personalized web search based on the user profile and user clicks.
Geography and history: a reactivation of old interdisciplinary relations? | The article addresses the topic of the interdisciplinary relations, supposed or real, that have linked geography and history over time. The study aims to explore and analyze the written opinions of scholars who generally concur in highlighting a certain degree of interdependence and cooperation between these disciplines, based on the work carried out by geographers and historians. In both the scientific and academic fields, history regards geography as an indispensable auxiliary, and vice versa. At present, new manifestations of such relations are emerging, which are discussed and evaluated in terms of the theoretical development trends of historical geography and environmental history.
Leadership in Teams: A Functional Approach to Understanding Leadership Structures and Processes | As the use of teams has increased in organizations, research has begun to focus on the role of leadership in fostering team success. This review sought to summarize this literature and advance research and theory by focusing on leadership processes within a team and describing how team leadership can arise from four distinct sources inside and outside a team. Then, drawing from this inclusive, team-centric view of leadership, the authors describe 15 team leadership functions that help teams satisfy their critical needs and regulate their behavior in the service of goal accomplishment. This integrative view of team leadership enables the summarization of past research and identification of promising areas of future research.
Neural mechanisms of selective visual attention. | The two basic phenomena that define the problem of visual attention can be illustrated in a simple example. Consider the arrays shown in each panel of Figure 1. In a typical experiment, before the arrays were presented, subjects would be asked to report letters appearing in one color (targets, here black letters), and to disregard letters in the other color (nontargets, here white letters). The array would then be briefly flashed, and the subjects, without any opportunity for eye movements, would give their report. The display mimics our usual cluttered visual environment: It contains one or more objects that are relevant to current behavior, along with others that are irrelevant. The first basic phenomenon is limited capacity for processing information. At any given time, only a small amount of the information available on the retina can be processed and used in the control of behavior. Subjectively, giving attention to any one target leaves less available for others. In Figure 1, the probability of reporting the target letter N is much lower with two accompanying targets (Figure 1a) than with none (Figure 1b). The second basic phenomenon is selectivity, the ability to filter out unwanted information. Subjectively, one is aware of attended stimuli and largely unaware of unattended ones. Correspondingly, accuracy in identifying an attended stimulus may be independent of the number of nontargets in a display (Figure 1a vs 1e) (see Bundesen 1990, Duncan 1980).
An efficient path-based equivalent circuit model for design, synthesis, and optimization of power distribution networks in multilayer printed circuit boards | In high-speed printed circuit boards, decoupling capacitors are commonly used to mitigate the power-bus noise that causes many signal integrity problems. It is very important to determine their proper locations and values so that the power distribution network has low impedance over a wide range of frequencies, which demands a precise power-bus model that accounts for the decoupling capacitors. However, conventional power-bus models suffer from various problems: numerical analyses require huge computation, while lumped circuit models show poor accuracy. In this paper, a novel power-bus model is proposed, which simplifies the n-port Z-parameters of a power-bus plane to a lumped T-network circuit model. It exploits the path-based equivalent circuit model to consider the interference of the current paths between the decoupling capacitors, whereas conventional lumped models assume that all decoupling capacitors are connected in parallel, independently of each other. It also models the equivalent electrical parameters of the board parasitics precisely, whereas conventional lumped models employ only the inter-plane capacitance of the power-ground planes. Although it is a lumped model for fast and easy calculation, experimental results show that the proposed model is almost as precise as numerical analysis. Consequently, the proposed model enables quick and accurate optimization of power distribution networks in the frequency domain by determining the locations and values of the decoupling capacitors.
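As general background on why decoupling-capacitor values and parasitics shape the power-bus impedance (a generic single-capacitor model, not the paper's path-based T-network):

```python
# The impedance of one decoupling capacitor with its parasitics (ESR, ESL)
# dips at the series resonance set by C and ESL, which is why capacitor
# values determine where the network is low-impedance. Values illustrative.
import numpy as np

def decap_impedance(f, C, esr=5e-3, esl=1e-9):
    w = 2 * np.pi * f
    return esr + 1j * w * esl + 1.0 / (1j * w * C)

f = np.logspace(5, 9, 5)                     # 100 kHz .. 1 GHz
for C in (100e-9, 1e-6):                     # two capacitor choices
    print(C, np.abs(decap_impedance(f, C)))  # |Z| minimum shifts with C
```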
A pilot study examining the effect of mindfulness on depression and anxiety for minority children. | Depression and anxiety are the most common mental health problems affecting children today (Farrell & Barrett, 2007). Research has documented an association among anxiety, depression, and psychosocial impairments including immaturity, inattention, concentration problems, academic difficulties, poor peer relations, low self-esteem, and low social competence (Ialongo, Edelsohn, Werthermar-Larsson, Crockett, & Kellam, 1994; Kashani & Orvaschel, 1990; Kendall, Cantwell, & Kazdin, 1989; Strauss, Frame, & Forehand, 1987). Mindfulness-based stress reduction was introduced by Jon Kabat-Zinn (1990) nearly two decades ago. The goal of persons who engage in mindfulness is awareness in the present moment with full attention to experiencing what is happening now (Kabat-Zinn, 2003). The primary mechanism is self-management of attention (Semple, Reid, & Miller, 2005). Segal, Williams, and Teasdale (2002) have taken mindfulness principles and merged them with cognitive therapy to successfully treat depression in adults. However, there is scant research addressing mindfulness in children (Burke, 2009). Some existing research focuses on anxiety (Napoli, Krech, & Holley, 2005; Saltzman & Goldin, 2008; Semple et al., 2005). Only one study focuses on depression and anxiety (Lee, Semple, Rosa, & Miller, 2008). These researchers reported no change in depression or anxiety for 25 children who came from ethnically diverse backgrounds. Methodological limitations included the lack of a comparison/control group and the use of depression and anxiety measures appropriate for clinical samples even though their participants …
METRIC AND TOPO-GEOMETRIC PROPERTIES OF URBAN STREET NETWORKS: some convergences, divergences, and new results | The theory of cities, which has grown out of the use of space syntax techniques in urban studies, proposes a curious mathematical duality: that urban space is locally metric but globally topo-geometric. Evidence for local metricity comes from such generic phenomena as grid intensification to reduce mean trip lengths in live centres, the fall of movement from attractors with metric distance, and the commonly observed decay of shopping with metric distance from an intersection. Evidence for global topo-geometry comes from the fact that we need to utilise both the geometry and connectedness of the larger scale space network to arrive at configurational measures which optimally approximate movement patterns in the urban network. It might be conjectured that there is some threshold above which human beings use some geometrical and topological representation of the urban grid rather than the sense of bodily distance when making movement decisions, but this is unknown. The discarding of metric properties in the large scale urban grid has, however, been controversial. Here we cast a new light on this duality. We show first some phenomena in which metric and topo-geometric measures of urban space converge and diverge, and in doing so clarify the relation between the metric and topo-geometric properties of urban spatial networks. We then show how metric measures can be used to create a new urban phenomenon: the partitioning of the background network of urban space into a network of semi-discrete patches by applying metric universal distance measures at different metric radii, suggesting a natural spatial area-isation of the city at all scales. On this basis we suggest a key clarification of the generic structure of cities: that metric universal distance captures exactly the formally and functionally local patchwork properties of the network, most notably the spatial differentiation of areas, while the topo-geometric measures identify the structure which overcomes locality and links the urban patchwork into a whole at different scales. Introduction: the dual urban network The theory of cities, which has grown out of the use of space syntax techniques in urban studies, proposes that urban street networks have a dual form: a foreground network of linked centres at all scales, and a background network of primarily residential space in which the foreground network is embedded (Hillier 2001/2). The theory also notes a mathematical duality. On the one hand, measures which express the geometric and topological properties of the network at an extended scale, such as integration and choice measures in axial maps or segment angular maps, are needed to capture structure-function relations such as natural movement patterns (Hillier & Iida 2005). We can call these measures topo-geometric. On the other, at a more localised level, an understanding of structure-function relations often requires an account of metric properties – for example the generic, but usually local, phenomenon of grid intensification to reduce mean trip lengths in live centres (Siksna 1997, Hillier 1999), the fall of movement rates with metric distance from attractors, and the commonly observed decay of shopping with metric distance from an intersection. In terms of understanding structure-function relations, urban space seems to be globally topo-geometric but locally metric.
Here we propose to link these two dualities in a more thorough-going way. We show first that the large scale foreground network of space in cities, in spite of the claims of critics (Ratti 2004), really is not metric: on the contrary, the substitution of metric for topo-geometric measures in the analysis has catastrophic effects on the ability of syntax to account for structure-function relations at this scale. At the same time, topo-geometric measures turn out to capture some interesting metric properties of the larger scale urban network. But we then show that the background network of space really is metric in a much more general sense than has been thought, in that metric measures at different radii can be used to partition the background network of urban space into a patchwork of semi-discrete areas, suggesting a natural metric area-isation of cities at all scales as a function of the placing, shaping and scaling of urban blocks. On this basis we suggest a clarification of the dual structure of cities: metric ‘universal distance’ measures (distance from all points to all others – Hillier 1996) can capture the spatial differentiation of the background urban network into a patchwork of local areas, while the topo-geometric measures identify the structures which overcome locality and link the urban patchwork into a whole at different scales. The patchwork theory is in effect a theory of block size and shape, picking up the local distortions in urban space induced by the placing and shaping of physical structures. More generally, we can say that the local-to-global topo-geometric structure reflects the visual, and so non-local, effects of placing blocks in space, while the patchwork structure reflects metric, and so local, effects. The patchwork theory extends and generalises the concept of grid intensification, meaning the reduction of block size to reduce mean distance from all points to all others in a space network. As shown in (Hillier 2000), holding total land coverage and travellable distance in free space constant, a grid in which smaller blocks are placed at the centre and larger blocks at the edge has a lower mean distance from all points to all others in the space network than a regular grid, while if larger blocks are placed at the centre and smaller blocks at the edge, the mean distance from all points to all others is higher than in a regular grid. This follows from the partitioning theory set out in Chapter 8 of Space is the Machine (Hillier 1996). In general in urban grids, live centres and sub-centres (‘live’ in the sense of having movement-dependent uses such as retail and catering) tend to the grid-intensified form to maximise the inter-accessibility of the facilities within the centre; residential areas tend to larger block sizes, reflecting the need to restrain and structure movement in the image of a spatial culture; and the linkages between centres tend to an even larger block size again, an effect of the directional structuring of routes. The network of linked centres which dominates the spatial structure of cities thus tends to oscillate between a relatively large and a relatively small block size, with the residential background occupying the middle range. This block size pattern is explained more fully in (Hillier 2001/2).
In this paper we:
– First review the duality of urban space in three ways: geometrically, to establish its empirical existence as a key dimension of urban form; functionally, to show its implications in terms of movement and land use patterns; and syntactically, to show the relations between the two.
– Then explore some of the suggestions that have been made about reducing the distance between syntax and more traditional metric approaches, in particular by examining the suggestion of Ratti that we should add metric weightings to the main syntax measures. We show the consequences of these suggestions for any theory which seeks to identify functionally meaningful structures in urban space.
– Then suggest a general method for showing the metric effect on space of block placing and shaping, both visually and in terms of patterns in scattergrams, by showing theoretical cases.
– Then apply this method to some cities and show its ability to identify, if not natural spatial areas, then at least a natural periodicity in city networks through which they tend to a natural spatial area-isation at all scales, reflecting the ways in which we talk about urban areas and regions at different scales.
Metric and geometric properties of the grid
First, we consider the urban duality geometrically by looking at sections of metropolitan Tokyo and London, shown in Figure 1. We must first remind ourselves of the fractal nature of urban least line networks, as shown in (Hillier 2001/2) and later formalised in (Carvalho & Penn 2004): all are made up at all scales, from the local area to the city region, of a small number of long lines and a large number of short lines. But there is more to be said. Longer and shorter lines form different kinds of geometric patterns. If we look for patterns in the section of Tokyo, the first thing the eye notes are line continuities. What we are seeing in effect is a sequence of lines linked at their ends by nearly straight intersections with other lines, forming a visually dominant pattern in the network. In general, the lines forming these nearly straight continuities, as Figueredo calls them (Figueredo 2003), are longer than other nearby lines. This has the effect that if we find a locally longer line, it is likely that at either end it will lead to another to which it is connected by a nearly straight connection, and these lines will in turn be similarly connected. Probabilistically, we can say that the longer the line, the more likely it is to end in a nearly straight connection, and taken together these alignments form a network of multi-directional sequences. Intuitively, the value of these in navigating urban grids is obvious, but here, following (Hillier 1999), we are making a structural point.
Figure 1: sections of the Tokyo and London street networks
What then of the shorter lines? Again, in spite of a highly variable geometry, we find certain consistencies. First, shorter lines tend to form clusters, so that in the vicinity of each longer line there will be several shorter lines. These localised groups tend to form more grid-like local patterns, with lines either passing through each other, or ending on other lines, at near right angles. We can say then that the shorter the line, the more likely it is to …
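The metric ‘universal distance’ measure at the heart of the patchwork analysis above is simple enough to sketch. Below is a toy Python illustration (using networkx, with a hypothetical orthogonal grid, 100 m block lengths, and made-up radii; it is not the authors' software) of mean metric distance from a node to all others reachable within a given metric radius; sweeping the radius is what yields area-isation at different scales.

```python
# A toy illustration (not the authors' software) of metric "universal
# distance": mean shortest-path distance from a node to all others
# reachable within a metric radius. Graph, block size, and radii are
# hypothetical.
import networkx as nx

def universal_distance(G, node, radius=None, weight="length"):
    """Mean metric distance from `node` to every node reachable within
    `radius`; radius=None gives the unrestricted (global) measure."""
    dists = nx.single_source_dijkstra_path_length(
        G, node, cutoff=radius, weight=weight)
    del dists[node]  # drop the zero self-distance
    return sum(dists.values()) / len(dists) if dists else float("inf")

# Hypothetical 6x6 orthogonal grid with 100 m blocks.
G = nx.grid_2d_graph(6, 6)
nx.set_edge_attributes(G, 100.0, "length")

centre, corner = (3, 3), (0, 0)
for r in (200.0, 400.0, None):  # sweep the metric radius
    print(r, universal_distance(G, centre, r), universal_distance(G, corner, r))
```

Smaller blocks lower this mean distance relative to a regular grid of the same extent, which is the grid-intensification effect the argument builds on; comparing nodes' radius-restricted means is, in spirit, how the patchwork of semi-discrete areas emerges.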
Salvage assessment with cardiac MRI following acute myocardial infarction underestimates potential for recovery of systolic strain | Our aim was to evaluate the relationship between the degree of salvage following acute ST elevation myocardial infarction (STEMI) and subsequent reversible contractile dysfunction using cardiac magnetic resonance (CMR) imaging. Thirty-four patients underwent CMR examination 1–7 days after primary percutaneous coronary intervention (PPCI) for acute STEMI with follow-up at 1 year. The ischaemic area-at-risk (AAR) was assessed with T2-weighted imaging and myocardial necrosis with late gadolinium enhancement. Myocardial strain was quantified with complementary spatial modulation of magnetisation (CSPAMM) tagging. Ischaemic segments with poor (<25 %) or intermediate (26–50 %) salvage index were associated with worse Eulerian circumferential (Ecc) strain immediately post-PPCI (−9.1 % ± 0.6, P = 0.033 and −11.8 % ± 1.3, P = 0.003, respectively) than those with a high (51–100 %) salvage index (−14.4 % ± 1.3). Mean strain in ischaemic myocardium improved between baseline and follow-up (−10.1 % ± 0.5 vs. −16.2 % ± 0.5 %, P < 0.0001). Segments with poor salvage also showed an improvement in strain by 1 year (−9.1 % ± 0.6 vs. −15.3 % ± 0.6, P = 0.033) although they remained the most functionally impaired. Partial recovery of peak systolic strain following PPCI is observed even when apparent salvage is less than 25 %. Late gadolinium enhancement (LGE) may not equate to irreversibly injured myocardium and salvage assessment performed within the first week of revascularisation may underestimate the potential for functional recovery. • MRI can measure how much myocardium is damaged after a heart attack. • Heart muscle that appears initially non-viable may sometimes partially recover. • Enhancement around the edges of infarcts may resolve over time. • Evaluating new cardio-protective treatments with MRI requires appreciation of its limitations. |
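The salvage index behind the poor/intermediate/high bands quoted above has a standard definition: the salvaged fraction of the area-at-risk, (AAR - infarct)/AAR. A minimal sketch with hypothetical segment areas follows; the function names and example numbers are ours, not the authors' analysis code.

```python
# A minimal sketch, assuming per-segment area-at-risk (AAR, from
# T2-weighted imaging) and infarct area (from LGE) in consistent units.
# Thresholds follow the bands quoted in the abstract.
def salvage_index(aar, infarct):
    """Myocardial salvage index: salvaged fraction of the area-at-risk."""
    if aar <= 0:
        raise ValueError("segment must have a positive area-at-risk")
    return (aar - infarct) / aar

def salvage_band(index):
    if index <= 0.25:
        return "poor (<=25%)"
    if index <= 0.50:
        return "intermediate (26-50%)"
    return "high (51-100%)"

# Hypothetical segment: 6.0 cm^2 at risk, 4.8 cm^2 enhancing on LGE.
idx = salvage_index(aar=6.0, infarct=4.8)
print(f"salvage index = {idx:.2f} -> {salvage_band(idx)}")  # 0.20 -> poor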
Towards Privacy Protection in Smart Grid | The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity. |
A battery-free object localization and motion sensing platform | Indoor object localization can enable many ubicomp applications, such as asset tracking and object-related activity recognition. Most location and tracking systems rely either on battery-powered devices, which create cost and maintenance issues, or on cameras, which have accuracy and privacy issues. This paper introduces a system that is able to detect the 3D position and motion of a battery-free RFID tag embedded with an ultrasound detector and an accelerometer. Combining tags' acceleration with location improves the system's power management and supports activity recognition. We characterize the system's localization performance in open space as well as implement it in a smart wet lab application. The system is used to track the real-time location and motion of the tags in the wet lab, as well as to recognize pouring actions performed on the objects to which the tags are attached. The median localization accuracy is 7.6 cm -- (3.1, 5, 1.9) cm for the (x, y, z) axes -- with a maximum update rate of 15 samples/s using a single RFID reader antenna.
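The abstract does not spell out the localization math, but 3D position from ultrasound ranging is commonly recovered by multilateration. The following hypothetical sketch (beacon layout, noise level, and function names are our assumptions, not the paper's pipeline) linearises the range equations and solves them by least squares.

```python
# A hypothetical sketch (not the paper's pipeline): recover a tag's 3D
# position from ultrasound time-of-flight ranges to beacons at known
# positions, by linearising ||x - p_i||^2 = r_i^2 against the first
# beacon and solving the resulting linear system by least squares.
import numpy as np

def multilaterate(beacons, ranges):
    """beacons: (n, 3) known positions; ranges: (n,) measured distances."""
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Five non-coplanar ceiling beacons (coplanar beacons would make the
# height unobservable in the linearised system); tag at (1.0, 2.0, 0.5) m.
beacons = np.array([[0, 0, 3.0], [4, 0, 2.5], [0, 4, 2.5],
                    [4, 4, 3.0], [2, 2, 3.5]])
tag = np.array([1.0, 2.0, 0.5])
ranges = np.linalg.norm(beacons - tag, axis=1) + np.random.normal(0, 0.01, 5)
print(multilaterate(beacons, ranges))  # ~ [1.0, 2.0, 0.5]
```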
Convex Optimization for Big Data: Scalable, randomized, and parallel algorithms for big data analytics | This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems. |
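As one concrete instance of the "surprisingly simple principles" the article refers to, here is a sketch of randomized Kaczmarz, a randomized first-order method we chose as an illustration (the article surveys many such methods); it solves a large consistent least-squares system while touching only one row of the matrix per iteration.

```python
# Randomized Kaczmarz: each iteration samples one row of A (with
# probability proportional to its squared norm) and projects the current
# iterate onto that row's hyperplane. Sizes and iteration count are
# illustrative.
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.einsum("ij,ij->i", A, A)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # project onto row i
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))  # ~ 0
```

The appeal for big data is that the per-iteration cost is O(n), independent of the number of rows, so the full matrix never needs to be held in fast memory at once.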
Comparative assessment of methods for aligning multiple genome sequences | Multiple sequence alignment is a difficult computational problem. There have been compelling pleas for methods to assess whole-genome multiple sequence alignments and compare the alignments produced by different tools. We assess the four ENCODE alignments, each of which aligns 28 vertebrates on 554 Mbp of total input sequence. We measure the level of agreement among the alignments and compare their coverage and accuracy. We find a disturbing lack of agreement among the alignments not only in species distant from human, but even in mouse, a well-studied model organism. Overall, the assessment shows that Pecan produces the most accurate or nearly most accurate alignment in all species and genomic location categories, while still providing coverage comparable to or better than that of the other alignments in the placental mammals. Our assessment reveals that constructing accurate whole-genome multiple sequence alignments remains a significant challenge, particularly for noncoding regions and distantly related species. |
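As a toy illustration of one ingredient of such an assessment, the sketch below computes agreement between two alignments: the fraction of reference positions that both align to the same position in another species. The dict-of-positions representation and the numbers are hypothetical simplifications of real alignment formats such as MAF.

```python
# Illustrative only: agreement between two whole-genome alignments as
# the fraction of reference positions, aligned by both, that map to the
# same position in the other species.
def agreement(map_a, map_b):
    """map_*: {reference position: aligned position in the other species}."""
    shared = [p for p in map_a
              if map_a.get(p) is not None and map_b.get(p) is not None]
    same = sum(map_a[p] == map_b[p] for p in shared)
    return same / len(shared) if shared else 0.0

aln1 = {100: 2050, 101: 2051, 102: None, 103: 2053}
aln2 = {100: 2050, 101: 2052, 102: None, 103: 2053}
print(f"agreement = {agreement(aln1, aln2):.2f}")  # 2 of 3 shared -> 0.67
```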
Steering the Beam of Medium-to-High Gain Antennas Using Near-Field Phase Transformation | A method to steer the beam of aperture-type antennas is presented in this paper. Beam steering is achieved by transforming the phase of the antenna near field using a pair of totally passive metasurfaces, which are located just above and parallel to the antenna. They are rotated independently or synchronously around the antenna axis. A prototype with a peak gain of 19.4 dBi demonstrated experimentally that the beam of a resonant cavity antenna can be steered to any direction within a large conical region (with an apex angle of 102°), with less than 3-dB gain variation, by simply turning the two metasurfaces without moving the antenna at all. Measured gain variation within a 92° cone is only 1.9 dB. Contrary to conventional mechanical steering methods, such as moving reflector antennas with multiaxis rotary joints, the 3-D volume occupied by this antenna system does not change during beam steering. This advantage, together with its low profile, makes it a strong contender for space-limited applications where beam steering with active devices is not desirable due to cost, nonlinear distortion, limited power handling, sensitivity to temperature variations, radio frequency losses, or associated heating. This beam steering method using near-field phase transformation can also be applied to other aperture-type antennas and arrays with medium-to-high gains.
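For a single tilt direction, a near-field phase transformation of this kind reduces to a linear phase gradient across the aperture. The sketch below (hypothetical frequency, aperture, and cell spacing; not the paper's metasurface design) computes that gradient; rotating two such gradient surfaces against each other, much as a Risley prism pair steers an optical beam, sweeps the combined tilt over a cone without moving the antenna.

```python
# To tilt a broadside beam to (theta, phi), the aperture needs the
# linear phase gradient
#   Phi(x, y) = -k * sin(theta) * (x*cos(phi) + y*sin(phi)),
# realised modulo 2*pi by passive metasurface cells. All parameters
# below are hypothetical.
import numpy as np

def steering_phase(x, y, theta_deg, phi_deg, freq_hz):
    k = 2 * np.pi * freq_hz / 3e8  # free-space wavenumber
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    phase = -k * np.sin(th) * (x * np.cos(ph) + y * np.sin(ph))
    return np.mod(phase, 2 * np.pi)  # wrapped phase per cell

# Hypothetical 20 GHz aperture sampled on a 1 cm grid of cells.
cells = np.arange(-0.05, 0.051, 0.01)  # +/- 5 cm
X, Y = np.meshgrid(cells, cells)
phases = steering_phase(X, Y, theta_deg=30, phi_deg=0, freq_hz=20e9)
print(np.degrees(phases[0, :3]))  # phase ramp along x, in degrees
```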
Computational reconstruction of ancient artifacts | In this article, we discuss the development of automatic artifact reconstruction systems capable of coping with the realities of the real-world geometric puzzles that anthropologists and archaeologists face on a daily basis. Such systems must do more than find matching fragments and subsequently align them; they must be capable of simultaneously solving an unknown number of puzzles whose pieces are mixed together in an unorganized pile and each of which may be missing an unknown number of its pieces. We cast the puzzle reconstruction problem into a generic terminology that is formalized appropriately for the 2-D and 3-D artifact reconstruction problems. Two leading approaches for 2-D tablet reconstruction and four leading approaches for 3-D object reconstruction are discussed in detail, including partial or complete descriptions of the numerous algorithms upon which these systems rely. Several extensions to the geometric matching problem that use patterns apparent on the fragment outer surface are also discussed; these generalize the problem beyond matching strictly geometry. The models needed for solving these problems are new and challenging, and most involve 3-D data that is largely unexplored by the signal processing community. This work is highly relevant to the new 3-D signal processing that is looming on the horizon for tele-immersion.
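As a flavour of the pairwise matching step such systems build on, here is a toy sketch (ours, not any of the surveyed systems) comparing 2-D fragment boundary curves via turning-angle signatures, which are invariant to rigid motion; a true mate traverses the shared break curve in the opposite direction, so its signature is approximately the reversed negation of the other's.

```python
# A toy sketch of geometric fragment matching: turning-angle signatures
# of sampled boundary curves, scored so that mating edges give ~0.
import numpy as np

def turning_angles(boundary):
    """boundary: (n, 2) sampled contour points -> (n-2,) turning angles."""
    d = np.diff(boundary, axis=0)
    ang = np.arctan2(d[:, 1], d[:, 0])
    t = np.diff(ang)
    return (t + np.pi) % (2 * np.pi) - np.pi  # wrap each turn into [-pi, pi)

def match_score(sig_a, sig_b):
    """Smaller is better; ~0 when the two edges mate."""
    n = min(len(sig_a), len(sig_b))
    return float(np.mean((sig_a[:n] + sig_b[::-1][:n]) ** 2))

# Hypothetical break curve, sampled from both fragments' sides.
curve = np.array([[0, 0], [1, 0.1], [2, 0.35], [3, 0.3], [4, 0.6]])
sig_a = turning_angles(curve)
sig_b = turning_angles(curve[::-1])  # same break, opposite traversal
print(match_score(sig_a, sig_b))  # ~ 0 for a true mate
```

Real systems face the further combinatorial problem the abstract stresses: ranking such scores across an unorganized pile of fragments from multiple incomplete puzzles at once.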