Performance-based Fire Engineering Design And Its Application In Australia
In the international context there have been considerable advances recently towards the development of a performance-based approach to fire safety design. For the consideration of others involved in planning or implementing a performance-based approach, an examination is given of the experiences gained in Australia. This paper outlines the developments that have led to the implementation of a performance-based approach to fire safety design in Australia. It is noted that several factors must be in place for a performance-based approach to be implemented successfully. Some important issues that remain to be resolved are also noted. A brief description is given of research currently in progress to further develop a risk-assessment model. Finally, some challenges and issues surrounding the future international application of a performance-based approach to fire safety design are presented.
Elasticity in cloud computing: a survey
Cloud computing is now a well-consolidated paradigm for on-demand services provisioning on a pay-as-you-go model. Elasticity, one of the major benefits required of this computing model, is the ability to add and remove resources "on the fly" to handle load variation. Although many works in the literature have surveyed cloud computing and its features, there is a lack of detailed analysis of elasticity for the cloud. As an attempt to fill this gap, we propose this survey on cloud computing elasticity based on an adaptation of a classic systematic review. We address different aspects of elasticity, such as definitions, metrics and tools for measuring, evaluation of elasticity, and existing solutions. Finally, we present some open issues and future directions. To the best of our knowledge, this is the first study on cloud computing elasticity using a systematic review approach.
Effect of occupational and recreational activity on the risk of colorectal cancer among males: a case-control study.
Epidemiological studies have consistently demonstrated that either occupational or recreational physical activity is protective against colon cancer. However, it is unclear whether recreational activity is similarly protective among those who engage in high or low occupational activity. We therefore compared 440 male cases of colorectal cancer with 1164 male hospital patients. Occupational activity was defined according to job title, while recreational activity was assessed by questionnaire for three different periods of life. Occupational activity was protective with respect to colorectal cancer irrespective of whether one engaged in recreational activity at any period of life. In contrast, recreational activity performed at 20-44 years of age appeared to decrease colon cancer risk by 10-25%, irrespective of the intensity of job activity. The present results suggest that, although we observed a larger effect for occupational activity than for recreational activity, middle-aged men may reduce their risk of colorectal cancer if they exercise when they are not working. These findings need to be confirmed in other populations.
Blockchain Utilization in Healthcare: Key Requirements and Challenges
Blockchain is so far best known for its potential applications in the financial and banking sectors. However, as a decentralized and distributed technology, blockchain can be utilized as a powerful tool for a wide range of everyday applications. Healthcare is one of the prominent application areas where blockchain is expected to make a strong impact, as it is generating a wide range of opportunities and possibilities in current healthcare systems. This paper therefore explores the potential applications of blockchain technology in current healthcare systems and highlights the most important requirements such systems must fulfill, such as trustless and transparent operation. In addition, this work presents the challenges and obstacles that need to be resolved before the successful adoption of blockchain technology in healthcare systems. Furthermore, we introduce the smart contract for blockchain-based healthcare systems, which is key for defining the pre-defined agreements among the various involved stakeholders.
Benchmarking Lexical Simplification Systems
Lexical Simplification is the task of replacing complex words in a text with simpler alternatives. A variety of strategies have been devised for this challenge, yet there has been little effort in comparing their performance. In this contribution, we present a benchmarking of several Lexical Simplification systems. By combining resources created in previous work with automatic spelling and inflection correction techniques, we introduce BenchLS: a new evaluation dataset for the task. Using BenchLS, we evaluate the performance of solutions for various steps in the typical Lexical Simplification pipeline, both individually and jointly. This is the first time Lexical Simplification systems have been compared in such fashion on the same data, and the findings reveal several interesting properties of the systems evaluated.
Stock Market Prediction based on Time Series Data and Market Sentiment
In this project, we would like to create a system that predicts stock market movements on a given day, based on time series data and market sentiment analysis. We will use Twitter data on that day to predict the market sentiment and S&P 500 values to perform analysis on historical data.
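As a rough illustration of the kind of pipeline such a project implies, here is a minimal sketch that combines lagged S&P 500 returns with a daily Twitter sentiment score in a logistic direction classifier. The feature set, the sentiment series, and the model choice are assumptions made for illustration, not the project's actual design.

```python
# Minimal sketch: predict next-day S&P 500 direction from lagged returns
# plus a daily Twitter sentiment score. The feature names and the sentiment
# column are hypothetical; the project's actual pipeline is not specified.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def build_features(prices: pd.Series, sentiment: pd.Series) -> pd.DataFrame:
    rets = prices.pct_change()
    df = pd.DataFrame({
        "ret_1": rets.shift(1),                     # yesterday's return
        "ret_5": rets.rolling(5).mean().shift(1),   # weekly momentum
        "sent": sentiment.shift(1),                 # yesterday's mean tweet sentiment
    })
    df["up"] = (rets > 0).astype(int)               # target: today's direction
    return df.dropna()

# df = build_features(sp500_close, daily_sentiment)   # user-supplied series
# model = LogisticRegression().fit(df[["ret_1", "ret_5", "sent"]], df["up"])
```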
E-Satisfaction and E-Loyalty: A Contingency Framework
The authors investigate the impact of satisfaction on loyalty in the context of electronic commerce. Findings of this research indicate that although e-satisfaction has an impact on e-loyalty, this relationship is moderated by (a) consumers' individual level factors and (b) firms' business level factors. Among consumer level factors, convenience motivation and purchase size were found to accentuate the impact of e-satisfaction on e-loyalty, whereas inertia suppresses the impact of e-satisfaction on e-loyalty. With respect to business level factors, both trust and perceived value, as developed by the company, significantly accentuate the impact of e-satisfaction on e-loyalty. The collapse of large numbers of dot-com companies has required managers, who felt that the Internet had changed everything, to relearn that profits indeed do matter (Rosenbloom, 2002) and that the traditional laws of marketing were not rescinded with the arrival of the e-commerce era. Additionally, it has been reinforced that organizations not only need to attract new customers, but also must retain them to ensure profitable repeat business. In several industries, the high cost of acquiring customers renders many customer relationships unprofitable during early years. Even the individual stores of highly successful warehouse clubs like Sam's Club, Costco, and BJ's are typically not profitable until the second or third year after opening.
Strength training for plantar fasciitis and the intrinsic foot musculature: A systematic review.
The aim was to critically evaluate the literature investigating strength training interventions in the treatment of plantar fasciitis and in improving intrinsic foot musculature strength. A search of PubMed, CINAHL, Web of Science, SPORTDiscus, EBSCO Academic Search Complete and PEDro was conducted using the search terms plantar fasciitis, strength, strengthening, resistance training, and intrinsic flexor foot. Seven articles met the eligibility criteria. Methodological quality was assessed using the modified Downs and Black checklist. All articles showed moderate to high quality; however, external validity was low. A comparison of the interventions highlights significant differences in strength training approaches to treating plantar fasciitis and improving intrinsic strength. It was not possible to identify the extent to which strengthening interventions for the intrinsic musculature may benefit populations symptomatic of or at risk of plantar fasciitis. There is limited evidence that foot exercises, toe flexion against resistance and minimalist running shoes may contribute to improved intrinsic foot musculature function. Although no changes in plantar fascia thickness have been observed with high-load plantar fascia resistance training, there are indications that it may aid in reducing pain and improving function. Further research should use standardised outcome measures to assess intrinsic foot musculature strength and plantar fasciitis symptoms.
Word lengths are optimized for efficient communication.
We demonstrate a substantial improvement on one of the most celebrated empirical laws in the study of language, Zipf's 75-y-old theory that word length is primarily determined by frequency of use. In accord with rational theories of communication, we show across 10 languages that average information content is a much better predictor of word length than frequency. This indicates that human lexicons are efficiently structured for communication by taking into account interword statistical dependencies. Lexical systems result from an optimization of communicative pressures, coding meanings efficiently given the complex statistics of natural language use.
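The paper's key predictor can be made concrete with a small sketch: estimate each word's average information content as the mean of -log P(word | previous word) over its occurrences, here from simple bigram counts. The corpus handling is illustrative only; the authors' estimation setup across 10 languages may differ.

```python
# Sketch of the core quantity: a word's average information content,
# (1/N) * sum over occurrences of -log2 P(word | previous word),
# estimated here from bigram counts over a tokenized corpus.
import math
from collections import Counter

def avg_information_content(tokens):
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    info = Counter()   # summed -log2 P(w | prev), weighted by occurrences
    occ = Counter()    # number of contexts observed per word
    for (prev, w), n in bigrams.items():
        p = n / unigrams[prev]           # conditional probability P(w | prev)
        info[w] += -math.log2(p) * n
        occ[w] += n
    return {w: info[w] / occ[w] for w in occ}

# Under the paper's claim, words with higher average information content
# should tend to be longer than equally frequent but more predictable words.
```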
Canine intraspinal meningiomas: imaging features, histopathologic classification, and long-term outcome in 34 dogs.
BACKGROUND Meningioma is the most common primary intraspinal nervous system tumor in dogs. Clinical findings, clinicopathologic data, and treatment of these tumors have been reported sporadically, but little information is available regarding cerebrospinal fluid (CSF) analysis, histologic tumor grade, or efficacy of radiation therapy as an adjunct to cytoreductive surgery. ANIMALS Dogs with histologically confirmed intraspinal meningiomas (n = 34). METHODS A retrospective study of dogs with intraspinal meningiomas between 1984 and 2006 was carried out. Signalment, historical information, physical examination, clinicopathologic data, radiation therapy protocols, surgery reports, and all available images were reviewed. All tumors were histologically classified and graded as defined by the international World Health Organization classification scheme for central nervous system tumors. RESULTS Intraspinal meningiomas in dogs are most common in the cervical spinal cord but can be found throughout the neuraxis. Location is correlated with histologic grade, with grade I tumors more likely to be in the cervical region than grade II tumors. Myelography generally shows an intradural extramedullary compressive lesion. On magnetic resonance imaging, the masses are strongly and uniformly contrast enhancing and a dural tail often is present. CSF analysis usually shows increased protein concentration with mild to moderate mixed pleocytosis. Surgical resection is an effective means of improving neurologic status, and adjunctive radiation therapy may lead to an improved outcome. CONCLUSIONS AND CLINICAL IMPORTANCE Biopsy is necessary for definitive diagnosis, but imaging and CSF analysis can suggest a diagnosis of meningioma. Treatment of meningiomas with surgery and radiation therapy can result in a fair to excellent prognosis.
Back EMF Sensorless-Control Algorithm for High-Dynamic Performance PMSM
In this paper, a low-time-consuming and low-cost sensorless-control algorithm for high-dynamic-performance permanent-magnet synchronous motors, both surface-mounted and interior permanent-magnet types, for position and speed estimation is introduced, discussed, and experimentally validated. The control algorithm is based on estimating rotor speed and angular position from the back electromotive force space vector, determined without voltage sensors by using the reference voltages given by the current controllers instead of the actual ones. This choice obviously introduces some errors, which must be eliminated by means of a compensating function. The novelties of the proposed estimation algorithm are the position-estimation equation and the process of compensating the inverter phase lag, which also suggests the final mathematical form of the estimation. The mathematical structure of the estimation guarantees a high degree of robustness against parameter variation, as shown by the sensitivity analysis reported in this paper. Experimental verification of the proposed sensorless-control system has been carried out with the aid of a flexible test bench for brushless motor electrical drives. The test results presented in this paper show the validity of the proposed low-cost sensorless-control algorithm and, above all, underline the high dynamic performance of the sensorless-control system even with reduced equipment.
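For orientation, the textbook back-EMF position estimate that this family of algorithms builds on can be sketched as follows; the reference voltages stand in for measured ones, as in the paper, but the paper's specific position-estimation equation and inverter phase-lag compensation are not reproduced here.

```python
# Sketch of the standard back-EMF position estimate underlying this class
# of sensorless algorithms: estimate the EMF in the stationary alpha-beta
# frame from reference voltages and measured currents, then take its angle.
# Valid for positive rotor speed; the paper's compensating function for
# inverter phase lag is not included.
import math

def estimate_angle(v_alpha, v_beta, i_alpha, i_beta,
                   di_alpha, di_beta, R, L):
    # Per axis: e = v_ref - R*i - L*di/dt, with v_ref from the controllers.
    e_alpha = v_alpha - R * i_alpha - L * di_alpha
    e_beta = v_beta - R * i_beta - L * di_beta
    # With e_alpha = -w*lambda*sin(theta) and e_beta = w*lambda*cos(theta):
    return math.atan2(-e_alpha, e_beta)   # estimated rotor angle in radians
```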
Masturbation among women: associated factors and sexual response in a Portuguese community sample.
Masturbation is a common sexual practice with significant variations in reported incidence between men and women. The goal of this study was to explore (a) the age at initiation and frequency of masturbation, (b) the associations of masturbation with diverse variables, (c) the reported reasons for masturbating and associated emotions, and (d) the relation between frequency of masturbation and different sexual behavioral factors. Participants were 3,687 women who completed a web-based survey of previously pilot-tested items. The results reveal a high reported incidence of masturbation practices among this convenience sample of women. Among the women in this sample, 91% indicated that they had masturbated at some point in their lives, and 29.3% reported having masturbated within the past month. Masturbation behavior appears to be related to a greater sexual repertoire, more sexual fantasies, and greater reported ease in reaching sexual arousal and orgasm. Women reported many reasons for masturbation and a variety of direct and indirect techniques. A minority of women reported feeling shame and guilt associated with masturbation. Early masturbation experience might be beneficial to sexual arousal and orgasm in adulthood. Further, this study demonstrates that masturbation is a positive component in the structuring of female sexuality.
A Survey of Insider Attack Detection Research
This paper surveys proposed solutions for the problem of insider attack detection appearing in the computer security research literature. We distinguish between masqueraders and traitors as two distinct cases of insider attack. After describing the challenges of this problem and highlighting current approaches and techniques pursued by the research community for insider attack detection, we suggest directions for future research.
Core stability: the centerpiece of any training program.
Core strengthening and stability exercises have become key components of training programs for athletes of all levels. The core muscles act as a bridge between upper and lower limbs, and force is transferred from the core, often called the powerhouse, to the limbs. Stability initially requires maintenance of a neutral spine but must progress beyond the neutral zone in a controlled manner. Some studies have demonstrated a relationship between decreased core stability and an increased incidence of injury. A training program should start with exercises that isolate specific core muscles but must progress to include complex movements and incorporate other training principles.
Is Your Anchor Going Up or Down? Fast and Accurate Supervised Topic Models
Topic models provide insights into document collections, and their supervised extensions also capture associated document-level metadata such as sentiment. However, inferring such models from data is often slow and cannot scale to big data. We build upon the "anchor" method for learning topic models to capture the relationship between metadata and latent topics by extending the vector-space representation of word co-occurrence to include metadata-specific dimensions. These additional dimensions reveal new anchor words that reflect specific combinations of metadata and topic. We show that these new latent representations predict sentiment as accurately as supervised topic models, and we find these representations more quickly without sacrificing interpretability. Topic models were introduced in an unsupervised setting (Blei et al., 2003), aiding in the discovery of topical structure in text: large corpora can be distilled into human-interpretable themes that facilitate quick understanding. In addition to illuminating document collections for humans, topic models have increasingly been used for automatic downstream applications such as sentiment analysis (Titov and McDonald, 2008; Paul and Girju, 2010; Nguyen et al., 2013). Unfortunately, the structure discovered by unsupervised topic models does not necessarily constitute the best set of features for tasks such as sentiment analysis. Consider a topic model trained on Amazon product reviews. A topic model might discover a topic about vampire romance. However, we often want to go deeper, discovering facets of a topic that reflect topic-specific sentiment, e.g., "buffy" and "spike" for positive sentiment vs. "twilight" and "cullen" for negative sentiment. Techniques for discovering such associations, called supervised topic models (Section 2), both produce interpretable topics and predict metadata values. While unsupervised topic models now have scalable inference strategies (Hoffman et al., 2013; Zhai et al., 2012), supervised topic model inference has not received as much attention and often scales poorly. The anchor algorithm is a fast, scalable unsupervised approach for finding "anchor words": precise words with unique co-occurrence patterns that can define the topics of a collection of documents. We augment the anchor algorithm to find supervised sentiment-specific anchor words (Section 3). Our algorithm is faster and just as effective as traditional schemes for supervised topic modeling (Section 4). 1 Anchors: Speedy Unsupervised Models. The anchor algorithm (Arora et al., 2013) begins with a V × V matrix Q̄ of word co-occurrences, where V is the size of the vocabulary. Each word type defines a vector Q̄i,· of length V so that Q̄i,j encodes the conditional probability of seeing word j given that word i has already been seen. Spectral methods (Anandkumar et al., 2012) and the anchor algorithm are fast alternatives to traditional topic model inference schemes because they can discover topics via these summary statistics (quadratic in the number of types) rather than examining the whole dataset (proportional to the much larger number of tokens). The anchor algorithm takes its name from the idea of anchor words: words which unambiguously identify a particular topic. For instance, "wicket" might be an anchor word for the cricket topic. Thus, for any anchor word a, Q̄a,· will look like a topic distribution. Q̄wicket,· will have high probability for "bowl", "century", "pitch", and "bat"; these words are related to cricket, but they cannot be anchor words because they are also related to other topics. Because these other non-anchor words could be topically ambiguous, their co-occurrence must be explained through some combination of anchor words; thus for non-anchor word i,
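A minimal sketch of the anchor method's starting point may help: build the row-normalized co-occurrence matrix Q̄ and pick anchor rows greedily. The farthest-point selection below is a simplification of the actual algorithm of Arora et al. (2013), shown only to make the Q̄ construction concrete, not the paper's supervised extension.

```python
# Minimal sketch of the anchor method's first steps: build the row-normalized
# word co-occurrence matrix Q-bar from documents (rows approximate
# P(word j | word i seen)), then greedily pick anchor rows by farthest-point
# selection. A simplification of Arora et al. (2013), for illustration only.
import numpy as np

def cooccurrence(docs, V):
    Q = np.zeros((V, V))
    for doc in docs:                    # doc: list of word ids
        for i in doc:
            for j in doc:
                if i != j:
                    Q[i, j] += 1.0
    Q /= np.maximum(Q.sum(axis=1, keepdims=True), 1.0)   # row-normalize
    return Q

def greedy_anchors(Q, k):
    anchors = [int(np.argmax(np.linalg.norm(Q, axis=1)))]
    for _ in range(k - 1):
        A = Q[anchors]
        # project every row onto the span of the current anchor rows,
        # then take the row farthest from its projection as the next anchor
        proj = Q @ np.linalg.pinv(A) @ A
        anchors.append(int(np.argmax(np.linalg.norm(Q - proj, axis=1))))
    return anchors
```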
Comparison of fundamental frequency and PWM methods applied on a hybrid cascaded multilevel inverter
This paper presents a hybrid cascaded multilevel inverter for electric vehicles (EV) / hybrid electric vehicles (HEV) and utility interface applications. The inverter consists of a standard 3-leg inverter (one leg for each phase) and H-bridge in series with each inverter leg. It can use only a single DC power source to supply a standard 3-leg inverter along with three full H-bridges supplied by capacitors or batteries. Both fundamental frequency and high switching frequency PWM methods are used for the hybrid multilevel inverter. An experimental 5 kW prototype inverter is built and tested. The above two switching control methods are validated and compared experimentally.
Hospital purchasing alliances: utilization, services, and performance.
BACKGROUND Hospital purchasing alliances are voluntary consortia of hospitals that aggregate their contractual purchases of supplies from manufacturers. Purchasing groups thus represent pooling alliances rather than trading alliances (e.g., joint ventures). Pooling alliances have been discussed in the health care management literature for years but have never received much empirical investigation. They represent a potentially important source of economies of scale for hospitals. PURPOSES This study represents the first national survey of hospital purchasing alliances. The survey analyzes alliance utilization, services, and performance from the perspective of the hospital executive in charge of materials management. This study extends research on pooling alliances, develops national benchmark statistics, and answers important issues raised recently about pooling alliances. METHODOLOGY/APPROACH The investigators surveyed hospital members in the seven largest purchasing alliances (that account for 93% of all hospital purchases) and individual members of the Association of Healthcare Resource & Materials Management. The concatenated database yielded an approximate population of all hospital materials managers numbering 5,014. FINDINGS Hospital purchasing group alliances succeed in reducing health care costs by lowering product prices, particularly for commodity and pharmaceutical items. Alliances also reduce transaction costs through commonly negotiated contracts and increase hospital revenues via rebates and dividends. Thus, alliances may achieve purchasing economies of scale. Hospitals report additional value as evidenced by their long tenure and the large share of purchases routed through the alliances. Alliances appear to be less successful, however, in providing other services of importance and value to hospitals and in mediating the purchase of expensive physician preference items. There is little evidence that alliances exclude new innovative firms from the marketplace or restrict hospital access to desired products. PRACTICE IMPLICATIONS Pooling alliances appear successful in purchasing commodity and pharmaceutical products. Pooling alliances face the same issues as trading alliances in their efforts to work with physicians and the supply items they prefer.
A Self-Service Supporting Business Intelligence and Big Data Analytics Architecture
Self-service Business Intelligence (SSBI) is an emerging topic for many companies. Casual users should be enabled to independently build their own analyses and reports, which accelerates and simplifies decision-making processes. Although recent studies have begun to discuss parts of a self-service environment, none of them presents a comprehensive architecture. Following a design science research approach, this study proposes a new self-service oriented BI architecture in order to address this gap. Starting from an in-depth literature review, an initial model was developed and improved through qualitative analysis of interviews with 18 BI and IT specialists from companies across different industries. The proposed architecture model demonstrates the interaction of the introduced self-service elements with each other and with traditional BI components. For example, we look at the integration of collaboration rooms and a self-learning knowledge database that aims to be a source for a report recommender.
Influences on purposeful implementation of ICT into the classroom: An exploratory study of K-12 teachers
Teachers are increasingly required to incorporate information and communications technologies (ICT) into the modern classroom. The implementation of ICT into the classroom should not be seen as merely an add-on, however, but should be included with purpose; meaningfully implemented based on pedagogy. The aim of this study is to explore potential factors that might predict purposeful implementation of ICT into the classroom. Using an online survey, skills in and beliefs about ICT were assessed, as well as the teaching and learning beliefs of forty-five K-12 teachers. Hierarchical multiple regression revealed that competence using ICT and a belief in the importance of ICT for student outcomes positively predicted purposeful implementation of ICT into the classroom, while endorsing more traditional content-based learning was a negative predictor. These three predictors explained 47% of the variance in purposeful implementation of ICT into the classroom. ICT competence was unpacked further with correlations. This revealed that there is a relationship between teachers having ICT skills that can personalize, engage, and create an interactive atmosphere for students and purposeful implementation of ICT into the classroom. Based on these findings, suggestions are made of important focal areas for encouraging teachers to purposefully implement ICT into their classrooms.
Multiple memory systems and consciousness.
This Introduction to the Special Issue on Human Memory discusses some of the recent and current developments in the study of human memory from the neuropsychological perspective. A problem of considerable current interest, that of multiple memory systems, is a problem in classification. Much of the evidence for it is derived from clinical and experimental observations of dissociations between performances in memory tasks. The distinction between short-term and long-term memory is considered as an example of classification by dissociation. Current conceptualizations of multiple long-term memory systems are reviewed from the vantage point that distinguishes among three major kinds of memory--episodic, semantic, and procedural. These systems are briefly described and compared, and current views concerning the relation between them are discussed. The role of consciousness in memory is raised against the backdrop of the suggestion that it may be necessary to differentiate among several kinds of consciousness.
Dairy products and colorectal cancer risk: a systematic review and meta-analysis of cohort studies.
BACKGROUND Previous studies of the association between intake of dairy products and colorectal cancer risk have indicated an inverse association with milk, however, the evidence for cheese or other dairy products is inconsistent. METHODS We conducted a systematic review and meta-analysis to clarify the shape of the dose-response relationship between dairy products and colorectal cancer risk. We searched the PubMed database for prospective studies published up to May 2010. Summary relative risks (RRs) were estimated using a random effects model. RESULTS Nineteen cohort studies were included. The summary RR was 0.83 (95% CI [confidence interval]: 0.78-0.88, I2=25%) per 400 g/day of total dairy products, 0.91 (95% CI: 0.85-0.94, I2=0%) per 200 g/day of milk intake and 0.96 (95% CI: 0.83-1.12, I2=28%) per 50 g/day of cheese. Inverse associations were observed in both men and women but were restricted to colon cancer. There was evidence of a nonlinear association between milk and total dairy products and colorectal cancer risk, P<0.001, and the inverse associations appeared to be the strongest at the higher range of intake. CONCLUSION This meta-analysis shows that milk and total dairy products, but not cheese or other dairy products, are associated with a reduction in colorectal cancer risk.
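For readers who want the pooling step concrete, here is a sketch of DerSimonian-Laird random-effects pooling of log relative risks, consistent with the random effects model the abstract names; per-study standard errors are recovered from the reported 95% CIs. The authors' nonlinear dose-response modeling is more involved and is not reproduced.

```python
# Sketch of random-effects (DerSimonian-Laird) pooling of relative risks.
# Inputs: per-study RRs with 95% CI bounds; the SE of each log RR is
# recovered from the CI width. Illustrative only, not the paper's full method.
import math

def pooled_rr(rrs, ci_los, ci_his):
    y = [math.log(r) for r in rrs]                   # log RR per study
    se = [(math.log(h) - math.log(l)) / (2 * 1.96)   # SE from 95% CI width
          for l, h in zip(ci_los, ci_his)]
    w = [1 / s ** 2 for s in se]                     # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))   # heterogeneity
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]         # random-effects weights
    est = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.exp(est)                             # summary RR
```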
Developing new frontiers in the Rubber Hand Illusion: Design of an open source robotic hand to better understand prosthetics
In psychology, the Rubber Hand Illusion (RHI) is an experiment in which participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action-based paradigm have induced a stronger RHI. However, these experiments face limitations because they are difficult to implement and lack rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. The device reproduces the finger movements of the participants in real time: a glove containing sensors worn by the participant records finger flexion, and a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding finger positions. A connection between the robotic device and a computer can be established, enabling the experimenters to tune the desired parameters precisely using Matlab. Since this is the first time a robotic hand has been developed for the RHI, a validation study was conducted. This study confirms previous results found in the literature and illustrates that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding the RHI is important because it can provide guidelines for prosthetic design.
Psychiatric diagnosis and the surgical outcome of pancreas transplantation in patients with type I diabetes mellitus.
To examine the role of psychiatric diagnosis in the surgical outcome of pancreas transplantation, we studied candidates with type I diabetes mellitus. Eighty of 140 candidates underwent transplantation. Survival analysis found the extent of human leukocyte antigen-DR (HLA-DR) matching, two diagnoses, and patients' perceived support from first-degree relatives to be related to duration of full-graft function. Lifetime diagnoses of tobacco use disorder (P = 0.029) and alcohol abuse/dependence (P = 0.006) were associated with less favorable outcomes; perceived support was associated with positive outcomes (P = 0.048). Subsequent analysis suggested that the four variables independently and directly affect outcome.
VNF-FG design and VNF placement for 5G mobile networks
Network function virtualization (NFV) is envisioned as one of the critical technologies in 5th-Generation (5G) mobile networks. This paper investigates virtual network function forwarding graph (VNF-FG) design and virtual network function (VNF) placement for 5G mobile networks. We first propose a two-step method composed of flow designing and flow combining for generating VNF-FGs according to network service requests. For mapping VNFs in the generated VNF-FG to physical resources, we then modify the hybrid NFV environment by introducing more types of physical nodes and mapping modes for the sake of completeness and practicality, and formulate the VNF placement optimization problem to achieve lower bandwidth consumption and lower maximum link utilization simultaneously. To solve this problem, four genetic algorithms are proposed on the basis of the frameworks of two existing algorithms (the multiple objective genetic algorithm and the improved non-dominated sorting genetic algorithm). Simulation results show that Greedy-NSGA-II achieves the best performance among our four algorithms. Compared with three non-genetic algorithms (random, backtracking mapping and service chains deployment with affiliation-aware), Greedy-NSGA-II reduces the average total bandwidth consumption by 97.04%, 87.76% and 88.42%, respectively, and achieves only 13.81%, 25.04% and 25.41% of the average maximum link utilization, respectively. Moreover, using our VNF-FG design method and Greedy-NSGA-II together can also reduce the total bandwidth consumption remarkably.
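The two objectives can be made concrete with a small sketch of a fitness evaluation for a candidate placement; the data structures are illustrative assumptions rather than the paper's formulation, and a genetic search such as Greedy-NSGA-II would call such an evaluation on each individual.

```python
# Hedged sketch of the two objectives the VNF placement problem minimizes:
# total bandwidth consumption and maximum link utilization, evaluated for
# one candidate mapping of VNF-FG virtual links onto physical paths.
def evaluate(placement, vlinks, capacity):
    # placement: vlink id -> list of physical links on its routed path
    # vlinks:    vlink id -> demanded bandwidth
    # capacity:  physical link -> capacity
    load = {}
    total_bw = 0.0
    for vl, path in placement.items():
        for pl in path:
            load[pl] = load.get(pl, 0.0) + vlinks[vl]
            total_bw += vlinks[vl]        # bandwidth consumed on each hop
    max_util = max(load[pl] / capacity[pl] for pl in load)
    return total_bw, max_util             # bi-objective fitness to minimize
```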
A review of the human–horse relationship
Despite a long history of the human-horse relationship, horse-related incidents and accidents do occur amongst professional and non-professional "horse persons". Recent studies show that their occurrence depends more on the frequency and amount of interactions with horses than on the level of competency, suggesting a strong need for specific research and training of humans working with horses. In the present study, we review the current scientific knowledge on human-horse relationships. We distinguish here between short occasional interactions with familiar or unfamiliar horses (e.g. veterinary inspection) and long term bonds (e.g. horse owner). It appears clearly that research is needed in order to assess how to better approach the horse (position, posture, gaze...), what type of approaches and timing may help develop a positive bond, what influence human management and care have on the relationship, and how this can be adapted to have a positive influence on the relationship. On the other hand, adequate knowledge is readily available that may improve the present situation rapidly. Developing awareness of and attention to behavioural cues given by horses would certainly help decrease accidents among professionals when interacting. Another whole line of research concerns how to improve the development and maintenance of a truly positive relationship. Studies show that deficits in management conditions (housing, food, social context, training) may lead to relational problems. Different methods have been used to assess and improve the human-horse relationship, especially at a young age. They reveal that the time and type of contact both play a role, while recent studies suggest that the use of conspecific social models might be a great help. We argue that an important theoretical framework could be Hinde's (1979) definition of a relationship as a bond emerging from a series of interactions: partners have expectations about the next interaction on the basis of the previous ones. Understanding that a relationship is built up on the basis of a succession of interactions is an important step, as it suggests that attention should be paid to the "positive" or "negative" valence of each interaction as a step toward the next one. A better knowledge of learning rules is certainly necessary in this context, not only to train the horse but also to counterbalance the unavoidable negative inputs that exist in routine procedures and to reduce their impact on the relationship.
Quantifying the relative contributions of riparian and hillslope zones to catchment runoff
The spatial and temporal sources of headwater catchment runoff are poorly understood. We quantified the contributions of hillslopes and riparian zones to streamflow for two storm events in a highly responsive, steep, wet watershed located on the west coast of the South Island of New Zealand. We examined the spatial and temporal components of catchment storm flow using a simple continuity-based approach. We tested this with independent isotopic/solute mass balance hydrograph separation techniques. We monitored catchment runoff, internal hydrological response, isotopic, and solute dynamics at a trenched hillslope, and at hillslope and riparian positions in a 2.6-ha catchment. The gauged hillslope was used to isolate and quantify (by difference) riparian and hillslope zone contributions to the 2.6-ha headwater catchment. Utilizing flow-based approaches and a tracer-based mass balance mixing model, we found that hillslope runoff comprised 2–16% of total catchment storm runoff during a small 27-mm event and 47–55% during a larger 70-mm event. However, less than 4% of the new water collected at the catchment outlet originated from the hillslopes during each event. We found that in the 27-mm rain event, 84–97% of total storm runoff was generated in the riparian zone. In a larger 70-mm event, riparian water dominated total flow early in the event, although the hillslope became the main contributor once hillslope runoff was initiated. Despite the large amount of subsurface hillslope runoff in total storm runoff during the second larger event, riparian and channel zones accounted for 96% of the new water measured at the catchment outlet. Riparian water dominated between events, throughout small runoff events, and during early portions of large events. While this sequencing of catchment position contributions to flow has been conceptualized for some time, this is the first study to quantify this timing, constrained by hydrometric, isotopic, and solute approaches.
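The tracer-based mass balance behind the hydrograph separation reduces, in the classic two-component case, to a simple mixing equation; the sketch below uses illustrative delta-18O values, not the study's data.

```python
# Sketch of two-component isotopic hydrograph separation, the kind of tracer
# mass balance the study uses: Qs*Cs = Qnew*Cnew + Qold*Cold with
# Qs = Qnew + Qold gives the fraction of "new" (event) water in streamflow.
def new_water_fraction(c_stream, c_old, c_new):
    return (c_stream - c_old) / (c_new - c_old)

# Illustrative delta-18O values for stream, pre-event baseflow, and rainfall:
# new_water_fraction(-6.2, -5.8, -9.0) -> 0.125, i.e. about 12% new water.
```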
Effects of Small-Group Learning on Undergraduates in Science , Mathematics , Engineering , and Technology : A Meta-Analysis
[Table of contents only: Conceptual Framework (Motivational Perspective; Affective Perspective; Cognitive Perspective; Forms of Small-Group Learning; Research Questions); Meta-Analysis Method (Literature Search Procedures; Inclusion Criteria; Metric for Expressing Effect Sizes; Calculations of Average Effect Sizes; Tests for Conditional Effects; Study Coding); Meta-Analysis Results (Main Effect of Small-Group Learning; Distribution of Effect Sizes; Conditional Effects of Small-Group Learning); Discussion and Conclusions (Robust Main Effects; Conditional Effects of Small-Group Learning; Limitations of the Study; Implications for Theory, Research, Policy, and Practice); References; Appendix.]
The Form of Referential Expressions in Discourse
Most instances of real-life language use involve discourses in which several sentences or utterances are coherently linked through the use of repeated references. Repeated reference can take many forms, and the choice of referential form has been the focus of much research in several related fields. In this article we distinguish between three main approaches: one that addresses the ‘why’ question – why are certain forms used in certain contexts; one that addresses the ‘how’ question – how are different forms processed; and one that aims to answer both questions by seriously considering both the discourse function of referential expressions, and the cognitive mechanisms that underlie their processing cost. We argue that only the latter approach is capable of providing a complete view of referential processing, and that in so doing it may also answer a more profound ‘why’ question – why does language offer multiple referential forms. Coherent discourse typically involves repeated references to previously mentioned referents, and these references can be made with different forms. For example, a person mentioned in discourse can be referred to by a proper name (e.g., Bill), a definite description (e.g., the waiter), or a pronoun (e.g., he). When repeated reference is made to a referent that was mentioned in the same sentence, the choice and processing of referential form may be governed by syntactic constraints such as binding principles (Chomsky 1981). However, in many cases of repeated reference to a referent that was mentioned in the same sentence, and in all cases of repeated reference across sentences, the choice and processing of referential form reflects regular patterns and preferences rather than strong syntactic constraints. The present article focuses on the factors that underlie these patterns. Considerable research in several disciplines has aimed to explain how speakers and writers choose which form they should use to refer to objects and events in discourse, and how listeners and readers process different referential forms (e.g., Chafe 1976; Clark & Wilkes 1986; Kintsch 1988; Gernsbacher 1989; Ariel 1990; Gordon, Grosz & Gilliom 1993; Gundel, Hedberg & Zacharski 1993; Garrod & Sanford 1994; Gordon & Hendrick 1998; Almor 1999; Cowles & Garnham 2005). One of the central observations in this research is that there exists an inverse relation between the specificity of the referential
You've Got Vulnerability: Exploring Effective Vulnerability Notifications
Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications. We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not further patching. These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.
Insecticide Resistance Alleles Affect Vector Competence of Anopheles gambiae s.s. for Plasmodium falciparum Field Isolates
The widespread insecticide resistance raises concerns for vector control implementation and sustainability, particularly for the control of the main vector of human malaria, Anopheles gambiae sensu stricto. However, the extent to which insecticide resistance mechanisms interfere with the development of the malignant malaria parasite in its vector, and their impact on overall malaria transmission, remains unknown. We explore the impact of insecticide resistance on the outcome of Plasmodium falciparum infection in its natural vector using three An. gambiae strains sharing a common genetic background: one susceptible to insecticides and two resistant, one homozygous for the ace-1(R) mutation and one for the kdr mutation. Experimental infections of the three strains were conducted in parallel with field isolates of P. falciparum from Burkina Faso (West Africa) by direct membrane feeding assays. Both insecticide resistance mutations influence the outcome of malaria infection by increasing the prevalence of infection. In contrast, the kdr resistant allele is associated with reduced parasite burden in infected individuals at the oocyst stage when compared to the susceptible strain, while the ace-1(R) resistant allele shows no such association. Thus insecticide resistance, which is particularly problematic for malaria control efforts, impacts vector competence towards P. falciparum and probably parasite transmission, through increased sporozoite prevalence in kdr resistant mosquitoes. These results are of great concern for the epidemiology of malaria, considering the widespread pyrethroid resistance currently observed in Sub-Saharan Africa and the efforts deployed to control the disease.
ROSPlan: Planning in the Robot Operating System
The Robot Operating System (ROS) is a set of software libraries and tools used to build robotic systems. ROS is known for a distributed and modular design. Given a model of the environment, task planning is concerned with the assembly of actions into a structure that is predicted to achieve goals. This can be done in a way that minimises costs, such as time or energy. Task planning is vital in directing the actions of a robotic agent in domains where a causal chain could lock the agent into a dead-end state. Moreover, planning can be used in less constrained domains to provide more intelligent behaviour. This paper describes the ROSPLAN framework, an architecture for embedding task planning into ROS systems. We provide a description of the architecture and a case study in autonomous robotics. Our case study involves autonomous underwater vehicles in scenarios that demonstrate the flexibility and robustness of our approach.
Modulation of human postprandial lipemia by changing ratios of polyunsaturated to saturated (P/S) fatty acid content of blended dietary fats: a cross-over design with repeated measures
BACKGROUND Postprandial lipemia (PL) contributes to coronary artery disease. The fatty acid composition of dietary fats is potentially a modifiable factor in modulating the PL response. METHODS This human postprandial study evaluated 3 edible fat blends with differing polyunsaturated to saturated fatty acid (P/S) ratios (POL = 0.27, AHA = 1.00, PCAN = 1.32). A cross-over design included mildly hypercholesterolemic subjects (9 men and 6 women) preconditioned on test diet fats at 31% energy for 7 days prior to the postprandial challenge on the 8th day with 50 g test fat. Plasma lipids and lipoproteins were monitored at 0, 1.5, 3.5, 5.5 and 7 hr. RESULTS Plasma triacylglycerol (TAG) concentrations in response to POL, AHA or PCAN meals showed no significant time x test meal interactions (P > 0.05), despite an observed trend (POL > AHA > PCAN). TAG area-under-the-curve (AUC) increased by 22.58% after POL and 7.63% after PCAN compared to AHA treatments (P > 0.05). Plasma total cholesterol (TC) response did not differ significantly between meals (P > 0.05). Varying the P/S ratio of the test meals significantly altered prandial high density lipoprotein-cholesterol (HDL-C) concentrations (P < 0.001), which increased with decreasing P/S ratio (POL > AHA > PCAN). Paired comparisons were significant between POL and PCAN (P = 0.009) but not between POL and AHA or between AHA and PCAN (P > 0.05). A significantly higher HDL-C AUC was observed for POL versus AHA (P = 0.015) and PCAN (P = 0.001); HDL-C AUC increased for POL by 25.38% and 16.0% compared to PCAN and AHA respectively. Plasma low density lipoprotein-cholesterol (LDL-C) concentrations differed significantly between meals (P = 0.005) and were lowest after the POL meal, significantly so compared to PCAN (P = 0.004) but not AHA (P > 0.05), with no difference between AHA and PCAN (P > 0.05). AUC for LDL-C did not differ significantly between diets (P > 0.05). Palmitic (C16:0), oleic (C18:1), linoleic (C18:2) and linolenic (C18:3) acids in TAGs and cholesteryl esters were significantly modulated by meal source (P < 0.05). CONCLUSIONS The P/S ratio of dietary fats significantly affected prandial HDL-C levels without affecting lipemia.
The Impact Of Software-As-A-Service On Business Models Of Leading Software Vendors: Experiences From Three Exploratory Case Studies
The number of software vendors offering ‘Software-as-a-Service’ has been increasing in recent years. In the Software-as-a-Service model software is operated by the software vendor and delivered to the customer as a service. Existing business models and industry structures are challenged by the changes to the deployment and pricing model compared to traditional software. However, the full implications on the way companies create, deliver and capture value are not yet sufficiently analyzed. Current research is scattered on specific aspects, only a few studies provide a more holistic view of the impact from a business model perspective. For vendors it is, however, crucial to be aware of the potentially far reaching consequences of Software-as-a-Service. Therefore, a literature review and three exploratory case studies of leading software vendors are used to evaluate possible implications of Software-as-a-Service on business models. The results show an impact on all business model building blocks and highlight in particular the often less articulated impact on key activities, customer relationship and key partnerships for leading software vendors and show related challenges, for example, with regard to the integration of development and operations processes. The observed implications demonstrate the disruptive character of the concept and identify future research requirements.
SVM Based Decision Support System for Heart Disease Classification with Integer-Coded Genetic Algorithm to Select Critical Features
The paper presents a decision support system for heart disease classification based on support vector machines (SVM) and an integer-coded genetic algorithm (GA). The Simple Support Vector Machine (SSVM) algorithm is used to determine the support vectors in a fast, iterative manner. To select the important and relevant features and discard the irrelevant and redundant ones, an integer-coded genetic algorithm is used which also maximizes the SVM's classification accuracy. The heart disease database used in this study includes 303 cases, with 13 diagnostic features per case. The results on the 5-class classification problem indicate an increase in overall accuracy using the optimal feature subset, the accuracy achieved being 72.55%, indicating the potential of the system to be used as a practical decision support system. On the two-class problem, the proposed method gives an accuracy of 90.57%, which is better than existing methods.
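A minimal sketch of the wrapper approach, GA-coded feature masks scored by SVM cross-validation accuracy, is given below; the GA operators, rates, and SVM settings are simplified assumptions rather than the authors' exact configuration (binary masks are used here in place of their integer coding).

```python
# Sketch of GA feature selection wrapped around an SVM: each individual is a
# 0/1 mask over the 13 features, and fitness is 5-fold CV accuracy of an SVM
# trained on the selected columns. Operators and rates are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ga_select(X, y, pop=20, gens=30, rng=np.random.default_rng(0)):
    n = X.shape[1]
    P = rng.integers(0, 2, (pop, n))                 # population of masks

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(), X[:, mask == 1], y, cv=5).mean()

    for _ in range(gens):
        scores = np.array([fitness(m) for m in P])
        parents = P[np.argsort(scores)[-pop // 2:]]  # truncation selection
        cuts = rng.integers(1, n, len(parents))      # one-point crossover
        kids = np.array([np.r_[a[:c], b[c:]] for a, b, c
                         in zip(parents, np.roll(parents, 1, 0), cuts)])
        kids ^= (rng.random(kids.shape) < 0.02).astype(int)   # bit mutation
        P = np.vstack([parents, kids])[:pop]
    return P[np.argmax([fitness(m) for m in P])]     # best feature mask
```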
"I Always Wanted to See the Night Sky": Blind User Preferences for Sensory Substitution Devices
Sensory Substitution Devices (SSDs) convert visual information into another sensory channel (e.g. sound) to improve the everyday functioning of blind and visually impaired persons (BVIP). However, the range of possible functions and options for translating vision into sound is largely open-ended. To provide constraints on the design of this technology, we interviewed ten BVIPs who were briefly trained in the use of three novel devices that, collectively, showcase a large range of design permutations. The SSDs include the 'Depth-vOICe,' 'Synaestheatre' and 'Creole' that offer high spatial, temporal, and colour resolutions respectively via a variety of sound outputs (electronic tones, instruments, vocals). The participants identified a range of practical concerns in relation to the devices (e.g. curb detection, recognition, mental effort) but also highlighted experiential aspects. This included both curiosity about the visual world (e.g. understanding shades of colour, the shape of cars, seeing the night sky) and the desire for the substituting sound to be responsive to movement of the device and aesthetically engaging.
Fast Metric Tracking by Detection System: Radar Blob and Camera Fusion
This article proposes a system that fuses radar and monocular vision sensor data in order to detect and classify on-road obstacles as cars or not cars (other obstacles). The obstacle detection and classification process is divided into three stages: the first consists of reading the radar signals and capturing the camera data; the second is the data fusion; and the third is classifying the obstacles, aiming to differentiate the obstacle types identified by the radar and confirmed by the computer vision. In the detection task it is important to locate, measure, and rank the obstacles in order to take adequate decisions and actions (e.g., generate alerts, autonomously control the vehicle path). The correct classification of obstacle type and position is therefore very important, as is avoiding false positives and/or false negatives in the classification task.
An Overview of Reconfigurable Hardware in Embedded Systems
Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.
The IEEE 802.15.4g standard for smart metering utility networks
Smart utility network (SUN) communications are an essential part of the smart grid. Major vendors realized the importance of universal standards and participated in the IEEE 802.15.4g standardization effort. Because many vendors already had proprietary solutions deployed in the field, the standardization effort was a challenge, but after three years of hard work the IEEE 802.15.4g standard was published on April 28th, 2012. The publication of this standard is a first step towards establishing common and consistent communication specifications for utilities deploying smart grid technologies. This paper summarizes the technical essence of the standard and how it can be used in smart utility networks.
Preschool Drawing and School Mathematics: The Nature of the Association.
The study examined the etiology of individual differences in early drawing and of its longitudinal association with school mathematics. Participants (N = 14,760), members of the Twins Early Development Study, were assessed on their ability to draw a human figure, including number of features, symmetry, and proportionality. Human figure drawing was moderately stable across 6 months (average r = .40). Individual differences in drawing at age 4½ were influenced by genetic (.21), shared environmental (.30), and nonshared environmental (.49) factors. Drawing was related to later (age 12) mathematical ability (average r = .24). This association was explained by genetic and shared environmental factors that also influenced general intelligence. Some genetic factors, unrelated to intelligence, also contributed to individual differences in drawing.
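The quoted variance components are consistent with the classic Falconer decomposition from twin correlations, sketched below; the study's actual model fitting is more elaborate, and the twin correlations shown are back-derived for illustration, not reported values.

```python
# Sketch of Falconer's twin-study decomposition of variance into additive
# genetic (A), shared environmental (C), and nonshared environmental (E)
# components, from monozygotic (r_mz) and dizygotic (r_dz) twin correlations.
def ace(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # nonshared environment plus measurement error
    return a2, c2, e2

# Back-derived illustration matching the quoted estimates:
# ace(0.51, 0.405) -> (0.21, 0.30, 0.49)
```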
Cost offset from a psychiatric consultation-liaison intervention with elderly hip fracture patients.
OBJECTIVE The authors hypothesized that psychiatric liaison screening of elderly patients with hip fractures would shorten the average length of hospital stay and increase the proportion of patients who returned home after discharge. METHOD The study was performed at Mount Sinai Medical Center in New York and Northwestern Memorial Hospital in Chicago. The subjects were 452 patients 65 years or older who were consecutively admitted for surgical repair of fractured hips. During a baseline year the patients received traditional referral for psychiatric consultation. During the experimental year all the patients at Mount Sinai and the patients on one Northwestern Unit were screened for psychiatric consultation. RESULTS The patients who received psychiatric liaison screening had a higher consultation rate than those who received traditional consultation. The rates of DSM-III disorders in the experimental year were 56% at Mount Sinai and 60% at Northwestern. The mean length of stay was reduced from 20.7 to 18.5 days at Mount Sinai and from 15.5 to 13.8 days at Northwestern, resulting in reductions in hospital costs ($647/day) of $166,926 and $97,361, respectively. Fees generated from Medicare service delivery could have paid for the $20,000 psychiatric intervention cost at each site. There was no difference, however, between the two years in the discharge placement of patients. CONCLUSIONS Admission psychiatric liaison screening of elderly patients with hip fractures results in early detection of psychiatric morbidity, better psychiatric care, earlier discharge, and substantial cost savings to the hospital.
RASSH - Reinforced adaptive SSH honeypot
The wide spread of cyber-attacks has made gathering as much information as possible about them a real demand in today's global context. Honeypot systems have become a powerful tool for accomplishing that. Researchers have already focused on the development of various honeypot systems, but the fact that their administration is time consuming has made clear the need for self-adaptive honeypot systems capable of learning from their interaction with attackers. This paper presents a self-adaptive honeypot system we are developing that tries to overcome some of the disadvantages that existing systems have. The proposed honeypot is a medium-interaction system developed in Python that emulates an SSH (Secure Shell) server. The system is capable of interacting with attackers by means of reinforcement learning algorithms.
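The abstract names reinforcement learning without specifying an algorithm; a tabular Q-learning sketch of the idea, where the honeypot picks a reaction to each attacker command so as to prolong the interaction, could look as follows (states, actions and reward are illustrative assumptions, not the authors' design):

```python
import random
from collections import defaultdict

ACTIONS = ["allow", "fake_output", "delay", "block"]  # illustrative action set

class HoneypotAgent:
    """Tabular Q-learning over attacker-command categories (illustrative)."""
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:                      # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# One plausible reward: +1 for every further command the attacker issues
# (a longer session yields more information), -1 when the session ends.
```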
Automated Audience Segmentation Using Reputation Signals
Selecting the right audience for an advertising campaign is one of the most challenging, time-consuming and costly steps in the advertising process. To target the right audience, advertisers usually have two options: a) market research to identify user segments of interest and b) sophisticated machine learning models trained on data from past campaigns. In this paper we study how demand-side platforms (DSPs) can leverage the data they collect (demographic and behavioral) in order to learn reputation signals about end user convertibility and advertisement (ad) quality. In particular, we propose a reputation system which learns interest scores about end users, as an additional signal of ad conversion, and quality scores about ads, as a signal of campaign success. Then our model builds user segments based on a combination of demographic, behavioral and the new reputation signals and recommends transparent targeting rules that are easy for the advertiser to interpret and refine. We perform an experimental evaluation on industry data that showcases the benefits of our approach for both new and existing advertiser campaigns.
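The abstract does not give the score estimators; a smoothed-rate formulation is one natural reading of the "interest" and "quality" scores (the Beta-prior parameters below are assumptions):

```python
def interest_score(conversions, impressions, alpha=1.0, beta=20.0):
    """Smoothed conversion propensity of a user: a Beta(alpha, beta)
    prior keeps scores stable for users with few observed impressions."""
    return (conversions + alpha) / (impressions + alpha + beta)

def ad_quality_score(campaign_conversions, campaign_impressions, **kw):
    """The same estimator applied per ad, as a signal of campaign success."""
    return interest_score(campaign_conversions, campaign_impressions, **kw)

# Segments could then be built by thresholding these scores jointly with
# demographic and behavioral attributes, yielding human-readable rules.
```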
The invariants of the binary decimic
We consider the algebra of invariants of binary forms of degree 10 with complex coefficients, construct a system of parameters with degrees 2, 4, 6, 6, 8, 9, 10, 14 and find the 106 basic invariants.
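For concreteness, the objects in question can be written out as follows (the binomial normalization of the coefficients is a standard convention, not specified by the abstract):

```latex
% A binary decimic: a form of degree 10 in two variables,
\[
  f(x,y) \;=\; \sum_{i=0}^{10} \binom{10}{i}\, a_i\, x^{10-i} y^{i},
  \qquad a_i \in \mathbb{C}.
\]
% SL_2(C) acts by linear substitution on (x, y); the invariants are the
% polynomials in the a_i fixed by this action. The result stated above:
% this invariant algebra admits a homogeneous system of parameters of
% degrees 2, 4, 6, 6, 8, 9, 10, 14 and is generated by 106 basic invariants.
```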
Role of Campylobacter jejuni Infection in the Pathogenesis of Guillain-Barré Syndrome: An Update
Our current knowledge of Campylobacter jejuni infections in humans has progressively increased over the past few decades. Infection with C. jejuni is the most common cause of bacterial gastroenteritis, sometimes surpassing infections due to Salmonella, Shigella, and Escherichia coli. Most infections are acquired through consumption of raw or undercooked poultry, unpasteurized milk, or contaminated water. The development of diagnostic methods to detect C. jejuni has increased the possibility of identifying associations between its infection and new diseases. After the successful isolation of C. jejuni, reports were published citing the occurrence of Guillain-Barré syndrome (GBS) following C. jejuni infection; thus, C. jejuni is now considered a major triggering agent of GBS. Molecular mimicry between sialylated lipooligosaccharide structures on the cell envelope of these bacteria and ganglioside epitopes on human nerves generates a cross-reactive immune response that results in autoimmune-driven nerve damage. Though C. jejuni is associated with several pathologic forms of GBS, axonal subtypes following C. jejuni infection may be more severe. The ample existing data cover a large spectrum of GBS; however, studies on C. jejuni-associated GBS are still inconclusive. Therefore, this review provides an update on C. jejuni infections engaged in the pathogenesis of GBS.
Screening for Parkinson's disease with response time batteries: A pilot study
BACKGROUND Although significant response time deficits (both reaction time and movement time) have been identified in numerous studies of patients with Parkinson's disease (PD), few attempts have been made to evaluate the use of these measures in screening for PD. METHODS Receiver operating characteristic curves were used to identify cutoff scores for a unit-weighted composite of two choice response tasks in a sample of 40 patients and 40 healthy participants. These scores were then cross-validated in an independent sample of 20 patients and 20 healthy participants. RESULTS The unit-weighted movement time composite demonstrated high sensitivity (90%) and specificity (90%) in the identification of PD. Movement time was also significantly correlated (r = 0.59, p < 0.025) with the motor score of the Unified Parkinson's Disease Rating Scale (UPDRS). CONCLUSIONS Measures of chronometric speed, assessed without the use of biomechanically complex movements, have a potential role in screening for PD. Furthermore, the significant correlation between movement time and UPDRS motor score suggests that movement time may be useful in the quantification of PD severity.
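The cutoff-selection criterion is not stated in the abstract; Youden's J, sketched here with scikit-learn, is one standard way to pick a cutoff from an ROC curve:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, composite_score):
    """Pick the cutoff on a unit-weighted response-time composite that
    maximizes Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, composite_score)
    j = tpr - fpr
    return thresholds[np.argmax(j)]

# Cross-validation as in the study design: choose the cutoff on the
# derivation sample (40 patients / 40 controls), then apply it unchanged
# to the independent sample (20 / 20) and report sensitivity/specificity.
```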
Violent video games stress people out and make them more aggressive.
It is well known that violent video games increase aggression, and that stress increases aggression. Many violent video games can be stressful because enemies are trying to kill players. The present study investigates whether violent games increase aggression by inducing stress in players. Stress was measured using cardiac coherence, defined as the synchronization of the rhythm of breathing to the rhythm of the heart. We predicted that cardiac coherence would mediate the link between exposure to violent video games and subsequent aggression. Specifically, we predicted that playing a violent video game would decrease cardiac coherence, and that cardiac coherence, in turn, would correlate negatively with aggression. Participants (N = 77) played a violent or nonviolent video game for 20 min. Cardiac coherence was measured before and during game play. After game play, participants had the opportunity to blast a confederate with loud noise through headphones during a reaction time task. The intensity and duration of noise blasts given to the confederate were used to measure aggression. As expected, violent video game players had lower cardiac coherence levels and higher aggression levels than did nonviolent game players. Cardiac coherence, in turn, was negatively related to aggression. This research offers another possible reason why violent games can increase aggression: by inducing stress. Cardiac coherence can be a useful tool to measure stress induced by violent video games. Cardiac coherence has several desirable methodological features as well: it is noninvasive, stable against environmental disturbances, relatively inexpensive, not subject to demand characteristics, and easy to use.
Macro Tree Transducers
Macro tree transducers are a combination of top-down tree transducers and macro grammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata theoretical topics like composition, decomposition, domains, and ranges of the induced translation classes. The extension with regular look-ahead is considered.
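A toy illustration of the defining feature, macro-style context parameters on top of top-down structural recursion (Python used purely as notation; the tree encoding is an assumption):

```python
# A toy macro tree transducer: the rules are top-down (structural
# recursion over the input tree), but each state carries an accumulating
# context parameter -- here, the path from the root -- which plain
# top-down tree transducers cannot thread through the computation.

def paths(tree, ctx=()):
    """State q(t, ctx): map every leaf of a binary tree to its root path."""
    if isinstance(tree, tuple):          # internal node: (left, right)
        left, right = tree
        return paths(left, ctx + ("L",)) + paths(right, ctx + ("R",))
    return [(tree, ctx)]                 # leaf: the output uses the context

# paths((("a", "b"), "c"))
# -> [('a', ('L', 'L')), ('b', ('L', 'R')), ('c', ('R',))]
```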
Managing medical and psychiatric comorbidity in individuals with major depressive disorder and bipolar disorder.
BACKGROUND Most individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment. METHODS The Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments. RESULTS Several principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care. CONCLUSIONS Efficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.
The Role of Alpha-Band Brain Oscillations as a Sensory Suppression Mechanism during Selective Attention
Evidence has amassed from both animal intracranial recordings and human electrophysiology that neural oscillatory mechanisms play a critical role in a number of cognitive functions such as learning, memory, feature binding and sensory gating. The wide availability of high-density electrical and magnetic recordings (64-256 channels) over the past two decades has allowed for renewed efforts in the characterization and localization of these rhythms. A variety of cognitive effects that are associated with specific brain oscillations have been reported, which range in spectral, temporal, and spatial characteristics depending on the context. Our laboratory has focused on investigating the role of alpha-band oscillatory activity (8-14 Hz) as a potential attentional suppression mechanism, and this particular oscillatory attention mechanism will be the focus of the current review. We discuss findings in the context of intersensory selective attention as well as intrasensory spatial and feature-based attention in the visual, auditory, and tactile domains. The weight of evidence suggests that alpha-band oscillations can be actively invoked within cortical regions across multiple sensory systems, particularly when these regions are involved in processing irrelevant or distracting information. That is, a central role for alpha seems to be as an attentional suppression mechanism when objects or features need to be specifically ignored or selected against.
Pentoxifylline does not alter the response to inhaled grain dust.
Pentoxifylline (PTX) has been shown to reduce sepsis-induced neutrophil sequestration in the lung and inhibit endotoxin-mediated release of tumor necrosis factor-alpha (TNF-alpha). Previously, we have shown that endotoxin appears to be the principal agent in grain dust causing airway inflammation and airflow obstruction following grain dust inhalation. To determine whether PTX affects the physiologic and inflammatory events following acute grain dust inhalation, 10 healthy, nonsmoking subjects with normal airway reactivity were treated with PTX or placebo (PL) followed by corn dust extract (CDE) inhalation (0.08 mL/kg), using a single-blinded, crossover design. Subjects received PTX (1,200 mg/d) or PL for 4 days prior to CDE inhalation and 400 mg PTX or PL on the exposure day. Both respiratory symptoms and declines in FEV1 and FVC occurred following CDE exposure in both groups, but there were no significant differences in the frequency of symptoms or percent declines from baseline in the FEV1 and FVC at any of the time points measured in the study. Elevations in peripheral blood leukocyte and neutrophil concentrations and BAL total cell, neutrophil, TNF-alpha, and interleukin-8 concentrations were measured 4 h following exposure to CDE in both the PTX- and PL-treated subjects, but no significant differences were found between treatment groups. These results suggest that pretreatment with PTX prior to inhalation of CDE, in the doses used in this study, does not alter the acute physiologic or inflammatory events following exposure to inhaled CDE.
Antimicrobial susceptibility patterns of Ureaplasma species and Mycoplasma hominis in pregnant women
BACKGROUND Genital mycoplasmas colonise up to 80% of sexually mature women and may invade the amniotic cavity during pregnancy and cause complications. Tetracyclines and fluoroquinolones are contraindicated in pregnancy and erythromycin is often used to treat patients. However, increasing resistance to common antimicrobial agents is widely reported. The purpose of this study was to investigate antimicrobial susceptibility patterns of genital mycoplasmas in pregnant women. METHODS Self-collected vaginal swabs were obtained from 96 pregnant women attending an antenatal clinic in Gauteng, South Africa. Specimens were screened with the Mycofast Revolution assay for the presence of Ureaplasma species and Mycoplasma hominis. The antimicrobial susceptibility to levofloxacin, moxifloxacin, erythromycin, clindamycin and tetracycline was determined at various breakpoints. A multiplex polymerase chain reaction assay was used to speciate Ureaplasma-positive specimens as either U. parvum or U. urealyticum. RESULTS Seventy-six percent (73/96) of specimens contained Ureaplasma spp., while 39.7% (29/73) of Ureaplasma-positive specimens were also positive for M. hominis. Susceptibilities of Ureaplasma spp. to levofloxacin and moxifloxacin were 59% (26/44) and 98% (43/44), respectively. Mixed isolates (Ureaplasma species and M. hominis) were highly resistant to erythromycin and tetracycline (both 97% resistance). Resistance of Ureaplasma spp. to erythromycin was 80% (35/44) and tetracycline resistance was detected in 73% (32/44) of Ureaplasma spp. Speciation indicated that U. parvum was the predominant Ureaplasma spp. conferring antimicrobial resistance. CONCLUSIONS Treatment options for genital mycoplasma infections are becoming limited. More elaborate studies are needed to elucidate the diverse antimicrobial susceptibility patterns found in this study when compared to similar studies. To prevent complications in pregnant women, the foetus and the neonate, routine screening for the presence of genital mycoplasmas is recommended. In addition, it is recommended that antimicrobial susceptibility patterns are determined.
Overweight, Obesity and Underweight Is Associated with Adverse Psychosocial and Physical Health Outcomes among 7-Year-Old Children: The ‘Be Active, Eat Right’ Study
BACKGROUND Limited studies have reported on associations between overweight and physical and psychosocial health outcomes among younger children. This study evaluates associations between overweight, obesity and underweight in 5-year-old children, and parent-reported health outcomes at age 7 years. METHODS Data were used from the 'Be active, eat right' study. Height and weight were measured at 5 and 7 years. Parents reported on child physical and psychosocial health outcomes (e.g. respiratory symptoms, general health, happiness, insecurity and adverse treatment). Regression models, adjusted for potential confounders, were fitted to predict health outcomes at age 7 years. RESULTS The baseline study sample consisted of 2,372 children with a mean age of 5.8 (SD 0.4) years; 6.2% were overweight, 1.6% obese and 15.0% underweight. Based on parent report, overweight, obese and underweight children had an odds ratio (OR) of 5.70 (95% CI: 4.10 to 7.92), 35.34 (95% CI: 19.16 to 65.17) and 1.39 (95% CI: 1.05 to 1.84), respectively, for being treated adversely compared to normal-weight children. Compared to children with a low stable body mass index (BMI), parents of children with a high stable BMI reported their child to have an OR of 3.87 (95% CI: 1.75 to 8.54) for visiting the general practitioner once or more, an OR of 15.94 (95% CI: 10.75 to 23.64) for being treated adversely, and an OR of 16.35 (95% CI: 11.08 to 24.36) for feeling insecure. CONCLUSION This study shows that overweight, obesity and underweight at 5 years of age are associated with more parent-reported adverse treatment of the child. Qualitative research examining underlying mechanisms is recommended. Healthcare providers should be aware of the possible adverse effects of childhood overweight and also relative underweight, and provide parents and children with appropriate counseling.
A comparative planning study of step-and-shoot IMRT versus helical tomotherapy for whole-pelvis irradiation in cervical cancer
The aim of this study was to compare the dosimetric parameters of whole-pelvis radiotherapy (WPRT) for cervical cancer between step-and-shoot IMRT (SaS-IMRT) and Helical Tomotherapy™ (HT). Retrospective analysis was performed on 20 cervical cancer patients who received WPRT in our center between January 2011 and January 2014. SaS-IMRT and HT treatment plans were generated for each patient. The dosimetric values for target coverage and organ-at-risk (OAR) sparing were compared according to the criteria of the International Commission on Radiation Units and Measurements 83 (ICRU 83) guidelines. Differences in beam-on time (BOT) were also compared. All the PTV dosimetric parameters (D5%, D50% and D95%) for the HT plan were statistically significantly better than those for the SaS-IMRT plan (P-value < 0.001 in all respects). HT was also significantly more accurate than SaS-IMRT with respect to the D98% and Dmean of the CTV (P-values of 0.008 and <0.001, respectively). The median Conformity Index (CI) did not differ between the two plans (P-value = 0.057). However, the Uniformity Index for HT was significantly better than that for SaS-IMRT (P-value < 0.001). The median D50% values for the bladder, rectum and small bowel were significantly lower in HT planning than in SaS-IMRT (P-value < 0.001). For D2%, we found that HT provided better sparing of the rectum and bladder (P-value < 0.001). However, the median D2% for the small bowel was comparable for both plans. The median Dmax for the head of the left femur was significantly lower in the HT plan, but this did not apply to the head of the right femur. BOT for HT was significantly shorter than for SaS-IMRT (P-value < 0.001). HT provided highly accurate plans, with more homogeneous PTV coverage and superior sparing of OARs compared with SaS-IMRT. In addition, HT enabled a shorter delivery time than SaS-IMRT.
Suboptimal compliance with evidence-based guidelines in patients with traumatic brain injuries.
OBJECT Evidence-based management (EBM) guidelines for severe traumatic brain injuries (TBIs) were promulgated decades ago. However, the extent of their adoption into bedside clinical practice is not known. The purpose of this study was to measure compliance with EBM guidelines for management of severe TBI and its impact on patient outcome. METHODS This was a retrospective study of blunt TBI (11 Level I trauma centers, study period 2008-2009, n = 2056 patients). Inclusion criteria were an admission Glasgow Coma Scale score ≤ 8 and a CT scan showing TBI, excluding patients with nonsurvivable injuries, that is, a head Abbreviated Injury Scale score of 6. The authors measured compliance with 6 nonoperative EBM processes (endotracheal intubation, resuscitation, correction of coagulopathy, intracranial pressure monitoring, maintaining cerebral perfusion pressure ≥ 50 mm Hg, and discharge to rehabilitation). Compliance rates were calculated for each center using multivariate regression to adjust for patient demographics, physiology, injury severity, and TBI severity. RESULTS The overall compliance rate was 73%, and there was wide variation among centers. Only 3 centers achieved a compliance rate exceeding 80%. Risk-adjusted compliance was worse than average at 2 centers, better than average at 1, and the remainder were average. Multivariate analysis showed that increased adoption of EBM was associated with a reduced mortality rate (OR 0.88; 95% CI 0.81-0.96, p < 0.005). CONCLUSIONS Despite widespread dissemination of EBM guidelines, patients with severe TBI continue to receive inconsistent care. Barriers to adoption of EBM need to be identified and mitigated to improve patient outcomes.
Large-scale generation and analysis of filamentous fungal DNA barcodes boosts coverage for kingdom fungi and reveals thresholds for fungal species and higher taxon delimitation
Species identification lies at the heart of biodiversity studies, which have in recent years favoured DNA-based approaches. Microbial Biological Resource Centres are a rich source of diverse and high-quality reference materials in microbiology, and yet the strains preserved in these biobanks have been exploited only on a limited scale to generate DNA barcodes. As part of a project funded in the Netherlands to barcode specimens of major national biobanks, sequences of two nuclear ribosomal genetic markers, the Internal Transcribed Spacers and 5.8S gene (ITS) and the D1/D2 domain of the 26S Large Subunit (LSU), were generated as DNA barcode data for ca. 100 000 fungal strains originally assigned to ca. 17 000 species in the CBS fungal biobank maintained at the Westerdijk Fungal Biodiversity Institute, Utrecht. Using more than 24 000 DNA barcode sequences of 12 000 ex-type and manually validated filamentous fungal strains of 7 300 accepted species, the optimal identity thresholds to discriminate filamentous fungal species were predicted as 99.6 % for ITS and 99.8 % for LSU. We showed that 17 % and 18 % of the species could not be discriminated by the ITS and LSU genetic markers, respectively. Among them, ∼8 % were indistinguishable using both genetic markers. ITS was shown to outperform LSU in filamentous fungal species discrimination, with a probability of correct identification of 82 % vs. 77.6 %, and a clustering quality value of 84 % vs. 77.7 %. At higher taxonomic ranks, LSU was shown to have a better discriminatory power than ITS. With a clustering quality value of 80 %, LSU outperformed ITS in identifying filamentous fungi at the ordinal level. At the generic level, the clustering quality values produced by both genetic markers were low, indicating the necessity for taxonomic revisions at genus level and, likely, for applying more conserved genetic markers or even whole genomes. The taxonomic thresholds predicted for filamentous fungal identification at the genus, family, order and class levels were 94.3 %, 88.5 %, 81.2 % and 80.9 % based on ITS barcodes, and 98.2 %, 96.2 %, 94.7 % and 92.7 % based on LSU barcodes. The DNA barcodes used in this study have been deposited to GenBank and will also be publicly available at the Westerdijk Institute's website as reference sequences for fungal identification, marking an unprecedented data release event in global fungal barcoding efforts to date.
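Read as a decision rule, the reported thresholds suggest the following simplified identification logic (a sketch only; it ignores the 17-18 % of species the markers cannot separate):

```python
def identify(query_identity_pct, marker="ITS"):
    """Assign the deepest rank supported by the thresholds reported in
    the study: species/genus/family/order/class cutoffs per marker."""
    thresholds = {
        "ITS": [(99.6, "species"), (94.3, "genus"), (88.5, "family"),
                (81.2, "order"), (80.9, "class")],
        "LSU": [(99.8, "species"), (98.2, "genus"), (96.2, "family"),
                (94.7, "order"), (92.7, "class")],
    }
    for cutoff, rank in thresholds[marker]:   # descending cutoffs
        if query_identity_pct >= cutoff:
            return rank
    return "unresolved"

# identify(95.0, "ITS") -> "genus"; identify(99.9, "LSU") -> "species"
```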
HANDEXOS: Towards an exoskeleton device for the rehabilitation of the hand
This paper introduces a novel exoskeleton device (HANDEXOS) for the rehabilitation of the hand in post-stroke patients. The impairment of the hand can be summarized as limited extension, abduction and adduction, leaving the fingers in a flexed position; the exoskeleton's goal is therefore to train a safe extension motion from the typical closed position of the impaired hand.
Cause-specific excess deaths associated with underweight, overweight, and obesity.
CONTEXT The association of body mass index (BMI) with cause-specific mortality has not been reported for the US population. OBJECTIVE To estimate cause-specific excess deaths associated with underweight (BMI <18.5), overweight (BMI 25-<30), and obesity (BMI ≥30). DESIGN, SETTING, AND PARTICIPANTS Cause-specific relative risks of mortality from the National Health and Nutrition Examination Survey (NHANES) I, 1971-1975; II, 1976-1980; and III, 1988-1994, with mortality follow-up through 2000 (571,042 person-years of follow-up) were combined with data on BMI and other covariates from NHANES 1999-2002 with underlying cause of death information for 2.3 million adults 25 years and older from 2004 vital statistics data for the United States. MAIN OUTCOME MEASURES Cause-specific excess deaths in 2004 by BMI levels for categories of cardiovascular disease (CVD), cancer, and all other causes (noncancer, non-CVD causes). RESULTS Based on total follow-up, underweight was associated with significantly increased mortality from noncancer, non-CVD causes (23,455 excess deaths; 95% confidence interval [CI], 11,848 to 35,061) but not associated with cancer or CVD mortality. Overweight was associated with significantly decreased mortality from noncancer, non-CVD causes (-69,299 excess deaths; 95% CI, -100,702 to -37,897) but not associated with cancer or CVD mortality. Obesity was associated with significantly increased CVD mortality (112,159 excess deaths; 95% CI, 87,842 to 136,476) but not associated with cancer mortality or with noncancer, non-CVD mortality. In further analyses, overweight and obesity combined were associated with increased mortality from diabetes and kidney disease (61,248 excess deaths; 95% CI, 49,685 to 72,811) and decreased mortality from other noncancer, non-CVD causes (-105,572 excess deaths; 95% CI, -161,816 to -49,328). Obesity was associated with increased mortality from cancers considered obesity-related (13,839 excess deaths; 95% CI, 1,920 to 25,758) but not associated with mortality from other cancers. Comparisons across surveys suggested a decrease in the association of obesity with CVD mortality over time. CONCLUSIONS The BMI-mortality association varies by cause of death. These results help to clarify the associations of BMI with all-cause mortality.
A randomized clinical trial of the effect of intensive versus non-intensive counselling on discontinuation rates due to bleeding disturbances of three long-acting reversible contraceptives.
STUDY QUESTION Does intensive counselling before insertion and throughout the first year of use have any influence on discontinuation rates due to unpredictable menstrual bleeding in users of three long-acting reversible contraceptives (LARCs)? SUMMARY ANSWER Intensive counselling had a similar effect to routine counselling in terms of discontinuation rates due to unpredictable menstrual bleeding in new users of the contraceptives. WHAT IS KNOWN ALREADY Contraceptive efficacy and satisfaction rates are very high with LARCs, including the etonogestrel (ENG)-releasing implant, the levonorgestrel-releasing intrauterine system (LNG-IUS) and the TCu380A intrauterine device (IUD). However, unpredictable menstrual bleeding constitutes the principal reason for premature discontinuation, particularly in the cases of the ENG-implant and the LNG-IUS. STUDY DESIGN, SIZE, DURATION A randomized clinical trial was conducted between 2011 and 2013, and involved 297 women: 98 ENG-implant users, 99 LNG-IUS users and 100 TCu380A IUD users. PARTICIPANTS, SETTING, METHODS Women accepting each contraceptive method were randomized into two groups after the women chose their contraceptive method. Group I received routine counselling at the clinic, including information on safety, efficacy and side effects, as well as what to expect regarding bleeding disturbances. Group II received 'intensive counselling'. In addition to the information provided to those in Group I, these women also received leaflets on their chosen method and were seen by the same three professionals, the most experienced at the clinic, throughout the year of follow-up. These three professionals went over all the information provided at each consultation. Women in both groups were instructed to return to the clinic after 45 (±7) days and at 6 and 12 (±1) months after insertion. They were instructed to record all bleeding episodes on a menstrual calendar specifically provided for this purpose. Additionally, satisfaction with the method was evaluated by a questionnaire completed by the women after 12 months of use of the contraceptive method. MAIN RESULTS AND THE ROLE OF CHANCE There were no significant differences between the intensive and routine counselling groups in the discontinuation rates due to unpredictable menstrual bleeding of the three contraceptives under evaluation. The 1-year cumulative discontinuation rates due to menstrual bleeding irregularities were 2.1, 2.7 and 4.0% and the continuation rates were 82.6, 81.0 and 73.2%, for the ENG-implant, the LNG-IUS or the TCu380A IUD users, respectively. The main reasons for discontinuation of the methods were weight gain in users of the ENG-implant and expulsion of the TCu380A. LIMITATIONS, REASONS FOR CAUTION The main limitations are that we cannot assure generalization of the results to other settings and that the routine counselling provided by our counsellors may already be appropriate for the women attending the clinic, so that intensive counselling including written leaflets was unable to influence the premature discontinuation rate due to unpredictable menstrual bleeding. Additionally, counselling could discourage some women from using the LARC methods offered in the study and consequently those women may have decided on other contraceptives. WIDER IMPLICATIONS OF THE FINDINGS Routine counselling may be sufficient for many women to help reduce premature discontinuation rates and improve continuation rates and user satisfaction among new users of LARC methods.
TRIAL REGISTRATION NUMBER The trial was registered at clinicaltrials.gov (NCT01392157). STUDY FUNDING/COMPETING INTEREST(S) The study was partially funded by the Fundação de Apoio a Pesquisa do Estado de São Paulo (FAPESP) grant # 2012/01379-0, the Brazilian National Research Council (CNPq) grant #573747/2008-3 and by Merck (MSD), Brazil under an unrestricted grant. The LNG-IUS were donated by the International Contraceptive Access Foundation (ICA) and the copper IUD by Injeflex, São Paulo, Brazil. L.B. has occasionally served on the Board of MSD, Bayer and Vifor.
Using Gravity to Estimate Accelerometer Orientation
Several wearable computing or ubiquitous computing research projects have detected and distinguished user motion activities by attaching accelerometers in known positions and orientations on the user’s body. This paper observes that the orientation constraint can probably be relaxed. An estimate of the constant gravity vector can be obtained by averaging accelerometer samples. This gravity vector estimate in turn enables estimation of the vertical component and the magnitude of the horizontal component of the user’s motion, independently of how the three-axis accelerometer system is oriented.
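The estimation procedure the abstract describes translates almost directly into code; a NumPy sketch (the window length over which samples are averaged is an assumption):

```python
import numpy as np

def gravity_estimate(samples):
    """Average three-axis accelerometer samples over a window; over a
    long enough window, motion accelerations tend to cancel and the
    mean approximates the constant gravity vector."""
    return np.mean(samples, axis=0)   # samples: (n_samples, 3) array

def decompose(sample, g):
    """Split one sample into its signed vertical component (along the
    gravity estimate) and the magnitude of its horizontal component,
    both independent of how the sensor is oriented on the body."""
    g_unit = g / np.linalg.norm(g)
    vertical = float(np.dot(sample, g_unit))     # component along gravity
    horizontal = sample - vertical * g_unit      # residual in ground plane
    return vertical, float(np.linalg.norm(horizontal))
```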
Citarasa engineering model for affective design of vehicles
Research on affective design has expanded considerably in recent years. The focus has primarily been on consumer products, such as mobile phones. Here we discuss a model for affective design of vehicles, with special emphasis on cars. It was developed in CATER - a research program intended to support mass customization of vehicles. Cars are different from consumer products because they represent major investments, and customers will typically take about a year to decide what to buy. During this time the customer goes through several motivational stages from belief to attitude to intention and behavior, which will affect his/her priorities. The model drives development of the citarasa engineering system.
Bridging the gap: a genre analysis of Weblogs
Weblogs (blogs) - frequently modified Web pages in which dated entries are listed in reverse chronological sequence - are the latest genre of Internet communication to attain widespread popularity, yet their characteristics have not been systematically described. This paper presents the results of a content analysis of 203 randomly-selected Weblogs, comparing the empirically observable features of the corpus with popular claims about the nature of Weblogs, and finding them to differ in a number of respects. Notably, blog authors, journalists and scholars alike exaggerate the extent to which blogs are interlinked, interactive, and oriented towards external events, and underestimate the importance of blogs as individualistic, intimate forms of self-expression. Based on the profile generated by the empirical analysis, we consider the likely antecedents of the blog genre, situate it with respect to the dominant forms of digital communication on the Internet today, and advance predictions about its long-term impacts.
Low-level, high-frequency mechanical signals enhance musculoskeletal development of young women with low BMD.
UNLABELLED The potential for brief periods of low-magnitude, high-frequency mechanical signals to enhance the musculoskeletal system was evaluated in young women with low BMD. Twelve months of this noninvasive signal, induced as whole body vibration for at least 2 minutes each day, increased bone and muscle mass in the axial skeleton and lower extremities compared with controls. INTRODUCTION The incidence of osteoporosis, a disease that manifests in the elderly, may be reduced by increasing peak bone mass in the young. Preliminary data indicate that extremely low-level mechanical signals are anabolic to bone tissue, and their ability to enhance bone and muscle mass in young women was investigated in this study. MATERIALS AND METHODS A 12-month trial was conducted in 48 young women (15-20 years) with low BMD and a history of at least one skeletal fracture. One half of the subjects underwent brief (10 minutes requested), daily, low-level whole body vibration (30 Hz, 0.3g); the remaining women served as controls. Quantitative CT performed at baseline and at the end of study was used to establish changes in muscle and bone mass in the weight-bearing skeleton. RESULTS Using an intention-to-treat (ITT) analysis, cancellous bone in the lumbar vertebrae and cortical bone in the femoral midshaft of the experimental group increased by 2.1% (p = 0.025) and 3.4% (p < 0.001), respectively, compared with 0.1% (p = 0.74) and 1.1% (p = 0.14), in controls. Increases in cancellous and cortical bone were 2.0% (p = 0.06) and 2.3% (p = 0.04) greater, respectively, in the experimental group compared with controls. Cross-sectional area of paraspinous musculature was 4.9% greater (p = 0.002) in the experimental group versus controls. When a per protocol analysis was considered, gains in both muscle and bone were strongly correlated to a threshold in compliance, where the benefit of the mechanical intervention compared with controls was realized once subjects used the device for at least 2 minutes/day (n = 18), as reflected by a 3.9% increase in cancellous bone of the spine (p = 0.007), 2.9% increase in cortical bone of the femur (p = 0.009), and 7.2% increase in musculature of the spine (p = 0.001) compared with controls and low compliers (n = 30). CONCLUSIONS Short bouts of extremely low-level mechanical signals, several orders of magnitude below that associated with vigorous exercise, increased bone and muscle mass in the weight-bearing skeleton of young adult females with low BMD. Should these musculoskeletal enhancements be preserved through adulthood, this intervention may prove to be a deterrent to osteoporosis in the elderly.
Combining Psychodynamic Psychotherapy and Pharmacotherapy.
Many patients with depression, anxiety disorders, and other psychiatric disorders are treated with combinations of psychodynamic psychotherapy and medication. Whether this is better than monotherapy is an empirical question that requires much more extensive research than is currently available. When medications were first introduced to treat psychiatric illnesses, some psychopharmacologists insisted that they heralded a new era of "biological psychiatry" that would ultimately render psychotherapy obsolete. Psychodynamic theorists and practitioners, on the other hand, argued that psychopharmacology offered only a superficial approach to treatment. Fortunately, these battles have now largely been supplanted by the belief that whatever treatment offers the patient the best outcome should be employed, regardless of the therapist's theoretical outlook. This should motivate more extensive study of the value of combination treatment. So far, the few studies that have been done suggest that the combination of psychodynamic psychotherapy and medication may be superior for the treatment of mood and anxiety disorders, but most of these studies have small sample sizes and involve only short-term psychotherapy. An examination of the neuroscience of mood and anxiety disorders and of the mechanisms of action of psychodynamic psychotherapy and antidepressant medication suggests several routes by which the two treatment modalities could be synergistic: stimulation of hippocampal neurogenesis; epigenetic regulation of gene expression; dendritic remodeling; enhanced prefrontal cortical control of limbic system activity; and action at specific neurohormonal and neurotransmitter targets. The evidence for each of these mechanisms is reviewed with an eye toward potential experiments that might be relevant to them.
Noninvasive electroanatomic mapping of human ventricular arrhythmias with electrocardiographic imaging.
The rapid heartbeat of ventricular tachycardia (VT) can lead to sudden cardiac death and is a major health issue worldwide. Efforts to identify patients at risk, determine mechanisms of VT, and effectively prevent and treat VT through a mechanism-based approach would all be facilitated by continuous, noninvasive imaging of the arrhythmia over the entire heart. Here, we present noninvasive real-time images of human ventricular arrhythmias using electrocardiographic imaging (ECGI). Our results reveal diverse activation patterns, mechanisms, and sites of initiation of human VT. The spatial resolution of ECGI is superior to that of the routinely used 12-lead electrocardiogram, which provides only global information, and ECGI has distinct advantages over the currently used method of mapping with invasive catheter-applied electrodes. The spatial resolution of this method and its ability to image electrical activation sequences over the entire ventricular surfaces in a single heartbeat allowed us to determine VT initiation sites and continuation pathways, as well as VT relationships to ventricular substrates, including anatomical scars and abnormal electrophysiological substrate. Thus, ECGI can map the VT activation sequence and identify the location and depth of VT origin in individual patients, allowing personalized treatment of patients with ventricular arrhythmias.
Supervised Learning of Semantics-Preserving Hashing via Deep Neural Networks for Large-Scale Image Search
This paper presents a supervised deep hashing approach that constructs binary hash codes from labeled data for large-scale image search. We assume that semantic labels are governed by a set of latent attributes in which each attribute can be on or off, and classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network in which binary codes are learned by the optimization of an objective function defined over classification error and other desirable properties of hash codes. With this design, SSDH has the nice property that classification and retrieval are unified in a single learning model, and the learned binary codes not only preserve the semantic similarity between images but also are efficient for image search. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wise manner and thus is naturally scalable to large-scale datasets. SSDH is simple and can be easily realized by a slight modification of an existing deep architecture for classification; yet it is effective and outperforms other unsupervised and supervised hashing approaches on several benchmarks and one large dataset comprising more than 1 million images.
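A minimal sketch of the latent-hash-layer idea in PyTorch (dimensions are placeholders, and SSDH's additional objective terms for code balance and binarization are omitted):

```python
import torch
import torch.nn as nn

class LatentHashHead(nn.Module):
    """Classification head with a K-bit latent hash layer: features ->
    sigmoid(latent) -> classifier, so relaxed binary codes are learned
    directly from the classification error."""
    def __init__(self, feat_dim=4096, bits=48, n_classes=10):
        super().__init__()
        self.hash_layer = nn.Linear(feat_dim, bits)
        self.classifier = nn.Linear(bits, n_classes)

    def forward(self, features):
        h = torch.sigmoid(self.hash_layer(features))  # relaxed codes in (0,1)
        return self.classifier(h), h                  # logits and codes

    @torch.no_grad()
    def binary_code(self, features):
        h = torch.sigmoid(self.hash_layer(features))
        return (h > 0.5).int()        # threshold to {0,1} at retrieval time
```

Training would attach this head to an existing classification backbone and minimize the usual cross-entropy on the logits, which is what makes the modification "slight" in the abstract's sense.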
Advances in group psychotherapy and self psychology: An intersubjective approach
The theory of the "self" as applied to group psychotherapy has had particular meaning in this age and culture wherein people strive for independence, autonomy, and self-sufficiency, but all too often at the cost of alienation from others. The much-talked-about phenomenon of the "me" culture can be viewed as a product of the times we live in, but a closer look also reveals it to be a common compensatory response to psychological factors going back to an early period of life when a child's developmental needs were misunderstood, ignored, mocked, overvalued, or overwhelmed by caretakers who could not relate to the child phase-appropriately. In such a case, a foundation for good-enough object relations never emerged. Children who experienced little or no enthusiastic responsiveness from such caretakers, except perhaps by taking on a submissive, compliant false self, can hardly be expected to relate spontaneously and with mature reciprocation. This paper discusses clinical issues in group psychotherapy with narcissistic and borderline patients; but first a brief review of the principal concepts of self psychology will be useful, especially more recent developments. For a deeper immersion into the earlier concepts, Kohut and Wolf's article (1978) and Kohut's last two books (1977, 1984) are recommended.
MAC-Layer Concurrent Beamforming Protocol for Indoor Millimeter-Wave Networks
In this paper, we study the concurrent beamforming problem of achieving high capacity in indoor millimeter-wave (mmWave) networks. The general concurrent beamforming problem is first formulated as an optimization problem that maximizes the sum rate of concurrent transmissions, considering the mutual interference. To reduce the complexity of beamforming and the total setup time, concurrent beamforming is decomposed into multiple single-link beamforming problems, and an iterative searching algorithm is proposed to quickly arrive at suboptimal transmission/reception beam sets. A codebook-based beamforming protocol at the medium access control (MAC) layer is then introduced in a distributed manner to determine the beam sets. Both analytical and simulation results demonstrate that the proposed protocol can drastically reduce the total setup time, increase system throughput, and improve energy efficiency.
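One way to read the decomposition into per-link beam selection is as coordinate ascent on the sum rate; a schematic sketch (the paper's codebook protocol and interference model are not reproduced here):

```python
def iterative_beam_search(rate, n_links, n_beams, iters=5):
    """Coordinate ascent on the sum rate: fix all other links' beams and
    re-pick each link's beam in turn. `rate(j, beams)` must return link
    j's achievable rate under the mutual interference implied by the
    candidate beam-index assignment `beams`."""
    beams = [0] * n_links                      # initial beam indices
    for _ in range(iters):                     # a few sweeps usually suffice
        for i in range(n_links):
            beams[i] = max(
                range(n_beams),
                key=lambda b: sum(rate(j, beams[:i] + [b] + beams[i + 1:])
                                  for j in range(n_links)))
    return beams
```

Each sweep can only increase the sum rate, so the search converges quickly to a suboptimal but interference-aware beam set, which is the trade-off the abstract describes.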
Phase IB study of the mTOR inhibitor ridaforolimus with capecitabine.
PURPOSE Synergistic/additive cytotoxicity in tumor models and widespread applicability of fluoropyrimidines in solid tumors prompted the study of the combination of the mammalian target of rapamycin (mTOR) inhibitor, non-prodrug rapamycin analog ridaforolimus, with capecitabine. PATIENTS AND METHODS Thirty-two adult patients were treated. Intravenous ridaforolimus was given once weekly for 3 weeks and capecitabine was given from days 1 to 14 every 4 weeks. Ridaforolimus was given at 25, 37.5, 50, or 75 mg with capecitabine at 1,650 mg/m(2) or 1,800 mg/m(2) divided into two daily doses. Pharmacokinetics of both drugs were determined during cycles 1 and 2. Pharmacodynamic studies in peripheral blood mononuclear cells (PBMCs) and wound tissue of the skin characterized pathways associated with the metabolism or disposition of fluoropyrimidines and mTOR and ERK signaling. RESULTS Two recommended doses (RDs) were defined: 75 mg ridaforolimus/1,650 mg/m(2) capecitabine and 50 mg ridaforolimus/1,800 mg/m(2) capecitabine. Dose-limiting toxicities were stomatitis and skin rash. One patient achieved a partial response lasting 10 months and 10 of 29 evaluable patients had stable disease for ≥ 6 months. The only pharmacokinetic interaction was a ridaforolimus-induced increase in plasma exposure to fluorouracil. PBMC data suggested that prolonged exposure to capecitabine reduced the ridaforolimus inhibition of mTOR. Ridaforolimus influenced the metabolism of fluoropyrimidines and inhibited dihydropyrimidine dehydrogenase, behavior similar to that of rapamycin. Inhibition of the target thymidylate synthase by capecitabine was unaffected. mTOR and ERK signaling was inhibited in proliferating endothelial cells and was more pronounced at the RD with the larger amount of ridaforolimus. CONCLUSION Good tolerability, feasibility of prolonged treatment, antitumor activity, and favorable pharmacologic profile support further investigation of this combination.
Primary treatment of multiple myeloma with thalidomide, vincristine, liposomal doxorubicin and dexamethasone (T-VAD doxil): a phase II multicenter study.
BACKGROUND High-dose chemotherapy with autologous stem cell transplantation after initial cytoreductive chemotherapy with the combination vincristine, doxorubicin and dexamethasone (VAD) is considered an effective therapy for many patients with newly diagnosed, symptomatic multiple myeloma. Response to initial cytoreductive chemotherapy is important for the long-term outcome of such patients. Thalidomide has recently shown significant antimyeloma activity. We studied the efficacy and toxicity of the combination of a liposomal doxorubicin-containing VAD regimen with thalidomide, administered on an outpatient basis, as initial cytoreductive treatment in previously untreated patients with symptomatic myeloma. PATIENTS AND METHODS Thirty-nine myeloma patients were treated with vincristine 2 mg intravenously (i.v.), liposomal doxorubicin 40 mg/m(2) i.v. administered as single dose on day 1, and dexamethasone 40 mg per os daily for 4 days. Dexamethasone was also given on days 15-18 of the first cycle of treatment. The regimen was administered every 4 weeks for four courses. Thalidomide was given daily at a dose of 200 mg at bedtime. Response to treatment was evaluated after four cycles of treatment. After completion of four cycles, the patients were allowed to proceed to high-dose chemotherapy or to receive two additional cycles of the same treatment. RESULTS On an intention-to-treat basis, 29 of the 39 patients (74%) responded to treatment. Four patients (10%) achieved complete and 25 (64%) partial response. Three patients (8%) showed minor response and seven (18%) were rated as non-responders. Major grade 3 or 4 toxicities consisted of neutropenia (15%), thrombocytopenia (15%), deep vein thrombosis (10%), constipation (10%), skin rash (5%) and peripheral neuropathy (5%). Two patients (5%) experienced early death due to infection. CONCLUSIONS The combination of vincristine, liposomal doxorubicin, and dexamethasone (VAD doxil) with thalidomide is an effective and relatively well-tolerated initial cytoreductive treatment. Prospective randomized studies are required in order to assess the effect of this regimen on the long-term outcome of patients with multiple myeloma.
Signal subspace integration for improved seizure localization
A subspace signal processing approach is proposed for improved scalp EEG-based localization of broad-focus epileptic seizures and for estimation of the directions of arrival (DOA) of sources. Ictal scalp EEGs from adult and pediatric patients with broad-focus seizures were first decomposed into dominant signal modes, and signal and noise subspaces at each modal frequency, to improve the signal-to-noise ratio while preserving the original data correlation structure. Transformed (focused) modal signals were then resynthesized into wideband signals from which the number of sources and the DOA were estimated. These were compared to signals denoised via principal components analysis (PCA). Coherent subspace processing performed better than PCA, significantly improving the localization of ictal EEGs and the estimation of distinct sources and their corresponding DOAs.
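The core subspace split is the standard eigendecomposition of the channel covariance; a generic NumPy sketch (the paper's modal decomposition and coherent focusing steps are not shown):

```python
import numpy as np

def signal_noise_subspaces(X, n_sources):
    """Split multichannel EEG X (channels x samples) into signal and
    noise subspaces via the eigendecomposition of the sample covariance;
    the dominant eigenvectors span the signal subspace, as in
    MUSIC-style DOA estimation."""
    R = (X @ X.conj().T) / X.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    return V[:, -n_sources:], V[:, :-n_sources]  # signal, noise subspaces
```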
Property-based attestation for computing platforms: caring about properties, not mechanisms
Over the past years, the computing industry has started various initiatives intended to increase computer security by means of new hardware architectures. The most notable efforts are the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities, such as the possibility to verify the integrity of a platform (attestation) or to bind quantities to a specific platform (sealing). In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate against certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use. To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software and/or hardware (configuration), as is today's practice, but only on the "properties" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. We also demonstrate how a property-based attestation protocol can be realized based on existing TC hardware such as a Trusted Platform Module (TPM).
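Schematically, the proposed shift replaces whitelist checks of configuration hashes with verification of a certified property set; an illustrative verifier sketch (all names and data structures below are hypothetical, not the paper's protocol):

```python
# Illustrative verifier logic only: instead of comparing the platform's
# configuration hash against a whitelist of known binaries, the verifier
# checks a certificate (issued by a trusted party) that maps the
# configuration to abstract security properties.

TRUSTED_CA = "property-certifier-pubkey"   # hypothetical trust anchor

def attest(quote, property_cert, required_properties, verify_sig):
    """Accept any configuration whose certified properties cover the
    requirement; the concrete configuration itself stays irrelevant,
    which avoids discriminating against particular OS vendors."""
    if not verify_sig(property_cert, TRUSTED_CA):          # cert authentic?
        return False
    if property_cert["config_hash"] != quote["config_hash"]:  # matches quote?
        return False
    return required_properties <= set(property_cert["properties"])
```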
PRODUCTION OF BIODIESEL FROM JATROPHA CURCAS L OIL
The world's economy depends upon burning the fossil-fuel equivalent of some 180 million barrels of oil each day; this consumption rate burns annually what nature took about one million years to accumulate as fossil deposits. The fast depletion of fossil fuels has become a concern, and there is heavy pressure on engine researchers to find alternative fuels to replace conventional fuels in the developing countries of the world. Stavarache et al. (2007) reported that biodiesel has attracted considerable attention during the past decade as a renewable, biodegradable and non-toxic fuel. Biodiesel is known chemically as the monoalkyl ester of long-chain fatty acids and is produced from several types of conventional and non-conventional vegetable oils and animal fats (Tomasevic and Siler-Marinkovic, 2003; Ramadhas et al., 2005). Biodiesel contains no petroleum, but it can be blended at any level with petroleum diesel to create a biodiesel blend. It can be used in compression-ignition (diesel) engines with little or no modification. Biodiesel is simple to use, biodegradable, non-toxic and essentially free of sulphur and aromatics. Fuels of bio-origin may be alcohols, edible and non-edible vegetable oils, biomass, biogas, etc. Jatropha curcas, a non-edible oil-seed-bearing and drought-tolerant plant, provides energy self-sufficiency while reducing fossil fuel consumption and greenhouse gas emissions (Gubitz et al., 1999). Houfang et al. (2009) investigated a two-step process consisting of pre-esterification and transesterification to produce biodiesel from crude Jatropha curcas L oil; the yield of biodiesel by transesterification was higher than 90% using 1% NaOH and a molar ratio of methanol to oil of 6:1 at 65°C. As part of our systematic investigations exploring indigenous vegetable oil resources for biodiesel production, efforts were made to evaluate the utility of Jatropha seed oil for biodiesel production.
Chatbots' Greetings to Human-Computer Communication
Both dialogue systems and chatbots aim to enable communication between humans and computers. However, instead of focusing on sophisticated techniques for natural language understanding, as the former usually do, chatbots seek to mimic conversation. Since Eliza, the first chatbot ever, was developed in 1966, many interesting ideas have been explored by the chatbot community. Beyond ideas, some chatbot developers also provide free resources, including tools and large-scale corpora. It is our opinion that this know-how and these materials should not be neglected, as they might be put to use in the human-computer communication field (and some authors already do so). Thus, in this paper we present a historical overview of chatbot developments, review what we consider to be the main contributions of this community, and point to some possible ways of coupling these with current work in the human-computer communication research line.
pNRQCD: review of selected results
Non-relativistic bound-state systems are characterized by, at least, three widely separated scales: the mass m of the particle, the (soft) scale associated with its relative momentum ∼ mv, v ≪ 1, and the (ultrasoft) scale associated with its kinetic energy ∼ mv². In QED and in the perturbative regime of QCD the velocity v of the particle in the bound state may be identified with the coupling constant. Moreover, the inverse of the size of the system is also of order mv and the binding energy of order mv². Indeed, a systematic treatment of non-relativistic bound-state systems in the framework of effective field theories (EFT), which takes full advantage of the above energy-scale hierarchy, was initiated in QED, and in more recent years remarkable progress has been achieved in the analysis of tt̄ threshold production. For systems made of b and c quarks (I will denote them generically as heavy quarkonia: ψ, Υ, Bc, ...) non-perturbative contributions may be relevant. By comparing the energy-level spacings of these systems (see Fig. 1) with the heavy-quark masses (e.g. mb ≃ 5 GeV and mc ≃ 1.6 GeV) we can still argue that the data are consistent with a kinetic energy of the bound quark much smaller than the heavy-quark mass and, therefore, with a non-relativistic (NR) description of the heavy-quark–antiquark system. However, depending on the specific system, the scale of non-perturbative physics, ΛQCD, may turn out to be close to some of its dynamical scales. The physical picture which then arises may be quite different from the perturbative situation. What remains guaranteed, also for heavy quarkonia, is that m ≫ ΛQCD and that at least the mass scale can be treated perturbatively, i.e. integrated out from QCD order by order in the coupling constant. The resulting EFT is called NRQCD. A lot of effort has been put in over the last two decades in order to find the relevant operators, which parameterize the non-perturbative heavy-quark–
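The scale hierarchy that underlies the construction, written out explicitly:

```latex
% Hard, soft and ultrasoft scales of a non-relativistic bound state:
\[
  m \;\gg\; mv \;\gg\; mv^{2}, \qquad v \ll 1,
\]
% with mv of the order of the inverse size and mv^2 of the order of the
% kinetic/binding energy; in the perturbative regime v may be identified
% with the coupling, so the binding energy scales as m\,\alpha_s^{2}.
```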
Testing the feasibility, acceptability and effectiveness of a 'decision navigation' intervention for early stage prostate cancer patients in Scotland--a randomised controlled trial.
OBJECTIVE Does decision navigation (DN) increase prostate cancer patients' confidence and certainty in treatment decisions, while reducing regret associated with the decisions made? METHODS Two hundred eighty-nine newly diagnosed prostate cancer patients were eligible. 123 consented and were randomised to usual care (n = 60) or navigation (n = 63). The intervention involved a 'navigator' guiding the patient in creating a personal question list for a consultation and providing a CD and typed summary of the consultation to patients, the general practitioner and physician. The primary outcome was decisional self efficacy. Secondary outcomes included decisional conflict (DCS) and decisional regret (RS). Measures of mood (Hospital Anxiety and Depression Scale) and adjustment (Mental Adjustment to Cancer Scale) were included to detect potential adverse effects of the intervention. RESULTS ANOVA showed a main effect for the group (F = 7.161, df 1, p = 0.009). Post hoc comparisons showed significantly higher decisional self efficacy in the navigated patients post-consultation and 6 months later. Decisional conflict was lower for navigated patients initially (t = 2.005, df = 105, p = 0.047), not at follow-up (t = 1.969, df = 109, p = 0.052). Regret scores were significantly lower in the navigation group compared to the controls 6 months later (t = -2.130, df = 100, p = 0.036). There was no impact of the intervention on mood or adjustment. CONCLUSION Compared to control patients, navigated patients were more confident in making decisions about cancer treatment, were more certain they had made the right decision after the consultation and had less regret about their decision 6 months later. Decision navigation was feasible, acceptable and effective for newly diagnosed prostate cancer patients in Scotland.
Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection
Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.
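To make the dynamic pooling idea concrete, here is a minimal sketch (an illustration, not the authors' implementation): a variable-sized similarity matrix is partitioned into a fixed grid of bins, each reduced to a single value. The grid size and the choice of min-pooling are illustrative assumptions.

```python
import numpy as np

def dynamic_pool(sim: np.ndarray, out_size: int = 8) -> np.ndarray:
    """Pool a variable-sized similarity matrix to a fixed out_size x out_size grid."""
    n, m = sim.shape
    # If a dimension is smaller than the grid, duplicate rows/columns so that
    # every output bin is non-empty.
    if n < out_size:
        sim = sim[np.linspace(0, n - 1, out_size).round().astype(int), :]
        n = out_size
    if m < out_size:
        sim = sim[:, np.linspace(0, m - 1, out_size).round().astype(int)]
        m = out_size
    row_bins = np.array_split(np.arange(n), out_size)
    col_bins = np.array_split(np.arange(m), out_size)
    pooled = np.empty((out_size, out_size))
    for i, rb in enumerate(row_bins):
        for j, cb in enumerate(col_bins):
            # Min-pooling here; max- or mean-pooling are equally plausible.
            pooled[i, j] = sim[np.ix_(rb, cb)].min()
    return pooled

# Sentences of lengths 5 and 9 give a 5x9 matrix, pooled to a fixed 8x8 input
# for a downstream classifier.
print(dynamic_pool(np.random.rand(5, 9)).shape)  # (8, 8)
```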
Parallel realities: exploring poverty dynamics using mixed methods in rural Bangladesh.
This paper explores the implications of using two methodological approaches to study poverty dynamics in rural Bangladesh. Using data from a unique longitudinal study, we show how different methods lead to very different assessments of socio-economic mobility. We suggest five ways of reconciling these differences: considering assets in addition to expenditures, proximity to the poverty line, other aspects of well-being, household division, and qualitative recall errors. Considering assets and proximity to the poverty line along with expenditures resolves three-fifths of the qualitative and quantitative differences. Such integrated mixed methods can therefore improve the reliability of poverty dynamics research.
Building better biomarkers: brain models in translational neuroimaging
Despite its great promise, neuroimaging has yet to substantially impact clinical practice and public health. However, a developing synergy between emerging analysis techniques and data-sharing initiatives has the potential to transform the role of neuroimaging in clinical applications. We review the state of translational neuroimaging and outline an approach to developing brain signatures that can be shared, tested in multiple contexts and applied in clinical settings. The approach rests on three pillars: (i) the use of multivariate pattern-recognition techniques to develop brain signatures for clinical outcomes and relevant mental processes; (ii) assessment and optimization of their diagnostic value; and (iii) a program of broad exploration followed by increasingly rigorous assessment of generalizability across samples, research contexts and populations. Increasingly sophisticated models based on these principles will help to overcome some of the obstacles on the road from basic neuroscience to better health and will ultimately serve both basic and applied goals.
Support Vector Machine in Prediction of Building Energy Demand Using Pseudo Dynamic Approach
Building energy consumption prediction has been a major concern in recent years, and many efforts have been made to improve the energy management of buildings. In particular, prediction of a building's energy consumption is essential for the energy operator to devise an optimal operating strategy, which could be integrated into the building's energy management system (BEMS). This paper proposes a prediction model for building energy consumption using support vector machine (SVM). Data-driven models such as SVM are very sensitive to the selection of training data. Thus, a relevant-days data selection method based on Dynamic Time Warping is used to train the SVM model. In addition, to capture the thermal inertia of the building, a pseudo dynamic model is applied, since it takes into account the transition of energy consumption effects and the occupancy profile. Both the relevant-days data selection model and the whole-training-data model are applied to a case study of an office building at Ecole des Mines de Nantes, France. The results show that the SVM based on the relevant data selection method predicts the building's energy consumption with higher accuracy than whole-data training. The relevant data selection method is also computationally far cheaper (around 8 minutes of training time, versus around 31 hours for weekends and 116 hours for working days with whole-data training), making it realistic for online control implementation.
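A minimal sketch of this pipeline as described (a reconstruction, not the authors' code): select the k training days whose load profiles are closest, in DTW distance, to the query day, then fit an SVM regressor on those days only. The function names, feature layout, and SVR parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def train_on_relevant_days(day_profiles, day_features, day_targets,
                           query_profile, k=10):
    """Fit an SVR only on the k historical days most similar (in DTW) to the query day.

    day_profiles: list of 1-D daily load curves (one per historical day)
    day_features: (n_days, n_features) regressor inputs, e.g. weather/occupancy
    day_targets:  (n_days,) energy consumption values to predict
    """
    dists = np.array([dtw(p, query_profile) for p in day_profiles])
    relevant = np.argsort(dists)[:k]          # indices of the k nearest days
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    model.fit(day_features[relevant], day_targets[relevant])
    return model
```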
Critical Power: An Important Fatigue Threshold in Exercise Physiology.
The hyperbolic form of the power-duration relationship is rigorous and highly conserved across species, forms of exercise, and individual muscles/muscle groups. For modalities such as cycling, the relationship resolves to two parameters, the asymptote for power (critical power [CP]) and the so-called W' (work doable above CP), which together predict the tolerable duration of exercise above CP. Crucially, the CP concept integrates sentinel physiological profiles (respiratory, metabolic, and contractile) within a coherent framework that has great scientific and practical utility. Rather than calibrating equivalent exercise intensities relative to metabolically distant parameters such as the lactate threshold or V˙O2max, setting the exercise intensity relative to CP unifies the profile of systemic and intramuscular responses and, if greater than CP, predicts the tolerable duration of exercise until W' is expended, V˙O2max is attained, and intolerance is manifested. CP may be regarded as a "fatigue threshold" in the sense that it separates exercise intensity domains within which the physiological responses to exercise can (below CP) or cannot (above CP) be stabilized. The CP concept therefore enables important insights into 1) the principal loci of fatigue development (central vs. peripheral) at different intensities of exercise and 2) mechanisms of cardiovascular and metabolic control and their modulation by factors such as O2 delivery. Practically, the CP concept has great potential application in optimizing athletic training programs and performance as well as improving quality of life for individuals enduring chronic disease.
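The two-parameter hyperbolic model described above has a simple closed form: (P - CP) · t_lim = W'. The sketch below applies it with illustrative values (not taken from the abstract).

```python
# Hyperbolic power-duration model: (P - CP) * t_lim = W',
# so the predicted time to intolerance at constant power P > CP is W' / (P - CP).
def time_to_exhaustion(P: float, CP: float, W_prime: float) -> float:
    """Predicted tolerable duration (s) at constant power P (W) above CP (W),
    given W' in joules. The model predicts sustainable exercise for P <= CP."""
    if P <= CP:
        raise ValueError("Below or at CP the model predicts no finite limit")
    return W_prime / (P - CP)

# Illustrative cyclist: CP = 250 W, W' = 20 kJ. Riding at 300 W:
print(time_to_exhaustion(300, 250, 20_000))  # 400.0 seconds, about 6.7 minutes
```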
Particle PHD Filter Based Multiple Human Tracking Using Online Group-Structured Dictionary Learning
An enhanced sequential Monte Carlo probability hypothesis density (PHD) filter-based multiple human tracking system is presented. The proposed system mainly exploits two concepts: a novel adaptive gating technique and an online group-structured dictionary learning strategy. Conventional PHD filtering methods preset the target birth intensity and the gating threshold for selecting real observations for the PHD update. This often results in false positives and missed detections in a cluttered environment. To address this issue, a measurement-driven mechanism based on a novel adaptive gating method is proposed to adaptively update the gating sizes. This yields an accurate approach to discriminate between survival and residual measurements by reducing clutter interference. In addition, online group-structured dictionary learning with a maximum voting method is used to robustly estimate the target birth intensity. It enables new-born targets to be automatically detected from noisy sensor measurements. To improve the adaptability of our group-structured dictionary to appearance and illumination changes, we employ the simultaneous codeword optimization algorithm for the dictionary update stage. Experimental results demonstrate that our proposed method achieves the best performance amongst state-of-the-art random finite set-based methods, and ranks as the second best online tracker on the leaderboard of the latest MOT17 challenge.
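As a toy illustration of the gating step described above (the specific thresholding rule here is an assumption, not the paper's adaptive method): measurements inside some predicted target's gate are treated as survival measurements, the rest as residual candidates for birth targets.

```python
import numpy as np

def gate_measurements(predictions, covs, measurements, gate_scale=3.0):
    """Split measurements into 'survival' (inside some predicted target's gate)
    and 'residual' (outside all gates, i.e. candidates for new-born targets).

    predictions: list of predicted target positions, each shape (2,)
    covs:        matching list of 2x2 innovation covariances
    gate_scale:  Mahalanobis gate radius; adapting this per-target/per-frame
                 is the key idea an adaptive gating scheme refines.
    """
    survival, residual = [], []
    for z in measurements:
        inside = False
        for x, S in zip(predictions, covs):
            d = z - x
            # Squared Mahalanobis distance; the gate widens with uncertainty S.
            if d @ np.linalg.inv(S) @ d <= gate_scale ** 2:
                inside = True
                break
        (survival if inside else residual).append(z)
    return survival, residual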
Engineering a paradox of thrift recession
We build a variation of the neoclassical growth model in which financial shocks to households or wealth shocks (in the sense of wealth destruction) generate recessions. Two standard ingredients that are necessary are (1) the existence of adjustment costs that make the expansion of the tradable goods sector difficult and (2) the existence of some frictions in the labor market that prevent enormous reductions in real wages (Nash bargaining in Mortensen-Pissarides labor markets is enough). We pose a new ingredient that greatly magnifies the recession: a reduction in consumption expenditures reduces measured productivity, while technology is unchanged due to reduced utilization of production capacity. Our model provides a novel, quantitative theory of the current recessions in southern Europe.
Phase II study of monthly pasireotide LAR (SOM230C) for recurrent or progressive meningioma.
OBJECTIVE A subset of meningiomas recur after surgery and radiation therapy, but no medical therapy for recurrent meningioma has proven effective. METHODS Pasireotide LAR is a long-acting somatostatin analog that may inhibit meningioma growth. This was a phase II trial in patients with histologically confirmed recurrent or progressive meningioma designed to evaluate whether pasireotide LAR prolongs progression-free survival at 6 months (PFS6). Patients were stratified by histology (atypical [World Health Organization grade 2] and malignant [grade 3] meningiomas in cohort A and benign [grade 1] in cohort B). RESULTS Eighteen patients were accrued in cohort A and 16 in cohort B. Cohort A had median age 59 years, median Karnofsky performance status 80, 17 (94%) had previous radiation therapy, and 11 (61%) showed high octreotide uptake. Cohort B had median age 52 years, median Karnofsky performance status 90, 11 (69%) had previous radiation therapy, and 12 (75%) showed high octreotide uptake. There were no radiographic responses to pasireotide LAR therapy in either cohort. Twelve patients (67%) in cohort A and 13 (81%) in cohort B achieved stable disease. In cohort A, PFS6 was 17% and median PFS 15 weeks (95% confidence interval: 8-20). In cohort B, PFS6 was 50% and median PFS 26 weeks (12-43). Treatment was well tolerated. Octreotide uptake and insulin-like growth factor-1 levels did not predict outcome. Expression of somatostatin receptor 3 predicted favorable PFS and overall survival. CONCLUSIONS Pasireotide LAR has limited activity in recurrent meningiomas. The finding that somatostatin receptor 3 is associated with favorable outcomes warrants further investigation. CLASSIFICATION OF EVIDENCE This study provides Class IV evidence that in patients with recurrent or progressive meningioma, pasireotide LAR does not significantly increase the proportion of patients with PFS at 6 months.
Constructing elastic distinguishability metrics for location privacy
With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a “geographic fence” that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris’ wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
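For context, here is a minimal sketch of the standard Planar Laplace mechanism used as the baseline above, following the usual polar-coordinate construction with the Lambert W function; treat it as illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(loc, epsilon, rng=None):
    """Draw a noisy location achieving epsilon-geo-indistinguishability.

    Standard construction: angle uniform in [0, 2*pi); radius drawn from the
    inverse CDF of the planar Laplace radial distribution via the Lambert W
    function (branch -1).
    """
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0, 2 * np.pi)
    p = rng.uniform(0, 1)
    # Inverse radial CDF: r = -(1/eps) * (W_{-1}((p - 1)/e) + 1)
    r = -(lambertw((p - 1) / np.e, k=-1).real + 1) / epsilon
    return (loc[0] + r * np.cos(theta), loc[1] + r * np.sin(theta))

# Example: privatize a point with epsilon = 0.01 per meter (planar coordinates).
print(planar_laplace((0.0, 0.0), epsilon=0.01))
```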
Privacy Preserving OLAP
We present techniques for privacy-preserving computation of multidimensional aggregates on data partitioned across multiple clients. Data from different clients is perturbed (randomized) in order to preserve privacy before it is integrated at the server. We develop formal notions of privacy obtained from data perturbation and show that our perturbation provides guarantees against privacy breaches. We develop and analyze algorithms for reconstructing counts of subcubes over perturbed data. We also evaluate the tradeoff between privacy guarantees and reconstruction accuracy and show the practicality of our approach.
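To give a flavor of the perturb-then-reconstruct idea described above, here is generic randomized response with an unbiased count estimator; this is a simplification for illustration, not the paper's specific retention-replacement scheme.

```python
import numpy as np

def perturb_bits(bits: np.ndarray, p_keep: float, rng) -> np.ndarray:
    """Each client keeps its true bit with probability p_keep, else flips it."""
    flip = rng.random(bits.shape) >= p_keep
    return np.where(flip, 1 - bits, bits)

def reconstruct_count(perturbed: np.ndarray, p_keep: float) -> float:
    """Unbiased server-side estimate of the true count of 1s.

    E[observed_ones] = c * p_keep + (n - c) * (1 - p_keep); solve for c.
    """
    n, observed = len(perturbed), perturbed.sum()
    return (observed - n * (1 - p_keep)) / (2 * p_keep - 1)

rng = np.random.default_rng(0)
true = rng.integers(0, 2, 100_000)
noisy = perturb_bits(true, p_keep=0.7, rng=rng)
# Lower p_keep gives stronger privacy but noisier reconstruction.
print(true.sum(), round(reconstruct_count(noisy, 0.7)))
```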
A review of gas sensors employed in electronic nose applications
This paper reviews the range of sensors used in electronic nose (e-nose) systems to date. It outlines the operating principles and fabrication methods of each sensor type as well as the applications in which the different sensors have been utilised. It also outlines the advantages and disadvantages of each sensor for application in a cost-effective low-power handheld e-nose system.
Effects of surface characteristics on the plantar shape of feet and subjects' perceived sensations.
Orthotics and other types of shoe inserts are primarily designed to reduce injury and improve comfort. The interaction between the plantar surface of the foot and the load-bearing surface contributes to foot and surface deformations and hence to perceived comfort, discomfort or pain. The plantar shapes of 16 participants' feet were captured when standing on three support surfaces that had different cushioning properties in the mid-foot region. Foot shape deformations were quantified using 3D laser scans. A questionnaire was used to evaluate the participant's perceptions of perceived shape and perceived feeling. The results showed that the structure in the mid-foot could change shape, independent of the rear-foot and forefoot regions. Participants were capable of identifying the shape changes with distinct preferences towards certain shapes. The cushioning properties of the mid-foot materials also have a direct influence on perceived feelings. This research has strong implications for the design and material selection of orthotics, insoles and footwear.
Sensitive tenofovir resistance screening of HIV-1 from the genital and blood compartments of women with breakthrough infections in the CAPRISA 004 tenofovir gel trial.
The Centre for the AIDS Programme of Research in South Africa 004 (CAPRISA 004) study demonstrated that vaginally applied tenofovir gel is a promising intervention for protecting women from sexually acquiring human immunodeficiency virus (HIV). However, the potential for emergence of tenofovir resistance remains a concern in women who seroconvert while using the gel despite the lack of plasma virus resistance as assessed by population sequencing during the trial. We applied highly sensitive polymerase chain reaction-based assays to screen for tenofovir resistance in plasma and vaginal swab specimens. The absence of mutation detection suggested little immediate risk of tenofovir-resistant HIV-1 emergence and forward transmission in settings in which gel users are closely monitored for HIV seroconversion.
Towards a theory of natural language interfaces to databases
The need for Natural Language Interfaces (NLIs) to databases has become increasingly acute as more nontechnical people access information through their web browsers, PDAs and cell phones. Yet NLIs are only usable if they map natural language questions to SQL queries correctly. We introduce the Precise NLI [2], which reduces the semantic interpretation challenge in NLIs to a graph matching problem. Precise uses the max-flow algorithm to efficiently solve this problem. Each max-flow solution corresponds to a possible semantic interpretation of the sentence. Precise collects max-flow solutions, discards the solutions that do not obey syntactic constraints and retains the rest as the basis for generating SQL queries corresponding to the question q. The syntactic information is extracted from the parse tree corresponding to the given question, which is computed by a statistical parser [1]. For a broad, well-defined class of semantically tractable natural language questions, Precise is guaranteed to map each question to the corresponding SQL query. Semantically tractable questions correspond to a natural, domain-independent subset of English that can be efficiently and accurately interpreted as nonrecursive Datalog clauses. Precise is transportable to arbitrary databases, such as the Restaurants, Jobs and Geography databases used in our implementation. Examples of semantically tractable questions include: "What Chinese restaurants with a 3.5 rating are in Seattle?", "What are the areas of US states with large populations?", "What jobs require 4 years of experience and desire a B.S. CS degree?". Given a question which is not semantically tractable, Precise recognizes it as such and informs the user that it cannot answer it. Given a semantically tractable question, Precise computes the set of non-equivalent SQL interpretations corresponding to the question. If a unique such SQL interpretation exists, Precise outputs it together with the corresponding result set obtained by querying the current database. If the set contains more than one SQL interpretation, the natural language question is ambiguous in the context of the current database. In this case, Precise asks for the user's help in determining which interpretation is the correct one. Our experiments have shown that Precise has high coverage and accuracy over common English questions. In future work, we plan to explore increasingly broad classes of questions and include Precise as a module in a full-fledged dialog system. An important direction for future work is helping users understand the types of questions Precise cannot handle via dialog, enabling them to build an accurate mental model of the system and its capabilities. Also, our own group's work on the EXACT natural language interface [3] builds on Precise and on the underlying theoretical framework. EXACT composes an extended version of Precise with a sound and complete planner to develop a powerful and provably reliable interface to household appliances.
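A toy sketch of the reduction described above: semantic interpretation as a max-flow problem, with question tokens on one side and database elements on the other. The graph contents below are invented for illustration; the real system's graph construction is far richer.

```python
import networkx as nx

# Hypothetical compatibility graph: each question token must push one unit of
# flow through a database element it can match. A max flow equal to the number
# of tokens corresponds to a complete semantic interpretation.
G = nx.DiGraph()
tokens = {"chinese": ["cuisine=Chinese"],
          "restaurants": ["table:Restaurant"],
          "seattle": ["city=Seattle"]}
for tok, elems in tokens.items():
    G.add_edge("s", tok, capacity=1)
    for e in elems:
        G.add_edge(tok, e, capacity=1)
        G.add_edge(e, "t", capacity=1)

flow_value, flow = nx.maximum_flow(G, "s", "t")
print("complete interpretation:", flow_value == len(tokens))
for tok in tokens:
    # The saturated edge out of each token is its chosen interpretation.
    print(tok, "->", [e for e, f in flow[tok].items() if f > 0])
```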
MedRec: Using Blockchain for Medical Data Access and Permission Management
Years of heavy regulation and bureaucratic inefficiency have slowed innovation for electronic medical records (EMRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EMRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing- crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this short paper is to expose, prior to field tests, a working prototype through which we analyze and discuss our approach.
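To make the "comprehensive, immutable log" idea concrete, here is a generic hash-chained record sketch in plain Python; this is an illustration of the underlying blockchain property, not MedRec's actual Ethereum-based smart contracts.

```python
import hashlib, json, time

def make_block(prev_hash: str, record: dict) -> dict:
    """Append-only log entry: altering any block breaks every later hash link."""
    body = {"prev_hash": prev_hash, "timestamp": time.time(), "record": record}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

# Hypothetical permission grants recorded for patient "p1".
chain = [make_block("0" * 64, {"patient": "p1", "grant": "provider-A:read"})]
chain.append(make_block(chain[-1]["hash"],
                        {"patient": "p1", "grant": "researcher-B:aggregate"}))

def verify(chain) -> bool:
    """Recompute every hash and check the chain linkage."""
    for i, blk in enumerate(chain):
        expected = dict(blk)
        expected.pop("hash")
        h = hashlib.sha256(
            json.dumps(expected, sort_keys=True).encode()).hexdigest()
        if h != blk["hash"]:
            return False
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))  # True until any earlier block is modified
```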
MAX-phase coatings produced by thermal spraying
This paper presents a comparative study on the Ti2AlC coatings produced by different thermal spray methods, as Ti2AlC is one of the most studied materials from the MAX-phase family. Microstructural analysis of coatings produced by High Velocity Air Fuel (HVAF), Cold Spray and High Velocity Oxygen Fuel (HVOF) has been carried out by means of scanning electron microscopy equipped with an energy dispersive spectrometer (EDS). The volume fraction of porosity was determined using the ASTM standard E562. The phase characterization of the as-received powder and as-sprayed coatings was conducted using X-ray diffraction with CrKα radiation. The impact of the spray parameters on the porosity and mechanical properties of the coatings is discussed. The results show that the spraying temperature and velocity play a crucial role in the coatings' characteristics.
An integrated design and fabrication strategy for entirely soft, autonomous robots
Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.
Getting by with a little help from self and others: self-esteem and social support as resources during early adolescence.
Influences of social support and self-esteem on adjustment in early adolescence were investigated in a 2-year longitudinal study (N = 350). Multi-informant data (youth and parent) were used to assess both overall levels and balance in peer- versus adult-oriented sources for social support and self-esteem. Findings obtained using latent growth-curve modeling were consistent with self-esteem mediating effects of social support on both emotional and behavioral adjustment. Lack of balance in social support and self-esteem in the direction of stronger support and esteem from peer-oriented sources predicted greater levels and rates of growth in behavioral problems. Results indicate a need for process-oriented models of social support and self-esteem and sensitivity to patterning of sources for each resource relative to adaptive demands of early adolescence.
Development of a novel direct-drive tubular linear brushless permanent-magnet motor
This paper presents a novel design for a tubular linear brushless permanent-magnet motor. In this design, the magnets in the moving part are oriented in an NS-NS—SN-SN fashion which leads to higher magnetic force near the like-pole region. An analytical methodology to calculate the motor force and to size the actuator was developed. The linear motor is operated in conjunction with a position sensor, three power amplifiers, and a controller to form a complete solution for controlled precision actuation. Real-time digital controllers enhanced the dynamic performance of the motor, and gain scheduling reduced the effects of a nonlinear dead band. In its current state, the motor has a rise time of 30 ms, a settling time of 60 ms, and 25% overshoot to a 5-mm step command. The motor has a maximum speed of 1.5 m/s and acceleration up to 10 g. It has a 10-cm travel range and 26-N maximum pull-out force. The compact size of the motor suggests it could be used in robotic applications requiring moderate force and precision, such as robotic-gripper positioning or actuation. The moving part of the motor can extend significantly beyond its fixed support base. This reaching ability makes it useful in applications requiring a small, direct-drive actuator, which is required to extend into a spatially constrained environment.
Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors
Extracting latent low-dimensional structure from high-dimensional data is of paramount importance in timely inference tasks encountered with “Big Data” analytics. However, increasingly noisy, heterogeneous, and incomplete datasets, as well as the need for real-time processing of streaming data, pose major challenges to this end. In this context, the present paper permeates benefits from rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibly multi-way) structure from incomplete streaming data. For low-rank matrix data, a subspace estimator is proposed based on an exponentially weighted least-squares criterion regularized with the nuclear norm. After recasting the nonseparable nuclear norm into a form amenable to online optimization, real-time algorithms with complementary strengths are developed, and their convergence is established under simplifying technical assumptions. In a stationary setting, the asymptotic estimates obtained offer the well-documented performance guarantees of the batch nuclear-norm regularized estimator. Under the same unifying framework, a novel online (adaptive) algorithm is developed to obtain multi-way decompositions of low-rank tensors with missing entries and perform imputation as a byproduct. Simulated tests with both synthetic as well as real Internet and cardiac magnetic resonance imagery (MRI) data confirm the efficacy of the proposed algorithms, and their superior performance relative to state-of-the-art alternatives.
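A heavily simplified sketch of online subspace tracking with missing entries, in the spirit of (but not identical to) the paper's first-order online algorithms; the step size, rank, and update rule are illustrative assumptions.

```python
import numpy as np

def online_subspace_track(stream, rank=5, step=0.1, dim=100, rng=None):
    """Track a low-dimensional subspace from streaming, partially observed vectors.

    stream yields (y, mask): y is a dim-vector, mask a boolean array of observed
    entries. Returns the tracked orthonormal basis U; at each step the imputed
    vector is U @ w.
    """
    rng = rng or np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((dim, rank)))
    for y, mask in stream:
        Uo = U[mask]                       # rows of U at observed coordinates
        w, *_ = np.linalg.lstsq(Uo, y[mask], rcond=None)
        resid = np.zeros(dim)
        resid[mask] = y[mask] - Uo @ w     # residual on observed entries only
        U += step * np.outer(resid, w)     # rank-one gradient-style update
        U, _ = np.linalg.qr(U)             # re-orthonormalize the basis
    return U

# Example: vectors from a 3-dimensional subspace, 60% of entries observed.
rng = np.random.default_rng(1)
true_U = np.linalg.qr(rng.standard_normal((100, 3)))[0]
stream = ((true_U @ rng.standard_normal(3), rng.random(100) < 0.6)
          for _ in range(500))
U = online_subspace_track(stream, rank=3)
```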
Management of adolescents with very poorly controlled type 1 diabetes by nurses: a parallel group randomized controlled trial
BACKGROUND Fluctuations in glycemia due to hormonal changes, growth periods, physical activity, and emotions make diabetes management difficult during adolescence. Our objective was to show that close monitoring of patients' diabetes self-management through nurse counseling could improve metabolic control in adolescents with type 1 diabetes. METHODS We designed a multicenter, randomized, controlled, parallel-group clinical trial. Seventy-seven adolescents aged 12-17 years with A1C >8% were assigned to either an intervention group (pediatrician visit every 3 months + nurse visit and phone calls) or to the control group (pediatrician visit every 3 months). The primary outcome was the evolution of the rate of A1C during the 12 months of follow-up. Secondary outcomes included the patient's acceptance of the disease (evaluated by a visual analog scale), the number of hypoglycemic or ketoacidosis episodes requiring hospitalization, and evaluation of the A1C rate over time in each group. RESULTS Seventy-seven patients were enrolled by 10 clinical centers. Seventy (89.6%) completed the study; the evolution of A1C and participants' satisfaction over the follow-up period were not significantly influenced by the nurse intervention. CONCLUSION The nurse-led intervention to improve A1C did not show a significant benefit in adolescents with type 1 diabetes because of a lack of power. Only psychological management and continuous glucose monitoring have shown, so far, a slight but significant benefit on A1C. We did not show improvements in A1C control in teenagers by nurse-led intervention. TRIAL REGISTRATION ClinicalTrials.gov registration number: NCT00308256, 28 March 2006.
TWO-WAY COMMUNICATION CHANNELS
input at terminal 2 and y2 the corresponding output. Once each second, say, new inputs x1 and x2 may be chosen from corresponding input alphabets and put into the channel; outputs y1 and y2 may then be observed. These outputs will be related statistically to the inputs and perhaps historically to previous inputs and outputs if the channel has memory. The problem is to communicate in both directions through the channel as effectively as possible. Particularly, we wish to determine what pairs of signalling rates R1 and R2 for the two directions can be approached with arbitrarily small error probabilities.
Cold-Start Recommendation with Provable Guarantees: A Decoupled Approach
Although the matrix completion paradigm provides an appealing solution to the collaborative filtering problem in recommendation systems, some major issues, such as data sparsity and cold-start problems, still remain open. In particular, when the rating data for a subset of users or items is entirely missing, commonly known as the cold-start problem, the standard matrix completion methods are inapplicable due to the non-uniform sampling of available ratings. In recent years, there has been considerable interest in dealing with cold-start users or items that are principally based on the idea of exploiting other sources of information to compensate for this lack of rating data. In this paper, we propose a novel and general algorithmic framework based on matrix completion that simultaneously exploits the similarity information among users and items to alleviate the cold-start problem. In contrast to existing methods, our proposed recommender algorithm, dubbed DecRec, decouples the following two aspects of the cold-start problem to effectively exploit the side information: (i) the completion of a rating sub-matrix, which is generated by excluding cold-start users/items from the original rating matrix; and (ii) the transduction of knowledge from existing ratings to cold-start items/users using side information. This crucial difference prevents the error propagation of completion and transduction, and also significantly boosts the performance when appropriate side information is incorporated. The recovery error of the proposed algorithm is analyzed theoretically and, to the best of our knowledge, this is the first algorithm that addresses the cold-start problem with provable guarantees on performance. Additionally, we also address the problem where both cold-start user and item challenges are present simultaneously. We conduct thorough experiments on real datasets that complement our theoretical results. These experiments demonstrate the effectiveness of the proposed algorithm in handling the cold-start users/items problem and mitigating the data sparsity issue.
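A stripped-down sketch of the decoupling described above, under loud assumptions: a crude iterative-SVD imputation stands in for the paper's matrix-completion step, and cosine similarity on side features stands in for its transduction step. None of this is DecRec's actual algorithm.

```python
import numpy as np

def complete_submatrix(R: np.ndarray, rank: int = 5, iters: int = 50) -> np.ndarray:
    """Naive low-rank completion of the warm-user/warm-item block:
    alternate between truncated SVD and re-imposing the observed entries."""
    mask = ~np.isnan(R)
    X = np.where(mask, R, np.nanmean(R))
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = R[mask]                 # keep observed ratings fixed
    return X

def transduce_cold_items(X: np.ndarray, side_warm, side_cold, k: int = 3):
    """Predict columns for cold-start items as similarity-weighted combinations
    of the completed warm-item columns (cosine similarity on side features)."""
    def cos(A, B):
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return A @ B.T
    S = cos(side_cold, side_warm)         # (n_cold_items, n_warm_items)
    top = np.argsort(-S, axis=1)[:, :k]   # k most similar warm items each
    preds = np.stack([
        X[:, idx] @ (S[i, idx] / S[i, idx].sum()) for i, idx in enumerate(top)])
    return preds.T                        # (n_users, n_cold_items)
```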
The Fagerström Test for Nicotine Dependence-Smokeless Tobacco (FTND-ST).
Few nicotine dependence measures have been developed for smokeless tobacco (ST) users. Existing measures are limited by the requirement to rate the nicotine content of ST brands, for which data are scarce or non-existent. We modified the Fagerström Test for Nicotine Dependence (FTND) for ST users, referring to this scale as the FTND-ST, and evaluated its characteristics in a population of 42 ST users. The correlation between the FTND-ST total score and the serum cotinine concentrations was 0.53 (p<0.001). Internal consistency reliability assessed using the coefficient alpha was 0.47. Correlations and the coefficient alpha are similar to those reported for commonly used nicotine dependence measures. Development and refinement of nicotine dependence measures for ST users are essential steps in order to advance the field of ST research.
An Adaptive-Gain Complementary Filter for Real-Time Human Motion Tracking With MARG Sensors in Free-Living Environments
High-resolution, real-time data obtained by human motion tracking systems can be used for gait analysis, which helps in better understanding the causes of many diseases and supports more effective treatments, such as rehabilitation for outpatients or recovery of lost motor functions after a stroke. In order to achieve real-time ambulatory human motion tracking with low-cost MARG (magnetic, angular rate, and gravity) sensors, a computationally efficient and robust algorithm for orientation estimation is critical. This paper presents an analytically derived method for an adaptive-gain complementary filter based on the convergence rate from the Gauss-Newton optimization algorithm (GNA) and the divergence rate from the gyroscope, referred to in this paper as the adaptive-gain orientation filter (AGOF). The AGOF has the advantages of a single-iteration calculation, which reduces the computing load, and an accurate estimation of the gyroscope measurement error. Moreover, for handling magnetic distortions, especially in indoor environments and during movements with excessive acceleration, selection schemes for adaptive measurement vectors and a reference vector for the earth's magnetic field are introduced to help the GNA find a more accurate direction of the gyroscope error. The features of this approach include accurate estimation of the gyroscope bias to correct the instantaneous gyroscope measurements and robust estimation under conditions of fast motion and magnetic distortion. Experimental results are presented to verify the performance of the proposed method, which shows better accuracy of orientation estimation than several well-known methods.
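For orientation, here is a compact sketch of the generic fixed-gain complementary filter that an adaptive-gain variant builds on. The quaternion convention and gain value are illustrative, and the Gauss-Newton accelerometer/magnetometer step is replaced by a simple accelerometer-only tilt correction, so this is a baseline, not the AGOF itself.

```python
import numpy as np

def quat_mult(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def complementary_step(q, gyro, accel, dt, gain=0.02):
    """One step of a fixed-gain complementary filter.

    q: unit quaternion (w, x, y, z), body-to-earth rotation
    gyro: angular rates (rad/s); accel: accelerometer reading (body frame)
    """
    # Gyro propagation: q_dot = 0.5 * q * (0, wx, wy, wz)
    q = q + 0.5 * dt * quat_mult(q, np.concatenate(([0.0], gyro)))
    q /= np.linalg.norm(q)
    # Gravity direction predicted by the current orientation (body frame).
    w, x, y, z = q
    g_pred = np.array([2*(x*z - w*y), 2*(w*x + y*z), w*w - x*x - y*y + z*z])
    g_meas = accel / np.linalg.norm(accel)
    # Small body-frame rotation nudging g_pred toward g_meas, weighted by the
    # gain; this is the weight an adaptive-gain filter would modulate online.
    axis = np.cross(g_pred, g_meas)
    corr = np.concatenate(([1.0], -0.5 * gain * axis))
    q = quat_mult(q, corr)
    return q / np.linalg.norm(q)
```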