title | abstract |
---|---|
OS-level Virtualization and Its Applications | Abstract of the dissertation "OS-level Virtualization and Its Applications". |
A New Model for Stock Price Movements Prediction Using Deep Neural Network | In this paper, we introduce a new prediction model based on a Bidirectional Gated Recurrent Unit (BGRU). Our predictive model relies on both online financial news and historical stock price data to predict future stock movements. Experimental results show that our model achieves nearly 60% accuracy on S&P 500 index prediction, while accuracy on individual stock prediction exceeds 65%. |
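A minimal sketch of the kind of bidirectional GRU classifier the abstract describes, assuming PyTorch; the input dimension, window length and classification head are illustrative stand-ins, since the paper's exact architecture and feature encoding are not given here:

```python
import torch
import torch.nn as nn

class BGRUPredictor(nn.Module):
    """Bidirectional GRU over a window of daily feature vectors
    (e.g. price features concatenated with news embeddings)."""
    def __init__(self, input_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.bgru = nn.GRU(input_dim, hidden_dim,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, num_classes)  # up/down movement

    def forward(self, x):              # x: (batch, seq_len, input_dim)
        out, _ = self.bgru(x)          # (batch, seq_len, 2*hidden_dim)
        return self.head(out[:, -1])   # classify from the final time step

model = BGRUPredictor()
logits = model(torch.randn(8, 30, 128))  # 8 samples, 30-day windows
```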
A clinical comparison of pneumatic compression devices: the basis for selection. | PURPOSE
The five pneumatic compression devices (PCDs) that are marketed provide mechanical protection from deep venous thrombosis (DVT). They differ with respect to patterns of compression and the length of the sleeve. Evidence linking differences to clinical outcomes is lacking. Our purpose was twofold: to evaluate each of the marketed PCDs with respect to effectiveness, compliance, and patient and nursing satisfaction and to determine whether there is a clinical basis for the selection of one device over another.
METHODS
Each of the marketed devices was used exclusively for a 4-week period. Patients participated in an evaluation including venous duplex ultrasound scan, DVT risk assessment, and device evaluation. Vascular laboratory records were used to document DVT. Compliance was measured by meters installed on all pumps. A ranking matrix was stratified by compression pattern: rapid graduated sequential compression, graduated compression, and intermittent compression, and each device was rated by patients and nurses.
RESULTS
The PCDs were used in 1350 cases with a DVT rate of 3.5%, ranging from 2% to 9.8% depending on the method of compression. Patients with DVT were older (58 vs 54 years), had better compliance (67% vs 50%), and had more compression days (11 vs 7.2). When thigh-length sleeves were used, a greater proportion of DVT occurred in the proximal segments (71%) as compared with the proportion of proximal DVT when the calf-length devices were used (52%; P = .21). Devices W, X, and Y had comparable rates of DVT, which were lower than those for V and Z. Compression device W, with calf and thigh sleeves, achieved the best overall ranking, largely because of high scores for patient and nurse satisfaction.
CONCLUSION
Our data appear at odds with commonly held beliefs. We were unable to show a difference in DVT incidence based on the length of the device or the method of compression. Randomized studies are needed to confirm our findings and evaluate hypotheses derived from this study. |
Sieve: Cryptographically Enforced Access Control for User Data in Untrusted Clouds | Modern web services rob users of low-level control over cloud storage; a user’s single logical data set is scattered across multiple storage silos whose access controls are set by the web services, not users. The result is that users lack the ultimate authority to determine how their data is shared with other web services. In this thesis, we introduce Sieve, a new architecture for selectively exposing user data to third party web services in a provably secure manner. Sieve starts with a user-centric storage model: each user uploads encrypted data to a single cloud store, and by default, only the user knows the decryption keys. Given this storage model, Sieve defines an infrastructure to support rich, legacy web applications. Using attribute-based encryption, Sieve allows users to define intuitive, understandable access policies that are cryptographically enforceable. Using key homomorphism, Sieve can re-encrypt user data on storage providers in situ, revoking decryption keys from web services without revealing new ones to the storage provider. Using secret sharing and two-factor authentication, Sieve protects against the loss of user devices like smartphones and laptops. The result is that users can enjoy rich, legacy web applications, while benefiting from cryptographically strong controls over what data the services can access. |
Thankful for the little things: A meta-analysis of gratitude interventions. | A recent qualitative review by Wood, Froh, and Geraghty (2010) cast doubt on the efficacy of gratitude interventions, suggesting the need to carefully attend to the quality of comparison groups. Accordingly, in a series of meta-analyses, we evaluate the efficacy of gratitude interventions (ks = 4-18; Ns = 395-1,755) relative to a measurement-only control or an alternative-activity condition across 3 outcomes (i.e., gratitude, anxiety, psychological well-being). Gratitude interventions outperformed a measurement-only control on measures of psychological well-being (d = .31, 95% CI [.04, .58]; k = 5) but not gratitude (d = .20, 95% CI [-.04, .44]; k = 4). Gratitude interventions outperformed an alternative-activity condition on measures of gratitude (d = .46, 95% CI [.27, .64]; k = 15) and psychological well-being (d = .17, 95% CI [.09, .24]; k = 20) but not anxiety (d = .11, 95% CI [-.08, .31]; k = 5). More-detailed subdivision was possible for studies with outcomes assessing psychological well-being. Among these, gratitude interventions outperformed an activity-matched comparison (d = .14, 95% CI [.01, .27]; k = 18). Gratitude interventions performed as well as, but not better than, a psychologically active comparison (d = -.03, 95% CI [-.13, .07]; k = 9). On the basis of these findings, we summarize the current state of the literature and make suggestions for future applied research on gratitude. |
Fast Decoding in Sequence Models using Discrete Latent Variables | Autoregressive sequence models based on deep neural networks, such as RNNs, Wavenet and the Transformer attain state-of-the-art results on many tasks. However, they are difficult to parallelize and are thus slow at processing long sequences. RNNs lack parallelism both during training and decoding, while architectures like WaveNet and Transformer are much more parallelizable during training, yet still operate sequentially during decoding. We present a method to extend sequence models using discrete latent variables that makes decoding much more parallelizable. We first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. To this end, we introduce a novel method for constructing a sequence of discrete latent variables and compare it with previously introduced methods. Finally, we evaluate our model end-to-end on the task of neural machine translation, where it is an order of magnitude faster at decoding than comparable autoregressive models. While lower in BLEU than purely autoregressive models, our model achieves higher scores than previously proposed non-autoregressive translation models. |
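The key ingredient such models share is a discretization bottleneck that turns a compressed continuous encoding of the target into a short sequence of codebook indices. A minimal numpy sketch of nearest-neighbor vector quantization, one of the previously introduced discretization techniques this line of work compares against (the paper's own construction differs; all dimensions here are illustrative):

```python
import numpy as np

def quantize(encodings, codebook):
    """Map each continuous vector to its nearest codebook entry.

    encodings: (latent_len, d) compressed encoding of the target sequence
    codebook:  (K, d) learned table of K discrete latent embeddings
    """
    # squared distance between every encoding and every code
    d2 = ((encodings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    ids = d2.argmin(axis=1)        # the discrete latent sequence
    return ids, codebook[ids]      # indices plus their quantized vectors

rng = np.random.default_rng(0)
ids, z_q = quantize(rng.normal(size=(16, 32)), rng.normal(size=(512, 32)))
```

At inference time an autoregressive prior generates `ids` one by one (a sequence several times shorter than the output), and the decoder then maps the quantized vectors back to the full-length output in parallel.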
The "Grammar Correction" Debate in L2 Writing: Where are we, and where do we go from here? (and what do we do in the meantime...?) | The efficacy of teacher error/grammar correction in second language writing classes has been the subject of much controversy, including a published debate in an earlier volume of this journal [J. Second Language Writing 8 (1999) 1; J. Second Language Writing 8 (1999) 111]. In this paper, the state-of-the-art in error correction research in L2 writing is described ("Where are we?"), directions for future research are outlined ("Where do we go from here?") and implications for current L2 composition pedagogy are suggested ("What do we do in the meantime?"). The primary thesis of the paper is that, despite the published debate and several decades of research activity in this area, we are virtually at Square One, as the existing research base is incomplete and inconsistent, and it would certainly be premature to formulate any conclusions about this topic. Thus, findings from previous research on this controversial yet ubiquitous pedagogical issue are recast as "predictions" about what future research might discover, rather than "conclusions" about what the previous research shows us. © 2004 Elsevier Inc. All rights reserved. |
Efficient and Robust Question Answering from Minimal Context over Documents | |
Protocol adherence and the progression of cardiovascular calcification in the ADVANCE study. | BACKGROUND
The ADVANCE study assessed the progression of vascular and cardiac valve calcification in 360 hemodialysis patients with secondary hyperparathyroidism (sHPT) assigned randomly to treatment either with cinacalcet plus low-dose vitamin D (≤ 6 µg/week of intravenous paricalcitol equivalent) or with varying doses of vitamin D alone for 52 weeks. The primary efficacy endpoint was progression of coronary artery calcification (CAC).
METHODS
In this post-hoc analysis, we compared CAC progression among 70 protocol-adherent subjects given cinacalcet and low doses of vitamin D (CPA) as specified in the study protocol and 120 control subjects given vitamin D sterols.
RESULTS
Baseline patient characteristics did not differ between CPA and control subjects. The mean (standard error of the mean, SEM) doses of vitamin D at week 2 were 4.7 (0.3) and 12.8 (1.0) µg/week in CPA and control subjects, respectively, and the corresponding mean cumulative doses of vitamin D over 52 weeks in each group were 225 (22) and 671 (47) µg. The median change in Agatston CAC score after 52 weeks was less in CPA subjects than in controls (17.8% versus 31.3%, P = 0.02). The median increase in calcification scores in the aortic valve also was less in CPA subjects than in controls (6.0% versus 51.5%, P = 0.02). Reductions in serum parathyroid hormone, calcium and phosphorus levels were significantly greater in CPA subjects than in controls (P < 0.05).
CONCLUSIONS
The progression of cardiovascular calcification was attenuated among cinacalcet-treated subjects with sHPT given low doses of vitamin D per protocol compared with control subjects in whom sHPT was treated with higher doses of vitamin D sterols alone. |
Predicting protein structures with a multiplayer online game | People exert large amounts of problem-solving effort playing computer games. Simple image- and text-recognition tasks have been successfully ‘crowd-sourced’ through games, but it is not clear if more complex scientific problems can be solved with human-directed computing. Protein structure prediction is one such problem: locating the biologically relevant native conformation of a protein is a formidable computational challenge given the very large size of the search space. Here we describe Foldit, a multiplayer online game that engages non-scientists in solving hard prediction problems. Foldit players interact with protein structures using direct manipulation tools and user-friendly versions of algorithms from the Rosetta structure prediction methodology, while they compete and collaborate to optimize the computed energy. We show that top-ranked Foldit players excel at solving challenging structure refinement problems in which substantial backbone rearrangements are necessary to achieve the burial of hydrophobic residues. Players working collaboratively develop a rich assortment of new strategies and algorithms; unlike computational approaches, they explore not only the conformational space but also the space of possible search strategies. The integration of human visual problem-solving and strategy development capabilities with traditional computational algorithms through interactive multiplayer games is a powerful new approach to solving computationally-limited scientific problems. |
Public Skepticism of Psychology: Why Many People Perceive the Study of Human Behavior as Unscientific | Data indicate that large percentages of the general public regard psychology’s scientific status with considerable skepticism. I examine 6 criticisms commonly directed at the scientific basis of psychology (e.g., psychology is merely common sense, psychology does not use scientific methods, psychology is not useful to society) and offer 6 rebuttals. I then address 8 potential sources of public skepticism toward psychology and argue that although some of these sources reflect cognitive errors (e.g., hindsight bias) or misunderstandings of psychological science (e.g., failure to distinguish basic from applied research), others (e.g., psychology’s failure to police itself, psychology’s problematic public face) reflect the failure of professional psychology to get its own house in order. I offer several individual and institutional recommendations for enhancing psychology’s image and contend that public skepticism toward psychology may, paradoxically, be one of our field’s strongest allies. |
Leveraging User Interaction Signals for Web Image Search | User interfaces for web image search engine results differ significantly from interfaces for traditional (text) web search results, supporting a richer interaction. In particular, users can see an enlarged image preview by hovering over a result image, and an 'image preview' page allows users to browse further enlarged versions of the results, and to click through to the referral page where the image is embedded. No existing work investigates the utility of these interactions as implicit relevance feedback for improving search ranking, beyond using clicks on images displayed in the search results page. In this paper we propose a number of implicit relevance feedback features based on these additional interactions: hover-through rate, 'converted-hover' rate, referral page click-through, and a number of dwell time features. Also, since images are never self-contained, but always embedded in a referral page, we posit that clicks on other images that are embedded on the same referral webpage as a given image can carry useful relevance information about that image. We also posit that query-independent versions of implicit feedback features, while not expected to capture topical relevance, will carry feedback about the quality or attractiveness of images, an important dimension of relevance for web image search. In an extensive set of ranking experiments in a learning to rank framework, using a large annotated corpus, the proposed features give statistically significant gains of over 2% compared to a state-of-the-art baseline that uses standard click features. |
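A sketch of how a few of these signals could be aggregated from a raw interaction log, assuming pandas and a hypothetical log schema (the column names are invented for illustration; the paper's exact feature definitions are not reproduced):

```python
import pandas as pd

# Hypothetical log: one row per (query, image) impression.
log = pd.DataFrame({
    "image_id":  [1, 1, 2, 2, 2],
    "hovered":   [1, 0, 1, 1, 0],
    "previewed": [1, 0, 0, 1, 0],  # opened the image-preview page
    "ref_click": [0, 0, 0, 1, 0],  # clicked through to the referral page
})

g = log.groupby("image_id")
features = pd.DataFrame({
    "hover_through_rate":   g["hovered"].mean(),
    "converted_hover_rate": g["previewed"].sum() / g["hovered"].sum(),
    "referral_ctr":         g["ref_click"].sum() / g["previewed"].sum(),
})
print(features)
```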
Composing Music by Composing Rules: Design and Usage of a Generic Music Constraint System | This research presents the design, usage, and evaluation of a highly generic music constraint system called Strasheela. Strasheela simplifies the definition of musical constraint satisfaction problems (CSP) by predefining building blocks required for such problems. At the same time, Strasheela preserves a high degree of generality and is reasonably efficient. Strasheela is highly generic, because it is highly programmable. In particular, three fundamental components are more programmable in Strasheela compared with existing music constraint systems: the music representation, the rule application mechanism, and the search process. Strasheela features an expressive symbolic music representation. This representation supports explicitly storing score information by sets of score objects, their attributes, and their hierarchic nesting. Any information available in the score is accessible from any object in the score and can be used to obtain derived information. The representation is complemented by the notion of variables: score information can be unknown and such information can be constrained. This research proposes a rule formalism which combines convenience and full user control to express which score variable sets are constrained by a given rule. A rule is a first-class function. A rule application mechanism is a higher-order function. A rule application mechanism traverses the score in order to apply a given rule to variable sets. This text presents rule application mechanisms suitable for a large set of musical CSPs and reproduces important mechanisms of existing systems. Strasheela is founded on a constraint programming model which makes the search process programmable at a high-level. The Strasheela user can optimise the search for a particular constraint satisfaction problem by programming a distribution strategy (a dynamic variable and value ordering) independent of the problem definition. Special distribution strategies for efficiently solving various musical CSPs – including complex polyphonic problems – are presented. |
The Randomized Dependence Coefficient | We introduce the Randomized Dependence Coefficient (RDC), a measure of nonlinear dependence between random variables of arbitrary dimension based on the Hirschfeld-Gebelein-Rényi Maximum Correlation Coefficient. RDC is defined in terms of correlation of random non-linear copula projections; it is invariant with respect to marginal distribution transformations, has low computational cost and is easy to implement: just five lines of R code, included at the end of the paper. |
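The published snippet is in R; below is a Python transcription of the computation as the abstract describes it (rank to copula, random sine projections, largest canonical correlation). The parameter defaults follow common choices and the final clipping is a numerical safeguard of our own:

```python
import numpy as np
from scipy.stats import rankdata

def rdc(x, y, k=20, s=1/6, seed=0):
    """Randomized Dependence Coefficient for two 1-D samples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # empirical copula transforms, plus a constant column for the bias term
    cx = np.column_stack([rankdata(x) / n, np.ones(n)])
    cy = np.column_stack([rankdata(y) / n, np.ones(n)])
    # random non-linear (sine) projections of the copulas
    fx = np.sin(cx @ rng.normal(0, s, (2, k)))
    fy = np.sin(cy @ rng.normal(0, s, (2, k)))
    # largest canonical correlation between the two projected sets
    C = np.cov(np.hstack([fx, fy]), rowvar=False)
    Cxx, Cyy, Cxy, Cyx = C[:k, :k], C[k:, k:], C[:k, k:], C[k:, :k]
    eigs = np.linalg.eigvals(np.linalg.pinv(Cxx) @ Cxy @
                             np.linalg.pinv(Cyy) @ Cyx)
    return float(np.sqrt(np.clip(np.max(np.real(eigs)), 0, 1)))

x = np.random.default_rng(1).normal(size=500)
print(rdc(x, x ** 2))  # strong non-linear dependence, near 1
```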
Is Jacksonian Democracy Still Valid? | Recent historiography of the "Age of Jackson" has focused mainly on social and cultural developments; so-called social history and its methods dominate research output in this area. But political history, as a traditional core field, has not lost its importance; indeed, the case for maintaining it among the fields of historical research is stronger than ever. Even though recent social-history findings hold the limelight, the more we learn about these new social facts of the period, the more we need to understand, and are obliged to study, central high politics. All these diverse historical facts are bound up with high politics, and deeply interconnected central political figures such as Andrew Jackson, along with their political programs, matter more than minor findings from social and cultural approaches. For these reasons, traditional political history should be revived. |
Putting education in "educational" apps: lessons from the science of learning. | Children are in the midst of a vast, unplanned experiment, surrounded by digital technologies that were not available but 5 years ago. At the apex of this boom is the introduction of applications ("apps") for tablets and smartphones. However, there is simply not the time, money, or resources available to evaluate each app as it enters the market. Thus, "educational" apps-the number of which, as of January 2015, stood at 80,000 in Apple's App Store (Apple, 2015)-are largely unregulated and untested. This article offers a way to define the potential educational impact of current and future apps. We build upon decades of work on the Science of Learning, which has examined how children learn best. From this work, we abstract a set of principles for two ultimate goals. First, we aim to guide researchers, educators, and designers in evidence-based app development. Second, by creating an evidence-based guide, we hope to set a new standard for evaluating and selecting the most effective existing children's apps. In short, we will show how the design and use of educational apps aligns with known processes of children's learning and development and offer a framework that can be used by parents and designers alike. Apps designed to promote active, engaged, meaningful, and socially interactive learning-four "pillars" of learning-within the context of a supported learning goal are considered educational. |
The Many Shades of Anonymity: Characterizing Anonymous Social Media Content | Recently, there has been a significant increase in the popularity of anonymous social media sites like Whisper and Secret. Unlike traditional social media sites like Facebook and Twitter, posts on anonymous social media sites are not associated with well-defined user identities or profiles. In this study, our goals are two-fold: (i) to understand the nature (sensitivity, types) of content posted on anonymous social media sites and (ii) to investigate the differences between content posted on anonymous and non-anonymous social media sites like Twitter. To this end, we gather and analyze extensive content traces from Whisper (anonymous) and Twitter (non-anonymous) social media sites. We introduce the notion of anonymity sensitivity of a social media post, which captures the extent to which users think the post should be anonymous. We also propose a human annotator based methodology to measure the same for Whisper and Twitter posts. Our analysis reveals that anonymity sensitivity of most whispers (unlike tweets) is not binary. Instead, most whispers exhibit many shades or different levels of anonymity. We also find that the linguistic differences between whispers and tweets are so significant that we could train automated classifiers to distinguish between them with reasonable accuracy. Our findings shed light on human behavior in anonymous media systems that lack the notion of an identity and they have important implications for the future designs of such systems. |
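The classification finding suggests a simple pipeline suffices; a hedged sklearn sketch of such a whisper-vs-tweet text classifier, with toy stand-in posts (the study trains on large crawls of real Whisper and Twitter content):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the two sources.
posts = ["sometimes i feel like nobody knows the real me",
         "excited to announce our new blog post on cloud pricing #tech"]
labels = ["whisper", "tweet"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)
print(clf.predict(["i have a secret i can never tell anyone"]))
```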
Robosemantics: How Stanley the Volkswagen Represents the World | One of the most impressive feats in robotics was the 2005 victory by a driverless Volkswagen Touareg in the DARPA Grand Challenge. This paper discusses what can be learned about the nature of representation from the car’s successful attempt to navigate the world. We review the hardware and software that it uses to interact with its environment, and describe how these techniques enable it to represent the world. We discuss robosemantics, the meaning of computational structures in robots. We argue that the car constitutes a refutation of semantic arguments against the possibility of strong artificial intelligence. |
Semantic Segmentation with Second-Order Pooling | Feature extraction, coding and pooling are important components of many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that, together with appropriate non-linearities, lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster. |
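A minimal numpy/scipy sketch of the core operation, average second-order pooling of the local descriptors in a region, followed by a matrix-logarithm non-linearity of the kind used in this line of work (the descriptor size and the epsilon regularizer are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import logm

def second_order_avg_pool(descriptors, eps=1e-6):
    """descriptors: (N, d) array, one local descriptor per row.
    Returns a flattened log-Euclidean embedding of the region."""
    n, d = descriptors.shape
    G = descriptors.T @ descriptors / n   # (d, d) average of outer products
    G += eps * np.eye(d)                  # keep G symmetric positive definite
    return np.real(logm(G)).ravel()       # tangent-space feature vector

feats = second_order_avg_pool(np.random.rand(200, 32))
```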
Diagnostically relevant facial gestalt information from ordinary photos | Craniofacial characteristics are highly informative for clinical geneticists when diagnosing genetic diseases. As a first step towards the high-throughput diagnosis of ultra-rare developmental diseases we introduce an automatic approach that implements recent developments in computer vision. This algorithm extracts phenotypic information from ordinary non-clinical photographs and, using machine learning, models human facial dysmorphisms in a multidimensional 'Clinical Face Phenotype Space'. The space locates patients in the context of known syndromes and thereby facilitates the generation of diagnostic hypotheses. Consequently, the approach will aid clinicians by greatly narrowing (by 27.6-fold) the search space of potential diagnoses for patients with suspected developmental disorders. Furthermore, this Clinical Face Phenotype Space allows the clustering of patients by phenotype even when no known syndrome diagnosis exists, thereby aiding disease identification. We demonstrate that this approach provides a novel method for inferring causative genetic variants from clinical sequencing data through functional genetic pathway comparisons. DOI: http://dx.doi.org/10.7554/eLife.02020.001 |
Visualizing Media Bias through Twitter | Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behaviors. It is therefore important to track bias in everyday news and to build a platform where people can receive balanced news information. We propose a model that maps news media sources along a dichotomous political spectrum using the co-subscription relationships inferred from Twitter links. By analyzing 7 million follow links, we show that the political dichotomy naturally arises on Twitter when we only consider direct media subscription. Furthermore, we demonstrate a real-time Twitter-based application that visualizes an ideological map of various media sources. |
Feature deduction and ensemble design of intrusion detection systems | Current intrusion detection systems (IDS) examine all data features to detect intrusion or misuse patterns. Some of the features may be redundant or contribute little (if anything) to the detection process. The purpose of this study is to identify important input features in building an IDS that is computationally efficient and effective. We investigated the performance of two feature selection algorithms involving Bayesian networks (BN) and Classification and Regression Trees (CART), and an ensemble of BN and CART. Empirical results indicate that significant input feature selection is important to design an IDS that is lightweight, efficient and effective for real world detection systems. Finally, we propose a hybrid architecture for combining different feature selection algorithms for real world intrusion detection. |
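A sketch of the CART half of the idea, ranking inputs by a trained tree's importances and keeping a lightweight subset, using sklearn on synthetic stand-in data (41 features, as in KDD-style intrusion datasets; the paper's BN-based selection and the ensemble are not reproduced here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an intrusion dataset with 41 input features.
X, y = make_classification(n_samples=2000, n_features=41, random_state=0)

# CART-style ranking: train a tree, keep the most important inputs.
cart = DecisionTreeClassifier(random_state=0).fit(X, y)
top = np.argsort(cart.feature_importances_)[::-1][:12]
X_reduced = X[:, top]  # reduced input set for a lightweight detector
```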
Pegasus, a workflow management system for science automation | Modern science often requires the execution of large-scale, multi-stage simulation and data analysis pipelines to enable the study of complex systems. The amount of computation and data involved in these pipelines requires scalable workflow management systems that are able to reliably and efficiently coordinate and automate data movement and task execution on distributed computational resources: campus clusters, national cyberinfrastructures, and commercial and academic clouds. This paper describes the design, development and evolution of the Pegasus Workflow Management System, which maps abstract workflow descriptions onto distributed computing infrastructures. Pegasus has been used for more than twelve years by scientists in a wide variety of domains, including astronomy, seismology, bioinformatics, physics and others. This paper provides an integrated view of the Pegasus system, showing its capabilities that have been developed over time in response to application needs and to the evolution of the scientific computing platforms. The paper describes how Pegasus achieves reliable, scalable workflow execution across a wide variety of computing infrastructures. |
Procedural learning in Parkinson's disease and cerebellar degeneration. | We compared procedural learning, translation of procedural knowledge into declarative knowledge, and use of declarative knowledge in age-matched normal volunteers (n = 30), patients with Parkinson's disease (n = 20), and patients with cerebellar degeneration (n = 15) by using a serial reaction time task. Patients with Parkinson's disease achieved procedural knowledge and used declarative knowledge of the task to improve performance, but they required a larger number of repetitions of the task to translate procedural knowledge into declarative knowledge. Patients with cerebellar degeneration did not show performance improvement due to procedural learning, failed to achieve declarative knowledge, and showed limited use of declarative knowledge of the task to improve their performance. Both basal ganglia and cerebellum are involved in procedural learning, but their roles are different. The normal influence of the basal ganglia on the prefrontal cortex may be required for timely access of information to and from the working memory buffer, while the cerebellum may index and order events in the time domain and be therefore essential for any cognitive functions involving sequences. |
Partially transferred convolution neural network with cross-layer inheriting for posture recognition from top-view depth camera | This paper proposes a new method for human posture recognition from top-view depth maps on small training datasets. Two strategies are developed to leverage the capability of convolution neural networks (CNN) in mining the fundamental and generic features for recognition. First, the early layers of the CNN serve to extract features without specific representation. By applying the concept of transfer learning, the first few layers from the pre-learned VGG model can be used directly without further fine-tuning. To alleviate the computational load and to increase the accuracy of our partially transferred model, a cross-layer inheriting feature fusion (CLIFF) is proposed, which feeds information from an early layer into the fully connected layer without further processing. The experimental results show that the combination of the partially transferred model and CLIFF provides better performance than the VGG16 [1] model with a re-trained FC layer and other hand-crafted features like RBPs [2]. |
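A hedged PyTorch sketch of the two strategies: freeze the first VGG16 convolution blocks as transferred feature extractors, and fuse a pooled early-layer feature map directly into the classifier alongside the later features. The cut points, pooling and head are illustrative choices, not the paper's exact configuration (assumes a recent torchvision):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PartialVGGWithFusion(nn.Module):
    def __init__(self, num_postures=5):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        self.early = feats[:10]      # first two conv blocks (128 channels out)
        self.later = feats[10:17]    # third conv block (256 channels out)
        for p in self.early.parameters():
            p.requires_grad = False  # transferred as-is, no fine-tuning
        self.pool = nn.AdaptiveAvgPool2d(1)
        # classifier sees early and later features concatenated (CLIFF idea)
        self.fc = nn.Linear(128 + 256, num_postures)

    def forward(self, x):
        e = self.early(x)            # generic low-level features
        l = self.later(e)
        fused = torch.cat([self.pool(e).flatten(1),
                           self.pool(l).flatten(1)], dim=1)
        return self.fc(fused)

logits = PartialVGGWithFusion()(torch.randn(2, 3, 224, 224))
```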
Belowground environmental effects of transgenic crops: a soil microbial perspective. | Experimental studies investigated the effects of transgenic crops on the structure, function and diversity of soil and rhizosphere microbial communities playing key roles in belowground environments. Here we review available data on direct, indirect and pleiotropic effects of engineered plants on soil microbiota, considering both the technology and the genetic construct utilized. Plants modified to express phytopathogen/phytoparasite resistance, or traits beneficial to food industries and consumers, differentially affected soil microorganisms depending on transformation events, experimental conditions and taxa analyzed. Future studies should address the development of harmonized methodologies by taking into account the complex interactions governing soil life. |
Active rehabilitation for chronic low back pain: Cognitive-behavioral, physical, or both? First direct post-treatment results from a randomized controlled trial [ISRCTN22714229] | BACKGROUND
The treatment of non-specific chronic low back pain is often based on three different models regarding the development and maintenance of pain and especially functional limitations: the deconditioning model, the cognitive behavioral model and the biopsychosocial model. There is evidence that rehabilitation of patients with chronic low back pain is more effective than no treatment, but information is lacking about the differential effectiveness of different kinds of rehabilitation. A direct comparison of a physical treatment, a cognitive-behavioral treatment and a combination of both has not previously been carried out.
METHODS
The effectiveness of active physical, cognitive-behavioral and combined treatment for chronic non-specific low back pain compared with a waiting list control group was determined by performing a randomized controlled trial in three rehabilitation centers. Two hundred and twenty-three patients were randomized, using concealed block randomization, to one of the following treatments, which they attended three times a week for 10 weeks: Active Physical Treatment (APT), Cognitive-Behavioral Treatment (CBT), Combined Treatment of APT and CBT (CT), or Waiting List (WL). The outcome variables were self-reported functional limitations, patient's main complaints, pain, mood, self-rated treatment effectiveness, treatment satisfaction and physical performance including walking, standing up, reaching forward, stair climbing and lifting. Assessments were carried out by blinded research assistants at baseline and immediately post-treatment. The data were analyzed using the intention-to-treat principle.
RESULTS
For 212 patients, data were available for analysis. After treatment, significant reductions were observed in functional limitations, patient's main complaints and pain intensity for all three active treatments compared to the WL. Also, the self-rated treatment effectiveness and satisfaction appeared to be higher in the three active treatments. Several physical performance tasks improved in APT and CT but not in CBT. No clinically relevant differences were found between the CT and APT, or between CT and CBT.
CONCLUSION
All three active treatments were effective in comparison to no treatment, but no clinically relevant differences between the combined and the single component treatments were found. |
Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees | Developing classification algorithms that are fair with respect to sensitive attributes of the data is an important problem due to the increased deployment of classification algorithms in societal contexts. Several recent works have focused on studying classification with respect to specific fairness metrics, modeled the corresponding fair classification problem as constrained optimization problems, and developed tailored algorithms to solve them. Despite this, there still remain important metrics for which there are no fair classifiers with theoretical guarantees; primarily because the resulting optimization problem is non-convex. The main contribution of this paper is a meta-algorithm for classification that can take as input a general class of fairness constraints with respect to multiple non-disjoint and multi-valued sensitive attributes, and which comes with provable guarantees. In particular, our algorithm can handle non-convex "linear fractional" constraints (which includes fairness constraints such as predictive parity) for which no prior algorithm was known. Key to our results is an algorithm for a family of classification problems with convex constraints along with a reduction from classification problems with linear fractional constraints to this family. Empirically, we observe that our algorithm is fast, can achieve near-perfect fairness with respect to various fairness metrics, and the loss in accuracy due to the imposed fairness constraints is often small. |
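As a concrete instance of a "linear fractional" constraint, predictive parity asks that precision, P(y=1 | ŷ=1), be equal across sensitive groups; each group's precision is a ratio of two expectations. A small numpy sketch that measures the gap (the meta-algorithm itself, which enforces such constraints during training, is not reproduced here):

```python
import numpy as np

def predictive_parity_gap(y_true, y_pred, group):
    """Largest difference in precision P(y=1 | y_hat=1) across groups."""
    precisions = []
    for g in np.unique(group):
        m = (group == g) & (y_pred == 1)
        precisions.append(y_true[m].mean() if m.any() else np.nan)
    return np.nanmax(precisions) - np.nanmin(precisions)

rng = np.random.default_rng(0)
y, yhat, a = (rng.integers(0, 2, 1000) for _ in range(3))
print(predictive_parity_gap(y, yhat, a))  # ~0 for random predictions
```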
Intervention with dietary fiber to treat constipation and reduce laxative use in residents of nursing homes. | BACKGROUND
Residents of nursing homes have lost their independence. Recent studies have reported that nutritional problems arise in nursing homes. These problems are correlated with changed eating habits and geriatric constipation, which is predominantly treated with laxatives. Such interventions are not without risk, since frequent use of laxatives may be accompanied by several side effects. Dietary fibers also affect stool weight and transit time. Therefore, the effectiveness of oat bran in reducing the need for bowel medication was examined.
AIM
To develop diets with the addition of oat bran for reduction of laxatives and to improve the inhabitants' well-being in a long-term-care facility.
METHODS
A controlled, blind, parallel intervention trial among 30 frail inhabitants of a geriatric ward, aged 57-100 years, with laxative use. An intervention group and a control group were formed: 15 received oat bran for 12 weeks (fiber group), mixed into the ward's usual daily diet, and 15 served as controls (control group). Laxative use, body weight and observations concerning the eating habits of the elderly were documented.
RESULTS
A cake providing the required dietary fiber, readily accepted by the seniors, was developed. Laxatives were successfully discontinued by 59% (p < 0.001) in the fiber group. Body weight remained constant in the fiber group and decreased in the control group (p < 0.005).
CONCLUSION
Fiber supplementation in the form of a cake allows discontinuation of laxatives and increases the seniors' well-being in a nursing home. |
The validity of two versions of the GHQ in the WHO study of mental illness in general health care. | BACKGROUND
In recent years the 12-item General Health Questionnaire (GHQ-12) has been extensively used as a short screening instrument, producing results that are comparable to longer versions of the GHQ.
METHODS
The validity of the GHQ-12 was compared with the GHQ-28 in a World Health Organization study of psychological disorders in general health care. Results are presented for 5438 patients interviewed in 15 centres using the primary care version of the Composite International Diagnostic Instrument, or CIDI-PC.
RESULTS
Results were uniformly good, with the average area under the ROC curve 88 (range 83 to 95). Minor variations in the criteria used for defining a case made little difference to the validity of the GHQ, and complex scoring methods offered no advantages over simpler ones. The GHQ was translated into 10 other languages for the purposes of this study, and validity coefficients were almost as high as in the original language. There was no tendency for the GHQ to work less efficiently in developing countries. Finally, gender, age and educational level are shown to have no significant effect on the validity of the GHQ.
CONCLUSIONS
If investigators wish to use a screening instrument as a case detector, the shorter GHQ is remarkably robust and works as well as the longer instrument. The latter should only be preferred if there is an interest in the scaled scores provided in addition to the total score. |
Stimulation of the caudal zona incerta is superior to stimulation of the subthalamic nucleus in improving contralateral parkinsonism. | Deep brain stimulation (DBS) has an increasing role in the treatment of idiopathic Parkinson's disease. Although the subthalamic nucleus (STN) is the commonly chosen target, a number of groups have reported that the most effective contact lies dorsal/dorsomedial to the STN (region of the pallidofugal fibres and the rostral zona incerta) or at the junction between the dorsal border of the STN and the latter. We analysed our outcome data from Parkinson's disease patients treated with DBS between April 2002 and June 2004. During this period we moved our target from the STN to the region dorsomedial/medial to it and subsequently targeted the caudal part of the zona incerta (cZI). We present a comparison of the motor outcomes between these three groups of patients, with optimal contacts within the STN (group 1), dorsomedial/medial to the STN (group 2) and in the cZI (group 3). Thirty-five patients with Parkinson's disease underwent MRI-directed implantation of 64 DBS leads into the STN (17), dorsomedial/medial to STN (20) and cZI (27). The primary outcome measure was the contralateral Unified Parkinson's Disease Rating Scale (UPDRS) motor score (off medication/off stimulation versus off medication/on stimulation) measured at follow-up (median time 6 months). The secondary outcome measures were the UPDRS III subscores of tremor, bradykinesia and rigidity. Dyskinesia score, L-dopa medication reduction and stimulation parameters were also recorded. The mean adjusted contralateral UPDRS III score with cZI stimulation was 3.1 (76% reduction) compared to 4.9 (61% reduction) in group 2 and 5.7 (55% reduction) in the STN group (P-value for trend <0.001). There was a 93% improvement in tremor with cZI stimulation versus 86% in group 2 versus 61% in group 1 (P-value = 0.01). Adjusted 'off-on' rigidity scores were 1.0 for the cZI group (76% reduction), 2.0 for group 2 (52% reduction) and 2.1 for group 1 (50% reduction) (P-value for trend = 0.002). Bradykinesia was more markedly improved in the cZI group (65%) compared to group 2 (56%) or the STN group (59%) (P-value for trend = 0.17). There were no statistically significant differences in the dyskinesia scores, L-dopa medication reduction and stimulation parameters between the three groups. Stimulation-related complications were seen in some group 2 patients. High-frequency stimulation of the cZI results in greater improvement in contralateral motor scores in Parkinson's disease patients than stimulation of the STN. We discuss the implications of this finding and the potential role played by the ZI in Parkinson's disease. |
The Moral Aspect of Educational Systems in the Information Society | The article deals with the problems of moral imperatives in education. The author identifies invariant ethical characteristics of educational communication processes in the modern information society, associated with an understanding of inter-frame interactions, universal mechanisms of the development of scholarly schools, and values that go back to traditional forms of cultural self-identity in education. |
On the Reuse of Past Optimal Queries | Information Retrieval (IR) systems exploit user feedback by generating an optimal query with respect to a particular information need. Since obtaining an optimal query is an expensive process, the need for mechanisms to save and reuse past optimal queries for future queries is obvious. In this article, we propose the use of a query base, a set of persistent past optimal queries, and investigate similarity measures between queries. The query base can be used either to answer user queries or to formulate optimal queries. We justify the former case analytically and the latter case by experiment. |
Feasibility Study on Plant Chili Disease Detection Using Image Processing Techniques | Producing chili is a daunting task, as the plant is exposed to attacks from various micro-organisms, bacterial diseases and pests. The symptoms of these attacks are usually distinguished by inspecting the leaves, stems or fruit. This paper discusses an effective way of performing early detection of chili disease through leaf feature inspection. A leaf image is captured and processed to determine the health status of each plant. Currently, chemicals are applied to the plants periodically without considering the requirements of each plant. The proposed technique ensures that chemicals are applied only when the plants are detected to be affected by disease. Image processing techniques are used to process hundreds of chili disease images. Plant chili disease detection through leaf images and data processing techniques is a very useful and inexpensive system, especially for assisting farmers in monitoring large plantation areas. |
Completion and Reconstruction with Primitive Shapes | We consider the problem of reconstruction from incomplete point-clouds. To find a closed mesh the reconstruction is guided by a set of primitive shapes which has been detected on the input point-cloud (e.g. planes, cylinders etc.). With this guidance we not only continue the surrounding structure into the holes but also synthesize plausible edges and corners from the primitives’ intersections. To this end we give a surface energy functional that incorporates the primitive shapes in a guiding vector field. The discretized functional can be minimized with an efficient graphcut algorithm. A novel greedy optimization strategy is proposed to minimize the functional under the constraint that surface parts corresponding to a given primitive must be connected. From the primitive shapes our method can also reconstruct an idealized model that is suitable for use in a CAD system. |
Country houses on the southern slopes of Povlen: 1800-1970. | Houses in the village area on the southern slopes of Mt. Povlen began their
development in the late 18th century with the arrival of the first settlers
from Montenegro and Herzegovina. In the new habitat, the immigrants were
applying knowledge and skills acquired in villages they came from. Natural
conditions at this new location offered them wide range of construction
materials. Rapid economic development stimulated the transition from
livestock to farming and more intensive plum growing. New activities
influenced in a new way the shape of rural houses. Slow development of
transportation and public administration also influenced in a certain way the
house forming. At the same time, organization and relationships within the
family remained almost the same. All the influences on the development of
rural houses can be traced in the saved objects on the site and in written
materials. Based on the objects on the site, breakpoints from one stage to
another could be established. Knowledge in this very area may help to
understand the 21st century village architecture. Objects presented in this
paper were recorded for personal project from 1996 to 2000. |
Learning Noun-Modifier Semantic Relations with Corpus-based and WordNet-based Features | We study the performance of two representations of word meaning in learning noun-modifier semantic relations. One representation is based on lexical resources, in particular WordNet, the other – on a corpus. We experimented with decision trees, instance-based learning and Support Vector Machines. All these methods work well in this learning task. We report high precision, recall and F-score, and small variation in performance across several 10-fold cross-validation runs. The corpus-based method has the advantage of working with data without word-sense annotations and performs well over the baseline. The WordNet-based method, requiring wordsense annotated data, has higher precision. |
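A sketch of the experimental setup the abstract describes, comparing the three learners under 10-fold cross-validation with sklearn; the feature matrix is a synthetic stand-in for noun-modifier pairs encoded via WordNet-based or corpus-based features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for encoded noun-modifier pairs.
X, y = make_classification(n_samples=600, n_features=50, n_classes=3,
                           n_informative=10, random_state=0)

for clf in (DecisionTreeClassifier(), KNeighborsClassifier(), SVC()):
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold CV, as in the paper
    print(type(clf).__name__, round(scores.mean(), 3))
```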
Polymer-free sirolimus-eluting versus polymer-based paclitaxel-eluting stents: an individual patient data analysis of randomized trials. | INTRODUCTION AND OBJECTIVES
The angiographic and clinical efficacy of polymer-free sirolimus-eluting stents vs polymer-based paclitaxel-eluting stents remains a matter of debate. We sought to investigate angiographic and clinical measures of efficacy of polymer-free sirolimus-eluting stents vs polymer-based paclitaxel-eluting stents.
METHODS
Patient data from the randomized intracoronary stenting and angiographic restenosis-test equivalence between the 2 drug-eluting stents (ISAR-TEST) clinical trial and the LIPSIA Yukon clinical trial (randomized comparison of a polymer-free sirolimus-eluting stent vs a polymer-based paclitaxel-eluting stent in patients with diabetes mellitus) were pooled. The angiographic (primary) endpoint was in-stent late lumen loss at 6 months to 9 months. The clinical (secondary) endpoints were death or myocardial infarction, cardiac death or myocardial infarction, target lesion revascularization, and myocardial infarction.
RESULTS
A total of 686 patients (polymer-free sirolimus-eluting stents, n=345 vs polymer-based paclitaxel-eluting stents, n=341) and 751 lesions (polymer-free sirolimus-eluting stents, n=383 vs polymer-based paclitaxel-eluting stents, n=368) were included in the study. Control angiography (606 lesions, 80.6%) showed comparable in-stent late lumen loss for polymer-free sirolimus-eluting stents vs polymer-based paclitaxel-eluting stents (0.53 [0.59] mm vs 0.46 [0.57] mm; P=.15). Median follow-up was 34.8 months. Polymer-free sirolimus-eluting stents and polymer-based paclitaxel-eluting stents were associated with comparable risk of death or myocardial infarction (relative risk=1.17; 95% confidence interval, 0.49-2.80; P=.71), cardiac death or myocardial infarction (relative risk=1.17; 95% confidence interval, 0.72-1.89; P=.50), target lesion revascularization (relative risk=0.98; 95% confidence interval, 0.65-1.47; P=.93), and myocardial infarction (relative risk=1.79; 95% confidence interval, 0.85-3.76; P=.12).
CONCLUSIONS
In this pooled analysis, polymer-free sirolimus-eluting stents were comparable to polymer-based paclitaxel-eluting stents with respect to both angiographic and clinical efficacy. |
Transfer Learning Based Evolutionary Algorithm for Composite Face Sketch Recognition | Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advancements in technology have led to the availability of sketch generation tools, minimizing the requirement of a sketch artist. While these sketches have helped in manual authentication, automatically matching composite sketches with digital mugshot photos suffers from a high modality gap. This research addresses the task of matching a composite face sketch image to digital images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, the Histogram of Image Moments, is also presented for encoding features across modalities. Moreover, the IIITD Composite Face Sketch Database of 150 subjects is presented to fill the gap caused by the limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset show the effectiveness of the transfer learning approach for performing cross-modality recognition. |
Detailed characterization of, and clinical correlations in, 10 patients with distal deletions of chromosome 9p | Purpose: Deletions of distal 9p are associated with trigonocephaly, mental retardation, dysmorphic facial features, cardiac anomalies, and abnormal genitalia. Previous studies identified a proposed critical region for the consensus phenotype in band 9p23, between 11.8 Mb and 16 Mb from the 9p telomere. Here we report 10 new patients with 9p deletions; 9 patients have clinical features consistent with 9p− syndrome, but possess terminal deletions smaller than most reported cases, whereas one individual lacks the 9p− phenotype and shows a 140-kb interstitial telomeric deletion inherited from his mother. Methods: We combined fluorescence in situ hybridization and microarray analyses to delineate the size of each deletion. Results: The deletion sizes vary from 800 kb to 12.4 Mb in our patients with clinically relevant phenotypes. Clinical evaluation and comparison showed little difference in physical features with regard to the deletion sizes. Severe speech and language impairment were observed in all patients with clinically relevant phenotypes. Conclusion: The smallest deleted region common to our patients who demonstrate a phenotype consistent with 9p− is <2 Mb of 9pter, which contains six known genes. These genes may contribute to some of the cardinal features of 9p deletion syndrome. |
The Frequency of Multifactorial Syndromes in Geriatrics of Tuzla Canton Population | Introduction
There are four main multifactorial syndromes in geriatrics, the so-called "4N", which occur specifically in elderly patients. These syndromes often occur together, and each can be both the cause and the result of many other syndromes in geriatric patients.
Objective
To determine differences between elderly groups in the assessed levels of immobility, instability, dependence and urinary incontinence ("4N").
Materials and methods
The research included a total of 200 elderly respondents. The experimental group consisted of elderly persons (>65 years) living alone; the control group included elderly persons living in a family environment. A universal geriatric questionnaire was designed for this research. For quick orientation, a questionnaire adapted to our conditions was used: the "Short list for examination" covering common elderly problems seen in clinics. For the assessment of reduced mental abilities in the elderly we used the Short Portable Mental Status Questionnaire (SPMSQ).
Results
The total sample comprised 200 elderly respondents, 45% in the experimental group and 55% in the control group. The average age (±SD) was 75.4±6.2 years in the experimental group and 74.9±5.6 years in the control group. The distribution of falling risk was nearly equal across the groups (50%, 47%). In the total sample, 62% were mobile, 22.5% had limited mobility, and 4% were immobile. Dependence was more frequent among experimental-group respondents (p=0.002); the odds of dependence (OR) were 2.05 times higher (95% CI=1.12-3.75) in the experimental group than in the control group. Urinary incontinence was a significantly represented problem among all our respondents (42.2% vs 35%).
Conclusion
Permanent training in gerontology and geriatrics is needed both for family medicine doctors and for other experts in the field of elderly health protection, covering preventive health measures, pharmacotherapy, palliative care and, especially, the four main geriatric syndromes in the elderly. |
The Security Evaluation of Time Stamping Schemes: The Present Situation and Studies | Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes. |
A family of triply periodic Costa surfaces | Among all known complete embedded minimal surfaces in R3, the triply periodic ones form the richest class of examples regarding variety of genus and symmetry group. For instance, the gyroid can be pointed out as a complete minimal surface containing neither straight lines nor reflectional symmetry curves. Another curiosity is the associate family from the Schwarz P-Surface to its conjugate, the D-Surface, which has the gyroid as an intermediate embedded member (see [4], p. 25). No other associate family with such behaviour is known outside the triply periodic class. The variety of triply periodic minimal surfaces was known much earlier than most of the non-triply periodic examples, even though the existence proofs for the former took longer to complete. The first five examples came out in 1890 from the work of H.A. Schwarz and his students [18]. This work inspired A.H. Schoen [17], who presented 17 other such surfaces in 1970. Later, in 1989, H. Karcher [9] proved the existence of these triply periodic examples and found many others with the conjugate surface method. In this paper we enrich this class, not only by presenting a new family of triply periodic minimal surfaces in R3, but also with a full study of this family, including uniqueness and limits. Such a thorough study is rarely found for other triply periodic examples. Moreover, in this new family the boundary contour of the conjugate surface patch does not project onto any convex domain. Hence, the classical Plateau approach fails for these particular surfaces, and they are perhaps the simplest such examples. In order to find new surfaces, one can make use of already known examples like the Costa surface, which gave rise to the "Mk-Costa-Hoffman-Meeks" families [7], later generalised by D. Hoffman and H. Karcher [5]. In the present paper, once more, the Costa surface turns out to be a source of new results. |
Workflow and toolchain for developing automotive software according to the AUTOSAR standard on a Virtual ECU | The increasing demand for new functionalities in next-generation vehicles leads to a growth in the complexity of E/E automotive systems. Automotive software tends to follow the same pace, so new methods should be adopted to deal with this scenario of complexity. The next generation of automotive embedded software is rapidly migrating to the AUTOSAR standard, an architectural composition of software components conceived to establish an open industry standard for the automotive industry. AUTOSAR aims to increase the reuse of these software components, in particular between different vehicle platforms, and between OEMs and suppliers. Within this development process, software control suppliers are able to check whether the system functionalities meet the requirements already in preliminary phases, even if the ECU is not yet available. In this paper the authors show the workflow to develop a virtual validation based on the AUTOSAR standard with a Virtual ECU (V-ECU), using a toolchain consisting of dSPACE (SystemDesk, VEOS and TargetLink) and MathWorks (Matlab, Simulink and Stateflow) software. The simulation of the architecture has been realized considering communication inside a single V-ECU and also between two different V-ECUs in a distributed architecture. As a result, a step-by-step explanation of the AUTOSAR methodology is given to show how the process works. |
Artificial intelligence - a modern approach by Stuart Russell and Peter Norvig, Prentice Hall. Series in Artificial Intelligence, Englewood Cliffs, NJ | Benferhat, S, Dubois, D and Prade, H, 1992. "Representing default rules in possibilistic logic" In: Proc. of the 3rd Inter. Conf. on Principles of Knowledge Representation and Reasoning (KR'92), 673-684, Cambridge, MA, October 26-29. De Finetti, B, 1936. "La logique de la probabilité" Actes du Congrès Inter. de Philosophie Scientifique, Paris. (Hermann et Cie Editions, 1936, IV1-IV9). Driankov, D, Hellendoorn, H and Reinfrank, M, 1995. An Introduction to Fuzzy Control, Springer-Verlag. Dubois, D and Prade, H, 1988. "An introduction to possibilistic and fuzzy logics" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 287-315, Academic Press. Dubois, D and Prade, H, 1994. "Can we enforce full compositionality in uncertainty calculi?" In: Proc. 12th US National Conf. on Artificial Intelligence (AAAI'94), 149-154, Seattle, WA. Elkan, C, 1994. "The paradoxical success of fuzzy logic" IEEE Expert August, 3-8. Lehmann, D and Magidor, M, 1992. "What does a conditional knowledge base entail?" Artificial Intelligence 55 (1) 1-60. Maung, I, 1995. "Two characterizations of a minimum-information principle in possibilistic reasoning" Int. J. of Approximate Reasoning 12 133-156. Pearl, J, 1990. "System Z: A natural ordering of defaults with tractable applications to default reasoning" Proc. of the 2nd Conf. on Theoretical Aspects of Reasoning about Knowledge (TARK'90) 121-135, San Francisco, CA, Morgan Kaufmann. Shoham, Y, 1988. Reasoning about Change, MIT Press. Smets, P, 1988. "Belief functions" In: Non-Standard Logics for Automated Reasoning (P Smets, A Mamdani, D Dubois and H Prade, editors), 253-286, Academic Press. Smets, P, 1990a. "The combination of evidence in the transferable belief model" IEEE Trans. on Pattern Anal. Mach. Intell. 12 447-458. Smets, P, 1990b. "Constructing the pignistic probability function in a context of uncertainty" Uncertainty in Artificial Intelligence 5 (M Henrion et al., editors), 29-40, North-Holland. Smets, P, 1995. "Quantifying beliefs by belief functions: An axiomatic justification" In: Proc. of the 13th Inter. Joint Conf. on Artificial Intelligence (IJCAI'93), 598-603, Chambéry, France, August 28-September 3. Smets, P and Kennes, R, 1994. "The transferable belief model" Artificial Intelligence 66 191-234. |
POLY: Mining Relational Paraphrases from Multilingual Sentences | Language resources that systematically organize paraphrases for binary relations are of great value for various NLP tasks and have recently been advanced in projects like PATTY, WiseNet and DEFIE. This paper presents a new method for building such a resource and the resource itself, called POLY. Starting with a very large collection of multilingual sentences parsed into triples of phrases, our method clusters relational phrases using probabilistic measures. We judiciously leverage fine-grained semantic typing of relational arguments for identifying synonymous phrases. The evaluation of POLY shows significant improvements in precision and recall over the prior works on PATTY and DEFIE. An extrinsic use case demonstrates the benefits of POLY for question answering. |
Gravitational renormalization of quantum field theory: a | We propose to include gravity in quantum field theory nonperturbatively, by modifying the propagators so that each virtual particle in a Feynman graph moves in the space–time determined by the four-momenta of the other particles in the same graph. By making additional working assumptions, we are able to put this idea to work in a simplified context, and obtain a modified Feynman propagator for the massless neutral scalar field. Our expression shows a suppression at high momentum, strong enough to entail finite results, to all loop orders, for processes involving at least two virtual particles. |
Recognition of normal-abnormal phonocardiographic signals using deep convolutional neural networks and mel-frequency spectral coefficients. | Intensive care unit patients are heavily monitored, and several clinically-relevant parameters are routinely extracted from high resolution signals.
OBJECTIVE
The goal of the 2016 PhysioNet/CinC Challenge was to encourage the creation of an intelligent system that fused information from different phonocardiographic signals to create a robust set of normal/abnormal signal detections.
APPROACH
Deep convolutional neural networks and mel-frequency spectral coefficients were used for recognition of normal-abnormal phonocardiographic signals of the human heart. This technique was developed using the PhysioNet.org Heart Sound database and was submitted for scoring on the challenge test set.
MAIN RESULTS
The current entry for the proposed approach obtained an overall score of 84.15% in the last phase of the challenge, ranking sixth among official scores and differing from the best score of 86.02% by just 1.87%. |
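A minimal sketch of the mel-frequency feature-extraction stage described above, assuming the librosa package is available; the 2 kHz sampling rate, synthetic signal, frame sizes and n_mfcc=13 are illustrative choices, not the challenge entry's actual settings.

```python
# Mel-frequency coefficients for a (synthetic) heart-sound segment.
import numpy as np
import librosa

sr = 2000                                   # PCG recordings are low-bandwidth
t = np.linspace(0, 3.0, 3 * sr, endpoint=False)
pcg = np.sin(2 * np.pi * 40 * t)            # stand-in for a real recording

mfcc = librosa.feature.mfcc(y=pcg, sr=sr, n_mfcc=13,
                            n_fft=256, hop_length=128)
print(mfcc.shape)                           # (13, n_frames) -> CNN input
```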
Citalopram therapy for depression: a review of 10 years of European experience and data from U.S. clinical trials. | BACKGROUND
This review summarizes and evaluates clinical experience with citalopram, the latest selective serotonin reuptake inhibitor (SSRI) to be approved for the treatment of depression in the United States.
DATA SOURCES
Published reports of randomized, double-blind, controlled clinical studies of citalopram were retrieved using a MEDLINE literature search. Search terms included citalopram, SSRI, TCA (tricyclic antidepressant), depression, and clinical. For each study, data on antidepressant efficacy and adverse events were evaluated. Pharmacokinetic studies and case reports were reviewed to supplement the evaluation of citalopram's safety and tolerability. Data presented at major medical conferences and published in abstract form also were reviewed.
STUDY FINDINGS
Thirty randomized, double-blind, controlled studies of the antidepressant efficacy of citalopram were located and reviewed. In 11 studies, citalopram was compared with placebo (1 of these studies also included comparison with another SSRI). In 4 additional studies, the efficacy of citalopram in preventing depression relapse or recurrence was investigated. In another 11 studies (including 1 meta-analysis of published and unpublished trials), citalopram was compared with tricyclic and tetracyclic antidepressants. Finally, results are available from 4 studies in which citalopram was compared with other SSRIs. A placebo-controlled study of citalopram for the treatment of panic disorder was reviewed for data on long-term adverse events.
CONCLUSION
Data published over the last decade suggest that citalopram (1) is superior to placebo in the treatment of depression, (2) has efficacy similar to that of the tricyclic and tetracyclic antidepressants and to other SSRIs, and (3) is safe and well tolerated in the therapeutic dose range of 20 to 60 mg/day. Distinct from some other agents in its class, citalopram exhibits linear pharmacokinetics and minimal drug interaction potential. These features make citalopram an attractive agent for the treatment of depression, especially among the elderly and patients with comorbid illness. |
The evolutionary psychology of facial beauty. | What makes a face attractive and why do we have the preferences we do? Emergence of preferences early in development and cross-cultural agreement on attractiveness challenge a long-held view that our preferences reflect arbitrary standards of beauty set by cultures. Averageness, symmetry, and sexual dimorphism are good candidates for biologically based standards of beauty. A critical review and meta-analyses indicate that all three are attractive in both male and female faces and across cultures. Theorists have proposed that face preferences may be adaptations for mate choice because attractive traits signal important aspects of mate quality, such as health. Others have argued that they may simply be by-products of the way brains process information. Although often presented as alternatives, I argue that both kinds of selection pressures may have shaped our perceptions of facial beauty. |
A Dataset for StarCraft AI & an Example of Armies Clustering | This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only players' orders). We explain one of the possible usages of this dataset by clustering armies based on their compositions. This reduction of army compositions to Gaussian mixtures allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of the armies' compositions. |
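As a sketch of the clustering idea above, the snippet below fits a Gaussian mixture to synthetic army-composition vectors with scikit-learn; the six unit types, 500 armies and four mixture components are hypothetical values, not the dataset's.

```python
# Cluster army compositions (unit-type proportions) with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
armies = rng.dirichlet(alpha=np.ones(6), size=500)   # 500 armies, 6 unit types

gmm = GaussianMixture(n_components=4, covariance_type='full',
                      random_state=0).fit(armies)
labels = gmm.predict(armies)       # hard cluster assignment per army
mix = gmm.predict_proba(armies)    # soft mixture weights per army
print(mix[0].round(2))             # an army described as mixture components
```

The soft weights are what enable reasoning (and battle-outcome prediction) at the level of components rather than raw unit counts.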
Sector-Based Radio Resource Allocation (SBRRA) Algorithm for Better Quality of Service and Experience in Device-to-Device (D2D) Communication | The mounting content sharing among users has resulted in a considerable rise in wireless data traffic, pressuring cellular networks to undergo a suitable upheaval. A competent technology of fifth-generation (5G) networks for efficiently supporting proximity-based applications is device-to-device (D2D) communication, underlaying cellular networks. Significant advances have been made to date in allocating resources to D2D users in cellular networks, such that spectral resources are shared between cellular and D2D users in a coordinated manner. In this paper, a sector-based radio resource allocation (SBRRA) algorithm for resource block (RB) allocation to D2D pairs is proposed, in which the number of RBs allocated to each D2D pair is adapted to the application demanded by that pair. Different applications demand a varying number of RBs, in accordance with their priority. The algorithm focuses on the use of sectored antennas at the base station for better performance and low complexity. Extensive simulations are carried out, considering a real-time scenario, to ensure satisfactory quality of service (QoS) and quality of experience (QoE) for the users. The efficiency of the proposed scheme is demonstrated by comparing it with RB allocation using a hidden Markov model. |
The Anatomic Basis of Midfacial Aging | Facial aging is a multifactorial, three-dimensional (3D) process with anatomic, biochemical, and genetic correlates. Many exogenous and endogenous factors can significantly impact the perceived age of an individual. Solar exposure [1–3], cigarette smoking [1, 2, 4, 5], medications [1], alcohol use [1], body mass index [2], and endocrinologic status [1, 6, 7] have all been implicated as factors that accelerate cutaneous and subcutaneous aging. These factors act in concert to create a variegated spectrum of facial morphologic aging changes, and thus, Mme. Chanel was partially correct in her statement from the last century. Most of the aging changes that occur in the midface, however, occur predictably in the majority of individuals. Stigmata of midfacial aging typically appear by the middle of the fourth decade. Degenerative changes occur in nearly every anatomic component of the midface and include cranial bone remodeling, tissue descent secondary to gravity, fat atrophy, and deterioration in the condition and appearance of the skin. The lower eyelids and adjacent tissues are often the initial areas of patient concern. This chapter reviews the morphologic changes that occur in the aging midface and discusses the pathogenesis of midfacial aging based upon its anatomic components. An integrated theory of facial aging will be presented. |
Ketogenic diet does not affect strength performance in elite artistic gymnasts | BACKGROUND
Despite the increasing use of very low carbohydrate ketogenic diets (VLCKD) in weight control and management of the metabolic syndrome, there is a paucity of research on the effects of VLCKD on sport performance. Ketogenic diets may be useful in sports that include weight class divisions, and the aim of our study was to investigate the influence of VLCKD on explosive strength performance.
METHODS
Eight elite artistic gymnasts (age 20.9 ± 5.5 years) were recruited. We analyzed body composition and various performance aspects (hanging straight leg raise, ground push up, parallel bar dips, pull up, squat jump, countermovement jump, 30 s continuous jumps) before and after 30 days of a modified ketogenic diet. The diet was based on green vegetables, olive oil, fish and meat, plus dishes composed of high quality protein and virtually zero carbohydrates, which mimicked the taste of carbohydrate dishes, with the addition of some herbal extracts. During the VLCKD the athletes performed their normal training program. After three months the same protocol was repeated: tests were performed before and after 30 days of the athletes' usual diet (a typical western diet, WD). A one-way ANOVA for repeated measures was used.
RESULTS
No significant differences were detected between VLCKD and WD in any of the strength tests. Significant differences were found in body weight and body composition: after VLCKD there was a decrease in body weight (from 69.6 ± 7.3 kg to 68.0 ± 7.5 kg) and fat mass (from 5.3 ± 1.3 kg to 3.4 ± 0.8 kg, p < 0.001), with a non-significant increase in muscle mass.
CONCLUSIONS
Despite concerns of coaches and doctors about the possible detrimental effects of low carbohydrate diets on athletic performance, and the well-known importance of carbohydrates, there are no data about VLCKD and strength performance. The undeniable and sudden effect of VLCKD on fat loss may be useful for those athletes who compete in sports based on weight class. We have demonstrated that using a VLCKD for a relatively short time period (i.e. 30 days) can decrease body weight and body fat without negative effects on strength performance in high level athletes. |
EVALUATING THE PREDICTIVE VALIDITY OF THE COMPAS RISK AND NEEDS ASSESSMENT SYSTEM | This study examines the statistical validation of a recently developed, fourth-generation (4G) risk–need assessment system (Correctional Offender Management Profiling for Alternative Sanctions; COMPAS) that incorporates a range of theoretically relevant criminogenic factors and key factors emerging from meta-analytic studies of recidivism. COMPAS’s automated scoring provides decision support for correctional agencies for placement decisions, offender management, and treatment planning. The article describes the basic features of COMPAS and then examines the predictive validity of the COMPAS risk scales by fitting Cox proportional hazards models to recidivism outcomes in a sample of presentence investigation and probation intake cases (N = 2,328). Results indicate that the predictive validities for the COMPAS recidivism risk model, as assessed by the area under the receiver operating characteristic curve (AUC), equal or exceed similar 4G instruments. The AUCs ranged from .66 to .80 for diverse offender subpopulations across three outcome criteria, with a majority of these exceeding .70. |
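A minimal sketch of the analysis pattern described above (fitting a Cox proportional hazards model to time-to-recidivism data), assuming the lifelines package; the column names and synthetic data are illustrative, not the COMPAS study's variables.

```python
# Cox proportional hazards fit on synthetic time-to-recidivism data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    'risk_score': rng.normal(size=n),                # e.g. a risk scale score
    'age': rng.integers(18, 60, size=n),
    'months_to_event': rng.exponential(24, size=n),  # follow-up time
    'recidivated': rng.integers(0, 2, size=n),       # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col='months_to_event', event_col='recidivated')
cph.print_summary()   # hazard ratio per covariate
```

Predictive validity would then be summarized, as in the study, by an AUC-style discrimination statistic over the model's risk ranking.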
A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection | The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, which has been shown to cope well with several of the well-known issues in background maintenance. Here, we propose a spatial coherence variant of this approach to enhance robustness against false detections, and formulate a fuzzy model to deal with the decision problems that typically arise when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences representing typical situations critical for moving object detection. |
QLBS: Q-Learner in the Black-Scholes(-Merton) Worlds | This paper presents a discrete-time option pricing model that is rooted in Reinforcement Learning (RL), and more specifically in the famous Q-Learning method of RL. We construct a risk-adjusted Markov Decision Process for a discrete-time version of the classical Black-Scholes-Merton (BSM) model, where the option price is an optimal Q-function, while the optimal hedge is a second argument of this optimal Q-function, so that both the price and hedge are parts of the same formula. Pricing is done by learning to dynamically optimize risk-adjusted returns for an option replicating portfolio, as in the Markowitz portfolio theory. Using Q-Learning and related methods, once created in a parametric setting, the model is able to go model-free and learn to price and hedge an option directly from data generated from a dynamic replicating portfolio which is rebalanced at discrete times. If the world is according to BSM, our risk-averse Q-Learner converges, given enough training data, to the true BSM price and hedge ratio of the option in the continuous time limit ∆t → 0, even if hedges applied at the stage of data generation are completely random (i.e. it can learn the BSM model itself, too!), because Q-Learning is an off-policy algorithm. If the world is different from a BSM world, the Q-Learner will find it out as well, because Q-Learning is a model-free algorithm. For finite time steps ∆t, the Q-Learner is able to efficiently calculate both the optimal hedge and optimal price for the option directly from trading data, and without an explicit model of the world. This suggests that RL may provide efficient data-driven and model-free methods for optimal pricing and hedging of options, once we depart from the academic continuous-time limit ∆t → 0, and vice versa, option pricing methods developed in Mathematical Finance may be viewed as special cases of model-based Reinforcement Learning. Further, due to the simplicity and tractability of our model, which only needs basic linear algebra (plus Monte Carlo simulation if we work with synthetic data), and its close relation to the original BSM model, we suggest that our model could be used for benchmarking of different RL algorithms for financial trading applications. |
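For reference, a sketch of the continuous-time BSM call price that the Q-Learner is stated to converge to; this is the closed-form benchmark, not the QLBS algorithm itself, and the parameter values are illustrative.

```python
# Black-Scholes-Merton European call price (the continuous-time benchmark).
import numpy as np
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """S spot, K strike, T maturity in years, r risk-free rate, sigma vol."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    # norm.cdf(d1) is also the BSM hedge ratio (delta) of the call
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(bsm_call(S=100, K=100, T=1.0, r=0.03, sigma=0.2))  # ~9.41
```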
Advances in opportunistic radio technologies for TVWS | Cognitive radio has been an active research area in wireless communications over the past 10 years. TV Digital Switch Over resulted in new regulatory regimes, which offer the first large-scale opportunity for cognitive radio and networks. This article considers the most recent regulatory rules for TV White Space opportunistic usage, and proposes technologies to operate in these bands. It addresses techniques to assess channel vacancy by the cognitive radio, focusing on the two incumbent systems of the TV bands, namely TV stations and wireless microphones. Spectrum-sensing performance is discussed under TV White Space regulation parameters. Then, modulation schemes for the opportunistic radio are discussed, showing the limitations of classical multi-carrier techniques and the advantages of filter bank modulations. In particular, the low adjacent band leakage of filter bank is addressed, and its benefit for spectrum pooling is stressed as a means to offer broadband access through channel aggregation. |
Footprint of uncertainty for type-2 fuzzy sets | In this paper, a new and comprehensible definition is proposed for type-2 fuzzy sets (T2 FSs), and the primary and secondary membership functions are defined respectively by using multi-valued mappings. A new definition and formula for the footprint of uncertainty (FOU) is presented, and based on the new definitions, the relation between the FOU and the original definition of T2 FSs is discussed. Finally, a partition method for the FOU is provided to represent the primary membership grade, the FOU and interval type-2 fuzzy sets (IT2 FSs) when the upper and lower membership functions (UMF, LMF) of the FOU are given. |
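For orientation, the interval formulation of the FOU that is standard in the type-2 fuzzy set literature is sketched below; the paper proposes its own definition and formula, which may differ in notation from this conventional form.

```latex
% Conventional interval formulation of the footprint of uncertainty:
% the union, over the primary domain X, of the intervals bounded by
% the lower and upper membership functions (LMF and UMF).
\[
  \mathrm{FOU}(\tilde{A})
    = \bigcup_{x \in X}
      \bigl[\,\underline{\mu}_{\tilde{A}}(x),\; \overline{\mu}_{\tilde{A}}(x)\,\bigr]
\]
```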
Fast time series classification using numerosity reduction | Many algorithms have been proposed for the problem of time series classification. However, it is clear that one-nearest-neighbor with Dynamic Time Warping (DTW) distance is exceptionally difficult to beat. This approach has one weakness, however: it is computationally too demanding for many real-time applications. One way to mitigate this problem is to speed up the DTW calculations. Nonetheless, there is a limit to how much this can help. In this work, we propose an additional technique, numerosity reduction, to speed up one-nearest-neighbor DTW. While the idea of numerosity reduction for nearest-neighbor classifiers has a long history, we show here that we can leverage an original observation about the relationship between dataset size and DTW constraints to produce an extremely compact dataset with little or no loss in accuracy. We test our ideas with a comprehensive set of experiments, and show that the technique can efficiently produce extremely fast, accurate classifiers. |
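A minimal sketch of the constrained DTW distance that the observation above concerns, with a Sakoe-Chiba band of half-width w; plain O(n·w) dynamic programming for illustration, not the paper's optimized implementation.

```python
# DTW distance with a Sakoe-Chiba warping-window constraint.
import numpy as np

def dtw(a, b, window):
    n, m = len(a), len(b)
    w = max(window, abs(n - m))            # band must cover the diagonal
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

x = np.sin(np.linspace(0, 6, 100))
y = np.sin(np.linspace(0.5, 6.5, 100))
print(dtw(x, y, window=10))   # small distance: same shape, shifted phase
```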
A Design Procedure for All-Digital Phase-Locked Loops Based on a Charge-Pump Phase-Locked-Loop Analogy | In this brief, a systematic design procedure for a second-order all-digital phase-locked loop (PLL) is proposed. The design procedure is based on the analogy between a type-II second-order analog PLL and an all-digital PLL. The all-digital PLL design inherits the frequency response and stability characteristics of the analog prototype PLL. |
SciMAT: A new science mapping analysis software tool | This article presents a new open-source software tool, SciMAT, which performs science mapping analysis within a longitudinal framework. It provides different modules that help the analyst to carry out all the steps of the science mapping workflow. In addition, SciMAT presents three key features that are remarkable in respect to other science mapping software tools: (a) a powerful preprocessing module to clean the raw bibliographical data, (b) the use of bibliometric measures to study the impact of each studied element, and (c) a wizard to configure the analysis. |
A Handbook for Building an Approximate Query Engine | There has been much research on various aspects of Approximate Query Processing (AQP), such as different sampling strategies, error estimation mechanisms, and various types of data synopses. However, many subtle challenges arise when building an actual AQP engine that can be deployed and used by real world applications. These subtleties are often ignored (or at least not elaborated) by the theoretical literature and academic prototypes alike. For the first time to the best of our knowledge, in this article, we focus on these subtle challenges that one must address when designing an AQP system. Our intention for this article is to serve as a handbook listing critical design choices that database practitioners must be aware of when building or using an AQP system, not to prescribe a specific solution to each challenge. |
XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks | We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values, resulting in 32× memory savings. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58× faster convolutional operations and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy. |
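A sketch of the binary-weight approximation described above, in which a real-valued filter W is replaced by αB with B = sign(W) and α a per-filter scaling factor (here the mean absolute weight); the shapes and data are illustrative.

```python
# Binary-weight approximation: W ~ alpha * sign(W), per convolution filter.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 3, 3, 3))               # one conv layer's filters

B = np.sign(W)                                    # binary filters in {-1, +1}
alpha = np.abs(W).reshape(64, -1).mean(axis=1)    # per-filter scale: mean |W|
W_approx = alpha[:, None, None, None] * B

err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative approximation error: {err:.3f}")
```

The memory saving comes from storing B as 1 bit per weight plus one float α per filter, instead of 32 bits per weight.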
A Micro-Spatial Analysis of the Demographic and Criminogenic Environment of Drug Markets in Philadelphia | The Australian and New Zealand Journal of Criminology, 40(1), 43–63. |
A phase I study of oral ixabepilone in patients with advanced solid tumors | Intravenous infusion of ixabepilone is Food and Drug Administration-approved for treatment of patients with metastatic breast cancer. The aim of this study was to establish the maximum tolerated dose (MTD), dose-limiting toxicities (DLTs), safety, and pharmacokinetics (PK) of a novel oral formulation of ixabepilone in patients with advanced solid tumors. Forty-four patients received one of six daily doses of oral ixabepilone (5, 10, 15, 20, 25, or 30 mg) on days 1–5 of a 21-day cycle. PK parameters were evaluated in cycle 1 for all treated patients and in cycle 1 and cycle 2 for patients participating in assessments of food and gastric pH effects. The most common DLTs (reported in at least one patient) were neutropenia, neutropenic fever, diarrhea, ileus, and hypokalemia. The MTD of oral ixabepilone was 25 mg. Plasma concentrations of ixabepilone showed high variability; coefficients of variation for the area under the curve and the peak plasma concentration ranged from 61 to 131 % and from 17 to 172 %, respectively. The mean half-life of ixabepilone calculated after day 5 of cycle 1 ranged from 24 to 47 h. Ixabepilone exposure was higher when administered with a low-fat meal compared with the fasted state, and when administered 2 h after the histamine H2 receptor antagonist famotidine. The MTD of oral ixabepilone when administered once daily for five consecutive days every 21 days was 25 mg. Ixabepilone exposure was highly variable; therefore, safety and efficacy of this novel oral formulation might not be reliably predicted. |
A Discriminative Graph-Based Parser for the Abstract Meaning Representation | Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available. We introduce the first approach to parse sentences into this representation, providing a strong baseline for future improvement. The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints. Our approach is described in the general framework of structured prediction, allowing future incorporation of additional features and constraints, and may extend to other formalisms as well. Our open-source system, JAMR, is available at: http://github.com/jflanigan/jamr |
Controversial entrapment neuropathies. | There is no significant disagreement about the major common entrapment neuropathies, such as carpal tunnel syndrome (CTS), ulnar neuropathy at the elbow, and peroneal neuropathy at the knee. In contrast, there is a group of entrapment syndromes about which there is major disagreement, including whether or not they even exist. There are other entrapment syndromes about which clinical questions arise on a regular basis, and which are the subject of this discussion. These include thoracic outlet syndrome, radial tunnel syndrome, ulnar nerve entrapment at the arcade of Struthers, piriformis syndrome, and tarsal tunnel syndrome. |
Direct torque control of induction machines with constant switching frequency and reduced torque ripple | Direct torque control (DTC) of induction machines is known to have a simple control structure with comparable performance to that of the field-oriented control technique. Two major problems that are usually associated with DTC drives are: switching frequency that varies with operating conditions and high torque ripple. To solve these problems, and at the same time retain the simple control structure of DTC, a constant switching frequency torque controller is proposed to replace the conventional hysteresis-based controller. In this paper, the modeling, averaging, and linearization of the torque loop containing the proposed controller followed by simulation and experimental results are presented. The proposed controller is shown to be capable of reducing the torque ripple and maintaining a constant switching frequency. |
Driven coherent oscillations of a single electron spin in a quantum dot | The ability to control the quantum state of a single electron spin in a quantum dot is at the heart of recent developments towards a scalable spin-based quantum computer. In combination with the recently demonstrated controlled exchange gate between two neighbouring spins, driven coherent single spin rotations would permit universal quantum operations. Here, we report the experimental realization of single electron spin rotations in a double quantum dot. First, we apply a continuous-wave oscillating magnetic field, generated on-chip, and observe electron spin resonance in spin-dependent transport measurements through the two dots. Next, we coherently control the quantum state of the electron spin by applying short bursts of the oscillating magnetic field and observe about eight oscillations of the spin state (so-called Rabi oscillations) during a microsecond burst. These results demonstrate the feasibility of operating single-electron spins in a quantum dot as quantum bits. |
PZT ceramics formed directly from oxides via reactive sintering | Lead zirconate titanate ceramics with morphotropic phase boundary (MPB) composition (PbZr0.52Ti0.48O3, or PZT) were prepared directly from their oxide mixture, without involving a calcination process. Sintering behavior of the oxide mixture showed a volumetric expansion occurring at about 800°C, which can be attributed to the reaction forming the PZT compound from the oxide mixture. A maximum sintering rate was observed at about 870°C in the linear sintering rate curve, showing that the densification of the oxide mixture is more significant than that of the calcined powder. PZT ceramics sintered at 1100°C for 1 h demonstrated a dielectric constant of 1157 at 1 kHz, a remanent polarization of 27.3 μC/cm² and a coercive field of 21 kV/cm. |
Sustainability principles and their integration into a higher education mechanical engineering undergraduate programme | Current undergraduate engineering programmes tend to be organised along fairly standard engineering themes such as mechanical engineering or automotive engineering. The engineering institutions have made it plain that sustainability needs to be a fundamental element of all mechanical engineering themed courses, yet in reality only a tiny minority of modules make reference to sustainability. It is true, however, that sustainability is a subject of such depth and breadth that whole degree courses could be dedicated to it. This paper defines the Principles of Sustainability [4] and suggests several types of degree programme through which sustainability can be introduced into a higher education, undergraduate mechanical engineering scheme. The paper explains that sustainability can only be taught through the medium of design, since the design function is the only function which has an overview and can influence the life of a product from "cradle to grave". |
A holistic approach to service survivability | We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built (using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes), SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch.
This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive successful security breaches increases substantially.
We discuss the key components of SABER, how they will be integrated together, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics. |
An Enhanced Power Sharing Scheme for Voltage Unbalance and Harmonics Compensation in an Islanded AC Microgrid | In this paper, an enhanced hierarchical control structure with multiple current loop damping schemes for voltage unbalance and harmonics compensation (UHC) in ac islanded microgrid is proposed to address unequal power sharing problems. The distributed generation (DG) is properly controlled to autonomously compensate voltage unbalance and harmonics while sharing the compensation effort for the real power, reactive power, and unbalance and harmonic powers. The proposed control system of the microgrid mainly consists of the positive sequence real and reactive power droop controllers, voltage and current controllers, the selective virtual impedance loop, the unbalance and harmonics compensators, the secondary control for voltage amplitude and frequency restoration, and the auxiliary control to achieve a high-voltage quality at the point of common coupling. By using the proposed unbalance and harmonics compensation, the auxiliary control, and the virtual positive/negative-sequence impedance loops at fundamental frequency, and the virtual variable harmonic impedance loop at harmonic frequencies, an accurate power sharing is achieved. Moreover, the low bandwidth communication (LBC) technique is adopted to send the compensation command of the secondary control and auxiliary control from the microgrid control center to the local controllers of DG unit. Finally, the hardware-in-the-loop results using dSPACE 1006 platform are presented to demonstrate the effectiveness of the proposed approach. |
Compressive Sequential Learning for Action Similarity Labeling | Human action recognition in videos has been extensively studied in recent years due to its wide range of applications. Instead of classifying video sequences into a number of action categories, in this paper, we focus on a particular problem of action similarity labeling (ASLAN), which aims at verifying whether a pair of videos contain the same type of action or not. To address this challenge, a novel approach called compressive sequential learning (CSL) is proposed by leveraging the compressive sensing theory and sequential learning. We first project data points to a low-dimensional space by effectively exploring an important property in compressive sensing: the restricted isometry property. In particular, a very sparse measurement matrix is adopted to reduce the dimensionality efficiently. We then learn an ensemble classifier for measuring similarities between pairwise videos by iteratively minimizing its empirical risk with the AdaBoost strategy on the training set. Unlike conventional AdaBoost, the weak learner for each iteration is not explicitly defined and its parameters are learned through greedy optimization. Furthermore, an alternative of CSL named compressive sequential encoding is developed as an encoding technique and followed by a linear classifier to address the similarity-labeling problem. Our method has been systematically evaluated on four action data sets: ASLAN, KTH, HMDB51, and Hollywood2, and the results show the effectiveness and superiority of our method for ASLAN. |
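A minimal sketch of the projection step described above, using scikit-learn's very sparse random measurement matrix; the descriptor dimensionality, target dimension and density are illustrative, not the paper's settings.

```python
# Dimensionality reduction with a very sparse random projection matrix,
# which approximately preserves pairwise distances (restricted isometry
# style guarantees motivate this construction).
import numpy as np
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))      # 200 video descriptors, 5000-dim

srp = SparseRandomProjection(n_components=128, density=1/3, random_state=0)
X_low = srp.fit_transform(X)          # (200, 128), distances roughly kept
print(X_low.shape)
```

The boosted similarity classifier would then be trained on these low-dimensional pair representations.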
Design and performance analysis of Multiply-Accumulate (MAC) unit | In recent years, the Multiply-Accumulate (MAC) unit has been developed for various high-performance applications. The MAC unit is a fundamental block in computing devices, especially Digital Signal Processors (DSPs). A MAC unit performs multiplication and accumulation; a basic MAC unit consists of a multiplier, an adder, and an accumulator. In the existing MAC unit model, the multiplier is designed using a modified Radix-2 Booth multiplier. In this paper, MAC unit models are designed by incorporating various multipliers (Array Multiplier, Ripple Carry Array Multiplier with Row Bypassing Technique, Wallace Tree Multiplier and Dadda Multiplier) in the multiplier module, and the performance of the MAC unit models is analyzed in terms of area, delay and power. The performance analysis is carried out by designing the models in Verilog HDL. The MAC unit models are then simulated and synthesized in Xilinx ISE 13.2 for the Virtex-6 family at 40 nm technology. |
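As a behavioral illustration of what a MAC unit computes (not a rendering of the paper's Verilog designs), a sketch in Python with a hypothetical 40-bit accumulator that wraps on overflow.

```python
# Behavioral model of one MAC cycle: acc <- acc + a*b, within a fixed-width
# accumulator register; the 40-bit width is an illustrative assumption.
def mac(acc, a, b, acc_bits=40):
    mask = (1 << acc_bits) - 1
    return (acc + a * b) & mask

acc = 0
for a, b in [(3, 4), (5, 6), (7, 8)]:   # a dot product, term by term
    acc = mac(acc, a, b)
print(acc)   # 3*4 + 5*6 + 7*8 = 98
```

In hardware, the choice of multiplier (Array, Wallace, Dadda, ...) changes the area/delay/power of the `a * b` step while leaving this input-output behavior unchanged.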
Randomized, double-blind study of the analgesic efficacy of morphine-6-glucuronide versus morphine sulfate for postoperative pain in major surgery. | BACKGROUND
Morphine-6-glucuronide (M6G) has promising preclinical characteristics and encouraging pharmacokinetic features for acute nociceptive pain. Early studies have shown a good safety profile compared with morphine sulfate, although a mixed picture emerged in studies using surrogate pain models. A study was therefore designed to evaluate the efficacy and safety profile of M6G in a clinical setting.
METHODS
The authors conducted a double-blind, randomized, dose-finding study of patients scheduled to undergo major joint replacement. One hundred patients of both sexes were included, with 50 patients in each group. A loading dose of 10 mg of study medication was given intravenously at induction of anesthesia, and two further doses were allowed during surgery if required. Bolus doses via a patient-controlled analgesia system were given subcutaneously at 2 mg/dose and set at a 10-min lockout. Assessments of pain intensity and relief were recorded during the 24-h period.
RESULTS
There were no statistically significant differences between the treatments for 24-h mean pain intensity. However, pain intensity was significantly higher in the M6G group than in the morphine group at 30 min and 1 h. There was no statistical difference in 24-h mean pain relief or retrospective pain scores at any time point during the 24-h period. The severity of sedation was significantly greater in the morphine group than in the M6G group at 30 min, 1 h, 2 h, and 24 h. Respiratory depression was greater in the morphine group than in the M6G group, and more patients in the morphine group withdrew from the study because of respiratory depression.
CONCLUSIONS
Overall, M6G has an analgesic effect similar to that of morphine over the first 24 h postoperatively. However, M6G may have a slower initial onset than morphine; therefore, a larger initial dose may be required. |
Grammar induction from (lots of) words alone | Grammar induction is the task of learning syntactic structure in a setting where that structure is hidden. Grammar induction from words alone is interesting because it is similar to the problem that a child learning a language faces. Previous work has typically assumed richer but cognitively implausible input, such as POS-tag-annotated data, which makes that work less relevant to human language acquisition. We show that grammar induction from words alone is in fact feasible when the model is provided with sufficient training data, and present two new streaming or mini-batch algorithms for PCFG inference that can learn from millions of words of training data. We compare the performance of these algorithms to a batch algorithm that learns from less data. The mini-batch algorithms outperform the batch algorithm, showing that cheap inference with more data is better than intensive inference with less data. Additionally, we show that the harmonic initialiser, which previous work identified as essential when learning from small POS-tag-annotated corpora (Klein and Manning, 2004), is not superior to a uniform initialisation. |
The Toronto Paper Matching System: An automated paper-reviewer assignment system | One of the most important tasks of conference organizers is the assignment of papers to reviewers. Reviewers' assessment of papers is a crucial step in determining the conference program and, in a certain sense, shapes the direction of a field. However, this is not a simple task: large conferences typically have to assign hundreds of papers to hundreds of reviewers, and time constraints make the task impossible for one person to accomplish. Furthermore, other constraints, such as reviewer load, have to be taken into account, preventing the process from being completely distributed. We built the first version of a system to suggest reviewer assignments for the NIPS 2010 conference, followed, in 2012, by a release that better integrated our system with Microsoft's popular Conference Management Toolkit (CMT). Since then our system has been widely adopted by the leading conferences in both the machine learning and computer vision communities. This paper provides an overview of the system, a summary of the learning models and methods of evaluation that we have been using, as well as some of the recent progress and open issues. |
Software-Defined Networking (SDN) and Distributed Denial of Service (DDoS) Attacks in Cloud Computing Environments: A Survey, Some Research Issues, and Challenges | Distributed Denial of Service (DDoS) attacks in cloud computing environments are growing due to the essential characteristics of cloud computing. With recent advances in software-defined networking (SDN), SDN-based cloud brings us new chances to defeat DDoS attacks in cloud computing environments. Nevertheless, there is a contradictory relationship between SDN and DDoS attacks. On one hand, the capabilities of SDN, including software-based traffic analysis, centralized control, global view of the network, dynamic updating of forwarding rules, make it easier to detect and react to DDoS attacks. On the other hand, the security of SDN itself remains to be addressed, and potential DDoS vulnerabilities exist across SDN platforms. In this paper, we discuss the new trends and characteristics of DDoS attacks in cloud computing, and provide a comprehensive survey of defense mechanisms against DDoS attacks using SDN. In addition, we review the studies about launching DDoS attacks on SDN, as well as the methods against DDoS attacks in SDN. To the best of our knowledge, the contradictory relationship between SDN and DDoS attacks has not been well addressed in previous works. This work can help to understand how to make full use of SDN's advantages to defeat DDoS attacks in cloud computing environments and how to prevent SDN itself from becoming a victim of DDoS attacks, which are important for the smooth evolution of SDN-based cloud without the distraction of DDoS attacks. |
An association analysis of lipid profile and diabetic cardiovascular autonomic neuropathy in a Chinese sample | Recent studies have shown that triglyceride (TG), low-density lipoprotein cholesterol (LDL), and high-density lipoprotein cholesterol (HDL) are related to the prevalence of cardiovascular autonomic neuropathy (CAN). However, little is known about the association of the lipid profile with diabetic cardiovascular autonomic neuropathy (DCAN), or its severity, in the Chinese population. The purpose of this study is to explore the extent of this phenomenon using a Chinese sample. A subgroup analysis of 455 diabetic patients with undiagnosed DCAN was performed to evaluate the relationship between lipid profile and DCAN. DCAN was diagnosed if there were at least two abnormal cardiovascular autonomic reflex test results, based on short-term heart rate variability tests. Multivariable logistic regression (MLR) was carried out to control for potential confounders when determining the independent association of variables with DCAN in different models. MLR analysis indicated that TG was significantly and independently associated with DCAN when controlling for confounding factors (P < 0.1 for two models). Additionally, TG combined with TC (LRS-1) and LDL (LRS-2) was associated with this outcome (P < 0.1 for LRS-1 and LRS-2). Our findings indicate that TG and the lipid profile are significantly and independently associated with DCAN and its severity, respectively. ClinicalTrials.gov Identifier: NCT02461472, retrospectively registered 2 Jun, 2015 |
DStore: A Holistic Key-Value Store Exploring Near-Data Processing and On-Demand Scheduling for Compaction Optimization | Log-structured merge tree (LSM-tree)-based key-value stores are widely deployed in large-scale storage systems. The underlying reason is that traditional relational databases cannot reach the high performance required by big-data applications. As high-throughput alternatives to relational databases, LSM-tree-based key-value stores can support high-throughput write operations and provide high sequential bandwidth in storage systems. However, the compaction process triggers write amplification and is confronted with degraded write performance, especially under update-intensive workloads. To address this issue, we design DStore, a holistic key-value store that explores near-data processing (NDP) and on-demand scheduling for compaction optimization in an LSM-tree key-value store. DStore makes full use of the various computing capacities in the host-side and device-side subsystems. DStore dynamically divides the whole host-side compaction workload between these two subsystems according to their different computing capabilities. Meanwhile, the device must be featured with an NDP model. The divided compaction tasks are performed by the host and the device in parallel. In DStore, the NDP-based devices exhibit low-latency and high-bandwidth performance, thus facilitating key-value stores. DStore not only accomplishes compaction for key-value stores but also improves system performance. We implement our DStore prototype on a real-world platform, and different kinds of testbeds are employed in our experiments. LevelDB and a static compaction optimization using the NDP model (called Co-KV) are compared with DStore in our evaluation. Results show that DStore achieves about a 3.7× performance improvement over LevelDB under the db_bench workload. In addition, DStore-enabled key-value stores outperform LevelDB by about 3.3× in throughput and 77% in latency under the YCSB benchmark. |
Open-label phase I trial of vandetanib in combination with mFOLFOX6 in patients with advanced colorectal cancer | Background: Vandetanib (ZACTIMA™) is a once-daily oral inhibitor of vascular endothelial growth factor, epidermal growth factor and RET receptor tyrosine kinases. The safety and tolerability of vandetanib plus mFOLFOX6 was investigated in patients with advanced colorectal cancer (CRC). Methods: Patients eligible for first- or second-line chemotherapy received once-daily oral doses of vandetanib (100 or 300 mg) plus 14-day treatment cycles of mFOLFOX6. Results: Seventeen patients received vandetanib 100 mg (n = 9) or 300 mg (n = 8) plus mFOLFOX6. The protocol definition of a tolerable dose (vandetanib-related dose-limiting toxicity [DLT] in less than two patients) was met in both dose cohorts, with one DLT of diarrhoea reported in each. Overall, the most common adverse events were diarrhoea, nausea and lethargy (all n = 11). There was no pharmacokinetic interaction between vandetanib and mFOLFOX6. Preliminary efficacy results included one complete response and three confirmed partial responses. Conclusions: In patients with advanced CRC, once-daily vandetanib (100 or 300 mg) with mFOLFOX6 was generally well tolerated. |
Bacterial colony counting with Convolutional Neural Networks in Digital Microbiology Imaging | Counting bacterial colonies on microbiological culture plates is a time-consuming, error-prone, nevertheless essential quantitative task in Clinical Microbiology Laboratories. With this work we explore the possibility to find effective solutions to the above issue by designing and testing two different machine learning approaches. The first one is based on the extraction of a complete set of handcrafted morphometric and radiometric features used within a Support Vector Machines solution. The second one is based on the design and configuration of a Convolutional Neural Networks deep learning architecture. To validate, in a real and challenging clinical scenario, the proposed bacterial load estimation techniques, we built and publicly released a fully labeled large and representative database of both single and aggregated bacterial colonies extracted from routine clinical laboratory culture plates. Dataset enhancement approaches have also been experimentally tested for performance optimization. The adopted deep learning approach outperformed the handcrafted feature based one, and also a conventional reference technique, by a large margin, becoming a preferable solution for the addressed Digital Microbiology Imaging quantification task, especially in the emerging context of Full Laboratory Automation systems. |
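A minimal sketch of a CNN patch classifier of the general kind described above, assuming TensorFlow/Keras; the 64×64 patch size, layer sizes and seven count classes are illustrative assumptions, not the paper's evaluated architecture.

```python
# Small CNN that maps an image patch to a colony-count class.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(7, activation='softmax'),   # e.g. 1..6 colonies + outlier
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```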
Detecting Duplicate Posts in Programming QA Communities via Latent Semantics and Association Rules | Programming community-based question-answering (PCQA) websites such as Stack Overflow enable programmers to find working solutions to their questions. Despite detailed posting guidelines, duplicates of questions that have already been answered are frequently created. To tackle this problem, Stack Overflow provides a mechanism for reputable users to manually mark duplicate questions. This is a laborious effort, and many duplicate questions remain undetected. Existing duplicate detection methodologies from traditional community-based question-answering (CQA) websites are difficult to adopt directly in PCQA, as PCQA posts often contain source code, which is linguistically very different from natural languages. In this paper, we propose a methodology designed for the PCQA domain to detect duplicate questions. We model the detection as a classification problem over question pairs. To extract features for question pairs, our methodology leverages continuous word vectors from the deep learning literature, topic model features, and phrase pairs that co-occur frequently in duplicate questions, mined using machine translation systems. These features capture semantic similarities between questions and produce a strong performance for duplicate detection. Experiments on a range of real-world datasets demonstrate that our method works very well; in some cases over 30% improvement compared to state-of-the-art benchmarks. As a product of one of the proposed features, the association score feature, we have mined a set of associated phrases from duplicate questions on Stack Overflow and have made the dataset publicly available. |
Trading Volume and Serial Correlation in Stock Returns | This paper investigates the relationship between aggregate stock market trading volume and the serial correlation of daily stock returns. For both stock indexes and individual large stocks, the first-order daily return autocorrelation tends to decline with volume. The paper explains this phenomenon using a model in which risk-averse "market makers" accommodate buying or selling pressure from "liquidity" or "noninformational" traders. Changing expected stock returns reward market makers for playing this role. The model implies that a stock price decline on a high-volume day is more likely than a stock price decline on a low-volume day to be associated with an increase in the expected stock return. |
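A minimal sketch of the paper's core measurement (first-order return autocorrelation conditioned on trading volume), run on synthetic data; the median-volume split and all numbers are illustrative.

```python
# Correlation between today's and tomorrow's return, split by volume level.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'ret': rng.normal(0, 0.01, size=2000),               # daily returns
    'volume': rng.lognormal(mean=0, sigma=0.5, size=2000),
})

high = df['volume'] > df['volume'].median()
next_ret = df['ret'].shift(-1)

for label, mask in [('high-volume days', high), ('low-volume days', ~high)]:
    rho = df.loc[mask, 'ret'].corr(next_ret[mask])
    print(f"corr(r_t, r_t+1) after {label}: {rho:+.3f}")
```

On real index data the paper's finding would appear as a lower (more negative) coefficient in the high-volume bucket.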
Modernism, Race and Manifestos: Contents | 1. Introduction: manifestos, race, and modernity Part I. Cosmopolitan London, 1906-14: 2. Women's suffrage melodrama and burlesque 3. Futurism's music hall and India docks 4. Vorticism's cabaret modernism and racial spectacle Part II. Transnational Modernisms, 1934-8: 5. Nancy Cunard's negro and black transnationalism 6. Reading across the Color Line: Virginia Woolf, C. L. R. James, and Suzanne and Aime Cesaire Epilogue: manifestos: then and now Index. |
Effects of safflower seed extract supplementation on oxidation and cardiovascular risk markers in healthy human volunteers. | We previously demonstrated that safflower seed extract (SSE) and its major antioxidant constituents, serotonin hydroxycinnamic acid amides, suppressed LDL oxidation in vitro, decreased plasma autoantibody titres to oxidized LDL and attenuated atherosclerotic lesion formation in apoE-deficient mice. In this report, we examined whether SSE, rich in serotonin derivatives, could affect markers of oxidative stress, inflammation and aortic stiffness in healthy human subjects. Twenty Japanese male volunteers were studied at baseline, after 2.1 g SSE supplementation daily (providing 290 mg serotonin derivatives/d) for 4 weeks, and after a 4-week washout period. Significant reductions in circulating oxidized LDL, autoantibody titres to malondialdehyde-modified LDL, the soluble form of vascular cell adhesion molecule-1 (sVCAM-1), and urinary 8-isoprostane were observed after a 4-week intervention. Although there were no statistically significant differences in blood pressure or brachial-ankle pulse wave velocity (baPWV), an index of arterial stiffness, baPWV was lower than baseline in eleven of twenty subjects and was accompanied by a reduction in blood pressure. Statistically significant negative correlations were observed between the extent of initial cardiovascular risk markers (autoantibody titres, 8-isoprostane, sVCAM-1 and baPWV) and the effect of intervention. This suggested that individuals with elevated oxidative stress, inflammation, and/or arterial stiffness may receive more benefit from SSE supplementation. |
Planned complex suicide. An unusual suicide by hanging and gunshot. | We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms. |
Generative Face Completion | In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging, as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses, and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
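A minimal sketch of the four-term training objective the abstract lists, assuming a PyTorch setup; the loss weights, module outputs, and variable names here are placeholders, not the paper's actual values.

```python
# Combined generator loss: reconstruction + global/local adversarial + parsing.
import torch
import torch.nn.functional as F

def completion_loss(pred, target, d_global_logits, d_local_logits,
                    parsing_pred, parsing_target,
                    w_rec=1.0, w_adv=0.001, w_sem=0.01):
    rec = F.l1_loss(pred, target)                       # pixel faithfulness
    adv_g = F.binary_cross_entropy_with_logits(         # global discriminator
        d_global_logits, torch.ones_like(d_global_logits))
    adv_l = F.binary_cross_entropy_with_logits(         # local (patch) discriminator
        d_local_logits, torch.ones_like(d_local_logits))
    sem = F.cross_entropy(parsing_pred, parsing_target) # semantic parsing consistency
    return w_rec * rec + w_adv * (adv_g + adv_l) + w_sem * sem
```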
Drip Irrigation Management Affects Celery Yield and Quality | Trials in nine commercial celery (Apium graveolens L.) fields were conducted between 1997 and 1999 to evaluate grower drip irrigation management practices and their effects on yield and quality. Surface drip irrigation tapes with flow rates higher and lower than the grower-installed tapes were spliced into the field system; as the cooperating growers irrigated and applied N fertigation according to their routine practices, these drip tapes delivered either more or less water and N than the field drip system. Total grower water application during the drip-irrigated portion of the season ranged from 85% to 414% of seasonal reference evapotranspiration (ETo). Water volume per irrigation varied among fields from 1.8 to 3.8 cm, with irrigation frequency varying from an average of every other day to once a week. Grower management of drip irrigation was not consistently successful in maintaining soil water tension (SWT) in a desirable range. SWT was often below –30 kPa, and in some cases below –70 kPa. These transient stresses were more often a result of inappropriate irrigation frequency than of applied water volume. In four of the fields, plots receiving less water than that delivered by the field system produced equivalent marketable yield and quality, indicating a significant potential for water savings. An economically important incidence of petiole pithiness (collapse of parenchyma tissue) was observed in four fields. Infrequent irrigation under high ETo summer conditions, rather than irrigation volume applied, appeared to be the major factor in pith development. N fertigation amount and crop N status appeared to be unrelated to pithiness severity. We conclude that celery drip irrigation management could be substantially improved by maintaining a closer proportionality between irrigation and crop evapotranspiration (ETc), increasing irrigation frequency, and reducing volume per irrigation. As the cooperating growers managed the fields according to their routine practices, these plots received either more or less water (and fertigated N) than the plots irrigated by the field system. Among the various drip tapes used, emitter spacing ranged from 20–40 cm, with emitter output ranging from ≈0.5 to 1.0 L·h⁻¹. Three to five irrigation rates were evaluated in each field. Experimental designs were randomized complete blocks with four replications, with individual plots 4 beds wide × 7.5 m long. The plots were arranged sequentially along the same four beds. All data were collected from the middle beds. No precipitation was received during the drip-irrigated portion of the season except in fields 2 and 9, which received 1.8 and 1.1 cm, respectively, in the week before harvest. Irrigation volume applied in the experimental area was recorded by an in-line water meter on each of the two middle beds of the test plots. The flow from three individual emitters in each plot was periodically captured to document the relative flow rates in the various treatments. A tensiometer (Irrometer Co., Riverside, Calif.) was installed 25–30 cm deep in the plant row in each plot to document soil water tension prior to each irrigation. Computerized weather stations from the California Irrigation Management Information System network (CIMIS; Snyder and Pruitt, 1992) provided daily reference evapotranspiration (ETo, modified Penman) values for each field.
Plant canopy development (percentage of ground area covered) was visually estimated in each field several times in the growing season. Cooperating growers were asked to provide details of their N applications. Plant N status was evaluated at midseason by petiole sampling and at harvest by sampling the marketable portion of whole plants. Tissue samples were oven-dried and ground. Petiole tissue was extracted in 2% acetic acid and analyzed for NO3-N concentration by the method of Carlson et al. (1990). Total N concentration in the harvest samples was determined by a combustion technique (CarloErba 1500; Fisons Instruments, Beverly, Mass.). The plots were harvested at commercial maturity. Data collected included mean total and marketable mass, and incidence of petiole pithiness. Plants having two or more marketable petioles showing parenchyma breakdown were considered to have an objectionable level of the disorder; the percentage of plants showing this level of the disorder was recorded. The significance of irrigation treatment effects on celery yield and incidence of pithiness was determined using orthogonal contrasts to compare individual irrigation treatments with the field irrigation rate. |
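As an illustrative (not source-derived) aid to the study's recommendation of keeping irrigation proportional to crop evapotranspiration (ETc) and avoiding stressful soil water tension, a toy bookkeeping function might look like the following; the function name and sample numbers are assumptions, while the -30 kPa threshold comes from the abstract.

```python
# Track applied water as a fraction of ETc and count stressful SWT readings.
def irrigation_report(applied_cm, etc_cm, swt_kpa_readings, stress_kpa=-30.0):
    ratio = sum(applied_cm) / sum(etc_cm)        # the study suggests keeping this near 1.0
    stressed = [swt for swt in swt_kpa_readings if swt < stress_kpa]
    return {"applied_to_ETc": round(ratio, 2),
            "stress_events": len(stressed)}      # SWT readings below -30 kPa

print(irrigation_report([2.5, 2.5, 3.0], [2.0, 2.2, 2.4], [-12, -35, -28, -71]))
# {'applied_to_ETc': 1.21, 'stress_events': 2}
```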
Deep-UV Curing of Poly(4-Vinyl Phenol) Gate Dielectric for Hysteresis-Free Organic Thin-Film Transistors | A simple method is presented to remedy the hysteresis problem associated with the gate dielectric of poly(4-vinyl phenol) (PVPh), which is widely used for organic transistors. The method involves simple blanket illumination of deep ultraviolet (UV) on the PVPh layer at room temperature. The exposure results in the photochemical transformation of hydroxyl groups in PVPh via the UV/ozone effect. This reduction in the concentration of hydroxyl groups enables one to effectively control the hysteresis problem even when the layer is exposed to moisture. The contrast created in the concentration of hydroxyl groups between the exposed and unexposed parts of PVPh also allows simultaneous patterning of the dielectric layer. |
The WikEd Error Corpus: A Corpus of Corrective Wikipedia Edits and Its Application to Grammatical Error Correction | This paper introduces the freely available WikEd Error Corpus. We describe the data mining process from Wikipedia revision histories, corpus content and format. The corpus consists of more than 12 million sentences with a total of 14 million edits of various types. As one possible application, we show that WikEd can be successfully adapted to improve a strong baseline in a task of grammatical error correction for English-as-a-Second-Language (ESL) learners’ writings by 2.63%. Used together with an ESL error corpus, a composed system gains 1.64% when compared to the ESL-trained system. |
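A hedged sketch of the core mining step such a corpus implies: aligning an old and a new revision of a sentence and extracting the changed spans. This uses Python's standard difflib and is an assumed simplification, not the WikEd pipeline itself.

```python
# Extract (old_span, new_span) edit pairs from two revisions of a sentence.
import difflib

def extract_edits(old: str, new: str):
    old_toks, new_toks = old.split(), new.split()
    sm = difflib.SequenceMatcher(a=old_toks, b=new_toks)
    edits = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":  # 'replace', 'delete', or 'insert'
            edits.append((" ".join(old_toks[i1:i2]), " ".join(new_toks[j1:j2])))
    return edits

print(extract_edits("He go to school yesterday .", "He went to school yesterday ."))
# [('go', 'went')]
```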
Challenges on Large Scale Surveillance Video Analysis | Large scale surveillance video analysis is one of the most important components of the future artificial intelligent city. It is a very challenging but practical system, consisting of multiple functionalities such as object detection, tracking, identification, and behavior analysis. In this paper, we address three tasks hosted in the NVIDIA AI City Challenge contest. First, we propose a system that transforms image coordinates to world coordinates, which is useful for estimating vehicle speed on the road. Second, anomalies such as car crashes and stalled vehicles can be found by the proposed anomaly detection framework. Third, the multi-camera vehicle re-identification problem is investigated and a matching algorithm is explained. All these tasks are based on our proposed online single-camera multiple object tracking (MOT) system, which has been evaluated on the widely used MOT16 challenge benchmark. We show that it achieves the best performance compared to state-of-the-art methods. Besides MOT, we evaluate the proposed vehicle re-identification model on the VeRi-776 dataset, where it outperforms all other methods by a large margin.
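A speculative sketch of the first task's geometry (image-to-world transformation for speed estimation), assuming a pre-calibrated planar homography H; the calibration itself, the frame rate, and all names are assumptions, not details from the paper.

```python
# Map pixel coordinates onto the road plane and estimate speed from displacement.
import numpy as np

def to_world(H: np.ndarray, px: float, py: float):
    """Project a pixel onto the ground plane via a 3x3 homography H."""
    x, y, w = H @ np.array([px, py, 1.0])
    return x / w, y / w

def speed_kmh(H, p_prev, p_curr, fps=30.0):
    (x0, y0), (x1, y1) = to_world(H, *p_prev), to_world(H, *p_curr)
    meters = np.hypot(x1 - x0, y1 - y0)   # displacement over one frame interval
    return meters * fps * 3.6             # m/frame -> m/s -> km/h
```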
Detection of Behavior Change in People with Depression | Major Depressive Disorder (MDD) is the most common mental health disorder and remains a leading cause of disability and lost productivity, with huge costs for society. MDD has high rates of relapse and recurrence, and it has strong correlations with feelings of low social support and disrupted sleep. However, MDD is also commonly misdiagnosed by primary care providers, which leads to delayed treatment and unnecessary suffering. Changes in technology now make it possible to cheaply and effectively monitor social and sleep behaviors, offering the potential of early detection of the onset of MDD. We report on the design of Big Black Dog, a smartphone-based system for gathering data about social and sleep behaviors. We also report on the results of a pilot evaluation to understand the feasibility of gathering and using data from smartphones for inferring the onset of depression. Introduction Each year 7% of Americans experience an episode of major depression [NIMH, 2011]. As a leading disability, depression has huge costs in terms of reduced productivity and absenteeism [RAND, 2008]. Most people seek help from their primary care provider (PCP); however, PCPs fail to recognize depression symptoms 65% of the time [Jencks, 1985; Coyne et al., 1995]. The delay in diagnosis and treatment increases the time people suffer from this condition. Research has shown that early detection of a first episode or of a recurrent episode can have a major positive impact [Halfin, 2007; Kupfer et al., 1989]. MDD has high rates of relapse and recurrence, and it has strong correlations with feelings of low social support and disrupted sleep. For example, a lack of social support has been found to predict depression as well as many other health-related issues [Sias and Bartoo, 2007]. Past work has also found that sleep disorders are correlated with depression [Livingston et al., 1993], and disrupted sleep has been found to predict recurrences [Perlis et al., 1997]. The meteoric adoption of smartphones offers a tantalizing opportunity, in that many people now carry a networked, sensor-rich device almost everywhere they go. These changes make it possible to cheaply and effectively monitor people's activities and behaviors, which could then be used to detect the early onset of MDD. Towards this end, we have developed a system called Big Black Dog (BBD) to detect the onset of major depression, allowing for earlier diagnosis and treatment of first episodes, relapses, and recurrences. Our angle of attack is to capture data about and model social behaviors and sleep behaviors. This paper reports on the design and on a pilot study to understand the feasibility of our approach. Related Work Researchers have investigated a number of behavioral signals to detect the mental state of people, using such approaches as brain signals (Stewart et al., 2010), heart rate (Vikram et al., 2002), blood pressure (Shinn et al., 2001), voice prosody (Cohn et al., 2009; France et al., 2000), and facial expression (Cohn et al., 2009) as proxies for psychophysiological information. EEGs, heart rate trackers, and skin conductance sensors provide rich streams of data; however, they are cumbersome to wear, often difficult to use, and typically limited to being used in clinics. Text mining has also been investigated as a method to detect depression. De Choudhury et al. used tweets to detect depression (De Choudhury et al., 2013).
Important indicators included a decrease in social activity, raised negative affect, highly clustered ego networks, heightened relational and medicinal concerns, and greater expression of religious involvement. Smartphones offer a rich set of built-in sensors, including accelerometers, location (GPS, WiFi ID, signal strength), light, and microphone. In past work, we used call logs, text logs, and contact list data to model social behaviors. In other past work, we used smartphone sensor data to model sleep quantity and sleep quality. BBD uses many of the same features from these two systems and applies them in a pilot study specifically for depression. The most relevant piece of past work is Mobilyze (Burns et al., 2011), a smartphone app that collects sensor data to detect the user's cognitive state. Mobilyze uses machine learning models to predict mood, emotions, cognitive/motivational states, activities, environmental context, and social context. This system has been used for intervention and not for depression detection. Our research builds on this previous work, focusing on detecting the early onset of depression. Unlike Mobilyze, which constantly prompts users for ground truth data, we only ask for a weekly survey. Our goal is to track behavior without any effort from the users. In the following sections, we report on the design of the BBD mobile application, followed by the pilot study and preliminary results. Overview of the Data Collection app We designed and implemented BBD as an Android app that uploads captured data every day to our server (see Fig. 1). BBD collects sensor data that might reveal behavioral and environmental factors, including noise amplitude (from the microphone), location, WiFi SSIDs, light intensity (from the ambient light sensor), and movement (from the accelerometer). To minimize power consumption, each of these sensors captures data on a relatively light duty cycle (see Table 1). If the battery charge decreases below 30%, the phone samples the information less frequently. If the battery is very low (below 15%), we pause the logging. BBD also captures device states, such as screen on/off, apps currently running, and the battery-charging state. For example, "screen on" in the middle of the night is a signal that a user is probably not asleep. Lastly, BBD collects daily call logs and text messages from the phone. BBD stores captured data in a database in the protected storage area of the phone. It creates a new database each day and uploads the previous database to the server. This strategy reduces the risk of data loss and complications that can come when attempting to upload large files. Figure 1. System overview of BBD. [Table 1, truncated in extraction: per-sensor sampling schedule under normal, low-battery, and very-low-battery conditions; e.g., sound (1 Hz) is sampled every 2 minutes with a 1-minute break and stopped at very low battery.]
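The battery-aware duty cycling described above (sample on a light schedule, sample less below 30% charge, pause below 15%) can be mocked up as follows; the base intervals are illustrative stand-ins, since the original Table 1 is truncated.

```python
# Toy version of BBD's battery-aware duty cycling.
def next_sample_interval(sensor: str, battery_pct: float, base_seconds: dict):
    if battery_pct < 15:
        return None                   # very low battery: pause logging entirely
    interval = base_seconds[sensor]
    if battery_pct < 30:
        interval *= 2                 # low battery: sample less frequently
    return interval

# Example with made-up base schedules (seconds), not the paper's Table 1 values:
schedule = {"sound": 120, "location": 300, "light": 60}
print(next_sample_interval("sound", 25, schedule))  # -> 240
```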
From space syntax to space semantics: a behaviorally and perceptually oriented methodology for the efficient description of the geometry and topology of environments | Human spatial behavior and experience cannot be investigated independently from the shape and configuration of environments. Therefore, comparative studies in architectural psychology and spatial cognition would clearly benefit from operationalizations of space that provide a common denominator for capturing its behavioral and psychologically relevant properties. This paper presents theoretical and methodological issues arising from the practical application of isovist-based graphs for the analysis of architectural spaces. Based on recent studies exploring the influence of spatial form and structure on behavior and experience in virtual environments, the following topics are discussed: (1) the derivation and empirical verification of meaningful descriptor variables on the basis of classic qualitative theories of environmental psychology relating behavior and experience to spatial properties; (2) methods to select reference points for the analysis of architectural spaces at a local level; furthermore, based on two experiments exploring the phenomenal conception of the spatial structure of architectural environments, formalized strategies for (3) the selection of reference points at a global level, and for (4) their integration into a sparse yet plausible comprehensive graph structure, are proposed. Taken together, a well-formalized and psychologically oriented methodology for the efficient description of spatial properties of environments at the architectural scale level is outlined. This method appears useful for a wide range of applications, ranging from abstract architectural analysis through behavioral experiments to studies on mental representations in cognitive science. … because, in reality, various potentially relevant factors coexist. In order to obtain better predictions under such complex conditions, either a comprehensive model or at least additional knowledge on the relative weights of individual factors and their potential interactions is required. As an intermediate step towards such more comprehensive approaches, existing theories have to be formulated qualitatively and translated to a common denominator. In this paper an integrative framework for describing the shape and structure of environments is outlined that allows for a quantitative formulation and test of theories on behavioral and emotional responses to environments. It is based on the two basic elements, isovist and place graph. This combination appears particularly promising, since its sparseness allows an efficient representation of both geometrical and topological properties at a wide range of scales, and at the same time it seems capable and flexible enough to retain a substantial share of psychologically and behaviorally relevant detail features. Both the isovist and the place graph are established analysis techniques within their scientific communities of space syntax and spatial cognition, respectively.
Previous combinations of graphs and isovists (e.g., Batty, 2001; Benedikt, 1979; Turner et al., 2001) were based on purely formal criteria, whereas many place-graph applications made use of their inherent flexibility but suffered from a lack of formalization (cf. Franz et al., 2005a). The methodology outlined in this paper seeks to combine both approaches by defining well-formalized rules for flexible graphs based on empirical findings on the human conception of spatial structure. In sections 3 and 4, methodological issues of describing local properties on the basis of isovists are discussed. This will be done on the basis of recent empirical studies that tested the behavioral relevance of a selection of isovist measurands. The main issues are (a) the derivation of meaningful isovist measurands, based on classic qualitative theories from environmental psychology, and (b) strategies to select reference points for isovist analysis in environments consisting of few subspaces. Sections 5 and 6 then discuss issues arising when using an isovist-based description system for operationalizing larger environments consisting of multiple spaces: (c) on the basis of an empirical study in which humans identified subspaces by marking their centers, psychologically plausible selection criteria for sets of reference points are proposed and formalized; (d) a strategy to derive a topological graph on the basis of the previously identified elements is outlined. Taken together, a viable methodology is proposed which describes spatial properties of environments efficiently and comprehensively in a psychologically and behaviorally plausible manner.
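As a hedged, self-contained illustration of the isovist concept underlying this methodology (not the authors' code): from a reference point in a closed polygonal environment, cast rays to the nearest walls and summarize the visible region, here by its approximate area.

```python
# Ray-casting isovist sketch: area visible from one point in a closed polygon.
import math

def ray_segment_hit(ox, oy, dx, dy, ax, ay, bx, by):
    """Distance along the ray (ox,oy)+t*(dx,dy) to segment a-b, or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None  # ray parallel to segment
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    return t if t > 0 and 0 <= u <= 1 else None

def isovist_area(origin, wall_segments, n_rays=720):
    # Assumes a closed environment, so every ray hits at least one wall.
    ox, oy = origin
    radii = []
    for k in range(n_rays):
        ang = 2 * math.pi * k / n_rays
        dx, dy = math.cos(ang), math.sin(ang)
        hits = [ray_segment_hit(ox, oy, dx, dy, *seg) for seg in wall_segments]
        radii.append(min(h for h in hits if h is not None))
    # Approximate area as a fan of thin triangles around the origin.
    dtheta = 2 * math.pi / n_rays
    return sum(0.5 * radii[i] * radii[(i + 1) % n_rays] * math.sin(dtheta)
               for i in range(n_rays))

# A 10x10 square room viewed from its center has isovist area ~100:
walls = [(0, 0, 10, 0), (10, 0, 10, 10), (10, 10, 0, 10), (0, 10, 0, 0)]
print(round(isovist_area((5, 5), walls), 1))
```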
Improvement in clinical markers in CF patients using a reduced glutathione regimen: an uncontrolled, observational study. | CFTR mutation, which causes cystic fibrosis (CF), has also recently been identified as causing glutathione system dysfunction and systemic deficiency of reduced glutathione (GSH). Such dysfunction and deficiency regarding GSH may contribute to the pathophysiology of CF. We followed 13 patients (age range 1-27 years) with cystic fibrosis who were using a regimen of reduced glutathione (GSH), including oral glutathione and inhaled buffered glutathione in an uncontrolled, observational study. Dosage ranged from 66-148 mg/kg/day in divided doses, and the term examined was the initial 5.5 months of GSH use (45 days of incrementally adjusted dose, plus 4 months of use at full dosage). Baseline and post-measurements of FEV1 percent predicted, BMI percentile, and weight percentile were noted, in addition to bacterial status and pulmonary exacerbations. Significant improvement in the following clinical parameters was observed: average improvement in FEV1 percent predicted (N=10) was 5.8 percentage points (p<0.0001), average weight percentile (N=13) increased 8.6 points (p<0.001), BMI percentile (N=11) improved on average 1.22 points (p<0.001). All patients improved in FEV1 and BMI, if measured in their case; 12 of 13 patients improved in weight percentile. Positive sputum cultures of bacteria in 11 patients declined from 13 to 5 (p<0.03) with sputum cultures of Pseudomonas aeruginosa becoming negative in 4 of 5 patients previously culturing PA, including two of three patients chronically infected with PA as determined by antibody status. Use of a daily GSH regimen appears to be associated in CF patients with significant improvement in lung function and weight, and a significant decline in bacteria cultured in this uncontrolled study. These findings bear further clinical investigation in larger, randomized, controlled studies. |
Design, modeling and simulation of a green building energy system | Traditional buildings consume more energy than necessary and generate a variety of emissions and waste. The solution to these problems is to build them green and smart. One of the significant components in the concept of smart green buildings is the use of renewable energy. Solar energy and wind energy are intermittent sources, so they have to be combined with other energy sources or storage devices. While batteries and/or supercapacitors are an ideal choice for short-term energy storage, regenerative hydrogen-oxygen fuel cells are a promising candidate for long-term energy storage. This paper designs and tests a green building energy system that consists of renewable energy, energy storage, and energy management. The paper presents the architecture of the proposed green building energy system and a simulation model that allows for the study of advanced control strategies for the system. An example green building energy system is tested, and simulation results show that the variety of energy sources and storage devices can be managed very well.
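A hypothetical dispatch rule matching the storage split described above, with batteries absorbing short-term swings and the hydrogen loop (electrolyzer/fuel cell) serving as long-term storage; all capacities, efficiencies, and names are invented for illustration and are not from the paper.

```python
# Toy one-step dispatch: battery first, hydrogen loop for the remainder.
def dispatch(renewable_kw, load_kw, battery_kwh, h2_kwh, dt_h=1.0,
             batt_cap=50.0, batt_eff=0.9, h2_eff=0.5):
    net = (renewable_kw - load_kw) * dt_h            # surplus (+) or deficit (-) energy
    if net >= 0:
        to_batt = min(net * batt_eff, batt_cap - battery_kwh)
        battery_kwh += to_batt
        h2_kwh += (net - to_batt / batt_eff) * h2_eff    # leftover -> electrolyzer
    else:
        from_batt = min(-net, battery_kwh)               # discharge losses ignored here
        battery_kwh -= from_batt
        h2_kwh -= (-net - from_batt) / h2_eff            # remainder -> fuel cell
        # In this toy model h2_kwh can go negative, signalling unmet load.
    return battery_kwh, h2_kwh

print(dispatch(renewable_kw=20, load_kw=10, battery_kwh=45, h2_kwh=100))
```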
Mometasone furoate in the treatment of vulvar lichen sclerosus: could its formulation influence efficacy, tolerability and adherence to treatment? | PURPOSE
To assess the effectiveness, tolerability, and convenience of the cream formulation of mometasone furoate 0.1% (MMF) in the treatment of active vulvar lichen sclerosus (VLS) and to compare the cream with the ointment formulation.
METHODS
The following efficacy parameters were assessed in 27 VLS patients treated with MMF cream for 12 weeks (group A): (i) response rate, (ii) percentage of patients achieving an improvement from baseline of ≥75% in subjective and objective scores, and (iii) mean reduction in subjective and objective scores. These efficacy assessments, as well as those regarding safety and adherence, were compared with the assessments recorded among 37 VLS patients treated with MMF ointment (group B).
RESULTS
59.3% (group A) and 78.4% (group B) of patients were considered responders; 44.4% and 40.7% of patients in group A and 54.1% and 45.9% in group B achieved an improvement of at least 75% in subjective and objective scores, respectively. MMF ointment obtained a significantly higher improvement in symptom scores in comparison with the cream formulation.
CONCLUSIONS
MMF in ointment formulation seems to be more effective in treating active VLS in comparison with MMF cream. Both formulations are well tolerated and there is no difference in patient adherence and satisfaction. |
Disease progression models: A review and comparison | A better understanding of disease progression is beneficial for early diagnosis and appropriate individual therapy. Many different approaches to the statistical modelling of disease progression have been proposed in the literature, ranging from simple path models to complex restricted Bayesian networks. Important fields of application are diseases like cancer and HIV. Tumour progression is measured by means of chromosome aberrations, whereas people infected with HIV develop drug resistance because of genetic changes in the virus. These two very different diseases have typical courses of disease progression, which can be modelled partly by consecutive and partly by independent steps. This paper gives an overview of the different progression models and points out their advantages and drawbacks. The models are compared via simulations to analyse how they behave when some of their assumptions are violated. So far, such a comparison has not been done, and there are no established methods for comparing different progression models. This paper is a step in both directions.
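As a toy illustration of the simplest model class mentioned, a path model in which progression steps must occur in a fixed order: simulating it with made-up exponential waiting-time rates shows the kind of synthetic data such comparisons are run on.

```python
# Simulate an ordered path model of disease progression events.
import random

def simulate_path_model(rates, t_obs):
    """Return how many ordered progression steps have occurred by time t_obs."""
    t, steps = 0.0, 0
    for lam in rates:                  # one waiting-time rate per consecutive step
        t += random.expovariate(lam)
        if t > t_obs:
            break
        steps += 1
    return steps

# Made-up rates for three consecutive steps (e.g., chromosome aberrations):
samples = [simulate_path_model([0.5, 0.3, 0.1], t_obs=5.0) for _ in range(10_000)]
print(sum(samples) / len(samples))     # mean number of steps reached by t = 5
```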