title | abstract |
---|---|
Interval exchange transformations: Applications of Keane's construction and disjointness | Interval Exchange Transformations: Applications of Keane's Construction and Disjointness by Jon Chaika This thesis is divided into two parts. The first part uses a family of Interval Exchange Transformations constructed by Michael Keane to show that IETs can have some particular behavior including: 1. IETs can be topologically mixing. 2. A minimal IET can have an ergodic measure with Hausdorff dimension α for any α ∈ [0,1]. 3. The complement of the generic points for Lebesgue measure in a minimal nonuniquely ergodic IET can have Hausdorff dimension 0. Note that this is a dense Gδ set. The second part shows that almost every pair of IETs is disjoint. In particular, the product of almost every pair of IETs is uniquely ergodic. In proving this we show that any sequence of natural numbers of density 1 contains a rigidity sequence for almost every IET, strengthening a result of Veech. |
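As background for this abstract, a standard definition of an interval exchange transformation can be written as follows; the notation below is generic textbook notation, not specific to Keane's construction.

```latex
% Interval exchange transformation (IET) on [0,1): background definition.
% Given a probability vector \lambda = (\lambda_1,\dots,\lambda_n) and a
% permutation \pi of \{1,\dots,n\}, partition [0,1) into intervals
% I_j = [\beta_{j-1}, \beta_j) with \beta_j = \sum_{i \le j} \lambda_i,
% and translate each I_j so the intervals are rearranged in the order \pi:
\[
T(x) \;=\; x \;+\; \sum_{\pi(i) < \pi(j)} \lambda_i \;-\; \sum_{i < j} \lambda_i,
\qquad x \in I_j .
\]
```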
Constant Space Self-Stabilizing Center Finding in Anonymous Tree Networks | It is known that there is no self-stabilizing silent distributed algorithm for finding the center (or centers) of an anonymous tree network that uses less than O(log diam) space per process, where diam is the diameter of the tree. In this paper, a self-stabilizing, but non-silent, distributed algorithm, CSTC, for this problem is given, which takes O(diam) rounds under the unfair daemon and uses O(1) space per process. The method is to first construct a silent O(1)-space algorithm, STC, for the problem that works under the synchronous daemon, provided it has a clean start. A transformer is then constructed, which transforms any tree algorithm that is silent under the synchronous daemon given a clean start into an equivalent non-silent self-stabilizing algorithm with the same asymptotic space complexity. The desired center finding algorithm, CSTC, is then obtained by applying the transformer to STC. |
Consumption of cosmetic products by the French population. First part: frequency data. | The aim of this study was to assess the percentage of users, the frequency of use and the number of cosmetic products consumed at home by the French population. The evaluation was performed for adult, child and baby consumers. Pregnant women were also taken into account in this work. All in all, 141 cosmetic products, including general hygiene, skin care, hair care, hair styling, make-up, fragrances, solar, shaving and depilatory products, were studied. The strengths of the study were the separation of data by sex and by age group, the consideration of a priori at-risk subpopulations and the consideration of a large number of cosmetic products. These current consumption data could be useful for safety assessors and for safety agencies in order to protect the general population and these at-risk subpopulations. |
CapsuleGAN: Generative Adversarial Capsule Network | We present Generative Adversarial Capsule Network (CapsuleGAN), a framework that uses capsule networks (CapsNets) instead of the standard convolutional neural networks (CNNs) as discriminators within the generative adversarial network (GAN) setting, while modeling image data. We provide guidelines for designing CapsNet discriminators and the updated GAN objective function, which incorporates the CapsNet margin loss, for training CapsuleGAN models. We show that CapsuleGAN outperforms convolutional-GAN at modeling image data distribution on MNIST and CIFAR-10 datasets, evaluated on the generative adversarial metric and at semi-supervised image classification. |
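As context for the "updated GAN objective" mentioned above, the margin loss of Sabour et al.'s capsule networks has the form sketched below. How exactly it replaces the usual binary cross-entropy in the discriminator is a design choice of the CapsuleGAN paper; this sketch shows only the loss itself, with the standard CapsNet hyperparameter values assumed rather than taken from the paper.

```python
import numpy as np

def capsule_margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """CapsNet margin loss (Sabour et al., 2017).

    v_norms : (batch, num_classes) lengths of the output capsule vectors
    targets : (batch, num_classes) one-hot class indicators
    """
    present = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    absent = lam * (1.0 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return np.sum(present + absent, axis=-1).mean()

# Illustrative use as a two-class ("real" vs. "fake") discriminator criterion:
v = np.array([[0.95, 0.05]])      # capsule lengths for one image
t_real = np.array([[1.0, 0.0]])   # label: real
print(capsule_margin_loss(v, t_real))
```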
Verbal working memory and sentence comprehension. | This target article discusses the verbal working memory system used in sentence comprehension. We review the concept of working memory as a short-duration system in which small amounts of information are simultaneously stored and manipulated in the service of accomplishing a task. We summarize the argument that syntactic processing in sentence comprehension requires such a storage and computational system. We then ask whether the working memory system used in syntactic processing is the same as that used in verbally mediated tasks that involve conscious controlled processing. Evidence is brought to bear from various sources: the relationship between individual differences in working memory and individual differences in the efficiency of syntactic processing; the effect of concurrent verbal memory load on syntactic processing; and syntactic processing in patients with poor short-term memory, patients with poor working memory, and patients with aphasia. Experimental results from these normal subjects and patients with various brain lesions converge on the conclusion that there is a specialization in the verbal working memory system for assigning the syntactic structure of a sentence and using that structure in determining sentence meaning that is separate from the working memory system underlying the use of sentence meaning to accomplish other functions. We present a theory of the divisions of the verbal working memory system and suggestions regarding its neural basis. |
Cannabis improves night vision: a case study of dark adaptometry and scotopic sensitivity in kif smokers of the Rif mountains of northern Morocco. | Previous reports have documented an improvement in night vision among Jamaican fishermen after ingestion of a crude tincture of herbal cannabis, while two members of this group noted that Moroccan fishermen and mountain dwellers observe an analogous improvement after smoking kif, sifted Cannabis sativa mixed with tobacco (Nicotiana rustica). Field-testing of night vision has become possible with a portable device, the LKC Technologies Scotopic Sensitivity Tester-1 (SST-1). This study examines the results of double-blinded graduated THC administration 0-20 mg (as Marinol) versus placebo in one subject on measures of dark adaptometry and scotopic sensitivity. Analogous field studies were performed in Morocco with the SST-1 in three subjects before and after smoking kif. In both test situations, improvements in night vision measures were noted after THC or cannabis. It is believed that this effect is dose-dependent and cannabinoid-mediated at the retinal level. Further testing may assess possible clinical application of these results in retinitis pigmentosa or other conditions. |
Finding Actors and Actions in Movies | We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor/action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor/action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty. |
Statistical Validation of a Web-Based GIS Application and Its Applicability to Cardiovascular-Related Studies | PURPOSE
There is abundant evidence that neighborhood characteristics are significantly linked to the health of the inhabitants of a given space within a given time frame. This study statistically validates a web-based GIS application designed to support cardiovascular-related research, developed by the NIH-funded Research Centers in Minority Institutions (RCMI) Translational Research Network (RTRN) Data Coordinating Center (DCC), and discusses its applicability to cardiovascular studies.
METHODS
Geo-referencing, geocoding and geospatial analyses were conducted for 500 randomly selected home addresses in a U.S. southeastern Metropolitan area. The correlation coefficient, factor analysis and Cronbach's alpha (α) were estimated to quantify measures of the internal consistency, reliability and construct/criterion/discriminant validity of the cardiovascular-related geospatial variables (walk score, number of hospitals, fast food restaurants, parks and sidewalks).
RESULTS
Cronbach's α for the cardiovascular-related geospatial variables was 95.5%, indicating good internal consistency. Walk scores were significantly correlated with the number of hospitals (r = 0.715; p < 0.0001), fast food restaurants (r = 0.729; p < 0.0001), parks (r = 0.773; p < 0.0001) and sidewalks (r = 0.648; p < 0.0001) within a mile of home. Walk score was also significantly associated with the diversity index (r = 0.138, p = 0.0023), median household income (r = -0.181; p < 0.0001), and owner-occupied rate (r = -0.440; p < 0.0001). However, no significant correlations were found with median age, vulnerability, unemployment rate, labor force, or population growth rate.
CONCLUSION
Our data demonstrate that the geospatial data generated by the web-based application were internally consistent and showed satisfactory validity. Therefore, the GIS application may be useful in cardiovascular-related studies aimed at investigating the potential impact of geospatial factors on disease and/or the long-term effects of clinical trials. |
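The internal-consistency statistic reported in this abstract, Cronbach's α, can be computed directly from item scores. The sketch below uses synthetic data and the textbook formula, not the study's actual geospatial variables.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic example: 5 correlated "geospatial" items for 500 addresses
# (hypothetical data, only to exercise the formula).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
scores = latent + 0.3 * rng.normal(size=(500, 5))
print(round(cronbach_alpha(scores), 3))
```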
Predicting Personality from Twitter | Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions; it is useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains. |
Context Based Approach for Second Language Acquisition | SLAM 2018 focuses on predicting a student's mistake while using the Duolingo application. In this paper, we describe the system we developed for this shared task. Our system uses a logistic regression model to predict the likelihood of a student making a mistake while answering an exercise on Duolingo in all three language tracks: English/Spanish (en/es), Spanish/English (es/en) and French/English (fr/en). We conduct an ablation study with several features during the development of this system and discover that context-based features play a major role in language acquisition modeling. Our model beats Duolingo's baseline scores in all three language tracks (AUROC scores for en/es = 0.821, es/en = 0.790 and fr/en = 0.812). Our work makes a case for providing favourable textual context for students while learning a second language. |
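A minimal sketch of the kind of model this abstract describes is a logistic regression over per-token features predicting whether a learner gets a token wrong. The feature names below are illustrative placeholders, not the paper's exact feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical per-token feature dicts; string values are one-hot encoded,
# numeric values pass through unchanged.
train_feats = [
    {"token": "gato", "prev_token": "el", "pos": "NOUN", "days": 0.5},
    {"token": "perro", "prev_token": "un", "pos": "NOUN", "days": 3.0},
]
train_labels = [0, 1]   # 1 = student made a mistake on this token

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_feats, train_labels)

test_feat = [{"token": "gato", "prev_token": "un", "pos": "NOUN", "days": 7.0}]
print(model.predict_proba(test_feat)[0, 1])   # estimated P(mistake)
```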
MedChatBot: An UMLS based Chatbot for Medical Students | The use of natural dialog has great significance in the design of interactive tutoring systems. The nature of student queries can be confined to a small set of templates based on the task domain. This paper describes the development of a chatbot for medical students, based on the open-source AIML-based Chatterbean. We deploy the widely available Unified Medical Language System (UMLS) as the domain knowledge source for generating responses to queries. The AIML-based chatbot is customized to convert natural language queries into relevant SQL queries. The SQL queries are run against the knowledge base and the results returned to the user in natural dialog. A student survey was carried out to identify various queries posed by students. The chatbot was designed to address common template queries. Knowledge inference techniques were applied to generate responses to queries for which knowledge was not explicitly encoded. Query responses were rated by three experts on a 1-5 point Likert scale; the experts agreed among themselves with a Pearson correlation coefficient of 0.54. |
Associations of dairy intake with glycemia and insulinemia, independent of obesity, in Brazilian adults: the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil). | BACKGROUND
Inverse associations between dairy intake and the risk of type 2 diabetes have been shown, but more studies are needed, especially from low- and middle-income countries.
OBJECTIVE
The objective was to describe the association between dairy products and direct measures of glycemic status in adults without known diabetes.
DESIGN
The Brazilian Longitudinal Study of Adult Health (ELSA-Brasil) includes 15,105 adults, aged 35-74 y, enrolled from universities and research institutions in 6 Brazilian capital cities. We excluded participants with a known diabetes diagnosis, cardiovascular diseases, and cancer. Dairy consumption was assessed by a food-frequency questionnaire, and we computed servings per day for total and subgroups of dairy. Associations with fasting blood glucose (FG) and fasting insulin, 2-h postload glucose (PG), 2-h postload insulin (PI), glycated hemoglobin (Hb A1c), and homeostasis model assessment of insulin resistance (HOMA-IR) were assessed through multivariable linear regression analysis with adjustment for demographic characteristics, behavioral risk factors, other dietary factors, and anthropometric measurements.
RESULTS
The sample size after exclusions was 10,010. The intake of total dairy was inversely associated with FG (linear β for dairy servings/d = -0.46 ± 0.2 mg/dL), PG (-1.25 ± 0.5 mg/dL), PI (-1.52 ± 0.6 mg/dL), Hb A1c (-0.02 ± 0.0%), and HOMA-IR (-0.04 ± 0.0) after adjustment for all covariates (P < 0.05 for all). The findings were consistent across categories of sex, race, obesity status, and dairy fat amount (reduced-fat vs. full-fat dairy). Fermented dairy products showed particularly strong inverse associations with the outcomes, with adjusted differences for a 1-serving/d increment of -0.24 (95% CI: -0.46, -0.02) mg/dL for FG, -0.86 (-1.42, -0.30) mg/dL for PG, and -0.01% (-0.02%, 0.00%) for Hb A1c. Myristic acid was the only nutrient that appeared to mediate the association between dairy intake and glycemia.
CONCLUSION
Dairy intake, especially fermented dairy, was inversely associated with measures of glycemia and insulinemia in Brazilian adults without diagnosed diabetes. This trial was registered at clinicaltrials.gov as NCT02320461. |
Advanced Gate Drive Unit With Closed-Loop $di_{{C}}/dt$ Control | This paper describes the design and the experimental investigation of a gate drive unit with closed-loop control of the collector current slope diC/dt for multichip insulated-gate bipolar transistors (IGBTs). Compared to a pure resistive gate drive, the proposed diC/dt control offers the ability to adjust the collector current slope freely which helps to find an optimized relation between switching losses and secure operation of the freewheeling diode for every type of IGBT. Based on the description of IGBT's switching behavior, the design and the realization of the gate drive are presented. The test setup and the comparison of switching tests with and without the proposed diC/dt control are discussed. |
Management of immature teeth by dentin-pulp regeneration: a recent approach. | Treatment of the young permanent tooth with a necrotic root canal system and an incompletely developed root is very difficult and challenging. Few acceptable results have been achieved through apexification, but the long-term use of calcium hydroxide might alter the mechanical properties of dentin. Thus, one alternative approach is to develop and restore a functional pulp-dentin complex. Procedures attempting to preserve the potentially remaining dental pulp stem cells and mesenchymal stem cells of the apical papilla can result in canal revascularization and the completion of root maturation. There are several advantages to promoting apexogenesis in immature teeth with open apices: it encourages a longer and thicker root to develop, thus decreasing the propensity for long-term root fracture. The present article therefore reviews the recent approach of regenerating the pulp-dentin complex in immature permanent teeth. |
Reduced-Intensity Transplantation for Lymphomas Using Haploidentical Related Donors Versus HLA-Matched Sibling Donors: A Center for International Blood and Marrow Transplant Research Analysis. | PURPOSE
Related donor haploidentical hematopoietic cell transplantation (Haplo-HCT) using post-transplantation cyclophosphamide (PT-Cy) is increasingly used in patients lacking HLA-matched sibling donors (MSD). We compared outcomes after Haplo-HCT using PT-Cy with MSD-HCT in patients with lymphoma, using the Center for International Blood and Marrow Transplant Research registry.
MATERIALS AND METHODS
We evaluated 987 adult patients undergoing either Haplo-HCT (n = 180) or MSD-HCT (n = 807) following reduced-intensity conditioning regimens. The haploidentical group received graft-versus-host disease (GVHD) prophylaxis with PT-Cy with or without a calcineurin inhibitor and mycophenolate. The MSD group received calcineurin inhibitor-based GVHD prophylaxis.
RESULTS
Median follow-up of survivors was 3 years. The 28-day neutrophil recovery was similar in the two groups (95% v 97%; P = .31). The 28-day platelet recovery was delayed in the haploidentical group compared with the MSD group (63% v 91%; P = .001). Cumulative incidence of grade II to IV acute GVHD at day 100 was similar between the two groups (27% v 25%; P = .84). Cumulative incidence of chronic GVHD at 1 year was significantly lower after Haplo-HCT (12% v 45%; P < .001), and this benefit was confirmed on multivariate analysis (relative risk, 0.21; 95% CI, 0.14 to 0.31; P < .001). For Haplo-HCT v MSD-HCT, 3-year rates of nonrelapse mortality (15% v 13%; P = .41), relapse/progression (37% v 40%; P = .51), progression-free survival (48% v 48%; P = .96), and overall survival (61% v 62%; P = .82) were similar. Multivariate analysis showed no significant difference between Haplo-HCT and MSD-HCT in terms of nonrelapse mortality (P = .06), progression/relapse (P = .10), progression-free survival (P = .83), and overall survival (P = .34).
CONCLUSION
Haplo-HCT with PT-Cy provides survival outcomes comparable to MSD-HCT, with a significantly lower risk of chronic GVHD. |
Chronic oral etoposide in advanced breast cancer | Chronic oral etoposide has shown activity in some metastatic refractory tumors. To test its activity in previously treated metastatic breast cancer patients, we started a study in 18 consecutive patients given etoposide orally at 50 mg/m2 daily for 21 days. A partial response was observed in 4 of 18 patients (22%); of the responding patients, 3 had visceral metastases and 1 had multiple bone metastases. Leukopenia of grade 3 or 4 was the main hematological toxic effect (23% of patients) and alopecia was the most important nonhematological toxicity. Chronic oral etoposide shows some activity in pretreated patients with metastatic breast cancer, with good tolerance and acceptable toxicity. Further studies of this drug given as first-line chemotherapy or in combination with other drugs could establish its full potential activity in this cancer. |
The comparative effectiveness of Integrated treatment for Substance abuse and Partner violence (I-StoP) and substance abuse treatment alone: a randomized controlled trial | BACKGROUND
Research has shown that treatments that solely addressed intimate partner violence (IPV) perpetration were not very effective in reducing IPV, possibly due to neglecting individual differences between IPV perpetrators. A large proportion of IPV perpetrators are diagnosed with co-occurring substance use disorders, and it has been demonstrated that successful treatment of alcohol dependence among alcohol-dependent IPV perpetrators also led to less IPV. The current study investigated the relative effectiveness of Integrated treatment for Substance abuse and Partner violence (I-StoP) compared with cognitive behavioral treatment addressing substance use disorders that includes only one session addressing partner violence (CBT-SUD+) among patients in substance abuse treatment who repeatedly committed IPV. Substance use and IPV perpetration were the primary outcome measures.
METHOD
Patients who entered substance abuse treatment were screened for IPV. Patients who disclosed at least 7 acts of physical IPV in the past year (N = 52) were randomly assigned to either I-StoP or CBT-SUD+. Patients in both conditions received 16 treatment sessions. Substance use and IPV perpetration were assessed at pretreatment, mid-treatment and posttreatment in blocks of 8 weeks. Both completers and intention-to-treat (ITT) analyses were performed.
RESULTS
Patients (completers and ITT) in both conditions significantly improved regarding substance use and IPV perpetration at posttreatment compared with pretreatment. There were no differences in outcome between conditions. Completers in both conditions almost fully abstained from IPV in the 8 weeks before the end of treatment.
CONCLUSIONS
Both I-StoP and CBT-SUD+ were effective in reducing substance use and IPV perpetration among patients in substance abuse treatment who repeatedly committed IPV and self-disclosed IPV perpetration. Since it is more cost- and time-effective to implement CBT-SUD+ than I-StoP, it is suggested to treat IPV perpetrators in substance abuse treatment with CBT-SUD+. |
perf_fuzzer: Targeted Fuzzing of the perf_event_open() System Call | Fuzzing is a process where random, almost valid, input streams are automatically generated and fed into computer systems in order to test the robustness of user-exposed interfaces. We fuzz the Linux kernel system call interface; unlike previous work that attempts to generically fuzz all of an operating system's system calls, we explore the effectiveness of using specific domain knowledge and focus on finding bugs and security issues related to a single Linux system call. The perf_event_open() system call was introduced in 2009 and has grown to be a complex interface with over 40 arguments that interact in subtle ways. By using detailed knowledge of typical perf_event usage patterns we develop a custom tool, perf_fuzzer, that has found bugs that more generic, system-wide, fuzzers have missed. Numerous crashing bugs have been found, including a local root exploit. Fixes for these bugs have been merged into the main Linux source tree. Testing continues to find new bugs, although they are increasingly hard to isolate, requiring development of new isolation techniques and helper utilities. We describe the development of perf_fuzzer, examine the bugs found, and discuss ways that this work can be extended to find more bugs and cover other system calls. |
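To make the targeted-fuzzing idea concrete, here is a minimal, hedged sketch of randomizing perf_event_open() arguments from user space. The syscall number 298 is specific to x86_64 and the fixed 128-byte attr buffer is an assumption; the real perf_fuzzer is far more informed about the structure of perf_event_attr than this.

```python
import ctypes
import os
import random
import struct

libc = ctypes.CDLL(None, use_errno=True)
SYS_PERF_EVENT_OPEN = 298          # x86_64 syscall number (architecture-specific)
ATTR_SIZE = 128                    # assumed buffer size for struct perf_event_attr

def fuzz_perf_event_open_once():
    """Issue one perf_event_open() call with a mostly random attr struct."""
    buf = bytearray(random.getrandbits(8) for _ in range(ATTR_SIZE))
    # Keep the 'size' field (a u32 at offset 4) plausible so the kernel
    # parses beyond the header instead of rejecting the call immediately.
    struct.pack_into("<I", buf, 4, ATTR_SIZE)
    attr = (ctypes.c_char * ATTR_SIZE).from_buffer(buf)
    fd = libc.syscall(SYS_PERF_EVENT_OPEN, attr,
                      0,    # pid: monitor the calling process
                      -1,   # cpu: any CPU
                      -1,   # group_fd: no event group
                      0)    # flags
    if fd >= 0:
        os.close(fd)       # most random attrs fail; close the rare successes

if __name__ == "__main__":
    for _ in range(10_000):
        fuzz_perf_event_open_once()
```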
Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief | Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief > belief. |
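An illustrative pipeline in the spirit of this study pairs ICA-based dimension reduction with classifier comparison via cross-validation. Synthetic data stands in for the fMRI time courses; nothing below reproduces the authors' actual features, preprocessing, or classifier implementations (e.g., J48 and K* are not included).

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))     # 200 trials x 1000 "voxels" (synthetic)
y = rng.integers(0, 2, size=200)     # belief vs. disbelief labels (synthetic)

classifiers = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    # Reduce to six independent components, mirroring the paper's finding
    # that ~6 ICs formed a meaningful basis set, then cross-validate.
    pipe = make_pipeline(FastICA(n_components=6, random_state=0), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```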
Program synthesis by sketching | Program Synthesis by Sketching by Armando Solar-Lezama Doctor of Philosophy in Engineering-Electrical Engineering and Computer Science University of California, Berkeley Rastislav Bodik, Chair The goal of software synthesis is to generate programs automatically from high-level specifications. However, efficient implementations for challenging programs require a combination of high-level algorithmic insights and low-level implementation details. Deriving the low-level details is a natural job for a computer, but the synthesizer cannot replace the human insight. Therefore, one of the central challenges for software synthesis is to establish a synergy between the programmer and the synthesizer, exploiting the programmer's expertise to reduce the burden on the synthesizer. This thesis introduces sketching, a new style of synthesis that offers a fresh approach to the synergy problem. Previous approaches have relied on meta-programming, or variations of interactive theorem proving, to help the synthesizer deduce an efficient implementation. The resulting systems are very powerful, but they require the programmer to master new formalisms far removed from traditional programming models. To make synthesis accessible, programmers must be able to provide their insight effortlessly, using formalisms they already understand. In sketching, insight is communicated through a partial program, a sketch that expresses the high-level structure of an implementation but leaves holes in place of the low-level details. This form of synthesis is made possible by a new SAT-based inductive synthesis procedure that can efficiently synthesize an implementation from a small number of test cases. This algorithm forms the core of a new counterexample-guided inductive synthesis procedure (CEGIS) which combines the inductive synthesizer with a validation procedure to automatically generate test inputs and ensure that the generated program satisfies its specification. |
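A toy counterexample-guided inductive synthesis (CEGIS) loop makes the idea in this abstract concrete: the "sketch" is a program with integer holes, the inductive synthesizer fits the holes to a finite example set, and the validator searches for a counterexample. This is a didactic illustration only, not the SAT-based procedure the thesis describes.

```python
def spec(x):                 # the full specification we want to match
    return 3 * x + 5

def sketch(holes, x):        # partial program: linear form with two holes
    a, b = holes
    return a * x + b

DOMAIN = range(-100, 101)    # finite input domain so validation is exhaustive

def synthesize(examples):
    """Inductive step: find hole values consistent with the observed examples."""
    for a in range(-10, 11):
        for b in range(-10, 11):
            if all(sketch((a, b), x) == y for x, y in examples):
                return (a, b)
    return None

def validate(holes):
    """Validation step: return a counterexample input, or None if correct."""
    for x in DOMAIN:
        if sketch(holes, x) != spec(x):
            return x
    return None

examples = [(0, spec(0))]                # seed with a single example
while True:
    holes = synthesize(examples)
    cex = validate(holes)
    if cex is None:
        print("synthesized holes =", holes)   # converges to (3, 5)
        break
    examples.append((cex, spec(cex)))    # add the counterexample and retry
```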
Amodal Instance Segmentation | We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object. Thus far, the lack of publicly available amodal segmentation annotations has stymied the development of amodal segmentation methods. In this paper, we sidestep this issue by relying solely on standard modal instance segmentation annotations to train our model. The result is a new method for amodal instance segmentation, which represents the first such method to the best of our knowledge. We demonstrate the proposed method’s effectiveness both qualitatively and quantitatively. |
Yacc is dead | We present two novel approaches to parsing context-free languages. The first approach is based on an extension of Brzozowski's derivative from regular expressions to context-free grammars. The second approach is based on a generalization of the derivative to parser combinators. The payoff of these techniques is a small (less than 250 lines of code), easy-to-implement parsing library capable of parsing arbitrary context-free grammars into lazy parse forests. Implementations for both Scala and Haskell are provided. Preliminary experiments with S-Expressions parsed millions of tokens per second, which suggests this technique is efficient enough for use in practice. 1 Top-down motivation: End cargo cult parsing "Cargo cult parsing" is a plague upon computing. (The term "cargo cult parsing" is due to Larry Wall, 19 June 2008, Google Tech Talk.) Cargo cult parsing refers to the use of "magic" regular expressions—often cut and pasted directly from Google search results—to parse languages which ought to be parsed with context-free grammars. Such parsing has two outcomes. In the first case, the programmer produces a parser that works "most of the time" because the underlying language is fundamentally irregular. In the second case, some domain-specific language ends up with a mulish syntax, because the programmer has squeezed the language through the regular bottleneck. There are two reasons why regular expressions are so abused while context-free languages remain forsaken: (1) regular expression libraries are available in almost every language, while parsing libraries and toolkits are not, and (2) regular expressions are "WYSIWYG"—the language described is the language that gets matched—whereas parser-generators are WYSIWYGIYULR(k)—"what you see is what you get if you understand LR(k)." To end cargo-cult parsing, we need a new approach to parsing that: 1. handles arbitrary context-free grammars; 2. parses efficiently on average; and 3. can be implemented as a library with little effort. The end goals of the three conditions are simplicity, feasibility and ubiquity. The "arbitrary context-free grammar" condition is necessary because programmers will be less inclined to use a tool that forces them to learn or think about LL/LR arcana. It is hard for compiler experts to imagine, but the constraints on LL/LR grammars are (far) beyond the grasp of the average programmer. [Of course, arbitrary context-free grammars bring ambiguity, which means the parser must be prepared to return a parse forest rather than a parse tree.] The "efficient parsing" condition is necessary because programmers will avoid tools branded as inefficient (however justly or unjustly this label has been applied). Specifically, a parser needs to have roughly linear behavior on average. Because ambiguous grammars may yield an exponential number of parse trees, parse trees must be produced lazily, and each parse tree should be paid for only if it is actually produced. The "easily implemented" condition is perhaps most critical. It must be the case that a programmer could construct a general parsing toolkit if their language of choice doesn't yet have one. If this condition is met, it is reasonable to expect that proper parsing toolkits will eventually be available for every language. When proper parsing toolkits and libraries remain unavailable for a language, cargo cult parsing prevails. This work introduces parsers based on the derivative of context-free languages and upon the derivative of parser combinators.
Parsers based on derivatives meet all of the aforementioned requirements: they accept arbitrary grammars, they produce parse forests efficiently (and lazily), and they are easy to implement (less than 250 lines of Scala code for the complete library). Derivative-based parsers also avoid the precompilation overhead of traditional parser generators; this cost is amortized (and memoised) across the parse itself. In addition, derivative-based parsers can be modified mid-parse, which makes it conceivable that a language could modify its own syntax at compile- or run-time. 2 Bottom-up motivation: Generalizing the derivative Brzozowski defined the derivative of regular expressions in 1964 [1]. This technique was lost to the "sands of time" until Owens, Reppy and Turon recently revived it to show that derivative-based lexer-generators were easier to write, more efficient and more flexible than the classical regex-to-NFA-to-DFA generators [15]. (Derivative-based lexers allow, for instance, both complement and intersection.) Given the payoff for regular languages, it is natural to ask whether the derivative, and its benefits, extend to context-free languages, and transitively, to parsing. As it turns out, they do. We will show that context-free languages are closed under the derivative—the critical property needed for parsing. We will then show that context-free parser combinators are also closed under a generalization of the derivative. The net impact is that we will be able to write a derivative-based parser combinator library in under 250 lines of Scala, capable of producing a lazy parse forest for any context-free grammar. 2 The second author on this paper, an undergraduate student, completed the implementation for Haskell in less than a week. |
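Since this abstract builds on Brzozowski's derivative, a minimal Python sketch of the derivative for plain regular expressions is given below; it covers neither the context-free nor the parser-combinator generalization, and it is not the authors' Scala/Haskell code.

```python
# Brzozowski's derivative for regular expressions: match a string by
# repeatedly differentiating the language with respect to each input
# character and checking whether the final language accepts the empty string.
class Empty: pass                       # the empty language (matches nothing)
class Eps:   pass                       # the language containing only ""
class Chr:
    def __init__(self, c): self.c = c
class Alt:
    def __init__(self, l, r): self.l, self.r = l, r
class Cat:
    def __init__(self, l, r): self.l, self.r = l, r
class Star:
    def __init__(self, r): self.r = r

def nullable(r):
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Alt): return nullable(r.l) or nullable(r.r)
    if isinstance(r, Cat): return nullable(r.l) and nullable(r.r)
    return False                        # Empty, Chr

def deriv(r, c):
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Chr): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt): return Alt(deriv(r.l, c), deriv(r.r, c))
    if isinstance(r, Star): return Cat(deriv(r.r, c), r)
    # Cat: derive the left side; if it is nullable, also derive the right side.
    left = Cat(deriv(r.l, c), r.r)
    return Alt(left, deriv(r.r, c)) if nullable(r.l) else left

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

# (a|b)* c
regex = Cat(Star(Alt(Chr("a"), Chr("b"))), Chr("c"))
print(matches(regex, "ababc"), matches(regex, "abca"))   # True False
```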
How to reach top accuracy for a visual pedestrian warning system from a car? | Due to the wide applicability of pedestrian detection in surveillance and safety, this research topic has received much attention in the computer vision literature. However, the focus of this research mainly lies in detecting and locating pedestrians individually as accurately as possible. In recent years, a number of datasets have been captured using a forward-looking camera from a car, which motivates the application of warning the driver when pedestrians are in front of the car. For such applications, it is not required to detect each pedestrian independently, but to generate an alarm when necessary. In this paper we explore techniques to boost the accuracy of recent channel-based algorithms in this application: algorithmic refinements as well as the inclusion of an LWIR image channel. We use the KAIST dataset, which is constructed from image pairs of both the visual and the LWIR spectrum, in day and night conditions. We study the influence of techniques that have shown success in the literature. |
Single Electron Transistor: Applications and Limitations | Recent research on SETs has produced new ideas that could revolutionize random access memory and digital data storage technologies. The goal of this paper is to discuss the basic physics and applications of the nanoelectronic device known as the single-electron transistor (SET), which is capable of controlling the transport of only one electron. The SET is a key element of current nanotechnology research and can offer low power consumption and high operating speed. It is a new type of switching device that uses controlled electron tunneling to amplify current. |
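The single-electron behavior the abstract refers to rests on the Coulomb-blockade conditions; the expressions below are standard textbook background, not results of this paper.

```latex
% Coulomb blockade: adding one electron to an island of total capacitance
% C_\Sigma costs the charging energy E_C; single-electron control requires
% E_C to dominate thermal fluctuations, and the tunnel resistance must exceed
% the resistance quantum so the island charge is well defined.
\[
E_C = \frac{e^2}{2 C_\Sigma}, \qquad
E_C \gg k_B T, \qquad
R_T \gg \frac{h}{e^2} \approx 25.8\ \mathrm{k\Omega}.
\]
```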
Local-Global Landmark Confidences for Face Recognition | A key to successful face recognition is accurate and reliable face alignment using automatically-detected facial landmarks. Given this strong dependency between face recognition and facial landmark detection, robust face recognition requires knowledge of when the facial landmark detection algorithm succeeds and when it fails. Facial landmark confidence represents this measure of success. In this paper, we propose two methods to measure landmark detection confidence: local confidence based on local predictors of each facial landmark, and global confidence based on a 3D rendered face model. A score fusion approach is also introduced to integrate these two confidences effectively. We evaluate both confidence metrics on two datasets for face recognition: the JANUS CS2 and IJB-A datasets. Our experiments show up to 9% improvement when the face recognition algorithm integrates the local-global confidence metrics. |
Convolution engine: balancing efficiency & flexibility in specialized computing | This paper focuses on the trade-off between flexibility and efficiency in specialized computing. We observe that specialized units achieve most of their efficiency gains by tuning data storage and compute structures and their connectivity to the data-flow and data-locality patterns in the kernels. Hence, by identifying key data-flow patterns used in a domain, we can create efficient engines that can be programmed and reused across a wide range of applications.
We present an example, the Convolution Engine (CE), specialized for the convolution-like data-flow that is common in computational photography, image processing, and video processing applications. CE achieves energy efficiency by capturing data reuse patterns, eliminating data transfer overheads, and enabling a large number of operations per memory access. We quantify the tradeoffs in efficiency and flexibility and demonstrate that CE is within a factor of 2-3x of the energy and area efficiency of custom units optimized for a single kernel. CE improves energy and area efficiency by 8-15x over a SIMD engine for most applications. |
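A minimal reference 2D convolution shows the sliding-window data-flow and operand reuse that the text above describes; this is functional pseudocode for the pattern only, not a model of the Convolution Engine hardware.

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 2D convolution (valid padding). Each output pixel reuses an
    overlapping k x k window of the image, which is the data-reuse pattern
    a convolution-specialized engine maps onto its local storage."""
    H, W = image.shape
    k = kernel.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            window = image[y:y + k, x:x + k]   # overlapping window: reuse
            out[y, x] = np.sum(window * kernel)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
blur = np.ones((3, 3)) / 9.0
print(conv2d(img, blur).shape)    # (4, 4)
```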
Residue management, conservation tillage and soil restoration for mitigating greenhouse effect by CO2-enrichment | This manuscript reviews the potential impact of residue management, conservation tillage and soil restoration on carbon sequestration in world soils. The greenhouse effect is among four principal ecological issues of global concern that include: (i) adequacy of land resources to meet needs of present and future generations; (ii) role of world soils and agricultural practices in the 'greenhouse' effect; (iii) potential of crop residue management, restoration of degraded soils, and conservation tillage in carbon sequestration in soil; and (iv) minimizing risks of soil degradation by enhancing soil resilience and soil quality. The annual increase of CO2-derived carbon in the atmosphere is 3.2 × 10^15 g, and there exists a potential to mitigate this effect through C sequestration in soils. Just as world soils are an important active pool of organic carbon and play a major role in the global carbon cycle, crop residue is a major renewable resource which also has an important impact on the global carbon cycle. I have estimated the annual production of crop residue to be about 3.4 billion Mg in the world. If 15% of the C contained in the residue can be converted to the passive soil organic carbon (SOC) fraction, it may lead to C sequestration at the rate of 0.2 × 10^15 g/yr. Similarly, restoring presently degraded soils, estimated at about 2.0 billion ha, and increasing SOC content by 0.01%/yr may lead to C sequestration at the rate of 3.0 Pg C/yr. Conservation tillage is an important tool for crop residue management, restoration of degraded soil, and for enhancing C sequestration in soil. Conservation tillage, any tillage system that maintains at least 30% of the soil surface covered by residue, was practised in 1995 on about 40 × 10^6 ha or 35.5% of planted area in the USA. It is projected that by the year 2020, conservation tillage may be adopted on 75% of cropland in the USA (140 × 10^6 ha), 50% in other developed countries (225 × 10^6 ha), and 25% in developing countries (172 × 10^6 ha). The projected conversion of conventional to conservation tillage may lead to a global C sequestration by 2020 at a low estimate of 1.5 × 10^15 g, and at a high estimate of 4.9 × 10^15 g of C. |
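The headline sequestration estimate in this abstract can be reproduced with back-of-the-envelope arithmetic; the roughly 40% carbon content of dry crop residue used below is an assumed typical value, not a number stated in the abstract.

```python
# Rough check of the 0.2 x 10^15 g C/yr figure quoted above.
residue_mg_per_yr = 3.4e9       # Mg of crop residue per year (from the abstract)
carbon_fraction = 0.40          # assumed typical C content of dry residue
converted_to_soc = 0.15         # fraction converted to passive SOC (from the abstract)

grams_c_per_yr = residue_mg_per_yr * 1e6 * carbon_fraction * converted_to_soc
print(f"{grams_c_per_yr:.2e} g C/yr")   # ~2.0e14 g = 0.2 x 10^15 g C/yr
```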
Driver Distraction - A Review of the Literature | Driver distraction has been defined in many different ways. The most important difference is whether only visual inattention or also cognitive inattention should be included. Many different methods have been used to assess the prevalence and types of driver distraction that occur, and to describe the consequences in terms of driving performance and crash involvement. There is strong agreement that distraction is detrimental for driving, and that the risk for crashes increases. Drivers rather opt for repeated glances instead of extending one single glance, if the secondary task demands attention for a longer period of time. However, repeated glances have more detrimental effects on driving performance than a single glance of the same duration as one of the repeated glances. Only recently the method of remote eye tracking has emerged, which enables real time identification of visual distraction. So far this method has mostly been used in driving simulators. Different algorithms that diagnose distracted drivers have been tested with promising results. In simulators it is difficult, however, to induce true distraction, due to the short duration of the experiment and the artificial setting. A prolonged field study under naturalistic conditions could provide new insights and validation of simulator studies. |
The stigma of obesity: a review and update. | Obese individuals are highly stigmatized and face multiple forms of prejudice and discrimination because of their weight (1,2). The prevalence of weight discrimination in the United States has increased by 66% over the past decade (3), and is comparable to rates of racial discrimination, especially among women (4). Weight bias translates into inequities in employment settings, health-care facilities, and educational institutions, often due to widespread negative stereotypes that overweight and obese persons are lazy, unmotivated, lacking in self-discipline, less competent, non-compliant, and sloppy (2,5-7). These stereotypes are prevalent and are rarely challenged in Western society, leaving overweight and obese persons vulnerable to social injustice, unfair treatment, and impaired quality of life as a result of substantial disadvantages and stigma. In 2001, Puhl and Brownell published the first comprehensive review of several decades of research documenting bias and stigma toward overweight and obese persons (2). This review summarized weight stigma in domains of employment, health care, and education, demonstrating the vulnerability of obese persons to many forms of unfair treatment. Despite evidence of weight bias in important areas of living, the authors noted many gaps in research regarding the nature and extent of weight stigma in various settings, the lack of science on emotional and physical health consequences of weight bias, and the paucity of interventions to reduce negative stigma. In recent years, attention to weight bias has increased, with a growing recognition of the pervasiveness of weight bias and stigma, and its potential harmful consequences for obese persons. The aim of this article is to provide an update of scientific evidence on weight bias toward overweight and obese adults through a systematic review of published literature since the 2001 article by Puhl and Brownell. This review expands upon previous findings of weight bias in major domains of living, documents new areas where weight bias has been studied, and highlights ongoing research questions that need to be addressed to advance this field of study. A systematic literature search was conducted; retrieved articles and books were also reviewed, and manual searches were conducted in the databases and journals for authors who had published in this field. Most studies retrieved for this review were published in the United States. Any articles published internationally are noted with their country of origin. Research on weight stigma in adolescents and children was excluded from this review, as this literature was recently reviewed elsewhere … |
Organizational Commitment and Psychological Attachment: The Effects of Compliance, Identification, and Internalization on Prosocial Behavior | Previous research on organizational commitment has typically not focused on the underlying dimensions of psychological attachment to the organization. Results of two studies using university employees (N = 82) and students (N = 162) suggest that psychological attachment may be predicated on compliance, identification, and internalization (e.g., Kelman, 1958). Identification and internalization are positively related to prosocial behaviors and negatively related to turnover. Internalization is predictive of financial donations to a fund-raising campaign. Overall, the results suggest the importance of clearly specifying the underlying dimensions of commitment using notions of psychological attachment and the various forms such attachment can take. |
Highly photoluminescent carbon dots for multicolor patterning, sensors, and bioimaging. | Fluorescent carbon-based materials have drawn increasing attention in recent years owing to exceptional advantages such as high optical absorptivity, chemical stability, biocompatibility, and low toxicity. These materials primarily include carbon dots (CDs), nanodiamonds, carbon nanotubes, fullerene, and fluorescent graphene. The superior properties of fluorescent carbon-based materials distinguish them from traditional fluorescent materials, and make them promising candidates for numerous exciting applications, such as bioimaging, medical diagnosis, catalysis, and photovoltaic devices. Among all of these materials, CDs have drawn the most extensive notice, owing to their early discovery and adjustable parameters. However, many scientific issues with CDs still await further investigation. Currently, a broad series of methods for obtaining CD-based materials have been developed, but efficient one-step strategies for the fabrication of CDs on a large scale are still a challenge in this field. Current synthetic methods are mainly deficient in accurate control of lateral dimensions and the resulting surface chemistry, as well as in obtaining fluorescent materials with high quantum yields (QY). Moreover, it is important to expand these kinds of materials to novel applications. Herein, a facile and high-output strategy for the fabrication of CDs, which is suitable for industrial-scale production (yield is ca. 58%), is discussed. The QY was as high as ca. 80%, which is the highest value recorded for fluorescent carbon-based materials, and is almost equal to fluorescent dyes. The polymer-like CDs were converted into carbogenic CDs by a change from low to high synthesis temperature. The photoluminescence (PL) mechanism (high QY/PL quenching) was investigated in detail by ultrafast spectroscopy. The CDs were applied as printing ink on the macro/micro scale and nanocomposites were also prepared by polymerizing CDs with certain polymers. Additionally, the CDs could be utilized as a biosensor reagent for the detection of Fe in biosystems. The CDs were prepared by a hydrothermal method, which is described in the Supporting Information (Figure 1a; see also the Supporting Information, Figure S1). The reaction was conducted by first condensing citric acid and ethylenediamine, whereupon they formed polymer-like CDs, which were then carbonized to form the CDs. The morphology and structure of the CDs were confirmed by analysis. Figure 1b shows transmission electron microscopy (TEM) images of the CDs, which can be seen to have a uniform dispersion without apparent aggregation and particle diameters of 2-6 nm. The sizes of CDs were also measured by atomic force microscopy (AFM; Figure S2), and the average height was 2.81 nm. From the high-resolution TEM, most particles are observed to be amorphous carbon particles without any lattices; rare particles possess well-resolved lattice fringes. With such a low carbon-lattice-structure content, no obvious D or G bands were detected in the Raman spectra of the CDs (Figure S3). The XRD patterns of the CDs (Figure 1c) also displayed a broad peak centered at 25° (0.34 nm), which is also attributed to highly disordered carbon atoms. Moreover, NMR spectroscopy (1H and 13C) was employed to distinguish sp2-hybridized carbon atoms from sp3-hybridized carbon atoms (Figure S4). In the 1H NMR spectrum, sp-hybridized carbons were detected.
In the 13C NMR spectrum, signals in the range of 30-45 ppm, which correspond to aliphatic (sp3) carbon atoms, and signals from 100-185 ppm, which are indicative of sp2 carbon atoms, were observed. Signals in the range of 170-185 ppm, which correspond to carboxyl/amide groups, were also present. In the FTIR analysis of the CDs, the following were observed: stretching vibrations of C-OH at 3430 cm⁻¹ and C-H at 2923 cm⁻¹ and 2850 cm⁻¹, asymmetric stretching vibrations of C-NH-C at 1126 cm⁻¹, bending vibrations of N-H at 1570 cm⁻¹, and the vibrational absorption band of C=O at 1635 cm⁻¹ (Figure S5). Moreover, the surface groups were also investigated by XPS analysis (Figure 1d). C1s analysis revealed three different types of carbon atoms: graphitic or aliphatic (C=C and C-C), oxygenated, and nitrous (Table S1). In the UV/Vis spectra, the absorption peak was centered at 344 nm in an aqueous solution of CDs. In the fluorescence spectra, the CDs have optimal excitation and emission wavelengths at 360 nm and 443 nm, respectively, and show a blue color under a hand-held UV lamp (Figure 2a). Excitation-dependent PL behavior was observed. |
Is momentum really momentum? | Momentum is primarily driven by firms' performance 12 to 7 months prior to portfolio formation, not by a tendency of rising and falling stocks to keep rising and falling. Strategies based on recent past performance generate positive returns but are less profitable than those based on intermediate-horizon past performance, especially among the largest, most liquid stocks. These facts are not particular to the momentum observed in the cross section of US equities. Similar results hold for momentum strategies trading international equity indices, commodities, and currencies. |
Effects of baclofen and mirtazapine on a laboratory model of marijuana withdrawal and relapse | Only a small percentage of individuals seeking treatment for their marijuana use achieves sustained abstinence, suggesting more treatment options are needed. We investigated the effects of baclofen (study 1) and mirtazapine (study 2) in a human laboratory model of marijuana intoxication, withdrawal, and relapse. In study 1, daily marijuana smokers (n = 10), averaging 9.4 (±3.9) marijuana cigarettes/day, were maintained on placebo and each baclofen dose (60, 90 mg/day) for 16 days. In study 2, daily marijuana smokers (n = 11), averaging 11.9 (±5.3) marijuana cigarettes/day, were maintained on placebo and mirtazapine (30 mg/day) for 14 days each. Medication administration began outpatient prior to each 8-day inpatient phase. On the first inpatient day of each medication condition, participants smoked active marijuana (study 1: 3.3% THC; study 2: 6.2% THC). For the next 3 days, they could self-administer placebo marijuana (abstinence phase), followed by 4 days in which they could self-administer active marijuana (relapse phase); participants paid for self-administered marijuana using study earnings. In study 1, during active marijuana smoking, baclofen dose-dependently decreased craving for tobacco and marijuana, but had little effect on mood during abstinence and did not decrease relapse. Baclofen also worsened cognitive performance regardless of marijuana condition. In study 2, mirtazapine improved sleep during abstinence, and robustly increased food intake, but had no effect on withdrawal symptoms and did not decrease marijuana relapse. Overall, this human laboratory study did not find evidence to suggest that either baclofen or mirtazapine showed promise for the potential treatment of marijuana dependence. |
Simulink based performance analysis of lead acid batteries with the variation of load current and temperature | Storage devices play an important role in the efficient operation of renewable energy applications. For solar PV applications, a storage device is an integral part of supplying power to the load during post-sunshine hours. For a Solar Home System (SHS), approximately 60% of the total cost is due to the storage battery, and the battery needs replacement every five years or so. If the performance of the battery can be enhanced, not only will the cost of energy come down appreciably, but the reliability of the system will also improve. A battery is needed in a PV system to store the excess energy from the sun and release the power to the system again when it is needed. In other words, to enhance system performance we have to increase battery performance. In this paper, the parameters which play an important role in optimum performance have been studied. The two major parameters that affect battery performance are load current and temperature. It has been found that when the discharge current is high, battery capacity is reduced, and vice versa. In the case of temperature, when the temperature rises above the rated standard value of 25°C, capacity increases but the lifetime of the battery decreases. On the other hand, when the ambient temperature goes down, the capacity of the battery decreases but the lifetime of the battery is increased. To analyse the effect of temperature and load current on battery performance, the standard equivalent circuit of the battery is considered first, and then the equivalent circuit is simulated using Simulink in the MATLAB environment. |
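The qualitative behavior this abstract describes (capacity falling with discharge current and changing with temperature around 25°C) is commonly captured with Peukert's relation plus a linear temperature coefficient. The sketch below uses those textbook models with made-up parameter values; it is not the paper's Simulink equivalent-circuit model.

```python
def effective_capacity(c_rated_ah, i_rated_a, i_load_a,
                       temp_c, peukert_k=1.2, temp_coeff=0.006):
    """Approximate usable capacity of a lead-acid battery.

    Peukert's relation scales capacity with discharge current; a linear
    coefficient per degree C around 25 C models the temperature effect.
    All parameter values here are illustrative assumptions.
    """
    peukert_scale = (i_rated_a / i_load_a) ** (peukert_k - 1.0)
    temp_scale = 1.0 + temp_coeff * (temp_c - 25.0)
    return c_rated_ah * peukert_scale * temp_scale

# Rated 100 Ah at the 20-hour rate (5 A). A higher load current or a lower
# ambient temperature both reduce the capacity actually available.
print(effective_capacity(100, 5, 5, 25))    # ~100 Ah (rated conditions)
print(effective_capacity(100, 5, 20, 25))   # reduced by the higher current
print(effective_capacity(100, 5, 5, 0))     # reduced by the lower temperature
```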
On-line fingerprint verification | Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is so tedious, time-consuming, and expensive that it is incapable of meeting today's increasing performance requirements. An automatic fingerprint identification system (AFIS) is widely needed. It plays a very important role in forensic and civilian applications such as criminal identification, access control, and ATM card verification. This paper describes the design and implementation of an on-line fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al., which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an on-line inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm is capable of finding the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search and has the ability of adaptively compensating for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners. The verification accuracy is found to be acceptable. Typically, a complete fingerprint verification procedure takes, on an average, about eight seconds on a SPARC 20 workstation. These experimental results show that our system meets the response time requirements of on-line verification with high accuracy. Index Terms—Biometrics, fingerprints, matching, verification, minutia, orientation field, ridge extraction. |
Bioequivalence Studies of a Generic Formulation (SW651K) to the Brand Drug S-1 in Tumor-Bearing Rat Models | Generic anticancer drugs have been developed in the field of anticancer therapy and are commercially available. Generally, the safety and efficacy of a brand anticancer drug are determined by phase 1, 2 and 3 clinical trials. In contrast, clinical trials are not necessarily required by guidelines for the development of generic drugs and are usually not performed. Pharmacokinetic equivalence is considered particularly important for cytotoxic anticancer drugs because of their narrow therapeutic window. Differences in the content and purity of active and inactive pharmaceutical ingredients might have an important impact on the pharmacokinetics of a generic cytotoxic anticancer drug and cause differences in pharmacodynamics. However, bioequivalence studies of PK/PD using tumor-bearing rat models have hardly been reported before. |
State variability in supply of office-based primary care providers: United States, 2012. | KEY FINDINGS
Data from the National Ambulatory Medical Care Survey (NAMCS) and the NAMCS Electronic Health Records Survey. In 2012, 46.1 primary care physicians and 65.5 specialists were available per 100,000 population. From 2002 through 2012, the supply of specialists consistently exceeded the supply of primary care physicians. Compared with the national average, the supply of primary care physicians was higher in Massachusetts, Rhode Island, Vermont, and Washington; it was lower in Arkansas, Georgia, Mississippi, Nevada, New Mexico, and Texas. In 2012, 53.0% of office-based primary care physicians worked with physician assistants or nurse practitioners. Compared with the national average, the percentage of physicians working with physician assistants or nurse practitioners was higher in 19 states and lower in Georgia. Primary care providers include primary care physicians, physician assistants, and nurse practitioners. Primary care physicians are those in family and general practice, internal medicine, geriatrics, and pediatrics (1). Physician assistants are state-licensed health professionals practicing medicine under a physician's supervision. Nurse practitioners are registered nurses (RNs) with advanced clinical training (2-6). The ability to obtain primary care depends on the availability of primary care providers (3). This report presents state estimates of the supply of primary care physicians per capita, as well as the availability of physician assistants or nurse practitioners in primary care physicians' practices. Estimates are based on data from the National Ambulatory Medical Care Survey (NAMCS), Electronic Health Records (EHR) Survey, a nationally representative survey of office-based physicians. |
Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL? | Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell et al., 2018), a strong, linguistically-informed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.
Image-based phenotyping of plant disease symptoms | Plant diseases cause significant reductions in agricultural productivity worldwide. Disease symptoms have deleterious effects on the growth and development of crop plants, limiting yields and making agricultural products unfit for consumption. For many plant-pathogen systems, we lack knowledge of the physiological mechanisms that link pathogen infection and the production of disease symptoms in the host. A variety of quantitative high-throughput image-based methods for phenotyping plant growth and development are currently being developed. These methods range from detailed analysis of a single plant over time to broad assessment of the crop canopy for thousands of plants in a field and employ a wide variety of imaging technologies. Application of these methods to the study of plant disease offers the ability to study quantitatively how host physiology is altered by pathogen infection. These approaches have the potential to provide insight into the physiological mechanisms underlying disease symptom development. Furthermore, imaging techniques that detect the electromagnetic spectrum outside of visible light allow us to quantify disease symptoms that are not visible by eye, increasing the range of symptoms we can observe and potentially allowing for earlier and more thorough symptom detection. In this review, we summarize current progress in plant disease phenotyping and suggest future directions that will accelerate the development of resistant crop varieties. |
Adaptive smoothness constraints for efficient stereo matching using texture and edge information | An efficient stereo matching algorithm, which applies adaptive smoothness constraints using texture and edge information, is proposed in this work. First, we determine non-textured regions, on which an input image yields flat pixel values. In the non-textured regions, we penalize depth discontinuity and complement the primary CNN-based matching cost with a color-based cost. Second, by combining two edge maps from the input image and a pre-estimated disparity map, we extract denoised edges that correspond to depth discontinuity with high probabilities. Thus, near the denoised edges, we penalize small differences of neighboring disparities. Based on these adaptive smoothness constraints, the proposed algorithm outperforms the conventional methods significantly and achieves the state-of-the-art performance on the Middlebury stereo benchmark. |
Implementing a Functional ISO 9001 Quality Management System in Small and Medium-Sized Enterprises | This conceptual paper provides guidance for the implementation of a functional ISO 9001 quality management system (QMS) in small and medium-sized enterprises (SMEs). To help an SME understand its starting point, four initial states for QMS implementation are defined. Five paths for moving the QMS from the initial state to the desired state are described. To support the transition from the initial to the desired state, some key considerations in implementing a QMS in SMEs are discussed. The paper is based on site visits and implementation assistance the authors have provided to several SMEs. It is anticipated that the paper will help managers in SMEs understand the process of implementing ISO 9001 and help them avoid the development of a paper-driven QMS that provides limited value.
Learning question classifiers: the role of semantic information | In order to respond correctly to a free form factual question given a large collection of text data, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer and may even suggest using different strategies when looking for and verifying a candidate answer. This paper presents the first machine learning approach to question classification. Guided by a layered semantic hierarchy of answer types, we develop a hierarchical classifier that classifies questions into fine-grained classes. This work also performs a systematic study of the use of semantic information sources in natural language classification tasks. It is shown that, in the context of question classification, augmenting the input of the classifier with appropriate semantic category information results in significant improvements to classification accuracy. We show accurate results on a large collection of free-form questions used in TREC 10 and 11.
Peasants and revolution: the case of Bolivia | At the beginning of Part One of this article, an explanation of the survival of serfdom in Bolivia until 1953, and of its breakdown, is approached through an analysis of the system of social control which was maintained, in the context of the absence of strong market currents in agricultural products. The workings of the system were demonstrated with reference to the peasants' revolt of 1899, an episode which was shown to have been possible because a fraction of the elite was prepared to suspend the rules which governed the system.
Extracellular matrix-based biomaterial scaffolds and the host response. | Extracellular matrix (ECM) collectively represents a class of naturally derived proteinaceous biomaterials purified from harvested organs and tissues with increasing scientific focus and utility in tissue engineering and repair. This interest stems predominantly from the largely unproven concept that processed ECM biomaterials as natural tissue-derived matrices better integrate with host tissue than purely synthetic biomaterials. Nearly every tissue type has been decellularized and processed for re-use as tissue-derived ECM protein implants and scaffolds. To date, however, little consensus exists for defining ECM compositions or sources that best constitute decellularized biomaterials that might better heal, integrate with host tissues and avoid the foreign body response (FBR). Metrics used to assess ECM performance in biomaterial implants are arbitrary and contextually specific by convention. Few comparisons of in vivo host responses to ECM implants from different sources have been published. This review discusses current ECM-derived biomaterials characterization methods, including relationships between ECM material compositions from different sources, properties and host tissue response as implants. Relevant preclinical in vivo models are compared along with their associated advantages and limitations, and the current state of various metrics used to define material integration and biocompatibility is discussed. Common applications of these ECM-derived biomaterials as stand-alone implanted matrices and devices are compared with respect to host tissue responses.
Evaluation of the antihypertensive efficacy and tolerability of moexipril, a new ACE inhibitor, compared to hydrochlorothiazide in elderly patients | Objective: To compare the antihypertensive efficacy of moexipril, a new angiotensin-converting enzyme (ACE) inhibitor, to treatment with hydrochlorothiazide (HCTZ). Patients: Two hundred and one non-hospitalized male and female patients between 65 and 80 years of age with essential hypertension. Methods: This was a multicentre, placebo-controlled, double-blind study with a parallel group design. Subjects with a sitting diastolic blood pressure (DBP) ≥ 95 mmHg were randomized to monotherapy with placebo, moexipril 7.5 mg o.d., moexipril 15 mg o.d. or HCTZ 25 mg o.d. for 8 weeks. Results: Throughout the study period treatment with moexipril and HCTZ resulted in significant reductions of DBP compared with placebo, but there were no significant differences between the active treatment groups. At end point the adjusted mean reductions were 10.5, 8.7 and 10.1 mmHg in the HCTZ, moexipril 7.5 mg and moexipril 15 mg groups, respectively, compared to 3.9 mmHg in the placebo group. Treatment with moexipril was associated with two cases of first dose hypotension and two cases of moderate and reversible increases in serum creatinine levels. Otherwise, both dosages of moexipril were well tolerated and the overall percentages of patients who had adverse experiences were smaller than in the placebo group. Conclusion: Moexipril is well tolerated and is at least as effective as HCTZ in elderly patients with essential hypertension. |
Latent Variable Analysis: Growth Mixture Modeling and Related Techniques for Longitudinal Data | This chapter gives an overview of recent advances in latent variable analysis. Emphasis is placed on the strength of modeling obtained by using a flexible combination of continuous and categorical latent variables. To focus the discussion and make it manageable in scope, analysis of longitudinal data using growth models will be considered. Continuous latent variables are common in growth modeling in the form of random effects that capture individual variation in development over time. The use of categorical latent variables in growth modeling is, in contrast, perhaps less familiar, and new techniques have recently emerged. The aim of this chapter is to show the usefulness of growth model extensions using categorical latent variables. The discussion also has implications for latent variable analysis of cross-sectional data. The chapter begins with two major parts corresponding to continuous outcomes versus categorical outcomes. Within each part, conventional modeling using continuous latent variables will be described.
A2-RL: Aesthetics Aware Reinforcement Learning for Image Cropping | Image cropping aims at improving the aesthetic quality of images by adjusting their composition. Most weakly supervised cropping methods (without bounding box supervision) rely on the sliding window mechanism. The sliding window mechanism requires fixed aspect ratios and cannot produce cropping regions of arbitrary size. Moreover, the sliding window method usually produces tens of thousands of windows on the input image, which is very time-consuming. Motivated by these challenges, we first formulate aesthetic image cropping as a sequential decision-making process and propose a weakly supervised Aesthetics Aware Reinforcement Learning (A2-RL) framework to address this problem. In particular, the proposed method develops an aesthetics aware reward function which especially benefits image cropping. Similar to human decision making, we use a comprehensive state representation including both the current observation and the historical experience. We train the agent using the actor-critic architecture in an end-to-end manner. The agent is evaluated on several popular unseen cropping datasets. Experimental results show that our method achieves state-of-the-art performance with much fewer candidate windows and much less time compared with previous weakly supervised methods.
3D Reconstruction of Human Skeleton from Single Images or Monocular Video Sequences | In this paper, we first review the approaches to recover 3D shape and related movements of a human and then we present an easy and reliable approach to recover a 3D model using just one image or monocular video sequence. A simplification of the perspective camera model is required, due to the absence of stereo view. The human figure is reconstructed in a skeleton form and to improve the visual quality, a pre-defined human model is also fitted to the recovered 3D data. |
Automatic Construction of Nonparametric Relational Regression Models for Multiple Time Series | Gaussian Processes (GPs) provide a general and analytically tractable way of modeling complex time-varying, nonparametric functions. The Automatic Bayesian Covariance Discovery (ABCD) system constructs natural-language descriptions of time-series data by treating unknown time-series data nonparametrically using a GP with a composite covariance kernel function. Unfortunately, learning a composite covariance kernel from a single time-series data set often results in a less informative kernel that may not give qualitative, distinctive descriptions of the data. We address this challenge by proposing two relational kernel learning methods which can model multiple time-series data sets by finding common, shared causes of changes. We show that the relational kernel learning methods find more accurate models for regression problems on several real-world data sets: US stock data, US house price index data and currency exchange rate data.
Theory and Practice in Parallel Job Scheduling | Parallel job scheduling has gained increasing recognition in recent years as a distinct area of study. However, there is concern about the divergence of theory and practice in the field. We review theoretical research in this area, and recommendations based on recent results. This is contrasted with a proposal for standard interfaces among the components of a scheduling system, which has grown from requirements in the field.
Oxidation of Caffeine by Permanganate Ion in Perchloric and Sulfuric Acid Solutions: A Comparative Kinetic Study | The kinetics of oxidation of caffeine by permanganate ion in both perchloric and sulfuric acid solutions have been investigated spectrophotometrically at a constant ionic strength of 1.0 mol dm⁻³ and at 25°C. In both acids, the reaction-time curves showed a sigmoid profile, suggesting an autocatalytic effect caused by Mn(II) ions formed as a reaction product. Both the catalytic and non-catalytic processes were determined to be first order with respect to the permanganate ion and caffeine concentrations, whereas the orders with respect to [H+] and [Mn(II)] were found to be less than unity. Variation of either the ionic strength or the dielectric constant of the medium had no significant effect on the oxidation rates. Spectroscopic studies and Michaelis-Menten plots showed no evidence for the formation of intermediate complexes in either acid, suggesting that the reactions follow an outer-sphere pathway. A reaction mechanism adequately describing the kinetic results is proposed. In both acids, the main oxidation product of caffeine was identified as 1,3,7-trimethyluric acid. Under comparable experimental conditions, the oxidation rate of caffeine in perchloric acid was slightly higher than that in sulfuric acid. The constants involved in the different steps of the reaction mechanism have been evaluated. With regard to the rate-limiting step of these reactions, the activation parameters have been evaluated and discussed.
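As a reading aid, one schematic rate law consistent with the orders reported in this abstract (first order in permanganate and caffeine, with an autocatalytic Mn(II) term of fractional order) is given below; the symbols and the lumping of the [H+] dependence into apparent rate constants are assumptions for illustration, not the authors' exact expression.

```latex
-\frac{d[\mathrm{MnO_4^-}]}{dt}
  \;=\; \bigl(k_{\mathrm{u}} + k_{\mathrm{c}}\,[\mathrm{Mn(II)}]^{\beta}\bigr)\,
        [\mathrm{MnO_4^-}]\,[\mathrm{Caffeine}],
  \qquad \beta < 1 .
```

Here k_u and k_c stand for apparent uncatalyzed and Mn(II)-catalyzed rate constants, and the fractional-order dependence on [H+] reported above would enter through these apparent constants.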
ROBUST MULTIMODAL HAND- AND HEAD GESTURE RECOGNITION FOR CONTROLLING AUTOMOTIVE INFOTAINMENT SYSTEMS | The use of gestures in automotive environments provides an intuitive addition to existing interaction styles for seamlessly controlling various infotainment applications like radio-tuner, cd-player and telephone. In this work, we describe a robust, context-specific approach for a video-based analysis of dynamic hand and head gestures. The system, implemented in a BMW limousine, evaluates a continuous stream of infrared pictures using a combination of adapted preprocessing methods and a hierarchical, mainly rule-based classification scheme. Currently, 17 different hand gestures and six different head gestures can be recognized in real time on standard hardware. As a key feature of the system, the active gesture vocabulary can be reduced with regard to the current operating context, yielding more robust performance.
A Learning Algorithm for Boltzmann Machines | The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a problem in a very short time. One kind of computation for which massively parallel networks appear to be well suited is large constraint satisfaction searches, but to use the connections efficiently two conditions must be met: First, a search technique that is suitable for parallel networks must be found. Second, there must be some way of choosing internal representations which allow the preexisting hardware connections to be used efficiently for encoding the constraints in the domain being searched. We describe a general parallel search method, based on statistical mechanics, and we show how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way. We describe some simple examples in which the learning algorithm creates internal representations that are demonstrably the most efficient way of using the preexisting connectivity structure.
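For reference, the learning rule described here can be stated compactly (a hedged restatement in commonly used notation, which may differ from the paper's own): with p_ij the probability that units i and j are both on when the visible units are clamped to the environment, and p'_ij the same probability when the network runs freely, each weight is changed as

```latex
\Delta w_{ij} \;\propto\; p_{ij} - p'_{ij},
```

so that a connection is strengthened when the clamped (data-driven) co-activation exceeds the free-running (model-driven) one, and weakened otherwise.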
Runoff and erosional responses to a drought-induced shift in desert grassland community composition | This study investigates how drought-induced change in a semiarid grassland community affected runoff and sediment yield in a small watershed in southeast Arizona, USA. Three distinct periods in ecosystem composition and associated runoff and sediment yield were identified according to dominant species: native bunchgrass (1974–2005), forbs (2006), and the invasive grass, Eragrostis lehmanniana (2007–2009). Precipitation, runoff, and sediment yield for each period were analyzed and compared at watershed and plot scales. Average watershed annual sediment yield was 0.16 t ha⁻¹ yr⁻¹. Despite similarities in precipitation characteristics, decline in plant canopy cover during the transition period of 2006 caused watershed sediment yield to increase 23-fold to 1.64 t ha⁻¹ yr⁻¹ compared with the preceding period under native bunchgrasses (0.06 t ha⁻¹ yr⁻¹) or the succeeding period under E. lehmanniana (0.06 t ha⁻¹ yr⁻¹). In contrast, measurements on small runoff plots on the hillslopes of the same watershed showed a significant increase in sediment discharge that continued after E. lehmanniana replaced native grasses. Together, these findings suggest that alteration in the plant community increased sediment yield but that hydrological responses to this event differ at watershed and plot scales, highlighting the geomorphological controls at the watershed scale that determine sediment transport efficiency and storage. Resolving these scalar issues will help identify critical landform features needed to preserve watershed integrity under changing climate conditions.
App-guided exposure and response prevention for obsessive compulsive disorder: an open pilot trial. | Although effective treatments for obsessive-compulsive disorder (OCD) exist, there are significant barriers to receiving evidence-based care. Mobile health applications (Apps) offer a promising way of overcoming these barriers by increasing access to treatment. The current study investigated the feasibility, acceptability, and preliminary efficacy of LiveOCDFree, an App designed to help OCD patients conduct exposure and response prevention (ERP). Twenty-one participants with mild to moderate symptoms of OCD were enrolled in a 12-week open trial of App-guided self-help ERP. Self-report assessments of OCD, depression, anxiety, and quality of life were completed at baseline, mid-treatment, and post-treatment. App-guided ERP was a feasible and acceptable self-help intervention for individuals with OCD, with high rates of retention and satisfaction. Participants reported significant improvement in OCD and anxiety symptoms pre- to post-treatment. Findings suggest that LiveOCDFree is a feasible and acceptable self-help intervention for OCD. Preliminary efficacy results are encouraging and point to the potential utility of mobile Apps in expanding the reach of existing empirically supported treatments. |
Clinical Implications of Cognitive Bias Modification for Interpretative Biases in Social Anxiety: An Integrative Literature Review | Cognitive theories of social anxiety indicate that negative cognitive biases play a key role in causing and maintaining social anxiety. On the basis of these cognitive theories, laboratory-based research has shown that individuals with social anxiety exhibit negative interpretation biases of ambiguous social situations. Cognitive Bias Modification for interpretative biases (CBM-I) has emerged from this basic science research to modify negative interpretative biases in social anxiety and reduce emotional vulnerability and social anxiety symptoms. However, it is not yet clear if modifying interpretation biases via CBM will have any enduring effect on social anxiety symptoms or improve social functioning. The aim of this paper is to review the relevant literature on interpretation biases in social anxiety and discuss important implications of CBM-I method for clinical practice and research. |
Robust control for line-of-sight stabilization of a two-axis gimbal system | Line-of-sight stabilization against various disturbances is an essential property of gimbaled imaging systems mounted on mobile platforms. In recent years, the importance of target detection at greater distances has increased. This has raised the need for better stabilization performance. For that reason, stabilization loops are designed such that they have higher gains and larger bandwidths. As these are required for good disturbance attenuation, sufficient loop stability is also needed. However, model uncertainties around structural resonances impose strict restrictions on sufficient loop stability. Therefore, to satisfy high stabilization performance in the presence of model uncertainties, robust control methods are required. In this paper, a robust controller design in the LQG/LTR, H∞, and μ-synthesis frameworks is described for a two-axis gimbal. First, the performance criteria and weights are determined to minimize the stabilization error with moderate control effort under a known platform disturbance profile. Second, model uncertainties are determined by considering locally linearized models at different operating points. Next, robust LQG/LTR, H∞, and μ controllers are designed. Robust stability and performance of the three designs are investigated and compared. The paper finishes with experimental results that validate the designed robust controllers.
Self-inflicted eye injuries. | Five cases of self-inflicted eye injury are described and discussed. A review of the literature shows that several psychiatric diagnoses have been assigned to people who damage their eyes. A variety of mechanisms to explain this phenomenon are described. |
MatrixFlow: Temporal Network Visual Analytics to Track Symptom Evolution during Disease Progression | OBJECTIVE
To develop a visual analytic system to help medical professionals improve disease diagnosis by providing insights for understanding disease progression.
METHODS
We develop MatrixFlow, a visual analytic system that takes clinical event sequences of patients as input, constructs time-evolving networks, and visualizes them as a temporal flow of matrices. MatrixFlow provides several interactive features for analysis: 1) one can sort the events based on similarity in order to accentuate underlying cluster patterns among those events; 2) one can compare the co-occurrence of events over time and across cohorts through an additional line graph visualization.
RESULTS
MatrixFlow is applied to visualize heart failure (HF) symptom events extracted from a large cohort of HF cases and controls (n=50,625), which allows medical experts to reach insights involving temporal patterns and clusters of interest, and compare cohorts in novel ways that may lead to improved disease diagnoses.
CONCLUSIONS
MatrixFlow is an interactive visual analytic system that allows users to quickly discover patterns in clinical event sequences. By unearthing the patterns hidden within and displaying them to medical experts, users become empowered to make decisions influenced by historical patterns. |
Automatic Online Evaluation of Intelligent Assistants | Voice-activated intelligent assistants, such as Siri, Google Now, and Cortana, are prevalent on mobile devices. However, it is challenging to evaluate them due to the varied and evolving number of tasks supported, e.g., voice command, web search, and chat. Since each task may have its own procedure and a unique form of correct answers, it is expensive to evaluate each task individually. This paper is the first attempt to solve this challenge. We develop consistent and automatic approaches that can evaluate different tasks in voice-activated intelligent assistants. We use implicit feedback from users to predict whether users are satisfied with the intelligent assistant as well as its components, i.e., speech recognition and intent classification. Using this approach, we can potentially evaluate and compare different tasks within and across intelligent assistants according to the predicted user satisfaction rates. Our approach is characterized by an automatic scheme of categorizing user-system interaction into task-independent dialog actions, e.g., the user is commanding, selecting, or confirming an action. We use the action sequence in a session to predict user satisfaction and the quality of speech recognition and intent classification. We also incorporate other features to further improve our approach, including features derived from previous work on web search satisfaction prediction, and those utilizing acoustic characteristics of voice requests. We evaluate our approach using data collected from a user study. Results show our approach can accurately identify satisfactory and unsatisfactory sessions.
Retrospective of electric machines for EV and HEV traction applications at General Motors | This paper presents a retrospective of electric motor developments in General Motors (GM) for electric vehicle (EV), hybrid electric vehicle (HEV), plug-in hybrid electric vehicle (PHEV), and fuel cell electric vehicle (FCEV) production programs. This paper includes: i) the progression of electric motor stator and rotor design methodologies that gradually improved motor torque, power, and efficiency performance while mitigating noise, ii) heavy rare earth (HRE) mitigation in subsequent designs to lower cost and supply uncertainty, and iii) design techniques to lower torque ripple and radial force to mitigate noise and vibration issues. These techniques are elaborated in detail with design examples, simulation, and test data.
Design Optimal PID Controller for Quad Rotor System | Quad rotor aerial vehicles are one of the most flexible and adaptable platforms for undertaking aerial research. A quad rotor is, in simple terms, a rotorcraft that has four lift-generating propellers (four rotors); two rotors rotate clockwise and the other two rotate anticlockwise, and the speed and direction of the quad rotor can be controlled by varying the speeds of these rotors. This paper describes a PID controller used for controlling the attitude in the roll, pitch, and yaw directions. In addition, an optimal PID controller is obtained by using particle swarm optimization (PSO), bacterial foraging optimization (BFO), and the combined BF-PSO optimization. Finally, the quad rotor model is simulated for several test scenarios.
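A minimal sketch of the kind of per-axis PID attitude controller this abstract describes, in Python. The gain values, time step, and setpoint are illustrative placeholders; in the paper they would be chosen by the PSO/BFO/BF-PSO tuners rather than by hand.

```python
# Hypothetical discrete PID controller for one quadrotor axis (e.g. roll).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # integral term accumulates error
        derivative = (error - self.prev_error) / self.dt    # derivative term damps the response
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate the roll angle toward 0 rad with illustrative gains.
roll_pid = PID(kp=2.0, ki=0.5, kd=0.8, dt=0.01)
command = roll_pid.update(setpoint=0.0, measurement=0.1)
```

A swarm-based tuner would repeatedly simulate the closed loop with candidate (kp, ki, kd) triples and score each by a cost such as the integrated absolute attitude error.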
A Real-time EMG-based Assistive Computer Interface for the Upper Limb Disabled | This paper presents the design of an assistive real-time system for the upper limb disabled to access a computer via residual muscle activities without standard computer interfaces (e.g. a mouse and a keyboard). For this purpose, electromyogram (EMG) signals from muscles in the lower arm were extracted and filtered using signal statistics (mean and variance). In order to control movement and clicking of a cursor from the obtained signals, six patterns were classified, applying a supervised multi-layer neural network trained by a backpropagation algorithm. In addition, an on-screen keyboard was developed, making it possible to enter Roman and Korean letters on the computer. Using this computer interface, the user can browse the Internet and read/send e-mail. The developed computer interface provides an alternative means for individuals with motor disabilities to access computers. A possible extension of our interface methodology can be incorporated in controlling bionic robot systems for the limb disabled (e.g. exoskeletons, limb prostheses). |
Fully process-compatible layout design on bond pad to improve wire bond reliability in CMOS ICs | During wire bonding in the manufacture of packaged IC products, the breaking of bond wires and the peeling of bond pads occur frequently. The result is open-circuit failure in IC products. Several prior methods have been reported to overcome these problems by using additional process flows or special materials. In this paper, a layout method is proposed to improve bond wire reliability in general CMOS processes. By changing the layout patterns of bond pads, the reliability of bond wires on bond pads can be improved. A set of different layout patterns of bond pads has been drawn and fabricated in a 0.6-μm single-poly triple-metal CMOS process for investigation by bond wire reliability tests, the ball shear test and the wire pull test. By implementing effective layout patterns on bond pads in packaged IC products, not only can the bond wire reliability be improved, but the bond pad capacitance can also be reduced for high-frequency applications. The proposed layout method for bond pad design is fully process-compatible with general CMOS processes.
X-Diff: An Effective Change Detection Algorithm for XML Documents | XML has become the de facto standard format for web publishing and data transportation. Since online information changes frequently, being able to quickly detect changes in XML documents is important to Internet query systems, search engines, and continuous query systems. Previous work in change detection on XML, or other hierarchically structured documents, used an ordered tree model, in which left-to-right order among siblings is important and it can affect the change result. This paper argues that an unordered model (only ancestor relationships are significant) is more suitable for most database applications. Using an unordered model, change detection is substantially harder than using the ordered model, but the change result that it generates is more accurate. This paper proposes X-Diff, an effective algorithm that integrates key XML structure characteristics with standard tree-to-tree correction techniques. The algorithm is analyzed and compared with XyDiff [CAM02], a published XML diff algorithm. An experimental evaluation on both algorithms is provided. |
Restaurant Inspection Scores and Foodborne Disease | Restaurants in the United States are regularly inspected by health departments, but few data exist regarding the effect of restaurant inspections on food safety. We examined statewide inspection records from January 1993 through April 2000. Data were available from 167,574 restaurant inspections. From 1993 to 2000, mean scores rose steadily from 80.2 to 83.8. Mean inspection scores of individual inspectors were 69-92. None of the 12 most commonly cited violations were critical food safety hazards. Establishments scoring <60 had a mean improvement of 16 points on subsequent inspections. Mean scores of restaurants experiencing foodborne disease outbreaks did not differ from restaurants with no reported outbreaks. A variety of factors influence the uniformity of restaurant inspections. The restaurant inspection system should be examined to identify ways to ensure food safety. |
Stock Optimization of Spare Parts with Genetic Algorithm | In this research, a new method is proposed for the optimization of warship spare parts stock with a genetic algorithm. Warships should fulfill their duties in all circumstances. Considering that warships have more than a hundred thousand unique parts, deciding which spare parts should be stocked at the warehouse for use in case of failure is a very hard problem. In this study, a genetic algorithm, a heuristic optimization method, is used to solve this problem. The demand quantity, the criticality, and the cost of parts are used for optimization. A genetic algorithm with a very long chromosome is used, i.e., over 1000 genes in one chromosome. The outputs of the method are analyzed and compared with the Price Sensitive 0.5 FLSIP+ model, which is widely used across navies, leading to the conclusion that the proposed method is better.
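A minimal genetic-algorithm sketch of the stock-selection idea described above, in Python. The binary chromosome has one gene per part (stock it or not); the part data, fitness weighting, budget constraint, and GA settings are all illustrative assumptions rather than the paper's actual model or the FLSIP+ baseline.

```python
import random

random.seed(0)
N_PARTS = 1000                       # one gene per part: 1 = stock it, 0 = do not
demand = [random.random() for _ in range(N_PARTS)]
criticality = [random.random() for _ in range(N_PARTS)]
cost = [random.uniform(1, 100) for _ in range(N_PARTS)]
BUDGET = 20000.0

def fitness(chrom):
    # reward covering high-demand, high-criticality parts; penalize budget overruns softly
    value = sum(d * c for d, c, g in zip(demand, criticality, chrom) if g)
    spent = sum(p for p, g in zip(cost, chrom) if g)
    return value - 0.01 * max(0.0, spent - BUDGET)

def crossover(a, b):
    cut = random.randrange(1, N_PARTS)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.001):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(N_PARTS)] for _ in range(40)]
for _ in range(100):                             # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(30)]
best = max(pop, key=fitness)
```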
An approach to discriminate GNSS spoofing from multipath fading | GNSS signals are vulnerable to various types of interference including jamming and spoofing attacks. Spoofing signals are designed to deceive target GNSS receivers without being detected by conventional receiver quality monitoring metrics. This paper focuses on detecting an overlapped spoofing attack where the correlation peaks of the authentic and spoofing signals interact during the attack. Several spoofing detection and signal quality monitoring (SQM) metrics are introduced. This paper proposes a spoofing detection architecture utilizing a combination of different metrics to detect spoofing signals and distinguish them from multipath signals. Experimental results show that the pre-despreading spoofing detection metrics such as variance analysis are not sensitive to multipath propagation and can be used in conjunction with post-despreading methods to correctly detect spoofing signals. Several test scenarios based on different spoofing and multipath cases are performed to demonstrate the effectiveness of the proposed architecture in correctly detecting spoofing attacks and distinguishing them from multipath.
Quantization-based hashing: a general framework for scalable image and video retrieval | Nowadays, due to the exponential growth of user-generated images and videos, there is an increasing interest in learning-based hashing methods. In computer vision, the hash functions are learned in such a way that the hash codes can preserve essential properties of the original space (or label information). Then the Hamming distance of the hash codes can approximate the data similarity. On the other hand, vector quantization methods quantize the data into different clusters based on the criterion of minimal quantization error, and then perform the search using look-up tables. While hashing methods using Hamming distance can achieve faster search speed, their accuracy is often outperformed by quantization methods with the same code length, due to the lower quantization error and more flexible distance lookups. To improve the effectiveness of the hashing methods, in this work, we propose Quantization-based Hashing (QBH), a general framework which incorporates the advantages of quantization error reduction methods into conventional property-preserving hashing methods. The learned hash codes simultaneously preserve the properties in the original space and reduce the quantization error, and thus can achieve better performance. Furthermore, the hash functions and a quantizer can be jointly learned and iteratively updated in a unified framework, which can be readily used to generate hash codes or quantize new data points. Importantly, QBH is a generic framework that can be integrated with different property-preserving hashing methods and quantization strategies, and we apply QBH to both unsupervised and supervised hashing models as showcases in this paper. Experimental results on three large-scale unlabeled datasets (i.e., SIFT1M, GIST1M, and SIFT1B), three labeled datasets (i.e., ESPGAME, IAPRTC and MIRFLICKR) and one video dataset (UQ VIDEO) demonstrate the superior performance of our QBH over existing unsupervised and supervised hashing methods.
Depth map generation by image classification | This paper presents a novel and fully automatic technique to estimate depth information from a single input image. The proposed method is based on a new image classification technique able to classify digital images (also in Bayer pattern format) as indoor, outdoor with geometric elements or outdoor without geometric elements. Using the information collected in the classification step a suitable depth map is estimated. The proposed technique is fully unsupervised and is able to generate depth map from a single view of the scene, requiring low computational resources. |
Deep Learning for Practical Image Recognition: Case Study on Kaggle Competitions | In past years, deep convolutional neural networks (DCNN) have achieved big successes in image classification and object detection, as demonstrated on ImageNet in the academic field. However, some unique practical challenges remain for real-world image recognition applications, e.g., small size of the objects, imbalanced data distributions, limited labeled data samples, etc. In this work, we are making efforts to deal with these challenges through a computational framework by incorporating the latest developments in deep learning. Through a two-stage detection scheme, pseudo labeling, data augmentation, cross-validation and ensemble learning, the proposed framework aims to achieve better performance for practical image recognition applications as compared to using standard deep learning methods. The proposed framework has recently been deployed as the key kernel for several image recognition competitions organized by Kaggle. The performance is promising, as our final private scores were ranked 4th out of 2293 teams for fish recognition on the challenge "The Nature Conservancy Fisheries Monitoring" and 3rd out of 834 teams for cervix recognition on the challenge "Intel & MobileODT Cervical Cancer Screening", and several others. We believe that by sharing the solutions, we can further promote the applications of deep learning techniques.
Object Detector on Coastal Surveillance Radar Using Two-Dimensional Order-Statistic Constant-False-Alarm-Rate Algorithm | This paper describes the development of radar object detection using a two-dimensional constant false alarm rate (2D-CFAR) algorithm. The objective of this development is to minimize noise detections compared with previous algorithms that use a one-dimensional constant false alarm rate (1D-CFAR), such as order-statistic (OS) CFAR, cell-averaging (CA) CFAR, AND-logic (AND) CFAR and variability index (VI) CFAR, which have been implemented on coastal surveillance radar. The optimum detection result in coastal surveillance radar testing was obtained when Pfa was set to 1e-2, K was set to 3/4*Nwindow and the guard cell was set to 0. The principle of the 2D-CFAR algorithm is to combine two CFAR algorithms, one for each array of azimuth and range data. The order-statistic (OS) CFAR algorithm is implemented in this 2D-CFAR with an AND-logic fusion rule. The 2D-CFAR algorithm is developed using Microsoft Visual C++ 2008, and the output of 2D-CFAR is plotted on the radar PPI scope using the GDI+ library. The results of the 2D-CFAR development show that 2D-CFAR can reduce the noise detected compared with 1D-CFAR using the same CFAR parameters. The best object detection performance of 2D-CFAR is obtained when Nwindow is set to 128. The software processing time of 2D-CFAR is about two times longer than that of 1D-CFAR.
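An illustrative one-dimensional order-statistic CFAR detector in Python, of the kind the 2D-CFAR applies along range and azimuth before AND fusion. The scaling factor alpha is an assumption for demonstration; K = 3/4*Nwindow and a guard cell of 0 follow the settings quoted in the abstract.

```python
import numpy as np

def os_cfar_1d(signal, n_window=32, n_guard=0, k=None, alpha=5.0):
    """Return a boolean detection mask for a 1-D array of power samples."""
    k = k if k is not None else int(0.75 * n_window)       # K = 3/4 * Nwindow
    half = n_window // 2
    detections = np.zeros_like(signal, dtype=bool)
    for i in range(half + n_guard, len(signal) - half - n_guard):
        left = signal[i - half - n_guard : i - n_guard]     # reference cells left of CUT
        right = signal[i + n_guard + 1 : i + n_guard + 1 + half]
        ref = np.sort(np.concatenate([left, right]))
        threshold = alpha * ref[k - 1]                       # k-th order statistic sets the threshold
        detections[i] = signal[i] > threshold
    return detections

# 2-D fusion sketch: run the detector along rows (range) and columns (azimuth)
# of the radar image and keep only cells flagged by both (AND logic).
```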
Microcephaly and early-onset nephrotic syndrome —confusion in Galloway-Mowat syndrome | We report a 2-year-old girl with nephrotic syndrome, microcephaly, seizures and psychomotor retardation. Histological studies of a renal biopsy revealed focal glomerular sclerosis with mesangiolysis and capillary microaneurysms. Dysmorphic features were remarkable: abnormally shaped skull, coarse hair, narrow forehead, large low-set ears, almond-shaped eyes, low nasal bridge, pinched nose, thin lips and micrognathia. Cases with this rare combination of microcephaly and early onset of nephrotic syndrome with various neurological abnormalities have been reported. However, clinical manifestations and histological findings showed a wide variation, and there is considerable confusion surrounding this syndrome. We therefore reviewed the previous reports and propose a new classification of this syndrome.
1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs | We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss. |
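A small Python sketch of the core idea stated in the abstract: quantize each gradient value to one bit and carry the quantization error forward into the next minibatch. The thresholding at zero and the per-column reconstruction scale are plausible choices for illustration, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def one_bit_quantize(grad, error):
    """Quantize a gradient matrix to 1 bit per value with error feedback."""
    g = grad + error                       # add residual carried over from the previous minibatch
    sign = np.where(g >= 0, 1.0, -1.0)     # the single transmitted bit per value
    scale = np.mean(np.abs(g), axis=0, keepdims=True)   # one reconstruction scale per column
    reconstructed = sign * scale           # what the receiver would rebuild from the 1-bit message
    new_error = g - reconstructed          # residual fed into the next minibatch (error feedback)
    return sign, scale, new_error

rng = np.random.default_rng(0)
grad = rng.normal(size=(4, 3))
error = np.zeros_like(grad)
sign, scale, error = one_bit_quantize(grad, error)
```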
Hedges: A study in meaning criteria and the logic of fuzzy concepts | Logicians have, by and large, engaged in the convenient fiction that sentences of natural languages (at least declarative sentences) are either true or false or, at worst, lack a truth value, or have a third value often interpreted as 'nonsense'. And most contemporary linguists who have thought seriously about semantics, especially formal semantics, have largely shared this fiction, primarily for lack of a sensible alternative. Yet students of language, especially psychologists and linguistic philosophers, have long been attuned to the fact that natural language concepts have vague boundaries and fuzzy edges and that, consequently, natural language sentences will very often be neither true, nor false, nor nonsensical, but rather true to a certain extent and false to a certain extent, true in certain respects and false in other respects. It is common for logicians to give truth conditions for predicates in terms of classical set theory. 'John is tall' (or 'TALL(j)') is defined to be true just in case the individual denoted by 'John' (or 'j') is in the set of tall men. Putting aside the problem that tallness is really a relative concept (tallness for a pygmy and tallness for a basketball player are obviously different), suppose we fix a population relative to which we want to define tallness. In contemporary America, how tall do you have to be to be tall? 5'8"? 5'9"? 5'10"? 5'11"? 6'? 6'2"? Obviously there is no single fixed answer. How old do you have to be to be middle-aged? 35? 37? 39? 40? 42? 45? 50? Again the concept is fuzzy. Clearly any attempt to limit truth conditions for natural language sentences to true, false and 'nonsense' will distort the natural language concepts by portraying them as having sharply defined rather than fuzzily defined boundaries. Work dealing with such questions has been done in psychology. To take a recent example, Eleanor Rosch Heider (1971) took up the question of whether people perceive category membership as a clearcut issue or a matter of degree. For example, do people think of members of a given
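To make the "fuzzy boundaries" point concrete, here is a toy graded membership function for "tall" in Python. The breakpoints are illustrative assumptions, not values from the text; the point is only that truth comes in degrees between the clearly false and the clearly true.

```python
def tall(height_inches, low=69.0, high=74.0):
    """Degree (0..1) to which a height counts as 'tall'; breakpoints are hypothetical."""
    if height_inches <= low:
        return 0.0        # clearly not tall
    if height_inches >= high:
        return 1.0        # clearly tall
    return (height_inches - low) / (high - low)   # tall to a certain extent

print(tall(68))   # 0.0
print(tall(71))   # 0.4 -> true to a certain extent, false to a certain extent
print(tall(75))   # 1.0
```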
The legalization of recreational marijuana: how likely is the worst-case scenario? | Last fall, voters in Colorado and Washington approved measures legalizing the recreational use of marijuana. In the near future, residents of these states who are 21 years of age and older will be able to purchase marijuana at retail stores (Donlan, 2013). Although it can be challenging to predict future behavior, Mark Kleiman, a prominent drug policy expert, described what might be characterized as the worst-case scenario. According to Kleiman, this scenario would involve three elements: more heavy drinking, “carnage on our highways,” and a “massive” increase in the use of marijuana by minors (Livingston, 2013). Below, we discuss the likely effects of legalizing marijuana for recreational use on alcohol consumption, traffic fatalities, substance use among high school students, and other outcomes of interest to policymakers and the public. Our discussion draws heavily on studies that have examined the legalization of medical marijuana. These studies are relevant because, in states such as California, Colorado, Oregon, and Washington, the legalization of marijuana for medicinal purposes approaches de facto legalization of marijuana for recreational purposes. One of the key unknowns in the debate over legalization concerns the relationship between alcohol and marijuana use. Researchers have attempted to produce causal estimates of this relationship by exploiting cross-sectional policy and price variation (Pacula, 1998; Williams et al., 2004). We note that these estimates could easily be spurious and that more reliable estimates based on clearly defined natural experiments show that alcohol and marijuana are substitutes. Because the social costs associated with the consumption of alcohol clearly outweigh those associated with the consumption of marijuana, we conclude that legalizing the recreational use of marijuana is likely to improve public health, although plenty of unanswered questions remain.
Hesitant fuzzy sets | Several extensions and generalizations of fuzzy sets have been introduced in the literature, for example, Atanassov’s intuitionistic fuzzy sets, type 2 fuzzy sets, and fuzzy multisets. In this paper, we propose hesitant fuzzy sets. Although from a formal point of view, they can be seen as fuzzy multisets, we will show that their interpretation differs from the two existing approaches for fuzzy multisets. Because of this, together with their definition, we also introduce some basic operations. In addition, we also study their relationship with intuitionistic fuzzy sets. We prove that the envelope of the hesitant fuzzy sets is an intuitionistic fuzzy set. We prove also that the operations we propose are consistent with the ones of intuitionistic fuzzy sets when applied to the envelope of the hesitant fuzzy sets.
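The envelope result mentioned in this abstract can be written compactly as follows (a hedged restatement in standard notation): for a hesitant fuzzy set assigning to each element x a set of membership values h(x), its envelope is the intuitionistic fuzzy set

```latex
A_{\mathrm{env}} \;=\; \{\,\langle x,\ \mu(x),\ \nu(x)\rangle\,\},
\qquad \mu(x)=\inf h(x),
\qquad \nu(x)=1-\sup h(x),
```

so the membership degree is the smallest hesitant value and the non-membership degree is the complement of the largest one.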
Stock Market Prediction with Backpropagation Networks | In this paper we evaluate the performance of backpropagation neural networks applied to the problem of predicting stock market prices. The neural networks are trained to approximate the mathematical function generating the semi-chaotic timeseries which represents the history of stock market prices in order to predict the values for the future. In contrast to previous investigations, the training data used in our experiments is not exclusively based on stock market prices, but also incorporates a variety of other economic factors. The prediction quality obtained is illustrated by presenting several simulation results. 1 Introduction The stock market is affected by a large number of highly interrelated economic, political and psychological factors which interact with each other in a complex fashion. Since most of these relationships seem to be probabilistic and therefore cannot be expressed as deterministic rules, financial analysis is one of the most well suited and promising applications of artificial neural networks. Several proposals have been made to use neural network models for prediction and forecasting problems in the financial area, such as locating sources of forecast uncertainty in a recurrent gas market model [11], corporate bond rating [1], mortgage delinquency prediction [3], chaotic timeseries prediction [10], prediction of IBM daily stock prices [12], prediction of three selected German stock prices [9] and prediction of the weekly Standard & Poor's 500 index [5]. In some of these proposals, the neural networks performed better than regression techniques [1, 9] or as well as the Box-Jenkins technique [10], while in others the results were disappointing [3, 12]. In this paper, we apply several variants of the backpropagation neural network to the problem of predicting the weekly price of the FAZ-Index, which is calculated on the basis of 100 major German stocks and may be regarded as one of the German equivalents of the Dow-Jones-Index in the USA. The basic idea is to let the network learn an approximation of the mapping between the input and output data in order to discover the implicit rules governing the price movement of the FAZ-Index. The trained network is then used to predict the weekly closing prices for the future. Our work differs from previous stock price predictions with neural techniques [5, 9, 12] in the data presented to the networks. While in other approaches the input data was exclusively based on stock prices, we also consider other important economic factors, namely a subset of those considered in the fundamental and technical analysis methods used by human analysts to make their investment decisions. Thus, although
Wearable/disposable sweat-based glucose monitoring device with multistage transdermal drug delivery module | Electrochemical analysis of sweat using soft bioelectronics on human skin provides a new route for noninvasive glucose monitoring without painful blood collection. However, sweat-based glucose sensing still faces many challenges, such as difficulty in sweat collection, activity variation of glucose oxidase due to lactic acid secretion and ambient temperature changes, and delamination of the enzyme when exposed to mechanical friction and skin deformation. Precise point-of-care therapy in response to the measured glucose levels is still very challenging. We present a wearable/disposable sweat-based glucose monitoring device integrated with a feedback transdermal drug delivery module. Careful multilayer patch design and miniaturization of sensors increase the efficiency of the sweat collection and sensing process. Multimodal glucose sensing, as well as its real-time correction based on pH, temperature, and humidity measurements, maximizes the accuracy of the sensing. The minimal layout design of the same sensors also enables a strip-type disposable device. Drugs for the feedback transdermal therapy are loaded on two different temperature-responsive phase change nanoparticles. These nanoparticles are embedded in hyaluronic acid hydrogel microneedles, which are additionally coated with phase change materials. This enables multistage, spatially patterned, and precisely controlled drug release in response to the patient’s glucose level. The system provides a novel closed-loop solution for the noninvasive sweat-based management of diabetes mellitus. |
On the origin of preferred bicarbonate production from carbon dioxide (CO₂) capture in aqueous 2-amino-2-methyl-1-propanol (AMP). | AMP and its blends are an attractive solvent for CO2 capture, but the underlying reaction mechanisms still remain uncertain. We attempt to elucidate the factors enhancing bicarbonate production in aqueous AMP as compared to MEA which, like most other primary amines, preferentially forms carbamate. According to our predicted reaction energies, AMP and MEA exhibit similar thermodynamic favorability for bicarbonate versus carbamate formation; moreover, the conversion of carbamate to bicarbonate also does not appear more favorable kinetically in aqueous AMP compared to MEA. Ab initio molecular dynamics simulations, however, demonstrate that bicarbonate formation tends to be kinetically more probable in aqueous AMP while carbamate is more likely to form in aqueous MEA. Analysis of the solvation structure and dynamics shows that the enhanced interaction between N and H2O may hinder CO2 accessibility while facilitating the AMP + H2O → AMPH(+) + OH(-) reaction, relative to the MEA case. This study highlights the importance of not only thermodynamic but also kinetic factors in describing CO2 capture by aqueous amines. |
Functional electrical stimulation after spinal cord injury: current use, therapeutic effects and future directions | Repair of the injured spinal cord by regeneration therapy remains an elusive goal. In contrast, progress in medical care and rehabilitation has resulted in improved health and function of persons with spinal cord injury (SCI). In the absence of a cure, raising the level of achievable function in mobility and self-care will first and foremost depend on creative use of the rapidly advancing technology that has been so widely applied in our society. Building on achievements in microelectronics, microprocessing and neuroscience, rehabilitation medicine scientists have succeeded in developing functional electrical stimulation (FES) systems that enable certain individuals with SCI to use their paralyzed hands, arms, trunk, legs and diaphragm for functional purposes and gain a degree of control over bladder and bowel evacuation. This review presents an overview of the progress made, describes the current challenges and suggests ways to improve further FES systems and make these more widely available. |
Graph-Based Semi-supervised Learning for Fault Detection and Classification in Solar Photovoltaic Arrays | Fault detection in solar photovoltaic (PV) arrays is an essential task for increasing reliability and safety in PV systems. Because of PV's nonlinear characteristics, a variety of faults may be difficult to detect by conventional protection devices, leading to safety issues and fire hazards in PV fields. To fill this protection gap, machine learning techniques have been proposed for fault detection based on measurements, such as PV array voltage, current, irradiance, and temperature. However, existing solutions usually use supervised learning models, which are trained by numerous labeled data (known as fault types) and therefore, have drawbacks: 1) the labeled PV data are difficult or expensive to obtain, 2) the trained model is not easy to update, and 3) the model is difficult to visualize. To solve these issues, this paper proposes a graph-based semi-supervised learning model only using a few labeled training data that are normalized for better visualization. The proposed model not only detects the fault, but also further identifies the possible fault type in order to expedite system recovery. Once the model is built, it can learn PV systems autonomously over time as weather changes. Both simulation and experimental results show the effective fault detection and classification of the proposed method. |
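A minimal Python sketch of graph-based semi-supervised classification in the spirit of this abstract: a handful of labeled operating points propagate their labels through a similarity graph built over normalized measurements. The features, kernel width, and the plain label-propagation iteration are illustrative assumptions, not the paper's specific model.

```python
import numpy as np

def label_propagation(X, y, n_classes, sigma=1.0, n_iter=50):
    """X: (n, d) normalized features; y: integer labels with -1 for unlabeled samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))                 # RBF similarity graph
    P = W / W.sum(axis=1, keepdims=True)               # row-normalized transition matrix
    F = np.zeros((len(X), n_classes))
    labeled = y >= 0
    F[labeled, y[labeled]] = 1.0
    for _ in range(n_iter):
        F = P @ F                                      # propagate label mass along the graph
        F[labeled] = 0.0
        F[labeled, y[labeled]] = 1.0                   # clamp the few known labels
    return F.argmax(axis=1)

# Toy PV data: rows are normalized [array voltage, array current]; 0 = normal, 1 = fault.
X = np.array([[1.0, 1.0], [0.95, 1.05], [0.4, 0.5], [0.42, 0.48], [0.9, 1.0], [0.45, 0.52]])
y = np.array([0, -1, 1, -1, -1, -1])
print(label_propagation(X, y, n_classes=2))
```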
Addition of sildenafil to long-term intravenous epoprostenol therapy in patients with pulmonary arterial hypertension: a randomized trial. | BACKGROUND
Oral sildenafil and intravenous epoprostenol have independently been shown to be effective in patients with pulmonary arterial hypertension.
OBJECTIVE
To investigate the effect of adding oral sildenafil to long-term intravenous epoprostenol in patients with pulmonary arterial hypertension.
DESIGN
A 16-week, double-blind, placebo-controlled, parallel-group study.
SETTING
Multinational study at 41 centers in 11 countries from 3 July 2003 to 27 January 2006.
PATIENTS
267 patients with pulmonary arterial hypertension (idiopathic, associated with anorexigen use or connective tissue disease, or corrected congenital heart disease) who were receiving long-term intravenous epoprostenol therapy.
INTERVENTION
Patients were randomly assigned to receive placebo or sildenafil, 20 mg three times daily, titrated to 40 mg and 80 mg three times daily, as tolerated, at 4-week intervals. Of 265 patients who received treatment, 256 (97%) patients (123 in the placebo group and 133 in the sildenafil group) completed the study.
MEASUREMENTS
Change from baseline in exercise capacity measured by 6-minute walk distance (primary end point) and hemodynamic measurements, time to clinical worsening, and Borg dyspnea score (secondary end points).
RESULTS
A placebo-adjusted increase of 28.8 meters (95% CI, 13.9 to 43.8 meters) in the 6-minute walk distance occurred in patients in the sildenafil group; these improvements were most prominent among patients with baseline distances of 325 meters or more. Relative to epoprostenol monotherapy, addition of sildenafil resulted in a greater change in mean pulmonary arterial pressure by -3.8 mm Hg (CI, -5.6 to -2.1 mm Hg); cardiac output by 0.9 L/min (CI, 0.5 to 1.2 L/min); and longer time to clinical worsening, with a smaller proportion of patients experiencing a worsening event in the sildenafil group (0.062) than in the placebo group (0.195) by week 16 (P = 0.002). Health-related quality of life also improved in patients who received combined therapy compared with those who received epoprostenol monotherapy. There was no effect on the Borg dyspnea score. Of the side effects generally associated with sildenafil treatment, the most commonly reported in the placebo and sildenafil groups, respectively, were headache (34% and 57%; difference, 23 percentage points [CI, 12 to 35 percentage points]), dyspepsia (2% and 16%; difference, 13 percentage points [CI, 7 to 20 percentage points]), pain in extremity (18% and 25%; difference, 8 percentage points [CI, -2 to 18 percentage points]), and nausea (18% and 25%; difference, 8 percentage points [CI, -2 to 18 percentage points]).
LIMITATIONS
The study excluded patients with pulmonary arterial hypertension associated with other causes. There was an imbalance in missing data between groups, with 8 placebo recipients having no postbaseline walk assessment compared with 1 sildenafil recipient. These patients were excluded from the analysis.
CONCLUSION
In some patients with pulmonary arterial hypertension, the addition of sildenafil to long-term intravenous epoprostenol therapy improves exercise capacity, hemodynamic measurements, time to clinical worsening, and quality of life, but not Borg dyspnea score. Increased rates of headache and dyspepsia occurred with the addition of sildenafil. |
An LPV Modeling and Identification Approach to Leakage Detection in High Pressure Natural Gas Transportation Networks | In this paper a new approach to gas leakage detection in high pressure natural gas transportation networks is proposed. The pipeline is modelled as a Linear Parameter Varying (LPV) System driven by the source node massflow with the gas inventory variation in the pipe (linepack variation, proportional to the pressure variation) as the scheduling parameter. The massflow at the offtake node is taken as the system output. The system is identified by the Successive Approximations LPV System Subspace Identification Algorithm which is also described in this paper. The leakage is detected using a Kalman filter where the fault is treated as an augmented state. Given that the gas linepack can be estimated from the massflow balance equation, a differential method is proposed to improve the leakage detector effectiveness. A small section of a gas pipeline crossing Portugal in the direction South to North is used as a case study. LPV models are identified from normal operational data and their accuracy is analyzed. The proposed LPV Kalman filter based methods are compared with a standard mass balance method in a simulated 10% leakage detection scenario. The Differential Kalman Filter method proved to be highly efficient. |
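A minimal sketch of the fault-as-augmented-state idea behind such a Kalman-filter detector: the unknown leakage massflow is appended to the state and modeled as a random walk, so the filter estimates it alongside the pipeline state. The matrices below are generic placeholders, not the identified LPV model or the pipeline data from the paper:

```python
import numpy as np

def make_augmented(A, B, C):
    # Append a slowly varying leakage term d to the state x; the leak is assumed
    # (illustratively) to subtract directly from the measured offtake massflow.
    n = A.shape[0]
    Aa = np.block([[A, np.zeros((n, 1))],
                   [np.zeros((1, n)), np.eye(1)]])   # d_{k+1} = d_k (random walk)
    Ba = np.vstack([B, np.zeros((1, B.shape[1]))])
    Ca = np.hstack([C, -np.ones((C.shape[0], 1))])   # y = C x - d
    return Aa, Ba, Ca

def kf_step(x, P, u, y, Aa, Ba, Ca, Q, R):
    # One predict/update cycle with 1-D state and input vectors;
    # the last entry of x is the current leakage estimate.
    x = Aa @ x + Ba @ u
    P = Aa @ P @ Aa.T + Q
    S = Ca @ P @ Ca.T + R
    K = P @ Ca.T @ np.linalg.inv(S)
    x = x + K @ (y - Ca @ x)
    P = (np.eye(len(x)) - K @ Ca) @ P
    return x, P
```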
Water Microbiology. Bacterial Pathogens and Water | Water is essential to life, but many people do not have access to clean and safe drinking water and many die of waterborne bacterial infections. In this review a general characterization of the most important bacterial diseases transmitted through water (cholera, typhoid fever and bacillary dysentery) is presented, focusing on the biology and ecology of the causal agents and on the diseases' characteristics and their life cycles in the environment. The importance of pathogenic Escherichia coli strains and emerging pathogens in drinking water-transmitted diseases is also briefly discussed. Microbiological water analysis is mainly based on the concept of fecal indicator bacteria. The main bacteria present in human and animal feces (focusing on their behavior in their hosts and in the environment) and the most important fecal indicator bacteria are presented and discussed (focusing on the advantages and limitations of their use as markers). Important sources of bacterial fecal pollution of environmental waters are also briefly indicated. The last topic discusses which indicators of fecal pollution should be used in current drinking water microbiological analysis. It is concluded that safe drinking water for all is one of the major challenges of the 21st century and that microbiological control of drinking water should be the norm everywhere. Routine basic microbiological analysis of drinking water should be carried out by assaying the presence of Escherichia coli by culture methods. Whenever financial resources are available, fecal coliform determinations should be complemented with the quantification of enterococci. More studies are needed to determine whether ammonia is a reliable preliminary screening parameter for emergency fecal pollution outbreaks. Financial resources should be devoted to a better understanding of the ecology and behavior of human and animal fecal bacteria in environmental waters. |
Systems Biology of Free Radicals and Antioxidants | Oxidative stress is a phenomenon associated with the pathology of several diseases including atherosclerosis, neurodegenerative diseases such as Alzheimer's and Parkinson's diseases, cancer, diabetes mellitus, inflammatory diseases, as well as psychiatric disorders and the aging process. Oxidative stress is defined as an imbalance between the production of free radicals and reactive metabolites, so-called oxidants, and their elimination by protective mechanisms known as antioxidative systems, such that free radicals and their metabolites prevail over antioxidants. This imbalance leads to damage of important biomolecules and organs with plausible impact on the whole organism. Oxidative and antioxidative processes are associated with electron transfer influencing the redox state of cells and organisms; therefore, oxidative stress is also known as redox stress. At present, the opinion that oxidative stress is not always harmful has been accepted. Depending on its intensity, it can play a role in the regulation of other important processes through modulation of signal pathways, influencing synthesis of antioxidant enzymes, repair processes, inflammation, apoptosis and cell proliferation, and thus the process of malignancy. Therefore, improper administration of antioxidants can potentially negatively impact biological systems. |
The influence of gender and atopy in the relationship between obesity and asthma in childhood. | BACKGROUND
The objective of the study was to examine the relationship between asthma and overweight-obesity in Spanish children and adolescents and to determine whether this relationship was affected by gender and atopy.
METHODS
The study involves 8607 Spanish children and adolescents from the International Study of Asthma and Allergies in Childhood phase III. Unconditional logistic regression was used to obtain adjusted odds ratios (OR) and 95% confidence intervals (95% CI) for the association between asthma symptoms and overweight-obesity in the two age groups. The analysis was then stratified by sex and rhinoconjunctivitis.
RESULTS
The prevalence of overweight and obesity in 6-7-year-old children was 18.6% and 5.2% respectively and in 13-14 year-old teenagers was 11.4% and 1.1% respectively. Only the obese children, not the overweight children, of the 6-7 year old group had a higher risk of any asthma symptoms (wheezing ever: OR 1.68 [1.15-2.47], asthma ever: OR 2.29 [1.43-3.68], current asthma 2.56 [1.54-4.28], severe asthma 3.18 [1.50-6.73], exercise-induced asthma 2.71 [1.45-5.05]). The obese girls had an increased risk of suffering any asthma symptoms (wheezing ever: OR 1.73 [1.05-2.91], asthma ever: OR 3.12 [1.67-5.82], current asthma 3.20 [1.65-6.19], severe asthma 4.83[1.94-12.04], exercise-induced asthma 3.68 [1.67-8.08]). The obese children without rhinoconjunctivitis had a higher risk of asthma symptoms.
CONCLUSIONS
Obesity and asthma symptoms were associated in 6-7 year-old children but not in 13-14 year-old teenagers. The association was stronger in non-atopic children and obese girls. |
Human-Computer Interaction | Offering training programs to their employees is one of the necessary tasks that managers must comply with. Training is done mainly to provide up-to-date knowledge or to convey to staff the objectives, history, corporate name, functions of the organization's areas, and the processes, laws, norms or policies that must be fulfilled. Although there are many methods, models and tools that are useful for this purpose, many companies face common problems such as employee motivation and high costs in terms of money and time. In an effort to solve this problem, new trends have emerged in the last few years, in particular strategies related to games, such as serious games and gamification, whose success has been demonstrated by numerous researchers. Accordingly, we present a systematic literature review, following the procedure suggested by Cooper, of the different approaches that have used games or their elements for this purpose, ending with a discussion of the positive and negative findings. |
An End-to-End Neural Architecture for Reading Comprehension | Machine reading comprehension is an ongoing area of research in both industry and academia. In this paper, we present an end-to-end model combining elements of state-of-the-art question answering systems trained on the Stanford Question Answering Dataset (SQuAD). Using this architecture, we achieve an F1 of 61.5% and an EM of 48.3%. |
On Ideal Binary Mask As the Computational Goal of Auditory Scene Analysis | What is the computational goal of auditory scene analysis? This is a key issue to address in the Marrian information-processing framework. It is also an important question for researchers in computational auditory scene analysis (CASA) because it bears directly on how a CASA system should be evaluated. In this chapter I discuss different objectives used in CASA. I suggest as a main CASA goal the use of the ideal time-frequency (T-F) binary mask whose value is one for a T-F unit where the target energy is greater than the interference energy and is zero otherwise. The notion of the ideal binary mask is motivated by the auditory masking phenomenon. Properties of the ideal binary mask are discussed, including their relationship to automatic speech recognition and human speech intelligibility. This CASA goal has led to algorithms that directly estimate the ideal binary mask in monaural and binaural conditions, and these algorithms have substantially advanced the state-of-the-art performance in speech separation. |
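The definition above translates directly into code; here is a minimal sketch, assuming magnitude STFTs of the separately available target and interference signals (the strict "greater than" comparison follows the abstract's wording):

```python
import numpy as np

def ideal_binary_mask(target_stft, interference_stft):
    """Return 1 for T-F units where target energy exceeds interference energy, else 0."""
    target_energy = np.abs(target_stft) ** 2
    interference_energy = np.abs(interference_stft) ** 2
    return (target_energy > interference_energy).astype(np.float32)

# Applying the mask to the mixture spectrogram keeps only target-dominated units:
# separated = ideal_binary_mask(S_target, S_noise) * S_mixture
```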
Non-Deterministic Policy Improvement Stabilizes Approximated Reinforcement Learning | This paper investigates a type of instability that is linked to the greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like LSPI by controlling the improvements’ stochasticity. Additionally we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should also be easily transferable to more sophisticated algorithms like deep reinforcement learning. |
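One simple way to realize a non-deterministic improvement step is to replace the greedy argmax over Q-values with a temperature-controlled softmax, sketched below; this is an illustrative choice for controlling stochasticity, not necessarily the exact scheme evaluated in the paper:

```python
import numpy as np

def soft_policy_improvement(q_values, temperature=1.0):
    # Higher temperature -> more stochastic (less greedy) improved policy;
    # temperature -> 0 recovers the usual greedy improvement.
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Example: action probabilities for one state's Q-values
print(soft_policy_improvement([1.0, 1.2, 0.3], temperature=0.5))
```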
An Efficient Autocalibration Method for Triaxial Accelerometer | This paper investigates the autocalibration of a microelectromechanical systems (MEMS) triaxial accelerometer (TA) based on design of experiments (DoE). First, for a special 6-parameter second-degree model, a six-point experimental scheme is proposed, and its G-optimality has been proven based on optimal DoE. Then, a new linearization approach is introduced, by which the TA model for autocalibration can be simplified to the expected second-degree form so that the proposed optimal experimental scheme can be applied. To reliably estimate the model parameters, a convergence-guaranteed iterative algorithm is also proposed, which can significantly reduce the bias caused by linearization. Thereafter, the effectiveness and robustness of the proposed approach have been demonstrated by simulation. Finally, the proposed calibration method has been experimentally verified using two typical types of MEMS TA, and the experimental results effectively demonstrate the efficiency and accuracy of the proposed calibration approach. |
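For orientation, a hedged sketch of what a generic 6-parameter (per-axis scale and bias) accelerometer calibration can look like: the parameters are fitted by nonlinear least squares under the constraint that static readings should have gravity magnitude. This does not reproduce the paper's G-optimal six-point scheme, linearization, or iterative estimator:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, readings, g=9.81):
    # 6-parameter model: calibrated = s * (raw - b), with per-axis scales s and biases b.
    s, b = params[:3], params[3:]
    calibrated = s * (readings - b)
    return np.linalg.norm(calibrated, axis=1) - g   # should be ~0 when the sensor is static

# readings: (N, 3) array of static measurements taken in several orientations (hypothetical data)
# x0 = np.r_[np.ones(3), np.zeros(3)]               # initial guess: unit scale, zero bias
# fit = least_squares(residuals, x0, args=(readings,))
# scales, biases = fit.x[:3], fit.x[3:]
```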
Insider Threat Analysis Using Information-Centric Modeling | Capability acquisition graphs (CAGs) provide a powerful framework for modeling insider threats, network attacks and system vulnerabilities. However, CAG-based security modeling systems have yet to be deployed in practice. This paper demonstrates the feasibility of applying CAGs to insider threat analysis. In particular, it describes the design and operation of an information-centric, graphics-oriented tool called ICMAP. ICMAP enables an analyst without any theoretical background to apply CAGs to answer security questions about vulnerabilities and likely attack scenarios, as well as to monitor network nodes. This functionality makes the tool very useful for attack attribution and forensics. |
A Decommodified Experience? Exploring Aesthetic, Economic and Ethical Values for Volunteer Ecotourism in Costa Rica | Volunteer ecotourism has been described as an 'ideal' form of decommodified ecotourism that overcomes problems associated with tourism in general, and ecotourism specifically. Using a case study of volunteer ecotourism and sea turtle conservation in Costa Rica, this paper interrogates this ideal. Perceptions of volunteer ecotourism were explored through in-depth interviews with 36 stakeholders, including hosts, NGO staff, government employees, local 'cabineros' (families who provide accommodation) and guests (volunteers). Results show that while all stakeholder groups share similarly positive views of volunteer ecotourism, subtle but important differences exist. We analyse these differences in terms of aesthetic, economic, and ethical values, and situate the results in existing theories about the moralisation and decommodification of ecotourism. |
Snippets: Taking the High Road to a Low Level | When building a compiler for a high-level language, certain intrinsic features of the language must be expressed in terms of the resulting low-level operations. Complex features are often expressed by explicitly weaving together bits of low-level IR, a process that is tedious, error prone, difficult to read, difficult to reason about, and machine dependent. In the Graal compiler for Java, we take a different approach: we use snippets of Java code to express semantics in a high-level, architecture-independent way. Two important restrictions make snippets feasible in practice: they are compiler specific, and they are explicitly prepared and specialized. Snippets make Graal simpler and more portable while still capable of generating machine code that can compete with other compilers of the Java HotSpot VM. |
YAMAMA: Yet Another Multi-Dialect Arabic Morphological Analyzer | In this paper, we present YAMAMA, a multi-dialect Arabic morphological analyzer and disambiguator. Our system is almost five times faster than the state-of-the-art MADAMIRA system with a slightly lower quality. In addition to speed, YAMAMA outputs a rich representation which allows for a wider spectrum of use. In this regard, YAMAMA transcends other systems, such as FARASA, which is faster but provides specific outputs catering to specific applications. |
A radiochemical study of irreversible adsorption of proteins on reversed-phase chromatographic packing materials. | A radiochemical study of the irreversible adsorption of proteins on commercial reversed-phase HPLC packing materials is reported. The conditions of study are similar to those used in HPLC separation of protein. The effects of the amount and contact time of two proteins, ovalbumin and cytochrome c, are reported. Additional results include the effect of column pretreatment with protein, silanophilic mobile-phase blocking agent, and type of packing material on the extent of irreversible adsorption. The loss process is shown to be at least biphasic and the mechanisms of loss distinct for different proteins. |
LSTM-Based Hierarchical Denoising Network for Android Malware Detection | Mobile security is an important issue on the Android platform. Most malware detection methods based on machine learning models heavily rely on expert knowledge for manual feature engineering, which still struggles to describe malware fully. In this paper, we present the LSTM-based hierarchical denoise network (HDN), a novel static Android malware detection method which uses LSTM to learn directly from the raw opcode sequences extracted from decompiled Android files. However, most opcode sequences are too long for LSTM to train on due to the gradient vanishing problem. Hence, HDN uses a hierarchical structure, whose first-level LSTM computes in parallel over opcode subsequences (we call them method blocks) to learn dense representations; the second-level LSTM can then learn and detect malware through method block sequences. Considering that malicious behavior only appears in partial sequence segments, HDN uses a method block denoise module (MBDM) for data denoising with an adaptive gradient scaling strategy based on a loss cache. We evaluate and compare HDN with the latest mainstream research on three datasets. The results show that HDN outperforms these Android malware detection methods, and it is able to capture longer sequence features and has better detection efficiency than N-gram-based malware detection, which is similar to our method. |
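A compact PyTorch sketch of the two-level idea described above: a first LSTM encodes each opcode method block into a dense vector, and a second LSTM classifies the sequence of block vectors. Dimensions are arbitrary and the method block denoise module (MBDM) is omitted, so this illustrates the hierarchy rather than the full HDN:

```python
import torch
import torch.nn as nn

class HierarchicalLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.block_lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # over opcodes in a block
        self.doc_lstm = nn.LSTM(hidden, hidden, batch_first=True)     # over method blocks
        self.classify = nn.Linear(hidden, n_classes)

    def forward(self, opcodes):                      # (batch, n_blocks, block_len) int64 ids
        b, nb, bl = opcodes.shape
        x = self.embed(opcodes.reshape(b * nb, bl))
        _, (h, _) = self.block_lstm(x)               # dense representation per method block
        block_vecs = h[-1].reshape(b, nb, -1)
        _, (h2, _) = self.doc_lstm(block_vecs)
        return self.classify(h2[-1])                 # malware / benign logits

# logits = HierarchicalLSTM(vocab_size=256)(torch.randint(0, 256, (4, 10, 50)))
```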
NEO-KANTIAN PHILOSOPHIES OF SCIENCE: CASSIRER, KUHN, AND FRIEDMAN | In some recent writings, as well as in the article published here, Michael Friedman offers a fascinating account of how Thomas Kuhn's groundbreaking work in the history and philosophy of science is related to—and even explicitly indebted to—an important strand of Neo-Kantianism. Friedman shows that Kuhn's talk of "shifts" between paradigms, in particular, is influenced by the Neo-Kantian conception of the exact sciences as evolving by way of a series of massive structural transitions which do not aim at "convergence to a final truth about a mind-independent reality" (p. 242). Kuhn takes a step beyond the Neo-Kantians, however, in giving up not just convergence towards things-in-themselves but also any sort of convergence at all. According to Friedman, this final step of Kuhn's is in fact a misstep—an unfortunate result of the fact that he followed Alexandre Koyre and Emile Meyerson in assuming that any convergence in mathematical physics would have to be "substantial." Substantial convergence, roughly speaking, involves a series of theoretical structures having the same "physical referents" but accounting for them (descriptively and predictively) in an increasingly adequate way. And this, as Kuhn himself points out, is not the way the history of the exact sciences has actually gone. Instead, the physical referents of, say, the Einsteinian theory are essentially and substantially different from the physical referents of the Newtonian theory. |
Developing a tool to assess quality of stroke care across European populations: the EROS Quality Assessment Tool. | BACKGROUND AND PURPOSE
There are significant differences in the provision of care and outcome after stroke across countries. The European Registers of Stroke study aimed to develop, test, and refine a tool to assess quality of care.
METHODS
We used a systematic review and grading of evidence for stroke care across the clinical pathway and developed and field-tested a quality tool that was delivered by post and later by site visit at 7 centers. Items were refined by using an algorithm that took into account the level of evidence, measurement properties, and consensus of opinion obtained using the Delphi technique.
RESULTS
The tool included 251 items across 11 domains, of which 214 items could be categorized by any level of evidence. Overall agreement between postal and site visit modes of delivery was acceptable (κ=0.77), with most items having a κ>0.5. The refinement process resulted in 2 practical versions of the tool (93 items and 22 items). Positive responses to items in the tool indicated implementation of evidence-based stroke care. In field testing, the proportion of positive responses to evidence-based items ranged from 43% to 79% across populations. Proportions of different types of evidence being implemented were similar: high quality 62%, limited quality 72%, and expert opinion 54% across the populations. More than half (4 of 7) of the centers provided stroke unit care and thrombolysis, but availability and access to inpatient rehabilitation varied significantly, with poor access to community follow-up for rehabilitation and medical management.
CONCLUSIONS
The European Registers of Stroke Quality Assessment Tool has potential to be used as a framework to compare services and promote increased implementation of evidence-based care. |