title (stringlengths 8–300) | abstract (stringlengths 0–10k) |
---|---|
The Myth of 'the Myth of Hypercomputation' | Abstractness and Approximations: This rather absurd attack goes as follows: 1. Even Turing machines are abstract models that can’t be implemented fully. 2. Therefore, no other more powerful model can be implemented fully. Going by the same argument, since Turing computers can’t be realized fully, Turing computation is now another “myth.” The problem is that Davis fails to recognize that a lot of the hypercomputational models are abstract models that no one hopes to build in the near future. Necessity of Noncomputable Reals: Another point in Davis’ argument is that almost all hypercomputation models require Physics to give them a Turing-uncomputable real number. This is false. Quite a large number of hypercomputation models don’t require non-computable reals and roughly fall into the following categories: Infinite Time Turing Machines, Zeus Machines, and Kieu-type Quantum Computation. Science-based Arguments: A Meta-Analysis of Davis and Friends: The Main Case; Science of Sciences; Part 1: Chain Store Paradox; Part 2: Turing-level Actors; Part 3: MDL Computational Learning Theory; CLT-based Model of Science. |
Feature selection in text classification | In recent years, text classification has been widely used, and the dimensionality of text data has grown steadily. The performance of almost all classification algorithms depends directly on this dimensionality: on high-dimensional data sets, classification algorithms both take longer to run and suffer from overfitting, so feature selection is crucial for machine learning techniques. In this study, the frequently used feature selection metrics Chi Square (CHI), Information Gain (IG) and Odds Ratio (OR) have been applied. In addition, Relevancy Frequency (RF), originally proposed as a term weighting method, has been used here as a feature selection method. tf.idf is used as the term weighting method, with Sequential Minimal Optimization (SMO) and Naive Bayes (NB) as the classification algorithms. Experimental results show that RF gives successful results. |
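As a rough illustration of the kind of pipeline this abstract describes (not the authors' code), the sketch below applies chi-square feature selection on top of tf.idf weighting before a Naive Bayes classifier; the toy corpus, the value of k, and the scikit-learn implementation are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus and labels; placeholders, not the paper's dataset.
docs = ["cheap meds online now", "project meeting at noon",
        "win money now", "lunch with the team"]
labels = [1, 0, 1, 0]

# tf.idf term weighting -> CHI-based selection of the top-k terms -> Naive Bayes.
clf = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=5),   # keep the 5 highest chi-square scoring terms
    MultinomialNB(),
)
clf.fit(docs, labels)
print(clf.predict(["free money meds"]))
```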
Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders | Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 x 10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns. |
A Probabilistic On-Line Mapping Algorithm for Teams of Mobile Robots | An efficient probabilistic algorithm for the concurrent mapping and localization problem that arises in mobile robotics is presented. The algorithm addresses the problem in which a team of robots builds a map on-line while simultaneously accommodating errors in the robots’ odometry. At the core of the algorithm is a technique that combines fast maximum likelihood map growing with a Monte Carlo localizer that uses particle representations. The combination of both yields an on-line algorithm that can cope with large odometric errors typically found when mapping environments with cycles. The algorithm can be implemented in a distributed manner on multiple robot platforms, enabling a team of robots to cooperatively generate a single map of their environment. Finally, an extension is described for acquiring three-dimensional maps, which capture the structure and visual appearance of indoor environments in three dimensions. KEY WORDS—mobile robotics, map acquisition, localization, robotic exploration, multi-robot systems, three-dimensional modeling |
Sparse coding with an overcomplete basis set: A strategy employed by V1? | The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete--i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are non-orthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli. |
How to Gauge the Relevance of Codes in Qualitative Data Analysis? - A Technique Based on Information Retrieval | Qualitative research has experienced broad acceptance in the IS discipline. Despite the merits for exploring new phenomena, qualitative methods are criticized for their subjectivity when it comes to interpretation. Therefore, research mostly emphasized the development of criteria and guidelines for good practice. I present an approach to counteract the issue of credibility and traceability in qualitative data analysis and expand the repertoire of approaches used in IS research. I draw on an existing approach from the information science discipline and adapt it to analyze coded qualitative data. The developed approach is designed to answer questions about the specific relevance of codes and aims to support the researcher in detecting hidden information in the coded material. For this reason, the paper contributes to the IS methodology by bringing new insights to current methods, enhancing them with an approach from another discipline. |
Effects of de-alcoholised wines with different polyphenol content on DNA oxidative damage, gene expression of peripheral lymphocytes, and haemorheology: an intervention study in post-menopausal women. | PURPOSE
Epidemiological studies suggest that a moderate consumption of wine is associated with a reduced risk of cardiovascular diseases and with a reduced mortality for all causes, possibly due to increased antioxidant defences. The present intervention study was undertaken to evaluate the in vivo effects of wine polyphenols on gene expression in humans, along with their supposed antioxidant activity.
METHODS
Blood haemorheology and platelet function were also evaluated. In order to avoid interference from alcohol, we used de-alcoholised wine (DAW) with different polyphenol content. A randomised cross-over trial of high-proanthocyanidin (PA) red DAW (500 mL/die, PA dose = 7 mg/kg b.w.) vs. low-PA rosé DAW (500 mL/die, PA dose = 0.45 mg/kg) was conducted in 21 post-menopausal women in Florence, Italy. Oxidative DNA damage by the comet assay and gene expression by microarray were measured in peripheral blood lymphocytes collected during the study period. Blood samples were also collected for the evaluation of haematological, haemostatic, haemorheological, and inflammatory parameters.
RESULTS
The results of the present study provide evidence that consumption of substantial amounts of de-alcoholised wine for 1 month does not exert a protective activity towards oxidative DNA damage, nor does it significantly modify the gene expression profile of peripheral lymphocytes, whereas it shows blood-fluidifying actions, expressed as a significant decrease in blood viscosity. However, this effect does not correlate with the dosage of polyphenols of the de-alcoholised wine.
CONCLUSIONS
More intervention studies are needed to provide further evidence of the health-protective effects of wine proanthocyanidins. |
Potentiation of apple procyanidin-triggered apoptosis by the polyamine oxidase inactivator MDL 72527 in human colon cancer-derived metastatic cells. | Apple procyanidins have chemopreventive properties in a model of colon cancer; they affect intracellular signalling pathways and trigger apoptosis in a human adenocarcinoma-derived metastatic cell line (SW620). In the present study we investigated relationships between procyanidin-induced alterations in polyamine metabolism and apoptotic effects. Apple procyanidins diminish the activities of ornithine decarboxylase and S-adenosyl-L-methionine decarboxylase, key enzymes of polyamine biosynthesis, and they induce spermidine/spermine N(1)-acetyltransferase, which initiates retroconversion of polyamines. As a consequence of the enzymatic changes, polyamine concentrations are diminished, and N(1)-acetyl-polyamines accumulate in SW620 cells. In contrast with expectations, MDL 72527, an inactivator of polyamine oxidase (PAO), improved the anti-proliferative effect of procyanidins, and caused an increase of the proportion of apoptotic cells, although it prevented the formation of hydrogen peroxide and 3-acetamidopropanal, the cytotoxic products of PAO-catalysed degradation of N(1)-acetylspermidine and N(1)-acetylspermine. Addition of 500 microM N(1)-acetylspermidine to the culture medium in the presence of procyanidins mimicked the effect of MDL 72527. Therefore we presume that the enhanced procyanidin-triggered apoptosis by MDL 72527 is mediated by the accumulation of N(1)-acetyl-polyamines. The observation that apple procyanidins enhance polyamine catabolism and reduce polyamine biosynthesis activity similar to known inducers of SSAT, without sharing their toxicity, and the potentiation of these effects by low concentrations of MDL 72527, suggest apple procyanidins as candidates for chemopreventive and therapeutic interventions. |
FORECASTING WITH MANY PREDICTORS | |
Online search of overlapping communities | A great deal of research has been conducted on modeling and discovering communities in complex networks. In most real life networks, an object often participates in multiple overlapping communities. In view of this, recent research has focused on mining overlapping communities in complex networks. The algorithms essentially materialize a snapshot of the overlapping communities in the network. This approach has three drawbacks, however. First, the mining algorithm uses the same global criterion to decide whether a subgraph qualifies as a community. In other words, the criterion is fixed and predetermined. But in reality, communities for different vertices may have very different characteristics. Second, it is costly, time consuming, and often unnecessary to find communities for an entire network. Third, the approach does not support dynamically evolving networks. In this paper, we focus on online search of overlapping communities, that is, given a query vertex, we find meaningful overlapping communities the vertex belongs to in an online manner. In doing so, each search can use community criterion tailored for the vertex in the search. To support this approach, we introduce a novel model for overlapping communities, and we provide theoretical guidelines for tuning the model. We present several algorithms for online overlapping community search and we conduct comprehensive experiments to demonstrate the effectiveness of the model and the algorithms. We also suggest many potential applications of our model and algorithms. |
Current Feedback Compensation Circuit for 2T1C LED Displays: Method | A novel current feedback programming principle and circuit architecture are presented, compatible with LED displays utilizing the 2T1C pixel structure. The new pixel programming approach is compatible with all TFT backplane technologies and can compensate for non-uniformities in both threshold voltage and carrier mobility of the OLED pixel drive TFT, due to a feedback loop that modulates the gate of the driving transistor according to the OLED current. The circuit can be internal or external to the integrated display data driver. Based on simulations and data gathered through a fabricated prototype driver, a pixel drive current of 20 nA can be programmed within an addressing time ranging from 10 μs to 50 μs. |
Cognitive Architectures for Multimedia Learning | This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences. |
Heart rate variability during antidepressant treatment with venlafaxine and mirtazapine. | Heart rate variability (HRV) reflects the cardiac autonomic regulation, and reduced HRV is considered a pathophysiological link between depression and cardiovascular mortality. So far, there is only limited information on the effects of venlafaxine and mirtazapine on HRV. We studied 28 nondepressed controls and 41 moderately depressed patients being treated with venlafaxine (n = 20) and mirtazapine (n = 21). Heart rate, blood pressure, and HRV were measured after a 6-day washout as well as after 14 and 28 days of treatment in supine and upright position. We found increased heart rate and reduced HRV in the depressed patients compared with the nondepressed controls. Moreover, HRV total power declined during the treatment period. Medication and remission status after 4 weeks were not related to the change in HRV. We conclude that depression is related to reduced HRV, which might reflect sympathovagal dysbalance. The widely used antidepressants venlafaxine and mirtazapine led to further decline in HRV. Clinicians should consider HRV effects in the selection of antidepressants. |
Monocyte and macrophage plasticity in tissue repair and regeneration. | Heterogeneity and high versatility are the characteristic features of the cells of monocyte-macrophage lineage. The mononuclear phagocyte system, derived from the bone marrow progenitor cells, is primarily composed of monocytes, macrophages, and dendritic cells. In regenerative tissues, a central role of monocyte-derived macrophages and paracrine factors secreted by these cells is indisputable. Macrophages are highly plastic cells. On the basis of environmental cues and molecular mediators, these cells differentiate to proinflammatory type I macrophage (M1) or anti-inflammatory or proreparative type II macrophage (M2) phenotypes and transdifferentiate into other cell types. Given a central role in tissue repair and regeneration, the review focuses on the heterogeneity of monocytes and macrophages with current known mechanisms of differentiation and plasticity, including microenvironmental cues and molecular mediators, such as noncoding RNAs. |
JSAI: a static analysis platform for JavaScript | JavaScript is used everywhere from the browser to the server, including desktops and mobile devices. However, the current state of the art in JavaScript static analysis lags far behind that of other languages such as C and Java. Our goal is to help remedy this lack. We describe JSAI, a formally specified, robust abstract interpreter for JavaScript. JSAI uses novel abstract domains to compute a reduced product of type inference, pointer analysis, control-flow analysis, string analysis, and integer and boolean constant propagation. Part of JSAI's novelty is user-configurable analysis sensitivity, i.e., context-, path-, and heap-sensitivity. JSAI is designed to be provably sound with respect to a specific concrete semantics for JavaScript, which has been extensively tested against a commercial JavaScript implementation. We provide a comprehensive evaluation of JSAI's performance and precision using an extensive benchmark suite, including real-world JavaScript applications, machine generated JavaScript code via Emscripten, and browser addons. We use JSAI's configurability to evaluate a large number of analysis sensitivities (some well-known, some novel) and observe some surprising results that go against common wisdom. These results highlight the usefulness of a configurable analysis platform such as JSAI. |
Cellular Disco: resource management using virtual clusters on shared-memory multiprocessors | Despite the fact that large-scale shared-memory multiprocessors have been commercially available for several years, system software that fully utilizes all their features is still not available, mostly due to the complexity and cost of making the required changes to the operating system. A recently proposed approach, called Disco, substantially reduces this development cost by using a virtual machine monitor that leverages the existing operating system technology. In this paper we present a system called Cellular Disco that extends the Disco work to provide all the advantages of the hardware partitioning and scalable operating system approaches. We argue that Cellular Disco can achieve these benefits at only a small fraction of the development cost of modifying the operating system. Cellular Disco effectively turns a large-scale shared-memory multiprocessor into a virtual cluster that supports fault containment and heterogeneity, while avoiding operating system scalability bottlenecks. Yet at the same time, Cellular Disco preserves the benefits of a shared-memory multiprocessor by implementing dynamic, fine-grained resource sharing, and by allowing users to overcommit resources such as processors and memory. This hybrid approach requires a scalable resource manager that makes local decisions with limited information while still providing good global performance and fault containment. In this paper we describe our experience with a Cellular Disco prototype on a 32-processor SGI Origin 2000 system. We show that the execution time penalty for this approach is low, typically within 10% of the best available commercial operating system for most workloads, and that it can manage the CPU and memory resources of the machine significantly better than the hardware partitioning approach. |
HAIRY POLYP on the dorsum of the tongue – detection and comprehension of its possible dynamics | BACKGROUND
The formation of a Hairy Polyp on the dorsum of the tongue is a rare condition that may hinder vital functions such as swallowing and breathing due to mechanical obstruction. The authors present such a case in a child, with an approach of significant academic value.
METHODS
Imaging diagnostics with the application of a topical oral radiocontrast agent were used to determine the extent of the tumor. Treatment consisted of complete excision, and the diagnosis was confirmed by anatomopathological analysis.
RESULTS
The patient was followed up for five months and, showing no signs of relapse, was considered free of the lesion.
CONCLUSION
Accurate diagnosis of such a lesion must be performed in depth so that proper surgical treatment may be carried out. The proposed imaging method permitted visualization of the tumor's insertion and volume, as well as comprehension of its threatening dynamics. |
Simulating Self-Motion I: Cues for the Perception of Motion | When people move there are many visual and non-visual cues that can inform them about their movement. Simulating self-motion in a virtual reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self-motion can be estimated from the visual flow field, physical forces or the act of moving. On its own, passive visual motion is a very effective cue to self-motion, and evokes a perception of self-motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self-motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues. |
Complex hole-filling algorithm for 3D models | In this paper, we propose a new and simple method for filling complex holes in surfaces. To fill a hole, locally uniform points are added to the hole by creating contour curves inside the boundary edges of the hole. A set of contour curves is created by shortening the flow of the boundary edges of the hole. The Delaunay triangulation method in a local area is applied for creating new meshes. The direction of the shortening flow is changed to satisfy the convergence of the curve shortening flow. It enables the filling of a complex hole, such as a hole with an island and a hole with highly curved boundary edges. In addition, the method can be used to fill a hole by preserving the sharp features of the model. |
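As a loose illustration of the overall pipeline described above (interior contour curves added inside the hole boundary, then locally triangulated), the sketch below replaces the paper's curve-shortening flow with a much cruder centroid-shrinking step and uses SciPy's 2D Delaunay triangulation on a local plane projection; the function names and parameters are assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial import Delaunay

def fill_hole(boundary_pts, n_rings=3):
    """Crude hole-filling sketch: insert interior contour rings and triangulate.

    boundary_pts: (N, 2) ordered hole-boundary vertices projected onto a local plane.
    NOTE: a simplification of the paper's method -- rings are made by shrinking the
    boundary toward its centroid instead of a true curve-shortening flow.
    """
    centroid = boundary_pts.mean(axis=0)
    pts = [boundary_pts]
    for i in range(1, n_rings + 1):
        t = i / (n_rings + 1)
        pts.append(centroid + (1.0 - t) * (boundary_pts - centroid))  # inner ring
    pts.append(centroid[None, :])                                     # centre point
    cloud = np.vstack(pts)
    tri = Delaunay(cloud)          # local 2D Delaunay triangulation of the new points
    return cloud, tri.simplices    # new vertices and triangle indices

boundary = np.array([[np.cos(a), np.sin(a)]
                     for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)])
verts, faces = fill_hole(boundary)
print(verts.shape, faces.shape)
```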
Antipsychotic effect of propranolol on chronic schizophrenics: Study of a gradual treatment regimen | Ten chronic treatment-resistant schizophrenic patients were treated in a single-blind trial for six weeks with propranolol in daily doses increasing up to 3 g in order to evaluate a modified dose regimen of propranolol treatment in these patients. Four of the patients improved significantly and could be released from the hospital, regaining premorbid social and work adjustments. The modified regimen proved safe as long as the dose increment was not more than 80 mg/day. The mean therapeutic dose of propranolol was 1600 mg/day, which proved to be a safe dose. Although three patients with hypertensive encephalopathy responded, their response was related not to the maximum dose but to a drastic change in the rate of the dose increase. It seems that on the basis of these results a double-blind comparative study would be worthwhile. |
Learning low dimensional predictive representations | Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (Littman et al., 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons. |
Image Segmentation for Cardiovascular Biomedical Applications at Different Scales | In this study, we present several image segmentation techniques for various image scales and modalities. We consider cellular-, organ-, and whole organism-levels of biological structures in cardiovascular applications. Several automatic segmentation techniques are presented and discussed in this work. The overall pipeline for reconstruction of biological structures consists of the following steps: image pre-processing, feature detection, initial mask generation, mask processing, and segmentation post-processing. Several examples of image segmentation are presented, including patient-specific abdominal tissues segmentation, vascular network identification and myocyte lipid droplet micro-structure reconstruction. |
k-Shape: Efficient and Accurate Clustering of Time Series | The proliferation and ubiquity of temporal data across many disciplines has generated substantial interest in the analysis and mining of time series. Clustering is one of the most popular data mining methods, not only due to its exploratory power, but also as a preprocessing step or subroutine for other techniques. In this paper, we describe k-Shape, a novel algorithm for time-series clustering. k-Shape relies on a scalable iterative refinement procedure, which creates homogeneous and well-separated clusters. As its distance measure, k-Shape uses a normalized version of the cross-correlation measure in order to consider the shapes of time series while comparing them. Based on the properties of that distance measure, we develop a method to compute cluster centroids, which are used in every iteration to update the assignment of time series to clusters. An extensive experimental evaluation against partitional, hierarchical, and spectral clustering methods, with the most competitive distance measures, showed the robustness of k-Shape. Overall, k-Shape emerges as a domain-independent, highly accurate, and efficient clustering approach for time series with broad applications. |
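For intuition only (not the authors' implementation), the following numpy sketch computes a shape-based distance in the spirit of the cross-correlation measure described above: z-normalize the two series and take one minus the maximum coefficient-normalized cross-correlation over all shifts. Centroid computation and the exact normalization used by k-Shape follow the paper and are not reproduced here.

```python
import numpy as np

def sbd(x, y):
    """Shape-based distance sketch: 1 minus the maximum coefficient-normalized
    cross-correlation over all shifts (values lie in [0, 2])."""
    x = (x - x.mean()) / (x.std() + 1e-12)   # z-normalize, as k-Shape assumes
    y = (y - y.mean()) / (y.std() + 1e-12)
    cc = np.correlate(x, y, mode="full")      # cross-correlation at every shift
    denom = np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    return 1.0 - cc.max() / denom

a = np.sin(np.linspace(0, 4 * np.pi, 100))
b = np.roll(a, 10)                            # same shape, shifted in time
print(round(sbd(a, b), 3))                    # close to 0: the shapes match
```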
Object Contour Detection with a Fully Convolutional Encoder-Decoder Network | We develop a deep learning algorithm for contour detection with a fully convolutional encoder-decoder network. Different from previous low-level edge detection, our algorithm focuses on detecting higher-level object contours. Our network is trained end-to-end on PASCAL VOC with refined ground truth from inaccurate polygon annotations, yielding much higher precision in object contour detection than previous methods. We find that the learned model generalizes well to unseen object classes from the same supercategories on MS COCO and can match state-of-the-art edge detection on BSDS500 with fine-tuning. By combining with the multiscale combinatorial grouping algorithm, our method can generate high-quality segmented object proposals, which significantly advance the state-of-the-art on PASCAL VOC (improving average recall from 0.62 to 0.67) with a relatively small amount of candidates (~1660 per image). |
Delta-Sigma FDC Based Fractional-N PLLs | Fractional-N phase-locked loop frequency synthesizers based on time-to-digital converters (TDC-PLLs) have been proposed to reduce the area and linearity requirements of conventional PLLs based on delta-sigma modulation and charge pumps (ΔΣ-PLLs). Although TDC-PLLs with good performance have been demonstrated, TDC quantization noise has so far kept their phase noise and spurious tone performance below that of the best comparable ΔΣ-PLLs. An alternative approach is to use a delta-sigma frequency-to-digital converter (ΔΣ FDC) in place of a TDC to retain the benefits of TDC-PLLs and ΔΣ-PLLs. This paper proposes a practical ΔΣ FDC based PLL in which the quantization noise is equivalent to that of a ΔΣ-PLL. It presents a linearized model of the PLL, design criteria to avoid spurious tones in the ΔΣFDC quantization noise, and a design methodology for choosing the loop parameters in terms of standard PLL target specifications. |
3D Morphable Models as Spatial Transformer Networks | In this paper, we show how a 3D Morphable Model (i.e. a statistical model of the 3D shape of a class of objects such as faces) can be used to spatially transform input data as a module (a 3DMM-STN) within a convolutional neural network. This is an extension of the original spatial transformer network in that we are able to interpret and normalise 3D pose changes and self-occlusions. The trained localisation part of the network is independently useful since it learns to fit a 3D morphable model to a single image. We show that the localiser can be trained using only simple geometric loss functions on a relatively small dataset yet is able to perform robust normalisation on highly uncontrolled images including occlusion, self-occlusion and large pose changes. |
The Order of Prenominal Adjectives in Natural Language Generation | The order of prenominal adjectival modifiers in English is governed by complex and difficult to describe constraints which straddle the boundary between competence and performance. This paper describes and compares a number of statistical and machine learning techniques for ordering sequences of adjectives in the context of a natural language generation system. |
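As a hedged illustration of the simplest statistical strategy such a system might compare against (an invented toy example, not the paper's method or data), the sketch below orders adjectives by direct evidence: pick the permutation whose ordered pairs were seen most often in training.

```python
from collections import Counter
from itertools import permutations

# Toy training data: noun phrases with already-ordered prenominal adjectives.
training_phrases = [("big", "red"), ("small", "old", "wooden"), ("nice", "big"),
                    ("old", "red"), ("big", "old")]

# Count how often each ordered pair of adjectives is observed.
pair_counts = Counter()
for phrase in training_phrases:
    for i in range(len(phrase)):
        for j in range(i + 1, len(phrase)):
            pair_counts[(phrase[i], phrase[j])] += 1

def order_adjectives(adjectives):
    """Direct-evidence baseline: choose the permutation whose ordered pairs were
    observed most often in the training phrases."""
    def score(perm):
        return sum(pair_counts[(perm[i], perm[j])]
                   for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return max(permutations(adjectives), key=score)

print(order_adjectives({"red", "big"}))             # expected: ('big', 'red')
print(order_adjectives({"wooden", "old", "small"})) # expected: ('small', 'old', 'wooden')
```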
China’s AI dreams | Last year, China’s chief governing body announced an ambitious scheme for the country to become a world leader in artificial intelligence (AI) technology by 2030. The Chinese State Council, chaired by Premier Li Keqiang, detailed a series of intended milestones in AI research and development in its ‘New Generation Artificial Intelligence Development Plan’, with the aim that Chinese AI will have applications in fields as varied as medicine, manufacturing and the military. These AI ambitions, made public in July 2017, came as little surprise. ‘Innovation’ has been a favourite buzzword of China’s leadership for several years, as the country seeks to transition from a production powerhouse to a centre of knowledge creation. But China’s AI aspirations are as economic as they are political, coming at a historic stall in the country’s previously rapid growth — in 2016, the economy grew at its slowest rate since 1990. By 2020, the State Council’s plan forecasts, the value of China’s core AI industries should have exceeded 150 billion yuan (US$22.7 billion), and the total for all related industries should be 1 trillion yuan. By 2030, it is hoped those figures will be 1 trillion yuan and 10 trillion yuan, respectively. Much suggests that China is already on the right track. In 2014, the country overtook the United States in terms of the number of research publications it produced — and, crucially, the number of those that were cited — on the subject of deep learning; in the past two years, Chinese teams have dominated the prestigious ImageNet AI contest, in which researchers compete to see which algorithms can best recognize images; and Beijing-based technology giant Baidu, which in 2017 announced it was launching a deep learning research laboratory in collaboration with the Chinese government, says it will have self-driving vehicles powered by AI technology on Chinese public roads by 2020. But in its quest to become a leader in technology that does the job of people, China is uncharacteristically low on one thing: people. According to a recent LinkedIn report (see go.nature.com/2jvdcxe), there were more than 50,000 people in China’s AI workforce in the first quarter of 2017. In the United States, a country with less than one-quarter of China’s population, that number was above 850,000. India is home to an AI workforce of more than 150,000. “For the time being, talent remains a major bottleneck in China’s advances in AI,” says Elsa Kania, a Washington DC-based analyst specializing in China’s emerging technologies and defence innovation. She says that it is a lack of experience as well as a lack of people that is afflicting the country’s AI sector. Indeed, LinkedIn found that 38.7% of those working in China’s AI sector have more than 10 years’ experience, compared with 71.5% in the United States. That, she says, “will continue to necessitate active efforts to recruit foreign talent from Silicon Valley and elsewhere”. In Beijing’s tech hub Zhongguancun, steps have already been taken. In 2016, the local government made it easier for foreigners to gain permanent residency status, and in 2017 it introduced an advice service to support entrepreneurial newcomers with everything from Chinese company registration and taxation to finance and intellectual property rights. Indeed, strengthening talent is considered a matter of “utmost |
Focused clustering and outlier detection in large attributed graphs | Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real-world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus, lack the ability to steer algorithms to more interesting parts of the attributed graph. In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches. |
On-Chip Compensation of Ring VCO Oscillation Frequency Changes Due to Supply Noise and Process Variation | A novel circuit technique that stabilizes the oscillation frequency of a ring-type voltage-controlled oscillator (RVCO) is demonstrated. The technique uses on-chip bias-current and voltage-swing controllers, which compensate RVCO oscillation frequency changes caused by supply noise and process variation. A prototype phase-locked loop (PLL) having the RVCO with the compensation circuit is fabricated with 0.13-μm CMOS technology. At the operating frequency of 4 GHz, the measured PLL rms jitter improves from 20.11 to 5.78 ps with 4-MHz RVCO supply noise. Simulation results show that the oscillation frequency difference between FF and SS corner is reduced from 63% to 6% of the NN corner oscillation frequency. |
Sling bag and backpack detection for human appearance semantic in vision system | In many intelligent surveillance systems there is a requirement to search for people of interest through archived semantic labels. Other than searching through typical appearance attributes such as clothing color and body height, information such as whether a person carries a bag or not is valuable to provide more relevant targeted search. We propose two novel and fast algorithms for sling bag and backpack detection based on the geometrical properties of bags. The advantage of the proposed algorithms is that they do not require shape information from human silhouettes and can therefore work under crowded conditions. In addition, the absence of background subtraction makes the algorithms suitable for mobile platforms such as robots. The system was tested with a low-resolution surveillance video dataset. Experimental results demonstrate that our method is promising. |
Dysbiosis of the gut microbiota in disease | There is growing evidence that dysbiosis of the gut microbiota is associated with the pathogenesis of both intestinal and extra-intestinal disorders. Intestinal disorders include inflammatory bowel disease, irritable bowel syndrome (IBS), and coeliac disease, while extra-intestinal disorders include allergy, asthma, metabolic syndrome, cardiovascular disease, and obesity. |
Memristor Bridge Synapses | In this paper, we propose a memristor bridge circuit consisting of four identical memristors that is able to perform zero, negative, and positive synaptic weightings. Together with three additional transistors, the memristor bridge weighting circuit is able to perform synaptic operation for neural cells. It is compact as both weighting and weight programming are performed in a memristor bridge synapse. It is power efficient, since the operation is based on pulsed input signals. Its input terminals are utilized commonly for applying both weight programming and weight processing signals via time sharing. In this paper, features of the memristor bridge synapses are investigated using the TiO2 memristor model via simulations. |
Integrating the teaching of computer organization and architecture with digital hardware design early in undergraduate courses | This paper describes a new way to teach computer organization and architecture concepts with extensive hands-on hardware design experience very early in computer science curricula. While describing the approach, it addresses relevant questions about teaching computer organization, computer architecture and hardware design to students in computer science and related fields. The justification to concomitantly teach two often separately addressed subjects is twofold. First, to provide a better insight into the practical aspects of computer organization and architecture. Second, to allow addressing only highly abstract design levels yet achieving reasonably performing implementations, to make the integrated teaching approach feasible. The approach exposes students to many of the essential issues incurred in the analysis, simulation, design and effective implementation of processors. Although the former separation of such connected disciplines has certainly brought academic benefits in the past, some modern technologies allow capitalizing on their integration. Indeed, the new approach is enabled by the availability of two new technologies: fast hardware prototyping platforms built with reconfigurable hardware, and powerful computer-aided design tools for design entry, validation and implementation. The practical implementation of the teaching approach comprises lecture as well as laboratory courses, starting in the third semester of an undergraduate computer science curriculum. In four editions of the first two courses, most students have obtained successful processor implementations. In some cases, considerably complex applications, such as bubble sort and quick sort procedures, were programmed in assembly and/or machine code and run at the hardware description language simulation level in the designed processors. |
Video Segmentation with Background Motion Models | Many of today’s most successful video segmentation methods use long-term feature trajectories as their first processing step. Such methods typically use spectral clustering to segment these trajectories, implicitly assuming that motion is translational in image space. In this paper, we explore the idea of explicitly fitting more general motion models in order to classify trajectories as foreground or background. We find that homographies are sufficient to model a wide variety of background motions found in real-world videos. Our simple approach achieves competitive performance on the DAVIS benchmark, while using techniques complementary to state-of-the-art approaches. |
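To make the homography idea concrete, here is a toy OpenCV/numpy sketch (not the authors' code): fit a single homography to point tracks between two frames with RANSAC and label the inlier tracks as background. The synthetic data, the single-model simplification, and the 3-pixel reprojection threshold are all illustrative assumptions.

```python
import numpy as np
import cv2

def background_mask(prev_pts, curr_pts, reproj_thresh=3.0):
    """Fit one homography to the point tracks and flag the tracks that obey it
    as background; foreground (independently moving) tracks come out as outliers."""
    H, inliers = cv2.findHomography(prev_pts, curr_pts, cv2.RANSAC, reproj_thresh)
    return inliers.ravel().astype(bool)   # True -> consistent with background motion

# Synthetic example: 40 background points moved by a known homography, 10 outliers.
rng = np.random.default_rng(0)
bg_prev = rng.uniform(0, 640, size=(40, 2)).astype(np.float32)
H_true = np.array([[1.01, 0.00, 2.0], [0.00, 1.01, -1.5], [0, 0, 1]], dtype=np.float32)
bg_curr = cv2.perspectiveTransform(bg_prev[:, None, :], H_true)[:, 0, :]
fg_prev = rng.uniform(0, 640, size=(10, 2)).astype(np.float32)
fg_curr = fg_prev + rng.uniform(20, 40, size=(10, 2)).astype(np.float32)  # independent motion

prev = np.vstack([bg_prev, fg_prev])
curr = np.vstack([bg_curr, fg_curr])
print(background_mask(prev, curr).sum(), "of", len(prev), "tracks labelled background")
```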
The Treatment and Outcome of Patients With Soft Tissue Sarcomas and Synchronous Metastases | INTRODUCTION
There is a strong association between poor overall survival and a short disease-free interval for patients with soft tissue sarcomas (STS) and metastatic disease. Patients with STS and synchronous metastases would therefore be expected to have a very dismal prognosis. The role of surgery in this subgroup of patients with STS has not been defined.
PATIENTS AND METHODS
A single-institution retrospective review was performed of 48 patients with STS and synchronous metastases in regard to patient demographics, presentation, tumor characteristics, metastatic sites, treatment, follow-up, and survival over a 27-year period.
RESULTS
Most primary tumors were ≥10 cm (58%), of high-grade histology (77%), and located on the extremity (60%). The most frequent site of metastatic disease was the lung (63%); 27% of patients had metastases to ≥2 organ sites. Surgery to the primary tumor was performed in 94% of patients (n = 45) and 68% had additional radiation therapy (n = 32). Thirty-five percent of patients underwent at least one metastasectomy (n = 17). Chemotherapy was administered to 90% of patients (n = 43); 31% received ≥3 different regimens (n = 15) and 25% were given intra-arterial or intracavitary therapy (n = 12). Median overall survival was 15 months with a 21% 2-year survival. Local control of the primary tumor was achieved in 54% (n = 26), and metastasectomy was performed in 35% (n = 17). No analyzed factors were associated with an improvement in overall survival.
CONCLUSIONS
Despite multiple poor prognostic factors, the survival of patients with STS and metastases is comparable to those who develop delayed metastatic disease. However, unlike patients who present with metachronous disease, there was no improved survival observed for patients treated with metastasectomy. Consequently, treatment for patients with STS and synchronous metastases should be approached with caution. Surgical management of STS with synchronous metastases must be considered palliative and should be reserved for patients requiring palliation of symptoms. Patients must also be well informed of the noncurative nature of the procedure. |
Cortical High-Density Counterstream Architectures | Nikola T. Markov et al., Science 342, 1238406 (2013). |
EMC Issues in Cars with Electric Drives | From the EMC point of view, the integration of electric drive systems into today’s cars represents a substantial challenge. The electric drive system is a new component consisting of a high-voltage power source, a frequency converter, an electric motor and shielded or unshielded high-power cables. Treating this new electric drive system or its components as a conventional automotive component in terms of EMI test procedures and emission limits would lead to substantial incompatibility problems. In this paper the EMC issues related to the integration of an electric drive system into a conventional passenger car are investigated. The components of the drive system have been analyzed being either noise sources or part of the coupling path within the new electrical system of the car. The obtained results can also be used to determine the acceptable noise levels on a high voltage bus of an electric drive system. |
Introduction to Iranian Blood Transfusion Organization and blood safety in Iran. | Currently, blood transfusion in Iran is an integral part of the national health system; blood donation is voluntary and nonremunerated, and blood and its components may not be a source of profit. In 1974, following the establishment of the Iranian Blood Transfusion Organization (IBTO), all blood transfusion activities, from donor recruitment to production of blood components and delivery of blood and blood products, were centralized. The activities of IBTO follow the laws and regulations of the Ministry of Health and the criteria of the Iran National Regulatory Authority. In order to meet the country's demand, in 2007 IBTO collected about 1.7 million units of blood from a population of 70 million. In 1979, coinciding with the Islamic revolution, the number of blood units collected throughout the country was 124,000, or 3.4 units per 1000 population, whereas after about 30 years this increased to about 25 units per 1000 population. By improving the pool of voluntary donors, IBTO succeeded in phasing out "family replacement" donation and reached 100% voluntary and nonremunerated blood donation in 2007. Currently more than 92% of blood donors in Iran are male, and the contribution of women to blood donation is less than 8%. Although all donated blood in Iran has been screened for HBsAg since 1974, screening of blood units for HIV and HCV started in 1989 and 1996, respectively. The frequency of HBV infection in blood donors showed a significant decline from 1.79% in 1998 to 0.4% in 2007. The overall frequencies of HCV and HIV infection are 0.13% and 0.004%, respectively. |
Click Chemistry, A Powerful Tool for Pharmaceutical Sciences | Click chemistry refers to a group of reactions that are fast, simple to use, easy to purify, versatile, regiospecific, and give high product yields. While there are a number of reactions that fulfill the criteria, the Huisgen 1,3-dipolar cycloaddition of azides and terminal alkynes has emerged as the frontrunner. It has found applications in a wide variety of research areas, including materials sciences, polymer chemistry, and pharmaceutical sciences. In this manuscript, important aspects of the Huisgen cycloaddition will be reviewed, along with some of its many pharmaceutical applications. Bioconjugation, nanoparticle surface modification, and pharmaceutical-related polymer chemistry will all be covered. Limitations of the reaction will also be discussed. |
Air quality data clustering using EPLS method | Nowadays, air quality data can easily be accumulated by sensors around the world. Analysis of air quality data is very useful for societal decision-making. Among the five major air pollutants used to compute the AQI (Air Quality Index), PM2.5 is the one of greatest public concern. PM2.5 data are also cross-impacted by other factors in the air and have non-linear, non-stationary properties, including a high noise level and outliers. Traditional methods cannot solve the PM2.5 data clustering problem very well because of these inherent characteristics. In this paper, a novel model-based feature extraction method is proposed to address this issue. The EPLS model includes 1) Mode Decomposition, in which the EEMD algorithm is applied to the aggregated dataset; 2) Dimension Reduction, which is carried out to obtain a more significant set of vectors; 3) Least Squares Projection, in which all testing data are projected onto the obtained vectors. A synthetic dataset and an air quality dataset are applied to different clustering methods and similarity measures. Experimental results demonstrate … |
Gamut Mapping to Preserve Spatial Luminance Variations | A spatial gamut mapping technique is proposed to overcome the shortcomings encountered with standard pointwise gamut mapping algorithms by preserving spatially local luminance variations in the original image. It does so by first processing the image through a standard pointwise gamut mapping algorithm. The difference between the original image luminance Y and gamut mapped image luminance Y’ is calculated. A spatial filter is then applied to this difference signal, and added back to the gamut mapped signal Y’. The filtering operation can result in colors near the gamut boundary being placed outside the gamut, hence a second gamut mapping step is required to move these pixels back into the gamut. Finally, the in-gamut pixels are processed through a color correction function for the output device, and rendered to that device. Psychophysical experiments validate the superior performance of the proposed algorithm, which reduces many of the artifacts arising from standard pointwise techniques. |
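A minimal numpy/SciPy sketch of the luminance pipeline described above, under stated assumptions: the pointwise gamut mapping step is stood in for by a simple clip, and the unspecified spatial filter is assumed to be a Gaussian-based high-pass; neither choice comes from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_gamut_map(Y, pointwise_gmap, sigma=2.0):
    """Luminance-only sketch of the described pipeline. pointwise_gmap stands in for
    a standard pointwise gamut mapping algorithm; the high-pass filter and sigma are
    assumptions, as the abstract does not specify the spatial filter."""
    Y_g = pointwise_gmap(Y)                        # step 1: pointwise gamut mapping
    diff = Y - Y_g                                 # step 2: luminance lost by the mapping
    detail = diff - gaussian_filter(diff, sigma)   # assumed high-pass filtering of the difference
    Y_s = Y_g + detail                             # step 3: add filtered detail back to Y'
    return pointwise_gmap(Y_s)                     # step 4: re-map pixels pushed out of gamut

# Toy "gamut": clip luminance to [0.1, 0.8].
clip_gamut = lambda img: np.clip(img, 0.1, 0.8)
Y = np.random.default_rng(1).uniform(0.0, 1.0, size=(64, 64))
out = spatial_gamut_map(Y, clip_gamut)
print(round(out.min(), 3), round(out.max(), 3))
```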
Effects of adjuvant exemestane versus anastrozole on bone mineral density for women with early breast cancer (MA.27B): a companion analysis of a randomised controlled trial. | BACKGROUND
Treatment of breast cancer with aromatase inhibitors is associated with damage to bones. NCIC CTG MA.27 was an open-label, phase 3, randomised controlled trial in which women with breast cancer were assigned to one of two adjuvant oral aromatase inhibitors-exemestane or anastrozole. We postulated that exemestane-a mildly androgenic steroid-might have a less detrimental effect on bone than non-steroidal anastrozole. In this companion study to MA.27, we compared changes in bone mineral density (BMD) in the lumbar spine and total hip between patients treated with exemestane and patients treated with anastrozole.
METHODS
In MA.27, postmenopausal women with early stage hormone (oestrogen) receptor-positive invasive breast cancer were randomly assigned to exemestane 25 mg versus anastrozole 1 mg, daily. MA.27B recruited two groups of women from MA.27: those with BMD T-scores of -2·0 or more (up to 2 SDs below sex-matched, young adult mean) and those with at least one T-score (hip or spine) less than -2·0. Both groups received vitamin D and calcium; those with baseline T-scores of less than -2·0 also received bisphosphonates. The primary endpoints were percent change of BMD at 2 years in lumbar spine and total hip for both groups. We analysed patients according to which aromatase inhibitor and T-score groups they were allocated to but BMD assessments ceased if patients deviated from protocol. This study is registered with ClinicalTrials.gov, NCT00354302.
FINDINGS
Between April 24, 2006, and May 30, 2008, 300 patients with baseline T-scores of -2·0 or more were accrued (147 allocated exemestane, 153 anastrozole); and 197 patients with baseline T-scores of less than -2·0 (101 exemestane, 96 anastrozole). For patients with T-scores greater than -2·0 at baseline, mean change of bone mineral density in the spine at 2 years did not differ significantly between patients taking exemestane and patients taking anastrozole (-0·92%, 95% CI -2·35 to 0·50 vs -2·39%, 95% CI -3·77 to -1·01; p=0·08). Respective mean loss in the hip was -1·93% (95% CI -2·93 to -0·93) versus -2·71% (95% CI -4·32 to -1·11; p=0·10). Likewise for those who started with T-scores of less than -2·0, mean change of spine bone mineral density at 2 years did not differ significantly between the exemestane and anastrozole treatment groups (2·11%, 95% CI -0·84 to 5·06 vs 3·72%, 95% CI 1·54 to 5·89; p=0·26), nor did hip bone mineral density (2·09%, 95% CI -1·45 to 5·63 vs 0·0%, 95% CI -3·67 to 3·66; p=0·28). Patients with baseline T-score of -2·0 or more taking exemestane had two fragility fractures and two other fractures, those taking anastrozole had three fragility fractures and five other fractures. For patients who had baseline T-scores of less than -2·0 taking exemestane, one had a fragility fracture and four had other fractures, whereas those taking anastrozole had five fragility fractures and one other fracture.
INTERPRETATION
Our results demonstrate that adjuvant treatment with aromatase inhibitors can be considered for breast cancer patients who have T-scores less than -2·0.
FUNDING
Canadian Cancer Society Research Institute, Pfizer, Canadian Institutes of Health Research. |
Comparative analysis of K-Means with other clustering algorithms to improve search result | The paper identifies the scope for improving the search results of a web site. The study includes some commonly used clustering algorithms to examine, in various ways, how a clustering approach can improve the analysis of web elements. As the search result option is used extensively at almost every web site, the main focus is to optimize a web site's search results using a clustering approach. The semantic web, using the concept of ontology, is included to retrieve more relevant and meaningful search results. Several of the most commonly used algorithms are evaluated on web data, and it is observed that the K-Means clustering algorithm gives the best results in terms of accuracy and speed. Thus the proposed hybrid model uses K-Means and a Genetic algorithm to overcome the drawbacks of K-Means. The main evaluation parameters considered for the study are accuracy in terms of placing objects in the correct cluster, relevancy, speed, and user satisfaction. |
One day ahead prediction of wind speed using annual trends | The growing revolution in wind energy calls for more accurate models for wind speed forecasting and wind power generation prediction. This paper presents a new technique for wind speed forecasting based on a time series model relating the predicted interval to its corresponding data from one and two years earlier. A data set extending over 72 hours is used to investigate the accuracy of the model for predicting wind speeds up to 24 hours ahead. Results obtained from the proposed model are compared with the corresponding values generated by the persistence model. The presented results validate the effectiveness of the new prediction models for wind speed.
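A toy sketch (illustrative assumptions, not the authors' model) contrasting a persistence forecast with a simple annual-trend forecast that reuses the same hours from one and two years earlier; the equal weighting and the synthetic series are arbitrary choices.

```python
import numpy as np

# Hypothetical hourly wind-speed series spanning 3 years (m/s); synthetic data.
rng = np.random.default_rng(0)
hours_per_year = 8760
t = np.arange(3 * hours_per_year)
speed = 6 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

t0 = 2 * hours_per_year + 500                       # forecast origin (arbitrary)
horizon = 24                                         # predict 24 hours ahead

persistence = np.full(horizon, speed[t0 - 1])        # repeat the last observed value
annual_trend = 0.5 * speed[t0 - hours_per_year : t0 - hours_per_year + horizon] \
             + 0.5 * speed[t0 - 2 * hours_per_year : t0 - 2 * hours_per_year + horizon]

actual = speed[t0 : t0 + horizon]
print("persistence MAE:", np.abs(actual - persistence).mean())
print("annual-trend MAE:", np.abs(actual - annual_trend).mean())
```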
Internet X.509 Public Key Infrastructure: Additional Algorithms and Identifiers for DSA and ECDSA | This document updates RFC 3279 to specify algorithm identifiers and ASN.1 encoding rules for the Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures when using SHA-224, SHA-256, SHA-384, or SHA-512 as the hashing algorithm. This specification applies to the Internet X.509 Public Key Infrastructure (PKI) when digital signatures are used to sign certificates and certificate revocation lists (CRLs). This document also identifies all four SHA-2 hash algorithms for use in the Internet X.509 PKI.
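For context, signing and verifying with ECDSA over SHA-256, one of the hash/signature pairings the RFC registers identifiers for, can be sketched with the Python `cryptography` package; the curve and message below are illustrative assumptions, not choices mandated by the RFC.

```python
# Illustrative ECDSA-with-SHA-256 sign/verify using the python `cryptography` package.
# Curve and message are arbitrary choices for demonstration.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"to-be-signed certificate TBS bytes (placeholder)"

signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```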
Coordinated Power Control of Variable-Speed Diesel Generators and Lithium-Battery on a Hybrid Electric Boat | This paper presents coordinated power management strategies for a hybrid electric boat (HEB) based on a dynamic load-power distribution approach using diesel generators and batteries. Two variable-speed diesel generator sets connected to the dc bus via fully controlled rectifiers serve as the main power supply source; one lithium-battery pack linked to the dc bus through a bidirectional DC/DC converter serves as an auxiliary power supply source. The power requirement of the thrusters and other onboard equipment is represented by a load power profile. The main contribution of this paper is the load power sharing of the HEB between the two variable-speed diesel generators and the lithium-battery pack, achieved by extracting the fluctuations from the load power and assigning them to the batteries. Another important feature is that when the total load demand is low, one of the diesel generator sets is switched off to improve diesel-fuel efficiency. The output powers of the diesel generators and the batteries are controlled according to the power distribution strategy through control of the converters. The performance of the proposed hybrid power system control is evaluated through simulations in MATLAB/Simulink and through reduced-scale experimental tests carried out for a specific driving cycle of the HEB.
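One common way to realise the "fluctuations go to the battery" idea described above is a low-pass split of the load profile; the sketch below is a generic illustration with made-up numbers, not the authors' controller.

```python
import numpy as np

# Hypothetical boat load profile (kW), sampled once per second; synthetic data.
rng = np.random.default_rng(1)
t = np.arange(600)
load = 200 + 50 * np.sin(2 * np.pi * t / 300) + rng.normal(0, 15, t.size)

# Moving-average low-pass filter: the slow component is assigned to the diesel
# generators, the residual fluctuation is assigned to the battery pack.
window = 60
kernel = np.ones(window) / window
diesel_ref = np.convolve(load, kernel, mode="same")
battery_ref = load - diesel_ref

print("mean diesel power (kW):", round(diesel_ref.mean(), 1))
print("battery power std absorbed (kW):", round(battery_ref.std(), 1))
```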
Breaking the Myths of Rewards: An Exploratory Study of Attitudes about Knowledge Sharing | Many CEOs and managers understand the importance of knowledge sharing among their employees and are eager to introduce the knowledge management paradigm in their organizations. However, little is known about the determinants of an individual's knowledge sharing behavior. The purpose of this study is to develop an understanding of the factors affecting the individual's knowledge sharing behavior in the organizational context. The research model includes various constructs based on social exchange theory, self-efficacy, and the theory of reasoned action. Research results from the field survey of 467 employees of four large, public organizations show that expected associations and contribution are the major determinants of the individual's attitude toward knowledge sharing. Expected rewards, believed by many to be the most important motivating factor for knowledge sharing, are not significantly related to the attitude toward knowledge sharing. As expected, a positive attitude toward knowledge sharing is found to lead to a positive intention to share knowledge and, finally, to actual knowledge sharing behaviors.
Dynamic Taint Analysis for Automatic Detection, Analysis, and Signature Generation of Exploits on Commodity Software | Software vulnerabilities have had a devastating effect on the Internet. Worms such as CodeRed and Slammer can compromise hundreds of thousands of hosts within hours or even minutes, and cause millions of dollars of damage [32, 51]. To successfully combat these fast automatic Internet attacks, we need fast automatic attack detection and filtering mechanisms. In this paper we propose dynamic taint analysis for automatic detection and analysis of overwrite attacks, which include most types of exploits. This approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, we have implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time. We show that TaintCheck reliably detects most types of exploits. We found that TaintCheck produced no false positives for any of the many different programs that we tested. Further, we show how we can use a two-tiered approach to build a hybrid exploit detector that enjoys the same accuracy as TaintCheck but has extremely low performance overhead. Finally, we propose a new type of automatic signature generation: semantic-analysis-based signature generation. We show that by backtracing the chain of tainted data structures rooted at the detection point, TaintCheck can automatically identify which original flow, and which part of that flow, caused the attack, and can identify important invariants of the payload that can be used as signatures. Semantic-analysis-based signature generation can be more accurate, more resilient against polymorphic worms, and more robust to attacks exploiting polymorphism than pattern-extraction-based signature generation methods.
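A toy interpreter-level sketch of the taint-propagation idea (values derived from untrusted input stay tainted, and an alarm is raised if tainted data reaches a sensitive sink); this is a conceptual illustration only, not TaintCheck's binary-rewriting mechanism.

```python
# Conceptual taint propagation: data derived from network input remains tainted,
# and using a tainted value as e.g. an indirect jump target triggers an alert.
class Tainted:
    def __init__(self, value, tainted):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        other_v = other.value if isinstance(other, Tainted) else other
        other_t = other.tainted if isinstance(other, Tainted) else False
        return Tainted(self.value + other_v, self.tainted or other_t)

def sink(label, v):
    if isinstance(v, Tainted) and v.tainted:
        raise RuntimeError(f"tainted data reached sensitive sink: {label}")

net_input = Tainted(0x41414141, tainted=True)    # data read from the network
offset = Tainted(8, tainted=False)
jump_target = net_input + offset                 # taint propagates through arithmetic

try:
    sink("indirect jump target", jump_target)
except RuntimeError as alarm:
    print(alarm)                                 # exploit attempt detected
```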
Two simple questions to assess outcome after stroke: a European study. | BACKGROUND AND PURPOSE
The "2 simple questions" were designed as an efficient way of measuring outcome after stroke. We assessed the sensitivity and specificity of this tool, adapted for use in 8 European centers, and used it to compare outcomes across centers.
METHODS
Data were taken from the Biomed II prospective study of stroke care and outcomes. Three-month poststroke data from 8 European centers were analyzed. Sensitivity and specificity were assessed by comparing responses to the 2 simple questions with Barthel Index and modified Rankin scale scores. Adjusting for case mix, logistic regression was used to compare patients in each center with "good" outcome (not dependent and fully recovered) at 3 months.
RESULTS
Data for 793 patients were analyzed. For the total sample, the dependency question had a sensitivity of 88% and a specificity of 77%; the recovery question had a sensitivity of 78% and a specificity of 90%. Dependency data from Riga had much lower sensitivity. There was variation in good outcome between centers (P=0.0015). Compared with the reference center (Kaunas), patients in Dijon, Florence, and Menorca were more likely to have good outcome, after adjusting for case mix.
CONCLUSIONS
Dependency and recovery questions showed generally high sensitivity and specificity. There were significant differences across centers in outcome, but reasons for these are unclear. Such differences raise particular questions about how patients interpreted and answered the simple questions and the extent to which expectations of recovery and perceived needs for assistance vary cross-culturally. |
Experimental Design for Learning Causal Graphs with Latent Variables | We consider the problem of learning causal structures with latent variables using interventions. Our objective is not only to learn the causal graph between the observed variables, but to locate unobserved variables that could confound the relationship between observables. Our approach is stage-wise: We first learn the observable graph, i.e., the induced graph between observable variables. Next we learn the existence and location of the latent variables given the observable graph. We propose an efficient randomized algorithm that can learn the observable graph using O(d log² n) interventions, where d is the degree of the graph. We further propose an efficient deterministic variant which uses O(log n + l) interventions, where l is the longest directed path in the graph. Next, we propose an algorithm that uses only O(d² log n) interventions and can learn the latents between both nonadjacent and adjacent variables. While a naive baseline approach would require O(n²) interventions, our combined algorithm can learn the causal graph with latents using O(d log² n + d² log n) interventions.
Potential and optimal control of human head movement using Tait-Bryan parametrization | Human head movement can be viewed as rotational dynamics on the space SO(3) with constraints on the axis of rotation. Typically the axis vector, after a suitable scaling, is assumed to lie in a surface called Donders' surface. Various descriptions of Donders' surface exist in the literature, and in this paper we assume that the surface is described by a quadratic form. We propose a Tait-Bryan parametrization of SO(3), which is new in the head movement literature, and describe Donders' constraint in these parameters. Assuming that the head is a perfect sphere with its mass distributed uniformly and rotating about its own center, head movement models are constructed using classical mechanics. A new potential control method is described to regulate the head to a desired final orientation. Optimal head movement trajectories are constructed using a pseudospectral method, where the goal is to minimize a quadratic cost function on the energy of the applied control torques. The model trajectories are compared with measured trajectories of human head movement.
Temporal expert finding through generalized time topic modeling | This paper addresses the problem of semantics-based temporal expert finding, which means identifying a person with given expertise for different time periods. For example, many real world applications, such as reviewer matching for papers and finding hot topics in newswire articles, need to consider time dynamics. Intuitively there will be different reviewers and reporters for different topics during different time periods. Traditional approaches used graph-based link structure with keyword-based matching and ignored semantic information, while topic modeling considered semantics-based information without simultaneously modeling conference influence (richer text semantics and relationships between authors) and time information. Consequently, they fail to find appropriate experts for different time periods. We propose a novel Temporal-Expert-Topic (TET) approach based on Semantics and Temporal Information based Expert Search (STMS) for temporal expert finding, which simultaneously models conference influence and time information. Consequently, the occurrence and correlations of topics (semantically related probabilistic clusters of words) change over time, while the meaning of a particular topic remains almost unchanged. By using Bayes' theorem we can obtain topically related experts for different time periods and show how experts' interests and relationships change over time. Experimental results on a scientific literature dataset show that the proposed generalized time topic modeling approach significantly outperformed the non-generalized time topic modeling approaches, due to simultaneously capturing conference influence and time information.
LidarBoost: Depth superresolution for ToF 3D shape scanning | Depth maps captured with time-of-flight cameras have very low data quality: the image resolution is rather limited and the level of random noise contained in the depth maps is very high. Therefore, such flash lidars cannot be used out of the box for high-quality 3D object scanning. To solve this problem, we present LidarBoost, a 3D depth superresolution method that combines several low resolution noisy depth images of a static scene from slightly displaced viewpoints, and merges them into a high-resolution depth image. We have developed an optimization framework that uses a data fidelity term and a geometry prior term that is tailored to the specific characteristics of flash lidars. We demonstrate both visually and quantitatively that LidarBoost produces better results than previous methods from the literature. |
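The "data fidelity term plus geometry prior term" structure mentioned above is conventionally posed as an energy minimisation over the high-resolution depth map. The generic form below is a hedged reconstruction of that structure only; X is the high-resolution depth estimate, Y_k the k-th registered low-resolution depth map, D a downsampling operator, W_k per-pixel confidence weights, and λ a balancing weight. The actual LidarBoost prior is tailored to flash-lidar characteristics and is not reproduced here.

```latex
% Generic depth-superresolution energy (illustrative structure, not the exact LidarBoost terms)
\min_{X} \; E(X) = \underbrace{\sum_{k} \left\| W_k \odot \left( D X - Y_k \right) \right\|_2^2}_{\text{data fidelity}}
\; + \; \lambda \, \underbrace{E_{\mathrm{prior}}(X)}_{\text{geometry prior}}
```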
Social security and private saving: theory and historical evidence. | This article is a nontechnical presentation of the debate that has gone on during the past decade over whether the U.S. social security system has depressed private saving in the economy. The heart of the article is an assessment of economist Martin Feldstein's original evidence, presentation of the alternative evidence that concluded that currently available historical data do not support the proposition that social security reduces private saving, and an evaluation of the contradictory evidence presented by Feldstein in response to the alternative evidence. The article concludes that, although the total body of evidence is inconclusive, the historical evidence fails to support the hypothesis that social security has reduced private saving. The Office of Research, Statistics, and International Policy, as part of its ongoing research mission, investigates the interrelationship between social security and the economy. This article presents an examination of one of several aspects of this relationship relevant to public policy considerations and is intended to make previously published technical papers available to a broader audience. |
Arthroscopic evaluation of the accuracy of clinical examination versus MRI in diagnosing meniscus tears and cruciate ligament ruptures. | BACKGROUND
Magnetic resonance imaging (MRI) of the knee joint has often been regarded as a noninvasive alternative to diagnostic arthroscopy. In day-to-day clinical practice, the MRI scan is routinely used to support the diagnosis for meniscus or ligamentous injuries prior to recommending arthroscopic examination and surgery. On the other hand, rapidly progressing medical technology sometimes obscures the importance of history and physical examination. This study aims to evaluate the accuracy of physical examination and MRI scanning in the diagnosis of knee injury, including meniscus tears and cruciate ligament ruptures.
METHODS
In a cross-sectional, descriptive analytical study, 120 patients with knee injury who were candidates for arthroscopy were referred to Tabriz Shohada Hospital during a one-year period. Prior history of arthroscopy or knee surgery was considered as exclusion criteria. Before ordering an MRI and arthroscopy, a thorough physical examination of the affected knee was performed and a preliminary diagnosis made. The results of arthroscopy were considered as the definitive diagnosis, therefore the results of the physical examination and MRI were judged accordingly.
RESULTS
Of the 120 evaluated patients with knee injuries, there were 108 males and 12 females with a mean age of 29.13 ± 7.37 (16-54) years. For medial meniscus injuries, clinical examination had an accuracy of 85%, sensitivity of 94.8%, and specificity of 75.8%. Lateral meniscus injuries had the following results: accuracy (85%), sensitivity (70.8%), and specificity (88.5%). Clinical examination of anterior cruciate injuries had an accuracy of 95.8%, sensitivity of 98.6%, and specificity of 91.7%. According to MRI results, for medial meniscus injuries there was an accuracy of 77.5%, sensitivity of 84.2%, and specificity of 71.4%. In lateral meniscus injuries, MRI had an accuracy of 85.8%, sensitivity of 56.5%, and specificity of 92.8%. MRI evaluation of anterior cruciate injuries showed 92.5% accuracy, 98.6% sensitivity, and 83.3% specificity. Both clinical examination and MRI were 100% accurate for posterior cruciate injuries. Overall, in isolated injuries, the accuracy of clinical examination was relatively better than in complicated cases; the opposite was observed for the MRI findings.
CONCLUSION
According to our results, both physical examination and MRI scans are very sensitive and accurate in the diagnosis of knee injuries, with a mild preference for physical examination. MRI should be reserved for doubtful cases or complicated injuries. |
The organization of intrinsic computation: complexity-entropy diagrams and the diversity of natural information processing. | Intrinsic computation refers to how dynamical systems store, structure, and transform historical and spatial information. By graphing a measure of structural complexity against a measure of randomness, complexity-entropy diagrams display the different kinds of intrinsic computation across an entire class of systems. Here, we use complexity-entropy diagrams to analyze intrinsic computation in a broad array of deterministic nonlinear and linear stochastic processes, including maps of the interval, cellular automata, and Ising spin systems in one and two dimensions, Markov chains, and probabilistic minimal finite-state machines. Since complexity-entropy diagrams are a function only of observed configurations, they can be used to compare systems without reference to system coordinates or parameters. It has been known for some time that in special cases complexity-entropy diagrams reveal that high degrees of information processing are associated with phase transitions in the underlying process space, the so-called "edge of chaos." Generally, though, complexity-entropy diagrams differ substantially in character, demonstrating a genuine diversity of distinct kinds of intrinsic computation. |
Q-FDBA: improving QoE fairness for video streaming | Multiplayer video streaming scenario can be seen everywhere today as the video traffic is becoming the “killer” traffic over the Internet. The Quality of Experience fairness is critical for not only the users but also the content providers and ISP. Consequently, a QoE fairness adaptive method of multiplayer video streaming is of great importance. Previous studies focus on client-side solutions without network global view or network-assisted solution with extra reaction to client. In this paper, a pure network-based architecture using SDN is designed for monitoring network global performance information. With the flexible programming and network mastery capacity of SDN, we propose an online Q-learning-based dynamic bandwidth allocation algorithm Q-FDBA with the goal of QoE fairness. The results show the Q-FDBA could adaptively react to high frequency of bottleneck bandwidth switches and achieve better QoE fairness within a certain time dimension. |
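A minimal tabular Q-learning update of the kind such a controller could use (state = coarse fairness bucket, action = bandwidth allocation choice); all names, the placeholder environment, and the reward shaping are illustrative assumptions, not the Q-FDBA specifics.

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for bandwidth allocation; states/actions/rewards are
# illustrative stand-ins, not the Q-FDBA design.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
actions = ["give_more_to_worst_player", "keep_allocation", "rebalance_evenly"]
Q = defaultdict(float)

def choose_action(state):
    if random.random() < epsilon:                       # epsilon-greedy exploration
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def step(state, action):
    # Placeholder environment: reward reflects QoE fairness after the action.
    next_state = random.choice(["low_fairness", "ok_fairness", "high_fairness"])
    reward = {"low_fairness": -1.0, "ok_fairness": 0.0, "high_fairness": 1.0}[next_state]
    return next_state, reward

state = "ok_fairness"
for _ in range(1000):
    a = choose_action(state)
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, b)] for b in actions)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])   # Q-learning update
    state = nxt

print(max(Q.items(), key=lambda kv: kv[1]))             # best learned state-action pair
```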
A Review of the Stern Review on the Economics of Climate Change | How much and how fast should we react to the threat of global warming? The Stern Review argues that the damages from climate change are large, and that nations should undertake sharp and immediate reductions in greenhouse gas emissions. An examination of the Review’s radical revision of the economics of climate change finds, however, that it depends decisively on the assumption of a near-zero time discount rate combined with a specific utility function. The Review’s unambiguous conclusions about the need for extreme immediate action will not survive the substitution of assumptions that are consistent with today’s marketplace real interest rates and savings rates. |
Primary care delivery changes as nonphysician clinicians gain independence. | States across the United States have expanded health insurance coverage to more of their uninsured residents, resulting in at least 1 million more people having health insurance in 2007 than in 2006. These state-level health reforms should result in improved health care for a greater portion of the population. But the shortage of physicians, at least in some areas, has raised questions about whether the increased demand for care can be met. One study found that about 56 million U.S. residents do not have a regular source of health care because of physician shortages in their areas (1). Some patients now face long delays or many miles of travel to receive primary care. A survey conducted in Massachusetts, a state with a large increase of newly insured residents, illustrates these problems. The 2007 survey by the Massachusetts Medical Society found that even before health reform was enacted, Massachusetts had shortages of primary care physicians and some specialists and that patients already had access problems, including long waits for appointments (2). Massachusetts may be leading the nation in health care reform, but we're falling behind in a critical aspect of patient care, and that's the supply of physicians. With an aging population, health care reform, and soaring rates of obesity and chronic diseases, such as diabetes, hypertension, and arthritis, real questions are surfacing about whether enough physicians will be available in Massachusetts to handle the increased demand for health care services, said B. Dale Magee, MD, past president of the Massachusetts Medical Society, when the findings were published. To help fill gaps in the generalist physician workforce, leaders of the health reform in Massachusetts have supported the expanded use of nonphysician health care providers. Increasingly, physician assistants (PAs) and nurse practitioners (NPs) are staffing clinics and providing basic care of common conditions, screening and routine management of chronic conditions, and preventive care (such as immunizations). I'm hoping this will help with access issues, said Jon Kingsdale, PhD, executive director of the Massachusetts Commonwealth Health Insurance Connector Authority, which is responsible for implementing the state's universal health care reform. Perhaps it will allow doctors to focus on delivering the more complicated medical care that they are so well trained to provide and less on delivering routine medical care, Kingsdale said. Some physicians have expressed doubt that so-called mid-level health professionals should be allowed to assume the duties traditionally handled by medical doctors. They worry that patients will more likely receive fragmented, poorly coordinated care. Other physicians have said that expanded use of NPs and PAs is inevitable, given the increasing patient demand for primary care and decline in the number of physicians who choose primary care careers. In any case, nonphysician clinicians have become prominent providers of medical services in many states recently, and they are providing forms of care previously provided by physicians only. A recent Association of American Medical Colleges (AAMC) conference on workforce issues featured discussions on the expanding role of PAs and NPs in the U.S. health care system, and how these nonphysician clinicians affect health care delivery.
The Rapid Rise of PAs and NPs Both the PA and NP fields emerged about 35 years ago and have grown rapidly since then. The PA population rose from 250 in 1970 to 69500 in 2007. The number of PAs has tripled since the early 1990s. Today, there is approximately 1 PA for every 10 to 12 physicians, and 1 PA graduate for every 5 medical residents completing training, according to Perri A. Morgan, PhD, PAC, the director of physician assistant research at Duke University Medical Center. Meanwhile, the number of NPs has also risen rapidly in recent years, to about 140000 in 2004, an increase of about 38500 from 2000, according to the Health Resources and Services Administration 2004 National Sample Survey of Registered Nurses (3). About 65% of NPs work full-time, compared with about 90% of PAs, Morgan noted. Physician assistants and NPs are similar in many ways, although their training differs somewhat: PAs have prior health care experience and are educated in intensive medical programs designed to complement physician training; NPs are registered nurses and they have completed graduate-level education and advanced clinical training. Both are required to have national board certification and state licensing to practice. The amount of education for both programs is approximately half that for a physician, and entry into the workforce is less restrictive (4). Historically, PAs and especially NPs worked mostly as primary care providers in rural and medically underserved areas in which physicians were scarce, but today they also work in metropolitan areas that have an ample supply of physicians. They are working not only in private practices and clinics but also in surgical centers, hospitals, and academic medical centers. Nurse practitioners work predominantly in primary care, whereas PAs work in both primary and specialty care. However, there is a growing trend toward specialization and higher-level academic credentialing among PAs and NPs, which has meant that fewer are choosing to work in primary care (5). In 1974, 70% of all PAs practiced in the primary care specialties of family medicine, general internal medicine, and general pediatrics compared with approximately 43% in 2004 (6). As in medicine, economic considerations have driven at least some PAs and NPs to specialize. The average debt among PA graduates is $40000, and those who work in surgical specialties earn significantly more than those who work in primary care. Nevertheless, steady growth in the number of graduates of both primary care and specialty programs still means that a substantial number of PAs and NPs focus on primary care, despite the trend in specialization. In recent years, state laws and regulations have allowed more autonomy and practice privileges for NPs and PAs. Both PAs and NPs have gained prescribing authority throughout the United States, although rules vary from state to state. Payers have also increased access to reimbursement; for instance, Medicare reimburses PA and NP services at 85% of the Physician Fee Schedule. Although by law, PAs and NPs must still collaborate with a physician or work under physician supervision, the meaning of collaboration and supervision in practice is wide open. The Retail Health Clinic Model How much autonomy PAs and NPs should have is not a new issue, but it is coming under special scrutiny now that more PAs and NPs are stationed at the front lines of primary care.
Consider the new retail health clinic model, in which NPs and PAs provide basic, nonemergency services with little physician oversight. Supervising physicians may be responsible only for telephone consultation during clinic business hours and routine reviews of patient charts, for instance, rather than for on-site, direct observation of care. At least 700 such for-profit clinics are now in 32 states. The clinics are located in chain drugstores, such as CVS, and in big retailers, such as Wal-Mart, where they offer patients immediate treatment for such common problems as strep throat, poison ivy, pink eye, ear infections, and bladder infections, as well as immunizations, pregnancy testing, and sports and camp physicals. Staff PAs and NPs write prescriptions and refer patients to medical doctors if needed. Most visits are priced between $30 and $60, and many health insurers cover the cost. Retail health clinics are popular with patients because they are so accessible and their pricing policy is transparent, according to Scott A. Shipman, MD, MPH, assistant professor of pediatrics and family and community medicine at Dartmouth Medical School and researcher at the Dartmouth Institute for Health Policy and Clinical Practice. More than half of all visits occur after regular office hours or on weekends, times when medical practices are usually closed, and waiting times tend to be minimal, according to Shipman's presentation at the AAMC workforce conference on the for-profit Take Care Health Systems, a subsidiary of Walgreens that manages 181 retail health clinics in 22 markets in 14 states. Whether the growth of retail health clinics poses any threat to primary care delivery is unknown because they are so new and understudied. For patients who simply need antibiotic eye drops for pink eye or a flu shot, such isolated, episodic care makes sense. If retail health clinics become patients' de facto medical home, then their use becomes worrisome. Medical associations, such as the American Medical Association, have warned that retail health clinics with no ties to health care systems could lead to more fragmentation of patient care, missed opportunities for preventive care, and inadequate follow-up for patients. They have asked retail health clinic operators to ensure physician supervision and to establish formal referral systems with local physicians and hospitals. Filling Gaps in Care Retail health clinics are providing convenience to patients, but they are not necessarily improving patient access to care in areas where service is lacking. The clinics are opening in mostly central, prosperous locations that already have an adequate supply of primary care physicians. Because they are for-profit, they are not necessarily an answer to improving health care access in rural areas, shortage areas, or any other areas of the most need, Shipman said. But other emerging clinic models that also emphasize the delivery of primary care by nonphysician providers are being developed to improve access to health care where it is needed. For instance, nonprofit organizations have adopted the retail health clinic model to improve access in some shortage areas. Intermountain
DivQ: diversification for keyword search over structured databases | Keyword queries over structured databases are notoriously ambiguous. No single interpretation of a keyword query can satisfy all users, and multiple interpretations may yield overlapping results. This paper proposes a scheme to balance the relevance and novelty of keyword search results over structured databases. Firstly, we present a probabilistic model which effectively ranks the possible interpretations of a keyword query over structured data. Then, we introduce a scheme to diversify the search results by re-ranking query interpretations, taking into account redundancy of query results. Finally, we propose α-nDCG-W and WS-recall, an adaptation of α-nDCG and S-recall metrics, taking into account graded relevance of subtopics. Our evaluation on two real-world datasets demonstrates that search results obtained using the proposed diversification algorithms better characterize possible answers available in the database than the results of the initial relevance ranking. |
Menopause occurs late in life in the captive chimpanzee (Pan troglodytes) | Menopause in women occurs at mid-life. Chimpanzees, in contrast, continue to display cycles of menstrual bleeding and genital swelling, suggestive of ovulation, until near their maximum life span of about 60 years. Because ovulation was not confirmed hormonally, however, the age at which chimpanzees experience menopause has remained uncertain. In the present study, we provide hormonal data from urine samples collected from 30 female chimpanzees, of which 9 were old (>30 years), including 2 above the age of 50 years. Eight old chimpanzees showed clear endocrine evidence of ovulation, as well as cycles of genital swelling that correlated closely with measured endocrine changes. Endocrine evidence thus confirms prior observations (cyclic anogenital swelling) that menopause is a late-life event in the chimpanzee. We also unexpectedly discovered an idiopathic anovulation in some young and middle-aged chimpanzees; this merits further study. Because our results on old chimpanzees validate the use of anogenital swelling as a surrogate index of ovulation, we were able to combine data on swelling and urinary hormones to provide the first estimates of age-specific rates of menopause in chimpanzees. We conclude that menopause occurs near 50 years of age in chimpanzees as it does in women. Our finding identifies a basic difference between the human and chimpanzee aging processes: female chimpanzees can remain reproductively viable for a greater proportion of their life span than women. Thus, while menopause marks the end of the chimpanzee’s life span, women may thrive for decades more. |
DiReCt: Resource-Aware Dynamic Model Reconfiguration for Convolutional Neural Network in Mobile Systems | Although Convolutional Neural Networks (CNNs) have been widely applied in various applications, their deployment in resource-constrained mobile systems remains a significant concern. To overcome the computation resource constraints, such as limited memory and energy capacity, many works are proposed for mobile CNN optimization. However, most of them lack a comprehensive modeling analysis of the CNN computation consumption and merely focus on static optimization schemes regardless of different mobile computation scenarios. In this work, we proposed DiReCt -- a resource-aware CNN reconfiguration system. Leveraging accurate CNN computation consumption modeling and mobile resource constraint analysis, DiReCt can reconfigure a CNN with different accuracy and resource consumption levels to adapt to various mobile computation scenarios. The experiment results show that: the proposed computation consumption models in DiReCt can well estimate the CNN computation consumption with 94.1% accuracy, and DiReCt achieves at most 34.9% computation acceleration, 52.7% memory reduction, and 27.1% energy saving. Eventually, DiReCt can effectively adapt CNNs to dynamic mobile usage scenarios for optimal performance. |
Comparisons between novel oral anticoagulants and vitamin K antagonists in patients with CKD. | Novel oral anticoagulants (NOACs) (rivaroxaban, dabigatran, apixaban) have been approved by international regulatory agencies to treat atrial fibrillation and venous thromboembolism in patients with kidney dysfunction. However, altered metabolism of these drugs in the setting of impaired kidney function may subject patients with CKD to alterations in their efficacy and a higher risk of bleeding. This article examined the efficacy and safety of the NOACs versus vitamin K antagonists (VKAs) for atrial fibrillation and venous thromboembolism in patients with CKD. A systematic review and meta-analyses of randomized controlled trials were conducted to estimate relative risk (RR) with 95% confidence interval (95% CIs) using a random-effects model. MEDLINE, Embase, and the Cochrane Library were searched to identify articles published up to March 2013. We selected published randomized controlled trials of NOACs compared with VKAs of at least 4 weeks' duration that enrolled patients with CKD (defined as creatinine clearance of 30-50 ml/min) and reported data on comparative efficacy and bleeding events. Eight randomized controlled trials were eligible. There was no significant difference in the primary efficacy outcomes of stroke and systemic thromboembolism (four trials, 9693 participants; RR, 0.64 [95% CI, 0.39 to 1.04]) and recurrent thromboembolism or thromboembolism-related death (four trials, 891 participants; RR, 0.97 [95% CI, 0.43 to 2.15]) with NOACs versus VKAs. The risk of major bleeding or the combined endpoint of major bleeding or clinically relevant nonmajor bleeding (primary safety outcome) (eight trials, 10,616 participants; RR 0.89 [95% CI, 0.68 to 1.16]) was similar between the groups. The use of NOACs in select patients with CKD demonstrates efficacy and safety similar to those with VKAs. Proactive postmarketing surveillance and further studies are pivotal to further define the rational use of these agents. |
A Memory-Augmented Neural Model for Automated Grading | The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets. |
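The scoring idea described above, comparing an ungraded response against one stored exemplar per rubric score and weighting the scores by relevance, can be sketched as soft attention over a memory of embeddings; the random embeddings below are placeholders, not the paper's trained model.

```python
import numpy as np

# Toy memory: one exemplar embedding per rubric score (placeholders, not learned).
rng = np.random.default_rng(0)
dim = 16
rubric_scores = np.array([0, 1, 2, 3])
memory = rng.normal(size=(len(rubric_scores), dim))    # one stored graded response per score

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_score(response_embedding):
    relevance = memory @ response_embedding            # dot-product relevance to each exemplar
    attention = softmax(relevance)                     # soft match over the memory slots
    return float(attention @ rubric_scores)            # relevance-weighted expected score

ungraded = memory[2] + 0.1 * rng.normal(size=dim)      # a response similar to the score-2 exemplar
print(round(predict_score(ungraded), 2))               # close to 2.0
```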
The eleventh hour? Monitoring the English rural tradition in the 1990s and beyond. | It is something of an irony that in many ways the rural tradition in English life was studied more intensively and systematically a century ago than is the case today. The burgeoning of scholarly interest in language, history, tradition and society at both local and national level in the late nineteenth century provided a wealth of data for modern researchers across a wide range of disciplines. While some of this material inevitably appears dated and indeed at times inaccurate or erroneous, there is a great deal of value in the records of rural life painstakingly set down by those pioneering chroniclers, notably in the last three decades of the nineteenth century and up to the time of the First World War. It has been fashionable in recent years to denigrate the work of these early writers and collectors, or indeed to dismiss them altogether. The label ‘antiquarian’, so glibly attached to several generations of nineteenth-century scholars, has become a pejorative, redolent of the amateur, the dilettante and the pedant. While it cannot be denied that a proportion of antiquarian writing is trivial, mundane or self-indulgent, it is manifestly incorrect to treat all such work with contempt. Indeed we are greatly indebted to these individuals, and especially to those whose methods and observations were as rigorous and scholarly as possible, given the criteria and standards of their day. Nor did they confine themselves within the constricting limits of our modern academic disciplines. Their interests were wide-ranging, and they closely observed the rural scene from many different perspectives, thus building up a much fuller picture of life and society in a given locality than would usually be the case today. |
From Efficient Markets Theory to Behavioral Finance |
Sodium restriction in hypertensive patients treated with a converting enzyme inhibitor and a thiazide. | When the function of the renin system is inhibited, blood pressure becomes more dependent on changes in sodium and water balance. Diuretics alone and sodium restriction alone are additive to converting enzyme inhibitor therapy. However, it is not known if these two ways of reducing sodium balance are additive in the presence of established converting enzyme inhibition. We therefore performed a double-blind crossover study of the effects of moderate sodium restriction in 21 patients with essential hypertension who were already being treated with the combination of a converting enzyme inhibitor and a diuretic. After 1 month of captopril (50 mg twice daily) and hydrochlorothiazide (25 mg once daily) therapy, with their usual sodium intake, average supine blood pressure was 147/96 +/- 5/3 (SEM) mm Hg 2 hours after treatment. Patients then reduced their sodium intake to around 80-100 mmol/day for the remainder of the study. After 2 weeks of sodium restriction, they entered a double-blind, randomized, crossover study of Slow Sodium (100 mmol sodium/day) compared with Slow Sodium placebo, while continuing sodium restriction and the above treatment. During the double-blind study, after 1 month of treatment with captopril (50 mg twice daily), hydrochlorothiazide (25 mg once daily), and Slow Sodium placebo, supine blood pressure 2 hours after treatment was 138/88 +/- 4/2 mm Hg (24-hour urinary sodium 104 +/- 11 mmol). After 1 month of captopril (50 mg twice daily), hydrochlorothiazide (25 mg once daily), and Slow Sodium tablets, supine blood pressure 2 hours after treatment was 147/91 +/- 5/2 mm Hg (p less than 0.05; 24-hour urinary sodium 195 +/- 14 mmol).(ABSTRACT TRUNCATED AT 250 WORDS) |
X1000 real-time phoneme recognition VLSI using feed-forward deep neural networks | Deep neural networks show very good performance in phoneme and speech recognition applications when compared to previously used GMM (Gaussian Mixture Model)-based ones. However, efficient implementation of deep neural networks is difficult because the network size needs to be very large when high recognition accuracy is demanded. In this work, we develop a digital VLSI for phoneme recognition using deep neural networks and assess the design in terms of throughput, chip size, and power consumption. The developed VLSI employs a fixed-point optimization method that uses only +Δ, 0, and -Δ for representing each of the weights. The design employs 1,024 simple processing units in each layer, which can, however, be scaled easily according to the needed throughput; the throughput of the architecture varies from 62.5 to 1,000 times the real-time processing speed.
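The "+Δ, 0, −Δ" fixed-point scheme referred to above amounts to ternary weight quantisation. A minimal sketch of one common variant (threshold at a fraction of the mean absolute weight) is shown below; the threshold rule and Δ choice are assumptions for illustration, not necessarily the exact procedure used in the chip.

```python
import numpy as np

def ternarize(weights, threshold_ratio=0.7):
    """Map each weight to {+delta, 0, -delta}; the threshold rule and delta are
    illustrative choices, not the exact procedure from the VLSI design."""
    threshold = threshold_ratio * np.abs(weights).mean()
    mask = np.abs(weights) > threshold                  # weights kept as +/- delta
    delta = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return np.sign(weights) * mask * delta, delta

rng = np.random.default_rng(0)
w = rng.normal(0, 0.05, size=(1024, 1024))              # one dense layer of 1,024 units
w_ternary, delta = ternarize(w)
print("delta:", round(delta, 4), "distinct weight levels:", np.unique(w_ternary).size)  # 3 levels
```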
SparkR: Scaling R Programs with Spark | R is a popular statistical programming language with a number of extensions that support data processing and machine learning tasks. However, interactive data analysis in R is usually limited as the R runtime is single threaded and can only process data sets that fit in a single machine's memory. We present SparkR, an R package that provides a frontend to Apache Spark and uses Spark's distributed computation engine to enable large scale data analysis from the R shell. We describe the main design goals of SparkR, discuss how the high-level DataFrame API enables scalable computation and present some of the key details of our implementation. |
Tracking and Segmentation of the Airways in Chest CT Using a Fully Convolutional Network | Airway segmentation plays an important role in analyzing chest computed tomography (CT) volumes for applications such as lung cancer detection, chronic obstructive pulmonary disease (COPD) assessment, and surgical navigation. However, due to the complex tree-like structure of the airways, obtaining segmentation results with high accuracy for a complete 3D airway extraction remains a challenging task. In recent years, deep learning based methods, especially fully convolutional networks (FCN), have improved the state-of-the-art in many segmentation tasks. 3D U-Net is an example that is optimized for 3D biomedical imaging. It consists of a contracting encoder part to analyze the input volume and a successive decoder part to generate integrated 3D segmentation results. While 3D U-Net can be trained for any 3D segmentation task, its direct application to airway segmentation is challenging due to differently sized airway branches. In this work, we combine 3D deep learning with image-based tracking in order to automatically extract the airways. Our method is driven by adaptive cuboidal volume of interest (VOI) analysis using a 3D U-Net model. We track the airways along their centerlines and set VOIs according to the diameter and running direction of each airway. After setting a VOI, the 3D U-Net is utilized to extract the airway region inside the VOI. All extracted candidate airway regions are unified to form an integrated airway tree. We trained on 30 cases and tested our method on an additional 20 cases. Compared with other state-of-the-art airway tracking and segmentation methods, our method increases the detection rate by 5.6 percentage points while decreasing the false positives (FP) by 0.7 percentage points.
Mortality and its risk factors in Malawian children admitted to hospital with clinical pneumonia, 2001-12: a retrospective observational study. | BACKGROUND
Few studies have reported long-term data on mortality rates for children admitted to hospital with pneumonia in Africa. We examined trends in case fatality rates for all-cause clinical pneumonia and its risk factors in Malawian children between 2001 and 2012.
METHODS
Individual patient data for children (<5 years) with clinical pneumonia who were admitted to hospitals participating in Malawi's Child Lung Health Programme between 2001 and 2012 were recorded prospectively on a standardised medical form. We analysed trends in pneumonia mortality and children's clinical characteristics, and we estimated the association of risk factors with case fatality for children younger than 2 months, 2-11 months of age, and 12-59 months of age using separate multivariable mixed effects logistic regression models.
FINDINGS
Between November, 2012, and May, 2013, we retrospectively collected all available hard copies of yellow forms from 40 of 41 participating hospitals. We examined 113 154 pneumonia cases, 104 932 (92·7%) of whom had mortality data and 6903 of whom died, and calculated an overall case fatality rate of 6·6% (95% CI 6·4-6·7). The case fatality rate significantly decreased between 2001 (15·2% [13·4-17·1]) and 2012 (4·5% [4·1-4·9]; ptrend<0·0001). Univariable analyses indicated that the decrease in case fatality rate was consistent across most subgroups. In multivariable analyses, the risk factors significantly associated with increased odds of mortality were female sex, young age, very severe pneumonia, clinically suspected Pneumocystis jirovecii infection, moderate or severe underweight, severe acute malnutrition, disease duration of more than 21 days, and referral from a health centre. Increasing year between 2001 and 2012 and increasing age (in months) were associated with reduced odds of mortality. Fast breathing was associated with reduced odds of mortality in children 2-11 months of age. However, case fatality rate in 2012 remained high for children with very severe pneumonia (11·8%), severe undernutrition (15·4%), severe acute malnutrition (34·8%), and symptom duration of more than 21 days (9·0%).
INTERPRETATION
Pneumonia mortality and its risk factors have steadily improved in the past decade in Malawi; however, mortality remains high in specific subgroups. Improvements in hospital care may have reduced case fatality rates, although a lack of sufficient data on quality-of-care indicators, together with the possible contribution of socioeconomic and other improvements outside the hospital, precludes an adequate assessment of why case fatality rates fell. Results from this study emphasise the importance of effective national systems for data collection. Further work combining this with data on trends in the incidence of pneumonia in the community is needed to estimate trends in the overall risk of mortality from pneumonia in children in Malawi.
FUNDING
Bill & Melinda Gates Foundation. |
Analysis of MUTYH genotypes and colorectal phenotypes in patients With MUTYH-associated polyposis. | BACKGROUND & AIMS
Biallelic mutations in the base excision DNA repair gene MUTYH lead to MUTYH-associated polyposis (MAP) and predisposition to colorectal cancer (CRC). Functional studies have demonstrated significant differences in base recognition and glycosylase activity between various MUTYH mutations, notably for the 2 mutations most frequently reported in MAP patients: Y179C and G396D (previously annotated as Y165C and G382D). Our goal was to establish correlations between genotypes and colorectal phenotype of patients with MAP.
METHODS
In this multicenter study, we analyzed genotype and phenotype data from 257 MAP patients. Data included age at presentation of MAP, polyp count, and the occurrence, location, and age at presentation of CRC.
RESULTS
Patients with a homozygous G396D mutation or compound heterozygous G396D/Y179C mutations presented later with MAP and had a significantly lower hazard of developing CRC than patients with a homozygous Y179C mutation (P < .001). The mean ages of CRC diagnosis in patients were 58 years (homozygous G396D) and 52 years (compound heterozygous G396D/Y179C) versus 46 years (homozygous Y179C; P = .001, linear regression).
CONCLUSIONS
Our study identified the phenotypic effects of Y179C as relatively severe and of G396D as relatively mild. These clinical data are in accord with findings from in vitro functional assays. Genotypic stratification may become useful in the development of guidelines for counseling, surveillance, and management of families with MAP. |
Verification and validation: verification and validation of simulation models | In this paper we discuss verification and validation of simulation models. Four different approaches to deciding model validity are described; two different paradigms that relate verification and validation to the model development process are presented; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are discussed; a way to document results is given; a recommended procedure for model validation is presented; and accreditation is briefly discussed. |
Developing International Standards for Very Small Enterprises | This article addresses how software engineering can meet the needs of small organizations, especially those with a low capability level. Industry recognizes that very small enterprises (VSEs) contribute valuable products and services. According to the Organisation for Economic Cooperation and Development, Small and Medium Enterprise Outlook, 2002, enterprises with fewer than 10 employees represent 93 percent of all companies in Europe and 56 percent in the US, and account for 66 percent of total employment. The current software engineering standards do not address the needs of these organizations, especially those with a low capability level. Compliance with standards such as those from ISO and the IEEE is difficult if not impossible for them to achieve. Consequently, VSEs have no or very limited ways to be recognized as enterprises that produce quality software systems in their domain, and they are often cut off from some economic activities.
A Visual Analysis Approach for Community Detection of Multi-Context Mobile Social Networks | The problem of detecting community structures of a social network has been extensively studied over recent years, but most existing methods solely rely on the network structure and neglect the context information of the social relations. The main reason is that a context-rich network offers too much flexibility and complexity for automatic or manual modulation of the multifaceted context in the analysis process. We address the challenging problem of incorporating context information into the community analysis with a novel visual analysis mechanism. Our approach consists of two stages: interactive discovery of salient context, and iterative context-guided community detection. Central to the analysis process is a context relevance model (CRM) that visually characterizes the influence of a given set of contexts on the variation of the detected communities, and discloses the community structure in specific context configurations. The extracted relevance is used to drive an iterative visual reasoning process, in which the community structures are progressively discovered. We introduce a suite of visual representations to encode the community structures, the context as well as the CRM. In particular, we propose an enhanced parallel coordinates representation to depict the context and community structures, which allows for interactive data exploration and community investigation. Case studies on several datasets demonstrate the efficiency and accuracy of our approach. |
Navigating the fourth industrial revolution. | A new framework for advanced manufacturing is being promoted in Germany, and is increasingly being adopted by other countries. The framework represents a coalescing of digital and physical technologies along the product value chain in an attempt to transform the production of goods and services1. It is an approach that focuses on combining technologies such as additive manufacturing, automation, digital services and the Internet of Things, and it is part of a growing movement towards exploiting the convergence between emerging technologies. This technological convergence is increasingly being referred to as the ‘fourth industrial revolution’, and like its predecessors, it promises to transform the ways we live and the environments we live in. (While there is no universal agreement on what constitutes an ‘industrial revolution’, proponents of the fourth industrial revolution suggest that the first involved harnessing steam power to mechanize production; the second, the use of electricity in mass production; and the third, the use of electronics and information technology to automate production.) Yet, without up-front efforts to ensure its beneficial, responsible and responsive development, there is a very real danger that this fourth industrial revolution will not only fail to deliver on its promise, but also ultimately increase the very challenges its advocates set out to solve. At its heart, the fourth industrial revolution represents an unprecedented fusion between and across digital, physical and biological technologies, and a resulting anticipated transformation in how products are made and used2. This is already being experienced with the growing Internet of Things, where dynamic information exchanges between networked devices are opening up new possibilities from manufacturing to lifestyle enhancement and risk management. Similarly, a rapid amplification of 3D printing capabilities is now emerging through the convergence of additive manufacturing technologies, online data sharing and processing, advanced materials, and ‘printable’ biological systems. And we are just beginning to see the commercial use of potentially transformative convergence between cloud-based artificial intelligence and open-source hardware and software, to create novel platforms for innovative human–machine interfaces. These and other areas of development only scratch the surface of how convergence is anticipated to massively extend the impacts of the individual technologies it draws on. This is a revolution that comes with the promise of transformative social, economic and environmental advances — from eliminating disease, protecting the environment, and providing plentiful energy, food and water, to reducing inequity and empowering individuals and communities. Yet, the path towards this utopia-esque future is fraught with pitfalls — perhaps more so than with any former industrial revolution. As more people get closer to gaining access to increasingly powerful converging technologies, a complex risk landscape is emerging that lies dangerously far beyond the ken of current regulations and governance frameworks. As a result, we are in danger of creating a global ‘wild west’ of technology innovation, where our good intentions may be among the first casualties. Within this emerging landscape, cyber security is becoming an increasingly important challenge, as global digital networks open up access to manufacturing processes and connected products across the world. 
The risks of cyber ‘insecurity’ increase by orders of magnitude as manufacturing becomes more distributed and less conventionally securable. Distributed manufacturing is another likely outcome of the fourth industrial revolution. A powerful fusion between online resources, modular and open-source tech, and point-of-source production devices, such as 3D printers, will increasingly enable entrepreneurs to set up shop almost anywhere. While this could be a boon for local economies, it magnifies the ease with which manufacturing can slip the net of conventional regulation, while still having the ability to have a global impact. These and other challenges reflect a blurring of the line between hardware and software systems that is characteristic of the fourth industrial revolution. We are heading rapidly towards a future where hardware manufacturers are able to grow, crash and evolve physical products with the same speed that we have become accustomed to with software products. Yet, manufacturing regulations remain based on product development cycles that span years, not hours. Anticipating this high-speed future, we are already seeing the emergence of hardware capabilities that can be updated at the push of a button. Tesla Motors, for instance, recently released a software update that added hardware-based ‘autopilot’ capabilities to the company’s existing fleet of model S vehicles3. This early demonstration of the convergence between hardware and software reflects a growing capacity to rapidly change the behaviour of hardware systems through software modifications that lies far beyond the capacity of current regulations to identify, monitor and control. This in turn increases the potential risks to health, safety and the environment, simply because well-intentioned technologies are at some point going to fall through the holes in an increasingly inadequate regulatory net. There are many other examples where converging technologies are increasing the gap between what we can do and our understanding of how to do it responsibly. The convergence between robotics, nanotechnology and cognitive augmentation, for instance, and that between artificial intelligence, gene editing and maker communities both push us into uncertain territory. Yet despite the vulnerabilities inherent with fast-evolving technological capabilities that are tightly coupled, complex and poorly regulated, we lack even the beginnings of national or international conceptual frameworks to think about responsible decisionmaking and responsive governance. How vulnerable we will be to unintended and unwanted consequences in this convergent technologies future is unclear. What is clear though is that, without new thinking on risk, resilience and governance, and without rapidly emerging abilities to identify early warnings and take corrective action, the chances of systems based around converging technologies failing fast and failing spectacularly will only increase. |
A Shortest Path Dependency Kernel for Relation Extraction | We present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. Experiments on extracting top-level relations from the ACE (Automated Content Extraction) newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels. |
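The core observation above lends itself to a very small illustration. The sketch below (not the authors' code) uses networkx to extract the shortest path between two entities in a toy dependency graph; the example sentence and its dependency edges are assumptions made up for the demonstration, and in practice the edges would come from a dependency parser.

```python
# A minimal sketch of the idea behind the shortest-path dependency kernel:
# the words on the shortest path between two entities in a sentence's
# dependency graph carry most of the relation signal.
import networkx as nx

# Toy dependency parse of "protesters seized several stations"
# (head, dependent) pairs; in practice these come from a dependency parser.
edges = [
    ("seized", "protesters"),   # nsubj
    ("seized", "stations"),     # dobj
    ("stations", "several"),    # amod
]

graph = nx.Graph()              # undirected: paths may climb up and down the tree
graph.add_edges_from(edges)

# Entities whose relation we want to characterise.
entity1, entity2 = "protesters", "stations"

path = nx.shortest_path(graph, source=entity1, target=entity2)
print(path)  # ['protesters', 'seized', 'stations'] -> input features for the kernel
```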
Multimodal visualization of the optomechanical response of silicon cantilevers with ultrafast electron microscopy | The manner in which structure at the mesoscale affects emergent collective dynamics has become the focus of much attention owing, in part, to new insights into how morphology on these spatial scales can be exploited for enhancement and optimization of macroscopic properties. Key to advancements in this area is development of multimodal characterization tools, wherein access to a large parameter space (energy, space, and time) is achieved (ideally) with a single instrument. Here, we describe the study of optomechanical responses of single-crystal Si cantilevers with an ultrafast electron microscope. By conducting structural-dynamics studies in both real and reciprocal space, we are able to visualize MHz vibrational responses from atomic- to micrometer-scale dimensions. With nanosecond selected-area and convergent-beam diffraction, we demonstrate the effects of spatial signal averaging on the isolation and identification of eigenmodes of the cantilever. We find that the reciprocal-space methods reveal eigenmodes mainly below 5 MHz, indicative of the first five vibrational eigenvalues for the cantilever geometry studied here. With nanosecond real-space imaging, however, we are able to visualize local vibrational frequencies exceeding 30 MHz. The heterogeneously distributed vibrational response is mapped via generation of pixel-by-pixel time-dependent Fourier spectra, which reveal the localized high-frequency modes, whose presence is not detected with parallel-beam diffraction. By correlating the transient response of the three modalities, the oscillation and dissipation of the optomechanical response can be compared to a linear-elastic model to isolate and identify the spatial three-dimensional dynamics. |
A Large-Displacement 3-DOF Flexure Parallel Mechanism with Decoupled Kinematics Structure | This paper proposes an XYZ-flexure parallel mechanism (FPM) with large displacement and a decoupled kinematics structure. The large-displacement FPM has a motion range of more than 1 mm. Moreover, the decoupled XYZ-stage has small cross-axis error and small parasitic rotation. In this study, typical prismatic joints are investigated, and a new large-displacement prismatic joint using notch hinges is designed. The conceptual design of the FPM is proposed by assembling these modular prismatic joints, and then the optimal design of the FPM is conducted. The analytical models of linear stiffness and dynamics are derived using the pseudo-rigid-body (PRB) method. Finally, numerical simulation using ANSYS is conducted for modal analysis to verify the analytical dynamics equation. Experiments are conducted to verify the proposed design in terms of linear stiffness, cross-axis error and parasitic rotation. |
Automated component-handling system for education and research in mechatronics | Mechatronic practitioners are engaged in the assembly and maintenance of complex machines, plants and systems in the engineering sector or in organisations which purchase and operate such mechatronic systems. Mechatronics, often described as the synergy of mechanical, computer and electronic technologies, is increasingly being singled out as a core focus by both the education and business sectors. This international trend is also evident in South Africa, and engineering faculties now have to provide students with the opportunity not only to acquire theoretical knowledge in mechatronics but also to develop skills in implementing and designing mechatronic systems. |
CYP2D6 polymorphism in systemic lupus erythematosus patients | Objectives: To determine whether patients with idiopathic systemic lupus erythematosus (SLE) are associated with impaired CYP2D6 activity and to gain insight into whether there is an association between particular CYP2D6 genotypes and susceptibility to SLE, and whether CYP2D6 polymorphism is linked to any specific clinical features of SLE. Methods: Debrisoquine sulfate (10 mg p.o.) was given to 159 healthy volunteers and 39 idiopathic SLE patients. Genotypic assay was carried out in 80 healthy volunteers and 32 patients. A 10-ml blood sample was drawn for genotypic assay. Debrisoquine and 4-hydroxydebrisoquine were determined in 8-h urine samples. Blood samples were analysed for the presence of mutations in the CYP2D6 gene, by using polymerase chain reaction (PCR) specific for CYP2D6*3 and CYP2D6*4 alleles. Results: The metabolic ratio of debrisoquine to 4-hydroxydebrisoquine ranged from 0.01 to 86.98 in healthy subjects and from 0.02 to 96 in SLE patients. We observed the poor metabolizer (PM) debrisoquine phenotype in three of 39 patients with idiopathic SLE (7.6%) and five of 159 healthy subjects (3.1%). There was no significant difference in the frequency of PM phenotypes between idiopathic SLE patients and healthy subjects (Fisher's exact test, P = 0.19). No significant differences in the distribution of overall genotypes or allele frequencies were observed between the two groups. No significant relationships were found between specific clinical features and the overall genotype. Conclusion: The results of this study confirm that CYP2D6 activity is not impaired in SLE and that there is no association between SLE and phenotypic CYP2D6 status. The results also showed that there was no difference in the frequency of CYP2D6A and CYP2D6B alleles between controls and patients with SLE. |
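As a small worked illustration of the statistical comparison reported in the Results, the sketch below runs Fisher's exact test on the poor-metabolizer counts quoted above (3 of 39 patients versus 5 of 159 controls) using scipy; the 2x2 table layout is an assumption about how the comparison was framed, and the software actually used in the study is not stated in the abstract.

```python
# Fisher's exact test on the PM phenotype counts reported in the abstract
# (which quotes P = 0.19 for this comparison). scipy is used for illustration.
from scipy.stats import fisher_exact

#            PM    non-PM
table = [[3, 39 - 3],      # idiopathic SLE patients
         [5, 159 - 5]]     # healthy controls

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, P = {p_value:.2f}")
```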
Evolution of Navajo eclogites and hydration of the mantle wedge below the Colorado Plateau, southwestern United States | Eclogite and pyroxenite xenoliths from ultramafic diatremes of the Navajo province on the Colorado Plateau have been analyzed to investigate hydration of continental mantle and effects of low-angle subduction on the mantle wedge. Xenoliths have been characterized by petrographic and electron probe analysis and by Sm-Nd, Rb-Sr, K-Ar, and O isotopic analysis of mineral separates from one eclogite and by U-Pb isotopic analysis of zircons from three samples. K-Ar analysis of phengite establishes eruption of a Garnet Ridge, Arizona, diatreme at 30 Ma. Sm-Nd and Rb-Sr analyses of clinopyroxene and garnet from that eclogite document recrystallization shortly preceding eruption. Three zircon fractions have been analyzed from that eclogite and from two others representing the nearby Moses Rock and Mule Ear diatremes. Seven of nine small multigrain fractions scatter about a poorly fit discordia between ca. 35 Ma and 1515 Ma (fractions range from overlapping concordia at the lower intercept to a 207Pb/206Pb age of ca. 1220 Ma). The discordant fractions establish a mid-Proterozoic zircon component in each eclogite, inconsistent with an origin from basalt of the Farallon plate. The pressure recorded by one of these eclogites (3.3 GPa) exceeds that of an eclogite previously attributed to the Farallon plate. Nonetheless, each of the eclogites contains a fraction of nearly concordant zircons with ages in the range 35 to 41 Ma, and one rock also contains a fraction that is nearly concordant at 70 Ma. These concordant ages are interpreted to record episodic zircon growth during recrystallization of Proterozoic mantle. The concordant zircon ages are consistent with published data that establish recrystallization of Navajo eclogites from 81 to 33 Ma, a time interval similar to that of the Laramide orogeny. The eclogite-facies recrystallization and growth of new zircon are attributed to the catalytic effects of water introduced into the mantle from the Farallon slab. Water penetrated fracture zones extending for at least tens of kilometers into the mantle wedge above the Farallon slab during low-angle subduction. Magmatism in the San Juan volcanic field to the northeast of the diatremes may be related to similar hydration. |
Thorough static analysis of device drivers | Bugs in kernel-level device drivers cause 85% of the system crashes in the Windows XP operating system [44]. One of the sources of these errors is the complexity of the Windows driver API itself: programmers must master a complex set of rules about how to use the driver API in order to create drivers that are good clients of the kernel. We have built a static analysis engine that finds API usage errors in C programs. The Static Driver Verifier tool (SDV) uses this engine to find kernel API usage errors in a driver. SDV includes models of the OS and the environment of the device driver, and over sixty API usage rules. SDV is intended to be used by driver developers "out of the box." Thus, it has stringent requirements: (1) complete automation with no input from the user; (2) a low rate of false errors. We discuss the techniques used in SDV to meet these requirements, and empirical results from running SDV on over one hundred Windows device drivers. |
Multi-objective reasoning with constrained goal models | Goal models have been widely used in computer science to represent software requirements, business objectives, and design qualities. Existing goal modelling techniques, however, have shown limitations of expressiveness and/or tractability in coping with complex real-world problems. In this work, we exploit advances in automated reasoning technologies, notably satisfiability and optimization modulo theories (SMT/OMT), and we propose and formalize: (1) an extended modelling language for goals, namely the constrained goal model (CGM), which makes explicit the notion of goal refinement and of domain assumption, allows for expressing preferences between goals and refinements and allows for associating numerical attributes to goals and refinements for defining constraints and optimization goals over multiple objective functions, refinements, and their numerical attributes; (2) a novel set of automated reasoning functionalities over CGMs, allowing for automatically generating suitable refinements of input CGMs, under user-specified assumptions and constraints, that also maximize preferences and optimize given objective functions. We have implemented these modelling and reasoning functionalities in a tool, named CGM-Tool, using the OMT solver OptiMathSAT as automated reasoning backend. Moreover, we have conducted an experimental evaluation on large CGMs to support the claim that our proposal scales well for goal models with 1000s of elements. |
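As a self-contained illustration of the kind of optimization-modulo-theories reasoning described above, the toy sketch below encodes a single OR-refined goal with numeric costs and asks an OMT engine for a cost-minimal refinement. It uses the Z3 Python API rather than the OptiMathSAT backend named in the abstract, and the goal names and costs are invented for the example; it is not the CGM-Tool encoding.

```python
# Toy illustration (not CGM-Tool) of optimization modulo theories for goal
# refinement: choose between two refinements of a root goal while minimising
# cost, using Z3's Optimize engine in place of OptiMathSAT.
from z3 import Bool, Int, Optimize, Implies, Or, If, sat

root = Bool("ScheduleMeeting")
byEmail = Bool("CollectTimetablesByEmail")
byCall = Bool("CollectTimetablesByCall")
cost = Int("cost")

opt = Optimize()
opt.add(root)                                           # the root goal must hold
opt.add(Implies(root, Or(byEmail, byCall)))             # OR-refinement of the root goal
opt.add(cost == If(byEmail, 2, 0) + If(byCall, 5, 0))   # numeric attributes of refinements
opt.minimize(cost)                                      # objective over the chosen refinement

if opt.check() == sat:
    model = opt.model()
    print(model[byEmail], model[byCall], model[cost])   # e.g. True False 2
```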
Intelligent stereo camera mobile platform for indoor service robot research | Stereo vision is an active research topic in computer vision. Point Grey® Bumblebee® and digital single-lens reflex (DSLR) cameras are commonly used in stereo vision research; they are robust but expensive. Open-source electronic prototyping platforms such as Arduino and Raspberry Pi are interesting products that allow students and researchers to build custom, inexpensive experimental equipment for their research projects. This paper describes the intelligent stereo camera mobile platform developed in our research using Raspberry Pi and camera modules, and presents in detail the concept of using inexpensive open-source parts for robotic stereo vision research. |
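A minimal sketch of the kind of stereo processing such a platform could run is given below: block-matching disparity on a rectified grayscale image pair with OpenCV. The file names and matcher parameters are assumptions for illustration and are not taken from the paper.

```python
# Block-matching disparity from a rectified stereo pair with OpenCV.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)                # fixed-point map (int16, scaled by 16)

# Convert to float pixels of disparity, then rescale for saving as an image.
disparity = disparity.astype("float32") / 16.0
disparity_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity_vis)
```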
Construct validity of an attention rating scale for traumatic brain injury. | Attention deficits are nearly ubiquitous after traumatic brain injury (TBI). In the subacute phase of moderate to severe TBI, these deficits may be difficult to measure with the precision needed to predict outcomes, assess degree of recovery, and monitor treatment response. This article reports the findings of four studies, three observational and one a randomized, controlled treatment trial of methylphenidate (MP), designed to provide construct validation of the Moss Attention Rating Scale (MARS), an observational measure of attention dysfunction following TBI. One hundred seven participants with moderate to severe TBI were enrolled during treatment on an inpatient rehabilitation unit. MARS scores were provided independently by four rehabilitation disciplines (Physical, Occupational and Speech Therapies and Nursing). Results indicated that the MARS: (1) is more strongly related to concurrent measures of cognitive versus physical disability, supporting its validity as a measure of cognition, (2) is more strongly related to concurrent psychometric measures of attention versus measures thought to rely less on attention, supporting its validity as a measure of attention; and (3) predicts 1-year outcomes of TBI better than psychometric measures of attention. However, the MARS (4) was not differentially affected by MP versus placebo treatment. Results support the construct validity and utility of the MARS, with further research needed to clarify its role in treatment outcome assessment. |
Deep Multi-instance Learning with Dynamic Pooling | End-to-end optimization of multi-instance learning (MIL) using neural networks is an important problem with many applications, in which a core issue is how to design a permutation-invariant pooling function without losing much instance-level information. Inspired by the dynamic routing in recent capsule networks, we propose a novel dynamic pooling function for MIL. It is an adaptive scheme for both key instance selection and modeling the contextual information among instances in a bag. The dynamic pooling iteratively updates each instance's contribution to its bag. It is permutation-invariant and can interpret the instance-to-bag relationship. The proposed dynamic-pooling-based multi-instance neural network has been validated on many MIL tasks and outperforms other MIL methods. |
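The following numpy sketch illustrates the general idea of routing-style dynamic pooling: per-instance contributions to the bag representation are refined iteratively according to their agreement with the current bag vector, and the result is invariant to the order of instances. The update rule shown is a simplified assumption; the paper's exact formulation may differ.

```python
# Routing-style dynamic pooling for multi-instance learning (simplified sketch).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_pool(instances, n_iter=3):
    """instances: (num_instances, feature_dim) array for one bag."""
    logits = np.zeros(len(instances))            # routing logits, start uniform
    for _ in range(n_iter):
        weights = softmax(logits)                # permutation-invariant weights
        bag = weights @ instances                # weighted bag representation
        logits = logits + instances @ bag        # update by agreement with the bag
    return bag, weights

bag_vec, contributions = dynamic_pool(np.random.randn(5, 8))
print(bag_vec.shape, contributions)              # (8,), per-instance contributions
```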
Using Spreadsheets and VBA for Teaching Civil Engineering Concepts | Spreadsheets are becoming increasingly popular for solving engineering-related problems. Among the strong features of spreadsheets are their intuitive cell-based structure and easy-to-use capabilities. Excel, for example, is a powerful spreadsheet with robust VBA programming capabilities, and it can be a powerful tool for teaching civil engineering concepts. Spreadsheets can perform basic calculations such as cost estimates, schedule and cost control, and markup estimation, as well as structural calculations of reactions, stresses, strains, deflections, and slopes. Spreadsheets can solve complex problems, create charts and graphs, and generate useful reports. This paper highlights the use of Excel spreadsheets and VBA in teaching civil engineering concepts and creating useful applications. The focus is on concepts related to construction management and structural engineering, ranging from a simple cost-estimating problem to advanced applications such as simulation using PERT and the analysis of structural members. Several spreadsheets were developed for time-cost tradeoff analysis, optimum markup estimation, simulating activities with uncertain durations, scheduling repetitive projects, schedule and cost control, optimization of construction operations, and structural calculations of reactions, internal forces, stresses, strains, deflections, and slopes. Seven illustrative examples are presented to demonstrate the use of spreadsheets as a powerful tool for teaching civil engineering concepts. |
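As a small worked example of the kind of structural calculation the paper implements in spreadsheets, the snippet below computes the support reactions of a simply supported beam under a single point load. It is written in Python rather than VBA for self-containment, and the numerical values are assumed purely for illustration.

```python
# Support reactions of a simply supported beam with a single point load,
# the sort of calculation typically scripted in an Excel/VBA teaching sheet.
def simple_beam_reactions(span, load, load_position):
    """Point load P at distance a from support A, span L between supports."""
    a = load_position
    b = span - a
    reaction_a = load * b / span    # statics: sum of moments about support B
    reaction_b = load * a / span    # statics: sum of moments about support A
    return reaction_a, reaction_b

R_A, R_B = simple_beam_reactions(span=6.0, load=20.0, load_position=2.0)  # m, kN, m
print(f"R_A = {R_A:.1f} kN, R_B = {R_B:.1f} kN")  # R_A = 13.3 kN, R_B = 6.7 kN
```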
Classifying Online Dating Profiles on Tinder using FaceNet Facial Embeddings | A method to produce personalized classification models to automatically review online dating profiles on Tinder, based on the user’s historical preference, is proposed. The method takes advantage of a FaceNet facial classification model to extract features which may be related to facial attractiveness. The embeddings from a FaceNet model were used as the features to describe an individual’s face. A user reviewed 8,545 online dating profiles. For each reviewed online dating profile, a feature set was constructed from the profile images which contained just one face. Two approaches are presented to go from the set of features for each face to a set of profile features. A simple logistic regression trained on the embeddings from just 20 profiles could obtain a 65% validation accuracy. A point of diminishing marginal returns was identified to occur around 80 profiles, at which the model accuracy of 73% would only improve marginally after reviewing a significant number of additional profiles. |
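A minimal sketch of the classification step described above is shown below: a logistic regression trained on fixed-length facial embeddings to predict the user's decision. Real FaceNet embeddings and swipe labels are out of scope here, so random placeholder arrays stand in for them; the 512-dimensional embedding size and the train/validation split are assumptions for illustration.

```python
# Logistic regression on per-profile facial embeddings (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: one 512-d FaceNet-style embedding per profile, label 1 = "like".
embeddings = np.random.randn(200, 512)
labels = np.random.randint(0, 2, size=200)

X_train, X_val, y_train, y_val = train_test_split(
    embeddings, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```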
Does a tensorial energy-momentum density for gravitation exist? | No tensorial expression for the energy-momentum density of the gravitational field seems to exist in general relativity. Would this be an ineluctable property of nature, or just a limitation of the general-relativistic geometric bias? An analysis of the spin connection shows that the non-tensorial character of the general relativity expressions is due to the fact that they all include the energy-momentum density of the fictitious forces associated to the non-inertiality of the adopted frame--which is non-tensorial by its very nature. Splitting the spin connection in such a way as to separate gravity from inertia, one ends up with teleparallel gravity, where a tensorial expression for the energy-momentum of gravity alone naturally emerges. Like the energy-momentum tensor of any other field in the presence of gravitation, it is conserved only in the covariant sense. |
A Zika Vaccine Targeting NS1 Protein Protects Immunocompetent Adult Mice in a Lethal Challenge Model | Zika virus (ZIKV) is a mosquito-borne flavivirus that has rapidly extended its geographic range around the world. Its association with abnormal fetal brain development, sexual transmission, and lack of a preventive vaccine have constituted a global health concern. Designing a safe and effective vaccine requires significant caution due to overlapping geographical distribution of ZIKV with dengue virus (DENV) and other flaviviruses, possibly resulting in more severe disease manifestations in flavivirus immune vaccinees such as Antibody-Dependent Enhancement (ADE, a phenomenon involved in pathogenesis of DENV, and a risk associated with ZIKV vaccines using the envelope proteins as immunogens). Here, we describe the development of an alternative vaccine strategy encompassing the expression of ZIKV non-structural-1 (NS1) protein from a clinically proven safe, Modified Vaccinia Ankara (MVA) vector, thus averting the potential risk of ADE associated with structural protein-based ZIKV vaccines. A single intramuscular immunization of immunocompetent mice with the MVA-ZIKV-NS1 vaccine candidate provided robust humoral and cellular responses, and afforded 100% protection against a lethal intracerebral dose of ZIKV (strain MR766). This is the first report of (i) a ZIKV vaccine based on the NS1 protein and (ii) single dose protection against ZIKV using an immunocompetent lethal mouse challenge model. |
On the Failure to Detect Changes in Scenes Across Brief Interruptions | When brief blank fields are placed between alternating displays of an original and a modified scene, a striking failure of perception is induced: The changes become extremely difficult to notice, even when they are large, presented repeatedly, and the observer expects them to occur (Rensink, O’Regan, & Clark, 1997). To determine the mechanisms behind this induced “change blindness”, four experiments examine its dependence on initial preview and on the nature of the interruptions used. Results support the proposal that representations at the early stages of visual processing are inherently volatile, and that focused attention is needed to stabilize them sufficiently to support the perception of change. |
Novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell for 1Tb file storage application | A novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell has been successfully developed, for the first time. The DC-SF cell consists of a surrounding floating gate with stacked dual control gate. With this structure, high coupling ratio, low voltage cell operation (program: 15V and erase: −11V), and wide P/E window (9.2V) can be obtained. Moreover, negligible FG-FG interference (12mV/V) is achieved due to the control gate shield effect. Then we propose 3D DC-SF NAND flash cell as the most promising candidate for 1Tb and beyond with stacked multi bit FG cell (2 ∼ 4bit/cell). |
Identification of penumbra and infarct in acute ischemic stroke using computed tomography perfusion-derived blood flow and blood volume measurements. | BACKGROUND AND PURPOSE
We investigated whether computed tomography (CT) perfusion-derived cerebral blood flow (CBF) and cerebral blood volume (CBV) could be used to differentiate between penumbra and infarcted gray matter in a limited, exploratory sample of acute stroke patients.
METHODS
Thirty patients underwent a noncontrast CT (NCCT), CT angiography (CTA), and CT perfusion (CTP) scan within 7 hours of stroke onset, NCCT and CTA at 24 hours, and NCCT at 5 to 7 days. Twenty-five patients met the criteria for inclusion and were subsequently divided into 2 groups: those with recanalization at 24 hours (n=16) and those without (n=9). Penumbra was operationally defined as tissue with an admission CBF <25 mL x 100 g(-1) x min(-1) that was not infarcted on the 5- to 7-day NCCT. Logistic regression was applied to differentiate between infarct and penumbra data points.
RESULTS
For recanalized patients, CBF was significantly lower (P<0.05) for infarct (13.3+/-3.75 mL x 100 g(-1) x min(-1)) than penumbra (25.0+/-3.82 mL x 100 g(-1) x min(-1)). CBV in the penumbra (2.15+/-0.43 mL x 100 g(-1)) was significantly higher than contralateral (1.78+/-0.30 mL x 100 g(-1)) and infarcted tissue (1.12+/-0.37 mL x 100 g(-1)). Logistic regression using an interaction term (CBFxCBV) resulted in sensitivity, specificity, and accuracy of 97.0%, 97.2%, and 97.1%, respectively. The interaction term resulted in a significantly better (P<0.05) fit than CBF or CBV alone, suggesting that the CBV threshold for infarction varies with CBF. For patients without recanalization, CBF and CBV for infarcted regions were 15.1+/-5.67 mL x 100 g(-1) x min(-1) and 1.17+/-0.41 mL x 100 g(-1), respectively.
CONCLUSIONS
We have shown in a limited sample of patients that CBF and CBV obtained from CTP can be sensitive and specific for infarction and should be investigated further in a prospective trial to assess their utility for differentiating between infarct and penumbra. |
Learning Transferable Features for Speech Emotion Recognition | Emotion recognition from speech is one of the key steps towards emotional intelligence in advanced human-machine interaction. Identifying emotions in human speech requires learning features that are robust and discriminative across diverse domains that differ in terms of language, spontaneity of speech, recording conditions, and types of emotions. This corresponds to a learning scenario in which the joint distributions of features and labels may change substantially across domains. In this paper, we propose a deep architecture that jointly exploits a convolutional network for extracting domain-shared features and a long short-term memory network for classifying emotions using domain-specific features. We use transferable features to enable model adaptation from multiple source domains, given the sparseness of speech emotion data and the fact that target domains are short of labeled data. A comprehensive cross-corpora experiment with diverse speech emotion domains reveals that transferable features provide gains ranging from 4.3% to 18.4% in speech emotion recognition. We evaluate several domain adaptation approaches, and we perform an ablation study to understand which source domains add the most to the overall recognition effectiveness for a given target domain. |
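The following PyTorch sketch illustrates the kind of architecture described above: a convolutional front-end intended for domain-shared feature extraction, followed by an LSTM and a linear classifier for the emotion decision. Layer sizes, the input format, and the number of emotion classes are assumptions for illustration and are not the paper's configuration.

```python
# Convolutional front-end (domain-shared features) + LSTM + classifier sketch.
import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, n_features=40, n_emotions=4):
        super().__init__()
        self.conv = nn.Sequential(                       # domain-shared feature extractor
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)   # temporal modelling
        self.classifier = nn.Linear(128, n_emotions)     # emotion decision head

    def forward(self, x):                                # x: (batch, n_features, time)
        h = self.conv(x).transpose(1, 2)                 # -> (batch, time, 64)
        _, (hidden, _) = self.lstm(h)                    # last hidden state
        return self.classifier(hidden[-1])               # emotion logits

logits = EmotionNet()(torch.randn(8, 40, 200))           # 8 utterances, 200 frames each
print(logits.shape)                                      # torch.Size([8, 4])
```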
Femoral hip stem with additively manufactured cellular structures | Research and development of hip stem implants started centuries ago. However, there is still not yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts that combine variable-density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines have become powerful competitors in the economical mass production of hip implants. This is due to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem designs. |
Evaluation of segmentation methods on head and neck CT: Auto-segmentation challenge 2015. | PURPOSE
Automated delineation of structures and organs is a key step in medical imaging. However, due to the large number and diversity of structures and the large variety of segmentation algorithms, a consensus is lacking as to which automated segmentation method works best for certain applications. Segmentation challenges are a good approach for unbiased evaluation and comparison of segmentation algorithms.
METHODS
In this work, we describe and present the results of the Head and Neck Auto-Segmentation Challenge 2015, a satellite event at the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2015 conference. Six teams participated in a challenge to segment nine structures in the head and neck region of CT images: brainstem, mandible, chiasm, bilateral optic nerves, bilateral parotid glands, and bilateral submandibular glands.
RESULTS
This paper presents the quantitative results of this challenge using multiple established error metrics and a well-defined ranking system. The strengths and weaknesses of the different auto-segmentation approaches are analyzed and discussed.
CONCLUSIONS
The Head and Neck Auto-Segmentation Challenge 2015 was a good opportunity to assess the current state-of-the-art in segmentation of organs at risk for radiotherapy treatment. Participating teams had the possibility to compare their approaches to other methods under unbiased and standardized circumstances. The results demonstrate a clear tendency toward more general purpose and fewer structure-specific segmentation algorithms. |
BigBIRD: A large-scale 3D database of object instances | The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition - whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a high-quality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perception-related problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http://rll.eecs.berkeley.edu/bigbird. |
Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding | In this paper, we propose to learn word embeddings based on the recent fixed-size ordinally forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence into a fixed-size representation. We use FOFE to fully encode the left and right context of each word in a corpus to construct a novel word-context matrix, which is further weighted and factorized using truncated SVD to generate low-dimension word embedding vectors. We have evaluated this alternative method of encoding word-context statistics and show that the new FOFE method has a notable effect on the resulting word embeddings. Experimental results on several popular word similarity tasks have demonstrated that the proposed method outperforms many recently popular neural prediction methods as well as the conventional SVD models that use canonical count-based techniques to generate word-context matrices. |
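As a small illustration of the encoding itself, the numpy sketch below folds a variable-length word sequence into a fixed-size vector by repeatedly scaling the running sum with a forgetting factor and adding the current word's one-hot vector. The toy vocabulary, sentence, and forgetting factor are assumptions chosen for the example.

```python
# Fixed-size ordinally forgetting encoding (FOFE) of a word sequence:
# z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of word t.
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2}
alpha = 0.7                                   # forgetting factor, 0 < alpha < 1

def fofe_encode(words, vocab, alpha):
    z = np.zeros(len(vocab))
    for word in words:
        one_hot = np.zeros(len(vocab))
        one_hot[vocab[word]] = 1.0
        z = alpha * z + one_hot               # recursive fixed-size update
    return z

print(fofe_encode(["the", "cat", "sat"], vocab, alpha))
# [0.49 0.7  1.  ] -> earlier words are discounted more, later words less
```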