title | abstract |
---|---|
Cerebellar Changes in Guinea Pig Offspring Following Suppression of Neurosteroid Synthesis During Late Gestation | Elevated gestational concentrations of allopregnanolone are essential for the development and neuroprotection of the foetal brain. Preterm birth deprives the foetus of these high levels of allopregnanolone, which may contribute to the associated adverse effects on cerebellar development. Preterm birth also alters the subunit composition of GABAA receptors, which may further limit neurosteroid action. The objective of this study was to determine the effects of suppressing allopregnanolone levels on markers of cerebellar development and functional outcome. Pregnant guinea pigs were treated with finasteride, at a dose (25 mg/kg maternal weight) shown to suppress allopregnanolone, from 60 days of gestation until delivery (term ∼71 days). The cerebella of neonates whose mothers were treated with finasteride or vehicle during pregnancy were collected at postnatal day 8. Pups exposed to finasteride displayed significantly greater glial fibrillary acidic protein area coverage and reduced GABAA receptor α6-subunit messenger RNA within the cerebellum than pups exposed to vehicle. These findings indicate that loss of neurosteroid action on the foetal brain in late gestation produces prolonged astrocyte activation and reduced GABAA receptor α6-subunit expression. These changes may contribute to the long-term changes in function associated with preterm birth. |
A Foundational Architecture for Artificial General Intelligence | Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle, while higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. Finally, lessons for AGI researchers drawn from the model and its architecture are discussed. |
Artificial intelligence, machine learning and deep learning | Artificial intelligence is increasingly touted as the new mobile. Because of the high volume of data being generated by devices, sensors and social media users, machines can learn to distinguish patterns and make reasonably good predictions. This article explores the use of machine learning and its methodologies. Furthermore, the field of deep learning, which is being exploited by many leading IT providers, is clarified and discussed. |
Myostatin negatively regulates satellite cell activation and self-renewal | Satellite cells are quiescent muscle stem cells that promote postnatal muscle growth and repair. Here we show that myostatin, a TGF-beta family member, signals satellite cell quiescence and also negatively regulates satellite cell self-renewal. BrdU labeling in vivo revealed that higher numbers of Myostatin-deficient satellite cells are activated as compared with wild type. In contrast, addition of Myostatin to myofiber explant cultures inhibits satellite cell activation. Cell cycle analysis confirms that Myostatin up-regulated p21, a Cdk inhibitor, and decreased the levels and activity of Cdk2 protein in satellite cells. Hence, Myostatin negatively regulates the G1-to-S progression and thus maintains the quiescent status of satellite cells. Immunohistochemical analysis with CD34 antibodies indicates that there is an increased number of satellite cells per unit length of freshly isolated Mstn-/- muscle fibers. Determination of the proliferation rate suggests that this elevation in satellite cell number could be due to increased self-renewal and delayed expression of the differentiation gene (myogenin) in Mstn-/- adult myoblasts. Taken together, these results suggest that Myostatin is a potent negative regulator of satellite cell activation and thus signals the quiescence of satellite cells. |
Soccer Jersey Number Recognition Using Convolutional Neural Networks | In this paper, a deep convolutional neural network based approach to the problem of automatically recognizing jersey numbers from soccer videos is presented. Two different jersey number vector encoding schemes are presented and compared to each other. The first treats every number as a separate class, while the second one treats each digit as a class. Additionally, the semi-automatic process for the annotation of a jersey number dataset consisting of 8281 jersey numbers is described. The best recognition rate of 0.83 was achieved by the proposed approach with data augmentation and without using dropout, compared to 0.4 for a more traditional histogram of oriented gradients (HOG) and support vector machine (SVM) based approach. |
Fuzzy portfolio selection using fuzzy analytic hierarchy process | Financial problems have been the subject of much research. A widely used approach in recent work on these problems is the use of fuzzy set theory, where fuzzy terms are used to model uncertain environments. The purpose of this work is to combine the fuzzy analytic hierarchy process (AHP) with the portfolio selection problem. More specifically, the decision-making problem is to decide which stocks are to be chosen for investment and in what proportions they will be bought. To do this, we first dealt with two constrained fuzzy AHP methods given by Enea and Piazza [M. Enea, T. Piazza, Project selection by constrained fuzzy AHP, Fuzzy Optimization and Decision Making 3 (2004) 39–62]. We revised the first of these methods, addressing some of its fallacies, and called it the revised constrained fuzzy AHP method (RCFAHP). Then we applied these two methods, namely RCFAHP and the second method of Enea and Piazza (2004), to the problem of choosing stocks on the Istanbul Stock Exchange (ISE). The methodology used for the hierarchy construction is based on the paper of Saaty et al. [T.L. Saaty, P.C. Rogers, R. Bell, Portfolio selection through hierarchies, The Journal of Portfolio Management (1980) 16–21]. In this paper, we show that both of the models provide both ranking and weighting information, via fuzzy AHP, to the investors in this financial scenario. Finally, we discuss the relative advantages and disadvantages of these methods in comparison to existing methods in the literature. |
The relationships between organisational citizenship behaviour, job satisfaction and turnover intention. | AIM
This study aims to explore the relationships between organisational citizenship behaviour, job satisfaction and turnover intention.
BACKGROUND
Because of the changing health policy landscape, Taiwan's hospital administrators are facing major cost reduction challenges. Specifically, the high turnover rate of nurses represents a hindrance and a human resource cost. This study focuses on ways of reducing the employee turnover rate through enhanced organisational citizenship behaviour and job satisfaction.
DESIGN
A cross-sectional study.
METHODS
This study focuses on hospital nurses in Taiwan. Our research samples were obtained from one medical centre, three regional hospitals and seven district hospitals. Of the 300 questionnaires distributed, 237 were completed and returned. Pearson's correlation was used to test for relationships among the main variables. One-way analysis of variance and Scheffé's post hoc analysis were employed to test the influence of demographic data on the main variables.
RESULTS
The results reveal that the nurses' job satisfaction has a significantly positive correlation with organisational citizenship behaviour and a negative correlation with turnover intention.
CONCLUSIONS
This study has proven that the turnover intention of clinical nurses is related to their organisational citizenship behaviour and job satisfaction. Hospital administrators can reduce the turnover intention by meeting nurses' needs and by promoting their organisational citizenship behaviour.
RELEVANCE TO CLINICAL PRACTICE
Organisational citizenship behaviour involves behaviour that encourages staff to endeavour to voluntarily improve organisational performance without lobbying for compensation. Employees' job satisfaction includes satisfaction with the working environment or welfare programme in the context of human resource initiatives. Similarly, human resource protocols may serve as the starting point for promoting staff organisational citizenship behaviour. Administrators in clinical healthcare are encouraged to meet their employees' working needs through human resource practices. |
The dual-zone therapeutic concept of managing immediate implant placement and provisional restoration in anterior extraction sockets. | Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device. |
Evolution of Building Materials and Philosophy in Construction: A Process of Digitalization and Visualization of the Accumulated Knowledge | Long-term research on the construction materials and techniques of monuments and historic buildings has allowed the accumulation of significant knowledge that can be further disseminated. The masons of antiquity followed principles in designing and building established by their intuition and experience. The selection of raw materials, and the way these were refined for constructing foundations, walls and domes, is still remarkable. In this paper, a process of using digital technology tools to make knowledge acquisition attractive is presented. By developing a dedicated platform, all relevant scientific knowledge can be sorted, while a series of digital applications allows the diachronic principles of construction, the ancient technology and the achievements of the past to be explored in a friendly and interactive environment. In this way, it is expected that the values of building philosophy in the context of safety, sustainability and economy will be passed on to new generations. |
The Akamai network: a platform for high-performance internet applications | Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications. In this paper, we give an overview of the components and capabilities of this large-scale distributed computing platform, and offer some insight into its architecture, design principles, operation, and management. |
Effect of Diamel in patients with metabolic syndrome: a randomized double-blind placebo-controlled study. | BACKGROUND
The aim of the present study was to determine whether the administration of Diamel, marketed as a food supplement by Catalysis Laboratories (Madrid, Spain), could improve any of the components of metabolic syndrome (MS), as well as insulin resistance and sensitivity.
METHODS
In all, 100 patients with MS (19-70 years of age) who satisfied the World Health Organization criteria for MS were included in the study. Participants were randomly assigned to receive either oral Diamel or a placebo (while maintaining a diet appropriate to their weight and physical activity) at a dose of two capsules before each of the three main meals each day for 1 year. Anthropometric indices, blood pressure, fasting plasma glucose, lipid profile, insulin, creatinine, and uric acid (UA) were determined. Insulin resistance (IR) was assessed and three indirect indices were used to calculate insulin sensitivity (IS).
RESULTS
Compared with placebo, Diamel improved fasting insulin concentrations, IS, and IR and reduced UA concentrations from 6 months until the end of treatment (P < 0.05 for all). In addition, after 12 months treatment with Diamel, significant changes from baseline were seen for mean fasting insulin (P < 0.05), UA (P < 0.05), IR (P < 0.001), and IS (P < 0.001), whereas no such changes were seen in the placebo-treated group. Improvements were noted in body mass index, IR, and IS in both groups.
CONCLUSIONS
Long-term Diamel treatment, combined with lifestyle changes, was beneficial for IR and IS, and reduced serum UA levels in patients with MS. |
Pressure vs flow triggering during pressure support ventilation. | BACKGROUND
Adult mechanical ventilators have traditionally been pressure- or time-triggered. More recently, flow triggering has become available and some adult ventilators allow the choice between pressure or flow triggering. Prior studies have supported the superiority of flow triggering during continuous positive airway pressure, but few have compared pressure and flow triggering during pressure support ventilation (PSV). The purpose of this study was to compare pressure and flow triggering during PSV in adult mechanically ventilated patients.
METHODS
The study population consisted of 10 adult patients ventilated with a mechanical ventilator (Nellcor-Puritan-Bennett 7200ae) in the PSV mode. In random order, we compared pressure triggering of -0.5 cm H2O, pressure triggering of -1 cm H2O, flow triggering of 5/2 L/min, and flow triggering of 10/3 L/min. Pressure was measured for 5 min at the proximal endotracheal tube using a data acquisition rate of 100 Hz. From the airway pressure signal, trigger pressure (deltaP) was defined as the difference between positive end-expiratory pressure (PEEP) and the maximum negative deflection prior to onset of the triggered breath. Pressure-time product (PTP) was defined as the area produced by the pressure waveform below PEEP during onset of the triggered breath. Trigger time (deltaT) was defined as the time interval below PEEP during onset of the triggered breath.
RESULTS
A pressure trigger of -0.5 cm H2O was significantly more sensitive than the other trigger methods for deltaP, PTP, and deltaT (p<0.001). There was also a significant difference between patients for deltaP, deltaT, and PTP for each trigger method (p<0.001).
CONCLUSIONS
For this group of patients, flow triggering was not superior to pressure triggering at -0.5 cm H2O during PSV. |
The evolution of sustainable development | A central principle of corporate social responsibility is that firms should treat their stakeholders in an ethical fashion and that this behaviour should embrace environmental as well as economic and social considerations. The purpose of this chapter is to provide a theoretical exploration of the concept of sustainable development in its broadest sense and, in so doing, encourage researchers and practitioners to locate and progress with their corporate social responsibility work within a robust ‘sustainable development’ framework. There is an increasing appreciation that Earth’s ecological systems cannot indefinitely sustain present trajectories of human activity. The nature and scale of human activity are exceeding the carrying capacity of the Earth’s resource base, and the resultant waste and pollution streams are exceeding its assimilative capacity. The contribution of the built environment and construction activity to this unsustainable human activity is substantial, and Lenssen and Roodman argue that: |
Empirical Analysis of Limit Order Markets | We provide empirical restrictions of a model of optimal order submissions in a limit order market. A trader’s optimal order submission depends on the trader’s valuation for the asset and the trade-offs between order prices, execution probabilities and picking off risks. The optimal order submission strategy is a monotone function of a trader’s valuation for the asset. We test the monotonicity restriction in a sample of order submissions and their realized outcomes from the Stockholm Stock Exchange. We do not reject the monotonicity restriction for buy orders or sell orders considered separately, but reject the monotonicity restriction for buy and sell orders considered jointly. |
State Dynamics and Modeling of Tantalum Oxide Memristors | A key requirement for using memristors in circuits is a predictive model for device behavior that can be used in simulations and to guide designs. We analyze one of the most promising materials, tantalum oxide, for high density, low power, and high-speed memory. We perform an ensemble of measurements, including time dynamics across nine decades, to deduce the underlying state equations describing the switching in Pt/TaOx/Ta memristors. A predictive, compact model is found in good agreement with the measured data. The resulting model, compatible with SPICE, is then used to understand trends in terms of switching times and energy consumption, which in turn are important for choosing device operating points and handling interactions with other circuit elements. |
Sketch retrieval via local dense stroke features | Sketch retrieval aims at retrieving the most similar sketches from a large database based on one hand-drawn query. Successful retrieval hinges on an effective representation of sketch images and an efficient search method. In this paper, we propose a representation scheme which takes sketch strokes into account with local features, thereby facilitating efficient retrieval with codebooks. Stroke features are detected via densely sampled points on stroke lines with crucial corners as anchor points, from which local gradients are enhanced and described by a quantized histogram of gradients. A codebook is organized in a hierarchical vocabulary tree, which maintains structural information of visual words and enables efficient retrieval in sub-linear time. Experimental results on three data sets demonstrate the merits of the proposed algorithm for effective and efficient sketch retrieval. |
WhatsApp, Doc? A First Look at WhatsApp Public Group Data | In this dataset paper we describe our work on the collection and analysis of public WhatsApp group data. Our primary goal is to explore the feasibility of collecting and using WhatsApp data for social science research. We therefore present a generalisable data collection methodology, and a publicly available dataset for use by other researchers. To provide context, we perform statistical exploration to allow researchers to understand what public WhatsApp group data can be collected and how this data can be used. Given the widespread use of WhatsApp, our techniques to obtain public data and potential applications are important for the community. |
A Frame Tracking Model for Memory-Enhanced Dialogue Systems | Recently, resources and tasks were proposed to go beyond state tracking in dialogue systems. An example is the frame tracking task, which requires recording multiple frames, one for each user goal set during the dialogue. This allows a user, for instance, to compare items corresponding to different goals. This paper proposes a model which takes as input the list of frames created so far during the dialogue, the current user utterance as well as the dialogue acts, slot types, and slot values associated with this utterance. The model then outputs the frame being referenced by each triple of dialogue act, slot type, and slot value. We show that on the recently published Frames dataset, this model significantly outperforms a previously proposed rule-based baseline. In addition, we propose an extensive analysis of the frame tracking task by dividing it into sub-tasks and assessing their difficulty with respect to our model. |
Domain Separation Networks | The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model’s ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process. |
Ternary Complexes of Essential Metal Ions with L-Arginine and Succinic Acid in Cationic Surfactant Medium | Chemical speciation of ternary complexes MLX, ML2X and MLXH formed by Co(II), Ni(II), Cu(II) and Zn(II) with L-arginine as primary ligand (L) and succinic acid as secondary ligand (X) was studied in various concentrations (0.0-2.5% w/v) of cationic surfactant solution, maintaining an ionic strength of 0.16 mol dm-3 (NaNO3) at 303 K. Titrations were carried out in the presence of different relative concentrations (M:L:X = 1:2:2, 1:2:4, 1:4:2) of metal to L-arginine to succinic acid, with sodium hydroxide as titrant. The observed extra stability of the ternary complexes compared to their binary complexes was explained based on the electrostatic interactions of the side chains of the ligands, charge neutralization, the chelate effect, stacking interactions and hydrogen bonding. The trend in log β values with mole fraction of the surfactant and distribution diagrams are presented. Structures of plausible ternary complexes are also presented. |
Adversarial Feature Adaptation for Cross-lingual Relation Classification | Relation classification aims to classify the semantic relationship between two marked entities in a given sentence. It plays a vital role in a variety of natural language processing applications. Most existing methods focus on exploiting mono-lingual data, e.g., in English, due to the lack of annotated data in other languages. In this paper, we propose a feature adaptation approach for cross-lingual relation classification, which employs a generative adversarial network (GAN) to transfer feature representations from one language with rich annotated data to another language with scarce annotated data. Such a feature adaptation approach enables feature imitation via the competition between a relation classification network and a rival discriminator. Experimental results on the ACE 2005 multilingual training corpus, treating English as the source language and Chinese as the target, demonstrate the effectiveness of our proposed approach, yielding an improvement of 5.7% over the state-of-the-art. |
COMPARATIVE STUDY OF DATA WAREHOUSE DESIGN APPROACHES: A SURVEY | The process of developing a data warehouse starts with identifying and gathering requirements and designing the dimensional model, followed by testing and maintenance. The design phase is the most important activity in the successful building of a data warehouse. In this paper, we survey and evaluate the literature related to the various data warehouse design approaches on the basis of design criteria, and propose a generalized object-oriented conceptual design framework based on UML that meets all types of user needs. |
Fine-Grain Parallelism with Minimal Hardware Support: A Compiler-Controlled Threaded Abstract Machine | In this paper, we present a relatively primitive execution model for fine-grain parallelism, in which all synchronization, scheduling, and storage management is explicit and under compiler control. This is defined by a threaded abstract machine (TAM) with a multilevel scheduling hierarchy. Considerable temporal locality of logically related threads is demonstrated, providing an avenue for effective register use under quasi-dynamic scheduling. A prototype TAM instruction set, TL0, has been developed, along with a translator to a variety of existing sequential and parallel machines. Compilation of Id, an extended functional language requiring fine-grain synchronization, under this model yields performance approaching that of conventional languages on current uniprocessors. Measurements suggest that the net cost of synchronization on conventional multiprocessors can be reduced to within a small factor of that on machines with elaborate hardware support, such as proposed dataflow architectures. This brings into question whether tolerance to latency and inexpensive synchronization require specific hardware support or merely an appropriate compilation strategy and program representation. |
Hybrid matching pursuit for distributed through-wall radar imaging | In this paper, we consider the problem of through-wall radar imaging (TWRI) with an antenna array and develop a distributed greedy algorithm named hybrid matching pursuit (HMP). By dividing all the antenna phase centers into groups, the task of TWRI can be formulated as a problem of jointly sparse signal recovery based on distributed data subsets. In TWRI applications, existing distributed greedy algorithms such as the simultaneous orthogonal matching pursuit (SOMP) algorithm and the simultaneous subspace pursuit (SSP) algorithm suffer from high artifacts and low resolution, respectively. The proposed HMP algorithm aims to combine the strengths of SOMP and SSP (i.e., the orthogonality among selected basis signals and the backtracking strategy for basis-signal reevaluation) and, accordingly, to reconstruct high-resolution radar images with no visible artifacts. Experimental results on real through-wall radar data show that, compared to existing greedy methods, the proposed HMP algorithm significantly improves the radar image quality, at the cost of increased computational complexity. |
A Neural Network Integrated Decision Support System for Condition-Based Optimal Predictive Maintenance Policy | This paper develops an integrated neural-network-based decision support system for predictive maintenance of rotational equipment. The integrated system is platform-independent and is aimed at minimizing expected cost per unit operational time. The proposed system consists of three components. The first component develops a vibration-based degradation database through condition monitoring of rolling element bearings. In the second component, an artificial neural network model is developed to estimate the life percentile and failure times of roller bearings; this is then used to construct a marginal distribution. The third component consists of the construction of a cost matrix and a probabilistic replacement model that optimizes the expected cost per unit time. Furthermore, the integrated system includes a heuristic managerial decision rule for different scenarios of predictive and corrective cost compositions. Finally, the proposed system can be applied in various industries and to different kinds of equipment that possess well-defined degradation characteristics. |
Evidence for on-going inflation of the Socorro magma body, New Mexico, from Interferometric Synthetic Aperture Radar imaging | Interferometric synthetic aperture radar (InSAR) imaging of the central Rio Grande rift (New Mexico, USA) during 1992-1999 reveals a crustal uplift of several centimeters that spatially coincides with the seismologically determined outline of the Socorro magma body, one of the largest currently active magma intrusions in the Earth’s continental crust. Modeling of interferograms shows that the observed deformation may be due to elastic opening of a sill-like intrusion at a rate of a few millimeters per year. Despite an apparent constancy of the geodetically determined uplift rate, thermodynamic arguments suggest that it is unlikely that the Socorro magma body has formed via steady state elastic inflation. |
The maximum clique problem | In this paper we present a survey of results concerning algorithms, complexity, and applications of the maximum clique problem. We discuss enumerative and exact algorithms, heuristics, and a variety of other proposed methods. An up-to-date bibliography on the maximum clique and related problems is also provided. |
A Data Mining Framework for Prevention and Detection of Financial Statement Fraud | Financial statement fraud has reached epidemic proportions globally. Recently, financial statement fraud has dominated corporate news, causing debacles at a number of companies worldwide. In the wake of the failure of many organisations, there is a dire need for prevention and detection of financial statement fraud. Prevention of financial statement fraud is a measure to stop its occurrence initially, whereas detection means the identification of such fraud as soon as possible. Fraud detection is required only if prevention has failed; therefore, a continuous fraud detection mechanism should be in place, because management may be unaware of the failure of the prevention mechanism. In this paper we propose a data mining framework for prevention and detection of financial statement fraud. |
Training Hidden Markov Models with Multiple Observations - A Combinatorial Method | Hidden Markov models (HMMs) are stochastic models capable of statistical learning and classification. They have been applied in speech recognition and handwriting recognition because of their great adaptability and versatility in handling sequential signals. On the other hand, as these models have a complex structure and also because the involved data sets usually contain uncertainty, it is difficult to analyze the multiple observation training problem without certain assumptions. For many years researchers have used Levinson's training equations in speech and handwriting applications, simply assuming that all observations are independent of each other. This paper presents a formal treatment of HMM multiple observation training without imposing the above assumption. In this treatment, the multiple observation probability is expressed as a combination of individual observation probabilities without losing generality. This combinatorial method gives one more freedom in making different dependence-independence assumptions. By generalizing Baum's auxiliary function into this framework and building up an associated objective function using the Lagrange multiplier method, it is proven that the derived training equations guarantee the maximization of the objective function. Furthermore, we show that Levinson's training equations can be easily derived as a special case in this treatment. Index Terms - Hidden Markov model, forward-backward procedure, Baum-Welch algorithm, multiple observation training. |
Interpenetrating network hydrogel membranes of sodium alginate and poly(vinyl alcohol) for controlled release of prazosin hydrochloride through skin. | Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application. |
A near-optimal approximation algorithm for Asymmetric TSP on embedded graphs | We present a near-optimal polynomial-time approximation algorithm for the asymmetric traveling salesman problem for graphs of bounded orientable or non-orientable genus. Given any algorithm that achieves an approximation ratio of f(n) on arbitrary n-vertex graphs as a black box, our algorithm achieves an approximation factor of O(f(g)) on graphs with genus g. In particular, the O(log n / log log n)-approximation algorithm for general graphs by Asadpour et al. [SODA 2010] immediately implies an O(log g / log log g)-approximation algorithm for genus-g graphs. Moreover, recent results on approximating the genus of graphs imply that our O(log g / log log g)-approximation algorithm can be applied to bounded-degree graphs even if no genus-g embedding of the graph is given. Our result improves and generalizes the O(√g log g)-approximation algorithm of Oveis Gharan and Saberi [SODA 2011], which applies only to graphs with orientable genus g and requires a genus-g embedding as part of the input, even for bounded-degree graphs. Finally, our techniques yield an O(1)-approximation algorithm for ATSP on graphs of genus g with running time 2^O(g) · n^O(1). |
The Dynamics of Natural Philosophy in the Aristotelian Tradition (and Beyond): Doctrinal and Institutional Perspectives | Grant's lecture is one of five inaugural lectures delivered on August 16, 1999 introducing the Conference on "The Dynamics of Natural Philosophy in the Aristotelian Tradition (and Beyond): Doctrinal and Institutional Perspectives" held in Nijmegen, The Netherlands, August 16-20. |
Antinociceptive and Anti-Inflammatory Activities of Leaf Methanol Extract of Cotyledon orbiculata L. (Crassulaceae) | Leaf methanol extract of C. orbiculata L. was investigated for antinociceptive and anti-inflammatory activities using the acetic acid writhing and hot-plate tests in mice and the carrageenan-induced oedema test in rats. C. orbiculata (100-400 mg/kg, i.p.) significantly inhibited acetic acid-induced writhing and significantly delayed the reaction time of mice to hot-plate-induced thermal stimulation. Paracetamol (300 mg/kg, i.p.) significantly inhibited the acetic acid-induced writhing in mice. Morphine (10 mg/kg, i.p.) significantly delayed the reaction time of mice to the thermal stimulation produced with the hot plate. Leaf methanol extract of C. orbiculata (50-400 mg/kg, i.p.) significantly attenuated the carrageenan-induced rat paw oedema. Indomethacin (10 mg/kg, p.o.) also significantly attenuated the carrageenan-induced rat paw oedema. The LD50 value obtained for the plant species was greater than 4000 mg/kg (p.o.). The data obtained indicate that C. orbiculata has antinociceptive and anti-inflammatory activities, justifying the folklore use of the plant species by traditional medicine practitioners in the treatment of painful and inflammatory conditions. The relatively high LD50 obtained shows that C. orbiculata may be safe or nontoxic in mice. |
A Patent Document Retrieval System Addressing Both Semantic And Syntactic Properties | Combining the principle of the Differential Latent Semantic Index (DLSI) (Chen et al., 2001) and the Template Matching Technique (Tokuda and Chen, 2001), we propose a new user-query-based patent document retrieval system built on NLP technology. The DLSI method first narrows down the search space of a sought-after patent document by content search, and the template matching technique then pins down the documents through a word-based template matching scheme by syntactic search. Compared with synonymous search schemes based on thesaurus dictionaries, the new method results in improved overall retrieval efficiency for patent documents. |
Discussion of sexual rigidity in women. | Sexual rigidity in women is recorded as "cold yin" in Chinese medicine. Statistical studies show that 55-80% of middle-aged Chinese women suffer from this illness. The writer believes this number is credible based on his nearly twenty years of clinical practice in Chinese medicine. This article puts forward the writer's personal views on this topic and invites comments from colleagues. The writer uses different approaches in accordance with the specific situations of individual patients. With the generalized thinking of Chinese medicine as a guiding force, the writer, while treating the physical symptoms (including primary complaints), targets the patient's sexual functioning, which often leads to a synergistic result. At the same time as he makes adjustments with regard to the patient's sexual functioning, the writer often resorts to a three-pronged approach consisting of Chinese medicine, psychological counseling and guidance regarding daily life. (excerpt) |
Do We Need a ‘Political Science of Religion’? | Religious issues are intrinsically political but have been largely excluded from the mainstream concerns of political science. This article considers some of the reasons why this has been the case, and suggests ways in which the imbalance might be addressed. Although the idea of developing a distinctive ‘political science of religion’ is not without its advantages, this approach is rejected in favour of one that is broader, interdisciplinary and more holistic. |
An interferon-free antiviral regimen for HCV after liver transplantation. | BACKGROUND
Hepatitis C virus (HCV) infection is the leading indication for liver transplantation worldwide, and interferon-containing regimens are associated with low response rates owing to treatment-limiting toxic effects in immunosuppressed liver-transplant recipients. We evaluated the interferon-free regimen of the NS5A inhibitor ombitasvir coformulated with the ritonavir-boosted protease inhibitor ABT-450 (ABT-450/r), the nonnucleoside NS5B polymerase inhibitor dasabuvir, and ribavirin in liver-transplant recipients with recurrent HCV genotype 1 infection.
METHODS
We enrolled 34 liver-transplant recipients with no fibrosis or mild fibrosis, who received ombitasvir-ABT-450/r (at a once-daily dose of 25 mg of ombitasvir, 150 mg of ABT-450, and 100 mg of ritonavir), dasabuvir (250 mg twice daily), and ribavirin for 24 weeks. Selection of the initial ribavirin dose and subsequent dose modifications for anemia were at the investigator's discretion. The primary efficacy end point was a sustained virologic response 12 weeks after the end of treatment.
RESULTS
Of the 34 study participants, 33 had a sustained virologic response at post-treatment weeks 12 and 24, for a rate of 97% (95% confidence interval, 85 to 100). The most common adverse events were fatigue, headache, and cough. Five patients (15%) required erythropoietin; no patient required blood transfusion. One patient discontinued the study drugs owing to adverse events after week 18 but had a sustained virologic response. Blood levels of calcineurin inhibitors were monitored, and dosages were modified to maintain therapeutic levels; no episode of graft rejection was observed during the study.
CONCLUSIONS
Treatment with the multitargeted regimen of ombitasvir-ABT-450/r and dasabuvir with ribavirin was associated with a low rate of serious adverse events and a high rate of sustained virologic response among liver-transplant recipients with recurrent HCV genotype 1 infection, a historically difficult-to-treat population. (Funded by AbbVie; CORAL-I ClinicalTrials.gov number, NCT01782495.). |
Towards SMP Challenge: Stacking of Diverse Models for Social Image Popularity Prediction | Popularity prediction on social media has attracted extensive attention nowadays due to its widespread applications, such as online marketing and economic trend analysis. In this paper, we describe the solution of our team CASIA-NLPR-MMC for the Social Media Prediction (SMP) challenge. This challenge is designed to predict the popularity of social media posts. We present a stacking framework that combines a diverse set of models to predict the popularity of images on Flickr using user-centered, image content and image context features. Several individual models are first employed to score the popularity of an image, and then a stacking model based on Support Vector Regression (SVR) is used to train a meta-model over the individual models trained beforehand. The Spearman's Rho of this stacking model is 0.88 and the mean absolute error is about 0.75 on our test set. On the official final-released test set, the Spearman's Rho is 0.7927 and the mean absolute error is about 1.1783. The results on the provided dataset demonstrate the effectiveness of our proposed approach for image popularity prediction. |
Networked Pixels: Strategies for Building Visual and Auditory Images with Distributed Independent Devices | This paper describes the development of the hardware and software for Bloom, a light installation installed at Kew Gardens, London in December of 2016. The system is made up of a set of nearly 1000 distributed pixel devices each with LEDs, GPS sensor, and sound hardware, networked together with WiFi to form a display system. Media design for this system required consideration of the distributed nature of the devices. We outline the software and hardware designed for this system, and describe two approaches to the software and media design, one whereby we employ the distributed devices themselves for computation purposes (the approach we ultimately selected), and another whereby the devices are controlled from a central server that is performing most of the computation necessary. We then review these approaches and outline possibilities for future research. |
Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information | Sentence matching is widely used in various natural language tasks such as natural language inference, paraphrase identification, and question answering. These tasks require understanding the logical and semantic relationship between two sentences, which remains challenging. Although attention mechanisms are useful for capturing the semantic relationship and properly aligning the elements of two sentences, previous attention-based methods simply use a summation operation, which does not sufficiently retain the original features. Inspired by DenseNet, a densely connected convolutional network, we propose a densely-connected co-attentive recurrent neural network, each layer of which uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. This enables preserving the original and the co-attentive feature information from the bottommost word embedding layer to the uppermost recurrent layer. To alleviate the problem of an ever-increasing size of feature vectors due to dense concatenation operations, we also propose to use an autoencoder after dense concatenation. We evaluate our proposed architecture on highly competitive benchmark datasets related to sentence matching. Experimental results show that our architecture, which retains recurrent and attentive features, achieves state-of-the-art performance on most of the tasks. |
Efficient Multi-View Reconstruction of Large-Scale Scenes using Interest Points, Delaunay Triangulation and Graph Cuts | We present a novel method to reconstruct the 3D shape of a scene from several calibrated images. Our motivation is that most existing multi-view stereovision approaches require some knowledge of the scene extent and often even of its approximate geometry (e.g. visual hull). This makes these approaches mainly suited to compact objects admitting a tight enclosing box, imaged on a simple or a known background. In contrast, our approach focuses on large-scale cluttered scenes under uncontrolled imaging conditions. It first generates a quasi-dense 3D point cloud of the scene by matching keypoints across images in a lenient manner, thus possibly retaining many false matches. Then it builds an adaptive tetrahedral decomposition of space by computing the 3D Delaunay triangulation of the 3D point set. Finally, it reconstructs the scene by labeling Delaunay tetrahedra as empty or occupied, thus generating a triangular mesh of the scene. A globally optimal label assignment, as regards photo-consistency of the output mesh and compatibility with the visibility of keypoints in input images, is efficiently found as a minimum cut solution in a graph. |
A New Type of Singularity Theorem | A new type of singularity theorem, based on spatial averages of physical quantities, is presented and discussed. Alternatively, the results inform us of when a spacetime can be singularity-free. This theorem provides a decisive observational difference between singular and non-singular, globally hyperbolic, open cosmological models. |
ReverX: Reverse Engineering of Protocols | Communication protocols determine how network components interact with each other. Therefore, the ability to derive a specification of a protocol can be useful in various contexts, such as to support deeper black-box testing or effective defense mechanisms. Unfortunately, it is often hard to obtain the specification because systems implement closed (i.e., undocumented) protocols, or because a time-consuming translation has to be performed, from the textual description of the protocol to a format readable by the tools. To address these issues, we propose a new methodology to automatically infer a specification of a protocol from network traces, which generates automata for the protocol language and state machine. Since our solution only resorts to interaction samples of the protocol, it is well-suited to uncover the message formats and protocol states of closed protocols and also to automate most of the process of specifying open protocols. The approach was implemented in ReverX and experimentally evaluated with publicly available FTP traces. Our results show that the inferred specification is a good approximation of the reference specification, exhibiting a high level of precision and recall. |
Compositional Obverter Communication Learning From Raw Visual Input | One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand-engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment. |
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks | This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses. |
Towards time-aware link prediction in evolving social networks | Prediction of links - both new and recurring - in a social network representing interactions between individuals is an important problem. In recent years, there has been significant interest in methods that use only the graph structure to make predictions. However, most of them consider a single snapshot of the network as the input, neglecting an important aspect of these social networks, viz., their evolution over time.
In this work, we investigate the value of incorporating the history information available on the interactions (or links) of the current social network state. Our results unequivocally show that time-stamps of past interactions significantly improve the prediction accuracy of new and recurrent links over rather sophisticated methods proposed recently. Furthermore, we introduce a novel testing method which reflects the application of link prediction better than previous approaches. |
New Comparative Study Between DES, 3DES and AES within Nine Factors | With the rapid development of various multimedia technologies, more and more multimedia data are generated and transmitted, including in the medical domain, and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Digital documents are also easy to copy and distribute, and are therefore exposed to many threats. This is a major security and privacy issue: it becomes necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information, which may include sensitive content that should not be accessed by, or can only be partially exposed to, general users. Another problem with digital documents and video is that undetectable modifications can be made with very simple and widely available equipment, which puts digital material intended for evidential purposes under question. Cryptography is one of the techniques used to protect important information. In this paper, three multimedia encryption algorithms proposed in the literature are described, and a new comparative study between DES, 3DES and AES within nine factors is presented, addressing efficiency, flexibility and security, which remain a challenge for researchers. |
Framing tangible interaction frameworks | Tangible interaction is a growing area of human–computer interaction research that has become popular in recent years. Yet designers and researchers are still trying to comprehend and clarify its nature, characteristics, and implications. One approach has been to create frameworks that help us look back at and categorize past tangible interaction systems, and look forward at the possibilities and opportunities for developing new systems. To date, a number of different frameworks have been proposed that each provide different perspectives on the tangible interaction design space, and which can guide designers of new systems in different ways. In this paper, we map the space of tangible interaction frameworks. We order existing frameworks by their general type, and by the facets of tangible interaction design they address. One of our main conclusions is that most frameworks focus predominantly on the conceptual design of tangible systems, whereas fewer frameworks abstract the knowledge gained from previous systems, and hardly any framework provides concrete steps or tools for building new tangible systems. In addition, the facets most represented in existing frameworks are those that address the interactions with or the physicality of the designed systems. Other facets, such as domain-specific technology and experience, are rare. This focus on design, interaction, and physicality is interesting, as the origins of the field are rooted in engineering methods and have only recently started to incorporate more design-inspired approaches. As such, we expected more frameworks to focus on technologies and to provide concrete building suggestions for new tangible interaction systems. |
Actions and affordances in syntactic ambiguity resolution. | In 2 experiments, eye movements were monitored as participants followed instructions containing temporary syntactic ambiguities (e.g., "Pour the egg in the bowl over the flour"). The authors varied the affordances of task-relevant objects with respect to the action required by the instruction (e.g., whether 1 or both eggs in the visual workspace were in liquid form, allowing them to be poured). The number of candidate objects that could afford the action was found to determine whether listeners initially misinterpreted the ambiguous phrase ("in the bowl") as specifying a location. The findings indicate that syntactic decisions are guided by the listener's situation-specific evaluation of how to achieve the behavioral goal of an utterance. |
Wearable Medical Sensor-Based System Design: A Survey | Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals. |
WDM-PON architectures and technologies | WDM-PON can deliver huge bandwidth to customer premises with protocol transparency. This feature enables the application of WDM-PON beyond last-mile access. Colorless transceivers can simplify the management of the system considerably. |
Making sense of implementation theories, models and frameworks | BACKGROUND
Implementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers.
DISCUSSION
Theoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks). This article proposes five categories of theoretical approaches to achieve these three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited "how-to" support for carrying out implementation endeavours, since the determinants are usually too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation.
An entropy-based uncertainty measure of process models |
LV reverse remodeling imparted by aortic valve replacement for severe aortic stenosis; is it durable? A cardiovascular MRI study sponsored by the American Heart Association | BACKGROUND
In patients with severe aortic stenosis (AS), long-term data tracking surgically induced effects of afterload reduction on reverse LV remodeling are not available. Echocardiographic data is available short term, but in limited fashion beyond one year. Cardiovascular MRI (CMR) offers the ability to serially track changes in LV metrics with small numbers due to its inherent high spatial resolution and low variability.
HYPOTHESIS
We hypothesize that changes in LV structure and function following aortic valve replacement (AVR) are detectable by CMR and once triggered by AVR, continue for an extended period.
METHODS
Twenty-four patients, of which ten (67 ± 12 years, 6 female) with severe but compensated AS, underwent CMR pre-AVR and at 6 months, 1 year, and up to 4 years post-AVR. 3D LV mass index, volumetrics, LV geometry, and EF were measured.
RESULTS
All patients survived AVR and underwent 4 serial CMRs. LVMI markedly decreased by 6 months (157 ± 42 to 134 ± 32 g/m2, p < 0.005) and continued trending downwards through 4 years (127 ± 32 g/m2). Similarly, EF increased from pre- to post-AVR (55 ± 22 to 65 ± 11%, p < 0.05) and continued trending upwards, remaining stable through years 1-4 (66 ± 11 vs. 65 ± 9%). LVEDVI, initially high pre-AVR, decreased post-AVR (83 ± 30 to 68 ± 11 ml/m2, p < 0.05), trending even lower by year 4 (66 ± 10 ml/m2). LV stroke volume increased rapidly from pre- to post-AVR (40 ± 11 to 44 ± 7 ml, p < 0.05), continuing to increase non-significantly through 4 years (49 ± 14 ml), with these LV metrics paralleling improvements in NYHA class. However, LV mass/volume, a 3D measure of LV geometry, remained unchanged over 4 years.
CONCLUSION
After the initial beneficial effects imparted by AVR in severe AS patients, there are, as expected, marked improvements in LV reverse remodeling. Via CMR, surgically induced benefits to LV structure and function are durable and, unexpectedly, show continued, albeit markedly incomplete, improvement through 4 years post-AVR, concordant with sustained improved clinical status. This supports down-regulation of both mRNA and MMP activity acutely, with robust suppression long term.
Current Advances in Detection and Treatment of Babesiosis | Babesiosis is a disease with a world-wide distribution affecting many species of mammals, principally cattle and man. The major impact occurs in the cattle industry, where bovine babesiosis has had a huge economic effect due to losses in meat and beef production from infected animals, and death. Nowadays, to those costs must be added the high costs of tick control, disease detection, prevention and treatment. In almost a century and a quarter since the first report of the disease, the truth is that there is no safe and efficient vaccine available, chemotherapeutic choices are limited, and there are few low-cost, reliable and fast detection methods. Detection and treatment are therefore important tools for controlling babesiosis. Microscopy detection methods are still the cheapest and fastest methods used to identify Babesia parasites, although their sensitivity and specificity are limited. Newer immunological methods are being developed, and they offer faster, more sensitive and more specific options than conventional methods, although direct immunological diagnosis of parasite antigens in host tissues is still missing. Detection methods based on nucleic acid identification and amplification are the most sensitive and reliable techniques available today; importantly, most of these methodologies were developed before the genomics and bioinformatics era, which leaves ample room for optimization. For years, babesiosis treatment has been based on the use of very few drugs, like imidocarb or diminazene aceturate. Recently, several pharmacological compounds were developed and evaluated, offering new options to control the disease. With the complete sequence of the Babesia bovis genome and the B. bigemina genome project in progress, the post-genomic era brings new light to the development of diagnostic methods and new chemotherapy targets. In this review, we present the current advances in detection and treatment of babesiosis in cattle and other animals, with additional reference to several apicomplexan parasites.
Dacron or ePTFE for femoro-popliteal above-knee bypass grafting: short- and long-term results of a multicentre randomised trial. | OBJECTIVES
To compare expanded polytetrafluoroethylene (ePTFE) prosthesis and collagen-impregnated knitted polyester (Dacron) for above-knee (AK) femoro-popliteal bypass grafts.
DESIGN
A prospective multicentre randomised clinical trial.
PATIENTS AND METHODS
Between 1992 and 1996, 228 AK femoro-popliteal bypass grafts were randomly allocated to either an ePTFE (n=114) or a Dacron (n=114) vascular graft (6 mm in diameter). Patients were eligible for inclusion if presenting with disabling claudication, rest pain or tissue loss. Follow-up included clinical examination and duplex ultrasonography at all scheduled intervals. All patients were treated with warfarin. The main end-point of this study was primary patency of the bypass graft at 2, 5 and 10 years after implantation. Secondary end-points were mortality, primary assisted patency and secondary patency. Cumulative patency rates were calculated with life-table analysis and compared with the log-rank test.
RESULTS
After 5 years, the primary, primary assisted and secondary patency rates were 36% (confidence interval (CI): 26-46%), 46% (CI: 36-56%) and 51% (CI: 41-61%) for ePTFE and 52% (CI: 42-62%) (p=0.04), 66% (CI: 56-76%) (p=0.01) and 70% (CI: 60-80%) (p=0.01) for Dacron, respectively. After ten years these rates were, respectively, 28% (CI: 18-38%), 31% (CI: 19-43%) and 35% (CI: 23-47%) for ePTFE and 28% (CI: 18-38%), 49% (CI: 37-61%) and 49% (CI: 37-61%) for Dacron.
CONCLUSION
During prolonged follow-up (10 years), Dacron femoro-popliteal bypass grafts have superior patency compared to those of ePTFE grafts. Dacron is the graft material of choice if the saphenous vein is not available. |
Using noise signature to optimize spike-sorting and to assess neuronal classification quality | We have developed a simple and expandable procedure for classification and validation of extracellular data based on a probabilistic model of data generation. This approach relies on an empirical characterization of the recording noise. We first use this noise characterization to optimize the clustering of recorded events into putative neurons. As a second step, we use the noise model again to assess the quality of each cluster by comparing the within-cluster variability to that of the noise. This second step can be performed independently of the clustering algorithm used, and it provides the user with quantitative as well as visual tests of the quality of the classification. |
Extracting Top-K Insights from Multi-dimensional Data | OLAP tools have been extensively used by enterprises to make better and faster decisions. Nevertheless, they require users to specify group-by attributes and know precisely what they are looking for. This paper takes a first step towards automatically extracting top-k insights from multi-dimensional data. This is useful not only for non-expert users, but also reduces the manual effort of data analysts. In particular, we propose the concept of an insight, which captures an interesting observation derived from aggregation results computed in multiple steps (e.g., rank by a dimension, compute the percentage of a measure by a dimension). An example insight is: "Brand B's rank (across brands) falls along the year, in terms of the increase in sales". Our problem is to compute the top-k insights by a score function. This poses challenges regarding (i) the effectiveness of the result and (ii) the efficiency of computation. We propose a meaningful scoring function for insights to address (i). Then, we contribute a computation framework for top-k insights, together with a suite of optimization techniques (i.e., pruning, ordering, specialized cube, and computation sharing) to address (ii). Our experimental study on both real data and synthetic data verifies the effectiveness and efficiency of our proposed solution.
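To make the two-step aggregations behind such insights concrete, here is a minimal, hypothetical Python sketch of the running example above ("Brand B's rank falls along the year, in terms of the increase in sales"). The toy data, column names, and the crude slope-based score are illustrative assumptions, not the paper's actual scoring function or framework.

```python
import numpy as np
import pandas as pd

# Toy sales cube (brand x year); the numbers are made up for illustration.
df = pd.DataFrame({
    "brand": ["A", "B", "C"] * 4,
    "year":  [2011]*3 + [2012]*3 + [2013]*3 + [2014]*3,
    "sales": [10, 9, 8,  12, 11, 9,  15, 11, 9,  19, 11, 9],
})

# Step 1: first aggregation, the yearly increase in sales per brand.
df = df.sort_values(["brand", "year"])
df["increase"] = df.groupby("brand")["sales"].diff()

# Step 2: second aggregation, rank brands within each year by increase.
df["rank"] = df.groupby("year")["increase"].rank(ascending=False)

# Score the candidate insight "brand B's rank falls over the years"
# crudely as the slope of its rank over time (positive = falling rank).
b = df[(df["brand"] == "B") & df["rank"].notna()]
slope = np.polyfit(b["year"], b["rank"], 1)[0]
print(f"rank-trend score for brand B: {slope:+.2f}")
```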
Turing Computability with Neural Nets | This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 10^5 synchronously evolving processors, interconnected linearly. High-order connections are not required.
Empirical likelihood ratio test for the change-point problem | A nonparametric method based on the empirical likelihood is proposed to detect the change-point from a sequence of independent random variables. The empirical likelihood ratio test statistic is proved to have the same limit null distribution as that with classical parametric likelihood. Under some mild conditions, the maximum empirical likelihood estimator of change-point is also shown to be consistent. The simulation results demonstrate the sensitivity and robustness of the proposed approach. A famous real example is studied to illustrate its effectiveness.
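As a point of reference for the statistic discussed above, the following Python sketch scans for a change point with the classical parametric (Gaussian) log-likelihood ratio, whose limiting null distribution the empirical likelihood statistic is shown to share. The simulated data and the restricted scan range are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def lr_changepoint(x, margin=5):
    """Scan candidate split points and return the one maximizing the
    Gaussian log-likelihood-ratio statistic for a distribution change.
    The margin keeps tiny segments (with degenerate variance estimates)
    out of the scan; this is an illustrative choice."""
    n = len(x)
    var0 = x.var()
    best_k, best_stat = None, -np.inf
    for k in range(margin, n - margin):
        var1, var2 = x[:k].var(), x[k:].var()
        stat = n * np.log(var0) - k * np.log(var1) - (n - k) * np.log(var2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.5, 1, 100)])
print(lr_changepoint(x))   # the detected split should fall near index 100
```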
A Framework for Distributed Semantic Annotation of Musical Score: "Take It to the Bridge!" | Music notation expresses performance instructions in a way commonly understood by musicians, but printed paper parts are limited to encodings of static, a priori knowledge. In this paper we present a platform for multi-way communication between collaborating musicians through the dynamic modification of digital parts: the Music Encoding and Linked Data (MELD) framework for distributed real-time annotation of digital music scores. MELD users and software agents create semantic annotations of music concepts and relationships, which are associated with musical structure specified by the Music Encoding Initiative schema (MEI). Annotations are expressed in RDF, allowing alternative music vocabularies (e.g., popular vs. classical music structures) to be applied. The same underlying framework retrieves, distributes, and processes information that addresses semantically distinguishable music elements. Further knowledge is incorporated from external sources through the use of Linked Data. The RDF is also used to match annotation types and contexts to rendering actions which display the annotations upon the digital score. Here, we present a MELD implementation and deployment which augments the digital music scores used by musicians in a group performance, collaboratively changing the sequence within and between pieces in a set list. |
Large Margin Metric Learning for Multi-Label Prediction | Canonical correlation analysis (CCA) and maximum margin output coding (MMOC) methods have shown promising results for multi-label prediction, where each instance is associated with multiple labels. However, these methods require an expensive decoding procedure to recover the multiple labels of each testing instance. The testing complexity becomes unacceptable when there are many labels. To avoid decoding completely, we present a novel large margin metric learning paradigm for multi-label prediction. In particular, the proposed method learns a distance metric to discover label dependency such that instances with very different multiple labels will be moved far away. To handle many labels, we present an accelerated proximal gradient procedure to speed up the learning process. Comprehensive experiments demonstrate that our proposed method is significantly faster than CCA and MMOC in terms of both training and testing complexities. Moreover, our method achieves superior prediction performance compared with state-of-the-art methods. |
SIMPLE SYNTHESIS OF GRAPHENE NANOSHEETS USING A MICROWAVE-ASSISTED METHOD | In this research, few-layer graphene (FLG) sheets were successfully fabricated using a microwave-assisted method. First, graphite intercalation compounds were prepared from potassium-tetrahydrofuran (K-THF) expanded graphite by a solvothermal process, and then exfoliation was assisted by microwave radiation and sonication. The resulting nano-graphene has an average thickness of ~2 nm with a lateral size of 3-7 μm. Raman analysis showed that the as-synthesized graphene nanosheets contain only a small number of structural defects and impurities. X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy spectra revealed that the nano-graphene exhibits several peaks similar to those of graphite, indicating the effectiveness of the solvothermal reduction method in lowering the oxygen level. The electrical conductivity of the as-synthesized nano-graphene was measured to be 170 S/m. In contrast to the Hummers method, this method is simple, inexpensive, and does not generate toxic gas. It could enable the synthesis of high-quality nano-graphene on a large scale.
ConRec: A Software Framework for Context-Aware Recommendation Based on Dynamic and Personalized Context | Contextual information has proven helpful to recommender systems, and context-aware recommender systems (CARS) have been applied in various domains. To improve the accuracy of context-aware recommendation and make recommender application development easier, we develop a lightweight software framework named ConRec, which introduces a dynamic context-oriented approach to extend traditional reduction-based recommenders. This framework takes the dynamic nature of context into full consideration from different aspects to obtain better recommendation results. The dynamism of context exists in the process of context modeling, the computation of context weights and the handling of newly emergent context. In ConRec, context is dynamically modeled by automatically clustering similar context values into one set, rather than being statically predefined by domain experts. Users' preferences for different types of context are explicitly measured through a context weighting function based on a real dataset. Moreover, ConRec supports incrementally adding new types of context to the recommendation process, which greatly reduces the cost of re-building the whole recommender model. Based on our improved reduction-based algorithm, ConRec is built as a highly scalable and reusable software framework for developing context-aware recommender applications. Finally, we evaluate our proposed approach on public datasets and obtain more accurate recommendations than traditional methods.
Randomised controlled trial of the effectiveness of a respiratory health worker in reducing impairment, disability, and handicap due to chronic airflow limitation. | A randomised controlled trial was undertaken to determine whether a respiratory health worker was effective in reducing the respiratory impairment, disability, and handicap experienced by patients with chronic airflow limitation attending a respiratory outpatient department. The 152 adults (aged 30-75 years) who participated had a prebronchodilator forced expiratory volume in one second (FEV1) below 60% predicted and no other disease. They were randomised to receive the care of a respiratory health worker or the normal services provided by the outpatient department. The respiratory health worker provided health education and symptom and treatment monitoring in liaison with primary care services. After one year there was little difference between the two groups in spirometric values (FEV1 and forced vital capacity before and after salbutamol 200 micrograms), disability (six minute walking distance and paced step test), and handicap (sickness impact profile, hospital anxiety and depression scale). Patients not looked after by the respiratory health worker were more likely to die (relative risk 2.9; 95% confidence limits 0.8, 10.2); when age and FEV1 were controlled for, this risk increased to 5.5 (95% confidence limits 1.2, 24.5). Patients looked after by the respiratory health worker attended their general practitioner more frequently and were prescribed a greater range of drugs. This is the third study to have found limited measurable benefit in terms of morbidity from the intervention of a respiratory health worker. This may be due to the ability of such workers to keep frail patients alive.
Effects of a long-term aerobic exercise intervention on institutionalized patients with dementia. | OBJECTIVES
Long-term interventions aimed at analyzing the impact of physical exercise on important health markers in institutionalized individuals with dementia are relatively scarce. This longitudinal study intends to identify the effects of a physical exercise program on cognitive decline, memory, depression, functional dependence and neuropsychiatric disturbances in institutionalized individuals with dementia.
DESIGN
Randomized controlled trial.
METHODS
Homecare residents with dementia were assigned to an exercise group (EG) or to a control group (CG). Participants in the EG cycled for at least 15 min daily for 15 months, while those in the CG performed alternative sedentary recreational activities. The Mini-Mental State Examination (MEC), the Timed "Up & Go" Test, the Neuropsychiatric Inventory, the Katz Index, the Cornell Scale for Depression in Dementia and the Fuld Object Memory Evaluation were administered before and after the intervention.
RESULTS
Sixty-three individuals in the CG and 51 individuals in the EG completed the intervention. A statistically significant decline in cognitive function was observed in individuals included in the CG (p=0.015), while a slight improvement was observed in those included in the EG. Significant improvement was observed in the neuropsychiatric symptoms (p=0.020), memory function (p=0.028) and functional mobility (p=0.043) among those who exercised. Exercise seemed to have a greater effect in those suffering from severe cognitive impairment.
CONCLUSIONS
This study provides evidence that aerobic physical exercise has a significant impact on improving cognitive functioning, behavior, and functional mobility in institutionalized individuals with dementia. |
Plyometric Training Favors Optimizing Muscle–Tendon Behavior during Depth Jumping | The purpose of the present study was to elucidate how plyometric training improves stretch-shortening cycle (SSC) exercise performance in terms of muscle strength, tendon stiffness, and muscle-tendon behavior during SSC exercise. Eleven men were assigned to a training group and ten to a control group. Subjects in the training group performed depth jumps (DJ) using only the ankle joint for 12 weeks. Before and after the period, we observed reaction forces at foot, muscle-tendon behavior of the gastrocnemius, and electromyographic activities of the triceps surae and tibialis anterior during DJ. Maximal static plantar flexion strength and Achilles tendon stiffness were also determined. In the training group, maximal strength remained unchanged while tendon stiffness increased. The force impulse of DJ increased, with a shorter contact time and larger reaction force over the latter half of braking and initial half of propulsion phases. In the latter half of braking phase, the average electromyographic activity (mEMG) increased in the triceps surae and decreased in tibialis anterior, while fascicle behavior of the gastrocnemius remained unchanged. In the initial half of propulsion, mEMG of triceps surae and shortening velocity of gastrocnemius fascicle decreased, while shortening velocity of the tendon increased. These results suggest that the following mechanisms play an important role in improving SSC exercise performance through plyometric training: (1) optimization of muscle-tendon behavior of the agonists, associated with alteration in the neuromuscular activity during SSC exercise and increase in tendon stiffness and (2) decrease in the neuromuscular activity of antagonists during a counter movement. |
Resurrecting the (C)CAPM: A Cross-Sectional Test When Risk Premia Are Time-Varying | This paper explores the ability of theoretically based asset pricing models such as the CAPM and the consumption CAPM, referred to jointly as the (C)CAPM, to explain the cross-section of average stock returns. Unlike many previous empirical tests of the (C)CAPM, we specify the pricing kernel as a conditional linear factor model, as would be expected if risk premia vary over time. Central to our approach is the use of a conditioning variable which proxies for fluctuations in the log consumption-aggregate wealth ratio and is likely to be important for summarizing conditional expectations of excess returns. We demonstrate that such conditional factor models are able to explain a substantial fraction of the cross-sectional variation in portfolio returns. These models perform much better than unconditional (C)CAPM specifications, and about as well as the three-factor Fama-French model on portfolios sorted by size and book-to-market ratios. This specification of the linear conditional consumption CAPM, using aggregate consumption data, is able to account for the difference in returns between low book-to-market and high book-to-market firms and exhibits little evidence of residual size or book-to-market effects. (JEL G10, E21)
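The core estimation idea, a conditional linear factor model implemented by interacting the factor with a lagged conditioning variable, can be sketched in a few lines of Python. The simulated data and names below are illustrative assumptions; the paper's actual conditioning variable is a proxy for the log consumption-aggregate wealth ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 240                                  # months of simulated data
z = rng.normal(size=T)                   # lagged conditioning variable
f = rng.normal(size=T)                   # factor, e.g. consumption growth
# Simulated excess returns with a time-varying factor loading:
r = 0.5 * f + 0.3 * z * f + rng.normal(scale=0.1, size=T)

# Conditional (scaled-factor) model: regress returns on f and z*f, so the
# implied factor loading is beta1 + beta2*z, i.e. it varies with state z.
X = np.column_stack([np.ones(T), f, z * f])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)
print(np.round(beta, 2))                 # approximately [0.00, 0.50, 0.30]
```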
The nucleotide sequence of a HMW glutenin subunit gene located on chromosome 1A of wheat (Triticum aestivum L.). | A cloned 8.2 kb EcoRI fragment has been isolated from a genomic library of DNA derived from Triticum aestivum L. cv. Cheyenne. This fragment contains sequences related to the high molecular weight (HMW) subunits of glutenin, proteins considered to be important in determining the elastic properties of gluten. The cloned HMW subunit gene appears to be derived from chromosome 1A. The nucleotide sequence of this gene has provided new information on the structure and evolution of the HMW subunits. However, hybrid-selection translation experiments suggest that this gene is silent. |
Decrease of ornithine decarboxylase activity in premalignant gastric mucosa and regression of small intestinal metaplasia in patients supplemented with high doses of vitamin E. | The effect of high doses of vitamin E (Vit.E; 400 units/day) on ornithine decarboxylase (ODC) activity and regression of small intestinal metaplasia (SIM) was studied in a 1-year double-blind intervention trial. Biochemical and morphological parameters were estimated in 14 evaluable SIM patients of 18 in the Vit.E group and in 16 of 18 intestinal metaplasia patients enrolled in the control group (placebo). In the control group, there were no statistically significant changes in Vit.E content in blood plasma, ODC activity, or the rate of SIM in multiple biopsies from antrum gastric mucosa. In the Vit.E group, after 6 and 12 months of intervention, the initial content of Vit.E in blood plasma increased from 6.4 +/- 0.9 up to 17.0 +/- 1.8 and 21.2 +/- 2.3 micrograms/ml, respectively, and the initial abnormally high activity of ODC, 62.6 +/- 7.8 units, decreased by 53 and 65%, respectively. Histological analysis of multiple biopsies, taken from the gastric antrum of patients supplemented with Vit.E, revealed that in 8 of 14 patients (57%) after 6 months and in 10 of 14 patients (71%) after 12 months, no signs of SIM were observed; the gastroscopic dye procedure confirmed the regression of SIM in these cases and showed the presence of only small isolated stained areas identified as SIM.
Modified Banker's algorithm for scheduling in multi-AGV systems | In today's highly complex multi-AGV systems, a key research objective is finding a scheduling and routing policy that avoids deadlock while keeping vehicle utilization as high as possible. It is well known that finding such an optimal policy is an NP-hard task in the general case. Therefore, a large part of the research is oriented towards finding various suboptimal policies that can be applied to real-world plants. In this paper we propose a modified Banker's algorithm for scheduling in multi-AGV systems. A predetermined mission path is executed in a way that allows some non-safe states in order to achieve better utilization of vehicles. A graph-based method of polynomial complexity for verification of these states is given. The algorithm is tested on the layout of a real plant for packing and warehousing palettes. Results shown at the end of the paper demonstrate the advantages of the proposed method compared with other methods based on Banker's algorithm.
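For readers unfamiliar with the baseline that the paper modifies, here is a minimal Python sketch of the classic Banker's safety check underlying such scheduling policies. The resource vectors are illustrative, and the paper's modification (deliberately permitting certain non-safe states) is not reproduced here.

```python
def is_safe(available, allocation, need):
    """Standard Banker's safety check: greedily look for an order in
    which every process (here: AGV mission) can run to completion."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                # Mission i can finish; it releases its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# Two resource types (e.g., track segments), three vehicles; the
# numbers are illustrative only.
print(is_safe(available=[2, 1],
              allocation=[[0, 1], [2, 0], [1, 1]],
              need=[[4, 1], [1, 2], [0, 0]]))   # True: a safe order exists
```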
Bridging the divide between families and health professionals' perspectives on family-centred care. | OBJECTIVES
To describe and discuss key findings from a recent research project that challenge an increasingly prevalent theme, apparent in both family-centred care research and practice, of conceptualizing family-centred care as shifting care, care management, and advocacy responsibilities to families. The purpose of the research, from which these findings emerged, was to develop a conceptualization of family-centred care grounded in the experiences of families and direct health-care providers.
DESIGN
Qualitative research methods, following the grounded theory tradition, were used to develop a conceptual framework that described the dimensions of the concept of family-centred care and their interrelationships, in the substantive area of children's developmental services. This article reports on and extends key findings from this grounded theory study, in light of current trends in the literature.
SETTING AND PARTICIPANTS
The substantive area that served as the setting for the research was developmental services at a children's hospital in Alberta, Canada. Data was collected through focus groups and individual interviews with 37 parents of children diagnosed with a developmental problem and 16 frontline health-care providers.
FINDINGS
Key findings from this research project do not support the current emphasis in family-centred care research and practice on conceptualizing family-centred care as the shifting of care, care management, and advocacy responsibilities to families. Rather, what emerged was that parents want to work truly collaboratively with health-care providers in making treatment decisions and in implementing a dynamic care plan that will work best for the child and family.
DISCUSSION AND CONCLUSIONS
A definition of collaboration is provided, and the nature of collaborative relationships described. Contributing factors to the difficulty in establishing true collaborative relationships between families and health-care professionals, where the respective roles to be played by health-care professionals and families are jointly determined, are discussed. In light of these findings we strongly advocate for the re-examination of current family-centred care policy and practice. |
A Virtual Impedance Comprehensive Control Strategy for the Controllably Inductive Power Filtering System | In this letter, a virtual impedance comprehensive control (VICC) strategy is proposed for the controllably inductive power filtering (CIPF) system with a new filtering mechanism. This control strategy aims to satisfy the zero-impedance design precondition of the inductive power filtering system, and can dampen the harmonic resonance at the grid side. By the proposed zero-impedance control, the quality factor of the passive power device can be adjustable, and the single-tuned filter can be multituned. First, the main circuit topology for implementing the VICC-based CIPF is presented. Then, on the basis of the multipurpose control, the VICC strategy is designed. Furthermore, by means of the established equivalent circuit model and the corresponding mathematical model, the principles of the harmonic damping and the zero-impedance realization are revealed. Finally, the experimental results verify that the proposed control strategy can weaken the harmonic amplification effectively, and improve the filtering performance significantly. |
Tackling the Poor Assumptions of Naive Bayes Text Classifiers | Naive Bayes is often used as a baseline in text classification because it is fast and easy to implement. Its severe assumptions make such efficiency possible but also adversely affect the quality of its results. In this paper we propose simple, heuristic solutions to some of the problems with Naive Bayes classifiers, addressing both systemic issues as well as problems that arise because text is not actually generated according to a multinomial model. We find that our simple corrections result in a fast algorithm that is competitive with state-of-the-art text classification algorithms such as the Support Vector Machine.
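scikit-learn's ComplementNB estimator implements the complement-class weighting proposed in this paper; a minimal sketch, assuming a toy corpus of our own invention, might combine it with TF-IDF-style term weighting (another of the paper's corrections) as follows.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import make_pipeline

# Toy corpus; real experiments would use a benchmark such as 20 Newsgroups.
docs = ["the goal was scored in the final minute",
        "the striker scored a brilliant goal",
        "the stock price fell sharply today",
        "markets rallied as share prices rose"]
labels = ["sport", "sport", "finance", "finance"]

# TF-IDF weighting plus complement Naive Bayes, two of the corrections
# the paper proposes for the multinomial model's poor assumptions.
clf = make_pipeline(TfidfVectorizer(), ComplementNB())
clf.fit(docs, labels)
print(clf.predict(["share prices fell after the report"]))
```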
Computational Linguistics and Deep Learning | Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.” In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.” And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology
Planar series-fed antenna array for 77 GHz automotive radar | An antenna array with 3 transmit antennas and 4 receive antennas for the 77 GHz automotive radar is presented. Each antenna is a planar series-fed linear array. The Dolph-Chebyshev distribution is used to taper patch width for radiation pattern synthesis. The planar array is designed, fabricated, and its performance evaluated. Simulation results show that a 16-element series-fed microstrip linear array can achieve a sidelobe level (SLL) of −16.8 dB with a half power beamwidth (HPBW) of 5.8°. The simulated and measured results show that a linear array in the planar array can realize an SLL of −15.5 dB with an HPBW of 6.0° in simulation and an SLL of −10.5 dB with an HPBW of 6.5° in measurement, respectively. The measured antenna gain is not less than 15.0 dBi.
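The Dolph-Chebyshev excitation amplitudes used for the patch-width taper can be generated with SciPy's Chebyshev window. This small sketch assumes a 16-element array and an illustrative 20 dB sidelobe design attenuation rather than the authors' exact design values; the patch widths of the series-fed array would then be scaled to realize these relative amplitudes.

```python
import numpy as np
from scipy.signal.windows import chebwin

# Dolph-Chebyshev amplitude taper for a 16-element series-fed array;
# 'at' is the design sidelobe attenuation in dB (illustrative value).
weights = chebwin(16, at=20)
weights = weights / weights.max()     # normalize to the widest patch
print(np.round(weights, 3))
```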
Report on Post-Quantum Cryptography | In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks. This Internal Report shares the National Institute of Standards and Technology (NIST)’s current understanding about the status of quantum computing and post-quantum cryptography, and outlines NIST’s initial plan to move forward in this space. The report also recognizes the challenge of moving to new cryptographic infrastructures and therefore emphasizes the need for agencies to focus on crypto agility. |
Deep Outdoor Illumination Estimation | We present a CNN-based technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based outdoor illumination model to the skies in these panoramas giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train a CNN with this large set of input image–output lighting parameter pairs. Given a test image, this network can be used to infer illumination parameters that can, in turn, be used to reconstruct an outdoor illumination environment map. We demonstrate that our approach allows the recovery of plausible illumination conditions and enables photorealistic virtual object insertion from a single image. An extensive evaluation on both the panorama dataset and captured HDR environment maps shows that our technique significantly outperforms previous solutions to this problem. |
Multiresonator based chipless RFID tag and dedicated RFID reader | An RFID system with a chipless RFID tag on a 90-µm thin Taconic TF290 laminate is presented. The chipless tag encodes data into the spectral signature in both the magnitude and phase of the spectrum. The design and operation of a prototype RFID reader is also presented. The RFID reader operates in the 5-10.7 GHz frequency band and successfully detects a chipless tag at 15 cm range. The tag design can be transferred easily to plastic and paper, making it suitable for mass deployment on low-cost items, and it has the potential to replace the trillions of barcodes printed each year. The RFID reader is suitable for mounting over conveyor belt systems.
The chiaroscuro stem cell: a unified stem cell theory. | Hematopoiesis has been considered hierarchical in nature, but recent data suggest that the system is not hierarchical and is, in fact, quite functionally plastic. Existing data indicate that engraftment and progenitor phenotypes vary inversely with cell cycle transit and that gene expression also varies widely. These observations suggest that there is no progenitor/stem cell hierarchy, but rather a reversible continuum. This may, in turn, be dependent on shifting chromatin and gene expression with cell cycle transit. If the phenotype of these primitive marrow cells changes from engraftable stem cell to progenitor and back to engraftable stem cell with cycle transit, then this suggests that the identity of the engraftable stem cell may be partially masked in nonsynchronized marrow cell populations. A general model indicates a marrow cell that can continually change its surface receptor expression and thus responds to external stimuli differently at different points in the cell cycle. |
Networking Models in Flying Ad-Hoc Networks (FANETs): Concepts and Challenges | In recent years, the capabilities and roles of Unmanned Aerial Vehicles (UAVs) have rapidly evolved, and their usage in military and civilian areas is extremely popular as a result of advances in the technology of robotic systems such as processors, sensors, communications, and networking technologies. While this technology is progressing, development and maintenance costs of UAVs are decreasing relatively. The focus is changing from the use of one large UAV to the use of multiple UAVs, which are integrated into teams that can coordinate to achieve high-level goals. This level of coordination requires new networking models that can be set up on highly mobile nodes such as the UAVs in the fleet. Such networking models allow any two nodes to communicate directly if they are in communication range, or indirectly through a number of relay nodes such as UAVs. Setting up an ad-hoc network between flying UAVs is a challenging issue, and requirements can differ from traditional networks, Mobile Ad-hoc Networks (MANETs) and Vehicular Ad-hoc Networks (VANETs) in terms of node mobility, connectivity, message routing, service quality, application areas, etc. This paper identifies the challenges of using UAVs as relay nodes in an ad-hoc manner, introduces networking models for UAVs, and depicts open research issues, analyzing opportunities and future work.
Construction of learning objects with Augmented Reality: An experience in secondary education | One of the technologies showing potential for application in educational environments is Augmented Reality (AR), which has also been applied in other fields such as tourism, advertising and video games, among others. This article presents the results of an experiment carried out at the National University of Colombia, involving the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes through the creation of digital resources for mobile devices. The development of AR learning objects for mobile devices can support teachers in integrating information and communication technologies (ICT) into teaching-learning processes.
No resistance to tenofovir disoproxil fumarate detected after up to 144 weeks of therapy in patients monoinfected with chronic hepatitis B virus. |
Tenofovir disoproxil fumarate (TDF) is a nucleotide analogue with potent activity against human immunodeficiency virus type 1 and hepatitis B virus (HBV). To date, no reports of HBV clinical resistance to TDF have been confirmed. In two phase 3 studies (GS-US-174-0102 and GS-US-174-0103), 375 hepatitis B e antigen-negative (HBeAg(-)) patients and 266 HBeAg(+) patients with chronic hepatitis B (some nucleoside-naive and some lamivudine-experienced) were randomized 2:1 to receive TDF (n = 426) or adefovir dipivoxil (ADV; n = 215) for 48 weeks. After week 48, eligible patients received open-label TDF with no interruption. The studies are being continued through week 384/year 8; week 144 data are presented here. Per protocol, viremic patients (HBV DNA level ≥ 400 copies/mL or 69 IU/mL) had the option of adding emtricitabine (FTC) at or after week 72. Resistance analyses of HBV polymerase/reverse transcriptase (pol/RT) were based on population dideoxy sequencing. Phenotypic analyses were conducted in HepG2 cells with recombinant HBV derived from patient serum. Most patients maintained TDF monotherapy treatment across both studies (607/641, 95%). A resistance analysis of HBV pol/RT was performed at the baseline for all patients, for viremic patients at week 144 or at the last time when they were on TDF monotherapy (34 on TDF and 19 on ADV-TDF), and for patients who remained viremic after the addition of FTC (7/20 on TDF and 5/14 on ADV-TDF). No patient developed amino acid substitutions associated with resistance to TDF. Virological breakthrough on TDF monotherapy was infrequent over 144 weeks (13/426, 3%) and was attributed to documented nonadherence in most cases (11/13, 85%). Persistent viremia (≥400 copies/mL) through week 144 was rare (5/641, 0.8%) and was not associated with virological resistance to TDF by population or clonal analyses.
CONCLUSION
No nucleoside-naive or nucleoside-experienced patient developed HBV pol/RT mutations associated with TDF resistance after up to 144 weeks of exposure to TDF monotherapy. |
Memory and memorials : from the French Revolution to World War One | Focusing on the "long" nineteenth century, from the French Revolution to the beginnings of Modernism, this book examines the significance of memory in this era of turbulent social change. Through investigation of science, literature, history and the visual arts, the authors explore theories of memory and the cultural and literary resonances of memorializing. Drawing on the work of many of the most influential literary figures of the period, such as Tennyson, Scott, and Hardy, Memory and Memorials explores key topics such as: gender and memory; Victorian psychological theories of memory; and cultural constructions in literature, science, history and architecture. Memory and Memorials: From the French Revolution to World War One employs a range of new and influential interdisciplinary methodologies. It offers both a fresh theoretical understanding of the period, and a wealth of empirical material of use to the historian, literary critic or social psychologist. |
Educational games for improving the teaching-learning process of a CLIL subject: Physics and chemistry in secondary education | The use of educational games in an academic context seems to be a superb alternative to traditional learning activities, such as drill-type exercises, in order to engage 21st-century students. Consequently, this work pursues the following objectives: analyze the effectiveness of game-based learning, characterize game elements that may contribute to create playing experiences, comprehend how different player types interact with games and, finally, design interactive games which may create challenges, set goals and provide feedback on progress while motivating learners to study physics and chemistry in a foreign language in the second cycle of Secondary Education (4th E.S.O., corresponding to ages 15-16). Specifically, we have used several Web 2.0 tools (Hot Potatoes, Scratch, What2Learn and SMART Notebook 11) and applications (Microsoft PowerPoint and Microsoft Excel) to create the games, which are based on the following contents: laboratory safety, laboratory equipment, stoichiometry, atomic structure, electronic configuration, the periodic table, forces, motion and energy.
Prevalence of cardiovascular disease risk factors among Egyptian and Saudi medical students: a comparative study. | BACKGROUND
Results from recent reports suggest that the mortality and morbidity from coronary heart disease (CHD) are leveling, especially in younger adults. Studies conducted in both Saudi Arabia and Egypt, aiming at estimating the prevalence of cardiovascular risk factors among the young population, demonstrated a high prevalence of risk factors.
OBJECTIVE
The aim of this study was to compare the prevalence of cardiovascular risk factors among medical students aged 18-25 years in two Middle East countries (Egypt and Saudi Arabia).
PARTICIPANTS AND METHODS
This was a cross-sectional comparative study involving a sample of 360 medical students of both sexes randomly selected from students enrolled into two medical colleges in Saudi Arabia and Egypt.
RESULTS
The prevalence of risk factors for cardiovascular disease was relatively high among both Saudi and Egyptian medical students, particularly a sedentary life style, obesity, and abdominal obesity. Smoking was practiced by 29.7% of both populations. A significantly higher prevalence of obesity and a reported family history of premature CHD were observed among the Saudi students and a significantly higher prevalence of hypertension was found among male Egyptian students as compared with male Saudi students. A relatively high proportion of both populations (23.9% of Saudi students and 16.7% of the Egyptian students) was at an increased risk of developing fatal cardiovascular disease within 10 years.
CONCLUSION
Apart from the higher prevalence of obesity and reported family history of premature CHD among the Saudi students and the significantly higher prevalence of hypertension among the Egyptian students, there was no statistically significant difference between the risk profiles of both populations. Participatory behavior change programs in medical schools for the adoption of healthy lifestyles, particularly involvement in regular physical activity and smoking cessation are highly recommended. |
Factors affecting running economy in trained distance runners. | Running economy (RE) is typically defined as the energy demand for a given velocity of submaximal running, and is determined by measuring the steady-state consumption of oxygen (VO2) and the respiratory exchange ratio. Taking body mass (BM) into consideration, runners with good RE use less energy and therefore less oxygen than runners with poor RE at the same velocity. There is a strong association between RE and distance running performance, with RE being a better predictor of performance than maximal oxygen uptake (VO2max) in elite runners who have a similar VO2max. RE is traditionally measured by running on a treadmill in standard laboratory conditions, and, although this is not the same as overground running, it gives a good indication of how economical a runner is and how RE changes over time. In order to determine whether changes in RE are real or not, careful standardisation of footwear, time of test and nutritional status are required to limit typical error of measurement. Under controlled conditions, RE is a stable test capable of detecting relatively small changes elicited by training or other interventions. When tracking RE between or within groups it is important to account for BM. As VO2 during submaximal exercise does not, in general, increase linearly with BM, reporting RE with respect to the 0.75 power of BM has been recommended. A number of physiological and biomechanical factors appear to influence RE in highly trained or elite runners. These include metabolic adaptations within the muscle such as increased mitochondria and oxidative enzymes, the ability of the muscles to store and release elastic energy by increasing the stiffness of the muscles, and more efficient mechanics leading to less energy wasted on braking forces and excessive vertical oscillation. Interventions to improve RE are constantly sought after by athletes, coaches and sport scientists. Two interventions that have received recent widespread attention are strength training and altitude training. Strength training allows the muscles to utilise more elastic energy and reduce the amount of energy wasted in braking forces. Altitude exposure enhances discrete metabolic aspects of skeletal muscle, which facilitate more efficient use of oxygen. The importance of RE to successful distance running is well established, and future research should focus on identifying methods to improve RE. Interventions that are easily incorporated into an athlete's training are desirable.
Preliminary comparison of techniques for dealing with imbalance in software defect prediction | Imbalanced data is a common problem in data mining when dealing with classification problems, where samples of one class vastly outnumber those of other classes. In this situation, many data mining algorithms generate poor models as they try to optimize the overall accuracy and perform badly on classes with very few samples. Software engineering data in general, and defect prediction datasets in particular, are no exception, and in this paper we compare different approaches to defect prediction, namely sampling, cost-sensitive, ensemble and hybrid approaches, with the datasets preprocessed in different ways. We have used the well-known NASA datasets curated by Shepperd et al. There are differences in the results depending on the characteristics of the dataset and the evaluation metrics, especially if duplicates and inconsistencies are removed as a preprocessing step.
Further Results and replication package: http://www.cc.uah.es/drg/ease14 |
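As a concrete illustration of two of the approach families compared above, the following Python sketch applies a sampling method (SMOTE oversampling via imbalanced-learn) and a cost-sensitive method (class reweighting) to a synthetic stand-in for a defect dataset. The data and classifier choice are assumptions for demonstration, not the paper's experimental setup.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for a defect dataset: ~5% defective modules.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05],
                           random_state=0)
print("class counts before:", Counter(y))

# Sampling approach: synthesize minority-class samples with SMOTE.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("class counts after: ", Counter(y_res))

# Cost-sensitive approach: reweight misclassification costs instead
# of resampling, via the classifier's class_weight option.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
```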
Recurrent Batch Normalization | We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. |
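A minimal NumPy sketch of the idea, assuming illustrative shapes and omitting the paper's per-time-step population statistics for test time, is to batch-normalize both the input-to-hidden and the hidden-to-hidden pre-activations inside one LSTM step:

```python
import numpy as np

def batch_norm(x, gamma, eps=1e-5):
    # Normalize over the batch dimension with a learned gain gamma;
    # the shift is left to the LSTM bias b, so no beta term is used.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps)

def bn_lstm_step(x, h, c, Wx, Wh, b, gamma_x, gamma_h):
    # Batch-normalize BOTH the input-to-hidden and hidden-to-hidden
    # pre-activations before slicing out the four gates.
    pre = batch_norm(x @ Wx, gamma_x) + batch_norm(h @ Wh, gamma_h) + b
    i, f, o, g = np.split(pre, 4, axis=1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    c_new = sig(f) * c + sig(i) * np.tanh(g)
    h_new = sig(o) * np.tanh(c_new)   # the paper also normalizes c here
    return h_new, c_new

rng = np.random.default_rng(0)
B, D, H = 8, 4, 6                     # illustrative sizes
h, c = bn_lstm_step(rng.normal(size=(B, D)),
                    np.zeros((B, H)), np.zeros((B, H)),
                    rng.normal(size=(D, 4 * H)), rng.normal(size=(H, 4 * H)),
                    np.zeros(4 * H), np.ones(4 * H), np.ones(4 * H))
print(h.shape, c.shape)               # (8, 6) (8, 6)
```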
Building Textual Entailment Specialized Data Sets: a Methodology for Isolating Linguistic Phenomena Relevant to Inference | This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets. |
Studying Irony Detection Beyond Ironic Criticism: Let's Include Ironic Praise | Studies of irony detection have commonly used ironic criticisms (i.e., mock positive evaluation of negative circumstances) as stimulus materials. Another basic type of verbal irony, ironic praise (i.e., mock negative evaluation of positive circumstances) is largely absent from studies on individuals' aptitude to detect verbal irony. However, it can be argued that ironic praise needs to be considered in order to investigate the detection of irony in the variety of its facets. To explore whether the detection ironic praise has a benefit beyond ironic criticism, three studies were conducted. In Study 1, an instrument (Test of Verbal Irony Detection Aptitude; TOVIDA) was constructed and its factorial structure was tested using N = 311 subjects. The TOVIDA contains 26 scenario-based items and contains two scales for the detection of ironic criticism vs. ironic praise. To validate the measurement method, the two scales of the TOVIDA were experimentally evaluated with N = 154 subjects in Study 2. In Study 3, N = 183 subjects were tested to explore personality and ability correlates of the two TOVIDA scales. Results indicate that the co-variance between the ironic TOVIDA items was organized by two inter-correlated but distinct factors: one representing ironic praise detection aptitude and one representing ironic criticism detection aptitude. Experimental validation showed that the TOVIDA items truly contain irony and that item scores reflect irony detection. Trait bad mood and benevolent humor (as a facet of the sense of humor) were found as joint correlates for both ironic criticism and ironic praise detection scores. In contrast, intelligence, trait cheerfulness, and corrective humor were found as unique correlates of ironic praise detection scores, even when statistically controlling for the aptitude to detect ironic criticism. Our results indicate that the aptitude to detect ironic praise can be seen as distinct from the aptitude to detect ironic criticism. Generating unique variance in irony detection, ironic praise can be postulated as worthwhile to include in future studies-especially when studying the role of mental ability, personality, and humor in irony detection. |
Ego-motion estimation and moving object tracking using multi-layer LIDAR | This paper presents an approach for the robust recognition of a complex and dynamic driving environment, such as an urban area, using on-vehicle multi-layer LIDAR. The multi-layer LIDAR alleviates the consequences of occlusion by vertical scanning; it can detect objects with different heights simultaneously, and therefore the influence of occlusion can be curbed. The road environment recognition algorithm proposed in this paper consists of three procedures: ego-motion estimation, construction and updating of a 3-dimensional local grid map, and the detection and tracking of moving objects. The integration of these procedures enables us to estimate ego-motion accurately, along with the positions and states of moving objects, the free area where vehicles and pedestrians can move freely, and the ‘unknown’ area, which have never previously been observed in a road environment. |
Maximum likelihood estimation for conditional distribution single-index models under censoring | A new likelihood approach is proposed for the problem of semiparametric estimation of a conditional distribution or density under censoring. Consistency and asymptotic normality for two versions of the maximum likelihood estimator of the parameter vector in the single-index model are proved. The single-index model considered can be seen as a useful tool for credit scoring and estimation of the default probability in credit risk. A data-driven bandwidth selection procedure is proposed; it allows choosing the smoothing parameter involved in our approach. The finite sample performance of the estimators has been studied by simulations, where the new method has been compared with the method by Bouaziz and Lopez (2010) [1]. To the best of our knowledge this is the only existing competitor in this context. The simulation study shows the good behaviour of the proposed method.
Residual Gated Graph ConvNets | Graph-structured data such as functional brain networks, social networks, gene regulatory networks, and communication networks have raised interest in generalizing neural networks to graph domains. In this paper, we are interested in designing efficient neural network architectures for graphs of variable size. Several existing works, such as Scarselli et al. (2009) and Li et al. (2016), have focused on recurrent neural networks (RNNs) to solve this task. A different approach was recently proposed in Sukhbaatar et al. (2016), where a vanilla graph convolutional neural network (ConvNet) was introduced. We believe the latter approach to be a better paradigm for graph learning problems because ConvNets are better suited to deep architectures than RNNs. For this reason, we propose the most generic class of residual multi-layer graph ConvNets that make use of an edge gating mechanism, as proposed in Marcheggiani & Titov (2017). Gated edges appear to be a natural property in the context of graph learning tasks, as the system gains the ability to learn which edges are important for the task at hand. We apply several graph neural models to two basic network science tasks: subgraph matching and semi-supervised clustering for graphs of variable size. Numerical results show the performance of the new model. |
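A minimal PyTorch sketch of one residual, edge-gated graph convolution layer of the general shape described above. The exact gate parameterization in the paper may differ; the linear maps U, V, A, B and the dense-adjacency formulation are simplifying assumptions made here for clarity.

```python
import torch
import torch.nn as nn

class GatedGraphConv(nn.Module):
    """One residual gated graph conv layer: each neighbour message
    is modulated by an edge gate learned from both endpoints."""

    def __init__(self, dim):
        super().__init__()
        self.U = nn.Linear(dim, dim)   # transform of the node itself
        self.V = nn.Linear(dim, dim)   # transform of neighbour messages
        self.A = nn.Linear(dim, dim)   # gate: receiver's contribution
        self.B = nn.Linear(dim, dim)   # gate: sender's contribution

    def forward(self, h, adj):
        # h: (n, dim) node features; adj: (n, n) {0,1} adjacency matrix.
        # eta[i, j] gates the edge j -> i, computed from both endpoints.
        eta = torch.sigmoid(self.A(h).unsqueeze(1) + self.B(h).unsqueeze(0))
        msgs = eta * self.V(h).unsqueeze(0)          # (n, n, dim)
        agg = (adj.unsqueeze(-1) * msgs).sum(dim=1)  # sum over neighbours j
        return h + torch.relu(self.U(h) + agg)       # residual connection
```

Stacking several such layers gives a residual multi-layer graph ConvNet; the gate eta lets the network learn to downweight edges that are irrelevant to the task.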
The Depths of Hydraulic Fracturing and Accompanying Water Use Across the United States. | Reports highlight the safety of hydraulic fracturing for drinking water if it occurs "many hundreds of meters to kilometers underground". To our knowledge, however, no comprehensive analysis of hydraulic fracturing depths exists. Based on fracturing depths and water use for ∼44,000 wells reported between 2010 and 2013, the average fracturing depth across the United States was 8300 ft (∼2500 m). Many wells (6900; 16%) were fractured less than a mile from the surface, and 2600 wells (6%) were fractured shallower than 3000 ft (900 m), particularly in Texas (850 wells), California (720), Arkansas (310), and Wyoming (300). Average water use per well nationally was 2,400,000 gallons (9,200,000 L), led by Arkansas (5,200,000 gallons), Louisiana (5,100,000 gallons), West Virginia (5,000,000 gallons), and Pennsylvania (4,500,000 gallons). Two thousand wells (∼5%) shallower than one mile and 350 wells (∼1%) shallower than 3000 ft were hydraulically fractured with more than 1 million gallons of water, particularly in Arkansas, New Mexico, Texas, Pennsylvania, and California. Because hydraulic fractures can propagate 2000 ft upward, shallow wells may warrant special safeguards, including a mandatory registry of locations, full chemical disclosure, and, where horizontal drilling is used, predrilling water testing to a radius of 1000 ft beyond the greatest lateral extent. |
DETECTING POTHOLES USING SIMPLE IMAGE PROCESSING TECHNIQUES AND REAL-WORLD FOOTAGE | Potholes are a nuisance, especially in the developing world, and can often result in vehicle damage or physical harm to the vehicle occupants. Drivers can be warned to take evasive action if potholes are detected in real time. Moreover, their location can be logged and shared to aid other drivers and road maintenance agencies. This paper proposes a vehicle-based computer vision approach to identify potholes using a window-mounted camera. Existing literature on pothole detection uses either theoretically constructed pothole models or footage taken from advantageous vantage points at low speed, rather than footage taken from within a vehicle at speed. A distinguishing feature of the work presented in this paper is that a thorough exercise was performed to create an image library of actual and representative potholes under different conditions, and results are obtained using a part of this library. A model of potholes is constructed from the image library and used in an algorithmic approach that combines a road colour model with simple image processing techniques such as a Canny filter and contour detection. Using this approach, it was possible to detect potholes with a precision of 81.8% and a recall of 74.4%. |
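To make the algorithmic recipe concrete, here is a rough OpenCV sketch combining a road-colour mask with a Canny filter and contour detection, as the abstract describes. All thresholds and the HSV road-colour range are illustrative placeholders, not the values tuned on the paper's image library.

```python
import cv2
import numpy as np

def detect_pothole_candidates(frame, road_lo=(0, 0, 60), road_hi=(180, 60, 200)):
    """Rough pothole candidate detector: restrict attention to
    road-coloured pixels, then look for edge-bounded blobs inside
    that region. Thresholds here are guesses, not tuned values."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    road_mask = cv2.inRange(hsv, np.array(road_lo), np.array(road_hi))

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, edges, mask=road_mask)

    # Close small gaps so pothole rims form connected contours.
    kernel = np.ones((5, 5), np.uint8)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep blobs of plausible pothole size; area bounds are guesses.
    return [c for c in contours if 500 < cv2.contourArea(c) < 50000]
```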
Argument Mining: A Machine Learning Perspective | Argument mining has recently become a hot topic, attracting the interest of several diverse research communities, ranging from artificial intelligence to computational linguistics, natural language processing, and the social and philosophical sciences. In this paper, we attempt to describe the problems and challenges of argument mining from a machine learning angle. In particular, we advocate that machine learning techniques have so far been under-exploited, and that a proper standardization of the problem, also with regard to the underlying argument model, could provide a crucial element for developing better systems. |
An Elementary Proof of the Johnson-Lindenstrauss Lemma | The Johnson-Lindenstrauss lemma shows that a set of n points in a high-dimensional Euclidean space can be mapped down into an O(log n / ε²)-dimensional Euclidean space such that the distance between any two points changes by only a factor of (1 ± ε). In this note, we prove this lemma using elementary probabilistic techniques. |
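A quick empirical illustration of the lemma in Python: project with a random Gaussian matrix of the stated target dimension and check the pairwise-distance distortion. The constant 8 in the target dimension is one common choice, not the sharpest known bound.

```python
import numpy as np
from scipy.spatial.distance import pdist

def jl_project(points, eps, rng=np.random.default_rng(0)):
    """Random Gaussian projection realising the JL guarantee: with
    high probability, all pairwise distances are preserved up to a
    factor of (1 +/- eps)."""
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))   # O(log n / eps^2)
    R = rng.normal(size=(d, k)) / np.sqrt(k)   # entries ~ N(0, 1/k)
    return points @ R

# Quick check: distortion of all pairwise distances stays near 1.
X = np.random.default_rng(1).normal(size=(100, 10_000))
Y = jl_project(X, eps=0.25)
ratios = pdist(Y) / pdist(X)
print(ratios.min(), ratios.max())  # should lie roughly within [0.75, 1.25]
```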
A novel four-step search algorithm for fast block motion estimation | Based on the center-biased motion vector distribution characteristic of real-world image sequences, a new four-step search (4SS) algorithm with a center-biased checking point pattern for fast block motion estimation is proposed in this paper. A halfway-stop technique is employed in the new algorithm, with 2 to 4 searching steps and a total number of checking points varying from 17 to 27. Simulation results show that the proposed 4SS performs better than the well-known three-step search and has similar performance to the new three-step search (N3SS) in terms of motion compensation errors. In addition, the 4SS reduces the worst-case computational requirement from 33 to 27 search points and the average computational requirement from 21 to 19 search points as compared with N3SS. |
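A simplified Python sketch of the coarse-then-fine structure described above. Unlike the published algorithm, which evaluates only the points not already checked at each step (17 to 27 in total), this sketch re-evaluates the full 9-point pattern at every step for clarity.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, B=16):
    """Sum of absolute differences between the current block at
    (bx, by) and the reference block displaced by (dx, dy)."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > w or y + B > h:
        return np.inf
    return np.abs(cur[by:by+B, bx:bx+B].astype(int)
                  - ref[y:y+B, x:x+B].astype(int)).sum()

def _best_in_pattern(cur, ref, bx, by, cx, cy, step, B):
    """Minimum-SAD point on the 9-point pattern around (cx, cy)."""
    best = (sad(cur, ref, bx, by, cx, cy, B), cx, cy)
    for dy in (-step, 0, step):
        for dx in (-step, 0, step):
            c = sad(cur, ref, bx, by, cx + dx, cy + dy, B)
            if c < best[0]:
                best = (c, cx + dx, cy + dy)
    return best[1], best[2], (best[1], best[2]) != (cx, cy)

def four_step_search(cur, ref, bx, by, B=16):
    cx, cy = 0, 0
    # Up to three coarse steps on a 5x5 (step-size 2) pattern.
    for _ in range(3):
        cx, cy, moved = _best_in_pattern(cur, ref, bx, by, cx, cy, 2, B)
        if not moved:   # halfway-stop: centre is already the minimum
            break
    # Final fine step: 3x3 (step-size 1) pattern around the best point.
    cx, cy, _ = _best_in_pattern(cur, ref, bx, by, cx, cy, 1, B)
    return cx, cy
```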
Defining and translating a "safe" subset of Simulink/Stateflow into Lustre | The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre, and we build upon that work by encompassing Stateflow as well. Stateflow is problematic for synchronous languages because of its unbounded behaviour, so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. We go further and define a "safe" subset of Stateflow which excludes features that are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow-to-Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantic checks we have proposed. Finally, we present a small case study. |