title | abstract |
---|---|
What Is New in Rome IV | Functional gastrointestinal disorders (FGIDs) are diagnosed and classified using the Rome criteria; the criteria may change over time as new scientific data emerge. Rome IV was released in May 2016. The aim of this review is to describe the main changes in Rome IV. FGIDs are now called disorders of gut-brain interaction (DGBI). Rome IV has a multicultural rather than a Western-culture focus. There are new chapters on multicultural issues, age-gender-women's health, the intestinal microenvironment, the biopsychosocial model, and centrally mediated disorders. New disorders have been included that, although not truly FGIDs, fit the new definition of DGBI, including opioid-induced gastrointestinal hyperalgesia, opioid-induced constipation, and cannabinoid hyperemesis. New FGIDs were also added based on available evidence, including reflux hypersensitivity and centrally mediated abdominal pain syndrome. Using a normative survey to determine the frequency of normal bowel symptoms in the general population, changes in the time frame for diagnosis were introduced. For irritable bowel syndrome (IBS), only pain is required; discomfort was eliminated because it is non-specific, having different meanings in different languages. Pain is now related to bowel movements rather than just improving with bowel movements (i.e., it can worsen with bowel movements). Functional bowel disorders (functional diarrhea, functional constipation, IBS with predominant diarrhea [IBS-D], IBS with predominant constipation [IBS-C], and IBS with mixed bowel habits) are considered to be on a continuum rather than independent entities. Clinical applications such as diagnostic algorithms and the Multidimensional Clinical Profile have been updated. The new Rome IV iteration is evidence-based, multiculturally oriented, and clinically applicable. As new evidence becomes available, future updates are expected. |
Towards Research Collaboration - a Taxonomy of Social Research Network Sites | The increase in scientific collaboration coincides with the technological and social advancement of social software applications, which can change the way we research. Among social software, social network sites have recently gained immense popularity in a hedonic context. This paper focuses on social research network sites (SRNS), an emerging class of applications designed for the specific needs of researchers. To give an overview of these sites, we use a data set of 24 case studies and in-depth interviews with the founders of ten social research network sites. The gathered data lead to a first tentative taxonomy and to a definition of SRNS identifying four basic functionalities: identity and network management, communication, information management, and collaboration. The sites in the sample correspond to one of the following four types: research directory sites, research awareness sites, research management sites, and research collaboration sites. We conclude with implications for providers of social research network sites. |
Discrete Content-aware Matrix Factorization | Precisely recommending relevant items from massive candidates to a large number of users is an indispensable yet computationally expensive task in many online platforms (e.g., Amazon.com and Netflix.com). A promising way is to project users and items into a Hamming space and then recommend items via Hamming distance. However, previous studies did not address the cold-start challenge and could not make the best use of preference data such as implicit feedback. To fill this gap, we propose a Discrete Content-aware Matrix Factorization (DCMF) model, (1) to derive compact yet informative binary codes in the presence of user/item content information; (2) to support the classification task based on a local upper bound of the logit loss; (3) to introduce an interaction regularization for dealing with the sparsity issue. We further develop an efficient discrete optimization algorithm for parameter learning. Based on extensive experiments on three real-world datasets, we show that DCMF outperforms state-of-the-art methods on both regression and classification tasks. |
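The retrieval step this abstract builds on, ranking items by Hamming distance between binary codes, can be sketched in a few lines (a minimal illustration only; `hamming_rank` and the toy codes below are invented for this example and are not part of the DCMF model):

```python
import numpy as np

def hamming_rank(user_code, item_codes):
    """Rank items for one user by ascending Hamming distance.

    user_code: (d,) array of {0, 1}; item_codes: (n, d) array of {0, 1}.
    Returns item indices, closest first.
    """
    dists = np.count_nonzero(item_codes != user_code, axis=1)
    return np.argsort(dists, kind="stable")

user = np.array([1, 0, 1, 1])
items = np.array([[1, 0, 1, 1],   # distance 0
                  [0, 1, 0, 0],   # distance 4
                  [1, 0, 0, 1]])  # distance 1
ranking = hamming_rank(user, items)
```

With packed bit codes this comparison reduces to XOR plus popcount, which is what makes Hamming-space recommendation cheap at scale.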
Gene Expression Omnibus: NCBI gene expression and hybridization array data repository | The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in-house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather to complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and they were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets that make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo. |
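The platform/sample/series relationships described above can be sketched as plain data classes (an illustrative model only; the field names are assumptions for this sketch, not GEO's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Platform:
    accession: str
    probes: list            # the molecules this platform can detect

@dataclass
class Sample:
    accession: str
    platform: Platform      # each sample references exactly one platform
    abundances: dict        # probe -> molecular abundance value

@dataclass
class Series:
    accession: str
    samples: list = field(default_factory=list)  # samples grouped into one experiment

plat = Platform("GPL1", ["probeA", "probeB"])
s1 = Sample("GSM1", plat, {"probeA": 3.2, "probeB": 0.8})
series = Series("GSE1", [s1])
```

The one-to-one sample-to-platform reference and the many-to-one sample-to-series grouping are the two constraints the abstract spells out.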
Networks of Political Poetry in a Trans-Channel Literary Triangle | In their 2001 collection of essays, The Literary Channel, Margaret Cohen and Carolyn Dever locate the origins of the novel in a trans-Channel literary zone, a space where French and British transcultural networks reflected on the imagined borders of the nation-state. Poetry too was embedded in such political and formal networks. During the mid-Victorian decades, a variety of poets harnessed nationally inflected lyric forms such as the French chanson and ballade, the quasi-Shakespearean dramatic monologue, and the Italian improvvisazione and sonnet to triangulate French and British exchanges with the marginalized perspective of Risorgimento Italy. Thus Elizabeth Barrett Browning, informed by Germaine de Staël's Corinne (1807), voices the spirit of Young Italy in her Casa Guidi Windows (1851), while Arthur Hugh Clough puts Pierre-Jean Béranger's lewd chansons in the mouth of the Spirit as he lures the pure-minded Dipsychus to moral capitulation in the streets of post-revolutionary Venice (Dipsychus and the Spirit [1865]). An especially lively exchange took place in the early 1870s when Louis Napoleon, once the self-styled "saviour" of a politically gridlocked France ("Foreign intelligence"), was in his third and final exile in Britain. |
Scalable knowledge harvesting with high precision and high recall | Harvesting relational facts from Web sources has received great attention for automatically constructing large knowledge bases. State-of-the-art approaches combine pattern-based gathering of fact candidates with constraint-based reasoning. However, they still face major challenges regarding the trade-offs between precision, recall, and scalability. Techniques that scale well are susceptible to noisy patterns that degrade precision, while techniques that employ deep reasoning for high precision cannot cope with Web-scale data.
This paper presents a scalable system, called PROSPERA, for high-quality knowledge harvesting. We propose a new notion of ngram-itemsets for richer patterns, and use MaxSat-based constraint reasoning on both the quality of patterns and the validity of fact candidates. We compute pattern-occurrence statistics for two benefits: they serve to prune the hypothesis space and to derive informative weights of clauses for the reasoner. The paper shows how to incorporate these building blocks into a scalable architecture that can parallelize all phases on a Hadoop-based distributed platform. Our experiments with the ClueWeb09 corpus include comparisons to the recent ReadTheWeb experiment. We substantially outperform these prior results in terms of recall, with the same precision, while having low run-times. |
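The flavor of the ngram-itemset idea, representing a textual pattern by the set of its constituent n-grams rather than by one exact string, can be sketched as follows (an illustrative assumption about the construction, not PROSPERA's actual definition):

```python
def ngram_itemset(tokens, n_max=2):
    """Collect all 1..n_max-grams of a pattern's tokens into one itemset,
    so partially overlapping patterns can still share evidence."""
    grams = set()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            grams.add(" ".join(tokens[i:i + n]))
    return frozenset(grams)

pattern = ngram_itemset(["was", "born", "in"])
```

Two surface patterns that differ by a word still share most of their n-grams, which is what makes set-based pattern matching more tolerant of noise than exact string matching.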
Endovascular treatment of intracranial dural arteriovenous fistulas with cortical venous drainage: new management using Onyx. | BACKGROUND AND PURPOSE
Dural arteriovenous fistulas (DAVFs) represent one of the most dangerous types of intracranial AV shunts. Most of them are cured by arterial or venous embolization, but surgery or radiosurgery can be required in cases of failure. Our goal was to reconsider the endovascular treatment strategy in light of the new possibilities of arterial embolization using a nonpolymerizing liquid embolic agent.
MATERIALS AND METHODS
Thirty patients were included in a prospective study during the interval between July 2003 and November 2006. Ten of these had type II, 8 had type III, and 12 had type IV fistulas. Sixteen presented with hemorrhage. Five had been treated previously with other embolic materials.
RESULTS
Complete angiographic cure was obtained in 24 cases. Of these 24 cures, 20 were achieved after a single procedure. Cures were achieved in 23 of 25 patients who had not been embolized previously and in only 1 of 5 previously embolized patients. Among these 24 patients, 23 underwent a follow-up angiography, which has confirmed the complete cure. Partial occlusion was obtained in 6 patients, 2 were cured after additional surgery, and 2 underwent radiosurgery. Onyx volume injected per procedure ranged from 0.5 to 12.2 mL (mean, 2.45 mL). Rebleeding occurred in 1 completely cured patient at day 2 due to draining vein thrombosis. One patient had cranial nerve palsy that resolved. Two ethmoidal dural arteriovenous fistulas were occluded. All 10 of the patients with sinus and then CVR drainage were cured.
CONCLUSION
Based on this experience, we believe that Onyx may be the treatment of choice for many patients with intracranial dural arteriovenous fistula (ICDAVF) with direct cortical venous reflux (CVR). The applicability of this new embolic agent indicates the need for reconsideration of the global treatment strategy for such fistulas. |
Smart Choice for the Smart Grid: Narrowband Internet of Things (NB-IoT) | Low power wide area network (LPWAN) technologies, which are now embracing a booming era with the development of the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. Mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS and can hence be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid are comprehensively investigated from both quantitative and qualitative perspectives, and each of them is carefully examined for NB-IoT. We further explore representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations. |
A COMPARISON OF OSTEOPATHIC SPINAL MANIPULATION WITH STANDARD CARE FOR PATIENTS WITH LOW BACK PAIN | Background The effect of osteopathic manual therapy (i.e., spinal manipulation) in patients with chronic and subchronic back pain is largely unknown, and its use in such patients is controversial. Nevertheless, manual therapy is a frequently used method of treatment in this group of patients. Methods We performed a randomized, controlled trial that involved patients who had had back pain for at least three weeks but less than six months. We screened 1193 patients; 178 were found to be eligible and were randomly assigned to treatment groups; 23 of these patients subsequently dropped out of the study. The patients were treated either with one or more standard medical therapies (72 patients) or with osteopathic manual therapy (83 patients). We used a variety of outcome measures, including scores on the Roland–Morris and Oswestry questionnaires, a visual-analogue pain scale, and measurements of range of motion and straight-leg raising, to assess the results of treatment over a 12-week period. Results Patients in both groups improved during the 12 weeks. There was no statistically significant difference between the two groups in any of the primary outcome measures. The osteopathic-treatment group required significantly less medication (analgesics, anti-inflammatory agents, and muscle relaxants) (P < 0.001) and used less physical therapy (0.2 percent vs. 2.6 percent, P < 0.05). More than 90 percent of the patients in both groups were satisfied with their care. Conclusions Osteopathic manual care and standard medical care have similar clinical results in patients with subacute low back pain. However, the use of medication is greater with standard care. (N Engl J Med 1999;341:1426-31.) |
Privacy-preserving Machine Learning as a Service | Machine learning algorithms based on deep neural networks (NN) have achieved remarkable results and are being extensively used in different domains. On the other hand, with the increasing growth of cloud services, several Machine Learning as a Service (MLaaS) offerings exist in which training and deploying machine learning models are performed on cloud providers' infrastructure. However, machine learning algorithms require access to raw data, which is often privacy-sensitive and can create potential security and privacy risks. To address this issue, we present CryptoDL, a framework that develops new techniques for applying deep neural network algorithms to encrypted data. In this paper, we provide the theoretical foundation for implementing deep neural network algorithms in the encrypted domain and develop techniques to adapt neural networks to the practical limitations of current homomorphic encryption schemes. We show that it is feasible and practical to train neural networks using encrypted data, to make encrypted predictions, and to return the predictions in an encrypted form. We demonstrate the applicability of the proposed CryptoDL using a large number of datasets and evaluate its performance. The empirical results show that it provides accurate privacy-preserving training and classification. |
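One practical limitation alluded to above is that current homomorphic encryption schemes natively support only additions and multiplications, so nonlinear activations must be replaced by low-degree polynomials. The idea can be sketched with a plain least-squares fit (the degree, interval, and fitting method here are illustrative choices for this sketch, not CryptoDL's actual construction):

```python
import numpy as np

# Fit a low-degree polynomial to the sigmoid on a bounded interval; under a
# homomorphic scheme only the polynomial (adds and multiplies) is evaluated.
x = np.linspace(-4, 4, 200)
sigmoid = 1.0 / (1.0 + np.exp(-x))
coeffs = np.polyfit(x, sigmoid, 3)          # degree-3 least-squares fit
approx = np.polyval(coeffs, x)
max_err = float(np.max(np.abs(approx - sigmoid)))
```

The fit is only trustworthy on the interval it was computed over, which is why inputs to such an approximated activation must be kept bounded.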
A Quantitative Comparison of Calibration Methods for RGB-D Sensors Using Different Technologies | RGB-D (Red Green Blue and Depth) sensors are devices that can provide color and depth information from a scene at the same time. Recently, they have been widely used in many solutions due to their commercial growth from the entertainment market to many diverse areas (e.g., robotics, CAD, etc.). In the research community, these devices have had good uptake due to their acceptable level of accuracy for many applications and their low cost, but in some cases, they work at the limit of their sensitivity, near the minimum feature size that can be perceived. For this reason, calibration processes are critical in order to increase their accuracy and enable them to meet the requirements of such kinds of applications. To the best of our knowledge, there is no comparative study of calibration algorithms evaluating their results on multiple RGB-D sensors. Specifically, in this paper, a comparison of the three most used calibration methods has been carried out on three different RGB-D sensors based on structured light and time-of-flight. The comparison was made through a set of experiments to evaluate the accuracy of depth measurements. Additionally, an object reconstruction application has been used as an example of an application for which the sensor works at the limit of its sensitivity. The obtained reconstruction results have been evaluated through visual inspection and quantitative measurements. |
Joint 3D Scene Reconstruction and Class Segmentation | Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation. |
Efficient Training of Artificial Neural Networks for Autonomous Navigation | The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network designed to drive the CMU Navlab, a modified Chevy van. This paper describes the training techniques which allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's reactions. Using these techniques, ALVINN has been trained to drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 20 miles per hour. |
Unsafe abortion: the silent scourge. | An estimated 19 million unsafe abortions occur worldwide each year, resulting in the deaths of about 70,000 women. Legalization of abortion is a necessary but insufficient step toward improving women's health. Without skilled providers, adequate facilities and easy access, the promise of safe, legal abortion will remain unfulfilled, as in India and Zambia. Both suction curettage and pharmacological abortion are safe methods in early pregnancy; sharp curettage is inferior and should be abandoned. For later abortions, either dilation and evacuation or labour induction are appropriate. Hysterotomy should not be used. Timely and appropriate management of complications can reduce morbidity and prevent mortality. Treatment delays are dangerous, regardless of their origin. Misoprostol may reduce the risks of unsafe abortion by providing a safer alternative to traditional clandestine abortion methods. While the debate over abortion will continue, the public health record is settled: safe, legal, accessible abortion improves health. |
Social-information-processing factors in reactive and proactive aggression in children's peer groups. | We examined social-information-processing mechanisms (e.g., hostile attributional biases and intention-cue detection deficits) in chronic reactive and proactive aggressive behavior in children's peer groups. In Study 1, a teacher-rating instrument was developed to assess these behaviors in elementary school children (N = 259). Reactive and proactive scales were found to be internally consistent, and factor analyses partially supported convergent and discriminant validities. In Study 2, behavioral correlates of these forms of aggression were examined through assessments by peers (N = 339). Both types of aggression related to social rejection, but only proactively aggressive boys were also viewed as leaders and as having a sense of humor. In Study 3, we hypothesized that reactive aggression (but not proactive aggression) would occur as a function of hostile attributional biases and intention-cue detection deficits. Four groups of socially rejected boys (reactive aggressive, proactive aggressive, reactive-proactive aggressive, and nonaggressive) and a group of average boys were presented with a series of hypothetical videorecorded vignettes depicting provocations by peers and were asked to interpret the intentions of the provocateur (N = 117). Only the two reactive-aggressive groups displayed biases and deficits in interpretations. In Study 4, attributional biases and deficits were found to be positively correlated with the rate of reactive aggression (but not proactive aggression) displayed in free play with peers (N = 127). These studies supported the hypothesis that attributional biases and deficits are related to reactive aggression but not to proactive aggression. |
Performance assessment and uncertainty quantification of predictive models for smart manufacturing systems | We review in this paper several methods from Statistical Learning Theory (SLT) for the performance assessment and uncertainty quantification of predictive models. Computational issues are addressed so as to allow the scaling to large datasets and the application of SLT to Big Data analytics. The effectiveness of applying SLT to manufacturing systems is exemplified by the derivation of a predictive model for quality forecasting of products on an assembly line. |
Effects of communication styles on acceptance of recommendations in intercultural collaboration | The objective of this study is to investigate the impact of culture and communication style (explicit versus implicit) on people's reactions to recommendations in intercultural collaboration. The experimental results from three intercultural collaboration teams were studied: Chinese-American, Chinese-German, and Chinese-Korean. The results indicate that Chinese participants gave more positive evaluations (i.e., higher trust, higher satisfaction, and greater future collaboration intention) of the implicit advisor than American and German participants did. Compared with Chinese participants, Korean participants accepted explicit recommendations more often and evaluated the explicit advisor more positively. The results also show that when Chinese participants expressed recommendations in an explicit way, their recommendations were accepted more often and evaluated more positively by their cross-cultural partners. |
Automatic Detection and Classification of Road Lane Markings Using Onboard Vehicular Cameras | This paper presents a new approach for road lane classification using an onboard camera. Initially, lane boundaries are detected using a linear-parabolic lane model, and an automatic on-the-fly camera calibration procedure is applied. Then, an adaptive smoothing scheme is applied to reduce noise while keeping close edges separated, and pairs of local maxima-minima of the gradient are used as cues to identify lane markings. Finally, a Bayesian classifier based on mixtures of Gaussians is applied to classify the lane markings present at each frame of a video sequence as dashed, solid, dashed solid, solid dashed, or double solid. Experimental results indicate an overall accuracy of over 96% using a variety of video sequences acquired with different devices and resolutions. |
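The final classification step above can be illustrated with a toy Bayes rule over per-class Gaussians (a one-dimensional stand-in only: the feature, priors, and parameters below are invented for this sketch, and the paper's classifier uses mixtures of Gaussians over richer cues):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, classes):
    """classes: {label: (prior, mu, sigma)}.
    Return the label with the largest unnormalized posterior prior * likelihood."""
    return max(classes, key=lambda c: classes[c][0] * gaussian_pdf(x, *classes[c][1:]))

# Hypothetical feature: fraction of frame rows in which a marking is present.
classes = {
    "dashed": (0.5, 0.50, 0.10),   # markings cover about half the rows
    "solid":  (0.5, 0.95, 0.05),   # markings cover nearly every row
}
```

Extending this to the five lane-marking types of the paper just means one (mixture of) Gaussian(s) per class over the chosen gradient cues.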
ADOPTION OF FARM MANAGEMENT PRACTICES IN LOWLAND RICE PRODUCTION IN NORTHERN GHANA | The strategy of the Savannah Accelerated Development Authority (SADA) is based on the concept of a "Forested North," where agricultural production is modernized and oriented towards a larger market embracing the Sahelian countries, including northern Cote d'Ivoire and Togo. The modernization of agricultural production hinges on the adoption of efficient and sustainable farm management practices. The main objectives of the study were to find out: (1) farmers' perceptions of the most important farm management practices that are relevant in increasing their output or income; and (2) the determinants of the adoption of four soil fertility management practices (improved seed varieties, inorganic fertilizers, dibbling and sowing in rows). The methods of analysis involved Kendall's Coefficient of Concordance and the estimation of an Ordered Probit Model for the two objectives respectively. The survey covered seven districts in the Upper East and Northern Regions, involving a total of 300 lowland rice farmers. In order of importance, the farmers ranked the following as relevant in increasing their output and income: timely land preparation; good seed variety; soil fertility; water availability/irrigation; planting time; weed control; harvesting time; commodity price; and others (such as pest infestation). A Kendall's coefficient of 51% was recorded, which means that 51% of the respondents agreed on the ranking. The maximum likelihood estimation results of the probit model showed that extension visits, experience and training had a positive influence on the adoption of farm practices, while farm size, landownership and input distance had a negative effect on adoption. The farmers' field school and the extension delivery systems must be improved. More input shops must also be set up close to farmers for easy access to inputs. Also, while large-scale farming must be encouraged, this must not be done to the detriment of small-scale farming and the landless. Above all, it is important that whatever support is given to the farmers be timely so as to yield its full impact. |
One-dimensional scattering theory for quantum systems with nontrivial spatial asymptotics | We provide a general framework of stationary scattering theory for one-dimensional quantum systems with nontrivial spatial asymptotics. As a byproduct we characterize reflectionless potentials in terms of spectral multiplicities and properties of the diagonal Green's function of the underlying Schrödinger operator. Moreover, we prove that single (Crum-Darboux) and double commutation methods to insert eigenvalues into spectral gaps of a given background Schrödinger operator produce reflectionless potentials (i.e., solitons) if and only if the background potential is reflectionless. Possible applications of our formalism include impurity (defect) scattering in (half) crystals and charge transport in mesoscopic quantum-interference devices. |
Highly stretchable carbon aerogels | Carbon aerogels demonstrate wide applications for their ultralow density, rich porosity, and multifunctionalities. Their compressive elasticity has been achieved with different carbons. However, reversibly high stretchability of neat carbon aerogels is still a great challenge owing to their extremely dilute brittle interconnections and poorly ductile cells. Here we report highly stretchable neat carbon aerogels with a retractable 200% elongation through hierarchical synergistic assembly. The hierarchical buckled structures and synergistic reinforcement between graphene and carbon nanotubes enable a temperature-invariant, recoverable stretching elasticity with small energy dissipation (~0.1, 100% strain) and high fatigue resistance over more than 10^6 cycles. The ultralight carbon aerogels with both stretchability and compressibility were designed as strain sensors for logic identification of sophisticated shape conversions. Our methodology paves the way to highly stretchable carbon and neat inorganic materials with extensive applications in aerospace, smart robots, and wearable devices. Improved compressive elasticity was recently demonstrated for carbon aerogels, but the problem of reversible stretchability remained a challenge. Here the authors use a hierarchical structure design and synergistic effects between carbon nanotubes and graphene to achieve high stretchability in carbon aerogels. |
Evaluation of COPD progression based on spirometry and exercise capacity. | INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is characterized by an airflow limitation that is usually progressive. The progression of COPD, expressed as the rate of annual decline in FEV1, is very heterogeneous. Exercise capacity in COPD patients is often diminished and worsens over time. The purpose of the study was to examine how FEV1 and exercise capacity deteriorate over long-term observation.
MATERIAL AND METHODS
A total of 22 men with COPD were examined. At baseline, the average age was 59 ± 8.1 years and the mean post-bronchodilator FEV1 was 52 ± 14.9% predicted. Pulmonary function testing was performed at entry and then each year for 10 years, and exercise testing on a cycle ergometer was performed at entry and after 10 years.
RESULTS
FEV1, maximum oxygen uptake (VO2max), maximum mechanical work (Wmax), maximum minute ventilation (VEmax) and maximum tidal volume (VTmax) declined significantly over the observation time. The mean annual decline in FEV1 was 42 ± 37 mL, and the mean annual decline in VO2max was 30 ± 15 mL/min/yr (0.44 ± 0.25 mL/min/kg/yr). Regression analysis revealed that changes in FEV1 do not predict changes in VO2max. We observed a correlation between the annual change in VEmax and the annual change in VO2max (r = 0.51, p < 0.05). Baseline FEV1 (expressed as a percentage of predicted and in absolute values) is a predictor of the annual decline in FEV1 (r = 0.74 and 0.82; p < 0.05).
CONCLUSIONS
We observed a deterioration over time in exercise capacity in COPD patients that is independent of the decline in airflow limitation. Long-term follow-up of exercise capacity is therefore important in the monitoring of COPD patients, in addition to pulmonary function testing. |
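The annual-decline figures reported above are essentially least-squares slopes fitted to yearly measurements; a minimal illustration (the synthetic series below simply mirrors the reported 42 mL/yr mean decline and is not study data):

```python
import numpy as np

def annual_decline(years, fev1_ml):
    """Least-squares slope of FEV1 vs. time; decline is the negated slope (mL/yr)."""
    slope, _ = np.polyfit(years, fev1_ml, 1)
    return -slope

years = np.arange(11)              # yearly visits over 10 years of follow-up
fev1 = 2000 - 42 * years           # synthetic series declining 42 mL each year
decline = annual_decline(years, fev1)
```

Real series are noisy, which is why the per-patient standard deviation (± 37 mL) is almost as large as the mean slope itself.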
Diagnosis and compensation of amplitude imbalance, imperfect quadrant and offset in resolver signals | In this paper, a new method for detecting certain errors in resolver signals is presented. The sine and cosine signals at a resolver-to-digital converter's (RDC's) input may exhibit errors such as amplitude imbalance, imperfect quadrant, or DC offset. As with other sensors, defects in the mechanical structure and in the electrical or magnetic circuits cause these errors in the resolver output signals. The amplitude and phase of the resolver output signals, and consequently the calculated angle, are affected by these errors at every moment. The resolver output signals are more affected at their peak points than elsewhere, and these peak points are also easier and more accurate to analyze. Since each error has a unique effect on the signals, the error type and value can be detected by analyzing the effect on the signals' peak points. To avoid incorrect detection, an error signal is defined that measures the per-unit signals' deviation from the unit circle. Moreover, error detection requires no interruption of the converter's operation. The proposed method is simulated in the Matlab/Simulink environment, and the obtained results validate its effectiveness. It is also simulated on a TMS320F2812 DSP using CCS 3.3, and the results are presented. Furthermore, by implementing the present RDC on the motor controller DSP, it becomes a low-cost approach. |
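The unit-circle error signal and the peak-point reading can be sketched as follows (a simplified illustration of estimating offset and amplitude from signal extremes over one electrical period, not the paper's full diagnosis procedure):

```python
import math

def error_signal(sin_v, cos_v):
    """Deviation of a per-unit sine/cosine pair from the unit circle (0 when ideal)."""
    return sin_v ** 2 + cos_v ** 2 - 1.0

def offset_and_amplitude(samples):
    """From one period of samples: DC offset is the midpoint of the extremes,
    amplitude is half the peak-to-peak swing."""
    hi, lo = max(samples), min(samples)
    return (hi + lo) / 2.0, (hi - lo) / 2.0

n = 1000
theta = [2 * math.pi * k / n for k in range(n)]
sin_meas = [0.9 * math.sin(t) + 0.05 for t in theta]   # amplitude 0.9, offset 0.05
offset, amp = offset_and_amplitude(sin_meas)
```

An ideal pair gives `error_signal == 0` everywhere; amplitude imbalance or offset makes it oscillate, and the shape of that oscillation is what distinguishes the error types.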
All fiber-optic neural network using coupled SOA based ring lasers | An all-optical neural network is presented that is based on coupled lasers. Each laser in the network lases at a distinct wavelength, representing one neuron. The network state is determined by the wavelength of the network's light output. Inputs to the network are in the optical power domain. The nonlinear threshold function required for neural-network operation is achieved optically through interaction between the lasers. The behavior of the coupled lasers is explained by a simple laser model developed in the paper. In particular, the winner-take-all (WTA) neural-network behavior of a system of many lasers is described. An experimental system is implemented using single-mode fiber-optic components at wavelengths near 1550 nm. A number of functions are implemented to demonstrate the practicality of the new network. The neural network is particularly robust against input wavelength variations. |
ThreadSanitizer: data race detection in practice | Data races are a particularly unpleasant kind of threading bug. They are hard to find and reproduce -- you may not observe a bug during the entire testing cycle and will only see it in production as rare, unexplainable failures. This paper presents ThreadSanitizer -- a dynamic detector of data races. We describe the hybrid algorithm (based on happens-before and locksets) used in the detector. We introduce what we call dynamic annotations -- a sort of race detection API that allows a user to inform the detector about any tricky synchronization in the user program. Various practical aspects of using ThreadSanitizer for testing multithreaded C++ code at Google are also discussed. |
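The happens-before half of the hybrid algorithm can be illustrated with a toy vector-clock check (this is a sketch of the general technique, not ThreadSanitizer's actual implementation): two accesses to the same location race exactly when neither is ordered before the other.

```python
def happens_before(vc_a, vc_b):
    """True if the event with vector clock vc_a happens-before the one with vc_b."""
    return all(x <= y for x, y in zip(vc_a, vc_b)) and vc_a != vc_b

def is_race(access_a, access_b):
    """Accesses race if they are concurrent, i.e. unordered in both directions."""
    return (not happens_before(access_a, access_b)
            and not happens_before(access_b, access_a))

# Thread 0 writes with clock (1, 0); thread 1 writes with clock (0, 1):
# neither is ordered before the other, so this is a data race.
assert is_race((1, 0), (0, 1))

# Clock (1, 0) precedes (1, 1) (e.g. after a synchronizing release/acquire
# pair), so these two accesses are ordered and do not race.
assert not is_race((1, 0), (1, 1))
```

Lockset-based detection, the other half of the hybrid, instead flags locations accessed under no common lock; the paper combines both signals.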
SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution | Single image super resolution (SISR) aims to reconstruct a high-resolution image from a single low-resolution image. The SISR task has been a very attractive research topic over the last two decades. In recent years, convolutional neural network (CNN) based models have achieved great performance on the SISR task. Despite the breakthroughs achieved by CNN models, some problems remain unsolved, such as how to recover the high-frequency details of high-resolution images. Previous CNN-based models typically use a pixel-wise loss, such as the l2 loss. Although the high-resolution images constructed by these models have a high peak signal-to-noise ratio (PSNR), they often tend to be blurry and lack high-frequency details, especially at large scaling factors. In this paper, we build a super resolution perceptual generative adversarial network (SRPGAN) framework for SISR tasks. In the framework, we propose a robust perceptual loss based on the discriminator of the built SRPGAN model. We use the Charbonnier loss function to build the content loss and combine it with the proposed perceptual loss and the adversarial loss. Compared with other state-of-the-art methods, our method demonstrates a strong ability to construct images with sharp edges and rich details. We also evaluate our method on different benchmarks and compare it with previous CNN-based methods. The results show that our method can achieve much higher structural similarity index (SSIM) scores on most of the benchmarks than previous state-of-the-art methods. |
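The Charbonnier content loss the abstract mentions is a differentiable, robust variant of the L1 loss; a minimal sketch (the epsilon value is an assumption, not taken from the paper):

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: sqrt((x - y)^2 + eps^2), averaged over pixels."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

# Unlike plain L2, Charbonnier grows roughly linearly for large errors,
# penalizing outliers less harshly -- which discourages the over-smoothed
# reconstructions that pixel-wise L2 tends to produce.
pred = np.array([0.0, 0.5, 1.0])
target = np.array([0.0, 0.4, 0.0])
loss = charbonnier(pred, target)
```

In the paper's framework this term is only one part of the objective, combined with the discriminator-based perceptual loss and the adversarial loss.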
Long Short Term Memory Recurrent Neural Network Classifier for Intrusion Detection | Due to the advance of information and communication technologies, information sharing online has increased, creating new added value and, as a result, a variety of online services. However, as connection points to the internet multiply, cyber-security threats have also been increasing. The intrusion detection system (IDS) is one of today's important security issues. In this paper, we construct an IDS model with a deep learning approach. We apply the Long Short Term Memory (LSTM) architecture to a Recurrent Neural Network (RNN) and train the IDS model using the KDD Cup 1999 dataset. Through performance testing, we confirm that the deep learning approach is effective for IDS. |
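The LSTM architecture the abstract applies can be sketched as a single gated cell step in NumPy. The weights below are random stand-ins and the dimensions are invented; a real IDS would learn them from the KDD Cup 1999 features:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step. W maps [x; h] to the 4 stacked gate pre-activations."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)                       # input, forget, cell, output gates
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell-state update
    h_new = sigmoid(o) * np.tanh(c_new)               # emit hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 8, 4                       # e.g. 8 per-connection features (illustrative)
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):                       # run over a 5-step event sequence
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, b)
```

The final hidden state `h` would feed a small classifier head that labels the sequence as normal or an attack.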
A Multi-View Fusion Neural Network for Answer Selection | Community question answering aims at choosing the most appropriate answer for a given question, which is important in many NLP applications. Previous neural network-based methods consider several different aspects of information through calculating attentions. These different kinds of attentions are always simply summed up and can be seen as a “single view”, causing severe information loss. To overcome this problem, we propose a Multi-View Fusion Neural Network, where each attention component generates a “view” of the QA pair and a fusion RNN integrates the generated views to form a more holistic representation. In this fusion RNN method, a filter gate collects important information of input and directly adds it to the output, which borrows the idea of residual networks. Experimental results on the WikiQA and SemEval-2016 CQA datasets demonstrate that our proposed model outperforms the state-of-the-art methods. |
A Probabilistic Analysis of Link Duration in Vehicular Ad Hoc Networks | The past decade has witnessed a phenomenal market penetration of wireless communications and a steady increase in the number of mobile users. Unlike wired networks, where communication links are inherently stable, in wireless networks, the lifetime of a link is a random variable whose probability distribution depends on mobility, transmission range, and various impairments of radio communications. Because of the very dynamic nature of Vehicular Ad hoc NETworks (VANETs) and the short transmission range mandated by the Federal Communications Commission (FCC), individual communication links come into existence and vanish unpredictably, making the task of establishing and maintaining routing paths between fast-moving vehicles very challenging. The main contribution of this work is to investigate the probability distribution of the lifetime of individual links in a VANET under the combined assumptions of a realistic radio transmission model and a realistic probability distribution model of intervehicle headway distance. Our analytical results were validated and confirmed by extensive simulation. |
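The idea of a link lifetime as a random variable can be illustrated with a deliberately simplified Monte Carlo sketch. All numbers and the deterministic mobility model below are illustrative assumptions; the paper itself uses a realistic radio model and a realistic intervehicle headway distribution:

```python
import random

def link_lifetime(initial_gap, rel_speed, R):
    """Time until a gap growing at rel_speed (m/s) from initial_gap (m) exceeds R."""
    return max(0.0, (R - initial_gap) / rel_speed)

random.seed(1)
R = 300.0  # transmission range in metres (short DSRC-like range, illustrative)

# Sample many links: random initial separation and random relative speed.
lifetimes = [link_lifetime(random.uniform(0, R), random.uniform(1, 10), R)
             for _ in range(10_000)]
mean_life = sum(lifetimes) / len(lifetimes)
```

Even this toy model shows the qualitative point of the abstract: with short ranges and high relative speeds, the empirical lifetime distribution concentrates on short durations, which is what makes route maintenance hard.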
Learning Multiple Tasks with Deep Relationship Networks | Deep networks trained on large-scale data can learn transferable features to promote learning multiple tasks. As deep features eventually transition from general to specific along deep networks, a fundamental problem is how to exploit the relationship across different tasks and improve the feature transferability in the task-specific layers. In this paper, we propose Deep Relationship Networks (DRN) that discover the task relationship based on novel tensor normal priors over the parameter tensors of multiple task-specific layers in deep convolutional networks. By jointly learning transferable features and task relationships, DRN is able to alleviate the dilemma of negative-transfer in the feature layers and under-transfer in the classifier layer. Extensive experiments show that DRN yields state-of-the-art results on standard multi-task learning benchmarks. |
Multi-Layer Background Subtraction Based on Color and Texture | In this paper, we propose a robust multi-layer background subtraction technique which takes advantage of local texture features represented by local binary patterns (LBP) and photometric-invariant color measurements in RGB color space. LBP works robustly with respect to light variation on richly textured regions but less efficiently on uniform regions; in the latter case, color information overcomes LBP's limitation. Due to the illumination invariance of both the LBP feature and the selected color feature, the method is able to handle local illumination changes such as cast shadows from moving objects. Due to the use of a simple layer-based strategy, the approach can model moving background pixels with quasi-periodic flickering as well as background scenes which may vary over time due to the addition and removal of long-time stationary objects. Finally, the use of a cross-bilateral filter allows us to implicitly smooth detection results over regions of similar intensity and to preserve object boundaries. Numerical and qualitative experimental results on both simulated and real data demonstrate the robustness of the proposed method. |
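A minimal 8-neighbour LBP computation for one interior pixel makes the texture feature concrete: threshold the neighbours against the centre and pack the results into a byte. The neighbour ordering is an illustrative convention, not necessarily the paper's:

```python
import numpy as np

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1, 1]
    # neighbours listed clockwise starting from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum((1 << k) for k, n in enumerate(neighbours) if n >= c)

patch = np.array([[9, 9, 1],
                  [1, 5, 1],
                  [9, 9, 9]])
code = lbp_code(patch)

# LBP depends only on sign comparisons against the centre, so it is
# invariant to monotonic grey-level changes: a global illumination shift
# leaves the code unchanged, which is the robustness the abstract exploits.
assert lbp_code(patch + 50) == code
```

On uniform regions all comparisons hover around equality and the code becomes noisy, which is exactly where the method falls back on the color feature.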
The Suffocation of Marriage: Climbing Mount Maslow Without Enough Oxygen | This article distills insights from historical, sociological, and psychological perspectives on marriage to develop the suffocation model of marriage in America. According to this model, contemporary Americans are asking their marriage to help them fulfill different sets of goals than in the past. Whereas they ask their marriage to help them fulfill their physiological and safety needs much less than in the past, they ask it to help them fulfill their esteem and self-actualization needs much more than in the past. Asking the marriage to help them fulfill the latter, higher level needs typically requires sufficient investment of time and psychological resources to ensure that the two spouses develop a deep bond and profound insight into each other’s essential qualities. Although some spouses are investing sufficient resources—and reaping the marital and psychological benefits of doing so—most are not. Indeed, they are, on average, investing less than in the past. As a result, mean levels of marital quality and personal well-being are declining over time. According to the suffocation model, spouses who are struggling with an imbalance between what they are asking from their marriage and what they are investing in it have several promising options for corrective action: intervening to optimize their available resources, increasing their investment of resources in the marriage, and asking less of the marriage in terms of facilitating the fulfillment of spouses’ higher needs. Discussion explores the implications of the suffocation model for understanding dating and courtship, sociodemographic variation, and marriage beyond America’s borders. |
MOOCBuddy: a Chatbot for personalized learning with MOOCs | With the proliferation of MOOC (Massive Open Online Course) providers, like Coursera, edX, FutureLearn, UniCampus.ro, NOVAMOOC.uvt.ro or MOOC.ro, finding the best learning resource is a real challenge. MOOCBuddy, a MOOC recommender system built as a chatbot for Facebook Messenger and based on the user's social media profile and interests, could be a solution. Chatbots look like the big trend of 2016, following the Messenger Platform launched by Facebook in mid-April 2016. |
cryoSPARC: algorithms for rapid unsupervised cryo-EM structure determination | Single-particle electron cryomicroscopy (cryo-EM) is a powerful method for determining the structures of biological macromolecules. With automated microscopes, cryo-EM data can often be obtained in a few days. However, processing cryo-EM image data to reveal heterogeneity in the protein structure and to refine 3D maps to high resolution frequently becomes a severe bottleneck, requiring expert intervention, prior structural knowledge, and weeks of calculations on expensive computer clusters. Here we show that stochastic gradient descent (SGD) and branch-and-bound maximum likelihood optimization algorithms permit the major steps in cryo-EM structure determination to be performed in hours or minutes on an inexpensive desktop computer. Furthermore, SGD with Bayesian marginalization allows ab initio 3D classification, enabling automated analysis and discovery of unexpected structures without bias from a reference map. These algorithms are combined in a user-friendly computer program named cryoSPARC (http://www.cryosparc.com). |
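Stochastic gradient descent, the workhorse the abstract credits for the speedup, can be shown in miniature. A toy 1-D least-squares problem stands in for cryo-EM map refinement here; none of this is cryoSPARC's actual code, and all constants are illustrative:

```python
import random

random.seed(0)
# 1000 noisy observations of an unknown value mu = 2.0
data = [2.0 + random.gauss(0, 0.1) for _ in range(1000)]

mu, lr = 0.0, 0.1
for step in range(2000):
    x = random.choice(data)   # one random sample per step -- the "stochastic" part
    grad = 2 * (mu - x)       # gradient of the single-sample loss (mu - x)^2
    mu -= lr * grad           # noisy descent step

# mu converges to a neighbourhood of the sample mean (about 2.0), at a
# per-step cost independent of the dataset size.
```

The point carried over to cryo-EM is that each step touches only a small random subset of the data, which is what lets refinement run on a desktop rather than a cluster.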
Compact combline filter with improved cross coupling assembly and temperature compensation | A base-station bandpass filter using compact stepped combline resonators is presented. The bandpass filter consists of 4 resonators, has a center frequency of 2.0175 GHz, a bandwidth of 15 MHz and cross-coupling by a cascaded quadruplet for improved blocking performance. The combline resonators have different sizes; therefore, different temperature compensation arrangements need to be applied to guarantee stable performance over the temperature range from -40 °C to +85 °C. The layout is discussed, a novel cross coupling assembly is introduced, and measurement results are shown. |
Impact of adjunctive cilostazol therapy on platelet function profiles in patients with and without diabetes mellitus on aspirin and clopidogrel therapy. | Cilostazol is a platelet inhibitor which, when added to aspirin and clopidogrel, has been shown to reduce the risk of recurrent ischaemic events without an increase in bleeding. These clinical benefits have been shown to be more pronounced in patients with diabetes mellitus (DM). However, it remains unknown whether cilostazol exerts different pharmacodynamic effects in patients with and without DM. This was a randomised, double-blind, placebo-controlled, cross-over pharmacodynamic study comparing platelet function in patients with and without DM on aspirin and clopidogrel therapy. Patients (n=111) were randomly assigned to either cilostazol 100 mg or placebo twice daily for 14 days and afterwards crossed over to the alternative treatment for another 14 days. Platelet function testing was performed at baseline, 14 days post-randomisation, and 14 days post-cross-over. Functional testing to assess P2Y12 signalling included flow cytometric analysis of the phosphorylation status of vasodilator-stimulated phosphoprotein, measured by the P2Y12 reactivity index (PRI), light transmittance aggregometry, and VerifyNow. Thrombin generation processes were also studied using thrombelastography. Significantly lower PRI values were observed following treatment with cilostazol compared with placebo in both the DM and non-DM groups (p < 0.0001). The between-treatment comparison showed a 35.1% lower PRI in patients with DM (p=0.039). Similar results were obtained with all other functional measures assessing P2Y12 signalling. Thrombin generation was not affected by cilostazol. Cilostazol reduces platelet reactivity in patients both with and without DM, although these pharmacodynamic effects are enhanced in patients with DM.
Despite the marked platelet inhibition, cilostazol does not alter thrombin-mediated haemostatic processes, which may explain its ischaemic benefit without the increased risk of bleeding. |
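The VASP-based platelet reactivity index (PRI) quoted in the abstract is conventionally computed from mean fluorescence intensities (MFI) of phosphorylated VASP with PGE1 alone versus PGE1 plus ADP; the numbers below are invented for illustration, and only the formula is standard:

```python
def pri(mfi_pge1, mfi_pge1_adp):
    """P2Y12 reactivity index in percent; lower values mean stronger P2Y12 inhibition."""
    return 100.0 * (mfi_pge1 - mfi_pge1_adp) / mfi_pge1

# Hypothetical MFI readings for one patient under each study arm.
placebo = pri(mfi_pge1=80.0, mfi_pge1_adp=30.0)     # weaker inhibition
cilostazol = pri(mfi_pge1=80.0, mfi_pge1_adp=55.0)  # added inhibition

# Added P2Y12 inhibition preserves VASP phosphorylation under ADP,
# which lowers the PRI -- the direction of effect reported in the abstract.
assert cilostazol < placebo
```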
Novel Bow-tie–AMC Combination for 5.8-GHz RFID Tags Usable With Metallic Objects | A novel combination of coplanar waveguide (CPW)-fed double bow-tie slot antenna and artificial magnetic conductor (AMC), which meets the requirements of a 5.8-GHz SHF RFID tag antenna usable on metallic objects, is presented. The manufactured prototype is characterized in terms of return loss, gain, and radiation pattern measurements in an anechoic chamber, both alone and on a metallic plate. |
Treatment Effects on Measures of Body Composition in the TODAY Clinical Trial | OBJECTIVE
The Treatment Options for type 2 Diabetes in Adolescents and Youth (TODAY) trial showed superiority of metformin plus rosiglitazone (M+R) over metformin alone (M), with metformin plus lifestyle (M+L) intermediate in maintaining glycemic control. We report here treatment effects on measures of body composition and their relationships to demographic and metabolic variables including glycemia.
RESEARCH DESIGN AND METHODS
Measures of adiposity (BMI, waist circumference, abdominal height, percent and absolute fat, and bone mineral content [BMC] and density [BMD]) were analyzed as change from baseline at 6 and 24 months.
RESULTS
Measures of fat accumulation were greatest in subjects treated with M+R and least in M+L. Although fat measures in M+L were less than those of M+R and M at 6 months, differences from M were no longer apparent at 24 months, whereas differences from M+R persisted at 24 months. The only body composition measure differing by race and/or ethnicity was waist circumference, greater in M+R than either M or M+L at both 6 and 24 months in whites. BMD and BMC increased in all groups, but increased less in M+R compared with the other two groups by 24 months. Measures of adiposity (increases in BMI, waist circumference, abdominal height, and fat) were associated with reduced insulin sensitivity and increased hemoglobin A1c (HbA1c), although effects of adiposity on HbA1c were less evident in those treated with M+R.
CONCLUSIONS
Despite differential effects on measures of adiposity (with M+R resulting in the most and M+L in the least fat accumulation), group differences generally were small and unrelated to treatment effects in sustaining glycemic control. |
An Evaluation of Intel Software Guard Extensions Through Emulation | The Intel Software Guard Extensions (SGX) technology, recently introduced in new generations of x86 processors, allows the execution of applications in a fully protected environment (i.e., within enclaves). Because the technology is recent, machines that provide it are still a minority. In order to evaluate SGX, an emulator of this technology (called OpenSGX) implements and replicates the main functionalities and structures used in SGX. The focus is to evaluate the overhead that results from running an application within an environment with emulated SGX. For the evaluation, benchmark applications from the MiBench suite were employed. As performance metrics, we gathered the total number of instructions and the total number of CPU cycles for the execution of each application with and without OpenSGX. |
Approximating Thin-Plate Splines for Elastic Registration: Integration of Landmark Errors and Orientation Attributes | We introduce an approach to elastic registration of tomographic images based on thin-plate splines. Central to this scheme is a well-defined minimizing functional for which the solution can be stated analytically. In this work, we consider the integration of anisotropic landmark errors as well as additional attributes at landmarks. As attributes we use orientations at landmarks, and we incorporate the corresponding constraints through scalar products. With our approximation scheme it is thus possible to integrate statistical as well as geometric information as additional knowledge in elastic image registration. On the basis of synthetic as well as real tomographic images we show that this additional knowledge can significantly improve the registration result. In particular, we demonstrate that our scheme incorporating orientation attributes can preserve the shape of rigid structures (such as bone) embedded in an otherwise elastic material. This is achieved without selecting further landmarks and without a full segmentation of the rigid structures. |
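The thin-plate-spline machinery underlying this scheme can be sketched in its plain, exact-interpolation form (the paper generalizes this with anisotropic landmark errors and orientation attributes, which this sketch omits):

```python
import numpy as np

def tps_kernel(r2):
    """TPS radial basis U(r) = r^2 log r, written in terms of r^2 with U(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        u = 0.5 * r2 * np.log(r2)
    return np.nan_to_num(u)

def fit_tps(src, vals):
    """Solve for TPS weights mapping 2-D landmarks `src` to scalar values `vals`."""
    n = len(src)
    d2 = np.sum((src[:, None] - src[None, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])       # affine part: 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([vals, np.zeros(3)])   # side conditions on the weights
    return np.linalg.solve(A, rhs)

def eval_tps(params, src, pts):
    w, a = params[:len(src)], params[len(src):]
    d2 = np.sum((pts[:, None] - src[None, :]) ** 2, axis=-1)
    return tps_kernel(d2) @ w + a[0] + pts @ a[1:]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])           # here the landmark data is affine: x + y
params = fit_tps(src, vals)
# The spline interpolates the landmark values exactly; for affine data the
# bending term vanishes and the affine part carries the whole mapping.
```

In registration, one such spline is fitted per coordinate of the target landmarks; the paper's approximating variant replaces exact interpolation with a weighted fit that admits landmark uncertainty.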
Art therapy improved depression and influenced fatigue levels in cancer patients on chemotherapy. | INTRODUCTION
Cancer patients are particularly vulnerable to depression and anxiety, with fatigue as the most prevalent symptom of those undergoing treatment. The purpose of this study was to determine whether improvement in depression, anxiety or fatigue during chemotherapy following anthroposophy art therapy intervention is substantial enough to warrant a controlled trial.
MATERIAL AND METHODS
Sixty cancer patients on chemotherapy and willing to participate in once-weekly art therapy sessions (painting with water-based paints) were accrued for the study. Nineteen patients who participated in ≥4 sessions were evaluated as the intervention group, and 41 patients who participated in ≤2 sessions comprised the participant group. The Hospital Anxiety and Depression Scale (HADS) and the Brief Fatigue Inventory (BFI) were completed before every session, relating to the previous week.
RESULTS
BFI scores were higher in the participant group (p=0.06). In the intervention group, the median HADS score for depression was 9 at the beginning and 7 after the fourth appointment (p=0.021). The median BFI score changed from 5.7 to 4.1 (p=0.24). The anxiety score was in the normal range from the beginning.
CONCLUSION
Anthroposophical art therapy is worthy of further study in the treatment of cancer patients with depression or fatigue during chemotherapy treatment. |
Agile Modeling with the UML | This paper discusses a model-based approach to software development. It argues that an approach using models as central development artifact needs to be added to the portfolio of software engineering techniques, to further increase efficiency and flexibility of the development as well as quality and reusability of the results. Two major and strongly related techniques are identified and discussed: Test case modeling and an evolutionary approach to model transformation. |
Analysis of the Cyber Attacks against ADS-B Perspective of Aviation Experts | The present paper offers a thorough literature review of the relation between cyber security and aviation, and of the vulnerabilities introduced by the increasing use of information systems in the aviation realm. Civil aviation is evolving its air traffic management system through the introduction of new technologies, and the modernization of aeronautical communications is creating network security issues in aviation that have not yet been mitigated. The purpose of this thesis is a systematic qualitative analysis of cyber-attacks against Automatic Dependent Surveillance-Broadcast (ADS-B). With this analysis, the paper combines the knowledge of two fields that must deal together with security issues in aviation. The thesis focuses on the exploitation of ADS-B vulnerabilities and presents an analysis from the perspective of cyber-security and aviation experts. The threats to ADS-B are depicted, classified and evaluated by aviation experts through interviews, in order to determine their possible impact and the actions that would follow should a cyber-attack occur. The interview results show that some attacks do not really represent a problem for operators of the system, while others may create considerable confusion due to their complexity. Experience is a determining factor for ADS-B operators; based on it, aviation experts proposed a set of mitigations that can help cope with a cyber-attack situation. This analysis can be used as a reference guide for understanding the impact of cyber-security threats in aviation and the need for the research and aviation communities to broaden their knowledge and raise their level of expertise in order to face the challenges posed by network security issues. The thesis is in English and contains 58 pages of text, 5 chapters, 17 figures, and 15 tables. |
N-terminal Pro-B-type natriuretic peptide and its correlation to haemodialysis-induced myocardial stunning. | BACKGROUND
Haemodialysis (HD) is able to induce recurrent myocardial ischemia and segmental left-ventricular dysfunction (myocardial stunning). The association of N-terminal Pro-B-type natriuretic peptide (NTpro-BNP) with HD-induced myocardial stunning is unclear.
METHODS
In 70 prevalent HD patients, HD-induced myocardial stunning was assessed echocardiographically at baseline and after 12 months. The extent to which pre-dialysis NTpro-BNP was associated with the occurrence of HD-induced myocardial stunning was assessed as the primary endpoint.
RESULTS
The median NTpro-BNP concentration in this cohort was 2,154 pg/ml (IQR 1,224-3,014). Patients experiencing HD-induced myocardial stunning at either time point displayed elevated NTpro-BNP values (2,418 pg/ml, IQR 1,583-3,474 vs. 1,751 pg/ml, IQR 536-2,029; p = 0.02). NTpro-BNP levels did not differ between patients showing HD-induced stunning at baseline and those developing stunning during the observational period (p = 0.8). NTpro-BNP levels drawn at the beginning of the dialysis session achieved poor diagnostic accuracy for the detection of myocardial stunning (area under the ROC curve 0.61, 95% CI 0.45-0.77), but provided an accurate rule-out for myocardial stunning during the subsequent year (AUC 0.85, 95% CI 0.70-0.99). The calculated cut-off of 1,570 pg/ml achieved a sensitivity of 66% and a specificity of 78% for the exclusion of myocardial stunning at any time point. In logistic regression analysis, only low NTpro-BNP levels (OR 0.92 for every additional 100 pg/ml, 95% CI 0.85-0.99, p = 0.03) were significantly associated with the absence of myocardial stunning at any time point.
CONCLUSION
Predialytic NTpro-BNP levels fail to adequately diagnose current dialysis-induced myocardial stunning, but help to identify patients with a propensity to develop dialysis-induced myocardial stunning at any time during the next 12 months. |
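How a rule-out cutoff like the abstract's 1,570 pg/ml yields a sensitivity and specificity can be shown with standard formulas on a toy dataset; the patient values below are invented, only the computation is conventional:

```python
def sens_spec(values, has_stunning, cutoff):
    """Treat value >= cutoff as a positive test for stunning; return (sens, spec)."""
    tp = sum(v >= cutoff for v, y in zip(values, has_stunning) if y)
    fn = sum(v < cutoff for v, y in zip(values, has_stunning) if y)
    tn = sum(v < cutoff for v, y in zip(values, has_stunning) if not y)
    fp = sum(v >= cutoff for v, y in zip(values, has_stunning) if not y)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical pre-dialysis NTpro-BNP values (pg/ml) and stunning outcomes.
ntprobnp = [900, 2200, 1800, 2400, 3100, 600, 1300, 1500]
stunning = [False, False, True, True, True, False, True, False]

sens, spec = sens_spec(ntprobnp, stunning, cutoff=1570)
```

An ROC curve is simply this pair traced out over all cutoffs; the abstract's AUC of 0.85 for the one-year endpoint summarises that curve.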
BeeParking: an ambient display to induce cooperative parking behavior | Interactive ambient systems offer a great potential for attracting user attention, raising awareness and supporting the acquisition of more desirable behaviors in the shared use of limited resources, like physical or digital spaces, energy, water and so on. In this paper we describe the iterative design of BeeParking, an ambient display and automatic notification system aimed to induce more cooperative use of a parking facility within a work environment. We also report main findings from a longitudinal in-situ evaluation showing how the system was adopted and how it affected users' parking behavior over time. |
Spatio-Temporal Data Mining for Climate Data: Advances, Challenges, and Opportunities | Our planet is experiencing simultaneous changes in global population, urbanization, and climate. These changes, along with the rapid growth of climate data and increasing popularity of data mining techniques may lead to the conclusion that the time is ripe for data mining to spur major innovations in climate science. However, climate data bring forth unique challenges that are unfamiliar to the traditional data mining literature, and unless they are addressed, data mining will not have the same powerful impact that it has had on fields such as biology or e-commerce. In this chapter, we refer to spatio-temporal data mining (STDM) as a collection of methods that mine the data’s spatio-temporal context to increase an algorithm’s accuracy, scalability, or interpretability (relative to non-space-time aware algorithms). We highlight some of the singular characteristics and challenges STDM faces within climate data and their applications, and provide the reader with an overview of the advances in STDM and related climate applications. We also demonstrate some of the concepts introduced in the chapter’s earlier sections with a real-world STDM pattern mining application to identify mesoscale ocean eddies from satellite data. The case-study provides the reader with concrete examples of challenges faced when mining climate data and how effectively analyzing the data’s spatio-temporal context may improve existing methods’ accuracy, interpretability, and scalability. We end the chapter with a discussion of notable opportunities for STDM research within climate. |
Human Reasoning and Cognitive Science | |
Computational Thinking: What and Why? | In my March 2006 CACM article I used the term "computational thinking" to articulate a vision that everyone, not just those who major in computer science, can benefit from thinking like a computer scientist [Wing06]. So, what is computational thinking? Here is a definition that Jan Cuny, Larry Snyder, and I use; it is inspired by an email exchange I had with Al Aho of Columbia University: Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [CunySnyderWing10]. Informally, computational thinking describes the mental activity in formulating a problem to admit a computational solution. The solution can be carried out by a human or machine, or more generally, by combinations of humans and machines. When I use the term computational thinking, my interpretation of the words "problem" and "solution" is broad; in particular, I mean not just mathematically well-defined problems whose solutions are completely analyzable, e.g., a proof, an algorithm, or a program, but also real-world problems whose solutions might be in the form of large, complex software systems. Thus, computational thinking overlaps with logical thinking and systems thinking. It includes algorithmic thinking and parallel thinking, which in turn engage other kinds of thought processes, e.g., compositional reasoning, pattern matching, procedural thinking, and recursive thinking. Computational thinking is used in the design and analysis of problems and their solutions, broadly interpreted. The most important and high-level thought process in computational thinking is the abstraction process. Abstraction is used in defining patterns, generalizing from instances, and parameterization. It is used to let one object stand for many.
It is used to capture essential properties common to a set of objects while hiding irrelevant distinctions among them. For example, an algorithm is an abstraction of a process that takes inputs, executes a sequence of steps, and produces outputs to satisfy a desired goal. An abstract data type defines an abstract set of values and operations for manipulating those values, hiding the actual representation of the values from the user of the abstract data type. Designing efficient algorithms inherently involves designing abstract data types. Abstraction gives us the power to scale and deal with complexity. Recursively applying abstraction gives us the ability to build larger and larger systems, with the base case (at least for computer science) being bits (0's … |
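The abstract-data-type idea described above is the textbook case of hiding representation behind operations; a minimal illustration (this example is mine, not from the article):

```python
class Stack:
    """A stack ADT: users see push/pop/is_empty, never the representation."""

    def __init__(self):
        self._items = []          # hidden representation (could be anything)

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()  # LIFO discipline

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2); s.push(3)
order = [s.pop(), s.pop(), s.pop()]   # last in, first out
```

Swapping the list for a linked list would change nothing for the user of the type, which is exactly the "hiding irrelevant distinctions" the text describes.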
mm-Wave Silicon ICs: Challenges and Opportunities | Millimeter-waves offer promising opportunities and interesting challenges to silicon integrated circuit and system designers. These challenges go beyond standard circuit design questions and span a broader range of topics including wave propagation, antenna design, and communication channel capacity limits. It is only meaningful to evaluate the benefits and shortcoming of silicon-based mm-wave integrated circuits in this broader context. This paper reviews some of these issues and presents several solutions to them. |
Lane detection using spline model | In this paper, a Catmull–Rom spline-based lane model which describes the perspective effect of parallel lines has been proposed for generic lane boundaries. Since a Catmull–Rom spline can form arbitrary shapes with different sets of control points, it can describe a wider range of lane structures than other lane models, i.e. straight and parabolic models. Moreover, the lane detection problem has been formulated here as the problem of determining the set of control points of the lane model. The proposed algorithm first detects the vanishing point (line) using a Hough-like technique and then solves the lane detection problem with a maximum likelihood approach. We have also employed a multi-resolution strategy for rapidly achieving an accurate solution. This coarse-to-fine matching offers an acceptable solution at an affordable computational cost, and thus speeds up the process of lane detection. As a result, the proposed method is robust to noise, shadows, and illumination variations in the captured road images, and is applicable to both marked and unmarked roads. |
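The Catmull–Rom segment evaluation that underlies such a lane model is compact: the curve between p1 and p2 is shaped by all four control points, which is what lets a small set of control points describe varied lane shapes. A sketch with the standard basis (the example points are illustrative):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull–Rom segment from p1 to p2 at parameter t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

pts = [np.array(p, float) for p in [(0, 0), (1, 1), (2, 1), (3, 0)]]
start = catmull_rom(*pts, 0.0)   # the segment interpolates p1 exactly
end = catmull_rom(*pts, 1.0)     # ... and p2 exactly
mid = catmull_rom(*pts, 0.5)     # interior point bent by p0 and p3
```

Because the spline interpolates its control points, fitting a lane reduces to estimating a handful of control-point positions, which is how the paper casts detection as a maximum likelihood problem over control points.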
Fabrication and Characteristics of Ta₂O₅/Al/SiO₂/p-Si MIS Solar Cells | The fabrication procedure and characteristics of Ta₂O₅/Al/SiO₂/p-Si MIS solar cells, formed by evaporating a fine aluminum grating pattern onto p-type silicon crystal, are described. The proper temperature for oxide growth in these cells was found to be about 450°C for 20 minutes under oxygen flow. The conversion efficiency increased by about 3% after spin-on coating of a 750 Å-thick tantalum oxide anti-reflective film. The best results showed Voc = 0.545 V, Jsc = 34 mA and F.F. = 0.65, corresponding to a conversion efficiency of 12%. |
Self efficacy. | Academic motivation is discussed in terms of self-efficacy, an individual's judgments of his or her capabilities to perform given actions. After presenting an overview of self-efficacy theory, I contrast self-efficacy with related constructs (perceived control, outcome expectations, perceived value of outcomes, attributions, and self-concept) and discuss some efficacy research relevant to academic motivation. Studies of the effects of person variables (goal setting and information processing) and situation variables (models, attributional feedback, and rewards) on self-efficacy and motivation are reviewed. In conjunction with this discussion, I mention substantive issues that need to be addressed in the self-efficacy research and summarize evidence on the utility of self-efficacy for predicting motivational outcomes. Areas for future research are suggested. Article: The concept of personal expectancy has a rich history in psychological theory on human motivation (Atkinson, 1957; Rotter, 1966; Weiner, 1979). Research conducted within various theoretical traditions supports the idea that expectancy can influence behavioral instigation, direction, effort, and persistence (Bandura, 1986; Locke & Latham, 1990; Weiner, 1985). In this article, I discuss academic motivation in terms of one type of personal expectancy: self-efficacy, defined as "People's judgments of their capabilities to organize and execute courses of action required to attain designated types of performances" (Bandura, 1986, p. 391). Since Bandura's (1977) seminal article on self-efficacy, much research has clarified and extended the role of self-efficacy as a mechanism underlying behavioral change, maintenance, and generalization. 
For example, there is evidence that self-efficacy predicts such diverse outcomes as academic achievements, social skills, smoking cessation, pain tolerance, athletic performances, career choices, assertiveness, coping with feared events, recovery from heart attack, and sales performance (Bandura, 1986). After presenting an overview of self-efficacy theory and comparison of self-efficacy with related constructs, I discuss some self-efficacy research relevant to academic motivation, pointing out substantive issues that need to be addressed. I conclude with recommendations for future research. SELF-EFFICACY THEORY Antecedents and Consequences Bandura (1977) hypothesized that self-efficacy affects an individual's choice of activities, effort, and persistence. People who have a low sense of efficacy for accomplishing a task may avoid it; those who believe they are capable should participate readily. Individuals who feel efficacious are hypothesized to work harder and persist longer when they encounter difficulties than those who doubt their capabilities. Self-efficacy theory postulates that people acquire information to appraise efficacy from their performance accomplishments, vicarious (observational) experiences, forms of persuasion, and physiological indexes. An individual's own performances offer the most reliable guides for assessing efficacy. Successes raise efficacy and failure lowers it, but once a strong sense of efficacy is developed, a failure may not have much impact (Bandura, 1986). An individual also acquires capability information from knowledge of others. Similar others offer the best basis for comparison (Schunk, 1989b). Observing similar peers perform a task conveys to observers that they too are capable of accomplishing it. Information acquired vicariously typically has a weaker effect on self-efficacy than performance-based information; a vicarious increase in efficacy can be negated by subsequent failures. 
Students often receive persuasory information that they possess the capabilities to perform a task (e.g., "You can do this"). Positive persuasory feedback enhances self-efficacy, but this increase will be temporary if subsequent efforts turn out poorly. Students also derive efficacy information from physiological indexes (e.g., heart rate and sweating). Bodily symptoms signaling anxiety might be interpreted to indicate a lack of skills. Information acquired from these sources does not automatically influence efficacy; rather, it is cognitively appraised (Bandura, 1986). Efficacy appraisal is an inferential process in which persons weigh and combine the contributions of such personal and situational factors as their perceived ability, the difficulty of the task, amount of effort expended, amount of external assistance received, number and pattern of successes and failures, their perceived similarity to models, and persuader credibility (Schunk, 1989b). Self-efficacy is not the only influence on behavior; it is not necessarily the most important. Behavior is a function of many variables. In achievement settings some other important variables are skills, outcome expectations, and the perceived value of outcomes (Schunk, 1989b). High self-efficacy will not produce competent performances when requisite skills are lacking. Outcome expectations, or beliefs concerning the probable outcomes of actions, are important because individuals are not motivated to act in ways they believe will result in negative outcomes. Perceived value of outcomes refers to how much people desire certain outcomes relative to others. Given adequate skills, positive outcome expectations, and personally valued outcomes, self-efficacy is hypothesized to influence the choice and direction of much human behavior (Bandura, 1989b). Schunk (1989b) discussed how self-efficacy might operate during academic learning. 
At the start of an activity, students differ in their beliefs about their capabilities to acquire knowledge, perform skills, master the material, and so forth. Initial self-efficacy varies as a function of aptitude (e.g., abilities and attitudes) and prior experience. Such personal factors as goal setting and information processing, along with situational factors (e.g., rewards and teacher feedback), affect students while they are working. From these factors students derive cues signaling how well they are learning, which they use to assess efficacy for further learning. Motivation is enhanced when students perceive they are making progress in learning. In turn, as students work on tasks and become more skillful, they maintain a sense of self-efficacy for performing well. |
Cascaded H-bridge multilevel converter multistring topology for large scale photovoltaic systems | Large scale grid connected photovoltaic (PV) energy conversion systems have reached the megawatt level. This imposes new challenges on existing grid interface converter topologies and opens new opportunities to be explored. In this paper a new medium voltage multilevel-multistring configuration is introduced based on a three-phase cascaded H-bridge (CHB) converter and multiple string dc-dc converters. The proposed configuration enables a large increase of the total capacity of the PV system, while improving power quality and efficiency. The converter structure is very flexible and modular since it decouples the grid converter from the PV string converter, which allows independent control goals to be accomplished. The main challenge of the proposed configuration is to handle the inherent power imbalances that occur not only between the different cells of one phase of the converter but also between the three phases. The control strategy to deal with these imbalances is also introduced in this paper. Simulation results of a 7-level CHB for a multistring PV system are presented to validate the proposed topology and control method. |
Provenance of Holocene beach sand in the Western Iberian margin: the use of the Kolmogorov–Smirnov test for the deciphering of sediment recycling in a modern coastal system | Detrital zircons from Holocene beach sand and igneous zircons from the Cretaceous syenite forming Cape Sines (Western Iberian margin) were dated using laser ablation – inductively coupled plasma – mass spectrometry. The U–Pb ages obtained were used for comparison with previous radiometric data from Carboniferous greywacke, Pliocene–Pleistocene sand and Cretaceous syenite forming the sea cliff at Cape Sines and the contiguous coast. New U–Pb dating of igneous morphologically simple and complex zircons from the syenite of the Sines pluton suggests that the history of zircon crystallization was more extensive (ca 87 to 74 Ma), in contrast to the findings of previous geochronology studies (ca 76 to 74 Ma). The U–Pb ages obtained in Holocene sand revealed a wide interval, ranging from the Cretaceous to the Archean, with predominance of Cretaceous (37%), Palaeozoic (35%) and Neoproterozoic (19%) detrital-zircon ages. The paucity of round to subrounded grains seems to indicate a short transportation history for most of the Cretaceous zircons (ca 95 to 73 Ma) which are more abundant in the beach sand that was sampled south of Cape Sines. Comparative analysis using the Kolmogorov–Smirnov statistical method, analysing sub-populations separately, suggests that the zircon populations of the Carboniferous and Cretaceous rocks forming the sea cliff were reproduced faithfully in Quaternary sand, indicating sediment recycling. The similarity of the pre-Cretaceous ages (>ca 280 Ma) of detrital zircons found in Holocene sand, as compared with Carboniferous greywacke and Pliocene–Pleistocene sand, provides support for the hypothesis that detritus was reworked into the beach from older sedimentary rocks exposed along the sea cliff. 
The largest percentage of Cretaceous zircons (<ca 95 Ma) found in Holocene sand, as compared with Pliocene–Pleistocene sand (a secondary recycled source), suggests that the Sines pluton was one of the primary sources that became progressively more exposed to erosion during Quaternary uplift. This work highlights the application of the Kolmogorov–Smirnov method in the comparison of zircon age populations used to identify provenance and sediment recycling in modern and ancient detrital sedimentary sequences. © 2015 The Authors. Sedimentology © 2015 International Association of Sedimentologists. Sedimentology (2016) doi: 10.1111/sed.12254 |
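The two-sample Kolmogorov–Smirnov comparison used above can be sketched with SciPy. The age samples below are synthetic stand-ins for the published U–Pb data, and the component means and counts are illustrative assumptions only:

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical detrital-zircon U-Pb age samples (Ma) for two units;
# the real data come from LA-ICP-MS dating, not from this sketch.
rng = np.random.default_rng(0)
beach_sand = np.concatenate([
    rng.normal(85, 5, 40),    # Cretaceous grains
    rng.normal(300, 20, 35),  # Palaeozoic grains
])
greywacke = np.concatenate([
    rng.normal(82, 6, 30),
    rng.normal(305, 18, 40),
])

# Two-sample Kolmogorov-Smirnov test: a large p-value means the two
# empirical age distributions cannot be distinguished, consistent
# with recycling of the older unit into the modern sand.
stat, p = ks_2samp(beach_sand, greywacke)
print(f"D = {stat:.3f}, p = {p:.3f}")
```

Analysing sub-populations separately, as the authors do, amounts to running this comparison on age-restricted slices of each sample.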
Deep Learning-Based Recommendation: Current Issues and Challenges | Due to the revolutionary advances that deep learning has achieved in image processing, speech recognition and natural language processing, it has gained much attention. The recommendation task has been influenced by the deep learning trend, which has shown significant effectiveness and high-quality recommendations. Deep learning based recommender models provide a better capture of user preferences, item features and user-item interaction history. In this paper, we provide a recent literature review of research dealing with deep learning based recommendation approaches, preceded by a presentation of the main lines of recommendation approaches and deep learning techniques. We also propose classification criteria for the different deep learning integration models. Finally, we present the recommendation approach adopted by the most popular video recommendation platform, YouTube, which is based essentially on deep learning advances. Keywords—Recommender system; deep learning; neural network; YouTube recommendation |
Scalable Hashing-Based Network Discovery | Discovering and analyzing networks from non-network data is a task with applications in fields as diverse as neuroscience, genomics, energy, economics, and more. In these domains, networks are often constructed out of multiple time series by computing measures of association or similarity between pairs of series. The nodes in a discovered graph correspond to time series, which are linked via edges weighted by the association scores of their endpoints. After graph construction, the network may be thresholded such that only the edges with stronger weights remain and the desired sparsity level is achieved. While this approach is feasible for small datasets, its quadratic time complexity does not scale as the individual time series length and the number of compared series increase. Thus, to avoid the costly step of building a fully-connected graph before sparsification, we propose a fast network discovery approach based on probabilistic hashing of randomly selected time series subsequences. Evaluation on real data shows that our methods construct graphs nearly 15 times as fast as baseline methods, while achieving both network structure and accuracy comparable to baselines in task-based evaluation. |
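For context, the quadratic baseline that the paper's hashing approach accelerates — all-pairs association scoring followed by thresholding — can be sketched as follows. The series, the threshold value, and the injected correlated pair are arbitrary illustrations:

```python
import numpy as np

# Baseline (quadratic) network discovery: build a graph over time
# series via pairwise Pearson correlation, then threshold so only
# the strongest edges remain. The hashing method in the paper avoids
# this all-pairs step; the series here are random stand-ins.
rng = np.random.default_rng(1)
n_series, length = 6, 200
X = rng.standard_normal((n_series, length))
X[1] = X[0] + 0.1 * rng.standard_normal(length)  # one strongly related pair

corr = np.corrcoef(X)            # n_series x n_series association matrix
np.fill_diagonal(corr, 0.0)      # ignore self-similarity

threshold = 0.5
edges = [(i, j) for i in range(n_series)
         for j in range(i + 1, n_series) if abs(corr[i, j]) > threshold]
print(edges)  # the injected (0, 1) pair survives thresholding
```

Because `np.corrcoef` compares every pair, cost grows quadratically in the number of series — precisely the step the proposed probabilistic hashing of subsequences sidesteps.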
Bridge Correlational Neural Networks for Multilingual Multimodal Representation Learning | Recently there has been a lot of interest in learning common representations for multiple views of data. Typically, such common representations are learned using a parallel corpus between the two views (say, 1M images and their English captions). In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, V1 and V2) but parallel data is available between each of these views and a pivot view (V3). We propose a model for learning a common representation for V1, V2 and V3 using only the parallel data available between V1V3 and V2V3. The proposed model is generic and even works when there are n views of interest and only one pivot view which acts as a bridge between them. There are two specific downstream applications that we focus on (i) transfer learning between languages L1,L2,...,Ln using a pivot language L and (ii) cross modal access between images and a language L1 using a pivot language L2. Our model achieves state-of-the-art performance in multilingual document classification on the publicly available multilingual TED corpus and promising results in multilingual multimodal retrieval on a new dataset created and released as a part of this work. |
African American patients' perspectives on medical decision making. | BACKGROUND
The medical literature offers little information about how older African Americans view the medical decision-making process. We sought to describe the perspectives of older African American patients in a primary care clinic as they consider a medical decision.
METHODS
We interviewed 25 African American patients older than 50 years who had discussed flexible sigmoidoscopy with their primary care provider. Interviews were analyzed using qualitative methods.
RESULTS
Patients listed concerns about cancer and health, risks and benefits, their own understanding of the test, and the recommendation of the provider as the most important factors in their decision. Most patients wanted information about medical tests and procedures to increase their understanding and to provide reassurance rather than to guide decision making. Most patients explained that they wanted the provider to make medical decisions because of his or her training and experience. Despite this, many expressed a sense of ownership or control over one's own body. Patients thought trust was built by a health care provider's honesty, patience, kindness, interest, and continuity of care.
CONCLUSIONS
Although traditional models of informed consent have emphasized providing patients with information to guide autonomous decision making, patients may want this information for other reasons. Fully informing patients about their medical condition increases understanding and provides reassurance. Because many of these patients want their provider to participate in making medical decisions, he or she should not only provide information but should also provide guidance to the patient. |
3D Pose tracking of walker users' lower limb with a structured-light camera on a moving platform | Tracking and understanding human gait is an important step towards improving elderly mobility and safety. Our research team is developing a vision-based tracking system that estimates the 3D pose of a wheeled walker user's lower limbs with a depth sensor, Kinect, mounted on the moving walker. Our tracker estimates 3D poses from depth images of the lower limbs in the coronal plane in a dynamic, uncontrolled environment. We employ a probabilistic approach based on particle filtering, with a measurement model that works directly in the 3D space and another measurement model that works in the projected image space. Empirical results show that combining both measurements, assuming independence between them, yields tracking results that are better than with either one alone. Experiments are conducted to evaluate the performance of the tracking system with different users. We demonstrate that the tracker is robust against unfavorable conditions such as partial occlusion, missing observations, and deformable tracking target. Also, our tracker does not require user intervention or manual initialization commonly required in most trackers. |
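The particle-filtering backbone of such a tracker can be illustrated in one dimension. The motion model, noise levels, and single Gaussian likelihood below are placeholder assumptions; the actual system fuses two measurement models over 3D limb poses:

```python
import numpy as np

# Minimal 1D particle filter: propagate particles with a motion model,
# weight them by a measurement likelihood, and resample. All dynamics
# and noise parameters here are illustrative, not the paper's.
rng = np.random.default_rng(3)
n_particles, steps = 500, 30
true_pos = 0.0
particles = rng.normal(0.0, 1.0, n_particles)

for _ in range(steps):
    true_pos += 0.1                                      # target moves
    particles += 0.1 + rng.normal(0, 0.05, n_particles)  # motion model
    z = true_pos + rng.normal(0, 0.1)                    # noisy measurement
    w = np.exp(-0.5 * ((particles - z) / 0.1) ** 2)      # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

estimate = particles.mean()
print(f"true={true_pos:.2f} est={estimate:.2f}")
```

Combining two measurement models under an independence assumption, as in the paper, would simply multiply their likelihoods when computing `w`.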
Functional connectivity dynamically evolves on multiple time-scales over a static structural connectome: Models and mechanisms | Over the last decade, we have observed a revolution in brain structural and functional Connectomics. On one hand, we have an ever-more detailed characterization of the brain's white matter structural connectome. On the other, we have a repertoire of consistent functional networks that form and dissipate over time during rest. Despite the evident spatial similarities between structural and functional connectivity, understanding how different time-evolving functional networks spontaneously emerge from a single structural network requires analyzing the problem from the perspective of complex network dynamics and dynamical systems theory. In that direction, bottom-up computational models are useful tools to test theoretical scenarios and depict the mechanisms at the genesis of resting-state activity. Here, we provide an overview of the different mechanistic scenarios proposed over the last decade via computational models. Importantly, we highlight the need of incorporating additional model constraints considering the properties observed at finer temporal scales with MEG and the dynamical properties of FC in order to refresh the list of candidate scenarios. |
A New Pattern Recognition Method for Detection and Localization of Myocardial Infarction Using T-Wave Integral and Total Integral as Extracted Features from One Cycle of ECG Signal | In this paper we use two new features, the T-wave integral and the total integral, extracted from one cycle of normal and patient ECG signals, to detect and localize myocardial infarction (MI) in the left ventricle of the heart. In our previous work we used features of body surface potential map data for this purpose, but the standard ECG is more widely available, so here we base detection and localization of MI on the standard ECG. We use the T-wave integral because this feature captures an important characteristic of the T-wave in MI. The second feature is the total integral of one ECG cycle, because MI affects the morphology of the ECG signal, which leads to changes in the total integral. We apply pattern recognition methods such as Artificial Neural Networks (ANN) to detect and localize MI, because they offer very good accuracy for classifying normal and abnormal signals. We use a type of Radial Basis Function (RBF) network called a Probabilistic Neural Network (PNN) because of its nonlinearity, and compare it with other classifiers such as k-Nearest Neighbors (KNN), Multilayer Perceptron (MLP), and Naive Bayes. We use the PhysioNet database for training and test data, reaching over 76% accuracy on test data for localization and over 94% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and classification accuracy can be improved by adding more features. A simple method based on only two features extracted from the standard ECG is presented and achieves good accuracy in MI localization. |
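A minimal sketch of the two-feature idea — T-wave integral plus total-cycle integral feeding a simple classifier — is given below. The synthetic waveforms, the T-wave window, and the 1-nearest-neighbour rule are illustrative assumptions, not the paper's PNN or its PhysioNet data:

```python
import numpy as np

# Synthetic one-cycle ECG: a sharp QRS spike plus a T wave whose
# amplitude (possibly inverted, as in MI) we control.
t = np.linspace(0, 1, 200)

def ecg_cycle(t_wave_amp):
    qrs = np.exp(-((t - 0.3) ** 2) / 0.0005)
    t_wave = t_wave_amp * np.exp(-((t - 0.6) ** 2) / 0.005)
    return qrs + t_wave

def features(cycle, t_window=slice(100, 160)):
    # Discrete integrals (unit sample spacing): T-wave integral over
    # an assumed T-wave window, and the total integral of the cycle.
    return np.array([cycle[t_window].sum(), cycle.sum()])

# Tiny labelled "training set": normal vs MI-like (inverted T wave).
train = [(features(ecg_cycle(a)), label)
         for a, label in [(0.4, "normal"), (0.5, "normal"),
                          (-0.2, "MI"), (-0.3, "MI")]]

def classify(cycle):
    f = features(cycle)
    return min(train, key=lambda fl: np.linalg.norm(f - fl[0]))[1]

print(classify(ecg_cycle(0.45)))   # -> normal
print(classify(ecg_cycle(-0.25)))  # -> MI
```

An inverted T wave drives both integrals down, which is why these two scalars already separate the toy classes; the paper's classifiers operate on the same features over real recordings.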
IoT based smart crop-field monitoring and automation irrigation system | Agriculture plays a vital role in the development of agricultural countries like India, and issues concerning agriculture have always hindered the country's development. The solution to this problem is smart agriculture: modernizing the current traditional methods of agriculture. Hence the proposed method aims at making agriculture smart using automation and IoT technologies. The Internet of Things (IoT) enables various applications such as crop growth monitoring and selection and irrigation decision support. A Raspberry Pi based automatic irrigation IoT system is proposed to modernize farming and improve crop productivity. The main aim of this work is crop development at low water consumption, by making water available to the plants at the required time; at present, most farmers waste a lot of time in the fields for this purpose. An efficient water management scheme should be developed and the system's circuit complexity reduced. The proposed system estimates the quantity of water needed from the information sent by the sensors, which report to the base station the humidity and temperature of the soil, the air humidity and temperature, and the duration of sunshine per day; from these values the system calculates the water quantity required for irrigation. The major advantage of the system is the implementation of Precision Agriculture (PA) with cloud computing, which will optimize the usage of water and fertilizers while maximizing the yield of the crops, and will also help in analyzing the weather conditions of the field. |
Cannabinoids and glial cells: possible mechanism to understand schizophrenia | Clinical and neurobiological findings have reported the involvement of endocannabinoid signaling in the pathophysiology of schizophrenia. This system modulates dopaminergic and glutamatergic neurotransmission that is associated with positive, negative, and cognitive symptoms of schizophrenia. Despite neurotransmitter impairments, increasing evidence points to a role of glial cells in schizophrenia pathobiology. Glial cells encompass three main groups: oligodendrocytes, microglia, and astrocytes. These cells promote several neurobiological functions, such as myelination of axons, metabolic and structural support, and immune response in the central nervous system. Impairments in glial cells lead to disruptions in communication and in the homeostasis of neurons that play a role in the pathobiology of disorders such as schizophrenia. Therefore, data suggest that glial cells may be a potential pharmacological tool to treat schizophrenia and other brain disorders. In this regard, glial cells express cannabinoid receptors and synthesize endocannabinoids, and cannabinoid drugs affect some functions of these cells that can be implicated in schizophrenia pathobiology. Thus, the aim of this review is to provide data about the glial changes observed in schizophrenia, and how cannabinoids could modulate these alterations. |
Capturing the human figure through a wall | We present RF-Capture, a system that captures the human figure -- i.e., a coarse skeleton -- through a wall. RF-Capture tracks the 3D positions of a person's limbs and body parts even when the person is fully occluded from its sensor, and does so without placing any markers on the subject's body. In designing RF-Capture, we built on recent advances in wireless research, which have shown that certain radio frequency (RF) signals can traverse walls and reflect off the human body, allowing for the detection of human motion through walls. In contrast to these past systems which abstract the entire human body as a single point and find the overall location of that point through walls, we show how we can reconstruct various human body parts and stitch them together to capture the human figure. We built a prototype of RF-Capture and tested it on 15 subjects. Our results show that the system can capture a representative human figure through walls and use it to distinguish between various users. |
Dynamics and trajectory optimization for a soft spatial fluidic elastomer manipulator | The goal of this work is to develop a soft robotic manipulation system that is capable of autonomous, dynamic, and safe interactions with humans and its environment. First, we develop a dynamic model for a multi-body fluidic elastomer manipulator that is composed entirely from soft rubber and subject to the self-loading effects of gravity. Then, we present a strategy for independently identifying all unknown components of the system: the soft manipulator, its distributed fluidic elastomer actuators, as well as drive cylinders that supply fluid energy. Next, using this model and trajectory optimization techniques we find locally optimal open-loop policies that allow the system to perform dynamic maneuvers we call grabs. In 37 experimental trials with a physical prototype, we successfully perform a grab 92% of the time. By studying such an extreme example of a soft robot, we can begin to solve hard problems inhibiting the mainstream use of soft machines. |
Crime Data Mining: An Overview and Case Studies | The concern about national security has increased significantly since the 9/11 attacks. However, information overload hinders the effective analysis of criminal and terrorist activities. Data mining applied in the context of law enforcement and intelligence analysis holds the promise of alleviating such problems. In this paper, we review crime data mining techniques and present four case studies done in our ongoing COPLINK project. |
A Robust and Fast Video Copy Detection System Using Content-Based Fingerprinting | A video copy detection system that is based on content fingerprinting and can be used for video indexing and copyright applications is proposed. The system relies on a fingerprint extraction algorithm followed by a fast approximate search algorithm. The fingerprint extraction algorithm extracts compact content-based signatures from special images constructed from the video. Each such image represents a short segment of the video and contains temporal as well as spatial information about the video segment. These images are denoted by temporally informative representative images. To find whether a query video (or a part of it) is copied from a video in a video database, the fingerprints of all the videos in the database are extracted and stored in advance. The search algorithm searches the stored fingerprints to find close enough matches for the fingerprints of the query video. The proposed fast approximate search algorithm facilitates the online application of the system to a large video database of tens of millions of fingerprints, so that a match (if it exists) is found in a few seconds. The proposed system is tested on a database of 200 videos in the presence of different types of distortions such as noise, changes in brightness/contrast, frame loss, shift, rotation, and time shift. It yields a high average true positive rate of 97.6% and a low average false positive rate of 1.0%. These results emphasize the robustness and discrimination properties of the proposed copy detection system. As security of a fingerprinting system is important for certain applications such as copyright protections, a secure version of the system is also presented. |
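The matching step can be illustrated with a toy Hamming-distance search over binary signatures. The random signatures and the 64-bit size are placeholders; the paper's TIRI-based fingerprints and fast approximate search algorithm are far more elaborate:

```python
import numpy as np

# Toy fingerprint matching: each stored video segment is reduced to a
# short binary signature, and a query is matched by nearest Hamming
# distance against the whole database (exhaustive, unlike the paper's
# fast approximate search). Signatures here are random stand-ins.
rng = np.random.default_rng(2)
db = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)  # stored signatures

query = db[123].copy()
flip = rng.choice(64, size=5, replace=False)
query[flip] ^= 1                   # simulate distortion: 5 corrupted bits

dists = (db ^ query).sum(axis=1)   # Hamming distance to every entry
best = int(dists.argmin())
print(best, int(dists[best]))      # -> 123 5
```

Robustness to distortion comes from the distorted query still being far closer (5 bits) to its source than to any unrelated random signature (about 32 bits on average).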
Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases | BACKGROUND
Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites, and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain agnostic approach combining both feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial.
AIMS
This paper investigates these concepts through seven unique DP tasks as use cases to elucidate techniques needed to produce results that are comparable, and in many cases superior, to those from state-of-the-art hand-crafted feature-based classification approaches.
RESULTS
Specifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50 k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images).
CONCLUSION
This paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data. |
African-American rhinoplasty. | Increased width, loss of definition, and lack of projection characterize the stereotypical African-American nose. Early rhinoplasty surgeons attempted strict adherence to neoclassical aesthetic ideals. However, in reality, the anatomy and aesthetic desires of these patients are much more complex. Building dorsal height, achieving nasal tip definition amidst thick skin, and producing a more aesthetically pleasing alar base are the major challenges. Surgical planning should be sensitive to both individual and cultural differences in aesthetic perception and expectations. Here we describe the techniques used by the senior author (R.W.H.K.). |
Development of a tunable mid-IR difference frequency laser source for highly sensitive airborne trace gas detection. | The development of a compact tunable mid-IR laser system at 3.5 micrometers for quantitative airborne spectroscopic trace gas absorption measurements is reported. The mid-IR laser system is based on difference frequency generation (DFG) in periodically poled LiNbO3 and utilizes optical fiber amplified near-IR diode and fiber lasers as pump sources operating at 1083 nm and 1562 nm, respectively. This paper describes the optical sensor architecture, performance characteristics of individual pump lasers and DFG, as well as its application to wavelength modulation spectroscopy employing an astigmatic Herriott multi-pass gas absorption cell. This compact system permits detection of formaldehyde with a minimal detectable concentration (1 sigma replicate precision) of 74 parts-per-trillion by volume (pptv) for 1 min of averaging time and was achieved using calibrated gas standards, zero air background and rapid dual-beam subtraction. This corresponds to a pathlength-normalized replicate fractional absorption sensitivity of 2.5 × 10⁻¹⁰ cm⁻¹. |
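The 3.5 µm output follows from energy conservation in difference frequency generation: the idler frequency is the difference of the two pump frequencies. A quick check with the pump wavelengths stated in the abstract:

```python
# In DFG, 1/lambda_idler = 1/lambda_pump1 - 1/lambda_pump2
# (frequencies subtract, so inverse wavelengths subtract).
l1, l2 = 1083.0, 1562.0                # pump wavelengths (nm)
l_idler = 1.0 / (1.0 / l1 - 1.0 / l2)  # idler wavelength (nm)
print(round(l_idler))                  # -> 3532, i.e. ~3.5 micrometers
```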
GPS Landslide Monitoring: Single Base vs. Network Solutions — A case study based on the Puerto Rico and Virgin Islands Permanent GPS Network | This study demonstrated an approach to using permanent GPS stations from a local continuous GPS network as no-cost references in conducting long-term millimeter-level landslide monitoring. Accuracy and outliers from a series of single-base and network GPS measurements of a creeping landslide were studied. The criterion for accuracy was the weighted root-mean-square (RMS) of residuals of GPS measurements with respect to true landslide displacements over a period of 14 months. This investigation indicated that the current Puerto Rico and Virgin Islands GPS network, as a reference frame, can provide accuracy of 1 to 2 mm horizontally and 6 mm vertically for local 24-hour continuous landslide monitoring with few outliers (<1%). The accuracy degraded by a factor of two for 6-hour sessions, and more for shorter sessions. This study indicated that adding a few reference stations to GPS data processing can reduce the number of outliers and increase the accuracy and robustness of landslide surveying, even if these references are far from the study site. This improvement was particularly significant for short sessions and vertical components. The accuracy of network solutions depended slightly on the number of reference stations, but the dependence on the distance and geometric distribution of the references was weak. For long-term landslide monitoring, accuracy under 5 mm horizontally and 15 mm vertically is often expected. Accuracy at this level can be stably achieved in the Puerto Rico and Virgin Islands region by performing field observations for 4 hours or longer, and applying 3 or more reference stations for solving a network solution. This study also indicated that rainfall events can play a crucial role in high-precision GPS measurements. 
GPS data collected during heavy rainfall events should be cautiously analyzed in landslide studies. |
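The accuracy criterion described above, a weighted RMS of residuals with respect to the true landslide displacements, can be sketched in a few lines. This is a generic illustration with made-up residuals, not the study's GPS processing software:

```python
import math

def weighted_rms(residuals_mm, weights):
    """Weighted root-mean-square: sqrt(sum(w * r^2) / sum(w))."""
    num = sum(w * r * r for r, w in zip(residuals_mm, weights))
    return math.sqrt(num / sum(weights))

# Hypothetical daily-solution residuals (mm) with equal weights.
print(weighted_rms([1.2, -0.8, 2.1, -1.5], [1.0, 1.0, 1.0, 1.0]))
```

In practice the weights would come from the inverse variances of each session's solution, so noisier short sessions count for less.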
Prevalence of attention-deficit/hyperactivity disorder: a systematic review and meta-analysis. | BACKGROUND AND OBJECTIVE
Overdiagnosis and underdiagnosis of attention-deficit/hyperactivity disorder (ADHD) are widely debated, fueled by variations in prevalence estimates across countries, time, and broadening diagnostic criteria. We conducted a meta-analysis to: establish a benchmark pooled prevalence for ADHD; examine whether estimates have increased with publication of different editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM); and explore the effect of study features on prevalence.
METHODS
Medline, PsycINFO, CINAHL, Embase, and Web of Science were searched for studies with point prevalence estimates of ADHD. We included studies of children that used the diagnostic criteria from DSM-III, DSM-III-R and DSM-IV in any language. Data were extracted on sampling procedure, sample characteristics, assessors, measures, and whether full or partial criteria were met.
RESULTS
The 175 eligible studies included 179 ADHD prevalence estimates with an overall pooled estimate of 7.2% (95% confidence interval: 6.7 to 7.8), and no statistically significant difference between DSM editions. In multivariable analyses, prevalence estimates for ADHD were lower when using the revised third edition of the DSM compared with the fourth edition (P = .03) and when studies were conducted in Europe compared with North America (P = .04). Few studies used population sampling with random selection. Most were from single towns or regions, thus limiting generalizability.
CONCLUSIONS
Our review provides a benchmark prevalence estimate for ADHD. If population estimates of ADHD diagnoses exceed our estimate, then overdiagnosis may have occurred for some children. If fewer, then underdiagnosis may have occurred. |
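The pooling step behind a benchmark estimate like the 7.2% above can be illustrated with simple fixed-effect inverse-variance weighting of study proportions. The review itself uses more elaborate meta-analytic models; the proportions and sample sizes here are hypothetical:

```python
import math

def pooled_prevalence(props, ns):
    """Fixed-effect inverse-variance pooled proportion with a 95% CI."""
    # Weight each study by the inverse of its binomial variance p(1-p)/n.
    weights = [n / (p * (1.0 - p)) for p, n in zip(props, ns)]
    total = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, props)) / total
    se = math.sqrt(1.0 / total)
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Three hypothetical studies: prevalences and sample sizes.
print(pooled_prevalence([0.06, 0.08, 0.07], [500, 800, 650]))
```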
Constacyclic codes over finite local Frobenius non-chain rings with nilpotency index 3 | 
Human mesenchymal stem cells xenografted directly to rat liver are differentiated into human hepatocytes without fusion. | Hepatic transdifferentiation of bone marrow cells has been previously demonstrated by intravenous administration of donor cells, which may recirculate to the liver after undergoing proliferation and differentiation in the recipient's bone marrow. In the present study, to elucidate which cellular components of human bone marrow more potently differentiate into hepatocytes, we fractionated human bone marrow cells into mesenchymal stem cells (MSCs), CD34+ cells, and non-MSCs/CD34- cells and examined them by directly xenografting to allyl alcohol (AA)-treated rat liver. Hepatocyte-like cells, as revealed by positive immunostaining for human-specific alpha-fetoprotein (AFP), albumin (Alb), cytokeratin 19 (CK19), cytokeratin 18 (CK18), and asialoglycoprotein receptor (AGPR), and by reverse transcription-polymerase chain reaction (RT-PCR) for expression of AFP and Alb mRNA, were observed only in recipient livers with MSC fractions. Cell fusion was not likely involved since both human and rat chromosomes were independently identified by fluorescence in situ hybridization (FISH). The differentiation appeared to follow the process of hepatic ontogeny, reprogramming of gene expression in the genome of MSCs, as evidenced by expression of the AFP gene at an early stage and the albumin gene at a later stage. In conclusion, we have demonstrated that MSCs are the most potent component in hepatic differentiation, as revealed by directly xenografting into rat livers. |
LibGuides: Finding the Visual History of the 1960s: Introduction to Primary Sources | This guide was created as a supplement to the 2014 library exhibition Conflict and Counterculture: Finding the Visual History of the Sixties. |
Audiovisual Lombard speech: reconciling production and perception | An earlier study compared audiovisual perception of speech "produced in environmental noise" (Lombard speech) and speech "produced in quiet" with the same environmental noise added. The results showed that listeners make differential use of the visual information depending on the recording condition, but gave no indication of how or why this might be so. A possible confound in that study was that high audio presentation levels might account for the small visual enhancements observed for Lombard speech. This paper reports results for a second perception study using much lower acoustic presentation levels, compares them with the results of the previous study, and integrates the perception results with analyses of the audiovisual production data: face and head motion, audio amplitude (RMS), and parameters of the spectral acoustics (line spectrum pairs). |
Global Talent Management : How Leading Multinationals Build and Sustain Their Talent Pipeline | To determine how leading companies in North America, Europe, and Asia develop and sustain strong talent pipelines, this research investigates talent management processes and practices in a sample of 37 multinational corporations, selected on the basis of their international scope, reputation, and long-term performance. In-depth case studies and a Web-based survey of human resources professionals identify various effective practices that can help companies attract, select, develop, and retain talent. However, the results suggest that competitive advantage comes not primarily from designing and implementing best practices but rather from the proper internal alignment of various elements of a company’s talent management system, as well as their embeddedness in the value system of the firm, their links to business strategy, and their global coordination. Global Talent Management: How Leading Multinationals Build and Sustain Their Talent Pipeline Executives around the world seem to agree: One of the biggest challenges facing their companies is building and sustaining a strong talent pipeline. In a recent survey of 300 firms conducted by the Hay Group and Chief Executive magazine, participating companies ranked “finding the right number of leaders” as their top challenge, and every single firm indicated its belief that demand for leaders would increase in the future. Not only do companies have trouble filling their talent pipelines due to shifting demographics and workforce preferences, but they also must develop new capabilities and revitalize their organizations as they transform their businesses, invest in new technologies, enter into new partnerships, and globalize their operations. These challenges make the need to develop effective talent management processes and practices even more pressing for global companies. 
In response, a team of researchers from the universities of Cambridge, Cornell, Erasmus/Tilburg, and INSEAD has conducted a major research project on the global best practices in human capital management. The qualitative portion of this research examines 20 companies in depth, using interviews with senior executives, line managers, and human resources (HR) professionals to identify how leading multinationals manage their human capital. These companies are renowned for their international scope, reputations, and long-term performance; the interviews comprise 312 conversations with professionals at various levels (e.g., corporate, regional, country) in more than 20 countries. In addition, we conducted a Web-based survey of HR professionals of 20 multinational corporations, gaining input from 263 respondents from three major geographic regions (Americas, Asia-Pacific, and Europe/Middle East/Africa). In total, this study involves 37 multinational corporations. |
Hydrogen Production from Sea Wave for Alternative Energy Vehicles for Public Transport in Trapani (Italy) | In the mid-term, the coupling of renewable energy and hydrogen technologies is a promising way to both increase the reliable exploitation of wind and sea wave energy and introduce clean technologies in the transportation sector. This paper presents two different feasibility studies: the first proposes two plants based on wind and sea wave resources for the production, storage, and distribution of hydrogen for public transportation facilities in West Sicily; the second applies the same approach to Pantelleria (a smaller island), including some indications about the solar resource. In both cases, all buses would be equipped with fuel cells. A first economic analysis is presented together with an assessment of the avoidable greenhouse gas emissions during the operation phase. The scenarios addressed relate the demand for urban transport to the renewable resources present in the territories and to the modern technologies available for producing hydrogen from renewable energy. The study focuses on the possibility of tapping the renewable energy potential (wind and sea wave) for hydrogen production by electrolysis. The use of hydrogen would significantly reduce emissions of particulate matter and greenhouse gases in the urban districts under analysis. The procedures applied in the present article, as well as the main equations used, stem from previous applications in different technical fields and have shown good replicability. |
Genetic and environmental effects on disc degeneration by phenotype and spinal level: a multivariate twin study. | STUDY DESIGN
A classic twin study with multivariate analyses was conducted.
OBJECTIVE
We aimed to further clarify the presence and magnitude of genetic influences on disc degeneration, and to better understand the phenomenon of disc degeneration through comparisons of genetic and environmental influences on specific degenerative signs and different lumbar levels.
SUMMARY OF BACKGROUND DATA
Previous studies suggest a substantial genetic influence on disc degeneration, but raise important questions about which disc phenotypes are or are not largely genetically influenced and differential effects on spinal levels.
METHODS
The study sample consisted of 152 monozygotic and 148 dizygotic male twin pairs, 35 to 70 years of age, from the population-based Finnish Twin Cohort. Lumbar magnetic resonance imaging was conducted with quantitative or qualitative assessments of disc signal, bulging, and height narrowing at each lumbar level. Data on possible confounding factors were obtained from an extensive, structured interview. Quantitative genetic modeling was conducted using MPlus.
RESULTS
Heritability estimates varied from 29% to 54%, depending on the particular disc degeneration phenotype and lumbar level. The same genetic influences affected signal intensity and disc height (genetic correlations of -0.60 to -0.66) or bulging (-0.71 to -0.72) to a great degree at either the lower or upper lumbar levels, and genetic influences on disc height narrowing and bulging were virtually the same (0.92-0.97). Conversely, genetic correlations (and environmental correlations) between the upper and lower lumbar levels were substantially lower, implying largely independent effects.
CONCLUSION
Genetic and environmental influences on disc degeneration seem to be of similar importance. Disc signal, narrowing, and bulging had a primarily common genetic pathway, suggesting a common genetic etiopathogenesis. Conversely, genetic and environmental influences differed substantially for upper versus lower lumbar levels, emphasizing the importance of examining these levels separately in studies of associated genes, other constitutional factors, and environmental influences. |
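The study fits full quantitative genetic models in MPlus; as a back-of-envelope illustration only of how twin data yield heritability estimates, Falconer's formula uses the gap between monozygotic (MZ) and dizygotic (DZ) twin correlations. The correlations below are hypothetical, not values from the study:

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate of heritability: h^2 ~= 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

print(round(falconer_h2(0.50, 0.30), 2))  # hypothetical correlations -> ~0.40
```

The intuition: MZ twins share all their genes and DZ twins about half, so doubling the excess MZ similarity approximates the share of variance due to additive genetic effects.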
Quality-based financial incentives in health care: can we improve quality by paying for it? | This article asks whether financial incentives can improve the quality of health care. A conceptual framework drawn from microeconomics, agency theory, behavioral economics, and cognitive psychology motivates a set of propositions about incentive effects on clinical quality. These propositions are evaluated through a synthesis of extant peer-reviewed empirical evidence. Comprehensive financial incentives--balancing rewards and penalties; blending structure, process, and outcome measures; emphasizing continuous, absolute performance standards; tailoring the size of incremental rewards to increasing marginal costs of quality improvement; and assuring certainty, frequency, and sustainability of incentive payoffs--offer the prospect of significantly enhancing quality beyond the modest impacts of prevailing pay-for-performance (P4P) programs. Such organizational innovations as the primary care medical home and accountable health care organizations are expected to catalyze more powerful quality incentive models: risk- and quality-adjusted capitation, episode of care payments, and enhanced fee-for-service payments for quality dimensions (e.g., prevention) most amenable to piece-rate delivery. |
When Good Instruments Go Bad: A Reply to Neumark, Zhang, and Ciccarella | This note examines the instrumental variables method used by Neumark, Zhang, and Ciccarella (2005) to analyze Wal-Mart’s effect on retail labor markets, and exposes major flaws in that methodology. Neumark, Zhang, and Ciccarella use an interaction between distance from Wal-Mart’s headquarters and time effects to predict Wal-Mart’s presence in a county, and find that each Wal-Mart store destroys, on average, approximately 200 retail jobs. These findings are in stark contrast to Basker (2005) who found a small, but positive and statistically significant, effect on jobs. I show that the IV estimates obtained by Neumark, Zhang, and Ciccarella confound Wal-Mart’s causal effect with other factors. To illustrate the problem, I show that their methodology implies a large impact of Wal-Mart not only on retail employment but also on county manufacturing employment. Reduced-form estimates of the regressions show statistically and economically indistinguishable effects in counties with and without Wal-Mart presence, implying that other factors are most likely driving the results. JEL Codes: C21, J21, L81 |
What Are the Major Physicochemical Factors in Determining the Preferential Nuclear Uptake of the DNA "Light-Switching" Ru(II)-Polypyridyl Complex in Live Cells via Ion-Pairing with Chlorophenolate Counter-Anions? | Delivering potential theranostic metal complexes into preferential cellular targets is becoming of increasing interest. Here we report that nuclear uptake of a cell-impermeable DNA "light-switching" Ru(II)-polypyridyl complex can be significantly facilitated by chlorophenolate counter-anions, which was found, unexpectedly, to be correlated positively with the binding stability but inversely with the lipophilicity of the formed ion pairs. |
R³-Net: A Deep Network for Multi-oriented Vehicle Detection in Aerial Images and Videos. | Vehicle detection is a significant and challenging task in aerial remote sensing applications. Most existing methods detect vehicles with regular rectangle boxes and fail to offer the orientation of vehicles. However, the orientation information is crucial for several practical applications, such as the trajectory and motion estimation of vehicles. In this paper, we propose a novel deep network, called rotatable region-based residual network (R³-Net), to detect multi-oriented vehicles in aerial images and videos. More specifically, R³-Net is utilized to generate rotatable rectangular target boxes in a half coordinate system. First, we use a rotatable region proposal network (R-RPN) to generate rotatable regions of interest (R-RoIs) from feature maps produced by a deep convolutional neural network. Here, a proposed batch averaging rotatable anchor (BAR anchor) strategy is applied to initialize the shape of vehicle candidates. Next, we propose a rotatable detection network (R-DN) for the final classification and regression of the R-RoIs. In R-DN, a novel rotatable position sensitive pooling (R-PS pooling) is designed to keep the position and orientation information simultaneously while downsampling the feature maps of R-RoIs. In our model, R-RPN and R-DN can be trained jointly. We test our network on two open vehicle detection image datasets, namely the DLR 3K Munich Dataset and the VEDAI Dataset, demonstrating the high precision and robustness of our method. In addition, further experiments on aerial videos show the good generalization capability of the proposed method and its potential for vehicle tracking in aerial videos. The demo video is available at https://youtu.be/xCYD-tYudN0. |
Antiinflammatory effect of sevoflurane in open lung surgery with one-lung ventilation | AIM
To prospectively assess the antiinflammatory effect of volatile anesthetic sevoflurane in patients undergoing open lung surgery with one lung ventilation (OLV).
METHODS
This prospective, randomized study included 40 patients undergoing thoracic surgery with OLV (NCT02188407). The patients were randomly allocated into two equal groups that received either propofol or sevoflurane. Four patients were excluded from the study because after surgery they received blood transfusion or non-steroid antiinflammatory drugs. Inflammatory mediators (interleukins 6, 8, and 10, C-reactive protein [CRP], and procalcitonin) were measured perioperatively. The infiltration of the nonoperated lung was assessed on chest x-rays and the oxygenation index was calculated. The major postoperative complications were counted.
RESULTS
Interleukin 6 levels were significantly higher in the propofol than in the sevoflurane group (P=0.014). Preoperative CRP levels did not differ between the groups (P=0.351) and in all patients were lower than 20 mg/L, but postoperative CRP was significantly higher in the propofol group (31±6 vs 15±7 mg/L; P=0.035). Pre- and postoperative procalcitonin was within the reference range (<0.04 µg/L) in both groups. The oxygenation index was significantly lower in the propofol group (339±139 vs 465±140; P=0.021). There was no significant difference between the groups in lung infiltrates (P=0.5849). The number of postoperative adverse events was higher in the propofol group, but the difference was not significant (5 vs 1; P=0.115).
CONCLUSION
The study suggests an antiinflammatory effect of sevoflurane in patients undergoing thoracotomy with OLV. |
Teaching social influence: Demonstrations and exercises from the discipline of social psychology | Education is enhanced when students are able to be active, rather than passive, learners (McKeachie, 2002). Fortunately, social psychologists have a rich history of creating and publishing classroom demonstrations that allow for such active learning. Unfortunately, these demonstrations have been published in diverse journals, teaching manuals, and edited volumes that are not always readily available. The purpose of this article is to review demonstrations and exercises that have been developed for teaching students about social influence. Using an annotated bibliography format, we review more than five dozen techniques that assist instructors in demonstrating such social influence principles as cognitive dissonance, conformity, obedience, deindividuation, propaganda, framing, persuasion, advertising, social norms, and the self-fulfilling prophecy. |
CoRAD: Visual Analytics for Cohort Analysis | In this paper, we introduce a novel dynamic visual analytic tool called the Cohort Relative Aligned Dashboard (CoRAD). We present the design components of CoRAD, along with alternatives that led to the final instantiation. We also present an evaluation involving expert clinical researchers, comparing CoRAD against an existing analytics method. The results of the evaluation show CoRAD to be more usable and useful for the target user. The relative alignment of physiologic data to clinical events was found to be a highlight of the tool. Clinical experts also found the interactive selection and filter functions to be useful in reducing information overload. Moreover, CoRAD was also found to allow clinical researchers to generate alternative hypotheses and test them in vivo. |
Honey as a Potential Natural Antioxidant Medicine: An Insight into Its Molecular Mechanisms of Action | As a natural food supplement, honey offers several medicinal and health benefits. It has been established as a potential therapeutic antioxidant agent for various biodiverse ailments. Data report that it exhibits strong wound healing, antibacterial, anti-inflammatory, antifungal, antiviral, and antidiabetic effects. It also retains immunomodulatory, estrogenic regulatory, antimutagenic, anticancer, and numerous other health-promoting effects. Data also show that honey, as a conventional therapy, might be a novel antioxidant to abate many of the diseases directly or indirectly associated with oxidative stress. In this review, these beneficial effects are thoroughly examined to underscore honey's mode of action, exploring various possible mechanisms. Evidence-based research indicates that honey acts by modulating multiple signaling pathways and molecular targets. This modulation proceeds through various pathways such as induction of caspases in apoptosis; stimulation of TNF-α, IL-1β, IFN-γ, IFNGR1, and p53; inhibition of cell proliferation and cell cycle arrest; inhibition of lipoprotein oxidation, IL-1, IL-10, COX-2, and LOXs; and modulation of other diverse targets. The review highlights the research done as well as the gaps that remain to be investigated. The literature suggests that honey, administered alone or as adjuvant therapy, might be a potential natural antioxidant medicinal agent warranting further experimental and clinical research. |
Odanacatib, a cathepsin-K inhibitor for osteoporosis: a two-year study in postmenopausal women with low bone density. | Cathepsin K, a cysteine protease expressed in osteoclasts, degrades type 1 collagen. Odanacatib selectively and reversibly inhibited cathepsin K and rapidly decreased bone resorption in preclinical and phase I studies. A 1-year dose-finding trial with a 1-year extension on the same treatment assignment was performed in postmenopausal women with low bone mineral density (BMD) to evaluate the safety and efficacy of weekly doses of placebo or 3, 10, 25, or 50 mg of odanacatib on BMD and biomarkers of skeletal remodeling. Women with BMD T-scores of -2.0 or less but not less than -3.5 at the lumbar spine or femoral sites were randomly assigned to receive placebo or one of four doses of odanacatib; all received vitamin D with calcium supplementation as needed. The primary endpoint was percentage change from baseline lumbar spine BMD. Other endpoints included percentage change in BMD at hip and forearm sites, as well as changes in biomarkers of skeletal remodeling. Twenty-four months of treatment produced progressive dose-related increases in BMD. With the 50-mg dose of odanacatib, lumbar spine and total-hip BMD increased 5.5% and 3.2%, respectively, whereas BMD at these sites was essentially unchanged with placebo (-0.2% and -0.9%). Biochemical markers of bone turnover exhibited dose-related changes. The safety and tolerability of odanacatib generally were similar to those of placebo, with no dose-related trends in any adverse experiences. In summary, 2 years of weekly odanacatib treatment was generally well-tolerated and increased lumbar spine and total-hip BMD in a dose-related manner in postmenopausal women with low BMD. |
Improved cerebral vasomotor reactivity after exercise training in hemiparetic stroke survivors. | BACKGROUND AND PURPOSE
Animal studies provide strong evidence that aerobic exercise training positively influences cerebral blood flow, but no human studies support the use of exercise for improving cerebral hemodynamics. This randomized study in stroke survivors assessed the effects of treadmill aerobic exercise training (TM) on cerebral blood flow parameters compared to a control intervention of nonaerobic stretching.
METHODS
Thirty-eight participants (19 in TM group and 19 in control group) with remote stroke (>6 months) and mild to moderate gait deficits completed middle cerebral artery blood flow velocity measurements by transcranial Doppler ultrasonography before and after a 6-month intervention period. Middle cerebral artery blood flow velocity was assessed bilaterally during normocapnia and hypercapnia (6% CO2). Cerebral vasomotor reactivity (cVMR) was calculated as percent change in middle cerebral artery blood flow velocity from normocapnia to hypercapnia (cVMR percent) and as an index correcting percent change for absolute increase in end tidal CO2 (cVMR index).
RESULTS
The TM group had significantly larger improvements than did controls for both ipsilesional and contralesional cVMR index (P≤0.05) and contralesional cVMR percent (P≤0.01). Statin users in the TM group (n=10) had higher baseline cVMR and lower training-induced cVMR change, indicating that cVMR change among those not using statins (n=9) primarily accounted for the between-group effects. There was a 19% increase in Vo2 peak for the TM group compared to a 4% decrease in the control group (P<0.01), and peak fitness change correlated with cVMR change (r=0.55; P<0.05).
CONCLUSIONS
Our data provide the first evidence to our knowledge of exercise-induced cVMR improvements in stroke survivors, implying a protective mechanism against recurrent stroke and other brain-related disorders. Statin use appears to regulate cVMR and the cVMR training response. |
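The two cVMR measures defined in the abstract reduce to simple formulas: a percent change in velocity, and that percent change divided by the absolute rise in end-tidal CO2. A sketch with hypothetical velocity and CO2 values, not patient data from the study:

```python
def cvmr_percent(v_normo: float, v_hyper: float) -> float:
    """Percent change in MCA blood flow velocity from normocapnia to hypercapnia."""
    return 100.0 * (v_hyper - v_normo) / v_normo

def cvmr_index(v_normo: float, v_hyper: float, delta_etco2: float) -> float:
    """cVMR percent corrected for the absolute rise in end-tidal CO2 (mm Hg)."""
    return cvmr_percent(v_normo, v_hyper) / delta_etco2

print(cvmr_percent(60.0, 78.0))      # 30.0 (% increase in velocity)
print(cvmr_index(60.0, 78.0, 10.0))  # 3.0 (% per mm Hg CO2)
```

Correcting for the CO2 rise matters because the hypercapnic stimulus itself can differ between sessions and subjects.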
Better than my loved ones: Social comparison tendencies among narcissists | Narcissists pursue superiority and status at frequent costs to their relationships, and social comparisons seem central to these pursuits. Critically, these comparison tendencies should distinguish narcissism from healthy self-esteem. We tested this hypothesis in a study examining individual differences in everyday comparison activity. Narcissists, relative to those with high self-esteem, (1) made more frequent social comparisons, particularly downward ones, (2) were more likely to think they were better-off than other important individuals in their lives, and (3) perceived themselves superior to these important individuals on agentic traits. However, narcissists’ positive emotional reactions to these self-flattering comparisons were based on their high self-esteem. These results suggest that comparison processes play an important role in narcissists’ endless pursuit of status and admiration. |
TRX Suspension Training: A New Functional Training Approach for Older Adults – Development, Training Control and Feasibility | Because of its proximity to daily activities, functional training is becoming more important for older adults. Sling training, a form of functional training, was primarily developed for therapy and rehabilitation. Due to its effects (core muscle activation, strength and balance improvements), sling training may be relevant for older adults. However, to our knowledge no recent sling training program for healthy older adults has included detailed training control, which is an essential component in designing and implementing this type of training to achieve positive effects. The purpose of this study was to develop a TRX Suspension Training program for healthy older adults (TRX-OldAge) and to evaluate its feasibility. Eleven participants finished the 12-week intervention study. All participants trained in the TRX-OldAge whole-body workout, which consists of seven exercises with 3-4 progressively advancing stages of difficulty for each exercise. At each stage, intensity could be increased through changes in position. Feasibility was evaluated in terms of training compliance and a self-developed questionnaire for rating TRX-OldAge. Training compliance was 85%. After the study period, 91% of the participants were motivated to continue with the program. The training intensity, duration, and frequency were rated as optimal. All participants noted positive effects, with strength gains reported most often. On the basis of the detailed training control information, TRX-OldAge can be individually adapted for each older adult according to his or her preconditions, demands, and preferences. |
Attention-based LSTM with Semantic Consistency for Videos Captioning | Recent progress in using Long Short-Term Memory (LSTM) for image description has motivated the exploration of their applications for automatically describing video content with natural language sentences. By taking a video as a sequence of features, LSTM model is trained on video-sentence pairs to learn association of a video to a sentence. However, most existing methods compress an entire video shot or frame into a static representation, without considering attention which allows for salient features. Furthermore, most existing approaches model the translating error, but ignore the correlations between sentence semantics and visual content.
To tackle these issues, we propose a novel end-to-end framework named aLSTMs, an attention-based LSTM model with semantic consistency, to transfer videos to natural sentences. This framework integrates an attention mechanism with LSTM to capture salient structures of the video, and explores the correlation between multi-modal representations for generating sentences with rich semantic content. More specifically, we first propose an attention mechanism that uses the dynamic weighted sum of local 2D Convolutional Neural Network (CNN) and 3D CNN representations. Then, an LSTM decoder takes these visual features at time $t$ and the word-embedding feature at time $t-1$ to generate important words. Finally, we use multi-modal embedding to map the visual and sentence features into a joint space to guarantee the semantic consistency of the sentence description and the video's visual content. Experiments on the benchmark datasets demonstrate the superiority of our method over the state-of-the-art baselines for video captioning in both BLEU and METEOR. |
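The attention step described above, a dynamic weighted sum of per-region features, can be sketched in a few lines of NumPy. The scores and features are toy values standing in for the learned 2D/3D CNN representations and score network:

```python
import numpy as np

def attention_pool(features: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Softmax-weighted sum over regions: features (n, d), scores (n,) -> (d,)."""
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w = w / w.sum()
    return w @ features                # weighted sum of region features

# Two toy 2-D region features; equal scores give equal attention weights,
# so the pooled vector is the mean of the two regions.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
print(attention_pool(feats, np.array([0.0, 0.0])))
```

In the full model the scores themselves are recomputed at every decoding step from the decoder state, which is what makes the weighting "dynamic."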
Identification of Herpes Simplex Virus Genital Infection: Comparison of a Multiplex PCR Assay and Traditional Viral Isolation Techniques | Genital herpes simplex virus (HSV) is of major public health importance, as indicated by the marked increase in the prevalence of genital herpes over the past two decades. Viral culture has traditionally been regarded as the gold standard for diagnosis. In this study, we compared viral culture and the amplification of HSV DNA by the polymerase chain reaction (PCR) with respect to sensitivity, cost, clinical utility, and turnaround time. Patient sample swabs from 100 individuals were inoculated onto MRC-5 cells for isolation. Positive results were confirmed via a direct fluorescent antibody technique, and serotyping, when requested, was performed using HSV-1 and -2-type–specific sera. PCR techniques employed an extraction step of the same initial swab specimen, followed by PCR amplification, using a multiplex assay for HSV-1, 2 DNA. HSV-positive results were found in 32/100 samples via culture and in 36/100 samples via PCR. PCR-positive results yielded 16 (44%) patients infected with HSV-1 and 20 (56%) patients infected with HSV-2. Turnaround time for viral culture averaged 108 hours for positive results and 154 hours for negative results; PCR turnaround time averaged 24–48 hours. Laboratory cost using viral culture was $3.22 for a negative result and $6.49 for a positive result (including direct fluorescent antibody). Serotyping added $10.88 to each culture-positive test. Although laboratory costs for PCR were higher at $8.20/sample, reimbursement levels were also higher. We propose a multiplex PCR assay for diagnosis of HSV-1 and HSV-2 from patient swabs for use in a routine clinical laboratory setting. This assay offers increased sensitivity, typing, and improved turnaround time when compared with traditional viral culture techniques. 
Although PCR testing in a routine clinical laboratory setting appears cost prohibitive compared with nonserotyped viral culture, it may be very useful when clinical utility warrants distinguishing between HSV-1 and HSV-2, and may be cost effective once reimbursement is taken into account. |
A controllable, fast and stable basis for vortex based smoke simulation | We introduce a novel method for describing and controlling a 3D smoke simulation. Using harmonic analysis and principal component analysis, we define an underlying description of the fluid flow that is compact and meaningful to non-expert users. The motion of the smoke can be modified with high level tools, such as animated current curves, attractors and tornadoes. Our simulation is controllable, interactive and stable for arbitrarily long periods of time. The simulation's computational cost increases linearly in the number of motion samples and smoke particles. Our adaptive smoke particle representation conveniently incorporates the surface-like characteristics of real smoke. |
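The compact flow description built from principal component analysis can be sketched as follows. This is an illustrative reduction over flattened velocity-field snapshots; the field sizes, mode count, and synthetic data are assumptions, not the paper's pipeline.

```python
import numpy as np

def pca_flow_basis(snapshots, k):
    """Build a k-mode PCA basis from flattened velocity-field snapshots.

    snapshots: (n, m) array, each row one flattened velocity field
    Returns (mean, basis) where basis is (k, m) with orthonormal rows.
    """
    mean = snapshots.mean(axis=0)
    centered = snapshots - mean
    # SVD of the centered snapshot matrix yields the principal modes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

rng = np.random.default_rng(2)
fields = rng.normal(size=(20, 300))   # 20 sampled fields, 100 cells x 3 components
mean, basis = pca_flow_basis(fields, k=5)
# each field is described compactly by its k basis coefficients
coeffs = (fields[0] - mean) @ basis.T
recon = mean + coeffs @ basis         # low-dimensional reconstruction
```

Because the basis rows are orthonormal, projecting onto them and reconstructing is cheap, which is what makes a handful of coefficients a practical handle for non-expert control of the flow.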
Survey on UAANET Routing Protocols and Network Security Challenges | UAV Ad hoc Networks (UAANETs) are a subset of the well-known mobile ad hoc network (MANET) paradigm. The term refers to the deployment of a swarm of small Unmanned Aerial Vehicles (UAVs) and Ground Control Stations (GCS). These UAVs collaborate to relay data (command and control traffic and remotely sensed data) between each other and to the GCS. Compared to other types of ad hoc networks, UAANETs have some unique features and bring several major challenges to the research community. One of these is the design of a UAANET routing protocol. It must establish an efficient route between UAVs and adjust in real time to a rapidly changing topology. It must also be secured to protect the integrity of the network against malicious attackers. The security of routing protocols has been widely investigated in wired networks and MANETs, but as far as we are aware, no previous research paper has dealt with the security features of UAANET routing protocols. This paper focuses on the characteristics of UAANETs and provides a review of the literature on associated routing protocols. We also analyze the security features of these protocols. Security requirements, potential threats and countermeasures are all described. |
Application of response surface methodology (RSM) to optimize coagulation-flocculation treatment of leachate using poly-aluminum chloride (PAC) and alum. | Coagulation-flocculation is a relatively simple physical-chemical technique for treating old and stabilized leachate, and it has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants, which are increasingly applied in water treatment, are not well documented in leachate treatment. In this research, the capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface methodology (RSM) were applied to optimize the operating variables, viz. coagulant dosage and pH. Quadratic models developed for the four responses studied (COD, turbidity, color and TSS) indicated the optimum conditions to be a PAC dosage of 2 g/L at pH 7.5 and an alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated. |
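The quadratic response-surface models described above amount to an ordinary least-squares fit of a second-order polynomial in the two operating variables (coagulant dosage and pH). The sketch below illustrates this with synthetic, noise-free data; the coefficient values and factor ranges are assumptions for demonstration, not the study's measurements.

```python
import numpy as np

def quad_design(X):
    """Second-order (quadratic) design matrix for two factors."""
    x1, x2 = X[:, 0], X[:, 1]  # e.g. coagulant dosage, pH
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

def fit_quadratic_surface(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
    return beta

# synthetic check: a noise-free quadratic response is recovered exactly
rng = np.random.default_rng(1)
X = rng.uniform(low=[0.5, 5.0], high=[12.0, 9.0], size=(30, 2))  # dosage g/L, pH
true = np.array([10.0, 4.0, 6.0, -0.5, -0.4, 0.1])
y = quad_design(X) @ true
beta = fit_quadratic_surface(X, y)
```

With real jar-test data the fit is not exact, and the optimum operating point is found by maximizing the fitted surface over the experimental region, which is the role CCD plays in choosing where to sample.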
Survey of fungal counts and natural occurrence of aflatoxins in Malaysian starch-based foods | In a survey of starch-based foods sampled from retail outlets in Malaysia, fungal colonies were mostly detected in wheat flour (100%), followed by rice flour (74%), glutinous rice grains (72%), ordinary rice grains (60%), glutinous rice flour (48%) and corn flour (26%). All positive samples of ordinary rice and glutinous rice grains had total fungal counts below 10³ cfu/g sample, while among the positive rice flour, glutinous rice flour and corn flour samples, the highest total fungal count was more than 10³ but less than 10⁴ cfu/g sample. In wheat flour samples, however, total fungal counts ranged from 10² cfu/g sample to slightly more than 10⁴ cfu/g sample. Aflatoxigenic colonies were mostly detected in wheat flour (20%), followed by ordinary rice grains (4%), glutinous rice grains (4%) and glutinous rice flour (2%). No aflatoxigenic colonies were isolated from rice flour and corn flour samples. Screening of aflatoxin B1, aflatoxin B2, aflatoxin G1 and aflatoxin G2 using reversed-phase HPLC was carried out on 84 samples of ordinary rice grains and 83 samples of wheat flour. Two point four percent (2.4%) of ordinary rice grain samples were positive for aflatoxin G1 and 3.6% were positive for aflatoxin G2. All the positive samples were collected from private homes, at concentrations ranging from 3.69–77.50 μg/kg. One point two percent (1.2%) of wheat flour samples were positive for aflatoxin B1 at a concentration of 25.62 μg/kg, 4.8% were positive for aflatoxin B2 at concentrations ranging from 11.25–252.50 μg/kg, 3.6% were positive for aflatoxin G1 at concentrations ranging from 25.00–289.38 μg/kg and 13.25% were positive for aflatoxin G2 at concentrations ranging from 16.25–436.25 μg/kg. Similarly, positive wheat flour samples were mostly collected from private homes. |
Refractory iron-deficiency anemia and autoimmune atrophic gastritis in pediatric age group: analysis of 8 clinical cases. | INTRODUCTION
Refractory iron-deficiency anemia with no obvious etiology in the pediatric age group can be a puzzling problem. Screening for iron malabsorption conditions, including autoimmune atrophic gastritis (AAG), is emerging as a priority in the investigational procedure.
MATERIALS AND METHODS
Retrospective analysis of clinical process of children/adolescents with the diagnosis of AAG.
RESULTS
Eight patients (aged between 4.7 and 18 years old) were identified. The diagnosis was triggered on the basis of high serum gastrin levels and strong positivity for antiparietal cell antibodies. Upper endoscopy and biopsy revealed atrophic gastritis in all patients, 2 of them with intestinal metaplasia. Four patients presented with Helicobacter pylori infection, which was treated with eradication therapy. After a mean follow-up of 36.6 months, antiparietal cell antibodies and hypergastrinemia showed no evidence of regression. Of the 3 patients who underwent endoscopic reevaluation, a similar anatomopathologic pattern was observed in 2 and intestinal metaplasia in 1. Normalization of hematological parameters was achieved using alternative iron formulations.
CONCLUSIONS
AAG must be recognized as a pathology affecting pediatric patients. Gastric autoimmune lesion is a chronic process with potential evolution to malignancy. Management guidelines for childhood are not available; their elaboration is important considering an important risk factor in this age group: long life expectancy. |
Clinical trials of controlled-release melatonin in children with sleep-wake cycle disorders. | This is the first study to examine effective doses of controlled-release (CR) melatonin in children with chronic sleep-wake cycle disorders. All 42 subjects had severe neurodevelopmental difficulties. Initially, a randomized double-blinded cross-over design was used in 16 children, comparing the effectiveness of fast-release (FR) and CR melatonin. In the remainder of the patients, CR melatonin was studied on a clinical basis. The effectiveness of treatment was assessed by sleep charts and clinical follow-up. Emphasis was placed on the judgement of the parents, who had guidance from the physicians. The average final CR melatonin dose in the 42 patients was 5.7 mg (2-12 mg). The studies showed that FR melatonin was most effective when there was only delayed sleep onset, whereas CR formulations were more useful for sleep maintenance. Children appeared to require higher doses than adults. |
Evolutionary algorithm in Forex trade strategy generation | This paper shows an application of an evolutionary algorithm to generate profitable strategies for trading futures contracts on the foreign exchange market (Forex). The strategy model in this approach is based on two decision trees, responsible for taking the decisions to open long or short positions on the Euro/US Dollar currency pair. The trees take into consideration only technical analysis indicators, which are connected by logic operators to identify the border values of these indicators for taking profitable decisions. We have tested the efficiency of the presented approach on learning and test time frames of various characteristics. |
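A decision tree of the kind described, combining technical indicators with logic operators, can be sketched as follows. The specific rule (comparing two simple moving averages and the last close) and all threshold choices are illustrative assumptions, not the strategies evolved in the paper.

```python
def sma(prices, n):
    """Simple moving average over the last n prices."""
    return sum(prices[-n:]) / n

def long_signal(prices):
    """Hypothetical evolved rule tree: open a long position when the
    short SMA is above the long SMA AND the last close is above the
    short SMA (indicator conditions joined by a logical AND)."""
    return sma(prices, 3) > sma(prices, 5) and prices[-1] > sma(prices, 3)

def short_signal(prices):
    """Mirror rule tree for opening a short position."""
    return sma(prices, 3) < sma(prices, 5) and prices[-1] < sma(prices, 3)

# EUR/USD-style closing prices for two synthetic regimes
uptrend = [1.10, 1.11, 1.12, 1.13, 1.15]
downtrend = [1.15, 1.13, 1.12, 1.11, 1.10]
```

In the evolutionary setting, the indicator choices, comparison thresholds, and logical structure of such trees are the genome: candidate trees are mutated and recombined, and fitness is measured by simulated trading profit on the learning time frame.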