Bladder dysfunction following renal transplantation: is it predictable?
Dysfunction of the urinary bladder is often encountered in kidney transplantation due to various structural, neurological, infectious, or other pathologies. As the goal is to obtain a well-functioning urinary bladder or, at least, a low-pressure reservoir without reflux, specific urologic examinations and therapies should be performed. This review, based on a Medline and PubMed search as well as on international guidelines and personal experience, reflects the current state of knowledge in the field of pretransplant urologic evaluation as well as optimal pre- and posttransplant therapeutic options. The evaluation of these factors and interventional strategies will help to improve long-term transplant outcomes.
Comparing human perceptions of post-editing effort with post-editing operations
Post-editing performed by translators is an increasingly common use of machine-translated texts. While high-quality MT may increase productivity, post-editing poor translations can be a frustrating task which requires more effort than translating from scratch. For this reason, estimating whether machine translations are of sufficient quality to be used for post-editing, and finding means to reduce post-editing effort, are important fields of study. Post-editing effort consists of different aspects, of which temporal effort, or the time spent on post-editing, is the most visible; it involves not only the technical effort needed to perform the editing, but also the cognitive effort required to detect and plan necessary corrections. Cognitive effort is difficult to examine directly, but ways to reduce the cognitive effort in particular may prove valuable in reducing the frustration associated with post-editing work. In this paper, we describe an experiment aimed at studying the relationship between technical post-editing effort and cognitive post-editing effort by comparing cases where the edit distance and a manual score reflecting perceived effort differ. We present results of an error analysis performed on such sentences and discuss the clues they may provide about edits requiring great cognitive effort compared to the technical effort, on the one hand, or little cognitive effort, on the other.
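A hedged illustration of the technical-effort measure mentioned above: the sketch below computes a plain character-level Levenshtein edit distance between an MT output and its post-edited version. This is only a generic proxy (word-level measures such as TER/HTER are more common in MT evaluation), not necessarily the edit distance used in the paper; the example strings are invented.

```python
def edit_distance(source: str, target: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(source), len(target)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

mt_output = "the cat sat in mat"           # invented MT output
post_edit = "the cat sat on the mat"       # invented post-edited version
dist = edit_distance(mt_output, post_edit)
print(dist, dist / max(len(mt_output), len(post_edit)))  # raw and normalised
```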
Bit-Level Optimization of Adder-Trees for Multiple Constant Multiplications for Efficient FIR Filter Implementation
The multiple constant multiplication (MCM) scheme is widely used for implementing transposed direct-form FIR filters. While the research focus of MCM has been on more effective common subexpression elimination, the optimization of the adder-trees, which sum up the computed subexpressions for each coefficient, has largely been omitted. In this paper, we identify the resource minimization problem in the scheduling of adder-tree operations for the MCM block, and present a mixed integer programming (MIP) based algorithm for more efficient MCM-based implementation of FIR filters. Experimental results show that up to 15% reduction in area and 11.6% reduction in power (with averages of 8.46% and 5.96%, respectively) can be achieved on top of an already optimized adder/subtractor network of the MCM block.
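As a toy illustration of the arithmetic that an MCM adder-tree sums (this is not the paper's MIP formulation or its scheduling algorithm), the sketch below decomposes a constant multiplication into shifted copies of the input; an adder tree would then accumulate these partial terms.

```python
def shift_add_terms(coeff: int):
    """Bit positions whose powers of two sum to `coeff` (simple binary form)."""
    return [k for k in range(coeff.bit_length()) if (coeff >> k) & 1]

def multiply_by_constant(x: int, coeff: int) -> int:
    # Each term is a shifted copy of x; an adder tree sums the shifted terms.
    return sum(x << k for k in shift_add_terms(coeff))

# 23 = 16 + 4 + 2 + 1, so 23*x needs three additions of shifted copies of x.
assert multiply_by_constant(7, 23) == 7 * 23
print(shift_add_terms(23))  # [0, 1, 2, 4]
```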
Natural Gradient Works Efficiently in Learning
When a parameter space has a certain underlying structure, the ordinary gradient of a function does not represent its steepest direction, but the natural gradient does. Information geometry is used for calculating the natural gradients in the parameter space of perceptrons, the space of matrices (for blind source separation), and the space of linear dynamical systems (for blind source deconvolution). The dynamical behavior of natural gradient online learning is analyzed and is proved to be Fisher efficient, implying that it has asymptotically the same performance as the optimal batch estimation of parameters. This suggests that the plateau phenomenon, which appears in the backpropagation learning algorithm of multilayer perceptrons, might disappear or might not be so serious when the natural gradient is used. An adaptive method of updating the learning rate is proposed and analyzed.
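For reference, the standard natural-gradient update takes the following form (generic notation assumed here; G is the Fisher information matrix of the statistical model and η_t the learning rate):

```latex
\theta_{t+1} \;=\; \theta_t \;-\; \eta_t\, G(\theta_t)^{-1}\, \nabla_{\theta} L(\theta_t),
\qquad
G(\theta) \;=\; \mathbb{E}\!\left[ \nabla_{\theta}\log p(x;\theta)\, \nabla_{\theta}\log p(x;\theta)^{\top} \right].
```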
A Full-360$^{\circ}$ Reflection-Type Phase Shifter With Constant Insertion Loss
A new reflection-type phase shifter with a full 360° relative phase shift range and constant insertion loss is presented. This feature is obtained by incorporating a new cascaded connection of varactors into the impedance-transforming quadrature coupler. The required reactance variation of a varactor can be reduced by controlling the impedance ratio of the quadrature coupler. The implemented phase shifter achieves a measured maximal relative phase shift of 407°, an averaged insertion loss of 4.4 dB, and return losses better than 19 dB at 2 GHz. The insertion-loss variation is within ±0.1 and ±0.2 dB over the 360° and 407° relative phase shift tuning ranges, respectively.
Wiki-based process framework for blended learning
With few exceptions, currently published research on the educational use of wikis does not include how the learning activities should be shaped, planned or enforced in a wiki [11]. In this paper we aim to fill that gap by providing a framework for learning and teaching processes supported by the use of wikis. An instance of that process framework (a "feedback-driven" process) was formulated and implemented through a series of trials performed at the University of Hertfordshire Business School during the course of the last two academic years to 2006/7. The results of the trials have been collected and analyzed using quantitative and qualitative methods and have led to the conclusion that students' engagement with wiki-based learning activities is directly proportional to the quality and frequency of tutors' feedback and the clarity of the underlying learning and teaching process.
Lean Reachability Tree for Unbounded Petri Nets
Elaborate efforts have been made to eliminate fake markings and refine ω-markings in the existing modified or improved Karp–Miller trees for various classes of unbounded Petri nets since the late 1980s. These issues fundamentally arise from the way the trees are generated, which prematurely introduces some potentially unbounded markings with ω symbols and keeps them growing into new ones. To address them, this work presents a non-Karp–Miller tree called a lean reachability tree (LRT). First, a sufficient and necessary condition for unbounded places and some reachability properties are established to reveal the features of unbounded nets. Then, we present an LRT generation algorithm with a sufficiently enabling condition (SEC). When generating a tree, SEC requires that the components of a covering node are not replaced by ω symbols, but continue to grow until every transition on an output path of an unbounded place has been branch-enabled at least once. In return, no fake marking is produced and no legal marking is lost during the tree generation. We prove that LRT can faithfully express, by folding rather than equivalently representing, the reachability set of an unbounded net. Also, some properties of LRT are examined and a sufficient condition for deadlock existence based on it is given. The case studies show that LRT outperforms the latest modified Karp–Miller trees in terms of size, expressiveness, and applicability. It can be applied to the analysis of emerging discrete event systems with infinite states.
A phase II study of FOLFIRI-3 (double infusion of irinotecan combined with LV5FU) after FOLFOX in advanced colorectal cancer patients
In advanced colorectal cancer previously treated with oxaliplatin, the efficacy of irinotecan-based chemotherapy is poor and the best regimen is not defined. We designed FOLFIRI-3 and conducted a phase II study to establish its efficacy and safety in advanced colorectal cancer patients previously treated with FOLFOX. FOLFIRI-3 consisted of irinotecan 100 mg m−2 as a 60-min infusion on day 1, running concurrently with leucovorin 200 mg m−2 as a 2-h infusion on day 1, followed by a 46-h continuous infusion of 5-fluorouracil (5FU) 2000 mg m−2, and irinotecan 100 mg m−2 repeated on day 3, at the end of the 5FU infusion, every 2 weeks. Sixty-five patients entered the study. The intent-to-treat objective response rate was 23% (95% CI 13–33%). Disease was stable in 37% of patients, progressed in 26% and was not assessable in 14%. From the start of FOLFIRI-3, median progression-free survival was 4.7 months and median survival 10.5 months. The main toxicities (% of patients) were grade 3–4 diarrhoea (23%) and grade 4 neutropenia (11%). FOLFIRI-3 is a promising regimen achieving a high response rate and progression-free survival in patients previously treated with FOLFOX, with moderate toxicity.
Challenges in health information systems integration: Zanzibar experience
The healthcare milieu of most developing countries is often characterized by a multiplicity of health programs supported by a myriad of donors geared towards reversing disease trends in these countries. However, donor policies tend to support the implementation of vertical programs which maintain their own management structures and information systems. The picture emerging over time is a proliferation of multiple and uncoordinated health information systems (HIS) that are often in conflict with the primary health care goals of integrated district-based health information systems. As a step towards HIS strengthening, most countries are pursuing a strategy of integrating the vertical HIS. Nevertheless, the challenges presented by the vertical reporting HIS, reinforced by funds from the donors, render the integration initiatives ineffective, with some ending up as total failures or as mere pilot projects. The failure of the systems after implementation transcends technical fixes. This paper draws on an empirical case to analyze the challenges associated with the effort to integrate the HIS in a context characterized by multiple vertical health programs. The study revealed the tensions that exist between the ministry of health, which strove to standardize and integrate the HIS, and the vertical programs, which pushed the agenda to maintain their systems alongside the national HIS. However, as implied by the study, attaining integration entails the ability to strike a balance between the two forces, which can be achieved by strengthening communication and collaboration linkages between the stakeholders.
Acute systemic inflammation increases arterial stiffness and decreases wave reflections in healthy individuals.
BACKGROUND Aortic stiffness is a marker of cardiovascular disease and an independent predictor of cardiovascular risk. Although an association between inflammatory markers and increased arterial stiffness has been suggested, the causative relationship between inflammation and arterial stiffness has not been investigated. METHODS AND RESULTS One hundred healthy individuals were studied according to a randomized, double-blind, sham procedure-controlled design. Each substudy consisted of 2 treatment arms, 1 with Salmonella typhi vaccination and 1 with sham vaccination. Vaccination produced a significant (P<0.01) increase in pulse wave velocity (at 8 hours by 0.43 m/s), denoting an increase in aortic stiffness. Wave reflections were reduced significantly (P<0.01) by vaccination (decrease in augmentation index of 5.0% at 8 hours and 2.5% at 32 hours) as a result of peripheral vasodilatation. These effects were associated with significant increases in inflammatory markers such as high-sensitivity C-reactive protein (P<0.001), high-sensitivity interleukin-6 (P<0.001), and matrix metalloproteinase-9 (P<0.01). With aspirin pretreatment (1200 mg PO), neither pulse wave velocity nor augmentation index changed significantly after vaccination (increase of 0.11 m/s and 0.4%, respectively; P=NS for both). CONCLUSIONS This is the first study to show through a cause-and-effect relationship that acute systemic inflammation leads to deterioration of large-artery stiffness and to a decrease in wave reflections. These findings have important implications, given the importance of aortic stiffness for cardiovascular function and risk and the potential of therapeutic interventions with anti-inflammatory properties.
Bazaar-Extension: A CloudSim Extension for Simulating Negotiation Based Resource Allocations
Parties involved in a negotiation process need to agree on a viable execution mechanism. Auctioning protocols have proven useful as electronic negotiation mechanisms in the past. Auctions are a way of determining the price of a resource in a dynamic way. Additionally, auctions have well-defined rules such as winner and loser determination, time restrictions, or minimum price increments. These restrictions are necessary to ensure fair and transparent resource allocation. However, these rules limit the flexibility of consumers and providers. In this paper we introduce a novel negotiation-based resource allocation mechanism using the offer-counteroffer negotiation protocol paradigm. This allocation mechanism shows similarities to the supermarket approach, as consumer and provider are able to communicate directly. Further, the price is determined in a dynamic way similar to auctioning. We developed a Bazaar-Extension for CloudSim which simulates negotiation processes with different strategies. In this paper we introduce and analyze a specific Genetic Algorithm based negotiation strategy. To compare the efficiency of resource allocations, a novel Bazaar-Score is used.
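A rough, assumption-laden sketch of an offer-counteroffer exchange (it is not the Bazaar-Extension API, nor the Genetic Algorithm strategy analyzed in the paper): consumer and provider concede linearly toward invented reservation prices until their offers cross or a deadline is reached.

```python
def negotiate(consumer_start=4.0, consumer_max=10.0,
              provider_start=12.0, provider_min=6.0, rounds=10):
    """Alternating linear-concession offers; returns (price, round) or (None, rounds)."""
    for r in range(rounds):
        t = r / (rounds - 1)                                   # normalised time
        consumer_offer = consumer_start + t * (consumer_max - consumer_start)
        provider_offer = provider_start - t * (provider_start - provider_min)
        if provider_offer <= consumer_offer:                   # agreement zone reached
            return (provider_offer + consumer_offer) / 2, r
    return None, rounds

print(negotiate())  # a mid-point price once the offers cross, and the round number
```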
Boom analytics: exploring data-centric, declarative programming for the cloud
Building and debugging distributed software remains extremely difficult. We conjecture that by adopting a data-centric approach to system design and by employing declarative programming languages, a broad range of distributed software can be recast naturally in a data-parallel programming model. Our hope is that this model can significantly raise the level of abstraction for programmers, improving code simplicity, speed of development, ease of software evolution, and program correctness. This paper presents our experience with an initial large-scale experiment in this direction. First, we used the Overlog language to implement a "Big Data" analytics stack that is API-compatible with Hadoop and HDFS and provides comparable performance. Second, we extended the system with complex distributed features not yet available in Hadoop, including high availability, scalability, and unique monitoring and debugging facilities. We present both quantitative and anecdotal results from our experience, providing some concrete evidence that both data-centric design and declarative languages can substantially simplify distributed systems programming.
Possible effect of the local terrain on the North Carolina tower gravity experiment.
A Comment on the Letter by D. H. Eckhardt et al., Phys. Rev. Lett. 60, 2567 (1988).
FAMILY SAFETY NETS AND ECONOMIC TRANSITION: A STUDY OF WORKER HOUSEHOLDS IN POLAND
Can Eastern European families most severely impoverished during the transition to capitalism rely on private family safety nets? This question is likely critical for the transition's success, but little is known about family networks in Eastern Europe. We analyze newly available Polish household surveys, conducted both before and after Poland's economic transition, which measure private inter-household transfers. Such transfers are large and widespread in Poland, and in many ways they appear to function like means-tested public transfers. They flow from high to low-income households and are targeted to young couples, large families and those experiencing illness. Private transfer patterns also suggest that they are responsive to liquidity constraints. Our results from 1987 data indicate that private transfers could fill a non-trivial portion of the income gap left by unemployment. But we also find evidence from 1992 data that family networks weakened somewhat after the transition. Can Eastern European families who are most severely impoverished during the transition from socialism to capitalism rely on private family safety nets for support? Consider, for example, the plight of a family whose primary earner has just been laid off from a liquidated state enterprise or a family farm rendered insolvent because of the elimination of government subsidies. Do these families have more fortunate relatives or friends who can assist with cash, in-kind help, gifts or shared housing? Conversely, which are the households that cannot rely on such support? These questions are critical for evaluating the likelihood of successful economic transition in the Eastern bloc. On the one hand, an effective social safety net must be preserved--the rise in unemployment and widening of the income distribution could derail popular support for a quick transition to capitalism (see, for example, Kornai (1990) and Lipton and Sachs (1990)). On the other hand, governments are facing fiscal constraints which render the previous regime's universal public transfer system unsustainable. One answer is to target public transfers to the truly needy more effectively. In the words of Calvo and Frankel (1991), when "choosing among alternative safety nets, one should be aware that there is no way to protect all segments of society" (p. 42). But reforming institutions to accomplish more effective targeting is difficult and takes time. Are there other options? Fortunately, public transfers are not the only means of shuffling resources from one group to another. Family networks can also achieve substantial income redistribution, privately and with no apparent coercion. Information about the size and flows of these private transfers would be extremely useful in determining the public funds needed to round out an adequate safety net. Private transfer information is also useful for identifying households who lack private safety nets. Targeting these families can be critical since the public sector may be their only source of insurance. Despite the potential significance of family networks during Eastern Europe's transition, we currently know little about how they might function. In fact, we know little about even the basic facts, such as the incidence and magnitude of private transfers.
Lung organoids: current uses and future promise.
Lungs are composed of a system of highly branched tubes that bring air into the alveoli, where gas exchange takes place. The proximal and distal regions of the lung contain epithelial cells specialized for different functions: basal, secretory and ciliated cells in the conducting airways and type II and type I cells lining the alveoli. Basal, secretory and type II cells can be grown in three-dimensional culture, with or without supporting stromal cells, and under these conditions they give rise to self-organizing structures known as organoids. This Review summarizes the different methods for generating organoids from cells isolated from human and mouse lungs, and compares their final structure and cellular composition with that of the airways or alveoli of the adult lung. We also discuss the potential and limitations of organoids for addressing outstanding questions in lung biology and for developing new drugs for disorders such as cystic fibrosis and asthma.
Artificial intelligence
Since its inauguration in 1966, the ACM A.M. Turing Award has recognized major contributions of lasting importance to computing. Through the years, it has become the most prestigious award in computing. To help celebrate 50 years of the ACM Turing Award and the visionaries who have received it, ACM has launched a campaign called "Panels in Print," which takes the form of a collection of responses from Turing laureates, ACM award recipients and other ACM experts on a given topic or trend. ACM's celebration of 50 years of the ACM Turing Award will culminate with a conference June 23–24, 2017 at the Westin St. Francis in San Francisco to highlight the significant impact of the contributions of ACM Turing laureates on computing and society, to look ahead to the future of technology and innovation, and to help inspire the next generation of computer scientists to invent and dream. For this Panel in Print, ACM asked Turing laureates and award recipients, including Grace Murray Hopper Award recipient PEDRO FELZENSZWALB, to respond to several questions about Artificial Intelligence. What have been the biggest breakthroughs in AI in recent years and what impact is it having in the real world? RAJ REDDY: Ten years ago, I would have said it wouldn't be possible, in my lifetime, to recognize unrehearsed spontaneous speech from an open population, but that's exactly what Siri, Cortana and Alexa do. The same is happening with vision and robotics. We are by no means at the end of the activity in these areas, but we have enough working examples that society can benefit from these breakthroughs. JEFF DEAN: The biggest breakthrough in the last five or so years has been the use of deep learning, a particular kind of machine learning that uses neural networks. Stacking the network into many layers that learn increasingly abstract patterns as you go up the layers seems to be a fundamentally powerful idea, and it's been very successful in a surprisingly wide variety of applications—from speech recognition, to image recognition, to language understanding. What's interesting is we don't seem to be near the limit of what deep learning can do; we'll likely see many more powerful uses of it in the coming years. PEDRO FELZENSZWALB: Among the biggest technical advances I would include the development of scalable machine learning algorithms and the computational infrastructure to process and interact with huge datasets. The latest example of these advances is deep learning. In computer vision deep learning has …
Scalable Deep Learning Logo Detection
Existing logo detection methods usually consider a small number of logo classes and limited images per class, with a strong assumption of requiring tedious object bounding box annotations, and are therefore not scalable to real-world dynamic applications. In this work, we tackle these challenges by exploring the webly data learning principle without the need for exhaustive manual labelling. Specifically, we propose a novel incremental learning approach, called Scalable Logo Self-co-Learning (SL), capable of automatically self-discovering informative training images from noisy web data for progressively improving model capability in a cross-model co-learning manner. Moreover, we introduce a very large logo dataset, "WebLogo-2M" (2,190,757 images of 194 logo classes), built by an automatic web data collection and processing method. Extensive comparative evaluations demonstrate the superiority of the proposed SL method over state-of-the-art strongly and weakly supervised detection models and contemporary webly data learning approaches.
Molecular phylogenetics and evolution in sharks and fishes
DNA sequences from the recombination-activating gene 1 (RAG1) are being used in a growing number of studies of the phylogenetic relationships of different vertebrate lineages. I have assembled a diverse set of RAG1 sequences from four vertebrate clades and analyzed their evolutionary characteristics to better understand the phylogenetic value of this gene and to delineate regions of the protein that may be functionally important. This analysis showed that RAG1 sequences are subject to variable evolutionary constraints between different sites on the molecule and between different lineages. As a result, substitution characteristics and the rate of evolution of the DNA and amino acid sequences are heterogeneous between and within the vertebrate clades considered here. This heterogeneity must be given consideration when devising phylogenetic analyses based on RAG1 sequences. The comparative sequence analysis highlighted two highly conserved regions with no known functions. The first of these is located in the N-terminal end of the molecule and is restricted to bony vertebrates. The second is conserved in all known sequences and contains three pairs of closely spaced cysteine residues. In addition, the fish sequences examined show some unique features not observed in other vertebrates. The regions delineated by sequence comparison should help guide studies of RAG1 function, which is critical in the development of the adaptive immune system through its mediation of the mechanism of antigen receptor loci rearrangement known as V(D)J recombination.
Tapered Transmission Lines With Dissipative Junctions
NIST is optimizing the design of a 10 V programmable Josephson voltage standard so that it uses less microwave power by employing fewer parallel-biased arrays with higher voltage per array. Increasing the voltage per array by adding more junctions is challenging because the dissipation of the over-damped Josephson junctions limits the total number that may be located in each array. If there is too much dissipation in the array, the junctions at the end receive too little microwave power compared with the junctions at the beginning of the array. To compensate for the junction attenuation, tapered impedance transmission lines were used to maintain a nearly constant microwave current along the lossy transmission line. Simulation and testing have improved the microwave uniformity of our designs for tapered impedances from 85 ohms to 5 ohms. Low-leakage bias tees for various characteristic impedances were designed so that sub-arrays could be measured within long arrays. These tapered arrays have improved the bias current margins, junction number, and bandwidth of NIST junction arrays. By measuring the microwave power from the output of these long arrays, harmonic generation and the nonlinear properties of dissipative junction arrays are studied.
Air pollution and acute respiratory infections among children 0-4 years of age: an 18-year time-series study.
Upper and lower respiratory infections are common in early childhood and may be exacerbated by air pollution. We investigated short-term changes in ambient air pollutant concentrations, including speciated particulate matter less than 2.5 μm in diameter (PM2.5), in relation to emergency department (ED) visits for respiratory infections in young children. Daily counts of ED visits for bronchitis and bronchiolitis (n = 80,399), pneumonia (n = 63,359), and upper respiratory infection (URI) (n = 359,246) among children 0-4 years of age were collected from hospitals in the Atlanta, Georgia, area for the period 1993-2010. Daily pollutant measurements were combined across monitoring stations using population weighting. In Poisson generalized linear models, 3-day moving average concentrations of ozone, nitrogen dioxide, and the organic carbon fraction of PM2.5 were associated with ED visits for pneumonia and URI. Ozone associations were strongest and were observed at low (cold-season) concentrations; a 1-interquartile range increase predicted a 4% increase (95% confidence interval: 2%, 6%) in visits for URI and an 8% increase (95% confidence interval: 4%, 13%) in visits for pneumonia. Rate ratios tended to be higher in the 1- to 4-year age group compared with infants. Results suggest that primary traffic pollutants, ozone, and the organic carbon fraction of PM2.5 exacerbate upper and lower respiratory infections in early life, and that the carbon fraction of PM2.5 is a particularly harmful component of the ambient particulate matter mixture.
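A minimal sketch of the kind of Poisson generalized linear model described above, not the study's actual specification: daily ED-visit counts are regressed on a 3-day moving average of a pollutant. The file name and column names (visits, ozone, temp) are assumptions for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("daily_counts.csv")               # hypothetical daily data
df["ozone_ma3"] = df["ozone"].rolling(3).mean()    # 3-day moving average
df = df.dropna()

X = sm.add_constant(df[["ozone_ma3", "temp"]])     # pollutant plus one covariate
model = sm.GLM(df["visits"], X, family=sm.families.Poisson()).fit()
print(model.summary())

# Rate ratio per interquartile-range increase in the 3-day ozone average
iqr = df["ozone_ma3"].quantile(0.75) - df["ozone_ma3"].quantile(0.25)
print(np.exp(model.params["ozone_ma3"] * iqr))
```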
Graph-based Educational Data Mining
With the growing popularity of MOOCs and computer-aided learning systems, as well as the growth of social networks in education, we have begun to collect increasingly large amounts of educational graph data. This graph data includes complex user-system interaction logs, student-produced graphical representations, and conceptual hierarchies, and there is abundant pedagogical information beneath these graph datasets. As a result, graph data mining techniques such as graph grammar induction, path analysis, and prerequisite relationship prediction have become increasingly important. Graphical model techniques (e.g. Hidden Markov Models or probabilistic graphical models) have also become more and more important for analyzing educational data. As educational graph data and data analysis based on graphical models have grown increasingly common, it is necessary to build a strong community for educational graph researchers. This workshop will provide such a forum for interested researchers to discuss ongoing work, share common graph mining problems, and identify technical challenges. Researchers are encouraged to discuss prior analyses of graph data and educational data analyses based on graphical models. We also welcome discussions of in-progress work from researchers seeking to identify suitable sources of data or appropriate analytical tools.
1. PRIOR WORKSHOPS
So far, we have successfully held two international workshops on Graph-based Educational Data Mining. The first was held in London, co-located with EDM 2014. It featured 12 publications, of which 6 were full papers and the remainder short papers. With roughly 25 full-day attendees and additional drop-ins, it led to a number of individual connections between researchers and the formation of an e-mail list for group discussion. The second was co-located with EDM 2015 in Spain, where 10 authors presented their published work, including 4 full papers and 6 short papers.
2. OVERVIEW AND RELEVANCE
Graph-based data mining and educational data analysis based on graphical models have become emerging disciplines in EDM. Large-scale graph data, such as social network data, complex user-system interaction logs, student-produced graphical representations, and conceptual hierarchies, carries multiple levels of pedagogical information. Exploring such data can help to answer a range of critical questions such as:
• For social network data from MOOCs, online forums, and user-system interaction logs:
– What social networks can foster or hinder learning?
– Do users of online learning tools behave as we expect them to?
– How does the interaction graph evolve over time?
– What data can we use to define relationship graphs?
– What path(s) do high-performing students take through online materials?
– What is the impact of teacher interaction on students' observed behavior?
– Can we identify students who are particularly helpful in a course?
• For computer-aided learning (writing, programming, etc.):
– What substructures are commonly found in student-produced diagrams?
– Can we use prior student data to identify students' solution plan, if any?
– Can we automatically induce empirically valid graph rules from prior student data and use induced graph rules to support automated grading systems?
Graphical model techniques, such as Bayesian Networks, Markov Random Fields, and Conditional Random Fields, have been widely used in EDM for student modeling, decision making, and knowledge tracing. Utilizing these approaches can help to:
• Learn students' behavioral patterns.
• Predict students' behaviors and learning outcomes.
Noble metal-free hydrazine fuel cell catalysts: EPOC effect in competing chemical and electrochemical reaction pathways.
We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as a function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction ("EPOC"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.
Expectation-maximization for sparse and non-negative PCA
We study the problem of finding the dominant eigenvector of the sample covariance matrix, under additional constraints on the vector: a cardinality constraint limits the number of non-zero elements, and non-negativity forces the elements to have equal sign. This problem is known as sparse and non-negative principal component analysis (PCA), and has many applications including dimensionality reduction and feature selection. Based on expectation-maximization for probabilistic PCA, we present an algorithm for any combination of these constraints. Its complexity is at most quadratic in the number of dimensions of the data. We demonstrate significant improvements in performance and computational efficiency compared to other constrained PCA algorithms, on large data sets from biology and computer vision. Finally, we show the usefulness of non-negative sparse PCA for unsupervised feature selection in a gene clustering task.
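A hedged sketch of the underlying idea, not necessarily the authors' exact updates: an EM/power-iteration style loop for the leading component, with the update step projected onto the non-negative, at-most-k-sparse set.

```python
import numpy as np

def sparse_nonneg_pc(X, k, n_iter=200, seed=0):
    """Leading principal direction of X (n_samples x n_features),
    constrained to be non-negative with at most k non-zero entries."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)                       # centre the data
    w = np.abs(rng.standard_normal(X.shape[1]))
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = X @ w                                # E-step: latent scores
        w = X.T @ s                              # M-step: unconstrained update
        w = np.maximum(w, 0.0)                   # non-negativity projection
        if np.count_nonzero(w) > k:              # keep only the k largest entries
            w[np.argsort(w)[:-k]] = 0.0
        norm = np.linalg.norm(w)
        if norm == 0.0:                          # restart if the projection zeroed w
            w = np.abs(rng.standard_normal(X.shape[1]))
            norm = np.linalg.norm(w)
        w /= norm
    return w

X = np.random.default_rng(1).standard_normal((100, 20))
print(sparse_nonneg_pc(X, k=5))
```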
Identifying confounders using additive noise models
We propose a method for inferring the existence of a latent common cause (“confounder”) of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.
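Stated as equations, the generative model assumed in the abstract has the following form, with T the latent confounder and jointly independent additive noise terms:

```latex
X = f(T) + N_X, \qquad Y = g(T) + N_Y,
\qquad T \,\perp\, N_X, \quad T \,\perp\, N_Y, \quad N_X \,\perp\, N_Y .
```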
Artificial Neural Networks and Machine Learning – ICANN 2017
Ladder networks are a notable recent development in semi-supervised learning, showing state-of-the-art results in image recognition tasks while being compatible with many existing neural architectures. We present the recurrent ladder network, a novel modification of the ladder network for semi-supervised learning of recurrent neural networks, which we evaluate with a phoneme recognition task on the TIMIT corpus. Our results show that the model is able to consistently outperform the baseline and achieve fully-supervised baseline performance with only 75% of all labels, which demonstrates that the model is capable of using unsupervised data as an effective regulariser.
A RGBD SLAM algorithm combining ORB with PROSAC for indoor mobile robot
In order to enhance the real-time performance of SLAM for indoor mobile robots, an RGBD SLAM method based on Kinect was proposed. In the method, the oriented FAST and rotated BRIEF (ORB) algorithm was combined with the progressive sample consensus (PROSAC) algorithm to perform feature extraction and matching. More specifically, the ORB algorithm, which has better properties than many other feature descriptors, was used for feature extraction. At the same time, the ICP algorithm was adopted for coarse registration of the point clouds, and the PROSAC algorithm, which is superior to RANSAC at outlier removal, was employed to eliminate incorrect matches. To make the result more accurate, pose-graph optimization was performed based on the g2o framework. In the end, a 3D volumetric map which can be used directly for robot navigation was created.
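A minimal feature-extraction and matching sketch with OpenCV, illustrative only and far short of the full SLAM pipeline (no ICP, no g2o pose-graph optimization). Recent OpenCV builds expose PROSAC-style robust estimation via the USAC flags; where unavailable, the sketch falls back to RANSAC. The frame file names are placeholders.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)  # best first

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# PROSAC-style outlier rejection if available, otherwise plain RANSAC
method = getattr(cv2, "USAC_PROSAC", cv2.RANSAC)
H, inlier_mask = cv2.findHomography(pts1, pts2, method, 3.0)
print("inliers:", int(inlier_mask.sum()))
```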
Is oxycodone efficacy reflected in serum concentrations? A multicenter, cross-sectional study in 456 adult cancer patients.
CONTEXT The relationship between oxycodone and metabolite serum concentrations and clinical effects has not previously been investigated in cancer pain patients. OBJECTIVES The aim of this study was to assess whether there is a relationship between oxycodone concentrations and pain intensity, cognitive functioning, nausea, or tiredness in cancer patients. Also, oxymorphone and noroxymorphone contributions to analgesia and the adverse effects of oxycodone were assessed. METHODS Four hundred fifty-six cancer patients receiving oxycodone for cancer pain were included. Pain was assessed using the Brief Pain Inventory. The European Organisation for Research and Treatment of Cancer Quality-of-Life Questionnaire-C30 was used to assess the symptoms of tiredness, nausea, constipation, and depression. Cognitive function was assessed using the Mini-Mental State Examination. Associations were examined by multiple linear or ordinal logistic regressions. Whether patients classified as being a "treatment success" or a "treatment failure" had different serum concentrations of oxycodone or metabolites was assessed using Mann-Whitney U-tests. RESULTS Serum concentrations of oxycodone and metabolites were not associated with pain intensity, nausea, tiredness, or cognitive function, with the exception that increased pain intensity was associated with higher oxymorphone concentrations. Patients with poor pain control and side effects had higher serum concentrations of the oxycodone metabolites, noroxycodone and noroxymorphone, compared with those with good pain relief and without side effects. CONCLUSION This study of patients receiving oxycodone for cancer pain confirms previous observations that there is most likely no association between serum concentrations of opioid analgesics and clinical effects.
New insights into the basement of the Transylvanian Depression (Romania)
Abstract In the Transylvanian Depression (Romania) a number of deep wells were drilled to investigate and exploit methane gas fields. From these, only a few penetrated the Middle to Upper Jurassic volcanics in the basement. From three boreholes (Deleni, Cenade and Zoreni) rock samples were available for investigations. Deleni and Cenade show calc-alkaline basalts to andesites which are similar to the island arc volcanics of the Southern Apuseni Mountains. Zoreni basalts and basaltic andesites show a boninitic affinity, which was not found up to now in outcrops. The distribution of the volcanics in the basement of the Transylvanian Depression, which correlates with a geomagnetic anomaly, can be explained by the presence of a magnetite-rich ophiolite layer beneath the island arc volcanics. Comparable oceanic crust rocks are found further west, in the Southern Apuseni Mountains. They are considered to be remnants of a marginal or back-arc basin, in which a volcanic arc including boninites developed. Upon the westward-directed subduction of an open ocean, the island arc and part of the back-arc basin were overthrusted eastwards onto continental crust during the Late Jurassic or Early Cretaceous. A flip in the subduction direction caused late Early Cretaceous westward thrusting of the back-arc basin and island arc rocks onto the crystalline basement of the Tisia continent.
The Feasibility of Cognitive Adaptation Training for Outpatients with Schizophrenia in Integrated Treatment
Cognitive adaptation training (CAT) has been tested as a psychosocial treatment, showing promising results. To date there are no reported tests of CAT treatment outside the United States. Thus, we decided to adjust CAT treatment and apply it to an Integrated Treatment setting in Denmark. In this article we describe and discuss the feasibility of using CAT treatment in a randomized clinical trial in Denmark. The treatment period was shorter and the patients were instructed in prompting for specific actions by using newer tools such as schedules in their mobile phones. Social functioning, symptoms and quality of life were assessed using instruments validated in a Danish context. It was judged that, after some adjustments to fit the Danish assertive community treatment, CAT treatment was feasible in a Danish setting.
The New Era of Interferon-Free Treatment of Chronic Hepatitis C
BACKGROUND With the development and approval of several new direct-acting antivirals (DAA) against hepatitis C virus (HCV), a new era of hepatitis C therapy has begun. Even more treatment options are likely to become available during the next 1-2 years. METHODS A summary of the current phase II and III trials investigating DAA and a review of the recent HCV guidelines were conducted. RESULTS With the development of new potent DAA and the approval of different DAA combinations, cure rates of HCV infection of >90% are achievable for almost all HCV genotypes and stages of liver disease. Currently available DAA target different steps in the HCV replication cycle, in particular the NS3/4A protease, the NS5B polymerase, and the NS5A replication complex. Treatment duration varies between 8 and 24 weeks depending on the stage of fibrosis, prior treatment, HCV viral load, and HCV genotype. Ribavirin is required only for some treatment regimens and may be particularly beneficial in patients with cirrhosis. DAA resistance influences treatment outcome only marginally; thus, drug resistance testing is not routinely recommended before treatment. In the case of treatment failure, however, resistance testing should be performed before re-treatment with other DAA is initiated. CONCLUSION With the new, almost side-effect-free DAA treatment options, chronic HCV infection has become a curable disease. The clinical benefit of DAA combination therapies in patients with advanced cirrhosis and the effects on incidence rates of hepatocellular carcinoma remain to be determined.
A printed 16 ports massive MIMO antenna system with directive port beams
The design of a massive multiple-input-multiple-output (mMIMO) antenna system based on patch antennas is presented in this paper. The array consists of 16 ports; each port consists of a 2×2 patch antenna array with a different phase excitation at each element to tilt the beam toward different directions and provide lower correlation coefficient values. A fixed progressive-phase feed network is designed to provide the beam tilts. The proposed antenna system is designed using a 3-layer FR4 substrate with a total size of 33.33×33.33×0.16 cm³.
A Multi-User Game-Theoretical Multipath Routing Protocol to Send Video-Warning Messages over Mobile Ad Hoc Networks
The prevention of accidents is one of the most important goals of ad hoc networks in smart cities. When an accident happens, dynamic sensors (e.g., citizens with smart phones or tablets, smart vehicles and buses, etc.) could shoot a video clip of the accident and send it through the ad hoc network. With a video message, the level of seriousness of the accident could be much better evaluated by the authorities (e.g., health care units, police and ambulance drivers) rather than with just a simple text message. Besides, other citizens would be rapidly aware of the incident. In this way, smart dynamic sensors could participate in reporting a situation in the city using the ad hoc network, so it would be possible to have a quick reaction warning citizens and emergency units. The deployment of an efficient routing protocol to manage video-warning messages in mobile ad hoc networks (MANETs) has important benefits by allowing a fast warning of the incident, which potentially can save lives. To contribute to this goal, we propose a multipath routing protocol to provide video-warning messages in MANETs using a novel game-theoretical approach. As a base for our work, we start from our previous work, where a 2-player game-theoretical routing protocol was proposed to provide video-streaming services over MANETs. In this article, we further generalize the analysis for a general number of N players in the MANET. Simulations have been carried out to show the benefits of our proposal, taking into account the mobility of the nodes and the presence of interfering traffic. Finally, we have also tested our approach in a vehicular ad hoc network as an initial starting point to develop a novel proposal specifically designed for VANETs.
The effects of soccer training and timing of balance training on balance ability
The purpose of the present study was to investigate the effects of a soccer training session on the balance ability of the players and assess whether the effectiveness of a balance program is affected by its performance before or after the regular soccer training. Thirty-nine soccer players were randomly divided into three subject groups (n=13 each), one control group (C group), one training group that followed a balance program (12 weeks, 3 times per week, 20 min per session) before the regular soccer training (TxB group), and one training group that performed the same balance program after the soccer training (TxA group). Standard testing balance boards and the Biodex Stability System were used to assess balance ability in the C, TxB, and TxA groups at baseline (T0) and after completing the balance program (T12). The same tests and additional isokinetic knee joint moment measurements were carried out in the TxB and TxA groups pre- and post-soccer training. Two main results were obtained: (1) No differences (p>0.05) were found in balance ability and knee joint moment production between pre- and post-soccer training. (2) The balance program increased (p<0.01) the balance ability in the TxB and TxA groups, and the improvement in the TxA group was greater (p<0.05) than that in the TxB group post-soccer training. Result (1) is in contrast to the notion of a link between fatigue induced by a soccer training session or game and injury caused by impaired balance, and result (2) has implications for athletic training and rehabilitation.
Registration and Analysis of Vascular Images
We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.
Signed Link Analysis in Social Media Networks
Numerous real-world relations can be represented by signed networks with positive links (e.g., trust) and negative links (e.g., distrust). Link analysis plays a crucial role in understanding the link formation and can advance various tasks in social network analysis such as link prediction. The majority of existing works on link analysis have focused on unsigned social networks. The existence of negative links determines that properties and principles of signed networks are substantially distinct from those of unsigned networks, thus we need dedicated efforts on link analysis in signed social networks. In this paper, following social theories in link analysis in unsigned networks, we adopt three social science theories, namely Emotional Information, Diffusion of Innovations and Individual Personality, to guide the task of link analysis in signed networks.
Computing Visual Attention from Scene Depth
Visual attention is the ability to rapidly detect the interesting parts of a given scene. Inspired by biological vision, the principle of visual attention is used with a similar goal in computer vision. Several previous works deal with the computation of visual attention from images provided by standard video cameras, but little attention has been devoted so far to scene depth as a source for visual attention. The investigation presented in this paper aims at an extension of the visual attention model to the scene depth component. A first part of the paper is devoted to the integration of depth in the computational model built around conspicuity and saliency maps. A second part is devoted to experimental work in which results of visual attention, obtained from the extended model for various 3D scenes, are presented. The results speak for the usefulness of the enhanced computational model.
Effects of Cutting Parameters on Surface Roughness during End Milling of Aluminium under Minimum Quantity Lubrication (MQL)
In this study, an experimental investigation of the effects of cutting parameters on surface roughness during end milling of aluminium 6061 under minimum quantity lubrication (MQL) conditions was carried out. The experiments were carried out to investigate the surface quality of the machined parts and to develop mathematical models using least-squares techniques. Spindle speed (N), feed rate (f), axial depth of cut (a) and radial depth of cut (r) were chosen as input variables in order to predict surface roughness. The experiment was designed using a central composite design (CCD) in which 30 samples were run on a CNC milling machine. Each experimental result was measured using a Mitutoyo surface tester. After the predicted surface roughness values had been obtained, the average percentage errors were calculated. The mathematical model developed using the least-squares method shows an accuracy of 89.5%, which is reasonably reliable for surface roughness prediction. With the obtained optimum input parameters for surface roughness, production operations will be enhanced.
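A hedged sketch of the least-squares fit described above: a first-order model Ra = b0 + b1·N + b2·f + b3·a + b4·r is fitted to measured roughness values and percentage errors are computed. The numbers below are placeholders, not the study's data.

```python
import numpy as np

# columns: spindle speed N, feed rate f, axial depth a, radial depth r (placeholders)
X = np.array([[1500, 0.10, 1.0, 4.0],
              [2000, 0.15, 1.5, 6.0],
              [2500, 0.20, 2.0, 8.0],
              [3000, 0.10, 1.0, 8.0],
              [1500, 0.20, 2.0, 4.0]], dtype=float)
Ra = np.array([0.82, 0.95, 1.10, 0.74, 1.25])     # measured roughness (placeholder)

A = np.column_stack([np.ones(len(X)), X])         # add an intercept column
coeffs, *_ = np.linalg.lstsq(A, Ra, rcond=None)   # least-squares estimate
pred = A @ coeffs
ape = np.abs((Ra - pred) / Ra) * 100              # absolute percentage errors
print(coeffs, ape.mean())
```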
Factors associated with weaning practices in term infants: a prospective observational study in Ireland.
The WHO (2001) recommends exclusive breast-feeding and delaying the introduction of solid foods to an infant's diet until 6 months postpartum. However, in many countries, this recommendation is followed by few mothers, and earlier weaning onto solids is a commonly reported global practice. Therefore, this prospective, observational study aimed to assess compliance with the WHO recommendation and examine weaning practices, including the timing of weaning of infants, and to investigate the factors that predict weaning at ≤ 12 weeks. From an initial sample of 539 pregnant women recruited from the Coombe Women and Infants University Hospital, Dublin, 401 eligible mothers were followed up at 6 weeks and 6 months postpartum. Quantitative data were obtained on mothers' weaning practices using semi-structured questionnaires and a short dietary history of the infant's usual diet at 6 months. Only one mother (0.2%) complied with the WHO recommendation to exclusively breastfeed up to 6 months. Ninety-one (22.6%) infants were prematurely weaned onto solids at ≤ 12 weeks with predictive factors after adjustment, including mothers' antenatal reporting that infants should be weaned onto solids at ≤ 12 weeks, formula feeding at 12 weeks and mothers' reporting of the maternal grandmother as the principal source of advice on infant feeding. Mothers who weaned their infants at ≤ 12 weeks were more likely to engage in other sub-optimal weaning practices, including the addition of non-recommended condiments to their infants' foods. Provision of professional advice and exploring antenatal maternal misperceptions are potential areas for targeted interventions to improve compliance with the recommended weaning practices.
Impact of cardiovascular magnetic resonance on management and clinical decision-making in heart failure patients
BACKGROUND Cardiovascular magnetic resonance (CMR) can provide important diagnostic and prognostic information in patients with heart failure. However, in the current health care environment, use of a new imaging modality like CMR requires evidence for direct additive impact on clinical management. We sought to evaluate the impact of CMR on clinical management and diagnosis in patients with heart failure. METHODS We prospectively studied 150 consecutive patients with heart failure and an ejection fraction ≤ 50% referred for CMR. Definitions for "significant clinical impact" of CMR were pre-defined and collected directly from medical records and/or from patients. Categories of significant clinical impact included: new diagnosis, medication change, hospital admission/discharge, as well as performance or avoidance of invasive procedures (angiography, revascularization, device therapy or biopsy). RESULTS Overall, CMR had a significant clinical impact in 65% of patients. This included an entirely new diagnosis in 30% of cases and a change in management in 52%. CMR results directly led to angiography in 9% and to the performance of percutaneous coronary intervention in 7%. In a multivariable model that included clinical and imaging parameters, presence of late gadolinium enhancement (LGE) was the only independent predictor of "significant clinical impact" (OR 6.72, 95% CI 2.56-17.60, p=0.0001). CONCLUSIONS CMR made a significant additive clinical impact on management, decision-making and diagnosis in 65% of heart failure patients. This additive impact was seen despite universal use of prior echocardiography in this patient group. The presence of LGE was the best independent predictor of significant clinical impact following CMR.
Issues in the Design of a Pilot Concept-Based Query Interface for the Neuroinformatics Information Framework
This paper describes a pilot query interface that has been constructed to help us explore a “concept-based” approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface.
Multi-Domain and Multi-Task Learning for Human Action Recognition
Domain-invariant (view-invariant and modality-invariant) feature representation is essential for human action recognition. Moreover, given a discriminative visual representation, it is critical to discover the latent correlations among multiple actions in order to facilitate action modeling. To address these problems, we propose a multi-domain and multi-task learning (MDMTL) method to: 1) extract domain-invariant information for multi-view and multi-modal action representation and 2) explore the relatedness among multiple action categories. Specifically, we present a sparse transfer learning-based method to co-embed multi-domain (multi-view and multi-modality) data into a single common space for discriminative feature learning. Additionally, visual feature learning is incorporated into the multi-task learning framework, with the Frobenius-norm regularization term and the sparse constraint term, for joint task modeling and task relatedness-induced feature learning. To the best of our knowledge, MDMTL is the first supervised framework to jointly realize domain-invariant feature learning and task modeling for multi-domain action recognition. Experiments conducted on the INRIA Xmas Motion Acquisition Sequences data set, the MSR Daily Activity 3D (DailyActivity3D) data set, and the Multi-modal & Multi-view & Interactive data set, which is the most recent and largest multi-view and multi-model action recognition data set, demonstrate the superiority of MDMTL over the state-of-the-art approaches.
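A generic form of such a jointly regularized multi-task objective (notation assumed here; this is not the paper's exact formulation): per-task weights W_t are fitted with a Frobenius-norm term and a sparsity-inducing term that couples feature selection across tasks.

```latex
\min_{W}\; \sum_{t=1}^{T} \big\| Y_t - X_t W_t \big\|_F^2
\;+\; \lambda_1 \,\| W \|_F^2
\;+\; \lambda_2 \,\| W \|_{2,1}
```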
Drone forensic framework: Sensor and data identification and verification
The availability and affordability of unmanned aerial vehicles (UAVs) have led to an increase in their popularity amongst the public. The proliferation of UAVs has also raised several security issues. These devices are used for illegal activities such as drug smuggling and privacy invasion. The purpose of this research paper is to analyze the basic architecture of a drone and to propose a generic drone forensic model that would improve the digital investigation process. This paper also provides recommendations on how one should perform forensics on the various components of a drone, such as the camera and Wi-Fi.
Trust in Smart Contracts is a Process, As Well
Distributed ledger technologies are rising in popularity, mainly because of the host of financial applications they potentially enable through smart contracts. Several implementations of distributed ledgers have been proposed, and different languages for the development of smart contracts have been suggested. A great deal of attention is given to the practice of development, i.e. the programming, of smart contracts. In this position paper, we argue that more attention should be given to the “traditional developers” of contracts, namely lawyers, and we propose a list of requirements for a human- and machine-readable contract authoring language that is friendly to lawyers and serves as a common (and specification) language for programmers and the parties to a contract.
Predicting Fluctuations in Cryptocurrency Transactions Based on User Comments and Replies
This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method.
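As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a classifier to predict up/down price movement from simple daily community features. The feature names, synthetic data, and choice of logistic regression are assumptions for the example, not the authors' exact method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical daily features in the spirit of the paper: number of comments,
# number of replies, and a crude share of positive comments. All values are synthetic.
rng = np.random.default_rng(1)
n_days = 200
X = np.column_stack([
    rng.poisson(120, n_days),        # comments per day
    rng.poisson(300, n_days),        # replies per day
    rng.uniform(0, 1, n_days),       # share of positive comments
])
# Synthetic up/down label loosely driven by the features plus noise.
y = (0.02 * X[:, 0] - 0.01 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 1, n_days) > 2.5).astype(int)

# Chronological split: train on the first 150 days, test on the remaining 50.
model = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("up/down accuracy:", model.score(X[150:], y[150:]))
```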
Design of a Monitor for Detecting Money Laundering and Terrorist Financing
Money laundering is a global problem that affects all countries to various degrees. Although many countries benefit from money laundering by accepting the laundered money while keeping the crime abroad, in the long run "money laundering attracts crime": criminals come to know a country, create networks, and eventually also locate their criminal activities there. Most financial institutions have been implementing anti-money laundering (AML) solutions to fight investment fraud. The key pillar of a strong anti-money laundering system for any financial institution is a well-designed and effective monitoring system. The main purpose of an anti-money laundering transaction monitoring system is to identify potentially suspicious behaviors embedded in legitimate transactions. This paper presents a monitoring framework that uses various techniques to enhance the monitoring capabilities. This framework relies on rule-based monitoring, behavior detection monitoring, cluster monitoring, and link analysis based monitoring. The monitor's detection processes are based on money laundering deterministic finite automata obtained from their corresponding regular expressions. Index Terms – Anti-money laundering system, money laundering monitoring and detection, cycle detection monitoring, suspected link monitoring.
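The abstract's monitors are deterministic finite automata derived from regular expressions over transaction behavior. Below is a toy illustration of that idea in Python: a hypothetical "structuring" rule written as a regular expression (compiled internally to an automaton) and matched against a symbolized transaction history. The symbols, threshold, and rule are invented for the example, not the paper's rule set.

```python
import re

# Hypothetical structuring rule over a symbolized transaction stream: three or more
# sub-threshold cash deposits (d) followed by an outgoing international wire (w).
RULE = re.compile(r"d{3,}w")

def symbolize(transactions, threshold=10_000):
    symbols = []
    for t in transactions:
        if t["type"] == "cash_deposit" and t["amount"] < threshold:
            symbols.append("d")
        elif t["type"] == "intl_wire_out":
            symbols.append("w")
        else:
            symbols.append("o")   # any other activity
    return "".join(symbols)

history = [
    {"type": "cash_deposit", "amount": 9_500},
    {"type": "cash_deposit", "amount": 9_800},
    {"type": "cash_deposit", "amount": 9_900},
    {"type": "intl_wire_out", "amount": 28_000},
]
print(bool(RULE.search(symbolize(history))))  # True -> flag the account for review
```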
Organizational Commitment, Job Redesign, Employee Empowerment and Intent to Quit Among Survivors of Restructuring and Downsizing
This study is designed to determine the relationship between job redesign, employee empowerment and intent to quit measured by affective organizational commitment among survivors of organizational restructuring and downsizing. It focused on middle-level managers and employees in supervisory positions because survivors of this group are often called upon to assume expanded roles, functions and responsibilities in a post-restructuring and downsizing environment. The results show statistically significant positive relationships between job redesign, empowerment and affective commitment. It therefore provides empirical data to support theoretical models for managing and mitigating survivors' intent to quit and subsequent voluntary turnover among survivors of organizational restructuring and downsizing. The implications of these findings, which suggest expanded roles for job redesign and employee empowerment, are discussed.
Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks
Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need for human intervention in most common cases.
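The 82% figure reported above is a Dice overlap score. For readers unfamiliar with the metric, a small sketch of its computation on binary masks follows; the masks are synthetic and the function is generic, not the authors' evaluation code.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Two overlapping synthetic square masks as a stand-in for segmentations.
a = np.zeros((64, 64), dtype=int); a[20:40, 20:40] = 1
b = np.zeros((64, 64), dtype=int); b[25:45, 22:42] = 1
print(round(dice_score(a, b), 3))
```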
Internet Stream Protocol Version 2 (ST2) Protocol Specification - Version ST2+
Status of this Memo This memo defines an Experimental Protocol for the Internet community. This memo does not specify an Internet standard of any kind. Discussion and suggestions for improvement are requested. Distribution of this memo is unlimited. IESG NOTE This document is a revision of RFC1190. The charter of this effort was clarifying, simplifying and removing errors from RFC1190 to ensure interoperability of implementations. NOTE WELL: Neither the version of the protocol described in this document nor the previous version is an Internet Standard or under consideration for that status. Since the publication of the original version of the protocol, there have been significant developments in the state of the art. Readers should note that standards and technology addressing alternative approaches to the resource reservation problem are currently under development within the IETF. Abstract This memo contains a revised specification of the Internet STream Protocol Version 2 (ST2). ST2 is an experimental resource reservation protocol intended to provide end-to-end real-time guarantees over an internet. It allows applications to build multi-destination simplex data streams with a desired quality of service. The revised version of ST2 specified in this memo is called ST2+.
Cloaked websites: propaganda, cyber-racism and epistemology in the digital era
This article analyzes cloaked websites, which are sites published by individuals or groups who conceal authorship in order to disguise deliberately a hidden political agenda. Drawing on the insights of critical theory and the Frankfurt School, this article examines the way in which cloaked websites conceal a variety of political agendas from a range of perspectives. Of particular interest here are cloaked white supremacist sites that disguise cyber-racism. The use of cloaked websites to further political ends raises important questions about knowledge production and epistemology in the digital era. These cloaked sites emerge within a social and political context in which it is increasingly difficult to parse fact from propaganda, and this is a particularly pernicious feature when it comes to the cyber-racism of cloaked white supremacist sites. The article concludes by calling for the importance of critical, situated political thinking in the evaluation of cloaked websites.
The Problem of Implementing Ethical Values in the Practice of Social Work in Canada
This article analyzes the main problems of implementing ethical values in the practice of social work in Canada. The novelty of the work stems from the insufficient attention given in our country to the achievements of social work abroad. The conclusions of the article are recommended for use in courses training social work specialists, as well as in the general practice of social work, including in the Omsk region.
Teaching Artificial Intelligence and Robotics Via Games ( Abstract ) ∗
The Department of Computer Science at the University of Southern California recently created two new degree programs, namely a Bachelor’s Program in Computer Science (Games) and a Master’s Program in Computer Science (Game Development). In this paper, we discuss two projects that use games as motivator. First, the Computer Games in the Classroom Project develops stand-alone projects on standard artificial intelligence topics that use video-game technology to motivate the students but do not require the students to use game engines. Second, the Pinball Project develops the necessary hardware and software to enable students to learn concepts from robotics by developing games on actual pinball machines.
Multi-View Stereo: Redundancy Benefits for 3D Reconstruction
This work investigates the influence of using multiple views for 3D reconstruction with respect to depth accuracy and robustness. In particular, we show that multiview matching not only contributes to scene completeness but also improves depth accuracy through better triangulation angles. We start with synthetic experiments on a typical aerial photogrammetric camera network and investigate how baseline (i.e., triangulation angle) and redundancy affect the depth error. Our evaluation also includes a comparison between combined pairwise triangulated and fused stereo pairs and true multiview triangulation. By analyzing the 3D uncertainty ellipsoid of triangulated points, we demonstrate the clear advantage of a multiview approach over fused two-view stereo algorithms. We propose an efficient dense matching algorithm that uses pairwise optical flow followed by a robust correspondence chaining approach. We provide evaluation results of the proposed method on ground-truth data and compare its performance against a multiview plane sweep method.
Drivers and barriers in the acceptance of mobile payment in China
Integrating the prospective user's cost and perceived risk with Unified Theory of Acceptance and Use of Technology (UTAUT), we propose a research model and collect data by survey to investigate the determinants of the mobile payment acceptance in China. By revising the hypothesized model based on the data analysis by SPSS and AMOS, it is tested empirically that in the user's acceptance of mobile payment, performance expectancy and social influence are the drivers, whereas cost and perceived risks are the barriers. The findings of this study have a number of important implications to both researchers and the mobile payment service providers.
Fascial Disorders: Implications for Treatment.
In the past 15 years, multiple articles have appeared that target fascia as an important component of treatment in the field of physical medicine and rehabilitation. To better understand the possible actions of fascial treatments, there is a need to clarify the definition of fascia and how it interacts with various other structures: muscles, nerves, vessels, organs. Fascia is a tissue that occurs throughout the body. However, different kinds of fascia exist. In this narrative review, we demonstrate that symptoms related to dysfunction of the lymphatic system, superficial vein system, and thermoregulation are closely related to dysfunction involving superficial fascia. Dysfunction involving alterations in mechanical coordination, proprioception, balance, myofascial pain, and cramps are more related to deep fascia and the epimysium. Superficial fascia is obviously more superficial than the other types and contains more elastic tissue. Consequently, effective treatment can probably be achieved with light massage or with treatment modalities that use large surfaces that spread the friction in the first layers of the subcutis. The deep fasciae and the epymisium require treatment that generates enough pressure to reach the surface of muscles. For this reason, the use of small surface tools and manual deep friction with the knuckles or elbows are indicated. Due to different anatomical locations and to the qualities of the fascial tissue, it is important to recognize that different modalities of approach have to be taken into consideration when considering treatment options.
A new Design Criteria for Hash-Functions
The most common way of constructing a hash function (e.g., SHA-1) is to iterate a compression function on the input message. The compression function is usually designed from scratch or made out of a block-cipher. In this paper, we introduce a new security notion for hash-functions, stronger than collision-resistance. Under this notion, the arbitrary length hash function H must behave as a random oracle when the fixed-length building block is viewed as an ideal primitive. This makes it possible to eliminate all possible generic attacks against iterative hash-functions. We show that the current design principle behind hash functions such as SHA-1 and MD5 — the (strengthened) Merkle-Damgård transformation — does not satisfy this security notion. We provide several constructions that provably satisfy this notion; those new constructions introduce minimal changes to the plain Merkle-Damgård construction and are easily implementable in practice. This paper is a modified version of a paper to appear at Crypto 2005.
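For context, the plain (strengthened) Merkle-Damgård iteration discussed above can be sketched in a few lines. The compression function below is a toy stand-in with no security properties; only the chaining and length-padding structure reflects the construction the paper analyzes.

```python
import struct

def toy_compression(state: bytes, block: bytes) -> bytes:
    """Stand-in 8-byte compression function (NOT cryptographically secure);
    it only illustrates how the iteration chains state through message blocks."""
    h = int.from_bytes(state, "big")
    for i in range(0, len(block), 8):
        w = int.from_bytes(block[i:i + 8], "big")
        h = ((h ^ w) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return h.to_bytes(8, "big")

def merkle_damgard(message: bytes, iv: bytes = b"\x01" * 8, block_size: int = 64) -> bytes:
    # Strengthened Merkle-Damgard: pad with 0x80, zeros, then the message bit length.
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % block_size)
    padded += struct.pack(">Q", len(message) * 8)
    state = iv
    for i in range(0, len(padded), block_size):
        state = toy_compression(state, padded[i:i + block_size])
    return state

print(merkle_damgard(b"hello world").hex())
```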
PowerMatcher: multiagent control in the electricity infrastructure
Different driving forces push electricity production towards decentralization. As a result, the current electricity infrastructure is expected to evolve into a network of networks, in which all system parts communicate with and influence each other. Multi-agent systems and electronic markets form an appropriate technology for control and coordination tasks in the future electricity network. We present the PowerMatcher, a market-based control concept for supply and demand matching (SDM) in electricity networks. A simulation study shows that the simultaneousness of electricity production and consumption can be raised substantially using this concept. Further, we present a field test, currently in preparation, with medium-sized electricity producing and consuming installations controlled via this concept.
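A toy sketch of market-based supply and demand matching in the spirit described above: each device submits a bid curve over a shared price axis, and the auctioneer picks the price where aggregate demand and supply balance. The devices, bid curves, and prices are invented for illustration and do not reproduce the PowerMatcher implementation.

```python
import numpy as np

def clear_market(bids, prices):
    """Pick the price at which net demand (positive = consume, negative = produce)
    across all submitted bid curves is closest to zero."""
    net = np.sum(bids, axis=0)
    idx = int(np.argmin(np.abs(net)))
    return prices[idx], net[idx]

prices = np.linspace(0, 0.5, 51)                        # hypothetical EUR per kWh axis
heat_pump = np.clip(3.0 - 10.0 * prices, 0, None)       # consumes less as price rises
ev_charger = np.where(prices < 0.2, 2.0, 0.0)           # charges only when cheap
chp_unit = -np.clip(8.0 * (prices - 0.1), 0, None)      # produces more as price rises

price, imbalance = clear_market([heat_pump, ev_charger, chp_unit], prices)
print(f"clearing price {price:.2f} EUR/kWh, residual imbalance {imbalance:.2f} kW")
```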
Nonlinear connectivity by Granger causality
Communication among neuronal populations, reflected by transient synchronous activity, is the mechanism underlying information processing in the brain. Although it is widely assumed that the interactions among those populations (i.e., functional connectivity) are highly nonlinear, the amount of nonlinear information transmission and its functional roles are not clear. The state-of-the-art approaches to understanding the communication between brain systems are dynamic causal modeling (DCM) and Granger causality. While DCM models nonlinear couplings, Granger causality, which constitutes a major tool to reveal effective connectivity and is widely used to analyze EEG/MEG data as well as fMRI signals, is usually applied in its linear version. A few approaches have been proposed to capture nonlinear interactions even between short and noisy time series. We review them and focus on a recently proposed flexible approach, the kernel version of Granger causality. We show the application of this approach to EEG signals and fMRI data.
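As a concrete reference point, the linear Granger causality that the kernel method generalizes can be sketched as a comparison of residual variances between a restricted and a full autoregressive model. The lag order, toy data, and least-squares formulation below are illustrative assumptions.

```python
import numpy as np

def granger_causality_index(x, y, p=2):
    """Linear Granger causality from x to y: fit AR(p) models of y with and
    without lagged x via least squares and compare residual sums of squares."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    A_restricted = np.hstack([ones, lags_y])
    A_full = np.hstack([ones, lags_y, lags_x])

    def rss(A):
        coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
        r = Y - A @ coef
        return r @ r

    return np.log(rss(A_restricted) / rss(A_full))  # > 0 suggests x Granger-causes y

# Toy coupled signals: x drives y with a one-sample lag.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_causality_index(x, y))   # clearly positive
print(granger_causality_index(y, x))   # near zero
```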
Power Control System Design in Induction Heating with Resonant Voltage Inverter
This paper is concerned with the design of the power control system for a single-phase voltage source inverter feeding a parallel resonant induction heating load. The control of the inverter output current, meaning the active component of the current through the induction coil when the control frequency equals or slightly exceeds the resonant frequency, is achieved by a Proportional-Integral-Derivative controller tuned in accordance with the Modulus Optimum criterion in the Kessler variant. The response of the current loop for different work pipes and set currents has been tested by simulation in the Matlab-Simulink environment and illustrates very good behavior of the control system.
A Compressive Sensing Data Acquisition and Imaging Method for Stepped Frequency GPRs
A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground penetrating radars (SFCW GPRs). It is shown that if the target space is sparse, i.e., a small number of point-like targets, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem which enforces sparsity through ℓ1 minimization. This measurement strategy greatly reduces the data acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter than the standard migration methods and are robust to noise and random spatial sampling. The images also have increased resolution where closely spaced targets that cannot be resolved by the standard migration methods can be resolved by the proposed method.
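The convex sparsity-promoting recovery mentioned above can be illustrated with a generic iterative soft-thresholding (ISTA) solver for the ℓ1-regularized least-squares problem; the random sensing matrix and sparse target vector below are stand-ins for the radar measurement model, not the paper's actual data or solver.

```python
import numpy as np

def ista(A, b, lam=0.05, iters=500):
    """Iterative soft-thresholding for min ||Ax - b||^2 + lam*||x||_1,
    a simple stand-in for the convex l1 program described in the abstract."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

# Toy sparse target space: 5 point targets in a length-200 reflectivity vector,
# observed through 40 random measurements (stand-in for random-frequency sensing).
rng = np.random.default_rng(3)
n, m, k = 200, 40, 5
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
b = A @ x_true
x_hat = ista(A, b)
print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:k])
```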
Prediction of homoprotein and heteroprotein complexes by protein docking and template‐based modeling: A CASP‐CAPRI experiment
We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323-348. © 2016 Wiley Periodicals, Inc.
Reconfigurable Battery Techniques and Systems: A Survey
Battery packs with a large number of battery cells are becoming more and more widely adopted in electronic systems, such as robotics, renewable energy systems, energy storage in smart grids, and electric vehicles. Therefore, a well-designed battery pack is essential for battery applications. In the literature, the majority of research in battery pack design focuses on battery management systems, safety circuits, and cell-balancing strategies. Recently, reconfigurable battery pack design has gained increasing attention as a promising solution to the problems of conventional battery packs and their associated battery management systems, such as low energy efficiency, short pack lifespan, safety issues, and low reliability. One of the most prominent features of reconfigurable battery packs is that the battery cell topology can be dynamically reconfigured in real time based on the current condition (in terms of the state of charge and the state of health) of the battery cells. So far, several reconfigurable battery schemes have been proposed and validated in the literature, all sharing the advantage of cell topology reconfiguration that ensures balanced cell states during charging and discharging while providing strong fault tolerance. This survey is undertaken with the intent of identifying the state of the art in reconfigurable battery technologies, as well as providing a review of related technologies and insight into future research in this emerging area.
Assessing and addressing the re-eutrophication of Lake Erie: Central basin hypoxia
Donald Scavia, J. David Allan, Kristin K. Arend, Steven Bartell, Dmitry Beletsky, Nate S. Bosch, Stephen B. Brandt, Ruth D. Briland, Irem Daloğlu, Joseph V. DePinto, David M. Dolan, Mary Anne Evans, Troy M. Farmer, Daisuke Goto, Haejin Han, Tomas O. Höök, Roger Knight, Stuart A. Ludsin, Doran Mason, Anna M. Michalak, R. Peter Richards, James J. Roberts, Daniel K. Rucinski, Edward Rutherford, David J. Schwab, Timothy M. Sesterhenn, Hongyan Zhang, Yuntao Zhou
Magnetization transfer ratio in gray matter: a potential surrogate marker for progression in early primary progressive multiple sclerosis.
BACKGROUND Magnetization transfer imaging has the potential to provide a surrogate marker for progression in primary progressive multiple sclerosis (PPMS). OBJECTIVES To investigate whether brain magnetization transfer imaging, T2 lesion load, and atrophy changes over 3 years reflect concurrent clinical changes, and which baseline imaging measure best predicts progression over 3 years in early PPMS. DESIGN Prospective study. SETTING National Hospital for Neurology and Neurosurgery and the Institute of Neurology, London, England. PATIENTS Forty-seven patients with PPMS (of whom 43 completed the study) and 18 control subjects. INTERVENTIONS Brain magnetization transfer imaging (including T2-weighted images) and volume sequences every 6 months for 3 years. MAIN OUTCOME MEASURES Changes in Expanded Disability Status Scale (EDSS) score and associations with rate of change in imaging variables. RESULTS More rapid decline in gray matter mean and peak location magnetization transfer ratio and T2 lesion load increase were associated with greater rates of progression on the EDSS. Baseline gray matter peak height magnetization transfer ratio best predicted progression over 3 years. CONCLUSION Gray matter magnetization transfer ratio meets many of the criteria for a surrogate marker of progression in early PPMS.
Co-evolution of three megatrends nurtures uncaptured GDP – Uber's ridesharing revolution
Uber used a disruptive business model driven by digital technology to trigger a ride-sharing revolution. The institutional sources of the company's platform ecosystem architecture were analyzed to explain this revolutionary change. Both an empirical analysis of a co-existing development trajectory with taxis and the institutional enablers that helped to create Uber's platform ecosystem were analyzed. The analysis identified a correspondence with the "two-faced" nature of ICT that nurtures uncaptured GDP. This two-faced nature of ICT can be attributed to a virtuous cycle of declining prices and an increasing number of trips. We show that this cycle can be attributed to a self-propagating function that plays a vital role in the spinoff from traditional co-evolution to new co-evolution. Furthermore, we use the three megatrends of ICT advancement, paradigm change and a shift in people's preferences to explain the secret of Uber's system success. All these noteworthy elements seem essential to a well-functioning platform ecosystem architecture, not only in transportation but also for other business institutions.
A high gain patch antenna using negative permeability metamaterial structures
In this paper, a high gain patch antenna realized by using negative permeability electromagnetic metamaterials is presented to operate at 5–5.5 GHz. The proposed metamaterial cell is investigated and analyzed by numerical methods. We found that the proposed metamaterial structure provides a negative permeability from 5 GHz to 5.5 GHz, which overlaps with the operating frequency band of the reference microstrip antenna. By periodically loading a number of the proposed metamaterial cells on the substrate around the patch antenna, the gain of the antenna at operating frequency is enhanced by about 2.5 dBi, while there is no effect on the operating bandwidth of the reference antenna.
The Effectiveness of Virtual Reality Pain Control With Multiple Treatments of Longer Durations: A Case Study
Immersive virtual reality (VR) has proved to be potentially valuable as a pain control technique for patients with severe burns undergoing wound care and physical therapy. Recent studies have shown that single, 3-min visits to a virtual world can dramatically reduce the amount of pain experienced during wound care, and the illusion of going inside the computer-generated world helps make VR analgesia unusually effective. This case study explored whether VR continues to reduce pain when the duration and frequency of VR treatments are increased to more practical levels. A patient with deep flash burns covering 42% of his body spent varying amounts of time performing physical therapy with and without virtual reality. Five subjective pain ratings
Randomised controlled study in the primary healthcare sector to investigate the effectiveness and safety of auriculotherapy for the treatment of uncomplicated chronic rachialgia: a study protocol
BACKGROUND Uncomplicated chronic rachialgia is a highly prevalent complaint, and one for which therapeutic results are contradictory. The aim of the present study is to evaluate the effectiveness and safety of treatment with auriculopressure, in the primary healthcare sector, carried out by trained healthcare professionals via a 30-hour course. METHODS/DESIGN The design consists of a multi-centre randomized controlled trial, with placebo, with two parallel groups, and including an economic evaluation. Patients with chronic uncomplicated rachialgia, whose GP is considering referral for auriculopressure sensory stimulation, are eligible for inclusion. Sampling will be by consecutive selection, and randomised allocation to one of the two study arms will be determined using a centralised method, following a 1:1 plan (true auriculopressure; placebo auriculopressure). The implants (true and placebo) will be replaced once weekly, and the treatment will have a duration of 8 weeks. The primary outcome measure will be the change in pain intensity, measured on a visual analogue scale (VAS) of 100 mm, at 9 weeks after beginning the treatment. A follow up study will be performed at 6 months after beginning treatment. An assessment will also be made of the changes measured in the Spanish version of the McGill Pain Questionnaire, of the changes in the Lattinen test, and of the changes in quality of life (SF-12). Also planned is an analysis of cost-effectiveness and also, if necessary, a cost-benefit analysis. DISCUSSION This study will contribute to developing evidence on the use of auriculotherapy using Semen vaccariae [wang bu liu xing] for the treatment of uncomplicated chronic rachialgia. TRIAL REGISTRATION Current Controlled Trials ISRCTN01897462.
Analytical Determination of the Phase Inductances of a Brushless DC Motor With Faulhaber Winding
This paper presents a method to analytically model the self and mutual phase inductances of a brushless dc motor with a Faulhaber winding. The method of images is used to “remove” the motor iron parts in the model in order to analyze the winding completely in the air. The analytical model is verified using 3-D finite-element method simulations and measurements.
Post-Punching Behavior of Flat Slabs
Reinforced concrete flat slabs are a common structural system for cast-in-place concrete slabs. Failures in punching shear near the column regions are typically governing at ultimate. In case no punching shear or integrity reinforcement is placed, failures in punching develop normally in a brittle manner with almost no warning signs. Furthermore, the residual strength after punching is, in general, significantly lower than the punching load. Thus, punching of a single column of a flat slab overloads adjacent columns and can potentially lead to their failure on punching, thus triggering the progressive collapse of the structure. Over the past decades, several collapses have been reported due to punching shear failures, resulting in human casualties and extensive damage. Other than placing conventional punching shear reinforcement, the deformation capacity and residual strength after punching can also be enhanced by placing integrity reinforcement to avoid progressive collapses of flat slabs. This paper presents the main results of an extensive experimental campaign performed at the Ecole Polytechnique Fédérale de Lausanne (EPFL) on the role of integrity reinforcement by means of 20 slabs with dimensions of 1500 x 1500 x 125 mm (≈5 ft x 5 ft x 5 in.) and various integrity reinforcement layouts. The performance and robustness of the various solutions are investigated to obtain physical explanations and a consistent design model for the load-carrying mechanisms and strength after punching failures.
Effects of a mindfulness-based intervention during pregnancy on prenatal stress and mood: results of a pilot study
Stress and negative mood during pregnancy increase risk for poor childbirth outcomes and postnatal mood problems and may interfere with mother–infant attachment and child development. However, relatively little research has focused on the efficacy of psychosocial interventions to reduce stress and negative mood during pregnancy. In this study, we developed and pilot tested an eight-week mindfulness-based intervention directed toward reducing stress and improving mood in pregnancy and early postpartum. We then conducted a small randomized trial (n = 31) comparing women who received the intervention during the last half of their pregnancy to a wait-list control group. Measures of perceived stress, positive and negative affect, depressed and anxious mood, and affect regulation were collected prior to, immediately following, and three months after the intervention (postpartum). Mothers who received the intervention showed significantly reduced anxiety (effect size, 0.89; p < 0.05) and negative affect (effect size, 0.83; p < 0.05) during the third trimester in comparison to those who did not receive the intervention. The brief and nonpharmaceutical nature of this intervention makes it a promising candidate for use during pregnancy.
Contrasting Effects of Increased and Decreased Dopamine Transmission on Latent Inhibition in Ovariectomized Rats and Their Modulation by 17β-Estradiol: An Animal Model of Menopausal Psychosis?
Women with schizophrenia have later onset and better response to antipsychotic drugs (APDs) than men during reproductive years, but the menopausal period is associated with increased symptom severity and reduced treatment response. Estrogen replacement therapy has been suggested as beneficial but clinical data are inconsistent. Latent inhibition (LI), the capacity to ignore irrelevant stimuli, is a measure of selective attention that is disrupted in acute schizophrenia patients and in rats and humans treated with the psychosis-inducing drug amphetamine and can be reversed by typical and atypical APDs. Here we used amphetamine (1 mg/kg)-induced disrupted LI in ovariectomized rats to model low levels of estrogen along with hyperfunction of the dopaminergic system that may be occurring in menopausal psychosis, and tested the efficacy of APDs and estrogen in reversing disrupted LI. 17β-Estradiol (50, 150 μg/kg), clozapine (atypical APD; 5, 10 mg/kg), and haloperidol (typical APD; 0.1, 0.3 mg/kg) effectively reversed amphetamine-induced LI disruption in sham rats, but were much less effective in ovariectomized rats; 17β-estradiol and clozapine were effective only at high doses (150 μg/kg and 10 mg/kg, respectively), whereas haloperidol failed at both doses. Haloperidol and clozapine regained efficacy if coadministered with 17β-estradiol (50 μg/kg, an ineffective dose). Reduced sensitivity to dopamine (DA) blockade coupled with spared/potentiated sensitivity to DA stimulation after ovariectomy may provide a novel model recapitulating the combination of increased vulnerability to psychosis with reduced response to APD treatment in female patients during menopause. In addition, our data show that 17β-estradiol exerts antipsychotic activity.
Creating SERS hot spots on MoS(2) nanosheets with in situ grown gold nanoparticles.
Herein, a reliable surface-enhanced Raman scattering (SERS)-active substrate has been prepared by synthesizing gold nanoparticles (AuNPs)-decorated MoS2 nanocomposite. The AuNPs grew in situ on the surface of MoS2 nanosheet to form efficient SERS hot spots by a spontaneous redox reaction with tetrachloroauric acid (HAuCl4) without any reducing agent. The morphologies of MoS2 and AuNPs-decorated MoS2 nanosheet were characterized by TEM, HRTEM, and AFM. The formation of hot spots greatly depended on the ratio of MoS2 and HAuCl4. When the concentration of HAuCl4 was 2.4 mM, the as-prepared AuNPs@MoS2-3 nanocomposite exhibited a high-quality SERS activity toward probe molecule due to the generated hot spots. The spot-to-spot SERS signals showed that the relative standard deviation (RSD) in the intensity of the main Raman vibration modes (1362, 1511, and 1652 cm(-1)) of Rhodamine 6G were about 20%, which displayed good uniformity and reproducibility. The AuNPs@MoS2-based substrate was reliable, sensitive, and reproducible, which showed great potential to be an excellent SERS substrate for biological and chemical detection.
Matching pursuits with time-frequency dictionaries
We introduce an algorithm, called matching pursuit, that decomposes any signal into a linear expansion of waveforms that are selected from a redundant dictionary of functions. These waveforms are chosen in order to best match the signal structures. Matching pursuits are general procedures to compute adaptive signal representations. With a dictionary of Gabor functions a matching pursuit defines an adaptive time-frequency transform. We derive a signal energy distribution in the time-frequency plane, which does not include interference terms, unlike Wigner and Cohen class distributions. A matching pursuit isolates the signal structures that are coherent with respect to a given dictionary. An application to pattern extraction from noisy signals is described. We compare a matching pursuit decomposition with a signal expansion over an optimized wavepacket orthonormal basis, selected with the algorithm of Coifman and Wickerhauser.
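A minimal sketch of the matching pursuit iteration described above, using a random unit-norm dictionary instead of the Gabor time-frequency dictionary used in the paper:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step pick the dictionary atom most
    correlated with the residual, record its coefficient, and subtract it."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
    return coeffs, residual

# Toy usage: a signal built from two atoms of a random redundant dictionary.
rng = np.random.default_rng(4)
D = rng.normal(size=(128, 512))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
x = 2.0 * D[:, 7] - 1.5 * D[:, 100]
c, r = matching_pursuit(x, D, n_iter=10)
print(np.nonzero(np.round(c, 2))[0], np.linalg.norm(r))
```

Orthogonal variants re-fit all selected coefficients at each step; the plain version above simply subtracts the chosen atom's contribution from the residual.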
Image Prototype Similarity Matching for Lymph Node Hemopathology
This paper describes general aspects of an automated expert system for Lymph Node Hemopathology, which utilizes methods of segmentation and classification for performing image prototype similarity matching (IPSM). The expert system consists of a set of representative prototype images of a large number of histologic features required to differentiate different lymph node pathologies. A query case, which may consist of one or more images, is compared against each prototype set and is assigned a degree of similarity by calculating a distance metric in a multidimensional feature space. Introductory motivation for this problem is presented together with technical details of the low-level segmentation algorithms utilized. As a representative application, results are presented for cases which are dominated by cytologic characteristics.
BrickRoad: a light-weight tool for spontaneous design of location-enhanced applications
It is difficult to design and test location-enhanced applications. A large part of this difficulty is due to the added complexity of supporting location. Wizard of Oz (WOz) has become an effective technique for the early stage design of location-enhanced applications because it allows designers to test an application prototype by simulating nonexistent components such as location sensing. However, existing WOz tools 1) require nontrivial effort from designers to specify how a prototype should behave before it can be tested with end users, and 2) support only limited control over application behavior during a test. BrickRoad is a WOz tool for spontaneous design of location-enhanced applications. It lowers the threshold to acquiring user feedback and exploring a design space. With BrickRoad, a designer does not need to specify any interaction logic and can experiment on-the-fly with different designs during testing. BrickRoad is a valuable complement to existing tool support for the early stage design of location-enhanced applications.
Correlations Among Endoscopic, Histologic and Serologic Diagnoses for the Assessment of Atrophic Gastritis
BACKGROUND Atrophic gastritis is a precancerous condition, which can be diagnosed by several methods. However, there is no consensus for the standard method. The aim of this study was to evaluate the correlations among endoscopic, histologic, and serologic findings for the diagnosis of atrophic gastritis. METHODS From March 2003 to August 2013, a total of 2,558 subjects were enrolled. Endoscopic atrophic gastritis was graded by Kimura-Takemoto classification and histological atrophic gastritis was assessed by updated Sydney system. Serological assessment of atrophic gastritis was based on serum pepsinogen test. RESULTS The serum pepsinogen I/II ratio showed a significant decreasing nature when the extent of atrophy increased (R(2)=0.837, P<0.001) and the cut-off value for distinguishing between presence and absence of endoscopic atrophic gastritis was 3.2. The serum pepsinogen I and pepsinogen I/II ratio were significantly lower when the histological atrophic gastritis progressed and the cut-off value was 3.0 for a diagnosis of histological atrophic gastritis. A significant correlation between endoscopic and histological atrophic gastritis was noted and the sensitivity and specificity of endoscopic diagnosis were 65.9% and 58.0% for antrum, 71.3% and 53.7% for corpus, respectively. CONCLUSIONS The endoscopic, histological, and serological atrophic gastritis showed relatively good correlations. However, as these three methods have a limitation, a multifactorial assessment might be needed to ameliorate the diagnostic accuracy of atrophic gastritis.
Wound healing activity of the fruit skin of Punica granatum.
The skin of the fruit and the bark of Punica granatum are used as a traditional remedy against diarrhea, dysentery, and intestinal parasites. The fruit skin extract of P. granatum was tested for its wound healing activity in rats using an excision wound model. The animals were divided into three groups of six each. The experimental group of animals was topically treated with P. granatum at a dose of 100 mg/kg every day for 15 days, while the controls and standard group animals were treated with petroleum jelly and mupirocin ointment, respectively. Phytochemical analysis of the extract revealed the presence of saponins, triterpenes, tannins, alkaloids, flavonoids, and cardiac glycosides. Extract-treated animals exhibited 95% reduction in the wound area when compared with controls (84%), which was statistically significant (P<.01). The extract-treated wounds were found to epithelize faster compared with controls. The hydroxyproline content of extract-treated animals was significantly higher than controls (P<.05). The fruit skin extract did not show any antimicrobial activity against the microrganisms tested. P. granatum promotes significant wound healing in rats and further evaluation of this activity in humans is suggested.
Relationships of Lower Lung Fibrosis, Pleural Disease, and Lung Mass with Occupational, Household, Neighborhood, and Slate Roof-Dense Area Residential Asbestos Exposure
This study aimed to evaluate the relationship between various asbestos exposure routes and asbestos-related disorders (ARDs). The study population comprised 11,186 residents of a metropolitan city who lived near asbestos factories, shipyards, or in slate roof-dense areas. ARDs were determined from chest X-rays indicating lower lung fibrosis (LLF), pleural disease (PD), and lung masses (LMs). Of the subjects, 11.2%, 10.4%, 67.2% and 8.3% were exposed to asbestos via occupational, household, neighborhood, and slate roof routes, respectively. The odds ratio (OR) of PD from household exposure (i.e., living with asbestos-producing workers) was 1.9 (95% confidence interval: 0.9-4.2), and those of LLF and PD from neighborhood exposure (i.e., residing near asbestos factories for <19 or >20 years, or near a mine) were 4.1 (2.8-5.8) and 4.8 (3.4-6.7), 8.3 (5.5-12.3) and 8.0 (5.5-11.6), and 4.8 (2.7-8.5) and 9.0 (5.6-14.4), respectively. The ORs of LLF, PD, and LM among those residing in slate-dense areas were 5.5 (3.3-9.0), 8.8 (5.6-13.8), and 20.5 (10.4-40.4), respectively. Substantial proportions of citizens residing in industrialized cities have potentially been exposed to asbestos, and various exposure routes are associated with the development of ARDs. Given the limitations of this study, including potential confounders such as socioeconomic status, further research is needed.
FORWARD AND INVERSE KINEMATICS STUDY OF INDUSTRIAL ROBOTS TAKING INTO ACCOUNT CONSTRUCTIVE AND FUNCTIONAL PARAMETERS' MODELING
Forward and inverse kinematic studies of industrial robots (IR) have been developed and presented in a large number of papers. However, even if the general mathematical formalization is usually almost correct (basically following the general Hartenberg-Denavit (H-D) conventions and the associated homogeneous transformation matrices), only a few papers present kinematic models ready to be directly implemented on a real-scale industrial robot, or able to evaluate the kinematic behavior of a specific real-scale IR model. This is usually due to some inconsistencies in modeling, the most frequent of which are: the incomplete formalization of the full set of constructive and functional parameters (which must be considered in the case of a specific real IR model); the avoidance of IR-specific design features (such as joint and link dimensions), leading to wrongly located reference frames used for expressing the homogeneous coordinate transformations; and the lack of validation procedures able to check the correctness of the mathematical models before their implementation in a real-scale IR controller. That is why this paper first presents a completely new approach to IR forward and inverse kinematics, in terms of analytical modeling, by taking into account the full set of constructive and functional parameters of two different IR models. Then, for both the direct and inverse mathematical models, the complete symbolic formalization and the full set of solutions for forward and inverse kinematics are presented for both IR types. In order to study the applicability of the mathematical models to real-scale IRs, two specific IR models were studied: an ABB serial-link open-chain kinematics IR and a Fanuc serial-link closed-chain kinematics IR. Numerical results were verified by cross-validation, using both analytical calculations and a constrained 3D CAD model used to geometrically verify the results. The parametric form of the model, elaborated in PTC Mathcad 14, allows quick reconfiguration for other robot models having similar configurations. The results can also be used for solving dynamics, path planning, and control problems for real-scale IRs.
Can response-adaptive randomization increase participation in acute stroke trials?
BACKGROUND AND PURPOSE A response-adaptive randomization (RAR) trial design actively adjusts the ratio of participants assigned to each trial arm, favoring the better performing treatment by using outcome data from participants already in the trial. Compared with a standard clinical trial, an RAR study design has the potential to improve patient participation in acute stroke trials. METHODS This cross-sectional randomized survey included adult emergency department patients, age≥18, without symptoms of stroke or other critical illness. A standardized protocol was used, and subjects were randomized to either an RAR or standard hypothetical acute stroke trial. After viewing the video describing the hypothetical trial (http://youtu.be/cKIWduCaPZc), reviewing the consent form, and having questions answered, subjects indicated whether they would consent to the trial. A multivariable logistic regression model was fitted to estimate the impact of RAR while controlling for demographic factors and patient understanding of the design. RESULTS A total of 418 subjects (210 standard and 208 RAR) were enrolled. All baseline characteristics were balanced between groups. There was significantly higher participation in the RAR trial (67.3%) versus the standard trial (54.5%), absolute increase: 12.8% (95% confidence interval, 3.7-22.2). The RAR group had a higher odds ratio of agreeing to research (odds ratio, 1.89; 95% confidence interval, 1.2-2.9) while adjusting for patient level factors. Trial designs were generally well understood by the participants. CONCLUSIONS The hypothetical RAR trial attracted more research participation than standard randomization. RAR has the potential to increase recruitment and offer benefit to future trial participants.
Complex dynamics of hepatitis B virus resistance to adefovir.
UNLABELLED In patients with hepatitis B e antigen-negative chronic hepatitis B, adefovir dipivoxil administration selects variants bearing reverse transcriptase rtN236T and/or rtA181V/T substitutions in 29% of cases after 5 years. The aim of this study was to characterize the dynamics of adefovir-resistant variant populations during adefovir monotherapy in order to better understand the molecular mechanisms underlying hepatitis B virus resistance to this class of nucleotide analogues. Patients included in a 240-week clinical trial of adefovir monotherapy who developed adefovir resistance-associated substitutions were studied. The dynamics of hepatitis B virus populations were analyzed over time, after generating nearly 4,000 full-length reverse transcriptase sequences, and compared with the replication kinetics of the virus during therapy. Whatever the viral kinetics pattern, adefovir resistance was characterized by exclusive detection of a dominant wild-type, adefovir-sensitive variant population at baseline and late and gradual selection by adefovir of several coexisting resistant viral populations, defined by the presence of amino acid substitutions at position rt236, position rt181, or both. The gain in fitness of one or the other of these resistant populations during adefovir administration was never associated with the selection of additional amino acid substitutions in the reverse transcriptase. CONCLUSION Our results suggest that adefovir administration selects poorly fit preexisting or emerging viral populations with low-level adefovir resistance, which subsequently compete to fill the replication space. Viral kinetics depends on the initial virological response to adefovir. Lamivudine add-on restores some antiviral efficacy, but adefovir-resistant variants remain predominant. Whether these adefovir resistance-associated substitutions may confer cross-resistance to tenofovir in vivo will need to be determined.
Stopped Object Detection by Learning Foreground Model in Videos
The automatic detection of objects that are abandoned or removed in a video scene is an interesting area of computer vision, with key applications in video surveillance. Forgotten or stolen luggage in train and airport stations and irregularly parked vehicles are examples that concern significant issues, such as the fight against terrorism and crime, and public safety. Both issues involve the basic task of detecting static regions in the scene. We address this problem by introducing a model-based framework to segment static foreground objects against moving foreground objects in single view sequences taken from stationary cameras. An image sequence model, obtained by learning in a self-organizing neural network image sequence variations, seen as trajectories of pixels in time, is adopted within the model-based framework. Experimental results on real video sequences and comparisons with existing approaches show the accuracy of the proposed stopped object detection approach.
Learning Spectral Descriptors for Deformable Shape Correspondence
Informative and discriminative feature descriptors play a fundamental role in deformable shape analysis. For example, they have been successfully employed in correspondence, registration, and retrieval tasks. In recent years, significant attention has been devoted to descriptors obtained from the spectral decomposition of the Laplace-Beltrami operator associated with the shape. Notable examples in this family are the heat kernel signature (HKS) and the recently introduced wave kernel signature (WKS). The Laplacian-based descriptors achieve state-of-the-art performance in numerous shape analysis tasks; they are computationally efficient, isometry-invariant by construction, and can gracefully cope with a variety of transformations. In this paper, we formulate a generic family of parametric spectral descriptors. We argue that to be optimized for a specific task, the descriptor should take into account the statistics of the corpus of shapes to which it is applied (the "signal") and those of the class of transformations to which it is made insensitive (the "noise"). While such statistics are hard to model axiomatically, they can be learned from examples. Following the spirit of the Wiener filter in signal processing, we show a learning scheme for the construction of optimized spectral descriptors and relate it to Mahalanobis metric learning. The superiority of the proposed approach in generating correspondences is demonstrated on synthetic and scanned human figures. We also show that the learned descriptors are robust enough to be learned on synthetic data and transferred successfully to scanned shapes.
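As a point of reference for the spectral descriptors discussed above, the fixed-form heat kernel signature can be computed directly from a Laplacian eigen-decomposition. The sketch below uses a small random graph Laplacian as a stand-in for a mesh Laplace-Beltrami operator; the learned descriptors in the paper replace this fixed exponential weighting with one optimized from data.

```python
import numpy as np

def heat_kernel_signature(evals, evecs, times):
    """HKS from a Laplace-Beltrami eigen-decomposition: for each vertex i and
    diffusion time t, HKS(i, t) = sum_k exp(-lambda_k * t) * phi_k(i)^2."""
    return np.stack([(np.exp(-evals * t) * evecs ** 2).sum(axis=1) for t in times], axis=1)

# Toy symmetric graph Laplacian as a stand-in for the mesh Laplacian.
rng = np.random.default_rng(5)
W = rng.random((30, 30)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
evals, evecs = np.linalg.eigh(L)
hks = heat_kernel_signature(evals, evecs, times=np.geomspace(0.01, 1.0, 8))
print(hks.shape)  # (30 vertices, 8 time scales)
```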
Question Answering System for Non-factoid Type Questions and Automatic Evaluation based on BE Method
In this paper, we describe an answer extraction method for non-factoid questions. We classified non-factoid questions into three types: why-type, definition-type, and how-type. We analyzed each type of question and developed answer extraction patterns for these types of questions. For automatic evaluation, we developed a BE-based evaluation tool for the answers to questions. The BE method was originally proposed by Hovy et al., and we applied it to question answering evaluation. Evaluation is done by comparing the BEs of a system answer with the BEs of the correct answers.
A logistics demand forecasting model based on Grey neural network
Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.
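For readers unfamiliar with grey models, the classic univariate GM(1,1) below illustrates the "grey" half of such a hybrid: accumulate the series, fit the whitening equation by least squares, and forecast. The freight-volume numbers are hypothetical, and the paper's GNNM(1,8) extends this idea to one target series plus seven influencing factors combined with a neural network.

```python
import numpy as np

def gm11_forecast(x0, horizon=3):
    """Classic GM(1,1) grey forecasting model for a short univariate series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # develop coefficient, grey input
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return x0_hat[len(x0):]                           # out-of-sample forecasts

freight = [62.1, 66.8, 71.9, 77.4, 83.2, 89.5]        # hypothetical freight-volume series
print(gm11_forecast(freight, horizon=3))
```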
Comparison of haemodynamic changes in patients undergoing unilateral and bilateral spinal anaesthesia.
OBJECTIVE To assess the haemodynamic changes in patients receiving unilateral and bilateral spinal anaesthesia with their pre-anaesthesia recordings. STUDY DESIGN Quasi-experimental study. PLACE AND DURATION OF STUDY Main Operation Theater, Liaquat National Hospital, Karachi, from May 2006 to February 2007. METHODOLOGY Sixty patients meeting the inclusion criteria were randomly allocated in two groups of 30 patients each. One and a half ml of 0.75% hyperbaric bupivacaine was injected with free flow of cerebrospinal fluid using a 23 gauge quincke needle. Lumbar puncture was performed in the sitting position at 3 - 4 or 4 - 5 lumbar interspace. Patients were then assigned to the supine or lateral decubitus position for 10 minutes. Heart rate, systolic, mean and diastolic blood pressures of patients were recorded with their pre-anaesthesia readings in the 1st, 5th, 15th, 30th and then at every 15th minute till the end of procedure. Recovery room readings were also taken. RESULTS The systolic, mean and diastolic blood pressure changes were significant in both groups. But from 1st minute to recovery room, statistically significant difference (p < 0.05) was found at each time interval, the unilateral groups (group A) being more stable with respect to pre-anaesthesia readings. The decrease in heart rate was comparable in both groups. CONCLUSION Unilateral spinal anaesthesia was associated with a more stable cardiovascular profile, therefore, it is a valuable technique for high risk patients.
LabVIEW based system for PID tuning and implementation for a flow control loop
PID controller is widely used in industries for control applications. Tuning of PID controller is very much essential before its implementation. There are different methods of PID tuning such as Ziegler Nichols tuning method, Internal Model Control method, Cohen Coon tuning method, Tyreus-Luyben method, Chein-Hrones-Reswick method, etc. The focus of the work in this paper is to identify the system model for a flow control loop and implement PID controller in MATLAB for simulation study and in LabVIEW for real-time experimentation. Comparative study of three tuning methods viz. ZN, IMC and CC were carried out. Further the work is to appropriately tune the PID parameters. The flow control loop was interfaced to a computer via NI-DAQ card and PID was implemented using LabVIEW. The simulation and real-time results show that IMC tuning method gives better result than ZN and CC tuning methods.
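As an example of one of the compared tuning rules, the closed-loop Ziegler-Nichols table can be applied once the ultimate gain and ultimate period of the loop are known; the values below are the textbook PID settings, with an invented flow-loop measurement for illustration.

```python
def ziegler_nichols_pid(Ku, Tu):
    """Classic closed-loop Ziegler-Nichols rules: given the ultimate gain Ku and
    ultimate period Tu found at sustained oscillation, return PID settings."""
    Kp = 0.6 * Ku
    Ti = Tu / 2.0
    Td = Tu / 8.0
    return {"Kp": Kp, "Ki": Kp / Ti, "Kd": Kp * Td}

# Hypothetical result of a sustained-oscillation test on the flow loop.
print(ziegler_nichols_pid(Ku=4.0, Tu=12.0))
```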
The curvelet transform for fusion of very-high resolution multi-spectral and panchromatic images
This paper presents a novel image fusion method, suitable for pan-sharpening of multispectral (MS) bands, based on multi-resolution analysis (MRA). The low-resolution MS bands are sharpened by injecting high-pass directional details extracted from the high-resolution panchromatic (Pan) image by means of the curvelet transform, which is a non-separable MRA whose basis functions are directional edges with progressively increasing resolution. The advantage with respect to conventional separable MRA, either decimated or not, is twofold: directional detail coefficients matching image edges may be preliminarily soft-thresholded to achieve denoising better than in the separable wavelet domain; modeling of the relationships between high-resolution detail coefficients of MS bands and of the Pan image is more fitting, being carried out in a directional wavelet domain. Experiments carried out on a very-high resolution MS + Pan QuickBird image show that the proposed curvelet method quantitatively outperforms state-of-the-art image fusion methods, in terms of geometric, radiometric, and spectral fidelity.
Effects of TCDD on the expression of nuclear encoded mitochondrial genes.
Generation of mitochondrial reactive oxygen species (ROS) can be perturbed following exposure to environmental chemicals such as 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Reports indicate that the aryl hydrocarbon receptor (AhR) mediates TCDD-induced sustained hepatic oxidative stress by decreasing hepatic ATP levels and through hyperpolarization of the inner mitochondrial membrane. To further elucidate the effects of TCDD on the mitochondria, high-throughput quantitative real-time PCR (HTP-QRTPCR) was used to evaluate the expression of 90 nuclear genes encoding mitochondrial proteins involved in electron transport, oxidative phosphorylation, uncoupling, and associated chaperones. HTP-QRTPCR analysis of time-course (30 μg/kg TCDD at 2, 4, 8, 12, 18, 24, 72, and 168 h) liver samples obtained from orally gavaged immature, ovariectomized C57BL/6 mice identified 54 differentially expressed genes (|fold change| > 1.5 and P-value < 0.1). Of these, 8 exhibited a sigmoidal or exponential dose-response profile (0.03 to 300 μg/kg TCDD) at 4, 24 or 72 h. Dose-responsive genes encoded proteins associated with electron transport chain (ETC) complexes I (NADH dehydrogenase), III (cytochrome c reductase), IV (cytochrome c oxidase), and V (ATP synthase) and could be generally categorized as having proton gradient, ATP synthesis, and chaperone activities. In contrast, transcript levels of ETC complex II, succinate dehydrogenase, remained unchanged. Putative dioxin response elements were computationally found in the promoter regions of all 8 dose-responsive genes. This high-throughput approach suggests that TCDD alters the expression of genes associated with mitochondrial function, which may contribute to TCDD-elicited mitochondrial toxicity.
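For illustration only, the snippet below reproduces the kind of screening and dose-response modelling described above: it applies the |fold change| > 1.5 and P < 0.1 filter to a made-up expression table and fits a sigmoidal (Hill-type) curve to hypothetical dose-response values over the 0.03 to 300 μg/kg range; none of the numbers come from the study.

```python
# Illustrative sketch only: filter genes by the |fold change| > 1.5 and
# P < 0.1 criteria quoted in the abstract, then fit a sigmoidal (Hill-type)
# dose-response curve to one hypothetical gene. Data values are made up.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

expr = pd.DataFrame({
    "gene": ["Ndufa1", "Uqcrc1", "Cox4i1", "Atp5a1", "Sdha"],
    "fold_change": [1.8, -1.7, 2.1, 1.2, 1.0],
    "p_value": [0.02, 0.08, 0.01, 0.30, 0.90],
})
hits = expr[(expr["fold_change"].abs() > 1.5) & (expr["p_value"] < 0.1)]
print("differentially expressed:", hits["gene"].tolist())

def hill(dose, bottom, top, ed50, n):
    # Four-parameter sigmoidal dose-response model.
    return bottom + (top - bottom) / (1.0 + (ed50 / dose) ** n)

doses = np.array([0.03, 0.3, 3.0, 30.0, 300.0])    # μg/kg, matching the study's dose range
response = np.array([1.0, 1.1, 1.6, 2.3, 2.5])     # hypothetical relative expression
params, _ = curve_fit(hill, doses, response, p0=[1.0, 2.5, 3.0, 1.0], maxfev=10000)
print("estimated ED50 ≈", round(params[2], 2), "μg/kg")
```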
What every manager needs to know about project management.
This paper offers ten common-sense principles that will help project managers define goals; establish checkpoints, schedules, and resource requirements; motivate and empower team members; facilitate communication; and manage conflict.
Near optimal placement of virtual network functions
Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios.
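As a point of reference for the placement problem described above, here is a toy greedy heuristic in Python that trades off client-to-function distance cost against per-instance setup cost on a small distance matrix. It is a plain uncapacitated facility-location-style baseline with made-up distances, not the near-optimal, capacity-aware approximation algorithms developed in the paper.

```python
# Toy greedy sketch of the virtual-function placement trade-off: each opened
# function instance costs `setup_cost`, and each client pays the distance to
# its nearest open instance. Distances are hypothetical.

dist = [
    [0, 2, 3, 5, 4],
    [2, 0, 1, 4, 3],
    [3, 1, 0, 2, 3],
    [5, 4, 2, 0, 1],
    [4, 3, 3, 1, 0],
]
setup_cost = 3.0
nodes = range(len(dist))

def total_cost(open_sites):
    # Distance cost of serving every client from its nearest open site,
    # plus a fixed setup cost per opened function instance.
    distance_cost = sum(min(dist[c][s] for s in open_sites) for c in nodes)
    return distance_cost + setup_cost * len(open_sites)

# Greedy: keep opening the site that most reduces the total cost.
open_sites = []
while len(open_sites) < len(dist):
    cost, site = min(
        (total_cost(open_sites + [s]), s) for s in nodes if s not in open_sites
    )
    if open_sites and cost >= total_cost(open_sites):
        break
    open_sites.append(site)

print("open NFV nodes:", open_sites, "total cost:", total_cost(open_sites))
```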
The role of home literacy practices in preschool children's language and emergent literacy skills.
This study examined how 4 specific measures of home literacy practices (i.e., shared book reading frequency, maternal book reading strategies, child's enjoyment of reading, and maternal sensitivity) and a global measure of the quality and responsiveness of the home environment during the preschool years predicted children's language and emergent literacy skills between the ages of 3 and 5 years. Study participants were 72 African American children and their mothers or primary guardians, primarily from low-income families, whose home literacy environment and development had been followed since infancy. Annually, between 18 months and 5 years of age, the children's mothers were interviewed about how frequently they read to their child and how much their child enjoyed being read to, and the overall quality and responsiveness of the home environment were observed. Mothers were also observed reading to their child once a year at 2, 3, and 4 years of age, and maternal sensitivity and the types of maternal book reading strategies were coded. Children's receptive and expressive language and vocabulary were assessed annually between 3 years of age and kindergarten entry, and emergent literacy skills were assessed at 4 years and at kindergarten entry. The specific home literacy practices showed moderate to large correlations with each other but only a few significant associations with the language and literacy outcomes after controlling for maternal education, maternal reading skills, and the child's gender. The global measure of overall responsiveness and support of the home environment was the strongest predictor of children's language and early literacy skills and contributed over and above the specific literacy practice measures in predicting children's early language and literacy development.
A Comparison of Push and Pull Techniques for AJAX
AJAX applications are designed to have high user interactivity and low user-perceived latency. Real-time dynamic Web data such as news headlines, stock tickers, and auction updates need to be propagated to users as soon as possible. However, AJAX still suffers from the limitations of the Web's request/response architecture, which prevents servers from pushing real-time dynamic Web data. Such applications usually use a pull style to obtain the latest updates, where the client actively requests the changes based on a predefined interval. It is possible to overcome this limitation by adopting a push style of interaction, where the server broadcasts data when a change occurs on the server side. Both options have their own trade-offs. This paper explores the fundamental limits of browser-based applications and analyzes push solutions for AJAX technology. It also presents the results of an empirical study comparing push and pull.
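The trade-off between pull and push can be sketched with a back-of-envelope model: under pull, a newly published update waits on average about half a polling interval before the next request picks it up, while push pays only the network latency but holds a connection open. The Python snippet below works through this with assumed numbers; it is illustrative and unrelated to the paper's measurements.

```python
# Back-of-envelope comparison of pull (periodic polling) vs push for data
# freshness and request volume. Numbers are assumptions, not measurements.

def pull_stats(poll_interval_s, latency_s, session_s):
    avg_staleness = poll_interval_s / 2.0 + latency_s   # update waits ~half an interval on average
    requests = session_s / poll_interval_s              # one request per polling interval
    return avg_staleness, requests

def push_stats(latency_s):
    return latency_s, 1  # one long-lived connection; data arrives after one latency

latency, session = 0.05, 600          # 50 ms latency, 10-minute session (assumed)
for interval in (1, 5, 15):
    staleness, reqs = pull_stats(interval, latency, session)
    print(f"pull every {interval:>2}s -> staleness ≈ {staleness:.2f}s, {reqs:.0f} requests")
print(f"push          -> staleness ≈ {push_stats(latency)[0]:.2f}s, 1 connection")
```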
Design on elliptical lens monopulse antenna
Monopulse antennas can be used for accurate and rapid angle estimation in radar systems [1]. This paper presents a new kind of monopulse antenna based on a two-dimensional elliptical lens. As an example, a patch-fed elliptical lens antenna is designed at 35 GHz. Simulations show that the designed lens antenna exhibits clean and symmetrical patterns on both the sum and difference ports. A very deep null is achieved in the difference pattern because of the circuit symmetry.
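To show what the sum and difference ports mean in practice, the following numpy sketch uses a textbook two-feed monopulse model: the sum port adds the responses of two displaced feeds and the difference port subtracts them, giving a deep null at boresight. The feed displacement is an assumed value, and the model does not simulate the elliptical lens itself.

```python
# Generic two-feed monopulse illustration: the sum port adds the two element
# responses and the difference port subtracts them, producing a null at
# boresight. Textbook model only; it does not model the elliptical lens.
import numpy as np

theta = np.radians(np.linspace(-30, 30, 601))   # observation angle
freq = 35e9                                     # 35 GHz, as in the paper's design example
wavelength = 3e8 / freq
d = 0.6 * wavelength                            # assumed feed displacement

k = 2 * np.pi / wavelength
e1 = np.exp(1j * k * (d / 2) * np.sin(theta))   # feed displaced by +d/2
e2 = np.exp(-1j * k * (d / 2) * np.sin(theta))  # feed displaced by -d/2

sum_pattern = np.abs(e1 + e2)
diff_pattern = np.abs(e1 - e2)

boresight = len(theta) // 2                     # theta = 0
print("sum at boresight:       ", round(sum_pattern[boresight], 3))    # maximum
print("difference at boresight:", round(diff_pattern[boresight], 6))   # deep null (~0)
```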
Improving retrieval performance by relevance feedback
Relevance feedback is an automatic process, introduced over 20 years ago, designed to produce improved query formulations following an initial retrieval operation. The principal relevance feedback methods described over the years are examined briefly, and evaluation data are included to demonstrate the effectiveness of the various methods. Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback.
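One classic relevance feedback formulation often discussed in this context is the Rocchio method, in which the query vector is moved toward the centroid of documents judged relevant and away from the centroid of non-relevant ones. Below is a minimal numpy sketch with conventional, assumed weights (alpha, beta, gamma); it is a generic illustration rather than the specific variants evaluated in the paper.

```python
# Minimal Rocchio relevance feedback sketch: the reformulated query moves
# toward the centroid of relevant documents and away from the centroid of
# non-relevant ones. alpha/beta/gamma are conventional, assumed weights.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)   # negative term weights are usually dropped

query = np.array([1.0, 0.0, 0.5, 0.0])            # toy tf-idf weights over 4 terms
relevant = np.array([[0.9, 0.8, 0.4, 0.0],
                     [1.0, 0.6, 0.3, 0.1]])
nonrelevant = np.array([[0.0, 0.0, 0.9, 1.0]])
print(rocchio(query, relevant, nonrelevant))
```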
Young Adults at Risk for Excess Alcohol Consumption Are Often Not Asked or Counseled About Drinking Alcohol
BACKGROUND Excessive alcohol consumption is most widespread among young adults. Practice guidelines recommend screening and physician advice, which could help address this common cause of injury and premature death. OBJECTIVE To assess the proportion of persons ages 18–39 who, in the past year, saw a physician and were asked about their drinking and advised what drinking levels pose health risk, and whether this differed by age or by whether respondents exceeded low-risk drinking guidelines [daily (>4 drinks for men/>3 for women) or weekly (>14 for men/>7 for women)]. DESIGN Survey of young adults selected from a national internet panel established using random-digit-dial telephone techniques. PARTICIPANTS Adults age 18–39 who ever drank alcohol, n = 3,409 from the internet panel and n = 612 non-panel telephone respondents. MEASUREMENTS Respondents were asked whether they saw a doctor in the past year; those who did see a doctor were asked whether a doctor asked about their drinking, advised them about safe drinking levels, or counseled them to reduce drinking. RESULTS Of respondents, 67% saw a physician in the past year, but only 14% of those exceeding guidelines were asked and advised about risky drinking patterns. Persons 18–25 were the most likely to exceed guidelines (68% vs. 56%, p < 0.001) but were least often asked about drinking (34% vs. 54%, p < 0.001). CONCLUSIONS Despite practice guidelines, few young adults are asked and advised by physicians about excessive alcohol consumption. Physicians should routinely ask all adults about their drinking and offer advice about levels that pose health risk, particularly to young adults.
A Microstrip Ultra-Wideband Bandpass Filter With Cascaded Broadband Bandpass and Bandstop Filters
This paper develops a novel ultra-wideband bandpass filter by cascading a broadband bandpass filter with a broadband bandstop filter. Properly selected impedances of the transmission lines realize the broadband bandpass and bandstop sections and make independent designs possible. Detailed design and synthesis procedures are provided; moreover, the agreement between measured and theoretically predicted results demonstrates the feasibility of the proposed filter. Due to its simple structure, the ultra-wideband bandpass filter introduced in this paper is suitable for integration in a single-chip circuit or implementation on printed circuit boards.
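The cascading principle can be illustrated with a conceptual digital analogue: when two filters are cascaded, the composite frequency response is the product of the individual responses, so the bandpass and bandstop sections can be designed independently. The scipy sketch below uses arbitrary Butterworth prototypes and an assumed sampling rate; it is not the microstrip transmission-line implementation described in the paper.

```python
# Conceptual analogue of the cascade idea using digital IIR prototypes: the
# composite response of two cascaded filters is the product of their
# individual responses. Band edges and sampling rate are assumptions chosen
# only to illustrate the principle.
import numpy as np
from scipy import signal

fs = 40e9                                             # assumed sampling rate for the sketch
bp = signal.butter(4, [3.1e9, 10.6e9], btype="bandpass", fs=fs, output="sos")
bs = signal.butter(4, [11.5e9, 14.0e9], btype="bandstop", fs=fs, output="sos")

f = np.linspace(1e9, 18e9, 400)
_, h_bp = signal.sosfreqz(bp, worN=f, fs=fs)
_, h_bs = signal.sosfreqz(bs, worN=f, fs=fs)
h_cascade = h_bp * h_bs                               # cascade = product of responses

for probe in (2e9, 7e9, 12.5e9):
    idx = np.argmin(np.abs(f - probe))
    print(f"{probe/1e9:4.1f} GHz -> {20*np.log10(abs(h_cascade[idx]) + 1e-12):6.1f} dB")
```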
A longitudinal view of HTTP video streaming performance
This paper investigates HTTP streaming traffic from an ISP perspective. As streaming traffic now represents nearly half of residential Internet traffic, understanding its characteristics is important. We focus on two major video sharing sites, YouTube and DailyMotion. We use ten packet traces from a residential ISP network, five for ADSL and five for FTTH customers, captured between 2008 and 2011. Covering a time span of four years allows us to identify changes in the service infrastructure of some providers. From the packet traces, we infer for each streaming flow the video characteristics, such as duration and encoding rate, as well as TCP flow characteristics. Additional information from BGP routing tables allows us to identify the originating Autonomous System (AS). With this data, we can uncover the server-side distribution policy, the impact of the serving AS on the flow characteristics, and the impact of reception quality on user behavior. A unique aspect of our work is how we measure the reception quality of the video and its impact on viewing behavior. We see that not even half of the videos are fully downloaded. For short videos of 3 minutes or less, users stop downloading at any point, while for videos longer than 3 minutes, users either stop downloading early on or fully download the video. When the reception quality deteriorates, fewer videos are fully downloaded, and the decision to interrupt the download is taken earlier. We conclude that (i) the video sharing sites have major control over the delivery of the video and its reception quality through DNS resolution and server-side streaming policy, and (ii) only half of the videos are fully downloaded, and this fraction drops dramatically when the video reception quality is bad.
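A simple way to operationalize "reception quality" from per-flow records, in the spirit of the analysis above, is to compare the mean download throughput of a flow with the video's encoding rate and to flag whether the bytes received cover the full video. The pandas sketch below does exactly that on made-up records; the column names, threshold, and values are assumptions, not the paper's exact metric.

```python
# Hypothetical per-flow records: the reception-quality proxy compares mean
# download throughput to the video encoding rate, and "fully downloaded"
# compares bytes received to the estimated video size. All values are made up.
import pandas as pd

flows = pd.DataFrame({
    "video_s": [120, 450, 300],              # video duration (s)
    "encoding_kbps": [700, 1200, 900],       # inferred encoding rate
    "bytes_down": [10.6e6, 30.0e6, 33.8e6],  # bytes actually received
    "download_s": [95, 210, 330],            # flow duration (s)
})

video_bytes = flows["video_s"] * flows["encoding_kbps"] * 1000 / 8
flows["fully_downloaded"] = flows["bytes_down"] >= 0.98 * video_bytes   # 2% tolerance, assumed
throughput_kbps = flows["bytes_down"] * 8 / 1000 / flows["download_s"]
flows["quality_ratio"] = throughput_kbps / flows["encoding_kbps"]       # <1 means slower than real time

print(flows[["fully_downloaded", "quality_ratio"]].round(2))
```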
word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding method
The word2vec software of Tomas Mikolov and colleagues has gained a lot of traction lately, and provides state-of-the-art word embeddings. The learning models behind the software are described in two research papers [1, 2]. We found the description of the models in these papers to be somewhat cryptic and hard to follow. While the motivations and presentation may be obvious to the neural-networks language-modeling crowd, we had to struggle quite a bit to figure out the rationale behind the equations. This note is an attempt to explain equation (4) (negative sampling) in "Distributed Representations of Words and Phrases and their Compositionality" by Mikolov et al. The departure point of the paper is the skip-gram model. In this model we are given a corpus of words w and their contexts c. We consider the conditional probabilities p(c|w), and given a corpus Text, the goal is to set the parameters θ of p(c|w; θ) so as to maximize the corpus probability: $\arg\max_{\theta} \prod_{w \in \mathrm{Text}} \prod_{c \in C(w)} p(c \mid w; \theta)$ (1). In this equation, C(w) is the set of contexts of word w. Alternatively: $\arg\max_{\theta} \prod_{(w,c) \in D} p(c \mid w; \theta)$ (2), where D is the set of all word-context pairs we extract from the text.
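As a concrete companion to the objective discussed in the note, here is a toy numpy sketch of one stochastic-gradient step on the negative-sampling objective log σ(v_c · v_w) + Σ_i log σ(−v_{c_i} · v_w) for a single (word, context) pair with k negative samples; vocabulary size, dimensionality, learning rate, and the sampled indices are all illustrative.

```python
# Toy numpy sketch of one SGD (ascent) step on the negative-sampling objective
#   log sigmoid(v_c . v_w) + sum_i log sigmoid(-v_{c_i} . v_w)
# for a single (word, context) pair with k negative samples.
import numpy as np

rng = np.random.default_rng(0)
V, dim, k, lr = 1000, 50, 5, 0.025
W = (rng.random((V, dim)) - 0.5) / dim    # word ("input") vectors
C = np.zeros((V, dim))                    # context ("output") vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(word, context, negatives):
    vw = W[word]
    grad_w = np.zeros(dim)
    # Positive pair: push the context vector and the word vector together.
    g = 1.0 - sigmoid(C[context] @ vw)
    grad_w += g * C[context]
    C[context] += lr * g * vw
    # Negative samples: push the sampled context vectors away from the word vector.
    for neg in negatives:
        g = -sigmoid(C[neg] @ vw)
        grad_w += g * C[neg]
        C[neg] += lr * g * vw
    W[word] += lr * grad_w

word, context = 7, 42                      # toy indices
negatives = rng.integers(0, V, size=k)     # toy "unigram-sampled" negatives
sgns_step(word, context, negatives)
print("updated word vector norm:", np.linalg.norm(W[word]).round(4))
```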
Deep Learning in Medical Image Analysis.
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.