title | abstract
---|---
Broadband composite right/left-handed coplanar waveguide power splitters with arbitrary phase responses and balun and antenna applications | This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180° CRLH CPW power splitter with a phase error of less than 10° and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180° delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180° CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band. |
Short term load forecasting using Multiple Linear Regression | In this paper we present an investigation of short term (up to 24 hours) load forecasting of the demand for the South Sulawesi (Sulawesi Island, Indonesia) power system, using a multiple linear regression (MLR) method. After a brief analytical discussion of the technique, the usage of polynomial terms and the steps to compose the MLR model are explained. The implementation of the MLR algorithm using a commercially available tool such as Microsoft Excel is also discussed. As a case study, historical data consisting of hourly load demand and temperatures of the South Sulawesi electrical system are used to forecast the short term load. The results are presented and analysed, and the potential for improvement using alternative methods is also discussed. |
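A minimal sketch of the regression step this abstract describes: an ordinary-least-squares fit of hourly load against temperature with a polynomial term. The data, the quadratic degree, and the variable names are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical hourly history: temperature (deg C) and load demand (MW).
temp = np.array([26.0, 27.5, 29.0, 31.0, 30.0, 28.0])
load = np.array([310.0, 325.0, 350.0, 390.0, 372.0, 338.0])

# Design matrix with an intercept plus polynomial temperature terms,
# as the abstract suggests (here linear + quadratic, an assumption).
X = np.column_stack([np.ones_like(temp), temp, temp**2])

# Ordinary least squares estimate of the MLR coefficients.
beta, *_ = np.linalg.lstsq(X, load, rcond=None)

# Forecast the load for a predicted temperature of 30.5 deg C.
t_new = 30.5
forecast = beta @ np.array([1.0, t_new, t_new**2])
print(f"forecast: {forecast:.1f} MW")
```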
A double-blind randomized crossover study of oral thalidomide versus placebo for androgen dependent prostate cancer treated with intermittent androgen ablation. | PURPOSE
We determined whether thalidomide can prolong progression-free survival in men with biochemically recurrent prostate cancer treated with limited androgen deprivation therapy.
MATERIALS AND METHODS
A total of 159 patients were enrolled in a double-blind randomized trial to determine if thalidomide can improve the efficacy of a gonadotropin-releasing hormone agonist in hormone responsive patients with an increasing prostate specific antigen after primary definitive therapy for prostate cancer. Patients were randomized to 6 months of gonadotropin-releasing hormone agonist followed by 200 mg per day oral thalidomide or placebo (oral phase A). At the time of prostate specific antigen progression gonadotropin-releasing hormone agonist was restarted for 6 additional months. Patients were then crossed over to the opposite drug and were treated until prostate specific antigen progression (oral phase B). Testosterone and dihydrotestosterone were likewise monitored throughout the study.
RESULTS
During oral phase A the median time to prostate specific antigen progression was 15 months for the thalidomide group compared to 9.6 months on placebo (p = 0.21). The median time to prostate specific antigen progression during oral phase B for the thalidomide group was 17.1 vs 6.6 months on placebo (p = 0.0002). No differences in time to serum testosterone normalization between the thalidomide and placebo arms were documented during oral phase A and oral phase B. Thalidomide was tolerable although dose reductions occurred in 47% (58 of 124) of patients.
CONCLUSIONS
Despite thalidomide having no effect on testosterone normalization, there was a clear effect on prostate specific antigen progression during oral phase B. This is the first study to our knowledge to demonstrate the effects of thalidomide using intermittent hormonal therapy. |
3-D radar imaging using extended 2-D range migration technique | A three-dimensional (3-D) imaging system is implemented by employing the 2-D range migration algorithm (RMA) for frequency modulated continuous wave synthetic aperture radar (FMCW-SAR). The backscattered data of a 1-D synthetic aperture at specific altitudes are coherently integrated to form 2-D images. These 2-D images at different altitudes are stitched vertically to form a 3-D image. Numerical simulations for a near-field scenario are also presented to validate the proposed algorithm. |
Introducing 'Bones': a parallelizing source-to-source compiler based on algorithmic skeletons | Recent advances in multi-core and many-core processors require programmers to exploit an increasing amount of parallelism from their applications. Data parallel languages such as CUDA and OpenCL make it possible to take advantage of such processors, but still require a large amount of effort from programmers.
A number of parallelizing source-to-source compilers have recently been developed to ease programming of multi-core and many-core processors. This work presents and evaluates a number of such tools, focused in particular on C-to-CUDA transformations targeting GPUs. We compare these tools both qualitatively and quantitatively to each other and identify their strengths and weaknesses.
In this paper, we address the weaknesses by presenting a new classification of algorithms. This classification is used in a new source-to-source compiler, which is based on the algorithmic skeletons technique. The compiler generates target code based on skeletons of parallel structures, which can be seen as parameterisable library implementations for a set of algorithm classes. We furthermore demonstrate that the presented compiler requires only minor modifications to the original sequential source code, generates readable code for further fine-tuning, and delivers superior performance compared to other tools for a set of 8 image processing kernels. |
FallDeFi: Ubiquitous Fall Detection using Commodity Wi-Fi Devices | Falling or tripping among elderly people living on their own is recognized as a major public health worry that can even lead to death. Fall detection systems that alert caregivers, family members or neighbours can potentially save lives. In the past decade, an extensive amount of research has been carried out to develop fall detection systems based on a range of different detection approaches, i.e., wearable and non-wearable sensing and detection technologies. In this paper, we consider an emerging non-wearable fall detection approach based on WiFi Channel State Information (CSI). Previous CSI based fall detection solutions have considered only time domain approaches. Here, we take an altogether different direction: time-frequency analysis, as used in radar fall detection. We use the conventional Short-Time Fourier Transform (STFT) to extract time-frequency features and a sequential forward selection algorithm to single out features that are resilient to environment changes while maintaining a high fall detection rate. When our system is pre-trained, it has a 93% accuracy, a 12% and 15% improvement over RTFall and CARM respectively. When the environment changes, our system still has an average accuracy close to 80%, an improvement of 20% to 30% over RTFall and 5% to 15% over CARM. |
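The STFT front-end is standard enough to sketch. The following assumes SciPy, a placeholder CSI amplitude stream, and two illustrative time-frequency features; the paper's actual feature set and its sequential forward selection step are not reproduced.

```python
import numpy as np
from scipy.signal import stft

fs = 1000                                  # assumed CSI sampling rate (Hz)
rng = np.random.default_rng(0)
csi_amp = rng.normal(size=10 * fs)         # placeholder CSI amplitude stream

# Short-Time Fourier Transform as the time-frequency front-end.
f, t, Zxx = stft(csi_amp, fs=fs, nperseg=256)
power = np.abs(Zxx) ** 2

# Two illustrative per-frame features: spectral energy and the
# power-weighted mean frequency (spectral centroid).
energy = power.sum(axis=0)
centroid = (f[:, None] * power).sum(axis=0) / (power.sum(axis=0) + 1e-12)
features = np.stack([energy, centroid], axis=1)
print(features.shape)                      # (num_frames, 2)
```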
Sector-Disk (SD) Erasure Codes for Mixed Failure Modes in RAID Systems | Traditionally, when storage systems employ erasure codes, they are designed to tolerate the failures of entire disks. However, the most common types of failures are latent sector failures, which only affect individual disk sectors, and block failures, which arise through wear on SSDs. This article introduces SD codes, which are designed to tolerate combinations of disk and sector failures. As such, they consume far less storage resources than traditional erasure codes. We specify the codes with enough detail for the storage practitioner to employ them, discuss their practical properties, and detail an open-source implementation. |
Towards quality discourse in online news comments | With the growth in sociality and interaction around online news media, news sites are increasingly becoming places for communities to discuss and address common issues spurred by news articles. The quality of online news comments is of importance to news organizations that want to provide a valuable exchange of community ideas and maintain credibility within the community. In this work we examine the complex interplay between the needs and desires of news commenters and the functioning of different journalistic approaches toward managing comment quality. Drawing primarily on newsroom interviews and reader surveys, we characterize the comment discourse of SacBee.com, discuss the relationship of comment quality to both the consumption and production of news information, and provide a description of both readers' and writers' motivations for usage of news comments. We also examine newsroom strategies for dealing with comment quality as well as explore tensions and opportunities for value-sensitive innovation within such online communities. |
A Multiband RF Antenna Duplexer on CMOS: Design and Performance | An RF duplexer has been fabricated on a CMOS IC for use in 3G/4G cellular transceivers. The passive circuit sustains large voltage swings in the transmit path, and isolates the receive path from the transmitter by more than 45 dB across a bandwidth of 200 MHz in 3G/4G bands I, II, III, IV, and IX. A low noise amplifier embedded into the duplexer demonstrates a cascade noise figure of 5 dB with more than 27 dB of gain. The duplexer inserts 2.5 dB of loss between power amplifier and antenna. |
BlitzNet: A Real-Time Deep Network for Scene Understanding | Real-time scene understanding has become crucial in many applications such as autonomous driving. In this paper, we propose a deep architecture, called BlitzNet, that jointly performs object detection and semantic segmentation in one forward pass, allowing real-time computations. Besides the computational gain of having a single network to perform several tasks, we show that object detection and semantic segmentation benefit from each other in terms of accuracy. Experimental results for VOC and COCO datasets show state-of-the-art performance for object detection and segmentation among real time systems. |
Temporal RDF | The Resource Description Framework (RDF) is a metadata model and language recommended by the W3C. This paper presents a framework to incorporate temporal reasoning into RDF, yielding temporal RDF graphs. We present a semantics for temporal RDF graphs, a syntax to incorporate temporality into standard RDF graphs, an inference system for temporal RDF graphs, complexity bounds showing that entailment in temporal RDF graphs does not yield extra asymptotic complexity with respect to standard RDF graphs, and a sketch of a temporal query language for RDF. |
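One natural encoding of the idea, sketched here in plain Python rather than the paper's formal syntax: a temporal RDF graph as triples labeled with validity intervals, with a snapshot query recovering a standard RDF graph at a time point. The names and the integer interval granularity are assumptions.

```python
from typing import NamedTuple, List

class TemporalTriple(NamedTuple):
    subject: str
    predicate: str
    obj: str
    t_from: int   # start of validity interval (e.g., a year)
    t_to: int     # end of validity interval, inclusive

graph: List[TemporalTriple] = [
    TemporalTriple("alice", "worksFor", "acme", 2001, 2005),
    TemporalTriple("alice", "worksFor", "globex", 2006, 2010),
]

def valid_at(g: List[TemporalTriple], t: int) -> List[TemporalTriple]:
    """Return the standard-RDF snapshot of the temporal graph at time t."""
    return [tr for tr in g if tr.t_from <= t <= tr.t_to]

print(valid_at(graph, 2004))   # only the 'acme' triple is valid in 2004
```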
Hilbert Transform-Based Bearing Failure Detection in DFIG-Based Wind Turbines | Cost-effective, predictive and proactive maintenance of wind turbines assumes more importance with the increasing number of wind farms installed in more remote locations (offshore). A well-known method for assessing impending problems is to use current sensors installed within the wind turbine generator. This paper describes an approach based on generator stator current data collection and highlights the use of the Hilbert transform for failure detection in a doubly-fed induction generator-based wind turbine. Indeed, this generator is commonly used in modern variable-speed wind turbines. The proposed failure detection technique has been validated experimentally with regard to bearing failures. Indeed, a large fraction of wind turbine downtime is due to bearing failures, particularly in the generator and gearbox. |
Biogeography and Phylogeny of Aigarchaeota, A Novel Phylum of Archaea | Despite our knowledge of the diversity of life on earth, there are major lineages of microbial life that have never been studied and that are referred to as "biological dark matter" or "microbial dark matter" (Marcy et al., 2007; Rinke et al., 2013). One of these so-called "microbial dark matter" groups was originally described as pSL4, and related gene sequences were later grouped under the name Hot Water Crenarchaeotic Group 1 (HWCG 1) (Barns et al., 1996; Nunoura et al., 2005). A metagenomics study of a microbial mat community led to the construction of a complete composite genome for a member of the pSL4/HWCG 1 lineage proposed as Candidatus 'Caldiarchaeum subterraneum', and further analysis of the genome led to the proposition of the phylum 'Aigarchaeota' (Nunoura et al., 2005; Nunoura et al., 2011). Genomic analysis of Ca. 'C. subterraneum' revealed unique eukaryote-type features which led to the proposal of a superphylum-level relationship called the 'TACK' superphylum (Guy and Ettema, 2011). Cultivation-independent studies suggest a world-wide distribution of 'Aigarchaeota' and dominance of 16S rRNA gene clone libraries from habitats such as a deep sea hydrothermal vent in Okinawa Trough at 35-60 °C and Great Boiling Spring at 79-87 °C (Nunoura et al., 2010; Cole et al., 2012). Cole et al. (2012) reported that 'Aigarchaeota' 16S rRNA gene sequences had the highest relative abundance at Site A (~54%), with a general trend of decreasing relative abundance down to ~5% at 62 °C (Site E), suggesting 'Aigarchaeota' niche differentiation along a thermal gradient. The purpose of this study was to gather 16S rRNA gene sequences potentially belonging to 'Aigarchaeota', and to use those sequences to address several goals: (i) rigorously define the phylum; (ii) gain insight into the phylogenetic and potential taxonomic structure of the phylum; (iii) assess the distribution of 'Aigarchaeota' as a whole, along with individual groups of 'Aigarchaeota', in nature; and (iv) design 'Aigarchaeota'-specific 16S rRNA gene primers that, combined, will target genus-level groups. |
Development of a MALDI MS‐based platform for early detection of acute kidney injury | PURPOSE
Septic acute kidney injury (AKI) is associated with poor outcome. This can partly be attributed to delayed diagnosis and incomplete understanding of the underlying pathophysiology. Our aim was to develop an early predictive test for AKI based on the analysis of urinary peptide biomarkers by MALDI-MS.
EXPERIMENTAL DESIGN
Urine samples from 95 patients with sepsis were analyzed by MALDI-MS. Marker search and multimarker model establishment were performed using the peptide profiles from 17 patients with existing AKI or AKI developing within the next 5 days, and 17 with no change in renal function. Replicates of urine sample pools from the AKI and non-AKI patient groups and normal controls were also included to select the analytically most robust AKI markers.
RESULTS
Thirty-nine urinary peptides were selected by cross-validated variable selection to generate a support vector machine multidimensional AKI classifier. Prognostic performance of the AKI classifier on an independent validation set including the remaining 61 patients of the study population (17 controls and 44 cases) was good, with an area under the receiver operating characteristic curve of 0.82 and a sensitivity and specificity of 86% and 76%, respectively.
CONCLUSION AND CLINICAL RELEVANCE
A urinary peptide marker model detects onset of AKI with acceptable accuracy in septic patients. Such a platform can eventually be transferred to the clinic as a fast MALDI-MS test format. |
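A rough sketch of the modeling pipeline this abstract describes, using scikit-learn on random placeholder data; the paper's cross-validated variable selection is more elaborate than the crude univariate screen used here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data: 34 training patients x 500 candidate peptide intensities.
X_train = rng.normal(size=(34, 500))
y_train = rng.integers(0, 2, size=34)        # 1 = develops AKI

# Crude univariate screen standing in for the paper's variable selection:
# keep the 39 peptides with the largest between-group mean difference.
scores = np.abs(X_train[y_train == 1].mean(0) - X_train[y_train == 0].mean(0))
selected = np.argsort(scores)[-39:]          # 39 markers, as in the abstract

clf = SVC(kernel="linear")
print(cross_val_score(clf, X_train[:, selected], y_train, cv=3).mean())

# Validation on held-out patients, scored by AUC as in the abstract.
X_val, y_val = rng.normal(size=(61, 500)), rng.integers(0, 2, size=61)
clf.fit(X_train[:, selected], y_train)
print(roc_auc_score(y_val, clf.decision_function(X_val[:, selected])))
```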
Structural Models of Corporate Bond Pricing: An Empirical Analysis | This article empirically tests five structural models of corporate bond pricing: those of Merton (1974), Geske (1977), Longstaff and Schwartz (1995), Leland and Toft (1996), and Collin-Dufresne and Goldstein (2001). We implement the models using a sample of 182 bond prices from firms with simple capital structures during the period 1986–1997. The conventional wisdom is that structural models do not generate spreads as high as those seen in the bond market, and true to expectations, we find that the predicted spreads in our implementation of the Merton model are too low. However, most of the other structural models predict spreads that are too high on average. Nevertheless, accuracy is a problem, as the newer models tend to severely overstate the credit risk of firms with high leverage or volatility and yet suffer from a spread underprediction problem with safer bonds. The Leland and Toft model is an exception in that it overpredicts spreads on most bonds, particularly those with high coupons. More accurate structural models must avoid features that increase the credit risk on the riskier bonds while scarcely affecting the spreads of the safest bonds. |
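Of the five models tested, Merton (1974) is compact enough to sketch: the credit spread of a zero-coupon bond follows from valuing risky debt as riskless debt minus a put on firm assets. The inputs below are invented; this is the textbook formula, not the article's calibration.

```python
import numpy as np
from scipy.stats import norm

def merton_spread(V, F, T, r, sigma):
    """Credit spread of a zero-coupon bond in the Merton (1974) model.

    V: firm asset value, F: face value of debt, T: maturity (years),
    r: risk-free rate, sigma: asset volatility.
    """
    d1 = (np.log(V / F) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    # Risky debt value: equity is a call on V, so debt = V*N(-d1) + F*e^{-rT}*N(d2).
    debt = V * norm.cdf(-d1) + F * np.exp(-r * T) * norm.cdf(d2)
    y = np.log(F / debt) / T          # continuously compounded bond yield
    return y - r

# Illustrative low-leverage firm: the predicted spread is small, consistent
# with the article's finding that Merton spreads tend to be too low.
print(f"{1e4 * merton_spread(V=100, F=40, T=5, r=0.05, sigma=0.2):.1f} bp")
```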
Exploring the Effectiveness of Convolutional Neural Networks for Answer Selection in End-to-End Question Answering | Most work on natural language question answering today focuses on answer selection: given a candidate list of sentences, determine which contains the answer. Although important, answer selection is only one stage in a standard end-to-end question answering pipeline. This paper explores the effectiveness of convolutional neural networks (CNNs) for answer selection in an end-to-end context using the standard TrecQA dataset. We observe that a simple idf-weighted word overlap algorithm forms a very strong baseline, and that despite substantial efforts by the community in applying deep learning to tackle answer selection, the gains are modest at best on this dataset. Furthermore, it is unclear if a CNN is more effective than the baseline in an end-to-end context based on standard retrieval metrics. To further explore this finding, we conducted a manual user evaluation, which confirms that answers from the CNN are detectably better than those from idf-weighted word overlap. This result suggests that users are sensitive to relatively small differences in answer selection quality. |
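The idf-weighted word overlap baseline the paper highlights can be written in a few lines; the toy corpus and the fallback idf value for unseen words below are assumptions.

```python
import math
from collections import Counter

# Toy corpus for idf statistics; in the paper this would be the TrecQA data.
corpus = [
    "the cat sat on the mat",
    "who wrote the iliad",
    "homer wrote the iliad and the odyssey",
]
N = len(corpus)
df = Counter(w for doc in corpus for w in set(doc.split()))
idf = {w: math.log(N / df[w]) for w in df}

def overlap_score(question: str, candidate: str) -> float:
    """idf-weighted word overlap between a question and a candidate sentence."""
    q, c = set(question.split()), set(candidate.split())
    return sum(idf.get(w, math.log(N)) for w in q & c)

q = "who wrote the iliad"
for cand in corpus:
    print(f"{overlap_score(q, cand):5.2f}  {cand}")
```

Frequent words like "the" occur in every document, so their idf is zero and they contribute nothing to the score, which is what makes this trivial baseline competitive.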
Price-Directed Replenishment of Subsets: Methodology and Its Application to Inventory Routing | The idea of price-directed control is to use an operating policy that exploits optimal dual prices from a mathematical programming relaxation of the underlying control problem. We apply it to the problem of replenishing inventory to subsets of products/locations, such as in the distribution of industrial gases, so as to minimize long-run time average replenishment costs. Given a marginal value for each product/location, whenever there is a stockout the dispatcher compares the total value of each feasible replenishment with its cost, and chooses one that maximizes the surplus. We derive this operating policy using a linear functional approximation to the optimal value function of a semi-Markov decision process on continuous spaces. This approximation also leads to a math program whose optimal dual prices yield values and whose optimal objective value gives a lower bound on system performance. We use duality theory to show that optimal prices satisfy several structural properties and can be interpreted as estimates of lowest achievable marginal costs. On real-world instances, the price-directed policy achieves superior, near optimal performance as compared with other approaches. (Inventory Routing; Approximate Dynamic Programming; Price-Directed Operations; Semi-Markov Decision Processes) |
Control optimization for a power-split hybrid vehicle | The Toyota hybrid system (THS) is used in the current best-selling hybrid vehicle on the market, the Toyota Prius. This hybrid system contains a power split planetary gear system which combines the benefits of series and parallel hybrid vehicles. This paper first develops a dynamic model to investigate the control strategy of the THS power train. An equivalent consumption minimization strategy (ECMS) is developed, based on the instantaneous optimization concept. The dynamic programming (DP) technique is then utilized to obtain a performance benchmark and insight toward fine-tuning of the ECMS algorithm for better performance. |
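A minimal sketch of the instantaneous-optimization idea behind ECMS: at each instant, choose the engine/battery power split that minimizes real fuel plus a fuel-equivalent cost of battery energy. The fuel map, equivalence factor, and candidate grid below are invented, not the paper's calibrated values.

```python
import numpy as np

Q_LHV = 42.5   # lower heating value of gasoline, kJ/g (approximate)

def fuel_rate(p_eng):
    """Hypothetical engine fuel map: fuel rate (g/s) vs engine power (kW)."""
    return 0.08 * p_eng + 0.002 * p_eng ** 2

def ecms_split(p_demand, s=6.0):
    """Battery power (kW) minimizing the instantaneous equivalent fuel rate.

    s is an (assumed) equivalence factor converting electrical energy into
    fuel; positive p_batt discharges the battery, negative recharges it.
    """
    best = None
    for p_batt in np.linspace(-20.0, 20.0, 81):
        p_eng = p_demand - p_batt            # engine supplies the remainder
        if p_eng < 0:
            continue
        cost = fuel_rate(p_eng) + s * p_batt / Q_LHV
        if best is None or cost < best[1]:
            best = (p_batt, cost)
    return best

print(ecms_split(p_demand=30.0))   # optimal split for a 30 kW demand
```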
Vitamin D and sunlight: strategies for cancer prevention and other health benefits. | Vitamin D deficiency is a worldwide health problem. The major source of vitamin D for most humans is sensible sun exposure. Factors that influence cutaneous vitamin D production include sunscreen use, skin pigmentation, time of day, season of the year, latitude, and aging. Serum 25-hydroxyvitamin D [25(OH)D] is the measure of vitamin D status. A total of 100 IU of vitamin D raises the blood level of 25(OH)D by 1 ng/ml. Thus, children and adults who do not receive adequate vitamin D from sun exposure need at least 1000 IU/d of vitamin D. Lack of sun exposure and vitamin D deficiency have been linked to many serious chronic diseases, including autoimmune diseases, infectious diseases, cardiovascular disease, and deadly cancers. It is estimated that there is a 30 to 50% reduction in risk for developing colorectal, breast, and prostate cancer by either increasing vitamin D intake to at least 1000 IU/d or increasing sun exposure to raise blood levels of 25(OH)D >30 ng/ml. Most tissues in the body have a vitamin D receptor. The active form of vitamin D, 1,25-dihydroxyvitamin D, is made in many different tissues, including colon, prostate, and breast. It is believed that the local production of 1,25(OH)2D may be responsible for the anticancer benefit of vitamin D. Recent studies suggested that women who are vitamin D deficient have a 253% increased risk for developing colorectal cancer, and women who ingested 1500 mg/d calcium and 1100 IU/d vitamin D3 for 4 yr reduced their risk for developing cancer by >60%. |
Regression and Kriging metamodels with their experimental designs in simulation: A review | This article reviews the design and analysis of simulation experiments. It focusses on analysis via either low-order polynomial regression or Kriging (also known as Gaussian process) metamodels. The type of metamodel determines the design of the experiment, which determines the input combinations of the simulation experiment. For example, a first-order polynomial metamodel requires a "resolution-III" design, whereas Kriging may use Latin hypercube sampling. Polynomials of first or second order require resolution III, IV, V, or "central composite" designs. Before applying either regression or Kriging, sequential bifurcation may be applied to screen a great many inputs. Optimization of the simulated system may use either a sequence of low-order polynomials known as response surface methodology (RSM) or Kriging models fitted through sequential designs including efficient global optimization (EGO). The review includes robust optimization, which accounts for uncertain simulation inputs. |
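A small end-to-end illustration of the Kriging branch of this review: a Latin hypercube design feeding a Gaussian process metamodel. The toy simulator, design size, and kernel settings are assumptions.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulate(x):
    """Stand-in for an expensive simulation with two inputs in [0, 1]^2."""
    return np.sin(6 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Latin hypercube design, as the review recommends for Kriging metamodels.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=20)
y = simulate(X)

# Fit the Kriging (Gaussian process) metamodel to the design points.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-8)
gp.fit(X, y)

x_new = np.array([[0.4, 0.7]])
mean, std = gp.predict(x_new, return_std=True)
print(f"prediction {mean[0]:.3f} +/- {std[0]:.3f}")
```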
Joint message-passing decoding of LDPC Codes and partial-response channels | The following is a clarification of the above paper [1]. The terms $R_{pn}(j) = P[x_n = j \mid \mathbf{r}_p]$ and $Q_{np}(j) = P[x_n = j]$ should be explicitly defined as functions of $j$, $j \in \{0, 1\}$. Let the partial-response channel's output alphabet be the set $I$, let $\mathbf{j}_0^{n+p} \setminus j_{n+p}$ be a vector of binary inputs to the channel $(j_0, \ldots, j_{n+p})$ except for $j_{n+p}$, and let $J_{n,p}$ be the set of all such binary input vectors. Then the derivation of [1, eq. (1), p. 1412] should be $R_{pn}(j) = P[x_n = j \mid \mathbf{r}_p] = \ldots$ |
Did Securitization Lead to Lax Screening? Evidence From Subprime Loans 2001-2006 | Theories of financial intermediation suggest that securitization, the act of converting illiquid loans into liquid securities, could reduce the incentives of financial intermediaries to screen borrowers. We empirically examine this question using a unique dataset on securitized subprime mortgage loan contracts in the United States. We exploit a specific rule of thumb in the lending market to generate an instrument for ease of securitization and compare the composition and performance of lenders' portfolios around the ad-hoc threshold. Conditional on being securitized, the portfolio that is more likely to be securitized defaults by around 20% more than a similar risk profile group with a lower probability of securitization. Crucially, these two portfolios have similar observable risk characteristics and loan terms. We use variation across lenders (banks vs. independents), state foreclosure laws, and the timing of passage of anti-predatory laws to rule out alternative explanations. Our results suggest that securitization does adversely affect the screening incentives of lenders. |
A Framework for Evaluating Intrusion Detection Architectures in Advanced Metering Infrastructures | The scale and complexity of Advanced Metering Infrastructure (AMI) networks requires careful planning for the deployment of security solutions. In particular, the large number of AMI devices and the volume and diversity of communication expected to take place on the various AMI networks make the role of intrusion detection systems (IDSes) critical. Understanding the trade-offs for a scalable and comprehensive IDS is key to investing in the right technology and deploying sensors at optimal locations. This paper reviews the benefits and costs associated with different IDS deployment options, including centralized and distributed solutions. A general cost-model framework is proposed to help utilities (AMI asset owners) make more informed decisions when selecting IDS deployment architectures and managing their security investments. We illustrate how the framework can be applied through case studies, and highlight the interesting cost/benefit trade-offs that emerge. |
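A toy instance of the kind of cost comparison such a framework supports, with all parameters invented; the paper's model is considerably richer (sensor placement, traffic volume, detection coverage).

```python
def expected_cost(sensor_cost, n_sensors, detection_rate,
                  attack_rate, loss_per_missed_attack, years=5):
    """Toy total-cost-of-ownership model for one IDS deployment option.

    All parameters are hypothetical placeholders, not values from the paper.
    """
    capex = sensor_cost * n_sensors
    missed = attack_rate * years * (1.0 - detection_rate)
    return capex + missed * loss_per_missed_attack

centralized = expected_cost(50_000, 2, 0.70, attack_rate=4,
                            loss_per_missed_attack=100_000)
distributed = expected_cost(1_000, 500, 0.90, attack_rate=4,
                            loss_per_missed_attack=100_000)
print(f"centralized: ${centralized:,.0f}  distributed: ${distributed:,.0f}")
```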
TextonBoost for Image Understanding: Multi-Class Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context | This paper details a new approach for learning a discriminative model of object classes, incorporating texture, layout, and context information efficiently. The learned model is used for automatic visual understanding and semantic segmentation of photographs. Our discriminative model exploits texture-layout filters, novel features based on textons, which jointly model patterns of texture and their spatial layout. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating the unary classifier in a conditional random field, which (i) captures the spatial interactions between class labels of neighboring pixels, and (ii) improves the segmentation of specific object instances. Efficient training of the model on large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy is demonstrated on four varied databases: (i) the MSRC 21-class database containing photographs of real objects viewed under general lighting conditions, poses and viewpoints, (ii) the 7-class Corel subset and (iii) the 7-class Sowerby database used in He et al. (Proceeding of IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 695–702, June 2004), and (iv) a set of video sequences of television shows. The proposed algorithm gives competitive and visually pleasing results for objects that are highly textured (grass, trees, etc.), highly structured (cars, faces, bicycles, airplanes, etc.), and even articulated (body, cow, etc.). |
Optimizing sterilization logistics in hospitals. | This paper deals with the optimization of the flow of sterile instruments in hospitals, which takes place between the sterilization department and the operating theatre. This topic is especially of interest in view of the current attempts of hospitals to cut costs by outsourcing sterilization tasks. Oftentimes, outsourcing implies placing the sterilization unit at a larger distance, hence introducing a longer logistic loop, which may result in lower instrument availability and higher cost. This paper discusses the optimization problems that have to be solved when redesigning processes so as to improve material availability and reduce cost. We consider changing the logistic management principles, the use of visibility information, and optimizing the composition of the nets of sterile materials. |
Circulating Mitochondrial DAMPs Cause Inflammatory Responses to Injury | Injury causes a systemic inflammatory response syndrome (SIRS) that is clinically much like sepsis. Microbial pathogen-associated molecular patterns (PAMPs) activate innate immunocytes through pattern recognition receptors. Similarly, cellular injury can release endogenous ‘damage’-associated molecular patterns (DAMPs) that activate innate immunity. Mitochondria are evolutionary endosymbionts that were derived from bacteria and so might bear bacterial molecular motifs. Here we show that injury releases mitochondrial DAMPs (MTDs) into the circulation with functionally important immune consequences. MTDs include formyl peptides and mitochondrial DNA. These activate human polymorphonuclear neutrophils (PMNs) through formyl peptide receptor-1 and Toll-like receptor (TLR) 9, respectively. MTDs promote PMN Ca2+ flux and phosphorylation of mitogen-activated protein (MAP) kinases, thus leading to PMN migration and degranulation in vitro and in vivo. Circulating MTDs can elicit neutrophil-mediated organ injury. Cellular disruption by trauma releases mitochondrial DAMPs with evolutionarily conserved similarities to bacterial PAMPs into the circulation. These signal through innate immune pathways identical to those activated in sepsis to create a sepsis-like state. The release of such mitochondrial ‘enemies within’ by cellular injury is a key link between trauma, inflammation and SIRS. |
Large-scale visual sentiment ontology and detectors using adjective noun pairs | We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose a novel approach based on understanding of the visual concepts that are strongly related to sentiments. Our key contribution is two-fold: first, we present a method built upon psychological theories and web mining to automatically construct a large-scale Visual Sentiment Ontology (VSO) consisting of more than 3,000 Adjective Noun Pairs (ANP). Second, we propose SentiBank, a novel visual concept detector library that can be used to detect the presence of 1,200 ANPs in an image. The VSO and SentiBank are distinct from existing work and will open a gate towards various applications enabled by automatic sentiment analysis. Experiments on detecting sentiment of image tweets demonstrate significant improvement in detection accuracy when comparing the proposed SentiBank based predictors with the text-based approaches. The effort also leads to a large publicly available resource consisting of a visual sentiment ontology, a large detector library, and the training/testing benchmark for visual sentiment analysis. |
Markowitz Revisited: Mean-Variance Models in Financial Portfolio Analysis | Mean-variance portfolio analysis provided the first quantitative treatment of the tradeoff between profit and risk. We describe in detail the interplay between objective and constraints in a number of single-period variants, including semivariance models. Particular emphasis is laid on avoiding the penalization of overperformance. The results are then used as building blocks in the development and theoretical analysis of multiperiod models based on scenario trees. A key property is the possibility of removing surplus money in future decisions, yielding approximate downside risk minimization. |
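For the single-period building block, the minimum-variance portfolio at a target mean return has a closed form via Lagrange multipliers. A sketch with invented data follows; short selling is allowed, and the semivariance and multiperiod features discussed in the paper are not modeled.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.15])              # expected returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.20]])          # return covariance (assumed)

def min_variance_weights(mu, Sigma, target):
    """Fully invested portfolio of minimum variance with a given mean return.

    Solves the KKT system of min w'Sigma w s.t. mu'w = target, 1'w = 1.
    """
    n = len(mu)
    ones = np.ones(n)
    A = np.block([[2 * Sigma, mu[:, None], ones[:, None]],
                  [mu[None, :], np.zeros((1, 2))],
                  [ones[None, :], np.zeros((1, 2))]])
    b = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(A, b)[:n]            # drop the multipliers

w = min_variance_weights(mu, Sigma, target=0.11)
print(w, "variance:", w @ Sigma @ w)
```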
Agile Software Development | With the further development of computer technology, the software development process faces new goals and requirements, and practitioners have optimized and improved earlier methods to adapt to these changes. At the same time, some traditional software development methods can no longer meet current requirements. Therefore, in recent years a number of lightweight software development processes have emerged, collectively known as agile software development, which is now widely used and promoted. In this paper the author first introduces the background and development of agile software development, together with a comparison to traditional software development. The second chapter gives the definition of agile software development and its characteristics, principles and values. In the third chapter the author highlights several different agile software development methods and the characteristics of each. In the fourth chapter the author cites a specific example of how agile software development is applied in a particular area. Finally, the author concludes with his own views. This article aims to give readers an overview of agile software development and how it is used in practice. |
Performance evaluation of CS/CB for coordinated multipoint transmission in LTE-A downlink | Coordinated multipoint (CoMP) processing is a key enabling technique in Long Term Evolution-Advanced (LTE-A) to improve the system spectral efficiency. Among the CoMP processing schemes, while joint transmission (JT) from multiple coordinated eNBs has high computing complexity due to the data sharing, coordinated scheduling/coordinated beamforming (CS/CB) lowers the backhaul requirement. In this paper, based on the modified signal-to-leakage-and-noise ratio (M-SLNR), a transmit precoding algorithm is proposed for CS/CB. The performance of this algorithm is evaluated through system level simulations under an imperfect CSI scenario. As illustrated by the simulation results, the proposed algorithm is capable of significantly improving the spectral efficiency of CS/CB relative to single-cell multiuser multiple-input multiple-output (MU-MIMO), by suppressing the dominant leakage interference using as few spatial resources as possible. Specifically, intra-site CoMP and inter-site CoMP employing the proposed algorithm achieve spectral efficiency gains of 13% and 20%, respectively, compared with single-cell MU-MIMO. |
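The paper's M-SLNR criterion modifies standard SLNR precoding, which itself is easy to sketch: each user's beam is the principal generalized eigenvector of its signal matrix against the leakage-plus-noise matrix. The channels and dimensions below are invented, and only the standard SLNR form (not the M-SLNR variant) is shown.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_tx, n_users, noise_var = 4, 3, 0.1
# One 1 x n_tx channel row per single-antenna user (assumed Rayleigh fading).
H = [rng.normal(size=(1, n_tx)) + 1j * rng.normal(size=(1, n_tx))
     for _ in range(n_users)]

def slnr_precoder(k):
    """Beam for user k maximizing its signal-to-leakage-and-noise ratio."""
    A = H[k].conj().T @ H[k]                       # desired signal power
    B = noise_var * np.eye(n_tx, dtype=complex)    # noise ...
    for j in range(n_users):
        if j != k:
            B += H[j].conj().T @ H[j]              # ... plus leakage terms
    _, vecs = eigh(A, B)                           # generalized eigenproblem
    w = vecs[:, -1]                                # principal eigenvector
    return w / np.linalg.norm(w)

for k in range(n_users):
    print(k, abs(H[k] @ slnr_precoder(k)))         # received signal amplitude
```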
Efficient Circular Thresholding | Otsu's algorithm for thresholding images is widely used, and the computational complexity of determining the threshold from the histogram is O(N) where N is the number of histogram bins. When the algorithm is adapted to circular rather than linear histograms then two thresholds are required for binary thresholding. We show that, surprisingly, it is still possible to determine the optimal threshold in O(N) time. The efficient optimal algorithm is over 300 times faster than traditional approaches for typical histograms and is thus particularly suitable for real-time applications. We further demonstrate the usefulness of circular thresholding using the adapted Otsu criterion for various applications, including analysis of optical flow data, indoor/outdoor image classification, and non-photorealistic rendering. In particular, by combining the circular Otsu feature with other colour/texture features, a 96.9% correct rate is obtained for indoor/outdoor classification on the well known IITM-SCID2 data set, outperforming the state-of-the-art result by 4.3%. |
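For reference, the classic linear-histogram Otsu threshold is itself O(N) once cumulative moments are precomputed; the paper's contribution is achieving the same bound for the circular two-threshold case, which this sketch does not cover.

```python
import numpy as np

def otsu_threshold(hist):
    """Classic linear-histogram Otsu threshold in O(N) via cumulative moments."""
    p = hist.astype(float) / hist.sum()
    bins = np.arange(len(hist))
    w0 = np.cumsum(p)                 # class-0 probability for each cut
    m0 = np.cumsum(p * bins)          # class-0 first moment for each cut
    mT = m0[-1]                       # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mT * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))    # cut maximizing between-class variance

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(60, 10, 5000),
                         rng.normal(180, 15, 5000)]).clip(0, 255)
hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
print(otsu_threshold(hist))           # lands between the two modes
```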
Control of Prosthetic Device Using Support Vector Machine Signal Classification Technique | An appropriate classification of the surface myoelectric signals (MES) allows people with disabilities to control assistive prosthetic devices. The performance of these pattern recognition methods significantly affects the accuracy and smoothness of the target movements. We designed an intelligent Support Vector Machine (SVM) classifier to incorporate potential variations in electrode placement, thus achieving high accuracy for predictive control. MES from seven locations of the forearm were recorded over six different sessions. Despite meticulous attempts to keep the recording locations consistent between trials, slight shifts may still occur, affecting the classification performance. We hypothesize that the machine learning algorithm is able to compensate for these variations. The recorded data were first processed using the Discrete Wavelet Transform over 9 frequency bands. As a result, a 63-dimension embedding of the wavelet coefficients was used as the training data for the SVM classifiers. For each session of recordings, a new classifier was trained using only the data sets from the previous sessions. The new classifier was then tested with the data obtained in the current session. The performance of the classifier was evaluated by calculating the sensitivity and specificity. The results indicated that after a critical number of recording sessions, the classifier accuracy starts to reach a plateau, meaning that inclusion of new training data will not significantly improve the performance of the classifier. It was observed that the effect of electrode placement variations was reduced and that a classification accuracy of >89% can be obtained. |
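A sketch of the feature front-end this abstract implies, using PyWavelets: an 8-level DWT yields 9 coefficient bands per channel, and 7 channels x 9 band energies gives the 63-dimension input. The wavelet family, window length, and energy features are assumptions.

```python
import numpy as np
import pywt

n_channels, window_len = 7, 2048
rng = np.random.default_rng(0)
window = rng.normal(size=(n_channels, window_len))   # placeholder MES window

def wavelet_features(window):
    """63-dim vector: log-energy in 9 DWT bands for each of 7 channels."""
    feats = []
    for ch in window:
        # Level-8 decomposition -> [cA8, cD8, ..., cD1]: 9 coefficient bands.
        coeffs = pywt.wavedec(ch, "db4", level=8)
        feats.extend(np.log1p(np.sum(c ** 2)) for c in coeffs)
    return np.asarray(feats)

x = wavelet_features(window)
print(x.shape)    # (63,) -> one training input for the SVM classifier
```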
Population pharmacokinetics and target attainment analysis of moxifloxacin in patients with diabetic foot infections. | The objective of this study was to provide a pharmacokinetic/pharmacodynamic (PK/PD) analysis of moxifloxacin in patients with diabetic foot infections (DFI). The plasma concentration-time courses were determined in 50 DFI patients on days 1 and 3 after intravenous moxifloxacin 400 mg once daily. A two-compartment population pharmacokinetic model was developed, identifying as covariates total body weight on central and peripheral volume of distribution (V1, V2) and ideal body weight on clearance (CL), respectively. For a 70 kg patient V1 was 68.1 L (interindividual variability, CV: 27.4%), V2 44.6 L, and CL 12.1 L/h (25.6%). Simulations were performed to calculate the probability of target attainment (PTA) for Gram-positive and Gram-negative pathogens with fAUC/MIC targets of ≥30 and ≥100, respectively. PTA was 0.68-1 for susceptible (MIC ≤0.5 mg/L according to EUCAST) Gram-positive pathogens, but <0.25 for Gram-negative pathogens with MIC ≥0.25 mg/L. With the exception of the first 24 hours of therapy, obesity affected PTA only marginally. Pharmacokinetic parameters in DFI patients were similar to those reported for healthy volunteers, indicating the appropriateness of the standard dose of moxifloxacin. Overall clinical efficacy has been shown previously, but PTA is limited in a subpopulation infected with formally susceptible Gram-negative pathogens close to the EUCAST breakpoint. |
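The PTA computation can be sketched with a simplifying steady-state shortcut (fAUC over 24 h = unbound fraction x dose / CL) and log-normal between-patient variability on CL using the abstract's population values; the unbound fraction is an assumed placeholder and the full two-compartment model is deliberately simplified away.

```python
import numpy as np

rng = np.random.default_rng(0)
dose, f_unbound = 400.0, 0.6        # mg/day; unbound fraction (assumed)
CL_pop, omega = 12.1, 0.256         # typical CL (L/h) and CV, from the abstract

def pta(mic, target, n=10_000):
    """Probability of attaining fAUC/MIC >= target at steady state."""
    CL = CL_pop * np.exp(rng.normal(0.0, omega, size=n))  # log-normal IIV
    fauc = f_unbound * dose / CL                          # mg*h/L per 24 h
    return np.mean(fauc / mic >= target)

print(pta(mic=0.25, target=30))     # Gram-positive target: usually attained
print(pta(mic=0.25, target=100))    # Gram-negative target: often missed
```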
Clinical efficacy of acute respiratory viral infections prevention in patients with chronic heart failure. | AIM
The aim of the study was to evaluate the effectiveness of ARVI prevention in patients with chronic heart failure (CHF) using the interferon inducer Amixin.
MATERIALS AND METHODS
We conducted a comprehensive examination, dynamic monitoring and treatment of 60 patients aged from 49 to 70 years (mean age 60.25±4.57 years; 17 men and 43 women) with CHF with preserved left ventricular ejection fraction (LVEF ≥50%), functional class (FC) II-III according to the New York Heart Association (NYHA) classification, which developed as a result of coronary heart disease (CHD) or hypertensive disease (HD). Of these, 30 patients (group 1) received, in addition to standard CHF therapy, tilorone (Amixin) for the prevention of ARVI at a dose of 125 mg once a week for 6 weeks, in two courses per year. Group 2 patients received only standard therapy for CHF.
RESULTS
A decrease in the frequency of ARVI in patients with CHF treated with Amixin was found, accompanied by a decrease in the severity of subclinical inflammation through reduced production of proinflammatory (IL-1β) and increased production of anti-inflammatory (IL-10) cytokines, reduced neurohumoral activation (lower levels of aldosterone and Nt-proBNP), and increased levels of α- and γ-interferon. The positive dynamics of biomarkers of systemic inflammation and neurohormonal activation explain the improvement in the clinical course of CHF (increased exercise tolerance, and fewer general practitioner visits and hospital admissions during 12 months of observation).
CONCLUSION
A promising approach to the prevention of ARVI in patients with CHF is course therapy with Amixin (twice a year, before the seasonal rise in the incidence of respiratory viral infections and influenza), which achieves both a decrease in the yearly frequency of ARVI and an improvement in the clinical course of CHF. |
TraCI: an interface for coupling road traffic and network simulators | Vehicular Ad-Hoc Networks (VANETs) enable communication among vehicles as well as between vehicles and roadside infrastructures. Currently available software tools for VANET research still lack the ability to assess the usability of vehicular applications. In this article, we present the Traffic Control Interface (TraCI), a technique for interlinking road traffic and network simulators. It permits us to control the behavior of vehicles during simulation runtime, and consequently to better understand the influence of VANET applications on traffic patterns.
In contrast to the existing approaches, i.e., generating mobility traces that are fed to a network simulator as static input files, the online coupling allows the adaptation of drivers' behavior during simulation runtime. This technique is not limited to a special traffic simulator or to a special network simulator. We introduce a general framework for controlling the mobility which is adaptable towards other research areas.
We describe the basic concept, design decisions and the message format of this open-source architecture. Additionally, we provide implementations for non-commercial traffic and network simulators namely SUMO and ns2, respectively. This coupling enables for the first time systematic evaluations of VANET applications in realistic settings. |
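For orientation, the Python client distributed with modern SUMO releases exposes exactly this runtime control loop (the paper itself describes the underlying message protocol and an ns2 coupling). A minimal usage sketch, with a placeholder config file name:

```python
import traci  # Python TraCI client shipped with SUMO

# Launch SUMO with a scenario config (placeholder name) and attach to it.
traci.start(["sumo", "-c", "scenario.sumocfg"])

for step in range(100):
    traci.simulationStep()                 # advance road traffic by one step
    for veh_id in traci.vehicle.getIDList():
        # Example runtime intervention: a VANET hazard warning might slow
        # vehicles down; here every vehicle is simply capped at 5 m/s.
        traci.vehicle.setSpeed(veh_id, 5.0)

traci.close()
```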
Postnatal Lactate as an Early Predictor of Short-Term Outcome after Intrapartum Asphyxia | OBJECTIVES: To compare the predictive value of pH, base deficit and lactate for the occurrence of moderate-to-severe hypoxic ischaemic encephalopathy (HIE) and systemic complications of asphyxia in term infants with intrapartum asphyxia. STUDY DESIGN: We retrospectively reviewed the records of 61 full-term neonates (≥37 weeks gestation) suspected of having suffered a significant degree of intrapartum asphyxia during the period from January 1997 to December 2001. The clinical signs of HIE, if any, were categorized using the Sarnat and Sarnat classification as mild (stage 1), moderate (stage 2) or severe (stage 3). Base deficit, pH and plasma lactate levels were measured from indwelling arterial catheters within 1 hour after birth and thereafter along with every blood gas measurement. The results were correlated with the subsequent presence or absence of moderate-to-severe HIE by computing receiver operating characteristic curves. RESULTS: The initial lactate levels were significantly higher (p=0.001) in neonates with moderate-to-severe HIE (mean±SD=11.09±4.6) than in those with mild or no HIE (mean±SD=7.1±4.7). The lactate levels also took longer to normalize in these babies. A plasma lactate concentration >7.5 mmol/l was associated with moderate-to-severe HIE with a sensitivity of 94% and specificity of 67%. The sensitivity and negative predictive value of lactate were greater than those of pH or base deficit. CONCLUSIONS: The highest recorded lactate level in the first hour of life and serial measurements of lactate are important predictors of moderate-to-severe HIE. |
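The reported cut-off amounts to a simple threshold rule; a sketch of how its sensitivity and specificity are computed, on invented data since the study's records are not public.

```python
import numpy as np

# Hypothetical first-hour lactate values (mmol/l) and outcomes
# (1 = moderate-to-severe HIE); placeholders, not the study's data.
lactate = np.array([11.2, 9.8, 14.0, 6.3, 8.1, 5.2, 7.9, 12.5, 4.4, 6.8])
hie     = np.array([1,    1,   1,    0,   1,   0,   0,   1,    0,   0  ])

def sens_spec(threshold):
    pred = lactate > threshold
    tp = np.sum(pred & (hie == 1)); fn = np.sum(~pred & (hie == 1))
    tn = np.sum(~pred & (hie == 0)); fp = np.sum(pred & (hie == 0))
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(7.5)   # the cut-off reported in the abstract
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```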
DIY Human Action Dataset Generation | The recent successes in applying deep learning techniques to solve standard computer vision problems have inspired researchers to propose new computer vision problems in different domains. As previously established in the field, training data itself plays a significant role in the machine learning process, especially for deep learning approaches, which are data hungry. In order to solve each new problem and get a decent performance, a large amount of data needs to be captured, which may in many cases pose logistical difficulties. Therefore, the ability to generate de novo data or expand an existing dataset, however small, in order to satisfy the data requirements of current networks may be invaluable. Herein, we introduce a novel way to partition an action video clip into action, subject and context. Each part is manipulated separately and reassembled with our proposed video generation technique. Furthermore, our novel human skeleton trajectory generation, along with our proposed video generation technique, enables us to generate unlimited action recognition training data. These techniques enable us to generate video action clips from a small set without costly and time-consuming data acquisition. Lastly, we prove through an extensive set of experiments on two small human action recognition datasets that this new data generation technique can improve the performance of current action recognition neural nets. |
Efficient methods for topic model inference on streaming document collections | Topic models provide a powerful tool for analyzing large text collections by representing high dimensional data in a low dimensional subspace. Fitting a topic model given a set of training documents requires approximate inference techniques that are computationally expensive. With today's large-scale, constantly expanding document collections, it is useful to be able to infer topic distributions for new documents without retraining the model. In this paper, we empirically evaluate the performance of several methods for topic inference in previously unseen documents, including methods based on Gibbs sampling, variational inference, and a new method inspired by text classification. The classification-based inference method produces results similar to iterative inference methods, but requires only a single matrix multiplication. In addition to these inference methods, we present SparseLDA, an algorithm and data structure for evaluating Gibbs sampling distributions. Empirical results indicate that SparseLDA can be approximately 20 times faster than traditional LDA and provide twice the speedup of previously published fast sampling methods, while also using substantially less memory. |
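One way to read the "single matrix multiplication" claim for the classification-based inference method: score every topic by the log-likelihood its word distribution assigns to the document's term counts, which is one matrix-vector product over the trained topic-word matrix. The placeholder model below is random, not a trained LDA, and this reading is an assumption about the method.

```python
import numpy as np

V, K = 10_000, 50                       # vocabulary size, number of topics
rng = np.random.default_rng(0)

# phi: topic-word matrix from an already-trained model (placeholder here).
phi = rng.dirichlet(np.full(V, 0.01), size=K)      # shape (K, V)
log_phi = np.log(phi + 1e-12)

def infer_topics(doc_word_counts):
    """Topic distribution for a new document without retraining or iteration."""
    scores = log_phi @ doc_word_counts   # (K,): the single matrix product
    scores -= scores.max()               # stabilize the softmax numerically
    p = np.exp(scores)
    return p / p.sum()

doc = np.zeros(V)
doc[[3, 42, 999]] = [4, 1, 2]            # sparse toy document term counts
print(infer_topics(doc).round(3).max())  # weight of the best-matching topic
```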
Preschool children’s use of cues to generic meaning | Sentences that refer to categories - generic sentences (e.g., "Dogs are friendly") - are frequent in speech addressed to young children and constitute an important means of knowledge transmission. However, detecting generic meaning may be challenging for young children, since it requires attention to a multitude of morphosyntactic, semantic, and pragmatic cues. The first three experiments tested whether 3- and 4-year-olds use (a) the immediate linguistic context, (b) their previous knowledge, and (c) the social context to determine whether an utterance with ambiguous scope (e.g., "They are afraid of mice", spoken while pointing to 2 birds) is generic. Four-year-olds were able to take advantage of all the cues provided, but 3-year-olds were sensitive only to the first two. In Experiment 4, we tested the relative strength of linguistic-context cues and previous-knowledge cues by putting them in conflict; in this task, 4-year-olds, but not 3-year-olds, preferred to base their interpretations on the explicit noun phrase cues from the linguistic context. These studies indicate that, from early on, children can use contextual and semantic information to construe sentences as generic, thus taking advantage of the category knowledge conveyed in these sentences. |
Prefrontal regions supporting spontaneous and directed application of verbal learning strategies: evidence from PET. | The prefrontal cortex has been implicated in strategic memory processes, including the ability to use semantic organizational strategies to facilitate episodic learning. An important feature of these strategies is the way they are applied in novel or ambiguous situations-failure to initiate effective strategies spontaneously in unstructured settings is a central cognitive deficit in patients with frontal lobe disorders. The current study examined strategic memory with PET and a verbal encoding paradigm that manipulated semantic organization in three encoding conditions: spontaneous, directed and unrelated. During the spontaneous condition, subjects heard 24 words that were related in four categories but presented in mixed order, and they were not informed of this structure beforehand. Any semantic reorganization was, therefore, initiated spontaneously by the subject. In the directed condition, subjects were given a different list of 24 related words and explicitly instructed to notice relationships and mentally group related words together to improve memory. The unrelated list consisted of 24 unrelated words. Behavioural measures included semantic clustering, which assessed active regrouping of words into semantic categories during free recall. In graded PET contrasts (directed > spontaneous > unrelated), two distinct activations were found in left inferior prefrontal cortex (inferior frontal gyrus) and left dorsolateral prefrontal cortex (middle frontal gyrus), corresponding to levels of semantic clustering observed in the behavioural data. Additional covariate analyses in the first spontaneous condition indicated that blood flow in orbitofrontal cortex (OFC) was strongly correlated with semantic clustering scores during immediate free recall. Thus, blood flow in OFC during encoding predicted which subjects would spontaneously initiate effective strategies during free recall. Our findings indicate that OFC performs an important, and previously unappreciated, role in strategic memory by supporting the early mobilization of effective behavioural strategies in novel or ambiguous situations. Once initiated, lateral regions of left prefrontal cortex control verbal semantic organization. |
Spectral Evidence of the Invisible World: Gender and the Puritan Supernatural in American Fiction, 1798-1856 | In late eighteenth- and early nineteenth-century fiction, Puritans serve as source material for a distinctly American identity and as allegories for the experiences of later generations. Many texts draw upon the legacy of the Puritan supernatural, most recognizably the 1692 Salem witchcraft trials. Salem is only a fragment, however, of a belief system deeply rooted in what Puritans called the invisible world, an omnipresent geography that linked material and immaterial dimensions via an intricate system of signs and portents. Spectral Evidence considers the entire invisible world in order to trace the Puritan supernatural's extensive impact on early American fiction. Against the backdrop of a wilderness filled with wonders and witches, numerous American genres took shape: protofeminist gothic dramas, female-driven national romances couched in subversive supernatural agency, and antebellum allegories and anti-reform satires framed as supernatural cautionary tales for women, all haunted, as were the Puritans themselves, by issues of female agency. Historical female witches and heretics became fictional reincarnations, onto which eighteenth- and nineteenth-century writers mapped their own innovations and anxieties. Chapter one shows how Brown's Wieland combines prodigies, wonders, and apparitions in an exploration of invisible world manifestations that simultaneously explores female agency and lays the groundwork for an American gothic. Chapter two turns to the national romances of the 1820s to explore how female writers like Sedgwick, Cheney, and Child radically re-imagine the Puritan supernatural as a navigable realm best traversed by women. Chapter three considers Hawthorne's transformation of the invisible world from a subversive space that enfranchises women into a conservative realm characterized by social and spiritual restrictions, in which heroines are punished rather than empowered by their supernatural experiences. Chapter four turns to antireform satire in order to trace the intensification of the Puritan supernatural's new incarnation as a negative example in antebellum literature. The central example, Brownson's The Spirit-Rapper, mimics the inclusive mechanics and materials of sixteenth- and seventeenth-century wonder tales and draws on Puritan archival materials to prove that spiritualist spirits and Puritan demons are identical, ungodly sources of destructive female agency. |
Overview of the Semantics of TCOZ | Object-Z is an extension to the Z language designed to facilitate specification in an object-oriented style. It is an excellent tool for modelling data and operations, but its object semantics are single threaded, operations are atomic, and object control logic is defined implicitly. This makes it difficult to use Object-Z to capture the behaviour of concurrent real-time reactive systems. On the other hand, Timed CSP is good at modelling real-time concurrent behaviour, but has little support for modelling the state of a complex system. This paper describes the semantics of TCOZ, a language blended from Object-Z and Timed CSP. The semantic model adopted is the infinite timed failures model of Timed CSP, extended to include initial state and update events for modelling operations on internal state. |
The spectrum of Castleman's disease: mimics, radiologic pathologic correlation and role of imaging in patient management. | Castleman's disease (CD) is a rare benign lymphoid disorder with variable clinical course. The two principal histologic subtypes of CD are hyaline-vascular and plasma cell variants and the major clinicoradiological entities are unicentric and multicentric CD. Management of CD is tailored to clinicoradiologic subtype. In this review, we describe the CT, MR and PET/CT findings in Castleman's disease which can help suggest a diagnosis of CD as well as emphasize role of imaging in management of patients with CD. |
Harvesting WiFi Received Signal Strength Indicator (RSSI) for Control/Automation System in SOHO Indoor Environment with ESP8266 | WiFi is easily available almost everywhere nowadays. Due to this, there is increasing interest in harnessing this technology for purposes other than communication. Therefore, this research was carried out with the main idea of using WiFi in developing an efficient, low cost control system for the small office home office (SOHO) indoor environment. The main objective of the research is to develop a proof of concept that the WiFi received signal strength indicator (RSSI) can be harnessed and used to develop a control system. The control system basically helps to save energy in an intelligent manner with a very low cost for the controller circuit. There are two main parts in the development of the system. The first is extracting the RSSI monitoring feed information and analyzing it to design the control system. The second is the development of the controller circuit for a real environment. The simple yet inexpensive controller was tested in an indoor environment and results showed successful operation of the circuit developed. |
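A minimal sketch of the control idea: poll the RSSI feed and switch a load on a threshold. The threshold, polling period, and relay function are assumptions; on the actual ESP8266 this logic would run in firmware (e.g., reading WiFi.RSSI() in the Arduino core).

```python
import time

RSSI_PRESENT_DBM = -65      # assumed presence threshold (stronger = closer)

def read_rssi() -> int:
    """Placeholder for the RSSI feed; on an ESP8266 this would come from
    the WiFi stack rather than a constant."""
    return -58

def set_relay(on: bool) -> None:
    """Placeholder driver for a hypothetical relay controlling the load."""
    print("relay", "ON" if on else "OFF")

# A few iterations of the polling control loop: device nearby -> power on,
# area likely vacant -> power off to save energy.
for _ in range(3):
    set_relay(read_rssi() > RSSI_PRESENT_DBM)
    time.sleep(1)
```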
On the semantics of noun compounds | The noun compound – a sequence of nouns which function as a single noun – is very common in English texts. No language processing system should ignore expressions like steel soup pot cover if it wants to be serious about such high-end applications of computational linguistics as question answering, information extraction, text summarization, machine translation – the list goes on. Processing noun compounds, however, is far from trouble-free. For one thing, they can be bracketed in various ways: is it steel soup, steel pot or steel cover? Then there are relations inside a compound, annoyingly not signalled by any words: does pot contain soup or is it for cooking soup? These and many other research challenges are the subject of this special issue. The volume opens with Preslav Nakov's survey paper on the interpretation of noun compounds. It serves as an excellent, thorough introduction to the whole business of studying noun compounds computationally. Both theoretical and computational linguistics consider various formal definitions of the compound, its creation, its types and properties, its applications, its approximation by paraphrases. The discussion is also illustrated by a range of languages other than English. Next, the problem of bracketing is given a few typical solutions. There follows a detailed look at noun compound semantics, including coarse-grained and very fine-grained inventories of relations among nouns in a compound. Finally, a "capstone" project is presented: textual entailment, a tool which can be immensely helpful in many high-end applications. Diarmuid Ó Séaghdha and Ann Copestake tell us how to interpret compound nouns by classifying their relations with kernel methods. The kernels implement intuitive notions of lexical and relational similarity which are computed using distributional information extracted from large text corpora. The classification is tested at three different levels of specificity. Impressively, in all cases a combination of both lexical and relational information improves upon either source taken alone. |
Homography-based 2D Visual Tracking and Servoing | The objective of this paper is to propose a new homography-based approach to image-based visual tracking and servoing. The visual tracking algorithm proposed in the paper is based on a new efficient second-order minimization method. Theoretical analysis and comparative experiments with other tracking approaches show that the proposed method has a higher convergence rate than standard first-order minimization techniques. Therefore, it is well adapted to real-time robotic applications. The output of the visual tracking is a homography linking the current and the reference image of a planar target. Using the homography, a task function isomorphic to the camera pose has been designed. A new image-based control law is proposed which does not need any measure of the 3D structure of the observed target (e.g. the normal to the plane). The theoretical proof of the existence of the isomorphism between the task function and the camera pose and the theoretical proof of the stability of the control law are provided. The experimental results, obtained with a 6 d.o.f. robot, show the advantages of the proposed method with respect to the existing approaches. KEY WORDS—visual tracking, visual servoing, efficient second-order minimization, homography-based control law |
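As background for the homography-based record above: for a planar target viewed by a calibrated camera, the current and reference image points are related by a homography with the standard plane-induced decomposition (the notation here is ours, not taken from the paper):

$$\mathbf{p} \propto \mathbf{H}\,\mathbf{p}^{*}, \qquad \mathbf{H} = \mathbf{K}\left(\mathbf{R} + \frac{\mathbf{t}\,\mathbf{n}^{*T}}{d^{*}}\right)\mathbf{K}^{-1},$$

where $\mathbf{K}$ is the intrinsic camera matrix, $(\mathbf{R}, \mathbf{t})$ the camera displacement between the two views, and $\mathbf{n}^{*}, d^{*}$ the plane normal and distance in the reference frame. This decomposition indicates why the tracked homography can carry enough information to build a pose-isomorphic task function without separately measuring the plane normal.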
Copper-induced stress and antioxidative responses in roots of Brassica juncea L. | Copper-induced antioxidative reactions in the roots of Brassica juncea L. were investigated in both time- and concentration-dependent manners. Rapid uptake of Cu was observed immediately after the start of treatment. Application of Cu at 8 μM caused a 50% reduction in the biomass of Cu-treated roots compared with controls. Cu-induced root growth inhibition paralleled the level of root oxidative damage. Treatment with Cu at 8 μM induced a twofold increase in H2O2 content during the first 4 d, but it declined to the basal level thereafter. We also observed a twofold increase in superoxide dismutase activities with 8 μM Cu during the first 2 d. The stimulation lasted for 4 d and then gradually declined. Activities of both ascorbate peroxidase and guaiacol peroxidase in roots were low during the first 4 d after seedling exposure to 8 μM Cu, but increased significantly after that, suggesting that the increased enzyme activities are responsible for the removal of H2O2. Catalase activities were always suppressed under Cu stress. Treatment of seedlings with 8 μM Cu induced general decreases in both reduced ascorbate and dehydroascorbate. The reduced glutathione content decreased at early stages of Cu treatment; however, it was restored to the level of controls thereafter. In contrast, the oxidized glutathione content showed a progressive increase over the time of Cu treatment. The total non-protein thiol content increased during the first several days, but declined at later stages.
System-level throughput of NOMA using intra-beam superposition coding and SIC in MIMO downlink when channel estimation error exists | We investigate the influence of channel estimation error on the achievable system-level throughput performance of our previous non-orthogonal multiple access (NOMA) scheme in a multiple-input multiple-output (MIMO) downlink. The NOMA scheme employs intra-beam superposition coding of a multiuser signal at the transmitter and spatial filtering of inter-beam interference followed by an intra-beam successive interference canceller (SIC) at the user terminal receiver. The intra-beam SIC cancels the inter-user interference within a beam. This configuration achieves reduced overhead for the downlink reference signaling for channel estimation at the user terminal in the case of non-orthogonal user multiplexing and enables the SIC receiver to be applied to the MIMO downlink. The channel estimation error in the NOMA scheme causes residual interference in the SIC process, which decreases the achievable user throughput. Furthermore, the channel estimation error causes errors in the transmission rate control for the respective users, which may result in decoding errors not only at the destination user terminal but also at other user terminals in the SIC process. However, we show that by using a simple transmission rate back-off algorithm, the impact of the channel estimation error is effectively abated and the NOMA scheme achieves clear average and cell-edge user throughput gains relative to orthogonal multiple access (OMA), similar to the case with perfect channel estimation.
Underwater Object Tracking Using Sonar and USBL Measurements | In the scenario where an underwater vehicle tracks an underwater target, reliable estimation of the target position is required. While USBL measurements provide target position measurements at a low but regular update rate, multibeam sonar imagery gives high-precision measurements but in a limited field of view. This paper describes the development of a tracking filter that fuses USBL and processed sonar image measurements for tracking underwater targets, with the purpose of obtaining reliable tracking estimates at a steady rate, even in cases when either sonar or USBL measurements are not available or are faulty. The proposed algorithms significantly increase safety in scenarios where an underwater vehicle has to maneuver in close vicinity to a human diver, whose emitted air bubbles can deteriorate tracking performance. In addition to the tracking filter development, special attention is devoted to adaptation of the region of interest within the sonar image by using the tracking filter covariance transformation, for the purpose of improving detection and avoiding false sonar measurements. The developed algorithms are tested on real experimental data obtained in field conditions. Statistical analysis shows superior performance of the proposed filter compared to conventional tracking using pure USBL or sonar measurements.
TWINE: A real-time system for TWeet analysis via INformation Extraction | In recent years, the amount of user-generated content shared on the Web has significantly increased, especially in social media environments such as Twitter, Facebook and Google+. This large quantity of data has generated the need for reactive and sophisticated systems for capturing and understanding the underlying information enclosed in it. In this paper we present TWINE, a real-time system for the big data analysis and exploration of information extracted from Twitter streams. The proposed system, based on a Named Entity Recognition and Linking pipeline and multi-dimensional spatial geo-localization, is managed by a scalable and flexible architecture for interactive visualization of micropost stream insights. The demo is available at http://twinemind.cloudapp.net/streaming.
DIZK: A Distributed Zero Knowledge Proof System | Recently there has been much academic and industrial interest in practical implementations of zero knowledge proofs. These techniques allow a party to prove to another party that a given statement is true without revealing any additional information. In a Bitcoin-like system, this allows a payer to prove validity of a payment without disclosing the payment’s details. Unfortunately, the existing systems for generating such proofs are very expensive, especially in terms of memory overhead. Worse yet, these systems are “monolithic”, so they are limited by the memory resources of a single machine. This severely limits their practical applicability. We describe DIZK, a system that distributes the generation of a zero knowledge proof across machines in a compute cluster. Using a set of new techniques, we show that DIZK scales to computations of up to billions of logical gates (100× larger than prior art) at a cost of 10 μs per gate (100× faster than prior art). We then use DIZK to study various security applications. |
TURNING PARAMETERS OPTIMIZATION FOR SURFACE ROUGHNESS BY TAGUCHI METHOD | In the present study an attempt has been made to investigate the effect of cutting parameters (cutting speed, feed rate, and depth of cut) on surface roughness in a turning operation on cast iron. Experiments have been conducted using Taguchi’s experimental design technique. An orthogonal array, the signal-to-noise ratio, and the analysis of variance are employed to investigate the cutting characteristics of cast iron using a carbide tool. Optimum cutting parameters for minimizing surface roughness were determined. Experimental results reveal that, among the cutting parameters, cutting speed is the most significant machining parameter for surface roughness, followed by feed rate and depth of cut, in the specified test range.
Suede: a Wizard of Oz prototyping tool for speech user interfaces | Speech-based user interfaces are growing in popularity. Unfortunately, the technology expertise required to build speech UIs precludes many individuals from participating in the speech interface design process. Furthermore, the time and knowledge costs of building even simple speech systems make it difficult for designers to iteratively design speech UIs. SUEDE, the speech interface prototyping tool we describe in this paper, allows designers to rapidly create prompt/response speech interfaces. It offers an electronically supported Wizard of Oz (WOz) technique that captures test data, allowing designers to analyze the interface after testing. This informal tool enables speech user interface designers, even non-experts, to quickly create, test, and analyze speech user interface prototypes. |
Feedforward semantic segmentation with zoom-out features | We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6% average accuracy on the PASCAL VOC 2012 test set. |
Maternal BMI, parity, and pregnancy weight gain: influences on offspring adiposity in young adulthood. | CONTEXT
The prevalence of obesity among women of childbearing age is increasing. Emerging evidence suggests that this has long-term adverse influences on offspring health.
OBJECTIVE
The aim was to examine whether maternal body composition and gestational weight gain have persisting effects on offspring adiposity in early adulthood.
DESIGN AND SETTING
The Motherwell birth cohort study was conducted in a general community in Scotland, United Kingdom.
PARTICIPANTS
We studied 276 men and women whose mothers' nutritional status had been characterized in pregnancy. Four-site skinfold thicknesses, waist circumference, and body mass index (BMI) were measured at age 30 yr; sex-adjusted percentage body fat and fat mass index were calculated.
MAIN OUTCOME MEASURE
Indices of offspring adiposity at age 30 yr were measured.
RESULTS
Percentage body fat was greater in offspring of mothers with a higher BMI at the first antenatal visit (rising by 0.35%/kg/m2; P<0.001) and in offspring whose mothers were primiparous (difference, 1.5% in primiparous vs. multiparous; P=0.03). Higher offspring percentage body fat was also independently associated with higher pregnancy weight gain (7.4%/kg/wk; P=0.002). There were similar significant associations of increased maternal BMI, greater pregnancy weight gain, and parity with greater offspring waist circumference, BMI, and fat mass index.
CONCLUSIONS
Adiposity in early adulthood is influenced by prenatal factors independently of current lifestyle factors. Maternal adiposity, greater gestational weight gain, and parity all impact offspring adiposity. Strategies to reduce the impact of maternal obesity and greater pregnancy weight gain on offspring's future health are required. |
Reinforcement Learning for Relation Classification From Noisy Data | Existing relation classification methods that rely on distant supervision assume that a bag of sentences mentioning an entity pair all describe a relation for that pair. Such methods, performing classification at the bag level, cannot identify the mapping between a relation and a sentence, and largely suffer from the noisy labeling problem. In this paper, we propose a novel model for relation classification at the sentence level from noisy data. The model has two modules: an instance selector and a relation classifier. The instance selector chooses high-quality sentences with reinforcement learning and feeds the selected sentences into the relation classifier, and the relation classifier makes sentence-level predictions and provides rewards to the instance selector. The two modules are trained jointly to optimize the instance selection and relation classification processes. Experimental results show that our model can deal with noisy data effectively and obtains better performance for relation classification at the sentence level.
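As an illustration of the selector–classifier loop described above, here is a minimal REINFORCE-style sketch; the `policy` and `classifier` interfaces and the no-selection penalty are hypothetical stand-ins for the paper's components:

```python
import numpy as np

def selector_objective(bag, policy, classifier):
    """Sample a keep/drop decision per sentence from the instance selector,
    then reward the selection with the relation classifier's score."""
    log_probs, chosen = [], []
    for sentence in bag:
        p_keep = policy(sentence)                  # P(keep | sentence), in (0, 1)
        keep = np.random.rand() < p_keep
        log_probs.append(np.log(p_keep if keep else 1.0 - p_keep))
        if keep:
            chosen.append(sentence)
    # Reward: log-likelihood of the bag's relation label on the kept sentences;
    # the -1.0 penalty for empty selections is an assumption, not from the paper.
    reward = classifier.log_likelihood(chosen) if chosen else -1.0
    # REINFORCE surrogate: ascending its gradient trains the selector.
    return reward * np.sum(log_probs)
```

In joint training, this objective would be maximized for the selector while the classifier is updated on the sentences the selector keeps.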
Recent advances in LVCSR : A benchmark comparison of performances | Large Vocabulary Continuous Speech Recognition (LVCSR), which is characterized by high variability of the speech, is the most challenging task in automatic speech recognition (ASR). Believing that the evaluation of ASR systems on relevant and common speech corpora is one of the key factors that help accelerate research, we present, in this paper, a benchmark comparison of the performances of current state-of-the-art LVCSR systems over different speech recognition tasks. Furthermore, we objectively highlight the best-performing technologies and the best accuracy achieved so far in each task. The benchmarks show that Deep Neural Networks and Convolutional Neural Networks have proven their efficiency on several LVCSR tasks by outperforming traditional Hidden Markov Models and Gaussian Mixture Models. They also show that, despite satisfying performances on some LVCSR tasks, the problem of large-vocabulary speech recognition is far from solved in others, where more research effort is still needed.
Time-progressive mantle-melt evolution and magma production in a Tethyan marginal sea: A case study of the Albanide-Hellenide ophiolites | We present a comprehensive overview of the melt evolution of the upper mantle peridotites and different lava types occurring in the Jurassic Albanide-Hellenide ophiolites, based on new and extant geochemical data and trace element modeling. Peridotites consist of lherzolites and harzburgites that are variably depleted, and show increasing light rare earth element (LREE) enrichment with increasing whole-rock depletion. The spatial-temporal relationships of volcanic rocks indicate four discrete types with progressively younging ages: (1) normal mid-ocean ridge basalts (N-MORBs); (2) medium-Ti basalts (MTBs); (3) island arc tholeiitic (IAT) basalts; (4) boninitic rocks. Our REE modeling reveals the following results. (1) Moderately depleted lherzolites represent N-MORB mantle residua produced by 10%–20% partial melting of a depleted MORB mantle source. Melt extraction formed N-MORB lavas. (2) Residual lherzolite underwent 5%–8% partial melting without any subduction influence, producing MTB magmas. (3) Following subduction initiation, these refractory lherzolites were enriched in LREEs by subduction-derived fluids. Their partial melting (~10%–20%) generated IAT magmas. (4) With continued subduction, the highly depleted residual mantle left after the previous melting events underwent significant LREE enrichment and high degree (15%–25%) partial melting, producing the youngest, boninitic rocks. The residual mantle after boninitic melt extraction is represented by extremely refractory harzburgites. This progressive melt evolution of the upper mantle peridotites and volcanic rock types is compatible with that of the subduction initiation–related magmatism and mantle dynamics in the Izu-Bonin-Mariana arc-trench rollback system, and indicates a time-progressive mantle-melt evolution in the upper plate of the Tethyan subduction system.
A DSSS superregenerative receiver with tau-dither loop | This paper describes a direct-sequence spread-spectrum superregenerative receiver using a PN code synchronization loop based on the tau-dither technique. The receiver minimizes overall complexity by using a single signal processing path for data detection and PN code synchronization. An analytical study of the loop dynamics is presented, and the conditions for optimum performance are examined. Experimental results in the 433 MHz European ISM band confirm the receiver's ability to perform acquisition and tracking, achieving a sensitivity of -103 dBm and an input dynamic range of 65 dB.
Literature Review of Mobile Robots for Manufacturing |
The Virtual Reality Modeling Language and Java | Why VRML and Java together? Over 20 million VRML browsers have been shipped with Web browsers, making interactive 3D graphics suddenly available for any desktop. Java adds complete programming capabilities plus network access, making VRML fully functional and portable. This is a powerful new combination, especially as ongoing research shows that VRML plus Java provide extensive support for building large-scale virtual environments (LSVEs). This article provides historical background, a detailed overview of VRML 3D graphics, sample VRML-Java test programs, and a look ahead at future work. |
BeamSpy: Enabling Robust 60 GHz Links Under Blockage | Due to high directionality and small wavelengths, 60 GHz links are highly vulnerable to human blockage. To overcome blockage, 60 GHz radios can use a phased-array antenna to search for and switch to unblocked beam directions. However, these techniques are reactive and only trigger after the blockage has occurred; hence, they take time to recover the link. In this paper, we propose BeamSpy, which can instantaneously predict the quality of 60 GHz beams, even under blockage, without costly beam searching. BeamSpy captures unique spatial and blockage-invariant correlations among beams through a novel prediction model, which lets us immediately select the best alternative beam direction whenever the current beam's quality degrades. We apply BeamSpy to a run-time fast beam adaptation protocol and a blockage-risk assessment scheme that can guide blockage-resilient link deployment. Our experiments on a reconfigurable 60 GHz platform demonstrate the effectiveness of BeamSpy's prediction framework and its usefulness in enabling robust 60 GHz links.
Organizational Culture: What is Organizational Culture? | Artifacts are the visible components of culture: they are easy to identify and have some physical form, yet their perception varies from one individual to another. (1) Rituals and ceremonies: new-hire trainings, new-hire welcome lunches, annual corporate conferences, awards, and offsite meetings and trainings are a few examples of the most common rituals and ceremonies. (2) Symbols and slogans: these are high-level abstractions of the culture that effectively summarize the organization's intrinsic behavior. Symbols are rituals, awards or incentives that symbolize preferred behavior; "employee of the month" is one example of a symbol. Slogans are linguistic phrases intended for the same purpose; "customer first" is an example of a corporate slogan. (3) Stories: these are narratives based on true events, but often exaggerated as they are told by old employees to new ones. Stories about the organization's founders or other dominant leaders are the most common, recounting the challenges they faced and how they dealt with those hurdles. In some form, these are stories of the organization's heroes; employees relate the current system to events that happened in the past, and stories are the medium that carries these legacies.
Revisiting IS business value research: what we already know, what we still need to know, and how we can get there | The business value of investments in Information Systems (IS) has been, and is predicted to remain, one of the major research topics for IS researchers. While the vast majority of research papers on IS business value find empirical evidence in favour of both the operational and strategic relevance of IS, the fundamental question of the causal relationship between IS investments and business value remains partly unexplained. Three research tasks are essential requisites on the path towards addressing this epistemological question: the synthesis of existing knowledge, the identification of a lack of knowledge and the proposition of paths for closing the knowledge gaps. This paper considers each of these tasks. Research findings include that correlations between IS investments and productivity vary widely among companies and that the mismeasurement of IS investment impact may be rooted in delayed effects. Key limitations of current research are based on the ambiguity and fuzziness of IS business value, the neglected disaggregation of IS investments, and the unexplained process of creating internal and competitive value. Addressing these limitations, we suggest research paths, such as the identification of synergy opportunities of IS assets, and the explanation of relationships between IS innovation and change in IS capabilities.
Global Textual Relation Embedding for Relational Understanding | Pre-trained embeddings such as word embeddings and sentence embeddings have been widely shown to be a fundamental tool for facilitating downstream tasks. In this work, we investigate the possibility of learning general-purpose embeddings of textual relations, defined as the shortest dependency path between two entities, and whether they can facilitate downstream tasks requiring relational understanding of text. To learn such embeddings, we create the largest-ever distant supervision dataset by linking the entire English ClueWeb09 corpus to Freebase. Using the global co-occurrence statistics between textual and knowledge base relations as the supervision signal, we learn the embeddings of textual relations with the Transformer model. We conduct intrinsic evaluation as well as extrinsic evaluation on two downstream tasks, relation extraction and knowledge base completion, and show that the learned textual relation embeddings serve as a good prior for relational understanding tasks and boost their performance. Our code and pre-trained model can be found at https://github.com/soon-to-be-released.
A Practical Perspective on Latent Structured Prediction for Coreference Resolution | Latent structured prediction theory proposes powerful methods such as the Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, only limited work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carry out a practical study comparing online learning (LSP) with LSSVM for the first time. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and the much lower accuracy produced by Kruskal's spanning-tree algorithm. In this respect, we also propose a new, effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM while being much more efficient.
Exploring the taxonomic and associative link between emotion and function for robot sound design | Sound is a medium that conveys functional and emotional information in the form of multilayered streams. By exploiting this property, robot sound design can open a way to more efficient communication in human-robot interaction. As the first step of this research, we examined how individuals perceive the functional and emotional intention of robot sounds, and whether the information perceived from sound is associated with their previous experience with science fiction movies. The sound clips were selected based on the context of the movie scene (i.e., Wall-E, R2-D2, BB8, Transformer) and classified as functional (i.e., platform, monitoring, alerting, feedback) and emotional (i.e., positive, neutral, negative). A total of 12 participants were asked to identify the perceived properties for each of the 30 items. We found that the perceived emotional and functional messages varied from those originally intended and differed by previous experience.
Dynamic Graph-Based Software Watermarking | Watermarking embeds a secret message into a cover message. In media watermarking the secret is usually a copyright notice and the cover a digital image. Watermarking an object discourages intellectual property theft, or when such theft has occurred, allows us to prove ownership. The Software Watermarking problem can be described as follows. Embed a structure W into a program P such that: W can be reliably located and extracted from P even after P has been subjected to code transformations such as translation, optimization and obfuscation; W is stealthy; W has a high data rate; embedding W into P does not adversely affect the performance of P; and W has a mathematical property that allows us to argue that its presence in P is the result of deliberate actions. In this paper we describe a software watermarking technique in which a dynamic graph watermark is stored in the execution state of a program. Because of the hardness of pointer alias analysis such watermarks are difficult to attack automatically. |
A Multi-ESD-Path Low-Noise Amplifier With a 4.3-A TLP Current Level in 65-nm CMOS | This paper studies electrostatic discharge (ESD)-protected RF low-noise amplifiers (LNAs) in 65-nm CMOS technology. Three different ESD designs, including double-diode, modified silicon-controlled rectifier (SCR), and modified-SCR with double-diode configurations, are employed to realize ESD-protected LNAs at 5.8 GHz. By using the modified SCR in conjunction with the double-diode, a 5.8-GHz LNA with multiple ESD current paths demonstrates a 4.3-A transmission line pulse (TLP) failure level, corresponding to a ~6.5-kV Human Body Model (HBM) ESD protection level. Under a supply voltage of 1.2 V and a drain current of 6.5 mA, the proposed ESD-protected LNA demonstrates a noise figure of 2.57 dB with an associated power gain of 16.7 dB. The input third-order intercept point (IIP3) is -11 dBm, and the input and output return losses are greater than 15.9 and 20 dB, respectively.
The effect of acid accumulation in power-transformer oil on the aging rate of paper insulation | In this article we explain how acids are formed in the oil and in paper insulation and their involvement in paper degradation. The acidity of the oil reached at the end of a number of paper-aging experiments is reported, and relationships between it and temperature, moisture, oxygen, degree of polymerization (DP), and aging time are presented. |
FAQ-Centered Organizational Memory | We use the term "FAQs" in a broader sense. Take the knowledge related to a customer service department as an example: our FAQs will practically include all kinds of questions that are asked of that department, plus additional ones that are potentially important. We treat the answers to the FAQs as knowledge pieces that are valuable to the organization and need to be preserved. The value of a piece of information in an organization is related to its retrieval (or request) frequency. Therefore, collecting the answers to frequently asked questions (FAQs) and constructing a good retrieval mechanism is a useful way to maintain organizational memory (OM). Since natural language is the easiest way for people to communicate, we have designed a natural language dialogue system for sharing the valuable knowledge of an organization. The system receives a natural language query from the user and matches it with a FAQ. Either an appropriate answer is returned according to the user profile, or the system asks back another question so that a more detailed query can be formed. This dialogue continues until the user is satisfied or a detailed answer is obtained. In this paper, we apply natural language processing techniques to build a computer system that helps achieve the goal of OM. The FAQs are indexed in our knowledge representation map and their corresponding answers are stored in a multimedia database, be it a document, a diagram, a program, a database, a video or an audio recording. We believe there should be as little restriction as possible on the format of the representation of the OM. However, unlike pure text data, different storage media do not have a uniform management environment, and there is no easy way to retrieve the desired answer. Hence, the key to the retrieval problem lies in an effective indexing mechanism. In this paper, we use a "natural language question" as the index for each knowledge piece. Such a question is more like a "title" for that knowledge piece than a detailed description as seen in most metadata. The answers to one question may vary depending on the user; hence, there can be multiple links from one question to different answers.
Intelligence-Augmented Rat Cyborgs in Maze Solving | Cyborg intelligence is an emerging kind of intelligence paradigm. It aims to deeply integrate machine intelligence with biological intelligence by connecting machines and living beings via neural interfaces, enhancing strength by combining the biological cognition capability with the machine computational capability. Cyborg intelligence is considered to be a new way to augment living beings with machine intelligence. In this paper, we build rat cyborgs to demonstrate how they can expedite the maze escape task with integration of machine intelligence. We compare the performance of maze solving by computer, by individual rats, and by computer-aided rats (i.e. rat cyborgs). They were asked to find their way from a constant entrance to a constant exit in fourteen diverse mazes. Performance of maze solving was measured by steps, coverage rates, and time spent. The experimental results with six rats and their intelligence-augmented rat cyborgs show that rat cyborgs have the best performance in escaping from mazes. These results provide a proof-of-principle demonstration for cyborg intelligence. In addition, our novel cyborg intelligent system (rat cyborg) has great potential in various applications, such as search and rescue in complex terrains. |
Aripiprazole monotherapy in patients with rapid-cycling bipolar I disorder: an analysis from a long-term, double-blind, placebo-controlled study | AIMS
Rapid-cycling bipolar disorder is difficult to treat and associated with greater morbidity than non-rapid-cycling disease. This post hoc analysis evaluated 28 patients with rapid-cycling bipolar I disorder from a 100-week, double-blind, placebo-controlled study assessing long-term efficacy, safety and tolerability of aripiprazole in patients with bipolar I disorder (most recently manic/mixed).
METHODS
Following ≥ 6 consecutive weeks' stabilisation with open-label aripiprazole, patients were randomised (1:1) to aripiprazole or placebo. Patients completing 26 weeks' treatment without relapse could continue for a further 74 weeks. The primary end-point was time to relapse for manic, mixed or depressive symptoms, defined as discontinuation due to lack of efficacy. Safety assessments included adverse event (AE) monitoring and changes in weight and lipid, glucose and prolactin levels.
RESULTS
Of the 28 patients (aripiprazole, n = 14; placebo, n = 14) with rapid-cycling bipolar disorder, 12 (aripiprazole, n = 7; placebo, n = 5) completed the initial 26-week treatment period and three (all aripiprazole treated) completed the 100-week, double-blind period. Time to relapse was significantly longer with aripiprazole vs. placebo at week 26 [log-rank p = 0.033; 26-week hazard ratio = 0.21 (95% CI: 0.04, 1.03)] and week 100 [log-rank p = 0.017; 100-week hazard ratio = 0.18 (95% CI: 0.04, 0.88)]. The most commonly reported AEs with aripiprazole during the 100 weeks (≥ 10% incidence and twice placebo) were anxiety (n = 4), sinusitis (n = 4), depression (n = 3) and upper respiratory infection (n = 3). One aripiprazole-treated patient discontinued due to an AE (akathisia). There were no significant between-group differences in mean changes in weight or metabolic parameters.
CONCLUSION
In this small, post hoc subanalysis, aripiprazole maintained efficacy and was generally well tolerated in the long-term treatment of rapid-cycling bipolar disorder. Further research with prospectively designed and adequately powered trials is warranted. |
Cost-effectiveness of lapatinib plus capecitabine in women with HER2+ metastatic breast cancer who have received prior therapy with trastuzumab | In a phase III trial of women with HER2+ metastatic breast cancer (MBC) previously treated with trastuzumab, an anthracycline, and taxanes (EGF100151), lapatinib plus capecitabine (L + C) improved time to progression (TTP) versus capecitabine monotherapy (C-only). In a trial including HER2+ MBC patients who had received at least one prior course of trastuzumab and no more than one prior course of palliative chemotherapy (GBG 26/BIG 03-05), continued trastuzumab plus capecitabine (T + C) also improved TTP. An economic model using patient-level data from EGF100151, published results of GBG 26/BIG 03-05, and other literature was used to evaluate the incremental cost per quality-adjusted life-year (QALY) gained with L + C versus C-only and versus T + C in women with HER2+ MBC previously treated with trastuzumab, from the UK National Health Service (NHS) perspective. Expected costs were £28,816 with L + C, £13,985 with C-only and £28,924 with T + C. Corresponding QALYs were 0.927, 0.737 and 0.896. In the base case, L + C was estimated to provide more QALYs at a lower cost compared with T + C; the cost per QALY gained was £77,993 with L + C versus C-only. In pairwise probabilistic sensitivity analyses, the probability that L + C is preferred to C-only was 0.03 given a threshold of £30,000. The probability that L + C is preferred to T + C was 0.54 regardless of the threshold. When compared against capecitabine alone, the addition of lapatinib has a cost-effectiveness ratio exceeding the threshold normally used by NICE. Compared with T + C, L + C is dominant in the base case and approximately equally likely to be cost-effective in probabilistic sensitivity analyses over a wide range of threshold values.
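The £77,993 figure is the incremental cost-effectiveness ratio obtained directly from the expected costs and QALYs quoted above; the small difference from the arithmetic below reflects rounding in the published inputs:

$$\mathrm{ICER}_{L+C\ \text{vs.}\ C} = \frac{\pounds 28{,}816 - \pounds 13{,}985}{0.927 - 0.737} = \frac{\pounds 14{,}831}{0.190} \approx \pounds 78{,}000 \text{ per QALY gained.}$$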
Trocar types in laparoscopy. | BACKGROUND
Laparoscopic surgery has led to great clinical improvements in many fields of surgery; however, it requires the use of trocars, which may lead to complications as well as postoperative pain. The complications include intra-abdominal vascular and visceral injury, trocar site bleeding, herniation and infection. Many of these are extremely rare, such as vascular and visceral injury, but may be life-threatening; therefore, it is important to determine how these types of complications may be prevented. It is hypothesised that trocar-related complications and pain may be attributable to certain types of trocars. This systematic review was designed to improve patient safety by determining which, if any, specific trocar types are less likely to result in complications and postoperative pain.
OBJECTIVES
To analyse the rates of trocar-related complications and postoperative pain for different trocar types used in people undergoing laparoscopy, regardless of the condition.
SEARCH METHODS
Two experienced librarians conducted a comprehensive search for randomised controlled trials (RCTs) in the Menstrual Disorders and Subfertility Group Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, PsycINFO, CINAHL, CDSR and DARE (up to 26 May 2015). We checked trial registers and reference lists from trial and review articles, and approached content experts.
SELECTION CRITERIA
RCTs that compared rates of trocar-related complications and postoperative pain for different trocar types used in people undergoing laparoscopy. The primary outcomes were major trocar-related complications, such as mortality, conversion due to any trocar-related adverse event, visceral injury, vascular injury and other injuries that required intensive care unit (ICU) management or a subsequent surgical, endoscopic or radiological intervention. Secondary outcomes were minor trocar-related complications and postoperative pain. We excluded trials that studied non-conventional laparoscopic incisions.
DATA COLLECTION AND ANALYSIS
Two review authors independently conducted the study selection, risk of bias assessment and data extraction. We used GRADE to assess the overall quality of the evidence. We performed sensitivity analyses and investigation of heterogeneity, where possible.
MAIN RESULTS
We included seven RCTs (654 participants). One RCT studied four different trocar types, while the remaining six RCTs studied two different types. The following trocar types were examined: radially expanding versus cutting (six studies; 604 participants), conical blunt-tipped versus cutting (two studies; 72 participants), radially expanding versus conical blunt-tipped (one study; 28 participants) and single-bladed versus pyramidal-bladed (one study; 28 participants). The evidence was very low quality: limitations were insufficient power, very serious imprecision and incomplete outcome data. Primary outcomes: Four of the included studies reported on visceral and vascular injury (571 participants), which are two of our primary outcomes. These RCTs examined 473 participants where radially expanding versus cutting trocars were used. We found no evidence of a difference in the incidence of visceral (Peto odds ratio (OR) 0.95, 95% confidence interval (CI) 0.06 to 15.32) and vascular injury (Peto OR 0.14, 95% CI 0.0 to 7.16), both very low quality evidence. However, the incidence of these types of injuries was extremely low (i.e. two cases of visceral and one case of vascular injury for all of the included studies). There were no cases of either visceral or vascular injury for any of the other trocar type comparisons. No studies reported on any other primary outcomes, such as mortality, conversion to laparotomy, intensive care admission or any re-intervention. Secondary outcomes: For trocar site bleeding, the use of radially expanding trocars was associated with a lower risk of trocar site bleeding compared to cutting trocars (Peto OR 0.28, 95% CI 0.14 to 0.54, five studies, 553 participants, very low quality evidence). This suggests that if the risk of trocar site bleeding with the use of cutting trocars is assumed to be 11.5%, the risk with the use of radially expanding trocars would be 3.5%. There was insufficient evidence to reach a conclusion regarding other trocar types, their related complications and postoperative pain, as no studies reported data suitable for analysis.
AUTHORS' CONCLUSIONS
Data were lacking on the incidence of major trocar-related complications, such as visceral or vascular injury, when comparing different trocar types with one another. However, caution is urged when interpreting these results because the incidence of serious complications following the use of a trocar was extremely low. There was very low quality evidence for minor trocar-related complications suggesting that the use of radially expanding trocars compared to cutting trocars leads to reduced incidence of trocar site bleeding. These secondary outcomes are viewed to be of less clinical importance. Large, well-conducted observational studies are necessary to answer the questions addressed in this review because serious complications, such as visceral or vascular injury, are extremely rare. However, for other outcomes, such as trocar site herniation, bleeding or infection, large observational studies may be needed as well. In order to answer these questions, it is advisable to establish an international network for recording these types of complications following laparoscopic surgery. |
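For reference, the 11.5% versus 3.5% illustration in the secondary outcomes follows from applying the Peto odds ratio to the assumed baseline risk of trocar site bleeding with cutting trocars:

$$\text{odds}_{\text{radial}} = 0.28 \times \frac{0.115}{1 - 0.115} \approx 0.0364, \qquad p_{\text{radial}} = \frac{0.0364}{1 + 0.0364} \approx 3.5\%.$$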
Prepubertal Vaginal Bleeding: Etiology, Diagnostic Approach, and Management. | IMPORTANCE
Prepubertal vaginal bleeding outside the neonatal period is always abnormal and is very alarming to parents. A variety of practitioners, including obstetrician-gynecologists and pediatricians, may be asked to see patients with this presenting complaint, yet many do not receive adequate training in pediatric gynecology.
EVIDENCE ACQUISITION
Review of the published literature in PubMed, focusing on the last 20 years, regarding the incidence, etiologies, diagnosis, and management strategies for the common causes of prepubertal vaginal bleeding.
RESULTS
Careful history taking and pediatric-specific gynecological examination skills, including awareness of normal anatomy across the age spectrum and the ability to identify an estrogenized hymen, are keys to the appropriate assessment of this clinical problem.
CONCLUSIONS AND RELEVANCE
Prepubertal vaginal bleeding has many causes and requires a thorough targeted history and pediatric genitourinary examination, requiring knowledge of the variants of normal pediatric genitourinary anatomy. Most causes can be easily treated and are less likely to be due to sexual abuse or malignancy. |
Cryptocurrency portfolio management with deep reinforcement learning | Portfolio management is the decision-making process of allocating an amount of fund into different financial investment products. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example. This paper presents a model-less convolutional neural network with the historic prices of a set of financial assets as its input, outputting the portfolio weights of the set. The network is trained with 0.7 years' price data from a cryptocurrency exchange. The training is done in a reinforcement manner, maximizing the accumulative return, which is regarded as the reward function of the network. Back-test trading experiments with a trading period of 30 minutes are conducted in the same market, achieving 10-fold returns over 1.8 months. Some recently published portfolio selection strategies are also used to perform the same back-tests, and their results are compared with the neural network's. The network is not limited to cryptocurrencies, but can be applied to any other financial market.
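A minimal sketch of the reward described above — the accumulative (log) return of a sequence of portfolio weight vectors — assuming a flat commission rate and per-period price relatives; the paper's exact transaction-cost treatment may differ:

```python
import numpy as np

def accumulative_log_return(weights, price_relatives, commission=0.0025):
    """weights[t]: portfolio weight vector chosen at period t (sums to 1).
    price_relatives[t]: element-wise close[t+1] / close[t] for each asset.
    commission: flat per-period trading cost (an illustrative assumption)."""
    log_return = 0.0
    for w, y in zip(weights, price_relatives):
        gross = np.dot(w, y)                        # portfolio value multiplier
        log_return += np.log(gross * (1.0 - commission))
    return log_return                               # the quantity used as reward
```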
Constructing Deterministic Finite-State Automata in Recurrent Neural Networks | Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contributes to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, that is, the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a "programmed" neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature.
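The construction rests on the second-order state update, in which programmed weights of large magnitude saturate the sigmoid so each DFA state keeps a stable near-binary encoding; a small NumPy sketch (array shapes and names are ours, not the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def second_order_step(h, x_onehot, W, b):
    """One update of a second-order recurrent network:
    h_j(t+1) = sigmoid( sum_{i,k} W[j, i, k] * h[i](t) * x[k](t) + b[j] ).
    h: current state vector (n neurons); x_onehot: current input symbol (m).
    Programming W with entries of large magnitude +/-H drives each neuron
    toward 0 or 1, which is what stabilizes the DFA state representation."""
    return sigmoid(np.einsum('jik,i,k->j', W, h, x_onehot) + b)
```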
End-to-End Compromised Account Detection | Social media, e.g. Twitter, has become a widely used medium for the exchange of information, but it has also become a valuable tool for hackers to spread misinformation through compromised accounts. Hence, detecting compromised accounts is a necessary step toward a safe and secure social media environment. Nevertheless, detecting compromised accounts faces several challenges. First, the social media activities of users are temporally correlated, which plays an important role in compromised account detection. Second, data associated with social media accounts is inherently sparse. Finally, social contagions, where multiple accounts become compromised, take advantage of user connectivity to propagate their attack; thus, how to represent each user's network features for compromised account detection is an additional challenge. To address these challenges, we propose an End-to-End Compromised Account Detection framework (E2ECAD). E2ECAD effectively captures temporal correlations via an LSTM (Long Short-Term Memory) network. Further, it addresses the sparsity problem by defining and employing a user context representation. Meanwhile, informative network-related features are modeled efficiently. To verify the working of the framework, we construct a real-world dataset of compromised accounts on Twitter and conduct extensive experiments. The results show that E2ECAD outperforms state-of-the-art compromised account detection algorithms.
Creating a Large Benchmark for Open Information Extraction | Open information extraction (Open IE) was presented as an unrestricted variant of traditional information extraction. It has been gaining substantial attention, manifested by a large number of automatic Open IE extractors and downstream applications. In spite of this broad attention, the Open IE task definition has been lacking – there are no formal guidelines and no large-scale gold standard annotation. Subsequently, the various implementations of Open IE resorted to small-scale post-hoc evaluations, inhibiting an objective and reproducible cross-system comparison. In this work, we develop a methodology that leverages the recent QA-SRL annotation to create the first independent and large-scale Open IE annotation, and use it to automatically compare the most prominent Open IE systems.
Autonomous UAV Navigation Using Reinforcement Learning | Unmanned aerial vehicles (UAVs) are commonly used for missions in unknown environments, where an exact mathematical model of the environment may not be available. This paper provides a framework for using reinforcement learning to allow a UAV to navigate successfully in such environments. We conducted simulations and a real implementation to show how a UAV can successfully learn to navigate through an unknown environment. Technical aspects of applying the reinforcement learning algorithm to a UAV system and to UAV flight control are also addressed. This will enable continuing research using UAVs with learning capabilities in more important applications, such as wildfire monitoring or search and rescue missions.
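As a sketch of the idea, not the paper's implementation: a tabular Q-learning update over a discretized navigation state space (state/action encodings and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One Q-learning step: Q[s, a] moves toward the bootstrapped target
    r + gamma * max_a' Q[s_next, a']. Q is a (num_states, num_actions) table;
    s could index a discretized UAV position, a one of the motion commands."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```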
Power Laws in Economics and Finance | A power law is the form taken by a large number of surprising empirical regularities in economics and finance. This article surveys well-documented empirical power laws concerning income and wealth, the size of cities and firms, stock market returns, trading volume, international trade, and executive pay. It reviews detail-independent theoretical motivations that make sharp predictions concerning the existence and coefficients of power laws, without requiring delicate tuning of model parameters. These theoretical mechanisms include random growth, optimization, and the economics of superstars coupled with extreme value theory. Some of the empirical regularities currently lack an appropriate explanation. This article highlights these open areas for future research.
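For concreteness, a power law for a size variable $X$ (city size, firm size, income) takes the counter-cumulative form

$$P(X > x) = \left(\frac{x_{\min}}{x}\right)^{\zeta}, \qquad x \ge x_{\min},$$

so $\ln P(X > x)$ is linear in $\ln x$ with slope $-\zeta$, which is how such laws are detected empirically; Zipf's law for city sizes is the special case $\zeta \approx 1$.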
Flare: Native Compilation for Heterogeneous Workloads in Apache Spark | The need for modern data analytics to combine relational, procedural, and map-reduce-style functional processing is widely recognized. State-of-the-art systems like Spark have added SQL front-ends and relational query optimization, which promise an increase in expressiveness and performance. But how good are these extensions at extracting high performance from modern hardware platforms? While Spark has made impressive progress, we show that for relational workloads, there is still a significant gap compared with best-of-breed query engines. And when stepping outside of the relational world, query optimization techniques are ineffective if large parts of a computation have to be treated as user-defined functions (UDFs). We present Flare: a new back-end for Spark that brings performance closer to the best SQL engines, without giving up the added expressiveness of Spark. We demonstrate order of magnitude speedups both for relational workloads such as TPC-H, as well as for a range of machine learning kernels that combine relational and iterative functional processing. Flare achieves these results through (1) compilation to native code, (2) replacing parts of the Spark runtime system, and (3) extending the scope of optimization and code generation to large classes of UDFs. |
Safety and tolerability of a modified filter-type device for leukocytapheresis using ACD-A as anticoagulant in patients with mild to moderately active ulcerative colitis. Results of a pilot study. | Cellsorba™ is a medical device for leukocytapheresis (LCAP) treatment of ulcerative colitis (UC). The Cellsorba™ EX Global type has been developed from Cellsorba E for intended use with ACD-A as anticoagulant. We evaluated the safety and efficacy of the modified Cellsorba using ACD-A in a pilot trial comprising patients with active UC despite receiving 5-ASA. A total of 10 LCAP treatments/patients were administered. Safety assessment focused on clinical signs and symptoms, hematological variables, and levels of bradykinin and IL-6. Efficacy was determined using the Mayo clinical/endoscopic scoring index as well as histological assessment of biopsies. An additional aim was to evaluate the impact of the apheresis system lines and filter on selected regulatory molecules. All six subjects completed the trial without any serious adverse events. WBC and platelet counts, and levels of bradykinin and IL-6, were not significantly affected. The median Mayo score decreased from 8.0 to 3.5 at week 8 (and to 2 at week 16 for the responders). Four patients were responders, of whom two went into remission. Median histological scores decreased from 3.5 to 2.0 in these four patients. The concentration of LL-37 increased within the apheresis system lines. LCAP with Cellsorba EX using ACD-A as anticoagulant was found to be a safe and well-tolerated procedure in patients with active UC. The positive impact on efficacy parameters merits further evaluation in a controlled fashion.
AUC of Calvert's formula in targeted intra-arterial carboplatin chemoradiotherapy for cancer of the oral cavity | We investigated whether intra-arterial administration of carboplatin using Calvert's formula is useful for avoiding thrombocytopenia in targeted chemoradiotherapy in patients with squamous cell cancer of the oral cavity and oropharynx. Carboplatin was infused intra-arterially under digital subtraction angiography in 28 patients. In the first group of patients, the dose of carboplatin was calculated according to the body surface area (BS group). In the second group, the dose was calculated using Calvert's formula (AUC group). The value for AUC (area under concentration vs time curve; mg ml−1 min−1) in the formula was set at 4.5. All patients received concurrent radiotherapy (30 Gy) and were given oral tegafur-uracil (UFT®, 400–600 mg day−1). The AUC group showed a significantly lower percentage platelet reduction than the BS group (49.0±22.0 vs 65.1±23.2%; P=0.045) and also tended to have a higher platelet nadir count (10.9±4.2 vs 8.4±5.8 × 104; P=0.27) without reducing the antitumour effect. The value of 4.5 for target AUC is recommended clinically. However, AUC of Calvert's formula could not predict thrombocytopenia associated with intra-arterial chemoradiotherapy due to the variability of the actual AUC. |
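Calvert's formula, referenced above, sets the absolute carboplatin dose from the target AUC and the patient's glomerular filtration rate:

$$\text{Dose (mg)} = \text{target AUC (mg·ml}^{-1}\cdot\text{min)} \times \left(\mathrm{GFR\ (ml/min)} + 25\right),$$

so at the study's target AUC of 4.5, a patient with a GFR of 100 ml/min would receive $4.5 \times 125 = 562.5$ mg (the GFR value is illustrative, not a figure from the study).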
Nonlinear Dynamic Modeling for High Performance Control of a Quadrotor | In this paper, we present a detailed dynamic and aerodynamic model of a quadrotor that can be used for path planning and control design of high performance, complex and aggressive manoeuvres without the need for iterative learning techniques. The accepted nonlinear dynamic quadrotor model is based on a thrust and torque model with constant thrust and torque coefficients derived from static thrust tests. Such a model is no longer valid when the vehicle undertakes dynamic manoeuvres that involve significant displacement velocities. We address this by proposing an implicit thrust model that incorporates the induced momentum effects associated with changing airflow through the rotor. The proposed model uses power as input to the system. To complete the model, we propose a hybrid dynamic model to account for the switching between different vortex ring states of the rotor. |
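The constant-coefficient convention the abstract argues against is the static model in which rotor thrust and torque scale with the square of rotor speed,

$$T_i = c_T\,\omega_i^2, \qquad Q_i = c_Q\,\omega_i^2,$$

with $c_T$ and $c_Q$ fixed from static thrust tests; the paper's point is that induced-momentum effects at significant translational velocity make these coefficients vary, motivating the implicit, power-as-input thrust model.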
Model-Driven ERP Implementation | Enterprise Resource Planning (ERP) implementations are very complex. To obtain a fair level of understanding of the system, it is then necessary to model the supported business processes. However, the problem is the accuracy of the mapping between this model and the actual technical implementation. A solution is to make use of the OMG’s Model-Driven Architecture (MDA) framework. In fact, this framework lets the developer model his system at a high abstraction level and allows the MDA tool to generate the implementation details. This paper presents our results in applying the MDA framework to ERP implementation based on a high level model of the business processes. Then, we show how our prototype is structured and implemented in the IBM/Rational XDE environment |
Clash of the Titans: MapReduce vs. Spark for Large Scale Data Analytics | MapReduce and Spark are two very popular open source cluster computing frameworks for large scale data analytics. These frameworks hide the complexity of task parallelism and fault-tolerance, by exposing a simple programming API to users. In this paper, we evaluate the major architectural components in MapReduce and Spark frameworks including: shuffle, execution model, and caching, by using a set of important analytic workloads. To conduct a detailed analysis, we developed two profiling tools: (1) We correlate the task execution plan with the resource utilization for both MapReduce and Spark, and visually present this correlation; (2) We provide a break-down of the task execution time for in-depth analysis. Through detailed experiments, we quantify the performance differences between MapReduce and Spark. Furthermore, we attribute these performance differences to different components which are architected differently in the two frameworks. We further expose the source of these performance differences by using a set of micro-benchmark experiments. Overall, our experiments show that Spark is about 2.5x, 5x, and 5x faster than MapReduce, for Word Count, k-means, and PageRank, respectively. The main causes of these speedups are the efficiency of the hash-based aggregation component for combine, as well as reduced CPU and disk overheads due to RDD caching in Spark. An exception to this is the Sort workload, for which MapReduce is 2x faster than Spark. We show that MapReduce’s execution model is more efficient for shuffling data than Spark, thus making Sort run faster on MapReduce. |
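For orientation, Word Count — the workload with the 2.5x Spark speedup above — is a handful of RDD transformations in Spark; the `.cache()` call marks the RDD caching the study identifies as one source of the speedups (the input path is illustrative):

```python
from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="WordCount")

counts = (sc.textFile("hdfs:///data/corpus.txt")     # illustrative input path
            .flatMap(lambda line: line.split())      # map: tokenize
            .map(lambda word: (word, 1))
            .reduceByKey(add)                        # hash-based aggregation
            .cache())                                # keep the RDD in memory
print(counts.take(10))
```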
An anterior tooth size comparison in unilateral and bilateral congenitally absent maxillary lateral incisors. | The purpose of this study is to compare anterior tooth widths in patients with congenitally missing maxillary lateral incisors using the Bolton Index and the divine proportion. The study sample consisted of thirty pairs of orthodontic models with unilateral (twelve patients; 7 females, 5 males) and bilateral (eighteen patients; 13 females, 5 males) absence of maxillary lateral incisors. The mean ages of the selected cases were 17.7 and 17.5 years, respectively. Descriptive statistics were used for the data analysis. The results showed that the mean Bolton Index in cases with bilateral absence was closer to the Bolton mean than in cases with unilateral absence. In the unilateral absence cases, the width of the existing lateral incisor (5.5 mm) was on average 1.00 mm less than the standard mean (6.5 mm). The divine proportion showed that the maxillary central incisors were small in width, as indicated by the adjusted value, or slightly larger in width than the mandibular central incisors. In cases with both unilateral and bilateral absence the Bolton Index exhibited maxillary insufficiency, which was confirmed by evaluating the divine proportion of the maxillary and mandibular incisors. The results of the present study will be of great help to the orthodontist in deciding whether to open or close the space, and to the prosthodontist in restoring the missing teeth of patients with missing maxillary lateral incisors. |
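For readers unfamiliar with the measures: the anterior Bolton ratio compares summed mesiodistal widths of the six anterior teeth between arches (the commonly cited norm is about 77.2%), and the divine proportion applies the golden ratio φ ≈ 1.618 to adjacent tooth widths; these are the standard definitions, stated here for context since the abstract does not restate them:

```latex
\text{Anterior Bolton ratio} \;=\; \frac{\sum \text{mandibular anterior widths (canine to canine)}}{\sum \text{maxillary anterior widths (canine to canine)}} \times 100\%
```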
Continuous 3D scan-matching with a spinning 2D laser | Scan-matching is a technique that can be used for building accurate maps and estimating vehicle motion by comparing a sequence of point cloud measurements of the environment taken from a moving sensor. One challenge that arises in mapping applications where the sensor motion is fast relative to the measurement time is that scans become locally distorted and difficult to align. This problem is common when using 3D laser range sensors, which typically require more scanning time than their 2D counterparts. Existing 3D mapping solutions either eliminate sensor motion by taking a “stop-and-scan” approach, or attempt to correct the motion in an open-loop fashion using odometric or inertial sensors. We propose a solution to 3D scan-matching in which a continuous 6DOF sensor trajectory is recovered to correct the point cloud alignments, producing locally accurate maps and allowing for a reliable estimate of the vehicle motion. Our method is applied to data collected from a 3D spinning lidar sensor mounted on a skid-steer loader vehicle to produce quality maps of outdoor scenes and estimates of the vehicle trajectory during the mapping sequences. |
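As background, the primitive underneath any scan-matcher is the least-squares rigid alignment of corresponding points; a minimal SVD-based (Kabsch) sketch of that single step, not the authors' continuous 6DOF trajectory method:

```python
import numpy as np

def rigid_align(P: np.ndarray, Q: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Least-squares rigid transform (R, t) mapping points P onto their
    correspondences Q, both of shape (N, 3) -- the alignment step at the
    heart of ICP-style scan-matching."""
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

The paper's contribution is to replace this single rigid transform with a continuous trajectory, so each measured point is corrected by the sensor pose at its own timestamp.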
Integrating Open and Closed Information Extraction: Challenges and First Steps | Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which they can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution. This is especially promising, since NELL and ReVerb typically achieve very large coverage but still lack a full-fledged clean ontological structure, which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13]. |
Blood pressure patterns after brain death. | The control of vasomotor tone by the rostral ventrolateral medulla oblongata is lost in brain death. Hypotension is not an obligate clinical sign in brain death criteria, because an adequate intravascular volume may temporarily overcome this decrease in blood pressure. However, such an unstable situation is bound to collapse.1,2 Anticipatory knowledge of changes in blood pressure is important to physicians who perform brain death examinations. Delay in the declaration of brain death in a hemodynamically unstable patient may jeopardize organ procurement, and prompt pharmaceutical support is required to maintain adequate blood pressure.3–5 We aimed to characterize hemodynamic patterns after brain death. |
IDENTIFICAÇÃO DAS AÇÕES DE INOVAÇÃO TECNOLÓGICA EM EMPRESAS DE ARACAJU/SE (IDENTIFICATION OF TECHNOLOGICAL INNOVATION ACTIONS IN FIRMS IN ARACAJU/SE) | The relationship between universities and businesses is important in the process of technological innovation, and universities make quite distinct contributions to this process through their teaching, research, and the transfer/commercialization of knowledge. Amid the changes taking place globally today, technological innovation has become a vital requirement for all organizations, whether small, medium or large, and regardless of their sector of activity. The new model of innovation, Open Innovation, replaces isolated action with the sharing of knowledge. A decisive factor in innovation activities is the intention to innovate and the degree of innovation a company believes it needs to stay competitive in its market segment. It is therefore essential to consider that, depending on the posture it adopts, each company has specific interests that may differ from those of other companies. The aim of this study is to identify whether there are technological innovation actions in businesses in the city of Aracaju, capital of Sergipe state, and what innovation potential could serve to promote the establishment of partnerships in RD&I projects, as well as to generate business innovation opportunities. |
A clinical staging score to measure the severity of dialysis-related amyloidosis | The ongoing effort to prevent dialysis-related amyloidosis (DRA) has been hampered by the lack of any way to measure DRA's severity. Yet such measurement is essential for assessing the effect of DRA treatment. Accordingly, we developed a scoring system focused on the physical manifestations of DRA. Forty-four patients on maintenance hemodialysis with DRA, and 96 without it, were enrolled. The SF-36v2 Health Survey ascertained whether patients experienced general bodily pain and/or physical dysfunction with any attendant specific pain (dysfunction). If so, the association of those conditions with a finding of DRA was analyzed, including laboratory and radiographic data, and a scoring system reflecting the extent of that dysfunction was devised using the significant variables in the multivariate analysis. Both dysfunction and general bodily pain were severe in patients with DRA. The presence of polyarthralgia, trigger finger, carpal tunnel syndrome (CTS), and dialysis-related spondyloarthropathy (DRS) was associated with that dysfunction after appropriate adjustments. The new scoring system used those four variables in the model, with 3 points given for polyarthralgia and DRS, and 2 for trigger finger and CTS (possible range 0–10). Based on the physical functioning score of SF-36v2, we categorized the A-score into three stages: mild (A-score 3–4), moderate (5–7), and severe (8–10). The corresponding area under the receiver-operating characteristic curve for diagnosis of DRA was 0.9345 when we set the cutoff value at 4. This validated scoring system for quantitatively estimating the severity of DRA can serve as a useful measure in clinical practice. |
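The scoring rule described above is simple enough to state directly; a sketch that follows the abstract's own weights and stage cutoffs:

```python
def a_score(polyarthralgia: bool, drs: bool, trigger_finger: bool, cts: bool) -> int:
    """A-score: 3 points each for polyarthralgia and dialysis-related
    spondyloarthropathy (DRS), 2 points each for trigger finger and
    carpal tunnel syndrome (CTS); possible range 0-10."""
    return 3 * polyarthralgia + 3 * drs + 2 * trigger_finger + 2 * cts

def stage(score: int) -> str:
    """Severity stages as categorized in the abstract (diagnostic cutoff 4)."""
    if score >= 8:
        return "severe"
    if score >= 5:
        return "moderate"
    if score >= 3:
        return "mild"
    return "below staging range"
```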
Trajectory planning and following for UAVs with nonlinear dynamics | In this paper, we introduce a Matlab-based toolbox called OPTIPLAN, which is intended to formulate, solve and simulate problems of obstacle avoidance based on model predictive control (MPC). The main goal of the toolbox is to allow users to set up even complex control problems in only a few lines of code, without loss of efficiency. Low-level mathematical and technical details are fully automated, allowing researchers to focus on problem formulation. The toolbox can easily perform MPC-based closed-loop simulations and fetch visualizations of the results. From the theoretical point of view, non-convex obstacle avoidance constraints are tackled in two ways in OPTIPLAN: either by solving a mixed-integer program using binary variables, or by using time-varying constraints, which leads to a suboptimal solution but keeps the problem convex. |
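As background on the binary-variable route (a textbook big-M formulation, not OPTIPLAN's internal encoding): avoidance of an axis-aligned rectangular obstacle [x_min, x_max] × [y_min, y_max] at prediction step k is enforced by requiring at least one separating half-plane to be active:

```latex
\begin{aligned}
x_k &\le x_{\min} + M\,b_{k,1}, & x_k &\ge x_{\max} - M\,b_{k,2},\\
y_k &\le y_{\min} + M\,b_{k,3}, & y_k &\ge y_{\max} - M\,b_{k,4},\\
&\textstyle\sum_{i=1}^{4} b_{k,i} \le 3, & b_{k,i} &\in \{0,1\},
\end{aligned}
```

where M is a sufficiently large constant; the cardinality constraint forces at least one binary to zero, so at least one face constraint holds exactly and the trajectory stays outside the box.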
Automated GUI performance testing | A significant body of prior work has devised approaches for automating the functional testing of interactive applications. However, little work exists for automatically testing their performance. Performance testing imposes additional requirements upon GUI test automation tools: the tools have to be able to replay complex interactive sessions, and they have to avoid perturbing the application’s performance. We study the feasibility of using five Java GUI capture and replay tools for GUI performance test automation. Besides confirming the severity of the previously known GUI element identification problem, we also describe a related problem, the temporal synchronization problem, which is of increasing importance for GUI applications that use timer-driven activity. We find that most of the tools we study have severe limitations when used for recording and replaying realistic sessions of real-world Java applications and that all of them suffer from the temporal synchronization problem. However, we find that the most reliable tool, Pounder, causes only limited perturbation and thus can be used to automate performance testing. Based on an investigation of Pounder’s approach, we further improve its robustness and reduce its perturbation. Finally, we demonstrate in a set of case studies that the conclusions about perceptible performance drawn from manual tests still hold when using automated tests driven by Pounder. Besides the significance of our findings to GUI performance testing, the results are also relevant to capture and replay-based functional GUI test automation approaches. |
Learning to Solve Arithmetic Word Problems with Verb Categorization | This paper presents a novel approach to learning to solve simple arithmetic word problems. Our system, ARIS, analyzes each of the sentences in the problem statement to identify the relevant variables and their values. ARIS then maps this information into an equation that represents the problem, and enables its (trivial) solution as shown in Figure 1. The paper analyzes the arithmetic word problem "genre", identifying seven categories of verbs used in such problems. ARIS learns to categorize verbs with 81.2% accuracy, and is able to solve 77.7% of the problems in a corpus of standard primary school test questions. We report the first learning results on this task without reliance on predefined templates and make our data publicly available. |
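To illustrate the verb-categorization idea with an invented toy example (not ARIS's implementation, which learns seven verb categories from data): once each verb is mapped to a state-update sign, solving reduces to arithmetic over the tracked quantity:

```python
# Hypothetical verb-to-update table in the spirit of ARIS; the entries
# below are illustrative only.
VERB_SIGN = {"has": 0, "gets": +1, "finds": +1, "gives": -1, "loses": -1}

def solve(facts: list[tuple[str, int]]) -> int:
    """facts: (verb, quantity) pairs about one entity, in narrative order,
    e.g. [("has", 5), ("gives", 2)] -> 5 - 2 = 3."""
    total = 0
    for verb, qty in facts:
        # an observation verb sets the state; transfer verbs update it
        total = qty if VERB_SIGN[verb] == 0 else total + VERB_SIGN[verb] * qty
    return total
```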
Raw meat based diet influences faecal microbiome and end products of fermentation in healthy dogs | BACKGROUND
Dietary intervention studies are required to gain a deeper understanding of the variability of the gut microbial ecosystem in healthy dogs under different feeding conditions and to improve diet formulations. The aim of the study was to investigate the influence of a raw meat based diet supplemented with vegetable foods on the faecal microbiome of dogs, in comparison with extruded food.
METHODS
Eight healthy adult Boxer dogs were recruited and randomly divided into two experimental blocks of 4 individuals. Dogs were regularly fed a commercial extruded diet (RD); starting from the beginning of the trial, one group received the raw based diet (MD) while the other group continued to be fed the RD diet (CD) for a fortnight. After 14 days the two groups were inverted, the CD group shifting to the MD and the MD group to the CD, for the next 14 days. Faeces were collected at the beginning of the study (T0), after 14 days (T14) before the change of diet, and at the end of the experimental period (T28) for DNA extraction and metagenome analysis by sequencing of the 16S rRNA V3 and V4 regions, and for measurement of short chain fatty acids (SCFA), lactate and faecal score.
RESULTS
A decreased proportion of the Lactobacillus, Paralactobacillus (P < 0.01) and Prevotella (P < 0.05) genera was observed in the MD group, while the Shannon biodiversity index (see the formula below) significantly increased (3.31 ± 0.15) in comparison to the RD group (2.92 ± 0.31; P < 0.05). The MD diet significantly (P < 0.05) decreased the faecal score and increased the lactic acid concentration in the faeces in comparison to the RD treatment (P < 0.01). Faecal acetate was negatively correlated with Escherichia/Shigella and Megamonas (P < 0.01), whilst butyrate was positively correlated with Blautia and Peptococcus (P < 0.05). Positive correlations were found between lactate and Megamonas (P < 0.05), Escherichia/Shigella (P < 0.01) and Lactococcus (P < 0.01).
CONCLUSION
These results suggest that diet composition modifies the faecal microbial composition and the end products of fermentation. The administration of the MD diet promoted a more balanced growth of bacterial communities and a positive change in the readouts of healthy gut functions in comparison to the RD diet. |
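For reference, the Shannon biodiversity index reported in the results is the standard diversity measure computed over the relative abundances p_i of the observed taxa (the abstract does not restate it):

```latex
H' = -\sum_{i} p_i \ln p_i
```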
Effectiveness of applying progressive muscle relaxation technique on quality of life of patients with multiple sclerosis. | AIMS AND OBJECTIVES
To identify the effects of applying the Progressive Muscle Relaxation Technique on the quality of life of patients with multiple sclerosis.
BACKGROUND
In view of the growing range of care options in multiple sclerosis, improvement of quality of life has become increasingly relevant as a caring intervention. Complementary therapies are widely used by multiple sclerosis patients, and the Progressive Muscle Relaxation Technique (PMRT) is one such therapy.
DESIGN
Quasi-experimental study.
METHOD
Multiple sclerosis patients (n = 66) were selected by non-probability sampling and then assigned to experimental and control groups (33 patients in each group). Data collection instruments included an individual information questionnaire, the SF-8 Health Survey and a self-reported checklist. The experimental group performed PMRT for 63 sessions over two months, while no intervention was given to the control group. Statistical analysis was done with SPSS software.
RESULTS
Student's t-test showed no significant difference between the two groups in mean health-related quality of life scores before the study, but it did show a significant difference between the two groups one and two months after the intervention (p < 0.05). Repeated-measures ANOVA showed a significant difference between the two groups across the three time points in the mean scores of overall health-related quality of life and its dimensions (p < 0.05).
CONCLUSIONS
Although this study provides modest support for the effectiveness of the Progressive Muscle Relaxation Technique on the quality of life of multiple sclerosis patients, further research is required to determine better methods to promote the quality of life of patients suffering from multiple sclerosis and other chronic diseases.
RELEVANCE TO CLINICAL PRACTICE
The Progressive Muscle Relaxation Technique is practically feasible and is associated with an increase in the quality of life of multiple sclerosis patients; health professionals therefore need to update their knowledge of complementary therapies. |
Estimating crowd densities and pedestrian flows using wi-fi and bluetooth | The rapid deployment of smartphones as all-purpose mobile computing systems has led to a wide adoption of wireless communication systems such as Wi-Fi and Bluetooth in mobile scenarios. Both communication systems leak information to their surroundings during operation. This information has been used for tracking and crowd density estimation in the literature. However, estimation of pedestrian flows has not yet been evaluated with respect to a known ground truth, and thus a reliable adoption in real-world scenarios is rather difficult. With this paper, we fill in this gap. Using ground truth provided by the security check process at a major German airport, we discuss the quality and feasibility of pedestrian flow estimation for both Wi-Fi and Bluetooth captures. We present and evaluate three approaches to improve the accuracy over a naive count of captured MAC addresses. Such counts only showed an impractical Pearson correlation of 0.53 for Bluetooth and 0.61 for Wi-Fi compared to ground truth. The presented extended approaches yield a superior correlation of 0.75 in the best case, indicating a strong correlation and an improvement in accuracy. Given these results, the presented approaches allow for a practical estimation of pedestrian flows. |
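The naive baseline the paper improves on amounts to correlating per-interval unique-MAC counts with ground-truth pedestrian counts; a minimal sketch of that baseline (all numbers invented for illustration, not the airport data):

```python
import numpy as np

# Per-time-interval counts: unique MAC addresses captured vs. ground truth
# from the security check process (values illustrative).
mac_counts   = np.array([112, 140,  98, 171, 133, 150])
ground_truth = np.array([ 90, 120,  85, 160, 110, 135])

r = np.corrcoef(mac_counts, ground_truth)[0, 1]   # Pearson correlation
print(f"Pearson r = {r:.2f}")
# On the real captures the paper reports r = 0.53 (Bluetooth) and
# 0.61 (Wi-Fi) for this naive count, improved to 0.75 by its extensions.
```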