title | abstract |
---|---|
Style as a Choice of Blending Principles | This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our finding that very different principles from those of common sense blending are needed for some creative works. |
TASKer: behavioral insights via campus-based experimental mobile crowd-sourcing | While mobile crowd-sourcing has become a game-changer for many urban operations, such as last mile logistics and municipal monitoring, we believe that the design of such crowd-sourcing strategies must better accommodate the real-world behavioral preferences and characteristics of users. To provide a real-world testbed to study the impact of novel mobile crowd-sourcing strategies, we have designed, developed and experimented with a real-world mobile crowd-tasking platform on the SMU campus, called TA$Ker. We enhanced the TA$Ker platform to support several new features (e.g., task bundling, differential pricing and cheating analytics) and experimentally investigated these features via a two-month deployment of TA$Ker, involving 900 real users on the SMU campus who performed over 30,000 tasks. Our studies (i) show the benefits of bundling tasks as a combined package, (ii) reveal the effectiveness of differential pricing strategies and (iii) illustrate key aspects of cheating (false reporting) behavior observed among workers. |
Deep Belief Networks Are Compact Universal Approximators | Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression. |
Vaccination and autoimmune disease: what is the evidence? | As many as one in 20 people in Europe and North America have some form of autoimmune disease. These diseases arise in genetically predisposed individuals but require an environmental trigger. Of the many potential environmental factors, infections are the most likely cause. Microbial antigens can induce cross-reactive immune responses against self-antigens, whereas infections can non-specifically enhance their presentation to the immune system. The immune system uses fail-safe mechanisms to suppress infection-associated tissue damage and thus limits autoimmune responses. The association between infection and autoimmune disease has, however, stimulated a debate as to whether such diseases might also be triggered by vaccines. Indeed there are numerous claims and counter claims relating to such a risk. Here we review the mechanisms involved in the induction of autoimmunity and assess the implications for vaccination in human beings. |
Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach | Cloud computing provides resources as services in pay-as-you-go mode to customers by using virtualization technology. As virtual machines (VMs) are hosted on physical servers, considerable energy is consumed in maintaining the servers in a data center. More physical servers mean more energy consumption and higher cost. Therefore, the VM placement (VMP) problem is significant in cloud computing. This paper proposes an approach based on ant colony optimization (ACO) to solve the VMP problem, named ACO-VMP, so as to use the physical resources effectively and to reduce the number of running physical servers. The number of physical servers is the same as the number of VMs at the beginning; the ACO approach then tries to reduce the number of physical servers one by one. We evaluate the performance of the proposed ACO-VMP approach in solving VMP with up to 600 VMs. Experimental results compared with those obtained by the first-fit decreasing (FFD) algorithm show that ACO-VMP solves VMP more efficiently and reduces the number of physical servers significantly, especially when the number of VMs is large. |
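For concreteness, a minimal sketch of the first-fit decreasing (FFD) baseline the abstract compares against, under the simplifying assumptions of a single resource dimension and uniform server capacity; the names and numbers are illustrative, and this is not the paper's ACO-VMP algorithm.

```python
import random

def ffd_placement(vm_demands, server_capacity):
    """Pack VM demands onto servers with first-fit decreasing; returns server loads."""
    servers = []
    for demand in sorted(vm_demands, reverse=True):   # largest VMs first
        for srv in servers:
            if sum(srv) + demand <= server_capacity:  # first server that still fits
                srv.append(demand)
                break
        else:                                         # no server fits: open a new one
            servers.append([demand])
    return servers

# Example: 600 VMs with random CPU demands packed onto unit-capacity servers.
demands = [random.uniform(0.05, 0.5) for _ in range(600)]
print(len(ffd_placement(demands, server_capacity=1.0)), "servers used")
```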
Impact of 3-D Advertising on Product Knowledge, Brand Attitude, and Purchase Intention: The Mediating Role of Presence | The conceptualization of a virtual experience has emerged because advancements in computer technology have led to a movement toward more multi-sensory online experiences. Two studies designed to explore the concepts of virtual experience and presence are presented with the results largely supporting the proposition that 3-D advertising is capable of enhancing presence and to varying degrees ultimately influencing the product knowledge, brand attitude, and purchase intention of consumers. The marketing implications are immediate because the ability to create a compelling virtual product experience is not beyond the capability of interactive advertising today. By creating compelling online virtual experiences, advertisers can potentially enhance the value of product information presented and engage consumers in an active user-controlled product experience. |
Addressing MIST (Metabolites in Safety Testing): bioanalytical approaches to address metabolite exposures in humans and animals. | Recent regulatory guidance suggests that drug metabolites identified in human plasma should be present at equal or greater levels in at least one of the animal species used in safety assessments (MIST). Often synthetic standards for the metabolites do not exist, thus this has introduced multiple challenges regarding the quantitative comparison of metabolites between human and animals. Various bioanalytical approaches are described to evaluate the exposure of metabolites in animal vs. human. A simple LC/MS/MS peak area ratio comparison approach is the most facile and applicable approach to make a first assessment of whether metabolite exposures in animals exceed that in humans. In most cases, this measurement is sufficient to demonstrate that an animal toxicology study of the parent drug has covered the safety of the human metabolites. Methods whereby quantitation of metabolites can be done in the absence of chemically synthesized authentic standards are also described. Only in rare cases, where an actual exposure measurement of a metabolite is needed, will a validated or qualified method requiring a synthetic standard be needed. The rigor of the bioanalysis is increased accordingly based on the results of animal:human ratio measurements. This data driven bioanalysis strategy to address MIST issues within standard drug development processes is described. |
Navigation-guided Ommaya reservoir placement: implications for the treatment of leptomeningeal metastases. | Ommaya reservoirs are commonly used in the diagnosis and management of leptomeningeal metastases (LM) from malignant tumors. The present study investigates the utility of an intraoperative navigation-guided technique for Ommaya reservoir placement. Between March 2004 and December 2005, 85 navigation-guided Ommaya reservoir placements were performed in 77 patients with intracranial malignancies at the Komagome Metropolitan Hospital. Anterior horn puncture and posterior horn puncture were used for 59 and 26 procedures, respectively. A slit ventricle was present in 6 cases. All procedures were performed under assistance from the Medtronic STEALTH STATION TRIA navigation system. Computed tomographic (CT) scans were routinely obtained just after completion of the procedure. Patients diagnosed with LM received subsequent treatment. An Ommaya catheter was applied to the ventricular puncture needle registered in the navigation system and was inserted into the lateral ventricle. Using the real-time "Guidance View", the surgeon was able to verify the catheter position continuously during the procedure. Postoperative CT scan revealed an appropriate catheter position in all except for one case. Complications (catheter malposition) occurred in only one case (complication rate, 1.2%). None of the patients experienced hemorrhage or infection. In conclusion, navigation-guided Ommaya reservoir placement was associated with a very low incidence of complications. This method appears to be safe and effective when employed in patients with intracranial malignancy. |
Effects of collaborative online shopping on shopping experience through social and relational perspectives | Collaborative online shopping refers to an activity in which a consumer shops at an eCommerce website with remotely located shopping partners such as friends or family. Although collaborative online shopping has increased with the pervasiveness of social networking, few studies have examined how to enhance this type of shopping experience. This study examines two potential design components, embodiment and media richness, that could enhance shoppers’ experiences. Based on theories of copresence and flow, we examined whether the implementation of these two features could increase copresence, flow, and the intention to use a collaborative online shopping website. |
Data Gloves for Sign Language Recognition System | Communication between deaf-mute people and the rest of the population has always been a challenging task. Millions of people worldwide fall into this category, far too many to be ignored. Deaf-mute people communicate through sign language, which is difficult for others to understand. This paper aims at eradicating the communication barrier between them by developing an embedded system which will translate the hand gestures into
Knowledge, Attitude and Practice Regarding Dengue Fever among the Healthy Population of Highland and Lowland Communities in Central Nepal | BACKGROUND
Dengue fever (DF) is the most rapidly spreading mosquito-borne viral disease in the world. In this decade it has expanded to new countries and from urban to rural areas. Nepal was regarded as DF-free until 2004. Since then, dengue virus (DENV) has rapidly expanded its range, even into mountain regions of Nepal, and major outbreaks occurred in 2006 and 2010. However, no data on the local knowledge, attitude and practice (KAP) of DF in Nepal exist, although such information is required for prevention and control measures.
METHODS
We conducted a community based cross-sectional survey in five districts of central Nepal between September 2011 and February 2012. We collected information on the socio-demographic characteristics of the participants and their knowledge, attitude and practice regarding DF using a structured questionnaire. We then statistically compared highland and lowland communities to identify possible causes of observed differences.
PRINCIPAL FINDINGS
Out of 589 individuals interviewed, 77% had heard of DF. Only 12% of the sample had good knowledge of DF. Those living in the lowlands were five times more likely to possess good knowledge than highlanders (P<0.001). Despite low knowledge levels, 83% of the people had a good attitude and 37% reported good practice. We found a significant positive correlation among knowledge, attitude and practice (P<0.001). Among the socio-demographic variables, the education level of the participants was an independent predictor of practice level (P<0.05), while education level and the interaction between sex and age group were independent predictors of attitude level (P<0.05).
CONCLUSION
Despite the rapid expansion of DENV in Nepal, the knowledge of people about DF was very low. Therefore, massive awareness programmes are urgently required to protect the health of people from DF and to limit its further spread in this country. |
Description and analysis of a bottom-up DFA minimization algorithm | We establish linear-time reductions between the minimization of a deterministic finite automaton (DFA) and the conjunction of three subproblems: the minimization of a strongly connected DFA, the isomorphism problem for a set of strongly connected minimized DFAs, and the minimization of a connected DFA consisting of two strongly connected components, both of which are minimized. We apply this procedure to minimize, in linear time, automata whose nontrivial strongly connected components are simple cycles. |
Extremity soft tissue sarcomas presented as hematomas | Soft tissue sarcoma (STS) with extensive intra-tumoral hemorrhage is an infrequently described entity, usually misdiagnosed as intra-muscular hematoma. The outcomes in this group of patients have not been previously described. We retrospectively identified 15 patients with an initial clinical or imaging diagnosis of hematoma, or hematoma versus hemorrhagic sarcoma, although a final diagnosis of high-grade STS was established in all cases. The most common location was the thigh. Three patients had a bleeding predisposition. Ten patients were referred for further evaluation with an initial diagnosis of muscle strain/hematoma, one with hematoma versus abscess, whereas four were referred for soft tissue mass evaluation. Final diagnosis was made with a single biopsy in only 53% of patients. Mean time to diagnosis for patients with two biopsies was 7 months from initial presentation. Histologic diagnosis was malignant fibrous histiocytoma in ten patients. Surgical treatment included tumor resection in eleven and amputation in three patients. One patient had lung metastatic disease at presentation and eight developed lung metastases within a median time of 7 months. We suggest that an STS masquerading as a hematoma should be suspected when the mechanism and energy of the trauma do not justify the clinically detected severity of the injury, or when the lesion does not follow the expected clinical course of resolution after initial conservative management. A bleeding predisposition does not exclude malignancy. The evacuation of hematomas should include pathologic examination of tissue. Prognosis is dismal due to early metastatic disease. |
Characterization of environmental marketing in plastic manufacturing companies of Zulia State | Green, eco or environmental marketing includes the development and promotion of products and services that satisfy customers' needs and desires in terms of quality, competitive prices and convenience, without causing detriment to the environment. The main objective of this research was to characterize the green marketing management carried out by plastic manufacturing companies of Zulia state, in order to determine the current status of the sector. The study was a descriptive, non-experimental, cross-sectional field investigation, based on a review of library materials, direct observation in the industries, and the application of an instrument to a population of 18 companies. The results showed that companies in Zulia display very few of the characteristics that would allow them to be classified as ecological, and that several factors hinder the development of strategies in this area, such as a lack of human resources, poor access to raw materials, a lack of customer demand for "green" products, low participation of government agencies, financial constraints and an inability to adopt new systems for manufacturing green products, among others. |
Optimal Deployment of Wireless Sensor Networks for Air Pollution Monitoring | Recently, air pollution monitoring has emerged as a main service of smart cities because of increasing industrialization and massive urbanization. Wireless sensor networks (WSN) are a suitable technology for this purpose thanks to their substantial benefits, including low cost and autonomy. Minimizing the deployment cost is one of the major challenges in WSN design; therefore, sensor positions have to be carefully determined. In this paper, we propose two integer linear programming formulations based on real pollutant dispersion modeling to deal with minimum-cost WSN deployment for air pollution monitoring. We illustrate the concept by applying our models to real-world data, namely the Nottingham City street lights. We compare the two models in terms of execution time and show that the second, flow-based formulation is much better. We finally conduct extensive simulations to study the impact of some parameters and derive guidelines for efficient WSN deployment for air pollution monitoring. |
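To make the integer-linear-programming framing concrete, here is a generic set-cover style deployment model sketched with the PuLP library; the sites, coverage relation, and costs are invented placeholders, and the paper's two formulations (including the flow-based one) are richer than this.

```python
import pulp

sites = ["s1", "s2", "s3"]        # candidate sensor locations (assumed)
points = ["p1", "p2"]             # points that must be monitored (assumed)
coverage = {"p1": ["s1", "s2"], "p2": ["s2", "s3"]}  # which sites cover each point
cost = {"s1": 3, "s2": 5, "s3": 2}

prob = pulp.LpProblem("wsn_deployment", pulp.LpMinimize)
x = {s: pulp.LpVariable(f"x_{s}", cat="Binary") for s in sites}
prob += pulp.lpSum(cost[s] * x[s] for s in sites)   # minimize deployment cost
for p in points:                                    # every point must be covered
    prob += pulp.lpSum(x[s] for s in coverage[p]) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if x[s].value() == 1])      # chosen sensor sites
```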
An Evaluation of OCR Accuracy | The data used in the test consisted of 500 pages selected at random from a collection of approximately 2,500 documents containing 100,000 pages. The documents in this collection were chosen by the U.S. Department of Energy (DOE) to represent the kinds of documents from which the DOE plans to build large, full-text retrieval databases using OCR for document conversion. The documents are mostly scientific and technical papers [Nartker 92]. |
Applications of neural network methods to the processing of Earth observation satellite data | The new generation of Earth observation satellites carries advanced sensors that gather very precise data for studying the Earth system and global climate. This paper shows that neural network methods can be successfully used for solving forward and inverse remote sensing problems, providing both accurate and fast solutions. Two examples of multi-neural-network systems, for the determination of cloud properties and for the retrieval of total columns of ozone using satellite data, are presented. The developed multi-neural-network algorithms are currently being used for the operational processing of European atmospheric satellite sensors and play a key role in related satellite missions planned for the near future. |
Prevalence of helminth eggs in raw vegetables consumed in Burdur, Turkey | This study aimed to determine the prevalence of intestinal helminths in raw vegetables consumed in Burdur, Turkey. The presence of helminth eggs on raw vegetables, including lettuce, parsley, green onions, cucumbers, carrots, cress, peppermint, spinach, leek, dill, and rocket from bazaars in Burdur, Turkey was determined. A total of 111 raw vegetable samples were randomly selected from the bazaars and then examined by a concentration method and assayed by light microscopy. Helminth eggs were detected in 7 (6.3%) of the 111 samples, in raw lettuce, parsley, carrots, cress, peppermint, spinach, and rocket (p > 0.05). No helminth eggs were detected in leek, cucumbers, dill, and green onions. Parasitological contamination of raw vegetables sold in bazaars in Burdur may pose a health risk to consumers of such products. The importance of adequate measures throughout the farm-to-table food chain is emphasized. |
A 100 GHz FMCW MIMO radar system for 3D image reconstruction | We present a frequency modulated continuous wave (FMCW) multiple input multiple output (MIMO) radar demonstrator system operating in the W-band at frequencies around 100 GHz. It consists of a two-dimensional sparse array together with hardware for signal generation and image reconstruction, which we describe in more detail. The geometry of the sparse array was designed with the help of simulations, with the aim of imaging at distances from just a few meters up to more than 150 meters. The FMCW principle is used to extract range information. To obtain information in both cross-range directions, a back-propagation algorithm is used, which is further explained in this paper. Finally, we present first measurements and explain the calibration process. |
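As background on the range extraction step, a back-of-the-envelope sketch of the FMCW relation between beat frequency and range, R = c * f_b * T / (2 * B), assuming an ideal linear chirp; the parameter values below are illustrative, not those of the demonstrator.

```python
C = 299_792_458.0   # speed of light, m/s

def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_time_s):
    """Range corresponding to a measured beat frequency for a linear FMCW chirp."""
    return C * beat_freq_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

# e.g. a 1 GHz sweep over 1 ms: a 100 kHz beat corresponds to roughly 15 m
print(fmcw_range(100e3, 1e9, 1e-3))
```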
Causal video object segmentation from persistence of occlusions | Occlusion relations inform the partition of the image domain into “objects” but are difficult to determine from a single image or short-baseline video. We show how long-term occlusion relations can be robustly inferred from video, and used within a convex optimization framework to segment the image domain into regions. We highlight the challenges in determining these occluder/occluded relations and ensuring regions remain temporally consistent, propose strategies to overcome them, and introduce an efficient numerical scheme to perform the partition directly on the pixel grid, without the need for superpixelization or other preprocessing steps. |
A design process for embedding knowledge management in everyday work | Knowledge management software must be embedded in the processes of knowledge workers' everyday practice. In order to attain a seamless design that respects the special qualities and requirements of knowledge work, detailed studies of the existing work processes and an analysis of the knowledge used are necessary. Participation of the knowledge owners and future users is an important factor for the success of knowledge management systems. In this paper we describe characteristics of knowledge work that motivate the use of participatory design techniques. We suggest a design process for developing or improving knowledge management, which includes ethnographic surveys, user participation in cyclic improvement, scenario-based design, and the use of multiple design artifacts and documents. Finally we explain the benefits of our approach. The paper is based on a case study we carried out to design and introduce a knowledge management system in a training company. |
Blockchain-Based Decentralized Applications for Multiple Administrative Domain Networking | Evolving networking scenarios include multi-administrative domain network services as drivers of novel business opportunities along with emerging operational challenges. As a potential approach to tackle upcoming requirements providing basic primitives to encompass analytics, automation, and distributed orchestration, we investigate blockchain-based decentralized applications (DApps) in the context of operational phases in support of multi-administrative domain networking. We present and discuss a generalized framework for multi-domain service orchestration using blockchain-based DApps and then showcase proof-of-concept prototype experiments based on best of breed open source components that demonstrate DApp functionalities as candidate enablers of multi-domain network services. We then analyze three use case scenarios pursued by ongoing work at standards development organizations, namely MEF, 3GPP, and ETSI NFV, discussing standardization opportunities around blockchain-based DApps. |
Comparative Studies of Passive Imaging in Terahertz and Mid-Wavelength Infrared Ranges for Object Detection | We compared the possibility of detecting hidden objects covered with various types of clothing by using passive imagers operating in the terahertz (THz) range at 1.2 mm (250 GHz) and in the mid-wavelength infrared at 3-6 μm (50-100 THz). We investigated theoretical limitations, the performance of the imagers, and the physical properties of fabrics in both regions. In order to investigate the time stability of detection, we performed measurements in sessions each lasting 30 min. We present a theoretical comparison of the two spectra as well as the results of experiments. In order to compare the capabilities of passive imaging of hidden objects, we combined the properties of textiles, the performance of the imagers, and the properties of radiation in both spectral ranges. The paper presents a comparison, with analysis, of the original results of the measurement sessions in the two spectral ranges. |
Comparison of intraosseous versus central venous vascular access in adults under resuscitation in the emergency department with inaccessible peripheral veins. | INTRODUCTION
Current European Resuscitation Council (ERC) guidelines recommend intraosseous (IO) vascular access, if intravenous (IV) access is not readily available. Because central venous catheterisation (CVC) is an established alternative for in-hospital resuscitation, we compared IO access versus landmark-based CVC in adults with difficult peripheral veins.
METHODS
In this prospective observational study we investigated success rates on first attempt and procedure times of IO access versus central venous catheterisation (CVC) in adults (≥ 18 years of age) with inaccessible peripheral veins under trauma or medical resuscitation in a level I trauma centre emergency department.
RESULTS
Forty consecutive adults under resuscitation were analysed, each receiving IO access and CVC simultaneously. Success rates on first attempt were significantly higher for IO cannulation than CVC (85% versus 60%, p=0.024) and procedure times were significantly lower for IO access compared to CVC (2.0 versus 8.0 min, p<0.001). As for complications, failure of IO access was observed in 6 patients, while 2 or more attempts of CVC were necessary in 16 patients. No other relevant complications like infection, bleeding or pneumothorax were observed.
CONCLUSIONS
IO vascular access is a reliable bridging method to gain vascular access for in-hospital adult patients under resuscitation with difficult peripheral veins. Moreover, IO access is more efficacious with a higher success rate on first attempt and a lower procedure time compared to landmark-based CVC. |
Ca2+ binding study on some low molecular weight elastin peptides. | The binding of Ca2+ to some low molecular weight peptides obtained from the alkaline hydrolysis of purified elastin has been determined by titration using murexide as an indicator for free Ca2+. Ca2+ binding was shown to be dependent on pH and Ca2+ concentration and increases with increasing pH. The data strongly support the possibility that electrostatic interaction occurs between the ionized carboxylic groups of the peptide and Ca2+. In fact, the peptide in which the carboxylic groups were previously blocked with glycine ethyl ester did not interact with Ca2+. |
Spatiotemporal Residual Networks for Video Action Recognition | Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping them with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art. |
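A minimal PyTorch sketch of the second mechanism described above: a purely temporal convolution wrapped as a residual branch and initialized as a no-op, so the pretrained image ConvNet's behavior is preserved when training starts. The class name, tensor shapes, and zero initialization are assumptions for illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn

class TemporalResidual(nn.Module):
    def __init__(self, channels, t_kernel=3):
        super().__init__()
        # (time, height, width) kernel of (t_kernel, 1, 1): mixes adjacent frames only
        self.conv = nn.Conv3d(channels, channels, (t_kernel, 1, 1),
                              padding=(t_kernel // 2, 0, 0), bias=False)
        nn.init.zeros_(self.conv.weight)   # residual branch starts as a no-op

    def forward(self, x):                  # x: (batch, channels, time, H, W)
        return x + self.conv(x)            # temporal residual connection

x = torch.randn(2, 64, 8, 14, 14)
print(TemporalResidual(64)(x).shape)       # torch.Size([2, 64, 8, 14, 14])
```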
Genetic basis for individual variations in pain perception and the development of a chronic pain condition. | Pain sensitivity varies substantially among humans. A significant part of the human population develops chronic pain conditions that are characterized by heightened pain sensitivity. We identified three genetic variants (haplotypes) of the gene encoding catechol-O-methyltransferase (COMT) that we designated as low pain sensitivity (LPS), average pain sensitivity (APS) and high pain sensitivity (HPS). We show that these haplotypes encompass 96% of the human population, and five combinations of these haplotypes are strongly associated (P=0.0004) with variation in the sensitivity to experimental pain. The presence of even a single LPS haplotype diminishes, by as much as 2.3 times, the risk of developing myogenous temporomandibular joint disorder (TMD), a common musculoskeletal pain condition. The LPS haplotype produces much higher levels of COMT enzymatic activity when compared with the APS or HPS haplotypes. Inhibition of COMT in the rat results in a profound increase in pain sensitivity. Thus, COMT activity substantially influences pain sensitivity, and the three major haplotypes determine COMT activity in humans that inversely correlates with pain sensitivity and the risk of developing TMD. |
Sparse-to-Continuous: Enhancing Monocular Depth Estimation using Occupancy Maps | This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the accuracy of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. On the other hand, for outdoor scenes, LiDARs are still considered the standard sensor, which comparatively provide much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage. |
Spectral–Spatial Classification of Hyperspectral Data Based on Deep Belief Network | Hyperspectral data classification is a hot topic in the remote sensing community. In recent years, significant effort has been focused on this issue. However, most methods extract features from the original data in a shallow manner. In this paper, we introduce a deep learning approach into hyperspectral image classification. A new feature extraction (FE) and image classification framework is proposed for hyperspectral data analysis based on the deep belief network (DBN). First, we verify the eligibility of the restricted Boltzmann machine (RBM) and DBN by spectral information-based classification. Then, we propose a novel deep architecture, which combines spectral-spatial FE and classification to achieve high classification accuracy. The framework is a hybrid of principal component analysis (PCA), hierarchical learning-based FE, and logistic regression (LR). Experimental results with hyperspectral data indicate that the classifier provides a competitive solution compared with state-of-the-art methods. In addition, this paper reveals that deep learning systems have huge potential for hyperspectral data classification. |
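For illustration, a compact numpy sketch of one contrastive-divergence (CD-1) update for the restricted Boltzmann machine that a DBN stacks layer by layer; biases are omitted and all sizes are invented, so this is a toy building block, not the paper's spectral-spatial pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

n_vis, n_hid, lr = 200, 50, 0.01      # e.g. 200 spectral bands -> 50 hidden units
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def cd1_step(v0, W):
    h0 = sigmoid(v0 @ W)                           # hidden activation probabilities
    h_sample = (rng.random(h0.shape) < h0) * 1.0   # stochastic hidden states
    v1 = sigmoid(h_sample @ W.T)                   # reconstruction of the input
    h1 = sigmoid(v1 @ W)                           # hidden probabilities after reconstruction
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))  # CD-1 gradient step

v = rng.random(n_vis)                 # one normalized hyperspectral pixel
W = cd1_step(v, W)
```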
New eyes in the sky measure glaciers and ice sheets | |
A multichannel convolutional neural network for cross-language dialog state tracking | The fifth Dialog State Tracking Challenge (DSTC5) introduces a new cross-language dialog state tracking scenario, where the participants are asked to build their trackers based on the English training corpus, while evaluating them with the unlabeled Chinese corpus. Although the computer-generated translations for both English and Chinese corpus are provided in the dataset, these translations contain errors and careless use of them can easily hurt the performance of the built trackers. To address this problem, we propose a multichannel Convolutional Neural Networks (CNN) architecture, in which we treat English and Chinese language as different input channels of one single CNN model. In the evaluation of DSTC5, we found that such multichannel architecture can effectively improve the robustness against translation errors. Additionally, our method for DSTC5 is purely machine learning based and requires no prior knowledge about the target language. We consider this a desirable property for building a tracker in the cross-language context, as not every developer will be familiar with both languages. |
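A minimal PyTorch sketch of the multichannel idea: the English and Chinese representations of one utterance enter as two input channels of a single text CNN. The embedding dimension, filter sizes, and classification head are illustrative assumptions, not the DSTC5 system's configuration.

```python
import torch
import torch.nn as nn

class MultichannelTextCNN(nn.Module):
    def __init__(self, emb_dim=100, n_filters=64, kernel_sizes=(3, 4, 5), n_classes=10):
        super().__init__()
        # in_channels=2: channel 0 = English embeddings, channel 1 = Chinese embeddings
        self.convs = nn.ModuleList(
            nn.Conv2d(2, n_filters, (k, emb_dim)) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, x):                      # x: (batch, 2, seq_len, emb_dim)
        pooled = [torch.relu(c(x)).squeeze(3).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

x = torch.randn(4, 2, 50, 100)                 # 4 utterances, 50 tokens, 100-d embeddings
print(MultichannelTextCNN()(x).shape)          # torch.Size([4, 10])
```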
Projective Test Use Among School Psychologists: A Survey and Critique | The use of projective techniques by school psychologists has been a point of interest and debate, with a number of survey studies documenting usage. The purpose of this study is to update the status of projective use among school psychologists, with a specific focus on their use in the social-emotional assessment of children in schools. In addition to gathering information about the frequency of use, this study provides information about the types of assessment activities in which the assessments are used and practitioners' perceptions of the utility of specific instruments. Results indicate that school psychologists view projective assessments as moderately useful and that they continue to use projectives across grades and for a variety of educational purposes, including eligibility determination and intervention development. Results are discussed critically in the context of previous research. |
Clap: Modeling Applause in Campaign Speeches | This work examines the rhetorical techniques that speakers employ during political campaigns. We introduce a new corpus of speeches from campaign events in the months leading up to the 2016 U.S. presidential election and develop new models for predicting moments of audience applause. In contrast to existing datasets, we tackle the challenge of working with transcripts that derive from uncorrected closed captioning, using associated audio recordings to automatically extract and align labels for instances of audience applause. In prediction experiments, we find that lexical features carry the most information, but that a variety of features are predictive, including prosody, long-term contextual dependencies, and theoretically motivated features designed to capture rhetorical techniques. |
Oral administration of RAC-alpha-lipoic acid modulates insulin sensitivity in patients with type-2 diabetes mellitus: a placebo-controlled pilot trial. | Alpha-lipoic acid (ALA), a naturally occurring compound and radical scavenger, was shown to enhance glucose transport and utilization in different experimental and animal models. Clinical studies described an increase of insulin sensitivity after acute and short-term (10 d) parenteral administration of ALA. The effects of a 4-week oral treatment with alpha-lipoic acid were evaluated in a placebo-controlled, multicenter pilot study to determine whether oral treatment also improves insulin sensitivity. Seventy-four patients with type-2 diabetes were randomized to either placebo (n = 19) or active treatment in various doses of 600 mg once daily (n = 19), twice daily (1200 mg; n = 18), or thrice daily (1800 mg; n = 18) alpha-lipoic acid. An isoglycemic glucose-clamp was done on days 0 (pre) and 29 (post). In this explorative study, analysis was done according to the number of subjects showing an improvement of insulin sensitivity after treatment. Furthermore, the effects of active vs. placebo treatment on insulin sensitivity were compared. All four groups were comparable and had a similar degree of hyperglycemia and insulin sensitivity at baseline. When compared to placebo, significantly more subjects had an increase in insulin-stimulated glucose disposal (MCR) after ALA treatment in each group. As there was no dose effect seen in the three different alpha-lipoic acid groups, all subjects receiving ALA were combined in the "active" group and then compared to placebo. This revealed significantly different changes in MCR after treatment (+27% vs. placebo; p < .01). This placebo-controlled explorative study confirms previous observations of an increase of insulin sensitivity in type-2 diabetes after acute and chronic intravenous administration of ALA. The results suggest that oral administration of alpha-lipoic acid can improve insulin sensitivity in patients with type-2 diabetes. The encouraging findings of this pilot trial need to be substantiated by further investigations. |
Predicting Human Eye Fixations via an LSTM-Based Saliency Attentive Model | Data-driven saliency has recently gained a lot of attention thanks to the use of convolutional neural networks for predicting gaze fixations. In this paper, we go beyond standard approaches to saliency prediction, in which gaze maps are computed with a feed-forward network, and present a novel model which can predict accurate saliency maps by incorporating neural attentive mechanisms. The core of our solution is a convolutional long short-term memory that focuses on the most salient regions of the input image to iteratively refine the predicted saliency map. In addition, to tackle the center bias typical of human eye fixations, our model can learn a set of prior maps generated with Gaussian functions. We show, through an extensive evaluation, that the proposed architecture outperforms the current state-of-the-art on public saliency prediction datasets. We further study the contribution of each key component to demonstrate their robustness on different scenarios. |
Investigation of gender difference in thermal comfort for Chinese people | Gender difference in thermal comfort for Chinese people was investigated through two laboratory experiments. Both subjective assessments and objective measurements were taken during the experiments. Skin temperature (17 points) and heart rate variability (HRV) were measured in one of the experiments. Our results show that there are gender differences in thermal comfort for Chinese people. Correlation of thermal sensation votes with air temperature and vapor pressure shows that females are more sensitive to temperature and less sensitive to humidity than males. Subjective assessment, skin temperature and HRV analysis suggest that females prefer neutral or slightly warmer conditions, due to their consistently lower skin temperature and the fact that mean skin temperature is a good predictor of sensation and discomfort below neutrality. The female comfortable operative temperature (26.3°C) is higher than the male comfortable operative temperature (25.3°C), although males and females have almost the same neutral temperature and there is no gender difference in thermal sensation near neutral conditions. |
How Affective Is a "Like"?: The Effect of Paralinguistic Digital Affordances on Perceived Social Support | A national survey asked 323 U.S. adults about paralinguistic digital affordances (PDAs) and how these forms of lightweight feedback within social media were associated with their perceived social support. People perceived PDAs (e.g., Likes, Favorites, and Upvotes) as socially supportive both quantitatively and qualitatively, even without implicit meaning associated with them. People who are highly sensitive about what others think of them and have high self-esteem are more likely to perceive higher social support from PDAs. |
Clinical usefulness of the free web-based image analysis application ImmunoRatio for assessment of Ki-67 labelling index in breast cancer | AIMS
Ki-67 is a prognostic marker in breast cancer; however, the use of the Ki-67 labelling index (LI) in clinical practice requires a consistent and easily accessible scoring method. The present study evaluated the use of the free internet-based image analysis program ImmunoRatio to score Ki-67 LI in breast cancer in comparison with manual counting.
METHODS
Ki-67 immunohistochemical detection was performed in 577 breast cancer cases, and the Ki-67 LI was determined by ImmunoRatio and manual counting.
RESULTS
The Ki-67 LI determined by ImmunoRatio correlated well with that obtained by manual counting. The concordance rate between ImmunoRatio and manual counting was excellent (κ coefficient of 0.881) at a Ki-67 LI cut-off value of 20%. Cases with high Ki-67 LI by ImmunoRatio were associated with poor overall survival, in particular in the hormone receptor positive group.
CONCLUSIONS
The web-based automated image analysis program ImmunoRatio is an attractive alternative to manual counting to determine the Ki-67 LI in breast cancer. |
Bayesian-Based Iterative Method of Image Restoration | It was assumed that the degraded image H is of the form H = W * S, where W is the original image, S is the point spread function, and * denotes the operation of convolution. It was also assumed that W, S, and H are discrete probability-frequency functions, not necessarily normalized; that is, the numerical value at a point of W, S, or H is considered a measure of the frequency of the occurrence of an event at that point. S is usually in normalized form. Units of energy (which may be considered unique events) originating at a point in W are distributed over points in H according to the frequencies indicated by S; H then represents the resulting sums of the effects of the units of energy originating at all points of W. In what follows, each of the three letters has two uses when subscripted: for example, W_i indicates either the i-th location in the array W or the value associated with the i-th location. The unsubscripted letter refers to the entire array or to the value associated with the array, as in W = Σ_i W_i. The double-subscripted W_ij in two dimensions is interpreted similarly to W_i in one dimension. In the approximation formulas, a subscript r appears, which is the number of the iteration. |
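The iterative scheme described above is the Richardson-Lucy update. Below is a minimal numpy sketch for a 1-D signal, using the abstract's names (H observed, S point spread function, W estimate); the flat initialization, iteration count, and scipy-based convolution are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.signal import convolve

def restore(H, S, iterations=50):
    """Richardson-Lucy style restoration of a 1-D float signal H blurred by PSF S."""
    W = np.full_like(H, H.mean())                     # flat initial estimate of the original
    S_flip = S[::-1]                                  # mirrored PSF for the correlation step
    for _ in range(iterations):
        blurred = convolve(W, S, mode="same")         # current prediction of the degraded image
        ratio = H / np.maximum(blurred, 1e-12)        # observed vs. predicted frequencies
        W = W * convolve(ratio, S_flip, mode="same")  # reweight the estimate
    return W

# Toy example: blur a two-spike signal and restore it.
S = np.array([0.25, 0.5, 0.25])
W_true = np.zeros(32); W_true[10] = 1.0; W_true[20] = 2.0
H = convolve(W_true, S, mode="same")
print(np.round(restore(H, S, iterations=200), 2))
```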
Advanced CAR parking system using Arduino | This paper explains the architecture and design of an Arduino-based car parking system. Authorization of the driver or user is the basic rule used to park a vehicle in a parking place. An authorization card carrying the vehicle number and other details is given to each user. If the user is authorized and space is available, the parking gate opens and the user is allowed to park the vehicle; if no space is available, entry is denied even to an authorized user. When a car is allowed to park, a mobile notification about the parking is sent to the user. The system addresses the parking problem in urban areas, provides security for vehicles, and prevents unauthorized users from entering the parking place. It also supports multi-floor parking, as it displays which floor has free space. |
Robust Segmentation and Measurement Techniques of White Cells in Blood Microscope Images | The analysis and counting of blood cells in microscope images can provide useful information about the health of patients. In particular, morphological analysis of white cell deformations can effectively detect important diseases such as acute lymphoblastic leukemia. Blood images obtained by microscopes coupled with a digital camera are simple to obtain and can be transmitted to clinical centers more easily than liquid blood samples. Automatic measurement systems for white cells in blood microscope images can greatly help blood experts, who typically inspect blood films manually. Unfortunately, the analysis made by human experts is not rapid, and its accuracy is not standardized, as it depends on the operator's capabilities and tiredness. This paper shows that it is possible to accurately measure the properties of white cells in order to allow, at a second stage, leukemia identification. In particular, the paper presents how to suitably enhance the microscope image by removing the undesired microscope background, and a new segmentation strategy to robustly identify white cells, permitting better extraction of their features for subsequent automatic diagnosis of diseases. |
Virtual reality sickness questionnaire (VRSQ): Motion sickness measurement index in a virtual reality environment. | This study aims to develop a motion sickness measurement index in a virtual reality (VR) environment. The VR market is in an early stage of market formation and technological development, and thus, research on the side effects of VR devices such as simulator motion sickness is lacking. In this study, we used the simulator sickness questionnaire (SSQ), which has been traditionally used for simulator motion sickness measurement. To measure the motion sickness in a VR environment, 24 users performed target selection tasks using a VR device. The SSQ was administered immediately after each task, and the order of work was determined using the Latin square design. The existing SSQ was revised to develop a VR sickness questionnaire, which is used as the measurement index in a VR environment. In addition, the target selection method and button size were found to be significant factors that affect motion sickness in a VR environment. The results of this study are expected to be used for measuring and designing simulator sickness using VR devices in future studies. |
Towards rain detection through use of in-vehicle multipurpose cameras | Advanced Driver Assistance Systems (ADAS) based on video cameras tend to be generalized in today's automotive industry. However, while most of these systems perform well in good weather conditions, they perform very poorly under adverse weather, particularly rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing techniques to highlight raindrops. Its results can be further used for image restoration and vision enhancement, and hence it is a valuable tool for ADAS. |
Dexhand: A space-qualified multi-fingered robotic hand | Despite the progress since mankind's first attempts to explore space, sending humans into space remains challenging. While robotic systems are not yet ready to replace human presence, they provide excellent support for astronauts during maintenance and hazardous tasks. This paper presents the development of a space-qualified multi-fingered robotic hand and highlights the most interesting challenges. The design concept, the mechanical structure, the electronics architecture and the control system are presented throughout this overview paper. |
Thrombolysis in patients with pulmonary embolism and elevated heart-type fatty acid-binding protein levels | Recent studies have reported that a novel cardiac biomarker, heart-type fatty acid-binding protein (h-FABP), significantly predicts mortality in patients with pulmonary embolism (PE) at intermediate risk. The aim of this study was to evaluate the effect of thrombolytic therapy on the prognosis of intermediate-risk acute PE patients with elevated levels of h-FABP. This is a non-interventional, prospective, single-center cohort study in which 80 patients (mean age 62 ± 17 years, 32 men) with confirmed acute PE were included. Only patients with PE at intermediate risk (echocardiographic signs of right ventricular overload but without evidence of hypotension or shock) were included in the study. h-FABP and other biomarkers were measured upon admission to the emergency department. Thrombolytic (Thrl) therapy was administered at the physician's discretion. Of the 80 included patients, 24 were h-FABP positive (30%). Fourteen patients (58%) with positive h-FABP had clinical deterioration during the hospital course and required inotropic support, and 12 of these patients died. Of the 56 patients with a negative test, only 7 patients worsened or needed inotropic support, and five patients died during the hospital stay. Mortality of patients with PE at intermediate risk was 21%. The 30-day mortality rate was significantly higher in h-FABP(+) patients compared to h-FABP(−) patients (9 vs. 50 %, p < 0.001). Multivariate analysis revealed h-FABP as the only 30-day mortality predictor (HR 7.81, CI 1.59–38.34, p = 0.01). However, Thrl therapy did not affect the survival of these high-risk patients. Although h-FABP successfully predicted 30-day mortality in patients with PE at intermediate risk, it appears to fail in identifying the patients who will benefit from Thrl therapy. |
Development and validation of a short form of the Fugl-Meyer motor scale in patients with stroke. | BACKGROUND AND PURPOSE
The 50-item Fugl-Meyer motor scale (FM) is commonly used in outcome studies. However, the lengthy administration time of the FM keeps it from being widely accepted for routine clinical use. We aimed to develop a short form of the FM (the S-FM) with sound psychometric properties for stroke patients.
METHODS
The FM was administered to 279 patients. It was then simplified based on expert opinions and the results of Rasch analysis. The psychometric properties (including Rasch reliability, concurrent validity, predictive validity, and responsiveness) of the S-FM were examined and were compared with those of the FM. The concurrent validity and responsiveness of the S-FM were further validated in a sample from the Netherlands.
RESULTS
We selected 6 items for each subscale to construct a 12-item S-FM. The S-FM demonstrated high Rasch reliability, high concurrent validity with the original scale, moderate responsiveness, and moderate predictive validity with the comprehensive activities of daily living function. The S-FM also showed sufficient concurrent validity and responsiveness on the Dutch sample.
CONCLUSIONS
Our results provide strong evidence that the psychometric properties of the S-FM are comparable with those of the FM. The S-FM contains only 12 items, making it a very efficient measure for assessing the motor function of stroke patients in both clinical and research settings. |
Theories of developmental dyslexia: insights from a multiple case study of dyslexic adults. | A multiple case study was conducted in order to assess three leading theories of developmental dyslexia: (i) the phonological theory, (ii) the magnocellular (auditory and visual) theory and (iii) the cerebellar theory. Sixteen dyslexic and 16 control university students were administered a full battery of psychometric, phonological, auditory, visual and cerebellar tests. Individual data reveal that all 16 dyslexics suffer from a phonological deficit, 10 from an auditory deficit, four from a motor deficit and two from a visual magnocellular deficit. Results suggest that a phonological deficit can appear in the absence of any other sensory or motor disorder, and is sufficient to cause a literacy impairment, as demonstrated by five of the dyslexics. Auditory disorders, when present, aggravate the phonological deficit, hence the literacy impairment. However, auditory deficits cannot be characterized simply as rapid auditory processing problems, as would be predicted by the magnocellular theory. Nor are they restricted to speech. Contrary to the cerebellar theory, we find little support for the notion that motor impairments, when found, have a cerebellar origin or reflect an automaticity deficit. Overall, the present data support the phonological theory of dyslexia, while acknowledging the presence of additional sensory and motor disorders in certain individuals. |
Do phytoestrogens reduce the risk of breast cancer and breast cancer recurrence? What clinicians need to know. | Oestrogen is an important determinant of breast cancer risk. Oestrogen-mimicking plant compounds called phytoestrogens can bind to oestrogen receptors and exert weak oestrogenic effects. Despite this activity, epidemiological studies suggest that the incidence of breast cancer is lower in countries where the intake of phytoestrogens is high, implying that these compounds may reduce breast cancer risk, and possibly have an impact on survival. Isoflavones and lignans are the most common phytoestrogens in the diet. In this article, we present findings from human observational and intervention studies related to both isoflavone and lignan exposure and breast cancer risk and survival. In addition, the clinical implications of these findings are examined in the light of a growing dietary supplement market. An increasing number of breast cancer patients seek to take supplements together with their standard treatment in the hope that these will either prevent recurrence or treat their menopausal symptoms. Observational studies suggest a protective effect of isoflavones on breast cancer risk and the case may be similar for increasing lignan consumption although evidence so far is inconsistent. In contrast, short-term intervention studies suggest a possible stimulatory effect on breast tissue raising concerns of possible adverse effects in breast cancer patients. However, owing to the dearth of human studies investigating effects on breast cancer recurrence and survival the role of phytoestrogens remains unclear. So far, not enough clear evidence exists on which to base guidelines for clinical use, although raising patient awareness of the uncertain effect of phytoestrogens is recommended. |
CapsGAN: Using Dynamic Routing for Generative Adversarial Networks | In this paper, we propose a novel technique for generating images in the 3D domain from images with a high degree of geometric transformation. By coalescing two popular concurrent methods that have seen rapid ascension to the machine learning zeitgeist in recent years, GANs (Goodfellow et al.) and capsule networks (Sabour, Hinton et al.), we present CapsGAN. We show that CapsGAN performs better than or equal to traditional CNN-based GANs in generating images with high geometric transformations, using rotated MNIST. In the process, we also show the efficacy of using the capsule architecture in the GAN domain. Furthermore, we tackle the Gordian knot of training GANs, namely performance control and training stability, by experimenting with Wasserstein distance (gradient clipping, penalty) and spectral normalization. The experimental findings of this paper should propel the application of capsules and GANs in the still exciting and nascent domain of 3D image generation, and plausibly video (frame) generation. |
Through the Twitter Glass: Detecting Questions in Micro-Text | In a separate study, we were interested in understanding people's Q&A habits on Twitter. Finding questions within Twitter turned out to be a difficult challenge, so we considered applying some traditional NLP approaches to the problem. On the one hand, Twitter is full of idiosyncrasies, which make processing it difficult. On the other hand, it is very restricted in length and tends to employ simple syntactic constructions, which could help the performance of NLP processing. In order to assess the viability of NLP on Twitter, we built a pipeline of tools to work specifically with Twitter input for the task of finding questions in tweets. This work is still preliminary, but in this paper we discuss the techniques we used and the lessons we learned. |
On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey | Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies. Given its ubiquity in human transactions, machine recognition of handwriting has practical significance, as in reading handwritten notes in a PDA, postal addresses on envelopes, amounts in bank checks, handwritten fields in forms, etc. This overview describes the nature of handwritten language, how it is transduced into electronic data, and the basic concepts behind written language recognition algorithms. Both the on-line case (which pertains to the availability of trajectory data during writing) and the off-line case (which pertains to scanned images) are considered. Algorithms for preprocessing, character and word recognition, and performance with practical systems are indicated. Other fields of application, like signature verification, writer authentication, and handwriting learning tools, are also considered. Index Terms: Handwriting recognition, on-line, off-line, written language, signature verification, cursive script, handwriting learning tools, writer authentication. |
Short-term effects of a combined nutraceutical on insulin sensitivity, lipid levels and indexes of liver steatosis: a double-blind, randomized, cross-over clinical trial | BACKGROUND
Overweight subjects easily develop alterations of the glucose and lipid metabolism and are exposed to an increased cardiometabolic risk. This condition is potentially reversible through the improvement of dietary and behavioural habits. However, a well-assembled nutraceutical would be a useful tool to better improve the metabolic parameters associated to overweight and insulin resistance.
METHODS
To evaluate the effect of a combined nutraceutical containing berberine, chlorogenic acid and tocotrienols, we performed a double-blind, cross-over trial versus placebo in 40 overweight subjects with mixed hyperlipidaemia. After the first 8 weeks of treatment (or placebo), patients were asked to observe a 2-week washout period, and they were then assigned to the alternative treatment for a further period of 8 weeks. Clinical and laboratory data associated with hyperlipidaemia and insulin resistance were obtained at baseline, at the end of the first treatment period, after the washout, and again after the second treatment period.
RESULTS
Both groups experienced a significant improvement in anthropometric and biochemical parameters versus baseline. However, total cholesterol, LDL cholesterol, triglycerides, non-HDL cholesterol, fasting insulin, HOMA-IR, GOT and Lipid Accumulation Product decreased more significantly in the nutraceutical group versus placebo.
CONCLUSIONS
This combination seems to improve a large number of metabolic and liver parameters in the short term in overweight subjects. Further studies are needed to confirm these observations over the medium and long term. |
A New Shape Signature for Fourier Descriptors | Shape-based image description is an important approach to content-based image retrieval (CBIR). A variety of techniques are reported in the literature that aim to represent objects based on their shapes; each of these techniques has its advantages and disadvantages. The Fourier descriptor (FD), a simple yet powerful technique, has attractive properties such as rotational, scale, and translational invariance. In this paper we investigate this technique and present a novel shape signature for extracting Fourier descriptors. When evaluated against curvature scale space (CSS) and Zernike moments (ZM) in shape-based image retrieval, the proposed technique exhibits superior performance. |
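As a sketch of what a shape signature fed to Fourier descriptors can look like, here is the common centroid-distance recipe (hedged: the paper's proposed signature may differ), assuming an ordered (N, 2) contour with N larger than the number of coefficients kept:

```python
# Centroid-distance Fourier descriptors: translation invariance comes from the
# centroid distance, rotation/start-point invariance from dropping phase, and
# scale invariance from dividing by the DC coefficient.
import numpy as np

def fourier_descriptor(boundary, n_coeffs=32):
    centroid = boundary.mean(axis=0)
    r = np.linalg.norm(boundary - centroid, axis=1)  # 1D shape signature
    mag = np.abs(np.fft.fft(r))
    mag = mag / (mag[0] + 1e-12)                     # normalize by the DC term
    return mag[1:n_coeffs + 1]                       # low-frequency descriptor
```

Retrieval then reduces to comparing descriptors, e.g. by Euclidean distance.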
Universal Stanford dependencies: A cross-linguistic typology | Revisiting the now de facto standard Stanford dependency representation, we propose an improved taxonomy to capture grammatical relations across languages, including morphologically rich ones. We suggest a two-layered taxonomy: a set of broadly attested universal grammatical relations, to which language-specific relations can be added. We emphasize the lexicalist stance of the Stanford Dependencies, which leads to a particular, partially new treatment of compounding, prepositions, and morphology. We show how existing dependency schemes for several languages map onto the universal taxonomy proposed here and close with consideration of practical implications of dependency representation choices for NLP applications, in particular parsing. |
Research Robots for Applications in Artificial Intelligence, Teleoperation and Entertainment | Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental |
Fusing Time-of-Flight Depth and Color for Real-Time Segmentation and Tracking | We present an improved framework for real-time segmentation and tracking by fusing depth and RGB color data. We are able to solve common problems seen in tracking and segmentation of RGB images, such as occlusions, fast motion, and objects of similar color. Our proposed real-time mean shift based algorithm outperforms the current state of the art and is significantly better in difficult scenarios. |
Optimizing Hierarchical Visualizations with the Minimum Description Length Principle | In this paper we examine how the Minimum Description Length (MDL) principle can be used to efficiently select aggregated views of hierarchical datasets that feature a good balance between clutter and information. We present MDL formulae for generating uneven tree cuts tailored to treemap and sunburst diagrams, taking into account the available display space and information content of the data. We present the results of a proof-of-concept implementation. In addition, we demonstrate how such tree cuts can be used to enhance drill-down interaction in hierarchical visualizations by implementing our approach in an existing visualization tool. Validation is done with the feature congestion measure of clutter in views of a subset of the current DMOZ web directory, which contains nearly half a million categories. The results show that MDL views achieve a near constant clutter level across display resolutions. We also present the results of a crowdsourced user study where participants were asked to find targets in views of DMOZ generated by our approach and a set of baseline aggregation methods. The results suggest that, in some conditions, participants are able to locate targets (in particular, outliers) faster using the proposed approach. |
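The core idea, scoring candidate tree cuts by description length, can be illustrated with a classic two-part MDL cost in the style of Li and Abe's tree-cut model; this is a hedged stand-in, since the paper's treemap- and sunburst-specific formulae also account for display space:

```python
# Two-part MDL for a tree cut: the model cost grows with the number of cut
# nodes, the data cost measures how well the aggregated view explains counts.
import math

def mdl_cost(cut, total_count):
    """cut: list of (leaves_under_node, count_under_node) pairs."""
    k = len(cut)
    model_bits = (k / 2.0) * math.log2(total_count)  # parameter cost
    data_bits = 0.0
    for leaves, count in cut:
        if count > 0:
            p_leaf = (count / total_count) / leaves  # uniform within a node
            data_bits += -count * math.log2(p_leaf)
    return model_bits + data_bits                    # minimize over cuts
```

A search over cuts (e.g. greedy top-down expansion) would then pick the cut minimizing this cost.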
Attitude Estimation by Multiple-Mode Kalman Filters | This letter proposes a multiple-mode Kalman filter for one-dimensional attitude estimation using a low-cost accelerometer and gyroscope. The nonlinearity and time-varying parameters are partitioned into several modes; for each mode, a linear time-invariant Kalman filter is selected. Experimental results are given to verify the proposed Kalman filter. |
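A hedged sketch of the idea of switching among linear time-invariant filters by operating mode, for a scalar attitude state predicted by a gyro and corrected by an accelerometer; the mode test and noise values here are hypothetical, not the letter's actual partition:

```python
# One step of a scalar Kalman filter for 1D attitude with a mode-dependent
# noise pair: trust the accelerometer less when its magnitude is far from 1 g.
import numpy as np

G = 9.81
MODES = [(1e-4, 0.05),   # mode 0: quasi-static, accelerometer reliable
         (1e-4, 5.0)]    # mode 1: dynamic, accelerometer noisy

def step(theta, P, gyro_rate, accel_angle, accel_mag, dt):
    q, r = MODES[0 if abs(accel_mag - G) < 0.5 else 1]  # hypothetical mode rule
    theta_pred = theta + gyro_rate * dt                 # predict with the gyro
    P_pred = P + q
    K = P_pred / (P_pred + r)                           # correct with the accel angle
    return theta_pred + K * (accel_angle - theta_pred), (1.0 - K) * P_pred
```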
Accuracy of newly formulated fast-setting elastomeric impression materials. | STATEMENT OF PROBLEM
Elastomeric impression materials have been reformulated to achieve a faster set. The accuracy of fast-setting elastomeric impression materials should be confirmed, particularly with respect to disinfection.
PURPOSE
The purpose of this study was to assess the accuracy of 2 types of fast-setting impression materials when disinfected with acid glutaraldehyde.
MATERIAL AND METHODS
Impressions of the mandibular arch of a modified dentoform master model were made, from which gypsum working casts and dies were formed. Measurements of the master model and working casts included anteroposterior (AP) and cross-arch (CA) dimensions. A stainless steel circular crown preparation incorporated within the master model was measured in buccolingual (BL), mesiodistal (MD), and occlusogingival (OG) dimensions and compared to measurements from recovered gypsum dies. The impression materials examined were a fast-set vinyl polysiloxane (VPS-FS, Aquasil Ultra Fast Set), a fast-set polyether (PE-FS, Impregum Penta Soft Quick Step), and a regular-setting polyether as a control (PE, Impregum Penta). Disinfection involved immersion in 3.5% acid glutaraldehyde (Banicide Advanced) for 20 minutes, and nondisinfected impressions served as a control. Linear measurements were made with a measuring microscope. Statistical analysis used a 2-way and a single-factor analysis of variance with pair-wise comparison of mean values when appropriate. Hypothesis testing was conducted at alpha = .05.
RESULTS
No differences were shown between the disinfected and nondisinfected conditions for all locations. However, there were statistical differences among the 3 materials for AP, CA, MD, and OG dimensions. AP and CA dimensions of all working casts were larger than the master model. Impressions produced oval-shaped working dies for all impression materials. PE and PE-FS working dies were larger in all dimensions compared to the stainless steel preparation, whereas VPS-FS-generated working dies were reduced in OG and MD dimensions. Differences detected were small and may not be of clinical significance.
CONCLUSIONS
Impression material accuracy was unaffected by immersion disinfection. The working casts and dies were similar for PE and PE-FS. VPS-FS generated gypsum dies that were smaller in 2 of the 3 dimensions measured and may require additional die relief. Overall accuracy was acceptable for all 3 impression materials. |
Quaternion correlation filters for face recognition in wavelet domain | A new frequency domain face recognition method using wavelet decomposition and quaternion correlation filters is proposed. The wavelet decomposition of the face image leads to a wavelet subband representation, which contains four subband images corresponding to four orthogonal channels. These four subbands can be encoded into a 2D quaternion number array. The quaternion correlation filter method is developed to perform pattern recognition jointly on multichannel 2D signals. The proposed method has been shown to achieve significant improvement in face recognition results compared to the traditional advanced correlation filter method for handling illumination variations of face images. We experimented with the CMU PIE database consisting of 65 people with 21 illumination variations per person, showing that our method can achieve close to 100% recognition accuracy using just a single training image of a person under neutral frontal lighting and testing on all other unseen harsh illumination conditions. |
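The encoding step this abstract describes, four wavelet subbands packed into one quaternion-valued array, can be sketched as follows (assuming PyWavelets and a Haar DWT; the paper's wavelet choice may differ):

```python
# One-level 2D DWT gives four orthogonal subbands; stack them as the
# 1, i, j, k components of a 2D quaternion number array.
import numpy as np
import pywt

def quaternion_subbands(image):
    LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
    return np.stack([LL, LH, HL, HH], axis=-1)  # shape (H/2, W/2, 4)
```

Quaternion correlation filtering then operates jointly on all four channels rather than on each subband independently.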
Marginal Fit Comparison of CAD/CAM Crowns Milled from Two Different Materials. | PURPOSE
To evaluate the marginal fit of CAD/CAM copings milled from hybrid ceramic (Vita Enamic) blocks and lithium disilicate (IPS e.max CAD) blocks, and to evaluate the effect of crystallization firing on the marginal fit of lithium disilicate copings.
MATERIALS AND METHODS
A standardized metal die with a 1-mm-wide shoulder finish line was imaged using the CEREC AC Bluecam. The coping was designed using CEREC 3 software. The design was used to fabricate 15 lithium disilicate and 15 hybrid ceramic copings. Design and milling were accomplished by one operator. The copings were seated on the metal die using a pressure clamp with a uniform pressure of 5.5 lbs. A Macroview Microscope (14×) was used for direct viewing of the marginal gap. Four areas were imaged on each coping (buccal, distal, lingual, mesial). Image analysis software was used to measure the marginal gaps in μm at 15 randomly selected points on each of the four surfaces. A total of 60 measurements were made per specimen. For lithium disilicate copings the measurements for marginal gap were made before and after crystallization firing. Data were analyzed using paired t-test and Kruskal-Wallis test.
RESULTS
The overall mean difference in marginal gap between the hybrid ceramic and crystallized lithium disilicate copings was statistically significant (p < 0.01). Greater mean marginal gaps were measured for crystallized lithium disilicate copings. The overall mean difference in marginal gap before and after firing (precrystallized and crystallized lithium disilicate copings) showed an average of 62 μm increase in marginal gap after firing. This difference was also significant (p < 0.01).
CONCLUSIONS
A significant difference exists in the marginal gap discrepancy when comparing hybrid ceramic and lithium disilicate CAD/CAM crowns. Also, crystallization firing can result in a significant increase in the marginal gap of lithium disilicate CAD/CAM crowns. |
Hyperbaric oxygen in the treatment of delayed radiation injuries of the extremities. | Hyperbaric oxygen (HBO2) is used as an adjunct in the treatment of radiation injury at many sites, including the mandible, larynx, chest wall, bladder, and rectum. In these disorders, HBO2 is effective in stimulating neovascularization and reducing fibrosis. No previous publications report the application of HBO2 to radiation injuries of the extremities. From 1979 until 1997, 17 patients were treated at the Southwest Texas Methodist and Nix Hospitals for nonhealing necrotic wounds of the extremities within previously irradiated fields. All but one wound involved a lower extremity. Most of the patients had been irradiated for soft tissue sarcomas or skin cancers. The rest were irradiated for a variety of malignancies. HBO2 was delivered in a multiplace chamber at 2.4 atm abs daily for 90 min of 100% oxygen at pressure. This report is a retrospective, uncontrolled review of these patients. Eleven patients (65%) healed completely whereas five (29%) failed to heal and one (6%) was lost to follow-up. Three (60%) of those who failed were found to have local or distant recurrence of their tumor early in their course of hyperbaric treatment and were discontinued from therapy at that time. When last seen in the clinic, the wound of the patient who was lost to follow-up was improved but not completely healed. Four of those who failed (including the two with local tumor recurrence) required amputation. If we exclude those with active cancer and the patient lost to follow-up, the success rate was 11 of 13 or 85%. HBO2 was applied successfully with complete wound healing and the avoidance of amputation in a majority of these patients. The consequences of failure in patients suffering from radiation necrosis of the extremities (some complicated by the presence of tumor) are significant, with 80% of the five failures requiring amputation. In radiation injuries of the extremities as in delayed radiation injury at other sites, HBO2 is a useful adjunct and should be part of the overall management. |
CAPACITOR SWITCHING TRANSIENT MODELING AND ANALYSIS ON AN ELECTRICAL UTILITY DISTRIBUTION SYSTEM USING SIMULINK SOFTWARE | The quality of electric power has been a constant topic of study, mainly because problems inherent to it can bring great economic losses in industrial processes. Among the factors that affect power quality, those related to transients originating from capacitor bank switching in primary distribution systems must be highlighted. In this thesis, the characteristics of the transients resulting from the switching of utility capacitor banks are analyzed, as well as the factors that influence their intensities. A practical application of synchronous closing to reduce capacitor bank switching transients is presented. A model representing a real 12.47 kV distribution system fed from the Shelbyville substation was built and simulated using MATLAB/SIMULINK software for the purposes of this study. A spectral analysis of the voltage and current waveforms is performed to extract acceptable capacitor switching times by observing the transient overvoltages and harmonic components. An algorithm is developed for practical implementation of the zero-crossing technique, using the results obtained from the SIMULINK model. |
A protocol-contract for opioid use in patients with chronic pain not due to malignancy. | The legal, psychosocial, and medical factors that we believe have contributed to the success of our protocol-contract in prescribing opioids to patients with chronic pain not due to malignancy are outlined. These factors may be applicable to the treatment of a variety of chronic nonmalignant pain syndromes such as postherpetic neuralgia or human immunodeficiency virus/acquired immunodeficiency syndrome. The intended target audience of this paper is the physician (primary care, chronic pain specialist) who is involved in prescribing opioids for the treatment of chronic, nonmalignant pain. |
White matter hyperintensity and stroke lesion segmentation and differentiation using convolutional neural networks | White matter hyperintensities (WMH) are a feature of sporadic small vessel disease also frequently observed in magnetic resonance images (MRI) of healthy elderly subjects. The accurate assessment of WMH burden is of crucial importance for epidemiological studies to determine associations between WMHs and cognitive and clinical data, their causes, and the effects of new treatments in randomized trials. The manual delineation of WMHs is a very tedious, costly and time consuming process that needs to be carried out by an expert annotator (e.g. a trained image analyst or radiologist). The problem of WMH delineation is further complicated by the fact that other pathological features (e.g. stroke lesions) often also appear as hyperintense regions. Recently, several automated methods aiming to tackle the challenges of WMH segmentation have been proposed. Most of these methods have been specifically developed to segment WMH in MRI but cannot differentiate between WMHs and strokes. Other methods, capable of distinguishing between different pathologies in brain MRI, are not designed with simultaneous WMH and stroke segmentation in mind. Therefore, a task-specific, reliable, fully automated method that can segment and differentiate between these two pathological manifestations on MRI has not yet been identified. In this work we propose to use a convolutional neural network (CNN) that is able to segment hyperintensities and differentiate between WMHs and stroke lesions. Specifically, we aim to distinguish WMH pathologies from those caused by stroke lesions due to either cortical, large or small subcortical infarcts. The proposed fully convolutional CNN architecture, called uResNet, comprises an analysis path that gradually learns low- and high-level features, followed by a synthesis path that gradually combines and up-samples them into a class-likelihood semantic segmentation. Quantitatively, the proposed CNN architecture is shown to outperform other well established and state-of-the-art algorithms in terms of overlap with manual expert annotations. Clinically, the extracted WMH volumes were found to correlate better with the Fazekas visual rating score than competing methods or the expert-annotated volumes. Additionally, the associations found between clinical risk factors and the WMH volumes generated by the proposed method were in line with those found with the expert-annotated volumes. |
An End-To-End Siamese Convolutional Neural Network for Loop Closure Detection in Visual Slam System | Loop closure detection is essential and important in visual simultaneous localization and mapping (SLAM) systems. Most existing methods use a separate feature extraction part and a similarity metric part. In contrast, an end-to-end network is proposed in this paper to jointly optimize the two parts in a unified framework, further enhancing the interworking between them. First, a two-branch siamese network is designed to learn respective features for each scene of an image pair. Then a hierarchical weighted distance (HWD) layer is proposed to fuse the multi-scale features of each convolutional module and calculate the distance between the image pair. Finally, by using the contrastive loss in the training process, an effective feature representation and similarity metric can be learned simultaneously. Experiments on several open datasets illustrate the superior performance of our approach and demonstrate that the end-to-end network is able to conduct loop closure detection in real time, providing an implementable method for visual SLAM systems. |
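The training objective named in the abstract is the standard contrastive loss (Hadsell et al.); a minimal sketch over the distances produced by the two branches (the HWD layer itself is paper-specific and not reproduced here):

```python
# Contrastive loss on pairwise distances d: pull same-place pairs together,
# push different-place pairs at least `margin` apart.
import torch

def contrastive_loss(d, y, margin=1.0):
    # d: (B,) distances between embeddings; y: (B,) with 1 = same scene, 0 = not
    pos = y * d.pow(2)
    neg = (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return 0.5 * (pos + neg).mean()
```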
Detecting Image Splicing using Geometry Invariants and Camera Characteristics Consistency | Recent advances in computer technology have made digital image tampering more and more common. In this paper, we propose an authentic vs. spliced image classification method making use of geometry invariants in a semi-automatic manner. For a given image, we identify suspicious splicing areas, compute the geometry invariants from the pixels within each region, and then estimate the camera response function (CRF) from these geometry invariants. The cross-fitting errors are fed into a statistical classifier. Experiments show a very promising accuracy, 87%, over a large data set of 363 natural and spliced images. To the best of our knowledge, this is the first work to detect image splicing by verifying camera characteristic consistency from a single-channel image. |
Do open youth unemployment and youth programs leave the same mental health scars? Evidence from a Swedish 27-year cohort study. | BACKGROUND
Recent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.
METHODS
The study uses a five-wave, 27-year prospective Swedish cohort study, initiated in 1981, consisting of all graduates from compulsory school in an industrial town in Sweden. Of the original 1083 participants, 94.3% of those alive were still participating at the 27-year follow-up. Exposure to open unemployment and youth programs was measured between ages 18-21. Mental health, indicated through an ordinal-level three-item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post-exposure at ages 21 and 43. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.
RESULTS
Results show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).
CONCLUSIONS
A considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment. |
Knowledge, attitude and practice of breast self-examination among female undergraduate students in the University of Buea | The incidence of breast cancer is on the rise in many parts of Africa. In Cameroon, there were an estimated 2625 cases per 100,000 in 2012. The awareness of breast cancer preventive methods is therefore critical in the reduction of breast cancer morbidity and mortality. This study evaluated the knowledge, attitude and practice of breast self-examination (BSE) among female undergraduate students in the University of Buea. The study comprised 166 female students aged 17-30 years (mean = 22.8 ± 3) sampled randomly. Data were collected by a pretested self-administered questionnaire. Nearly three-quarters (73.5%) of the respondents had previously heard of BSE. Only 9.0% knew how to perform BSE. Similarly, only 13.9% knew what to look for while performing BSE. Television (19.9%) was the main source of information on BSE. Although perceived by 88% of the respondents as important, only 3% had performed BSE regularly. Furthermore, only 19.9% of the respondents had been to any health facility to have a breast examination. Overall, although a majority (63.3%) of the respondents had a moderate attitude towards BSE as an important method for early detection of breast cancer, just a modest 9.6% were substantially aware of it. Lack of knowledge of BSE was cited as the main reason for not performing BSE. A significant association was observed between knowledge and the practice of BSE (P = 0.029), and between attitude and the practice of BSE (P = 0.015). These findings highlight the current knowledge gap that exists in the practice of BSE in the prevention of breast cancer in the study population. Sensitization campaigns and educational programmes ought to be intensified in order to address this issue. |
Japanese spousal smoking study revisited: how a tobacco industry-funded paper reached erroneous conclusions. | OBJECTIVES
To provide a participant's account of the development of a paper commissioned by the tobacco industry examining the reliability of self reported smoking status; to redress the distorted report of this Japanese spousal smoking study, which evaluated the reliability and validity of self reported smoking status and estimated confounding by diet and lifestyle factors.
DESIGN
Repeated interviews on smoking status and its verification by environmental and biological markers for environmental tobacco smoke (ETS) exposure.
SETTING
Urban wives in Osaka City and Sizuoka City, Japan.
PARTICIPANTS
Semi-random sampling of 200 wives in each city. From the Osaka subjects, 100 non-smoking wives were selected for the validity study.
MAIN OUTCOME MEASURES
Kappa coefficient for reliability of self reported smoking status. Correlation coefficients between environmental nicotine concentration, cotinine in saliva and urine, and self reported smoking status.
RESULTS
The kappa coefficient for the repeated interview was high, suggesting sufficient reliability of the responses. The proportion of self reported current smokers misclassified as non-smokers was equivalent to that of misclassified self reported non-smokers. Ambient concentration of nicotine and personal exposure to nicotine correlated with each other and also with salivary cotinine and self reported ETS exposure, but not with the urinary cotinine/creatinine ratio (CCR). There was no major difference in diet and lifestyle related to the husband's smoking status.
CONCLUSION
Self reported smoking status by Japanese wives shows high reliability. It also shows high validity when verified by both nicotine exposure and salivary cotinine, but not by CCR. A previous report questioning the credibility of self reported smoking status, based on questionable CCR, could thus be of dubious validity. In addition, possible dietary and lifestyle confounding factors associated with smoking husbands were not demonstrable, a finding not reported previously. Using all the data from this project changes the conclusion of the previous published report. In addition to the distortion of scientific findings by a tobacco industry affiliated researcher, anti-smoking campaigners made attempts to intimidate and suppress scientific activities. These distortions of science should be counteracted. |
Clinical trial: effects of an oral preparation of mesalazine at 4 g/day on moderately active ulcerative colitis. A phase III parallel-dosing study | Oral mesalazine formulations are effective in the treatment of active ulcerative colitis (UC). It is not clear what induction dose of mesalazine is optimal for treating patients with active UC. We aimed to evaluate the efficacy and safety of 4 versus 2.25 g/day for selected patients with active UC. This was a multicenter, randomized, double-blind, parallel-group clinical study in 39 Japanese medical institutions. A total of 123 patients with moderately active UC received 4 g/day (two divided doses) versus 2.25 g/day (three divided doses) for 8 weeks. The primary endpoint was the ulcerative colitis disease activity index (UC-DAI) score before and after 8 weeks of treatment. The improvement of each individual UC-DAI variable, remission, and efficacy rates were secondary endpoints. Safety was determined by laboratory data, vital signs, subjective symptoms, and objective findings. Patients receiving 4 g/day achieved a change in UC-DAI score significantly superior to those receiving 2.25 g/day [−3.0 (95% confidence interval (CI) −3.8 to −2.3) vs. −0.8 (95% CI −1.8 to 0.1), respectively]. There were significant differences in all UC-DAI variables between the groups. Remission rates were 22.0% (4 g/day) and 15.3% (2.25 g/day). The efficacy rate was significantly better with 4 versus 2.25 g/day [76.3 vs. 45.8%, respectively (95% CI 13.8–47.2); P = 0.001]. No difference was seen in adverse events or adverse drug reactions. A dose of 4 g/day was significantly superior to 2.25 g/day in terms of UC-DAI score for patients with moderately active UC. Safety profiles were similar for both doses. |
Software engineering education (SEEd): is software engineering ready for MOOCs? | The latest rage in university education is Massive Open Online Courses (MOOCs). These courses attract thousands of students for each offering. Students view lectures online and submit their quizzes and homework to automated grading systems. How well does this format fit software engineering? This column looks at some of the choices made by Armando Fox and David Patterson at the University of California, Berkeley in the creation of their course: Software Engineering for Software as a Service (SaaS). Some of those choices reveal advantages and disadvantages of adopting the MOOC approach. Fox and Patterson teach a longer version of their course on campus at Berkeley, CS 169: Software Engineering. The lectures from the first five weeks of the campus course were recorded for use in the online version. Although the online course covers only about a third of the material in the campus course, it provides a surprising amount of useful introductory material, and it gives students a chance to experience many of the key tasks and activities of software engineering. One of the choices made by Fox and Patterson in designing their campus course was to give students a quick introduction to their chosen software development process and tools at the start of the course. This enables students to start practicing software development after only a few hours of instruction. For example, in the second homework assignment students modify a web application and see how their changes result in new functionality for the user of the app. They use standard tools and methods just like real software developers. To be fair, students who take the campus course have already completed prerequisite programming courses. Online students without any programming background would probably find the homework assignments quite difficult. But students are able to accomplish quite a bit by writing a relatively small amount of code. This is the result of another choice of Fox and Patterson: selecting Ruby and Rails for the development environment. Ruby and Rails provide most of the infrastructure needed for creation of web applications and services. They also come with additional tools that enable effective software development practices. Students in the SaaS course practice behavior-driven development with Cucumber and RSpec. By describing tests in terms that are easily understood by customers, students learn to appreciate some of the difficulties that stakeholders have in describing their goals and intentions. They also learn to appreciate the value of testing early and often. Fox and Patterson made a good case for their approach in a recent CACM article [1]. Their survey of Berkeley alumni convinced them that they should include several modern software development techniques, including: version control, working in a team, software-as-a-service, design patterns, unit testing skills, Ruby on Rails, cloud computing, test-first development, user stories, low-fidelity user interface mockups, and measurement of progress in terms of velocity. All of these topics and techniques are included in their campus course, and many of them are also in the online version. Most of these techniques translate well to MOOCs, especially those involving individual programming tasks. Programming assignments can be assessed using automated systems fairly easily. Teamwork, however, is an unknown. 
There is nothing in principle that prevents students from working effectively in geographically distributed teams, but to my knowledge no one has tried it in the MOOC format yet. Not all software engineering is about software-as-a-service. Some software systems are created as embedded systems, some are shrink-wrapped products, some are the "glue" that holds other components together, to name just a few alternatives. And not all skills needed in software engineering are exercised by developing web applications. Requirements elicitation, review techniques, and effective presentation skills are also important. It is these more social skills that may be the most difficult to practice and assess in the MOOC format. Again, there is nothing in principle that prevents students from participating in any of these activities at a distance, but lack of co-location certainly makes them more difficult. Also, it is hard to see how robotic grading can be applied to these exercises. Programs can be tested to see if they produce the correct results, but elicitation activities and reviews have no oracle. Perhaps this is an active research project somewhere. An interesting development in MOOCs that might help develop the more social side of software engineering is the spontaneous creation of study groups and self-appointed teaching assistants. Both of these have been very effective in the MOOC by Fox and Patterson. We look forward to seeing how other software engineering courses adapt to the MOOC format. |
Estimating missing marker positions using low dimensional Kalman smoothing. | Motion capture is frequently used for studies in biomechanics, and has proved particularly useful in understanding human motion. Unfortunately, motion capture approaches often fail when markers are occluded or missing and a mechanism by which the position of missing markers can be estimated is highly desirable. Of particular interest is the problem of estimating missing marker positions when no prior knowledge of marker placement is known. Existing approaches to marker completion in this scenario can be broadly divided into tracking approaches using dynamical modelling, and low rank matrix completion. This paper shows that these approaches can be combined to provide a marker completion algorithm that not only outperforms its respective components, but also solves the problem of incremental position error typically associated with tracking approaches. |
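A hedged sketch of the tracking half of the combination described above: a per-coordinate constant-velocity Kalman filter that simply skips the measurement update when a marker sample is missing. The paper's contribution additionally folds in low-rank structure across markers, which is omitted here.

```python
# Constant-velocity Kalman filter over one marker coordinate; NaN marks a
# missing sample, for which only the prediction step runs.
import numpy as np

def filter_1d(z, dt=1.0, q=1e-3, r=1e-2):
    A = np.array([[1.0, dt], [0.0, 1.0]])            # position-velocity model
    H = np.array([[1.0, 0.0]])
    Q, R = q * np.eye(2), np.array([[r]])
    x = np.array([z[~np.isnan(z)][0], 0.0])          # assumes >= 1 observation
    P = np.eye(2)
    out = np.empty(len(z))
    for t, zt in enumerate(z):
        x, P = A @ x, A @ P @ A.T + Q                # predict
        if not np.isnan(zt):                         # update only if observed
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([zt]) - H @ x)
            P = (np.eye(2) - K @ H) @ P
        out[t] = x[0]
    return out                                       # positions, gaps filled
```

Running a backward (RTS) pass over the stored filter states would turn this filter into the smoother the title refers to.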
Cluster analysis of mortality and malformations in the Provinces of Naples and Caserta (Campania Region). | The possible adverse health effects associated with residence in the neighbourhood of toxic dump sites have been the object of many epidemiological studies in the last two decades; some of these reported increases in various health outcomes. The present study reports a cluster analysis of mortality and malformations at the municipality level, standardized by a socioeconomic deprivation index, in an area of the Campania Region characterized by a widespread illegal practice of dumping toxic and urban waste. Clusters have been observed with significant excesses of mortality from lung, liver, gastric, kidney and bladder cancers and of the prevalence of total malformations and malformations of the limb, cardiovascular and urogenital systems. The clusters are concentrated in a sub-area where most of the illegal dumping of toxic waste has taken place. |
An Intelligent Agriculture Environment Monitoring System Using Autonomous Mobile Robot | The primary concern of "An Intelligent Agriculture Environment Monitoring System Using Autonomous Mobile Robot" is to monitor the environmental condition of the agricultural field in real time. This project deals with creating a solution for such a situation in the form of a small robot that can do the task. The system uses several sensors that are mounted around the agricultural field, and the signals are transferred to the robot using ZigBee transceivers so that it can detect the present environmental condition of the fields and give information about the environment to the user through a message. In this project we develop a smart wireless sensor network (WSN) for an agricultural environment, which monitors the crop field's environment for various factors such as temperature, pH level, water level and humidity. Information about the environment is sent wirelessly to a robot through ZigBee, and the robot collects the data, stores it and sends the information to the user's mobile phone using a GSM module. |
UNIK: unsupervised social network spam detection | Social network spam increases explosively with the rapid development and wide usage of various social networks on the Internet. To timely detect spam in large social network sites, it is desirable to discover unsupervised schemes that can save the training cost of supervised schemes. In this work, we first show several limitations of existing unsupervised detection schemes. The main reason behind the limitations is that existing schemes heavily rely on spamming patterns that are constantly changing to avoid detection. Motivated by our observations, we first propose a sybil defense based spam detection scheme SD2 that remarkably outperforms existing schemes by taking the social network relationship into consideration. In order to make it highly robust in facing an increased level of spam attacks, we further design an unsupervised spam detection scheme, called UNIK. Instead of detecting spammers directly, UNIK works by deliberately removing non-spammers from the network, leveraging both the social graph and the user-link graph. The underpinning of UNIK is that while spammers constantly change their patterns to evade detection, non-spammers do not have to do so and thus have a relatively non-volatile pattern. UNIK has comparable performance to SD2 when it is applied to a large social network site, and outperforms SD2 significantly when the level of spam attacks increases. Based on detection results of UNIK, we further analyze several identified spam campaigns in this social network site. The result shows that different spammer clusters demonstrate distinct characteristics, implying the volatility of spamming patterns and the ability of UNIK to automatically extract spam signatures. |
Why and why not explanations improve the intelligibility of context-aware intelligent systems | Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications. |
Range-IT: detection and multimodal presentation of indoor objects for visually impaired people | In this paper we present our Range-IT prototype, a 3D depth camera based electronic travel aid (ETA) that assists visually impaired people in finding out detailed information about surrounding objects. In addition to detecting indoor obstacles and identifying several objects of interest (e.g., walls, open doors and stairs) at distances of up to 7 meters, the Range-IT system employs a multimodal audio-vibrotactile user interface to present this spatial information. |
Lazy Modeling of Variants of Token Swapping Problem and Multi-agent Path Finding through Combination of Satisfiability Modulo Theories and Conflict-based Search | We address item relocation problems in graphs in this paper. We assume items placed in vertices of an undirected graph with at most one item per vertex. Items can be moved across edges, while various constraints depending on the type of relocation problem must be satisfied. We introduce a general problem formulation that encompasses known types of item relocation problems such as multi-agent path finding (MAPF) and token swapping (TSWAP). In this formulation we express two new types of relocation problems derived from token swapping, which we call token rotation (TROT) and token permutation (TPERM). Our solving approach for item relocation combines satisfiability modulo theories (SMT) with conflict-based search (CBS). We interpret CBS in the SMT framework, where we start with a basic model and refine it with a collision resolution constraint whenever a collision between items occurs in the current solution. The key difference between standard CBS and our SMT-based modification (SMT-CBS) is that standard CBS branches the search to resolve a collision, while SMT-CBS iteratively adds a single disjunctive collision resolution constraint. Experimental evaluation on several benchmarks shows that the SMT-CBS algorithm significantly outperforms standard CBS. We also compared SMT-CBS with a modification of the SAT-based MDD-SAT solver that uses an eager model of item relocation in which all potential collisions are eliminated by constraints in advance. Experiments show that the lazy approach in SMT-CBS produces fewer constraints than MDD-SAT and also achieves faster solving run-times. |
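The lazy refinement loop the abstract contrasts with plain CBS can be sketched as follows; the encoder, solver, and plan validator are passed in as callables since they stand for an SMT/SAT backend that is not reproduced here:

```python
# SMT-CBS-style lazy loop: solve a relaxed model, and on each collision add a
# single disjunctive resolution constraint instead of branching as CBS does.
def smt_cbs(encode, solve, first_collision, forbid, instance, makespan):
    constraints = []
    while True:
        plan = solve(encode(instance, makespan, constraints))
        if plan is None:
            return None                  # infeasible at this makespan bound
        c = first_collision(plan)        # e.g. two items in one vertex at time t
        if c is None:
            return plan                  # collision-free plan found
        constraints.append(forbid(c))    # refine the model and re-solve
```

An outer loop would increase the makespan bound until a plan is returned, mirroring standard SAT-based MAPF solving.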
Rejuvenation of the Upper Face and Periocular Region: Combining Neuromodulator, Facial Filler, Laser, Light, and Energy-Based Therapies for Optimal Results. | BACKGROUND
The upper face and periocular region is a complex and dynamic part of the face. Successful rejuvenation requires a combination of minimally invasive modalities to fill dents and hollows, resurface rhytides, improve pigmentation, and smooth the mimetic muscles of the face without masking facial expression.
METHODS
Using a review of the literature and clinical experience, we discuss our strategy for combining botulinum toxin, facial filler, ablative laser, intense pulsed light, microfocused ultrasound, and microneedle fractional radiofrequency to treat aesthetic problems of the upper face, including brow ptosis, temple volume loss, A-frame deformity of the superior sulcus, and superficial and deep rhytides.
RESULTS
With attention to safety recommendations, injectable, light, laser, and energy-based treatments can be safely combined in experienced hands to provide enhanced outcomes in the rejuvenation of the upper face.
CONCLUSION
Providing multiple treatments in 1 session improves patient satisfaction by producing greater improvements in a shorter amount of time and with less overall downtime than would be necessary with multiple office visits. |
The Way They Move: Tracking Multiple Targets with Similar Appearance | We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with tracklets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, in scenarios where appearance cues are poor or unavailable. |
Let there be color!: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification | We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNNs. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations. |
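A hedged sketch of a fusion layer in the spirit of this abstract: the global feature vector is tiled over the spatial grid of the local features, concatenated, and mixed; channel sizes here are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Merge a (B, Cg) global prior into (B, Cl, H, W) local feature maps."""
    def __init__(self, local_ch=256, global_ch=256, out_ch=256):
        super().__init__()
        self.mix = nn.Conv2d(local_ch + global_ch, out_ch, kernel_size=1)

    def forward(self, local_feats, global_feat):
        b, _, h, w = local_feats.shape
        g = global_feat.view(b, -1, 1, 1).expand(-1, -1, h, w)  # tile spatially
        return torch.relu(self.mix(torch.cat([local_feats, g], dim=1)))
```

Because the global branch sees the entire image while the local branch works patch-wise, the fused maps carry scene-level context at every pixel, which is what lets the network pick plausible colors.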
Spectrum of e-business maturity in Australian SMEs: a multiple case studies approach on the applicability of the stages of growth for e-business model | As part of ongoing research looking at the pragmatic value of the stages of growth concept to map the progression of e-business maturity, this paper describes some of the empirical findings of a qualitative study into the progression and maturity of e-business in small and medium enterprises in Australia. In particular, the paper discusses the progression of e-business initiatives in three different companies at different levels of IS/IT maturity and the applicability of the stages concept in describing their progression. The cases presented in the paper clearly reveal that the IT managers and the CEOs of the companies involved in the study responded favourably when using and applying the stages of growth for e-business (SOGe) model to their e-business experience, and had no trouble in using the stages model to chart and identify the "evolution" of their e-business initiatives. |
The Wilhelmøya Formation (Upper Triassic-Lower Jurassic) at Bohemanflya, Spitsbergen | An 18.5 m thick shale sequence of Norian-Rhaetian age is described from the Bohemanflya-Syltoppen area (north of Isfjorden, central Spitsbergen). Lithological, petrographical and palynological analyses show that the sequence represents a marginal development of the lower part of the Wilhelmøya Formation. The depositional history at the Triassic-Jurassic transition is discussed in the light of this new evidence. The Wilhelmøya Formation was probably deposited during a weak marine transgression over an area of low relief. Low sediment supply and current and wave reworking of the sediments characterized the depositional conditions. |
Detection and Analysis of 2016 US Presidential Election Related Rumors on Twitter | The 2016 U.S. presidential election has witnessed the major role of Twitter in the year’s most important political event. Candidates used this social media platform extensively for online campaigns. Meanwhile, social media has been filled with rumors, which might have had huge impacts on voters’ decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump. To overcome the difficulty of labeling a large amount of tweets as training data, we detect rumor tweets by matching them with verified rumor articles. We analyze over 8 million tweets collected from the followers of the two candidates. Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics. |
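The labeling shortcut described above, matching tweets against verified rumor articles instead of hand-labeling, can be sketched with TF-IDF similarity (the vectorizer settings and threshold are illustrative, not the paper's exact matching procedure):

```python
# Flag a tweet as rumor-related if its TF-IDF cosine similarity to the
# best-matching verified rumor article clears a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_rumors(tweets, rumor_articles, threshold=0.3):
    vec = TfidfVectorizer(stop_words="english")
    A = vec.fit_transform(rumor_articles)   # article vectors define the space
    T = vec.transform(tweets)
    sims = cosine_similarity(T, A)          # (num_tweets, num_articles)
    return sims.max(axis=1) >= threshold    # best article decides the label
```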
Designing and Evaluating a Social Gaze-Control System for a Humanoid Robot | This paper describes a context-dependent social gaze-control system implemented as part of a humanoid social robot. The system enables the robot to direct its gaze at multiple humans who are interacting with each other and with the robot. The attention mechanism of the gaze-control system is based on features that have been proven to guide human attention: nonverbal and verbal cues, proxemics, the visual field of view, and the habituation effect. Our gaze-control system uses Kinect skeleton tracking together with speech recognition and SHORE-based facial expression recognition to implement the same features. As part of a pilot evaluation, we collected the gaze behavior of 11 participants in an eye-tracking study. We showed participants videos of two-person interactions and tracked their gaze behavior. A comparison of the human gaze behavior with the behavior of our gaze-control system running on the same videos shows that it replicated human gaze behavior 89% of the time. |
THE HEBREW BIBLE AND METAETHICS: A PHILOSOPHICAL INTRODUCTION | In the discipline of Biblical Ethics (Hebrew Bible) the concern lies with descriptive and normative ethics, whereas questions pertaining to metaethics are frequently bracketed. As a result, very little attention has been paid to the semantic, epistemological and metaphysical assumptions underlying the Hebrew Bible's moral discourse. In response to this state of affairs, this paper seeks to make a plea for the introduction of metaethics within the field of Biblical Ethics by demonstrating how this line of inquiry can be simultaneously philosophical and hermeneutically valid in terms of research problems, objectives and methodology. The study concludes with a cursory introduction to three interesting metaethical problems in connection with biblical assumptions regarding the divinity-morality relation, viewed from the perspective of the Euthyphro Dilemma, the Principle of Sufficient Reason, and the question of morality and meaning in the divine condition. |
A discriminative CNN video representation for event detection | In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset. |
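The aggregation choices this abstract discusses can be made concrete: average and max pooling are the baselines it names, and VLAD is shown here as one example of the kind of encoding that can outperform them (hedged: the paper's exact encoder may differ):

```python
# Aggregate per-frame CNN descriptors (T, D) into a single video vector.
import numpy as np

def avg_pool(frames):
    return frames.mean(axis=0)

def max_pool(frames):
    return frames.max(axis=0)

def vlad(frames, centers):
    # centers: (K, D) visual words, e.g. from k-means on training descriptors
    assign = np.argmin(((frames[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    v = np.zeros_like(centers)
    for k in range(len(centers)):
        if (assign == k).any():
            v[k] = (frames[assign == k] - centers[k]).sum(axis=0)  # residuals
    v = v.reshape(-1)
    return v / (np.linalg.norm(v) + 1e-12)  # L2-normalized encoding
```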
Causal effects of violent sports video games on aggression: Is it competitiveness or violent content? | Three experiments examined the impact of excessive violence in sport video games on aggression-related variables. Participants played either a nonviolent simulation-based sports video game (baseball or football) or a matched excessively violent sports video game. Participants then completed measures assessing aggressive cognitions (Experiment 1), aggressive affect and attitudes towards violence in sports (Experiment 2), or aggressive behavior (Experiment 3). Playing an excessively violent sports video game increased aggressive affect, aggressive cognition, aggressive behavior, and attitudes towards violence in sports. Because all games were competitive, these findings indicate that violent content uniquely leads to increases in several aggression-related variables, as predicted by the General Aggression Model and related social–cognitive models. In 2002, ESPN aired an investigative piece examining the impact of excessively violent sports video games on youth's attitudes towards sports (ESPN, 2002). At the time, Midway Games produced several sports games (e.g., NFL Blitz, MLB Slugfest, and NHL Hitz) containing excessive and unrealistic violence, presumably to appeal to non-sport fan video game players. These games were officially licensed by the National Football League, Major League Baseball, and the National Hockey League, which permitted Midway to implement team logos, players' names, and players' likenesses into the games. Within these games, players control real-life athletes and can perform excessively violent behaviors on the electronic field. The ESPN program questioned why the athletic leagues would allow their license to be used in this manner and what effect these violent sports games had on young players. Then in December 2004, the NFL granted exclusive license rights to EA Sports (ESPN.com, 2005). In response, Midway Games began publishing a more violent, grittier football game based on a fictitious league. The new football video game, which is rated appropriate only for people seventeen and older, features fictitious players engaging in excessive violent behaviors on and off the field, drug use, sex, and gambling (IGN.com, 2005). Violence in video games has been a major social issue, not limited to violence in sports video games. Over 85% of the games on the market contain some violence (Children Now, 2001). Approximately half of video games include serious violent actions toward other game characters (Children Now, 2001; Dietz, 1998; Dill, Gentile, Richter, & Dill, 2005). Indeed, Congressman Joe Baca of California recently introduced Federal legislation to require that violent video games contain a warning label about their link to aggression (Baca, 2009). Since 1999, the amount of daily video game usage by youth has nearly doubled (Roberts, Foehr, & Rideout, 2005). Almost 60% of American youth from ages 8 to 18 report playing video games on "any given day" and 30% report playing for more than an average of an hour a day (Roberts et al., 2005). Video game usage is high in youth regardless of sex, race, parental education, or household income (Roberts et al., 2005). 
Competition-only versus violent-content hypotheses: Recent meta-analyses (e.g., Anderson et al., 2004, submitted for publication) have shown that violent video game exposure increases physiological arousal, aggressive affect, aggressive cognition, and aggressive behavior. Other studies link violent video game play to physiological desensitization to violence (e.g., Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007). Particularly interesting is the recent finding that violent video game play can increase aggression in both short- and long-term contexts. Besides the empirical evidence, there are strong theoretical reasons from the cognitive, social, and personality domains to expect violent video game effects on aggression-related variables. However, currently there are two competing hypotheses as to how violent video games increase aggression: the violent-content hypothesis and the competition-only hypothesis. General Aggression Model and the violent-content hypothesis: The General Aggression Model (GAM) is an integration of several prior models of aggression (e.g., social learning theory, cognitive neoassociation) and has been detailed in several publications (Anderson & Bushman, 2002; Anderson & Carnagey, 2004; Anderson, Gentile, & Buckley, 2007; Anderson & Huesmann, 2003). GAM describes a cyclical pattern of interaction between the person and the environment. Input variables, such as provocation and aggressive personality, can affect decision processes and behavior by influencing one's present internal state in at least one of three primary ways: by influencing current cognitions, affective state, and physiological arousal. That is, a specific input variable may directly influence only one, or two, or all three aspects of a person's internal state. For example, uncomfortably hot temperature appears to increase aggression primarily by its direct impact on affective state (Anderson, Anderson, Dorr, DeNeve, & Flanagan, 2000). Of course, because affect, arousal, and cognition tend to influence each other, even input variables that primarily influence one aspect of internal state also tend to indirectly influence the other aspects. Although GAM is a general model and not specifically a model of media violence effects, it can easily be applied to media effects. Theoretically, violent media exposure might affect all three components of present internal state. Research has shown that playing violent video games can temporarily increase aggressive thoughts (e.g., Kirsh, 1998), affect (e.g., Ballard & Weist, 1996), and arousal (e.g., Calvert & Tan, 1994). Of course, nonviolent games also can increase arousal, and for this reason much prior work has focused on testing whether violent content can increase aggressive behavior even when physiological arousal is controlled. This usually is accomplished by selecting nonviolent games that are equally arousing (e.g., Anderson et al., 2004). Despite GAM's primary focus on the current social episode, it is not restricted to short-term effects. With repeated exposure to certain types of stimuli (e.g., media violence, certain parenting practices), particular knowledge structures (e.g., aggressive scripts, attitudes towards violence) become chronically accessible. Over time, the individual employs these knowledge structures and occasionally receives environmental reinforcement for their usage. 
With time and repeated use, these knowledge structures gain strength and connections to other stimuli and knowledge structures, and therefore are more likely to be used in later situations. This accounts for the finding that repeatedly exposing children to media violence increases later aggression, even into adulthood (Anderson, Sakamoto, Gentile, Ihori, & Shibuya, 2008; Huesmann & Miller, 1994; Huesmann, Moise-Titus, Podolski, & Eron, 2003; Möller & Krahé, 2009; Wallenius & Punamaki, 2008). Such long-term effects result from the development, automatization, and reinforcement of aggression-related knowledge structures. In essence, the creation and automatization of these aggression-related knowledge structures, together with the concomitant emotional desensitization, changes the individual's personality. For example, long-term consumers of violent media can become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure, or than they would have become without such exposure (e.g., Funk, Baldacci, Pasold, & Baumgardner, 2004; Gentile, Lynch, Linder, & Walsh, 2004; Krahé & Möller, 2004; Uhlmann & Swanson, 2004). In sum, GAM predicts that one way violent video games increase aggression is through the violent content increasing at least one of the aggression-related aspects of a person's current internal state (short-term context) and, over time, increasing the chronic accessibility of aggression-related knowledge structures. This is the violent-content hypothesis. The competition hypothesis. The competition hypothesis maintains that competitive situations stimulate aggressiveness. According to this hypothesis, many previous short-term (experimental) video game studies have found links between violent games and aggression not because of the violent content, but because violent video games typically involve competition, whereas nonviolent video games frequently are noncompetitive. The competitive aspect of video games might increase aggression by increasing arousal or by increasing aggressive thoughts or affect. Previous research has demonstrated that increases in physiological arousal can cause increases in aggression under some circumstances (Berkowitz, 1993). Competitive aspects of violent video games could also increase aggressive cognitions via links between aggression and competition concepts (Anderson & Morrow, 1995; Deutsch, 1949, 1993). Thus, at a general level such competition effects are entirely consistent with GAM and with the violent-content hypothesis. However, a strong version of the competition hypothesis states that violent content has no impact beyond its effects on competition and its sequelae. This strong version, which we call the competition-only hypothesis, has not been adequately tested. Testing the competition-only hypothesis. There has been little research conducted to examine the violent-content hypothesis versus the competition-only hypothesis (see Carnagey & Anderson, 2005, for one such example). To test these hypotheses against each other, one must randomly assign participants to play either violent or nonviolent video games, all of which are competitive. The use of sports video games meets this requirement and has other benefits.
The Nativist Movement in America: Religious Conflict in the Nineteenth Century | Chapter one, "Anti-Catholic Religious Nativism: A Critical Moment in American History"; Chapter two, "Convent Burning in Massachusetts"; Chapter three, "Riots in Philadelphia"; Chapter four, "The Pope's Stone in Washington, DC"; Chapter five, "What We Learn from Religious Nativism"; Documents
Motion estimation with non-local total variation regularization | State-of-the-art motion estimation algorithms suffer from three major problems: poorly textured regions, occlusions, and small-scale image structures. Based on the Gestalt principles of grouping, we propose to incorporate a low-level image segmentation process in order to tackle these problems. Our new motion estimation algorithm is based on non-local total variation regularization, which allows us to integrate the low-level image segmentation process in a unified variational framework. Numerical results on the Middlebury optical flow benchmark data set demonstrate that we can cope with the aforementioned problems.
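To make the regularizer concrete, a non-local total variation term for a flow field u typically takes the following shape (a sketch of the standard formulation, not necessarily the authors' exact energy; the data term ρ, the neighborhood N(x), and the weight parameters σs, σc are assumptions):

```latex
% Generic non-local TV energy for optical flow (sketch only; the paper's
% exact energy and weight definitions may differ).
E(u) = \sum_{x} \rho\big(I_1(x) - I_2(x + u(x))\big)
     + \lambda \sum_{x} \sum_{y \in \mathcal{N}(x)} w(x,y)\,\lVert u(x) - u(y) \rVert_1,
\qquad
w(x,y) = \exp\!\left(-\frac{\lVert x - y \rVert_2}{\sigma_s}
                     -\frac{\lVert I(x) - I(y) \rVert_2}{\sigma_c}\right).
```

Because the support weights w(x,y) couple pixels that are close in space and similar in color, the regularizer acts like a soft, low-level segmentation: flow discontinuities are encouraged to align with image edges, which is how a segmentation cue can enter a single variational objective.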
MODELING BY MULTI-AGENT SYSTEMS: A SWARM INTELLIGENCE APPROACH | There are a number of emergent traffic and transportation phenomena that cannot be analyzed successfully and explained using analytical models. The only way to analyze such phenomena is through the development of models that can simulate the behavior of every agent. Agent-based modeling is an approach based on the idea that a system is composed of decentralized individual 'agents' and that each agent interacts with other agents according to localized knowledge. The agent-based approach is a 'bottom-up' approach to modeling in which special kinds of artificial agents are created by analogy with social insects. Social insects (including bees, wasps, ants, and termites) have lived on Earth for millions of years. Their behavior in nature is primarily characterized by autonomy, distributed functioning, and self-organizing capacities. Social insect colonies teach us that very simple individual organisms can form systems capable of performing highly complex tasks by dynamically interacting with each other. By contrast, a large number of traditional engineering models and algorithms are based on control and centralization. In this article, we try to answer the following question: Can we use some principles of natural swarm intelligence in the development of artificial systems aimed at solving complex problems in traffic and transportation?
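As a toy illustration of this bottom-up style of modeling (hypothetical, not taken from the article), the Python sketch below has each agent choose between two routes using only its own remembered travel times; a stable load split nevertheless emerges at the population level:

```python
import random

# Toy agent-based model: decentralized route choice with local learning.
# All numbers (capacities, learning rate, exploration rate) are invented.
N_AGENTS, N_DAYS = 100, 50
CAPACITY = {"A": 60, "B": 40}  # hypothetical route capacities

# Each agent remembers a perceived travel time per route.
memory = [{"A": 1.0, "B": 1.0} for _ in range(N_AGENTS)]

for day in range(N_DAYS):
    # 1. Local decision: pick the route remembered as faster,
    #    with a 10% chance of random exploration.
    choices = [
        min(m, key=m.get) if random.random() > 0.1 else random.choice("AB")
        for m in memory
    ]
    # 2. Emergent congestion: travel time grows with load/capacity.
    load = {r: choices.count(r) for r in "AB"}
    travel_time = {r: 1.0 + load[r] / CAPACITY[r] for r in "AB"}
    # 3. Local learning: each agent updates only the route it used.
    for m, r in zip(memory, choices):
        m[r] = 0.7 * m[r] + 0.3 * travel_time[r]

print("final split:", load, "final travel times:", travel_time)
```

No agent knows the capacities or the global load, yet the split tends toward the 60/40 ratio of the capacities; this kind of emergent, self-organized outcome is what the swarm intelligence approach exploits.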
Long-term cost-effectiveness of home versus clinic-based management of chronic heart failure: the WHICH? study. | BACKGROUND
The cost-effectiveness of a heart failure management intervention can be further informed by incorporating the expected benefits and costs of future survival.
METHODS
This study compared the long-term costs per quality-adjusted life year (QALY) gained from home-based intervention (HBI) vs specialist clinic-based intervention (CBI) among elderly patients (mean age = 71 years) with heart failure discharged home (mean intervention duration = 12 months). Cost-utility analysis was conducted from a government-funded health system perspective. A Markov cohort model was used to simulate disease progression over 15 years based on initial data from a randomized clinical trial (the WHICH? study); a schematic sketch of this model structure appears after this abstract. Time-dependent hazard functions were modeled using the Weibull function and compared against an alternative model in which the hazard was assumed constant over time. Deterministic and probabilistic sensitivity analyses were conducted to identify the key drivers of cost-effectiveness and quantify uncertainty in the results.
RESULTS
During the trial, mortality was highest within 30 days of discharge and decreased thereafter in both groups, although it declined more slowly under CBI than under HBI. At 15 years (extrapolated), HBI was associated with slightly better health outcomes (a mean of 0.59 QALYs gained) and mean additional costs of AU$13,876 per patient. The incremental cost-utility ratio and the incremental net monetary benefit (vs CBI) were AU$23,352 per QALY gained and AU$15,835, respectively. The uncertainty was driven by variability in the costs and probabilities of readmissions. Probabilistic sensitivity analysis showed HBI had a 68% probability of being cost-effective at a willingness-to-pay threshold of AU$50,000 per QALY.
CONCLUSION
Compared with CBI (a specialized outpatient HF clinic-based intervention), HBI (a predominantly, though not exclusively, home-based intervention) could be cost-effective over the long term in elderly patients with heart failure at a willingness-to-pay threshold of AU$50,000/QALY, albeit with large uncertainty.
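As a sanity check, the reported figures are internally consistent: AU$13,876 / AU$23,352 per QALY implies an incremental gain of about 0.594 QALYs, and 50,000 × 0.594 − 13,876 ≈ AU$15,835, matching the reported net monetary benefit. Below is a minimal Python sketch of the kind of Markov cohort model with a Weibull (time-dependent) hazard described in the methods; every parameter value is invented for illustration and none comes from the WHICH? trial:

```python
import math

# Schematic two-state (alive/dead) Markov cohort model with a Weibull
# hazard, run separately per arm. All parameters below are hypothetical.
CYCLES = 15 * 12      # monthly cycles over a 15-year horizon
DISCOUNT = 0.05       # assumed annual discount rate

def monthly_death_prob(t, lam, k=0.8):
    """Cycle transition probability from a Weibull hazard h(t) = lam*k*t^(k-1).

    k < 1 makes the hazard highest just after discharge and declining
    thereafter, mirroring the mortality pattern noted in the results.
    """
    cum_haz = lam * ((t + 1.0) ** k - t ** k)  # integral of h over the cycle
    return 1.0 - math.exp(-cum_haz)

def run_arm(lam, monthly_cost, utility):
    alive, qalys, costs = 1.0, 0.0, 0.0
    for t in range(CYCLES):
        disc = (1.0 + DISCOUNT) ** (-t / 12.0)
        qalys += alive * (utility / 12.0) * disc
        costs += alive * monthly_cost * disc
        alive *= 1.0 - monthly_death_prob(t, lam)
    return qalys, costs

# Hypothetical inputs: HBI has a slightly lower hazard but higher cost.
q_hbi, c_hbi = run_arm(lam=0.0036, monthly_cost=620.0, utility=0.70)
q_cbi, c_cbi = run_arm(lam=0.0040, monthly_cost=540.0, utility=0.70)

d_q, d_c = q_hbi - q_cbi, c_hbi - c_cbi
icer = d_c / d_q                 # incremental cost-utility ratio (AU$/QALY)
nmb = 50_000 * d_q - d_c         # net monetary benefit at AU$50,000/QALY
print(f"dQALY={d_q:.3f}  dCost={d_c:.0f}  ICER={icer:.0f}  NMB={nmb:.0f}")
```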
Spectra of Transition‐Metal Complexes | The electronic transitions observed in complexes of the transition‐metal ions are interpreted in terms of a slightly modified crystal‐field theory. Parameters of chemical interest are derived. |
TGR5 signalling inhibits the production of pro-inflammatory cytokines by in vitro differentiated inflammatory and intestinal macrophages in Crohn's disease | Bile acids (BAs) play important roles not only in lipid metabolism, but also in signal transduction. TGR5, a transmembrane receptor of BAs, is an immunomodulatory factor, but its detailed mechanism remains unclear. Here, we aimed to delineate how BAs operate in immunological responses via the TGR5 pathway in human mononuclear cell lineages. We examined TGR5 expression in human peripheral blood monocytes, several types of in vitro differentiated macrophages (Mϕs), and dendritic cells. Mϕs differentiated with macrophage colony-stimulating factor and interferon-γ (Mγ-Mϕs), which are similar to the human intestinal lamina propria CD14(+) Mϕs that contribute to Crohn's disease (CD) pathogenesis through production of pro-inflammatory cytokines, expressed TGR5 at higher levels than any other type of differentiated Mϕ or dendritic cell examined. We also showed that a TGR5 agonist and two types of BAs, deoxycholic acid and lithocholic acid, could inhibit tumour necrosis factor-α production in Mγ-Mϕs stimulated by commensal bacterial antigen or lipopolysaccharide. This inhibitory effect was mediated by the TGR5-cAMP pathway, which induces phosphorylation of c-Fos to regulate nuclear factor-κB p65 activation. Next, we analysed TGR5 levels in lamina propria mononuclear cells (LPMCs) obtained from the intestinal mucosa of patients with CD. Compared with non-inflammatory bowel disease samples, inflamed CD LPMCs contained more TGR5 transcripts. Among LPMCs, isolated CD14(+) intestinal Mϕs from patients with CD expressed TGR5. In isolated intestinal CD14(+) Mϕs, a TGR5 agonist could inhibit tumour necrosis factor-α production. These results indicate that TGR5 signalling may have the potential to modulate immune responses in inflammatory bowel disease.
Filtering spam e-mail from mixed Arabic and English messages: a comparison of machine learning techniques | Spam is one of the main problems in e-mail communication. Although the volume of non-English spam is increasing, little work has been done in this area. For example, users in the Arab world receive spam written mostly in Arabic, English, or mixed Arabic and English. To filter this kind of message, this research applied several machine learning techniques. Many researchers have used machine learning techniques to filter spam e-mail messages. This study compared six supervised machine learning classifiers: maximum entropy, decision trees, artificial neural nets, naïve Bayes, support vector machines, and k-nearest neighbor. The experiments suggested that words in Arabic messages should be stemmed before applying a classifier. In addition, in most cases, the experiments showed that classifiers using feature selection techniques can achieve performance comparable to or better than that of filters that do not use them.
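A minimal sketch of one arm of such a comparison (naïve Bayes over stemmed tokens) is shown below, using NLTK's ISRI Arabic stemmer and scikit-learn; the stemmer choice, the feature settings, and the toy messages are illustrative assumptions, not the study's setup:

```python
from nltk.stem.isri import ISRIStemmer          # Arabic stemmer shipped with NLTK
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

stemmer = ISRIStemmer()

def stem_text(text):
    # Stem every whitespace token; ISRI only affects Arabic words, so
    # English tokens in mixed messages pass through essentially intact.
    return " ".join(stemmer.stem(tok) for tok in text.split())

# Placeholder corpus (invented): 1 = spam, 0 = ham.
messages = [
    "free offer اربح جائزة الآن",
    "click here اشترك مجانا للفوز",
    "اجتماع الفريق غدا في العاشرة",
    "please review the attached تقرير المشروع",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(preprocessor=stem_text),  # stemming applied before tokenizing
    MultinomialNB(),
)
scores = cross_val_score(model, messages, labels, cv=2, scoring="f1")
print("mean F1:", scores.mean())
```

Swapping `MultinomialNB` for any other scikit-learn classifier reproduces the comparison pattern, and dropping the `preprocessor` argument gives the unstemmed baseline the experiments argue against.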
Competition, knowledge, and local government | This paper applies the insights of Austrian economics to an important issue in local political economy. Basic economic theory holds that greater competition produces superior outcomes in private goods markets. The same should be true in the “markets” for the output of local government. Brennan and Buchanan (1977, 1980) show that interjurisdictional competition may serve as a potential restraint on the monopoly powers of local Leviathan and Tiebout (1956) shows that it may help lead to the production of efficient quantities of local public goods. However, other potential virtues of competition in the market for local collective goods have been largely ignored. This paper explores those other virtues as well as the neoclassical theoretical foundations of the Tiebout (1956) model, upon which much of this literature is based. This has public policy implications for local governments, which have taken on increased importance given the recent global movement towards more decentralized government. |
Links between two semisymmetric graphs on 112 vertices via association schemes | This paper provides a model of the use of computer algebra experimentation in algebraic graph theory. Starting from the semisymmetric cubic graph L on 112 vertices, we embed it into another semisymmetric graph N of valency 15 on the same vertex set. In order to consider systematically the links between L and N, a number of combinatorial structures are involved and related coherent configurations are investigated. In particular, the construction of the incidence double cover of directed graphs is exploited. As a natural by-product of the approach presented here, a number of new interesting (mostly non-Schurian) association schemes on 56, 112, and 120 vertices are introduced and briefly discussed. We use the computer algebra system GAP (including the packages GRAPE and nauty), as well as the computer package COCO.
Comparative Study of Optical Unipolar Codes for Incoherent DS-OCDMA Systems | In incoherent Direct Sequence Optical Code Division Multiple Access (DS-OCDMA) systems, Multiple Access Interference (MAI) is one of the main limitations. To mitigate MAI, many types of codes can be used to remove the contributions of interfering users. In this paper, we study two types of unipolar codes used in incoherent DS-OCDMA systems: optical orthogonal codes (OOC) and prime codes (PC). We derive the characteristics of these codes, i.e., their correlation properties and a theoretical upper bound on the probability of error. The simulation results showed that PC codes have better performance than OOC codes.
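For reference, the standard definitions and the commonly used chip-synchronous error bound for such codes are sketched below (assuming the usual OOC-style analysis; the paper's exact expressions may differ):

```latex
% An (F, w, \lambda_a, \lambda_c) OOC is a family of \{0,1\}-sequences of
% length F and Hamming weight w whose correlations satisfy
\sum_{t=0}^{F-1} x_t\, x_{t \oplus \tau} \le \lambda_a \quad (0 < \tau < F),
\qquad
\sum_{t=0}^{F-1} x_t\, y_{t \oplus \tau} \le \lambda_c \quad (x \neq y,\ \forall \tau),
% where \oplus denotes addition modulo F. With N active users, threshold
% detection at Th = w, and per-interferer hit probability q = w^2/(2F)
% (chip-synchronous model), a standard upper bound on the error probability is
P_e \;\le\; \frac{1}{2} \sum_{i=\mathrm{Th}}^{N-1} \binom{N-1}{i}\, q^{\,i}\,(1-q)^{\,N-1-i}.
```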
DevOps: Making It Easy to Do the Right Thing | Wotif Group used DevOps principles to recover from the downward spiral of manual release activity that many IT departments face. Its approach involved the idea of "making it easy to do the right thing." By defining the right thing (deployment standards) for development and operations teams and making it easy to adopt, Wotif drastically improved the average release cycle time. This article is part of a theme issue on DevOps. |