title | abstract |
---|---|
Inter-Session Modeling for Session-Based Recommendation | In recent years, research has been done on applying Recurrent Neural Networks (RNNs) as recommender systems. Results have been promising, especially in the session-based setting, where RNNs have been shown to outperform state-of-the-art models. In many of these experiments, the RNN could potentially improve the recommendations by utilizing information about the user's past sessions, in addition to the user's interactions in the current session. A problem for session-based recommendation is how to produce accurate recommendations at the start of a session, before the system has learned much about the user's current interests. We propose a novel approach that extends an RNN recommender to be able to process the user's recent sessions, in order to improve recommendations. This is done by using a second RNN to learn from recent sessions and predict the user's interest in the current session. By feeding this information to the original RNN, it is able to improve its recommendations. Our experiments on two different datasets show that the proposed approach can significantly improve recommendations throughout the sessions, compared to a single RNN working only on the current session. The proposed model especially improves recommendations at the start of sessions, and is therefore able to deal with the cold start problem within sessions. |
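A hedged sketch of the two-level idea described above, assuming a PyTorch implementation: an inter-session GRU summarizes the user's recent sessions, and its final state initializes the intra-session GRU that scores the next item. The class name, layer sizes, and the session-representation scheme are illustrative assumptions, not the paper's exact design.

```python
# Illustrative only: a second (inter-session) GRU produces the initial hidden
# state of the intra-session GRU, so early recommendations in a new session
# are informed by the user's recent sessions.
import torch
import torch.nn as nn

class HierarchicalSessionRNN(nn.Module):  # hypothetical name
    def __init__(self, n_items, emb_dim=64, hidden=100):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb_dim)
        self.inter_rnn = nn.GRU(hidden, hidden, batch_first=True)   # over past sessions
        self.intra_rnn = nn.GRU(emb_dim, hidden, batch_first=True)  # over current clicks
        self.out = nn.Linear(hidden, n_items)

    def forward(self, past_sessions, current_items):
        # past_sessions: (batch, n_sessions, hidden), e.g. pooled item embeddings per session
        # current_items: (batch, seq_len) item ids of the ongoing session
        _, user_state = self.inter_rnn(past_sessions)   # (1, batch, hidden) user interest
        x = self.item_emb(current_items)
        h, _ = self.intra_rnn(x, user_state)            # initialize with user interest
        return self.out(h)                              # next-item scores at each step
```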
Using Word Embeddings for Visual Data Exploration with Ontodia and Wikidata | One of the big challenges in Linked Data consumption is to create visual and natural language interfaces to the data usable for nontechnical users. Ontodia provides support for diagrammatic data exploration, showcased in this publication in combination with the Wikidata dataset. We present improvements to the natural language interface regarding exploring and querying Linked Data entities. The method uses models of distributional semantics to find and rank entity properties related to user input in Ontodia. Various word embedding types and model settings are evaluated, and the results show that user experience in visual data exploration benefits from the proposed approach. |
Learning Step Size Controllers for Robust Neural Network Training | This paper investigates algorithms to automatically adapt the learning rate of neural networks (NNs). Starting with stochastic gradient descent, a large variety of learning methods has been proposed for the NN setting. However, these methods are usually sensitive to the initial learning rate, which has to be chosen by the experimenter. We investigate several features and show how an adaptive controller can adjust the learning rate without prior knowledge of the learning problem at hand. Introduction Due to the recent successes of Neural Networks for tasks such as image classification (Krizhevsky, Sutskever, and Hinton 2012) and speech recognition (Hinton et al. 2012), the underlying gradient descent methods used for training have gained a renewed interest by the research community. Adding to the well-known stochastic gradient descent and RMSprop methods (Tieleman and Hinton 2012), several new gradient-based methods such as Adagrad (Duchi, Hazan, and Singer 2011) or Adadelta (Zeiler 2012) have been proposed. However, most of the proposed methods rely heavily on a good choice of an initial learning rate. Compounding this issue is the fact that the range of good learning rates for one problem is often small compared to the range of good learning rates across different problems, i.e., even an experienced experimenter often has to manually search for good problem-specific learning rates. A tempting alternative to manually searching for a good learning rate would be to learn a control policy that automatically adjusts the learning rate without further intervention using, for example, reinforcement learning techniques (Sutton and Barto 1998). Unfortunately, the success of learning such a controller from data is likely to depend heavily on the features made available to the learning algorithm. A wide array of reinforcement learning literature has shown the importance of good features in tasks ranging from Tetris (Thiery and Scherrer 2009) to haptic object identification (Kroemer, Lampert, and Peters 2011). Thus, the first step towards applying RL methods to control learning rates is to find good features. Subsequently, the main contributions of this paper are: • Identifying informative features for the automatic control of the learning rate. • Proposing a learning setup for a controller that automatically adapts the step size of NN training algorithms. • Showing that the resulting controller generalizes across different tasks and architectures. Together, these contributions enable robust and efficient training of NNs without the need of manual step size tuning. Method The goal of this paper is to develop an adaptive controller for the learning rate used in training algorithms such as Stochastic Gradient Descent (SGD) or RMSprop (Tieleman and Hinton 2012). We start with a general statement of the problem we are aiming to solve. Problem Statement We are interested in finding the minimizer $\omega^* = \arg\min_{\omega} F(X;\omega)$, (1) where in our case $\omega$ represents the weight vector of the NN and $X = \{x_1, \ldots, x_N\}$ is the set of $N$ training examples (e.g., images and labels). The function $F(\cdot)$ sums over the function values induced by the individual inputs, i.e., $F(X;\omega) = \sum_{i=1}^{N} f(x_i;\omega)$. |
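The control loop this paper targets can be sketched as follows. This is a minimal stand-in with a hand-wired linear policy; the features, weights, and multiplicative update rule are illustrative assumptions, not the learned controller from the paper.

```python
import numpy as np

def sgd_with_step_size_controller(grad_fn, w, policy_weights, lr=0.01, steps=100):
    """grad_fn(w) -> (loss, gradient); policy_weights maps features to a log step-size change."""
    prev_loss = None
    for _ in range(steps):
        loss, g = grad_fn(w)
        # Example features: bias, log gradient norm, sign of the last loss change
        d_loss = 0.0 if prev_loss is None else float(np.sign(loss - prev_loss))
        features = np.array([1.0, np.log1p(np.linalg.norm(g)), d_loss])
        lr *= float(np.exp(policy_weights @ features))  # multiplicative update keeps lr > 0
        w = w - lr * g
        prev_loss = loss
    return w

# e.g. minimizing a quadratic: grad_fn = lambda w: (0.5 * w @ w, w)
```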
Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition | Here, we develop an audiovisual deep residual network for multimodal apparent personality trait recognition. The network is trained end-to-end for predicting the Big Five personality traits of people from their videos. That is, the network does not require any feature engineering or visual analysis such as face detection, face landmark alignment, or facial expression recognition. Recently, the network won third place in the ChaLearn First Impressions Challenge with a test accuracy of 0.9109. |
A Sensitive ANN Based Differential Relay for Transformer Protection with Security against CT Saturation and Tap Changer Operation | This paper presents an artificial neural network (ANN) based scheme for fault identification in power transformer protection. The proposed scheme is featured by the application of ANN to identifying system patterns, the unique choice of harmonics of positive sequence differential currents as ANN inputs, the effective handling of current transformer (CT) saturation with an ANN based approach, and the consideration of tap changer position for correcting secondary CT current. Performance of the proposed scheme is studied for a wide variety of operating conditions using data generated from simulation. The results indicate that the proposed scheme provides a fast and sensitive approach for identifying internal faults and is secure against CT saturation and transformer tap changer operation. |
Crisis and catharsis : the power of the Apocalypse | For the first time in complete form, the results of recent analyses of the Apocalypse are presented in a way that is easily understood by the beginning student and challenging to the scholar looking for a fresh approach. In a clear and vivid manner, Adela Yarbro Collins discusses the authorship of the book of Revelation, when it was written, the situation it addressed, the social themes it considered, and the psychological meaning behind apocalyptic language. |
Automated segmentation of MR images of brain tumors. | An automated brain tumor segmentation method was developed and validated against manual segmentation with three-dimensional magnetic resonance images in 20 patients with meningiomas and low-grade gliomas. The automated method (operator time, 5-10 minutes) allowed rapid identification of brain and tumor tissue with an accuracy and reproducibility comparable to those of manual segmentation (operator time, 3-5 hours), making automated segmentation practical for low-grade gliomas and meningiomas. |
Virtual reality simulation in neurosurgery: technologies and evolution. | Neurosurgeons are faced with the challenge of learning, planning, and performing increasingly complex surgical procedures in which there is little room for error. With improvements in computational power and advances in visual and haptic display technologies, virtual surgical environments can now offer potential benefits for surgical training, planning, and rehearsal in a safe, simulated setting. This article introduces the various classes of surgical simulators and their respective purposes through a brief survey of representative simulation systems in the context of neurosurgery. Many technical challenges currently limit the application of virtual surgical environments. Although we cannot yet expect a digital patient to be indistinguishable from reality, new developments in computational methods and related technology bring us closer every day. We recognize that the design and implementation of an immersive virtual reality surgical simulator require expert knowledge from many disciplines. This article highlights a selection of recent developments in research areas related to virtual reality simulation, including anatomic modeling, computer graphics and visualization, haptics, and physics simulation, and discusses their implication for the simulation of neurosurgery. |
Sequence of the Most Informative Joints (SMIJ): A new representation for human skeletal action recognition | Much of the existing work on action recognition combines simple features (e.g., joint angle trajectories, optical flow, spatio-temporal video features) with somewhat complex classifiers or dynamical models (e.g., kernel SVMs, HMMs, LDSs, deep belief networks). Although successful, these approaches represent an action with a set of parameters that usually do not have any physical meaning. As a consequence, such approaches do not provide any qualitative insight that relates an action to the actual motion of the body or its parts. For example, it is not necessarily the case that clapping can be correlated to hand motion or that walking can be correlated to a specific combination of motions from the feet, arms and body. In this paper, we propose a new representation of human actions called Sequence of the Most Informative Joints (SMIJ), which is extremely easy to interpret. At each time instant, we automatically select a few skeletal joints that are deemed to be the most informative for performing the current action. The selection of joints is based on highly interpretable measures such as the mean or variance of joint angles, maximum angular velocity of joints, etc. We then represent an action as a sequence of these most informative joints. Our experiments on multiple databases show that the proposed representation is very discriminative for the task of human action recognition and performs better than several state-of-the-art algorithms. |
Discrimination-aware data mining | In the context of civil rights law, discrimination refers to unfair or unequal treatment of people based on membership in a category or a minority, without regard to individual merit. Rules extracted from databases by data mining techniques, such as classification or association rules, when used for decision tasks such as benefit or credit approval, can be discriminatory in the above sense. In this paper, the notion of discriminatory classification rules is introduced and studied. Providing a guarantee of non-discrimination is shown to be a non-trivial task. A naive approach, like taking away all discriminatory attributes, is shown to be not enough when other background knowledge is available. Our approach leads to a precise formulation of the redlining problem along with a formal result relating discriminatory rules with apparently safe ones by means of background knowledge. An empirical assessment of the results on the German credit dataset is also provided. |
Top-down predictions in the cognitive brain | The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, we propose that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects, explore links to emotion and action preparation, and consider clinical implications for schizophrenia and dyslexia. We then discuss the extension of the general principles of this proposal to other cognitive domains. |
Research on data augmentation for image classification based on convolution neural networks | The performance of deep convolutional neural networks can be further enhanced by expanding the training data set. For image classification tasks, it is necessary to expand insufficient training image samples through various data augmentation methods. This paper explores the impact of various data augmentation methods on image classification tasks with a deep convolutional neural network, in which AlexNet is employed as the pre-training network model and subsets of CIFAR10 and ImageNet (10 categories) are selected as the original data sets. The data augmentation methods used in this paper include: GAN/WGAN, Flipping, Cropping, Shifting, PCA jittering, Color jittering, Noise, Rotation, and some combinations. Experimental results show that, under the same data-expansion factor, the performance gain on small-scale data sets is more pronounced, the four individual methods (Cropping, Flipping, WGAN, Rotation) generally perform better than the others, and some appropriate combinations are slightly more effective than the individual methods. |
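For illustration, several of the compared augmentations (flipping, cropping/shifting, color jittering, rotation) can be combined in a standard torchvision pipeline; the parameter values below are placeholders, not the settings used in the paper.

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                 # Flipping
    T.RandomCrop(32, padding=4),                   # Cropping / Shifting
    T.ColorJitter(brightness=0.2, contrast=0.2),   # Color jittering
    T.RandomRotation(degrees=15),                  # Rotation
    T.ToTensor(),
])
# e.g. torchvision.datasets.CIFAR10(root, train=True, transform=augment)
```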
COP1 and phyB Physically Interact with PIL1 to Regulate Its Stability and Photomorphogenic Development in Arabidopsis. | In Arabidopsis thaliana, the cryptochrome and phytochrome photoreceptors act together to promote photomorphogenic development. The cryptochrome and phytochrome signaling mechanisms interact directly with CONSTITUTIVELY PHOTOMORPHOGENIC1 (COP1), a RING motif-containing E3 ligase that acts to negatively regulate photomorphogenesis. COP1 interacts with and ubiquitinates the transcription factors that promote photomorphogenesis, such as ELONGATED HYPOCOTYL5 and LONG HYPOCOTYL IN FAR-RED1 (HFR1), to inhibit photomorphogenic development. Here, we show that COP1 physically interacts with PIF3-LIKE1 (PIL1) and promotes PIL1 degradation via the 26S proteasome. We further demonstrate that phyB physically interacts with PIL1 and enhances PIL1 protein accumulation upon red light irradiation, probably through suppressing the COP1-PIL1 association. Biochemical and genetic studies indicate that PIL1 and HFR1 form heterodimers and promote photomorphogenesis cooperatively. Moreover, we demonstrate that PIL1 interacts with PIF1, 3, 4, and 5, resulting in the inhibition of the transcription of PIF direct-target genes. Our results reveal that PIL1 stability is regulated by phyB and COP1, likely through physical interactions, and that PIL1 coordinates with HFR1 to inhibit the transcriptional activity of PIFs, suggesting that PIL1, HFR1, and PIFs constitute a subset of antagonistic basic helix-loop-helix factors acting downstream of phyB and COP1 to regulate photomorphogenic development. |
Role of extracellular polymeric substances in bioflocculation of activated sludge microorganisms under glucose-controlled conditions. | Extracellular polymeric substances (EPS) secreted by suspended cultures of microorganisms from an activated sludge plant in the presence of glucose were characterized in detail using colorimetry, X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared (FTIR) spectroscopy. EPS produced by the multi-species community were similar to literature reports of pure cultures in terms of functionalities with respect to C and O but differed subtly in terms of N and P. Hence, it appears that EPS produced by different microorganisms may be homologous in major chemical constituents but may differ in minor components such as lipids and phosphodiesters. The role of specific EPS constituents in microbial aggregation was also determined. The weak tendency of microorganisms to bioflocculate during the exponential growth phase was attributed to electrostatic repulsion when the EPS concentration was low and acidic in nature (higher fraction of uronic acids to total EPS), as well as to reduced polymer bridging. However, during the stationary phase, polymeric interactions overwhelmed electrostatic interactions (lower fraction of uronic acids to total EPS), resulting in improved bioflocculation. More specifically, microorganisms appeared to aggregate in the presence of protein secondary structures including aggregated strands, beta-sheets, and alpha- and 3-turn helical structures. Bioflocculation was also favored by increasing O-acetylated carbohydrates and overall C-(O,N) and O=C-OH + O=C-OR functionalities. |
Scanning the surface of soft tissues with a micrometer precision thanks to endomicroscopy based visual servoing | Probe-based confocal laser endomicroscopy is a recent tissue imaging technology that requires placing a probe in contact with the tissue to be imaged and provides real-time images with a microscopic resolution. Additionally, generating adequate probe movements to sweep the tissue surface can be used to reconstruct a wide mosaic of the scanned region while increasing the resolution, which is appropriate for anatomico-pathological cancer diagnosis. However, properly controlling the motion along the scanning trajectory is a major problem. Indeed, the tissue exhibits deformations under the friction forces exerted by the probe, leading to deformed mosaics. In this paper we propose a visual servoing approach for controlling the probe movements relative to the tissue while rejecting the tissue deformation disturbance. The probe displacement with respect to the tissue is first estimated using the confocal images and a real-time image registration algorithm. Second, from this real-time image-based position measurement, the probe motion is controlled thanks to a simple proportional-integral compensator and a feedforward term. Ex vivo experiments using a Stäubli TX40 robot and a Mauna Kea Technologies Cellvizio imaging device demonstrate the effectiveness of the approach on liver and muscle tissue. |
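A minimal sketch of the described control law, assuming the probe-to-tissue position error comes from the image registration step: a proportional-integral term on the error plus a feedforward term for the reference velocity. Gains and signal names are illustrative.

```python
def pi_feedforward_step(x_ref, v_ref, x_meas, integral, kp, ki, dt):
    # x_meas: probe position relative to tissue, estimated by image registration
    err = x_ref - x_meas
    integral += err * dt                       # accumulate the tracking error
    v_cmd = kp * err + ki * integral + v_ref   # PI compensator + feedforward term
    return v_cmd, integral
```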
Exploring Generalization in Deep Learning | With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena. |
Mobile banking adoption: A literature review | Electronic commerce (e-commerce) continues to have a profound impact on the global business environment, but technologies and applications also have begun to focus more on mobile computing, the wireless Web, and mobile commerce. Against this backdrop, mobile banking (m-banking) has emerged as an important distribution channel, with considerable research devoted to its adoption. However, this research stream has lacked a clear roadmap or agenda. Therefore, the present article analyzes and synthesizes existing studies of m-banking adoption and maps the major theories that researchers have used to predict consumer intentions to adopt it. The findings indicate that the m-banking adoption literature is fragmented, though it commonly relies on the technology acceptance model and its modifications, revealing that compatibility (with lifestyle and device), perceived usefulness, and attitude are the most significant drivers of intentions to adopt m-banking services in developed and developing countries. Moreover, the extant literature appears limited by its narrow focus on SMS banking in developing countries; virtually no studies address the use of m-banking applications via smartphones or tablets or consider the consequences of such usage. This study makes several recommendations for continued research in the area of mobile banking. |
SoccerStories: A Kick-off for Visual Soccer Analysis | This article presents SoccerStories, a visualization interface to support analysts in exploring soccer data and communicating interesting insights. Currently, most analyses of such data relate to statistics on individual players or teams. However, the soccer analysts we collaborated with consider that quantitative analysis alone does not convey the right picture of the game, as context, player positions, and phases of player actions are the most relevant aspects. We designed SoccerStories to support the current practice of soccer analysts and to enrich it, both in the analysis and communication stages. Our system provides an overview+detail interface of game phases, and their aggregation into a series of connected visualizations, each visualization being tailored for actions such as a series of passes or a goal attempt. To evaluate our tool, we ran two qualitative user studies on recent games using SoccerStories with data from one of the world's leading live sports data providers. The first study resulted in a series of four articles on soccer tactics, by a tactics analyst, who said he would not have been able to write these otherwise. The second study consisted of an exploratory follow-up to investigate design alternatives for embedding soccer phases into word-sized graphics. For both experiments, we received very enthusiastic feedback, and participants considered further use of SoccerStories to enhance their current workflow. |
A CMOS Sub-1-V nanopower current and voltage reference with leakage compensation | In this paper, a CMOS sub-1-V nanopower reference is proposed, which is implemented without resistors and with only standard CMOS transistors. The most attractive merit of the proposed circuit is that it provides a reference current and a reference voltage simultaneously. Moreover, the leakage compensation technique is utilized, and thus the circuit has a very low temperature coefficient over a wide temperature range. The proposed circuit is verified by SPICE simulation with a 0.18 µm CMOS process. The temperature coefficients of the reference voltage and reference current are 0.0037%/°C and 0.0091%/°C, respectively. Also, the power supply voltage can be as low as 0.85 V and the power consumption is only 5.1 nW. |
An Analysis of Bronfenbrenner's Bio-Ecological Perspective for Early Childhood Educators: Implications for Working with Families Experiencing Stress. | Today’s families face many stressors during the early childhood years. Particular stressors like homelessness, violence, and chemical dependence, play havoc with the family system. Urie Bronfenbrenner’s bio-ecological perspective offers an insightful lens for understanding and supporting families under stress. This article presents the key elements of Bronfenbrenner’s perspective and applies this perspective to strategies for effectively helping families under stress. |
Optogenetics: 10 years of microbial opsins in neuroscience | Over the past 10 years, the development and convergence of microbial opsin engineering, modular genetic methods for cell-type targeting and optical strategies for guiding light through tissue have enabled versatile optical control of defined cells in living systems, defining modern optogenetics. Despite widespread recognition of the importance of spatiotemporally precise causal control over cellular signaling, for nearly the first half (2005–2009) of this 10-year period, as optogenetics was being created, there were difficulties in implementation, few publications and limited biological findings. In contrast, the ensuing years have witnessed a substantial acceleration in the application domain, with the publication of thousands of discoveries and insights into the function of nervous systems and beyond. This Historical Commentary reflects on the scientific landscape of this decade-long transition. |
Design, Realization, and Test of a UWB Radar Sensor for Breath Activity Monitoring | An analytical model of an ultrawideband range gating radar is developed. The model is used for the system design of a radar for breath activity monitoring having sub-millimeter movement resolution and fulfilling the requirements of the Federal Communications Commission in terms of effective isotropic radiated power. The system study made it possible to define the requirements of the various radar subsystems, which have been designed and realized by means of a low-cost hybrid technology. The radar has been assembled, and some performance factors, such as the range and movement resolution and the receiver conversion factor, have been experimentally evaluated and compared with the model predictions. Finally, the radar has been tested for remote breath activity monitoring, showing recorded respiratory signals in very good agreement with those obtained by means of a conventional technique employing a piezoelectric belt. |
Recurrent convolutional neural network for speech processing | Different neural networks have exhibited excellent performance on various speech processing tasks, and they usually have specific advantages and disadvantages. We propose to use a recently developed deep learning model, the recurrent convolutional neural network (RCNN), for speech processing; it inherits some merits of the recurrent neural network (RNN) and the convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependence in the spectrogram of the speech in an efficient way. The model is tested on the TIMIT corpus for phoneme recognition and on IEMOCAP for emotion recognition. Experimental results show that the model is competitive with previous methods in terms of accuracy and efficiency. |
State of the Art on Monocular 3D Face Reconstruction, Tracking, and Applications | The computer graphics and vision communities have dedicated long-standing efforts to building computerized tools for reconstructing, tracking, and analyzing human faces based on visual input. Over the past years, rapid progress has been made, leading to novel and powerful algorithms that obtain impressive results even in the very challenging case of reconstruction from a single RGB or RGB-D camera. The range of applications is vast and steadily growing as these technologies are further improving in speed, accuracy, and ease of use. Motivated by this rapid progress, this state-of-the-art report summarizes recent trends in monocular facial performance capture and discusses its applications, which range from performance-based animation to real-time facial reenactment. We focus our discussion on methods where the central task is to recover and track a three-dimensional model of the human face using optimization-based reconstruction algorithms. We provide an in-depth overview of the underlying concepts of real-world image formation, and we discuss common assumptions and simplifications that make these algorithms practical. In addition, we extensively cover the priors that are used to better constrain the under-constrained monocular reconstruction problem, and discuss the optimization techniques that are employed to recover dense, photo-geometric 3D face models from monocular 2D data. Finally, we discuss a variety of use cases for the reviewed algorithms in the context of motion capture, facial animation, as well as image and video editing. |
Joint adaptive loss and l2/l0-norm minimization for unsupervised feature selection | Unsupervised feature selection is a useful tool for reducing the complexity and improving the generalization performance of data mining tasks. In this paper, we propose an Adaptive Unsupervised Feature Selection (AUFS) algorithm with explicit l2/l0-norm minimization. We use a joint adaptive loss for data fitting and an l2/l0-norm minimization for feature selection. We solve the optimization problem with an efficient iterative algorithm and prove that all the expected properties of unsupervised feature selection can be preserved. We also show that the computational complexity and memory use are only linear in the number of instances and quadratic in the number of clusters. Experiments show that our algorithm outperforms state-of-the-art methods on seven different benchmark data sets. |
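A generic sketch of what an l2/l0-style criterion selects, under simplifying assumptions: learn a weight matrix mapping features to cluster pseudo-labels, then keep the k features with the largest row-wise l2 norms (the l0 constraint on rows). This illustrates the norm, not the paper's adaptive-loss solver.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_features(X, n_clusters=5, k=20):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    Y = np.eye(n_clusters)[labels]                 # cluster indicator matrix
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)      # map features -> clusters
    scores = np.linalg.norm(W, axis=1)             # row-wise l2 norms per feature
    return np.argsort(scores)[-k:]                 # indices of the k kept features
```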
Query-Based Summarization using Rhetorical Structure Theory | Research on Question Answering is focused mainly on classifying the question type and finding the answer. Presenting the answer in a way that suits the user's needs has received little attention. This paper shows how existing question answering systems—which aim at finding precise answers to questions—can be improved by exploiting summarization techniques to extract more than just the answer from the document in which the answer resides. This is done using a graph search algorithm that searches for relevant sentences in the discourse structure, which is represented as a graph. Rhetorical Structure Theory (RST) is used to create the graph representation of a text document. The output is an extensive answer, which not only answers the question, but also gives the user an opportunity to assess the accuracy of the answer (is this what I am looking for?) and to find additional information that is related to the question and may satisfy an information need. This has been implemented in a working multimodal question answering system, where it operates with two independently developed question answering modules. |
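A minimal sketch of such a relevance search over an RST graph: a breadth-first walk from the sentence containing the answer, discounting relevance by an edge weight per hop. The graph encoding, weights, and threshold are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

def collect_relevant(graph, answer_node, threshold=0.3):
    # graph: {node: [(neighbour, weight), ...]} with weights in (0, 1]
    relevance = {answer_node: 1.0}
    queue = deque([answer_node])
    while queue:
        u = queue.popleft()
        for v, w in graph.get(u, []):
            r = relevance[u] * w                    # decay relevance along RST relations
            if r >= threshold and r > relevance.get(v, 0.0):
                relevance[v] = r
                queue.append(v)
    return sorted(relevance, key=relevance.get, reverse=True)
```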
The Austrian fulvestrant registry: results from a prospective observation of fulvestrant in postmenopausal patients with metastatic breast cancer | Background Endocrine therapy is the preferred treatment in oestrogen- and/or progesterone-receptor (ER/PgR) positive breast cancer. Fulvestrant is a pure ER antagonist. We present results from the Austrian Fulvestrant Registry. Methods Three hundred and fifty patients were included. Time to progression (TTP) was defined as the primary endpoint. A multivariate analysis was performed to identify factors significantly associated with TTP. Results Fulvestrant was administered as first-line therapy in 26% of patients, second-line in 49%, and third-line or beyond in 25%. Median TTP was 7 months. We observed a response in 15% of patients, and 41% had stable disease (SD) for ≥ 6 months. First-line treatment and non-visceral metastases were associated with longer TTP. One case of pulmonary embolism was reported. Grade 3 toxicities consisted of joint pain (1.4%), nausea (1.4%), and hot flashes (0.3%). Conclusions Fulvestrant was effective and well tolerated. TTP was superior to that reported in other trials, likely due to the large proportion of first-line patients. Activity is apparently independent of HER2 status. |
ElimiNet: A Model for Eliminating Options | The task of Reading Comprehension with Multiple Choice Questions requires a human (or machine) to read a given {passage, question} pair and select one of the n given options. The current state-of-the-art model for this task first computes a query-aware representation for the passage and then selects the option which has the maximum similarity with this representation. However, when humans perform this task they do not just focus on option selection but use a combination of elimination and selection. Specifically, a human would first try to eliminate the most irrelevant option and then read the document again in the light of this new information (and perhaps ignore portions corresponding to the eliminated option). This process could be repeated multiple times until the reader is finally ready to select the correct option. We propose ElimiNet, a neural network based model which tries to mimic this process. Specifically, it has gates which decide whether an option can be eliminated given the {document, question} pair, and if so it tries to make the document representation orthogonal to this eliminated option (akin to ignoring portions of the document corresponding to the eliminated option). The model makes multiple rounds of partial elimination to refine the document representation and finally uses a selection module to pick the best option. We evaluate our model on the recently released large-scale RACE dataset and show that it outperforms the current state-of-the-art model on 7 out of the 13 question types in this dataset. Further, we show that taking an ensemble of our elimination-selection based method with a selection-based method gives us an improvement of 7% (relative) over the best reported performance on this dataset. |
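The key elimination step described above reduces to a vector projection; a numpy sketch, with the gating logic and dimensions omitted:

```python
import numpy as np

def eliminate(doc, option, eps=1e-8):
    # Remove from the document representation its component along the
    # eliminated option's representation, leaving the orthogonal part.
    proj = (doc @ option) / (option @ option + eps) * option
    return doc - proj
```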
Estimating the Effect of Web-Based Homework | VanLehn's recent meta-analysis suggests that the AI aspects of computer tutors are adding significant value beyond simple adaptive approaches. For example, the beneficial effects of human tutors and various computer-based interventions compared to regular classroom instruction have estimated values that range between 0.80 std and 0.31 std. At the upper end of effects are human tutors, followed closely by computer tutors (0.74 std). At the lower end is simple computer-based practice with feedback systems (0.31 std). In this research we are concerned with estimating the effects of web-based homework (WBH) involving practice and feedback. An argument is made that this could serve as a more appropriate condition for comparing the benefits of additional AIED tutoring features. WBH gives students feedback on correctness only as they go. It does not offer hints, feedback messages on common wrong answers, or mastery learning in the problem selection algorithm (used in what VanLehn calls the outer loop). A second underappreciated aspect of WBH is that teachers can use the data to review homework more efficiently. Universities across the world are employing these WBH systems, but there are no known comparisons of this in K-12. In this work we randomly assigned 63 thirteen- and fourteen-year-olds to either a traditional homework condition (TH) involving practice without feedback or a WBH condition that added correctness feedback and the ability to try again. All students used ASSISTments to do their homework, but we ablated all of the intelligent tutoring aspects of hints, feedback messages, and mastery learning as appropriate to the two practice conditions. We found that students learned reliably more in the web-based homework condition, with a large effect size of 0.56. Given the small sample size and confidence interval for the effect, more studies are needed to better estimate the effect size of WBH. An argument is made that effects associated with the practice-with-feedback condition should serve as a more accurate baseline for comparing the benefits of additional AIED tutoring features. Future work will systematically compare conditions that add back in hints, feedback messages, and mastery learning so that we can measure the value added by these components. |
Recent developments on matter dynamics within the Kaluza-Klein picture | In this paper we propose a new approach to matter dynamics in compactified Kaluza-Klein theories. We discard the idea that the motion is geodesic and perform a simultaneous reduction of matter geometry defining the test particle via a multipole approach. In the resulting dynamics the tower of huge massive modes is removed, without giving up the compactification scenario. Such an approach yields a consistent modified gravity theory with source. Some scenarios and applications to dark energy problem are sketched. |
No. 786 Deception and Confession: Experimental Evidence from a Deception Game in Japan | This study investigated lying behavior and the behavior of people who are deceived, using a deception game (Gneezy, 2005) in both anonymity and face-to-face treatments. Subjects consisted of students and non-students (citizens), allowing us to investigate whether lying behavior depends on socioeconomic background. To explore how liars feel about lying, we gave senders a chance to confess their behavior to their counterpart, capturing the guilt aversion associated with lying. The following results are obtained: i) the frequency of lying among students is significantly higher than that among non-students for a given payoff in the anonymity treatment, but it does not differ significantly between the anonymity and face-to-face treatments; ii) lying behavior is not influenced by gender; iii) the frequency of confession is higher in the face-to-face treatment than in the anonymity treatment; and iv) receivers who are deceived are more likely to believe a sender's message to be true in the anonymity treatment. This study implies that the presence of the partner prompts liars to confess their behavior because they may feel remorse or guilt. |
Deep Forward and Inverse Perceptual Models for Tracking and Prediction | We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system. |
Infusing software engineering technology into practice at NASA | We present an ongoing effort of the NASA Software Engineering Initiative to encourage the use of advanced software engineering technology on NASA projects. Technology infusion is in general a difficult process, yet this effort seems to have found a modest approach that is successful for some types of technologies. We outline the process and describe the experience of the technology infusions that occurred over a two-year period. We also present some lessons learned from these experiences. |
Improving passive current sharing in multiphase active-clamp flyback converter with high step-up ratio | Exploiting the output impedance of active-clamp converters is a valid method for achieving current sharing among parallel-connected power stages. Nevertheless, parasitic capacitances result in resonances that modify converter behavior and current balance. A solution is presented and validated. The current balance is achieved without a dedicated control. |
A systematic literature review on quality criteria for agile requirements specifications | The quality of requirements is typically considered an important factor for the quality of the end product. For traditional up-front requirements specifications, a number of standards have been defined on what constitutes good quality: requirements should be complete, unambiguous, specific, time-bounded, consistent, etc. For agile requirements specifications, no new standards have been defined yet, and it is not clear whether traditional quality criteria still apply. To investigate what quality criteria for assessing the correctness of written agile requirements exist, we have conducted a systematic literature review. The review resulted in a list of 16 selected papers on this topic. These selected papers describe 28 different quality criteria for agile requirements specifications. We categorize and analyze these criteria and compare them with those from traditional requirements engineering. We discuss findings from the 16 papers in the form of recommendations for practitioners on quality assessment of agile requirements. At the same time, we indicate the open points in the form of a research agenda for researchers working on this topic. |
Being Bayesian About Network Structure. A Bayesian Approach to Structure Discovery in Bayesian Networks | In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has a much smoother posterior “landscape”. We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach. |
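A skeleton of Metropolis-Hastings over orders, as described: propose a swap of two positions and accept with the usual ratio. The score_order callback stands for the paper's efficient log marginal likelihood of the data given an order and is left abstract here.

```python
import numpy as np

def mcmc_over_orders(score_order, n_vars, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    order = list(rng.permutation(n_vars))
    log_p = score_order(order)
    samples = []
    for _ in range(n_steps):
        i, j = rng.choice(n_vars, size=2, replace=False)
        proposal = order.copy()
        proposal[i], proposal[j] = proposal[j], proposal[i]   # symmetric swap move
        log_p_new = score_order(proposal)
        if np.log(rng.random()) < log_p_new - log_p:          # MH acceptance
            order, log_p = proposal, log_p_new
        samples.append(order.copy())
    return samples
```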
Controlled Experiments for Word Embeddings | An experimental approach to studying the properties of word embeddings is proposed. Controlled experiments, achieved through modifications of the training corpus, permit the demonstration of direct relations between word properties and word vector direction and length. The approach is demonstrated using the word2vec CBOW model with experiments that independently vary word frequency and word co-occurrence noise. The experiments reveal that word vector length depends more or less linearly on both word frequency and the level of noise in the co-occurrence distribution of the word. The coefficients of linearity depend upon the word. The special point in feature space, defined by the (artificial) word with pure noise in its co-occurrence distribution, is found to be small but non-zero. |
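The style of controlled experiment described can be reproduced in a few lines, assuming gensim's word2vec implementation (v4 API): vary the frequency of a synthetic token and inspect its vector length. The corpus construction is purely illustrative, not the paper's setup.

```python
import numpy as np
from gensim.models import Word2Vec

base = [["the", "cat", "sat", "on", "the", "mat"]] * 1000
for repeats in (10, 100, 1000):
    corpus = base + [["target", "cat"]] * repeats        # vary frequency of "target"
    model = Word2Vec(corpus, vector_size=50, sg=0, min_count=1, seed=1)  # sg=0 -> CBOW
    print(repeats, np.linalg.norm(model.wv["target"]))   # vector length vs. frequency
```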
The changing view of eukaryogenesis - fossils, cells, lineages and how they all come together. | Eukaryogenesis - the emergence of eukaryotic cells - represents a pivotal evolutionary event. With a fundamentally more complex cellular plan compared to prokaryotes, eukaryotes are major contributors to most aspects of life on Earth. For decades, we have understood that eukaryotic origins lie within both the Archaea domain and α-Proteobacteria. However, it is much less clear when, and from which precise ancestors, eukaryotes originated, or the order of emergence of distinctive eukaryotic cellular features. Many competing models for eukaryogenesis have been proposed, but until recently, the absence of discriminatory data meant that a consensus was elusive. Recent advances in paleogeology, phylogenetics, cell biology and microbial diversity, particularly the discovery of the 'Candidatus Lokiarchaeota' phylum, are now providing new insights into these aspects of eukaryogenesis. The new data have allowed the time frame during which eukaryogenesis occurred to be finessed, a more precise identification of the contributing lineages, and the biological features of the contributors to be clarified. Considerable advances have also been made in pinpointing the prokaryotic origins of key eukaryotic cellular processes, such as intracellular compartmentalisation, with major implications for models of eukaryogenesis. |
The Benefits of Self-Awareness and Attention in Fog and Mist Computing | Self-awareness facilitates a proper assessment of cost-constrained cyber-physical systems, allocating limited resources where they are most needed. Together, situation awareness and attention are key enablers for self-awareness in efficient distributed sensing and computing networks. |
Video-based facial expression recognition using histogram sequence of local Gabor binary patterns from three orthogonal planes | Video-based facial expression recognition has received significant attention in recent years due to its widespread applications. One key issue for video-based facial expression analysis in practice is how to extract dynamic features. In this paper, a novel approach is presented using histogram sequence of local Gabor binary patterns from three orthogonal planes (LGBP-TOP). In this approach, every facial expression sequence is firstly convolved with the multi-scale and multi-orientation Gabor filters to extract the Gabor Magnitude Sequences (GMSs). Then, we use local binary patterns from three orthogonal planes (LBP-TOP) on each GMS to further enhance the feature extraction. Finally, the facial expression sequence is modeled as a histogram sequence by concatenating the histogram pieces of all the local regions of all the LGBP-TOP maps. For recognition, Support Vector Machine (SVM) is exploited. Our experimental results on the extended Cohn-Kanade database (CK+) demonstrate that the proposed method has achieved the best results compared to other methods in recent years. |
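The building block of these histograms is the basic LBP code; a numpy sketch for a single 3x3 neighbourhood (the Gabor filtering and three-orthogonal-planes extension are omitted):

```python
import numpy as np

def lbp_code(patch3x3):
    center = patch3x3[1, 1]
    # the 8 neighbours, clockwise from the top-left corner
    neighbours = patch3x3[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    bits = (neighbours >= center).astype(int)
    return int((bits * 2 ** np.arange(8)).sum())   # 8-bit pattern in [0, 255]
```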
Total transcriptome, proteome, and allergome of Johnson grass pollen, which is important for allergic rhinitis in subtropical regions. | BACKGROUND
Genomic data are lacking for many allergen sources. To circumvent this limitation, we implemented a strategy to reveal the repertoire of pollen allergens of a grass with clinical importance in subtropical regions, where an increasing proportion of the world's population resides.
OBJECTIVE
We sought to identify and immunologically characterize the allergenic components of the Panicoideae Johnson grass pollen (JGP; Sorghum halepense).
METHODS
The total pollen transcriptome, proteome, and allergome of JGP were documented. Serum IgE reactivities with pollen and purified allergens were assessed in 64 patients with grass pollen allergy from a subtropical region.
RESULTS
Purified Sor h 1 and Sor h 13 were identified as clinically important allergen components of JGP, with serum IgE reactivity in 49 (76%) and 28 (43.8%) of the 64 patients with grass pollen allergy, respectively. Within whole JGP, multiple cDNA transcripts and peptide spectra belonging to grass pollen allergen families 1, 2, 4, 7, 11, 12, 13, and 25 were identified. Pollen allergens restricted to subtropical grasses (groups 22-24) were also present within the JGP transcriptome and proteome. Mass spectrometry confirmed that the IgE-reactive components of JGP included isoforms of Sor h 1, Sor h 2, Sor h 13, and Sor h 23.
CONCLUSION
Our integrated molecular approach revealed qualitative differences between the allergenic components of JGP and temperate grass pollens. Knowledge of these newly identified allergens has the potential to improve specific diagnosis and allergen immunotherapy treatment for patients with grass pollen allergy in subtropical regions and reduce the burden of allergic respiratory disease globally. |
Aligned Cluster Analysis for temporal segmentation of human motion | Temporal segmentation of human motion into actions is a crucial step for understanding and building computational models of human motion. Several issues contribute to the challenge of this task. These include the large variability in the temporal scale and periodicity of human actions, as well as the exponential nature of all possible movement combinations. We formulate the temporal segmentation problem as an extension of standard clustering algorithms. In particular, this paper proposes aligned cluster analysis (ACA), a robust method to temporally segment streams of motion capture data into actions. ACA extends standard kernel k-means clustering in two ways: (1) the cluster means contain a variable number of features, and (2) a dynamic time warping (DTW) kernel is used to achieve temporal invariance. Experimental results, reported on synthetic data and the Carnegie Mellon Motion Capture database, demonstrate its effectiveness. |
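The temporal invariance in ACA rests on dynamic time warping; for reference, the classic DTW recursion for two 1-D sequences (the paper's kernelized, multivariate variant is more involved):

```python
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```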
The Concordance between Patients’ Renal Replacement Therapy Choice and Definitive Modality: Is It a Utopia? | INTRODUCTION
It is desirable for patients to play active roles in the choice of renal replacement therapy (RRT). Patient decision aid tools (PDAs) have been developed to allow the patients to choose the option best suited to their individual needs.
MATERIAL AND METHODS
An observational, prospective registry was conducted in 26 Spanish hospitals between September 2010 and May 2012. The results of the patients' choice and the definitive RRT modality were registered through the progressive implementation of an Education Process (EP) with PDAs designed to help Chronic Kidney Disease (CKD) patients choose RRT.
RESULTS
A total of 1044 patients were included in this study. Of these, 569 patients used PDAs and had made a definitive choice by the end of registration. A total of 88.4% of patients chose dialysis [43% hemodialysis (HD) and 45% peritoneal dialysis (PD)], 3.2% preemptive living-donor transplant (TX), and 8.4% conservative treatment (CT). A total of 399 patients began RRT during this period. The distribution was 93.4% dialysis (53.6% HD; 40% PD), 1.3% preemptive TX, and 5.3% CT. The patients who followed the EP changed their mind significantly less often [kappa value of 0.91 (95% CI, 0.86-0.95)] than those who did not follow it, despite starting unplanned treatment [kappa value of 0.85 (95% CI, 0.75-0.95)]. A higher agreement between the final choice and the definitive treatment was achieved by the EP and planned patients [kappa value of 0.93 (95% CI, 0.89-0.98)]. Those who did not go through the EP had a much lower rate of choosing PD and changed their decision more frequently when starting definitive treatment [kappa value of 0.73 (95% CI, 0.55-0.91)].
CONCLUSIONS
Free choice, assisted by PDAs, leads to a 50/50 distribution of PD and HD choice and an increase in TX choice. The use of PDAs, even with an unplanned start, achieved a high level of concordance between the chosen and definitive modality. |
Personalization in distributed e-learning environments | Personalized support for learners becomes even more important when e-Learning takes place in open and dynamic learning and information networks. This paper shows how to realize personalized learning support in distributed learning environments based on Semantic Web technologies. Our approach fills the existing gap between current adaptive educational systems, with their well-established personalization functionality, and open, dynamic learning repository networks. We propose a service-based architecture for establishing personalized e-Learning, where personalization functionality is provided by various web services. A Personal Learning Assistant integrates personalization services and other supporting services, and provides personalized access to learning resources in an e-Learning network. |
Semantic Audiovisual Data Fusion for Automatic Emotion Recognition | The paper describes a novel technique for the recognition of emotions from multimodal data. We focus on the recognition of the six prototypic emotions. The results from the facial expression recognition and from the emotion recognition from speech are combined using a bimodal semantic data fusion model that determines the most probable emotion of the subject. Two types of models based on geometric face features for facial expression recognition are used, depending on the presence or absence of speech. In our approach we define an algorithm that is robust to changes of face shape that occur during regular speech. The influence of phoneme generation on the face shape during speech is removed by using features that are only related to the eyes and the eyebrows. The paper includes results from testing the presented models. |
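At its simplest, bimodal decision fusion can be sketched as a weighted combination of per-modality posteriors over the six prototypic emotions; the semantic fusion model in the paper is richer, and the weights here are placeholders.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def fuse(p_face, p_speech, w_face=0.6, w_speech=0.4):
    fused = w_face * np.asarray(p_face) + w_speech * np.asarray(p_speech)
    return EMOTIONS[int(np.argmax(fused))]   # most probable emotion after fusion
```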
Anchor-free distributed localization in sensor networks | Many sensor network applications require that each node’s sensor stream be annotated with its physical location in some common coordinate system. Manual measurement and configuration methods for obtaining location don’t scale and are error-prone, and equipping sensors with GPS is often expensive and does not work in indoor and urban deployments. Sensor networks can therefore benefit from a self-configuring method where nodes cooperate with each other, estimate local distances to their neighbors, and converge to a consistent coordinate assignment. This paper describes a fully decentralized algorithm called AFL (Anchor-Free Localization) where nodes start from a random initial coordinate assignment and converge to a consistent solution using only local node interactions. The key idea in AFL is fold-freedom, where nodes first configure into a topology that resembles a scaled and unfolded version of the true configuration, and then run a force-based relaxation procedure. We show using extensive simulations under a variety of network sizes, node densities, and distance estimation errors that our algorithm is superior to previously proposed methods that incrementally compute the coordinates of nodes in the network, in terms of its ability to compute correct coordinates under a wider variety of conditions and its robustness to measurement errors. |
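The second AFL phase is a force-based (mass-spring) relaxation; a numpy sketch under simplifying assumptions, with the fold-free initialization phase omitted:

```python
import numpy as np

def relax(coords, pairs, measured, step=0.05, iters=500):
    # coords: (n, 2) positions; pairs: iterable of (i, j); measured[(i, j)]: estimated distance
    for _ in range(iters):
        force = np.zeros_like(coords)
        for i, j in pairs:
            d = coords[j] - coords[i]
            dist = np.linalg.norm(d) + 1e-9
            f = (dist - measured[(i, j)]) * d / dist   # spring force along the edge
            force[i] += f
            force[j] -= f
        coords = coords + step * force                 # move nodes along resultant forces
    return coords
```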
Quality, clinical outcomes and treatment costs in acute intestinal failure | Type 1 and type 2 intestinal failure (IF) are associated with significant morbidity and mortality, with little published data reporting outcomes from clinical practice. This thesis will therefore examine the definitions, quality of care, clinical outcomes and treatment costs of these conditions within the setting of an acute hospital which cares for many type 1 IF patients as well as running a regional intestinal failure service for type 2 and 3 IF patients. Observational studies were conducted to examine: the parenteral nutrition (PN) care provided to patients with all types of IF, screening tools and criteria to identify type 2 IF in clinical practice, and an assessment of clinical outcomes and treatment costs in this complex patient group. The multidisciplinary nutrition and intestinal failure team were involved in 90% of decisions regarding initiation of PN in this hospital, compared to only 52.7% reported by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) report. Standards of assessment, monitoring and catheter complications were also better than those in the NCEPOD report. Rates of catheter-related sepsis were lower in patients managed within a specialised IF unit compared to other wards: 1.8 episodes/1000 PN days versus 8.21 episodes/1000 PN days. A PN duration of more than 28 days had a 91% sensitivity and 96% specificity for identifying type 2 IF, but a low positive predictive value of only 59%. IF surgery criteria had a sensitivity of 96% and a positive predictive value of 100% for identifying type 2 IF. Mortality during an acute admission for type 2 IF patients (n=44) was 4.2%. Following reconstructive surgery (n=37) there were no post-operative deaths, no readmissions within 30 days and only one post-operative fistula recurrence. After surgery 94% of patients were independent of artificial nutrition. The median calculated treatment cost per day for patients with type 2 IF was £572. Current funding mechanisms within the NHS only allow hospitals to recover 44.7% of the treatment costs in type 2 IF. These studies confirm that standards of PN care in IF can be high within a regional specialist centre, with low rates of mortality, fistula recurrence and PN dependence in type 2 IF. Criteria for screening and defining type 2 IF are relevant to clinical practice, and their wider use could result in earlier access to specialist treatment, improvements in outcome reporting and a mechanism for establishing future IF funding. |
Improving Credit Risk Prediction in Online Peer-to-Peer (P2P) Lending Using Imbalanced Learning Techniques | Peer-to-peer (P2P) lending is a global trend in financial markets that allows individuals to obtain and concede loans without having financial institutions as a strong proxy. As in many real-world applications, P2P lending presents an imbalanced characteristic, where the number of creditworthy loan requests is much larger than the number of non-creditworthy ones. In this work, we wrangle a real-world P2P lending data set from Lending Club, containing a large amount of data gathered from 2007 up to 2016. We analyze how supervised classification models and techniques for handling class imbalance impact creditworthiness prediction rates. Ensemble, cost-sensitive, and sampling methods are combined and evaluated alongside logistic regression, decision tree, and Bayesian learning schemes. Results show that, on average, sampling techniques outperform ensemble and cost-sensitive approaches. |
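An illustrative combination of a sampling technique with a linear classifier, mirroring the comparison described; imblearn's SMOTE stands in for the sampling methods, and the synthetic data is a placeholder for the Lending Club set.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder imbalanced data (95% / 5%), standing in for the Lending Club set
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # rebalance the training set
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(clf.score(X_te, y_te))
```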
FastQRE: Fast Query Reverse Engineering | We study the problem of Query Reverse Engineering (QRE), where, given a database and an output table, the task is to find a simple project-join SQL query that generates that table when applied to the database. This problem is known for its efficiency challenge, due mainly to two reasons. First, the problem has a very large search space and its various variants are known to be NP-hard. Second, executing even a single candidate SQL query can be very computationally expensive. In this work we propose a novel approach for solving the QRE problem efficiently. Our solution outperforms the existing state of the art by 2-3 orders of magnitude for complex queries, resolving those queries in seconds rather than days, thus making our approach more practical in real-life settings.
Using Metaqueries to Integrate Inductive Learning and Deductive Database Technology | This paper presents an approach that uses metaqueries to integrate inductive learning with deductive database technology in the context of knowledge discovery from databases. Metaqueries are second-order predicates or templates, and are used for (1) guiding deductive data collection, (2) focusing attention for inductive learning, and (3) assisting human analysts in the discovery loop. We describe in detail a system that uses this idea to unify a Bayesian data clusterer with the Logical Data Language (LDL++), and show the results of three case studies, namely: discovering regularities from a knowledge base, discovering patterns and errors from a large telecommunication database, and discovering patterns and errors from a large chemical database.
Strong Cosmic Censorship for T2-Symmetric Spacetimes with Cosmological Constant and Matter | We address the issue of strong cosmic censorship for T2-symmetric spacetimes with positive cosmological constant. In the case of collisionless matter, we complete the proof of the C2 formulation of the conjecture for this class of spacetimes. In the vacuum case, we prove that the conjecture holds for the special cases where the area element of the group orbits does not vanish on the past boundary of the maximal Cauchy development.
Smart Health Monitoring Systems: An Overview of Design and Modeling | Health monitoring systems have rapidly evolved during the past two decades and have the potential to change the way health care is currently delivered. Although smart health monitoring systems automate patient monitoring tasks and thereby improve patient workflow management, their efficiency in clinical settings is still debatable. This paper presents a review of smart health monitoring systems and an overview of their design and modeling. Furthermore, a critical analysis of their efficiency and clinical acceptability, and strategies and recommendations for improving current health monitoring systems, are presented. The main aim is to review the current state of the art in monitoring systems and to perform an extensive and in-depth analysis of the findings in the area of smart health monitoring systems. To achieve this, over fifty different monitoring systems have been selected, categorized, classified and compared. Finally, major advances at the system design level are discussed, and current issues facing health care providers, as well as potential challenges to the health monitoring field, are identified and compared to other similar systems.
Amitriptyline and aerobic exercise or amitriptyline alone in the treatment of chronic migraine: a randomized comparative study. | OBJECTIVE
To compare the preventive treatment benefits of amitriptyline and aerobic exercise or amitriptyline alone in patients with chronic migraine.
METHOD
Sixty patients of both genders, aged between 18 and 50 years, with a diagnosis of chronic migraine, were randomized into two groups: amitriptyline plus aerobic exercise, or amitriptyline alone. The following parameters were evaluated: headache frequency, intensity and duration, days of analgesic medication use, body mass index (BMI), and Beck Depression Inventory (BDI) and Beck Anxiety Inventory (BAI) scores.
RESULTS
Comparing the groups at the end of the third month, decreases were observed in headache frequency (p=0.001), moderate intensity (p=0.048), headache duration (p=0.001), body mass index (p=0.001), Beck Depression Inventory scores (p=0.001) and Beck Anxiety Inventory scores (p=0.001).
CONCLUSION
In this study, amitriptyline was an effective treatment for chronic migraine, but its efficacy was increased when combined with aerobic exercise.
Harnessing Automated Test Case Generators for GUI Testing in Industry | Modern graphical user interfaces (GUIs) are highly dynamic and support multi-touch interactions and screen gestures besides conventional inputs via mouse and keyboard. Hence, the flexibility of modern GUIs enables countless usage scenarios and combinations including all kinds of interactions. From the viewpoint of testing, this flexibility results in a combinatorial explosion of possible interaction sequences. It dramatically raises the time and effort involved in GUI testing, which brings manual exploration as well as conventional regression testing approaches to their limits. Automated test generation (ATG) has been proposed as a solution to reduce the effort of manually designing test cases and to speed up test execution cycles. In this paper we describe how we successfully harnessed a state-of-the-art ATG tool (Randoop), developed for code-based API testing, to generate GUI test cases. The key is an adapter that transforms API calls into GUI events. The approach is the result of a research transfer project with the goal of applying ATG to the testing of human-machine interfaces used to control industrial machinery. In this project the ATG tool was used to generate unit test cases for custom GUI controls and system tests for exploring navigation scenarios. It helped to increase test coverage and was able to reveal new defects in the implementation of the GUI controls as well as in the GUI application.
Diurnal Variation of Rain Attenuation Obtained From Measurement of Raindrop Size Distribution in Equatorial Indonesia | The measured rain rate, raindrop size distribution (DSD), and the ITU-R model over the frequency range from 1-100 GHz have been used to elucidate the cumulative rainfall rate and the variability of rain attenuation at Kototabang. Rain rate and DSD are recorded from ground-based optical rain gauge and disdrometer measurements, respectively. Considerable differences between the recorded data and the ITU-R model are observed at small time percentages. The specific rain attenuation obtained from the DSD measurements shows diurnal variation, with the largest attenuation observed in the morning hours. This is because the raindrop spectra of rain events in this period contain more small-sized drops (<2 mm) than at other times, as shown by the large contribution of these drops to the specific rain attenuation. The diurnal variation is most pronounced for frequencies higher than 60 GHz, especially in very extreme rain.
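For reference, the specific attenuation derived from a measured DSD follows the standard extinction integral; in one common unit convention (extinction cross section \sigma_{ext} in m^2, drop diameter D in mm, N(D) in m^{-3} mm^{-1}) it reads

\[ k(f) = 4.343\times 10^{3}\int_{0}^{\infty}\sigma_{\mathrm{ext}}(D,f)\,N(D)\,\mathrm{d}D \quad [\mathrm{dB/km}], \]

which makes the reported diurnal effect plausible: a morning excess of drops below 2 mm raises the integral most at the high frequencies where the extinction of small drops grows fastest.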
The Spline-Garch Model for Low Frequency Volatility and its Global Macroeconomic Causes | Twenty-five years of volatility research has left the macroeconomic environment playing a minor role. This paper proposes modeling equity volatilities as a combination of macroeconomic effects and time series dynamics. High frequency return volatility is specified to be the product of a slow-moving component, represented by an exponential spline, and a unit GARCH. This slow-moving component is the low frequency volatility, which in this model coincides with the unconditional volatility. This component is estimated for nearly 50 countries over various sample periods of daily data. Low frequency volatility is then modeled as a function of macroeconomic and financial variables in an unbalanced panel with a variety of dependence structures. It is found to vary over time and across countries. The low frequency component of volatility is greater when the macroeconomic factors GDP, inflation, and short-term interest rates are more volatile, or when inflation is high and output growth is low. Volatility is higher for emerging markets and for markets with small numbers of listed companies and low market capitalization relative to GDP, but also for large economies. The model allows long horizon forecasts of volatility to depend on macroeconomic developments, and delivers estimates of the volatility to be anticipated in a newly opened market.
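In the notation of the published Spline-GARCH specification, the decomposition described above can be written as

\[ r_t = \mu + \sqrt{\tau_t\, g_t}\,\varepsilon_t, \qquad g_t = (1-\alpha-\beta) + \alpha\,\frac{(r_{t-1}-\mu)^2}{\tau_{t-1}} + \beta\, g_{t-1}, \]
\[ \tau_t = c\,\exp\!\Big(w_0\, t + \sum_{i=1}^{k} w_i\big((t-t_{i-1})_+\big)^2\Big), \]

where (x)_+ = max(x, 0) and t_0 < t_1 < ... < t_{k-1} are the spline knots. Since E[g_t] = 1, the slow-moving component \tau_t coincides with the unconditional variance, as stated above.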
Variability in prostate and seminal vesicle delineations defined on magnetic resonance images, a multi-observer, -center and -sequence study | BACKGROUND
The use of magnetic resonance (MR) imaging as a part of preparation for radiotherapy is increasing. For delineation of the prostate several publications have shown decreased delineation variability using MR compared to computed tomography (CT). The purpose of the present work was to investigate the intra- and inter-physician delineation variability for prostate and seminal vesicles, and to investigate the influence of different MR sequence settings used clinically at the five centers participating in the study.
METHODS
MR series from five centers, each providing five patients, were used. Two physicians from each center delineated the prostate and the seminal vesicles on each of the 25 image sets. The variability between the delineations was analyzed with respect to overall, intra- and inter-physician variability, and dependence between variability and origin of the MR images, i.e. the MR sequence used to acquire the data.
RESULTS
The intra-physician variability in different directions was between 1.3-1.9 mm for the prostate and 3-4 mm for the seminal vesicles (1 SD). The inter-physician variability in different directions was between 0.7-1.7 mm and approximately equal for the prostate and seminal vesicles. Large differences in variability were observed for individual patients, and also for the individual imaging sequences used at the different centers. There was, however, no indication of decreased variability with higher field strength.
CONCLUSION
The overall delineation variability is larger for the seminal vesicles compared to the prostate, due to a larger intra-physician variability. The imaging sequence appears to have a large influence on the variability, even for different variants of the T2-weighted spin-echo based sequences, which were used by all centers in the study. |
Structural and optical properties of CuIn1-xGaxSe2 nanoparticles synthesized by solvothermal route | CuIn1-xGaxSe2 (CIGS) nanoparticles have been successfully fabricated using a relatively simple and easy-to-handle solvothermal process in the solvent N,N-dimethylformamide (DMF). A probable formation mechanism of the chalcopyrite quaternary semiconductor nanoparticles is proposed. Structural, morphological and optical properties of the prepared CIGS nanoparticles (x = 0, 0.3, 0.6, 0.8 and 1) were analyzed. The synthesized product has an ordered chalcopyrite structure, with particle sizes in the range of 15-25 nm. A Cu2Se secondary phase was detected by XRD for the sample with x = 1, which is also confirmed by Raman spectra. Cathodoluminescence measurements allowed us to observe a strong emission which can be attributed to a band-to-band transition, and the possible energy gap is discussed. The absorption spectra showed strong absorption from the entire visible region to the near-infrared region. All results suggest that the as-prepared CIGS nanoparticles are a good light-absorber-layer material for thin-film solar cell applications.
Effect of misoprostol on fat malabsorption in cystic fibrosis. | Misoprostol, a synthetic prostaglandin that is known to reduce gastric acid production and stimulate duodenal bicarbonate production, was evaluated in 22 patients with cystic fibrosis. In those patients who had greater than 10% fat malabsorption while taking Pancrease, the addition of misoprostol significantly reduced the degree of fat malabsorption.
Seven new species of Arhodeoporus (Acarina: Halacaridae) from the Great Barrier Reef and Coral Sea | Seven new species of the genus Arhodeoporus are described from sandy deposits of the Great Barrier Reef and Coral Sea reefs, bringing the total number of known species in this genus to 30 and the number of known species from Australia to 11. Arhodeoporus caudatus n. sp., A. clypeatus n. sp., A. corallicolus n. sp. and A. lizardensis n. sp. are found to belong to the longirostris group, while A. longicrus n. sp., A. subtilis n. sp. and A. ventromaculatus n. sp. can be assigned to the eclogarius, gracilipes and bonairensis groups, respectively. The gracilipes group is recorded from the southern hemisphere for the first time. Arhodeoporus species described from southwestern Australia or elsewhere have not been found on the Great Barrier Reef. A key to Australian species of Arhodeoporus is presented.
TARP: ticket-based address resolution protocol | IP networks fundamentally rely on the address resolution protocol (ARP) for proper operation. Unfortunately, vulnerabilities in ARP enable a raft of IP-based impersonation, man-in-the-middle, and DoS attacks. Proposed countermeasures to these vulnerabilities have yet to simultaneously address backward compatibility and cost requirements. This paper introduces the ticket-based address resolution protocol (TARP). TARP implements security by distributing centrally issued secure MAC/IP address mapping attestations, called tickets, through existing ARP messages. We detail the TARP protocol and its implementation within the Linux operating system. Our experimental analysis shows that TARP reduces the cost of implementing ARP security by as much as two orders of magnitude compared with existing protocols. We conclude by exploring a range of operational issues associated with deploying and administering ARP security.
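The core of the scheme is that ARP replies carry a ticket, a MAC/IP binding signed in advance by a local ticket agent, which hosts validate instead of trusting the reply itself. A minimal sketch of that validation logic is given below using the third-party Python `cryptography` package; the payload layout, signature scheme (RSA PKCS#1 v1.5 here) and expiry handling are illustrative assumptions, not the actual TARP wire format.

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def ticket_valid(agent_pubkey_pem, mac, ip, expiry, signature):
        """Accept an ARP reply only if its (MAC, IP, expiry) attestation
        verifies under the ticket agent's public key."""
        pub = serialization.load_pem_public_key(agent_pubkey_pem)
        payload = f"{mac}|{ip}|{expiry}".encode()   # hypothetical layout
        try:
            pub.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
            return True
        except Exception:                           # InvalidSignature, etc.
            return False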
Automated antenna impedance adjustment for Near Field Communication (NFC) | Near Field Communication (NFC) is a very intuitive way to open a communication link, to authenticate, or to implement a payment by simply bringing two mobile personal devices closely together. NFC is based upon reliable contactless card technology and combines the most prominent protocol standards in a specification driven by the NFC Forum. Devices with an NFC interface operate at 13.56 MHz via inductive loop antennas. However, these antennas operate in a very different environment than contactless cards. Metal, the presence of several other antennas, and market demands for compact electronic devices are driving requirements for antennas to operate on ferrite foils, resulting in significant tolerances on antenna impedance. In NFC reader mode, the antennas operate in a resonance circuit, making this de-tuning critical. This paper presents a prototype implementation of automated antenna impedance adjustment based upon Digitally Tunable Capacitors (DTCs). To show the benefit, the paper examines the efficiency of contactless power transfer through practical measurements, comparing three scenarios: fixed impedance adjustment, matching with tolerance, and automated readjustment using a DTC.
Understanding Network Centric Warfare | Network centric warfare (NCW) is a new theory of war in the information age. NCW advocates that networking battlefield entities will produce shared information, shared knowledge and shared understanding, which produce information superiority. In turn, information superiority dramatically increases combat power. Since the theory is new, it has both proponents and opponents. This paper reviews the theory of NCW and some techniques and combat simulation systems which may help people gain more understanding of NCW.
Characterization of Zeno behavior in hybrid systems using homological methods | It is possible to associate to a hybrid system a single topological space: its underlying topological space. Simultaneously, every hybrid system has a graph as its indexing object: its underlying graph. Here we discuss the relationship between the underlying topological space of a hybrid system, its underlying graph, and Zeno behavior. When each domain is contractible and the reset maps are homotopic to the identity map, the homology of the underlying topological space is isomorphic to the homology of the underlying graph; the nonexistence of Zeno is implied when the first homology is trivial. Moreover, the first homology is trivial when the null space of the incidence matrix is trivial. The result is an easy way to verify the nonexistence of Zeno behavior.
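Under these hypotheses the test reduces to linear algebra on the underlying graph: the null space of the incidence matrix is the cycle space, so trivial nullity means trivial first homology and hence no Zeno behavior. A sketch of the check, with a hypothetical edge list:

    import numpy as np

    def zeno_excluded(n_vertices, edges):
        """edges: (src, dst) pairs of the hybrid system's underlying graph.
        Returns True when the incidence matrix has trivial null space,
        which (under the paper's hypotheses) rules out Zeno behavior."""
        B = np.zeros((n_vertices, len(edges)))
        for k, (i, j) in enumerate(edges):
            B[i, k] -= 1.0       # edge k leaves vertex i
            B[j, k] += 1.0       # and enters vertex j
        nullity = len(edges) - np.linalg.matrix_rank(B)
        return nullity == 0

    # Two modes whose transitions form a loop give one independent cycle,
    # so the test is inconclusive there:
    print(zeno_excluded(2, [(0, 1), (1, 0)]))   # False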
Characteristics of Stainless Steel Composites with Nano-sized TiCxNy | Titanium carbonitride is a more promising material than titanium carbide. It can be used in the tool industry and in special products because of its higher strength, abrasive wear resistance and especially its strong chemical stability at high temperatures. We produced an STS+TiCxNy composite by spark plasma sintering for higher strength and studied its characteristics. The planar and cross-sectional microstructures of the specimens were observed by scanning electron microscopy. Characterization of the carbon and nitride phases on the surface of the composite was carried out using an X-ray diffractometer. During annealing, diffusion of TiCxNy particles into STS 430 was observed. After annealing, sintering isolations formed between particles, which decreases mechanical strength. In addition, hardness increased as the annealing temperature was increased. A heterogeneous distribution of alloying-element particles was observed. After annealing the composites, the highest hardness value was 738.1 MHV.
Optimal Features Set for Extractive Automatic Text Summarization | The goal of text summarization is to reduce the size of a text while preserving its important information and overall meaning. With the availability of the internet, data is growing by leaps and bounds, and it is practically impossible to summarize all of it manually. Automatic summarization can be classified into extractive and abstractive summarization. For abstractive summarization we need to understand the meaning of the text and then create a shorter version which best expresses that meaning, while in extractive summarization we select the sentences from the given text that carry the most information and fuse them to create an extractive summary. In this paper we tested all possible combinations of seven features and then reported the best combination for each document. We analyzed the results for all 10 documents taken from the DUC 2002 dataset using ROUGE evaluation metrics.
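A sketch of that exhaustive feature-combination search, with hypothetical per-sentence feature scores and a crude unigram-recall stand-in for the ROUGE toolkit used in the paper:

    from itertools import combinations

    def rouge_1(candidate_sents, reference):
        """Crude unigram-recall stand-in for the ROUGE-1 metric."""
        cand = set(" ".join(candidate_sents).lower().split())
        ref = set(reference.lower().split())
        return len(cand & ref) / max(len(ref), 1)

    def best_feature_set(sentences, features, reference, k=3):
        """features: dict name -> per-sentence scores. Tries every subset
        of features, extracts the k top-scoring sentences, and returns the
        subset whose extract scores highest against the reference."""
        best, best_score = None, -1.0
        names = list(features)
        for r in range(1, len(names) + 1):
            for subset in combinations(names, r):
                totals = [sum(features[f][i] for f in subset)
                          for i in range(len(sentences))]
                top = sorted(range(len(sentences)), key=totals.__getitem__,
                             reverse=True)[:k]
                extract = [sentences[i] for i in sorted(top)]
                score = rouge_1(extract, reference)
                if score > best_score:
                    best, best_score = subset, score
        return best, best_score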
SINGO: A single-end-operative and genderless connector for self-reconfiguration, self-assembly and self-healing | Flexible and reliable connection is critical for self-reconfiguration, self-assembly, and self-healing. However, most existing connection mechanisms suffer from the deficiency that a connection seizes up if one end malfunctions or is out of service. To mitigate this limitation on self-healing, this paper presents a new SINGO connector that can establish or disengage a connection even if one end of the connection is not operational. We describe the design and the prototype of the connector and demonstrate its performance by both theoretical analysis and physical experiments.
Design and Practice of an Elevator Control System Based on PLC | This paper describes the development of a control system for two nine-storey elevators in a residential building. The control system adopts a PLC as controller and uses a parallel dispatching rule based on "minimum waiting time" to run the two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail, and presents the ladder diagram of the key aspects of the system. The system has a simple peripheral circuit, and operation showed that it enhanced the reliability and performance of the elevators.
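The "minimum waiting time" dispatching rule can be stated compactly outside the ladder logic: estimate each car's time to reach the calling floor from its position and committed stops, then assign the hall call to the car with the smaller estimate. A toy sketch (the per-floor travel time and per-stop dwell are hypothetical constants):

    T_FLOOR, T_STOP = 2.0, 8.0   # hypothetical seconds per floor / per stop

    def eta(car_floor, stops, call_floor):
        """Rough time for a car to reach call_floor: floors travelled plus
        a fixed dwell for every committed stop lying on the way."""
        travel = abs(car_floor - call_floor) * T_FLOOR
        lo, hi = min(car_floor, call_floor), max(car_floor, call_floor)
        dwell = sum(T_STOP for s in stops if lo < s < hi)
        return travel + dwell

    def dispatch(cars, call_floor):
        """cars: list of (current_floor, committed_stops); pick the car
        with the minimum estimated waiting time."""
        return min(range(len(cars)),
                   key=lambda i: eta(cars[i][0], cars[i][1], call_floor))

    print(dispatch([(1, [3, 5]), (9, [])], 4))   # -> 1, the idle upper car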
Information-Based Objective Functions for Active Data Selection | Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed that measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness. |
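For a Bayesian linear-in-features model the expected informativeness has a closed form: observing a candidate x reduces posterior entropy by 0.5*log(1 + phi(x)^T Sigma phi(x) / sigma_n^2), so greedy selection simply maximizes the predictive-variance term. A sketch under that Gaussian assumption, with hypothetical features and covariance:

    import numpy as np

    def most_informative(Phi, Sigma, noise_var=0.1):
        """Phi: (n_candidates, d) feature matrix; Sigma: (d, d) posterior
        covariance. Returns the index with the largest expected
        information gain 0.5*log(1 + var / noise_var)."""
        var = np.einsum("nd,de,ne->n", Phi, Sigma, Phi)  # phi^T Sigma phi
        gain = 0.5 * np.log1p(var / noise_var)
        return int(np.argmax(gain))

    rng = np.random.default_rng(0)
    Phi = rng.normal(size=(100, 5))       # hypothetical candidate features
    Sigma = np.eye(5)                     # current posterior covariance
    print(most_informative(Phi, Sigma))   # candidate to measure next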
Evaluating Education Payoff in Russia | Degrees in Technology, Law or Economics are Better Rewarded by the Market (from "Beyond Transition").
Risk Factors for Bicycle-Motor Vehicle Collisions at Intersections | In 1992, 722 bicyclists were killed in the United States in collisions with motor vehicles, and an estimated 650,000 people were treated in emergency rooms for bicycle-related injuries. It is remarkable that, for a traffic safety problem of this magnitude, so little research has been conducted to establish the causes of these accidents. Instead, design standards for roadways and bicycle facilities, individual project designs, and laws and policies regarding bicycling are based almost entirely on opinion, and the quality of the results is highly variable. This paper reports a study of bicycle-motor vehicle collisions in the city of Palo Alto, California. The study compares personal characteristics and bicycling behavior (age, sex, direction of travel with or against traffic flow, and position on the road, i.e., roadway or sidewalk) of bicyclists involved in accidents with similar data for the general population of bicyclists observed along the same streets. This comparison enables us to identify factors that are correlated with increased risk of bicycle-motor vehicle collisions, and to suggest engineering practices that reduce this risk.
Impact of informational factors on online recommendation credibility: The moderating role of source credibility | This study investigates the moderating effect of recommendation source credibility on the causal relationships between informational factors and recommendation credibility, as well as its moderating effect on the causal relationship between recommendation credibility and recommendation adoption. Using data from 199 responses from a leading online consumer discussion forum in China, we find that recommendation source credibility significantly moderates two informational factors' effects on readers' perception of recommendation credibility, each in a different direction. Further, we find that source credibility negatively moderates the effect of recommendation credibility on recommendation adoption. Traditional word-of-mouth (WOM) has been shown to play an important role in consumers' purchase decisions (e.g., [2]). With the popularization of the Internet, more and more consumers have shared their past consumption experiences online (i.e., online consumer recommendations), and researchers often refer to this online WOM as electronic word-of-mouth (eWOM). Given the distinct characteristics of Internet communication (e.g., available to individuals without the limitation of time and location, directed to multiple individuals simultaneously), eWOM has overcome known limitations of traditional WOM. In general, eWOM has global reach and influence. In China, many online consumer discussion forums support eWOM, and much previous research [3,7,12,13,21] demonstrates that because eWOM provides indirect purchasing knowledge to readers, the recommendations on these forums can significantly affect their attitudes towards various kinds of consumption targets (e.g., stores, products and services). Various prior studies have postulated a large number of antecedent factors which can affect information readers' cognition of recommendations, and many of them stem from the elaboration likelihood model (ELM), which holds that there are two distinct routes that can affect information readers' attitude toward presented information: (1) the central route, which considers attitude formation (or change) as the result of the receiver's diligent consideration of the content of the information (informational factors); and (2) the peripheral route, which requires less cognitive work and is attuned to simple cues in the information to influence attitude (information-irrelevant factors). ELM suggests that two factors, information readers' motivation and ability, can be significant moderators that shift the effects of central and peripheral factors on readers' perception of information credibility. Other researchers [24,27] posit that the peripheral factor of source credibility may also have a moderating rather than a direct effect on the causal relationship between the informational factors and information credibility; this view is consistent with attribution inference …
The Level of Faculty Members' Spiritual Leadership (SL) Qualities Display According to Students in the Faculty of Education | Leadership is a process of directing toward a target which the followers, the participants, share. For this reason leadership has an important effect on achieving organizational targets, and more importance is given to leadership studies in order to increase organizational success each day. One of the leadership research topics attracting attention recently is spiritual leadership. Spiritual leadership (SL) is important for imparting ideals to followers and giving meaning to the work they do. Focusing on SL, which has recently taken its place in the leadership literature, this study looks into to what extent faculty members teaching at a Faculty of Education display SL qualities. The study follows a descriptive survey model. The 1819 students studying at Kocaeli University Faculty of Education in the 2009-2010 academic year constitute the universe of the study. Observing leadership qualities takes a long time; therefore, the sample of the study was determined by deliberate sampling and includes 432 students studying in the last year of the faculty. Data regarding faculty members' SL qualities were collected using a questionnaire adapted from Fry's (2003) Spiritual Leadership Scale. Consequently, university students think that academic staff show the features of SL and its sub-dimensions at a medium level. According to the students, academicians show attitudes related to altruistic love rather than faith and vision. The study finds that, according to the students, faculty members do not display leadership qualities sufficiently.
The naked and the dead: the ABCs of gymnosperm reproduction and the origin of the angiosperm flower. | Twenty years after the establishment of the ABC model, many of the molecular mechanisms underlying development of the angiosperm flower are relatively well understood. Central players in the gene regulatory network controlling flower development are SQUA-like, DEF/GLO-like, AG-like and AGL6/SEP1-like MIKC-type MADS-domain transcription factors. These provide class A, class B, class C and the more recently defined class E floral homeotic functions, respectively. There is evidence that the floral homeotic proteins recognize the DNA of target genes in an organ-specific way as multimeric protein complexes, thus constituting 'floral quartets'. In contrast to the detailed insights into flower development, how the flower originated during evolution has remained enigmatic. However, while orthologues of all classes of floral homeotic genes appear to be absent from all non-seed plants, DEF/GLO-like, AG-like, and AGL6-like genes have been found in diverse extant gymnosperms, the closest relatives of the angiosperms. While SQUA-like and SEP1-like MADS-box genes appear to be absent from extant gymnosperms, reconstruction of MADS-box gene phylogeny surprisingly suggests that the most recent common ancestor of gymnosperms and angiosperms possessed representatives of both types of genes, but that these have been lost in the lineage that led to extant gymnosperms. Expression studies and genetic complementation experiments indicate that both angiosperm and gymnosperm AG-like and DEF/GLO-like genes have conserved functions in the specification of reproductive organs and in distinguishing male from female organs, respectively. Based on these findings, novel models about the molecular basis of flower origin, involving changes in the expression patterns of DEF/GLO-like or AGL6/SEP1/SQUA-like genes in reproductive structures, were developed. While in angiosperms SEP1-like proteins play an important role in floral quartet formation, preliminary evidence suggests that gymnosperm DEF/GLO-like and AG-like proteins alone can already form floral quartet-like complexes, further corroborating the view that the formation of floral quartet-like complexes predated flower origin during evolution.
Code club: bringing programming to UK primary schools through scratch | Code Club is a network of after-school programming clubs for primary (US: elementary) schoolchildren, run by technically-competent volunteers in conjunction with (generally technically-unskilled) teachers. The main motivation of Code Club is to inspire children with a sense of fun and achievement for programming and digital creativity. This paper reports on the first year of Code Club in 1000 UK schools. The results were extremely positive, but some children had difficulty understanding the concepts behind the projects. |
Implementation and analysis of three steganographic approaches | As technology advances, security systems have become popular in many areas. The security of information can be achieved by using encryption and steganography. In cryptography, data is transformed into another form before transmission instead of being sent as the original data. In contrast to cryptography, information hiding conceals the existence of the message itself, protecting it from the attention of any attacker. This paper proposes an enhanced security system combining these two techniques. In this system, the encrypted message is embedded in a BMP image file. Three LSB steganographic techniques have been implemented and analyzed. The proposed system aims at data confidentiality, data authentication and data integrity; it not only enhances the security of data but also provides a more powerful mechanism and effective ways of protecting data. The primary goal of our system is to enhance the security of data and to compare the three steganographic techniques, so that the optimal method can be used for embedding. In this system, data is encrypted with the RC4 encryption algorithm and the ciphertext is then embedded in the BMP image file using the three steganographic methods.
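A compact sketch of the encrypt-then-embed pipeline: a textbook RC4 keystream (stated here for illustration; RC4 is no longer considered secure) followed by 1-bit LSB embedding into a raw byte buffer. Real BMP handling must skip the file header before touching pixel bytes, which is elided here.

    def rc4(key: bytes, data: bytes) -> bytes:
        """Textbook RC4: key scheduling, then XOR with the PRGA keystream.
        The same call decrypts, since XOR is its own inverse."""
        S = list(range(256))
        j = 0
        for i in range(256):                        # KSA
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for b in data:                              # PRGA
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(b ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    def embed_lsb(pixels: bytearray, payload: bytes) -> bytearray:
        """Write payload bits into the least significant bit of each byte."""
        bits = [(byte >> k) & 1 for byte in payload for k in range(8)]
        assert len(bits) <= len(pixels), "cover image too small"
        for n, bit in enumerate(bits):
            pixels[n] = (pixels[n] & 0xFE) | bit
        return pixels

    cipher = rc4(b"secret key", b"meet at dawn")
    stego = embed_lsb(bytearray(range(256)) * 4, cipher)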
Transcatheter closure of interatrial communications for secondary prevention of paradoxical embolism: single-center experience. | BACKGROUND
Patients with a patent foramen ovale (PFO) after cerebral, coronary, or systemic embolic events of presumed paradoxical origin are at risk for recurrent thromboembolism. We report our single-center experience of interventional closure of interatrial communications for secondary prevention of presumed paradoxical embolism.
METHODS AND RESULTS
Since 1997, percutaneous closure of interatrial communications was performed at our institution in 66 patients (mean age 47.8+/-12.7 years; 31 males) with a PFO or an atrial septal defect and at least 1 documented presumed paradoxical thromboembolic event. Fifty-eight patients had cerebral embolism, 10 had coronary embolism, and 3 had peripheral embolism. Several patients experienced multilocal arterial embolism. Fifty-four patients had a PFO, 33 of them with an atrial septal aneurysm, and 12 had an atrial septal defect. The implantation procedure was successful and without complication in all patients. After 3 months, only 2 patients showed a residual shunt, which disappeared in both cases after 12 months. In 112.2 patient-years of follow-up (range, 5 weeks to 3.5 years), we have not seen any recurrent thromboembolic event.
CONCLUSIONS
Interventional closure of interatrial communications is a safe and effective therapeutic option for the secondary prevention of presumed paradoxical embolism. To further evaluate this strategy, randomized trials comparing interventional closure with anticoagulation have been initiated by us and others. |
On Unexpectedness in Recommender Systems: Or How to Better Expect the Unexpected | Although the broad social and business success of recommender systems has been achieved across several domains, there is still a long way to go in terms of user satisfaction. One of the key dimensions for significant improvement is the concept of unexpectedness. In this article, we propose a method to improve user satisfaction by generating unexpected recommendations based on the utility theory of economics. In particular, we propose a new concept of unexpectedness as recommending to users those items that depart from what they would expect from the system - the consideration set of each user. We define and formalize the concept of unexpectedness and discuss how it differs from the related notions of novelty, serendipity, and diversity. In addition, we suggest several mechanisms for specifying the users’ expectations and propose specific performance metrics to measure the unexpectedness of recommendation lists. We also take into consideration the quality of recommendations using certain utility functions and present an algorithm for providing users with unexpected recommendations of high quality that are hard to discover but fairly match their interests. Finally, we conduct several experiments on “real-world” datasets and compare our recommendation results with other methods. The proposed approach outperforms these baseline methods in terms of unexpectedness and other important metrics, such as coverage, aggregate diversity and dispersion, while avoiding any accuracy loss. |
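One way to operationalize this definition: score each candidate item by its distance from the user's consideration set and blend that with estimated utility, so recommendations are surprising yet still useful. A sketch with hypothetical item embeddings and a blending weight alpha:

    import numpy as np

    def unexpectedness(item_vec, expected_vecs):
        """Distance of an item from the consideration set, taken here as
        the minimum distance to any expected item."""
        return min(np.linalg.norm(item_vec - e) for e in expected_vecs)

    def recommend(candidates, expected_vecs, utility, alpha=0.5, k=5):
        """candidates: dict item_id -> embedding; utility: dict item_id ->
        predicted usefulness; alpha trades utility off against surprise."""
        score = {i: alpha * utility[i]
                    + (1 - alpha) * unexpectedness(v, expected_vecs)
                 for i, v in candidates.items()}
        return sorted(score, key=score.get, reverse=True)[:k]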
Motivational Interviewing to Increase Postdischarge Antibiotic Adherence in Older Adults with Pneumonia. | OBJECTIVE
To evaluate the impact of pharmacist-led motivational interviewing on antibiotic adherence following discharge in older adults with pneumonia.
SETTING
Inpatient medical wards in a large tertiary academic medical center.
PRACTICE DESCRIPTION
Older adults diagnosed with pneumonia were enrolled from December 1, 2013, to August 1, 2014, at Yale-New Haven Hospital.
PRACTICE INNOVATION
Motivational interviewing, a patient-centered method of communication, has gained recognition as a tool that can aid pharmacists in addressing negative health behaviors (e.g., medication adherence, health screenings, substance abuse) during counseling sessions. However, the potential role of motivational interviewing in improving medication adherence in older adults during transitions of care is not clear. In this study, in addition to standard discharge care, older adults hospitalized with pneumonia who were randomized to the intervention group received enhanced care: pharmacist-led motivational interviewing.
MAIN OUTCOME MEASUREMENTS
Evaluation of adherence to prescribed antibiotic regimens and patient satisfaction with the motivational interviewing enhanced-care session.
RESULTS
Ultimately, 87% of patients in the intervention group (n = 16) compared with 64% of patients in the control group (n = 14) were adherent to their antibiotic regimens. Patient satisfaction with the motivational interviewing intervention was high.
CONCLUSION
Pharmacist-led motivational interviewing sessions have the potential to positively influence antibiotic adherence rates and patient satisfaction. |
High baseline BDNF serum levels and early psychopathological improvement are predictive of treatment outcome in major depression | Major depressive disorder has been associated with low serum levels of brain-derived neurotrophic factor (sBDNF), which is functionally involved in neuroplasticity. Although sBDNF levels tend to normalize following psychopathological improvement with antidepressant treatment, it is unclear how closely sBDNF changes are associated with treatment outcome. We examined whether baseline sBDNF or early changes in sBDNF are predictive of response to therapy. Twenty-five patients with major depressive disorder underwent standardized treatment with duloxetine. Severity of depression, measured by the Hamilton Depression Rating Scale, and sBDNF were assessed at baseline and after 1, 2, and 6 weeks of treatment. Therapy outcome after 6 weeks was defined as response (≥50% reduction in baseline Hamilton Depression Rating score) and remission (Hamilton Depression Rating score <8). The predictive values for treatment outcome of baseline sBDNF and of early (i.e., ≤2 weeks) changes in sBDNF and Hamilton Depression Rating score were also assessed. At baseline, sBDNF correlated with Hamilton Depression Rating scores. Treatment response was associated with a higher baseline sBDNF concentration and a greater Hamilton Depression Rating score reduction after 1 and 2 weeks. A greater early rise in sBDNF correlated with a smaller early Hamilton Depression Rating score reduction. Even though higher baseline sBDNF levels are associated with more severe depression, they may reflect an increased capacity to respond to treatment. In contrast, changes in sBDNF over the full course of treatment are not associated with psychopathological improvement.
On topology and bisection bandwidth of hierarchical-ring networks for shared-memory multiprocessors | Hierarchical-ring based multiprocessors are interesting alternatives to the more popular two-dimensional direct networks. They allow for simple router designs and wider communication paths than their direct network counterparts. There are several ways hierarchical-ring networks can be configured for a given number of processors. Feasible topologies range from tall, lean networks to short, wide networks, but only a few of these possess high throughput and low latency. This paper presents the results of a simulation study (i) to determine how large hierarchical-ring networks can become before their performance deteriorates due to their bisection bandwidth constraints and (ii) to derive topologies with high throughput and low latency for a given number of processors. We show that a system with a maximum of 120 processors and three levels of hierarchy can sustain most memory access behaviors, but that larger systems can be sustained only if their bisection bandwidth is increased.
Search-based software maintenance | The high cost of software maintenance could potentially be greatly reduced by the automatic refactoring of object-oriented programs to increase their understandability, adaptability and extensibility. This paper describes a novel approach to providing automated refactoring support for software maintenance: the formulation of the task as a search problem in the space of alternative designs. Such a search is guided by a quality evaluation function that must accurately reflect refactoring goals. We have constructed a search-based software maintenance tool and report here the results of experimental refactoring of two Java programs, which yielded improvements in terms of the quality functions used. We also discuss the comparative merits of the three quality functions employed and the actual effect on program design that resulted from their use.
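The search formulation can be sketched independently of any particular refactoring catalogue: hill-climb over designs, accepting a move only when the quality evaluation function improves. Below, neighbors and quality are placeholders for the refactoring generator and one of the quality functions compared in the paper:

    def hill_climb(design, neighbors, quality, max_steps=1000):
        """Steepest-ascent search over the space of alternative designs.
        neighbors(d) yields designs reachable by one refactoring;
        quality(d) is a higher-is-better evaluation function."""
        best, best_q = design, quality(design)
        for _ in range(max_steps):
            improved = False
            for cand in neighbors(best):
                q = quality(cand)
                if q > best_q:                  # accept only improving moves
                    best, best_q, improved = cand, q, True
            if not improved:                    # local optimum reached
                break
        return best, best_q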
On the design and implementation of linear differential microphone arrays. | Differential microphone array (DMA), a particular kind of sensor array that is responsive to the differential sound pressure field, has a broad range of applications in sound recording, noise reduction, signal separation, dereverberation, etc. Traditionally, an Nth-order DMA is formed by combining, in a linear manner, the outputs of a number of DMAs up to (including) the order of N - 1. This method, though simple and easy to implement, suffers from a number of drawbacks and practical limitations. This paper presents an approach to the design of linear DMAs. The proposed technique first transforms the microphone array signals into the short-time Fourier transform (STFT) domain and then converts the DMA beamforming design to simple linear systems to solve. It is shown that this approach is much more flexible as compared to the traditional methods in the design of different directivity patterns. Methods are also presented to deal with the white noise amplification problem that is considered to be the biggest hurdle for DMAs, particularly higher-order implementations. |
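The "simple linear systems" formulation amounts to imposing one response constraint per look angle at each STFT frequency and solving for the filter weights; an Nth-order design uses N+1 microphones and N+1 constraints. A NumPy sketch under a uniform-linear-array, far-field assumption (the geometry and constraint values are illustrative):

    import numpy as np

    def dma_weights(freq, spacing, angles_deg, betas, c=343.0):
        """Solve D(omega) h = beta for the per-frequency weights h: one
        steering-vector row per constrained angle, e.g. unit gain at the
        look direction and nulls elsewhere."""
        omega = 2.0 * np.pi * freq
        pos = np.arange(len(angles_deg)) * spacing   # N+1 mics, N+1 rows
        D = np.array([np.exp(-1j * omega * pos * np.cos(np.radians(a)) / c)
                      for a in angles_deg])
        return np.linalg.solve(D, np.asarray(betas, dtype=complex))

    # First-order design: unit gain at 0 degrees and a null at 180 degrees.
    h = dma_weights(freq=1000.0, spacing=0.01, angles_deg=[0, 180], betas=[1, 0])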
State-of-the-Art Report on Temporal Coherence for Stylized Animations | Non-photorealistic rendering (NPR) algorithms allow the creation of images in a variety of styles, ranging from line drawing and pen-and-ink to oil painting and watercolor. These algorithms provide greater flexibility, control and automation over traditional drawing and painting. Despite significant progress over the past 15 years, the application of NPR to the generation of stylized animations remains an active area of research. The main challenge of computer generated stylized animations is to reproduce the look of traditional drawings and paintings while minimizing distracting flickering and sliding artifacts present in hand-drawn animations. These goals are inherently conflicting and any attempt to address the temporal coherence of stylized animations is a trade-off. This state-of-the-art report is motivated by the growing number of methods proposed in recent years and the need for a comprehensive analysis of the trade-offs they propose. We formalize the problem of temporal coherence in terms of goals and compare existing methods accordingly. We propose an analysis for both line and region stylization methods and discuss initial steps toward their perceptual evaluation. The goal of our report is to help uninformed readers to choose the method that best suits their needs, as well as motivate further research to address the limitations of existing methods. |
Recommending resolutions of ITIL services tickets using Deep Neural Network | Application development and maintenance is a good example of Information Technology Infrastructure Library (ITIL) services, in which a sizable volume of tickets is raised every day for different issues to be resolved in order to deliver uninterrupted service. An issue is captured as a summary on the ticket, and once a ticket is resolved, the solution is also noted down on the ticket as its resolution. It would be beneficial to automatically extract information from the description of tickets to improve operations such as identifying critical and frequent issues, grouping tickets based on textual content, and suggesting remedial measures for them. In particular, maintenance staff can save a lot of effort and time if they have access to past remedial actions for similar kinds of tickets raised earlier. In this work we propose an automated method based on deep neural networks for recommending resolutions for incoming tickets. We use ideas from deep structured semantic models (DSSM) for web search for such resolution recovery. We project a small subset of existing tickets, in pairs, and an incoming ticket to a low-dimensional feature space, following which we compute the similarity of each existing ticket with the new ticket. We select the pair of tickets with the maximum similarity to the incoming ticket and publish both of its resolutions as the suggested resolutions for the latter ticket. Experiments on our data sets show that we achieve a promising similarity match of about 70%-90% between the suggestions and the actual resolutions.
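Stripped of the learned network, the retrieval step is: embed the incoming ticket summary and the historical tickets in a shared low-dimensional space, take cosine similarities, and return the resolutions of the best match. The sketch below uses a random projection as a stand-in for the trained DSSM-style encoder:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5000, 128))    # stand-in for the learned projection

    def embed(bow):
        """Project a bag-of-words vector to the shared 128-d space."""
        v = bow @ W
        return v / (np.linalg.norm(v) + 1e-12)

    def suggest_resolutions(new_bow, hist_bows, hist_resolutions, top=1):
        """Rank historical tickets by cosine similarity to the new ticket
        and return the resolutions of the most similar ones."""
        q = embed(new_bow)
        sims = np.array([embed(h) @ q for h in hist_bows])
        best = np.argsort(sims)[::-1][:top]
        return [hist_resolutions[i] for i in best]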
Evaluating Fast Algorithms for Convolutional Neural Networks on FPGAs | In recent years, Convolutional Neural Networks (CNNs) have become widely adopted for computer vision tasks. FPGAs have been explored as a promising hardware accelerator for CNNs due to their high performance, energy efficiency, and reconfigurability. However, prior FPGA solutions based on the conventional convolution algorithm are often bounded by the computational capability of FPGAs (e.g., the number of DSPs). In this paper, we demonstrate that the fast Winograd algorithm can dramatically reduce the arithmetic complexity and improve the performance of CNNs on FPGAs. We first propose a novel architecture for implementing the Winograd algorithm on FPGAs. Our design employs a line buffer structure to effectively reuse the feature map data among different tiles. We also effectively pipeline the Winograd PE engine and initiate multiple PEs through parallelization. Meanwhile, there exists a complex design space to explore. We propose an analytical model to predict the resource usage and reason about the performance, and then use the model to guide a fast design space exploration. Experiments using state-of-the-art CNNs demonstrate the best performance and energy efficiency on FPGAs. We achieve an average 1006.4 GOP/s for the convolutional layers and 854.6 GOP/s for the overall AlexNet, and an average 3044.7 GOP/s for the convolutional layers and 2940.7 GOP/s for the overall VGG16 on the Xilinx ZCU102 platform.
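The arithmetic saving comes from producing several outputs per multiplication. The smallest 1-D case, F(2,3), yields two outputs of a 3-tap filter from four elementwise multiplies instead of six; the sketch below checks the transform against naive sliding dot products (the paper maps the 2-D analogue of this onto FPGA PEs):

    import numpy as np

    # Transform matrices for Winograd F(2,3) (as in Lavin and Gray).
    BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]])
    G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]])
    AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]])

    def winograd_f23(d, g):
        """Two outputs of correlating 3-tap filter g with 4-sample tile d,
        using 4 elementwise multiplies instead of 6."""
        return AT @ ((G @ g) * (BT @ d))

    d = np.array([1.0, 2.0, 3.0, 4.0])
    g = np.array([0.5, 1.0, -1.0])
    direct = np.array([d[0:3] @ g, d[1:4] @ g])   # naive correlation
    assert np.allclose(winograd_f23(d, g), direct)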
iMAP-CampUS: Developing an Intelligent Mobile Augmented Reality Program on Campus as a Ubiquitous System | Augmented Reality (AR) offers a combination of physical and virtual objects, drawing on the strengths of each. It therefore differs from virtual reality, since it permits users to see the real world enhanced with virtual objects. Thus, AR technology holds potential to provide users with valuable information about the surrounding area. In this paper, we demonstrate the development of an AR app that provides students with rich information about nearby buildings. The application, which we name "iMAP-CampUS", is designed to assist students at Macquarie University in locating places of interest close to them by moving the device's camera in any direction to overlay information about the places around them. iMAP-CampUS has been developed for both the iOS and Android platforms and runs on smartphones and tablets with different screen sizes.
Improved Design of High-Performance Parallel Decimal Multipliers | The new generation of high-performance decimal floating-point units (DFUs) is demanding efficient implementations of parallel decimal multipliers. In this paper, we describe the architectures of two parallel decimal multipliers. The parallel generation of partial products is performed using signed-digit radix-10 or radix-5 recodings of the multiplier and a simplified set of multiplicand multiples. The reduction of partial products is implemented in a tree structure based on a decimal multioperand carry-save addition algorithm that uses unconventional (non BCD) decimal-coded number systems. We further detail these techniques and present the new improvements to reduce the latency of the previous designs, which include: optimized digit recoders for the generation of 2n-tuples (and 5-tuples), decimal carry-save adders (CSAs) combining different decimal-coded operands, and carry-free adders implemented by special designed bit counters. Moreover, we detail a design methodology that combines all these techniques to obtain efficient reduction trees with different area and delay trade-offs for any number of partial products generated. Evaluation results for 16-digit operands show that the proposed architectures have interesting area-delay figures compared to conventional Booth radix-4 and radix-8 parallel binary multipliers and outperform the figures of previous alternatives for decimal multiplication.
Optimization and planning of operating theatre activities: an original definition of pathways and process modeling | BACKGROUND
The Operating Room (OR) is a key resource of all major hospitals, but it also accounts for up to 40% of resource costs. Improving cost effectiveness, while maintaining quality of care, is a universal objective. These goals imply an optimization of the planning and scheduling of the activities involved. This is highly challenging due to the inherently variable and unpredictable nature of surgery.
METHODS
A Business Process Modeling Notation (BPMN 2.0) was used for the representation of the "OR process" (defined as the sequence of all of the elementary steps between "patient ready for surgery" and "patient operated upon") as a general pathway ("path"). The path was then standardized as much as possible while keeping all of the key elements that allow one to address the other steps of planning and the inherent, wide variability in terms of patient specificity. The path was used to schedule OR activity, room by room and day by day, feeding the process from a "waiting list database" and using a mathematical optimization model with the objective of arriving at an optimized plan.
RESULTS
The OR process was defined with special attention paid to flows, timing and resource involvement. Standardization captured the dynamics of each operation and defined an expected operating time for it. The optimization model was implemented and tested on real clinical data. Comparison of the model's results with the real data shows that using the optimization model allows for the scheduling of about 30% more patients than in actual practice, as well as better exploitation of OR efficiency, increasing the average operating room utilization rate by up to 20%.
CONCLUSIONS
The optimization of OR activity planning is essential in order to manage the hospital's waiting list. Optimal planning is facilitated by defining the operation as a standard pathway where all variables are taken into account. By allowing precise scheduling, it feeds the process of planning and, further upstream, the management of the waiting list in an interactive and bi-directional dynamic process.
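In outline, the daily planning step behaves like a knapsack per room: fill each session with waiting-list cases using the standardized expected operating time as the item weight. A greedy sketch with hypothetical cases and capacities (the actual model is an exact optimization with clinical priorities and resource constraints):

    def plan_day(waiting_list, rooms):
        """waiting_list: (patient_id, expected_minutes) pairs, assumed
        pre-sorted by clinical priority; rooms: dict room_id -> available
        minutes. Greedily assigns each case to the first room that fits."""
        schedule = {r: [] for r in rooms}
        left = dict(rooms)
        for patient, minutes in waiting_list:
            for r in rooms:
                if minutes <= left[r]:
                    schedule[r].append(patient)
                    left[r] -= minutes
                    break
        return schedule

    day = plan_day([("p1", 120), ("p2", 240), ("p3", 90), ("p4", 200)],
                   {"OR1": 360, "OR2": 360})
    print(day)   # {'OR1': ['p1', 'p2'], 'OR2': ['p3', 'p4']}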
An Empirical Study on the Integrated Framework of e-CRM in Online Shopping: Evaluating the Relationships Among Perceived Value, Satisfaction, and Trust Based on Customers' Perspectives | Based on customers' cognitive, affective and conative experiences in Internet online shopping, this study develops, from the customers' perspective, a conceptual framework for e-CRM to explain the psychological process by which customers maintain a long-term exchange relationship with a specific online retailer. The conceptual framework proposes a series of causal linkages among the key variables affecting customer commitment to a specific online retailer, such as perceived value (as cognitive belief), satisfaction (as affective experience) and trust (as conative relationship intention). Three key exogenous variables affecting Internet online shopping experiences, namely perceived service quality, perceived product quality, and perceived price fairness, are integrated into the framework. This study empirically tested and supported a large part of the proposed framework and the causal linkages within it. The empirical results highlight some managerial implications for successfully developing and implementing a strategy for e-CRM.
PINK1, Parkin, and Mitochondrial Quality Control: What can we Learn about Parkinson's Disease Pathobiology? | The first clinical description of Parkinson's disease (PD) will reach its two-century anniversary in 2017. For the past 30 years, mitochondrial dysfunction has been hypothesized to play a central role in the pathobiology of this devastating neurodegenerative disease. The identification of mutations in genes encoding PINK1 (PTEN-induced kinase 1) and Parkin (an E3 ubiquitin ligase) in familial PD, and their functional association with mitochondrial quality control, provided further support to this hypothesis. Recent research has focused mainly on their key involvement in the clearance of damaged mitochondria, a process known as mitophagy. It has become evident that there are many other aspects of this complex, regulated, multifaceted pathway that provides neuroprotection. As such, numerous additional factors that impact PINK1/Parkin have already been identified, including genes involved in other forms of PD. A great pathogenic overlap amongst different forms of familial, environmental and even sporadic disease is emerging that potentially converges at the level of mitochondrial quality control. Tremendous efforts now seek to further detail the roles of, and exploit, PINK1 and Parkin, their upstream regulators and downstream signaling pathways for future translation. This review summarizes the latest findings on PINK1/Parkin-directed mitochondrial quality control, its integration and cross-talk with other disease factors and pathways, and the implications for idiopathic PD. In addition, we highlight novel avenues for the development of biomarkers and disease-modifying therapies that are based on a detailed understanding of the PINK1/Parkin pathway.
Mechanisms of diabetic complications. | It is increasingly apparent that not only is a cure for the current worldwide diabetes epidemic required, but also for its major complications, affecting both small and large blood vessels. These complications occur in the majority of individuals with both type 1 and type 2 diabetes. Among the most prevalent microvascular complications are kidney disease, blindness, and amputations, with current therapies only slowing disease progression. Impaired kidney function, exhibited as a reduced glomerular filtration rate, is also a major risk factor for macrovascular complications, such as heart attacks and strokes. There have been a large number of new therapies tested in clinical trials for diabetic complications, with, in general, rather disappointing results. Indeed, it remains to be fully defined as to which pathways in diabetic complications are essentially protective rather than pathological, in terms of their effects on the underlying disease process. Furthermore, seemingly independent pathways are also showing significant interactions with each other to exacerbate pathology. Interestingly, some of these pathways may not only play key roles in complications but also in the development of diabetes per se. This review aims to comprehensively discuss the well validated, as well as putative mechanisms involved in the development of diabetic complications. In addition, new fields of research, which warrant further investigation as potential therapeutic targets of the future, will be highlighted. |
Phase-Gradient Meta-Dome for Increasing Grating-Lobe-Free Scan Range in Phased Arrays | This paper presents a simple and effective way to increase the scan range of phased array antennas by using a metasurface (MTS) curved dome. The technique is based on a phase-gradient passive MTS that deflects the incident rays coming from the array aperture, and on a predistortion of amplitude and phase applied to the original phased array. We therefore assume that an array capable of electronic beam scanning over a limited angular range, with control of amplitude and phase, is available. Geometrical optics ray tracing is used for the design of the meta-dome and physical optics for the analysis. For the sake of simplicity, the idea is illustrated here for a 2-D scan case; however, the extension to full 3-D scanning appears to be straightforward.
Approximate analysis of the resonant LCL DC-DC converter | An approximate analysis of LCL converters operating above and below resonance is presented. The study, based on replacing the rectifier and load by an equivalent parallel resistance, applies the fundamental harmonics method and the assumption that the current through the diodes of the output rectifier has a sinusoidal waveform. Equations obtained for the conduction angle of these diodes (less than π in discontinuous and equal to π in continuous conduction modes) are used for developing the output-to-input voltage ratio as a function of three independent parameters. The theoretical and experimental results are in good agreement.
A Heartbeat and Temperature Measuring System for Remote Health Monitoring using Wireless Body Area Network | This paper presents the design and development of a microcontroller-based heartbeat and body temperature monitor using a fingertip sensor and a temperature sensor. The device uses optical technology to detect the flow of blood through the finger and offers the advantage of portability over conventional recording systems. However, wireless body area network based remote patient monitoring systems face numerous problems, including efficient data extraction and dynamic tuning of data to preserve the quality of data transmission. Evaluation of the device on real signals shows accurate heartbeat measurement, even under intense physical activity. This paper presents these challenges as well as solutions to them, proposing an architecture which allows a network to be formed between patient and doctor in order to enable remote monitoring of the patient by analyzing the patient's data. The device consists of sensors which measure the heartbeat and body temperature of a patient and is controlled by a central unit. The readings from these sensors are processed and sent via a GSM module to a remote location, where they are displayed on a cell phone. The optical heartbeat sensor counts the heartbeats per minute and the temperature sensor measures body temperature; both measurements are sent to the receiving end using wireless technology, where the data are displayed on a cell phone for further processing and patient care. Moreover, the superiority of this device over traditional systems is shown.
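On the processing side, beats per minute is just the reciprocal of the mean interval between detected pulses. A sketch of that computation from hypothetical beat timestamps (on the device this runs on the microcontroller before the value is handed to the GSM module):

    def bpm_from_beats(timestamps):
        """timestamps: seconds at which the optical sensor detected a pulse.
        Average the inter-beat intervals and convert to beats per minute."""
        if len(timestamps) < 2:
            return None
        intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
        return 60.0 / (sum(intervals) / len(intervals))

    print(round(bpm_from_beats([0.0, 0.8, 1.62, 2.41, 3.2])))   # ~75 bpm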
Mineral facies in pelitic rocks, with particular reference to the Buchan type metamorphism of north-eastern Scotland | The Dalradian rocks of Aberdeenshire and Banffshire have been affected by a regional metamorphism giving rise to andalusite, cordierite, staurolite and garnet in pelitic rocks (Buchan Type Metamorphism). Four zones are recognised, in which the following assemblages (with muscovite and quartz) first appear: Biotite Zone (chlorite-biotite); Cordierite Zone (cordierite-chlorite-biotite); Andalusite Zone (andalusite-cordierite-biotite); Staurolite Zone (staurolite-andalusite-biotite). The biotite zone separates two NE-SW trending sets of higher grade zones; the staurolite zone is absent from the eastern set. Both sequences pass into sillimanite-bearing rocks at higher grade. The above distribution of assemblages, along with the trend of up-grade Mg enrichment indicated by analyses of coexisting cordierite-biotite (in the andalusite zone) and staurolite-biotite (in the staurolite zone), suggests the following isograd-forming reactions: Cordierite Isograd: chlorite + muscovite = cordierite + biotite (1); Andalusite Isograd: cordierite + muscovite = andalusite + biotite (2); Staurolite Isograd: andalusite + biotite = staurolite + muscovite (3). These reactions are di- or multi-variant in nature, and isograds in the field mark lines where the reactions occur for the most Fe-rich bulk compositions present. Mineral analyses suggest that this is close to 100MgO/(MgO+FeO) = 40, whilst bulk rock analyses suggest an M/FM of 38.
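The 100MgO/(MgO+FeO) value quoted above is a molar ratio; as a small worked example of how it is obtained from a mineral analysis (the oxide weight percentages below are illustrative, not from the thesis):

```python
# Molar masses (g/mol) of the oxides used in the Mg-number calculation.
M_MGO, M_FEO = 40.30, 71.84

def mg_number(mgo_wt, feo_wt):
    """100*MgO/(MgO+FeO) on a molar basis, from oxide weight percent."""
    mgo_mol = mgo_wt / M_MGO
    feo_mol = feo_wt / M_FEO
    return 100 * mgo_mol / (mgo_mol + feo_mol)

# Example: a composition near the isograd-limiting value quoted above.
print(f"Mg# = {mg_number(8.0, 21.4):.0f}")  # ~40
```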
AGES: An Interdisciplinary Space Based on Projects for Software Engineering Learning | Software Engineering education faces the challenge of qualifying professionals who are competent to work in interdisciplinary teams, flexible enough to perform different roles, and capable of adapting to change. This article describes AGES (Agência Experimental de Engenharia de Software), which is part of the curriculum of the new Software Engineering undergraduate program at PUCRS. AGES is a space that allows students to apply their knowledge in an integrated and diversified way, participating in teams engaged in real projects. We present the assumptions behind AGES, its implementation process, its current status, and the lessons learned so far.
Enantiomeric metabolic interactions and stereoselective human methadone metabolism. | Methadone is administered as a racemate, although opioid activity resides in the R-enantiomer. Methadone disposition is stereoselective, with considerable unexplained variability in clearance and plasma R/S ratios. N-Demethylation of methadone in vitro is mediated predominantly by cytochrome P450 CYP3A4 and CYP2B6, and to a lesser extent by CYP2C19. This investigation evaluated stereoselectivity, models, and kinetic parameters for methadone N-demethylation by recombinant CYP2B6, CYP3A4, and CYP2C19, and the potential for interactions between enantiomers during racemate metabolism. CYP2B6 metabolism was stereoselective. CYP2C19 was less active, and its stereoselectivity was opposite that of CYP2B6. CYP3A4 was not stereoselective. With all three isoforms, enantiomer N-dealkylation rates in the racemate were lower than those of (R)-(6-dimethylamino-4,4-diphenyl-heptan-3-one) hydrochloride (R-methadone) or (S)-(6-dimethylamino-4,4-diphenyl-heptan-3-one) hydrochloride (S-methadone) alone, suggesting an enantiomeric interaction and mutual metabolic inhibition. For CYP2B6, the interaction between enantiomers was stereoselective, with S-methadone a more potent inhibitor of R-methadone N-demethylation than R-methadone was of S-methadone N-demethylation. In contrast, enantiomer interactions were not stereoselective with CYP2C19 or CYP3A4. For all three cytochromes P450, methadone N-demethylation was best described by two-site enzyme models with competitive inhibition, with minor model differences between cytochromes P450 to account for stereoselectivity of metabolism and enantiomeric interactions. Changes in plasma R/S methadone ratios observed after rifampin or troleandomycin pretreatment in humans in vivo were successfully predicted by CYP2B6- but not CYP3A4-catalyzed methadone N-demethylation. CYP2B6 is a predominant catalyst of stereoselective methadone metabolism in vitro. In vivo, CYP2B6 may be a major determinant of methadone metabolism and disposition, and CYP2B6 activity and stereoselective metabolic interactions may confer variability in methadone disposition.
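The abstract names two-site enzyme models with competitive inhibition but does not reproduce the rate law. A generic competitive-inhibition form consistent with the mutual inhibition described, where each enantiomer raises the apparent Km of the other, is sketched below; the parameter symbols are illustrative, not the paper's fitted model.

```latex
% R-methadone N-demethylation rate with S-methadone as a competitive
% inhibitor, and the symmetric expression for the S-enantiomer.
v_R = \frac{V_{\max,R}\,[R]}{K_{m,R}\left(1 + \frac{[S]}{K_{i,S}}\right) + [R]},
\qquad
v_S = \frac{V_{\max,S}\,[S]}{K_{m,S}\left(1 + \frac{[R]}{K_{i,R}}\right) + [S]}
```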
A simple technique for introducing small diameter vessels into a coupler device. | Dear Sir, The venous coupler device is a safe method of venous anastomosis (Jandali, Wu, Vega, Kovach, & Serletti, 2010), and the coupling device is at least equivalent to hand-sewn anastomosis in preventing venous thrombosis (Kulkarni et al., 2016). Small diameter vessels, 1–1.5 mm, can present a difficult challenge when being introduced through the coupler. We present a simple technique for introducing a small diameter vein into the coupler. A single suture can be placed in the edge of the lumen and tied loosely, with the suture ends left long (Figure 1A). While holding the suture end with minimal tension, the coupler can be guided over the suture, and the vein can then be atraumatically pulled through the coupling device. Alternatively, a triangulation technique, as described by Alexis Carrel in 1902, can be used to maximize lumen area (Figure 1B). We feel that this is a simple and safe technique, which decreases the time, vessel handling, and stress involved in dealing with small caliber vessels. Matthew Philip Murphy, MB BCh BAO MRCSI, Niall Michael Mc Inerney, FRCS (Plast), Katherine Mary Browne, MB BCh BAO MRCSI, Robert Henry Caulfield, FRCS (Plast), Richard Patrick Hanson, FRCS (Plast), Department of Plastic Surgery, Mater Misericordiae University Hospital, Eccles Street, Dublin, Ireland
Novel Surgical Method for Pincer Nail Treatment: Partial Matricectomy and Triple Flap Technique. | Transverse overcurvature of the nail plate that progressively pinches the distal nail bed is called pincer nail deformity. The condition is more frequent on toes and rare on fingers. The major indications for treatment are pain, inflammation, interference with wearing shoes, and cosmetic embarrassment. Various surgical methods have been described for the correction of pincer nail deformity. Here, a novel surgical treatment method for pincer nail deformity using partial matricectomy and a triple flap technique is presented.