title | abstract
---|---
Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs | Estimating the confidence of a link is a critical task in Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on its prior state, is a key research direction within this area. We propose a latent feature embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique to learn models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of the topological properties of the Knowledge Graph and present a linear regression model to reason about its expected performance level. |
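The pairwise BPR objective this abstract refers to can be sketched in a few lines. This is an illustrative toy, not the paper's model: the multiplicative triple score and all vectors below are made-up assumptions.

```python
import math

def bpr_loss(score_pos, score_neg):
    """Pairwise BPR objective for one (positive, negative) pair.

    Minimizing the negative log-sigmoid of the score margin ranks
    observed links above unobserved ones.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(score_pos - score_neg))))

def triple_score(head, relation, tail):
    """Toy latent-feature score: elementwise product of head, relation, tail."""
    return sum(h * r * t for h, r, t in zip(head, relation, tail))

# Illustrative embeddings: a positive tail scores higher than a negative one,
# so the pair incurs a loss below log(2) (the zero-margin value).
h, r = [0.5, 0.1], [1.0, 1.0]
t_pos, t_neg = [0.9, 0.2], [-0.4, 0.1]
loss = bpr_loss(triple_score(h, r, t_pos), triple_score(h, r, t_neg))
```

Minimizing this loss over sampled (positive, negative) pairs is the ranking criterion BPR optimizes, here one such loss per predicate-specific model.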
A grounded theory analysis of modern web applications: knowledge, skills, and abilities for DevOps | Since 2009, DevOps, the combination of development and operations, has been adopted by organizations in industry, such as Netflix, Flickr, and Fotopedia, and configuration management tools have been used to support it. In this paper we investigate which Knowledge, Skills, and Abilities (KSAs) have been employed in developing and deploying modern web applications and how these KSAs support DevOps. By applying a qualitative analysis approach, namely grounded theory, to three web application development projects, we discover that the KSAs of both Software Development and IT Operations practitioners support the four perspectives of DevOps: collaboration culture, automation, measurement, and sharing. |
Practical Techniques for Searches on Encrypted Data | It is desirable to store data on data storage servers such as mail servers and file servers in encrypted form to reduce security and privacy risks. But this usually implies that one has to sacrifice functionality for security. For example, if a client wishes to retrieve only documents containing certain words, it was not previously known how to let the data storage server perform the search and answer the query without loss of data confidentiality. In this paper, we describe our cryptographic schemes for the problem of searching on encrypted data and provide proofs of security for the resulting cryptosystems. Our techniques have a number of crucial advantages. They are provably secure: they provide provable secrecy for encryption, in the sense that the untrusted server cannot learn anything about the plaintext when only given the ciphertext; they provide query isolation for searches, meaning that the untrusted server cannot learn anything more about the plaintext than the search result; they provide controlled searching, so that the untrusted server cannot search for an arbitrary word without the user's authorization; they also support hidden queries, so that the user may ask the untrusted server to search for a secret word without revealing the word to the server. The algorithms we present are simple, fast (for a document of length n, the encryption and search algorithms only need O(n) stream cipher and block cipher operations), and introduce almost no space and communication overhead, and hence are practical to use today. We gratefully acknowledge support for this research from several US government agencies.
This research was supported in part by the Defense Advanced Research Projects Agency under DARPA contract N6601-9928913 (under supervision of the Space and Naval Warfare Systems Center San Diego), by the National Science Foundation under grant FD99-79852, and by the United States Postal Service under grant USPS 102590-98-C3513. Views and conclusions contained in this document are those of the authors and do not necessarily represent the official opinion or policies, either expressed or implied, of the US government or any of its agencies, DARPA, NSF, or USPS. |
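The abstract's scheme family can be illustrated with a heavily simplified sketch of the sequential-scan idea: each fixed-size word block is XOR-masked with a pseudorandom value S concatenated with a checkable tag F_k(S), so a server holding the word-specific key can test for that one word without learning anything at non-matching positions. This is a toy under loud assumptions: HMAC-SHA256 stands in for the paper's pseudorandom functions, and per-word key derivation is collapsed into a single word key.

```python
import hashlib
import hmac
import os

def prf(key, data):
    """Pseudorandom function (HMAC-SHA256 as a stand-in)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def encrypt_word(word, stream_key, word_key, index):
    """Encrypt one 16-byte word block at a given position.

    The mask is S || F_k(S): S is a per-position pseudorandom value,
    and F_k(S) is the tag that makes the position checkable.
    """
    w = hashlib.sha256(word.encode()).digest()[:16]   # fixed-size word block
    s = prf(stream_key, index.to_bytes(4, "big"))[:8]
    pad = s + prf(word_key, s)[:8]                    # S || F_k(S), 16 bytes
    return bytes(a ^ b for a, b in zip(w, pad))

def matches(ciphertext, word, word_key):
    """Server-side test: does this ciphertext hide `word`?

    XOR-ing off the candidate word must reveal a pad of the form
    S || F_k(S); for any other word the tag check fails with
    overwhelming probability, so nothing else leaks.
    """
    w = hashlib.sha256(word.encode()).digest()[:16]
    x = bytes(a ^ b for a, b in zip(ciphertext, w))
    s, tag = x[:8], x[8:]
    return prf(word_key, s)[:8] == tag

stream_key, word_key = os.urandom(32), os.urandom(32)
c = encrypt_word("secret", stream_key, word_key, index=0)
hit = matches(c, "secret", word_key)    # authorized search for the right word
miss = matches(c, "public", word_key)   # wrong word: tag check fails
```

The O(n) cost claimed in the abstract is visible here: encryption and search each touch every word block exactly once with constant cipher work.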
Body weight-supported bedside treadmill training facilitates ambulation in ICU patients: An interventional proof of concept study. | PURPOSE
Early mobilisation is advocated to improve recovery of intensive care unit (ICU) survivors. However, severe weakness in combination with tubes, lines and machinery are practical barriers for the implementation of ambulation with critically ill patients. The aim of this study was to explore the feasibility of Body Weight-Supported Treadmill Training (BWSTT) in critically ill patients in the ICU.
METHODS
A custom-built bedside Body Weight-Supported Treadmill was used and evaluated in medical and surgical patients in the ICU. Feasibility was evaluated in terms of eligibility, the number of successful BWSTT sessions, the number of staff needed, adverse events, the number of patients who could not have walked without BWSTT, patient satisfaction and anxiety.
RESULTS
Twenty participants underwent 54 BWSTT sessions. Two staff members executed the BWSTT and no adverse events occurred. Medical equipment did not have to be disconnected during any of the treatment sessions. In 74% of the sessions, the participants would not have been able to walk without the BWSTT. Patient satisfaction with BWSTT was high and anxiety low.
CONCLUSIONS
This proof of concept study demonstrated that BWSTT is safe, reduces staff resources, and facilitates first-time ambulation in critically ill patients with severe muscle weakness in the ICU. |
Density Based k-Nearest Neighbors Clustering Algorithm for Trajectory Data | With the widespread availability of low-cost GPS, cellular phones, satellite imagery, robotics, and Web traffic monitoring devices, it is becoming possible to record and store data about the movement of people and objects in large quantities. While these data hide important knowledge for the enhancement of location- and mobility-oriented infrastructures and services, by themselves they lack the semantic embedding that would make fully automatic algorithmic analysis possible. Clustering is an important task in data mining, and clustering algorithms for moving objects provide new and helpful information, such as traffic jam detection and significant location identification. In this paper we present an augmentation of a relative density-based clustering algorithm for movement (trajectory) data. It provides a k-nearest neighbors clustering algorithm based on relative density, which efficiently resolves the problem of high sensitivity to the user-defined parameters in DBSCAN. We conduct extensive experiments on two real datasets of moving vehicles from Milan (Italy) and Athens (Greece). |
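The relative-density idea the abstract contrasts with DBSCAN can be sketched briefly: rather than a global eps threshold, each point's density is compared to the mean density of its own k nearest neighbors. This is an illustrative toy (brute-force neighbor search, made-up data), not the paper's algorithm.

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbors of points[i] (brute force)."""
    order = sorted(range(len(points)),
                   key=lambda j: math.dist(points[i], points[j]))
    return order[1:k + 1]  # skip the point itself

def local_density(points, i, k):
    """Density estimate: k over the distance to the k-th neighbor."""
    kth = knn(points, i, k)[-1]
    return k / max(math.dist(points[i], points[kth]), 1e-12)

def relative_density(points, i, k):
    """Density of a point relative to the mean density of its neighbors.

    Values near 1 mark points inside a cluster of any density; values
    well below 1 mark outliers. No global eps parameter is needed,
    which is what makes the criterion insensitive to density variation.
    """
    nbrs = knn(points, i, k)
    mean = sum(local_density(points, j, k) for j in nbrs) / k
    return local_density(points, i, k) / mean

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
core = relative_density(pts, 0, 2)     # inside the dense block: near 1
outlier = relative_density(pts, 4, 2)  # isolated point: far below 1
```

A trajectory version would apply the same test with a trajectory distance function in place of `math.dist`.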
Parameter Estimation for Probabilistic Finite-State Transducers | Weighted finite-state transducers suffer from the lack of a training algorithm. Training is even harder for transducers that have been assembled via finite-state operations such as composition, minimization, union, concatenation, and closure, as this yields tricky parameter tying. We formulate a “parameterized FST” paradigm and give training algorithms for it, including a general bookkeeping trick (“expectation semirings”) that cleanly and efficiently computes expectations and gradients. 1 Background and Motivation Rational relations on strings have become widespread in language and speech engineering (Roche and Schabes, 1997). Despite bounded memory they are well-suited to describe many linguistic and textual processes, either exactly or approximately. A relation is a set of (input, output) pairs. Relations are more general than functions because they may pair a given input string with more or fewer than one output string. The class of so-called rational relations admits a nice declarative programming paradigm. Source code describing the relation (a regular expression) is compiled into efficient object code (in the form of a 2-tape automaton called a finite-state transducer). The object code can even be optimized for runtime and code size (via algorithms such as determinization and minimization of transducers). This programming paradigm supports efficient nondeterminism, including parallel processing over infinite sets of input strings, and even allows “reverse” computation from output to input. Its unusual flexibility for the practiced programmer stems from the many operations under which rational relations are closed. It is common to define further useful operations (as macros), which modify existing relations not by editing their source code but simply by operating on them “from outside.” ∗A brief version of this work, with some additional material, first appeared as (Eisner, 2001a). 
A leisurely journal-length version with more details has been prepared and is available. The entire paradigm has been generalized to weighted relations, which assign a weight to each (input, output) pair rather than simply including or excluding it. If these weights represent probabilities P (input, output) or P (output | input), the weighted relation is called a joint or conditional (probabilistic) relation and constitutes a statistical model. Such models can be efficiently restricted, manipulated or combined using rational operations as before. An artificial example will appear in §2. The availability of toolkits for this weighted case (Mohri et al., 1998; van Noord and Gerdemann, 2001) promises to unify much of statistical NLP. Such tools make it easy to run most current approaches to statistical markup, chunking, normalization, segmentation, alignment, and noisy-channel decoding,1 including classic models for speech recognition (Pereira and Riley, 1997) and machine translation (Knight and Al-Onaizan, 1998). Moreover, once the models are expressed in the finitestate framework, it is easy to use operators to tweak them, to apply them to speech lattices or other sets, and to combine them with linguistic resources. Unfortunately, there is a stumbling block: Where do the weights come from? After all, statistical models require supervised or unsupervised training. Currently, finite-state practitioners derive weights using exogenous training methods, then patch them onto transducer arcs. Not only do these methods require additional programming outside the toolkit, but they are limited to particular kinds of models and training regimens. For example, the forward-backward algorithm (Baum, 1972) trains only Hidden Markov Models, while (Ristad and Yianilos, 1996) trains only stochastic edit distance. 
In short, current finite-state toolkits include no training algorithms, because none exist for the large space of statistical models that the toolkits can in principle describe and run. |
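The "expectation semiring" bookkeeping trick mentioned in the abstract can be shown in miniature. Each weight is a pair (p, r): a probability together with an expected value of some additive path statistic (e.g. a feature count). Summing path weights in this semiring yields both the total probability and the unnormalized expectation in one pass, which is exactly the E-step bookkeeping needed for training. The class and example values below are an illustrative sketch, not Eisner's implementation.

```python
class ExpectationSemiring:
    """Element (p, r): probability p paired with an expected-value term r."""

    def __init__(self, p, r):
        self.p, self.r = p, r

    def __add__(self, other):
        # Semiring "plus": combining alternative paths adds both components.
        return ExpectationSemiring(self.p + other.p, self.r + other.r)

    def __mul__(self, other):
        # Semiring "times": extending a path multiplies probabilities and
        # accumulates expectations by the product rule.
        return ExpectationSemiring(self.p * other.p,
                                   self.p * other.r + self.r * other.p)

# Two arcs in sequence, then an alternative single-arc path:
a = ExpectationSemiring(0.5, 0.5 * 1.0)   # p=0.5, fires the feature once
b = ExpectationSemiring(0.8, 0.0)         # p=0.8, feature not fired
c = ExpectationSemiring(0.4, 0.4 * 2.0)   # p=0.4, fires the feature twice

total = a * b + c
expected_count = total.r / total.p        # E[count] over both paths
```

Checking by hand: path a-b has probability 0.4 and count 1, path c has probability 0.4 and count 2, so E[count] = (0.4·1 + 0.4·2)/0.8 = 1.5, which is what the semiring computes without ever enumerating paths.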
Closed-loop IGBT gate drive featuring highly dynamic di/dt and dv/dt control | In this paper, a closed-loop active IGBT gate drive providing highly dynamic diC/dt and dvCE/dt control is proposed. By means of using only simple passive measurement circuits for the generation of the feedback signals and a single operational amplifier as PI-controller, high analog control bandwidth is achieved enabling the application even for switching times in the sub-microsecond range. Therewith, contrary to state of the art gate drives, the parameter dependencies and nonlinearities of the IGBT are compensated enabling accurately specified and constant diC/dt and dvCE/dt values of the IGBT for the entire load and temperature range. This ensures the operation of an IGBT in the safe operating area (SOA), i.e. with limited turn-on peak reverse recovery current and turn-off overvoltage, and permits the restriction of electromagnetic interference (EMI). A hardware prototype is built to experimentally verify the proposed closed-loop active gate drive concept. |
Spatial, Structural and Temporal Feature Learning for Human Interaction Prediction | Human interaction prediction, i.e., the recognition of an ongoing interaction activity before it is completely executed, has a wide range of applications such as human-robot interaction and the prevention of dangerous events. Due to the large variations in appearance and the evolution of scenes, interaction prediction at an early stage is a challenging task. In this paper, a novel structural feature is exploited as a complement together with the spatial and temporal information to improve the performance of interaction prediction. The proposed structural feature is captured by Long Short Term Memory (LSTM) networks, which process the global and local features associated to each frame and each optical flow image. A new ranking score fusion method is then introduced to combine the spatial, temporal and structural models. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods for human interaction prediction on the BIT-Interaction, the TV Human Interaction and the UT-Interaction datasets. |
Biomedical Signal and Image Processing Spring 2008 | Introduction In this chapter we will examine how we can generalize the idea of transforming a time series into an alternative representation, such as the Fourier (frequency) domain, to facilitate systematic methods of either removing (filtering) or adding (interpolating) data. In particular, we will examine the techniques of Principal Component Analysis (PCA) using Singular Value Decomposition (SVD), and Independent Component Analysis (ICA). Both of these techniques utilize a representation of the data in a statistical domain rather than a time or frequency domain. That is, the data are projected onto a new set of axes that fulfill some statistical criterion, which implies independence, rather than a set of axes that represent discrete frequencies such as with the Fourier transform, where the independence is assumed. Another important difference between these statistical techniques and Fourier-based techniques is that the Fourier components onto which a data segment is projected are fixed, whereas PCA- or ICA-based transformations depend on the structure of the data being analyzed. The axes onto which the data are projected are therefore discovered. If the structure of the data (or rather the statistics of the underlying sources) changes over time, then the axes onto which the data are projected will change too. Any projection onto another set of axes (or into another space) is essentially a method for separating the data out into separate components or sources which will hopefully allow us to see important structure more clearly in a particular projection. That is, the direction of projection increases the signal-to-noise ratio (SNR) for a particular signal source. For example, by calculating the power spectrum of a segment of data, we hope to see peaks at certain frequencies.
The power (amplitude squared) along certain frequency vectors is therefore high, meaning we have a strong component in the signal at that frequency. By discarding the projections that correspond to the unwanted sources (such as the noise or artifact sources) and inverting the transformation, we effectively perform a filtering of the recorded observation. This is true for both ICA and PCA as well as Fourier-based techniques. However, one important difference between these techniques is that Fourier techniques assume that the projections onto each frequency component are independent of the other frequency components. In PCA and ICA we attempt to find a set of axes which are independent of one another in some sense. We assume there are a set of independent 1 … |
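The chapter's key contrast, that PCA discovers its axes from the data rather than using fixed Fourier axes, can be shown with a minimal 2-D sketch. For two channels, the leading principal axis of the covariance matrix has a closed form; the data below are made up for illustration.

```python
import math

def principal_axis(data):
    """Leading eigenvector of the 2x2 sample covariance matrix.

    Projecting onto this axis and reconstructing discards the minor
    component, i.e. a one-component PCA filter for 2-D observations.
    """
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / n
    syy = sum((y - my) ** 2 for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data) / n
    # Angle of the dominant eigenvector of [[sxx, sxy], [sxy, syy]].
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

# Noisy samples scattered around the line y = x:
data = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2), (3.0, 2.9), (4.0, 4.1)]
ux, uy = principal_axis(data)
# The discovered axis points along (1, 1)/sqrt(2); had the data's
# structure been different, a different axis would have been found.
```

This is the "axes are discovered" property in miniature; in practice the full decomposition is obtained with SVD, and ICA replaces the variance criterion with statistical independence.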
Impact of dexamethasone responsiveness on long term outcome in patients with newly diagnosed multiple myeloma. | Dexamethasone (Dex), alone or in combination, is commonly used for treating multiple myeloma. Dex as a single agent for initial therapy of myeloma results in overall response rates of 50-60%. It is unclear whether steroid responsiveness reflects any biological characteristic that impacts long-term outcome. We studied a cohort of 182 patients with newly diagnosed myeloma seen between March 1998 and June 2007, initially treated with single-agent Dex for at least 4 weeks. The median age at diagnosis was 63 years (range, 39-81) and the median estimated survival was 55 months. At a median duration of therapy of 15 weeks, 91 (50%) patients had a partial response or better, 80 (44%) had less than a partial response and the remaining (6%) patients were not evaluable. The median overall survival from diagnosis for the responders was 75 months compared to 71 months for the remaining patients, P = 0.6. There was no correlation between baseline disease characteristics and Dex responsiveness. While overall survival was longer for the 130 (70%) patients who proceeded to an autologous stem cell transplant, no correlation was found between survival and Dex responsiveness in either group. Among this cohort of patients with myeloma, failure to respond to single-agent steroid did not have an adverse impact on eventual outcome. |
A predictive model of menu performance | Menus are a primary control in current interfaces, but there has been relatively little theoretical work to model their performance. We propose a model of menu performance that goes beyond previous work by incorporating components for Fitts' Law pointing time, visual search time when novice, Hick-Hyman Law decision time when expert, and for the transition from novice to expert behaviour. The model is able to predict performance for many different menu designs, including adaptive split menus, items with different frequencies and sizes, and multi-level menus. We tested the model by comparing predictions for four menu designs (traditional menus, recency and frequency based split menus, and an adaptive 'morphing' design) with empirical measures. The empirical data matched the predictions extremely well, suggesting that the model can be used to explore a wide range of menu possibilities before implementation. |
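The model components named in this abstract compose naturally: Fitts' Law for pointing, linear visual search when novice, Hick-Hyman decision time when expert, and a blend for the novice-to-expert transition. The sketch below is an illustrative assumption, not the paper's fitted model; all constants are made up.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.1):
    """Fitts' Law pointing time; a, b are illustrative device constants."""
    return a + b * math.log2(distance / width + 1)

def hick_hyman_time(n_items, a=0.08, b=0.15):
    """Expert decision time among n equally likely items (Hick-Hyman Law)."""
    return a + b * math.log2(n_items)

def linear_search_time(position, t_per_item=0.25):
    """Novice visual search: scan items top to bottom."""
    return t_per_item * position

def selection_time(position, n_items, expertise, distance, width):
    """Blend novice search and expert decision by expertise in [0, 1],
    then add the pointing time; a toy stand-in for the paper's
    novice-to-expert transition component."""
    decide = ((1 - expertise) * linear_search_time(position)
              + expertise * hick_hyman_time(n_items))
    return decide + fitts_time(distance, width)

# Same item, same menu: a novice pays linear search cost, an expert
# pays only the logarithmic decision cost.
novice = selection_time(5, 8, 0.0, 100, 20)
expert = selection_time(5, 8, 1.0, 100, 20)
```

Because each menu design changes `position`, `distance`, or item frequency differently, predictions for designs such as split or morphing menus fall out of the same composition.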
FML-based prediction agent and its application to game of Go | In this paper, we present a robotic prediction agent for the game of Go, including a darkforest Go engine, a fuzzy markup language (FML) assessment engine, an FML-based decision support engine, and a robot engine. The knowledge base and rule base of the FML assessment engine are constructed by referring to information from the darkforest Go engine located in NUTN and OPU, for example, the number of MCTS simulations and the winning rate prediction. The proposed robotic prediction agent first retrieves data from the Go competition website, and then the FML assessment engine infers the winning possibility based on the information generated by the darkforest Go engine. The FML-based decision support engine computes the winning possibility based on the partial game situation inferred by the FML assessment engine. Finally, the robot engine works with the human-friendly robot partner PALRO, produced by Fujisoft Incorporated, to report the game situation to human Go players. Experimental results show that the FML-based prediction agent works effectively. |
Generating Photo-Realistic Training Data to Improve Face Recognition Accuracy | In this paper we investigate the feasibility of using synthetic data to augment face datasets. In particular, we propose a novel generative adversarial network (GAN) that can disentangle identity-related attributes from non-identity-related attributes. This is done by training an embedding network that maps discrete identity labels to an identity latent space that follows a simple prior distribution, and training a GAN conditioned on samples from that distribution. Our proposed GAN allows us to augment face datasets by generating both synthetic images of subjects in the training set and synthetic images of new subjects not in the training set. By using recent advances in GAN training, we show that the synthetic images generated by our model are photo-realistic, and that training with augmented datasets can indeed increase the accuracy of face recognition models as compared with models trained with real images alone. |
Conditional Generative Adversarial Networks for Commonsense Machine Comprehension | The recently proposed Story Cloze Test [Mostafazadeh et al., 2016] is a commonsense machine comprehension application that addresses the natural language understanding problem. This dataset contains many story tests that require commonsense inference ability. Unfortunately, the training data is almost unsupervised: each context document is followed by only one positive sentence that can be inferred from the context. In the testing period, however, we must choose between two candidate sentences. To tackle this problem, we employ generative adversarial networks (GANs) to generate fake sentences. We propose a Conditional GAN (CGAN) in which the generator is conditioned on the context. Our experiments show the advantage of the CGAN in discriminating sentences, achieving state-of-the-art results on the commonsense story reading comprehension task compared with previous feature engineering and deep learning methods. |
Face Alignment With Deep Regression | In this paper, we present a deep regression approach for face alignment. The deep regressor is a neural network that consists of a global layer and multistage local layers. The global layer estimates the initial face shape from the whole image, while the following local layers iteratively update the shape with local image observations. Combining standard derivations and numerical approximations, we make all layers able to backpropagate error differentials, so that we can apply the standard backpropagation to jointly learn the parameters from all layers. We show that the resulting deep regressor gradually and evenly approaches the true facial landmarks stage by stage, avoiding the tendency that often occurs in the cascaded regression methods and deteriorates the overall performance: yielding early stage regressors with high alignment accuracy gains but later stage regressors with low alignment accuracy gains. Experimental results on standard benchmarks demonstrate that our approach brings significant improvements over previous cascaded regression algorithms. |
The neural response to facial attractiveness. | What are the neural correlates of attractiveness? Using functional MRI (fMRI), the authors addressed this question in the specific context of the apprehension of faces. When subjects judged facial beauty explicitly, neural activity in a widely distributed network involving the ventral occipital, anterior insular, dorsal posterior parietal, inferior dorsolateral, and medial prefrontal cortices correlated parametrically with the degree of facial attractiveness. When subjects were not attending explicitly to attractiveness, but rather were judging facial identity, the ventral occipital region remained responsive to facial beauty. The authors propose that this region, which includes the fusiform face area (FFA), the lateral occipital cortex (LOC), and medially adjacent regions, is activated automatically by beauty and may serve as a neural trigger for pervasive effects of attractiveness in social interactions. |
Changes in microcirculation of the trapezius muscle during a prolonged computer task | The aim of this study is to investigate if there is a change in oxygen saturation and blood flow in the different parts of the trapezius muscle in office workers with and without trapezius myalgia during a standardized computer task. Twenty right-handed office workers participated; ten were recruited based on pain in the trapezius and ten as matching controls. Subjects performed a combination of typing and mousing tasks for 60 min at a standardized workstation. Muscle tissue oxygenation and blood flow data were collected from the upper trapezius (UT), the middle trapezius (MT) and the lower trapezius (LT), both on the left and right side, at seven moments (at baseline and every tenth minute during the 1-h typing task) using the oxygen-to-see device. In all three parts of the trapezius muscle, the oxygen saturation and blood flow decreased significantly over time in a similar pattern (p < 0.001). Oxygenation of the left and right UT was significantly higher compared to the other muscle parts (p < 0.001). Oxygen saturation for the MT was significantly lower in the cases compared to the control group (p = 0.027). Blood flow of the UT on the right side was significantly lower than the blood flow on the left side (p = 0.026). The main finding of this study was that 1 h of combined workstation tasks resulted in decreased oxygen saturation and blood flow in all three parts of the trapezius muscle. Future research should focus on the influence of intervention strategies on these parameters. |
Automatic Detection and Prevention of Cyberbullying | Abstract—The recent development of social media poses new challenges to the research community in analyzing online interactions between people. Social networking sites offer great opportunities for connecting with others, but also increase the vulnerability of young people to undesirable phenomena, such as cybervictimization. Recent research reports that on average, 20% to 40% of all teenagers have been victimized online. In this paper, we focus on cyberbullying as a particular form of cybervictimization. Successful prevention depends on the adequate detection of potentially harmful messages. However, given the massive information overload on the Web, there is a need for intelligent systems to identify potential risks automatically. We present the construction and annotation of a corpus of Dutch social media posts annotated with fine-grained cyberbullying-related text categories, such as insults and threats. Also, the specific participants (harasser, victim or bystander) in a cyberbullying conversation are identified to enhance the analysis of human interactions involving cyberbullying. Apart from describing our dataset construction and annotation, we present proof-of-concept experiments on the automatic identification of cyberbullying events and fine-grained cyberbullying categories. |
Otem&Utem: Over- and Under-Translation Evaluation Metric for NMT | Although neural machine translation (NMT) yields promising translation performance, it unfortunately suffers from over- and under-translation issues [Tu et al., 2016], which have become research hotspots in NMT. At present, these studies mainly apply the dominant automatic evaluation metrics, such as BLEU, to evaluate the overall translation quality with respect to both adequacy and fluency. However, they are unable to accurately measure the ability of NMT systems in dealing with the above-mentioned issues. In this paper, we propose two quantitative metrics, Otem and Utem, to automatically evaluate system performance in terms of over- and under-translation respectively. Both metrics are based on the proportion of mismatched n-grams between the gold reference and the system translation. We evaluate both metrics by comparing their scores with human evaluations, where the values of the Pearson Correlation Coefficient reveal their strong correlation. Moreover, in-depth analyses on various translation systems indicate some inconsistency between BLEU and our proposed metrics, highlighting the necessity and significance of our metrics. Keywords: Evaluation Metric, Neural Machine Translation, Over-translation, Under-translation. |
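The mismatched-n-gram proportions underlying the two metrics can be sketched directly. These functions are a rough illustration of the idea, not the published Otem/Utem formulas (which combine multiple n-gram orders and penalties); the example sentences are made up.

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def over_translation(reference, hypothesis, n=1):
    """Proportion of hypothesis n-grams exceeding their reference count:
    an Otem-style over-translation signal (repetitions score high)."""
    ref, hyp = ngrams(reference, n), ngrams(hypothesis, n)
    extra = sum(max(c - ref[g], 0) for g, c in hyp.items())
    return extra / max(sum(hyp.values()), 1)

def under_translation(reference, hypothesis, n=1):
    """Proportion of reference n-grams missing from the hypothesis:
    a Utem-style under-translation signal (omissions score high)."""
    ref, hyp = ngrams(reference, n), ngrams(hypothesis, n)
    missing = sum(max(c - hyp[g], 0) for g, c in ref.items())
    return missing / max(sum(ref.values()), 1)

ref = "the cat sat on the mat".split()
over = over_translation(ref, "the the cat sat on the mat".split())   # repeats "the"
under = under_translation(ref, "the cat sat".split())                 # drops half
```

Note why BLEU alone misses this: clipped n-gram precision penalizes the repetition and the brevity penalty the omission, but neither is reported as a separate, interpretable score the way these two proportions are.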
Low-Bit-Rate Speech Coding | Low-bit-rate speech coding, at rates below 4 kb/s, is needed for both communication and voice storage applications. At such low rates, full encoding of the speech waveform is not possible; therefore, low-rate coders rely instead on parametric models to represent only the most perceptually-relevant aspects of speech. While there are a number of different approaches for this modeling, all can be related to the basic linear model of speech production, where an excitation signal drives a vocal tract filter. The basic properties of the speech signal and of human speech perception can explain the principles of parametric speech coding as applied in early vocoders. Current speech modeling approaches, such as mixed excitation linear prediction, sinusoidal coding, and waveform interpolation, use more sophisticated versions of these same concepts. Modern techniques for encoding the model parameters, in particular using the theory of vector quantization, allow the encoding of the model information with very few bits per speech frame. Successful standardization of low-rate coders has enabled their widespread use for both military and satellite communications, at rates from 4 kb/s all the way down to 600 b/s. However, the goal of toll-quality low-rate coding continues to provide a research challenge. This work was sponsored by the Defense Advanced Research Projects Agency under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government. |
Do "Also-Viewed" Products Help User Rating Prediction? | For online product recommendation engines, learning high-quality product embedding that captures various aspects of the product is critical to improving the accuracy of user rating prediction. In recent research, in conjunction with user feedback, the appearance of a product as side information has been shown to be helpful for learning product embedding. However, since a product has a variety of aspects such as functionality and specifications, taking into account only its appearance as side information does not suffice to accurately learn its embedding. In this paper, we propose a matrix co-factorization method that leverages information hidden in the so-called "also-viewed" products, i.e., a list of products that have also been viewed by users who have viewed a target product. "Also-viewed" products reflect various aspects of a given product that have been overlooked by visually-aware recommendation methods proposed in past research. Experiments on multiple real-world datasets demonstrate that our proposed method outperforms state-of-the-art baselines in terms of user rating prediction. We also perform classification on the product embedding learned by our method, and compare it with a state-of-the-art baseline to demonstrate the superiority of our method in generating high-quality product embedding that better represents the product. |
Efficient Deep Neural Network Serving: Fast and Furious | The emergence of deep neural networks (DNNs) as a state-of-the-art machine learning technique has enabled a variety of artificial intelligence applications for image recognition, speech recognition and translation, drug discovery, and machine vision. These applications are backed by large DNN models running in serving mode on a cloud computing infrastructure to process client inputs such as images, speech segments, and text segments. Given the compute-intensive nature of large DNN models, a key challenge for DNN serving systems is to minimize the request response latencies. This paper characterizes the behavior of different parallelism techniques for supporting scalable and responsive serving systems for large DNNs. We identify and model two important properties of DNN workloads: 1) homogeneous request service demand and 2) interference among requests running concurrently due to cache/memory contention. These properties motivate the design of serving deep learning systems fast (SERF), a dynamic scheduling framework that is powered by an interference-aware queueing-based analytical model. To minimize response latency for DNN serving, SERF quickly identifies and switches to the optimal parallel configuration of the serving system by using both empirical and analytical methods. Our evaluation of SERF using several well-known benchmarks demonstrates its good latency prediction accuracy, its ability to correctly identify optimal parallel configurations for each benchmark, its ability to adapt to changing load conditions, and its efficiency advantage (at least three orders of magnitude faster) over exhaustive profiling. We also demonstrate that SERF supports other scheduling objectives and can be extended to any general machine learning serving system with similar parallelism properties. |
Spin-Transfer Torque Memories: Devices, Circuits, and Systems | Spin-transfer torque magnetic memory (STT-MRAM) has gained significant research interest due to its nonvolatility and zero standby leakage, near unlimited endurance, excellent integration density, acceptable read and write performance, and compatibility with CMOS process technology. However, several obstacles need to be overcome for STT-MRAM to become the universal memory technology. This paper first reviews the fundamentals of STT-MRAM and discusses key experimental breakthroughs. The state of the art in STT-MRAM is then discussed, beginning with the device design concepts and challenges. The corresponding bit-cell design solutions are also presented, followed by the STT-MRAM cache architectures suitable for on-chip applications. |
SUAV:Q - An improved design for a transformable solar-powered UAV | Across the wide range of aerial robot applications, selecting a particular airframe is often a trade-off. Fixed-wing small-scale unmanned aerial vehicles (UAVs) typically have difficulty surveying at low altitudes, while quadrotor UAVs, though more maneuverable, suffer from limited flight time. Recent prior work [1] proposes a solar-powered small-scale aerial vehicle designed to transform between fixed-wing and quad-rotor configurations. Surplus energy collected and stored while in the fixed-wing configuration is utilized while in the quad-rotor configuration. This paper presents an improvement to the robot design of [1] by pursuing a modular airframe, an optimization of the hybrid propulsion system, and solar power electronics. Two prototypes of the robot have been fabricated for independent testing of the airframe in fixed-wing and quad-rotor states. Validation of the solar power electronics and hybrid propulsion system designs was demonstrated through a combination of simulation and empirical data from prototype hardware. |
Current Integration of Tuberculosis (TB) and HIV Services in South Africa, 2011 | SETTING
Public Health Facilities in South Africa.
OBJECTIVE
To assess the current integration of TB and HIV services in South Africa, 2011.
DESIGN
Cross-sectional study of 49 randomly selected health facilities in South Africa. Trained interviewers administered a standardized questionnaire to one staff member responsible for TB and HIV in each facility on aspects of TB/HIV policy, integration and recording and reporting. We calculated and compared descriptive statistics by province and facility type.
RESULTS
Of the 49 health facilities, 35 (71%) provided isoniazid preventive therapy (IPT) and 35 (71%) offered antiretroviral therapy (ART). Among assessed sites in February 2011, 2,512 patients were newly diagnosed with HIV infection, of whom 1,913 (76%) were screened for TB symptoms; 616 (46%) of the 1,332 who screened negative for TB were initiated on IPT. Of 1,072 patients newly registered with TB in February 2011, 144 (13%) were already on ART prior to TB clinical diagnosis, and 451 (42%) were newly diagnosed with HIV infection. Of those, 84 (19%) were initiated on ART. Primary health clinics were less likely to offer ART than district hospitals or community health centers (p<0.001).
CONCLUSION
As of February 2011, integration of TB and HIV services was taking place in public medical facilities in South Africa. Among these services, IPT for people living with HIV and ART for TB patients were the least available. |
Bipolar mixed episodes and antidepressants: a cohort study of bipolar I disorder patients. | OBJECTIVES
The aim of this study was to elucidate the factors associated with the occurrence of mixed episodes, characterized by the presence of concomitant symptoms of both affective poles, during the course of illness in bipolar I disorder patients treated with an antidepressant, as well as the role of antidepressants in the course and outcome of the disorder.
METHOD
We enrolled a sample of 144 patients followed for up to 20 years in the referral Barcelona Bipolar Disorder Program and compared subjects who had experienced at least one mixed episode during the follow-up (n=60) with subjects who had never experienced a mixed episode (n=84) regarding clinical variables.
RESULTS
Nearly 40% of bipolar I disorder patients treated with antidepressants experienced at least one mixed episode during the course of their illness; no gender differences were found between the two groups. Several differences in clinical variables were found between the two groups, but after logistic regression analysis, only suicide attempts (p<0.001), the use of serotonin norepinephrine reuptake inhibitors (p=0.041), switch rates (p=0.010), and years spent ill (p=0.022) remained significantly associated with the occurrence of at least one mixed episode during follow-up.
CONCLUSIONS
The occurrence of mixed episodes is associated with a tendency to chronicity, with a poorer outcome, a higher number of depressive episodes, and greater use of antidepressants, especially serotonin norepinephrine reuptake inhibitors. |
A New Algorithm for Detecting Text Line in Handwritten Documents | Curvilinear text line detection and segmentation in handwritten documents is a significant challenge for handwriting recognition. Given no prior knowledge of script, we model text line detection as an image segmentation problem by enhancing text line structure using a Gaussian window, and adopting the level set method to evolve text line boundaries. Experiments show that the proposed method achieves high accuracy for detecting text lines in both handwritten and machine printed documents with many scripts. |
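The Gaussian-window enhancement step described above can be illustrated with a minimal sketch (the synthetic dot image, the anisotropic sigma of (1, 4), and the 0.04 threshold are illustrative assumptions, not the paper's actual parameters): smoothing more strongly along the horizontal axis merges character-like dots into line-shaped blobs, which a connected-component pass can then count.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

# Synthetic "document": two horizontal text lines made of isolated dots.
img = np.zeros((60, 100))
img[15, ::5] = 1.0  # first text line
img[45, ::5] = 1.0  # second text line

# Anisotropic Gaussian window: smear more along x than y so characters
# on the same line merge while separate lines stay apart.
enhanced = gaussian_filter(img, sigma=(1, 4))

# Threshold the enhanced map and count connected components.
mask = enhanced > 0.04
labels, n_lines = label(mask)
print(n_lines)  # each text line becomes one connected blob
```

In the paper, line boundaries are then evolved with a level set method rather than a fixed threshold; the threshold here only stands in for that step.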
Protecting Moving Trajectories with Dummies | Dummy-based anonymization techniques for protecting the location privacy of mobile users have been proposed in the literature. Prior work shows that, by generating dummies that move in humanlike trajectories, the location privacy of mobile users can be preserved. However, by monitoring long-term movement patterns of users, the trajectories of mobile users can still be exposed. We argue that once the trajectory of a user is identified, the user's locations are exposed. Thus, it is critical to protect the moving trajectories of mobile users in order to preserve user location privacy. We propose two schemes that generate consistent movement patterns in the long run. Guided by three parameters in a user-specified privacy profile, namely short-term disclosure, long-term disclosure, and distance deviation, the proposed schemes derive movement trajectories for dummies. A preliminary performance study shows that our approach is more effective than existing work in protecting the moving trajectories of mobile users and their location privacy. |
Substance Use and Intimate Violence Among Incarcerated Males | The purpose of this study was to examine substance use patterns among a sample of incarcerated males who report engaging in intimate violence, and to identify similarities and differences in demographics, economic status, mental health, criminal justice involvement, relationships, and treatment factors across three groups of incarcerated males: those who report perpetrating low, moderate, or extreme intimate violence in the year preceding their current incarceration. Findings indicated that the low intimate violence group's perpetration consisted almost exclusively of emotional abuse. Males in the moderate and extreme intimate violence groups, however, reported not only high rates of emotional abuse but physical abuse as well. The distinction between the moderate and extreme groups was substantial. Findings also indicated that perpetrators at different levels of violence did not vary significantly in age, employment history, marital status, or race. However, the three groups showed significant differences in three main areas: (1) cocaine and alcohol use patterns, (2) stranger violence perpetration and victimization experiences, and (3) emotional discomfort. Implications for substance abuse and mental health treatment interventions and for future research are discussed. |
Discovering facts with boolean tensor tucker decomposition | Open Information Extraction (Open IE) has gained increasing research interest in recent years. The first step in Open IE is to extract raw subject--predicate--object triples from the data. These raw triples are rarely usable per se and need additional post-processing. To that end, we propose the use of Boolean Tucker tensor decomposition to simultaneously find the entity and relation synonyms and the facts connecting them from the raw triples. Our method represents the synonym sets and facts using (sparse) binary matrices and tensors that can be efficiently stored and manipulated. We consider the formulation of the problem as a Boolean tensor decomposition one of this paper's main contributions. To study the validity of this approach, we use a recent algorithm for scalable Boolean Tucker decomposition. We validate the results with empirical evaluation on a new semi-synthetic data set, generated to faithfully reproduce real-world data features, as well as with real-world data from an existing Open IE extractor. We show that our method obtains high precision, while its low recall can easily be remedied by considering the original data together with the decomposition. |
Model-based insulin and nutrition administration for tight glycaemic control in critical care. | OBJECTIVE
Present a new model-based tight glycaemic control approach using variable insulin and nutrition administration.
BACKGROUND
Hyperglycaemia is prevalent in critical care. Current published protocols use insulin alone to reduce blood glucose levels, require significant added clinical effort, and provide highly variable results. None directly address both the practical clinical difficulties and significant patient variation seen in general critical care, while also providing tight control.
METHODS
The approach presented manages both nutritional inputs and exogenous insulin infusions using tables simplified from a model-based, computerised protocol. Unique delivery aspects include bolus insulin delivery for safety and variable enteral nutrition rates. Unique development aspects include the use of simulated virtual patient trials created from retrospective data. The model, protocol development, and first 50 clinical case results are presented.
RESULTS
High qualitative correlation to within +/-10% between simulated virtual trials and published clinical results validates the overall approach. Pilot tests covering 7358 patient hours produced an average glucose of 5.9 +/- 1.1 mmol/L. Time in the 4-6.1 mmol/L band was 59%, with 84% in 4.0-7.0 mmol/L, and 92% in 4.0-7.75 mmol/L. The average feed rate was 63% of patient specific goal feed and the average insulin dose was 2.6U/hour. There was one hypoglycaemic measurement of 2.1 mmol/L. No departures from protocol or clinical interventions were required at any time.
SUMMARY
Modulating both low dose insulin boluses and nutrition input rates challenges the current practice of using only insulin in larger doses to reduce hyperglycaemic levels. Clinical results show very tight control in safe glycaemic bands. The approach could be readily adopted in any typical ICU. |
Free space range measurements with Semtech Lora™ technology | Although short-range wireless communication explicitly targets local and very regional applications, range continues to be an extremely important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. In particular, the recent transceiver generation comes with extensive and flexible support for Software Defined Radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits in range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies supporting Long Range (LoRa™), describes the outdoor setup at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, shows detailed measurement results, and discusses the strengths and weaknesses of this technology. |
Pilonidal cyst involving the clitoris: a case report. | BACKGROUND
A periclitoral pilonidal cyst or sinus is an exceedingly infrequent occurrence. Diagnostic criteria consist of a sinus tract or cyst containing hair follicles and inflammatory reaction in one or more partially or completely epithelialized sinus tracts. Follow-up of patients reported in the literature has been too short to provide a consensus in regard to the extent of surgery required to provide permanent cure.
CASE
A 30-year-old patient was seen in consultation for evaluation of a chronic, draining periclitoral abscess that had been treated for approximately 2 years with multiple rounds of antibiotics and local incisions. Treatment consisted of a wide local excision of the cyst and multiple sinuses extending into the periclitoral area and labia minora. The shaft and glans of the clitoris were preserved. Primary closure and healing were accomplished.
CONCLUSION
Diagnosis and curative therapy of a pilonidal cyst of the clitoris require a thorough resection which can be accomplished with preservation of the clitoris. |
Active Plasmonics: Principles, Structures, and Applications. | Active plasmonics is a burgeoning and challenging subfield of plasmonics. It exploits the active control of surface plasmon resonance. In this review, a first-ever in-depth description of the theoretical relationship between surface plasmon resonance and its affecting factors, which forms the basis for active plasmon control, will be presented. Three categories of active plasmonic structures, consisting of plasmonic structures in tunable dielectric surroundings, plasmonic structures with tunable gap distances, and self-tunable plasmonic structures, will be proposed in terms of the modulation mechanism. The recent advances and current challenges for these three categories of active plasmonic structures will be discussed in detail. The flourishing development of active plasmonic structures opens access to new application fields. A significant part of this review will be devoted to the applications of active plasmonic structures in plasmonic sensing, tunable surface-enhanced Raman scattering, active plasmonic components, and electrochromic smart windows. This review will be concluded with a section on the future challenges and prospects for active plasmonics. |
Detecting patterns of anomalies | An anomaly is an observation that does not conform to the expected normal behavior. With the ever increasing amount of data being collected universally, automatic surveillance systems are becoming more popular and are increasingly using data mining methods to detect patterns of anomalies. Detecting anomalies can provide useful and actionable information in a variety of real-world scenarios. For example, in disease monitoring, a timely detection of an epidemic can potentially save many lives. The diverse nature of real-world datasets, and the difficulty of obtaining labeled training data make it challenging to develop a universal framework for anomaly detection. We focus on a key feature of most real world scenarios, that multiple anomalous records are usually generated by a common anomalous process. In this thesis we develop methods that utilize the similarity between records in these groups or patterns of anomalies to perform better detection. We also investigate new methods for detection of individual record anomalies, which we then incorporate into the group detection methods. A recurring feature of our methods is combinatorial search over some space (e.g. over all subsets of attributes, or over all subsets of records). We use a variety of computational speedup tricks and approximation techniques to make these methods scalable to large datasets. Since most of our motivating problems involve datasets having categorical or symbolic values, we focus on categorical valued datasets. Apart from this, we make few assumptions about the data, and our methods are very general and applicable to a wide variety of domains. Additionally, we investigate anomaly pattern detection in data structured by space and time. Our method generalizes the popular method of spatiotemporal scan statistics to learn and detect specific, time-varying spatial patterns in the data. 
Finally, we show an efficient and easily interpretable technique for anomaly detection in multivariate time series data. We evaluate our methods on a variety of real world data sets including both real and synthetic anomalies. |
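The spatio-temporal scan statistics that the thesis generalizes can be sketched in their simplest purely temporal form (the counts, baselines, and window bound below are illustrative assumptions, not the thesis's data): score each candidate time window with an expectation-based Poisson log-likelihood ratio and report the highest-scoring window.

```python
import math

def scan_windows(counts, baselines, max_len=7):
    """Expectation-based Poisson scan: return (score, start, end) of the
    window whose observed count most exceeds its expected count."""
    best = (0.0, None, None)
    n = len(counts)
    for s in range(n):
        for e in range(s, min(s + max_len, n)):
            c = sum(counts[s:e + 1])      # observed count in window
            b = sum(baselines[s:e + 1])   # expected count in window
            if c > b:
                llr = c * math.log(c / b) - (c - b)  # Poisson LLR
                if llr > best[0]:
                    best = (llr, s, e)
    return best

# 30 days at an expected rate of 5 events/day, with an injected
# outbreak of 15 events/day on days 20-22.
baselines = [5.0] * 30
counts = [5] * 30
for day in (20, 21, 22):
    counts[day] = 15

score, start, end = scan_windows(counts, baselines)
print(start, end)  # the scan recovers the injected anomalous window
```

The group-anomaly methods in the thesis search far richer spaces (subsets of attributes or records) than this one-dimensional window scan, but the scoring idea is the same.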
Lump versus tylos. | |
Coffee consumption enhances high-density lipoprotein-mediated cholesterol efflux in macrophages. | RATIONALE
An association of habitual coffee consumption with coronary heart disease morbidity and mortality has not been established. We hypothesized that coffee may enhance reverse cholesterol transport (RCT), a key antiatherogenic property of high-density lipoprotein (HDL).
OBJECTIVE
This study investigated whether the phenolic acids of coffee, and coffee itself, regulate RCT from macrophages in vitro, ex vivo, and in vivo.
METHODS AND RESULTS
Caffeic acid and ferulic acid, the major phenolic acids of coffee, enhanced cholesterol efflux from THP-1 macrophages mediated by HDL, but not apoA-I. Furthermore, these phenolic acids increased both the mRNA and protein levels of ATP-binding cassette transporter (ABC)G1 and scavenger receptor class B type I (SR-BI), but not ABCA1. Eight healthy volunteers were recruited for the ex vivo study, and blood samples were taken before and 30 minutes after consumption of coffee or water in a crossover study. The mRNA as well as protein levels of ABCG1, SR-BI, and cholesterol efflux by HDL were increased in the macrophages differentiated under autologous sera obtained after coffee consumption compared to baseline sera. Finally, effects of coffee and phenolic acid on in vivo RCT were assessed by intraperitoneally injecting [(3)H]cholesterol-labeled acetyl low-density lipoprotein-loaded RAW264.7 cells into mice, then monitoring appearance of (3)H tracer in plasma, liver, and feces. Supporting in vitro and ex vivo data, ferulic acid was found to significantly increase the levels of (3)H tracer in feces.
CONCLUSIONS
Coffee intake might have an antiatherogenic property by increasing ABCG1 and SR-BI expression and enhancing HDL-mediated cholesterol efflux from the macrophages via its plasma phenolic acids. |
Aesthetics of interaction: a literature synthesis | New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature "qualities", "dimensions" or "parameters" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and their implications for future theory building. |
A reliable routing protocol for VANET communications | Vehicular Ad-hoc Network (VANET) is an emerging technology that enables communications among vehicles and nearby roadside infrastructure to provide intelligent transportation applications. In order to provide stable connections between vehicles, a reliable routing protocol is needed. Several routing protocols designed for MANETs could be applied to VANETs; however, due to the unique characteristics of VANETs, the results are not encouraging. In this paper, we propose a new routing protocol named AODV-VANET, which incorporates the vehicles' movement information into the route discovery process based on Ad hoc On-Demand Distance Vector (AODV). A Total Weight of the Route is introduced to choose the best route, together with an expiration time estimation to minimize link breakages. With these modifications, the proposed protocol achieves better routing performance. |
Towards a sustainable e-Participation implementation model | A great majority of existing frameworks fail to address universal applicability in countries with particular socio-economic and technological settings. Though there is so far no “one size fits all” strategy for implementing eGovernment, there are some essential common elements in the transformation. This paper therefore attempts to develop a single sustainable model based on theory and on lessons learned from existing e-Participation initiatives in developing and developed countries, so that the benefits of ICT can be maximized and greater participation ensured. |
Connecting users across social media sites: a behavioral-modeling approach | People use various social media for different purposes. The information on an individual site is often incomplete. When sources of complementary information are integrated, a better profile of a user can be built to improve online services such as verifying online information. To integrate these sources of information, it is necessary to identify individuals across social media sites. This paper aims to address the cross-media user identification problem. We introduce a methodology (MOBIUS) for finding a mapping among identities of individuals across social media sites. It consists of three key components: the first component identifies users' unique behavioral patterns that lead to information redundancies across sites; the second component constructs features that exploit information redundancies due to these behavioral patterns; and the third component employs machine learning for effective user identification. We formally define the cross-media user identification problem and show that MOBIUS is effective in identifying users across social media sites. This study paves the way for analysis and mining across social media sites, and facilitates the creation of novel online services across sites. |
Real-Time Video Streaming over NS 3-based Emulated LTE Networks | In this paper, we present an NS-3 based emulation platform for evaluating and optimizing the performance of LTE networks. The platform is designed to provide real-time measurements, thereby eliminating the high cost of real equipment. It consists of three main parts: a video server, video client(s), and an NS-3 based simulation environment for the LTE network. Using the platform, the server streams video clips to the existing clients through the simulated LTE network. We utilize this setup to evaluate multiple cases, such as mobility and handover. Moreover, we use it to evaluate multiple streaming protocols, such as UDP, RTP, and Dynamic Adaptive Streaming over HTTP (DASH). Keywords: DASH, Emulation, LTE, NS-3, Real-time, RTP, UDP. |
Sequence-to-Sequence Models Can Directly Translate Foreign Speech | We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points. |
A randomized clinical trial with two doses of a omega 3 fatty acids oral and arginine enhanced formula in clinical and biochemical parameters of head and neck cancer ambulatory patients. | INTRODUCTION
Postsurgical patients with head and neck cancer can have a high rate of ambulatory complications. The aim was to investigate whether oral ambulatory nutrition of head and neck cancer patients with recent weight loss, using two different doses of an omega 3 fatty acid and arginine enhanced diet, could improve nutritional parameters.
DESIGN
At hospital discharge, postsurgical head and neck cancer patients (n=37) were asked to consume two or three cans per day of a designed omega 3 fatty acid and arginine enhanced supplement for a twelve-week period.
RESULTS
Albumin, prealbumin, transferrin and lymphocyte levels improved in both groups. Weight, fat mass and fat-free mass improved during supplementation in group II (3 cans per day). No differences were detected in anthropometric parameters in group I. Gastrointestinal tolerance of both formulas was good; no episodes of intolerance were reported. There were no differences between the two formulas in postsurgical complication rates.
CONCLUSIONS
Omega 3 and arginine enhanced formulas improved blood protein concentrations and lymphocyte levels in ambulatory postoperative head and neck cancer patients. The higher dose of the arginine and omega 3 fatty acid formula also improved weight. |
An Information Security Technique Using DES-RSA Hybrid and LSB | Sandeep Singh and Aman Singh, Department of Computer Science Engineering, Lovely Professional University, Phagwara, Punjab, India. Abstract: Security plays a vital role when exchanging large amounts of information between source and destination. Technology is advancing very rapidly from a security point of view. Everyone wants their communication to be confidential and protected from access by illegitimate users over the internet. Cryptography and steganography play very important roles as security tools. Cryptography is the art of converting readable information into an unreadable form. Steganography is a tool used to hide information inside media files such as images, audio, and video. In this paper, a combined technique of cryptography and steganography is proposed for better data security. The message is first encrypted with DES, the DES keys are encrypted with RSA, and the resulting DES-RSA hybrid is embedded inside an image with the help of LSB image steganography. Results show that the technique provides stronger security. Encryption is also faster than in previous techniques, and a brute-force attack on this technique is practically infeasible. |
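The LSB embedding step of the scheme above can be sketched as follows (an illustrative reimplementation, not the authors' code; the DES/RSA encryption stage is omitted and the payload below is a stand-in for its ciphertext):

```python
import numpy as np

def embed_lsb(pixels, payload):
    """Hide payload bytes in the least-significant bits of pixel values."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the cover is untouched
    assert bits.size <= flat.size, "cover image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    """Read n_bytes back out of the stego image's least-significant bits."""
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
secret = b"ciphertext"  # in the paper's scheme, the DES-RSA output
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)))  # round-trips the hidden bytes
```

Since only the lowest bit of each pixel changes, the stego image differs from the cover by at most one intensity level per pixel, which is the visual-imperceptibility argument behind LSB steganography.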
Gravel as a Resistant Rock | In a recent number of the Journal of Geology, Mr. John L. Rich presents the thesis that "gravel, in its relation to the agencies of denudation, is under certain geological conditions a highly resistant rock. To these agencies it will, in general, offer greater resistance than ordinary igneous or sedimentary rocks, with a few possible exceptions." The writer is in accord with this general thesis; but the specific evidence presented in arriving at this conclusion is not wholly accurate, and certain deductions affecting the physiographic history of the region should in the writer's opinion be interpreted differently. Mr. Rich divides his paper into three parts: (1) to point out the theoretical reasons for the resistant nature of gravel deposits; (2) to show, from an actual occurrence in nature, that the gravels do behave as the theoretical considerations would lead us to expect; and (3) to sketch by way of suggestion the normal course of development of topography in a region where alluvial fans of coarse material are accumulating at the base of mountains. It is especially with No. 2 that the following has to deal. It is shown that a plain-like lowland lies between the northern edge of the gravel deposits and the high mountains to the north. This lowland is in places 100 feet below the base of the gravel south of it, and is crossed by the mountain streams flowing southward out upon the desert deposits. To explain this feature Mr. Rich presents three alternative hypotheses: (1) The gravels may have been removed by erosion |
Entrepreneurial orientation, learning orientation, and innovation in small and medium enterprises | From the resource-based perspective, entrepreneurial orientation and market orientation are two separate but complementary strategic orientations that emphasize a business philosophy and behavior of proactively scanning the industrial environment, including market information and competitors' strategies, in order to innovate and respond to customers' needs in a timely manner. Empirical studies have separately discussed entrepreneurial orientation and market orientation in relation to firm-level innovation performance. However, limited research has simultaneously examined the direct effects of entrepreneurial orientation and market orientation on innovation performance, especially individual-level innovation performance. Scholars have suggested that future research examine strategic human resource practices to explore whether organizational factors enhance or diminish the effect of entrepreneurial orientation on innovation. Both entrepreneurial orientation and market orientation still require organizational learning practices to facilitate higher-order learning and innovation. Although scholars are interested in whether additional moderator variables simultaneously affect the impact of market orientation and entrepreneurial orientation on firm performance, empirical studies remain limited. An organization with a high degree of entrepreneurial orientation and market orientation still requires a learning orientation mechanism to create an environment in which mutually beneficial relationships between employees and their organizations facilitate learning and innovation. Therefore, a learning orientation may enable an organization to innovate effectively. 
As a result, the overall purpose of this study is to assess the influence of learning orientation on the relationships between entrepreneurial orientation, market orientation, and an individual-level job-related performance variable, employee innovative behavior. |
Study of wheel slip and traction forces in differential drive robots and slip avoidance control strategy | The effect of wheel slip in differential drive robots is investigated in this paper. We consider differential drive robots with two driven wheels and ball-type caster wheels that are used to provide balance and support to the mobile robot. The limiting values of traction forces for slip and no slip conditions are dependent on wheel-ground kinetic and static friction coefficients. The traction forces are used to determine the fraction of input torque that provides robot motion and this is used to calculate the actual position of the robot under slip conditions. The traction forces under no slip conditions are used to determine the limiting value of the wheel torque above which the wheel slips. This limiting torque value is used to set a saturation limit for the input torque to avoid slip. Simulations are conducted to evaluate the behavior of the robot during slip and no slip conditions. Experiments are conducted under similar slip and no slip conditions using a custom built differential drive mobile robot with one caster wheel to validate the simulations. Experiments are also conducted with the torque limiting strategy. Results from model simulations and experiments are presented and discussed. |
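The torque saturation strategy described above reduces to a one-line friction bound; as a sketch (the robot mass, static friction coefficient, wheel radius, and equal load sharing between the two driven wheels are illustrative assumptions, not values from the paper), the no-slip limit is the static traction force times the wheel radius:

```python
# Limiting wheel torque above which a driven wheel slips:
# tau_max = mu_s * N * r, with N the normal load on the wheel.

def limiting_torque(mass_kg, mu_static, wheel_radius_m, n_driven_wheels=2, g=9.81):
    """Max torque per driven wheel before slip, assuming the robot's
    weight is shared equally by the driven wheels (caster load ignored)."""
    normal_force = mass_kg * g / n_driven_wheels   # load per driven wheel [N]
    traction_limit = mu_static * normal_force      # max static friction [N]
    return traction_limit * wheel_radius_m         # torque bound [N*m]

# Example: 10 kg robot, rubber on concrete (mu_s ~ 0.8), 5 cm wheel radius.
tau_max = limiting_torque(10.0, 0.8, 0.05)
print(tau_max)  # commanded torque would be saturated at this value
```

In the paper's strategy, this value sets the saturation limit on the input torque controller, so the drive never demands more traction than static friction can supply.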
Efficient Learning of Selective Bayesian Network Classifiers | In this paper, we empirically evaluate algorithms for learning four Bayesian network (BN) classifiers: Naïve-Bayes, tree augmented Naïve-Bayes (TANs), BN augmented Naïve-Bayes (BANs) and general BNs (GBNs), where the GBNs and BANs are learned using two variants of a conditional independence based BN-learning algorithm. Experimental results show the GBNs and BANs learned using the proposed learning algorithms are competitive with (or superior to) the best classifiers based on both Bayesian networks and other formalisms, and that the computational time for learning and using these classifiers is relatively small. These results argue that BN classifiers deserve more attention in the machine learning and data mining communities. |
Seeing Small Faces from Robust Anchor's Perspective | This paper introduces a novel anchor design principle to support anchor-based face detection for superior scale-invariant performance, especially on tiny faces. To achieve this, we explicitly address the problem that anchor-based detectors drop performance drastically on faces with tiny sizes, e.g. less than 16 × 16 pixels. In this paper, we investigate why this is the case. We discover that current anchor design cannot guarantee high overlaps between tiny faces and anchor boxes, which increases the difficulty of training. The new Expected Max Overlapping (EMO) score is proposed which can theoretically explain the low overlapping issue and inspire several effective strategies of new anchor design leading to higher face overlaps, including anchor stride reduction with new network architectures, extra shifted anchors, and stochastic face shifting. Comprehensive experiments show that our proposed method significantly outperforms the baseline anchor-based detector, while consistently achieving state-of-the-art results on challenging face detection datasets with competitive runtime speed. |
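The low-overlap issue for tiny faces can be illustrated numerically. The brute-force sketch below (not the paper's closed-form EMO score) computes the best IoU a 16 × 16 face can achieve against square anchors tiled at a given stride, showing why reducing the stride raises the achievable overlap:

```python
import numpy as np

def iou(b1, b2):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def best_anchor_iou(face, anchor_size, stride, img=640):
    """Max IoU between a face box and square anchors tiled at `stride`."""
    best = 0.0
    for cy in np.arange(stride / 2, img, stride):
        for cx in np.arange(stride / 2, img, stride):
            a = (cx - anchor_size / 2, cy - anchor_size / 2,
                 cx + anchor_size / 2, cy + anchor_size / 2)
            best = max(best, iou(face, a))
    return best

face = (100.0, 100.0, 116.0, 116.0)      # a 16x16 "tiny" face
print(best_anchor_iou(face, 16, 16))     # coarse stride: ~0.39 best IoU
print(best_anchor_iou(face, 16, 4))      # finer stride:  ~0.62 best IoU
```

Even the best-placed anchor at stride 16 falls below the common 0.5 matching threshold, which is the training difficulty the EMO analysis formalizes.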
Sparse solutions to linear inverse problems with multiple measurement vectors | We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single-measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms-Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS)-to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithm, and their performances are compared. |
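The multiple-measurement extension of Matching Pursuit can be sketched as a simultaneous OMP that, at each step, selects the dictionary atom most correlated with all residual columns jointly, so the recovered rows share one support. This is an illustrative reconstruction of the idea, not the paper's exact M-MP/M-FOCUSS formulation:

```python
import numpy as np

def somp(A, B, k):
    """Simultaneous OMP: recover a row-sparse X with A @ X ~= B.
    A is an (m, n) dictionary; B holds L measurement vectors as columns."""
    n = A.shape[1]
    R = B.copy()                                   # residuals, one column per vector
    support = []
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)   # joint correlation per atom
        scores[support] = 0.0                      # never reselect an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        X_s, *_ = np.linalg.lstsq(As, B, rcond=None)
        R = B - As @ X_s                           # orthogonalized residual
    X = np.zeros((n, B.shape[1]))
    X[support] = X_s
    return X, sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
X_true = np.zeros((40, 3))
X_true[[5, 17]] = rng.standard_normal((2, 3))
B = A @ X_true                                     # noiseless, shared support {5, 17}
X_hat, supp = somp(A, B, 2)
print(supp, np.allclose(A @ X_hat, B))             # exact recovery expected here
```

The joint score over all L residual columns is what exploits the shared sparsity structure; with L = 1 the loop reduces to ordinary OMP.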
Characterization of autotransplant-related thrombocytopenia by evaluation of glycocalicin and reticulated platelets | Thrombocytopoiesis of 21 multiple myeloma patients undergoing single or double transplant regimen was characterized by measuring the level of reticulated platelets and plasma glycocalicin. Since reticulated platelets are an index of thrombopoietic activity and glycocalicin plasma values are related to platelet damage and turnover, it may be possible to perform a novel type of analysis of the thrombopoietic compartment during the mobilizing regimen and during transplant-related chemotherapy. Patients underwent mobilizing therapy and first transplant. Some randomized patients also underwent a second transplant with mobilized peripheral blood stem cells. The results show that the percentage of reticulated platelets decreased after therapy and then gradually increased in the recovery phase either during first or second transplant. By contrast, the percentage of reticulated platelets increased until day +8 and then gradually decreased during the mobilizing regimen. The glycocalicin index (glycocalicin plasma value normalized for the individual platelet count) increased significantly both during the course of mobilization and after transplant-related chemotherapy when the platelet number was at its nadir. However, the glycocalicin index was more elevated after transplant-related chemotherapy than after the mobilizing regimen. Our findings suggest that chemotherapy-related thrombocytopenia may be due to a dual mechanism: thrombocytopenia results from decreased platelet production in addition to increased platelet damage and possible destruction. |
Sensors on speaking terms: Schedule-based medium access control protocols for wireless sensor networks | Wireless sensor networks make the previously unobservable observable. The basic idea behind these networks is straightforward: all wires are cut in traditional sensing systems, and the sensors are equipped with batteries and radios to virtually restore the cut wires. The resulting sensors can be placed close to the phenomenon that needs monitoring, without imposing high wiring costs and careful engineering of device positions. Sensors can even be attached to mobile objects. As a result, monitoring can be done more efficiently and at higher sensing resolution compared to traditional sensor systems. Yet these are not the only advantages of wireless sensor networks. We, as users of the wireless sensor network, are not interested in constant streams of sensor readings, but in the interpretation of those readings. This is exactly the big potential of wireless sensor networks. Due to intelligence added locally to the wireless sensors, and collaboration between the individual sensors, the network by itself can carry out complex observation tasks. To achieve these goals, wireless sensors must be "on speaking terms", i.e., the nodes must be able to exchange information. The aim of this thesis is to provide a set of communication rules, commonly known as a medium access control (MAC) protocol, that organizes efficient communication through a shared wireless medium and is well suited to the inherent characteristics of wireless sensor networks. The MAC protocol determines when a wireless sensor transmits its sensor information and thereby controls one of the most energy-consuming components in the wireless sensor hardware architecture. The lifetime of the battery-operated wireless sensors is thus heavily dependent on the efficiency of the MAC protocol.
This thesis provides a self-organizing, schedule-based medium access approach. In general, this class of medium access is recognized for its energy efficiency and robustness against high peak loads, both of which wireless sensor networks require. Its drawbacks are message delay, over-provisioning, and the required time synchronization between wireless sensors. In our approach, wireless sensors analyse local medium usage and autonomously choose when to access the medium, i.e., transmit. The analysis also takes the medium usage of second-order neighbours into account. Therefore, our approach does not suffer from the well-known hidden terminal problem, and additionally, the wireless medium is spatially reused without energy-wasting conflicts. The medium access schedule of wireless sensors is adjusted when conflicting schedules exist, e.g., due to node mobility. The correctness of the general medium access control protocol is |
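The core slot-selection idea, choosing a transmit time unused anywhere within the two-hop neighbourhood, can be sketched in a few lines (a toy centralized illustration; the thesis protocol is distributed and renegotiates schedules dynamically):

```python
def choose_slot(num_slots, two_hop_schedules):
    """Pick a transmit slot unused within the two-hop neighbourhood.
    Excluding second-order neighbours' slots avoids hidden-terminal
    collisions while still allowing spatial reuse farther away."""
    occupied = set(two_hop_schedules.values())
    for slot in range(num_slots):
        if slot not in occupied:
            return slot
    return None  # frame saturated; a real protocol must resolve this

# slots already claimed by first- and second-order neighbours
print(choose_slot(8, {"n1": 0, "n2": 1, "n3": 3}))  # -> 2
```

Nodes outside the two-hop neighbourhood may reuse slot 2 simultaneously, which is the conflict-free spatial reuse the abstract refers to.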
Long-term safety and efficacy of fluticasone/formoterol combination therapy in asthma. | BACKGROUND
The long-term safety of a new asthma therapy combining fluticasone propionate and formoterol fumarate (fluticasone/formoterol; flutiform(®)) was assessed.
METHOD
In an open-label study, mild to moderate-severe asthmatics (≥12 years; N=472) were treated twice daily with fluticasone/formoterol 100/10 μg (n=224) or 250/10 μg (n=248) for 6 months (n=256) or 12 months (n=216). The primary and secondary objectives were the long-term safety and efficacy of fluticasone/formoterol, respectively.
RESULTS
In total, 413 (87.5%) patients completed the study (of which 175 participated for 12 months). Adverse events (AEs) were reported by 174 patients (36.9%): 67 (29.9%) in the 100/10 μg group and 107 (43.1%) in the 250/10 μg group. The most common AEs (>2%) were nasopharyngitis, dyspnea, pharyngitis, and headache; the majority were mild to moderate. Only 18 (3.8%) patients reported AEs considered study drug-related. Five patients per group experienced 12 serious AEs; none was study medication-related. Asthma exacerbations were reported by 53 patients (11.2%): 46 mild to moderate and nine severe. Clinical laboratory tests and vital signs showed no abnormal trends or clinically important or dose-response-related changes. The efficacy analyses showed statistically significant improvements at every time point throughout the study period at both doses.
CONCLUSION
Fluticasone/formoterol had a good safety and efficacy profile over the 6- and 12-month study periods. |
Value-Decomposition Networks For Cooperative Multi-Agent Learning | We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the “lazy agent” problem, which arises due to partial observability. We address these problems by training individual agents with a novel value-decomposition network architecture, which learns to decompose the team value function into agent-wise value functions. |
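The additive decomposition is what makes decentralized greedy action selection consistent with the joint optimum: if the team value is a sum of per-agent values, each agent maximizing its own term maximizes the sum. A toy numpy illustration (lookup tables stand in for the learned per-agent value networks):

```python
import numpy as np

# Toy value decomposition: Q_total(a1, a2) = Q1(a1) + Q2(a2).
Q1 = np.array([0.2, 1.5, 0.1])   # agent 1's values over its 3 actions
Q2 = np.array([0.9, 0.3])        # agent 2's values over its 2 actions

# Joint value for every action pair, via broadcasting
Q_tot = Q1[:, None] + Q2[None, :]

# Each agent greedily picks its own best action...
a1, a2 = int(np.argmax(Q1)), int(np.argmax(Q2))
# ...which matches the argmax over the joint table exactly,
# because the decomposition is additive.
joint = np.unravel_index(np.argmax(Q_tot), Q_tot.shape)
print((a1, a2) == tuple(int(i) for i in joint))  # -> True
```

In the actual architecture the tables are neural networks conditioned on each agent's local observation history, and only the summed value is trained against the single joint reward.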
Media as social partners: the social nature of young children's learning from screen media. | Television has become a nearly ubiquitous feature in children's cultural landscape. A review of the research into young children's learning from television indicates that the likelihood that children will learn from screen media is influenced by their developing social relationships with on-screen characters, as much as by their developing perception of the screen and their symbolic understanding and comprehension of information presented on screen. Considering the circumstances in which children under 6 years learn from screen media can inform teachers, parents, and researchers about the important nature of social interaction in early learning and development. The findings reviewed in this article suggest the social nature of learning, even learning from screen media. |
Multi-Feature Based Emotion Recognition for Video Clips | In this paper, we present our latest progress in Emotion Recognition techniques, which combines acoustic features and facial features in both non-temporal and temporal mode. This paper presents the details of our techniques used in the Audio-Video Emotion Recognition subtask in the 2018 Emotion Recognition in the Wild (EmotiW) Challenge. After the multimodal results fusion, our final accuracy in Acted Facial Expression in Wild (AFEW) test dataset achieves 61.87%, which is 1.53% higher than the best results last year. Such improvements prove the effectiveness of our methods. |
Broad expertise retrieval in sparse data environments | Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings. |
Effectiveness of sports massage for recovery of skeletal muscle from strenuous exercise. | OBJECTIVE
Sport massage, a manual therapy for muscle and soft tissue pain and weakness, is a popular and widely used modality for recovery after intense exercise. Our objective is to determine the effectiveness of sport massage for improving recovery after strenuous exercise.
DATA SOURCES
We searched MEDLINE, EMBASE, and CINAHL using all current and historical names for sport massage. Reference sections of included articles were scanned to identify additional relevant articles.
STUDY SELECTION
Study inclusion criteria required that subjects (1) were humans, (2) performed strenuous exercise, (3) received massage, and (4) were assessed for muscle recovery and performance. Ultimately, 27 studies met inclusion criteria.
DATA EXTRACTION
Eligible studies were reviewed, and data were extracted by the senior author (TMB). The main outcomes extracted were type and timing of massage and outcome measures studied.
DATA SYNTHESIS
Data from 17 case series revealed inconsistent results. Most studies evaluating post-exercise function suggest that massage is not effective, whereas studies that also evaluated the symptoms of delayed-onset muscle soreness (DOMS) did show some benefit. Data from 10 randomized controlled trials (RCTs) do, however, provide moderate evidence for the efficacy of massage therapy. The search identified no trend between type and timing of massage and any specific outcome measures investigated.
CONCLUSIONS
Case series provide little support for the use of massage to aid muscle recovery or performance after intense exercise. In contrast, RCTs provide moderate data supporting its use to facilitate recovery from repetitive muscular contractions. Further investigation using standardized protocols measuring similar outcome variables is necessary to more conclusively determine the efficacy of sport massage and the optimal strategy for its implementation to enhance recovery following intense exercise. |
Qualities of exemplary nurse leaders: perspectives of frontline nurses. | AIM
This paper reports on a study that looked at the characteristics of exemplary nurse leaders in times of change from the perspective of frontline nurses.
BACKGROUND
Large-scale changes in the health care system and their associated challenges have highlighted the need for strong leadership at the front line.
METHODS
In-depth personal interviews with open-ended questions were the primary means of data collection. The study identified and explored six frontline nurses' perceptions of the qualities of nursing leaders through qualitative content analysis. This study was validated by results from the current literature.
RESULTS
The frontline nurses described several common characteristics of exemplary nurse leaders, including: a passion for nursing; a sense of optimism; the ability to form personal connections with their staff; excellent role modelling and mentorship; and the ability to manage crisis while guided by a set of moral principles. All of these characteristics pervade the current literature regarding frontline nurses' perspectives on nurse leaders.
CONCLUSION
This study identified characteristics of nurse leaders that allowed them to effectively assist and support frontline nurses in the clinical setting.
IMPLICATIONS FOR NURSING MANAGEMENT
The findings are of significance to leaders in the health care system and in the nursing profession who are in a position to foster development of leaders to mentor and encourage frontline nurses. |
Understanding and Designing around Users' Interaction with Hidden Algorithms in Sociotechnical Systems | While today many online platforms employ complex algorithms to curate content, these algorithms are rarely highlighted in interfaces, preventing users from understanding these algorithms' operation or even existence. Here, we study how knowledgeable users are about these algorithms, showing that providing insight to users about an algorithm's existence or functionality through design facilitates rapid processing of the underlying algorithm models and increases users' engagement with the system. We also study algorithmic systems that might introduce bias to users' online experience to gain insight into users' behavior around biased algorithms. We will leverage these insights to build an algorithm-aware design that shapes a more informed interaction between users and algorithmic systems. |
Friend or Foe?: Your Wearable Devices Reveal Your Personal PIN | The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people access key-based security systems. Existing methods of obtaining such secret information rely on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enable attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 5000 key entry traces collected from 20 adults for key-based security systems (i.e. ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which to our knowledge, is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information. |
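The distance-estimation step rests on double-integrating gravity-free linear acceleration between consecutive key presses. A simplified numpy sketch of that step (real systems must also correct sensor drift and hand pose, which this omits):

```python
import numpy as np

def displacement(acc, dt):
    """Double-integrate linear acceleration (gravity already removed)
    to estimate hand displacement between two key presses."""
    vel = np.cumsum(acc) * dt     # acceleration -> velocity
    pos = np.cumsum(vel) * dt     # velocity -> position
    return pos[-1]

dt = 0.01                         # 100 Hz sensor samples
# symmetric accelerate/decelerate move: velocity returns to zero
acc = np.concatenate([np.full(50, 2.0), np.full(50, -2.0)])  # m/s^2
print(round(displacement(acc, dt), 3))  # -> 0.5 (metres of hand travel)
```

Chaining such per-keystroke displacement and direction estimates is what lets the backward-inference stage rank candidate PIN sequences against the keypad geometry.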
An Empirical Analysis of the Influence of Fault Space on Search-Based Automated Program Repair | Automated program repair (APR) has attracted great research attention, and various techniques have been proposed. Search-based APR is one of the most important categories among these techniques. Existing research focuses on the design of effective mutation operators and search algorithms to better find the correct patch. Despite various efforts, the effectiveness of these techniques is still limited by the search space explosion problem. One of the key factors contributing to this problem is the quality of fault spaces, as reported by existing studies. This motivates us to study the importance of the fault space to the success of finding a correct patch. Our empirical study aims to answer three questions. Does the fault space significantly correlate with the performance of search-based APR? If so, are there any indicative measurements to approximate the accuracy of the fault space before applying expensive APR techniques? Are there any automatic methods that can improve the accuracy of the fault space? We observe that the accuracy of the fault space affects the effectiveness and efficiency of search-based APR techniques, e.g., the failure rate of GenProg could be as high as 60% when the real fix location is ranked lower than 10, even though the correct patch is in the search space. Besides, GenProg is able to find more correct patches, and with fewer trials, when given a fault space with higher accuracy. We also find that the negative mutation coverage, which is designed in this study to measure the capability of a test suite to kill the mutants created on the statements executed by failing tests, is the most indicative measurement to estimate the efficiency of search-based APR. Finally, we confirm that automatically generated test cases can help improve the accuracy of fault spaces, and further improve the performance of search-based APR techniques.
Bangla automatic number plate recognition system using artificial neural network | A Bangla automatic number plate recognition (ANPR) system using an artificial neural network, for number plates inscribed in Bangla, is presented in this paper. The system splits into three major parts: number plate detection, plate character segmentation, and Bangla character recognition. In number plate detection, many problems arise, such as vehicle motion, complex backgrounds, and distance changes; for this reason an edge analysis method is applied. As a Bangla number plate consists of two words and seven characters, detected number plates are segmented into individual words and characters by using horizontal and vertical projection analysis. After that, a robust feature extraction method is employed to extract from each Bangla word and character information that is insensitive to rotation, scaling, and size variations. Finally, the character recognition system takes this information as input to recognize Bangla characters and words. The Bangla character recognition is implemented using a multilayer feed-forward network. According to the experimental results, the proposed system performs well on different vehicle images, even under severe image conditions. |
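The horizontal/vertical projection analysis used for segmentation can be sketched simply: column-wise ink counts of the binarized plate separate characters at blank gaps (a toy image; the paper's preprocessing and thresholds are omitted):

```python
import numpy as np

def segment_columns(binary):
    """Split a binarized plate image into character spans using its
    vertical projection profile (column-wise counts of ink pixels)."""
    profile = binary.sum(axis=0)
    spans, start = [], None
    for x, count in enumerate(profile):
        if count > 0 and start is None:
            start = x                        # entering a character
        elif count == 0 and start is not None:
            spans.append((start, x))         # leaving a character
            start = None
    if start is not None:                    # character touches right edge
        spans.append((start, binary.shape[1]))
    return spans

# toy plate: two "characters" separated by blank columns
img = np.zeros((5, 9), dtype=int)
img[:, 1:3] = 1                              # character 1: columns 1-2
img[:, 5:8] = 1                              # character 2: columns 5-7
print(segment_columns(img))                  # -> [(1, 3), (5, 8)]
```

Applying the same idea to row sums (the horizontal projection) separates the plate's two word lines before the columns are split into characters.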
Translating evidence-based decision making into practice: EBDM concepts and finding the evidence. | This is the first of 2 articles that focuses on strategies that can be used to integrate an evidence-based decision making [EBDM] approach into practice. The articles will focus on EBDM methodology and enhancing skills, including how to find valid evidence to answer clinical questions, critically appraise the evidence found and determine if it applies. In addition, online resources will be identified to supplement information presented in each article. The purpose of this article is to define evidence-based decision making and discuss skills necessary for practitioners to efficiently adopt EBDM. It will provide a guide for finding evidence to answer a clinical question using PubMed's specialized searching tools under Clinical Queries. |
DIAG-NRE: A Deep Pattern Diagnosis Framework for Distant Supervision Neural Relation Extraction | Modern neural network models have achieved the state-of-the-art performance on relation extraction (RE) tasks. Although distant supervision (DS) can automatically generate training labels for RE, the effectiveness of DS highly depends on datasets and relation types, and sometimes it may introduce large labeling noises. In this paper, we propose a deep pattern diagnosis framework, DIAG-NRE, that aims to diagnose and improve neural relation extraction (NRE) models trained on DS-generated data. DIAG-NRE includes three stages: (1) The deep pattern extraction stage employs reinforcement learning to extract regular-expression-style patterns from NRE models. (2) The pattern refinement stage builds a pattern hierarchy to find the most representative patterns and lets human reviewers evaluate them quantitatively by annotating a certain number of pattern-matched examples. In this way, we minimize both the number of labels to annotate and the difficulty of writing heuristic patterns. (3) The weak label fusion stage fuses multiple weak label sources, including DS and refined patterns, to produce noise-reduced labels that can train a better NRE model. To demonstrate the broad applicability of DIAG-NRE, we use it to diagnose 14 relation types of two public datasets with one simple hyperparameter configuration. We observe different noise behaviors and obtain significant F1 improvements on all relation types suffering from large labeling noises. |
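The weak label fusion stage combines distant-supervision labels with refined-pattern labels into one denoised label per example. A minimal stand-in using majority vote with abstentions (the paper's actual fusion model is more sophisticated; this only illustrates the stage's interface):

```python
def fuse_weak_labels(label_rows, abstain=None):
    """Fuse labels from multiple weak sources (e.g., distant supervision
    plus refined patterns) by majority vote; sources may abstain."""
    fused = []
    for row in label_rows:
        votes = [label for label in row if label is not abstain]
        fused.append(max(set(votes), key=votes.count) if votes else abstain)
    return fused

# rows: [DS label, pattern-1 label, pattern-2 label]; 1 = relation holds
rows = [[1, 1, None], [1, 0, 0], [None, None, None]]
print(fuse_weak_labels(rows))  # -> [1, 0, None]
```

The fused labels then replace the raw DS labels when retraining the NRE model, which is where the reported F1 gains come from.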
More frequent vaginal orgasm is associated with experiencing greater excitement from deep vaginal stimulation. | INTRODUCTION
Research indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.
AIMS
The aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.
METHODS
A sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.
MAIN OUTCOME MEASURES
The association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.
RESULTS
VOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.
CONCLUSIONS
The findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation. |
Large-Scale Analysis of Email Search and Organizational Strategies | Email continues to be an important form of communication as well as a way to manage tasks and archive personal information. As the volume of email grows, organizing and finding relevant email remains challenging. In this paper, we present a large-scale log analysis of the activities that people perform on email messages (accessing external information via links or attachments, responding to messages, and organizing messages), their search behavior, and their organizational practices in a popular web email client.
First, we characterize general email activities as well as activities associated with search. We find that within search sessions, people are more likely to access information and respond to messages but less likely to organize. Second, we examine the relationship between characteristics of a person's mailbox and their search and organizational practices. People with larger mailboxes tend to organize more, respond a little more, and access information less. People with larger mailboxes and folder structures search more, but the number of folders has less influence on search. Third, we extend previous work on email organization (e.g., filers vs. pilers; cleaners vs. keepers) by examining the extent to which these strategies are evident in our large-scale analysis and influence email activities and search. People who rely heavily on one organizational strategy tend to use others less. People who organize less tend to search more. Finally, we describe how these insights can influence the design of email search. |
Algorithm 572: Solution of the Helmholtz Equation for the Dirichlet Problem on General Bounded Three-Dimensional Regions [D3] | The Helmholtz equation Δu + cu = f is solved on Ω with Dirichlet data u = g on the boundary. Here Ω, a three-dimensional bounded region, c, an arbitrary real constant (positive, negative, or zero), and the functions f and g are specified by the user. The Laplace operator Δ is in Cartesian coordinates. A second-order accurate finite-difference method is used to discretize the Helmholtz equation. The resulting linear system of equations is reduced to a capacitance matrix equation that is solved approximately by a conjugate gradient method. |
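A minimal dense-matrix sketch of the second-order 7-point discretization on the unit cube (the published algorithm instead uses the capacitance-matrix/conjugate-gradient machinery to handle general regions efficiently; the grid size and c value here are illustrative):

```python
import numpy as np

def solve_helmholtz(n, c, f, g):
    """7-point finite-difference discretization of  Lap(u) + c*u = f  on the
    unit cube, Dirichlet data u = g on the boundary, n interior points/axis."""
    h = 1.0 / (n + 1)
    pts = [(i, j, k) for i in range(1, n + 1)
           for j in range(1, n + 1) for k in range(1, n + 1)]
    idx = {p: m for m, p in enumerate(pts)}
    N = len(pts)
    A = np.zeros((N, N))
    b = np.zeros(N)
    for (i, j, k), m in idx.items():
        A[m, m] = -6.0 / h**2 + c               # stencil centre + c*u term
        b[m] = f(i * h, j * h, k * h)
        for q in ((i+1, j, k), (i-1, j, k), (i, j+1, k),
                  (i, j-1, k), (i, j, k+1), (i, j, k-1)):
            if q in idx:
                A[m, idx[q]] = 1.0 / h**2       # interior neighbour
            else:                                # boundary: move g to RHS
                b[m] -= g(q[0] * h, q[1] * h, q[2] * h) / h**2
    return pts, np.linalg.solve(A, b), h

# Check with u(x,y,z) = x*y*z (harmonic), so f = c*u and the scheme is exact
c = -5.0
u = lambda x, y, z: x * y * z
pts, uh, h = solve_helmholtz(4, c, lambda x, y, z: c * u(x, y, z), u)
err = max(abs(uh[m] - u(i * h, j * h, k * h))
          for m, (i, j, k) in enumerate(pts))
print(err < 1e-10)  # -> True
```

Replacing `np.linalg.solve` with an iterative method applied to the reduced capacitance-matrix equation is precisely the step that makes the approach practical on general bounded regions.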
Designing for persistent audio conversations in the enterprise | Social media websites like flickr and del.icio.us enable collaboration by allowing users to easily share content on the web through tagging. To provide a similar advantage to the enterprise, we have designed a tagging system for audio conversations. We are developing telephonic interfaces, where participants of a spoken conversation can opt to archive and share it. We have also developed a web-based visual interface that enables annotation, search, and retrieval of archived conversations. In this interface, we have focused on visualizing relationships between users, tags and conversations, which will enable efficient searching and browsing, and more importantly, provide contextualized interaction histories. A pilot user study, conducted using simulated data, showed that our social network visualization was effective. |
Experiments with computer vision methods for hand detection | The goal of a fall detection system is to automatically detect cases where a human falls and may have been injured. A natural application of such a system is in home monitoring of patients and elderly persons, so as to automatically alert relatives and/or authorities in case of an injury caused by a fall. This paper describes experiments with three computer vision methods for fall detection in a simulated home environment. The first method makes a decision based on a single frame, simply based on the vertical position of the image centroid of the person. The second method makes a threshold-based decision based on the last few frames, by considering the number of frames during which the person has been falling, the magnitude (in pixels) of the fall, and the maximum velocity of the fall. The third method is a statistical method that makes a decision based on the same features as the previous two methods, but using probabilistic models as opposed to thresholds for making the decision. Preliminary experimental results are promising, with the statistical method attaining relatively high accuracy in detecting falls while at the same time producing a relatively small number of false positives. |
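The second, threshold-based method can be sketched directly from the three features named above: frames spent falling, drop magnitude, and maximum velocity of the centroid. The thresholds below are illustrative, not the paper's tuned values:

```python
def detect_fall(centroid_ys, fps, min_frames=5, min_drop=40, min_speed=120):
    """Threshold test over the recent centroid track (pixels; image y
    grows downward). Flags a fall when the centroid has been descending
    long enough, far enough, and fast enough."""
    falling = 0                                  # trailing run of descents
    for prev, cur in zip(centroid_ys, centroid_ys[1:]):
        falling = falling + 1 if cur > prev else 0
    drop = centroid_ys[-1] - min(centroid_ys)    # total descent (pixels)
    speeds = [(b - a) * fps for a, b in zip(centroid_ys, centroid_ys[1:])]
    return (falling >= min_frames and drop >= min_drop
            and max(speeds) >= min_speed)

track = [100, 102, 110, 125, 145, 170, 200]      # rapid descent over 7 frames
print(detect_fall(track, fps=30))                # -> True
```

The statistical third method replaces these hard thresholds with probabilistic models over the same features, which is what reduces the false-positive rate.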
Gated Recurrent Unit (GRU) for Emotion Classification from Noisy Speech | Despite the enormous interest in emotion classification from speech, the impact of noise on emotion classification is not well understood. This is important because, due to the tremendous advancement of smartphone technology, the smartphone can be a powerful medium for speech emotion recognition outside the laboratory, in natural environments where background noise is likely to be incorporated into the speech. We capitalize on the current breakthrough of the Recurrent Neural Network (RNN) and seek to investigate its performance for emotion classification from noisy speech. We particularly focus on the recently proposed Gated Recurrent Unit (GRU), which is yet to be explored for emotion recognition from speech. Experiments conducted with speech compounded with eight different types of noise reveal that GRU incurs an 18.16% smaller run-time while performing quite comparably to the Long Short-Term Memory (LSTM), which is the most popular recurrent neural network proposed to date. This result is promising for embedded platforms in general and will motivate further studies to utilize GRU to its full potential for emotion recognition on smartphones. |
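For reference, a single GRU step follows the standard update/reset-gate equations (the same cell architecture evaluated above); this numpy sketch uses arbitrary dimensions and random weights purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: x is the current input, h the previous hidden state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde        # blend old state and candidate

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# Input-to-hidden matrices are (d_h, d_in); hidden-to-hidden are (d_h, d_h).
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0
          else rng.standard_normal((d_h, d_h)) for i in range(6)]
h = np.zeros(d_h)
for _ in range(5):                            # run a short random sequence
    h = gru_cell(rng.standard_normal(d_in), h, *params)
```

Relative to an LSTM cell, the GRU drops the separate cell state and output gate, which is the plausible source of run-time savings such as the 18.16% reported above.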
Facebook groups as LMS: A case study | This paper describes a pilot study in using Facebook as an alternative to a learning management system (LMS). The paper reviews the current research on the use of Facebook in academia and analyzes the differences between a Facebook group and a regular LMS. The paper reports on a precedent-setting attempt to use a Facebook group as a course Web site, serving as a platform for delivering content and maintaining interactions among the students and between the students and the instructor. The paper presents findings from the students’ self-assessments and reflections on their experience. The students expressed satisfaction with learning in Facebook and willingness to continue using these groups in future courses. |
Screening and Follow-Up Monitoring for Substance Use in Primary Care: An Exploration of Rural–Urban Variations | Rates of substance use in rural areas are close to those of urban areas. While recent efforts have emphasized integrated care as a promising model for addressing workforce shortages in providing behavioral health services to those living in medically underserved regions, little is known about how substance use problems are addressed in rural primary care settings. To examine rural–urban variations in screening and monitoring primary care-based patients for substance use problems in a state-wide mental health integration program. This was an observational study using a patient registry. The study included adult enrollees (n = 15,843) with a mental disorder from 133 participating community health clinics. We measured whether a standardized substance use instrument was used to screen patients at treatment entry and to monitor symptoms at follow-up visits. While on average 73.6 % of patients were screened for substance use, follow-up on substance use problems after initial screening was low (41.4 %); clinics in small/isolated rural settings had the lowest rate (13.6 %). Patients who were treated for a mental disorder or substance abuse in the past and who showed greater psychiatric complexities were more likely to receive a screening, whereas patients of small, isolated rural clinics and those traveling longer distances to the care facility were least likely to receive follow-up monitoring for their substance use problems. Despite the prevalent substance misuse among patients with mental disorders, opportunities to screen this high-risk population for substance use and provide a timely follow-up for those identified as at risk remained overlooked in both rural and urban areas.
Rural residents continue to bear a disproportionate burden of substance use problems, with rural–urban disparities found to be most salient in providing the continuum of services for patients with substance use problems in primary care. |
Effects of unilateral pedunculopontine stimulation on electromyographic activation patterns during gait in individual patients with Parkinson’s disease | In Parkinson’s disease (PD), the effects of deep brain stimulation of the pedunculopontine nucleus (PPTg-DBS) on gait have been the object of international debate. Some evidence has demonstrated that, in the late swing-early stance phase of the gait cycle, reduced surface electromyographic (sEMG) activation of the tibialis anterior (TA) is linked to the striatal dopamine deficiency in PD patients. In the present study we report preliminary results on the effect of PPTg-DBS on electromyographic patterns during gait in individual PD patients. To evaluate the sEMG amplitude of the TA, the root mean square (RMS) of the TA burst in late swing-early stance phase (RMS-A) was normalized as a percent of the RMS of the TA burst in late stance-early swing (RMS-B). We studied three male patients in the following conditions: on PPTg-DBS/on l-dopa, on PPTg-DBS/off l-dopa, off PPTg-DBS/on l-dopa, and off PPTg-DBS/off l-dopa. For each assessment the UPDRS III was filled in. We observed no difference between on PPTg-DBS/off l-dopa and off PPTg-DBS/off l-dopa in UPDRS III scores. In off PPTg-DBS/off l-dopa, patient A (right implant) showed absence of the right and left RMS-A, respectively, in 80% and 83% of gait cycles. Patient B (right implant) showed absence of the right RMS-A in 86% of cycles. RMS-A of patient C (left implant) was bilaterally normal. In on PPTg-DBS/off l-dopa, no patient showed reduced RMS-A. Despite the very low number of subjects evaluated, our observations suggest that the PPTg plays a role in modulating the TA activation pattern during the steady state of gait. |
A Planar Feeding Technology Using Phase-and-Amplitude-Corrected SIW Horn and Its Application | Just as traditional horn antennas can be used as feeds for reflector antenna systems, substrate integrated waveguide (SIW) horns can be used as feeds for planar antennas and arrays. In this letter, a phase-and-amplitude-corrected SIW horn realized by metal-via arrays is presented as a planar feeding structure. The compact SIW horn is applied to feed two 1×8 antipodal linearly tapered slot antenna (ALTSA) arrays, forming sum and difference beams at X-band. The measured gain of the sum beam is 12.84 dBi, the half-power beamwidth is 18.6°, the front-to-back ratio (FTBR) is 15.07 dB, and the sidelobe level is -20.79 dB at 10.1 GHz. The null depth of the difference beam is -44.24 dB. Good agreement between the simulated and measured results is obtained. |
Backprop Evolution | The back-propagation algorithm is the cornerstone of deep learning. Despite its importance, few variations of the algorithm have been attempted. This work presents an approach to discover new variations of the back-propagation equation. We use a domain-specific language to describe update equations as a list of primitive functions. An evolution-based method is used to discover new propagation rules that maximize the generalization performance after a few epochs of training. We find several update equations that can train faster than standard back-propagation given short training times, and that perform similarly to standard back-propagation at convergence. |
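The domain-specific-language idea can be illustrated directly: an update rule is just a list of primitive functions applied to the backward signal, so candidate rules are points in a discrete search space. The primitive set here is a hypothetical stand-in, not the paper's actual DSL.

```python
import numpy as np

# Primitive functions that can appear in a discovered propagation rule.
# This primitive set is an illustrative guess, not the paper's actual DSL.
PRIMITIVES = {
    "identity": lambda g: g,
    "clip": lambda g: np.clip(g, -1.0, 1.0),
    "sign_sqrt": lambda g: np.sign(g) * np.sqrt(np.abs(g)),
    "normalize": lambda g: g / (np.linalg.norm(g) + 1e-8),
}

def apply_rule(rule, grad):
    """A rule is a list of primitive names applied left to right."""
    for name in rule:
        grad = PRIMITIVES[name](grad)
    return grad

# Standard back-propagation corresponds to the identity rule; a candidate
# variant rescales and normalizes the backward signal.
g = np.array([4.0, -9.0, 0.25])
standard = apply_rule(["identity"], g)
variant = apply_rule(["sign_sqrt", "normalize"], g)
```

An evolutionary search would then mutate and recombine such lists, scoring each rule by generalization performance after a few epochs of training.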
ConceptNet — A Practical Commonsense Reasoning Tool-Kit | ConceptNet is a freely available commonsense knowledgebase and natural-language-processing toolkit which supports many practical textual-reasoning tasks over real-world documents including topic-gisting (e.g. a news article containing the concepts, “gun,” “convenience store,” “demand money” and “make getaway” might suggest the topics “robbery” and “crime”), affect-sensing (e.g. this email is sad and angry), analogy-making (e.g. “scissors,” “razor,” “nail clipper,” and “sword” are perhaps like a “knife” because they are all “sharp,” and can be used to “cut something”), and other context-oriented inferences. The knowledgebase is a semantic network presently consisting of over 1.6 million assertions of commonsense knowledge encompassing the spatial, physical, social, temporal, and psychological aspects of everyday life. Whereas similar large-scale semantic knowledgebases like Cyc and WordNet are carefully handcrafted, ConceptNet is generated automatically from the 700,000 sentences of the Open Mind Common Sense Project – a World Wide Web based collaboration with over 14,000 authors. ConceptNet is a unique resource in that it captures a wide range of commonsense concepts and relations, such as those found in the Cyc knowledgebase, yet this knowledge is structured not as a complex and intricate logical framework, but rather as a simple, easy-to-use semantic network, like WordNet. While ConceptNet still supports many of the same applications as WordNet, such as query expansion and determining semantic similarity, its focus on concepts-rather-than-words, its more diverse relational ontology, and its emphasis on informal conceptual-connectedness over formal linguistic-rigor allow it to go beyond WordNet to make practical, context-oriented, commonsense inferences over real-world texts.
In this paper, we first give an overview of the role that commonsense knowledge plays in making sense of text, and we situate our commonsense toolkit, ConceptNet, in the literature of large-scale semantic knowledgebases; we then discuss how ConceptNet was built and how it is structured; third, we present the ConceptNet natural-language-processing engine and describe the various practical reasoning tasks that it supports; fourth, we delve into a more detailed quantitative and qualitative analysis of ConceptNet; fifth, we review the gamut of real-world applications which researchers have built using the ConceptNet toolkit; we conclude by reflecting back on the Big Picture. INTRODUCTION In today’s digital age, text is the primary medium of representing and transmitting information, as evidenced by the pervasiveness of emails, instant messages, documents, weblogs, news articles, homepages, and printed materials. Our lives are now saturated with textual information, and there is an increasing urgency to develop technology to help us manage and make sense of the resulting information overload. While keyword-based and statistical approaches have enjoyed some success in assisting information retrieval, data mining, and natural language processing (NLP) systems, there is a growing recognition that such approaches deliver too shallow an understanding. To continue to make progress in textual-information management, vast amounts of semantic knowledge are needed to give our software the capacity for deeper and more meaningful understanding of text. What is Commonsense Knowledge? Of the different sorts of semantic knowledge that are researched, arguably the most general and widely applicable kind is knowledge about the everyday world that is possessed by all people – what is widely called ‘commonsense knowledge’. 
While to the average person the term “common sense” is regarded as synonymous with “good judgment,” to the AI community it is used in a technical sense to refer to the millions of basic facts and understandings possessed by most people. A lemon is sour. To open a door, you must usually first turn the doorknob. If you forget someone’s birthday, they may be unhappy with you. Commonsense knowledge, thusly defined, spans a huge portion of human experience, encompassing knowledge about the spatial, physical, social, temporal, and psychological aspects of typical everyday life. Because it is assumed that every person possesses common sense, such knowledge is typically omitted from social communications, such as text. A full understanding of any text then, requires a surprising amount of common sense, which currently only people possess. It is our purpose to find ways to provide such common sense to machines. |
Virtual assembly using virtual reality techniques | Virtual reality is a technology which is often regarded as a natural extension to 3D computer graphics with advanced input and output devices. This technology has only recently matured enough to warrant serious engineering applications. The integration of this new technology with software systems for engineering, design, and manufacturing will provide a new boost to the field of computer-aided engineering. One aspect of design and manufacturing which may be significantly affected by virtual reality is design for assembly. This paper presents a research effort aimed at creating a virtual assembly design environment. |
The development of reading in children who speak English as a second language. | Patterns of reading development were examined in native English-speaking (L1) children and children who spoke English as a second language (ESL). Participants were 978 (790 L1 speakers and 188 ESL speakers) Grade 2 children involved in a longitudinal study that began in kindergarten. In kindergarten and Grade 2, participants completed standardized and experimental measures including reading, spelling, phonological processing, and memory. All children received phonological awareness instruction in kindergarten and phonics instruction in Grade 1. By the end of Grade 2, the ESL speakers' reading skills were comparable to those of L1 speakers, and ESL speakers even outperformed L1 speakers on several measures. The findings demonstrate that a model of early identification and intervention for children at risk is beneficial for ESL speakers and also suggest that the effects of bilingualism on the acquisition of early reading skills are not negative and may be positive. |
Concentrating Correctly on Cybercrime Concentration | We review the cybercrime literature to draw attention to the many occasions on which authors have identified concentrations within criminal activity. We note that the existence of concentrations often leads authors to suggest that these concentrations, or ‘choke points’, are amenable to effective intervention. We then discuss the reasons that concentrations are observed: often they result from criminals being economically efficient, but there are other possible explanations. We then set out a methodology for establishing whether a specific concentration might present the opportunity for a successful intervention. We also argue that the mere possibility of a successful intervention at a specific concentration point does not necessarily mean that the incentives of the various stakeholders will be sufficiently well aligned for that intervention to occur. |
Finding Low-rank Solutions to Matrix Problems, Efficiently and Provably | A rank-r matrix X ∈ R^{m×n} can be written as a product UV^T, where U ∈ R^{m×r} and V ∈ R^{n×r}. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the set of rank-r matrices is modeled via the factorization into U and V variables. Such a heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such a parameterization reduces the number of variables and is more efficient in computational speed and memory requirements (of particular interest is the case r ≪ min{m,n}), it comes at a cost: f(UV^T) becomes a non-convex function w.r.t. U and V. In this paper, we study such parameterizations in the optimization of a generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold. |
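The factored-gradient-descent idea can be sketched for the simplest smooth, strongly convex choice f(X) = ||X − M||_F²; this is an illustrative toy with arbitrary step size and iteration count, not the paper's BFGD with its prescribed step-size rule and initialization scheme.

```python
import numpy as np

# Minimal factored gradient descent on f(X) = ||X - M||_F^2 restricted to
# rank-r X = U V^T. Step size, iterations, and small random initialization
# are illustrative choices, not the paper's BFGD prescription.
rng = np.random.default_rng(1)
m, n, r = 12, 10, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r target

U = 0.1 * rng.standard_normal((m, r))
V = 0.1 * rng.standard_normal((n, r))
eta = 0.005
for _ in range(4000):
    G = 2.0 * (U @ V.T - M)                      # gradient of f at X = U V^T
    U, V = U - eta * G @ V, V - eta * G.T @ U    # simultaneous factor updates
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Each iteration costs only matrix products against the thin factors, which is where the efficiency for r ≪ min{m,n} comes from.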
Analysis and Design of LLC Resonant Converters With Capacitor–Diode Clamp Current Limiting | This paper presents a design methodology for LLC resonant converters with capacitor-diode clamp for current limiting in overload conditions. A new fundamental harmonic approximation-based equivalent circuit model is obtained through the application of describing function techniques, by examining the fundamental behavior of the capacitor-diode clamp. An iterative procedure to determine the conduction point of the diode clamp is also given. The behavior of this type of converter is analyzed and guidelines for designing the current limiting characteristics are discussed. The characterization of a 90 W converter design using the proposed methodology is presented. The converter voltage gain and the voltage-current characteristics under different overload conditions and operating frequencies are predicted using the proposed model, and the predictions are validated against the prototype with good correlation. |
Early bond strength of two resin cements to Y-TZP ceramic using MPS or MPS/4-META silanes | For cementation of yttrium-stabilized tetragonal zirconium polycrystal (Y-TZP) ceramic frameworks, protocols of surface-conditioning methods and available cements vary, resulting in confusion among clinicians regarding selection and effects of different conditioning methods on cement adhesion. This study evaluated the effect of two silanes (3-trimethoxysilylpropylmethacrylate (MPS) and 3-trimethoxysilylpropylmethacrylate/4-methacryloyloxyethyl trimellitate anhydride methyl methacrylate (MPS/4-META)) on the adhesion of two resin-based cements (SuperBond and Panavia F 2.0) to Y-TZP ceramic and compared several protocols with those indicated by the manufacturer of each of these cements. Disks of Y-TZP ceramic (LAVA, 3M ESPE) (n = 60) were divided into six experimental groups (n = 10 per group) and treated as follows: (1) silica coating (SC) + MPS silane + SuperBond; (2) SC + MPS/4-META silane + SuperBond; (3) SC + MPS silane + Panavia F 2.0; (4) SC + MPS/4-META silane + Panavia F 2.0; (5) no conditioning + MPS/4-META silane + SuperBond (SuperBond instructions); and (6) 50-μm Al2O3 conditioning + Panavia F 2.0 (Panavia F 2.0 instructions). The specimens were subjected to shear-bond testing after water storage at 37°C for 3 months in the dark. Data were analyzed by analysis of variance and Tukey’s HSD (α = 0.05). After silica coating, the mean bond strength of SuperBond cement was not significantly different between MPS and MPS/4-META silanes (20.2 ± 3.7 and 20.9 ± 1.6 MPa, respectively), but the mean bond strength of Panavia F 2.0 was significantly higher with MPS silane (24.4 ± 5.3 MPa) than with MPS/4-META (12.3 ± 1.4 MPa) (P < 0.001). The SuperBond manufacturer’s instructions alone resulted in significantly higher bond strength (9.7 ± 3.1 MPa) than the Panavia F 2.0 manufacturer’s instructions (0 MPa) (P < 0.001).
When silica coating and silanization were used, both SuperBond and Panavia F 2.0 cements demonstrated higher bond strengths than they did when the manufacturers’ instructions were followed. With SuperBond, use of MPS or MPS/4-META silane resulted in no significant difference when the ceramic surface was silica coated, but with Panavia F 2.0, use of MPS silane resulted in a significantly higher bond strength than use of MPS/4-META. Use of chairside silica coating and silanization to condition the zirconia surface improved adhesion compared with the manufacturers’ cementation protocols for SuperBond and Panavia F 2.0 resin cements. |
ACQUISITION BASED ON OPENCV FOR CLOSE-RANGE PHOTOGRAMMETRY APPLICATIONS | Developments in cameras, computers, and algorithms for 3D reconstruction of objects from images have increased the popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between the reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, so this and similar algorithms have considerable potential in real-time applications. This research can therefore find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine, and other fields. |
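The calibration step emphasized above minimizes reprojection error under a pinhole camera model (in OpenCV-based pipelines, typically via `cv2.calibrateCamera`). The dependency-free sketch below only shows the projection model and the error being minimized; the intrinsic matrix and points are made-up values, and lens distortion is not modeled.

```python
import numpy as np

# Pinhole projection and reprojection error -- the quantity that camera
# calibration minimizes. Intrinsics and points are illustrative values.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def project(points_3d, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_3d @ K.T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

pts = np.array([[0.1, -0.2, 2.0],
                [0.0,  0.0, 1.0],
                [-0.3, 0.1, 4.0]])
observed = project(pts, K) + 0.5     # pretend detections are offset by 0.5 px
rmse = np.sqrt(np.mean(np.sum((project(pts, K) - observed) ** 2, axis=1)))
```

Calibration searches over K (and distortion coefficients, in the full model) to drive this RMSE down across many views of a known pattern.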
A bicycle can be self-stable without gyroscopic or caster effects. | A riderless bicycle can automatically steer itself so as to recover from falls. The common view is that this self-steering is caused by gyroscopic precession of the front wheel, or by the wheel contact trailing like a caster behind the steer axis. We show that neither effect is necessary for self-stability. Using linearized stability calculations as a guide, we built a bicycle with extra counter-rotating wheels (canceling the wheel spin angular momentum) and with its front-wheel ground-contact forward of the steer axis (making the trailing distance negative). When laterally disturbed from rolling straight, this bicycle automatically recovers to upright travel. Our results show that various design variables, like the front mass location and the steer axis tilt, contribute to stability in complex interacting ways. |
Clinical performance of a multivariate index assay for detecting early-stage ovarian cancer. | OBJECTIVE
We sought to analyze the effectiveness of a multivariate index assay (MIA) in identifying early-stage ovarian malignancy compared to clinical assessment, CA 125-II, and modified American Congress of Obstetricians and Gynecologists (ACOG) guidelines among women undergoing surgery for an adnexal mass.
STUDY DESIGN
Patients were recruited in 2 related prospective, multi-institutional trials involving 44 sites. All women had preoperative imaging and biomarker analysis. Preoperative biomarker values, physician assessment of ovarian cancer risk, and modified ACOG guideline risk stratification were correlated with surgical pathology.
RESULTS
A total of 1016 patients were evaluable for MIA, CA 125-II, and clinical assessment. Overall, 86 patients (8.5%) had stage I/II primary ovarian malignancy, with 70.9% having stage I disease and 29.1% having stage II disease. For all early-stage ovarian malignancies, MIA combined with clinical assessment had significantly higher sensitivity (95.3%; 95% confidence interval [CI], 88.6-98.2) compared to clinical assessment alone (68.6%; 95% CI, 58.2-77.4), CA 125-II (62.8%; 95% CI, 52.2-72.3), and modified ACOG guidelines (76.7%; 95% CI, 66.8-84.4) (P < .0001). Among the 515 premenopausal patients, the sensitivity for early-stage ovarian cancer was 89.3% (95% CI, 72.8-96.3) for MIA combined with clinical assessment, 60.7% (95% CI, 42.4-76.4) for clinical assessment alone, 35.7% (95% CI, 20.7-54.2) for CA 125-II, and 78.6% (95% CI, 60.5-89.8) for modified ACOG guidelines. Early-stage ovarian cancer in postmenopausal patients was correctly detected in 98.3% (95% CI, 90.9-99.7) of cases by MIA combined with clinical assessment, compared to 72.4% (95% CI, 59.8-82.2) for clinical assessment alone, 75.9% (95% CI, 63.5-85.0) for CA 125-II, and 75.9% (95% CI, 63.5-85.0) for modified ACOG guidelines.
CONCLUSION
MIA combined with clinical assessment demonstrated higher sensitivity for early-stage ovarian malignancy compared to clinical assessment alone, CA 125-II, and modified ACOG guidelines with consistent performance across menopausal status. |
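The sensitivities and confidence intervals reported above can be reproduced from the underlying counts with a Wilson score interval. The sketch below assumes 82 of the 86 early-stage malignancies were flagged, which is consistent with the reported 95.3% (95% CI, 88.6-98.2); the paper's exact CI method is an assumption here.

```python
import math

def sensitivity_wilson_ci(true_pos, positives, z=1.96):
    """Sensitivity with a Wilson score 95% confidence interval."""
    p = true_pos / positives
    z2 = z * z
    denom = 1.0 + z2 / positives
    center = (p + z2 / (2 * positives)) / denom
    half = z * math.sqrt(p * (1 - p) / positives
                         + z2 / (4 * positives ** 2)) / denom
    return p, center - half, center + half

# 82/86 early-stage malignancies detected is consistent with the reported
# 95.3% (95% CI, 88.6-98.2) for MIA combined with clinical assessment.
sens, lo, hi = sensitivity_wilson_ci(82, 86)
print(round(100 * sens, 1), round(100 * lo, 1), round(100 * hi, 1))
```

The same function applied to the other detectors' counts reproduces their intervals, so Wilson-style intervals are a reasonable reading of the reported figures.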
Comparative Performance Study of Conventional and Islamic Banking in Pakistan | The purpose of this empirical study is to analyze and compare the performance of Islamic and conventional banking in Pakistan and to find out which banking stream performs better. For this study, a sample of 22 conventional banks and 5 Islamic banks was selected. For in-depth understanding and sound comparison, key performance indicators were divided into external and internal bank factors. The external factor analysis includes studying customer behavior and perception about both Islamic and conventional banking. The internal factor analysis includes measuring differences in the performance of Islamic and conventional banks in terms of profitability, liquidity, credit risk, and solvency. Nine financial ratios were used to gauge profitability, liquidity, and credit risk, and a model known as the “Bank-o-meter” was used to gauge solvency. Findings suggest that conventional banking leads in profitability and liquidity, while Islamic banking dominates in credit risk management and solvency maintenance. Motivating factors for customers of Islamic banking are location and Shari’a compliance, while for conventional banking they are the wide range of products and services. |
Mining Minimal Contrast Subgraph Patterns | In this paper, we introduce a new type of contrast pattern, the minimal contrast subgraph. It is able to capture structural differences between any two collections of graphs and can be useful in chemical compound comparison and building graph classification models. However, mining minimal contrast subgraphs is a challenging task, due to the exponentially large search space and graph (sub)isomorphism problems. We present an algorithm which utilises a backtracking tree to first compute the maximal common edge sets and then uses a minimal hypergraph transversal algorithm, to derive the set of minimal contrast subgraphs. An experimental evaluation demonstrates the potential of our technique for finding interesting differences in graph data. |
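The final step of the pipeline described above, computing minimal hypergraph transversals (minimal hitting sets), can be illustrated with a brute-force sketch; the paper uses a more efficient transversal algorithm, and the tiny hypergraph below is an arbitrary example, not graph data.

```python
from itertools import combinations

# Brute-force minimal hypergraph transversals (minimal hitting sets).
# Exponential in the universe size, so only viable for tiny inputs.
def minimal_transversals(edges):
    universe = sorted(set().union(*edges))
    hits = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            s = set(cand)
            if all(s & e for e in edges):          # hits every hyperedge
                if not any(h <= s for h in hits):  # no smaller hit inside
                    hits.append(s)
    return hits

# Triangle hypergraph: every pair of vertices is a hyperedge.
edges = [{1, 2}, {2, 3}, {1, 3}]
print(minimal_transversals(edges))  # [{1, 2}, {1, 3}, {2, 3}]
```

Iterating candidate sets in increasing size guarantees that any transversal containing a previously found one is rejected, so only minimal hitting sets survive.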
A Survey of Binary Similarity and Distance Measures | The binary feature vector is one of the most common representations of patterns, and similarity and distance measures play a critical role in many problems such as clustering and classification. Ever since Jaccard proposed a similarity measure to classify ecological species in 1901, numerous binary similarity and distance measures have been proposed in various fields. Applying appropriate measures results in more accurate data analysis. Nevertheless, few comprehensive surveys on binary measures have been conducted. Hence, we collected 76 binary similarity and distance measures used over the last century and reveal their correlations through the hierarchical clustering technique. |
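Two of the surveyed measure families can be illustrated directly: Jaccard-style measures ignore negative (0-0) matches, while simple-matching-style measures (e.g. Sokal-Michener) count them. The vectors below are arbitrary examples.

```python
def jaccard(a, b):
    """Jaccard similarity of two equal-length binary vectors: shared 1s
    divided by positions where either vector has a 1 (0-0 matches ignored)."""
    both = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    either = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return both / either if either else 1.0

def simple_matching(a, b):
    """Sokal-Michener: fraction of agreeing positions, counting 0-0 matches."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

a = [1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 0, 1, 1]
print(jaccard(a, b), simple_matching(a, b))
```

Whether 0-0 agreement carries information is exactly the kind of distinction that separates the measures clustered in the survey.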
Association of genes to genetically inherited diseases using data mining | Although approximately one-quarter of the roughly 4,000 genetically inherited diseases currently recorded in respective databases (LocusLink, OMIM) are already linked to a region of the human genome, about 450 have no known associated gene. Finding disease-related genes requires laborious examination of hundreds of possible candidate genes (sometimes, these are not even annotated; see, for example, refs 3,4). The public availability of the human genome draft sequence has fostered new strategies to map molecular functional features of gene products to complex phenotypic descriptions, such as those of genetically inherited diseases. Owing to recent progress in the systematic annotation of genes using controlled vocabularies, we have developed a scoring system for the possible functional relationships of human genes to 455 genetically inherited diseases that have been mapped to chromosomal regions without assignment of a particular gene. In a benchmark of the system with 100 known disease-associated genes, the disease-associated gene was among the 8 best-scoring genes with a 25% chance, and among the best 30 genes with a 50% chance, showing that there is a relationship between the score of a gene and its likelihood of being associated with a particular disease. The scoring also indicates that for some diseases, the chance of identifying the underlying gene is higher. |
Development of a management system with RFID and QR code for matching and breeding in Taiwan pig farm | The outbreak of swine diseases in Taiwan during the past years has caused a large amount of meat from sick pigs to reach the market. This greatly affected the willingness of consumers to purchase pork. In order to enhance food safety and sustain the income of pig farmers, the Council of Agriculture recommended that the pig's production history be presented to the public so that product information could be made transparent. At present, most of Taiwan's pig farmers still use handwritten cards for administrative work. New record cards are filled out for each batch of swine and collected as a booklet for management. For some pig farms, this management approach requires very high labor costs with regard to vaccination and production management and also lacks management efficiency. Therefore, this study established a management system for (1) pig matching, (2) vaccination, (3) production date, and (4) production capacity that combines radiofrequency identification (RFID) and Quick Response (QR) code technology with an electronic database management system, allowing stockpeople to keep track of the farm situation at all times. Results of actual tests in pig farms showed that, aside from effectively improving the efficiency of breeding and matching, this system can also significantly improve and upgrade the administration of vaccines. |
A place-based model for understanding community resilience to natural disasters | There is considerable research interest on the meaning and measurement of resilience from a variety of research perspectives including those from the hazards/disasters and global change communities. The identification of standards and metrics for measuring disaster resilience is one of the challenges faced by local, state, and federal agencies, especially in the United States. This paper provides a new framework, the disaster resilience of place (DROP) model, designed to improve comparative assessments of disaster resilience at the local or community level. A candidate set of variables for implementing the model are also presented as a first step towards its implementation. |
VDNet: an infrastructure-less UAV-assisted sparse VANET system with vehicle location prediction | Vehicular Ad Hoc Networks (VANETs) have been a hot topic in the past few years. Compared with vehicular networks where vehicles are densely distributed, sparse VANETs have more practical significance. The first challenge of a sparse VANET system is that the network suffers from frequent disconnections. The second challenge is to adapt the transmission route to the dynamic mobility pattern of the vehicles. In addition, some infrastructural requirements are hard to meet when deploying a VANET widely. Facing these challenges, we devise an infrastructure-less unmanned aerial vehicle (UAV) assisted VANET system called Vehicle-Drone hybrid vehicular ad hoc Network (VDNet), which utilizes UAVs, particularly quadrotor drones, to boost vehicle-to-vehicle data message transmission under instructions produced by our distributed vehicle location prediction algorithm. VDNet takes geographic information into consideration. Vehicles in VDNet observe the location information of other vehicles to construct a transmission route and predict the location of a destination vehicle. Some vehicles in VDNet are equipped with an on-board UAV, which can deliver data messages directly to the destination, relay messages in a multi-hop route, and collect location information while flying above the traffic. The performance evaluation shows that VDNet achieves high efficiency and low end-to-end delay with controlled communication overhead. Copyright © 2016 John Wiley & Sons, Ltd. |
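A minimal constant-velocity extrapolation conveys the flavor of the location prediction a VDNet vehicle could perform from two observed positions of a destination; the actual distributed algorithm is richer (road topology, mobility patterns), so this is only a hypothetical sketch.

```python
# Constant-velocity location prediction from two timestamped observations.
# A hypothetical sketch, not the paper's distributed prediction algorithm.
def predict_location(p_prev, t_prev, p_last, t_last, t_query):
    """Linearly extrapolate an (x, y) position to time t_query."""
    dt = t_last - t_prev
    vx = (p_last[0] - p_prev[0]) / dt
    vy = (p_last[1] - p_prev[1]) / dt
    lead = t_query - t_last
    return (p_last[0] + vx * lead, p_last[1] + vy * lead)

# Vehicle seen at (0, 0) at t=0 s and at (30, 40) at t=2 s; predict t=4 s.
print(predict_location((0.0, 0.0), 0.0, (30.0, 40.0), 2.0, 4.0))
```

The predicted position tells a relaying vehicle or UAV where to aim the next hop despite the frequent disconnections of a sparse network.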
Fast concurrent queues for x86 processors | Conventional wisdom in designing concurrent data structures is to use the most powerful synchronization primitive, namely compare-and-swap (CAS), and to avoid contended hot spots. In building concurrent FIFO queues, this reasoning has led researchers to propose combining-based concurrent queues.
This paper takes a different approach, showing how to rely on fetch-and-add (F&A), a less powerful primitive that is available on x86 processors, to construct a nonblocking (lock-free) linearizable concurrent FIFO queue which, despite the F&A being a contended hot spot, outperforms combining-based implementations by 1.5x to 2.5x at all concurrency levels on an x86 server with four multicore processors, in both single-processor and multi-processor executions. |
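The core idea described above, claiming a unique queue slot with a single fetch-and-add instead of a CAS retry loop, can be sketched as follows. This is a hypothetical simplification, not the paper's actual algorithm: CPython's atomic `itertools.count.__next__` stands in for the x86 F&A instruction, and the bounded-array, ring-recycling, and CAS-fallback machinery of real designs such as LCRQ is omitted.

```python
import itertools
import threading


class FAAQueue:
    """Toy fetch-and-add ticket queue (illustrative sketch only).

    Each enqueuer and dequeuer claims a unique slot index with an
    atomic fetch-and-add; there is no contended CAS retry loop.
    Capacity exhaustion and slot recycling are deliberately ignored.
    """

    def __init__(self, capacity=1024):
        self.slots = [None] * capacity
        self.flags = [threading.Event() for _ in range(capacity)]
        self._tail = itertools.count()  # F&A counter for enqueuers
        self._head = itertools.count()  # F&A counter for dequeuers

    def enqueue(self, item):
        idx = next(self._tail)   # atomic "fetch-and-add": take a ticket
        self.slots[idx] = item
        self.flags[idx].set()    # publish the filled slot

    def dequeue(self):
        idx = next(self._head)   # take a ticket on the consumer side
        self.flags[idx].wait()   # block until the matching slot is filled
        return self.slots[idx]


q = FAAQueue()
for i in range(10):
    q.enqueue(i)
drained = [q.dequeue() for _ in range(10)]
```

Because tickets are handed out in strictly increasing order, items are always dequeued in FIFO order, even though no dequeuer ever retries a failed CAS.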
A clean wafer-scale chip-release process without dicing based on vapor phase etching | A new method to release MEMS chips from a wafer without dicing is presented. It can be applied whenever SOI wafers are used that are structured from both the device and the handle side using DRIE. This method enables the release of extremely fragile structures without any mechanical impact on the chips. No more dicing residues or debris are created and deposited onto the wafer. The basic idea consists of etching deep surrounding trenches on the device and the handle layer that are displaced by about 20 μm and thus create overlapping areas. For release, the buried silicon dioxide between the overlapping areas is etched away using hydrofluoric acid vapor phase etching. |
Estimation of battery parameters of the equivalent circuit model using Grey Wolf Optimization | For dynamic simulation of battery electric vehicles, it is vital to estimate battery parameters accurately so that the battery can be used effectively. Conventional parameter estimation relies on experimental methods that are expensive, require high computational power, and are time-consuming. To overcome this problem, a methodology based on meta-heuristic techniques (Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and the recently proposed Grey Wolf Optimization (GWO)) is used. These techniques are simple to apply and require less computational power. Estimation is performed by measuring how closely the model's estimated voltage curve matches the known catalogue voltage curve, and the feasibility of the techniques is evaluated by accuracy (error minimization) and rate of convergence. The investigation showed that GWO has the best accuracy among the meta-heuristic techniques for estimating the battery parameters. |
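The GWO procedure mentioned above can be illustrated with a minimal sketch. This is a generic, simplified GWO loop, not the paper's implementation: the true voltage-curve error is replaced by a toy quadratic fitness whose known "parameters" the optimizer must recover, and all names (`grey_wolf_optimize`, `target`) are assumptions for illustration.

```python
import numpy as np


def grey_wolf_optimize(fitness, dim, bounds, n_wolves=20, n_iters=200, seed=0):
    """Minimal Grey Wolf Optimization sketch: minimize `fitness` over
    `dim` variables, each clipped to the (low, high) pair in `bounds`."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    wolves = rng.uniform(low, high, size=(n_wolves, dim))
    best_pos, best_val = wolves[0].copy(), float("inf")

    for t in range(n_iters):
        # Rank the pack: alpha, beta, delta are the three best wolves.
        scores = np.array([fitness(w) for w in wolves])
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        if scores[order[0]] < best_val:
            best_val = float(scores[order[0]])
            best_pos = wolves[order[0]].copy()

        a = 2.0 * (1.0 - t / n_iters)  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a        # step toward/away from the leader
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            # Each wolf moves to the average of the three leader-guided steps.
            wolves[i] = np.clip(new_pos / 3.0, low, high)

    return best_pos, best_val


# Toy stand-in for the voltage-curve error: recover known "parameters".
target = np.array([1.2, -0.7, 3.0])
fit = lambda x: float(np.sum((x - target) ** 2))
best, err = grey_wolf_optimize(fit, dim=3, bounds=(-5.0, 5.0))
```

In the paper's setting, `fitness` would instead return the error between the equivalent-circuit model's simulated voltage curve and the catalogue voltage curve for a candidate parameter vector.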
Genetic analysis of host resistance: Toll-like receptor signaling and immunity at large. | Classical genetic methods, driven by phenotype rather than hypotheses, generally permit the identification of all proteins that serve nonredundant functions in a defined biological process. Long before this goal is achieved, and sometimes at the very outset, genetics may cut to the heart of a biological puzzle. So it was in the field of mammalian innate immunity. The positional cloning of a spontaneous mutation that caused lipopolysaccharide resistance and susceptibility to Gram-negative infection led directly to the understanding that Toll-like receptors (TLRs) are essential sensors of microbial infection. Other mutations, induced by the random germ line mutagen ENU (N-ethyl-N-nitrosourea), have disclosed key molecules in the TLR signaling pathways and helped us to construct a reasonably sophisticated portrait of the afferent innate immune response. A still broader genetic screen--one that detects all mutations that compromise survival during infection--is permitting fresh insight into the number and types of proteins that mammals use to defend themselves against microbes. |
Outcome assessment of decentralization of antiretroviral therapy provision in a rural district of Malawi using an integrated primary care model. | OBJECTIVE
To assess the effect of decentralization (DC) of antiretroviral therapy (ART) provision in a rural district of Malawi using an integrated primary care model.
METHODS
Between October 2004 and December 2008, 8093 patients (63% women) were registered for ART. Of these, 3440 (43%) were decentralized to health centres for follow-up ART care. We applied multivariate regression analysis that adjusted for sex, age, clinical stage at initiation, type of regimen, presence of side effects because of ART, and duration of treatment and follow-up at site of analysis.
RESULTS
Patients managed at health centres had lower mortality [adjusted OR 0.19 (95% C.I. 0.15-0.25)] and lower loss to follow-up (defaulted from treatment) [adjusted OR 0.48 (95% C.I. 0.40-0.58)]. During the first 10 months of follow-up, those decentralized to health centres were approximately 60% less likely to default than those not decentralized; and after 10 months of follow-up, 40% less likely to default. DC was significantly associated with a reduced risk of death from 0 to 25 months of follow-up. The lower mortality may be explained by the selection of stable patients for DC, and the mentorship and supportive supervision of lower cadre health workers to identify and refer complicated cases.
CONCLUSION
Decentralization of follow-up ART care to rural health facilities, using an integrated primary care model, appears to be a safe and effective way to rapidly scale up ART, improving both geographical equity in access to HIV-related services and adherence to ART. |
Wireless Body Area Network for Heart Attack Detection [Education Corner] | This article describes a body area network (BAN) for measuring an electrocardiogram (ECG) signal and transmitting it to a smartphone via Bluetooth for data analysis. The BAN uses a specially designed planar inverted F-antenna (PIFA) with a small form factor, realizable with low-fabrication-cost techniques. Furthermore, due to the human body's electrical properties, the antenna was designed to enable surface-wave propagation around the body. The system utilizes the user's own smartphone for data processing, and the built-in communications can be used to raise an alarm if a heart attack is detected. This is managed by an application for Android smartphones that has been developed for this system. The good functionality of the system was confirmed in three real-life user case scenarios. |