Myofascial Release Therapy in the Treatment of Occupational Mechanical Neck Pain: A Randomized Parallel Group Study.
OBJECTIVE As myofascial release therapy is currently under development, the objective of this study was to compare the effectiveness of myofascial release therapy with manual therapy for treating occupational mechanical neck pain. DESIGN A randomized, single-blind parallel group study was conducted. The sample (n = 59) was divided into GI, treated with manual therapy, and GII, treated with myofascial release therapy. Variables studied were intensity of neck pain, cervical disability, quality of life, craniovertebral angle, and ranges of cervical motion. RESULTS After five sessions, clinical significance was observed in both groups for all the variables studied, except for flexion in GI. At this time point, an intergroup statistical difference was observed, showing that GII had a better craniovertebral angle (P = 0.014), flexion (P = 0.021), extension (P = 0.003), right side bending (P = 0.001), and right rotation (P = 0.031). A comparative analysis between therapies after the intervention showed statistical differences indicating that GII had a better craniovertebral angle (P = 0.000), right (P = 0.000) and left (P = 0.009) side bending, right (P = 0.024) and left (P = 0.046) rotation, and quality of life. CONCLUSIONS Treatment of occupational mechanical neck pain by myofascial release therapy seems to be more effective than manual therapy for correcting the forward position of the head, recovering range of motion in side bending and rotation, and improving quality of life.
Adtranz: A Mobile Computing System for Maintenance and Collaboration
The paper describes the mobile information and communication aspects of a next-generation train maintenance and diagnosis system, discusses the features of the working prototype, and presents research results. Wearable/mobile computers combined with wireless technology improve the efficiency and accuracy of maintenance work. This technology enables maintenance personnel at the site to communicate with a remote helpdesk/expertise center through digital data, audio, and images.
Pollution Biology — The North American Experience
Earliest references to tubificids in pollution biology in North America were related to the simple abundance of the group in grossly polluted situations. With the improvement in taxonomy in the decade of the sixties, it was possible to recognize species assemblages, especially in the St. Lawrence Great Lakes. The distribution of these associations has now been worked out in considerable detail, and consulting companies and government agencies now work with identified species. Very few traditional laboratory tolerance tests have been done, but a start has been made on the investigation of the activity of worms in recycling sediment contaminants such as metals.
Understanding social media marketing: a case study on topics, categories and sentiment on a Facebook brand page
Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.
Immune effector mechanisms implicated in atherosclerosis: from mice to humans.
According to the traditional view, atherosclerosis results from a passive buildup of cholesterol in the artery wall. Yet, burgeoning evidence implicates inflammation and immune effector mechanisms in the pathogenesis of this disease. Both innate and adaptive immunity operate during atherogenesis and link many traditional risk factors to altered arterial functions. Inflammatory pathways have become targets in the quest for novel preventive and therapeutic strategies against cardiovascular disease, a growing contributor to morbidity and mortality worldwide. Here we review current experimental and clinical knowledge of the pathogenesis of atherosclerosis through an immunological lens and how host defense mechanisms essential for survival of the species actually contribute to this chronic disease but also present new opportunities for its mitigation.
Contact force based compliance control for a trotting quadruped robot
In this paper, a compliance control strategy based on force control is proposed for quadruped trot-gait locomotion over unperceived rough terrain. To avoid interference from environmental disturbances and unknown model characteristics, a sliding-mode-based controller is designed to control the position/posture of the robot torso. The force distribution and control modules calculate the desired contact forces by eliminating interaction forces between the stance legs. Simulation and experimental results demonstrate the effectiveness of our compliance control strategy, as the quadruped robot successfully trots over rough terrain.
Intention-aware online POMDP planning for autonomous driving in a crowd
This paper presents an intention-aware online planning approach for autonomous driving amid many pedestrians. To drive near pedestrians safely, efficiently, and smoothly, autonomous vehicles must estimate unknown pedestrian intentions and hedge against the uncertainty in intention estimates in order to choose actions that are effective and robust. A key feature of our approach is to use the partially observable Markov decision process (POMDP) for systematic, robust decision making under uncertainty. Although there are concerns about the potentially high computational complexity of POMDP planning, experiments show that our POMDP-based planner runs in near real time, at 3 Hz, on a robot golf cart in a complex, dynamic environment. This indicates that POMDP planning is improving fast in computational efficiency and becoming increasingly practical as a tool for robot planning under uncertainty.
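The intention estimation that the planner hedges against can be sketched as a simple Bayes filter over hypothesized pedestrian goals. This is a minimal illustration of the belief-update step of a POMDP, not the authors' planner; the goal names and observation likelihoods below are invented.

```python
def update_belief(belief, likelihoods):
    """One Bayes update: posterior is proportional to prior times likelihood."""
    posterior = {g: belief[g] * likelihoods[g] for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# Uniform prior over two hypothetical pedestrian goals.
belief = {"crosswalk": 0.5, "bus_stop": 0.5}
# The observed heading is twice as likely if the goal is the crosswalk.
belief = update_belief(belief, {"crosswalk": 0.8, "bus_stop": 0.4})
print(round(belief["crosswalk"], 3))  # 0.667 -- two thirds of the mass shifts
```

Repeating this update as observations arrive yields the intention estimate over which the POMDP policy chooses robust actions.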
Clustering by pattern similarity in large data sets
Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we proposed, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers. Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness.
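The pCluster notion of coherence can be made concrete with the 2×2 pScore test: two objects form a perfect shifting pattern on a set of dimensions exactly when every 2×2 submatrix has pScore 0, and a pCluster tolerates pScore up to a threshold δ. The toy matrix below is illustrative.

```python
from itertools import combinations

def p_score(matrix, rows, cols):
    """Worst pScore over all 2x2 submatrices spanned by rows x cols.
    pScore 0 means the objects differ by a constant offset across the
    chosen dimensions; a pCluster requires pScore <= delta."""
    worst = 0.0
    for x, y in combinations(rows, 2):
        for a, b in combinations(cols, 2):
            worst = max(worst, abs((matrix[x][a] - matrix[x][b])
                                   - (matrix[y][a] - matrix[y][b])))
    return worst

# Toy "expression levels": rows 0 and 1 rise and fall synchronously
# (offset by 10), so they pattern-cluster; row 2 breaks the pattern.
data = [[1.0, 5.0, 2.0],
        [11.0, 15.0, 12.0],
        [1.0, 9.0, 2.0]]
print(p_score(data, [0, 1], [0, 1, 2]))  # 0.0
print(p_score(data, [0, 2], [0, 1, 2]))  # 4.0
```

Note that rows 0 and 1 are far apart in Euclidean distance yet perfectly similar under the pCluster model, which is exactly the distinction the abstract draws.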
Activity Recognition from Inertial Sensors with Convolutional Neural Networks
Human activity recognition is an attractive topic for developing smart interactive environments in which computing systems can understand human activities in their natural context. Besides traditional approaches based on visual data, inertial sensors in wearable devices provide a promising approach to human activity recognition. In this paper, we propose novel methods to recognize human activities from raw inertial sensor data using convolutional neural networks with either 2D or 3D filters. We also combine hand-crafted features with features learned by the Convolution-Pooling blocks to further improve recognition accuracy. Experiments on the UCI Human Activity Recognition dataset with six different activities demonstrate that our method achieves an accuracy of 96.95%, higher than existing methods.
Walk detection and step counting on unconstrained smartphones
Smartphone pedometry offers the possibility of ubiquitous health monitoring, context awareness and indoor location tracking through Pedestrian Dead Reckoning (PDR) systems. However, there is currently no detailed understanding of how well pedometry works when applied to smartphones in typical, unconstrained use. This paper evaluates common walk detection (WD) and step counting (SC) algorithms applied to smartphone sensor data. Using a large dataset (27 people, 130 walks, 6 smartphone placements) optimal algorithm parameters are provided and applied to the data. The results favour the use of standard deviation thresholding (WD) and windowed peak detection (SC) with error rates of less than 3%. Of the six different placements, only the back trouser pocket is found to degrade the step counting performance significantly, resulting in undercounting for many algorithms.
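The two favoured algorithms can be sketched in a few lines; the threshold and window size below are illustrative choices, not the optimal parameters reported in the paper.

```python
import math

def walk_detected(accel, threshold=0.3):
    """Walk detection by standard-deviation thresholding: flag a window
    as walking when the std of its acceleration magnitudes exceeds a
    threshold (the value here is an assumption for the toy signal)."""
    mean = sum(accel) / len(accel)
    std = math.sqrt(sum((a - mean) ** 2 for a in accel) / len(accel))
    return std > threshold

def count_steps(accel, half_window=2):
    """Windowed peak detection: a sample counts as a step when it is the
    maximum of its surrounding window and sits above the window mean."""
    steps = 0
    for i in range(half_window, len(accel) - half_window):
        window = accel[i - half_window:i + half_window + 1]
        if accel[i] == max(window) and accel[i] > sum(window) / len(window):
            steps += 1
    return steps

# A toy magnitude signal with three clear peaks.
signal = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
print(walk_detected(signal), count_steps(signal))  # True 3
```

In practice both functions would run on the magnitude of the 3-axis accelerometer signal so the result is independent of phone orientation.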
Reducing Wrong Labels in Distant Supervision for Relation Extraction
In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.
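The labeling heuristic the generative model targets can be sketched directly; the entity pair, relation name, and sentences below are invented for illustration. Any sentence mentioning both entities inherits the KB relation, which is precisely where wrong labels come from.

```python
# Knowledge-base triple: (entity1, entity2) -> relation (made-up example).
kb = {("Obama", "Hawaii"): "born_in"}

sentences = [
    "Obama was born in Hawaii.",         # correctly labeled
    "Obama flew back to Hawaii today.",  # wrong label: no birth relation expressed
    "Paris is lovely in spring.",        # no KB pair -> unlabeled
]

def distant_label(sentence, kb):
    """The distant-supervision heuristic: if a sentence mentions both
    entities of a KB pair, label it with the KB relation."""
    for (e1, e2), rel in kb.items():
        if e1 in sentence and e2 in sentence:
            return rel
    return None

labels = [distant_label(s, kb) for s in sentences]
print(labels)  # ['born_in', 'born_in', None]
```

The paper's generative model would treat the correctness of each such assigned label as a hidden variable to be inferred, rather than trusting the heuristic.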
Seismic Performance Evaluation of Existing RC Buildings Without Seismic Details: Comparison of Nonlinear Static Methods and IDA
The inelastic response of existing reinforced concrete (RC) buildings without seismic details is investigated, presenting the results from more than 1000 nonlinear analyses. The seismic performance is investigated for two buildings, a typical building form of the 60s and a typical form of the 80s. Both structures are designed according to the old Greek codes. These building forms are typical for that period for many Southern European countries. Buildings of the 60s do not have seismic details, while buildings of the 80s have elementary seismic details. The influence of masonry infill walls is also investigated for the building of the 60s. Static pushover and incremental dynamic analyses (IDA) for a set of 15 strong motion records are carried out for the three buildings, two bare and one infilled. The IDA predictions are compared with the results of pushover analysis and the seismic demand according to the Capacity Spectrum Method (CSM) and the N2 Method. The results from IDA show large dispersion in the response, available ductility capacity, behaviour factor and failure displacement, depending on the strong motion record. CSM and N2 predictions are enveloped by the nonlinear dynamic predictions, but show significant differences from the mean values. The better behaviour of the building of the 80s compared to the building of the 60s is validated with both pushover and nonlinear dynamic analyses. Finally, both types of analysis show that fully infilled frames exhibit improved behaviour compared to bare frames.
Lattice-Based WOM Codes for Multilevel Flash Memories
We consider t-write codes for write-once memories with n cells that can store multiple levels. Assuming an underlying lattice-based construction and using the continuous approximation, we derive upper bounds on the worst-case sum-rate optimal and fixed-rate optimal n-cell t-write write-regions for the asymptotic case of continuous levels. These are achieved using hyperbolic shaping regions that have a gain of 1 bit/cell over cubic shaping regions. Motivated by these hyperbolic write-regions, we discuss construction and encoding of codebooks for cells with discrete support. We present a polynomial-time algorithm to assign messages to the codebooks and show that it achieves the optimal sum-rate for any given codebook when n = 2. Using this approach, we construct codes that achieve high sum-rate. We describe an alternative formulation of the message assignment problem for n≥ 3, a problem which remains open.
Controllability of complex networks
The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes.
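In this framework the minimum number of driver nodes follows from a maximum matching in the bipartite graph that splits every node into an "out" copy and an "in" copy: N_D = max(N − |M*|, 1). The sketch below implements that computation with Kuhn's augmenting-path matching; the example networks are invented.

```python
def max_matching(edges, nodes):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    edges: directed edges (u, v); left side = out-copies, right = in-copies."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    match = {}  # in-copy -> out-copy it is matched to

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or try_augment(match[v], seen):
                match[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in nodes)

def driver_nodes(edges, n):
    """Minimum driver nodes for structural controllability."""
    return max(n - max_matching(edges, range(n)), 1)

# A directed star (hub 0 points at 1, 2, 3): only one edge is matchable.
print(driver_nodes([(0, 1), (0, 2), (0, 3)], 4))  # 3
# A directed path 0->1->2->3 is fully matchable: a single driver suffices.
print(driver_nodes([(0, 1), (1, 2), (2, 3)], 4))  # 1
```

The star/path contrast mirrors the abstract's finding that hubs are poor drivers: pinning the hub alone cannot independently steer its many leaves.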
Dual-polarized log-periodic antenna on a conical MID substrate
This paper presents the design of a dual-polarized log-periodic four-arm antenna bent onto a conical MID substrate. The bending of a planar structure in free space is highlighted and the resulting effects on the input impedance and radiation characteristics are analyzed. The subsequent design of the UWB-compliant prototype is introduced. An adequate agreement between simulated and measured performance can be observed. The antenna provides an input matching of better than −8 dB over a frequency range from 3 GHz to 9 GHz. The antenna pattern is characterized by radiation with two linear, orthogonal polarizations and a front-to-back ratio of 6 dB. A maximum gain of 5.6 dBi is achieved at 5.5 GHz. The pattern correlation coefficients confirm the suitability of this structure for diversity and MIMO applications. The overall antenna diameter and height are 50 mm and 24 mm, respectively. It could therefore be used as a surface-mounted or ceiling antenna in buildings, vehicles or aircraft for communication systems.
Estimating heterogeneous choice models with oglm
When a binary or ordinal regression model incorrectly assumes that error variances are the same for all cases, the standard errors are wrong and (unlike OLS regression) the parameter estimates are biased. Heterogeneous choice (also known as location-scale or heteroskedastic ordered) models explicitly specify the determinants of heteroskedasticity in an attempt to correct for it. Such models are also useful when the variance itself is of substantive interest. This paper illustrates how the author’s Stata program oglm (Ordinal Generalized Linear Models) can be used to estimate heterogeneous choice and related models. It shows that two other models that have appeared in the literature (Allison’s model for group comparisons and Hauser and Andrew’s logistic response model with proportionality constraints) are special cases of a heterogeneous choice model and alternative parameterizations of it. The paper further argues that heterogeneous choice models may sometimes be an attractive alternative to other ordinal regression models, such as the generalized ordered logit model estimated by gologit2. Finally, the paper offers guidelines on how to interpret, test and modify heterogeneous choice models.
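The core of the heterogeneous choice model is the scaled ordered-logit probability P(y ≤ j | x) = Λ((κ_j − xβ) / exp(zγ)). The sketch below evaluates that formula in pure Python; it is an illustration of the model, not the oglm program, and the numeric inputs are invented.

```python
import math

def logistic(t):
    return 1.0 / (1.0 + math.exp(-t))

def hetero_ordered_probs(xb, zg, cutpoints):
    """Category probabilities of a heterogeneous-choice (location-scale)
    ordered logit. When z*gamma = 0, exp(z*gamma) = 1 and the model
    collapses to the ordinary ordered logit."""
    sigma = math.exp(zg)
    cdf = [logistic((k - xb) / sigma) for k in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]

# Same location, doubled error scale (illustrative values):
homo = hetero_ordered_probs(xb=0.5, zg=0.0, cutpoints=[-1.0, 1.0])
hetero = hetero_ordered_probs(xb=0.5, zg=math.log(2.0), cutpoints=[-1.0, 1.0])
print([round(p, 3) for p in homo])
print([round(p, 3) for p in hetero])
```

Both distributions sum to one, but the larger scale flattens the middle category, which is why ignoring heteroskedasticity biases the location coefficients.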
Optimizing and Visualizing Deep Learning for Benign/Malignant Classification in Breast Tumors
Breast cancer has the highest incidence and the second highest mortality rate among women in the US. Our study aims to utilize deep learning for benign/malignant classification of mammogram tumors using a subset of cases from the Digital Database for Screening Mammography (DDSM). Though the dataset is small by deep learning standards (∼1000 patients), we show that current state-of-the-art deep learning architectures can find a robust signal, even when trained from scratch. Using convolutional neural networks (CNNs), we achieve an accuracy of 85% and an ROC AUC of 0.91, while leading hand-crafted feature based methods achieve an accuracy of only 71%. We investigate an amalgamation of architectures to show that our best result is reached with an ensemble of lightweight GoogLeNets tasked with interpreting both the craniocaudal view and the mediolateral oblique view, simply averaging the probability scores of both views to make the final prediction. In addition, we have created a novel method to visualize which features the neural network detects for the benign/malignant classification, and have correlated those features with well-known radiological features, such as spiculation. Our algorithm significantly improves existing classification methods for mammography lesions and identifies features that correlate with established clinical markers.
The Design and Use of Algorithms for Permuting Large Entries to the Diagonal of Sparse Matrices
We consider techniques for permuting a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various criteria for this and consider their implementation as computer codes. We then indicate several cases where such a permutation can be useful. These include the solution of sparse equations by a direct method and by an iterative technique. We also consider its use in generating a preconditioner for an iterative method. We see that the effect of these reorderings can be dramatic although the best a priori strategy is by no means clear.
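One natural criterion is to maximize the product of the absolute diagonal entries of the permuted matrix. The sketch below makes that goal concrete by brute force on a tiny invented matrix; the codes the paper describes use weighted-matching algorithms, since exhaustive search is only viable for very small n.

```python
from itertools import permutations

def best_diagonal_permutation(a):
    """Row permutation maximizing the product of |diagonal| entries,
    found exhaustively. best_perm[i] is the original row moved to row i."""
    n = len(a)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):
        prod = 1.0
        for i in range(n):
            prod *= abs(a[perm[i]][i])
        if prod > best:
            best, best_perm = prod, perm
    return best_perm

# The large entries 8 and 9 sit off-diagonal; swapping rows fixes that.
a = [[0.1, 9.0],
     [8.0, 0.2]]
print(best_diagonal_permutation(a))  # (1, 0)
```

A matrix reordered this way has a stronger diagonal, which helps direct solvers avoid pivoting and improves simple preconditioners, as the abstract indicates.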
GoalBaural: A Training Application for Goalball-related Aural Sense
Goalball, one of the official Paralympic events, is popular with visually impaired people all over the world. The purpose of goalball is to throw the specialized ball, with bells inside it, to the goal line of the opponents as many times as possible while defenders try to block the thrown ball with their bodies. Since goalball players cannot rely on visual information, they need to grasp the game situation using their auditory sense. However, it is hard, especially for beginners, to perceive the direction and distance of the thrown ball. In addition, they generally tend to be afraid of the approaching ball because, without visual information, they could be hit by a high-speed ball. In this paper, our goal is to develop an application called GoalBaural (Goalball + aural) that enables goalball players to improve the recognizability of the direction and distance of a thrown ball without going onto the court and playing goalball. The evaluation result indicated that our application would be efficient in improving the speed and the accuracy of locating the balls.
Recurrent Symptoms After Heller Myotomy for Achalasia: Evaluation and Treatment
A laparoscopic Heller myotomy with partial fundoplication is today considered, in most centers in the United States and abroad, the treatment of choice for patients with esophageal achalasia. Even though the operation initially has a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan, by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.
SNAS: Stochastic Neural Architecture Search
We propose Stochastic Neural Architecture Search (SNAS), an economical end-to-end solution to Neural Architecture Search (NAS) that trains neural operation parameters and architecture distribution parameters in the same round of backpropagation, while maintaining the completeness and differentiability of the NAS pipeline. In this work, NAS is reformulated as an optimization problem on the parameters of a joint distribution over the search space in a cell. To leverage the gradient information in a generic differentiable loss for architecture search, a novel search gradient is proposed. We prove that this search gradient optimizes the same objective as reinforcement-learning-based NAS, but assigns credits to structural decisions more efficiently. This credit assignment is further augmented with a locally decomposable reward to enforce a resource-efficiency constraint. In experiments on CIFAR-10, SNAS takes fewer epochs to find a cell architecture with state-of-the-art accuracy than non-differentiable evolution-based and reinforcement-learning-based NAS, and the architecture is also transferable to ImageNet. It is also shown that child networks of SNAS can maintain validation accuracy during search, whereas attention-based NAS requires parameter retraining to compete, exhibiting the potential to stride towards efficient NAS on big datasets.
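Sampling from the architecture distribution while keeping the pipeline differentiable is commonly done with the Gumbel-softmax relaxation: add Gumbel noise to the operation logits, then apply a temperature-controlled softmax. The sketch below shows only this relaxation step, not the full SNAS pipeline; the logits and temperature are invented.

```python
import math, random

def gumbel_softmax(logits, temperature=0.5, rng=random):
    """One relaxed architecture decision: Gumbel-perturbed logits pushed
    through a temperature softmax. Low temperature approaches a one-hot
    choice while the output stays differentiable in the logits."""
    noisy = [l - math.log(-math.log(rng.random())) for l in logits]
    exps = [math.exp(v / temperature) for v in noisy]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
# Three candidate operations on one cell edge, e.g. conv / skip / pool.
weights = gumbel_softmax([2.0, 0.1, -1.0])
print(round(sum(weights), 6))  # 1.0 -- a relaxed one-hot over the candidates
```

Because the sampled weights are a smooth function of the logits, the search gradient can flow through structural decisions in the same backward pass that trains the operation parameters.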
Control of an LLC Resonant Converter Using Load Feedback Linearization
The LLC resonant converter is a nonlinear system, which limits the use of typical linear control methods. This paper proposes a new nonlinear control strategy using load feedback linearization for an LLC resonant converter. Compared with conventional PI controllers, the proposed feedback-linearized control strategy achieves better performance by eliminating the nonlinear characteristics. The LLC resonant converter's dynamic model is built on a fundamental harmonic approximation using the extended describing function. By assuming that the dynamics of the resonant network are much faster than those of the output voltage and controller, the model is simplified from seventh-order state equations to second-order ones. The feedback-linearized control strategy is then presented. A double-loop PI controller is designed to regulate the modulation voltage. The switching frequency can be calculated as a function of the load, input voltage, and modulation voltage. Finally, a 200 W laboratory prototype is built to verify the proposed control scheme. The settling time of the LLC resonant converter is reduced from 38.8 to 20.4 ms under a positive load step using the proposed controller. Experimental results prove the superiority of the proposed feedback-linearized controller over the conventional PI controller.
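The idea of feedback linearization can be illustrated on a toy first-order plant rather than the converter's model: for dx/dt = −x² + u, choosing u = x² + k(x_ref − x) cancels the nonlinearity exactly, leaving linear tracking dynamics. The plant and gains below are invented for illustration.

```python
def simulate(x0, x_ref, k=5.0, dt=1e-3, steps=2000):
    """Euler simulation of dx/dt = -x**2 + u under the feedback-
    linearizing law u = x**2 + k*(x_ref - x). The closed loop reduces
    to dx/dt = k*(x_ref - x), a plain linear first-order response."""
    x = x0
    for _ in range(steps):
        u = x * x + k * (x_ref - x)   # cancel the plant nonlinearity
        x += dt * (-x * x + u)        # Euler step of the true plant
    return x

print(round(simulate(x0=0.0, x_ref=1.0), 3))  # 1.0 -- converges to the reference
```

The converter case follows the same pattern at higher order: once the nonlinear terms of the reduced second-order model are cancelled, standard linear PI design applies to the remaining dynamics.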
Camera Augmented Mobile C-Arm (CAMC): Calibration, Accuracy Study, and Clinical Applications
The mobile C-arm is an essential tool in everyday trauma and orthopedics surgery. Minimally invasive solutions based on X-ray imaging and coregistered external navigation have created a lot of interest within the surgical community and have started to replace traditional open surgery for many procedures. These solutions usually increase accuracy and reduce trauma. In general, they introduce new hardware into the OR and add the line-of-sight constraints imposed by optical tracking systems. They thus impose radical changes to the surgical setup and overall procedure. We augment a commonly used mobile C-arm with a standard video camera and a double-mirror system allowing real-time fusion of optical and X-ray images. The video camera is mounted such that its optical center virtually coincides with the C-arm's X-ray source. After a one-time calibration routine, the acquired X-ray and optical images are coregistered. This paper describes the design of such a system, quantifies its technical accuracy, and provides a qualitative proof of its efficiency through cadaver studies conducted by trauma surgeons. In particular, it studies the relevance of this system for surgical navigation in pedicle screw placement, vertebroplasty, and intramedullary nail locking procedures. The image overlay provides an intuitive interface for surgical guidance with an accuracy of <1 mm, ideally with the use of only one single X-ray image. The new system is smoothly integrated into the clinical application with no additional hardware, especially for down-the-beam instrument guidance based on the anteroposterior oblique view, where the instrument axis is aligned with the X-ray source. Throughout all experiments, the camera augmented mobile C-arm system proved to be an intuitive and robust guidance solution for selected clinical routines.
Ideas on Knowledge Synthesis Stemming from the KBBKN Endgame
Synthesis and optical properties of small Au nanorods using a seedless growth technique.
Gold nanoparticles have shown potential in photothermal cancer therapy and optoelectronic technology. In both applications, small nanorods are desirable. In the present work, a one-pot seedless synthetic technique has been developed to prepare relatively small monodisperse gold nanorods with average dimensions (length × width) of 18 × 4.5 nm, 25 × 5 nm, 15 × 4.5 nm, and 10 × 2.5 nm. In this method, the pH was found to play a crucial role in the monodispersity of the nanorods when the NaBH4 concentration of the growth solution was adjusted to control the reduction rate of the gold ions. At the optimized pH and NaBH4 concentrations, smaller gold nanorods were produced by adjusting the CTAB concentration in the growth solution. In addition, the concentration of silver ions in the growth solution was found to be pivotal in controlling the aspect ratio of the nanorods. The extinction coefficient values for the small gold nanorods synthesized with three different aspect ratios were estimated using the absorption spectra, size distributions, and atomic spectroscopic analysis data. The previously accepted relationships between the extinction coefficient or longitudinal band wavelength values and the nanorods' aspect ratios, established for large nanorods, do not extend to the small size domain reported in the present work. These relationships fail across sizes because the interaction of light with large rods gives an extinction band that results mostly from scattering processes, while the extinction of the small nanorods results from absorption processes.
Multi-task Learning for Predicting Health, Stress, and Happiness
Multi-task Learning (MTL) is applied to the problem of predicting next-day health, stress, and happiness using data from wearable sensors and smartphone logs. Three formulations of MTL are compared: i) Multi-task Multi-Kernel learning, which feeds information across tasks through kernel weights on feature types, ii) a Hierarchical Bayes model in which tasks share a common Dirichlet prior, and iii) Deep Neural Networks, which share several hidden layers but have final layers unique to each task. We show that by using MTL to leverage data from across the population while still customizing a model for each person, we can account for individual differences, and obtain state-of-the-art performance on this dataset.
Assessment of the appearance, location and morphology of mandibular lingual foramina using cone beam computed tomography.
OBJECTIVES To investigate the appearance, location and morphology of mandibular lingual foramina (MLF) in the Chinese Han population using cone beam computed tomography (CBCT). METHODS CBCT images of the mandibular body in 200 patients (103 female patients and 97 male patients, age range 10-70 years) were retrospectively analysed to identify MLF. The canal number, location and direction were assessed. Additionally, the diameter of the lingual foramen, the distance between the alveolar crest and the lingual foramen, the distance between the tooth apex and the lingual foramen and the distance from the mandibular border to the lingual foramen were examined to describe the MLF characteristics. Gender and age differences with respect to foramina were also studied. RESULTS CBCT can be utilized to visualise lingual foramina. In this study, 683 lingual foramina were detected in 200 CBCT scans, with 538 (78.77%) being ≤1 mm in diameter and 145 (21.23%) being >1 mm. In total, 85.07% of MLF are median lingual canals (MLC) and 14.93% are lateral lingual canals (LLC). Two typical types of lingual foramina were identified according to their relationship with the tooth apex. Most lingual foramina (74.08%) were found below the tooth apex, and those above the tooth apex were much smaller in diameter. Male patients had statistically larger lingual foramina. The distance between the lingual foramen and the tooth apex changed with increasing age. CONCLUSIONS Determination of the presence, position and size of lingual foramina is important before performing a surgical procedure. Careful implant-prosthetic treatment planning is particularly important in male and/or elderly patients because of the structural characteristics of their lingual foramina.
Ca2+ signalling between single L-type Ca2+ channels and ryanodine receptors in heart cells
Ca2+-induced Ca2+ release is a general mechanism that most cells use to amplify Ca2+ signals. In heart cells, this mechanism is operated between voltage-gated L-type Ca2+ channels (LCCs) in the plasma membrane and Ca2+ release channels, commonly known as ryanodine receptors, in the sarcoplasmic reticulum. The Ca2+ influx through LCCs traverses a cleft of roughly 12 nm formed by the cell surface and the sarcoplasmic reticulum membrane, and activates adjacent ryanodine receptors to release Ca2+ in the form of Ca2+ sparks. Here we determine the kinetics, fidelity and stoichiometry of coupling between LCCs and ryanodine receptors. We show that the local Ca2+ signal produced by a single opening of an LCC, named a ‘Ca2+ sparklet’, can trigger about 4–6 ryanodine receptors to generate a Ca2+ spark. The coupling between LCCs and ryanodine receptors is stochastic, as judged by the exponential distribution of the coupling latency. The fraction of sparklets that successfully triggers a spark is less than unity and declines in a use-dependent manner. This optical analysis of single-channel communication affords a powerful means for elucidating Ca2+-signalling mechanisms at the molecular level.
A Survey of Fog Computing: Concepts, Applications and Issues
Despite the increasing usage of cloud computing, some issues remain unsolved due to inherent problems of cloud computing such as unreliable latency, lack of mobility support and lack of location-awareness. Fog computing can address those problems by providing elastic resources and services to end users at the edge of the network, while cloud computing is more about providing resources distributed in the core network. This survey discusses the definition of fog computing and similar concepts, introduces representative application scenarios, and identifies various issues that may arise when designing and implementing fog computing systems. It also highlights some opportunities and challenges, as directions for potential future work, in related techniques that need to be considered in the context of fog computing.
Eviza: A Natural Language Interface for Visual Analysis
Natural language interfaces for visualizations have emerged as a promising new way of interacting with data and performing analytics. Many of these systems have fundamental limitations. Most return minimally interactive visualizations in response to queries and often require experts to perform modeling for a set of predicted user queries before the systems are effective. Eviza provides a natural language interface for an interactive query dialog with an existing visualization rather than starting from a blank sheet and asking closed-ended questions that return a single text answer or static visualization. The system employs a probabilistic grammar based approach with predefined rules that are dynamically updated based on the data from the visualization, as opposed to computationally intensive deep learning or knowledge based approaches. The result of an interaction is a change to the view (e.g., filtering, navigation, selection) providing graphical answers and ambiguity widgets to handle ambiguous queries and system defaults. There is also rich domain awareness of time, space, and quantitative reasoning built in, and linking into existing knowledge bases for additional semantics. Eviza also supports pragmatics and exploring multi-modal interactions to help enhance the expressiveness of how users can ask questions about their data during the flow of visual analysis.
Implementation of a testbed with a hardware channel emulator for simulating the different atmospheric conditions to verify the transmitter and receiver of Optical Wireless systems
In connection with various international activities in the Optical Wireless Communications (OWC) field, Graz University of Technology (TUG) has extensive experience in developing high-data-rate transmission systems and is well known for measurements and analysis of the OWC channel. In this paper, a novel approach for testing Free Space Optical (FSO) systems under controlled laboratory conditions is proposed. Based on fibre-optics technology, the TUG testbed can effectively emulate the operation of real wireless optical communication systems together with various atmospheric perturbation effects such as fog and clouds. The suggested architecture uses an optical variable attenuator as the main device representing the tropospheric influences on the launched Gaussian beam in the free-space channel. In addition, the current scheme involves an attenuator control unit with an external Digital-to-Analog Converter (DAC) controlled by self-developed software. To obtain optimal results with the presented setup, a calibration process including linearization of the non-linear attenuation-versus-voltage curve is performed. Finally, analytical results of the attenuation based on real measurements with the hardware channel emulator under laboratory conditions are shown. The implementation can be used in further activities to verify OWC systems before testing under real conditions.
Natural language question answering over RDF: a graph data driven approach
RDF question/answering (Q/A) allows users to ask questions in natural language over a knowledge base represented in RDF. To answer a natural language question, existing work takes a two-stage approach: question understanding and query evaluation. Its focus is on question understanding, to deal with the disambiguation of natural language phrases. The most common technique is joint disambiguation, which has an exponential search space. In this paper, we propose a systematic framework to answer natural language questions over an RDF repository (RDF Q/A) from a graph data-driven perspective. We propose a semantic query graph to model the query intention of the natural language question in a structural way, based on which RDF Q/A is reduced to a subgraph matching problem. More importantly, we resolve the ambiguity of natural language questions at the time when matches of the query are found. The cost of disambiguation is saved if no matches are found. We compare our method with several state-of-the-art RDF Q/A systems on a benchmark dataset. Extensive experiments confirm that our method not only improves precision but also greatly speeds up query performance.
Adductor canal block versus femoral nerve block for analgesia after total knee arthroplasty: a randomized, double-blind study.
BACKGROUND AND OBJECTIVES Femoral nerve block (FNB), a commonly used postoperative pain treatment after total knee arthroplasty (TKA), reduces quadriceps muscle strength essential for mobilization. In contrast, adductor canal block (ACB) is predominantly a sensory nerve block. We hypothesized that ACB preserves quadriceps muscle strength as compared with FNB (primary end point) in patients after TKA. Secondary end points were effects on morphine consumption, pain, adductor muscle strength, morphine-related complications, and mobilization ability. METHODS We performed a double-blind, randomized, controlled study of patients scheduled for TKA with spinal anesthesia. The patients were randomized to receive either a continuous ACB or an FNB via a catheter (30-mL 0.5% ropivacaine given initially, followed by a continuous infusion of 0.2% ropivacaine, 8 mL/h for 24 hours). Muscle strength was assessed with a handheld dynamometer, and we used the percentile change from baseline for comparisons. The trial was registered at clinicaltrials.gov (Identifier: NCT01470391). RESULTS We enrolled 54 patients, of which 48 were analyzed. Quadriceps strength as a percentage of baseline was significantly higher in the ACB group compared with the FNB group: (median [range]) 52% [31-71] versus 18% [4-48], (95% confidence interval, 8-41; P = 0.004). There was no difference between the groups regarding morphine consumption (P = 0.94), pain at rest (P = 0.21), pain during flexion of the knee (P = 0.16), or adductor muscle strength (P = 0.39); neither was there a difference in morphine-related adverse effects or mobilization ability (P > 0.05). CONCLUSIONS Adductor canal block preserved quadriceps muscle strength better than FNB, without a significant difference in postoperative pain.
A current-mode buck converter with a pulse-skipping soft-start circuit
This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier. Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.
Vision-Based Offline-Online Perception Paradigm for Autonomous Driving
Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows retrieving the pre-computed accurate semantics and 3D geometry in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community.
Learning Discriminative Stein Kernel for SPD Matrices and Its Applications
Stein kernel (SK) has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: 1) eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation, and 2) more importantly, eigenvalues reflect only the property of an individual SPD matrix. They are not necessarily optimal for computing SK when the goal is to discriminate different classes of SPD matrices. To address the two issues, we propose a discriminative SK (DSK), in which an extra parameter vector is defined to adjust the eigenvalues of input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three kernel learning criteria that are commonly used in the literature are employed as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare the proposed DSK with the original SK and other methods for evaluating the similarity between SPD matrices. The results demonstrate that the DSK can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original SK and other commonly used methods.
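For reference, the (non-discriminative) Stein kernel between two SPD matrices X and Y is commonly written k(X, Y) = exp(-θ·S(X, Y)) with the Stein divergence S(X, Y) = log det((X + Y)/2) − ½·log det(XY). Below is a minimal 2×2 sketch of this standard formulation (not the paper's discriminative variant, which would first adjust the eigenvalues of the inputs with a learned parameter vector):

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def stein_kernel(x, y, theta=1.0):
    """Stein kernel k(X,Y) = exp(-theta * S(X,Y)) for 2x2 SPD matrices,
    where S(X,Y) = log det((X+Y)/2) - 0.5 * log det(X*Y).
    The discriminative SK (DSK) of the paper would rescale the
    eigenvalues of x and y with learned parameters before this step."""
    avg = [[(x[i][j] + y[i][j]) / 2 for j in range(2)] for i in range(2)]
    s = math.log(det2(avg)) - 0.5 * math.log(det2(x) * det2(y))
    return math.exp(-theta * s)

I = [[1.0, 0.0], [0.0, 1.0]]
X = [[2.0, 0.0], [0.0, 1.0]]
k_same = stein_kernel(I, I)   # identical inputs: divergence 0, kernel 1
k_diff = stein_kernel(X, I)   # distinct SPD matrices: kernel strictly below 1
```

Since the Stein divergence is zero only for identical matrices, the kernel attains its maximum of 1 on the diagonal, which the two calls above illustrate.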
Unsupervised Learning for Trustworthy IoT
The advancement of Internet-of-Things (IoT) edge devices with various types of sensors enables us to harness diverse information with Mobile Crowd-Sensing applications (MCS). This highly dynamic setting entails the collection of ubiquitous data traces, originating from sensors carried by people, introducing new information security challenges; one of them being the preservation of data trustworthiness. What is needed in these settings is the timely analysis of these large datasets to produce accurate insights on the correctness of user reports. Existing data mining and other artificial intelligence methods are the most popular to gain hidden insights from IoT data, albeit with many challenges. In this paper, we first model the cyber trustworthiness of MCS reports in the presence of intelligent and colluding adversaries. We then rigorously assess, using real IoT datasets, the effectiveness and accuracy of well-known data mining algorithms when employed towards IoT security and privacy. By taking into account the spatio-temporal changes of the underlying phenomena, we demonstrate how concept drifts can mask the existence of attackers and their impact on the accuracy of both the clustering and classification processes. Our initial set of results clearly shows that these unsupervised learning algorithms are prone to adversarial infection, thus magnifying the need for further research in the field by leveraging a mix of advanced machine learning models and mathematical optimization techniques.
Extension and validation of the GN model for non-linear interference to uncompensated links using Raman amplification.
We show the extension of the Gaussian Noise model, which describes non-linear propagation in uncompensated links of multilevel modulation formats, to systems using Raman amplification. We successfully validate the analytical results by comparison with numerical simulations of Nyquist-WDM PM-16QAM channels transmission over multi-span uncompensated links made of a single fiber type and using hybrid EDFA/Raman amplification with counter-propagating pumps. We analyze two typical high- and low-dispersion fiber types. We show that Raman amplification always induces a limited non-linear interference enhancement compared to the dominant ASE noise reduction.
Deep Learning with Dynamic Computation Graphs
Neural networks that compute over graph structures are a natural fit for problems in a variety of domains, including natural language (parse trees) and cheminformatics (molecular graphs). However, since the computation graph has a different shape and size for every input, such networks do not directly support batched training or inference. They are also difficult to implement in popular deep learning libraries, which are based on static data-flow graphs. We introduce a technique called dynamic batching, which not only batches together operations between different input graphs of dissimilar shape, but also between different nodes within a single input graph. The technique allows us to create static graphs, using popular libraries, that emulate dynamic computation graphs of arbitrary shape and size. We further present a high-level library of compositional blocks that simplifies the creation of dynamic graph models. Using the library, we demonstrate concise and batch-wise parallel implementations for a variety of models from the literature.
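The grouping step behind dynamic batching can be sketched as a toy scheduler: nodes at the same depth that apply the same operation are gathered into one batched step, regardless of which input graph they come from. The `node -> (op, inputs)` encoding below is an illustrative assumption, not the library's actual API:

```python
def dynamic_batches(graph):
    """Group the nodes of a computation graph into batched steps.

    graph maps node -> (op_name, list_of_input_nodes). Nodes at the same
    depth with the same op can execute as one batched operation, even if
    they belong to different input trees."""
    depth = {}
    def d(node):
        if node not in depth:
            _, inputs = graph[node]
            depth[node] = 1 + max((d(i) for i in inputs), default=0)
        return depth[node]
    for node in graph:
        d(node)
    batches = {}
    for node, (op, _) in graph.items():
        batches.setdefault((depth[node], op), []).append(node)
    # evaluation order: shallow levels first, one batched call per (depth, op)
    return [sorted(nodes) for _, nodes in sorted(batches.items())]

# Two independent parse trees; their leaves and merges batch together.
graph = {
    "a": ("embed", []), "b": ("embed", []),
    "c": ("embed", []), "d": ("embed", []),
    "ab": ("merge", ["a", "b"]), "cd": ("merge", ["c", "d"]),
}
steps = dynamic_batches(graph)
```

Here all four embedding lookups form one batched step and both merges form a second, which is exactly the kind of cross-tree batching the abstract describes.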
Offline EEG-based driver drowsiness estimation using enhanced batch-mode active learning (EBMAL) for regression
There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals. This paper considers offline analysis: given a pool of unlabeled EEG epochs recorded during driving, how do we optimally select a small number of them to label so that an accurate regression model can be built from them to label the rest? Active learning is a promising solution to this problem, but interestingly, to our best knowledge, it has not been used for regression problems in BCI so far. This paper proposes a novel enhanced batch-mode active learning (EBMAL) approach for regression, which improves upon a baseline active learning algorithm by increasing the reliability, representativeness and diversity of the selected samples to achieve better regression performance. We validate its effectiveness using driver drowsiness estimation from EEG signals. However, EBMAL is a general approach that can also be applied to many other offline regression problems beyond BCI.
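A minimal flavor of the representativeness-plus-diversity idea can be sketched in one dimension; this is a toy illustration only, not EBMAL's actual criteria, which operate on EEG feature vectors and additionally include a reliability check on the candidates:

```python
def select_batch(pool, k):
    """Greedy batch selection sketch: seed with the most representative
    sample (closest to the pool mean), then repeatedly add the candidate
    farthest from everything chosen so far (diversity)."""
    mean = sum(pool) / len(pool)
    chosen = [min(pool, key=lambda x: abs(x - mean))]
    while len(chosen) < k:
        cand = max((x for x in pool if x not in chosen),
                   key=lambda x: min(abs(x - c) for c in chosen))
        chosen.append(cand)
    return chosen

# three well-separated groups of unlabeled samples
pool = [0.0, 0.1, 0.2, 5.0, 5.1, 9.9, 10.0]
batch = select_batch(pool, 3)
```

The selected batch covers all three groups, so labeling it yields more information for the regression model than labeling three near-duplicates would.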
Polyps auto-detection in Wireless Capsule Endoscopy images using improved method based on image segmentation
Wireless Capsule Endoscopy (WCE) is a noninvasive instrument widely used to screen the whole intestine, and it has become a standard modality for examining gastrointestinal (GI) diseases. However, the large number of images produced by a WCE examination places a heavy burden on physicians. To ease this burden, it is necessary to combine manual diagnosis with image segmentation technology. In this paper we propose a feasible method for automatic polyp detection in WCE images using K-means clustering and localizing region-based active contour segmentation. Experimental results show that the method is promising and efficient.
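The clustering stage can be illustrated with a plain 1-D k-means over grayscale intensities. This is only a sketch of the idea of separating candidate polyp pixels from background before a contour refinement; the paper's pipeline operates on full WCE images:

```python
def kmeans_1d(pixels, k=2, iters=20):
    """Plain k-means on grayscale intensities: assign each pixel to its
    nearest center, then move each center to the mean of its members."""
    centers = [min(pixels), max(pixels)] if k == 2 else list(pixels[:k])
    labels = [0] * len(pixels)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(p - centers[c]))
                  for p in pixels]
        for c in range(k):
            members = [p for p, l in zip(pixels, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# bright polyp-like region (high intensities) vs. darker background
pixels = [12, 15, 14, 200, 210, 198, 11, 205]
labels, centers = kmeans_1d(pixels)
```

The resulting label mask is the kind of coarse segmentation that a localizing region-based active contour could then refine into a precise boundary.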
3D Semantic Segmentation with Submanifold Sparse Convolutional Networks
Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard "dense" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SS-CNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition.
Non-monopolizable caches: Low-complexity mitigation of cache side channel attacks
We propose a flexibly-partitioned cache design that either drastically weakens or completely eliminates cache-based side channel attacks. The proposed Non-Monopolizable (NoMo) cache dynamically reserves cache lines for active threads and prevents other co-executing threads from evicting reserved lines. Unreserved lines remain available for dynamic sharing among threads. NoMo requires only simple modifications to the cache replacement logic, making it straightforward to adopt. It requires no software support, enabling it to automatically protect pre-existing binaries. NoMo results in performance degradation of about 1% on average. We demonstrate that NoMo can provide strong security guarantees for the AES and Blowfish encryption algorithms.
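The reservation idea can be sketched as a toy model of one cache set. The class name, the LRU bookkeeping, and the API below are illustrative assumptions for exposition, not the paper's hardware replacement logic:

```python
class NoMoSet:
    """One set of a NoMo-style shared cache (a sketch, not the real design).

    Each thread implicitly keeps up to `reserved` ways that other threads
    may not evict; the remaining ways are shared under plain LRU."""
    def __init__(self, ways, threads, reserved):
        assert reserved * threads <= ways
        self.ways = ways
        self.reserved = reserved
        self.lines = []   # (thread, tag) pairs in LRU order, oldest first

    def _owned(self, thread):
        return sum(1 for t, _ in self.lines if t == thread)

    def access(self, thread, tag):
        if (thread, tag) in self.lines:      # hit: refresh LRU position
            self.lines.remove((thread, tag))
            self.lines.append((thread, tag))
            return True
        if len(self.lines) < self.ways:      # cold miss: take a free way
            self.lines.append((thread, tag))
            return False
        # Miss in a full set: evict the oldest line that is NOT protected,
        # i.e. skip victims whose owner would drop below its reservation.
        for victim in self.lines:
            t, _ = victim
            if t == thread or self._owned(t) > self.reserved:
                self.lines.remove(victim)
                break
        self.lines.append((thread, tag))
        return False

# 4-way set shared by 2 threads, 1 reserved way each.
s = NoMoSet(ways=4, threads=2, reserved=1)
for tag in range(4):
    s.access(0, tag)          # thread 0 fills the whole set
s.access(1, 100)              # thread 1 claims a way by evicting thread 0
for tag in range(10, 20):
    s.access(0, tag)          # thread 0 streams but cannot touch that way
survived = s.access(1, 100)   # thread 1's reserved line is still resident
```

Because thread 0's streaming misses can never evict thread 1's last reserved line, thread 0 cannot observe thread 1's access pattern through evictions of that line, which is the essence of the side-channel mitigation.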
Transforming Constructivist Learning into Action: Design Thinking in education
In the ever-changing society of the 21st century, there is a demand to equip students with meta-competences going beyond cognitive knowledge. Education, therefore, needs a transition from transferring knowledge to developing individual potentials with the help of constructivist learning. The advantages of constructivist learning, and the criteria for its realisation, have been well determined through theoretical findings in pedagogy (Reich 2008; de Corte, OECD 2010). However, the practical implementation leaves a lot to be desired (Gardner 2010; Wagner 2011). Knowledge acquisition is still fragmented into isolated subjects. Lesson layouts are not efficiently designed to help teachers execute holistic and interdisciplinary learning. As this paper shows, teachers have negative classroom experiences with project work and interdisciplinary teaching, owing to a constant feeling of uncertainty and chaos as well as the lack of a process to follow. We therefore conclude that there is a missing link between the theoretical findings and demands of pedagogy science and their practical implementation. We claim that Design Thinking, as a team-based learning process, offers teachers support towards practice-oriented and holistic modes of constructivist learning in projects. Our case study confirms an improvement of classroom experience for teacher and student alike when using Design Thinking. This leads to a positive attitude towards constructivist learning and an increase in its implementation in education. The ultimate goal of this paper is to show that Design Thinking empowers teachers to facilitate constructivist learning in order to foster 21st century skills. Introduction: The mandate of schools is to unfold the personality of every student and to build a strong character with a sense of responsibility for democracy and community. This implies developing skills of reflection, interpretation of different information, and other complex meta-competences. Science, business and social organisations alike describe a strong need for a set of skills and competences, often referred to as 21st century skills.
Similarity Flooding: A Versatile Graph Matching Algorithm and Its Application to Schema Matching
Matching elements of two data schemas or two data instances plays a key role in data warehousing, e-business, or even biochemical applications. In this paper we present a matching algorithm based on a fixpoint computation that is usable across different scenarios. The algorithm takes two graphs (schemas, catalogs, or other data structures) as input, and produces as output a mapping between corresponding nodes of the graphs. Depending on the matching goal, a subset of the mapping is chosen using filters. After our algorithm runs, we expect a human to check and if necessary adjust the results. As a matter of fact, we evaluate the ‘accuracy’ of the algorithm by counting the number of needed adjustments. We conducted a user study, in which our accuracy metric was used to estimate the labor savings that the users could obtain by utilizing our algorithm to obtain an initial matching. Finally, we illustrate how our matching algorithm is deployed as one of several high-level operators in an implemented testbed for managing information models and mappings.
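The fixpoint idea behind Similarity Flooding can be illustrated with a toy propagation loop. This is a drastically simplified sketch: the real algorithm builds a pairwise connectivity graph with edge-label-aware propagation coefficients and offers several normalization and filtering variants:

```python
from itertools import product

def similarity_flooding(edges_a, edges_b, nodes_a, nodes_b, iters=10):
    """Toy fixpoint similarity propagation: node pairs exchange similarity
    along matching edges, and values are renormalized by the maximum
    after every sweep until they stabilize."""
    sim = {p: 1.0 for p in product(nodes_a, nodes_b)}   # uniform start
    for _ in range(iters):
        nxt = dict(sim)
        for a, a2 in edges_a:
            for b, b2 in edges_b:
                # matched edges let their endpoint pairs reinforce each other
                nxt[(a2, b2)] += sim[(a, b)]
                nxt[(a, b)] += sim[(a2, b2)]
        top = max(nxt.values())
        sim = {p: v / top for p, v in nxt.items()}
    return sim

# Two tiny chain "schemas": x -> y and p -> q.
sim = similarity_flooding([("x", "y")], [("p", "q")], ["x", "y"], ["p", "q"])
```

After a few sweeps the structurally aligned pairs (x, p) and (y, q) dominate the mapping, while the crossed pairs decay toward zero, mirroring how the full algorithm surfaces plausible schema matches for a human to review.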
Implementation of detection and tracking mechanism for small UAS
Unmanned Aircraft Systems (UAS) are commonly used for video surveillance, providing valuable video data and reducing the risks associated with human operators. Thanks to these benefits, UAS traffic is nearly doubling every year. However, the risks associated with UAS are also growing. According to the FAA, the volume of air traffic will grow steadily, doubling in the next 20 years. Paired with the exponential growth of UAS traffic, the risk of collision is growing, as are privacy concerns. An effective UAS detection and/or tracking method is critically needed for air traffic safety. This research is aimed at developing a system that can identify/detect a UAS, which will subsequently enable countermeasures against the UAS. The proposed system identifies a UAS through various methods including image processing and mechanical tracking. Once a UAS is detected, a countermeasure can be employed along with the tracking system. In this research, we describe the design, algorithms, and implementation details of the system as well as some performance aspects. The proposed system will help keep malicious or harmful UAS away from restricted or residential areas.
Readmission after robot-assisted radical cystectomy: outcomes and predictors at 90-day follow-up.
OBJECTIVE To characterize the outcomes and predictors of readmission after robot-assisted radical cystectomy (RARC) during early (30-day) and late (31-90-day) postoperative periods. METHODS We retrospectively evaluated our prospectively maintained RARC quality assurance database of 272 consecutive patients operated between 2005 and 2012. We evaluated the relationship of readmission with perioperative outcomes and examined possible predictors during the postoperative period. RESULTS Overall 30- and 90-day mortality was 0.7% and 4.8%, respectively, with 25.5% patients readmitted within 90 days after RARC (61% of them were readmitted within 30 days and 39% were readmitted between 31-90 days postoperatively). Infection-related problems were the most common cause of readmission during early and late periods. Overall operative time and obesity were significantly associated with readmission (P = .034 and .033, respectively). Body mass index and female gender were independent predictors of 90-day readmission (P = .004 and .014, respectively). Having any type of complication correlated with 90-day readmission (P = .0045); meanwhile, when complications were graded on the basis of Clavien grading system, only grade 1-2 complications statistically correlated with readmission (P = .046). Four patients needed reoperation (2 patients in early "for appendicitis and adhesive small bowel obstruction" and 2 in late "for ureteroenteric stricture" readmission); meanwhile, 6 patients needed percutaneous procedures (4 patients in early "1 for anastomotic leak and 3 for pelvic collections" and 2 "for pelvic collections and ureterocutaneous fistula" in late readmission). CONCLUSION The rate of readmission within 90 days after RARC is significant. Female gender and body mass index are independent predictors of readmission. Outcomes at 90 days provide more thorough results, essential to proper patient counseling.
Introduction to the special section on educational data mining
Educational Data Mining (EDM) is an emerging multidisciplinary research area, in which methods and techniques for exploring data originating from various educational information systems have been developed. EDM is both a learning science, as well as a rich application area for data mining, due to the growing availability of educational data. EDM contributes to the study of how students learn, and the settings in which they learn. It enables data-driven decision making for improving the current educational practice and learning material. We present a brief overview of EDM and introduce four selected EDM papers representing a crosscut of different application areas for data mining in education.
PERFORMANCE MEASURES FOR INFORMATION EXTRACTION
While precision and recall have served the information extraction community well as two separate measures of system performance, we show that the F-measure, the weighted harmonic mean of precision and recall, exhibits certain undesirable behaviors. To overcome these limitations, we define an error measure, the slot error rate, which combines the different types of error directly, without having to resort to precision and recall as preliminary measures. The slot error rate is analogous to the word error rate that is used for measuring speech recognition performance; it is intended to be a measure of the cost to the user for the system to make the different types of errors.
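Under the usual slot-level bookkeeping (correct C, substitutions S, deletions D, insertions I), precision = C/(C+S+I), recall = C/(C+S+D), and the slot error rate divides the raw error count S+D+I by the number of reference slots. A minimal sketch, assuming slots are represented as name-to-value mappings:

```python
def slot_scores(reference, hypothesis):
    """Score a hypothesis slot filling against a reference (both dicts).

    Correct: slot in both with the same value. Substitution: slot in both
    with different values. Deletion: slot only in the reference.
    Insertion: slot only in the hypothesis."""
    correct = sum(1 for k, v in reference.items() if hypothesis.get(k) == v)
    substitutions = sum(1 for k, v in reference.items()
                        if k in hypothesis and hypothesis[k] != v)
    deletions = sum(1 for k in reference if k not in hypothesis)
    insertions = sum(1 for k in hypothesis if k not in reference)

    precision = correct / (correct + substitutions + insertions)
    recall = correct / (correct + substitutions + deletions)
    f_measure = 2 * precision * recall / (precision + recall)
    # Slot error rate combines the three error types directly,
    # normalized by the number of slots in the reference.
    ser = (substitutions + deletions + insertions) / len(reference)
    return precision, recall, f_measure, ser

ref = {"org": "Acme", "loc": "Paris", "date": "1999"}
hyp = {"org": "Acme", "loc": "London", "time": "noon"}
p, r, f, ser = slot_scores(ref, hyp)
```

Note how the example exposes the point of the abstract: precision, recall, and F-measure all equal 1/3 here, while the slot error rate reaches 1.0 because it charges the substitution, the deletion, and the insertion each at full cost.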
A quantitative analysis on microarchitectures of modern CPU-FPGA platforms
CPU-FPGA heterogeneous acceleration platforms have shown great potential for continued performance and energy efficiency improvement for modern data centers, and have captured great attention from both academia and industry. However, it is nontrivial for users to choose the right platform among various PCIe and QPI based CPU-FPGA platforms from different vendors. This paper aims to find out what microarchitectural characteristics affect the performance, and how. We conduct our quantitative comparison and in-depth analysis on two representative platforms: QPI-based Intel-Altera HARP with coherent shared memory, and PCIe-based Alpha Data board with private device memory. We provide multiple insights for both application developers and platform designers.
Building block of a programmable neuromorphic substrate: A digital neurosynaptic core
The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2mm2 using a 45nm SOI process and consumes just 45pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition and (iv) an autoassociative memory.
FPGA modeling of diverse superscalar processors
There is increasing interest in using Field Programmable Gate Arrays (FPGAs) as platforms for computer architecture simulation. This paper is concerned with modeling superscalar processors with FPGAs. To be transformative, the FPGA modeling framework should meet three criteria. (1) Configurable: The framework should be able to model diverse superscalar processors, like a software model. In particular, it should be possible to vary superscalar parameters such as fetch, issue, and retire widths, depths of pipeline stages, queue sizes, etc. (2) Automatic: The framework should be able to automatically and efficiently map any one of its superscalar processor configurations to the FPGA. (3) Realistic: The framework should model a modern superscalar microarchitecture in detail, ideally with prototype quality, to enable a new era and depth of microarchitecture research. A framework that meets these three criteria will enjoy the convenience of a software model, the speed of an FPGA model, and the experience of a prototype. This paper describes FPGA-Sim, a configurable, automatically FPGA-synthesizable, and register-transfer-level (RTL) model of an out-of-order superscalar processor. FPGA-Sim enables FPGA modeling of diverse superscalar processors out-of-the-box. Moreover, its direct RTL implementation yields the fidelity of a hardware prototype.
ROS-Based SLAM for a Gazebo-Simulated Mobile Robot in Image-Based 3D Model of Indoor Environment
At present, robot simulators have robust physics engines, high-quality graphics, and convenient user and graphical interfaces, which provide rich opportunities to substitute a real robot with its simulation model, supporting the calculation of robot locomotion from odometry and sensor data. This paper describes a Gazebo simulation approach to simultaneous localization and mapping (SLAM) based on the Robot Operating System (ROS) for a simulated mobile robot with a system of two scanning lasers, which moves in a 3D model of a realistic indoor environment. The image-based 3D model of a real room with obstacles was obtained from camera shots and reconstructed with Autodesk 123D Catch software, with meshing in MeshLab software. We use the existing Gazebo simulation of the Willow Garage Personal Robot 2 (PR2) with its sensor system, which facilitates the simulation of robot locomotion and sensor measurements for SLAM and navigation tasks. The ROS-based SLAM approach applies Rao-Blackwellized particle filters and laser data to locate the PR2 robot in an unknown environment and build a map. The Gazebo simulation of the PR2 robot's locomotion, sensor data, and SLAM algorithm is considered in detail. The results qualitatively demonstrate the fidelity of the simulated 3D room with obstacles to the ROS-calculated map obtained from the robot's laser system. This proves the feasibility of ROS-based SLAM with a Gazebo-simulated mobile robot in an image-based 3D model of a realistic indoor environment. The approach can be extended to further ROS-based robotic simulations with Gazebo, e.g., concerning the Russian android robot AR-601M.
Pelvic fractures: epidemiology and predictors of associated abdominal injuries and outcomes.
BACKGROUND Pelvic fractures are often associated with major intraabdominal injuries or severe bleeding from the fracture site. OBJECTIVE To study the epidemiology of pelvic fractures and identify important risk factors for associated abdominal injuries, bleeding, need for angiographic embolization, and death. METHODS Trauma registry study on pelvic fractures from blunt trauma. Stepwise logistic regression was used to identify risk factors of severe pelvic fractures, associated abdominal injuries, need for major blood transfusion, therapeutic embolization, and death from pelvic fracture. Adjusted relative risks and 95% confidence intervals were derived. RESULTS There were 16,630 trauma registry patients with blunt trauma, of whom 1,545 (9.3%) had a pelvic fracture. The incidence of abdominal injuries was 16.5%, and the most commonly injured organs were the liver (6.1%) and the bladder and urethra (5.8%). In severe pelvic fractures (Abbreviated Injury Scale [AIS] ≥ 4), the incidence of associated intraabdominal injuries was 30.7%, and the most commonly injured organs were the bladder and urethra (14.6%). Among the risk factors studied, motor vehicle crash was the only notable risk factor negatively associated with severe pelvic fracture. Major risk factors for associated liver injury were motor vehicle crash and pelvic AIS ≥ 4. Risk factors for major blood loss were age > 16 years, pelvic AIS ≥ 4, angiographic embolization, and Injury Severity Score (ISS) > 25. Age > 55 years was the only predictor of associated aortic injury. Factors associated with therapeutic angiographic embolization were pelvic AIS ≥ 4 and ISS > 25. The overall mortality was 13.5%, but only 0.8% died as a direct result of pelvic fracture. The only pronounced risk factor associated with mortality was ISS > 25.
CONCLUSIONS Some epidemiological variables are important risk factors for the severity of pelvic fractures, the presence of associated abdominal injuries, blood loss, and the need for angiography. These risk factors can help in selecting the most appropriate diagnostic and therapeutic interventions.
QR Code Image Correction based on Corner Detection and Convex Hull Algorithm
Since the angular deviation produced when shooting a QR code image with a camera causes geometric distortion of the image, traditional QR code correction algorithms produce distorted results. This paper therefore proposes an algorithm that combines corner detection with a convex hull algorithm. First, the collected QR code image with uneven illumination is binarized using local thresholding and mathematical morphology. Next, the outline of the QR code and the dots on it are found, and the distorted image is recovered by perspective collineation according to the proposed algorithm. Finally, experiments verify that the proposed algorithm correctly finds the four apexes of the QR code and achieves good geometric correction, significantly increasing the recognition rate of seriously distorted QR code images.
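As an illustration of the "perspective collineation" step, the sketch below estimates the 3x3 homography that maps four detected (possibly distorted) QR apexes onto the corners of a canonical square via the direct linear transformation (DLT). This is the generic textbook construction, not necessarily the paper's exact formulation, and the corner coordinates in the usage example are hypothetical.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 projective transform H with H*src[i] ~ dst[i]
    from four point correspondences, via direct linear transformation (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null-space vector of A: the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Hypothetical example: distorted QR apexes mapped to an upright unit square.
src = [(0.0, 0.0), (1.0, 0.0), (1.2, 1.1), (-0.1, 0.9)]
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography_from_points(src, dst)
```

Once H is known, every pixel of the distorted code can be resampled into the rectified square, which is the geometric-correction step the abstract describes.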
FIA: An Open Forensic Integration Architecture for Composing Digital Evidence
The analysis and value of digital evidence in an investigation has been a domain of discourse in the digital forensic community for several years. While many works have considered different approaches to modeling digital evidence, a comprehensive understanding of the process of merging different evidence items recovered during a forensic analysis is still a distant dream. With the advent of modern technologies, proactive measures are integral to keeping abreast of all forms of cyber crimes and attacks. This paper motivates the need to formalize the process of analyzing digital evidence from multiple sources simultaneously. We present the Forensic Integration Architecture (FIA), which provides a framework for abstracting evidence-source and storage-format information from digital evidence, and explore the concept of integrating evidence information from multiple sources. FIA identifies evidence information from multiple sources, enabling an investigator to build theories to reconstruct the past. FIA is hierarchically composed of multiple layers and adopts a technology-independent approach. It is also open and extensible, making it simple to adapt to technological changes. We present a case study using a hypothetical car theft case to demonstrate the concepts and illustrate the value FIA brings to the field.
Prevalence of oral mucositis, dry mouth, and dysphagia in advanced cancer patients
Oral symptoms can be a sign of an underlying systemic condition and have a significant impact on quality of life, nutrition, and cost of care. Although these lesions are often studied in the context of cancer treatment, information regarding oral symptoms in advanced cancer patients is scarce. The aim of this multicenter study was to determine the prevalence and characteristics of oral symptoms in a large population of advanced cancer patients. A consecutive sample of patients with advanced cancer was prospectively assessed for an observational study over a period of 6 months. At admission, the epidemiological characteristics, surgery or radiotherapy of the head and neck, and oncologic treatments in the last month were recorded. The presence of mucositis, dry mouth, and dysphagia was assessed by clinical examination and patients' reports, and their intensity was recorded. Patients were also asked whether they had any limitation of nutrition or hydration due to the local condition. Six hundred sixty-nine patients were surveyed in the period under consideration. The mean age was 72.1 years (SD 12.3), and 342 patients were male. The primary tumors are listed in Table 1. The prevalence of mucositis was 22.3 %. The symptom markedly reduced the ingestion of food or fluids and was statistically associated with the Karnofsky level and head and neck cancer. The prevalence of dry mouth was 40.4 %, with a mean intensity of 5.4 (SD 2.1). Several drugs were given concomitantly, particularly opioids (78 %), corticosteroids (75.3 %), and diuretics (70.2 %). Various, nonhomogeneous treatments were given for dry mouth, which was statistically associated with current or recent chemotherapy and hematological tumors. The prevalence of dysphagia was 15.4 %, with a mean intensity of 5.34 (SD 3). Dysphagia for liquids was observed in 52.4 % of cases.
A high level of limitation of oral nutrition due to dysphagia was found, and in 53.4 % of patients, alternative routes to the oral one were used. Dysphagia was statistically associated with the Karnofsky level and head and neck cancer. A strong relationship between the three oral symptoms was found. In advanced cancer patients, a range of oral problems may significantly impact physical, social, and psychological well-being to varying degrees. These symptoms should be carefully assessed early; assessment becomes imperative in the palliative care setting, where they produce relevant consequences that may be life-threatening as well as limiting daily activities, particularly eating and drinking.
Predictors for length of hospital stay in patients with community-acquired pneumonia: results from a Swiss multicenter study
BACKGROUND Length of hospital stay (LOS) in patients with community-acquired pneumonia (CAP) is variable and directly related to medical costs. Accurate estimation of LOS on admission and during follow-up may result in earlier and more efficient discharge strategies. METHODS This is a prospective multicenter study including patients in the emergency departments of 6 tertiary care hospitals in Switzerland between October 2006 and March 2008. Medical history, clinical data at presentation, and health care insurance class were collected. We calculated univariate and multivariate Cox regression models to assess the association of different characteristics with LOS. In a split-sample analysis, we created two LOS prediction rules, the first including only admission data and the second also including additional inpatient information. RESULTS The mean LOS in the 875 included CAP patients was 9.8 days (95% CI 9.3-10.4). Older age, respiratory rate >20 breaths per minute, nursing home residence, chronic pulmonary disease, diabetes, multilobar CAP, and pneumonia severity index class were independently associated with longer LOS in the admission prediction model. When follow-up information was also considered, low albumin levels, ICU transfer, and development of CAP-associated complications were additional independent risk factors for prolonged LOS. Both weighted clinical prediction rules based on these factors showed a high separation of patients in Kaplan-Meier curves (log-rank p < 0.001 for both) and good calibration when comparing predicted and observed results. CONCLUSIONS In this study, we identified different baseline and follow-up characteristics as strong and independent predictors of LOS. If validated in future studies, these factors may help optimize discharge strategies and thus shorten LOS in CAP patients.
Recent Developments in Classical and Quantum Theories of Connections Including General Relativity
General relativity can be recast as a theory of connections by performing a canonical transformation on its phase space. In this form, its (kinematical) structure is closely related to that of Yang-Mills theory and topological field theories. Over the past few years, a variety of techniques have been developed to quantize all these theories non-perturbatively. These developments are summarized with special emphasis on loop space methods and their applications to quantum gravity.
Stereotypes and the Media: A Re-evaluation
It is a commonplace that the mass media are populated with stereotypes. They are readily recognized on television, where their frequency has been ceaselessly documented by researchers. Why, then, return to the problem of defining stereotypes at this time? I believe that by reevaluating and clarifying the term we can improve the way we study the media, particularly television, in the academy, in our research, and in our teaching. The study of stereotypes provides a point of intersection between quantitative and qualitative research, between social science and humanities perspectives, between the cultural studies and administrative approaches. Assumptions about stereotyping influence the way we think about media effects, uses and gratifications, and the ideological analysis of television. While television content analysis has been useful, even essential, its methods could be refined if researchers were to scrutinize their use of the concept of stereotype. Scholars in social psychology, mass communications, and popular culture have used the term differently and often approach different areas in their research: the audience, for social psychologists; television in general, for mass communications researchers; and specific texts and genres, for popular culture critics. In each case, the definition of a
Experiences before things: a primer for the (yet) unconvinced
While things (i.e., technologies) play a crucial role in creating and shaping meaningful, positive experiences, their true value lies only in the resulting experiences. It is about what we can do and experience with a thing, about the stories unfolding through using a technology, not about its styling, material, or impressive list of features. This paper explores the notion of "experiences" further: from the link between experiences, well-being, and people's developing post-materialistic stance to the challenges of the experience market and the experience-driven design of technology.
A compact microstrip antenna with tapered peripheral slits for CubeSat RF Payloads at 436MHz: Miniaturization techniques, design & numerical results
We elaborate on the design and simulation of a planar antenna suitable for CubeSat picosatellites. The antenna operates at 436 MHz, and its main features are its miniature size and built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e., dielectric loading and distortion of the current path, and we have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.
Inventing the Louvre: art, politics, and the origins of the modern museum in eighteenth-century Paris
Founded in the final years of the Enlightenment, the Louvre--with the greatest collection of Old Master paintings and antique sculpture assembled under one roof--became the model for all state art museums subsequently established. Andrew McClellan chronicles the formation of this great museum from its origins in the French royal picture collections to its apotheosis during the Revolution and Napoleonic Empire. More than a narrative history, McClellan's account explores the ideological underpinnings, pedagogic aims, and aesthetic criteria of the Louvre. Drawing on new archival materials, McClellan also illuminates the art world of eighteenth-century Paris.
Thinking Positively - Explanatory Feedback for Conversational Recommender Systems
When it comes to buying expensive goods, people expect to be skillfully steered through the options by well-informed sales assistants capable of balancing the user's many and varied requirements. In addition, users often need to be educated about the product space, especially if they are to understand what is available and why certain options are being recommended. The same issues arise in interactive recommender systems, our online equivalent of a sales assistant, and explanation in recommender systems, as a means to educate users and justify recommendations, is now well accepted. In this paper we focus on a novel approach to explanation. Instead of attempting to justify a particular recommendation, we focus on how explanations can help users understand the recommendation opportunities that remain if the current recommendation does not meet their requirements. We describe how this approach to explanation is tightly coupled with the generation of compound critiques, which act as a form of feedback for users, and we argue that these explanation-rich critiques have the potential to dramatically improve recommender performance and usability.
A 1.9nJ/pixel embedded deep neural network processor for high speed visual attention in a mobile vision recognition SoC
An energy-efficient Deep Neural Network (DNN) processor is proposed as a high-speed Visual Attention (VA) engine in a mobile vision SoC. The proposed embedded DNN realizes VA to rapidly find ROI tiles of potential target objects, reducing ~70% of the recognition workload of the vision processor. Compared to previous VA, the DNN VA reduces execution time by 90%, which results in a 73.4% reduction in overall object recognition (OR) time. Highly parallel 200-way PEs are implemented in the DNN processor with a 2D image-sliding architecture, achieving a DNN VA latency of only 3 ms. A dual-mode PE configuration is also proposed so that DNN and multi-layer perceptron (MLP) operation share the same hardware for high energy efficiency. As a result, the proposed work achieves an energy efficiency of 1.9 nJ/pixel, which is 7.7x lower than the state-of-the-art VA accelerator.
Interference-driven resource management for GPU-based heterogeneous clusters
GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to "fill" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime. In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.
The Singer of Tales in Performance
Preface I. Common Ground: Oral-Formulaic Theory and the Ethnography of Speaking II. Ways of Speaking, Ways of Meaning III. The Rhetorical Persistence of Traditional Forms IV. Spellbound: The Serbian Tradition of Magical Charms V. Continuities of Reception: The Homeric Hymn to Demeter VI. Indexed Translation: The Poet's Self-Interruption in the Old English Andreas Conclusion Bibliography Index
Learning inter-related visual dictionary for object recognition
Object recognition is challenging, especially when objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm that exploits the visual correlation within a group of visually similar object categories, modeling a commonly shared dictionary together with multiple category-specific dictionaries. To enhance the discrimination of the dictionaries, dictionary learning is formulated as a joint optimization with an added discriminative term based on the Fisher discrimination criterion. In addition to the JDL model, a classification scheme is developed to take better advantage of the multiple trained dictionaries. The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.
An Optimized Transformerless Photovoltaic Grid-Connected Inverter
Unipolar sinusoidal pulsewidth modulation (SPWM) full-bridge inverters produce high-frequency common-mode voltage, which restricts their application in transformerless photovoltaic grid-connected inverters. To solve this problem, an optimized full-bridge structure with two additional switches and a capacitor divider is proposed in this paper, which guarantees that the freewheeling path is clamped to half the input voltage during the freewheeling period. Consequently, high-frequency common-mode voltage is avoided in the unipolar SPWM full-bridge inverter, and the output current flows through only three switches in the power processing period. In addition, a clamping branch makes the voltage stress of the added switches equal to half the input voltage. The operation and clamping modes are analyzed, and the total power-device losses of several existing topologies and the proposed topology are calculated for a fair comparison. Finally, the common-mode performance of these topologies is compared on a universal prototype inverter rated at 1 kW.
A de novo interstitial deletion of 2p23.3-24.3 in a boy presenting with intellectual disability, overgrowth, dysmorphic features, skeletal myopathy, and dilated cardiomyopathy.
Interstitial deletions of the distal part of chromosome 2p are rare, with only six reported cases involving regions from 2p23 to 2pter. Most of these were cytogenetic investigations. We describe a 14-year-old boy with an 8.97 Mb deletion of 2p23.3-24.3 detected by array comparative genomic hybridization (array CGH) who had intellectual disability (ID), unusual facial features, cryptorchidism, skeletal myopathy, dilated cardiomyopathy (DCM), and postnatal overgrowth (macrocephaly and tall stature). We compared the clinical features of the present case to previously described patients with an interstitial deletion within this chromosomal region and conclude that our patient exhibits a markedly different phenotype. Additional patients are needed to further delineate phenotype-genotype correlations.
Analytical Methods for Minimizing Cogging Torque in Permanent-Magnet Machines
Cogging torque in permanent-magnet machines causes torque and speed ripples, as well as acoustic noise and vibration, especially in low speed and direct drive applications. In this paper, a general analytical expression for cogging torque is derived by the energy method and the Fourier series analysis, based on the air gap permeance and the flux density distribution in an equivalent slotless machine. The optimal design parameters, such as slot number and pole number combination, skewing, pole-arc to pole-pitch ratio, and slot opening, are derived analytically to minimize the cogging torque. Finally, the finite-element analysis is adopted to verify the correctness of analytical methods.
Prediction and explanation in social systems
Historically, social scientists have sought out explanations of human and social phenomena that provide interpretable causal mechanisms, while often ignoring their predictive accuracy. We argue that the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction; however, it has also highlighted three important issues that require resolution. First, current practices for evaluating predictions must be better standardized. Second, theoretical limits to predictive accuracy in complex social systems must be better characterized, thereby setting expectations for what can be predicted or explained. Third, predictive accuracy and interpretability must be recognized as complements, not substitutes, when evaluating explanations. Resolving these three issues will lead to better, more replicable, and more useful social science.
Analysis of chronic low back pain with magnetic resonance imaging T2 mapping of lumbar intervertebral disc.
BACKGROUND Magnetic resonance imaging (MRI) T2 mapping utilizes T2 values to quantify moisture content and collagen sequence breakdown. Recently, attempts to quantify lumbar disc degeneration through MRI T2 mapping have been reported. We analyzed the relationship between T2 values of degenerated intervertebral discs (IVD) and chronic low back pain (CLBP). METHODS The subjects with CLBP comprised 28 patients (15 male, 13 female; mean age 48.9 ± 9.6 years; range 22-60 years). All subjects underwent MRI and filled out the low back pain visual analog scale (VAS) and the Japanese Orthopaedic Association Back Pain Evaluation Questionnaire (JOABPEQ). Each disc was divided into the anterior annulus fibrosus (AF), the nucleus pulposus (NP), and the posterior AF, and the T2 value of each region was measured. The study also involved 25 asymptomatic control participants matched with the CLBP group for gender and age (13 male, 12 female; mean age 43.8 ± 14.5 years; range 23-60 years); these subjects had no low back pain and constituted the control group. RESULTS T2 values for IVD tended to be lower in the CLBP group than in the control group, and the difference was significant within the posterior AF. The correlation coefficients between the VAS scores and T2 values of the anterior AF, NP, and posterior AF were r = 0.30, -0.15, and -0.50, respectively. The correlation coefficients between the JOABPEQ scores (low back pain) and T2 values of the anterior AF, NP, and posterior AF were r = -0.0041, 0.11, and 0.42; for the JOABPEQ scores (lumbar function), they were r = -0.22, -0.12, and 0.57. CONCLUSIONS The results indicated a correlation between posterior AF degeneration and CLBP. This study suggests that MRI T2 mapping could be used as a quantitative method for diagnosing discogenic pain.
Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors.
A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
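To make the regression step concrete, here is a minimal numpy sketch of estimating BP from PAT plus the two confounding factors (HR and the TDB timing feature) by ordinary least squares on synthetic data. The feature ranges, coefficients, and noise level are invented for illustration only; they are not the paper's values or units of clinical validity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical per-beat features (ranges are illustrative assumptions):
pat = rng.uniform(0.15, 0.35, n)    # pulse arrival time, s
hr  = rng.uniform(55, 110, n)       # heart rate, bpm
tdb = rng.uniform(0.10, 0.30, n)    # dicrotic-notch timing feature (TDB), s

# Synthetic "ground truth" systolic BP, mmHg, with small measurement noise:
sbp = 160.0 - 180.0 * pat + 0.25 * hr - 40.0 * tdb + rng.normal(0.0, 1.0, n)

# Multiple regression: SBP ~ b0 + b1*PAT + b2*HR + b3*TDB
X = np.column_stack([np.ones(n), pat, hr, tdb])
coef, *_ = np.linalg.lstsq(X, sbp, rcond=None)

# Correlation between estimated and "measured" BP, as in the evaluation.
pred = X @ coef
r = np.corrcoef(pred, sbp)[0, 1]
```

Dropping the `hr` and `tdb` columns from `X` reproduces the PAT-only baseline the paper compares against, which is where the confounding factors show their benefit.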
A White-Box DES Implementation for DRM Applications
For digital rights management (DRM) software implementations incorporating cryptography, white-box cryptography (cryptographic implementation designed to withstand the white-box attack context) is more appropriate than traditional black-box cryptography. In the white-box context, the attacker has total visibility into software implementation and execution. Our objective is to prevent extraction of secret keys from the program. We present methods to make such key extraction difficult, with a focus on symmetric block ciphers implemented by substitution boxes and linear transformations. A DES implementation (also useful for Triple-DES) is presented as a concrete example.
Fault Ride-Through of a DFIG Wind Turbine Using a Dynamic Voltage Restorer During Symmetrical and Asymmetrical Grid Faults
The application of a dynamic voltage restorer (DVR) connected to a wind-turbine-driven doubly fed induction generator (DFIG) is investigated. The setup allows the wind turbine system an uninterruptible fault ride-through of voltage dips. The DVR can compensate the faulty line voltage, while the DFIG wind turbine can continue its nominal operation as demanded in actual grid codes. Simulation results for a 2 MW wind turbine and measurement results on a 22 kW laboratory setup are presented, especially for asymmetrical grid faults. They show the effectiveness of the DVR in comparison to the low-voltage ride-through of the DFIG using a crowbar that does not allow continuous reactive power production.
The effect of single set resistance training on strength and functional fitness in pulmonary rehabilitation patients.
PURPOSE The primary goal of pulmonary rehabilitation (PR) is for patients to achieve and maintain their maximum level of independence and functioning in the community. Traditional PR uses a predominantly aerobic/endurance approach to rehabilitation with little or no inclusion of exercises to increase strength. Few studies have investigated the impact of resistance training on PR despite growing evidence supporting its efficacy to improve physical function (functional fitness) in both healthy individuals and those with chronic disease. The purpose of this study was to investigate the effect of single-set resistance training on strength and functional fitness outcomes in PR patients. METHODS Twenty PR patients, 60 to 81 years old, were randomly assigned to an 8-week endurance-based PR program (ET) or an ET plus resistance training program (RT). RESULTS Strength increased in RT (P < .05) and decreased in ET for both upper and lower body. Functional fitness improved (P < .05) in 5 of 7 tests for RT compared with 2 tests for ET. CONCLUSIONS Single set RT can elicit significant improvements in both strength and functional fitness, which is not obtained by traditional PR alone. Our results are comparable to other studies with similar outcomes using multiple-set RT protocols. These findings may have important implications for program design, application, and adherence in PR.
The role of parent depressive symptoms in positive and negative parenting in a preventive intervention.
This study examined the role of parent depressive symptoms as a mediator of change in behaviorally observed positive and negative parenting in a preventive intervention program. The purpose of the program was to prevent child problem behaviors in families with a parent who has current or a history of major depressive disorder. One hundred eighty parents and one of their 9- to 15-year-old children served as participants and were randomly assigned to a family group cognitive-behavioral (FGCB) intervention or a written information (WI) comparison condition. At two months after baseline, parents in the FGCB condition had fewer depressive symptoms than those in the WI condition, and these symptoms served as a mediator for changes in negative, but not positive, parenting at 6 months after baseline. The findings indicate that parent depressive symptoms are important to consider in family interventions with a parent who has current or a history of depression.
A Parallel Apriori Algorithm for Frequent Itemsets Mining
Finding frequent itemsets is one of the most investigated fields of data mining. The Apriori algorithm is the most established algorithm for frequent itemsets mining (FIM). Several implementations of the Apriori algorithm have been reported and evaluated. One implementation by Bodon, which optimizes the data structure with a trie, caught our attention: Bodon's implementation for finding frequent itemsets appears to be faster than those by Borgelt and Goethals. In this paper, we revise Bodon's implementation into a parallel one in which input transactions are read by a parallel computer. The effect of a parallel computer on this modified implementation is presented.
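As a sketch of the counting scheme, the following Python implements level-wise Apriori in which candidate counting is split across transaction partitions and the partial counts are then reduced, mimicking how the work would be distributed across processors. For simplicity the partitions are processed sequentially here, the classical subset-pruning step is omitted, and Bodon's trie optimization is not reproduced; all of this is an illustrative assumption, not the paper's implementation.

```python
from itertools import combinations
from collections import Counter

def count_partition(candidates, transactions):
    """Map step: count candidate itemsets in one partition of transactions."""
    c = Counter()
    for t in transactions:
        ts = set(t)
        for cand in candidates:
            if cand <= ts:
                c[cand] += 1
    return c

def apriori(transactions, minsup, n_parts=2):
    """Level-wise Apriori; counting is partitioned to mimic the parallel
    scheme (each partition would run on its own processor)."""
    items = sorted({i for t in transactions for i in t})
    candidates = [frozenset([i]) for i in items]
    frequent = {}
    k = 1
    while candidates:
        # "Parallel" counting: partition the transactions, count locally,
        # then reduce the partial Counters into a global count.
        parts = [transactions[i::n_parts] for i in range(n_parts)]
        total = Counter()
        for p in parts:
            total.update(count_partition(candidates, p))
        level = {c: n for c, n in total.items() if n >= minsup}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets
        # (subset pruning omitted for brevity).
        prev = list(level)
        candidates = list({a | b for a, b in combinations(prev, 2)
                           if len(a | b) == k + 1})
        k += 1
    return frequent
```

Replacing the sequential loop over `parts` with a process pool is the step that turns this sketch into a genuinely parallel counter.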
Natural Language Processing for Intelligent Access to Scientific Information
During the last decade, the amount of scientific information available online has increased at an unprecedented rate. As a consequence, researchers are nowadays overwhelmed by an enormous and continuously growing number of articles to consider when they perform research activities such as exploring advances in specific topics, peer reviewing, and writing and evaluating proposals. Natural Language Processing technology represents a key enabling factor in providing scientists with intelligent ways to access scientific information. Extracting information from scientific papers, for example, can contribute to the development of rich scientific knowledge bases, which can be leveraged to support intelligent knowledge access and question answering. Summarization techniques can reduce the size of long papers to their essential content or automatically generate state-of-the-art reviews. Paraphrase or textual entailment techniques can contribute to the identification of relations across different scientific textual sources. This tutorial provides an overview of the most relevant tasks related to the processing of scientific documents, including but not limited to the in-depth analysis of the structure of scientific articles, their semantic interpretation, content extraction, and summarization.
Robust real-time performance-driven 3D face tracking
We introduce a novel robust hybrid 3D face tracking framework for RGBD video streams, capable of tracking head pose and facial actions without pre-calibration or intervention from the user. In particular, we emphasize improving tracking performance in instances where the tracked subject is at a large distance from the camera and the quality of the point cloud deteriorates severely. This is accomplished by combining a flexible 3D shape regressor with joint 2D+3D optimization of the shape parameters. Our approach fits facial blendshapes to the point cloud of the human head while being driven by an efficient and rapid 3D shape regressor trained on generic RGB datasets. As an online tracking system, the identity of the unknown user is adapted on the fly, resulting in improved 3D model reconstruction and consequently better tracking performance. The result is a robust RGBD face tracker capable of handling a wide range of target scene depths, whose performance, as demonstrated in our extensive experiments, exceeds that of the state of the art.
Sample-based motion planning in high-dimensional and differentially-constrained systems
State-of-the-art sample-based path planning algorithms, such as the Rapidly-exploring Random Tree (RRT), have proven to be effective in path planning for systems subject to complex kinematic and geometric constraints. The performance of these algorithms, however, degrades as the dimension of the system increases. Furthermore, sample-based planners rely on distance metrics which do not work well when the system has differential constraints. Such constraints are particularly challenging in systems with non-holonomic and underactuated dynamics. This thesis develops two intelligent sampling strategies to help guide the search process. To reduce sensitivity to dimension, sampling can be done in a low-dimensional task space rather than in the high-dimensional state space. Altering the sampling strategy in this way creates a Voronoi bias in task space, which helps to guide the search, while the RRT continues to verify trajectory feasibility in the full state space. Fast path planning is demonstrated using this approach on a 1500-link manipulator. To enable task-space biasing for underactuated systems, a hierarchical task-space controller is developed by utilizing partial feedback linearization. Another sampling strategy is also presented, where the local reachability of the tree is approximated and used to bias the search for systems subject to differential constraints. Reachability guidance is shown to improve search performance of the RRT by an order of magnitude when planning on a pendulum and a non-holonomic car. The ideas of task-space biasing and reachability guidance are then combined for demonstration of a motion planning algorithm implemented on LittleDog, a quadruped robot. The motion planning algorithm successfully planned bounding trajectories over extremely rough terrain.
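The basic RRT loop that the thesis builds on can be sketched as follows. This is a minimal sketch for a 2D point robot in an obstacle-free square, not the thesis's task-space or reachability-guided variants; the workspace bounds, step size, and goal tolerance are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iters=4000, seed=0):
    """Minimal 2D RRT sketch: grow a tree from `start` until a node
    lands within `goal_tol` of `goal`, then return the path."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Uniform random sample; this is what induces the Voronoi bias:
        # nodes bordering large unexplored regions are extended most often.
        q = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        # Nearest tree node under the Euclidean metric.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        # Steer a fixed step from the nearest node toward the sample.
        d = math.dist(nodes[i], q)
        t = min(1.0, step / d) if d > 0 else 0.0
        new = (nodes[i][0] + t * (q[0] - nodes[i][0]),
               nodes[i][1] + t * (q[1] - nodes[i][1]))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the root to extract the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The thesis's task-space biasing would replace the sampling line with a draw from a lower-dimensional task space, while feasibility checking stays in the full state space.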
Implementation of an energy monitoring and control device based on IoT
Energy monitoring and conservation hold prime importance in today's world because of the imbalance between power generation and demand. The current scenario is that generated power, which is primarily contributed by fossil fuels, may be exhausted within the next 20 years. Currently, there are very accurate electronic energy monitoring systems available in the market. Most of these monitor the power consumed in a domestic household in the case of residential applications. Consumers are often dissatisfied with the power bill because it does not show the power consumed at the device level. This paper presents the design and implementation of an energy meter using an Arduino microcontroller which can be used to measure the power consumed by any individual electrical appliance. The Internet of Things (IoT) is an emerging field, and IoT-based devices have created a revolution in electronics and IT. The main intention of the proposed energy meter is to monitor the power consumption at the device level, upload it to the server, and establish remote control of any appliance. The energy monitoring system precisely calculates the power consumed by various electrical devices and displays it through a home energy monitoring website. The advantage of this device is that a user can see the power consumed by any electrical appliance on the website and can take further steps to control it, thus helping energy conservation. Furthermore, users can monitor the power consumption as well as the bill on a daily basis.
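The server-side aggregation described above can be sketched as follows. This is a hedged illustration of how per-device power samples could be integrated into energy and a daily bill, not the paper's actual implementation; the function names, sampling scheme, and flat tariff are assumptions (real tariffs are usually slabbed).

```python
def energy_kwh(power_samples_w, dt_s):
    """Integrate instantaneous power samples (watts), taken every dt_s
    seconds, into energy in kilowatt-hours."""
    joules = sum(power_samples_w) * dt_s   # rectangle rule is enough for a sketch
    return joules / 3.6e6                  # 1 kWh = 3.6e6 J

def daily_bill(kwh, rate_per_kwh):
    """Daily cost at an assumed flat tariff."""
    return kwh * rate_per_kwh
```

For example, a 60 W appliance sampled once per second for an hour accumulates 0.06 kWh, which the website could display alongside the running bill.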
Accuracy and consensus in judgments of trustworthiness from faces: behavioral and neural correlates.
Perceivers' inferences about individuals based on their faces often show high interrater consensus and can even accurately predict behavior in some domains. Here we investigated the consensus and accuracy of judgments of trustworthiness. In Study 1, we showed that the type of photo judged makes a significant difference for whether an individual is judged as trustworthy. In Study 2, we found that inferences of trustworthiness made from the faces of corporate criminals did not differ from inferences made from the faces of noncriminal executives. In Study 3, we found that judgments of trustworthiness did not differ between the faces of military criminals and the faces of military heroes. In Study 4, we tempted undergraduates to cheat on a test. Although we found that judgments of intelligence from the students' faces were related to students' scores on the test and that judgments of students' extraversion were correlated with self-reported extraversion, there was no relationship between judgments of trustworthiness from the students' faces and students' cheating behavior. Finally, in Study 5, we examined the neural correlates of the accuracy of judgments of trustworthiness from faces. Replicating previous research, we found that perceptions of trustworthiness from the faces in Study 4 corresponded to participants' amygdala response. However, we found no relationship between the amygdala response and the targets' actual cheating behavior. These data suggest that judgments of trustworthiness may not be accurate but, rather, reflect subjective impressions for which people show high agreement.
Treatment of blastic phase chronic myeloid leukemia with mitoxantrone, cytosine arabinoside and high dose methylprednisolone.
Fourteen patients with blastic phase chronic myelogenous leukemia received combination chemotherapy with mitoxantrone 5 mg/m2 intravenously daily for 3 days, cytosine arabinoside 100 mg/m2 intravenously over 2 hours twice daily for 7 days, and high dose methylprednisolone 1000 mg/day intravenously for 5 days. The patients' mean age was 52 +/- 10 years (range 34-64) and the Philadelphia chromosome was positive in all. Five patients (35%) achieved complete remission and four patients (28%) had a partial remission. The overall remission rate was 64%. The mean survival was 11.1 +/- 8.6 months (median 13) for all patients, 19.4 +/- 4.0 months (median 19) for those achieving a complete remission, 12.5 +/- 5.7 months (median 14) for patients with partial remission, and 1.8 +/- 1.8 months (median 2) for the unresponsive patients. Two of 5 unresponsive patients died early after the second course of remission induction. The treatment regimen was generally well tolerated. Marrow hypoplasia was observed in 9 (64%) patients and 7 (50%) had febrile episodes. Non-myelosuppressive toxicity of the regimen was acceptable. Nausea and vomiting were observed in 8 (57%) patients and 3 (21%) patients developed flushing due to cytosine arabinoside. These results suggest that the regimen of mitoxantrone, cytosine arabinoside and high dose methylprednisolone for remission induction in blastic phase chronic myelogenous leukemia may be a valid option that could also improve overall prognosis.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks, demonstrating their applicability as general image representations.
Bachelor Degree Project Hierarchical Temporal Memory Software Agent
Artificial general intelligence is not well defined, but attempts such as the recent list of "Ingredients for building machines that think and learn like humans" are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with its research into re-engineering principles of the neocortex. It remains to be explored how these ingredients align with the design principles of Numenta's algorithms. Inspired by DeepMind's commentary about an autonomy ingredient, this project created a combination of Numenta's Hierarchical Temporal Memory theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open source software package, based on Numenta's intelligent computing platform NUPIC and OpenAI's framework Universe, was developed to allow further research into HTM-based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and that there is potential for generalization inherent to sparse representations. However, they also reveal the infancy of the algorithms, which are not capable of learning dynamic complex problems, and show that much future research is needed to explore whether they can create scalable solutions towards a more general intelligent system.
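The Temporal Difference component combined with HTM in this project can be illustrated with the generic tabular TD(0) update, V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)). This is a hedged sketch of plain TD(0) value estimation, not the HTM-based agent itself; the episode encoding and hyperparameters are assumptions.

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    """Tabular TD(0) value estimation from episodes of
    (state, reward, next_state) transitions; next_state None marks
    a terminal step."""
    V = {}
    for episode in episodes:
        for s, r, s2 in episode:
            v_next = 0.0 if s2 is None else V.get(s2, 0.0)
            # TD(0) update toward the bootstrapped one-step target.
            V[s] = V.get(s, 0.0) + alpha * (r + gamma * v_next - V.get(s, 0.0))
    return V
```

On a two-state chain A -> B -> terminal with reward 1 on the final step, repeated training drives V(B) toward 1 and V(A) toward gamma * V(B).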
Interaction criticism and aesthetics
As HCI becomes more self-consciously implicated in culture, theories from cultural studies, in particular aesthetics and critical theory, are increasingly working their way into the field. However, the use of aesthetics and critical theory in HCI remains both marginal and uneven in quality. This paper explores the state of the art of aesthetics and critical theory in the field, before going on to explore the role of these cultural theories in the analysis and deployment of the twin anchors of interaction: the user and the artifact. It concludes with a proposed mapping of aesthetics and critical theory into interaction design, both as a practice and as a discipline.
A new family of growth factors produced by the fat body and active on Drosophila imaginal disc cells.
By fractionating conditioned medium (CM) from Drosophila imaginal disc cell cultures, we have identified a family of Imaginal Disc Growth Factors (IDGFs), which are the first polypeptide growth factors to be reported from invertebrates. The active fraction from CM, as well as recombinant IDGFs, cooperate with insulin to stimulate the proliferation, polarization and motility of imaginal disc cells. The IDGF family in Drosophila includes at least five members, three of which are encoded by three genes in a tight cluster. The proteins are structurally related to chitinases, but they show an amino acid substitution that is known to abrogate catalytic activity. It therefore seems likely that they have evolved from chitinases but acquired a new growth-promoting function. The IDGF genes are expressed most strongly in the embryonic yolk cells and in the fat body of the embryo and larva. The predicted molecular structure, expression patterns, and mitogenic activity of these proteins suggest that they are secreted and transported to target tissues via the hemolymph. However, the genes are also expressed in embryonic epithelia in association with invagination movements, so the proteins may have local as well as systemic functions. Similar proteins are found in mammals and may constitute a novel class of growth factors.
Cloture Votes: n/4-resilient Distributed Consensus in t + 1 rounds
The Distributed Consensus problem involves n processors, each of which holds an initial binary value. At most t processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that the nonfaulty processors eventually agree on a value that was initially held by one of them. We measure the quality of a consensus protocol using the following parameters: total number of processors n, number of rounds of message exchange r, and maximal message size m. The known lower bounds are respectively 3t + 1, t + 1, and 1. While no known protocol is optimal in all three aspects simultaneously, Cloture Votes, the protocol presented in this paper, takes further steps in this direction by making consensus possible with n = 4t + 1, r = t + 1, and polynomial message size. Cloture is a parliamentary procedure (also known as a "parliamentary guillotine") which makes it possible to curtail unnecessarily long debates. In our protocol the unanimous will of the correct processors (akin to a parliamentary supermajority) may curtail the debate. This is facilitated by having the processors open in each round a new process (debate), which either ends quickly, with the conclusion "continue" or "terminate with the default value," or lasts through many rounds. Importantly, in the latter case the messages being sent are short.
Orthogonal RNNs and Long-Memory Tasks
Although RNNs have been shown to be powerful tools for processing sequential data, finding architectures or optimization strategies that allow them to model very long term dependencies is still an active area of research. In this work, we carefully analyze two synthetic datasets originally outlined in (Hochreiter & Schmidhuber, 1997) which are used to evaluate the ability of RNNs to store information over many time steps. We explicitly construct RNN solutions to these problems, and using these constructions, illuminate both the problems themselves and the way in which RNNs store different types of information in their hidden states. These constructions furthermore explain the success of recent methods that specify unitary initializations or constraints on the transition matrices.
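The appeal of unitary or orthogonal transition matrices mentioned above is that they preserve the norm of the hidden state under repeated application, so information neither explodes nor decays over many time steps. The sketch below illustrates this with the linear core of an RNN (activations omitted), contrasting a 2x2 rotation (orthogonal) with a contractive matrix; the matrices and step counts are illustrative assumptions, not constructions from the paper.

```python
import math

def apply(mat, vec):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (mat[0][0] * vec[0] + mat[0][1] * vec[1],
            mat[1][0] * vec[0] + mat[1][1] * vec[1])

def norm_after(mat, vec, steps):
    """Hidden-state norm after `steps` applications of the transition
    matrix (the linear recurrence h_t = W h_{t-1})."""
    for _ in range(steps):
        vec = apply(mat, vec)
    return math.hypot(*vec)

theta = 0.3
rot = [[math.cos(theta), -math.sin(theta)],   # orthogonal: a rotation
       [math.sin(theta),  math.cos(theta)]]
scaled = [[0.9, 0.0],                          # contractive: eigenvalues < 1,
          [0.0, 0.9]]                          # information decays each step
```

After 1000 steps the rotation leaves the norm at 1, while the contractive matrix has shrunk it by a factor of 0.9^1000; this is the mechanism behind unitary initializations and constraints on transition matrices.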
Social Comparisons and Contributions to Online Communities: A Field Experiment on MovieLens
We design a field experiment to explore the use of social comparison to increase contributions to an online community. We find that, after receiving behavioral information about the median user's total number of movie ratings, users below the median demonstrate a 530% increase in the number of monthly movie ratings, while those above the median do not necessarily decrease their ratings. When given outcome information about the average user's net benefit score, above-average users mainly engage in activities that help others. Our findings suggest that effective personalized social information can increase the level of public goods provision.
Cost-effectiveness of first- v. second-generation antipsychotic drugs: results from a randomised controlled trial in schizophrenia responding poorly to previous therapy.
BACKGROUND There are claims that the extra costs of atypical (second-generation) antipsychotic drugs over conventional (first-generation) drugs are offset by improved health-related quality of life. AIMS To determine the relative costs and value of treatment with conventional or atypical antipsychotics in people with schizophrenia. METHOD Cost-effectiveness acceptability analysis integrated clinical and economic randomised controlled trial data of conventional and atypical antipsychotics in routine practice. RESULTS Conventional antipsychotics had lower costs and higher quality-adjusted life-years (QALYs) than atypical antipsychotics and were more than 50% likely to be cost-effective. CONCLUSIONS The primary and sensitivity analyses indicated that conventional antipsychotics may be cost-saving and associated with a gain in QALYs compared with atypical antipsychotics.
Verbal memory and verbal fluency tasks used for language localization and lateralization during magnetoencephalography
OBJECTIVE The aim of this study was to develop a presurgical magnetoencephalography (MEG) protocol to localize and lateralize expressive and receptive language function as well as verbal memory in patients with epilepsy. Two simple language tasks and a different analytical procedure were developed. METHODS Ten healthy participants and 13 epileptic patients completed two language tasks during MEG recording: a verbal memory task and a verbal fluency task. As a first step, principal component analyses (PCA) were performed on source data from the group of healthy participants to identify spatiotemporal factors that were relevant to these paradigms. Averaged source data were used to localize areas activated during each task and a laterality index (LI) was computed on an individual basis for both groups, healthy participants and patients, using sensor data. RESULTS PCA revealed activation in the left temporal lobe (300 ms) during the verbal memory task, and from the frontal lobe (210 ms) to the temporal lobe (500 ms) during the verbal fluency task in healthy participants. Averaged source data showed activity in the left hemisphere (250-750 ms), in Wernicke's area, for all participants. Left hemisphere dominance was demonstrated better using the verbal memory task than the verbal fluency task (F1,19=4.41, p=0.049). Cohen's kappa statistic revealed 93% agreement (k=0.67, p=0.002) between LIs obtained from MEG sensor data and fMRI, the IAT, electrical cortical stimulation or handedness with the verbal memory task for all participants. At 74%, agreement results for the verbal fluency task did not reach statistical significance. SIGNIFICANCE Analysis procedures yielded interesting findings with both tasks and localized language-related activation. However, based on source localization and laterality indices, the verbal memory task yielded better results in the context of the presurgical evaluation of epileptic patients. 
The verbal fluency task did not add any further information to the verbal memory task as regards language localization and lateralization for most patients and healthy participants that would facilitate decision making prior to surgery.
Reactive power capability of the wind turbine with Doubly Fed Induction Generator
With their increasing integration into power grids, wind power plants play an important role in the power system. Many requirements for wind power plants have been set out in grid codes. According to these grid codes, wind power plants should have the ability to perform voltage control and reactive power compensation at the point of common coupling (PCC). Besides shunt flexible alternating current transmission system (FACTS) devices such as the static var compensator (SVC) and the static synchronous compensator (STATCOM), the wind turbine itself can also provide a certain amount of reactive power compensation, depending on the wind speed and the active power control strategy. This paper analyzes the reactive power capability of the Doubly Fed Induction Generator (DFIG) based wind turbine, considering the rated stator current limit, the rated rotor current limit, the rated rotor voltage limit, and the reactive power capability of the grid side converter (GSC). The boundaries of the reactive power capability of the DFIG based wind turbine are derived. The results were obtained using MATLAB.
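Two of the limits the paper considers, the stator and rotor current limits, are commonly approximated as circles in the stator P-Q plane: the stator limit is a circle centered at the origin, and the rotor limit a circle displaced by the magnetizing reactive power. The sketch below computes the resulting reactive power range at a given active power under this simplified circle model; it neglects stator resistance, the rotor voltage limit, and the GSC contribution, and the per-unit parameter values in the test are illustrative, not from the paper.

```python
import math

def q_limits(p, u_s, i_s_max, i_r_max, x_s, x_m):
    """Reactive power range [q_min, q_max] of the DFIG stator at active
    power p, as the intersection of the stator-current-limit circle
    (center origin, radius 3*U_s*I_s,max) and the rotor-current-limit
    circle (center Q = -3*U_s^2/X_s, radius 3*X_m*U_s*I_r,max/X_s)."""
    s_stator = 3.0 * u_s * i_s_max
    r_rotor = 3.0 * x_m * u_s * i_r_max / x_s
    q_mag = -3.0 * u_s ** 2 / x_s           # magnetizing offset of rotor circle
    if abs(p) > min(s_stator, r_rotor):
        return None                          # this active power is infeasible
    a = math.sqrt(s_stator ** 2 - p ** 2)    # stator circle half-chord at p
    b = math.sqrt(r_rotor ** 2 - p ** 2)     # rotor circle half-chord at p
    return (max(-a, q_mag - b), min(a, q_mag + b))
```

Under this model the rotor current limit typically binds on the capacitive (over-excited) side, because the rotor circle is shifted downward by the magnetizing demand, which matches the asymmetric capability curves usually reported for DFIGs.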
Incorporation of inhaled insulin into the FDA accepted University of Virginia/Padova Type 1 Diabetes Simulator
The University of Virginia/Padova Type 1 Diabetes (T1DM) Simulator has been extensively used in artificial pancreas research, mostly for the testing and design of control algorithms. However, it also offers the possibility of testing new insulin analogs and alternative routes of delivery, given that subcutaneous insulin administration presents significant delays and variability. Inhaled insulin appears to be an important candidate for improving post-prandial glucose control given its rapid appearance in plasma. In this contribution, we present the results of incorporating a pharmacokinetic model of inhaled Technosphere® Insulin (TI) into the T1DM simulator. In particular, we successfully reproduced in silico the post-prandial glucose control observed in T1DM subjects treated with TI given at meal time, and the post-prandial glucose dynamics in response to different timing of the TI dose.
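The rapid plasma appearance that motivates inhaled insulin can be illustrated with a generic one-compartment pharmacokinetic model with first-order absorption and elimination (the Bateman equation). This is a hedged sketch, not the actual Technosphere® Insulin model incorporated in the simulator; the bioavailability, volume, and rate constants below are illustrative assumptions.

```python
import math

def plasma_conc(t, dose, f=0.3, v=12.0, ka=0.05, ke=0.01):
    """Plasma concentration at time t (min) after an inhaled dose:
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
    with bioavailability F, distribution volume V (L), absorption
    rate ka and elimination rate ke (1/min). Assumes ka != ke."""
    return (f * dose * ka) / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))
```

In this model the peak occurs at t_max = ln(ka/ke)/(ka - ke); a larger absorption rate ka, as with an inhaled route, moves the peak earlier, which is the mechanism behind the improved post-prandial control discussed above.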
Security Issues and Solutions in Wireless Sensor Networks
This paper surveys the wide and varied areas of application that wireless sensor networks serve today, from military surveillance and smart home automation to medical and environmental monitoring, and explains why security remains a primary concern for these networks, discussing existing solutions, outlining open security issues, and suggesting possible directions for research. Because of some unique characteristics of these networks, they face new security threats compared with traditional networks, so a detailed study of the threats, risks and attacks is needed in order to arrive at proper security solutions. The paper presents these unique characteristics and how they give rise to new threats, together with the security goals and requirements that must be kept in mind when designing security solutions for such networks. It also describes the various attacks that are possible at important layers such as the data-link, network, physical and transport layers.
Unified formulation of a class of image thresholding techniques
In this paper, we show that Otsu's image thresholding, Kittler and Illingworth's minimum error thresholding, and Huang and Wang's fuzzy thresholding methods can be derived under a similar mathematical formulation. The difference among the three methods is the choice of different weighting functions for computing a criterion function that can be considered as a weighted summation of the image gray level histogram. We can have a better understanding of the three thresholding techniques and derive other thresholding methods based on this unified formulation. Copyright © 1996 Pattern Recognition Society. Published by Elsevier Science Ltd. Keywords: Image thresholding; Otsu's method; Minimum error thresholding; Fuzzy thresholding method; Unified formulation of thresholding methods.
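Of the three methods unified above, Otsu's is the most direct to state: choose the threshold that maximizes the between-class variance w0*w1*(mu0 - mu1)^2 computed from the gray-level histogram. The sketch below is a plain histogram-based implementation of that criterion (pure Python for clarity, not optimized); the paper's point is that the other two methods arise from the same weighted-histogram criterion with different weighting functions.

```python
def otsu_threshold(hist):
    """Otsu's threshold on a gray-level histogram `hist`, where hist[g]
    is the count of pixels with gray level g. Returns the level t that
    maximizes the between-class variance; levels <= t form class 0."""
    total = sum(hist)
    total_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t, h in enumerate(hist[:-1]):        # last level cannot split
        w0 += h                              # class-0 weight
        cum += t * h                         # class-0 gray-level mass
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum / w0, (total_sum - cum) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a clearly bimodal histogram, the maximizing t falls in the valley between the two modes, separating the two pixel populations.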
Review: Existing Image Segmentation Techniques
Image segmentation is one of the most important steps in digital image processing. Segmentation divides a digital image into multiple sets of pixels corresponding to regions or objects, and is generally required to cut out a region of interest (ROI) from an image. Currently, many different algorithms are available for image segmentation, each with its own advantages and purpose. In this paper, different image segmentation algorithms and their prospects are reviewed.
Chapter 2 Electric Vehicle Battery Technologies
As discussed in the previous chapter, electrification is the most viable way to achieve the clean and efficient transportation that is crucial to the sustainable development of the whole world. In the near future, electric vehicles (EVs) including hybrid electric vehicles (HEVs), plug-in hybrid electric vehicles (PHEVs), and pure battery electric vehicles (BEVs) will dominate the clean vehicle market [1, 2]. By 2020, it is expected that more than half of new vehicle sales will likely be EV models. The key and enabling technology for this revolutionary change is the battery. The importance of batteries to EVs has been demonstrated throughout history. The first EV was seen on the road shortly after the invention of rechargeable lead–acid batteries and electric motors in the late 1800s [4]. The early 1900s were a golden period for EVs: at that time, the number of EVs was almost double that of gasoline-powered cars. However, EVs almost disappeared and gave the whole market to