Healthcare process analysis: the use of simulation to evaluate hospital operations between the emergency department and a medical telemetry unit
This paper presents a simulation model of the operations in the Emergency Department (ED) and Medical Telemetry (Med Tele) Units at Rush North Shore Medical Center. The model allows management to see the operations of both units as well as how the processes of each unit impact the other. Due to the large amount of variability that can take place within these units, Rush North Shore Medical Center along with Cap Gemini Ernst & Young sought the use of simulation to help evaluate their operations and provide insight into possible areas for improvement. Rockwell Automation created a model which depicts the current operations and evaluates possible alternatives to reduce the length of stay in the ED and improve operations. Using simulation, the hospital was able to select two to three key changes, rather than creating more stress with ten or more changes, to get the same result.
Endonasal endoscopic surgery for squamous cell carcinoma of the sinonasal cavities and skull base: Oncologic outcomes based on treatment strategy and tumor etiology.
BACKGROUND Oncologic outcomes for sinonasal and skull base squamous cell carcinoma (SCC) treated with an endoscopic endonasal approach (EEA) need investigation. METHODS Patients with SCC treated with EEA were stratified by treatment strategy and tumor etiology and reviewed. RESULTS Thirty-four patients were treated with EEA, of which 27 had definitive resection and 7 had debulking surgery. In the definitive group, 17 had de novo tumors and 10 had tumors arising from inverted papilloma. Definitive resection was associated with better 5-year disease-free survival (DFS) and overall survival (OS) than debulking (62% vs 17%; p = .02; and 78% vs 30%; p = .03). Patients with de novo tumors had 5-year DFS and OS similar to those with tumors arising from inverted papilloma (62% vs 62%; p = .75; and 75% vs 86%; p = .24). CONCLUSION Definitive resection of sinonasal SCC with EEA provides sound oncologic outcomes. SCC arising from inverted papilloma does not have prognostic significance.
NONPARAMETRIC APPROACHES FOR TRAINING DEEP GENERATIVE NETWORKS
Algorithms that learn to generate data whose distribution matches that of the training data, such as generative adversarial networks (GANs), have been a focus of much recent work in deep unsupervised learning. Unfortunately, GAN models have drawbacks, such as unstable training due to the min-max optimization formulation and the issue of zero gradients. To address these problems, we explore and develop a new family of nonparametric objective functions and corresponding training algorithms to train a DNN generator that learns the probability distribution of the training data. Preliminary results presented in the paper demonstrate that the proposed approach converges faster and the trained models provide very good quality results even with a small number of iterations. Special cases of our formulation yield new algorithms for the Wasserstein and the MMD metrics. We also develop a new algorithm based on the Prokhorov metric between distributions, which we believe can provide promising results on certain kinds of data. We conjecture that the nonparametric approach for training DNNs can provide a viable alternative to the popular GAN formulations.
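To make the MMD special case mentioned above concrete, the sketch below estimates the squared maximum mean discrepancy between a batch of real samples and a batch of generated samples with a Gaussian kernel; in a full training loop this scalar would serve as the generator loss. This is an illustrative sketch only, not the authors' algorithm; the kernel choice, bandwidth, and sample shapes are assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets of shapes (n, d) and (m, d)."""
    sq_dists = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2.0 * x @ y.T
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2(real, fake, bandwidth=1.0):
    """Biased estimate of the squared MMD between the real and generated distributions."""
    k_rr = gaussian_kernel(real, real, bandwidth)
    k_ff = gaussian_kernel(fake, fake, bandwidth)
    k_rf = gaussian_kernel(real, fake, bandwidth)
    return k_rr.mean() + k_ff.mean() - 2.0 * k_rf.mean()

# Toy check: samples from the same distribution give a small MMD,
# samples from a shifted distribution give a larger one.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(256, 2))
fake_close = rng.normal(0.0, 1.0, size=(256, 2))
fake_far = rng.normal(3.0, 1.0, size=(256, 2))
print(mmd2(real, fake_close), mmd2(real, fake_far))
```

In a nonparametric training scheme of this kind, the generator parameters would be updated to decrease such a distance between generated and real batches, with no discriminator network involved.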
Proactive Threat Detection for Connected Cars Using Recursive Bayesian Estimation
Upcoming disruptive technologies around autonomous driving of connected cars have not yet been matched with appropriate security-by-design principles, and approaches to incorporate proactive preventative measures in the wake of increased cyber-threats against such systems are lacking. In this paper, we introduce proactive anomaly detection to a use case of hijacked connected cars to improve cyber-resilience. First, we establish the opportunity of behavioral profiling for connected cars from recent literature covering related underpinning technologies. Then, we design and utilize a new data set for connected cars, influenced by the automatic dependent surveillance–broadcast (ADS-B) surveillance technology used in the aerospace industry, to facilitate data collection and sharing. Finally, we simulate the analysis of travel routes in real time to predict anomalies using predictive modeling. Simulations show the applicability of a Bayesian estimation technique, namely the Kalman filter. By analyzing predictions of future states based on previous behavior, cyber-threats can be addressed with a vastly increased time window for reaction when anomalies are encountered. We argue that detecting real-time deviations indicating malicious intent with predictive profiling and behavioral algorithms can be more effective than the retrospective comparison of known-good/known-bad behavior. When quicker action can be taken while connected cars encounter cyberattacks, more effective engagement or interception of command and control will be achieved.
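To make the Kalman-filter idea concrete, here is a minimal sketch (not the paper's implementation) of recursive Bayesian estimation for a one-dimensional constant-velocity vehicle track: each new position report is compared against the filter's prediction, and a report whose innovation exceeds a threshold is flagged as a potential anomaly such as a hijacked or spoofed route. The state layout, noise covariances, threshold, and sample route are illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is observed
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)
THRESHOLD = 3.0                         # flag if the innovation exceeds 3 predicted std devs

def kalman_step(x, P, z):
    # Predict the next state and covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Innovation (difference between report and prediction) and its variance
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    anomalous = abs(y.item()) > THRESHOLD * np.sqrt(S.item())
    # Update with the new measurement
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred, anomalous

x, P = np.array([[0.0], [1.0]]), np.eye(2)    # initial estimate: position 0, speed 1
route = [1.0, 2.1, 2.9, 4.2, 12.0, 6.1]       # the jump to 12.0 mimics a spoofed/hijacked report
for t, z in enumerate(route):
    x, P, flag = kalman_step(x, P, np.array([[z]]))
    print(f"t={t} report={z} anomaly={flag}")
```

The widened reaction window described in the abstract comes from the prediction step: a deviation can be flagged as soon as a report disagrees with the predicted state, rather than after matching it against a library of known-bad behavior.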
Multimodal Deep Domain Adaptation
Typically, a classifier trained on a given dataset (source domain) does not perform well when tested on data acquired in a different setting (target domain). This is the problem that domain adaptation (DA) tries to overcome and, while it is a well-explored topic in computer vision, it is largely ignored in robotic vision, where visual classification methods are usually trained and tested in the same domain. Robots should be able to deal with unknown environments, recognize objects and use them in the correct way, so it is important to explore the domain adaptation scenario in this context as well. The goal of the project is to define a benchmark and a protocol for multimodal domain adaptation that is valuable for the robot vision community. With this purpose, some of the state-of-the-art DA methods are selected: Deep Adaptation Network (DAN), Domain Adversarial Training of Neural Networks (DANN), Automatic Domain Alignment Layers (AutoDIAL) and Adversarial Discriminative Domain Adaptation (ADDA). Evaluations have been done using different data types: RGB only, depth only and RGB-D, over the following datasets designed for the robotic community: RGB-D Object Dataset (ROD), Web Object Dataset (WOD), Autonomous Robot Indoor Dataset (ARID), Big Berkeley Instance Recognition Dataset (BigBIRD) and Active Vision Dataset. Although progress has been made on the formulation of effective adaptation algorithms and more realistic object datasets are available, the results obtained show that training a sufficiently good object classifier, especially in the domain adaptation scenario, is still an unsolved problem. The best way to combine depth with RGB information to improve performance is also a point that needs further investigation.
Ontology expansion: appending with extracted sub-ontology
Reusing ontologies that already exist on the Web is needed to speed up the ontology construction process. However, current ontology mapping approaches assume that the two input ontologies are given, not discovered from the semantic data available on the Web. In this paper, we present a new approach that integrates ontology selection, mapping, and merging processes in order to minimise human mediation. Our ontology selection mechanism accepts a hand-crafted ontology as a query ontology to search for relevant, already constructed ontologies. In addition, we develop a ranking method based on the syntactic and semantic structure of classes to provide the best search result to the user.
Advanced 0.13um smart power technology from 7V to 70V
This paper presents a BCD process integrating 7V to 70V power devices on a 0.13um CMOS platform for various power management applications. BJT, Zener diode and Schottky diode are available, and non-volatile memory is embedded as well. LDMOS shows best-in-class specific on-resistance (R_SP) vs. BV_DSS characteristics (i.e., the 70V NMOS has an R_SP of 69 mΩ·mm² with a BV_DSS of 89V). A modular process scheme is used for flexibility toward the various requirements of applications.
A Master Attack Methodology for an AI-Based Automated Attack Planner for Smart Cities
America’s critical infrastructure is becoming “smarter” and increasingly dependent on highly specialized computers called industrial control systems (ICS). Networked ICS components, now called the industrial Internet of Things (IIoT), are at the heart of the “smart city”, controlling critical infrastructure such as CCTV security networks, electric grids, water networks, and transportation systems. Without the continuous, reliable functioning of these assets, economic and social disruption will ensue. Unfortunately, IIoT are hackable and difficult to secure from cyberattacks. This leaves our future smart cities in a state of perpetual uncertainty and creates the risk that the stability of our lives will be upended. Local government has largely been absent from conversations about cybersecurity of critical infrastructure, despite its importance. One reason for this is that public administrators do not have a good way of knowing which assets, and which components of those assets, are at the greatest risk. This is further complicated by the highly technical nature of the tools and techniques required to assess these risks. Using artificial intelligence planning techniques, an automated tool can be developed to evaluate the cyber risks to critical infrastructure. It can be used to automatically identify the adversarial strategies (attack trees) that can compromise these systems. This tool can enable both security novices and specialists to identify attack pathways. We propose and provide an example of an automated attack generation method that can produce detailed, scalable, and consistent attack trees – the first step in securing critical infrastructure from cyberattack.
Chunk-based Decoder for Neural Machine Translation
Chunks (or phrases) once played a pivotal role in machine translation. By using a chunk rather than a word as the basic translation unit, local (intra-chunk) and global (inter-chunk) word orders and dependencies can be easily modeled. The chunk structure, despite its importance, has not been considered in the decoders used for neural machine translation (NMT). In this paper, we propose chunk-based decoders for NMT, each of which consists of a chunk-level decoder and a word-level decoder. The chunk-level decoder models global dependencies while the word-level decoder decides the local word order in a chunk. To output a target sentence, the chunk-level decoder generates a chunk representation containing global information, which the word-level decoder then uses as a basis to predict the words inside the chunk. Experimental results show that our proposed decoders can significantly improve translation performance in a WAT '16 English-to-Japanese translation task.
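As a rough, simplified sketch of the two-level idea described above (not the authors' exact architecture), the PyTorch module below pairs a chunk-level GRU cell, which carries global state across chunks, with a word-level GRU cell that emits the words inside each chunk conditioned on the current chunk representation. The dimensions, the way states are passed between levels, and the teacher-forcing interface are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ChunkBasedDecoder(nn.Module):
    """Toy two-level decoder: a chunk-level GRU for inter-chunk (global) dependencies,
    a word-level GRU for intra-chunk (local) word order."""
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.chunk_cell = nn.GRUCell(hid_dim, hid_dim)            # consumes the last word-level state
        self.word_cell = nn.GRUCell(emb_dim + hid_dim, hid_dim)   # word embedding + chunk representation
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, enc_summary, target_chunks):
        # enc_summary: (batch, hid_dim) sentence encoding from some encoder (assumed given).
        # target_chunks: list of (batch, chunk_len) LongTensors used for teacher forcing.
        chunk_h, word_h, logits = enc_summary, enc_summary, []
        for chunk in target_chunks:
            chunk_h = self.chunk_cell(word_h, chunk_h)   # chunk representation with global information
            word_h = chunk_h                             # word-level decoder starts from it
            for t in range(chunk.size(1)):
                inp = torch.cat([self.embed(chunk[:, t]), chunk_h], dim=-1)
                word_h = self.word_cell(inp, word_h)
                logits.append(self.out(word_h))          # scores for the next word inside the chunk
        return torch.stack(logits, dim=1)                # (batch, total_words, vocab)
```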
BaNa: A Noise Resilient Fundamental Frequency Detection Algorithm for Speech and Music
Fundamental frequency (F0) is one of the essential features in many acoustic related applications. Although numerous F0 detection algorithms have been developed, the detection accuracy in noisy environments still needs improvement. We present a hybrid noise resilient F0 detection algorithm named BaNa that combines the approaches of harmonic ratios and Cepstrum analysis. A Viterbi algorithm with a cost function is used to identify the F0 value among several F0 candidates. Speech and music databases with eight different types of additive noise are used to evaluate the performance of the BaNa algorithm and several classic and state-of-the-art F0 detection algorithms. Results show that for almost all types of noise and signal-to-noise ratio (SNR) values investigated, BaNa achieves the lowest Gross Pitch Error (GPE) rate among all the algorithms. Moreover, for the 0 dB SNR scenarios, the BaNa algorithm is shown to achieve 20% to 35% GPE rate for speech and 12% to 39% GPE rate for music. We also describe implementation issues that must be addressed to run the BaNa algorithm as a real-time application on a smartphone platform.
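The final stage described above, choosing one F0 value per frame from several candidates with a Viterbi-style cost, can be illustrated with a small dynamic-programming sketch. The cost used here (an octave-jump penalty between consecutive frames plus a per-candidate confidence cost) and the toy numbers are assumptions for illustration, not the actual BaNa cost function.

```python
import numpy as np

def select_f0_path(candidates, confidences, jump_weight=1.0):
    """candidates/confidences: per-frame lists of F0 candidates (Hz) and their costs.
    Returns one F0 value per frame minimizing confidence cost plus frequency-jump cost."""
    n_frames = len(candidates)
    costs = [np.asarray(confidences[0], dtype=float)]
    back = []
    for t in range(1, n_frames):
        prev_f = np.asarray(candidates[t - 1], dtype=float)
        cur_f = np.asarray(candidates[t], dtype=float)
        # Transition cost: distance in log-frequency penalizes octave jumps.
        trans = jump_weight * np.abs(np.log2(cur_f[:, None]) - np.log2(prev_f[None, :]))
        total = trans + costs[-1][None, :]               # shape (current, previous)
        back.append(np.argmin(total, axis=1))
        costs.append(np.min(total, axis=1) + np.asarray(confidences[t], dtype=float))
    # Backtrack the cheapest path through the candidates.
    idx = int(np.argmin(costs[-1]))
    path = [candidates[-1][idx]]
    for t in range(n_frames - 2, -1, -1):
        idx = int(back[t][idx])
        path.append(candidates[t][idx])
    return path[::-1]

# Toy example: the last frame contains a spurious half-frequency candidate that the path avoids.
cands = [[200.0, 100.0], [205.0, 102.0], [100.0, 210.0]]
confs = [[0.1, 0.5], [0.1, 0.5], [0.2, 0.3]]
print(select_f0_path(cands, confs))   # expected: [200.0, 205.0, 210.0]
```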
Study of six essential oils: chemical composition and antibacterial activity
Essential oils are used in traditional medicine for their antiseptic action. During previous studies, the Thymus genus had shown good results as an antifungal. Research continued into its antibacterial activity. Six essential oils (Lavandula angustifolia, Lavandula latifolia, Origanum vulgare, Rosmarinus officinalis, Thymus vulgaris chemotype carvacrol, Thymus zygis chemotype thymol) were tested on two strains: Escherichia coli and Staphylococcus aureus. Essential oils of oregano and thyme with thymol were the most effective against the strain Escherichia coli. Against the strain Staphylococcus aureus, efficacy was lower; essential oil of oregano was the most active. It is the phenols present in the essential oils that exhibit good antibacterial activity.
The effect of hydroxyapatite coating on the fixation of hip prostheses
The efficacy of a total hip replacement with a hydroxyapatite (HA)-coated hip prosthesis was compared with that of an uncoated, cementless prosthesis of the same type. Preoperatively, there was no difference in the patients' diagnosis, hip score, age, and sex. All operations were performed by one surgeon in a standardized manner. The choice of the implant was randomized, and the follow-up period was equal for both types. The implant used was associated with a poor outcome due to a high incidence of early aseptic loosening, probably because of poor initial fixation; as a result, there was a significant difference in the clinical results after a short follow-up period when an additional HA layer was used. In terms of the patients' pain, migration of the implant, and presence of a progressive radiolucent line, use of the HA-coated prosthesis led to a significantly better result; however, we also found an increased rate of heterotopic bone formation in the HA-coated group. It was concluded that the HA coating improves the initial fixation of a hip prosthesis.
MITTS: Memory Inter-arrival Time Traffic Shaping
Memory bandwidth severely limits the scalability and performance of multicore and manycore systems. Application performance can be very sensitive to both the delivered memory bandwidth and latency. In multicore systems, a memory channel is usually shared by multiple cores. Having the ability to precisely provision, schedule, and isolate memory bandwidth and latency on a per-core basis is particularly important when different memory guarantees are needed on a per-customer, per-application, or per-core basis. Infrastructure as a Service (IaaS) Cloud systems, and even general purpose multicores optimized for application throughput or fairness all benefit from the ability to control and schedule memory access on a fine-grain basis. In this paper, we propose MITTS (Memory Inter-arrival Time Traffic Shaping), a simple, distributed hardware mechanism which limits memory traffic at the source (Core or LLC). MITTS shapes memory traffic based on memory request inter-arrival time, enabling fine-grain bandwidth allocation. In an IaaS system, MITTS enables Cloud customers to express their memory distribution needs and pay commensurately. For instance, MITTS enables charging customers that have bursty memory traffic more than customers with uniform memory traffic for the same aggregate bandwidth. Beyond IaaS systems, MITTS can also be used to optimize for throughput or fairness in a general purpose multi-program workload. MITTS uses an online genetic algorithm to configure hardware bins, which can adapt for program phases and variable input sets. We have implemented MITTS in Verilog and have taped-out the design in a 25-core 32nm processor and find that MITTS requires less than 0.9% of core area. We evaluate across SPECint, PARSEC, Apache, and bhm Mail Server workloads, and find that MITTS achieves an average 1.18× performance gain compared to the best static bandwidth allocation, a 2.69× average performance/cost advantage in an IaaS setting, and up to 1.17× better throughput and 1.52× better fairness when compared to conventional memory bandwidth provisioning techniques.
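The core mechanism, limiting memory traffic by how requests are distributed across inter-arrival-time bins rather than by a single aggregate rate, can be sketched in software as follows. This is an assumed, simplified analogue of the hardware bins (credits per bin, replenished every epoch), not the actual MITTS design or its genetic-algorithm configuration step; the bin edges and credits are invented numbers.

```python
import bisect

class InterArrivalShaper:
    """Allow a memory request only if the bin matching its inter-arrival time has credit left.
    Bursty traffic (many tiny inter-arrival times) exhausts the small-gap bins first."""
    def __init__(self, bin_edges_cycles, credits_per_epoch):
        self.edges = bin_edges_cycles          # ascending upper edges, e.g. [10, 100, 1000]
        self.budget = list(credits_per_epoch)  # credits granted to each bin per epoch
        self.credits = list(credits_per_epoch)
        self.last_request_cycle = None

    def new_epoch(self):
        self.credits = list(self.budget)       # replenish all bins at the epoch boundary

    def try_request(self, now_cycle):
        if self.last_request_cycle is None:
            self.last_request_cycle = now_cycle
            return True
        gap = now_cycle - self.last_request_cycle
        b = min(bisect.bisect_left(self.edges, gap), len(self.credits) - 1)  # bin for this gap
        if self.credits[b] > 0:
            self.credits[b] -= 1
            self.last_request_cycle = now_cycle
            return True
        return False                           # request is stalled (shaped)

# A bursty core: back-to-back requests quickly drain the small-gap bin, later requests pass.
shaper = InterArrivalShaper(bin_edges_cycles=[10, 100, 1000], credits_per_epoch=[3, 8, 32])
print([shaper.try_request(c) for c in [0, 2, 4, 6, 8, 200]])   # last burst request is refused
```

Charging bursty customers more than uniform ones, as described above, then amounts to pricing the credits in the small-gap bins higher than those in the large-gap bins.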
Recognizing Textures with Mobile Cameras for Pedestrian Safety Applications
As smartphone-rooted distractions become commonplace, the lack of compelling safety measures has led to a rise in the number of injuries to distracted walkers. Various solutions address this problem by sensing a pedestrian's walking environment. Existing camera-based approaches have been largely limited to obstacle detection and other forms of object detection. Instead, we present TerraFirma, an approach that performs material recognition on the pedestrian's walking surface. We explore, first, how well commercial off-the-shelf smartphone cameras can learn texture to distinguish among paving materials in uncontrolled outdoor urban settings. Second, we aim at identifying when a distracted user is about to enter the street, which can be used to support safety functions such as warning the user to be cautious. To this end, we gather a unique dataset of street/sidewalk imagery from a pedestrian's perspective that spans major cities like New York, Paris, and London. We demonstrate that modern phone cameras can be enabled to distinguish materials of walking surfaces in urban areas with more than 90% accuracy, and accurately identify when pedestrians transition from sidewalk to street.
Towards a scientific blockchain framework for reproducible data analysis
Publishing reproducible analyses is a long-standing and widespread challenge [1] for the scientific community, funding bodies and publishers [2, 3, 4]. Although a definitive solution is still elusive [5], the problem is recognized to affect all disciplines [6, 7, 8] and lead to a critical system inefficiency [9]. Here, we propose a blockchain-based approach to enhance scientific reproducibility, with a focus on life science studies and precision medicine. While the interest of encoding permanently into an immutable ledger all the study key information–including endpoints, data and metadata, protocols, analytical methods and all findings–has been already highlighted, here we apply the blockchain approach to solve the issue of rewarding time and expertise of scientists that commit to verify reproducibility. Our mechanism builds a trustless ecosystem of researchers, funding bodies and publishers cooperating to guarantee digital and permanent access to information and reproducible results. As a natural byproduct, a procedure to quantify scientists’ and institutions’ reputation for ranking purposes is obtained.
Pharmacokinetics of temozolomide given three times a day in pediatric and adult patients
To characterize and compare pharmacokinetic parameters in children and adults treated with temozolomide (TMZ) administered for 5 days in three doses daily, and to evaluate the possible relationship between AUC values and hematologic toxicity. TMZ pharmacokinetic parameters were characterized in pediatric and adult patients with primary central nervous system tumors treated with doses ranging from 120 to 200 mg/m2 per day, divided into three doses daily for 5 days. Plasma levels were measured over 8 h following oral administration in a fasting state. A total of 40 courses were studied in 22 children (mean age 10 years, range 3–16 years) and in 8 adults (mean age 30 years, range 19–54 years). In all patients, a linear relationship was found between systemic exposure (AUC) and increasing doses of TMZ. Time to peak concentration, elimination half-life, apparent clearance and volume of distribution were not related to TMZ dose. No differences were seen among TMZ Cmax, t1/2, Vd or CL/F in children compared with adults. Intra- and interpatient variability of systemic exposure were limited in both children and adults. No statistically significant differences were found between the AUCs of children who experienced grade 4 hematologic toxicity and children who did not. No difference appears to exist between pharmacokinetic parameters in adults and children when TMZ is administered in three doses daily. Hematologic toxicity was not related to TMZ AUC. AUC measurement does not appear to be of any use in optimizing TMZ treatment.
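Since the analysis above rests on systemic exposure (AUC) computed from the 8-hour plasma sampling, a minimal worked example of how an AUC over the sampling interval can be obtained from concentration-time points with the trapezoidal rule is shown below; the concentration values are made up solely for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical plasma concentrations (ug/mL) sampled over 8 h after an oral TMZ dose.
times_h = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0])
conc = np.array([0.0, 4.5, 6.0, 4.8, 2.5, 1.2, 0.6])

# Trapezoidal-rule AUC over 0-8 h, in ug*h/mL.
auc_0_8 = np.sum(np.diff(times_h) * (conc[1:] + conc[:-1]) / 2.0)
cmax = conc.max()                     # peak concentration
tmax = times_h[conc.argmax()]         # time to peak concentration
print(f"AUC(0-8h) = {auc_0_8:.2f} ug*h/mL, Cmax = {cmax} ug/mL at t = {tmax} h")
```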
On stabilization methods of descriptor systems
Numerically reliable computational methods are proposed for the stabilization of a linear descriptor system with or without simultaneous elimination of its impulsive behavior. Two basic stabilization approaches are discussed. The first approach relies on methods which represent generalizations of direct stabilization techniques for standard state-space systems. The second approach is based on a recursive generalized Schur algorithm for pole assignment. Both approaches are based exclusively on numerically reliable procedures and can serve for robust software implementations.
DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features
We propose DeepHand to estimate the 3D pose of a hand using depth data from commercial 3D sensors. We discriminatively train convolutional neural networks to output a low dimensional activation feature given a depth map. This activation feature vector is representative of the global or local joint angle parameters of a hand pose. We efficiently identify 'spatial' nearest neighbors to the activation feature, from a database of features corresponding to synthetic depth maps, and store some 'temporal' neighbors from previous frames. Our matrix completion algorithm uses these 'spatio-temporal' activation features and the corresponding known pose parameter values to estimate the unknown pose parameters of the input feature vector. Our database of activation features supplements large viewpoint coverage and our hierarchical estimation of pose parameters is robust to occlusions. We show that our approach compares favorably to state-of-the-art methods while achieving real time performance (≈ 32 FPS) on a standard computer.
A Probabilistic Neural-Fuzzy Learning System for Stochastic Modeling
A probabilistic fuzzy neural network (PFNN) with a hybrid learning mechanism is proposed to handle complex stochastic uncertainties. Fuzzy logic systems (FLSs) are well known for vagueness processing. Embedded with the probabilistic method, an FLS will possess the capability to capture stochastic uncertainties. Further enhanced with the neural learning, it will be able to work under time-varying stochastic environment. Integrated with a statistical process control (SPC) based monitoring method, the PFNN can maintain the robust modeling performance. Finally, the successful simulation demonstrates the modeling effectiveness of the proposed PFNN under the time-varying stochastic conditions.
A lambda calculus of objects and method specialization
This paper presents an untyped lambda calculus, extended with object primitives that reflect the capabilities of so-called delegation-based object-oriented languages. A type inference system allows static detection of errors, such as message not understood, while at the same time allowing the type of an inherited method to be specialized to the type of the inheriting object. Type soundness is proved using operational semantics and examples illustrating the expressiveness of the pure calculus are presented. CR Classification: F.3.1, D.3.3, F.4.1
Using Neural Network Model Predictive Control for Controlling Shape Memory Alloy-Based Manipulator
This paper presents a new setup and investigates neural model predictive and variable structure controllers designed to control the single-degree-of-freedom rotary manipulator actuated by shape memory alloy (SMA). SMAs are a special group of metallic materials and have been widely used in the robotic field because of their particular mechanical and electrical characteristics. SMA-actuated manipulators exhibit severe hysteresis, so the controllers should confront this problem and make the manipulator track the desired angle. In this paper, first, a mathematical model of the SMA-actuated robot manipulator is proposed and simulated. The controllers are then designed. The results set out the high performance of the proposed controllers. Finally, stability analysis for the closed-loop system is derived based on the dissipativity theory.
Connectivism and Dimensions of Individual Experience
Connectivism has been offered as a new learning theory for a digital age, with four key principles for learning: autonomy, connectedness, diversity, and openness. The testing ground for this theory has been massive open online courses (MOOCs). As the number of MOOC offerings increases, interest in how people interact and develop as individual learners in these complex, diverse, and distributed environments is growing. In their work in these environments the authors have observed a growing tension between the elements of connectivity believed to be necessary for effective learning and the variety of individual perspectives both revealed and concealed during interactions with these elements. In this paper we draw on personality and self-determination theories to gain insight into the dimensions of individual experience in connective environments and to further explore the meaning of autonomy, connectedness, diversity, and openness. The authors suggest that definitions of all four principles can be expanded to recognize individual and psychological diversity within connective environments. They also suggest that such expanded definitions have implications for learners’ experiences of MOOCs, recognizing that learners may vary greatly in their desire for and interpretation of connectivity, autonomy, openness, and diversity.
Hydrothermal liquefaction of biomass: developments from batch to continuous process.
This review describes the recent results in hydrothermal liquefaction (HTL) of biomass in continuous-flow processing systems. Although much has been published about batch reactor tests of biomass HTL, there is only limited information yet available on continuous-flow tests, which can provide a more reasonable basis for process design and scale-up for commercialization. High-moisture biomass feedstocks are the most likely to be used in HTL. These materials are described and results of their processing are discussed. Engineered systems for HTL are described; however, they are of limited size and do not yet approach a demonstration scale of operation. With the results available, process models have been developed, and mass and energy balances determined. From these models, process costs have been calculated and provide some optimism as to the commercial likelihood of the technology.
Discovering and Exploiting Additive Structure for Bayesian Optimization
Bayesian optimization has proven invaluable for black-box optimization of expensive functions. Its main limitation is its exponential complexity with respect to the dimensionality of the search space using typical kernels. Luckily, many objective functions can be decomposed into additive sub-problems, which can be optimized independently. We investigate how to automatically discover such (typically unknown) additive structure while simultaneously exploiting it through Bayesian optimization. We propose an efficient algorithm based on Metropolis–Hastings sampling and demonstrate its efficacy empirically on synthetic and real-world data sets. Throughout all our experiments we reliably discover hidden additive structure whenever it exists and exploit it to yield significantly faster convergence.
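The key property exploited above, that an additive objective can be optimized one sub-problem at a time, is easy to demonstrate. The toy below is not the paper's Metropolis–Hastings discovery procedure or its Bayesian optimization loop; it assumes the additive groups are already known, uses random search as a stand-in for per-group optimization, and uses an invented objective and grouping.

```python
import numpy as np

# f(x) decomposes additively over disjoint groups of dimensions.
groups = [[0, 1], [2], [3, 4]]

def f_group(g, xg):
    return float(np.sum((xg - (g + 1)) ** 2))          # each sub-problem has its own minimizer

def f(x):
    return sum(f_group(g, x[dims]) for g, dims in enumerate(groups))

rng = np.random.default_rng(0)
x_opt = np.zeros(5)
for g, dims in enumerate(groups):
    # Optimize this low-dimensional sub-problem independently; random search stands in
    # for a per-group Bayesian optimization run.
    cand = rng.uniform(-5, 5, size=(20000, len(dims)))
    best = cand[np.argmin([f_group(g, c) for c in cand])]
    x_opt[dims] = best

print("recovered minimizer:", np.round(x_opt, 2))      # close to [1, 1, 2, 3, 3]
print("f at recovered point:", round(f(x_opt), 3))     # close to 0
```

Because each group is searched in its own low-dimensional space, the cost grows with the size of the largest group rather than with the full dimensionality, which is the source of the speedup the abstract describes.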
Upper White Watershed Integrated Economic and Environmental Management Project
This report outlines enhancements to existing local cooperative water quality efforts, summarizes economic and physical data, and discusses how that information was used to develop analytical models.
ARTINO: A New High Resolution 3D Imaging Radar System on an Autonomous Airborne Platform
The new radar system ARTINO (Airborne Radar for Three-dimensional Imaging and Nadir Observation), developed at FGAN-FHR, allows imaging of a directly overflown scene in three dimensions. Integrated in a small, mobile, and dismountable UAV (Unmanned Aerial Vehicle), it will be an ideal tool for various applications. This paper gives an overview of the ARTINO principle, the raw data simulation, the image formation, the technical realisation, and the status of the experimental system. I. THE ARTINO PRINCIPLE. ARTINO is a new radar system integrated in a small and dismountable low-wing UAV, which allows imaging of the directly overflown scene in three dimensions (Figure 1). (Fig. 1: Artist impression of an imaging mission using ARTINO.) This new system can image the directly overflown scene in three dimensions. Conventional side-looking SAR systems are constrained by shading effects which can hide essential information in the explored scene. The downward-looking concept of ARTINO overcomes this restriction and enables imaging of street canyons and deep terrain in mountainous areas. Moreover, the 3D imaging capability, together with the small and mobile platform, makes it an ideal tool for close-in-time data acquisition of fast-changing terrain, like snow slopes (danger of avalanches) and active volcanoes. This new system could be used for various applications, like DEM (Digital Elevation Model) generation, surveying, city planning, environmental monitoring, disaster relief, surveillance, and reconnaissance. In contrast to similar concepts (e.g. [1]–[3]), the ARTINO principle works with a sparse antenna array distributed along the wings, with the transmitting elements at the tips and the receiving elements in between. Virtual antenna elements are formed by the mean positions of every pair of single transmit and receive elements. Finally, one gets a fully distributed virtual antenna array. The 3D resolution cells are formed by the application of the synthetic aperture and a beamforming operation. A detailed description of the ARTINO principle, the UAV used (Figure 2), and the simulation of raw data can be found in [4]. The image formation using the ARTINO principle is extensively discussed in [5]. A detailed description of the technical realization and its status is given in [6]. This paper gives an overview of the concept, the processing, some first simulation results, and the technical realisation. (Fig. 2: Photo of the low-wing UAV ARTINO. The new radar system will be integrated in the fuselage and the wings.) II. MODEL OF THE ARTINO CONCEPT. A. Geometrical consideration and signal model. ARTINO is supposed to fly at the altitude h along the x-axis with the velocity v. The virtual antenna is composed of N_virt elements, which are centered on the y-axis and regularly spaced along this axis. The position of the i-th virtual antenna element, with i ∈ [−(N_virt−1)/2; (N_virt−1)/2], is given by η_i = (x, y_i, h)ᵀ (the superscript ᵀ denotes the transpose operator). T denotes the pulse-to-pulse time and t the fast time. The antenna position along the x-axis at time T is given by x = v·T. Figure 3 shows the geometry of the ARTINO principle. The distance d between the virtual antenna elements was determined by simulations in order to optimize the antenna beam (reduction of grating lobes) of the whole array (Figure 4). For the demonstration of the feasibility of this new radar concept, a pulse radar is assumed.
To obtain a distinct assignment of each virtual antenna element, it will be necessary for the experimental system that the real antenna elements transmit with a time multiplex from pulse to pulse. In the simulation, all virtual antenna elements transmit simultaneously. A point scatterer P is positioned at ξ = (ξ_x, ξ_y, ξ_z) with the reflectivity α(ξ) (Figure 3). (Fig. 3: Geometry of the ARTINO principle.) The signal assigned to the ...
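The construction of the virtual antenna elements as the mean positions of every transmit/receive pair can be illustrated with a few lines of code; the element coordinates below are invented purely to show the geometry and are not the actual ARTINO array layout.

```python
import numpy as np

# Hypothetical cross-track (y) positions along the wings, in meters:
# transmitters at the wing tips, receivers spread in between.
tx_y = np.array([-2.0, 2.0])
rx_y = np.linspace(-1.5, 1.5, 7)

# Each virtual element sits at the mean position of one (transmit, receive) pair.
virtual_y = np.sort((tx_y[:, None] + rx_y[None, :]).ravel() / 2.0)

print(f"{tx_y.size} Tx x {rx_y.size} Rx -> {virtual_y.size} virtual elements")
print(np.round(virtual_y, 3))
```

With only a handful of physical elements, the pairwise means already yield a densely and regularly populated virtual aperture across the wingspan, which is what the beamforming step then exploits.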
Localized maxillary ridge augmentation with a block allograft for dental implant placement: case reports.
Autogenous block bone grafts have been highly successful in treating human periodontal defects, restoring esthetics, and developing adequate bone volume for dental implant placement. Limitations in available donor bone, the need for an added surgical procedure, and other potential complications have made the use of allogenic bone graft materials an important alternative. One patient described in this report presented with fractured root syndrome of the right maxillary incisor with severe resorption of the buccal plate. After atraumatic tooth extraction, a staged treatment approach involving localized ridge augmentation with an allogenic iliac bone block material and dental implant placement was used. The host bone completely incorporated the graft with only minor resorption, which enabled the implant to be placed. The allogenic bone block material used in this study was an effective alternative to harvesting and grafting autogenous bone for implant site development. The cases presented in this article clinically demonstrate the efficacy of using a block allograft in generating effective new bone fill for dental implant placement.
DualIso : Scalable Subgraph Pattern Matching On Large Labeled Graphs SUPPLEMENT
This supplement to the paper provides all figures and substantial code listings which would not fit into the paper due to page limitations. Roadmap: The supplement is organized in the same manner as the main paper, i.e., the section names in the main paper correspond to the same sections in the supplement for easy reference. Note: In many cases, text from the main paper is included and/or extended in order to provide context for the figures, tables, and code listings contained in this supplement. The figure numbers may change as some more figures are added in this supplement for detailed explanation. A. Graph Pattern Matching. The goal of graph pattern matching is to find all subgraphs of a large graph, called the data graph, that are similar to a query graph structurally and semantically. To consider why this might be useful, look again at the graph displayed in Figure 1. Suppose a user of Facebook has just moved to Lyon, France from the United States and is looking for someone to accompany him to the U2 concert there. Thus, he wants to find out if any of his friends are friends with a person who lives in Lyon and who likes U2. Given that Facebook can be modeled as a graph like Figure 1, this question can be modeled as a graph query, as shown in Figure 2. In this query, the capital letters represent variables that the user would like filled in. In this case, using the graph in Figure 1, the query would return X = John and Y = Jim. In fact, Facebook Graph Search attempts to allow users to do just that, though at the moment it only has limited querying capabilities.
ANDRUBIS: A Fully Automated, Publicly Available Analysis System for Android Applications
The smartphone industry has been one of the fastest growing technological areas in recent years. Naturally, the considerable market share of the Android OS and the diversity of app distribution channels besides the official Google Play Store has attracted the attention of malware authors. To deal with the increasing numbers of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community [8], [24], [25], [27], the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a completely automated, publicly available and comprehensive analysis system for Android applications. ANDRUBIS combines static analysis techniques with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage.
Status Quo Bias in Decision Making
Most real decisions, unlike those of economics texts, have a status quo alternative-that is, doing nothing or maintaining one's current or previous decision. A series of decision-making experiments shows that individuals disproportionately stick with the status quo. Data on the selections of health plans and retirement programs by faculty members reveal that the status quo bias is substantial in important real decisions. Economics, psychology, and decision theory provide possible explanations for this bias. Applications are discussed ranging from marketing techniques, to industrial organization, to the advance of science. "To do nothing is within the power of all men." (Samuel Johnson) How do individuals make decisions? This question is of crucial interest to researchers in economics, political science, psychology, sociology, history, and law. Current economic thinking embraces the concept of rational choice as a prescriptive and descriptive paradigm. That is, economists believe that economic agents-individuals, managers, government regulators-should (and in large part do) choose among alternatives in accordance with well-defined preferences. In the canonical model of decision making under certainty, individuals select one of a known set of alternative choices with certain outcomes. They are endowed with preferences satisfying the basic choice axioms-that is, they have a transitive ranking of these alternatives. Rational choice simply means that they select their most preferred alternative in this ranking. If we know the decision maker's ranking, we can predict his or her choice infallibly. For instance, an individual's choice should not be affected by removing or adding an irrelevant (i.e., not top-ranked) alternative. Conversely, when we observe his or her actual choice, we know it was his or her top-ranked alternative. The theory of rational decision making under uncertainty, first formalized by Savage (1954), requires the individual to assign probabilities to the possible outcomes and to calibrate utilities to value these outcomes. The decision maker selects the alternative that offers the highest expected utility. A critical feature of this approach is that transitivity is preserved for the more general category, decision making under uncertainty. Most of the decisions discussed here involve what Frank Knight referred to as risk (probabilities of the outcomes are well defined) or uncertainty (only subjective probabilities can be assigned to outcomes). In a number of instances, the decision maker's preferences are uncertain. A fundamental property of the rational choice model, under certainty or uncertainty, is that only preference-relevant features of the alternatives influence the individual's decision. Thus, neither the order in which the alternatives are presented nor any labels they carry should affect the individual's choice. Of course, in real-world decision problems the alternatives often come with influential labels. Indeed, one alternative inevitably carries the label status quo-that is, doing nothing or maintaining one's current or previous decision is almost always a possibility. Faced with new options, decision makers often stick with the status quo alternative, for example, to follow customary company policy, to elect an incumbent to still another term in office, to purchase the same product brands, or to stay in the same job.
Thus, with respect to the canonical model, a key question is whether the framing of an alternative-whether it is in the status quo position or not-will significantly affect the likelihood of its being chosen. This article reports the results of a series of decision-making experiments designed to test for status quo effects. The main finding is that decision makers exhibit a significant status quo bias. Subjects in our experiments adhered to status quo choices more frequently than would be predicted by the canonical model. The vehicle for the experiments was a questionnaire consisting of a series of decision problems, each requiring a choice from among a fixed number of alternatives. While controlling for preferences and holding constant the set of choice alternatives, the experimental design varied the framing of the alternatives. Under neutral framing, a menu of potential alternatives with no specific labels attached was presented; all options were on an equal footing, as in the usual depiction of the canonical model. Under status quo framing, one of the choice alternatives was placed in the status quo position and the others became alternatives to the status quo. In some of the experiments, the status quo condition was manipulated by the experimenters. In the remainder, which involved sequential decisions, the subject's initial choice self-selected the status quo option for a subsequent choice. In both parts of the experiment, status quo framing was found to have predictable and significant effects on subjects' decision making. Individuals exhibited a significant status quo bias across a range of decisions. The degree of bias varied with the strength of the individual's discernible preference and with the number of alternatives in the choice set. The stronger was an individual's preference for a selected alternative, the weaker was the bias. The more options that were included in the choice set, the stronger was the relative bias for the status quo. To illustrate our findings, consider an election contest between two candidates who would be expected to divide the vote evenly if neither were an incumbent (the neutral setting). (This example should be regarded as a metaphor; we do not claim that our experimental results actually explain election outcomes.) Now suppose that one of these candidates is the incumbent office holder, a status generally acknowledged as a significant advantage in an election. An extrapolation of our experimental results indicates that the incumbent office holder (the status quo alternative) would claim an election victory by a margin of 59% to 41%. Conversely, a candidate who would command as few as 39% of the voters in the neutral setting could still earn a narrow election victory as an incumbent. With multiple candidates in a plurality election, the status quo advantage is more dramatic. Consider a race among four candidates, each of whom would win 25% of the vote in the neutral setting. Here, the incumbent earns 38.5% of the vote, and each challenger 20.5%. In turn, an incumbent candidate who would earn as little as 9% of the vote in a neutral election can still earn a 25.4% plurality. The finding that individuals exhibit significant status quo bias in relatively simple hypothetical decision tasks challenges the presumption (held implicitly by many economists) that the rational choice model provides a valid descriptive model for all economic behavior.
(In Section 3, we explore possible explanations for status quo bias that are consistent with rational behavior.) In particular, this finding challenges perfect optimizing models that claim (at least) allegorical significance in explaining actual behavior in a complicated imperfect world. Even in simple experimental settings, perfect models are violated. In themselves, the experiments do not address the larger question of the importance of status quo bias in actual private and public decision making. Those who are skeptical of economic experiments purporting to demonstrate deviations from rationality contend that actual economic agents, with real resources at stake, will make it their business to act rationally. For several reasons, however, we believe that the skeptic's argument applies only weakly to the status quo findings. First, the status quo bias is not a mistake-like a calculation error or an error in maximizing-that once pointed out is easily recognized and corrected. This bias is considerably more subtle. In the debriefing discussions following the experiments, subjects expressed surprise at the existence of the bias. Most were readily persuaded of the aggregate pattern of behavior (and the reasons for it), but seemed unaware (and slightly skeptical) that they personally would fall prey to this bias. Furthermore, even if the bias is recognized, there appear to be no obvious ways to avoid it beyond calling on the decision maker to weigh all options evenhandedly. Second, we would argue that the controlled experiments' hypothetical decision tasks provide fewer reasons for the expression of status quo bias than do real-world decisions. Many, if not most, subjects did not consciously perceive the differences in framing across decision problems in the experiment. When they did recognize the framing, they stated that it should not make much of a difference. By contrast, one would expect the status quo characteristic to have a much greater impact on actual decision making. Despite a desire to weigh all options evenhandedly, a decision maker in the real world may have a considerable commitment to, or psychological investment in, the status quo option. The individual may retain the status quo out of convenience, habit or inertia, policy (company or government) or custom, because of fear or innate conservatism, or through simple rationalization. His or her past choice may have become known to others and, unlike the subject in a compressed-time laboratory setting, he or she may have lived with the status quo choice for some time. Moreover, many real-world decisions are made by a person acting as part of an organization or group, which may exert additional pressures for status quo choices. Finally, in our experiments, an alternative to the status quo was always explicitly identified. In day-to-day decision making, by contrast, a decision maker may not even recognize the potential for a choice. When, as is often the case in the real world, the first decision is to recognize that th
Measuring the permittivity and permeability of lossy materials :: solids, liquids, metals, building materials, and negative-index materials
The goal of this report is to provide a comprehensive guide for researchers in the area of measurements of lossy dielectric and magnetic materials, with the intent of assembling in one place the relevant information needed to perform and interpret dielectric measurements on lossy materials. The report should aid in the selection of the most relevant methods for particular applications. We emphasize the metrology aspects of the measurement of lossy dielectrics and the associated uncertainty analysis. We present measurement methods, algorithms, and procedures for measuring both commonly used dielectric and magnetic materials, and in addition we present the fundamentals for measuring negative-index materials.
Fast accurate fish detection and recognition of underwater images with Fast R-CNN
This paper aims at detecting and recognizing fish species from underwater images by means of Fast R-CNN (Regions with Convolutional Neural Networks) features. Encouraged by the powerful recognition results achieved by Convolutional Neural Networks (CNNs) on the generic VOC and ImageNet datasets, we apply this popular deep ConvNet to the domain-specific underwater environment, which is more complicated than overland situations, using a new dataset of 24277 ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks. Fast R-CNN improves mean average precision (mAP) by 11.2% relative to the Deformable Parts Model (DPM) baseline, achieving a mAP of 81.4%, and detects 80× faster than the previous R-CNN on a single fish image.
Twitter: A Good Place to Detect Health Conditions
With the proliferation of social networks and blogs, the Internet is increasingly being used to disseminate personal health information rather than just as a source of information. In this paper we exploit the wealth of user-generated data available through the micro-blogging service Twitter to estimate and track the incidence of health conditions in society. The method is based on two stages: we start by extracting possibly relevant tweets using a set of specially crafted regular expressions, and then classify these initial messages using machine learning methods. Furthermore, we selected relevant features to improve the results and the execution times. To test the method, we considered four health states or conditions, namely flu, depression, pregnancy and eating disorders, and two locations, Portugal and Spain. We present the results obtained and demonstrate that the detection results and the performance of the method are improved after feature selection. The results are promising, with areas under the receiver operating characteristic curve between 0.7 and 0.9, and F-measure values around 0.8 and 0.9. This indicates that such an approach provides a feasible solution for measuring and tracking the evolution of health states within society.
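The two-stage design, a regular-expression prefilter followed by a machine-learning classifier, can be sketched as below. The regular expressions, example tweets, labels, and the TF-IDF plus logistic-regression choice are illustrative assumptions; the paper's actual patterns, features, and classifiers may differ.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: a crude prefilter keeps only possibly relevant tweets (flu-related here, illustrative).
FLU_PATTERNS = [re.compile(p, re.I) for p in [r"\bflu\b", r"\bfever\b", r"\bgripe\b"]]
def possibly_relevant(tweet):
    return any(p.search(tweet) for p in FLU_PATTERNS)

tweets = [
    "down with the flu again, fever all night",
    "flu shots available at the pharmacy today",
    "what a great football match",
    "my head hurts and I think I have the flu",
]
labels = [1, 0, 0, 1]   # 1 = user reports actually having the condition (toy labels)

candidates = [(t, y) for t, y in zip(tweets, labels) if possibly_relevant(t)]
X = [t for t, _ in candidates]
y = [l for _, l in candidates]

# Stage 2: classify the prefiltered tweets with a supervised model.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict(["i caught the flu, fever since yesterday"]))
```

The prefilter keeps the classifier's workload and feature space small, which is consistent with the execution-time gains from feature selection reported above.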
Scapular muscle activity from selected strengthening exercises performed at low and high intensities.
A balanced level of muscle strength between the different parts of the scapular muscles is important in optimizing performance and preventing injuries in athletes. Emerging evidence suggests that many athletes lack balanced strength in the scapular muscles. Evidence-based recommendations are important for proper exercise prescription. This study determines scapular muscle activity during strengthening exercises for scapular muscles performed at low and high intensities (Borg CR10 levels 3 and 8). Surface electromyography (EMG) from selected scapular muscles was recorded during 7 strengthening exercises and expressed as a percentage of the maximal EMG. Seventeen women (aged 24-55 years) without serious disorders participated. Several of the investigated exercises-press-up, prone flexion, one-arm row, and prone abduction at Borg 3 and press-up, push-up plus, and one-arm row at Borg 8-predominantly activated the lower trapezius over the upper trapezius (activation difference [Δ] 13-30%). Likewise, several of the exercises-push-up plus, shoulder press, and press-up at Borg 3 and 8-predominantly activated the serratus anterior over the upper trapezius (Δ18-45%). The middle trapezius was activated over the upper trapezius by one-arm row and prone abduction (Δ21-30%). Although shoulder press and push-up plus activated the serratus anterior over the lower trapezius (Δ22-33%), the opposite was true for prone flexion, one-arm row, and prone abduction (Δ16-54%). Only the press-up and push-up plus activated both the lower trapezius and the serratus anterior over the upper trapezius. In conclusion, several of the investigated exercises both at low and high intensities predominantly activated the serratus anterior and lower and middle trapezius, respectively, over the upper trapezius. These findings have important practical implications for exercise prescription for optimal shoulder function. For example, both workers with neck pain and athletes at risk of shoulder impingement (e.g., overhead sports) should perform push-up plus and press-ups to specifically strengthen the serratus anterior and lower trapezius.
ALEF: From Application to Platform for Adaptive Collaborative Learning
Web 2.0 has had a tremendous impact on education. It facilitates access to and availability of learning content in a variety of new formats, content creation, learning tailored to students' individual preferences, and collaboration. The range of Web 2.0 tools and features is constantly evolving, with a focus on users and on ways that enable users to socialize, share and work together on (user-generated) content. In this chapter we present ALEF – an Adaptive Learning Framework that responds to the challenges posed to educational systems in the Web 2.0 era. Besides its base functionality – delivering educational content – ALEF particularly focuses on making the learning process more efficient by delivering a tailored learning experience via personalized recommendation, and by enabling learners to collaborate and actively participate in learning via interactive educational components. Our existing and successfully utilized solution serves as the medium for presenting the key concepts that enable realizing Web 2.0 principles in education, namely lightweight models, and the three components of the framework infrastructure important for constant evolution and for including students directly in the educational process – the annotation framework, the feedback infrastructure and widgets. These make it possible to devise and implement various mechanisms for recommendation and collaboration – we also present selected methods for personalized recommendation and collaboration together with their evaluation in ALEF.
Ontology-Based Extraction and Structuring of Information from Data-Rich Unstructured Documents
We present a new approach to extracting information from unstructured documents based on an application ontology that describes a domain of interest. Starting with such an ontology, we formulate rules to extract constants and context keywords from unstructured documents. For each unstructured document of interest, we extract its constants and keywords and apply a recognizer to organize extracted constants as attribute values of tuples in a generated database schema. To make our approach general, we fix all the processes and change only the ontological description for a different application domain. In experiments we conducted on two different types of unstructured documents taken from the Web, our approach attained recall ratios in the 80% and 90% range and precision ratios near 98%.
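A stripped-down illustration of the idea, extraction rules (regular expressions plus context keywords) derived from an ontology, applied to an unstructured document, and the extracted constants organized as attribute values of a tuple, is given below. The ontology fragment, rules, and sample text are invented for the example and are much simpler than the application ontologies the approach actually uses.

```python
import re

# Tiny "ontology": attributes of a car ad, each with an extraction rule and context keywords.
ONTOLOGY = {
    "Year":    {"regex": r"\b(19|20)\d{2}\b",            "keywords": []},
    "Price":   {"regex": r"\$\s?\d[\d,]*",               "keywords": ["price", "asking", "obo"]},
    "Mileage": {"regex": r"\b\d[\d,]*\s?(?:miles|mi)\b", "keywords": ["miles", "mileage"]},
}

def extract_tuple(text):
    """Apply each attribute's rule; keep a match only if a context keyword (when any are
    specified) appears nearby, then return one record (tuple) for the document."""
    record = {}
    low = text.lower()
    for attr, rule in ONTOLOGY.items():
        for m in re.finditer(rule["regex"], text, re.I):
            window = low[max(0, m.start() - 30): m.end() + 30]
            if not rule["keywords"] or any(k in window for k in rule["keywords"]):
                record[attr] = m.group(0)
                break
    return record

ad = "For sale: 2004 Honda Accord, 98,000 miles, asking $4,500 obo."
print(extract_tuple(ad))   # e.g. {'Year': '2004', 'Price': '$4,500', 'Mileage': '98,000 miles'}
```

Switching to a different application domain then means swapping the ontological description (the rules and keywords), while the extraction and recognition machinery stays fixed, which is the generality claim made above.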
Retreatment and maintenance therapy with infliximab in fistulizing Crohn's disease.
OBJECTIVES Infliximab has clearly demonstrated its efficacy in the short-term treatment of fistulizing Crohn's disease. We present here the results of retreatment and long-term maintenance therapy. PATIENTS AND METHODS Eighty-one consecutive patients with active fistulizing Crohn's disease, in whom previous treatments had failed, were treated with infliximab. All patients received, as the initial treatment, 5 mg/kg i.v. infusions (weeks 0, 2, and 6). Those patients who failed to respond after the initial cycle (group 1, n = 25), or those who relapsed after having responded (group 2, n = 13), received retreatment with three similar doses (weeks 0, 2, and 6). Those who responded to retreatment were included in a long-term maintenance programme (n = 44), with repeated doses (5 mg/kg i.v. infusions) every eight weeks for 1-2 years. RESULTS In the initial treatment, 56% of the patients responded partially; this response was complete in 44%. In the retreatment, 28% of group 1 (non-responders) presented a complete response, compared to 77% in group 2 (relapsers) (p < 0.0001). In the maintenance treatment, the global response was 88% (39/44). The mean number of doses per patient was 4.4 ± 2 (range 1-9), with a duration of 36 ± 12 weeks (range 8-72). Adverse effects were not significantly increased in either treatment. CONCLUSIONS Both retreatment and long-term maintenance therapy with infliximab are highly effective and well tolerated in fistulizing Crohn's disease patients.
Modeling and control of a new quadrotor manipulation system
This paper introduces a new quadrotor manipulation system that consists of a 2-link manipulator attached to the bottom of a quadrotor. This new system presents a solution for the drawbacks found in the current quadrotor manipulation system, which uses a gripper fixed to a quadrotor. Unlike the current system, the proposed system enables the end-effector to achieve any arbitrary orientation and thus increases its degrees of freedom from 4 to 6. Also, it provides enough distance between the quadrotor and the object to be manipulated, which is useful in some applications such as demining. System kinematics and dynamics are derived and are highly nonlinear. A controller is designed based on feedback linearization to track desired trajectories. Controlling the movements in the horizontal directions is simplified by utilizing the derived nonholonomic constraints. Finally, the proposed system is simulated using MATLAB/SIMULINK. The simulation results show the effectiveness of the proposed controller.
The mirror neuron system and its function in humans
Mirror neurons are a particular type of neuron that discharges when an individual performs an action, as well as when he/she observes a similar action performed by another individual. Mirror neurons were originally described in the premotor cortex (area F5) of the monkey. Subsequent studies have shown that they are also present in the monkey inferior parietal lobule (Rizzolatti et al. 2001). In the human brain, evidence for mirror neurons is indirect: although there is no single-neuron study showing their existence, functional imaging studies revealed activation of the likely homologue of monkey area F5 (area 44 and the adjacent ventral area 6) during action observation (see Rizzolatti and Craighero 2004). Furthermore, magnetoencephalography (Hari et al. 1998) and EEG (Cochin et al. 1999) have shown activation of motor cortex during observation of finger movements. Very recently, alpha-rhythm desynchronization in functionally delimited language and hand motor areas was demonstrated during execution and observation of finger movements in a patient with implanted subdural electrodes (Tremblay et al. 2004). What is the functional role of the mirror neurons? Various hypotheses have been advanced: action understanding, imitation, intention understanding, and empathy (see Rizzolatti and Craighero 2004; Gallese et al. 2004). In addition, it has been suggested that the mirror-neuron system is the basic neural mechanism from which language developed (Rizzolatti and Arbib 1998). It is my opinion that the question of what the function of mirror neurons, or of the mirror-neuron system, might be is ill posed. Mirror neurons do not have a specific functional role. The properties of mirror neurons indicate that the primate brain is endowed with a mechanism mapping the pictorial description of actions, carried out in the higher-order visual areas, onto their motor counterpart. This matching mechanism may underlie a variety of functions, depending on what aspect of the observed action is coded, the species considered, the circuit in which the mirror neurons are included, and the connectivity of the mirror-neuron system with other systems. Let us first examine action understanding, the original hypothesis proposed to explain the functional role of the mirror system (Gallese et al. 1996; Rizzolatti et al. 1996). It might sound bizarre that, in order to recognize an action, one should activate the motor system. As a matter of fact, this is not so strange. A mere visual perception, without involvement of the motor system, would only provide a description …
The multiple faces of working memory: Storage, processing, supervision, and coordination
Working memory capacity was differentiated along functional and content-related facets. Twenty-four tasks were constructed to operationalize the cells of the proposed taxonomy. We tested 133 university students with the new tasks, together with six working memory marker tasks. With structural equation models, three working memory functions could be distinguished: simultaneous storage and processing, supervision, and coordination of elements into structures. Each function was further subdivided into distinct components of variance. On the content dimension, evidence for a dissociation between verbal-numerical working memory and spatial working memory was comparatively weak.
Narcolepsy in orexin knockout mice: molecular genetics of sleep regulation
Neurons containing the neuropeptide orexin (hypocretin) are located exclusively in the lateral hypothalamus and send axons to numerous regions throughout the central nervous system, including the major nuclei implicated in sleep regulation. Here, we report that, by behavioral and electroencephalographic criteria, orexin knockout mice exhibit a phenotype strikingly similar to human narcolepsy patients, as well as canarc-1 mutant dogs, the only known monogenic model of narcolepsy. Moreover, modafinil, an anti-narcoleptic drug with ill-defined mechanisms of action, activates orexin-containing neurons. We propose that orexin regulates sleep/wakefulness states, and that orexin knockout mice are a model of human narcolepsy, a disorder characterized primarily by rapid eye movement (REM) sleep dysregulation.
New sunscreen materials based on amorphous cerium and titanium phosphate
Cerium–titanium pyrophosphates Ce1−xTixP2O7 (with x = 0, 0.50, and 1.0), which are novel phosphate materials developed as UV-shielding agents for use in cosmetics, were characterized by X-ray diffraction, X-ray fluorescence analysis, UV–vis reflectance, and Raman spectroscopy. Since the optical reflectance shifted to lower wavelengths upon crystallization of the phosphates, the amorphous state of the cerium–titanium pyrophosphates was stabilized by doping with niobium (Nb). A Raman spectroscopic study of the phosphates showed that the P–O–P bending and stretching modes decreased with the loading of Nb, accompanied by the appearance of an Nb–O stretching mode. Therefore, the increase in the amount of non-bridging oxygen in the amorphous phosphate should be the reason for the inhibition of crystallization. This stabilization is a significant improvement, which enables these amorphous phosphates to be applied not only to cosmetics and paints, but also to plastics and films.
Barrel menu: a new mobile phone menu for feature rich devices
Mobile phones have ceased to be devices that people merely use to make phone calls. Today, modern mobile phones offer their users a large selection of features which are accessed via hierarchical menus. These typically take the form of deeply nested menus that require numerous clicks to navigate, often resulting in usability issues. This paper presents a prototype of a new menu style (the Barrel menu) for mobile phones, and compares the usability of this menu style with that of a traditional hierarchical (Hub-and-Spoke) design. The Barrel menu was found to be as efficient as the Hub-and-Spoke menu in terms of time-on-task and key presses, but was found to cause fewer user errors. In addition, the Barrel menu was found to be better in terms of ease of use, orientation, user satisfaction and learnability. Thus, the Barrel menu shows the potential to be a feasible alternative for mobile phone manufacturers to overcome some of the usability issues associated with Hub-and-Spoke menus.
Power and empowerment in nursing: three theoretical approaches.
Definitions and uses of the concept of empowerment are wide-ranging: the term has been used to describe the essence of human existence and development, but also aspects of organizational effectiveness and quality. The empowerment ideology is rooted in social action where empowerment was associated with community interests and with attempts to increase the power and influence of oppressed groups (such as workers, women and ethnic minorities). Later, there was also growing recognition of the importance of the individual's characteristics and actions. Based on a review of the literature, this paper explores the uses of the empowerment concept as a framework for nurses' professional growth and development. Given the complexity of the concept, it is vital to understand the underlying philosophy before moving on to define its substance. The articles reviewed were classified into three groups on the basis of their theoretical orientation: critical social theory, organization theory and social psychological theory. Empowerment seems likely to provide for an umbrella concept of professional development in nursing.
The CRISPR-Cas system for plant genome editing: advances and opportunities.
Genome editing is an approach in which a specific target DNA sequence of the genome is altered by adding, removing, or replacing DNA bases. Artificially engineered hybrid enzymes, zinc-finger nucleases (ZFNs), and transcription activator-like effector nucleases (TALENs), and the CRISPR (clustered regularly interspaced short palindromic repeats)-Cas (CRISPR-associated protein) system are being used for genome editing in various organisms including plants. The CRISPR-Cas system has been developed most recently and seems to be more efficient and less time-consuming compared with ZFNs or TALENs. This system employs an RNA-guided nuclease, Cas9, to induce double-strand breaks. The Cas9-mediated breaks are repaired by cellular DNA repair mechanisms and mediate gene/genome modifications. Here, we provide a detailed overview of the CRISPR-Cas system and its adoption in different organisms, especially plants, for various applications. Important considerations and future opportunities for deployment of the CRISPR-Cas system in plants for numerous applications are also discussed. Recent investigations have revealed the implications of the CRISPR-Cas system as a promising tool for targeted genetic modifications in plants. This technology is likely to be more commonly adopted in plant functional genomics studies and crop improvement in the near future.
Sensing-Throughput Tradeoff for Cognitive Radio Networks
In a cognitive radio network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. To support this spectrum reuse functionality, the secondary users are required to sense the radio frequency environment, and once the primary users are found to be active, the secondary users are required to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance in cognitive radio networks. There are two parameters associated with spectrum sensing: probability of detection and probability of false alarm. The higher the probability of detection, the better the primary users are protected. However, from the secondary users' perspective, the lower the probability of false alarm, the more chances the channel can be reused when it is available, and thus the higher the achievable throughput for the secondary network. In this paper, we study the problem of designing the sensing duration to maximize the achievable throughput for the secondary network under the constraint that the primary users are sufficiently protected. We formulate the sensing-throughput tradeoff problem mathematically, and use the energy-detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time which yields the highest throughput for the secondary network. Cooperative sensing using multiple mini-slots or multiple secondary users is also studied using the methodology proposed in this paper. Computer simulations have shown that for a 6 MHz channel, when the frame duration is 100 ms and the signal-to-noise ratio of the primary user at the secondary receiver is -20 dB, the optimal sensing time achieving the highest throughput while maintaining 90% detection probability is 14.2 ms. This optimal sensing time decreases when distributed spectrum sensing is applied.
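As a rough illustration of the tradeoff, the following Python sketch numerically maximizes the secondary throughput over the sensing time for an energy detector under a target detection probability, using the standard Gaussian approximations; the frame length, sampling rate, SNR, idle probability, and secondary-link rate below are illustrative assumptions, not the exact settings of the paper.

import numpy as np
from scipy.stats import norm

T    = 100e-3             # frame duration (s)
fs   = 6e6                # sampling rate for a 6 MHz channel (Hz)
snr  = 10 ** (-20 / 10)   # primary-user SNR at the secondary receiver (-20 dB)
Pd   = 0.9                # target detection probability
P_H0 = 0.8                # assumed probability that the channel is idle
C0   = np.log2(1 + 100)   # assumed secondary rate (bits/s/Hz) on an idle channel

def false_alarm(tau):
    # P_fa(tau) of an energy detector that meets the target Pd (Gaussian approximation).
    n = tau * fs
    return norm.sf(np.sqrt(2 * snr + 1) * norm.isf(Pd) + np.sqrt(n) * snr)

def throughput(tau):
    # Average secondary throughput when tau seconds of each frame are spent sensing.
    return (T - tau) / T * C0 * P_H0 * (1 - false_alarm(tau))

taus = np.linspace(1e-4, 30e-3, 3000)
best = taus[np.argmax(throughput(taus))]
print(f"optimal sensing time ~ {best * 1e3:.1f} ms")   # should land near the ~14 ms figure quoted above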
Differentiation of optic disc edema from optic nerve head drusen with spectral-domain optical coherence tomography.
BACKGROUND To assess the efficacy of quantitative analysis of the optic nerve head and peripapillary retinal nerve fiber layer (RNFL) with the spectral-domain optical coherence tomography (SD-OCT) in differentiating optic disc edema (ODE) from optic nerve head drusen (ONHD). METHODS Prospective clinical study. Twenty-five eyes of 25 ODE patients (group 1), 25 eyes of 25 ONHD patients (group 2), and 25 eyes of 25 healthy subjects were included. The thickness of the peripapillary RNFL, the thickness of the subretinal hyporeflective space (SHYPS), the area of the SHYPS, the horizontal length of the optic nerve head, and the angle between the temporal RNFL and the optic nerve head (α-angle) were evaluated with SD-OCT. RESULTS The mean RNFL thickness was significantly greater in group 1 when compared with group 2 and control group (P < 0.001). The receiver operating characteristic curve areas for temporal and nasal RNFL thicknesses in differentiating group 1 and group 2 were 0.819 and 0.851, respectively (for temporal RNFL thickness >101.5 μm: sensitivity 92%, specificity 65%; for nasal RNFL thickness >74.5 μm: sensitivity 92%, specificity 47%). The mean SHYPS thickness, SHYPS area, and degree of the α-angle were greater in group 1 when compared with group 2 (P < 0.05). For the SHYPS thickness >464 μm: 85% sensitivity and 60% specificity; for the SHYPS area >811 μm: 85% sensitivity and 89% specificity; and for the α-angle >141°: 77% sensitivity and 95% specificity were obtained. CONCLUSION The quantitative analysis of the optic nerve head and peripapillary RNFL with SD-OCT can provide useful data in differentiating ODE from ONHD.
Improving the Reproducibility of PAN's Shared Tasks: Plagiarism Detection, Author Identification, and Author Profiling
This paper reports on the PAN 2014 evaluation lab, which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN's tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running software instead of its run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, while the submitted software is kept in a running state. This year, we addressed the question of who is responsible for the successful execution of submitted software, in order to put participants back in charge of executing their software at our site. In sum, 57 pieces of software have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of software for our three tasks to date, all of which is readily available for further analysis. The report concludes with a brief summary of each task.
Practical and flexible path planning for car-like mobile robot using maximal-curvature cubic spiral
This paper presents a nonholonomic path planning method for car-like mobile robots based on cubic spirals, aiming to take into consideration the curvature constraint, length minimization, and computational demand. The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends, connected with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach resorts to minimization via linear programming over the sum of the lengths of the path segments, where paths are synthesized from minimal-locomotion cubic spirals linking the start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward, so that the generation of feasible paths in an environment free of obstacles is efficient, taking only a few milliseconds; (ii) Flexible: it lends itself to various generalizations: it is readily applicable to mobile robots capable of forward and backward motion and to Dubins' car (i.e. a car with only forward driving capability); it is well adapted to the incorporation of other constraints, such as the wall-collision avoidance encountered in robot soccer games; and it extends straightforwardly to planning a path connecting an ordered sequence of target configurations in a simple obstructed environment.
The History of the Busan-Shimonoseki Ferry Line and Changes in the Japanese Population of Busan-bu
This paper investigates the history and situation of the Busan-Shimonoseki cross-channel liner and analyses the relations between Busan-Shimonoseki...
Consensus spectral clustering in near-linear time
This paper addresses the scalability issue in spectral analysis, which has been widely used in data management applications. Spectral analysis techniques enjoy powerful clustering capability while suffering from high computational complexity. In most previous research, the bottleneck of the computational complexity of spectral analysis stems from the construction of the pairwise similarity matrix among objects, which costs at least O(n^2), where n is the number of data points. In this paper, we propose a novel estimator of the similarity matrix using a K-means accumulative consensus matrix, which is intrinsically sparse. The computational cost of the accumulative consensus matrix is O(n log n). We further develop a Non-negative Matrix Factorization approach to derive the clustering assignment. The overall complexity of our approach remains O(n log n). In order to validate our method, we (1) theoretically show the local-preserving and convergence properties of the similarity estimator, (2) validate it on a large number of real-world datasets and compare the results to other state-of-the-art spectral analysis methods, and (3) apply it to large-scale data clustering problems. Results show that our approach uses much less computational time than other state-of-the-art clustering methods, while providing comparable clustering quality. We also successfully apply our approach to a 5-million-point dataset on a single machine in reasonable time. Our techniques open a new direction for high-quality large-scale data analysis.
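The core idea, replacing the exact pairwise similarity matrix with a K-means co-association (consensus) estimate, can be sketched in a few lines of Python; the synthetic data, the number of K-means runs, and the final hierarchical cut (used here in place of the paper's NMF step and sparse bookkeeping) are all illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
n, runs = X.shape[0], 30

# Co-association matrix: fraction of K-means runs in which two points co-cluster.
consensus = np.zeros((n, n))
rng = np.random.default_rng(0)
for r in range(runs):
    labels = KMeans(n_clusters=int(rng.integers(2, 8)), n_init=5,
                    random_state=r).fit_predict(X)
    consensus += (labels[:, None] == labels[None, :])
consensus /= runs

# Treat the consensus as a similarity and cut it into the final clusters
# (a hierarchical cut stands in for the paper's NMF-based assignment).
dist = 1.0 - consensus
np.fill_diagonal(dist, 0.0)
final = fcluster(linkage(squareform(dist), method="average"), t=3, criterion="maxclust")
print(np.bincount(final))   # sizes of the recovered clusters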
Comprehensive Topological Analysis of Conductive and Inductive Charging Solutions for Plug-In Electric Vehicles
The impending global energy crisis has opened up new opportunities for the automotive industry to meet the ever-increasing demand for cleaner and fuel-efficient vehicles. This has necessitated the development of drivetrains that are either fully or partially electrified in the form of electric and plug-in hybrid electric vehicles (EVs and HEVs), respectively, which are collectively addressed as plug-in EVs (PEVs). PEVs in general are equipped with larger on-board storage and power electronics for charging or discharging the battery, in comparison with HEVs. The extent to which PEVs are adopted significantly depends on the nature of the charging solution utilized. In this paper, a comprehensive topological survey of the currently available PEV charging solutions is presented. PEV chargers based on the nature of charging (conductive or inductive), stages of conversion (integrated single stage or two stages), power level (level 1, 2, or 3), and type of semiconductor devices utilized (silicon, silicon carbide, or gallium nitride) are thoroughly reviewed in this paper.
A novel approach for collaborative filtering to alleviate the new item cold-start problem
Recommender systems have been widely used as an important response to the information overload problem by providing users with more personalized information services. The most popular core technique of such systems is collaborative filtering, which utilizes users' known preferences to generate predictions of their unknown preferences. A key challenge for collaborative filtering recommender systems is generating high-quality recommendations for cold-start items, on which no user has expressed preferences yet. In this paper, we propose a hybrid algorithm that uses both ratings and content information to tackle the item-side cold-start problem. We first cluster items based on the rating matrix and then utilize the clustering results and item content information to build a decision tree that associates novel items with existing ones. Considering that the ratings on a novel item constantly increase, we show that the predictions of our approach can be combined with traditional collaborative-filtering methods, via a weighting coefficient, to yield superior performance. Experiments on a real data set show the improvement of our approach in overcoming the item-side cold-start problem.
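A toy Python sketch of the hybrid idea follows: items are clustered on their rating columns, a decision tree learns to map content features to those rating clusters, and a brand-new item is assigned the per-user mean ratings of its predicted cluster. The random rating matrix, content features, cluster count, and tree depth are illustrative assumptions, and the blending coefficient with standard collaborative filtering is omitted.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(50, 20)).astype(float)   # users x items
content = rng.random((20, 5))                                # items x content features

# 1. Cluster existing items on their rating profiles.
item_clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(ratings.T)

# 2. Learn a content -> rating-cluster mapping.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(content, item_clusters)

# 3. A cold-start item has content but no ratings: predict its cluster and use
#    the cluster's per-user mean rating as its initial prediction.
new_item_content = rng.random((1, 5))
cluster = tree.predict(new_item_content)[0]
pred = ratings[:, item_clusters == cluster].mean(axis=1)
print("predicted ratings for the new item (first 5 users):", np.round(pred[:5], 2))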
A meta-analysis of state-of-the-art electoral prediction from Twitter data
Electoral prediction from Twitter data is an appealing research topic. It seems relatively straightforward, and the prevailing view is overly optimistic. This is problematic because, while simple approaches are assumed to be good enough, core problems are not addressed. Thus, this paper aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presumed predictive power of Twitter data; and (3) depict a roadmap to push the field forward. Hence, a scheme to characterize Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken to date, as well as their weaknesses. This is the first meta-analysis of the whole body of research regarding electoral prediction from Twitter data. It reveals that the presumed predictive power of such data has been somewhat exaggerated: although social media may provide a glimpse of electoral outcomes, current research does not provide strong evidence that it can replace traditional polls. Finally, future lines of work are suggested.
Sampling Big Trajectory Data
The increasing prevalence of sensors and mobile devices has led to an explosive increase in the scale of spatio-temporal data in the form of trajectories. A trajectory aggregate query, a fundamental functionality for measuring trajectory data, aims to retrieve the statistics of trajectories passing a user-specified spatio-temporal region. A large-scale spatio-temporal database with big disk-resident data takes a very long time to produce exact answers to such queries. Hence, approximate query processing with a guaranteed error bound is a promising solution in many scenarios with stringent response-time requirements. In this paper, we study the problem of approximate query processing for trajectory aggregate queries. We show that it boils down to the distinct-value estimation problem, which has been proven to be very hard, with powerful negative results, when no index is built. By utilizing the well-established spatio-temporal index and introducing an inverted index to trajectory data, we are able to design a random index sampling (RIS) algorithm to estimate the answers with a guaranteed error bound. To further improve system scalability, we extend the RIS algorithm to a concurrent random index sampling (CRIS) algorithm to process a number of trajectory aggregate queries arriving concurrently with overlapping spatio-temporal query regions. To demonstrate the efficacy and efficiency of our sampling and estimation methods, we applied them to a real large-scale user trajectory database collected from a cellular service provider in China. Our extensive evaluation results indicate that both RIS and CRIS outperform exhaustive search for single and concurrent trajectory aggregate queries by two orders of magnitude in terms of query processing time, while preserving a relative error ratio lower than 10%, with only 1% of the search cost of the exhaustive method.
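As a back-of-the-envelope illustration of approximate trajectory aggregation, the sketch below estimates how many trajectories intersect a query box by uniform sampling with a normal-approximation confidence bound; this is a simplified stand-in for the RIS algorithm described above, which samples index entries rather than whole trajectories, and the synthetic trajectories and query region are assumptions.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000
# Each synthetic "trajectory" is a short sequence of 2-D points in the unit square.
trajectories = [rng.random((int(rng.integers(5, 20)), 2)) for _ in range(N)]
xmin, xmax, ymin, ymax = 0.40, 0.45, 0.40, 0.45   # query region

def hits(traj):
    x, y = traj[:, 0], traj[:, 1]
    return bool(np.any((x >= xmin) & (x <= xmax) & (y >= ymin) & (y <= ymax)))

# Sample trajectory ids uniformly and scale the hit fraction up to the full set.
sample_ids = rng.choice(N, size=2_000, replace=False)
p_hat = np.mean([hits(trajectories[i]) for i in sample_ids])
estimate = p_hat * N
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / len(sample_ids)) * N
print(f"estimated count: {estimate:.0f} +/- {half_width:.0f} (95% CI)")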
ASL Recognition Based on a Coupling Between HMMs and 3D Motion Analysis
We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53-sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that context-dependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.
(1+1)-dimensional Galilean supersymmetry in ultracold quantum gases
We discuss a (1+1)-dimensional Galilean invariant model recently introduced in connection with ultracold quantum gases. After showing its relation to a nonrelativistic (2+1) Chern-Simons matter system, we identify the generators of the supersymmetry and its relation with the existence of self-dual equations.
Cut to the Chase: A Context Zoom-in Network for Reading Comprehension
In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models struggle to reason over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural-based architecture that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset ‘NarrativeQA’. The proposed architecture outperforms state-of-the-art results (Tay et al., 2018) by 12.62% (ROUGE-L) relative improvement.
Understanding Consumer Decision Making for Complex Choices: The Effects of Individual and Contextual Factors
E-commerce consumers are facing increasingly complex purchasing decisions. Due to their limited cognitive capacity, however, consumers may not always achieve their goal of optimal product choice. Existing research has focused on providing aids that help consumers make rational and conscious information choices for complex purchasing decisions. Recently, the dual-process Unconscious Thought Theory (UTT) has suggested otherwise: because of limited cognitive resources, unconscious information processing may outperform conscious information processing for complex decisions. Drawing on the UTT, this study proposes that strategically designed interventions interact with other contextual and individual factors in consumer information processing, and ultimately lead to superior consumer choice in certain choice environments. An experiment was conducted to test the research model. Focusing on unconscious information processing in online shopping, this study has important implications for Web-specific human-computer interaction research and e-commerce.
Triage of patients with acute chest pain and possible cardiac ischemia: the elusive search for diagnostic perfection.
Quality Grand Rounds is a series of articles and companion conferences designed to explore a range of quality issues and medical errors. Presenting actual cases drawn from institutions around the United States, the articles integrate traditional medical case histories with results of root-cause analyses and, where appropriate, anonymous interviews with the involved patients, physicians, nurses, and risk managers. Cases do not come from the discussants' home institutions. Summary of the Case. Mrs. T., a 68-year-old woman with many cardiac risk factors and a history of myocardial infarction (MI), presented with atypical symptoms but a changed electrocardiogram (ECG). These ECG changes were not appreciated by Dr. M., the emergency department physician, and Mrs. T. was mistakenly sent home with what proved to be an acute MI. Dr. M. was interviewed by a Quality Grand Rounds editor on 21 March 2002. The Case. Twelve hours before presenting to the emergency department, Mrs. T. called the hospital's telephone triage nurse and reported dull, midsternal pain relieved after a bowel movement. After probing for associated symptoms, the nurse reassured the patient and told her to call back if she experienced any further discomfort. Several hours later, when the pain recurred, Mrs. T. called again. When asked whether she had sublingual nitroglycerin on hand, Mrs. T. confirmed that she had a bottle of nitroglycerin pills but that the expiration date had passed. She was told to take the nitroglycerin if the pain recurred and was given an appointment for 2 days later (at which time she was instructed to exchange her expired bottle for a new one). She was advised to call 911 if the pain recurred and was associated with nausea, diaphoresis, or dyspnea. Because of continued pain, the patient came into the emergency department of a large urban hospital 4 hours later (at approximately 2:00 a.m.) with a chief symptom of chest pain. The patient had a history of inferior-wall MI, hypertension, diabetes mellitus, hyperlipidemia, and peripheral vascular disease. She described the pain as very different from any pain that she had experienced in the past. It had a burning quality, was located across her epigastrium and chest, persisted for 4 to 6 minutes at a time, and had been intermittent for 24 hours. The pain came on at rest and was relieved by activity. She reported no associated dyspnea, diaphoresis, or nausea. Review of systems revealed 1 week of constipation. Dr. M., a moonlighting internist, was awakened from sleep to evaluate the patient. Physical examination revealed a pulse of 85 beats/min, blood pressure of 140/70 mm Hg, and respiratory rate of 18 breaths/min. The lungs were clear to auscultation, and heart sounds were normal, with no rubs, murmurs, or gallops. The ECG (obtained during a painful episode) was interpreted as sinus rhythm, with normal axis and normal intervals (Figure 1, top). Dr. M. noted a Q wave in lead III but specifically noted the absence of ST-segment and T-wave changes consistent with ischemia. The patient was discharged from the emergency department 1 hour later with a diagnosis of chest and abdominal pain secondary to constipation. She was prescribed a regimen to relieve constipation and was told to schedule a follow-up appointment with her primary physician. Figure 1. Electrocardiograms (ECGs) obtained at presentation and a previous comparison tracing.
Diagnosing Chest Pain in the Emergency Department. Few diagnostic decisions have been more heavily researched than the approach to the patient with acute chest pain. In the context of the patient safety movement, it is useful to consider this case not only for what it teaches us about triaging patients with acute chest pain but also for what it may reveal about improving the individual physician's diagnostic performance through the use of algorithms or protocols. Chest pain accounts for about 5.6 million emergency department visits annually, second only to abdominal pain as the most common reason for an emergency department visit. Approximately 1% to 4% of patients who present to an emergency department with what is actually an acute MI are mistakenly discharged (1-8), and the percentage of missed diagnoses increases when the denominator includes not only acute MI but also unstable angina. Patients discharged from the emergency department with MI have a generally worse prognosis than do appropriately hospitalized patients with MI (1-4), partly because of their risk for sudden death but also because of the delay in implementing treatments that are known to be effective for MI or the acute coronary syndrome (unstable angina or non-ST-elevation MI). Patients with atypical symptoms, and especially patients without chest pain (2, 3), are most likely to be mistakenly discharged. The clinical question is which patients with acute chest pain have a presentation benign enough to make discharge from the emergency department safe and appropriate. Cost-effectiveness analyses suggest that a coronary care unit is the appropriate triage option for patients whose probability of acute MI is about 20% or higher (9, 10). For patients whose risks for MI or acute coronary ischemia are lower (5-8, 11), admission to telemetry units is often recommended, including a short stay on a chest pain (or coronary) evaluation or observation unit. In analyzing Mrs. T.'s presentation, it is essential to determine whether any combination of initial symptoms, signs, laboratory studies, or ECG findings has enough discriminatory power to reduce the likelihood of misdiagnosing an acute coronary syndrome to a level that would render discharge from the emergency department safe and appropriate. In the acute setting, the ECG is not only the most important piece of information (12), it is nearly as important as all other information combined. About 80% of patients with acute MI have an initial ECG that shows evidence of infarction or ischemia not known to be old (Figure 2), and any patient who has such abnormalities has too high a risk to be safely discharged, regardless of the clinical history or physical examination (13, 14). The sensitivity is lower if the goal is to identify ischemia in addition to infarction, but comparisons with previous ECGs can improve the accuracy and usefulness of interpreting the ECG (15). Although a normal ECG at presentation predicts a relatively lower risk for complications (16-18), it cannot absolutely exclude myocardial ischemia or even MI. For example, among patients mistakenly discharged from the emergency department, up to 50% have normal or nondiagnostic ECG findings (2, 19). Thus, even if Mrs. T.'s ECG had been normal or unchanged from her previous ECG, this would not have had enough negative predictive value to exclude an acute MI or the acute coronary syndrome. Figure 2. Receiver-operating characteristic curve of the initial electrocardiographic interpretation.
The description of the presenting symptom is also important. Patients with chest pain are more likely to have MI or the acute coronary syndrome (7, 8, 11, 14), but up to 25% of patients with these diagnoses may present with symptoms such as shortness of breath, dizziness, or weakness, so cardiac ischemia must also be considered in patients with these symptoms. Demographic factors and traditional cardiovascular risk factors (with the very notable exception of a history of MI or coronary disease [5, 6, 20, 21]) are of little importance in predicting the cause of acute chest pain (21-24). Aspects of the medical history that appreciably lower the patient's likelihood of ischemia (likelihood ratios of approximately 0.2) include reproducibility of pain with palpation or positional changes, pleuritic pain, stabbing pain, or pain radiating to the lower extremities (5, 6, 20, 21, 24). However, even these negative predictors cannot reliably exclude MI (20, 25). Mrs. T.'s description of painful episodes lasting only 4 to 6 minutes may also seem atypical, but the duration of symptoms is not a useful predictor (5, 7-9) unless the pain has persisted for 48 hours or more without ECG changes (5, 6). Patients who describe their pain as similar to previous episodes of cardiac ischemia are in a high-risk category (5, 18), but any chest pain carries a higher risk than no pain (7, 8, 11). Although the precise reproduction of chest pain by local palpation decreases risk (5, 18), normal results on physical examination do not lower the risk (5, 18, 20, 24). How could these data have been used in caring for Mrs. T.? She had pain that was different from her previous MI and was thought to have an unchanged ECG. If it is assumed that all of these data are accurate, she would have had less than a 7% risk for MI and a low risk for complications that would require intensive care (18). However, because of her history of coronary disease and the absence of a clear-cut benign diagnosis, because constipation is not an established cause of chest pain, and because her pain had not resolved, she is the type of patient for whom admission to a chest pain evaluation unit is appropriate (20, 21, 26-33) (Table). It is very important for individual hospitals to adopt clear guidelines for triaging such patients, because these patients may be evaluated by many different physicians with varying experience, knowledge, personality traits, and levels of fatigue (8, 34, 35). Table. Recommended Strategies for Determining Where To Admit Patients with Acute Chest Pain for Treatment of Ongoing Life-Threatening Conditions* Dr. M.: I think one of the factors that affected my decision making when I first evaluated the patient was the time of night (2:00 a.m.) and the fact that I had just awakened. I saw her less than a minute after being awakened. What I probably should have done was had her stay in the emergency department, even if I thought she was low risk (which I obviously at that time did), and let more time pass so that my sleep inertia …
Instance-Based Learning Algorithms
Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances. This approach extends the nearest neighbor algorithm, which has large storage requirements. We describe how storage requirements can be significantly reduced with, at most, minor sacrifices in learning rate and classification accuracy. While the storage-reducing algorithm performs well on several real-world databases, its performance degrades rapidly with the level of attribute noise in training instances. Therefore, we extended it with a significance test to distinguish noisy instances. This extended algorithm's performance degrades gracefully with increasing noise levels and compares favorably with a noise-tolerant decision tree algorithm.
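A compact Python sketch of the storage-reduction idea, in the spirit of the family of algorithms described above, is shown below: an instance is stored only if the instances kept so far misclassify it under 1-nearest-neighbour. The dataset, the Euclidean distance, and the train/test split are illustrative assumptions, and the significance test used for noise tolerance is omitted.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep an instance only when the instances stored so far get it wrong.
stored_X, stored_y = [X_tr[0]], [y_tr[0]]
for xi, yi in zip(X_tr[1:], y_tr[1:]):
    d = np.linalg.norm(np.array(stored_X) - xi, axis=1)
    if stored_y[int(np.argmin(d))] != yi:
        stored_X.append(xi)
        stored_y.append(yi)

# 1-nearest-neighbour classification of the test set from the reduced store.
d = np.linalg.norm(np.array(stored_X)[None, :, :] - X_te[:, None, :], axis=2)
pred = np.array(stored_y)[d.argmin(axis=1)]
print(f"stored {len(stored_X)}/{len(X_tr)} instances, "
      f"test accuracy {np.mean(pred == y_te):.2f}")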
Adding Sarcosine to Antipsychotic Treatment in Patients with Stable Schizophrenia Changes the Concentrations of Neuronal and Glial Metabolites in the Left Dorsolateral Prefrontal Cortex
The glutamatergic system is a key element in the pathogenesis of schizophrenia. Sarcosine (N-methylglycine) is an exogenous amino acid that acts as a glycine transporter inhibitor. It modulates glutamatergic transmission by increasing glycine concentration around NMDA (N-methyl-d-aspartate) receptors. In patients with schizophrenia, the function of the glutamatergic system in the prefrontal cortex is impaired, which may promote negative and cognitive symptoms. Proton nuclear magnetic resonance (¹H-NMR) spectroscopy is a non-invasive imaging method enabling the evaluation of brain metabolite concentration, which can be applied to assess pharmacologically induced changes. The aim of the study was to evaluate the influence of a six-month course of sarcosine therapy on the concentration of metabolites (NAA, N-acetylaspartate; Glx, complex of glutamate, glutamine and γ-aminobutyric acid (GABA); mI, myo-inositol; Cr, creatine; Cho, choline) in the left dorsolateral prefrontal cortex (DLPFC) in patients with stable schizophrenia. Fifty patients with schizophrenia, treated with constant antipsychotic doses, in stable clinical condition were randomly assigned to administration of sarcosine (25 patients) or placebo (25 patients) for six months. Metabolite concentrations in the DLPFC were assessed with 1.5 Tesla ¹H-NMR spectroscopy. Clinical symptoms were evaluated with the Positive and Negative Syndrome Scale (PANSS). The first spectroscopy revealed no differences in metabolite concentrations between groups. After six months, NAA/Cho, mI/Cr and mI/Cho ratios in the left DLPFC were significantly higher in the sarcosine than the placebo group. In the sarcosine group, NAA/Cr, NAA/Cho, mI/Cr, and mI/Cho ratios also significantly increased compared to baseline values. In the placebo group, only the NAA/Cr ratio increased. The addition of sarcosine to antipsychotic therapy for six months increased markers of neuronal viability (NAA) and neuroglial activity (mI) with simultaneous improvement of clinical symptoms. Sarcosine, two grams administered daily, seems to be an effective adjuvant in the pharmacotherapy of schizophrenia.
Music-genre classification system based on spectro-temporal features and feature selection
An automatic music-genre classification system is proposed. Based on timbre features such as mel-frequency cepstral coefficients, spectro-temporal features are obtained to capture the temporal evolution and variation of the spectral characteristics of the music signal. Mean, variance, minimum, and maximum values of the timbre features are calculated. Modulation spectral flatness, crest, contrast, and valley are estimated for both the original spectra and the timbre-feature vectors. A support vector machine (SVM) is used as a classifier, with an elaborated kernel function. To reduce the computational complexity, an SVM ranker is applied for feature selection. Compared with the best algorithms submitted to the music information retrieval evaluation exchange (MIREX) contests, the proposed method provides higher accuracy at a lower feature dimension for the GTZAN and ISMIR2004 databases.
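A minimal sketch of such a timbre-feature pipeline in Python is given below; the file names, genre labels, 30-second clips, and plain RBF kernel are assumptions, and the modulation-spectral features, elaborated kernel, and SVM-ranker feature selection described above are not reproduced.

import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timbre_features(path):
    # Summarize the temporal evolution of MFCC-based timbre features.
    y, sr = librosa.load(path, duration=30.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)          # (13, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1),
                           mfcc.min(axis=1), mfcc.max(axis=1)])

# Hypothetical training files; in practice these would be GTZAN or ISMIR2004 clips.
train_files  = ["blues.00000.wav", "rock.00000.wav"]
train_genres = ["blues", "rock"]

X = np.vstack([timbre_features(f) for f in train_files])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, train_genres)
print(clf.predict([timbre_features("unknown_clip.wav")]))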
Personality dimensions in bulimia nervosa, binge eating disorder, and obesity.
OBJECTIVE The purpose of this investigation was to examine differences in personality dimensions among individuals with bulimia nervosa, binge eating disorder, non-binge eating obesity, and a normal-weight comparison group as well as to determine the extent to which these differences were independent of self-reported depressive symptoms. METHOD Personality dimensions were assessed using the Multidimensional Personality Questionnaire in 36 patients with bulimia nervosa, 54 patients with binge eating disorder, 30 obese individuals who did not binge eat, and 77 normal-weight comparison participants. RESULTS Participants with bulimia nervosa reported higher scores on measures of stress reaction and negative emotionality compared to the other 3 groups and lower well-being scores compared to the normal-weight comparison and the obese samples. Patients with binge eating disorder scored lower on well-being and higher on harm avoidance than the normal-weight comparison group. In addition, the bulimia nervosa and binge eating disorder groups scored lower than the normal-weight group on positive emotionality. When personality dimensions were reanalyzed using depression as a covariate, only stress reaction remained higher in the bulimia nervosa group compared to the other 3 groups and harm avoidance remained higher in the binge eating disorder than the normal-weight comparison group. CONCLUSIONS The higher levels of stress reaction in the bulimia nervosa sample and harm avoidance in the binge eating disorder sample after controlling for depression indicate that these personality dimensions are potentially important in the etiology, maintenance, and treatment of these eating disorders. Although the extent to which observed group differences in well-being, positive emotionality, and negative emotionality reflect personality traits, mood disorders, or both, is unclear, these features clearly warrant further examination in understanding and treating bulimia nervosa and binge eating disorder.
Transitioning to Physics-of-Failure as a Reliability Driver in Power Electronics
Power electronics has progressively gained an important status in power generation, distribution, and consumption. With more than 70% of electricity processed through power electronics, recent research endeavors to improve the reliability of power electronic systems to comply with more stringent constraints on cost, safety, and availability in various applications. This paper serves to give an overview of the major aspects of reliability in power electronics and to address the future trends in this multidisciplinary research direction. The ongoing paradigm shift in reliability research is presented first. Then, the three major aspects of power electronics reliability are discussed, respectively, which cover physics-of-failure analysis of critical power electronic components, state-of-the-art design for reliability process and robustness validation, and intelligent control and condition monitoring to achieve improved reliability under operation. Finally, the challenges and opportunities for achieving more reliable power electronic systems in the future are discussed.
BorderSense: Border patrol through advanced wireless sensor networks
Conventional border patrol systems suffer from intensive human involvement. Recently, unmanned border patrol systems have employed high-tech devices such as unmanned aerial vehicles, unattended ground sensors, and surveillance towers equipped with camera sensors. However, any single technique encounters inextricable problems, such as a high false alarm rate and line-of-sight constraints. What is lacking is a coherent system that coordinates the various technologies to improve overall accuracy. In this paper, the concept of BorderSense, a hybrid wireless sensor network architecture for border patrol systems, is introduced. BorderSense utilizes the most advanced sensor network technologies, including wireless multimedia sensor networks and wireless underground sensor networks. The framework to deploy and operate BorderSense is developed. Based on the framework, research challenges and open research issues are discussed.
Semantic Data Models
Semantic data models have emerged from a requirement for more expressive conceptual data models. Current generation data models lack direct support for relationships, data abstraction, inheritance, constraints, unstructured objects, and the dynamic properties of an application. Although the need for data models with richer semantics is widely recognized, no single approach has won general acceptance. This paper describes the generic properties of semantic data models and presents a representative selection of models that have been proposed since the mid-1970s. In addition to explaining the features of the individual models, guidelines are offered for the comparison of models. The paper concludes with a discussion of future directions in the area of conceptual data modeling.
Semantic, phonological, and hybrid veridical and false memories in healthy older adults and in individuals with dementia of the Alzheimer type.
Five groups of participants (young, healthy old, healthy old-old, very mild dementia of the Alzheimer type [DAT], and mild DAT) studied 12-item lists of words that converged on a critical nonpresented word (cold) semantically (chill, frost, warm, ice), phonologically (code, told, fold, old), or in a hybrid list of both (chill, told, warm, old). The results indicate that (a) veridical recall decreased with age and dementia; (b) recall of the nonpresented items increased with age and remained fairly stable across dementia; and (c) false recall varied by list type, with hybrid lists producing superadditive effects. For hybrid lists, individuals with DAT were 3 times more likely to recall the critical nonpresented word than a studied word. When false memory was considered as a proportion of veridical memory, there was an increase in relative false memory as a function of age and dementia. Results are discussed in terms of age- and dementia-related changes in attention and memory.
Towards Building a SentiWordNet for Tamil
Sentiment analysis is a discipline of Natural Language Processing which deals with analysing the subjectivity of data. It is an important task with both commercial and academic applications. Languages like English have several resources which assist in the task of sentiment analysis. SentiWordNet for English is one such important lexical resource, containing a subjective polarity for each lexical item. With growing data in the native vernacular, there is a need for language-specific SentiWordNet(s). In this paper, we discuss a generic approach followed for the development of a Tamil SentiWordNet using currently available resources in English. For the Tamil SentiWordNet, a substantial-agreement Fleiss kappa score of 0.663 was obtained after verification by Tamil annotators. Such a resource would serve as a baseline for future improvements in the task of sentiment analysis specific to Tamil.
Comparative investigations on the efficacy of articaine 4% (epinephrine 1:200,000) and articaine 2% (epinephrine 1:200,000) in local infiltration anaesthesia in dentistry—a randomised double-blind study
A randomised double-blind study investigated 155 patients undergoing tooth extractions in the mandibular and maxillary jaws for a loss of anaesthetic potency when the concentration of the active ingredient in articaine solutions is reduced. Tests were performed on the preparations articaine 4% with a 1:200,000 addition of epinephrine (Ultracain D-S) and articaine 2% with a 1:200,000 addition of epinephrine (Ultracain 2%-Suprarenin). Local infiltration anaesthesia was the chosen method of anaesthesia. The most noticeable difference observed between the two injection solutions concerned the duration of anaesthesia, which was significantly shortened with the low-dose solution. The 4% articaine solution did not prove superior in local anaesthetic effect. Articaine 2% with epinephrine 1:200,000, therefore, can be considered a suitable local anaesthetic for tooth extractions.
Broadband Circularly Polarized Slot Antenna Array Using a Compact Sequential-Phase Feeding Network
A broadband circularly polarized (CP) slot antenna array fed by an asymmetric coplanar waveguide (CPW) with stepped and inverted T-shaped strips is proposed. Using four square slot antenna elements with sequential rotation oblique feed and a modified sequential-phase (SP) feeding network, broadband CP can be achieved. The measured −10 dB reflection coefficient bandwidth and 3 dB axial ratio (AR) bandwidth are 55.4% (1.63–2.88 GHz) and 58% (1.65–3 GHz), respectively. Good radiation characteristics with gain of more than 6 dBic over the operating band are obtained by the proposed antenna array, with a compact size of 155 × 155 × 0.8 mm³. Details of the proposed antenna array design and experimental results are presented and discussed.
The role of tutoring in problem solving.
THIS PAPER is concerned with the nature of the tutorial process; the means whereby an adult or "expert" helps somebody who is less adult or less expert. Though its aim is general, it is expressed in terms of a particular task: a tutor seeks to teach children aged 3, 4 and 5 yr to build a particular three-dimensional structure that requires a degree of skill that is initially beyond them. It is the usual type of tutoring situation in which one member "knows the answer" and the other does not, rather like a "practical" in which only the instructor "knows how". The changing interaction of tutor and children provides our data. A great deal of early problem solving by the developing child is of this order. Although from the earliest months of life he is a "natural" problem solver in his own right (e.g. Bruner, 1973), it is often the case that his efforts are assisted and fostered by others who are more skilful than he is (Kaye, 1970). Whether he is learning the procedures that constitute the skills of attending, communicating, manipulating objects, locomoting, or, indeed, a more effective problem solving procedure itself, there are usually others in attendance who help him on his way. Tutorial interactions are, in short, a crucial feature of infancy and childhood. Our species, moreover, appears to be the only one in which any "intentional" tutoring goes on (Bruner, 1972; Hinde, 1971). For although it is true that many of the higher primate species learn by observation of their elders (Hamburg, 1968; van Lawick-Goodall, 1968), there is no evidence that those elders do anything to instruct their charges in the performance of the skill in question. What distinguishes man as a species is not only his capacity for learning, but for teaching as well. It is the main aim of this paper to examine some of the major implications of this interactive, instructional relationship between the developing child and his elders for the study of skill acquisition and problem solving. The acquisition of skill in the human child can be fruitfully conceived as a hierarchical program in which component skills are combined into "higher skills" by appropriate orchestration to meet new, more complex task requirements (Bruner, 1973). The process is analogous to problem solving in which mastery of "lower order" or constituent problems is a sine qua non for success with a larger problem, each level influencing the other—as with reading, where the deciphering of words makes possible the deciphering of sentences, and sentences then aid in the deciphering of particular words (F. Smith, 1971). Given persistent intention in the young learner, given a "lexicon" of constituent skills, the crucial task is often one of com-
Anchors: High-Precision Model-Agnostic Explanations
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions. We propose an algorithm to efficiently compute these explanations for any black-box model with high-probability guarantees. We demonstrate the flexibility of anchors by explaining a myriad of different models for different domains and tasks. In a user study, we show that anchors enable users to predict how a model would behave on unseen instances with less effort and higher precision, as compared to existing linear explanations or no explanations.
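To make the notion of an anchor concrete, the toy Python sketch below estimates the precision and coverage of one hand-written candidate rule by perturbation sampling; the model, the rule over two iris features, and the marginal-resampling perturbation scheme are all assumptions, and the paper's bandit-style search over candidate rules is not shown.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[100]                          # the instance being explained
label = model.predict(instance.reshape(1, -1))[0]

# Candidate anchor: "petal length > 4.8 AND petal width > 1.6" (features 2 and 3).
def satisfies(samples):
    return (samples[:, 2] > 4.8) & (samples[:, 3] > 1.6)

rng = np.random.default_rng(0)
# Perturb by resampling every feature from its empirical marginal distribution.
perturbed = np.column_stack([rng.choice(X[:, j], size=5_000) for j in range(X.shape[1])])
mask = satisfies(perturbed)

# Precision: how often the model keeps its prediction when the rule holds.
precision = np.mean(model.predict(perturbed[mask]) == label)
coverage = np.mean(mask)
print(f"precision ~ {precision:.2f}, coverage ~ {coverage:.2f}")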
Aspirin versus coumadin in the prevention of reocclusion and recurrent ischemia after successful thrombolysis: a prospective placebo-controlled angiographic study. Results of the APRICOT Study.
BACKGROUND Successful coronary thrombolysis involves a risk for reocclusion that cannot be prevented by invasive strategies. Therefore, we studied the effects of three antithrombotic regimens on the angiographic and clinical courses after successful thrombolysis. METHODS AND RESULTS Patients treated with intravenous thrombolytic therapy followed by intravenous heparin were eligible when a patent infarct-related artery was demonstrated at angiography < 48 hours. Three hundred patients were randomized to either 325 mg aspirin daily or placebo with discontinuation of heparin or to Coumadin with continuation of heparin until oral anticoagulation was established (international normalized ratio, 2.8-4.0). After 3 months, in which conservative treatment was intended, vessel patency and ventricular function were reassessed in 248 patients. Reocclusion rates were not significantly different: 25% (23 of 93) with aspirin, 30% (24 of 81) with Coumadin, and 32% (24 of 74) with placebo. Reinfarction was seen in 3% of patients on aspirin, in 8% on Coumadin, and in 11% on placebo (aspirin versus placebo, p < 0.025; other comparison, p = NS). Revascularization rate was 6% with aspirin, 13% with Coumadin, and 16% with placebo (aspirin versus placebo, p < 0.05; other comparisons, p = NS). Mortality was 2% and did not differ between groups. An event-free clinical course was seen in 93% with aspirin, in 82% with Coumadin, and in 76% with placebo (aspirin versus placebo, p < 0.001; aspirin versus Coumadin, p < 0.05). An event-free course without reocclusion was observed in 73% with aspirin, in 63% with Coumadin, and in 59% with placebo (p = NS). An increase of left ventricular ejection fraction was only found in the aspirin group (4.6%, p < 0.001). CONCLUSIONS At 3 months after successful thrombolysis, reocclusion occurred in about 30% of patients, regardless of the use of antithrombotics. Compared with placebo, aspirin significantly reduces reinfarction rate and revascularization rate, improves event-free survival, and better preserves left ventricular function. The efficacy of Coumadin on these end points appears less than that of aspirin. The still-high reocclusion rate emphasizes the need for better antithrombotic therapy in these patients.
CS Teacher Experiences with Educational Technology, Problem-Based Learning, and a CS Principles Curriculum
Little is known about how K-12 Computer Science (CS) teachers use technology and problem-based learning (PBL) to teach CS content in the context of CS Principles curricula. Significantly, little qualitative research has been conducted in these areas in computer science education, so we lack an in-depth understanding of the complicated realities of CS teachers' experiences. This paper describes the practices and experiences of six teachers' use of technology that was implemented to support PBL in the context of a dual enrollment CS Principles course. Results from an early offering of this course suggest that (1) while CS teachers used technology, they did not appear to use it to support student inquiry, (2) local adaptations to the curriculum were largely teacher-centric, and (3) the simultaneous adoption of new instructional practices, technologies, and curricula was overwhelming to teachers. This paper then describes how these results were used to modify the curriculum and professional development, leading to increased teacher satisfaction and student success in the course.
Taming Google-Scale Continuous Testing
Growth in Google's code size and feature churn rate has seen increased reliance on continuous integration (CI) and testing to maintain quality. Even with enormous resources dedicated to testing, we are unable to regression test each code change individually, resulting in increased lag time between code check-ins and test result feedback to developers. We report results of a project that aims to reduce this time by: (1) controlling test workload without compromising quality, and (2) distilling test results data to inform developers, while they write code, of the impact of their latest changes on quality. We model, empirically understand, and leverage the correlations that exist between our code, test cases, developers, programming languages, and code-change and test-execution frequencies, to improve our CI and development processes. Our findings show: very few of our tests ever fail, but those that do are generally "closer" to the code they test, certain frequently modified code and certain users/tools cause more breakages, and code recently modified by multiple developers (more than 3) breaks more often.
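As a rough illustration of how such correlations could drive test selection, the sketch below ranks tests for a code change using the signals the abstract highlights (dependency "closeness", file churn, many recent authors, historical failures). It is a hypothetical heuristic with made-up weights, not Google's actual system.

```python
from collections import defaultdict

def score_tests(changed_files, test_deps, churn, failure_rate, recent_authors):
    """Rank tests for a code change by simple risk signals (illustrative only).

    test_deps      : dict test -> set of files it (transitively) depends on
    churn          : dict file -> number of recent edits
    failure_rate   : dict test -> historical fraction of runs that failed
    recent_authors : dict file -> number of distinct recent authors
    """
    scores = defaultdict(float)
    for test, deps in test_deps.items():
        touched = deps & set(changed_files)
        if not touched:
            continue                       # unaffected tests can be skipped or deferred
        for f in touched:
            scores[test] += 1.0                            # "closeness": direct dependency hit
            scores[test] += 0.1 * churn.get(f, 0)          # frequently modified code breaks more
            if recent_authors.get(f, 0) > 3:               # many recent authors -> higher risk
                scores[test] += 1.0
        scores[test] += 5.0 * failure_rate.get(test, 0.0)  # tests that failed before fail again
    return sorted(scores, key=scores.get, reverse=True)
```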
Botulinum toxin A in the mid and lower face and neck.
Botulinum toxins have been smoothing hyperkinetic lines in the upper face for over 15 years. More recently, their use has widened to include applications in the mid and lower face and neck to smooth, shape, and sculpt, blurring the line between science and art. Their use in the lower face, however, requires a thorough and detailed knowledge of not only facial and cervical anatomy, but also the complex interactions of muscles and the aesthetic implications of a misplaced injection. Although proper patient selection and injection techniques do not guarantee optimal results, poor selection and techniques almost certainly guarantee disappointing results. In addition to its use as a primary procedure, botulinum toxin is also an effective adjunct to other cosmetic procedures, enhancing and prolonging the benefits of surgery, soft tissue augmentation, and laser resurfacing.
Deep sedation during catheter ablation for atrial fibrillation in elderly patients
Atrial fibrillation (AF) is the most common cardiac arrhythmia. AF incidence increases with age. AF ablation procedures are routinely performed under deep sedation with propofol. The purpose of the study was to evaluate if propofol deep sedation during AF ablation is safe in elderly patients. Four hundred one consecutive patients (mean age, 61.4 ± 11.1 years; range, 20–82; 66.3 % men) who presented to our institution for ablation of symptomatic AF were enrolled. Patients were divided into three groups: Patients in group A were ≤50 years old; patients in group B were 51–74 years old; and patients in group C were ≥75 years old. Procedures were performed under deep sedation with propofol, midazolam, and piritramide. SaO2, electrocardiogram, arterial blood pressure, and arterial blood gas were monitored throughout the procedure. Sedation-related complications, intraprocedural complications, and other adverse events were evaluated. Fisher exact or χ2 tests were used for comparison of adverse events and complications among groups. Analysis of variance was used to compare sedation- and procedure-related parameters. Fifty-three (13.2 %) elderly patients were in group C and were compared to 73 (18.2 %) patients in group A and 275 (68.8 %) in group B. No significant differences in sedation-related or intraprocedural complications were seen (group A, 1.4 %; group B, 1.1 %; group C, 3.7 %; p = 0.336). Despite a significantly greater drop in systolic blood pressure under sedation in group C (group A, 15.5 ± 9.5 mmHg; group B, 18.9 ± 16.3 mmHg; group C, 32.3 ± 15.5 mmHg; p < 0.001), no prolonged hypotension was observed. The rate of other adverse events (delirium, respiratory infection, renal failure) was significantly higher in group C (9.4 %), compared to group A (0 %) and group B (2.2 %; p = 0.004). Deep sedation with propofol and midazolam during AF ablation did not result in an increased rate of sedation-related complications in elderly patients. Similarly, the rate of procedural complications was not significantly different among the study groups. The rate of respiratory infections and renal failure was significantly higher in the elderly. All adverse events were treated successfully without any remaining sequelae.
Lean UX: the next generation of user-centered agile development?
In this paper we discuss the opportunities and challenges of the recently introduced Lean UX software development philosophy. The point of view is product design and development in a software agency. Lean UX philosophy is identified by three ingredients: design thinking, Lean production and Agile development. The major challenge for an agency is the organizational readiness of the client organization to adopt a new way of working. Rather than any special tool or practice, we see that the renewal of user-centered design and development is hindered by existing purchase processes and slow decision making patterns.
Invasive lobular carcinoma: response to neoadjuvant letrozole therapy
Invasive lobular cancer (ILC) responds poorly to neoadjuvant chemotherapy but appears to respond well to endocrine therapy. We examined the effectiveness of neoadjuvant letrozole in postmenopausal women (PMW) with estrogen receptor (ER)-rich ILC. PMW were considered for treatment with neoadjuvant letrozole if they had ER-rich, large operable, or locally advanced cancers, or were unfit for surgical therapy. Tumor volume was estimated at diagnosis and at 3 months using calipers (clinical), ultrasound, and mammography. At 3 months, if physically fit, women were assessed for surgery. Responsive women with cancers too large for breast-conserving surgery continued with letrozole. Patients had surgery or were switched to alternative therapy if tumor volume was increasing. Sixty-one patients (mean age, 76.2 years) with 63 ILCs were treated with letrozole for ≥3 months. The mean reduction in tumor volume at 3 months was 66% (median, 76%) measured clinically, 61% (median, 73%) measured by ultrasound, and 54% (median, 60%) measured by mammography. Surgery was possible at 3 months in 24 cancers in 24 patients, and all but two of the remaining patients continued with letrozole therapy for a median duration of 9 months. At the time of this publication, 40 patients with a total of 41 cancers have undergone surgery. The rate of successful breast conservation was 81% (25/31). Twenty-one patients have continued with letrozole monotherapy, and 19 remain controlled on letrozole at a median of 2.8 years. There is a high rate of response to letrozole in PMW with ER-rich ILC.
Acromioclavicular joint separations.
Acromioclavicular (AC) joint separations are common injuries of the shoulder girdle, especially in the young and active population. Typically the mechanism of this injury is a direct force against the lateral aspect of the adducted shoulder, the magnitude of which affects injury severity. While low-grade injuries are frequently managed successfully using non-surgical measures, high-grade injuries frequently warrant surgical intervention to minimize pain and maximize shoulder function. Factors such as duration of injury and activity level should also be taken into account in an effort to individualize each patient's treatment. A number of surgical techniques have been introduced to manage symptomatic, high-grade injuries. The purpose of this article is to review the important anatomy, biomechanical background, and clinical management of this entity.
Immunosuppressive, anti-inflammatory and anti-cancer properties of triptolide: A mini review
OBJECTIVE Triptolide, the active component of Tripterygium wilfordii Hook F has been used to treat autoimmune and inflammatory conditions for over two hundred years in traditional Chinese medicine. However, the processes through which triptolide exerts immunosuppression and anti-inflammation are not understood well. In this review, we discuss the autoimmune disorders and inflammatory conditions that are currently treated with triptolide. Triptolide also possesses anti-tumorigenic effects. We discuss the toxicity of various triptolide derivatives and offer suggestions to improve its safety. This study also examines the clinical trials that have investigated the efficacy of triptolide. Our aim is to examine the mechanisms that are responsible for the immunosuppressive, anti-inflammatory, and anti-cancer effects of triptolide. MATERIALS AND METHODS The present review provides a comprehensive summary of the literature with respect to the immunosuppressive, anti-inflammatory, and anti-cancer properties of triptolide. RESULTS Triptolide possesses immunosuppressive, anti-inflammatory, and anti-cancer effects. CONCLUSION Triptolide can be used alone or in combination with existing therapeutic modalities as novel treatments for autoimmune disorders, cancers, and for immunosuppression.
RCD snubber circuit design for 5-level 4-switch DC-AC converter
This paper presents an optimized single phase DC-AC converter with RCD snubber circuit. The proposed converter has an optimized number of levels per number of switches (nL/nS), which is by far the best relationship among the converters proposed in the literature. The most important characteristics of the proposed configuration are: (i) reduced number of semiconductor devices, while keeping a high number of levels at the output converter side, (ii) only one DC source without any need to balance capacitor voltages, and (iii) reduced semiconductor losses and as a result higher efficiency. The principle of operation, RCD snubber design, modulation technique, and modeling of the proposed converter are presented in this paper. The proposed multilevel DC-AC converter topology is simulated and validated experimentally.
Privacy by Design in Federated Identity Management
Federated Identity Management (FIM), while solving important scalability, security and privacy problems of remote entity authentication, introduces new privacy risks. By virtue of sharing identities with many systems, the improved data quality of subjects may increase the possibilities of linking private data sets, moreover, new opportunities for user profiling are being introduced. However, FIM models to mitigate these risks have been proposed. In this paper we elaborate privacy by design requirements for this class of systems, transpose them into specific architectural requirements, and evaluate a number of FIM models with respect to these requirements. The contributions of this paper are a catalog of privacy-related architectural requirements, joining up legal, business and system architecture viewpoints, and the demonstration of concrete FIM models showing how the requirements can be implemented in practice.
Estimation of virtual interpupillary distances for immersive head-mounted displays
Head-mounted displays (HMDs) allow users to observe virtual environments (VEs) from an egocentric perspective. In order to present a realistic stereoscopic view, the rendering system has to be adjusted to the characteristics of the HMD, e. g., the display's field of view (FOV), as well as to characteristics that are unique for each user, in particular her interpupillary distance (IPD). Typically, the user's IPD is measured, and then applied to the virtual IPD used for rendering, assuming that the HMD's display units are correctly adjusted in front of the user's eyes. A discrepancy between the user's IPD and the virtual IPD may distort the perception of the VE. In this poster we analyze the user's perception of a VE in a HMD environment, which is displayed stereoscopically with different IPDs. We conducted an experiment to identify virtual IPDs that are identified as natural by subjects for different FOVs. In our experiment, subjects had to adjust the IPD for a rendered virtual replica of our real laboratory until perception of the virtual replica matched perception of the real laboratory. We found that the virtual IPDs subjects estimate as most natural are often not identical to their IPDs, and that the estimations were affected by the FOV of the HMD and the virtual FOV used for rendering.
Integrated fiber-wireless access architecture for mobile backhaul and fronthaul in 5G wireless data networks
Recent rapid proliferation of smart mobile devices using 4G LTE-A and beyond wireless communications technologies is driving a near term, 10-fold increase in mobile data traffic that requires a build-up of wireless cell sites to support near term evolution fiber-optic based backhaul and fronthaul architecture. Optical fiber access transport needs to be scalable to support the projected 5G deployment goals by 2020: 1-10Gb/s at the user terminal; 100Gb/s for the backhaul trunk; 1Tb/s for metro transport and 1Pb/s for the core transport. In order to provide multi-gigabit wireless link rate to mobile data users, one has to rely on efficient use of the available RF bandwidth and explore the wireless transmission technology at millimeter wave bands (30-300GHz) in addition to the deployment of small-cell architecture in an integrated optical and wireless access network platform. Due to the conflict between drastic growth of mobile data traffic and the limited wireless spectral resources at conventional RF bands for both cellular and WiFi networks, more aggressive spectral reuse and new spectral exploration at higher RF bands and cooperative multipoint operation among the remote radio heads (RRHs) are the three main directions for high-speed and high capacity wireless access networks. By reducing the cell size, limited spectral resources can be reused among small cells more frequently, thus enhancing the total system capacity. Due to the limited transmission range at higher RF bands, the combination of small-cell architecture and higher RF bands provides a promising solution to drastically increase the mobile data system capacity through new frequency band exploitation, frequency reuse and coordinated multi-point (CoMP) technologies.
Two-Cloud Secure Database for Numeric-Related SQL Range Queries With Privacy Preserving
Industries and individuals outsource database to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to cloud server. The main reason is that database is hosted and processed in cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties, access pattern. Furthermore, increased number of queries will inevitably leak more information to the cloud server. In this paper, we propose a two-cloud architecture for secure database, with a series of intersection protocols that provide privacy preservation to various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.
DesIGN: Design Inspiration from Generative Networks
Can an algorithm create original and compelling fashion designs to serve as an inspirational assistant? To help answer this question, we design and investigate different image generation models associated with different loss functions to boost creativity in fashion generation. The dimensions of our explorations include: (i) different Generative Adversarial Networks architectures that start from noise vectors to generate fashion items, (ii) novel loss functions that encourage creativity, inspired from Sharma-Mittal divergence, a generalized mutual information measure for the widely used relative entropies such as Kullback-Leibler, and (iii) a generation process following the key elements of fashion design (disentangling shape and texture components). A key challenge of this study is the evaluation of generated designs and the retrieval of best ones, hence we put together an evaluation protocol associating automatic metrics and human experimental studies that we hope will help ease future research. We show that our proposed creativity losses yield better overall appreciation than the one employed in Creative Adversarial Networks. In the end, about 61% of our images are thought to be created by human designers rather than by a computer while also being considered original per our human subject experiments, and our proposed loss scores the highest compared to existing losses in both novelty and likability.
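For reference, the Sharma-Mittal family mentioned above generalizes both the Rényi and Kullback-Leibler divergences. Below is a small sketch of one common parameterization for discrete distributions; it is not the paper's actual creativity loss, which operates on discriminator outputs during GAN training.

```python
import numpy as np

def sharma_mittal_divergence(p, q, alpha, beta, eps=1e-12):
    """One common parameterization of the Sharma-Mittal relative entropy.

    Recovers the Renyi divergence as beta -> 1 and KL divergence as alpha, beta -> 1.
    p, q : discrete distributions (1-D arrays summing to 1); alpha, beta != 1.
    """
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    s = np.sum(p ** alpha * q ** (1.0 - alpha))
    return (s ** ((1.0 - beta) / (1.0 - alpha)) - 1.0) / (beta - 1.0)

# Sanity check: for alpha, beta close to 1 the value approaches KL(p || q).
p = np.array([0.7, 0.2, 0.1]); q = np.array([0.5, 0.3, 0.2])
print(sharma_mittal_divergence(p, q, alpha=1.001, beta=1.001))
print(np.sum(p * np.log(p / q)))
```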
Prostate Segmentation using 2D Bridged U-net.
In this paper, we focus on three problems in deep learning based medical image segmentation. Firstly, U-net, as a popular model for medical image segmentation, is difficult to train when convolutional layers increase even though a deeper network usually has a better generalization ability because of more learnable parameters. Secondly, the exponential ReLU (ELU), as an alternative of ReLU, is not much different from ReLU when the network of interest gets deep. Thirdly, the Dice loss, as one of the pervasive loss functions for medical image segmentation, is not effective when the prediction is close to ground truth and will cause oscillation during training. To address the aforementioned three problems, we propose and validate a deeper network that can fit medical image datasets that are usually small in the sample size. Meanwhile, we propose a new loss function to accelerate the learning process and a combination of different activation functions to improve the network performance. Our experimental results suggest that our network is comparable or superior to state-of-the-art methods.
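For context, the sketch below shows the standard soft Dice loss that the abstract criticizes (not the authors' proposed replacement); near a Dice score of 1 the loss provides little gradient signal, which is one way to see the weak training behavior the authors describe.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Standard soft Dice loss (the baseline critiqued in the paper, not its new loss).

    pred   : predicted foreground probabilities, shape (N, H, W), values in [0, 1]
    target : binary ground-truth masks, same shape
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1).float()
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()
```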
Social mobile Augmented Reality for retail
Consumers are increasingly relying on web-based social content, such as product reviews, prior to making a purchase. Recent surveys in the Retail Industry confirm that social content is indeed the #1 aid in a buying decision. Currently, accessing or adding to this valuable web-based social content repository is mostly limited to computers far removed from the site of the shopping experience itself. We present a mobile Augmented Reality application, which extends such social content from the computer monitor into the physical world through mobile phones, providing consumers with in situ information on products right when and where they need to make buying decisions.
Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of the nonpreemptive tasks in the cloud computing environment is an irrecoverable restraint and hence it has to be assigned to the most appropriate VMs at the initial placement itself. Practically, the arrived jobs consist of multiple interdependent tasks and they may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, the jobs arrive during the run time of the server in varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make the cloud computing more efficient and thus it improves the user satisfaction. Objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
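As a baseline for comparison, the classic weighted round-robin placement that the paper improves upon can be sketched as follows; the VM weights, task identifiers, and the simplification of ignoring task dependencies are illustrative assumptions, not the authors' improved algorithm.

```python
from itertools import cycle

def weighted_round_robin(tasks, vm_weights):
    """Classic weighted round-robin placement (baseline sketch).

    tasks      : list of task identifiers, in arrival order
    vm_weights : dict vm_name -> integer weight (e.g. relative MIPS capacity)
    Returns dict vm_name -> list of assigned tasks.
    """
    # Expand each VM into weight-many slots so stronger VMs appear more often per cycle.
    slots = [vm for vm, w in vm_weights.items() for _ in range(w)]
    assignment = {vm: [] for vm in vm_weights}
    for task, vm in zip(tasks, cycle(slots)):
        assignment[vm].append(task)
    return assignment

print(weighted_round_robin([f"t{i}" for i in range(7)], {"vm_fast": 3, "vm_slow": 1}))
```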
Why Are Consumers Willing to Pay for Privacy? An Application of the Privacy-freemium Model to Media Companies
Monetizing their users’ personal information instead of charging a fee has become an established revenue model for platform operators—a new form of media companies specialized in aggregating, managing, and distributing user-generated online content. However, the commodification of privacy leads to privacy concerns that might be a risk for such businesses. Thus, a new approach is to focus on consumers’ willingness to pay for privacy, assuming that monetizing privacy protection might be an alternative revenue model. Following the freemium idea, we developed an innovative research design, offering 553 online survey participants the opportunity to subscribe to a fictional premium version of Facebook with additional privacy control features in return for a monthly fee. Based on the theory of planned behavior, we developed and tested a research model to explain actual willingness to pay for privacy behavior. Our findings show that perceived usefulness and trust significantly affect willingness to pay. In contrast, perceived internet privacy risk was not found to have a significant influence. We thus conclude that consumers are willing to pay for privacy in the form of a privacy-freemium model, provided they perceive the premium version as offering added value and as trustworthy.
A Cancer Vaccine Induces Expansion of NY-ESO-1-Specific Regulatory T Cells in Patients with Advanced Melanoma
Cancer vaccines are designed to expand tumor antigen-specific T cells with effector function. However, they may also inadvertently expand regulatory T cells (Treg), which could seriously hamper clinical efficacy. To address this possibility, we developed a novel assay to detect antigen-specific Treg based on down-regulation of surface CD3 following TCR engagement, and used this approach to screen for Treg specific to the NY-ESO-1 tumor antigen in melanoma patients treated with the NY-ESO-1/ISCOMATRIX™ cancer vaccine. All patients tested had Treg (CD25(bright) FoxP3(+) CD127(neg)) specific for at least one NY-ESO-1 epitope in the blood. Strikingly, comparison with pre-treatment samples revealed that many of these responses were induced or boosted by vaccination. The most frequently detected response was toward the HLA-DP4-restricted NY-ESO-1(157-170) epitope, which is also recognized by effector T cells. Notably, functional Treg specific for an HLA-DR-restricted epitope within the NY-ESO-1(115-132) peptide were also identified at high frequency in tumor tissue, suggesting that NY-ESO-1-specific Treg may suppress local anti-tumor immune responses. Together, our data provide compelling evidence for the ability of a cancer vaccine to expand tumor antigen-specific Treg in the setting of advanced cancer, a finding which should be given serious consideration in the design of future cancer vaccine clinical trials.
Model predictive control of autonomous mobility-on-demand systems
In this paper we present a model predictive control (MPC) approach to optimize vehicle scheduling and routing in an autonomous mobility-on-demand (AMoD) system. In AMoD systems, robotic, self-driving vehicles transport customers within an urban environment and are coordinated to optimize service throughout the entire network. Specifically, we first propose a novel discrete-time model of an AMoD system and we show that this formulation allows the easy integration of a number of real-world constraints, e.g., electric vehicle charging constraints. Second, leveraging our model, we design a model predictive control algorithm for the optimal coordination of an AMoD system and prove its stability in the sense of Lyapunov. At each optimization step, the vehicle scheduling and routing problem is solved as a mixed integer linear program (MILP) where the decision variables are binary variables representing whether a vehicle will 1) wait at a station, 2) service a customer, or 3) rebalance to another station. Finally, by using real-world data, we show that the MPC algorithm can be run in real-time for moderately-sized systems and outperforms previous control strategies for AMoD systems.
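The decision structure described above, binary wait / serve / rebalance choices solved as a MILP at each step, can be illustrated with a deliberately tiny one-step toy using the open-source PuLP solver; the station data, cost weights, and the omission of the rolling horizon, charging constraints, and stability machinery are all simplifications for illustration.

```python
# Toy, single-step version of the wait/serve/rebalance MILP (illustrative only).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

vehicles = ["v1", "v2"]
stations = ["A", "B", "C"]
location = {"v1": "A", "v2": "B"}          # current vehicle positions
waiting = {"A": 0, "B": 1, "C": 2}         # customers queued per station
travel_cost = {(i, j): (0 if i == j else 1) for i in stations for j in stations}

prob = LpProblem("amod_one_step", LpMinimize)
# x[v][j] = 1 if vehicle v moves to (or stays at) station j this step
x = {v: {j: LpVariable(f"x_{v}_{j}", cat=LpBinary) for j in stations} for v in vehicles}

# Each vehicle takes exactly one action (waiting = "moving" to its own station).
for v in vehicles:
    prob += lpSum(x[v][j] for j in stations) == 1

# served[j] cannot exceed the queue at j or the number of vehicles sent to j.
served = {j: LpVariable(f"served_{j}", lowBound=0) for j in stations}
for j in stations:
    prob += served[j] <= waiting[j]
    prob += served[j] <= lpSum(x[v][j] for v in vehicles)

# Objective: unserved customers are expensive, empty rebalancing mildly penalized.
prob += lpSum(waiting[j] - served[j] for j in stations) \
      + 0.1 * lpSum(travel_cost[(location[v], j)] * x[v][j] for v in vehicles for j in stations)

prob.solve()
for v in vehicles:
    dest = next(j for j in stations if value(x[v][j]) > 0.5)
    print(v, "waits at" if dest == location[v] else "drives to", dest)
```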
DISTRIBUTED BUS PROTECTION APPLICATION IN A PLATFORM FOR PROCESS BUS DEPLOYMENT IN THE SMART SUBSTATION
Bus protection is typically a station-wide protection function, as it uses the majority of the high voltage (HV) electrical signals available in a substation. All current measurements that define the bus zone of protection are needed. Voltages may be included in bus protection relays, as the number of voltages is relatively low, so little additional investment is needed to integrate them into the protection system. This paper presents a new Distributed Bus Protection System that represents a step forward in the concept of a Smart Substation solution. This Distributed Bus Protection System has been conceived not only as a protection system, but as a platform that incorporates the data collection from the HV equipment in an IEC 61850 process bus scheme. This new bus protection system is still a distributed bus protection solution. As opposed to dedicated bay units, this system uses IEC 61850 process interface units (that combine both merging units and contact I/O) for data collection. The main advantage, then, is that as the bus protection is deployed, it also deploys the platform for data collection for other protection, control, and monitoring functions needed in the substation, such as line, transformer, and feeder. Installing the data collection pieces simplifies engineering tasks and yields substantial savings in wiring, number of components, cabinets, installation, and commissioning. In this way the new bus protection system is the gateway to process bus, as opposed to an add-on to a process bus system. The paper analyzes and describes the new Bus Protection System as a new conceptual design for a Smart Substation, highlighting the advantages in a vision that comprises not only a single element, but the entire installation. Keywords: Current Transformer, Digital Fault Recorder, Fiber Optic Cable, International Electrotechnical Commission, Process Interface Units
Unsupervised Text Style Transfer using Language Models as Discriminators
Binary classifiers are often employed as discriminators in GAN-based unsupervised style transfer systems to ensure that transferred sentences are similar to sentences in the target domain. One difficulty with this approach is that the error signal provided by the discriminator can be unstable and is sometimes insufficient to train the generator to produce fluent language. In this paper, we propose a new technique that uses a target domain language model as the discriminator, providing richer and more stable token-level feedback during the learning process. We train the generator to minimize the negative log likelihood (NLL) of generated sentences, evaluated by the language model. By using a continuous approximation of discrete sampling under the generator, our model can be trained using back-propagation in an end-to-end fashion. Moreover, our empirical results show that when using a language model as a structured discriminator, it is possible to forgo adversarial steps during training, making the process more stable. We compare our model with previous work that uses convolutional networks (CNNs) as discriminators, as well as a broad set of other approaches. Results show that the proposed method achieves improved performance on three tasks: word substitution decipherment, sentiment modification, and related language translation.
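A minimal sketch of the central idea, using a frozen target-domain language model to provide token-level NLL feedback through a continuous relaxation of sampling, is given below; the LM interface, tensor shapes, and the simple softmax relaxation are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def lm_nll_loss(gen_logits, lm, embedding, temperature=1.0):
    """Token-level feedback from a frozen target-domain language model (sketch).

    gen_logits : generator outputs, shape (batch, seq_len, vocab)
    lm         : frozen LM mapping embedded inputs (batch, seq_len, dim) to
                 next-token logits (batch, seq_len, vocab)  -- assumed interface
    embedding  : the LM's token embedding matrix, shape (vocab, dim)
    """
    # Continuous relaxation of sampling: soft token distributions instead of hard ids.
    soft_tokens = F.softmax(gen_logits / temperature, dim=-1)      # (B, T, V)
    soft_embeds = soft_tokens @ embedding                          # expected embeddings
    lm_logits = lm(soft_embeds)                                    # (B, T, V)
    log_probs = F.log_softmax(lm_logits[:, :-1], dim=-1)           # predict token t+1 from prefix
    # Negative log-likelihood of the (soft) next tokens under the LM.
    nll = -(soft_tokens[:, 1:] * log_probs).sum(dim=-1)            # (B, T-1)
    return nll.mean()
```

Because every step is differentiable, this loss can be back-propagated into the generator end-to-end, which is the stability advantage the abstract attributes to replacing a binary discriminator with a language model.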
A compact circularly-polarized square slot antenna with enhanced axial-ratio bandwidth using metasurface
A low-profile single-fed, wideband, circularly-polarized slot antenna is proposed. The antenna comprises a square slot fed by a U-shaped microstrip line which provides a wide impedance bandwidth. Wideband circular polarization is obtained by incorporating a metasurface consisting of a 9 × 9 lattice of periodic metal plates. It is shown that this metasurface generates additional resonances, lowers the axial ratio (AR) of the radiating structure, and enhances the radiation pattern stability at higher frequencies. The overall size of the antenna is only 28 mm × 28 mm (0.3 λo × 0.3 λo). The proposed antenna shows an impedance bandwidth from 2.6 GHz to 9 GHz (110.3%) for |S11| < −10 dB, and an axial ratio bandwidth from 3.5 GHz to 6.1 GHz (54.1%) for AR < 3 dB. The antenna has a stable radiation pattern and a gain greater than 3 dBi over the entire frequency band.
Inscribing Difference: Maronites, Jews and Arabs in Mexican Public Culture and French Imperial Practice
This paper traces the relationship between Arab, Maronite and Jewish populations that have circulated between the Mashreq – contemporary Lebanon, Israel and Syria – and Mexico over the past century and a half. It turns to historical anthropology to argue that a number of factors have contributed to the polarization of the Mashreqi migrant population along ethno-religious axes that were not salient in the same sense during the early decades of the migration, when families and individuals established cross-confessional networks based on a shared spoken language – Arabic – a shared culinary tradition and a shared space of life and labor – downtown Mexico City. It explores the categorization of migrants, through French administrative practice and the migrants’ creation of and participation in institutions, to describe the progressive erasure of common spaces and the privileging of allegiances articulated through ethno-religious categories, turned ethno-national labels. The French Mandate over the Mashreq, the...
Alethology as the First Philosophy
In the early, metaphysical, period, Shalva Nutsubidze focused his inquiry on the question of defining "truth". His alethological realism is nothing more than a search for this truth and for a path to God. Nutsubidze's first important work was "Bolzano and the Theory of Science", a long article that contained the fundamental ideas of his Principles of Alethology . Traditionally, metaphysics, as the First philosophy, dealt basically with the problem of Existence and Existent categories. According to Nutsubidze, philosophy is what shows man the way to the truth that stands above contradiction. A man must leave everything that is "human" and rise to the "suprahuman", higher than the being itself. This is the path to God, that is, the idea (thought) and the being merging into each other. All of Nutsubidze's research and his concept of alethology, the First philosophy, are directed towards this. Keywords: alethological realism; First philosophy; path to God; principles of Alethology; Shalva Nutsubidze
Object Detection using Deep Learning
Autonomous vehicles, surveillance systems, and face detection systems have led to the development of accurate object detection systems [1]. These systems recognize, classify, and localize every object in an image by drawing bounding boxes around the object [2]. These systems use existing classification models as the backbone for object detection. Object detection is the process of finding instances of real-world objects such as human faces, animals, and vehicles in pictures, images, or videos. An object detection algorithm uses extracted features and learning techniques to recognize the objects in an image. In this paper, various object detection techniques have been studied and some of them are implemented. As part of this paper, three algorithms for object detection in an image were implemented and their results were compared. The algorithms are “Object Detection using Deep Learning Framework by OpenCV”, “Object Detection using Tensorflow”, and “Object Detection using Keras models”.
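As an example of the first route (OpenCV's DNN module with a pre-trained detector), a minimal sketch follows; the model files, input size, mean values, and confidence threshold are placeholders typical of SSD-style Caffe models rather than values taken from the paper.

```python
# Minimal object detection sketch with OpenCV's DNN module (placeholder model files).
import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")   # e.g. an SSD detector
image = cv2.imread("input.jpg")
h, w = image.shape[:2]

# Preprocess: resize to the network's expected input and subtract the training mean.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300),
                             (104.0, 117.0, 123.0))
net.setInput(blob)
detections = net.forward()   # shape (1, 1, N, 7): [image_id, class_id, confidence, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        class_id = int(detections[0, 0, i, 1])
        x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int).tolist()
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, f"class {class_id}: {confidence:.2f}", (x1, max(y1 - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("output.jpg", image)
```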