Micromachined Integrated Quantum Circuit Containing a Superconducting Qubit
We present a device demonstrating a lithographically patterned transmon integrated with a micromachined cavity resonator. Our two-cavity, one-qubit device is a multilayer microwave-integrated quantum circuit (MMIQC), comprising a basic unit capable of performing circuit-QED operations. We describe the qubit-cavity coupling mechanism of a specialized geometry using an electric-field picture and a circuit model, and obtain specific system parameters using simulations. Fabrication of the MMIQC includes lithography, etching, and metallic bonding of silicon wafers. Superconducting wafer bonding is a critical capability that is demonstrated by a micromachined storage-cavity lifetime of 34.3 μs, corresponding to a quality factor of 2 × 10^6 at single-photon energies. The transmon coherence times are T1 = 6.4 μs and T2,echo = 11.7 μs. We measure a qubit-cavity dispersive coupling rate of χqμ/2π = −1.17 MHz, constituting a Jaynes-Cummings system with an interaction strength g/2π = 49 MHz. With these parameters we are able to demonstrate circuit-QED operations in the strong dispersive regime with ease. Finally, we highlight several improvements and anticipated extensions of the technology to complex MMIQCs.
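For reference, the quoted g/2π and χqμ/2π parametrize the standard Jaynes-Cummings Hamiltonian and its dispersive limit; this is textbook circuit QED rather than a derivation from the paper, and the two-level expression for χ below omits the transmon anharmonicity correction.

```latex
\frac{H}{\hbar} = \omega_c\, a^\dagger a + \frac{\omega_q}{2}\,\sigma_z
                + g\left(a^\dagger \sigma^- + a\,\sigma^+\right),
\qquad
\frac{H_{\mathrm{disp}}}{\hbar} \approx \left(\omega_c + \chi\,\sigma_z\right) a^\dagger a
                + \frac{\tilde{\omega}_q}{2}\,\sigma_z,
\quad
\chi \approx \frac{g^2}{\Delta},\;\; \Delta = \omega_q - \omega_c .
```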
Fiber optically sensorized multi-fingered robotic hand
We present the design, fabrication, and characterization of a fiber optically sensorized robotic hand for multi-purpose manipulation tasks. The robotic hand has three fingers that enable both pinch and power grips. The main bone structure was made of a rigid plastic material and covered by soft skin. Both bone and skin contain embedded fiber optics for force and tactile sensing, respectively. Eight fiber optic strain sensors were used for rigid bone force sensing, and six fiber optic strain sensors were used for soft skin tactile sensing. For characterization, different loads were applied in two orthogonal axes at the fingertip and the sensor signals were measured from the bone structure. The skin was also characterized by applying a light load at different locations for contact localization. The actuation of the hand was achieved by a tendon-driven under-actuated system. Gripping motions were implemented using an active tendon located on the volar side of each finger and connected to a motor. Opening motions of the hand were enabled by passive elastic tendons located on the dorsal side of each finger.
Taxonomy of information security risk assessment (ISRA)
Information is a perennially significant business asset in all organizations. Therefore, it must be protected like any other valuable asset. This is the objective of information security, and an information security program provides this kind of protection for a company's information assets and for the company as a whole. One of the best ways to address information security problems in the corporate world is through a risk-based approach. In this paper, we present a taxonomy of security risk assessment drawn from 125 papers published from 1995 to May 2014. Organizations of different sizes may face problems in selecting suitable risk assessment methods that satisfy their needs. Although many risk-based approaches have been proposed, most of them are based on the old taxonomy and neglect the important criteria for assessing the risks raised by rapidly changing technologies and attackers' growing knowledge. In this paper, we discuss the key features of risk assessment that should be included in an information security management system. We believe that our new risk assessment taxonomy helps organizations not only understand risk assessment better by comparing different new concepts but also select a suitable way to conduct risk assessments properly. Moreover, this taxonomy will open up interesting avenues for future research in the growing field of security risk assessment.
Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans
A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms that avoid these problems by registering a range scan to a previous scan, thereby computing relative robot positions in an unknown environment. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve for the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers.
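As a rough illustration of the point-to-point least-squares step in the second algorithm, the sketch below recovers a 2D rigid transform from already-corresponded points using the standard centroid/SVD solution; correspondence search, outlier rejection, and iteration are omitted, and the function names are illustrative rather than taken from the paper.

```python
import numpy as np

def solve_relative_pose(p, q):
    """Least-squares rigid transform (R, t) aligning 2D points p onto q.

    p, q: (N, 2) arrays of corresponded points from two range scans.
    Returns R (2x2 rotation) and t (2,) such that q ~= p @ R.T + t.
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)                   # centroids
    H = (p - cp).T @ (q - cq)                                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])    # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy usage: recover a known displacement from noiseless correspondences.
rng = np.random.default_rng(0)
scan_prev = rng.uniform(-5, 5, size=(100, 2))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scan_curr = scan_prev @ R_true.T + np.array([0.3, -0.1])
R_est, t_est = solve_relative_pose(scan_prev, scan_curr)
```

In an ICP-style loop this pose solve would alternate with nearest-neighbor correspondence search and outlier rejection until convergence.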
Using Filter Banks in Convolutional Neural Networks for Texture Classification
Deep learning has established many new state-of-the-art solutions in the last decade in areas such as object, scene and speech recognition. In particular, the Convolutional Neural Network (CNN) is a category of deep learning model that obtains excellent results in object detection and recognition tasks. Its architecture is indeed well suited to object analysis by learning and classifying complex (deep) features that represent parts of an object or the object itself. However, some of its features are very similar to texture analysis methods. CNN layers can be thought of as filter banks whose complexity increases with depth. Filter banks are powerful tools to extract texture features and have been widely used in texture analysis. In this paper we develop a simple network architecture named Texture CNN (T-CNN) which explores this observation. It is built on the idea that the overall shape information extracted by the fully connected layers of a classic CNN is of minor importance in texture analysis. Therefore, we pool an energy measure from the last convolution layer, which we connect to a fully connected layer. We show that our approach can improve the performance of a network while greatly reducing the memory usage and computation.
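A minimal PyTorch-style sketch of the orderless pooling idea: responses from the last convolutional stage are reduced to one energy value per feature map and fed to a single fully connected classifier. The layer sizes and the exact energy measure are assumptions for illustration, not the T-CNN configuration from the paper, and PyTorch availability is assumed.

```python
import torch
import torch.nn as nn

class TextureCNNSketch(nn.Module):
    """Illustrative texture network: convolutional filter bank followed by an
    energy pooling step (mean activation magnitude per feature map) and one
    fully connected classifier, discarding spatial layout."""

    def __init__(self, num_classes: int, channels: int = 128):
        super().__init__()
        self.features = nn.Sequential(            # "filter bank" stages
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, x):
        fmaps = self.features(x)                  # (B, C, H, W) responses
        energy = fmaps.abs().mean(dim=(2, 3))     # one energy value per feature map
        return self.classifier(energy)            # orderless, texture-style descriptor

# Usage: a batch of 224x224 RGB patches -> class scores.
logits = TextureCNNSketch(num_classes=47)(torch.randn(2, 3, 224, 224))
```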
Natural scene text detection based on SWT, MSER and candidate classification
This paper presents a novel scene text detection algorithm based on the Stroke Width Transform (SWT), Maximally Stable Extremal Regions (MSER) and candidate classification. First, SWT and MSER are applied in parallel to extract candidate characters. Second, the candidate connected components are filtered preliminarily using heuristic rules. Third, mutual verification and integration are used to classify all candidates into two categories, strong candidates and weak candidates; if a weak candidate has properties similar to a strong candidate, it is promoted to a strong candidate. Finally, the text regions are aggregated into text lines by a text line aggregation algorithm. Experimental results on public datasets show that the proposed method can detect text lines effectively.
GraphReduce: processing large-scale graphs on accelerator-based systems
Recent work on real-world graph analytics has sought to leverage the massive amount of parallelism offered by GPU devices, but challenges remain due to the inherent irregularity of graph algorithms and limitations in GPU-resident memory for storing large graphs. We present GraphReduce, a highly efficient and scalable GPU-based framework that operates on graphs that exceed the device's internal memory capacity. GraphReduce adopts a combination of edge- and vertex-centric implementations of the Gather-Apply-Scatter programming model and operates on multiple asynchronous GPU streams to fully exploit the high degrees of parallelism in GPUs with efficient graph data movement between the host and device. GraphReduce-based programming is performed via device functions that include gatherMap, gatherReduce, apply, and scatter, implemented by programmers for the graph algorithms they wish to realize. Extensive experimental evaluations for a wide variety of graph inputs and algorithms demonstrate that GraphReduce significantly outperforms other competing out-of-memory approaches.
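GraphReduce itself is a GPU framework whose gatherMap, gatherReduce, apply, and scatter are CUDA device functions supplied by the programmer; the plain-Python sketch below only illustrates the Gather-Apply-Scatter contract those functions follow, using a PageRank-style computation, and does not reflect GraphReduce's actual API or its out-of-memory streaming machinery.

```python
# Plain-Python illustration of the Gather-Apply-Scatter (GAS) contract that
# frameworks such as GraphReduce expose as device functions. PageRank-style
# example; not GraphReduce's actual CUDA API.

def gather_map(src_value, src_out_degree):
    # Per-edge contribution sent toward the destination vertex.
    return src_value / max(src_out_degree, 1)

def gather_reduce(a, b):
    # Commutative/associative reduction of per-edge contributions.
    return a + b

def apply_vertex(old_value, gathered, damping=0.85):
    # New vertex state computed from the reduced gather result.
    return (1.0 - damping) + damping * gathered

def pagerank(edges, num_vertices, iterations=20):
    rank = [1.0] * num_vertices
    out_deg = [0] * num_vertices
    for s, _ in edges:
        out_deg[s] += 1
    for _ in range(iterations):
        acc = [0.0] * num_vertices
        for s, d in edges:                        # gather phase over edges
            acc[d] = gather_reduce(acc[d], gather_map(rank[s], out_deg[s]))
        rank = [apply_vertex(rank[v], acc[v]) for v in range(num_vertices)]
        # A scatter phase would activate neighbors for the next iteration;
        # omitted here because every vertex stays active.
    return rank

print(pagerank([(0, 1), (1, 2), (2, 0), (2, 1)], num_vertices=3))
```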
Machine Learning Based Detection of Clickbait Posts in Social Media
Clickbait headlines make use of misleading titles that hide critical information about, or exaggerate the content of, the landing pages in order to entice clicks. As clickbaits often use eye-catching wording to attract viewers, the target contents are often of low quality. Clickbaits are especially widespread on social media such as Twitter, adversely impacting user experience by causing immense dissatisfaction. Hence, it has become increasingly important to put forward a widely applicable approach to identify and detect clickbaits. In this paper, we make use of a dataset from the Clickbait Challenge 2017 (clickbait-challenge.com) comprising over 21,000 headlines/titles, each of which is annotated by at least five crowdsourced judgments of how clickbait it is. We attempt to build an effective computational clickbait detection model on this dataset. We first considered a total of 331 features, filtered out many features to avoid overfitting and improve the running time of learning, and eventually selected the 60 most important features for our final model. Using these features, Random Forest Regression achieved the following results: MSE = 0.035, Accuracy = 0.82, and F1 score = 0.61 on the clickbait class.
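A small scikit-learn sketch of the reported pipeline shape: fit a Random Forest regressor on all engineered features, keep the most important ones, and refit. The synthetic feature matrix, the importance-based selection rule, and the hyperparameters are assumptions for illustration; only the 331-to-60 feature counts and the choice of Random Forest Regression come from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# X: (n_samples, n_features) engineered headline/title features,
# y: crowdsourced "clickbaitness" scores in [0, 1]. Synthetic stand-ins here.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 331))
y = np.clip(0.3 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.1, 2000), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit once with all features, then keep only the most important ones
# (the paper reports trimming 331 features down to 60).
full = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(full.feature_importances_)[::-1][:60]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr[:, top], y_tr)
pred = model.predict(X_te[:, top])
print("MSE:", mean_squared_error(y_te, pred))
# Thresholding pred (e.g. >= 0.5) yields class labels for accuracy / F1 on the clickbait class.
```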
A Mobile Transaction Model That Captures Both the Data and Movement Behavior
Unlike distributed transactions, mobile transactions do not originate and end at the same site. The implication of the movement of such transactions is that classical atomicity, concurrency and recovery solutions must be revisited to capture the movement behavior. As an effort in this direction, we define a model of mobile transactions by building on the concepts of split transactions and global transactions in a multidatabase environment. Our view of mobile transactions, called Kangaroo Transactions, incorporates the property that transactions in a mobile computing system hop from one base station to another as the mobile unit moves through cells. Our model is the first to capture this movement behavior as well as the data behavior which reflects the access to data located in databases throughout the static network. The mobile behavior is dynamic and is realized in our model via the use of split operations. The data access behavior is captured by using the idea of global and local transactions in a multidatabase system.
A 160-GHz Subharmonic Transmitter and Receiver Chipset in an SiGe HBT Technology
A monolithically integrated 160-GHz transmitter and receiver chipset with in-phase/quadrature baseband inputs and outputs and on-chip local oscillator (LO) generation has been implemented in a 0.25- μm silicon-germanium heterojunction bipolar transistor technology. The chipset features a three-stage differential power amplifier, a low-noise amplifier, up- and down-conversion subharmonic quadrature mixers, and an 80-GHz voltage-controlled oscillator equipped with a 1/16 frequency prescaler for frequency locking by an external phase-locked loop. To investigate the behavior of the Gilbert-cell-based subharmonic mixer operated close to fmax , the correlation between LO phases and conversion gain is studied. The conclusion suggests that the maximum conversion gain can be obtained with certain LO phases at millimeter-wave frequencies. Over the 150-168-GHz bandwidth, the transmitter delivers an output power of more than 8 dBm with a maximum 10.6-dBm output power at 156 GHz. The receiver provides a noise figure lower than 9 dB and more than 25 dB of conversion gain at 150-162 GHz, including the losses of an auxiliary input balun. The transmitter and receiver chips consume 610 and 490 mW, respectively.
An Energy-Efficient, Wireless Top-of-Rack to Top-of-Rack Datacenter Network using 60 GHz Links
Datacenters have become the digital backbone of modern society and consume enormous amounts of power. A significant portion of the power consumption is due to the power-hungry switching fabric necessary for communication in the datacenter. Additionally, the complex cabling in traditional datacenters poses design and maintenance challenges and increases the energy cost of the cooling infrastructure by obstructing the flow of chilled air. Recent research on wireless datacenters has proposed interconnecting rows of racks at the top-of-rack (ToR) level via wireless links to eliminate the need for a complex network of power-hungry routers and cables. Links are established between those ToR entities using either highly directional phased array antennas or narrow-beam horn antennas. ToR-to-ToR wireless links have also been used to augment existing wired networks, improving overall performance characteristics. All these wireless approaches advocate the use of 60 GHz line-of-sight (LoS) communication paths between antennas for the establishment of reliable wireless channels. In this work, we explore the feasibility of a ToR-to-ToR wireless network for a small to medium-scale datacenter from the perspective of system-level performance. We evaluate a ToR-to-ToR wireless datacenter network (DCN) for network-level data rate and overall power consumption and compare it to a traditional fat-tree based DCN. We find that the ToR-to-ToR wireless DCN can sustain similar data rates for typical query-based applications while consuming less power than traditional datacenters. Keywords: Wireless datacenter; Energy Efficiency; 60 GHz; IEEE 802.11ad; Top-of-Rack.
High reproducibility using sodium hydroxide-stripped long oligonucleotide DNA microarrays.
Recently, long oligonucleotide (60- to 70-mer) microarrays for two-color experiments have been developed and are gaining widespread use. In addition, when there is limited availability of mRNA from tissue sources, RNA amplification can be, and is being, used to produce sufficient quantities of cRNA for microarray hybridization. Taking advantage of the selective degradation of RNA under alkaline conditions, we have developed a method to "strip" glass-based oligonucleotide microarrays that use fluorescent RNA in the hybridization, while leaving the DNA oligonucleotide probes intact and usable for a second experiment. Replicate microarray experiments conducted using stripped arrays showed high reproducibility; however, we found that arrays could only be stripped and reused once without compromising data quality. The intraclass correlation (ICC) between a virgin array and a stripped array hybridized with the same sample showed a range of 0.90-0.98, which is comparable to the ICC of two virgin arrays hybridized with the same sample. Using this method, once-stripped oligonucleotide microarrays are usable, reliable, and help to reduce costs.
A Sender-Centric Approach to Detecting Phishing Emails
Email-based online phishing is a critical security threat on the Internet. Although phishers have great flexibility in manipulating both the content and structure of phishing emails, phishers have much less flexibility in completely concealing the sender information of a phishing message. Importantly, such sender information is often inconsistent with the target institution of a phishing email. Based on this observation, in this paper we advocate and develop a sender-centric approach to detecting phishing emails by focusing on the sender information of a message instead of the content or structure of the message. Our evaluation studies based on real-world email traces show that the sender-centric approach is a feasible and effective method in detecting phishing emails. For example, using an email trace containing both phishing and legitimate messages, we show that the sender-centric approach can detect 98.7% of phishing emails while correctly classifying all legitimate messages.
Usable cryptographic QR codes
QR codes are widely used in various settings such as consumer advertising, commercial tracking, ticketing and marketing. People tend to scan QR codes and trust their content, but there exists no standard mechanism for providing authenticity and confidentiality of the code content. Attacks such as redirection to a malicious website or the infection of a smartphone with malware are realistic and feasible in practice. In this paper, we present the first systematic study of usable state-of-the-art cryptographic primitives inside QR codes. We select standard, popular signature schemes and compare them based on performance, size and security. We conduct tests that show how different usability factors impact QR code scanning performance, and we evaluate the usability/security trade-off of the considered signature schemes. Interestingly, we find that in some cases security breaks usability, and we provide recommendations for the choice of secure and usable signature schemes.
Vortex Visualization in Ultra Low Reynolds Number Insect Flight
We present the visual analysis of a biologically inspired CFD simulation of the deformable flapping wings of a dragonfly as it takes off and begins to maneuver, using vortex detection and integration-based flow lines. The additional seed placement and perceptual challenges introduced by having multiple dynamically deforming objects in the highly unsteady 3D flow domain are addressed. A brief overview of the high-speed photogrammetry setup used to capture the dragonfly takeoff, the parametric surfaces used for wing reconstruction, the CFD solver and the underlying flapping flight theory is presented to clarify the importance of several unsteady flight mechanisms, such as the leading edge vortex, that are captured visually. A novel interactive seed placement method is used to simplify the generation of seed curves that stay in the vicinity of relevant flow phenomena as they move with the flapping wings. This method allows a user to define and evaluate the quality of a seed's trajectory over time while working with a single time step. The seed curves are then used to place particles, streamlines and generalized streak lines. The novel concept of flowing seeds is also introduced in order to add visual context about the instantaneous vector fields surrounding smoothly animated streak lines. Tests show this method to be particularly effective at visually capturing vortices that move quickly or that exist for a very brief period of time. In addition, an automatic camera animation method is used to address occlusion issues caused when animating the immersed wing boundaries alongside many geometric flow lines. Each visualization method is presented at multiple time steps during the up-stroke and down-stroke to highlight the formation, attachment and shedding of the leading edge vortices in pairs of wings. Also, the visualizations show evidence of wake capture at stroke reversal, which suggests the existence of previously unknown unsteady lift generation mechanisms that are unique to quad-wing insects.
INCREMENTAL SENSOR FUSION IN FACTOR GRAPHS WITH UNKNOWN DELAYS
Sensor fusion by incremental smoothing in factor graphs allows the easy incorporation of asynchronous and delayed measurements, which is one of the main advantages of this approach compared to the ubiquitous filtering techniques. While incorporating delayed measurements into the factor graph representation is in principle easy when the delay is known, handling unknown delays is a non-trivial task that has not been explored before in this context. Our paper addresses the problem of performing incremental sensor fusion in factor graphs when some of the sensor information arrives with a significant unknown delay. We develop and compare two techniques to handle such delayed measurements under mild conditions on the characteristics of that delay: we consider the unknown delay to be bounded and quantizable into multiples of the state transition cycle time. The proposed methods are evaluated using a simulation of a dynamic 3-DoF system that fuses odometry and GPS measurements.
Comparison of daily filgrastim and pegfilgrastim to prevent febrile neutropenia in Asian lymphoma patients.
AIM Febrile neutropenia (FN) is a highly prevalent complication of chemotherapy, particularly in patients with non-Hodgkin's lymphoma. This study aimed to compare the efficacy of filgrastim and pegfilgrastim in Asian lymphoma patients by evaluating the incidence of FN and associated complications. METHODS This was a single-center, retrospective cohort study in Asian lymphoma patients who received chemotherapy with primary prophylactic granulocyte colony-stimulating factors support between January 2008 and August 2009. Data were analyzed using an intent-to-treat approach, which aimed to reflect actual prescribing practices. RESULTS A total of 204 Asian lymphoma patients were included in this study, with 81 patients in the filgrastim arm and 123 patients in the pegfilgrastim arm. Overall, the incidence of breakthrough FN was similar between the two groups of patients (13.6%: filgrastim arm vs 16.3%: pegfilgrastim arm; P=0.69). Neutropenic complications such as chemotherapy treatment delay and chemotherapy dose reduction were similar between the two arms. CONCLUSION In Asian patients, pegfilgrastim prophylaxis did not show a therapeutic advantage for preventing neutropenic outcomes compared with filgrastim prophylaxis.
Evaluation of weight based enoxaparin dosing on anti-Xa concentrations in patients with obesity
The current treatment dose of enoxaparin is based on total body weight (TBW); however, dosing in obesity remains unclear. "Dose capping" commonly occurs if TBW > 100 kg to minimise bleeding risk; however, this may result in under-dosing and an increased risk of embolisation. The primary objective was to evaluate the efficacy of current dosing strategies in obese patients and to determine whether the resultant anti-Xa concentrations (aXaC) were therapeutic. The secondary objective was to investigate whether an uncapped 0.75–0.85 mg/kg (TBW) twice-daily dose, advocated by previous authors, results in therapeutic aXaC (0.5–1.0 IU/ml). This retrospective study included 133 patients with a median TBW of 128 kg, producing 59% therapeutic, 15% sub-therapeutic and 26% supra-therapeutic aXaC. Approximately 60% of patients in each dose group (< 0.75, 0.75–0.85 and > 0.85 mg/kg) had a therapeutic aXaC; however, the proportion of sub-therapeutic results was higher in the < 0.75 mg/kg group (27% vs 9%) and the proportion of supra-therapeutic results was higher in the > 0.85 mg/kg group (34% vs 10%). Most patients who weighed 100–119 kg (TBW) received doses > 0.85 mg/kg, yet 32% had toxic aXaC. Those between 120 and 139 kg (TBW) had a high percentage of therapeutic aXaC (87%) when dosed < 0.75 mg/kg and a high percentage of supra-therapeutic aXaC (71%) when dosed > 0.85 mg/kg, although numbers were low. Dose reduction occurred in patients > 140 kg (TBW); however, dosing < 0.75 mg/kg resulted in a higher percentage of sub-therapeutic aXaC (42%). Dosing at 0.75–0.85 mg/kg resulted in 62% therapeutic, 14% sub-therapeutic and 24% supra-therapeutic aXaC. This appears to be a "safe" starting dose range; however, all obese patients should have aXaC monitoring due to high inter-patient variability.
Novelty and redundancy detection in adaptive filtering
This paper addresses the problem of extending an adaptive information filtering system to make decisions about the novelty and redundancy of relevant documents. It argues that relevance and redundancy should each be modelled explicitly and separately. A set of five redundancy measures is proposed and evaluated in experiments with and without redundancy thresholds. The experimental results demonstrate that the cosine similarity metric and a redundancy measure based on a mixture of language models are both effective for identifying redundant documents.
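Of the redundancy measures mentioned, the cosine-similarity one is the easiest to sketch: a newly retrieved relevant document is flagged as redundant when it is too similar to something already delivered. The TF-IDF representation, the 0.8 threshold, and the scikit-learn usage below are illustrative assumptions, not the paper's exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def is_redundant(new_doc, delivered_docs, vectorizer, threshold=0.8):
    """Flag new_doc as redundant if it is too similar to any already-delivered document."""
    if not delivered_docs:
        return False
    new_vec = vectorizer.transform([new_doc])
    hist = vectorizer.transform(delivered_docs)
    sims = cosine_similarity(new_vec, hist)        # shape (1, n_delivered)
    return sims.max() >= threshold

delivered = ["stock markets fell sharply on Monday",
             "markets fell sharply as stocks dropped on Monday"]
vectorizer = TfidfVectorizer().fit(delivered + ["a new story about elections"])
print(is_redundant("stocks dropped sharply on Monday", delivered, vectorizer))
```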
Paternal long-term exercise programs offspring for low energy expenditure and increased risk for obesity in mice.
Obesity has more than doubled in children and tripled in adolescents in the past 30 yr. The association between metabolic disorders in offspring of obese mothers with diabetes has long been known; however, a growing body of research indicates that fathers play a significant role through presently unknown mechanisms. Recent observations have shown that changes in paternal diet may result in transgenerational inheritance of the insulin-resistant phenotype. Although diet-induced epigenetic reprogramming via paternal lineage has recently received much attention in the literature, the effect of paternal physical activity on offspring metabolism has not been adequately addressed. In the current study, we investigated the effects of long-term voluntary wheel-running in C57BL/6J male mice on their offspring's predisposition to insulin resistance. Our observations revealed that fathers subjected to wheel-running for 12 wk produced offspring that were more susceptible to the adverse effects of a high-fat diet, manifested in increased body weight and adiposity, impaired glucose tolerance, and elevated insulin levels. Long-term paternal exercise also altered expression of several metabolic genes, including Ogt, Oga, Pdk4, H19, Glut4, and Ptpn1, in offspring skeletal muscle. Finally, prolonged exercise affected gene methylation patterns and micro-RNA content in the sperm of fathers, providing a potential mechanism for the transgenerational inheritance. These findings suggest that paternal exercise produces offspring with a thrifty phenotype, potentially via miRNA-induced modification of sperm.
Framework for developing volcanic fragility and vulnerability functions for critical infrastructure
Volcanic risk assessment using probabilistic models is increasingly desired for risk management, particularly for loss forecasting, critical infrastructure management, land-use planning and evacuation planning. Over the past decades this has motivated the development of comprehensive probabilistic hazard models. However, volcanic vulnerability models of equivalent sophistication have lagged behind hazard modelling because of the lack of evidence, data and, until recently, minimal demand. There is an increasingly urgent need for development of quantitative volcanic vulnerability models, including vulnerability and fragility functions, which provide robust quantitative relationships between volcanic impact (damage and disruption) and hazard intensity. The functions available to date predominantly quantify tephra fall impacts to buildings, driven by life safety concerns. We present a framework for establishing quantitative relationships between volcanic impact and hazard intensity, specifically through the derivation of vulnerability and fragility functions. We use tephra thickness and impacts to key infrastructure sectors as examples to demonstrate our framework. Our framework incorporates impact data sources, different impact intensity scales, preparation and fitting of data, uncertainty analysis and documentation. The primary data sources are post-eruption impact assessments, supplemented by laboratory experiments and expert judgment, with the latter drawing upon a wealth of semi-quantitative and qualitative studies. Different data processing and function fitting techniques can be used to derive functions; however, due to the small datasets currently available, simplified approaches are discussed. We stress that documentation of data processing, assumptions and limitations is the most important aspect of function derivation; documentation provides transparency and allows others to update functions more easily. Following our standardised approach, a volcanic risk scientist can derive a fragility or vulnerability function, which then can be easily compared to existing functions and updated as new data become available. To demonstrate how to apply our framework, we derive fragility and vulnerability functions for discrete tephra fall impacts to electricity supply, water supply, wastewater and transport networks. These functions present the probability of an infrastructure site or network component equalling or exceeding one of four impact states as a function of tephra thickness.
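As one concrete way to realise the function-fitting step of such a framework, the sketch below fits a lognormal fragility curve, the probability of equalling or exceeding an impact state as a function of tephra thickness, by maximum likelihood on binary site observations. The synthetic data, the lognormal form, and the SciPy usage are assumptions for illustration; the paper discusses a range of fitting approaches suited to small datasets.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_fragility(thickness_mm, exceeded):
    """Fit P(impact state >= IS | thickness t) = Phi((ln t - ln median) / beta)
    to binary exceedance observations by maximum likelihood."""
    t = np.asarray(thickness_mm, dtype=float)
    y = np.asarray(exceeded, dtype=float)

    def neg_log_lik(params):
        ln_median, beta = params
        p = norm.cdf((np.log(t) - ln_median) / beta)
        p = np.clip(p, 1e-9, 1 - 1e-9)            # numerical safety
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize(neg_log_lik, x0=[np.log(np.median(t)), 1.0],
                   bounds=[(None, None), (1e-3, None)])
    ln_median, beta = res.x
    return np.exp(ln_median), beta

# Synthetic post-eruption observations: thickness (mm) and whether a site
# reached a given impact state or worse.
rng = np.random.default_rng(1)
thick = rng.lognormal(mean=2.0, sigma=0.8, size=200)
prob = norm.cdf((np.log(thick) - np.log(10.0)) / 0.6)
hit = rng.uniform(size=200) < prob
median_mm, beta = fit_lognormal_fragility(thick, hit)
```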
Studies on Economic Feasibility of an Autonomous Power Delivery System Utilizing Alternative Hybrid Distributed Energy Resources
An economic evaluation of a network of distributed energy resources (DERs) forming an autonomous power delivery system in an Indian scenario has been made. The mathematical analysis is based on the application of a real-valued cultural algorithm (RVCA). The RVCA-evaluated total annual costs of the autonomous microgrid system are compared for two configurations: solar module with fuel cells as DERs, and solar module with a biomass gasifier unit as DERs. Different types of consumers together form a microgrid with the optimal supply of power from DERs. The optimal power generation conditions have been obtained corresponding to the minimum cost of the microgrid system. For different loading scenarios, the hybrid solar-biomass gasifier configuration is found to be more cost competitive. A reduction of 8.1% in the annual cost is obtained using the solar module-biomass gasifier unit compared with the solar module-fuel cell configuration for the same load demand in microgrid operation.
Which Type of Parent Training Works Best for Preschoolers with Comorbid ADHD and ODD? A Secondary Analysis of a Randomized Controlled Trial Comparing Generic and Specialized Programs.
The present study examined whether the presence of comorbid ODD differentially moderated the outcome of two Behavioral Parent Training (BPT) programs in a sample of preschoolers with ADHD: One designed specifically for ADHD (NFPP: New Forest Parenting Programme) and one designed primarily for ODD (HNC: Helping the Noncompliant Child). In a secondary analysis, 130 parents and their 3-4 year-old children diagnosed with ADHD were assigned to one of the two programs. 44.6 % of the children also met criteria for ODD. Significant interactions between treatment conditions (NFPP vs. HNC) and child ODD diagnosis (presence vs. absence) indicated that based on some parent and teacher reports, HNC was more effective with disruptive behaviors than NFPP but only when children had a comorbid diagnosis. Further, based on teacher report, NFPP was more effective with these behaviors when children had a diagnosis of only ADHD whereas HNC was equally effective across ADHD only and comorbid ODD diagnoses. Comorbidity profile did not interact with treatment program when parent or teacher reported ADHD symptoms served as the outcome. Implications for clinical interventions are discussed and directions for future work are provided.
Group Anomaly Detection Using Deep Generative Models
Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.
Cell signaling during Trypanosoma cruzi invasion
Cell signaling is an essential requirement for mammalian cell invasion by Trypanosoma cruzi. Depending on the parasite strain and the parasite developmental form, distinct signaling pathways may be induced. In this short review, we focus on the data coming from studies with metacyclic trypomastigotes (MT) generated in vitro and tissue culture-derived trypomastigotes (TCT), used as counterparts of insect-borne and bloodstream parasites, respectively. During invasion of host cells by MT or TCT, intracellular Ca2+ mobilization and host cell lysosomal exocytosis are triggered. Invasion mediated by MT surface molecule gp82 requires the activation of mammalian target of rapamycin (mTOR), phosphatidylinositol 3-kinase (PI3K), and protein kinase C (PKC) in the host cell, associated with Ca2+-dependent disruption of the actin cytoskeleton. In MT, protein tyrosine kinase, PI3K, phospholipase C, and PKC appear to be activated. TCT invasion, on the other hand, does not rely on mTOR activation, rather on target cell PI3K, and may involve the host cell autophagy for parasite internalization. Enzymes, such as oligopeptidase B and the major T. cruzi cysteine proteinase cruzipain, have been shown to generate molecules that induce target cell Ca2+ signal. In addition, TCT may trigger host cell responses mediated by transforming growth factor β receptor or integrin family member. Further investigations are needed for a more complete and detailed picture of T. cruzi invasion.
A hybrid machine learning approach to network anomaly detection
Zero-day cyber attacks such as worms and spyware are becoming increasingly widespread and dangerous. The existing signature-based intrusion detection mechanisms are often not sufficient in detecting these types of attacks. As a result, anomaly intrusion detection methods have been developed to cope with such attacks. Among the variety of anomaly detection approaches, the Support Vector Machine (SVM) is known to be one of the best machine learning algorithms for classifying abnormal behaviors. The soft-margin SVM is one of the well-known basic SVM methods using supervised learning. However, it is not appropriate to use the soft-margin SVM method for detecting novel attacks in Internet traffic, since it requires pre-acquired learning information for the supervised learning procedure; such pre-acquired learning information must be separated into labeled normal and attack traffic. The one-class SVM approach, in contrast, uses unsupervised learning for detecting anomalies and does not require labeled information. However, there is a downside to using the one-class SVM: it is difficult to use in the real world due to its high false positive rate. In this paper, we propose a new SVM approach, named Enhanced SVM, which combines these two methods in order to provide unsupervised learning and a low false alarm capability, similar to that of a supervised SVM approach. We use the following additional techniques to improve the performance of the proposed approach (referred to as Anomaly Detector using Enhanced SVM): First, we create a profile of normal packets using a Self-Organized Feature Map (SOFM), for SVM learning without pre-existing knowledge. Second, we use a packet filtering scheme based on Passive TCP/IP Fingerprinting (PTF) in order to reject incomplete network traffic that either violates the TCP/IP standard or the generation policy inside well-known platforms. Third, a feature selection technique using a Genetic Algorithm (GA) is used for extracting optimized information from raw Internet packets. Fourth, we use the flow of packets based on temporal relationships during data preprocessing, to account for the temporal relationships among the inputs used in SVM learning. Lastly, we demonstrate the effectiveness of the Enhanced SVM approach using the above-mentioned techniques, such as SOFM, PTF, and GA, on MIT Lincoln Lab datasets and a live dataset captured from a real network. The experimental results are verified by m-fold cross validation, and the proposed approach is compared with real-world Network Intrusion Detection Systems (NIDS).
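The unsupervised one-class SVM building block referred to above is available off the shelf; the sketch below shows it flagging outlying feature vectors without labels. The synthetic features, the scaling step, and the hyperparameters (nu, RBF kernel) are illustrative assumptions and do not reproduce the Enhanced SVM, SOFM profiling, PTF filtering, or GA feature selection described in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

# Feature vectors extracted from packets/flows (synthetic stand-ins here):
# training data is assumed to be mostly normal traffic; no labels are required.
rng = np.random.default_rng(7)
normal_train = rng.normal(0, 1, size=(5000, 20))
test = np.vstack([rng.normal(0, 1, size=(950, 20)),       # normal
                  rng.normal(4, 1, size=(50, 20))])        # anomalous

scaler = StandardScaler().fit(normal_train)
ocsvm = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
ocsvm.fit(scaler.transform(normal_train))

pred = ocsvm.predict(scaler.transform(test))   # +1 = normal, -1 = anomaly
print("flagged as anomalous:", int((pred == -1).sum()), "of", len(test))
```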
Value of FGFR2 expression for advanced gastric cancer patients receiving pazopanib plus CapeOX (capecitabine and oxaliplatin)
The aim of this study was to use immunohistochemistry (IHC) to determine the effect of FGFR2 and VEGFR2 expression on treatment outcomes for patients with metastatic or recurrent advanced gastric cancer (AGC) receiving a combination of pazopanib with CapeOx (capecitabine and oxaliplatin). We conducted a single-arm, open-label phase II study to determine the efficacy and toxicity of the combination of pazopanib with CapeOx in 66 patients with metastatic or recurrent AGC (ClinicalTrials.gov NCT01130805). IHC analysis of FGFR2 and VEGFR2 was possible in 54 patients (81.8%). Among these 54 patients, the median age was 51.5 years (range 23–72 years) and 59.3% were male. Seven patients (13.5%) had tumor tissues that expressed FGFR2 by IHC. No patients had tumors that expressed VEGFR2. Among the seven patients with tumors with FGFR2 expression, six achieved a partial response (PR), for an 85.7% response rate, and one patient had stable disease. Among the 47 patients with tumors without FGFR2 expression, one had a complete response and 27 had a PR (59.5%). A significant difference in PFS was seen between patients who were positive and negative for FGFR2 by IHC (8.5 vs. 5.6 months, P = 0.050). In prognostic analysis for PFS, only FGFR2 status by IHC (positive vs. negative) had significant value for predicting PFS. FGFR2 expression by IHC might be a useful biomarker for predicting treatment outcomes of patients with metastatic or recurrent AGC treated with a combination of pazopanib and CapeOx.
Where Are We Going? Perspective on Hindu–Muslim Relations in India
The twin issues of making peace and building it over time, which are very much at the forefront of social concerns in contemporary India, remain a major source of worry and require a thoughtful understanding. The lack of effort that has been dedicated towards the development of a systematic understanding of the psychological dynamics underpinning intergroup hostility and violence between Hindus and Muslims in India is disappointing to say the least. While elaborate analyses and accounts of these intergroup dynamics have emerged from academic disciplines such as sociology, political science, economics and history (e.g. Basu, Datta, Sarkar, Sarkar, & Sen, 1993; Brass, 2003; Engineer, 1995; Lal, 2003b; Ludden, 2005; Pandey, 1991; Varshney, 2002; Wilkinson, 2004), psychological theory and research with some predictive validity have been slow to emerge (Ghosh & Kumar, 1991; Hutnik, 2004; Kakar, 1996; Nandy, 1990; Singh, 1989). This brings to the forefront a couple of basic queries: (a) how can the discipline of psychology contribute towards the current understanding of intergroup dynamics in India and (b) can psychological theory and research translate into knowledge and action to promote peaceful coexistence in applied contexts? The objective of this chapter is to address these two questions which comprise the core of our account. We will begin this endeavour by briefly reviewing theoretical and empirical paradigms that have been explored previously. These will then be juxtaposed against the historical, social and political contexts of Hindu–Muslim relations in India to elucidate those issues that have been adequately investigated, but most importantly, those issues that need further elaboration and inquiry. Before embarking upon this assessment, we will provide a brief outline of the historical, political and social contexts of Hindu–Muslim relations in India. We maintain that for analysing socially meaningful phenomena it is necessary to depart from the habitually close confines of psychology’s argumentation and to include historical, cultural, social and political perspectives in analysis and theorising (Valsiner, 2001). Consequently, this chapter aims to gather insights from other disciplines and integrate them with psychological understanding in order to help augment the psychology of peace and conflict resolution.
The charge state of titanium ions implanted into sapphire: An EXAFS investigation
X-ray absorption near-edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) have been used to study the charge states and atomic environments of Ti+ ions implanted into a single crystal sapphire substrate. Fluorescence techniques have been shown capable of detecting signals from the implanted ions even though the implant density in the ∼100μm thick specimen used is ∼ 0.02 at% (the implant profile all being within 0.2μm of the surface). By comparison with TiO2, Ti2O3 and TiO standards, XANES data suggest that the implant species environment is disordered. Further, EXAFS Fourier transforms show that the first shell radius of the implant species is between that of the TiO and Ti2O3 standards. Again, this indicates considerable structural disorder. After annealing the implanted specimen in air at 1150° C for 2 h, a shift towards the shell radius of the TiO2 structure is observed together with a similar change in the XANES appearances. Our conclusion is that the initial titanium ion implant into sapphire exists in both the Ti2+ and Ti3+ charge states and is located in a range of sites in the radiation-damaged material. Annealing produces a shift in charge state towards Ti4+. Implications for the solid solution hardening effect of the implant in sapphire are discussed.
Quantitative cone-beam computed tomography evaluation of palatal bone thickness for orthodontic miniscrew placement.
INTRODUCTION The purpose of this study was to evaluate the 3-dimensional thickness of the palate to determine the best location to place miniscrews. METHODS We selected digital volumetric tomographs from 162 healthy subjects, aged 10 to 44 years (80 male, 82 female). The sample was divided into 3 groups. Group A included 52 subjects (ages, 10-15 years; 28 boys, 24 girls); group B included 38 subjects (ages, 15-20 years; 18 males, 20 females), and group C had 72 subjects (age, 20-44 years; 34 men, 38 women). Ninety-degree paracoronal views of the palatal region at 4, 8, 16, and 24 mm posterior to the incisive foramen were reconstructed, and bone height was measured laterally from the midline in each reconstruction at 0-, 3-, and 6-mm increments to describe the topography of the palate. Measurements of palatal height in 27 of the 162 patients were made by 2 different investigators. Method error was calculated according to the Dahlberg formula (S² = Σd²/2n), and systematic error was evaluated with the dependent Student t test, with P <0.05 considered significant. RESULTS The thickest bone (4-8 mm) was found in the anterior part of the palate, at the suture and in the paramedian areas, but the posterior region, despite its reduced thickness, is also suitable for miniscrews. The Kruskal-Wallis test showed no significant differences between the groups in the various palatal sections (median suture, 3 and 6 mm to the right and left of the suture) except between groups A and C in the 16-mm paracoronal section at 6 mm to the right and left of the suture. There were no statistically significant differences due to sex or between the right and left sides of the palate. CONCLUSIONS The anterior region is the thickest part of the palate, but the bone thickness in the posterior region is also suitable for screws of appropriate diameter and length.
B5. Ultra wideband patch antenna and power amplifier transmitter front end
An ultra-wideband (UWB) transmitter RF front end (RFE), consisting of a UWB circular patch antenna and a UWB GaAs common-source (CS) power amplifier (PA) for the frequency range 3 to 10 GHz, is presented. The patch antenna covers most of the UWB range and has an area of 30 × 30 mm². The UWB patch antenna has a return loss below 10 dB from 3 to 9 GHz, simulated with a 50 Ω port impedance. Since the PA and antenna both have variable impedance, the matching is performed simultaneously. The antenna impedance is directly matched to the PA using optimization algorithms. In simulation, the power transmitted from the antenna varies by around 2 dB over the entire frequency range. The PA has a gain varying from 15 to 30 dB to compensate for the antenna impedance variation.
Young adults' use of communication technology within their romantic relationships and associations with attachment style
In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationships, we examined how attachment was related to communication technology use within romantic relationships. Participants reported on their attachment style and the frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than in 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities; however, once attachment was accounted for, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of an SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals' needs.
The hB-Tree: A Multiattribute Indexing Method with Good Guaranteed Performance
A new multiattribute index structure called the hB-tree is introduced. It is derived from the K-D-B-tree of Robinson [15] but has additional desirable properties. The hB-tree internode search and growth processes are precisely analogous to the corresponding processes in B-trees [1]. The intranode processes are unique. A k-d tree is used as the structure within nodes for very efficient searching. Node splitting requires that this k-d tree be split. This produces nodes which no longer represent brick-like regions in k-space, but that can be characterized as holey bricks, bricks in which subregions have been extracted. We present results that guarantee hB-tree users decent storage utilization, reasonable size index terms, and good search and insert performance. These results guarantee that the hB-tree copes well with arbitrary distributions of keys.
UWB: Machine Learning Approach to Aspect-Based Sentiment Analysis
This paper describes our system participating in the aspect-based sentiment analysis task of Semeval 2014. The goal was to identify the aspects of given target entities and the sentiment expressed towards each aspect. We firstly introduce a system based on supervised machine learning, which is strictly constrained and uses the training data as the only source of information. This system is then extended by unsupervised methods for latent semantics discovery (LDA and semantic spaces) as well as the approach based on sentiment vocabularies. The evaluation was done on two domains, restaurants and laptops. We show that our approach leads to very promising results.
Architectural Issues in Software Reuse: It's Not Just the Functionality, It's the Packaging
Effective reuse depends not only on finding and reusing components, but also on the ways those components are combined. The informal folklore of software engineering provides a number of diverse styles for organizing software systems. These styles, or architectures, show how to compose systems from components; different styles expect different kinds of component packaging and different kinds of interactions between the components. Unfortunately, these styles and packaging distinctions are often implicit; as a consequence, components with appropriate functionality may fail to work together. This talk surveys common architectural styles, including important packaging and interaction distinctions, and proposes an approach to the problem of reconciling architectural mismatches.
Game Design to Measure Reflexes and Attention Based on Biofeedback Multi-Sensor Interaction
This paper presents a multi-sensor system for implementing biofeedback as a human-computer interaction technique in a game involving driving cars in risky situations. The sensors used are: Eye Tracker, Kinect, pulsometer, respirometer, electromyography (EMG) and galvanic skin resistance (GSR). An algorithm has been designed which gives rise to an interaction logic with the game according to the set of physiological constants obtained from the sensors. The results reflect a score of 72.333 on the System Usability Scale (SUS), a significant difference of p = 0.026 in GSR values between the start and end of the game, and a correlation of r = 0.659 and p = 0.008, while playing with the Kinect, between the breathing level and the energy and joy factor. All the sensors used had an impact on the end results, so none of them should be disregarded in future lines of research, even though it would be interesting to obtain breathing values separately from cardio values.
A Multi-Objective Optimization Framework for Offshore Wind Farm Layouts and Electric Infrastructures
Current offshore wind farm (OWF) design processes are based on a sequential approach which does not guarantee system optimality because it oversimplifies the problem by discarding important interdependencies between design aspects. This article presents a framework to integrate, automate and optimize the design of OWF layouts and the respective electrical infrastructures. The proposed framework optimizes different goals simultaneously (e.g., annual energy delivered and investment cost), which leads to efficient trade-offs during the design phase, e.g., reduction of wake losses vs. collection system length. Furthermore, the proposed framework is independent of economic assumptions, meaning that no a priori values, such as the interest rate or energy price, are needed. The proposed framework was applied to the Dutch Borssele areas I and II. A wide range of OWF layouts was obtained through the optimization framework. OWFs with similar energy production and investment cost to layouts designed with standard sequential strategies were obtained through the framework, meaning that the proposed framework can create different OWF layouts that would have been missed by designers. In conclusion, the proposed multi-objective optimization framework represents a mind shift in design tools for OWFs which allows cost savings in the design and operation phases.
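A toy sketch of the multi-objective idea: rather than collapsing annual energy delivered and investment cost into a single a priori cost figure, candidate layouts are filtered down to the non-dominated (Pareto) set and the trade-off is left to the designer. The candidate values and units below are made up for illustration and have nothing to do with the Borssele case study.

```python
def pareto_front(candidates):
    """Keep non-dominated (energy, cost) designs: maximize annual energy
    delivered, minimize investment cost."""
    front = []
    for i, (e_i, c_i) in enumerate(candidates):
        dominated = any(e_j >= e_i and c_j <= c_i and (e_j > e_i or c_j < c_i)
                        for j, (e_j, c_j) in enumerate(candidates) if j != i)
        if not dominated:
            front.append((e_i, c_i))
    return sorted(front)

# Hypothetical layouts as (annual energy in GWh, investment cost in M EUR).
layouts = [(820.0, 310.0), (845.0, 355.0), (830.0, 300.0), (805.0, 295.0)]
print(pareto_front(layouts))  # trade-off curve between energy and cost
```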
Intracellular amino acid concentrations in children with chronic renal insufficiency
We studied amino acid concentrations in granulocytes and plasma of 24 children with chronic renal failure and 15 healthy children. Granulocytes were isolated from 10 ml of blood using a dextran-Ficoll-Hypaque procedure. Intracellular levels of leucine, isoleucine, methionine, phenylalanine, lysine, histidine, tyrosine, and arginine were significantly lower in children with chronic renal failure than in healthy children. There were no significant differences in intracellular and plasma amino acid concentrations between children with chronic renal failure on a well-balanced protein-restricted diet and children with chronic renal failure with a normal protein intake.
The gender identity/gender dysphoria questionnaire for adolescents and adults.
The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all ≥ .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.
Model-based Deep Reinforcement Learning for Dynamic Portfolio Optimization
Dynamic portfolio optimization is the process of sequentially allocating wealth to a collection of assets over consecutive trading periods, based on investors' return-risk profiles. Automating this process with machine learning remains a challenging problem. Here, we design a deep reinforcement learning (RL) architecture with an autonomous trading agent such that investment decisions and actions are made periodically, based on a global objective, with autonomy. In particular, without relying on a purely model-free RL agent, we train our trading agent using a novel RL architecture consisting of an infused prediction module (IPM), a generative adversarial data augmentation module (DAM) and a behavior cloning module (BCM). Our model-based approach works with both on-policy and off-policy RL algorithms. We further design a back-testing and execution engine which interacts with the RL agent in real time. Using historical real financial market data, we simulate trading with practical constraints, and demonstrate that our proposed model is robust, profitable and risk-sensitive, as compared to baseline trading strategies and model-free RL agents from prior work.
From the neuron doctrine to neural networks
For over a century, the neuron doctrine — which states that the neuron is the structural and functional unit of the nervous system — has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.
Variations in Multiple Birth Rates and Impact on Perinatal Outcomes in Europe
OBJECTIVE Infants from multiple pregnancies have higher rates of preterm birth, stillbirth and neonatal death, and differences in multiple birth rates (MBR) exist between countries. We aimed to describe differences in MBR in Europe and to investigate the impact of these differences on adverse perinatal outcomes at a population level. METHODS We used national aggregate birth data on multiple pregnancies, maternal age, gestational age (GA), stillbirth and neonatal death collected in the Euro-Peristat project (29 countries in 2010, N = 5 074 643 births). We also used European Society of Human Reproduction and Embryology (ESHRE) data on assisted conception and single embryo transfer (SET). The impact of MBR on outcomes was studied using meta-analysis techniques with random-effects models to derive pooled risk ratios (pRR) overall and for four groups of countries defined by their MBR. We computed population attributable risks (PAR) for these groups. RESULTS In 2010, the average MBR was 16.8 per 1000 women giving birth, ranging from 9.1 (Romania) to 26.5 (Cyprus). Compared to singletons, multiples had a nine-fold increased risk (pRR 9.4, 95% CI 9.1-9.8) of preterm birth (<37 weeks GA) and an almost 12-fold increased risk (pRR 11.7, 95% CI 11.0-12.4) of very preterm birth (<32 weeks GA). Pooled RR were 2.4 (95% CI 1.5-3.6) for fetal mortality at or after 28 weeks GA and 7.0 (95% CI 6.1-8.0) for neonatal mortality. PAR of neonatal death and very preterm birth were higher in countries with high MBR compared to low MBR (17.1% (95% CI 13.8-20.2) versus 9.8% (95% CI 9.6-11.0) for neonatal death and 29.6% (95% CI 28.5-30.6) versus 17.5% (95% CI 15.7-18.3) for very preterm births, respectively). CONCLUSIONS Wide variations in MBR and their impact on population outcomes imply that efforts by countries to reduce MBR could improve perinatal outcomes, enabling better long-term child health.
MIMO SAR OFDM Chirp Waveform Diversity Design With Random Matrix Modulation
Multiple-input multiple-output (MIMO) synthetic aperture radar (SAR) has received much attention due to its interesting application potential, but effective waveform diversity design remains a technical challenge. In a MIMO SAR, each antenna should transmit a unique waveform, orthogonal to the waveforms transmitted by the other antennas. The waveforms should have a large time-bandwidth product, low cross-correlation interference, and a low peak-to-average ratio. To reach these aims, this paper proposes an orthogonal frequency division multiplexing (OFDM) chirp waveform with random matrix modulation. The designed waveforms are decorrelated in both time delay and frequency shift. For MIMO SAR high-resolution imaging, the proposed OFDM chirp waveform parameters are optimally designed, and their performance is analyzed through the ambiguity function and a range-Doppler-based MIMO SAR imaging algorithm. Extensive and comparative simulation results show that the waveforms offer high range resolution, constant time-domain and almost constant frequency-domain modulus, a large time-bandwidth product, a low peak-to-average ratio, and low time-delay and frequency-shift correlation peaks. More importantly, this scheme can easily generate more than three orthogonal waveforms with a large time-bandwidth product.
An Elementary Derivation of Mean Wait Time in Polling Systems
Polling systems are a well-established subject in queueing theory. However, their formal treatments generally rely heavily on relatively sophisticated theoretical tools, such as moment generating functions and Laplace transforms, and solutions often require solving large systems of equations. We show that, if you are willing to settle for the mean waiting time of a system rather than higher moments, it can be found through an elementary derivation based only on algebra and some well-known properties of Poisson processes. Our result is simple enough to be easily used in real-world applications, and the simplicity of our derivation makes it ideal for pedagogical purposes. Introduction Polling systems are a classic subject in stochastic analysis, applicable to such diverse areas as computer hardware and elevator performance. In a polling system there are N queues which receive jobs, and a single server processes all queues. The server visits the queues in a deterministic order, stopping at each queue to process any jobs which might be there, before continuing along its path. Depending on the specific system, when the processor arrives at a queue it may process jobs until the queue is empty (usually called “exhaustive processing”) or process only those jobs that are in the queue at the time of arrival (“gated polling”). Polling systems have been treated by many authors, both as a subject in themselves and as models for applied situations. Some of the earliest works were in the context of modeling the performance of hard disks, where each track on the disk is seen as a distinct queue and the jobs are read/write memory requests. Several of the earliest derivations [1, 2] were later found to contain subtle errors, which were eventually corrected at the cost of more complicated derivations and more unwieldy final results. Typically in the literature, determining quantities of interest for a polling system (such as average queue length or waiting times at the different queues) requires numerically solving a set of K equations, where K is polynomial in the number of queues [3-5]. This complexity makes these analytical results impractical for many real-world situations. For example, the effort required to understand a derivation, formulate the full set of equations for a particular system, and solve them numerically often exceeds the effort required to exhaustively simulate the system and evaluate its performance empirically. [6] is one of the few works that trades off fine-grained results for a simple derivation and an easy-to-use formula. They studied the performance of hard disks by modeling them as a polling system with a continuum of queues. Their derivation uses no math beyond basic calculus, and they give a closed formula for the system’s waiting time rather than a set of equations (on the other hand, their approach does not give higher moments of the waiting time, or waiting times for the different queues). The current paper generalizes that work to discrete queues, and discusses it in the more general context of polling systems. Our analysis is applicable to any order in which the queues are served, and unlike previous work it allows for the distribution of job size to vary by queue (though it requires that the average job size be the same across all queues). The key to the proof is an accounting technique we call “shuffling” (the terminology used by [6]), which breaks the mean wait time into two easy-to-calculate terms.
In the next section we give an overview of our model and the terminology we use; the model is deliberately simple for clarity of exposition. Next we present the derivation of mean wait time, with an eye toward how essentially the same derivation (perhaps with a bit more calculation) can be applied to related models. Finally, we discuss some applications of our work and directions for future research.
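As a concrete companion to the model description above, here is a minimal discrete-event simulation of a symmetric exhaustive-service polling system with Poisson arrivals; it estimates the mean wait time empirically under assumed parameters (exponential service, fixed switch-over times) and is a toy illustration rather than the elementary derivation presented in the paper.

```python
# Toy simulation of a symmetric exhaustive-service polling system.
# Assumed parameters: Poisson arrivals, exponential service, fixed switch-over time.
import random
from collections import deque

def simulate_polling(num_queues=4, arrival_rate=0.2, mean_service=1.0,
                     switchover=0.1, horizon=100_000.0, seed=1):
    rng = random.Random(seed)

    # Pre-generate Poisson arrival times for every queue.
    arrivals = []
    for _ in range(num_queues):
        t, times = 0.0, deque()
        while t < horizon:
            t += rng.expovariate(arrival_rate)
            times.append(t)
        arrivals.append(times)

    clock, waits, q = 0.0, [], 0
    while clock < horizon:
        # Exhaustive service: keep serving queue q while jobs have already arrived.
        while arrivals[q] and arrivals[q][0] <= clock:
            arrival = arrivals[q].popleft()
            waits.append(clock - arrival)                 # time spent waiting for service
            clock += rng.expovariate(1.0 / mean_service)  # service duration
        clock += switchover                               # walk to the next queue
        q = (q + 1) % num_queues
    return sum(waits) / len(waits)

print(f"estimated mean wait: {simulate_polling():.2f}")
```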
Towards Smart Robots: Rock-Paper-Scissors Gaming versus Human Players
In this project, a human-robot interaction system was developed to let people naturally play rock-paper-scissors games against a smart robotic opponent. The robot does not make random choices; the system analyzes the previous rounds and tries to forecast the next move. A machine learning algorithm based on a Gaussian Mixture Model (GMM) allows us to increase the percentage of robot victories. This is an important aspect of natural human-robot interaction: people do not like playing against “stupid” machines, whereas they are stimulated by confronting a skilled opponent.
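The abstract does not specify how the GMM drives the prediction, so the following sketch is only one plausible realization: a generative classifier that fits one Gaussian mixture per candidate next move over windows of the opponent's recent moves (encoded as integers with a little jitter) and then plays the counter to the most likely prediction. All names, window sizes and component counts are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

MOVES = ("rock", "paper", "scissors")
COUNTER = {0: 1, 1: 2, 2: 0}   # 1 (paper) beats 0 (rock), etc.
HISTORY = 3                     # number of past opponent moves used as features

def fit_models(opponent_moves, n_components=2, seed=0):
    """Fit one GaussianMixture per candidate next move over history windows."""
    X = np.array([opponent_moves[i - HISTORY:i]
                  for i in range(HISTORY, len(opponent_moves))], dtype=float)
    y = np.array(opponent_moves[HISTORY:])
    rng = np.random.default_rng(seed)
    models, priors = {}, {}
    for move in range(3):
        Xm = X[y == move]
        if len(Xm) <= n_components:                 # not enough data for this class yet
            continue
        Xm = Xm + rng.normal(0.0, 0.05, Xm.shape)   # jitter the integer codes
        models[move] = GaussianMixture(n_components, random_state=seed).fit(Xm)
        priors[move] = len(Xm) / len(X)
    return models, priors

def robot_move(models, priors, recent):
    """Counter the opponent move with the highest (likelihood * prior)."""
    if not models:
        return np.random.randint(3)                 # fall back to random play
    x = np.asarray(recent[-HISTORY:], dtype=float).reshape(1, -1)
    scores = {m: g.score(x) + np.log(priors[m]) for m, g in models.items()}
    return COUNTER[max(scores, key=scores.get)]

# toy usage: an opponent who tends to cycle rock -> paper -> scissors
history = [0, 1, 2] * 20
models, priors = fit_models(history)
print(MOVES[robot_move(models, priors, history)])
```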
The lexical nature of syntactic ambiguity resolution.
Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive from aspects of lexical representation and are resolved by the same processing mechanisms. Reinterpreting syntactic ambiguity resolution as a form of lexical ambiguity resolution obviates the need for special parsing principles to account for syntactic interpretation preferences, reconciles a number of apparently conflicting results concerning the roles of lexical and contextual information in sentence processing, explains differences among ambiguities in terms of ease of resolution, and provides a more unified account of language comprehension than was previously available.
Nanostructured powders and their industrial application. Materials Research Society symposium proceedings Volume 520
This new volume from the MRS brings together industrial and academic researchers involved in the synthesis and use of nanostructured powders such as fumed silica, pyrolytic titania and precipitated silica, as well as less conventional nanostructured powders such as exfoliated clays. Similarities and differences among these various fields of study and application are featured. In some ways, the volume is a continuation of the "Better Ceramics Through Chemistry" series. One main difference, however, is that this volume focuses on the industrial use of these materials. Topics include: overview of nanopowder technology; physical aspects of nanostructured powders; synthesis of nanostructured powders; and applications of nanostructured powders.
Investigation of selective catalytic reduction impact on mercury speciation under simulated NOx emission control conditions.
Selective catalytic reduction (SCR) technology increasingly is being applied for controlling emissions of nitrogen oxides (NOx) from coal-fired boilers. Some recent field and pilot studies suggest that the operation of SCR could affect the chemical form of mercury (Hg) in coal combustion flue gases. The speciation of Hg is an important factor influencing the control and environmental fate of Hg emissions from coal combustion. The vanadium and titanium oxides, used commonly in the vanadia-titania SCR catalyst for catalytic NOx reduction, promote the formation of oxidized mercury (Hg2+). The work reported in this paper focuses on the impact of SCR on elemental mercury (Hg0) oxidation. Bench-scale experiments were conducted to investigate Hg0 oxidation in the presence of simulated coal combustion flue gases and under SCR reaction conditions. Flue gas mixtures with different concentrations of hydrogen chloride (HCl) and sulfur dioxide (SO2) for simulating the combustion of bituminous coals and subbituminous coals were tested in these experiments. The effects of HCl and SO2 in the flue gases on Hg0 oxidation under SCR reaction conditions were studied. It was observed that HCl is the most critical flue gas component that causes conversion of Hg0 to Hg2+ under SCR reaction conditions. The importance of HCl for Hg0 oxidation found in the present study provides the scientific basis for the apparent coal-type dependence observed for Hg0 oxidation occurring across the SCR reactors in the field.
Radio resource allocation and power control scheme to mitigate interference in device-to-device communications underlaying LTE-A uplink cellular networks
The integration of Device-to-Device (D2D) communication into a cellular network improves system spectral efficiency, increases capacity, reduces power consumption and also reduces traffic to the evolved NodeB (eNB). However, D2D communication generates interference to the conventional cellular network, which deteriorates system performance. In this paper, we propose a radio resource allocation and power control scheme that mitigates this interference using a cell sectorization scheme while the D2D pairs reuse uplink cellular resources. Through simulations, we show that our proposed scheme improves overall system performance compared with existing methods.
A Basic-Cycle Calculation Technique for Efficient Dynamic Data Redistribution
Array redistribution is usually required to enhance algorithm performance in many parallel programs on distributed memory multicomputers. Since it is performed at run-time, there is a performance trade-off between the efficiency of the new data decomposition for a subsequent phase of an algorithm and the cost of redistributing data among processors. In this paper, we present a basic-cycle calculation technique to efficiently perform BLOCK-CYCLIC(s) to BLOCK-CYCLIC(t) redistribution. The main idea of the basic-cycle calculation technique is, first, to develop closed forms for computing the source/destination processors of some specific array elements in a basic-cycle, which is defined as lcm(s, t)/gcd(s, t). These closed forms are then used to efficiently determine the communication sets of a basic-cycle. From the source/destination processor/data sets of a basic-cycle, we can efficiently perform a BLOCK-CYCLIC(s) to BLOCK-CYCLIC(t) redistribution. To evaluate the performance of the basic-cycle calculation technique, we have implemented this technique on an IBM SP2 parallel machine, along with the PITFALLS method and the multiphase method. The cost models for these three methods are also presented. The experimental results show that the basic-cycle calculation technique outperforms the PITFALLS method and the multiphase method for most test samples.
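To make the distributions concrete, the short sketch below computes the owning processor of each global index under BLOCK-CYCLIC(s) and BLOCK-CYCLIC(t) and checks that the (source, destination) pattern repeats after P · lcm(s, t) elements; it illustrates the layouts involved, not the paper's closed forms or communication-set construction.

```python
from math import gcd

def owner(i, block, nprocs):
    """Owning processor of global index i under BLOCK-CYCLIC(block) over nprocs."""
    return (i // block) % nprocs

def redistribution_pairs(s, t, nprocs, length):
    """(source, destination) processor for each element when going
    from BLOCK-CYCLIC(s) to BLOCK-CYCLIC(t) on the same processor set."""
    return [(owner(i, s, nprocs), owner(i, t, nprocs)) for i in range(length)]

s, t, P = 3, 4, 4
lcm_st = s * t // gcd(s, t)
period = P * lcm_st                       # span after which the (src, dst) pattern repeats
pairs = redistribution_pairs(s, t, P, 2 * period)
assert pairs[:period] == pairs[period:2 * period]
print(f"lcm(s, t) = {lcm_st}, repeating span = {period} elements")
```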
3D-printed spherical dipole antenna integrated on small RF node
New three-dimensional (3D) printing techniques enable the integration of an antenna directly onto the package of a small wireless sensor node. This volume-filling approach ensures near-optimal bandwidth performance of the small antenna, increasing a system’s battery life, data rate or range. Simulated results show that the fabricated spherical antenna’s bandwidth-efficiency product is more than half of the fundamental limit, and radiation pattern measurements exhibit a dipole pattern with −0.7 dBi gain.
Machine Learning Attacks on PolyPUF, OB-PUF, RPUF, and PUF-FSM
A physically unclonable function (PUF) is a circuit of which the input–output behavior is designed to be sensitive to the random variations of its manufacturing process. This building block hence facilitates the authentication of any given device in a population of identically laid-out silicon chips, similar to the biometric authentication of a human. The focus and novelty of this work is the development of efficient impersonation attacks on the following four PUF-based authentication protocols: (1) the so-called PolyPUF protocol of Konigsmark, Chen, and Wong, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2016, (2) the so-called OB-PUF protocol of Gao, Li, Ma, Al-Sarawi, Kavehei, Abbott, and Ranasinghe, as presented at the IEEE conference PerCom 2016, (3) the so-called RPUF protocol of Ye, Hu, and Li, as presented at the IEEE conference AsianHOST 2016, and (4) the so-called PUF–FSM protocol of Gao, Ma, Al-Sarawi, Abbott, and Ranasinghe, as published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems in 2017. The common flaw of all four designs is that the use of lightweight obfuscation logic provides insufficient protection against machine learning attacks.
Risk and Real Estate Investment: An International Perspective
Ever since Markowitz (1952, 1959) laid the groundwork for modern portfolio theory, much has been written about the benefits of diversification. Markowitz suggested that it is possible to reduce risk by diversifying across investments without sacrificing return. He provided a solution to the problem of portfolio construction by demonstrating that risk is quantifiable and can be divided into two parts: the systematic part, or the portion that is unavoidable once the investor invests in a particular asset class, and the unsystematic risk, or the part that can be reduced by creating a mixed-asset portfolio. One aspect of this portfolio selection process can be mixing assets across geographic boundaries. The main argument in favor of international diversification is that foreign investments offer additional diversification potential to further reduce the total risk of the portfolio. Stated differently, international diversification improves the risk-adjusted performance of a domestic portfolio, provided that the investments have independent price behavior. The leading finance paradigm of efficient markets stresses these benefits of so-called naïve diversification. This strategy is acceptable as long as investors are able to freely move their investments between the capital markets of different countries. In other words, naïve diversification would be the best strategy if international capital markets were fully integrated and efficient. This may approximately be the case for the markets for stocks and bonds, but it is certainly not true in the global market for real estate.
The effects of divided attention on encoding and retrieval processes in human memory.
The authors examined the effects of divided attention (DA) at encoding and retrieval in free recall, cued recall, and recognition memory in 4 experiments. Lists of words or word pairs were presented auditorily and recalled orally; the secondary task was a visual continuous reaction-time (RT) task with manual responses. At encoding, DA was associated with large reductions in memory performance, but small increases in RT; trade-offs between memory and RT were under conscious control. In contrast, DA at retrieval resulted in small or no reductions in memory, but in comparatively larger increases in RT, especially in free recall. Memory performance was sensitive to changes in task emphasis at encoding but not at retrieval. The results are discussed in terms of controlled and automatic processes and speculatively linked to underlying neuropsychological mechanisms.
emoji2vec: Learning Emoji Representations from their Description
Many current natural language processing applications for social media rely on representation learning and utilize pre-trained word embeddings. Several publicly available, pre-trained sets of word embeddings exist, but they contain few or no emoji representations, even as emoji usage in social media has increased. In this paper we release emoji2vec, pre-trained embeddings for all Unicode emoji which are learned from their descriptions in the Unicode emoji standard. The resulting emoji embeddings can be readily used in downstream social natural language processing applications alongside word2vec. We demonstrate, for the downstream task of sentiment analysis, that emoji embeddings learned from short descriptions outperform a skip-gram model trained on a large collection of tweets, while avoiding the need for contexts in which emoji must appear frequently in order to estimate a representation.
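As a usage illustration, the snippet below shows how released emoji vectors could be queried next to word2vec vectors with gensim; the file names are placeholders, and the exact distribution format should be checked against the emoji2vec release.

```python
# Sketch: querying emoji and word vectors side by side with gensim.
# File names are placeholders; confirm the actual formats in the emoji2vec release.
from gensim.models import KeyedVectors

word_vectors = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
emoji_vectors = KeyedVectors.load_word2vec_format("emoji2vec.bin", binary=True)

def embed_token(token):
    """Look a token up in the emoji table first, then fall back to word vectors."""
    if token in emoji_vectors:
        return emoji_vectors[token]
    if token in word_vectors:
        return word_vectors[token]
    return None  # out-of-vocabulary

vec = embed_token("🎉")
```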
The nature of feedback: how different types of peer feedback affect writing performance
Although providing feedback is commonly practiced in education, there is no general agreement regarding what type of feedback is most helpful and why it is helpful. This study examined the relationship between various types of feedback, potential internal mediators, and the likelihood of implementing feedback. Five main predictions were developed from the feedback literature in writing, specifically regarding feedback features (summarization, identifying problems, providing solutions, localization, explanations, scope, praise, and mitigating language) as they relate to potential causal mediators of problem or solution understanding and problem or solution agreement, leading to the final outcome of feedback implementation. To empirically test the proposed feedback model, 1,073 feedback segments from writing assessed by peers were analyzed. Feedback was collected using SWoRD, an online peer review system. Each segment was coded for each of the feedback features, implementation, agreement, and understanding. The correlations between the feedback features, levels of mediating variables, and implementation rates revealed several significant relationships. Understanding was the only significant mediator of implementation. Several feedback features were associated with understanding: including solutions, a summary of the performance, and the location of the problem were associated with increased understanding, while explanations of problems were associated with decreased understanding. Implications of these results are discussed.
Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network
People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications do not react to context changes. Running services should adapt to the changing environment, and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of mobile services, focusing on the challenges of the transportation domain.
Malware Detection on Mobile Devices (Asaf Shabtai, Ben-Gurion University, Israel): We present various approaches for mitigating malware on mobile devices, which we have implemented and evaluated on Google Android. Our work is divided into three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files.
Dynamic Approximative Data Caching in Wireless Sensor Networks (Nils Hoeller, IFIS, University of Luebeck): Communication is generally the most energy-consuming task in wireless sensor networks. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work, optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source, which reduces the communication demand and hence saves energy. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively.
Gossip-based Data Fusion Framework for Radio Resource Map (Jin Yang, Ilmenau University of Technology): In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault-tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to the harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time-critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify the simulation results.
Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network (Roy Cabaniss, Missouri S&T, with Trotta, University of Missouri, Kansas City, and Srinivasa Vulli, Missouri S&T): The patterns of movement used by mobile ad-hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups that accurately indicate a node's regular contact patterns while dynamically shifting to represent changes in the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining a high delivery ratio.
MobileSOA Framework for Context-Aware Mobile Applications (Aaratee Shrestha, University of Leipzig): Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefits of implementing a mobile service-oriented architecture (SOA). A robust mobile SOA framework is designed for building and operating a lightweight and flexible context-aware mobile application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords: service-oriented architecture (SOA); mobile service; context-awareness; context-aware mobile application (CAMA).
Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks (Vimal Kumar, Missouri S&T): Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario, it is also important to minimize the number of security operations, as they are computationally expensive, without compromising on security. In this paper we evaluate the performance of such an end-to-end security algorithm. We provide results from an implementation of the algorithm on Mica2 motes and conclude how it improves on traditional hop-by-hop security.
A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks (Dylan McDonald, MS&T): Outlier detection is a well-studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons, and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized by controlling when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to
Foreground Segmentation for Anomaly Detection in Surveillance Videos Using Deep Residual Networks
Efficient anomaly detection in surveillance videos across diverse environments represents a major challenge in computer vision. This paper proposes a background subtraction approach based on the recent deep learning framework of residual neural networks that is capable of detecting multiple objects of different sizes by pixel-wise foreground segmentation. The proposed algorithm takes as input a reference (anomaly-free) frame and a target frame, both temporally aligned, and outputs a segmentation map of the same spatial resolution in which the highlighted pixels denote the detected anomalies, i.e., all the elements not present in the reference frame. Furthermore, we analyze the benefits of different reconstruction methods for restoring the original image resolution and demonstrate the improvement of residual architectures over the smaller and simpler models proposed by previous similar works. Experiments show competitive performance on the tested dataset, as well as real-time processing capability. Keywords— Deep learning, convolutional neural networks, ResNet, residual networks, background subtraction, foreground segmentation, anomaly detection, surveillance, real-time.
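For readers who want the shape of the idea in code, here is a minimal PyTorch sketch of a reference-plus-target residual network that outputs a pixel-wise foreground map at the input resolution; it is not the authors' architecture, and every layer width and block count is an invented placeholder.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # identity skip connection

class ForegroundNet(nn.Module):
    """Pixel-wise foreground probability from a (reference, target) frame pair."""
    def __init__(self, width=32, blocks=4):
        super().__init__()
        self.stem = nn.Conv2d(6, width, 3, padding=1)   # two RGB frames stacked -> 6 channels
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(blocks)])
        self.head = nn.Conv2d(width, 1, 1)

    def forward(self, reference, target):
        x = torch.cat([reference, target], dim=1)
        x = torch.relu(self.stem(x))
        return torch.sigmoid(self.head(self.blocks(x)))  # same H x W as the input

# shape check on dummy frames
net = ForegroundNet()
ref, tgt = torch.rand(1, 3, 120, 160), torch.rand(1, 3, 120, 160)
print(net(ref, tgt).shape)   # torch.Size([1, 1, 120, 160])
```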
An Image Matching Algorithm Based on SIFT and Improved LTP
SIFT is one of the most robust and widely used image matching algorithms based on local features, but its key-point descriptors have 128 dimensions. To address the problem of this high dimensionality and complexity, a novel image matching algorithm is proposed. The descriptors of SIFT key-points are constructed using rotation-invariant LTP, and the city-block distance is employed to reduce the cost of key-point matching. Experiments with images under different lighting, blur and rotation changes show that this method reduces processing time and raises image matching efficiency.
The place of the literature review in grounded theory research
For those employing grounded theory as a research methodology, the issue of how and when to engage with existing literature is often problematic, especially for PhD students. With this in mind, the current article seeks to offer some clarity on the topic and provide novice grounded theory researchers in particular with advice on how to approach the issue of the literature review in grounded theory. This is done by reviewing the origins of grounded theory, exploring the original stance taken by the founders of the methodology with regard to the literature review, tracking how this position has changed over time, outlining the rationale associated with specific positions and discussing ideas for reconciling opposing perspectives. Coupled with this, the author draws on his own experience of using grounded theory for his PhD research to explain how extant literature may be used and discusses how the nature of engagement with existing literature may impact upon the overall written presentation of a grounded theory study.
Risk Taking Under the Influence: A Fuzzy-Trace Theory of Emotion in Adolescence.
Fuzzy-trace theory explains risky decision making in children, adolescents, and adults, incorporating social and cultural factors as well as differences in impulsivity. Here, we provide an overview of the theory, including support for counterintuitive predictions (e.g., when adolescents "rationally" weigh costs and benefits, risk taking increases, but it decreases when the core gist of a decision is processed). Then, we delineate how emotion shapes adolescent risk taking: from encoding of representations of options, to retrieval of values/principles, to application of those values/principles to representations of options. Our review indicates that: (i) gist representations often incorporate emotion, including valence, arousal, feeling states, and discrete emotions; and (ii) emotion determines whether gist or verbatim representations are processed. We recommend interventions to reduce unhealthy risk taking that inculcate stable gist representations, enabling adolescents to identify danger quickly and automatically even when experiencing emotion, which differs sharply from traditional approaches emphasizing deliberation and precise analysis.
Worse glycaemic control in LADA patients than in those with type 2 diabetes, despite a longer time on insulin therapy
Our aim was to study whether glycaemic control differs between individuals with latent autoimmune diabetes in adults (LADA) and patients with type 2 diabetes, and whether it is influenced by time on insulin therapy. We performed a retrospective study of 372 patients with LADA (205 men and 167 women; median age 54 years, range 35–80 years) from Swedish cohorts from Skåne (n = 272) and Västerbotten (n = 100). Age- and sex-matched patients with type 2 diabetes were included as controls. Data on the use of oral hypoglycaemic agents (OHAs), insulin and insulin–OHA combination therapy was retrieved from the medical records. Poor glycaemic control was defined as HbA1c ≥7.0% (≥53 mmol/mol) at follow-up. The individuals with LADA and with type 2 diabetes were followed for an average of 107 months. LADA patients were leaner than type 2 diabetes patients at diagnosis (BMI 27.7 vs 31.0 kg/m2; p < 0.001) and follow-up (BMI 27.9 vs 30.2 kg/m2; p < 0.001). Patients with LADA had been treated with insulin for longer than those with type 2 diabetes (53.3 vs 28.8 months; p < 0.001). There was no significant difference between the patient groups with regard to poor glycaemic control at diagnosis, but more patients with LADA (67.8%) than type 2 diabetes patients (53.0%; p < 0.001) had poor glycaemic control at follow-up. Patients with LADA had worse glycaemic control at follow-up compared with participants with type 2 diabetes (OR = 1.8, 95% CI 1.2, 2.7), adjusted for age at diagnosis, HbA1c, BMI at diagnosis, follow-up time and duration of insulin treatment. Individuals with LADA have worse glycaemic control than patients with type 2 diabetes despite a longer time on insulin therapy.
Automatic music genre classification using ensemble of classifiers
This paper presents a novel approach to the task of automatic music genre classification which is based on multiple feature vectors and an ensemble of classifiers. Multiple feature vectors are extracted from a single music piece. First, three 30-second music segments, one from the beginning, one from the middle and one from the end part of a music piece, are selected and feature vectors are extracted from each segment. Individual classifiers are trained to account for each feature vector extracted from each music segment. At classification time, the outputs provided by the individual classifiers are combined through simple combination rules such as the majority vote, max, sum and product rules, with the aim of improving music genre classification accuracy. Experiments carried out on a large dataset containing more than 3,000 music samples from ten different Latin music genres have shown that, for the task of automatic music genre classification, the features extracted from the middle part of the music provide better results than using the segments from the beginning or end part of the music. Furthermore, the proposed ensemble approach, which combines the multiple feature vectors, provides better accuracy than using single classifiers and any individual music segment.
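The combination rules named above are straightforward to apply once each segment classifier emits class probabilities; the sketch below implements majority vote, max, sum and product fusion over an illustrative probability matrix (the numbers and shapes are made up).

```python
import numpy as np

def combine(prob_matrix, rule="sum"):
    """Fuse per-classifier class probabilities into one predicted class index.

    prob_matrix: array of shape (n_classifiers, n_classes), one row per
    classifier (e.g. one per 30-second segment of the piece).
    """
    if rule == "majority":
        votes = np.argmax(prob_matrix, axis=1)
        return int(np.bincount(votes, minlength=prob_matrix.shape[1]).argmax())
    if rule == "max":
        return int(prob_matrix.max(axis=0).argmax())
    if rule == "sum":
        return int(prob_matrix.sum(axis=0).argmax())
    if rule == "product":
        return int(prob_matrix.prod(axis=0).argmax())
    raise ValueError(f"unknown rule: {rule}")

# three segment classifiers, four genres (illustrative numbers)
probs = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.3, 0.4, 0.2, 0.1],
                  [0.5, 0.3, 0.1, 0.1]])
for rule in ("majority", "max", "sum", "product"):
    print(rule, combine(probs, rule))
```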
FPGA-Based Implementation of Multiple Modes in Near Field Inductive Communication Using Frequency Splitting and MIMO Configuration
Conventional near-field inductive wireless power transfer theory shows that systems exhibit frequency splitting when a strong coupling condition exists between the transmitter and the receiver. However, this characteristic has not been explored for communication. Our analysis demonstrates that frequency splitting creates multiple frequencies that support inductive communication in a MIMO configuration. As a result, we implement binary chirp modulation on an FPGA and validate two-channel communication using splitting. This paper introduces the use of chirp signals to spread data and excite inductive MIMO systems. Simulation and experiment show that the splitting frequency depends on the quality factor and the flux coupling condition between the data source and receiver. In other words, the degree of mutual coupling defines the splitting mode. This paper proves that multi-channel communication using splitting can be used for data transmission. The results show that data rates of 50 Mbps or 69 kbps can be achieved for each channel between the transmitters and receivers when the transmitter and receiver operate at the original resonant frequency of 13.56 MHz or 28 kHz, respectively, and the distance between them varies from about 1 cm to 10 cm.
Targeted property-based testing
We introduce targeted property-based testing, an enhanced form of property-based testing that aims to make the input generation component of a property-based testing tool guided by a search strategy rather than being completely random. Thus, this testing technique combines the advantages of both search-based and property-based testing. We demonstrate the technique with the framework we have built, called Target, and show its effectiveness on three case studies. The first of them demonstrates how Target can employ simulated annealing to generate sensor network topologies that form configurations with high energy consumption. The second case study shows how the generation of routing trees for a wireless network equipped with directional antennas can be guided to fulfill different energy metrics. The third case study employs Target to test the noninterference property of information-flow control abstract machine designs, and compares it with a sophisticated hand-written generator for programs of these abstract machines.
Compensation strategy: does business strategy influence compensation in high‐technology firms?
Drawing on the strategic employee group concept, this study empirically examines whether a firm's innovation strategy influences compensation systems for strategic employee groups in the high-technology industry. We focus on compensation packages for R&D employees who play a critical role in successful implementations of innovation strategy. Using compensation data for middle-level managers and professional employees from 237 firms in the high-technology industry, we found that a firm's strategic intention to pursue innovation has a significant influence on the relative pay level, compensation time horizon, and stock option vesting period lengths of this strategic employee group. Copyright © 2006 John Wiley & Sons, Ltd.
Adaptive Directional Total-Variation Model for Latent Fingerprint Segmentation
A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.
Ventilator advisory system employing load and tolerance strategy recommends appropriate pressure support ventilation settings: multisite validation study.
BACKGROUND Loads on the respiratory muscles, reflected by noninvasive measurement of the real-time power of breathing (POBn), and tolerance of these loads, reflected by spontaneous breathing frequency (f) and tidal volume (Vt), should be considered when evaluating patients with respiratory failure. Pressure support ventilation (PSV) should be applied so that muscle loads are not too high or too low. We propose a computerized, ventilator advisory system employing a load (POBn) and tolerance (f and Vt) strategy in a fuzzy logic algorithm to provide guidance for setting PSV. To validate these recommendations, we performed a multisite study comparing the advisory system recommendations to experienced physician decisions. METHODS Data were obtained from adults who were receiving PSV (n = 87) at three university sites via a combined pressure/flow sensor, which was positioned between the endotracheal tube and the Y-piece of the ventilator breathing circuit and was directed to the advisory system. Recommendations from the advisory system for increasing, maintaining, or decreasing PSV were compared at specific time points to decisions made by physician intensivists at the bedside. RESULTS There were no significant differences in the recommendations by the advisory system (n = 210) compared to those of the physician intensivists to increase, maintain, or decrease PSV (p > 0.05). Physician intensivists agreed with 90.5% of all recommendations. The advisory system was very good at predicting intensivist decisions (r(2) = 0.90; p < 0.05) in setting PSV. CONCLUSIONS The novel load-and-tolerance strategy of the advisory system provided automatic and valid recommendations for setting PSV to appropriately unload the respiratory muscles that were as good as the clinical judgment of physician intensivists.
QoS-Aware Revenue-Cost Optimization for Latency-Sensitive Services in IaaS Clouds
Recently, application service providers have been employing Infrastructure-as-a-Service (IaaS) clouds such as Amazon EC2 to scale their computing resources on-demand to adapt to dynamic workloads. Existing research has been focusing more on cloud resource scaling in batch processing, non latency-sensitive applications. In this paper, we consider the problem of revenue-cost optimization in cloud-based application service providers with stringent QoS requirements, e.g., online gaming services. We propose an integrated approach which combines resource provisioning algorithms and request scheduling disciplines. The main goal is to maximize the service provider's revenue via satisfying pre-defined QoS requirements, and at the same time, to minimize cloud resource cost. We have implemented the proposed resource provisioning algorithms and scheduling disciplines into a cloud scaling framework developed in our previous work. Extensive experiments have been conducted with a fully functional implementation and realistic workloads modeled after real traces of popular online game servers. The results demonstrated the effectiveness of our proposed approach.
Navigating in the new competitive landscape: Building strategic flexibility and competitive advantage in the 21st century
Executive Overview A new competitive landscape is developing, largely based on the technological revolution and increasing globalization. The strategic discontinuities encountered by firms are transforming the nature of competition. Navigating effectively in this new competitive landscape, and building and maintaining competitive advantage, requires a new type of organization. Success in the 21st-century organization will depend first on building strategic flexibility. Developing strategic flexibility and competitive advantage requires exercising strategic leadership, building dynamic core competences, focusing on and developing human capital, effectively using new manufacturing and information technologies, employing valuable strategies (exploiting global markets and cooperative strategies), and implementing new organization structures and culture (horizontal organization, learning and innovative culture, managing the firm as bundles of assets). Thus, the new competitive landscape will require new types of organizations and leaders for survival and global market leadership.
Internet of Music Things: an edge computing paradigm for opportunistic crowdsensing
Device-centric music computation in the era of the Internet is participant-centric data recognition and computation that includes devices such as smartphones, real sound sensors, and computing systems. These participatory devices advance the progression of the Internet of Things, in which devices are responsible for gathering sensor data according to the requirements of the end users. This contribution analyzes a class of qualitative music composition applications in the context of the Internet of Things that we term the Internet of Music Things. In this work, participating individuals with sensing devices capable of music sensing and computation share data within a group and retrieve information for analyzing and mapping any interconnected processes of common interest. We present a crowdsensing architecture for music composition in this contribution. Musical components such as vocal and instrumental performances are handled by a dedicated edge layer in the music crowdsensing architecture, improving computational efficiency and reducing data traffic to cloud services for information processing and storage. The proposed opportunistic music crowdsensing orchestration constitutes a categorical step toward aggregated music composition and sharing within the network. We also discuss an analytical case study of music crowdsensing challenges, clarify the unique features, and demonstrate the edge-cloud computing paradigm along with deliberate outcomes. The requirement for a four-layer unified crowdsensing archetype is discussed. The data transmission time, power, and relevant energy consumption of the proposed system are analyzed.
Clustering Categorical Data Using Silhouette Coefficient as a Relocating Measure
Cluster analysis is an unsupervised learning method that constitutes a cornerstone of an intelligent data analysis process. Clustering categorical data is an important research area in data mining. In this paper we propose a novel algorithm to cluster categorical data. Based on the minimum dissimilarity value, objects are grouped into clusters. In the merging process, the objects are relocated using the silhouette coefficient. Experimental results show that the proposed method is efficient.
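For reference, the silhouette coefficient can be computed from any dissimilarity measure, for example the simple matching dissimilarity that is natural for categorical attributes; the sketch below is a generic per-object computation, not the paper's exact relocation procedure.

```python
import numpy as np

def matching_dissimilarity(a, b):
    """Fraction of categorical attributes on which two objects disagree."""
    return np.mean([x != y for x, y in zip(a, b)])

def silhouette(obj_index, labels, data):
    """Silhouette coefficient s = (b - a) / max(a, b) for one object.
    (Singleton clusters are not handled specially in this sketch.)"""
    own = labels[obj_index]
    dists = {}
    for j, x in enumerate(data):
        if j == obj_index:
            continue
        dists.setdefault(labels[j], []).append(matching_dissimilarity(data[obj_index], x))
    a = np.mean(dists[own]) if own in dists else 0.0          # cohesion within own cluster
    others = [np.mean(v) for c, v in dists.items() if c != own]
    if not others:
        return 0.0
    b = min(others)                                            # separation from nearest cluster
    return (b - a) / max(a, b) if max(a, b) > 0 else 0.0

data = [("red", "small"), ("red", "large"), ("blue", "small"), ("blue", "large")]
labels = [0, 0, 1, 1]
print([round(silhouette(i, labels, data), 2) for i in range(len(data))])
```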
A CONTINUITY CORRECTION FOR DISCRETE BARRIER OPTIONS
The payoff of a barrier option depends on whether or not a specified asset price, index, or rate reaches a specified level during the life of the option. Most models for pricing barrier options assume continuous monitoring of the barrier; under this assumption, the option can often be priced in closed form. Many (if not most) real contracts with barrier provisions specify discrete monitoring instants; there are essentially no formulas for pricing these options, and even numerical pricing is difficult. We show, however, that discrete barrier options can be priced with remarkable accuracy using continuous barrier formulas by applying a simple continuity correction to the barrier. The correction shifts the barrier away from the underlying by a factor of exp(βσ√Δt), where β ≈ 0.5826, σ is the underlying volatility, and Δt is the time between monitoring instants. The correction is justified both theoretically and experimentally.
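Applying the correction in practice only requires shifting the barrier before calling an existing continuous-barrier pricer; the sketch below does exactly that, with price_continuous_up_and_out left as a hypothetical stand-in for whichever closed-form routine is available.

```python
import math

BETA = 0.5826  # constant from the continuity correction

def adjusted_barrier(H, spot, sigma, dt):
    """Shift a discretely monitored barrier so that a continuous-barrier formula
    can be reused: move it *away* from the underlying by exp(BETA * sigma * sqrt(dt))."""
    shift = math.exp(BETA * sigma * math.sqrt(dt))
    return H * shift if H > spot else H / shift   # up-barrier moves up, down-barrier moves down

# Example: an up-and-out barrier at 120, spot 100, 25% volatility, daily monitoring.
H_adj = adjusted_barrier(120.0, 100.0, 0.25, 1.0 / 252.0)
print(f"use barrier {H_adj:.4f} in the continuous-barrier formula")
# price = price_continuous_up_and_out(S0=100, K=..., H=H_adj, ...)  # hypothetical pricer
```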
Light-Weight Head Pose Invariant Gaze Tracking
Unconstrained remote gaze tracking using off-the-shelf cameras is a challenging problem. Recently, promising algorithms for appearance-based gaze estimation using convolutional neural networks (CNN) have been proposed. Improving their robustness to various confounding factors, including variable head pose, subject identity, illumination and image quality, remains an open problem. In this work, we study the effect of variable head pose on machine learning regressors trained to estimate gaze direction. We propose a novel branched CNN architecture that improves the robustness of gaze classifiers to variable head pose, without increasing computational cost. We also present various procedures to effectively train our gaze network, including transfer learning from the more closely related task of object viewpoint estimation and from a large high-fidelity synthetic gaze dataset, which enable our ten-times-faster gaze network to achieve accuracy competitive with its current state-of-the-art direct competitor.
Dynamic FAUST: Registering Human Bodies in Motion
While the ready availability of 3D scan data has influenced research throughout computer vision, less attention has focused on 4D data, that is 3D scans of moving non-rigid objects, captured over time. To be useful for vision research, such 4D scans need to be registered, or aligned, to a common topology. Consequently, extending mesh registration methods to 4D is important. Unfortunately, no ground-truth datasets are available for quantitative evaluation and comparison of 4D registration methods. To address this we create a novel dataset of high-resolution 4D scans of human subjects in motion, captured at 60 fps. We propose a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology. The approach exploits consistency in texture over both short and long time intervals and deals with temporal offsets between shape and texture capture. We show how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid. We evaluate the accuracy of our registration and provide a dataset of 40,000 raw and aligned meshes. Dynamic FAUST extends the popular FAUST dataset to dynamic 4D data, and is available for research purposes at http://dfaust.is.tue.mpg.de.
Ambiguity Identification and Measurement in Natural Language Texts
Text ambiguity is one of the most interesting phenomena in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. Several types of ambiguity exist. In the present work we review and compare different approaches to the ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high-quality documents.
Transition density: a new measure of activity in digital circuits
Reliability assessment is an important part of the design process of digital integrated circuits. We observe that a common thread that runs through most causes of run-time failure is the extent of circuit activity, i.e., the rate at which its nodes are switching. We propose a new measure of activity, called the transition density, which may be defined as the "average switching rate" at a circuit node. Based on a stochastic model of logic signals, we also present an algorithm to propagate density values from the primary inputs to internal and output nodes. To illustrate the practical significance of this work, we demonstrate how the density values at internal nodes can be used to study circuit reliability by estimating (1) the average power & ground currents, (2) the average power dissipation, (3) the susceptibility to electromigration failures, and (4) the extent of hot-electron degradation. The density propagation algorithm has been implemented in a prototype density simulator. Using this, we present experimental results to assess the validity and feasibility of the approach. In order to obtain the same circuit activity information by traditional means, the circuit would need to be simulated for thousands of input transitions. Thus this approach is very efficient and makes possible the analysis of VLSI circuits, which are traditionally too big to simulate for long input sequences. Submitted to the IEEE Transactions on Computer-Aided Design, 1991.
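Under the stated stochastic model and assuming independent inputs, a transition on one input of a 2-input AND gate propagates to the output exactly when the other input is 1, which gives D(y) = P(b)D(a) + P(a)D(b); the toy sketch below evaluates this single-gate case and is meant only to convey the flavor of density propagation, not the full algorithm.

```python
def and_gate_density(p_a, d_a, p_b, d_b):
    """Transition density at the output of y = a AND b, assuming independent
    inputs: a transition on one input propagates when the other input is 1,
    so D(y) = P(b)*D(a) + P(a)*D(b)."""
    return p_b * d_a + p_a * d_b

def and_gate_probability(p_a, p_b):
    """Signal (equilibrium) probability of the AND output under independence."""
    return p_a * p_b

# Two inputs, each high half of the time and switching 2 times per unit time.
p_y = and_gate_probability(0.5, 0.5)
d_y = and_gate_density(0.5, 2.0, 0.5, 2.0)
print(p_y, d_y)   # 0.25 and 2.0 transitions per unit time
```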
Constructive hypothesizing, dialogic understanding and the therapist's inner conversation: some ideas about knowing and not knowing in the family therapy session.
The primary tasks of the therapist can be described as listening to what the client says and making space for what the client has not yet said. According to Anderson and Goolishian, the therapist should take a not-knowing stance in this dialogic process. The question remains, however, what not-knowing exactly means. In this article, I will explore this question and I will propose the concept of constructive hypothesizing. Constructive hypothesizing is described as a process in which there is a movement back and forth between knowing and not knowing. Of central importance are creative and dialogic understanding, rather than knowledge. Recommendations are made to ensure the constructive and collaborative use of hypotheses in the therapeutic dialogue.
Epidemiology of methicillin-resistant Staphylococcus aureus bacteremia in Gaborone, Botswana.
This cross-sectional study at a tertiary-care hospital in Botswana from 2000 to 2007 was performed to determine the epidemiologic characteristics of Staphylococcus aureus bacteremia. We identified a high prevalence (11.2% of bacteremia cases) of methicillin-resistant S. aureus (MRSA) bacteremia. MRSA isolates had higher proportions of resistance to commonly used antimicrobials than did methicillin-susceptible isolates, emphasizing the need to revise empiric prescribing practices in Botswana.
Generating Notifications for Missing Actions: Don't Forget to Turn the Lights Off!
We all have experienced forgetting habitual actions among our daily activities. For example, we probably have forgotten to turn the lights off before leaving a room or turn the stove off after cooking. In this paper, we propose a solution to the problem of issuing notifications on actions that may be missed. This involves learning about interdependencies between actions and being able to predict an ongoing action while segmenting the input video stream. In order to show a proof of concept, we collected a new egocentric dataset, in which people wear a camera while making lattes. We show promising results on the extremely challenging task of issuing correct and timely reminders. We also show that our model reliably segments the actions, while predicting the ongoing one when only a few frames from the beginning of the action are observed. The overall prediction accuracy is 46.2% when only 10 frames of an action are seen (2/3 of a sec). Moreover, the overall recognition and segmentation accuracy is shown to be 72.7% when the whole activity sequence is observed. Finally, the online prediction and segmentation accuracy is 68.3% when the prediction is made at every time step.
Insulin secretory defect in familial partial lipodystrophy Type 2 and successful long-term treatment with a glucagon-like peptide 1 receptor agonist.
BACKGROUND Familial partial lipodystrophies are rare monogenic disorders that are often associated with diabetes. In such cases, it can be difficult to achieve glycaemic control. CASE REPORT We report a 34-year old woman with familial partial lipodystrophy type 2 (Dunnigan) and diabetes; her hyperglycaemia persisted despite metformin treatment. A combined intravenous glucose tolerance-euglycaemic clamp test showed severe insulin resistance, as expected, but also showed strongly diminished first-phase insulin secretion. After the latter finding, we added the glucagon-like peptide-1 receptor agonist liraglutide to the patient's treatment regimen, which rapidly normalized plasma glucose levels. HbA1c values <42 mmol/mol (6.0%) have now been maintained for over 4 years. CONCLUSION This case suggests that a glucagon-like peptide-1 receptor agonist may be a useful component of glucose-lowering therapy in individuals with familial partial lipodystrophy and diabetes mellitus.
Diffusion of innovations in service organizations: systematic review and recommendations.
This article summarizes an extensive literature review addressing the question, How can we spread and sustain innovations in health service delivery and organization? It considers both content (defining and measuring the diffusion of innovation in organizations) and process (reviewing the literature in a systematic and reproducible way). This article discusses (1) a parsimonious and evidence-based model for considering the diffusion of innovations in health service organizations, (2) clear knowledge gaps where further research should be focused, and (3) a robust and transferable methodology for systematically reviewing health service policy and management. Both the model and the method should be tested more widely in a range of contexts.
Clustering techniques: The user's dilemma
Numerous papers on clustering techniques and their applications in engineering, medical, and biological areas have appeared in the pattern recognition literature during the past decade. This paper attempts to set some guidelines for a potential user of a clustering technique. We examine eight clustering programs which are representative of the various available techniques and compare their performances from several points of view. A formal comparative analysis is also performed with a portion of Munson's handprinted character data set. We believe that an understanding of the intrinsic characteristics of a clustering technique is essential to the intelligent application of the technique. Further, the output of a clustering program, along with whatever information a user may have about the data set, should be used together to form hypotheses about the structure of the data set. Keywords—Clustering technique, Patterns, Features, Squared error, Distance measures, Dendrogram, Similarity matrix, Hierarchical clustering, Minimum spanning tree, Admissibility criteria
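As a small illustration of the kind of comparison discussed above, the sketch below clusters the same synthetic data with a partitional method (k-means) and a hierarchical method (single linkage) and compares them by within-cluster squared error. It is not one of the eight programs evaluated in the paper; the data and the choice of error criterion are assumptions made only for illustration.

```python
# Minimal sketch (not the eight programs studied in the paper): compare a k-means
# partition with a single-linkage hierarchical clustering of the same data by their
# within-cluster squared error. The synthetic two-blob data below is illustrative only.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

def squared_error(X, labels):
    """Sum of squared distances of each point to its cluster centroid."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
single_labels = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)

print("k-means squared error:       ", squared_error(X, kmeans_labels))
print("single-linkage squared error:", squared_error(X, single_labels))
```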
[Figure: media pipeline — Media Source, Media Encoder, Packetization, Internet, Media Renderer, Media Decoder]
WebRTC has quickly become popular as a video conferencing platform, partly because many browsers support it. WebRTC utilizes the Google Congestion Control (GCC) algorithm to provide congestion control for real-time communications over UDP. The performance during a WebRTC call may be influenced by several factors, including the underlying WebRTC implementation, the device and network characteristics, and the network topology. In this paper, we perform a thorough performance evaluation of WebRTC both in emulated synthetic network conditions and in real wired and wireless networks. Our evaluation shows that WebRTC streams have a slightly higher priority than TCP flows when competing with cross traffic. In general, while WebRTC performed as expected in several of the considered scenarios, we observed important cases where there is room for improvement. These include the wireless domain and the newly added support for the VP9 and H.264 video codecs, which does not perform as expected.
Do baseline client characteristics predict the therapeutic alliance in the treatment of schizophrenia?
This study examined clinical predictors of client and therapist alliance ratings early in therapy, the relationship between client and therapist alliance ratings, and the psychometric properties of the Working Alliance Inventory in individuals with schizophrenia receiving manual-based treatment. Assessment of clinical symptoms and social functioning were conducted at baseline, and alliance ratings were obtained at 5 weeks. The Working Alliance Inventory had high internal consistency, but there were low correlations between client and therapist ratings. Results also indicated that social functioning and the activation and autistic preoccupation factors on the Positive and Negative Syndrome Scale were significant predictors of therapists' alliance ratings. There were no significant relationships between clinical predictors and clients' therapeutic alliance ratings. The findings indicate that client interpersonal factors are significant predictors of the therapist-rated alliance in the treatment of schizophrenia. Low correlations between clients' and therapists' ratings of the alliance should be examined in future research.
Development of a Multi-fingered Robot Hand with Softness-changeable Skin Mechanism
This paper develops a multi-fingered robot hand with a skin mechanism that enables softness change. We show how the softness of the skin affects grasping and manipulation: elastic skin provides stable grasping but sacrifices precise manipulation, while hard skin provides precise manipulation but sacrifices grasp stability. We therefore change the softness of the skin according to the situation, the object, and so on. In this paper, we develop a novel human-like robot hand with a softness-changeable skin mechanism.
Path-guided artificial potential fields with stochastic reachable sets for motion planning in highly dynamic environments
Highly dynamic environments pose a particular challenge for motion planning due to the need for constant evaluation or validation of plans. However, due to the wide range of applications, an algorithm to safely plan in the presence of moving obstacles is required. In this paper, we propose a novel technique that provides computationally efficient planning solutions in environments with static obstacles and several dynamic obstacles with stochastic motions. Path-Guided APF-SR works by first applying a sampling-based technique to identify a valid, collision-free path in the presence of static obstacles. Then, an artificial potential field planning method is used to safely navigate through the moving obstacles using the path as an attractive intermediate goal bias. In order to improve the safety of the artificial potential field, repulsive potential fields around moving obstacles are calculated with stochastic reachable sets, a method previously shown to significantly improve planning success in highly dynamic environments. We show that Path-Guided APF-SR outperforms other methods that have high planning success in environments with 300 stochastically moving obstacles. Furthermore, planning is achievable in environments in which previously developed methods have failed.
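A minimal sketch of the path-guided potential-field idea follows: the robot is attracted toward the next waypoint of a precomputed collision-free path and repelled by moving obstacles. A fixed uncertainty radius crudely stands in for the stochastic reachable sets used in the paper; the gains, distances, and step rule are hypothetical, so this is an illustration of the concept rather than the authors' implementation.

```python
# Illustrative sketch of the path-guided potential-field idea (not the authors'
# implementation): attraction toward the next waypoint of a precomputed path,
# repulsion from moving obstacles. "uncertainty_radius" is a crude stand-in for
# the stochastic reachable sets; all constants are hypothetical.
import numpy as np

def apf_step(robot, waypoint, obstacles, k_att=1.0, k_rep=2.0,
             influence=2.0, uncertainty_radius=0.5, step=0.1):
    robot, waypoint = np.asarray(robot, float), np.asarray(waypoint, float)
    force = k_att * (waypoint - robot)                       # attractive pull toward the path
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        center_dist = np.linalg.norm(diff) + 1e-9
        dist = max(center_dist - uncertainty_radius, 1e-6)   # distance to the inflated obstacle
        if dist < influence:                                  # repel only inside the influence zone
            force += k_rep * (1.0 / dist - 1.0 / influence) / dist**2 * (diff / center_dist)
    return robot + step * force / (np.linalg.norm(force) + 1e-9)

# One planning step toward the waypoint (2.0, 2.0) with a single moving obstacle.
print(apf_step(robot=(0.0, 0.0), waypoint=(2.0, 2.0), obstacles=[(1.0, 1.2)]))
```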
The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems
Impacts of Organizational Capabilities In Information Security
Purpose – This research aims to examine the relationship between information security strategy and organization performance, with organizational capabilities as important factors influencing successful implementation of information security strategy and organization performance. Design/methodology/approach – Based on existing literature in strategic management and information security, a theoretical model was proposed and validated. A self-administered survey instrument was developed to collect empirical data. Structural equation modeling was used to test hypotheses and to fit the theoretical model. Findings – Evidence suggests that organizational capabilities, encompassing the ability to develop high-quality situational awareness of the current and future threat environment, the ability to possess appropriate means, and the ability to orchestrate the means to respond to information security threats, are positively associated with effective implementation of information security strategy, which in turn positively affects organization performance. However, there is no significant relationship between decision making and information security strategy implementation success. Research limitations/implications – The study provides a starting point for further research on the role of decision-making in information security. Practical implications – Findings are expected to yield practical value for business leaders in understanding the viable predisposition of organizational capabilities in the context of information security, thus enabling firms to focus on acquiring the ones indispensable for improving organization performance. Originality/value – This study provides the body of knowledge with an empirical analysis of organization’s information security capabilities as an aggregation of sense making, decision-making, asset availability, and operations management constructs.
Global optimization of cerebral cortex layout.
Functional areas of mammalian cerebral cortex seem positioned to minimize costs of their interconnections, down to a best-in-a-billion optimality level. The optimization problem here, originating in microcircuit design, is: given connections among components, what physical placement of the components on a surface minimizes the total length of connections? Because measuring long-range "wire length" in the cortex is infeasible, a simpler adjacency cost was validated. To deal with incomplete information on brain networks, a size law was developed that predicts optimization patterns in subnetworks. Macaque and cat cortex rank better in this connection optimization than the wiring of comparably structured computer chips, but somewhat worse than the macroeconomic commodity-flow network among U.S. states. However, cortex wiring conforms to the size law better than the macroeconomic patterns do, which may indicate that cortex optimizing mechanisms involve more global processes.
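The underlying placement problem can be illustrated with a toy instance: given a connection matrix among a handful of components, search all placements on a one-dimensional grid for the one minimizing total wire length. Real instances are intractable by brute force, which is one reason the cortex analysis relies on a simpler adjacency cost; the connection matrix below is hypothetical.

```python
# Toy illustration of the component-placement problem described above (not the
# authors' cortex analysis): given a connection matrix among components, search over
# all placements on a small 1-D grid for the one minimizing total wire length.
from itertools import permutations
import numpy as np

connections = np.array([[0, 1, 1, 0],     # hypothetical connection counts among 4 components
                        [1, 0, 0, 1],
                        [1, 0, 0, 1],
                        [0, 1, 1, 0]])
positions = [0, 1, 2, 3]                  # slots on a 1-D grid

def wire_length(placement):
    """Total connection length: sum over connected pairs of the distance between their slots."""
    n = len(placement)
    return sum(connections[i, j] * abs(placement[i] - placement[j])
               for i in range(n) for j in range(i + 1, n))

best = min(permutations(positions), key=wire_length)
print("best placement:", best, "total length:", wire_length(best))
```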
Language Model Based Grammatical Error Correction without Annotated Training Data
Since the end of the CoNLL-2014 shared task on grammatical error correction (GEC), research into language model (LM) based approaches to GEC has largely stagnated. In this paper, we re-examine LMs in GEC and show that it is entirely possible to build a simple system that not only requires minimal annotated data (∼1000 sentences), but is also fairly competitive with several state-of-the-art systems. This approach should be of particular interest for languages where very little annotated training data exists, although we also hope to use it as a baseline to motivate future research.
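The general language-model-scoring idea, independent of the paper's specific system, can be sketched as follows: propose simple candidate edits (here, confusable-word substitutions) and keep the candidate the LM scores as most fluent. The confusion set, the example sentence, and the use of GPT-2 via the Hugging Face transformers library are assumptions made for illustration.

```python
# Minimal sketch of the general LM-scoring idea (not the system in the paper):
# propose candidate edits from a small confusion set and keep the candidate that a
# pretrained LM scores as most probable. GPT-2 is used purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Hypothetical confusion set; a real system would cover many more error types.
CONFUSIONS = {"their": ["there", "they're"], "there": ["their"], "a": ["an"], "an": ["a"]}

def sentence_loss(text):
    """Average token negative log-likelihood under the LM (lower = more fluent)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def correct(sentence):
    words = sentence.split()
    candidates = [sentence]
    for i, w in enumerate(words):
        for alt in CONFUSIONS.get(w.lower(), []):
            candidates.append(" ".join(words[:i] + [alt] + words[i + 1:]))
    return min(candidates, key=sentence_loss)

print(correct("The results of there experiment were clear ."))
```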
Ways of Applying Artificial Intelligence in Software Engineering
As Artificial Intelligence (AI) techniques become more powerful and easier to use, they are increasingly deployed as key components of modern software systems. While this enables new functionality and often allows better adaptation to user needs, it also creates additional problems for software engineers and exposes companies to new risks. Some work has been done to better understand the interaction between Software Engineering and AI, but we lack methods to classify ways of applying AI in software systems and to analyse and understand the risks this poses. Only by doing so can we devise tools and solutions to help mitigate them. This paper presents the AI in SE Application Levels (AI-SEAL) taxonomy, which categorises applications according to their point of application, the type of AI technology used, and the automation level allowed. We show the usefulness of this taxonomy by classifying 15 papers from previous editions of the RAISE workshop. Results show that the taxonomy allows classification of distinct AI applications and provides insights concerning the risks associated with them. We argue that this will be important for companies in deciding how to apply AI in their software applications and to create strategies for its use.
The Graph Neural Network Model
Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.
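A minimal sketch in the spirit of this fixed-point formulation is shown below: node states are updated from neighbor states until they stop changing, and an output function then maps the state of any node n to a point in R^m. The update rule, the contraction scaling, and all parameter shapes are simplifications chosen for illustration, not the exact equations of the paper.

```python
# Minimal sketch in the spirit of the GNN model described above (not the exact
# formulation in the paper): node states are iterated toward a fixed point using
# neighbor aggregation, then a readout maps a chosen node's state to R^m.
import numpy as np

def gnn_forward(adj, features, W_state, W_out, iters=50, tol=1e-5):
    """adj: (N, N) adjacency matrix; features: (N, d) node labels/features.
    W_state: (d, d) state-update parameters; W_out: (d, m) readout parameters."""
    state = features.copy()
    for _ in range(iters):
        # Each node aggregates its neighbors' states, mixed with its own features;
        # the 0.1 scaling keeps the update roughly contractive in this toy setting.
        new_state = np.tanh((adj @ state) @ W_state * 0.1 + features)
        if np.abs(new_state - state).max() < tol:     # approximate fixed point reached
            break
        state = new_state
    return state @ W_out                              # one output vector per node

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # a 3-node chain graph
features = rng.normal(size=(3, 4))
out = gnn_forward(adj, features, rng.normal(size=(4, 4)), rng.normal(size=(4, 2)))
print(out[1])   # tau(G, n) for node n = 1, a point in R^2
```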
Prospects of encoding Java source code in XML
Currently, the only standard format for representing Java source code is plain text-based. This paper explores the prospects of using Extensible Markup Language (XML) for this purpose. XML enables the leverage of tools and standards more powerful than those available for plain-text formats, while retaining broad accessibility. The paper outlines the potential benefits of future XML grammars that would allow for improved code structure and querying possibilities; code extensions, construction, and formatting; and referencing parts of code. It also introduces the concept of grammar levels and argues for the inclusion of several grammar levels into a common framework. It discusses conversions between grammars and some practical grammar design issues. Keywords—XML, Java, source code, parsing, code formatting
1/f noise of NMOS and PMOS transistors and their implications to design of voltage controlled oscillators
Low-frequency noise of NMOS and PMOS transistors in a 0.25 μm foundry CMOS process with a pure SiO2 gate oxide layer is characterized for the entire range of MOSFET operation. Surprisingly, the measurement results showed that surface-channel PMOS transistors have about an order of magnitude lower 1/f noise than NMOS transistors, especially at V_GS − V_TH less than ~0.4 V. The data were used to show that a VCO using all surface-channel PMOS transistors can have ~14 dB lower close-in phase noise compared to that for a VCO using all surface-channel NMOS transistors.
A common role of insula in feelings, empathy and uncertainty
Although accumulating evidence highlights a crucial role of the insular cortex in feelings, empathy and processing uncertainty in the context of decision making, neuroscientific models of affective learning and decision making have mostly focused on structures such as the amygdala and the striatum. Here, we propose a unifying model in which the insular cortex supports different levels of representation of current and predictive states, allowing for error-based learning of both feeling states and uncertainty. This information is then integrated into a general subjective feeling state which is modulated by individual preferences such as risk aversion and contextual appraisal. Such mechanisms could facilitate affective learning and regulation of body homeostasis, and could also guide decision making in complex and uncertain environments.
A Sensorimotor Circuit in Mouse Cortex for Visual Flow Predictions
The cortex is organized as a hierarchical processing structure. Feedback from higher levels of the hierarchy, known as top-down signals, has been shown to be involved in attentional and contextual modulation of sensory responses. Here we argue that top-down input to the primary visual cortex (V1) from A24b and the adjacent secondary motor cortex (M2) signals a prediction of visual flow based on motor output. A24b/M2 sends a dense and topographically organized projection to V1 that targets most neurons in layer 2/3. By imaging the activity of A24b/M2 axons in V1 of mice learning to navigate a 2D virtual environment, we found that their activity was strongly correlated with locomotion and the resulting visual flow feedback in an experience-dependent manner. When mice were trained to navigate a left-right inverted virtual environment, correlations of neural activity with behavior reversed to match visual flow. These findings are consistent with a predictive coding interpretation of visual processing.
B-cell exhaustion in HIV infection: the role of immune activation.
PURPOSE OF REVIEW To discuss a component of the pathogenic mechanisms of HIV infection in the context of phenotypic and functional alterations in B cells that are due to persistent viral replication leading to aberrant immune activation and cellular exhaustion. We explore how B-cell exhaustion arises during persistent viremia and how it compares with T-cell exhaustion and similar B-cell alterations in other diseases. RECENT FINDINGS HIV-associated B-cell exhaustion was first described in 2008, soon after the demonstration of persistent virus-induced T-cell exhaustion, as well as the identification of a subset of B cells in tonsil tissues with immunoregulatory features similar to those observed in T-cell exhaustion. Our understanding of B-cell exhaustion has since expanded in two important areas: the role of inhibitory receptors in the unresponsiveness of exhausted B cells and the increasing evidence that similar B cells are found in other diseases that are associated with aberrant immune activation and inflammation. SUMMARY The phenomenon of B-cell exhaustion is now well established in HIV infection and other diseases characterized by immune activation. Over the coming years, it will be important to understand how cellular exhaustion affects the capacity of the immune system to respond to persisting primary pathogens, as well as to other microbial antigens, whether encountered as secondary infections or following immunization.
EVALUATION OF BUCKET CAPACITY, DIGGING FORCE CALCULATIONS AND STATIC FORCE ANALYSIS OF MINI HYDRAULIC BACKHOE EXCAVATOR
The rapid growth of the earth-moving machinery industry is driven by high-performance construction machines with complex mechanisms and by the automation of construction activity. The design of the backhoe link mechanism is a critical task with respect to the digging forces developed by the actuators during the digging operation. The digging forces developed by the actuators must be greater than the resistive forces offered by the terrain to be excavated. This paper focuses on methods for evaluating bucket capacity and the digging forces required to excavate terrain in light-duty construction work. The method predicts digging forces and can be applied to autonomous excavation. The evaluated digging forces can be used as boundary and loading conditions for finite element analysis of the backhoe mechanism in strength and stress studies. A generalized breakout-force and digging-force model is also developed using the fundamentals of backhoe kinematics in a robotics context. An analytical approach is provided for static force analysis of the mini hydraulic backhoe excavator attachment.
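As a toy illustration of the static force analysis idea (far simpler than the full linkage treated in the paper), the sketch below applies a single moment balance about the bucket pivot to relate the hydraulic cylinder force to the breakout force available at the bucket tip. All lever arms and forces are hypothetical numbers, not values from the paper.

```python
# Toy illustration of the static force analysis idea (much simplified relative to the
# paper): a moment balance about the bucket pivot relates the hydraulic cylinder force
# to the breakout force available at the bucket tip. All numbers below are hypothetical.
def breakout_force(cylinder_force_n, cylinder_arm_m, tip_arm_m):
    """Breakout force at the bucket tip from moments about the bucket pivot:
    F_cyl * r_cyl = F_tip * r_tip  =>  F_tip = F_cyl * r_cyl / r_tip."""
    return cylinder_force_n * cylinder_arm_m / tip_arm_m

# A 20 kN cylinder acting on a 0.15 m lever arm, bucket tip 0.60 m from the pivot.
f_tip = breakout_force(20_000.0, 0.15, 0.60)
print(f"available breakout force ≈ {f_tip/1000:.1f} kN")   # must exceed soil resistance
```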
Uncertainty in Managers' Reporting Objectives and Investors' Response to Earnings Reports: Evidence from the 2006 Executive Compensation Disclosures
We examine whether the information content of the earnings report, as captured by the earnings response coefficient (ERC), increases when investors’ uncertainty about the manager’s reporting objectives decreases, as predicted in Fischer and Verrecchia (2000). We use the 2006 mandatory compensation disclosures as an instrument to capture a decrease in investors’ uncertainty about managers’ incentives and reporting objectives. Employing a difference-in-differences design and exploiting the staggered adoption of the new rules, we find a statistically and economically significant increase in ERC for treated firms relative to control firms, largely driven by profit firms. Cross-sectional tests suggest that the effect is more pronounced in subsets of firms most affected by the new rules. Our findings represent the first empirical evidence of a role of compensation disclosures in enhancing the information content of financial reports. JEL Classification: G38, G30, G34, M41
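A minimal sketch of a difference-in-differences specification of this kind is shown below, using firm and year fixed effects and standard errors clustered by firm. The variable names, the input file, and the exact regressors are assumptions for illustration, not the paper's actual specification; the coefficient on the interaction of unexpected earnings with the treated-post indicator is the DiD estimate of the change in the ERC.

```python
# Minimal sketch of a difference-in-differences test of an ERC change (variable names
# and the input file are hypothetical; this is not the paper's exact specification):
# regress abnormal returns on unexpected earnings interacted with a post-adoption
# treatment indicator, with firm and year fixed effects and firm-clustered errors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("earnings_panel.csv")        # assumed columns: firm, year, car, ue, treated, post
df["treated_post"] = df["treated"] * df["post"]

model = smf.ols("car ~ ue + treated_post + ue:treated_post + C(firm) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params["ue:treated_post"])       # DiD estimate of the change in the ERC
```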