An integrated approach to optimizing orofacial health, function, and esthetics: a 5-year retrospective study.
The restorative dentist treats patients with needs that often transcend dental disciplines. This dentist will be responsible for long-term dental maintenance and should logically oversee interdisciplinary reconstruction. This case report demonstrates an integrated treatment approach and 5-year retrospective study involving a patient with maxillary anterior excess, pronounced overjet, and advanced periodontitis. Extraction and radical alveolectomy with a removable prosthesis is often considered in such cases; however, the results often fall short of optimal function, esthetics, and general oral health. In this case, a previously undescribed anterior maxillary rotating segmental ostectomy was integrated with restorative, periodontal, and removable full and partial denture procedures.
Investigation of Flow Behavior in Minimum Quantity Lubrication Nozzle for End Milling Processes
Minimum quantity lubrication (MQL) is a sustainable manufacturing technique that has replaced conventional flooded lubrication methods and dry machining. In the MQL technique, the lubricant is sprayed onto the friction surfaces through nozzles by small pneumatically operated pumps. This paper presents an investigation into the flow behavior of the lubricant and air mixture under certain pressures at the tip of a nozzle specially designed for MQL. The nozzle used is an MQL stainless steel nozzle, 6.35 mm in diameter. Computational fluid dynamics is used to determine the flow pattern at the tip of the nozzle, where the lubricant and compressed air are mixed to form a mist. The lubricant volume flow is approximately 0.08 ml/cycle of the pump. A transient, pressure-based, three-dimensional analysis is performed with a viscous, realizable k-ε model. The results are obtained in the form of vector plots and flow fields, which fully capture the flow mixing at the tip of the nozzle. This study provides insight into the flow distribution at the tip of the nozzle for a given pressure to aid modifications in the design of the nozzle for future MQL studies. It also helps determine the correct pressure for the air jet at the nozzle tip.
Numerical solutions for micropolar transport phenomena over a nonlinear stretching sheet
Department of Mathematics, Indian Institute of Technology, Roorkee, India; Department of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, England, UK; Fire Safety Engineering Science Program, Leeds College of Building, Leeds Metropolitan University, North Street, Leeds, LS2 7QT, England ([email protected]; [email protected]); Department of Civil Engineering, Indian Institute of Technology, Roorkee, India
Wideband Fabry–Perot Resonator Antenna With Two Layers of Dielectric Superstrates
This letter presents a new design of a Fabry-Perot resonator antenna (FPRA) with a wide gain bandwidth. A double-layered dielectric superstrate, which produces a reflection phase curve versus frequency with a positive slope, is used as a partially reflective surface (PRS) to enhance the bandwidth of the FPRA. For physical insight, the PRS is analyzed using transmission line theory and the Smith chart. Experimental results demonstrate that the antenna has a 3-dB gain bandwidth over 13.5-17.5 GHz, a relative bandwidth of 25.8%, with a peak gain of 15 dBi. Furthermore, the gain band overlaps well with the impedance band, defined by a reflection coefficient (S11) less than -10 dB.
Context-Aware Collaborative Filtering System: Predicting the User's Preference in the Ubiquitous Computing Environment
In this paper we present a context-aware collaborative filtering system that predicts a user’s preference in different context situations based on past experiences. We extend collaborative filtering techniques so that what other like-minded users have done in similar contexts can be used to predict a user’s preference towards an activity in the current context. Such a system can help predict the user’s behavior in different situations without the user actively defining it. For example, it could recommend activities customized for Bob for the given weather, location, and traveling companion(s), based on what other people like Bob have done in similar contexts.
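The core prediction step such a system performs can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the rating layout (ratings keyed by activity-context pairs), the cosine weighting of like-minded users, and the example data are all assumptions.

```python
def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    du = sum(x * x for x in u.values()) ** 0.5
    dv = sum(x * x for x in v.values()) ** 0.5
    return num / (du * dv) if du and dv else 0.0

def predict(ratings, user, activity, context):
    """Predict `user`'s rating of `activity` in `context` as a
    similarity-weighted average over other users who rated that
    activity in the same context.

    ratings: {user: {(activity, context): rating}}
    """
    target = ratings[user]
    num = den = 0.0
    for other, profile in ratings.items():
        if other == user or (activity, context) not in profile:
            continue
        w = cosine(target, profile)
        num += w * profile[(activity, context)]
        den += abs(w)
    return num / den if den else None

# Hypothetical data: Bob resembles Alice more than Carol, so the
# prediction for (beach, sunny) leans toward Alice's rating.
ratings = {
    "bob":   {("hiking", "sunny"): 5, ("museum", "rainy"): 4},
    "alice": {("hiking", "sunny"): 5, ("museum", "rainy"): 4, ("beach", "sunny"): 5},
    "carol": {("hiking", "sunny"): 1, ("beach", "sunny"): 2},
}
print(predict(ratings, "bob", "beach", "sunny"))
```

A fuller system would also soften the context match (treating "similar" rather than identical contexts as evidence), which this sketch omits for brevity.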
Iterative Side-Channel Cube Attack on KeeLoq
KeeLoq is a 528-round lightweight block cipher with a 64-bit secret key and a 32-bit block length. The cube attack, proposed by Dinur and Shamir, is a relatively new attack method. In this paper, we investigate the security of KeeLoq against the iterative side-channel cube attack, an enhanced attack scheme. Based on the structure of typical block ciphers, we give a model of the iterative side-channel cube attack. Using the traditional single-bit leakage model, we assume that the attacker can obtain exactly one bit of leakage information after round 23. The new attack model costs a data complexity of 2^11.00 chosen plaintexts to recover 23 key bits of KeeLoq. Our attack reduces the key search space to 2^41 by considering an error-free bit from internal states.
The impact of center experience on results of reduced intensity: allogeneic hematopoietic SCT for AML. An analysis from the Acute Leukemia Working Party of the EBMT
Allogeneic hematopoietic SCT with reduced-intensity conditioning (RIC-HSCT) is increasingly adopted for the treatment of older adults with AML. Our goal was to verify, for the first time, whether center experience influences the outcome of RIC-HSCT. Results of 1413 transplantations from HLA-matched related or unrelated donors for adult patients with AML in first CR were analyzed according to the level of center activity. Transplants were performed in 203 European centers between 2001 and 2007. The 2-year probability of leukemia-free survival (LFS) after RIC-HSCT performed in centers with the lowest activity (⩽15 procedures/7 years) was 43±3% compared with 55±2% in the remainder (P<0.001). The incidence of non-relapse mortality (NRM) was 24±3% and 15±1% (P=0.004), whilst the relapse rate was 33±3% and 31±1% (P=0.33), respectively. In a multivariate model, adjusted for other prognostic factors, low RIC-HSCT activity was associated with a decreased chance of LFS (hazard ratio (HR)=0.64; P<0.001) and increased risks of NRM (HR=1.47, P=0.04) and relapse (HR=1.41, P=0.01). Center experience is a very important predictor of outcome and should be considered in future analyses evaluating the results of RIC-HSCT. The reasons why centers with low RIC-HSCT activity have worse outcomes should be further investigated.
Wireless software-defined networks (W-SDNs) and network function virtualization (NFV) for 5G cellular systems: An overview and qualitative evaluation
Cellular network technologies have evolved to support the ever-increasing wireless data traffic that results from the rapidly evolving Internet and widely adopted cloud applications over wireless networks. However, hardware-based designs, which rely on the closed and inflexible architectures of current cellular systems, result in a typical 10-year cycle for a new generation of wireless networks to be standardized and deployed. To overcome this limitation, the concept of software-defined networking (SDN) has been proposed to efficiently create centralized network abstraction with the provisioning of programmability over the entire network. Moreover, the complementary concept of network function virtualization (NFV) has been further proposed to effectively separate the abstraction of functionalities from the hardware by decoupling the data forwarding plane from the control plane. These two concepts provide cellular networks with the flexibility needed to evolve and adapt to the ever-changing network context, and introduce wireless software-defined networks (W-SDNs) for 5G cellular systems. Thus, there is an urgent need to study the fundamental architectural principles underlying a new generation of software-defined cellular networks as well as the enabling technologies that support and manage such an emerging architecture. In this paper, first, the state-of-the-art W-SDN solutions along with their associated NFV techniques are surveyed. Then, the key differences among these W-SDN solutions as well as their limitations are highlighted. To counter those limitations, SoftAir, a new SDN architecture for 5G cellular systems, is introduced.
Automated Generation of Event-Oriented Exploits in Android Hybrid Apps
Recently more and more Android apps integrate the embedded browser, known as “WebView”, to render web pages and run JavaScript code without leaving these apps. WebView provides a powerful feature that allows event handlers defined in the native context (i.e., Java in Android) to handle web events that occur in WebView. However, as shown in prior work, this feature suffers from remote attacks, which we generalize as Event-Oriented Exploits (EOE) in this paper, such that adversaries may remotely access local critical functionalities through event handlers in WebView without any permission or authentication. In this paper, we propose a novel approach, EOEDroid, which can automatically vet event handlers in a given hybrid app using selective symbolic execution and static analysis. If a vulnerability is found, EOEDroid also automatically generates exploit code to help developers and analysts verify the vulnerability. To support exploit code generation, we also systematically study web events, event handlers and their trigger constraints. We evaluated our approach on the 3,652 most popular apps. The results showed that our approach found 97 vulnerabilities in 58 apps, including 2 cross-frame DOM manipulation, 53 phishing, 30 sensitive information leakage, 1 local resource access, and 11 Intent abuse vulnerabilities. We also found a potential backdoor in a high-profile app that could be used to steal users’ sensitive information, such as the IMEI. Even though the developers attempted to close it, EOEDroid found that adversaries were still able to exploit it by triggering two events together and feeding event handlers with well-designed input.
(Im)Partiality, Compassion, and Cross-Cultural Change: Re-Envisioning Political Decision-Making and Free Expression
Past justifications of free expression rely on the crucial role speech plays in deliberative democracies and respecting persons. Beneath each of these justifications lies the common goal of creating greater justice for individuals and groups. Yet 20th century political liberalism limits the kinds of arguments that ought to motivate political decisions. In this paper I explore how an inclusive political decision-making process can bring about a more just world. By relying on personal views and compassion rather than impartiality and reasonability, political actors can engage in a discourse that results in greater understanding among persons and lasting community change. Macalester College Philosophy Dr. William Wilcox 27 April 2014
An MLP based Approach for Recognition of Handwritten 'Bangla' Numerals
The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67%, evaluated after three-fold cross validation of results. It is useful for applications related to OCR of handwritten Bangla digits and can also be extended to OCR of handwritten characters of the Bangla alphabet.
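Two of the feature families named above (centroid and longest-run) can be illustrated on a tiny binary image. This is a hedged Python sketch of one plausible reading, not the paper's exact extractors: the paper derives its 36 longest-run features over image regions in several directions, while this sketch computes only row-wise runs and a single foreground centroid.

```python
def longest_run(bits):
    """Length of the longest consecutive run of 1s in a binary sequence."""
    best = cur = 0
    for b in bits:
        cur = cur + 1 if b else 0
        best = max(best, cur)
    return best

def row_longest_run_features(img):
    """Row-wise longest-run features, normalized by image width.
    An illustrative simplification of the paper's 36 longest-run features."""
    width = len(img[0])
    return [longest_run(row) / width for row in img]

def centroid(img):
    """(row, col) centroid of the foreground pixels, the basis of
    centroid-type features."""
    pts = [(r, c) for r, row in enumerate(img) for c, b in enumerate(row) if b]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# A 3x4 toy "digit" image (1 = ink).
img = [
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
]
print(row_longest_run_features(img))
print(centroid(img))
```

In the paper's pipeline, such per-region values would be concatenated (with the shadow features) into the 76-element vector fed to the MLP.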
Session-based adaptive overload control for secure dynamic Web applications
As dynamic Web content and security capabilities become popular in current Web sites, the performance demand on the application servers that host them increases, sometimes leading these servers to overload. As a result, response times may grow to unacceptable levels and the server may saturate or even crash. In this paper we present a session-based adaptive overload control mechanism based on SSL (secure socket layer) connection differentiation and admission control. SSL connection differentiation is a key factor because the cost of establishing a new SSL connection is much greater than that of establishing a resumed SSL connection (which reuses an existing SSL session on the server). Given this large difference, we have implemented an admission control algorithm that prioritizes resumed SSL connections to maximize performance in session-based environments and dynamically limits the number of new SSL connections accepted, depending on the available resources and the current number of connections in the system, to avoid server overload. To allow the differentiation of resumed SSL connections from new SSL connections, we propose a possible extension of the Java Secure Sockets Extension (JSSE) API. Our evaluation on the Tomcat server demonstrates the benefit of our proposal for preventing server overload.
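The admission policy outlined above (always prefer cheap resumed connections, cap expensive new ones by available capacity) might look like the following minimal Python sketch. The data layout and the `new_conn_limit` parameter are assumptions for illustration, not the paper's implementation; in the real system that limit would be retuned each control interval from observed resource usage.

```python
def admit(connections, capacity, new_conn_limit):
    """Admission decision for a batch of incoming SSL connections.

    Resumed connections (cheap handshake) are admitted first, up to
    `capacity`; new connections (full, expensive handshake) are
    additionally capped by `new_conn_limit`. Returns the admitted
    connections, resumed ones first.
    """
    resumed = [c for c in connections if c["resumed"]]
    fresh = [c for c in connections if not c["resumed"]]
    admitted = resumed[:capacity]
    room = capacity - len(admitted)
    admitted += fresh[:min(room, new_conn_limit)]
    return admitted

# Hypothetical batch: 3 resumed sessions, 5 brand-new handshakes.
incoming = (
    [{"id": i, "resumed": True} for i in range(3)]
    + [{"id": i + 3, "resumed": False} for i in range(5)]
)
accepted = admit(incoming, capacity=5, new_conn_limit=1)
print([c["id"] for c in accepted])
```

Because all resumed connections fit and only one new handshake is allowed, the batch above admits ids 0-3; tightening `capacity` squeezes out the new connections first, which is exactly the session-preserving behavior the mechanism is after.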
Online extrinsic multi-camera calibration using ground plane induced homographies
This paper presents an approach for online estimation of the extrinsic calibration parameters of a multi-camera rig. Given a coarse initial estimate of the parameters, the relative poses between cameras are refined through recursive filtering. The approach is purely vision based and relies on plane-induced homographies between successive frames. Overlapping fields of view are not required. Instead, the ground plane serves as a natural reference object. In contrast to other approaches, motion, relative camera poses, and the ground plane are estimated simultaneously using a single iterated extended Kalman filter. This reduces not only the number of parameters but also the computational complexity. Furthermore, an arbitrary number of cameras can be incorporated. Several experiments on synthetic as well as real data were conducted using a setup of four synchronized wide-angle fisheye cameras mounted on a moving platform. Results were obtained using both a planar and a general motion model with full six degrees of freedom. Additionally, the effects of uncertain intrinsic parameters and nonplanar ground were evaluated experimentally.
Stylize Aesthetic QR Code
With the continued proliferation of smart mobile devices, the Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the visually unpleasant appearance of QR codes, existing works have developed a series of techniques. However, these works still leave much to be desired in areas such as personalization, artistry, and robustness. To address these issues, in this paper, we propose a novel type of aesthetic QR code, the SEE (Stylize aEsthEtic) QR code, and a three-stage approach to automatically produce such robust style-oriented codes. Specifically, in the first stage, we propose a method to generate an optimized baseline aesthetic QR code, which reduces the visual contrast between the noise-like black/white modules and the blended image. In the second stage, to obtain an art-style QR code, we tailor an appropriate neural style transformation network to endow the baseline aesthetic QR code with artistic elements. In the third stage, we design an error-correction mechanism that balances two competing terms, visual quality and readability, to ensure robust performance. Extensive experiments demonstrate that the SEE QR code has high quality in terms of both visual appearance and robustness, and also offers a greater variety of personalized choices to users.
Is Blood Eosinophil Count a Predictor of Response to Bronchodilators in Chronic Obstructive Pulmonary Disease? Results from Post Hoc Subgroup Analyses
BACKGROUND Chronic obstructive pulmonary disease (COPD) patients with blood eosinophil (EOS) count ≥ 2% benefit from exacerbation reductions with inhaled corticosteroids (ICSs). We conducted post hoc analyses to determine if EOS count ≥ 2% is a marker for greater responsiveness to the bronchodilators umeclidinium (UMEC; long-acting muscarinic antagonist), vilanterol (VI; long-acting β2-agonist) or UMEC/VI combination. METHODS Effects of once-daily UMEC/VI 62.5/25, UMEC 62.5 and VI 25 µg versus placebo on trough forced expiratory volume in one second (FEV1), Transition Dyspnoea Index (TDI), St George's Respiratory Questionnaire (SGRQ) scores and adverse event (AE) incidences in four completed, 6-month studies were assessed by EOS subgroup. Trough FEV1 was also evaluated by ICS use and EOS subgroup. Analyses were performed using a repeated measures model. RESULTS At baseline, 2437 of 4647 (52%) patients had EOS count ≥ 2%. Overall, ≈ 50% of patients used ICSs. At day 169, no notable variations were observed in trough FEV1 least squares mean differences between EOS subgroups versus placebo for UMEC/VI, UMEC and VI; results according to ICS use were similar. No differences were reported between EOS subgroups in TDI and SGRQ scores on day 168, or for incidences of AEs, serious AEs and AEs leading to withdrawal. CONCLUSIONS Response to UMEC/VI, UMEC and VI in terms of trough FEV1, dyspnoea and health-related quality of life was similar for COPD patients with baseline EOS counts ≥ 2 or <2%. EOS count did not appear to predict bronchodilator response in either ICS users or non-users.
Consumer Socialization of Children : A Retrospective Look at Twenty-Five Years of Research
Twenty-five years of consumer socialization research have yielded an impressive set of findings. The purpose of our article is to review these findings and assess what we know about children’s development as consumers. Our focus is on the developmental sequence characterizing the growth of consumer knowledge, skills, and values as children mature throughout childhood and adolescence. In doing so, we present a conceptual framework for understanding consumer socialization as a series of stages, with transitions between stages occurring as children grow older and mature in cognitive and social terms. We then review empirical findings illustrating these stages, including children’s knowledge of products, brands, advertising, shopping, pricing, decision-making strategies, parental influence strategies, and consumption motives and values. Based on the evidence reviewed, implications are drawn for future theoretical and empirical development in the field of consumer socialization.
FooPar: A Functional Object Oriented Parallel Framework in Scala
We present FooPar, an extension for highly efficient parallel computing in the multi-paradigm programming language Scala. Scala offers concise and clean syntax and integrates functional programming features. Our framework FooPar combines these features with parallel computing techniques. FooPar has a modular design and supports easy access to different communication backends for distributed memory architectures as well as to high-performance math libraries. In this article we use it to parallelize matrix-matrix multiplication and show its scalability through an isoefficiency analysis. In addition, results based on an empirical analysis on two supercomputers are given. We achieve close-to-optimal performance with respect to theoretical peak performance. Based on this result, we conclude that FooPar allows full access to Scala's design features without suffering performance drops compared to implementations purely based on C and MPI.
A survey of vertical handover decision algorithms in Fourth Generation heterogeneous wireless networks
Vertical handover decision (VHD) algorithms are essential components of the architecture of the forthcoming Fourth Generation (4G) heterogeneous wireless networks. These algorithms need to be designed to provide the required Quality of Service (QoS) to a wide range of applications while allowing seamless roaming among a multitude of access network technologies. In this paper, we present a comprehensive survey of the VHD algorithms designed to satisfy these requirements. To offer a systematic comparison, we categorize the algorithms into four groups based on the main handover decision criterion used. Also, to evaluate tradeoffs between their complexity of implementation and efficiency, we discuss three representative VHD algorithms in each group.
Effectiveness and Efficiency of Open Relation Extraction
A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.
Pre-hospital electrocardiogram triage with tele-cardiology support is associated with shorter time-to-balloon and higher rates of timely reperfusion even in rural areas: data from the Bari-Barletta/Andria/Trani public emergency medical service 118 registry on primary angioplasty in ST-elevation myocardial infarction
BACKGROUND We report the preliminary data from a regional registry on ST-elevation myocardial infarction (STEMI) patients treated with primary angioplasty in Apulia, Italy; the region is covered by a single public health-care service, a single public emergency medical service (EMS), and a single tele-medicine service provider. METHODS Two hundred and ninety-seven consecutive patients with STEMI transferred by regional free public EMS 1-1-8 for primary-PCI were enrolled in the study; 123 underwent pre-hospital electrocardiograms (ECGs) triage by tele-cardiology support and directly referred for primary-PCI, those remaining were just transferred by 1-1-8 ambulances for primary percutaneous coronary intervention (PCI) (diagnosis not based on tele-medicine ECG; already hospitalised patients, emergency-room without tele-medicine support). Time from first ECG diagnostic for STEMI to balloon was recorded; a time-to-balloon <1 h was considered as optimal and patients as timely treated. RESULTS Mean time-to-balloon with pre-hospital triage and tele-cardiology ECG was significantly shorter (0:41 ± 0:17 vs 1:34 ± 1:11 h, p<0.001, -0:53 h, -56%) and rates of patients timely treated higher (85% vs 35%, p<0.001, +141%), both in patients from the 'inner' zone closer to PCI catheterisation laboratories (0:34 ± 0:13 vs 0:54 ± 0:30 h, p<0.001; 96% vs 77%, p<0.01, +30%) and in the 'outer' zone (0:52 ± 0:17 vs 1:41 ± 1:14 h, p<0.001; 69% vs 29%, p<0.001, +138%). Results remained significant even after multivariable analysis (odds ratio for time-to-balloon 0.71, 95% confidence interval (CI) 0.63-0.80, p<0.001; 1.39, 95% CI 1.25-1.55, p<0.001, for timely primary-PCI). CONCLUSIONS Pre-hospital triage with tele-cardiology ECG in an EMS registry from an area with more than one and a half million inhabitants was associated with shorter time-to-balloon and higher rates of timely treated patients, even in 'rural' areas.
IoT based urban climate monitoring using Raspberry Pi
The Internet of Things is the web of physical objects containing embedded technology that enables man-to-machine and machine-to-machine communication. This paper proposes a stand-alone system that provides a dynamic datasheet of the parameters of the city environment. The system uses a low-cost, low-power ARM-based minicomputer, the Raspberry Pi, which can communicate through a Local Area Network (LAN) or an external Wi-Fi module. Commands from the user are processed on the Raspberry Pi using the Python language. The data can be monitored with other terminal devices such as laptops, smartphones, and tablets with internet connectivity. This framework gives access to real-time information about an urban environment, including the parameters temperature, humidity, pressure, CO, and harmful air pollutants.
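A minimal sketch of the sensing loop such a system could run on the Raspberry Pi. Here `read_sensors` is a hypothetical stand-in for the actual GPIO/I2C driver calls, and the JSON record is one plausible format for serving readings to LAN or Wi-Fi clients; neither is taken from the paper.

```python
import json
import time

def read_sensors():
    """Hypothetical stand-in for real driver calls (e.g. temperature,
    humidity, pressure and gas sensors on the Pi's GPIO/I2C bus).
    Returns one snapshot of the monitored urban parameters."""
    return {
        "temperature_c": 24.8,
        "humidity_pct": 61.0,
        "pressure_hpa": 1012.3,
        "co_ppm": 0.4,
    }

def snapshot():
    """One timestamped JSON record, ready to serve to a laptop,
    smartphone, or tablet over the network."""
    rec = dict(read_sensors())
    rec["timestamp"] = int(time.time())
    return json.dumps(rec)

print(snapshot())
```

In a deployment, `snapshot()` would be called periodically and exposed by a small HTTP endpoint so any internet-connected terminal device can poll the live datasheet.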
Joint Object Segmentation and Depth Upsampling
With the advent of powerful ranging and visual sensors, nowadays, it is convenient to collect sparse 3-D point clouds and aligned high-resolution images. Benefiting from this convenience, this letter proposes a joint method to perform both depth-assisted object-level image segmentation and image-guided depth upsampling. To this end, we formulate these two tasks together as a bi-task labeling problem, defined in a Markov random field. An alternating direction method (ADM) is adopted for the joint inference, solving each sub-problem alternately. More specifically, the sub-problem of image segmentation is solved by Graph Cuts, which attains discrete object labels efficiently. Depth upsampling is addressed via solving a linear system that recovers continuous depth values. By this joint scheme, robust object segmentation results and high-quality dense depth maps are achieved. The proposed method is applied to the challenging KITTI vision benchmark suite, as well as the Leuven dataset for validation. Comparative experiments show that our method outperforms stand-alone approaches.
Acquisition of the constraints on wanna contraction by advanced second language learners : Universal Grammar and imperfect knowledge
Contraction of want to to wanna is subject to constraints which have been related to the operation of Universal Grammar. Contraction appears to be blocked when the trace of an extracted wh-word intervenes. Evidence for knowledge of these constraints by young English-speaking children has been taken to show the operation of Universal Grammar in early child language acquisition. The present study investigates knowledge of these constraints in adults, both English native speakers and advanced Korean learners of English. The results of three experiments, using elicited production, oral repair, and grammaticality judgements, confirmed native speaker knowledge of the constraints. A second process of phonological elision may also operate to produce wanna. Learners also showed some differentiation of contexts, but much less clearly than native speakers. We speculate that non-natives may be using rules of complement selection, rather than the constraints of UG, to control contraction. Introduction: wanna contraction and language learnability. In English, want to can be contracted to wanna, but not invariably. As first observed by Lakoff (1970), in examples such as (1), in which the object of the infinitival complement of want has been extracted by wh-movement, contraction is possible, but not in (2), in which the subject of the infinitival complement is extracted from the position between want and to. We shall call examples like (1) "subject extraction questions" (SEQ) and examples like (2) "object extraction questions" (OEQ).
Dose and polymorphic genes xrcc1, xrcc3, gst play a role in the risk of developing erythema in breast cancer patients following single shot partial breast irradiation after conservative surgery
To evaluate the association between polymorphisms in DNA repair and oxidative stress genes, together with the mean dose to the whole breast, and acute skin reactions (erythema) in breast cancer (BC) patients following single shot partial breast irradiation (SSPBI) after breast conservative surgery. Acute toxicity was assessed using version 3 criteria. Single nucleotide polymorphisms (SNPs) were examined in the following genes: XRCC1 (Arg399Gln/Arg194Trp), XRCC3 (A4541G-5'UTR/Thr241Met), GSTP1 (Ile105Val), GSTA1 and RAD51 (untranslated region). SNPs were determined in 57 BC patients by Pyrosequencing analysis. Univariate (ORs and 95% CI) and logistic multivariate analyses (MVA) were performed to correlate polymorphic genes with the risk of developing acute skin reactions to radiotherapy. After SSPBI on the tumour bed following conservative surgery, grade 1 or 2 acute erythema was observed in 19 patients (33%). Univariate analysis indicated a significantly higher risk of developing erythema in patients with the polymorphic variants wt XRCC1 Arg194Trp, mut/het XRCC3 Thr241Met, and wt/het XRCC3 A4541G-5'UTR. Similarly, a higher erythema rate was also found in the presence of mut/het XRCC1 Arg194Trp or wt GSTA1, whereas a lower erythema rate was observed in patients with mut/het XRCC1 Arg194Trp or wt XRCC1 Arg399Gln. The mean dose to the whole breast (p = 0.002), the presence of either mut/het XRCC1 Arg194Trp or wt XRCC3 Thr241Met (p = 0.006), and the presence of either mut/het XRCC1 Arg194Trp or wt GSTA1 (p = 0.031) were confirmed as predictors of radiotherapy-induced erythema by MVA. The whole breast mean dose, together with the presence of some polymorphic genes involved in DNA repair or oxidative stress, could explain the erythema observed after SSPBI, but further studies are needed to confirm these results in a larger cohort. ClinicalTrials.gov Identifier: NCT01316328
Linking Business Intelligence into Your Business
IT departments are under pressure to serve their enterprises by professionalizing their business intelligence (BI) operation. Companies can only be effective when their systematic and structured approach to BI is linked into the business itself.
Ka band spatial power-combining amplifier structures
A 2.24-W power-amplifier (PA) module at 35 GHz is presented using a broad-band spatial power-combining system. The combiner can accommodate more monolithic microwave integrated-circuit (MMIC) PAs with a staggered placement structure on limited microstrip space in a Ka-band waveguide structure with good return losses, and heat can be dissipated into the aluminum carrier quickly. This combiner is based on a slotline-to-microstrip transition structure, which also serves as a four-way power combiner. The proposed 2x2 combining structure, combined by vertical stacking inside the waveguide, was analyzed and optimized by finite-element-method (FEM) simulations and experiments.
Determining the Origin of Downloaded Files Using Metadata Associations
Determining the “origin of a file” in a file system is often required during digital investigations. While the problem of “origin of a file” appears intractable in isolation, it often becomes simpler if one considers the environmental context, viz., the presence of browser history, cache logs, cookies and so on. Metadata can help bridge this contextual gap. The majority of current tools, with their search-and-query interfaces, enable extraction of metadata but stop short of leading the investigator to the “associations” that metadata potentially point to, associations that would enable an approach to solving the “origin of a file” problem. In this paper, we develop a method to identify the origin of files downloaded from the Internet using metadata-based associations. Metadata-based associations are derived through metadata value matches on the digital artifacts, and the artifacts thus associated are grouped together automatically. These associations can reveal certain higher-order relationships across different sources such as file systems and log files. We define four relationships between files on file systems and log records in log files which we use to determine the origin of a particular file. The files in question are tracked from the user file system under examination, through the different browser logs generated during a user’s online activity, to their points of origin in the Internet.
An Analysis of the Elastic Net Approach to the Traveling Salesman Problem
This paper analyzes the elastic net approach (Durbin and Willshaw 1987) to the traveling salesman problem of finding the shortest path through a set of cities. The elastic net approach jointly minimizes the length of an arbitrary path in the plane and the distance between the path points and the cities. The tradeoff between these two requirements is controlled by a scale parameter K. A global minimum is found for large K, and is then tracked to a small value. In this paper, we show that (1) in the small K limit the elastic path passes arbitrarily close to all the cities, but that only one path point is attracted to each city, (2) in the large K limit the net lies at the center of the set of cities, and (3) at a critical value of K the energy function bifurcates. We also show that this method can be interpreted in terms of extremizing a probability distribution controlled by K. The minimum at a given K corresponds to the maximum a posteriori (MAP) Bayesian estimate of the tour under a natural statistical interpretation. The analysis presented in this paper gives us a better understanding of the behavior of the elastic net, allows us to better choose the parameters for the optimization, and suggests how to extend the underlying ideas to other domains.
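The update described above, city attraction traded off against path tension at scale K, can be sketched in a few lines of numpy. The step sizes alpha and beta and the annealing schedule below are illustrative choices, not values taken from the paper.

```python
import numpy as np

def elastic_net_step(cities, path, K, alpha=0.2, beta=1.0):
    """One Durbin-Willshaw elastic net update.

    cities: (n, 2) array; path: (m, 2) closed ring of path points.
    alpha and beta are illustrative step weights, not the paper's values.
    """
    # w[i, j]: normalized attraction of city i to path point j,
    # a (numerically stabilized) softmax of Gaussian affinities at scale K.
    d2 = ((cities[:, None, :] - path[None, :, :]) ** 2).sum(-1)
    aff = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2.0 * K * K))
    w = aff / aff.sum(axis=1, keepdims=True)

    # City-attraction force plus the elastic (length-shortening) tension,
    # whose influence scales with K as in the original energy function.
    pull = (w[:, :, None] * (cities[:, None, :] - path[None, :, :])).sum(0)
    tension = np.roll(path, 1, axis=0) - 2.0 * path + np.roll(path, -1, axis=0)
    return path + alpha * pull + beta * K * tension

rng = np.random.default_rng(0)
cities = rng.random((8, 2))
theta = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
start = cities.mean(0) + 0.05 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Start from a small ring at the centroid (the large-K minimum), then
# anneal K toward zero while tracking the minimum.
path, K = start.copy(), 0.4
for _ in range(300):
    path = elastic_net_step(cities, path, K)
    K = max(0.01, K * 0.99)
```

As K shrinks, the ring is drawn out of the centroid toward the cities, mirroring the paper's small-K and large-K limits.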
Consensus-Oriented Parallelization: How to Earn Your First Million
Consensus protocols employed in Byzantine fault-tolerant systems are notoriously compute intensive. Unfortunately, the traditional approach of executing instances of such protocols in a pipelined fashion is not well suited for modern multi-core processors and fundamentally restricts the overall performance of systems based on them. To solve this problem, we present the consensus-oriented parallelization (COP) scheme, which disentangles consecutive consensus instances and executes them in parallel by independent pipelines; or, to put it in the terminology of our main target, today's processors: COP is the introduction of superscalarity to the field of consensus protocols. In doing so, COP achieves 2.4 million operations per second on commodity server hardware, a factor of 6 compared to a contemporary pipelined approach measured on the same code base and a factor of over 20 compared to the highest throughput numbers published for such systems so far. More important, however: COP provides up to 3 times as much throughput on a single core as its competitors, and it can make use of additional cores where other approaches are confined by the slowest stage in their pipeline. This enables Byzantine fault tolerance for the emerging market of extremely demanding transactional systems and gives more room for conventional deployments to increase their quality of service.
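The core idea, running consensus instances in independent pipelines and sequencing only the final commit order, can be sketched as follows. The hashing stand-in for the protocol's compute-heavy work and all names here are illustrative, not taken from the paper's code base.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def run_instance(instance_id, request):
    """Stand-in for one consensus instance's compute-heavy work
    (e.g., batching, hashing and signature checks)."""
    digest = hashlib.sha256(f"{instance_id}:{request}".encode()).hexdigest()
    return instance_id, digest

requests = [f"op-{i}" for i in range(16)]

# COP-style: instances are disentangled and processed by independent
# workers rather than forced through one shared pipeline; only the
# commit order is sequenced afterwards, by instance id.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = pool.map(run_instance, range(len(requests)), requests)
committed = [digest for _, digest in sorted(results)]
```

The sequencing step is what preserves the total order that consensus requires, even though the expensive per-instance work ran out of order across cores.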
Ethnicity and weight status affect the accuracy of proxy indices of insulin sensitivity.
This study tested the hypotheses that correlations between direct measures of insulin sensitivity and proxy indices of insulin sensitivity derived from fasting values (i) would not be affected by ethnicity, and (ii) would be stronger in overweight vs. weight-reduced states. We further hypothesized that associations between proxy indices and fat distribution would be similar to those between directly measured insulin sensitivity and fat distribution. Testing was performed in weight-stable conditions in 59 African-American (AA) and 62 white-American (WA) overweight, premenopausal women before and after a weight loss intervention. Subjects were retested 1 year following weight loss. Proxy indices were correlated against the insulin sensitivity index S(I) determined via minimal modeling. Fat distribution was assessed using computed tomography. Correlations between S(I) and proxy indices were consistently stronger among overweight women (r = 0.44-0.52) vs. weight-reduced women (r = 0.18-0.32), and among AA (r = 0.49-0.56, baseline; 0.24-0.36, weight-reduced) vs. WA (r = 0.38-0.46, baseline; 0.19-0.31, weight-reduced). Among subjects who regained >3 kg after 1 year, correlations between S(I) and proxy indices were similar to those observed at baseline, whereas correlations were weak among women who maintained their reduced body weight. S(I) and all proxy indices were similarly correlated with intra-abdominal adipose tissue (IAAT) at baseline, but not after weight loss. In conclusion, correlations between S(I) and proxy indices were affected by both ethnicity and weight status. If proxy indices are used in multiethnic populations, or in populations including both lean and overweight/obese subjects, data should be interpreted with caution.
Interaction between early life stress and alcohol dependence on neural stress reactivity.
Stress response biologic systems are altered in alcohol-dependent individuals. Early life stress (ELS) is associated with a heightened risk of alcohol dependence, presumably because of stress-induced neuroplastic changes. This study was designed to assess the contribution of ELS to a stress-induced neural response in alcohol-dependent participants. Fifteen alcohol-dependent men abstinent for 3-5 weeks and 15 age- and race-matched healthy controls were studied. Anticipatory anxiety was induced by a conditioned stimulus paired with an uncertain physically painful unconditioned stressor. Neural response was assessed with functional magnetic resonance imaging. ELS was assessed with the Childhood Adversity Interview. There was a significant interaction between ELS and group on blood-oxygen-level-dependent (BOLD) amplitude during anticipatory anxiety in the right amygdala and bilateral orbitofrontal cortex, posterior putamen and insula. Higher ELS scores were associated with decreased BOLD amplitude during anticipatory anxiety in alcohol-dependent, but not control, participants. These findings suggest that ELS interacts with alcohol dependence to induce a muted cortico-striatal response to high threat stimuli. Allostatic changes due to both ELS and excessive alcohol use may jointly induce persistent changes in the neural response to acute stressors.
Purification and properties of an enantioselective and thermoactive amidase from the thermophilic actinomycete Pseudonocardia thermophila
A constitutively expressed thermoactive amidase from the thermophilic actinomycete Pseudonocardia thermophila was purified to homogeneity by applying hydrophobic interaction, anion exchange and gel filtration chromatography, giving a yield of 26% and a specific activity of 19.5 units mg−1. The purified enzyme has an estimated molecular mass of 108 kDa and an isoelectric point of 4.2. The amidase is active at a broad pH range (pH 4–9) and temperature range (40–80°C) and has a half-life of 1.2 h at 70°C. Inhibition of enzyme activity was observed in the presence of metal ions, such as Co2+, Hg2+, Cu2+, Ni2+, and thiol reagents. The amidase has a broad substrate spectrum, including aliphatic, aromatic and amino acid amides. The presence of a double bond or a methyl group near the carboxamide group of aliphatic and amino acid amides enhances the enzymatic activity. Among aromatic amides with substitutions at the o-, m-, or p-position, the p-substituted amides are the preferred substrates. The highest acyl transferase activity was detected with hexanoamide, isobutyramide and propionamide. The K m values for propionamide, methacrylamide, benzamide and 2-phenylpropionamide are 7.4, 9.2, 4.9 and 0.9 mM, respectively. The amidase is highly S-stereoselective for 2-phenylpropionamide; and the racemic amide was converted to the corresponding S-acid with an enantiomeric excess of >95% at 50% conversion of the substrate. In contrast, the d,l-tryptophanamide and d,l-methioninamide were converted to the corresponding d,l-acids at the same rate. This thermostable enzyme represents the first reported amidase from a thermophilic actinomycete.
How Linguistic Metaphor Scaffolds Reasoning
Language helps people communicate and think. Precise and accurate language would seem best suited to achieve these goals. But a close look at the way people actually talk reveals an abundance of apparent imprecision in the form of metaphor: ideas are 'light bulbs', crime is a 'virus', and cancer is an 'enemy' in a 'war'. In this article, we review recent evidence that metaphoric language can facilitate communication and shape thinking even though it is literally false. We first discuss recent experiments showing that linguistic metaphor can guide thought and behavior. Then we explore the conditions under which metaphors are most influential. Throughout, we highlight theoretical and practical implications, as well as key challenges and opportunities for future research.
The Effects of Feedback Interventions on Performance : A Historical Review , a Meta-Analysis , and a Preliminary Feedback Intervention Theory
The total number of papers may exceed 10,000. Nevertheless, cost considerations forced us to consider mostly published papers and technical reports in English. [Footnote 4: Formula 4 in Seifert (1991) is in error; a multiplier of n, the cell size, is missing in the numerator.] [Footnote 5: Unfortunately, the technique of meta-analysis cannot, at the present time, be applied to such effects, because the distribution of d is based on a sampling of people, whereas the statistics of techniques such as ARIMA are based on the distribution of a sampling of observations in the time domain, regardless of the size of the people sample involved (i.e., there is no way to compare a sample of 100 points in time with a sample of 100 people). That is, a sample of 100 points in time has the same degrees of freedom whether it is based on an observation of 1 person or of 1,000 people.] From the papers we reviewed, only 131 (5%) met the criteria for inclusion. We were concerned that, given the small percentage of usable papers, our conclusions might not fairly represent the larger body of relevant literature. Therefore, we analyzed all the major reasons to reject a paper from the meta-analysis, even though the decision to exclude a paper came at the first identification of a missing inclusion criterion. This analysis showed the presence of review articles, interventions of natural feedback removal, and papers that merely discuss feedback, which in turn suggests that the included studies represent 10-15% of the empirical FI literature. However, this analysis also showed that approximately 37% of the papers we considered manipulated feedback without a control group and that 16% reported confounded treatments; that is, roughly two thirds of the empirical FI literature cannot shed light on the question of FI effects on performance, a fact that requires attention from future FI researchers. Of the usable 131 papers (see references with asterisks), 607 effect sizes were extracted.
These effects were based on 12,652 participants and 23,663 observations (reflecting multiple observations per participant). The average sample size per effect was 39 participants. The distribution of the effect sizes is presented in Figure 1. The weighted mean (weighted by sample size) of this distribution is 0.41, suggesting that, on average, FI has a moderate positive effect on performance. However, over 38% of the effects were negative (see Figure 1). The weighted variance of this distribution is 0.97, whereas the estimate of the sampling error variance is only 0.09. A potential problem in meta-analyses is a violation of the assumption of independence. Such a violation occurs either when multiple observations are taken from the same study (Rosenthal, 1984) or when several papers are authored by the same person (Wolf, 1986). In the present investigation, there were 91 effects derived from the laboratory experiments reported by Mikulincer (e.g., 1988a, 1988b). This raises the possibility that the average effect size is biased, because his studies manipulated extreme negative FIs and used similar tasks. In fact, the weighted average d in Mikulincer's studies was -0.39, whereas in the remainder of the
Interdisciplinary Research Issues in Music Information Retrieval: ISMIR 2000–2002
Music Information Retrieval (MIR) is an interdisciplinary research area that has grown out of the need to manage burgeoning collections of music in digital form. Its diverse disciplinary communities, exemplified by the recently established ISMIR conference series, have yet to articulate a common research agenda or agree on methodological principles and metrics of success. In order for MIR to succeed, researchers need to work with real user communities and develop research resources such as reference music collections, so that the wide variety of techniques being developed in MIR can be meaningfully compared with one another. Out of these efforts, a common MIR practice can emerge.
Prevention of panic attacks and panic disorder in COPD.
This study examined whether cognitive behavioural therapy (CBT) could prevent the development or worsening of panic-spectrum psychopathology and anxiety symptoms in chronic obstructive pulmonary disease (COPD). 41 patients with COPD, who had undergone pulmonary rehabilitation, were randomised to either a four-session CBT intervention condition (n = 21) or a routine care condition (n = 20). Assessments were at baseline, post-intervention, and at 6-, 12- and 18-month follow-ups. Primary outcomes were the rates of panic attacks, panic disorder and anxiety symptoms. Secondary outcomes were depressive symptoms, catastrophic cognitions about breathing difficulties, disease-specific quality of life and hospital admission rates. There were no significant differences between the groups on outcome measures at baseline. By the 18-month follow-up assessment, 12 (60%) routine care group participants had experienced at least one panic attack in the previous 6 months, with two (17%) of these being diagnosed with panic disorder, while no CBT group participants experienced any panic attacks during the follow-up phase. There were also significant reductions in anxiety symptoms and catastrophic cognitions in the CBT group at all three follow-ups and a lower number of hospital admissions between the 6- and 12-month follow-ups. The study provides evidence that a brief, specifically targeted CBT intervention can treat panic attacks in COPD patients and prevent the development and worsening of panic-spectrum psychopathology and anxiety symptoms.
Robust kernel density estimation
In this paper, we propose a method for robust kernel density estimation. We interpret a KDE with Gaussian kernel as the inner product between a mapped test point and the centroid of mapped training points in kernel feature space. Our robust KDE replaces the centroid with a robust estimate based on M-estimation (P. Huber, 1981). The iteratively re-weighted least squares (IRWLS) algorithm for M-estimation depends only on inner products, and can therefore be implemented using the kernel trick. We prove the IRWLS method monotonically decreases its objective value at every iteration for a broad class of robust loss functions. Our proposed method is applied to synthetic data and network traffic volumes, and the results compare favorably to the standard KDE.
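The kernel-trick IRWLS described above can be sketched directly in numpy: every quantity needed, including the feature-space distance to the weighted centroid, reduces to entries of the kernel matrix. The Huber threshold c, the bandwidth, and the toy data below are illustrative choices, not the paper's settings.

```python
import numpy as np

def robust_kde_weights(X, sigma=1.0, c=0.5, iters=30):
    """IRWLS weights for a robust KDE, using only kernel inner products.

    Replaces the uniform 1/n weights of a standard Gaussian KDE with
    M-estimation weights; c is an illustrative Huber threshold.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel matrix
    n = len(X)
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Feature-space distance from each mapped point to the current
        # weighted centroid: ||phi(x_i) - sum_j w_j phi(x_j)||,
        # expanded entirely in terms of kernel evaluations.
        dist2 = np.diag(K) - 2.0 * K @ w + w @ K @ w
        dist = np.sqrt(np.maximum(dist2, 1e-12))
        # Huber psi(d)/d: 1 for inliers, c/d (downweighting) beyond c.
        u = np.where(dist <= c, 1.0, c / dist)
        w = u / u.sum()
    return w

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),   # tight inlier cluster
               [[8.0, 8.0]]])                   # one gross outlier
w = robust_kde_weights(X)
```

The resulting weights can be plugged into the usual KDE sum; the outlier contributes far less mass than any inlier, which is the robustness the paper is after.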
Endotoxin contamination in the dental surgery.
Dental waterlines contain large numbers of Gram-negative bacteria. Endotoxin, a component of such organisms, has significant health implications. Paired samples of dental unit water and the aerosols generated during dental procedures were collected, and assayed for bacteria and endotoxin levels, using heterotrophic plate counts and the Limulus amoebocyte lysate test. Consistent with published studies, the extent of bacterial contamination in the dental waters sampled for this investigation surpassed the levels associated with potable water, with counts in excess of 2.0×10⁶ c.f.u. ml⁻¹ in some samples. Correspondingly high concentrations of endotoxin [up to 15,000 endotoxin units (EU) ml⁻¹] were present in the water. A statistically significant Spearman correlation coefficient of ρ = 0.94 between endotoxin (EU ml⁻¹) and bacterial load (c.f.u. ml⁻¹) was demonstrated. All of the aerosol samples contained detectable endotoxin. Further studies of the consequences of dental endotoxin exposure, and evaluation of means to prevent exposure, are warranted.
On the Recognizability of Self-generating Sets
Let I be a finite set of integers and F be a finite set of maps of the form n ↦ k_i n + ℓ_i with integer coefficients. For an integer base k ≥ 2, we study the k-recognizability of the minimal set X of integers containing I and satisfying φ(X) ⊆ X for all φ ∈ F. In particular, solving a conjecture of Allouche, Shallit and Skordev, we show under some technical conditions that if two of the constants k_i are multiplicatively independent, then X is not k-recognizable for any k ≥ 2.
CoCaml: Functional Programming with Regular Coinductive Types
Functional languages offer a high level of abstraction, which results in programs that are elegant and easy to understand. Central to the development of functional programming are inductive and coinductive types and associated programming constructs, such as pattern-matching. Whereas inductive types have a long tradition and are well supported in most languages, coinductive types are the subject of more recent research and are less mainstream. We present CoCaml, a functional programming language extending OCaml, which allows us to define recursive functions on regular coinductive datatypes. These functions are defined like usual recursive functions, but parameterized by an equation solver. We present a full implementation of all the constructs and solvers and show how these can be used in a variety of examples, including operations on infinite lists, infinitary λ-terms, and p-adic numbers.
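CoCaml itself extends OCaml; as a language-neutral illustration of its central idea, the following Python sketch turns a recursive definition over a cyclic ("regular") list into a finite system of equations and hands it to a solver, here a least-fixpoint iteration. It is the solver, not ordinary recursion, that lets the computation terminate on cyclic data. The representation and names are invented for illustration.

```python
def contains(nodes, start, target):
    """Does the possibly-cyclic list reachable from `start` contain `target`?

    nodes: {node_id: (value, next_id)}. Ordinary recursion would loop
    forever on a cycle; instead we set up one Boolean unknown per node,
        b[i] = (value_i == target) or b[next_i],
    and solve the finite equation system by least-fixpoint iteration
    (starting from False, the coinductively correct answer for "never found").
    """
    b = {i: False for i in nodes}
    changed = True
    while changed:
        changed = False
        for i, (value, nxt) in nodes.items():
            new = (value == target) or b[nxt]
            if new != b[i]:
                b[i], changed = new, True
    return b[start]

# The cyclic list 1 -> 2 -> 3 -> back to 1.
cyc = {0: (1, 1), 1: (2, 2), 2: (3, 0)}
```

In CoCaml the programmer writes the recursive function as usual and names the solver as a parameter; the sketch above inlines both steps by hand.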
Automatic Detection and Categorization of Election-Related Tweets
With the rise in popularity of public social media and micro-blogging services, most notably Twitter, people have found a venue to hear and be heard by their peers without an intermediary. As a consequence, and aided by the public nature of Twitter, political scientists now potentially have the means to analyse and understand the narratives that organically form, spread and decline among the public in a political campaign. However, the volume and diversity of the conversation on Twitter, combined with its noisy and idiosyncratic nature, make this a hard task. Thus, advanced data mining and language processing techniques are required to process and analyse the data. In this paper, we present and evaluate a technical framework, based on recent advances in deep neural networks, for identifying and analysing election-related conversation on Twitter on a continuous, longitudinal basis. Our models can detect election-related tweets with an F-score of 0.92 and can categorize these tweets into 22 topics with an F-score of 0.90.
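The two-stage pipeline, first detecting election-related tweets and then categorizing them by topic, can be sketched as follows. The paper uses deep neural networks; as a plainly named stand-in, this sketch runs the same staging with a bag-of-words multinomial Naive Bayes with uniform class priors, and the tiny corpus is invented for illustration.

```python
import numpy as np

def featurize(texts, vocab):
    """Bag-of-words counts over a fixed vocabulary."""
    X = np.zeros((len(texts), len(vocab)))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            if tok in vocab:
                X[i, vocab[tok]] += 1
    return X

def train_nb(X, y, n_classes, alpha=1.0):
    """Multinomial Naive Bayes with Laplace smoothing, uniform priors."""
    logp = []
    for c in range(n_classes):
        counts = X[y == c].sum(axis=0) + alpha
        logp.append(np.log(counts / counts.sum()))
    return np.array(logp)

def predict(logp, X):
    return (X @ logp.T).argmax(axis=1)

# Stage 1: election-related (1) vs. not (0), on an invented toy corpus.
stage1_texts = ["vote for the candidate", "election polling update",
                "candidate debate tonight", "great pizza tonight",
                "new movie was fun", "pizza and a movie"]
stage1_y = np.array([1, 1, 1, 0, 0, 0])
vocab = {t: i for i, t in enumerate(
    sorted({tok for s in stage1_texts for tok in s.split()}))}
logp1 = train_nb(featurize(stage1_texts, vocab), stage1_y, 2)

tweets = ["election vote today", "pizza was great"]
related = predict(logp1, featurize(tweets, vocab))
# Stage 2 would then assign one of the 22 topics to the related tweets
# with a second classifier trained only on election-related examples.
```

The staging matters because the relevance filter removes most of the noisy, idiosyncratic off-topic volume before the harder 22-way categorization runs.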
Direct WYSIWYG painting and texturing on 3D shapes
This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.
Axiomatic Analysis for Improving the Log-Logistic Feedback Model
Pseudo-relevance feedback (PRF) has been proven to be an effective query expansion strategy to improve retrieval performance. Several PRF methods have so far been proposed for many retrieval models. Recent theoretical studies of PRF methods show that most of the PRF methods do not satisfy all necessary constraints. Among all, the log-logistic model has been shown to be an effective method that satisfies most of the PRF constraints. In this paper, we first introduce two new PRF constraints. We further analyze the log-logistic feedback model and show that it does not satisfy these two constraints as well as the previously proposed "relevance effect" constraint. We then modify the log-logistic formulation to satisfy all these constraints. Experiments on three TREC newswire and web collections demonstrate that the proposed modification significantly outperforms the original log-logistic model, in all collections.
Classification using Hybrid SVM and KNN Approach
Phishing is a potential web threat that involves mimicking official websites to trick users into revealing important information such as usernames and passwords related to financial systems. Attackers use social engineering techniques such as email, SMS and malware to defraud users. Due to the potential financial losses caused by phishing, it is essential to find effective approaches for phishing website detection. This paper proposes a hybrid approach for classifying websites as Phishing, Legitimate or Suspicious; the proposed approach intelligently combines the K-nearest neighbors (KNN) algorithm with the Support Vector Machine (SVM) algorithm in two stages. First, KNN is utilized as a classifier that is robust to noisy data and effective. Second, SVM is employed as a powerful classifier. The proposed approach integrates the simplicity of KNN with the effectiveness of SVM. The experimental results show that the proposed hybrid approach achieved the highest accuracy of 90.04% when compared with other approaches. Keywords—Information security; phishing websites; support vector machine; K-nearest neighbors
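One plausible way to combine the two stages, not spelled out in the abstract, is to gate on the KNN vote margin and fall back to an SVM for ambiguous points. The sketch below does exactly that with a linear SVM trained by hinge-loss subgradient descent; the gate, the toy feature clusters, and the +1/-1 labels (legitimate/phishing) are illustrative assumptions.

```python
import numpy as np

def knn_vote(Xtr, ytr, x, k=5):
    """Sum of the k nearest labels (+1/-1); |vote| acts as confidence."""
    idx = np.argsort(((Xtr - x) ** 2).sum(axis=1))[:k]
    return ytr[idx].sum()

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=500):
    """Linear SVM via subgradient descent on the regularized hinge loss
    (a simple stand-in for a full SVM solver)."""
    n, w, b = len(X), np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                       # margin violations
        gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

def hybrid_predict(Xtr, ytr, w, b, x, k=5, gate=3):
    vote = knn_vote(Xtr, ytr, x, k)
    if abs(vote) >= gate:                 # stage 1: confident KNN decision
        return 1 if vote > 0 else -1
    return 1 if x @ w + b > 0 else -1     # stage 2: SVM fallback

rng = np.random.default_rng(2)
# Toy stand-ins for extracted website features: +1 legitimate, -1 phishing.
Xtr = np.vstack([rng.normal(2.0, 0.5, (10, 2)),
                 rng.normal(-2.0, 0.5, (10, 2))])
ytr = np.array([1] * 10 + [-1] * 10)
w, b = train_linear_svm(Xtr, ytr)
```

The gate keeps the cheap, noise-robust KNN stage in charge of easy cases while the SVM decides the borderline ones, which is the division of labor the abstract motivates.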
Treatment of lymphangiomas with OK-432 (Picibanil) sclerotherapy: a prospective multi-institutional trial.
OBJECTIVE To describe and to determine the robustness of our study evaluating the efficacy of OK-432 (Picibanil) as a therapeutic modality for lymphangiomas. DESIGN AND SETTING Prospective, randomized trial and parallel-case series at 13 US tertiary care referral centers. SUBJECTS Thirty patients diagnosed as having lymphangioma. Ages in 25 ranged from 6 months to 18 years. Twenty-nine had lesions located in the head-and-neck area. INTERVENTION Every patient received a 4-dose injection series of OK-432 scheduled 6 to 8 weeks apart unless a contraindication existed or a complete response was observed before completion of all injections. A control group was observed for 6 months. OUTCOME MEASURES Successful outcome of therapy was defined as a complete or a substantial (>60%) reduction in lymphangioma size as determined by calculated lesion volumes on computed tomographic or magnetic resonance imaging scans. RESULTS Overall, 19 (86%) of the 22 patients with predominantly macrocystic lymphangiomas had a successful outcome. CONCLUSIONS OK-432 should be efficacious in the treatment of lymphangiomas. Our study design is well structured to clearly define the role of this treatment agent.
CUSTOMER LIFETIME VALUE: MARKETING MODELS AND APPLICATIONS
The literature on the topic (a) has been dedicated to extolling its use as a decision-making criterion; (b) has presented isolated numerical examples of its calculation/determination; and (c) has considered it as part of the general discussions of profitability and discussed its role in customer acquisition decisions and customer acquisition/retention trade-offs. There has been a dearth of general modeling of the topic. This paper presents a series of mathematical models for determination of customer lifetime value. The choice of the models is based on a systematic theoretical taxonomy and on assumptions grounded in customer behavior. In addition, selected managerial applications of these general models of customer lifetime value are offered.
Incidence of metabolic syndrome among night-shift healthcare workers.
OBJECTIVE Night-shift work is associated with ischaemic cardiovascular disorders. It is not currently known whether it may be causally linked to metabolic syndrome (MS), a risk condition for ischaemic cardiovascular disorders. The syndrome presents with visceral obesity associated with mild alterations in glucose and lipid homeostasis and in blood pressure. The aim of this study was to assess whether a causal relationship exists between night-shift work and the development of MS. METHODS Male and female nurses performing night shifts, free from any component of MS at baseline, were evaluated annually for the development of the disorder during a 4-year follow-up. Male and female nurses performing daytime work only, examined during the same time period, represented the control group. RESULTS The cumulative incidence of MS was 9.0% (36/402) among night-shift workers, and 1.8% (6/336) among daytime workers (relative risk (RR) 5.0, 95% CI -2.1 to 14.6). The annual rate of incidence of MS was 2.9% in night-shift workers and 0.5% in daytime workers. Kaplan-Meier survival curves of the two groups were significantly different (log-rank test; p<0.001). Multiple Cox regression analysis (forward selection method based on likelihood ratio) showed that among selected variables (age, gender, smoking, alcohol intake, family history, physical activity, and work schedule) the only predictors of occurrence of MS were sedentariness (hazard ratio (HR) 2.92; 95% CI 1.64 to 5.18; p = 0.017), and night-shift work (HR 5.10; 95% CI 2.15 to 12.11; p<0.001). CONCLUSIONS The risk of developing MS is strongly associated with night-shift work in nurses. Medical counselling should be promptly instituted in night-shift workers with the syndrome, and in case of persistence or progression, a change in work schedule should be considered.
Testing for baseline differences in randomized controlled trials: an unhealthy research behavior that is hard to eradicate
BACKGROUND According to the CONSORT statement, significance testing of baseline differences in randomized controlled trials should not be performed. In fact, this practice has been discouraged by numerous authors throughout the last forty years. During that time span, reporting of baseline differences has substantially decreased in the leading general medical journals. Our own experience in the field of nutrition behavior research, however, is that co-authors, reviewers and even editors are still very persistent in their demand for these tests. The aim of this paper is therefore to negate this demand by providing clear evidence as to why testing for baseline differences between intervention groups statistically is superfluous and why such results should not be published. DISCUSSION Testing for baseline differences is often propagated because of the belief that it shows whether randomization was successful and it identifies real or important differences between treatment arms that should be accounted for in the statistical analyses. Especially the latter argument is flawed, because it ignores the fact that the prognostic strength of a variable is also important when the interest is in adjustment for confounding. In addition, including prognostic variables as covariates can increase the precision of the effect estimate. This means that choosing covariates based on significance tests for baseline differences might lead to omissions of important covariates and, less importantly, to inclusion of irrelevant covariates in the analysis. We used data from four supermarket trials on the effects of pricing strategies on fruit and vegetable purchases to show that results from fully adjusted analyses sometimes do appreciably differ from results from analyses adjusted for significant baseline differences only. We propose to adjust for known or anticipated important prognostic variables. These could or should be pre-specified in trial protocols.
Subsequently, authors should report results from the fully adjusted as well as crude analyses, especially for dichotomous and time to event data. Based on our arguments, which were illustrated by our findings, we propose that journals in and outside the field of nutrition behavior actively adopt the CONSORT 2010 statement on this topic by not publishing significance tests for baseline differences anymore.
Two medical abortion regimens for late first-trimester termination of pregnancy: a prospective randomized trial.
BACKGROUND Medical abortion regimens based on the use of either misoprostol alone or in association with mifepristone have shown high efficacy and excellent safety profile in early pregnancy abortion. However, no clear recommendation is available for late first-trimester termination of pregnancy. STUDY DESIGN A prospective randomized controlled trial included 122 women seeking medical abortion at 9 to 12 weeks of gestation. Seventy-three patients were given a fixed protocol of 200 mg of mifepristone followed 48 h later by 400 mcg oral misoprostol (Group 1). The second group of 49 patients was administered 800-mcg intravaginal single-dose misoprostol (Group 2). This study sought to compare safety, efficacy and acceptability of these two nonsurgical abortion regimens. RESULTS Fifty-nine (80.8%) women in Group 1 had complete abortion vs. 38 (77.4%) women in Group 2 (p=.66). Abdominal pain was observed significantly more often in Group 2 (35/49 (71.4%) vs. 32/73 (43.8%) in Group 1, p<.0001. Medical abortion was equally acceptable among the two groups [37/49 (75.5%) and 55/73 (75.7%), p=.89]. CONCLUSION For late first-trimester termination, a single 800-mcg vaginal dose of misoprostol seems to be as effective as the mifepristone+misoprostol regimen, with acceptable side effects.
Sexting among young adults.
PURPOSE Sexting has stirred debate over its legality and safety, but few researchers have documented the relationship between sexting and health. We describe the sexting behavior of young adults in the United States, and examine its association with sexual behavior and psychological well-being. METHODS Using an adapted Web version of respondent-driven sampling, we recruited a sample of U.S. young adults (aged 18-24 years, N = 3,447). We examined participant sexting behavior using four categories of sexting: (1) nonsexters, (2) receivers, (3) senders, and (4) two-way sexters. We then assessed the relationships between sexting categories and sociodemographic characteristics, sexual behavior, and psychological well-being. RESULTS More than half (57%) of the respondents were nonsexters, 28.2% were two-way sexters, 12.6% were receivers, and 2% were senders. Male respondents were more likely to be receivers than their female counterparts. Sexually active respondents were more likely to be two-way sexters than non-sexually active ones. Among participants who were sexually active in the past 30 days, we found no differences across sexting groups in the number of sexual partners or the number of unprotected sex partners in the past 30 days. We also found no relationship between sexting and psychological well-being. CONCLUSIONS Our results suggest that sexting is not related to sexual risk behavior or psychological well-being. We discuss the findings of this study and propose directions for further research on sexting.
Analysis of Soil Behaviour and Prediction of Crop Yield Using Data Mining Approach
Yield prediction is of great interest to farmers, particularly because it contributes to the proper selection of crops for sowing. This makes the problem of predicting crop yield an interesting challenge. Earlier, yield prediction was performed by considering the farmer's experience with a particular field and crop. This work presents a system that uses data mining techniques to predict the category of the analyzed soil datasets; the predicted category indicates the expected crop yield. The problem of predicting the crop yield is formalized as a classification problem, for which the Naive Bayes and K-Nearest Neighbor methods are used.
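The two classifiers named above can be sketched side by side on a toy soil dataset. The features (pH, nitrogen, moisture) and the low/high-yield labels below are invented for illustration; the paper's actual soil attributes and categories are not specified here.

```python
import numpy as np

def fit_gnb(X, y):
    """Gaussian Naive Bayes: per-class feature means, variances, priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return params

def predict_gnb(params, x):
    def log_post(c):
        mu, var, prior = params[c]
        ll = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return np.log(prior) + ll.sum()
    return max(params, key=log_post)

def predict_knn(X, y, x, k=3):
    """Majority vote among the k nearest soil samples."""
    idx = np.argsort(((X - x) ** 2).sum(axis=1))[:k]
    vals, counts = np.unique(y[idx], return_counts=True)
    return vals[counts.argmax()]

# Toy soil samples: [pH, nitrogen, moisture]; 0 = low yield, 1 = high yield.
X = np.array([[5.2, 10, 18], [5.5, 12, 20], [5.0, 9, 15],
              [6.8, 30, 35], [7.0, 28, 33], [6.9, 32, 36]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
params = fit_gnb(X, y)
```

In practice one would compare the two predictors on held-out soil samples and keep whichever generalizes better for the region's data.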
Hybrid models based on rough set classifiers for setting credit rating decision rules in the global banking industry
Banks are important to national, and even global, economic stability. Banking panics that follow bank insolvency or bankruptcy, especially of large banks, can severely jeopardize economic stability. Therefore, issuers and investors urgently need a credit rating indicator to help identify the financial status and operational competence of banks. A credit rating provides financial entities with an assessment of credit worthiness, investment risk, and default probability. Although numerous models have been proposed to solve credit rating problems, they have the following drawbacks: (1) lack of explanatory power; (2) reliance on the restrictive assumptions of statistical techniques; and (3) numerous variables, which result in multiple dimensions and complex data. To overcome these shortcomings, this work applies two hybrid models that solve the practical problems in credit rating classification. For model verification, this work uses an experimental dataset collected from the Bankscope database for the period 1998–2007. Experimental results demonstrate that the proposed hybrid models for credit rating classification outperform the listing models in this work. A set of decision rules for classifying credit ratings is extracted. Finally, study findings and managerial implications are provided for academics and practitioners.
Role of vasoactive intestinal peptide and pituitary adenylate cyclase activating polypeptide in the vaginal wall of women with stress urinary incontinence and pelvic organ prolapse
Pelvic floor connective tissue degeneration is closely associated with retrogradation of its innervating nerve fibers. We hypothesized that some neuropeptides from pelvic floor tissue might be involved in the pathological progression of stress urinary incontinence (SUI) and pelvic organ prolapse (POP) in women. Thirty premenopausal and 31 postmenopausal patients participated in the study. The morphological appearance of the vaginal tissue was examined. The vasoactive intestinal peptide (VIP) and pituitary adenylate cyclase activating polypeptide-38 (PACAP) immunoreactivities (ir-VIP, ir-PACAP) were tested by immunohistochemistry and radioimmunoassay. We found that VIP and PACAP immunostainings were weaker and sparser, and that ir-VIP and ir-PACAP levels were significantly decreased, in the anterior vaginal wall of the premenopausal and postmenopausal SUI or POP patients. Ir-VIP and ir-PACAP levels were inversely correlated with age and menopausal status in the SUI or POP patients. Our data suggest that VIP and PACAP may participate in the pathophysiological process of SUI and POP.
The Cut-Off Point and Boundary Values of Waist-to-Height Ratio as an Indicator for Cardiovascular Risk Factors in Chinese Adults from the PURE Study
To explore a scientific boundary of WHtR to evaluate central obesity and CVD risk factors in a Chinese adult population. The data are from the Prospective Urban Rural Epidemiology (PURE) China study, conducted from 2005-2007. The final study sample consisted of 43,841 participants (18,019 men and 25,822 women) aged 35-70 years. According to the group of CVD risk factors proposed by the Joint National Committee's seventh report and the clustering of risk factors, diagnostic parameters such as sensitivity, specificity, and receiver operating characteristic (ROC) curve least distance were calculated for hypertension, diabetes, high serum triglyceride (TG), high serum low density lipoprotein cholesterol (LDL-C), low serum high density lipoprotein cholesterol (HDL-C), and clustering of risk factors (number ≥ 2) to evaluate the efficacy of each value of the WHtR cut-off point. The upper boundary value for severity was fixed at the point where the specificity was above 90%. The lower boundary value, which indicated above-underweight status, was determined by the percentile distribution of WHtR, specifically the 5th percentile (P5) for both the male and female populations. Then, based on convenience and practical use, the optimal boundary values of WHtR for underweight and obvious central obesity were determined. For the whole study population, the optimal WHtR cut-off point for the CVD risk factor cluster was 0.50. The cut-off point for severe central obesity was 0.57 in the whole population. The upper boundary values of WHtR to detect the risk factor cluster with specificity above 90% were 0.55 and 0.58 for men and women, respectively. Additionally, the cut-off points of WHtR for each of four cardiovascular risk factors with specificity above 90% ranged from 0.55 to 0.56 in males, whereas in females they ranged from 0.57 to 0.58. The P5 of WHtR, representing the lower boundary value indicating above-underweight status, was 0.40 in the whole population.
WHtR 0.50 was an optimal cut-off point for evaluating CVD risks in Chinese adults of both genders. The optimal boundaries of WHtR were 0.40 and 0.57, indicating low body weight and severe risk for CVD, respectively, in Chinese adults.
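The "ROC least distance" criterion used above can be sketched as follows: scan candidate cut-offs and keep the one whose (sensitivity, specificity) point lies closest to the perfect corner of the ROC curve. The WHtR values below are synthetic; only the selection rule is illustrated, not the study's data.

```python
import numpy as np

def roc_least_distance_cutoff(values, labels):
    """Return the cutoff minimizing the distance from the ROC point
    (1 - specificity, sensitivity) to the ideal corner (0, 1)."""
    best_cut, best_dist = None, np.inf
    for cut in np.unique(values):
        pred = values >= cut
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        dist = np.hypot(1 - sens, 1 - spec)
        if dist < best_dist:
            best_cut, best_dist = cut, dist
    return best_cut

rng = np.random.default_rng(0)
# Synthetic WHtR values: "cases" centered higher than "non-cases"
whtr = np.concatenate([rng.normal(0.53, 0.04, 500), rng.normal(0.47, 0.04, 500)])
risk = np.concatenate([np.ones(500, dtype=int), np.zeros(500, dtype=int)])
cut = roc_least_distance_cutoff(whtr, risk)
print(round(cut, 2))  # near 0.50 for this synthetic mixture
```

The same scan with the constraint specificity > 90% would yield the upper boundary value described in the abstract.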
Anxiety and outcome evaluation: The good, the bad and the ambiguous
Previous research has indicated that anxious individuals are more prone than non-anxious individuals to evaluate ambiguous information as negative. The feedback-related negativity (FRN) component of the event-related brain potential (ERP) has been shown to be sensitive to outcome evaluation. The current ERP study aimed to test the hypothesis that the FRNs associated with ambiguous outcomes and negative outcomes differ between high-trait-anxiety (HTA) and low-trait-anxiety (LTA) individuals. The FRN was measured as a difference wave created across conditions. We found significantly different FRN responses between high-anxious and low-anxious participants in the ambiguous outcome condition, as well as in the negative outcome condition. Moreover, the HTA group's FRN responses under the ambiguous outcome condition were larger than those under the negative outcome condition. Nevertheless, the FRN following neutral outcomes did not differ between the two groups. The present results support the idea that there is a link between individual differences in anxiety and ambiguous outcome evaluation, which possibly reflects the adaptive function of anxiety. Additionally, the results indicate that the mechanisms underlying the evaluation of neutral outcomes and ambiguous outcomes might differ from each other.
Artificial Intelligence Approaches To UCAV Autonomy
This paper covers a number of approaches that leverage Artificial Intelligence algorithms and techniques to aid Unmanned Combat Aerial Vehicle (UCAV) autonomy. An analysis of current approaches to autonomous control is provided followed by an exploration of how these techniques can be extended and enriched with AI techniques including Artificial Neural Networks (ANN), Ensembling and Reinforcement Learning (RL) to evolve control strategies for UCAVs.
18.4 A 4.9mΩ-sensitivity mobile electrical impedance tomography IC for early breast-cancer detection system
Approximately 1 in 8 U.S. women will develop breast cancer over the course of her lifetime, and breast cancer death rates are higher than those for any other cancer, besides lung cancer. In 2013, an estimated 232,340 new cases of invasive breast cancer are expected to be diagnosed in women in the U.S. and about 39,620 women in the U.S. are expected to die from breast cancer [1]. According to the World Health Organization (WHO), if breast cancer can be detected and treated early, one-third of these cancer deaths could be prevented. For the early detection of breast cancer, X-ray mammography and ultrasonic screening are mainly used in hospitals. However, for personal cancer detection at home, currently, only unscientific palpation can be used, which is not particularly effective for early detection of tumors.
A PWM plus phase-shift control bidirectional DC-DC converter
A PWM plus phase-shift control bidirectional DC-DC converter is proposed. In this converter, PWM control and phase-shift control are combined to reduce current stress and conduction loss, and to expand the ZVS range. The operating principle and analysis of the converter are explained, and the ZVS condition is derived. A prototype of the PWM plus phase-shift bidirectional DC-DC converter was built to verify the analysis.
Sensorless control of high-speed PM BLDC motor
This paper examines methods of controlling a high-speed sensorless PM BLDC motor, with a target speed of 100,000 rpm and a power of 1 kW, designed in the Department of Power Electronics, Electrical Drives and Robotics (KENER), Silesian University of Technology. The article describes methods based on phase-voltage integration and on the third harmonic, and also presents a method of open-loop starting and ways of determining the position of a stopped rotor. The described sensorless control methods were examined by computer simulation in order to test the feasibility of their implementation in the high-speed motor project.
Unequal Wilkinson Power Dividers With Favorable Selectivity and High-Isolation Using Coupled-Line Filter Transformers
In the conventional unequal-split Wilkinson power divider, poor selectivity for each transmission path is usually a problem. To surmount this obstacle, the parallel coupled-line bandpass filter structure is utilized as an impedance transformer as well as a band selector for each transmission path in the proposed unequal-split Wilkinson power dividers. However, the bandpass filters in the proposed dividers require careful design because they may not be functional under certain conditions. For example, odd-order coupled-line filters are not appropriate as impedance transformers in the proposed unequal-split dividers when high isolation is required. Using even-order coupled-line filter transformers, this study proposes two types of unequal-split Wilkinson power dividers. The first type arranges two filter transformers near the two output ports, respectively, and is capable of achieving remarkably high isolation between the two output ports and good band selection in each transmission path. Specifically, not only the operating band but also the lower and higher stopbands can achieve highly favorable isolation for this type of divider. By arranging the load impedance of each port properly, the second type, which has only one filter transformer shared by each transmission path near the input port, is also proposed to provide effective isolation between the two output ports and favorable selectivity in each transmission path.
DYNAMICALLY PROPAGATING SHEAR BANDS IN IMPACT-LOADED PRENOTCHED PLATES-II. NUMERICAL
The experimental observations of dynamic failure in the form of propagating shear bands and of the transition in failure mode presented in Part I of this investigation are analyzed. Finite element simulations are carried out for the initiation and propagation of shear-dominated failure in prenotched plates subjected to asymmetric impact loading. Coupled thermomechanical simulations are carried out under the assumption of plane strain. The simulations take into account finite deformations, inertia, heat conduction, thermal softening, strain hardening and strain-rate hardening. The propagation of shear bands is assumed to be governed by a critical plastic strain criterion. The results demonstrate a strong dependence of band propagation speed on impact velocity, in accordance with experimental observations. The calculations reveal an active plastic zone in front of the tip of the propagating shear bands. The size of this zone and the level of the shear stresses inside it do not change significantly with the impact velocity or the speed of shear band propagation. Shear stresses are uniform inside this zone except near the band tip, where higher rates of strain prevail. The shear band behind the propagating tip exhibits highly localized deformations and intense heating. Temperature rises are relatively small in the active plastic zone compared with those inside the well-developed shear band behind the propagating tip. The calculations also show shear band speeds and temperature rises that are in good agreement with experimental observations. Computed temperature fields confirm the experimental observation that dissipation continues behind the propagating shear band tip. In addition, the numerical results capture the arrest of the shear band. The arrested shear band is first subjected to reverse shear. Subsequently, the arrested band is subjected to mixed-mode loading, which eventually leads to tensile failure at an angle of about 30° to the band.
Learning Representations by Recirculation
We describe a new learning procedure for networks that contain groups of nonlinear units arranged in a closed loop. The aim of the learning is to discover codes that allow the activity vectors in a "visible" group to be represented by activity vectors in a "hidden" group. One way to test whether a code is an accurate representation is to try to reconstruct the visible vector from the hidden vector. The difference between the original and the reconstructed visible vectors is called the reconstruction error, and the learning procedure aims to minimize this error. The learning procedure has two passes. On the first pass, the original visible vector is passed around the loop, and on the second pass an average of the original vector and the reconstructed vector is passed around the loop. The learning procedure changes each weight by an amount proportional to the product of the "presynaptic" activity and the difference in the postsynaptic activity on the two passes. This procedure is much simpler to implement than methods like back-propagation. Simulations in simple networks show that it usually converges rapidly on a good set of codes, and analysis shows that in certain restricted cases it performs gradient descent in the squared reconstruction error.
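A minimal NumPy sketch of the recirculation procedure described above might look like the following; the layer sizes, learning rate, and the 0.75 regression factor are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recon_error(data, W, V):
    """Squared error between visible vectors and their loop reconstructions."""
    return np.mean((data - sigmoid(sigmoid(data @ W) @ V)) ** 2)

rng = np.random.default_rng(1)
n_vis, n_hid = 8, 4
W = rng.normal(0.0, 0.1, (n_vis, n_hid))  # visible -> hidden weights
V = rng.normal(0.0, 0.1, (n_hid, n_vis))  # hidden -> visible weights
lr, lam = 0.2, 0.75                       # learning rate, regression factor

data = np.eye(n_vis)[:4]                  # four one-hot visible patterns
err_before = recon_error(data, W, V)
for epoch in range(1500):
    for v0 in data:
        h0 = sigmoid(v0 @ W)                          # pass 1: hidden code
        v1 = lam * v0 + (1 - lam) * sigmoid(h0 @ V)   # regressed reconstruction
        h1 = sigmoid(v1 @ W)                          # pass 2 around the loop
        # Each update: presynaptic activity times the change in
        # postsynaptic activity between the two passes.
        V += lr * np.outer(h0, v0 - v1)
        W += lr * np.outer(v1, h0 - h1)
err_after = recon_error(data, W, V)
print(err_before, err_after)  # reconstruction error shrinks with training
```

Note that no error derivatives are propagated backward; both weight updates use only locally available activities, which is the procedure's advantage over back-propagation.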
How readers discover content in scholarly publications
ABSTRACTING AND INDEXING SERVICES OR SPECIALIST BIBLIOGRAPHIC DATABASES Major subject A&Is – e.g. Scopus, PubMed, Web of Science – focus on structured access to the highest quality information within a discipline. They typically cover all the key literature but not necessarily all the literature in a discipline. Their utility flows from the perceived certainty and reassurance that they offer to users in providing the authoritative source of search results within a discipline. However, they cannot boast universal coverage of the literature – they provide good coverage of a defined subject niche, but reduce the serendipitous discovery of peripheral material. Also, many A&Is are sold at a premium, which in itself is a barrier to their use. Examples from a wide range of subjects were given in the survey questions to help respondents understand this classification.
Topology Discovery in Software Defined Networks: Threats, Taxonomy, and State-of-the-Art
The fundamental role of the software defined networks (SDNs) is to decouple the data plane from the control plane, thus providing a logically centralized visibility of the entire network to the controller. This enables the applications to innovate through network programmability. To establish a centralized visibility, a controller is required to discover a network topology of the entire SDN infrastructure. However, discovering a network topology is challenging due to: 1) the frequent migration of the virtual machines in the data centers; 2) lack of authentication mechanisms; 3) scarcity of the SDN standards; and 4) integration of security mechanisms for the topology discovery. To this end, in this paper, we present a comprehensive survey of the topology discovery and the associated security implications in SDNs. This survey provides discussions related to the possible threats relevant to each layer of the SDN architecture, highlights the role of the topology discovery in the traditional network and SDN, presents a thematic taxonomy of topology discovery in SDN, and provides insights into the potential threats to the topology discovery along with its state-of-the-art solutions in SDN. Finally, this survey also presents future challenges and research directions in the field of SDN topology discovery.
Effective Congestion Avoidance Scheme for Mobile Ad Hoc Networks
Mobile nodes are organized randomly, without any access point, in Mobile Ad hoc Networks (MANETs). Due to the mobility of nodes, network congestion occurs, and many congestion control mechanisms have been proposed to avoid or reduce congestion. In this research work, we propose the Effective Congestion Avoidance Scheme (ECAS), which consists of congestion monitoring, effective route establishment, and congestion-free routing. The overall congestion status is measured in congestion monitoring. In route establishment, we propose a contention metric for the particular channel in terms of packet queue length, overall congestion level, packet loss rate, and packet dropping ratio to monitor the congestion status. Based on this congestion level, congestion-free routing is established to reduce packet loss, high overhead, and long delay in the network. Extensive simulation shows that the proposed scheme achieves better throughput and packet delivery ratio, and lower end-to-end delay and overhead, than the existing schemes.
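The abstract does not give the exact form of the contention metric, but a hypothetical sketch of combining the named quantities (queue length, loss rate, dropping ratio) into a normalized congestion score, and of preferring the least-congested route, might look like this; the weights and data are assumptions for illustration.

```python
def congestion_score(queue_len, queue_cap, loss_rate, drop_ratio,
                     w_queue=0.4, w_loss=0.3, w_drop=0.3):
    """Weighted congestion score in [0, 1]; the weights are assumptions,
    not taken from the paper."""
    q = min(queue_len / queue_cap, 1.0)
    return w_queue * q + w_loss * loss_rate + w_drop * drop_ratio

def pick_route(routes):
    """Choose the route whose worst-node (bottleneck) congestion is smallest."""
    return min(routes, key=lambda nodes: max(congestion_score(*n) for n in nodes))

# Each node: (queue_len, queue_cap, loss_rate, drop_ratio)
route_a = [(45, 50, 0.20, 0.15), (10, 50, 0.02, 0.01)]  # congested first hop
route_b = [(20, 50, 0.05, 0.04), (25, 50, 0.06, 0.05)]  # moderate throughout
best = pick_route([route_a, route_b])
print(best is route_b)  # True: route_b avoids the congested bottleneck
```

Scoring routes by their bottleneck node, rather than by an average, reflects the intuition that a single congested hop dominates end-to-end delay and loss.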
Interrater reliability of the Pediatric National Institutes of Health Stroke Scale (PedNIHSS) in a multicenter study.
BACKGROUND AND PURPOSE Stroke is an important cause of death and disability among children. Clinical trials for childhood stroke require a valid and reliable acute clinical stroke scale. We evaluated interrater reliability (IRR) of a pediatric adaptation of the National Institutes of Health Stroke Scale. METHODS The pediatric adaptation of the National Institutes of Health Stroke Scale was developed by pediatric and adult stroke experts by modifying each item of the adult National Institutes of Health Stroke Scale for children, retaining all examination items and scoring ranges of the National Institutes of Health Stroke Scale. Children 2 to 18 years of age with acute arterial ischemic stroke were enrolled in a prospective cohort study from 15 North American sites from January 2007 to October 2009. Examiners were child neurologists certified in the adult National Institutes of Health Stroke Scale. Each subject was examined daily for 7 days or until discharge. A subset of patients at 3 sites was scored simultaneously and independently by 2 study neurologists. RESULTS IRR testing was performed in 25 of 113 subjects, a median of 3 days (interquartile range, 2 to 4 days) after symptom onset. Patient demographics, total initial pediatric adaptation of the National Institutes of Health Stroke Scale scores, risk factors, and infarct characteristics in the IRR subset were similar to the non-IRR subset. The 2 raters' total scores were identical in 60% and within 1 point in 84%. IRR was excellent as measured by concordance correlation coefficient of 0.97 (95% CI, 0.94 to 0.99); intraclass correlation coefficient of 0.99 (95% CI, 0.97 to 0.99); precision measured by Pearson ρ of 0.97; and accuracy measured by the bias correction factor of 1.0. CONCLUSIONS There was excellent IRR of the pediatric adaptation of the National Institutes of Health Stroke Scale in a multicenter prospective cohort performed by trained child neurologists.
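The concordance correlation coefficient reported above measures agreement between the two raters' total scores; it can be computed as in this small sketch. The paired scores below are hypothetical, not the study's data.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    Equals 1 only for perfect agreement (all points on the line y = x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical paired rater totals: near-identical ratings give CCC near 1
rater1 = [2, 5, 7, 10, 3, 8]
rater2 = [2, 5, 8, 10, 3, 8]
print(round(concordance_ccc(rater1, rater2), 3))
```

Unlike Pearson correlation, which only measures precision (tightness around the best-fit line), the CCC also penalizes systematic shifts between raters, which is why the abstract reports both.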
Effect of Sulphur and Micronutrients on Growth, Nutrient Content and Yield of Wheat (Triticum aestivum L.)
All the growth characters (LAI, LAR, NAR, RGR and dry matter accumulation) were higher at the high fertility level (120, 60, 60 kg NPK/ha). Maximum LAI was observed at 75 DAS (5.36), LAR at 30 DAS (16374 cm²/g), NAR at 60-75 DAS (8.31 g/m²/day) and RGR at 30-45 DAS (80.65 mg/g/day). Among micronutrients, zinc application increased the different growth parameters and yield, followed by sulphur and manganese. Iron and boron did not influence the growth and yield of wheat significantly. Zinc, manganese and boron content was higher in grain than in straw, whereas iron and sulphur content was higher in straw. INTRODUCTION The importance of micronutrient application in increasing crop production has been recognised in India, and it is becoming evident that without the use of micronutrients it is not possible to get the maximum benefits of NPK fertilizers and high-yielding varieties of wheat. Verma (1968) reported that NPK + B and NPK + all micronutrients increased boron uptake in plants. Takkar and Bhumbla (1968) observed an antagonistic effect between free Mn and extractable Fe in neutral and acidic soils. Information on these aspects is scanty and unsystematic. Therefore, a field experiment was designed and laid down at a farmer's field near the campus of N.D. University of Agriculture and Technology, Faizabad (U.P.), to assess the effect of application of micronutrients on growth parameters and nutrient content in wheat. MATERIAL AND METHODS The soil of the experimental site was sandy loam with a pH of 8.24. Available N, P2O5 and K2O were 220 (low), 34.56 (medium) and 220 (medium) kg/ha, respectively. Table 1: Effect of fertility levels and micronutrients on growth parameters and yield of wheat (pooled data of two seasons).
Pharmacodynamic evaluation of switching from ticagrelor to prasugrel in patients with stable coronary artery disease: Results of the SWAP-2 Study (Switching Anti Platelet-2).
OBJECTIVES The goal of this study was to evaluate the pharmacodynamic effects of switching patients from ticagrelor to prasugrel. BACKGROUND Clinicians may need to switch between more potent P2Y12 inhibitors because of adverse effects or switch to the use of a once-daily dosing regimen due to compliance issues. METHODS After a 3- to 5-day run-in phase with a ticagrelor 180-mg loading dose (LD) followed by a ticagrelor 90-mg twice-daily maintenance dose (MD), aspirin-treated patients (N = 110) with stable coronary artery disease were randomized to continue ticagrelor or switch to prasugrel 10-mg once-daily MD, with or without a 60-mg LD. Pharmacodynamic assessments were defined according to P2Y12 reaction unit (PRU) (P2Y12 assay) and platelet reactivity index (vasodilator-stimulated phosphoprotein phosphorylation assay) at baseline (before and after the run-in phase) and 2, 4, 24, and 48 h and 7 days after randomization. RESULTS Platelet reactivity was significantly greater at 24 and 48 h after switching to prasugrel versus continued therapy with ticagrelor, although to a lesser extent in those receiving an LD. Mean PRU remained significantly higher in the combined prasugrel groups versus the ticagrelor group (least-squares mean difference: 46 [95% confidence interval 25 to 67]) and did not meet the primary noninferiority endpoint (upper limit of the confidence interval ≤45), although PRU in the prasugrel cohort was lower at 7 days than at 24 or 48 h. Accordingly, rates of high on-treatment platelet reactivity were higher at 24 and 48 h in both prasugrel groups. At 7 days, there was no difference in high on-treatment platelet reactivity rate between the combined prasugrel and ticagrelor groups. CONCLUSIONS Compared with continued ticagrelor therapy, switching from ticagrelor to prasugrel therapy was associated with an increase in platelet reactivity that was partially mitigated by the administration of an LD.
Serum homocysteine levels in postmenopausal breast cancer patients treated with tamoxifen
Adjuvant treatment of breast cancer with tamoxifen may be associated with reduced risk of cardiovascular disease. Serum homocysteine level has been suggested to be a risk factor for cardiovascular disease influenced by estrogenic hormones. We evaluated a subset of postmenopausal women who had participated in a longitudinal, double-blind, randomized, placebo-controlled toxicity study of tamoxifen 10 mg orally, twice daily. Twenty-seven treated subjects and 37 placebo subjects had measurements of serum homocysteine levels made on previously frozen samples obtained at baseline and after 12 months. After treatment with tamoxifen, we found lower levels of serum homocysteine of borderline statistical significance.
XIAP antisense oligonucleotide (AEG35156) achieves target knockdown and induces apoptosis preferentially in CD34+38− cells in a phase 1/2 study of patients with relapsed/refractory AML
XIAP, a potent caspase inhibitor, is highly expressed in acute myeloid leukemia (AML) cells and contributes to chemoresistance. A multi-center phase 1/2 trial of XIAP antisense oligonucleotide AEG35156 in combination with idarubicin/cytarabine was conducted in 56 patients with relapsed/refractory AML. Herein we report the pharmacodynamic studies of the patients enrolled at M. D. Anderson Cancer Center. A total of 13 patients were enrolled in our institution: five in phase 1 (12–350 mg/m2 AEG35156) and eight in phase 2 (350 mg/m2 AEG35156) of the protocol. AEG35156 was administered on 3 consecutive days and then weekly up to a maximum of 35 days. Blood samples were collected from patients on days 1 through 5 and on day 28–35 post-chemotherapy for detection of XIAP levels and apoptosis. AEG35156 treatment led to dose-dependent decreases of XIAP mRNA levels (42–100% reduction in phase 2 patients). XIAP protein levels were reduced in all five samples measured. Apoptosis induction was detected in 1/4 phase 1 and 4/5 phase 2 patients. Importantly, apoptosis was most pronounced in CD34 + 38 − AML stem cells and all phase 2 patients showing apoptosis induction in CD34 + 38 − cells achieved response. We conclude that at 350 mg/m2, AEG35156 is effective in knocking down XIAP in circulating blasts accompanied by the preferential induction of apoptosis in CD34 + 38 − AML stem cells.
Transforaminal lumbar interbody fusion (TLIF) versus posterolateral instrumented fusion (PLF) in degenerative lumbar disorders: a randomized clinical trial with 2-year follow-up
The aim of the present study was to analyze outcome, with respect to functional disability, pain, fusion rate, and complications, of patients treated with transforaminal lumbar interbody fusion (TLIF) compared with instrumented posterolateral fusion (PLF) alone for low back pain. Spinal fusion has become a major procedure worldwide. However, conflicting results exist. Theoretically, circumferential fusion could improve functional outcome; however, the theoretical advantages lack scientific documentation. This was a prospective randomized clinical study with a 2-year follow-up period. From November 2003 to November 2008, 100 patients with severe low back pain and radicular pain were randomly allocated to either posterolateral lumbar fusion [titanium TSRH (Medtronic)] or transforaminal lumbar interbody fusion [titanium TSRH (Medtronic)] with anterior intervertebral support by a tantalum cage (Implex/Zimmer). The primary outcome scores were obtained using the Dallas Pain Questionnaire (DPQ), the Oswestry Disability Index, SF-36, and the Low Back Pain Rating Scale. All measures assessed the endpoints at 2-year follow-up after surgery. The overall follow-up rate was 94%. The sex ratio was 40/58. 51 patients had TLIF, 47 PLF. Mean age was 49 (TLIF) / 45 (PLF). No statistical difference in outcome between groups could be detected concerning daily activity, work/leisure, anxiety/depression, or social interest. We found no statistical difference concerning back pain or leg pain. In both the TLIF and the PLF groups the patients had significant improvement in functional outcome, back pain, and leg pain compared with preoperative values. Operation time and blood loss in the TLIF group were significantly higher than in the PLF group (p < 0.001). No statistical difference in fusion rates was detected. Transforaminal interbody fusion did not improve functional outcome in patients compared with posterolateral fusion. Both groups improved significantly in all categories compared with preoperative values.
Operation time and blood loss were significantly higher in the TLIF group.
The structure of DSM-IV ADHD, ODD, and CD criteria in adolescent boys: A hierarchical approach
Numerous studies have examined the structure of the childhood externalizing disorder symptoms of Attention Deficit Hyperactivity Disorder (ADHD), Oppositional Defiant Disorder (ODD), and Conduct Disorder (CD), both separately as well as simultaneously. The present study expanded on previous findings by implementing a multi-level hierarchical approach to investigating the component structure of ADHD, ODD, and CD criteria in 487 14-year-old boys from the Minnesota Twin Family Study (MTFS). We found support for a hierarchical conceptualization of externalizing behavior criteria in early adolescent boys by specifying how one-, two-, three-, four-, five- and six-factor models of externalizing criteria can be integrated. These results suggest that it may be more beneficial to conceptualize different levels of this hierarchy as relevant to different issues in case conceptualization and research design, from the broad level of an overall externalizing spectrum, to the level of finer-grained subtypes within specific disorders.
Design of a compact planar MIMO antenna for LTE mobile application
Recently, the long-term evolution (LTE) is considered as one of the most promising 4th generation (4G) mobile standards to increase the capacity and speed of mobile handset networks [1]. In order to realize the LTE wireless communication system, the diversity and multiple-input multiple-output (MIMO) systems have been introduced [2]. In a MIMO mobile user terminal such as handset or USB dongle, at least two uncorrelated antennas should be placed within an extremely restricted space. This task becomes especially difficult when a MIMO planar antenna is designed for LTE band 13 (the corresponding wavelength is 390 mm). Due to the limited space available for antenna elements, the antennas are strongly coupled with each other and have narrow bandwidth.
A generative layout approach for rooted tree drawings
In response to the large number of existing tree layouts, generic “meta-layouts” have recently been proposed. These generic approaches utilize layout design spaces to pinpoint a tree drawing with desired characteristics in the wealth of available drawing options and parameters. While design-space-based generic layouts work well for the confined set of implicit space-filling tree layouts, they have so far eluded their extension to explicit node-link diagrams. In order to produce both, implicit and explicit tree layouts, this paper parts with the descriptive nature of the design spaces and instead takes a generative approach based on operators. As these operators can be combined into operator sequences and be used at different stages of the layout process, a small operator set already suffices to yield a large number of different tree layouts. To this end, we present a generic tree layout pipeline and give examples of suitable layout operators to plug into the pipeline. A prototypical implementation of our pipeline and operators is presented, and it is illustrated with space-filling and node-link examples. Furthermore, the paper presents results from a user study evaluating our generative approach as it is realized by the prototype.
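The operator-pipeline idea described above might be sketched as follows; the initial layered layout and the single "polar" operator are illustrative stand-ins for the paper's actual operator set.

```python
import math

# A rooted tree as {node: [children]}, rooted at "r"
tree = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}

def tidy_init(tree, root):
    """Initial explicit layout: x = leaf order, y = depth (layered node-link)."""
    pos, next_x = {}, [0]
    def place(n, depth):
        kids = tree[n]
        if not kids:
            pos[n] = (float(next_x[0]), float(depth))
            next_x[0] += 1
        else:
            for k in kids:
                place(k, depth + 1)
            xs = [pos[k][0] for k in kids]
            pos[n] = (sum(xs) / len(xs), float(depth))  # center over children
    place(root, 0)
    return pos

def op_polar(pos):
    """Layout operator: bend the layered drawing into a radial one."""
    max_x = max(x for x, _ in pos.values()) or 1.0
    return {n: (y * math.cos(2 * math.pi * x / (max_x + 1)),
                y * math.sin(2 * math.pi * x / (max_x + 1)))
            for n, (x, y) in pos.items()}

def run_pipeline(tree, root, ops):
    """Apply a sequence of layout operators to the initial layout."""
    pos = tidy_init(tree, root)
    for op in ops:
        pos = op(pos)
    return pos

radial = run_pipeline(tree, "r", [op_polar])
print(radial["r"])  # root sits at the origin of the radial layout
```

Because each operator maps a layout to a layout, small operator sets compose into many distinct drawings, which is the generative point the paper makes.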
VEEVVIE: Visual Explorer for Empirical Visualization, VR and Interaction Experiments
Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.
Factors influencing Chinese Consumer Online Group-Buying Purchase Intention: An Empirical Study
Background: Because of the high-speed development of e-commerce, online group buying has become a popular new pattern of consumption for Chinese consumers. Previous research has studied online group-buying (OGB) purchase intention in some specific areas such as Taiwan, but not in mainland China. Purpose: The purpose of this study is to build on the Technology Acceptance Model, incorporating other potential driving factors to address how they influence Chinese consumers' online group-buying purchase intentions. Method: The study takes two steps to achieve its purpose. First, I use the focus-group interview technique to collect primary data; the results, combined with the Technology Acceptance Model, help me propose hypotheses. Second, the questionnaire method is applied for empirical data collection. The constructs are validated with exploratory factor analysis and reliability analysis, and the model is then tested with linear multiple regression. Findings: The adapted research model was successfully tested in this study. The seven factors (perceived usefulness, perceived ease of use, price, e-trust, word of mouth, website quality and perceived risk) have significant effects on Chinese consumers' online group-buying purchase intentions. This study suggests that managers of group-buying websites need to design easy-to-use platforms for users. Moreover, group-buying website companies need to propose rules or regulations to protect consumers' rights, so that when conflicts occur, e-vendors can follow these rules to provide solutions that are reasonable and satisfying for consumers.
The Application of Fuzzy Control in Water Tank Level Using Arduino
Fuzzy logic control has been successfully utilized in various industrial applications; it is generally used in complex control systems, such as chemical process control. Today, most fuzzy logic controllers are still implemented on expensive high-performance processors. This paper analyzes the effectiveness of a fuzzy logic controller implemented on a low-cost controller and applied to a water level control system. The paper also gives a low-cost hardware solution and a practical procedure for system identification and control. First, the mathematical model of the process was obtained with the help of Matlab. Then two methods were used to control the system: PI (proportional-integral) control and fuzzy control. Simulation and experimental results are presented.
Keywords: fuzzy control; PI; PID; Arduino; system identification
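The fuzzy side of such a comparison can be illustrated with a minimal sketch. The membership breakpoints, rule table, and pump-duty singletons below are illustrative assumptions for a water-level loop, not the controller tuned in the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic sets for the level error (setpoint - measured), in cm (assumed ranges).
ERROR_SETS = {"N": (-20, -10, 0), "Z": (-10, 0, 10), "P": (0, 10, 20)}
# Rule base: error label -> crisp pump-command singleton (0..100 % duty).
RULES = {"N": 0.0, "Z": 50.0, "P": 100.0}

def fuzzy_pump_command(error):
    """Weighted-average (singleton) defuzzification of the fired rules."""
    num = den = 0.0
    for label, (a, b, c) in ERROR_SETS.items():
        w = tri(error, a, b, c)       # firing strength of this rule
        num += w * RULES[label]
        den += w
    return num / den if den > 0 else 50.0
```

On an Arduino-class target the same logic ports directly to integer C, which is one reason fuzzy control is feasible without a high-performance processor.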
Knowledge Sharing Barriers and Effectiveness at a Higher Education Institution
In most of today’s academic circles, faculty knowledge is rarely shared with colleagues in the same institution in any meaningful or systematic way. This investigation sought answers to two questions regarding the faculty’s perceived knowledge sharing (KS) barriers and the influence that KS barriers may have on KS effectiveness. A data set was collected from seventy-six faculty members. The analysis revealed four key KS barriers: bounded individual capacity is the most perceived barrier to KS, followed by inadequate organizational capability, fear of knowledge revelation, and knowledge nature. Fear of knowledge revelation was found to be the most influential barrier on KS effectiveness, as it influences three of the four KS effectiveness measures, namely awareness of research activities in one’s department, sharing of research knowledge with others in the institution, and satisfaction with sharing research knowledge with others in the institution. These findings contribute to the growing empirical KS research and provide an appropriate foundation for decision making and policy formulation aiming at fostering KS effectiveness in academe. DOI: 10.4018/jkm.2012040103, International Journal of Knowledge Management, 8(2), 43-64, April-June 2012.
Although academia is enabled by leading-edge technologies, there is little systematic sharing of learning content, context, and supporting materials among academics (Norris et al., 2006). Knowledge is hardly disbursed and rarely crosses disciplinary boundaries, as it mainly resides in archipelagos of individual knowledge clusters that are unavailable for systematic sharing (Norris et al., 2003). If knowledge is power, shared knowledge is real power (Jayalakshmi, 2006).
Knowledge sharing (KS) is also the backbone of the four knowledge creation processes (i.e., socialization, externalization, combination, and internalization) identified in the SECI model (Nonaka & Takeuchi, 1995). Barriers to KS would, therefore, impede the leverage and accumulation of the intellectual assets of an organization. Nevertheless, efforts aiming at enhancing KS among faculty must be guided by evidence on the barriers that may hamper that sharing. Although a number of earlier studies attempted to explore knowledge management (KM) practice in higher education institutions (e.g., Petrides & Nodine, 2002; Arntzen et al., 2009; Sohail & Daud, 2009; Tian et al., 2009), the KM literature, in general, is short in evidence from the field on faculty’s KS practice and barriers in such institutions, where knowledge occupies a central and pervasive place. This research explores the barriers that may impede the effective sharing of research-related knowledge among faculty at a USA-based higher education institution. More specifically, it attempts to answer two questions: (1) what are the perceived barriers to KS among the faculty in the investigated institution? and (2) do the perceived KS barriers influence the effectiveness of sharing research-related knowledge among the faculty in the investigated institution? The remainder of this paper is organized accordingly. The research background is described next, followed by the research methodology, research results, discussion of the research results, research implications, limitations and future research, and the paper ends with conclusions. BACKGROUND Knowledge is a social construct. It exists in tacit and explicit forms, which are complementary and symbiotic. Innovation can occur only when explicit and tacit knowledge interact (Nonaka, 1994; Norris et al., 2006). 
While people can understand information individually and in isolation, knowledge can be only understood in a context of interactivity and communication with others (Norris et al., 2003). Although researchers have defined KS differently depending on their views of the term and their research purposes (e.g., Lee, 2001; Bock & Kim, 2002; MacNeil, 2003; Ryu et al., 2003; Lin & Lee, 2004; Yi, 2009), KS is basically the exchange of different types of knowledge between individuals, groups, units, and organizations. KS is about connection, not collection, and connection is ultimately a personal choice (Dougherty, 1999). Knowledge can be shared through different mechanisms, depending on the nature of the knowledge itself. KS can occur explicitly when an individual or a unit communicates with another individual or another unit, or implicitly through norms and routines. KS includes not only the transmission (sending) of knowledge but also the absorption of the knowledge by the receiver. Gupta and Govindarajan (2000) postulate that knowledge mobilization is a key task that organizations must perform. However, knowledge is to some extent complex and contextual and hard to transfer. It cannot be easily shared even when individuals intend or wish to share it. Norris et al. (2003) posit that many academics and educators are unreflective about the nature of knowledge outside their immediate domains of interest. They hold some types of knowledge in high regard and highly respect personalized knowledge that they have accumulated over time. Therefore, academic knowledge may mainly remain a “cottage industry” which makes it rather difficult to share with others. Yet, the KM literature suggests that KS barriers vary from individual to organizational to technological. 
As to the individual barriers, Yoo and Torrey (2002) postulate that KS
A least-norm approach to flattenable mesh surface processing
Following the definition of developable surfaces in differential geometry, the flattenable mesh surface, a special type of piecewise-linear surface, inherits the good property of developable surfaces of having an isometric map from its 3D shape to a corresponding planar region. Unlike developable surfaces, a flattenable mesh surface is more flexible for modelling objects with complex shapes (e.g., crumpled paper or warped leather with wrinkles). Modelling a flattenable mesh from a given input mesh surface can be formulated as a constrained nonlinear optimization problem. In this paper, we reformulate the problem in terms of estimation error, so that the shape of a flattenable mesh can be computed faster by least-norm solutions. Moreover, a method for adding shape constraints to the modelling of flattenable mesh surfaces is exploited. We show that the proposed method can compute flattenable mesh surfaces from input piecewise-linear surfaces successfully and efficiently.
Optimization of Triboelectric Nanogenerator Charging Systems for Efficient Energy Harvesting and Storage
Triboelectric nanogenerator (TENG) technology has emerged as a new mechanical energy harvesting technology with numerous advantages. This paper analyzes its charging behavior together with a load capacitor. Numerical and analytical modeling shows that the charging performance of a TENG with a bridge rectifier under periodic external mechanical motion is completely analogous to that of a dc voltage source in series with an internal resistance. An optimum load capacitance that matches the TENG's impedance is observed for maximum stored energy. This optimum load capacitance is shown theoretically to be linearly proportional to the number of charging cycles and the inherent TENG capacitance. Experiments were also performed to further validate the theoretical predictions and show the potential of this work in guiding real experimental designs.
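The dc-source-plus-internal-resistance analogy makes the optimum easy to reproduce numerically. The sketch below sweeps the load capacitance of a first-order RC charging model; the source voltage, resistance, and charging-time values are arbitrary placeholders, not the paper's measured TENG parameters:

```python
import math

def stored_energy(C, V=100.0, R=1e8, T=10.0):
    """Energy on a load capacitor C after charging for time T through
    resistance R from an equivalent dc source V (first-order RC model)."""
    vc = V * (1.0 - math.exp(-T / (R * C)))  # capacitor voltage at time T
    return 0.5 * C * vc * vc

def optimum_capacitance(V=100.0, R=1e8, T=10.0):
    """Brute-force log-spaced sweep for the capacitance maximizing energy."""
    caps = [1e-9 * 1.05 ** k for k in range(300)]
    return max(caps, key=lambda C: stored_energy(C, V, R, T))
```

In this model, doubling the total charging time T (a proxy for the number of cycles at a fixed motion frequency) roughly doubles the optimum load capacitance, consistent with the linear scaling stated above.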
Comorbid depression and anxiety spectrum disorders.
The relationship between depression and anxiety disorders has long been a matter of controversy. The overlap of symptoms associated with these disorders makes diagnosis, research, and treatment particularly difficult. Recent evidence suggests genetic and neurobiologic similarities between depressive and anxiety disorders. Comorbid depression and anxiety are highly prevalent conditions. Patients with panic disorder, generalized anxiety disorder, social phobia, and other anxiety disorders are also frequently clinically depressed. Approximately 85% of patients with depression also experience significant symptoms of anxiety. Similarly, comorbid depression occurs in up to 90% of patients with anxiety disorders. Patients with comorbid disorders do not respond as well to therapy, have a more protracted course of illness, and experience less positive treatment outcomes. One key to successful treatment of patients with mixed depressive and anxiety disorders is early recognition of comorbid conditions. Antidepressant medications, including the selective serotonin reuptake inhibitors, tricyclic antidepressants, and monoamine oxidase inhibitors, are highly effective in the management of comorbid depression and anxiety. The high rates of comorbid depression and anxiety argue for well-designed treatment studies in these populations.
Do no harm: toward contextually appropriate psychosocial support in international emergencies.
In the aftermath of international emergencies caused by natural disasters or armed conflicts, strong needs exist for psychosocial support on a large scale. Psychologists have developed and applied frameworks and tools that have helped to alleviate suffering and promote well-being in emergency settings. Unfortunately, psychological tools and approaches are sometimes used in ways that cause unintended harm. In a spirit of prevention and wanting to support critical self-reflection, the author outlines key issues and widespread violations of the do no harm imperative in emergency contexts. Prominent issues include contextual insensitivity to issues such as security, humanitarian coordination, and the inappropriate use of various methods; the use of an individualistic orientation that does not fit the context and culture; an excessive focus on deficits and victimhood that can undermine empowerment and resilience; the use of unsustainable, short-term approaches that breed dependency, create poorly trained psychosocial workers, and lack appropriate emphasis on prevention; and the imposition of outsider approaches. These and related problems can be avoided by the use of critical self-reflection, greater specificity in ethical guidance, a stronger evidence base for intervention, and improved methods of preparing international humanitarian psychologists.
Joint optimization of signal constellation bit labeling for bit-interleaved coded modulation with iterative decoding
We optimize the signal constellation and bit labeling for bit-interleaved coded modulation with iterative decoding (BICM-ID). The target is to minimize the bit error rate floor and the signal-to-noise ratio of the turbo cliff. Various optimal non-orthogonal 16-QAM mappings are presented. An improvement of 0.2 dB is shown compared to a state-of-the-art orthogonal QAM constellation with approximately the same error floor. To obtain these results, we derive in closed form the probability density function of the extrinsic L-values of the demapper under perfect a priori knowledge. This allows fast computation of mutual information in the EXIT chart.
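For consistent Gaussian L-values (variance σ², mean σ²/2), the mutual information plotted on EXIT-chart axes can be estimated as below. This Monte-Carlo sketch only stands in for the fast closed-form pdf computation derived in the paper:

```python
import math, random

def mutual_information(sigma, n=50000, seed=1):
    """Monte-Carlo estimate of I(X; L) for consistent Gaussian L-values
    (mean sigma^2/2, std sigma), conditioned on the transmitted bit x = 0:
    I = 1 - E[log2(1 + exp(-L))]."""
    rng = random.Random(seed)
    mu = sigma * sigma / 2.0
    acc = 0.0
    for _ in range(n):
        L = rng.gauss(mu, sigma)            # one extrinsic L-value sample
        acc += math.log2(1.0 + math.exp(-L))
    return 1.0 - acc / n
```

The estimate runs from 0 (no a priori knowledge, σ = 0) toward 1 (perfect knowledge, large σ), which is exactly the range of an EXIT-chart transfer curve.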
High-resolution patterning of graphene by screen printing with a silicon stencil for highly flexible printed electronics.
High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.
ABT-450/r-ombitasvir and dasabuvir with or without ribavirin for HCV.
BACKGROUND The interferon-free regimen of ABT-450 with ritonavir (ABT-450/r), ombitasvir, and dasabuvir with or without ribavirin has shown efficacy in inducing a sustained virologic response in a phase 2 study involving patients with hepatitis C virus (HCV) genotype 1 infection. We conducted two phase 3 trials to examine the efficacy and safety of this regimen in previously untreated patients with HCV genotype 1 infection and no cirrhosis. METHODS We randomly assigned 419 patients with HCV genotype 1b infection (PEARL-III study) and 305 patients with genotype 1a infection (PEARL-IV study) to 12 weeks of ABT-450/r-ombitasvir (at a once-daily dose of 150 mg of ABT-450, 100 mg of ritonavir, and 25 mg of ombitasvir), dasabuvir (250 mg twice daily), and ribavirin administered according to body weight or to matching placebo for ribavirin. The primary efficacy end point was a sustained virologic response (an HCV RNA level of <25 IU per milliliter) 12 weeks after the end of treatment. RESULTS The study regimen resulted in high rates of sustained virologic response among patients with HCV genotype 1b infection (99.5% with ribavirin and 99.0% without ribavirin) and among those with genotype 1a infection (97.0% and 90.2%, respectively). Of patients with genotype 1b infection, 1 had virologic failure, and 2 did not have data available at post-treatment week 12. Among patients with genotype 1a infection, the rate of virologic failure was higher in the ribavirin-free group than in the ribavirin group (7.8% vs. 2.0%). In both studies, decreases in the hemoglobin level were significantly more common in patients receiving ribavirin. Two patients (0.3%) discontinued the study drugs owing to adverse events. The most common adverse events were fatigue, headache, and nausea. 
CONCLUSIONS Twelve weeks of treatment with ABT-450/r-ombitasvir and dasabuvir without ribavirin was associated with high rates of sustained virologic response among previously untreated patients with HCV genotype 1 infection. Rates of virologic failure were higher without ribavirin than with ribavirin among patients with genotype 1a infection but not among those with genotype 1b infection. (Funded by AbbVie; PEARL-III and PEARL-IV ClinicalTrials.gov numbers, NCT01767116 and NCT01833533.).
An improved K-means algorithm combined with Particle Swarm Optimization approach for efficient web document clustering
Searching for and discovering relevant information on the web has always been a challenging task. It is very hard to wade through the large number of documents returned in response to a user query. This leads to the need to organize large sets of documents into categories through clustering, and hence to the need for efficient clustering algorithms. Clustering on large datasets can be done effectively using partitional clustering algorithms. The K-means algorithm is an appropriate partitional clustering approach for handling large datasets because of its efficiency with respect to execution time, but it is highly susceptible to the selection of the initial positions of the cluster centers. This paper introduces a new hybrid method using Particle Swarm Optimization (PSO) combined with an improved K-means algorithm for document clustering. We have tested the K-means, PSO, and our proposed PSOK, KPSO and KPSOK algorithms on various text document collections. The number of documents in the datasets ranges from 204 to 878 and the number of terms from 5804 to 7454. There is clear evidence from our results that the proposed method achieves better clustering than the other methods taken for study.
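The hybrid idea — PSO searching over flattened centroid vectors, then K-means refining the global best — can be sketched as below. The inertia/acceleration constants and the toy 2-D data are illustrative assumptions, not the paper's PSOK/KPSO/KPSOK variants, which operate on high-dimensional term vectors:

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cost(points, cents):
    # Clustering cost: sum of squared distances to the nearest centroid.
    return sum(min(dist2(p, c) for c in cents) for p in points)

def lloyd(points, cents, iters=20):
    # Plain K-means refinement starting from the given centroids.
    k = len(cents)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist2(p, cents[j]))].append(p)
        cents = [tuple(sum(v) / len(g) for v in zip(*g)) if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents

def pso_kmeans(points, k, particles=8, steps=40, seed=0):
    # PSO over flattened centroid vectors picks a good K-means seeding.
    rng = random.Random(seed)
    d = len(points[0])
    dim = k * d
    flat = lambda cs: [x for c in cs for x in c]
    unflat = lambda v: [tuple(v[i:i + d]) for i in range(0, dim, d)]
    xs = [flat(rng.sample(points, k)) for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pb = [x[:] for x in xs]                       # personal bests
    pbc = [cost(points, unflat(x)) for x in xs]
    g = pb[pbc.index(min(pbc))][:]                # global best
    for _ in range(steps):
        for i in range(particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][j] = (0.7 * vs[i][j] + 1.5 * r1 * (pb[i][j] - xs[i][j])
                            + 1.5 * r2 * (g[j] - xs[i][j]))
                xs[i][j] += vs[i][j]
            c = cost(points, unflat(xs[i]))
            if c < pbc[i]:
                pb[i], pbc[i] = xs[i][:], c
                if c < cost(points, unflat(g)):
                    g = xs[i][:]
    return lloyd(points, unflat(g))               # K-means polishes the best seed
```

Because PSO explores many seedings in parallel, the final Lloyd refinement is far less sensitive to a single unlucky initialization than plain K-means.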
Direction-of-arrival estimation using radiation power pattern with an ESPAR antenna
An approach for estimating direction-of-arrival (DoA) based on power output cross-correlation and antenna pattern diversity is proposed for a reactively steerable antenna. An "estimator condition" is proposed, from which the most appropriate pattern shape is derived. Computer simulations with directive beam patterns obtained from an electronically steerable parasitic array radiator antenna model are conducted to illustrate the theory and to inspect the method performance with respect to the "estimator condition". The simulation results confirm that a good estimation can be expected when suitable directive patterns are chosen. In addition, to verify performance, experiments on estimating DoA are conducted in an anechoic chamber for several angles of arrival and different scenarios of antenna adjustable reactance values. The results show that the proposed method can provide high-precision DoA estimation.
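The power-pattern cross-correlation idea can be sketched as follows: given the measured output power for each reactance setting, pick the candidate angle whose stored pattern profile correlates best with the measurement. The pattern table and its cosine-beam construction in the test are hypothetical placeholders for calibrated ESPAR patterns:

```python
import math

def doa_estimate(measured, pattern_table):
    """pattern_table maps candidate angle -> predicted power for each
    reactance (pattern) setting. Returns the angle whose predicted
    profile has the highest Pearson correlation with the measurement."""
    def corr(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        den = math.sqrt(sum((a - mu) ** 2 for a in u)
                        * sum((b - mv) ** 2 for b in v))
        return num / den if den else 0.0
    return max(pattern_table, key=lambda ang: corr(measured, pattern_table[ang]))
```

This also hints at the "estimator condition" in the abstract: the stored profiles must be distinctive enough across angles that no two candidate angles produce (near-)proportional power profiles.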
Detection and Recognition of Painted Road Surface Markings
A method for the automatic detection and recognition of text and symbols painted on the road surface is presented. Candidate regions are detected as maximally stable extremal regions (MSER) in a frame which has been transformed into an inverse perspective mapping (IPM) image, showing the road surface with the effects of perspective distortion removed. Detected candidates are then sorted into words and symbols, before they are interpreted using separate recognition stages. Symbol-based road markings are recognised using histogram of oriented gradient (HOG) features and support vector machines (SVM). Text-based road signs are recognised using a third-party optical character recognition (OCR) package, after application of a perspective correction stage. Matching of regions between frames, and temporal fusion of results is used to improve performance. The proposed method is validated using a data-set of videos, and achieves F-measures of 0.85 for text characters and 0.91 for symbols.
Script identification in natural scene image and video frames using an attention based Convolutional-LSTM network
Script identification plays a significant role in analysing documents and videos. In this paper, we focus on the problem of script identification in scene text images and video scripts. Because of low image quality, complex backgrounds, and the similar layout of characters shared by some scripts such as Greek and Latin, text recognition in those cases becomes challenging. We propose a novel method that extracts local and global features using a CNN-LSTM framework and weights them dynamically for script identification. First, we convert the images into patches and feed them into the CNN-LSTM framework. Attention-based patch weights are calculated by applying a softmax layer after the LSTM. Next, we multiply these weights patch-wise with the corresponding CNN features to yield local features. Global features are also extracted from the last cell state of the LSTM. We employ a fusion technique which dynamically weights the local and global features for an individual patch. Experiments have been done on four public script identification datasets: SIW-13, CVSI2015, ICDAR-17 and MLe2e. The proposed framework achieves superior results in comparison to conventional methods.
Keywords: script identification; convolutional neural network; long short-term memory; local feature; global feature; attention network; dynamic weighting
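The attention-and-fusion step above can be sketched numerically. The patch scores, feature vectors, and blending weight alpha below are placeholders for the learned CNN/LSTM quantities, and the convex blend is one simple reading of "dynamic weighting":

```python
import math

def softmax(xs):
    """Numerically stable softmax, as used for attention weights."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def fuse(patch_feats, patch_scores, global_feat, alpha):
    """Attention-weighted sum of per-patch features (local feature),
    then a convex blend with the global feature (alpha in [0, 1])."""
    w = softmax(patch_scores)
    local = [sum(wi * f[d] for wi, f in zip(w, patch_feats))
             for d in range(len(patch_feats[0]))]
    return [alpha * l + (1 - alpha) * g for l, g in zip(local, global_feat)]
```

With equal patch scores the local feature reduces to a plain average of patch features; distinctive patches (e.g. script-specific glyph shapes) receive higher weight and dominate the fused descriptor.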
Earnings management, stock issues, and shareholder lawsuits
Abnormal accounting accruals are unusually high around stock offers, especially for firms whose offers subsequently attract lawsuits. Accruals tend to reverse after stock offers and are negatively related to post-offer stock returns. Reversals are more pronounced, and stock returns are lower, for sued firms than for those that are not sued. The incidence of lawsuits involving stock offers and the settlement amounts are significantly positively related to abnormal accruals around the offer and significantly negatively related to post-offer stock returns. Our results support the view that some firms opportunistically manipulate earnings upward before stock issues, rendering themselves vulnerable to litigation. © 2003 Elsevier B.V. All rights reserved. JEL classification: G14; G24; G32; K22; M41
Transcatheter pacemaker implantation in a patient with a bioprosthetic tricuspid valve
A 66-year-old white female, status post mitral and tricuspid valve replacement, was seen in cardiac consultation and diagnosed with atrial fibrillation and high-degree atrioventricular (AV) block with symptomatic bradycardia; pacemaker placement was therefore recommended. For this particular patient, a standard single-lead implantable pacemaker, while not contraindicated, was not a preferable solution because of the patient’s newly implanted bioprosthetic tricuspid valve and the potential complications that transvalvular lead placement may bring [1]. No FDA-approved leadless pacemaker is available in the USA at this time; however, initial safety and feasibility trials have shown comparable results to traditional transvenous leads [2].
Genomic Profiling Reveals the Potential Role of TCL1A and MDR1 Deficiency in Chemotherapy-Induced Cardiotoxicity
BACKGROUND Anthracyclines, such as doxorubicin (Adriamycin), are highly effective chemotherapeutic agents, but are well known to cause myocardial dysfunction and life-threatening congestive heart failure (CHF) in some patients. METHODS To generate new hypotheses about its etiology, genome-wide transcript analysis was performed on whole-blood RNA from women who received doxorubicin-based chemotherapy and either did, or did not, develop CHF, as defined by an ejection fraction (EF) ≤ 40%. Women with non-ischemic cardiomyopathy unrelated to chemotherapy were compared to breast cancer patients with normal EFs prior to chemotherapy, to identify heart-failure-related transcripts in women not receiving chemotherapy. Byproducts of oxidative stress in plasma were measured in a subset of patients. RESULTS The results indicate that patients treated with doxorubicin showed sustained elevations of oxidative byproducts in plasma. At the RNA level, women who exhibited low EFs after chemotherapy had 260 transcripts that differed >2-fold (p<0.05) compared with women who received chemotherapy but maintained normal EFs. Most of these transcripts (201) were not altered in non-chemotherapy patients with low EFs. Pathway analysis of the differentially expressed genes indicated enrichment in apoptosis-related transcripts. Notably, women with chemotherapy-induced low EFs had a 4.8-fold decrease in T-cell leukemia/lymphoma 1A (TCL1A) transcripts. TCL1A is expressed in both cardiac and skeletal muscle, and is a known co-activator of AKT, one of the major pro-survival factors for cardiomyocytes. Further, women who developed low EFs had a 2-fold lower level of ABCB1 transcript, encoding the multidrug resistance protein 1 (MDR1), an efflux pump for doxorubicin, potentially leading to higher cardiac levels of the drug. In vitro studies confirmed that inhibition of MDR1 by verapamil in rat H9C2 cardiomyocytes increased their susceptibility to doxorubicin-induced toxicity.
CONCLUSIONS It is proposed that chemo-induced cardiomyopathy may be due to a reduction in TCL1A levels, thereby causing increased apoptotic sensitivity, and leading to reduced cardiac MDR1 levels, causing higher cardiac levels of doxorubicin and intracellular free radicals. If so, screening for TCL1A and MDR1 SNPs or expression level in blood, might identify women at greatest risk of chemo-induced heart failure.
Smoking correlates with flow-mediated brachial artery vasoactivity but not cold pressor vasoactivity in men with coronary artery disease
Impaired endothelial function is observed as altered vasomotion in both the peripheral and coronary circulation in the presence of cardiovascular risk factors and early atherogenesis. An improvement in endothelium-dependent vasoactivity has been reported with both cholesterol reduction and smoking cessation. This study was performed to determine whether smoking status in coronary artery disease (CAD) affects both flow-mediated and cold pressor vasoactivity. We studied 25 men (ages 30–59; 12 smokers, 13 nonsmokers) with angiographically documented coronary artery disease and cardiac risk factors. Using 7.5-MHz ultrasound, we measured brachial artery diameter and Doppler flow velocity at baseline, following 5 min of ipsilateral blood pressure cuff occlusion and release (flow-mediated), during contralateral ice water hand immersion (cold pressor test), and after sublingual nitroglycerin administration (an endothelium-independent vasodilator). The flow-mediated percent diameter change was significantly less in the smokers than in the nonsmokers (1.9 ± 5.7% vs 11.4 ± 7.2%, p < 0.001). Smokers and nonsmokers responded similarly to the cold pressor test (−3.9 ± 2.3% vs −1.2 ± 0.2%) and nitroglycerin (15.1 ± 7.6% vs 17.5 ± 8.3%). Cholesterol level did not appear to be an independent determinant of flow-mediated vasoactivity when smoking status was taken into account. Flow-mediated vasoactivity is associated with smoking status in the presence of coronary artery disease, but cold-pressor-induced vasoactivity is not.
Engineering Better Wheelchairs to Enhance Community Participation
With about 2.2 million Americans currently using wheeled mobility devices, wheelchairs are frequently provided to people with impaired mobility to give them access to the community. Individuals with spinal cord injuries, arthritis, balance disorders, and other conditions or diseases are typical users of wheelchairs. However, wheelchairs also introduce risks of secondary injuries and wheelchair-related accidents. Research is underway to advance wheelchair design to prevent or accommodate secondary injuries related to propulsion and transfer biomechanics, while improving safe, functional performance and accessibility to the community. This paper summarizes ongoing research and development aimed at enhancing safety and optimizing wheelchair design.
Emotional relief for parents: Is rational-emotive parent education effective?
The effects of a rational-emotive parent education program were studied on forty-eight parents from a nonclinical population using a pre-test, post-test control group design. The RET parenting program included four components: a) reducing emotional stress through disputing irrational beliefs, b) implementing rational discipline methods, c) rational problem solving skills and d) fostering rational thinking traits in their child. Four dependent variables were studied: parent irrationality, parent emotionality, parent perceptions of child problems and the perception of participants' parenting by their spouses. Results showed that for experimental group subjects there was a statistically significant reduction in parent irrationality, parent guilt and parent anger. An exploratory ten month follow-up suggested maintenance of effects, a reduction in perceived child behavior problems, and changes in parental irrational beliefs regarding self worth.
"All I know about politics is what I read in Twitter": Weakly Supervised Models for Extracting Politicians' Stances From Twitter
During the 2016 United States presidential election, politicians increasingly used Twitter to express their beliefs, stances on current political issues, and reactions to national and international events. Given the limited length of tweets and the scrutiny politicians face for what they choose or neglect to say, they must craft and time their tweets carefully. The content and delivery of these tweets are therefore highly indicative of a politician’s stances. We present a weakly supervised method for extracting how issues are framed and temporal activity patterns on Twitter for popular politicians and issues of the 2016 election. These behavioral components are combined into a global model which collectively infers the most likely stance and agreement patterns among politicians, with respective accuracies of 86.44% and 84.6% on average.
What is an emerging technology?
There is considerable and growing interest in the emergence of novel technologies, especially from the policy-making perspective. Yet as an area of study, emerging technologies lacks key foundational elements, namely a consensus on what classifies a technology as 'emergent' and strong research designs that operationalize central theoretical concepts. The present paper aims to fill this gap by developing a definition of 'emerging technologies' and linking this conceptual effort with the development of a framework for the operationalisation of technological emergence. The definition is developed by combining a basic understanding of the term and in particular the concept of 'emergence' with a review of key innovation studies dealing with definitional issues of technological emergence. The resulting definition identifies five attributes that feature in the emergence of novel technologies. These are: (i) radical novelty, (ii) relatively fast growth, (iii) coherence, (iv) prominent impact, and (v) uncertainty and ambiguity. The framework for operationalising emerging technologies is then elaborated on the basis of the proposed attributes. To do so, we identify and review major empirical approaches (mainly in, although not limited to, the scientometric domain) for the detection and study of emerging technologies (these include indicators and trend analysis, citation analysis, co-word analysis, overlay mapping, and combinations thereof) and elaborate on how these can be used to operationalise the different attributes of emergence.
Worsening of neck and shoulder complaints in humans is correlated with frequency parameters of the electromyogram recorded 1 year earlier
The aim was to investigate whether output and electromyogram (EMG) variables obtained from an isokinetic endurance test of the shoulder flexor muscles of 23 women with neck and shoulder problems in the car and truck industry correlated with improvement or worsening of complaints 1 year later. Each subject performed 100 maximal isokinetic shoulder forward flexions at 60°·s⁻¹. Surface EMG of the trapezius, deltoid, biceps brachii and infraspinatus muscles and mechanical output (peak torque) were determined for each contraction. The EMG was used to determine the mean frequency (f_mean) and the ratio between the EMG signal amplitudes of the passive relaxation and active flexion parts of each contraction cycle (SAR). The subjects also rated the degree of fatigue they experienced throughout the test. The magnitude of the shift in f_mean was correlated with whether improvement or worsening occurred for complaints in the neck and/or shoulders; a significant relationship (r² = 0.44; P = 0.001) existed between the total frequency shift of the four muscles and the variables measuring improvement in complaints. In the multivariate predictions, other f_mean variables and the perception of fatigue were also significant. The present study indicates that a high degree of f_mean shift correlates with improvement in neck and shoulder complaints 1 year later. One possible reason could be that f_mean reflects the muscle morphology and/or a pathological situation for the type-1 muscle fibres.
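The mean frequency tracked in this study is conventionally defined as the power-weighted average of the EMG spectrum, f_mean = Σ f·P(f) / Σ P(f). A minimal sketch of that computation (the toy spectrum below is invented; a real analysis would estimate the power spectrum from the raw EMG, e.g. with Welch's method):

```python
def mean_frequency(freqs, power):
    """Power-weighted mean frequency of an EMG power spectrum:
    f_mean = sum(f * P(f)) / sum(P(f))."""
    total = sum(power)
    return sum(f * p for f, p in zip(freqs, power)) / total

# Toy spectrum with power concentrated symmetrically around 80 Hz.
freqs = [60.0, 80.0, 100.0]   # Hz
power = [1.0, 2.0, 1.0]       # arbitrary power units
print(mean_frequency(freqs, power))  # 80.0
```

The fatigue-related f_mean shift reported above would then be the difference between this value computed early and late in the 100-contraction test.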
Determinants of image quality of rotational angiography for on-line assessment of frame geometry after transcatheter aortic valve implantation
To study the determinants of image quality of rotational angiography using dedicated research prototype software for motion compensation, without rapid ventricular pacing, after the implantation of four commercially available catheter-based valves. Prospective observational study including 179 consecutive patients who underwent transcatheter aortic valve implantation (TAVI) with either the Medtronic CoreValve (MCS), Edwards SAPIEN Valve (ESV), Boston Sadra Lotus (BSL) or St. Jude Portico Valve (SJP), in whom rotational angiography (R-angio) with motion-compensated 3D image reconstruction was performed. Image quality was graded from 1 (excellent image quality) to 5 (strongly degraded), and a distinction was made between good (grades 1–2) and poor image quality (grades 3–5). Clinical (gender, body mass index, Agatston score, heart rate and rhythm, artifacts), procedural (valve type) and technical variables (isocentricity) were related to the image quality assessment. Image quality was good in 128 (72%) and poor in 51 (28%) patients. By univariable analysis, only valve type (BSL) and the presence of an artifact negatively affected image quality. By multivariable analysis (in which BMI was forced into the model), BSL valve (OR 3.5, 95% CI [1.3–9.6], p = 0.02), presence of an artifact (OR 2.5, 95% CI [1.2–5.4], p = 0.02) and BMI (OR 1.1, 95% CI [1.0–1.2], p = 0.04) were independent predictors of poor image quality. Rotational angiography with motion-compensated 3D image reconstruction using dedicated research prototype software offers good image quality for the evaluation of frame geometry after TAVI in the majority of patients. Valve type, the presence of artifacts and higher BMI negatively affect image quality.
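The odds ratios with confidence intervals reported above follow the standard 2×2-table form; a minimal sketch of an odds ratio with a Wald 95% CI (the counts below are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed & poor quality,   b = exposed & good quality,
    c = unexposed & poor quality, d = unexposed & good quality."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to show the calculation.
or_, lo, hi = odds_ratio_ci(10, 10, 20, 70)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 3.5 1.28 9.59
```

The adjusted odds ratios in the abstract come from a multivariable logistic regression rather than a raw 2×2 table, but the interpretation of OR and CI is the same.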
Scalable water splitting on particulate photocatalyst sheets with a solar-to-hydrogen energy conversion efficiency exceeding 1%.
Photocatalytic water splitting using particulate semiconductors is a potentially scalable and economically feasible technology for converting solar energy into hydrogen. Z-scheme systems based on two-step photoexcitation of a hydrogen evolution photocatalyst (HEP) and an oxygen evolution photocatalyst (OEP) are well suited to harvesting sunlight because semiconductors with either water reduction or oxidation activity can be applied to the water splitting reaction. However, it is challenging to achieve efficient transfer of electrons between HEP and OEP particles. Here, we present photocatalyst sheets based on La- and Rh-codoped SrTiO3 (SrTiO3:La,Rh) and Mo-doped BiVO4 (BiVO4:Mo) powders embedded into a gold (Au) layer. Enhancement of the electron relay by annealing and suppression of undesirable reactions through surface modification allow pure water (pH 6.8) splitting with a solar-to-hydrogen energy conversion efficiency of 1.1% and an apparent quantum yield of over 30% at 419 nm. The photocatalyst sheet design enables efficient and scalable water splitting using particulate semiconductors.
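The solar-to-hydrogen (STH) efficiency quoted here is conventionally defined as STH = (R_H2 × ΔG) / (P_sun × S), where R_H2 is the hydrogen evolution rate, ΔG ≈ 237 kJ/mol is the Gibbs energy of water splitting, P_sun is the incident irradiance (100 mW/cm² for AM1.5G sunlight) and S is the irradiated area. A minimal sketch of the calculation (the hydrogen rate below is a hypothetical value chosen for illustration, not the paper's measurement):

```python
def sth_efficiency(h2_rate_mol_per_s, area_cm2,
                   delta_g=237_000.0,   # J/mol, Gibbs energy of water splitting
                   irradiance=0.100):   # W/cm^2, AM1.5G standard sunlight
    """Solar-to-hydrogen efficiency: chemical power stored in evolved H2
    divided by the incident solar power on the irradiated area."""
    return (h2_rate_mol_per_s * delta_g) / (irradiance * area_cm2)

# Hypothetical: 4.64e-9 mol H2 per second from 1 cm^2 of sheet.
print(round(sth_efficiency(4.64e-9, 1.0) * 100, 2))  # 1.1 (percent)
```

Note that only half-reaction stoichiometry consistent with overall water splitting (H2:O2 = 2:1) should be counted in R_H2 when applying this definition.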