title | abstract |
---|---|
When Bitcoin encounters information in an online forum: Using text mining to analyse user opinions and predict value fluctuation | Bitcoin is an online currency that is used worldwide to make online payments. It has consequently become an investment vehicle in itself and is traded in a way similar to other open currencies. The ability to predict the price fluctuation of Bitcoin would therefore facilitate future investment and payment decisions. To predict the price fluctuation of Bitcoin, we analyse the comments posted in the Bitcoin online forum. Unlike most research on Bitcoin-related online forums, which is limited to simple sentiment analysis and pays insufficient attention to noteworthy user comments, our approach extracts keywords from Bitcoin-related user comments posted on the online forum in order to analytically predict the price and extent of transaction fluctuation of the currency. The effectiveness of the proposed method is validated on Bitcoin online forum data spanning 2.8 years, from December 2013 to September 2016. |
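The abstract does not spell out the prediction pipeline; below is a minimal sketch of one plausible reading, keyword features feeding a price-direction classifier. The forum comments, labels, and scikit-learn setup are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: TF-IDF keyword features from daily forum comments,
# logistic regression predicting next-day price direction. Data is fake.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

comments_by_day = [                              # one document per day:
    "bitcoin rally exchange volume surging",     # all comments concatenated
    "hack exchange fear selling panic",
    "halving optimism adoption growing",
]
price_up = [1, 0, 1]                             # 1 if next-day price rose

vectorizer = TfidfVectorizer(max_features=1000, stop_words="english")
X = vectorizer.fit_transform(comments_by_day)
model = LogisticRegression().fit(X, price_up)

print(model.predict(vectorizer.transform(["panic selling after hack"])))
```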
Has the U.S. Economy Become More Stable? A Bayesian Approach Based on a Markov-Switching Model of the Business Cycle | We hope to answer three questions: Has there been a structural break in postwar U.S. real GDP growth towards stabilization? If so, when? What is the nature of this structural break? We employ a Bayesian approach to identify a structural break at an unknown changepoint in a Markov-switching model of the business cycle. Empirical results suggest a break in GDP growth toward stabilization, with the posterior mode of the break date at 1984:1. Furthermore, we find a narrowing gap between growth rates during recessions and booms that is at least as important as any decline in the volatility of shocks. |
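The abstract does not write the model out; a standard two-state Markov-switching specification with a one-time structural break at an unknown date, consistent with the description above, is sketched here (notation mine, not the paper's):

```latex
\Delta y_t = \mu_{S_t, D_t} + \varepsilon_t, \qquad
\varepsilon_t \sim N\!\bigl(0, \sigma^2_{D_t}\bigr), \qquad
D_t = \mathbf{1}\{t \ge \tau\},
```

where $S_t \in \{\text{recession}, \text{boom}\}$ follows a first-order Markov chain and Bayesian inference places a posterior over the changepoint $\tau$ (mode at 1984:1). The reported "narrowing gap" corresponds to $|\mu_{\text{boom},1} - \mu_{\text{rec},1}| < |\mu_{\text{boom},0} - \mu_{\text{rec},0}|$.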
Accurate Continuous Sweeping Framework in Indoor Spaces With Backpack Sensor System for Applications to 3-D Mapping | In indoor environments, there exist a few distinctive indoor-space features (ISFs). To our knowledge, however, no algorithm fully utilizes ISFs for accurate 3-D SLAM. In this letter, we suggest a sensor system that efficiently captures ISFs and propose an algorithmic framework that accurately estimates the sensor's 3-D poses by utilizing them. Experiments conducted in six representative indoor spaces show that the accuracy of the proposed method is better than that of the previous method. Furthermore, the proposed method shows robust performance in the sense that a set of adjusted parameters of the related algorithms does not need to be recalibrated as the target environment changes. We also demonstrate that the proposed method not only generates 3-D depth maps but also builds a dense 3-D RGB-D map. |
THE GLASS CEILING HYPOTHESIS A Comparative Study of the United States, Sweden, and Australia | The general-case glass ceiling hypothesis states that not only is it more difficult for women than for men to be promoted up levels of authority hierarchies within workplaces but also that the obstacles women face relative to men become greater as they move up the hierarchy. Gender-based discrimination in promotions is not simply present across levels of hierarchy but is more intense at higher levels. Empirically, this implies that the relative rates of women being promoted to higher levels compared to men should decline with the level of the hierarchy. This article explores this hypothesis with data from three countries: the United States, Australia, and Sweden. The basic conclusion is that while there is strong evidence for a general gender gap in authority—the odds of women having authority are less than those of men—there is no evidence for systematic glass ceiling effects in the United States and only weak evidence for such effects in the other two countries. |
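The empirical implication stated above can be made precise. Writing $p^{W}_{k}$ and $p^{M}_{k}$ for the rates at which women and men are promoted from level $k$ to level $k+1$, a sketch of the two nested claims (notation mine, not the article's) is:

```latex
\underbrace{\frac{p^{W}_{k}}{p^{M}_{k}} < 1 \;\; \text{for all } k}_{\text{general gender gap}}
\qquad\text{and}\qquad
\underbrace{\frac{p^{W}_{k+1}}{p^{M}_{k+1}} < \frac{p^{W}_{k}}{p^{M}_{k}}}_{\text{glass ceiling effect}} .
```

The article's finding is strong evidence for the first inequality but little for the second.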
Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models | Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on abstract ratings or on selecting among 2D, flaccid images. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as to examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected among 33 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm, circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm, circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest women accurately recall size and prefer penises only slightly larger than average. |
Decision support with belief functions theory for seabed characterization | Seabed characterization from sonar images is a very hard task, even for a human expert, because of the nature of the produced data and the unknown environment. In this work we propose an original approach for combining binary classifiers arising from different kinds of strategies, such as one-versus-one or one-versus-rest, usually used in SVM classification. The decision functions coming from these binary classifiers are interpreted in terms of belief functions, so that they can be combined with one of the numerous operators of belief function theory. Moreover, this interpretation of the decision function allows us to propose a decision process that also accounts for rejected observations (too far removed from the learning data) and for imprecise decisions given as unions of classes. This new approach is illustrated and evaluated with an SVM classifying the different kinds of sediment in sonar images. |
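As a concrete illustration of the combination step, the sketch below applies Dempster's rule, one of the many combination operators the abstract alludes to, to two mass functions on a two-class frame; mass on the union {a, b} encodes an imprecise decision. The numeric masses are hypothetical, not derived from actual SVM outputs.

```python
# Minimal sketch of Dempster's rule of combination on the frame {a, b}.
# Focal elements are frozensets; the union {a, b} encodes an imprecise
# decision. Mass values here are hypothetical.
from itertools import product

A, B, AB = frozenset("a"), frozenset("b"), frozenset("ab")

m1 = {A: 0.6, B: 0.1, AB: 0.3}   # e.g. from one binary SVM's decision value
m2 = {A: 0.4, B: 0.4, AB: 0.2}   # e.g. from another binary SVM

combined, conflict = {}, 0.0
for (f1, v1), (f2, v2) in product(m1.items(), m2.items()):
    inter = f1 & f2
    if inter:
        combined[inter] = combined.get(inter, 0.0) + v1 * v2
    else:
        conflict += v1 * v2      # mass assigned to the empty set

combined = {f: v / (1 - conflict) for f, v in combined.items()}  # normalize
print(combined)   # belief remaining on {a}, {b} and the union {a, b}
```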
Are Coherence Protocol States Vulnerable to Information Leakage? | Most commercial multi-core processors incorporate hardware coherence protocols to support efficient data transfers and updates between their constituent cores. While hardware coherence protocols provide immense benefits for application performance by removing the burden of software-based coherence, we note that understanding the security vulnerabilities posed by such oft-used, widely-adopted processor features is critical for secure processor designs in the future. In this paper, we demonstrate a new vulnerability exposed by cache coherence protocol states. We present novel insights into how adversaries could cleverly manipulate the coherence states of shared cache blocks and construct covert timing channels to illegitimately communicate secrets to a spy. We demonstrate six different practical scenarios for covert timing channel construction. In contrast to prior works, we assume a broader adversary model in which the trojan and spy can either exploit explicitly shared read-only physical pages (e.g., shared library code) or use the memory-deduplication feature to implicitly force the creation of shared physical pages. We demonstrate how adversaries can manipulate combinations of coherence states and data placement in different caches to construct timing channels. We also explore how adversaries could exploit multiple caches and their associated coherence states to improve transmission bandwidth with symbols encoding multiple bits. Our experimental results on commercial systems show that the peak transmission bandwidths of these covert timing channels can vary between 700 and 1100 Kbits/sec. To the best of our knowledge, our study is the first to highlight the vulnerability of hardware cache coherence protocols to timing channels, which can help computer architects craft effective defenses against exploits on such critical processor features. |
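To make the multi-bit symbol idea concrete, here is a toy sketch of the spy's decoding step: bucketing measured access latencies into symbols by thresholds. The latency values and cut-offs are illustrative only; a real attack measures cycle-accurate timings on actual hardware.

```python
# Toy sketch of the spy's decoding step in a coherence-state timing channel:
# classify measured access latencies into multi-bit symbols by thresholds.
# Latency values and thresholds below are illustrative, not from hardware.
latencies_ns = [15, 80, 160, 14, 82, 158]   # hypothetical probe timings

def decode(latency):
    # e.g. local-cache hit vs. remote-cache transfer vs. memory fetch,
    # each latency class induced by a different coherence state
    if latency < 40:
        return "00"   # fast: block was in a local cache state
    elif latency < 120:
        return "01"   # medium: block fetched from another core's cache
    return "10"       # slow: block state forced a memory access

bits = "".join(decode(t) for t in latencies_ns)
print(bits)   # -> "000110000110"
```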
Politics and the American clergy: Sincere shepherds or strategic saints? | Scholars have evaluated the causes of clergy political preferences and behavior for decades. As with party ID in the study of mass behavior, personal ideological preferences have been the relevant clergy literature's dominant behavioral predictor. Yet to the extent that clergy operate in bounded and specialized institutions, it is possible that much of the clergy political puzzle can be more effectively solved by recognizing these elites as institutionally-situated actors, with their preferences and behaviors influenced by the institutional groups with which they interact. I argue that institutional reference groups help to determine clergy political preferences and behavior. Drawing on three theories derived from neo-institutionalism, I assess reference group influence on clergy in two mainline Protestant denominations-the Presbyterian Church (USA) and the Episcopal Church, USA. In addition to their wider and more traditional socializing influence, reference groups in close proximity to clergy induce them to behave strategically-in ways that are contrary to their sincerely held political preferences. These proximate reference groups comprise mainly parishioners, suggesting that clergy political behavior, which is often believed to affect laity political engagement, may be predicated on clergy anticipation of potentially unfavorable reactions from their followers. The results show a set of political elites (the clergy) to be highly responsive to strategic pressure from below. This turns the traditional relationship between elites and masses on its head, and suggests that further examination of institutional reference group influence on clergy, and other political elites, is warranted. |
Aneka Cloud Application Platform and Its Integration with Windows Azure | Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a framework for building customized applications and deploying them on either public or private Clouds. One of the key features of Aneka is its support for provisioning resources on different public Cloud providers such as Amazon EC2, Windows Azure and GoGrid. In this chapter, we present the Aneka platform and its integration with one of the public Cloud infrastructures, Windows Azure, which enables the usage of the Windows Azure Compute Service as a resource provider for Aneka PaaS. The integration of the two platforms allows users to leverage the power of the Windows Azure Platform for Aneka Cloud Computing, employing a large number of compute instances to run their applications in parallel. Furthermore, customers of the Windows Azure platform can benefit from the integration with Aneka PaaS by embracing the advanced features of Aneka in terms of multiple programming models, scheduling and management services, application execution services, accounting and pricing services and dynamic provisioning services. Finally, in addition to the Windows Azure Platform, we illustrate in this chapter the integration of Aneka PaaS with other public Cloud platforms such as Amazon EC2 and GoGrid, and with virtual machine management platforms such as Xen Server. The new support for provisioning resources on Windows Azure once again proves the adaptability, extensibility and flexibility of Aneka. |
Barriers to Uptake of Cataract Surgery: An Eye Camp Account | About 17.6 million people are blind from cataract globally. About 486,000 Nigerians are estimated to be blind, a prevalence of about 1.8%. This level is still considered high. Cataract surgery remains the only, and a very good, treatment option for cataract blindness. Factors preventing people from accessing sight-restoration services remain a challenge to eye care delivery. Free cataract surgery is becoming a regular practice in Nigeria; despite this, many people continue to turn up blind from cataract in one or both eyes. This study is interested in why people still present with blinding cataract. Using interviewer-assisted questionnaires, a descriptive study was carried out among cataract-blind patients who turned up for cataract surgery during an eye camp. About 1,570 persons were screened. Of this number, 297 were found to have cataract with visual acuity of 6/60 or worse; 167 were bilaterally blind. The questionnaire was administered to the 297 persons, and complete information was obtained from 211 of the respondents. Cost of surgery was the greatest cause of delay in uptake of cataract surgery, cited by 171 (81%) persons, followed by ignorance in 18 persons. Cost of surgery still causes considerable delay in the uptake of cataract surgery in the study population. The cataract surgical rate needs to be increased. |
Diagnostic accuracy of an agarose gel electrophoretic method in multiple sclerosis. | To the Editor: In >95% of patients suffering from clinically unambiguous multiple sclerosis (MS), oligoclonal IgG can be detected in cerebrospinal fluid (CSF) (1, 2). Isoelectric focusing is the most sensitive method for detecting oligoclonal bands (OCBs) (1), whereas agarose gel electrophoresis is reported to detect OCBs in ~80% of cases (2). A commercial electrophoresis unit (Beckman Instruments) is commonly used in clinical laboratories. Laboratory support of a clinically probable diagnosis of MS in combination with brain magnetic resonance imaging is important to allow initiation of appropriate treatment as early as possible. We evaluated the sensitivity of the Beckman modified agarose gel electrophoresis to detect OCB and estimated its diagnostic accuracy. We examined paired CSF and sera from 168 patients (122 females) with clinically unambiguous MS (3) for the presence of oligoclonal IgG. We followed the manufacturer's instructions for application of samples to the buffered agarose gel (SPEII gel). Electrophoresis was performed at 100 V for 40 min, followed by immobilization in a fixative solution and staining. All tests were done by the same technician and evaluated by the same experienced hematologist, both unaware of clinical or magnetic resonance imaging data. OCB was detectable in 90 of the 168 patients (53%; 95% confidence interval, 52.9–53.1%). The mean IgG index [(CSF-IgG/serum-IgG)/(CSF-albumin/serum-albumin)] of OCB-negative patients was 0.87. Patients with detectable OCB had a mean IgG index of 1.28 (reference value <0.7). This electrophoretic method does not appear to provide adequate clinical sensitivity because false-negative results in the detection of OCB may delay diagnosis and the start of treatment. |
A Robust Method for Prevention of Second Order and Stored Procedure based SQL Injections | Today's interconnected computer network is complex and constantly growing in size. As per the OWASP Top 10 list of 2013 [1], injection is the top vulnerability in web applications, and SQL injection [2] is the most dangerous attack among injection attacks. Most of the available techniques provide an incomplete solution. When attacking via SQL injection, an attacker typically uses spaces, single quotes, or double dashes in the input so as to change the intended meaning of the query generated at runtime from these inputs. Stored-procedure-based and second order SQL injections are two types of SQL injection that are difficult to detect and hence difficult to prevent. This work concentrates on stored-procedure-based and second order SQL injections and proposes a robust method for preventing them. |
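The abstract does not detail the proposed method; for context, the sketch below shows the standard mitigation any robust scheme builds on: parameterized queries, which keep attacker-supplied quotes and dashes as data rather than SQL syntax. Table and column names are illustrative. Second order injection arises when stored input is later concatenated into a query, so the same binding discipline must apply at every query site, including inside stored procedures.

```python
# Sketch of the standard mitigation: parameterized queries keep attacker
# input (quotes, dashes, spaces) as data, never as SQL syntax.
# Table and column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe: the driver binds the value; the payload matches no row.
rows = conn.execute("SELECT role FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)   # [] -- the injection attempt returns nothing
```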
You Are Wrong! - Automatic Detection of Interaction Errors from Brain Waves | Brain-computer interfaces, as any other interaction modality based on physiological signals and body channels (e.g., muscular activity, speech and gestures), are prone to errors in the recognition of subject’s intent. In this paper we exploit a unique feature of the “brain channel”, namely that it carries information about cognitive states that are crucial for a purposeful interaction. One of these states is the awareness of erroneous responses. Different physiological studies have shown the presence of error-related potentials (ErrP) in the EEG recorded right after people get aware they have made an error. However, for human-computer interaction, the central question is whether ErrP are also elicited when the error is made by the interface during the recognition of the subject’s intent and no longer by errors of the subject himself. In this paper we report experimental results with three volunteer subjects during a simple human-robot interaction (i.e., bringing the robot to either the left or right side of a room) that seem to reveal a new kind of ErrP, which is satisfactorily recognized in single trials. These recognition rates significantly improve the performance of the brain interface. |
Evaluating Criticism of Smart Growth 28 May 2015 | This paper evaluates various criticisms of smart growth. It defines the concept of smart growth, contrasts it with sprawl, and describes common smart growth strategies. It examines various criticisms of smart growth including the claims that it harms consumers, infringes on freedom, increases traffic congestion and air pollution, reduces housing affordability, causes social problems, increases public service costs, requires wasteful transit subsidies and is unjustified. Some specific critics’ papers are examined. This analysis indicates that many claims by critics reflect an incomplete understanding of smart growth or inaccurate analysis. Critics identify some legitimate problems that must be addressed to optimize smart growth, but present no convincing evidence to diminish overall justifications for smart growth. |
New analysis and results for the Frank-Wolfe method | We present new results for the Frank–Wolfe method (also known as the conditional gradient method). We derive computational guarantees for arbitrary step-size sequences, which are then applied to various step-size rules, including simple averaging and constant step-sizes. We also develop step-size rules and computational guarantees that depend naturally on the warm-start quality of the initial (and subsequent) iterates. Our results include computational guarantees for both duality/bound gaps and the so-called FW gaps. Lastly, we present complexity bounds in the presence of approximate computation of gradients and/or linear optimization subproblem solutions. Mathematics Subject Classification 90C06 · 90C25 · 65K05 |
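For reference, the method the abstract analyzes takes, at each iterate $x_k$ over a compact convex set $Q$, the standard update (a sketch of the textbook scheme, not the paper's notation):

```latex
\tilde{x}_k \in \arg\min_{x \in Q}\, \nabla f(x_k)^{\top} x, \qquad
x_{k+1} \leftarrow x_k + \bar{\alpha}_k\,(\tilde{x}_k - x_k), \quad \bar{\alpha}_k \in [0, 1],
```

and the FW gap mentioned above is $G_k := \nabla f(x_k)^{\top}(x_k - \tilde{x}_k)$, which upper-bounds $f(x_k) - f^{*}$ and so certifies progress at every iteration.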
Effect of Gen2 protocol parameters on RFID tag performance | In this paper, we analyze the effect of Gen2 protocol parameters on RFID tag performance (tag sensitivity and backscatter efficiency). We describe our measurement methodology and characterize several tags built on different recent Gen2 ICs available on the market (Monza, UCODE, and Higgs families). To confirm our findings, we repeat the measurements using a conducted tag setup. We analyze the data and draw conclusions on how the protocol parameters affect tag performance in the forward and reverse links. |
HTTPS traffic analysis and client identification using passive SSL/TLS fingerprinting | The encryption of network traffic complicates legitimate network monitoring, traffic analysis, and network forensics. In this paper, we present real-time lightweight identification of HTTPS clients based on network monitoring and SSL/TLS fingerprinting. Our experiment shows that it is possible to estimate the User-Agent of a client in HTTPS communication via analysis of the SSL/TLS handshake. The fingerprints of SSL/TLS handshakes, including the list of supported cipher suites, differ among clients and correlate with User-Agent values from an HTTP header. We built a dictionary of SSL/TLS cipher suite lists and HTTP User-Agents and assigned the User-Agents to the observed SSL/TLS connections to identify communicating clients. The dictionary was used to classify live HTTPS network traffic. We were able to retrieve client types from 95.4% of HTTPS network traffic. Further, we discussed host-based and network-based methods of dictionary retrieval and estimated the quality of the data. |
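The core lookup the paper describes can be sketched in a few lines: a dictionary from an observed cipher-suite list to the User-Agent(s) previously seen with it. The fingerprint entries below are hypothetical; real ones come from pairing observed SSL/TLS handshakes with HTTP User-Agent headers.

```python
# Sketch of the dictionary step: map an observed ClientHello cipher-suite
# list to candidate User-Agents. Fingerprint entries are hypothetical.
fingerprint_db = {
    (0x1301, 0x1302, 0xC02B): ["Mozilla/5.0 (Firefox)"],
    (0x1301, 0xC02F, 0xC02B): ["Mozilla/5.0 (Chrome)"],
}

def identify_client(cipher_suites):
    """Return candidate User-Agents for an observed cipher-suite list."""
    return fingerprint_db.get(tuple(cipher_suites), ["unknown client"])

print(identify_client([0x1301, 0x1302, 0xC02B]))   # -> Firefox candidate
```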
Medical diagnosis using neural network | This research searches for alternatives for the resolution of complex medical diagnoses, where human knowledge must be apprehended in a general fashion. Successful application examples show that the neural diagnostic system significantly outperforms human diagnostic capabilities. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm, with backpropagation, offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added one at a time to improve the accuracy of the network and to reach an optimal network size. The MFNNCA was tested on several benchmark classification problems, including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce an optimal neural network architecture with good generalization ability. |
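A minimal sketch of the constructive idea follows, growing the hidden layer one unit at a time and keeping the smallest size that maximizes validation accuracy. It uses scikit-learn's MLPClassifier and a bundled benchmark as stand-ins for MFNNCA and the paper's datasets, and it retrains from scratch at each size rather than freezing previous weights, a simplification of true constructive training.

```python
# Sketch: grow the single hidden layer one unit at a time and keep the
# smallest network that maximizes validation accuracy. Stand-in for MFNNCA.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

best_acc, best_units = 0.0, 0
for units in range(1, 11):               # add one hidden unit at a time
    net = MLPClassifier(hidden_layer_sizes=(units,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    acc = net.score(X_va, y_va)
    if acc > best_acc:
        best_acc, best_units = acc, units

print(f"near-minimal size: {best_units} hidden units, accuracy {best_acc:.3f}")
```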
Eplerenone and atrial fibrillation in mild systolic heart failure: results from the EMPHASIS-HF (Eplerenone in Mild Patients Hospitalization And SurvIval Study in Heart Failure) study. | OBJECTIVES
The purpose of this study was to analyze the incidence of new atrial fibrillation or flutter (AFF) in the EMPHASIS-HF (Eplerenone in Mild Patients Hospitalization And SurvIval Study in Heart Failure) database.
BACKGROUND
Aldosterone antagonism in heart failure might influence atrial fibrosis and remodeling and, therefore, risk of developing AFF. The development of new AFF was a pre-specified secondary endpoint in the EMPHASIS-HF study.
METHODS
Patients in New York Heart Association functional class II and with ejection fraction ≤35% were eligible for EMPHASIS-HF. History of AFF at baseline was reported by investigators using the study case report form. New onset AFF (in those with no history of AFF at baseline) was reported using a specific endpoint form; in a sensitivity analysis we also examined the effect of eplerenone on AFF reported as an adverse event.
RESULTS
New onset AFF was significantly reduced by eplerenone: 25 of 911 (2.7%) versus 40 of 883 (4.5%) in the placebo group (hazard ratio [HR]: 0.58, 95% confidence interval [CI]: 0.35 to 0.96; p = 0.034). The reduction in the primary endpoint with eplerenone was similar among patients with and without AFF at baseline (HR: 0.60, 95% CI: 0.46 to 0.79 vs. HR: 0.70, 95% CI: 0.57 to 0.85, respectively; p for interaction = 0.41). The risk of cardiovascular (CV) death or hospital admission for worsening heart failure, the primary endpoint, was not significantly different in subjects with and without AFF at baseline (both study groups combined: HR: 1.23, 95% CI: 0.81 to 1.86; p = 0.33).
CONCLUSIONS
In patients with systolic heart failure and mild symptoms, eplerenone reduced the incidence of new onset AFF. The effects of eplerenone on the reduction of major CV events were similar in patients with and without AFF at baseline. |
Neuromuscular and biomechanical characteristics do not vary across the menstrual cycle | Research examining the menstrual cycle and its relationship to ACL injury has focused on determining the incidence of ACL injury during the different phases of the menstrual cycle and assessing the changes in neuromuscular and biomechanical characteristics between these phases. Conflicting results warrant further investigation to determine if neuromuscular and biomechanical characteristics respond in a similar pattern to the fluctuating estradiol and progesterone. The purpose of this study was to determine if changes in the levels of estradiol and progesterone significantly altered fine motor coordination, postural stability, knee strength, and knee joint kinematics and kinetics between the menses, post-ovulatory, and mid-luteal phases of the menstrual cycle. Ten healthy and physically active females (Age: 21.4 ± 1.4 years, Height: 1.67 ± 0.06 m, Mass: 59.9 ± 7.4 kg), who did not use oral contraceptives, were recruited from the local university population. Single-leg postural stability, fine motor coordination, knee strength, knee biomechanics, and serum estradiol and progesterone were assessed at the menses, post-ovulatory, and mid-luteal phases of the menstrual cycle. Levels of estradiol were significantly higher during the post-ovulatory (P = 0.016) and mid-luteal phases (P < 0.001) compared to the menses phase. Levels of progesterone were significantly lower during the menses (P < 0.001) and post-ovulatory phases (P < 0.001) compared to the mid-luteal phase. No significant differences existed between phases of the menstrual cycle for fine motor coordination (P = 0.474), postural stability (P = 0.707), hamstring – quadriceps strength ratio at 60°s−1 (P = 0.748) or 180°s−1 (P = 0.789), knee flexion excursion (P = 0.6), knee valgus excursion (P = 0.899), peak proximal tibial anterior shear force (P = 0.797), flexion moment at peak proximal tibial anterior shear force (P = 0.698), or valgus moment at peak proximal tibial anterior shear force (P = 0.924). The results of the current study suggest neuromuscular and biomechanical characteristics are not influenced by estradiol and progesterone fluctuations. All neuromuscular and biomechanical characteristics remained invariable between testing sessions despite concentration changes in estradiol and progesterone. |
Empirical analysis of programming language adoption | Some programming languages become widely popular while others fail to grow beyond their niche or disappear altogether. This paper uses survey methodology to identify the factors that lead to language adoption. We analyze large datasets, including over 200,000 SourceForge projects, 590,000 projects tracked by Ohloh, and multiple surveys of 1,000-13,000 programmers.
We report several prominent findings. First, language adoption follows a power law; a small number of languages account for most language use, but the programming market supports many languages with niche user bases. Second, intrinsic features have only secondary importance in adoption. Open source libraries, existing code, and experience strongly influence developers when selecting a language for a project. Language features such as performance, reliability, and simple semantics do not. Third, developers will steadily learn and forget languages. The overall number of languages developers are familiar with is independent of age. Finally, when considering intrinsic aspects of languages, developers prioritize expressivity over correctness. They perceive static types as primarily helping with the latter, hence partly explaining the popularity of dynamic languages. |
Design of Highly Efficient Broadband Class-E Power Amplifier Using Synthesized Low-Pass Matching Networks | A new methodology for designing and implementing high-efficiency broadband Class-E power amplifiers (PAs) using high-order low-pass filter-prototype is proposed in this paper. A GaN transistor is used in this work, which is carefully modeled and characterized to prescribe the optimal output impedance for the broadband Class-E operation. A sixth-order low-pass filter-matching network is designed and implemented for the output matching, which provides optimized fundamental and harmonic impedances within an octave bandwidth (L-band). Simulation and experimental results show that an optimal Class-E PA is realized from 1.2 to 2 GHz (50%) with a measured efficiency of 80%-89%, which is the highest reported today for such a bandwidth. An overall PA bandwidth of 0.9-2.2 GHz (84%) is measured with 10-20-W output power, 10-13-dB gain, and 63%-89% efficiency throughout the band. Furthermore, the Class-E PA is characterized through measurements using constant-envelop global system for mobile communications signals, indicating a favorable adjacent channel power ratio from -40 to -50 dBc within the entire bandwidth. |
Where Is Current Research on Blockchain Technology?—A Systematic Review | Blockchain is a decentralized transaction and data management technology developed first for the Bitcoin cryptocurrency. Interest in Blockchain technology has been increasing since the idea was coined in 2008. The reason for the interest in Blockchain is its central attributes that provide security, anonymity and data integrity without any third party organization in control of the transactions, and it therefore creates interesting research areas, especially from the perspective of technical challenges and limitations. In this research, we have conducted a systematic mapping study with the goal of collecting all relevant research on Blockchain technology. Our objective is to understand the current research topics, challenges and future directions regarding Blockchain technology from the technical perspective. We have extracted 41 primary papers from scientific databases. The results show that the focus of over 80% of the papers is on the Bitcoin system and that less than 20% deal with other Blockchain applications, including, e.g., smart contracts and licensing. The majority of research focuses on revealing and improving the limitations of Blockchain from privacy and security perspectives, but many of the proposed solutions lack concrete evaluation of their effectiveness. Many other Blockchain scalability related challenges, including throughput and latency, have been left unstudied. On the basis of this study, recommendations on future research directions are provided for researchers. |
Adaptive Stochastic Gradient Descent Optimisation for Image Registration | We present a stochastic gradient descent optimisation method for image registration with adaptive step size prediction. The method is based on the theoretical work by Plakhov and Cruz (J. Math. Sci. 120(1):964–973, 2004). Our main methodological contribution is the derivation of an image-driven mechanism to select proper values for the most important free parameters of the method. The selection mechanism employs general characteristics of the cost functions that commonly occur in intensity-based image registration. Also, the theoretical convergence conditions of the optimisation method are taken into account. The proposed adaptive stochastic gradient descent (ASGD) method is compared to a standard, non-adaptive Robbins-Monro (RM) algorithm. Both ASGD and RM employ a stochastic subsampling technique to accelerate the optimisation process. Registration experiments were performed on 3D CT and MR data of the head, lungs, and prostate, using various similarity measures and transformation models. The results indicate that ASGD is robust to these variations in the registration framework and is less sensitive to the settings of the user-defined parameters than RM. The main disadvantage of RM is the need for a predetermined step size function. The ASGD method provides a solution for that issue. |
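A sketch of the adaptive mechanism is given below: the step size is $a/(t_k + A)$, and the artificial "time" $t_k$ shrinks while successive stochastic gradients agree (keeping steps large) and grows when they oppose (damping steps near the optimum). The sigmoid shape and all constants are illustrative assumptions, not the paper's calibrated values.

```python
# Illustrative ASGD sketch: step size a/(t+A), with t adapted through a
# sigmoid of the inner product of successive stochastic gradients.
import numpy as np

def asgd(grad, x0, iters=200, a=1.0, A=10.0, fmax=1.0, fmin=-0.5, omega=1.0):
    x, t, g_prev = np.asarray(x0, float), 0.0, None
    rng = np.random.default_rng(0)
    for _ in range(iters):
        g = grad(x) + 0.1 * rng.standard_normal(x.shape)   # noisy gradient
        if g_prev is not None:
            inner = np.clip(-float(g @ g_prev) / omega, -50.0, 50.0)
            sig = fmin + (fmax - fmin) / (1 - (fmax / fmin) * np.exp(-inner))
            t = max(0.0, t + sig)   # gradients agree -> t shrinks -> big steps
        x -= a / (t + A) * g        # gradients oppose -> t grows -> small steps
        g_prev = g
    return x

print(asgd(lambda x: 2.0 * x, [5.0, -3.0]))   # minimizes ||x||^2 -> near 0
```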
The neural basis of addiction: a pathology of motivation and choice. | OBJECTIVE
A primary behavioral pathology in drug addiction is the overpowering motivational strength and decreased ability to control the desire to obtain drugs. In this review the authors explore how advances in neurobiology are approaching an understanding of the cellular and circuitry underpinnings of addiction, and they describe the novel pharmacotherapeutic targets emerging from this understanding.
METHOD
Findings from neuroimaging of addicts are integrated with cellular studies in animal models of drug seeking.
RESULTS
While dopamine is critical for acute reward and initiation of addiction, end-stage addiction results primarily from cellular adaptations in anterior cingulate and orbitofrontal glutamatergic projections to the nucleus accumbens. Pathophysiological plasticity in excitatory transmission reduces the capacity of the prefrontal cortex to initiate behaviors in response to biological rewards and to provide executive control over drug seeking. Simultaneously, the prefrontal cortex is hyperresponsive to stimuli predicting drug availability, resulting in supraphysiological glutamatergic drive in the nucleus accumbens, where excitatory synapses have a reduced capacity to regulate neurotransmission.
CONCLUSIONS
Cellular adaptations in prefrontal glutamatergic innervation of the accumbens promote the compulsive character of drug seeking in addicts by decreasing the value of natural rewards, diminishing cognitive control (choice), and enhancing glutamatergic drive in response to drug-associated stimuli. |
Architecture of field-programmable gate arrays | A survey of Field-Programmable Gate Array (FPGA) architectures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their volatility, size, parasitic capacitance, resistance, and process technology complexity. FPGA architectures are divided into two constituents: logic block architectures and routing architectures. A classification of logic blocks based on their granularity is proposed, and several logic blocks used in commercially available FPGAs are described. A brief review of recent results on the effect of logic block granularity on the logic density and performance of an FPGA is then presented. Several commercial routing architectures are described in the context of a general routing architecture model. Finally, recent results on the tradeoff between the flexibility of an FPGA routing architecture and its routability and density are reviewed. |
FoDRA — A new content-based job recommendation algorithm for job seeking and recruiting | In this paper, we propose a content-based recommendation algorithm which extends and updates the Minkowski distance in order to address the challenge of matching people and jobs. The proposed algorithm, FoDRA (Four Dimensions Recommendation Algorithm), quantifies the suitability of a job seeker for a job position in a flexible way, using a structured form of the job and the candidate's profile produced from a content analysis of the unstructured form of the job description and the candidate's CV. We conduct an experimental evaluation in order to check the quality and the effectiveness of FoDRA. Our primary study shows that FoDRA produces promising results and creates new prospects in the area of Job Recommender Systems (JRSs). |
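The abstract does not give the exact formulation; the sketch below shows a plain weighted Minkowski distance over four profile dimensions as a baseline for what FoDRA extends. The dimension grouping, scores, and weights are hypothetical.

```python
# Sketch of a weighted Minkowski distance over four profile dimensions
# (e.g. education, experience, skills, miscellaneous -- the grouping and
# weights here are illustrative, not the exact FoDRA formulation).
def minkowski(job, candidate, weights, p=2):
    return sum(w * abs(j - c) ** p
               for j, c, w in zip(job, candidate, weights)) ** (1 / p)

job_profile       = [0.9, 0.7, 0.8, 0.5]   # required level per dimension
candidate_profile = [0.8, 0.6, 0.9, 0.4]   # extracted from the CV
weights           = [2.0, 1.5, 1.0, 0.5]   # employer-set importance

d = minkowski(job_profile, candidate_profile, weights)
print(f"distance {d:.3f} -> suitability {1 / (1 + d):.3f}")
```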
CLANG - a simple teaching language | |
HISQ action in dynamical simulations | We report on recent progress in employing the Highly Improved Staggered Quark (HISQ) action introduced by the HPQCD/UKQCD collaboration in simulations with dynamical fermions. The HISQ action is an order $a^2$ Symanzik-improved action with further suppressed taste symmetry violations. The improvement in taste symmetry is achieved by introducing Fat7 smearing of the original gauge links and reunitarization (projection to an element of U(3) or SU(3)) followed by Asq-type smearing. Major challenges for calculating the fermion force are related to the reunitarization step. We present a preliminary study of the HISQ action on two 2+1+1 flavor ensembles with the lattice spacing roughly equivalent to the MILC asqtad a=0.125 and 0.09 fm ensembles. |
HIV and child mental health: a case-control study in Rwanda. | BACKGROUND
The global HIV/AIDS response has advanced in addressing the health and well-being of HIV-positive children. Although attention has been paid to children orphaned by parental AIDS, children who live with HIV-positive caregivers have received less attention. This study compares mental health problems and risk and protective factors in HIV-positive, HIV-affected (due to caregiver HIV), and HIV-unaffected children in Rwanda.
METHODS
A case-control design assessed mental health, risk, and protective factors among 683 children aged 10 to 17 years at different levels of HIV exposure. A stratified random sampling strategy based on electronic medical records identified all known HIV-positive children in this age range in 2 districts in Rwanda. Lists of all same-age children in villages with an HIV-positive child were then collected and split by HIV status (HIV-positive, HIV-affected, and HIV-unaffected). One child was randomly sampled from the latter 2 groups to compare with each HIV-positive child per village.
RESULTS
HIV-affected and HIV-positive children demonstrated higher levels of depression, anxiety, conduct problems, and functional impairment compared with HIV-unaffected children. HIV-affected children had significantly higher odds of depression (1.68: 95% confidence interval [CI] 1.15-2.44), anxiety (1.77: 95% CI 1.14-2.75), and conduct problems (1.59: 95% CI 1.04-2.45) compared with HIV-unaffected children, and rates of these mental health conditions were similar to those of HIV-positive children. These results remained significant after controlling for contextual variables; there were no significant differences in mental health outcomes between the HIV-affected and HIV-positive groups, reflecting a potential explanatory role of factors such as daily hardships, caregiver depression, and HIV-related stigma [corrected].
CONCLUSIONS
The mental health of HIV-affected children requires policy and programmatic responses comparable to HIV-positive children. |
Online Risk-Based Security Assessment | The work was motivated by a perceived increase in the frequency at which power system operators are encountering high stress in bulk transmission systems and the corresponding need to improve security monitoring of these networks. Online risk-based security assessment provides rapid online quantification of the security level associated with an existing or forecasted operating condition. One major advantage of this approach over deterministic online security assessment is that it condenses contingency likelihood and severity into indices that reflect probabilistic risk. Use of these indices in control room decision-making leads to increased understanding of potential network problems, including overload, cascading overload, low voltages, and voltage instability, resulting in improved security related decision-making. Test results on large-scale transmission models retrieved from the energy management system of a U. S. utility company are described. |
Australia's Agenda for E-Security Education and Research | The paper describes the development of a national E-security strategy for Australia. The paper discusses the rationale behind its development and the issues that relate to the policy and its possible implementation and its impact upon E-security teaching and E-security research. The paper also discusses how current situations are having an impact on the development of a national education and research initiative. |
An evaluation framework for viable business models for m-commerce in the information technology sector | This paper presents a study of the characteristics of viable business models in the field of Mobile Commerce (m-commerce). Mobility has given new dimensions to the way commerce works. All over the world various stakeholder organisations are consistently probing into the areas where m-commerce can be exploited and can generate revenue or value for them, even though some of those implementations are making the business environment more complex and uncertain. This paper proposes a viable business model evaluation framework, based on the VISOR model, which helps in determining the sustainability capabilities of a business model. Four individual cases were conducted with diverse organisations in the Information Technology sector. The four cases discussed dealt with mobile business models and the primary data was collected via semi structured interviews, supplemented by an extensive range of secondary data. A cross-case comparative data analysis was used to review the patterns of different viable business components across the four cases and, finally, the findings and conclusions of the study are presented. |
[80 years of the Society of Friends of the University of Veterinary Medicine Hannover]. | The Society of Friends of the University of Veterinary Medicine Hannover was established in 1926, in times of severe economic distress. According to its statutes, its main purpose from the beginning was to complement the governmental budget of the University. During its 80 years of existence the so-called "Friendly Society" has helped to overcome many financial shortages in research, clinics, and institutes. In addition, it has supported veterinary students in need. Some aspects of the Society's history and activities are communicated. |
Religion: Morality and Social Control | This article is revision of the previous edition article by C. Davies, volume 19, pp. 13082–13085, © 2001, Elsevier Ltd. |
Cloud Computing: Survey on Energy Efficiency | Cloud computing is today’s most emphasized Information and Communications Technology (ICT) paradigm that is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share in power consumption generates between 1.1% and 1.5% of the total electricity use worldwide and is projected to rise even more. Such alarming numbers demand rethinking the energy efficiency of such infrastructures. However, before making any changes to infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of an infrastructure supporting the cloud computing paradigm with regards to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of a software utilized by end users. Second, we utilize this approach for analyzing available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract existing challenges and highlight future research directions. |
Learning from examples to improve code completion systems | The suggestions made by current IDEs' code completion features are based exclusively on the static type system of the programming language. As a result, proposals are often made that are irrelevant to the particular working context. Moreover, these suggestions are ordered alphabetically rather than by their relevance in that context. In this paper, we present intelligent code completion systems that learn from existing code repositories. We have implemented three such systems, each using the information contained in repositories in a different way. We perform a large-scale quantitative evaluation of these systems, integrate the best performing one into Eclipse, and evaluate the latter also in a user study. Our experiments give evidence that intelligent code completion systems which learn from examples significantly outperform mainstream code completion systems in terms of the relevance of their suggestions and thus have the potential to enhance developers' productivity. |
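The simplest "learn from examples" variant can be sketched in a few lines: rank method-call completions by how often they follow a given receiver type in a code corpus, instead of alphabetically. The corpus below is hypothetical, not one of the paper's three systems.

```python
# Sketch: frequency-based completion ranking learned from a (fake) corpus
# of (receiver type, method called) pairs mined from existing repositories.
from collections import Counter, defaultdict

corpus = [("Button", "setText"), ("Button", "addListener"),
          ("Button", "setText"), ("Text", "getText"),
          ("Button", "setEnabled"), ("Button", "setText")]

usage = defaultdict(Counter)
for receiver, method in corpus:
    usage[receiver][method] += 1

def complete(receiver, k=3):
    """Return the k most relevant completions for this receiver type."""
    return [m for m, _ in usage[receiver].most_common(k)]

print(complete("Button"))   # ['setText', 'addListener', 'setEnabled']
```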
Kinematic-free position control of a 2-DOF planar robot arm | This paper challenges the well-established assumption in robotics that in order to control a robot it is necessary to know its kinematic information, that is, the arrangement of links and joints, the link dimensions and the joint positions. We propose a kinematic-free robot control concept that does not require any prior kinematic knowledge. The concept is based on our hypothesis that it is possible to control a robot without explicitly measuring its joint angles, by measuring instead the effects of the actuation on its end-effector. We implement a proof-of-concept encoderless robot controller and apply it for the position control of a physical 2-DOF planar robot arm. The prototype controller is able to successfully control the robot to reach a reference position, as well as to track a continuous reference trajectory. Notably, we demonstrate how this novel controller can cope with something that traditional control approaches fail to do: adapt to drastic kinematic changes such as 100% elongation of a link, 35-degree angular offset of a joint, and even a complete overhaul of the kinematics involving the addition of new joints and links. |
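A toy sketch of the concept is shown below: the controller has no kinematic model and reads no joint encoders; it applies small probe actions, measures the resulting end-effector displacement (simulated here by a hidden two-link arm), assembles a local Jacobian estimate from those measurements, and inverts it to servo toward the target. Link lengths, probe size, and gain are illustrative.

```python
# Toy sketch of kinematic-free control: estimate a local Jacobian purely
# from measured end-effector responses to probe actions, then invert it.
import numpy as np

def true_robot(q, l1=1.0, l2=0.7):      # hidden from the controller
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

q = np.array([0.3, 0.5])
target = np.array([1.2, 0.8])

for _ in range(100):
    x = true_robot(q)
    J = np.zeros((2, 2))
    for i in range(2):                  # probe each actuator, watch the
        dq = np.zeros(2)                # end-effector move
        dq[i] = 1e-4
        J[:, i] = (true_robot(q + dq) - x) / 1e-4
    # move toward the target using the estimated (not modeled) Jacobian
    q = q + 0.5 * np.linalg.pinv(J) @ (target - x)

print(np.round(true_robot(q) - target, 4))   # residual error ~ [0, 0]
```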
Knowledge-Embedded Representation Learning for Fine-Grained Image Recognition | Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, achieving fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of a knowledge graph and employ a Gated Graph Neural Network to propagate node messages through the graph to generate the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating the distinction of subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration in which the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used Caltech-UCSD bird dataset demonstrate the superiority of our KERL framework. |
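A rough sketch of one gated message-passing step on a small concept graph, in the spirit of the Gated Graph Neural Network mentioned above, is given below. The graph, dimensions, weights, and the simplified GRU-style gating are placeholders, not the trained KERL model.

```python
# Sketch of gated message passing on a concept graph (GGNN-style).
# All sizes and weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 8                      # 4 concept nodes, hidden size 8
A = np.array([[0, 1, 1, 0],      # adjacency of the concept graph
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
H = rng.standard_normal((n, d))               # node states
W, Wz, Wr = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for _ in range(3):                            # propagation steps
    M = A @ H @ W                             # aggregate neighbor messages
    z = sigmoid(M @ Wz + H @ Wz)              # update gate (GRU-style)
    r = sigmoid(M @ Wr + H @ Wr)              # reset gate
    H_tilde = np.tanh(M + (r * H) @ W)
    H = (1 - z) * H + z * H_tilde             # gated state update

print(H.shape)   # (4, 8): knowledge representation per node
```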
Synthesis of poly(1,5-diaminonaphthalene) microparticles with abundant amino and imino groups as strong adsorbers for heavy metal ions | Poly(1,5-diaminonaphthalene) microparticles with abundant reactive amino and imino groups on their surface were synthesized by one-step oxidative polymerization of 1,5-diaminonaphthalene using ammonium persulfate as the oxidant. The molecular, supramolecular, and morphological structures of the microparticles were systematically characterized by IR and UV-vis spectroscopies, elemental analysis, wide-angle X-ray diffractometry, and transmission electron microscopy. The microparticles demonstrate electrical semiconductivity, high resistance to strong acid and alkali, and strong adsorption capability for lead(II), mercury(II), and silver(I) ions. The experimental conditions for adsorption of Pb(II) were optimized by varying the persulfate/monomer ratio, adsorption time, sorbent concentration, and pH value of the Pb(II) solution. The maximum adsorption capacity is 241 mg·g−1 for particles after a 24 h exposure to a solution at an initial Pb(II) concentration of 29 mM. The adsorption data fit a Langmuir isotherm and follow pseudo-second-order reaction kinetics. This indicates chemisorption, typical of a chelation interaction between Pb(II) and the amino/imino groups on the sorbent.
Graphical abstract: Poly(1,5-diaminonaphthalene) microparticles with abundant functional amino and imino groups have been synthesized by one-step direct polymerization of non-volatile 1,5-diaminonaphthalene in aqueous medium for sustainable preparation of high-performance adsorbents that strongly adsorb lead(II), mercury(II), and silver(I) ions. |
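The two models named above have standard forms, shown here for reference (symbols follow common adsorption-literature usage and are not quoted from the paper):

```latex
q_e = \frac{q_{\max} K_L c_e}{1 + K_L c_e},
\qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e},
```

where $q_e$ and $q_t$ are the uptake at equilibrium and at time $t$ (here $q_{\max} \approx 241$ mg·g−1), $c_e$ the equilibrium Pb(II) concentration, $K_L$ the Langmuir constant, and $k_2$ the pseudo-second-order rate constant; a linear $t/q_t$ versus $t$ plot signals the chelation-type chemisorption the authors report.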
Influence of antiplatelet-anticoagulant drugs on the need of blood components transfusion after vesical transurethral resection. | AIMS
The effect of the antithrombotic preventive therapy on haemorrhage keeps uncertain. We investigate the influence of the antiplatelet and anticoagulant drugs (AP/AC drugs) on the transfusion requirement after vesical transurethral resection (VTUR). We also describe the epidemiology of the blood components transfusion in our department.
MATERIALS AND METHODS
Retrospective observational study of a series of patients needing blood transfusion at the Urology Department between June 2010 and June 2013. Selection of 100 consecutive patients who were transfused after VTUR due to bladder transitional cell carcinoma (BTCC) (group A = GA).
CONTROL GROUP
100 consecutive patients who underwent VTUR due to BTCC and were not transfused (group B = GB). Transfusion criteria: Haemoglobin < 8 g/dl + anaemia symptoms. Age, gender, associated AP/AC treatment, secondary diagnoses, toxics, tumour stage and grade were analysed.
RESULTS
212 patients required transfusion of a blood component; 169 were men (79%) and 43 women (21%). Median age was 77.59 years (SD 9.42, range 50-92). Secondary diagnoses: diabetes mellitus 64%, high blood pressure 77%, dyslipidemia 52%. 60% of patients were previously treated with AP/AC drugs. Average pre-transfusion haemoglobin: 7.4 g/dl (SD ± 0.7). Average post-transfusion haemoglobin: 8.9 g/dl (SD ± 0.72). The most frequent transfusion indications were bladder cancer (37%), kidney cancer (11%), prostate cancer (8%), benign prostatic hyperplasia (BPH) (8%), and other urological diagnoses (36%). Intraoperative transfusions indicated by the anaesthesiologist: kidney cancer (33%), BPH (28%). Patients who underwent VTUR due to BTCC were older in GA (77.59 years, SD 9.42) than in GB (68.98 years, SD 11.78) (p = 0.0001). Gender distribution was similar (15 women in GA and 24 in GB). Fewer patients were asked to keep their treatment with ASA 100 mg (acetylsalicylic acid) in GA (25.64%) than in GB (50%) (p = 0.0330). Tumour grade was more aggressive (p = 0.0003) and stage higher (p = 0.0018) in GA, regardless of concomitant treatment with AP/AC drugs.
CONCLUSIONS
The pathologies most frequently requiring blood component transfusion in the Urology Department were (in order of frequency): bladder cancer, kidney cancer, prostate cancer, and prostate adenoma. ASA 100 mg did not influence transfusion requirements in VTUR due to BTCC. Higher tumour stage and grade have a greater influence on transfusion requirements than concomitant AP/AC treatment. The heterogeneity of AP/AC protocols does not allow us to establish the benefit of stopping those drugs before surgery in terms of avoiding blood transfusions when performing a VTUR. |
Mobile robot motion estimation by 2D scan matching with genetic and iterative closest point algorithms | The paper reports on mobile robot motion estimation based on matching points from successive two-dimensional (2D) laser scans. This ego-motion approach is well suited to unstructured and dynamic environments because it directly uses raw laser points rather than extracted features. We have analyzed the application of two methods that are very different in essence: (i) A 2D version of iterative closest point (ICP), which is widely used for surface registration; (ii) a genetic algorithm (GA), which is a novel approach for this kind of problem. Their performance in terms of real-time applicability and accuracy has been compared in outdoor experiments with nonstop motion under diverse realistic navigation conditions. Based on this analysis, we propose a hybrid GA-ICP algorithm that combines the best characteristics of these pure methods. The experiments have been carried out with the tracked mobile robot Auriga-alpha and an on-board 2D laser scanner. |
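For readers unfamiliar with the ICP half of the hybrid, a minimal 2D point-to-point ICP iteration is sketched below: nearest-neighbour matching of raw scan points followed by a closed-form rigid transform (Kabsch/SVD). The synthetic scans and parameters are illustrative, not the paper's setup.

```python
# Minimal 2D ICP sketch: nearest-neighbour matching + closed-form (SVD)
# rigid alignment of two synthetic laser scans. Not the paper's code.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
ref = rng.uniform(-5, 5, (200, 2))                  # previous scan
th = 0.1                                            # true robot rotation
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
cur = ref @ R_true.T + np.array([0.4, -0.2])        # current scan

R, t = np.eye(2), np.zeros(2)                       # estimate: cur -> ref
tree = cKDTree(ref)
for _ in range(20):
    moved = cur @ R.T + t
    _, idx = tree.query(moved)                      # nearest-neighbour pairs
    p = moved - moved.mean(0)
    q = ref[idx] - ref[idx].mean(0)
    U, _, Vt = np.linalg.svd(p.T @ q)               # Kabsch: min ||Rp - q||
    R_step = Vt.T @ U.T
    if np.linalg.det(R_step) < 0:                   # guard against reflection
        Vt[-1] *= -1
        R_step = Vt.T @ U.T
    t_step = ref[idx].mean(0) - R_step @ moved.mean(0)
    R, t = R_step @ R, R_step @ t + t_step          # compose with estimate

print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)  # inverse of true motion
```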
Self-perception: An alternative interpretation of cognitive dissonance phenomena. | A theory of self-perception is proposed to provide an alternative interpretation for several of the major phenomena embraced by Festinger's theory of cognitive dissonance and to explicate some of the secondary patterns of data that have appeared in dissonance experiments. It is suggested that the attitude statements which comprise the major dependent variables in dissonance experiments may be regarded as interpersonal judgments in which the observer and the observed happen to be the same individual and that it is unnecessary to postulate an aversive motivational drive toward consistency to account for the attitude change phenomena observed. Supporting experiments are presented, and metatheoretical contrasts between the "radical" behavioral approach utilized and the phenomenological approach typified by dissonance theory are discussed. |
Milk Run Logistics : Literature Review and Directions | conditions. The effect of milk run logistics on the reduction of CO2 emissions is also discussed. The promotion of milk run logistics can be evaluated highly from the viewpoint of environmental policy. |
Gait Planning of Quadruped Walking and Climbing Robot for Locomotion in 3D Environment | One of the traditional problems for a walking and climbing robot moving in a 3D environment is how to negotiate the boundary between two plane surfaces, such as corners, which may be convex or concave. In this paper, a practical gait planning algorithm for the transition region of the boundary is proposed from a geometrical point of view. The trajectory of the body is derived from a geometrical analysis of the relationship between the robot and the environment, and the position of each foot is determined using parameters associated with the hip and the ankle of the robot. For both concave and convex boundaries, the trajectory that the robot moves along is determined in advance, and the foot positions of the robot associated with the trajectory are computed accordingly. The usefulness of the proposed method is confirmed through simulations and demonstrations with a walking and climbing robot. |
Eligibility for statin therapy by the JUPITER trial criteria and subsequent mortality. | Justification for the Use of Statins in Primary Prevention: An Intervention Trial Using Rosuvastatin (JUPITER) reported reduced cardiovascular and all-cause mortality with statin treatment in patients with elevated C-reactive protein (CRP) and average cholesterol levels who were not eligible for lipid-lowering treatment on the basis of existing guidelines. The aim of this study was to determine the prevalence of eligibility and mortality in a general population sample on the basis of eligibility for statin treatment using the JUPITER criteria. The study group consisted of 30,229 participants in the REasons for Geographic and Racial Differences in Stroke (REGARDS) cohort, an observational study of US African American and white participants aged ≥45 years, enrolled in their homes from 2003 to 2007 and followed biannually by telephone. Among 11,339 participants age-eligible for JUPITER and without vascular diagnoses or use of lipid-lowering treatment, 21% (n = 2,342) met JUPITER entry criteria. Compared with JUPITER participants, they had similar low-density lipoprotein cholesterol and CRP levels, were more often women, were more often black, more often had metabolic syndrome, and more often used aspirin for cardioprotection. Over 3.5 years of follow-up, the mortality rate in REGARDS participants eligible for JUPITER was 1.17 per 100 patient-years (95% confidence interval 0.94 to 1.42). Compared with those otherwise JUPITER eligible who had CRP levels <2 mg/L (n = 2,620), those with CRP levels ≥2 mg/L had a multivariate-adjusted relative risk of 1.5 (95% confidence interval 1.1 to 2.2) for total mortality. In conclusion, 21% of those not otherwise eligible would be newly eligible for lipid-lowering treatment on the basis of JUPITER trial eligibility criteria. |
Results of a phase I multiple-dose clinical study of ursodeoxycholic Acid. | BACKGROUND
The hydrophilic bile acid, ursodeoxycholic acid (UDCA), may indirectly protect against colon carcinogenesis by decreasing the overall proportion of the more hydrophobic bile acids, such as deoxycholic acid (DCA), in aqueous phase stool. In the AOM rat model, treatment with UDCA resulted in a significant decrease in adenoma formation and colorectal cancer. It was hypothesized that there is a dose-response relationship between treatment with the more hydrophilic bile acid, UDCA, and a reduction in the proportion of the more hydrophobic bile acid, DCA, in the aqueous stool phase, suggesting the potential of UDCA as a chemopreventive agent.
METHODS
Eighteen participants were randomized to 300, 600, or 900 mg/day UDCA for 21 days in this multiple-dose, double-blinded study. Seventy-two-hour stool samples were collected pretreatment and on days 18-20 of UDCA treatment for bile acid measurements. Pharmacokinetics were performed and blood bile acids were measured at days 1 and 21 of UDCA treatment.
RESULTS
There were no serious adverse events associated with UDCA treatment. There was a dose-response increase in the posttreatment to baseline ratio of UDCA to DCA from the 300 mg/day to the 600 mg/day group, but not between the 600 and the 900 mg/day groups, in both aqueous and solid phase stool. This posttreatment increase was statistically significant in aqueous phase stool for the 300 and 600 mg/day treatment groups (P = 0.038 and P = 0.014, respectively), but was only marginally significant in the 900 mg/day treatment group (P = 0.057). Following the first dose administration, a dose-dependent increase in plasma ursodeoxycholic acid concentrations was observed in fasting subjects; however, when these levels were measured postprandially following 3 weeks of treatment, the areas under the plasma concentration-time profile (AUC) were not statistically different and remained relatively unchanged over time.
CONCLUSIONS
UDCA treatment did not decrease the quantity of DCA in fecal water or solids; however, it did decrease the proportion of DCA in fecal water and solids in relation to UDCA. Thus, 3 weeks of UDCA treatment resulted in an overall increase in hydrophilicity of bile acids in the aqueous phase stool, with a peak effect observed at a dose of 600 mg/day. Much larger studies are needed to determine the effect of ursodeoxycholic acid administration on deoxycholic acid concentration, overall hydrophilicity of stool bile acids, and the long-term effects on intermediate biomarkers of cellular damage. |
DeepFM: A Factorization-Machine based Neural Network for CTR Prediction | Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expert feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its “wide” and “deep” parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data. |
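A rough illustration of the shared-input idea in the DeepFM abstract above: a factorization-machine term and a small MLP are wired to one embedding table, so no feature engineering is needed beyond raw feature ids. This is a minimal sketch of the published architecture, not the authors' code; the field layout, embedding size, and hidden widths are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepFM(nn.Module):
    """Minimal DeepFM sketch: FM and deep components share one embedding table.
    Inputs are per-field feature ids, already offset into one global id space."""
    def __init__(self, field_dims, embed_dim=8, hidden=(64, 32)):
        super().__init__()
        num_features = sum(field_dims)
        self.embedding = nn.Embedding(num_features, embed_dim)  # shared "wide"/"deep" input
        self.linear = nn.Embedding(num_features, 1)             # first-order FM term
        self.bias = nn.Parameter(torch.zeros(1))
        layers, in_dim = [], len(field_dims) * embed_dim
        for h in hidden:
            layers += [nn.Linear(in_dim, h), nn.ReLU()]
            in_dim = h
        self.mlp = nn.Sequential(*layers, nn.Linear(in_dim, 1))

    def forward(self, x):                  # x: (batch, num_fields) integer ids
        emb = self.embedding(x)            # (batch, num_fields, embed_dim)
        # FM second-order interactions: 0.5 * ((sum of embeddings)^2 - sum of squares)
        square_of_sum = emb.sum(dim=1) ** 2
        sum_of_square = (emb ** 2).sum(dim=1)
        fm = 0.5 * (square_of_sum - sum_of_square).sum(dim=1, keepdim=True)
        first_order = self.linear(x).sum(dim=1)
        deep = self.mlp(emb.flatten(start_dim=1))
        return torch.sigmoid(self.bias + first_order + fm + deep)
```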
Crystal structure of a Baeyer-Villiger monooxygenase. | Flavin-containing Baeyer-Villiger monooxygenases employ NADPH and molecular oxygen to catalyze the insertion of an oxygen atom into a carbon-carbon bond of a carbonylic substrate. These enzymes can potentially be exploited in a variety of biocatalytic applications given the wide use of Baeyer-Villiger reactions in synthetic organic chemistry. The catalytic activity of these enzymes involves the formation of two crucial intermediates: a flavin peroxide generated by the reaction of the reduced flavin with molecular oxygen and the "Criegee" intermediate resulting from the attack of the flavin peroxide onto the substrate that is being oxygenated. The crystal structure of phenylacetone monooxygenase, a Baeyer-Villiger monooxygenase from the thermophilic bacterium Thermobifida fusca, exhibits a two-domain architecture resembling that of the disulfide oxidoreductases. The active site is located in a cleft at the domain interface. An arginine residue lies above the flavin ring in a position suited to stabilize the negatively charged flavin-peroxide and Criegee intermediates. This amino acid residue is predicted to exist in two positions: the "IN" position found in the crystal structure and an "OUT" position that allows NADPH to approach the flavin to reduce the cofactor. Domain rotations are proposed to bring about the conformational changes involved in catalysis. The structural studies highlight the functional complexity of this class of flavoenzymes, which coordinate the binding of three substrates (molecular oxygen, NADPH, and phenylacetone) in proximity of the flavin cofactor with formation of two distinct catalytic intermediates. |
The ABDADA Distributed Minimax Search Algorithm | This paper presents a new method to parallelize the minimax tree search algorithm. This method is then compared to the “Young Brothers Wait Concept” algorithm in an Othello program implementation and in a Chess program. Results of tests done on a 32-node CM5 and a 128-node CRAY T3D computer are given. |
A broadband CPW-fed bow-tie slot antenna | A broadband coplanar waveguide (CPW) fed bow-tie slot antenna is proposed. By using a linear tapered transition, a 37% impedance bandwidth at -10 dB return loss is achieved. The antenna structure is very simple, the radiation patterns of the antenna remain stable over the whole bandwidth, and the cross-polarization level is low. An antenna model was fabricated on a high dielectric constant substrate. Experiments show that the simulated results agree well with the measured ones. |
Fast Single Shot Detection and Pose Estimation | For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8-view mAVP on Pascal 3D+ [21]) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM. |
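A minimal sketch of the grid-prediction idea just described: from a backbone feature map, a single 1x1 convolution jointly emits category scores, location offsets, and discretized pose-bin scores at every grid cell, so detection and rough pose come out in one forward pass. Channel, class, and bin counts below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

num_classes, num_pose_bins = 12, 8

# One shared head over the backbone features predicts everything per grid cell.
head = nn.Conv2d(256, num_classes + 4 + num_pose_bins, kernel_size=1)

features = torch.randn(1, 256, 19, 19)              # toy backbone output
out = head(features)                                # (1, C + 4 + P, 19, 19)
cls_scores = out[:, :num_classes]                   # object-category scores
box_offsets = out[:, num_classes:num_classes + 4]   # location refinement per cell
pose_scores = out[:, num_classes + 4:]              # viewpoint treated as categorization
```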
Enhanced Topic-based Vector Space Model for semantics-aware spam filtering | Spam has become a major issue in computer security because it is a channel for threats such as computer viruses, worms and phishing. More than 85% of received e-mails are spam. Historical approaches to combat these messages, including simple techniques such as sender blacklisting or the use of email signatures, are no longer completely reliable. Currently, many solutions feature machine-learning algorithms trained using statistical representations of the terms that usually appear in the e-mails. Still, these methods are merely syntactic and are unable to account for the underlying semantics of terms within the messages. In this paper, we explore the use of semantics in spam filtering by representing e-mails with a recently introduced Information Retrieval model: the enhanced Topic-based Vector Space Model (eTVSM). This model is capable of representing linguistic phenomena using a semantic ontology. Based upon this representation, we apply several well-known machine-learning models and show that the proposed method can detect the internal semantics of spam messages. |
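For contrast with the semantics-aware eTVSM representation, the purely syntactic pipeline the abstract criticizes can be assembled in a few lines; the toy corpus and the choice of TF-IDF with naive Bayes below are illustrative stand-ins, not the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus; a real experiment would use a labeled e-mail dataset.
emails = ["cheap meds online buy now", "meeting agenda for monday",
          "win a free prize click here", "project report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# Term statistics only: no notion of synonyms, topics, or ontology.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(emails, labels)
print(clf.predict(["free prize meds"]))  # [1] on this toy data
```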
Increased incidence and improved survival in endometrioid endometrial cancer diagnosed since 1989 in The Netherlands: a population based study. | OBJECTIVES
To measure progress against endometrioid endometrial carcinoma (EEC) in the Netherlands by analyzing trends in incidence, survival and mortality simultaneously.
STUDY DESIGN
Descriptive study of incidence, survival and mortality rates of women with EEC in the Netherlands. Rates were age-standardized to the European standard population. Population-based data were extracted from the nationwide Dutch Cancer Registry (NCR) between 1989 and 2009. Mortality data since 1989 came from Statistics Netherlands. European age standardized incidence rates were calculated according to age, histology and stage. Five year relative survival estimates were calculated in four periods. Optimal progress against cancer is defined as decreasing incidence and/or improving survival accompanied by declining mortality.
RESULTS
80% of the 32,332 patients newly diagnosed with a corpus uteri malignancy had an EEC. The incidence of EEC rose significantly from 11/100,000 to 15/100,000, being most pronounced in women with FIGO stage IB and in the group with grade 1&2 tumours (P<0.05). Coinciding with the increased incidence, 5-year relative survival increased, especially for patients aged 60-74 years, in women with FIGO stage I, and in histology group grade 1&2, being 87%, 94% and 93%, respectively, during 2005-2009.
CONCLUSION
The incidence of EEC (80% of corpus uteri cancers) increased markedly between 1989 and 2009, especially in women aged 60-74 years. Five-year survival for patients with EEC increased from 83% to 85%. Progress against EEC has been smaller than previously assumed, because mortality decreased only slightly while incidence increased, despite the improvement in survival. |
Statecraft and classical learning: the Rituals of Zhou in East Asian history | Introduction - Benjamin A. Elman and Martin Kern I. Early China 1. The Zhouli as Constitutional Text - David Schaberg 2. Offices of Writing and Reading in the Rituals of Zhou - Martin Kern 3. The Many Dukes of Zhou in Early Sources - Michael Nylan 4. Centering the Realm: Wang Mang, the Zhouli, and Early Chinese Statecraft - Michael Puett 5. Zheng Xuan's Commentary on the Zhouli - Andrew H. Plaks II. Medieval China 6. The Role of the Zhouli in Seventh- and Eighth-Century Civil Administrative Traditions - David McMullen 7. Wang Anshi and the Zhouli - Peter K. Bol 8. Tension and Balance: Changes of Constitutional Schemes in Southern Song Commentaries on the Rituals of Zhou - Jaeyoon Song III. Early Modern East Asia 9. Tokugawa Approaches to the Rituals of Zhou: The Late Mito School and "Feudalism" - Kate Wildman Nakai 10. Yun Hyu and the Search for Dominance: A Seventeenth-Century Korean Reading of the Offices of Zhou and the Rituals of Zhou - JaHyun Kim Haboush 11. The Story of a Chapter: Changing Views of the "Artificer's Record" ("Kaogong ji") and the Zhouli - Benjamin A. Elman IV. Modern China 12. The Zhouli as the Late Qing Path to the Future - Rudolf G. Wagner 13. Denouement: Some Conclusions about the Zhouli - Rudolf G. Wagner Bibliography Index |
A Wideband Circularly Polarized Magnetoelectric Dipole Antenna | This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band. |
Bidirectional Generative Adversarial Networks for Neural Machine Translation | Generative Adversarial Network (GAN) has been proposed to tackle the exposure bias problem of Neural Machine Translation (NMT). However, the discriminator typically results in the instability of the GAN training due to the inadequate training problem: the search space is so huge that sampled translations are not sufficient for discriminator training. To address this issue and stabilize the GAN training, in this paper, we propose a novel Bidirectional Generative Adversarial Network for Neural Machine Translation (BGAN-NMT), which aims to introduce a generator model to act as the discriminator, whereby the discriminator naturally considers the entire translation space so that the inadequate training problem can be alleviated. To satisfy this property, generator and discriminator are both designed to model the joint probability of sentence pairs, with the difference that the generator decomposes the joint probability with a source language model and a source-to-target translation model, while the discriminator is formulated as a target language model and a target-to-source translation model. To further leverage their symmetry, an auxiliary GAN is introduced that adopts the generator and discriminator models of the original one as its own discriminator and generator, respectively. The two GANs are alternately trained to update the parameters. Experiment results on German-English and Chinese-English translation tasks demonstrate that our method not only stabilizes GAN training but also achieves significant improvements over baseline systems. |
Bringing Deep Learning at the Edge of Information-Centric Internet of Things | Many Internet solutions draw their processing and analysis power from cloud computing services. Internet of Things (IoT) applications have started to discover the benefits of computing, processing, and analysis on the device itself, aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constrained IoT devices. Edge computing (EC) emerged as an alternative solution that moves services and computation closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural networks (CNNs), recurrent neural networks (RNNs), and reinforcement learning (RL), with IoT and information-centric networking, a promising future Internet architecture, combined with the EC concept. A CNN model can be used in the IoT area to reliably extract information from complex environments. Moreover, RL and RNNs have recently been integrated into IoT and can be used to account for the multi-modality of data in real-time applications. |
An empirical comparison of lead exposure pathway models. | Structural equation modeling is a statistical method for partitioning the variance in a set of interrelated multivariate outcomes into that which is due to direct, indirect, and covariate (exogenous) effects. Despite this model's flexibility to handle different experimental designs, postulation of a causal chain among the endogenous variables and the points of influence of the covariates is required. This has motivated the researchers at the University of Cincinnati Department of Environmental Health to be guided by a theoretical model for movement of lead from distal sources (exterior soil or dust and paint lead) to proximal sources (interior dust lead) and then finally to biologic outcomes (handwipe and blood lead). The question of whether a single structural equation model built from proximity arguments can be applied to diverse populations observed in different communities with varying lead amounts, sources, and bioavailabilities is addressed in this article. This reanalysis involved data from 1855 children less than 72 months of age enrolled in 11 studies performed over approximately 15 years. Data from children residing near former ore-processing sites were included in this reanalysis. A single model adequately fit the data from these 11 studies; however, the model needs to be flexible to include pathways that are not frequently observed. As expected, the more proximal sources of interior dust lead and handwipe lead were the most important predictors of blood lead; soil lead often had a number of indirect influences. A limited number of covariates were also isolated as usually affecting the endogenous lead variables. The blood lead levels surveyed at the ore-processing sites were comparable to and actually somewhat lower than those reported in the Third National Health and Nutrition Examination Survey. Lessened bioavailability of the lead at certain of these sites is a probable reason for this finding. |
Chest radiographs in acute pulmonary embolism. | BACKGROUND
Pulmonary embolism (PE) is a serious clinical entity carrying significant morbidity and mortality. Clinically, it is a difficult condition to diagnose and remains under-treated in Pakistan due to the non-availability of objective tests and a lack of awareness among physicians. This study was conducted to determine the chest radiographic presentation in known cases of acute PE presenting to a tertiary care hospital.
METHODS
Hospital records of patients with a diagnosis of acute PE were reviewed from June 2000 until June 2004. Fifty cases of acute PE diagnosed on spiral computed tomography (CT) of the chest, demonstrating an intraluminal filling defect, were selected. Two chest physicians reviewed the chest radiographs obtained during that hospitalization. In case of discrepancy, a radiologist made the final interpretation.
RESULTS
The chest radiograph was interpreted as normal in only 18% of patients with acute PE. The most common chest radiographic abnormalities were cardiac enlargement (38%), pulmonary parenchymal infiltrates (34%), atelectasis (26%), pleural effusion (24%), and pulmonary congestion (24%). Other rare findings were elevated hemidiaphragm (14%), pulmonary artery enlargement (14%), and focal oligemia (8%).
CONCLUSIONS
Cardiomegaly is the most common chest radiographic abnormality associated with acute pulmonary embolism. Chest radiography is not useful in making the diagnosis of acute pulmonary embolism; its major role is in the identification of alternative disease processes that can mimic thromboembolism. |
Effect of alpha-difluoromethylornithine on rectal mucosal levels of polyamines in a randomized, double-blinded trial for colon cancer prevention. | BACKGROUND
Polyamines (e.g., putrescine, spermidine, and spermine) are required for optimal cell growth. Inhibition of polyamine synthesis suppresses carcinogen-induced epithelial cancers, including colon cancer, in animal models. In a short-term phase IIa trial, we determined that low doses of alpha-difluoromethylornithine (DFMO), an inhibitor of ornithine decarboxylase (an enzyme involved in polyamine synthesis), reduced the polyamine content of normal-appearing rectal mucosa of subjects with a prior history of resected colon polyps. In a follow-up study, we have attempted to determine the lowest dose of DFMO that can suppress the polyamine content of rectal mucosa over a course of 1 year with no or minimal side effects.
METHODS
Participants were randomly assigned to daily oral treatment with a placebo or one of three doses (0.075, 0.20, or 0.40 g/m2) of DFMO. Baseline and serial determinations of polyamine levels in rectal mucosa and extensive symptom monitoring (including audiometric measurements, since DFMO causes some reversible hearing loss at higher doses) were performed over a 15-month period.
RESULTS
DFMO treatment reduced putrescine levels in a dose-dependent manner. Following 6 months of treatment, doses of 0.20 and 0.40 g/m2 per day reduced putrescine levels to approximately 34% and 10%, respectively, of those observed in the placebo group. Smaller decreases were seen in spermidine levels and spermidine:spermine ratios. Polyamine levels increased toward baseline values after discontinuation of DFMO. Although there were no statistically significant differences among the dose groups with respect to clinically important shifts in audiometric thresholds and nonaudiologic side effects, statistically significant higher dropout and discontinuation rates were observed in the highest dose group.
CONCLUSIONS
Polyamine levels in rectal mucosa can be continuously suppressed by daily oral doses of DFMO that produce few or no side effects. A dose of 0.20 g/m2 can be used safely in combination phase IIb or single-agent phase III chemoprevention trials. |
The geography of Strabo : in eight volumes | In his seventeen-book Geography, Strabo (c. 64 BCE to c. 25 CE) discusses geographical method, stresses the value of geography, and draws attention to the physical, political, and historical details of separate countries. The Geography is a vital source for ancient geography and is informative about ancient geographers. |
Message authentication in driverless cars | Driverless cars and driver-assisted vehicles are becoming prevalent in today's transportation systems. Companies like TESLA, Audi, and Google are among the leaders in this new era of the automotive industry. Modern vehicles, equipped with sophisticated navigation devices, complex driver-assisted systems, and a wide array of safety features, bring broad impacts to our quality of life, economic development, and environmental sustainability. Instant safety messages such as pre-collision warnings, blind-spot detection, and pedestrian and object awareness significantly improve safety for drivers, passengers, and pedestrians. As a result, vehicles are able to travel closely yet safely together, forming a platoon, resulting in reduced traffic congestion and fuel consumption. Driverless cars also have non-safety-related applications, which are used to facilitate traffic management and infotainment dissemination for drivers and passengers. Vehicular Ad hoc Network (VANET), the wireless communication technology enabling driverless cars, features not only dynamic topology but also high mobility: vehicle nodes move rapidly on streets and highways, and their movements depend on, but are not limited to, road traffic, speed limits, and the behavior of nearby vehicles. With the massive amount of messages exchanged among driverless cars that command the cars' movements at high speeds and in close distances, any malicious alteration could lead to disastrous accidents. Message authentication to ensure data integrity is therefore paramount to attack prevention. This paper presents a novel message authentication scheme that protects cars from bogus messages and makes VANET resilient to Denial-of-Service (DoS) attacks. The work includes a simulation framework that integrates vehicle and data traffic models to validate the effectiveness of the proposed message authentication scheme. |
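The abstract does not specify the proposed scheme itself, so the sketch below illustrates only the underlying primitive: a keyed MAC lets a receiver reject any beacon whose contents were altered in transit. The shared key, its distribution, and the beacon format are assumptions for the example.

```python
import hmac, hashlib, os

# Hypothetical shared key; a real VANET would need a key-management scheme.
key = os.urandom(32)

def sign(message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so receivers can verify integrity and origin."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(packet: bytes) -> bool:
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

beacon = b"veh42|lat=47.61|lon=-122.33|speed=27.5"
packet = sign(beacon)
assert verify(packet)                  # genuine message accepted
assert not verify(packet[:-1] + b"X")  # tampered packet rejected
```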
Defining knowledge management (KM) activities: towards consensus | Purpose – The purpose of the paper is to present a vocabulary of terms that clearly define knowledge management (KM) activities in order to move towards consensus in the adoption of a common language within the field. Design/methodology/approach – Existing literature across several disciplines has been integrated to provide a clear description of the sorts of activities an individual undertakes in order to move from knowledge acquisition to innovation, and a clarification of the terms used to describe such activities is put forth. Findings – Adoption of a common vocabulary to describe KM activities provides a platform to better understand how best to manage these activities, and enables clearer identification of the knowledge management capabilities held by various sectors within the broader business community. Research limitations/implications – There is a need to undertake empirical research and in-depth case studies of knowledge management practices using a common vocabulary as a framework with which to interpret findings. Practical implications – The adoption of a common frame of reference to describe knowledge management activities will deepen understanding of current KM practices, enable identification of inhibitors and facilitators of KM, lead to increased dialogue between academia and industry, and present opportunities to the education sector to incorporate such a vocabulary into its curriculum. Originality/value – The framework presented here will remove the veil of mystery that currently clouds knowledge management and facilitate broader uptake of KM practices, thereby realising the benefits of a knowledge-based economy in the broader business community. |
Approaches to Identifying Synthetic Lethal Interactions in Cancer | Targeting synthetic lethal interactions is a promising new therapeutic approach to exploit specific changes that occur within cancer cells. Multiple approaches to investigate these interactions have been developed and successfully implemented, including chemical, siRNA, shRNA, and CRISPR library screens. Genome-wide computational approaches, such as DAISY, also have been successful in predicting synthetic lethal interactions from both cancer cell lines and patient samples. Each approach has its advantages and disadvantages that need to be considered depending on the cancer type and its molecular alterations. This review discusses these approaches and examines case studies that highlight their use. |
Artificial intelligence applied to computer forensics | The ability to examine large amounts of data in a timely manner in search of important evidence is essential to the success of computer forensic examinations. Limitations in time and resources, both computational and human, have a negative impact on the results obtained, so better use of the available resources is necessary, beyond the capabilities of currently used forensic tools. Herein, we describe the use of artificial intelligence in computer forensics through the development of a multiagent system with case-based reasoning. This system is composed of specialized intelligent agents that act based on expert knowledge of the technical domain. Their goal is to analyze and correlate the data contained in the evidence of an investigation and, based on this expertise, present the most interesting evidence to the human examiner, thus reducing the amount of data to be personally analyzed. The correlation feature helps to find links between pieces of evidence that can be easily overlooked by a human expert, especially given the amount of data involved. This system has been tested using real data, and the results were very positive when compared to those obtained by the human expert alone performing the same analysis. |
Attachment at (not to) work: applying attachment theory to explain individual behavior in organizations. | In this article, we report the results of 2 studies that were conducted to investigate whether adult attachment theory explains employee behavior at work. In the first study, we examined the structure of a measure of adult attachment and its relations with measures of trait affectivity and the Big Five. In the second study, we examined the relations between dimensions of attachment and emotion regulation behaviors, turnover intentions, and supervisory reports of counterproductive work behavior and organizational citizenship behavior. Results showed that anxiety and avoidance represent 2 higher order dimensions of attachment that predicted these criteria (except for counterproductive work behavior) after controlling for individual difference variables and organizational commitment. The implications of these results for the study of attachment at work are discussed. |
Patient-centredness: a conceptual framework and review of the empirical literature. | A 'patient-centred' approach is increasingly regarded as crucial for the delivery of high quality care by doctors. However, there is considerable ambiguity concerning the exact meaning of the term and the optimum method of measuring the process and outcomes of patient-centred care. This paper reviews the conceptual and empirical literature in order to develop a model of the various aspects of the doctor-patient relationship encompassed by the concept of 'patient-centredness' and to assess the advantages and disadvantages of alternative methods of measurement. Five conceptual dimensions are identified: biopsychosocial perspective; 'patient-as-person'; sharing power and responsibility; therapeutic alliance; and 'doctor-as-person'. Two main approaches to measurement are evaluated: self-report instruments and external observation methods. A number of recommendations concerning the measurement of patient-centredness are made. |
Power Assist Wear Driven with Pneumatic Rubber Artificial Muscles | In the coming aged society, an innovative technology to assist the activities of daily living of elderly and disabled people and to ease heavy work in nursing is desired. Developing such a technology requires an actuator that is safe and friendly to humans: it should be small and lightweight and provide appropriate softness. Pneumatic rubber artificial muscles are suitable as such actuators. We have developed several types of pneumatic rubber artificial muscles and applied them to wearable power assist devices. A wearable power assist device is worn on the human body to assist muscular force, supporting activities of daily living, rehabilitation, heavy work, training, and so on. In this paper, several types of pneumatic rubber artificial muscles developed in our laboratory are introduced, and two kinds of wearable power assist devices driven by these muscles are described. Evaluations clarify the effectiveness of pneumatic rubber artificial muscles for such innovative human assist technology. |
A 3.6mW 2.4-GHz multi-channel super-regenerative receiver in 130nm CMOS | Super-regeneration is re-examined for its simplicity and power efficiency for low-power, short-range communication. Although previous approaches rely on a high-quality off-chip LC-tuned circuit, this paper describes a fully integrated 2.4-GHz ISM band super-regenerative receiver implemented in 130 nm CMOS. Several new design features that take advantage of digital processing are proposed. A synthesizer scheme tunes the circuit for multi-channel operation. Frequency selectivity is improved through Q-enhancement. The entire receiver occupies less than 1 mm2, and consumes 3 mA from a 1.2 V supply, with a data rate of up to 500 Kbps, an energy per received bit of 7.2 nJ/bit, a channel spacing of 10 MHz, and a sensitivity of -80 dBm. |
Hierarchical rank and women's organizational mobility: glass ceilings in corporate law firms. | This article revives the debate over whether women's upward mobility prospects decline as they climb organizational hierarchies. Although this proposition is a core element of the "glass ceiling" metaphor, it has failed to gain strong support in previous research. The article establishes a firm theoretical foundation for expecting an increasing female disadvantage, with an eye toward defining the scope conditions and extending the model to upper-level external hires. The approach is illustrated in an empirical setting that meets the proposed scope conditions: corporate law firms in the United States. Results confirm that in this setting, the female mobility disadvantage is greater at higher organizational levels in the case of internal promotions, but not in the case of external hires. |
A hierarchical edge cloud architecture for mobile computing | The performance of mobile computing would be significantly improved by leveraging cloud computing and migrating mobile workloads for remote execution at the cloud. In this paper, to efficiently handle the peak load and satisfy the requirements of remote program execution, we propose to deploy cloud servers at the network edge and design the edge cloud as a tree hierarchy of geo-distributed servers, so as to efficiently utilize the cloud resources to serve the peak loads from mobile users. The hierarchical architecture of edge cloud enables aggregation of the peak loads across different tiers of cloud servers to maximize the amount of mobile workloads being served. To ensure efficient utilization of cloud resources, we further propose a workload placement algorithm that decides which edge cloud servers mobile programs are placed on and how much computational capacity is provisioned to execute each program. The performance of our proposed hierarchical edge cloud architecture on serving mobile workloads is evaluated by formal analysis, small-scale system experimentation, and large-scale trace-based simulations. |
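A toy sketch of the aggregation idea in the abstract above: a workload that overflows its tier-1 edge server is pushed up the server tree until some ancestor has spare capacity, so peak loads from many edges aggregate at higher tiers. The class names and the greedy upward walk are illustrative; the paper's placement algorithm also decides how much compute to provision per program, which is omitted here.

```python
class EdgeNode:
    """A server in the edge-cloud tree with a simple residual-capacity counter."""
    def __init__(self, name, capacity, parent=None):
        self.name, self.capacity, self.parent = name, capacity, parent

def place(workload, leaf):
    """Walk from the serving edge node toward the root until capacity is found."""
    node = leaf
    while node is not None:
        if node.capacity >= workload:
            node.capacity -= workload
            return node.name
        node = node.parent
    return "rejected"

root = EdgeNode("tier2-dc", capacity=100)
edge_a = EdgeNode("tier1-a", capacity=10, parent=root)

print(place(8, edge_a))  # tier1-a  (fits locally)
print(place(8, edge_a))  # tier2-dc (overflow aggregated at the higher tier)
```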
Direct feeding damage on cucumber by mixed-species infestations of Thrips palmi and Frankliniella occidentalis (Thysanoptera: Thripidae) | Distributions of Thrips palmi Karny and Frankliniella occidentalis (Pergande) within plants, and the relative contributions of each species to fruit scarring, were investigated in field plantings of cucumber, Cucumis sativus (L.). Densities of T. palmi (number per unit area of plant substrate) were greatest on foliage, whereas F. occidentalis densities were greatest on flowers. Densities of both species were lowest on fruits. Both species had secondary sex ratios biased towards females; the proportion of male F. occidentalis increased substantially in flower samples. Temporal variation in the incidence of fruit scarring, within and between field plantings, was related to variation in densities of F. occidentalis (but not T. palmi). Within-field spatial variation in fruit scarring on a given harvest was also associated primarily with variation in F. occidentalis densities. Because small, developing fruits physically support the female flowers, the high densities of F. occidentalis in flowers may create opportunities for them to incidentally feed upon and scar young fruit. |
Asymmetries of information in centralized order-driven markets | We study the efficiency of the equilibrium price in a centralized, order-driven market where many asymmetrically informed traders are active for many periods. We show that asymmetries of information can lead to suboptimal information revelation with respect to the symmetric case. In particular, we show that the more precise the information, the higher the incentive to reveal it, and that the value of private information is related to the volume of exogenous trade present on the market. Moreover, we prove that any informed trader, whatever his information, reveals his private signal during an active phase of the market, concluding that long pre-opening phases are not effective as an information-discovery device in the presence of strategic players. |
Enabling the coexistence of LTE and Wi-Fi in unlicensed bands | The expansion of wireless broadband access network deployments is resulting in increased scarcity of available radio spectrum. It is very likely that in the near future, cellular technologies and wireless local area networks will need to coexist in the same unlicensed bands. However, the two most prominent technologies, LTE and Wi-Fi, were designed to work in different bands and not to coexist in a shared band. In this article, we discuss the issues that arise from the concurrent operation of LTE and Wi-Fi in the same unlicensed bands from the point of view of radio resource management. We show that Wi-Fi is severely impacted by LTE transmissions; hence, the coexistence of LTE and Wi-Fi needs to be carefully investigated. We discuss some possible coexistence mechanisms and future research directions that may lead to successful joint deployment of LTE and Wi-Fi in the same unlicensed band. |
An Information Retrieval Approach to Short Text Conversation | Human-computer conversation is regarded as one of the most difficult problems in artificial intelligence. In this paper, we address one of its key sub-problems, referred to as short text conversation, in which, given a message from a human, the computer returns a reasonable response to the message. We leverage the vast amount of short conversation data available on social media to study the issue. We propose formalizing short text conversation as a search problem as a first step, and employing state-of-the-art information retrieval (IR) techniques to carry out the task. We investigate the significance as well as the limitations of the IR approach. Our experiments demonstrate that the retrieval-based model can make the system behave rather “intelligently”, when combined with a huge repository of conversation data from social media. |
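A minimal retrieval-based responder in the spirit of the abstract above: index post-response pairs, find the indexed post most similar to the incoming message, and return its paired response. TF-IDF with cosine similarity stands in for the paper's IR machinery, and the three-pair repository is obviously a toy.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny post-response repository; real systems index millions of social-media pairs.
posts = ["what a lovely sunny day", "my flight got delayed again",
         "just finished a great book"]
responses = ["enjoy the sunshine!", "sorry to hear that, hope it boards soon",
             "which one? I need a recommendation"]

vec = TfidfVectorizer().fit(posts)
index = vec.transform(posts)

def respond(message: str) -> str:
    """Return the response paired with the most similar indexed post."""
    sims = cosine_similarity(vec.transform([message]), index)
    return responses[sims.argmax()]

print(respond("such a sunny day"))  # -> "enjoy the sunshine!"
```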
Techniques for efficiently querying scientific workflow provenance graphs | A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches. |
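The abstract's point that lineage answers are sets of edges rather than sets of nodes can be illustrated on a small directed provenance graph; the sketch below uses networkx in place of the paper's query language, and the artifact names are made up.

```python
import networkx as nx

# Toy provenance DAG: an edge u -> v records that v was derived from u.
g = nx.DiGraph()
g.add_edges_from([("raw.xml", "filtered.xml"), ("filtered.xml", "aligned.xml"),
                  ("params.cfg", "aligned.xml"), ("aligned.xml", "tree.nex")])

# Lineage of "tree.nex": every dependency edge on some path into it.
nodes = nx.ancestors(g, "tree.nex") | {"tree.nex"}
lineage_edges = list(g.subgraph(nodes).edges())
print(lineage_edges)  # an edge set, which could itself be queried further
```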
Topological Centrality and Its Applications | Recent developments in network structure analysis show that it plays an important role in characterizing complex systems across many branches of science. Unlike previous network centrality measures, this paper proposes the notion of topological centrality (TC), reflecting the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach. |
An Evaluation of Journals Used in Doctoral Marketing Programs | Studies that rank the relative quality of scholarly marketing journals have relied primarily on expert opinion surveys and citation analyses. The authors use a new approach that combines elements of these two alternatives and compile a database of 6,294 citations (representing 3,423 different articles) from 109 syllabi obtained from a broad sampling of AACSB-International-accredited schools with marketing doctoral programs. The five most cited journals (Journal of Marketing, Journal of Consumer Research, Journal of Marketing Research, Marketing Science, and Journal of the Academy of Marketing Science) account for 66.5 percent of citations in the syllabi. Rankings of journals other than the top five vary markedly from previous journal quality studies. Few articles are cited in common across programs, and the authors find considerable variation even within individual seminar types. The findings provide a new basis for assessing the quality of journals and provide new insights about the content of doctoral programs. |
A prediction based approach for stock returns using autoregressive neural networks | This paper presents a prediction-based neural network approach for stock returns. An autoregressive neural network predictor is used to predict future stock returns. In this predictor, the differences between the values of the series of stock returns and a specified past value are the regression variables. Various error metrics have been used to evaluate the performance of the predictor. Experiments with real data from the National Stock Exchange of India (NSE) were employed to examine the accuracy of this method. |
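A compact sketch of an autoregressive neural predictor in the spirit of this abstract: lagged values of the series serve as regression inputs to a small feed-forward network. A synthetic series, the lag count, and the network size are stand-ins for the paper's NSE data and its differenced inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 500).cumsum()  # toy stand-in for a returns series

lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]                            # next value from the previous 5

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                   # hold out the last 50 points
pred = model.predict(X[-50:])
rmse = float(np.sqrt(np.mean((pred - y[-50:]) ** 2)))
print(f"hold-out RMSE: {rmse:.4f}")           # one of several possible error metrics
```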
Learning Maximal Marginal Relevance Model via Directly Optimizing Diversity Evaluation Measures | In this paper we address the issue of learning a ranking model for search result diversification. In the task, a model concerned with both query-document relevance and document diversity is automatically created with training data. Ideally, a diverse ranking model would be designed to meet the criterion of maximal marginal relevance, selecting documents that have the least similarity to previously selected documents. Also, an ideal learning algorithm for diverse ranking would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing methods, however, either fail to model the marginal relevance, or train ranking models by minimizing loss functions that are only loosely related to the evaluation measures. To deal with the problem, we propose a novel learning algorithm under the framework of Perceptron, which adopts a ranking model that maximizes marginal relevance at ranking time and can optimize any diversity evaluation measure in training. The algorithm, referred to as PAMM (Perceptron Algorithm using Measures as Margins), first constructs positive and negative diverse rankings for each training query, and then repeatedly adjusts the model parameters so that the margins between the positive and negative rankings are maximized. Experimental results on three benchmark datasets show that PAMM significantly outperforms the state-of-the-art baseline methods. |
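PAMM learns the model parameters, but the criterion it instantiates is the classic greedy maximal-marginal-relevance selection, sketched below with fixed weights; the dictionary-based relevance and similarity inputs are illustrative.

```python
def mmr_rank(candidates, relevance, similarity, k, lam=0.5):
    """Greedily pick k documents balancing relevance against redundancy.

    relevance:  dict doc -> query-relevance score
    similarity: dict (doc, selected_doc) -> similarity in [0, 1]
    lam:        trade-off between relevance and novelty
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def marginal(d):
            redundancy = max((similarity[(d, s)] for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(pool, key=marginal)
        selected.append(best)
        pool.remove(best)
    return selected
```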
Efficacy and safety of combination therapy with vildagliptin and metformin versus metformin up-titration in Chinese patients with type 2 diabetes mellitus: study design and rationale of the VISION study | BACKGROUND AND AIM
Limitations of the currently recommended stepwise treatment pathway for type 2 diabetes mellitus (T2DM), especially the failure of monotherapies to maintain good glycemic control, have prompted use of early, more aggressive combination therapies. The VISION study is designed to explore the efficacy and safety of vildagliptin as an add-on to metformin therapy compared with up-titration of metformin monotherapy in Chinese patients with T2DM.
METHODS
VISION, a 24-week, phase 4, prospective, randomized, multicenter, open-label, parallel-group study, will include 3312 Chinese T2DM patients aged ≥18 years who are inadequately controlled (HbA1c >6.5% and ≤9%) by metformin (750-1000 mg/day). Eligible patients will be randomized to receive either vildagliptin plus metformin or up-titration of metformin monotherapy (5:1). Patients will also be subgrouped (1:1:1:1) based on their age and body mass index (BMI): <60 years and <24 kg/m²; <60 years and ≥24 kg/m²; ≥60 years and <24 kg/m²; and ≥60 years and ≥24 kg/m².
CONCLUSION
The VISION study will test the hypothesis that early use of combination therapy with vildagliptin and metformin will provide good glycemic control and will be better tolerated than up-titration of metformin monotherapy. The study will also correlate these benefits with age and BMI. |
WIND-CHIMNEY | This paper suggests using a wind-catcher integrated with a solar-chimney in a single story building so that the resident might benefit from natural ventilation, a passive cooling system, and heating strategies; it would also help to decrease energy use, CO2 emissions, and pollution. This system is able to remove undesirable interior heat pollution from a building and provide thermal comfort for the occupant. The present study introduces the use of a solar-chimney with an underground air channel combined with a wind-catcher, all as part of one device. Both the wind-catcher and solar chimney concepts used for improving a room’s natural ventilation are individually and analytically studied. This paper shows that the solar-chimney can be completely used to control and improve the underground cooling system during the day without any electricity. With a proper design, the solar-chimney can provide a thermally comfortable indoor environment for many hours during hot summers. The end product of this thesis research is a natural ventilation system and techniques that improve air quality and thermal comfort levels in a single story building. The proposed wind-chimney could eventually be designed for use in commercial, retail, and multi-story buildings. |
Wrist Pulse Rate Monitor Using Self-Injection-Locked Radar Technology | To achieve sensitivity, comfort, and durability in vital sign monitoring, this study explores the use of radar technologies in wearable devices. The study first detected the respiratory rates and heart rates of a subject at a one-meter distance using a self-injection-locked (SIL) radar and a conventional continuous-wave (CW) radar to compare the sensitivity versus power consumption between the two radars. Then, a pulse rate monitor was constructed based on a bistatic SIL radar architecture. This monitor uses an active antenna that is composed of a SIL oscillator (SILO) and a patch antenna. When attached to a band worn on the subject's wrist, the active antenna can monitor the pulse on the subject's wrist by modulating the SILO with the associated Doppler signal. Subsequently, the SILO's output signal is received and demodulated by a remote frequency discriminator to obtain the pulse rate information. |
Eigenfaces for Recognition | We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture. |
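A minimal modern rendering of the pipeline described above: PCA extracts the leading eigenvectors ("eigenfaces") of a gallery of flattened face images, and recognition reduces to nearest-neighbour matching of projection weights. Random data stands in for real, aligned face images, and the component count is an arbitrary choice.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))   # gallery: flattened 64x64 images (toy data)

pca = PCA(n_components=20)           # the 20 leading eigenfaces
weights = pca.fit_transform(faces)   # each face as a vector of eigenface weights

def identify(probe, gallery_weights):
    """Nearest neighbour in eigenface space: compare weight vectors only."""
    w = pca.transform(probe.reshape(1, -1))
    return int(np.linalg.norm(gallery_weights - w, axis=1).argmin())

print(identify(faces[42], weights))  # -> 42
```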
Reinforced Adversarial Neural Computer for de Novo Molecular Design. | In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is still in its early stages. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures, based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator, RANC uses a differentiable neural computer (DNC), a category of neural networks with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results have shown that RANC, trained on the SMILES string representation of the molecules, outperforms its first DNN-based counterpart ORGANIC by several metrics relevant to drug discovery: the number of unique structures, structures passing medicinal chemistry filters (MCFs) and Muegge criteria, and high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and lengths of the SMILES strings in the training data set. Therefore, RANC can reasonably be regarded as a promising starting point to develop novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds. |
Leadership practices and staff nurses' intent to stay: a systematic review. | AIM
The aim of the present study was to describe the findings of a systematic review of the literature that examined the relationship between managers' leadership practices and staff nurses' intent to stay in their current position.
BACKGROUND
The nursing shortage demands that managers focus on the retention of staff nurses. Understanding the relationship between leadership practices and nurses' intent to stay is fundamental to retaining nurses in the workforce.
METHODS
Published English language articles on leadership practices and staff nurses' intent to stay were retrieved from computerized databases and a manual search. Data extraction and quality assessments were completed for the final 23 research articles.
RESULTS
Relational leadership practices influence staff nurses' intentions to remain in their current position.
CONCLUSION
This study supports a positive relationship between transformational leadership, supportive work environments and staff nurses' intentions to remain in their current positions. Incorporating relational leadership theory into management practices will influence nurse retention. Advancing current conceptual models will increase knowledge of intent to stay. Clarifying the distinction between the concepts intent to stay and intent to leave is needed to establish a clear theoretical foundation for further intent to stay research.
IMPLICATIONS FOR NURSE MANAGERS
Nurse managers and leaders who practice relational leadership and ensure quality workplace environments are more likely to retain their staff. The findings of the present study support the claim that leadership practices influence staff nurse retention and build on intent-to-stay knowledge. |
Convolutional auto-encoder for image denoising of ultra-low-dose CT | OBJECTIVES
The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. A neural network with a convolutional auto-encoder, trained on pairs of standard-dose and ultra-low-dose CT image patches, was used for image denoising. The performance of the proposed method was measured using a chest phantom.
MATERIALS AND METHODS
Standard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed neural-network method, with large-scale nonlocal mean, and with block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality.
RESULTS
For the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal mean were all less than 0.05.
CONCLUSION
A neural network with a convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal mean and block-matching and 3D filtering. |
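A minimal sketch of the patch-based training setup this abstract describes: a small convolutional auto-encoder regresses standard-dose patches from ultra-low-dose patches under an MSE loss. Layer sizes are illustrative, and random tensors stand in for real CT patch pairs.

```python
import torch
import torch.nn as nn

class DenoisingCAE(nn.Module):
    """Tiny convolutional auto-encoder for patch-to-patch denoising."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2, padding=1, output_padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

low_dose = torch.randn(8, 1, 64, 64)   # ultra-low-dose input patches (toy)
standard = torch.randn(8, 1, 64, 64)   # matching standard-dose targets (toy)
opt.zero_grad()
loss = nn.functional.mse_loss(model(low_dose), standard)
loss.backward()
opt.step()
```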
Wireless pulmonary artery pressure monitoring guides management to reduce decompensation in heart failure with preserved ejection fraction. | BACKGROUND
No treatment strategies have been demonstrated to be beneficial for the population of patients with heart failure (HF) and preserved ejection fraction (EF).
METHODS AND RESULTS
The CardioMEMS Heart Sensor Allows Monitoring of Pressure to Improve Outcomes in NYHA Class III Heart Failure Patients (CHAMPION) trial was a prospective, single-blinded, randomized controlled clinical trial testing the hypothesis that hemodynamically guided HF management decreases decompensation leading to hospitalization. Of the 550 patients enrolled in the study, 119 had left ventricular EF ≥40% (average, 50.6%), 430 patients had low left ventricular EF (<40%; average, 23.3%), and 1 patient had no documented left ventricular EF. A microelectromechanical system pressure sensor was permanently implanted in all participants during right heart catheterization. After implant, subjects were randomly assigned in single-blind fashion to a treatment group, in whom daily uploaded pressures were used in a treatment strategy for HF management, or to a control group, in whom standard HF management included weight monitoring and pressures were uploaded but not available for investigator use. The primary efficacy end point, the rate of HF hospitalization over 6 months, was 46% lower in the treatment group than in the control group among preserved-EF patients (incidence rate ratio, 0.54; 95% confidence interval, 0.38-0.70; P<0.0001). After an average of 17.6 months of blinded follow-up, the hospitalization rate was 50% lower (incidence rate ratio, 0.50; 95% confidence interval, 0.35-0.70; P<0.0001). In response to pulmonary artery pressure information, more changes in diuretic and vasodilator therapies were made in the treatment group.
CONCLUSIONS
Hemodynamically guided management of patients with HF with preserved EF reduced decompensation leading to hospitalization compared with standard HF management strategies.
CLINICAL TRIAL REGISTRATION URL
http://www.clinicaltrials.gov. Unique identifier: NCT00531661. |
Ethnicity, Insurgency, and Civil War Revisited |
A new IPT magnetic coupler for electric vehicle charging systems | Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible. |
A Comparison of Logistic Regression, Classification and Regression Tree, and Neural Networks Models in Predicting Violent Re-Offending | Previous studies that have compared logistic regression (LR), classification and regression tree (CART), and neural networks (NNs) models for their predictive validity have shown inconsistent results in demonstrating the superiority of any one model. The three models were tested in a prospective sample of 1225 UK male prisoners followed up for a mean of 3.31 years after release. Items in a widely used risk assessment instrument (the Historical, Clinical, Risk Management-20, or HCR-20) were used as predictors and violent reconvictions as outcome. A multi-validation procedure was used to reduce sampling error in reporting the predictive accuracy. The low base rate was controlled by using different measures in the three models to minimize prediction error and achieve a more balanced classification. Overall accuracy of the three models varied between 0.59 and 0.67, with an overall AUC range of 0.65–0.72. Although the performance of NNs was slightly better than that of the LR and CART models, it did not demonstrate a significant improvement.
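As a rough sketch of the model comparison described here, using scikit-learn with a synthetic low-base-rate stand-in for the HCR-20 data: the prisoner sample is not public, and this plain cross-validation is only a loose analogue of the paper's multi-validation procedure.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier  # CART-style tree
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 20 predictors (like the HCR-20 items), low base rate.
X, y = make_classification(n_samples=1225, n_features=20,
                           weights=[0.85], random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "CART": DecisionTreeClassifier(max_depth=4, random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                        random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.2f}")
```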
Temporal Data Mining for Educational Applications | Intelligent tutoring systems (ITSs) acquire rich data about students' behavior during learning; data mining techniques can help to describe, interpret and predict student behavior, and to evaluate progress in relation to learning outcomes. This paper surveys a variety of data mining techniques for analyzing how students interact with ITSs, including methods for handling hidden state variables and for testing hypotheses. To illustrate these methods we draw on data from two ITSs for math instruction. Educational datasets provide new challenges to the data mining community, including inducing action patterns, designing distance metrics, and inferring unobservable states associated with learning.
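Among the hidden-state methods a survey like this covers, Bayesian Knowledge Tracing is the standard worked example for ITS data. A minimal sketch follows, with illustrative parameter values rather than values fitted to the paper's math-tutor datasets.

```python
# Bayesian Knowledge Tracing: track P(skill known) from correct/incorrect
# responses. Parameters are illustrative, not fitted.
p_learn, p_guess, p_slip = 0.15, 0.2, 0.1

def bkt_update(p_know, correct):
    # Posterior that the skill was known, given the observed response.
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    post = num / den
    # Chance the skill is learned between practice opportunities.
    return post + (1 - post) * p_learn

p = 0.3  # prior probability the student already knows the skill
for obs in [1, 1, 0, 1, 1]:
    p = bkt_update(p, bool(obs))
    print(f"P(know) = {p:.3f}")
```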
Defect Management in Agile Software Development | Agile development reduces the risk of producing low-quality software by minimizing defects from the outset. In agile software development, formal defect management processes help to build quality software. The core purpose of defect management is to make the software more effective and efficient and thereby increase its quality. Software developers and testers handle defects using several methods, such as defect prevention, defect discovery, and defect resolution. Refactoring keeps the system clean by identifying and removing quality defects. To gain the full confidence of the customer, defect management should be involved at every stage of development. Agile methodologies deliver software in short iterations; each iteration helps to overcome defects, leading to better development and greater end-user satisfaction. This study describes how software defects are handled in an agile software development process.
Domain adaptation network based on hypergraph regularized denoising autoencoder | Domain adaptation learning aims to solve classification problems in an unlabeled target domain by using rich labeled samples from a source domain, but it faces three main problems: negative transfer, under-adaptation, and under-fitting. To address these problems, this paper proposes a domain adaptation network based on a hypergraph-regularized denoising autoencoder (DAHDA). To better fit the data distribution, the network is built with a denoising autoencoder, which extracts a more robust feature representation. In the final feature and classification layers, marginal and conditional distribution matching terms between domains are obtained via the maximum mean discrepancy measurement to solve the under-adaptation problem. To avoid negative transfer, a hypergraph regularization term is introduced to explore the high-order relationships among data. The classification performance of the model is improved by preserving the statistical properties and geometric structure simultaneously. Experimental results on 16 cross-domain transfer tasks verify that DAHDA outperforms other state-of-the-art methods.
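The distribution-matching terms mentioned here rest on the maximum mean discrepancy. A minimal self-contained sketch of a (biased) kernel MMD estimate between source and target features, independent of the DAHDA architecture itself; the Gaussian kernel bandwidth and the toy data are arbitrary choices.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise squared Euclidean distances, then an RBF kernel.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(src, tgt, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy."""
    k_ss = gaussian_kernel(src, src, sigma).mean()
    k_tt = gaussian_kernel(tgt, tgt, sigma).mean()
    k_st = gaussian_kernel(src, tgt, sigma).mean()
    return k_ss + k_tt - 2 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 16))  # source-domain features
tgt = rng.normal(0.5, 1.0, size=(200, 16))  # shifted target-domain features
print(f"MMD^2 = {mmd2(src, tgt):.3f}")
```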
Influence of environment, disturbance, and ownership on forest vegetation of coastal Oregon. | Information about how vegetation composition and structure vary quantitatively and spatially with physical environment, disturbance history, and land ownership is fundamental to regional conservation planning. However, current knowledge about patterns of vegetation variability across large regions that is spatially explicit (i.e., mapped) tends to be general and qualitative. We used spatial predictions from gradient models to examine the influence of environment, disturbance, and ownership on patterns of forest vegetation biodiversity across a large forested region, the 3-million-ha Oregon Coast Range (USA). Gradients in tree species composition were strongly associated with environment, especially climate, and insensitive to disturbance, probably because many dominant tree species are long-lived and persist throughout forest succession. In contrast, forest structure was strongly correlated with disturbance and only weakly with environmental gradients. Although forest structure differed among ownerships, differences were blurred by the presence of legacy trees that originated prior to current forest management regimes. Our multi-ownership perspective revealed biodiversity concerns and benefits not readily visible in single-ownership analyses, and all ownerships contributed to regional biodiversity values. Federal lands provided most of the late-successional and old-growth forest. State lands contained a range of forest ages and structures, including diverse young forest, abundant legacy dead wood, and much of the high-elevation true fir forest. Nonindustrial private lands provided diverse young forest and the greatest abundance of hardwood trees, including almost all of the foothill oak woodlands. Forest industry lands encompassed much early-successional forest, most of the mixed hardwood-conifer forest, and large amounts of legacy down wood. The detailed tree- and species-level data in the maps revealed regional trends that would be masked in traditional coarse-filter assessment. Although abundant, most early-successional forests originated after timber harvest and lacked legacy live and dead trees important as habitat and for other ecological functions. Many large-conifer forests that might be classified as old growth using a generalized forest cover map lacked structural features of old growth such as multilayered canopies or dead wood. Our findings suggest that regional conservation planning include all ownerships and land allocations, as well as fine-scale elements of vegetation composition and structure. |
Epidemiology of falls in residential aged care: analysis of more than 70,000 falls from residents of Bavarian nursing homes. | OBJECTIVE
Falls and fall-related injuries are leading problems in residential aged care facilities. The objective of this study was to provide descriptive data about falls in nursing homes.
DESIGN/SETTING/PARTICIPANTS
Prospective recording of all falls over 1 year covering all residents from 528 nursing homes in Bavaria, Germany.
MEASUREMENTS
Falls were reported on a standardized form that included a facility identification code, date, time of day, sex, age, degree of care need, location of the fall, and activity leading to the fall. Data detailing homes' bed capacities and occupancy levels were used to estimate total person-years under exposure and to calculate fall rates. All analyses were stratified by residents' degree of care need.
RESULTS
More than 70,000 falls were recorded during 42,843 person-years. The fall rate was higher in men than in women (2.18 and 1.49 falls per person-year, respectively). Fall risk differed by degree of care need, with lower fall risks in both the least and highest care categories. About 75% of all falls occurred in the residents' rooms or in the bathrooms, and only 22% were reported within the common areas. Transfers and walking were responsible for 41% and 36% of all falls, respectively. Fall risk varied during the day. Most falls were observed between 10 am and midday and between 2 pm and 8 pm.
CONCLUSION
The differing fall risk patterns in specific subgroups may help to target preventive measures. |
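The fall rates in the entry above are simple person-year rates. As a back-of-the-envelope check, the sketch below recomputes them; the split of falls and exposure between men and women is hypothetical, chosen only so the totals and rates match the figures reported in the abstract.

```python
# Hypothetical sex split of the reported totals (~70,000 falls,
# 42,843 person-years), chosen to reproduce the published rates.
falls = {"men": 19620, "women": 50400}
person_years = {"men": 9000.0, "women": 33843.0}

for sex in falls:
    rate = falls[sex] / person_years[sex]
    print(f"{sex}: {rate:.2f} falls per person-year")
print(f"overall: {sum(falls.values()) / sum(person_years.values()):.2f}")
```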
Nonparametric belief propagation | Continuous quantities are ubiquitous in models of real-world phenomena, but are surprisingly difficult to reason about automatically. Probabilistic graphical models such as Bayesian networks and Markov random fields, and algorithms for approximate inference such as belief propagation (BP), have proven to be powerful tools in a wide range of applications in statistics and artificial intelligence. However, applying these methods to models with continuous variables remains a challenging task. In this work we describe an extension of BP to continuous variable models, generalizing particle filtering and Gaussian mixture filtering techniques for time series to more complex models. We illustrate the power of the resulting nonparametric BP algorithm via two applications: kinematic tracking of visual motion and distributed localization in sensor networks.
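A core difficulty nonparametric BP addresses is that exact BP messages over continuous variables, e.g. Gaussian mixtures, grow combinatorially when multiplied. A minimal sketch of that exact product for 1-D mixtures, the operation NBP approximates by sampling; the message representation and toy values are illustrative, not from the paper.

```python
import numpy as np

def product_of_mixtures(m1, m2):
    """Exact product of two 1-D Gaussian mixtures, each given as
    (weights, means, variances). Component count multiplies: this blow-up
    is why nonparametric BP resorts to sample-based approximations."""
    w, mu, var = [], [], []
    for w1, mu1, v1 in zip(*m1):
        for w2, mu2, v2 in zip(*m2):
            v = 1.0 / (1.0 / v1 + 1.0 / v2)        # product variance
            m = v * (mu1 / v1 + mu2 / v2)          # product mean
            # Weight includes how much the two Gaussians "agree".
            z = np.exp(-0.5 * (mu1 - mu2) ** 2 / (v1 + v2)) / np.sqrt(
                2 * np.pi * (v1 + v2))
            w.append(w1 * w2 * z)
            mu.append(m)
            var.append(v)
    w = np.array(w)
    return w / w.sum(), np.array(mu), np.array(var)

# Two toy 2-component messages; their product has 4 components.
msg_a = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
msg_b = (np.array([0.7, 0.3]), np.array([0.8, 3.0]), np.array([0.4, 1.0]))
print(product_of_mixtures(msg_a, msg_b))
```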
Emotional controller (BELBIC) for electric drives — A review | Artificial intelligence (AI) and biologically inspired techniques, particularly neural networks, are having a significant impact on power electronics and electric drives. Neural networks have created a new and advancing frontier in power electronics, which is already a complex and multidisciplinary technology undergoing dynamic evolution in recent years. Recently, a new type of intelligent technique for control and decision-making was introduced that is based on the emotion-processing mechanism of the brain; it is essentially an action-selection mechanism driven by sensory inputs and emotional cues. This intelligent control is inspired by the limbic system of the mammalian brain. The proposed controller is called the brain emotional learning based intelligent controller (BELBIC). This paper gives a comprehensive introduction to and perspective on its applications in intelligent control for electric drives. The principal neural network topologies currently most relevant for applications in power electronics are reviewed, including a detailed description of their properties; both feedforward and feedback (recurrent) architectures are covered. The application examples discussed in this paper include control of different electric drives: direct current (DC) motors, alternating current (AC) motors, and special motors such as switched reluctance motors (SRM). In addition, almost all of the selected applications in the literature are included in the references. In the experimental and simulation works, novel and simple implementations of the drive systems were achieved using the intelligent controller, which controls the motor speed accurately at different operating points. This emotional intelligent controller has a simple structure with a strong auto-learning capability and does not require any motor parameters. The proposed emotional controller has been experimentally implemented in several laboratory electric drives and shows excellent promise for industrial-scale utilization.
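The BELBIC structure this review covers is usually described by a pair of amygdala/orbitofrontal weight updates. A minimal sketch of that learning rule, assuming the commonly cited formulation from the brain emotional learning literature; the gains, the scalar signals, and the fixed-input demonstration below are illustrative, not taken from the review.

```python
import numpy as np

def belbic_step(s, ec, v, w, k_a=0.1, k_o=0.1):
    """One BELBIC update: amygdala output A learns to track the emotional
    cue EC; the orbitofrontal output O inhibits overreaction."""
    a = np.dot(v, s)   # amygdala output
    o = np.dot(w, s)   # orbitofrontal output
    mo = a - o         # model (controller) output
    v = v + k_a * s * max(0.0, ec - a)  # amygdala weights only grow
    w = w + k_o * s * (mo - ec)         # orbitofrontal correction
    return mo, v, w

# Fixed sensory input and emotional cue: the output converges to the cue.
v, w = np.zeros(2), np.zeros(2)
s, ec = np.array([0.5, 1.0]), 0.8
for _ in range(100):
    mo, v, w = belbic_step(s, ec, v, w)
print(f"controller output after training = {mo:.3f} (target cue {ec})")
```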
Cooling field and temperature dependent exchange bias in spin glass/ferromagnet bilayers | We report on experimental and theoretical studies of cooling field (HFC) and temperature (T) dependent exchange bias (EB) in FexAu1-x/Fe19Ni81 spin glass (SG)/ferromagnet (FM) bilayers. When x varies from 8% to 14% in the FexAu1-x SG alloys, with increasing T, a sign-changeable exchange bias field (HE) together with a unimodal distribution of coercivity (HC) are observed. Significantly, increasing the magnitude of HFC reduces (increases) the value of HE in the negative (positive) region, causing the entire HE∼T curve to shift leftwards and upwards. Meanwhile, HFC variation has weak effects on HC. By Monte Carlo simulation using a SG/FM vector model, we are able to reproduce such HE dependences on T and HFC for the SG/FM system. This work thus reveals that the SG/FM bilayer system containing an intimately coupled interface, rather than a single SG layer, is responsible for the novel EB properties.
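For intuition about the shifted hysteresis loop behind an exchange bias field HE: a toy macrospin sketch in which a ferromagnet with uniaxial anisotropy K couples with strength J to a frozen pinning layer. This is a Stoner-Wohlfarth-style simplification, not the SG/FM vector Monte Carlo model used in the paper, and all parameter values are arbitrary.

```python
import numpy as np

K, J = 1.0, 0.4  # uniaxial anisotropy and unidirectional pinning (a.u.)

def grad_energy(theta, h):
    # E(theta) = -h*cos(theta) - J*cos(theta) - K*cos(theta)**2
    return (h + J) * np.sin(theta) + 2 * K * np.cos(theta) * np.sin(theta)

def relax(theta, h, lr=0.02, steps=1000):
    # Gradient descent tracks the local minimum, producing hysteresis.
    for _ in range(steps):
        theta -= lr * grad_energy(theta, h)
    return theta

fields = np.concatenate([np.linspace(3, -3, 301), np.linspace(-3, 3, 301)])
theta, mags = 0.01, []
for h in fields:
    theta = relax(theta + 1e-3, h)  # tiny kick lets unstable states escape
    mags.append(np.cos(theta))

mags = np.array(mags)
h_down = fields[:301][np.argmax(mags[:301] < 0)]  # down-branch switching
h_up = fields[301:][np.argmax(mags[301:] > 0)]    # up-branch switching
print(f"loop shift H_E = {(h_down + h_up) / 2:.2f} (about -J = {-J:.2f})")
```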