Efficient VLSI Implementation of Neural Networks With Hyperbolic Tangent Activation Function
Nonlinear activation functions are among the main building blocks of artificial neural networks. The hyperbolic tangent and the sigmoid are the most widely used nonlinear activation functions. Accurate implementation of these transfer functions in digital networks faces certain challenges. In this paper, an efficient approximation scheme for the hyperbolic tangent function is proposed. The approximation is based on a mathematical analysis that treats the maximum allowable error as a design parameter. A hardware implementation of the proposed approximation scheme is presented, showing that the proposed structure compares favorably with previous architectures in terms of area and delay. The proposed structure requires fewer output bits for the same maximum allowable error when compared to the state-of-the-art. The number of output bits of the activation function determines the bit width of the multipliers and adders in the network. Therefore, the proposed activation function reduces area, delay, and power in VLSI implementations of artificial neural networks with hyperbolic tangent activation.
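To make the flavor of such schemes concrete, the sketch below builds a piecewise-linear tanh approximation and checks its worst-case error against a target; the segment count, input range, and the symmetry/saturation shortcuts are common hardware-approximation choices assumed for illustration, not the paper's specific design.

```python
import numpy as np

def make_pwl_tanh(x_max=4.0, n_segments=32):
    """Piecewise-linear tanh on [0, x_max], exploiting tanh(-x) = -tanh(x)
    and saturation (tanh(x) ~ 1 beyond x_max), as hardware schemes often do."""
    edges = np.linspace(0.0, x_max, n_segments + 1)
    y = np.tanh(edges)
    slopes = np.diff(y) / np.diff(edges)

    def approx(x):
        x = np.asarray(x, dtype=float)
        s, ax = np.sign(x), np.minimum(np.abs(x), x_max)
        idx = np.minimum((ax / x_max * n_segments).astype(int), n_segments - 1)
        return s * (y[idx] + slopes[idx] * (ax - edges[idx]))

    return approx

# Treat the maximum allowable error as the design parameter: increase
# n_segments (and hence hardware cost) until the check passes.
approx = make_pwl_tanh()
xs = np.linspace(-8, 8, 100001)
print(f"max |error| = {np.max(np.abs(np.tanh(xs) - approx(xs))):.2e}")
```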
Feedback specificity, exploration, and learning.
Greater feedback specificity is generally considered to be beneficial for performance and learning, but the evidence for this generalization is limited. The authors argue that increasing the specificity of feedback is beneficial for initial performance but discourages exploration and undermines the learning needed for later, more independent performance. The results of their transfer experiment demonstrate that increasing the specificity of feedback positively affected practice performance, but its benefits did not endure over time or modification of the task. In addition, feedback specificity negatively affected levels of exploration during practice and interacted with exploration strategies to affect learning. The results suggest that those who received feedback of varying specificity may have learned through different but equally beneficial mechanisms.
Enumeration versus multiple object tracking: the case of action video game players
Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to index the number of items that can be apprehended. Using an enumeration task, in which participants had to determine the number of quickly flashed squares, accuracy measures showed near-ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A follow-up control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather to an enhancement in the slower, more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, training NVGPs on an action video game established the causal role of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.
Energy Management for Intelligent Buildings
The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy-generation-related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Agency also predicts that electricity usage for residential appliances will grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people's standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use of electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for the efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is the use of energy-efficient lighting and HVAC systems; the second is the deployment of energy management systems which use real-time price information to schedule loads so as to minimize energy bills. The latter scheme requires an intelligent power grid, or smart grid, which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligence and bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.
Computation offloading decisions for reducing completion time
We analyze the conditions in which offloading computation reduces completion time. We extend the existing literature by deriving an inequality that relates computation offloading system parameters to the bits per instruction ratio of a computational job. This ratio is the inverse of the arithmetic intensity. We then discuss how this inequality can be used to determine the computations that can benefit from offloading as well as the computation offloading systems required to make offloading beneficial for particular computations.
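The paper's exact inequality is not reproduced here, but the standard completion-time argument it builds on is easy to sketch: offloading a job of I instructions and D input bits helps roughly when the local execution time I/s_local exceeds the transfer time D/B plus the remote execution time I/s_remote; dividing through by I relates the system parameters to the bits-per-instruction ratio D/I. All symbols below are illustrative placeholders.

```python
def offloading_beneficial(bits_per_instruction, bandwidth_bps,
                          local_ips, remote_ips):
    """True when offloading is predicted to reduce completion time.

    From I/s_local > D/B + I/s_remote, dividing through by I:
        1/s_local - 1/s_remote > (D/I) / B
    i.e. offloading helps when the job's bits-per-instruction ratio
    (the inverse of its arithmetic intensity) is small enough.
    """
    return (1.0 / local_ips - 1.0 / remote_ips) > bits_per_instruction / bandwidth_bps

# Example: a compute-heavy job (0.01 bits per instruction) over a
# 100 Mbit/s link, with a remote server 10x faster than the device.
print(offloading_beneficial(bits_per_instruction=0.01, bandwidth_bps=100e6,
                            local_ips=1e9, remote_ips=1e10))  # True
```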
Professional Ethics in Accounting and Auditing
Ethics is a subject that touches all aspects of human life. As human and social relationships grow more complex, new needs arise; various professions have emerged in response to these requirements, and as times and circumstances change they gradually evolve and develop. Professions, born of the division of labor and the specialization of tasks, have become increasingly integrated and play their role in improving the general welfare of the communities they serve. The survival of a profession and its members depends on the type and quality of the services it provides, and on the credibility and trust acquired as a result of providing those services. Credibility and trust are the principal assets of every profession, and maintaining them is of paramount importance. This requires that the main goal of every profession and its members be community service, with personal interest pursued only within the framework of providing those services. Although professional accountancy bodies in different countries have long attempted to formulate codes of professional conduct in order to protect the public interest and to oblige accountants to adhere to professional ethics, such regulations alone evidently cannot solve the problems of the profession, as the accounting scandals occurring around the world show. In this paper, we first describe moral and ethical paradigms, professional ethics, and the history of ethics; we then discuss the basic features and elements of ethics, Thorne's model of ethical decision making, the standing of ethics and its role in professional advancement, growth and development, and ethical guidelines in accounting and auditing.
Effect of Occlusal Splints on the Temporomandibular Disorders, Dental Wear and Anxiety of Bruxist Children
OBJECTIVES To evaluate the effectiveness of occlusal splints in reducing the signs and symptoms of temporomandibular disorders (TMD), dental wear and anxiety in a group of bruxist children. METHODS All of the subjects were 3 to 6 years old, had complete primary dentition, class I occlusion and were classified as bruxist according to the minimal criteria of the ICSD for bruxism. For each child, anxiety was evaluated with the Conners' Parent Rating Scales (CPRS). TMD were evaluated using the RDC/TMD. The dental wear was processed in digital format with MATLAB® and LabVIEW® software to determine its size and form. The children were randomized into an experimental (n=19) and a control (n=17) group. The children in the experimental group used rigid bite plates for a two-year period, until mixed dentition. Afterwards, the CPRS and the RDC/TMD were applied again and dental casts were taken. Comparisons of the variables regarding dental wear, signs and symptoms of TMD and anxiety before and after treatment between the groups were analyzed using the t-test, the Wilcoxon rank sum test and the Mann-Whitney test. RESULTS The subjects in the experimental group showed no statistically significant difference regarding anxiety levels and dental wear when compared with the control group. The signs and symptoms of TMD were not reduced, except for the deviation in mouth opening. CONCLUSIONS The use of rigid occlusal bite plates was not efficient in reducing the signs of bruxism as a whole, but did reduce the deviation in mouth opening.
ACO Algorithms for the Traveling Salesman Problem
Ant algorithms [18, 14, 19] are a recently developed, population-based approach which has been successfully applied to several NP-hard combinatorial optimization problems [6, 13, 17, 23, 34, 40, 49]. As the name suggests, ant algorithms have been inspired by the behavior of real ant colonies, in particular, by their foraging behavior. One of the main ideas of ant algorithms is the indirect communication of a colony of agents, called (artificial) ants, based on pheromone trails (pheromones are also used by real ants for communication). The (artificial) pheromone trails are a kind of distributed numeric information which is modified by the ants to reflect their experience while solving a particular problem. Recently, the Ant Colony Optimization (ACO) metaheuristic has been proposed which provides a unifying framework for most applications of ant algorithms [15, 16] to combinatorial optimization problems. In particular, all the ant algorithms applied to the TSP fit perfectly into the ACO meta-heuristic and, therefore, we will also call these algorithms ACO algorithms. The first ACO algorithm, called Ant System (AS) [18, 14, 19], has been applied to the Traveling Salesman Problem (TSP). Starting from Ant System, several improvements of the basic algorithm have been proposed [21, 22, 17, 51, 53, 7]. Typically, these improved algorithms have been tested again on the TSP. All these improved versions of AS have in common a stronger exploitation of the best solutions found during the search.
Effect of Lactobacillus plantarum 299v on cardiovascular disease risk factors in smokers.
BACKGROUND The short-chain fatty acids formed in the human colon by the bacterial fermentation of fiber may have an antiinflammatory effect, may reduce insulin production, and may improve lipid metabolism. We previously showed in hypercholesterolemic patients that supplementation with the probiotic bacteria Lactobacillus plantarum 299v significantly lowers concentrations of LDL cholesterol and fibrinogen. OBJECTIVE We determined the influence of a functional food product containing L. plantarum 299v on lipid profiles, inflammatory markers, and monocyte function in heavy smokers. DESIGN Thirty-six healthy volunteers (18 women and 18 men) aged 35-45 y participated in a controlled, randomized, double-blind trial. The experimental group drank 400 mL/d of a rose-hip drink containing L. plantarum 299v (5 x 10(7) colony-forming units/mL); the control group consumed the same volume of product without bacteria. The experiment lasted 6 wk and entailed no changes in lifestyle. RESULTS Significant decreases in systolic blood pressure (P < 0.000), leptin (P < 0.000), and fibrinogen (P < 0.001) were recorded in the experimental group. No such changes were observed in the control group. Decreases in F(2)-isoprostanes (37%) and interleukin 6 (42%) were also noted in the experimental group in comparison with baseline. Monocytes isolated from subjects treated with L. plantarum showed significantly reduced adhesion (P < 0.001) to native and stimulated human umbilical vein endothelial cells. CONCLUSION L. plantarum administration leads to a reduction in cardiovascular disease risk factors and could be useful as a protective agent in the primary prevention of atherosclerosis in smokers.
Separating agreement from execution for byzantine fault tolerant services
We describe a new architecture for Byzantine fault tolerant state machine replication that separates agreement that orders requests from execution that processes requests. This separation yields two fundamental and practically significant advantages over previous architectures. First, it reduces replication costs because the new architecture can tolerate faults in up to half of the state machine replicas that execute requests. Previous systems can tolerate faults in at most a third of the combined agreement/state machine replicas. Second, separating agreement from execution allows a general privacy firewall architecture to protect confidentiality through replication. In contrast, replication in previous systems hurts confidentiality because exploiting the weakest replica can be sufficient to compromise the system. We have constructed a prototype and evaluated it running both microbenchmarks and an NFS server. Overall, we find that the architecture adds modest latencies to unreplicated systems and that its performance is competitive with existing Byzantine fault tolerant systems.
Towards bipolar linguistic summaries: a novel fuzzy bipolar querying based approach
We study the possibility of extending the concept of linguistic data summaries by employing the notion of bipolarity. Yager's linguistic summaries may be derived using a fuzzy linguistic querying interface. We look for a similar analogy between bipolar queries and an extended form of linguistic summaries. The general concept of the bipolar query and its special interpretation are recalled, which turn out to be applicable to accomplishing our goal. Some preliminary results are presented and possible directions of further research are pointed out.
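For readers unfamiliar with the starting point, a classical (non-bipolar) Yager-style summary "Q objects are S" is scored by a degree of truth; a minimal sketch follows, with the membership functions chosen purely for illustration.

```python
def truth_degree(values, mu_S, mu_Q):
    """Degree of truth of the summary 'Q objects are S' (Yager's schema).

    T = mu_Q( (1/n) * sum_i mu_S(y_i) ), where mu_S is the fuzzy predicate
    and mu_Q a relative fuzzy quantifier such as 'most'.
    """
    r = sum(mu_S(y) for y in values) / len(values)
    return mu_Q(r)

# Illustrative memberships: 'young' employees and the quantifier 'most'.
mu_young = lambda age: max(0.0, min(1.0, (40 - age) / 15))  # 1 below 25, 0 above 40
mu_most = lambda r: max(0.0, min(1.0, (r - 0.3) / 0.45))    # ramps up on [0.3, 0.75]

ages = [23, 28, 31, 35, 44, 52, 26]
print(truth_degree(ages, mu_young, mu_most))  # truth of 'most employees are young'
```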
Generating a business model canvas for Future-Internet-based logistics control towers
In order to see technological innovations entering and gaining prevalence in a market, viable business models are necessary for companies that are going to employ and deliver such innovations. The business model canvas is an approach to describe and visualize a planned business model idea and is especially suited for innovative enterprises and start-ups. In this paper, a business model canvas is presented that has been developed to examine possible ways of exploiting the potential of Future-Internet-based logistics control towers in the transportation and logistics domain.
An unknown story: Majorana and the Pauli‐Weisskopf scalar electrodynamics
An account is given of an interesting but little-known theory by Majorana regarding scalar quantum electrodynamics, elaborated several years before the well-known Pauli-Weisskopf theory. Theoretical calculations and their interpretation are given in detail, together with a general historical discussion of the main steps towards the building of a quantum field theory for electrodynamics. A possible peculiar application to nuclear constitution, as conceived around 1930 and considered by Majorana, is discussed as well.
Viral Marketing for Multiple Products
Viral marketing, the idea of exploiting the social interactions of users to propagate awareness of products, has gained considerable attention in recent years. One of the key issues in this area is to select the best seeds that maximize the influence propagated in the social network. In this paper, we define the seed selection problem (called t-Influence Maximization, or t-IM) for multiple products. Specifically, given the social network and t products along with their seed requirements, we want to select seeds for each product that maximize the overall influence. As the seeds are typically sent promotional messages, to avoid spamming users, we put a hard constraint on the number of products for which any single user can be selected as a seed. In this paper, we design two efficient techniques for the t-IM problem, called Greedy and FairGreedy. The Greedy algorithm uses simple greedy hill climbing, but still results in a 1/3-approximation to the optimum. Our second technique, FairGreedy, not only allocates seeds with high overall influence (close to Greedy in practice), but also ensures fairness across the influence of different products. We also design efficient heuristics for estimating the influence of the selected seeds, which are crucial for running the seed selection on large social network graphs. Finally, using extensive simulations on real-life social graphs, we show the effectiveness and scalability of our techniques compared to existing and naive strategies.
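The paper's Greedy and FairGreedy algorithms are not reproduced here, but the underlying greedy hill climbing over an estimated influence function can be sketched for a single product as follows; the graph, cascade probability, and Monte-Carlo settings are toy assumptions, and t-IM additionally handles multiple products, per-user seed caps, and fairness.

```python
import random

def simulate_ic(graph, seeds, trials=200, p=0.1):
    """Monte-Carlo estimate of influence spread under the independent cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k):
    """Greedy hill climbing: repeatedly add the node with the largest
    estimated marginal gain in spread."""
    seeds = set()
    for _ in range(k):
        base = simulate_ic(graph, seeds) if seeds else 0.0
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = simulate_ic(graph, seeds | {v}) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_seeds(graph, k=2))
```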
Complexity leadership: a healthcare imperative.
PROBLEM The healthcare system is plagued with increasing cost and poor quality outcomes. A major contributing factor for these issues is that outdated leadership practices, such as leader-centricity, linear thinking, and poor readiness for innovation, are being used in healthcare organizations. SOLUTION Complexity leadership theory provides a new framework with which healthcare leaders may practice leadership. Complexity leadership theory conceptualizes leadership as a continual process that stems from collaboration, complex systems thinking, and innovation mindsets. CONCLUSION Compared to transactional and transformational leadership concepts, complexity leadership practices hold promise to improve cost and quality in health care.
Does #like4like indeed provoke more likes?
Hashtags, created by social network users, have gained huge popularity in recent years. As a kind of metatag for organizing information, hashtags in online social networks, especially in Instagram, have greatly facilitated users' interactions. In recent years, academics have started to use hashtags to reshape our understanding of how users interact with each other. #like4like is one of the most popular hashtags on Instagram, with more than 290 million photos appended with it. When a publisher uses #like4like on a photo, it means that he will like back the photos of those who like this photo. Different from other hashtags, #like4like implies an interaction between a photo's publisher and a user who likes the photo, and both of them aim to attract likes on Instagram. In this paper, we study whether #like4like indeed serves the purpose it was created for, i.e., will #like4like provoke more likes? We first perform a general analysis of #like4like with 1.8 million photos collected from Instagram, and discover that its quantity increased dramatically, by 1,300 times, from 2012 to 2016. Then, we study whether #like4like attracts likes for photo publishers; results show that it is not #like4like but actually photo content that attracts more likes, and that the lifespan of a #like4like photo is quite limited. In the end, we study whether users who like #like4like photos receive likes from #like4like publishers. Results show that more than 90% of the publishers do not keep their promises, i.e., they do not like back others who like their #like4like photos; and for those who do keep their promises, the photos they like back are often randomly selected.
Total order broadcast and multicast algorithms: Taxonomy and survey
Total order broadcast and multicast (also called atomic broadcast/multicast) present an important problem in distributed systems, especially with respect to fault-tolerance. In short, the primitive ensures that messages sent to a set of processes are, in turn, delivered by all those processes in the same total order.
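One of the simplest realizations of this primitive, and one family covered by such surveys, is a fixed sequencer: a distinguished process stamps each message with a consecutive sequence number, and every process delivers in stamp order, buffering gaps. A toy, failure-free sketch (all names illustrative):

```python
import heapq

class Sequencer:
    """Assigns a global, consecutive sequence number to each broadcast."""
    def __init__(self):
        self.next_seq = 0
    def order(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class Process:
    """Delivers messages strictly in sequence-number order, buffering gaps."""
    def __init__(self, name):
        self.name, self.expected, self.pending, self.delivered = name, 0, [], []
    def receive(self, seq, msg):
        heapq.heappush(self.pending, (seq, msg))
        while self.pending and self.pending[0][0] == self.expected:
            _, m = heapq.heappop(self.pending)
            self.delivered.append(m)
            self.expected += 1

sequencer = Sequencer()
msgs = [sequencer.order(m) for m in ["a", "b", "c"]]
p1, p2 = Process("p1"), Process("p2")
for seq, m in msgs:            # p1 receives in order
    p1.receive(seq, m)
for seq, m in reversed(msgs):  # p2 receives reordered by the network
    p2.receive(seq, m)
print(p1.delivered, p2.delivered)  # ['a', 'b', 'c'] ['a', 'b', 'c']
```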
Application of extended Kalman filter to parameter estimation of doubly-fed induction generators in variable-speed wind turbine systems
This paper proposes a parameter estimation method for doubly-fed induction generators (DFIGs) in variable-speed wind turbine systems (WTS). The proposed method employs an extended Kalman filter (EKF) for estimation of all electrical parameters of the DFIG, i.e., the stator and rotor resistances, the leakage inductances of stator and rotor, and the mutual inductance. The nonlinear state space model of the DFIG is derived and the design procedure of the EKF is described. The observability matrix of the linearized DFIG model is computed and the observability is checked online for different operation conditions. The estimation performance of the EKF is illustrated by simulation results. The estimated parameters are plotted against their actual values. The estimation performance of the EKF is also tested under variations of the DFIG parameters to investigate the estimation accuracy for changing parameters.
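The DFIG model itself is involved, but the EKF recursion such estimators build on is standard and worth sketching; f, h and their Jacobians below are placeholders for the machine's nonlinear state-space model, not the paper's equations. For joint parameter estimation, the unknown parameters (e.g., resistances and inductances) are typically appended to the state vector with constant dynamics and estimated alongside the electrical states.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P : state estimate and covariance;  u : input;  z : measurement
    f, h : nonlinear process / measurement models
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R : process / measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```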
A Lightweight Secure Data Sharing Scheme for Mobile Cloud Computing
With the popularity of cloud computing, mobile devices can store/retrieve personal data from anywhere at any time. Consequently, the data security problem in the mobile cloud is becoming more and more severe and hinders further development of the mobile cloud. There have been substantial studies conducted to improve cloud security. However, most of them are not applicable to the mobile cloud, since mobile devices only have limited computing resources and power. Solutions with low computational overhead are in great need for mobile cloud applications. In this paper, we propose a lightweight data sharing scheme (LDSS) for mobile cloud computing. It adopts CP-ABE, an access control technology used in normal cloud environments, but changes the structure of the access control tree to make it suitable for mobile cloud environments. LDSS moves a large portion of the computationally intensive access control tree transformation in CP-ABE from mobile devices to external proxy servers. Furthermore, to reduce the user revocation cost, it introduces attribute description fields to implement lazy revocation, which is a thorny issue in program-based CP-ABE systems. The experimental results show that LDSS can effectively reduce the overhead on the mobile device side when users are sharing data in mobile cloud environments.
Teaching theory of mind: a new approach to social skills training for individuals with autism.
This study examined the effectiveness of a social skills training program for normal-IQ adolescents with autism. Five boys participated in the 4 1/2-month treatment condition; four boys matched on age, IQ, and severity of autism constituted the no-treatment control group. In addition to teaching specific interactional and conversational skills, the training program provided explicit and systematic instruction in the underlying social-cognitive principles necessary to infer the mental states of others (i.e., theory of mind). Pre- and post-intervention assessment demonstrated meaningful change in the treatment group's performance on several false belief tasks, but no improvement in the control sample. No changes, however, were demonstrated on general parent and teacher ratings of social competence for either group.
The VMware mobile virtualization platform: is that a hypervisor in your pocket?
The virtualization of mobile devices such as smartphones, tablets, netbooks, and MIDs offers significant potential in addressing the mobile manageability, security, cost, compliance, application development and deployment challenges that exist in the enterprise today. Advances in mobile processor performance, memory and storage capacities have led to the availability of many of the virtualization techniques that have previously been applied in the desktop and server domains. Leveraging these opportunities, VMware's Mobile Virtualization Platform (MVP) makes use of system virtualization to deliver an end-to-end solution for facilitating employee-owned mobile phones in the enterprise. In this paper we describe the use case behind MVP, and provide an overview of the hypervisor's design and implementation. We present a novel system architecture for mobile virtualization and describe key aspects of both core and platform virtualization on mobile devices.
Developing an Assessment Method of Active Aging: University of Jyvaskyla Active Aging Scale.
OBJECTIVE To develop an assessment method of active aging for research on older people. METHOD A multiphase process that included drafting by an expert panel, a pilot study for item analysis and scale validity, a feedback study with focus groups and questionnaire respondents, and a test-retest study. Altogether 235 people aged 60 to 94 years provided responses and/or feedback. RESULTS We developed a 17-item University of Jyvaskyla Active Aging Scale with four aspects in each item (goals, ability, opportunity, and activity; range 0-272). The psychometric and item properties are good and the scale assesses a unidimensional latent construct of active aging. DISCUSSION Our scale assesses older people's striving for well-being through activities pertaining to their goals, abilities, and opportunities. The University of Jyvaskyla Active Aging Scale provides a quantifiable measure of active aging that may be used in postal questionnaires or interviews in research and practice.
The effectiveness of dialectical behaviour therapy in routine public mental health settings: An Australian controlled trial.
Randomised controlled studies in research environments have demonstrated dialectical behaviour therapy (DBT) to be more efficacious than treatment as usual in reducing suicidal behaviour in patients with borderline personality disorder (BPD). Limited evidence exists for the effectiveness of DBT in the treatment of BPD within routine clinical settings. This study examines the clinical and cost effectiveness of providing DBT over treatment as usual in a routine Australian public mental health service. Forty-three adult patients with BPD were provided with outpatient DBT for six months with patient outcomes compared to those obtained from patients in a wait list group receiving treatment as usual (TAU) from the same service. After six months of treatment the DBT group showed significantly greater reductions in suicidal/non-suicidal self-injury, emergency department visits, psychiatric admissions and bed days. Self-report measures were administered to a reduced sample of patients. With this group, DBT patients demonstrated significantly improved depression, anxiety and general symptom severity scores compared to TAU at six months. Average treatment costs were significantly lower for those patients in DBT than those receiving TAU. Therapists who received intensive DBT training were shown to produce significantly greater improvements in patients' suicidal and non-suicidal self-injury than therapists who received only 4 day basic training. Further clinical improvements were achieved in patients offered an additional six months of DBT. This study demonstrates that providing DBT to patients within routine public mental health settings can be both clinically effective and cost effective.
Unlocking the black box: exploring the link between high-performance work systems and performance.
With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.
Use of microsurgery and iloprost in the infantile arterial injuries.
BACKGROUND To evaluate the use and advantage of microsurgical intervention and intravenous iloprost administration in delayed infantile artery injuries. METHODS AND RESULTS Four patients were followed up and treated in our clinic between June 2003 and June 2006 for infantile artery injuries and distal ischemia. The average age of the 4 infants (3 girls, 1 boy) was 134.7+/-33.6 days. All of the artery injuries were iatrogenic. Tissue necrosis had started in patches in the 2 babies who were admitted after the 12th hour of ischemia (at the 19th and 22nd hours), and therefore the artery was repaired by microsurgery. Iloprost infusion was also used in addition to the conservative treatments. The other 2 patients were assessed within the first 12 h of distal ischemia and were treated with iloprost without any surgical intervention. None of the patients lost any tissue or extremities during the 9-month (average) follow-up period. One of our patients died following ventricular septal defect repair in the 9th month, after a successful repair of the artery. DISCUSSION We believe that intravenous iloprost infusion is very effective in the treatment of distal ischemia when used in addition to conservative treatment methods for artery injuries in infants.
Individual Differences in the Benefits of Feedback for Learning
OBJECTIVE Research on learning from feedback has produced ambiguous guidelines for feedback design--some have advocated minimal feedback, whereas others have recommended more extensive feedback that highly supports performance. The objective of the current study was to investigate how individual differences in cognitive resources may predict feedback requirements and resolve previously conflicting findings. METHOD Cognitive resources were controlled for by comparing samples from populations with known differences: older and younger adults. To control for task demands, a simple rule-based learning task was created in which participants learned to identify fake Windows pop-ups. Pop-ups were divided into two categories--those that required fluid ability to identify and those that could be identified using crystallized intelligence. RESULTS In general, results showed that participants given more extensive feedback learned more. However, when analyzed by type of task demand, younger adults performed comparably with both levels of feedback for both cue types, whereas older adults benefited from increased feedback for fluid-ability cues but from decreased feedback for crystallized-ability cues. CONCLUSION One explanation for the current findings is that feedback requirements are connected to the cognitive abilities of the learner--those with higher abilities for the type of demands imposed by the task are likely to benefit from reduced feedback. APPLICATION We suggest the following consideration for feedback design: incorporate learner characteristics and task demands when designing learning support via feedback.
A cognitive-motivational analysis of anxiety.
Evidence of preattentive and attentional biases in anxiety is evaluated from a cognitive-motivational perspective. According to this analysis, vulnerability to anxiety stems mainly from a lower threshold for appraising threat, rather than a bias in the direction of attention deployment. Thus, relatively innocuous stimuli are evaluated as having higher subjective threat value by high than low trait anxious individuals, and it is further assumed that everyone orients to stimuli that are judged to be significantly threatening. This account is contrasted with other recent cognitive models of anxiety, and implications for the etiology, maintenance and treatment of anxiety disorders are discussed.
Optimality of Dual Methods for Discrete Multiuser Multicarrier Resource Allocation Problems
Dual methods based on Lagrangian relaxation are the state of the art to solve multiuser multicarrier resource allocation problems. This applies to concave utility functions as well as to practical systems employing adaptive modulation, in which users' data rates can be described by step functions. We show that this discrete resource allocation problem can be formulated as an integer linear program belonging to the class of multiple-choice knapsack problems. As a knapsack problem with additional constraints, this problem is NP-hard, but facilitates approximation algorithms based on Lagrangian relaxation. We show that these dual methods can be described as rounding methods. As an immediate result, we conclude that prior claims of optimality, based on a vanishing duality gap, are insufficient. To answer the question of optimality of dual methods for discrete multicarrier resource allocation problems, we present bounds on the absolute integrality gap for three exemplary downlink resource allocation problems with different objectives when employing rounding methods. The obtained bounds are asymptotically optimal in the sense that the relative performance loss vanishes as the number of subcarriers tends to infinity. The exemplary problems considered in this work are sum rate maximization, sum power minimization and max-min fairness.
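As a concrete miniature of the dual approach (in its sum-rate-maximization flavor, with toy data): relax the total power constraint with a multiplier, let each subcarrier independently pick the discrete rate level maximizing rate minus priced power, and bisect on the multiplier. The resulting discrete pick is precisely the kind of "rounding" the paper analyzes; its gap to the true optimum is what the integrality-gap bounds control.

```python
def relaxed_allocation(options, lam):
    """Per subcarrier, pick the (rate, power) option maximizing rate - lam*power.
    options[c] is a list of discrete (rate, power) choices for subcarrier c."""
    alloc = [max(opts, key=lambda rp: rp[0] - lam * rp[1]) for opts in options]
    return alloc, sum(p for _, p in alloc)

def dual_bisection(options, p_budget, iters=60):
    """Bisect on the multiplier until the relaxed power meets the budget."""
    lo, hi = 0.0, 100.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, power = relaxed_allocation(options, lam)
        if power > p_budget:
            lo = lam   # too much power: price power more heavily
        else:
            hi = lam
    return relaxed_allocation(options, hi)  # hi is always feasible

# Toy instance: 3 subcarriers, each with discrete (rate, power) levels.
options = [[(0, 0.0), (1, 0.5), (2, 1.4)],
           [(0, 0.0), (1, 0.3), (2, 0.9)],
           [(0, 0.0), (1, 0.8), (2, 2.0)]]
alloc, power = dual_bisection(options, p_budget=2.0)
print(alloc, power)  # a feasible discrete allocation from the dual solution
```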
ARCTIC: metadata extraction from scientific papers in pdf using two-layer CRF
Most scientific articles are available in PDF format. The PDF standard allows the generation of metadata that is included within the document. However, many authors do not define this information, making this feature unreliable or incomplete. This fact has been motivating research which aims to extract metadata automatically. Automatic metadata extraction has been identified as one of the most challenging tasks in document engineering. This work proposes Artic, a method for metadata extraction from scientific papers which employs a two-layer probabilistic framework based on Conditional Random Fields. The first layer aims at identifying the main sections with metadata information, and the second layer finds, for each section, the corresponding metadata. Given a PDF file containing a scientific paper, Artic extracts the title, author names, emails, affiliations, and venue information. We report on experiments using 100 real papers from a variety of publishers. Our results outperformed the state-of-the-art system used as the baseline, achieving a precision of over 99%.
Development of a finite element model of the human body
A finite element human model, THUMS (Total HUman Model for Safety), was developed in order to study human body responses to impact loads. This paper briefly describes the structure of the human model, as well as some of the results of the simulations conducted to validate the model.

INTRODUCTION A finite element model of the whole human body, THUMS (Total HUman Model for Safety), has been developed. The purpose of the THUMS model is for LS-DYNA users to simulate responses of the human body sustaining impact loads. This paper briefly describes the structure of the human model, as well as some results of the simulations conducted to validate the model. Finally, we describe the further development plan for the model.

OVERVIEW OF THE THUMS MODEL The THUMS model represents a 50th-percentile American adult male in a seated posture. Figure 1 shows the whole structure of the THUMS model with some soft tissues removed to expose the skeletal structure. The model contains about sixty thousand nodes and eighty thousand elements, including thirty thousand solid elements, fifty thousand shell elements and three thousand bar or beam elements. [Figure 1: The THUMS model with some soft tissues removed to expose the skeletal structure.] The whole skeletal structure is modeled in the THUMS model. Each bone consists of solid elements and shell elements; the solid elements represent the cancellous bone while the shell elements represent the cortical bone. In the joints of the THUMS model, ligaments that connect the bones are modeled using shell elements or beam elements, and sliding interfaces are defined on the contacting surfaces of these bones. In the thoracic cavity, the lung and heart are modeled as a single continuum body with solid elements. Abdominal organs are also modeled as continuum bodies. The skin and muscles that cover the bones are modeled with solid elements. The material properties of the tissues have been taken from Yamada (1970).

VALIDATION OF THE THUMS MODEL Thoracic frontal impact simulation: We simulated the published cadaver impact tests conducted by Kroell et al. (1971 and 1974) to validate the response of the THUMS model to thoracic frontal impact. Figure 2 shows the set-up of the THUMS model and the pendulum. Though all the soft tissues are included in the simulation, the right half of the soft tissues is removed in Figure 2 to expose the skeletal structure. The arrow in the figure shows the direction of the pendulum's motion. In this simulation the mass and initial velocity of the pendulum are 23.4 kg and 6.9 m/s, respectively. Thoracic side impact simulation: We simulated the cadaver impact test conducted by Bouquet et al. (1994). Figure 3 shows the configuration of the THUMS model and the pendulum. In this simulation the mass and initial velocity of the pendulum are 23.4 kg and 5.9 m/s, respectively. The arrow in the figure shows the direction of the pendulum's motion. [Figure 2: Set-up of the thoracic frontal impact simulation. Figure 3: Set-up of the thoracic side impact simulation.] Pelvic side impact simulation: We simulated the cadaver impact experiment reported by Viano (1989). Figure 4 shows the set-up of the THUMS model and the pendulum. In this simulation the mass and initial velocity of the pendulum are 23.4 kg and 9.5 m/s, respectively. The arrow in the figure indicates the direction of the pendulum's motion. Abdominal frontal impact simulation: We simulated the cadaver impact tests conducted by Nusholtz et al. (1988). Figure 5 shows the THUMS model and the pendulum, which represents the lower half of a steering wheel. In this simulation the mass and initial velocity of the pendulum are 18 kg and 10 m/s, respectively. The arrow in the figure indicates the direction of the pendulum's motion. [Figure 4: Set-up of the pelvic side impact simulation. Figure 5: Set-up of the abdominal frontal impact simulation.]

RESULTS AND DISCUSSION Thoracic frontal impact simulation: Figure 6 shows the relation between the chest deflection in the antero-posterior direction and the impact force sustained by the chest. The solid line shows the simulation result, while the dashed lines show the upper and lower limits of the corridor obtained by Kroell et al. (1971 and 1974). The simulation result shows good agreement with the experimental data. Figure 7 shows the deformation of the model at 0, 20, 40 and 60 ms. The right half of the soft tissues is not shown in this figure, to expose the skeletal structure. Although the overall force-deflection response of the model shows good agreement with the experimental corridor, the unloading path of the model response deviates from the corridor. Therefore, we may add more energy-absorbing capability to the materials of the thoracic tissues so that the unloading path of the model response can be more accurate. [Figure 6: Force-deflection curve obtained from the thoracic frontal impact simulation and the cadaver tests conducted by Kroell et al. (1971, 1974). Figure 7: Deformation of the body obtained from the thoracic frontal impact simulation.]
Exploring semi-supervised and active learning for activity recognition
In recent years, research on human activity recognition using wearable sensors has achieved impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error-prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning; in this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar, and sometimes even better, performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.
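A minimal self-training loop of the kind the first technique builds on is sketched below; the classifier, confidence threshold, and round count are illustrative assumptions (the paper's co-training variant additionally maintains two learners on different views of the data).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
    """Iteratively promote confident predictions on unlabeled data to labels."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        clf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        # Add confidently pseudo-labeled samples to the training set.
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, clf.classes_[pred[keep]]])
        X_unlab = X_unlab[~keep]
    return RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
```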
Query-by-Example: A Data Base Language
Query-by-Example is a high-level data base management language that provides the user with a convenient and unified interface to query, update, define, and control a data base. When the user performs an operation against the data base, he fills in an example of a solution to that operation in skeleton tables that can be associated with actual tables in the data base. The system is currently being used experimentally for various applications. Query-by-Example is a high-level data base management language that provides a convenient and unified style to query, update, define, and control a relational data base. The philosophy of Query-by-Example is to require the user to know very little in order to get started and to minimize the number of concepts that he subsequently has to learn in order to understand and use the whole language. The language syntax is simple, yet it covers a wide variety of complex transactions. This is achieved through the use of the same operations for retrieval, manipulation, definition, and control (to the extent possible). The language operations should mimic, as much as possible, manual table manipulation, thus capturing the simplicity, symmetry and neutrality of the relational model. The formulation of a transaction should capture the user's thought process, thereby providing freedom in formulating the transaction. The system should allow the user to create and drop tables dynamically from the data base; it must also provide the user with a dynamic capability for defining control statements and security features. The architecture of the Query-by-Example language addresses all the requirements just mentioned. The results of various psychological studies of the language show that it requires less than three hours of instruction for nonprogrammers to acquire the skill to make fairly complicated queries. Such queries would otherwise require the user to know first-order predicate calculus. Other nonprocedural languages that deal with the same topic are
Genetic polymorphisms in key hypoxia-regulated downstream molecules and phenotypic correlation in prostate cancer
BACKGROUND In this study we investigated whether, in their quest to handle hypoxia, prostate tumors express target hypoxia-associated molecules, and we assessed the correlation of this expression with putative functional genetic polymorphisms. METHODS Representative areas of prostate carcinoma (n = 51) and of nodular prostate hyperplasia (n = 20) were analysed for hypoxia-inducible factor 1 alpha (HIF-1α), carbonic anhydrase IX (CAIX), lysyl oxidase (LOX) and vascular endothelial growth factor receptor 2 (VEGFR2) immunohistochemistry expression using a tissue microarray. DNA was isolated from peripheral blood and used to genotype functional polymorphisms in the corresponding genes (HIF1A +1772 C > T, rs11549465; CA9 +201 A > G, rs2071676; LOX +473 G > A, rs1800449; KDR -604 T > C, rs2071559). RESULTS Immunohistochemistry analyses disclosed predominance of positive CAIX and VEGFR2 expression in epithelial cells of prostate carcinomas compared to nodular prostate hyperplasia (P = 0.043 and P = 0.035, respectively). In addition, the VEGFR2 expression score in prostate epithelial cells was higher in organ-confined and extra-prostatic carcinoma compared to nodular prostate hyperplasia (P = 0.031 and P = 0.004, respectively). Notably, for LOX protein the immunoreactivity score was significantly higher in organ-confined carcinomas compared to nodular prostate hyperplasia (P = 0.015). The genotype-phenotype analyses showed higher LOX staining intensity for carriers of the homozygous LOX +473 G allele (P = 0.011). Still, carriers of the KDR -604 T allele were more prone to have higher VEGFR2 expression in prostate epithelial cells (P < 0.006). CONCLUSIONS Protein expression of hypoxia markers (VEGFR2, CAIX and LOX) on prostate epithelial cells was different between malignant and benign prostate disease. Two genetic polymorphisms (LOX +473 G > A and KDR -604 T > C) were correlated with protein level, accounting for a potential gene-environment effect in the activation of hypoxia-driven pathways in prostate carcinoma. Further research in larger series is warranted to validate the present findings.
Effects of challenge with a virulent genotype II strain of porcine reproductive and respiratory syndrome virus on piglets vaccinated with an attenuated genotype I strain vaccine.
Porcine reproductive and respiratory syndrome virus (PRRSV) is endemic in most parts of Asia, where genotype I and II strains of diverse virulence may coexist. This study evaluated the outcome of infection with a highly virulent Asian genotype II PRRSV isolate in piglets vaccinated with a genotype I vaccine. Twenty-one 3-week-old piglets were divided into three groups: Pigs in group V (n=8) were vaccinated with an attenuated genotype I commercial PRRSV vaccine, while pigs in group U (n=8) and a control group (group C; n=5) were unvaccinated; 6 weeks later, pigs in groups V and U were challenged intranasally with a highly virulent strain of genotype II PRRSV (1×10(5) 50% tissue culture infectious doses/mL), while pigs in group C received a placebo. Over a period of 21 days after challenge, vaccinated pigs had significantly lower mortality (0/8 versus 2/8), fewer days of fever, a lower frequency of catarrhal bronchopneumonia, higher weight gains (13.4 versus 6.6 kg) and lower levels of viraemia compared to unvaccinated challenged pigs. Immunisation with a genotype I attenuated PRRSV vaccine provided partial protection against challenge with a highly virulent genotype II strain.
ICT Assimilation in Selected Ethiopian Public Organizations
Information and Communication Technology (ICT) affects how organizations deliver services to their customers. ICT is now used not only to improve back-office routine activities but also as a strategic tool for achieving organizational goals. It increases internal efficiency and reduces costs. Although ICT has many benefits for organizations, its assimilation is a most challenging process, with high failure rates reported by many researchers. This paper develops a conceptual framework that is used to investigate the success of ICT assimilation in public organizations. The study uses a field survey research method to empirically test the proposed model. The result of the study shows ICT staff support services, top management participation, consultant involvement and consultant knowledge transfer to be important factors that affect ICT assimilation in Ethiopian public organizations. Managers as well as decision makers should pay attention to these variables to increase the success of ICT in their organizations. Although the proposed model was adapted from international experiences, the factors identified as important determinants of ICT assimilation only weakly explain the study phenomenon; further research with qualitative methods is therefore recommended to identify the factors that best explain the phenomenon of ICT assimilation in public organizations. Keywords: ICT assimilation, public organizations, top management championship
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking
Methods that learn representations of nodes in a graph play a critical role in network analysis since they enable many downstream learning tasks. We propose Graph2Gauss – an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification. Unlike most approaches that represent nodes as point vectors in a low-dimensional continuous space, we embed each node as a Gaussian distribution, allowing us to capture uncertainty about the representation. Furthermore, we propose an unsupervised method that handles inductive learning scenarios and is applicable to different types of graphs: plain/attributed, directed/undirected. By leveraging both the network structure and the associated node attributes, we are able to generalize to unseen nodes without additional training. To learn the embeddings we adopt a personalized ranking formulation w.r.t. the node distances that exploits the natural ordering of the nodes imposed by the network structure. Experiments on real world networks demonstrate the high performance of our approach, outperforming state-of-the-art network embedding methods on several different tasks. Additionally, we demonstrate the benefits of modeling uncertainty – by analyzing it we can estimate neighborhood diversity and detect the intrinsic latent dimensionality of a graph.
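The quantity at the heart of such a ranking formulation is an asymmetric dissimilarity between node embeddings; Graph2Gauss uses the KL divergence between Gaussians, which has a closed form for diagonal covariances. A generic implementation of that closed form (not the authors' code):

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ), elementwise variances."""
    k = mu0.shape[0]
    return 0.5 * (np.sum(var0 / var1)
                  + np.sum((mu1 - mu0) ** 2 / var1)
                  - k
                  + np.sum(np.log(var1)) - np.sum(np.log(var0)))

# Ranking intuition: a node's embedding should be closer (in KL) to its
# 1-hop neighbors than to its 2-hop neighbors, and so on.
mu_a, var_a = np.array([0.0, 0.0]), np.array([1.0, 1.0])
mu_b, var_b = np.array([0.5, 0.0]), np.array([1.0, 2.0])
print(kl_diag_gaussians(mu_a, var_a, mu_b, var_b))
```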
Elements of Effective Deep Reinforcement Learning towards Tactical Driving Decision Making
Tactical driving decision making is crucial for autonomous driving systems and has attracted considerable interest in recent years. In this paper, we propose several practical components that can speed up deep reinforcement learning algorithms for tactical decision making tasks: 1) nonuniform action skipping as a more stable alternative to action-repetition frame skipping, 2) a counter-based penalty for lanes on which the ego vehicle has less right-of-road, and 3) heuristic inference-time action masking for apparently undesirable actions. We evaluate the proposed components in a realistic driving simulator and compare them with several baselines. Results show that the proposed scheme provides superior performance in terms of safety, efficiency, and comfort.
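The third component is straightforward to sketch: heuristically flagged actions are masked out before the greedy argmax over the network's action values (the action set and masking rule below are illustrative).

```python
import numpy as np

def masked_greedy_action(q_values, invalid_mask):
    """Pick the best action after masking apparently undesirable ones.

    q_values     : per-action value estimates from the policy/Q network
    invalid_mask : boolean array, True where a heuristic forbids the action
    """
    masked = np.where(invalid_mask, -np.inf, q_values)
    return int(np.argmax(masked))

# Example: 3 actions [keep lane, change left, change right]; the ego vehicle
# is in the leftmost lane, so a heuristic masks 'change left'.
q = np.array([0.2, 0.9, 0.1])
mask = np.array([False, True, False])
print(masked_greedy_action(q, mask))  # 0, not the (invalid) raw argmax 1
```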
Impairment-targeted exercises for older adults with knee pain: a proof-of-principle study (TargET-Knee-Pain)
BACKGROUND Therapeutic exercise is an effective intervention for knee pain and osteoarthritis (OA) and should be individualised. In a preliminary, proof-of-principle study we sought to develop a home exercise programme targeted at specific physical impairments of weak quadriceps, reduced knee flexion range of motion (ROM) and poor balance, and evaluate whether receipt of this was associated with improvements in those impairments and in patient-reported outcomes among older adults with knee pain. METHODS This community-based study used a single group, before-after study design with 12-week follow-up. Participants were 58 adults aged over 56 years with knee pain and evidence of quadriceps weakness, loss of flexion ROM, or poor balance, recruited from an existing population-based, observational cohort. Participants received a 12-week home exercise programme, tailored to their physical impairments. The programme was led, monitored and progressed by a physiotherapist over six home visits, alternating with six telephone calls. Primary outcome measures were maximal isometric quadriceps strength, knee flexion ROM and timed single-leg standing balance, measured at baseline, 6 and 12 weeks by a research nurse blinded to the nature and content of participants' exercise programmes. Secondary outcome measures included the WOMAC. RESULTS At 12 weeks, participants receiving strengthening exercises demonstrated a statistically significant change in quadriceps isometric strength compared to participants not receiving strengthening exercises: 3.9 KgF (95 % CI 0.1, 7.8). Changes in knee flexion ROM (2.1° (-2.3, 6.5)) and single-leg balance time (-2.4 s (-4.5, 6.7)) after stretching and balance retraining exercises respectively, were not found to be statistically significant. There were significant improvements in mean WOMAC Pain and Physical Function scores: -2.2 (-3.1, -1.2) and -5.1 (-7.8, -2.5). CONCLUSIONS A 12-week impairment-targeted, home-based exercise programme for symptomatic knee OA appeared to be associated with modest improvements in self-reported pain and function but no strong evidence of greater improvement in the specific impairments targeted by each exercise package, with the possible exception of quadriceps strengthening. TRIAL REGISTRATION Clinical Trial Registration Number: ISRCTN 61638364 Date of registration: 24 June 2010.
Unsupervised Cross-Modality Domain Adaptation of ConvNets for Biomedical Image Segmentations with Adversarial Loss
Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering domain shift. Domain adaptation is especially significant, yet challenging, in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with the source domain feature space. A domain critic module (DCM) is set up to discriminate between the feature spaces of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained on MRI images to unpaired CT data for cardiac structure segmentation, and achieved very promising results.
Tuning of PID controller using Ziegler-Nichols method for speed control of DC motor
In this paper, weighted tuning methods of a PID speed controller for a separately excited direct current motor are presented, based on the empirical Ziegler-Nichols tuning formula and the modified Ziegler-Nichols PID tuning formula. Both methods are compared on the basis of output response, minimum settling time, and minimum overshoot for the speed-demand application of a DC motor. Computer simulation shows that the performance of the PID controller using the modified Ziegler-Nichols technique is better than that of the traditional Ziegler-Nichols technique.
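For reference, the closed-loop (ultimate-gain) Ziegler-Nichols rules derive the PID gains from the ultimate gain Ku and oscillation period Tu. The sketch below implements the classic constants plus one commonly quoted "modified" low-overshoot variant, which may differ from the exact modified formula used in the paper.

```python
def ziegler_nichols_pid(Ku, Tu, variant="classic"):
    """PID gains (Kp, Ki, Kd) from ultimate gain Ku and ultimate period Tu.

    classic      : Kp = 0.6 Ku, Ti = Tu/2, Td = Tu/8
    no_overshoot : Kp = 0.2 Ku, Ti = Tu/2, Td = Tu/3  (a common 'modified' rule)
    """
    if variant == "classic":
        Kp, Ti, Td = 0.6 * Ku, Tu / 2, Tu / 8
    elif variant == "no_overshoot":
        Kp, Ti, Td = 0.2 * Ku, Tu / 2, Tu / 3
    else:
        raise ValueError(variant)
    return Kp, Kp / Ti, Kp * Td   # parallel-form Ki and Kd

# Example: sustained oscillation observed at Ku = 10, Tu = 0.5 s.
print(ziegler_nichols_pid(10, 0.5))                  # (6.0, 24.0, 0.375)
print(ziegler_nichols_pid(10, 0.5, "no_overshoot"))  # softer gains, less overshoot
```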
Wireless infidelity I: war driving
Although WiFi technology security vulnerabilities are well known, the extent of these vulnerabilities may be surprising: War driving experiences identify many potential points of entry.
AND/OR Cutset Conditioning
Cutset conditioning is one of the methods of solving reasoning tasks for graphical models, especially when space restrictions make inference (e.g., jointree-clustering) algorithms infeasible. The w-cutset is a natural extension of the method to a hybrid algorithm that performs search on the conditioning variables and inference on the remaining problems of induced width bounded by w. This paper takes a fresh look at these methods through the spectrum of AND/OR search spaces for graphical models. The resulting AND/OR cutset method is a strict improvement over the traditional one, often by exponential amounts.
Risk and Failure in English Business 1700–1800
This major study considers bankruptcy in eighteenth-century England. Typically, business enterprise in this period has been seen as a success story - where men like Boulton, Watt, Wedgwood and Arkwright helped to forge the Industrial Revolution. But this is a myth, for thousands of businesses failed, hounded by their creditors into bankruptcy and ignominy. This book charts their history by looking at the incidence and causes of bankruptcy and by examining contemporary reactions to these. In this way, not only is evidence produced to improve our understanding of the nature of business enterprise, but the dynamics of the eighteenth-century economy over both the short and the long term are uncovered.
STOCK MARKET FORECASTING TECHNIQUES: LITERATURE SURVEY
The goal of this paper is to survey techniques for predicting stock price movement using sentiment analysis of social media and data mining. We aim to identify methods that can predict stock movement more accurately. Social media offers a powerful outlet for people's thoughts and feelings; it is an enormous, ever-growing source of texts ranging from everyday observations to involved discussions. This paper contributes to the field of sentiment analysis, which aims to extract emotions and opinions from text. A basic goal is to classify text as expressing either positive or negative emotion. Sentiment classifiers have been built for social media text such as product reviews, blog posts, and even Twitter messages. With the increasing complexity of text sources and topics, it is time to re-examine the standard sentiment extraction approaches, and possibly to redefine and enrich the definition of sentiment. Next, unlike sentiment analysis research to date, we examine sentiment expression and polarity classification within and across various social media streams by building topical datasets within each stream. Different data mining methods are used to predict the market more efficiently, along with various hybrid approaches. We conclude that stock prediction is a very complex task and that various factors should be considered to forecast the market more accurately and efficiently.
Learning to Ask: Neural Question Generation for Reading Comprehension
We study automatic question generation for sentences from text passages in reading comprehension. We introduce an attention-based sequence learning model for the task and investigate the effect of encoding sentence- vs. paragraph-level information. In contrast to all previous work, our model does not rely on hand-crafted rules or a sophisticated NLP pipeline; it is instead trainable end-to-end via sequence-to-sequence learning. Automatic evaluation results show that our system significantly outperforms the state-of-the-art rule-based system. In human evaluations, questions generated by our system are also rated as being more natural (i.e., grammaticality, fluency) and as more difficult to answer (in terms of syntactic and lexical divergence from the original text and reasoning needed to answer).
Evaluation of WebSocket Communication in Enterprise Architecture
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words is used to express time information, and that the words in time expressions demonstrate similar syntactic behaviour. Based on these findings, we propose a type-based approach named SynTime for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On top of these types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments into time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for text from different domains and different text types. Experiments on benchmark datasets and tweet data show that SynTime outperforms state-of-the-art methods.
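The type-based idea can be sketched in a few lines of Python. This is a toy version only: the token-type lexicons and regular expressions below are illustrative stand-ins for SynTime's much richer rule set.

import re

TIME_TOKEN = re.compile(r'(today|tomorrow|yesterday|monday|january|\d{4}|\d{1,2}:\d{2})', re.I)
MODIFIER = {'early', 'late', 'last', 'next', 'this', 'about', 'around'}
NUMERAL = re.compile(r'\d+(st|nd|rd|th)?', re.I)

def recognize(tokens):
    # 1) identify time tokens, 2) expand to surrounding modifiers/numerals,
    # 3) merge overlapping segments into full time expressions
    spans = []
    for i, tok in enumerate(tokens):
        if TIME_TOKEN.fullmatch(tok):
            lo, hi = i, i
            while lo > 0 and (tokens[lo - 1].lower() in MODIFIER
                              or NUMERAL.fullmatch(tokens[lo - 1])):
                lo -= 1
            while hi + 1 < len(tokens) and NUMERAL.fullmatch(tokens[hi + 1]):
                hi += 1
            spans.append([lo, hi + 1])
    merged = []
    for lo, hi in sorted(spans):
        if merged and lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [' '.join(tokens[lo:hi]) for lo, hi in merged]

print(recognize("see you early next monday at 9:30".split()))
# -> ['early next monday', '9:30']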
Cat-inspired mechanical design of self-adaptive toes for a legged robot
Cats have protractible claws that can be folded back to keep their tips sharp. They protract their claws while hunting and pawing on slippery surfaces. Claws protracted by the tendons and muscles of the toes help cats anchor themselves when their footing begins to slip, and the hold is released when the claws are retracted intentionally. This research proposes a modularized self-adaptive toe mechanism inspired by cat claws to improve the contact performance of the extremities of legged robots. The mechanism is constructed with a four-bar linkage actuated by the contact reaction force and retracted by applied spring tension. A feasible mechanical design based on several essential parameters is introduced, and an integrated sole-toe prototype is built for experimental evaluation. Mechanical self-adaption and actual contact performance on specific surfaces are evaluated, respectively, on a biped walking platform and a bench-top mechanical testing rig.
FlexiSec: A Configurable Link Layer Security Architecture for Wireless Sensor Networks
Ensuring communications security in Wireless Sensor Networks (WSNs) is critical, due to the constrained resources of the sensor nodes as well as their ubiquitous and pervasive deployment, with varying attributes and degrees of security required. The proliferation of next-generation sensor nodes has not solved this problem, because of the greater emphasis on low-cost deployment. In addition, WSNs use data-centric multi-hop communication that, in turn, necessitates security support at the link layer (increasing the cost of security-related operations), instead of at the application layer as in general networks. Therefore, an energy-efficient link layer security framework is needed. A number of link layer security architectures exist that offer some combination of the security attributes desired by different WSN applications. However, as we show in this paper, none of them is responsive to the actual security demands of the applications. Therefore, we believe there is a need to investigate the feasibility of a configurable software-based link layer security architecture wherein an application can be compiled flexibly, with respect to its actual security demands. In this paper, we analyze, propose, and experiment with the basic design of such a configurable link layer security architecture for WSNs. We also experimentally evaluate various aspects of our scheme, viz. configurable block ciphers, configurable block cipher modes of operation, configurable MAC sizes, and configurable replay protection. The proposed architecture aims to offer an optimal level of security at minimal overhead, thus saving the precious resources in WSNs.
Ab initio molecular-dynamics study of liquid GeSe2
The structural, vibrational, and electronic properties of liquid GeSe2 are investigated using ab initio molecular dynamics. The static structure factor S(Q) and the pair-correlation functions of our model are in good agreement with experiment. We find many similarities between the topology of the liquid and the glass state. In addition, we introduce a way of characterizing the intermediate-range order of liquid and glassy GeSe2 through fourfold and sixfold rings. The overall vibrational density of states is found to be consistent with Raman experiments. The intensity of the low-frequency modes, the splitting of the A1 and A1c peaks, and the decrease in the intensity of the high-frequency modes are all reproduced. The electronic density of states is determined and compared to our results for glassy GeSe2. We find that an increase in Se bond length and bond-angle disorder significantly broadens the conduction band. The time-dependent behavior of the electronic eigenvalues is examined, and transient events are observed in which an electronic state crosses the optical gap. The structural configurations which produce states in the optical gap are determined using an ab initio molecular-dynamics approach and are found to be in agreement with experimental photoluminescence and electron-spin-resonance data. We also find that a linear relationship exists between the root mean square of the thermal fluctuations of an electronic eigenvalue in time and its localization.
Electrical stimulation therapy of the lower esophageal sphincter is successful in treating GERD: final results of open-label prospective trial
Electrical stimulation of the lower esophageal sphincter (LES) improves LES pressure without interfering with LES relaxation. The aim of this open-label pilot trial was to evaluate the safety and efficacy of long-term LES stimulation using a permanently implanted LES stimulator in patients with gastroesophageal reflux disease (GERD). GERD patients who were at least partially responsive to proton pump inhibitors (PPI) with abnormal esophageal pH, hiatal hernia ≤3 cm, and esophagitis ≤LA grade C were included. Bipolar stitch electrodes were placed in the LES and an implantable pulse generator (IPG) was placed in a subcutaneous pocket. Electrical stimulation was delivered at 20 Hz, 215 μs, 3–8 mA in 30-min sessions. The number and timing of sessions were tailored to each patient's GERD profile. Patients were evaluated using GERD-HRQL, daily symptom and medication diaries, SF-12, esophageal pH, and high-resolution manometry. 24 patients (mean age = 53 years, SD = 12 years; 14 men) were implanted; 23 completed their 6-month evaluation. The median GERD-HRQL score at 6 months was 2.0 (IQR = 0–5.5) and was significantly better than both baseline on-PPI [9.0 (range = 6.0–10.0); p < 0.001] and off-PPI [23 (21–25); p < 0.001] GERD-HRQL scores. The median percentage of 24-h esophageal pH < 4.0 was 10.1% at baseline and improved to 5.1% at 6 months (p < 0.001). At their 6-month follow-up, 91% (21/23) of the patients were off PPI and had a significantly better median GERD-HRQL on LES stimulation compared to their on-PPI GERD-HRQL at baseline (9.0 vs. 2.0; p < 0.001). There were no unanticipated implantation- or stimulation-related adverse events or untoward sensations due to stimulation. There were no reports of treatment-related dysphagia, and manometric swallow was also unaffected. Electrical stimulation of the LES is safe and effective for treating GERD. There is a significant and sustained improvement in GERD symptoms and esophageal pH, and a reduction in PPI usage, without any side effects of the therapy. Furthermore, the therapy can be optimized to address an individual patient's disease.
A Bayesian missing value estimation method for gene expression profile data
MOTIVATION Gene expression profile analyses have been used in numerous studies covering a broad range of areas in biology. When unreliable measurements are excluded, missing values are introduced into gene expression profiles. Although existing multivariate analysis methods have difficulty with the treatment of missing values, this problem has received little attention. There are many options for dealing with missing values, which can reach drastically different results. Ignoring missing values is the simplest method and is frequently applied. This approach, however, has its flaws. In this article, we propose an estimation method for missing values which is based on Bayesian principal component analysis (BPCA). Although the idea of estimating a probabilistic model and latent variables simultaneously within the framework of Bayesian inference is not new in principle, an actual BPCA implementation that makes it possible to estimate arbitrary missing variables is new in terms of statistical methodology. RESULTS When applied to DNA microarray data from various experimental conditions, the BPCA method exhibited markedly better estimation ability than other recently proposed methods, such as singular value decomposition and K-nearest neighbors. While the estimation performance of existing methods depends on model parameters whose determination is difficult, our BPCA method is free from this difficulty. Accordingly, the BPCA method provides accurate and convenient estimation of missing values. AVAILABILITY The software is available at http://hawaii.aist-nara.ac.jp/~shige-o/tools/.
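For intuition, here is a minimal sketch (not the authors' BPCA) of the simpler EM-style idea it builds on: alternate between a low-rank fit and re-filling the missing entries. The fixed rank k below is an assumption; BPCA's Bayesian treatment instead infers the effective dimensionality automatically, which is exactly the parameter-selection difficulty the paper addresses.

import numpy as np

def svd_impute(X, k=3, n_iter=50):
    # Simplified stand-in for BPCA: iterative rank-k SVD imputation
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.nanmean(X, axis=0)[np.where(miss)[1]]   # column-mean start
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        low_rank = (U[:, :k] * s[:k]) @ Vt[:k] + mu       # rank-k reconstruction
        X[miss] = low_rank[miss]                          # refill missing entries only
    return X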
Multiband mobile phone antenna for GNSS application
In this article we present a new design of a monopole antenna for mobile phones, which covers several bands with circular polarization. The proposed antenna generates five resonant frequencies. The antenna's bandwidth covers the bands of wireless communication systems such as GSM (880–960 MHz), DCS (1710–1880 MHz), PCS (1850–1990 MHz), UMTS (1920–2170 MHz), WiBro (2300–2390 MHz) and ISM (2400–2483 MHz), as well as GNSS bands including COMPASS (1559.052–1591.788 MHz), GPS (1575.42 MHz), GLONASS (1602–1615.5 MHz) and Galileo (1189–1214 MHz). We discuss the various parameters of the antenna, such as return loss, radiation pattern and axial ratio (AR).
TSync: a lightweight bidirectional time synchronization service for wireless sensor networks
Time synchronization in a wireless sensor network is critical for accurate timestamping of events and fine-tuned coordination of wake/sleep duty cycles to reduce power consumption. This paper proposes TSync, a novel lightweight bidirectional time synchronization service for wireless sensor networks. TSync's bidirectional service offers both a push mechanism for accurate and low overhead global time synchronization as well as a pull mechanism for on-demand synchronization by individual sensor nodes. Multi-channel enhancements improve TSync's performance. We deploy a GPS-enabled framework in live sensor networks to evaluate the accuracy and overhead of TSync in comparison with other in-situ time synchronization algorithms.
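The pull mechanism boils down to the classic two-way timestamp exchange. A minimal sketch follows (names and numbers are illustrative, not TSync's actual message format): the requesting node records its send and receive times t1 and t4, and the reference node records its receive and send times t2 and t3.

def offset_and_delay(t1, t2, t3, t4):
    # NTP-style estimates from one request/reply round trip
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # local clock offset estimate
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

print(offset_and_delay(t1=100.0, t2=105.2, t3=105.3, t4=100.7))
# -> offset ~ +4.9 s (local clock behind the reference), delay ~ 0.6 s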
Bedside ultrasound-guided percutaneous cystostomy in an infant in the neonatal intensive care unit
We describe a case of an infant born at 39 weeks of gestation who was in the neonatal intensive care unit for postoperative management of congenital heart disease and underwent bedside ultrasound-guided percutaneous cystostomy to treat an iatrogenic urethral injury. The procedure was uneventful, successful, and no complications were noted. This case demonstrates that this procedure is safe and minimally invasive. Indications, contraindications, techniques, potential complications, and the safety of performing this procedure in a bedside setting are discussed.
Spline-Based Image Registration
The problem of image registration subsumes a number of problems and techniques in multiframe image analysis, including the computation of optic flow (general pixel-based motion), stereo correspondence, structure from motion, and feature tracking. We present a new registration algorithm based on spline representations of the displacement field which can be specialized to solve all of the above mentioned problems. In particular, we show how to compute local flow, global (parametric) flow, rigid flow resulting from camera egomotion, and multiframe versions of the above problems. Using a spline-based description of the flow removes the need for overlapping correlation windows, and produces an explicit measure of the correlation between adjacent flow estimates. We demonstrate our algorithm on multiframe image registration and the recovery of 3D projective scene geometry. We also provide results on a number of standard motion sequences.
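A toy numpy sketch of the core representation idea is shown below, using bilinear interpolation (the simplest spline) over a hypothetical coarse control grid: per-pixel flow is interpolated from control-point displacements, so neighbouring flow estimates share control points rather than requiring overlapping correlation windows.

import numpy as np

def dense_flow(ctrl_u, ctrl_v, shape):
    # Interpolate a dense (H, W) flow field from coarse control-point
    # displacements ctrl_u, ctrl_v (one value per control point).
    H, W = shape
    gy = np.linspace(0, ctrl_u.shape[0] - 1, H)   # pixel -> control coords
    gx = np.linspace(0, ctrl_u.shape[1] - 1, W)
    y0 = np.floor(gy).astype(int).clip(0, ctrl_u.shape[0] - 2)
    x0 = np.floor(gx).astype(int).clip(0, ctrl_u.shape[1] - 2)
    wy = (gy - y0)[:, None]                        # interpolation weights
    wx = (gx - x0)[None, :]
    def interp(c):
        return ((1 - wy) * (1 - wx) * c[y0][:, x0]
                + (1 - wy) * wx * c[y0][:, x0 + 1]
                + wy * (1 - wx) * c[y0 + 1][:, x0]
                + wy * wx * c[y0 + 1][:, x0 + 1])
    return interp(ctrl_u), interp(ctrl_v)

cu, cv = np.random.randn(4, 4), np.random.randn(4, 4)  # 4x4 control grid
u, v = dense_flow(cu, cv, (64, 64))                    # dense 64x64 flow field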
Pomegranate: fast and flexible probabilistic modeling in python
We present pomegranate, an open source machine learning package for probabilistic modeling in Python. Probabilistic modeling encompasses a wide range of methods that explicitly describe uncertainty using probability distributions. Three widely used probabilistic models implemented in pomegranate are general mixture models, hidden Markov models, and Bayesian networks. A primary focus of pomegranate is to abstract away the complexities of training models from their definition. This allows users to focus on specifying the correct model for their application instead of being limited by their understanding of the underlying algorithms. An aspect of this focus involves the collection of additive sufficient statistics from data sets as a strategy for training models. This approach trivially enables many useful learning strategies, such as out-of-core learning, minibatch learning, and semi-supervised learning, without requiring the user to consider how to partition data or modify the algorithms to handle these tasks themselves. pomegranate is written in Cython to speed up calculations and releases the global interpreter lock to allow for built-in multithreaded parallelism, making it competitive with, and in some cases faster than, other implementations of similar algorithms. This paper presents an overview of the design choices in pomegranate, and how they have enabled complex features to be supported by simple code. The code is available at https://github.com/jmschrei/pomegranate
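For flavour, here is what that abstraction looks like in practice for a two-component Gaussian mixture, written against the 0.x API current when the paper was published (pomegranate 1.x later moved to a PyTorch-based interface with different names, so this sketch may not run against recent releases):

import numpy as np
from pomegranate import GeneralMixtureModel, NormalDistribution

# two well-separated 1-D Gaussian clusters
X = np.concatenate([np.random.normal(0, 1, (500, 1)),
                    np.random.normal(5, 1, (500, 1))])

# from_samples hides initialization plus EM training behind one call,
# illustrating the "specify the model, not the algorithm" design goal
model = GeneralMixtureModel.from_samples(NormalDistribution, n_components=2, X=X)
print(model.predict(X[:5]))   # component labels for the first few points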
Acute mental stress impairs insulin sensitivity in IDDM patients
The effect of acute mental stress on insulin sensitivity was evaluated in ten IDDM patients, studied on two occasions (test day and control day) in random order, separated by a period of 1–3 weeks. Mental stress was evoked by a modified filmed version of Stroop's colour word test (CWT) for 20 min. On the control day, the patients rested quietly during the corresponding period. Insulin sensitivity was estimated by an insulin (0.4 mU · kg−1 · min−1)-glucose (4.5 mg · kg−1 · min−1)-infusion test (IGIT) for 6.5 h. Mental stress evoked significant responses for adrenaline, cortisol and GH, their respective peak values being 0.27 ± 0.05 nmol/l, 426 ± 27 nmol/l and 7.6 ± 1.8 μg/l, as well as increases in systolic and diastolic blood pressure and pulse rate. The steady-state blood glucose levels, i.e. the mean blood glucose levels 3–6.5 h after the start of the IGIT, were significantly higher after stress compared with those on the control day (10.6 ± 1.5 vs 8.7 ± 1.4 mmol/l, p = 0.01), demonstrating impairment of insulin sensitivity by mental stress. It is concluded that acute mental stress induces a state of insulin resistance in IDDM patients, which can be demonstrated by an IGIT to appear 1 h after maximal stress and to last more than 5 h.
Funnel Libraries for Real-Time Robust Feedback Motion Planning
We consider the problem of generating motion plans for a robot that are guaranteed to succeed despite uncertainty in the environment, parametric model uncertainty, and disturbances. Furthermore, we consider scenarios where these plans must be generated in real-time, because constraints such as obstacles in the environment may not be known until they are perceived (with a noisy sensor) at runtime. Our approach is to pre-compute a library of "funnels" along different maneuvers of the system that the state is guaranteed to remain within (despite bounded disturbances) when the feedback controller corresponding to the maneuver is executed. The resulting funnel library is then used to sequentially compose motion plans at runtime while ensuring the safety of the robot. A major advantage of the work presented here is that by explicitly taking into account the effect of uncertainty, the robot can evaluate motion plans based on how vulnerable they are to disturbances. We demonstrate and validate our method using extensive hardware experiments on a small fixed-wing airplane avoiding obstacles at high speed (∼12 mph), along with thorough simulation experiments of ground vehicle and quadrotor models navigating through cluttered environments. To our knowledge, the resulting hardware demonstrations on a fixed-wing airplane constitute one of the first examples of provably safe and robust control for robotic systems with complex nonlinear dynamics that need to plan in real-time in environments with complex geometric constraints. The key computational engine we leverage is sums-of-squares (SOS) programming. While SOS programming allows us to apply our approach to systems of relatively high dimensionality (up to approximately 10-15 dimensional state spaces), scaling our approach to higher dimensional systems such as humanoid robots requires a different set of computational tools. In this thesis, we demonstrate how DSOS and SDSOS programming, which are recently introduced alternatives to SOS programming, can be employed to achieve this improved scalability and handle control systems with as many as 30-50 state dimensions.
Nurses' response time to call lights and fall occurrences.
Nurses respond to fallers' call lights more quickly than they do to lights initiated by non-fallers. The nurses' responsiveness to call lights could be a compensatory mechanism in responding to the fall prevalence on the unit.
Intervention to Lower Household Wood Smoke Exposure in Guatemala Reduces ST-Segment Depression on Electrocardiograms
BACKGROUND A large body of evidence suggests that fine particulate matter (PM) air pollution is a cause of cardiovascular disease, but little is known in particular about the cardiovascular effects of indoor air pollution from household use of solid fuels in developing countries. RESPIRE (Randomized Exposure Study of Pollution Indoors and Respiratory Effects) was a randomized trial of a chimney woodstove that reduces wood smoke exposure. OBJECTIVES We tested the hypotheses that the stove intervention, compared with open fire use, would reduce ST-segment depression and increase heart rate variability (HRV). METHODS We used two complementary study designs: a) between-groups comparisons based on randomized stove assignment, and b) before-and-after comparisons within control subjects who used open fires during the trial and received chimney stoves after the trial. Electrocardiogram sessions that lasted 20 hr were repeated up to three times among 49 intervention and 70 control women 38-84 years of age, and 55 control subjects were also assessed after receiving stoves. HRV and ST-segment values were assessed for each 30-min period. ST-segment depression was defined as an average value below -1.00 mm. Personal fine PM [aerodynamic diameter ≤ 2.5 μm (PM₂.₅)] exposures were measured for 24 hr before each electrocardiogram. RESULTS PM₂.₅ exposure means were 266 and 102 μg/m³ during the trial period in the control and intervention groups, respectively. During the trial, the stove intervention was associated with an odds ratio of 0.26 (95% confidence interval, 0.08-0.90) for ST-segment depression. We found similar associations with the before-and-after comparison. The intervention was not significantly associated with HRV. CONCLUSIONS The stove intervention was associated with reduced occurrence of nonspecific ST-segment depression, suggesting that household wood smoke exposures affect ventricular repolarization and potentially cardiovascular health.
Mining multi-tag association for image tagging
Automatic media tagging plays a critical role in modern tag-based media retrieval systems. Existing tagging schemes mostly perform tag assignment based on community contributed media resources, where the tags are provided by users interactively. However, such social resources usually contain dirty and incomplete tags, which severely limit the performance of these tagging methods. In this paper, we propose a novel automatic image tagging method aiming to automatically discover more complete tags associated with information importance for test images. Given an image dataset, all the near-duplicate clusters are discovered. For each near-duplicate cluster, all the tags occurring in the cluster form the cluster’s “document”. Given a test image, we firstly initialize the candidate tag set from its near-duplicate cluster’s document. The candidate tag set is then expanded by considering the implicit multi-tag associations mined from all the clusters’ documents, where each cluster’s document is regarded as a transaction. To further reduce noisy tags, a visual relevance score is also computed for each candidate tag to the test image based on a new tag model. Tags with very low scores can be removed from the final tag set. Extensive experiments conducted on a real-world web image dataset—NUS-WIDE, demonstrate the promising effectiveness of our approach.
The Complexity of Near-Optimal Graph Coloring
Graph coloring problems, in which one would like to color the vertices of a given graph with a small number of colors so that no two adjacent vertices receive the same color, arise in many applications, including various scheduling and partitioning problems. In this paper the complexity and performance of algorithms which construct such colorings are investigated. For a graph G, let χ(G) denote the minimum possible number of colors required to color G and, for any graph coloring algorithm A, let A(G) denote the number of colors used by A when applied to G. Since the graph coloring problem is known to be "NP-complete," it is considered unlikely that any efficient algorithm can guarantee A(G) = χ(G) for all input graphs. In this paper it is proved that even coming close to χ(G) with a fast algorithm is hard. Specifically, it is shown that if for some constant r < 2 and constant d there exists a polynomial-time algorithm A which guarantees A(G) ≤ r·χ(G) + d, then there also exists a polynomial-time algorithm A which guarantees A(G) = χ(G).
GFDM Frame Design for 5G Application Scenarios
The services foreseen for 5G networks will demand a vast number of challenging requirements to be fulfilled by the physical layer. These services can be grouped into application scenarios, each one with a key requirement to be addressed by the 5G network. A flexible waveform associated with an appropriate data frame is an essential feature in order to guarantee support for the contrasting requirements of different application scenarios such as Enhanced Mobile Broadband, Internet of Things, Tactile Internet and Internet Access for Remote Areas. In this paper, we propose a flexible data frame based on Generalized Frequency Division Multiplexing (GFDM) that can be tailored to address the specific key requirements of the different 5G scenarios. The paper also presents the physical layer parametrization that can be used for each application.
Lactose-fermenting, multiple drug-resistant Salmonella typhi strains isolated from a patient with postoperative typhoid fever.
Two lactose-fermenting Salmonella typhi strains were isolated from bile and blood specimens of a typhoid fever patient who underwent a cholecystectomy due to cholelithiasis. One lactose-fermenting S. typhi strain was also isolated from a pus specimen which was obtained at the tip of the T-shaped tube withdrawn from the operative wound of the common bile duct of the patient. These three lactose-fermenting isolates: GIFU 11924 from bile, GIFU 11926 from pus, and GIFU 11927 from blood, were phenotypically identical to the type strain (GIFU 11801 = ATCC 19430 = NCTC 8385) of S. typhi, except that the three strains fermented lactose and failed to blacken the butt of Kligler iron agar or triple sugar iron agar medium. All three lactose-fermenting strains were resistant to chloramphenicol, ampicillin, sulfamethoxazole, trimethoprim, gentamicin, cephaloridine, and four other antimicrobial agents. The type strain was uniformly susceptible to these 10 drugs. The strain GIFU 11925, a lactose-negative dissociant from strain GIFU 11926, was also susceptible to these drugs, with the sole exception of chloramphenicol (minimal inhibitory concentration, 100 micrograms/ml).
Efficient Hierarchical Identity-Based Signature With Batch Verification for Automatic Dependent Surveillance-Broadcast System
The automatic dependent surveillance-broadcast (ADS-B) system is generally regarded as the most important module in air traffic surveillance technology. To obtain better airline security, ADS-B will be deployed in most airspace by 2020, where aircraft will be equipped with an ADS-B device that periodically broadcasts messages to other aircraft and ground station controllers. Due to the open communication environment, the ADS-B system is subject to a broad range of attacks. To simultaneously implement both integrity and authenticity of messages transmitted in the ADS-B system, Yang et al. proposed a new authentication frame based on the three-level hierarchical identity-based signature (TLHIBS) scheme with batch verification, as well as constructing two schemes for the ADS-B system. However, neither TLHIBS scheme is sufficiently lightweight for practical deployment due to the need for complex hash-to-point operations or expensive certificate management. In this paper, we construct an efficient TLHIBS scheme with batch verification for the ADS-B system. Our scheme requires neither hash-to-point operations nor (expensive) certificate management. We then prove the TLHIBS scheme secure in the random oracle model. We also demonstrate the practicality of the scheme using experiments, whose findings indicate that the TLHIBS scheme supports the attributes required by the ADS-B system without the computation cost of Chow et al.'s scheme and Yang et al.'s TLHIBS schemes.
SAGE2: A new approach for data intensive collaboration using Scalable Resolution Shared Displays
Current web-based collaboration systems, such as Google Hangouts, WebEx, and Skype, primarily enable single users to work with remote collaborators through video conferencing and desktop mirroring. The original SAGE software, developed in 2004 and adopted at over one hundred international sites, was designed to enable groups to work in front of large shared displays in order to solve problems that required juxtaposing large volumes of information in ultra high-resolution. We have developed SAGE2, as a complete redesign and implementation of SAGE, using cloud-based and web browser technologies in order to enhance data intensive co-located and remote collaboration. This paper provides an overview of SAGE2's infrastructure, the technical design challenges, and the afforded benefits to data intensive collaboration. Lastly, we provide insight on how future collaborative applications can be developed to support large displays and demonstrate the power and flexibility that SAGE2 offers in collaborative scenarios through a series of use cases.
SOSPES: SPIRIVA® observational study measuring SGRQ score in routine medical practice in Central and Eastern Europe
PURPOSE The long-acting inhaled anticholinergic agent, tiotropium, is recommended as first-line maintenance therapy for moderate to very severe Chronic Obstructive Pulmonary Disease (COPD) to improve symptoms, exercise tolerance, health status, and to reduce exacerbations. Few studies have evaluated the therapeutic efficacy of tiotropium in patients in routine clinical conditions. The current study was designed to investigate the therapeutic efficacy of tiotropium delivered via the HandiHaler® device on the health status of patients with COPD with Global initiative for chronic Obstructive Lung Disease (GOLD) disease classification 2-4 in six central and eastern European countries in a real-life clinical setting. METHODS The study was an open-label, prospective, uncontrolled, and single-arm surveillance study with three clinic visits during a 6-month observation period (baseline, and months 3 and 6). Health status was measured using the disease-specific St George's Respiratory Questionnaire (SGRQ). The primary efficacy endpoint was the mean change from baseline in SGRQ total score at the end of the 6-month observational period. RESULTS Patients treated with tiotropium 18 μg once daily showed statistically significant and clinically meaningful reduction (improvement) of 21.7 units in the SGRQ total score, regardless of smoking status or cardiac comorbidities at enrollment (P < 0.0001). The analysis also showed that age, treatment compliance, and GOLD disease classification were significant factors that impact the health status of patients with COPD differently. CONCLUSION These results provide further support for the use of the tiotropium HandiHaler® as first-line maintenance treatment of patients with COPD with a clinician-assessed disease.
Motivational orientation modulates the neural response to reward
Motivational orientation defines the source of motivation for an individual to perform a particular action and can either originate from internal desires (e.g., interest) or external compensation (e.g., money). To this end, motivational orientation should influence the way positive or negative feedback is processed during learning situations and this might in turn have an impact on the learning process. In the present study, we thus investigated whether motivational orientation, i.e., extrinsic and intrinsic motivation modulates the neural response to reward and punishment as well as learning from reward and punishment in 33 healthy individuals. To assess neural responses to reward, punishment and learning of reward contingencies we employed a probabilistic reversal learning task during functional magnetic resonance imaging. Extrinsic and intrinsic motivation were assessed with a self-report questionnaire. Rewarding trials fostered activation in the medial orbitofrontal cortex and anterior cingulate gyrus (ACC) as well as the amygdala and nucleus accumbens, whereas for punishment an increased neural response was observed in the medial and inferior prefrontal cortex, the superior parietal cortex and the insula. High extrinsic motivation was positively correlated to increased neural responses to reward in the ACC, amygdala and putamen, whereas a negative relationship between intrinsic motivation and brain activation in these brain regions was observed. These findings show that motivational orientation indeed modulates the responsiveness to reward delivery in major components of the human reward system and therefore extends previous results showing a significant influence of individual differences in reward-related personality traits on the neural processing of reward.
A unified presentation of identities involving Weierstrass-type functions and their applications
In this work, we investigate and discuss Weierstrass-type functions ℘_{2k}^{m} (the m-th powers of ℘_{2k}), and then derive a general formula for identities among them. Applications of this general formula include not only producing many of the familiar identities among Weierstrass-type functions, but also infinitely many new ones. Moreover, it is observed that new Bernoulli identities can be constructed systematically by means of the new identities among Weierstrass-type functions.
Long-term stability of the Wechsler Intelligence Scale for Children--Fourth Edition.
Long-term stability of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; Wechsler, 2003) was investigated with a sample of 344 students from 2 school districts twice evaluated for special education eligibility at an average interval of 2.84 years. Test-retest reliability coefficients for the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), Processing Speed Index (PSI), and the Full Scale IQ (FSIQ) were .72, .76, .66, .65, and .82, respectively. As predicted, the test-retest reliability coefficients for the subtests (Mdn = .56) were generally lower than the index scores (Mdn = .69) and the FSIQ (.82). On average, subtest scores did not differ by more than 1 point, and index scores did not differ by more than 2 points across the test-retest interval. However, 25% of the students earned FSIQ scores that differed by 10 or more points, and 29%, 39%, 37%, and 44% of the students earned VCI, PRI, WMI, and PSI scores, respectively, that varied by 10 or more points. Given this variability, it cannot be assumed that WISC-IV scores will be consistent across long test-retest intervals for individual students.
On the Difficulties of Disclosure Prevention in Statistical Databases or The Case for Differential Privacy
A privacy-preserving statistical database enables the data analyst to learn properties of the population as a whole, while protecting the privacy of the individuals whose data were used in creating the database. A rigorous treatment of privacy requires definitions: What constitutes a failure to preserve privacy? What is the power of the adversary whose goal it is to compromise privacy? What auxiliary information is available to the adversary (newspapers, medical studies, labor statistics) even without access to the database in question? Of course, utility also requires formal treatment, as releasing no information, or only random noise, clearly does not compromise privacy; we will return to this point later. However, in this work privacy is paramount: we will first define our privacy goals and then explore what utility can be achieved given that the privacy goals will be satisfied.
Mobile game-based learning in secondary education: engagement, motivation and learning in a mobile city game
Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research.
An Adaptive SVR for High-Frequency Stock Price Forecasting
In order to mitigate investment risk, stock price forecasting has attracted more attention in recent years. Aiming at the discreteness, non-normality, and high noise of high-frequency data, a support vector machine regression (SVR) algorithm is introduced in this paper. However, the characteristics of different periods of the same stock, or of the same periods of different stocks, are significantly different; an SVR with fixed parameters therefore struggles to keep up with a constantly changing data flow. To tackle this problem, an adaptive SVR is proposed for stock data at three different time scales: daily data, 30-min data, and 5-min data. Experiments show that the improved SVR, with dynamic optimization of learning parameters by particle swarm optimization, obtains better results than the compared methods, including standard SVR and a back-propagation neural network.
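A minimal sketch of the idea follows, assuming scikit-learn's SVR as the regressor and a bare-bones PSO over (C, gamma); the bounds, swarm constants, and cross-validated fitness are illustrative choices, and the paper's exact parameterization may differ.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

def pso_svr(X, y, n_particles=10, n_iter=20, bounds=((0.1, 100), (1e-4, 1))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pos = lo + np.random.rand(n_particles, 2) * (hi - lo)   # (C, gamma) particles
    vel = np.zeros_like(pos)

    def fitness(p):  # higher is better (negative MSE)
        m = SVR(C=p[0], gamma=p[1])
        return cross_val_score(m, X, y, cv=3,
                               scoring='neg_mean_squared_error').mean()

    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = np.random.rand(2, n_particles, 2)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([fitness(p) for p in pos])
        better = val > pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        g = pbest[pbest_val.argmax()].copy()
    return SVR(C=g[0], gamma=g[1]).fit(X, y)   # final model on all data

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
model = pso_svr(X, y)   # re-run periodically as the data flow drifts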
Mesolimbic functional magnetic resonance imaging activations during reward anticipation correlate with reward-related ventral striatal dopamine release.
The dopaminergic mechanisms that control reward-motivated behavior are the subject of intense study, but it is yet unclear how, in humans, neural activity in mesolimbic reward-circuitry and its functional neuroimaging correlates are related to dopamine release. To address this question, we obtained functional magnetic resonance imaging (fMRI) measures of reward-related neural activity and [(11)C]raclopride positron emission tomography measures of dopamine release in the same human participants, while they performed a delayed monetary incentive task. Across the cohort, a positive correlation emerged between neural activity of the substantia nigra/ventral tegmental area (SN/VTA), the main origin of dopaminergic neurotransmission, during reward anticipation and reward-related [(11)C]raclopride displacement as an index of dopamine release in the ventral striatum, major target of SN/VTA dopamine neurons. Neural activity in the ventral striatum/nucleus accumbens itself also correlated with ventral striatal dopamine release. Additionally, high-reward-related dopamine release was associated with increased activation of limbic structures, such as the amygdala and the hippocampus. The observed correlations of reward-related mesolimbic fMRI activation and dopamine release provide evidence that dopaminergic neurotransmission plays a quantitative role in human mesolimbic reward processing. Moreover, the combined neurochemical and hemodynamic imaging approach used here opens up new perspectives for the investigation of molecular mechanisms underlying human cognition.
Controlled trial of disodium cromoglycate in chronic persistent ulcerative colitis.
Oral disodium cromoglycate (200 mg qds) has been tested in 26 patients with ulcerative colitis that was resistant to medical treatment. In a double-blind crossover trial disodium cromoglycate and placebo were added to conventional treatment in random order, each for four weeks. There was no significant difference in therapeutic effect between disodium cromoglycate and placebo.
Review on determining number of Cluster in K-Means Clustering
Clustering is widely used in different fields such as biology, psychology, and economics. The result of clustering varies as the number-of-clusters parameter changes; hence the main challenge of cluster analysis is that the number of clusters (or the number of model parameters) is seldom known, and it must be determined before clustering. Several clustering algorithms have been proposed; among them, the k-means method is a simple and fast clustering technique. We address the problem of cluster number selection using a k-means approach. We could ask end users to provide the number of clusters in advance, but this is not feasible because it requires domain knowledge of each data set. There are many methods available to estimate the number of clusters, such as statistical indices, variance-based methods, information-theoretic methods, and goodness-of-fit methods. The paper explores six different approaches to determine the right number of clusters in a dataset.
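One of the variance-based approaches such a review typically covers is the elbow heuristic, which looks for the k after which within-cluster variance (inertia) stops dropping sharply. A few-line scikit-learn sketch, on synthetic data with a known k = 4 for illustration:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

# within-cluster sum of squares (inertia) for k = 1..9
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 10)]
print(inertias)   # the bend ("elbow") in this curve suggests k = 4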
Bounded expectations: resource analysis for probabilistic programs
This paper presents a new static analysis for deriving upper bounds on the expected resource consumption of probabilistic programs. The analysis is fully automatic and derives symbolic bounds that are multivariate polynomials in the inputs. The new technique combines manual state-of-the-art reasoning techniques for probabilistic programs with an effective method for automatic resource-bound analysis of deterministic programs. It can be seen as both an extension of automatic amortized resource analysis (AARA) to probabilistic programs and an automation of manual reasoning for probabilistic programs that is based on weakest preconditions. An advantage of the technique is that it combines the clarity and compositionality of a weakest-precondition calculus with the efficient automation of AARA. As a result, bound inference can be reduced to off-the-shelf LP solving in many cases and automatically-derived bounds can be interactively extended with standard program logics if the automation fails. Building on existing work, the soundness of the analysis is proved with respect to an operational semantics that is based on Markov decision processes. The effectiveness of the technique is demonstrated with a prototype implementation that is used to automatically analyze 39 challenging probabilistic programs and randomized algorithms. Experiments indicate that the derived constant factors in the bounds are very precise and even optimal for some programs.
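A toy example of the kind of bound being derived, under the assumption of a loop that continues with probability 1 - p each iteration: its expected tick count is (1 - p)/p, the mean number of failures of a geometric distribution, which the simulation below checks against the symbolic value.

import random

def prog(p=0.25):
    # a probabilistic program: keep looping with probability 1 - p
    ticks = 0
    while random.random() > p:
        ticks += 1            # consume one resource unit per iteration
    return ticks

samples = [prog() for _ in range(100_000)]
print(sum(samples) / len(samples),
      "vs. symbolic bound (1 - p) / p =", 0.75 / 0.25)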
Quantum trajectory approach to circuit QED : Quantum jumps and the Zeno effect
EXTRUSION INSTABILITIES AND WALL SLIP
Many fabrication processes for polymeric objects include melt extrusion, in which the molten polymer is conveyed by a ram or a screw and the melt is then forced through a shaping die in continuous processing or into a mold for the manufacture of discrete molded parts. The properties of the fabricated solid object, including morphology developed during cooling and solidification, depend in part on the stresses and orientation induced during the melt shaping. Most polymers used for commercial processing are of sufficiently high molecular weight that the polymer chains are highly entangled in the melt, resulting in flow behavior that differs qualitatively from that of low-molecular-weight liquids. Obvious manifestations of the differences from classical Newtonian fluids are a strongly shear-dependent viscosity and finite stresses normal to the direction of shear in rectilinear flow, transients of the order of seconds for the buildup or relaxation of stresses following a change in shear rate, a finite phase angle between stress and shear rate in oscillatory shear, ratios of extensional to shear viscosities that are considerably greater than 3, and substantial extrudate swell on extrusion from a capillary or slit. These rheological characteristics of molten polymers have been reviewed in textbooks (e.g. Larson 1999, Macosko 1994); the recent research emphasis in rheology has been to establish meaningful constitutive models that incorporate chain behavior at a molecular level. All polymer melts and concentrated solutions exhibit instabilities during extrusion when the stresses to which they are subjected become sufficiently high. The first manifestation of extrusion instability is usually the appearance of distortions on the extrudate surface, sometimes accompanied by oscillating flow. Gross distortion of the extrudate usually follows. The sequence of extrudate distortions
Using Vital Sensors in Mobile Healthcare Business Applications - Challenges, Examples, Lessons Learned
Today, sensors are increasingly used for data collection. In the medical domain, for example, vital signs (e.g., pulse or oxygen saturation) of patients can be measured with sensors and used for further processing. In this paper, different types of applications are discussed with respect to whether sensors might be used in their context and how suitable these applications are for the integration of external sensors. Furthermore, a system architecture for adding sensor technology to such applications is presented. For this purpose, a real-world business application scenario in the field of well-being and fitness is presented. In particular, we integrated two different sensors in our fitness application. We report on the lessons learned from the implementation and use of this application, e.g., with respect to connection handling and data structure. They mainly deal with problems relating to the connection and communication between the smart mobile device and the external sensors, as well as the selection of the appropriate type of application. Finally, a robust sensor framework arising from this fitness application is presented. This framework provides basic features for connecting sensors. Particularly in the medical domain, it is crucial to provide an easy-to-use toolset to relieve medical staff.
Software Defect Prediction Models for Quality Improvement: A Literature Study
In spite of meticulous planning, thorough documentation and proper process control during software development, occurrences of certain defects are inevitable. These software defects may lead to degradation of quality, which might be the underlying cause of failure. In today's cutting-edge competition, it is necessary to make conscious efforts to control and minimize defects in software engineering. However, these efforts cost money, time and resources. This paper identifies causative factors which in turn suggest remedies to improve software quality and productivity. The paper also showcases how various defect prediction models are implemented, resulting in a reduced magnitude of defects.
A Component Model for Augmented / Mixed Reality Applications with Reconfigurable Data-flow
In this paper, we introduce a new component-based framework for mixed and augmented reality applications. The developed framework is intended to meet several requirements such as portability, variable component granularity, scalability, and a high level of abstraction for end-users. At runtime, the framework is composed of a set of libraries where components are stored, an XML file in which the configuration and the communications between components are described, and a runtime program that deals with the two previous parts of the system. Our component model can also handle reconfigurable data-flows through the use of a finite state machine, allowing several component configurations within an application.
Outcome After Surgical Repair of Proximal Hamstring Avulsions: A Systematic Review.
BACKGROUND At the present time, no systematic review, including a quality assessment, has been published about the outcome after proximal hamstring avulsion repair. PURPOSE To determine the outcome after surgical repair of proximal hamstring avulsions, to compare the outcome after acute (≤4 weeks) and delayed repairs (>4 weeks), and to compare the outcome after different surgical techniques. STUDY DESIGN Systematic review and best-evidence synthesis. METHODS PubMed, CINAHL, SPORTdiscus, Cochrane library, EMBASE, and Web of Science were searched (up to December 2013) for eligible studies. Two authors screened the search results separately, while quality assessment was performed by 2 authors independently using the Physiotherapy Evidence Database (PEDro) scale. A best-evidence synthesis was subsequently used. RESULTS Thirteen studies (387 participants) were included in this review. There were no studies with control groups of nonoperatively treated proximal hamstring avulsions. All studies had a low methodological quality. After surgical repair of proximal hamstring avulsion, 76% to 100% returned to sports, 55% to 100% returned to preinjury activity level, and 88% to 100% were satisfied with surgery. Mean hamstring strength varied between reporting studies (78%-101%), and hamstring endurance and flexibility were fully restored compared with the unaffected side. Symptoms of residual pain were reported by 8% to 61%, and reported risk of major complications was low (3% rerupture rate). No to minimal difference in outcome was found between acute and delayed repair in terms of return to sports, patient satisfaction, hamstring strength, and pain. Achilles allograft reconstruction and primary repair with suture anchors led to comparable results. CONCLUSION The quality of studies included is low. Surgical repair of proximal hamstring avulsions appears to result in a subjective highly satisfying outcome. However, decreased strength, residual pain, and decreased activity level were reported by a relevant number of patients. Minimal to no differences in outcome of acute and delayed repairs were found. Limited evidence suggests that an Achilles allograft reconstruction yields results comparable with primary repair in delayed cases where primary repair is not possible. High-level studies are required to confirm these findings.
Hard Mixtures of Experts for Large Scale Weakly Supervised Vision
Training convolutional networks (CNNs) that fit on a single GPU with minibatch stochastic gradient descent has become effective in practice. However, there is still no effective method for training large networks that do not fit in the memory of a few GPU cards, or for parallelizing CNN training. In this work we show that a simple hard mixture of experts model can be efficiently trained to good effect on large scale hashtag (multilabel) prediction tasks. Mixture of experts models are not new [7, 3], but in the past, researchers have had to devise sophisticated methods to deal with data fragmentation. We show empirically that modern weakly supervised data sets are large enough to support naive partitioning schemes where each data point is assigned to a single expert. Because the experts are independent, training them in parallel is easy, and evaluation is cheap for the size of the model. Furthermore, we show that we can use a single decoding layer for all the experts, allowing a unified feature embedding space. We demonstrate that it is feasible (and in fact relatively painless) to train far larger models than could be practically trained with standard CNN architectures, and that the extra capacity can be well used on current datasets.
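The hard-assignment recipe can be sketched with small stand-ins: k-means as the router and tiny linear models as experts instead of the paper's CNN trunks; dataset, sizes, and model choices below are illustrative only.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# router: a naive partitioning scheme assigning each point to one expert
router = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# experts are independent, so in a real system they would train in parallel
experts = {c: LogisticRegression(max_iter=1000)
              .fit(X[router.labels_ == c], y[router.labels_ == c])
           for c in range(4)}

def predict(x):
    c = router.predict(x.reshape(1, -1))[0]     # hard routing: exactly one expert
    return experts[c].predict(x.reshape(1, -1))[0]

print(predict(X[0]), y[0])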
An exceptionally preserved arthropod cardiovascular system from the early Cambrian.
The assumption that amongst internal organs of early arthropods only the digestive system withstands fossilization is challenged by the identification of brain and ganglia in early Cambrian fuxianhuiids and megacheirans from southwest China. Here we document in the 520-million-year-old Chengjiang arthropod Fuxianhuia protensa an exceptionally preserved bilaterally symmetrical organ system corresponding to the vascular system of extant arthropods. Preserved primarily as carbon, this system includes a broad dorsal vessel extending through the thorax to the brain where anastomosing branches overlap brain segments and supply the eyes and antennae. The dorsal vessel provides segmentally paired branches to lateral vessels, an arthropod ground pattern character, and extends into the anterior part of the abdomen. The addition of its vascular system to documented digestive and nervous systems resolves the internal organization of F. protensa as the most completely understood of any Cambrian arthropod, emphasizing complexity that had evolved by the early Cambrian.
Designing Automatic Meter Reading System Using Open Source Hardware and Software
The quality of measured values is highly dependent on the device that measures them: the size of the sample, the time of measurement, the periods of measurement, the mobility and robustness of the device, etc. Contemporary devices on the market intended for the measurement of physical quantities vary in price as well as in the quality of the measured values. The rule "the more expensive the better" is not necessarily valid, because everything depends on the characteristics and capabilities of the device and the customer's needs. In this paper, a device based on open-source hardware and software components is presented. The device was used to measure voltages and currents on low-voltage networks; a virtually unlimited number of sensors can be added to it, and it is assembled from electronic components available on the Internet.
Interactive Chinese Character Learning System through Pictograph Evolution
This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictograph evolution as an edutainment concept in computer-based language learning. The origin of the language itself is taken as the learning platform, given the complexity of Chinese compared with other languages. Users, especially children, enjoy the system more because they can memorize Chinese characters easily and learn the origins of the characters in a pleasurable learning environment, in contrast to the traditional approach, in which children must learn characters by rote in an unpleasant environment. Skeletonization is used to represent Chinese characters and objects, with an animated pictograph evolution to facilitate learning of the language. A shortest-skeleton-path matching technique is employed for fast and accurate matching in our implementation. The user writes a word or draws a simple 2D object in the input panel, and the matched word or object is displayed together with its pictograph evolution to instill learning. The system targets preschool children between 4 and 6 years old, allowing them to learn Chinese characters in a flexible and entertaining manner while employing visual and mind-mapping strategies as the learning methodology.
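The skeleton-representation step can be approximated with off-the-shelf tools; a minimal sketch using scikit-image follows (the toy glyph is illustrative, and the paper's shortest-skeleton-path matching is not reproduced here):

```python
# Sketch of the character-representation step only: reduce a binary glyph
# image to its one-pixel-wide skeleton with scikit-image.
import numpy as np
from skimage.morphology import skeletonize

# Toy binary "stroke": a filled rectangle standing in for a rasterized character.
glyph = np.zeros((64, 64), dtype=bool)
glyph[10:54, 28:36] = True

skeleton = skeletonize(glyph)  # medial line of the stroke
print(skeleton.sum(), "skeleton pixels from", glyph.sum(), "foreground pixels")
```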
Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors
We present a dual contribution to the task of machine reading-comprehension: a technique for creating large machine-comprehension (MC) datasets using paragraph-vector models, and a novel, hybrid neural-network architecture that combines the representation power of recurrent neural networks with the discriminative power of fully-connected multi-layered networks. We use the MC-dataset generation technique to build a dataset of around 2 million examples, for which we empirically determine the ceiling of human performance (around 91% accuracy), as well as the performance of a variety of computer models. Among all the models we have experimented with, our hybrid neural-network architecture achieves the highest performance (83.2% accuracy). The remaining gap to the human-performance ceiling provides ample room for future model improvements.
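A hedged sketch of how paragraph vectors could drive such dataset creation, using gensim's Doc2Vec (gensim >= 4) on a toy corpus; the corpus and hyperparameters are illustrative, not the paper's:

```python
# Embed paragraphs with Doc2Vec and retrieve the nearest neighbors of a
# source paragraph as plausible distractor candidates.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

paragraphs = [
    "the cat sat on the mat",
    "a dog chased the ball in the park",
    "the kitten slept on the rug",
    "stock prices fell sharply on monday",
]
corpus = [TaggedDocument(p.split(), [i]) for i, p in enumerate(paragraphs)]

model = Doc2Vec(corpus, vector_size=32, min_count=1, epochs=50)

# Nearest paragraphs to paragraph 0 -- candidates for MC distractors.
for doc_id, sim in model.dv.most_similar(0, topn=2):
    print(doc_id, f"{sim:.2f}", paragraphs[doc_id])
```

Nearest neighbors in paragraph-vector space make plausible distractors precisely because they are semantically close to the source passage.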
RFID-Based 3-D Positioning Schemes
This research focuses on RFID-based 3-D positioning schemes, aiming to locate an object in a 3-dimensional space, with reference to a predetermined arbitrary coordinate system, by using RFID tags and readers. More specifically, we consider a hexahedron, which may be a shipping container, a storage room, or another hexahedral space. A number of RFID tags and/or readers with known locations are deployed as reference nodes. We propose two positioning schemes, namely, the active scheme and the passive scheme. The former scheme locates an RFID reader. For example, it may be employed to locate a mobile person who is equipped with an RFID reader or an object that is approached by an RFID reader. The passive scheme locates an RFID tag, which is attached to the target object. Both approaches are based on a Nelder-Mead nonlinear optimization method that minimizes the error objective functions. We have carried out analyses and extensive simulations to evaluate the proposed schemes. Our results show that both schemes can locate the targets with acceptable accuracy. The active scheme usually results in smaller errors and has a lower hardware cost compared to its passive counterpart. On the other hand, the passive scheme is more efficient when locating multiple targets simultaneously. The effectiveness of our proposed approaches is verified experimentally using the IDENTEC RFID kits.
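The optimization core the abstract describes can be sketched directly with SciPy's Nelder-Mead implementation; the reference-node layout, noise model, and variable names below are assumptions for illustration:

```python
# Estimate an unknown 3-D position by minimizing squared range errors to
# reference nodes with the Nelder-Mead method.
import numpy as np
from scipy.optimize import minimize

refs = np.array([[0, 0, 0], [4, 0, 0], [0, 3, 0], [0, 0, 2.5]], dtype=float)
true_pos = np.array([1.5, 1.0, 1.2])
ranges = np.linalg.norm(refs - true_pos, axis=1)  # ideal measurements
ranges += np.random.default_rng(0).normal(0, 0.05, len(ranges))  # noise

def objective(p):
    # Sum of squared differences between measured and candidate-point ranges.
    return np.sum((np.linalg.norm(refs - p, axis=1) - ranges) ** 2)

result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
print("estimated:", result.x.round(3), "true:", true_pos)
```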
Efficacy of Adjuvant Radiation Therapy in the Treatment of Soft Tissue Sarcoma of the Extremity: 20-year Follow-Up of a Randomized Prospective Trial
This update of a randomized, prospective study presents the effect of external beam radiation therapy (EBRT) on long-term overall survival, local control, and limb function following limb-sparing surgery (LSS) for the treatment of extremity soft tissue sarcoma (STS). Following LSS, patients with extremity STS were randomized to receive EBRT or surgery alone. All patients with high-grade STS received adjuvant chemotherapy. Long-term follow-up was obtained through telephone interviews using a questionnaire based on validated methods. Overall survival (OS) was determined by the Kaplan–Meier method. A total of 141 patients with extremity STS were randomized to receive adjuvant EBRT (n = 70) or LSS alone (n = 71). Median follow-up was 17.9 years. The 10- and 20-year survival rates were 77 % (95 % CI 66–85 %) and 64 % (95 % CI 52–75 %) for patients receiving LSS alone and 82 % (95 % CI 72–90 %) and 71 % (95 % CI 59–81 %) for patients receiving EBRT (p = 0.22). Of the 54 patients who completed telephone interviews, the incidence of local recurrence during the follow-up period was 4 % (1 of 24) in the LSS alone cohort compared with 0 % (0 of 30) in those who received EBRT (p = 0.44). Patients treated with EBRT tended to have more wound complications (17 vs. 12.5 %, p = 0.72), clinically significant edema (25 vs. 12 %, p = 0.31), and functional limb deficits (15 vs. 12 %, p = 0.84). Adjuvant EBRT following surgery for STS of the extremity provides excellent local control with acceptable treatment-related morbidity and no statistically significant improvement in overall survival.
Trajectories of receptive language development from 3 to 12 years of age for very preterm children.
OBJECTIVES The goal was to examine whether indomethacin use, gender, neonatal, and sociodemographic factors predict patterns of receptive language development from 3 to 12 years of age in preterm children. METHODS A total of 355 children born in 1989-1992 with birth weights of 600 to 1250 g were evaluated at 3, 4.5, 6, 8, and 12 years with the Peabody Picture Vocabulary Test-Revised. Hierarchical growth modeling was used to explore differences in language trajectories. RESULTS From 3 to 12 years, preterm children displayed catch-up gains on the Peabody Picture Vocabulary Test-Revised. Preterm children started with an average standardized score of 84.1 at 3 years and gained 1.2 points per year across the age period studied. Growth-curve analyses of Peabody Picture Vocabulary Test-Revised raw scores revealed an indomethacin-gender effect on initial scores at 3 years, with preterm boys assigned randomly to receive indomethacin scoring, on average, 4.2 points higher than placebo-treated boys. However, the velocity of receptive vocabulary development from 3 to 12 years did not differ for the treatment groups. Children with severe brain injury demonstrated slower gains in skills over time, compared with those who did not suffer severe brain injury. Significant differences in language trajectories were predicted by maternal education and minority status. CONCLUSION Although indomethacin yielded an initial benefit for preterm boys, this intervention did not alter the developmental trajectory of receptive language scores. Severe brain injury leads to long-term sequelae in language development, whereas a socioeconomically advantaged environment supports better language development among preterm children.
Notes on Noise Contrastive Estimation and Negative Sampling
Estimating the parameters of probabilistic models of language such as maxent models and probabilistic neural models is computationally difficult, since it involves evaluating partition functions by summing over an entire vocabulary, which may contain millions of word types. Two closely related strategies—noise contrastive estimation (Mnih and Teh, 2012; Mnih and Kavukcuoglu, 2013; Vaswani et al., 2013) and negative sampling (Mikolov et al., 2012; Goldberg and Levy, 2014)—have emerged as popular solutions to this computational problem, but some confusion remains as to which is more appropriate and when. This document explicates their relationships to each other and to other estimation techniques. The analysis shows that, although they are superficially similar, NCE is a general parameter estimation technique that is asymptotically unbiased, while negative sampling is best understood as a family of binary classification models that are useful for learning word representations but not as a general-purpose estimator.
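The contrast the notes draw can be compressed into two equations (notation slightly simplified from the standard presentation):

```latex
% NCE reduces density estimation to classifying whether a word w came from
% the data or from k samples of a noise distribution q:
P_{\mathrm{NCE}}(D = 1 \mid c, w)
  = \frac{p_\theta(w \mid c)}{p_\theta(w \mid c) + k\,q(w)}

% Negative sampling drops the k q(w) term in favor of a plain sigmoid of the
% model score s_\theta(w, c):
P_{\mathrm{NS}}(D = 1 \mid c, w)
  = \sigma\!\left(s_\theta(w, c)\right)
  = \frac{\exp s_\theta(w, c)}{\exp s_\theta(w, c) + 1}
```

The two coincide only when k q(w) = 1, for example with uniform noise and k equal to the vocabulary size, which is why negative sampling does not in general yield a consistent estimator of normalized conditional probabilities.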
A phase II study of neoadjuvant chemotherapy with docetaxel, cisplatin and trastuzumab for T2 breast cancers
Preclinical data indicate that the combination of docetaxel, cisplatin and trastuzumab (TCH) may have the potential for clinically significant activity against breast cancers that overexpress the her2/neu gene (HER2). An open-label phase II trial was designed to investigate the response rate and toxicity profile of TCH in breast cancer patients whose primary tumor measured 2–5 cm in diameter (T2) at presentation. Thirty breast cancer patients with HER2-overexpressing tumors were enrolled. Patients received 6 cycles of docetaxel at 60 mg/m2 and cisplatin at 50 mg/m2 given on day 1 and then every 21 days. Trastuzumab was given on day 1, cycle 1 (4 mg/kg), and then continued weekly at 2 mg/kg for 1 year or until disease progression. Tumor measurements were obtained at baseline as well as after 3 and 6 cycles of chemotherapy. Twenty-nine breast cancer patients in Taiwan were evaluable, of whom 13 (44.8%) had pathological complete responses. No cardiac toxicity was observed. Grade 3 or 4 hematologic toxicities were observed in 1 of 28 patients. Grade 3 or 4 non-hematologic toxicities, with a reverse pattern, were observed in 6 of 29 patients. The results of our study indicate that TCH neoadjuvant chemotherapy is feasible and active in T2 HER2-overexpressing breast cancer patients in terms of pathological complete response rate, complete response, partial response and manageable toxicities.
Concordance between clinical and histopathologic diagnoses of oral mucosal lesions.
PURPOSE To study the epidemiology of oral soft tissue lesions in New Zealand from 2002 to 2006 and to determine the concordance between the clinical diagnosis and the definitive histopathologic diagnosis achieved by general dental practitioners and by specialists. MATERIALS AND METHODS The details from biopsy referrals and the corresponding histopathologic reports of oral soft tissue lesions were recorded into a statistical software package, and the concordance between the clinical diagnosis and histopathologic diagnosis was determined for all the lesions. RESULTS Most biopsied lesions were benign, and both clinician groups achieved high diagnostic concordance for these lesions. However, when considering all lesion types, the overall concordance for both groups was a moderate 50.6%, with little difference between specialists and general dental practitioners, although specialists were more accurate in diagnosing a malignant or premalignant lesion. CONCLUSIONS The clinical and histopathologic concordance achieved by oral health practitioners in New Zealand appears to be moderate.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Deep learning is at the heart of the current rise of artificial intelligence. In the field of computer vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision. We review the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.
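To make the notion of "subtle perturbation" concrete, here is a sketch of one of the simplest attacks such surveys cover, the fast gradient sign method (FGSM) of Goodfellow et al.; the function is illustrative, not code from the survey:

```python
# FGSM: perturb the input one step in the direction that increases the loss,
# with a small step size eps that keeps the change visually imperceptible.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clamped back to the valid image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```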
Data Mining Approach for Analyzing Call Center Performance
The aim of our research was to apply well-known data mining techniques (such as linear neural networks, multi-layered perceptrons, probabilistic neural networks, classification and regression trees, support vector machines, and a hybrid decision tree–neural network approach) to the problem of predicting the quality of service in call centers, based on performance data actually collected in the call center of a large insurance company. Our aim was two-fold: first, to compare the performance of models built using the above-mentioned techniques, and second, to analyze the characteristics of input sensitivity in order to better understand the relationship between the performance evaluation process and actual performance, and thereby help improve the performance of call centers. In this paper we summarize our findings.
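A hedged sketch of the comparison methodology with scikit-learn stand-ins (neither the tools nor the data of the paper): fit several model families on the same features and compare cross-validated scores.

```python
# Compare several model families on one dataset via cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

# Synthetic stand-in for call-center performance records.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

models = {
    "multi-layer perceptron": MLPClassifier(max_iter=1000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "support vector machine": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```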
A prospective, randomized clinical trial of antiretroviral therapies on carotid wall thickness.
OBJECTIVE This article compares the effects of initiating three contemporary antiretroviral therapy (ART) regimens on progression of carotid artery intima-media thickness (IMT) over 3 years. DESIGN Randomized clinical trial. SETTING Multicenter (26 institutions). PATIENTS ART-naive HIV-infected individuals (n = 328) without known cardiovascular disease or diabetes mellitus. INTERVENTION Random assignment to tenofovir/emtricitabine along with atazanavir/ritonavir (ATV/r), darunavir/ritonavir (DRV/r), or raltegravir (RAL). MAIN OUTCOME MEASURES Right-sided carotid IMT was evaluated by B-mode ultrasonography before ART initiation, and then after 48, 96, and 144 weeks. Comparisons of yearly rates of change in carotid IMT used mixed-effects linear regression models that permitted not only evaluation of the effects of ART on carotid IMT progression but also assessment of how ART-associated changes in traditional risk factors, bilirubin, and markers of HIV infection were associated with carotid IMT progression. RESULTS HIV-1 RNA suppression rates were high in all arms (>85%) over 144 weeks. Modest increases in triglycerides and non-high-density lipoprotein cholesterol levels were observed in the protease inhibitor-containing arms compared with decreases with RAL. In contrast, carotid IMT progressed more slowly on ATV/r [8.2, 95% confidence interval (5.6, 10.8) μm/year] than DRV/r [12.9 (10.3, 15.5) μm/year, P = 0.013]; changes with RAL were intermediate [10.7 (9.2, 12.2) μm/year, P = 0.15 vs. ATV/r; P = 0.31 vs. DRV/r]. Bilirubin and non-high-density lipoprotein cholesterol levels appeared to influence carotid IMT progression rates. CONCLUSION In ART-naive HIV-infected individuals at low cardiovascular disease risk, carotid IMT progressed more slowly in participants initiating ATV/r than in those initiating DRV/r, with intermediate changes associated with RAL. This effect may be due, in part, to hyperbilirubinemia.
Prognostic value of six minute walk test in cystic fibrosis adults.
BACKGROUND The 6 min walk test (6MWT) provides prognostic information in various respiratory diseases, but limited data exist in cystic fibrosis (CF) adults. METHODS Consecutive CF adults who performed 6MWT at Cochin Hospital (Paris, France) over 12 years were analyzed. The cut-off 6 min walking distance (6MWD) value that best predicted a combined endpoint (death without transplant or lung transplant) was established using a receiver operating curve. Determinants of low 6MWD or of desaturation (SpO2 ≤ 90%) during 6MWT were examined using multivariate logistic regressions. Prognostic value of these variables was assessed using Kaplan-Meier and Cox analyses. RESULTS 6MWT was performed in 286 CF adults (median: age, 28 yr; FEV1, 45% predicted) of whom 14% (n = 40) had lung transplant and 6% (n = 18) died without transplant. 6MWD correlated with FEV1% predicted (r = 0.43; P < 0.001), but markedly differed in subjects within the same range of FEV1. A 6MWD ≤ 475 m predicted death or transplant and was mostly found in patients with FEV1 ≤ 60% predicted. Desaturation during the 6MWT occurred in 29% of patients, exclusively in subjects with FEV1 ≤ 60% predicted. Both 6MWD ≤ 475 m and desaturation during the 6MWT were independent predictors of death or transplant. CONCLUSION The 6MWT provides prognostic information in CF adults, especially in subjects with FEV1 ≤ 60% predicted.
Image Compression Using DCT with Various Quantization Matrices
The discrete cosine transform (DCT) is a widely used compression technique for converting an image into elementary frequency components. Depending on the level of quality and compression desired, scalar multiples of the JPEG standard quantization matrix may be used. In this paper, the DCT method is applied to compress images at various quality levels. Different quantization matrices for the DCT coefficients are used to trade off the quality level and compression ratio of the JPEG image.
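A minimal sketch of the quality knob being varied, assuming the standard JPEG luminance table and toy data: take the 2-D DCT of an 8x8 block and quantize its coefficients with scalar multiples of the table.

```python
# 2-D DCT of an 8x8 block, quantized with scaled JPEG luminance tables.
import numpy as np
from scipy.fftpack import dct, idct

Q_JPEG = np.array([  # standard JPEG luminance quantization matrix
    [16,11,10,16, 24, 40, 51, 61], [12,12,14,19, 26, 58, 60, 55],
    [14,13,16,24, 40, 57, 69, 56], [14,17,22,29, 51, 87, 80, 62],
    [18,22,37,56, 68,109,103, 77], [24,35,55,64, 81,104,113, 92],
    [49,64,78,87,103,121,120,101], [72,92,95,98,112,100,103, 99]], float)

def dct2(b):  return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
def idct2(b): return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

# Toy 8x8 block of pixel values, shifted to be centered on zero.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float) - 128

for scale in (0.5, 1.0, 2.0):  # smaller scale -> higher quality
    q = np.round(dct2(block) / (scale * Q_JPEG))   # quantize coefficients
    recon = idct2(q * scale * Q_JPEG)              # dequantize + inverse DCT
    print(f"scale={scale}: nonzero coeffs={int((q != 0).sum())}, "
          f"rmse={np.sqrt(((block - recon) ** 2).mean()):.2f}")
```

Larger scale factors zero out more high-frequency coefficients, increasing compression at the cost of reconstruction error.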