title | abstract
---|---
Agile Spectrum Imaging: Programmable Wavelength Modulation for Cameras and Projectors | We advocate the use of quickly-adjustable, computer-controlled color spectra in photography, lighting and displays. We present an optical relay system that allows mechanical or electronic color spectrum control and use it to modify a conventional camera and projector. We use a diffraction grating to disperse the rays into different colors, and introduce a mask (or LCD/DMD) in the optical path to modulate the spectrum. We analyze the tradeoffs and limitations of this design, and demonstrate its use in a camera, projector and light source. We propose applications such as adaptive color primaries, metamer detection, scene contrast enhancement, photographing fluorescent objects, and high dynamic range photography using spectrum modulation. |
Rex: replication at the speed of multi-core | Standard state-machine replication involves consensus on a sequence of totally ordered requests through, for example, the Paxos protocol. Such a sequential execution model is becoming outdated on prevalent multi-core servers. Highly concurrent executions on multi-core architectures introduce non-determinism related to thread scheduling and lock contention, and fundamentally break the deterministic-execution assumption underlying state-machine replication. This tension between concurrency and consistency is not inherent, because the total ordering of requests is merely a simplifying convenience that is unnecessary for consistency. Concurrent executions of the application can instead be reconciled with a sequence of consensus decisions by running consensus on partial-order traces, rather than on totally ordered requests: a trace captures the non-deterministic decisions made in one replica's execution and is replayed with the same decisions on the others. The result is a new multi-core friendly replicated state-machine framework that achieves strong consistency while preserving parallelism in multi-threaded applications. On 12-core machines with hyper-threading, evaluations on typical applications show that we can scale with the number of cores, achieving up to 16 times the throughput of standard replicated state machines. |
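To make the trace-and-replay idea above concrete, here is a minimal sketch (not Rex's actual implementation; the class names and the lock-only scope are assumptions for illustration): the primary records the order in which threads win a shared lock, and a secondary replica grants the lock only in that recorded order, so both executions make the same non-deterministic choices.

```python
import threading
from collections import deque

class TracedLock:
    """Primary-side wrapper (hypothetical): records which thread wins each
    acquisition, i.e. one kind of non-deterministic decision in the trace."""
    def __init__(self):
        self._lock = threading.Lock()
        self.trace = []                      # ordered list of thread names

    def acquire(self, thread_name):
        self._lock.acquire()
        self.trace.append(thread_name)       # capture the decision

    def release(self):
        self._lock.release()

class ReplayLock:
    """Secondary-side wrapper (hypothetical): grants the lock to threads in
    exactly the order recorded on the primary, reproducing its decisions."""
    def __init__(self, trace):
        self._pending = deque(trace)
        self._lock = threading.Lock()
        self._cv = threading.Condition()

    def acquire(self, thread_name):
        with self._cv:                       # wait until it is this thread's turn
            self._cv.wait_for(lambda: self._pending and self._pending[0] == thread_name)
            self._pending.popleft()
        self._lock.acquire()

    def release(self):
        self._lock.release()
        with self._cv:
            self._cv.notify_all()
```

Rex generalizes this idea to partial-order traces covering all of a replica's synchronization decisions; the replicas agree on those traces via consensus and then replay them.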
Estimation of construction and demolition waste volume generation in new residential buildings in Spain. | The management planning of construction and demolition (C&D) waste uses a single indicator which does not provide enough detailed information. Therefore, other more innovative and precise indicators should be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools for the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to estimate the C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemizing types of waste and construction stages. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning. |
A comparison study between gross tumor volumes defined by preoperative magnetic resonance imaging, postoperative specimens, and tumor bed for radiotherapy after breast-conserving surgery | BACKGROUND
The identification and contouring of target volume is important for breast-conserving therapy. The aim of the study was to compare preoperative magnetic resonance imaging (MRI), postoperative pathology, excised specimens' (ES) size, and tumor bed (TB) delineation as methods for determining the gross tumor volume (GTV) for radiotherapy after breast-conserving surgery (BCS).
METHODS
Thirty-three patients with breast cancer who underwent preoperative MRI and radiotherapy after BCS were enrolled. The GTVs determined by MRI, pathology, and the ES were defined as GTVMRI, GTVPAT, and GTVES, respectively. GTVMRI+1 was defined as a 1.0-cm margin around the GTVMRI. The radiation oncologist delineated GTV of the TB (GTVTB) using planning computed tomography according to ≥5 surgical clips placed in the lumpectomy cavity (LC).
RESULTS
The median GTVMRI, GTVMRI+1, GTVPAT, GTVES, and GTVTB were 0.97 cm³ (range, 0.01-6.88), 12.58 cm³ (range, 3.90-34.13), 0.97 cm³ (range, 0.01-6.36), 15.46 cm³ (range, 1.15-70.69), and 19.24 cm³ (range, 4.72-54.33), respectively. There were no significant differences between GTVMRI and GTVPAT, GTVMRI+1 and GTVES, or GTVES and GTVTB (P = 0.188, 0.070, and 0.264, respectively). GTVMRI was positively correlated with GTVPAT. However, neither GTVES nor GTVTB correlated with GTVMRI (P = 0.071 and 0.378, respectively). Furthermore, neither GTVES nor GTVTB correlated with GTVMRI+1 (P = 0.068 and 0.375, respectively).
CONCLUSION
When ≥5 surgical clips were placed in the LC for BCS, the volume of TB was consistent with the volume of ES. Neither the volume of TB nor the volume of ES correlated significantly with the volume of tumor defined by preoperative MRI. |
Haplogroups as evolutionary markers of cognitive ability | Studies investigating evolutionary theories on the origins of national differences in intelligence have been criticized on the basis that both national cognitive ability measures and supposedly evolutionarily informative proxies (such as latitude and climate) are confounded with general developmental status. In this study 14 Y chromosomal haplogroups (N=47 countries) are employed as evolutionary markers. These are (most probably) not intelligence coding genes, but proxies of evolutionary development with potential relevance to cognitive ability. Correlations and regression analyses with a general developmental indicator (HDI) revealed that seven haplogroups were empirically important predictors of national cognitive ability (I, R1a, R1b, N, J1, E, T[+L]). Based on their evolutionary meaning and correlation with cognitive ability these haplogroups were grouped into two sets. Combined, they accounted in a regression and path analyses for 32–51% of the variance in national intelligence relative to the developmental indicator (35–58%). This pattern was replicated internationally with further controls (e.g. latitude, spatial autocorrelation etc.) and at the regional level in two independent samples (within Italy and Spain). These findings, using a conservative estimate of evolutionary influences, provide support for a mixed influence on national cognitive ability stemming from both current environmental and past environmental (evolutionary) factors. |
Mining advices from weblogs | Weblogs, one of the fastest growing forms of user-generated content, often contain key lessons gleaned from people's past experiences that are well worth presenting to other people. One of the key lessons contained in weblogs is often expressed in the form of advice. In this paper, we aim to provide a methodology to extract sentences that convey advice in weblogs. We examined our data to discover the characteristics of advice contained in weblogs. Based on this observation, we define our task as a classification problem using various linguistic features. We show that our proposed method significantly outperforms the baseline. The presence or absence of an imperative mood expression appears to be the most important feature in this task. It is also worth noting that the work presented in this paper is the first attempt at mining advice from English data. |
Intramuscular myxoma of the hypothenar muscles | Intramuscular myxomas of the hand are rare entities. Primarily found in the myocardium, these lesions also affect the bone and soft tissues in other parts of the body. This article describes a case of hypothenar muscles myxoma treated with local surgical excision after frozen section biopsy with tumor-free margins. Radiographic images of the axial and appendicular skeleton were negative for fibrous dysplasia, and endocrine studies were within normal limits. The 8-year follow-up period has been uneventful, with no complications. The patient is currently recurrence free, with normal intrinsic hand function. |
An RLS-Based Lattice-Form Complex Adaptive Notch Filter | This letter presents a new lattice-form complex adaptive IIR notch filter to estimate and track the frequency of a complex sinusoid signal. The IIR filter is a cascade of a direct-form all-pole prefilter and an adaptive lattice-form all-zero filter. A complex domain exponentially weighted recursive least square algorithm is adopted instead of the widely used least mean square algorithm to increase the convergence rate. The convergence property of this algorithm is investigated, and an expression for the steady-state asymptotic bias is derived. Analysis results indicate that the frequency estimate for a single complex sinusoid is unbiased. Simulation results demonstrate that the proposed method achieves faster convergence and better tracking performance than all traditional algorithms. |
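For readers unfamiliar with the recursion family involved, a generic complex-valued exponentially weighted RLS update (a standard textbook form, not the paper's specific lattice-form derivation) looks like:

$$
\begin{aligned}
k(n) &= \frac{\lambda^{-1} P(n-1)\,x(n)}{1 + \lambda^{-1}\,x^{H}(n)\,P(n-1)\,x(n)},\\
e(n) &= d(n) - w^{H}(n-1)\,x(n),\\
w(n) &= w(n-1) + k(n)\,e^{*}(n),\\
P(n) &= \lambda^{-1}\bigl(P(n-1) - k(n)\,x^{H}(n)\,P(n-1)\bigr),
\end{aligned}
$$

where 0 < λ ≤ 1 is the forgetting factor; this exponential weighting of past data is what yields the faster convergence compared with LMS-style updates that the abstract highlights.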
The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. | OBJECTIVE
To test the feasibility of creating a valid and reliable checklist with the following features: appropriate for assessing both randomised and non-randomised studies; provision of both an overall score for study quality and a profile of scores not only for the quality of reporting, internal validity (bias and confounding) and power, but also for external validity.
DESIGN
A pilot version was first developed, based on epidemiological principles, reviews, and existing checklists for randomised studies. Face and content validity were assessed by three experienced reviewers and reliability was determined using two raters assessing 10 randomised and 10 non-randomised studies. Using different raters, the checklist was revised and tested for internal consistency (Kuder-Richardson 20), test-retest and inter-rater reliability (Spearman correlation coefficient and sign rank test; kappa statistics), criterion validity, and respondent burden.
MAIN RESULTS
The performance of the checklist improved considerably after revision of a pilot version. The Quality Index had high internal consistency (KR-20: 0.89) as did the subscales apart from external validity (KR-20: 0.54). Test-retest (r = 0.88) and inter-rater (r = 0.75) reliability of the Quality Index were good. Reliability of the subscales varied from good (bias) to poor (external validity). The Quality Index correlated highly with an existing, established instrument for assessing randomised studies (r = 0.90). There was little difference between its performance with non-randomised and with randomised studies. Raters took about 20 minutes to assess each paper (range 10 to 45 minutes).
CONCLUSIONS
This study has shown that it is feasible to develop a checklist that can be used to assess the methodological quality not only of randomised controlled trials but also non-randomised studies. It has also shown that it is possible to produce a checklist that provides a profile of the paper, alerting reviewers to its particular methodological strengths and weaknesses. Further work is required to improve the checklist and the training of raters in the assessment of external validity. |
Roles of Macro-Actions in Accelerating Reinforcement Learning | We analyze the use of built-in policies, or macro-actions, as a form of domain knowledge that can improve the speed and scaling of reinforcement learning algorithms. Such macro-actions are often used in robotics, and macro-operators are also well-known as an aid to state-space search in AI systems. The macro-actions we consider are closed-loop policies with termination conditions. The macro-actions can be chosen at the same level as primitive actions. Macro-actions commit the learning agent to act in a particular, purposeful way for a sustained period of time. Overall, macro-actions may either accelerate or retard learning, depending on the appropriateness of the macro-actions to the particular task. We analyze their effect in a simple example, breaking the acceleration effect into two parts: 1) the effect of the macro-action in changing exploratory behavior, independent of learning, and 2) the effect of the macro-action on learning, independent of its effect on behavior. In our example, both effects are significant, but the latter appears to be larger. Finally, we provide a more complex gridworld illustration of how appropriately chosen macro-actions can accelerate overall learning. Many problems in artificial intelligence (AI) are too large to be solved practically by searching the state-space using available primitive operators. By searching for the goal using only primitive operators, the AI system is bounded by both the depth and the breadth of the search tree. One way to overcome this difficulty is through macro-actions (or macros). By chunking together primitive actions into macro-actions, the effective length of the solution is shortened. Both [Korf, 1985] and [Iba, 1989] have demonstrated that using macro-actions to search for a solution has resulted in solutions in cases where the system was unable to find answers by searching in primitive state-space, and in finding faster solutions in cases where both systems could solve the problem. Reinforcement learning (RL) is a collection of methods for discovering near-optimal solutions to stochastic sequential decision problems [Watkins, 1989]. An RL system interacts with the environment by executing actions and receiving rewards from the environment. Unlike supervised learning, RL does not rely on an outside teacher to specify the correct action for a given state. Instead, an RL system tries different actions and uses the feedback from the environment to determine a closed-loop policy which maximizes reward. In this work, we treat macro-actions as closed-loop policies with termination conditions. Prior work that has included closed-loop macro-… |
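A compact way to see how a macro-action enters the learning update: in the SMDP-style Q-learning rule for a macro-action m executed from state s for k steps (a textbook form consistent with, but not quoted from, the paper), the accumulated discounted reward and the discounted backup from the termination state replace the usual one-step quantities:

$$
Q(s, m) \;\leftarrow\; Q(s, m) + \alpha \Bigl[\, r_{1} + \gamma r_{2} + \cdots + \gamma^{k-1} r_{k} + \gamma^{k} \max_{a'} Q(s', a') - Q(s, m) \Bigr],
$$

where s' is the state in which the macro-action's termination condition fires and the maximization ranges over both primitive actions and macro-actions, since they are chosen at the same level.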
Semi-Supervised Clustering with Neural Networks | Clustering using neural networks has recently demonstrated promising performance in machine learning and computer vision applications. However, the performance of current approaches is limited either by unsupervised learning or by a dependence on large sets of labeled data samples. In this paper, we propose ClusterNet, which uses pairwise semantic constraints from very few labeled data samples (< 5% of total data) and exploits the abundant unlabeled data to drive the clustering approach. We define a new loss function that uses pairwise semantic similarity between objects combined with constrained k-means clustering to efficiently utilize both labeled and unlabeled data in the same framework. The proposed network uses a convolutional autoencoder to learn a latent representation that groups data into k specified clusters, while also learning the cluster centers simultaneously. We evaluate ClusterNet on several datasets and compare its performance with state-of-the-art deep clustering approaches. |
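As a toy illustration of how pairwise constraints derived from a few labeled samples can steer a k-means-style assignment (this is a generic constrained-assignment sketch, not the convolutional-autoencoder ClusterNet model itself):

```python
import numpy as np

def constrained_assignment(X, centers, must_link, cannot_link):
    """Assign each point to its nearest center that does not violate a
    constraint: must-linked points end up together, cannot-linked apart.
    must_link / cannot_link map a point index to a list of partner indices."""
    assignment = -np.ones(len(X), dtype=int)
    for i in range(len(X)):
        for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
            ok_must = all(assignment[j] in (-1, c) for j in must_link.get(i, []))
            ok_cannot = all(assignment[j] != c for j in cannot_link.get(i, []))
            if ok_must and ok_cannot:
                assignment[i] = c
                break
    return assignment

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(constrained_assignment(X, centers, must_link={}, cannot_link={1: [0]}))
# point 1 is pushed away from point 0's cluster despite being closer to it
```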
Inequalities in Open Source Software Development: Analysis of Contributor’s Commits in Apache Software Foundation Projects | While researchers are becoming increasingly interested in studying the OSS phenomenon, there is still only a small number of studies analyzing larger samples of projects to investigate the structure of activities among OSS developers. The significant amount of information gathered in publicly available open-source software repositories and mailing-list archives offers an opportunity to analyze project structures and participant involvement. In this article, using commit data from the repositories of 263 Apache projects (nearly all of them), we show that although OSS development is often described as collaborative, it in fact predominantly relies on radically solitary input and individual, non-collaborative contributions. We also show, in the first published study of this magnitude, that the engagement of contributors follows a power-law distribution. |
Constructing a Hermitian Matrix from Its Diagonal Entries and Eigenvalues | Given two vectors a, λ ∈ ℝⁿ, the Schur-Horn theorem states that a majorizes λ if and only if there exists a Hermitian matrix H with eigenvalues λ and diagonal entries a. While the theory is regarded as classical by now, the known proof is not constructive. To construct a Hermitian matrix from its diagonal entries and eigenvalues therefore becomes an interesting and challenging inverse eigenvalue problem. Two algorithms for determining the matrix numerically are proposed in this paper. The lift and projection method is an iterative method which involves an interesting application of the Wielandt-Hoffman theorem. The projected gradient method is a continuous method which, besides its easy implementation, offers a new proof of existence because of its global convergence property. |
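A minimal numerical companion to the statement above, assuming the usual sorted-partial-sum comparison behind majorization (texts differ on which vector is said to "majorize" the other, so read the feasibility claim in the abstract's own convention); this only checks whether a prescribed diagonal/eigenvalue pair is feasible, it does not implement the lift-and-projection or projected gradient constructions proposed in the paper.

```python
import numpy as np

def partial_sum_dominates(x, y, tol=1e-10):
    """Sorted-decreasing partial sums of x dominate those of y, with equal
    total sums -- the comparison underlying the majorization condition."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return (np.all(np.cumsum(xs) >= np.cumsum(ys) - tol)
            and abs(x.sum() - y.sum()) <= tol)

# Feasibility check for the inverse problem: with eigenvalues [3, 2, 1] and
# diagonal [2.5, 2, 1.5] the Schur-Horn condition holds, so a Hermitian
# matrix with these eigenvalues and diagonal entries exists.
eigenvalues = np.array([3.0, 2.0, 1.0])
diagonal    = np.array([2.5, 2.0, 1.5])
print(partial_sum_dominates(eigenvalues, diagonal))   # True -> feasible
```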
A Novel CMOS Full Adder | This paper proposes a high-speed adder cell using a new design style called "bridge". The bridge design style offers more regularity and higher density than the conventional CMOS design style by using certain transistors, named bridge transistors. Results show a 4.4% (at Vdd = 3 V) to 34.1% (at Vdd = 1 V) improvement in speed over the conventional CMOS adder. HSPICE is the circuit simulator used, and the simulations are based on the BSIM3v3 0.18 µm technology. |
Human-Inspired Control of Bipedal Walking Robots | This paper presents a human-inspired control approach to bipedal robotic walking: utilizing human data and output functions that appear to be intrinsic to human walking in order to formally design controllers that provably result in stable robotic walking. Beginning with human walking data, outputs-or functions of the kinematics-are determined that result in a low-dimensional representation of human locomotion. These same outputs can be considered on a robot, and human-inspired control is used to drive the outputs of the robot to the outputs of the human. The main results of this paper are that, in the case of both under and full actuation, the parameters of this controller can be determined through a human-inspired optimization problem that provides the best fit of the human data while simultaneously provably guaranteeing stable robotic walking for which the initial condition can be computed in closed form. These formal results are demonstrated in simulation by considering two bipedal robots-an underactuated 2-D bipedal robot, AMBER, and fully actuated 3-D bipedal robot, NAO-for which stable robotic walking is automatically obtained using only human data. Moreover, in both cases, these simulated walking gaits are realized experimentally to obtain human-inspired bipedal walking on the actual robots. |
External localization system for mobile robotics | We present fast and precise vision-based software intended for multiple robot localization. The core component of the proposed localization system is an efficient method for black and white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that makes it possible to calculate its precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from describing the method, we also publish its source code so that it can be used as an enabling technology for various mobile robotics problems. |
Low-delay peer-to-peer streaming using scalable video coding | Peer-to-peer (P2P) networks represent a valuable architecture for streaming video over the Internet. In these systems, users contribute their resources to relay the media to others and no dedicated infrastructure is required. In order to ensure a low end-to-end delay, P2P overlay networks are often organized as a set of complementary multicast trees. The source of the stream multiplexes the data on top of these trees and the routing of packets is statically defined. In this scenario, the reliability of the overlay links is critical for the performance of the system since temporary link failure or network congestion can cause a significant disruption of the end-user quality. The novel Scalable Video Coding (SVC) standard enables efficient usage of the network capacity by allowing intermediate high-capacity nodes in the overlay network to dynamically extract layers from the scalable bit stream to serve less capable peers. On the other hand, SVC incurs a certain loss in terms of coding efficiency with respect to H.264/AVC single-layer coding. We propose a simple model that allows us to evaluate the trade-off of using a scalable codec with respect to single-layer coding, given the distribution of the receivers’ capacities in an error-free network. We also report experimental results obtained by using SVC on top of a real-time implementation of the Stanford Peer-to-Peer Multicast (SPPM) protocol that clearly show the benefits of a prioritization mechanism to react to network congestion. |
ST-Elevation Myocardial Infarction, Thrombus Aspiration, and Different Invasive Strategies. A TASTE Trial Substudy | BACKGROUND
The clinical effect of thrombus aspiration in ST-elevation myocardial infarction may depend on the type of aspiration catheter and stenting technique.
METHODS AND RESULTS
The multicenter, prospective, randomized, open-label trial Thrombus Aspiration in ST-Elevation myocardial infarction in Scandinavia (TASTE) did not demonstrate a clinical benefit of thrombus aspiration compared to percutaneous coronary intervention alone. We assessed the effect of type of aspiration device, stent type, direct stenting, and postdilation on outcomes at 1 year. There was no difference in all-cause mortality between the 3 most frequently used aspiration catheters (Eliminate [Terumo] 5.4%, Export [Medtronic] 5.0%, Pronto [Vascular Solutions] 4.5%) in patients randomized to thrombus aspiration. There was no difference in mortality between directly stented patients randomized to thrombus aspiration compared to patients randomized to percutaneous coronary intervention only (risk ratio 1.08, 95% CI 0.70 to 1.67, P=0.73). Similarly, there was no difference in mortality between the 2 randomized groups for patients receiving drug-eluting stents (risk ratio 0.89, 95% CI 0.63 to 1.26, P=0.50) or for those treated with postdilation (risk ratio 0.72, 95% CI 0.49 to 1.07, P=0.11). Furthermore, there was no difference in rehospitalization for myocardial infarction or stent thrombosis between the randomized arms in any of the subgroups.
CONCLUSIONS
In patients with ST-elevation myocardial infarction randomized to thrombus aspiration, the type of aspiration catheter did not affect outcome. Stent type, direct stenting, or postdilation did not affect outcome irrespective of treatment with thrombus aspiration and percutaneous coronary intervention or percutaneous coronary intervention alone.
CLINICAL TRIAL REGISTRATION
URL: ClinicalTrials.gov. Unique identifier: NCT01093404, https://clinicaltrials.gov/ct2/show/NCT01093404. |
HARP: Hierarchical Representation Learning for Networks | We present HARP, a novel method for learning low-dimensional embeddings of a graph’s nodes which preserves higher-order structural features. Our proposed method achieves this by compressing the input graph prior to embedding it, effectively avoiding troublesome embedding configurations (i.e. local minima) which can pose problems to non-convex optimization. HARP works by finding a smaller graph which approximates the global structure of its input. This simplified graph is used to learn a set of initial representations, which serve as good initializations for learning representations in the original, detailed graph. We inductively extend this idea, by decomposing a graph into a series of levels, and then embed the hierarchy of graphs from the coarsest one to the original graph. HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec. Indeed, we demonstrate that applying HARP’s hierarchical paradigm yields improved implementations for all three of these methods, as evaluated on classification tasks on real-world graphs such as DBLP, BlogCatalog, and CiteSeer, where we achieve a performance gain over the original implementations by up to 14% Macro F1. |
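A schematic of the coarsen-embed-prolongate loop described above; the graph interface, the `coarsen` routine (edge/star collapsing), and the `base_embed` routine (DeepWalk, LINE or node2vec) are placeholders, so this is a sketch of the meta-strategy rather than the authors' implementation.

```python
def harp_embed(graph, base_embed, coarsen, levels=3, dim=128):
    """Schematic HARP meta-strategy (illustrative, not the reference code).

    graph      : input graph, assumed to expose a `.nodes` collection
    base_embed : any node-embedding routine, e.g. DeepWalk/LINE/node2vec,
                 taking (graph, dim, init) and returning {node: vector}
    coarsen    : routine that collapses edges/stars and returns
                 (smaller_graph, mapping from fine node -> coarse node)
    """
    # 1. Build a hierarchy of successively coarser graphs.
    hierarchy, mappings = [graph], []
    for _ in range(levels):
        coarse, mapping = coarsen(hierarchy[-1])
        hierarchy.append(coarse)
        mappings.append(mapping)

    # 2. Embed the coarsest graph from scratch.
    emb = base_embed(hierarchy[-1], dim, init=None)

    # 3. Prolongate: copy each coarse node's vector to the fine nodes it
    #    represents, then refine by re-running the embedder from that init.
    for fine_graph, mapping in zip(reversed(hierarchy[:-1]), reversed(mappings)):
        init = {v: emb[mapping[v]] for v in fine_graph.nodes}
        emb = base_embed(fine_graph, dim, init=init)
    return emb
```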
Using web security scanners to detect vulnerabilities in web services | Although web services are becoming business-critical components, they are often deployed with critical software bugs that can be maliciously exploited. Web vulnerability scanners allow detecting security vulnerabilities in web services by stressing the service from the point of view of an attacker. However, research and practice show that different scanners have different performance on vulnerability detection. In this paper we present an experimental evaluation of security vulnerabilities in 300 publicly available web services. Four well-known vulnerability scanners have been used to identify security flaws in web service implementations. A large number of vulnerabilities was observed, which confirms that many services are deployed without proper security testing. Additionally, the differences in the vulnerabilities detected and the high number of false positives (35% and 40% in two cases) and low coverage (less than 20% for two of the scanners) observed highlight the limitations of web vulnerability scanners in detecting security vulnerabilities in web services. |
Cloud based Big Data Analytics: A Survey of Current Research and Future Directions | The advent of the digital age has led to a rise in different types of data with every passing day. In fact, it is expected that half of the total data will be on the cloud by 2016. This data is complex and needs to be stored, processed and analyzed for information that can be used by organizations. Cloud computing provides an apt platform for big data analytics in view of the storage and computing requirements of the latter. This makes cloud-based analytics a viable research field. However, several issues need to be addressed and risks need to be mitigated before practical applications of this synergistic model can be popularly used. This paper explores the existing research, challenges, open issues and future research direction for this field of study. |
Current perspectives: the impact of cyberbullying on adolescent health | Cyberbullying has become an international public health concern among adolescents, and as such, it deserves further study. This paper reviews the current literature related to the effects of cyberbullying on adolescent health across multiple studies worldwide and provides directions for future research. A review of the evidence suggests that cyberbullying poses a threat to adolescents' health and well-being. A plethora of correlational studies have demonstrated a cogent relationship between adolescents' involvement in cyberbullying and negative health indices. Adolescents who are targeted via cyberbullying report increased depressive affect, anxiety, loneliness, suicidal behavior, and somatic symptoms. Perpetrators of cyberbullying are more likely to report increased substance use, aggression, and delinquent behaviors. Mediating/moderating processes have been found to influence the relationship between cyberbullying and adolescent health. More longitudinal work is needed to increase our understanding of the effects of cyberbullying on adolescent health over time. Prevention and intervention efforts related to reducing cyberbullying and its associated harms are discussed. |
Program interference in MLC NAND flash memory: Characterization, modeling, and mitigation | As NAND flash memory continues to scale down to smaller process technology nodes, its reliability and endurance are degrading. One important source of reduced reliability is the phenomenon of program interference: when a flash cell is programmed to a value, the programming operation affects the threshold voltage of not only that cell, but also the other cells surrounding it. This interference potentially causes a surrounding cell to move to a logical state (i.e., a threshold voltage range) that is different from its original state, leading to an error when the cell is read. Understanding, characterizing, and modeling of program interference, i.e., how much the threshold voltage of a cell shifts when another cell is programmed, can enable the design of mechanisms that can effectively and efficiently predict and/or tolerate such errors. In this paper, we provide the first experimental characterization of and a realistic model for program interference in modern MLC NAND flash memory. To this end, we utilize the read-retry mechanism present in some state-of-the-art 2Y-nm (i.e., 20-24nm) flash chips to measure the changes in threshold voltage distributions of cells when a particular cell is programmed. Our results show that the amount of program interference received by a cell depends on 1) the location of the programmed cells, 2) the order in which cells are programmed, and 3) the data values of the cell that is being programmed as well as the cells surrounding it. Based on our experimental characterization, we develop a new model that predicts the amount of program interference as a function of threshold voltage values and changes in neighboring cells. We devise and evaluate one application of this model that adjusts the read reference voltage to the predicted threshold voltage distribution with the goal of minimizing erroneous reads. Our analysis shows that this new technique can reduce the raw flash bit error rate by 64% and thereby improve flash lifetime by 30%. We hope that the understanding and models developed in this paper lead to other error tolerance mechanisms for future flash memories. |
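To illustrate the shape of such a model, here is a toy linear interference predictor and the corresponding read-reference adjustment; the neighbor categories and coupling coefficients are invented for illustration and are not the values characterized in the paper.

```python
# Illustrative coupling coefficients: how strongly a programmed neighbor's
# threshold-voltage change shifts the victim cell (values are made up).
COUPLING = {"wordline_neighbor": 0.08, "bitline_neighbor": 0.02, "diagonal": 0.01}

def predict_interference(delta_v_neighbors):
    """Predict the victim cell's threshold-voltage shift as a weighted sum
    of its neighbors' programming-induced voltage changes."""
    return sum(COUPLING[kind] * dv for kind, dv in delta_v_neighbors.items())

def adjusted_read_reference(nominal_ref, delta_v_neighbors):
    """Shift the read reference voltage toward the predicted distribution
    to reduce erroneous reads, as in the mitigation the abstract evaluates."""
    return nominal_ref + predict_interference(delta_v_neighbors)

print(adjusted_read_reference(2.0, {"wordline_neighbor": 1.5,
                                    "bitline_neighbor": 0.6,
                                    "diagonal": 0.3}))
```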
Secure data processing framework for mobile cloud computing | In mobile cloud computing, mobile devices can rely on cloud computing and information storage resources to perform computationally intensive operations such as searching, data mining, and multimedia processing. In addition to providing traditional computation services, the mobile cloud also enhances the operation of traditional ad hoc networks by treating mobile devices as service nodes, e.g., for sensing services. The sensed information, such as location coordinates and health-related information, should be processed and stored in a secure fashion to protect the user's privacy in the cloud. To this end, we present a new mobile cloud data processing framework based on trust management and private data isolation. Finally, an implementation pilot for improving teenagers' driving safety, called FocusDrive, is presented to demonstrate the solution. |
Reusable Model Transformation Patterns | This paper is a reflection of our experience with the specification and subsequent execution of model transformations in the QVT core and Relations languages. Since this technology for executing transformations written in high-level, declarative specification languages is of very recent date, we observe that there is little knowledge available on how to write such declarative model transformations. Consequently, there is a need for a body of knowledge on transformation engineering. With this paper we intend to make an initial contribution to this emerging discipline. Based on our experiences we propose a number of useful design patterns for transformation specification. In addition we provide a method for specifying such transformation patterns in QVT, such that others can add their own patterns to a catalogue and the body of knowledge can grow as experience is built up. Finally, we illustrate how these patterns can be used in the specification of complex transformations. |
Number 2002-01-3006 HUD Symbology for Surface Operations: Command Guidance vs. Situation Guidance Formats | This study investigated pilots' taxi performance, situation awareness and workload while taxiing with three different head-up display (HUD) symbology formats: Command-guidance, Situation-guidance and Hybrid. Command-guidance symbology provided the pilot with required control inputs to maintain centerline position; Situation-guidance symbology provided conformal, scene-linked navigation information; while the Hybrid symbology combined elements of both symbologies. Taxi speed, centerline tracking accuracy, workload and situation awareness were assessed. Taxi speed, centerline accuracy, and situation awareness were highest and workload lowest with the Situation-guidance and Hybrid symbologies. These results are thought to be due to cognitive tunneling induced by the Command-guidance symbology. The conformal route information of the Situation-guidance and Hybrid HUD formats provided a common reference with the environment, which may have supported better distribution of attention. |
Modeling and Summarizing News Events Using Semantic Triples | Abstractive news summarization approaches: Phrase-selection based (PSB) tentatively pairs subject and verb phrases from different sentences and checks them for compatibility; Pattern-graph Fusion (PGF) looks for similar tokens in different sentences and fuses the tokens, thus forming a graph. PSB example: the phrases "Hurricane Nate slammed Louisiana" and "It killed 2 people" are judged compatible. |
Ambulatory measurement of shoulder and elbow kinematics through inertial and magnetic sensors | Inertial and magnetic measurement systems (IMMSs) are a new generation of motion analysis systems which may diffuse the measurement of upper-limb kinematics to ambulatory settings. Based on the MT9B IMMS (Xsens Technologies, NL), we therefore developed a protocol that measures the scapulothoracic, humerothoracic and elbow 3D kinematics. To preliminarily evaluate the protocol, a 23-year-old subject performed six tasks involving shoulder and elbow single-joint-angle movements. Criteria for protocol validity were limited cross-talk with the other joint-angles during each task; scapulohumeral-rhythm close to literature results; and constant carrying-angle. To assess the accuracy of the MT9B when measuring the upper-limb kinematics through the protocol, we compared the MT9B estimations during the six tasks, plus other four, with the estimations of an optoelectronic system (the gold standard), in terms of RMS error, correlation coefficient (r), and the amplitude ratio (m). Results indicate that the criteria for protocol validity were met for all tasks. For the joint angles mainly involved in each movement, the MT9B estimations presented RMS errors <3.6°, r > 0.99 and 0.9 < m < 1.09. It appears therefore that (1) the protocol in combination with the MT9B is valid for, and (2) the MT9B in combination with the protocol is accurate when, measuring shoulder and elbow kinematics, during the tasks tested, in ambulatory settings. |
Evaluation of Dependency Parsers on Unbounded Dependencies | We evaluate two dependency parsers, MSTParser and MaltParser, with respect to their capacity to recover unbounded dependencies in English, a type of evaluation that has been applied to grammar-based parsers and statistical phrase structure parsers but not to dependency parsers. The evaluation shows that when combined with simple post-processing heuristics, the parsers correctly recall unbounded dependencies roughly 50% of the time, which is only slightly worse than two grammar-based parsers specifically designed to cope with such dependencies. |
Dissociation, somatization, and affect dysregulation: the complexity of adaptation of trauma. | OBJECTIVE
A century of clinical research has noted a range of trauma-related psychological problems that are not captured in the DSM-IV framework of posttraumatic stress disorder (PTSD). This study investigated the relationships between exposure to extreme stress, the emergence of PTSD, and symptoms traditionally associated with "hysteria," which can be understood as problems with stimulus discrimination, self-regulation, and cognitive integration of experience.
METHOD
The DSM-IV field trial for PTSD studied 395 traumatized treatment-seeking subjects and 125 non-treatment-seeking subjects who had also been exposed to traumatic experiences. Data on age at onset, the nature of the trauma, PTSD, dissociation, somatization, and affect dysregulation were collected.
RESULTS
PTSD, dissociation, somatization, and affect dysregulation were highly interrelated. The subjects meeting the criteria for lifetime (but not current) PTSD scored significantly lower on these disorders than those with current PTSD, but significantly higher than those who never had PTSD. Subjects who developed PTSD after interpersonal trauma as adults had significantly fewer symptoms than those with childhood trauma, but significantly more than victims of disasters.
CONCLUSIONS
PTSD, dissociation, somatization, and affect dysregulation represent a spectrum of adaptations to trauma. They often occur together, but traumatized individuals may suffer from various combinations of symptoms over time. In treating these patients, it is critical to attend to the relative contributions of loss of stimulus discrimination, self-regulation, and cognitive integration of experience to overall impairment and provide systematic treatment that addresses both unbidden intrusive recollections and these other symptoms associated with having been overwhelmed by exposure to traumatic experiences. |
The global seismographic network surpasses its design goal | The Global Seismographic Network (GSN) surpassed its 128-station design goal for uniform worldwide coverage of the Earth. A total of 136 GSN stations are now sited from the South Pole to Siberia, and from the Amazon Basin to the sea floor of the northeast Pacific Ocean, in cooperation with over 100 host organizations and seismic networks in 59 countries worldwide (Figure 1). Established in 1986 by the Incorporated Research Institutions for Seismology (IRIS) to replace the obsolete, analog Worldwide Standardized Seismograph Network (WWSSN), the GSN continues a tradition in global seismology that dates back more than a century to the network of Milne seismographs that initially spanned the globe. The GSN is a permanent network of state-of-the-art seismological and geophysical sensors connected by available telecommunications to serve as a multi-use scientific facility and societal resource for scientific research, environmental monitoring, and education for our national and international community. All GSN data are freely and openly available via the Internet both in real time and from archival storage at the IRIS Data Management System (www.iris.edu). GSN instrumentation is capable of measuring and recording with high fidelity all of Earth's vibrations, from high-frequency, strong ground motions near an earthquake to the slowest free oscillations of the Earth (Figure 2). GSN seismometers have recorded both the greatest earthquakes on scale (for example, the 1994 Mw-8.2 Bolivia earthquake at 660 km depth; Wallace [1995]), as well as the nano-earthquakes (M < 0) near the sea floor at the Hawaii-2 Observatory [Butler, 2003]. GSN sensors are accurately calibrated, and timing is based on GPS clocks. The primary focus in creating the GSN has been seismology. However, the power, telemetry, site, and logistical infrastructure at GSN stations are inherently multi-use. These resources are available to other scientific sensors, and the GSN welcomes interest from other scientific disciplines in sharing this infrastructure. GPS, meteorological, and geomagnetic sensors currently enhance GSN sites as geophysical observatories. Global real-time telemetry from all stations is the second GSN design goal now within reach. Dial-up telephone access, which the GSN pioneered in the early 1990s, has largely been supplanted by Internet and satellite access, which has now reached more than 80% of the network. To achieve this telemetry coverage, a wide range of solutions (geosynchronous satellites employing antennas in the 1 to 4 m range, Inmarsat, Iridium, land lines, local ISPs, submarine cable, etc.) has been implemented, in cooperation with NASA/Jet Island. Satellite hubs in Houston and at the Pacific Tsunami Warning Center in … |
An online stair-climbing control method for a transformable tracked robot | Stair-climbing is a necessary capacity for mobile robots. This paper presents an online control method for the stair-climbing of a transformable tracked robot, Amoeba-II, and this robot is also an isomerism-modules robot with different mechanism modules. Based on the reasonable compartmentalization and kinematics analysis of the stair-climbing process, the coordination of the rotations of modules can reduce the slippage between tracks and terrain. To ensure that the robot can climb stairs with enough capability and stability, the stair-climbing criterion for the robot has been established based on the force analysis of each stage of the stair-climbing procedure. Meanwhile, the interference-avoiding criterion has been set up to avoid the interference between the non-tracked module of the robot and the stair. The experiment for the stair-climbing of the robot has been implemented to certify the validity of the online stair-climbing control method for a transformable tracked robot. |
Strategic subcortical hyperintensities in cholinergic pathways and executive function decline in treated Alzheimer patients. | OBJECTIVE
To investigate changes in cognition, function, and behavior after 1 year in patients with Alzheimer disease being treated with cholinesterase inhibitors, in relation to the presence or absence of subcortical hyperintensities involving the cholinergic pathways.
DESIGN
One-year prospective cohort study.
SETTING
Memory Clinic, Sunnybrook Health Sciences Centre, University of Toronto.
PATIENTS
Ninety patients with possible/probable Alzheimer disease who were being treated with cholinesterase inhibitors at baseline.
INTERVENTIONS
Yearly standardized neuropsychological testing and brain magnetic resonance imaging (MRI). The Cholinergic Pathways Hyperintensities Scale (CHIPS) was applied to baseline MRIs to rate the severity of subcortical hyperintensities in cholinergic pathways. The consensus-derived Age-Related White Matter Changes (ARWMC) Rating Scale was used as a general measure of white matter disease burden.
MAIN OUTCOME MEASURES
Tests of global cognition, function, and behavior and specific cognitive and functional domains.
RESULTS
Patients in the low CHIPS group were equivalent to those in the high CHIPS group with regard to baseline demographic characteristics, cognitive severity, and vascular risk factors. After covarying age and education, no differences were found after 1 year in overall cognition, function, and behavior or on memory, language, and visuospatial tasks. Patients in the high CHIPS group showed improvement on executive function and working memory tasks compared with those in the low CHIPS group. For the ARWMC scale, groups with and without white matter abnormalities were equivalent on baseline demographics and in cognitive, functional, and behavioral outcomes.
CONCLUSION
Cerebrovascular compromise of the cholinergic pathways may be a factor that contributes more selectively than does total white matter lesion burden to response to cholinergic therapy in Alzheimer disease, particularly on frontal/executive tasks. |
Engineers & electrons: A century of electrical progress | Spanning two centuries, with emphasis on the past 100 years, Engineers & Electrons is as much a revelation of the human side of engineering as it is a description of the technical accomplishments of the profession. |
Book review: Three-Dimensional Computer Vision. by Olivier Faugeras (The MIT Press, 1993) | The book addresses the important problem of adaptation of an embedded system's behaviour to make it more appropriate to the environment in which it is embedded. This process, in its broadest sense, is referred to as learning, and for the author this also includes such simple forms of learning as measuring the width of a hall to allow a measuring robot to perform better. The philosophical debate about what does and does not constitute machine learning is left alone; the interested reader is referred to an earlier publication by the same author. |
Emergence of simple-cell receptive field properties by learning a sparse code for natural images | The receptive fields of simple cells in mammalian primary visual cortex can be characterized as being spatially localized, oriented [1-4] and bandpass (selective to structure at different spatial scales), comparable to the basis functions of wavelet transforms [5,6]. One approach to understanding such response properties of visual neurons has been to consider their relationship to the statistical structure of natural images in terms of efficient coding [7-12]. Along these lines, a number of studies have attempted to train unsupervised learning algorithms on natural images in the hope of developing receptive fields with similar properties [13-18], but none has succeeded in producing a full set that spans the image space and contains all three of the above properties. Here we investigate the proposal [8,12] that a coding strategy that maximizes sparseness is sufficient to account for these properties. We show that a learning algorithm that attempts to find sparse linear codes for natural scenes will develop a complete family of localized, oriented, bandpass receptive fields, similar to those found in the primary visual cortex. The resulting sparse image code provides a more efficient representation for later stages of processing because it possesses a higher degree of statistical independence among its outputs. |
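For context, sparse coding of this kind is typically framed as minimizing a cost that balances reconstruction error against a sparseness penalty on the coefficients (a generic formulation; the paper's exact functional may differ in its details):

$$
E(\{a_i\}, \{\phi_i\}) \;=\; \sum_{x}\Bigl[\, I(x) - \sum_i a_i\,\phi_i(x) \Bigr]^2 \;+\; \lambda \sum_i S\!\left(\tfrac{a_i}{\sigma}\right),
$$

where I is an image patch, the φ_i are the learned basis functions, the a_i are the coefficients inferred per patch, S is a sparseness-inducing cost such as log(1 + u²), and λ trades reconstruction fidelity against sparseness; learning alternates between inferring the a_i for each patch and updating the φ_i by gradient descent.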
Retrieving Similar Styles to Parse Clothing | Clothing recognition is a societally and commercially important yet extremely challenging problem due to large variations in clothing appearance, layering, style, and body shape and pose. In this paper, we tackle the clothing parsing problem using a retrieval-based approach. For a query image, we find similar styles from a large database of tagged fashion images and use these examples to recognize clothing items in the query. Our approach combines parsing from: pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse-masks (Paper Doll item transfer) from retrieved examples. We evaluate our approach extensively and show significant improvements over previous state-of-the-art for both localization (clothing parsing given weak supervision in the form of tags) and detection (general clothing parsing). Our experimental results also indicate that the general pose estimation problem can benefit from clothing parsing. |
CrossFire: An Analysis of Firefox Extension-Reuse Vulnerabilities | Extension architectures of popular web browsers have been carefully studied by the research community; however, the security impact of interactions between different extensions installed on a given system has received comparatively little attention. In this paper, we consider the impact of the lack of isolation between traditional Firefox browser extensions, and identify a novel extension-reuse vulnerability that allows adversaries to launch stealthy attacks against users. This attack leverages capability leaks from legitimate extensions to avoid the inclusion of security-sensitive API calls within the malicious extension itself, rendering extensions that use this technique difficult to detect through the manual vetting process that underpins the security of the Firefox extension ecosystem. We then present CROSSFIRE, a lightweight static analyzer to detect instances of extension-reuse vulnerabilities. CROSSFIRE uses a multi-stage static analysis to efficiently identify potential capability leaks in vulnerable, benign extensions. If a suspected vulnerability is identified, CROSSFIRE then produces a proof-of-concept exploit instance – or, alternatively, an exploit template that can be adapted to rapidly craft a working attack that validates the vulnerability. To ascertain the prevalence of extension-reuse vulnerabilities, we performed a detailed analysis of the top 10 Firefox extensions, and ran further experiments on a random sample drawn from the top 2,000. The results indicate that popular extensions, downloaded by millions of users, contain numerous exploitable extension-reuse vulnerabilities. A case study also provides anecdotal evidence that malicious extensions exploiting extension-reuse vulnerabilities are indeed effective at cloaking themselves from extension vetters. |
Microstrip-Ridge Gap Waveguide–Study of Losses, Bends, and Transition to WR-15 | This paper presents the design of microstrip-ridge gap waveguide using via-holes in printed circuit boards, a solution for high-frequency circuits. The study includes how to define the numerical ports, pin sensitivity, losses, and also a comparison with performance of normal microstrip lines and inverted microstrip lines. The results are produced using commercially available electromagnetic simulators. A WR-15 to microstrip-ridge gap waveguide transition was also designed. The results are verified with measurements on microstrip-ridge gap waveguides with WR15 transitions at both ends. |
Influenza and Pneumonia Mortality in 66 Large Cities in the United States in Years Surrounding the 1918 Pandemic | The 1918 influenza pandemic was a major epidemiological event of the twentieth century resulting in at least twenty million deaths worldwide; however, despite its historical, epidemiological, and biological relevance, it remains poorly understood. Here we examine the relationship between annual pneumonia and influenza death rates in the pre-pandemic (1910-17) and pandemic (1918-20) periods and the scaling of mortality with latitude, longitude and population size, using data from 66 large cities of the United States. The mean pre-pandemic pneumonia death rates were highly associated with pneumonia death rates during the pandemic period (Spearman ρ = 0.64-0.72; P<0.001). By contrast, there was a weak correlation between pre-pandemic and pandemic influenza mortality rates. Pneumonia mortality rates partially explained influenza mortality rates in 1918 (ρ = 0.34, P = 0.005) but not during any other year. Pneumonia death counts followed a linear relationship with population size in all study years, suggesting that pneumonia death rates were homogeneous across the range of population sizes studied. By contrast, influenza death counts followed a power law relationship with a scaling exponent of ∼0.81 (95%CI: 0.71, 0.91) in 1918, suggesting that smaller cities experienced worse outcomes during the pandemic. A linear relationship was observed for all other years. Our study suggests that mortality associated with the 1918-20 influenza pandemic was in part predetermined by pre-pandemic pneumonia death rates in 66 large US cities, perhaps through the impact of the physical and social structure of each city. Smaller cities suffered a disproportionately higher per capita influenza mortality burden than larger ones in 1918, while city size did not affect pneumonia mortality rates in the pre-pandemic and pandemic periods. |
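The scaling claim can be made explicit: if influenza death counts D scale with city population P as a power law, then per-capita mortality falls with city size whenever the exponent is below one,

$$
D \;\approx\; c\,P^{\beta}, \quad \beta \approx 0.81 \;\Longrightarrow\; \frac{D}{P} \;\propto\; P^{\beta - 1} \;=\; P^{-0.19},
$$

so in 1918 smaller cities carried a higher per-capita influenza burden, whereas the linear (β ≈ 1) relationship for pneumonia deaths implies roughly constant per-capita pneumonia rates across city sizes.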
Automation, per se, is not job elimination: How artificial intelligence forwards cooperative human-machine coexistence | Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions. |
An Evaluation of Space Time Cube Representation of Spatiotemporal Patterns | Space time cube representation is an information visualization technique where spatiotemporal data points are mapped into a cube. Information visualization researchers have previously argued that space time cube representation is beneficial in revealing complex spatiotemporal patterns in a data set to users. The argument is based on the fact that both time and spatial information are displayed simultaneously to users, an effect difficult to achieve in other representations. However, to our knowledge the actual usefulness of space time cube representation in conveying complex spatiotemporal patterns to users has not been empirically validated. To fill this gap, we report on a between-subjects experiment comparing novice users' error rates and response times when answering a set of questions using either space time cube or a baseline 2D representation. For some simple questions, the error rates were lower when using the baseline representation. For complex questions where the participants needed an overall understanding of the spatiotemporal structure of the data set, the space time cube representation resulted in on average twice as fast response times with no difference in error rates compared to the baseline. These results provide an empirical foundation for the hypothesis that space time cube representation benefits users analyzing complex spatiotemporal patterns. |
Acceptance model of a Hospital Information System | PURPOSE
The purpose of this study is to develop a model of Hospital Information System (HIS) user acceptance focusing on human, technological, and organizational characteristics for supporting government eHealth programs. This model was then tested to see which hospital type in Indonesia would benefit from the model to resolve problems related to HIS user acceptance.
METHOD
This study used qualitative and quantitative approaches with case studies at four privately owned hospitals and three government-owned hospitals, which are general hospitals in Indonesia. The respondents involved in this study are low-level and mid-level hospital management officers, doctors, nurses, and administrative staff who work at medical record, inpatient, outpatient, emergency, pharmacy, and information technology units. Data was processed using Structural Equation Modeling (SEM) and AMOS 21.0.
RESULTS
The study concludes that non-technological factors, such as human characteristics (i.e. compatibility, information security expectancy, and self-efficacy) and organizational characteristics (i.e. management support, facilitating conditions, and user involvement), which have a significance level of p<0.05, significantly influenced users' opinions of both the ease of use and the benefits of the HIS. This study found that different factors may affect the acceptance of each user in each type of hospital regarding the use of HIS. Finally, this model is best suited for government-owned hospitals.
CONCLUSIONS
Based on the results of this study, hospital management and IT developers should have more understanding on the non-technological factors to better plan for HIS implementation. Support from management is critical to the sustainability of HIS implementation to ensure HIS is easy to use and provides benefits to the users as well as hospitals. Finally, this study could assist hospital management and IT developers, as well as researchers, to understand the obstacles faced by hospitals in implementing HIS. |
DESIGN OF A DISTRIBUTED SWITCHED RELUCTANCE MOTOR FOR A TIP-DRIVEN FAN | This paper presents the design of a distributed switched reluctance motor for an integrated motor-fan system. Unlike a conventional compact motor structure, the rotor is distributed into the ends of the impeller blades. This distributed motor structure leaves more space for airflow to pass through, so that the system efficiency is greatly improved. Simultaneously, the distributed structure gives the motor a higher torque, better efficiency and better heat dissipation. The paper first gives an initial design of a switched reluctance motor based on system structure constraints and output equations; it then predicts the machine performance and determines the phase current and winding turns based on equivalent magnetic circuit analysis; finally, it validates and refines the analytical design with 3D transient finite element analysis. It is found that the analytical performance prediction agrees well with the finite element analysis results, except for a weakness in core loss estimation. The results of the design show that the distributed switched reluctance motor can produce a large torque with high efficiency at the specified speeds. |
Convergent multi-view geometric error correction with pseudo-inverse projection homography | The paper presents a geometric error correction method for convergent multi-view images, reducing the vertical parallax and non-uniform horizontal disparities over the set of input camera views. The global optimization method using inter-camera geometric relations on an arc is described. Camera orientations and optical centers are adjusted towards a uniform, circular distribution closest to the actual camera setup. Experimental validation shows up to 20% improvements over state-of-the-art w.r.t. reducing geometric errors and reaching better scalability/robustness to large rotation angles between adjacent camera views. The corrected images can be readily used for visualization on autostereoscopic 3D displays, and/or for further processing in an advanced 3D-TV imaging pipeline. |
The combination of context information to enhance simple question answering | With the rapid development of knowledge bases, question answering over knowledge bases has become an active research topic. In this paper, we focus on answering single-relation factoid questions over a knowledge base. We build a question answering system and study the effect of context information, such as an entity's notable type and out-degree, on fact selection. Experimental results show that context information can improve the results of simple question answering.
The bilingual individual | This article presents a general overview of the adult bilingual individual. First, the bilingual is defined and discussed in terms of the complementarity principle, i.e. the fact that bilinguals acquire and use their languages for different purposes, in different domains of life, with different people. Next, the various language modes bilinguals find themselves in during their everyday interactions are examined. These range from the monolingual mode, when they are communicating with monolinguals (and they have to deactivate all but one language), to the bilingual mode, when they are interacting with other bilinguals who share their two (or more) languages and with whom they can mix languages if they so wish (i.e. code-switch and borrow). The article ends with a rapid survey of the psycholinguistics of bilingualism and, in particular, of how bilinguals access their lexicon when perceiving mixed speech. The regular bilingual is compared to the interpreter bilingual whenever possible.
That's So Annoying!!!: A Lexical and Frame-Semantic Embedding Based Data Augmentation Approach to Automatic Categorization of Annoying Behaviors using #petpeeve Tweets | We propose a novel data augmentation approach to enhance computational behavioral analysis using social media text. In particular, we collect a Twitter corpus of descriptions of annoying behaviors using the #petpeeve hashtags. In the qualitative analysis, we study the language use in these tweets, with a special focus on the fine-grained categories and the geographic variation of the language. In the quantitative analysis, we show that lexical and syntactic features are useful for automatic categorization of annoying behaviors, and that frame-semantic features further boost the performance; that leveraging large lexical embeddings to create additional training instances significantly improves the lexical model; and that incorporating frame-semantic embeddings achieves the best overall performance.
The Quantized kd-Tree: Efficient Ray Tracing of Compressed Point Clouds | Both ray tracing and point-based representations provide means to efficiently display very complex 3D models. Computational efficiency has been the main focus of previous work on ray tracing point-sampled surfaces. For very complex models, efficient storage in the form of compression becomes necessary in order to avoid costly disk access. However, as ray tracing requires neighborhood queries, existing compression schemes cannot be applied because of their sequential nature. This paper introduces a novel acceleration structure called the quantized kd-tree, which offers both efficient traversal and storage. The gist of our new representation lies in quantizing the kd-tree splitting plane coordinates. We show that the quantized kd-tree reduces the memory footprint by up to 18 times without compromising performance. Moreover, the technique can also be employed to provide LOD (level-of-detail) to reduce aliasing problems, with little additional storage cost.
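A minimal sketch of the idea behind the quantized kd-tree: each splitting-plane coordinate is stored as a small integer code relative to its node's bounding interval instead of a full-precision float. The bit width, the interval-relative scheme, and the function names below are illustrative assumptions rather than the paper's exact memory layout.

```python
import numpy as np

def quantize_split(split_value, lo, hi, bits=8):
    """Quantize a kd-tree splitting-plane coordinate to an integer code
    relative to the node's bounding interval [lo, hi] (hypothetical scheme)."""
    levels = (1 << bits) - 1
    t = (split_value - lo) / (hi - lo)      # normalize to [0, 1]
    return int(round(t * levels))           # only this small integer is stored

def dequantize_split(code, lo, hi, bits=8):
    """Recover an approximate splitting-plane coordinate during traversal."""
    levels = (1 << bits) - 1
    return lo + (code / levels) * (hi - lo)

# Example: an 8-bit code replaces a 32-bit float per split plane.
lo, hi = 0.0, 10.0
code = quantize_split(3.37, lo, hi)
print(code, dequantize_split(code, lo, hi))   # e.g. 86, ~3.37
```

At 8 bits per split plane this trades a small positional error, bounded by the interval width divided by 255, for a large reduction in node size, which is the storage/traversal trade-off the abstract describes.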
Discontinuation of nevirapine because of hypersensitivity reactions in patients with prior treatment experience, compared with treatment-naive patients: the ATHENA cohort study. | BACKGROUND
Recommendations that nevirapine (NVP) should be avoided in female individuals with CD4 cell counts >250 cells/microL and in male individuals with CD4 cell counts >400 cells/microL are based on findings in treatment-naive patients. It is unclear whether these guidelines also apply to treatment-experienced patients switching to NVP-based combination therapy.
METHODS
Patients in the ATHENA cohort study who had used NVP-based combination therapy were included. We identified patients who discontinued NVP-based combination therapy because of hypersensitivity reactions (HSRs; rash and/or hepatotoxicity) within 18 weeks after starting such therapy. We grouped patients according to their CD4 cell count at the start of NVP-based combination therapy (current CD4 cell count) as having a high CD4 cell count (for female patients, >250 cells/microL; for male patients, >400 cells/microL) or a low CD4 cell count. Treatment-experienced patients were further subdivided according to the last available CD4 cell count before first receipt of antiretroviral therapy (ART; pre-ART CD4 cell count) using the same criteria. Risk factors for HSR were assessed using multivariate logistic regression.
RESULTS
Of 3752 patients receiving NVP-based combination therapy, 231 patients (6.2%) discontinued NVP therapy because of HSRs. Independent risk factors included female sex and Asian ethnicity. Having an undetectable viral load (VL) at the start of NVP therapy was associated with reduced risk of developing an HSR (adjusted odds ratio [OR], 0.52; 95% confidence interval [CI], 0.38-0.71). Pretreated patients with low pre-ART and high current CD4 cell counts and a detectable VL when switching to NVP-based combination therapy had a significantly higher risk of developing an HSR, compared with treatment-naive patients who started NVP therapy with low CD4 cell counts (adjusted OR, 1.87; 95% CI, 1.11-3.12); pretreated patients with low pre-ART CD4 cell counts who switched to NVP therapy with a high current CD4 cell count and an undetectable VL did not have an increased risk of developing an HSR (adjusted OR, 1.03; 95% CI, 0.66-1.61).
CONCLUSIONS
Treatment-experienced patients who start NVP-based combination therapy with low pre-ART and high current CD4 cell counts and an undetectable VL have a similar likelihood for discontinuing NVP therapy because of HSRs, compared with treatment-naive patients with low CD4 cell counts. This suggests that NVP-based combination therapy may be safely initiated in such patients. However, in similar patients with a detectable VL, it is prudent to continue to adhere to current CD4 cell count thresholds. |
Instantaneous ego-motion estimation using multiple Doppler radars | The estimation of the ego-vehicle's motion is a key capability for advanced driving assistant systems and mobile robot localization. The following paper presents a robust algorithm using radar sensors to instantly determine the complete 2D motion state of the ego-vehicle (longitudinal, lateral velocity and yaw rate). It evaluates the relative motion between at least two Doppler radar sensors and their received stationary reflections (targets). Based on the distribution of their radial velocities across the azimuth angle, non-stationary targets and clutter are excluded. The ego-motion and its corresponding covariance matrix are estimated. The algorithm does not require any preprocessing steps such as clustering or clutter suppression and does not contain any model assumptions. The sensors can be mounted at any position on the vehicle. A common field of view is not required, avoiding target association in space. As an additional benefit, all targets are instantly labeled as stationary or non-stationary. |
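A rough sketch of the kind of estimation the abstract describes: for stationary targets, each Doppler measurement constrains the sensor's velocity through the ray direction, and stacking measurements from at least two radars gives a linear least-squares problem in (vx, vy, yaw rate). The mounting-geometry convention, variable names, and the plain least-squares solve are assumptions; the paper additionally rejects non-stationary targets and clutter and estimates the covariance of the result.

```python
import numpy as np

def estimate_ego_motion(detections):
    """Least-squares ego-motion (vx, vy, yaw rate) from Doppler detections of
    stationary targets.  Each detection is (azimuth, radial_velocity,
    sensor_x, sensor_y, sensor_yaw), with sensor pose given in the vehicle frame."""
    rows, rhs = [], []
    for az, vr, sx, sy, syaw in detections:
        a = az + syaw                        # ray direction in the vehicle frame
        # sensor velocity is [vx - w*sy, vy + w*sx]; a stationary target gives
        # vr = -(cos a, sin a) . sensor_velocity
        rows.append([-np.cos(a), -np.sin(a), np.cos(a) * sy - np.sin(a) * sx])
        rhs.append(vr)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return tuple(sol)                        # (vx, vy, yaw_rate)

# Synthetic check: two radars at (3.5, +/-1 m), true motion vx=10 m/s, yaw rate 0.1 rad/s.
true_vx, true_vy, true_w = 10.0, 0.0, 0.1
dets = []
for sx, sy in [(3.5, 1.0), (3.5, -1.0)]:
    for az in np.linspace(-1.0, 1.0, 5):
        vs = np.array([true_vx - true_w * sy, true_vy + true_w * sx])
        vr = -np.dot([np.cos(az), np.sin(az)], vs)
        dets.append((az, vr, sx, sy, 0.0))
print(estimate_ego_motion(dets))             # approximately (10.0, 0.0, 0.1)
```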
Internet of things: Survey on security | The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or "things". While its purpose is to provide effective and efficient solutions, security of the devices and the network is a challenging issue. The number of connected devices, along with the ad-hoc nature of the system, further exacerbates the situation. Therefore, security and privacy have emerged as significant challenges for the IoT. In this paper, we aim to provide a thorough survey of the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of the technologies and architecture used. This work also focuses on intrinsic IoT vulnerabilities as well as the security challenges of the various layers, based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published on the IoT to date and relates them to the security landscape of the field and its projection into the future.
Gamification of ERP Systems – Exploring Gamification Effects on User Acceptance Constructs | The adoption of game mechanics into serious contexts such as business applications (gamification) is a promising trend to improve the user’s participation and engagement with the software in question and on the job. However, this topic is mainly driven by practitioners. A theoretical model for gamification with appropriate empirical validation is missing. In this paper, we introduce a prototype for gamification using SAP ERP as example. Moreover, we have evaluated the concept within a comprehensive user study with 112 participants based on the technology acceptance model (TAM) using partial least squares (PLS) for analysis. Finally, we show that this gamification approach yields significant improvements in latent variables such as enjoyment, flow or perceived ease of use. Moreover, we outline further research requirements in the domain of gamification. |
The effects of cocaine self-administration on dendritic spine density in the rat hippocampus are dependent on genetic background. | Chronic exposure to cocaine induces modifications to neurons in the brain regions involved in addiction. Hence, we evaluated cocaine-induced changes in the hippocampal CA1 field in Fischer 344 (F344) and Lewis (LEW) rats, 2 strains that have been widely used to study genetic predisposition to drug addiction, by combining intracellular Lucifer yellow injection with confocal microscopy reconstruction of labeled neurons. Specifically, we examined the effects of cocaine self-administration on the structure, size, and branching complexity of the apical dendrites of CA1 pyramidal neurons. In addition, we quantified spine density in the collaterals of the apical dendritic arbors of these neurons. We found differences between these strains in several morphological parameters. For example, CA1 apical dendrites were more branched and complex in LEW than in F344 rats, while the spine density in the collateral dendrites of the apical dendritic arbors was greater in F344 rats. Interestingly, cocaine self-administration in LEW rats augmented the spine density, an effect that was not observed in the F344 strain. These results reveal significant structural differences in CA1 pyramidal cells between these strains and indicate that cocaine self-administration has a distinct effect on neuron morphology in the hippocampus of rats with different genetic backgrounds. |
Failure as a Service (FaaS): A Cloud Service for Large-Scale, Online Failure Drills | Cloud computing is pervasive, but cloud service outages still take place. One might say that the computing forecast for tomorrow is "cloudy with a chance of failure." One main reason why major outages still occur is that there are many unknown large-scale failure scenarios in which recovery might fail. We propose a new type of cloud service, Failure as a Service (FaaS), which allows cloud services to routinely perform large-scale failure drills in real deployments.
Nice Thinking! An Educational Intervention That Teaches Children to Think Gratefully | Gratitude is essential to social life and well-being. Although research with youth populations has gained momentum recently, only two gratitude interventions have been conducted in youth, targeting mostly adolescents. In the current research, we tested a new intervention for promoting gratitude among the youngest children targeted to date. Elementary school classrooms (of 8- to 11-year-olds) were randomly assigned either to an intervention that educated children about the appraisal of benefit exchanges or to a control condition. We found that children's awareness of the social-cognitive appraisals of beneficial social exchanges (i.e., grateful thinking) can be strengthened and that this, in turn, makes children more grateful and benefits their well-being in terms of increased general positive affect. A daily intervention produced evidence that this new approach induced gratitude immediately (2 days later) and led children to express gratitude more behaviorally (i.e., they wrote 80% more thank-you cards to their Parent–Teacher Association). A weekly intervention induced gratitude up to 5 months later and additionally showed an effect on well-being (i.e., positive affect). Evidence thus supported the effectiveness of this intervention. Results are discussed in terms of implications for positive youth development and academic
A double-blind, placebo-controlled trial of i.v. dolasetron mesilate in the prevention of radiotherapy-induced nausea and vomiting in cancer patients | The aim of this work was to measure the safety and efficacy of single i.v. doses of dolasetron mesilate for the control of emesis caused by single high-dose (at least 6 Gy) radiotherapy to the upper abdomen. The double-blind, placebo-controlled, multicenter study stratified patients on the basis of being naive or nonnaive to radiotherapy. Patients with or without a history of previous chemotherapy were enrolled. Patients were randomized to receive placebo or 0.3, 0.6, or 1.2 mg/kg dolasetron mesilate 30 min before radiotherapy, then monitored for 24 h. Antiemetic efficacy was assessed from the time to the first emetic episode or rescue, from whether there was a complete response (0 emetic episodes/no rescue medication) or a complete-plus-major response (0-2 emetic episodes/no rescue medication), from the severity of nausea (rated by patients and the investigator), and from the investigator's assessment of efficacy. Fifty patients completed the study (owing to changing medical practice, enrollment objectives were not met; consequently, no significant linear dose trend was expected). Pooled dolasetron was superior to the placebo in its effect on the time to first emesis or rescue in radiotherapy-nonnaive patients (P=0.015). Dolasetron was statistically superior to the placebo in the overall population on the basis of a complete plus major response: 54%, 100%, 93%, and 83% for the placebo and 0.3-, 0.6-, and 1.2-mg/kg doses respectively (P=0.002). The low response in the highest dose group may be due to an imbalance in the number of chemotherapy-nonnaive patients in that group. Dolasetron was superior to the placebo on the basis of nausea assessed by the investigator (P=0.024) and administration of rescue medication (P=0.006). Complete response at the 0.3-mg/kg dose was superior to results with the placebo (P=0.050). Treatment-related adverse events were rare, mild to moderate in intensity, and evenly distributed across the four groups. Overall, dolasetron mesilate was effective and well-tolerated in the control of single, high-dose radiotherapy-induced emesis.
SumTime-Mousam: Configurable marine weather forecast generator | Numerical weather prediction (NWP) models produce time series data of basic weather parameters which human forecasters use as guidance while writing textual forecasts. Our studies of humans writing textual weather forecasts led us to build SUMTIME-MOUSAM, a text generator that produces textual marine weather forecasts for offshore oilrig applications. SUMTIME-MOUSAM separates control and processing. As a result, forecasters can tailor the output text using control data derived from end-user profiles. In this paper we describe the design and the implementation details of SUMTIME-MOUSAM, which is currently being used by our industrial collaborator. Output from our system is post-edited by forecasters before communicating it to the end-users. We also briefly describe an evaluation of our system using the post-edit data.
A Generic Approach for Escaping Saddle points | A central challenge to using first-order methods for optimizing nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessian-based computations while at the same time provably converging to second-order critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive to the state-of-the-art. Empirical results suggest that our strategy also enjoys a good practical performance.
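A toy sketch of the alternation the abstract describes: cheap gradient steps are used everywhere, and a Hessian-based negative-curvature step is taken only when the gradient is small, i.e. near a possible saddle. Computing a full eigendecomposition, the fixed step size, and the tolerances are simplifications for illustration; the paper's subroutines avoid explicit Hessian computations and come with convergence guarantees.

```python
import numpy as np

def alternating_saddle_escape(f, grad, hess, x0, eps_g=1e-4, eps_h=1e-4,
                              lr=0.1, max_iters=10_000):
    """Alternate a first-order subroutine (gradient descent) with a
    second-order subroutine (negative-curvature step) used only near
    critical points.  Constants are illustrative assumptions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) > eps_g:
            x = x - lr * g                      # first-order subroutine
            continue
        w, V = np.linalg.eigh(hess(x))          # second-order subroutine
        if w[0] >= -eps_h:
            return x                            # approx. second-order stationary point
        d = V[:, 0]                             # most negative curvature direction
        d = d if f(x + lr * d) < f(x - lr * d) else -d
        x = x + lr * d                          # escape the saddle
    return x

# Toy example: f(x, y) = x^2 + y^4/4 - y^2/2 has a saddle at the origin
# and minima at (0, +/-1); starting at the saddle, the method escapes it.
f = lambda z: z[0]**2 + z[1]**4 / 4 - z[1]**2 / 2
grad = lambda z: np.array([2 * z[0], z[1]**3 - z[1]])
hess = lambda z: np.array([[2.0, 0.0], [0.0, 3 * z[1]**2 - 1.0]])
print(alternating_saddle_escape(f, grad, hess, np.array([0.0, 0.0])))  # ~ (0, +/-1)
```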
Multiview Deep Learning for Land-Use Classification | A multiscale input strategy for multiview deep learning is proposed for supervised multispectral land-use classification, and it is validated on a well-known data set. The hypothesis that simultaneous multiscale views can improve composition-based inference of classes containing size-varying objects compared to single-scale multiview is investigated. The end-to-end learning system learns a hierarchical feature representation with the aid of convolutional layers to shift the burden of feature determination from hand-engineering to a deep convolutional neural network (DCNN). This allows the classifier to obtain problem-specific features that are optimal for minimizing the multinomial logistic regression objective, as opposed to user-defined features which trade optimality for generality. A heuristic approach to the optimization of the DCNN hyperparameters is used, based on empirical performance evidence. It is shown that a single DCNN can be trained simultaneously with multiscale views to improve prediction accuracy over multiple single-scale views. Competitive performance is achieved for the UC Merced data set, where the 93.48% accuracy of multiview deep learning outperforms the 85.37% accuracy of SIFT-based methods and the 90.26% accuracy of unsupervised feature learning. |
Precuneus shares intrinsic functional architecture in humans and monkeys. | Evidence from macaque monkey tracing studies suggests connectivity-based subdivisions within the precuneus, offering predictions for similar subdivisions in the human. Here we present functional connectivity analyses of this region using resting-state functional MRI data collected from both humans and macaque monkeys. Three distinct patterns of functional connectivity were demonstrated within the precuneus of both species, with each subdivision suggesting a discrete functional role: (i) the anterior precuneus, functionally connected with the superior parietal cortex, paracentral lobule, and motor cortex, suggesting a sensorimotor region; (ii) the central precuneus, functionally connected to the dorsolateral prefrontal, dorsomedial prefrontal, and multimodal lateral inferior parietal cortex, suggesting a cognitive/associative region; and (iii) the posterior precuneus, displaying functional connectivity with adjacent visual cortical regions. These functional connectivity patterns were differentiated from the more ventral networks associated with the posterior cingulate, which connected with limbic structures such as the medial temporal cortex, dorsal and ventromedial prefrontal regions, posterior lateral inferior parietal regions, and the lateral temporal cortex. Our findings are consistent with predictions from anatomical tracer studies in the monkey, and provide support that resting-state functional connectivity (RSFC) may in part reflect underlying anatomy. These subdivisions within the precuneus suggest that neuroimaging studies will benefit from treating this region as anatomically (and thus functionally) heterogeneous. Furthermore, the consistency between functional connectivity networks in monkeys and humans provides support for RSFC as a viable tool for addressing cross-species comparisons of functional neuroanatomy. |
Generalized Grounding Graphs: A Probabilistic Framework for Understanding Grounded Commands | Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot's physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a micro-air vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper.
Additively Manufactured Scaffolds for Bone Tissue Engineering and the Prediction of their Mechanical Behavior: A Review | Additive manufacturing (AM), nowadays commonly known as 3D printing, is a revolutionary materials processing technology, particularly suitable for the production of low-volume parts with high shape complexities and often with multiple functions. As such, it holds great promise for the fabrication of patient-specific implants. In recent years, remarkable progress has been made in implementing AM in the bio-fabrication field. This paper presents an overview on the state-of-the-art AM technology for bone tissue engineering (BTE) scaffolds, with a particular focus on the AM scaffolds made of metallic biomaterials. It starts with a brief description of architecture design strategies to meet the biological and mechanical property requirements of scaffolds. Then, it summarizes the working principles, advantages and limitations of each of AM methods suitable for creating porous structures and manufacturing scaffolds from powdered materials. It elaborates on the finite-element (FE) analysis applied to predict the mechanical behavior of AM scaffolds, as well as the effect of the architectural design of porous structure on its mechanical properties. The review ends up with the authors' view on the current challenges and further research directions. |
VTBPEKE: Verifier-based Two-Basis Password Exponential Key Exchange | PAKE protocols, for Password-Authenticated Key Exchange, enable two parties to establish a shared cryptographically strong key over an insecure network using a short common secret as authentication means. After the seminal work by Bellovin and Merritt, with the famous EKE, for Encrypted Key Exchange, various settings and security notions have been defined, and many protocols have been proposed.
In this paper, we revisit the promising SPEKE, for Simple Password Exponential Key Exchange, proposed by Jablon. The only known security analysis works in the random oracle model under the CDH assumption, but only in the multiplicative groups of finite fields (subgroups of Zp*), which means large group elements and thus heavy communication and computation costs. Our new instantiation (TBPEKE, for Two-Basis Password Exponential Key Exchange) applies to any group, and our security analysis requires a DLIN-like assumption to hold. In particular, one can use elliptic curves, which leads to better efficiency at both the communication and computation levels. We additionally consider server corruptions, which with a symmetric PAKE immediately leak all the passwords to the adversary. We thus study an asymmetric variant, also known as VPAKE, for Verifier-based Password Authenticated Key Exchange. We then propose a verifier-based variant of TBPEKE, the so-called VTBPEKE, which is also quite efficient and resistant to server compromise.
Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary | We describe a model of object recognition as machine translation. In this model, recognition is a process of annotating image regions with words. First, images are segmented into regions, which are classified into region types using a variety of features. A mapping between region types and keywords supplied with the images is then learned, using a method based around EM. This process is analogous to learning a lexicon from an aligned bitext. For the implementation we describe, these words are nouns taken from a large vocabulary. On a large test set, the method can predict numerous words with high accuracy. Simple methods identify words that cannot be predicted well. We show how to cluster words that individually are difficult to predict into clusters that can be predicted well; for example, we cannot predict the distinction between train and locomotive using the current set of features, but we can predict the underlying concept. The method is trained on a substantial collection of images. Extensive experimental results illustrate the strengths and weaknesses of the approach.
Implementation of virtual fitting room using image processing | There has been a great increase in interest in online shopping. Purchasing products such as apparel requires a sense of how the clothes would fit the person, which is the main reason fewer apparel items are bought online. Hence, a virtual dressing room that lets people see how clothes would fit them would be a great asset for online sellers and would give customers a wider choice. For online marketers, this would be a powerful tool for expanding their market.
The Demand for Military Spending in Developing Countries | Numerous studies have estimated demand for military expenditure in terms of economic, political and strategic variables. Ten years after the end of the Cold War, this paper attempts to ascertain if the new strategic environment has changed the pattern of determinants, by estimating cross-country demand functions for developing countries for periods during and just after the Cold War. The results suggest that, for both periods, military burden depended on neighbours' military spending and internal and external conflict. Democracy and population both relate negatively to military burden. There is little evidence of a change in the underlying relationship between the periods. |
Constitutive model for quasi-static deformation of metallic sandwich cores | All-metal sandwich construction holds promise for significant improvements in stiffness, strength and blast resistance for built-up plate structures. Analysis of the performance of sandwich plates under various loads, static and dynamic, requires modelling of face sheets and core with some fidelity. While it is possible to model full geometric details of the core for a few selected problems, this is unnecessary and unrealistic for larger complex structures under general loadings. In this paper, a continuum constitutive model is proposed as an alternative means of modelling the core. The constitutive model falls within the framework of a compressible rate-independent, anisotropic elastic–plastic solid. The general form of the model is presented, along with algorithmic aspects of its implementation in a finite element code, and selected problems are solved which benchmark the code against existing codes for limiting cases and which illustrate features specific to compressible cores. Three core geometries (pyramidal truss, folded plate, and square honeycomb) are considered in some detail. The validity of the approach is established by comparing numerical finite element simulations using the model with those obtained by a full three-dimensional meshing of the core geometry for each of the three types of cores for a clamped sandwich plate subject to uniform pressure load. Limitations of the model are also discussed.
Frangipani: A Scalable Distributed File System | The ideal distributed file system would provide all its users with coherent, shared access to the same set of files, yet would be arbitrarily scalable to provide more storage space and higher performance to a growing user community. It would be highly available in spite of component failures. It would require minimal human administration, and administration would not become more complex as more components were added. Frangipani is a new file system that approximates this ideal, yet was relatively easy to build because of its two-layer structure. The lower layer is Petal (described in an earlier paper), a distributed storage service that provides incrementally scalable, highly available, automatically managed virtual disks. In the upper layer, multiple machines run the same Frangipani file system code on top of a shared Petal virtual disk, using a distributed lock service to ensure coherence. Frangipani is meant to run in a cluster of machines that are under a common administration and can communicate securely. Thus the machines trust one another and the shared virtual disk approach is practical. Of course, a Frangipani file system can be exported to untrusted machines using ordinary network file access protocols. We have implemented Frangipani on a collection of Alphas running DIGITAL Unix 4.0. Initial measurements indicate that Frangipani has excellent single-server performance and scales well as servers are added.
Effects of an injectable platelet-rich fibrin on osteoblast behavior and bone tissue formation in comparison to platelet-rich plasma. | Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing. |
Constitutive expression of EIL-like transcription factor partially restores ripening in the ethylene-insensitive Nr tomato mutant. | Climacteric fruit ripening is regulated by the phytohormone ethylene. ETHYLENE-INSENSITIVE3 (EIN3) is a transcription factor that functions downstream from the ethylene receptors in the Arabidopsis ethylene signal transduction pathway. Three homologues of the Arabidopsis EIN3 gene have been identified in tomato, Lycopersicon esculentum, EIN3-like or LeEIL, LeEIL1, LeEIL2, and LeEIL3. These transcription factors have been proposed to be functionally redundant positive regulators of multiple ethylene responses. In order to test the role of such factors in the ethylene signal transduction pathway during ripening, EIL1 fused to green fluorescent protein (GFP) has been over-expressed in the ethylene-insensitive non-ripening Nr mutant of tomato. Increased levels of LeEIL1 compensated for the normally reduced levels of LeEIL1 in the Nr mutant, and transgenic Nr plants that exhibited high-level constitutive expression of LeEIL1GFP phenotypically resembled wild-type plants, the fruit ripened and the leaves exhibited epinasty, unlike Nr plants. The EIL1GFP fusion protein was located in the cell nuclei of ripe tomato fruit. The mRNA profile of these plants showed that the expression of certain ethylene-dependent ripening genes was up-regulated, including polygalacturonase and TOMLOX B. However, not all ripening genes and ethylene responses, such as seedling triple response, were restored. These results demonstrate that expressing candidate genes in the Nr ethylene-insensitive background is a valuable general approach for testing the role of putative downstream components in the ethylene-signalling pathway. |
Classification of Point Cloud Scenes with Multiscale Voxel Deep Network | In this article we describe a new convolutional neural network (CNN) to classify 3D point clouds of urban or indoor scenes. Solutions are given to the problems encountered working on scene point clouds, and a network is described that allows for point classification using only the position of points in a multi-scale neighborhood. On the reduced-8 Semantic3D benchmark [Hackel et al., 2017], this network, ranked second, beats the state of the art of point classification methods (those not using a regularization step). Figure 1 (caption): example of a classified point cloud on the Semantic3D test set, with colour-coded classes (man-made terrain, natural terrain, high and low vegetation, buildings, hard scape, scanning artefacts, cars).
Is it really about me?: message content in social awareness streams | In this work we examine the characteristics of social activity and patterns of communication on Twitter, a prominent example of the emerging class of communication systems we call "social awareness streams." We use system data and message content from over 350 Twitter users, applying human coding and quantitative analysis to provide a deeper understanding of the activity of individuals on the Twitter network. In particular, we develop a content-based categorization of the type of messages posted by Twitter users, based on which we examine users' activity. Our analysis shows two common types of user behavior in terms of the content of the posted messages, and exposes differences between users in respect to these activities. |
The inevitable drift to triple therapy in COPD: an analysis of prescribing pathways in the UK | BACKGROUND
Real-world prescription pathways leading to triple therapy (TT) (inhaled corticosteroid [ICS] plus long-acting β2-agonist bronchodilator [LABA] plus long-acting muscarinic antagonist) differ from Global initiative for chronic Obstructive Lung Disease [GOLD] and National Institute for Health and Care Excellence treatment recommendations. This study sets out to identify COPD patients without asthma receiving TT, and determine the pathways taken from diagnosis to the first prescription of TT.
METHODS
This was a historical analysis of COPD patients without asthma from the Optimum Patient Care Research Database (387 primary-care practices across the UK) from 2002 to 2010. Patient disease severity was classified using GOLD 2013 criteria. Data were analyzed to determine prescribing of TT before, at, and after COPD diagnosis; the average time taken to receive TT; and the impact of lung function grade, modified Medical Research Council dyspnea score, and exacerbation history on the pathway to TT.
RESULTS
During the study period, 32% of patients received TT. Of these, 19%, 28%, 37%, and 46% of patients classified as GOLD A, B, C, and D, respectively, progressed to TT after diagnosis (P<0.001). Of all patients prescribed TT, 25% were prescribed TT within 1 year of diagnosis, irrespective of GOLD classification (P=0.065). The most common prescription pathway to TT was LABA plus ICS. It was observed that exacerbation history did influence the pathway of LABA plus ICS to TT.
CONCLUSION
Real-life UK prescription data demonstrate the inappropriate prescribing of TT and confirm that starting patients on ICS plus LABA results in the inevitable drift to overuse of TT. This study highlights the need for dissemination and implementation of COPD guidelines to physicians, ensuring that patients receive the recommended therapy.
The Geology and Geochemistry of Porphyrite Iron Deposits in the Nanjing-Wuhu Area, Southeast China | For the iron deposits occurring in andesitic volcanic rocks of the Lower Yangtze Area, the genetic model for porphyrite iron deposits was proposed by Chinese geologists more than ten years ago on the basis of their detailed studies in the Nanjing-Wuhu Basin. It comprises a set of deposits of different genetic types, ranging from late magmatic segregation, ore-magma injection, pneumato-hydatogenetic replacement and hydrothermal filling to sedimentary origin. The deposits are closely connected with the gabbro-diorite porphyrite subvolcanic intrusive bodies both in space and in genesis. Mineralization and wall-rock alteration are consistent with the history of the magmatic evolution. Geochemical studies on trace elements and S, O, Sr isotopes have proved that the porphyrite iron deposits are of magmatic origin. The proposed model may be applied to iron ores associated with andesitic volcanites, for example, in Chile, Mexico, Pakistan, Turkey, etc.
A Control Lyapunov Approach for Feedback Control of Cable-Suspended Robots | This paper considers a feedback control technique for cable-suspended robots under input constraints, using control Lyapunov functions (CLF). The motivation for this work is to develop an explicit feedback control law for cable robots that asymptotically stabilizes them at a goal point under positive input constraints. The main contributions of this paper are as follows: (i) a proposed CLF candidate for a cable robot, and (ii) CLF-based positive controllers for multiple inputs. An example of a three-degrees-of-freedom cable-suspended robot is presented to illustrate the proposed methods.
Sub-Channel Assignment, Power Allocation, and User Scheduling for Non-Orthogonal Multiple Access Networks | In this paper, we study the resource allocation and user scheduling problem for a downlink non-orthogonal multiple access network where the base station allocates spectrum and power resources to a set of users. We aim to jointly optimize the sub-channel assignment and power allocation to maximize the weighted total sum-rate while taking into account user fairness. We formulate the sub-channel allocation problem as equivalent to a many-to-many two-sided user-subchannel matching game in which the set of users and sub-channels are considered as two sets of players pursuing their own interests. We then propose a matching algorithm, which converges to a two-side exchange stable matching after a limited number of iterations. A joint solution is thus provided to solve the sub-channel assignment and power allocation problems iteratively. Simulation results show that the proposed algorithm greatly outperforms the orthogonal multiple access scheme and a previous non-orthogonal multiple access scheme. |
Instance-Level Human Parsing via Part Grouping Network | Instance-level human parsing towards real-world human analysis scenarios is still under-explored due to the absence of sufficient data resources and technical difficulty in parsing multiple instances in a single pass. Several related works all follow the “parsing-by-detection” pipeline that heavily relies on separately trained detection models to localize instances and then performs human parsing for each instance sequentially. Nonetheless, two discrepant optimization targets of detection and parsing lead to suboptimal representation learning and error accumulation for final results. In this work, we make the first attempt to explore a detection-free Part Grouping Network (PGN) for efficiently parsing multiple people in an image in a single pass. Our PGN reformulates instance-level human parsing as two twinned sub-tasks that can be jointly learned and mutually refined via a unified network: 1) semantic part segmentation for assigning each pixel as a human part (e.g ., face, arms); 2) instance-aware edge detection to group semantic parts into distinct person instances. Thus the shared intermediate representation would be endowed with capabilities in both characterizing fine-grained parts and inferring instance belongings of each part. Finally, a simple instance partition process is employed to get final results during inference. We conducted experiments on PASCAL-Person-Part dataset and our PGN outperforms all state-of-the-art methods. Furthermore, we show its superiority on a newly collected multi-person parsing dataset (CIHP) including 38,280 diverse images, which is the largest dataset so far and can facilitate more advanced human analysis. The CIHP benchmark and our source code are available at http://sysu-hcp.net/lip/. |
Influence of hormonal and reproductive factors on the risk of vertebral deformity in European women | The aim of this study was to determine whether variation in the level of selected hormonal and reproductive variables might explain variation in the occurrence of vertebral deformity across Europe. A population-based cross-sectional survey method was used. A total of 7530 women aged 50–79 years were recruited from 30 European centres. Subjects were invited to attend for an interviewer-administered questionnaire and lateral spinal radiographs which were taken according to a standard protocol. After adjusting for age, centre, body mass index and smoking, those in the highest quintile of age at menarche (≥16 years) had an increased risk of vertebral deformity (odds ratio [OR]=1.48; 95% confidence interval [CI] 1.16, 1.88). Increased menopausal age (>52.5 years) was associated with a reduced risk of deformity (OR=0.78; 95% CI 0.60, 1.00), while use of the oral contraceptive pill was also protective (OR=0.76; 95% CI 0.58, 0.99). There was a smaller protective effect associated with one or more years use of hormone replacement therapy, though the confidence limits clearly embraced unity. There was no apparent effect of parity or breast-feeding on the risk of deformity. We conclude that oestrogen status is an important determinant of vertebral deformity. Ever use of the oral contraceptive pill was associated with a 25% reduction in risk of deformity though the effect may be a result of the higher-dosage oestrogen pills used in the past. Parity and breast-feeding do not appear to be important and would appear to have little potential for identification of women at high risk of vertebral deformity.
Can the University Aid Industry? | The writer believes that a distinct advance has been made within the last few years in the relations between the university and industry. It is hoped and believed that this tendency to cooperate will continue and will be of great advantage to both parties to the transaction. |
Helping Patients Make Informed Choices About Probiotics: A Need for Research | Applications of probiotics in the treatment of gastrointestinal disorders are gaining acceptance among patients, despite evidence that probiotics can present substantial health risks, particularly for patients who are immunocompromised or seriously ill. Patients will likely formulate their attitudes and beliefs about probiotic therapies with reference to interpretive frameworks that compare probiotics with more familiar therapeutic modalities, including complementary and alternative medicines, pharmacological therapies, and gene-transfer technologies. Each of these frameworks highlights a different set of benefit-to-risk considerations regarding probiotic usage and reinforces extreme characterizations of both the therapeutic promise and peril of probiotics. Considerable effort may be required to help patients make informed choices about probiotic therapies. |
SoftWater: Software-defined networking for next-generation underwater communication systems | Underwater communication systems have drawn the attention of the research community in the last 15 years. This growing interest can largely be attributed to new civil and military applications enabled by large-scale networks of underwater devices (e.g., underwater static sensors, unmanned autonomous vehicles (AUVs), and autonomous robots), which can retrieve information from the aquatic and marine environment, perform in-network processing on the extracted data, and transmit the collected information to remote locations. Current underwater communication systems are inherently hardware-based and rely on closed and inflexible architectural designs. This imposes significant challenges on adopting new underwater communication and networking technologies, prevents the provision of truly differentiated services to highly diverse underwater applications, and creates great barriers to integrating heterogeneous underwater devices. Software-Defined Networking, recognized as the next-generation networking paradigm, relies on a highly flexible, programmable, and virtualizable network architecture to dramatically improve network resource utilization, simplify network management, reduce operating costs, and promote innovation and evolution. In this paper, a software-defined architecture, namely SoftWater, is first introduced to facilitate the development of next-generation underwater communication systems. More specifically, by exploiting network function virtualization (NFV) and network virtualization concepts, the SoftWater architecture can easily incorporate new underwater communication solutions, maximize the network capacity, achieve network robustness and energy efficiency, and provide truly differentiated and scalable networking services. Consequently, the SoftWater architecture can simultaneously support a variety of different underwater applications, and can enable the interoperability of underwater devices from different manufacturers that operate on different underwater communication technologies based on acoustic, optical, or radio waves. Moreover, the essential network management tools of SoftWater are discussed, including reconfigurable multi-controller placement, hybrid in-band and out-of-band control traffic balancing, and utility-optimal network virtualization. Furthermore, the major benefits of the SoftWater architecture are demonstrated by introducing software-defined underwater networking solutions, including throughput-optimal underwater routing, SDN-enhanced fault recovery, and software-defined underwater mobility management. The research challenges to realize SoftWater are also discussed in detail.
Choice decision of e-learning system: Implications from construal level theory | This study investigates user acceptance of a new e-learning system when users can choose between the old and the new systems. Drawing upon construal level theory and technology acceptance model, this study proposes that users' construal level of an e-learning system interacts with their perceptions of the system (i.e., PEoU and PU) and affects their adoption intention. Data collected from 131 participants in a laboratory experiment show that a higher construal level strengthened the effect of PEoU but mitigated the effect of PU on participants' attitude toward using the system, thus affecting adoption intention. Theoretical contributions and implications are discussed.
The Near East: Archaeology in the 'cradle of Civilization' | List of figures. Maps. Plates and Tables. Acknowledgements. 1. Introduction 2. An Artefactual Basis for the Past 3. Digging Before Excavation 4. Practical Pioneers and Theoretical Problems 5. Harbingers in the Levant 6. The Land that Two Rivers Made 7. The Ubadaidian Inheritance 8. The Household as Enterprise 9. What we're Getting to Know and What we Need to Do. Notes. Bibliography. Index. |
Image dissimilarity | In this paper we compare the performance of a number of representative instrumental models for image dissimilarity with respect to their ability to predict both image dissimilarity and image quality, as perceived by human subjects. Two sets of experimental data, one for images degraded by noise and blur, and one for JPEG-coded images, are used in the comparison.
Characterizing History Independent Data Structures | We consider history independent data structures as proposed for study by Naor and Teague. In a history independent data structure, nothing can be learned from the memory representation of the data structure except for what is available from the abstract data structure. We show that for the most part, strong history independent data structures have canonical representations. We provide a natural alternative definition of strong history independence that is less restrictive than Naor and Teague and characterize how it restricts allowable representations. We also give a general formula for creating dynamically resizing history independent data structures and give a related impossibility result. |
Pilot-Assisted PAPR Reduction Technique for Optical OFDM Communication Systems | This paper investigates the use of a pilot signal in reducing the electrical peak-to-average power ratio (PAPR) of an orthogonal frequency division multiplexing (OFDM) intensity-modulated optical wireless communication system. The phase of the pilot signal is chosen based on the selected mapping (SLM) algorithm, while the maximum likelihood criterion is used to estimate the pilot signal at the receiver. Bit error rate (BER) performance of the pilot-assisted optical OFDM system is identical to that of the basic optical OFDM (with no pilot and no PAPR reduction technique implemented) at the desired BER of less than 10-3 needed to establish a reliable communication link. The pilot-assisted PAPR reduction technique results in a greater reduction in PAPR for high-order constellations than the classical SLM. With respect to a basic OFDM system, with no pilot and no PAPR reduction technique implemented, a pilot-assisted M-QAM optical OFDM system is capable of reducing the electrical PAPR by about 2.5 dB at a modest complementary cumulative distribution function (CCDF) point of 10-4 for M = 64. Greater reductions in PAPR are possible at lower values of CCDF with no degradation of the system's error performance. Clipping the time domain signal at both ends mildly (at 25 times the signal variance level) results in a PAPR reduction of about 6.3 dB at the same CCDF of 10-4 but with an error floor of about 3 ×10-5. Although it is possible to attain any desired level of electrical PAPR reduction with signal clipping, this will be at the cost of deterioration in the system's bit error performance.
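A simplified sketch of the SLM selection step referred to in the abstract: several candidate phase rotations of the frequency-domain symbols are tried, and the one whose time-domain signal has the lowest electrical PAPR is kept. The candidate count, the phase alphabet, and the omission of the pilot, of the Hermitian symmetry used for intensity modulation, and of the maximum-likelihood recovery at the receiver are all simplifying assumptions.

```python
import numpy as np

def papr_db(x):
    """Electrical PAPR of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm_select(symbols, n_candidates=16, seed=0):
    """Simplified SLM: try random QPSK phase rotations of the frequency-domain
    symbols and keep the candidate whose IFFT has the lowest PAPR."""
    rng = np.random.default_rng(seed)
    best_sig, best_phase, best_papr = None, None, np.inf
    for _ in range(n_candidates):
        phase = np.exp(1j * rng.choice([0, np.pi / 2, np.pi, 3 * np.pi / 2],
                                       size=symbols.size))
        sig = np.fft.ifft(symbols * phase)
        p = papr_db(sig)
        if p < best_papr:
            best_sig, best_phase, best_papr = sig, phase, p
    return best_sig, best_phase, best_papr

# Example with random 64-QAM symbols on 256 subcarriers.
rng = np.random.default_rng(1)
syms = (2 * rng.integers(0, 8, 256) - 7) + 1j * (2 * rng.integers(0, 8, 256) - 7)
_, _, papr = slm_select(syms.astype(complex))
print(f"selected PAPR: {papr:.2f} dB")
```

In the pilot-assisted scheme, the chosen phase information is carried by the pilot rather than as explicit side information, which is what the maximum-likelihood estimation at the receiver recovers.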
Studying complex places : change and continuity in York and Dijon | This study considers the methodological implications of a critical realist and complex systems perspective to social phenomena in general, and to cities and urban regions in particular. Using three broad methodological approaches, namely the use of official statistics, visual sources and group interviews with children, different representations of York and Dijon are produced. Through an integrated and reflexive analysis of the findings, an argument is developed to show that an emergent pattern of change and continuity since the 1970s is common to both places. This is then related to the desired and projected changes to the cities voiced by the children, who, it is argued, are active agents shaping the present and future trajectories of their |
An Intelligent Fuzzy Control for Crossroads Traffic Light | The proposed fuzzy control algorithm performs intelligent control of a twelve-phase, three-lane, single-crossroads traffic light and adapts flexibly to real-time traffic flow. The procedure is as follows: first, the number of vehicles in each lane is obtained through sensors, and the phase with the largest number of waiting vehicles is given the highest priority; when the current phase ends, control switches to the highest-priority phase. Then, the optimal green-light extension time is determined by fuzzy-rule reasoning on the current queue length and the overall queue length. Simulation results indicate that the fuzzy control method greatly reduces vehicle delay time compared with the traditional fixed-time control method.
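An illustrative Mamdani-style sketch of the green-light extension step: the current-phase queue and the overall queue are fuzzified, a small rule base is fired, and the extension time is obtained by centroid defuzzification. The membership shapes, the three rules, and the numeric ranges are assumptions made for the example, not the rule base used in the paper.

```python
import numpy as np

def ramp_down(x, hi):    # membership "short": 1 at 0, linearly down to 0 at hi
    return float(np.clip(1.0 - x / hi, 0.0, 1.0))

def ramp_up(x, lo, hi):  # membership "long": 0 below lo, linearly up to 1 at hi
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def tri(t, a, b, c):     # triangular output set over the extension-time axis
    return np.maximum(np.minimum((t - a) / (b - a), (c - t) / (c - b)), 0.0)

def green_extension(current_queue, other_queues):
    """Illustrative fuzzy inference of the green-light extension time (s)."""
    cur_short, cur_long = ramp_down(current_queue, 10), ramp_up(current_queue, 5, 15)
    oth_short, oth_long = ramp_down(other_queues, 20), ramp_up(other_queues, 10, 30)

    t = np.linspace(0, 30, 301)
    short_ext = np.minimum(min(cur_short, oth_long), tri(t, -5, 0, 10))   # end soon
    med_ext   = np.minimum(min(cur_long, oth_long),  tri(t, 5, 15, 25))   # moderate
    long_ext  = np.minimum(min(cur_long, oth_short), tri(t, 20, 30, 35))  # extend

    agg = np.maximum.reduce([short_ext, med_ext, long_ext])
    return float((t * agg).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

# Many vehicles waiting on the current phase, few elsewhere -> long extension.
print(round(green_extension(current_queue=12, other_queues=6), 1))
```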
Sliding-mode control for trajectory-tracking of a Wheeled Mobile Robot in presence of uncertainties | Wheeled Mobile Robots (WMRs) are the most widely used class of mobile robots, owing to their fast maneuvering, simple controllers, and energy-saving characteristics. A dynamics-based sliding-mode controller for WMR trajectory tracking is proposed, achieving robustness to external disturbances and parameter uncertainties. Closed-loop real-time results show good trajectory-tracking performance even for a high upper bound on the uncertainties. |
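A minimal sketch of the sliding-mode idea on a simple double-integrator tracking problem (not the WMR dynamics used in the paper): the sliding surface combines tracking error and its rate, and a saturated switching term rejects a bounded disturbance. The gains, disturbance bound, and boundary-layer width are illustrative assumptions.

```python
# Minimal sliding-mode tracking sketch on a double integrator (not the WMR model).
# s = de + lam*e drives the tracking error to zero; the saturated switching term
# gives robustness to the bounded disturbance d. All gains are assumptions.
import numpy as np

lam, k, phi, dt = 2.0, 3.0, 0.05, 1e-3     # surface slope, gain, boundary layer, step
x, v = 0.0, 0.0                            # state: position, velocity
for i in range(int(5.0 / dt)):
    t = i * dt
    xd, vd, ad = np.sin(t), np.cos(t), -np.sin(t)    # reference trajectory
    e, de = x - xd, v - vd
    s = de + lam * e                                  # sliding surface
    u = ad - lam * de - k * np.clip(s / phi, -1, 1)   # equivalent + switching control
    d = 0.5 * np.sin(5 * t)                           # bounded matched disturbance
    v += (u + d) * dt                                 # double-integrator dynamics
    x += v * dt
print(f"final tracking error: {abs(x - np.sin(5.0)):.4f}")
```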
Light-Head R-CNN: In Defense of Two-Stage Object Detector | In this paper, we first investigate why typical two-stage methods are not as fast as single-stage, fast detectors like YOLO [26, 27] and SSD [22]. We find that Faster R-CNN [28] and R-FCN [17] perform an intensive computation after or before RoI warping. Faster R-CNN involves two fully connected layers for RoI recognition, while R-FCN produces large score maps. Thus, the speed of these networks is slow due to the heavy-head design in the architecture. Even if we significantly reduce the base model, the computation cost cannot be decreased much accordingly. We propose a new two-stage detector, Light-Head R-CNN, to address the shortcoming in current two-stage approaches. In our design, we make the head of the network as light as possible, by using a thin feature map and a cheap R-CNN subnet (pooling and a single fully connected layer). Our ResNet-101 based Light-Head R-CNN outperforms state-of-the-art object detectors on COCO while keeping time efficiency. More importantly, simply replacing the backbone with a tiny network (e.g., Xception), our Light-Head R-CNN gets 30.7 mmAP at 102 FPS on COCO, significantly outperforming the single-stage, fast detectors like YOLO [26, 27] and SSD [22] on both speed and accuracy. Code will be made publicly available. |
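A minimal PyTorch sketch of the "light head" idea: a thin feature map produced by a large separable convolution, followed by RoI pooling and a single cheap fully connected layer shared by the classification and box-regression outputs. The channel counts, kernel size, RoI size, and the use of plain RoI align (rather than the paper's position-sensitive pooling) are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a light detection head: thin feature map + pooling + one FC layer.
# Hyperparameters below are assumptions in the spirit of the paper, not its config.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class LightHead(nn.Module):
    def __init__(self, in_channels=1024, thin_channels=490, mid=256,
                 roi_size=7, fc_dim=2048, num_classes=81):
        super().__init__()
        k = 15  # large kernel, applied as a k x 1 / 1 x k separable pair
        self.thin = nn.Sequential(
            nn.Conv2d(in_channels, mid, (k, 1), padding=(k // 2, 0)),
            nn.Conv2d(mid, thin_channels, (1, k), padding=(0, k // 2)),
        )
        self.roi_size = roi_size
        self.fc = nn.Linear(thin_channels * roi_size * roi_size, fc_dim)
        self.cls = nn.Linear(fc_dim, num_classes)
        self.box = nn.Linear(fc_dim, 4 * num_classes)

    def forward(self, feat, rois, spatial_scale=1.0 / 16):
        # feat: [N, C, H, W] backbone feature; rois: [K, 5] as (batch_idx, x1, y1, x2, y2).
        thin = self.thin(feat)
        pooled = roi_align(thin, rois, output_size=self.roi_size,
                           spatial_scale=spatial_scale, sampling_ratio=2)
        h = torch.relu(self.fc(pooled.flatten(1)))
        return self.cls(h), self.box(h)

# Smoke test with random data.
head = LightHead()
feat = torch.randn(1, 1024, 38, 50)
rois = torch.tensor([[0, 16.0, 16.0, 320.0, 240.0]])
scores, deltas = head(feat, rois)
print(scores.shape, deltas.shape)   # [1, 81] and [1, 324]
```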
An Analysis of Power Consumption in a Smartphone | Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device’s main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device’s application processor. |
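A component-wise power model of the kind described can be as simple as an idle baseline plus each component's active power weighted by its utilization, from which battery lifetime follows. The sketch below uses invented placeholder coefficients purely to illustrate the bookkeeping, not the measured Freerunner numbers.

```python
# Simple component-level power model: P = P_idle + sum(P_active_i * utilization_i).
# All numbers are invented placeholders, not the paper's measurements.
IDLE_MW = 70.0
COMPONENT_ACTIVE_MW = {"cpu": 110.0, "display": 230.0, "gsm": 300.0, "wifi": 400.0}

def average_power_mw(utilization):
    """utilization: fraction of time each component is active, e.g. {'cpu': 0.2}."""
    return IDLE_MW + sum(COMPONENT_ACTIVE_MW[c] * u for c, u in utilization.items())

def battery_lifetime_h(utilization, battery_mwh=4440.0):   # e.g. 1200 mAh * 3.7 V
    return battery_mwh / average_power_mw(utilization)

usage = {"cpu": 0.15, "display": 0.25, "gsm": 0.02, "wifi": 0.05}
print(f"avg power: {average_power_mw(usage):.0f} mW, "
      f"lifetime: {battery_lifetime_h(usage):.1f} h")
```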
Energy disaggregation meets heating control | Heating control is of particular importance, since heating accounts for the largest share of total residential energy consumption. Smart heating strategies can reduce this energy consumption by automatically turning off the heating when the occupants are sleeping or away from home. The present context or occupancy state of a household can be deduced from the appliances that are currently in use. In this study we investigate energy disaggregation techniques to infer appliance states from an aggregated energy signal measured by a smart meter. Since most household devices have predictable energy consumption, we propose to use the changes in aggregated energy consumption as features for the appliance/occupancy state classification task. We evaluate our approach on real-life energy consumption data from several households, compare the classification accuracy of various machine learning techniques, and explain how to use the inferred appliance states to optimize heating schedules. |
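A sketch of the feature idea described: changes (first differences) in the aggregate smart-meter signal are used as features to classify appliance switching events, from which appliance and occupancy state can then be tracked. The synthetic data and the choice of a random-forest classifier are illustrative assumptions, not the paper's datasets or exact setup.

```python
# Sketch: use deltas of the aggregate consumption signal as features for an
# appliance-event classifier. Synthetic data and classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
T = 5000
state = np.zeros(T, dtype=int)                 # hidden appliance state (persistent)
for t in range(1, T):
    flip = rng.random() < 0.02
    state[t] = 1 - state[t - 1] if flip else state[t - 1]
aggregate = 120 + 2000 * state + rng.normal(0, 20, T)       # watts: base + appliance + noise

# Features: the change in aggregate power at each step (and its neighbours).
deltas = np.diff(aggregate, prepend=aggregate[0])
X = np.stack([np.roll(deltas, 1), deltas, np.roll(deltas, -1)], axis=1)
y = (np.diff(state, prepend=state[0]) == 1).astype(int)     # label: appliance switched on here

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"switch-on event detection accuracy: {clf.score(X_te, y_te):.3f}")
```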
Tapped-inductor buck converter for high-step-down DC-DC conversion | The narrow duty cycle in the buck converter limits its application for high-step-down dc-dc conversion. With a simple structure, the tapped-inductor buck converter shows promise for extending the duty cycle. However, the leakage inductance causes a huge turn-off voltage spike across the top switch. Also, the gate drive for the top switch is not simple due to its floating source connection. This paper solves all these problems by modifying the tapped-inductor structure. A simple lossless clamp circuit can effectively clamp the switch turn-off voltage spike and totally recover the leakage energy. Experimental results for 12V-to-1.5V and 48V-to-6V dc-dc conversions show significant improvements in efficiency. |
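A back-of-the-envelope illustration of how the tap extends the duty cycle, using the ideal conversion ratio commonly quoted for the tapped-inductor buck, Vo/Vin = D / (D + n(1 − D)) with turns ratio n = (N1 + N2)/N2. The 12 V-to-1.5 V case comes from the abstract; the turns ratio n = 3 is an assumed example value.

```python
# Ideal duty-cycle comparison: plain buck vs. tapped-inductor buck.
# Vo/Vin = D / (D + n*(1 - D)), so D = n*Vo / (Vin + (n - 1)*Vo); n = 1 is the plain buck.
def duty_cycle(vin, vout, n=1.0):
    """Ideal duty cycle of a (tapped-inductor) buck converter."""
    return n * vout / (vin + (n - 1) * vout)

vin, vout = 12.0, 1.5
print(f"plain buck:      D = {duty_cycle(vin, vout):.3f}")        # ~0.125
print(f"tapped (n = 3):  D = {duty_cycle(vin, vout, n=3):.3f}")   # ~0.300
```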
NFC Loop Antenna in Conjunction With the Lower Section of a Metal Cover | This letter proposes a near-field communication (NFC) antenna for smartphones with a metal cover that includes an embedded narrow slot, splitting the metal cover into upper and lower sections. The proposed NFC antenna is a simple loop structure formed by incorporating a three-dimensional coupling strip with the upper edge of the metal cover in the lower section. Because this antenna structure substantially reduces the eddy current, excellent inductive coupling can be achieved through the clearance zone of the narrow slot. The proposed NFC antenna is relatively thin and achieves excellent performance in the read/write mode. Furthermore, in the card emulation mode, as verified by the Europay, MasterCard, and Visa (EMV) test, it even surpasses the performance of certain NFC antennas in smartphones with non-metal covers. |
A low-fat vegan diet and a conventional diabetes diet in the treatment of type 2 diabetes: a randomized, controlled, 74-wk clinical trial. | BACKGROUND
Low-fat vegetarian and vegan diets are associated with weight loss, increased insulin sensitivity, and improved cardiovascular health.
OBJECTIVE
We compared the effects of a low-fat vegan diet and conventional diabetes diet recommendations on glycemia, weight, and plasma lipids.
DESIGN
Free-living individuals with type 2 diabetes were randomly assigned to a low-fat vegan diet (n = 49) or a diet following 2003 American Diabetes Association guidelines (conventional, n = 50) for 74 wk. Glycated hemoglobin (Hb A(1c)) and plasma lipids were assessed at weeks 0, 11, 22, 35, 48, 61, and 74. Weight was measured at weeks 0, 22, and 74.
RESULTS
Weight loss was significant within each diet group but not significantly different between groups (-4.4 kg in the vegan group and -3.0 kg in the conventional diet group, P = 0.25) and related significantly to Hb A(1c) changes (r = 0.50, P = 0.001). Hb A(1c) changes from baseline to 74 wk or last available values were -0.34 and -0.14 for vegan and conventional diets, respectively (P = 0.43). Hb A(1c) changes from baseline to last available value or last value before any medication adjustment were -0.40 and 0.01 for vegan and conventional diets, respectively (P = 0.03). In analyses before alterations in lipid-lowering medications, total cholesterol decreased by 20.4 and 6.8 mg/dL in the vegan and conventional diet groups, respectively (P = 0.01); LDL cholesterol decreased by 13.5 and 3.4 mg/dL in the vegan and conventional groups, respectively (P = 0.03).
CONCLUSIONS
Both diets were associated with sustained reductions in weight and plasma lipid concentrations. In an analysis controlling for medication changes, a low-fat vegan diet appeared to improve glycemia and plasma lipids more than did conventional diabetes diet recommendations. Whether the observed differences provide clinical benefit for the macro- or microvascular complications of diabetes remains to be established. This trial was registered at clinicaltrials.gov as NCT00276939. |
Exploring the Structure of Complex Software Designs: An Empirical Study of Open Source and Proprietary Code | Much recent research has pointed to the critical role of architecture in the development of a firm's products, services and technical capabilities. A common theme in these studies is the notion that specific characteristics of a product's design – for example, the degree of modularity it exhibits – can have a profound effect on, among other things, its performance, the flexibility of the process used to produce it, the value captured by its producer, and the potential for value creation at the industry level. Unfortunately, this stream of work has been limited by the lack of appropriate tools, metrics and terminology for characterizing key attributes of a product's architecture in a robust fashion. As a result, there is little empirical evidence that the constructs emerging in the literature have power in predicting the phenomena with which they are associated. This paper reports data from a research project which seeks to characterize the differences in design structure between complex software products. In particular, we adopt a technique based upon Design Structure Matrices (DSMs) to map the dependencies between different elements of a design, then develop metrics that allow us to compare the structures of these different DSMs. We demonstrate the power of this approach in two ways: First, we compare the design structures of two complex software products – the Linux operating system and the Mozilla web browser – that were developed via contrasting modes of organization: specifically, open source versus proprietary development. We find significant differences in their designs, consistent with an interpretation that Linux possesses a more "modular" architecture. We then track the evolution of Mozilla, paying particular attention to a major "redesign" effort that took place several months after its release as an open source product. We show that this effort resulted in a design structure that was significantly more modular than its predecessor, and indeed, more modular than that of a comparable version of Linux. Our findings demonstrate that it is possible to characterize the structure of complex product designs and draw meaningful conclusions about the precise ways in which they differ. We provide a description of a set of tools … |
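A small sketch of the kind of DSM-based metric this line of work uses: from a binary dependency matrix, compute the transitive closure (which elements can affect which, directly or indirectly) and average it to obtain a propagation-cost-style measure of coupling. The 4-element matrix below is a made-up example, not the Linux or Mozilla data.

```python
# DSM-style metric sketch: average reachability ("visibility") in the dependency graph.
# The example matrix is invented; the exact metric definitions in the paper may differ.
import numpy as np

def propagation_cost(dsm):
    """Fraction of (i, j) pairs where a change to j can propagate to i."""
    n = dsm.shape[0]
    reach = dsm | np.eye(n, dtype=bool)
    for _ in range(n):                       # iterate to the transitive closure
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    return reach.mean()

# dsm[i, j] = True means element i depends on element j.
dsm = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=bool)
print(f"propagation cost: {propagation_cost(dsm):.2f}")
```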
The Social Regulation of Emotion: An Integrative, Cross-Disciplinary Model | Research in emotion regulation has largely focused on how people manage their own emotions, but there is a growing recognition that the ways in which we regulate the emotions of others also are important. Drawing on work from diverse disciplines, we propose an integrative model of the psychological and neural processes supporting the social regulation of emotion. This organizing framework, the 'social regulatory cycle', specifies at multiple levels of description the act of regulating another person's emotions as well as the experience of being a target of regulation. The cycle describes the processing stages that lead regulators to attempt to change the emotions of a target person, the impact of regulation on the processes that generate emotions in the target, and the underlying neural systems. |
A Modified Node2vec Method for Disappearing Link Prediction | Disappearing link prediction aims to predict the possibility of links disappearing in the future. This paper addresses the disappearing link prediction problem in scientific collaboration networks based on network embedding. We propose a novel network embedding method called TDL2vec, an extension of the node2vec algorithm, which generates link embeddings that take the time factor into account. In this paper, the disappearing link prediction problem is treated as a binary classification problem, and a support vector machine (SVM) is used as the classifier after link embedding. To evaluate the performance in disappearing link prediction, this paper tests the proposed method and several baseline methods on a real-world network. The experimental results show that TDL2vec achieves better performance than the baselines. |
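A skeleton of the downstream classification stage described: endpoint node embeddings are combined into a link embedding (the Hadamard product is one common choice) and an SVM classifies each link as disappearing or persistent. The random embeddings and labels below are placeholders standing in for TDL2vec output, which is not reproduced here.

```python
# Link classification skeleton: link embedding = element-wise product of node
# embeddings, then SVM. Embeddings and labels are random placeholders only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_nodes, dim, n_links = 200, 32, 500
node_emb = rng.normal(size=(n_nodes, dim))              # stand-in for TDL2vec embeddings
links = rng.integers(0, n_nodes, size=(n_links, 2))     # (u, v) pairs
labels = rng.integers(0, 2, size=n_links)               # 1 = link disappears later

def link_embedding(u, v):
    """Hadamard (element-wise) product of the endpoint embeddings."""
    return node_emb[u] * node_emb[v]

X = np.array([link_embedding(u, v) for u, v in links])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy (random placeholder data): {clf.score(X_te, y_te):.3f}")
```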
Type 2 Diabetes Mellitus and Cardiovascular Exercise Performance | Poor physical fitness is associated with increased morbidity and mortality. It has been observed that low cardiorespiratory fitness and physical inactivity predict mortality in children, in normal weight and obese men, in older men and women and in men with type 2 diabetes mellitus (T2DM) [1–4]. Sedentary behavior has been clearly implicated as a factor leading to the development of diabetes as well as the worsening of cardiovascular (CV) outcomes of diabetes. Exercise has long been recognized as a cornerstone for the treatment of patients with T2DM [5]. Over 80 years ago, Allen and others reported that a single bout of exercise lowered the blood glucose concentration of persons with diabetes and improved glucose tolerance temporarily [5]. Since that observation, numerous studies have confirmed the beneficial effects of exercise for the person with T2DM. Paradoxically, despite extensive data indicating the importance of physical activity and exercise, 60 to 80% of adults with T2DM do not exercise sufficiently, and adherence to exercise programs is low in these patients [6]. One possible reason for this is that exercise performance is impaired even in uncomplicated T2DM [7–10]. Benefits of exercise training or even increasing the level of habitual physical activity level for persons with T2DM that have been observed range from the prevention of diabetes to the treatment and management of diabetes in addition to reducing CV morbidity and mortality. Benefits occur for both metabolic and CV parameters [11–15]. For instance, many studies have shown that the regular performance of exercise is related to increased glucose tolerance and enhanced insulin sensitivity in persons with diabetes [11–13]. Exercise training has also been seen to improve measures of CV fitness such as maximal oxygen consumption [VO2max] in persons with T2DM [14,15]. However, the effects of exercise on CV function in the person with T2DM have not been as extensively studied as its effects on metabolism. The relationship between exercise and CV function in T2DM has particular importance given the report of an association between low VO2max with higher morbidity and mortality rates [2]. This review will primarily address the effects of T2DM on CV exercise performance. Subsequently, CV consequences of exercise training for persons with diabetes are discussed. |