title | abstract
---|---|
Elevational shifts, biotic homogenization and time lags in vegetation change during 40 years of climate warming | changes (e.g. meters of elevational distribution shift) with expectations based on the magnitude of environmental change itself over the same time period (e.g. meters of elevational shift in isotherms) (Bertrand et al. 2011, Devictor et al. 2012, Feeley et al. 2013). Many studies have reported both substantial lags in response to climate warming, for example with observed changes 50% of those expected (Davis 1986, Bertrand et al. 2011, Devictor et al. 2012), and a fairly close match between observed and expected ecological changes. For example, Beckage et al. (2008) suggested that the deciduous-boreal ecotone in mountains of northeastern North America had shifted upward in elevation more or less in concert with regional climate warming. It is unclear whether this apparent rapid ecological tracking of climate change applies more broadly to the full suite of plants in these systems. Studies in which long-term vegetation change has been attributed to climate warming have found both increases and decreases in local plant species richness (Damschen et al. 2010, Danby et al. 2011, Vellend et al. 2013a). With high elevations typically characterized by relatively low richness, increased richness is expected, although a priori expectations for lower elevations are more difficult to derive. Although many studies have explored changes in alpha diversity in |
Rationale and methodology for a multicentre randomised trial of fibrinolysis for pulmonary embolism that includes quality of life outcomes. | BACKGROUND
Submassive pulmonary embolism (PE) has a low mortality rate but can degrade functional capacity.
OBJECTIVE
The present study aims to provide the rationale, methodology, and initial findings of a multicentre, randomised trial of fibrinolysis for PE that uses a composite end-point, including quality of life measures.
METHODS
This investigator-initiated study was funded by a contract between a corporate partner and the investigator's hospital (the prime site). The investigator was the Food and Drug Administration (FDA) sponsor. The prime site subcontracted, indemnified, and trained consortia members. Consenting, normotensive patients with PE and right ventricular strain (by echocardiography or biomarkers) received low-molecular-weight heparin and were randomly assigned to a single bolus of tenecteplase or placebo in double-blinded fashion. The outcomes were: (i) in-hospital rate of intubation, vasopressor support, and major haemorrhage, or (ii) at 90 days, death, recurrent PE, or a composite that defined poor quality of life (echocardiography, 6 min walk test and surveys). The planned sample size was n = 200.
RESULTS
Eight sites enrolled 87 patients over 5 years. The ratio of patients screened to patients enrolled was 7.4 to 1, equating to 11 h of screening time per patient enrolled. The primary barrier to enrolment was the cost of screening. Two patients died (2.5%, 95% CI 0-8%), one developed shock, but 18 (22%, 95% CI 13-30%) had a poor quality of life.
CONCLUSIONS
An investigator-initiated, FDA-regulated, multicentre trial of fibrinolysis for submassive PE was conducted, but was limited by screening costs and a low mortality rate. Quality of life measurements might represent a more important patient-centred end-point. |
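The exact (Clopper-Pearson) intervals quoted above can be reproduced from the raw counts; below is a minimal sketch, assuming the 87 enrolled patients as the denominator (the trial's evaluable-population denominators may differ slightly, which would shift the point estimates).

```python
# Exact binomial confidence intervals for the outcome rates quoted above.
# The denominator of 87 (all enrolled) is an assumption; the paper may use a
# slightly smaller evaluable population.
from scipy.stats import binomtest

for label, k in [("death", 2), ("poor quality of life", 18)]:
    ci = binomtest(k, n=87).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{label}: {k}/87 = {k/87:.1%}, 95% CI [{ci.low:.1%}, {ci.high:.1%}]")
```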
Neurological development of 5-year-old children receiving a low-saturated fat, low-cholesterol diet since infancy: A randomized controlled trial. | CONTEXT
Early childhood introduction of nutritional habits aimed at atherosclerosis prevention is compatible with normal growth, but its effect on neurological development is unknown.
OBJECTIVE
To analyze how parental counseling aimed at keeping children's diets low in saturated fat and cholesterol influences neurodevelopment during the first 5 years of life.
DESIGN
Randomized controlled trial conducted between February 1990 and November 1996.
SETTING
Outpatient clinic of a university department in Turku, Finland.
PARTICIPANTS
A total of 1062 seven-month-old infants and their parents, recruited at well-baby clinics between 1990 and 1992. At age 5 years, 496 children still living in the city of Turku were available to participate in neurodevelopmental testing.
INTERVENTION
Participants were randomly assigned to receive individualized counseling aimed at limiting the child's fat intake to 30% to 35% of daily energy, with a saturated:monounsaturated:polyunsaturated fatty acid ratio of 1:1:1 and a cholesterol intake of less than 200 mg/d (n = 540) or usual health education (control group, n = 522).
MAIN OUTCOME MEASURES
Nutrient intake, serum lipid concentrations, and neurological development at 5 years, among children in the intervention vs control groups.
RESULTS
Absolute and relative intakes of fat, saturated fatty acids, and cholesterol among children in the intervention group were markedly less than the respective values of control children. Mean (SD) percentages of daily energy at age 5 years for the intervention vs control groups were as follows: for total fat, 30.6% (4.5%) vs 33.4% (4.4%) (P<.001); and for saturated fat, 11.7% (2.3%) vs 14.5% (2.4%) (P<.001). Mean intakes of cholesterol were 164.2 mg (60.1 mg) and 192.5 mg (71.9 mg) (P<.001) for the intervention and control groups, respectively. Serum cholesterol concentrations were continuously 3% to 5% lower in children in the intervention group than in children in the control group. At age 5 years, mean (SD) serum cholesterol concentration of the intervention group was 4.27 (0.63) mmol/L (165 [24] mg/dL) and of the control group, 4.41 (0.74) mmol/L (170 [29] mg/dL) (P = .04). Neurological development of children in the intervention group was at least as good as that of children in the control group. Relative risks for children in the intervention group to fail tests of speech and language skills, gross motor functioning plus perception, and visual motor skills were 0.95 (90% confidence interval [CI], 0.60-1.49), 0.95 (90% CI, 0.58-1.55), and 0.65 (90% CI, 0.39-1.08), respectively (P = .85, .86, and .16, respectively, vs control children).
CONCLUSION
Our data indicate that repeated child-targeted dietary counseling of parents during the first 5 years of a child's life lessens age-associated increases in children's serum cholesterol and is compatible with normal neurological development. |
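For readers wanting to see how the relative risks and 90% CIs above are formed, here is a minimal sketch of the standard log-scale Wald computation; the per-test failure counts below are hypothetical, since the abstract reports only the resulting ratios.

```python
# Relative risk of test failure with a 90% CI on the log scale.
# The counts used at the bottom are HYPOTHETICAL illustrations; the trial's
# actual per-test failure counts are not given in the abstract.
import math

def relative_risk_90ci(a, n1, b, n2):
    """RR of failure, intervention (a/n1) vs control (b/n2), Wald CI."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
    z = 1.6449                                 # two-sided 90% normal quantile
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

print(relative_risk_90ci(30, 250, 31, 246))    # hypothetical counts
```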
Associations between Vitamin D Status and Type 2 Diabetes Measures among Inuit in Greenland May Be Affected by Other Factors. | OBJECTIVE
Epidemiological studies have provided evidence of an association between vitamin D insufficiency and type 2 diabetes. Vitamin D levels have decreased among Inuit in Greenland, and type 2 diabetes is increasing. We hypothesized that the decline in vitamin D could have contributed to the increase in type 2 diabetes, and therefore investigated associations between serum 25(OH)D3 as a measure of vitamin D status and glucose homeostasis and glucose intolerance in an adult Inuit population.
METHODS
2877 Inuit (≥18 years) randomly selected for participation in the Inuit Health in Transition study were included. Fasting and 2-hour plasma glucose and insulin, C-peptide and HbA1c were measured, and associations with serum 25(OH)D3 were analysed using linear and logistic regression. A subsample of 330 individuals who had also donated a blood sample in 1987 was furthermore included.
RESULTS
After adjustment, increasing serum 25(OH)D3 (per 10 nmol/L) was associated with higher fasting plasma glucose (0.02 mmol/L, p = 0.004), 2-hour plasma glucose (0.05 mmol/L, p = 0.002) and HbA1c (0.39%, p<0.001), and with lower beta-cell function (-1.00 mmol/L, p<0.001). Serum 25(OH)D3 was positively associated with impaired fasting glycaemia (OR: 1.08, p = 0.001), but not with impaired glucose tolerance (IGT) or type 2 diabetes.
CONCLUSIONS
Our results did not support an association between low vitamin D levels and risk of type 2 diabetes. Instead, we found weak positive associations between vitamin D levels and fasting and 2-hour plasma glucose levels, HbA1c and impaired fasting glycaemia, and a negative association with beta-cell function, underlining the need for determination of the causal relationship. |
A 3D printed low profile magnetic dipole antenna | An electrically small, low profile magnetic dipole antenna is presented. The proposed antenna is composed of multiple small twisted loops connected in parallel. The simulated radiation efficiency is 87% at the resonant frequency of 222.5 MHz. The corresponding electrical size ka is 0.276 with a height of 0.0016λ. The prototype is built using a selective laser sintering technology and silver paste painting. The measured results are discussed. |
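As a quick check of the quoted electrical size, the numbers above are mutually consistent; a minimal sketch, noting that the enclosing-sphere radius a is inferred from ka rather than stated in the abstract:

```python
# Back-of-envelope check of the quoted electrical size ka = (2*pi/lambda) * a,
# where a is the radius of the smallest sphere enclosing the antenna (inferred
# here from the quoted ka, not stated in the abstract).
import math

c = 299_792_458.0          # speed of light, m/s
f = 222.5e6                # resonant frequency, Hz
lam = c / f                # wavelength, ~1.35 m
k = 2 * math.pi / lam      # free-space wavenumber
a = 0.276 / k              # enclosing-sphere radius implied by ka = 0.276
height = 0.0016 * lam      # quoted profile height
print(f"lambda = {lam:.3f} m, a = {a*100:.1f} cm, height = {height*1000:.1f} mm")
```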
FEEL: Featured Event Embedding Learning | Statistical script learning is an effective way to acquire world knowledge which can be used for commonsense reasoning. Statistical script learning induces this knowledge by observing event sequences generated from texts. The learned model can thus predict subsequent events, given earlier events. Recent approaches rely on learning event embeddings which capture script knowledge. In this work, we suggest a general learning model, Featured Event Embedding Learning (FEEL), for injecting event embeddings with fine-grained information. In addition to capturing the dependencies between subsequent events, our model can take into account higher-level abstractions of the input event, which help the model generalize better and account for the global context in which the event appears. We evaluated our model over three narrative cloze tasks, and showed that our model is competitive with the most recent state-of-the-art. We also show that the resulting embedding can be used as a strong representation for advanced semantic tasks such as discourse parsing and sentence semantic relatedness. |
A general two-phase debris flow model | This paper presents a new, generalized two-phase debris flow model that includes many essential physical phenomena. The model employs the Mohr-Coulomb plasticity for the solid stress, and the fluid stress is modeled as a solid-volume-fraction-gradient-enhanced non-Newtonian viscous stress. The generalized interfacial momentum transfer includes viscous drag, buoyancy, and virtual mass. A new, generalized drag force is proposed that covers both solid-like and fluid-like contributions, and can be applied to drag ranging from linear to quadratic. Strong coupling between the solid- and the fluid-momentum transfer leads to simultaneous deformation, mixing, and separation of the phases. Inclusion of the non-Newtonian viscous stresses is important in several aspects. The evolution, advection, and diffusion of the solid-volume fraction plays an important role. The model, which includes three innovative, fundamentally new, and dominant physical aspects (enhanced viscous stress, virtual mass, generalized drag), constitutes the most generalized two-phase flow model to date, and can reproduce results from most previous simple models that consider single- and two-phase avalanches and debris flows as special cases. Numerical results indicate that the model can adequately describe the complex dynamics of subaerial two-phase debris flows, particle-laden and dispersive flows, sediment transport, and submarine debris flows and associated phenomena. |
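As a sketch of the "linear to quadratic" drag idea in generic notation (the paper's actual closure, which combines the solid-like and fluid-like contributions, is more detailed than this):

```latex
% Schematic drag law interpolating between linear (j = 1) and quadratic
% (j = 2) regimes; C_DG lumps the solid-like and fluid-like contributions.
% Illustrative notation only, not the paper's exact closure.
\[
  \mathbf{F}_{\mathrm{drag}}
    = C_{DG}\,\lvert \mathbf{u}_f - \mathbf{u}_s \rvert^{\,j-1}
      \,(\mathbf{u}_f - \mathbf{u}_s),
  \qquad 1 \le j \le 2,
\]
% where u_s and u_f are the solid- and fluid-phase velocities.
```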
Automated Usability Testing for Mobile Applications | In this paper we discuss the design and implementation of an automated usability evaluation method for iOS applications. In contrast to common usability testing methods, it is not explicitly necessary to involve an expert or subjects. These circumstances reduce costs, time and personnel expenditures. Professionals are replaced by the automation tool while test participants are exchanged with consumers of the launched application. Interactions of users are captured via a fully automated capturing framework which creates a record of user interactions for each session and sends them to a central server. A usability problem is defined as a sequence of interactions and pattern recognition specified by interaction design patterns is applied to find these problems. Nevertheless, it falls back to the user input for accurate results. Similar to the problem, the solution of the problem is based on the HCI design pattern. An evaluation shows the functionality of our approach compared to a traditional usability evaluation method. |
Alley cropping on an Ultisol in subhumid Benin. Part 1: Long-term effect on maize, cassava and tree productivity | In southern Benin, West Africa, two alley cropping systems were studied from 1986 to 1992. Yield development was followed in a maize and cassava crop rotation vs. intercropping system, with alleys of Leucaena leucocephala (Lam.) de Wit and Cajanus cajan (L.) Millsp. vs. a no-tree control, with and without NPK fertiliser. Without alleys, NPK fertilisation maintained high yield levels of 2–3 t maize dry grain plus 4–6 t ha−1 cassava root DM in intercropping, 3–4 t ha−1 maize and 6–10 t ha−1 cassava in solecropping. Without NPK, final yields seemed to stabilise at about 1 t maize plus 2 t cassava in intercropping and twice as much in each solecrop. Alley cropping induced significant yield increases by about 50% with both tree species in unfertilised, intercropped maize, and with Cajanus in fertilised, solecropped cassava. In monetary terms, the NPK-fertiliser response of stabilised yields was significant for all treatments except the solecropped Leucaena alleys. It is concluded that on Ultisols with low nutrient status in the upper rooting zone, alley cropping with low-competitive tree species may improve food crop yields but the greatest monetary output is achieved by intercropping with mineral fertiliser independent of the presence or absence of an agroforestry component. |
Word Embeddings via Tensor Factorization | Most popular word embedding techniques involve implicit or explicit factorization of a word co-occurrence based matrix into low rank factors. In this paper, we aim to generalize this trend by using numerical methods to factor higher-order word co-occurrence based arrays, or tensors. We present four word embeddings using tensor factorization and analyze their advantages and disadvantages. One of our main contributions is a novel joint symmetric tensor factorization technique related to the idea of coupled tensor factorization. We show that embeddings based on tensor factorization can be used to discern the various meanings of polysemous words without being explicitly trained to do so, and motivate the intuition behind why this works in a way that existing methods do not. We also modify an existing word embedding evaluation metric known as Outlier Detection [Camacho-Collados and Navigli, 2016] to evaluate the quality of the order-N relations that a word embedding captures, and show that tensor-based methods outperform existing matrix-based methods at this task. Experimentally, we show that all of our word embeddings either outperform or are competitive with state-of-the-art baselines commonly used today on a variety of recent datasets. Suggested applications of tensor factorization-based word embeddings are given, and all source code and pre-trained vectors are publicly available online. |
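A minimal sketch of the basic idea, assuming a third-order word co-occurrence tensor and an off-the-shelf CP (parafac) decomposition via tensorly; the paper's joint symmetric factorization is more elaborate than this.

```python
# Toy tensor-factorization word embeddings: build a word-word-word
# co-occurrence tensor and take rows of a CP factor as embeddings.
# A sketch only; the paper's joint symmetric method differs in detail.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

corpus = ["the cat sat on the mat", "the dog sat on the log"]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Count co-occurring word triples within each sentence.
T = np.zeros((V, V, V))
for s in corpus:
    words = s.split()
    for a in words:
        for b in words:
            for c in words:
                T[idx[a], idx[b], idx[c]] += 1

# Rank-R CP decomposition of the (log-damped) tensor; rows of the first
# factor serve as word embeddings.
weights, factors = parafac(tl.tensor(np.log1p(T)), rank=4, n_iter_max=500)
embeddings = tl.to_numpy(factors[0])
print(embeddings.shape)  # (V, 4)
```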
Experimental characterization of indoor visible light communication channels | The impulse response and frequency response of indoor visible light communication diffuse channels are characterized experimentally in this paper. Both the short pulse technique and frequency sweep technique are adopted for experimental investigation. The iterative site-based modeling is also carried out to simulate the channel impulse response, and good conformity is observed between the experimental and simulation results. Finally, the impact of receiver pointing angles and field of view on the channel 3dB bandwidth is investigated. |
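A minimal sketch of the 3 dB bandwidth extraction step, using a synthetic exponentially decaying response as a stand-in for the measured diffuse channel impulse response:

```python
# Extract the 3 dB bandwidth from a channel impulse response.
# h(t) below is a synthetic stand-in for the measured diffuse response.
import numpy as np

fs = 1e9                                  # sample rate, 1 GS/s
t = np.arange(0, 200e-9, 1/fs)            # 200 ns observation window
h = np.exp(-t / 15e-9)                    # toy exponentially decaying response

H = np.fft.rfft(h)
f = np.fft.rfftfreq(len(h), 1/fs)
mag_db = 20 * np.log10(np.abs(H) / np.abs(H[0]))
f3db = f[np.argmax(mag_db < -3)]          # first frequency 3 dB below DC
print(f"3 dB bandwidth ~ {f3db/1e6:.1f} MHz")
```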
Active learning for on-road vehicle detection: a comparative study | In recent years, active learning has emerged as a powerful tool in building robust systems for object detection using computer vision. Indeed, active learning approaches to on-road vehicle detection have achieved impressive results. While active learning approaches for object detection have been explored and presented in the literature, few studies have been performed to comparatively assess costs and merits. In this study, we provide a cost-sensitive analysis of three popular active learning methods for on-road vehicle detection. The generality of active learning findings is demonstrated via learning experiments performed with detectors based on histogram of oriented gradient features and SVM classification (HOG–SVM), and Haar-like features and Adaboost classification (Haar–Adaboost). Experimental evaluation has been performed on static images and real-world on-road vehicle datasets. Learning approaches are assessed in terms of the time spent annotating, data required, recall, and precision. |
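A generic uncertainty-sampling sketch of the kind of active learning loop compared in such studies (illustrative only; the paper's query strategies and vehicle-detection features differ):

```python
# Uncertainty-sampling active learning with a linear SVM: train on a small
# labeled seed set, query the samples closest to the decision boundary,
# retrain. Synthetic data stands in for HOG or Haar-like features.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=36, random_state=0)
labeled = list(range(50))                       # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    # Query the least-confident samples (smallest margin to the boundary).
    margins = np.abs(clf.decision_function(X[pool]))
    query = [pool[i] for i in np.argsort(margins)[:20]]
    labeled += query                            # oracle provides y[query]
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: {len(labeled)} labeled, "
          f"acc = {clf.score(X[pool], y[pool]):.3f}")
```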
Terabit free-space data transmission employing orbital angular momentum multiplexing | The recognition in the 1990s that light beams with a helical phase front have orbital angular momentum has benefited applications ranging from optical manipulation to quantum information processing. Recently, attention has been directed towards the opportunities for harnessing such beams in communications. Here, we demonstrate that four light beams with different values of orbital angular momentum and encoded with 42.8 × 4 Gbit s⁻¹ quadrature amplitude modulation (16-QAM) signals can be multiplexed and demultiplexed, allowing a 1.37 Tbit s⁻¹ aggregated rate and 25.6 bit s⁻¹ Hz⁻¹ spectral efficiency when combined with polarization multiplexing. Moreover, we show scalability in the spatial domain using two groups of concentric rings of eight polarization-multiplexed 20 × 4 Gbit s⁻¹ 16-QAM-carrying orbital angular momentum beams, achieving a capacity of 2.56 Tbit s⁻¹ and spectral efficiency of 95.7 bit s⁻¹ Hz⁻¹. We also report data exchange between orbital angular momentum beams encoded with 100 Gbit s⁻¹ differential quadrature phase-shift keying signals. These demonstrations suggest that orbital angular momentum could be a useful degree of freedom for increasing the capacity of free-space communications. |
Media Bias Monitor: Quantifying Biases of Social Media News Outlets at Large-Scale | As Internet users increasingly rely on social media sites like Facebook and Twitter to receive news, they are faced with a bewildering number of news media choices. For example, thousands of Facebook pages today are registered and categorized as some form of news media outlets. Inferring the bias (or slant) of these media pages poses a difficult challenge for media watchdog organizations that traditionally rely on con- |
Effect of exercise on blood pressure in older persons: a randomized controlled trial. | BACKGROUND
Because of age-related differences in the cause of hypertension, it is uncertain whether current exercise guidelines for reducing blood pressure (BP) are applicable to older persons. Few exercise studies in older persons have evaluated BP changes in relation to changes in body composition or fitness.
METHODS
This was a 6-month randomized controlled trial of combined aerobic and resistance training; controls followed usual care physical activity and diet advice. Participants (aged 55-75 years) had untreated systolic BP (SBP) of 130 to 159 mm Hg or diastolic BP (DBP) of 85 to 99 mm Hg.
RESULTS
Fifty-one exercisers and 53 controls completed the trial. Exercisers significantly improved aerobic and strength fitness, increased lean mass, and reduced general and abdominal obesity. Mean decreases in SBP and DBP, respectively, were 5.3 and 3.7 mm Hg among exercisers and 4.5 and 1.5 mm Hg among controls (P < .001 for all). There were no significant group differences in mean SBP change from baseline (-0.8 mm Hg; P=.67). The mean DBP reduction was greater among exercisers (-2.2 mm Hg; P=.02). Aortic stiffness, indexed by aortofemoral pulse-wave velocity, was unchanged in both groups. Body composition improvements explained 8% of the SBP reduction (P = .006) and 17% of the DBP reduction (P<.001).
CONCLUSIONS
A 6-month program of aerobic and resistance training lowered DBP, but not SBP, more than usual care in older adults with mild hypertension. The concomitant lack of improvement in aortic stiffness in exercisers suggests that older persons may be resistant to exercise-induced reductions in SBP. Body composition improvements were associated with BP reductions and may be a pathway by which exercise training improves cardiovascular health in older men and women. |
Mandibular Rim Trilogy with Botulinum Toxin Injection: Reduction, Projection, and Lift. | The "onabotulinum toxin A (Botox) revolution" has brought fundamental change to facial rejuvenation, as well as the concept of microinjection. Aesthetic standards tend towards "globalization"; however, Asians have different aesthetic cultures and unique facial features compared with Caucasians. We propose a new rejuvenation concept: the Asian face should preserve its original facial identity during Botox treatments. The lower face is treated with botulinum toxin to achieve a harmonious facial profile. Twenty women aged 30 to 45 years consented to and received the three-pronged procedure between March 2014 and April 2015; photographs taken at baseline and at follow-up visits were analyzed. Two months after treatment, significant improvement was observed compared with baseline, with reduced masseter prominence and a more prominent chin, producing a favorable facial contour and harmonious appearance at follow-up. The novel three-pronged approach to lower facial rejuvenation targets the Asian characteristics of masseter hypertrophy, chin retrusion, and facial sagging during the aging process. Botox treatment was an effective and safe strategy to improve the appearance and contour of the lower face in Asian patients. |
Ultrasonographic features in symptomatic osteoarthritis of the knee and relation with pain. | OBJECTIVE
Radiographic knee OA is moderately associated with pain. As OA is a disease of the entire joint, ultrasonography visualizing cartilage and soft tissue structures might provide more insight into the complex process of pain in knee OA. The objective of this study was to investigate the cross-sectional association between US findings and pain in knee OA.
METHODS
In this observational study, 180 patients fulfilling the ACR clinical criteria for knee OA underwent US examination of the most symptomatic knee. The US protocol comprised assessment of synovial hypertrophy, joint effusion, infrapatellar bursitis and Baker's cyst, medial meniscus protrusion and cartilage thickness. To evaluate the association between US features and pain (Numeric Rating Scale from 0 to 10 and the Knee Injury and Osteoarthritis Outcome Score pain subscale), regression analysis was performed.
RESULTS
In regression analysis, no association between US or clinical or demographic features and the level of knee pain was found.
CONCLUSION
In this cohort, no association between US features and the degree of knee pain was found. Despite the attractiveness of US (easily accessible, inexpensive and involving no radiation) and the fact that previous research suggested otherwise, it remains uncertain which part of pain in knee OA is explained by pathology in soft tissue structures and whether US of the knee is the imaging tool of choice to visualize this pathology. |
Feudal Multi-Agent Hierarchies for Cooperative Reinforcement Learning | We investigate how reinforcement learning agents can learn to cooperate. Drawing inspiration from human societies, in which successful coordination of many individuals is often facilitated by hierarchical organisation, we introduce Feudal Multiagent Hierarchies (FMH). In this framework, a ‘manager’ agent, which is tasked with maximising the environmentally-determined reward function, learns to communicate subgoals to multiple, simultaneously-operating, ‘worker’ agents. Workers, which are rewarded for achieving managerial subgoals, take concurrent actions in the world. We outline the structure of FMH and demonstrate its potential for decentralised learning and control. We find that, given an adequate set of subgoals from which to choose, FMH performs, and particularly scales, substantially better than cooperative approaches that use a shared reward function. |
Predicting Multiple Metrics for Queries: Better Decisions Enabled by Machine Learning | One of the most challenging aspects of managing a very large data warehouse is identifying how queries will behave before they start executing. Yet knowing their performance characteristics --- their runtimes and resource usage --- can solve two important problems. First, every database vendor struggles with managing unexpectedly long-running queries. When these long-running queries can be identified before they start, they can be rejected or scheduled when they will not cause extreme resource contention for the other queries in the system. Second, deciding whether a system can complete a given workload in a given time period (or whether a bigger system is necessary) depends on knowing the resource requirements of the queries in that workload. We have developed a system that uses machine learning to accurately predict the performance metrics of database queries whose execution times range from milliseconds to hours. For training and testing our system, we used both real customer queries and queries generated from an extended set of TPC-DS templates. The extensions mimic queries that caused customer problems. We used these queries to compare how accurately different techniques predict metrics such as elapsed time, records used, disk I/Os, and message bytes. The most promising technique was not only the most accurate, but also predicted these metrics simultaneously and using only information available prior to query execution. We validated the accuracy of this machine learning technique on a number of HP Neoview configurations. We were able to predict individual query elapsed time within 20% of its actual time for 85% of the test queries. Most importantly, we were able to correctly identify both the short and long-running (up to two hour) queries to inform workload management and capacity planning. |
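A minimal sketch of the general idea, predicting several metrics simultaneously from plan-time features; the model choice and feature names here are assumptions, as the abstract does not name the paper's actual technique.

```python
# Multi-output regression from pre-execution (plan-time) features to several
# performance metrics at once. Synthetic data; features such as "estimated
# rows" or "join count" are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 6))                    # plan-time features per query
# Targets: elapsed time, records used, disk I/Os, message bytes (synthetic).
Y = np.column_stack([X @ w + 0.1 * rng.random(500)
                     for w in rng.random((4, 6))])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], Y[:400])                 # predicts all four metrics jointly
pred = model.predict(X[400:])
rel_err = np.abs(pred - Y[400:]) / np.abs(Y[400:])
print(f"within 20% of actual elapsed time: {(rel_err[:, 0] <= 0.2).mean():.0%}")
```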
Evidence in Management and Organizational Science: Assembling the Field's Full Weight of Scientific Knowledge Through Syntheses | This chapter advocates the good scientific practice of systematic research syntheses in Management and Organizational Science (MOS). A research synthesis is the systematic accumulation, analysis and reflective interpretation of the full body of relevant empirical evidence related to a question. It is the critical first step in effective use of scientific evidence. Synthesis is not a conventional literature review. Literature reviews are often position papers, cherry-picking studies to advocate a point of view. Instead, syntheses systematically identify where research findings are clear (and where they aren't), a key first step to establishing the conclusions science supports. Syntheses are also important for identifying contested findings and productive lines for future research. Uses of MOS evidence, that is, the motives for undertaking a research synthesis, include scientific discovery and explanation, improved management practice guidelines, and formulating public policy. We identify six criteria for establishing the evidentiary value of a body of primary studies in MOS. We then pinpoint the stumbling blocks currently keeping the field from making effective use of its ever-expanding base of empirical studies. Finally, this chapter outlines (a) an approach to research synthesis suitable to the domain of MOS; and (b) supporting practices to make synthesis a collective MOS project. Evidence in Management and Organizational Science It is the nature of the object that determines the form of its possible science. (Bhaskar, 1998, p. 3) Uncertain knowledge is better than ignorance. (Mitchell, 2000, p. 9) This chapter is motivated by the failure of Management and Organizational Science (MOS) to date to make full effective use of its available research evidence. Failure to make effective use of scientific evidence is a problem both management scholars and practitioners face. Effective use of evidence, as we employ the term here, means to assemble and interpret a body of primary studies relevant to a question of fact, and then take appropriate action based upon the conclusions drawn. For science, appropriate action might be to direct subsequent research efforts elsewhere if the science is clear, or to recommend a new tack if findings are inconclusive. For practice, appropriate action might begin with a summary of key findings to share with educators, thought leaders, consultants, and the broader practice community. Unfortunately, bodies of evidence in MOS are seldom assembled or interpreted in the systematic fashion needed to permit their confident use. A systematic review of the full body of evidence is the key first step in formulating a science-based conclusion. As a consequence, at present neither the MOS scholar nor the practitioner can readily claim to be well-informed. This lapse has many causes. Two are central to our failure to use MOS evidence well: (1) overvaluing novelty to the detriment of accumulating convergent findings; and (2) the general absence of systematic research syntheses. These two causes are intertwined in that, as we shall show, use of research syntheses ties closely with how a field gauges the value of its research. 
This chapter's subject, the systematic research synthesis, is not to be confused with a conventional literature review, its less systematic, non-representative counterpart. Systematic research syntheses assemble, analyze and interpret a comprehensive body of evidence in a highly reflective fashion according to six evidentiary criteria we describe. The why, what, and how of research synthesis in MOS is this chapter's focus. The explosion of management research since World War II has created knowledge products at a rate far outpacing our current capacity for recall, sense-making, and use. In all likelihood, MOS's knowledge base will continue to expand. We estimate over 200 peer-reviewed journals currently publish MOS research. These diverse outlets reflect the fact that MOS is not a discipline; it is an area of inter-related research activities cutting across numerous disciplines and subfields. The area's expansion translates into a body of knowledge that is increasingly fragmented (Rousseau, 1997), transdisciplinary (Whitley, 2000), and interdependent with advancements in other social sciences (Tranfield, Denyer & Smart, 2003). The complicated state of MOS research makes it tough to know what we know, especially as specialization spawns research communities that often don't and sometimes can't talk with each other. |
Microcontroller based drip irrigation system using smart sensor | In the past couple of decades, there has been rapid growth of technology in the field of agriculture. Different monitoring and control systems are installed in order to increase the yield. The yield rate may decrease due to numerous factors; disease is one of the key factors that cause degradation of yield. The developed monitoring system therefore mainly focuses on predicting the onset of disease germination. A sensor module is used to detect different environmental conditions across the farm, and the sensed data are displayed on a liquid crystal display using a microcontroller. The microcontroller wirelessly transmits the different environmental conditions across the farm to a central unit, where the data are stored and analysed. The central unit checks the present data against disease conditions and, if they match, commands the microcontroller to operate a relay. The sensor module was tested over different temperature ranges, and only little variation in the recorded values was found. Wireless data transfer was tested with the introduction of various obstacles such as walls, metal bodies and magnets, and it was found that the same data were transferred to the central unit, but with some delay. The developed system closely predicts the onset of disease germination. |
Combining haptic human-machine interaction with predictive path planning for lane-keeping and collision avoidance systems | This paper presents a first approach for a haptic human-machine interface combined with a novel lane-keeping and collision-avoidance assistance system approach, as well as the results of a first exploration study with human test drivers. The assistance system approach is based on a potential field predictive path planning algorithm that incorporates the driver's wishes commanded by the steering wheel angle, the brake pedal or throttle, and the intended maneuver. For the design of the haptic human-machine interface, the assistance torque characteristic at the handwheel is shaped and the path planning parameters are held constant. In the exploration, both driving data and questionnaires are evaluated. The results show good acceptance for the lane-keeping assistance, while the collision avoidance assistance needs to be improved. |
Cooperative localization and mapping of MAVs using RGB-D sensors | The fusion of IMU and RGB-D sensors presents an interesting combination of information to achieve autonomous localization and mapping using robotic platforms such as ground robots and flying vehicles. In this paper, we present a software framework for cooperative localization and mapping while simultaneously using multiple aerial platforms. We employ a monocular visual odometry algorithm to solve the localization task, where the depth data flow associated with the RGB image is used to estimate the scale factor associated with the visual information. The current framework enables autonomous onboard control of each vehicle with cooperative localization and mapping. We present a methodology that provides both a sparse map generated by the monocular SLAM and a multiple resolution dense map generated by the associated depth. The localization algorithm and both 3D mapping algorithms work in parallel, improving the system's real-time reliability. We present experimental results to show the effectiveness of the proposed approach using two quadrotor platforms. |
Behavioral economics holds potential to deliver better results for patients, insurers, and employers. | Many programs being implemented by US employers, insurers, and health care providers use incentives to encourage patients to take better care of themselves. We critically review a range of these efforts and show that many programs, although well-meaning, are unlikely to have much impact because they require information, expertise, and self-control that few patients possess. As a result, benefits are likely to accrue disproportionately to patients who already are taking adequate care of their health. We show how these programs could be made more effective through the use of insights from behavioral economics. For example, incentive programs that offer patients small and frequent payments for behavior that would benefit the patients, such as medication adherence, can be more effective than programs with incentives that are far less visible because they are folded into a paycheck or used to reduce a monthly premium. Deploying more-nuanced insights from behavioral economics can lead to policies with the potential to increase patient engagement and deliver dividends for patients and favorable cost-effectiveness ratios for insurers, employers, and other relevant commercial entities. |
A Comprehensive Analysis of AM–AM and AM–PM Conversion in an LDMOS RF Power Amplifier | In this paper, a Volterra analysis built on top of a normal harmonic balance simulation is used for a comprehensive analysis of the causes of AM-PM distortion in an LDMOS RF power amplifier (PA). The analysis shows that any nonlinear capacitor causes AM-PM. In addition, varying terminal impedances may pull the matching impedances and cause phase shift. The AM-PM is also affected by the distortion that is mixed down from the second harmonic. As a sample circuit, an internally matched 30-W LDMOS RF PA is used, and the results are compared to measured AM-AM, AM-PM and large-signal S11. |
TV-Broadcasting Competition and Advertising | We analyse the rivalry between two TV channels competing both on the market for audience and on the market for advertising. We identify the nature of the TV programs emerging from this competition, and the quantity of advertising that TV viewers will have to watch at equilibrium. Finally, we examine how a government's regulation of this quantity will affect the channels' selection of programs. |
Heterogeneous Task Allocation and Sequencing via Decentralized Large Neighborhood Search | This paper focuses on decentralized task allocation and sequencing for multiple heterogeneous robots. Each task is defined as visiting a point in a subset of the robot configuration space — this de... |
Interactive metagenomic visualization in a Web browser | A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net . |
Real-time pixel luminance optimization for dynamic multi-projection mapping | Using projection mapping enables us to bring virtual worlds into shared physical spaces. In this paper, we present a novel, adaptable and real-time projection mapping system, which supports multiple projectors and high quality rendering of dynamic content on surfaces of complex geometrical shape. Our system allows for smooth blending across multiple projectors using a new optimization framework that simulates the diffuse direct light transport of the physical world to continuously adapt the color output of each projector pixel. We present a real-time solution to this optimization problem using off-the-shelf graphics hardware, depth cameras and projectors. Our approach enables us to move projectors, depth cameras or objects while maintaining the correct illumination, in real time, without the need for markers on the object. It also allows for projectors to be removed or dynamically added, and provides compelling results with only commodity hardware. |
Real time head pose estimation with random regression forests | Fast and reliable algorithms for estimating the head pose are essential for many applications and higher-level face analysis tasks. We address the problem of head pose estimation from depth data, which can be captured using the ever more affordable 3D sensing technologies available today. To achieve robustness, we formulate pose estimation as a regression problem. While detecting specific face parts like the nose is sensitive to occlusions, learning the regression on rather generic surface patches requires an enormous amount of training data in order to achieve accurate estimates. We propose to use random regression forests for the task at hand, given their capability to handle large training datasets. Moreover, we synthesize a large amount of annotated training data using a statistical model of the human face. In our experiments, we show that our approach can handle real data presenting large pose changes, partial occlusions, and facial expressions, even though it is trained only on synthetic neutral face data. We have thoroughly evaluated our system on a publicly available database on which we achieve state-of-the-art performance without having to resort to the graphics card. |
Prognostic Value of Baseline Neutrophil-Lymphocyte and Platelet-Lymphocyte Ratios in Local and Advanced Gastric Cancer Patients. | BACKGROUND
We aimed to investigate the prognostic value of baseline neutrophil, lymphocyte, and platelet counts along with the neutrophil-lymphocyte ratio (NLR) and platelet-lymphocyte ratio (PLR) in local and advanced gastric cancer patients.
MATERIALS AND METHODS
In this retrospective cross-sectional study, a total of 103 patients with gastric cancer were included. Patient characteristics and overall survival (OS) times were evaluated for all patients. Data from a complete blood count test, including neutrophil, lymphocyte, monocyte, white blood cell (WBC) and platelet (Plt) counts and hemoglobin level (Hb), were recorded, and the NLR and PLR were obtained for every patient prior to pathological diagnosis, before any treatment was applied.
RESULTS
Of the patients, 53 had local disease, underwent surgery, and were administered adjuvant chemoradiotherapy where indicated. The remaining 50 had advanced disease and only received chemotherapy. OS time was 71.6±6 months in the local gastric cancer group and 15±2 months in the advanced gastric cancer group. Univariate analysis demonstrated that only a high platelet count (p=0.013) was associated with better OS in the local gastric cancer patients. In contrast, both low NLR (p=0.029) and low PLR (p=0.012) were associated with better OS in advanced gastric cancer patients.
CONCLUSIONS
This study demonstrated that NLR and PLR had no effect on prognosis in patients with local gastric cancer who underwent surgery and received adjuvant chemoradiotherapy. In advanced gastric cancer patients, both NLR and PLR had significant effects on prognosis, so they may find application as easily measured prognostic factors for such patients. |
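The two ratios studied above are computed directly from a routine complete blood count; a trivial sketch (counts in 10^9 cells/L):

```python
# The two prognostic ratios studied above, from a routine complete blood count.
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-lymphocyte ratio."""
    return neutrophils / lymphocytes

def plr(platelets: float, lymphocytes: float) -> float:
    """Platelet-lymphocyte ratio."""
    return platelets / lymphocytes

print(nlr(4.8, 1.6), plr(280, 1.6))  # e.g. NLR = 3.0, PLR = 175.0
```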
Insights into the interpretation of light transmission aggregometry for evaluation of platelet aggregation inhibition by clopidogrel. | INTRODUCTION
When studying the efficacy of clopidogrel to inhibit platelet aggregation by light transmission aggregometry, technical decisions must be taken prior to assessment or during analysis, including, but not limited to, concentration of agonist to use and timing of the evaluation of the response on the aggregation curve obtained (peak ADP-stimulated platelet aggregation vs. late aggregation). We investigated how some of these technical modalities affected the results of platelet aggregation obtained after clopidogrel administration.
MATERIALS AND METHODS
One hundred and twenty stable coronary artery disease patients requiring a diagnostic angiography were recruited prior to pre-treatment with clopidogrel. Blood samples were tested before clopidogrel initiation and immediately preceding coronary angiography using light transmission aggregometry with either 5 or 20 microM of ADP. Aggregation was measured at maximal amplitude (peak), and 5 minutes after agonist addition (late).
RESULTS
While measurements of platelet aggregation as either peak or late aggregation were strongly correlated, peak platelet aggregation was significantly higher than late aggregation, by 10.8% and by 10.3% with ADP 5 and 20 microM, respectively. Moreover, the use of ADP 20 microM resulted in less spontaneous disaggregation than 5 microM in the absence of clopidogrel (11.8% and 4.8% with ADP 5 microM and 20 microM, respectively).
CONCLUSIONS
When assessing platelet aggregation following clopidogrel, measurement of late aggregation after addition of ADP 20 microM should be preferred. Large clinical trials should be conducted to assess which parameter between residual aggregation or inhibition of platelet aggregation by clopidogrel best predicts clinical efficacy of the drug. |
Kullback-Leibler Divergence Constrained Distributionally Robust Optimization | In this paper we study distributionally robust optimization (DRO) problems where the ambiguity set of the probability distribution is defined by the Kullback-Leibler (KL) divergence. We consider DRO problems where the ambiguity is in the objective function, which takes the form of an expectation, and show that the resulting minimax DRO problems can be formulated as a one-layer convex minimization problem. We also consider DRO problems where the ambiguity is in the constraint. We show that ambiguous expectation-constrained programs may be reformulated as a one-layer convex optimization problem that takes the form of the Bernstein approximation of Nemirovski and Shapiro (2006). We further consider distributionally robust probabilistic programs. We show that the optimal solution of a probability minimization problem is also optimal for the distributionally robust version of the same problem, and also show that the ambiguous chance-constrained programs (CCPs) may be reformulated as the original CCP with an adjusted confidence level. A number of examples and special cases are also discussed in the paper to show that the reformulated problems may take simple forms that can be solved easily. The main contribution of the paper is to show that the KL divergence constrained DRO problems are often of the same complexity as their original stochastic programming problems and, thus, KL divergence appears to be a good candidate for modeling distribution ambiguities in mathematical programming. |
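For the objective-ambiguity case, the one-layer reformulation rests on a standard duality for KL-constrained worst-case expectations; a sketch in generic notation (see the paper for the precise statement):

```latex
% One-layer convex reformulation of the KL-constrained worst-case expectation
% (generic notation; a standard duality result in this literature).
\[
  \sup_{P \,:\, D_{\mathrm{KL}}(P \,\|\, P_0) \le \eta}
      \mathbb{E}_{P}\!\left[ h(x,\xi) \right]
  \;=\;
  \inf_{\alpha > 0}
      \left\{ \alpha \eta
        + \alpha \log \mathbb{E}_{P_0}\!\left[ e^{\,h(x,\xi)/\alpha} \right]
      \right\},
\]
% so min_x sup_P E_P[h(x, xi)] becomes a single convex minimization over
% (x, alpha) when h is convex in x.
```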
An Unbiased Detector of Curvilinear Structures | The extraction of curvilinear structures is an important low-level operation in computer vision that has many applications. Most existing operators use a simple model for the line that is to be extracted, i.e., they do not take into account the surroundings of a line. This leads to the undesired consequence that the line will be extracted in the wrong position whenever a line with different lateral contrast is extracted. In contrast, the algorithm proposed in this paper uses an explicit model for lines and their surroundings. By analyzing the scale-space behaviour of a model line profile, it is shown how the bias that is induced by asymmetrical lines can be removed. Furthermore, the algorithm not only returns the precise sub-pixel line position, but also the width of the line for each line point, also with sub-pixel accuracy. |
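A minimal sketch of the Hessian-based ridge measurement underlying this class of line detectors; the sub-pixel localization, width estimation, and bias removal contributed by the paper are omitted here.

```python
# Hessian-based line (ridge) strength: at each pixel, the eigenvalue of the
# Hessian with the largest magnitude responds to the profile perpendicular to
# a line. Sketch only; the paper's sub-pixel and bias-removal steps are omitted.
import numpy as np
from scipy.ndimage import gaussian_filter

def line_strength(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    # Second derivatives of the Gaussian-smoothed image.
    ryy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (axis 0)
    rxx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (axis 1)
    rxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian [[ryy, rxy], [rxy, rxx]] per pixel.
    tmp = np.sqrt(((ryy - rxx) / 2) ** 2 + rxy ** 2)
    lam1 = (ryy + rxx) / 2 + tmp
    lam2 = (ryy + rxx) / 2 - tmp
    # Bright lines on a dark background give a strongly negative eigenvalue.
    return np.minimum(lam1, lam2)

img = np.zeros((64, 64)); img[30:33, :] = 1.0   # a horizontal bright line
print(line_strength(img).argmin() // 64)        # row of strongest response (~31)
```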
A New Read-Disturb Failure Mechanism Caused by Boosting Hot-Carrier Injection Effect in MLC NAND Flash Memory | In this paper, we report a new read-disturb failure phenomenon in MLC NAND flash memory caused by a boosting hot-carrier injection effect. 1) The read-disturb failure occurred on an unselected WL (WLn+1) after the adjacent selected WL (WLn) was subjected to more than 1K read cycles. 2) The read-disturb failure of WLn+1 depends on the WLn cell's Vth and its applied voltage. 3) The mechanism of this failure can be explained by hot-carrier injection generated by discharging from the boosting voltage in the unselected cell area (drain of WLn) to ground (source of WLn).
Experiment
A NAND flash memory was fabricated based on 70nm technology. In order to investigate the mechanisms of read-disturb, 3 different read voltages and 4 different cell data states (S0, S1, S2 and S3) were applied on the selected WL with an SGS/SGD rising time shift scheme [1]. Fig. 1 and Fig. 2 show the operation condition and waveform for the read-disturb evaluation. In the evaluation, the selected WLn was subjected to more than 100K read cycles.
Result and Discussion
Fig. 3 shows the measured results of WL2 Vth shift (i.e. read-disturb failure) during WL1 read-disturb cycles with different WL1 voltages (VWL1) and cell data states (S0-S3). From these data, a serious WL2 Vth shift can be observed at VWL1=0.5V and VWL1=1.8V after 1K read cycles. In Fig. 3(a), the magnitude of the Vth shift with WL1 in the S2 state is larger than that with WL1 in the S3 state. However, in Fig. 3(b) an obvious WL2 Vth shift can be found only when WL1 is in the S3 state. In Fig. 3(c), WL2 Vth is unchanged when VWL1 is set to 3.6V. To precisely analyze the phenomenon, further TCAD simulation and analysis were carried out to clarify the mechanism of the read-disturb failure. Based on the simulation results of Fig. 4, the channel potential difference between the selected WLn (e.g. WL1) and the unselected WLn+1 (e.g. WL2) is related to the cell data states (S0-S3) and the read voltage of the selected WL (VWL1). Fig. 4(a) shows that the selected WL1 channel was turned off and the channel potential of the unselected WL2-31 was boosted to a high level when the WL1 cell data state is S2 or S3. Therefore, a sufficient potential difference appears between WLn and WLn+1 and provides a high transverse electric field. When VWL1 is increased to 1.8V, as in Fig. 4(b), a high programmed cell state (S3) is required to support the potential boosting of the unselected WL2-31. In addition, from Fig. 4(c) and the case of WL1=S2 in Fig. 4(b), we find that the potential difference is depressed since the WL1 channel is turned on by the high WL1 voltage; the potential difference can thus be reduced by the sharing effect. These simulation results correspond well with the read-disturb results of Fig. 3. Electron current density is another factor causing the Vth shift of WLn+1. From Fig. 3(a), the current density with WL1=S2 should be higher than that with WL1=S3 since its Vth is lower. Consequently, the probability of impact ionization is increased by the high current density in the case of WL1=S2. According to this model, we can clearly explain why a serious WL2 Vth shift occurs with WL1=S2 rather than WL1=S3. Fig. 5 shows a schematic diagram of the mechanism of boosting hot-carrier injection in MLC NAND flash memories. The transverse E-field is enhanced by the channel potential difference, making impact ionization highly probable. As a result, electron-hole pairs are generated, and electrons inject into the adjacent cell (WL2) owing to the higher vertical field of VWL2. Thus, the Vth of the adjacent cell is shifted after 1K cycles by the continual injection of hot electrons. Table 1 shows the measured result of cell array Vth shift for WL1 to WL4 after read-disturb cycles on WL1 or WL2; it indicates that WLn read cycles cause a Vth shift only on WLn+1, even though WLn+1 itself was not read-cycled. This result supports the conclusion that the read-disturb on the adjacent cell results from boosting hot-carrier injection.
Table 1. Read-disturb test results (Pass/Fail of WL0-WL4):
- WL1 read = 0.5V (Case 1): WL1 = S0 or S1: all pass; WL1 = S2 or S3: WL2 fails.
- WL1 read = 1.8V (Case 2): WL1 = S0, S1 or S2: all pass; WL1 = S3: WL2 fails.
- WL1 read = 3.6V (Case 3): all states pass.
- WL2 read cycles: WL2 = S0 or S1: all pass; WL2 = S2 or S3: WL3 fails.
Conclusion
A new read-disturb failure mechanism caused by a boosting hot-carrier injection effect in MLC NAND flash memory has been reported and clarified. Simulation and measured data show that the electrostatic potential difference between the reading cell and the adjacent cell plays a significant role in enhancing the hot-carrier injection effect.
Reference
[1] Ken Takeuchi, "A 56nm CMOS 99mm2 8Gb Multilevel NAND Flash Memory with 10MB/s Program Throughput," ISSCC, 2006. |
Some from Here, Some from There: Cross-Project Code Reuse in GitHub | Code reuse has well-known benefits on code quality, coding efficiency, and maintenance. Open Source Software (OSS) programmers gladly share their own code and they happily reuse others'. Social programming platforms like GitHub have normalized code foraging via their common platforms, enabling code search and reuse across different projects. Removing project borders may facilitate more efficient code foraging, and consequently faster programming. But looking for code across projects takes longer and, once found, may be more challenging to tailor to one's needs. Learning how much code reuse goes on across projects, and identifying emerging patterns in past cross-project search behavior may help future foraging efforts. To understand cross-project code reuse, here we present an in-depth study of cloning in GitHub. Using Deckard, a clone finding tool, we identified copies of code fragments across projects, and investigate their prevalence and characteristics using statistical and network science approaches, and with multiple case studies. By triangulating findings from different methods, we find that cross-project cloning is prevalent in GitHub, ranging from cloning few lines of code to whole project repositories. Some of the projects serve as popular sources of clones, and others seem to contain more clones than their fair share. Moreover, we find that ecosystem cloning follows an onion model: most clones come from the same project, then from projects in the same application domain, and finally from projects in different domains. Our results show directions for new tools that can facilitate code foraging and sharing within GitHub. |
Exploring the Political Agenda of the European Parliament Using a Dynamic Topic Modeling Approach | This study analyzes the political agenda of the European Parliament (EP) plenary, how it has evolved over time, and the manner in which Members of the European Parliament (MEPs) have reacted to external and internal stimuli when making plenary speeches. To unveil the plenary agenda and detect latent themes in legislative speeches over time, MEP speech content is analyzed using a new dynamic topic modeling method based on two layers of Non-negative Matrix Factorization (NMF). This method is applied to a new corpus of all English language legislative speeches in the EP plenary from the period 1999-2014. Our findings suggest that two-layer NMF is a valuable alternative to existing dynamic topic modeling approaches found in the literature, and can unveil niche topics and associated vocabularies not captured by existing methods. Substantively, our findings suggest that the political agenda of the EP evolves significantly over time and reacts to exogenous events such as EU Treaty referenda and the emergence of the Euro-crisis. MEP contributions to the plenary agenda are also found to be impacted upon by voting behaviour and the committee structure of the Parliament. |
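A minimal sketch of the two-layer NMF idea described above, assuming per-window topic fitting followed by a second NMF over the stacked window-topic vectors; the window size, ranks, and toy documents here are placeholders, not the paper's configuration.

```python
# Two-layer NMF for dynamic topics: layer 1 fits topics within each time
# window; layer 2 fits NMF over the stacked window-topic vectors to link
# topics across windows. Sketch only; ranks and windows are placeholders.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

windows = [  # toy stand-ins for per-period speech collections
    ["budget deficit euro crisis bailout", "euro crisis greece bailout"],
    ["fisheries quota treaty vote", "treaty referendum vote ireland"],
]
vectorizer = TfidfVectorizer()
vectorizer.fit([doc for w in windows for doc in w])

# Layer 1: window topics (rows of components_ are topic-term vectors).
window_topics = []
for docs in windows:
    X = vectorizer.transform(docs)
    window_topics.append(NMF(n_components=2, init="nndsvd").fit(X).components_)

# Layer 2: dynamic topics across windows.
stacked = np.vstack(window_topics)
dynamic = NMF(n_components=2, init="nndsvd").fit(stacked)
terms = np.array(vectorizer.get_feature_names_out())
for topic in dynamic.components_:
    print(terms[topic.argsort()[-3:]])  # top terms per dynamic topic
```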
Emergency centre investigation of first-onset seizures in adults in the Western Cape, South Africa. | BACKGROUND
Patients with first-onset seizures commonly present to emergency centres (ECs). The differential diagnosis is broad, potentially life-threatening conditions need to be excluded, and these patients need to be correctly diagnosed and appropriately referred. There are currently no data on adults presenting with first-onset seizures to ECs in South Africa.
OBJECTIVE
To review which investigations were performed on adults presenting with first-onset seizures to six ECs in the Western Cape Province.
METHODS
A prospective, cross-sectional study was conducted from 1 July 2011 to 31 December 2011. All adults with first-onset seizures were included; children and trauma patients were excluded. Subgroup analyses were conducted regarding HIV status and inter-facility variation.
RESULTS
A total of 309 patients were included. Computed tomography (CT) scans were planned in 218 (70.6%) patients, but only performed in 169; 96 (56.8%) showed abnormalities judged to be causative (infarction, intracerebral haemorrhage and atrophy being the most common). At least 80% of patients (n=247) received a full renal and electrolyte screen, blood glucose testing and a full haematological screen. Lumbar puncture (LP) was performed in 67 (21.7%) patients, with normal cerebrospinal fluid findings in 51 (76.1%). Only 27 (8%) patients had an electroencephalogram, of which 5 (18%) were abnormal. There was a statistically significant difference in the number of CT scans (p=0.002) and LPs (p<0.001) performed in the HIV-positive group (n=49).
CONCLUSION
This study demonstrated inconsistency and wide local variance for all types of investigations done. It emphasises the need for a local guideline to direct doctors to appropriate investigations, ensuring better quality patient care and potential cost-saving. |
Exploring Demographic Language Variations to Improve Multilingual Sentiment Analysis in Social Media | Different demographics, e.g., gender or age, can demonstrate substantial variation in their language use, particularly in informal contexts such as social media. In this paper we focus on learning gender differences in the use of subjective language in English, Spanish, and Russian Twitter data, and explore cross-cultural differences in emoticon and hashtag use for male and female users. We show that gender differences in subjective language can effectively be used to improve sentiment analysis, and in particular, polarity classification for Spanish and Russian. Our results show statistically significant relative F-measure improvements over the gender-independent baseline of 1.5% and 1% for Russian, 2% and 0.5% for Spanish, and 2.5% and 5% for English, for polarity and subjectivity classification respectively.
Patch-based Progressive 3D Point Set Upsampling | We present a detail-driven deep neural network for point set upsampling. A high-resolution point set is essential for point-based rendering and surface reconstruction. Inspired by the recent success of neural image super-resolution techniques, we progressively train a cascade of patch-based upsampling networks on different levels of detail end-to-end. We propose a series of architectural design contributions that lead to a substantial performance boost. The effect of each technical contribution is demonstrated in an ablation study. Qualitative and quantitative experiments show that our method significantly outperforms the state-of-the-art learning-based [58, 59] and optimization-based [23] approaches, both in terms of handling low-resolution inputs and revealing high-fidelity details. The data and code are at https://github.com/yifita/3pu.
Glymphatic clearance controls state-dependent changes in brain lactate concentration. | Brain lactate concentration is higher during wakefulness than in sleep. However, it is unknown why arousal is linked to an increase in brain lactate and why lactate declines within minutes of sleep. Here, we show that the glymphatic system is responsible for state-dependent changes in brain lactate concentration. Suppression of glymphatic function via acetazolamide treatment, cisterna magna puncture, aquaporin 4 deletion, or changes in body position reduced the decline in brain lactate normally observed when awake mice transition into sleep or anesthesia. Concurrently, the same manipulations diminished accumulation of lactate in cervical, but not in inguinal lymph nodes when mice were anesthetized. Thus, our study suggests that brain lactate is an excellent biomarker of the sleep-wake cycle and increases further during sleep deprivation, because brain lactate is inversely correlated with glymphatic-lymphatic clearance. This analysis provides fundamental new insight into brain energy metabolism by demonstrating that glucose that is not fully oxidized can be exported as lactate via glymphatic-lymphatic fluid transport. |
Finite Element Method Simulation of Machining of AISI 1045 Steel With A Round Edge Cutting Tool | In this paper, FEM modeling and simulation of orthogonal cutting of AISI 1045 steel is studied using the dynamic explicit Arbitrary Lagrangian Eulerian (ALE) method. The simulation model utilizes the advantages offered by the ALE method in simulating plastic flow around the round edge of the cutting tool and eliminates the need for chip separation criteria. The Johnson-Cook work material model is used for elastic-plastic work deformations. A methodology developed to determine friction characteristics from orthogonal cutting tests is also utilized for chip-tool interfacial friction modeling. The simulation results include predicted chip formation as well as temperature and stress distributions. These results are highly essential in predicting machining-induced residual stresses and other properties of the machined surface.
GLASSES: Relieving The Myopia Of Bayesian Optimisation | We present glasses: Global optimisation with Look-Ahead through Stochastic Simulation and Expected-loss Search. The majority of global optimisation approaches in use are myopic, in only considering the impact of the next function value; the non-myopic approaches that do exist are able to consider only a handful of future evaluations. Our novel algorithm, glasses, permits the consideration of dozens of evaluations into the future. This is done by approximating the ideal look-ahead loss function, which is expensive to evaluate, by a cheaper alternative in which the future steps of the algorithm are simulated beforehand. An Expectation Propagation algorithm is used to compute the expected value of the loss. We show that the far-horizon planning thus enabled leads to substantive performance gains in empirical tests. |
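For contrast with the myopic approaches the abstract criticizes, a sketch of the one-step expected-improvement acquisition is shown below (for minimization, assuming a Gaussian-process posterior). This is the myopic baseline that look-ahead methods such as GLASSES generalize, not the GLASSES algorithm itself.

```python
# One-step (myopic) expected improvement -- the kind of acquisition that
# look-ahead methods extend. Assumes GP posterior mean/std at candidates.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    sigma = np.maximum(sigma, 1e-12)     # avoid division by zero
    z = (best_f - mu) / sigma            # standardized improvement
    return (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```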
The effect of soy isoflavone on bone mineral density in postmenopausal Taiwanese women with bone loss: a 2-year randomized double-blind placebo-controlled study | The treatment of 300-mg/day isoflavones (aglycone equivalents) (172.5 mg genistein + 127.5 mg daidzein) for 2 years failed to prevent lumbar spine and total proximal femur bone mineral density (BMD) from declining as compared with the placebo group in a randomized, double-blind, two-arm designed study enrolling 431 postmenopausal women 45–65 years old. This study evaluated the effects of soy isoflavones on bone metabolism in postmenopausal women. Four hundred and thirty-one women, aged 45–65 years, orally consumed 300-mg/day isoflavones (aglycone equivalents) or a placebo for 2 years in a parallel group, randomized, double-blind, two-arm study. Each participant also ingested 600 mg of calcium and 125 IU of vitamin D3 per day. The BMD of the lumbar spine and total proximal femur were measured using dual-energy X-ray absorptiometry at baseline and every half-year thereafter. Serum bone-specific alkaline phosphatase, urinary N-telopeptide of type 1 collagen/creatinine, and other safety assessments were examined regularly. Two hundred out of 217 subjects in the isoflavone group and 199 out of 214 cases in placebo group completed the treatment. Serum concentrations of isoflavone metabolites, genistein and daidzein, of the intervention group were remarkably elevated following intake of isoflavones (p < 0.001). However, differences in the mean percentage changes of BMD throughout the treatment period were not statistically significant (lumbar spine, p = 0.42; total femur, p = 0.39) between the isoflavone and placebo groups, according to the generalized estimating equation (GEE) method. A significant time trend of bone loss was observed at both sites as assessed by the GEE method following repeated measurement of BMD (p < 0.001). Differences in bone marker levels were not significant between the two treatment groups. Treatment with 300-mg/day isoflavones (aglycone equivalents) failed to prevent a decline in BMD in the lumbar spine or total femur compared with the placebo group. |
A trust-based consumer decision-making model in electronic commerce: The role of trust, perceived risk, and their antecedents | Are trust and risk important in consumers' electronic commerce purchasing decisions? What are the antecedents of trust and risk in this context? How do trust and risk affect an Internet consumer's purchasing decision? To answer these questions, we i) develop a theoretical framework describing the trust-based decision-making process a consumer uses when making a purchase from a given site, ii) test the proposed model using a Structural Equation Modeling technique on Internet consumer purchasing behavior data collected via a Web survey, and iii) consider the implications of the model. The results of the study show that Internet consumers' trust and perceived risk have strong impacts on their purchasing decisions. Consumer disposition to trust, reputation, privacy concerns, security concerns, the information quality of the Website, and the company's reputation, have strong effects on Internet consumers' trust in the Website. Interestingly, the presence of a third-party seal did not strongly influence consumers' trust.
Bacterial metabolism and health-related effects of galacto-oligosaccharides and other prebiotics. | Most studies involving prebiotic oligosaccharides have been carried out using inulin and its fructo-oligosaccharide (FOS) derivatives, together with various forms of galacto-oligosaccharides (GOS). Although many intestinal bacteria are able to grow on these carbohydrates, most investigations have demonstrated that the growth of bifidobacteria, and to a lesser degree lactobacilli, is particularly favoured. Because of their safety, stability, organoleptic properties, resistance to digestion in the upper bowel and fermentability in the colon, as well as their abilities to promote the growth of beneficial bacteria in the gut, these prebiotics are being increasingly incorporated into the Western diet. Inulin-derived oligosaccharides and GOS are mildly laxative, but can result in flatulence and osmotic diarrhoea if taken in large amounts. However, their effects on large bowel habit are relatively minor. Although the literature dealing with the health significance of prebiotics is not as extensive as that concerning probiotics, considerable evidence has accrued showing that consumption of GOS and FOS can have significant health benefits, particularly in relation to their putative anti-cancer properties, influence on mineral absorption, lipid metabolism, and anti-inflammatory and other immune effects such as atopic disease. In many instances, prebiotics seem to be more effective when used as part of a synbiotic combination. |
Improved outcome of adult Burkitt lymphoma/leukemia with rituximab and chemotherapy: report of a large prospective multicenter trial. | This largest prospective multicenter trial for adult patients with Burkitt lymphoma/leukemia aimed to prove the efficacy and feasibility of short-intensive chemotherapy combined with the anti-CD20 antibody rituximab. From 2002 to 2011, 363 patients 16 to 85 years old were recruited in 98 centers. Treatment consisted of six 5-day chemotherapy cycles with high-dose methotrexate, high-dose cytosine arabinoside, cyclophosphamide, etoposide, ifosfamide, corticosteroids, and triple intrathecal therapy. Patients >55 years old received a reduced regimen. Rituximab was given before each cycle and twice as maintenance, for a total of 8 doses. The rate of complete remission was 88% (319/363); overall survival (OS) at 5 years, 80%; and progression-free survival, 71%; with significant differences between adolescents, adults, and elderly patients (OS rates of 90%, 84%, and 62%, respectively). Full treatment could be applied in 86% of the patients. The most important prognostic factors were International Prognostic Index (IPI) score (0-2 vs 3-5; P = .0005), age-adjusted IPI score (0-1 vs 2-3; P = .0001), and gender (male vs female; P = .004). The high cure rate in this prospective trial with a substantial number of participating hospitals demonstrates the efficacy and feasibility of chemoimmunotherapy, even in elderly patients. This trial was registered at www.clinicaltrials.gov as #NCT00199082.
On the origin of the bilateral filter and ways to improve it | Additive noise removal from a given signal is an important problem in signal processing. Among the most appealing aspects of this field are the ability to refer it to a well-established theory, and the fact that the proposed algorithms in this field are efficient and practical. Adaptive methods based on anisotropic diffusion (AD), weighted least squares (WLS), and robust estimation (RE) were proposed as iterative locally adaptive machines for noise removal. Tomasi and Manduchi (see Proc. 6th Int. Conf. Computer Vision, New Delhi, India, p.839-46, 1998) proposed an alternative noniterative bilateral filter for removing noise from images. This filter was shown to give similar and possibly better results to the ones obtained by iterative approaches. However, the bilateral filter was proposed as an intuitive tool without theoretical connection to the classical approaches. We propose such a bridge, and show that the bilateral filter also emerges from the Bayesian approach, as a single iteration of some well-known iterative algorithm. Based on this observation, we also show how the bilateral filter can be improved and extended to treat more general reconstruction problems. |
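As a reference for readers, a compact (unoptimized) version of the bilateral filter discussed above might look as follows; parameter names are illustrative and the input is assumed to be a float image.

```python
# Direct bilateral filter: each pixel is a normalized average of neighbors,
# weighted by spatial closeness (domain kernel) and intensity similarity
# (range kernel). O(H*W*r^2); real implementations are heavily optimized.
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    H, W = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # domain kernel
    pad = np.pad(img, radius, mode="reflect")
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # range kernel
            w = spatial * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```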
An Octa-band Monopole Antenna With a Small Nonground Portion Height for LTE/WLAN Mobile Phones | An octa-band antenna for 5.7-in mobile phones with a size of 80 mm × 6 mm × 5.8 mm is proposed and studied. The proposed antenna is composed of a coupled line, a monopole branch, and a ground branch. By using the 0.25-, 0.5-, and 0.75-wavelength modes, the lower band (704–960 MHz) and the higher band (1710–2690 MHz) are covered. The working mechanism is analyzed based on the S-parameters and the surface current distributions. The attractive merits of the proposed antenna are that the nonground portion height is only 6 mm and no lumped elements are used. A prototype of the proposed antenna is fabricated and measured. The measured −6 dB impedance bandwidths are 350 MHz (0.67–1.02 GHz) and 1.27 GHz (1.65–2.92 GHz) at the lower and higher bands, respectively, which can cover the LTE700, GSM850, GSM900, GSM1800, GSM1900, UMTS, LTE2300, and LTE2500 bands. The measured patterns, gains, and efficiencies are presented.
On parallel integer sorting | We present an optimal algorithm for sorting n integers in the range [1, n^c] (for any constant c) on the EREW PRAM model where the word length is n^ε, for any ε > 0. Using this algorithm, the best known upper bound for integer sorting on the (O(log n) word length) EREW PRAM model is improved. In addition, a novel parallel range-reduction algorithm which results in a near-optimal randomized integer sorting algorithm is presented. For the case when the keys are uniformly distributed integers in an arbitrary range, we give an algorithm whose expected running time is optimal.
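The range assumption [1, n^c] is what makes linear-work sorting possible: c stable base-n digit passes suffice. The sketch below shows only this sequential intuition, not the paper's EREW PRAM algorithm.

```python
# Sequential analogue of integer sorting in [1, n^c]: c stable base-n digit
# passes give O(c*n) total work. (Illustrative only; not the PRAM algorithm.)
def radix_sort(a, c):
    n = len(a)
    for p in range(c):                           # one pass per base-n digit
        buckets = [[] for _ in range(n)]
        for x in a:
            buckets[(x // n**p) % n].append(x)   # stable bucketing
        a = [x for b in buckets for x in b]
    return a

print(radix_sort([17, 3, 24, 22, 9], c=2))       # -> [3, 9, 17, 22, 24]
```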
Benefits of partnered strength training for prostate cancer survivors and spouses: results from a randomized controlled trial of the Exercising Together project. | BACKGROUND
Prostate cancer can negatively impact quality of life of the patient and his spouse caregiver, but interventions rarely target the health of both partners simultaneously. We tested the feasibility and preliminary efficacy of a partnered strength training program on the physical and mental health of prostate cancer survivors (PCS) and spouse caregivers.
METHODS
Sixty-four couples were randomly assigned to 6 months of partnered strength training (Exercising Together, N = 32) or usual care (UC, N = 32). Objective measures included body composition (lean, fat and trunk fat mass (kg), and % body fat) by DXA, upper and lower body muscle strength by 1-repetition maximum, and physical function by the physical performance battery (PPB). Self-reported measures included the physical and mental health summary scales and physical function and fatigue subscales of the SF-36 and physical activity with the CHAMPS questionnaire.
RESULTS
Couple retention rates were 100 % for Exercising Together and 84 % for UC. Median attendance of couples to Exercising Together sessions was 75 %. Men in Exercising Together became stronger in the upper body (p < 0.01) and more physically active (p < 0.01) than UC. Women in Exercising Together increased muscle mass (p = 0.05) and improved upper (p < 0.01) and lower body (p < 0.01) strength and PPB scores (p = 0.01) more than UC.
CONCLUSIONS
Exercising Together is a novel couples-based approach to exercise that was feasible and improved several health outcomes for both PCS and their spouses.
IMPLICATIONS FOR CANCER SURVIVORS
A couples-based approach should be considered in cancer survivorship programs so that outcomes can mutually benefit both partners.
TRIAL REGISTRATION
ClinicalTrials.gov NCT00954044. |
Validation of the functional assessment of cancer therapy-gastric module for the Chinese population | BACKGROUND
Quality of life (QoL) assessment has become an important aspect of the clinical management of gastric cancer (GC), which poses a greater health threat in Chinese populations around the world. Functional Assessment of Cancer Therapy-Gastric Module (FACT-Ga), a questionnaire developed specifically to measure QoL of patients with GC, has never been validated in Chinese subjects. The current study was designed to examine the psychometric properties of FACT-Ga as a GC specific QoL instrument for its future use in Chinese populations.
METHODS
A sample of 67 Chinese patients with GC in the National University Hospital, Singapore was investigated cross-sectionally. The participants independently completed either English or Chinese versions of the FACT-Ga and the European Quality of Life-5 Dimensions (EQ-5D). Reliability was measured as the Cronbach's α for EQ-5D, and five subscale scores and two total scores of FACT-Ga. The sensitivity to patients' clinical status was evaluated by comparing EQ-5D and FACT-Ga scores between clinical subgroups classified by Clinical Stage and Treatment Intent. The construct validity of FACT-Ga was assessed internally by examining the item-to-scale correlations and externally by contrasting the FACT-Ga subscales with the EQ-5D domains.
RESULTS
For both FACT-Ga and EQ-5D, patients treated with curative intent rated their QoL higher than those treated for palliation, and early-stage patients scored higher than those in the late stage. The sensitivity of FACT-Ga scores to clinical status was differential, as four of seven FACT-Ga scores were significant for Treatment Intent while only one subscale score was significant for Clinical Stage. Six FACT-Ga scores had Cronbach's α of 0.8 or above, indicating excellent reliability. For construct validity, 45 of 46 items converged about their respective subscales. The monotrait-multimethod correlations between QoL constructs of FACT-Ga and EQ-5D were stronger than the multitrait-multimethod correlations, as theoretically hypothesized, suggesting good convergent and discriminant validity.
CONCLUSIONS
Given the excellent reliability and good construct validity, FACT-Ga scores are able to distinguish patient groups with different clinical characteristics in the expected direction. Therefore FACT-Ga can be used as a discriminative instrument for measuring QoL of Chinese patients with GC. |
Temperament and personality: origins and outcomes. | This article reviews how a temperament approach emphasizing biological and developmental processes can integrate constructs from subdisciplines of psychology to further the study of personality. Basic measurement strategies and findings in the investigation of temperament in infancy and childhood are reviewed. These include linkage of temperament dimensions with basic affective-motivational and attentional systems, including positive affect/approach, fear, frustration/anger, and effortful control. Contributions of biological models that may support these processes are then reviewed. Research indicating how a temperament approach can lead researchers of social and personality development to investigate important person-environment interactions is also discussed. Lastly, adult research suggesting links between temperament dispositions and the Big Five personality factors is described. |
A Communication Goals Model of Online Persuasion | Online communication media are being used increasingly for attempts to persuade message receivers. This paper presents a theoretical model that predicts outcomes of online persuasion based on the structure of primary and secondary goals message receivers hold toward the communication. |
A randomized controlled trial of ultrasound-guided peripherally inserted central catheters compared with standard radiograph in neonates | Objective: The placement of a peripherally inserted central catheter (PICC) routinely incorporates tip position confirmation using standard radiographs. In this study, we sought to determine whether real-time ultrasound (RTUS) could be used to place a PICC in a shorter time period, with fewer manipulations and fewer radiographs, than placement confirmed by radiographs alone. Study Design: This was a prospective, randomized trial of infants who required PICC placement. Catheters were placed either using standard radiographs, with blinded RTUS evaluation of the catheters, or with RTUS guidance with input on catheter tip location. The number of radiographs required to confirm proper positioning, the duration of the procedure, and manipulations of the lines were recorded for both groups. Final confirmation of PICC placement was by radiograph in both groups. Result: A total of 64 patients were enrolled in the study, with 16 failed PICC attempts. Of the 48 remaining infants, 28 were in the standard placement group and 20 were in the RTUS-guided group. The mean±s.d. gestational age and weight at the time of placement were 30±4 weeks and 1229±485 g, respectively. RTUS use significantly decreased the time of line placement by 30 min (P=0.034), and decreased the median number of manipulations (0 vs 1, P=0.032) and radiographs (1 vs 2, P=0.001) taken to place the catheters. Early identification of the PICC by RTUS was possible in all cases and would have saved an additional 38 min if radiographs were not required. Conclusion: In the hands of ultrasound (US)-experienced neonatologists, RTUS-guided PICC placement reduces catheter insertion duration and is associated with fewer manipulations and radiographs when compared with conventional placement.
Regional E-Government Readiness Evaluation Based on Administrative Ecology Theory | Against the background of informatization in China, a regional E-government readiness evaluation index system is built on the basis of administrative ecology theory, through an empirical analysis of the E-government readiness of 31 provinces, autonomous regions, and municipalities. We found large differences in the E-government readiness of different areas: readiness scores tend to decrease from east to west, and all areas have strong hardware but a very weak environmental foundation. Finally, we suggest strengthening the development of social informatization in three aspects: the economic basis, communication infrastructure, and human capital construction.
Steady-state analysis of the set-membership affine projection algorithm | Among the adaptive filtering algorithms the set-membership affine projection (SM-AP) algorithm has the attractive feature of not trading off misadjustment with convergence speed. This paper presents an analysis of the steady-state mean-square error (MSE) of the SM-AP algorithm. Our analysis relies on the energy conservation method and does not assume a specific probability distribution for the input vector. Moreover, since the SM-AP algorithm with a fixed-modulus error-based constraint vector generalizes some important algorithms, such as the SM normalized least-mean-square (SM-NLMS) algorithm, the results can be directly applied to these algorithms. Simulation results confirm the accuracy of our analysis. |
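Since the SM-AP algorithm with a fixed-modulus constraint vector generalizes SM-NLMS, the special case gives a feel for the update being analysed. The following is a sketch of one SM-NLMS step, with illustrative variable names; it is not the paper's analysis code.

```python
# One SM-NLMS step, the special case covered by the SM-AP analysis:
# the filter updates only when the a priori error exceeds the bound gamma,
# and then only enough to bring the error back onto the bound.
import numpy as np

def sm_nlms_step(w, x, d, gamma, delta=1e-8):
    e = d - w @ x                        # a priori output error
    if abs(e) > gamma:                   # outside the set-membership bound?
        mu = 1.0 - gamma / abs(e)        # data-dependent step size
        w = w + mu * e * x / (x @ x + delta)
    return w
```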
Combining Cognitive Behavioural Therapy and Pharmacotherapy in the Treatment of Anxiety Disorders: True Gains or False Hopes? | Anxiety disorders such as obsessive-compulsive disorder (OCD), panic disorder, social anxiety disorder (SAD), generalized anxiety disorder (GAD) and posttraumatic stress disorder (PTSD) are the most common of the psychiatric disorders worldwide, with a prevalence of up to 18% (1). These disorders often receive little attention from mental health services despite their great contribution to overall clinical burden and economic costs.
An Output-Capacitorless Low-Dropout Regulator With Direct Voltage-Spike Detection | An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-µm CMOS technology (where VTHN ≈ 0.5 V and VTHP ≈ −0.65 V). Experimental results show that the LDO consumes only 19 µA. It regulates the output at 0.8 V from a 1-V supply, with a dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 µs, respectively, whereas they are more than 420 mV and 30 µs for the LDO without the proposed detection circuit.
Semi-supervised learning of compact document representations with deep networks | Finding good representations of text documents is crucial in information retrieval and classification systems. Today the most popular document representation is based on a vector of word counts in the document. This representation neither captures dependencies between related words, nor handles synonyms or polysemous words. In this paper, we propose an algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network. The model can be trained efficiently on partially labeled corpora, producing very compact representations of documents, while retaining as much class information and joint word statistics as possible. We show that it is advantageous to exploit even a few labeled samples during training. |
Neurohex: A Deep Q-learning Hex Agent | DeepMind's recent spectacular success in using deep convolutional neural nets and machine learning to build superhuman level agents — e.g. for Atari games via deep Q-learning and for the game of Go via other deep Reinforcement Learning methods — raises many questions, including to what extent these methods will succeed in other domains. In this paper we consider DQL for the game of Hex: after supervised initialization, we use self-play to train NeuroHex, an 11-layer CNN that plays Hex on the 13×13 board. Hex is the classic two-player alternate-turn stone placement game played on a rhombus of hexagonal cells in which the winner is whoever connects their two opposing sides. Despite the large action and state space, our system trains a Q-network capable of strong play with no search. After two weeks of Q-learning, NeuroHex achieves respective win-rates of 20.4% as first player and 2.1% as second player against a 1-second/move version of MoHex, the current ICGA Olympiad Hex champion. Our data suggest further improvement might be possible with more training time. 1 Motivation, Introduction, Background. 1.1 Motivation. Motivated by the success described above, we explore whether DQL can work to build a strong network for the game of Hex. 1.2 The Game of Hex. Hex is the classic two-player connection game played on an n×n rhombus of hexagonal cells. Each player is assigned two opposite sides of the board and a set of colored stones; in alternating turns, each player puts one of their stones on an empty cell; the winner is whoever joins their two sides with a contiguous chain of their stones. Draws are not possible (at most one player can have a winning chain, and if the game ends with the board full, then exactly one player will have such a chain), and for each n×n board there exists a winning strategy for the 1st player [7]. Solving arbitrary Hex positions — finding their win/loss value — is PSPACE-complete [11]. Despite its simple rules, Hex has deep tactics and strategy. Hex has served as a test bed for algorithms in artificial intelligence since Shannon and E. F. Moore built a resistance network to play the game [12]. Computers have solved all 9×9 1-move openings and two 10×10 1-move openings, and 11×11 and 13×13 Hex are games of the International Computer Games Association's annual Computer Olympiad [8]. In this paper we consider Hex on the 13×13 board. [Fig. 1: The game of Hex — (a) a game in progress, where Black wants to join top and bottom and White wants to join left and right; (b) a finished game, won by Black.] 1.3 Related Work. The two works that inspire this paper are [10] and [13], both from Google DeepMind. [10] introduces Deep Q-learning with Experience Replay. Q-learning is a reinforcement learning (RL) algorithm that learns a mapping from states to action values by backing up action value estimates from subsequent states to improve those in previous states. In Deep Q-learning the mapping from states to action values is learned by a deep neural network. Experience replay extends standard Q-learning by storing agent experiences in a memory buffer and sampling from these experiences every time-step to perform updates.
This algorithm achieved superhuman performance on several classic Atari games using only raw visual input. [13] introduces AlphaGo, a Go-playing program that combines Monte Carlo tree search with convolutional neural networks: one guides the search (policy network), another evaluates position quality (value network). Deep reinforcement learning (RL) is used to train both the value and policy networks, which each take a representation of the gamestate as input. The policy network outputs a probability distribution over available moves indicating the likelihood of choosing each move. The value network outputs a single scalar value estimating the quality of the position.
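A minimal skeleton of the experience-replay machinery described above is sketched below; the Q-network, Hex environment, and hyperparameters are deliberately left abstract, and this is not NeuroHex's actual code.

```python
# Experience replay skeleton of the kind used in deep Q-learning:
# store transitions, then train on random minibatches to decorrelate updates.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def push(self, s, a, r, s_next, done):
        self.buf.append((s, a, r, s_next, done))

    def sample(self, batch_size=32):
        return random.sample(self.buf, min(batch_size, len(self.buf)))

def q_target(r, next_q_values, done, gamma=0.99):
    # Bellman backup: terminal states contribute only the immediate reward.
    return r if done else r + gamma * max(next_q_values)
```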
Quantum Entropic Security and Approximate Quantum Encryption | An encryption scheme is said to be entropically secure if an adversary whose min-entropy on the message is upper bounded cannot guess any function of the message. Similarly, an encryption scheme is entropically indistinguishable if the encrypted version of a message whose min-entropy is high enough is statistically indistinguishable from a fixed distribution. We present full generalizations of these two concepts to the encryption of quantum states in which the quantum conditional min-entropy, as introduced by Renner, is used to bound the adversary's prior information on the message. A proof of the equivalence between quantum entropic security and quantum entropic indistinguishability is presented. We also provide proofs of security for two different ciphers in this model and a proof for a lower bound on the key length required by any such cipher. These ciphers generalize existing schemes for approximate quantum encryption to the entropic security model. |
Erratum to: FUGE: A joint meta-heuristic approach to cloud job scheduling algorithm using fuzzy theory and a genetic method | Job scheduling is one of the most important research problems in distributed systems, particularly cloud computing. The dynamic and heterogeneous nature of resources in such distributed systems makes optimum job scheduling a non-trivial task. Maximal resource utilization in cloud computing demands an algorithm that allocates resources to jobs with optimal execution time and cost. The critical issue for job scheduling is assigning jobs to the most suitable resources, considering user preferences and requirements. In this paper, we present a hybrid approach called FUGE that is based on fuzzy theory and a genetic algorithm (GA) and that aims to perform optimal load balancing considering execution time and cost. We modify the standard genetic algorithm (SGA) and use fuzzy theory to devise a fuzzy-based steady-state GA in order to improve SGA performance in terms of makespan. In detail, the FUGE algorithm assigns jobs to resources by considering virtual machine (VM) processing speed, VM memory, VM bandwidth, and the job lengths. We mathematically prove that our optimization problem is convex, with well-known analytical conditions (specifically, Karush–Kuhn–Tucker conditions). We compare the performance of our approach to several other cloud scheduling models. The results of the experiments show the efficiency of the FUGE approach in terms of execution time, execution cost, and average degree of imbalance.
SOKOBAN and other motion planning problems | We consider a natural family of motion planning problems with movable obstacles and obtain hardness results for them. Some members of the family are shown to be PSPACE-complete, thus improving and extending (and also simplifying) a previous NP-hardness result of Wilfong. The family considered includes a motion planning problem which forms the basis of a popular computer game called SOKOBAN. The decision problem corresponding to SOKOBAN is shown to be NP-hard. The motion planning problems considered are related to the "warehouseman's problem" considered by Hopcroft, Schwartz and Sharir, and to geometric versions of the motion planning problem on graphs considered
Correlation between plasma clozapine concentration and heart rate variability in schizophrenic patients | Forty schizophrenic patients treated with 50–600 mg/day of clozapine as monotherapy and 40 normal control subjects were tested for heart rate variability (HRV) which is mediated by the vagus nerve using acetylcholine as neurotransmitter. As compared to the control subjects, the patients showed essentially reduced HRV parameters which were negatively correlated with the plasma clozapine levels. Therefore, clozapine’s anticholinergic effect is correlated to the plasma clozapine level when measured by the decrease of HRV. We suggest that HRV data might be useful as a predictor for plasma clozapine levels. |
The security of shopping online | Online shopping is increasingly accepted by Internet users, reflecting its convenience, speed, efficiency, and economy. In online shopping, the security of personal information is a major problem on the Internet. This article summarizes the characteristics of online shopping and the main security problems in its current development, and proposes security measures for online shopping and transactions.
Quantum Recommendation Systems | A recommendation system uses the past purchases or ratings of n products by a group of m users, in order to provide personalized recommendations to individual users. The information is modeled as an m×n preference matrix which is assumed to have a good rank-k approximation, for a small constant k. In this work, we present a quantum algorithm for recommendation systems that has running time O(poly(k)polylog(mn)). All known classical algorithms for recommendation systems that work through reconstructing an approximation of the preference matrix run in time polynomial in the matrix dimension. Our algorithm provides good recommendations by sampling efficiently from an approximation of the preference matrix, without reconstructing the entire matrix. For this, we design an efficient quantum procedure to project a given vector onto the row space of a given matrix. This is the first algorithm for recommendation systems that runs in time polylogarithmic in the dimensions of the matrix and provides an example of a quantum machine learning algorithm for a real world application. |
Noisy Or-based model for Relation Extraction using Distant Supervision | Distant supervision, a paradigm of relation extraction where training data is created by aligning facts in a database with a large unannotated corpus, is an attractive approach for training relation extractors. Various models have been proposed in recent literature to align the facts in the database to their mentions in the corpus. In this paper, we discuss and critically analyse a popular alignment strategy called the “at least one” heuristic. We provide a simple, yet effective relaxation to this strategy. We formulate the inference procedures in training as integer linear programming (ILP) problems and implement the relaxation to the “at least one” heuristic via a soft constraint in this formulation. Empirically, we demonstrate that this simple strategy leads to a better performance under certain settings over the existing approaches.
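The noisy-or combination named in the title is simple to state: the probability that a relation holds is one minus the probability that every mention fails to express it. The sketch below shows just this combination rule; the paper embeds it in an ILP formulation, which is not reproduced here.

```python
# Noisy-or over per-mention probabilities: a soft version of the
# "at least one" heuristic. (Sketch; the paper embeds this in an ILP.)
import numpy as np

def noisy_or(mention_probs):
    p = np.asarray(mention_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

print(noisy_or([0.2, 0.5, 0.1]))   # 1 - 0.8*0.5*0.9 = 0.64
```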
Brain responses to lexical-semantic priming in children at-risk for dyslexia | Deviances in early event-related potential (ERP) components reflecting auditory and phonological processing are well-documented in children at familial risk for dyslexia. However, little is known about brain responses which index processing in other linguistic domains such as lexicon, semantics and syntax in this group. The present study investigated effects of lexical-semantic priming in 20- and 24-month-olds at-risk for dyslexia and typically developing controls in two ERP experiments. In both experiments an early component assumed to reflect facilitated lexical processing for primed words was enhanced in the at-risk group compared to the control group. Moreover, an N400-like response which was prominent in the control group was attenuated or absent in at-risk children. Results suggest that deficiencies in young children at-risk for dyslexia are not restricted to perceptual and lower-level phonological abilities, but also affect higher order linguistic skills such as lexical and semantic processing. |
Obesity relationships with community design, physical activity, and time spent in cars. | BACKGROUND
Obesity is a major health problem in the United States and around the world. To date, relationships between obesity and aspects of the built environment have not been evaluated empirically at the individual level.
OBJECTIVE
To evaluate the relationship between the built environment around each participant's place of residence and self-reported travel patterns (walking and time in a car), body mass index (BMI), and obesity for specific gender and ethnicity classifications.
METHODS
Body Mass Index, minutes spent in a car, kilometers walked, age, income, educational attainment, and gender were derived through a travel survey of 10,878 participants in the Atlanta, Georgia region. Objective measures of land use mix, net residential density, and street connectivity were developed within a 1-kilometer network distance of each participant's place of residence. A cross-sectional design was used to associate urban form measures with obesity, BMI, and transportation-related activity when adjusting for sociodemographic covariates. Discrete analyses were conducted across gender and ethnicity. The data were collected between 2000 and 2002 and analysis was conducted in 2004.
RESULTS
Land-use mix had the strongest association with obesity (BMI ≥ 30 kg/m²), with each quartile increase being associated with a 12.2% reduction in the likelihood of obesity across gender and ethnicity. Each additional hour spent in a car per day was associated with a 6% increase in the likelihood of obesity. Conversely, each additional kilometer walked per day was associated with a 4.8% reduction in the likelihood of obesity. As a continuous measure, BMI was significantly associated with urban form for white cohorts. Relationships among urban form, walk distance, and time in a car were stronger among white than black cohorts.
CONCLUSIONS
Measures of the built environment and travel patterns are important predictors of obesity across gender and ethnicity, yet relationships among the built environment, travel patterns, and weight may vary across gender and ethnicity. Strategies to increase land-use mix and distance walked while reducing time in a car can be effective as health interventions. |
Plasma proteasome level is a reliable early marker of malignant transformation of liver cirrhosis. | BACKGROUND
Proteasomes are the main non-lysosomal proteolytic structures which regulate crucial cellular processes. Circulating proteasome levels can be measured using an ELISA test and can be considered as a tumour marker in several types of malignancy. Given that there is no sensitive marker of hepatocellular carcinoma (HCC) in patients with cirrhosis, we measured plasma proteasome levels in 83 patients with cirrhosis (33 without HCC, 50 with HCC) and 40 controls.
METHODS AND RESULTS
Patients with HCC were sub-classified into three groups according to tumour mass. alpha-Fetoprotein (AFP) was also measured. Plasma proteasome levels were significantly higher in patients with HCC compared to controls (4841 (SEM 613) ng/ml vs 2534 (SEM 187) ng/ml; p<0.001) and compared to patients with cirrhosis without HCC (2077 (SEM 112) ng/ml; p<0.001). This difference remained significant when the subgroup of patients with low tumour mass (proteasome level 3970 (SEM 310) ng/ml, p<0.001) was compared to controls and patients with cirrhosis without HCC. Plasma proteasome levels were independent of the cause of cirrhosis and were weakly correlated with AFP levels. With a cut-off of 2900 ng/ml, diagnostic specificity for HCC was 97% with a sensitivity of 72%, better than results obtained with AFP. Diagnostic relevance of plasma proteasome measurement was also effective in low tumour mass patients (sensitivity 76.2% vs 57.1% for AFP).
CONCLUSION
The plasma proteasome level is a reliable marker of malignant transformation in patients with cirrhosis, even when there is a low tumour mass. |
Early ontogenic origin of the hematopoietic stem cell lineage. | Several lines of evidence suggest that the adult hematopoietic system has multiple developmental origins, but the ontogenic relationship between nascent hematopoietic populations under this scheme is poorly understood. In an alternative theory, the earliest definitive blood precursors arise from a single anatomical location, which constitutes the cellular source for subsequent hematopoietic populations. To deconvolute hematopoietic ontogeny, we designed an embryo-rescue system in which the key hematopoietic factor Runx1 is reactivated in Runx1-null conceptuses at specific developmental stages. Using complementary in vivo and ex vivo approaches, we provide evidence that definitive hematopoiesis and adult-type hematopoietic stem cells originate predominantly in the nascent extraembryonic mesoderm. Our data also suggest that other anatomical sites that have been proposed to be sources of the definitive hematopoietic hierarchy are unlikely to play a substantial role in de novo blood generation. |
The lexical constituency model: some implications of research on Chinese for general theories of reading. | The authors examine the implications of research on Chinese for theories of reading and propose the lexical constituency model as a general framework for word reading across writing systems. Word identities are defined by 3 interlinked constituents (orthographic, phonological, and semantic). The implemented model simulates the time course of graphic, phonological, and semantic priming effects, including immediate graphic facilitation followed by graphic inhibition with simultaneous phonological facilitation, a pattern unique to the Chinese writing system. Pseudocharacter primes produced only facilitation, supporting the model's assumption that lexical thresholds determine phonological and semantic, but not graphic, effects. More generally, both universal reading processes and writing system constraints exist. Although phonology is universal, its activation process depends on how the writing system structures graphic units. |
Engineered Ecosystems as a Sustainable Solution for Decentralized Wastewater Treatment in Tropical Environments - a Minor Field Study (MFS) | The performance of an engineered ecosystem, constructed and operated by the BioProcess research group of Rio de Janeiro State University (UERJ) to treat the sewage of a research campus, was evaluated on the island of Ilha Grande, RJ, Brazil. The engineered ecosystem was created as a sustainable alternative for decentralized sewage treatment in rural areas and consists of conventional treatment units as well as vegetated and algae tanks. The main objective of the study was to analyze the performance of each specific tank, as well as the system's overall pollutant removal performance. A method of sampling according to the hydraulic retention time of each treatment unit was used, in order to gain a better understanding of the complex processes that contribute to pollutant removal. Four series of sampling were conducted at a total of 9 sampling points, including the raw affluent and the effluents from all treatment units. The concentrations of most parameters in the final effluent were below the discharge limits set by Brazilian and Swedish regulations, with satisfactory removal for most parameters with the exception of total nitrogen and total phosphorus, which were just above the Swedish limits. One important observation, made possible by the sampling strategy, was the considerable variation in each treatment unit's performance among series. However, when comparing the system's overall performance, removal rates among series were stable, indicating buffering capacity of the overall system and a cooperative nature between the different tanks. Furthermore, the poor performance of the first of the four conducted sample series was striking and was probably caused by initially weakened bacterial cultures. Additionally, algal blooms were experienced in the vegetated and algae tanks, which are suspected to have impacted system performance, particularly in the form of enhanced phosphorus removal. In conclusion, the engineered ecosystem is considered to be a viable alternative for on-site sewage treatment in rural areas with tropical climates. However, some improvements are required, especially to achieve higher phosphorus removal.
Cost Effectiveness of Targeted High-dose Atorvastatin Therapy Following Genotype Testing in Patients with Acute Coronary Syndrome | Results from the PROVE IT trial suggest that patients with acute coronary syndrome (ACS) treated with atorvastatin 80 mg/day (A80) have significantly lower rates of cardiovascular events compared with patients treated with pravastatin 40 mg/day (P40). In a genetic post hoc substudy of the PROVE IT trial, the rate of event reduction was greater in carriers of the Trp719Arg variant in kinesin family member 6 protein (KIF6) than in noncarriers. We assessed the cost effectiveness of testing for the KIF6 variant followed by targeted statin therapy (KIF6 Testing) versus not testing patients (No Test) and treating them with P40 or A80 in the USA from a payer perspective. A Markov model was developed in which 2-year event rates from PROVE IT were extrapolated over a lifetime horizon. Costs and utilities were derived from published literature. All costs were in 2010 US dollars except the cost of A80, which was in 2012 US dollars because the generic formulation was available in 2012. Expected costs and quality-adjusted life-years (QALYs) were estimated for each strategy over a lifetime horizon. Lifetime costs were US$31,700; US$37,100 and US$41,300 for No Test P40, KIF6 Testing and No Test A80 strategies, respectively. The No Test A80 strategy was associated with more QALYs (9.71) than the KIF6 Testing (9.69) and No Test P40 (9.57) strategies. No Test A80 had an incremental cost-effectiveness ratio (ICER) of US$232,100 per QALY gained compared with KIF6 Testing. KIF6 Testing had an ICER of US$45,300 per QALY compared with No Test P40. Testing ACS patients for KIF6 carrier status may be a cost-effective strategy at commonly accepted thresholds. Treating all patients with A80 is more expensive than treating patients on the basis of KIF6 results, but the modest gain in QALYs is achieved at a cost/QALY that is generally considered unacceptable compared with the KIF6 Testing strategy. Compared with treating all patients with P40, the KIF6 Testing strategy had an ICER below US$50,000 per QALY. The conclusions from this study are sensitive to the price of generic A80 and the effect on adherence of knowing KIF6 carrier status. The results were based on a post hoc substudy of the PROVE IT trial, which was not designed to test the effectiveness of KIF6 testing. |
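The ICERs above follow directly from the reported lifetime costs and QALYs; the check below reproduces them up to rounding in the published inputs.

```python
# ICER = incremental cost / incremental QALYs, using the reported values.
# Differences from the paper's $45,300 and $232,100 reflect input rounding.
def icer(cost_hi, cost_lo, qaly_hi, qaly_lo):
    return (cost_hi - cost_lo) / (qaly_hi - qaly_lo)

print(icer(37_100, 31_700, 9.69, 9.57))   # KIF6 Testing vs No Test P40 ~ 45,000/QALY
print(icer(41_300, 37_100, 9.71, 9.69))   # No Test A80 vs KIF6 Testing ~ 210,000/QALY
```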
Activity sensing in the wild: a field trial of ubifit garden | Recent advances in small inexpensive sensors, low-power processing, and activity modeling have enabled applications that use on-body sensing and machine learning to infer people's activities throughout everyday life. To address the growing rate of sedentary lifestyles, we have developed a system, UbiFit Garden, which uses these technologies and a personal, mobile display to encourage physical activity. We conducted a 3-week field trial in which 12 participants used the system and report findings focusing on their experiences with the sensing and activity inference. We discuss key implications for systems that use on-body sensing and activity inference to encourage physical activity. |
Late Simultaneous Metastasis of Renal Cell Carcinoma to the Submandibular and Thyroid Glands Seven Years after Radical Nephrectomy | Background. Renal cell carcinoma (RCC) metastasis to the salivary glands is extremely rare. Most cases reported previously have involved the parotid gland and only six cases involving the submandibular gland exist in the current literature. Metastasis of RCC to thyroid gland is also rare but appears to be more common than to salivary glands. Methods and Results. We present the first case of simultaneous metastasis to the submandibular and thyroid glands from clear cell RCC in a 61-year-old woman who presented seven years after the primary treatment. The submandibular and thyroid glands were excised completely with preservation of the marginal mandibular and recurrent laryngeal nerves, respectively. Conclusion. Metastatic disease should always be considered in the differential diagnosis for patients who present with painless salivary or thyroid gland swelling with a previous history of RCC. If metastatic disease is confined only to these glands, prompt surgical excision can be curative. |
Clustering Game Behavior Data | Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out. |
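A typical pipeline of the kind the tutorial reviews, with feature normalization (one of the key topics mentioned) applied before k-means, might look like this; the behavioral features are hypothetical.

```python
# Illustrative player-profiling pipeline: standardize behavioral features so
# none dominates the distance metric, then cluster with k-means.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 4))    # hypothetical: playtime, sessions, deaths, purchases
Xz = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Xz)
```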
Evolved GANs for generating pareto set approximations | In machine learning, generative models are used to create data samples that mimic the characteristics of the training data. Generative adversarial networks (GANs) are neural-network-based generator models that have shown their capacity to produce realistic samples in different domains. In this paper we propose a neuro-evolutionary approach for evolving deep GAN architectures together with the loss function and generator-discriminator synchronization parameters. We also propose the problem of Pareto set (PS) approximation as a suitable benchmark to evaluate the quality of neural-network-based generators in terms of the accuracy of the solutions they generate. The covering of the Pareto front (PF) by the generated solutions is used as an indicator of the mode-collapsing behavior of GANs. We show that it is possible to evolve GANs that generate good PS approximations. Our method scales up to 784 variables and is capable of creating architectures transferable across dimensions and functions.
MRI anatomy of parametrial extension to better identify local pathways of disease spread in cervical cancer. | This paper highlights an updated anatomy of parametrial extension with emphasis on magnetic resonance imaging (MRI) assessment of disease spread in the parametrium in patients with locally advanced cervical cancer. Pelvic landmarks were identified to assess the anterior and posterior extensions of the parametria, besides the lateral extension, as defined in a previous anatomical study. A series of schematic drawings and MRI images are shown to document the anatomical delineation of disease on MRI, which is crucial not only for correct image-based three-dimensional radiotherapy but also for the surgical oncologist, since neoadjuvant chemoradiotherapy followed by radical surgery is emerging in Europe as a valid alternative to standard chemoradiation. |
Multi-Prediction Deep Boltzmann Machines | We introduce the multi-prediction deep Boltzmann machine (MP-DBM). The MP-DBM can be seen as a single probabilistic model trained to maximize a variational approximation to the generalized pseudolikelihood, or as a family of recurrent nets that share parameters and approximately solve different inference problems. Prior methods of training DBMs either do not perform well on classification tasks or require an initial learning pass that trains the DBM greedily, one layer at a time. The MP-DBM does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.
Estimating the NDVI from SAR by Convolutional Neural Networks | Since optical remote sensing images are useless in cloudy conditions, a possible alternative is to resort to synthetic aperture radar (SAR) images. However, many conventional techniques for Earth monitoring applications require specific spectral features which are defined only for multispectral data. For this reason, in this work we propose to estimate missing spectral features through data fusion and deep learning, exploiting both temporal and cross-sensor dependencies on Sentinel-1 and Sentinel-2 time-series. The proposed approach, validated focusing on the estimation of the normalized difference vegetation index (NDVI), shows very interesting results with a large performance gain over the linear regression approach according to several accuracy indicators. |
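For reference, the regression target is the standard NDVI computed from optical red and near-infrared reflectance (Sentinel-2 bands B4 and B8); the paper's CNN learns to predict this value from SAR inputs, which the sketch below does not attempt.

```python
# NDVI = (NIR - red) / (NIR + red), here from Sentinel-2 B8 (NIR) and B4 (red).
import numpy as np

def ndvi(b8_nir, b4_red, eps=1e-6):
    return (b8_nir - b4_red) / (b8_nir + b4_red + eps)  # eps avoids div-by-zero
```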
Blood pressure estimation from pulse wave velocity measured on the chest | Monitoring of blood pressure fluctuations in daily life has recently attracted attention in hypertension care as a way to predict the risk of cardiovascular and cerebrovascular disease events. In this paper, as an alternative to existing ambulatory blood pressure monitoring (ABPM) sphygmomanometers, we developed a prototype of a small wearable device consisting of electrocardiogram (ECG) and photoplethysmograph (PPG) sensors. We then examined whether blood pressure can be estimated from pulse wave transit time (PWTT) obtained solely by attaching this device to the surface of the chest. We show that our system can track time-dependent changes in blood pressure by measuring the pulse of the vessel over the sternum, even though the propagation distance is short.
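A common modeling choice for this kind of system, and an assumption on our part since the abstract does not give the regression form, is to take the delay from the ECG R-peak to the PPG pulse arrival as PWTT and fit a simple calibration curve against cuff readings:

```python
# Sketch: per-beat PWTT from paired ECG R-peak and PPG pulse-arrival times,
# then a linear calibration of systolic blood pressure against 1/PWTT.
import numpy as np

r_peaks  = np.array([0.80, 1.62, 2.41, 3.20, 4.01])   # ECG R-peak times (s)
ppg_feet = np.array([1.02, 1.85, 2.62, 3.43, 4.22])   # PPG pulse arrivals (s)
pwtt = ppg_feet - r_peaks                              # per-beat transit time (s)

sbp_cuff = np.array([128.0, 124.0, 131.0, 120.0, 126.0])  # reference cuff readings
a, b = np.polyfit(1.0 / pwtt, sbp_cuff, deg=1)         # model: SBP ~ a/PWTT + b

new_pwtt = 0.21
print(a / new_pwtt + b)  # estimated SBP for a new beat
```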
The frequency of voluntary and involuntary autobiographical memories across the life span. | In the present study, ratings of the memory of an important event from the previous week on the frequency of voluntary and involuntary retrieval, belief in its accuracy, visual imagery, auditory imagery, setting, emotional intensity, valence, narrative coherence, and centrality to the life story were obtained from 988 adults whose ages ranged from 15 to over 90. Another 992 adults provided the same ratings for a memory from their confirmation day, when they were about age 14. The frequencies of involuntary and voluntary retrieval were similar. Both frequencies were predicted by emotional intensity and centrality to the life story. The results from the present study, which is the first to measure the frequency of voluntary and involuntary retrieval for the same events, are counter to both cognitive and clinical theories, which consistently claim that involuntary memories are infrequent as compared with voluntary memories. Age and gender differences are noted.
Improved Regularization Techniques for End-to-End Speech Recognition | Regularization is important for end-to-end speech models, since the models are highly flexible and easy to overfit. Data augmentation and dropout have been important for improving end-to-end models in other domains. However, they are relatively underexplored for end-to-end speech models. Therefore, we investigate the effectiveness of both methods for end-to-end trainable, deep speech recognition models. We augment audio data through random perturbations of tempo, pitch, volume, and temporal alignment, and by adding random noise. We further investigate the effect of dropout when applied to the inputs of all layers of the network. We show that the combination of data augmentation and dropout gives a relative performance improvement of over 20% on both the Wall Street Journal (WSJ) and LibriSpeech datasets. Our model's performance is also competitive with other end-to-end speech models on both datasets.
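A sketch of the kind of audio augmentation described, using librosa for the tempo and pitch perturbations; the perturbation ranges are our guesses, not values from the paper:

```python
# Random audio perturbations for speech-model training (illustrative ranges).
import numpy as np
import librosa

def augment(y, sr, rng):
    y = librosa.effects.time_stretch(y, rate=rng.uniform(0.9, 1.1))        # tempo
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=rng.uniform(-2, 2))  # pitch
    y = y * rng.uniform(0.8, 1.2)                                          # volume
    y = np.roll(y, rng.integers(0, sr // 10))                              # alignment
    return y + rng.normal(0, 0.005, size=y.shape)                          # noise

rng = np.random.default_rng(0)
y, sr = librosa.load(librosa.example("trumpet"))  # stand-in waveform
y_aug = augment(y, sr, rng)
```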
Co-medication of statins and CYP3A4 inhibitors before and after introduction of new reimbursement policy. | WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT
HMG-CoA reductase inhibitors (statins) are frequently used drugs in the treatment of dyslipidaemia. Co-medication with interacting drugs increases the risk of statin-induced muscular side-effects. Simvastatin exhibits particularly high interaction potential due to substantial metabolism via cytochrome P450 3A4 (CYP3A4).
WHAT THIS STUDY ADDS
In June 2005, a new reimbursement policy was introduced by the Norwegian Medicines Agency stating that simvastatin should be prescribed as first-line lipid-lowering therapy. Following introduction of the new policy, the number of patients co-medicated with simvastatin and CYP3A4 inhibitors almost doubled. A potential consequence is increased incidence of muscular side-effects in the statin-treated population.
AIMS
To assess the prevalence of co-medication of statins and CYP3A4 inhibitors before and after introduction of a new Norwegian reimbursement policy, which states that all patients should be prescribed simvastatin as first-line lipid-lowering therapy.
METHODS
Data from patients receiving simvastatin, lovastatin, pravastatin, fluvastatin or atorvastatin in 2004 and 2006, including co-medication of potent CYP3A4 inhibitors, were retrieved from the Norwegian Prescription Database covering the total population of Norway. Key measurements were prevalence of continuous statin use (two or more prescriptions on one statin) and proportions of different statin types among all patients and those co-medicated with CYP3A4 inhibitors.
RESULTS
In 2004, 5.9% (n = 272 342) of the Norwegian population received two or more prescriptions on one statin, compared with 7.0% (n = 324 267) in 2006. The relative number of simvastatin users increased from 39.7% (n = 112 122) in 2004 to 63.1% (n = 226 672) in 2006. A parallel increase was observed within the subpopulation co-medicated with statins and CYP3A4 inhibitors, i.e. from 42.9% (n = 7706) in 2004 to 63.6% (n = 13 367) in 2006. For all other statins, the overall number of users decreased to a similar extent as the number co-medicated with CYP3A4 inhibitors.
CONCLUSIONS
In both 2004 and 2006, the choice of statin type did not depend on whether the patient used a CYP3A4 inhibitor or not. Considering the pronounced interaction potential of simvastatin with CYP3A4 inhibitors, a negative influence of the new policy on overall statin safety seems likely. |
Welcome to the experience economy. It's no longer just about healing: patients want a personal transformation. | Health care is no longer just about healing: patients want a "personal transformation," a way to be made whole again. How can your organization make the play from services to experiences to transformations without dropping the ball?
Attentive Tensor Product Learning for Language Generation and Grammar Parsing | This paper proposes Attentive Tensor Product Learning (ATPL), a new architecture for representing grammatical structures in deep learning models. ATPL bridges the gap between deep learning and explicit language structure by exploiting Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via a TPR-based deep neural network; 2) employing attention modules to compute TPR; and 3) integration of TPR with typical deep learning architectures, including Long Short-Term Memory (LSTM) and Feedforward Neural Network (FFNN). The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. The ATPL approach is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.
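The TPR machinery behind this approach can be stated concretely: fillers are bound to roles by outer products, and with (near-)orthonormal role vectors a filler is recovered by multiplying the sum with the role's unbinding vector. A minimal numpy sketch, where the dimensions and the orthonormal roles are our simplifying assumptions:

```python
# Tensor Product Representation: bind fillers to roles, then unbind one filler.
import numpy as np

d_f, d_r, n = 8, 4, 3
rng = np.random.default_rng(0)
fillers = rng.normal(size=(n, d_f))              # word/filler vectors
roles, _ = np.linalg.qr(rng.normal(size=(d_r, d_r)))
roles = roles[:n]                                # n orthonormal role vectors

# Binding: T = sum_i outer(f_i, r_i)
T = sum(np.outer(fillers[i], roles[i]) for i in range(n))

# Unbinding: with orthonormal roles, the unbinding vector is the role itself.
recovered = T @ roles[1]
print(np.allclose(recovered, fillers[1]))        # True
```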
Software Architecture Risk Containers | Our motivation is to determine whether risks such as implementation error-proneness can be isolated into three types of containers at design time. This paper identifies several container candidates in other research that fit the risk container concept. Two industrial case studies were used to determine which of the three container types tested is most effective at isolating, and predicting at design time, the risk of implementation error-proneness. We found that Design Rule Containers were more effective than Use Case and Resource Containers.
Methodologies for realizing natural-language-facilitated human-robot cooperation: A review | It is natural and efficient to use Natural Language (NL) for transferring knowledge from a human to a robot. Recently, research on using NL to support human-robot cooperation (HRC) has received increasing attention in several domains, such as robotic daily assistance, robotic health caregiving, intelligent manufacturing, autonomous navigation, and robotic social companionship. However, a high-level review that reveals the realization process and the latest methodologies for using NL to facilitate HRC has been missing. In this review, we provide a comprehensive summary of the methodology development of natural-language-facilitated human-robot cooperation (NLC). We first analyze the driving forces behind NLC developments. Then, in the temporal order of realization, we review the three main steps of NLC: human NL understanding, knowledge representation, and knowledge-world mapping. Finally, based on our review and perspectives, we discuss potential research trends in NLC.
Investigating Cybersecurity Issues In Active Traffic Management Systems | Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, the connectivity that supports ATM systems also provides potential system access points that result in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities and to explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system and a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between the two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subjected to emulated cyberattacks. The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60 mph (a 13% increase compared to the baseline case without ATM). However, when subjected to cyberattack, the mean speed was reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system, however, allowed the ATM system to revert to an expected state with a mean speed of 59 mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.
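The core comparison logic described here is straightforward; a toy version, with invented state names and a fallback policy of our own choosing, might look like:

```python
# Toy monitor: flag deviation between ATM-issued and independently predicted
# lane control states, and fall back to a backup data source on mismatch.
EXPECTED_STATES = {"open", "closed", "merge", "reduced_speed"}

def monitor(atm_states, predicted_states, backup_states):
    assert all(s in EXPECTED_STATES for s in atm_states + predicted_states)
    if atm_states != predicted_states:          # possible cyberattack or fault
        print("Deviation detected; displaying backup lane states.")
        return backup_states
    return atm_states

atm       = ["open", "closed", "open", "open"]   # possibly compromised output
predicted = ["open", "open", "open", "open"]     # monitor's most likely states
backup    = ["open", "open", "open", "open"]
print(monitor(atm, predicted, backup))
```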
Movie review mining and summarization | With the flourishing of the Web, online reviews are becoming an increasingly useful and important information resource. As a result, automatic review mining and summarization has become a hot research topic recently. Unlike traditional text summarization, review mining and summarization aims at extracting the features on which reviewers express their opinions and determining whether those opinions are positive or negative. In this paper, we focus on a specific domain: movie reviews. A multi-knowledge-based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.
A polymorphism in the serotonin receptor 3A (HTR3A) gene and its association with harm avoidance in women. | BACKGROUND
The brain neurotransmitter serotonin is known to affect various aspects of human behavior, including personality traits. Serotonin receptor type 3 is a ligand-gated channel encoded by 2 different subunit genes, HTR3A and HTR3B. A polymorphism (C178T) in the 5' region of the HTR3A gene has recently been identified and suggested to be of functional importance.
OBJECTIVE
To elucidate the possible association between the C178T polymorphism in the HTR3A gene and personality traits in women.
DESIGN
Two independent samples of 35- to 45-year-old Swedish women were recruited using the population register. Sample 1 (n = 195) was assessed via the Karolinska Scales of Personality and the Temperament and Character Inventory; sample 2 (n = 175) was assessed using the latter only. Both samples were genotyped with respect to the C178T polymorphism in the HTR3A gene. The A1596G polymorphism in the same gene was also investigated.
RESULTS
A significant association between C178T genotype and the Temperament and Character Inventory factor harm avoidance was observed in sample 1 (corrected for multiple comparisons, P = .04); this finding was subsequently replicated in sample 2 (P = .004) (pooled populations: P < .001). In the pooled sample, all harm avoidance subscales were found to be significantly associated with the C178T polymorphism: anticipatory worry (P = .001), fear of uncertainty (P < .001), shyness (P < .001), and fatigability and asthenia (P = .008). In addition, a significant association was found in sample 1 between the C178T polymorphism and the Karolinska Scales of Personality nonconformity factor (corrected P = .002), including the subscales of social desirability (P < .001), indirect aggression (P = .002), verbal aggression (P = .05), and irritability (P < .001). Participants homozygous for the less common T allele (<4%) differed from the remaining women by displaying lower ratings on harm avoidance and nonconformity.
CONCLUSION
The C178T polymorphism in the HTR3A gene may affect the personality trait of harm avoidance in women. |
Xbox movies recommendations: variational bayes matrix factorization with embedded feature selection | We present a matrix factorization model inspired by challenges we encountered while working on the Xbox movies recommendation system. The item catalog in a recommender system is typically equipped with meta-data features in the form of labels. However, only some of these features are informative or useful for collaborative filtering. By incorporating a novel sparsity prior on feature parameters, the model automatically discerns and utilizes informative features while simultaneously pruning non-informative features.
The model is designed for binary feedback, which is common in many real-world systems where numeric rating data is scarce or non-existent. However, the overall framework is applicable to any likelihood function. Model parameters are estimated with a Variational Bayes inference algorithm, which is robust to over-fitting and does not require cross-validation or fine-tuning of regularization coefficients. The efficacy of our method is illustrated on a sample from the Xbox movies dataset as well as on the publicly available MovieLens dataset. In both cases, the proposed solution provides superior predictive accuracy, especially for long-tail items. We then demonstrate the feature selection capabilities and compare against the common case of simple Gaussian priors. Finally, we show that even without features, our model performs better than a baseline model trained with the popular stochastic gradient descent approach.
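The predictive side of such a model is compact even if the variational inference is not; a sketch of the scoring rule, where the names and dimensions are ours and point estimates stand in for the variational posterior:

```python
# Sketch: feature-augmented matrix factorization score for binary feedback.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, n_features = 16, 5
rng = np.random.default_rng(0)
u = rng.normal(0, 0.1, d)                     # user trait vector
v_item = rng.normal(0, 0.1, d)                # item's own trait vector
V_feat = rng.normal(0, 0.1, (n_features, d))  # metadata-feature embeddings
item_features = [0, 3]                        # label indices attached to this item

# Item representation = own vector + embeddings of its features; a sparsity
# prior would drive the rows of V_feat for non-informative features to zero.
v = v_item + V_feat[item_features].sum(axis=0)
p_consume = sigmoid(u @ v)                    # probability of positive feedback
print(p_consume)
```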
Two years' effectiveness of intravenous pamidronate (APD) versus oral fluoride for osteoporosis occurring in the postmenopause | Bisphosphonates seem to be effective as antiresorptive agents in the prevention and treatment of osteoporosis. However, the optimal dose and route of administration as well as the specific effects on cortical or trabecular bone have not been clarified. To compare pamidronate (APD) with fluoride (F) in the therapy of postmenopausal osteoporosis, 32 osteoporotic women were treated for 2 years either with APD (30 mg as a single intravenous infusion over 1 h every 3 months, n=16, mean age 65 years) or with fluoride orally (20–30 mg F/day, n=16, mean age 67 years) in an open study. Both groups received 1 g calcium and 1000 U vitamin D per day, but no estrogens or other drugs acting on bone. Both groups showed the same initial mean number of fractures per patient (2.8 and 2.7). Bone densitometry was performed every 6 months at three sites: lumbar spine and hip with dual-energy X-ray absorptiometry (BMD), distal forearm with single photon absorptiometry, and lumbar spine with quantitative computed tomography. Biochemical assessment was performed in blood and urine every 3 months. Lumbar BMD (g/cm2, mean ± SEM) increased from 0.632 (±0.030) at time 0 to 0.696 (±0.028) at 24 months in the APD group (p<0.001), and from 0.684 (±0.025) to 0.769 (±0.028) in the fluoride group (p<0.001). Femoral neck BMD increased significantly from 0.558 (±0.025) to 0.585 (±0.025) (p<0.01) in the APD group, whereas it did not change in the fluoride group. Forearm SPA (g/cm) increased from 0.90 (±0.08) to 0.97 (±0.10) (p<0.01) in the APD group, whereas it decreased from 0.94 (±0.06) to 0.90 (±0.06) (p<0.05) in the fluoride group. Plasma calcium, parathormone and creatinine did not change in any group. Urinary hydroxyproline/creatinine, serum osteocalcin and alkaline phosphatase decreased significantly with APD, but did not change with fluoride, except for an increase in osteocalcin. Two new vertebral fractures occurred with APD and 6 with fluoride, plus 1 hip fracture. Side effects were a transient fever with flu-like symptoms in 5 patients after the first infusion of APD, and rigors and mild phlebitis in 1 patient. Fluoride was associated with 5 transient arthralgias and 4 mild gastric intolerances, and treatment was interrupted in 2 patients with distal stress fractures. While fluoride increased only lumbar BMD, intermittent intravenous APD increased lumbar, hip and radial BMD in postmenopausal osteoporosis and was well tolerated.
A Model-Based Approach to Cognitive Radio Design | Cognitive radio is a promising technology for fulfilling the spectrum and service requirements of future wireless communication systems. Real experimentation is a key factor for driving research forward. However, the experimentation testbeds available today are cumbersome to use, require detailed platform knowledge, and often lack high level design methods and tools. In this paper we propose a novel cognitive radio design technique, based on a high-level model which is implementation independent, supports design-time correctness checks, and clearly defines the underlying execution semantics. A radio designed using this technique can be synthesised to various real radio platforms automatically; detailed knowledge of the target platform is not required. The proposed technique therefore simplifies cognitive radio design and implementation significantly, allowing researchers to validate ideas in experiments without extensive engineering effort. One example target platform is proposed, comprising software and reconfigurable hardware. The design technique is demonstrated for this platform through the development of two realistic cognitive radio applications. |
Sequence of central nervous system myelination in human infancy. II. Patterns of myelination in autopsied infants. | The timing and synchronization of postnatal myelination in the human central nervous system (CNS) are complex. We found eight time-related patterns of CNS myelination during the first two postnatal years in autopsied infants. The intensity of myelination was graded in 162 infants with diverse diseases on an ordinal scale of degrees 0-4. The Ayer method for maximum likelihood estimates for censored data was utilized to generate curves of the temporal changes in the percent of infants with degrees 0 through 4 of myelin in 62 white matter sites. These sites fall into eight subgroups determined by the presence or absence of microscopic myelin (degree 1) at birth and the median age at which mature myelin (degree 3) is reached. There is variability in the timing of myelination within and across axonal systems, and early onset of myelination is not always followed by early myelin maturation. We reexamined general rules governing the timing of myelination proposed by previous investigators, and found that those rules are neither complete nor inviolate, and that there is a complex interplay among them. This study specifies distinct periods of maturation in which myelinating pathways are potentially vulnerable to insult during the first two postnatal years. |
BabelNet: Building a Very Large Multilingual Semantic Network | In this paper we present BabelNet, a very large, wide-coverage multilingual semantic network. The resource is automatically constructed by means of a methodology that integrates lexicographic and encyclopedic knowledge from WordNet and Wikipedia. In addition, Machine Translation is applied to enrich the resource with lexical information for all languages. We conduct experiments on new and existing gold-standard datasets to show the high quality and coverage of the resource.
The impact of the lung allocation score on short-term transplantation outcomes: a multicenter study. | OBJECTIVE
The lung allocation score restructured the distribution of scarce donor lungs for transplantation. The algorithm ranks waiting list patients according to medical urgency and expected benefit after transplantation. The purpose of this study was to evaluate the impact of the lung allocation score on short-term outcomes after lung transplantation.
METHODS
A multicenter retrospective cohort study was performed with data from 5 academic medical centers. Results of patients undergoing transplantation on the basis of the lung allocation score (May 4, 2005, to May 3, 2006) were compared with those of patients who received transplants in the year before the lung allocation score was implemented (May 4, 2004, to May 3, 2005).
RESULTS
The study reports on 341 patients (170 before the lung allocation score and 171 after). Waiting time decreased from 680.9 ± 528.3 days to 445.6 ± 516.9 days (P < .001). Recipient diagnoses changed, with an increase in idiopathic pulmonary fibrosis and a decrease in emphysema and cystic fibrosis (P = .002). Postoperatively, primary graft dysfunction increased from 14.1% (24/170) to 22.9% (39/171) (P = .04) and intensive care unit length of stay increased from 5.7 ± 6.7 days to 7.8 ± 9.6 days (P = .04). Hospital mortality and 1-year survival were the same between groups (5.3% vs 5.3% and 90% vs 89%, respectively; P > .6).
CONCLUSIONS
This multicenter retrospective review of short-term outcomes supports the conclusion that the lung allocation score is achieving its objectives. The lung allocation score reduced waiting time and altered the distribution of lung diseases for which transplantation was done on the basis of medical necessity. After transplantation, recipients have significantly higher rates of primary graft dysfunction and longer intensive care unit stays. However, hospital mortality and 1-year survival have not been adversely affected.
Part Detector Discovery in Deep Convolutional Neural Networks | Current fine-grained classification approaches often rely on a robust localization of object parts to extract localized feature representations suitable for discrimination. However, part localization is a challenging task due to the large variation of appearance and pose. In this paper, we show how pre-trained convolutional neural networks can be used for robust and efficient object part discovery and localization without the necessity to actually train the network on the current dataset. Our approach, called “part detector discovery” (PDD), is based on analyzing the gradient maps of the network outputs and finding activation centers spatially related to annotated semantic parts or bounding boxes. This allows us not only to obtain excellent performance on the CUB-200-2011 dataset, but, in contrast to previous approaches, also to perform detection and bird classification jointly without requiring a given bounding box annotation during testing and ground-truth parts during training. The code is available at http://www.inf-cv.uni-jena.de/part_discovery and https://github.com/cvjena/PartDetectorDisovery.
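The gradient-map step generalizes readily: differentiate a network output with respect to the input and take the spatial location where the (smoothed) gradient magnitude peaks as an activation center. A sketch with a torchvision backbone, where the model choice, the use of the top class score, and the smoothing width are our assumptions rather than details from the paper:

```python
# Sketch: gradient-based activation center, in the spirit of part detector discovery.
import torch
from torchvision import models
from scipy.ndimage import gaussian_filter

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # assumed pre-trained backbone
img = torch.rand(1, 3, 224, 224, requires_grad=True)     # stand-in input image

score = model(img)[0].max()   # score of the top predicted class
score.backward()              # gradient map via backpropagation to the input

grad_mag = img.grad[0].abs().sum(dim=0).numpy()          # (224, 224) magnitude map
smoothed = gaussian_filter(grad_mag, sigma=5)
center = divmod(smoothed.argmax(), smoothed.shape[1])    # (row, col) activation center
print(center)
```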