title | abstract |
---|---|
Integration of Distributed Enterprise Applications: A Survey | Many industrial enterprises acquire disparate systems and applications over the years, and integrating them is often essential to satisfying business requirements. To help researchers in industrial informatics understand the state of the art of enterprise application integration, we examined the architectures and technologies for integrating distributed enterprise applications, illustrated their strengths and weaknesses, and identified research trends and opportunities in this increasingly important area. |
A randomised controlled study of bromocriptine versus levodopa in previously untreated Parkinsonian patients: a 3 year follow-up. | The long term effects of de novo treatment with levodopa versus bromocriptine were compared in 13 and 15 previously untreated patients with Parkinson's disease, respectively, in a prospective randomised trial. Thirteen patients were treated with levodopa alone (mean dose 444 (SEM 63) mg daily) whereas 15 others received bromocriptine alone (mean dose 50 (SEM 6) mg daily) for 37 (SEM 4) and 32 (SEM 4) months, respectively. For a similar decrease in the Columbia rating scale, the nature of long term side effects differed between the two groups: three patients on levodopa developed peak-dose dyskinesias and one developed dystonia. With bromocriptine, one patient developed a severe psychosis whereas three others suffered from primary lack of efficacy (1 case) or late decrease in efficacy (2 cases). These results demonstrate the potential of D2 dopamine agonists (like bromocriptine) in the de novo treatment of Parkinson's disease; however, their use is limited by lack of efficacy and/or the occurrence of neuropsychiatric side effects. |
Potentiated sympathetic and hemodynamic responses to alcohol in hypertensive vs. normotensive individuals. | OBJECTIVE
Alcohol is associated with acute increases in muscle sympathetic nerve activity (MSNA) in normal individuals. The effects of alcohol on MSNA in patients with hypertension are unknown. Using a randomized, placebo-controlled study design, we tested the hypothesis that there is a differential effect of acute alcohol consumption on cardiovascular function in hypertensive patients compared with normotensive controls.
METHODS
We examined the effects of oral alcohol intake (1.0 g/kg body weight) and placebo on blood pressure, heart rate, and MSNA in 13 newly diagnosed hypertensive patients and 11 normotensive controls. The two sessions were performed in random order, each study on a separate day.
RESULTS
Baseline MSNA was significantly elevated in the hypertensive patients as compared to the controls (38 ± 2 vs. 28 ± 2 bursts/min; P < 0.01). Placebo had no significant effect on MSNA, blood pressure, or heart rate in either group. In normotensive individuals, alcohol had no significant effect on blood pressure (SBP increased by 1 ± 4 mmHg). By contrast, SBP increased after alcohol in hypertensive patients by 24 ± 6 mmHg (P < 0.001 vs. controls). MSNA increased after alcohol in controls by 83 ± 34% (P < 0.01 vs. baseline). MSNA did not change significantly after alcohol in hypertensive patients (16 ± 7%, not significant), despite a profound blood pressure increase, which would be expected to inhibit sympathetic activity.
CONCLUSION
Pressor responses to acute alcohol consumption are potentiated in hypertensive patients compared with normotensive controls. Vasoconstrictor sympathetic tone is not suppressed in hypertensive patients after alcohol, despite the enhanced pressor response. Sympathetic neural mechanisms might contribute to both alcohol-related blood pressure increases and cardiovascular events in hypertensive patients. |
Evaluation of a blended learning course for teaching oral radiology to undergraduate dental students. | AIMS
The purpose of this study was to develop and implement a blended course (combined face-to-face and online instruction) on undergraduate oral radiology and to evaluate it by comparing its educational effectiveness (derived from students' performance and answers to questionnaires) with that of a conventional course. Students' attitudes towards the blended methodology were also recorded.
METHODOLOGY
An original course was developed and implemented, and its electronic version was uploaded to an e-learning educational platform. The course was attended by two groups of final-year students, who were taught by either the conventional face-to-face methodology or the blended learning methodology. Students answered a series of questionnaires, before and after following the course, regarding their perceptions, attitudes and evaluation of the course. Additionally, they completed knowledge assessment tests and their grades (before and after the course) were compared. Educational effectiveness of the course was determined by analysing the results of the questionnaires and the tests.
RESULTS
Students in the blended group performed significantly better than their colleagues in the conventional group on the post-course knowledge test, and female students in the blended group performed better than male students. Students rated the course content, organisation and educational material highly, and the blended group additionally appreciated the course design and clarity of instructions. Students' attitudes towards elements of blended learning (effectiveness, motivation and active engagement) were very positive. Most of the blended group students who attended the face-to-face meeting (approx. 91%) evaluated it as helpful for summarising the subject and clarifying difficult issues.
CONCLUSIONS
Blended learning is effective, is evaluated positively by dental students, and can be implemented in the undergraduate curriculum for teaching oral radiology. |
A Comprehensive Framework for Secure Query Processing on Relational Data in the Cloud | Data security in the cloud is a big concern that blocks the widespread use of the cloud for relational data management. First, to ensure data security, data confidentiality needs to be provided when data resides in storage as well as when data is dynamically accessed by queries. Prior works on query processing on encrypted data did not provide data confidentiality guarantees in both aspects. Tradeoff between secrecy and efficiency needs to be made when satisfying both aspects of data confidentiality while being suitable for practical use. Second, to support common relational data management functions, various types of queries such as exact queries, range queries, data updates, insertion and deletion should be supported. To address these issues, this paper proposes a comprehensive framework for secure and efficient query processing of relational data in the cloud. Our framework ensures data confidentiality using a salted IDA encoding scheme and column-access-via-proxy query processing primitives, and ensures query efficiency using matrix column accesses and a secure B+-tree index. In addition, our framework provides data availability and integrity. We establish the security of our proposal by a detailed security analysis and demonstrate the query efficiency of our proposal through an experimental evaluation. |
a.SCAtch - A Sketch-Based Retrieval for Architectural Floor Plans | Architects work with drawings daily, using either pen or computer to sketch ideas or draw to scale. When beginning a new project they often have to search for similar past projects. In this paper a sketch-based approach is proposed for querying a floor plan repository: the user searches for semantically similar floor plans simply by drawing the new plan. An algorithm extracts the semantic structure sketched by the architect on DFKI’s Touch & Write table and compares the structure of the sketch with those in the floor plan repository. The a.SCAtch system enables the user to easily access knowledge from past projects. While the current prototype recognizes only sketches with a predefined structure, we will extend the system to work with normal floor plans. |
Maternal Knowledge of Optimal Breastfeeding Practices and Associated Factors in Rural Communities of Arba Minch Zuria | Introduction: Breastfeeding is one of the components of Primary Health Care and is considered a natural practice in Ethiopia. However, a wide range of harmful infant feeding practices has been documented even after implementation of the infant and young child feeding guidelines in 2004. Therefore, the major objective of this study was to assess maternal knowledge about optimal breastfeeding practices and associated factors in rural communities of Arba Minch Zuria. Methods: A community based cross sectional study was carried out from January to February, 2012 in Arba Minch Zuria. Quantitative data were collected from 383 mothers, supplemented with qualitative data from 10 key informants. Data were analyzed using SPSS version 16.0. Binary logistic regression was used to assess the strength of association between independent and dependent variables using odds ratios and 95% confidence intervals. Finally, a multivariate logistic regression analysis was performed to identify the predictors of maternal knowledge about optimal breastfeeding practices. Results: Breastfeeding was considered a natural gift in Arba Minch Zuria. More than half of the mothers (57.2%) initiated breastfeeding within the first hour of delivery and 213 (55.6%) exclusively breastfed their children for 6 months. Three hundred forty-one (89%) mothers gave colostrum, though a small number of mothers considered colostrum to be expired breast milk and discarded it. Maternal knowledge about optimal breastfeeding was positively associated with paternal education level, total number of births, attending antenatal care, owning a radio, using family planning and delivery attended by health workers. This study also showed a positive relationship between maternal knowledge of optimal breastfeeding and both exclusive breastfeeding and timely introduction of complementary food. Conclusions: Findings from this study showed that maternal knowledge was directly related to paternal education level, attending antenatal care, owning a radio, using family planning and delivery attended by health workers. Maternal knowledge contributed significantly to the promotion of optimal child feeding practices. Strong community based education and support to ensure optimal infant and young child feeding is recommended for whole communities, with health workers and community leaders providing counseling and support related to infant and young child feeding practices. |
Why firms listed on an unregulated financial market comply voluntarily with IFRS: An empirical analysis with French data | This study examines the determinants of voluntary adoption of IFRS by French companies listed on an unregulated financial market. These firms can choose IFRS or the French accounting standards to present their accounts. We analyze the annual reports of 85 French firms listed in 2010 on an unregulated financial market: Alternext. The results reveal that size is an important determinant of the voluntary adoption of IFRS, showing a positive correlation. The percentage of assets in place is also a significant factor: firms with a higher percentage are protected by heavy barriers to entry and they thus voluntarily adopt IFRS. Industry sector shows a negative and significant relationship, as it explains the decision not to adopt IFRS. The following variables are not significant: leverage, internationality, profitability, type of auditor, and ownership concentration. Our findings suggest that without the intervention of regulatory bodies, companies listed on an unregulated financial market will continue to opt for local accounting standards, thereby maintaining the status quo. |
Global Software Engineering: The Future of Socio-technical Coordination | Globally-distributed projects are rapidly becoming the norm for large software systems, even as it becomes clear that global distribution of a project seriously impairs critical coordination mechanisms. In this paper, I describe a desired future for global development and the problems that stand in the way of achieving that vision. I review research and lay out research challenges in four critical areas: software architecture, eliciting and communicating requirements, environments and tools, and orchestrating global development. I conclude by noting the need for a systematic understanding of what drives the need to coordinate and effective mechanisms for bringing it about. |
Experience Replay for Continual Learning | Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of on- and off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited-size buffer to perform almost as well as an unbounded one. |
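The buffer-management idea described here, keeping a bounded buffer that discards uniformly at random so it approximates a uniform sample of the whole experience stream, is exactly what reservoir sampling provides. The sketch below is a minimal illustration of that mechanism, not the paper's exact implementation; the 50/50 mixing of new and replayed data and all names are assumptions.

```python
import random

class ReservoirReplayBuffer:
    """Bounded replay buffer that maintains a uniform random sample of
    all transitions seen so far (reservoir sampling), matching the
    'randomly discarding data' mechanism described in the abstract."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total transitions observed so far

    def add(self, transition):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            # Each of the `seen` transitions survives with prob capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = transition

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Hypothetical training-batch assembly mixing on-policy (new) transitions
# with replayed ones. In the paper's spirit, replayed data would get an
# off-policy correction plus a behavioral-cloning term; only the data
# mixing is shown here.
def make_batch(buffer, new_transitions, batch_size):
    half = batch_size // 2
    return new_transitions[:half] + buffer.sample(batch_size - half)
```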
Spacing Effects in Real-World Classroom Vocabulary Learning | |
Iconography in Medieval Spanish Literature | |
Evaluating Search Engines by Modeling the Relationship Between Relevance and Clicks | We propose a model that leverages the millions of clicks received by web search engines to predict document relevance. This allows the comparison of ranking functions when clicks are available but complete relevance judgments are not. After an initial training phase using a set of relevance judgments paired with click data, we show that our model can predict the relevance score of documents that have not been judged. These predictions can be used to evaluate the performance of a search engine, using our novel formalization of the confidence of the standard evaluation metric discounted cumulative gain (DCG), so comparisons can be made across time and datasets. This contrasts with previous methods which can provide only pair-wise relevance judgments between results shown for the same query. When no relevance judgments are available, we can identify the better of two ranked lists up to 82% of the time, and with only two relevance judgments for each query, we can identify the better ranking up to 94% of the time. While our experiments are on sponsored search results, which is the financial backbone of web search, our method is general enough to be applicable to algorithmic web search results as well. Furthermore, we give an algorithm to guide the selection of additional documents to judge to improve confidence. |
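For readers unfamiliar with the metric whose confidence the authors formalize, a minimal DCG computation looks like the following. The gain function shown (2^rel - 1) is one common convention; the paper may use a different variant.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded
    relevance judgments (index 0 = top-ranked result)."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

# Example: the same three documents in two different orders.
print(dcg([3, 2, 0]))  # higher: relevant documents ranked first
print(dcg([0, 2, 3]))  # lower: relevant documents ranked last
```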
Polyphenol content and antioxidant properties of colored soybean seeds from central Europe. | The antioxidant activity and contents of various polyphenol classes in the seeds of seven soybean varieties of different seed color and one yellow seed cultivar, representing a reference genotype, were evaluated. Total polyphenols and tannins were determined after extraction of plant material with 70% aqueous acetone, and total flavonoids were extracted with methanol and acetic acid, whereas anthocyanins were extracted with 20% aqueous ethanol. In addition, isoflavone content and composition were determined using high-performance liquid chromatography analysis. Antioxidant activity of seed extracts was evaluated by the 2,2-diphenyl-1-picrylhydrazyl free radical scavenging activity assay. A positive linear correlation between antioxidant activity and contents of total polyphenols and anthocyanins was established. The highest antioxidant activity was observed in the extracts of black and brown varieties, which also showed high levels of all polyphenol classes examined. Yellow seed had the highest total isoflavone content (3.62 mg/g of dry material). The highest concentration of total daidzein was determined in black seeds (>2.0 mg/g of dry material), and the highest total glycitein and genistein contents occurred in the yellow cultivar (0.53 and 1.49 mg/g of dry material, respectively). According to our results, varieties of black and brown seeds could be of special interest not only for their large content of total polyphenols, ranging from 4.94 to 6.22 mg of gallic acid equivalents/g of dry material, but also for their high content of natural antioxidants such as anthocyanins. |
Stakeholder Excellence ? Framing the evolution and complexity of a stakeholder perspective of the firm | Analysing and dealing with the needs and demands of stakeholders is a major concern of modern business. This paper attempts to frame, systematise and conceptualise a number of the underlying issues related to the phenomenon. As a result, a vast array of unaddressed and unanswered questions emerges. What unites these is the implicit acceptance of an emerging (contemporary) stakeholder theory of the firm. This largely normative theory provides some understanding of what constitutes a stakeholder, what stakes they seek to protect and the way in which transactions between the stakeholder and the organisations are handled. While the brevity of the contribution necessitates a somewhat superficial treatment of the issues, the paper leads inevitably to the conclusion that the demonstrated complexity of really engaging with stakeholders means that both theory-development and practice still have a long way to go. |
Constructing the Leading Order Terms of the Yang-Mills Schroedinger Functional | The leading short-time behaviour of the Yang-Mills Schroedinger functional is obtained within a local expansion in the fields. |
Treatment of refractory anterior knee pain using botulinum toxin type A (Dysport) injection to the distal vastus lateralis muscle: a randomised placebo controlled crossover trial | OBJECTIVES
This randomised controlled crossover trial examined the efficacy of botulinum toxin type A (BoNT-A) injection, plus an exercise programme, to remediate chronic anterior knee pain (AKP) associated with quadriceps muscle imbalance.
METHODS
24 individuals with refractory AKP received either BoNT-A (500 U Dysport) or the same volume saline injection to the vastus lateralis (VL) muscle and performed home exercises focusing on re-training the vastus medialis (VM) muscle. All subjects were offered open-label injection at 12 weeks. Knee-related disability (anterior knee pain scale; AKPS) and activity-induced pain (10 cm visual analogue scale) at 12 weeks were the primary outcomes. Peak isometric extensor force was recorded and normalised VL:VM ratios were derived from simultaneous surface electromyography. Self-reported pain and disability measures were collected at six time points to a mean of 20±8 months.
RESULTS
14 subjects received BoNT-A and 10 placebo injection. Improvement at 12 weeks was significantly greater for BoNT-A compared with placebo-injected subjects for the AKPS (p<0.03), pain on kneeling (p<0.004), squatting (p<0.02) and level walking (p<0.04). At week 12, five placebo subjects crossed over to open-label injection. At 24 weeks, 16 of 19 BoNT-A-injected and two of the remaining five placebo-injected subjects were either satisfied or very satisfied with treatment outcomes. Improvements were maintained in 11 of 14 BoNT-A-injected and two of five placebo subjects available at longer-term follow-up.
CONCLUSION
BoNT-A injection produced a greater reduction in pain and disability than placebo injection in carefully selected patients with chronic AKP related to quadriceps muscle imbalance. |
Automated Heart Wall Motion Abnormality Detection from Ultrasound Images Using Bayesian Networks | Coronary Heart Disease can be diagnosed by measuring and scoring regional motion of the heart wall in ultrasound images of the left ventricle (LV) of the heart. We describe a completely automated and robust technique that detects diseased hearts based on detection and automatic tracking of the endocardium and epicardium of the LV. The local wall regions and the entire heart are then classified as normal or abnormal based on the regional and global LV wall motion. In order to leverage structural information about the heart we applied Bayesian Networks to this problem, and learned the relations among the wall regions from the data using a structure learning algorithm. We checked the validity of the obtained structure using anatomical knowledge of the heart and medical rules as described by doctors. The resultant Bayesian Network classifier depends only on a small subset of numerical features extracted from dual-contours tracked through time and selected using a filter-based approach. Our numerical results confirm that our system is robust and accurate on echocardiograms collected in routine clinical practice at one hospital; our system is built to be used in real time. |
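The "small subset of numerical features ... selected using a filter-based approach" step can be illustrated with a standard univariate filter. This is a generic sketch with placeholder data, not the authors' pipeline.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# X: contour-derived features per heart (rows), y: normal/abnormal labels.
# Both arrays are hypothetical stand-ins for the tracked-contour features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, size=200)

# Filter-based selection: score each feature independently (ANOVA F-test)
# and keep the k best, without involving the downstream classifier.
selector = SelectKBest(score_func=f_classif, k=8).fit(X, y)
X_small = selector.transform(X)
print(X_small.shape)  # (200, 8)
```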
Human activity recognition with wearable sensors | This thesis investigates the use of wearable sensors to recognize human activity. The activity of the user is one example of context information – others include the user’s location or the state of his environment – which can help computer applications to adapt to the user depending on the situation. In this thesis we use wearable sensors – mainly accelerometers – to record, model and recognize human activities. Using wearable sensors allows continuous recording of activities across different locations and independent from external infrastructure. There are many possible applications for activity recognition with wearable sensors, for instance in the areas of healthcare, elderly care, personal fitness, entertainment, or performing arts. In this thesis we focus on two particular research challenges in activity recognition, namely the need for less supervision, and the recognition of high-level activities. We make several contributions towards addressing these challenges. Our first contribution is an analysis of features for activity recognition. Using a data set of activities such as walking, standing, sitting, or hopping, we analyze the performance of commonly used features and window lengths over which the features are computed. Our results indicate that different features perform well for different activities, and that in order to achieve the best recognition performance, features and window lengths should be chosen specifically for each activity. In order to reduce the need for labeled training data, we propose an unsupervised algorithm which can discover structure in unlabeled recordings of activities. The approach identifies correlated subsets in feature space, and represents these subsets with low-dimensional models. We show that the discovered subsets often correspond to distinct activities, and that the resulting models can be used for recognition of activities in unknown data. In a separate study, we show that the approach can be effectively deployed in a semi-supervised learning framework. More specifically, we combine the approach with a discriminant classifier, and show that this scheme allows high recognition rates even when using only a small amount of labeled training data. Recognition of higher-level activities such as shopping, doing housework, or commuting is challenging, as these activities are composed of changing sub-activities and vary strongly across individuals. We present one study in which we recorded 10 h of three different high-level activities, investigating to what extent methods for low-level activities can be scaled to the recognition of high-level activities. Our results indicate that for settings such as ours, traditional supervised approaches in combination with data from wearable accelerometers can achieve recognition rates of more than 90%. While unsupervised techniques are desirable for short-term activities, they become crucial for long-term activities, for which annotation is often impractical or impossible. To this end we propose an unsupervised approach based on topic models that allows the discovery of high-level structure in human activity data. The discovered activity patterns correlate with daily routines such as commuting, office work, or lunch routine, and they can be used to recognize such routines in unknown data. |
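The claim that "features and window lengths should be chosen specifically for each activity" presupposes the usual sliding-window feature pipeline, which might look like the minimal numpy sketch below. The window length and feature set are illustrative choices, not the thesis's.

```python
import numpy as np

def window_features(acc, win=128, step=64):
    """Slide a fixed-length window over a 1-D accelerometer signal and
    compute simple time- and frequency-domain features per window."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        spectrum = np.abs(np.fft.rfft(w)) ** 2
        feats.append([
            w.mean(),                  # gravity/orientation component
            w.std(),                   # motion intensity
            np.abs(np.diff(w)).mean(), # jerkiness
            spectrum[1:].sum(),        # signal energy excluding DC
        ])
    return np.asarray(feats)

# 10 s of synthetic 50 Hz data; a real pipeline would use 3-axis signals.
acc = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.randn(500)
print(window_features(acc).shape)  # (n_windows, 4)
```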
Privacy-Preserving Audit and Extraction of Digital Contents | A growing number of online services, such as Google, Yahoo!, and Amazon, are starting to charge users for their storage. Customers often use these services to store valuable data such as email, family photos and videos, and disk backups. Today, a customer must entirely trust such external services to maintain the integrity of hosted data and return it intact. Unfortunately, no service is infallible. To make storage services accountable for data loss, we present protocols that allow a third-party auditor to periodically verify the data stored by a service and assist in returning the data intact to the customer. Most importantly, our protocols are privacy-preserving, in that they never reveal the data contents to the auditor. Our solution removes the burden of verification from the customer, alleviates both the customer’s and storage service’s fear of data leakage, and provides a method for independent arbitration of data retention contracts. |
A 249-Mpixel/s HEVC Video-Decoder Chip for 4K Ultra-HD Applications | High Efficiency Video Coding, the latest video standard, uses larger and variable-sized coding units and longer interpolation filters than H.264/AVC to better exploit redundancy in video signals. These algorithmic techniques enable a 50% decrease in bitrate at the cost of computational complexity, external memory bandwidth, and, for ASIC implementations, on-chip SRAM of the video codec. This paper describes architectural optimizations for an HEVC video decoder chip. The chip uses a two-stage subpipelining scheme to reduce on-chip SRAM by 56 kbytes, a 32% reduction. A high-throughput read-only cache combined with DRAM-latency-aware memory mapping reduces DRAM bandwidth by 67%. The chip is built for the HEVC Working Draft 4 Low Complexity configuration and occupies 1.77 mm² in 40-nm CMOS. It performs 4K Ultra HD 30-fps video decoding at 200 MHz while consuming 1.19 nJ/pixel of normalized system power. |
Clinical diagnostic accuracy and magnetic resonance imaging of patients referred by physical therapists, orthopaedic surgeons, and nonorthopaedic providers. | STUDY DESIGN
Nonexperimental, retrospective design.
OBJECTIVES
This study was designed to compare clinical diagnostic accuracy (CDA) between physical therapists (PTs), orthopaedic surgeons (OSs), and nonorthopaedic providers (NOPs) at Keller Army Community Hospital on patients with musculoskeletal injuries (MSI) referred for magnetic resonance imaging (MRI).
BACKGROUND
US Army PTs are frequently the first credentialed providers privileged to examine and diagnose patients with musculoskeletal injuries. Physical therapists assigned at Keller Army Community Hospital have also been credentialed with privileges to order MRI studies for several years.
METHODS AND MEASURES
To reduce provider bias, a retrospective analysis was performed on 560 patients referred for MRI over an 18-month period. An electronic review of each patient's radiological profile was performed to assess agreement between clinical diagnosis and MRI findings. Data analyses were performed through descriptive statistics and contingency tables.
RESULTS
Analysis on agreement between clinical diagnosis and MRI findings produced a CDA of 74.5% (108/145) for PTs, 80.8% (139/172) for OSs, and 35.4% (86/243) for NOPs. There was a significant difference in CDA between PTs and NOPs (P<.001), and between OSs and NOPs (P<.001). There was no difference in CDA between PTs and OSs (P>.05).
CONCLUSIONS
Clinical diagnostic accuracy by PTs and OSs on patients with musculoskeletal injuries was significantly greater than for NOPs, with no difference noted between PTs and OSs. |
The effect of pyridostigmine on bronchial hyperreactivity. | We examined the effect of pyridostigmine (PY) at a dose of 30 mg orally three times a day on nonspecific bronchial hyperreactivity in ten normal nonsmokers (NNS), ten smokers (SM), and ten mild asthmatics (AS). We conducted a double-blind, placebo-controlled, crossover trial, randomly assigning subjects to receive either placebo (PL) or PY before undergoing bronchoprovocation challenge with eucapnic voluntary hyperventilation (EVH) using dry gas. Compliance with PY was confirmed by measuring red blood cell acetylcholinesterase (Achase) levels during both days of testing. While taking PL, the mean (± SEM) falls in FVC and FEV1 after the bronchoprovocation were as follows: NNS, 1.0 percent (± 0.6) FVC and 4.3 percent (± 1.0) FEV1; SM, 2.4 percent (± 1.1) FVC and 2.7 percent (± 1.3) FEV1; AS, 5.3 percent (± 2.3) FVC and 11.5 percent (± 2.8) FEV1. The mean decreases in FVC and FEV1 while taking PY were as follows: NNS, 1.8 percent (± 0.7) FVC and 4.3 percent (± 0.8) FEV1; SM, 3.8 percent (± 1.4) FVC and 5.2 percent (± 1.6) FEV1; AS, 4.4 percent (± 1.3) FVC and 11.8 percent (± 2.8) FEV1. Within each category, using a paired t test to compare the results on each day of testing, no statistically significant differences were noted. Pyridostigmine at the tested dose has no significant effect on nonspecific bronchial hyperreactivity in normal NNS, SM, or AS. |
Malware traffic classification using convolutional neural network for representation learning | Traffic classification is the first step for network anomaly detection or a network-based intrusion detection system, and it plays an important role in the network security domain. In this paper we first present a new taxonomy of traffic classification from an artificial intelligence perspective, and then propose a malware traffic classification method using a convolutional neural network that treats traffic data as images. This method needs no hand-designed features but directly takes raw traffic as the input to the classifier. To the best of our knowledge, this is the first attempt to apply a representation learning approach to malware traffic classification using raw traffic data. Through eight experiments we determined that the best type of traffic representation is the session with all layers. The method is validated in two scenarios with three types of classifiers, and the experimental results show that our proposed method can satisfy the accuracy requirements of practical application. |
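Taking "traffic data as images" typically means truncating or padding the raw bytes of a session to a fixed length and reshaping them into a square grayscale image. The sketch below assumes 784 bytes mapped to 28 x 28 and a deliberately small CNN; both are common choices for this kind of pipeline but are not necessarily the paper's exact configuration.

```python
import numpy as np
import torch
import torch.nn as nn

def session_to_image(raw_bytes, side=28):
    """Truncate/zero-pad a session's raw bytes to side*side values
    and reshape into a grayscale image scaled to [0, 1]."""
    buf = np.zeros(side * side, dtype=np.float32)
    data = np.frombuffer(raw_bytes[:side * side], dtype=np.uint8)
    buf[:len(data)] = data / 255.0
    return torch.from_numpy(buf).view(1, side, side)

class TrafficCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TrafficCNN(n_classes=10)
img = session_to_image(b"\x16\x03\x01\x02\x00" * 200)  # fake TLS-ish bytes
logits = model(img.unsqueeze(0))  # add batch dimension
print(logits.shape)  # torch.Size([1, 10])
```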
Research on Binarization of QR Code Image | In this paper, a new QR code binarization method is presented that exploits the characteristics of QR codes to improve the accuracy of QR code decoding in mobile image processing. Sauvola's thresholding algorithm is modified to handle QR code image binarization, i.e., threshold selection under uneven lighting conditions. Experiments show that the proposed algorithm achieves better recognition correctness than comparable methods. |
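Sauvola's local threshold, which the authors modify, is T = m * (1 + k * (s/R - 1)), with local mean m, local standard deviation s, dynamic range R (128 for 8-bit images) and a sensitivity constant k, typically 0.2-0.5. Below is a plain numpy/scipy version of the unmodified algorithm; the paper's QR-specific modification is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Classic Sauvola thresholding: a pixel becomes foreground (0) when
    it falls below a threshold adapted to the local mean and deviation."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return (img > threshold).astype(np.uint8) * 255  # white background

# Uneven-illumination toy example: module-like checkerboard plus gradient.
y, x = np.mgrid[0:64, 0:64]
fake_qr = ((x // 8 + y // 8) % 2) * 120 + x
print(sauvola_binarize(fake_qr).mean())
```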
Cost-effectiveness of orlistat for the treatment of overweight and obese patients in Ireland | OBJECTIVE
To assess the cost-effectiveness of orlistat plus a calorie-controlled diet compared with a calorie-controlled diet alone for the treatment of overweight and obese patients in Ireland.
DESIGN
Economic modelling techniques using published international efficacy data and Irish cost data were used to estimate the cost-effectiveness of orlistat in obese patients when only responders to treatment (i.e., those achieving 5% weight loss after 3 months of treatment) continue orlistat after 3 months. The model incorporated known relationships between weight loss and quality of life (utility) gain, and between weight loss and reduction in risk of type 2 diabetes (T2DM), to predict the impact of weight loss on quality-adjusted life-years (QALYs) gained and on the onset of T2DM. The costs associated with each treatment arm included the acquisition cost of orlistat, the cost of a calorie-controlled dietary programme, and monitoring and treatment costs associated with T2DM. An Irish health-care perspective was taken for the analysis, based on 2003 costs.
SUBJECTS
Weight loss data on 1386 patients from five pivotal orlistat clinical trials of at least 12 months duration were pooled (two American and three primarily European studies). All the studies were randomized, placebo-controlled, multicentre trials with a similar design. The inclusion criteria were BMI ≥28 kg/m², age ≥18 y, no diagnosed T2DM and the ability to lose 2.5 kg in weight during the introductory period.
MEASUREMENTS
Cost-effectiveness was modelled from these data and presented as incremental cost per QALY.
RESULTS
When orlistat treatment plus a calorie-controlled diet was compared with a calorie-controlled diet alone, the incremental cost per year was €478. The number needed to treat (NNT) to gain one QALY was estimated to be 35. The incremental cost per QALY gained was €16 954, within the range considered cost-effective. Sensitivity analysis demonstrated an incremental cost per QALY of €11 000–35 000 under a variety of assumptions.
CONCLUSIONS
Our model suggests that orlistat is effective and cost-effective in obese patients if, after 3 months of treatment, only treatment responders continue treatment. |
Mentoring and Beginning Teachers' Workplace Learning | Mentoring has been the focus of much attention in the recent literature on initial teacher education and induction and as such has become a ‘foundation stone’ of collaborative endeavours between universities and schools in the facilitation of teacher development. In 1998 some 220 beginning teachers and 245 supervisors and mentors in New South Wales government schools were surveyed and beginning teachers’ professional learning observed closely in six case study schools in different settings across the state. One-way analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) indicated the relevance of internship to initial teacher education programs and established the importance of mentoring support in beginning teachers’ professional learning in the induction year. The case studies also identified key practices, conditions and professional interactions that sustained transmission, transactional and transformational approaches to teacher learning. The complementary qualitative and quantitative methodology provided strong evidence of the importance of the mentoring strategy. |
Face it: The Impact of Gender on Social Media Images | Social websites like Facebook enable users to upload self-created digital images; it is therefore of interest to see how gender is performed in this domain. A panel used a literature review of pictorial features associated with gender traits, and a sample of Facebook pictures, to assess gender stereotypes present in Facebook images. Traits emerging in greater prominence in pictures of males included active, dominant, and independent. Those prominent with female users included attractive and dependent. These findings generally conform to gender stereotypes found in prior research and extend the research regarding stereotypical gender traits displayed in professional media depictions to self-selected social media displays. They also extend the research on gender differences in impression management generally, in both interpersonal communication and social media, to include gender-specific traits that are part of young men’s and women’s impression management. |
Multiple and Polynomial Recurrence for Abelian Actions in Infinite Measure | The $(C,F)$ -construction from a previous paper of the first author is applied to produce a number of funny rank one infinite measure preserving actions of discrete countable Abelian groups $G$ with ‘unusual’ multiple recurrence properties. In particular, the following are constructed for each $p\in\Bbb N\cup\{\infty\}$ :
(i) a $p$ -recurrent action $T=(T_g)_{g\in G}$ such that (if $p\ne\infty$ ) no one transformation $T_g$ is $(p+1)$ -recurrent for every element $g$ of infinite order; (ii) an action $T=(T_g)_{g\in G}$ such that for every finite sequence $g_1,\dots,g_r\in G$ without torsion the transformation $T_{g_1}\times\cdots\times T_{g_r}$ is ergodic, $p$ -recurrent but (if $p\ne\infty$ ) not $(p+1)$ -recurrent; (iii) a $p$ -polynomially recurrent $(C,F)$ -transformation which (if $p\ne\infty$ ) is not $(p+1)$ -recurrent. $\infty$ -recurrence here means multiple recurrence. Moreover, it is shown that there exists a $(C,F)$ -transformation which is rigid (and hence multiply recurrent) but not polynomially recurrent. Nevertheless, the subset of polynomially recurrent transformations is generic in the group of infinite measure preserving transformations endowed with the weak topology. |
LiTeWi: A combined term extraction and entity linking method for eliciting educational ontologies from textbooks | Major efforts have been conducted on ontology learning, that is, semiautomatic processes for the construction of domain ontologies from diverse sources of information. In the past few years, a research trend has focused on the construction of educational ontologies, that is, ontologies to be used for educational purposes. Identifying the terminology is crucial to building ontologies. Term extraction techniques allow the identification of domain-related terms from electronic resources. This paper presents LiTeWi, a novel method that combines current unsupervised term extraction approaches for creating educational ontologies for technology supported learning systems from electronic textbooks. LiTeWi uses Wikipedia as an additional information source. Wikipedia contains more than 30 million articles covering the terminology of nearly every domain in 288 languages, which makes it an appropriate generic corpus for term extraction. Furthermore, given that its content is available in several languages, it promotes both domain and language independence. LiTeWi is aimed at being used by teachers, who usually develop their didactic material from textbooks. To evaluate its performance, LiTeWi was tuned up using a textbook on object oriented programming and then tested with two textbooks of different domains—astronomy and molecular biology. |
E-government and organizational change: Reappraising the role of ICT and bureaucracy in public service delivery | There is a substantial literature on e-government that discusses information and communication technology (ICT) as an instrument for reducing the role of bureaucracy in government organizations. The purpose of this paper is to offer a critical discussion of this literature and to provide a complementary argument, which favors the use of ICT in the public sector to support the operations of bureaucratic organizations. Based on the findings of a case study – of the Venice municipality in Italy – the paper discusses how ICT can be used to support rather than eliminate bureaucracy. Using the concepts of e-bureaucracy and functional simplification and closure, the paper proposes evidence and support for the argument that bureaucracy should be preserved and enhanced where e-government policies are concerned. Functional simplification and closure are very valuable concepts for explaining why this should be a viable approach. |
The SILVA ribosomal RNA gene database project: improved data processing and web-based tools | SILVA (from Latin silva, forest, http://www.arb-silva.de) is a comprehensive web resource for up-to-date, quality-controlled databases of aligned ribosomal RNA (rRNA) gene sequences from the Bacteria, Archaea and Eukaryota domains and supplementary online services. The referred database release 111 (July 2012) contains 3 194 778 small subunit and 288 717 large subunit rRNA gene sequences. Since the initial description of the project, substantial new features have been introduced, including advanced quality control procedures, an improved rRNA gene aligner, online tools for probe and primer evaluation and optimized browsing, searching and downloading on the website. Furthermore, the extensively curated SILVA taxonomy and the new non-redundant SILVA datasets provide an ideal reference for high-throughput classification of data from next-generation sequencing approaches. |
Robot arm control exploiting natural dynamics | This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which make it robust to collisions and give it a smooth, compliant motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. |
Sentiment identification by incorporating syntax, semantics and context information | This paper proposes a method based on conditional random fields to incorporate sentence structure (syntax and semantics) and context information to identify sentiments of sentences within a document. It also proposes and evaluates two different active learning strategies for labeling sentiment data. The experiments with the proposed approach demonstrate a 5-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods. |
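Treating the sentences of a document as a sequence whose labels are jointly decoded is exactly what a linear-chain CRF offers; a minimal sketch with sklearn-crfsuite follows. The feature set (lexicon hits plus a neighboring-sentence cue standing in for "context") and all data are illustrative, not the paper's.

```python
import sklearn_crfsuite

POS = {"great", "good", "love"}
NEG = {"bad", "poor", "disappointing"}

def sent_features(sentences, i):
    words = set(sentences[i].lower().split())
    feats = {
        "pos_hit": bool(words & POS),
        "neg_hit": bool(words & NEG),
        "has_negation": "not" in words or "n't" in sentences[i],
    }
    if i > 0:  # context: sentiment cue from the previous sentence
        feats["prev_pos_hit"] = bool(set(sentences[i - 1].lower().split()) & POS)
    return feats

doc = ["The camera is great.", "But the battery is not good."]
X = [[sent_features(doc, i) for i in range(len(doc))]]
y = [["positive", "negative"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```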
Physical Limitations of Omnidirectional Antennas | The physical limitations of omnidirectional antennas are considered. With the use of the spherical wave functions to describe the field, the directivity gain G and the Q of an unspecified antenna are calculated under idealized conditions. To obtain the optimum performance, three criteria are used: (1) maximum gain for a given complexity of the antenna structure, (2) minimum Q, (3) maximum ratio of G/Q. It is found that an antenna of which the maximum dimension is 2a has the potentiality of a broad bandwidth provided that the gain is equal to or less than 4a/λ. To obtain a gain higher than this value, the Q of the antenna increases at an astronomical rate. The antenna which has potentially the broadest bandwidth of all omnidirectional antennas is one which has a radiation pattern corresponding to that of an infinitesimally small dipole. |
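The "astronomical" growth of Q referenced here is usually quoted, in later re-derivations of this result, as the small-antenna lower bound below, where k = 2π/λ is the wavenumber and a the radius of the smallest enclosing sphere. The exact form follows the commonly cited modern statement of the bound rather than the original paper's notation:

```latex
Q_{\min} \;=\; \frac{1}{(ka)^{3}} \;+\; \frac{1}{ka},
\qquad k = \frac{2\pi}{\lambda}
```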
Clinical use of cobicistat as a pharmacoenhancer of human immunodeficiency virus therapy | The pharmacoenhancement of plasma concentrations of protease inhibitors by coadministration of so-called boosters has been an integral part of antiretroviral therapy for human immunodeficiency virus (HIV) for 1.5 decades. Nearly all HIV protease inhibitors are combined with low-dose ritonavir or cobicistat, which are able to effectively inhibit the cytochrome-mediated metabolism of HIV protease inhibitors in the liver and thus enhance the plasma concentration and prolong the dosing interval of the antiretrovirally active combination partners. Therapies created in this way are clinically effective regimens, being convenient for patients and showing a high genetic barrier to viral resistance. In addition to ritonavir, which has been in use since 1996, cobicistat, a new pharmacoenhancer, has been approved and is widely used now. The outstanding property of cobicistat is its cytochrome P450 3A-selective inhibition of hepatic metabolism of antiretroviral drugs, in contrast with ritonavir, which not only inhibits but also induces a number of cytochrome P450 enzymes, UDP-glucuronosyltransferase, P-glycoprotein, and other cellular transporters. This article reviews the current literature, and compares the pharmacokinetics, pharmacodynamics, and safety of both pharmacoenhancers and discusses the clinical utility of cobicistat in up-to-date and future HIV therapy. |
Semantics-aware detection of targeted attacks: a survey | In today’s interconnected digital world, targeted attacks have become a serious threat to conventional computer systems and critical infrastructure alike. Many researchers contribute to the fight against network intrusions or malicious software by proposing novel detection systems or analysis methods. However, few of these solutions have a particular focus on Advanced Persistent Threats or similarly sophisticated multi-stage attacks. This turns finding domain-appropriate methodologies or developing new approaches into a major research challenge. To overcome these obstacles, we present a structured review of semantics-aware works that have a high potential for contributing to the analysis or detection of targeted attacks. We introduce a detailed literature evaluation schema in addition to a highly granular model for article categorization. Out of 123 identified papers, 60 were found to be relevant in the context of this study. The selected articles are comprehensively reviewed and assessed in accordance with Kitchenham’s guidelines for systematic literature reviews. In conclusion, we combine new insights and the status quo of current research into the concept of an ideal systemic approach capable of semantically processing and evaluating information from different observation points. |
The Futility of DNSSec | The lack of data authentication and integrity guarantees in the Domain Name System (DNS) facilitates a wide variety of malicious activity on the Internet today. DNSSec, a set of cryptographic extensions to DNS, has been proposed to address these threats. While DNSSec does provide certain security guarantees, here we argue that it does not provide what users really need, namely end-to-end authentication and integrity. Even worse, DNSSec makes DNS much less efficient and harder to administer, thus significantly compromising DNS’s availability—arguably its most important characteristic. In this paper we explain the structure of DNS, examine the threats against it, present the details of DNSSec, and analyze the benefits of DNSSec relative to its costs. This cost-benefit analysis clearly shows that DNSSec deployment is a futile effort, one that provides little long-term benefit yet has distinct, perhaps very significant costs. |
Text Simplification using Typed Dependencies: A Comparison of the Robustness of Different Generation Strategies | We present a framework for text simplification based on applying transformation rules to a typed dependency representation produced by the Stanford parser. We test two approaches to regeneration from typed dependencies: (a) gen-light, where the transformed dependency graphs are linearised using the word order and morphology of the original sentence, with any changes coded into the transformation rules, and (b) gen-heavy, where the Stanford dependencies are reduced to a DSyntS representation and sentences are generated formally using the RealPro surface realiser. The main contribution of this paper is to compare the robustness of these approaches in the presence of parsing errors, using both a single parse and an n-best parse setting in an overgenerate and rank approach. We find that the gen-light approach is robust to parser error, particularly in the n-best parse setting. On the other hand, parsing errors cause the realiser in the gen-heavy approach to order words and phrases in ways that are disliked by our evaluators. |
A 22-Week-Old Fetus with Nager Syndrome and Congenital Diaphragmatic Hernia due to a Novel SF3B4 Mutation. | Nager syndrome, or acrofacial dysostosis type 1 (AFD1), is a rare multiple malformation syndrome characterized by hypoplasia of first and second branchial arches derivatives and appendicular anomalies with variable involvement of the radial/axial ray. In 2012, AFD1 has been associated with dominant mutations in SF3B4. We report a 22-week-old fetus with AFD1 associated with diaphragmatic hernia due to a previously unreported SF3B4 mutation (c.35-2A>G). Defective diaphragmatic development is a rare manifestation in AFD1 as it is described in only 2 previous cases, with molecular confirmation in 1 of them. Our molecular finding adds a novel pathogenic splicing variant to the SF3B4 mutational spectrum and contributes to defining its prenatal/fetal phenotype. |
DIFusion: Fast Skip-Scan with Zero Space Overhead | Scan is a crucial operation in main-memory column-stores. It scans a column and returns a result bit vector indicating which records satisfy a filter predicate. ByteSlice is an in-memory data layout that chops data into multiple bytes and exploits an early-stop capability via high-order byte comparisons. As column widths are usually not multiples of a byte, the last byte of ByteSlice is padded with 0's, wasting memory bandwidth and computation power. To fully leverage these resources, we propose to weave a secondary index into the vacant bits (i.e., bits originally padded with 0's), forming our new layout coined DIFusion (Data Index Fusion). DIFusion enables skip-scan, a new fast scan that inherits the early-stopping capability from ByteSlice and at the same time possesses the data-skipping ability of an index with zero space overhead. Empirical results show that skip-scan on DIFusion outperforms scan on ByteSlice. |
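The early-stop idea that DIFusion inherits can be seen in a scalar sketch: compare the high-order byte slice of every code against the predicate's high byte, and only consult the low-order slice for the (usually few) ties. This simplified version ignores SIMD and the index bits that DIFusion weaves into the padding.

```python
import numpy as np

def byteslice_lt(hi, lo, v):
    """Evaluate `code < v` over a column stored as two byte slices.
    hi/lo: uint8 arrays of high and low bytes; v: 16-bit constant."""
    v_hi, v_lo = v >> 8, v & 0xFF
    result = hi < v_hi                  # decided by the first slice...
    ties = hi == v_hi                   # ...except where high bytes tie
    result[ties] = lo[ties] < v_lo      # only ties touch the second slice
    return result

codes = np.array([0x0123, 0x0456, 0x0411, 0x7FFF], dtype=np.uint16)
hi, lo = (codes >> 8).astype(np.uint8), (codes & 0xFF).astype(np.uint8)
print(byteslice_lt(hi, lo, 0x0412))  # [ True False  True False]
```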
Mining high utility episodes in complex event sequences | Frequent episode mining (FEM) is an interesting research topic in data mining with a wide range of applications. However, the traditional framework of FEM treats all events as having the same importance/utility and assumes that the same type of event appears at most once at any time point. These simplifying assumptions do not reflect the characteristics of scenarios in real applications, and thus the useful information of episodes in terms of utilities such as profits is lost. Furthermore, most studies on FEM focused on mining episodes in simple event sequences and few considered the scenario of complex event sequences, where different events can occur simultaneously. To address these issues, in this paper, we incorporate the concept of utility into episode mining and address a new problem of mining high utility episodes from complex event sequences, which has not been explored so far. In the proposed framework, the importance/utility of different events is considered and multiple events can appear simultaneously. Several novel features are incorporated into the proposed framework to resolve the challenges raised by this new problem, such as the absence of the anti-monotone property and the huge set of candidate episodes. Moreover, an efficient algorithm named UP-Span (Utility ePisodes mining by Spanning prefixes) is proposed for mining high utility episodes, with several strategies incorporated for pruning the search space to achieve high efficiency. Experimental results on real and synthetic datasets show that UP-Span has excellent performance and serves as an effective solution to the new problem of mining high utility episodes from complex event sequences. |
Analytical solutions to one-dimensional advection–diffusion equation with variable coefficients in semi-infinite media | In the present study the one-dimensional advection–diffusion equation with variable coefficients is solved for three dispersion problems: (i) solute dispersion along steady flow through an inhomogeneous medium, (ii) temporally dependent solute dispersion along uniform flow through a homogeneous medium and (iii) solute dispersion along temporally dependent flow through an inhomogeneous medium. Continuous point sources of uniform and increasing nature are considered in an initially solute free semi-infinite medium. Analytical solutions are obtained using the Laplace transformation technique. The inhomogeneity of the medium is expressed by spatially dependent flow, whose velocity is defined by a function interpolated linearly in a finite domain in which concentration values are to be evaluated. The dispersion is considered proportional to the square of the spatially dependent velocity. The solutions of the third problem may help understand the concentration dispersion pattern along a sinusoidally varying unsteady flow through an inhomogeneous medium. New independent variables are introduced through separate transformations, in terms of which the advection–diffusion equation in each problem is reduced to one with constant coefficients. The effects of spatial and temporal dependence on the concentration dispersion are studied with the help of the respective parameters and are shown graphically. |
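The governing equation, written in the standard one-dimensional conservative form with spatially and temporally dependent coefficients and with the dispersion taken proportional to the square of the velocity as stated above, reads as follows, where C is concentration, u the flow velocity, D the dispersion coefficient and a a proportionality constant (the symbols are the conventional ones, not necessarily the paper's notation):

```latex
\frac{\partial C}{\partial t}
  = \frac{\partial}{\partial x}\!\left( D(x,t)\,\frac{\partial C}{\partial x} \right)
  - \frac{\partial}{\partial x}\bigl( u(x,t)\,C \bigr),
\qquad D(x,t) = a\,u(x,t)^{2}
```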
Average Current-Mode Control Scheme for a Quadratic Buck Converter With a Single Switch | A controller for a quadratic buck converter is given using average current-mode control. The converter has two filters; thus, it will exhibit fourth-order characteristic dynamics. The proposed scheme employs an inner loop that uses the current of the first inductor. This current can also be used for overload protection; therefore, the full benefits of current-mode control are maintained. For the outer loop, a conventional controller which provides good regulation characteristics is used. The design-oriented analytic results allow the designer to easily pinpoint the control circuit parameters that optimize the converter's performance. Experimental results are given for a 28 W switching regulator where current-mode control and voltage-mode control are compared. |
Development of a Digitalization Maturity Model for the Manufacturing Sector | An Industry 4.0 framework, showing every possible action towards the complete digitalization of a company considering both strategic and technological aspects, is proposed. The analysis and re-elaboration of the state of the art allow many valuable but scattered and narrow contributions to be coherently integrated into a holistic view covering the following dimensions: value drivers, levers of action, processes and enabling technologies. Based on this framework, the consequences of digitalization in shaping a new working environment are analyzed with a focus on workforce impact, looking at the modification of both the tasks performed and the competences required, as well as at more general social consequences (job satisfaction, work-life balance, new forms of employment). To support the application of the proposed roadmap, a Digitalization Maturity Model is developed to assess the state of a company's journey towards Industry 4.0 along the following dimensions: Strategy, Processes, Technologies, Products & Services and People.
Are measures of self-esteem, neuroticism, locus of control, and generalized self-efficacy indicators of a common core construct? | The authors present results of 4 studies that seek to determine the discriminant and incremental validity of the 3 most widely studied traits in psychology (self-esteem, neuroticism, and locus of control) along with a 4th, closely related trait, generalized self-efficacy. Meta-analytic results indicated that measures of the 4 traits were strongly related. Results also demonstrated that a single factor explained the relationships among measures of the 4 traits. The 4 trait measures display relatively poor discriminant validity, and each accounted for little incremental variance in predicting external criteria relative to the higher order construct. In light of these results, the authors suggest that measures purporting to assess self-esteem, locus of control, neuroticism, and generalized self-efficacy may be markers of the same higher order concept.
Sensitivity-Based Model of Low Voltage Distribution Systems with Distributed Energy Resources | A key issue in Low Voltage (LV) distribution systems is to identify strategies for optimal management and control in the presence of Distributed Energy Resources (DERs). To reduce the number of variables to be monitored and controlled, virtual levels of aggregation, called Virtual Microgrids (VMs), are introduced and identified by using new models of the distribution system. To this aim, this paper, revisiting and improving the approach outlined in a conference paper, presents a sensitivity-based model of an LV distribution system, supplied by a Medium/Low Voltage (MV/LV) substation and composed of several feeders, which is suitable for the optimal management and control of the grid and for VM definition. The main features of the proposed method are: it evaluates the sensitivity coefficients in closed form; it provides an overview of the sensitivity of the network to the variations of each DER connected to the grid; and it presents a limited computational burden. A comparison of the proposed method with both exact load flow solutions and a perturb-and-observe method is discussed in a case study. Finally, the method is used to evaluate the impact of the DERs on the nodal voltages of the network.
STATE OF THE ART OF DV/DT AND DI/DT CONTROL OF INSULATED GATE POWER SWITCHES | Pierre Lefranc, Dominique Bergogne
Towards a Verified, General-Purpose Operating System Kernel | Operating system kernels are complex, critical, and difficult to test systems. The imperative nature of operating system implementations, the programming languages chosen, and the usually selected implementation style combine to make verification of a general-purpose operating system kernel impractical. While security policies have been verified against models of general-purpose operating systems, no verification has ever been accomplished for a general-purpose operating system implementation. This paper summarizes how we are attempting to create a verified general-purpose operating system implementation for Coyotos, the successor to the EROS system, and why we believe that there is a reasonable chance of success.
Embedding Images and Sentences in a Common Space with a Recurrent Capsule Network | Associating texts and images is an easy and intuitive task for a human being, but it raises some issues if we want that task to be accomplished by a computer. Among these issues, there is the problem of finding a common representation for images and sentences. Based on recent research about capsule networks, we define a novel model to tackle that issue. This model is trained and compared to other recent models on the Flickr8k database on Image Retrieval and Image Annotation (or Sentence Retrieval) tasks. We propose a new recurrent architecture inspired from capsule networks to replace the traditional LSTM/GRU and show how it leads to improved performances. Moreover, we show that the interest of our model goes beyond its performances and includes its intrinsic characteristics, which can explain why it performs particularly well on the Image Annotation task. In addition, we propose a routing procedure between capsules which is fully learned during the training of our model. |
Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations | This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out more safely, cheaply, and manageably. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested with great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.
Improving the decision-making process in the structural modification of drug candidates: enhancing metabolic stability. | The activity-exposure-toxicity relationship, which can be described as "the rule of three", presents the single most difficult challenge in the design of drug candidates and their subsequent advancement to the development stage. ADME studies are widely used in drug discovery to optimize the balance of properties necessary to convert lead candidates into drugs that are safe and effective for humans. Metabolite characterization has become one of the key drivers of the drug discovery process, helping to optimize ADME properties and increase the success rate for drugs. Various strategies can influence drug design in the decision-making process in the structural modification of drug candidates to reduce metabolic instability. |
Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial | There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments across Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients being recruited into the intervention arm and 1/3 into the control arm. 2:1 randomisation is appropriate as within the intervention arm there is a screening procedure where only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this clinical trial are to assess the impact of audiovisual biofeedback on breathing motion, the patient experience and clinical confidence in the system, clinical workflow, treatment margins, and toxicity outcomes. This clinical trial marks an important milestone in breathing guidance studies as it will be the first randomised, controlled trial providing the most comprehensive evaluation of the clinical impact of breathing guidance on cancer radiation therapy to date. This study is powered to determine the impact of AV biofeedback on breathing regularity and medical image quality. Objectives such as determining the indications and contra-indications for the use of AV biofeedback, evaluation of patient experience, radiation toxicity occurrence and severity, and clinician confidence will shed light on the design of future phase III clinical trials. This trial has been registered with the Australian New Zealand Clinical Trials Registry (ANZCTR); its trial ID is ACTRN12613001177741.
Mechanomyography signals in spastic muscle and the correlation with the modified Ashworth scale | The modified Ashworth scale (MAS) is the most widely used measurement technique to assess levels of spasticity. In MAS, the evaluator grades spasticity based on his/her subjective analysis of the muscular resistance during passive stretching; it is therefore a subjective scale. Mechanomyography (MMG) registers the vibrations generated by muscle contraction and stretching events that propagate through the tissue to the surface of the skin. With this in mind, this study aimed to investigate possible correlations between the MMG signal and muscle spasticity levels determined by MAS. We evaluated 34 limbs considered spastic by MAS, including upper and lower limbs of 22 individuals of both sexes. Simultaneously, the MMG signals of the spastic muscle group (agonists) were acquired. The features investigated were, in the time domain, the median energy (MMGme) of the MMG Z-axis (perpendicular to the muscle fibers) and, in the frequency domain, the median frequency (MMGmf). The Kruskal-Wallis test (p < 0.001) determined that there were significant differences between intergroup MAS spasticity levels for MMGme. There was a high linear correlation between MMGme and MAS (R2 = 0.9557) and also a high correlation as indicated by the Spearman test (ρ = 0.9856; p < 0.001). In the spectral analysis, the Kruskal-Wallis test (p = 0.0059) showed that MMGmf did not present significant differences between MAS spasticity levels. There was a moderate linear correlation between MAS and MMGmf (R2 = 0.4883; Spearman test: ρ = 0.4590; p < 0.001). Between the two investigated features, we conclude that the median energy is the more viable feature for evaluating spasticity due to its strong correlation with the MAS.
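A rough sketch of how such features might be computed is given below; the paper's exact definitions of median energy and median frequency may differ, so treat both functions as assumed readings of the abstract.

```python
# Illustrative sketch (assumed feature definitions): median frequency of an
# MMG signal from its power spectral density, plus a simple time-domain
# "median energy" summary. The toy signal stands in for a real Z-axis MMG.
import numpy as np
from scipy.signal import welch

def median_frequency(x, fs):
    """Frequency below which half of the total spectral power lies."""
    freqs, psd = welch(x, fs=fs)
    cumulative = np.cumsum(psd)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]

def median_energy(x):
    """Median of the squared signal, one plausible 'median energy' reading."""
    return np.median(x ** 2)

fs = 1000.0                                   # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
mmg_z = np.sin(2 * np.pi * 15 * t) + 0.3 * np.random.randn(t.size)

print(median_frequency(mmg_z, fs), median_energy(mmg_z))
```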
Practical Data Breakpoints: Design and Implementation | A data breakpoint associates debugging actions with programmer-specified conditions on the memory state of an executing program. Data breakpoints provide a means for discovering program bugs that are tedious or impossible to isolate using control breakpoints alone. In practice, programmers rarely use data breakpoints, because they are either unimplemented or prohibitively slow in available debugging software. In this paper, we present the design and implementation of a practical data breakpoint facility.
A data breakpoint facility must monitor all memory updates performed by the program being debugged. We implemented and evaluated two complementary techniques for reducing the overhead of monitoring memory updates. First, we checked write instructions by inserting checking code directly into the program being debugged. The checks use a segmented bitmap data structure that minimizes address lookup complexity. Second, we developed data flow algorithms that eliminate checks on some classes of write instructions but may increase the complexity of the remaining checks.
We evaluated these techniques on the SPARC using the SPEC benchmarks. Checking each write instruction using a segmented bitmap achieved an average overhead of 42%. This overhead is independent of the number of breakpoints in use. Data flow analysis eliminated an average of 79% of the dynamic write checks. For scientific programs such as the NAS kernels, analysis reduced write checks by a factor of ten or more. On the SPARC these optimizations reduced the average overhead to 25%. |
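A minimal model of the segmented-bitmap check described above is sketched here. The paper inlines this check as native code in the debugged program; this Python model only illustrates the two-level lookup, and the segment size is an assumption.

```python
# Toy model of a segmented bitmap for monitored addresses: a segment table
# points at per-segment bitmaps, so writes into unmonitored regions cost a
# single table lookup. Segment size is an illustrative choice.
SEGMENT_BITS = 16                 # each segment covers 2^16 addresses
SEGMENT_SIZE = 1 << SEGMENT_BITS

class SegmentedBitmap:
    def __init__(self):
        self.segments = {}        # segment index -> bytearray bitmap

    def watch(self, addr):
        """Mark an address as carrying a data breakpoint."""
        seg, off = addr >> SEGMENT_BITS, addr & (SEGMENT_SIZE - 1)
        bitmap = self.segments.setdefault(seg, bytearray(SEGMENT_SIZE // 8))
        bitmap[off >> 3] |= 1 << (off & 7)

    def check_write(self, addr):
        """The per-write check: false almost always, so it must be cheap."""
        seg, off = addr >> SEGMENT_BITS, addr & (SEGMENT_SIZE - 1)
        bitmap = self.segments.get(seg)
        return bool(bitmap and bitmap[off >> 3] & (1 << (off & 7)))

bp = SegmentedBitmap()
bp.watch(0x1234)
print(bp.check_write(0x1234), bp.check_write(0x1235))  # True False
```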
Erythromycin as an alternative to reduce interfering extra-cardiac activity in myocardial perfusion imaging | OBJECTIVES
We sought to determine whether taking oral erythromycin prior to SPECT myocardial perfusion imaging with Tc99m-sestamibi would reduce the amount of interfering extra-cardiac activity and improve the image quality.
METHODS
A total of 96 patients who were routinely referred for myocardial perfusion imaging were randomly assigned to one of two groups. Patients in group A received 500 mg of non-enterically coated erythromycin orally one hour prior to image acquisition (45 patients). Patients in group B received diluted lemon juice which comprises the current standard of care in our department (51 patients). A two-day protocol was followed and study participants received the same intervention on both days. Planar images of both the stress and rest images were assessed visually by three experienced nuclear medicine physicians for the presence of interfering extracardiac activity. Physicians were blinded to the detail of the protocol and independently assessed the images.
RESULTS
The qualitative results favoured lemon juice in reducing the amount of interfering extra-cardiac activity. The overall incidence of interfering extra-cardiac activity was 46.15% in the lemon juice group vs 55.56% in the erythromycin group. However, this difference was not found to be statistically significant (p = 0.36). The use of a MYO:EXT ratio similar to the one described by Peace and Lloyd appeared promising in quantifying interfering extra-cardiac activity.
CONCLUSION
The difference between the effect of erythromycin and lemon juice on interfering extra-cardiac activity appears statistically insignificant and erythromycin could therefore be considered as a suitable alternative to lemon juice. |
Automatic Question Generation from Sentences | Question Generation (QG) and Question Answering (QA) are among the many challenges for natural language understanding and interfaces. Since humans need to ask good questions, automated QG systems may assist them in meeting useful inquiry needs. In this paper, we consider an automatic Sentence-to-Question generation task, where, given a sentence, the QG system generates a set of questions for which the sentence contains, implies, or needs answers. To facilitate the question generation task, we build elementary sentences from the input complex sentences using a syntactic parser. A named entity recognizer and a part-of-speech tagger are applied to each of these sentences to encode necessary information. We classify the sentences based on their subject, verb, object and preposition to determine the possible type of questions to be generated. We use the TREC-2007 (Question Answering Track) dataset for our experiments and evaluation. Keywords: question generation, syntactic parser, elementary sentences, POS tagging. |
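One step of the pipeline described above, choosing a question type from the named-entity tag of the target phrase, can be illustrated with a toy sketch; the tag set, mapping, and surface transformation below are our assumptions, not the system's actual rules.

```python
# Toy illustration: map the NE tag of an answer phrase to a wh-word and
# apply a naive surface transformation for a subject answer phrase.
NE_TO_WH = {
    "PERSON": "Who",
    "LOCATION": "Where",
    "DATE": "When",
    "ORGANIZATION": "What organization",
}

def question_from(elementary_sentence, answer_phrase, ne_tag):
    wh = NE_TO_WH.get(ne_tag, "What")
    remainder = elementary_sentence.replace(answer_phrase, "").strip().rstrip(".")
    return f"{wh} {remainder}?"

print(question_from("Alan Turing founded computer science.",
                    "Alan Turing", "PERSON"))
# -> "Who founded computer science?"
```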
Effectiveness of cognitive-functional (Cog-Fun) intervention with children with attention deficit hyperactivity disorder: a pilot study. | The executive function (EF) deficits of children with attention deficit hyperactivity disorder (ADHD) hinder their performance of complex daily functions. Despite the existing evidence-based pharmacological interventions for ADHD symptoms, no intervention has yet been found that deals directly with EFs in daily tasks. Fourteen children and their parents participated in the Cognitive-Functional (Cog-Fun) program in occupational therapy, which is tailored to the executive dysfunction of ADHD and focuses on enabling cognitive strategies for occupational performance. The study included initial assessment of EFs (Behavior Rating Inventory of Executive Functions; Tower of London-DX), occupational performance (Canadian Occupational Performance Measure), 10 sessions of Cog-Fun intervention with each child-parent dyad, and postintervention and 3-month follow-up assessments. We found significant improvements with medium to large effects on outcome measures after intervention, and most effects were maintained at follow-up. The findings warrant controlled studies examining the effectiveness of this intervention for children with ADHD.
A modified Gabor filter design method for fingerprint image enhancement | Fingerprint image enhancement is an essential preprocessing step in fingerprint recognition applications. In this paper, we propose a novel filter design method for fingerprint image enhancement, primarily inspired from the traditional Gabor filter (TGF). The previous fingerprint image enhancement methods based on TGF banks have some drawbacks in their image-dependent parameter selection strategy, which leads to artifacts in some cases. To address this issue, we develop an improved version of the TGF, called the modified Gabor filter (MGF). Its parameter selection scheme is image-independent. The remarkable advantages of our MGF over the TGF consist in preserving fingerprint image structure and achieving image enhancement consistency. Experimental results indicate that the proposed MGF enhancement algorithm can reduce the FRR of a fingerprint matcher by approximately 2% at a FAR of 0.01%.
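To ground the comparison, here is a sketch of the traditional Gabor filter (TGF) side using OpenCV; the parameter values are illustrative, and the paper's image-independent MGF parameter scheme is not reproduced.

```python
# Sketch of fingerprint enhancement with a small bank of traditional Gabor
# filters: filter at several orientations and keep the maximum response.
# All parameter values below are illustrative assumptions.
import cv2
import numpy as np

def enhance_with_gabor_bank(img, n_orientations=8):
    img = img.astype(np.float32)
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations          # filter orientation
        kernel = cv2.getGaborKernel(
            ksize=(21, 21), sigma=4.0, theta=theta,
            lambd=10.0,   # wavelength, roughly the local ridge period
            gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    return np.max(np.stack(responses), axis=0)

fingerprint = cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE)  # assumed input
enhanced = enhance_with_gabor_bank(fingerprint)
```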
A multicenter, phase I, dose-escalation study to assess the safety, tolerability, and pharmacokinetics of etirinotecan pegol in patients with refractory solid tumors. | PURPOSE
This study was designed to establish the maximum tolerated dose (MTD) and to evaluate tolerability, pharmacokinetics, and antitumor activity of etirinotecan pegol.
EXPERIMENTAL DESIGN
Patients with refractory solid malignancies were enrolled and assigned to escalating-dose cohorts. Patients received 1 infusion of etirinotecan pegol weekly 3 times every 4 weeks (w × 3q4w), or every 14 days (q14d), or every 21 days (q21d), with MTD as the primary end point using a standard 3 + 3 design.
RESULTS
Seventy-six patients were entered onto 3 dosing schedules (58-245 mg/m²). The MTD was 115 mg/m² for the w × 3q4w schedule and 145 mg/m² for both the q14d and q21d schedules. Most adverse events related to study drug were gastrointestinal disorders and were more frequent at higher doses of etirinotecan pegol. Late-onset diarrhea was observed in some patients, the frequency of which generally correlated with dose density. The cholinergic diarrhea commonly seen with irinotecan treatment did not occur in patients treated with etirinotecan pegol. Etirinotecan pegol administration resulted in sustained and controlled systemic exposure to SN-38, which had a mean half-life of approximately 50 days. Overall, the pharmacokinetics of etirinotecan pegol are predictable and do not require complex dosing adjustments. Confirmed partial responses were observed in 8 patients with breast, colon, lung (small and squamous cell), bladder, cervical, and neuroendocrine cancer.
CONCLUSION
Etirinotecan pegol showed substantial antitumor activity in patients with various solid tumors and a somewhat different safety profile compared with the irinotecan historical profile. The MTD recommended for phase II clinical trials is 145 mg/m² q14d or q21d. |
ADOPTING AN EXTERNAL FOCUS OF ATTENTION FACILITATES MOTOR LEARNING IN CHILDREN WITH ATTENTION DEFICIT HYPERACTIVITY DISORDER | The purpose of the present study was to investigate if children with attention deficit hyperactivity disorder (ADHD) would show enhanced motor skill learning with instructions to adopt an external focus of attention (i.e. on the movement effect) rather than an internal focus (i.e. on the movements themselves). The task involved throwing tennis balls with the dominant arm at a circular target that was placed on the floor at a distance of three meters. Twenty children with ADHD, ranging in age from 8 to 11 years, were randomly assigned to either an external or internal focus group. Participants performed 180 practice trials with focus instructions and reminders before each block of 30 trials. Learning was assessed 48 hours after practice. The external focus group demonstrated more effective learning than the internal focus group. This finding has implications for applied settings that involve sports or physical activity with children who have ADHD. |
Malware behaviour analysis | Several malware analysis techniques suppose that the disassembled code of a piece of malware is available, which is however not always possible. This paper proposes a flexible and automated approach to extract malware behaviour by observing all the system function calls performed in a virtualized execution environment. Similarities and distances between malware behaviours are computed, which allows malware behaviours to be classified. The main features of our approach reside in coupling a sequence alignment method to compute similarities and leveraging the Hellinger distance to compute associated distances. We also show how the accuracy of the classification process can be improved using a phylogenetic tree. Such a tree shows common functionalities and the evolution of malware. This is relevant when dealing with obfuscated malware variants that often have similar behaviour. The phylogenetic trees were assessed using known antivirus results, and only a few malware behaviours were wrongly classified.
The Exact Rate-Memory Tradeoff for Caching With Uncoded Prefetching | We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination. |
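For reference, the following is our transcription of the peak-rate expression commonly cited for this result; the notation (N files, K users, cache size M) is ours and should be checked against the paper before use.

```latex
% Commonly cited form of the optimal peak rate with uncoded prefetching
% (our transcription): at the integer points $t = KM/N \in \{0,1,\dots,K\}$,
\[
  R^{*}(t) \;=\;
  \frac{\binom{K}{t+1} - \binom{K - \min(N, K)}{t+1}}{\binom{K}{t}},
\]
% with the full tradeoff given by memory sharing (the lower convex
% envelope) between these points. The subtracted term is the saving
% obtained by exploiting commonality among user demands when $N < K$,
% i.e., when several users necessarily request the same file.
```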
Enforcing k-anonymity in Web Mail Auditing | We study the problem of k-anonymization of mail messages in the realistic scenario of auditing mail traffic in a major commercial Web mail service. Mail auditing is necessary in various Web mail debugging and quality assurance activities, such as anti-spam or the qualitative evaluation of novel mail features. It is conducted by trained professionals, often referred to as "auditors", who are shown messages that could expose personally identifiable information. We address here the challenge of k-anonymizing such messages, focusing on machine-generated mail messages that represent more than 90% of today's mail traffic. We introduce a novel message signature, Mail-Hash, specifically tailored to identifying structurally-similar messages, which allows us to put such messages in the same equivalence class. We then define a process that generates, for each class, masked mail samples that can be shown to auditors, while guaranteeing the k-anonymity of users. The productivity of auditors is measured by the amount of non-hidden mail content they can see every day, while considering normal working conditions, which set a limit to the number of mail samples they can review. In addition, we consider k-anonymity over time since, by definition of k-anonymity, every new release places additional constraints on the assignment of samples. We describe in detail the results we obtained over actual Yahoo mail traffic, and thus demonstrate that our methods are feasible at Web mail scale. Given the constantly growing concern of users over their email being scanned by others, we argue that it is critical to devise such algorithms that guarantee k-anonymity, and to implement associated processes in order to restore the trust of mail users.
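The overall gating logic can be sketched as follows; this is our reconstruction for illustration, and the actual Mail-Hash signature is considerably more elaborate than the simple normalisation used here.

```python
# Assumed sketch: key messages by a structural signature so that
# structurally-similar (machine-generated) messages collide, then release
# a class to auditors only if it covers at least k distinct users.
import hashlib
import re
from collections import defaultdict

def structural_signature(body: str) -> str:
    """Collapse variable content so structurally-similar messages collide."""
    skeleton = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<EMAIL>", body)  # emails first
    skeleton = re.sub(r"\d+", "<NUM>", skeleton)                    # then numbers
    return hashlib.sha256(skeleton.encode()).hexdigest()

def releasable_classes(messages, k):
    """messages: list of (user_id, body); return signatures safe to show."""
    users_per_class = defaultdict(set)
    for user, body in messages:
        users_per_class[structural_signature(body)].add(user)
    return {sig for sig, users in users_per_class.items() if len(users) >= k}

msgs = [("u1", "Your order 123 shipped"), ("u2", "Your order 987 shipped"),
        ("u3", "Hi, lunch tomorrow?")]
print(len(releasable_classes(msgs, k=2)))  # 1: the order-notification class
```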
Large-Scale Cost-Based Abduction in Full-Fledged First-Order Predicate Logic with Cutting Plane Inference | Abduction is inference to the best explanation. Abduction has long been studied intensively in a wide range of contexts, from artificial intelligence research to cognitive science. While recent advances in large-scale knowledge acquisition warrant applying abduction with large knowledge bases to real-life problems, as of yet no existing approach to abduction has achieved both the efficiency and formal expressiveness necessary to be a practical solution for large-scale reasoning on real-life problems. The contributions of our work are the following: (i) we reformulate abduction as an Integer Linear Programming (ILP) optimization problem, providing full support for first-order predicate logic (FOPL); (ii) we employ Cutting Plane Inference, which is an iterative optimization strategy developed in Operations Research for making abductive reasoning in full-fledged FOPL tractable, showing its efficiency on a real-life dataset; (iii) the abductive inference engine presented in this paper is made publicly available. |
Sequential Pattern Mining: Survey and Current Research Challenges | The concept of sequence data mining was first introduced by Rakesh Agrawal and Ramakrishnan Srikant in the year 1995. The problem was first introduced in the context of market analysis. It aimed to retrieve frequent patterns in the sequences of products purchased by customers through time-ordered transactions. Later on its application was extended to complex applications like telecommunication, network detection, DNA research, etc. Several algorithms were proposed. The very first was the Apriori algorithm, which was put forward by the founders themselves. Later, more scalable algorithms for complex applications were developed, e.g. GSP, SPADE, PrefixSpan etc. The area has undergone considerable advancement in a short span since its introduction. In this paper, a systematic survey of sequential pattern mining algorithms is performed. The paper investigates these algorithms by classifying the study of sequential pattern-mining algorithms into two broad categories: first, on the basis of algorithms designed to increase the efficiency of mining and, second, on the basis of various extensions of sequential pattern mining designed for certain applications. At the end, a comparative analysis is done on the basis of important key features supported by the various algorithms, and current research challenges in this field of data mining are discussed.
Traffic Signs Detection Based on Faster R-CNN | In this paper, we use an advanced method called Faster R-CNN to detect traffic signs. This method represents the state of the art in object recognition: it no longer requires image features to be extracted manually and can generate candidate region proposals automatically. Our experiment is based on a traffic sign detection competition held in 2016 by CCF and the UISEE company. The mAP (mean average precision) value of the result is 0.3449, which shows that Faster R-CNN can indeed be applied in this field. Even though the experiment did not achieve the best results, we explore a new method in the area of traffic sign detection and believe better results can be achieved in the future.
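A hedged sketch of applying an off-the-shelf Faster R-CNN detector is shown below; the paper's own training setup, dataset, and class list are not reproduced, and the file name is a placeholder.

```python
# Inference with torchvision's reference Faster R-CNN implementation.
# Depending on the torchvision version, `weights=...` may be required in
# place of `pretrained=True`.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("road_scene.jpg").convert("RGB")    # assumed input file
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]          # boxes, labels, scores

keep = predictions["scores"] > 0.5                      # confidence threshold
print(predictions["boxes"][keep], predictions["labels"][keep])
```

For a traffic-sign task one would typically fine-tune the detector's head on sign classes rather than use the COCO-pretrained labels directly.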
Conditional Adversarial Network for Semantic Segmentation of Brain Tumor | Automated medical image analysis has significant value in the diagnosis and treatment of lesions. Brain tumor segmentation has special importance and difficulty due to the differences in appearance and shape of the different tumor regions in magnetic resonance images. Additionally, the data sets are heterogeneous and usually limited in size in comparison with computer vision problems. The recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. We exploit a conditional Generative Adversarial Network (cGAN) and train a semantic segmentation Convolutional Neural Network (CNN) along with an adversarial network that discriminates between segmentation maps coming from the ground truth and from the segmentation network, for the BraTS 2017 segmentation task [15,4,2,3]. We also propose an end-to-end trainable CNN for survival-day prediction based on deep learning techniques for the BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach for both tasks. On validation data, the proposed model achieves a DICE score, sensitivity and specificity of 0.68, 0.99 and 0.98, respectively, for the whole tumor, according to the online judging system.
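The adversarial training signal described above can be sketched minimally as follows; the architectures, pairing scheme, and loss weighting are our assumptions, not the paper's exact setup.

```python
# Minimal sketch of cGAN-style segmentation training: the segmenter
# minimises cross-entropy plus a term for fooling a discriminator that
# sees (image, segmentation-map) pairs. `disc` is any torch.nn.Module.
import torch
import torch.nn.functional as F

def segmenter_loss(seg_logits, target, disc, image, adv_weight=0.1):
    """Voxel-wise cross-entropy + adversarial term (weighting assumed)."""
    ce = F.cross_entropy(seg_logits, target)
    fake_pair = torch.cat([image, torch.softmax(seg_logits, dim=1)], dim=1)
    pred = disc(fake_pair)
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    return ce + adv_weight * adv

def discriminator_loss(disc, image, seg_logits, target_onehot):
    """Discriminator labels ground-truth pairs real, predicted pairs fake."""
    real_pair = torch.cat([image, target_onehot], dim=1)
    fake_pair = torch.cat([image, torch.softmax(seg_logits, dim=1)],
                          dim=1).detach()
    pred_real, pred_fake = disc(real_pair), disc(fake_pair)
    real = F.binary_cross_entropy_with_logits(pred_real,
                                              torch.ones_like(pred_real))
    fake = F.binary_cross_entropy_with_logits(pred_fake,
                                              torch.zeros_like(pred_fake))
    return real + fake
```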
Synergistic Effect of Silver Nanoparticles with Neomycin or Gentamicin Antibiotics on Mastitis-Causing Staphylococcus aureus | Objective: Mastitis is one of the most costly diseases in dairy cows, as it greatly decreases milk production. The use of antibiotics in cattle leads to antibiotic resistance in mastitis-causing bacteria. The present study aimed to investigate the synergistic effect of silver nanoparticles (AgNPs) with the neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from cows with clinical and subclinical mastitis during the August-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion and microdilution tests were performed to determine the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). The Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from the 46 milk samples and were confirmed by PCR. Based on the disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to the FIC index, the gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, the neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain the antimicrobial mechanism of the combination.
Facial Rejuvenation With Fine-Barbed Threads: The Simple Miz Lift | Since the invention of the first barbed (short) suture by Sulamanidze in the late 1990s, different techniques have been described, including Woffles (long) thread lifting, Waptos suture lifting, Isse unidirectional barbed-thread lifting, and silhouette lifting. The authors have implemented a newly developed type of thread integrating more small cogs and a soft and fragile feel of the material (medical-grade polypropylene: 16.5 cm long, 15 cm of length covered with cogs, and 0.40 mm in diameter). This study aimed to describe the authors' thread and the surgical techniques they have adopted to counteract the descent and laxity of facial soft tissues. A retrospective chart review was performed over a period of 2 years, from March 2010 to February 2012. The procedure was performed with the patient under local anesthesia and intravenous sedation. The face was marked preoperatively to determine the appropriate vector of the thread and its five end fixation points. The superior border of the incision was approximately at the level of the lateral brow, and the lower border was about 2 cm above the superior margin of the helical root. After the temporal incision was made, the dissection was carried all the way down to the deep temporal fascia to create a plane between the superficial and deep temporal fascias. Using blunt cannulas, the dissection was continued in an inferomedial direction from the temporal incision to the lower face through the sub-superficial musculoaponeurotic system (sub-SMAS) plane, which was marked preoperatively. This sub-SMAS dissection could easily proceed to the premasseteric space (PMS). The face-lift sutures (Gusan Inc., Seoul, Republic of Korea) were then inserted through the cannula from the lower face to the temporal incision line. The sutures were trimmed, and the proximal ends were secured on the deep temporal fascia, reinforced with interrupted Vicryl sutures. The results were assessed objectively using serial photography and subjectively according to patient assessment. Complications were also recorded. All but two patients (100/102, 98.1%) were satisfied with the outcomes after surgery. Consensus ratings by two independent plastic surgeons found that objective outcomes were divided among "excellent," "good," and "fair." The postoperative course was uneventful except for one patient (1/102, 1%) who presented with minor skin dimpling and another patient (1/102, 1%) who had temporary facial weakness. These two complicated cases resolved spontaneously without any surgical intervention. The reported technique has several advantages over current approaches. First, the use of nonabsorbable sutures with sufficient maintenance potential can produce long-lasting, satisfying results. Second, use of the authors' fine thread can avoid complications such as extruded or visible threads, which often have been complaints with thread lifting. Third, use of a loose areolar plane, including the sub-SMAS and PMS free of vital structures, which is deeper than in the traditional lift procedure, can avoid any traction line during rest or animation without any significant complications.
San Francisco Crime Classification | San Francisco Crime Classification is an online competition administered by Kaggle Inc. The competition aims at predicting future crimes based on a given set of geographical and time-based features. In this paper, I achieved an accuracy that ranks in the top 18%, as of May 19th, 2016. I explore the data and explain in detail the tools I used to achieve that result.
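A hedged baseline for this task is sketched below; the feature set and model are generic choices for the Kaggle data layout, not necessarily what the author used.

```python
# Baseline: multinomial logistic regression on coordinates and time-derived
# features, scored with the competition's multi-class log loss. Assumes the
# Kaggle train.csv layout (columns Dates, Category, X, Y).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

train = pd.read_csv("train.csv", parse_dates=["Dates"])
X = pd.DataFrame({
    "x": train["X"], "y": train["Y"],
    "hour": train["Dates"].dt.hour,
    "dow": train["Dates"].dt.dayofweek,
})
y = train["Category"]

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(log_loss(y_val, clf.predict_proba(X_val), labels=clf.classes_))
```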
Development of an FMCW LADAR Source Chip using MEMS-Electronic-Photonic Heterogeneous Integration | We present a modular integration platform, combining active and passive photonic devices with CMOS integrated electronics. Based on this integration platform, an integrated Frequency Modulated Continuous Wave (FMCW) Laser Detection and Ranging (LADAR) source chip is being developed. Such a LADAR source chip can be used in miniaturized 3D imaging systems for defense and consumer electronics applications. In this paper, we discuss the integration approach, the performance of select individual components, and experimental results on a bench top LADAR model system. |
27.6 A 4GS/s 13b pipelined ADC with capacitor and amplifier sharing in 16nm CMOS | In recent years, we have seen the emergence of multi-GS/s medium-to-high-resolution ADCs. Presently, SAR ADCs dominate low-speed applications and time-interleaved SARs are becoming increasingly popular for high-speed ADCs [1,2]. However the SAR architecture faces two key problems in simultaneously achieving multi-GS/s sample rates and high resolution: (1) the fundamental trade-off of comparator noise and speed is limiting the speed of single-channel SARs, and (2) highly time-interleaved ADCs introduce complex lane-to-lane mismatches that are difficult to calibrate with high accuracy. Therefore, pipelined [3] and pipelined-SAR [4] remain the most common architectural choices for high-speed high-resolution ADCs. In this work, a pipelined ADC achieves 4GS/s sample rate, using a 4-step capacitor and amplifier-sharing front-end MDAC architecture with 4-way sampling to reduce noise, distortion and power, while overcoming common issues for SHA-less ADCs. |
Standing in an unstable shoe increases postural sway and muscle activity of selected smaller extrinsic foot muscles. | Inactivity or the under-utilization of lower limb muscles can lead to strength and functional deficits and potential injury. Traditional shoes with stability and support features can overprotect the foot and potentially contribute to the deterioration of the smaller extrinsic foot muscles. Healthy subjects (n=28) stood in an unstable MBT (Masai Barefoot Technology) shoe during their work day for a 6-week accommodation period. A two-way repeated measures ANOVA was used to determine (i) if unstable shoe wear increased electromyographic (EMG) activity of selected extrinsic foot muscles and increased postural sway compared to standing barefoot and in a stable control shoe and (ii) if postural sway and muscle activity across footwear conditions differed between a pre- and post-accommodation testing visit. Using an EMG circumferential linear array, it was shown that standing in the unstable shoe increased activity of the flexor digitorum longus, peroneal (PR) and anterior compartment (AC) muscles of the lower leg. No activity differences for the larger soleus (SOL) were identified between the stable and unstable shoe conditions. Postural sway was greater while standing in the unstable shoe compared to barefoot and the stable control shoe. These findings suggest that standing in the unstable MBT shoe effectively activates selected extrinsic foot muscles and could have implications for strengthening and conditioning these muscles. Postural sway while standing in the unstable MBT shoe also decreased over the 6-week accommodation period. |
Development of an Automated Healthcare Kiosk for the Management of Chronic Disease Patients in the Primary Care Setting | An increase in the prevalence of chronic disease has led to a rise in the demand for primary healthcare services in many developed countries. Healthcare technology tools may provide the leverage to alleviate the shortage of primary care providers. Here we describe the development and usage of an automated healthcare kiosk for the management of patients with stable chronic disease in the primary care setting. One-hundred patients with stable chronic disease were recruited from a primary care clinic. They used a kiosk in place of doctors’ consultations for two subsequent follow-up visits. Patient and physician satisfaction with kiosk usage were measured on a Likert scale. Kiosk blood pressure measurements and triage decisions were validated and optimized. Patients were assessed if they could use the kiosk independently. Patients and physicians were satisfied with all areas of kiosk usage. Kiosk triage decisions were accurate by the 2nd month of the study. Blood pressure measurements by the kiosk were equivalent to that taken by a nurse (p = 0.30, 0.14). Independent kiosk usage depended on patients’ language skills and educational levels. Healthcare kiosks represent an alternative way to manage patients with stable chronic disease. They have the potential to replace physician visits and improve access to primary healthcare. Patients welcome the use of healthcare technology tools, including those with limited literacy and education. Optimization of environmental and patient factors may be required prior to the implementation of kiosk-based technology in the healthcare setting. |
Maxillary gap at 11-13 weeks' gestation: marker of cleft lip and palate. | OBJECTIVE
To describe a new sign of cleft lip and palate (CLP), the maxillary gap, which is visible in the mid-sagittal plane of the fetal face used routinely for measurement of nuchal translucency thickness.
METHODS
This was a retrospective study of stored images of the mid-sagittal view of the fetal face at 11-13 weeks' gestation in 86 cases of CLP and 86 normal controls. The images were examined to determine if a maxillary gap was present, in which case its size was measured.
RESULTS
In 37 (43.0%) cases of CLP the defect was isolated and in 49 (57.0%) there were additional fetal defects. In the isolated CLP group, the diagnosis of facial cleft was made in the first trimester in nine (24.3%) cases and in the second trimester in 28 (75.7%). In the group with additional defects, the diagnosis of facial cleft was made in the first trimester in 46 (93.9%) cases and in the second trimester in three (6.1%). A maxillary gap was observed in 96% of cases of CLP with additional defects, in 65% of those with isolated CLP and in 7% of normal fetuses. There was a large gap (>1.5 mm) or complete absence of signals from the maxilla in the midline in 69% of cases of CLP with additional defects, in 35% of those with isolated CLP and in none of the normal controls.
CONCLUSIONS
The maxillary gap is a new simple marker of possible CLP, which could increase the detection rate of CLP, especially in isolated cases. |
Clinical implications of energetic problems in cardiovascular disease | Cardiac energy metabolism can be altered in many forms of heart disease, including ischemic heart disease, cardiac hypertrophy, heart failure, and cardiac arrhythmias. Some of these energy metabolic changes are beneficial and help the heart adapt to the presence of the underlying cardiac pathology. However, some of the changes can also be maladaptive and actually contribute to the severity of cardiovascular disease. Because of the importance of energy metabolism in mediating cardiovascular disease, optimization of cardiac energetics has recently emerged as a novel approach to treat cardiovascular disease. This includes increasing the efficiency of oxygen utilization by the heart, which can be achieved by shifting cardiac metabolism to favor the use of carbohydrates rather than fatty acids as a metabolic fuel. This can be attained by reducing the circulating concentrations of fatty acids to which the heart is exposed, by inhibiting the uptake of fatty acids into the mitochondria, by directly inhibiting the enzymes of fatty acid oxidation, or by directly stimulating glucose metabolism. Clinical studies using these approaches have shown promise in treating various cardiovascular diseases, including ischemic heart disease, acute myocardial infarction, cardiac surgery, and heart failure. One agent that uses this approach is trimetazidine, which directly inhibits cardiac fatty acid oxidation and has been shown to be clinically effective in treating ischemic heart disease and heart failure. The paper will review those alterations in energy metabolism that occur in ischemic heart disease and heart failure, and the promising clinical approach of switching energy metabolism from fatty acid to glucose oxidation as a therapy in heart disease.
Polyantigenic Interferon-γ Responses Are Associated with Protection from TB among HIV-Infected Adults with Childhood BCG Immunization | BACKGROUND
Surrogate immunologic markers for natural and vaccine-mediated protection against tuberculosis (TB) have not been identified.
METHODS
HIV-infected adults with childhood BCG immunization entering the placebo arm of the DarDar TB vaccine trial in Dar es Salaam, Tanzania, were assessed for interferon gamma (IFN-γ) responses to three mycobacterial antigen preparations: secreted Mycobacterium tuberculosis antigens 85 (Ag85), early secretory antigenic target 6 (ESAT-6) and polyantigenic whole cell lysate (WCL). We investigated the association between the number of detectable IFN-γ responses at baseline and the subsequent risk of HIV-associated TB.
RESULTS
During a median follow-up of 3.3 years, 92 (9.4%) of 979 placebo recipients developed TB. The incidence of TB was 14% in subjects with no detectable baseline IFN-γ responses vs. 8% in subjects with response to polyantigenic WCL (P = 0.028). Concomitant responses to secreted antigens were associated with further reduction in the incidence of HIV-associated TB. Overall the percentage of subjects with 0, 1, 2 and 3 baseline IFN-γ responses to mycobacterial preparations who developed HIV-associated TB was 14%, 8%, 7% and 4%, respectively (P = 0.004). In a multivariate Cox regression model, the hazard of developing HIV-associated TB was 46% lower with each increment in the number of detectable baseline IFN-γ responses (P<0.001).
CONCLUSIONS
Among HIV-infected adults who received BCG in childhood and live in a TB-endemic country, polyantigenic IFN-γ responses are associated with decreased risk of subsequent HIV-associated TB.
TRIAL REGISTRATION
ClinicalTrials.gov NCT0052195. |
Multi-task Attention-based Neural Networks for Implicit Discourse Relationship Representation and Identification | We present a novel multi-task attention-based neural network model to address implicit discourse relationship representation and identification through two types of representation learning: an attention-based neural network for learning discourse relationship representation with two arguments, and a multi-task framework for learning knowledge from annotated and unannotated corpora. Extensive experiments have been performed on two benchmark corpora (i.e., the PDTB and CoNLL-2016 datasets). Experimental results show that our proposed model outperforms the state-of-the-art systems on the benchmark corpora.
Some connectivity based cluster validity indices | Identification of the correct number of clusters and the appropriate partitioning technique are some important considerations in clustering, where several cluster validity indices, primarily utilizing the Euclidean distance, have been used in the literature. In this paper a new measure of connectivity is incorporated in the definitions of seven cluster validity indices, namely the DB-index, Dunn-index, Generalized Dunn-index, PS-index, I-index, XB-index and SV-index, thereby yielding seven new cluster validity indices which are able to automatically detect clusters of any shape, size or convexity as long as they are well-separated. Here connectivity is measured using a novel approach following the concept of the relative neighborhood graph. It is empirically established that incorporation of the property of connectivity significantly improves the capabilities of these indices in identifying the appropriate number of clusters. The well-known clustering techniques, the single linkage clustering technique and the K-means clustering technique, are used as the underlying partitioning algorithms. Results on eight artificially generated and three real-life data sets show that the connectivity based Dunn-index performs the best as compared to all the other six indices. Comparisons are made with the original versions of these seven cluster validity indices.
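The relative neighborhood graph (RNG) test that underlies the connectivity measure is easy to state and verify directly; how the indices then aggregate distances along RNG paths is not reproduced in this sketch.

```python
# RNG edge test: points i and j are RNG neighbours iff no third point k is
# simultaneously closer to both i and j than they are to each other
# (the "lune" between i and j is empty). O(n^3) brute force for clarity.
import numpy as np

def rng_edges(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if not any(max(d[i, k], d[j, k]) < d[i, j]
                       for k in range(n) if k not in (i, j)):
                edges.append((i, j))
    return edges

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9], [5.0, 5.0]])
print(rng_edges(pts))
```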
Feature Extraction based Face Recognition, Gender and Age Classification | A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images such as eyes, nose and mouth are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.
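The edge-based localisation step mentioned above can be sketched as follows; the thresholds are illustrative, the file name is a placeholder, and the downstream ANN classifiers are not reproduced.

```python
# Small sketch of Canny-based feature localisation with OpenCV.
import cv2

face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
blurred = cv2.GaussianBlur(face, (5, 5), 1.4)         # suppress noise first
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
# `edges` is a binary map from which eye/nose/mouth regions can be located,
# e.g. by scanning row/column edge-density profiles within expected zones.
cv2.imwrite("face_edges.png", edges)
```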
Perspectives on Media Literacy, Digital Literacy and Information Literacy | The cultural landscape poses different challenges for teachers. Beyond developing reading and writing skills, it is necessary to immerse oneself in the digital culture and master the codes of different languages. In this context, media education studies discuss the educational possibilities of interpreting, problematizing, and producing different kinds of texts in critical and creative ways, through the use of all means, languages and technologies available. Considering that media cannot be excluded from literacy programs, it is essential to reflect on the definition of "literate" today. These reflections examine the resignification of concepts like literacy, media literacy, digital literacy and information literacy. The different literacies demanded by the different media involve specific abilities of analysis, evaluation and reflection, and imply an understanding of the social, economic and institutional context of the communication, to understand how it affects the experiences and uses of the media and their interpretations on micro and macro levels. Since different media have distinct narrative structures and elements, the production of meanings is based on the abilities to operate with the codes of different languages and their instruments, such as photographic cameras, video cameras, computers, cell phones, etc. From this perspective, media literacy involves the capacity to decipher, appreciate, criticize and compose, but also requires a broad understanding of the historic, economic and social context in which these texts are produced, distributed and used by audiences, as Silverstone (2005) emphasizes. To assure this form of appropriation, the learning of the media should be dynamic and involve reflexive approaches combined with critical analyses, creative productions and critical consumption. Working for quite some time with the concept of media literacy, Hobbs (1994) defines it as "the ability to access, analyze, evaluate and produce communication in a variety of forms" (Aufderheide apud Rivoltella, 2005, p. 69). This definition is also found in the works of Livingstone (2003) and Rivoltella (2005), who understand that it involves an ability to read, write, speak, listen, and see critically and create messages using the broadest range of technologies. Media literacy is "literacy for the information era," according to Hobbs (1994, p. 2), for whom it means essentially learning to formulate questions about what one sees, observes and reads. To do so, it is possible to use the most varied types of messages and products: television drama, newspapers, films, news programs, documentaries, mini-series, advertising, photography, video-clips, online services, etc. The essential focus of this media literacy approach is anchored in the presumptions of Masterman, which Hobbs appropriates by synthesizing his fundamental ideas: all messages are constructed; messages are representations of reality; messages have purposes related to social, political, economic, ethical, and aesthetic contexts; individuals construct meanings for the messages that they receive; each medium, form and genre of communication has specific characteristics.
The focuses of media literacy correspond to a demand for greater semantic amplitude of the concept of literacy, and for Hobbs (2003) concern the possibility of knowing how to: • access messages: read with a good level of understanding; recognize and understand different types of languages; develop strategies to look for information in different sources; select relevant information, process and analyze it; use various technological tools; • analyze messages: develop a reflexive and critical reception; analyze the form, structure and construction of meanings; know how to use categories, concepts and ideas; interpret messages based on basic concepts such as intentions, audiences, points of view, formats, genres, arguments, themes, languages, contexts; compare and contrast information; identify fact from opinion; differentiate cause and effect; • evaluate messages: relate to the experience itself by evaluating its quality, veracity and relevance; interpret it according to the origin of the sources; respond and debate the message according to content and complexity; analyze the message according to its production context; evaluate the form and content of the message; • create and/or communicate messages: learn to write, speak, and create texts and images, for a variety of purposes and audiences; use ideas and express them clearly; use different types of language; select codes and resources that allow the message to reach its objectives; understand the grammar and syntax of the various media and of communication technologies to know how to use them in the construction of messages and in post-production. |
Bring you to the past: Automatic Generation of Topically Relevant Event Chronicles | An event chronicle provides people with easy and fast access to past events. In this paper, we propose the first approach to automatically generate a topically relevant event chronicle for a given period, using a reference chronicle from another period. Our approach consists of two core components – a time-aware hierarchical Bayesian model for event detection, and a learning-to-rank model that selects the salient events to construct the final chronicle. Experimental results demonstrate that our approach is a promising way to tackle this new problem.
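The abstract does not specify the features or ranking objective of the learning-to-rank component. As an illustration only, the sketch below trains a pairwise linear ranker on hypothetical salience features and keeps the top-scoring events; all feature names and training pairs are invented for the example.

```python
# Minimal pairwise learning-to-rank sketch for event salience scoring.
# Hypothetical setup: the paper's features and model are not given in the
# abstract; features and supervision here are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy features per detected event: [topical_similarity, burstiness, coverage]
events = rng.random((100, 3))
# Toy supervision: pairs (i, j) where event i should outrank event j.
pairs = [(i, j) for i, j in rng.integers(0, 100, (200, 2)) if i != j]

w = np.zeros(3)                      # linear scoring weights
lr = 0.1
for _ in range(200):                 # pairwise logistic loss, gradient ascent
    for i, j in pairs:
        diff = events[i] - events[j]
        p = 1.0 / (1.0 + np.exp(-w @ diff))   # P(i ranked above j)
        w += lr * (1.0 - p) * diff            # ascend the log-likelihood

salience = events @ w
chronicle = np.argsort(-salience)[:10]        # top-10 salient events
```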
Proverb comprehension reconsidered—‘theory of mind’ and the pragmatic use of language in schizophrenia | BACKGROUND
For decades, impaired proverb comprehension has been regarded as typical of schizophrenic thought disorder. Testing patients' proverb understanding has widely been abandoned, however, due to poor reliability and validity of the assessment procedures. Since the underlying cognitive deficit of impaired proverb interpretation remained obscure, this study sought to determine the relation of proverb understanding with other cognitive domains, particularly 'theory of mind' or 'mindreading', in schizophrenia.
METHODS
31 patients diagnosed with schizophrenia were assessed using a novel German Proverb Test [Barth, A., Küfferle, B., 2001. Die Entwicklung eines Sprichworttests zur Erfassung konkretistischer Denkstörungen bei schizophrenen Patienten. Nervenarzt 72, 853-858.], a 'theory of mind' test battery, a variety of executive functioning tests, and a measure of verbal intelligence. Psychopathology was measured using the PANSS [Kay, S.R., Opler, L.A., Lindenmayer, J.P., 1989. The Positive and Negative Syndrome Scale (PANSS): rationale and standardisation. Br. J. Psychiatry 158 (suppl. 7), 59-67.]. Patients' task performance was compared with that of a group of healthy controls.
RESULTS
'Theory of mind', executive functioning and intelligence were strongly correlated with patients' ability to interpret proverbs correctly. In a regression analysis 'theory of mind' performance predicted, conservatively estimated, about 39% of the variance of proverb comprehension in the patient group.
CONCLUSIONS
The ability to interpret the kind of metaphorical speech typical of many proverbs crucially depends on schizophrenic patients' ability to infer mental states. Future studies may further address differences between schizophrenia subtypes or the relation to specific symptom clusters.
Processing capacity defined by relational complexity: implications for comparative, developmental, and cognitive psychology. | Working memory limits are best defined in terms of the complexity of the relations that can be processed in parallel. Complexity is defined as the number of related dimensions or sources of variation. A unary relation has one argument and one source of variation; its argument can be instantiated in only one way at a time. A binary relation has two arguments, two sources of variation, and two instantiations, and so on. Dimensionality is related to the number of chunks, because both attributes on dimensions and chunks are independent units of information of arbitrary size. Studies of working memory limits suggest that there is a soft limit corresponding to the parallel processing of one quaternary relation. More complex concepts are processed by "segmentation" or "conceptual chunking." In segmentation, tasks are broken into components that do not exceed processing capacity and can be processed serially. In conceptual chunking, representations are "collapsed" to reduce their dimensionality and hence their processing load, but at the cost of making some relational information inaccessible. Neural net models of relational representations show that relations with more arguments have a higher computational cost, which coincides with experimental findings of higher processing loads in humans. Relational complexity is related to processing load in reasoning and sentence comprehension and can distinguish between the capacities of higher species. The complexity of relations processed by children increases with age. Implications for neural net models and theories of cognition and cognitive development are discussed.
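One way to see why higher-arity relations carry a higher computational cost in neural net models is to assume a tensor-product style of binding (one family of models discussed in this literature): the size of the bound representation grows exponentially with the number of arguments.

```python
# Sketch of the arity/cost relationship under tensor-product binding.
# Binding a predicate vector with n argument vectors via outer products
# yields a representation of d**(n+1) units, so quaternary relations are
# far more expensive than binary ones. Dimensions are illustrative.
import numpy as np

d = 8                                              # units per component vector
rng = np.random.default_rng(0)
vecs = [rng.standard_normal(d) for _ in range(5)]  # predicate + 4 arguments

for arity in range(1, 5):                          # unary .. quaternary
    rep = vecs[0]
    for arg in vecs[1:arity + 1]:
        rep = np.tensordot(rep, arg, axes=0)       # outer-product binding
    print(f"arity {arity}: {rep.size} units")      # d**(arity+1)
```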
Effects of an angiotensin II receptor blocker on the impaired function of endothelial progenitor cells in patients with essential hypertension. | BACKGROUND
Endothelial progenitor cells (EPCs) induce neovascularization and repair vascular damage. We have demonstrated that EPC function is impaired in hypertensive rats with increases in oxidative stress and that angiotensin II receptor blockers improved the impaired function of EPCs. In this study, we investigated basal EPC functions in normotensive control subjects and patients with essential hypertension and the effect of losartan on EPC function in hypertensive patients.
METHODS
Eighteen normotensive control subjects and 36 patients with essential hypertension who were undergoing treatment participated in the study. Hypertensive patients were randomly selected to receive 50 mg of losartan or 4 mg of trichlormethiazide daily for 4 weeks. Peripheral blood mononuclear cells were isolated and cultured to assay EPC colony formation. Blood pressure, laboratory parameters, and oxidative stress were evaluated in all subjects.
RESULTS
The number of EPC colonies was significantly lower in patients with essential hypertension than in normotensive control subjects. EPC colony number was significantly and inversely correlated with systolic and diastolic blood pressure in all subjects. EPC colony number was significantly increased by treatment with losartan in patients with essential hypertension but not affected by treatment with trichlormethiazide.
CONCLUSIONS
EPC function was inversely correlated with blood pressure and was impaired in essential hypertension. Losartan significantly improved the impaired EPC function in hypertensive patients. Impaired EPC function may contribute to the cardiovascular complications of essential hypertension, and its improvement by angiotensin II receptor blockers is considered one of their cardiovascular protective effects.
Peptidoglycan Crosslinking Relaxation Promotes Helicobacter pylori's Helical Shape and Stomach Colonization | The mechanisms by which bacterial cells generate helical cell shape and its functional role are poorly understood. Helical shape of the human pathogen Helicobacter pylori may facilitate penetration of the thick gastric mucus where it replicates. We identified four genes required for helical shape: three LytM peptidoglycan endopeptidase homologs (csd1-3) and a ccmA homolog. Surrounding the cytoplasmic membrane of most bacteria, the peptidoglycan (murein) sacculus is a meshwork of glycan strands joined by peptide crosslinks. Intact cells and isolated sacculi from mutants lacking any single csd gene or ccmA formed curved rods and showed increased peptidoglycan crosslinking. Quantitative morphological analyses of multiple-gene deletion mutants revealed each protein uniquely contributes to a shape-generating pathway. This pathway is required for robust colonization of the stomach in spite of normal directional motility. Our findings suggest that the coordinated action of multiple proteins relaxes peptidoglycan crosslinking, enabling helical cell curvature and twist. |
The Anatomy of a Large-Scale Hypertextual Web Search Engine | In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine, the first such detailed public description we know of to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. We also look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
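The full paper attributes much of its result quality to link analysis via PageRank. As a minimal sketch, PageRank can be computed by power iteration over the link graph; the toy graph below is mine, while the damping factor 0.85 is the value the paper suggests.

```python
# Minimal PageRank power-iteration sketch over a toy link graph.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
n, d = 4, 0.85                                # damping factor from the paper

rank = np.full(n, 1.0 / n)
for _ in range(50):                           # iterate toward the fixpoint
    new = np.full(n, (1.0 - d) / n)
    for src, outs in links.items():
        for dst in outs:
            new[dst] += d * rank[src] / len(outs)
    rank = new

print(rank)  # pages with more (and better-ranked) inlinks score higher
```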
The evolution of jealousy | If a single deontic reasoning system handles deontic social contracts and deontic precautions, then performance on both should be impaired by neural damage to that system. But if they are handled by two distinct neurocognitive systems, as predicted by SCT and HMT, then neural trauma could cause a dissociation. Buller’s hypothesis fails again: focal brain damage can selectively impair social contract reasoning while leaving precautionary reasoning intact [5]. This dissociation within the domain of deontic rules has recently been replicated using neuroimaging [6]. Interpretations of social contract rules track SCT’s domain-specialized inference procedures: in [8], we refuted Buller-style logic explanations of social contract results for perspective change, switched rules, and ‘wants’ problems – facts he fails to mention, let alone discuss. Buller’s systematic inattention to large bodies of findings that conflict with his assertions is not due to lack of space in TICS – the pretence that these findings do not exist pervades his book, and its treatment of many areas of evolutionary psychology. (For further analysis, see www.psych.ucsb.edu/research/cep/buller.htm)
CAPITO - a web server-based analysis and plotting tool for circular dichroism data | MOTIVATION
Circular dichroism (CD) spectroscopy is one of the most versatile tools to study protein folding and to validate the proper fold of purified proteins. Here, we aim to provide a readily accessible, user-friendly and platform-independent tool capable of analysing multiple CD datasets of virtually any format and returning results as high-quality graphical output to the user.
RESULTS
CAPITO (CD Analysis and Plotting Tool) is a novel web server-based tool for analysing and plotting CD data. It allows reliable estimation of secondary structure content utilizing different approaches. CAPITO accepts multiple CD datasets and, hence, is well suited for a wide range of applications such as the analysis of temperature- or pH-dependent (un)folding and the comparison of mutants.
AVAILABILITY
http://capito.nmr.fli-leibniz.de.
CONTACT
[email protected] or [email protected]
SUPPLEMENTARY INFORMATION
Supplementary data are available at Bioinformatics online. |
Learning And-Or Model to Represent Context and Occlusion for Car Detection and Viewpoint Estimation | This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets: the KITTI dataset [1], the PASCAL VOC2007 car dataset [2], and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset; and on three datasets for car viewpoint estimation: the PASCAL VOC2006 car dataset [2], the 3D car dataset [3], and the PASCAL3D+ car dataset [4]. Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.
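The abstract notes that the And-Or model is a directed acyclic graph inferred by Dynamic Programming. The sketch below shows the generic max-sum recursion such inference uses (AND nodes sum their children's scores, OR nodes take the best alternative), with a toy graph and toy scores standing in for the paper's actual appearance and deformation terms.

```python
# Generic max-sum Dynamic Programming over an And-Or DAG. The graph
# structure and scores below are toy placeholders, not the paper's model.
def parse_score(node, graph, scores, memo=None):
    """graph[node] = ('AND', children) | ('OR', children) | ('LEAF', None)."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    kind, children = graph[node]
    if kind == 'LEAF':
        val = scores[node]                      # e.g. part appearance score
    elif kind == 'AND':                         # compose all children
        val = scores[node] + sum(parse_score(c, graph, scores, memo)
                                 for c in children)
    else:                                       # 'OR': pick best alternative
        val = scores[node] + max(parse_score(c, graph, scores, memo)
                                 for c in children)
    memo[node] = val
    return val

# Toy model: an OR over two occlusion configurations, each an AND of parts.
graph = {'car': ('OR', ['full', 'occluded']),
         'full': ('AND', ['p1', 'p2']), 'occluded': ('AND', ['p1']),
         'p1': ('LEAF', None), 'p2': ('LEAF', None)}
scores = {'car': 0.0, 'full': 0.2, 'occluded': 0.0, 'p1': 1.1, 'p2': -0.4}
print(parse_score('car', graph, scores))        # best parse score: 1.1
```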
Likelihood Ratio Gradient Estimation for Stochastic Systems | Consider a computer system having a CPU that feeds jobs to two input/output (I/O) devices having different speeds. Let θ be the fraction of jobs routed to the first I/O device, so that 1 - θ is the fraction routed to the second. Suppose that α = α(θ) is the steady-state amount of time that a job spends in the system. Given that θ is a decision variable, a designer might wish to minimize α(θ) over θ. Since α(·) is typically difficult to evaluate analytically, Monte Carlo optimization is an attractive methodology. By analogy with deterministic mathematical programming, efficient Monte Carlo gradient estimation is an important ingredient of simulation-based optimization algorithms. As a consequence, gradient estimation has recently attracted considerable attention in the simulation community. It is our goal, in this article, to describe one efficient method for estimating gradients in the Monte Carlo setting, namely the likelihood ratio method (also known as the efficient score method). This technique has been previously described (in less general settings than those developed in this article) in [6, 16, 18, 21]. An alternative gradient estimation procedure is infinitesimal perturbation analysis; see [11, 12] for an introduction. While it is typically more difficult to apply to a given application than the likelihood ratio technique of interest here, it often turns out to be statistically more accurate.
In this article, we first describe two important problems which motivate our study of efficient gradient estimation algorithms. Next, we present the likelihood ratio gradient estimator in a general setting in which the essential idea is most transparent. The section that follows then specializes the estimator to discrete-time stochastic processes. We derive likelihood ratio gradient estimators for both time-homogeneous and non-time-homogeneous discrete-time Markov chains. Later, we discuss likelihood ratio gradient estimation in continuous time. As examples of our analysis, we present the gradient estimators for time-homogeneous continuous-time Markov chains; non-time-homogeneous continuous-time Markov chains; semi-Markov processes; and generalized semi-Markov processes. (The analysis throughout these sections assumes that the performance measure defining α(θ) corresponds to a terminating simulation.) Finally, we conclude the article with a brief discussion of the basic issues that arise in extending the likelihood ratio gradient estimator to steady-state performance measures.
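The core identity behind the likelihood ratio (score function) method is that, under regularity conditions, d/dθ E[h(X)] = E[h(X) · d/dθ log f(X; θ)], so the gradient can be estimated from samples of X alone. Below is a minimal worked example using an exponential distribution of my own choosing, not one of the article's Markovian settings.

```python
# Likelihood ratio (score function) gradient estimation, minimal example.
# For X ~ Exponential(rate theta): log f = log(theta) - theta*x, so the
# score is 1/theta - x. With h(x) = x, E[X] = 1/theta and the true
# gradient is -1/theta**2.
import numpy as np

rng = np.random.default_rng(1)
theta, n = 2.0, 1_000_000

x = rng.exponential(scale=1.0 / theta, size=n)   # samples from f(.; theta)
score = 1.0 / theta - x                          # d/dtheta log f(x; theta)
grad_estimate = np.mean(x * score)               # LR gradient estimator

print(grad_estimate, -1.0 / theta**2)            # ~ -0.25 vs exact -0.25
```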
DNA double-strand break repair | The integrity of genomic DNA is crucial for its function. And yet, DNA in living cells is inherently unstable. It is subject to mechanical stress and to many types of chemical modification that may lead to breaks in one or both strands of the double helix. Within the cell, reactive oxygen species generated by normal respiratory metabolism can cause double-strand breaks, as can stalled DNA replication. External agents that cause double-strand breaks include ionizing radiation and certain chemotherapeutic drugs. DNA double-strand breaks are also made and repaired during meiosis when recombination takes place between paired homologous chromosomes, during the rearrangement of immunoglobulin gene segments in lymphocyte development and during integration of certain mobile genetic elements and viruses into the host cell DNA. It is difficult to know how often double-strand breaks occur in the genome of a cell not exposed to external DNA-damaging agents, but we know from work with yeast cells that one persistent DNA double-strand break can be sufficient to trigger the death of a cell. If double-strand breaks go unrepaired in mammalian cells, they can also cause gene deletion, chromosome loss and other chromosomal aberrations that might ultimately produce cancers. DNA double-strand breaks are repaired by means of two main mechanisms: nonhomologous end joining and homologous recombination (see Figure 1). Both mechanisms operate in all eukaryotic cells that have been examined but the relative contribution of each mechanism varies. For example, most mammalian cells seem to favour nonhomologous end joining (also called ‘illegitimate recombination’), whereas homologous recombination is more common in the budding yeast Saccharomyces cerevisiae. One possible reason for this difference might be the prevalence in mammalian cells of repetitive sequences, which could lead to gene amplification or deletion if homologous recombination were common. In addition to these main mechanisms, DNA double-strand breaks can be repaired by means of single-strand annealing between adjacent repeated DNA sequences, which involves deletion of the intervening DNA (see Figure 1). This article focuses mainly on nonhomologous end joining, the best-characterized mammalian DNA double-strand break repair mechanism. |
A novel stealth Vivaldi antenna | This paper proposes a novel stealth Vivaldi antenna that covers the entire X-band from 8 to 12 GHz. Based on the difference in current distribution on the metal patch when the antenna radiates versus when it scatters, the shape of the patch is modified so that electromagnetic scattering is significantly reduced over a wide angular range within the entire operating band. A maximum radar cross section (RCS) reduction of 14 dBsm is achieved while satisfactory radiation performance is maintained.
Having a BLAST with bioinformatics (and avoiding BLASTphemy) | Searching for similarities between biological sequences is the principal means by which bioinformatics contributes to our understanding of biology. Of the various informatics tools developed to accomplish this task, the most widely used is BLAST, the basic local alignment search tool. This article discusses the principles, workings, applications and potential pitfalls of BLAST, focusing on the implementation developed at the National Center for Biotechnology Information. |
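To make BLAST's heuristic concrete, here is a toy seed-and-extend sketch: index exact k-mer seeds from the query, locate them in the subject, and extend each hit until the running score drops too far below its best value. The scoring values and k are illustrative; real BLAST adds substitution matrices, two-hit triggering, gapped extension, and E-value statistics.

```python
# Toy seed-and-extend sketch of BLAST's core heuristic.
def blast_like(query, subject, k=4, match=1, mismatch=-2, drop=3):
    seeds = {}
    for i in range(len(query) - k + 1):          # index query k-mers
        seeds.setdefault(query[i:i + k], []).append(i)
    hits = []
    for j in range(len(subject) - k + 1):        # scan subject for seeds
        for i in seeds.get(subject[j:j + k], []):
            score = best = k * match
            qi, sj = i + k, j + k
            while qi < len(query) and sj < len(subject):   # extend right
                score += match if query[qi] == subject[sj] else mismatch
                if best - score > drop:          # X-drop style cutoff
                    break
                best = max(best, score)
                qi, sj = qi + 1, sj + 1
            hits.append((best, i, j))
    return sorted(hits, reverse=True)

print(blast_like("ACGTACGTGG", "TTACGTACGAGG")[:3])  # best local hits
```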
Description, Prescription and the Choice of Discount Rates | The choice of discount rates is a key issue in the analysis of long-term societal issues, in particular environmental issues such as climate change. Approaches to choosing discount rates are generally placed into two categories: the descriptive approach and the prescriptive approach. The descriptive approach is often justified on grounds that it uses a description of how society discounts instead of having analysts impose their own discounting views on society. This paper analyzes the common forms of the descriptive and prescriptive approaches and finds that, in contrast with customary thinking, both forms are equally descriptive and prescriptive. The prescriptions concern who has standing (i.e. who is included) in society, how the views of these individuals are measured, and how the measurements are aggregated. Such prescriptions are necessary to choose from among the many possible descriptions of how society discounts. The descriptions are the measurements made given a choice of measurement technique. Thus, the labels “descriptive approach” and “prescriptive approach” are deeply misleading, as analysts cannot avoid imposing their own views on society. |
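The stakes of the choice are easy to quantify: over a century-long horizon, plausible discount rates spread present values across a roughly 300-fold range. The rates and cash flow below are illustrative, not taken from the paper.

```python
# Present-value sensitivity to the discount rate over a long horizon.
def present_value(future_value, rate, years):
    return future_value / (1.0 + rate) ** years

damage = 1_000_000_000          # $1B of damage, 100 years out (illustrative)
for rate in (0.01, 0.03, 0.07):
    pv = present_value(damage, rate, 100)
    print(f"rate {rate:.0%}: present value ${pv:,.0f}")
# rate 1%: ~$370M; rate 3%: ~$52M; rate 7%: ~$1.2M.
# A ~300-fold spread in present value from the discount-rate choice alone.
```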
Long-Acting Insulin Analogs and the Risk of Diabetic Ketoacidosis in Children and Adolescents With Type 1 Diabetes | OBJECTIVE To investigate whether long-acting insulin analogs decrease the risk of diabetic ketoacidosis (DKA) in young individuals with type 1 diabetes. RESEARCH DESIGN AND METHODS Of 48,110 type 1 diabetic patients prospectively studied between 2001 and 2008, the incidence of DKA requiring hospitalization was analyzed in 10,682 individuals aged ≤20 years with a diabetes duration of ≥2 years. RESULTS The overall rate of DKA was 5.1 (SE ± 0.2)/100 patient-years. Patients using insulin glargine or detemir (n = 5,317) had a higher DKA incidence than individuals using NPH insulin (n = 5,365; 6.6 ± 0.4 vs. 3.6 ± 0.3, P < 0.001). The risk for DKA remained significantly different after adjustment for age at diabetes onset, diabetes duration, A1C, insulin dose, sex, and migration background (P = 0.015, odds ratio 1.357 [1.062-1.734]). CONCLUSIONS Despite their long-acting pharmacokinetics, the use of insulin glargine or detemir is not associated with a lower incidence of DKA compared with NPH insulin.
CONSTRAINTS ON THE MAGNETIC CONFIGURATION OF AP STARS FROM SIMPLE FEATURES OF OBSERVED QUANTITIES | According to the oblique rotator model, the time variations of the quantities usually employed to investigate the magnetic configuration of Ap stars (mean longitudinal field, mean surface field, broad band linear polarization) are described by simple laws. For each quantity, certain typical features can easily be identified. We show that these features set definite constraints on the magnetic configuration.
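For the simplest case of a centred dipole, the oblique rotator model predicts a mean longitudinal field that varies sinusoidally with rotational phase. The sketch below evaluates the standard Stibbs-type form (up to an omitted constant involving the polar field strength and limb darkening); the ratio of its extrema is one of the simple observable features that constrains the geometry (i, β). The angle values are illustrative.

```python
# Oblique rotator sketch: for a centred dipole, the mean longitudinal field
# varies with rotational phase phi roughly as
#     B_l(phi) ∝ cos(beta)cos(i) + sin(beta)sin(i)cos(2*pi*phi)
# (proportionality constant omitted; angles below are illustrative values,
# not fits to any star).
import numpy as np

i = np.radians(60.0)       # inclination of rotation axis to line of sight
beta = np.radians(80.0)    # obliquity of magnetic to rotation axis

phase = np.linspace(0.0, 1.0, 9)
b_l = (np.cos(beta) * np.cos(i)
       + np.sin(beta) * np.sin(i) * np.cos(2 * np.pi * phase))

# The extrema ratio r = B_l(min)/B_l(max) jointly constrains i and beta.
print(b_l.min() / b_l.max())
```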
Sensors and systems for fruit detection and localization: A review | This paper reviews the research and development of machine vision systems for fruit detection and localization for robotic harvesting and/or crop-load estimation of specialty tree crops including apples, pears, and citrus. Variable lighting conditions, occlusion, and clustering are among the important issues that need to be addressed for accurate detection and localization of fruit in the orchard environment. To address these issues, various techniques have been investigated using different types of sensors and their combinations, as well as different image processing techniques. This paper summarizes these techniques and their advantages and disadvantages in detecting fruit in plant or tree canopies. The paper also summarizes the sensors and systems developed and used by researchers to localize fruit, as well as the potential and limitations of those systems. Finally, major challenges for the successful application of machine vision systems to robotic fruit harvesting and crop-load estimation, and potential future directions for research and development, are discussed.
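As one concrete instance of the image-processing techniques such reviews cover, a colour-threshold baseline takes only a few lines, assuming OpenCV, a hypothetical orchard image, and an HSV range for reddish fruit that is a rough guess of mine rather than a value from the paper.

```python
# Minimal colour-threshold baseline for fruit detection.
import cv2
import numpy as np

img = cv2.imread("orchard.jpg")                      # hypothetical image path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold reddish hues (red wraps around hue 0 on OpenCV's 0-179 scale).
mask = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Each sufficiently large connected blob is a candidate fruit.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours
              if cv2.contourArea(c) > 200]
print(len(candidates), "fruit candidates")
```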
gSLICr: SLIC superpixels at over 250Hz | We introduce a parallel GPU implementation of the Simple Linear Iterative Clustering (SLIC) superpixel segmentation. Using a single graphics card, our implementation achieves speedups of up to 83× over the standard sequential implementation. Our implementation is fully compatible with the standard sequential implementation, and the software is now available online as open source.
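For reference, SLIC's core assignment step (the part the GPU version parallelizes per pixel) is k-means in a five-dimensional [l, a, b, x, y] space in which each pixel is compared only to cluster centres within a 2S × 2S window. Below is a CPU/NumPy sketch of a single assignment pass with illustrative parameters.

```python
# One SLIC assignment pass on the CPU (not the paper's GPU code).
import numpy as np

def slic_assign(lab, centers, S, m=10.0):
    """lab: HxWx3 CIELAB image; centers: list of (l, a, b, x, y) tuples."""
    h, w, _ = lab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.full((h, w), np.inf)
    label = np.full((h, w), -1, dtype=int)
    for k, (cl, ca, cb, cx, cy) in enumerate(centers):
        # Only compare pixels within a 2S x 2S window around the centre.
        y0, y1 = max(0, int(cy - S)), min(h, int(cy + S))
        x0, x1 = max(0, int(cx - S)), min(w, int(cx + S))
        patch = lab[y0:y1, x0:x1]
        d_lab = np.sum((patch - [cl, ca, cb]) ** 2, axis=-1)
        d_xy = (xs[y0:y1, x0:x1] - cx) ** 2 + (ys[y0:y1, x0:x1] - cy) ** 2
        d = d_lab + (m / S) ** 2 * d_xy           # squared SLIC distance
        closer = d < dist[y0:y1, x0:x1]
        dist[y0:y1, x0:x1][closer] = d[closer]    # slices are views, so
        label[y0:y1, x0:x1][closer] = k           # this writes through
    return label
```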