title | abstract
---|---|
Clinical research of Olanzapine for prevention of chemotherapy-induced nausea and vomiting | BACKGROUND
This study was designed primarily to evaluate the efficacy and safety of olanzapine compared with 5-hydroxytryptamine 3 (5-HT3) receptor antagonists for the prevention of chemotherapy-induced nausea and vomiting (CINV) in patients receiving highly or moderately emetogenic chemotherapy (HEC or MEC). The secondary goal was to evaluate the impact of olanzapine on the quality of life (QoL) of cancer patients during chemotherapy.
METHODS
229 patients receiving highly or moderately emetogenic chemotherapy were randomly assigned to the test group [olanzapine (O) 10 mg p.o. plus azasetron (A) 10 mg i.v. and dexamethasone (D) 10 mg i.v. on day 1; O 10 mg once a day on days 2-5] or the control group (A 10 mg i.v. and D 10 mg i.v. on day 1; D 10 mg i.v. once a day on days 2-5). All patients completed a CINV observation form once a day on days 1-5 and were instructed to complete the EORTC QLQ-C30 QoL questionnaire on day 0 and day 6. The primary endpoint was the complete response (CR) rate (no nausea or vomiting and no rescue therapy) for the acute period (24 h post-chemotherapy), the delayed period (days 2-5 post-chemotherapy), and the whole period (days 1-5 post-chemotherapy). The secondary endpoints were QoL during chemotherapy administration and drug safety and toxicity.
RESULTS
229 patients were evaluable for efficacy. Compared with the control group, the complete response rate for acute nausea and vomiting in the test group did not differ significantly (p > 0.05). For delayed nausea and vomiting, the complete response rates in the test group improved by 39.21% (69.64% versus 30.43%, p < 0.05) and 22.05% (78.57% versus 56.52%, p < 0.05), respectively, in patients receiving highly emetogenic chemotherapy, and by 25.01% (83.07% versus 58.06%, p < 0.05) and 13.43% (89.23% versus 75.80%, p < 0.05), respectively, in patients receiving moderately emetogenic chemotherapy. For the whole period, the complete response rates improved by 41.38% (69.64% versus 28.26%, p < 0.05) and 22.05% (78.57% versus 56.52%, p < 0.05), respectively, in patients receiving highly emetogenic chemotherapy, and by 26.62% (83.07% versus 56.45%, p < 0.05) and 13.43% (89.23% versus 75.80%, p < 0.05), respectively, in patients receiving moderately emetogenic chemotherapy. 214 of 229 patients were evaluable for QoL. Comparing the test group with the control group, significant differences in QoL evolution favouring the test group were seen in global health status, emotional functioning, social functioning, fatigue, nausea and vomiting, insomnia, and appetite loss (p < 0.01). Both treatments were well tolerated.
CONCLUSION
Olanzapine can improve the complete response rate for delayed nausea and vomiting in patients receiving highly or moderately emetogenic chemotherapy compared with standard antiemetic therapy, and it improves the QoL of cancer patients during chemotherapy administration. Olanzapine is a safe and effective drug for the prevention of CINV. |
moocRP: An Open-source Analytics Platform | In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflows as well as a simple analytics module upload format to enable reuse and replication of analytics results among instructors and researchers. We survey the evolving landscape of competing data models, all of which can be accommodated in the platform. Data model descriptions are provided to analytics authors who choose, much like with smartphone app stores, to write for any number of data models depending on their needs and the proliferation of the particular data model. Two case study examples of analytics and interactive visualizations are described in the paper. The result is a simple but effective approach to learning analytics immediately applicable to X consortium institutions and beyond. |
Learning Control in Robotics | Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather low-dimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction. |
Triple-Wideband Open-Slot Antenna for the LTE Metal-Framed Tablet device | An open-slot antenna with a low profile of 7 mm to provide a triple-wideband LTE operation in the metal-framed tablet device is presented. With the low profile, the antenna can fit in the narrow region between the metal frame and the display panel of the metal-framed tablet device. The triple-wideband operation covers 698-960 MHz (low band), 1710-2690 MHz (middle band), and 3400-3800 MHz (high band). The antenna mainly has a rectangular open slot and an L-shaped metal strip embedded therein to provide two open slots of different resonant paths. Using a microstrip feedline having a step-shaped section across the two open slots, good excitation of multiple slot resonant modes is obtained. Details of the proposed antenna are presented. |
Surgical Gesture Segmentation and Recognition | Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data. |
Towards proactive event-driven computing | Event-driven architecture is a paradigm shift from traditional computing architectures, which employ synchronous, request-response interactions. In this paper we introduce a conceptual architecture for what can be considered the next phase of that evolution: proactive event-driven computing. Proactivity refers to the ability to mitigate or eliminate undesired future events, or to identify and take advantage of future opportunities, by applying prediction and automated decision making technologies. We investigate an extension of the event processing conceptual model and architecture to support proactive event-driven applications, and propose the main building blocks of a novel architecture. We first describe several extensions to the existing event processing functionality that are required to support proactivity; next, we extend the event processing agent model to include two more types of agents: predictive agents, which may derive future uncertain events based on prediction models, and proactive agents, which compute the best proactive action to be taken. These building blocks are demonstrated through a comprehensive scenario that deals with proactive decision making, ensuring timely delivery of critical material for a production plant. |
Package-Aware Scheduling of FaaS Functions | We consider the problem of scheduling small cloud functions on serverless computing platforms. Fast deployment and execution of these functions is critical, for example, for microservices architectures. However, functions that require large packages or libraries are bloated and start slowly. A solution is to cache packages at the worker nodes instead of bundling them with the functions. However, existing FaaS schedulers are vanilla load balancers, agnostic of any packages that may have been cached in response to prior function executions, and cannot reap the benefits of package caching (other than by chance). To address this problem, we propose a package-aware scheduling algorithm that tries to assign functions that require the same package to the same worker node. Our algorithm increases the hit rate of the package cache and, as a result, reduces the latency of the cloud functions. At the same time, we consider the load sustained by the workers and actively seek to avoid imbalance beyond a configurable threshold. Our preliminary evaluation shows that, even with our limited exploration of the configuration space so far, we can achieve 66% performance improvement at the cost of a (manageable) higher node imbalance. |
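The greedy idea behind such a scheduler can be made concrete with a small sketch. The worker/function data model, the package-affinity ranking, and the load-imbalance threshold below are illustrative assumptions, not the authors' implementation:

```python
def schedule(function, workers, max_imbalance=1.2):
    """Pick a worker for `function`, preferring nodes that already cache its packages.

    `workers` is a list of dicts: {"name": str, "cached": set[str], "load": int}.
    `function` is a dict: {"name": str, "packages": set[str]}.
    Illustrative greedy heuristic only, not the paper's exact algorithm.
    """
    avg_load = sum(w["load"] for w in workers) / len(workers)

    # Rank workers by how many of the function's packages they already cache.
    ranked = sorted(
        workers,
        key=lambda w: len(function["packages"] & w["cached"]),
        reverse=True,
    )

    for worker in ranked:
        # Skip workers whose load already exceeds the allowed imbalance.
        if avg_load > 0 and worker["load"] > max_imbalance * avg_load:
            continue
        worker["load"] += 1
        worker["cached"] |= function["packages"]  # packages get cached on first use
        return worker["name"]

    # Fall back to the least-loaded worker if every affine node is above threshold.
    worker = min(workers, key=lambda w: w["load"])
    worker["load"] += 1
    worker["cached"] |= function["packages"]
    return worker["name"]
```

The ordering-then-threshold structure captures the trade-off described in the abstract: package affinity first, bounded load imbalance second.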
Impact of IDH1 R132 mutations and an IDH1 single nucleotide polymorphism in cytogenetically normal acute myeloid leukemia: SNP rs11554137 is an adverse prognostic factor. | PURPOSE
We assessed the prognostic impact of IDH1 R132 mutations and a known single nucleotide polymorphism (SNP) located in the same exon of the IDH1 gene in patients with cytogenetically normal acute myeloid leukemia (CN-AML) in the context of other prognostic markers.
PATIENTS AND METHODS
IDH1 exon four was directly sequenced in 275 CN-AML patients from two subsequent AML multicenter treatment trials and 120 healthy volunteers. Moreover, mutations in NPM1, FLT3, CEBPA, and WT1 were analyzed, and mRNA expression of IDH1 was quantified.
RESULTS
IDH1 R132 mutations were found in 10.9% of CN-AML patients. IDH1 SNP rs11554137 was found in 12% of CN-AML patients and 11.7% of healthy volunteers. IDH1 R132 mutations had no impact on prognosis. In contrast, IDH1 SNP rs11554137 was an adverse prognostic factor for overall survival in univariate and multivariate analysis. Other significant factors were age, NPM1/FLT3 mutational status, WT1 SNP rs16754, and platelet count. The impact of IDH1 SNP rs11554137 was most pronounced in the NPM1/FLT3 high-risk patients (either NPM1 wild-type or FLT3-internal tandem duplication positive). Patients with IDH1 SNP rs11554137 had a higher expression of IDH1 mRNA than patients with two wild-type alleles.
CONCLUSION
IDH1 SNP rs11554137 but not IDH1 R132 mutations are associated with an inferior outcome in CN-AML. |
Laser Scanner Based Slam in Real Road and Traffic Environment | In this paper we present a SLAM algorithm we have recently developed for our needs in autonomous automotive applications. Our approach has the particularity of relying exclusively on laser scanners, without using any other type of sensor or source of information. We concentrated on developing a self-contained system that could be placed on any kind of mobile platform and work in any kind of dynamic environment; for this reason, at this point our approach does not make use of any model of the vehicle. Our SLAM system has been tested successfully both on a car at full speed on a road and on a person moving around indoors. We present the challenges that pushed us to develop the algorithm, the solutions we are exploring, discuss experimental results, and suggest areas of future work. |
Electronic business adoption by European firms: a cross-country assessment of the facilitators and inhibitors | In this study, we developed a conceptual model for studying the adoption of electronic business (e-business or EB) at the firm level, incorporating six adoption facilitators and inhibitors, based on the technology–organization–environment theoretical framework. Survey data from 3100 businesses and 7500 consumers in eight European countries were used to test the proposed adoption model. We conducted confirmatory factor analysis to assess the reliability and validity of constructs. To examine whether adoption patterns differ across different e-business environments, we divided the full sample into high EB-intensity and low EB-intensity countries. After controlling for variations of industry and country effects, the fitted logit models demonstrated four findings: (1) Technology competence, firm scope and size, consumer readiness, and competitive pressure are significant adoption drivers, while lack of trading partner readiness is a significant adoption inhibitor. (2) As EB-intensity increases, two environmental factors – consumer readiness and lack of trading partner readiness – become less important, while competitive pressure remains significant. (3) In high EB-intensity countries, e-business is no longer a phenomenon dominated by large firms; as more and more firms engage in e-business, the network effect works to the advantage of small firms. (4) Firms are more cautious in adopting e-business in high EB-intensity countries, suggesting that more informed firms are less aggressive in adopting e-business, a somewhat surprising result. Explanations and implications are offered. |
Modified image segmentation method based on region growing and region merging | Image segmentation is one of the basic concepts widely used in every field of image processing. The proposed work for image segmentation comprises three phases: a threshold generation with dynamic modified region growing (DMRG) phase, a texture feature generation phase, and a region merging phase. In the DMRG phase, two thresholds are changed dynamically, and the cuckoo search optimization algorithm is used to optimize these two thresholds in the modified region growing. After obtaining the region-grown segmented image, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with the results of the DMRG phase, and similar regions are merged using a distance comparison between regions. The proposed work is implemented on the MATLAB platform with several medical images. The performance of the proposed work is evaluated using the metrics sensitivity, specificity, and accuracy. The results show that the proposed work provides very good accuracy for the segmentation of images. |
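For illustration of the region-growing step only, a minimal sketch follows. The seed choice, 4-connectivity, and fixed intensity threshold are simplifying assumptions; the paper's DMRG instead varies two thresholds dynamically and tunes them with cuckoo search.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, intensity_threshold=10.0):
    """Grow a region from `seed` by adding 4-connected neighbours whose intensity
    stays within `intensity_threshold` of the current region mean.
    Simplified illustration; not the paper's DMRG algorithm.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_count = float(image[seed]), 1
    queue = deque([seed])

    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                mean = region_sum / region_count
                if abs(float(image[ny, nx]) - mean) <= intensity_threshold:
                    mask[ny, nx] = True
                    region_sum += float(image[ny, nx])
                    region_count += 1
                    queue.append((ny, nx))
    return mask

# Example: grow a region in a synthetic image from its centre pixel.
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 100.0
segment = region_grow(img, seed=(32, 32), intensity_threshold=5.0)
```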
Mr. Doc: A Doctor Appointment Application System | Life is becoming too busy to get medical appointments in person and to maintain proper health care. The main idea of this work is to provide ease and comfort to patients when taking appointments from doctors, and it also resolves the problems that patients have to face while making an appointment. The Android application Mr. Doc acts as a client, whereas the database containing the doctor's details, patient's details, and appointment details is maintained by a website that acts as a server. |
Communication network architecture and design principles for smart grids | An integrated high performance, highly reliable, scalable, and secure communications network is critical for the successful deployment and operation of next-generation electricity generation, transmission, and distribution systems — known as “smart grids.” Much of the work done to date to define a smart grid communications architecture has focused on high-level service requirements with little attention to implementation challenges. This paper investigates in detail a smart grid communication network architecture that supports today's grid applications (such as supervisory control and data acquisition [SCADA], mobile workforce communication, and other voice and data communication) and new applications necessitated by the introduction of smart metering and home area networking, support of demand response applications, and incorporation of renewable energy sources in the grid. We present design principles for satisfying the diverse quality of service (QoS) and reliability requirements of smart grids. |
A secure environment for untrusted helper applications confining the Wily Hacker | Many popular programs, such as Netscape, use untrusted helper applications to process data from the network. Unfortunately, the unauthenticated network data they interpret could well have been created by an adversary, and the helper applications are usually too complex to be bug-free. This raises significant security concerns. Therefore, it is desirable to create a secure environment to contain untrusted helper applications. We propose to reduce the risk of a security breach by restricting the program's access to the operating system. In particular, we intercept and filter dangerous system calls via the Solaris process tracing facility. This enabled us to build a simple, clean, user-mode implementation of a secure environment for untrusted helper applications. Our implementation has negligible performance impact, and can protect pre-existing applications. |
Bronchodilator therapy with metered-dose inhaler and spacer versus nebulizer in mechanically ventilated patients: comparison of magnitude and duration of response. | OBJECTIVE
Four-hour comparison of the bronchodilator response of albuterol administered via metered-dose inhaler (MDI) with spacer versus small-volume nebulizer (SVN) to mechanically ventilated patients with chronic obstructive pulmonary disease (COPD).
DESIGN
Prospective randomized clinical trial.
SETTING
Medical intensive care unit in a university hospital.
PATIENTS
Thirteen mechanically ventilated COPD patients.
INTERVENTION
Albuterol administration of 4 puffs (0.4 mg) or 10 puffs (1.0 mg) via MDI with spacer or 2.5 mg via SVN to mechanically ventilated patients in order to assess the bronchodilator response over 4 hours.
MEASUREMENTS AND RESULTS
Mechanically ventilated patients were enrolled in a randomized crossover study wherein one group received 4 puffs (0.4 mg) or 2.5 mg of albuterol and another group received 10 puffs (1.0 mg) or 2.5 mg of albuterol on separate days. Respiratory mechanics measurements were obtained over 4 hours. Total airway resistance declined by 14.4 +/- 3.8% after 4 MDI puffs, 18.3 +/- 1.8% after 10 MDI puffs, or 13.7 +/- 2.6% after 2.5 mg via SVN, compared to baseline (p < 0.01). After albuterol delivery, airway resistance remained improved for 90-120 minutes (p < 0.05) and returned to baseline by 4 hours with all treatments.
CONCLUSION
The airway response to albuterol administration via MDI and SVN to mechanically ventilated patients was similar in magnitude and duration, returning to baseline by 240 minutes. In stable, mechanically ventilated COPD patients, albuterol may be administered via MDI with spacer or via SVN every 4 hours. |
Relations among EEG-alpha asymmetry, BIS/BAS, and dispositional optimism | Past research has been unable to address whether the activity in the frontal hemispheres is related to the direction of motivation (approach versus withdrawal) or the valence of emotion (positive versus negative). The present study was an attempt to address this question by using standardized low-resolution brain electromagnetic tomography (sLORETA), which provides EEG localization measures that are independent of the recording reference. Resting EEG, self-report measures of Behavioral Activation and Inhibition System (BAS and BIS) strength, dispositional optimism, and a measure of hedonic tone were collected from 51 unselected undergraduates. Three measures of cortical activation were obtained: (a) alpha asymmetry at conventional scalp sites, (b) anterior and posterior source alpha asymmetries (sLORETA method), (c) posterior versus frontal delta and theta activity. Both alpha asymmetry measures (conventional EEG and sLORETA) yielded significant frontal and parietal asymmetry correlation patterns. Neither measure identified significant associations between resting posterior versus frontal delta and theta activity and personality traits. Higher BAS was uniquely related to greater left-sided activation in the middle frontal gyrus (BA11). Optimism was associated with higher activations in the left superior frontal gyrus (BA10) and in the right posterior cingulate cortex (BA31). |
Plastic Analysis and Design of Steel Plate Shear Walls | A revised procedure for the design of steel plate shear walls is proposed. In this procedure the thickness of the infill plate is found using equations that are derived from the plastic analysis of the strip model, which is an accepted model for the representation of steel plate shear walls. Comparisons of experimentally obtained ultimate strengths of steel plate shear walls and those predicted by plastic analysis are given and reasonable agreement is observed. Fundamental plastic collapse mechanisms for several, more complex, wall configurations are also given. Additionally, an existing codified procedure for the design of steel plate walls is reviewed and a section of this procedure which could lead to designs with less-than-expected ultimate strength is identified. It is shown that the proposed procedure eliminates this possibility without changing the other valid sections of the current procedure. |
A new miniaturized antenna for ISM 433 MHz frequency band | A new miniaturized antenna for the 433 MHz frequency band is introduced for sensor networks. The antenna is a magnetic monopole loaded with a capacitor, which can significantly decrease the resonant frequency of the structure. It is designed to operate in the ISM 433 MHz band on a small PCB (103 mm × 55 mm) and has at least 3 MHz of bandwidth. Moreover, its radiation efficiency is suitable for the possible applications and its pattern is omnidirectional. |
Gremlin: Systematic Resilience Testing of Microservices | Modern Internet applications are being disaggregated into a microservice-based architecture, with services being updated and deployed hundreds of times a day. The accelerated software life cycle and heterogeneity of language runtimes in a single application necessitates a new approach for testing the resiliency of these applications in production infrastructures. We present Gremlin, a framework for systematically testing the failure-handling capabilities of microservices. Gremlin is based on the observation that microservices are loosely coupled and thus rely on standard message-exchange patterns over the network. Gremlin allows the operator to easily design tests and executes them by manipulating inter-service messages at the network layer. We show how to use Gremlin to express common failure scenarios and how developers of an enterprise application were able to discover previously unknown bugs in their failure-handling code without modifying the application. |
Learning to detect chest radiographs containing lung nodules using visual attention networks | Machine learning approaches hold great potential for the automated detection of lung nodules on chest radiographs, but training algorithms requires very large amounts of manually annotated radiographs, which are difficult to obtain. The increasing availability of PACS (Picture Archiving and Communication Systems) is laying the technological foundations needed to make large volumes of clinical data and images from hospital archives available. Binary labels indicating whether a radiograph contains a pulmonary lesion can be extracted at scale using natural language processing algorithms. In this study, we propose two novel neural networks for the detection of chest radiographs containing pulmonary lesions. Both architectures make use of a large number of weakly-labelled images combined with a smaller number of manually annotated X-rays. The annotated lesions are used during training to deliver a type of visual attention feedback informing the networks about their lesion localisation performance. The first architecture extracts saliency maps from high-level convolutional layers and compares the inferred position of a lesion against the true position when this information is available; a localisation error is then back-propagated along with the softmax classification error. The second approach consists of a recurrent attention model that learns to observe a short sequence of smaller image portions through reinforcement learning; the reward function penalises the exploration of areas, within an image, that are unlikely to contain nodules. Using a repository of over 430,000 historical chest radiographs, we present and discuss the proposed methods, comparing them with related architectures that use either weakly-labelled or annotated images only. |
An 82.4% efficiency package-bondwire-based four-phase fully integrated buck converter with flying capacitor for area reduction | Multi-phase converters have become a topic of great interest due to their high output power capacity and output ripple cancellation effect. They are even more beneficial to today's high-frequency fully integrated converters with the output capacitor integrated on-chip. As one of the dominant consumers of chip area, the on-chip decoupling capacitors are a natural target: reducing their size directly lowers cost. It is reported that a 5× capacitor area reduction can be achieved with a four-phase converter compared to a single-phase one [1]. However, the penalty is obvious: every extra phase comes with an inductor, which also adds cost and becomes more dominant as the number of phases increases. |
Utility Maximization in LTE-Advanced Systems with Carrier Aggregation | Long Term Evolution (LTE)-Advanced is expected to aggregate multiple Component Carriers (CCs) to fulfil the high data rate requirement. It may serve users with different capabilities in accessing these CCs, e.g., some can access all CCs, whereas some may operate on only one CC. This poses challenges for the packet scheduler in maximizing the system performance over all CCs. In this paper we provide a mathematical model of the log-measure utility in an LTE-Advanced system, and prove that our previously developed cross-CC Proportional Fair (PF) packet scheduler maximizes this utility. System level simulations are performed, which confirm that cross-CC PF scheduling offers much higher utility than independent PF and channel blind schedulers. This scheduler is then generalized to adjust the resource sharing among users. It can trade off between average cell throughput and cell edge user throughput. However, any adjustment in the resource sharing leads to a loss in utility. |
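For concreteness, a commonly used formulation of the log utility and the cross-CC proportional fair rule it motivates is sketched below; the notation is ours and only illustrates the standard formulation, not necessarily the exact model used in the paper.

```latex
% Log-measure utility over users k with long-term average throughput R_k,
% and the cross-CC PF rule applied on each component carrier c at time t:
\[
  U = \sum_{k} \log R_k ,
  \qquad
  \hat{k}_c(t) = \arg\max_{k} \frac{r_{k,c}(t)}{R_k(t)} ,
\]
% where r_{k,c}(t) is user k's instantaneous achievable rate on carrier c and
% R_k(t) is its past average throughput aggregated over all CCs it can access.
```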
Degree of nonlinearity (DoN) measure for target tracking in videos | Performance evaluation of tracking methods includes both relative and absolute performance. Absolute tracking performance is the robust end result presented to a user, which determines the product solution for real-world analysis. To achieve robust performance, however, the tracking method is subject to the sensor data, the filtering performance, and the associated models, which require relative performance evaluation. In this paper, we highlight the efficacy of using the DoN measure in evaluating video tracking capabilities. Three developments are presented concerning the real-world issues associated with nonlinear video-based tracking: (1) challenges of performance evaluation with real data, (2) approaches that use the DoN to improve relative track evaluation, and (3) operational implementation lessons from user-defined operating picture (UDOP) plugins. Results are presented using relevant data with a highlighted tracker. |
Microcontrollers as material: crafting circuits with paper, conductive ink, electronic components, and an "untoolkit" | Embedded programming is typically made accessible through modular electronics toolkits. In this paper, we explore an alternative approach, combining microcontrollers with craft materials and processes as a means of bringing new groups of people and skills to technology production. We have developed simple and robust techniques for drawing circuits with conductive ink on paper, enabling off-the-shelf electronic components to be embedded directly into interactive artifacts. We have also developed a set of hardware and software tools -- an instance of what we call an "untoolkit" -- to provide an accessible toolchain for the programming of microcontrollers. We evaluated our techniques in a number of workshops, one of which is detailed in the paper. Four broader themes emerge: accessibility and appeal, the integration of craft and technology, microcontrollers vs. electronic toolkits, and the relationship between programming and physical artifacts. We also expand more generally on the idea of an untoolkit, offering a definition and some design principles, as well as suggesting potential areas of future research. |
Influence of mechanical activation on microstructure and crystal structure of sintered MgO-TiO2 system | Mixtures of MgO-TiO2 were mechanically activated in a high-energy planetary ball mill for 5, 10, 20, 40, 80, and 120 minutes. The sintering process was performed in air at 1100-1400 °C for 2 h. The decrease in the powder's particle size with increasing mechanical activation time was confirmed by a particle size analyzer. XRD analyses were performed in order to acquire information about the phase composition. Mixtures of MgTiO3 and Mg2TiO4 in different ratios are present within all sintered samples. The effect of tribophysical activation on the microstructure was investigated by scanning electron microscopy. Differential thermal gravimetric analysis was performed in order to investigate the thermal behaviour of the mixtures. |
Pay for a Sliding Bloom Filter and Get Counting, Distinct Elements, and Entropy for Free | For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics including Bloom filters, per-flow counting, count distinct and entropy estimation. In this work, we present a unified construction that solves all the above problems in the sliding window model. Our single solution offers a better space to accuracy tradeoff than the state-of-the-art for each of these individual problems! We show this both analytically and by running multiple real Internet backbone and datacenter packet traces. |
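To make the sliding-window setting concrete, here is a deliberately naive baseline that keeps two plain Bloom filters over consecutive half-windows. It is not the paper's unified construction (which also answers counting, distinct-element, and entropy queries with a better space/accuracy trade-off); the hash choice and sizes are arbitrary assumptions.

```python
import hashlib

class NaiveSlidingBloom:
    """Approximate membership over roughly the last `window` insertions by keeping
    two plain Bloom filters for consecutive half-windows and dropping the older one.
    Illustrative baseline only; not the unified construction from the paper.
    """
    def __init__(self, bits=1 << 16, hashes=4, window=10_000):
        self.bits, self.hashes, self.half = bits, hashes, window // 2
        self.current, self.previous = 0, 0      # bit arrays packed into ints
        self.count = 0

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.bits

    def add(self, item):
        if self.count == self.half:             # retire the older half-window
            self.previous, self.current, self.count = self.current, 0, 0
        for pos in self._positions(item):
            self.current |= 1 << pos
        self.count += 1

    def __contains__(self, item):
        for filt in (self.current, self.previous):
            if all(filt >> pos & 1 for pos in self._positions(item)):
                return True
        return False

sbf = NaiveSlidingBloom(window=4)
for packet in ["a", "b", "c", "d", "e", "f"]:
    sbf.add(packet)
print("f" in sbf, "a" in sbf)   # recent item likely True; old item likely evicted
```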
Testosterone deficiency. | Testosterone deficiency (TD) afflicts approximately 30% of men aged 40-79 years, with an increase in prevalence strongly associated with aging and common medical conditions including obesity, diabetes, and hypertension. A strong relationship is noted between TD and metabolic syndrome, although the relationship is not certain to be causal. Repletion of testosterone (T) in T-deficient men with these comorbidities may indeed reverse or delay their progression. While T repletion has been largely thought of in a sexual realm, we discuss its potential role in general men's health concerns: metabolic, body composition, and all-cause mortality through the use of a single clinical vignette. This review examines a host of studies, with practical recommendations for diagnosis of TD and T repletion in middle-aged and older men, including an analysis of treatment modalities and areas of concerns and uncertainty. |
Implementation of network intrusion detection system using variant of decision tree algorithm | As the need for the internet increases day by day, the significance of security also increases. The enormous usage of the internet has greatly affected the security of systems. Hackers monitor systems closely, so the security of the network must be kept under observation. Conventional intrusion detection technology shows limitations such as a low detection rate, a high false alarm rate, and so on. The performance of the classifier is an essential concern in terms of its effectiveness; the number of features to be examined by the IDS should also be optimized. In our work, we have proposed two techniques using feature selection: the C4.5 decision tree algorithm and the C4.5 decision tree with pruning. In the C4.5 decision tree with pruning, we have considered only discrete-valued attributes for classification. We have used the KDDCup'99 and NSL-KDD datasets to train and test the classifier. The experimental results show that the C4.5 decision tree with pruning approach gives better results, with almost 98% accuracy. |
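A rough sketch of such a classification pipeline is shown below. Note that scikit-learn's DecisionTreeClassifier implements an optimized CART rather than C4.5, so entropy splits plus cost-complexity pruning only approximate the paper's method; the CSV path and column names are hypothetical placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical CSV export of NSL-KDD style data; path and column names are placeholders.
data = pd.read_csv("nsl_kdd.csv")
X = data.drop(columns=["label"])
y = data["label"]

# Encode categorical columns (e.g. protocol, service, flag) as integers.
for col in X.select_dtypes(include="object").columns:
    X[col] = LabelEncoder().fit_transform(X[col])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Entropy-based splits approximate C4.5's information-gain criterion;
# cost-complexity pruning (ccp_alpha) stands in for C4.5's error-based pruning.
clf = DecisionTreeClassifier(criterion="entropy", ccp_alpha=1e-4, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```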
A new approach for supervised power disaggregation by using a deep recurrent LSTM network | This paper presents a new approach for supervised power disaggregation using a deep recurrent long short-term memory (LSTM) network. It is useful to extract the power signal of one dominant appliance or any subcircuit from the aggregate power signal. To train the network, a measurement of the power signal of the target appliance, in addition to the total power signal during the same time period, is required. The method is supervised, but less restrictive in practice since submetering of an important appliance or a subcircuit for a short time is feasible. The main advantages of this approach are: a) It is also applicable to variable loads and not restricted to on-off and multi-state appliances. b) It does not require hand-engineered event detection and feature extraction. c) By using multiple networks, it is possible to disaggregate multiple appliances or subcircuits at the same time. d) It also works with a low-cost power meter, as shown in the experiments with the Reference Energy Disaggregation Dataset (REDD) (1/3 Hz sampling frequency, only real power). |
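A minimal PyTorch sketch of the sequence-to-sequence idea (aggregate power in, per-time-step appliance power out); the layer sizes, window length, and optimizer settings are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DisaggregationLSTM(nn.Module):
    """Maps a window of aggregate power readings to the estimated power of one
    target appliance at each time step. Layer sizes and window length are
    illustrative assumptions, not the paper's exact architecture.
    """
    def __init__(self, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, aggregate):          # aggregate: (batch, time, 1)
        features, _ = self.lstm(aggregate)
        return self.head(features)          # (batch, time, 1) appliance estimate

# One training step on synthetic data standing in for REDD-style 1/3 Hz real power.
model = DisaggregationLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
aggregate = torch.rand(8, 600, 1)           # 8 windows of 600 samples
appliance = torch.rand(8, 600, 1)           # submetered target signal
loss = nn.functional.mse_loss(model(aggregate), appliance)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```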
Does a Simple Intervention Enhance Memory and Adherence for Neuropsychological Recommendations? | The variables that influence the extent to which patients and their families remember and follow neuropsychological recommendations after their appointments are unclear. There has been limited research on this topic. The current study was designed to address this knowledge gap. Patients (n = 79) and caregivers (n = 36) were randomized into 1 of 2 groups, letter or no-letter, to investigate whether providing a supplemental written reminder of the recommendations given (in addition to routine feedback procedures in our clinic) would improve memory for and adherence to recommendations. We found that recall of recommendations was better in the letter condition, although this effect was observed in the caregivers and not in the patients. Adherence to recommendations did not differ significantly between the letter and no-letter conditions. These findings show that a simple intervention can improve caregiver memory for recommendations. Future research could help determine how to translate improvements in memory into greater adherence. |
A comparison between extended kalman filtering and sequential monte carlo technique for simultaneous localisation and map-building. | Monte Carlo Localisation has been applied to solve many different classes of localisation problems. In this paper, we present a possible Simultaneous Localisation and Map-building implementation using the Sequential Monte Carlo technique. Multiple particle filters are created to estimate both the robot and landmark positions simultaneously. The proposed technique shows promising results when compared with those obtained with the Extended Kalman filter. |
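As a reminder of the underlying mechanics, a toy sequential Monte Carlo update for a planar robot pose is sketched below; the motion and measurement models are invented for illustration, and the paper's approach additionally runs particle filters over landmark positions.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.5, landmark=(5.0, 5.0)):
    """One predict/update/resample cycle of a sequential Monte Carlo filter for a
    planar robot position (x, y). Toy models only; not the paper's SLAM formulation.
    """
    n = len(particles)
    # Predict: propagate each particle through the motion model plus noise.
    particles = particles + control + np.random.normal(0.0, motion_noise, particles.shape)

    # Update: weight particles by the likelihood of a range measurement to a landmark.
    expected_range = np.linalg.norm(particles - np.asarray(landmark), axis=1)
    likelihood = np.exp(-0.5 * ((measurement - expected_range) / meas_noise) ** 2)
    weights = weights * likelihood
    weights = weights / np.sum(weights)

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

particles = np.random.uniform(0, 10, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_step(particles, weights,
                                          control=np.array([0.5, 0.0]),
                                          measurement=4.2)
```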
Genetic algorithm-based learning of fuzzy neural networks. Part 1: feed-forward fuzzy neural networks | In spite of the great importance of fuzzy feed-forward and recurrent neural networks (FNN) for solving a wide range of real-world problems, today there is no effective learning algorithm for FNN. In this paper we propose an effective genetic-based learning mechanism for FNN with fuzzy inputs, fuzzy weights expressed as LR-fuzzy numbers, and fuzzy outputs. The effectiveness of the proposed method is illustrated through the simulation of fuzzy regression for quality evaluation and comparison with the widely used learning method based on α-cuts and fuzzy arithmetic. Finally, we demonstrate the use of the proposed learning procedure for calculating fuzzy-valued profit in an oligopolistic environment. |
Fully Convolutional Adaptation Networks for Semantic Segmentation | The recent advances in deep neural networks have convincingly demonstrated high capability in learning vision models on large datasets. Nevertheless, collecting expert labeled datasets especially with pixel-level annotations is an extremely expensive process. An appealing alternative is to render synthetic data (e.g., computer games) and generate ground truth automatically. However, simply applying the models learnt on synthetic images may lead to high generalization error on real images due to domain shift. In this paper, we address this issue from the perspectives of both visual appearance-level and representation-level domain adaptation. The former adapts source-domain images to appear as if drawn from the "style" in the target domain and the latter attempts to learn domain-invariant representations. Specifically, we present Fully Convolutional Adaptation Networks (FCAN), a novel deep architecture for semantic segmentation which combines Appearance Adaptation Networks (AAN) and Representation Adaptation Networks (RAN). AAN learns a transformation from one domain to the other in the pixel space and RAN is optimized in an adversarial learning manner to maximally fool the domain discriminator with the learnt source and target representations. Extensive experiments are conducted on the transfer from GTA5 (game videos) to Cityscapes (urban street scenes) on semantic segmentation and our proposal achieves superior results when comparing to state-of-the-art unsupervised adaptation techniques. More remarkably, we obtain a new record: mIoU of 47.5% on BDDS (drive-cam videos) in an unsupervised setting. |
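A minimal sketch of the representation-level adversarial idea, assuming a gradient-reversal formulation: a domain discriminator is trained to separate source and target features while the reversed gradient pushes the backbone toward domain-invariant representations. The tiny backbone and discriminator here are placeholders, not FCAN's actual AAN/RAN design.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass,
    so minimizing the discriminator loss maximally confuses it w.r.t. the features."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

feature_extractor = nn.Sequential(          # stand-in for a segmentation backbone
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
domain_discriminator = nn.Sequential(        # predicts: synthetic (0) vs real (1)
    nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

source = torch.rand(4, 3, 128, 128)          # e.g. rendered, GTA5-style images
target = torch.rand(4, 3, 128, 128)          # e.g. real, Cityscapes-style images
feats = feature_extractor(torch.cat([source, target]))
logits = domain_discriminator(GradReverse.apply(feats))
domain = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])

# One adversarial step: the discriminator learns to tell domains apart while the
# reversed gradient pushes the backbone toward domain-invariant features.
loss = nn.functional.binary_cross_entropy_with_logits(logits, domain)
loss.backward()
```

In the full system this adversarial loss would be combined with the segmentation loss on labeled source images.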
Peek-a-Boo: I see your smart home activities, even encrypted! | A myriad of IoT devices such as bulbs, switches, and speakers in a smart home environment allow users to easily control the physical world around them and facilitate their living styles. However, an attacker inside or near a smart home environment can potentially exploit the innate wireless medium used by these devices to exfiltrate sensitive information about the users and their activities, invading user privacy. With this in mind, in this work, we introduce a novel multi-stage privacy attack against user privacy in a smart environment. It is realized utilizing state-of-the-art machine-learning approaches for detecting and identifying particular types of IoT devices, their actions, states, and ongoing user activities in a cascading style, by passively observing only the wireless traffic from smart home devices. The attack works effectively on both encrypted and unencrypted communications. We evaluate the efficiency of the attack with real measurements from an extensive set of popular off-the-shelf smart home IoT devices utilizing a set of diverse network protocols such as WiFi, ZigBee, and BLE. Our results show that an adversary passively sniffing the network traffic can achieve very high accuracy (above 90%) in identifying the state and actions of targeted smart home devices and their users. In contrast to earlier straightforward approaches, our multi-stage privacy attack can perform activity detection and identification automatically, without extensive background knowledge or specifications of the analyzed protocols. This allows an adversary to efficiently aggregate extensive behavior profiles of targeted users. To protect against this privacy leakage, we also propose a countermeasure based on generating spoofed network traffic to hide the real activities of the devices. We demonstrate that the proposed countermeasure provides better protection than existing solutions. |
Sub-Nyquist Radar via Doppler Focusing | We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar's unambiguous time-frequency region. Several past works employ compressed sensing (CS) algorithms for this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses P. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement which scales linearly with P, obtaining good detection performance even at SNR as low as -25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype. |
Overreactive brain responses to sensory stimuli in youth with autism spectrum disorders. | OBJECTIVES
Sensory over-responsivity (SOR), defined as a negative response to or avoidance of sensory stimuli, is both highly prevalent and extremely impairing in youth with autism spectrum disorders (ASD), yet little is known about the neurological bases of SOR. This study aimed to examine the functional neural correlates of SOR by comparing brain responses to sensory stimuli in youth with and without ASD.
METHOD
A total of 25 high-functioning youth with ASD and 25 age- and IQ-equivalent typically developing (TD) youth were presented with mildly aversive auditory and visual stimuli during a functional magnetic resonance imaging (fMRI) scan. Parents provided ratings of children's SOR and anxiety symptom severity.
RESULTS
Compared to TD participants, ASD participants displayed greater activation in primary sensory cortical areas as well as amygdala, hippocampus, and orbital-frontal cortex. In both groups, the level of activity in these areas was positively correlated with level of SOR severity as rated by parents, over and above behavioral ratings of anxiety.
CONCLUSIONS
This study demonstrates that youth with ASD show neural hyper-responsivity to sensory stimuli, and that behavioral symptoms of SOR may be related to both heightened responsivity in primary sensory regions as well as areas related to emotion processing and regulation. |
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks | Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right—similar to why we study the human brain—and will enable researchers to further improve DNNs. One path to understanding how a neural network functions internally is to study what each of its neurons has learned to detect. One such method is called activation maximization (AM), which synthesizes an input (e.g. an image) that highly activates a neuron. Here we dramatically improve the qualitative state of the art of activation maximization by harnessing a powerful, learned prior: a deep generator network (DGN). The algorithm (1) generates qualitatively state-of-the-art synthetic images that look almost real, (2) reveals the features learned by each neuron in an interpretable way, (3) generalizes well to new datasets and somewhat well to different network architectures without requiring the prior to be relearned, and (4) can be considered as a high-quality generative method (in this case, by generating novel, creative, interesting, recognizable images). |
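The core loop can be sketched in a few lines, assuming a generator prior and a classifier to probe. Here the "generator" is an untrained placeholder and the target unit index is arbitrary, whereas the paper uses a learned deep generator network and a pretrained CNN.

```python
import torch
import torchvision.models as models

# Placeholder generator standing in for the learned deep generator network (DGN);
# in the paper this is an up-convolutional network trained to invert CNN features.
generator = torch.nn.Sequential(
    torch.nn.Linear(128, 3 * 224 * 224), torch.nn.Tanh(),
    torch.nn.Unflatten(1, (3, 224, 224)))

classifier = models.alexnet(weights=None).eval()   # network whose neuron we probe
target_class = 207                                  # arbitrary unit to maximize

z = torch.zeros(1, 128, requires_grad=True)         # latent code being optimized
optimizer = torch.optim.SGD([z], lr=1.0)

for step in range(200):
    image = generator(z)
    activation = classifier(image)[0, target_class]
    # Ascend the neuron's activation with a small weight-decay-style prior on z.
    loss = -activation + 1e-3 * z.norm() ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```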
Pen-and-ink textures for real-time rendering | Simulation of a pen-and-ink illustration style in a realtime rendering system is a challenging computer graphics problem. Tonal art maps (TAMs) were recently suggested as a solution to this problem. Unfortunately, only the hatching aspect of pen-and-ink media was addressed thus far. We extend the TAM approach and enable representation of arbitrary textures. We generate TAM images by distributing stroke primitives according to a probability density function. This function is derived from the input image and varies depending on the TAM’s scale and tone levels. The resulting depiction of textures approximates various styles of pen-and-ink illustrations such as outlining, stippling, and hatching. |
The effectiveness of a convergence dialogue meeting with the employer in promoting return to work as part of the cognitive-behavioural treatment of common mental disorders: A randomized controlled trial. | BACKGROUND
Dialogue between supervisor and employee is of great importance for occupational rehabilitation.
OBJECTIVE
To evaluate the effectiveness of a convergence dialogue meeting (CDM) of employee, therapist and supervisor aimed at facilitating return to work (RTW) as part of cognitive-behavioural treatment.
METHODS
Randomized controlled trial including 60 employees sick-listed with common mental disorders and referred for specialized mental healthcare. Employees were randomly allocated either to an intervention group (n = 31) receiving work-focused cognitive-behavioural therapy plus CDM or a control group (n = 29) receiving work-focused cognitive-behavioural therapy without CDM.
RESULTS
The time to first RTW was 12 days shorter (p = 0.334) in the intervention group, although full RTW (i.e., at equal earnings as before reporting sick) took 41 days longer (p = 0.122) than in the control group. The odds of full RTW at the end of treatment were only 7% higher (p = 0.910) in the intervention group as compared to the control group.
CONCLUSIONS
CDM did not significantly reduce the time to RTW. We recommend that therapists who are trained on CDM focus on barriers and solutions for RTW. |
SnapShot: Visualization to Propel Ice Hockey Analytics | Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders. Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, yet given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel. |
Unsupervised Learning for Physical Interaction through Video Prediction | A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 50,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot’s future actions amounts to learning a “visual imagination” of different futures based on different courses of action. Our experiments show that our proposed method not only produces more accurate video predictions, but also more accurately predicts object motion, when compared to prior methods. |
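As a toy illustration of action-conditioned, motion-based prediction, the sketch below predicts a dense displacement field from the current frame and the action and warps the frame with it. The paper's models instead predict distributions over pixel motion with compositing masks, so treat this only as a simplified stand-in.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowPredictor(nn.Module):
    """Toy action-conditioned next-frame predictor: a small CNN maps the current
    frame and the action to a dense 2-D displacement field, and the next frame is
    produced by warping the current frame with that field. Illustrative only.
    """
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3 + action_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # per-pixel (dx, dy) displacement
        )

    def forward(self, frame, action):
        n, _, h, w = frame.shape
        # Tile the action vector over the spatial dimensions and concatenate.
        action_map = action.view(n, -1, 1, 1).expand(n, action.shape[1], h, w)
        flow = self.encoder(torch.cat([frame, action_map], dim=1))  # (N, 2, H, W)

        # Build a normalized sampling grid shifted by the predicted flow.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).to(frame)               # (H, W, 2)
        shift = flow.permute(0, 2, 3, 1) * 2.0 / torch.tensor([w, h]).to(frame)
        grid = base.unsqueeze(0) + shift                             # (N, H, W, 2)
        return F.grid_sample(frame, grid, align_corners=True)

# One training step: minimize reconstruction error against the true next frame.
model = FlowPredictor()
frame, action, next_frame = torch.rand(2, 3, 64, 64), torch.rand(2, 4), torch.rand(2, 3, 64, 64)
loss = F.mse_loss(model(frame, action), next_frame)
loss.backward()
```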
Review of the pi-calculus: a theory of mobile processes | With the rise of computer networks in the past decades, the spread of distributed applications with components across multiple machines, and with new notions such as mobile code, there has been a need for formal methods to model and reason about concurrency and mobility. The study of sequential computations has been based on notions such as Turing machines, recursive functions, and the λ-calculus, all equivalent formalisms capturing the essence of sequential computations. Unfortunately, for concurrent programs, theories of sequential computation are not enough. Many programs are not simply programs that compute a result and return it to the user, but rather interact with other programs, and even move from machine to machine. Process calculi are an attempt at getting a formal foundation based on such ideas. They emerged from the work of Hoare [4] and Milner [6] on models of concurrency. These calculi are meant to model systems made up of processes communicating by exchanging values across channels. They allow for the dynamic creation and removal of processes, allowing the modelling of dynamic systems. A typical process calculus in that vein is CCS [6, 7]. The π-calculus extends CCS with the ability to create and remove communication links between processes, a new form of dynamic behaviour. By allowing links to be created and deleted, it is possible to model a form of mobility, by identifying the position of a process by its communication links. This book, "The π-calculus: A Theory of Mobile Processes", by Davide Sangiorgi and David Walker, is an in-depth study of the properties of the π-calculus and its variants. In a sense, it is the logical follow-up to the recent introduction to concurrency and the π-calculus by Milner [8], reviewed in SIGACT News, 31(4), December 2000. What follows is a whirlwind introduction to CCS and the π-calculus. It is meant as a way to introduce the notions discussed in much more depth by the book under review. Let us start with the basics. CCS provides a syntax for writing processes. The syntax is minimalist, in the grand tradition of foundational calculi such as the λ-calculus. Processes perform actions, which can be of three forms: the sending of a message over channel x (written x̄), the receiving of a message over channel x (written x), and internal actions (written τ), the details of which are unobservable. Send and receive actions are called synchronization actions, since communication occurs when the corresponding processes synchronize. Let α stand for actions, including the internal action τ, while we reserve λ, λ′, … for synchronization actions.¹ Processes are written using the following syntax:

P ::= A⟨x1, …, xk⟩ | Σ_{i∈I} αi.Pi | P1 | P2 | νx.P

We write 0 for the empty summation (when I = ∅). The idea behind process expressions is simple. The process 0 represents the process that does nothing and simply terminates. A process of the form λ.P awaits synchronization with a process of the form λ̄.Q, after which the processes continue as processes P and Q respectively. A generalization …

¹ In the literature, the actions of CCS are often given a much more abstract interpretation, as simply names and co-names. The send/receive interpretation is useful when one moves to the π-calculus. |
Effects of intracoronary injection of mononuclear bone marrow cells on left ventricular function, arrhythmia risk profile, and restenosis after thrombolytic therapy of acute myocardial infarction. | AIMS
To assess the efficacy and safety of bone marrow cell (BMC) therapy after thrombolytic therapy of an acute ST-elevation myocardial infarction (STEMI).
METHODS AND RESULTS
Patients with STEMI treated with thrombolysis followed by percutaneous coronary intervention (PCI) 2-6 days after STEMI were randomly assigned to receive intracoronary BMCs (n = 40) or placebo medium (n = 40), collected and prepared 3-6 h prior to PCI and injected into the infarct artery immediately after stenting. Efficacy was assessed by the measurement of global left ventricular ejection fraction (LVEF) by left ventricular angiography and 2-D echocardiography, and safety by measuring arrhythmia risk variables and restenosis of the stented vessel by intravascular ultrasound. At 6 months, the BMC group had a greater absolute increase of global LVEF than the placebo group, measured either by angiography (mean +/- SD increase 7.1 +/- 12.3 vs. 1.2 +/- 11.5%, P = 0.05) or by 2-D echocardiography (mean +/- SD increase 4.0 +/- 11.2 vs. -1.4 +/- 10.2%, P = 0.03). No differences were observed between the groups in adverse clinical events, arrhythmia risk variables, or the minimal lumen diameter of the stented coronary lesion.
CONCLUSION
Intracoronary BMC therapy is associated with an improvement of global LVEF and neutral effects on arrhythmia risk profile and restenosis of the stented coronary lesions in patients after thrombolytic therapy of STEMI. |
Sudoku Solutions Using Logic Equations | In several previous papers, and particularly in [3], we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves the problem without any search, always finding all solutions (including none or several). The model can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens' problems on the chessboard, graph-coloring problems, etc. Disadvantages of known SAT approaches for such problems were overcome by our new method. |
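To make the bit-parallel flavour concrete, here is a small candidate-elimination sketch in which each cell's possible digits are a 9-bit mask. This is only an illustration of bit-parallel set reasoning, not the authors' ternary-vector logic-equation method, which finds all solutions without search.

```python
def propagate(grid):
    """Bit-parallel candidate elimination for 9x9 Sudoku. Each cell holds a 9-bit
    mask of still-possible digits; solved cells repeatedly prune their peers.
    Illustrates bit-parallel set reasoning only, not the paper's equation solver.
    """
    cells = [0b111111111 if d == 0 else 1 << (d - 1) for d in grid]
    changed = True
    while changed:
        changed = False
        for i, mask in enumerate(cells):
            if bin(mask).count("1") != 1:        # only solved cells prune peers
                continue
            r, c = divmod(i, 9)
            for j in range(81):
                rj, cj = divmod(j, 9)
                same_unit = (rj == r or cj == c or
                             (rj // 3 == r // 3 and cj // 3 == c // 3))
                if same_unit and j != i and cells[j] & mask:
                    cells[j] &= ~mask
                    changed = True
    return cells

# 0 denotes an empty cell; a near-complete first row as a tiny demonstration.
puzzle = [5, 3, 4, 6, 7, 8, 9, 1, 0] + [0] * 72
masks = propagate(puzzle)
print(bin(masks[8]))   # the last cell of row 1 can only be digit 2 -> 0b10
```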
Computational thinking for youth in practice | Computational thinking (CT) has been described as the use of abstraction, automation, and analysis in problem-solving [3]. We examine how these ways of thinking take shape for middle and high school youth in a set of NSF-supported programs. We discuss opportunities and challenges in both in-school and after-school contexts. Based on these observations, we present a "use-modify-create" framework, representing three phases of students' cognitive and practical activity in computational thinking. We recommend continued investment in the development of CT-rich learning environments, in educators who can facilitate their use, and in research on the broader value of computational thinking. |
A New Metal Control Gate Last process (MCGL process) for high performance DC-SF (Dual Control gate with Surrounding Floating gate) 3D NAND flash memory | A new Metal Control Gate Last process (MCGL process) has been successfully developed for the DC-SF (Dual Control gate with Surrounding Floating gate) cell [1] three-dimensional (3D) NAND flash memory. The MCGL process can realize a low-resistance tungsten (W) metal word-line with high-k IPD, low damage to the tunnel oxide/IPD, and a preferable FG shape. In addition, a conventional bulk erase can be used, replacing the GIDL erase of BiCS [3][4], owing to the direct connection between the channel poly and the p-well through the channel contact holes. Therefore, by using the MCGL process, high performance and high reliability of the DC-SF cell can be achieved for MLC/TLC 256Gb/512Gb 3D NAND flash memories. |
Comparison of toxicokinetic and tissue distribution of triptolide-loaded solid lipid nanoparticles vs free triptolide in rats. | The traditional Chinese medicine Tripterygium wilfordii Hook F (TWHF) is used clinically to treat some autoimmune and inflammatory disorders including rheumatoid arthritis, systemic lupus erythematosus, and skin diseases. However, TWHF has a high potential for toxicity, so its clinical use is limited. Solid lipid nanoparticle (SLN) delivery systems are reported to have remarkable advantages over conventional formulations of bioactive plant extracts, such as enhancing solubility and bioavailability, offering protection from toxicity, and enhancing pharmacological activity. We reported previously that a tripterygium glycoside (TG) solid lipid nanoparticle (TG-SLN) delivery system had a protective effect against TG-induced male reproductive toxicity. To better understand this issue, we used triptolide (TP) as a model drug in a comparative study of the toxicokinetic and tissue distribution of TP-SLN and free TP in rats, allowing us to observe the in vivo behavior of this nanoformulation and to assess mechanisms of SLN-related toxicity. A fast and sensitive HPLC-APCI-MS/MS method was developed for the determination of triptolide in rat plasma. Fourteen rats were divided randomly into two groups of 7 rats each for toxicokinetic analysis, with one group receiving free TP (450μg/kg) and the other receiving the TP-SLN formulation (450μg/kg). Blood was obtained before dosing and 0.083, 0.17, 0.25, 0.33, 0.5, 0.75, 1, 1.5, 2, 3 and 4h after drug administration. Thirty-six rats were divided randomly into six equal groups for a tissue-distribution study. Half of the rats received intragastric administration of TP (450μg/kg) and the other half received TP-SLN (450μg/kg). At 15, 45, and 90min after dosing, samples of blood, liver, kidney, spleen, lung, and testicular tissue were taken. TP concentration in the samples was determined by LC-APCI-MS-MS. The toxicokinetic results for the nanoformulation showed a significant increase in the area under the curve (AUC) (P<0.05), significantly longer T(max) and mean retention times (MRTs) (0-t) (P<0.05), and significantly decreased C(max) (P<0.05). The nanoformulation promoted absorption with a slow-release character, indicating that toxicokinetic changes may be the most important mechanism for the enhanced efficacy of nanoformulations. Tissue-distribution results suggest a tendency for TP concentrations in the lung and spleen to increase, while TP concentrations in plasma, liver, kidney, and testes tended to decrease in the TP-SLN group. At multiple time points, testicular tissue TP concentrations were lower in the TP-SLN group than in the free TP group. This provides an important clue for the decreased reproductive toxicity observed with TP-SLN. |
Cloudlets: at the leading edge of cloud-mobile convergence | Since the dawn of mobile computing two decades ago, the unique constraints of mobility have shaped the software architectures of systems. We now stand at the threshold of the next major transformation in computing: one in which the rich sensing and interaction capabilities of mobile devices are seamlessly fused with compute-intensive and data-intensive processing in the cloud. This heralds a new genre of software that augments human perception and cognition in a mobile context.
A major obstacle to realizing this vision is the large and variable end-to-end WAN latency between mobile device and cloud, and the possibility of WAN disruptions. Cloudlets have emerged as an architectural solution to this problem. A cloudlet represents the middle tier of a 3-tier hierarchy: mobile device -- cloudlet -- cloud, and can be viewed as a "data center in a box" whose goal is to "bring the cloud closer". A cloudlet-based hardware/software ecosystem inspires futuristic visions such as cognitive assistance for attention-challenged mobile users, scalable crowd-sourcing of first-person video, and ubiquitous mobile access to one's legacy world. Realizing these visions will require many technical challenges to be overcome. It will also require us to rethink a wide range of issues in areas such as privacy, software licensing, and business models. |
High accuracy approximate multiplier with error correction | Approximate computing has gained significant attention due to the popularity of multimedia applications. In this paper, we propose a novel inaccurate 4:2 counter that can effectively reduce the partial product stages of the Wallace multiplier. Compared to the normal Wallace multiplier, our proposed multiplier can reduce power consumption by 10.74% and delay by 9.8% on average, with an error rate from 0.2% to 13.76%. The accuracy of amplitude is higher than 99%. In addition, we further enhance the design with error-correction units to provide accurate results. The experimental results show that the extra power consumption of the correction units is lower than 6% on average. Compared to the normal Wallace multiplier, our proposed multiplier with EDC is on average 6% faster when the bit-width is 32, and the power consumption is still 10% lower than that of the Wallace multiplier. |
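To make the idea of an inaccurate 4:2 counter concrete, here is a hedged behavioral sketch. It models one common simplification from the approximate-arithmetic literature (encoding the all-ones input pattern as 3 instead of 4) rather than the specific counter proposed in this paper, and reports the resulting error rate over all input patterns:

```python
# Behavioral sketch of an approximate 4:2 counter. The exact counter encodes
# the number of ones among four partial-product bits; the approximate variant
# below encodes the all-ones case (4) as 3, a common simplification in the
# approximate-arithmetic literature and not necessarily the counter of this
# paper. The error rate is computed exhaustively over the 16 input patterns.

import itertools

def exact_count(bits):
    return sum(bits)                               # 0..4

def approx_count(bits):
    total = sum(bits)
    return 3 if total == 4 else total              # only the 1111 pattern is wrong

patterns = list(itertools.product([0, 1], repeat=4))
errors = sum(exact_count(p) != approx_count(p) for p in patterns)
print(f"error rate: {errors}/{len(patterns)} = {errors / len(patterns):.2%}")  # 6.25%
```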
An Anonymous Electronic Voting Protocol for Voting Over The Internet | In this work we propose a secure electronic voting protocol that is suitable for large scale voting over the Internet. The protocol allows a voter to cast his or her ballot anonymously, by exchanging untraceable yet authentic messages. The protocol ensures that (i) only eligible voters are able to cast votes, (ii) a voter is able to cast only one vote, (iii) a voter is able to verify that his or her vote is counted in the final tally, (iv) nobody, other than the voter, is able to link a cast vote with a voter, and (v) if a voter decides not to cast a vote, nobody is able to cast a fraudulent vote in place of the voter. The protocol does not require the cooperation of all registered voters. Neither does it require the use of complex cryptographic techniques like threshold cryptosystems or anonymous channels for casting votes. This is in contrast to other voting protocols that have been proposed in the literature. The protocol uses three agents, other than the voters, for successful operation. However, we do not require any of these agents to be trusted. That is, the agents may be physically co-located or may collude with one another to try to commit a fraud. If a fraud is committed, it can be easily detected and proven, so that the vote can be declared null and void. Although we propose the protocol with electronic voting in mind, the protocol can be used in other applications that involve exchanging an untraceable yet authentic message. Examples of such applications are answering a confidential questionnaire anonymously or conducting anonymous financial transactions. |
Measure it? Manage it? Ignore it? software practitioners and technical debt | The technical debt metaphor is widely used to encapsulate numerous software quality problems. The metaphor is attractive to practitioners as it communicates to both technical and nontechnical audiences that if quality problems are not addressed, things may get worse. However, it is unclear whether there are practices that move this metaphor beyond a mere communication mechanism. Existing studies of technical debt have largely focused on code metrics and small surveys of developers. In this paper, we report on our survey of 1,831 participants, primarily software engineers and architects working in long-lived, software-intensive projects from three large organizations, and follow-up interviews of seven software engineers. We analyzed our data using both nonparametric statistics and qualitative text analysis. We found that architectural decisions are the most important source of technical debt. Furthermore, while respondents believe the metaphor is itself important for communication, existing tools are not currently helpful in managing the details. We use our results to motivate a technical debt timeline to focus management and tooling approaches. |
Two-step parameter extraction procedure with formal optimization for physics-based circuit simulator IGBT and p-i-n diode models | A practical and accurate parameter extraction method is presented for the Fourier-based-solution physics-based insulated gate bipolar transistor (IGBT) and power diode models. The goal is to obtain a model accurate enough to allow switching loss prediction under a variety of operating conditions. In the first step of the extraction procedure, only one simple clamped inductive load test is needed for the extraction of the six parameters required for the diode model and of the 12 and 15 parameters required for the nonpunch-through (NPT) and punch-through (PT) IGBT models, respectively. The second part of the extraction procedure is an automated formal optimization step that refines the parameter estimation. Validation with experimental results from various structures of IGBT demonstrates the accuracy of the proposed IGBT and diode models and the robustness of the parameter extraction method. |
Personalized entity recommendation: a heterogeneous information network approach | Among different hybrid recommendation techniques, network-based entity recommendation methods, which utilize user or item relationship information, have recently begun to attract increasing attention. Most of the previous studies in this category only consider a single relationship type, such as friendships in a social network. In many scenarios, the entity recommendation problem exists in a heterogeneous information network environment. Different types of relationships can be potentially used to improve the recommendation quality. In this paper, we study the entity recommendation problem in heterogeneous information networks. Specifically, we propose to combine heterogeneous relationship information for each user differently and aim to provide high-quality personalized recommendation results using user implicit feedback data and personalized recommendation models.
In order to take full advantage of the relationship heterogeneity in information networks, we first introduce meta-path-based latent features to represent the connectivity between users and items along different types of paths. We then define recommendation models at both global and personalized levels and use Bayesian ranking optimization techniques to estimate the proposed models. Empirical studies show that our approaches outperform several widely employed or the state-of-the-art entity recommendation techniques. |
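A minimal sketch of the kind of model described: precomputed meta-path connectivity features between users and items, per-user weights over the meta-paths, and a pairwise Bayesian ranking update on implicit feedback. The array shapes, feature construction and hyperparameters below are illustrative assumptions, not the paper's exact estimation procedure:

```python
# Pairwise Bayesian ranking over meta-path latent features (illustrative).
# path_feats[k][u, i] holds the precomputed connectivity of user u to item i
# along meta-path k (e.g. user-item-tag-item); theta[u] are that user's
# per-meta-path weights in the personalized variant.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_paths = 50, 200, 4
path_feats = rng.random((n_paths, n_users, n_items))   # meta-path latent features
theta = np.zeros((n_users, n_paths))                    # per-user path weights

def score(u, i):
    return theta[u] @ path_feats[:, u, i]

def bpr_step(u, pos, neg, lr=0.05, reg=0.01):
    """One stochastic step on the BPR objective: prefer observed item 'pos' over 'neg'."""
    x_uij = score(u, pos) - score(u, neg)
    g = 1.0 / (1.0 + np.exp(x_uij))                     # gradient factor sigma(-x_uij)
    grad = path_feats[:, u, pos] - path_feats[:, u, neg]
    theta[u] += lr * (g * grad - reg * theta[u])

# toy training loop over implicit-feedback triples (user, clicked item, sampled negative)
for _ in range(1000):
    u = rng.integers(n_users)
    pos, neg = rng.integers(n_items, size=2)
    bpr_step(u, pos, neg)
```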
Comparing Entities in RDF Graphs | The Semantic Web has fuelled the appearance of numerous open-source knowledge bases. Knowledge bases enable new types of information search, going beyond classical query answering and into the realm of exploratory search, and providing answers to new types of user questions. One such question is how two entities are comparable, i.e., what are similarities and differences between the information known about the two entities. Entity comparison is an important task and a widely used functionality available in many information systems. Yet it is usually domain-specific and depends on a fixed set of aspects to compare. In this paper we propose a formal framework for domain-independent entity comparison that provides similarity and difference explanations for input entities. We model explanations as conjunctive queries, we discuss how multiple explanations for an entity pair can be ranked and we provide a polynomial-time algorithm for generating most specific similarity explanations. |
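The simplest possible instance of the idea, shown below, treats similarity explanations as shared (predicate, object) pairs and difference explanations as pairs held by only one entity; the paper's framework generalizes this to ranked conjunctive queries. The toy triples are made up for illustration:

```python
# Entity comparison over RDF-style triples in its most basic form:
# similarities = (predicate, object) pairs shared by both entities,
# differences = pairs present for one entity and absent for the other.

triples = {
    ("Berlin",  "type",     "City"),
    ("Berlin",  "country",  "Germany"),
    ("Berlin",  "hasRiver", "Spree"),
    ("Hamburg", "type",     "City"),
    ("Hamburg", "country",  "Germany"),
    ("Hamburg", "hasRiver", "Elbe"),
}

def description(entity):
    return {(p, o) for (s, p, o) in triples if s == entity}

def compare(e1, e2):
    d1, d2 = description(e1), description(e2)
    return {"similarities": d1 & d2,
            "only_" + e1: d1 - d2,
            "only_" + e2: d2 - d1}

print(compare("Berlin", "Hamburg"))
```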
A Semantic Relevance Based Neural Network for Text Summarization and Text Simplification | Abstractive text summarization has achieved successful performance thanks to the sequence-to-sequence model (Sutskever, Vinyals, and Le 2014) and the attention mechanism (Bahdanau, Cho, and Bengio 2014). Rush, Chopra, and Weston (2015) first used an attention-based encoder to compress texts and a neural network language decoder to generate summaries. Following this work, recurrent encoders were introduced to text summarization and gained better performance (Lopyrev 2015; Chopra, Auli, and Rush 2016). For Chinese texts, Hu, Chen, and Zhu (2015) built a large corpus of Chinese short text summarization. To deal with the unknown word problem, Nallapati et al. (2016) proposed a generator-pointer model so that the decoder is able to generate words from the source texts. Gu et al. (2016) also addressed this issue by incorporating a copying mechanism. In addition, Ayana et al. (2016) proposed a minimum risk training method that optimizes the parameters with ROUGE scores as the target. Zhu, Bernhard, and Gurevych (2010) constructed a Wikipedia dataset and proposed a tree-based simplification model, the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. Woodsend and Lapata (2011) introduced a data-driven model based on quasi-synchronous grammar, which captures structural mismatches and complex rewrite operations. Wubben, van den Bosch, and Krahmer (2012) presented a method for text simplification using phrase-based machine translation with re-ranking of the outputs. |
Competence-Based Strategies and Global Production Networks a Discussion of Current Changes and Their Implications for Employment | This paper adopts a multidisciplinary perspective to analyse current changes in firms' organisational strategies and assess their implications from the perspective of industrial organisation and employment. The analysis first draws on recent developments in the strategic management literature that conceptualise the firm as knowledge-based or competence-based. This approach is built upon to develop a competence-based organisational model integrating both firms' internal management practices and external linkages into a unified analytical framework, and showing how firms respond to new competitive pressures by managing competencies on an intrafirm and interfirm basis. Part two considers how such a model can contribute to explaining the emergence of global production networks, which are analysed by focusing on the key dimensions of power, activity and geography, along the lines of the global commodity chain framework. The employment outcomes of competence-based organisational strategies and network forms of organisation are then discussed from the perspective of labour market segmentation theory, with emphasis on the emergence of new forms of employment segmentation within and between firms. |
Dynamic Programming for Detecting, Tracking, and Matching Deformable Contours | The problem of segmenting an image into separate regions and tracking them over time is one of the most significant problems in vision. Terzopoulos et al. have proposed an approach to detect the contour regions of complex shapes, assuming a user-selected initial contour not very far from the desired solution. We propose to further explore the information provided by the user's selected points and apply an optimal method to detect contours, which allows a segmentation of the image. The method is based on dynamic programming (DP), and applies to a wide variety of shapes. It is exact and not iterative. We also consider a multiscale approach capable of speeding up the algorithm by a factor of 20, although at the expense of losing the guaranteed optimality characteristic. The problem of tracking and matching these contours is addressed. For tracking, the final contour obtained at one frame is sampled and used as initial points for the next frame. Then, the same DP process is applied. For matching, a novel strategy is proposed where the solution is a smooth displacement field in which unmatched regions are allowed while cross vectors are not. The algorithm is again based on DP and the optimal solution is guaranteed. We have demonstrated the algorithms on natural objects in a large spectrum of applications, including interactive segmentation and automatic tracking of the regions of interest in medical images. |
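A compact sketch of the dynamic-programming idea: each contour point may be placed at one of several candidate positions, and DP selects the globally minimum-cost placement under an image term plus a smoothness term between consecutive points. The energy terms and candidate generation below are placeholders, not the paper's exact formulation, and the multiscale speed-up is omitted:

```python
# Dynamic-programming sketch: each contour point can move to one of a few
# candidate positions, and DP picks the placement minimizing an image term
# plus a smoothness term between consecutive points.

import numpy as np

def detect_contour(candidates, image_energy, smooth_weight=1.0):
    """candidates: list of (m_i, 2) arrays of candidate (x, y) positions.
    image_energy(p): cost of placing a contour point at p (e.g. -|gradient|)."""
    n = len(candidates)
    cost = [np.array([image_energy(p) for p in candidates[0]], dtype=float)]
    back = []
    for i in range(1, n):
        ci, prev = candidates[i], candidates[i - 1]
        pair = smooth_weight * np.linalg.norm(ci[:, None, :] - prev[None, :, :], axis=2)
        total = pair + cost[-1][None, :]               # shape (m_i, m_{i-1})
        back.append(total.argmin(axis=1))
        cost.append(total.min(axis=1) +
                    np.array([image_energy(p) for p in ci], dtype=float))
    path = [int(cost[-1].argmin())]                    # backtrack optimal choices
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)]

cands = [np.array([[0, 0], [0, 1]]), np.array([[1, 0], [1, 2]]), np.array([[2, 0], [2, 1]])]
print(detect_contour(cands, image_energy=lambda p: 0.0))   # smoothest chain wins
```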
Suicide by skull stab wounds: a case of drug-induced psychosis. | Suicide by stabbing to the head and/or driving sharp objects into the skull is of extreme rarity. This article reports the case of a 27-year-old man, who committed suicide by multiple knife stabs and cuts to the head, the torso, one shoulder and the forearms. Autopsy showed a perforating wound of the skull and the 10-cm long broken blade of the knife being still embedded in the right temporal lobe of the brain. The deceased had no history of psychiatric illness but was currently treated by mefloquine, a quinine derivative associated with a high rate of psychiatric adverse effects. Toxicological examination confirmed a recent intake of mefloquine together with chloroquine, another antimalarial drug. To our knowledge, this is the first report of a completed suicide with very strong evidence of mefloquine implication. Discussion focuses upon mefloquine-induced psychiatric disorders and highlights the importance of performing toxicological investigations in cases of unusual suicides. |
Online Anomaly Detection in Crowd Scenes via Structure Analysis | Abnormal behavior detection in crowd scenes remains a challenge in the field of computer vision. To tackle this problem, this paper starts from a novel structural modeling of crowd behavior. We first propose an informative structural context descriptor (SCD) for describing crowd individuals, which introduces the potential energy function of inter-particle force from solid-state physics to intuitively conduct visual contextual cueing. To compute the crowd SCD variation effectively, we then design a robust multi-object tracker to associate the targets in different frames, which employs the incremental analytical ability of the 3-D discrete cosine transform (DCT). By online spatial-temporal analysis of the SCD variation of the crowd, the abnormality is finally localized. Our contribution mainly lies in three aspects: 1) a new exploration of abnormality detection from structure modeling, where the motion difference between individuals is computed by a novel selective histogram of optical flow, enabling the proposed method to deal with more kinds of anomalies; 2) the SCD description, which can effectively represent the relationship among individuals; and 3) the 3-D DCT multi-object tracker, which can robustly associate a limited number of (instead of all) targets, making tracking analysis in high-density crowd situations feasible. Experimental results on several publicly available crowd video datasets verify the effectiveness of the proposed method. |
SDN-Guard: DoS Attacks Mitigation in SDN Networks | Software Defined Networking (SDN) has recently emerged as a new networking technology offering an unprecedented programmability that allows network operators to dynamically configure and manage their infrastructures. The main idea of SDN is to move the control plane into a central controller that is in charge of taking all routing decisions in the network. However, despite all the advantages offered by this technology, Denial-of-Service (DoS) attacks are considered a major threat to such networks as they can easily overload the controller processing and communication capacity and flood switch CAM tables, resulting in a critical degradation of the overall network performance. To address this issue, we propose in this paper SDN-Guard, a novel scheme able to efficiently protect SDN networks against DoS attacks by dynamically (1) rerouting potential malicious traffic, (2) adjusting flow timeouts and (3) aggregating flow rules. Realistic experiments using Mininet show that the proposed solution succeeds in minimizing by up to 32% the impact of DoS attacks on the controller performance, switch memory usage and control plane bandwidth and thereby maintaining acceptable network performance during such attacks. |
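The flavour of the mitigation policy can be sketched as a simple rule generator: flows flagged by an external IDS as likely malicious receive longer rule timeouts and are aggregated under coarser wildcard matches so they stop overloading the controller and the flow tables. The thresholds, field names and timeout values below are illustrative assumptions, not SDN-Guard's actual parameters:

```python
# Illustrative policy sketch: map an IDS threat probability to flow-rule
# timeouts and aggregation. All values are made up for demonstration.

def flow_rule(flow, threat_prob, low=0.3, high=0.7):
    rule = {"match": {"src": flow["src"], "dst": flow["dst"]}, "priority": 10}
    if threat_prob < low:                       # benign: normal short-lived rule
        rule.update(idle_timeout=10, hard_timeout=60)
    elif threat_prob < high:                    # suspicious: keep the rule longer
        rule.update(idle_timeout=60, hard_timeout=300)
    else:                                       # likely attack: aggregate by source subnet
        rule["match"] = {"src": flow["src"].rsplit(".", 1)[0] + ".0/24"}
        rule.update(idle_timeout=300, hard_timeout=600, priority=1)
    return rule

print(flow_rule({"src": "10.0.0.5", "dst": "10.0.0.9"}, threat_prob=0.9))
```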
AA-Sort: A New Parallel Sorting Algorithm for Multi-Core SIMD Processors | Many sorting algorithms have been studied in the past, but there are only a few algorithms that can effectively exploit both SIMD instructions and thread-level parallelism. In this paper, we propose a new parallel sorting algorithm, called aligned-access sort (AA-sort), for shared-memory multiprocessors. The AA-sort algorithm takes advantage of SIMD instructions. The key to high performance is eliminating unaligned memory accesses that would reduce the effectiveness of SIMD instructions. We implemented and evaluated the AA-sort on PowerPC 970MP and Cell Broadband Engine. In summary, a sequential version of the AA-sort using SIMD instructions outperformed IBM's optimized sequential sorting library by 1.8 times and GPUTeraSort using SIMD instructions by 3.3 times on PowerPC 970MP when sorting 32 M random 32-bit integers. Furthermore, a parallel version of AA-sort demonstrated better scalability with increasing numbers of cores than a parallel version of GPUTeraSort on both platforms. |
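The SIMD building block behind this kind of sorter, the vector compare-exchange, can be emulated with NumPy: the snippet below sorts four values per lane using whole-vector min/max operations and no unaligned accesses. It shows only the in-register stage; the vectorized combsort and odd-even merge of the full AA-sort are omitted:

```python
# Emulation of vectorized compare-exchange: four equal-length "SIMD registers"
# are sorted element-wise with whole-vector min/max, so every lane is sorted
# independently without per-element branching or unaligned loads.

import numpy as np

def sort4_lanes(v0, v1, v2, v3):
    """After the call, (v0[i], v1[i], v2[i], v3[i]) is ascending for every lane i."""
    def cmpx(a, b):                      # vector compare-exchange
        return np.minimum(a, b), np.maximum(a, b)
    v0, v1 = cmpx(v0, v1)
    v2, v3 = cmpx(v2, v3)
    v0, v2 = cmpx(v0, v2)
    v1, v3 = cmpx(v1, v3)
    v1, v2 = cmpx(v1, v2)
    return v0, v1, v2, v3

lanes = np.random.randint(0, 100, size=(4, 8))
print(np.vstack(sort4_lanes(*lanes)))    # each column is now sorted top-to-bottom
```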
Substance abuser impulsivity decreases with a nine-month stay in a therapeutic community. | BACKGROUND
Substance abuse continues to be a major public health problem. Keeping substance abusers in treatment is a challenge, and researchers continue to investigate ways to increase retention.
OBJECTIVE
The aim of this study was to investigate the relationship between impulsivity in substance abusers and length of stay in the context of therapeutic community.
METHODS
The Barratt Impulsiveness Scale- 11 (BIS-11) was used to assess impulsivity in 138 substance abusers at admission and at nine months in a therapeutic community.
RESULTS
Impulsivity significantly decreased in subjects who completed nine months in the therapeutic community. Legal stipulation increased length of stay, on average, by three months. On admission, female participants were on average more impulsive than their male counterparts.
CONCLUSION
Impulsivity decreased in subjects who remained in therapeutic community for nine months although self-reported impulsivity at baseline did not seem to be associated with retention.
SCIENTIFIC SIGNIFICANCE
Therapeutic community factors contribute to a decrease in self-reported impulsivity and these factors might be enhanced to increase retention in therapeutic community. |
Medial and Lateral Entorhinal Cortex Differentially Excite Deep versus Superficial CA1 Pyramidal Neurons. | Although hippocampal CA1 pyramidal neurons (PNs) were thought to comprise a uniform population, recent evidence supports two distinct sublayers along the radial axis, with deep neurons more likely to form place cells than superficial neurons. CA1 PNs also differ along the transverse axis with regard to direct inputs from entorhinal cortex (EC), with medial EC (MEC) providing spatial information to PNs toward CA2 (proximal CA1) and lateral EC (LEC) providing non-spatial information to PNs toward subiculum (distal CA1). We demonstrate that the two inputs differentially activate the radial sublayers and that this difference reverses along the transverse axis, with MEC preferentially targeting deep PNs in proximal CA1 and LEC preferentially exciting superficial PNs in distal CA1. This differential excitation reflects differences in dendritic spine numbers. Our results reveal a heterogeneity in EC-CA1 connectivity that may help explain differential roles of CA1 PNs in spatial and non-spatial learning and memory. |
A comprehensive overview of methodologies and performance evaluation frameworks in 3D mesh segmentation | 3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation. |
Analysis and Simulation of a Rocker-bogie Exploration Rover | Rovers will continue to play an important role in planetary exploration. Plans include the use of the rocker-bogie rover configuration. Here, models of the mechanics of this configuration are presented. Methods for solving the inverse kinematics of the system and quasi-static force analysis are described. Also described is a simulation based on the models of the rover’s performance. Experimental results confirm the validity of the models. |
Effects of attribute and valence of e-WOM on message adoption: Moderating roles of subjective knowledge and regulatory focus | The current study proposes a model to test whether online review valence and attributes have an effect on credibility, and whether regulatory focus and subjective knowledge have moderating effects. Three hundred nineteen university students participated in online experiments with a 2 (positive vs. negative review valence) by 2 (objective vs. subjective review attributes) between subject design. The experiment demonstrated that objective and negative online reviews have a significant positive and negative impact, respectively, on message credibility, which affects review adoption. The results also showed that the moderating effect produced by objective information and a consumer’s subjective knowledge is supported. This study contributes to explaining the inconsistent results between review valence/attribute and credibility found in previous studies. |
Comparing information graphics: a critical look at eye tracking | Effective graphics are essential for understanding complex information and completing tasks. To assess graphic effectiveness, eye tracking methods can help provide a deeper understanding of scanning strategies that underlie more traditional, high-level accuracy and task completion time results. Eye tracking methods entail many challenges, such as defining fixations, assigning fixations to areas of interest, choosing appropriate metrics, addressing potential errors in gaze location, and handling scanning interruptions. Special considerations are also required designing, preparing, and conducting eye tracking studies. An illustrative eye tracking study was conducted to assess the differences in scanning within and between bar, line, and spider graphs, to determine which graphs best support relative comparisons along several dimensions. There was excessive scanning to locate the correct bar graph in easier tasks. Scanning across bar and line graph dimensions before comparing across graphs was evident in harder tasks. There was repeated scanning between the same dimension of two spider graphs, implying a greater cognitive demand from scanning in a circle that contains multiple linear dimensions, than from scanning the linear axes of bar and line graphs. With appropriate task design and targeted analysis metrics, eye tracking techniques can illuminate visual scanning patterns hidden by more traditional time and accuracy results. |
The Jewish population of the former Lomza province by the end of the nineteenth century | The distribution of the Jewish population in the area that was formerly Lomza province, Poland, is examined for the period at the end of the nineteenth century according to type of locality (such as town, former town or village), commune, and size of locality. The results indicate a concentration of the Jewish population in the towns and a wide dispersion in villages. |
Recognizing academic performance, sleep quality, stress level, and mental health using personality traits, wearable sensors and mobile phones | What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%. |
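A hedged sketch of the kind of classification analysis described, using scikit-learn: select the most informative month-aggregated features and cross-validate a classifier for a binary outcome such as high versus low GPA. The feature matrix, classifier choice and parameters are illustrative, not the study's actual pipeline:

```python
# Feature selection + classification with cross-validation (illustrative only;
# the data here are random, so accuracy will be near chance).

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 40))     # 66 participants x 40 month-aggregated features
y = rng.integers(0, 2, size=66)   # e.g. high (1) vs low (0) GPA via median split

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # keep the 10 most informative features
    ("svm", SVC(kernel="linear", C=1.0)),
])
print(cross_val_score(clf, X, y, cv=5).mean())
```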
Additive Manufacturing : A Framework for Implementation | As mass production has migrated to developing countries, European and US companies are forced to rapidly switch towards low volume production of more innovative, customised and sustainable products with high added value. To compete in this turbulent environment, manufacturers have sought new fabrication techniques to provide the necessary tools to support the need for increased flexibility and enable economic low volume production. One such emerging technique is Additive Manufacturing (AM). AM is a method of manufacture which involves the joining of materials, usually layer-upon-layer, to create objects from 3D model data. The benefits of this methodology include new design freedom, removal of tooling requirements, and economic low volumes. AM consists of various technologies to process versatile materials, and for many years its dominant application has been the manufacture of prototypes, or Rapid Prototyping. However, the recent growth in applications for direct part manufacture, or Rapid Manufacturing, has resulted in much research effort focusing on development of new processes and materials. This study focuses on the implementation process of AM and is motivated by the lack of socio-technical studies in this area. It addresses the need for existing and potential future AM project managers to have an implementation framework to guide their efforts in adopting this new and potentially disruptive technology class to produce high value products and generate new business opportunities. Based on a review of prior works and through qualitative case study analysis, we construct and test a normative structural model of implementation factors related to AM technology, supply chain, organisation, operations and strategy. |
On random relational structures | Erdős and Rényi gave a probabilistic construction of the countable universal homogeneous graph. We extend their result to more general structures of first-order predicate calculus. Our main result shows that if a class of countable relational structures C contains an infinite ω-categorical universal homogeneous structure U, then U can be constructed probabilistically. |
Software Traceability : A Roadmap |
Evaluating the intangible benefits of Business Intelligence: review and research agenda | A Business Intelligence (BI) system is a technology that provides significant business value by improving the effectiveness of managerial decision-making. In an uncertain and highly competitive business environment, the value of strategic information systems such as these is easily recognised. High adoption rates and investment in BI software and services suggest that these systems are a principal provider of decision support in the current marketplace. Most business investments are screened using some form of evaluation process or technique. The benefits of BI are such that traditional evaluation techniques have difficulty in identifying the soft, intangible benefits often provided by BI. This paper, forming the first part of a larger research project, aims to review current evaluation techniques that address intangible benefits, presents issues relating to the evaluation of BI in industry, and suggests a research agenda to advance what is presently a limited body of knowledge relating to the evaluation of BI intangible benefits. |
Gibbs Sampling Methods for Stick-Breaking Priors | A rich and flexible class of random probability measures, which we call stick-breaking priors, can be constructed using a sequence of independent beta random variables. Examples of random measures that have this characterization include the Dirichlet process, its two-parameter extension, the two-parameter Poisson–Dirichlet process, finite dimensional Dirichlet priors, and beta two-parameter processes. The rich nature of stick-breaking priors offers Bayesians a useful class of priors for nonparametric problems, while the similar construction used in each prior can be exploited to develop a general computational procedure for fitting them. In this article we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Pólya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling method currently employed for Dirichlet process computing. This method applies to stick-breaking priors with a known Pólya urn characterization, that is, priors with an explicit and simple prediction rule. Our second method, the blocked Gibbs sampler, is based on an entirely different approach that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach because it works without requiring an explicit prediction rule. We find that the blocked Gibbs avoids some of the limitations seen with the Pólya urn approach and should be simpler for nonexperts to use. |
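The construction itself is short enough to show. The sketch below draws a truncated stick-breaking measure with Beta(1, alpha) sticks, i.e. a truncated Dirichlet process; other Beta parameterizations give the two-parameter variants mentioned in the abstract, and the blocked Gibbs sampler operates directly on such a truncated representation. The truncation level and hyperparameters are illustrative:

```python
# Truncated stick-breaking construction: w_k = v_k * prod_{j<k} (1 - v_j),
# with v_k ~ Beta(1, alpha). Atom locations are drawn from the base measure
# (a standard normal here, purely for illustration).

import numpy as np

def stick_breaking(alpha, truncation, rng=np.random.default_rng()):
    v = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    w = v * remaining
    w[-1] = 1.0 - w[:-1].sum()        # fold the leftover mass into the last atom
    return w

w = stick_breaking(alpha=2.0, truncation=50)
atoms = np.random.default_rng().normal(size=50)   # atom locations from the base measure
print(w.sum(), (w * atoms).sum())                  # weights sum to 1; mean of the measure
```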
Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images | Reliably modeling normality and differentiating abnormal appearances from normal cases is a very appealing approach for detecting pathologies in medical images. A plethora of such unsupervised anomaly detection approaches has been made in the medical domain, based on statistical methods, content-based retrieval, clustering and recently also deep learning. Previous approaches towards deep unsupervised anomaly detection model patches of normal anatomy with variants of Autoencoders or GANs, and detect anomalies either as outliers in the learned feature space or from large reconstruction errors. In contrast to these patch-based approaches, we show that deep spatial autoencoding models can be efficiently used to capture normal anatomical variability of entire 2D brain MR images. A variety of experiments on real MR data containing MS lesions corroborates our hypothesis that we can detect and even delineate anomalies in brain MR images by simply comparing input images to their reconstruction. Results show that constraints on the latent space and adversarial training can further improve the segmentation performance over standard deep representation learning. |
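The detection rule reduces to "reconstruct and compare". The sketch below uses a PCA fitted to normal data as the simplest linear stand-in for the spatial autoencoder, then flags pixels with large reconstruction error; the paper's latent-space constraints, adversarial training and real brain MR data are not reproduced here, and all sizes and thresholds are illustrative:

```python
# Reconstruction-error anomaly map with a linear "autoencoder" (PCA stand-in).

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 64 * 64))        # flattened "healthy" training images
model = PCA(n_components=32).fit(normal)        # model of normal appearance only

test = rng.normal(size=(64 * 64,))
test[1000:1050] += 4.0                          # inject a synthetic "lesion"

recon = model.inverse_transform(model.transform(test[None, :]))[0]
error_map = np.abs(test - recon).reshape(64, 64)
segmentation = error_map > np.percentile(error_map, 99)   # simple thresholding
print(segmentation.sum(), "pixels flagged as anomalous")
```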
The effect of mode of delivery on postpartum sexual functioning in primiparous women. | OBJECTIVE
To evaluate the effect of mode of delivery on postpartum sexual functioning in primiparous women.
METHODS
In this cross-sectional descriptive study, 150 primiparous women in postpartum period, who attended the family planning or vaccination clinics, were enrolled for the study. Eighty-one had vaginal delivery with episiotomy and 69 had experienced cesarean section. Sexual function was evaluated by the Female Sexual Function Index within 3 and 6 months postpartum.
RESULTS
About 29% in the vaginal delivery group and 37% in the cesarean delivery group had resumed sexual intercourse four weeks after delivery (p=0.280). There were no significant differences between mode of delivery and sexual functioning, including desire, arousal, lubrication, orgasm, satisfaction and pain.
CONCLUSION
The present study showed that postpartum sexual functioning was not associated with the type of delivery. |
d6 metal systems for white phosphorus activation | This short review describes a breakthrough provided by the synthesis of d6 metal complexes containing the intact molecules P4 and P4S3. The coordinated cage molecules acquire unexpected reactivity and undergo dismutation reactions under mild conditions in the presence of water. The products are obtained either in the form of free or coordinated molecules; the former are hypophosphorous and phosphorous acids, the latter comprise, besides phosphine, PH3, such species as thiophosphinous acid, PH2SH, diphosphane, P2H4, 1-hydroxytriphosphane, PH(OH)PHPH2, and 1,1,4-tris-hydroxytetraphosphane, P(OH)2PHPHPH(OH), which are either unknown or extremely reactive as free molecules. The formation of the above molecules provides a clue to the hydrolytic activation of the P4 and P4S3 cage molecules. |
A Comparison of Physically Based Rendering Systems | In this thesis, a quantitative evaluation is performed to find the most relevant physically based rendering systems in research. As a consequence of this evaluation, the rendering systems Mitsuba, PBRT-v3 and LuxRender are compared to each other and their potential for interoperability is assessed. The goal is to find common materials and light models and analyze the effects of changing the parameters of those models. |
Toward a Distributed Data Flow Platform for the Web of Things (Distributed Node-RED) | Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of runtime environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanisms, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer. |
Safer Classification by Synthesis | The discriminative approach to classification using deep neural networks has become the de-facto standard in various fields. Complementing recent reservations about safety against adversarial examples, we show that conventional discriminative methods can easily be fooled to provide incorrect labels with very high confidence to out of distribution examples. We posit that a generative approach is the natural remedy for this problem, and propose a method for classification using generative models. At training time, we learn a generative model for each class, while at test time, given an example to classify, we query each generator for its most similar generation, and select the class corresponding to the most similar one. Our approach is general and can be used with expressive models such as GANs and VAEs. At test time, our method accurately “knows when it does not know,” and provides resilience to out of distribution examples while maintaining competitive performance for standard examples. |
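A bare-bones version of classification by synthesis, with per-class PCA models standing in for the per-class GANs/VAEs described here: classify by whichever class model reconstructs the input best, and abstain when even the best reconstruction error is large, which is the "knows when it does not know" behaviour. The threshold and models are illustrative assumptions:

```python
# Per-class generative stand-ins (PCA) + classification by best reconstruction,
# with rejection of out-of-distribution inputs.

import numpy as np
from sklearn.decomposition import PCA

def fit_class_models(X, y, n_components=5):
    return {c: PCA(n_components=n_components).fit(X[y == c]) for c in np.unique(y)}

def classify(models, x, reject_threshold=6.0):
    errs = {c: np.linalg.norm(x - m.inverse_transform(m.transform(x[None]))[0])
            for c, m in models.items()}
    best = min(errs, key=errs.get)
    return ("unknown", errs[best]) if errs[best] > reject_threshold else (best, errs[best])

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(5, 1, (100, 20))])
y = np.array([0] * 100 + [1] * 100)
models = fit_class_models(X, y)
print(classify(models, rng.normal(5, 1, 20)))        # in distribution: picks a class
print(classify(models, rng.normal(50, 1, 20)))       # far out of distribution: "unknown"
```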
The game, the player, the world: looking for a heart of gameness | This paper proposes a definition of games. I describe the classic game model, a list of six features that are necessary and sufficient for something to be a game. The definition shows games to be transmedial: There is no single game medium, but rather a number of game media, each with its own strengths. The computer is simply the latest game medium to emerge. While computer games are therefore part of the broader area of games, they have in many cases evolved beyond the classic game model. |
Image Quality Transfer via Random Forest Regression: Applications in Diffusion MRI | This paper introduces image quality transfer. The aim is to learn the fine structural detail of medical images from high quality data sets acquired with long acquisition times or from bespoke devices and transfer that information to enhance lower quality data sets from standard acquisitions. We propose a framework for solving this problem using random forest regression to relate patches in the low-quality data set to voxel values in the high quality data set. Two examples in diffusion MRI demonstrate the idea. In both cases, we learn from the Human Connectome Project (HCP) data set, which uses an hour of acquisition time per subject, just for diffusion imaging, using custom built scanner hardware and rapid imaging techniques. The first example, super-resolution of diffusion tensor images (DTIs), enhances spatial resolution of standard data sets with information from the high-resolution HCP data. The second, parameter mapping, constructs neurite orientation density and dispersion imaging (NODDI) parameter maps, which usually require specialist data sets with two b-values, from standard single-shell high angular resolution diffusion imaging (HARDI) data sets with b = 1000 s/mm2. Experiments quantify the improvement against alternative image reconstructions in comparison to ground truth from the HCP data set in both examples and demonstrate efficacy on a standard data set. |
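The core regression step can be sketched with scikit-learn: learn a random-forest mapping from low-quality patches to the corresponding high-quality centre values, then apply it to enhance new data. Synthetic 1D signals stand in for the diffusion MRI volumes of the paper, and the patch size and forest settings are illustrative:

```python
# Patch-to-voxel random forest regression (toy 1D stand-in for volumes).

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patches(signal, radius=2):
    idx = np.arange(radius, len(signal) - radius)
    return np.stack([signal[i - radius:i + radius + 1] for i in idx]), idx

rng = np.random.default_rng(0)
high = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.normal(size=500)   # "high quality"
low = high + 0.3 * rng.normal(size=500)                                # degraded version

X, idx = extract_patches(low)           # low-quality patches as features
y = high[idx]                           # high-quality centre value as target

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
enhanced = rf.predict(X)                # in practice: predict on unseen subjects
print(np.abs(low[idx] - high[idx]).mean(), np.abs(enhanced - y).mean())
```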
Youtube traffic characterization: a view from the edge | This paper presents a traffic characterization study of the popular video sharing service, YouTube. Over a three month period we observed almost 25 million transactions between users on an edge network and YouTube, including more than 600,000 video downloads. We also monitored the globally popular videos over this period of time.
In the paper we examine usage patterns, file properties, popularity and referencing characteristics, and transfer behaviors of YouTube, and compare them to traditional Web and media streaming workload characteristics. We conclude the paper with a discussion of the implications of the observed characteristics. For example, we find that as with the traditional Web, caching could improve the end user experience, reduce network bandwidth consumption, and reduce the load on YouTube's core server infrastructure. Unlike traditional Web caching, Web 2.0 provides additional meta-data that should be exploited to improve the effectiveness of strategies like caching. |
Genetic diversity in natural populations: a fundamental component of plant-microbe interactions. | Genetic diversity for plant defense against microbial pathogens has been studied either by analyzing sequences of defense genes or by testing phenotypic responses to pathogens under experimental conditions. These two approaches give different but complementary information but, to date, only rare attempts at their integration have been made. Here we discuss the advances made, because of the two approaches, in understanding plant-pathogen coevolution and propose ways of integrating the two. |
Generic tools, specific languages | Adapting tools to a particular domain is expensive, and the adaptation is often not very deep. To address this challenge, Generic Tools, Specific Languages shifts the focus from building and adapting tools (windows, buttons, algorithms) to building and adapting languages to a domain. The thesis applies the approach to embedded software development: mbeddr is an extensible set of integrated languages for embedded software development built with JetBrains MPS language workbench. The evaluation of mbeddr suggests that it is a productive tool for embedded software development. The evaluation of the development of mbeddr itself suggests that MPS is a suitable platform for Generic Tools, Specific Languages, and that the approach in general is worthwhile. |
Bayesian probabilistic matrix factorization using Markov chain Monte Carlo | Low-rank matrix approximation methods provide one of the simplest and most effective approaches to collaborative filtering. Such models are usually fitted to data by finding a MAP estimate of the model parameters, a procedure that can be performed efficiently even on very large datasets. However, unless the regularization parameters are tuned carefully, this approach is prone to overfitting because it finds a single point estimate of the parameters. In this paper we present a fully Bayesian treatment of the Probabilistic Matrix Factorization (PMF) model in which model capacity is controlled automatically by integrating over all model parameters and hyperparameters. We show that Bayesian PMF models can be efficiently trained using Markov chain Monte Carlo methods by applying them to the Netflix dataset, which consists of over 100 million movie ratings. The resulting models achieve significantly higher prediction accuracy than PMF models trained using MAP estimation. |
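One conditional update from such a Gibbs sampler is easy to write down: with the item factors and Gaussian priors held fixed, each user's factor vector has a closed-form Gaussian posterior. The sketch below samples it for a single user and leaves out the hyperprior updates of the full model; dimensions, priors and the precision parameter are illustrative:

```python
# One Gibbs step for Bayesian PMF: sample a user's factor vector from its
# Gaussian full conditional given the current item factors.

import numpy as np

def sample_user_factor(ratings_i, V, alpha=2.0, mu_u=None, lambda_u=None,
                       rng=np.random.default_rng()):
    """ratings_i: dict {item_index: rating} observed for one user.
    V: (n_items, d) current item factors. Returns one posterior sample."""
    d = V.shape[1]
    mu_u = np.zeros(d) if mu_u is None else mu_u
    lambda_u = np.eye(d) if lambda_u is None else lambda_u
    items = np.array(list(ratings_i.keys()))
    r = np.array(list(ratings_i.values()))
    Vj = V[items]                                        # (n_obs, d)
    precision = lambda_u + alpha * Vj.T @ Vj             # posterior precision
    mean = np.linalg.solve(precision, lambda_u @ mu_u + alpha * Vj.T @ r)
    return rng.multivariate_normal(mean, np.linalg.inv(precision))

V = np.random.default_rng(0).normal(size=(100, 8))
print(sample_user_factor({3: 4.0, 17: 5.0, 42: 2.0}, V))
```

Cycling such updates over all users and items (plus the hyperparameter draws of the full model) yields posterior samples whose averaged predictions give the accuracy gains over MAP-trained PMF reported in the abstract.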
Validating the theoretical structure of the Treatment Self-Regulation Questionnaire (TSRQ) across three different health behaviors. | Nearly 40% of mortality in the United States is linked to social and behavioral factors such as smoking, diet and sedentary lifestyle. Autonomous self-regulation of health-related behaviors is thus an important aspect of human behavior to assess. In 1997, the Behavior Change Consortium (BCC) was formed. Within the BCC, seven health behaviors, 18 theoretical models, five intervention settings and 26 mediating variables were studied across diverse populations. One of the measures included across settings and health behaviors was the Treatment Self-Regulation Questionnaire (TSRQ). The purpose of the present study was to examine the validity of the TSRQ across settings and health behaviors (tobacco, diet and exercise). The TSRQ is composed of subscales assessing different forms of motivation: amotivation, external, introjection, identification and integration. Data were obtained from four different sites and a total of 2731 participants completed the TSRQ. Invariance analyses support the validity of the TSRQ across all four sites and all three health behaviors. Overall, the internal consistency of each subscale was acceptable (most alpha values >0.73). The present study provides further evidence of the validity of the TSRQ and its usefulness as an assessment tool across various settings and for different health behaviors. |
3D printed dielectric Fresnel lens | This paper presents the design and fabrication of a zone plate Fresnel lens. 3D Printing is used for rapid prototyping this low-cost and light-weight lens to operate at 10 GHz. This lens is comprised of four different 3D printed dielectric zones to form phase compensation in a Fresnel lens. The dielectric zones are fabricated with different infill percentage to create tailored dielectric constants. The dielectric lens offers 18 dBi directivity at 10 GHz when illuminated by a waveguide source. |
Selection of robot pre-grasps using box-based shape approximation | Grasping is a central issue of various robot applications, especially when unknown objects have to be manipulated by the system. In earlier work, we have shown the efficiency of 3D object shape approximation by box primitives for the purpose of grasping. A point cloud was approximated by box primitives [1]. In this paper, we present a continuation of these ideas and focus on the box representation itself. To the set of grasp hypotheses derived from box face normals, we apply a heuristic selection integrating task, orientation and shape issues. Finally, an off-line trained neural network is applied to choose the best hypothesis as the final grasp. We motivate how boxes, as one of the simplest representations, can be applied in a more sophisticated manner to generate task-dependent grasps. |
Finite temperature non-local effective action for scalar fields | Massless scalar fields at finite temperature are considered in four-dimensional ultrastatic curved spacetime. The one-loop non-local effective action at finite temperature is found up to second order in the curvature expansion. This action is explicitly infrared finite. In the high-temperature expansion of the free energy, essentially non-local terms linear in temperature are derived. |
Relation of Telemetry Use and Mortality Risk, Hospital Length of Stay, and Readmission Rates in Patients With Respiratory Illness. | The 2004 American Heart Association expert opinion-based guidelines restrict telemetry use primarily to patients with current or high-risk cardiac conditions. Respiratory infections have emerged as a common source of hospitalization, and telemetry is frequently applied without indication in efforts to monitor patient decompensation. In this retrospective study, we aimed to determine whether telemetry impacts mortality risk, length of stay (LOS), or readmission rates in hospitalized patients with acute respiratory infection not meeting American Heart Association criteria. A total of 765 respiratory infection patient encounters with Diagnosis-Related Groups 193, 194, 195, 177, 178 and 179 admitted in 2013 to 2015 to 2 tertiary community-based medical centers (Mayo Clinic, Arizona, and Mayo Clinic, Florida) were evaluated, and outcomes between patients who underwent or did not undergo telemetry were compared. Overall, the median LOS was longer in patients who underwent telemetry (3.0 days vs 2.0 days, p <0.0001). No differences between cohorts were noted in 30-day readmission rates (0.6% vs 1.3%, p = 0.32), patient mortality while hospitalized (0.6% vs 1.3%, p = 0.44), mortality at 30 days (7.9% vs 7.7%, p = 0.94), or mortality at 90 days (13.5% vs 13.5%, p = 0.99). Telemetry predicted LOS for both univariate (estimate 1.18, 95% confidence interval 1.06 to 1.32, p = 0.003) and multivariate (estimate 1.17, 95% confidence interval 1.06 to 1.30, p = 0.003) analyses after controlling for severity of illness but did not predict patient mortality. In conclusion, this study identified that patients with respiratory infection who underwent telemetry without clear indications may face increased LOS without reducing their readmission risk or improving the overall mortality. |
Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | To achieve dynamic inference in pixel labeling tasks, we propose Pixel-wise Attentional Gating (PAG), which learns to selectively process a subset of spatial locations at each layer of a deep convolutional network. PAG is a generic, architecture-independent, problem-agnostic mechanism that can be readily “plugged in” to an existing model with fine-tuning. We utilize PAG in two ways: 1) learning spatially varying pooling fields that improve model performance without the extra computation cost associated with multi-scale pooling, and 2) learning a dynamic computation policy for each pixel to decrease total computation (FLOPs) while maintaining accuracy. We extensively evaluate PAG on a variety of per-pixel labeling tasks, including semantic segmentation, boundary detection, monocular depth and surface normal estimation. We demonstrate that PAG allows competitive or state-of-the-art performance on these tasks. Our experiments show that PAG learns dynamic spatial allocation of computation over the input image which provides better performance trade-offs compared to related approaches (e.g., truncating deep models or dynamically skipping whole layers). Generally, we observe PAG can reduce computation by 10% without noticeable loss in accuracy and performance degrades gracefully when imposing stronger computational constraints. |
Semantic component retrieval in software engineering | In the early days of programming, the concept of subroutines, and through it software reuse, was invented to spare limited hardware resources. Since then, software systems have become increasingly complex, and developing them would not have been possible without reusable software elements such as standard libraries and frameworks. Furthermore, other approaches commonly subsumed under the umbrella of software reuse, such as product lines and design patterns, have become very successful in recent years. However, there are still no software component markets available that would make buying software components as simple as buying parts in a do-it-yourself hardware store, and millions of software fragments are still lying un(re)used in configuration management repositories all over the world. The literature primarily blames this on the immense effort required so far to set up and maintain searchable component repositories and on the weak mechanisms available for retrieving components from them, resulting in a severe usability problem. In order to address these issues within this thesis, we developed a proactive component reuse recommendation system, naturally integrated into test-first development approaches, which is able to propose semantically appropriate, reusable components according to the specification a developer is currently working on. We have implemented such a system as a plugin for the well-known Eclipse IDE and demonstrated its usefulness by carrying out a case study from a popular agile development book. Furthermore, we present a precision analysis for our approach and examples of how components can be retrieved based on a simplified semantics description in terms of standard test cases. Summary: In the days of the first programming languages, the idea of subroutines, and with it the idea of software reuse, was conceived to save scarce hardware resources. Since then, software systems have become ever more complex, and their development would simply no longer be manageable without further reusable software elements such as libraries and frameworks. Other approaches commonly subsumed under the term software reuse, such as product lines and design patterns, have likewise been very successful in recent years; at the same time, however, there are still no marketplaces that would make buying software components as simple as buying small parts in a do-it-yourself store. As a result, millions of un(re)used software fragments currently lie dormant in configuration management systems all over the world. The literature attributes this primarily to the immense effort that has so far been required to set up and maintain searchable component repositories. Together with the imprecise algorithms so far available for searching such component stores, this makes using these systems too complicated and therefore unattractive. To lower this hurdle, in this thesis we developed a proactive component recommendation system that is closely aligned with test-driven development processes and, building on them, can propose reusable components that provide exactly the functionality a developer currently needs.
We implemented the system as a plugin for the well-known Eclipse IDE and demonstrated its usability by reimplementing an example from a well-known book on agile development with it. Furthermore, this thesis contains a precision analysis of our approach as well as numerous examples of how ordinary test cases can serve as a simplified semantic description of a component and as a starting point for the search for reusable components. |
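As a rough illustration of the idea of using ordinary test cases as a simplified semantic description, the sketch below shows a unit test that specifies the behaviour a desired component must exhibit, which a retrieval engine could match candidate components against. It is hypothetical and written in Python for brevity; the thesis itself targets Java and the Eclipse IDE.

```python
import unittest

class Stack:
    """Trivial candidate component; in test-driven reuse, such a class would
    be retrieved from a repository rather than written by hand."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class StackSpec(unittest.TestCase):
    """Test case acting as a simplified semantic description of the desired
    component: any retrieved candidate must make these assertions pass."""
    def test_push_then_pop_returns_last_element(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

if __name__ == "__main__":
    unittest.main()
```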
Evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health: a methodological research programme. | BACKGROUND
The development of the capability and capacity to evaluate the outcomes of trials of complex interventions is a key priority of the National Institute for Health Research (NIHR) and the Medical Research Council (MRC). The evaluation of complex treatment programmes for mental illness (e.g. cognitive-behavioural therapy for depression or psychosis) not only is a vital component of this research in its own right but also provides a well-established model for the evaluation of complex interventions in other clinical areas. In the context of efficacy and mechanism evaluation (EME) there is a particular need for robust methods for making valid causal inference in explanatory analyses of the mechanisms of treatment-induced change in clinical outcomes in randomised clinical trials.
OBJECTIVES
The key objective was to produce statistical methods to enable trial investigators to make valid causal inferences about the mechanisms of treatment-induced change in these clinical outcomes. The primary objective of this report is to disseminate this methodology, aiming specifically at trial practitioners.
METHODS
The three components of the research were (1) the extension of instrumental variable (IV) methods to latent growth curve models and growth mixture models for repeated-measures data; (2) the development of designs and regression methods for parallel trials; and (3) the evaluation of the sensitivity/robustness of findings to the assumptions necessary for model identifiability. We illustrate our methods with applications from psychological and psychosocial intervention trials, keeping the technical details to a minimum, leaving the reporting of the more theoretical and mathematically demanding results for publication in appropriate specialist journals.
RESULTS
We show how to estimate treatment effects and introduce methods for EME. We explain the use of IV methods and principal stratification to evaluate the role of putative treatment effect mediators and therapeutic process measures. These results are extended to the analysis of longitudinal data structures. We consider the design of EME trials. We focus on designs to create convincing IVs, bearing in mind assumptions needed to attain model identifiability. A key area of application that has become apparent during this work is the potential role of treatment moderators (predictive markers) in the evaluation of treatment effect mechanisms for personalised therapies (stratified medicine). We consider the role of targeted therapies and multiarm trials and the use of parallel trials to help elucidate the evaluation of mediators working in parallel.
CONCLUSIONS
In order to demonstrate both efficacy and mechanism, it is necessary to (1) demonstrate a treatment effect on the primary (clinical) outcome, (2) demonstrate a treatment effect on the putative mediator (mechanism) and (3) demonstrate a causal effect from the mediator to the outcome. For (3), appropriate regression models should be applied, or alternatively IV procedures that account for unmeasured confounding, provided that a valid instrument can be identified. Stratified medicine may provide a setting in which such instruments can be designed into the trial. This work could be extended by considering improved trial designs, sample size considerations and measurement properties.
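As a schematic illustration of conditions (2) and (3), and not taken from the report, a linear mediation model with treatment T, mediator M, outcome Y and an instrument Z (for example a randomised treatment-by-moderator interaction) can be written as follows; the notation is ours.

```latex
% Schematic linear mediation model with an instrument Z.
\begin{align*}
  M_i &= \alpha_0 + \alpha_1 T_i + \alpha_2 Z_i + \nu_i, \\
  Y_i &= \beta_0 + \beta_1 T_i + \beta_2 M_i + \varepsilon_i,
  \qquad \operatorname{Cov}(\nu_i, \varepsilon_i) \neq 0 .
\end{align*}
```

Here condition (2) corresponds to a non-zero treatment effect on the mediator (alpha_1 not equal to 0) and condition (3) to a non-zero mediator effect on the outcome (beta_2 not equal to 0); the non-zero error covariance encodes unmeasured mediator-outcome confounding, under which ordinary regression of Y on M is biased, whereas an IV estimator using Z remains consistent provided Z affects Y only through M.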
FUNDING
The project presents independent research funded under the MRC-NIHR Methodology Research Programme (grant reference G0900678). |
The Reason of Rules | Societies function on the basis of rules. These rules, rather like the rules of the road, coordinate the activities of individuals who have a variety of goals and purposes. Whether the rules work well or ill, and how they can be made to work better, is a matter of major concern. Appropriately interpreted, the working of social rules is also the central subject matter of modern political economy. This book is about rules - what they are, how they work, and how they can be properly analysed. The authors' objective is to understand the workings of alternative political institutions so that choices among such institutions (rules) can be more fully informed. Thus, broadly defined, the methodology of constitutional political economy is the subject matter of The Reason of Rules. The authors have examined how rules for political order work, how such rules might be chosen, and how normative criteria for such choices might be established. |
Digital Advertising Traffic Operation: Machine Learning for Process Discovery | In a web advertising traffic operation it is necessary to manage the day-to-day trafficking, pacing and optimization of digital and paid social campaigns. The data analyst in traffic operations must not only provide answers quickly but also speak the language of the process manager and visually display the discovered process problems. In order to resolve a growing number of complaints in the customer service process, the weaknesses in the process itself must be identified and communicated to the department. With the help of process mining on the CRM data it is possible to identify unwanted loops and delays in the process. In this paper we propose a process discovery approach based on machine learning techniques to automatically discover variations, detect at first glance what the problem is, and undertake corrective measures. |
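As a minimal, hypothetical illustration of process discovery on CRM-style event data (not the technique proposed in the paper), the sketch below builds a directly-follows graph from a toy complaint-handling log; repeated transitions such as assigned to escalated and back would reveal the kind of unwanted loops mentioned above. The activity names and log are invented for this example.

```python
from collections import defaultdict

# Hypothetical CRM event log: one list of activities per complaint case.
event_log = [
    ["ticket_opened", "assigned", "resolved", "closed"],
    ["ticket_opened", "assigned", "escalated", "assigned", "resolved", "closed"],
    ["ticket_opened", "assigned", "escalated", "assigned", "escalated",
     "assigned", "resolved", "closed"],
]

# Count directly-follows relations; frequent back-and-forth pairs
# (e.g. assigned -> escalated -> assigned) indicate rework loops.
dfg = defaultdict(int)
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), count in sorted(dfg.items(), key=lambda kv: -kv[1]):
    print(f"{a} -> {b}: {count}")
```

In practice such a directly-follows graph would be the input to a discovery algorithm and a visualization layer rather than being printed directly.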
The resolution of facial expressions of emotion. | Much is known about how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known about how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., the number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused with a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception. |
Platform Leadership: How Intel, Microsoft, and Cisco Drive Industry Innovation | Annabelle Gawer is Assistant Professor of Strategy and Management at INSEAD. Gawer is engaged in research on the 'why' and 'how' of innovation strategies for firms operating in industries such as computers and telecommunications. She is affiliated with MIT research centers including the Internet and Telecom Convergence Consortium (ITC), which studies the possible evolution of the industry of computer-enhanced human communications, and the Center for Innovation in Product Development (CIPD), an engineering research center. |
Modeling of Set and Reset Operations of Phase-Change Memory Cells | Phase-change memory elements with 25-nm Ge2Sb2Te5 thickness and 25-nm heater diameter with ±2-nm protrusion/recess of the heater are studied using 2-D finite-element simulations with rotational symmetry. Temperature-dependent material parameters are used to solve current continuity and heat equations self-consistently. Melting is accounted for by including latent heat of fusion in heat capacity at melting temperature. Electrical breakdown is modeled using additional field-dependent conductivity terms to enable set simulations. Analyses on current, voltage, energy, power, and minimum pitch requirements are summarized for reset/set operations with 1-ns/20-ns voltage pulses leading to ~500× difference between the reset and set resistance states. |
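For reference, the coupled electrothermal model described above is commonly written in the schematic form below; the notation is generic and ours, not taken from the paper. The latent heat of fusion is folded into the heat capacity c_p(T) over a narrow interval around the melting temperature, and the field-dependent contribution to the conductivity sigma models electrical breakdown during the set operation.

```latex
% Schematic coupled electrothermal equations for phase-change memory simulation.
\begin{align*}
  \nabla \cdot \big( \sigma(T, E)\, \nabla V \big) &= 0,
    && \text{current continuity,} \\
  \rho\, c_p(T)\, \frac{\partial T}{\partial t}
    &= \nabla \cdot \big( k(T)\, \nabla T \big) + \sigma \lvert \nabla V \rvert^{2},
    && \text{heat equation with Joule heating,}
\end{align*}
```

where V is the electric potential, E the electric field magnitude, and sigma, k, rho and c_p the temperature-dependent electrical conductivity, thermal conductivity, density and heat capacity; the two equations are solved self-consistently over the rotationally symmetric cell geometry.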
Rates of non-confounded HIV-associated neurocognitive disorders in men initiating combination antiretroviral therapy during primary infection. | OBJECTIVE
To determine the prevalence of HIV-associated neurocognitive disorders (HAND) in HIV-infected participants who initiated combination antiretroviral therapy (cART) during primary infection.
DESIGN
Cross-sectional observational study.
METHODS
HIV-infected men without neuropsychiatric confounds who had initiated cART during primary infection were administered a neuropsychological battery as well as questionnaires evaluating depression and quality of life. Eligibility was determined by a medical examination with history and review of records.
RESULTS
Twenty-six primarily non-Hispanic white (73%), male (100%) participants were enrolled and underwent neurocognitive assessment. Mean age was 44 (28-71) years, with a median of 17 years of education (13-24). Median current and nadir CD4 T-cell counts were 828 (506-1411) and 359 (150-621) cells/μl. All participants had plasma HIV-1 RNA less than 50 copies/ml. Median duration of cART prior to enrolment was 5.7 years (2.2-9.9). Median global deficit score was 0.17 (0.00-0.60). Only one (4%) participant was impaired.
CONCLUSION
Rates of HAND in this cohort of HIV-infected men without comorbid conditions who initiated early cART are low. Our findings suggest a possible neuroprotective benefit of early cART and an important contribution of comorbidities to observed HAND prevalence. |