Short-Term Load Forecasting Using EMD-LSTM Neural Networks with an XGBoost Algorithm for Feature Importance Evaluation
Accurate load forecasting is an important issue for the reliable and efficient operation of a power system. This study presents a hybrid algorithm that combines similar days (SD) selection, empirical mode decomposition (EMD), and long short-term memory (LSTM) neural networks to construct a prediction model (i.e., SD-EMD-LSTM) for short-term load forecasting. The extreme gradient boosting-based weighted k-means algorithm is used to evaluate the similarity between the forecasting and historical days. The EMD method is employed to decompose the SD load into several intrinsic mode functions (IMFs) and a residual. Separate LSTM neural networks are then employed to forecast each IMF and the residual. Lastly, the forecasts from each LSTM model are summed to reconstruct the final prediction. Numerical testing demonstrates that the SD-EMD-LSTM method can accurately forecast the electric load.
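As a rough illustration of the decompose-forecast-reconstruct pipeline described above, the sketch below decomposes a synthetic load series with EMD, fits one small LSTM per component, and sums the component forecasts. It assumes the third-party PyEMD and TensorFlow/Keras packages; the window size, network width, and training settings are illustrative choices, not the paper's configuration.

```python
# Sketch of the EMD -> per-component LSTM -> summed reconstruction
# pipeline. Assumes the third-party PyEMD and TensorFlow/Keras packages;
# all hyperparameters and the synthetic series are illustrative.
import numpy as np
from PyEMD import EMD
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def windows(series, w):
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    return X[..., None], series[w:]          # (samples, timesteps, 1), targets

load = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)  # stand-in for SD load
components = EMD().emd(load)                 # IMFs; the last row is the residual
w, forecasts = 24, []
for comp in components:                      # a separate LSTM per IMF/residual
    X, y = windows(comp, w)
    model = Sequential([LSTM(32, input_shape=(w, 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)
    forecasts.append(model.predict(comp[-w:].reshape(1, w, 1), verbose=0).item())
print("next-step load forecast:", sum(forecasts))   # reconstruct by summation
```

Because the EMD components sum to the original signal, summing the per-component forecasts is the natural reconstruction step.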
Validation of the REGICOR Short Physical Activity Questionnaire for the Adult Population
OBJECTIVE To develop and validate a short questionnaire to estimate physical activity (PA) practice and sedentary behavior in the adult population. METHODS The short questionnaire was developed using data from a cross-sectional population-based survey (n = 6352) that included the Minnesota leisure-time PA questionnaire. Activities that explained a significant proportion of the variability of population PA practice were identified. Validation of the short questionnaire included a cross-sectional component to assess validity with respect to data collected by accelerometers, and a longitudinal component to assess reliability and sensitivity to change (n = 114, aged 35 to 74 years). RESULTS Six types of activities that accounted for 87% of population variability in PA estimated with the Minnesota questionnaire were selected. The short questionnaire estimates energy expenditure in total PA and by intensity (light, moderate, vigorous), and includes 2 questions about sedentary behavior and a question about occupational PA. The short questionnaire showed high reliability, with intraclass correlation coefficients ranging from 0.79 to 0.95. The Spearman correlation coefficients between estimated energy expenditure obtained with the questionnaire and the number of steps detected by the accelerometer were as follows: 0.36 for total PA, 0.40 for moderate intensity, and 0.26 for vigorous intensity. The questionnaire was sensitive to changes in moderate and vigorous PA (correlation coefficients ranging from 0.26 to 0.34). CONCLUSION The REGICOR short questionnaire is reliable, valid, and sensitive to changes in moderate and vigorous PA. This questionnaire could be used in daily clinical practice and in epidemiological studies.
Surface treatment of the lithium boron oxide coated LiMn2O4 cathode material in Li-ion batteries
Surface modification of the electrode has a vital impact on lithium-ion batteries, and it is essential to probe the mechanism of the modified film on the electrode surface. In this study, a Li2O-2B2O3 film was coated on the surface of the cathode material by a solution method. The cathode powders, derived from a co-precipitation method, were calcined with various weight percentages of the surface-modifying glass to form fine powders of a single spinel phase with different particle sizes, size distributions, and morphologies. Thermogravimetry/differential thermal analysis was used to evaluate the appropriate heat-treatment temperature. The structure was confirmed by X-ray diffraction, and the composition was measured with an electron probe microanalyzer. From field emission scanning electron microscope images and laser scattering measurements, the average particle size was in the range of 7-8 µm. The electrochemical behavior of the cathode powder was examined using two-electrode test cells consisting of a cathode, a metallic lithium anode, and an electrolyte of 1 M LiPF6. Cyclic charge/discharge testing of coin cells fabricated with both coated and uncoated cathode material showed high discharge capacity. Furthermore, the coated cathode powder showed better cyclability than the uncoated one after the cyclic test. The glass-coated cathode material exhibited high discharge capacity and an appreciably decreased decay rate after cycling.
5.6 A 0.13μm fully digital low-dropout regulator with adaptive control and reduced dynamic stability for ultra-wide dynamic range
This paper presents a discrete-time, fully digital, scan-programmable LDO macro in 0.13μm technology featuring greater than 90% current efficiency across a 50× current range, and 8× improvement in transient response time in response to large load steps. The baseline design features a 128b barrel shifter that digitally controls 128 identical power PMOS devices to provide load and line regulation at the node VREG, for a scan-programmable fine-grained synthetic load. A clocked comparator, which eliminates the need for any bias current, controls the direction of shift, D. The programmable mux-select signals, MUX1 and MUX2, provide controllable closed loop gains, KBARREL, of 1 to 3×. Since at any clock edge only 1, 2 or 3 shifts can occur (depending on the gain setting), fine-grained clock gating is enabled by dividing the 128b shifter into four sections and only enabling the clock to the section(s) where the shift occurs.
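The control loop lends itself to a behavioral simulation: a clocked comparator chooses the shift direction, and the barrel shifter enables between 0 and 128 identical PMOS strength units. The toy model below illustrates the resulting limit-cycle regulation under assumed device, load, and timing values; it is not the paper's 0.13μm circuit.

```python
# Behavioral toy model of the digital LDO loop: a clocked comparator sets
# the shift direction D, and the barrel shifter enables n of 128 identical
# PMOS units. Device strength, load, capacitance, and clock period are
# assumed illustrative values, not the paper's design.
VIN, VREF, C, DT = 1.2, 1.0, 1e-9, 1e-8     # supply, target, output cap, clock period
I_UNIT, K_BARREL = 1e-3, 2                  # per-unit PMOS current, shifts per edge
n, vreg, i_load = 0, 0.0, 20e-3
for cycle in range(400):
    if cycle == 200:
        i_load = 60e-3                      # large load step
    d = 1 if vreg < VREF else -1            # clocked comparator picks the direction
    n = min(128, max(0, n + K_BARREL * d))  # barrel shifter moves K_BARREL positions
    i_pmos = n * I_UNIT                     # enabled units source current
    vreg = min(VIN, max(0.0, vreg + DT * (i_pmos - i_load) / C))  # charge balance
print(f"settled with {n} units on, VREG = {vreg:.3f} V")
```

The gain setting K_BARREL plays the role of the programmable closed-loop gain: larger values slew faster after a load step at the cost of a larger steady-state limit cycle.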
Hybrid Mobile Edge Computing: Unleashing the Full Potential of Edge Computing in Mobile Device Use Cases
Many different technologies fostering and supporting distributed and decentralized computing scenarios have emerged recently. Edge computing provides on-demand computing power for Internet-of-Things (IoT) devices where it is needed. Computing power is moved closer to the consumer, which reduces latency and increases fail-safety owing to the absence of centralized structures. This is an enabler for applications requiring high-bandwidth uplinks and low latencies to computing units. In this paper, a new use case for edge computing is identified: mobile devices can overcome their battery limitations and performance constraints by dynamically using the computational power provided by the edge. We call this new technology Hybrid Mobile Edge Computing. We present a general architecture and framework that target the mobile-device use case of hybrid mobile edge computing, considering not only improvements in performance and energy consumption but also means to protect user privacy, sensitive data, and computations. These claims are backed by the results of our analysis of the energy-saving potential and the possible performance improvements.
Re-Os dating of the Raobazhai ultramafic massif in North Dabie
The ultramafic massif at Raobazhai in North Dabie is located in the suture zone between the Yangtze craton and the North China craton. The Re-Os isotope compositions of the massif are used to decipher the origin and tectonics of the ultramafic rocks involved in continental subduction and exhumation. Fifteen samples were collected from five drill holes along the main SE-NW axis of the Raobazhai massif. Major and trace element compositions of the samples show linear correlations between MgO, Yb, and Al2O3. This suggests that the massif experienced variable degrees of partial melting and ranges from fertile to depleted in basaltic compositions. Nine selected samples were analyzed for Re-Os isotope compositions. Re contents range from 0.004 to 0.376 ng/g, Os contents from 0.695 to 3.761 ng/g, 187Re/188Os ratios from 0.022 to 2.564, and 187Os/188Os ratios from 0.1165 to 0.1306. These indicate that the massif is a piece of continental lithospheric mantle with variable depletion. Using the positive correlations of 187Os/188Os with Yb and Al2O3, a proxy isochron age of 1.8±0.1 Ga is obtained for the Raobazhai ultramafic massif, which is interpreted to represent a fragment of the ancient subcontinental lithospheric mantle. During Triassic subduction of the Yangtze craton beneath the North China craton, part of the wedge of subcontinental lithospheric mantle was intruded into the subduction belt, and then exhumed to crustal level together with the subducted crustal plate after ultrahigh-pressure metamorphism at mantle depths. This ancient lithospheric mantle is now exposed as an orogenic peridotite massif.
Sentence Extraction by tf/idf and Position Weighting from Newspaper Articles
Recently, many researchers have focused their interest on the development of summarization systems for large-volume sources, combined with knowledge acquisition techniques such as information extraction, text mining, or information retrieval. Some of these techniques are implemented according to the specific knowledge of the domain or the genre of the source document. In this paper we discuss Japanese newspaper domain knowledge in order to produce summaries. My system is implemented with a sentence extraction approach and a weighting strategy to mine from a number of documents.
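A minimal version of the scoring scheme can be written directly: each sentence is scored by the mean tf-idf weight of its terms, multiplied by a position weight. The sketch below uses an illustrative 1/(1+index) position weight that favors lead sentences (a common newspaper heuristic), not the paper's exact formula.

```python
# Sentence scoring by mean tf-idf weight times a position weight.
# The 1/(1 + index) position factor is an illustrative choice, not the
# paper's exact weighting formula.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "The prime minister announced a new budget on Monday.",
    "Opposition parties criticized the proposal.",
    "The weather in the capital was mild.",
]
tfidf = TfidfVectorizer().fit_transform(sentences)       # one row per sentence
for i, s in enumerate(sentences):
    term_score = tfidf[i].sum() / max(1, tfidf[i].nnz)   # mean tf-idf of the terms
    score = term_score * (1.0 / (1 + i))                 # earlier sentences weigh more
    print(f"{score:.3f}  {s}")
# A summary keeps the top-k sentences by score, emitted in document order.
```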
Ibutilide added to propafenone for the conversion of atrial fibrillation and atrial flutter.
OBJECTIVES We evaluated the safety and efficacy of ibutilide when added to propafenone in treating both paroxysmal and chronic atrial fibrillation (AF) and atrial flutter (AFL). BACKGROUND The effects of ibutilide in patients with paroxysmal or chronic AF/AFL who were pre-treated with propafenone have not been previously evaluated. METHODS Oral propafenone was initially given to 202 patients with AF/AFL without left ventricular dysfunction. Intravenous ibutilide was administered to 104 patients in whom propafenone failed to convert the arrhythmia. Two different propafenone dosage regimens were used according to the duration of the presenting arrhythmia: patients with paroxysmal arrhythmia (n = 48) received a 600 mg loading dose, and patients with chronic arrhythmia (n = 56) were receiving 150 mg three times a day as stable-dose pre-treatment. RESULTS Ibutilide offered an overall conversion efficacy of 66.3% (69 of 104 patients): 70.8% for patients with paroxysmal AF/AFL and 62.5% for patients with chronic AF/AFL. Ibutilide significantly decreased the heart rate (HR) and further prolonged the QTc interval (p < 0.0001). The degree of HR reduction after ibutilide administration emerged as the sole predictor of successful arrhythmia termination (p < 0.001). After ibutilide, one patient (1%) developed two asymptomatic episodes of non-sustained torsade de pointes, and 10 patients (9.6%) manifested transient bradyarrhythmic events; however, all bradyarrhythmic effects were predictable, occurring mostly at the time of arrhythmia termination. None of the 82 patients who decided to continue propafenone after successful cardioversion had immediate arrhythmia recurrence. CONCLUSIONS Our graded approach using propafenone and ibutilide appears to be a relatively safe and effective alternative for the treatment of paroxysmal and chronic AF/AFL, both to rapidly restore sinus rhythm in nonresponders to propafenone monotherapy and to prevent immediate recurrences of the arrhythmia.
Autonomous parking using a sensor based approach
This paper considers the perpendicular parking problem of car-like vehicles for both forward and reverse maneuvers. A sensor-based controller with a weighted control scheme is proposed and compared with a state-of-the-art path planning approach. The perception problem is treated as well, considering a Velodyne VLP-16 as the sensor providing the required exteroceptive information. A methodology to extract the necessary information for both approaches from the sensor data is presented. A fast prototyping environment has been used to develop the parking application and also serves as a simulator to validate the approach. Preliminary results from simulation and real experimentation show the effectiveness of the proposed approach.
A comparison of two learning algorithms for text categorization
This paper examines the use of inductive learning to categorize natural language documents into predefined content categories. Categorization of text is of increasing importance in information retrieval and natural language processing systems. Previous research on automated text categorization has mixed machine learning and knowledge engineering methods, making it difficult to draw conclusions about the performance of particular methods. In this paper we present empirical results on the performance of a Bayesian classifier and a decision tree learning algorithm on two text categorization data sets. We find that both algorithms achieve reasonable performance and allow controlled tradeoffs between false positives and false negatives. The stepwise feature selection in the decision tree algorithm is particularly effective in dealing with the large feature sets common in text categorization. However, even this algorithm is aided by an initial prefiltering of features, confirming the results found by Almuallim and Dietterich on artificial data sets. We also demonstrate the impact of the time-varying nature of category definitions.
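The comparison can be reproduced in miniature with scikit-learn stand-ins: a multinomial naive Bayes classifier versus a decision tree over bag-of-words features, with an initial chi-squared pre-filtering of features. The toy corpus and the number of retained features are illustrative.

```python
# Miniature version of the comparison with scikit-learn stand-ins: a
# multinomial naive Bayes classifier versus a decision tree over
# bag-of-words features, after chi-squared feature pre-filtering.
# Corpus, labels, and k are toy values.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

docs = ["wheat prices rose", "corn harvest fell", "stocks rallied today",
        "bond yields dropped", "wheat exports grew", "markets closed higher"]
labels = ["grain", "grain", "finance", "finance", "grain", "finance"]

X = CountVectorizer().fit_transform(docs)
X = SelectKBest(chi2, k=5).fit_transform(X, labels)      # initial pre-filtering
for clf in (MultinomialNB(), DecisionTreeClassifier()):
    clf.fit(X, labels)
    print(type(clf).__name__, "training accuracy:", clf.score(X, labels))
```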
Im2Avatar: Colorful 3D Reconstruction from a Single Image
Existing works on single-image 3D reconstruction mainly focus on shape recovery. In this work, we study a new problem, that is, simultaneously recovering 3D shape and surface color from a single image, namely “colorful 3D reconstruction”. This problem is both challenging and intriguing because the ability to infer a textured 3D model from a single image is at the core of visual understanding. Here, we propose an end-to-end trainable framework, Colorful Voxel Network (CVN), to tackle this problem. Conditioned on a single 2D input, CVN learns to decompose shape and surface color information of a 3D object into a 3D shape branch and a surface color branch, respectively. Specifically, for the shape recovery, we generate a shape volume with the state of its voxels indicating occupancy. For the surface color recovery, we combine the strength of appearance hallucination and geometric projection by concurrently learning a regressed color volume and a 2D-to-3D flow volume, which are then fused into a blended color volume. The final textured 3D model is obtained by sampling color from the blended color volume at the positions of occupied voxels in the shape volume. To handle the severely sparse volume representations, a novel loss function, Mean Squared False Cross-Entropy Loss (MSFCEL), is designed. Extensive experiments demonstrate that our approach achieves significant improvement over baselines, and shows great generalization across diverse object categories and arbitrary viewpoints.
Anonymizing healthcare data: a case study on the blood transfusion service
Sharing healthcare data has become a vital requirement in healthcare system management; however, inappropriate sharing and usage of healthcare data could threaten patients' privacy. In this paper, we study the privacy concerns of the blood transfusion information-sharing system between the Hong Kong Red Cross Blood Transfusion Service (BTS) and public hospitals, and identify the major challenges that make traditional data anonymization methods not applicable. Furthermore, we propose a new privacy model called LKC-privacy, together with an anonymization algorithm, to meet the privacy and information requirements in this BTS case. Experiments on the real-life data demonstrate that our anonymization algorithm can effectively retain the essential information in anonymous data for data analysis and is scalable for anonymizing large datasets.
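LKC-privacy requires that every combination of at most L quasi-identifier values be shared by at least K records, and that the confidence of inferring any sensitive value from such a combination be at most C. A brute-force checker for small tables might look as follows; the table and parameters are toy values, not the BTS data or the paper's anonymization algorithm.

```python
# Illustrative checker for LKC-privacy: every combination of at most L
# quasi-identifier (QI) values must be shared by at least K records, and
# the confidence of inferring any sensitive value from it must be <= C.
# The table and parameters are toy data, not the BTS dataset.
from itertools import combinations
from collections import Counter

records = [  # (job, sex, age-group) are QIs; the last field is sensitive
    ("nurse", "F", "30-40", "HIV"), ("nurse", "F", "30-40", "flu"),
    ("clerk", "M", "30-40", "flu"), ("clerk", "M", "40-50", "flu"),
]
L, K, C = 2, 2, 0.6

def satisfies_lkc(records, L, K, C):
    qi_len = len(records[0]) - 1
    for size in range(1, L + 1):
        for idx in combinations(range(qi_len), size):
            groups = Counter(tuple(r[i] for i in idx) for r in records)
            for combo, count in groups.items():
                if count < K:
                    return False                      # combination too identifying
                sens = Counter(r[-1] for r in records
                               if tuple(r[i] for i in idx) == combo)
                if max(sens.values()) / count > C:
                    return False                      # sensitive value too inferable
    return True

print(satisfies_lkc(records, L, K, C))
```

An anonymization algorithm would generalize or suppress QI values until such a check passes while retaining as much analytical utility as possible.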
Assessing Effects of Task and Data Distribution on the Effectiveness of Visual Encodings
In addition to the choice of visual encodings, the effectiveness of a data visualization may vary with the analytical task being performed and the distribution of data values. To better assess these effects and create refined rankings of visual encodings, we conduct an experiment measuring subject performance across task types (e.g., comparing individual versus aggregate values) and data distributions (e.g., with varied cardinalities and entropies). We compare performance across 12 encoding specifications of trivariate data involving 1 categorical and 2 quantitative fields, including the use of x, y, color, size, and spatial subdivision (i.e., faceting). Our results extend existing models of encoding effectiveness and suggest improved approaches for automated design. For example, we find that colored scatterplots (with positionally-coded quantities and color-coded categories) perform well for comparing individual points, but perform poorly for summary tasks as the number of categories increases.
A Compact Multiple Band-Notched Planar Antenna with Enhanced Bandwidth Using Parasitic Strip Lumped Capacitors and DGS Technique
A UWB antenna with dual band-notched characteristics fed by a microstrip transmission line is presented in this paper. The tapered connection between the rectangular patch and the feed line is used to produce good impedance matching from 2.3 to 11.5 GHz. Dual frequency band notches are achieved using a U-shaped defected ground structure (UDGS) loaded with lumped capacitors. The first notch band is achieved using the DGS to reduce interference with WiMAX from 3.3 to 3.7 GHz. The second notch band is achieved using a U-shaped parasitic strip placed in the ground plane to eliminate interference with WLAN from 5.2 to 5.9 GHz. Lumped capacitors are combined with the slot in order to miniaturize the slot size; the size of the resonator is reduced by more than 40% when lumped capacitors are used. The proposed antenna has VSWR < 2 except in the notched bands. The simulated results confirm that the antenna is suitable for UWB applications.
Brain Networks of Explicit and Implicit Learning
Are explicit versus implicit learning mechanisms reflected in the brain as distinct neural structures, as previous research indicates, or are they distinguished by brain networks that involve overlapping systems with differential connectivity? In this functional MRI study we examined the neural correlates of explicit and implicit learning of artificial grammar sequences. Using effective connectivity analyses we found that brain networks of different connectivity underlie the two types of learning: while both processes involve activation in a set of cortical and subcortical structures, explicit learners engage a network that uses the insula as a key mediator whereas implicit learners evoke a direct frontal-striatal network. Individual differences in working memory also differentially impact the two types of sequence learning.
Molecular Evolution of the Nuclear Factor (Erythroid-Derived 2)-Like 2 Gene Nrf2 in Old World Fruit Bats (Chiroptera: Pteropodidae)
Mammals have developed antioxidant systems to defend against oxidative damage in their daily life. Enzymatic antioxidants and low molecular weight antioxidants (LMWAs) constitute major parts of these antioxidant systems. Nuclear factor (erythroid-derived 2)-like 2 (Nrf2, encoded by the Nrf2 gene) is a central transcriptional regulator of many antioxidant enzymes. Frugivorous bats eat large amounts of fruits that contain high levels of LMWAs such as vitamin C; thus, a reliance on LMWAs might greatly reduce the need for antioxidant enzymes in comparison to insectivorous bats. Therefore, it is possible that frugivorous bats have a reduced need for Nrf2 function due to their substantial intake of dietary antioxidants. To test whether the Nrf2 gene has undergone relaxed evolution in fruit-eating bats, we obtained Nrf2 sequences from 16 species of bats, including four Old World fruit bats (Pteropodidae) and one New World fruit bat (Phyllostomidae). Our molecular evolutionary analyses revealed changes in the selection pressure acting on the Nrf2 gene and identified seven specific amino acid substitutions that occurred on the ancestral lineage leading to Old World fruit bats. Biochemical experiments examining Nrf2 in Old World fruit bats showed that the amount of catalase, which is regulated by Nrf2, was significantly lower in the brain, heart, and liver of Old World fruit bats despite higher levels of Nrf2 protein. Computational predictions suggest that three of these seven amino acid replacements might be deleterious to Nrf2 function. Therefore, the results suggest that the Nrf2 gene might have experienced relaxed constraint in Old World fruit bats; however, we cannot rule out the possibility of positive selection. Our study provides the first data on the molecular adaptation of the Nrf2 gene in frugivorous bats in relation to the increased levels of LMWAs in their fruit diet.
A randomized controlled study comparing room air with carbon dioxide for abdominal pain, distention, and recovery time in patients undergoing colonoscopy.
Colonoscopy remains the gold standard for colorectal cancer screening. Many barriers to the procedure exist, including the possibility of abdominal discomfort that may occur with insufflation. Carbon dioxide (CO2), which is rapidly absorbed into the blood stream, is an alternative method used to distend the lumen during colonoscopy. The goal of this study was to compare patient discomfort, abdominal girth, and recovery time in 2 groups of patients randomized to CO2 versus room air insufflation during colonoscopy. Using a Wong-Baker score, we found a statistically significant difference in postprocedural discomfort levels (CO2 group: 1.15 ± 2.0 vs. room air: 0.41 ± 0.31, p = .015) and a greater increase in abdominal girth with room air than with CO2 immediately postprocedure (room air: 1.06 ± 1.29 inches vs. CO2: 0.56 ± 0.73 inches, p = .054); however, recovery time was similar between the 2 study arms (CO2: 9.1 ± 16.2 minutes vs. room air: 10.2 ± 18.6 minutes, p = .713). Further studies are needed to determine whether CO2 is cost-effective and improves patient satisfaction with colonoscopy.
Radiative and nonradiative muon capture on the proton in heavy baryon chiral perturbation theory
We have evaluated the amplitude for muon capture by a proton, mu + p --> n + nu, to O(p^3) within the context of heavy baryon chiral perturbation theory (HBChPT) using the new O(p^3) Lagrangian of Ecker and Mojzis (E&M). We obtain expressions for the standard muon capture form factors and determine three of the coefficients of the E&M Lagrangian, namely, b_7, b_{19}, and b_{23}. We describe progress on the next step, a calculation of the radiative muon capture process, mu + p --> n + nu + gamma.
Rule of law and the international diffusion of e-commerce
Strong institutional environments serve as a foundation for e-commerce growth opportunities.
A Step-Up DC–DC Converter With a Resonant Voltage Doubler
A step-up dc-dc converter with a resonant voltage doubler is proposed. The proposed converter is composed of an active-clamp circuit and a resonant voltage doubler. The active-clamp circuit, which is controlled by dual asymmetrical pulsewidth modulation on the primary side, reduces the voltage spikes of the main switches and limits the voltage stress of the switches to the maximum input voltage. In addition, the resonant voltage doubler on the secondary side provides two resonant-current paths formed by the leakage inductance and the output resonant capacitors, and zero-current-switching turn-off of the diodes can be achieved by their resonant current. Thus, the losses of the output diodes due to the reverse-recovery problem can be removed. In addition, the voltage stress of the output diodes is clamped to the output voltage. The operation and analysis of the proposed circuit are presented in detail. To verify the performance of the proposed converter, experiments are carried out on a 1.2-kW dc-dc converter with a constant switching frequency of 70 kHz.
A survey on the use of topic models when mining software repositories
Researchers in software engineering have attempted to improve software development by mining and analyzing software repositories. Since the majority of the software engineering data is unstructured, researchers have applied Information Retrieval (IR) techniques to help software development. The recent advances of IR, especially statistical topic models, have helped make sense of unstructured data in software repositories even more. However, even though there are hundreds of studies on applying topic models to software repositories, there is no study that shows how the models are used in the software engineering research community, and which software engineering tasks are being supported through topic models. Moreover, since the performance of these topic models is directly related to the model parameters and usage, knowing how researchers use the topic models may also help future studies make optimal use of such models. Thus, we surveyed 167 articles from the software engineering literature that make use of topic models. We find that i) most studies centre around a limited number of software engineering tasks; ii) most studies use only basic topic models; and iii) researchers usually treat topic models as black boxes without fully exploring their underlying assumptions and parameter values. Our paper provides a starting point for new researchers who are interested in using topic models, and may help new researchers and practitioners determine how to best apply topic models to a particular software engineering task.
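The "basic usage" pattern the survey observes looks roughly like the following: latent Dirichlet allocation with near-default parameters over repository text, here sketched with scikit-learn on invented commit messages.

```python
# The "basic usage" pattern the survey observes: LDA with near-default
# parameters over repository text. Commit messages and topic count are
# invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

commits = ["fix null pointer in parser", "add unit tests for parser",
           "update readme and docs", "refactor database connection pool",
           "fix connection timeout bug", "document the public api"]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(commits)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]   # strongest terms per topic
    print(f"topic {k}:", " ".join(top))
```

The survey's point is that choices such as n_components and the document granularity strongly affect results, yet are often left at defaults.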
The History of Political Thought in National Context
1. The history of political thought and the national discourses of politics Dario Castiglione and Iain Hampsher-Monk 2. The voice of the 'Greeks' in the Conversation of Mankind Janet Coleman 3. History of political theory in the Federal Republic of Germany: strange death and slow recovery Wolfgang Mommsen 4. A German version of the 'Linguistic Turn': Reinhart Koselleck and the history of political and social concepts (Begriffsgeschichte) Melvin Richter 5. One hundred years of history of political thought in Italy Angelo D'Orsi 6. Discordant voices: American histories of political thought Terry Ball 7. The Professoriate of Political Thought in England since 1914: a tale of three chairs Robert Wokler 8. The history of political thought and the political history of thought Iain Hampsher-Monk 9. The rise of, challenge to, and prospects of a Collingwoodian approach to the history of political thought Quentin Skinner 10. Towards a philosophical history of the political Pierre Rosanvallon 11. 'Le Retour des Emigres'?: The study of the history of political ideas in contemporary France Jeremy Jennings 12. National political cultures and regime changes in Eastern and Central Europe Victor Neumann 13. The limits of the national paradigm in the study of political thought: the case of Karl Popper and Central European Cosmopolitanism Malachi Hacohen 14. Postscript: Disciplines, canons, and publics: the history of the 'History of Political Thought' in comparative perspective Stefan Collini.
Methods for Reconstructing Causal Networks from Observed Time-Series: Granger-Causality, Transfer Entropy, and Convergent Cross-Mapping
Objectives and Scope A major question that arises in many areas of Cognitive Science is the need to distinguish true causal connections between variables from mere correlations. The most common way of addressing this distinction is the design of well-controlled experiments. However, in many situations, it is extremely difficult (or even outright impossible) to perform such experiments. Researchers are then forced to rely on correlational data in order to make causal inferences. This situation is especially common when one needs to analyze longitudinal data corresponding to historical time-series, symbolic sequences, or developmental data. These inferences are often very problematic. From the correlations alone it is difficult to determine the direction of the causal arrow linking two variables. Worse even, the lack of controls in observational data entails that correlations found between two variables need not reflect any causal connection between them. The possibility always remains that some third variable, which the researchers were not able to measure or were actually unaware of, is the actual driver of both measured variables, giving rise to the mirage of a direct relationship between them. In recent years, it has been shown that, under particular circumstances, one can use correlational information for making sound causal inferences (cf., Pearl, 2000). In this tutorial I will provide a hands-on introduction to the use of modern causality techniques for the analysis of observational time series. I will cover causality analyses for three types of time-series that are often encountered in Cognitive Science research:
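For the first of the title's techniques, a minimal Granger-causality test can be run with statsmodels on synthetic series in which x drives y at a one-step lag; the lag order and noise scale below are illustrative.

```python
# Minimal Granger-causality check on synthetic series where x drives y
# with a one-step lag. Lag order and noise scale are illustrative.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

# Column order matters: the test asks whether the 2nd column
# Granger-causes the 1st.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2, verbose=False)
print("lag-1 F-test p-value:", res[1][0]["ssr_ftest"][1])
```

A tiny p-value here says only that x's past improves prediction of y; as the abstract stresses, a hidden third variable can still be the true driver.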
A dynamic clustering algorithm for downlink CoMP systems with multiple antenna UEs
Coordinated multi-point (CoMP) schemes have been widely studied in recent years to tackle inter-cell interference. In practice, latency and throughput constraints on the backhaul allow the organization of only small clusters of base stations (BSs) where joint processing (JP) can be implemented. In this work we focus on downlink CoMP-JP with multiple-antenna user equipments (UEs) and propose a novel dynamic clustering algorithm. The additional degrees of freedom at the UE can be used to suppress the residual interference by using an interference rejection combiner (IRC) and allow a multistream transmission. In our proposal we first define a set of candidate clusters depending on long-term channel conditions. Then, in each time block, we develop a resource allocation scheme by jointly optimizing transmitter and receiver where: a) within each candidate cluster a weighted sum rate is estimated, and then b) a set of clusters is scheduled in order to maximize the system weighted sum rate. Numerical results show that much higher rates are achieved when UEs are equipped with multiple antennas. Moreover, as this performance improvement is mainly due to the IRC, the gain achieved by the proposed approach with respect to the non-cooperative scheme decreases as the number of UE antennas increases.
Quark-Hadron Duality in Neutron Spin-Structure and g_2 moments at intermediate Q**2
Jefferson Lab experiment E01-012 measured the He-3 spin-structure functions and virtual photon asymmetries in the resonance region in the momentum transfer range 1.0 < Q**2 < 4.0 (GeV/c)**2. Our data, when compared with existing deep inelastic scattering data, were used to test quark-hadron duality in g_1 and A_1 for He-3 and the neutron. In addition, preliminary results on the He-3 spin-structure function g_2, on the Burkhardt-Cottingham sum rule, and on higher-twist effects through the x**2-weighted moment d_2 of the neutron were presented.
Trajectories and particle creation and annihilation in quantum field theory
We develop a theory based on Bohmian mechanics in which particle world lines can begin and end. Such a theory provides a realist description of creation and annihilation events and thus a further step towards a 'beable-based' formulation of quantum field theory, as opposed to the usual 'observable-based' formulation which is plagued by the conceptual difficulties—such as the measurement problem—of quantum mechanics.
Developing multiple hypotheses in behavioral ecology
Researchers in behavioral ecology are increasingly turning to research methods that allow the simultaneous evaluation of hypotheses. This approach has great potential to increase our scientific understanding, but researchers interested in the approach should be aware of its long and somewhat contentious history. Also, prior to implementing multiple hypothesis evaluation, researchers should be aware of the importance of clearly specifying a priori hypotheses. This is one of the more difficult aspects of research based on multiple hypothesis evaluation, and we outline and provide examples of three approaches for doing so. Finally, multiple hypothesis evaluation has some limitations important to behavioral ecologists; we discuss two practical issues behavioral ecologists are likely to face.
Stylized augmented reality for improved immersion
The ultimate goal of augmented reality is to provide the user with a view of the surroundings enriched by virtual objects. Practically all augmented reality systems rely on standard real-time rendering methods for generating the images of virtual scene elements. Although such conventional computer graphics algorithms are fast, they often fail to produce sufficiently realistic renderings. The use of simple lighting and shading methods, as well as the lack of knowledge about actual lighting conditions in the real surroundings, cause virtual objects to appear artificial. In this paper, we propose an entirely novel approach for generating augmented reality images in video see-through systems. Our method is based on the idea of applying stylization techniques for reducing the visual realism of both the camera image and the virtual graphical objects. A special painterly image filter is applied to the camera video stream. The virtual scene elements are generated using a non-photorealistic rendering method. Since both the camera image and the virtual objects are stylized in a corresponding "cartoon-like" or "sketch-like" way, they appear very similar. As a result, the graphical objects seem to be an actual part of the real surroundings. We describe both the new painterly filter for the camera image and the non-photorealistic rendering method for virtual scene elements, which has been adapted for this purpose. Both are fast enough for generating augmented reality images in real-time and are highly customizable. The results obtained using our method are very promising and show that it improves immersion in augmented reality.
Long-term cognitive and functional effects of potentially inappropriate medications in older women.
BACKGROUND The use of potentially inappropriate medications (PIMs) in older adults can lead to known adverse drug events, but long-term effects are less clear. We therefore conducted a prospective cohort study of older women to determine whether PIM use is associated with risk of functional impairment or low cognitive performance. METHODS We followed up 1,429 community-dwelling women (≥ 75 years) for a period of 5 years at four clinical sites in the United States. The primary predictor at baseline was PIM use based on the 2003 Beers Criteria. We also assessed anticholinergic load using the Anticholinergic Cognitive Burden (ACB) scale. Outcomes included scores on a battery of six cognitive tests at follow-up and having one or more incident impairments in instrumental activities of daily living. Regression models were adjusted for baseline age, race, education, smoking, physical activity, a modified Charlson Comorbidity Index, and cognitive score. RESULTS The mean ± SD age of women at baseline was 83.2 ± 3.3 years. In multivariate models, baseline PIM use and higher ACB scores were significantly associated with poorer performance in category fluency (PIM: p = .01; ACB: p = .02) and in immediate (PIM: p = .04; ACB: p = .03) and delayed recall (PIM: p = .04). Both PIM use (odds ratio [OR]: 1.36 [1.05-1.75]) and higher ACB scores (OR: 1.11 [1.04-1.19]) were also strongly associated with incident functional impairment. CONCLUSIONS The results provide suggestive evidence that PIM use and increased anticholinergic load may be associated with risk of functional impairment and low cognitive performance. More cautious selection of medications in older adults may reduce these potential risks.
[Trastuzumab emtansine (Kadcyla(®)) approval in HER2-positive metastatic breast cancers].
HER2 (human epidermal growth factor receptor 2) is overexpressed in 15 to 20% of breast cancers. Anti-HER2 targeted therapies, notably trastuzumab, have transformed the natural history of this disease. Trastuzumab emtansine, consisting of trastuzumab coupled to a cytotoxic agent, emtansine (DM1), by a stable linker, was approved in November 2013 by the European Medicines Agency. Trastuzumab emtansine targets and inhibits HER2 signaling, but also allows emtansine to be directly delivered inside HER2-positive cancer cells. It is indicated as a single agent in taxane- and trastuzumab-pretreated HER2-positive breast cancer patients with metastatic and locally recurrent unresectable disease or disease relapsing within 6 months of the end of adjuvant therapy. This indication is based on the results of the EMILIA study, an open-label phase III randomized trial comparing trastuzumab emtansine to lapatinib-capecitabine. The two primary endpoints were reached. The progression-free survival was 6.4 months in the lapatinib-capecitabine arm versus 9.6 months for the trastuzumab emtansine arm (HR=0.65; 95% CI=0.55-0.77, P<0.001). Overall survival at the second interim analysis was 25.1 months in the lapatinib-capecitabine arm versus 30.9 months in the trastuzumab emtansine arm (HR=0.68; 95% CI=0.55-0.85, P<0.001). Moreover, adverse events were more frequent in the lapatinib-capecitabine arm.
Uses and Misuses of Bronfenbrenner’s Bioecological Theory of Human Development
This paper evaluates the application of Bronfenbrenner’s bioecological theory as it is represented in empirical work on families and their relationships. We describe the "mature" form of bioecological theory of the mid-1990s and beyond, with its focus on proximal processes at the center of the Process-Person-Context-Time model. We then examine 25 papers published since 2001, all explicitly described as being based on Bronfenbrenner’s theory, and show that all but 4 rely on outmoded versions of the theory, resulting in conceptual confusion and inadequate testing of the theory.
Learning Decision Trees for Named-Entity Recognition and Classification
We propose the use of decision tree induction as a solution to the problem of customising a named-entity recognition and classification (NERC) system to a specific domain. A NERC system assigns semantic tags to phrases that correspond to named entities, e.g. persons, locations and organisations. Typically, such a system makes use of two language resources: a recognition grammar and a lexicon of known names, classified by the corresponding named-entity types. NERC systems have been shown to achieve good results when the domain of application is very specific. However, the construction of the grammar and the lexicon for a new domain is a hard and time-consuming process. We propose the use of decision trees as NERC “grammars” and the construction of these trees using machine learning. In order to validate our approach, we tested C4.5 on the identification of person and organisation names involved in management succession events, using data from the sixth Message Understanding Conference. The results of the evaluation are very encouraging showing that the induced tree can outperform a grammar that was constructed manually.
3-D Integration and Through-Silicon Vias in MEMS and Microsensors
After two decades of intensive development, 3-D integration has proven invaluable for allowing integrated circuits to adhere to Moore's Law without needing to continuously shrink feature sizes. 3-D integration is also an enabling technology for hetero-integration of microelectromechanical systems (MEMS)/microsensors with different technologies, such as CMOS and optoelectronics. This 3-D hetero-integration allows for the development of highly integrated multifunctional microsystems with the small footprints, low cost, and high performance demanded by emerging applications. This paper reviews the following aspects of MEMS/microsensor-centered 3-D integration: fabrication technologies and processes, processing considerations and strategies for 3-D integration, integrated device configurations and wafer-level packaging, and applications and commercial MEMS/microsensor products using 3-D integration technologies. Of particular interest throughout this paper is the hetero-integration of the MEMS and CMOS technologies.
GPS Locator: An Application for Location Tracking and Sharing Using GPS for Java Enabled Handhelds
The use of mobile devices has become a part of our daily routine. Recently, mobile devices such as mobile phones or personal digital assistants (PDAs) have been equipped with global positioning system (GPS) receivers that allow us to obtain the device's geographic position in real time. Location Based Services (LBS) are regarded as a key feature of many future mobile applications. GPS serves well for most outdoor applications; however, its dependence on satellites makes it ineffective for indoor environments. This document details our ongoing project work in the field of Location Based Services for Java-enabled mobile devices equipped with a GPS receiver. We present a novel technique to send GPS coordinates to other mobiles through the Short Message Service (SMS). This application also enables users to obtain their current location coordinates (latitude, longitude, and altitude) and to view their locations on Google Maps. Further, the application enables users to share their location with friends through a web server using the internet connectivity of their handhelds. GPS is a satellite-based navigation system made up of a network of 24 satellites placed into orbit by the United States (US) Department of Defense (DoD). GPS was originally intended for military applications, but in the 1980s the government made the system available for civilian use. GPS can show you your exact position on the Earth in any weather conditions, anywhere in the world, 24 hours a day. There are no subscription fees or setup charges to use GPS [6].
Microscopic evolution of social networks
We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a "gap" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.
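The generative process reads almost directly as code: nodes arrive at a fixed rate, draw a lifetime, emit edges at sampled "gap" intervals, and pick each destination by closing a triangle. The sketch below uses exponential stand-ins for the lifetime and gap distributions fit in the paper, and attaches a node's first edge to a random node.

```python
# Sketch of the generative model: nodes arrive at a fixed rate, draw a
# lifetime, emit edges at sampled "gap" intervals, and choose each
# destination by triangle closing (random neighbor of a random neighbor).
# Exponential lifetimes/gaps are illustrative stand-ins for the fitted
# distributions; a node's first edge attaches to a random node.
import random

random.seed(0)
adj = {0: {1}, 1: {0}}                           # small seed graph
events = []                                      # (time, source) edge creations
t = 0.0
for node in range(2, 200):
    t += 1.0                                     # node arrival
    adj[node] = set()
    life, s = random.expovariate(0.05), t
    while True:
        s += random.expovariate(0.5)             # gap until the next edge
        if s > t + life:
            break                                # node's lifetime is over
        events.append((s, node))

for _, u in sorted(events):
    two_hop = [w for v in adj[u] for w in adj[v] if w != u and w not in adj[u]]
    v = random.choice(two_hop) if two_hop else random.choice(list(adj))
    if v != u:
        adj[u].add(v); adj[v].add(u)

print("mean degree:", sum(map(len, adj.values())) / len(adj))
```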
Ruminations of Du Bois, Davis and Drake
W. E. B. Du Bois, Allison Davis, and St. Clair Drake, along with other African American pioneers of anthropology, have produced works which prefigure many recent developments in sociocultural anthropology. These scholars' detailed research into race relations and social stratification offers broad theoretical insights to processual and agency-oriented approaches to studying social life. Their work historicizes the construction of racist ideologies and the erection of racial hierarchies and considers the dynamic relationship of structures of inequality to ideological formulations. In addition, they describe the intersection of race, class, and gender and how these overlapping forms of categorization and organization impact the daily lives of positioned subjects. These pioneers situate their detailed local studies within diaspora and global contexts as they note transnational flows of people, ideas, and capital. Du Bois, Davis and Drake combine qualitative, quantitative, archival, and team research methods to produce non-realist, decentered, polyvocal texts aimed at highlighting social injustice, discrimination, and the politics of identity.
Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis
Background Of the machine learning techniques used in predicting coronary heart disease (CHD), neural networks (NN) are popularly used to improve performance accuracy. Objective Even though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performance because an NN is trained in a “black-box” style. Method We devised an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) with two stages. First, in the feature selection stage, features are ranked according to their importance in predicting CHD risk. Second, in the feature correlation analysis stage, the correlations between feature relations and the output of each NN predictor are examined. Result Of the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had high CHD risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749 ± 0.010) was larger than that of the Framingham risk score (FRS) (0.393 ± 0.010). Conclusions The proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than the FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger area under the ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS.
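The two stages can be sketched on synthetic data: rank features by mutual information, train a small neural network on the top-ranked ones, and score it by the area under the ROC curve. Everything below (data, number of kept features, network size) is an illustrative stand-in, not the Korean cohort or the paper's exact model.

```python
# The two stages on synthetic data: rank features by mutual information,
# train a small neural network on the top-ranked ones, report ROC AUC.
# Data, number of kept features, and network size are illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(1000) > 0).astype(int)

rank = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]
X_sel = X[:, rank[:4]]                           # keep the 4 most informative features
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
print("ROC AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```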
Role of irbesartan in prevention of post-coronary artery bypass graft atrial fibrillation.
BACKGROUND AND OBJECTIVE Atrial fibrillation (AF) is a common complication of cardiothoracic surgery (CTS). Existing evidence about the potential protective role of angiotensin II type 1 receptor antagonists (angiotensin receptor blockers [ARBs]) for post-CTS AF has been limited and conflicting. In this single-blind, open-label, randomized prospective pilot study, we evaluated the potential protective role of irbesartan (an ARB) in post-coronary artery bypass graft (CABG) AF. METHODS A total of 100 consecutive patients undergoing CABG were randomly assigned to irbesartan (n = 50) versus no irbesartan (n = 50) for 5 days prior to the scheduled surgery. Data were collected for imaging studies, laboratory values, and peri-operative details. Patients were monitored post-operatively for in-hospital AF episodes. Unadjusted and adjusted logistic regression analysis was performed to assess the effect of irbesartan on the incidence of post-CABG AF. RESULTS A total of 14 patients developed AF during their post-operative hospital stay. The incidence of AF in patients who received irbesartan was 6% (n = 3) compared with 22% (n = 11) in patients who did not receive irbesartan (p = 0.021). Univariate logistic regression analysis identified irbesartan and age as statistically significant variables. An adjusted multivariate logistic model identified irbesartan as an important protective factor against development of post-CABG AF (adjusted odds ratio [OR] 0.20; 95% confidence interval [CI] 0.04, 0.94; p = 0.04). Increasing age (adjusted OR 1.09, 95% CI 1.01, 1.17; p = 0.03) was also identified as an independent risk factor for development of post-CABG AF. CONCLUSION Pretreatment with irbesartan tends to have a significant protective effect against the occurrence of AF during the post-operative period in patients undergoing CABG.
Reproducible network experiments using container-based emulation
In an ideal world, all research papers would be runnable: simply click to replicate all results, using the same setup as the authors. One approach to enable runnable network systems papers is Container-Based Emulation (CBE), where an environment of virtual hosts, switches, and links runs on a modern multicore server, using real application and kernel code with software-emulated network elements. CBE combines many of the best features of software simulators and hardware testbeds, but its performance fidelity is unproven. In this paper, we put CBE to the test, using our prototype, Mininet-HiFi, to reproduce key results from published network experiments such as DCTCP, Hedera, and router buffer sizing. We report lessons learned from a graduate networking class at Stanford, where 37 students used our platform to replicate 18 published results of their own choosing. Our experiences suggest that CBE makes research results easier to reproduce and build upon.
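For flavor, a container-based emulation experiment is just a short script: the sketch below builds a dumbbell topology with software-shaped links (the mechanism Mininet-HiFi builds on) and runs real kernel networking end to end. It assumes a host with Mininet installed and root privileges; the topology and link parameters are illustrative.

```python
# A minimal container-based emulation experiment: dumbbell topology with
# software-shaped links, real kernel networking end to end. Requires a
# host with Mininet installed (run as root); parameters are illustrative.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class DumbbellTopo(Topo):
    def build(self):
        s1, s2 = self.addSwitch("s1"), self.addSwitch("s2")
        self.addLink(s1, s2, bw=10, delay="10ms")      # shaped bottleneck link
        self.addLink(self.addHost("h1"), s1, bw=100)
        self.addLink(self.addHost("h2"), s2, bw=100)

net = Mininet(topo=DumbbellTopo(), link=TCLink)
net.start()
h1, h2 = net.get("h1", "h2")
print(h1.cmd(f"ping -c 3 {h2.IP()}"))                  # real application/kernel code
net.stop()
```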
Learning an Optimizer for Image Deconvolution
As an integral component of blind image deblurring, non-blind deconvolution removes image blur with a given blur kernel, which is essential but difficult due to the ill-posed nature of the inverse problem. The predominant approach is based on optimization subject to regularization functions that are either manually designed or learned from examples. Existing learning-based methods have shown superior restoration quality but are not practical enough due to their restricted model design. They solely focus on learning a prior and require the noise level to be known for deconvolution. We address the gap between the optimization-based and learning-based approaches by learning an optimizer. We propose a Recurrent Gradient Descent Network (RGDN) by systematically incorporating deep neural networks into a fully parameterized gradient descent scheme. A parameter-free update unit is used to generate updates from the current estimates, based on a convolutional neural network. By training on diverse examples, the Recurrent Gradient Descent Network learns an implicit image prior and a universal update rule through recursive supervision. Extensive experiments on synthetic benchmarks and challenging real-world images demonstrate that the proposed method is effective and robust enough to produce favorable results as well as practical for real-world image deblurring applications.
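The scheme the RGDN parameterizes is ordinary gradient descent for deconvolution: for a data term ||k * x - y||^2, step along K^T(Kx - y) plus a prior gradient. In the sketch below a fixed step and a hand-written smoothness prior stand in for the learned update unit; the kernel, image, and step size are illustrative.

```python
# Skeleton of the gradient-descent scheme the RGDN parameterizes: for the
# data term ||k * x - y||^2, step along K^T(K x - y) plus a prior gradient.
# A fixed step and a crude smoothness prior stand in for the learned
# CNN update unit; kernel, image, and step size are illustrative.
import numpy as np
from scipy.signal import convolve2d, correlate2d

def data_grad(x, y, k):
    r = convolve2d(x, k, mode="same") - y            # residual K x - y
    return correlate2d(r, k, mode="same")            # K^T applied to the residual

rng = np.random.default_rng(0)
k = np.ones((5, 5)) / 25.0                           # known blur kernel
x_true = rng.random((64, 64))
y = convolve2d(x_true, k, mode="same") + 0.01 * rng.standard_normal((64, 64))

x, step, lam = y.copy(), 1.0, 0.01
for _ in range(100):
    g_prior = x - convolve2d(x, np.ones((3, 3)) / 9.0, mode="same")  # smoothness prior
    x -= step * (data_grad(x, y, k) + lam * g_prior)  # the CNN would emit this update
print("data residual:", np.linalg.norm(convolve2d(x, k, mode="same") - y))
```

The RGDN replaces the fixed step and hand-written prior gradient with a trained network, which is why it needs neither a tuned regularizer nor a known noise level.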
Leveraging multiviews of trust and similarity to enhance clustering-based recommender systems
Although demonstrated to be efficient and scalable to large-scale data sets, clustering-based recommender systems suffer from relatively low accuracy and coverage. To address these issues, we develop a multiview clustering method through which users are iteratively clustered from the views of both rating patterns and social trust relationships. To accommodate users who appear in two different clusters simultaneously, we employ a support vector regression model to determine a prediction for a given item, based on user-, item- and prediction-related features. To accommodate (cold) users who cannot be clustered due to insufficient data, we propose a probabilistic method to derive a prediction from the views of both ratings and trust relationships. Experimental results on three real-world data sets demonstrate that our approach can effectively improve both the accuracy and coverage of recommendations as well as in the cold start situation, moving clustering-based recommender systems closer towards practical use.
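The fusion step can be sketched with an SVR: when a user belongs to both a rating-view and a trust-view cluster, the two cluster-level predictions are combined with user and item averages into one final rating. The feature layout and toy numbers below are illustrative, not the paper's exact feature set.

```python
# Sketch of the fusion stage: a support vector regression maps the two
# view-level cluster predictions plus user/item mean ratings to a final
# rating. Feature layout and numbers are toy values.
from sklearn.svm import SVR

# [rating-view cluster prediction, trust-view cluster prediction,
#  user mean rating, item mean rating] -> observed rating
X_train = [[3.5, 4.0, 3.8, 3.6], [2.0, 2.5, 2.4, 2.8],
           [4.5, 4.2, 4.4, 4.0], [1.5, 2.0, 1.9, 2.2]]
y_train = [3.9, 2.3, 4.4, 1.8]

svr = SVR(kernel="rbf").fit(X_train, y_train)
print("fused prediction:", svr.predict([[3.0, 3.6, 3.2, 3.4]])[0])
```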
Long-term outcomes of catheter ablation of atrial fibrillation post-cardiac valve replacement.
OBJECTIVE The long-term outcomes of catheter ablation of atrial fibrillation (AF) developing post-cardiac valve replacement (VR) remain undefined. METHODS AND RESULTS Eighty-nine post-VR patients with AF (44% longstanding persistent AF, LSP-AF) were enrolled. Cumulative success rate of circumferential pulmonary vein ablation (CPVA for paroxysmal AF) and bidirectional block of lines and disappearance of complex fractionated atrial electrograms (CFAEs for persistent and LSP-AF) as index and repeat procedural endpoints reached 57% (mean, 1.3 procedures) during the first year, and dropped to 42% at a median follow-up of 40 months (range, 24-70 months) for multiple procedures (mean, 1.6±0.9 [1-5]); the incidence of procedural complications was similar to that of conventional procedures. In multivariate analysis, larger right atrium (RA; OR, 9.40 [2.64-33.36]; P=0.001) and rheumatic valvular disease etiology (OR, 5.49 [95% CI, 1.26-23.96]; P=0.023) were significant independent predictors of recurrent atrial tachyarrhythmia (ATa); in contrast, long-term freedom from ATa was comparable between single and double valve replacement groups (42.1% vs. 43.7%, P=0.880), or mechanical and bioprosthetic valve groups (41.7% vs. 50.0%, P=0.620). CONCLUSION In this single-center prospective study, treatment of post-VR AF with commonly used ablation strategies including CPVA and linear and CFAE ablation had limited long-term success, with ATa recurrence risk appearing higher in the setting of RA enlargement and rheumatic valvular disease and unrelated to valve characteristics.
Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary
Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose LatPlan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), and a pair of images representing the initial and the goal states (planning inputs), LatPlan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. The contribution of this paper is twofold: (1) State Autoencoder, which finds a propositional state representation of the environment using a Variational Autoencoder. It generates a discrete latent vector from the images, based on which a PDDL model can be constructed and then solved by an off-the-shelf planner. (2) Action Autoencoder / Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions/effects), and provides a successor function for the implicit graph search. We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi, and LightsOut. Note: This is an extended manuscript of the paper accepted at AAAI-18. The content of the AAAI-18 submission itself is significantly extended from what was published in the arXiv, KEPS-17, NeSy-17, and Cognitum-17 workshop versions; over half of the text describing (2) is new. Additionally, this manuscript contains the contents of the supplemental material of the AAAI-18 submission; these implementation/experimental details are moved to the Appendix. Note to ML / deep learning researchers: This article combines machine learning systems with classical, logic-based symbolic systems. Some readers may not be as familiar with NNs and related fields as you are, so we include a very basic description of the architectures and the training methods.
Siamese Network for Underwater Multiple Object Tracking
For underwater videos, the performance of object tracking is greatly affected by illumination changes, background disturbances, and occlusion. Hence, there is a need for a robust function that computes image similarity to accurately track a moving object. In this work, a hybrid model that incorporates a Kalman filter, a Siamese neural network, and a miniature neural network has been developed for object tracking. It was observed that using the Siamese network to compute image similarity significantly improved the robustness of the tracker. Although the model was developed for underwater videos, it was found to perform well for both underwater and human surveillance videos. A metric has been defined for analyzing detections-to-tracks mapping accuracy. Tracking results have been analyzed using the Multiple Object Tracking Accuracy (MOTA) and Multiple Object Tracking Precision (MOTP) metrics.
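The detections-to-tracks mapping at the heart of such a tracker can be sketched as follows: appearance similarity from Siamese embeddings feeds an optimal assignment. Here embed() is a placeholder for the trained Siamese branch, and the Kalman-filter motion gating of the full tracker is omitted.

```python
# Detections-to-tracks mapping: cosine similarity between (stand-in)
# Siamese embeddings, solved as an optimal assignment. embed() is a
# placeholder for the trained Siamese branch; the full tracker also
# gates candidates with a Kalman-filter motion prediction.
import numpy as np
from scipy.optimize import linear_sum_assignment

def embed(patch):                                # placeholder Siamese branch
    v = patch.reshape(-1).astype(float)
    return v / (np.linalg.norm(v) + 1e-9)

rng = np.random.default_rng(0)
tracks = [rng.random((8, 8)) for _ in range(3)]          # last-seen appearance patches
detections = [tracks[2] + 0.05 * rng.random((8, 8)),     # shuffled, noisy re-sightings
              tracks[0] + 0.05 * rng.random((8, 8)),
              tracks[1] + 0.05 * rng.random((8, 8))]

sim = np.array([[embed(t) @ embed(d) for d in detections] for t in tracks])
rows, cols = linear_sum_assignment(-sim)                 # maximize total similarity
for r, c in zip(rows, cols):
    print(f"track {r} -> detection {c} (similarity {sim[r, c]:.3f})")
```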
Model-based whitebox fuzzing for program binaries
Many real-world programs take highly structured and very complex inputs. The automated testing of such programs is non-trivial. If the test input does not adhere to a specific file format, the program returns a parser error. For symbolic execution-based whitebox fuzzing, the corresponding error-handling code becomes a significant time sink: too much time is spent in the parser exploring too many paths leading to trivial parser errors. The time is better spent exploring the functional part of the program, where failure on valid input exposes deep and real bugs. In this paper, we suggest leveraging information about the file format and the data chunks of existing, valid files to swiftly carry the exploration beyond the parser code. We call our approach Model-based Whitebox Fuzzing (MoWF) because the file-format input model of blackbox fuzzers can be exploited as a constraint on the vast input space to rule out most invalid inputs during path exploration in symbolic execution. We evaluate on 13 vulnerabilities in 8 large program binaries with 6 separate file formats and find that MoWF exposes all vulnerabilities, while traditional whitebox fuzzing and model-based blackbox fuzzing each expose less than half. Our experiments also demonstrate that MoWF exposes 70% of the vulnerabilities without any seed inputs.
VOLAP: A Scalable Distributed Real-Time OLAP System for High-Velocity Data
This paper presents VelocityOLAP (VOLAP), a distributed real-time OLAP system for high-velocity data. VOLAP makes use of dimension hierarchies, is highly scalable, exploits both multi-core and multi-processor parallelism, and can guarantee serializable execution of insert and query operations. In contrast to other high performance OLAP systems such as SAP HANA or IBM Netezza that rely on vertical scaling or special purpose hardware, VOLAP supports cost-efficient horizontal scaling on commodity hardware or modest cloud instances. Experiments on 20 Amazon EC2 nodes with TPC-DS data show that VOLAP is capable of bulk ingesting data at over 600 thousand items per second, and processing streams of interspersed insertions and aggregate queries at a rate of approximately 50 thousand insertions and 20 thousand aggregate queries per second with a database of 1 billion items. VOLAP is designed to support applications that perform large aggregate queries, and provides similar high performance for aggregations ranging from a few items to nearly the entire database.
Reward association affects neuronal responses to visual stimuli in macaque TE and perirhinal cortices.
To study the roles of the perirhinal cortex (PRh) and temporal cortex (area TE) in stimulus-reward associations, we recorded spike activities of cells from PRh and TE in two monkeys performing a visually cued go/no-go task. Each visual cue indicated the required motor action as well as the availability of reward after correct completion of the trial. Eighty object images were divided into four groups, each of which was assigned to one of four motor-reward conditions. The monkeys either had to release a lever (go response) or keep pressing it (no-go response), depending on the cue. Each of the go and no-go trials could be either a rewarded or unrewarded trial. A liquid reward was provided after correct responses in rewarded trials, whereas correct responses were acknowledged only by audiovisual feedback in unrewarded trials. Several measures of the monkeys' behavior indicated that the monkeys correctly anticipated the reward availability in each trial. The dependence of neuronal activity on the reward condition was examined by comparing mean discharges to each of the 40 rewarded stimuli with those to each of the 40 unrewarded stimuli. Many cells in both areas showed significant reward dependence in their responses to the visual cues, and this was not likely attributable to differences in behavior across conditions because the variations in neuronal activity were not correlated with trial-by-trial variations in latency of go responses or anticipatory sucking strength. These results suggest the involvement of PRh and TE in associating visual stimuli with reward outcomes.
A new collaborative filtering algorithm using K-means clustering and neighbors' voting
Collaborative filtering is the most successful algorithm in the recommender systems field. A recommender system is an intelligent system that helps users come across interesting items, using data mining and information filtering techniques. Collaborative filtering creates suggestions for users based on their neighbors' preferences, but it suffers from poor accuracy and scalability. This paper treats the m users (m is the number of users) as points in an n-dimensional space (n is the number of items) and presents an approach based on user clustering to produce recommendations for the active user by a new method. It uses the k-means clustering algorithm to categorize users based on their interests, and then uses a new method, called the voting algorithm, to develop a recommendation. We evaluate the traditional collaborative filtering algorithm and the new one to compare them. Our results show that the proposed algorithm is more accurate than the traditional one, and also less time-consuming.
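A compact sketch of the overall recipe, assuming one plausible reading of the paper's voting rule (count the high ratings that a cluster's members gave each item); the ratings matrix is synthetic and the k-means implementation is deliberately minimal.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(100, 40)).astype(float)  # user-item ratings, 0 = unrated

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: users are points in item space."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(R, k=5)

def recommend(user, top_n=3):
    """Neighbors' voting: items the user's cluster rated highly, not yet rated by the user."""
    peers = R[labels == labels[user]]
    votes = (peers >= 4).sum(0).astype(float)  # count of high ratings per item
    votes[R[user] > 0] = -1                    # mask items the user already rated
    return np.argsort(votes)[::-1][:top_n]

print(recommend(user=0))
```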
Visualizing Digital Forensic Datasets: A Proof of Concept.
Digital forensic visualization is an understudied area despite its potential to achieve significant improvements in the efficiency of an investigation, criminal or civil. In this study, a three-stage forensic data storage and visualization life cycle is presented. The first stage is the decoding of data, which involves preparing both structured and unstructured data for storage. In the storage stage, data are stored within our proposed database schema, designed to ensure data integrity and speed of storage and retrieval. The final stage is the visualization of stored data in a manner that facilitates user interaction. These functionalities are implemented in a proof of concept that demonstrates the utility of the proposed life cycle for the storage and visualization of digital forensic data.
ON THE SOCIAL PSYCHOLOGY OF THE PSYCHOLOGICAL EXPERIMENT: WITH PARTICULAR REFERENCE TO DEMAND CHARACTERISTICS AND THEIR IMPLICATIONS
It is to the highest degree probable that the subject['s] ... general attitude of mind is that of ready complacency and cheerful willingness to assist the investigator in every possible way by reporting to him those very things which he is most eager to find, and that the very questions of the experimenter ... suggest the shade of reply expected.... Indeed, it seems too often as if the subject were now regarded as a stupid automaton. —A. H. PIERCE, 1908
End-to-End Learning of Deterministic Decision Trees
Conventional decision trees have a number of favorable properties, including interpretability, a small computational footprint and the ability to learn from little training data. However, they lack a key quality that has helped fuel the deep learning revolution: that of being end-to-end trainable, and of learning from scratch those features that best allow solving a given supervised learning problem. Recent work (Kontschieder 2015) has addressed this deficit, but at the cost of losing a main attractive trait of decision trees: the fact that each sample is routed along a small subset of tree nodes only. We here propose a model and Expectation-Maximization training scheme for decision trees that are fully probabilistic at train time, but, after a deterministic annealing process, become deterministic at test time. We also analyze the learned oblique split parameters on image datasets and show that neural networks can be trained at each split node. In summary, we present the first end-to-end learning scheme for deterministic decision trees and present results on par with or superior to published standard oblique decision tree algorithms.
Flexible affix classification for stemming Indonesian Language
Stemming is the process of deriving the base word by removing the affixes of a word. Stemming is tightly related to the base word, or lemma, and its sub-lemmas. The lemmas and sub-lemmas of the Indonesian language have grown and been absorbed from foreign languages and traditional Indonesian languages. Our approach provides an easy way of stemming the Indonesian language through a flexible affix classification, so additional affixes can be handled easily. We experimented with 1,704 text documents containing 255,182 tokens, of which 3,648 words were stemmed. In this experiment, we compared the performance of our approach with that of the confix-stripping approach. The results show that our approach covers the failures of the confix-stripping approach in stemming reduplicated words.
A Novelty Detection Approach to Classification
Novelty Detection techniques are concept-learning methods that proceed by recognizing positive instances of a concept rather than differentiating between its positive and negative instances. Novelty Detection approaches consequently require very few, if any, negative training instances. This paper presents a particular Novelty Detection approach to classification that uses a Redundancy Compression and Non-Redundancy Differentiation technique based on the (Gluck & Myers 1993) model of the hippocampus, a part of the brain critically involved in learning and memory. In particular, this approach consists of training an autoencoder to reconstruct positive input instances at the output layer and then using this autoencoder to recognize novel instances. Classification is possible, after training, because positive instances are expected to be reconstructed accurately while negative instances are not. The purpose of this paper is to compare Hippo, the system that implements this technique, to C4.5 and feedforward neural network classification on several applications.
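The classification rule is easy to state in code: train a reconstructor on positive instances only, then flag anything whose reconstruction error exceeds a threshold. The sketch below substitutes a closed-form linear autoencoder (PCA) for the paper's trained neural autoencoder, so it demonstrates the decision rule rather than the Hippo system itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Positive instances live near a low-dimensional subspace; novel ones do not.
basis = rng.normal(size=(3, 10))
positives = rng.normal(size=(200, 3)) @ basis + 0.05 * rng.normal(size=(200, 10))

# Linear autoencoder fit in closed form via SVD; a neural autoencoder
# trained by backprop plays this role in the paper.
mean = positives.mean(0)
_, _, Vt = np.linalg.svd(positives - mean, full_matrices=False)
V = Vt[:3].T  # encoder/decoder weights (10 -> 3 -> 10)

def reconstruction_error(x):
    z = (x - mean) @ V                           # encode
    return np.linalg.norm((z @ V.T + mean) - x)  # decode and compare

threshold = np.quantile([reconstruction_error(p) for p in positives], 0.95)
novel = rng.normal(size=10)
print(reconstruction_error(novel) > threshold)   # likely True: flagged as novel
```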
Person re-identification with content and context re-ranking
This paper proposes a novel and efficient re-ranking technique to solve the person re-identification problem in surveillance applications. Previous methods treat person re-identification as a special object retrieval problem, and compute the retrieval result purely based on a unidirectional matching between the probe and all gallery images. However, the correct match may not be included in the top-k ranking result due to appearance changes caused by variations in illumination, pose, viewpoint and occlusion. To obtain more accurate re-identification results, we propose to reversely query every gallery person image in a new gallery composed of the original probe person image and the other gallery person images, and to revise the initial query result according to the bidirectional ranking lists. The philosophy behind our method is that images of the same person should not only have similar visual content (content similarity), but also possess similar k-nearest neighbors (context similarity). Furthermore, the proposed bidirectional re-ranking method can be divided into offline and online parts, where the majority of the computation load is carried by the offline part and the online computation complexity is only proportional to the size of the gallery data set, which makes it especially suited to real-time video investigation tasks. Extensive experiments conducted on a series of standard data sets have validated the effectiveness and efficiency of our proposed method.
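A toy sketch of the bidirectional idea, with random vectors standing in for image features: the forward (content) distance is fused with the probe's rank when each gallery image is used as a reverse query (context). The fusion weight and the plain rank-sum rule are simplifications for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
probe = rng.normal(size=32)
gallery = rng.normal(size=(50, 32))

def rank_of(query, candidates, target):
    """Position of `target` in the distance-sorted list of candidates."""
    d = np.linalg.norm(candidates - query, axis=1)
    return int(np.argsort(d).tolist().index(target))

# Forward (content) ranking: distance from the probe to each gallery image.
content = np.linalg.norm(gallery - probe, axis=1)

# Reverse (context) ranking: query each gallery image against a pool that
# includes the probe, and record where the probe lands in that list.
pool = np.vstack([probe, gallery])
context = np.array([rank_of(g, pool, target=0) for g in gallery])

final = np.argsort(content + 0.5 * context)  # fuse; the weight is a free parameter
print(final[:5])                             # revised top-5 gallery indices
```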
Coupled semi-supervised learning for information extraction
We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.
Point-to-point connectivity between neuromorphic chips using address events
I discuss connectivity between neuromorphic chips, which use the timing of fixed-height, fixed-width pulses to encode information. Address-events—log2(N)-bit packets that uniquely identify one of N neurons—are used to transmit these pulses in real-time on a random-access, time-multiplexed communication channel. Activity is assumed to consist of neuronal ensembles—spikes clustered in space and in time. I quantify tradeoffs faced in allocating bandwidth, granting access, and queuing, as well as throughput requirements, and conclude that an arbitered channel design is the best choice. I implement the arbitered channel with a formal design methodology for asynchronous digital VLSI CMOS systems, after introducing the reader to this top-down synthesis technique. Following the evolution of three generations of designs, I show how the overhead of arbitrating, and encoding and decoding, can be reduced in area (from N to √N) by organizing neurons into rows and columns, and reduced in time (from log2(N) to 2) by exploiting locality in the arbiter tree and in the row–column architecture, and clustered activity. Throughput is boosted by pipelining and by reading spikes in parallel. Simple techniques that reduce crosstalk in these mixed analog–digital systems are described. Keywords: Spiking Neurons, Interchip Communication, Asynchronous Logic Synthesis, Virtual Wiring. I. Connectivity in Neuromorphic Systems. We are far from matching either the efficacy of neural computation or the efficiency of neural coding. Computers use a million times more energy per operation than brains do [1]. Video cameras use a thousand times more bandwidth per bit of information than retinas do (see Section II-A). We cannot replace damaged parts of the nervous system because of these shortcomings. To match nature's computational performance and communication efficiency, we must co-optimize information processing and energy consumption. A small—but growing—community of engineers is attempting to build autonomous sensorimotor systems that match the efficacy and efficiency of their biological counterparts by recreating the function and structure of neural systems in silicon. Taking a structure-to-function approach, these neuromorphic systems go beyond bioinspiration [2], copying biological organization as well as function [3], [4], [5]. Neuromorphic engineers are using garden-variety VLSI CMOS technology to achieve their goal [6]. This effort is facilitated by similarities between VLSI hardware and neural wetware. Both technologies: • Provide millions of inexpensive, poorly-matched devices. • Operate in the information-maximizing low-signal-to-noise/high-bandwidth regime. And challenged by these fundamental differences: • Fan-ins and fan-outs are about ten in VLSI circuits versus several thousand in neural circuits. • Most digital VLSI circuits are synchronized by an external clock, whereas neurons use the degree of coincidence in their firing times to encode information. Neuromorphic engineers have adopted time-division multiplexing to achieve massive connectivity, inspired by its success in telecommunications [7] and computer networks [8]. The number of layers and pins offered by commercial microfabrication and chip-packaging technologies is severely limited.
Multiplexing leverages the 5-decade difference in bandwidth between a neuron (hundreds of Hz) and a digital bus (tens of megahertz), enabling us to replace thousands of dedicated point-to-point connections with a handful of high-speed metal wires and thousands of switches (transistors). It pays off because transistors occupy less area than wires, and are becoming relatively more compact in deep submicron processes. In adapting existing networking solutions, neuromorphic architects are challenged by huge differences between the requirements of computer networks and those of neuromorphic systems. Whereas computer networks connect thousands of computers at the building- or campus-level, neuromorphic systems need to connect millions of neurons at the chip- or circuit-board level. Hence, they must improve the efficiency of traditional computer communication architectures, and protocols, by several orders of magnitude. Mahowald and Sivilotti proposed using an address-event representation to transmit pulses, or spikes, from an array of neurons on one chip to the corresponding location in an array on a second chip [9], [4], [10]. In their scheme, depicted in Figure 1, an address-encoder generates a unique binary address for each neuron whenever it spikes. A bus transmits these addresses to the receiving chip, where an address decoder selects the corresponding location. Eight years after Mahowald and Sivilotti proposed it, the address-event representation (AER) has emerged as the leading candidate for communication between neuromorphic chips. Indeed, at the NSF Neuromorphic Engineering Workshop held in June/July 1997 at Telluride CO, the AER Interchip Communication Workgroup was in the top two—second only to Mindless Robots in popularity [11]! The performance of the original point-to-point protocol has been greatly improved. Efficient hierarchical arbitration circuits have been developed to handle one- and two-dimensional arrays [12], [13], [14]. Sender and receiver interfaces have been combined on a single chip to build a transceiver [15]. Support for multiple senders and receivers [16], [15], [17], one-dimensional nearest-neighbor–
Prime Object Proposals with Randomized Prim's Algorithm
Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's superpixels, with weights modelling the probability that neighbouring superpixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding boxes of those partial trees. Our method has several benefits compared to the state of the art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7 s. With proposals bound to superpixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios.
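A small sketch of the proposal sampler on a toy 4-superpixel graph; the edge weights imitate "same object" probabilities, and the stopping probability is an arbitrary choice rather than the paper's. Bounding boxes of the returned node sets would be the proposals.

```python
import random

def randomized_prim(adj, weights, rng):
    """Grow one random partial spanning tree: high-weight edges are chosen
    more often (roulette wheel), and growth stops early at random."""
    nodes = list(adj)
    tree = {rng.choice(nodes)}
    frontier = [(u, v) for u in tree for v in adj[u]]
    while frontier and rng.random() < 0.9:  # random early stopping
        total = sum(weights[e] for e in frontier)
        r, acc = rng.uniform(0, total), 0.0
        for e in frontier:                  # weight-proportional edge choice
            acc += weights[e]
            if acc >= r:
                break
        tree.add(e[1])
        frontier = [(u, v) for u in tree for v in adj[u] if v not in tree]
    return tree

# Toy superpixel adjacency graph with symmetric "same object" weights.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
w = {(0, 1): .9, (1, 0): .9, (0, 2): .2, (2, 0): .2,
     (1, 3): .8, (3, 1): .8, (2, 3): .3, (3, 2): .3}
rng = random.Random(0)
print([sorted(randomized_prim(adj, w, rng)) for _ in range(5)])
```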
Robust Tracking of Position and Velocity With Kalman Snakes
A new Kalman-filter-based active contour model is proposed for tracking of nonrigid objects in a combined spatio-velocity space. The model employs measurements of a gradient-based image potential and of the optical flow along the contour as system measurements. In order to improve robustness to image clutter and to occlusions, an optical-flow-based detection mechanism is proposed. The method detects and rejects spurious measurements which are not consistent with the previous estimation of image motion.
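For reference, a minimal constant-velocity Kalman filter tracking position and velocity from noisy position measurements; the paper's snake additionally feeds image-potential and optical-flow measurements into this same predict/update cycle, which this sketch omits.

```python
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]], float)  # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([0.0, 0.0])                # state: [position, velocity]
P = np.eye(2)

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.0
for _ in range(20):
    true_pos += true_vel * dt
    z = true_pos + 0.5 * rng.normal()   # noisy position measurement
    x, P = F @ x, F @ P @ F.T + Q       # predict
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y                       # update state
    P = (np.eye(2) - K @ H) @ P         # update covariance
print(f"estimated position={x[0]:.2f}, velocity={x[1]:.2f}")  # velocity ~ 1
```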
Miniaturized 122 GHz ISM band FMCW radar with micrometer accuracy
In this paper, a miniaturized 122 GHz ISM band FMCW radar is used to achieve micrometer accuracy. The radar consists of a SiGe single-chip radar sensor and LCP off-chip antennas. The antennas are integrated in a QFN package. To increase the gain of the radar, an additional lens is used. A combined frequency and phase evaluation algorithm provides micrometer accuracy. The influence of the lens phase center on the beat frequency phase, and hence on the overall accuracy, is shown. Furthermore, accuracy limitations of the radar system over larger measurement distances are investigated. Accuracies of 200 μm and 2 μm are achieved over distances of 1.9 m and 5 mm, respectively.
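The combined evaluation can be illustrated with the textbook FMCW relations: a coarse range from the beat frequency and a fine range from the beat-signal phase at the carrier, with the coarse estimate resolving the half-wavelength phase ambiguity. The sweep bandwidth and duration below are assumed values, not the paper's, and the simulation is noiseless.

```python
import numpy as np

c = 3e8
f0 = 122e9            # ISM-band carrier
B = 1e9               # assumed sweep bandwidth (not from the paper)
T = 1e-3              # assumed sweep duration (not from the paper)
R_true = 1.234567     # target distance in metres

# Coarse range from the beat frequency of the de-chirped signal.
f_beat = 2 * R_true * B / (c * T)
R_coarse = c * f_beat * T / (2 * B)

# Fine range from the beat-signal phase: two-way path at the carrier
# wavelength, unambiguous only within half a wavelength (~1.2 mm here).
lam = c / f0
phase = (4 * np.pi * R_true / lam) % (2 * np.pi)
R_fine = phase * lam / (4 * np.pi)
n = np.round((R_coarse - R_fine) / (lam / 2))  # ambiguity resolved by coarse estimate
R_combined = n * lam / 2 + R_fine
print(f"coarse={R_coarse:.6f} m, combined={R_combined:.9f} m")
```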
Gains from Others ’ Losses : Technology Trajectories and the Global Division of Firms
After the burst of the telecommunications bubble in March 2000, the majority of U.S. optoelectronic component firms moved manufacturing offshore. This research explores (1) whether, due to different offshore production economics, firms who move manufacturing offshore stop or slow U.S.-based R&D activities in the emerging technology necessary to access larger markets and (2) whether the inventors originally within these offshoring firms leave, and continue to innovate in the emerging technology at different institutions. We focus on the 28 leading small- or medium-sized U.S. firms that manufacture optoelectronic components for telecommunications (18 offshore, 10 not) and the inventors who patent at these firms. We triangulate hand-classified USPTO patents, firm SEC filings, inventor CVs, and structured interview data we collect from the firms. Our results show that there is a relationship between firms' type and extent of offshoring facilities (fabrication, assembly) and their innovation directions. In particular, we find that while offshoring is associated with a statistically significant decrease in innovation in the emerging technology, it can be associated with an increase in all other types of patenting. While an important minority of inventors of the emerging technology who worked at offshoring firms depart to a single onshore firm in the same industry (which subsequently dominates this space), the majority of inventors depart to firms outside our study scope and stop work in the emerging technology.
Optimal preventive maintenance scheduling in semiconductor manufacturing
Preventive maintenance (PM) scheduling is a very challenging task in semiconductor manufacturing due to the complexity of highly integrated fab tools and systems, the interdependence between PM tasks, and the balancing of work-in-process (WIP) with demand/throughput requirements. In this paper, we propose a two-level hierarchical modeling framework. At the higher level is a model for long-term planning, and at the lower level is a model for short-term PM scheduling. Solving the lower level problem is the focus of this paper. We develop mixed-integer programming (MIP) models for scheduling all due PM tasks for a group of tools, over a planning horizon. Interdependence among different PM tasks, production planning data such as projected WIP levels, manpower constraints, and associated PM time windows and costs, are incorporated in the model. Results of a simulation study comparing the performance of the model-based PM schedule with that of a baseline reference schedule are also presented.
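To make the ingredients concrete, here is a deliberately tiny MIP of the same flavor, with PM time windows and a per-shift manpower cap, written with the PuLP modeling library; all task data are invented, and the paper's actual formulation (tool groups, WIP coupling, task interdependence) is much richer.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

# Toy instance: 3 PM tasks, 5 shifts; all names and numbers are illustrative.
tasks, shifts = range(3), range(5)
cost = {(i, t): 10 + 2 * t + 5 * i for i in tasks for t in shifts}  # later = costlier
window = {0: (0, 2), 1: (1, 4), 2: (0, 4)}  # allowed (earliest, latest) shift per task
labor = {i: 4 for i in tasks}               # technician-hours required per task
capacity = 6                                # technician-hours available per shift

x = {(i, t): LpVariable(f"x_{i}_{t}", cat=LpBinary) for i in tasks for t in shifts}
prob = LpProblem("pm_schedule", LpMinimize)
prob += lpSum(cost[i, t] * x[i, t] for i in tasks for t in shifts)
for i in tasks:  # each PM task done exactly once, inside its time window
    lo, hi = window[i]
    prob += lpSum(x[i, t] for t in shifts if lo <= t <= hi) == 1
    prob += lpSum(x[i, t] for t in shifts if t < lo or t > hi) == 0
for t in shifts:  # manpower limit per shift
    prob += lpSum(labor[i] * x[i, t] for i in tasks) <= capacity
prob.solve(PULP_CBC_CMD(msg=False))
print({i: t for i in tasks for t in shifts if x[i, t].value() == 1})
```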
Learning Multiviewpoint Context-Aware Representation for RGB-D Scene Classification
Effective visual representation plays an important role in scene classification systems. While many existing methods focus on generic descriptors extracted from the RGB color channels, we argue for the importance of depth context, since scenes are composed with spatial variability and depth is an essential component in understanding the geometry. In this letter, we present a novel depth representation for RGB-D scene classification based on a specifically designed convolutional neural network (CNN). In contrast to previous deep models that transfer from pretrained RGB CNN models, we train the model using multiviewpoint depth-image augmentation to overcome the data-scarcity problem. The proposed CNN framework contains dilated convolutions to expand the receptive field and a subsequent spatial pooling to aggregate multiscale contextual information. The combination of the contextual design and multiviewpoint depth images is important for obtaining a more compact representation, compared to directly using original depth images or off-the-shelf networks. Through extensive experiments on the SUN RGB-D dataset, we demonstrate that the representation outperforms recent state-of-the-art methods, and that combining it with standard CNN-based RGB features leads to further improvements.
The ERP response to the amount of information conveyed by words in sentences
Reading times on words in a sentence depend on the amount of information the words convey, which can be estimated by probabilistic language models. We investigate whether event-related potentials (ERPs), too, are predicted by information measures. Three types of language models estimated four different information measures on each word of a sample of English sentences. Six different ERP deflections were extracted from the EEG signal of participants reading the same sentences. A comparison between the information measures and ERPs revealed a reliable correlation between N400 amplitude and word surprisal. Language models that make no use of syntactic structure fitted the data better than did a phrase-structure grammar, which did not account for unique variance in N400 amplitude. These findings suggest that different information measures quantify cognitively different processes and that readers do not make use of a sentence's hierarchical structure for generating expectations about the upcoming word.
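Word surprisal, the measure that correlated with N400 amplitude, is simply the negative log probability of a word given its context. A toy bigram version is sketched below; the study's actual models (n-gram, RNN, and phrase-structure grammars) are trained on far larger corpora.

```python
import math
from collections import Counter

# Tiny bigram model over a toy corpus.
corpus = "the dog chased the cat and the cat chased the dog".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def surprisal(prev, word):
    """-log2 P(word | prev) with add-one smoothing; higher = more unexpected."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

print(surprisal("the", "cat"))  # expected continuation: low surprisal
print(surprisal("the", "and"))  # unexpected continuation: high surprisal
```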
Fingerprint classification using fast Fourier transform and nonlinear discriminant analysis
In this paper, we present a new approach for fingerprint classification based on the Discrete Fourier Transform (DFT) and nonlinear discriminant analysis. Utilizing the Discrete Fourier Transform and directional filters, a reliable and efficient directional image is constructed from each fingerprint image, and then nonlinear discriminant analysis is applied to the constructed directional images, reducing the dimension dramatically and extracting the discriminant features. The proposed method explores the capability of DFT and directional filtering in dealing with low-quality images and the effectiveness of the nonlinear feature extraction method in fingerprint classification. Experimental results demonstrate competitive performance compared with other published results.
A Simulation Framework for Sensor Networks in J-Sim
Sensor networks have gained considerable importance and attention in the past few years. Hence, an inevitable need for developing simulation frameworks for sensor networks in existing network simulators arises. In this paper, we describe our work in incorporating wireless sensor networks simulation in J-Sim. We have built a simulation framework for sensor networks that builds upon the autonomous component architecture (ACA) and the extensible internetworking framework (INET) of J-Sim. The paper shows how each layer in the protocol stack of a sensor node can be implemented as a component and how ports and contracts enable different layers (components) to interact with each other in the initiator-reactor mechanism that is a fundamental concept of J-Sim.
Microstructure and Dielectric Properties of Bi2O3-Added (Ba0.86Ca0.14)(Ti0.85Zr0.12Sn0.03)O3 Ceramics
In this work, in order to develop composition ceramics for a capacitor with excellent dielectric properties, (Ba0.86Ca0.14)(Ti0.85Zr0.12Sn0.03)O3 (abbreviated as BCTZ) ceramics were fabricated with varying amounts of Bi2O3. All specimens showed a typical perovskite structure. As the Bi2O3 addition was increased, a secondary phase was found. When x = 0.006, the maximum value of εr = 6043 was observed at 0 °C. At 0.003 mol Bi2O3, the best dielectric and TCC properties were obtained; namely, the dielectric constant (εr), the TCC over the range −20 to 80 °C, and the Curie temperature were 5015, −2 to +48%, and 40 °C, respectively.
Spray Cooling of High Aspect Ratio Open Microchannels
Direct spraying of dielectric liquids has been shown to be an effective method of cooling high-power electronics. Recent studies have illustrated that even higher heat transfer can be obtained by adding extended structures, particularly straight fins, to the heated surface. In the current work, spray cooling of high-aspect-ratio open microchannels was explored, which substantially increases the total surface area, allowing more residence time for the incoming liquid to be heated by the wall. Five such heat sinks were EDM wire machined and their thermal performance was investigated. These 1.41 × 1.41 cm² heat sinks featured a channel width of 360 μm; a fin width of 500 μm; and fin lengths of 0.25 mm, 0.50 mm, 1.0 mm, 3.0 mm, and 5.0 mm. The five enhanced surfaces and a flat surface with the same projected area were sprayed with a full-cone nozzle using PF-5060 at 30 °C and nozzle pressure differences from 1.36-4.08 atm (20-60 psig). In all cases, the enhanced surfaces improved thermal performance compared to the flat surface. Longer fins were found to outperform shorter ones in the single-phase regime. Adding fins also resulted in two-phase effects (and higher heat transfer) at lower wall temperatures than the flat surface. The two-phase regime appeared to be marked by a balance between added area, changing flow flux, channeling, and added conduction resistance. Spray efficiency calculations indicated that a much larger percentage of the liquid sprayed onto the enhanced surface evaporated than with the flat surface. Fin lengths between 1 and 3 mm appeared to be optimum for heat fluxes as high as 124 W/cm² and the range of conditions studied.
Short- versus long-term effects of different dihydropyridines on sympathetic and baroreflex function in hypertension.
Antihypertensive treatment with dihydropyridines may be accompanied by sympathetic activation. Data on whether this is common to all compounds and similar in the various phases of treatment are not univocal, however. In 28 untreated essential hypertensives (age, 56.4±1.8 years; mean±SEM) finger blood pressure (BP, Finapres), heart rate (HR, ECG), plasma norepinephrine (NE, high-performance liquid chromatography), and muscle sympathetic nerve traffic (MSNA, microneurography) were measured at rest and during baroreceptor manipulation (vasoactive drugs) in the placebo run-in period and after randomization to double-blind acute and chronic (8 weeks) felodipine (10 mg/d, n=14) or lercanidipine (10 mg/d, n=14). Acute administration of both drugs induced pronounced BP reductions and marked increases in HR, NE, and MSNA. After 8 weeks of treatment, BP reductions were similar to those observed after acute administration, whereas HR, NE, and MSNA responses were markedly attenuated (-7%, -32%, and -14%, respectively; P<0.05). There was a small residual increase in sympathetic activity in the felodipine group, whereas in the lercanidipine group, all adrenergic markers returned to baseline values. Baroreflex control of HR and MSNA was markedly impaired (-42% and -48%, respectively) after acute drug administration, with a recovery and complete resetting during chronic treatment. Thus, the sympathoexcitation induced by 2 different dihydropyridines is largely limited to the acute administration. The 2 drugs have, nevertheless, a different chronic sympathetic effect, indicating that dihydropyridines do not homogeneously affect this function. The acute sympathoexcitation, but not the small between-drugs differential chronic adrenergic effect, is accounted for by baroreflex impairment.
Why do people seek anonymity on the internet?: informing policy and design
In this research we set out to discover why and how people seek anonymity in their online interactions. Our goal is to inform policy and the design of future Internet architecture and applications. We interviewed 44 people from America, Asia, Europe, and Africa who had sought anonymity and asked them about their experiences. A key finding of our research is the very large variation in interviewees' past experiences and life situations leading them to seek anonymity, and how they tried to achieve it. Our results suggest implications for the design of online communities, challenges for policy, and ways to improve anonymity tools and educate users about the different routes and threats to anonymity on the Internet.
Video-games used in a group setting is feasible and effective to improve indicators of physical activity in individuals with chronic stroke: a randomized controlled trial.
OBJECTIVES To investigate the feasibility of using video-games in a group setting and to compare the effectiveness of video-games as a group intervention to a traditional group intervention for improving physical activity in individuals with chronic stroke. DESIGN A single-blind randomized controlled trial with evaluations pre and post a 3-month intervention, and at 3-month follow-up. Compliance (session attendance), satisfaction and adverse effects were feasibility measures. Grip strength and gait speed were measures of physical activity. Hip accelerometers quantified steps/day and the Action Research Arm Test assessed the functional ability of the upper extremity. RESULTS Forty-seven community-dwelling individuals with chronic stroke (29-78 years) were randomly allocated to receive video-game (N=24) or traditional therapy (N=23) in a group setting. There was high treatment compliance for both interventions (video-games: 78%; traditional therapy: 66%), but satisfaction was rated higher for the video-games (93%) than the traditional therapy (71%) (χ(2)=4.98, P=0.026). Adverse effects were not reported in either group. Significant improvements were demonstrated in both groups for gait speed (F=3.9, P=0.02), grip strength of the weaker (F=6.67, P=0.002) and stronger hands (F=7.5, P=0.001). Daily steps and functional ability of the weaker hand did not increase in either group. CONCLUSIONS Using video-games in a small group setting is feasible, safe and satisfying. Video-games improve indicators of physical activity of individuals with chronic stroke.
A recommendation mechanism for contextualized mobile advertising
Mobile advertising complements Internet and interactive television advertising and makes it possible for advertisers to create tailor-made campaigns targeting users according to where they are, their needs of the moment and the devices they are using (i.e. contextualized mobile advertising). A fully personalized mobile advertising infrastructure is therefore necessary. In this paper, we present such a personalized contextualized mobile advertising infrastructure for the advertisement of commercial/non-commercial activities. We name this infrastructure MALCR, in which the primary ingredient is a recommendation mechanism supported by the following concepts: (1) minimize users' inputs (a typical interaction metaphor for mobile devices) so that implicit browsing behaviors are best utilized; (2) implicit browsing behaviors are then analyzed with a view to understanding the users' interests in the values of features of advertisements; (3) having understood the users' interests, Mobile Ads relevant to a designated location are subsequently scored and ranked; (4) Top-N scored advertisements are recommended. The recommendation mechanism is novel in its combination of two-level Neural Network learning, Neural Network sensitivity analysis, and attribute-based filtering. This recommendation mechanism is also justified (by thorough evaluations) to show its ability in furnishing effective personalized contextualized mobile advertising.
Pharmacokinetic, pharmacodynamic and biomarker evaluation of transforming growth factor-β receptor I kinase inhibitor, galunisertib, in phase 1 study in patients with advanced cancer
Purpose Transforming growth factor-beta (TGF-β) signaling plays a key role in epithelial-mesenchymal transition (EMT) of tumors, including malignant glioma. Small molecule inhibitors (SMI) blocking TGF-β signaling reverse EMT and arrest tumor progression. Several SMIs were developed, but currently only LY2157299 monohydrate (galunisertib) was advanced to clinical investigation. Design The first-in-human dose study had three parts (Part A, dose escalation, n = 39; Part B, safety combination with lomustine, n = 26; Part C, relative bioavailability study, n = 14). Results A preclinical pharmacokinetic/pharmacodynamic (PK/PD) model predicted a therapeutic window up to 300 mg/day and was confirmed in Part A after continuous PK/PD. PK was not affected by co-medications such as enzyme-inducing anti-epileptic drugs or proton pump inhibitors. Changes in pSMAD2 levels in peripheral blood mononuclear cells were associated with exposure indicating target-related pharmacological activity of galunisertib. Twelve (12/79; 15 %) patients with refractory/relapsed malignant glioma had durable stable disease (SD) for 6 or more cycles, partial responses (PR), or complete responses (CR). These patients with clinical benefit had high plasma baseline levels of MDC/CCL22 and low protein expression of pSMAD2 in their tumors. Of the 5 patients with IDH1/2 mutation, 4 patients had a clinical benefit as defined by CR/PR and SD ≥6 cycles. Galunisertib had a favorable toxicity profile and no cardiac adverse events. Conclusion Based on the PK, PD, and biomarker evaluations, the intermittent administration of galunisertib at 300 mg/day is safe for future clinical investigation.
Learning 6-DOF Grasping Interaction via Deep Geometry-Aware 3D Representations
This paper focuses on the problem of learning 6-DOF grasping with a parallel jaw gripper in simulation. Our key idea is constraining and regularizing grasping interaction learning through 3D geometry prediction. We introduce a deep geometry-aware grasping network (DGGN) that decomposes the learning into two steps. First, we learn to build a mental, geometry-aware representation by reconstructing the scene (i.e., a 3D occupancy grid) from RGBD input via generative 3D shape modeling. Second, we learn to predict the grasping outcome from this internal geometry-aware representation. The learned outcome prediction model is used to sequentially propose grasping solutions via analysis-by-synthesis optimization. Our contributions are fourfold: (1) To the best of our knowledge, we present for the first time a method to learn a 6-DOF grasping net from RGBD input. (2) We build a grasping dataset from demonstrations in virtual reality with rich sensory and interaction annotations; this dataset includes 101 everyday objects spread across 7 categories, and we additionally propose a data augmentation strategy for effective learning. (3) We demonstrate that the learned geometry-aware representation leads to about 10% relative performance improvement over the baseline CNN on grasping objects from our dataset. (4) We further demonstrate that the model generalizes to novel viewpoints and object instances.
Self-Capacitance of High-Voltage Transformers
The calculation of a transformer's parasitics, such as its self-capacitance, is fundamental for predicting the frequency behavior of the device, for reducing this capacitance value, and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem over a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.
Simulation as an engine of physical scene understanding.
In a glance, we can perceive whether a stack of dishes will topple, a branch will support a child's weight, a grocery bag is poorly packed and liable to tear or crush its contents, or a tool is firmly attached to a table or free to be lifted. Such rapid physical inferences are central to how people interact with the world and with each other, yet their computational underpinnings are poorly understood. We propose a model based on an "intuitive physics engine," a cognitive mechanism similar to computer engines that simulate rich physics in video games and graphics, but that uses approximate, probabilistic simulations to make robust and fast inferences in complex natural scenes where crucial information is unobserved. This single model fits data from five distinct psychophysical tasks, captures several illusions and biases, and explains core aspects of human mental models and common-sense reasoning that are instrumental to how humans understand their everyday world.
Impact of facebook addiction on narcissistic behavior and self-esteem among students.
OBJECTIVE To investigate the relationship between Facebook addiction, narcissism and self-esteem and to see if gender played any role in this equation. METHODS The correlational study was conducted from February to March 2013 at the Department of Psychology, University of Sargodha, Punjab, Pakistan. Using convenient sampling, two equal groups of male and female students were enrolled from different departments of the university. The Bergen Facebook Addiction Scale, Hypersensitive Narcissism Scale and Rosenberg's Self-esteem Scale were used for evaluation. SPSS 17 was used for statistical analysis. RESULTS Of the 200 subjects in the study, 100 (50%) each were males and females. Facebook addiction was positively correlated with narcissism (r=0.20; p<0.05) and negatively with self-esteem (r=-0.18; p<0.05). The relationship between narcissism and self-esteem was non-significant (r=0.05; p>0.05). Facebook addiction was a significant predictor of narcissistic behaviour (b=0.202; p<0.001) and low self-esteem (b=-0.18; p<0.001). There were no significant gender differences in the three variables (p>0.05 each). CONCLUSIONS Facebook addiction was a significant predictor of narcissistic behaviour and low levels of self-esteem among students.
Investigating the learning-theory foundations of game-based learning: a meta-analysis
Past studies on the issue of learning-theory foundations in game-based learning stressed the importance of establishing a learning-theory foundation and provided exploratory examinations of established learning theories. However, research has seldom addressed the development of the use, or failure to use, learning-theory foundations, or categorized these learning theories into types and synthesized their development. We investigate this issue from the perspective of the learning theories invoked to underpin educational computer game design and use, based on four types of learning theories: behaviourism, cognitivism, humanism and constructivism. Because the investigation needs to examine and analyse the results from a large number of independent previous studies, this study applied the meta-analysis method to present a more comprehensive description and discussion of the influence and implications of the findings. This study shows the distribution of development trends for the use of learning theory as a theoretical foundation, as opposed to the failure to use learning theory, in game-based learning, along with the distribution of the types and principles of the learning theories used as foundations. These new findings supplement the results of previous studies with regard to the issue of learning-theory foundations in game-based learning. The contributions of this study to the issue of learning-theory foundations in game-based learning are discussed.
Self-ensembling for visual domain adaptation
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant [29] of temporal ensembling [14], a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.
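The core of the mean-teacher variant is a one-line update: the teacher's weights track an exponential moving average of the student's. A schematic sketch, with a random perturbation standing in for the student's actual gradient step (which would combine a supervised source-domain loss with a consistency loss on unlabeled target data):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Teacher weights are an exponential moving average of student weights."""
    for k in teacher:
        teacher[k] = alpha * teacher[k] + (1 - alpha) * student[k]

rng = np.random.default_rng(0)
student = {"W": rng.normal(size=(4, 4))}
teacher = {k: v.copy() for k, v in student.items()}

for step in range(100):
    # Stand-in for a real optimizer step on the student network.
    student["W"] -= 0.01 * rng.normal(size=(4, 4))
    ema_update(teacher, student)

print(np.linalg.norm(teacher["W"] - student["W"]))  # teacher lags smoothly behind
```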
When Ignorance is Bliss: Information, Fairness, and Bargaining Efficiency
Most theories of legal discovery assume that sharing information among disputing parties will lead to convergence of expectations and will facilitate settlement. However, psychological research shows that shared information, if it is open to multiple interpretations, is likely to be interpreted egocentrically by the disputants, which can cause beliefs to diverge rather than converge. We present results from a bargaining experiment which shows that information sharing leads to divergence of expectations, and to settlement delays, when the information exchanged is amenable to multiple interpretations. By contrast, when there is only one obvious interpretation, information sharing leads to convergence of expectations and speeds settlement. We show, further, that information sharing moderates the relationship between size of the bargaining zone and the prospects for settlement.
Fault Monitoring of Wind Turbine Generator Brushes: A Data-Mining Approach
Components of wind turbines are subjected to asymmetric loads caused by variable wind conditions. Carbon brushes are critical components of the wind turbine generator. Adequately maintaining and detecting abnormalities in the carbon brushes early is essential for proper turbine performance. In this paper, data-mining algorithms are applied for early prediction of carbon brush faults. Predicting generator brush faults early enables timely maintenance or replacement of brushes. The results discussed in this paper are based on analyzing generator brush faults that occurred on 27 wind turbines. The datasets used to analyze faults were collected from the supervisory control and data acquisition (SCADA) systems installed at the wind turbines. Twenty-four data-mining models are constructed to predict faults up to 12 h before the actual fault occurs. To increase the prediction accuracy of the models discussed, a data balancing approach is used. Four data-mining algorithms were studied to evaluate the quality of the models for predicting generator brush faults. Among the selected data-mining algorithms, the boosting tree algorithm provided the best prediction results. Research limitations attributed to the available datasets are discussed. [DOI: 10.1115/1.4005624]
Kinetics of formation and dissociation of aquocobalt(III) complexes with some carboxylic acids in acid perchlorate solution.
The rates of formation and dissociation of monocarboxylic complexes of aquocobalt(III) cations with propionic, malonic, and 2-ethylmalonic acids have been measured with the stopped-flow method over a range of concentrations and temperatures in acid perchlorate media at an ionic strength of 3.0 M. Although the rate constants for reactions of CoOHaq2+ with neutral ligands cover only a small range, indicating a dissociative mechanism, the associated activation parameters change cooperatively. These variations are discussed in terms of differences in the structure, proton distribution, and rates of water loss in the ion-pair precursors for the different ligands. Similar activation enthalpies of dissociation indicate a common mode of coordination, and the positive activation entropies for dissociation are consistent with a neutral leaving group.
The Generalized Reparameterization Gradient
The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized reparameterization is effective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.
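The Gaussian case that this paper generalizes is easy to demonstrate: writing z = mu + sigma*eps turns the expectation into a differentiable function of the parameters, giving a much lower-variance estimator than the score-function gradient for the same quantity. A minimal numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: gradient of E_{z ~ N(mu, sigma^2)}[z^2] w.r.t. mu (true value: 2*mu = 3.0).
mu, sigma, n = 1.5, 0.8, 100_000
eps = rng.normal(size=n)

# Reparameterization: z = mu + sigma * eps makes z a deterministic,
# differentiable function of mu, so d(z^2)/d(mu) = 2z can be averaged directly.
z = mu + sigma * eps
grad_reparam = (2 * z).mean()

# Score-function (REINFORCE) estimator for comparison: f(z) * d log q(z) / d mu.
grad_score = (z**2 * (z - mu) / sigma**2).mean()

print(grad_reparam, grad_score)  # both near 3.0; reparam has far lower variance
```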
Deep contextual language understanding in spoken dialogue systems
We describe a unified multi-turn multi-task spoken language understanding (SLU) solution capable of handling multiple context-sensitive classification (intent determination) and sequence labeling (slot filling) tasks simultaneously. The proposed architecture is based on recurrent convolutional neural networks (RCNN) with shared feature layers and globally normalized sequence modeling components. The temporal dependencies within and across different tasks are encoded succinctly as recurrent connections. The dialog system responses beyond the SLU component are also exploited as effective external features. We show with extensive experiments on a number of datasets that the proposed joint learning framework generates state-of-the-art results for both classification and tagging, and that the contextual modeling based on recurrent and external features significantly improves the context sensitivity of SLU models.
Running injury and stride time variability over a prolonged run.
Locomotor variability is inherent to movement and, in healthy systems, contains a predictable structure. In this study, detrended fluctuation analysis (DFA) was used to quantify the structure of variability in locomotion. Using DFA, long-range correlations (α) are calculated in over ground running and the influence of injury and fatigue on α is examined. An accelerometer was mounted to the tibia of 18 runners (9 with a history of injury) to quantify stride time. Participants ran at their preferred 5k pace±5% on an indoor track to fatigue. The complete time series data were divided into three consecutive intervals (beginning, middle, and end). Mean, standard deviation (SD), coefficient of variation (CV) and α of stride times were calculated for each interval. Averages for all variables were calculated per group for statistical analysis. No significant interval, group or interval×group effects were found for mean, SD or CV of stride time. A significant linear trend in α for interval occurred with a reduction in α over the course of the run (p=0.01) indicating that over the run, stride times of runners became more unpredictable. This was likely due to movement errors associated with fatigue necessitating frequent corrections. The injured group exhibited lower α (M=0.79, CI(95)=0.70, 0.88) than the non-injured group (p=0.01) (M=0.96, CI(95)=0.88, 1.05); a reduction hypothesized to be associated with altered complexity. Overall, these findings suggest injury and fatigue influence neuromuscular output during running.
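DFA itself is short to implement: integrate the mean-removed series, measure the RMS residual of a linear detrend at several window sizes, and read α off the log-log slope. A minimal version follows; the window sizes are a common but arbitrary choice, not taken from this study.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        rms = []
        for s in segs:  # detrend each window with a linear fit
            a, b = np.polyfit(t, s, 1)
            rms.append(np.mean((s - (a * t + b)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.normal(size=1024)        # uncorrelated noise: alpha ~ 0.5
print(round(dfa_alpha(white), 2))    # long-range correlated series give larger alpha
```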
Reasoning about Meaning in Natural Language with Compact Closed Categories and Frobenius Algebras
Compact closed categories have found applications in modeling quantum information protocols by Abramsky-Coecke. They also provide semantics for Lambek’s pregroup algebras, applied to formalizing the grammatical structure of natural language, and are implicit in a distributional model of word meaning based on vector spaces. Specifically, in previous work Coecke-Clark-Sadrzadeh used the product category of pregroups with vector spaces and provided a distributional model of meaning for sentences. We recast this theory in terms of strongly monoidal functors and advance it via Frobenius algebras over vector spaces. The former are used to formalize topological quantum field theories by Atiyah and Baez-Dolan, and the latter are used to model classical data in quantum protocols by Coecke-Pavlovic-Vicary. The Frobenius algebras enable us to work in a single space in which meanings of words, phrases, and sentences of any structure live. Hence we can compare meanings of different language constructs and enhance the applicability of the theory. We report on experimental results on a number of language tasks and verify the theoretical predictions.
SyncGC: A Synchronized Garbage Collection Technique for Reducing Tail Latency in Cassandra
Data-center applications running on distributed databases often suffer from unexpectedly high response-time fluctuation caused by long tail latency. In this paper, we find that the long tail latency of user writes is mainly created by interference with garbage collection (GC) tasks running in various system layers. To address the tail latency problem, we propose a synchronized garbage collection technique, called SyncGC. By scheduling multiple GC instances to execute in sync with each other in an overlapped manner, SyncGC prevents user requests from being interfered with by GC instances, thereby minimizing their negative impacts on tail latency. Our experimental results with Cassandra show that SyncGC reduces the 99.99th-percentile tail latency and the maximum latency by 35% and 37% on average, respectively.
A Novel Grid Synchronization PLL Method Based on Adaptive Low-Pass Notch Filter for Grid-Connected PCS
The amount of distributed energy resources (DERs) has increased constantly worldwide. The power ratings of DERs have become considerably high, as required by new grid code requirements. To follow the grid code and optimize the function of grid-connected inverters based on DERs, a phase-locked loop (PLL) is essential for detecting the grid phase angle accurately when the grid voltage is polluted by harmonics and imbalance. This paper proposes a novel low-pass notch filter PLL (LPN-PLL) control strategy to synchronize with the true phase angle of the grid, instead of using a conventional synchronous reference frame PLL (SRF-PLL), which requires a d-q-axis transformation of the three-phase voltage and a proportional-integral controller. The proposed LPN-PLL is an upgraded version of the PLL method using the fast Fourier transform concept (FFT-PLL), which is robust to harmonics and imbalance of the grid voltage. The proposed PLL algorithm was compared with the conventional SRF-PLL and FFT-PLL and was implemented digitally using a TMS320F28335 digital signal processor. A 10-kW three-phase grid-connected inverter was set up, and a verification experiment was performed, showing the high performance and robustness of the proposal under low-voltage ride-through operation.
A Hybrid Fragmentation Approach for Distributed Deductive Database Systems
Fragmentation of base relations in distributed database management systems increases the level of concurrency and therefore system throughput for query processing. Algorithms for horizontal and vertical fragmentation of relations in relational, object-oriented and deductive databases exist; however, hybrid fragmentation techniques based on variable bindings appearing in user queries and query-access-rule dependency are lacking for deductive database systems. In this paper, we propose a hybrid fragmentation approach for distributed deductive database systems. Our approach first considers the horizontal partition of base relations according to the bindings imposed on user queries, and then generates vertical fragments of the horizontally partitioned relations and clusters rules using affinity of attributes and access frequency of queries and rules. The proposed fragmentation technique facilitates the design of distributed deductive database systems.
Properties of the hydrogen bridges in thiobase pairs: a theoretical study.
The characteristics of the hydrogen bridges in base pairs with S → O substitution have been studied, and comparisons with the unmodified systems are considered. The modifications induced by the sulfur atom in the structures, energies, and atomic charges are described, and the energetic effects are re-evaluated for the first time. The effects on hydrogen transfer from one base to the other, and the relevance of these static and dynamical features for biological properties, are discussed.
Nuclear PARP-1 protein overexpression is associated with poor overall survival in early breast cancer.
BACKGROUND Poly(ADP-ribose) polymerase-1 (PARP-1) is a highly promising novel target in breast cancer. However, the expression of PARP-1 protein in breast cancer and its association with outcome remain poorly characterized. PATIENTS AND METHODS Quantitative expression of PARP-1 protein was assayed by a specific immunohistochemical signal intensity scanning assay in a range of normal to malignant breast lesions, including a series of patients (N = 330) with operable breast cancer, to correlate with clinicopathological factors and long-term outcome. RESULTS PARP-1 was overexpressed in about a third of ductal carcinomas in situ and infiltrating breast carcinomas. PARP-1 protein overexpression was associated with higher tumor grade (P = 0.01), estrogen receptor-negative tumors (P < 0.001), and the triple-negative phenotype (P < 0.001). The hazard ratio (HR) for death in patients with PARP-1-overexpressing tumors was 7.24 (95% CI 3.56-14.75). In a multivariate analysis, PARP-1 overexpression was an independent prognostic factor for both disease-free (HR 10.05; 95% CI 5.42-10.66) and overall survival (HR 1.82; 95% CI 1.32-2.52). CONCLUSIONS Nuclear PARP-1 is overexpressed during malignant transformation of the breast, particularly in triple-negative tumors, and independently predicts poor prognosis in operable invasive breast cancer.
The Impact of Treatment Adherence for Patients With Diabetes and Hypertension on Cardiovascular Disease Risk: Protocol for a Retrospective Cohort Study, 2008-2018
BACKGROUND Cardiovascular disease (CVD) is the leading cause of death globally and in Canada. Diabetes and hypertension are major risk factors for CVD events. Despite the increasing availability of effective treatments, the majority of diabetic and hypertensive patients do not achieve adequate blood pressure and glycemic control, and poor treatment adherence is one of the major contributors. OBJECTIVE This study aims to evaluate the impact of treatment adherence for patients with both diabetes and hypertension on acute severe CVD events and intermediate clinical outcomes in Canadian primary care settings. METHODS We will conduct a population-based retrospective cohort study of patients living with both diabetes and hypertension in Ontario, Canada, between January 1, 2008, and March 31, 2018. Social Cognitive Theory will be used as a conceptual framework to capture the reciprocal relationship between treatment adherence, personal factors, and environmental determinants, and how this interplay impacts CVD events and clinical outcomes. Data will be derived from the Diabetes Action Canada National Data Repository. A time-varying Cox proportional hazards model will be used to estimate the impact of treatment adherence on CVD morbidity and mortality; a minimal sketch of this kind of model appears below. Multivariable linear regression models and hierarchical regression models will be used to estimate the associations between treatment adherence to different medication categories and intermediate clinical outcomes. Our primary outcome is the association between treatment adherence and the risk of acute severe CVD events, including CVD mortality. The secondary outcome is the association between treatment adherence and intermediate clinical outcomes, including diastolic and systolic blood pressure, glycated hemoglobin, low-density lipoprotein cholesterol, and total cholesterol. Owing to data limitations, we will use medication prescriptions as a proxy for treatment adherence, assuming that a patient adhered to medication if she or he had any prescription record in the 4 preceding quarters and in the 1 quarter after each quarter of interest. Acute severe CVD events are defined according to the World Health Organization's Monitoring Trends and Determinants in Cardiovascular Disease Project, and include acute coronary heart disease, stroke, and heart failure. As causes of death are not available, the number of CVD deaths will be computed using the most recent systolic blood pressure distributions and the population attributable risks related to systolic blood pressure level. RESULTS The project was funded by Diabetes Action Canada (reference number: 503854) and approved by the University of Toronto Research Ethics Board (reference number: 36065). The project started in June 2018 and is expected to be finished by September 2019. CONCLUSIONS The findings will help identify the challenges of treatment adherence for diabetic and hypertensive patients in primary care settings and will inform intervention strategies to promote treatment adherence for patients with multimorbidity. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/13571.
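A minimal sketch of a time-varying Cox model of the kind described, using the lifelines library; the long-format rows, quarterly intervals, and prescription-based "adherent" covariate are illustrative toy data, not the study's repository.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Each row is one (id, interval) with the covariate value in effect during
# that interval; "event" marks whether an acute severe CVD event ended it.

long_df = pd.DataFrame({
    "id":       [1, 1, 2, 2, 3],
    "start":    [0, 4, 0, 6, 0],     # quarter in which the interval starts
    "stop":     [4, 8, 6, 10, 9],    # quarter in which the interval ends
    "adherent": [1, 0, 1, 1, 1],     # prescription-derived adherence proxy
    "event":    [0, 1, 0, 0, 1],     # acute severe CVD event in the interval
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)  # small penalty stabilizes toy data
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()                        # hazard ratio for "adherent"
```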
Visceral leishmaniasis is preventable in a highly endemic village in West Bengal, India.
In 2004, following a cluster of kala-azar cases in Chatrakhali, West Bengal, India, we screened and treated this endemic village for leishmaniasis infection. In 2005, following new reports of kala-azar, we screened the village again and conducted a retrospective cohort study (exposure period: August 2004 to July 2005). We defined an incident case of leishmaniasis as a new seropositive sample (≥1:1600 dilution in a direct agglutination test) in a person seronegative in 2004. We obtained information about potential risk factors and calculated the relative risk (RR) of infection for exposure to these factors. One hundred and fifty (20%) of the 751 residents acquired leishmaniasis within 1 year. Factors associated with infection included residing in homes with mud walls (RR 4.3), dampness in the home (RR 2.5), proximity to bodies of water (RR 2.5), and livestock ownership (RR 2.4). Sleeping dressed (RR 0.4), under a bed net (RR 0.5), or in a cot (RR 0.6) was associated with a lower risk. These high rates of infection indicate that transmission persisted in this community. Poor housing conditions were associated with a higher risk, while personal protection measures against the vector were effective. Major housing improvements and personal protection efforts are needed to protect this vulnerable population from leishmaniasis.
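For concreteness, the relative-risk computation reported above amounts to the following; the 2x2 counts are illustrative, not the study's actual data.

```python
# RR compares the infection risk among residents exposed to a factor with the
# risk among those not exposed.

def relative_risk(cases_exposed: int, total_exposed: int,
                  cases_unexposed: int, total_unexposed: int) -> float:
    """RR = risk of infection among the exposed / risk among the unexposed."""
    return (cases_exposed / total_exposed) / (cases_unexposed / total_unexposed)

# e.g., residents of mud-wall homes vs. others (illustrative counts):
print(f"RR = {relative_risk(120, 400, 30, 351):.2f}")
```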
Tensegrities and Global Rigidity
A tensegrity is a finite configuration of points in E^d suspended rigidly by inextensible cables and incompressible struts. Here it is explained how a stress-energy function, given by a symmetric stress matrix, can be used to create tensegrities that are globally rigid in the sense that the only configurations satisfying the cable and strut constraints are congruent copies.
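The stress-matrix test can be made concrete as follows; the framework (a triangle of struts with cables to the centroid) and stresses are a standard toy example, and the rank/PSD check shown is only the core of the full sufficient condition.

```python
import numpy as np

# Sketch of the stress-matrix test behind the global-rigidity result. For an
# equilibrium stress w, the stress matrix Omega has off-diagonal entries
# -w_ij for each edge {i,j} and zero row sums. A sufficient condition for
# global rigidity involves Omega being positive semidefinite of rank n-d-1.

edges = {(0, 1): -1.0, (1, 2): -1.0, (2, 0): -1.0,   # struts: w < 0
         (0, 3): 3.0, (1, 3): 3.0, (2, 3): 3.0}      # cables: w > 0
n, d = 4, 2                                          # 4 points in the plane

omega = np.zeros((n, n))
for (i, j), w in edges.items():
    omega[i, j] -= w
    omega[j, i] -= w
    omega[i, i] += w
    omega[j, j] += w

eig = np.linalg.eigvalsh(omega)
print("eigenvalues:", np.round(eig, 6))              # [0, 0, 0, 12]
print("PSD:", bool(eig[0] > -1e-9),
      " rank:", int(np.sum(np.abs(eig) > 1e-9)), " target:", n - d - 1)
```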
An Overview of Peak-to-Average Power Ratio Reduction Techniques for OFDM Signals
OFDM (Orthogonal Frequency Division Multiplexing) has been widely adopted for high-data-rate wireless communication systems due to advantages such as extraordinary spectral efficiency, robustness to channel fading, and better QoS (Quality of Service) performance for multiple users. However, some challenging issues remain unresolved in OFDM systems. One of them is the high PAPR (peak-to-average power ratio), which drives power amplifiers into their nonlinear region and causes out-of-band radiation and in-band distortion. This paper reviews conventional PAPR reduction techniques and their modifications for achieving better PAPR performance. The advantages and disadvantages of each technique are discussed in detail, and comparisons between the different techniques are presented. Finally, the paper outlines promising directions for further research on PAPR reduction for OFDM signals.
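The quantity at issue is easy to make concrete: the sketch below measures the PAPR of a single randomly modulated OFDM symbol; the subcarrier count and modulation are illustrative.

```python
import numpy as np

# Sketch of measuring OFDM PAPR: N random QPSK subcarriers are brought to the
# time domain by an IFFT, then peak power is compared to mean power.

rng = np.random.default_rng(0)
N = 256                                          # number of subcarriers
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)               # time-domain OFDM symbol

papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.2f} dB")
# Clipping, selective mapping, partial transmit sequences, and the other
# techniques reviewed above all aim to shrink this ratio.
```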
Assessment of heart rate variability derived from finger-tip photoplethysmography as compared to electrocardiography.
Heart rate variability (HRV) is traditionally derived from the RR interval time series of the electrocardiogram (ECG). Photoplethysmography (PPG) also reflects the cardiac rhythm, since the mechanical activity of the heart is coupled to its electrical activity. Thus, in principle, PPG can be used to determine the interval between successive heartbeats and hence heart rate variability, although the PPG wave lags behind the ECG signal by the pulse transit time. In this study, finger-tip PPG and standard lead II ECG were recorded for five minutes from 10 healthy subjects at rest. The results showed a high correlation (median = 0.97) between the ECG-derived RR intervals and the PPG-derived peak-to-peak (PP) intervals. PP variability agreed with RR variability to within 0.1 ms. The time-domain, frequency-domain, and Poincaré plot HRV parameters computed using the RR interval method and the PP interval method showed no significant differences (p > 0.05). The error analysis likewise showed insignificant differences between the HRV indices obtained by the two methods, and Bland-Altman analysis showed a high degree of agreement between the two methods for all HRV parameters. Thus, HRV can also be reliably estimated from the PPG-based PP interval method.
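The time-domain indices compared above are computed identically whether the input is an ECG RR series or a PPG PP series; the sketch below uses illustrative interval values.

```python
import numpy as np

# Sketch of standard time-domain HRV indices from a beat-interval series
# (RR from ECG or PP from PPG, in milliseconds).

def time_domain_hrv(intervals_ms: np.ndarray) -> dict:
    diffs = np.diff(intervals_ms)
    return {
        "mean_ms": float(np.mean(intervals_ms)),
        "SDNN_ms": float(np.std(intervals_ms, ddof=1)),      # overall variability
        "RMSSD_ms": float(np.sqrt(np.mean(diffs ** 2))),     # beat-to-beat variability
        "pNN50_%": float(np.mean(np.abs(diffs) > 50) * 100), # large successive deltas
    }

pp_intervals = np.array([812, 845, 828, 871, 855, 840, 900, 865], dtype=float)
print(time_domain_hrv(pp_intervals))
```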
Lazy evaluation and delimited control
The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics. By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes "under lambdas." We prove that machine evaluation is equivalent to standard-order evaluation. Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control. To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.
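The store-based view of call-by-need that the machine replaces with control effects can be sketched with memoized thunks; this shows the conventional heap formulation, not the paper's delimited-control encoding.

```python
# Sketch of heap-based call-by-need: a thunk delays evaluation and memoizes
# its result, so each binding is evaluated at most once on first demand.

class Thunk:
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:          # evaluate on first demand only
            self._value = self._compute()
            self._forced = True
            self._compute = None      # drop the closure: the "heap update"
        return self._value

def expensive():
    print("evaluating...")
    return 42

x = Thunk(expensive)
print(x.force() + x.force())          # "evaluating..." prints exactly once
```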
Memristor-Based Cellular Nonlinear/Neural Network: Design, Analysis, and Applications
Cellular nonlinear/neural networks (CNNs) have been recognized as a powerful massively parallel architecture capable of solving complex engineering problems by performing trillions of analog operations per second. The memristor was theoretically postulated in 1971, but research interest remained limited until the recent much-acclaimed demonstration of nanoscale crossbar memories by engineers at the Hewlett-Packard Laboratory. The memristor is expected to be co-integrated with nanoscale CMOS technology to revolutionize conventional von Neumann as well as neuromorphic computing. In this paper, a compact CNN model based on memristors is presented along with its performance analysis and applications. In the new CNN design, the memristor bridge circuit acts as the synaptic circuit element and substitutes for the complex multiplication circuit used in traditional CNN architectures. In addition, the negative differential resistance and nonlinear current-voltage characteristics of the memristor have been leveraged to replace the linear resistor in conventional CNNs. The proposed CNN design has several merits, for example, high density, nonvolatility, and programmability of synaptic weights. The operation of the proposed memristor-based CNN design in implementing several image-processing functions is illustrated through simulation and contrasted with conventional CNNs. Monte Carlo simulation has been used to demonstrate the behavior of the proposed CNN under variations in memristor synaptic weights.
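Simulations of this kind typically build on the linear ion-drift memristor model; the sketch below integrates that device model (not the four-memristor bridge synapse itself), with illustrative parameters.

```python
import numpy as np

# Sketch of the linear ion-drift memristor model: resistance depends on the
# doped-region width w, which drifts in proportion to the current.

R_ON, R_OFF = 100.0, 16_000.0    # on/off resistance [ohm]
D = 10e-9                        # device thickness [m]
MU_V = 1e-14                     # dopant mobility [m^2/(s*V)]

def simulate(v_of_t, dt=1e-6, w0=0.5):
    w = w0 * D                   # initial doped-region width
    current = []
    for v in v_of_t:
        R = R_ON * (w / D) + R_OFF * (1 - w / D)   # state-dependent resistance
        i = v / R
        w += MU_V * (R_ON / D) * i * dt            # linear drift of the boundary
        w = min(max(w, 0.0), D)                    # hard limits at the edges
        current.append(i)
    return np.array(current)

t = np.arange(0, 2e-3, 1e-6)
i = simulate(np.sin(2 * np.pi * 1_000 * t))        # 1 kHz sinusoidal drive
print(f"current range: {i.min():.2e} .. {i.max():.2e} A")
```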
5G transport networks: the need for new technologies and standards
Mobile transport networks will play a vital role in future 5G and beyond networks. In particular, access transport networks connecting the radio access with core networks are of critical importance. They will be required to support massive connectivity, extremely high data rates, and real-time services in a ubiquitous environment. To attain these targets, transport networks should be constructed from a variety of technologies and methods, depending on application scenarios, geographic areas, and deployment models. In this article, we present several technologies, including analog radio-over-fiber transmission, intermediate-frequency-over-fiber technology, radio-on-radio transmission, and the convergence of fiber and millimeter-wave systems, that can facilitate building such effective transport networks in many use cases. For each technology, we present the system concept, possible application cases, and demonstration results. We also discuss potential standardization and development directions so that the proposed technologies can be widely used.
A fast, efficient and automated method to extract vessels from fundus images
We present a fast, efficient, and automatic method for extracting vessels from retinal images. The proposed method is based on the second local entropy and on the gray-level co-occurrence matrix (GLCM). The algorithm is designed to allow flexibility in the definition of the blood vessel contours. Using information from the GLCM, a statistical feature is calculated to act as a threshold value. The performance of the proposed approach was evaluated in terms of its sensitivity, specificity, and accuracy, for which the results obtained were 0.9648, 0.9480, and 0.9759, respectively. These results demonstrate the high performance and accuracy of the proposed method. We also evaluated the time required to carry out the segmentation: the average time required by the proposed method is 3 s for images of size 565 × 584 pixels. To assess the ability and speed of the proposed method, the experimental results are compared with those obtained using other existing methods.
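A GLCM-derived threshold statistic of the kind described can be sketched with scikit-image; this computes a generic GLCM entropy rather than the authors' exact "second local entropy" statistic, assumes scikit-image >= 0.19 for the `graycomatrix` spelling, and uses a toy image.

```python
import numpy as np
from skimage.feature import graycomatrix   # skimage >= 0.19 spelling

# Sketch: build a normalized GLCM for one offset, then derive an entropy
# statistic from it; such statistics can serve as segmentation thresholds.

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

glcm = graycomatrix(image, distances=[1], angles=[0], levels=4,
                    symmetric=True, normed=True)
p = glcm[:, :, 0, 0]                               # the normalized matrix
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # GLCM entropy
print(f"GLCM entropy: {entropy:.3f}  (candidate threshold statistic)")
```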
Low-dose of statin treatment improves cerebrovascular reactivity in patients with ischemic stroke: single photon emission computed tomography analysis.
BACKGROUND Statins, 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors, have pleiotropic effects that are independent of their cholesterol-lowering activities. For example, they improve vascular endothelial function and exert anti-inflammatory effects. In large clinical trials they reduced the incidence of stroke and myocardial infarction; however, little is currently known about the mechanisms underlying their clinically confirmed stroke protection. PATIENTS AND METHODS We assessed 10 patients who had experienced a stroke at least 6 months earlier; they received low-dose (5 mg) simvastatin. Using our triple-injection technetium-99m ethylcysteinate dimer method, we determined their cerebral blood flow and cerebrovascular reactivity. A second assessment of at-rest cerebral blood flow and cerebrovascular reactivity was performed 4 or more months (mean 6 months) after the start of statin administration. We used acetazolamide (1 g) as the vasodilator. The region of interest was the middle cerebral artery territory on a 3-dimensional stereotaxic region-of-interest template. RESULTS Statin administration did not significantly affect the regional cerebral blood flow at rest. Before statin treatment, the patients' vasoreactivity, determined by the triple-injection technetium-99m ethylcysteinate dimer method, showed delayed, poor, or near-normal response patterns. Statin treatment improved vasoreactivity in all patients. Their mean serum total cholesterol level before statin administration was 200 mg/dL (range 187-256 mg/dL). Statin treatment significantly reduced their mean serum total cholesterol to 180 mg/dL (range 128-220 mg/dL) (P < .01). CONCLUSIONS The clinically confirmed stroke protection exerted by statins may be attributable to improved cerebrovascular reactivity.