title | abstract |
---|---|
From Manual to Semi-Automatic Semantic Annotation: About Ontology-Based Text Annotation Tools | Semantic annotation is a basic technology for intelligent content and is beneficial in a wide range of content-oriented intelligent applications, especially in the area of the Semantic Web. In this paper we present our work in ontology-based semantic annotation, which is embedded in a scenario of a knowledge portal application. Starting with examples of seemingly good and bad manual semantic annotation, we describe the experiences we made within the KA initiative. These experiences gave us the starting point for developing an ergonomic and knowledge base-supported annotation tool. Furthermore, the annotation tool described is currently being extended with mechanisms for semi-automatic, information-extraction-based annotation. To support the evolving nature of semantic content, we additionally describe our idea of evolving ontologies supporting semantic annotation. This paper was presented at the COLING-2000 Workshop on Semantic Annotation and Intelligent Content, Centre Universitaire, Luxembourg, 5-6 August 2000. |
Striatal activation during acquisition of a cognitive skill. | The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning. |
Discovering essential code elements in informal documentation | To access the knowledge contained in developer communication, such as forum posts, it is useful to automatically determine the code elements referred to in the discussions. We propose a novel traceability recovery approach to extract the code elements contained in various documents. As opposed to previous work, our approach does not require an index of code elements to find links, which makes it particularly well-suited for the analysis of informal documentation. When evaluated on 188 StackOverflow answer posts containing 993 code elements, the technique achieves an average precision of 0.92 and recall of 0.90. As a major refinement on traditional traceability approaches, we also propose to detect which of the code elements in a document are salient, or germane, to the topic of the post. To this end, we developed a three-feature decision tree classifier that performs with a precision of 0.65-0.74 and recall of 0.30-0.65, depending on the subject of the document. |
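A minimal sketch of the index-free idea described above: lexical patterns alone can flag likely code elements in free-form text, without a project-specific index. The regexes and the example post are illustrative assumptions, not the authors' actual feature set.

```python
import re

# Hypothetical patterns for common code-element shapes (not the paper's exact rules).
CODE_ELEMENT_PATTERNS = [
    re.compile(r"\b[a-z][a-zA-Z0-9]*\(\)"),                      # method calls: add()
    re.compile(r"\b[A-Z][a-zA-Z0-9]*[a-z][A-Z][a-zA-Z0-9]*\b"),  # CamelCase types: ArrayList
    re.compile(r"\b[a-zA-Z_]\w*\.[a-zA-Z_]\w*\b"),               # qualified names: java.util
]

def extract_code_elements(text):
    """Return the set of likely code elements mentioned in free-form text."""
    found = set()
    for pattern in CODE_ELEMENT_PATTERNS:
        found.update(pattern.findall(text))
    return found

post = "Use ArrayList.add() instead of Vector; java.util.Vector is synchronized."
print(extract_code_elements(post))
```

Deciding which of the matched elements are salient would then be a separate classification step, as the abstract describes.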
Report to the Competition Bureau of Industry Canada: Efficiency Considerations in Designing Electricity Markets | This report summarizes my perspective on several aspects of restructured wholesale markets for electricity. It is intended as a contribution to the Bureau’s examination of design issues in jurisdictions considering restructuring. The Introduction lays out some background and issues that motivate the subsequent discussion. The following sections consider the general architecture of wholesale markets for electricity. The first examines the choice among forms of organization, such as bilateral contracting or multilateral trading, and in the latter, the choice between a market-clearing exchange or a tight pool with centrally optimized scheduling. The second examines the transmission market in some detail, and the third examines the energy market similarly. The final two sections examine linkages among multiple markets in decentralized designs, focusing on the role of contractual commitments and the requirements for inter-market efficiency. To establish a point of departure: the current restructuring of electricity markets is consistent with the analysis by Joskow and Schmalensee in Markets for Power, 1983. They foresaw competitive markets for generation, transmission facilities operated on an open-access common-carrier basis, and retail competition among power marketers that rely on regulated utility distribution companies for delivery. Regulation of the wholesale and retail energy markets would be reduced to structural requirements and operational guidelines and monitoring, while retaining substantial regulation of the “wires” markets for transmission and distribution. These changes entail unbundling energy from T&D, thereby reversing the vertical integration of utilities. The current issues that I address here concern mainly the organization of the wholesale markets for energy and transmission, interpreted as including ancillary services and other requirements for system reliability and security. The examination of these issues in Canada can benefit from the history of restructuring in the provinces, such as Alberta, and other countries such as Britain, Australia, New Zealand, and Norway, newly implemented designs in countries such as Spain, and current developments in several states in the U.S. I emphasize the implications of the general principles of market design based on ideas from economics and game theory, but on some practical aspects my views are parochial because my practical experience has been mostly in California. |
Pesticide and antibiotic residues in freshwater aquaculture fish: Chemical risk assessment from farm to table | Qualitative chemical risk assessments of freshwater aquaculture fish from farms, markets and food premises were carried out in six main production districts in Malaysia. Three species of fish were involved in this study: red tilapia (Oreochromis sp. red hybrids), keli (Clarias spp.) and patin (Pangasius sutchii). A total of 240 fresh fish (90 red tilapia, 60 keli and 90 patin) were randomly collected directly from their farms (earth ponds, floating net cages and ex-mining pools). Another 240 fish with the same ratios as the farm fish samples were also randomly collected from various markets (wet markets, local markets called ‘pasar tani’ and night markets). The same number of samples, with the same ratio, of ready-to-eat fish from food premises (restaurants, food stalls and night market food stalls) were also collected. The fish were analyzed for chemical hazards, including pesticide residues and antibiotic residues. All data were then statistically analyzed. The results revealed low chemical hazards in freshwater aquaculture fish: pesticide and antibiotic residues were detected in only 2.9% and 5.8% of farm fish samples, respectively. |
The Effect of 10-K Restatements on Firm Value, Information Asymmetries, and Investors’ Reliance on Earnings | Restating 10-Ks has become an increasingly common phenomenon in financial reporting. Restatements clearly signal that the firm’s prior financial statements were not credible and were of relatively lower “quality”. In this study, we examine the effect of restatements on investors’ and dealers’ perceptions of the firm. First, we examine the market returns and the bid-ask spread effects at the announcement of the accounting problem that leads to restatement. We find negative market returns for accounting problem announcements, and we find that the negative reaction is most pronounced for firms with revenue recognition issues. We also find an increase in spreads surrounding the announcement of revenue recognition problems. Second, we examine returns and spreads from the announcement of the restatement to the filing of the restated financial statements. We find a significant negative market reaction and a larger negative reaction for firms with revenue recognition problems. We find no change in spreads from before the announcement of the accounting problem to after the restatement is filed. Finally, we examine the effect of the restatement on earnings response coefficients, and find that the market reacts less to earnings after a restatement than to earnings prior to a restatement. In general, these results indicate that investors and dealers react negatively to restatements and are more concerned with revenue recognition problems than with other financial reporting errors. |
Exploring Domain Knowledge for Affective Video Content Analyses | Well-established film grammar is often used to change visual and audio elements of videos to invoke audiences' emotional experience. Such film grammar, referred to as domain knowledge, is crucial for affective video content analyses, but has not yet been thoroughly explored. In this paper, we propose a novel method to analyze video affective content by exploring domain knowledge. Specifically, taking visual elements as an example, we first infer probabilistic dependencies between visual elements and emotions from the summarized film grammar. Then, we transfer the domain knowledge into constraints, and formulate affective video content analysis as a constrained optimization problem. Experiments on the LIRIS-ACCEDE database and the DEAP database demonstrate that the proposed affective content analysis method can successfully leverage well-established film grammar for better emotion classification from video content. |
How are WEEE doing? A global review of the management of electrical and electronic wastes. | This paper presents and critically analyses the current waste electrical and electronic equipment (WEEE) management practices in various countries and regions. Global trends in (i) the quantities and composition of WEEE; and (ii) the various strategies and practices adopted by selected countries to handle, regulate and prevent WEEE are comprehensively examined. The findings indicate that for (i), the quantities of WEEE generated are high and/or on the increase. IT and telecommunications equipment seem to be the dominant WEEE being generated, at least in terms of numbers, in Africa, in the poorer regions of Asia and in Latin/South America. However, the paper contends that the reported figures on quantities of WEEE generated may be grossly underestimated. For (ii), with the notable exception of Europe, many countries seem to be lacking or are slow in initiating, drafting and adopting WEEE regulations. Handling of WEEE in developing countries is typified by a high rate of repair and reuse within a largely informal recycling sector. In both developed and developing nations, the landfilling of WEEE is still a concern. It has been established that stockpiling of unwanted electrical and electronic products is common in both the USA and less developed economies. The paper also identifies and discusses four common priority areas for WEEE across the globe, namely: (i) resource depletion; (ii) ethical concerns; (iii) health and environmental issues; and (iv) WEEE take-back strategies. Further, the paper discusses future perspectives on WEEE generation, treatment, prevention and regulation. Four key conclusions are drawn from this review: global amounts of WEEE will continue to grow unabated for some time due to the emergence of new technologies and affordable electronics; informal recycling in developing nations has the potential to make a valuable contribution if its operations can be changed with strict safety standards as a priority; the pace of initiating and enacting WEEE-specific legislation is very slow across the globe and in some cases non-existent; and globally, there is a need for more accurate and current data on the amounts and types of WEEE generated. |
Model-based Software Testing | Software testing requires the use of a model to guide such efforts as test selection and test verification. Often, such models are implicit, existing only in the head of a human tester who applies test inputs in an ad hoc fashion. The mental model testers build encapsulates application behavior, allowing testers to understand the application’s capabilities and more effectively test its range of possible behaviors. When these mental models are written down, they become sharable, reusable testing artifacts. In this case, testers are performing what has come to be known as model-based testing. Model-based testing has recently gained attention with the popularization of models (including UML) in software design and development. There are a number of models of software in use today, a few of which make good models for testing. This paper introduces model-based testing and discusses its tasks in general terms with finite state models (arguably the most popular software models) as examples. In addition, the advantages, difficulties, and shortcomings of various model-based approaches are concisely presented. Finally, we close with a discussion of where model-based testing fits in the present and future of software engineering. |
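To make the finite-state-model idea concrete, here is a small sketch of deriving test input sequences from an explicit state model; the tiny login FSM and the transition-coverage criterion are invented for illustration and are not from the paper.

```python
from collections import deque

# A hypothetical application model: (state, event) -> next state.
FSM = {
    ("logged_out", "enter_credentials"): "credentials_entered",
    ("credentials_entered", "submit_valid"): "logged_in",
    ("credentials_entered", "submit_invalid"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

def transition_cover(fsm, start):
    """BFS that yields one input sequence exercising each transition once."""
    paths = {start: []}                      # shortest event sequence to each state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, event), dst in fsm.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [event]
                queue.append(dst)
    # extend each reachable prefix by one transition -> one test per transition
    return [paths[src] + [event] for (src, event) in fsm if src in paths]

for test in transition_cover(FSM, "logged_out"):
    print(" -> ".join(test))
```

Once the model is written down like this, the same artifact can drive test generation, test verification against expected end states, and regression reuse.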
Synchronous mucinous metaplasia and neoplasia of the female genital tract with external urethral meatus neoplasm: A case report | •We present a case of multiple mucinous metaplasia and neoplasia of the cervix, endometrium, fallopian tube, ovary, and mesenterium with an external urethral meatus neoplasm.•Immunohistochemistry showed almost the same pattern in each neoplasm.•PCR-direct sequencing showed no KRAS or GNAS mutations.•This report suggests the possibility of synchronous mucinous metaplasia and neoplasia "beyond" the female genital tract. |
Cannabidiol: pharmacology and potential therapeutic role in epilepsy and other neuropsychiatric disorders. | To present a summary of current scientific evidence about the cannabinoid, cannabidiol (CBD), with regard to its relevance to epilepsy and other selected neuropsychiatric disorders. We summarize the presentations from a conference in which invited participants reviewed relevant aspects of the physiology, mechanisms of action, pharmacology, and data from studies with animal models and human subjects. Cannabis has been used to treat disease since ancient times. Δ9-Tetrahydrocannabinol (Δ9-THC) is the major psychoactive ingredient and CBD is the major nonpsychoactive ingredient in cannabis. Cannabis and Δ9-THC are anticonvulsant in most animal models but can be proconvulsant in some healthy animals. The psychotropic effects of Δ9-THC limit tolerability. CBD is anticonvulsant in many acute animal models, but there are limited data in chronic models. The antiepileptic mechanisms of CBD are not known, but may include effects on the equilibrative nucleoside transporter; the orphan G-protein-coupled receptor GPR55; the transient receptor potential of vanilloid type-1 channel; the 5-HT1a receptor; and the α3 and α1 glycine receptors. CBD has neuroprotective and anti-inflammatory effects, and it appears to be well tolerated in humans, but small and methodologically limited studies of CBD in human epilepsy have been inconclusive. More recent anecdotal reports of high-ratio CBD:Δ9-THC medical marijuana have claimed efficacy, but the studies were not controlled. CBD bears investigation in epilepsy and other neuropsychiatric disorders, including anxiety, schizophrenia, addiction, and neonatal hypoxic-ischemic encephalopathy. However, we lack data from well-powered double-blind randomized, controlled studies on the efficacy of pure CBD for any disorder. Initial dose-tolerability and double-blind randomized, controlled studies focusing on target intractable epilepsy populations such as patients with Dravet and Lennox-Gastaut syndromes are being planned. Trials in other treatment-resistant epilepsies may also be warranted. |
Versu—A Simulationist Storytelling System | Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection. |
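A toy sketch of the action-selection idea just described: social practices only *suggest* affordances, and each agent independently picks the highest-utility one. The practices, actions, and utility numbers below are invented for illustration and are not Versu's actual content or API.

```python
def afforded_actions(active_practices):
    """Collect the actions currently suggested by all running practices."""
    return [a for practice in active_practices for a in practice["affordances"]]

def choose_action(agent, active_practices):
    """Utility-based reactive selection: the agent, not the practice, decides."""
    candidates = afforded_actions(active_practices) + agent["private_actions"]
    return max(candidates, key=lambda action: agent["utility"](action))

# Two hypothetical concurrent social practices and one agent.
tea_party = {"affordances": ["pour_tea", "offer_biscuit", "make_small_talk"]}
quarrel = {"affordances": ["apologize", "storm_out"]}
veronica = {
    "private_actions": ["sigh"],
    "utility": {"pour_tea": 0.2, "offer_biscuit": 0.1, "make_small_talk": 0.4,
                "apologize": 0.05, "storm_out": 0.7, "sigh": 0.3}.get,
}
print(choose_action(veronica, [tea_party, quarrel]))  # -> storm_out
```

The point of the split is replayability: swapping in an agent with different utilities changes the drama without touching the practices.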
The Effects of Hyperparameters on SGD Training of Neural Networks | The performance of neural network classifiers is determined by a number of hyperparameters, including learning rate, batch size, and depth. A number of attempts have been made to explore these parameters in the literature, and at times, to develop methods for optimizing them. However, exploration of parameter spaces has often been limited. In this note, I report the results of large scale experiments exploring these different parameters and their interactions.
1 Datasets and Libraries
All experiments reported here were carried out using the Torch library [1] and CUDA (some of the experiments have been reproduced on a smaller scale with other libraries). The dataset for all the experiments is MNIST [3, 2]. Characters were deskewed prior to all experiments. Deskewing significantly reduces error rates in nearest neighbor classifiers. Skew corresponds to a simple one-parameter family of linear transformations in feature space and causes decision regions to become highly anisotropic. Without deskewing, differences in performance between different architectures might primarily reduce to their ability to “learn deskewing”. With deskewing, MNIST character classification becomes more of an instance of a typical classification problem. Prior results on classifying deskewed MNIST data, both with neural networks and with other methods, are shown in Table 1.

Method | Test Error (%) | Preprocessing | Reference
---|---|---|---
Reduced Set SVM, deg 5 polynomial | 1.0 | deskewing | LeCun et al. 1998
SVM, deg 4 polynomial | 1.1 | deskewing | LeCun et al. 1998
K-nearest-neighbors, L3 | 1.22 | deskewing, noise removal, blurring, 2 pixel shift | Kenneth Wilder, U. Chicago
K-nearest-neighbors, L3 | 1.33 | deskewing, noise removal, blurring, 1 pixel shift | Kenneth Wilder, U. Chicago
2-layer NN, 300 HU | 1.6 | deskewing | LeCun et al. 1998

Table 1: Other previously reported results on the MNIST database.

2 Logistic vs Softmax Outputs
Multi-Layer Perceptrons (MLPs) used for classification usually attempt to approximate posterior probabilities and use those as their discriminant function. Two common approaches to this are the use of logistic output units trained with a least square error measure (“logistic outputs”) and a softmax output layer (“softmax outputs”). In the limit of infinite amounts of training data, both approaches converge to true posterior probability estimates. Softmax output layers have the property that they are guaranteed to produce a normalized posterior probability distribution across all classes, while least square regression with logistic output units generates independent probability estimates for each class membership without any guarantees that these probabilities sum up to one. Softmax is often preferred, although there is no obvious theoretical reason why it should yield better discriminant functions or lower classification error for finite training sets. In OCR and speech recognition, some practitioners have observed that logistic outputs yield better posterior probability estimates and better results when combined with probabilistic language models. In addition, when the sum of the posterior probability estimates derived from logistic outputs differs significantly from unity, that is a strong indication that the input lies outside the training set and should be rejected. 
Figure 1 shows a scatterplot of test vs training error for a large number of MLPs with one hidden layer at different learning rates, different numbers of hidden units, and different batch sizes. Such scatterplots show what error rates are achievable by the different architectures, hyperparameter choices, initializations, and orders of sample presentation. The lowest points in the vertical direction indicate the lowest test set error achievable by the architecture in this set of experiments. The scatterplot shows that logistic outputs achieve test set error rates of about 1.0% vs 1.1% for softmax outputs. At the same time, logistic outputs never achieve zero percent training set error, while softmax outputs frequently do. In order to ascertain that the difference in test set error between the two |
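A small numerical illustration of the two output layers compared above: softmax always yields a normalized distribution, while independent logistic outputs need not sum to one, and a sum far from unity can flag inputs outside the training distribution. The pre-activation values are arbitrary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([2.0, -1.0, 0.5])       # pre-activations for three classes
p_softmax = softmax(z)
p_logistic = logistic(z)

print(p_softmax, p_softmax.sum())    # sums to exactly 1.0
print(p_logistic, p_logistic.sum())  # sum != 1.0; large deviation can signal outliers
```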
Internal Uniplanar Antenna for LTE/WWAN/GPS/GLONASS Applications in Tablet/Laptop Computers | This letter presents an internal uniplanar small-size multiband antenna for tablet/laptop computer applications. In addition to common LTE/WWAN channels, the proposed antenna covers commercial GPS/GLONASS frequency bands. The antenna comprises three sections: coupled-fed, shorting, and low-frequency spiral strips, with a size of 50 × 11 × 0.8 mm2. With the aid of the spiral strip, lower-band operation at 900 MHz is achieved. The two operating frequency bands cover 870-965 and 1556-2480 MHz. In order to validate the simulation results, a prototype of the proposed printed antenna was fabricated and tested. Good agreement between the simulation and measurement results is obtained. |
Interpreting and teaching the Bible in Latin America | Biblical interpretation among indigenous groups in Latin America bears a distinctive pedagogical orientation. To teach and understand the Bible as the word of God is to enable others to see, judge, and transform the world as the Book of Life. |
Cationic screening of charged surface groups (carboxylates) affects electron transfer steps in photosystem-II water oxidation and quinone reduction. | The functional or regulatory role of long-distance interactions between protein surface and interior represents an insufficiently understood aspect of protein function. Cationic screening of surface charges determines the morphology of thylakoid membrane stacks. We show that it also influences directly the light-driven reactions in the interior of photosystem II (PSII). After laser-flash excitation of PSII membrane particles from spinach, time courses of the delayed recombination fluorescence (10μs-10ms) and the variable chlorophyll-fluorescence yield (100μs-1s) were recorded in the presence of chloride salts. At low salt-concentrations, a stimulating effect was observed for the S-state transition efficiency, the time constant of O2-formation at the Mn4Ca-complex of PSII, and the halftime of re-oxidation of the primary quinone acceptor (Qa) by the secondary quinone acceptor (Qb). The cation valence determined the half-effect concentrations of the stimulating salt effect, which were around 6μM, 200μM and 10mM for trivalent (LaCl3), bivalent (MgCl2, CaCl2), and monovalent cations (NaCl, KCl), respectively. A depressing high-salt effect also depended strongly on the cation valence (onset concentrations around 2mM, 50mM, and 500mM). These salt effects are proposed to originate from electrostatic screening of negatively charged carboxylate sidechains, which are found in the form of carboxylate clusters at the solvent-exposed protein surface. We conclude that the influence of electrostatic screening by solvent cations manifests a functionally relevant long-distance interaction between protein surface and electron-transfer reactions in the protein interior. A relation to regulation and adaptation in response to environmental changes is conceivable. |
Towards Diverse and Natural Image Descriptions via a Conditional GAN | Despite the substantial progress in recent years, image captioning techniques are still far from perfect. Sentences produced by existing methods, e.g. those based on RNNs, are often overly rigid and lacking in variability. This issue is related to a learning principle widely used in practice, that is, to maximize the likelihood of training samples. This principle encourages high resemblance to the “ground-truth” captions, while suppressing other reasonable descriptions. Conventional evaluation metrics, e.g. BLEU and METEOR, also favor such restrictive methods. In this paper, we explore an alternative approach, with the aim of improving naturalness and diversity – two essential properties of human expression. Specifically, we propose a new framework based on Conditional Generative Adversarial Networks (CGAN), which jointly learns a generator to produce descriptions conditioned on images and an evaluator to assess how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial. We overcome the difficulty by Policy Gradient, a strategy stemming from Reinforcement Learning, which allows the generator to receive early feedback along the way. We tested our method on two large datasets, where it performed competitively against real people in our user study and outperformed other methods on various tasks. |
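A bare-bones sketch of the Policy Gradient idea mentioned above: sample a discrete word, then move its log-probability in proportion to the evaluator's reward. The one-step "generator" over a four-word vocabulary and the hard-coded reward are stand-ins for the paper's full image-conditioned CGAN.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["a", "dog", "runs", "happily"]
logits = np.zeros(len(vocab))                  # the generator's trainable parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(200):
    probs = softmax(logits)
    w = rng.choice(len(vocab), p=probs)        # sample an action (a word)
    reward = 1.0 if vocab[w] == "dog" else 0.0 # placeholder for the evaluator's score
    grad_logp = -probs                         # d log p(w) / d logits
    grad_logp[w] += 1.0                        #   = onehot(w) - probs
    logits += 0.1 * reward * grad_logp         # REINFORCE ascent step

print(softmax(logits).round(2))                # probability mass concentrates on "dog"
```

The same update shape is what lets a sequence generator learn from a non-differentiable evaluator signal received along the way.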
Visualizing Topic Models | Managing large collections of documents is an important problem for many areas of science, industry, and culture. Probabilistic topic modeling offers a promising solution. Topic modeling is an unsupervised machine learning method that learns the underlying themes in a large collection of otherwise unorganized documents. This discovered structure summarizes and organizes the documents. However, topic models are high-level statistical tools—a user must scrutinize numerical distributions to understand and explore their results. In this paper, we present a method for visualizing topic models. Our method creates a navigator of the documents, allowing users to explore the hidden structure that a topic model discovers. These browsing interfaces reveal meaningful patterns in a collection, helping end-users explore and understand its contents in new ways. We provide open source software of our method. Understanding and navigating large collections of documents has become an important activity in many spheres. However, many document collections are not coherently organized, and organizing them by hand is impractical. We need automated ways to discover and visualize the structure of a collection in order to more easily explore its contents. Probabilistic topic modeling is a set of machine learning tools that may provide a solution (Blei and Lafferty 2009). Topic modeling algorithms discover a hidden thematic structure in a collection of documents; they find salient themes and represent each document as a combination of themes. However, topic models are high-level statistical tools. A user must scrutinize numerical distributions to understand and explore their results; the raw output of the model is not enough to create an easily explored corpus. We propose a method for using a fitted topic model to organize, summarize, visualize, and interact with a corpus. With our method, users can explore the corpus, moving between high-level discovered summaries (the “topics”) and the documents themselves, as Figure 1 illustrates. Our design is centered around the idea that the model both summarizes and organizes the collection. Our method translates these representations into a visual system for exploring a collection, but visualizing this structure is not enough. The discovered structure induces relationships—between topics and articles, and between articles and articles—which lead to interactions in the visualization. Thus, we have three main goals in designing the visualization: summarize the corpus for the user; reveal the relationships between the content and summaries; and reveal the relationships across content. We aim to present these in ways that are accessible and useful to a spectrum of users, not just machine learning experts. |
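A minimal sketch of the first step behind such a navigator: fit a topic model, then surface each topic's top words as its summary label. The toy corpus is invented; the paper's browser is built on much larger collections and a richer interface.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "gene expression in cancer cells",
    "stock market volatility and trading",
    "protein folding and gene regulation",
    "interest rates move the stock market",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:4]            # indices of the top-weighted words
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```

Linking each document to its dominant topics (via `lda.transform(X)`) then induces the topic-to-article and article-to-article relationships that the visualization exposes.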
Plasmonic V-groove waveguides with Bragg grating filters via nanoimprint lithography. | We demonstrate spectral filtering with state-of-the-art Bragg gratings in plasmonic V-groove waveguides fabricated by wafer scale processing based on nanoimprint lithography. Transmission spectra of the devices having 16 grating periods exhibit spectral rejection of the channel plasmon polaritons with 8.2 dB extinction ratio and -3 dB bandwidth of Δλ = 39.9 nm near telecommunications wavelengths. Near-field scanning optical microscopy measurements verify spectral reflection from the grating structures, and the oscillations of propagating modes along grating-less V-grooves correspond well with effective refractive index values calculated by finite element simulations in COMSOL. The results represent advancement towards the implementation of plasmonic V-grooves with greater functional complexity and mass-production compatibility. |
Shaping smoking cessation in hard-to-treat smokers. | OBJECTIVE
Contingency management (CM) effectively treats addictions by providing abstinence incentives. However, CM fails for many who do not readily become abstinent and earn incentives. Shaping may improve outcomes in these hard-to-treat (HTT) individuals. Shaping sets intermediate criteria for incentive delivery between the present behavior and total abstinence. This should result in HTT individuals having improving, rather than poor, outcomes. We examined whether shaping improved outcomes in HTT smokers (never abstinent during a 10-visit baseline).
METHOD
Smokers were stratified into HTT (n = 96) and easier-to-treat (ETT [abstinent at least once during baseline]; n = 50) and randomly assigned to either CM or CM with shaping (CMS). CM provided incentives for breath carbon monoxide (CO) levels <4 ppm (approximately 1 day of abstinence). CMS shaped abstinence by providing incentives for COs lower than the 7th lowest of the participant's last 9 samples or <4 ppm. Interventions lasted for 60 successive weekday visits.
RESULTS
Cluster analysis identified 4 groups of participants: stable successes, improving, deteriorating, and poor outcomes. In comparison with ETT, HTT participants were more likely to belong to 1 of the 2 unsuccessful clusters (odds ratio [OR] = 8.1, 95% CI [3.1, 21]). This difference was greater with CM (OR = 42, 95% CI [5.9, 307]) than with CMS, in which the difference between HTT and ETT participants was not significant. Assignment to CMS predicted membership in the improving (p = .002) as compared with the poor outcomes cluster.
CONCLUSION
Shaping can increase CM's effectiveness for HTT smokers. |
Self-Compassion, Stress, and Coping. | People who are high in self-compassion treat themselves with kindness and concern when they experience negative events. The present article examines the construct of self-compassion from the standpoint of research on coping in an effort to understand the ways in which people who are high in self-compassion cope with stressful events. Self-compassionate people tend to rely heavily on positive cognitive restructuring but do not appear to differ from less self-compassionate people in the degree to which they cope through problem-solving and distraction. Existing evidence does not show clear differences in the degree to which people who are low vs. high in self-compassion seek support as a coping strategy, but more research is needed. |
A 3D Printer for Interactive Electromagnetic Devices | We introduce a new form of low-cost 3D printer to print interactive electromechanical objects with wound-in-place coils. At the heart of this printer is a mechanism for depositing wire within a five degree of freedom (5DOF) fused deposition modeling (FDM) 3D printer. Copper wire can be used with this mechanism to form coils which induce magnetic fields as a current is passed through them. Soft iron wire can additionally be used to form components with high magnetic permeability which are thus able to shape and direct these magnetic fields to where they are needed. When fabricated with structural plastic elements, this allows simple but complete custom electromagnetic devices to be 3D printed. As examples, we demonstrate the fabrication of a solenoid actuator for the arm of a Lucky Cat figurine, a 6-pole stepper motor stator, a reluctance motor rotor and a ferrofluid display. In addition, we show how printed coils which generate small currents in response to user actions can be used as input sensors in interactive devices. |
Piriformis syndrome: anatomic considerations, a new injection technique, and a review of the literature. | BACKGROUND
Piriformis syndrome can be caused by anatomic abnormalities. The treatments of piriformis syndrome include the injection of steroid into the piriformis muscle and near the area of the sciatic nerve. These techniques use either fluoroscopy and muscle electromyography to identify the piriformis muscle or a nerve stimulator to stimulate the sciatic nerve.
METHODS
The authors performed a cadaver study and noted anatomic variations of the piriformis muscle and sciatic nerve. To standardize their technique of injection, they also noted the distance from the lower border of the sacroiliac joint (SIJ) to the sciatic nerve. They retrospectively reviewed the charts of 19 patients who had received piriformis muscle injections, noting the site of needle insertion in terms of the distance from the lower border of the SIJ and the depth of needle insertion at which the motor response of the foot was elicited. The authors tabulated the response of the patients to the injection, any associated diagnoses, and previous treatments that these patients had before the injection. Finally, they reviewed the literature on piriformis syndrome, a rare cause of buttock pain and sciatica.
RESULTS
In the cadavers, the distance from the lower border of the SIJ to the sciatic nerve was 2.9 +/- 0.6 (1.8-3.7) cm laterally and 0.7 +/- 0.7 (0.0-2.5) cm caudally. In 65 specimens, the sciatic nerve passed anterior and inferior to the piriformis. In one specimen, the muscle was bipartite and the two components of the sciatic nerve were separate, with the tibial nerve passing below the piriformis and the peroneal nerve passing between the two components of the muscle. In the patients who received the injections, the site of needle insertion was 1.5 +/- 0.8 (0.4-3.0) cm lateral and 1.2 +/- 0.6 (0.5-2.0) cm caudal to the lower border of the SIJ as seen on fluoroscopy. The needle was inserted at a depth of 9.2 +/- 1.5 (7.5-13.0) cm to stimulate the sciatic nerve. Patients had comorbid etiologies including herniated disc, failed back surgery syndrome, spinal stenosis, facet syndrome, SIJ dysfunction, and complex regional pain syndrome. Sixteen of the 19 patients responded to the injection, with improvements lasting from a few hours to 3 months.
CONCLUSIONS
Anatomic abnormalities causing piriformis syndrome are rare. The technique used in the current study was successful in injecting the medications near the area of the sciatic nerve and into the piriformis muscle. |
The Stroke Outcomes and Neuroimaging of Intracranial Atherosclerosis (SONIA) trial. | BACKGROUND
Transcranial Doppler ultrasound (TCD) and magnetic resonance angiography (MRA) can identify intracranial atherosclerosis but have not been rigorously validated against the gold standard, catheter angiography. The WASID trial (Warfarin Aspirin Symptomatic Intracranial Disease) required performance of angiography to verify the presence of intracranial stenosis, allowing for prospective evaluation of TCD and MRA. The aim of the Stroke Outcomes and Neuroimaging of Intracranial Atherosclerosis (SONIA) trial was to define abnormalities on TCD/MRA and determine how well they identify 50 to 99% intracranial stenosis of large proximal arteries on catheter angiography.
STUDY DESIGN
SONIA standardized the performance and interpretation of TCD, MRA, and angiography. Study-wide cutpoints defining positive TCD/MRA were used. Hard copies of TCD/MRA studies were centrally read, blinded to the results of angiography.
RESULTS
SONIA enrolled 407 patients at 46 sites in the United States. For prospectively tested noninvasive test cutpoints, positive predictive values (PPVs) and negative predictive values (NPVs) were TCD, PPV 36% (95% CI: 27 to 46); NPV, 86% (95% CI: 81 to 89); MRA, PPV 59% (95% CI: 54 to 65); NPV, 91% (95% CI: 89 to 93). For cutpoints modified to maximize PPV, they were TCD, PPV 50% (95% CI: 36 to 64), NPV 85% (95% CI: 81 to 88); MRA PPV 66% (95% CI: 58 to 73), NPV 87% (95% CI: 85 to 89). For each test, a characteristic performance curve showing how the predictive values vary with a changing test cutpoint was obtained.
CONCLUSIONS
Both transcranial Doppler ultrasound and magnetic resonance angiography noninvasively identify 50 to 99% intracranial large vessel stenoses with substantial negative predictive value. The Stroke Outcomes and Neuroimaging of Intracranial Atherosclerosis trial methods allow transcranial Doppler ultrasound and magnetic resonance angiography to reliably exclude the presence of intracranial stenosis. Abnormal findings on transcranial Doppler ultrasound or magnetic resonance angiography require a confirmatory test such as angiography to reliably identify stenosis. |
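A worked sketch of the predictive values reported above, computed from a 2x2 confusion table. The counts below are invented for illustration (chosen to roughly reproduce TCD's reported PPV/NPV) and are not SONIA's actual data.

```python
def predictive_values(tp, fp, tn, fn):
    ppv = tp / (tp + fp)   # P(stenosis on angiography | positive TCD/MRA)
    npv = tn / (tn + fn)   # P(no stenosis on angiography | negative TCD/MRA)
    return ppv, npv

# Hypothetical counts: 30 true positives, 53 false positives,
# 258 true negatives, 42 false negatives.
ppv, npv = predictive_values(tp=30, fp=53, tn=258, fn=42)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # -> PPV = 36%, NPV = 86%
```

The high NPV is what lets a negative noninvasive test reliably exclude stenosis, while the modest PPV is why a positive result still requires confirmatory angiography.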
Local, Distortional, and Euler Buckling of Thin-Walled Columns | Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, distortional, and Euler (i.e., flexural or flexural-torsional) buckling. Closed-form predictions of the buckling stress in the local mode, including the interaction of the connected elements, and in the distortional mode, including consideration of the elastic and geometric stiffness at the web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments show that the postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specifications for cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. Existing experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design methods and provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional, and Euler buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systematic error, and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/(ASCE)0733-9445(2002)128:3(289) CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel. |
Implicit cognition and addiction: a tool for explaining paradoxical behavior. | Research on implicit cognition and addiction has expanded greatly during the past decade. This research area provides new ways to understand why people engage in behaviors that they know are harmful or counterproductive in the long run. Implicit cognition takes a different view from traditional cognitive approaches to addiction by assuming that behavior is often not a result of a reflective decision that takes into account the pros and cons known by the individual. Instead of a cognitive algebra integrating many cognitions relevant to choice, implicit cognition assumes that the influential cognitions are the ones that are spontaneously activated during critical decision points. This selective review highlights many of the consistent findings supporting predictive effects of implicit cognition on substance use and abuse in adolescents and adults; reveals a recent integration with dual-process models; outlines the rapid evolution of different measurement tools; and introduces new routes for intervention. |
Multi-Task Learning and Weighted Cross-Entropy for DNN-Based Keyword Spotting | We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross-entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel: the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSR-initialization and multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight to input frames corresponding to keyword phone targets, with the motivation of balancing the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of the three techniques (LVCSR-initialization, multi-task training, and weighted cross-entropy) gives the best results, with a significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates. |
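A sketch of the weighted cross-entropy idea above: frames aligned to keyword phone targets get a higher loss weight than background frames. The weight value, the toy posteriors, and the frame labels are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def weighted_cross_entropy(probs, targets, is_keyword_frame, kw_weight=5.0):
    """Mean negative log-likelihood with up-weighted keyword frames.

    probs:   (T, C) per-frame posteriors from the DNN
    targets: (T,)   integer phone-state labels
    """
    nll = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    weights = np.where(is_keyword_frame, kw_weight, 1.0)
    return (weights * nll).sum() / weights.sum()

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
targets = np.array([0, 1, 2])
is_kw = np.array([True, True, False])   # first two frames belong to the keyword
print(weighted_cross_entropy(probs, targets, is_kw))
```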
A Q-band CMOS LNA with noise cancellation | In this paper, a Q-band CMOS low noise amplifier (LNA) exploiting transformer positive-negative feedback and noise cancelling is presented. The proposed low noise amplifier consists of a positive transformer feedback to achieve noise reduction and a negative transformer feedback to obtain good reverse isolation and stability. The noise cancellation technique is applied in this LNA to achieve a low noise figure. The LNA has been fabricated in a standard commercial 90 nm technology. Based on measurements, the proposed LNA achieves a voltage gain of 11.5 dB, a noise figure of 6.5 dB and an input P1dB of -18 dBm. It consumes 11.5 mA from a 1.2 V supply, and the total area of the design is 0.6 × 0.7 mm2. |
Diffusion of innovations and HIV/AIDS. | As the HIV/AIDS epidemic continues its relentless spread in many parts of the world, DOI provides a useful framework for analyzing the difficulties in achieving behavior change necessary to reduce HIV rates. The DOI concepts most relevant to this question include communication channels, the innovation-decision process, homophily, the attributes of the innovation, adopter categories, and opinion leaders. The preventive measures needed to halt the transmission of HIV constitute a "preventive innovation." This article describes the attributes of this preventive innovation in terms of relative advantage, compatibility, complexity, trialability, and observability. It reviews studies that incorporated DOI into HIV/AIDS behavior change interventions, both in Western countries and in the developing world. Finally, it discusses possible reasons that the use of DOI has been fairly limited to date in HIV/AIDS prevention interventions in developing countries. |
Injecting Logical Background Knowledge into Embeddings for Relation Extraction | Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae. |
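A toy sketch of combining factorization scores with a first-order rule, in the spirit of the paper: besides fitting observed facts, a hinge term penalizes entity-pairs where the body of an implication scores higher than its head. The loss form, the relations, and the tiny random setup are illustrative assumptions, not the paper's exact training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
pair_vec = rng.normal(size=d)                      # embedding of one entity-pair
rel = {"spouse_of": rng.normal(size=d),
       "married_to": rng.normal(size=d)}           # relation embeddings

def score(pair, r):
    """Factorization score: sigmoid of the pair-relation dot product."""
    return 1.0 / (1.0 + np.exp(-pair @ rel[r]))

def rule_loss(pair):
    """Hinge on the rule spouse_of(X,Y) => married_to(X,Y):
    the body's score must not exceed the head's score."""
    return max(0.0, score(pair, "spouse_of") - score(pair, "married_to"))

print("spouse_of:", score(pair_vec, "spouse_of"),
      "married_to:", score(pair_vec, "married_to"),
      "rule loss:", rule_loss(pair_vec))
```

Minimizing such a term alongside the usual factual loss is one simple way logic can shape embeddings even for relations with no distant-supervision alignments.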
A single-feeding circularly polarized microstrip antenna with the effect of hybrid feeding | A single series-feed cross-aperture coupled microstrip antenna with the effect of hybrid feeding is proposed and demonstrated. To understand this antenna better, its characteristics under variation of the design parameters are shown. The proposed antenna has the following advantages from the effect of hybrid feeding: improved axial ratio bandwidth (4.6%); high gain (8 dBi); flat 3 dB gain bandwidth (above 16.7%). In the measured radiation patterns, the antenna has a 3 dB beamwidth of ±30° and a good front-to-back ratio (F/B) of 20 dB. |
Spatio-Temporal Image Boundary Extrapolation | Boundary prediction in images as well as video has been a very active topic of research, and organizing visual information into boundaries and segments is believed to be a cornerstone of visual perception. While prior work has focused on predicting boundaries for observed frames, our work aims at predicting boundaries of future unobserved frames. This requires our model to learn about the fate of boundaries and extrapolate motion patterns. We experiment on an established real-world video segmentation dataset, which provides a testbed for this new task. We show for the first time spatio-temporal boundary extrapolation in this challenging scenario. Furthermore, we show long-term prediction of boundaries in situations where the motion is governed by the laws of physics. We successfully predict boundaries in a billiard scenario without any assumptions of a strong parametric model or any object notion. We argue that our model has, with minimalistic model assumptions, derived a notion of “intuitive physics” that can be applied to novel scenes. |
MULTIPLE JET IMPINGEMENT − A REVIEW | Jet impingement systems provide an effective means for the enhancement of convective processes due to the high heat and mass transfer rates that can be achieved. Impinging jets are used today in a wide range of industrial applications. In the annealing and tempering of materials, impinging jet systems are finding use in the cooling of hot metal, plastic, or glass sheets as well as in the drying of paper and fabric. Compact heat exchangers, with applications in the aeronautical or the automotive sector, often use multiple impinging jets in dense arrangements. Impingement systems in micro-scale applications are commonly used for the cooling of electronic components, particularly electronic chips. In gas turbine applications (the focus of this investigation), jet impingement has been routinely used for a long time. Requirements are being imposed by demands for increased power output and efficiency as well as for reduced emissions. High thermal efficiency can be realized by increasing turbine inlet temperatures and compressor ratios. As a result, many gas turbine components, such as rotor disks, turbine vanes and blades, or combustion chamber walls, are operated at temperatures well above the highest allowable material limits. In order to assure durability and long operating intervals, effective cooling concepts are required for these highly loaded components. Due to the complex geometry of the turbine system coupled with high turbulence, the understanding of the flow and heat transfer characteristics remains a challenging subject [e.g. Han and Goldstein 2001, Son et al. 2001]. |
Prevalence of and risk factors for serum antibodies against Leptospira serovars in US veterinarians. | OBJECTIVE
To determine the seroprevalence of antibodies against Leptospira serovars among veterinarians and identify risk factors for seropositivity in veterinary care settings.
DESIGN
Seroepidemiologic survey.
STUDY POPULATION
Veterinarians attending the 2006 AVMA Annual Convention.
PROCEDURES
Blood samples were collected from 511 veterinarians, and serum was harvested for a microscopic agglutination test (MAT) to detect antibodies against 6 serovars of Leptospira. Aggregate data analysis was performed to determine the ratio of the odds of a given exposure (eg, types of animals treated or biosafety practices) in seropositive individuals to the odds in seronegative individuals.
RESULTS
Evidence of previous leptospiral infection was detected in 2.5% of veterinarians. Most veterinarians reported multiple potential exposures to Leptospira spp and other pathogens in the previous 12 months, including unintentional needlestick injuries (379/511 [74.2%]), animal bites (345/511 [67.5%]), and animal scratches (451/511 [88.3%]). Treatment of a dog with an influenza-like illness within the past year was associated with seropositivity for antibodies against Leptospira spp.
CONCLUSIONS AND CLINICAL RELEVANCE
Veterinarians are at risk for leptospirosis and should take measures to decrease potential exposure to infectious agents in general. Diagnostic tests for leptospirosis should be considered when veterinarians have febrile illnesses of unknown origin. |
Timing of antiretroviral therapy after diagnosis of cryptococcal meningitis. | BACKGROUND
Cryptococcal meningitis accounts for 20 to 25% of acquired immunodeficiency syndrome-related deaths in Africa. Antiretroviral therapy (ART) is essential for survival; however, the question of when ART should be initiated after diagnosis of cryptococcal meningitis remains unanswered.
METHODS
We assessed survival at 26 weeks among 177 human immunodeficiency virus-infected adults in Uganda and South Africa who had cryptococcal meningitis and had not previously received ART. We randomly assigned study participants to undergo either earlier ART initiation (1 to 2 weeks after diagnosis) or deferred ART initiation (5 weeks after diagnosis). Participants received amphotericin B (0.7 to 1.0 mg per kilogram of body weight per day) and fluconazole (800 mg per day) for 14 days, followed by consolidation therapy with fluconazole.
RESULTS
The 26-week mortality with earlier ART initiation was significantly higher than with deferred ART initiation (45% [40 of 88 patients] vs. 30% [27 of 89 patients]; hazard ratio for death, 1.73; 95% confidence interval [CI], 1.06 to 2.82; P=0.03). The excess deaths associated with earlier ART initiation occurred 2 to 5 weeks after diagnosis (P=0.007 for the comparison between groups); mortality was similar in the two groups thereafter. Among patients with few white cells in their cerebrospinal fluid (<5 per cubic millimeter) at randomization, mortality was particularly elevated with earlier ART as compared with deferred ART (hazard ratio, 3.87; 95% CI, 1.41 to 10.58; P=0.008). The incidence of recognized cryptococcal immune reconstitution inflammatory syndrome did not differ significantly between the earlier-ART group and the deferred-ART group (20% and 13%, respectively; P=0.32). All other clinical, immunologic, virologic, and microbiologic outcomes, as well as adverse events, were similar between the groups.
CONCLUSIONS
Deferring ART for 5 weeks after the diagnosis of cryptococcal meningitis was associated with significantly improved survival, as compared with initiating ART at 1 to 2 weeks, especially among patients with a paucity of white cells in cerebrospinal fluid. (Funded by the National Institute of Allergy and Infectious Diseases and others; COAT ClinicalTrials.gov number, NCT01075152.). |
Entity recognition from clinical texts via recurrent neural network | BACKGROUND
Entity recognition is one of the most fundamental steps for text analysis and has long attracted considerable attention from researchers. In the clinical domain, various types of entities, such as clinical entities and protected health information (PHI), widely exist in clinical texts. Recognizing these entities has become a hot topic in clinical natural language processing (NLP), and a large number of traditional machine learning methods, such as support vector machines and conditional random fields, have been deployed to recognize entities from clinical texts in the past few years. In recent years, recurrent neural networks (RNNs), a class of deep learning methods that has shown great potential on many problems including named entity recognition, have also gradually been used for entity recognition from clinical texts.
METHODS
In this paper, we comprehensively investigate the performance of LSTM (long short-term memory), a representative variant of RNN, on clinical entity recognition and protected health information recognition. The LSTM model consists of three layers: an input layer, which generates a representation of each word of a sentence; an LSTM layer, which outputs another word representation sequence that captures the context information of each word in the sentence; and an inference layer, which makes tagging decisions according to the output of the LSTM layer, that is, outputs a label sequence.
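A skeletal PyTorch rendering of the three-layer architecture just described (input/embedding layer, LSTM layer, inference layer). The sizes and the plain per-token linear inference layer are simplifying assumptions; such models often use a CRF over the label sequence instead.

```python
import torch
import torch.nn as nn

class ClinicalTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)           # input layer
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)                   # LSTM layer
        self.out = nn.Linear(2 * hidden, num_labels)              # inference layer

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        h, _ = self.lstm(x)                    # context-aware word representations
        return self.out(h)                     # per-token label scores

model = ClinicalTagger(vocab_size=5000, num_labels=7)
logits = model(torch.randint(0, 5000, (2, 12)))   # two sentences of 12 tokens
print(logits.shape)                               # torch.Size([2, 12, 7])
```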
RESULTS
Experiments conducted on corpora of the 2010, 2012 and 2014 i2b2 NLP challenges show that LSTM achieves micro-average F1-scores of 85.81% on the 2010 i2b2 medical concept extraction, 92.29% on the 2012 i2b2 clinical event detection, and 94.37% on the 2014 i2b2 de-identification, which are highly competitive with other state-of-the-art systems.
CONCLUSIONS
LSTM that requires no hand-crafted feature has great potential on entity recognition from clinical texts. It outperforms traditional machine learning methods that suffer from fussy feature engineering. A possible future direction is how to integrate knowledge bases widely existing in the clinical domain into LSTM, which is a case of our future work. Moreover, how to use LSTM to recognize entities in specific formats is also another possible future direction. |
Quantum generative adversarial networks | Quantum machine learning is expected to be one of the first potential general-purpose applications of near-term quantum devices. A major recent breakthrough in classical machine learning is the notion of generative adversarial training, where the gradients of a discriminator model are used to train a separate generative model. In this work and a companion paper, we extend adversarial training to the quantum domain and show how to construct generative adversarial networks using quantum circuits. Furthermore, we also show how to compute gradients – a key element in generative adversarial network training – using another quantum circuit. We give an example of a simple practical circuit ansatz to parametrize quantum machine learning models and perform a simple numerical experiment to demonstrate that quantum generative adversarial networks can be trained successfully. |
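A minimal classical simulation of the gradient idea mentioned above: for a rotation gate, the exact derivative of a measured expectation can itself be obtained from two extra circuit evaluations (the parameter-shift rule, a standard technique for parametrized circuits; whether it matches this paper's specific construction is not claimed). Here f(θ) = ⟨Z⟩ after RY(θ) on |0⟩, which equals cos θ.

```python
import numpy as np

def expectation(theta):
    # simulate the one-qubit circuit RY(theta)|0> and measure Z
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return state[0] ** 2 - state[1] ** 2       # <Z> = |amp0|^2 - |amp1|^2

def parameter_shift_grad(theta):
    # two extra "circuit runs" yield the exact gradient
    return 0.5 * (expectation(theta + np.pi / 2) - expectation(theta - np.pi / 2))

theta = 0.7
print(parameter_shift_grad(theta), -np.sin(theta))   # both -0.6442...
```

Gradients obtained this way can then drive the adversarial updates of the quantum generator and discriminator.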
A Weighted Finite-State Transducer (WFST)-Based Language Model for Online Indic Script Handwriting Recognition | Though the design of classifiers for Indic script handwriting recognition has received considerable research attention, the use of language models has so far received little exposure. This paper attempts to develop a weighted finite-state transducer (WFST) based language model for improving the current recognition accuracy. Both the recognition hypothesis (i.e. the segmentation lattice) and the lexicon are modeled as two WFSTs. Concatenation of these two FSTs accepts valid words that are present in the recognition lattice. A third FST, called the error FST, is also introduced to retrieve certain words that were missed in the previous concatenation operation. The proposed model has been tested for online Bangla handwriting recognition, though the underlying principle can equally be applied to the recognition of offline or printed words. Experiments on a part of the ISI-Bangla handwriting database show that while the present classifiers (without using any language model) can recognize about 73% of words, the use of the recognition and lexicon FSTs improves this result by about 9%, giving an average word-level accuracy of 82%. Introduction of the error FST further improves this accuracy to 93%. This remarkable improvement in word recognition accuracy by using an FST-based language model would serve as a significant revelation for research in handwriting recognition in general, and Indic script handwriting recognition in particular. |
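A drastically simplified, automata-free sketch of the idea above: the recognition lattice proposes candidate character sequences, and only those accepted by the lexicon survive. Real WFST toolkits (e.g. OpenFst) do this with weighted transducer operations; the toy per-position lattice and lexicon here are illustrative assumptions.

```python
from itertools import product

# Hypothetical per-position character hypotheses from the recognizer.
lattice = [["b", "h"], ["o", "a"], ["o", "n"], ["k", "d"]]
lexicon = {"book", "hand", "band", "hood"}

def accepted_words(lattice, lexicon):
    """Words present in both the segmentation lattice and the lexicon."""
    return {"".join(chars) for chars in product(*lattice)} & lexicon

print(accepted_words(lattice, lexicon))   # {'book', 'hand', 'band', 'hood'}
```

An error FST, in this simplified picture, would additionally admit words within a small edit distance of some lattice path, recovering words the strict match misses.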
Testing Programs with the Aid of a Compiler | If finite input-output specifications are added to the syntax of programs, these specifications can be verified at compile time. Programs which carry adequate tests with them in this way should be resistant to maintenance errors. If the specifications are independent of program details, they are easy to give and unlikely to contain errors in common with the program. Furthermore, certain finite specifications are maximal in that they exercise the control and expression structure of a program as well as any tests can.
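A loose modern analogue of the idea, sketched in Python: a finite input-output specification travels with the function and is verified automatically when the module is loaded (the paper's mechanism lives inside a compiler; the decorator name and examples here are hypothetical):

```python
def with_spec(spec):
    """Attach a finite input-output specification to a function and
    check every case at load time, failing fast on a mismatch."""
    def wrap(fn):
        for args, expected in spec:
            got = fn(*args)
            assert got == expected, (
                f"{fn.__name__}{args}: got {got}, expected {expected}")
        return fn
    return wrap

@with_spec([((2, 3), 6), ((0, 5), 0), ((-1, 4), -4)])
def mul(a, b):
    return a * b
```

A maintenance edit that breaks `mul` against its specification is caught the next time the module is imported, which mirrors the paper's claim that programs carrying adequate tests resist maintenance errors.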
Placebo-controlled evaluation of amphetamine mixture-dextroamphetamine salts and amphetamine salts (Adderall): efficacy rate and side effects. | OBJECTIVE
The primary objective of this study was to determine the efficacy rate of Adderall in children newly diagnosed with attention-deficit/hyperactivity disorder (ADHD). A secondary objective was to address the severity of side effects associated with Adderall treatment in children with ADHD using the Barkley Side Effects Questionnaire (BSEQ).
DESIGN
Randomized, double-blind, placebo-controlled crossover trial.
SETTING
A large rural tertiary care clinic.
PATIENTS
Participants were prospectively recruited from children 5 to 18 years of age referred for academic and/or attention problems; 154 children who met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria for ADHD were enrolled.
INTERVENTIONS
Two doses of Adderall (0.15 mg/kg/dose and 0.3 mg/kg/dose) were compared with placebo in separate 2-week trials. Participants received each dosage regimen twice daily for 7 consecutive days.
MEASUREMENTS AND MAIN RESULTS
Efficacy rates were determined by comparing Adderall with placebo during the low-dose crossover sequence and also during the high-dose crossover sequence. The criteria that defined a positive response to Adderall relative to placebo (with each patient serving as their own control) included an indication of response by at least 1 of 2 parent measures of children's behavior or at least 2 of 5 teacher measures of children's behavior. The Adderall efficacy rate was determined based on parent criteria alone, teacher criteria alone, and by a more stringent definition of response that required concurrence between parent and teacher criteria. The Adderall response rate in this study ranged from 59% when requiring concurrence between parent and teacher observers, to 82% when based on parent criteria alone. Overall, 137 of 154 participants (89%) showed a positive response by either the parent or teacher response criteria. Parents completed a modified version of the BSEQ during each week of the trial. Appetite, stomachaches, and insomnia were rated as worse by parents while children were receiving either dose of Adderall; headaches were rated as worse when children were receiving the higher dose of Adderall. Parents rated certain side effects, including staring/daydreaming, sadness, euphoria, and anxious/irritable, as worse during placebo regimens.
CONCLUSIONS
We found that Adderall is highly efficacious in our population of youth diagnosed with ADHD. In addition, Adderall is well-tolerated with a side effect profile similar to that reported for other psychostimulants. |
Single-trial lie detection using a combined fNIRS-polygraph system | Deception is a human behavior that many people experience in daily life. It involves complex neuronal activities in addition to several physiological changes in the body. A polygraph, which can measure some of the physiological responses from the body, has been widely employed in lie-detection. Many researchers, however, believe that lie detection can become more precise if the neuronal changes that occur in the process of deception can be isolated and measured. In this study, we combine both measures (i.e., physiological and neuronal changes) for enhanced lie-detection. Specifically, to investigate the deception-related hemodynamic response, functional near-infrared spectroscopy (fNIRS) is applied at the prefrontal cortex besides a commercially available polygraph system. A mock crime scenario with a single-trial stimulus is set up as a deception protocol. The acquired data are classified into "true" and "lie" classes based on the fNIRS-based hemoglobin-concentration changes and polygraph-based physiological signal changes. Linear discriminant analysis is utilized as a classifier. The results indicate that the combined fNIRS-polygraph system delivers much higher classification accuracy than that of a singular system. This study demonstrates a plausible solution toward single-trial lie-detection by combining fNIRS and the polygraph. |
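The classification step can be pictured with the scikit-learn sketch below: feature vectors from the two modalities are concatenated and fed to a linear discriminant. The feature counts, their meanings, and the random data are invented for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fnirs = rng.standard_normal((40, 6))  # e.g., HbO/HbR concentration changes
poly = rng.standard_normal((40, 3))   # e.g., skin conductance, pulse, respiration
y = rng.integers(0, 2, 40)            # 0 = "true", 1 = "lie"

X = np.hstack([fnirs, poly])          # combined fNIRS-polygraph feature vector
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))                # training accuracy of the sketch
```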
emuARM: A tool for teaching the ARM assembly language | Technology has always enhanced learning as well as the overall teaching experience. With proper tools and resources in hand, we can easily integrate educational and information technologies into the academic environment. In this paper, we present a software tool to enhance the learning of microprocessors and computer architecture for students. We have developed an ARM instruction set emulator, emuARM, which is a Java-based software tool for duplicating the functions of an ARMv5 microprocessor. Here, we present the internal design and features of emuARM and compare its features with those of other ARM emulators currently on the market. Finally, we present the results of a survey that attests to the pedagogical value of our tool.
CONTINUOUS FIRST ORDER LOGIC FOR UNBOUNDED METRIC STRUCTURES | We present an adaptation of continuous first order logic to unbounded metric structures. This has the advantage of being closer in spirit to C. Ward Henson's logic for Banach space structures than the unit ball approach (which has been the common approach so far to Banach space structures in continuous logic), as well as of applying in situations where the unit ball approach does not apply (i.e., when the unit ball is not a definable set). We also introduce the process of single point emboundment (closely related to the topological single point compactification), allowing one to bring unbounded structures back into the setting of bounded continuous first order logic. Together with results from [4] regarding perturbations of bounded metric structures, we prove a Ryll–Nardzewski style characterization of theories of Banach spaces which are separably categorical up to small perturbation of the norm. This last result is motivated by an unpublished result of Henson.
Cosmology of nonlinear oscillations | The nonlinear oscillations of a scalar field are shown to have cosmological equations of state with w = p/ρ ranging from −1 to +1.
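For context, a classical result in this area (Turner, 1983) gives the time-averaged equation of state of a field oscillating in a power-law potential; it is quoted here as background rather than from this abstract:

```latex
\langle w \rangle = \frac{\langle p \rangle}{\langle \rho \rangle}
                  = \frac{n - 2}{n + 2},
\qquad V(\phi) \propto \phi^{\,n}
```

so quadratic oscillations (n = 2) behave like pressureless matter (w = 0) and quartic oscillations (n = 4) like radiation (w = 1/3).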
Maximum power point tracking (MPPT) of sensorless PMSG wind power system | This paper investigates the modeling, simulation and implementation of sensorless maximum power point tracking (MPPT) for a permanent magnet synchronous generator (PMSG) wind power system. A comprehensive portfolio of control schemes is discussed and verified by simulations and experiments. In particular, a PMSG-based wind power emulation system has been developed based on two machine drive setups: one is controlled as the wind energy source and operated in torque control mode, while the other is controlled as a wind generator and operated in speed control mode to attain MPPT. Both simulation and experimental results demonstrate robust sensorless MPPT operation in the customized PMSG wind power system.
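A generic way to picture MPPT is the hill-climbing (perturb-and-observe) loop sketched below; the paper's actual sensorless scheme is not detailed in the abstract, so this is only an illustration of the tracking idea with a hypothetical power curve:

```python
def perturb_and_observe(power_at, w0, dw=0.5, steps=100):
    """Hill-climbing MPPT sketch: perturb the speed reference, observe
    the power, and reverse direction whenever power drops."""
    w, p_prev, step = w0, power_at(w0), dw
    for _ in range(steps):
        w += step
        p = power_at(w)
        if p < p_prev:          # power fell: reverse the perturbation
            step = -step
        p_prev = p
    return w

# Hypothetical turbine power curve with its maximum at w = 10 rad/s.
print(perturb_and_observe(lambda w: -(w - 10.0) ** 2, w0=5.0))
```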
Automatic query reformulations for text retrieval in software engineering | There are more than twenty distinct software engineering tasks addressed with text retrieval (TR) techniques, such as, traceability link recovery, feature location, refactoring, reuse, etc. A common issue with all TR applications is that the results of the retrieval depend largely on the quality of the query. When a query performs poorly, it has to be reformulated and this is a difficult task for someone who had trouble writing a good query in the first place.
We propose a recommender (called Refoqus) based on machine learning, which is trained with a sample of queries and relevant results. Then, for a given query, it automatically recommends a reformulation strategy that should improve its performance, based on the properties of the query. We evaluated Refoqus empirically against four baseline approaches that are used in natural language document retrieval. The data used for the evaluation corresponds to changes from five open source systems in Java and C++ and is used in the context of TR-based concept location in source code. Refoqus outperformed the baselines, and its recommendations led to query performance improvement or preservation in 84% of the cases on average.
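The recommender's core can be pictured as a supervised classifier from query properties to strategy labels, roughly as in the scikit-learn sketch below. The specific properties and strategy names are hypothetical stand-ins, since the abstract does not enumerate them:

```python
from sklearn.tree import DecisionTreeClassifier

# One row of (hypothetical) query properties per training query, e.g.
# query length, average term specificity, term entropy, each labeled
# with the reformulation strategy that worked best for that query.
X = [[3, 0.2, 1.1], [12, 0.8, 3.4], [5, 0.5, 2.0], [9, 0.7, 2.9]]
y = ["expand", "reduce", "expand", "replace"]

recommender = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(recommender.predict([[4, 0.3, 1.5]]))  # recommended strategy
```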
StormDroid: A Streaminglized Machine Learning-Based System for Detecting Android Malware | Mobile devices are especially vulnerable to malware attacks nowadays, thanks to the current trend of increased app downloads. Despite the significant security and privacy concerns it has received, effective malware detection (MD) remains a significant challenge. This paper tackles this challenge by introducing a streaminglized machine learning-based MD framework, StormDroid: (i) The core of StormDroid is based on machine learning, enhanced with a novel combination of contributed features that we observed over a fairly large collection of data sets; and (ii) we streaminglize the whole MD process to support large-scale analysis, yielding an efficient and scalable MD technique that observes app behaviors statically and dynamically. Evaluated on roughly 8,000 applications, our combination of contributed features improves MD accuracy by almost 10% compared with state-of-the-art antivirus systems; in parallel, our streaminglized process, StormDroid, further improves efficiency by approximately three times compared with a single thread.
Mental Health Services for Individuals with High Functioning Autism Spectrum Disorder | Adolescents and adults with an autism spectrum disorder (ASD) who do not have an intellectual impairment or disability (ID), described here as individuals with high-functioning autism spectrum disorder (HFASD), represent a complex and underserved psychiatric population. While there is an emerging literature on the mental health needs of children with ASD with normal intelligence, we know less about these issues in adults. Of the few studies of adolescents and adults with HFASD completed to date, findings suggest that they face a multitude of cooccurring psychiatric (e.g., anxiety, depression), psychosocial, and functional issues, all of which occur in addition to their ASD symptomatology. Despite this, traditional mental health services and supports are falling short of meeting the needs of these adults. This review highlights the service needs and the corresponding gaps in care for this population. It also provides an overview of the literature on psychiatric risk factors, identifies areas requiring further study, and makes recommendations for how existing mental health services could include adults with HFASD. |
Psicologia USP, 1992-2002: uma aventura participativa | This article examines 10 years of production of the journal Psicologia USP (1992-2002), when under the editorship of Sylvia Leser de Mello. During this period, the basic policy of the journal was to publish critical essays on relevant psychological themes, as opposed to the publication of papers with empirical content. A multidisciplinary approach was adopted and led to the preparation of special issues (Memory, Unconscious, Psychology and Health, among others) and dossiers (Psychoanalysis and University, Psychology and Instrumental Reason, among others) in which researchers from the social and biological sciences took part, seeking contrasts and convergences. The journal contributed to scientific memory by publishing special issues on the work and influence of members of the Institute of Psychology (Psychology and Ethology, Dante Moreira Leite, among others). This decade of participative effort and accomplishment played an important role in the determination of the course of the journal and prepared Psicologia USP for new publishing challenges.
MODERN TREATMENT YEAR BOOK, 1945 | can be carried out. This should ensure more adequate material and better facilities for the teaching of, not only undergraduates, but also postgraduates. But when our new organisation is completed will we be in Utopia? That depends as always on us as individuals. In the highly scientific age of the last century perhaps the machine has had more attention than the man. Perhaps disease has had more attention than the patient. Let us remember that the microscope does not observe nor do our books think. The reputation of a Medical School will depend not so much on its hospital and laboratories as upon the character and ability of both its students and teachers. THIS Year Book has won a well-deserved place in the affections of many practitioners, and the current volume will enhance its reputation. The galaxy of brilliant contributors is a guarantee of the authenticity and interest of the work. The subject matter has been chosen with some care, and there is not a single dull page in the volume. One doubts whether the section on the treatment of cerebro-spinal fever represents present-day views on this important subject. The type is legible and pleasant to read; the paper is 'ersatz'; the illustrations are excellent.
A 21nm high performance 64Gb MLC NAND flash memory with 400MB/s asynchronous toggle DDR interface | A monolithic 64Gb MLC NAND flash based on 21nm process technology has been developed for the first time. The device consists of 4-plane arrays and provides page size of up to 32KB. It also features a newly developed DDR interface that can support up to the maximum bandwidth of 400MB/s. To address performance and reliability, on-chip randomizer, soft data readout, and incremental bit line precharge scheme have been developed. |
3D Convolutional Neural Networks Fusion Model for Lung Nodule Detection on Clinical CT Scans | Automatic and accurate pulmonary nodule detection plays an important role in lung cancer diagnosis and early treatment. We propose a three-dimensional (3D) Convolutional Neural Networks (ConvNets) fusion model for lung nodule detection on clinical CT scans. Two 3D ConvNets models are trained separately without any pre-trained weights: one is trained on the LUng Nodule Analysis 2016 (LUNA) dataset plus additional augmented data to learn the nodules' representative features in volumetric space, which may cause overfitting; to reduce this risk, we train the other network on the original data and fuse the results of the two best-performing models. Both use a reshaped objective function to address the class imbalance problem and to differentiate hard samples from easy samples. More importantly, 335 patients' CT scans from the hospital are further used to evaluate and help optimize the performance of our approach in a real-world setting, and we have developed a system based on this method. Experimental results show a sensitivity of 95.1% at 8 false positives per scan in Free-response Receiver Operating Characteristic (FROC) curve analysis, and our system generalizes well to clinical data.
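The fusion of the two networks can be as simple as a weighted average of their per-candidate nodule probabilities, as in the sketch below; the paper's exact fusion rule is not given in the abstract, so the weighted mean and the sample numbers are assumptions:

```python
import numpy as np

def fuse(prob_a, prob_b, w=0.5):
    """Late fusion of two detectors: weighted mean of their
    per-candidate nodule probabilities."""
    return w * np.asarray(prob_a) + (1.0 - w) * np.asarray(prob_b)

# Hypothetical probabilities from the augmented-data and original-data models.
print(fuse([0.92, 0.10, 0.55], [0.88, 0.25, 0.40]))  # [0.9 0.175 0.475]
```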
Agency Problems and Dividend Policies Around the World | This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders.
As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country.
In the United States, U.K., Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap
Coordinated downlink multi-point communications in heterogeneous cellular networks | In this paper we assess how coordination among base stations can be exploited to improve downlink capacity in fourth generation (4G) cellular networks. We focus on heterogeneous networks where low-power pico cells are deployed within the coverage area of an existing macro network with the aim of offloading traffic from the (potentially congested) macro cells to low-power cells. Firstly, we describe an enhanced inter-cell interference coordination scheme which is shown to achieve a significant capacity gain in such deployments by leveraging a loose coordination among neighbor base stations. Secondly, we explore how a tighter coordination among base stations can be exploited to further improve the network capacity. Even though the schemes described in this paper apply to long term evolution (LTE) wireless networks, we point out that most of the findings and conclusions we draw apply to any cellular network. |
Long-term patency of saphenous vein and left internal mammary artery grafts after coronary artery bypass surgery: results from a Department of Veterans Affairs Cooperative Study. | OBJECTIVES
This study defined long-term patency of saphenous vein grafts (SVG) and internal mammary artery (IMA) grafts.
BACKGROUND
This VA Cooperative Studies Trial defined 10-year SVG patency in 1,074 patients and left IMA patency in 457 patients undergoing coronary artery bypass grafting (CABG).
METHODS
Patients underwent cardiac catheterizations at 1 week and 1, 3, 6, and 10 years after CABG.
RESULTS
Patency at 10 years was 61% for SVGs compared with 85% for IMA grafts (p < 0.001). If a SVG or IMA graft was patent at 1 week, that graft had a 68% and 88% chance, respectively, of being patent at 10 years. The SVG patency to the left anterior descending artery (LAD) (69%) was better (p < 0.001) than to the right coronary artery (56%) or circumflex (58%). Recipient vessel size was a significant predictor of graft patency: in vessels >2.0 mm in diameter, SVG patency was 88% versus 55% in vessels ≤2.0 mm (p < 0.001). Other positive significant predictors of graft patency were use of aspirin after bypass, older age, lower serum cholesterol, and lowest Canadian Functional Class (p < 0.001 to 0.058).
CONCLUSIONS
The 10-year patency of IMA grafts is better than SVGs. The 10-year patency for SVGs is better and the 10-year patency for IMA grafts is worse than expected. The 10-year patency of SVGs to the LAD is better than that to the right or circumflex. The best long-term predictors of SVG graft patency are grafting into the LAD and grafting into a vessel that is >2.0 mm in diameter. |
Cooperation between distributed agents through self-organisation | The paper discusses some central issues in the cooperation between distributed agents using the following case study: The objective is to explore a distant planet, more concretely to collect samples of a particular type of precious rock. The location of the rock samples is unknown in advance, but they are typically clustered in certain spots. There is a vehicle that can drive around on the planet and later reenter the spacecraft to go back to earth. There is no detailed map of the terrain, although it is known that the terrain is full of obstacles, hills, valleys, etc.
CacheZoom: How SGX Amplifies the Power of Cache Attacks | In modern computing environments, hardware resources are commonly shared, and parallel computation is widely used. Parallel tasks can cause privacy and security problems if proper isolation is not enforced. Intel proposed SGX to create a trusted execution environment within the processor. SGX relies on the hardware, and claims runtime protection even if the OS and other software components are malicious. However, SGX disregards side-channel attacks. We introduce a powerful cache side-channel attack that provides system adversaries a high resolution channel. Our attack tool named CacheZoom is able to virtually track all memory accesses of SGX enclaves with high spatial and temporal precision. As proof of concept, we demonstrate AES key recovery attacks on commonly used implementations including those that were believed to be resistant in previous scenarios. Our results show that SGX cannot protect critical data sensitive computations, and efficient AES key recovery is possible in a practical environment. In contrast to previous works which require hundreds of measurements, this is the first cache side-channel attack on a real system that can recover AES keys with a minimal number of measurements. We can successfully recover AES keys from T-Table based implementations with as few as ten measurements.
Phonetic Bengali Input Method for Computer and Mobile Devices | Current mobile devices do not support a Bangla (or Bengali) input method. Because of this, many Bangla speakers have to write Bangla on mobile phones using the English alphabet, and in doing so they typically write English foreign words using English spelling. This tendency also exists when writing on computers using phonetic input methods, and it causes many typing mistakes. In this scenario, a computer transliteration input method needs to correct foreign words written using English spelling. In this paper, we propose a transliteration input method for the Bangla language. For English foreign words, the system uses an International Phonetic Alphabet (IPA)-based transliteration method for Bangla. Our proposed approach improved the quality of the Bangla transliteration input method by 14 points.
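The core of any phonetic input method is a longest-match lookup over a romanization table, as in the deliberately tiny sketch below. The table entries are illustrative only and ignore, among much else, the distinction between independent vowels and vowel signs in Bangla orthography:

```python
# Hypothetical romanization table (real tables are far larger).
PHONETIC_MAP = {"k": "ক", "kh": "খ", "g": "গ", "a": "আ", "i": "ই"}

def transliterate(roman):
    out, i = [], 0
    while i < len(roman):
        # Greedy longest match: try 2-character units before 1-character.
        for n in (2, 1):
            if roman[i:i + n] in PHONETIC_MAP:
                out.append(PHONETIC_MAP[roman[i:i + n]])
                i += n
                break
        else:
            out.append(roman[i])  # pass through unmapped characters
            i += 1
    return "".join(out)

print(transliterate("khaki"))
```

The paper's IPA-based handling of English foreign words would sit in front of such a table, rewriting the English spelling into a phonetic form before the lookup.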
Two algorithms for the three-dimensional reconstruction of tomograms. | Three-dimensional (3-D) surface reconstructions provide a method to view complex anatomy contained in a set of computed tomography (CT), magnetic resonance imaging (MRI), or single photon emission computed tomography tomograms. Existing methods of 3-D display generate images based on the distance from an imaginary observation point to a patch on the surface and on the surface normal of the patch. We believe that the normalized gradient of the original values in the CT or MRI tomograms provides a better estimate for the surface normal and hence results in higher quality 3-D images. Then two algorithms that generate 3-D surface models are presented. The new methods use polygon and point primitives to interface with computer-aided design equipment. Finally, several 3-D images of both bony and soft tissue show the skull, spine, internal air cavities of the head and abdomen, and the abdominal aorta in detail. |
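The paper's central suggestion, using the normalized gradient of the tomogram values as the surface normal, is easy to state in NumPy; the sketch below assumes a volume indexed as (z, y, x) and uses random data for illustration:

```python
import numpy as np

def surface_normals(volume):
    """Estimate normals as the normalized 3-D gradient of voxel values."""
    gz, gy, gx = np.gradient(volume.astype(float))  # derivative per axis
    g = np.stack([gx, gy, gz], axis=-1)
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(norm, 1e-12)              # avoid division by zero

vol = np.random.rand(32, 32, 32)     # stand-in for a stack of CT slices
print(surface_normals(vol).shape)    # (32, 32, 32, 3)
```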
Paleobiological perspectives on early eukaryotic evolution. | Eukaryotic organisms radiated in Proterozoic oceans with oxygenated surface waters, but, commonly, anoxia at depth. Exceptionally preserved fossils of red algae favor crown group emergence more than 1200 million years ago, but older (up to 1600-1800 million years) microfossils could record stem group eukaryotes. Major eukaryotic diversification ~800 million years ago is documented by the increase in the taxonomic richness of complex, organic-walled microfossils, including simple coenocytic and multicellular forms, as well as widespread tests comparable to those of extant testate amoebae and simple foraminiferans and diverse scales comparable to organic and siliceous scales formed today by protists in several clades. Mid-Neoproterozoic establishment or expansion of eukaryophagy provides a possible mechanism for accelerating eukaryotic diversification long after the origin of the domain. Protists continued to diversify along with animals in the more pervasively oxygenated oceans of the Phanerozoic Eon. |
Convenient or Useful ? Consumer Adoption of Smartphones for Mobile Commerce | Merchants have developed apps for the smartphone that assist consumers through the buying process, from when they gather information to when they decide to purchase. Sharing information, such as location, shopping preferences and financial data, can enhance consumers’ experience. However, they may be reluctant to disclose these details unless they perceive that the benefits gained are more than the risk of privacy loss. This privacy calculus is added to the unified theory of acceptance and use of technology (UTAUT2) in order to explain consumers’ willingness to exchange the disclosure of personal information for additional value. Sharing information makes mobile commerce more convenient by saving time and effort. Companies are able to send offers that are tailored to a specific customer. Payments are processed faster because the merchant already has the financial data on hand. UTAUT2 is further extended with the Theory of Convenience. Results from a survey of over 300 consumers show that perceived value and perceived convenience are influencing variables and that perceived value mediates the influence of perceived convenience on intention to use. |
Global RDF Vector Space Embeddings | Vector space embeddings have been shown to perform well when using RDF data in data mining and machine learning tasks. Existing approaches, such as RDF2Vec, use local information, i.e., they rely on local sequences generated for nodes in the RDF graph. For word embeddings, global techniques, such as GloVe, have been proposed as an alternative. In this paper, we show how the idea of global embeddings can be transferred to RDF embeddings, and show that the results are competitive with traditional local techniques like RDF2Vec. |
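For reference, the global objective being transferred is GloVe's weighted least-squares fit to the logarithm of co-occurrence counts X_ij; this is the published GloVe formula, presumably with graph co-occurrences taking the place of word co-occurrences:

```latex
J = \sum_{i,j=1}^{V} f\!\left(X_{ij}\right)
    \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^{2}
```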
Boundary-spanning: reflections on the practices and principles of Global Health | As Global Health evolves, not merely as a metaphor for international collaboration, but as a distinct field of practice, it warrants greater consideration of how it is practiced, by whom, and for what goals. We believe that, to become more relevant for the health systems and communities that are their intended beneficiaries, Global Health practices must actively span and disrupt boundaries of geography, geopolitics and constituency, some of which are rooted in imbalances of power and resources. In this process, fostering cross-country learning networks and communities of practice, and building local and national institutions with a global outlook in low and middle-income countries, are critically important. Crucially, boundary-spanning practices in Global Health require a mindset of inclusiveness, awareness of and respect for different coexisting realities. |
Learning Representations Using Complex-Valued Nets | Complex-valued neural networks (CVNNs) are an emerging field of research in neural networks due to their potential representational properties for audio, image, and physiological signals. It is common in signal processing to transform sequences of real values to the complex domain via a set of complex basis functions, such as the Fourier transform. We show how CVNNs can be used to learn complex representations of real-valued time-series data. We present methods and results using a framework that can compose holomorphic and non-holomorphic functions in a multi-layer network using a theoretical result called the Wirtinger derivative. We test our methods on a representation learning task for real-valued signals, comparing recurrent complex-valued networks with their real-valued counterparts. Our results show that recurrent complex-valued networks can perform as well as their real-valued counterparts while learning filters that are representative of the domain of the data.
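The Wirtinger calculus that makes this possible treats z = x + iy and its conjugate as independent coordinates, so that gradients of a real-valued loss are well defined even for non-holomorphic layers:

```latex
\frac{\partial}{\partial z}
  = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right),
\qquad
\frac{\partial}{\partial \bar{z}}
  = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right)
```

Gradient descent on a real loss then follows the conjugate-coordinate derivative, which reduces to the ordinary complex derivative whenever a layer happens to be holomorphic.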
A Literature Review of Gamification Design Frameworks | This paper presents a review of the literature on gamification design frameworks. Gamification, understood as the use of game design elements in other contexts for the purpose of engagement, has become a hot topic in the recent years. However, there's also a cautionary tale to be extracted from Gartner's reports on the topic: many gamification-based solutions fail because, mostly, they have been created on a whim, or mixing bits and pieces from game components, without a clear and formal design process. The application of a definite design framework aims to be a path to success. Therefore, before starting the gamification of a process, it is very important to know which frameworks or methods exist and their main characteristics. The present review synthesizes the process of gamification design for a successful engagement experience. This review categorizes existing approaches and provides an assessment of their main features, which may prove invaluable to developers of gamified solutions at different levels and scopes. |
Visual Classifier Training for Text Document Retrieval | Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. |
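One active-learning ingredient the compared approaches share is uncertainty sampling: ask the analyst to label the documents the current classifier is least sure about. A minimal scikit-learn sketch, with invented features and labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def most_uncertain(clf, X_pool, k=5):
    """Indices of the k pool documents whose predicted positive-class
    probability is closest to 0.5 (maximum classifier uncertainty)."""
    p = clf.predict_proba(X_pool)[:, 1]
    return np.argsort(np.abs(p - 0.5))[:k]

rng = np.random.default_rng(1)
X_pool = rng.random((100, 20))                         # unlabeled documents
X0, y0 = rng.random((10, 20)), rng.integers(0, 2, 10)  # initial labels
clf = LogisticRegression().fit(X0, y0)
print(most_uncertain(clf, X_pool))                     # next documents to label
```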
Automatic Static Unpacking of Malware Binaries | Current malware is often transmitted in packed or encrypted form to prevent examination by anti-virus software. To analyze new malware, researchers typically resort to dynamic code analysis techniques to unpack the code for examination. Unfortunately, these dynamic techniques are susceptible to a variety of anti-monitoring defenses, as well as "time bombs" or "logic bombs," and can be slow and tedious to identify and disable. This paper discusses an alternative approach that relies on static analysis techniques to automate this process. Alias analysis can be used to identify the existence of unpacking, static slicing can identify the unpacking code, and control flow analysis can be used to identify and neutralize dynamic defenses. The identified unpacking code can be instrumented and transformed, then executed to perform the unpacking. We present a working prototype that can handle a variety of malware binaries, packed with both custom and commercial packers, and containing several examples of dynamic defenses.
Comparison of frailty indicators based on clinical phenotype and the multiple deficit approach in predicting mortality and physical limitation. | OBJECTIVES
To compare three simple bedside tools based on frailty phenotypes with a Frailty Index using the multiple deficit approach in the prediction of mortality and physical limitation after 4 years.
DESIGN
Cohort study.
SETTING
Hong Kong, China.
PARTICIPANTS
Four thousand men and women aged 65 and older living in the community who were ambulatory enough to attend the study center.
METHODS
Interviewers obtained information regarding physical, psychological, and functional health; body mass index (BMI), grip strength, blood pressure, and ankle brachial index were determined. Three clinical frailty scales based on the Fried phenotype (Cardiovascular Health Study (CHS); Fatigue, Resistance, Ambulation, Illness, and Loss (FRAIL); and Hubbard) and a frailty index (FI) were constructed from these variables, and their ability to predict incident mortality and physical function limitations was compared using receiver operating characteristic (ROC) curves.
RESULTS
All tools predicted adverse outcomes. More participants were categorized into frail and prefrail categories using the CHS than with the other two clinical scales. For all frailty measures, with increasing levels of frailty, the sensitivity fell and the specificity increased to greater than 90%; values of the area under the ROC curve were approximately 0.6.
CONCLUSION
Simple frailty scores are comparable with a multidimensional deficit accumulation FI in predicting mortality and physical limitations. The newer FRAIL, proposed for use in a clinical setting, is comparable with other existing short screening tools, as well as tools based on the multiple-deficits model used for research settings. Addition of a physical performance measure to screening tools may increase predictive accuracy. |
A low cross-polarized antipodal Vivaldi antenna array for wideband operation | An antipodal Vivaldi antenna gives good performance over a wide bandwidth, but the cross-polarization level is unfortunately high due to the slot flares on different layers. As a simple technique to reduce the cross-polarization level in the array, the antipodal antenna and its mirror image are placed alternately in the H-plane. The cross-polarization level is reduced more than 20 dB at broadside. |
Capturing Reliable Fine-Grained Sentiment Associations by Crowdsourcing and Best-Worst Scaling | Access to word–sentiment associations is useful for many applications, including sentiment analysis, stance detection, and linguistic analysis. However, manually assigning fine-grained sentiment association scores to words has many challenges with respect to keeping annotations consistent. We apply the annotation technique of Best–Worst Scaling to obtain real-valued sentiment association scores for words and phrases in three different domains: general English, English Twitter, and Arabic Twitter. We show that on all three domains the ranking of words by sentiment remains remarkably consistent even when the annotation process is repeated with a different set of annotators. We also, for the first time, determine the minimum difference in sentiment association that is perceptible to native speakers of a language.
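The counting procedure behind Best-Worst Scaling reduces each annotated tuple to simple tallies; a term's score is the fraction of times it was chosen best minus the fraction of times it was chosen worst. A small sketch with made-up annotations:

```python
from collections import Counter

def bws_scores(annotations):
    """score(term) = (#chosen best - #chosen worst) / #appearances."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:   # (tuple of terms, best, worst)
        seen.update(items)
        best[b] += 1
        worst[w] += 1
    return {t: (best[t] - worst[t]) / seen[t] for t in seen}

anns = [(("good", "bad", "ok", "awful"), "good", "awful"),
        (("good", "sad", "ok", "bad"), "good", "bad")]
print(bws_scores(anns))   # e.g. good: 1.0, awful: -1.0, ok: 0.0
```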
Research Guides: Civil Engineering, Environmental Engineering, and Construction Management: Citing Sources | This research guide was designed to introduce you to the field of civil engineering and construction management. You'll find books, article databases, and other resources you need to start your research. |
A Feature-Based, Robust, Hierarchical Algorithm for Registering Pairs of Images of the Curved Human Retina | This paper describes a robust hierarchical algorithm for fully-automatic registration of a pair of images of the curved human retina photographed by a fundus microscope. Accurate registration is essential for mosaic synthesis, change detection, and design of computer-aided instrumentation. Central to the new algorithm is a 12-parameter interimage transformation derived by modeling the retina as a rigid quadratic surface with unknown parameters, imaged by an uncalibrated weak perspective camera. The parameters of this model are estimated by matching vascular landmarks extracted by an algorithm that recursively traces the blood vessel structure. The parameter estimation technique, which could be generalized to other applications, is a hierarchy of models and methods: an initial match set is pruned based on a zeroth order transformation estimated as the peak of a similarity-weighted histogram; a first order, affine transformation is estimated using the reduced match set and least-median of squares; and the final, second order, 12-parameter transformation is estimated using an M-estimator initialized from the first order estimate. This hierarchy makes the algorithm robust to unmatchable image features and mismatches between features caused by large interframe motions. Before final convergence of the M-estimator, feature positions are refined and the correspondence set is enhanced using normalized sum-of-squared differences matching of regions deformed by the emerging transformation. Experiments involving 3,000 image pairs (1,024 × 1,024 pixels) from 16 different healthy eyes were performed. Starting with as low as 20 percent overlap between images, the algorithm improves its success rate exponentially and has a negligible failure rate above 67 percent overlap. The experiments also quantify the reduction in errors as the model complexities increase. Final registration errors less than a pixel are routinely achieved. The speed, accuracy, and ability to handle small overlaps compare favorably with retinal image registration techniques published in the literature.
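In such a 12-parameter model, each output coordinate is a full quadratic in the input image coordinates, of the general form below (the paper's exact parametrization may differ in details such as normalization):

```latex
\begin{aligned}
x' &= \theta_{11}x^{2} + \theta_{12}xy + \theta_{13}y^{2}
     + \theta_{14}x + \theta_{15}y + \theta_{16},\\
y' &= \theta_{21}x^{2} + \theta_{22}xy + \theta_{23}y^{2}
     + \theta_{24}x + \theta_{25}y + \theta_{26},
\end{aligned}
```

with the affine transformation of the middle hierarchy level recovered by zeroing the three quadratic coefficients in each row.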
Towards Semi-automatic Generation of R2R Mappings | Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping patterns.
Indoor air quality, ventilation and respiratory health in elderly residents living in nursing homes in Europe. | Few data exist on respiratory effects of indoor air quality and comfort parameters in the elderly. In the context of the GERIE study, we investigated for the first time the relationships of these factors to respiratory morbidity among elderly people permanently living in nursing homes in seven European countries. 600 elderly people from 50 nursing homes underwent a medical examination and completed a standardised questionnaire. Air quality and comfort parameters were objectively assessed in situ in the nursing home. Mean concentrations of air pollutants did not exceed the existing standards. Forced expiratory volume in 1 s/forced vital capacity ratio was highly significantly related to elevated levels of particles with a 50% cut-off aerodynamic diameter of <0.1 µm (PM0.1) (adjusted OR 8.16, 95% CI 2.24-29.3) and nitrogen dioxide (aOR 3.74, 95% CI 1.06-13.1). Excess risks for usual breathlessness and cough were found with elevated PM10 (aOR 1.53 (95% CI 1.15-2.07) and aOR 1.73 (95% CI 1.17-10.3), respectively) and nitrogen dioxide (aOR 1.58 (95% CI 1.15-2.20) and aOR 1.56 (95% CI 1.03-2.41), respectively). Excess risks for wheeze in the past year were found with PM0.1 (aOR 2.82, 95% CI 1.15-7.02) and for chronic obstructive pulmonary disease and exhaled carbon monoxide with formaldehyde (aOR 3.49 (95% CI 1.17-10.3) and aOR 1.25 (95% CI 1.02-1.55), respectively). Breathlessness and cough were associated with higher carbon dioxide. Relative humidity was inversely related to wheeze in the past year and usual cough. Elderly subjects aged ≥80 years were at higher risk. Pollutant effects were more pronounced in the case of poor ventilation. Even at low levels, indoor air quality affected respiratory health in elderly people permanently living in nursing homes, with frailty increasing with age. The effects were modulated by ventilation. |
Data Mining for the Internet of Things: Literature Review and Challenges | The massive data generated by the Internet of Things (IoT) are considered of high business value, and data mining algorithms can be applied to the IoT to extract hidden information from the data. In this paper, we give a systematic review of data mining from the knowledge view, technique view and application view, including classification, clustering, association analysis, time series analysis, and outlier analysis. The latest application cases are also surveyed. As more and more devices are connected to the IoT, large volumes of data must be analyzed; modifications of the latest algorithms to apply to big data are also reviewed, and challenges and open research issues are discussed. Finally, a suggested big data mining system is proposed.
Tensors for Data Mining and Data Fusion: Models, Applications, and Scalable Algorithms | Tensors and tensor decompositions are very powerful and versatile tools that can model a wide variety of heterogeneous, multiaspect data. As a result, tensor decompositions, which extract useful latent information out of multiaspect data tensors, have witnessed increasing popularity and adoption by the data mining community. In this survey, we present some of the most widely used tensor decompositions, providing the key insights behind them, and summarizing them from a practitioner’s point of view. We then provide an overview of a very broad spectrum of applications where tensors have been instrumental in achieving state-of-the-art performance, ranging from social network analysis to brain data analysis, and from web mining to healthcare. Subsequently, we present recent algorithmic advances in scaling tensor decompositions up to today’s big data, outlining the existing systems and summarizing the key ideas behind them. Finally, we conclude with a list of challenges and open problems that outline exciting future research directions. |
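As one concrete instance of the surveyed models, the CP (CANDECOMP/PARAFAC) decomposition expresses a third-order tensor as a sum of R rank-one outer products:

```latex
\mathcal{X} \approx \sum_{r=1}^{R} \lambda_r \, a_r \circ b_r \circ c_r,
\qquad
x_{ijk} \approx \sum_{r=1}^{R} \lambda_r \, a_{ir} \, b_{jr} \, c_{kr}
```

Each rank-one component typically corresponds to an interpretable latent concept, which is one reason CP is popular in the data mining applications the survey covers.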
Real-time people counting system using video camera | In this MTech thesis, experiments are carried out on a people counting system in an effort to enhance the accuracy when separating counted groups of people from non-human objects. The system features automatic color equalization, adaptive background subtraction, a shadow detection algorithm and Kalman tracking. The aim is to develop a reliable and accurate computer vision alternative to sensor- or contact-based mechanisms. The problem for many computer vision based systems is making a good separation between background and foreground, and teaching the computer what parts make up a scene. We also want to find features to classify the foreground moving objects, an easy task for a human but a complex task for a computer. Video has been captured with a bird's-eye view close to one of the entrances at the school, about ten meters above the floor. From this video, troublesome parts have been selected to test the changes made to the algorithms and program code.
Thinking inside the Box: five Organizational Strategies Enabled through Information Systems | The relationship between information systems (IS) and organizational strategies has been a much discussed topic, with most prior studies taking a highly positive view of technology's role in enabling organizational strategies. Despite this wealth of studies, there is a dearth of empirical investigations on how IS enable specific organizational strategies. Through a qualitative empirical investigation of five case organizations, this research derives five organizational strategies that are specifically enabled through IS. The five strategies, namely (i) generic-heartland, (ii) craft-based selective, (iii) ad hoc IT-driven, (iv) corporative-orchestrated and (v) transformative, provide a unique perspective on how IS enable organizational strategy.
A Next-Generation Secure Cloud-Based Deep Learning License Plate Recognition for Smart Cities | License Plate Recognition Systems (LPRS) play a vital role in smart city initiatives such as traffic control, smart parking, toll management and security. In this article, a cloud-based LPRS is addressed in the context of efficiency, where accuracy and speed of processing play a critical role in its success. A signature-based feature technique, realized as a deep convolutional neural network on a cloud platform, is proposed for plate localization, character detection and segmentation. Extracting significant features enables the LPRS to adequately recognize the license plate in challenging situations such as i) congested traffic with multiple plates in the image, ii) plate orientation towards brightness, iii) extra information on the plate, iv) distortion due to wear and tear, and v) distortion of captured images in bad weather, such as hazy images. Furthermore, the deep learning algorithm is computed on bare-metal cloud servers with kernels optimized for NVIDIA GPUs, which speeds up the training phase of the CNN LPDS algorithm. The experiments and results show superior performance in recall, precision and accuracy in comparison with traditional LP detection systems.
New cytotoxic alkyl phloroglucinols from Protorhus thouvenotii. | Two new cytotoxic alkylene phloroglucinols and one known compound were isolated through chromatographic separation from the methanolic extract of the dried fruits of Protorhus thouvenotii (Anacardiaceae). The compounds showed marginal in vitro cytotoxicity in the A2780 ovarian cancer cell line assay with an IC50 of 11 microg/mL. |
Phase I study of the duocarmycin semisynthetic derivative KW-2189 given daily for five days every six weeks. | The duocarmycins represent a new group of antitumor antibiotics produced by Streptomyces that bind to the minor groove of DNA. KW-2189 is a water-soluble semisynthetic derivative of duocarmycin B2, with significant activity in murine and human tumor models. We conducted a Phase I trial of KW-2189 in patients who had solid tumors that were refractory to standard chemotherapy or for whom no more effective therapy existed. KW-2189 was administered as a rapid i.v. bolus daily for 5 days every 6 weeks. Twenty-two patients were enrolled and received a total of 31 cycles of KW-2189. Leukopenia, neutropenia, and thrombocytopenia were the dose-limiting toxicities, with nadirs occurring at medians of 36, 38, and 29 days, respectively, at the 0.04 mg/m2/day dose level. Nonhematological toxicities were mild, although one patient developed grade 3 fatigue. Four patients had stable disease over two to four cycles of treatment and showed no cumulative toxicity. The mean t1/2, plasma clearance, and steady-state volume of distribution were 13.5 min, 1,287 ml/min/m2, and 10,638 ml/m2, respectively. Pharmacokinetics were similar on days 1 and 5, with no drug accumulation in plasma. The active metabolite DU-86 was not consistently found in patient plasma. For Phase II trials, when the 5 days every 6 weeks schedule was used, 0.04 mg/m2/day KW-2189 appears to be the maximal tolerated dose, especially for patients who have received prior chemotherapy. At this dose level, the drug was well tolerated, and the toxicities were acceptable. |
Using prosody to avoid ambiguity: Effects of speaker awareness and referential context | In three experiments, a referential communication task was used to determine the conditions under which speakers produce and listeners use prosodic cues to distinguish alternative meanings of a syntactically ambiguous phrase. Analyses of the actions and utterances from Experiments 1 and 2 indicated that Speakers chose to produce effective prosodic cues to disambiguation only when the referential scene provided support for both interpretations of the phrase. In Experiment 3, on-line measures of parsing commitments were obtained by recording the Listener's eye movements to objects as the Speaker gave the instructions. Results supported the previous experiments but also showed that the Speaker's prosody affected the Listener's interpretation prior to the onset of the ambiguous phrase, thus demonstrating that prosodic cues not only influence initial parsing but can also be used to predict material which has yet to be spoken. The findings suggest that informative prosodic cues depend upon speakers' knowledge of the situation: speakers provide prosodic cues when needed; listeners use these prosodic cues when present.
Autonomous exploration of motor skills by skill babbling | Autonomous exploration of motor skills is a key capability of learning robotic systems. Learning motor skills can be formulated as an inverse modeling problem, which aims at finding an inverse model that maps desired outcomes in some task space, e.g., via points of a motion, to appropriate actions, e.g., motion control policy parameters. In this paper, autonomous exploration of motor skills is achieved by incrementally learning inverse models starting from an initial demonstration. The algorithm, referred to as skill babbling, features sample-efficient learning and scales to high-dimensional action spaces. Skill babbling extends ideas of goal-directed exploration, which organizes exploration in the space of goals. The proposed approach provides a modular framework for autonomous skill exploration by separating the learning of the inverse model from the exploration mechanism and a model of achievable targets, i.e., the workspace. The effectiveness of skill babbling is demonstrated for a range of motor tasks comprising the autonomous bootstrapping of inverse kinematics and parameterized motion primitives.
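A toy sketch of the goal-directed babbling loop, assuming a 1-D task space, a scalar linear inverse model, and a known workspace interval; the paper's inverse model, workspace estimate, and exploration schedule are considerably richer.

```python
import numpy as np

def forward(a):
    """Unknown plant: maps an action to an outcome in task space."""
    return np.tanh(a)

w = 0.5                               # inverse model gain, seeded by a demonstration
rng = np.random.default_rng(0)
for _ in range(200):
    goal = rng.uniform(-0.9, 0.9)         # sample a target from the workspace
    a = w * goal + rng.normal(0, 0.1)     # perturb the inverse model's guess
    outcome = forward(a)
    # weight the sample by how well it reached the goal, then update the model
    weight = np.exp(-10 * (outcome - goal) ** 2)
    if abs(goal) > 1e-3:
        w += 0.1 * weight * (a / goal - w)
print("learned gain:", w)
```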
SkyLens: Visual Analysis of Skyline on Multi-Dimensional Data | Skyline queries have wide-ranging applications in fields that involve multi-criteria decision making, including tourism, retail industry, and human resources. By automatically removing incompetent candidates, skyline queries allow users to focus on a subset of superior data items (i.e., the skyline), thus reducing the decision-making overhead. However, users are still required to interpret and compare these superior items manually before making a successful choice. This task is challenging because of two issues. First, people usually have fuzzy, unstable, and inconsistent preferences when presented with multiple candidates. Second, skyline queries do not reveal the reasons for the superiority of certain skyline points in a multi-dimensional space. To address these issues, we propose SkyLens, a visual analytic system aiming at revealing the superiority of skyline points from different perspectives and at different scales to aid users in their decision making. Two scenarios demonstrate the usefulness of SkyLens on two datasets with a dozen attributes. A qualitative study is also conducted to show that users can efficiently accomplish skyline understanding and comparison tasks with SkyLens.
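For readers unfamiliar with skyline queries, a minimal example of the underlying dominance computation, assuming all attributes are to be minimized; SkyLens visualizes the result rather than prescribing this particular O(n^2) algorithm.

```python
def dominates(p, q):
    """p dominates q if p is <= q in every attribute and < q in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# hypothetical hotels as (price, km to beach); (130, 3.0) is dominated
# by (120, 2.0) and drops out of the skyline
hotels = [(120, 2.0), (90, 5.0), (150, 0.5), (130, 3.0), (200, 0.1)]
print(skyline(hotels))
```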
CrsRecs: A personalized course recommendation system for college students | Every college student has different needs when it comes to learning. It can be difficult to decide which course is best to take on the road to graduation, and which professor will best suit the student's learning style. CrsRecs, our proposed course/professor recommendation system, makes that process much easier. Using topic analysis, tag analysis, sentiment analysis, predicted course/professor ratings, and survey data revealing student priorities with respect to classes (e.g., an easy A or the quality of the class), CrsRecs ranks potential courses in order of perceived preference for the student, based on a hybrid technique that combines these analysis results. Empirical studies conducted to evaluate the performance of CrsRecs have revealed that CrsRecs not only suggests relevant courses to users by considering all the features of a course, but also outperforms existing state-of-the-art course recommendation approaches.
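A hedged sketch of the hybrid-ranking idea: each course gets component scores (topic match, tag match, sentiment, predicted rating) that are blended with weights derived from the student's priority survey. The component names and weights below are illustrative, not CrsRecs's actual features.

```python
def hybrid_score(course, weights):
    """Weighted sum of per-component scores for one course."""
    return sum(weights[k] * course[k] for k in weights)

weights = {"topic": 0.3, "tags": 0.2, "sentiment": 0.2, "pred_rating": 0.3}
courses = {
    "CS101": {"topic": 0.9, "tags": 0.7, "sentiment": 0.8, "pred_rating": 0.6},
    "CS230": {"topic": 0.6, "tags": 0.9, "sentiment": 0.5, "pred_rating": 0.9},
}
ranked = sorted(courses, key=lambda c: hybrid_score(courses[c], weights),
                reverse=True)
print(ranked)  # courses in descending order of blended preference
```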
Miniaturization of Branch-Line Coupler Using Composite Right/Left-Handed Transmission Lines with Novel Meander-Shaped-Slot CSSRR | A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators, which are realized by loading a pair of meander-shaped slots in the split of the ring, are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented using standard printed-circuit-board etching processes without any lumped elements or via-holes, making it very useful for wireless communication systems. The agreement between measured and simulated results validates the feasibility of the proposed coupler configuration.
Detecting protein function and protein-protein interactions from genome sequences. | A computational method is proposed for inferring protein interactions from genome sequences on the basis of the observation that some pairs of interacting proteins have homologs in another organism fused into a single protein chain. Searching sequences from many genomes revealed 6809 such putative protein-protein interactions in Escherichia coli and 45,502 in yeast. Many members of these pairs were confirmed as functionally related; computational filtering further enriches for interactions. Some proteins have links to several other proteins; these coupled links appear to represent functional interactions such as complexes or pathways. Experimentally confirmed interacting pairs are documented in a Database of Interacting Proteins. |
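A toy illustration of the fusion-based ("Rosetta Stone") inference described above: if proteins A and B from one genome each match a different region of a single fused protein in another genome, A and B are predicted to interact. Real pipelines derive the matches from sequence alignments (e.g., BLAST); here they are given directly, and all names are hypothetical.

```python
# fusion_hits maps a fused protein to the (query protein, matched region)
# pairs found against it in another genome
fusion_hits = {
    "fusedXY": [("gyrB", (1, 400)), ("parE", (420, 800))],
    "fusedZ":  [("thrA", (1, 300))],
}

def predicted_interactions(hits):
    pairs = set()
    for matches in hits.values():
        names = [m[0] for m in matches]
        # every pair of distinct queries matching one fused chain
        pairs.update((a, b) for i, a in enumerate(names) for b in names[i + 1:])
    return pairs

print(predicted_interactions(fusion_hits))  # {('gyrB', 'parE')}
```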
Reconditioned merchandise: extended structured report formats in usability inspection | Structured Problem Report Formats have been key to improving the assessment of usability methods. Once extended to record analysts' rationales, they not only reveal analyst behaviour but also change it. We report on two versions of an Extended Structured Report Format for usability problems, briefly noting their impact on analyst behaviour, but more extensively presenting insights into decision making during usability inspection, thus validating and refining a model of evaluation performance. |
Music and memory in advertising: music as a device of implicit learning and recall | Music may play several roles and have many effects in advertising; it may attract attention, carry the product message, act as a mnemonic device, and create excitement or a state of relaxation. There have been numerous studies that have focused on the general perceptual, cognitive and affective processing that occurs in response to exposure to music; there also have been studies on the effects of music on short- and long-term memory. However, few of these have examined the specific importance of music as a mnemonic device within filmed events (Boltz et al. 1991) or TV commercials (Yalch 1991, Stewart et al. 1998). In this paper, the role of music will be evaluated within advertising and during low-attention conditions. A series of experiments was carried out whereby musicians and nonmusicians were exposed to an advert that was embedded into a group of other adverts, presented in the middle of an engaging TV program, thus replicating very naturalistic conditions. Four audio conditions were examined in an example advertisement: jingle, instrumental music, instrumental music with voice-over, and environmental sounds with voice-over. Results indicate that music is effective in facilitating implicit learning and recall of the advertised product, showing that, under non-attentive conditions, there is a certain mechanism of unconscious elaboration of the musical signal. The role of previous musical training seems to have little significance under low-attentive conditions; thus we observe an unconscious physiological reaction to the information carried by the music of a commercial which is common to musicians and non-musicians. Conclusions concerning the function of music on listeners and on memory stimulation could prove effective in an analysis of the communicative role of music in advertising, and might also have wider ramifications for current research into more generalised analysis of music and meaning.
Shared decision-making in breast cancer: discrepancy between the treatment efficacy required by patients and by physicians | Several factors can influence individual perceptions of the expected benefit of recommended adjuvant treatment for breast cancer. This study investigated differences between patients and physicians with regard to the required efficacy of treatment and the factors influencing patients’ and physicians’ willingness to accept different therapeutic options. A total of 9,000 questionnaires were distributed to patients with breast cancer, and 6,938 questionnaires were distributed to physicians treating breast cancer patients. The patients were asked for personal information and about their medical history and experiences during treatment. The physicians were asked about personal information and their specialty and work environment. The treatment efficacy required by the two groups was assessed using six virtual cases of breast cancer and the treatment regimens proposed, with specific benefits and side effects. A total of 2,155 patients and 527 physicians responded to the questionnaire (return rates of 23.9% and 7.6%, respectively). Significantly different ratings between patients and physicians with regard to the expected benefit of certain treatment options were observed. The differences were noted not only for chemotherapy but also for antihormonal and antibody treatments. Whereas physicians had a quite realistic view of the expected treatment benefits, the patients’ expectations were varied. Approximately one-fifth of the patients were willing to accept treatment regimens even with marginal anticipated benefits, whereas one-third required unrealistic treatment benefits. Several influencing factors that were significantly associated with the quality rating of treatment regimens in the groups of breast cancer patients and physicians were also identified. In contrast to physicians, many breast cancer patients required treatment benefits beyond what was realistically possible, although a large group of patients were also satisfied with minimal benefits. Individual factors were also identified in both groups that significantly influence thresholds for accepting adjuvant treatment, independently of risk estimates and therapy guidelines.
Intraaortic balloon counterpulsation in acute myocardial infarction complicated by cardiogenic shock: Design and rationale of the Intraaortic Balloon Pump in Cardiogenic Shock II (IABP-SHOCK II) trial. | “Because most of the patients will not be able to provide full informed consent before randomization, an individualized informed consent process covering 4 different scenarios has been validated and approved by the central ethical committee at the University of Leipzig, Germany, and also all local ethical committees. If the patient is not able to provide informed consent, 2 independent physicians assess the assumed patient's will (if possible by additional contact of relatives). In patients with limited ability to provide consent, a short version, and in patients with full capacity to consent, a long version of the informed consent will be used. If an initially impaired patient recovers, it is required to obtain the long version of informed consent retrospectively.” |
Detection of Malicious Scripting Code Through Discriminant and Adversary-Aware API Analysis | JavaScript and ActionScript are powerful scripting languages that not only allow the delivery of advanced multimedia content, but can also be used to exploit critical vulnerabilities of third-party applications. To detect both ActionScript- and JavaScript-based malware, we propose in this paper a machine-learning methodology based on extracting discriminant information from system API methods, attributes, and classes. Our strategy exploits the similarities between the two scripting languages, and has been devised by also considering the possibility of targeted attacks that aim to deceive the employed classification algorithms. We tested our method on PDF and SWF data, respectively embedding JavaScript and ActionScript code. Results show that the proposed strategy allows us to detect most of the tested malicious files, with low false positive rates. Finally, we show that the proposed methodology is also reasonably robust against evasive and targeted attacks.
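A minimal sketch of API-based discriminant features, assuming scikit-learn: scripts are reduced to counts of API tokens and a linear classifier separates malicious from benign. The feature engineering and adversary-aware design in the paper are more involved; the tiny corpus below is made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

scripts = ["eval unescape document.write",
           "getElementById addEventListener",
           "eval eval unescape charCodeAt",
           "querySelector fetch"]
labels = [1, 0, 1, 0]               # 1 = malicious, 0 = benign

vec = CountVectorizer()             # bag of API-name tokens
X = vec.fit_transform(scripts)
clf = LinearSVC().fit(X, labels)
print(clf.predict(vec.transform(["unescape eval setTimeout"])))
```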
Driver Modeling Based on Driving Behavior and Its Evaluation in Driver Identification | All drivers have habits behind the wheel. Different drivers vary in how they hit the gas and brake pedals, how they turn the steering wheel, and how much following distance they keep to follow a vehicle safely and comfortably. In this paper, we model such driving behaviors as car-following and pedal operation patterns. The relationship between following distance and velocity, mapped into a two-dimensional space, is modeled for each driver with an optimal velocity model approximated by a nonlinear function or with a statistical method, a Gaussian mixture model (GMM). Pedal operation patterns are also modeled with GMMs that represent the distributions of raw pedal operation signals or spectral features extracted through spectral analysis of the raw pedal operation signals. The driver models are evaluated in driver identification experiments using driving signals collected in a driving simulator and in a real vehicle. Experimental results show that the driver model based on the spectral features of pedal operation signals efficiently models individual differences among drivers and achieves an identification rate of 76.8% for a field test with 276 drivers, resulting in a relative error reduction of 55% over driver models that use raw pedal operation signals without spectral analysis.
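A minimal GMM-based identification sketch, assuming scikit-learn: one GMM is fit per driver on feature vectors (e.g., spectral features of pedal signals), and a test segment is assigned to the driver whose model yields the highest average log-likelihood. The random features below are stand-ins for real pedal data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {"driver_A": rng.normal(0, 1, (200, 8)),
         "driver_B": rng.normal(1, 1, (200, 8))}
models = {d: GaussianMixture(n_components=4, random_state=0).fit(X)
          for d, X in train.items()}

segment = rng.normal(1, 1, (50, 8))    # unseen pedal-feature frames
scores = {d: m.score(segment) for d, m in models.items()}  # mean log-likelihood
print(max(scores, key=scores.get))     # -> "driver_B"
```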
Transformer tests using MATLAB/Simulink and their integration into undergraduate electric machinery courses | This article describes MATLAB/Simulink realization of open-circuit and short-circuit tests of transformers that are performed to identify equivalent circuit parameters. These simulation models are developed to support and enhance electric machinery education at the undergraduate level. The proposed tests have been successfully integrated into electric machinery courses at Drexel University, Philadelphia, PA and Nigde University, Nigde, Turkey. © 2006 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 142–150, 2006; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20077
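A worked example of the parameter identification the simulated tests perform, using the standard textbook formulas; the measurement values below are illustrative, not taken from the article.

```python
import math

# Open-circuit test (rated voltage applied, measured on the excited side)
V_oc, I_oc, P_oc = 230.0, 0.45, 30.0
G_c = P_oc / V_oc**2                 # core-loss conductance
Y_m = I_oc / V_oc                    # excitation admittance magnitude
B_m = math.sqrt(Y_m**2 - G_c**2)     # magnetizing susceptance
R_c, X_m = 1 / G_c, 1 / B_m

# Short-circuit test (rated current at reduced voltage)
V_sc, I_sc, P_sc = 13.2, 6.0, 20.1
R_eq = P_sc / I_sc**2                # total winding resistance (referred)
Z_eq = V_sc / I_sc
X_eq = math.sqrt(Z_eq**2 - R_eq**2)  # total leakage reactance (referred)

print(f"Rc={R_c:.1f} ohm, Xm={X_m:.1f} ohm, "
      f"Req={R_eq:.3f} ohm, Xeq={X_eq:.3f} ohm")
```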
What is synergy? The Saariselkä agreement revisited | Many biological or chemical agents, when combined, interact with each other and produce a synergistic response that cannot be predicted from the single-agent responses alone. However, depending on the postulated null hypothesis of non-interaction, one may end up with different interpretations of synergy. Two popular reference models for the null hypothesis are the Bliss independence model and the Loewe additivity model, each of which is formulated from a different perspective. During the last century, there has been an intensive debate on the suitability of these synergy models, both of which are theoretically justified and also in practice supported by different schools of scientists. More than 20 years ago, there was a community effort to reach a consensus on the terminology one should use when claiming synergy. The agreement was formulated at a conference held in Saariselkä, Finland in 1992, stating that one should use the terms Bliss synergy or Loewe synergy to avoid ambiguity in the underlying models. We review the theoretical relationships between these models and argue that one should combine the advantages of both models to provide a more consistent definition of synergy and antagonism.
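A numerical illustration of the two reference models. Bliss independence predicts the combined fractional effect of e_a and e_b as e_a + e_b - e_a*e_b; Loewe additivity asks whether the combination doses (d_a, d_b) satisfy d_a/D_a + d_b/D_b = 1, where D_a and D_b are the single-agent doses producing the combination's observed effect. The values below are illustrative.

```python
def bliss_expected(e_a, e_b):
    """Expected combined effect under Bliss independence."""
    return e_a + e_b - e_a * e_b

def loewe_index(d_a, D_a, d_b, D_b):
    """Combination index: < 1 suggests Loewe synergy, > 1 antagonism."""
    return d_a / D_a + d_b / D_b

print(bliss_expected(0.3, 0.4))          # 0.58; observed > 0.58 => Bliss synergy
print(loewe_index(2.0, 10.0, 1.0, 4.0))  # 0.45 => Loewe synergy
```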
Intrinsically Motivated Reinforcement Learning: A Promising Framework for Developmental Robot Learning | One of the primary challenges of developmental robotics is the question of how to learn and represent increasingly complex behavior in a self-motivated, open-ended way. Barto, Singh, and Chentanez (Barto, Singh, & Chentanez 2004; Singh, Barto, & Chentanez 2004) have recently presented an algorithm for intrinsically motivated reinforcement learning that strives to achieve broad competence in an environment in a task-nonspecific manner by incorporating internal reward to build a hierarchical collection of skills. This paper suggests that with its emphasis on task-general, self-motivated, and hierarchical learning, intrinsically motivated reinforcement learning is an obvious choice for organizing behavior in developmental robotics. We present additional preliminary results from a gridworld abstraction of a robot environment and advocate a layered learning architecture for applying the algorithm on a physically embodied system.
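A sketch of one common intrinsic-reward signal (novelty via visit counts) plugged into tabular Q-learning on a toy chain world. Barto et al.'s framework builds skill hierarchies around salient events; this only shows where an internal reward enters the update, and the chain world is a made-up stand-in.

```python
import random
from collections import defaultdict

Q = defaultdict(float)
visits = defaultdict(int)
state, alpha, gamma = 0, 0.1, 0.9
for _ in range(1000):
    action = random.choice([-1, 1])
    nxt = max(0, min(9, state + action))   # 10-state chain world
    visits[nxt] += 1
    r_int = 1.0 / visits[nxt]              # novelty bonus, decays with familiarity
    best_next = max(Q[(nxt, a)] for a in (-1, 1))
    Q[(state, action)] += alpha * (r_int + gamma * best_next - Q[(state, action)])
    state = nxt
```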
Uncertainty Quantification - Theory, Implementation, and Applications |
SEMAFOR 1.0: A Probabilistic Frame-Semantic Parser | An elaboration on (Das et al., 2010), this report formalizes frame-semantic parsing as a structure prediction problem and describes an implemented parser that transforms an English sentence into a frame-semantic representation. SEMAFOR 1.0 finds words that evoke FrameNet frames, selects frames for them, and locates the arguments for each frame. The system uses two feature-based, discriminative probabilistic (log-linear) models, one with latent variables to permit disambiguation of new predicate words. The parser is demonstrated to significantly outperform previously published results and is released for public use. |
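A tiny log-linear (softmax) scorer of the kind SEMAFOR's frame identification stage uses: each candidate frame is scored by a weighted feature function of the sentence, target word, and frame. The features and weights below are made up for illustration.

```python
import math

def p_frame(frame, candidates, feats, w):
    """Conditional probability of one frame under a log-linear model."""
    scores = {f: sum(w.get(k, 0.0) * v for k, v in feats(f).items())
              for f in candidates}
    z = sum(math.exp(s) for s in scores.values())
    return math.exp(scores[frame]) / z

feats = lambda f: {f + ":lemma=bake": 1.0}   # one indicator feature per frame
w = {"Cooking_creation:lemma=bake": 2.0, "Absorb_heat:lemma=bake": 0.5}
print(p_frame("Cooking_creation",
              ["Cooking_creation", "Absorb_heat"], feats, w))  # ~0.82
```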
A Symbolic Approach for Explaining Errors in Image Classification Tasks | Machine learning algorithms, despite their increasing success in handling object recognition tasks, still seldom perform without error. Often the process of understanding why the algorithm has failed is the task of the human who, using domain knowledge and contextual information, can discover systematic shortcomings in either the data or the algorithm. This paper presents an approach where the process of reasoning about errors emerging from a machine learning framework is automated using symbolic techniques. By utilizing spatial and geometrical reasoning between objects in a scene, the system is able to describe misclassified regions in relation to their context. The system is demonstrated in the remote sensing domain, where objects and entities are detected in satellite images.
Virtual garments: A Fully Geometric Approach for Clothing Design | Modeling dressed characters is known as a very tedious process. It usually requires specifying 2D fabric patterns, positioning and assembling them in 3D, and then performing a physically-based simulation. The latter accounts for gravity and collisions to compute the rest shape of the garment, with the adequate folds and wrinkles. This paper presents a more intuitive way to design virtual clothing. We start with a 2D sketching system in which the user draws the contours and seam-lines of the garment directly on a virtual mannequin. Our system then converts the sketch into an initial 3D surface using an existing method based on a precomputed distance field around the mannequin. The system then splits the created surface into different panels delimited by the seam-lines. The generated panels are typically not developable. However, the panels of a realistic garment must be developable, since each panel must unfold into a 2D sewing pattern. Therefore our system automatically approximates each panel with a developable surface, while keeping them assembled along the seams. This process allows us to output the corresponding sewing patterns. The last step of our method computes a natural rest shape for the 3D garment, including the folds due to the collisions with the body and gravity. The folds are generated using procedural modeling of the buckling phenomena observed in real fabric. The result of our algorithm consists of a realistic looking 3D mannequin dressed in the designed garment and the 2D patterns which can be used for distortion free texture mapping. As we demonstrate, the patterns we create allow us to sew real replicas of the virtual garments.