A 65-fJ/Conversion-Step 0.9-V 200-kS/s Rail-to-Rail 8-bit Successive Approximation ADC
An 8-bit successive approximation (SA) analog-to-digital converter (ADC) in 0.18 µm CMOS dedicated to energy-limited applications is presented. The SA ADC achieves a wide effective resolution bandwidth (ERBW) by applying only one bootstrapped switch, thereby preserving the desired low-power characteristic. Measurement results show that at a supply voltage of 0.9 V and an output rate of 200 kS/s, the SA ADC achieves a peak signal-to-noise-and-distortion ratio of 47.4 dB and an ERBW up to its Nyquist bandwidth (100 kHz). It consumes 2.47 µW in the test, corresponding to a figure of merit of 65 fJ/conversion-step.
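For readers checking the headline number: the quoted figure of merit is consistent with the standard Walden FoM computed from the reported measurements (a back-of-the-envelope verification, not part of the original abstract):

```latex
\mathrm{ENOB} = \frac{\mathrm{SNDR} - 1.76}{6.02}
             = \frac{47.4 - 1.76}{6.02} \approx 7.58\ \text{bits}
\qquad
\mathrm{FoM} = \frac{P}{2^{\mathrm{ENOB}} \cdot f_s}
             = \frac{2.47\,\mu\mathrm{W}}{2^{7.58}\times 200\,\mathrm{kS/s}}
             \approx 65\ \mathrm{fJ/conversion\text{-}step}
```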
Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers
Most approaches to extracting multiple relations from a paragraph require multiple passes over the paragraph. In practice, multiple passes are computationally expensive, which makes it difficult to scale to longer paragraphs and larger text corpora. In this work, we focus on the task of multiple-relation extraction by encoding the paragraph only once (one-pass). We build our solution on pre-trained self-attentive (Transformer) models, where we first add a structured prediction layer to handle extraction between multiple entity pairs, then enhance the paragraph embedding with an entity-aware attention technique to capture the relational information associated with each entity. We show that our approach is not only scalable but also achieves state-of-the-art performance on the standard ACE 2005 benchmark.
A Study of Students' Learning Styles, Discipline Attitudes and Knowledge Acquisition in Technology-Enhanced Probability and Statistics Education.
Many modern technological advances have a direct impact on the format, style and efficacy of delivery and consumption of educational content. For example, various novel communication and information technology tools and resources enable efficient, timely, interactive and graphical demonstrations of diverse scientific concepts. In this manuscript, we report on a meta-study of 3 controlled experiments of using the Statistics Online Computational Resources (SOCR) in probability and statistics courses. Web-accessible SOCR applets, demonstrations, simulations and virtual experiments were used in different courses as treatment and compared to matched control classes utilizing traditional pedagogical approaches. The qualitative and quantitative data we collected for all courses included the Felder-Silverman-Soloman index of learning styles, background assessment, pre- and post-surveys of attitude towards the subject, an end-point satisfaction survey, and a variety of quiz, laboratory and test scores. Our findings indicate that students' learning styles and attitudes towards a discipline may be important confounds of their final quantitative performance. The observed positive effects of integrating information technology with established pedagogical techniques may be valid across disciplines within the broader spectrum of courses in the science education curriculum. The two critical components of improving science education via blended instruction are instructor training, and the development of appropriate activities, simulations and interactive resources.
A Descriptive Algorithm for Sobel Image Edge Detection
Image edge detection is a process of locating the edges of an image, which is important in finding the approximate absolute gradient magnitude at each point of an input grayscale image. The problem of obtaining an appropriate absolute gradient magnitude for edges lies in the method used. The Sobel operator performs a 2-D spatial gradient measurement on images. Transforming a 2-D pixel array into a statistically uncorrelated data set enhances the removal of redundant data; as a result, less data is required to represent a digital image. The Sobel edge detector uses a pair of 3 x 3 convolution masks, one estimating the gradient in the x-direction and the other estimating the gradient in the y-direction. The Sobel detector is highly sensitive to noise in pictures and effectively highlights noise as edges. Hence, the Sobel operator is recommended for the massive data communication found in data transfer.
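As a rough illustration of the operator described above (a minimal sketch using NumPy and SciPy; the function name and the toy test image are placeholders, not from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

# The pair of 3x3 Sobel convolution masks: Kx estimates the gradient in the
# x-direction; its transpose Ky estimates the gradient in the y-direction.
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
Ky = Kx.T

def sobel_edges(gray):
    """Approximate absolute gradient magnitude at each pixel of a grayscale image."""
    gx = convolve2d(gray, Kx, mode="same", boundary="symm")
    gy = convolve2d(gray, Ky, mode="same", boundary="symm")
    return np.hypot(gx, gy)

# Toy usage: a synthetic image with a vertical step edge.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)   # large magnitudes appear along the step
```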
Paraconsistent Modal Logics
We introduce a modal expansion of paraconsistent Nelson logic that can also be seen as a generalization of the Belnapian modal logic recently introduced by Odintsov and Wansing. We prove algebraic completeness theorems for both logics, defining and axiomatizing the corresponding algebraic semantics. We provide a representation for these algebras in terms of twist-structures, generalizing a known result on the representation of the algebraic counterpart of paraconsistent Nelson logic.
Autoradiographic mapping of beta-adrenoceptors in human skin
A high density of β2-adrenoceptors has been found in human skin. Using autoradiographic mapping we investigated the distribution of β1- and β2-receptors in normal and diseased human skin. Cryostat sections of human skin obtained at biopsy were incubated with [125I]-iodocyanopindolol, and nonspecific binding was identified by incubation of adjacent sections with 200 μM (-)-isoproterenol; β2-receptors were visualized using CGP 20712A and β1-receptors using ICI 118,551 as competing agents. The epidermis was densely labelled with an even distribution throughout all layers. Most of the β-receptors were of the β2-subtype, with practically no β1-receptors. β-Receptors were also localized to eccrine sweat glands, dermal blood vessels, and perivascular inflammatory cells, but there was no labelling of sebaceous glands. Topical glucocorticoids caused an increase in the density of epidermal β-receptors. We conclude that keratinocytes and eccrine sweat glands express high densities of β2-receptors, suggesting that these receptors may have a physiological role in the regulation of these cells.
Daily functioning and self-management in patients with chronic low back pain after an intensive cognitive behavioral programme for pain management
Chronic low back pain (CLBP) is associated with persistent or recurrent disability, which results in high costs for society. Cognitive behavioral treatments produce clinically relevant benefits for patients with CLBP. Nevertheless, no clear evidence for the most appropriate intervention is yet available. The purpose of this study is to evaluate the mid-term effects of treatment in a cohort of patients with CLBP participating in an intensive pain management programme. The programme, provided by RealHealth-Netherlands, is based on cognitive behavioral principles and executed in collaboration with orthopedic surgeons. Main outcome parameters were daily functioning (Roland and Morris Disability Questionnaire and Oswestry Disability Questionnaire), self-efficacy (Pain Self-Efficacy Questionnaire) and quality of life (Short Form 36 Physical Component Score). All parameters were measured at baseline, on the last day of the residential programme, and at 1 and 12 months follow-up. Repeated measures analysis was applied to examine changes over time. Clinical relevance was examined using minimal clinically important difference (MCID) estimates for the main outcomes. To compare results with the literature, effect sizes (Cohen's d) and Standardized Morbidity Ratios (SMR) were determined. 107 patients with CLBP participated in this programme. Mean scores on outcome measures showed a similar pattern: improvement after the residential programme and maintenance of results over time. Effect sizes were 0.9 for functioning, 0.8 for self-efficacy and 1.3 for physical functioning related quality of life. Regarding clinical relevance, 79% reached the MCID on functioning, 53% on self-efficacy and 80% on quality of life. Study results on functioning were found to be 36% better and 2% worse when related to previous research on, respectively, rehabilitation programmes and spinal surgery for similar conditions (SMR 136% and 98%, respectively). The participants of this evidence-based programme learned to manage CLBP, and improved in daily functioning and quality of life. The study results are meaningful and comparable with results of spinal surgery and even better than results from less intensive rehabilitation programmes.
Critical Factors of ERP Adoption for Small- and Medium-Sized Enterprises: An Empirical Study
Small and Medium-sized Enterprises (SMEs) play a vital and pervasive role in the current development of Taiwan's economy. Recently, the application of Enterprise Resource Planning (ERP) systems has enabled large enterprises to have direct contact with their clients via e-commerce technology, which has led to even fiercer competition among SMEs. This study develops and tests a theoretical model of the critical factors influencing ERP adoption in Taiwan's SMEs. Specifically, four dimensions, including CEO characteristics, innovative technology characteristics, organizational characteristics, and environmental characteristics, are empirically examined. The results of a mail survey indicate that the CEO's attitude towards information technology (IT) adoption, the CEO's IT knowledge, the employees' IT skills, business size, competitive pressure, cost, complexity, and compatibility are all important determinants of ERP adoption for SMEs. The authors' results are compared with research on IT adoption in SMEs based in Singapore and the United States, and implications of the results are also discussed.
Current options in the treatment of mast cell mediator-related symptoms in mastocytosis.
Patients with mastocytosis have symptoms related to the tissue response to the release of mediators from mast cells (MC), the local mast cell burden, or associated non-mast cell hematological disorders. MC contain an array of biologically active mediators in their granules, which are preformed and stored. MC are also able to produce newly generated membrane-derived lipid mediators and are a source of multifunctional cytokines. Mediator-related symptoms can include pruritus, flushing, syncope, gastric distress, nausea and vomiting, diarrhea, bone pain and neuropsychiatric disturbances; these symptoms are variably controlled by adequate medications. Management of patients within all categories of mastocytosis includes: a) careful counseling of patients (parents in pediatric cases) and care providers, b) avoidance of factors triggering acute mediator release, c) treatment of acute and chronic MC-mediator symptoms and, if indicated, d) an attempt at cytoreduction and treatment of organ infiltration by mast cells.
Learning to Filter Object Detections
Most object detection systems consist of three stages. First, a set of individual hypotheses for object locations is generated using a proposal generating algorithm. Second, a classifier scores every generated hypothesis independently to obtain a multi-class prediction. Finally, all scored hypotheses are filtered via a non-differentiable and decoupled non-maximum suppression (NMS) post-processing step. In this paper, we propose a filtering network (FNet), a method which replaces NMS with a differentiable neural network that allows joint reasoning and rescoring of the generated set of hypotheses per image. This formulation enables end-to-end training of the full object detection pipeline. First, we demonstrate that FNet, a feed-forward network architecture, is able to mimic NMS decisions, despite the sequential nature of NMS. We further analyze NMS failures and propose a loss formulation that is better aligned with the mean average precision (mAP) evaluation metric. We evaluate FNet on several standard detection datasets. Results surpass standard NMS on highly occluded settings of a synthetic overlapping MNIST dataset and show competitive behavior on PascalVOC2007 and KITTI detection benchmarks.
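For context, below is a minimal sketch of the classical greedy NMS step that FNet is designed to replace (plain NumPy; the box format and threshold are illustrative assumptions, not the paper's code). Note that the loop is inherently sequential and non-differentiable, which is exactly the property the paper addresses:

```python
import numpy as np

def iou(box, boxes):
    # box: [x1, y1, x2, y2]; boxes: (N, 4) array in the same format.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, suppress heavily overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep
```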
Force and vibration analysis of induction motors
This paper analyzes the radial electromagnetic force of induction motors by using a two-dimensional nonlinear finite-element method that considers the rotor current coupled with the voltage equations. First, we analyze the steady-state characteristics of two induction motors to verify our finite-element model. Using this model, we analyze the radial force at the teeth and slots to clarify the influence of slip and the stator winding, and the difference between a line source and a pulse-width modulation inverter. Finally, we discuss the relationship with a measured vibration velocity.
Rapid octree construction from image sequences
The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time.
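A simplified sketch of silhouette-based octree carving in the spirit of this approach (Python; orthographic projections along coordinate axes stand in for real calibrated cameras, the sample-point test stands in for the exact cube-footprint intersection test, and all names are illustrative):

```python
import numpy as np

RES = 64  # silhouette image resolution

def make_view(silhouette, drop_axis):
    # A "view" = binary silhouette + a projection from 3D (unit cube) to pixels.
    def project(p):
        q = [c for i, c in enumerate(p) if i != drop_axis]
        return (int(q[0] * (RES - 1)), int(q[1] * (RES - 1)))
    return silhouette, project

def classify(cube, views):
    """Test sample points of the cube (corners, edge midpoints, center) per view."""
    (x, y, z), s = cube
    pts = [(x + a * s, y + b * s, z + c * s)
           for a in (0, 0.5, 1) for b in (0, 0.5, 1) for c in (0, 0.5, 1)]
    inside = 0
    for sil, project in views:
        hits = sum(sil[project(p)] for p in pts)
        if hits == 0:
            return "outside"            # fully outside one silhouette: carve away
        if hits == len(pts):
            inside += 1
    return "inside" if inside == len(views) else "ambiguous"

def carve(cube, views, depth, max_depth, out):
    label = classify(cube, views)
    if label == "outside":
        return
    if label == "inside" or depth == max_depth:
        out.append(cube)                # leaf cube of the octree bounding volume
        return
    (x, y, z), s = cube
    h = s / 2                           # subdivide into 8 child cubes
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                carve(((x + dx, y + dy, z + dz), h), views, depth + 1, max_depth, out)

# Usage with trivial all-object silhouettes along the three axes:
views = [make_view(np.ones((RES, RES), bool), axis) for axis in (0, 1, 2)]
leaves = []
carve(((0.0, 0.0, 0.0), 1.0), views, 0, 4, leaves)
```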
A Deep Cascade Model for Multi-Document Reading Comprehension
A fundamental trade-off between effectiveness and efficiency needs to be balanced when designing an online question answering system. Effectiveness comes from sophisticated functions such as extractive machine reading comprehension (MRC), while efficiency is obtained from improvements in preliminary retrieval components such as candidate document selection and paragraph ranking. Given the complexity of the real-world multi-document MRC scenario, it is difficult to jointly optimize both in an end-to-end system. To address this problem, we develop a novel deep cascade learning model, which progressively evolves from the document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension. Specifically, irrelevant documents and paragraphs are first filtered out with simple functions for efficiency considerations. Then we jointly train three modules on the remaining texts for better tracking the answer: document extraction, paragraph extraction and answer extraction. Experiment results show that the proposed method outperforms the previous state-of-the-art methods on two large-scale multi-document benchmark datasets, i.e., TriviaQA and DuReader. In addition, our online system can stably serve typical scenarios with millions of daily requests in less than 50 ms.
Comparing yoga, exercise, and a self-care book for chronic low back pain: a randomized, controlled trial.
Context Yoga combines exercise with achieving a state of mental focus through breathing. In the United States, 1 million people practice yoga for low back pain. Contribution The authors recruited patients who had a recent primary care visit for low back pain and randomly assigned 101 to yoga, conventional exercise, or a self-care book. Patients in the yoga and exercise groups reported good adherence at 26 weeks. Compared with self-care, symptoms were milder and function was better with yoga. The exercise group had intermediate outcomes. Symptoms improved between 12 and 26 weeks only with yoga. Implications Yoga was a more effective treatment for low back pain than a self-care book. The Editors Most treatments for chronic low back pain have modest efficacy at best (1). Exercise is one of the few proven treatments for chronic low back pain; however, its effects are often small, and no form has been shown to be clearly better than another (2-5). Yoga, which often couples physical exercise with breathing, is a popular alternative form of mind-body therapy. An estimated 14 million Americans practiced yoga in 2002 (6), including more than 1 million who used it as a treatment for back pain (7, 8). Yoga may benefit patients with back pain simply because it involves exercise or because of its effects on mental focus. We found no published studies in the western biomedical literature that evaluated yoga for chronic low back pain; therefore, we designed a clinical trial to evaluate its effectiveness and safety for this condition. Methods Study Design and Setting This randomized, controlled trial compared the effects of yoga classes with conventional exercise classes and with a self-care book in patients with low back pain that had persisted for at least 12 weeks. The study was conducted at Group Health Cooperative, a nonprofit, integrated health care system with approximately 500,000 enrollees in Washington State and Idaho. The Group Health Cooperative institutional review board approved the study protocol, and all study participants gave oral informed consent before the eligibility screening and written consent before the baseline interview and randomization. Patients Patients from Group Health Cooperative were recruited for 12-week sessions of classes that were conducted between June and December 2003. We mailed letters describing the study to 6913 patients between 20 and 64 years of age who had visited a primary care provider for treatment of back pain 3 to 15 months before the study (according to electronic visit records). We also advertised the study in the health plan's consumer magazine. Patients were informed that we were comparing 3 approaches for the relief of back pain and that each was designed to help reduce the negative effects of low back pain on people's lives. A research assistant telephoned patients who returned statements of interest to assess their eligibility. After we received their signed informed consent forms, eligible patients were telephoned again for collection of baseline data and randomization to treatment. We excluded individuals whose back pain was complicated (for example, sciatica, previous back surgery, or diagnosed spinal stenosis), potentially attributable to specific underlying diseases or conditions (for example, pregnancy, metastatic cancer, spondylolisthesis, fractured bones, or dislocated joints), or minimal (rating of less than 3 on a bothersomeness scale of 0 to 10).
We also excluded individuals who were currently receiving other back pain treatments or had participated in yoga or exercise training for back pain in the past year, those with a possible disincentive to improve (such as patients receiving workers' compensation or those involved in litigation), and those with unstable medical or severe psychiatric conditions or dementia. Patients who had contraindications (for example, symptoms consistent with severe disk disease) or schedules that precluded class participation, those who were unwilling to practice at home, or those who could not speak or understand English were also excluded. Randomization Protocol Participants were randomly assigned to participate in yoga or exercise classes or to receive the self-care book. We randomly generated treatment assignments for each class series by using a computer program with block sizes of 6 or 9. A researcher who was not involved in patient recruitment or randomization placed the assignments in opaque, sequentially numbered envelopes, which were stored in a locked filing cabinet until needed for randomization. Interventions The yoga and exercise classes developed specifically for this study consisted of 12 weekly 75-minute classes designed to benefit people with chronic low back pain. In addition to attending classes held at Group Health facilities, participants were asked to practice daily at home. Participants received handouts that described home practices, and yoga participants received audio compact discs to guide them through the sequence of postures with the appropriate mental focus (examples of postures are shown in the Appendix Figure). Study participants retained access to all medical care provided by their insurance plan. Appendix Figure. Yoga postures Yoga We chose to use viniyoga, a therapeutically oriented style of yoga that emphasizes safety and is relatively easy to learn. Our class instructor and a senior teacher of viniyoga, who has written a book about its therapeutic uses (9), designed the yoga intervention for patients with back pain who did not have previous yoga experience. Although all the sessions emphasized use of postures and breathing for managing low back symptoms, each had a specific focus: relaxation; strength-building, flexibility, and large-muscle movement; asymmetric poses; strengthening the hip muscles; lateral bending; integration; and customizing a personal practice. The postures were selected from a core of 17 relatively simple postures, some with adaptations (Appendix Table), and the sequence of the postures in each class was performed according to the rudiments of viniyoga (9). Each class included a question-and-answer period, an initial and final breathing exercise, 5 to 12 postures, and a guided deep relaxation. Most postures were not held but were repeated 3 or 6 times. Exercise Because we could not identify a clearly superior form of therapeutic exercise for low back pain from the literature, a physical therapist designed a 12-session class series that was 1) different from what most participants would probably have experienced in previous physical therapy sessions (to maximize adherence) and 2) similar to the yoga classes in number and length. We included a short educational talk that provided information on proper body mechanics, the benefits of exercise and realistic goal setting, and overcoming common barriers to developing an exercise routine (for example, fear).
Each session began with the educational talk; feedback from the previous week; simple warm-ups to increase heart rate; and repetitions of a series of 7 aerobic exercises and 10 strengthening exercises that emphasized leg, hip, abdominal, and back muscles. Over the course of the 12-week series, the number of repetitions of each aerobic and strength exercise increased from 8 to 30 in increments of 2. The strengthening exercises were followed by 12 stretches for the same muscle groups; each stretch was held for 30 seconds. Classes ended with a short, unguided period of deep, slow breathing. Self-Care Book Participants were mailed a copy of The Back Pain Helpbook (10), an evidence-based book that emphasized such self-care strategies as adoption of a comprehensive fitness and strength program, appropriate lifestyle modification, and guidelines for managing flare-ups. Although we did not provide any instructions for using the book, many of the chapters concluded with specific action items. Outcome Measures Interviewers who were masked to the treatment assignments conducted telephone interviews at baseline and at 6, 12, and 26 weeks after randomization. The baseline interview collected information regarding sociodemographic characteristics, back pain history, and the participant's level of knowledge about yoga and exercise. Participants were asked to describe their current pain and to rate their expectations for each intervention. The primary outcomes were back-related dysfunction and symptoms, and the primary time point of interest was 12 weeks. We used the modified Roland Disability Scale (11) to measure patient dysfunction by totaling the number of positive responses to 23 questions about limitations of daily activities that might arise from back pain. This scale has been found to be valid, reliable, and sensitive to change (12-14); researchers estimate that the minimum clinically significant difference on the Roland scale ranges from 2 to 3 points (13, 15). Participants rated how bothersome their back pain had been during the previous week on an 11-point scale, in which 0 represented not at all bothersome and 10 represented extremely bothersome; a similar measure demonstrated substantial construct validity in earlier research (13). Estimates of the minimum clinically significant difference on the bothersomeness scale were approximately 1.5 points (16, 17). Secondary outcome measures were general health status, which we assessed by administering the Short Form-36 Health Survey (18); degree of restricted activity as determined by patient responses to 3 questions (19); and medication use. After all outcomes data were collected, we asked questions related to specific interventions (for example, Did you practice at home?). At the 12-week interview, we asked class participants about any pain or substantial discomfort they experienced as a result of the classes. We assessed adherence to the home practice recommendations by asking class participants to complete weekly home practice logs and by asking about home practice during the follow-up interviews.
Accurate realtime full-body motion capture using a single depth camera
We present a fast, automatic method for accurately capturing full-body motion data using a single depth camera. At the core of our system lies a realtime registration process that accurately reconstructs 3D human poses from single monocular depth images, even in the case of significant occlusions. The idea is to formulate the registration problem in a Maximum A Posteriori (MAP) framework and iteratively register a 3D articulated human body model with monocular depth cues via linear system solvers. We integrate depth data, silhouette information, full-body geometry, temporal pose priors, and occlusion reasoning into a unified MAP estimation framework. Our 3D tracking process, however, requires manual initialization and recovery from failures. We address this challenge by combining 3D tracking with 3D pose detection. This combination not only automates the whole process but also significantly improves the robustness and accuracy of the system. Our whole algorithm is highly parallel and is therefore easily implemented on a GPU. We demonstrate the power of our approach by capturing a wide range of human movements in real time and achieve state-of-the-art accuracy in our comparison against alternative systems such as Kinect [2012].
Prasugrel versus clopidogrel in patients with acute coronary syndromes.
BACKGROUND Dual-antiplatelet therapy with aspirin and a thienopyridine is a cornerstone of treatment to prevent thrombotic complications of acute coronary syndromes and percutaneous coronary intervention. METHODS To compare prasugrel, a new thienopyridine, with clopidogrel, we randomly assigned 13,608 patients with moderate-to-high-risk acute coronary syndromes with scheduled percutaneous coronary intervention to receive prasugrel (a 60-mg loading dose and a 10-mg daily maintenance dose) or clopidogrel (a 300-mg loading dose and a 75-mg daily maintenance dose), for 6 to 15 months. The primary efficacy end point was death from cardiovascular causes, nonfatal myocardial infarction, or nonfatal stroke. The key safety end point was major bleeding. RESULTS The primary efficacy end point occurred in 12.1% of patients receiving clopidogrel and 9.9% of patients receiving prasugrel (hazard ratio for prasugrel vs. clopidogrel, 0.81; 95% confidence interval [CI], 0.73 to 0.90; P<0.001). We also found significant reductions in the prasugrel group in the rates of myocardial infarction (9.7% for clopidogrel vs. 7.4% for prasugrel; P<0.001), urgent target-vessel revascularization (3.7% vs. 2.5%; P<0.001), and stent thrombosis (2.4% vs. 1.1%; P<0.001). Major bleeding was observed in 2.4% of patients receiving prasugrel and in 1.8% of patients receiving clopidogrel (hazard ratio, 1.32; 95% CI, 1.03 to 1.68; P=0.03). Also greater in the prasugrel group was the rate of life-threatening bleeding (1.4% vs. 0.9%; P=0.01), including nonfatal bleeding (1.1% vs. 0.9%; hazard ratio, 1.25; P=0.23) and fatal bleeding (0.4% vs. 0.1%; P=0.002). CONCLUSIONS In patients with acute coronary syndromes with scheduled percutaneous coronary intervention, prasugrel therapy was associated with significantly reduced rates of ischemic events, including stent thrombosis, but with an increased risk of major bleeding, including fatal bleeding. Overall mortality did not differ significantly between treatment groups. (ClinicalTrials.gov number, NCT00097591.)
Generating Single Subject Activity Videos as a Sequence of Actions Using 3D Convolutional Generative Adversarial Networks
Humans have the remarkable ability of imagination, where within the human mind virtual simulations are done of scenarios, whether visual, auditory or involving any other senses. These imaginations are based on experiences during interaction with the real world, where human senses help the mind understand its surroundings. Such a level of imagination has not yet been achieved with current algorithms, but a current trend in deep learning architectures, Generative Adversarial Networks (GANs), has proven capable of generating new and interesting images or videos based on training data. In that way, GANs can be used to mimic human imagination, where the generated visuals of GANs are based on the data used during training. In this paper, we use a combination of Long Short-Term Memory (LSTM) networks and 3D GANs to generate videos. We use a 3D convolutional GAN to generate new human action videos based on trained data. The generated human action videos are used to produce longer videos consisting of a sequence of short actions, combined to create longer and more complex activities. To generate the required sequence of actions, we use an LSTM network to translate a simple input description text into the required sequence of actions. The generated chunks are then concatenated using a motion interpolation scheme to form a single video consisting of many generated actions. Hence a visualization of the input text description is generated as a video of a subject performing the activity described.
Exploration for Multi-task Reinforcement Learning with Deep Generative Models
Exploration in multi-task reinforcement learning is critical in training agents to deduce the underlying MDP. Many existing exploration frameworks, such as E^3, R-max, and Thompson sampling, assume a single stationary MDP and are not suitable for system identification in the multi-task setting. We present a novel method to facilitate exploration in multi-task reinforcement learning using deep generative models. We supplement our method with a low-dimensional energy model to learn the underlying MDP distribution and provide a resilient and adaptive exploration signal to the agent. We evaluate our method on a new set of environments and provide an intuitive interpretation of our results.
Glutathione Precursor, N-Acetyl-Cysteine, Improves Mismatch Negativity in Schizophrenia Patients
In schizophrenia patients, glutathione dysregulation at the gene, protein and functional levels leads to N-methyl-D-aspartate (NMDA) receptor hypofunction. These patients also exhibit deficits in auditory sensory processing that manifest as impaired mismatch negativity (MMN), an auditory evoked potential (AEP) component related to NMDA receptor function. N-acetyl-cysteine (NAC), a glutathione precursor, was administered to patients to determine whether increased levels of brain glutathione would improve MMN and, by extension, NMDA function. A randomized, double-blind, cross-over protocol was conducted, entailing the administration of NAC (2 g/day) for 60 days and then placebo for another 60 days (or vice versa). 128-channel AEPs were recorded during a frequency oddball discrimination task at protocol onset, at the point of cross-over, and at the end of the study. At the onset of the protocol, the MMN of patients was significantly impaired compared to sex- and age-matched healthy controls (p=0.003), without any evidence of concomitant P300 component deficits. Treatment with NAC significantly improved MMN generation compared with placebo (p=0.025) without any measurable effects on the P300 component. MMN improvement was observed in the absence of robust changes in assessments of clinical severity, though the latter was observed in a larger and more prolonged clinical study. This pattern suggests that MMN enhancement may precede changes in indices of clinical severity, highlighting the possible utility of AEPs as a biomarker of treatment efficacy. The improvement of this functional marker may indicate an important pathway towards new therapeutic strategies that target glutathione dysregulation in schizophrenia.
Policy Choices in Social Work Education: Market Model vs. Central Theory
This article assesses the consequences of what might be termed, from a social policy perspective, a “market model” of social work education. Because this model results in a highly decentralized and relatively unpredictable educational outcome, the author argues for renewed attention to a unitary theory for social work practice. The basis of this theoretical approach is drawn from the physical sciences and focuses on the nature of change as the central dynamic. An attempt is made to identify principles of change that illuminate and reinforce a fundamental social work perspective.
GLAD: Global-Local-Alignment Descriptor for Pedestrian Retrieval
The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of person Re-Identification (Re-ID). Moreover, efficient Re-ID systems are required to cope with the massive visual data produced by video surveillance systems. To solve these problems, this work proposes a Global-Local-Alignment Descriptor (GLAD) and an efficient indexing and retrieval framework, respectively. GLAD explicitly leverages the local and global cues in the human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to eliminate the huge redundancy in the gallery set and accelerate the online Re-ID procedure. Extensive experimental results show that GLAD achieves competitive accuracy compared to the state-of-the-art methods. Our retrieval framework significantly accelerates the online Re-ID procedure without loss of accuracy. Therefore, this work holds promise for person Re-ID tasks in real scenarios.
Face Sketch–Photo Synthesis and Retrieval Using Sparse Representation
Sketch-photo synthesis plays an important role in sketch-based face photo retrieval and photo-based face sketch retrieval systems. In this paper, we propose an automatic sketch-photo synthesis and retrieval algorithm based on sparse representation. The proposed sketch-photo synthesis method works at patch level and is composed of two steps: sparse neighbor selection (SNS) for an initial estimate of the pseudoimage (pseudosketch or pseudophoto) and sparse-representation-based enhancement (SRE) for further improving the quality of the synthesized image. SNS can find closely related neighbors adaptively and then generate an initial estimate for the pseudoimage. In SRE, a coupled sparse representation model is first constructed to learn the mapping between sketch patches and photo patches, and a patch-derivative-based sparse representation method is subsequently applied to enhance the quality of the synthesized photos and sketches. Finally, four retrieval modes, namely, sketch-based, photo-based, pseudosketch-based, and pseudophoto-based retrieval are proposed, and a retrieval algorithm is developed by using sparse representation. Extensive experimental results illustrate the effectiveness of the proposed face sketch-photo synthesis and retrieval algorithms.
Generative design in minecraft (GDMC): settlement generation competition
This paper introduces the settlement generation competition for Minecraft, the first part of the Generative Design in Minecraft challenge. The settlement generation competition is about creating Artificial Intelligence (AI) agents that can produce functional, aesthetically appealing and believable settlements adapted to a given Minecraft map---ideally at a level that can compete with human created designs. The aim of the competition is to advance procedural content generation for games, especially in overcoming the challenges of adaptive and holistic PCG. The paper introduces the technical details of the challenge, but mostly focuses on what challenges this competition provides and why they are scientifically relevant.
What Every Programmer Should Know About Memory
As CPU cores become both faster and more numerous, the limiting factor for most programs is now, and will be for some time, memory access. Hardware designers have come up with ever more sophisticated memory handling and acceleration techniques, such as CPU caches, but these cannot work optimally without some help from the programmer. Unfortunately, neither the structure nor the cost of using the memory subsystem of a computer or the caches on CPUs is well understood by most programmers. This paper explains the structure of memory subsystems in use on modern commodity hardware, illustrating why CPU caches were developed, how they work, and what programs should do to achieve optimal performance by utilizing them.
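A small experiment in the spirit of the paper's advice (Python/NumPy; exact timings vary by machine): traversing the same array along its contiguous dimension is much faster than striding across it, because sequential access keeps caches and hardware prefetchers effective.

```python
import time
import numpy as np

a = np.random.rand(4096, 4096)   # C (row-major) order: rows are contiguous

def traverse(rows):
    s = 0.0
    for r in rows:               # sum each row (or column view) separately
        s += r.sum()
    return s

t0 = time.perf_counter()
traverse(a)                      # row-wise: sequential, cache-friendly access
t1 = time.perf_counter()
traverse(a.T)                    # column-wise: large strides, frequent cache misses
t2 = time.perf_counter()
print(f"rows: {t1 - t0:.3f}s  columns: {t2 - t1:.3f}s")
```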
Evaluation of food effect on pharmacokinetics of vismodegib in advanced solid tumor patients.
PURPOSE Vismodegib, an orally bioavailable small-molecule Smoothened inhibitor, is approved for treatment of advanced basal cell carcinoma (BCC). We conducted a pharmacokinetic study of vismodegib in patients with advanced solid tumors to explore the effects of food on drug exposure. EXPERIMENTAL DESIGN In part I, patients were randomized to fasting overnight (FO), a high fat meal (HF), or a low fat meal (LF) before a single dose of vismodegib 150 mg. Plasma concentrations of vismodegib were determined by a validated liquid chromatography-tandem mass spectrometry assay. Primary endpoints were C(max) and area under the curve (AUC(0-168)). In part II, patients randomized to FO or HF in part I took vismodegib 150 mg daily after fasting; those randomized to LF took it after a meal. Primary endpoints after two weeks were C(max) and AUC(0-24). RESULTS Sixty (22 FO, 20 HF, 18 LF) and 52 (25 fasting, 27 fed) patients were evaluable for primary endpoints in parts I and II, respectively. Mean C(max) and AUC(0-168) after a single dose were higher in HF than FO patients [ratios of geometric means (90% CI) = 1.75 (1.30, 2.34) and 1.74 (1.25, 2.42), respectively]. There were no significant differences in C(max) or AUC(0-24) between fasting and fed groups after daily dosing. The frequencies of drug-related toxicities were similar in both groups. CONCLUSIONS A HF meal increases plasma exposure to a single dose of vismodegib, but there are no pharmacokinetic or safety differences between fasting and fed groups at steady-state. Vismodegib may be taken with or without food for daily dosing.
A Dual-Band CPW-Fed Inductive Slot-Monopole Hybrid Antenna
A dual-band coplanar waveguide (CPW)-fed hybrid antenna consisting of a 5.4 GHz high-band CPW-fed inductive slot antenna and a 2.4 GHz low-band bifurcated F-shaped monopole antenna is proposed and investigated experimentally. This antenna possesses an appealing characteristic: the CPW-fed inductive slot antenna reinforces and thus improves the radiation efficiency of the bifurcated monopole antenna. Moreover, due to field orthogonality, the resonant frequency and return-loss bandwidth of one band of the proposed hybrid antenna allow almost independent optimization without noticeably affecting those of the other band.
A Pattern Recognition System for Malicious PDF Files Detection
Malicious PDF files have been used to harm computer security during the past two to three years, and modern antivirus tools are proving to be not completely effective against this kind of threat. In this paper an innovative technique, which combines a feature extractor module strongly related to the structure of PDF files with an effective classifier, is presented. This system has proven to be more effective than other state-of-the-art research tools for malicious PDF detection, as well as than most commercial antivirus products. Moreover, its flexibility allows adopting it either as a stand-alone tool or as a plug-in to improve the performance of an already installed antivirus.
Pose Estimation from Line Correspondences using Direct Linear Transformation
The evaluated methods are: Ansar, the method by Ansar and Daniilidis (2003), implementation from Xu et al. (2016); Mirzaei, the method by Mirzaei and Roumeliotis (2011); RPnL, the method by Zhang et al. (2013); ASPnL, LPnL Bar LS and LPnL Bar ENull, the methods by Xu et al. (2016); DLT-Lines, the method by Hartley and Zisserman (2004, p. 180), our implementation; DLT-Plücker-Lines, the method by Přibyl et al. (2015), our implementation; and DLT-Combined-Lines, the proposed method.
Closed-loop control of blood glucose level in type-1 diabetics: A simulation study
The study of diabetes is one of the popular fields of biomedical signal processing. In this paper, a closed-loop system that utilizes a modified Stolwijk-Hardy glucose-insulin interaction model is considered. The modified model was derived by adding an exogenous insulin infusion term. Two control algorithms are used for exogenous insulin infusion: a Mamdani-type fuzzy logic controller (FLC) and a fuzzy-PID controller. Simulations are performed to assess the control function in terms of keeping the desired steady-state plasma glucose level (0.81 mg/ml) against an exogenous glucose input. Simulation results are notable and significant in terms of controlling the blood glucose level (BGL). The control algorithms applied to the model are proposed here for the first time; this study therefore contributes to the literature.
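A minimal sketch of the discrete PID component of such a control loop (Python; the one-compartment glucose dynamics, gains, and disturbance are toy assumptions standing in for the modified Stolwijk-Hardy model and the paper's tuned fuzzy-PID controller):

```python
# Discrete PID controller driving plasma glucose toward the 0.81 mg/ml setpoint.
SETPOINT = 0.81            # desired plasma glucose (mg/ml)
KP, KI, KD = 5.0, 0.5, 1.0 # illustrative gains, not the paper's tuning
DT = 1.0                   # time step (min)

def simulate(steps=200, meal_at=50):
    g = SETPOINT           # plasma glucose level
    integral, prev_err = 0.0, 0.0
    for t in range(steps):
        err = g - SETPOINT                   # positive error -> infuse insulin
        integral += err * DT
        deriv = (err - prev_err) / DT
        insulin = max(0.0, KP * err + KI * integral + KD * deriv)
        prev_err = err
        meal = 0.3 if t == meal_at else 0.0  # exogenous glucose disturbance
        # Toy dynamics: glucose relaxes toward baseline; insulin removes glucose.
        g += DT * (-0.05 * (g - SETPOINT) - 0.02 * insulin) + meal
    return g                                 # ends near the setpoint
```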
Validation of a Chinese version of the Five Facet Mindfulness Questionnaire in Hong Kong and development of a short form.
Mindfulness-based interventions are increasingly being used in various populations to improve well-being and reduce psychological afflictions. However, there is a lack of validated mindfulness measures in the Chinese language. This study validated the Chinese version of the Five Facet Mindfulness Questionnaire (FFMQ-C) in both a community sample of 230 adults and a clinical sample of 156 patients with significant psychological distress. Results showed good test-retest reliability (.88) and high internal consistency (.83 in the community sample and .80 in the clinical sample). Mindfulness as measured by the FFMQ-C has moderate to large correlations with psychological distress and mental well-being. Two of the five subscales (describing and acting with awareness) showed incremental validity over the others in predicting psychological symptoms and mental health. Confirmatory factor analysis confirmed the five-factor structure of the FFMQ-C and demonstrated adequate model fit. A 20-item short form (FFMQ-SF) was developed using the proposed comprehensive criteria. These findings indicate that the FFMQ-C is reliable and valid for measuring mindfulness in a Chinese population. Further study is needed to evaluate the psychometric properties of the FFMQ-SF.
From MAP to Marginals: Variational Inference in Bayesian Submodular Models
Submodular optimization has found many applications in machine learning and beyond. We carry out the first systematic investigation of inference in probabilistic models defined through submodular functions, generalizing regular pairwise MRFs and Determinantal Point Processes. In particular, we present L-FIELD, a variational approach to general log-submodular and log-supermodular distributions based on sub- and supergradients. We obtain both lower and upper bounds on the log-partition function, which enables us to compute probability intervals for marginals, conditionals and marginal likelihoods. We also obtain fully factorized approximate posteriors, at the same computational cost as ordinary submodular optimization. Our framework results in convex problems for optimizing over differentials of submodular functions, which we show how to solve optimally. We provide theoretical guarantees of the approximation quality with respect to the curvature of the function. We further establish natural relations between our variational approach and the classical mean-field method. Lastly, we empirically demonstrate the accuracy of our inference scheme on several submodular models.
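The key tractability fact behind such sub-/supergradient bounds is that partition functions of modular functions factorize (a sketch of the reasoning, not the paper's full derivation): if a modular function $m(A) = \sum_{i \in A} m_i$ lower-bounds the submodular $F$, then

```latex
Z = \sum_{A \subseteq V} e^{F(A)}
  \;\ge\; \sum_{A \subseteq V} e^{m(A)}
  = \sum_{A \subseteq V} \prod_{i \in A} e^{m_i}
  = \prod_{i \in V} \left(1 + e^{m_i}\right),
```

so $\log Z \ge \sum_{i \in V} \log(1 + e^{m_i})$ is computable in linear time; modular upper bounds from the superdifferential yield the matching upper bound on the log-partition function.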
Extraction of Pharmacokinetic Evidence of Drug–Drug Interactions from the Literature
Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.
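A minimal sketch of the kind of linear-classifier pipeline the study evaluates (scikit-learn; the word-bigram features mirror the abstract's finding, while the toy data and parameter choices are illustrative assumptions, not the study's corpus or configuration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-ins for annotated sentences: 1 = contains pharmacokinetic DDI evidence.
sentences = ["Coadministration increased the AUC of midazolam threefold",
             "The tablet formulation was stable at room temperature"] * 20
labels = [1, 0] * 20

# Word unigram + bigram TF-IDF features feeding a linear classifier.
pipeline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(pipeline, sentences, labels, cv=5, scoring="f1")
print(scores.mean())
```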
The study of once- and twice-daily biphasic insulin aspart 30 (BIAsp 30) with sitagliptin, and twice-daily BIAsp 30 without sitagliptin, in patients with type 2 diabetes uncontrolled on sitagliptin and metformin-The Sit2Mix trial.
AIMS Investigate efficacy and tolerability of intensifying diabetes treatment with once- or twice-daily biphasic insulin aspart 30 (BIAsp 30) added to sitagliptin, and twice-daily BIAsp 30 without sitagliptin in patients with type 2 diabetes (T2D) inadequately controlled on sitagliptin. METHODS Open-label, three-arm, 24-week trial; 582 insulin-naïve patients were randomized to twice-daily BIAsp 30+sitagliptin (BIAsp BID+Sit), once-daily BIAsp 30+sitagliptin (BIAsp QD+Sit) or twice-daily BIAsp 30 without sitagliptin (BIAsp BID), all with metformin. RESULTS After 24 weeks, HbA1c reduction (%) was superior with BIAsp BID+Sit vs. BIAsp QD+Sit (BIAsp BID+Sit minus BIAsp QD+Sit difference: -0.36 [95% CI -0.54; -0.17], P<0.001) and BIAsp BID (BIAsp BID minus BIAsp BID+Sit difference: 0.24 [0.06; 0.43], P=0.01). Observed final HbA1c values were 6.9%, 7.2% and 7.1% (baseline 8.4%), and 59.8%, 46.5% and 49.7% of patients achieved HbA1c <7.0%, respectively. Confirmed hypoglycaemia was lower with BIAsp QD+Sit vs. BIAsp BID (P=0.015); rate: 1.17 (BIAsp QD+Sit), 1.50 (BIAsp BID+Sit) and 2.24 (BIAsp BID) episodes/patient-year. Difference in bodyweight change favoured BIAsp QD+Sit vs. both BID groups (P<0.001). CONCLUSIONS Adding BIAsp 30 to patients with T2D poorly controlled with sitagliptin and metformin is efficacious and well tolerated; however, while BIAsp BID+Sit showed superior glycaemic control, BIAsp QD+Sit had a lower rate of hypoglycaemia and showed no weight gain.
Bell: bit-encoding online memory leak detection
Memory leaks compromise availability and security by crippling performance and crashing programs. Leaks are difficult to diagnose because they have no immediate symptoms. Online leak detection tools benefit from storing and reporting per-object sites (e.g., allocation sites) for potentially leaking objects. In programs with many small objects, per-object sites add high space overhead, limiting their use in production environments. This paper introduces Bit-Encoding Leak Location (Bell), a statistical approach that encodes per-object sites to a single bit per object. A bit loses information about a site, but given sufficient objects that use the site and a known, finite set of possible sites, Bell uses brute-force decoding to recover the site with high accuracy. We use this approach to encode object allocation and last-use sites in Sleigh, a new leak detection tool. Sleigh detects stale objects (objects unused for a long time) and uses Bell decoding to report their allocation and last-use sites. Our implementation steals four unused bits in the object header and thus incurs no per-object space overhead. Sleigh's instrumentation adds 29% execution time overhead, which adaptive profiling reduces to 11%. Sleigh's output is directly useful for finding and fixing leaks in SPEC JBB2000 and Eclipse, although sufficiently many objects must leak before Bell decoding can report sites with confidence. Bell is suitable for other leak detection approaches that store per-object sites, and for other problems amenable to statistical per-object metadata.
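A toy sketch of the Bell idea (Python; the hash choice and object counts are illustrative, not Sleigh's implementation): each object stores a single bit computed from (object identity, site); decoding tries every known site and keeps the one whose predicted bits match the stale objects far more often than the roughly 50% a wrong site achieves.

```python
import hashlib
import random

def site_bit(obj_id, site):
    # Encode a site into one bit per object via a hash of the (object, site) pair.
    digest = hashlib.md5(f"{obj_id}:{site}".encode()).digest()
    return digest[0] & 1

sites = [f"alloc_site_{i}" for i in range(100)]   # known, finite set of sites
true_site = "alloc_site_42"

# Simulate stale objects all allocated at the (unknown-to-the-decoder) true site;
# the runtime kept only each object's id and its single encoded bit.
stale = [(oid, site_bit(oid, true_site))
         for oid in (random.getrandbits(64) for _ in range(500))]

def decode(stale_objects, candidate_sites):
    # Brute force: the true site matches every stored bit; impostors match ~half.
    def match_count(site):
        return sum(bit == site_bit(oid, site) for oid, bit in stale_objects)
    return max(candidate_sites, key=match_count)

assert decode(stale, sites) == true_site
```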
Machine translation using deep learning: An overview
This paper presents an overview of deep neural networks (DNNs) and the concept of deep learning in the field of natural language processing, specifically machine translation. Nowadays, DNNs play a major role in machine learning techniques. The recursive recurrent neural network (R2NN) is a strong technique for machine translation; it combines a recurrent neural network and a recursive neural network (such as a recursive auto-encoder). This paper presents how to train the recurrent neural network to reorder from the source to the target language using semi-supervised learning methods. The word2vec tool is required to generate word vectors for the source language, and the auto-encoder helps in reconstructing the vectors for the target language in a tree structure. The word2vec results play an important role in the word alignment of the input vectors. The RNN structure is very complicated, and training a large data file with word2vec is also a time-consuming task. Hence, powerful hardware support (a GPU) is required; the GPU improves system performance by decreasing the training time.
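A minimal example of generating source-language word vectors with the word2vec tool mentioned above (via the gensim library, version 4 or later; the toy corpus and parameter values are illustrative assumptions):

```python
from gensim.models import Word2Vec

# Tokenized source-language sentences (toy corpus).
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]] * 50

# Train skip-gram word vectors; these would feed downstream alignment and
# auto-encoder stages in a pipeline like the one described above.
model = Word2Vec(corpus, vector_size=100, window=5, min_count=1,
                 sg=1, workers=4, epochs=10)

vec = model.wv["cat"]                         # 100-dimensional word vector
print(model.wv.most_similar("cat", topn=3))   # nearest neighbors in vector space
```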
Burn-In Demonstrations for Multi-Modal Imitation Learning
Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include “burn-in demonstrations” upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.
Reliability Qualification of CoSi2 Electrical Fuse for 90Nm Technology
The reliability of a CoSi2/p-poly Si electrical fuse (eFUSE) programmed by electromigration for 90 nm technology is presented. Both programmed and unprogrammed fuse elements were shown to be stable through extensive reliability evaluations. A qualification methodology is demonstrated to define an optimized, reliable electrical fuse programming window by combining fuse resistance measurements, physical analysis, and functional sensing data. This methodology addresses the impact on electrical fuse reliability caused by process variation and device degradation (e.g., NBTI) in the sensing circuit, and allows an adequate margin to ensure electrical fuse reliability over the chip lifetime.
A selective endothelin-receptor antagonist to reduce blood pressure in patients with treatment-resistant hypertension: a randomised, double-blind, placebo-controlled trial
BACKGROUND Hypertension cannot always be adequately controlled with available drugs. We investigated the blood-pressure-lowering effects of the new vasodilatory, selective endothelin type A antagonist, darusentan, in patients with treatment-resistant hypertension. METHODS This randomised, double-blind study was undertaken in 117 sites in North and South America, Europe, New Zealand, and Australia. 379 patients with systolic blood pressure of 140 mm Hg or more (>/=130 mm Hg if patient had diabetes or chronic kidney disease) who were receiving at least three blood-pressure-lowering drugs, including a diuretic, at full or maximum tolerated doses were randomly assigned to 14 weeks' treatment with placebo (n=132) or darusentan 50 mg (n=81), 100 mg (n=81), or 300 mg (n=85) taken once daily. Randomisation was made centrally via an automated telephone system, and patients and all investigators were masked to treatment assignments. The primary endpoints were changes in sitting systolic and diastolic blood pressures. Analysis was by intention to treat. The study is registered with ClinicalTrials.gov, number NCT00330369. FINDINGS All randomly assigned participants were analysed. The mean reductions in clinic systolic and diastolic blood pressures were 9/5 mm Hg (SD 14/8) with placebo, 17/10 mm Hg (15/9) with darusentan 50 mg, 18/10 mm Hg (16/9) with darusentan 100 mg, and 18/11 mm Hg (18/10) with darusentan 300 mg (p<0.0001 for all effects). The main adverse effects were related to fluid accumulation. Oedema or fluid retention occurred in 67 (27%) patients given darusentan compared with 19 (14%) given placebo. One patient in the placebo group died (sudden cardiac death), and five patients in the three darusentan dose groups combined had cardiac-related serious adverse events. INTERPRETATION Darusentan provides additional reduction in blood pressure in patients who have not attained their treatment goals with three or more antihypertensive drugs. As with other vasodilatory drugs, fluid management with effective diuretic therapy might be needed. FUNDING Gilead Sciences.
Viewpoint-independent book spine segmentation
We propose a method to precisely segment books on bookshelves in images taken from general viewpoints. The proposed segmentation algorithm overcomes difficulties due to text and texture on book spines, various book orientations under perspective projection, and book proximity. A shape-dependent active contour is used as a first step to establish a set of book spine candidates. A subset of these candidates is selected using spatial constraints on the assembly of spine candidates by formulating the selection problem as the maximal weighted independent set (MWIS) of a graph. The segmented book spines may be used by recognition systems (e.g., library automation), or rendered in computer graphics applications. We also propose a novel application that uses the segmented book spines to assist users in bookshelf reorganization or to modify the image to create a bookshelf with a tidier look. Our method was successfully tested on challenging sets of images.
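For illustration, the MWIS selection step can be prototyped via the standard reduction to a maximum-weight clique in the complement graph (Python/NetworkX; the node weights and conflict edges are placeholders, and this is not the paper's solver):

```python
import networkx as nx

# Nodes are spine candidates; edges connect mutually incompatible candidates
# (e.g., spatially overlapping spines). Integer weights score candidate quality.
G = nx.Graph()
G.add_nodes_from([(0, {"weight": 5}), (1, {"weight": 3}),
                  (2, {"weight": 4}), (3, {"weight": 2})])
G.add_edges_from([(0, 1), (1, 2)])   # candidate 1 conflicts with 0 and 2

# An MWIS of G is exactly a maximum-weight clique of the complement of G.
H = nx.complement(G)
for n, data in G.nodes(data=True):   # nx.complement does not copy node attributes
    H.nodes[n]["weight"] = data["weight"]

selected, total = nx.max_weight_clique(H, weight="weight")
print(selected, total)               # here: {0, 2, 3} with total weight 11
```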
Study of PEV Charging on Residential Distribution Transformer Life
Due to the characteristics of electric power generation, transmission and distribution in the U.S., experts have identified local distribution as a likely part of the chain to be adversely affected by unregulated PEV (Plug-in Electric Vehicle) charging. This paper presents a study performed to assess the impact of PEV charging on a local residential distribution transformer.
Digital signatures and electronic documents: a cautionary tale
Often, the main motivation for using PKI in business environments is to streamline workflow, by enabling humans to digitally sign electronic documents, instead of manually signing paper ones. However, this application fails if adversaries can construct electronic documents whose viewed contents can change in useful ways, without invalidating the digital signature. In this paper, we examine the space of such attacks, and describe how many popular electronic document formats and PKI packages permit them.
José Lezama Lima, poet of the image
This book analyses the life and work of the Spanish American poet José Lezama Lima, placing his work in its artistic, cultural and political context and examining his theory of the poetic system of the world. The author aims to make Lezama's work more accessible.
Bank Liquidity Creation
Although the modern theory of financial intermediation portrays liquidity creation as an essential role of banks, comprehensive measures of bank liquidity creation do not exist. We construct four measures and apply them to data on U.S. banks from 1993 to 2003. We find that bank liquidity creation increased every year and exceeded $2.8 trillion in 2003. Large banks, multibank holding company members, retail banks, and recently merged banks create the most liquidity. Bank liquidity creation is positively correlated with bank value. Testing recent theories of the relationship between capital and liquidity creation, we find that the relationship is positive for large banks and negative for small banks.
The Discrete Shearlet Transform: A New Directional Transform and Compactly Supported Shearlet Frames
It is now widely acknowledged that analyzing the intrinsic geometrical features of the underlying image is essential in many applications, including image processing. To achieve this, several directional image representation schemes have been proposed. In this paper, we develop the discrete shearlet transform (DST), which provides an efficient multiscale directional representation, and show that the transform can be implemented in a discrete framework based on multiresolution analysis (MRA). We assess the performance of the DST in image denoising and approximation applications. In image approximation, our scheme using the DST outperforms the discrete wavelet transform (DWT) while its computational cost is comparable to the DWT. Also, in image denoising, the DST compares favorably with other existing transforms in the literature.
Asthma control cost-utility randomized trial evaluation (ACCURATE): the goals of asthma treatment
BACKGROUND Despite the availability of effective therapies, asthma remains a source of significant morbidity and use of health care resources. The central research question of the ACCURATE trial is whether maximal doses of (combination) therapy should be used for long periods in an attempt to achieve complete control of all features of asthma. An additional question is whether patients and society value the potential incremental benefit, if any, sufficiently to concur with such a treatment approach. We assessed patient preferences and cost-effectiveness of three treatment strategies aimed at achieving different levels of clinical control: (1) sufficiently controlled asthma; (2) strictly controlled asthma; (3) strictly controlled asthma based on exhaled nitric oxide as an additional disease marker. DESIGN 720 patients with mild to moderate persistent asthma from general practices with a practice nurse, age 18-50 yr, on daily treatment with inhaled corticosteroids (more than 3 months' use of inhaled corticosteroids in the previous year), will be identified via patient registries of general practices in the Leiden, Nijmegen, and Amsterdam areas in The Netherlands. The design is a 12-month cluster-randomised parallel trial with 40 general practices in each of the three arms. The patients will visit the general practice at baseline, 3, 6, 9, and 12 months. At each planned and unplanned visit to the general practice, treatment will be adjusted with support of an internet-based asthma monitoring system supervised by a central coordinating specialist nurse. Patient preferences and utilities will be assessed by questionnaire and interview. Data on asthma control, treatment step, adherence to treatment, utilities and costs will be obtained every 3 months and at each unplanned visit. Differences in societal costs (medication, other (health) care and productivity) will be compared to differences in the number of limited activity days and in quality adjusted life years (Dutch EQ5D, SF6D, e-TTO, VAS). This is the first study to assess patient preferences and cost-effectiveness of asthma treatment strategies driven by different target levels of asthma control. TRIAL REGISTRATION Netherlands Trial Register (NTR): NTR1756.
Semantics of programming languages
abstract class Expression {
    abstract Expression smallStep(State state) throws CanNotReduce;
    abstract Type typeCheck(Environment env) throws TypeError;
}

abstract class Value extends Expression {
    // Values are normal forms: attempting to reduce one raises CanNotReduce.
    final Expression smallStep(State state) throws CanNotReduce {
        throw new CanNotReduce("I'm a value");
    }
}

class CanNotReduce extends Exception {
    CanNotReduce(String reason) { super(reason); }
}

class TypeError extends Exception {
    TypeError(String reason) { super(reason); }
}

class Bool extends Value {
    boolean value;
    Bool(boolean b) { value = b; }
    public String toString() { return value ? "TRUE" : "FALSE"; }
    Type typeCheck(Environment env) throws TypeError { return Type.BOOL; }
}

class Int extends Value {
    int value;
    Int(int i) { value = i; }
    public String toString() { return "" + value; }
    Type typeCheck(Environment env) throws TypeError { return Type.INT; }
}

class Skip extends Value {
    public String toString() { return "SKIP"; }
    Type typeCheck(Environment env) throws TypeError { return Type.UNIT; }
}

// Minimal supporting declarations assumed by the fragment above (not part of
// the original excerpt): the type universe, the mutable store, and the typing
// environment.
enum Type { BOOL, INT, UNIT }
class State { }
class Environment { }
The Implications and Impacts of Web Services to Electronic Commerce Research and Practices
Web services refer to a family of technologies that can universally standardize the communication of applications in order to connect systems, business partners, and customers cost-effectively through the World Wide Web. Major software vendors such as IBM, Microsoft, SAP, SUN, and Oracle are all embracing Web services standards and are releasing new products or tools that are Web services enabled. Web services will ease the constraints of time, cost, and space for discovering, negotiating, and conducting e-business transactions. As a result, Web services will change the way businesses design their applications as services, integrate with other business entities, manage business process workflows, and conduct e-business transactions. The early adopters of Web services are showing promising results such as greater development productivity gains and easier and faster integration with trading partners. However, there are many issues worth studying regarding Web services in the context of e-commerce. This special issue of the JECR aims to encourage awareness and discussion of important issues and applications of Web services that are related to electronic commerce from the organizational, economics, and technical perspectives. Research opportunities in the Web services and e-commerce area are plentiful and important for both academics and practitioners. We hope that this introductory article sheds some light that helps researchers and practitioners better understand important issues and future trends of Web services and e-business.
Improved Shortest Path Maps with GPU Shaders
We present in this paper several improvements for computing shortest path maps using OpenGL shaders [1]. The approach explores GPU rasterization as a way to propagate optimal costs on a polygonal 2D environment, producing shortest path maps which can be efficiently queried at run-time. Our improved method relies on compute shaders for better performance, does not require any CPU pre-computation, and handles shortest path maps both with source points and with line segment sources. The produced path maps partition the input environment into regions sharing the same parent point along the shortest path to the closest source point or segment source. Our method produces paths with global optimality, a characteristic which has been mostly neglected in animated virtual environments. The proposed approach is particularly suitable for the animation of multiple agents moving toward the entrances or exits of a virtual environment, a situation which is efficiently represented with the proposed path maps.
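The shortest path map concept (labelling every location with the source region that owns it along the optimal path) can be illustrated on a toy uniform grid with a multi-source Dijkstra. This is a CPU sketch of the concept only, not the paper's shader-based method on polygonal environments:

import heapq

def shortest_path_map(grid_w, grid_h, sources):
    """Label every cell of a 4-connected grid with the index of its
    closest source, i.e. a discrete analogue of a shortest path map."""
    dist = {(x, y): float("inf") for x in range(grid_w) for y in range(grid_h)}
    owner = {}
    pq = []
    for i, s in enumerate(sources):
        dist[s] = 0.0
        owner[s] = i
        heapq.heappush(pq, (0.0, s, i))
    while pq:
        d, (x, y), i = heapq.heappop(pq)
        if d > dist[(x, y)]:
            continue  # stale queue entry
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_w and 0 <= ny < grid_h and d + 1 < dist[(nx, ny)]:
                dist[(nx, ny)] = d + 1
                owner[(nx, ny)] = i
                heapq.heappush(pq, (d + 1, (nx, ny), i))
    return owner  # O(1) query: owner[cell] gives the closest-source region

print(shortest_path_map(4, 1, [(0, 0), (3, 0)]))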
Highly potent and moderately potent topical steroids are effective in treating phimosis: a prospective randomized study.
PURPOSE We report a prospective randomized study comparing the effects of highly potent and moderately potent topical steroids in treating pediatric phimosis. MATERIALS AND METHODS A total of 70 boys 1 to 12 years old with phimosis were randomly assigned to receive topical application of either betamethasone valerate 0.06% (a highly potent steroid) or clobetasone butyrate 0.05% (a moderately potent steroid). Parents of the boys were instructed to retract the foreskin gently without causing pain, and to apply the topical steroids over the stenotic opening of the prepuce twice daily for 4 weeks, then for another 4 weeks if no improvement was achieved. Retractibility of the prepuce was graded from 0 to 5. Response to treatment was arbitrarily defined as improvement in the retractibility score of more than 2 points. RESULTS Mean treatment and followup periods were 4.3 and 19.1 weeks, respectively. The response rates in boys treated with betamethasone valerate and clobetasone butyrate were 81.3% and 77.4%, respectively (p = 0.63). Mean retractibility score decreased from 3.9 ± 1.0 to 1.7 ± 1.1, and 4.2 ± 1.0 to 1.9 ± 1.0 in the betamethasone and clobetasone groups, respectively. Both steroids were effective in all age groups. Pretreatment retractibility score did not affect treatment outcomes. No adverse effect was encountered. CONCLUSIONS Highly potent and moderately potent topical steroids are of comparable effectiveness in treating phimosis. A less potent steroid may be considered first to decrease the risk of the potential adverse effects.
Exploiting Noise Correlation for Channel Decoding with Convolutional Neural Networks
Inspired by the recent advances in deep learning, we propose a novel iterative belief propagation-convolutional neural network (BP-CNN) architecture to exploit noise correlation for channel decoding under correlated noise. The standard BP decoder is used to estimate the coded bits, followed by a CNN to remove the estimation errors of the BP decoder and obtain a more accurate estimation of the channel noise. Iterating between BP and CNN will gradually improve the decoding SNR and hence result in better decoding performance. To train a well-behaved CNN model, we define a new loss function which involves not only the accuracy of the noise estimation but also the normality test for the estimation errors, i.e., to measure how likely the estimation errors follow a Gaussian distribution. The introduction of the normality test to the CNN training shapes the residual noise distribution and further reduces the BER of the iterative decoding, compared to using the standard quadratic loss function. We carry out extensive experiments to analyze and verify the proposed framework.
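The training objective described above combines an estimation-error term with a normality test on the residual. As a rough illustration only, the sketch below uses a moment-based penalty (zero skewness and zero excess kurtosis for a Gaussian) as a stand-in for the paper's normality measure, which is not reproduced here:

import numpy as np

def bp_cnn_loss(noise_est, noise_true, lam=0.1):
    """Quadratic estimation error plus a moment-based normality penalty on
    the residual, encouraging the residual noise to remain Gaussian. The
    Jarque-Bera-style penalty is an illustrative stand-in, not the paper's
    exact loss; lam is an assumed weighting constant."""
    residual = noise_est - noise_true
    mse = np.mean(residual ** 2)
    r = (residual - residual.mean()) / (residual.std() + 1e-12)
    skew = np.mean(r ** 3)             # 0 for a Gaussian
    excess_kurt = np.mean(r ** 4) - 3  # 0 for a Gaussian
    return mse + lam * (skew ** 2 + excess_kurt ** 2 / 4.0)

rng = np.random.default_rng(0)
true_noise = rng.normal(size=1000)
print(bp_cnn_loss(true_noise + 0.1 * rng.normal(size=1000), true_noise))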
Avionic Data acquisition system using MIL STD 1553B controller with IRIG-B timecode decoder
Many critical parameters of rockets and satellites are obtained by means of the telemetry system present onboard. For meaningful analysis of the data, these parameters have to be time stamped. This is achieved by means of the IRIG-B time code proposed by the Inter-Range Instrumentation Group. The data obtained need to be transmitted to various subsystems present on the ground. A bus-based configuration is planned for this interconnection, realised with the MIL-STD-1553B protocol. A data bus is used to provide a medium for the exchange of data and information between various systems. The MIL-STD-1553B bus consists of a bus controller, remote terminals and the transmission medium. The data received from the telemetry system are Manchester-encoded signals which need to be decoded. The objective of the project was to design and develop a MIL-STD-1553B bus controller to transmit command and data words to the other remote terminals. An IRIG-B time decoder module was developed to decode the timing information transmitted from a reference so that the data acquisition system can be used to process the data worldwide. A Manchester data decoder for decoding the data was developed. The system had better synchronization capability and clock independence.
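For illustration, Manchester II (bi-phase) decoding, as used on MIL-STD-1553 buses, reduces to classifying each half-bit pair: a high-to-low pair encodes logic 1 and a low-to-high pair encodes logic 0. A minimal sketch over an already-sampled, already-synchronized bit stream (hardware details such as sync patterns and parity are omitted):

def manchester_decode(samples):
    """Decode Manchester II data sampled at two samples per bit: (1,0)
    encodes logic 1 and (0,1) encodes logic 0; any pair without a mid-bit
    transition is invalid."""
    assert len(samples) % 2 == 0, "need two half-bit samples per bit"
    bits = []
    for first, second in zip(samples[::2], samples[1::2]):
        if (first, second) == (1, 0):
            bits.append(1)
        elif (first, second) == (0, 1):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester pair (no mid-bit transition)")
    return bits

# 1 -> (1,0), 0 -> (0,1): the stream below decodes to [1, 0, 1, 1]
print(manchester_decode([1, 0, 0, 1, 1, 0, 1, 0]))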
Threat Modeling as a Basis for Security Requirements
We routinely hear vendors claim that their systems are “secure.” However, without knowing what assumptions are made by the vendor, it is hard to justify such a claim. Prior to claiming the security of a system, it is important to identify the threats to the system in question. Enumerating the threats to a system helps system architects develop realistic and meaningful security requirements. In this paper, we investigate how threat modeling can be used as a foundation for the specification of security requirements. Although numerous works have been published on threat modeling, there is a lack of an integrated, systematic approach to threat modeling for complex systems. We examine the differences between modeling software products and complex systems, and outline our approach for identifying threats of networked systems. We also present three case studies of threat modeling: Software-Defined Radio, a network traffic monitoring tool (VisFlowConnect), and a cluster security monitoring tool (NVisionCC).
ENCARA: real-time detection of frontal faces
This paper describes a real-time approach for face detection and selection of frontal views for further processing. Typically, face detection papers provide results for a set of single images, but the problem of face detection in video streams is rarely tackled. Instead of performing an exhaustive search on every frame of the video stream, a set of opportunistic ideas, applied in a cascade fashion and based on temporal and spatial coherence, provides promising results in real time.
In sorrow to bring forth children: fertility amidst the plague of HIV
The HIV epidemic is lowering fertility in sub-Saharan Africa. This decline in fertility appears to reflect a fall in the demand for children, and not any adverse physiological consequences of the disease, as it is matched by changes in the expressed preference for children and the use of contraception, and is not significantly correlated with biological markers of sub-fecundity. A fall in fertility lowers dependency ratios and, for a given savings rate, increases future capital per person. These two effects more than offset the loss of prime working age adults and reduced human capital of orphaned children brought by the epidemic, allowing 27 of the nations of sub-Saharan Africa to cumulatively spend US$ 650 billion, or $5100 per dying adult AIDS victim, on patient care without harming the welfare of future generations. In sum, the behavioral response to the HIV epidemic creates the material resources to fight it.
Tactical Decision Making for Lane Changing with Deep Reinforcement Learning
In this paper we consider the problem of autonomous lane changing for self-driving cars in a multi-lane, multi-agent setting. We present a framework that demonstrates a more structured and data-efficient alternative to end-to-end complete policy learning on problems where the high-level policy is hard to formulate using traditional optimization or rule-based methods but well-designed low-level controllers are available. The framework uses deep reinforcement learning solely to obtain a high-level policy for tactical decision making, while still maintaining a tight integration with the low-level controller, thus getting the best of both worlds. This is possible with Q-masking, a technique with which we are able to incorporate prior knowledge, constraints and information from a low-level controller directly into the learning process, thereby simplifying the reward function and making learning faster and more efficient. We provide preliminary results in a simulator and show our approach to be more efficient than a greedy baseline, and more successful and safer than human driving.
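The core of Q-masking is simple: before taking the greedy action, Q-values of actions ruled out by prior knowledge or the low-level controller are overwritten so they can never be selected. A minimal sketch (the action set and mask below are illustrative assumptions, not the paper's exact setup):

import numpy as np

def masked_action(q_values, valid_mask):
    """Q-masking: set Q-values of invalid actions to -inf so the greedy
    policy can only choose actions allowed by prior knowledge or the
    low-level controller (e.g. no left lane change from the leftmost lane)."""
    masked = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked))

q = np.array([0.7, 1.2, 0.3])          # e.g. [change left, keep lane, change right]
mask = np.array([False, True, True])   # left change currently invalid
print(masked_action(q, mask))          # -> 1: best among the valid actions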
Fuse: A Reproducible, Extendable, Internet-Scale Corpus of Spreadsheets
Spreadsheets are perhaps the most ubiquitous form of end-user programming software. This paper describes a corpus, called Fuse, containing 2,127,284 URLs that return spreadsheets (and their HTTP server responses), and 249,376 unique spreadsheets, contained within a public web archive of over 26.83 billion pages. Obtained using nearly 60,000 hours of computation, the resulting corpus exhibits several useful properties over prior spreadsheet corpora, including reproducibility and extendability. Our corpus is unencumbered by any license agreements, available to all, and intended for wide usage by end-user software engineering researchers. In this paper, we detail the data and the spreadsheet extraction process, describe the data schema, and discuss the trade-offs of Fuse with other corpora.
Overweight, obesity and cancer: epidemiological evidence and proposed mechanisms
The prevalence of obesity is rapidly increasing globally. Epidemiological studies have associated obesity with a range of cancer types, although the mechanisms by which obesity induces or promotes tumorigenesis vary by cancer site. These include insulin resistance and resultant chronic hyperinsulinaemia, increased bioavailability of steroid hormones and localized inflammation. Gaining a better understanding of the relationship between obesity and cancer can provide new insight into mechanisms of cancer pathogenesis.
Friends, Fans, and Followers: Do Ads Work on Social Networks? How Gender and Age Shape Receptivity
To generate brand awareness for its Old Spice fragrance line, Procter & Gamble invited Facebook users to “Turn Up Your Man Smell” by becoming “fans” of its products. Within a week, the brand’s fan page had more than 120,000 new fans (Morrisey, 2009). Not content merely to draw fans to its Facebook page, the Red Robin restaurant chain enlisted Facebook users as “brand ambassadors,” asking them to send pre-written recommendations to online friends. Some 1,500 customers—each with an average of 150 friends—agreed to post recommendations, which the company estimates resulted in approximately 225,000 positive advertising impressions (York, 2009). Faced with declining sales in the wake of safety recalls, Toyota used a combination of YouTube videos and Facebook pages to promote its Sienna minivan. Creating a fictional couple who “believe they are cool despite all evidence to the contrary” (Elliott, 2010), the automobile manufacturer broadcast a series of videos through the YouTube site, then solicited Facebook fans, combining both forms of social media. Within a few weeks, each of the YouTube videos had been sought out and viewed an estimated 12,000 to 15,000 times, with approximately 2,000 Facebook users signing on as fans of the Sienna. In these and similar cases, social-networking site (SNS) users not only embraced advertising-related content but actively promoted it. Yet, according to one industry-sponsored study, only 22 percent of consumers had a positive attitude toward social media advertising—and 8 percent of consumers studied had abandoned an SNS because of what they perceived as excessive advertising (AdReaction, 2010). For example, although much of the decline in MySpace usage has been due to users’ abandonment of the site in favor of the “next big thing” (i.e., Facebook), many users have suggested that the propensity of unwanted and unsolicited advertising messages contributed significantly to MySpace’s woes (Vara, 2006). These concerns suggest a delicate balancing act for social-networking advertising (SNA). On one hand, advertising provides revenue that enables sites to survive (or, in some instances, to thrive). On the other hand, overt and/or excessive commercialization in the form of advertising can dilute the appeal of SNSs. Thus, the key to successfully integrating advertising into SNSs is consumer acceptance (i.e., positive attitudes toward SNA). Consumers appear to be willing to accept SNA, but sites that do not manage advertising carefully may be perceived as being “populated by pseudo-users who [are] little more than paid corporate shills” (Clemons et al., 2007, p. 275). To help disentangle the paradox, this study develops and tests a model addressing consumer receptivity to SNA.
Differences in seedling growth behaviour among species: trait correlations across species, and trait shifts along nutrient compared to rainfall gradients
1. Species pairs from woody dicot lineages were chosen as phylogenetically independent contrasts (PICs) to represent evolutionary divergences along gradients of rainfall and nutrient stress, and within particular habitat types, in New South Wales, Australia. Seedlings were grown under controlled, favourable conditions and measurements were made for various growth, morphological and allocation traits. 2. Trait correlations across all species were identified, particularly with respect to seedling relative growth rate (RGR) and specific leaf area (SLA), a fundamental measure of allocation strategy that reflects the light-capture area deployed per unit of photosynthate invested in leaves. 3. Across all species, SLA, specific root length (SRL) and seed reserve mass were the strongest predictors of seedling RGR. That is, a syndrome of leaf and root surface maximization and low seed mass was typical of high-RGR plants. This may be a high-risk strategy for individual seedlings, but one presumably mitigated by a larger number of seedlings being produced, increasing the chance that at least one will find itself in a favourable situation. 4. Syndromes of repeated attribute divergence were identified in the two sets of gradient PICs. Species from lower resource habitats generally had lower SLA. Thus, in this important respect the two gradients appeared to be variants of a more general 'stress' gradient. 5. However, trends in biomass allocation, tissue density, root morphology and seed reserve mass differed between gradients. While SLA and RGR tended to shift together along gradients and in within-habitat PICs, no single attribute emerged as the common, primary factor driving RGR divergences within contrasts. Within-habitat attribute shifts were of similar magnitude to those along gradients.
Combining Word Semantics within Complex Hilbert Space for Information Retrieval
Complex numbers are a fundamental aspect of the mathematical formalism of quantum physics. Quantum-like models developed outside physics often overlooked the role of complex numbers. Specifically, previous models in Information Retrieval (IR) ignored complex numbers. We argue that to advance the use of quantum models of IR, one has to lift the constraint of real-valued representations of the information space, and package more information within the representation by means of complex numbers. As a first attempt, we propose a complex-valued representation for IR, which explicitly uses complex valued Hilbert spaces, and thus where terms, documents and queries are represented as complex-valued vectors. The proposal consists of integrating distributional semantics evidence within the real component of a term vector; whereas, ontological information is encoded in the imaginary component. Our proposal has the merit of lifting the role of complex numbers from a computational byproduct of the model to the very mathematical texture that unifies different levels of semantic information. An empirical instantiation of our proposal is tested in the TREC Medical Record task of retrieving cohorts for clinical studies.
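To illustrate the representational idea above, a minimal sketch: pack distributional evidence into the real part and ontological evidence into the imaginary part of a term vector, then rank by the magnitude of a Hermitian inner product. This shows the packing idea only; the model's actual scoring function and evidence extraction are assumptions here:

import numpy as np

def complex_term_vector(distributional, ontological):
    """Combine two evidence sources in one complex-valued vector:
    real part = distributional semantics, imaginary part = ontological
    information (both assumed to be pre-computed feature vectors)."""
    return np.asarray(distributional) + 1j * np.asarray(ontological)

def score(doc_vec, query_vec):
    """Rank by the squared magnitude of the Hermitian inner product,
    so both components contribute to the matching score."""
    return abs(np.vdot(doc_vec, query_vec)) ** 2

d = complex_term_vector([0.5, 0.1], [0.2, 0.0])
q = complex_term_vector([0.4, 0.3], [0.1, 0.1])
print(score(d, q))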
A WiFi-based method to count and locate pedestrians in urban traffic scenarios
Estimating the number of people in a given environment is an attractive tool with a wide range of applications. Urban environments are not an exception. Counting pedestrians and locating them properly in a road traffic scenario can facilitate the design of intelligent systems for traffic control that take into account more actors and not only vehicle-based optimizations. In this work, we present a new WiFi-based passive method to estimate the number of pedestrians in an urban traffic scenario formed by signaled intersections. In particular, we are able i) to distinguish between pedestrians walking and pedestrians waiting at a pedestrian crossing and ii) to estimate the exact location where static pedestrians are waiting to cross. By doing so, the pedestrian factor could be included in intelligent control management systems for traffic optimization in real time. The performance analysis carried out shows that our method is able to achieve a significant level of accuracy while presenting an easy implementation.
Anchor Cascade for Efficient Face Detection
Face detection is essential to facial analysis tasks, such as facial reenactment and face recognition. Both cascade face detectors and anchor-based face detectors have translated shining demos into practice and received intensive attention from the community. However, cascade face detectors often suffer from a low detection accuracy, while anchor-based face detectors rely heavily on very large neural networks pre-trained on large-scale image classification datasets such as ImageNet, which is not computationally efficient for either training or deployment. In this paper, we devise an efficient anchor-based cascade framework called anchor cascade. To improve the detection accuracy by exploring contextual information, we further propose a context pyramid maxout mechanism for anchor cascade. As a result, anchor cascade can train very efficient face detection models with a high detection accuracy. Specifically, compared with a popular convolutional neural network (CNN)-based cascade face detector MTCNN, our anchor cascade face detector greatly improves the detection accuracy, e.g., from 0.9435 to 0.9704 at 1000 false positives on FDDB, while it still runs at a comparable speed. Experimental results on two widely used face detection benchmarks, FDDB and WIDER FACE, demonstrate the effectiveness of the proposed framework.
A FOFE-based Local Detection Approach for Named Entity Recognition and Mention Detection
In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing. Instead of treating NER as a sequence labelling problem, we propose a new local detection approach, which relies on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Afterwards, a simple feedforward neural network is used to reject or predict an entity label for each individual fragment. The proposed method has been evaluated in several popular NER and mention detection tasks, including the CoNLL 2003 NER task and the TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our method has yielded strong performance in all of these examined tasks. This local detection approach has shown many advantages over traditional sequence labelling methods.
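FOFE itself is a short recurrence: with a forgetting factor alpha in (0, 1), the encoding of a sequence is z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word. A minimal sketch of that recurrence (the vocabulary and alpha below are illustrative):

import numpy as np

def fofe(word_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally forgetting encoding: fold a variable-length
    word sequence into one vocab-sized vector via z_t = alpha*z_{t-1} + e_t,
    so earlier words are geometrically discounted but word order is kept."""
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z + np.eye(vocab_size)[w]
    return z

# "a b a" over a 3-word vocabulary: result is [alpha^2 + 1, alpha, 0].
print(fofe([0, 1, 0], vocab_size=3))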
Analysis of a Wideband Circularly Polarized Cylindrical Dielectric Resonator Antenna With Broadside Radiation Coupled With Simple Microstrip Feeding
In this paper, a wideband circularly polarized cylindrical dielectric resonator antenna (DRA) with a simple microstrip feed network has been designed and investigated. The proposed design uses dual vertical microstrip lines arranged in a perpendicular fashion to excite the fundamental orthogonal hybrid ${HE}_{11\delta}^{x}$ and ${HE}_{11\delta}^{y}$ modes in the cylindrical DR. The phase quadrature relationship between the orthogonal modes has been attained by varying the corresponding microstrip heights. To validate the simulation results, an antenna prototype was fabricated and measured. A measured input reflection coefficient bandwidth of 30.37% (2.82-3.83 GHz) and an axial ratio bandwidth (at $\Phi = 0^{\circ}$, $\theta = 0^{\circ}$) of 24.6% (2.75-3.52 GHz) have been achieved. The antenna achieves an average gain of 5.5 dBi and a radiation efficiency above 96% over the operational frequency band. Reasonable agreement between the simulated and measured results is obtained.
Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior
There has been significant interest of late in generating behavior of agents that is interpretable to the human (observer) in the loop. However, the work in this area has typically lacked coherence on the topic, with proposed solutions for “explicable”, “legible”, “predictable” and “transparent” planning with overlapping, and sometimes conflicting, semantics all aimed at some notion of understanding what intentions the observer will ascribe to an agent by observing its behavior. This is also true for the recent works on “security” and “privacy” of plans which are also trying to answer the same question, but from the opposite point of view – i.e. when the agent is trying to hide instead of reveal its intentions. This paper attempts to provide a workable taxonomy of relevant concepts in this exciting and emerging field of inquiry.
Organ printing: the future of bone regeneration?
In engineered bone grafts, the combined actions of bone-forming cells, matrix and bioactive stimuli determine the eventual performance of the implant. The current notion is that well-built 3D constructs include the biological elements that recapitulate native bone tissue structure to achieve bone formation once implanted. The relatively new technology of organ/tissue printing now enables the accurate 3D organization of the components that are important for bone formation and also addresses issues, such as graft porosity and vascularization. Bone printing is seen as a great promise, because it combines rapid prototyping technology to produce a scaffold of the desired shape and internal structure with incorporation of multiple living cell types that can form the bone tissue once implanted.
A geometric approach to saddle points of surfaces
We outline an alternative approach to the geometric notion of a saddle point for real-valued functions of two real variables. It is argued that our treatment is more natural than the usual treatment of this topic in standard texts on calculus.
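For context, the standard second-derivative-test treatment that such discussions depart from can be stated as a short worked example (standard calculus, not the paper's alternative approach):

% Classic example: f(x,y) = x^2 - y^2 has a saddle point at the origin.
% At (0,0): f_x = 2x = 0 and f_y = -2y = 0, so the origin is a critical point.
% The Hessian determinant there is
%   D = f_{xx} f_{yy} - f_{xy}^2 = (2)(-2) - 0^2 = -4 < 0,
% so (0,0) is a saddle point: f increases along the x-axis and decreases
% along the y-axis.
\[
  f(x,y) = x^2 - y^2, \qquad
  D(0,0) = f_{xx} f_{yy} - f_{xy}^2 = -4 < 0 .
\]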
Economic Crisis and Ownership Structure: Evidence from an Emerging Market
It is generally assumed that corporations in emerging markets are more sensitive to financial distress arising from a global crisis than their counterparts in developed countries because of a lower level of institutionalization and governance structure. Parent companies need to build effective corporate governance to overcome the effects of a global economic crisis, considering the drawbacks of an emerging market. The study aims to understand the relation between the capital structure of ultimate parent companies and the corporate performance of their affiliates in an emerging market, Turkey, for the period from 2008 to 2013. The paper divides this period into a crisis period of 2008-2010 and a post-crisis period of 2011-2013. The ANOVA results revealed that business group affiliates had a higher financial performance and firm value and were more innovative compared to the non-affiliates. The regression analysis showed that the degree of control of the group by the affiliated firm was positively associated with firm value for both the years of crisis and those of recovery periods. The analysis also posits that professionalism in management was positively associated with the affiliates' value in recovery periods. Innovativeness was another variable which contributed positively to value.
Exploratory analyses of the association of MRI with clinical, laboratory and radiographic findings in patients with rheumatoid arthritis
OBJECTIVES Evaluate relationships between MRI and clinical/laboratory/radiographic findings in rheumatoid arthritis (RA). METHODS 637 methotrexate-naive patients (GO-BEFORE) and 444 patients with active RA despite methotrexate (GO-FORWARD) were randomly assigned to subcutaneous placebo + methotrexate, golimumab 100mg + placebo, golimumab 50mg + methotrexate, or golimumab 100mg + methotrexate every 4 weeks. In the GO-BEFORE (n=318) and GO-FORWARD (n=240) substudies, MRIs of the dominant wrist/metacarpophalangeal joints were scored for synovitis, bone oedema and bone erosion (RA MRI scoring (RAMRIS) system). Relationships between RAMRIS scores and serum C-reactive protein (CRP), 28-joint count disease activity score (DAS28-CRP) and van der Heijde modified Sharp (vdH-S) scores were assessed. RESULTS Baseline and weeks 24/28 DAS28-CRP, CRP, and vdH-S generally correlated well with baseline and week 24 RAMRIS synovitis, oedema and erosion scores. Early (week 4) CRP changes correlated with later (week 12) RAMRIS synovitis/oedema change scores; earlier (week 12) changes in some RAMRIS scores correlated with later (weeks 24/28) changes in vdH-S. Significant correlations between RAMRIS change scores and clinical/radiographic change scores were weak. CONCLUSIONS MRI and clinical/laboratory/radiographic measures generally correlated well. Associations between earlier changes in CRP and later changes in RAMRIS synovitis/osteitis were observed. Changes in MRI and clinical/radiographic measures did not correlate well, probably because MRI is more sensitive than radiographs and more objective than DAS28-CRP.
eCommerce and the Region: Not Necessarily an Unequivocal Good
eCommerce is generally assumed to be an unequivocal benefit for regional areas. Drawing upon the globalisation literature and the experience in Australia as a case study, this paper questions whether eCommerce is an unequivocal benefit and suggests that at least in some cases eCommerce may lead to the increased import of goods and service into non-metropolitan regions and the domination of these regions by large businesses based in urban areas. The impact of eCommerce in non-metropolitan areas needs to be systematically studied and a number of research avenues are suggested.
Graphene-on-silicon Schottky junction solar cells.
Graphene, a single atomic layer of carbon hexagons, has stimulated a lot of research interest owing to its unique structure and fascinating properties. [1] Graphene has been produced in the form of ultrathin sheets consisting of one or a few atomic layers by chemical vapor deposition (CVD) [2-4] or solution processing [5,6] and can be transferred to various substrates. The two-dimensionality and structural flatness make graphene sheets ideal candidates for thin-film devices and combination with other semiconductor materials such as silicon. These films typically show sheet resistances on the order of several hundred ohms per square at about 80% optical transparency. [7] With modification of the electronic properties and improvement of processing techniques, graphene films show potential for use in conductive, flexible electrodes, as an alternative to indium tin oxide (ITO). Graphene applications are just starting, and current investigations cover a number of areas such as fillers for composites, nanoelectronics, and transparent electrodes. [8] For applications related to solar cells, graphene microsheets were dispersed into conjugated polymers to improve exciton dissociation and charge transport. [9-11] Also, solution-processed thin films were used as conductive and transparent electrodes for organic [12] and dye-sensitized [13] solar cells, although the cell efficiency is still lower than those with ITO and fluorine tin oxide (FTO) electrodes. Compared with carbon nanotube films that have been extensively studied, graphene films may have several advantages. A continuous single-layer graphene film could retain high conductivity at very low (atomic) thickness, and avoid the contact resistance that occurs in a carbon nanotube film between interconnected nanotube bundles. In addition, graphene films have minimum porosity and, in small areas, can provide an extremely flat surface for molecule assembly and device integration. There are many opportunities in utilizing the distinct properties of graphene and exploring novel applications. Bulk heterojunction structures based on carbon materials have attracted a great deal of interest for both scientific fundamentals and potential applications in various new optoelectronic devices.
Geometric Loss Functions for Camera Pose Regression with Deep Learning
Deep learning has been shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high-level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point-based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally, we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNet's performance across datasets ranging from indoor rooms to a small city.
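One way to learn the position/orientation weighting automatically is homoscedastic-uncertainty weighting, where learnable log-variances balance the two loss terms. The sketch below shows that general form; treat it as an illustration of the idea rather than the paper's exact implementation:

import numpy as np

def weighted_pose_loss(pos_err, rot_err, s_x, s_q):
    """Homoscedastic-uncertainty weighting for joint position/rotation
    regression: s_x and s_q are learnable log-variance parameters, so the
    balance between the two terms is learned rather than hand-tuned.
    Follows the general form L_x*exp(-s_x) + s_x + L_q*exp(-s_q) + s_q."""
    return pos_err * np.exp(-s_x) + s_x + rot_err * np.exp(-s_q) + s_q

# With s_x = s_q = 0 this reduces to the plain, unweighted sum of errors.
print(weighted_pose_loss(pos_err=2.0, rot_err=0.1, s_x=0.0, s_q=0.0))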
Linking perceived service quality and service loyalty : a multi-dimensional perspective
In recent research on service quality it has been argued that the relationship between perceived service quality and service loyalty is an issue which requires conceptual and empirical elaboration through replication and extension of current knowledge. This paper focuses on the refinement of a scale for measuring service loyalty dimensions and on the relationships between dimensions of service quality and these service loyalty dimensions. The results of an empirical study of a large sample of customers from four different service industries suggest that four dimensions of service loyalty can be identified: purchase intentions; word-of-mouth communication; price sensitivity; and complaining behaviour. Further analysis yields an intricate pattern of service quality-service loyalty relationships at the level of the individual dimensions, with notable differences across industries.
Optimal Placement and Configuration of Roadside Units in Vehicular Networks
In this paper, we propose a novel optimization framework for Roadside Unit (RSU) deployment and configuration in a vehicular network. We formulate the problem of placing RSUs and selecting their configurations (e.g. power level, type of antenna and wired/wireless backhaul network connectivity) as a linear program. The objective function is to minimize the total cost to deploy and maintain the network of RSUs. A user-specified constraint on the minimum coverage provided by the RSUs is also incorporated into the optimization framework. Further, the framework supports the option of specifying selected regions of higher importance, such as locations of frequently occurring accidents, and incorporating constraints requiring stricter coverage in those areas. Simulation results are presented to demonstrate the feasibility of deployment on the campus map of Southern Methodist University (SMU). The efficiency and scalability of the optimization procedure for large-scale problems are also studied, and the results show that optimization over an area the size of Cambridge, Massachusetts is completed in under 2 minutes. Finally, the effects of variation in several key parameters on the resulting design are studied.
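A toy version of the coverage-constrained placement can be written as a linear program in a few lines. The sites, costs and coverage matrix below are made-up illustrative values, and this is only the LP relaxation of the underlying integral problem; the paper's formulation additionally models configurations:

import numpy as np
from scipy.optimize import linprog

# Toy RSU placement: choose candidate sites (with costs) so every demand
# zone is covered at least once. cover[i][j] = 1 if candidate i covers zone j.
cost = np.array([4.0, 3.0, 5.0])   # deployment cost per candidate site
cover = np.array([[1, 1, 0],       # candidate 0 covers zones 0, 1
                  [0, 1, 1],       # candidate 1 covers zones 1, 2
                  [1, 0, 1]])      # candidate 2 covers zones 0, 2
# Coverage constraint per zone j: sum_i cover[i][j] * x_i >= 1, written
# in linprog's <= form as  -cover.T @ x <= -1.
res = linprog(c=cost, A_ub=-cover.T, b_ub=-np.ones(3), bounds=[(0, 1)] * 3)
print(res.x)  # fractional placements; round or branch-and-bound for 0/1 design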
Experiences of African Immigrant Women Living with HIV in the U.K.: Implications for Health Professionals
In the U.K. immigrant women from Africa constitute an increasingly large proportion of newly diagnosed cases of HIV. A significant minority of these are refugees and asylum seekers. Very little is known about their experiences of living with HIV/AIDS, their psychosocial needs or their views of health care provision. This paper reports the results of a qualitative study that explored these issues by interviewing eight African women living with HIV in the British city of Nottingham. Women’s ability to live positively with HIV was found to be strongly shaped by their migration history, their legal status, their experience of AIDS-related stigma and their Christian faith. Significantly, health services were represented as a safe social space, and were highly valued as a source of advice and support. The findings indicate that non-judgemental, personalised health care plays a key role in encouraging migrant African women to access psychosocial support and appropriate HIV services.
Animation and control of breaking waves
Controlling fluids is still an open and challenging problem in fluid animation. In this paper we develop a novel fluid animation control approach and we present its application to controlling breaking waves. In our <i>Slice Method</i> framework an animator defines the shape of a breaking wave at a desired moment in its evolution based on a library of breaking waves. Our system computes then the subsequent dynamics with the aid of a 3D Navier-Stokes solver. The wave dynamics previous to the moment the animator exerts control can also be generated based on the wave library. The animator is thus enabled to obtain a full animation of a breaking wave while controlling the shape and the timing of the breaking. An additional advantage of the method is that it provides a significantly faster method for obtaining the full 3D breaking wave evolution compared to starting the simulation at an early stage and using solely the 3D Navier-Stokes equations. We present a series of 2D and 3D breaking wave animations to demonstrate the power of the method.
Comparison of Regularization Methods for ImageNet Classification with Deep Convolutional Neural Networks
Large and deep convolutional neural networks achieve good results in image classification tasks, but they need methods to prevent overfitting. In this paper we compare the performance of different regularization techniques on the ImageNet Large Scale Visual Recognition Challenge 2013. We show empirically that Dropout works better than DropConnect on the ImageNet dataset.
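The distinction between the two compared techniques is where the random mask is applied: Dropout zeroes activations, DropConnect zeroes individual weights. A minimal side-by-side sketch (shapes and the drop probability are illustrative):

import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p):
    """Dropout zeroes whole activations with probability p; inverted
    scaling keeps the expected activation unchanged at training time."""
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)

def dropconnect(x, W, p):
    """DropConnect zeroes individual weights instead of activations."""
    mask = rng.random(W.shape) >= p
    return x @ (W * mask) / (1.0 - p)

x = rng.normal(size=(2, 4))      # batch of 2 examples, 4 features
W = rng.normal(size=(4, 3))      # a 4 -> 3 linear layer
print(dropout(x @ W, p=0.5))     # drops output units
print(dropconnect(x, W, p=0.5))  # drops connections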
A computer vision approach for the assessment of autism-related behavioral markers
The early detection of developmental disorders is key to child outcome, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms involves assessing head motion by facial feature tracking, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
Recommending and Evaluating Choices in a Virtual Community of Use
When making a choice in the absence of decisive first-hand knowledge, choosing as other like-minded, similarly-situated people have successfully chosen in the past is a good strategy — in effect, using other people as filters and guides: filters to strain out potentially bad choices and guides to point out potentially good choices. Current human-computer interfaces largely ignore the power of the social strategy. For most choices within an interface, new users are left to fend for themselves and if necessary, to pursue help outside of the interface. We present a general history-of-use method that automates a social method for informing choice and report on how it fares in the context of a fielded test case: the selection of videos from a large set. The positive results show that communal history-of-use data can serve as a powerful resource for use in interfaces.
Fast image dehazing using guided joint bilateral filter
In this paper, we propose a new fast dehazing method for single images based on filtering. The basic idea is to compute an accurate atmosphere veil that is not only smooth but also respects the depth information of the underlying image. We first obtain an initial atmospheric scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmosphere veil which removes the abundant texture information and recovers the depth edge information. Finally, we solve for the scene radiance using the atmospheric attenuation model. Compared with existing state-of-the-art dehazing methods, our method obtains a better dehazing effect on distant scenes and in places where depth changes abruptly. Our method is fast, with linear complexity in the number of pixels of the input image; furthermore, as our method can be performed in parallel, it can be further accelerated on a GPU, which makes it applicable for real-time requirements.
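The final radiance-recovery step follows the standard atmospheric scattering model I = J*t + A*(1 - t). A minimal sketch of that inversion, assuming the veil has already been estimated by the filtering pipeline described above (the values and t_min floor below are illustrative):

import numpy as np

def dehaze(image, veil, airlight, t_min=0.1):
    """Recover scene radiance J from I = J*t + A*(1 - t), where the
    transmission is t = 1 - V/A and V is the (pre-filtered) atmosphere
    veil. t is floored at t_min to avoid division blow-up in dense haze."""
    t = np.maximum(1.0 - veil / airlight, t_min)
    return (image - veil) / t

img = np.full((2, 2), 0.8)    # hazy intensities in [0, 1]
veil = np.full((2, 2), 0.3)   # estimated atmospheric veil
print(dehaze(img, veil, airlight=1.0))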
Use of Polynomial Phase Modeling to FMCW Radar. Part C: Estimation of Target Acceleration in FMCW Radars
An ultra-wideband microwave balun using a tapered coaxial coil structure working from kHz range to beyond 26.5 GHz
Many wideband baluns have been presented in the past using coupled lines, pure magnetic coupling or slotlines. Their limitations lay in either high-frequency or low-frequency performance. Due to their lumped-element bandpass representation, many of them allow only a certain bandwidth. The tapered coaxial coil structure allows balun operation beyond 26 GHz and down to the kHz range through partial ferrite filling. The cable losses, cable cut-off frequency, the number of windings, the permeability of the ferrite and the minimum coil diameter limit the bandwidth. The tapering allows resonance-free operation through the whole band. Many microwave devices like mixers, power amplifiers, SWR bridges, antennas, etc., can be made more broadband with this kind of balun. A stepwise approach to the proposed structure is presented and compared to previous balun implementations. Finally, a measurement is provided and some implementation possibilities are discussed.
Deep Predictive Models in Interactive Music
Musical performance requires prediction to operate instruments, to perform in groups and to improvise. We argue, with reference to a number of digital music instruments (DMIs), including two of our own, that predictive machine learning models can help interactive systems to understand their temporal context and ensemble behaviour. We also discuss how recent advances in deep learning highlight the role of prediction in DMIs, by allowing data-driven predictive models with a long memory of past states. We advocate for predictive musical interaction, where a predictive model is embedded in a musical interface, assisting users by predicting unknown states of musical processes. We propose a framework for characterising prediction as relating to the instrumental sound, ongoing musical process, or between members of an ensemble. Our framework shows that different musical interface design configurations lead to different types of prediction. We show that our framework accommodates deep generative models, as well as models for predicting gestural states, or other high-level musical information. We apply our framework to examples from our recent work and the literature, and discuss the benefits and challenges revealed by these systems as well as musical use-cases where prediction is a necessary component.
Strategies Toward Political Pressures: A Typology of Firm Responses
Growth management strategies can fail, sometimes from lack of integrating political impacts into a firm's dominating market ideas. This paper outlines a set of responses for a firm confronted with external demands for change. The firm's product and its impact constitute a connected environment around the firm. Promotive thinking, seeing the product and impact environments as interdependent aspects in a political context and viewing impacts as new growth opportunities, can turn political pressures into an expansion of the market.
Personality Recognition on Social Media With Label Distribution Learning
Personality is an important psychological construct accounting for individual differences in people. To reliably, validly, and efficiently recognize an individual’s personality is a worthwhile goal; however, the traditional ways of personality assessment through self-report inventories or interviews conducted by psychologists are costly and less practical in social media domains, since they need the subjects to take active actions to cooperate. This paper proposes a method of big five personality recognition (PR) from microblogs in Chinese-language environments with a new machine learning paradigm named label distribution learning (LDL), which has not previously been applied to PR. One hundred and thirteen features are extracted from 994 active Sina Weibo users’ profiles and micro-blogs. Eight LDL algorithms and nine non-trivial conventional machine learning algorithms are adopted to train the big five personality traits prediction models. Experimental results show that two of the proposed LDL approaches outperform the others in predictive ability, and the most predictive one also achieves relatively higher running efficiency among all the algorithms.
Inferring user interests in the Twitter social network
We propose a novel mechanism to infer topics of interest of individual users in the Twitter social network. We observe that in Twitter, a user generally follows experts on various topics of her interest in order to acquire information on those topics. We use a methodology based on social annotations (proposed earlier by us) to first deduce the topical expertise of popular Twitter users, and then transitively infer the interests of the users who follow them. This methodology is a sharp departure from the traditional techniques of inferring interests of a user from the tweets that she posts or receives. We show that the topics of interest inferred by the proposed methodology are far superior to the topics extracted by state-of-the-art techniques such as using topic models (Labeled LDA) on tweets. Based upon the proposed methodology, we build a system Who Likes What, which can infer the interests of millions of Twitter users. To our knowledge, this is the first system that can infer interests for Twitter users at such scale. Hence, this system would be particularly beneficial in developing personalized recommender services over the Twitter platform.
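The transitive-inference step can be illustrated in a few lines: aggregate the (previously deduced) expert topics of the accounts a user follows and keep the most frequent ones. The account names and topics below are made up for illustration:

from collections import Counter

def infer_interests(followed_experts, expert_topics, top_k=3):
    """Transitively infer a user's interests from the topical expertise of
    the accounts they follow, rather than from the tweets they post."""
    counts = Counter()
    for expert in followed_experts:
        counts.update(expert_topics.get(expert, []))
    return [topic for topic, _ in counts.most_common(top_k)]

expert_topics = {"@astro_fan": ["astronomy", "physics"],
                 "@ml_digest": ["machine learning", "statistics"],
                 "@sky_watch": ["astronomy"]}
print(infer_interests(["@astro_fan", "@sky_watch", "@ml_digest"],
                      expert_topics))  # ['astronomy', 'physics', ...]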
What Do America's 'Traditional' Forms of School Choice Teach Us About School Choice Reforms?
The majority of U.S. states are currently considering or have recently passed reforms that increase the ease with which parents can choose a school for their children (Tucker and Lauber 1995). At first view, these reforms seem to take elementary and secondary education into wholly unknown territory. Yet this view neglects the fact that choices made by American parents have traditionally been an important force in determining the education their children receive. Parents' ability to choose among fiscally independent public school districts (through residential decisions) and to choose private schools (by paying tuition) is such an established feature of American education that it is almost taken for granted. Yet, through these choices, American parents exercise more control over their children's schooling than do many of their European counterparts. Of course, American parents are not all equally able to exercise choice. High-income parents routinely exercise more choice because they have more school districts and private schools within their choice sets. In addition, there is significant variation in the degree of choice across different areas of the country. Some metropolitan areas, for instance, contain many independent school districts and/or a number of private schools. Other metropolitan areas are completely monopolized by one school district or have almost no private schooling. The purpose of this paper is to answer three related questions. First, what general facts can we learn by examining the traditional forms of school choice in the United States? In particular, we need to understand the general relationship between school choice and five factors: (1) student achievement, (2) student segregation (along lines of ability, income, and taste for education, as well as race and ethnicity), (3) school efficiency, (4) teachers' salaries and teacher unionism, and (5) the degree to which parents are involved in and influence their children's schooling.
The relationship between nomophobia and the distraction associated with smartphone use among nursing students in their clinical practicum
BACKGROUND The increasing concern about the adverse effects of overuse of smartphones during clinical practicum implies the need for policies restricting smartphone use while attending to patients. It is important to educate health personnel about the potential risks that can arise from the associated distraction. OBJECTIVE The aim of this study was to analyze the relationship between the level of nomophobia and the distraction associated with smartphone use among nursing students during their clinical practicum. METHODS A cross-sectional study was carried out on 304 nursing students. The nomophobia questionnaire (NMP-Q) and a questionnaire about smartphone use, the distraction associated with it, and opinions about phone restriction policies in hospitals were used. RESULTS A positive correlation between the use of smartphones and the total score of nomophobia was found. In the same way, there was a positive correlation between opinion about smartphone restriction polices with each of the dimensions of nomophobia and the total score of the questionnaire. CONCLUSIONS Nursing students who show high levels of nomophobia also regularly use their smartphones during their clinical practicum, although they also believe that the implementation of policies restricting smartphone use while working is necessary.
Weekly low-dose mitoxantrone plus doxorubicin as second-line chemotherapy for advanced breast cancer
Weekly low-dose mitoxantrone (3 mg/m2) plus doxorubicin (8 mg/m2) was administered as second-line chemotherapy to 33 patients with advanced breast cancer. Four of 28 evaluable patients (14%) obtained a partial response with a median duration of 34 weeks (range 18–67+ weeks), while 8 patients (29%) showed stable disease with a median duration of 28 weeks (range 11+–60 weeks). Gastrointestinal toxicity and alopecia were mild. Grade II and III leukopenia occurred in 63% of the courses, without serious infectious disease. Four patients experienced an asymptomatic drop of 16–20% in left ventricular ejection fraction (LVEF) after relatively low cumulative doses of each drug, and one patient with a history of pericarditis carcinomatosa and mediastinal irradiation developed heart failure. In conclusion, this second-line combination treatment had moderate activity in breast cancer and caused only a few subjective side effects, especially with respect to gastrointestinal symptoms.
Up-Regulation of Neurotrophic Factors by Cinnamon and its Metabolite Sodium Benzoate: Therapeutic Implications for Neurodegenerative Disorders
This study underlines the importance of cinnamon, a widely used food spice and flavoring material, and its metabolite sodium benzoate (NaB), a widely used food preservative and an FDA-approved drug against urea cycle disorders in humans, in increasing the levels of neurotrophic factors [e.g., brain-derived neurotrophic factor (BDNF) and neurotrophin-3 (NT-3)] in the CNS. NaB, but not sodium formate (NaFO), dose-dependently induced the expression of BDNF and NT-3 in primary human neurons and astrocytes. Interestingly, oral administration of ground cinnamon increased the level of NaB in serum and brain and upregulated the levels of these neurotrophic factors in vivo in the mouse CNS. Accordingly, oral feeding of NaB, but not NaFO, also increased the level of these neurotrophic factors in vivo in the CNS of mice. NaB induced the activation of protein kinase A (PKA), but not protein kinase C (PKC), and H-89, an inhibitor of PKA, abrogated the NaB-induced increase in neurotrophic factors. Furthermore, the activation of cAMP response element binding (CREB) protein, but not NF-κB, by NaB; the abrogation of NaB-induced expression of neurotrophic factors by siRNA knockdown of CREB; and the recruitment of CREB and CREB-binding protein to the BDNF promoter by NaB suggest that NaB exerts its neurotrophic effect through the activation of CREB. Accordingly, cinnamon feeding also increased the activity of PKA and the level of phospho-CREB in vivo in the CNS. These results highlight a novel neurotrophic property of cinnamon and its metabolite NaB acting via the PKA-CREB pathway, which may be of benefit for various neurodegenerative disorders.
Deciphering Severely Degraded License Plates
Extremely low-quality images, on the order of 20 pixels in width, appear with frustrating frequency in many forensic investigations. Even advanced de-noising and super-resolution technologies are unable to extract useful information from such low-quality images. We show, however, that useful information is present in such highly degraded images. We also show that convolutional neural networks can be trained to decipher the contents of highly degraded images of license plates, and that these networks significantly outperform human observers. Introduction: Recognizing text in images is a well-studied problem [1]. Text recognition can be done either by recognizing individual characters or by recognizing the full word. For degraded images, however, it is difficult to localize and recognize individual characters in an image. Word recognition, therefore, has become central to text recognition in degraded images. Deep convolutional neural networks [2] have been used for text recognition in natural images. Goodfellow et al. [3] used a deep neural network to localize, segment, and recognize multiple digits in street-view images. Jaderberg et al. [4] also proposed an end-to-end text recognition system for natural images using a deep neural network. Jaderberg et al. used a large word dictionary and formulated the text recognition task as a large-scale classification problem. Recently, Svoboda et al. [5] used a CNN to remove motion blur from license plate images blurred with blur kernels of various directions and lengths. Although this approach is able to deblur highly blurred images, it does not contend with extremely low-resolution and noisy images. Unlike much of this previous work, we focus on extracting text from highly degraded images on the scale of only a few pixels per character. Hsieh et al. [6] were the first to show that information can be extracted from highly degraded license plates. In this work, the authors assume a known font type, font size, and character layout, and assume that the degraded image is blurry and perspectively distorted but does not necessarily contain additive noise. Although the authors only show results on a small set of images, they do show that information is present in license plates as small as 20 pixels in width. Building on these ideas, in this paper we propose to train a CNN to recognize highly degraded license plates with an unknown background template, font type, size, and character location. We also explicitly work in the presence of high amounts of additive noise, a common occurrence in real-world imagery. Recognition by Human Observers: We begin by performing a perceptual study to determine how well human observers can decipher degraded license plates. This study provides a baseline against which to compare our computational approaches. Observers were shown images of synthetically generated license plates. [Figure 1: an example of the type of degraded license plate that we seek to decipher.]
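To make the setup concrete, here is a minimal PyTorch sketch of a CNN that maps a tiny, noisy license-plate crop to per-character predictions; it is not the authors' architecture, and the layer sizes, the 7-character plate length, and the 36-symbol alphabet are all illustrative assumptions.

```python
# Hypothetical sketch of per-position plate character classification.
import torch
import torch.nn as nn

NUM_CHARS = 7   # assumed plate length
ALPHABET = 36   # assumed A-Z plus 0-9

class PlateNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        # One classification head per character position, in the spirit of
        # multi-digit street-view recognition [3].
        self.heads = nn.ModuleList(
            [nn.Linear(64 * 4 * 8, ALPHABET) for _ in range(NUM_CHARS)]
        )

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.stack([h(z) for h in self.heads], dim=1)  # (B, 7, 36)

# Training would downsample synthetic plates to ~20 pixels wide and add
# heavy noise before feeding them to the network.
model = PlateNet()
noisy_crop = torch.randn(1, 1, 12, 24)  # tiny, noisy input, as in the degraded regime
print(model(noisy_crop).shape)          # torch.Size([1, 7, 36])
```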
Implementation of health-care monitoring system using Raspberry Pi
Attention to one's own body is an important concern in this era. Electronic engineers provide equipment that delivers results at run time while maintaining accuracy. With the help of the Raspberry Pi, a health-care system can be monitored. In this technology, the same area network is shared by multiple users, which aids in monitoring. Wireless communication is done through Wi-Fi, which provides flexibility and extendibility. In this paper, basic parameters such as body temperature are monitored and transferred to a webpage to make them locally visible to users.
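A minimal sketch of this kind of pipeline, assuming a Raspberry Pi with a DS18B20-style 1-wire temperature sensor and a small Flask web server on the local Wi-Fi network; the sysfs path and parsing follow the common 1-wire convention but should be verified on the actual device, and read_temp_c() falls back to a dummy value so the sketch runs anywhere.

```python
# Hypothetical sketch: read a temperature and expose it on a webpage.
import glob
from flask import Flask

app = Flask(__name__)

def read_temp_c():
    """Read temperature in Celsius from a 1-wire sensor, if present."""
    try:
        device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
        with open(device) as f:
            raw = f.read()
        # The second line of the sysfs file ends with e.g. "t=36750",
        # the reading in milli-degrees Celsius.
        return int(raw.rsplit("t=", 1)[1]) / 1000.0
    except (IndexError, OSError, ValueError):
        return 36.6  # placeholder when no sensor is attached

@app.route("/")
def show_temperature():
    # Any user on the same local network can load this page.
    return f"<h1>Body temperature: {read_temp_c():.1f} C</h1>"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```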
Privacy and Security in Mobile Cloud Computing: Review
Mobile cloud computing is the computing of mobile applications through the cloud. As we know, the market for mobile phones is growing rapidly. According to IDC, the premier global market intelligence firm, the worldwide smartphone market grew 42.5% year over year in the first quarter of 2012. With the growing demand for smartphones, the demand for fast computation is also growing. In spite of their comparatively greater processing power and storage capability, smartphones still lag behind personal computers in meeting the processing and storage demands of high-end applications such as speech recognition, security software, gaming, and health services. Mobile cloud computing is an answer to the intensive processing and storage demands of real-time and high-end applications. Being in a nascent stage, mobile cloud computing has privacy and security issues that deter users from adopting this technology. This review paper throws light on the privacy and security issues of mobile cloud computing.
A Self-Calibrating Radar Sensor System for Measuring Vital Signs
Vital signs (i.e., heartbeat and respiration) are crucial physiological signals that are useful in numerous medical applications. The process of measuring these signals should be simple, reliable, and comfortable for patients. In this paper, a noncontact self-calibrating vital signs monitoring system based on the Doppler radar is presented. The system hardware and software were designed with a four-tiered layer structure. To enable accurate vital signs measurement, baseband signals in the radar sensor were modeled and a framework for signal demodulation was proposed. Specifically, a signal model identification method was formulated into a quadratically constrained l1 minimization problem and solved using the upper bound and linear matrix inequality (LMI) relaxations. The performance of the proposed system was comprehensively evaluated using three experimental sets, and the results indicated that this system can be used to effectively measure human vital signs.
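As an illustration of the optimization step, the following sketch poses a quadratically constrained l1-minimization of the kind described above using CVXPY's generic convex solver rather than the paper's upper-bound and LMI relaxations; the matrix A, observations b, and noise budget eps are synthetic stand-ins for the baseband signal-model identification problem.

```python
# Hypothetical sketch: sparse model recovery via constrained l1 minimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))        # assumed measurement/model matrix
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -0.5, 0.8]   # sparse model parameters
b = A @ x_true + 0.01 * rng.standard_normal(40)
eps = 0.2                                 # assumed noise budget

x = cp.Variable(80)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.sum_squares(A @ x - b) <= eps**2])
problem.solve()
print("recovered support:", np.flatnonzero(np.abs(x.value) > 0.1))
```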
A robust layered control system for a mobile robot
A new architecture for controlling mobile robots is described. Layers of a control system are built to let the robot operate at increasing levels of competence. Layers are made up of asynchronous modules that communicate over low-bandwidth channels. Each module is an instance of a fairly simple computational machine. Higher-level layers can subsume the roles of lower levels by suppressing their outputs. However, lower levels continue to function as higher levels are added. The result is a robust and flexible robot control system. The system has been used to control a mobile robot wandering around unconstrained laboratory areas and computer machine rooms. Eventually it is intended to control a robot that wanders the office areas of our laboratory, building maps of its surroundings using an onboard arm to perform simple tasks.
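A minimal sketch of the layering idea: each layer is a simple module that maps sensor readings to a motor command, and a higher-priority layer suppresses the output of the layers below it while the lower layers keep functioning. The behaviors, sensor fields, and priority ordering here are illustrative assumptions, and this sequential dispatch only approximates the paper's asynchronous modules and wire-level suppression.

```python
# Hypothetical sketch of layered control with output suppression.
from dataclasses import dataclass

@dataclass
class Sensors:
    obstacle_ahead: bool
    goal_direction: float  # radians

def avoid_layer(s: Sensors):
    # Reflex competence: turn away from obstacles; silent otherwise.
    return ("turn", 1.57) if s.obstacle_ahead else None

def wander_layer(s: Sensors):
    # Lower-priority competence: head toward a goal; runs continuously.
    return ("turn", s.goal_direction)

def control(s: Sensors):
    # A layer that produces a command suppresses all layers after it;
    # suppressed layers still compute and take over when it falls silent.
    for layer in (avoid_layer, wander_layer):  # highest priority first
        command = layer(s)
        if command is not None:
            return command
    return ("stop", 0.0)

print(control(Sensors(obstacle_ahead=True, goal_direction=0.3)))   # avoidance wins
print(control(Sensors(obstacle_ahead=False, goal_direction=0.3)))  # wandering resumes
```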
Closing complexity gaps for coloring problems on H-free graphs
If a graph G contains no subgraph isomorphic to some graph H, then G is called H-free. A coloring of a graph G = (V, E) is a mapping c : V → {1, 2, ...} such that no two adjacent vertices receive the same color, i.e., c(u) ≠ c(v) whenever uv ∈ E; if |c(V)| ≤ k, then c is a k-coloring. The Coloring problem is to test whether a graph has a coloring with at most k colors for some integer k. The Precoloring Extension problem is to decide whether a partial k-coloring of a graph can be extended to a k-coloring of the whole graph for some integer k. The List Coloring problem is to decide whether a graph allows a coloring such that every vertex u receives a color from some given set L(u). By imposing an upper bound ℓ on the size of each L(u), we obtain the ℓ-List Coloring problem. We first classify the Precoloring Extension problem and the ℓ-List Coloring problem for H-free graphs. We then show that 3-List Coloring is NP-complete for n-vertex graphs of minimum degree n − 2, i.e., for complete graphs minus a matching, whereas List Coloring is fixed-parameter tractable for this graph class when parameterized by the number of vertices of degree n − 2. Finally, for a fixed integer k > 0, the List k-Coloring problem is to decide whether a graph allows a coloring such that every vertex u receives a color from some given set L(u) that must be a subset of {1, ..., k}. We show that List 4-Coloring is NP-complete for P6-free graphs, where P6 is the path on six vertices. This completes the classification of List k-Coloring for P6-free graphs.
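To make the List Coloring notion concrete, here is a minimal backtracking sketch that tries to assign each vertex u a color from its list L(u) so that adjacent vertices differ; it is exponential in general, consistent with the NP-completeness results above, and the example graph and lists are illustrative (the graph is P6, the path on six vertices).

```python
# Hypothetical sketch: backtracking search for a list coloring.
def list_coloring(adj, lists, assignment=None):
    """adj: vertex -> neighbors; lists: vertex -> allowed colors L(u)."""
    assignment = assignment or {}
    if len(assignment) == len(adj):
        return assignment                      # every vertex colored
    u = next(v for v in adj if v not in assignment)
    for color in lists[u]:
        # Try a color from L(u) that no assigned neighbor already uses.
        if all(assignment.get(w) != color for w in adj[u]):
            result = list_coloring(adj, lists, {**assignment, u: color})
            if result:
                return result
    return None                                # no extension exists

# P6: vertices 0..5 on a path, each with a small color list.
p6 = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 5] for i in range(6)}
lists = {0: {1, 2}, 1: {1, 2}, 2: {2, 3}, 3: {1, 3}, 4: {1, 2}, 5: {2, 3}}
print(list_coloring(p6, lists))
```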
Can a “poor” verification system be a “good” identification system? A preliminary study
The matching accuracy of a biometric system is typically quantified through measures such as the False Match Rate (FMR), False Non-match Rate (FNMR), Equal Error Rate (EER), Receiver Operating Characteristic (ROC) curve and Cumulative Match Characteristic (CMC) curve. In this work, we analyze the relationship between the ROC and CMC curves, which are two measures commonly used to describe the performance of verification and identification systems, respectively. We establish that it is possible for a biometric system to exhibit “good” verification performance and “poor” identification performance (and vice versa) by demonstrating the conditions required to produce such outcomes. Experimental analysis using synthetically generated match scores confirms our hypothesis that the ROC or CMC alone cannot completely characterize biometric system performance.
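The contrast between the two measures can be sketched directly from synthetically generated match scores, as in the paper's experiments, though the Gaussian score distributions below are our own assumption: the ROC summarizes per-threshold error rates (verification), while the CMC summarizes the rank of the genuine score among impostor scores (identification).

```python
# Hypothetical sketch: ROC points and CMC values from synthetic scores.
import numpy as np

rng = np.random.default_rng(1)
n_probes, gallery = 200, 200
genuine = rng.normal(2.0, 1.0, n_probes)                 # one genuine score per probe
impostor = rng.normal(0.0, 1.0, (n_probes, gallery - 1))  # impostor scores per probe

def roc_point(t):
    """Verification view: FMR and FNMR at decision threshold t."""
    fmr = np.mean(impostor >= t)
    fnmr = np.mean(genuine < t)
    return fmr, fnmr

def cmc(r):
    """Identification view: fraction of probes whose genuine score ranks in the top r."""
    rank = 1 + np.sum(impostor >= genuine[:, None], axis=1)
    return np.mean(rank <= r)

print("FMR/FNMR at t=1.0:", roc_point(1.0))
print("rank-1 and rank-5 identification rates:", cmc(1), cmc(5))
```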
Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks
Two problems arise when using distant supervision for relation extraction. First, in this method, an existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in the wrong-label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the second problem, we avoid feature engineering and instead adopt a convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods.
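A minimal sketch of the piecewise max pooling step: the convolution feature maps are split into three segments by the two entity positions and max-pooled per segment, so the pooled vector retains coarse structural information that a single global max pool would discard. The sizes and entity indices here are illustrative.

```python
# Hypothetical sketch of piecewise max pooling over convolution outputs.
import numpy as np

def piecewise_max_pool(conv_out, e1, e2):
    """conv_out: (seq_len, n_filters); e1 < e2 are the entity token indices."""
    segments = [conv_out[: e1 + 1], conv_out[e1 + 1 : e2 + 1], conv_out[e2 + 1 :]]
    # Max over time within each segment, then concatenate: (3 * n_filters,)
    return np.concatenate([seg.max(axis=0) for seg in segments])

conv_out = np.random.randn(12, 4)          # 12 tokens, 4 convolution filters
pooled = piecewise_max_pool(conv_out, e1=3, e2=8)
print(pooled.shape)                        # (12,) i.e. 3 segments x 4 filters
```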
Article: Directions for the Development of Urban Space as a Tourist Destination
Recently, the demand for urban tourism has grown, but its importance has been neglected and underestimated. This paper aims to examine various issues concerning urban space as a tourist destination and to suggest directions for its development. According to a review of urban tourism research, the characteristics of urban tourism as a tourist destination, compared with rural tourism, are as follows: 1) Major industries are related to culture or art. 2) Urban tourism is symbolized by various images and symbols. 3) Urban space is a place of tourism consumption for both tourists and citizens. 4) The many tourist sites in a city make its urban space distinctive. Several factors that are important in developing urban space as a tourist destination are as follows: 1) Urban space as a tourist destination must be unique in the face of imitation among cities. 2) Developing urban space for urban tourists depends on an effective grouping of urban tourism elements. 3) Linkage among the grouped urban tourism elements is also important. 4) The city must be developed for its citizens as well, fulfilling the meaning of 'welfare tourism'. 5) 'Sustainable urban tourism', which satisfies not only economic profit but also environmental conditions, should be considered in developing urban space.
Rehabilitation in patients with radically treated respiratory cancer: A randomised controlled trial comparing two training modalities.
INTRODUCTION The evidence on the effectiveness of rehabilitation in lung cancer patients is limited. Whole body vibration training (WBVT) has been proposed as an alternative to conventional resistance training (CRT). METHODS We investigated the effect of radical treatment (RT) and of two rehabilitation programmes in lung cancer patients. The primary endpoint was the change in 6-min walking distance (6MWD) after rehabilitation. Patients were randomised after RT to CRT, WBVT, or standard follow-up (CON). Patients were evaluated before RT, after RT, and after 12 weeks of intervention. RESULTS Of 121 included patients, 70 were randomised to CON (24), CRT (24), or WBVT (22). After RT, 6MWD decreased by a mean of 38 m (95% CI 22-54); after the intervention, it increased by a mean of 95 m (95% CI 58-132) in CRT (p<0.0001), 37 m (95% CI -1-76) in WBVT (p=0.06), and 1 m (95% CI -34-36) in CON (p=0.95). Surgical treatment, the magnitude of the RT-induced decrease in 6MWD, and allocation to either CRT or WBVT were prognostic for reaching the minimal clinically important difference of a 54 m increase in 6MWD after the intervention. CONCLUSIONS RT of lung cancer significantly impairs patients' exercise capacity. CRT significantly improves and restores functional exercise capacity, whereas WBVT does not fully substitute for CRT.