title | abstract
---|---
Read + Verify: Machine Reading Comprehension with Unanswerable Questions | Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred. In addition to extracting answers, previous works usually predict an additional “no-answer” probability to detect unanswerable cases. However, they fail to validate the answerability of the question by verifying the legitimacy of the predicted answer. To address this problem, we propose a novel read-then-verify system, which not only utilizes a neural reader to extract candidate answers and produce no-answer probabilities, but also leverages an answer verifier to decide whether the predicted answer is entailed by the input snippets. Moreover, we introduce two auxiliary losses to help the reader better handle answer extraction as well as no-answer detection, and investigate three different architectures for the answer verifier. Our experiments on the SQuAD 2.0 dataset show that our system obtains a score of 74.2 F1 on the test set, achieving state-of-the-art results at the time of submission (Aug. 28th, 2018). |
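The read-then-verify decision described above can be sketched as a simple combination rule. The function below is a hypothetical illustration only: the multiplicative combination and the threshold are assumptions, not the paper's exact formulation.

```python
def read_then_verify(answer, p_noanswer, p_entailed, threshold=0.5):
    """Abstain (return None) unless reader and verifier jointly clear a threshold.

    p_noanswer: reader's predicted probability that the question is unanswerable.
    p_entailed: verifier's probability that the candidate answer is entailed
                by the input snippets.
    """
    # Combine answerability (1 - p_noanswer) with the verifier's entailment score.
    score = (1.0 - p_noanswer) * p_entailed
    return answer if score >= threshold else None
```

A confident reader with a supportive verifier returns the candidate answer; a high no-answer probability makes the system abstain even for a plausible-looking span.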
"Was It Good? It Was Provocative." Learning the Meaning of Scalar Adjectives | Texts and dialogues often express information indirectly. For instance, speakers’ answers to yes/no questions do not always straightforwardly convey a ‘yes’ or ‘no’ answer. The intended reply is clear in some cases (Was it good? It was great!) but uncertain in others (Was it acceptable? It was unprecedented.). In this paper, we present methods for interpreting the answers to questions like these which involve scalar modifiers. We show how to ground scalar modifier meaning based on data collected from the Web. We learn scales between modifiers and infer the extent to which a given answer conveys ‘yes’ or ‘no’. To evaluate the methods, we collected examples of question–answer pairs involving scalar modifiers from CNN transcripts and the Dialog Act corpus and use response distributions from Mechanical Turk workers to assess the degree to which each answer conveys ‘yes’ or ‘no’. Our experimental results closely match the Turkers’ response data, demonstrating that meanings can be learned from Web data and that such meanings can drive pragmatic inference. |
Using probe vehicle trajectories in stop-and-go waves for inferring unobserved vehicles | Data from vehicles instrumented with GPS or other localization technologies are increasingly becoming widely available due to the investments in Connected and Automated Vehicles (CAVs) and the prevalence of personal mobile devices such as smartphones. Tracking or trajectory data from these probe vehicles are already being used in practice for travel time or speed estimation and for monitoring network conditions. However, there has been limited work on extracting other critical traffic flow variables, in particular density and flow, from probe data. This paper presents a microscopic approach (akin to car-following) for inferring the number of unobserved vehicles in between a set of probe vehicles in the traffic stream. In particular, we develop algorithms to extract and exploit the somewhat regular patterns in the trajectories when the probe vehicles travel through stop-and-go waves in congested traffic. Using certain critical points of trajectories as the input, the number of unobserved vehicles between consecutive probes is then estimated through a Naïve Bayes model. The parameters needed for the Naïve Bayes include means and standard deviations for the probability density functions (pdfs) for the distance headways between vehicles. These parameters are estimated through supervised as well as unsupervised learning methods. The proposed ideas are tested based on the trajectory data collected from US 101 and I-80 in California for the FHWA's NGSIM (next generation simulation) project. Under the dense traffic conditions analyzed, the results show that the number of unobserved vehicles between two probes can almost always be predicted to within ±1 vehicle. |
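The core estimation step can be sketched as a maximum-likelihood count: if the gap between two probes is the sum of (n + 1) roughly Gaussian distance headways, pick the n that best explains the observed gap. The headway mean and standard deviation below are illustrative assumptions, not the NGSIM-calibrated values.

```python
import math

def normal_pdf(x, mean, std):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def estimate_unobserved(gap_m, mu_headway=7.5, sigma_headway=2.0, max_n=20):
    """Most likely number of unobserved vehicles between two probes.

    Assumes the probe-to-probe gap is the sum of (n + 1) i.i.d. Gaussian
    distance headways, so the gap given n vehicles is
    Normal((n+1)*mu, (n+1)*sigma^2); a uniform prior over n is assumed.
    """
    best_n, best_like = 0, -1.0
    for n in range(max_n + 1):
        k = n + 1  # number of headways spanning the gap
        like = normal_pdf(gap_m, k * mu_headway, sigma_headway * math.sqrt(k))
        if like > best_like:
            best_n, best_like = n, like
    return best_n
```

With a 7.5 m mean headway, a 30 m gap is best explained by four headways, i.e. three unobserved vehicles.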
Impact of different values of prosthesis-patient mismatch on outcome in male patients with aortic valve replacement. | AIMS
Mortality and left ventricular mass (LVM) recovery/regression after aortic valve replacement in patients with prosthesis-patient mismatch (PPM) is controversial. This study evaluated the impact of different values of indexed effective orifice area (EOAi) in male patients on mortality and indexed LVM (ILVM) recovery/regression.
METHOD
The study recruited 376 male patients with and without PPM after aortic valve replacement with different EOAi cut-off values.
RESULTS
At EOAi 0.85 cm²/m² or less, 295 patients had PPM (78.5%). ILVM recovery occurred in 60.5% of no-PPM patients versus 46.1% of patients with PPM (P = 0.003), and ILVM regression was 35 versus 25% (P < 0.001). Time for ILVM regression was shorter in the no-PPM group. At EOAi 0.75 cm²/m² or less, 201 patients had PPM (53.4%). ILVM recovery occurred in 55.4% of no-PPM patients versus 45.2% of patients with PPM (P = 0.06), and regression was 32 versus 29% (P = 0.09). Time for ILVM regression was similar between groups. Regardless of the cut-off value for the PPM definition, mortality was similar.
CONCLUSION
LVM recovery/regression, but not mortality, differed at different EOAi cut-offs. The cut-off value at EOAi 0.75 cm²/m² or less guaranteed a more balanced patient distribution between groups and the best compromise between specificity and sensitivity. |
Animating Corrosion and Erosion | In this paper, we present a simple method for animating natural phenomena such as erosion, sedimentation, and acidic corrosion. We discretize the appropriate physical or chemical equations using finite differences, and we use the results to modify the shape of a solid body. We remove mass from an object by treating its surface as a level set and advecting it inward, and we deposit the chemical and physical byproducts into simulated fluid. Similarly, our technique deposits sediment onto a surface by advecting the level set outward. Our idea can be used for off-line high quality animations as well as interactive applications such as games, and we demonstrate both in this paper. |
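The level-set erosion described above can be illustrated in one dimension: with a signed distance field that is negative inside the solid, removing mass corresponds to growing phi at a rate proportional to the gradient magnitude. This is a minimal 1-D sketch under those assumptions, not the paper's 3-D implementation; the upwind differencing is a standard Godunov-type choice.

```python
import numpy as np

def erode_step(phi, rate, dt, dx):
    """One explicit time step of level-set erosion in 1-D.

    phi: signed distance field, negative inside the solid.
    Erosion (interface moving inward) corresponds to
        phi_t = rate * |grad phi|,  rate > 0,
    which shrinks the phi < 0 region. An upwind (Godunov-type)
    gradient keeps the explicit update stable.
    """
    dminus = (phi - np.roll(phi, 1)) / dx    # backward difference
    dplus = (np.roll(phi, -1) - phi) / dx    # forward difference
    # Upwind gradient magnitude for an inward-moving front (rate > 0).
    grad = np.sqrt(np.minimum(dminus, 0.0) ** 2 + np.maximum(dplus, 0.0) ** 2)
    return phi + dt * rate * grad
```

Sedimentation is the mirrored case: advecting the level set outward amounts to flipping the sign of the rate (with the opposite upwind stencil).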
Cardiac three-dimensional magnetic resonance imaging and fluoroscopy merging: a new approach for electroanatomic mapping to assist catheter ablation. | BACKGROUND
Modern nonfluoroscopic mapping systems construct 3D electroanatomic maps by tracking intracardiac catheters. They require specialized catheters and/or dedicated hardware. We developed a new method for electroanatomic mapping by merging detailed 3D models of the endocardial cavities with fluoroscopic images without the need for specialized hardware. This developmental work focused on the right atrium because of the difficulties in visualizing its anatomic landmarks in 3D with current approaches.
METHODS AND RESULTS
Cardiac MRI images were acquired in 39 patients referred for radiofrequency catheter ablation using balanced steady state free-precession sequences. We optimized acquisition and developed software for construction of detailed 3D models, after contouring of endocardial cavities with cross-checking of different imaging planes. 3D models were then merged with biplane fluoroscopic images by methods for image calibration and registration implemented in a custom software application. The feasibility and accuracy of this merging process were determined in heart-cast experiments and electroanatomic mapping in patients. Right atrial dimensions and relevant anatomic landmarks could be identified and measured in all 3D models. Cephalocaudal, posteroanterior, and lateroseptal diameters were, respectively, 65+/-11, 54+/-11, and 57+/-9 mm; posterior isthmus length was 26+/-6 mm; Eustachian valve height was 5+/-5 mm; and coronary sinus ostium height and width were 16+/-3 and 12+/-3 mm, respectively (n=39). The average alignment error was 0.2+/-0.3 mm in heart casts (n=40) and 1.9 to 2.5 mm in patient experiments (n=9), ie, acceptable for clinical use. In 11 patients, reliable catheter positioning and projection of activation times resulted in 3D electroanatomic maps with an unprecedented level of anatomic detail, which assisted ablation.
CONCLUSIONS
This new approach allows activation visualization in a highly detailed 3D anatomic environment without the need for a specialized nonfluoroscopic mapping system. |
Four-port bimanual 23-gauge vitrectomy for diabetic tractional retinal detachment. | PURPOSE
Four-port bimanual vitrectomy is a surgical technique that facilitates removal of epiretinal membranes in severe proliferative diabetic retinopathy (PDR). As the illumination is held by the assistant through the fourth scleral incision, fibrovascular membranes are removed by bimanual manipulation techniques. The objective of this study was to evaluate the safety and efficacy of four-port bimanual 23-gauge vitrectomy for patients with tractional retinal detachment (TRD) in severe PDR.
DESIGN
Retrospective, comparative, consecutive, interventional case series.
METHODS
Sixty-six eyes of 58 consecutive patients who underwent primary vitrectomy for severe diabetic TRD. Thirty-six eyes of 31 cases that were treated with four-port 23-gauge vitrectomy were compared with 30 eyes of 27 cases that were treated with 23-gauge pars plana vitrectomy (PPV). Main outcome measures were best-corrected visual acuity (BCVA), retinal status, intraocular pressure, and incidence of intraoperative and postoperative complications with at least 6 months of follow-up.
RESULTS
The primary and ultimate anatomic success rates (94.4% versus 93.3%, and 100% in both groups, respectively) and the mean BCVA changes did not differ significantly between groups. The whole surgical time and the membrane removal time were both significantly shorter (p < 0.001 for each) in the four-port 23-gauge group than in the 23-gauge group. There was no difference in the incidence of intraoperative and postoperative complications between groups.
CONCLUSIONS
Four-port bimanual 23-gauge vitrectomy offers comparable anatomic success and shortens the surgical time compared with conventional 23-gauge PPV in patients with TRD resulting from severe PDR. |
The YouTube-8M Kaggle Competition: Challenges and Methods | We took part in the YouTube-8M Video Understanding Challenge hosted on Kaggle, and achieved the 10th place within less than one month’s time. In this paper, we present an extensive analysis and solution to the underlying machine-learning problem based on frame-level data, where major challenges are identified and corresponding preliminary methods are proposed. It is noteworthy that merely the proposed strategies, together with a uniformly averaging multi-crop ensemble, were sufficient for us to reach our ranking. We also report the methods we believe to be promising but did not have enough time to train to convergence. We hope this paper could serve, to some extent, as a review and guideline of the YouTube-8M multi-label video classification benchmark, inspiring future attempts and research. |
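The uniformly averaging multi-crop ensemble mentioned above can be sketched in a few lines: each crop (or model) produces per-class probabilities, the ensemble is their unweighted mean, and multi-label predictions are the top-k labels. The `k` and the function names are illustrative assumptions, not the team's exact code.

```python
import numpy as np

def uniform_average(prob_list):
    """Uniformly average per-crop (or per-model) class probabilities.

    Each element of prob_list is an array of per-class probabilities for
    the same video; the ensemble output is their unweighted mean.
    """
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def top_k(probs, k=20):
    """Indices of the k highest-scoring labels (multi-label output)."""
    return np.argsort(probs)[::-1][:k]
```

Uniform weights need no held-out tuning, which fits a tight competition timeline.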
An evaluation of educational values of YouTube videos for academic writing | The aim is to assess the impact of YouTube videos about academic writing and its skills on the writing performance of students. Theoretical perspectives from constructivism and associated learning models are used to inform the purpose of the research. The contextual setting is matriculation students awaiting admission to higher institutions. The population is 40 students belonging to a class aimed at assisting disadvantaged students in their academic writing in Scottsville, Province of KwaZulu-Natal, South Africa. The students are broken into two groups: the control/traditional teaching group and the treatment/YouTube-facilitated group. Consequently, a dominant qualitative approach is adopted using focus group discussion, interviews and tests to identify underlying patterns, methods and approaches to best fit academic writing guides and videos to improve user experiences of the media for academic writing. The fundamental results show that positive characterisations of user experiences include innovation, surprise, playfulness and stimulation, whereas the narratives that are not satisfying are categorised as dissatisfaction, frustration, disillusion, disappointment, anger, confusion and irritation. Ultimately, the major findings of the research have the potential to improve user experiences on the platform by highlighting how and when the positive and negative experiences of users occur, mapping the differences in academic writing performance between the two groups, and identifying best practices. Finally, the results have implications for pedagogy and the fitting of YouTube videos to academic writing instruction. |
Cyclosporine and mycophenolate mofetil prophylaxis with fludarabine and melphalan conditioning for unrelated donor transplantation: a prospective study of 22 patients with hematologic malignancies | Summary: In an attempt to decrease toxicity in high-risk patients undergoing unrelated donor hematopoietic stem cell transplantation (URD HSCT), we tested a combination of cyclosporine (CSP) and mycophenolate mofetil (MMF) as graft-versus-host disease (GVHD) prophylaxis with the reduced-intensity conditioning regimen fludarabine/melphalan (Flu/Mel). A total of 22 adult patients with advanced myeloid (n=15) and lymphoid (n=7) malignancies were treated. All patients received Flu 25 mg/m2 for 5 days and Mel 140 mg/m2, with CSP 3 mg/kg daily and MMF 15 mg/kg three times a day. The median age was 49 years (range 18–66). Durable engraftment was seen in all but one patient with myelofibrosis. The 1-year nonrelapse mortality was 32%, 27% from GVHD. The cumulative incidence of acute GVHD grade 2–4 and 3–4 was 63 and 41%, respectively. With a median follow-up of 18 months, the disease-free survival (DFS) and overall survival (OS) are 55 and 59%, respectively. For patients with AML and MDS (n=14), the DFS and OS is 71%. For patients undergoing a second transplant (n=14), the DFS and OS is 57%. In conclusion, this regimen is associated with acceptable toxicity but high rates of GVHD in high-risk patients undergoing URD HSCT. Encouraging disease control for patients with advanced myeloid malignancies was observed. |
Asymmetrical Duty Cycle-Controlled LLC Resonant Converter With Equivalent Switching Frequency Doubler | In the conventional full-bridge LLC converter, the duty cycle is kept at 0.5 and the phase-shift angle between the half-bridge modules is 0° for symmetrical operation, which makes the resonant tank operating frequency equal only to the switching frequency of the power devices. By regulating the duty cycles of the upper and lower switches in each half-bridge module to 0.75 and 0.25 and the phase-shift angle between the half-bridge modules to 180°, the asymmetrical duty cycle-controlled full-bridge LLC resonant converter is derived. The proposed asymmetrical duty cycle control scheme halves the switching frequency of the primary switches. As a result, the driving losses are effectively reduced. Compared with the conventional full-bridge LLC converter, the soft-switching condition of the derived asymmetrically controlled LLC converter becomes easier to satisfy, which allows the resonant current to be reduced. Consequently, the conduction losses are decreased and the conversion efficiency is improved. The asymmetrical control scheme can also be extended to the stacked structure for high input voltage applications. Finally, two LLC converter prototypes, both with a 200-kHz resonant frequency, for the asymmetrical and symmetrical control schemes are built and compared to validate the effectiveness of the proposed control strategy. |
Impact of acute kidney injury on mortality and medical costs in patients with meticillin-resistant Staphylococcus aureus bacteraemia: a retrospective, multicentre observational study. | BACKGROUND
Despite the frequent occurrence of acute kidney injury (AKI) associated with meticillin-resistant Staphylococcus aureus (MRSA) infection during treatment, the adverse impact of renal injury on clinical and economic outcomes has not been evaluated.
AIM
To study the clinical and economic burdens of MRSA bacteraemia and the impact of AKI occurring during treatment on outcomes.
METHODS
Medical records of patients hospitalized for MRSA bacteraemia between March 2010 and February 2011 in eight hospitals in Korea were reviewed retrospectively to evaluate the risk factors for AKI and mortality. Direct medical costs per patient of MRSA bacteraemia during treatment were estimated from the medical resources consumed.
FINDINGS
In all, 335 patients were identified to have MRSA bacteraemia. AKI occurred in 135 patients (40.3%) during first-line antibiotic therapy. Independent risk factors for AKI were male sex, underlying renal disease, intra-abdominal and central venous catheter infection, and increase in Pitt bacteraemia score. Seventy-seven (23.0%) patients died during the study period. Underlying solid tumour, high Pitt bacteraemia score, and occurrence of AKI were independent risk factors for mortality. The mean total medical cost per MRSA patient was estimated as South Korean Won 5,435,361 (US$4,906), and occurrence of AKI and ICU admission were identified as independent predictors of increased direct medical costs. Compared with patients who retained their baseline renal function, patients with AKI had a 45% increase in medical costs.
CONCLUSIONS
Patients who developed AKI showed significantly higher mortality rate and greater direct medical costs compared with patients who retained baseline renal function. |
Multiport UHF RFID-Tag Antenna for Enhanced Energy Harvesting of Self-Powered Wireless Sensors | This paper presents the design and experimental evaluation of a long-range solar powered sensor-enhanced radio frequency identification (RFID) tag. The tag antenna is a multiport microstrip patch with an overlay of thin-film solar cells for energy harvesting. A second port is allocated on the patch antenna for supplementary energy harvesting from the RF signal transmitted by the reader. An $I^{2}C$-RFID chip along with a microcontroller unit (MCU) and temperature and humidity sensor are incorporated into the tag design to implement a low-cost wireless sensor using a commercial RFID reader. The measurements of the fabricated RFID-tag sensor demonstrate that a maximum sensing/reading range of 27 m is achieved when all the circuits are powered using solar cells, while it is 7.48 m with only the secondary option of energy harvesting. The proposed sensor with dual energy harvesting achieves both a longer range and a longer lifetime compared with similar battery-powered sensor-enhanced RFID tags. The RFID sensor is also evaluated in a climate chamber, and the sensor data (temperature/humidity) were remotely recorded with excellent accuracy using a commercial ultra high frequency (UHF) RFID reader. In addition, the sensor can be programmed for the temperature/humidity surveillance of sensitive items, such as those found in various supply chain and transportation applications. |
Scenarios On Estimation of the Ecological Risks of Oil Transport Activity In Russian Far East Offshore Regions | In this study the authors examine the environmental risks of oil transport activity in the Russian Federation’s Far East offshore regions. The ecological risks that emerge during the operation of the pipeline system and oil trans-shipping terminal are presented. A model of oil spill spreading is suggested and discussed. The estimation of the risks posed by ecological accidents to recreation activity and the fishing industry in the Primorye Territory is presented. Ways of reducing ecological risks and protective measures for ecosystems, including economic sanctions, are worked out. |
Communication chains and multitasking | There is a growing literature on managing multitasking and interruptions in the workplace. In an ethnographic study, we investigated the phenomenon of communication chains, the occurrence of interactions in quick succession. Focusing on chains enables us to better understand the role of communication in multitasking. Our results reveal that chains are prevalent among information workers, and that attributes such as the number of links and the rate of media and organizational switching can be predicted from the first catalyzing link of the chain. When chains are triggered by external interruptions, they have more links, a trend for more media switches, and more organizational switches. We also found that more switching of organizational contexts in communication is associated with higher levels of stress. We describe the role of communication chains as performing alignment in multitasking and discuss the implications of our results. |
Formononetin inhibits enterovirus 71 replication by regulating COX-2/PGE2 expression | The activation of the ERK, p38 and JNK signal cascades in host cells has been demonstrated to up-regulate enterovirus 71 (EV71)-induced cyclooxygenase-2 (COX-2)/prostaglandin E2 (PGE2) expression, which is essential for viral replication. Therefore, we asked whether a compound can inhibit EV71 infection by suppressing COX-2/PGE2 expression. The antiviral effect of formononetin was determined by cytopathic effect (CPE) assay and time course assays. The influence of formononetin on EV71 replication was determined by immunofluorescence assay, western blotting assay and qRT-PCR assay. The mechanism of the antiviral activity of formononetin was determined by western blotting assay and ELISA assay. Formononetin could reduce EV71 RNA and protein synthesis in a dose-dependent manner. The time course assays showed that formononetin displayed significant antiviral activity both before (24 or 12 h) and after (0–6 h) EV71 inoculation in SK-N-SH cells. Formononetin was also able to prevent EV71-induced cytopathic effect (CPE) and suppress the activation of the ERK, p38 and JNK signal pathways. Furthermore, formononetin could suppress the EV71-induced COX-2/PGE2 expression. Also, formononetin exhibited similar antiviral activities against other members of Picornaviridae including coxsackievirus B2 (CVB2), coxsackievirus B3 (CVB3) and coxsackievirus B6 (CVB6). Formononetin could inhibit EV71-induced COX-2 expression and PGE2 production via the MAPK pathways including ERK, p38 and JNK. Formononetin exhibited antiviral activities against some members of Picornaviridae. These findings suggest that formononetin could be a potential lead or supplement for the development of new anti-EV71 agents in the future. |
Loss of HSulf-1 promotes altered lipid metabolism in ovarian cancer | BACKGROUND
Loss of the endosulfatase HSulf-1 is common in ovarian cancer, upregulates heparin binding growth factor signaling and potentiates tumorigenesis and angiogenesis. However, metabolic differences between isogenic cells with and without HSulf-1 have not been characterized upon HSulf-1 suppression in vitro. Since growth factor signaling is closely tied to metabolic alterations, we determined the extent to which HSulf-1 loss affects cancer cell metabolism.
RESULTS
Ingenuity pathway analysis of gene expression in HSulf-1 shRNA-silenced cells (Sh1 and Sh2 cells) compared to non-targeted control shRNA cells (NTC cells), and subsequent Kyoto Encyclopedia of Genes and Genomes (KEGG) database analysis, showed altered metabolic pathways, with lipid metabolism among the major pathways altered in Sh1 and Sh2 cells. Untargeted global metabolomic profiling in these isogenic cell lines identified approximately 338 metabolites using GC/MS and LC/MS/MS platforms. Knockdown of HSulf-1 in OV202 cells induced significant changes in 156 metabolites associated with several metabolic pathways including amino acids, lipids, and nucleotides. Loss of HSulf-1 promoted overall fatty acid synthesis, enhancing the levels of long-chain, branched, and essential fatty acids along with sphingolipids. Furthermore, HSulf-1 loss induced the expression of lipogenic genes including FASN, SREBF1, PPARγ, and PLA2G3, and stimulated lipid droplet accumulation. Conversely, re-expression of HSulf-1 in Sh1 cells reduced lipid droplet formation. Additionally, HSulf-1 loss also enhanced CPT1A expression and fatty acid oxidation, and augmented the protein expression of key lipolytic enzymes such as MAGL, DAGLA, HSL, and ASCL1. Overall, these findings suggest that loss of HSulf-1, by concomitantly enhancing fatty acid synthesis and oxidation, confers a lipogenic phenotype leading to the metabolic alterations associated with the progression of ovarian cancer.
CONCLUSIONS
Taken together, these findings demonstrate that loss of HSulf-1 potentially contributes to the metabolic alterations associated with the progression of ovarian pathogenesis, specifically impacting the lipogenic phenotype of ovarian cancer cells that can be therapeutically targeted. |
A mathematical model of the finding of usability problems | For 11 studies, we find that the detection of usability problems as a function of number of users tested or heuristic evaluators employed is well modeled as a Poisson process. The model can be used to plan the amount of evaluation required to achieve desired levels of thoroughness or benefits. Results of early tests can provide estimates of the number of problems left to be found and the number of additional evaluations needed to find a given fraction. With quantitative evaluation costs and detection values, the model can estimate the numbers of evaluations at which optimal cost/benefit ratios are obtained and at which marginal utility vanishes. For a “medium” example, we estimate that 16 evaluations would be worth their cost, with maximum benefit/cost ratio at four. |
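The planning use of this model can be sketched with the standard found-fraction formula: if each evaluation detects a given problem with probability λ, the expected fraction found after n independent evaluations is 1 − (1 − λ)^n. The numeric λ below is an illustrative assumption, not a value taken from the paper's eleven studies.

```python
import math

def fraction_found(detect_prob, n_evals):
    """Expected fraction of problems found after n independent evaluations,
    each detecting a given problem with probability detect_prob."""
    return 1.0 - (1.0 - detect_prob) ** n_evals

def evals_needed(detect_prob, target_fraction):
    """Smallest number of evaluations expected to reach the target fraction."""
    return math.ceil(math.log(1.0 - target_fraction)
                     / math.log(1.0 - detect_prob))
```

For example, with an assumed per-evaluation detection probability of 0.31, six evaluations are expected to uncover at least 85% of the problems, matching the diminishing-returns shape that drives the cost/benefit analysis.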
CAPACITOR-LESS LOW-DROPOUT VOLTAGE REGULATOR | A 1.2-V 40-mA capacitor-free CMOS low-dropout regulator (LDO) for system-on-chip applications to reduce board space and external pins is presented. By utilizing damping-factor-control frequency compensation on the advanced LDO structure, the proposed LDO provides high stability, as well as fast line and load transient responses, even in capacitor-free operation. The proposed LDO has been implemented in a TSMC 65-nm CMOS technology, and the total error of the output voltage due to line and load variations is small. Moreover, the output voltage can recover within ≈2.3 μs for full load current changes. The power-supply rejection ratio at 1 MHz is 26 dB. |
Joint Parsing and Named Entity Recognition | For many language technology applications, such as question answering, the overall system runs several independent processors over the data (such as a named entity recognizer, a coreference system, and a parser). This easily results in inconsistent annotations, which are harmful to the performance of the aggregate system. We begin to address this problem with a joint model of parsing and named entity recognition, based on a discriminative feature-based constituency parser. Our model produces a consistent output, where the named entity spans do not conflict with the phrasal spans of the parse tree. The joint representation also allows the information from each type of annotation to improve performance on the other, and, in experiments with the OntoNotes corpus, we found improvements of up to 1.36% absolute F1 for parsing, and up to 9.0% F1 for named entity recognition. |
Entrepreneurship and the Business Cycle | We find new empirical regularities in the business cycle in a cross-country panel of 22 OECD countries for the period 1972 to 2007; entrepreneurship Granger-causes the cycles of the world economy. Furthermore, the entrepreneurial cycle is positively affected by the national unemployment cycle. We discuss possible causes and implications of these findings. |
A theory-based framework for evaluating exergames as persuasive technology | Exergames are video games that use exertion-based interfaces to promote physical activity, fitness, and gross motor skill development. The purpose of this paper is to describe the development of an organizing framework based on principles of learning theory to classify and rank exergames according to embedded behavior change principles. Behavioral contingencies represent a key theory-based game design principle that can be objectively measured, evaluated, and manipulated to help explain and change the frequency and duration of game play. Case examples are presented that demonstrate how to code dimensions of behavior, consequences of behavior, and antecedents of behavior. Our framework may be used to identify game principles which, in the future, might be used to predict which games are most likely to promote adoption and maintenance of leisure time physical activity. |
Adaptive backstepping control of an induction motor under time-varying load torque and rotor resistance uncertainty | A new global adaptive controller is designed for induction motor speed control based on measurements of speed and stator current. The designed partial state feedback controller is singularity free, and guarantees asymptotic tracking of smooth reference trajectories for the speed of the motor under time-varying load torque and rotor resistance uncertainty for any initial condition. The new control algorithm generates estimates for the unknown time-varying load torque, rotor resistance, and unmeasured state variables (rotor fluxes), which asymptotically converge to their true values. The rotor flux modulus asymptotically tracks a desired reference signal, which allows the motor to operate within its specifications. As in the field-oriented control scheme, the control algorithm generates references for the magnetizing flux component and for the torque component of the stator current, which lead to significant simplifications for current-fed motors. The control strategy yields decoupled rotor speed and rotor flux amplitude tracking control goals, which allows the selection of an appropriate flux modulus for the rotor to maximize efficiency. |
Defining success after surgery for pelvic organ prolapse. | OBJECTIVES
To describe pelvic organ prolapse surgical success rates using a variety of definitions with differing requirements for anatomic, symptomatic, or re-treatment outcomes.
METHODS
Eighteen different surgical success definitions were evaluated in participants who underwent abdominal sacrocolpopexy within the Colpopexy and Urinary Reduction Efforts trial. The participants' assessments of overall improvement and rating of treatment success were compared between surgical success and failure for each of the definitions studied. The Wilcoxon rank sum test was used to identify significant differences in outcomes between success and failure.
RESULTS
Treatment success varied widely depending on definition used (19.2-97.2%). Approximately 71% of the participants considered their surgery "very successful," and 85.2% considered themselves "much better" than before surgery. Definitions of success requiring all anatomic support to be proximal to the hymen had the lowest treatment success (19.2-57.6%). Approximately 94% achieved surgical success when it was defined as the absence of prolapse beyond the hymen. Subjective cure (absence of bulge symptoms) occurred in 92.1% while absence of re-treatment occurred in 97.2% of participants. Subjective cure was associated with significant improvements in the patient's assessment of both treatment success and overall improvement, more so than any other definition considered (P<.001 and <.001, respectively). Similarly, the greatest difference in symptom burden and health-related quality of life as measured by the Pelvic Organ Prolapse Distress Inventory and Pelvic Organ Prolapse Impact Questionnaire scores between treatment successes and failures was noted when success was defined as subjective cure (P<.001).
CONCLUSION
The definition of success substantially affects treatment success rates after pelvic organ prolapse surgery. The absence of vaginal bulge symptoms postoperatively has a significant relationship with a patient's assessment of overall improvement, while anatomic success alone does not.
LEVEL OF EVIDENCE
II. |
Making sense of the bazaar: 1st workshop on open source software engineering | Since the coining of the term "Open Source" in 1998, there has been a surge of academic and industrial research on the topic. Making Sense of the Bazaar: 1st Workshop on Open Source Software Engineering brought together 30 researchers and practitioners from 8 countries to discuss Open Source Software as an emerging Software Engineering paradigm. The full proceedings of the workshop have been made available online, and the full workshop report will be published in a special issue of IEE Proceedings - Software on Open Source Software Engineering. |
Image processing based Feature extraction of Bangladeshi banknotes | Counterfeit currency is a burning question throughout the world. The counterfeiters are becoming harder to track down because of their rapid adoption of and adaptation to highly advanced technology. One of the most effective methods to stop counterfeiting can be the widespread use of counterfeit detection tools/software that are easily available and are efficient in terms of cost, reliability and accuracy. This paper presents a core software system to build a robust automated counterfeit currency detection tool for Bangladeshi banknotes. The software detects fake currency by extracting existing features of banknotes such as micro-printing, optically variable ink (OVI), watermark, iridescent ink, security thread and ultraviolet lines using OCR (Optical Character Recognition), Contour Analysis, Face Recognition, Speeded Up Robust Features (SURF) and the Canny edge detection and Hough transform algorithms of OpenCV. The success rate of this software can be measured in terms of accuracy and speed. This paper also focuses on the pros and cons of implementation details that may degrade the performance of image processing based paper currency authentication systems. |
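As a generic illustration of the edge-based feature extraction this abstract mentions (not the paper's actual OpenCV pipeline), a minimal gradient-magnitude edge detector can be sketched in plain NumPy:

```python
import numpy as np

def gradient_edges(img, thresh=0.5):
    """Mark pixels whose central-difference gradient magnitude exceeds thresh.

    A toy stand-in for Canny-style edge detection: compute horizontal and
    vertical central differences, then threshold the gradient magnitude.
    """
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical gradient
    return np.hypot(gx, gy) > thresh          # boolean edge map
```

A production system would instead use `cv2.Canny` (with hysteresis thresholding and non-maximum suppression) followed by a Hough transform to locate line features such as the security thread.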
Substance use disorders in prisoners: an updated systematic review and meta‐regression analysis in recently incarcerated men and women | AIMS
The aims were to (1) estimate the prevalence of alcohol and drug use disorders in prisoners on reception to prison and (2) estimate and test sources of between study heterogeneity.
METHODS
Studies reporting the 12-month prevalence of alcohol and drug use disorders in prisoners on reception to prison from 1 January 1966 to 11 August 2015 were identified from seven bibliographic indexes. Primary studies involving clinical interviews or validated instruments leading to DSM or ICD diagnoses were included; self-report surveys and investigations that assessed individuals more than 3 months after arrival to prison were not. Random-effects meta-analysis and subgroup and meta-regression analyses were conducted. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed.
RESULTS
In total, 24 studies covering 18 388 prisoners across 10 countries were identified. The random-effects pooled prevalence estimate of alcohol use disorder was 24% [95% confidence interval (CI) = 21-27], with very high heterogeneity (I2 = 94%). Study-level estimates ranged from 16% to 51% in male prisoners and from 10% to 30% in female prisoners. For drug use disorders, there was evidence of heterogeneity by sex, and the pooled prevalence estimate in male prisoners was 30% (95% CI = 22-38; I2 = 98%; 13 studies; range 10-61%) and, in female prisoners, was 51% (95% CI = 43-58; I2 = 95%; 10 studies; range 30-69%). On meta-regression, sources of heterogeneity included higher prevalence of drug use disorders in women, increasing rates of drug use disorders in recent decades, and participation rate.
CONCLUSIONS
Substance use disorders are highly prevalent in prisoners. Approximately a quarter of newly incarcerated prisoners of both sexes had an alcohol use disorder, and the prevalence of a drug use disorder was at least as high in men, and higher in women. |
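The random-effects pooling reported above can be illustrated with a minimal DerSimonian-Laird estimator in Python. The prevalence figures below are made up for the example; this is a generic sketch of the method, not the study's actual computation:

```python
import numpy as np

def dl_pooled_prevalence(p, n):
    """DerSimonian-Laird random-effects pooled proportion.

    p: per-study prevalence estimates; n: per-study sample sizes.
    """
    p = np.asarray(p, dtype=float)
    n = np.asarray(n, dtype=float)
    var = p * (1 - p) / n                       # within-study variance
    w = 1.0 / var                               # fixed-effect weights
    p_fixed = (w * p).sum() / w.sum()
    q = (w * (p - p_fixed) ** 2).sum()          # Cochran's Q statistic
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(p) - 1)) / c)     # between-study variance
    w_star = 1.0 / (var + tau2)                 # random-effects weights
    return (w_star * p).sum() / w_star.sum()
```

In practice, proportions are often logit-transformed before pooling; the raw-proportion version here keeps the sketch short.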
Survey of Image Denoising Techniques | Removing noise from the original signal is still a challenging problem for researchers. There have been several published algorithms and each approach has its assumptions, advantages, and limitations. This paper presents a review of some significant work in the area of image denoising. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of denoising are also discussed. |
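As a minimal example of one classical approach covered in such surveys, a median filter (effective against impulse noise) can be sketched in NumPy. This is a generic illustration, not code from the paper:

```python
import numpy as np

def median_denoise(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')   # replicate borders
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

Because the median is robust to outliers, isolated corrupted pixels are removed while step edges are preserved, which is why median filtering is the textbook choice for salt-and-pepper noise.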
Are educational computer microgames engaging and effective for knowledge acquisition at high-schools? A quasi-experimental study | Curricular schooling can benefit from the usage of educational computer games, but it is difficult to integrate them into the formal schooling system. Here, we investigate one possible approach to this integration, which capitalizes on using a micro-game that can be played with a teacher’s guidance as a supplement after a traditional expository lecture followed by a debriefing. The game’s purpose is to reinforce and integrate part of the knowledge learnt during the lecture. We investigated the feasibility of this approach in a quasi-experimental study in 70-minute seminars on the topic of animal learning in 5 classes at 4 different high-schools in the Czech Republic. Each class was randomly divided into two groups. After an expository lecture, the game group played a game called Orbis Pictus Bestialis while the control group received an extra lecture that used media-rich materials. The time allotment was the same in both groups. We investigated the immediate and one-month-delayed effects of the game on students’ knowledge reinforced and integrated by the game as well as on knowledge learnt during the expository lecture but not strengthened by the game. We also investigated the seminar's overall appeal to students and its perceived educational value. Data from 100 students were analysed. The results showed that a) the game-playing is comparable to the traditional form of teaching concerning immediate knowledge gains and has a significant medium positive effect size regarding retention, b) the game-playing is not detrimental to information transmitted in the expository lecture but not strengthened by the game, c) perceived educational value and the overall appeal were high in the game group, although the perceived educational value was slightly lower in the game group compared to the traditional group. 
Our results suggest that the proposed approach of harnessing educational computer games at high-schools is promising. |
Zero-Shot Adaptive Transfer for Conversational Language Understanding | Conversational agents such as Alexa and Google Assistant constantly need to increase their language understanding capabilities by adding new domains. A massive amount of labeled data is required for training each new domain. While domain adaptation approaches alleviate the annotation cost, prior approaches suffer from increased training time and suboptimal concept alignments. To tackle this, we introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes the slot description for transferring reusable concepts across domains, and enjoys efficient training without any explicit concept alignments. Extensive experimentation over a dataset of 10 domains relevant to our commercial personal digital assistant shows that our model outperforms previous state-of-the-art systems by a large margin, and achieves an even higher improvement in the low data regime. |
Global, regional, and national causes of child mortality: an updated systematic analysis for 2010 with time trends since 2000 | BACKGROUND
Information about the distribution of causes of and time trends for child mortality should be periodically updated. We report the latest estimates of causes of child mortality in 2010 with time trends since 2000.
METHODS
Updated total numbers of deaths in children aged 0-27 days and 1-59 months were applied to the corresponding country-specific distribution of deaths by cause. We did the following to derive the number of deaths in children aged 1-59 months: we used vital registration data for countries with an adequate vital registration system; we applied a multinomial logistic regression model to vital registration data for low-mortality countries without adequate vital registration; we used a similar multinomial logistic regression with verbal autopsy data for high-mortality countries; for India and China, we developed national models. We aggregated country results to generate regional and global estimates.
FINDINGS
Of 7·6 million deaths in children younger than 5 years in 2010, 64·0% (4·879 million) were attributable to infectious causes and 40·3% (3·072 million) occurred in neonates. Preterm birth complications (14·1%; 1·078 million, uncertainty range [UR] 0·916-1·325), intrapartum-related complications (9·4%; 0·717 million, 0·610-0·876), and sepsis or meningitis (5·2%; 0·393 million, 0·252-0·552) were the leading causes of neonatal death. In older children, pneumonia (14·1%; 1·071 million, 0·977-1·176), diarrhoea (9·9%; 0·751 million, 0·538-1·031), and malaria (7·4%; 0·564 million, 0·432-0·709) claimed the most lives. Despite tremendous efforts to identify relevant data, the causes of only 2·7% (0·205 million) of deaths in children younger than 5 years were medically certified in 2010. Between 2000 and 2010, the global burden of deaths in children younger than 5 years decreased by 2 million, of which pneumonia, measles, and diarrhoea contributed the most to the overall reduction (0·451 million [0·339-0·547], 0·363 million [0·283-0·419], and 0·359 million [0·215-0·476], respectively). However, only tetanus, measles, AIDS, and malaria (in Africa) decreased at an annual rate sufficient to attain the Millennium Development Goal 4.
INTERPRETATION
Child survival strategies should direct resources toward the leading causes of child mortality, with attention focusing on infectious and neonatal causes. More rapid decreases from 2010-15 will need accelerated reduction for the most common causes of death, notably pneumonia and preterm birth complications. Continued efforts to gather high-quality data and enhance estimation methods are essential for the improvement of future estimates.
FUNDING
The Bill & Melinda Gates Foundation. |
Artificial and Computational Intelligence in Games: AI-Driven Game Design (Dagstuhl Seminar 17471) | With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games |
Gather-Excite : Exploiting Feature Context in Convolutional Neural Networks | While the use of bottom-up local operators in convolutional neural networks (CNNs) matches well some of the statistics of natural images, it may also prevent such models from capturing contextual long-range feature interactions. In this work, we propose a simple, lightweight approach for better context exploitation in CNNs. We do so by introducing a pair of operators: gather, which efficiently aggregates feature responses from a large spatial extent, and excite, which redistributes the pooled information to local features. The operators are cheap, both in terms of number of added parameters and computational complexity, and can be integrated directly in existing architectures to improve their performance. Experiments on several datasets show that gather-excite can bring benefits comparable to increasing the depth of a CNN at a fraction of the cost. For example, we find ResNet-50 with gather-excite operators is able to outperform its 101-layer counterpart on ImageNet with no additional learnable parameters. We also propose a parametric gather-excite operator pair which yields further performance gains, relate it to the recently-introduced Squeeze-and-Excitation Networks, and analyse the effects of these changes to the CNN feature activation statistics. |
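A rough NumPy sketch of the parameter-free gather-excite idea (global average pooling as the gather step, a per-channel sigmoid gate as the excite step) might look as follows. The operators in the paper can be more general (e.g. strided pooling over a large but finite extent, parametric variants), so treat this as an assumption-laden illustration rather than the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gather_excite(feature_map):
    """Parameter-free gather-excite over a (C, H, W) feature map.

    gather: summarize each channel over its full spatial extent;
    excite: rescale local features by a per-channel gate.
    """
    context = feature_map.mean(axis=(1, 2))    # gather: (C,) channel context
    gate = sigmoid(context)[:, None, None]     # excite: broadcastable gate
    return feature_map * gate
```

Because the gate depends on global context, each local activation is modulated by long-range information at negligible parameter cost.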
Categorical Reparameterization with Gumbel-Softmax | Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification. |
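The Gumbel-Softmax sample described above can be sketched in NumPy: add Gumbel(0, 1) noise to the logits, then apply a temperature-scaled softmax. This is a standalone illustration of the trick, not the authors' implementation:

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    """Draw one relaxed categorical sample via the Gumbel-Softmax trick."""
    u = rng.uniform(1e-10, 1.0 - 1e-10, size=logits.shape)
    g = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    y = (logits + g) / temperature     # low temperature -> near one-hot
    y = y - y.max()                    # numerical stability
    e = np.exp(y)
    return e / e.sum()
```

As the temperature is annealed toward zero the sample approaches a one-hot vector (an argmax over noisy logits), while at higher temperatures it stays smooth enough to pass gradients through.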
Framing the feudal bond: a chapter in the history of the ius commune in Medieval Europe | In this article I wish to show how history of legal doctrines can assist in a better understanding of the legal reasoning over a long historical period. First I will describe the nineteenth century discussion on the definition of law as a ‘science’, and some influences of the medieval idea of science on the modern definition. Then, I’ll try to delve deeper into a particular doctrinal problem of the Middle Ages: how to fit the feudal relationship between lord and vassal into the categories of Roman law. The scholastic interpretation of these categories is very original, to the point of framing a purely personal relationship among property rights. The effort made by medieval legal culture to frame the reality into the abstract concepts of law can be seen as the birth of legal dogmatics. |
Increased Solubility, Dissolution and Physicochemical Studies of Curcumin-Polyvinylpyrrolidone K-30 Solid Dispersions | Solid dispersions (SD) of curcumin-polyvinylpyrrolidone in the ratios of 1:2, 1:4, 1:5, 1:6, and 1:8 were prepared in an attempt to increase the solubility and dissolution. Solubility, dissolution, powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) of solid dispersions, physical mixtures (PM) and curcumin were evaluated. Both solubility and dissolution of curcumin solid dispersions were significantly greater than those observed for physical mixtures and intact curcumin. The powder X-ray diffractograms indicated that amorphous curcumin was obtained from all solid dispersions. It was found that the optimum weight ratio for curcumin:PVP K-30 is 1:6. The 1:6 solid dispersion remained in the amorphous form after storage at ambient temperature for 2 years, and its dissolution profile did not differ significantly from that of the freshly prepared dispersion. Keywords: Curcumin, polyvinylpyrrolidone K-30, solid dispersion, dissolution, physicochemical. |
International collaborative study on ghost cell odontogenic tumours: calcifying cystic odontogenic tumour, dentinogenic ghost cell tumour and ghost cell odontogenic carcinoma. | BACKGROUND
Calcifying odontogenic cyst was described first by Gorlin et al. in 1962; since then several hundreds of cases had been reported. In 1981, Praetorius et al. proposed a widely used classification. Afterwards, several authors proposed different classifications and discussed its neoplastic potential. The 2005 WHO Classification of Odontogenic Tumours re-named this entity as calcifying cystic odontogenic tumour (CCOT) and defined the clinico-pathological features of the ghost cell odontogenic tumours, the CCOT, the dentinogenic ghost cell tumour (DGCT) and the ghost cell odontogenic carcinoma (GCOC).
METHODS
The aim of this paper was to review the clinical-pathological features of 122 CCOT, DGCT and GCOC cases retrieved from the files of the oral pathology laboratories from 14 institutions in Mexico, South Africa, Denmark, the USA, Brazil, Guatemala and Peru. It attempts to clarify and to group the clinico-pathological features of the analysed cases and to propose an objective, comprehensive and useful classification under the 2005 WHO classification guidelines.
RESULTS
CCOT cases were divided into four sub-types: (i) simple cystic; (ii) odontoma associated; (iii) ameloblastomatous proliferating; and (iv) CCOT associated with benign odontogenic tumours other than odontomas. DGCT was separated into a central aggressive DGCT and a peripheral non-aggressive counterpart. For GCOC, three variants were identified. The first reported cases of a recurrent peripheral CCOT and a multiple synchronous, CCOT are included.
CONCLUSIONS
Our results suggest that ghost cell odontogenic tumours comprise a heterogeneous group of neoplasms which need further studies to define more precisely their biological behaviour. |
A neural predictor of cultural popularity | We use neuroimaging to predict cultural popularity, something that is popular in the broadest sense and appeals to a large number of individuals. Neuroeconomic research suggests that activity in reward-related regions of the brain, notably the orbitofrontal cortex and ventral striatum, is predictive of future purchasing decisions, but it is unknown whether the neural signals of a small group of individuals are predictive of the purchasing decisions of the population at large. For neuroimaging to be useful as a measure of widespread popularity, these neural responses would have to generalize to a much larger population that is not the direct subject of the brain imaging itself. Here, we test the possibility of using functional magnetic resonance imaging (fMRI) to predict the relative popularity of a common good: music. We used fMRI to measure the brain responses of a relatively small group of adolescents while listening to songs of largely unknown artists. As a measure of popularity, the sales of these songs were totaled for the three years following scanning, and brain responses were then correlated with these “future” earnings. Although subjective likability of the songs was not predictive of sales, activity within the ventral striatum was significantly correlated with the number of units sold. These results suggest that the neural responses to goods are not only predictive of purchase decisions for those individuals actually scanned, but such responses generalize to the population at large and may be used to predict cultural popularity. |
Anisotropic Ru3+ 4d5 magnetism in the α-RuCl3 honeycomb system: Susceptibility, specific heat, and zero-field NMR | Hexagonal α-Ru trichloride single crystals exhibit a strong magnetic anisotropy, and we show that upon applying fields up to 14 T in the honeycomb plane the successive magnetic order at T1 = 14 K and T2 = 8 K could be completely suppressed, whereas in the perpendicular direction the magnetic order is robust. Furthermore, the field dependence of χ(T) implies coexisting ferro- and antiferromagnetic exchange between in-plane components of Ru-spins, whereas for out-of-plane components a strong antiferromagnetic exchange becomes evident. Ru zero-field nuclear magnetic resonance evidences a complex (probably chiral) long-range magnetic order below 14 K. A large orbital moment on Ru is found in density-functional calculations. |
Interactive Grounded Language Acquisition and Generalization in a 2D World | We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix. |
Generating Sentences by Editing Prototypes | We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies. |
Using neural networks and data mining techniques for the financial distress prediction model | The operating status of an enterprise is disclosed periodically in a financial statement. As a result, investors usually only get information about the financial distress a company may be in after the formal financial statement has been published. If company executives intentionally package financial statements with the purpose of hiding the actual status of the company, then investors will have even less chance of obtaining the real financial information. For example, a company can manipulate its current ratio by up to 200% so that its liquidity deficiency will not show up as a financial distress in the short run. To improve the accuracy of the financial distress prediction model, this paper adopted the operating rules of the Taiwan stock exchange corporation (TSEC) which were violated by those companies that were subsequently stopped and suspended, as the scope of analysis for this research. In addition, this paper also used financial ratios, other non-financial ratios, and factor analysis to extract adaptable variables. Moreover, the artificial neural network (ANN) and data mining (DM) techniques were used to construct the financial distress prediction model. The empirical experiment with a total of 37 ratios and 68 listed companies as the initial samples obtained a satisfactory result, which testifies to the feasibility and validity of our proposed methods for the financial distress prediction of listed companies. This paper makes four critical contributions: (1) The more factor-analysis variables we used, the lower the accuracy obtained by the ANN and DM approaches. (2) The closer we get to the actual occurrence of financial distress, the higher the accuracy we obtain, with an 82.14% correct percentage for two seasons prior to the occurrence of financial distress. 
(3) Our empirical results show that factor analysis increases the error of classifying companies that are in a financial crisis as normal companies. (4) By developing a financial distress prediction model, the ANN approach obtains better prediction accuracy than the DM clustering approach. Therefore, this paper proposes that the artificial intelligence (AI) approach could be a more suitable methodology than traditional statistics for predicting the potential financial distress of a company. |
Sequence to Sequence Learning with Neural Networks | Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT’14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM’s BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM’s performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier. |
PyNN: A Common Interface for Neuronal Network Simulators | Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. |
The complex link between severity of asthma and rhinitis in mite allergic patients. | AIM
The aim of the study was to evaluate the link between the severity of upper and lower airways diseases in mite allergic patients with respiratory allergy.
PATIENTS AND METHOD
A multicentre, observational, cross-sectional study was carried out in 556 consecutively enrolled mite allergic patients with rhinitis and asthma comorbidity attending a specialist unit. Severity assessment of rhinitis and asthma was evaluated in accordance with ARIA and GINA guidelines.
RESULTS
Reliable data were available for 518 patients. The distribution of rhinitis severity was: 15.6% mild intermittent rhinitis, 4.4% moderate-severe intermittent rhinitis, 30.3% mild persistent rhinitis and 49.6% moderate persistent rhinitis. The distribution of asthma severity was: 41.3% mild intermittent asthma, 14.3% mild persistent asthma, 19.1% moderate persistent asthma and 25.3% severe persistent asthma. In patients with moderate-severe persistent rhinitis (49.5%) a significant trend (p = 0.005) was found pointing to an increased link with asthma severity.
CONCLUSION
A link between respective severities of rhinitis and asthma was found in only half of mite allergic patients with rhinitis and asthma. |
“Like a pillar of fire above the Alps”: William Blake and the Prospect of Revolution | The Alps are not part of a sublime prospect in William Blake's poetry simply because they provoke Burkean terror or occasion a Kantian sensation of the unrepresentable. Rather, in both The Song of Los (1795) and Jerusalem (1804–c. 1820), the Alps are sublime in so far as they present the prospect of revolution. The earlier poem flirts simultaneously with a vision of revolution as eschatological finality and as the political realization of the ideas of “Rousseau and Voltaire.” By contrast, Jerusalem conveys revolution as an experience that potentiates the emergence of a radical ontological condition that frees the subject from both political and religious subjectivation. This paper traces the ways Jerusalem harnesses the Kantian sublime to Deleuzian concepts of difference and repetition. Jerusalem embraces the contra-finality of the sublime that Kant disparages and exploits the aesthetic as a Deleuzian site for the emergence of ontological difference. When Los sings his “watch song,” “the Alps and the Appe... |
Multi-level Simulation of Internet of Things on Smart Territories | In this paper, a methodology is presented and employed for simulating the Internet of Things (IoT). The requirement for scalability, due to the potentially huge number of sensors and devices involved, and the heterogeneous scenarios that might occur, imposes resorting to sophisticated modeling and simulation techniques. In particular, multi-level simulation is regarded as a main framework that allows simulating large-scale IoT environments while keeping high levels of detail, when needed. We consider a use case based on the deployment of smart services in decentralized territories. A two-level simulator is employed, which is based on a coarse agent-based, adaptive parallel and distributed simulation approach to model the general life of simulated entities. However, when needed, a finer-grained simulator (based on OMNeT++) is triggered on a restricted portion of the simulated area, which allows considering all issues concerned with wireless communications. Based on this use case, it is confirmed that ad-hoc wireless networking technologies represent a principal tool to deploy smart services over decentralized countrysides. Moreover, the performance evaluation confirms the viability of utilizing multi-level simulation for simulating large-scale IoT environments. |
Clinical Utility of a Bronchial Genomic Classifier in Patients With Suspected Lung Cancer. | BACKGROUND
Bronchoscopy is often the initial diagnostic procedure performed in patients with pulmonary lesions suggestive of lung cancer. A bronchial genomic classifier was previously validated to identify patients at low risk for lung cancer after an inconclusive bronchoscopy. In this study, we evaluated the potential of the classifier to reduce invasive procedure utilization in patients with suspected lung cancer.
METHODS
In two multicenter trials of patients undergoing bronchoscopy for suspected lung cancer, the classifier was measured in normal-appearing bronchial epithelial cells from a mainstem bronchus. Among patients with low and intermediate pretest probability of cancer (n = 222), subsequent invasive procedures after an inconclusive bronchoscopy were identified. Estimates of the ability of the classifier to reduce unnecessary procedures were calculated.
RESULTS
Of the 222 patients, 188 (85%) had an inconclusive bronchoscopy and follow-up procedure data available for analysis. Seventy-seven (41%) patients underwent an additional 99 invasive procedures, which included surgical lung biopsy in 40 (52%) patients. Benign and malignant diseases were ultimately diagnosed in 62 (81%) and 15 (19%) patients, respectively. Among those undergoing surgical biopsy, 20 (50%) were performed in patients with benign disease. If the classifier had been used to guide decision making, procedures could have been avoided in 50% (21 of 42) of patients undergoing further invasive testing. Further, among 35 patients with an inconclusive index bronchoscopy who were diagnosed with lung cancer, the sensitivity of the classifier was 89%, with 4 (11%) patients having a false-negative classifier result.
CONCLUSIONS
Invasive procedures after an inconclusive bronchoscopy occur frequently, and most are performed in patients ultimately diagnosed with benign disease. Using the genomic classifier as an adjunct to bronchoscopy may reduce the frequency and associated morbidity of these invasive procedures.
TRIAL REGISTRY
ClinicalTrials.gov; Nos. NCT01309087 and NCT00746759; URL: www.clinicaltrials.gov. |
The next-to-shortest path problem on directed graphs with positive edge weights | Given an edge-weighted graph G and two distinct vertices s and t of G, the next-to-shortest path problem asks for a path from s to t of minimum length among all paths from s to t except the shortest ones. In this article, we consider the version where G is directed and all edge weights are positive. Some properties of the requested path are derived when G is an arbitrary digraph. In addition, if G is planar, an O(n³)-time algorithm is proposed, where n is the number of vertices of G. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 000(00), 000–00 |
Prediction of Objective Child Motivation Test Scores from Parents' Reports of Child-Rearing Practices | The mothers and fathers of 310 junior high school children completed a child-rearing questionnaire and the children completed the School Motivation Analysis Test. The mothers' and fathers' data were factor analyzed separately, and factor scores were computed and employed as predictor variables in separate sets of multiple regression analyses employing the children's school motivation scores as criteria. Of the 40 possible full-model R²s, 29 were significant. The pattern of variable interrelationships is discussed, with some of the most consistent results being a significant positive relationship of the children's integrated and unintegrated assertiveness scores with the fathers' late authoritative discipline and the mothers' high family-adjustment scores. Among numerous other relationships discussed were significant positive relationships of fathers' low use of discipline and mothers' low physical punishment vs. high use of reason with the children's integrated self-sentiment and superego. |
State Complexity of Prefix-Free Regular Languages | We investigate the state complexities of basic operations for prefix-free regular languages. The state complexity of an operation for regular languages is the number of states that are necessary and sufficient in the worst-case for the minimal deterministic finite-state automaton (DFA) that accepts the language obtained from the operation. We know that a regular language is prefix-free if and only if its minimal DFA has only one final state and the final state has no out-transitions whose target state is not a sink state. Based on this observation, we reduce the state complexities for prefix-free regular languages compared with the state complexities for (general) regular languages. For both catenation and Kleene star operations of (general) regular languages, the state complexities are exponential in the size of given minimal DFAs. On the other hand, if both regular languages are prefix-free, then the state complexities are at most linear. We also demonstrate that we can reduce the state complexities of intersection and union operations based on the structural properties of prefix-free minimal DFAs. |
Designing Centers of Expertise for Academic Learning Through Video Games | Schools appear to be facing a crisis of engaging secondary students in meaningful learning. Many are recognizing that the learning principles in computer and video games reflect our best theories of cognition, yet are underutilized as an educational resource. This paper suggests an alternative model for game-based learning outside of schools. Drawing on case studies of youth participating in a year-long program, it describes an approach to bridging learners' identities in and out of school through historical simulation computer games situated within a community of practice of game experts. Participants developed both academic skills and productive identities as consumers and producers of information. Through these cases, we propose a model of centers of expertise: learning programs that seek to foster and develop new media literacies that pay off in schools and that lead to new identities outside of school as well. |
Biomechanical advantages of triple-loaded suture anchors compared with double-row rotator cuff repairs. | PURPOSE
To evaluate the strength and suture-tendon interface security of various suture anchors triply and doubly loaded with ultrahigh-molecular weight polyethylene-containing sutures and to evaluate the relative effectiveness of placing these anchors in a single-row or double-row arrangement by cyclic loading and then destructive testing.
METHODS
The infraspinatus muscle was reattached to the original humeral footprint by use of 1 of 5 different repair patterns in 40 bovine shoulders. Two single-row repairs and three double-row repairs were tested. High-strength sutures were used for all repairs. Five groups were studied: group 1, 2 triple-loaded screw suture anchors in a single row with simple stitches; group 2, 2 triple-loaded screw anchors in a single row with simple stitches over a fourth suture passed perpendicularly ("rip-stop" stitch); group 3, 2 medial and 2 lateral screw anchors with a single vertical mattress stitch passed from the medial anchors and 2 simple stitches passed from the lateral anchors; group 4, 2 medial double-loaded screw anchors tied in 2 mattress stitches and 2 push-in lateral anchors capturing the medial sutures in a "crisscross" spanning stitch; and group 5, 2 medial double-loaded screw anchors tied in 2 mattress stitches and 2 push-in lateral anchors creating a "suture-bridge" stitch. The specimens were cycled between 10 and 180 N at 1.0 Hz for 3,500 cycles or until failure. Endpoints were cyclic loading displacement (5 and 10 mm), total displacement, and ultimate failure load.
RESULTS
A single row of triply loaded anchors was more resistant to stretching to a 5- and 10-mm gap than the double-row repairs, with or without the addition of a rip-stop suture (P < .05). The addition of a rip-stop stitch made the repair more resistant to gap formation than a double-row repair (P < .05). The crisscross double row created by 2 medial double-loaded suture anchors and 2 lateral push-in anchors stretched more than any other group (P < .05).
CONCLUSIONS
Double-row repairs with either crossing sutures or 4 separate anchor points were more likely to fail (5- or 10-mm gap) than a single-row repair loaded with 3 simple sutures.
CLINICAL RELEVANCE
The triple-loaded anchors with ultrahigh-molecular weight polyethylene-containing sutures placed in a single row were more resistant to stretching than the double-row groups. |
Probability Models for Customer-Base Analysis | As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved. |
Collaborative Filtering for Implicit Feedback Datasets | A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumers dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well-tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model. |
Multiple response codes play specific roles in response selection and inhibition under task switching. | Several task-switch studies show that response category repetition is favorable on task repetition trials, but disadvantageous on task switch trials. In the present study we investigated how this interaction depends on the type and number of involved response categories. In a dual-task number-categorization experiment, subjects had to respond to tasks T(1) and T(2) with one of the two fingers of their left and right hand, respectively. For one group of participants, the use of spatial response categories, and for another group the use of finger-type categories was induced. It turned out that the interaction between task switching and response category repetition was clearly related to the induced response categories, but at the same time, the spatial categories nevertheless also affected response selection in the finger-type group. However, these two effects were additive. This shows that multiple response codes can simultaneously be involved in response selection, but that they affect performance differentially. |
NONCONVEX MODEL FOR FACTORING NONNEGATIVE MATRICES | We study the Nonnegative Matrix Factorization (NMF) problem, which approximates a nonnegative matrix by a low-rank factorization. This problem is particularly important in Machine Learning and arises in a large number of applications. Unfortunately, the original formulation is ill-posed and NP-hard. In this paper, we propose a row-sparse model based on Row Entropy Minimization to solve the NMF problem under the separability assumption, which states that each data point is a convex combination of a few distinct data columns. We utilize the concentration of the entropy function and the ℓ∞ norm to concentrate the energy on the least number of latent variables. We prove that under the separability assumption, our proposed model robustly recovers the data columns that generate the dataset, even when the data is corrupted by noise. We empirically justify the robustness of the proposed model and show that it is significantly more robust than the state-of-the-art separable NMF algorithms. |
Analysis and Implementation of a New Switching Memristor Scroll Hyperchaotic System and Application in Secure Communication | This paper proposes a novel switching scroll hyperchaotic system based on a memristor device and explores its application to secure communication. The new system can be switched between the double-scroll chaotic system and the multiscroll one by switch S1 and switch S2. We first give the construction process of the novel system, its numerical simulations, and its dynamical properties. Moreover, the memristive circuit implementation of the new switching system is presented, and the results are in agreement with those of the numerical simulation. Finally, the new switching memristive system is applied to secure communication by means of drive-response synchronization with chaotic masking. When the voice signal is a rising waveform, it is encrypted by the double-scroll memristive system; when the voice signal is a falling waveform, the multiscroll memristive system works. The voice signal is completely submerged in the chaotic signal and cannot be distinguished at all. Security analyses show that this is a successful application to secure communication. |
A randomised phase II trial of Stereotactic Ablative Fractionated radiotherapy versus Radiosurgery for Oligometastatic Neoplasia to the lung (TROG 13.01 SAFRON II) | Stereotactic ablative body radiotherapy (SABR) is emerging as a non-invasive method for precision irradiation of lung tumours. However, the ideal dose/fractionation schedule is not yet known. The primary purpose of this study is to assess safety and efficacy profile of single and multi-fraction SABR in the context of pulmonary oligometastases. The TROG 13.01/ALTG 13.001 clinical trial is a multicentre unblinded randomised phase II study. Eligible patients have up to three metastases to the lung from any non-haematological malignancy, each < 5 cm in size, non-central targets, and have all primary and extrathoracic disease controlled with local therapies. Patients are randomised 1:1 to a single fraction of 28Gy versus 48Gy in four fractions of SABR. The primary objective is to assess the safety of each treatment arm, with secondary objectives including assessment of quality of life, local efficacy, resource use and costs, overall and disease free survival and time to distant failure. Outcomes will be stratified by number of metastases and origin of the primary disease (colorectal versus non-colorectal primary). Planned substudies include an assessment of the impact of online e-Learning platforms for lung SABR and assessment of the effect of SABR fractionation on the immune responses. A total of 84 patients are required to complete the study. Fractionation schedules have not yet been investigated in a randomised fashion in the setting of oligometastatic disease. Assuming the likelihood of similar clinical efficacy in both arms, the present study design allows for exploration of the hypothesis that cost implications of managing potentially increased toxicities from single fraction SABR will be outweighed by costs associated with delivering multiple-fraction SABR. 
ACTRN12613001157763 , registered 17th October 2013 |
Tamper-proofing of electronic and printed text documents via robust hashing and data-hiding | In this paper, we deal with the problem of authentication and tamper-proofing of text documents that can be distributed in electronic or printed forms. We advocate the combination of robust text hashing and text data-hiding technologies as an efficient solution to this problem. First, we consider the problem of text data-hiding in the scope of the Gel'fand-Pinsker data-hiding framework. For illustration, two modern text data-hiding methods, namely color index modulation (CIM) and location index modulation (LIM), are explained. Second, we study two approaches to robust text hashing that are well suited for the considered problem. In particular, both approaches are compatible with CIM and LIM. The first approach makes use of optical character recognition (OCR) and a classical cryptographic message authentication code (MAC). The second approach is new and can be used in some scenarios where OCR does not produce consistent results. The experimental work compares both approaches and shows their robustness against typical intentional/unintentional document distortions including electronic format conversion, printing, scanning, [...] VILLAN SEBASTIAN, Renato Fisher, et al. Tamper-proofing of Electronic and Printed Text Documents via Robust Hashing and Data-Hiding. In: Proceedings of SPIE-IS&T Electronic Imaging 2007, Security, Steganography, and Watermarking of Multimedia |
Stepwise Fabrication of Co-Embedded Porous Multichannel Carbon Nanofibers for High-Efficiency Oxygen Reduction | Highlights: An interconnected structure is developed by evaporation of zinc species using a ZnCo2O4 precursor as the cobalt resource, enabling communication between channels as well as homogeneous loading of active sites. A shell structure of Co3O4 is formed on the surface of a zero-valent Co0 core during a stepwise carbothermic reduction of ZnCo2O4. The Co-embedded multichannel carbon nanofibers exhibit not only a superior half-wave potential but also an excellent durability compared with that of the commercial 30% Pt/C. Abstract: A novel nonprecious metal material consisting of Co-embedded porous interconnected multichannel carbon nanofibers (Co/IMCCNFs) was rationally designed for oxygen reduction reaction (ORR) electrocatalysis. In the synthesis, ZnCo2O4 was employed to form interconnected mesoporous channels and provide highly active Co3O4/Co core–shell nanoparticle-based sites for the ORR. The IMC structure, with a large synergistic effect of the N and Co active sites, provided fast ORR electrocatalysis kinetics. The Co/IMCCNFs exhibited a high half-wave potential of 0.82 V (vs. reversible hydrogen electrode) and excellent stability with a current retention up to 88% after 12,000 cycles in a current–time test, which is only 55% for 30 wt% Pt/C. |
Link prediction in multiplex online social networks | Online social networks play a major role in modern societies, and they have shaped the way social relationships evolve. Link prediction in social networks has many potential applications such as recommending new items to users, friendship suggestion and discovering spurious connections. Many real social networks evolve the connections in multiple layers (e.g. multiple social networking platforms). In this article, we study the link prediction problem in multiplex networks. As an example, we consider a multiplex network of Twitter (as a microblogging service) and Foursquare (as a location-based social network). We consider social networks of the same users in these two platforms and develop a meta-path-based algorithm for predicting the links. The connectivity information of the two layers is used to predict the links in Foursquare network. Three classical classifiers (naive Bayes, support vector machines (SVM) and K-nearest neighbour) are used for the classification task. Although the networks are not highly correlated in the layers, our experiments show that including the cross-layer information significantly improves the prediction performance. The SVM classifier results in the best performance with an average accuracy of 89%. |
A Deep Neural Network for Modeling Music | We propose a convolutional neural network architecture with a k-max pooling layer for semantic modeling of music. The aim of a music model is to analyze and represent the semantic content of music for purposes of classification, discovery, or clustering. The k-max pooling layer is used in the network to make it possible to pool the k most active features, capturing the semantic-rich and time-varying information about music. Our network takes input music as a sequence of audio words, where each audio word is associated with a distributed feature vector that can be fine-tuned by backpropagating errors during training. The architecture allows us to take advantage of the better-trained audio word embeddings and the deep structures to produce more robust music representations. Experimental results on two different music collections show that our neural networks achieved the best accuracy in music genre classification compared with three state-of-the-art systems. |
Performance Study of an In-Car Switched Ethernet Network without Prioritization | This paper presents the current state of our research in real-time communication of an IP-based in-car network. The Internet Protocol (IP) will serve as the convergence layer of different specific in-car network protocols, and IEEE 802.3 Ethernet will be the basic technology to transport IP. In this work, we evaluate a legacy switched Ethernet network without any Quality of Service (QoS) mechanisms. While there are arguments for not using QoS mechanisms, we give evidence that communication requirements and service constraints of a more and more streaming-intensive in-car network cannot be met without them. We argue for a setup with different traffic types: CAN- and FlexRay-like control messages, camera streaming, video and audio streaming, and bulk traffic. We also argue for a simple double-star topology as a valid assumption where the target architecture of the IP-based in-car network is not yet clear. The setup and simulation will serve as framework and motivation for future work: analyzing IP-based real-time communication using QoS mechanisms, characterizing traffic classes according to IEEE 802.1Q and IEEE 802.1 Audio Video Bridging (AVB). |
Design and realization of two-wheel micro-mouse diagonal dashing | In order to reduce micromouse dashing time in a complex maze and improve the micromouse's stability during high-speed dashing, a diagonal dashing method is proposed. Considering the actual dashing trajectory of the micromouse in a diagonal path, the path is decomposed into three different trajectories. Fully accounting for the turning-in and turning-out of the micromouse's dashing action in the diagonal, the leading and passing of every turn are used to adjust the micromouse's posture; with the help of the ADXL202 accelerometer sensor, rotation angle error compensation is performed and the micromouse achieves precise position correction. For diagonal dashing, the front sensors S1 and S6 and the ADXL202 accelerometer sensor are used to maintain the micromouse's dashing posture. The principle of the new diagonal dashing method is verified on a micromouse based on the STM32F103. Dashing experiments show that the diagonal dashing method can greatly improve the micromouse's stability and reduce its dashing time in a complex maze. |
Recognition of Urban Sound Events Using Deep Context-Aware Feature Extractors and Handcrafted Features | This paper proposes a method for recognizing audio events in urban environments that combines handcrafted audio features with a deep learning architectural scheme (Convolutional Neural Networks, CNNs), which has been trained to distinguish between different audio context classes. The core idea is to use the CNNs as a method to extract context-aware deep audio features that can offer supplementary feature representations to any soundscape analysis classification task. Towards this end, the CNN is trained on a database of audio samples which are annotated in terms of their respective "scene" (e.g. train, street, park), and then it is combined with handcrafted audio features in an early fusion approach, in order to recognize the audio event of an unknown audio recording. Detailed experimentation proves that the proposed context-aware deep learning scheme, when combined with the typical handcrafted features, leads to a significant performance boosting in terms of classification accuracy. The main contribution of this work is the demonstration that transferring audio contextual knowledge using CNNs as feature extractors can significantly improve the performance of the audio classifier, without need for CNN training (a rather demanding process that requires huge datasets and complex data augmentation procedures). |
Collaborative manufacturing with physical human–robot interaction | Although the concept of industrial cobots dates back to 1999, most present-day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the former case, and to comply with his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position-controlled (and not torque-controlled) robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards and could be certified. |
Automatic Generation of Personas Using YouTube Social Media Data | We develop and implement an approach for automatically generating personas in real time using actual YouTube social media data from a global media corporation. From the organization's YouTube channel, we gather demographic data, customer interactions, and topical interests, leveraging more than 188,000 subscriber profiles and more than 30 million user interactions. We demonstrate that online user data can be used to develop personas in real time using social media analytics. To capture diverse aspects of personas, we collect user data from other social media channels as well and match them with the corresponding user data to generate richer personas. Our results provide corporate insights into competitive marketing, topical interests, and preferred product features for the users of the online news medium. The research implication is that reasonably rich personas can be generated in real time, instead of being the result of a laborious and time-consuming manual development process. |
Hierarchical classification of dynamically varying radar pulse repetition interval modulation patterns | The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. |
Music Recommendation Based on Acoustic Features and User Access Patterns | Music recommendation is receiving increasing attention as the music industry develops venues to deliver music over the Internet. The goal of music recommendation is to present users lists of songs that they are likely to enjoy. Collaborative-filtering and content-based recommendations are two widely used approaches that have been proposed for music recommendation. However, both approaches have their own disadvantages: collaborative-filtering methods need a large collection of user history data and content-based methods lack the ability of understanding the interests and preferences of users. To overcome these limitations, this paper presents a novel dynamic music similarity measurement strategy that utilizes both content features and user access patterns. The seamless integration of them significantly improves the music similarity measurement accuracy and performance. Based on this strategy, recommended songs are obtained by a means of label propagation over a graph representing music similarity. Experimental results on a real data set collected from http://www.newwisdom.net demonstrate the effectiveness of the proposed approach. |
Process and environmental variation impacts on ASIC timing | With each semiconductor process node, the impacts on performance of environmental and semiconductor process variations become a larger portion of the cycle time of the product. Simple guard-banding for these effects leads to increased product development times and uncompetitive products. In addition, traditional static timing methodologies are unable to cope with the large number of permutations of process, voltage, and temperature corners created by these independent sources of variation. In this paper we will discuss the sources of variation; by introducing the concepts of systematic inter-die variation, systematic intra-die variation and intra-die random variation. We will show that by treating these forms of variations differently, we can achieve design closure with less guard-banding than traditional methods. |
The Discriminating Power of Multiplicities in the Lambda-Calculus | The λ-calculus with multiplicities is a refinement of the lazy λ-calculus where the argument in an application comes with a multiplicity, which is an upper bound to the number of its uses. This introduces potential deadlocks in the evaluation. We study the discriminating power of this calculus over the usual λ-terms. We prove in particular that the observational equivalence induced by contexts with multiplicities coincides with the equality of Lévy–Longo trees associated with λ-terms. This is a consequence of the characterization we give of the corresponding observational precongruence, as an intensional preorder involving η-expansion, namely, Ong's lazy Plotkin–Scott–Engeler preorder. |
Facebook and Romantic Relationships: Intimacy and Couple Satisfaction Associated with Online Social Network Use | Online social networks, such as Facebook, have gained immense popularity and potentially affect the way people build and maintain interpersonal relationships. The present study sought to examine time spent on online social networks, as it relates to intimacy and relationship satisfaction experienced in romantic relationships. Results did not find relationships between an individual's usage of online social networks and his/her perception of relationship satisfaction and intimacy. However, the study found a negative relationship between intimacy and the perception of a romantic partner's use of online social networks. This finding may allude to an attributional bias in which individuals are more likely to perceive a partner's usage as negative compared to their own usage. Additionally, it was found that intimacy mediates the relationship between online social network usage and overall relationship satisfaction, which suggests that the level of intimacy experienced in a relationship may serve as a buffer that protects the overall level of satisfaction. |
Medical versus surgical treatment of stable angina pectoris: progress report of a large scale study. | A large scale, prospective, randomized study of surgical v. medical management of disabling angina pectoris is being conducted as a cooperative study among thirteen Veterans Administration hospitals in the U.S.A. A total of 1015 patients have been entered into the study and follow-up data are currently being evaluated. Patient entry into the study was concluded in December 1974. Patient compliance has been acceptable, with only 7% of patients not adhering to their randomization category. Thirty-day operative mortality (1972-1974) in 309 patients was 5.3%. The patient population exhibited a severe degree of coronary disease. There was ECG evidence of prior myocardial infarction in 40%. There were significant obstructive lesions in three major coronary arteries in 51% and significant lesions of the left main coronary artery in 11%. Medical and surgical treatment groups demonstrated no significant differences in objective descriptive characteristics. Mortality in the medical group at 1 year was 8%. Mortality was influenced by several factors including the number of vessels involved, left ventricular function and the presence of left main coronary artery disease. The lowest mortality occurred in patients with single vessel disease and normal LV function, who had a 1-year mortality of 3%. Patients with 3-vessel disease and abnormal LV function exhibited a 14% 1-year mortality. Patients with disease of the left main coronary artery and poor LV function had a 1-year mortality of 37%. Analyses of the results of treatment modalities in sub-groups are currently being performed and will be reported in future publications. |
A longitudinal study of emotional intelligence in graduate nurse anesthesia students | OBJECTIVE
Emotional intelligence (EI) is an important component of success not only in the nurse anesthesia (NA) profession, but as a NA student as well. Using the ability-based EI model, the purpose of this study was to examine the difference in EI between the first semester and last semester of NA training programs.
METHODS
First semester NA students completed the online Mayer-Salovey-Caruso Emotional Intelligence Test V2.0 EI instrument, and then the same students repeated the instrument in their last (7th) semester.
RESULTS
There was a statistically significant correlation between overall EI and long-term overall EI (P = 0.000), the reasoning area and long-term reasoning area (P = 0.035), the experiencing area and long-term experiencing area (P = 0.000), the perceiving branch and long-term perceiving branch (P = 0.000), the using branch and long-term using branch (P = 0.000), and the managing branch and long-term managing branch (P = 0.026). The correlation between the understanding branch and the long-term understanding branch was not statistically significant (P = 0.157). The paired sample t-test demonstrated no statistically significant change (n = 34) in overall EI, the two area scores, or the four branch scores between the first semester and the last semester of a NA training program.
CONCLUSIONS
This longitudinal study shows a lack of EI change in NA students over time. Thus, no change in EI occurs as a result of transitioning through a NA program based on the accrediting body's standardized curriculum, but the results provide useful data to inform future research on the use of EI measures as predictors of NA program success. |
Lessons of Enduring Value: Henry George a Century Later | Henry George, in the judgment of Joseph Schumpeter, was an economist, self taught but, for his time, a century ago, well taught. George's writings can serve mankind constructively today. He wrote brilliantly in showing the destructiveness for human well-being of tariffs which obstruct international trade. His language shows clearly why such impediments to trade wastefully depress levels of living and opportunity. George foresaw some of the more sophisticated reasons why socialism could not be economically successful and also why it would threaten human freedom. Regarding the possibilities of reducing poverty, however, George has not been fully confirmed by a century's experience. But the reasoning that underlies his case for relying on land taxation for government revenue deserves serious attention today. |
Convergent Learning: Do different neural networks learn the same representations? | Recent successes in training large, deep neural networks have prompted active investigation into the representations learned on their intermediate layers. Such research is difficult because it requires making sense of non-linear computations performed by millions of learned parameters, but valuable because it increases our ability to understand current models and training algorithms and thus create improved versions of them. In this paper we investigate the extent to which neural networks exhibit what we call convergent learning, which is when the representations learned by multiple nets converge to a set of features which are either individually similar between networks or where subsets of features span similar low-dimensional spaces. We propose a specific method of probing representations: training multiple networks and then comparing and contrasting their individual, learned representations at the level of neurons or groups of neurons. We begin research into this question by introducing three techniques to approximately align different neural networks on a feature or subspace level: a bipartite matching approach that makes one-to-one assignments between neurons, a sparse prediction and clustering approach that finds one-to-many mappings, and a spectral clustering approach that finds many-to-many mappings. This initial investigation reveals a few interesting, previously unknown properties of neural networks, and we argue that future research into the question of convergent learning will yield many more. 
The insights described here include (1) that some features are learned reliably in multiple networks, yet other features are not consistently learned; (2) that units learn to span low-dimensional subspaces and, while these subspaces are common to multiple networks, the specific basis vectors learned are not; (3) that the representation codes are a mix between a local (single unit) code and slightly, but not fully, distributed codes across multiple units; (4) that the average activation values of neurons vary considerably within a network, yet the mean activation values across different networks converge to an almost identical distribution. |
Hermitian Co-Attention Networks for Text Matching in Asymmetrical Domains | Co-Attentions are highly effective attention mechanisms for text matching applications. Co-Attention enables the learning of pairwise attentions, i.e., learning to attend based on computing word-level affinity scores between two documents. However, text matching problems can exist in either symmetrical or asymmetrical domains. For example, paraphrase identification is a symmetrical task while question-answer matching and entailment classification are considered asymmetrical domains. In this paper, we argue that Co-Attention models in asymmetrical domains require different treatment as opposed to symmetrical domains, i.e., a concept of word-level directionality should be incorporated while learning word-level similarity scores. Hence, the standard inner product in real space commonly adopted in co-attention is not suitable. This paper leverages attractive properties of the complex vector space and proposes a co-attention mechanism based on the complex-valued inner product (Hermitian products). Unlike the real dot product, the dot product in complex space is asymmetric because the first item is conjugated. Aside from modeling and encoding directionality, our proposed approach also enhances the representation learning process. Extensive experiments on five text matching benchmark datasets demonstrate the effectiveness of our approach. |
DiSAN: Directional Self-Attention Network for RNN/CNN-Free Language Understanding | Recurrent neural nets (RNN) and convolutional neural nets (CNN) are widely used on NLP tasks to capture the long-term and local dependencies, respectively. Attention mechanisms have recently attracted enormous interest due to their highly parallelizable computation, significantly less training time, and flexibility in modeling dependencies. We propose a novel attention mechanism in which the attention between elements from input sequence(s) is directional and multi-dimensional (i.e., feature-wise). A light-weight neural net, “Directional Self-Attention Network (DiSAN)”, is then proposed to learn sentence embedding, based solely on the proposed attention without any RNN/CNN structure. DiSAN is only composed of a directional self-attention with temporal order encoded, followed by a multi-dimensional attention that compresses the sequence into a vector representation. Despite its simple form, DiSAN outperforms complicated RNN models on both prediction quality and time efficiency. It achieves the best test accuracy among all sentence encoding methods and improves the most recent best result by 1.02% on the Stanford Natural Language Inference (SNLI) dataset, and shows state-of-the-art test accuracy on the Stanford Sentiment Treebank (SST), Multi-Genre natural language inference (MultiNLI), Sentences Involving Compositional Knowledge (SICK), Customer Review, MPQA, TREC question-type classification and Subjectivity (SUBJ) datasets. |
On Threat Modeling and Mitigation of Medical Cyber-Physical Systems | Medical Cyber Physical Systems (MCPS) are lifecritical networked systems of medical devices. These systems are increasingly used in hospitals to provide high-quality healthcare for patients. However, MCPS also bring concerns about security and safety and new challenges to protect patients from acts of theft or malice. In this paper, we focus our investigation on a thorough understanding of threat modeling in MCPS. We examine the roles of stakeholders and system components and sketch an abstract architecture of a MCPS to demonstrate various threat modeling options. We also discuss possible security techniques and their applicability and utility for the design of secure MCPS. This work forms a basis for understanding threatening conditions in MCPS, and embarks on promising state-of-the-art research trends for addressing MCPS security concerns. |
Preparation and characterization of chitosan/gelatin/PVA hydrogel for wound dressings. | Chitosan (CS)/gelatin (Gel)/polyvinyl alcohol (PVA) hydrogels were prepared by the gamma irradiation method for use in wound dressing applications. Chitosan and gelatin solution was mixed with poly(vinyl alcohol) (PVA) solution at different CS/Gel weight ratios of 1:3, 1:2, 1:1, 2:1 and 3:1. The hydrogels were irradiated at 40 kGy. The structure of the hydrogels was characterized using FT-IR and SEM. The CS/Gel/PVA hydrogels were characterized for physical properties and blood clotting activity. The tensile strength of the CS/Gel/PVA hydrogel was enhanced relative to that of the Gel/PVA hydrogel, with the highest tensile strength reaching 2.2 MPa. All hydrogels showed a good coagulation effect. It took only 5 min for the BCI index to reach 0.032 when the weight ratio of CS/Gel was 1:1, indicating that the hemostatic effect of the hydrogels was optimal. The hydrogels also showed good pH-sensitivity, swelling ability and water evaporation rate. Therefore, this hydrogel shows promising potential to be applied as a wound dressing. |
A phase II trial of chemoradiation therapy with weekly oxaliplatin and protracted infusion of 5-fluorouracil for esophageal cancer | Background Chemoradiation therapy using regimens containing cisplatin and 5-fluorouracil is most commonly used for inoperable cancer of the esophagus. Cisplatin is relatively toxic and is not suitable for many patients. Little data exist on using platinum analogues together with protracted infusion 5-fluorouracil and radiation therapy in the curative setting. Methods Fourteen patients with localised oesophageal cancer suitable for curative chemoradiation therapy were registered on the study. Chemotherapy consisted of 5-fluorouracil 225 mg/m2 daily throughout radiation therapy, with oxaliplatin 60 mg/m2 weekly. The radiation dose was 56 to 60 Gy in 28 to 30 fractions. Results The median age of the patients was 70.5 years. Therapy was associated with excessive grade 3 and 4 non-hematologic toxicity. There was one treatment related death. The median progression-free survival was 31.5 months and median overall survival 32.6 months. Six patients achieved a prolonged complete endoscopic and radiological response. Conclusions Although weekly oxaliplatin in combination with infusional 5-fluorouracil produces durable remissions in esophageal cancer, the regimen used in this trial was not acceptable for routine use. Future protocols should incorporate lower chemotherapy doses. |
Blockchain and Principles of Business Process Re-Engineering for Process Innovation | Blockchain has emerged as one of the most promising and revolutionary technologies of the past years. Companies are exploring implementation of use cases in hope of significant gains in efficiency. However, to achieve the impact hoped for, it is not sufficient to merely replace existing technologies. The current business processes must also be redesigned and innovated to enable realization of the hoped-for benefits. This conceptual paper provides a theoretical contribution on how blockchain technology and smart contracts can potentially, within the framework of the seven principles of business process re-engineering (BPR), enable process innovations. In this paper, we analyze the BPR principles in light of their applicability to blockchain-based solutions. We find these principles to be applicable and helpful in understanding how blockchain technology could enable transformational redesign of current processes. However, the viewpoint taken should be expanded from intra- to inter-organizational processes operating within an ecosystem of separate organizational entities. In such a blockchain-powered ecosystem, smart contracts take on a pivotal role, both as repositories of data and executors of activities. |
A Micromachined Terahertz Waveguide 90 $^{\circ}$ Twist | Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz. |
An associative information retrieval algorithm for a Kanerva-like memory model | This paper presents an associative information retrieval algorithm for a Kanerva-like sparse distributed memory (SDM) model. This memory model is used to implement the associative level of a hierarchical heterogeneous knowledge-base model consisting of multi-levels, starting from an associative level, through to the semantic, rule-based and description-generator level as the top level in the hierarchy. The architecture of knowledge-base was inspired by biological and psychological models. The proposed algorithm retrieves concepts from the associative level based on the similarity between a concept of interest and already stored concepts. The similarity is expressed by a value of the linguistic variable. With this approach it is possible to solve a problem when the inference processes at the semantic level encounter an unknown concept of interest. The algorithm is demonstrated by retrieving concepts that were stored based on the results of psychological experiment. |
Factors affecting incisional complication rates associated with colic surgery in horses: 78 cases (1983-1985). | From May 1, 1983 to April 1, 1985, 142 operations were performed on horses with signs of acute abdominal pain (colic), using a ventral midline incision. Seventy-eight horses lived for at least 15 days after surgery or had acute dehiscence and were included in the study. Seventy horses had surgery once, and 8 horses had surgery 2 or more times. Six-month follow-up evaluation was obtained for 66 horses that had 1 surgery and for 6 horses that had multiple surgeries. Incisional complications included drainage (including infection), acute dehiscence, hernia, and suture sinus formation. The effects of preoperative peritoneal fluid presence, enterotomy or resection, suture material and pattern used in the linea alba, type of skin closure, and use of a sutured-on stent bandage on the incidence of incisional complications were investigated. The incisional infection rate associated with a near-far-far-near suture pattern vs a simple interrupted pattern in the linea alba was the only statistically significant (P < 0.05) difference observed. |
Complementarity, Cognition and Capabilities: Towards an Evolutionary Theory of Production | This book is based on the premise that mainstream economics has become excessively specialized and formalized, entering a state of de facto withdrawal from the study of the economy in favour of exercises in applied mathematics. The editors believe that there is much scope for synergies by engaging in an encounter with economics and the other social sciences. The chapters in this book offer important new contributions to such a development. |
A study of the basic DC-DC converters applied in maximum power point tracking | In Maximum Power Point (MPP) applications, the DC-DC tracker converter is as important as the MPP tracking algorithm. In this paper, the DC-DC Buck, Boost, Buck-Boost, Cúk, Sepic and Zeta converters are analyzed in order to determine which one is most suitable to be applied as a Maximum Power Point Tracker (MPPT). The proposed analysis takes into account the radiation and temperature conditions, as well as the load connected to the photovoltaic module. The comparison among the related converters is based on both analytical and simulation results. |
Symptomatic cimetidine treatment of duodenal and prepyloric ulcers | In a randomized trial, 75 patients with an endoscopically confirmed and symptomatic duodenal (N=50) or prepyloric (N=25) ulcer were allocated to cimetidine treatment (1 g/day) either regularly for four weeks (standard treatment group) or regularly for a minimum of one week and thereafter only until the symptoms were controlled (symptomatic treatment group). The four-week healing frequencies in the standard and symptomatic treatment groups were 72 and 67%, respectively. The difference ±95% confidence limits was 5±21%. Prospective recording of pain revealed that the two treatment regimens were about equally effective in relieving symptoms during weeks 2–4. Patients with unhealed ulcers reported pain during the day and night significantly more often than those with healed ulcers. In the symptomatic treatment group the average patient saved 11 days of cimetidine medication during weeks 2–4. We believe that disappearance of symptoms might be a valuable means of deciding when treatment for peptic ulcers can be discontinued. Provided its efficacy and safety can be further confirmed, symptomatic treatment might become a practical and possibly a money-saving mode of ulcer management, which should also be applicable to the ulcer regimens of tomorrow. |
A Students Attendance System Using QR Code | Smartphones are becoming more preferred companions to users than desktops or notebooks. Knowing that smartphones are most popular with users at the age around 26, using smartphones to speed up the process of taking attendance by university instructors would save lecturing time and hence enhance the educational process. This paper proposes a system that is based on a QR code, which is being displayed for students during or at the beginning of each lecture. The students will need to scan the code in order to confirm their attendance. The paper explains the high level implementation details of the proposed system. It also discusses how the system verifies student identity to eliminate false registrations. Keywords—Mobile Computing; Attendance System; Educational System; GPS |
SPARTACUS: underdiagnosis of chronic daily headache in primary care | Despite the burden of chronic daily headache (CDH), general practitioners' (GPs) ability to recognize it is unknown. This work is a sub-study of a population-based study investigating GPs' knowledge of their CDH patients. Patients diagnosed with CDH through the screening questionnaire were interviewed by their GPs, who indicated whether the subjects were known to them as patients suffering from CDH with medication overuse (MO), CDH without MO, episodic headache (EH), or as non-headache sufferers. Our study showed that 64.37% of CDH sufferers are misdiagnosed by their GPs. However, overusers are better known to GPs. |
Whitening Black-Box Neural Networks | Many deployed learned models are black boxes: given input, returns output. Internal information about the model, such as the architecture, optimisation procedure, or training data, is not disclosed explicitly as it might contain proprietary information or make the system more vulnerable. This work shows that such attributes of neural networks can be exposed from a sequence of queries. This has multiple implications. On the one hand, our work exposes the vulnerability of black-box neural networks to different types of attacks – we show that the revealed internal information helps generate more effective adversarial examples against the black box model. On the other hand, this technique can be used for better protection of private content from automatic recognition models using adversarial examples. Our paper suggests that it is actually hard to draw a line between white box and black box models. |
An Unsupervised Speaker Clustering Technique based on SOM and I-vectors for Speech Recognition Systems | In this paper, we introduce an enhancement for speech recognition systems using an unsupervised speaker clustering technique. The proposed technique is mainly based on I-vectors and the Self-Organizing Map (SOM) neural network. The input to the proposed algorithm is a set of speech utterances. For each utterance, we extract a 100-dimensional I-vector, and then SOM is used to group the utterances by speaker. In our experiments, we compared our technique with Normalized Cross Likelihood Ratio (NCLR) clustering. Results show that the proposed technique reduces the speaker error rate in comparison with NCLR. Finally, we examined the effect of speaker clustering on Speaker Adaptive Training (SAT) in a speech recognition system implemented to test the performance of the proposed technique. The proposed technique reduced the WER compared with clustering speakers using NCLR. |
Cloud computing service models: A comparative study | Cloud computing still suffers from many security issues that require researchers' attention before users can fully trust it. In this paper we explain the security issues attached to each service model: Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). Furthermore, a comparative study is presented for the three service models to evaluate their performance against a range of specific factors, such as their characteristics, the typical level of control granted to the cloud consumer, and provider and consumer activities. |
Corruption Traps and Economic Growth | Corruption is closely related to economic growth. This paper considers the prospects for constructing a model of growth in an economy with bureaucrats who collect bribes from entrepreneurs to maximize their bribe income. The bureaucrats can set their effort level in observing investment projects, and entrepreneurs must pay a bribe to operate their investment project if the bureaucrats observe them. Under bribe rates that are progressive in a project's profitability, corruption can make entrepreneurs choose less productive investment projects to avoid high bribe rates, defined as a reversion of profitability. This reversion can lead an economy to be captured in a corruption trap, where the steady state it reaches depends on its initial conditions, or a corruption collapse, in which a high steady state fails to exist and only a lower steady state survives. A major implication of the analysis is that corruption can explain part of the income disparity between developed countries and poor corrupt countries. |
Wealth Distribution and Imperfect Factor Markets: A Classroom Experiment | The author presents a simple exercise to demonstrate how initial property distribution can affect final wealth patterns in developing areas of the world. The simulation is a variant of the Monopoly board game in which students role play different members of a market in which they each face different rules of credit access and salary patterns. The property distribution and new mortgage rules reflect the reality of many developing areas. The simulation can be completed in one full class period and has proven successful in making students more sensitive to wealth distribution issues. Students have suggested several variations of this simulation to make it applicable across more settings. |
Low Voltage and Current Stress ZVZCS Full Bridge DC–DC Converter Using Center Tapped Rectifier Reset | An improved zero-voltage and zero-current-switching (ZVZCS) full bridge dc-dc converter is proposed based on phase shift control. With an auxiliary center tapped rectifier at the secondary side, an auxiliary voltage source is applied to reset the primary current of the transformer winding. Therefore, zero-voltage switching for the leading leg switches and zero-current switching for the lagging leg switches can be achieved, respectively, without any increase of current and voltage stresses. Since the primary current in the circulating interval for the phase shift full bridge converter is eliminated, the conduction loss in primary switches is reduced. A 1 kW prototype is made to verify the theoretical analysis. |
Goals and Habits in the Brain | An enduring and richly elaborated dichotomy in cognitive neuroscience is that of reflective versus reflexive decision making and choice. Other literatures refer to the two ends of what is likely to be a spectrum with terms such as goal-directed versus habitual, model-based versus model-free or prospective versus retrospective. One of the most rigorous traditions of experimental work in the field started with studies in rodents and graduated via human versions and enrichments of those experiments to a current state in which new paradigms are probing and challenging the very heart of the distinction. We review four generations of work in this tradition and provide pointers to the forefront of the field's fifth generation. |
Principles of Gestalt Psychology | When I first conceived the plan of writing this book I guessed, though I did not know, how much effort it would cost to carry it out, and what demands it would put on a potential reader. And I doubted, not rhetorically but very honestly and sincerely, whether such labour on the part of the author and the reader was justified. I was not so much troubled by the idea of writing another book on psychology in addition to the many books which have appeared during the last ten years, as by the idea of writing a book on psychology. Writing a book for publication is a social act. Is one justified in demanding co-operation of society for such an enterprise? What good can society, or a small fraction of it, at best derive from it? I tried to give an answer to this question, and when now, after having completed the book, I return to this first chapter, I find that the answer which then gave me sufficient courage to start on my long journey has stayed with me to the end. I believed I had found a reason why a book on psychology might do some good. Psychology has split up into so many branches and schools, either ignoring or fighting each other, that even an outsider may have the impression, surely strengthened by the publications "Psychologies of 1925" and "Psychologies of 1930", that the plural "psychologies" should be substituted for the singular. |
Five-Factor Model personality profiles of drug users | BACKGROUND
Personality traits are considered risk factors for drug use, and, in turn, the psychoactive substances impact individuals' traits. Furthermore, there is increasing interest in developing treatment approaches that match an individual's personality profile. To advance our knowledge of the role of individual differences in drug use, the present study compares the personality profile of tobacco, marijuana, cocaine, and heroin users and non-users using the wide spectrum Five-Factor Model (FFM) of personality in a diverse community sample.
METHOD
Participants (N = 1,102; mean age = 57) were part of the Epidemiologic Catchment Area (ECA) program in Baltimore, MD, USA. The sample was drawn from a community with a wide range of socio-economic conditions. Personality traits were assessed with the Revised NEO Personality Inventory (NEO-PI-R), and psychoactive substance use was assessed with systematic interview.
RESULTS
Compared to never smokers, current cigarette smokers score lower on Conscientiousness and higher on Neuroticism. Similar, but more extreme, is the profile of cocaine/heroin users, who score very high on Neuroticism, especially Vulnerability, and very low on Conscientiousness, particularly Competence, Achievement-Striving, and Deliberation. By contrast, marijuana users score high on Openness to Experience, average on Neuroticism, but low on Agreeableness and Conscientiousness.
CONCLUSION
In addition to confirming high levels of negative affect and impulsive traits, this study highlights the links between drug use and low Conscientiousness. These links provide insight into the etiology of drug use and have implications for public health interventions. |