Internal friction in WC-Co hard metals
Abstract Internal friction measurements have been performed on various grades of WC-Co cemented carbides from room temperature to 1000°C. These composite materials exhibit an internal friction spectrum which is mainly composed of a relaxation peak and a high temperature exponential background. The peak appears in the same temperature range where an increase in toughness has been observed and interpreted as being due to a brittle-to-ductile transition of the material. The exponential background can be associated with high temperature creep phenomena. The analysis of the results obtained shows the important role of the cobalt binder phase in the mechanical behaviour of WC-Co.
Automatic Keyword Extraction on Twitter
In this paper, we build a corpus of tweets from Twitter annotated with keywords using crowdsourcing methods. We identify key differences between this domain and others, such as news, that prevent existing approaches to automatic keyword extraction from generalizing well to Twitter data. These differences include the small amount of content in each tweet, the frequent use of lexical variants, and the high variance in the number of keywords per tweet. We propose methods for addressing these issues, which lead to solid improvements on this dataset for this task.
Albatross: Lightweight Elasticity in Shared Storage Databases for the Cloud using Live Data Migration
Database systems serving cloud platforms must support large numbers of applications (or tenants). In addition to managing tenants with small data footprints, different schemas, and variable load patterns, such multitenant data platforms must minimize their operating costs through efficient resource sharing. When deployed over a pay-per-use infrastructure, elastic scaling and load balancing, enabled by low-cost live migration of tenant databases, are critical for tolerating load variations while minimizing operating cost. However, existing databases—relational databases and key-value stores alike—lack low-cost live migration techniques, resulting in heavy performance impact during elastic scaling. We present Albatross, a technique for live migration in a multitenant database serving OLTP-style workloads where the persistent database image is stored in network-attached storage. Albatross migrates the database cache and the state of active transactions to ensure minimal impact on transaction execution while allowing transactions active during migration to continue execution. It also guarantees serializability while ensuring correctness during failures. Our evaluation using two OLTP benchmarks shows that Albatross can migrate a live tenant database with no aborted transactions, negligible impact on transaction latency and throughput both during and after migration, and an unavailability window as low as 300 ms.
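As a concrete illustration of the migration scheme described above, the toy sketch below simulates iteratively copying the warm cache while the source keeps serving, followed by a short final handover of the small remaining delta (and, in the real system, active-transaction state). All names and thresholds (migrate_tenant, final_handover_limit) are illustrative assumptions, not Albatross's actual interfaces.

```python
# A schematic, self-contained sketch of iterative cache migration in the spirit
# of Albatross: the persistent image stays on shared storage, so only the warm
# cache (plus, in the real system, active-transaction state) is shipped.
# All names and thresholds are illustrative.
def migrate_tenant(source_cache, workload_rounds, final_handover_limit=2):
    """source_cache: dict page_id -> value at the source node.
    workload_rounds: batches of pages dirtied while migration is in progress."""
    destination = dict(source_cache)        # phase 1: bulk copy, source keeps serving
    dirty = {}
    for updates in workload_rounds:         # source continues executing transactions
        source_cache.update(updates)
        dirty.update(updates)
        if len(dirty) > final_handover_limit:
            destination.update(dirty)       # phase 2: iterative copy of freshly dirtied pages
            dirty = {}
    destination.update(dirty)               # phase 3: short handover ships the small remainder
    return destination

cache = {1: "a", 2: "b", 3: "c"}
rounds = [{2: "b1"}, {4: "d"}, {2: "b2", 5: "e"}]
print(migrate_tenant(cache, rounds) == cache)   # destination cache matches the source: True
```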
A flexible approach for abstracting and personalizing large business process models
In process-aware information systems (PAISs), different user groups usually have distinct perspectives on the supported business processes and on related business data. Hence, personalized views and proper abstractions of these business processes are needed. However, existing PAISs do not provide adequate mechanisms for creating and visualizing process views and process model abstractions. Usually, process models are displayed to users in exactly the same way as originally modeled. This paper presents a flexible approach for creating personalized views based on parameterizable operations. Respective view creation operations can be flexibly composed to either hide non-relevant process information or to abstract it. Depending on the parameterization of the selected view creation operations, one obtains process views with more or less relaxed properties, e.g., regarding the degree of information loss or the soundness of the resulting model abstractions. Altogether, the realized view concept allows for a more flexible abstraction and visualization of large business process models, satisfying the needs of different user groups.
A randomized phase II study comparing erlotinib versus erlotinib with alternating chemotherapy in relapsed non-small-cell lung cancer patients: the NVALT-10 study.
BACKGROUND Epidermal growth factor receptor tyrosine kinase inhibitors (TKIs) administered concurrently with chemotherapy did not improve outcome in non-small-cell lung cancer (NSCLC). However, in preclinical models and early-phase noncomparative studies, pharmacodynamic separation of chemotherapy and TKIs did show a synergistic effect. PATIENTS AND METHODS A randomized phase II study was carried out in patients with advanced NSCLC who had progressed on or following first-line chemotherapy. Patients received either erlotinib 150 mg daily (monotherapy) or erlotinib 150 mg during 15 days intercalated with four 21-day cycles of docetaxel for squamous (SQ) or pemetrexed for nonsquamous (NSQ) patients (combination therapy). After completion of chemotherapy, erlotinib was continued daily. The primary end point was progression-free survival (PFS). RESULTS Two hundred and thirty-one patients were randomized, 115 in the monotherapy arm and 116 in the combination arm. The adjusted hazard ratio for PFS was 0.76 [95% confidence interval (CI) 0.58-1.02; P = 0.06] and for overall survival (OS) 0.67 (95% CI 0.49-0.91; P = 0.01), favoring the combination arm. This improvement was primarily observed in the NSQ subgroup. Common Toxicity Criteria grade 3+ toxic effects occurred in 20% versus 56%, rash in 7% versus 15%, and febrile neutropenia in 0% versus 6% of patients in the monotherapy and combination arms, respectively. CONCLUSIONS PFS was not significantly different between the arms. OS was significantly improved in the combination arm, an effect restricted to NSQ histology. STUDY REGISTRATION NUMBER NCT00835471.
Chemical and Physical Behavior of Human Hair
At or near their surface, hair fibers have a thick protective cover consisting of six to eight layers of flat, overlapping, scale-like structures called the cuticle or scales, which consist of high-sulfur KAPs, keratin proteins, and structural lipids. The cuticle layers surround the cortex, which contains the major part of the fiber mass. The cortex consists of spindle-shaped cells that are aligned parallel with the fiber axis. Cortical cells consist of both Type I and Type II keratins (IF proteins) and KAP proteins. Coarser hairs often contain one or more loosely packed porous regions called the medulla, located near the center of the fiber. The cell membrane complex, the "glue" that binds or holds all of the cells together, is a highly laminar structure consisting of both structural lipids and proteins. Hair fibers grow in cycles consisting of three distinct stages called anagen (growth), catagen (transition) and telogen (rest). Each stage is controlled by molecular signals/regulators acting first on stem cells, then on the newly formed cells in the bulb, and subsequently higher up in differentiation in the growing fiber. The effects and incidence of hair growth and hair loss (normal and diseased) for both males and females are described in detail. Molecular structures controlling hair fiber curvature (whether a fiber is straight or curly) and the effects of the different structural units of the fiber on stress–strain and swelling behavior are also described in detail.
Internet Web servers: workload characterization and performance implications
This paper presents a workload characterization study for Internet Web servers. Six different data sets are used in the study: three from academic environments, two from scientific research organizations, and one from a commercial Internet provider. These data sets represent three different orders of magnitude in server activity, and two different orders of magnitude in time duration, ranging from one week of activity to one year. The workload characterization focuses on the document type distribution, the document size distribution, the document referencing behavior, and the geographic distribution of server requests. Throughout the study, emphasis is placed on finding workload characteristics that are common to all the data sets studied. Ten such characteristics are identified. The paper concludes with a discussion of caching and performance issues, using the observed workload characteristics to suggest performance enhancements that seem promising for Internet Web servers.
High-resolution MRI (3T-MRI) in diagnosis of wrist pain: is diagnostic arthroscopy still necessary?
3T MRI has become increasingly available and provides better imaging of the interosseous ligaments, the TFCC, and avascular necrosis than 1.5T MRI. This study assesses the sensitivity and specificity of 3T MRI compared with arthroscopy as the gold standard. Eighteen patients were examined with 3T MRI using coronal T1-TSE; PD-FS; and coronal, sagittal, and axial contrast-enhanced T1-FFE-FS sequences. Two musculoskeletal radiologists evaluated the images independently. Patients then underwent diagnostic arthroscopy. The classifications of the cartilage lesions showed good correlations with the arthroscopy findings (κ = 0.8–0.9). In contrast to arthroscopy, the cartilage of the distal carpal row was well visualized and could be evaluated in all patients on MRI. The sensitivity for TFCC lesions was 83%, and the specificity was 42% (radiologist 1) and 63% (radiologist 2). For the ligament lesions, the sensitivity and specificity were 75% and 100%, respectively, with high interobserver agreement (κ = 0.8–0.9). 3T MRI proved to be of good value in diagnosing cartilage lesions, especially in the distal carpal row, whereas wrist arthroscopy additionally provided therapeutic options. When surgical treatment is being considered, 3T MRI is a good tool for pre-operatively evaluating the cartilage of the distal carpal row.
Role of Transmitted Gag CTL Polymorphisms in Defining Replicative Capacity and Early HIV-1 Pathogenesis
Initial studies of 88 transmission pairs in the Zambia Emory HIV Research Project cohort demonstrated that the number of transmitted HLA-B associated polymorphisms in Gag, but not Nef, was negatively correlated to set point viral load (VL) in the newly infected partners. These results suggested that accumulation of CTL escape mutations in Gag might attenuate viral replication and provide a clinical benefit during early stages of infection. Using a novel approach, we have cloned gag sequences isolated from the earliest seroconversion plasma sample from the acutely infected recipient of 149 epidemiologically linked Zambian transmission pairs into a primary isolate, subtype C proviral vector, MJ4. We determined the replicative capacity (RC) of these Gag-MJ4 chimeras by infecting the GXR25 cell line and quantifying virion production in supernatants via a radiolabeled reverse transcriptase assay. We observed a statistically significant positive correlation between RC conferred by the transmitted Gag sequence and set point VL in newly infected individuals (p = 0.02). Furthermore, the RC of Gag-MJ4 chimeras also correlated with the VL of chronically infected donors near the estimated date of infection (p = 0.01), demonstrating that virus replication contributes to VL in both acute and chronic infection. These studies also allowed for the elucidation of novel sites in Gag associated with changes in RC, where rare mutations had the greatest effect on fitness. Although we observed both advantageous and deleterious rare mutations, the latter could point to vulnerable targets in the HIV-1 genome. Importantly, RC correlated significantly (p = 0.029) with the rate of CD4+ T cell decline over the first 3 years of infection in a manner that is partially independent of VL, suggesting that the replication capacity of HIV-1 during the earliest stages of infection is a determinant of pathogenesis beyond what might be expected based on set point VL alone.
A Bootstrap Method for Automatic Rule Acquisition on Emotion Cause Extraction
Emotion cause extraction is one of the promising research topics in sentiment analysis, but it has not been well investigated so far. This task enables us to obtain useful information for sentiment classification and possibly to gain further insights about human emotion as well. This paper proposes a bootstrapping technique to automatically acquire conjunctive phrases as textual cue patterns for emotion cause extraction. The proposed method first gathers emotion causes via manually given cue phrases. It then acquires new conjunctive phrases from emotion phrases that contain emotion causes similar to previously gathered ones. In existing studies, the cost of creating comprehensive cue-phrase rules for building emotion cause corpora was high because of their dependence on both the language and the nature of the text. The contribution of our method is its ability to automatically create such corpora from just a few cue phrases as seeds. Our method can expand cue phrases at low cost and acquire a large number of emotion causes of promising quality compared to human annotations.
RITZ-5: randomized intravenous TeZosentan (an endothelin-A/B antagonist) for the treatment of pulmonary edema: a prospective, multicenter, double-blind, placebo-controlled study.
OBJECTIVES The objective of this study was to evaluate the addition of intravenous (IV) tezosentan to standard therapy for patients with pulmonary edema. BACKGROUND Tezosentan is an IV nonselective endothelin (ET)-1 antagonist that yields favorable hemodynamic effects in patients with acute congestive heart failure (CHF). METHODS Pulmonary edema was defined as acute CHF leading to respiratory failure, as evidenced by an oxygen saturation (SO(2)) <90% by pulse oximeter despite oxygen treatment. All patients received oxygen 8 l/min through a face mask, 3 mg of IV morphine, 80 mg of furosemide, and 1 to 3 mg/h continuous-drip isosorbide-dinitrate according to their blood pressure level, and were randomized to receive placebo or tezosentan (50 or 100 mg/h) for up to 24 h. RESULTS Eighty-four patients were randomized. The primary end point, the change in SO(2) from baseline to 1 h, was 9.1 +/- 6.3% in the placebo arm versus 7.6 +/- 10% in the tezosentan group (p = NS). The incidence of death, recurrent pulmonary edema, mechanical ventilation, and myocardial infarction during the first 24 h of treatment was 19% in both groups. Reduced baseline SO(2), lower echocardiographic ejection fraction, high baseline mean arterial blood pressure (MAP), and inappropriate vasodilation (MAP reduction at 30 min of <5% or >30%) correlated with worse outcomes. A post-hoc analysis revealed that the outcome of patients who received only 50 mg/h tezosentan was better than that of the placebo group, whereas patients receiving 100 mg/h had the worst outcomes. CONCLUSIONS In the present study, tezosentan (an ET-1 antagonist) did not affect the outcome of pulmonary edema, possibly because of the high dose used.
Knowledge Management Process : a theoretical-conceptual research
Abstract: Knowledge Management (KM) is a subject that has aroused the interest of many researchers in recent decades, with a large part of the contributions organized into stages referred to as the KM process. Because it is a broad theme, publications on the KM process draw on multidisciplinary contributions. This research therefore aims to conceptualize this process, analyzing the main approaches that guide the study of each stage, and to survey the main publications on the subject, classifying them according to their area of contribution. To reach these goals, the article follows a theoretical-conceptual research design in which 71 articles were studied. The results indicate that the KM process consists of four stages: acquisition, storage, distribution, and use of knowledge. In the acquisition phase, the themes studied are organizational learning, knowledge absorption, the creative process, and knowledge transformation. In the storage phase, the contributions deal with the individual, the organization, and information technology, while in the distribution phase the studies concentrate on social contact, communities of practice, and sharing via information technology. Finally, in the use phase, the themes addressed are forms of use, dynamic capability, and knowledge retrieval and transformation. Keywords: Knowledge management process; knowledge acquisition; knowledge storage; knowledge distribution; knowledge use; theoretical-conceptual research.
MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING
Music and the Moving Image Conference, May 27th–29th, 2016 1. Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and filmmaking continue to evolve, the fundamental nature of story-telling remains the same. Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with a performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, highlighting the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for a more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response, and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”—begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stilwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film-music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic.
“Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in medias res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography: Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stilwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: Indiana University Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184-202. Berkeley: University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is the way Dario Marianelli’s original score dissolves the boundaries between diegetic and non-diegetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first-person character in the world of the film and in the shoes of a third-person viewer aware of the underscore as a hallmark of the fiction of the film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest-growing forms of digital media today: video games. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation.
In fact, the growing trend towards hyperrealism and virtual reality intentionally and progressively erodes the boundaries between the first-person agent in the real world and the agent on screen in the digital world. Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for the way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as an opportunity to expand their professional networks. Meanwhile, more renowned composers saw freelancers as a means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car
Drought disturbance from climate change: response of United States forests.
Predicted changes in climate have raised concerns about potential impacts on terrestrial forest ecosystem productivity, biogeochemical cycling, and the availability of water resources. This review summarizes characteristics of drought typical to the major forest regions of the United States, future drought projections, and important features of plant and forest community response to drought. Research needs and strategies for coping with future drought are also discussed. Notwithstanding uncertainties surrounding the magnitude and direction of future climate change, and the net impact on soil water availability to forests, a number of conclusions can be made regarding the sensitivity of forests to future drought. The primary response will be a reduction in net primary production and stand water use, which are driven by reductions in stomatal conductance. Mortality of small stature plants (i.e. seedlings and saplings) is a likely consequence of severe drought. In comparison, deep rooting and substantial reserves of carbohydrates and nutrients make mature trees less susceptible to water limitations caused by severe or prolonged drought. However, severe or prolonged drought may render even mature trees more susceptible to insects or disease. Drought-induced reductions in decomposition rates may cause a buildup of organic material on the forest floor, with ramifications for fire regimes and nutrient cycling. Although early model predictions of climate change impacts suggested extensive forest dieback and species migration, more recent analyses suggest that catastrophic dieback will be a local phenomenon, and changes in forest composition will be a relatively gradual process. Better climate predictions at regional scales, with a higher temporal resolution (months to days), coupled with carefully designed, field-based experiments that incorporate multiple driving variables (e.g. temperature and CO2), will advance our ability to predict the response of different forest regions to climate change.
Has My Algorithm Succeeded? An Evaluator for Human Pose Estimators
[Abstract not available: the extracted text contained only figure labels ("Example classifications (test set)"; "Human Pose Estimation (HPE) Algorithm Input") and references to the pose estimators evaluated (Andriluka et al., CVPR 2009; Eichner et al., IJCV 2012; Sapp et al., ECCV 2010; Yang and Ramanan, CVPR 2011).]
Appearance-and-Relation Networks for Video Classification
Spatiotemporal feature learning in videos is a fundamental problem in computer vision. This paper presents a new architecture, termed the Appearance-and-Relation Network (ARTNet), to learn video representations in an end-to-end manner. ARTNets are constructed by stacking multiple generic building blocks, called SMART blocks, whose goal is to simultaneously model appearance and relation from RGB input in a separate and explicit manner. Specifically, SMART blocks decouple the spatiotemporal learning module into an appearance branch for spatial modeling and a relation branch for temporal modeling. The appearance branch is implemented based on the linear combination of pixels or filter responses in each frame, while the relation branch is designed based on the multiplicative interactions between pixels or filter responses across multiple frames. We perform experiments on three action recognition benchmarks: Kinetics, UCF101, and HMDB51, demonstrating that SMART blocks obtain an evident improvement over 3D convolutions for spatiotemporal feature learning. Under the same training setting, ARTNets achieve superior performance on these three datasets to existing state-of-the-art methods.
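To make the SMART-block idea above concrete, here is a minimal PyTorch-style sketch in which an appearance branch applies per-frame (1x3x3) convolutions and a relation branch approximates multiplicative interactions by squaring 3D convolutional responses. The class name, channel sizes, and the way the two branches are combined are illustrative assumptions; the published block differs in detail.

```python
# A minimal sketch of a SMART-style block, assuming PyTorch; layer sizes and the
# squaring-based relation branch are illustrative, not the published architecture.
import torch
import torch.nn as nn

class SmartBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        # Appearance branch: per-frame spatial convolution (kernel 1x3x3).
        self.appearance = nn.Conv3d(c_in, c_out, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Relation branch: spatiotemporal convolution whose responses are squared,
        # yielding multiplicative interactions across frames.
        self.relation = nn.Conv3d(c_in, c_out, kernel_size=(3, 3, 3), padding=1)
        self.reduce = nn.Conv3d(2 * c_out, c_out, kernel_size=1)
        self.bn = nn.BatchNorm3d(c_out)

    def forward(self, x):                       # x: (N, C, T, H, W)
        app = self.appearance(x)                # appearance / spatial cues
        rel = self.relation(x) ** 2             # squared responses model relations
        return torch.relu(self.bn(self.reduce(torch.cat([app, rel], dim=1))))

clip = torch.randn(2, 3, 8, 32, 32)             # 2 clips, 3 channels, 8 frames, 32x32
print(SmartBlock(3, 16)(clip).shape)            # torch.Size([2, 16, 8, 32, 32])
```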
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, as a major breakthrough in the field, deep learning has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a "black-box" solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources that make deep learning in remote sensing ridiculously simple to start with. More importantly, we advocate that remote sensing scientists bring their expertise into deep learning and use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization.
The Open Provenance Model core specification (v1.1)
The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.
The Impact of Modulated Color Light on the Autonomic Nervous System
The purpose of this study is to evaluate the impact of modulated light projections, perceived through the eyes, on the autonomic nervous system (ANS). Three types of light projections, each containing both specific colors and specific modulations in the brainwave frequency range, were tested, in addition to a placebo projection consisting of non-modulated white light. Evaluation was done using a combination of physiological measures (HR, HRV, SC) and psychological tests (Amen, POMS). Significant differences were found in the ANS effects of each of the colored light projections, and also between the colored and white projections.
ConfErr: A tool for assessing resilience to human configuration errors
We present ConfErr, a tool for testing and quantifying the resilience of software systems to human-induced configuration errors. ConfErr uses human error models rooted in psychology and linguistics to generate realistic configuration mistakes; it then injects these mistakes and measures their effects, producing a resilience profile of the system under test. The resilience profile, capturing succinctly how sensitive the target software is to different classes of configuration errors, can be used for improving the software or to compare systems to each other. ConfErr is highly portable, because all mutations are performed on abstract representations of the configuration files. Using ConfErr, we found several serious flaws in the MySQL and Postgres databases, Apache web server, and BIND and djbdns name servers; we were also able to directly compare the resilience of functionally-equivalent systems, such as MySQL and Postgres.
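The spirit of the error model described above, generating realistic slips such as fat-finger substitutions, omissions, and transpositions on configuration text, can be sketched in a few lines. The abbreviated keyboard-adjacency table and all names below are illustrative assumptions; ConfErr's actual psychological error models and mutation operators are richer.

```python
# A sketch of typo-style configuration-error injection in the spirit of ConfErr;
# the keyboard-adjacency table is abbreviated and all names are illustrative.
ADJACENT = {"a": "qwsz", "e": "wrd", "o": "ip0l", "1": "2q", "8": "79u"}

def typo_mutations(line):
    """Yield single-character substitution, omission and transposition mistakes."""
    for i, ch in enumerate(line):
        for alt in ADJACENT.get(ch.lower(), ""):
            yield line[:i] + alt + line[i + 1:]              # fat-finger substitution
        yield line[:i] + line[i + 1:]                         # omission
        if i + 1 < len(line):
            yield line[:i] + line[i + 1] + ch + line[i + 2:]  # transposition

# Each mutant would then be injected into a config file and the target system restarted.
for mutant in list(typo_mutations("max_connections = 180"))[:5]:
    print(mutant)
```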
Exploring Multiple Execution Paths for Malware Analysis
Malicious code (or Malware) is defined as software that fulfills the deliberately harmful intent of an attacker. Malware analysis is the process of determining the behavior and purpose of a given Malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques and removal tools. Currently, Malware analysis is mostly a manual process that is tedious and time-intensive. To mitigate this problem, a number of analysis tools have been proposed that automatically extract the behavior of an unknown program by executing it in a restricted environment and recording the operating system calls that are invoked. The problem of dynamic analysis tools is that only a single program execution is observed. Unfortunately, however, it is possible that certain malicious actions are only triggered under specific circumstances (e.g., on a particular day, when a certain file is present, or when a certain command is received). In this paper, we propose a system that allows us to explore multiple execution paths and identify malicious actions that are executed only when certain conditions are met. This enables us to automatically extract a more complete view of the program under analysis and identify under which circumstances suspicious actions are carried out. Our experimental results demonstrate that many Malware samples show different behavior depending on input read from the environment. Thus, by exploring multiple execution paths, we can obtain a more complete picture of their actions.
Generalized 2D principal component analysis for face image representation and recognition
In the tasks of image representation, recognition and retrieval, a 2D image is usually transformed into a 1D long vector and modelled as a point in a high-dimensional vector space. This vector-space model brings much convenience and many advantages. However, it also leads to problems such as the Curse of Dimensionality dilemma and the Small Sample Size problem, and thus poses a series of challenges, for example, how to deal with numerical instability in image recognition, how to improve accuracy while lowering the computational complexity and storage requirements in image retrieval, and how to enhance image quality while reducing transmission time in image transmission. In this paper, these problems are solved, to some extent, by the proposed Generalized 2D Principal Component Analysis (G2DPCA). G2DPCA overcomes the limitations of the recently proposed 2DPCA (Yang et al., 2004) in the following respects: (1) the essence of 2DPCA is clarified and a theoretical proof of why 2DPCA is better than Principal Component Analysis (PCA) is given; (2) 2DPCA often needs many more coefficients than PCA to represent an image; in this work, a Bilateral-projection-based 2DPCA (B2DPCA) is proposed to remedy this drawback; (3) a Kernel-based 2DPCA (K2DPCA) scheme is developed and the relationship between K2DPCA and KPCA (Scholkopf et al., 1998) is explored. Experimental results in face image representation and recognition show the excellent performance of G2DPCA.
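A minimal numpy sketch of the bilateral-projection idea (B2DPCA) mentioned above: each image is projected from both sides, X_reduced = L^T (X - mean) R, so far fewer coefficients are needed than with one-sided projection. The data, reduced dimensions, and function name are illustrative toy choices, not the paper's experimental setup.

```python
# A minimal numpy sketch of bilateral-projection 2DPCA (B2DPCA); data and
# reduced dimensions below are illustrative toy values.
import numpy as np

def b2dpca(images, k_rows, k_cols):
    """images: array (n, h, w). Returns left/right projection matrices and the mean image."""
    mean = images.mean(axis=0)
    centered = images - mean
    scatter_right = sum(x.T @ x for x in centered) / len(images)   # column-direction scatter
    scatter_left = sum(x @ x.T for x in centered) / len(images)    # row-direction scatter
    # Leading eigenvectors of the symmetric scatter matrices (largest eigenvalues first).
    right = np.linalg.eigh(scatter_right)[1][:, ::-1][:, :k_cols]
    left = np.linalg.eigh(scatter_left)[1][:, ::-1][:, :k_rows]
    return left, right, mean

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 32, 24))            # 20 synthetic 32x24 "face" images
L, R, mu = b2dpca(faces, k_rows=8, k_cols=6)
coded = L.T @ (faces[0] - mu) @ R                # each image compressed to an 8x6 matrix
recon = L @ coded @ R.T + mu                     # approximate reconstruction
print(coded.shape, round(float(np.linalg.norm(faces[0] - recon) / np.linalg.norm(faces[0])), 3))
```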
Studies on the diversity of the distinct phylogenetic lineage encompassing Glomus claroideum and Glomus etunicatum
Morphological and molecular characters were analysed to investigate diversity within isolates of the Glomus claroideum/Glomus etunicatum species group in the genus Glomus. The inter- and intra-isolate sequence diversity of the large subunit (LSU) rRNA gene D2 region of eight isolates of G. claroideum and G. etunicatum was studied using PCR-single strand conformational polymorphism (SSCP)-sequencing. In addition, two isolates recently obtained from Southern China were included in the analysis to allow for a wider geographic screening. Single spore DNA isolation confirmed the magnitude of gene diversity found in multispore DNA extractions. An apparent overlap of spore morphological characters was found between G. claroideum and G. etunicatum in some isolates. Analysis of the sequence frequencies in all G. etunicatum and G. claroideum isolates (ten) showed that four LSU D2 sequences, representing 32.1% of the clones analysed for multispore extraction (564) were found to be common to both species, and those sequences were the most abundant in four of the ten isolates analysed. The frequency of these sequences ranged between 23.2% and 87.5% of the clones analysed in each isolate. The implications for the use of phenotypic characters to define species in arbuscular mycorrhizal fungi are discussed. The current position of G. claroideum/G.etunicatum in the taxonomy of the Glomeromycota is also discussed.
Biometrics for Child Vaccination and Welfare: Persistence of Fingerprint Recognition for Infants and Toddlers
With a number of emerging applications requiring biometric recognition of children (e.g., tracking child vaccination schedules, identifying missing children and preventing newborn baby swaps in hospitals), investigating the temporal stability of biometric recognition accuracy for children is important. The persistence of recognition accuracy of three of the most commonly used biometric traits (fingerprints, face and iris) has been investigated for adults. However, persistence of biometric recognition accuracy has not been studied systematically for children in the age group of 0-4 years. Given that very young children are often uncooperative and do not comprehend or follow instructions, in our opinion, among all biometric modalities, fingerprints are the most viable for recognizing children. This is primarily because it is easier to capture fingerprints of young children compared to other biometric traits, e.g., iris, where a child needs to stare directly towards the camera to initiate iris capture. In this report, we detail our initiative to investigate the persistence of fingerprint recognition for children in the age group of 0-4 years. Based on preliminary results obtained for the data collected in the first phase of our study, use of fingerprints for recognition of 0-4 year-old children appears promising.
Reverse ontology matching
Ontology Matching aims to find the semantic correspondences between ontologies that belong to a single domain but that have been developed separately. However, there are still some problem areas to be solved, because experts are still needed to supervise the matching processes and an efficient way to reuse the alignments has not yet been found. We propose a novel technique named Reverse Ontology Matching, which aims to find the matching functions that were used in the original process. The use of these functions is very useful for aspects such as modeling behavior from experts, performing matching-by-example, reverse engineering existing ontology matching tools or compressing ontology alignment repositories. Moreover, the results obtained from a widely used benchmark dataset provide evidence of the effectiveness of this approach.
An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking
We present the results of an evaluation of the performance of the Leap Motion Controller with the aid of a professional, high-precision, fast motion tracking system. A set of static and dynamic measurements was performed with different numbers of tracking objects and configurations. For the static measurements, a plastic arm model simulating a human arm was used. A set of 37 reference locations was selected to cover the controller's sensory space. For the dynamic measurements, a special V-shaped tool, consisting of two tracking objects maintaining a constant distance between them, was created to simulate two human fingers. In the static scenario, the standard deviation was less than 0.5 mm. The linear correlation revealed a significant increase in the standard deviation when moving away from the controller. The results of the dynamic scenario revealed inconsistent performance of the controller, with a significant drop in accuracy for samples taken more than 250 mm above the controller's surface. The Leap Motion Controller undoubtedly represents a revolutionary input device for gesture-based human-computer interaction; however, due to its rather limited sensory space and inconsistent sampling frequency, in its current configuration it cannot be used as a professional tracking system.
STCN: Stochastic Temporal Convolutional Networks
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de facto standard of recurrent neural networks (RNNs), while providing computational and modeling advantages due to their inherent parallelism. However, there currently remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCNs) with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of deterministic and stochastic layers. We show that the proposed architecture achieves state-of-the-art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modeling handwritten text.
PowerCore: a program applying the advanced M strategy with a heuristic search for establishing core sets
MOTIVATION Core sets are necessary to guarantee access to the useful alleles and characteristics retained in genebanks. We have developed a computational tool named 'PowerCore' that supports the development of core sets by reducing the redundancy of useful alleles and thus enhancing their richness. RESULTS The program, using a new approach different from previous methodologies, selects entries of core sets by the advanced M (maximization) strategy implemented through a modified heuristic algorithm. The developed core sets have been validated to retain all characteristics for qualitative traits and all classes for quantitative ones. PowerCore effectively selected the accessions with higher diversity representing the entire coverage of variables and gave a 100% reproducible list of entries whenever repeated. AVAILABILITY PowerCore uses the .NET Framework Version 1.1 environment, which is freely available for the MS Windows platform. The files can be downloaded from http://genebank.rda.go.kr/powercore/. The distribution of the package includes executable programs, sample data and a user manual.
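The coverage goal described above, keeping every class of every trait while trimming redundant accessions, lends itself to a simple greedy illustration. The sketch below is a generic coverage-maximizing heuristic in the spirit of an M-strategy search, with invented accession names and trait classes; it is not PowerCore's exact algorithm.

```python
# A greedy, coverage-maximizing core-set sketch; accession names and trait
# classes are invented, and this is not PowerCore's exact heuristic.
def greedy_core(accessions):
    """accessions: dict name -> set of (trait, class) labels carried by that entry."""
    needed = set().union(*accessions.values())
    core, covered = [], set()
    while covered != needed:
        # Pick the accession that adds the most not-yet-covered trait classes.
        best = max(accessions, key=lambda a: len(accessions[a] - covered))
        if not accessions[best] - covered:
            break                                  # nothing left to gain
        core.append(best)
        covered |= accessions[best]
    return core

accessions = {
    "IT-001": {("height", "tall"), ("seed", "round"), ("color", "white")},
    "IT-002": {("height", "short"), ("color", "white")},
    "IT-003": {("seed", "wrinkled"), ("color", "purple")},
}
print(greedy_core(accessions))   # every trait class remains represented in the core
```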
An AI Approach to Automatic Natural Music Transcription
Automatic music transcription (AMT) remains a fundamental and difficult problem in music information research, and current music transcription systems are still unable to match human performance. AMT aims to automatically generate a score representation from a polyphonic acoustic signal. In our project, we approach the AMT problem on two fronts: acoustic modeling to identify pitches from a frame of audio, and a score generation model that converts exact piano-roll representations of audio into more 'natural' sheet music. We build an end-to-end pipeline that aims to convert .wav classical piano audio files into a 'natural' score representation.
Collaborative Filtering With User-Item Co-Autoregressive Models
Deep neural networks have shown promise in collaborative filtering (CF). However, existing neural approaches are either user-based or item-based, which cannot leverage all the underlying information explicitly. We propose CF-UIcA, a neural co-autoregressive model for CF tasks, which exploits the structural correlation in the domains of both users and items. The co-autoregression allows extra desired properties to be incorporated for different tasks. Furthermore, we develop an efficient stochastic learning algorithm to handle large scale datasets. We evaluate CF-UIcA on two popular benchmarks: MovieLens 1M and Netflix, and achieve state-of-the-art performance in both rating prediction and top-N recommendation tasks, which demonstrates the effectiveness of CF-UIcA.
Efficacy and Safety of Zhuanggu Joint Capsules in Combination with Celecoxib in Knee Osteoarthritis: A Multi-center, Randomized, Double-blind, Double-dummy, and Parallel Controlled Trial
BACKGROUND Knee osteoarthritis (KOA) is a chronic joint disease that manifests as knee pain as well as different degrees of lower limb swelling, stiffness, and movement disorders. The therapeutic goal is to alleviate or eliminate pain, correct deformities, improve or restore joint function, and improve the quality of life. This study aimed to evaluate the efficacy and safety of Zhuanggu joint capsules combined with celecoxib and the benefit of treatment with Zhuanggu alone for KOA. METHODS This multi-center, randomized, double-blind, double-dummy, parallel controlled trial, conducted from December 2011 to May 2014, was carried out in 6 cities, including Beijing, Shanghai, Chongqing, Changchun, Chengdu, and Nanjing. A total of 432 patients with KOA were divided into three groups (144 cases in each group). The groups were treated, respectively, with Zhuanggu joint capsules combined with celecoxib capsule simulants, Zhuanggu joint capsules combined with celecoxib capsules, and celecoxib capsules combined with Zhuanggu joint capsule simulants for 4 consecutive weeks. The improvement of the Western Ontario and McMaster Universities Osteoarthritis (WOMAC) index and the decrease rates in each dimension of WOMAC were evaluated before and after treatment. Intergroup and intragroup comparisons of quantitative indices were performed. Statistically significant differences were evaluated with pairwise comparisons using the Chi-square test (or Fisher's exact test) and an inspection level of α = 0.0167. RESULTS Four weeks after treatment, the total efficacies of the Zhuanggu group, combination group, and celecoxib group were 65%, 80%, and 64%, respectively, with statistically significant differences among the three groups (P = 0.005). Intergroup pairwise comparisons showed that the total efficacy of the combination group was significantly higher than that of the Zhuanggu (P = 0.005) and celecoxib (P = 0.003) groups. The difference between the latter two groups was not statistically significant (P > 0.0167). Four weeks after discontinuation, the efficacies of the three groups were 78%, 95%, and 65%, respectively, with statistically significant differences (P < 0.0001). Intergroup pairwise comparisons revealed that the efficacy of the combination group was significantly better than that of the Zhuanggu and celecoxib groups (P < 0.0001). The difference between the latter two groups was not statistically significant (P > 0.0167). The incidences of adverse events in the Zhuanggu, combination, and celecoxib groups were 8.5%, 8.5%, and 11.1%, respectively, with no significant differences (P > 0.05). CONCLUSIONS Zhuanggu joint capsules alone or combined with celecoxib showed clinical efficacy in the treatment of KOA. The safety of Zhuanggu joint capsules alone or combined with celecoxib was acceptable. TRIAL REGISTRATION Chinese Clinical Trial Registry, ChiCTR-IPR-15007267; http://www.medresman.org/uc/project/projectedit.aspx?proj=1364.
Corpus-independent Generic Keyphrase Extraction Using Word Embedding Vectors
Keyphrase extraction from a given document is a difficult task that requires not only local statistical information but also extensive background knowledge. In this paper, we propose a graph-based ranking approach that uses information supplied by word embedding vectors as the background knowledge. We first introduce a weighting scheme that computes informativeness and phraseness scores of words using the information supplied by both word embedding vectors and local statistics. Keyphrase extraction is performed by constructing a weighted undirected graph for a document, where nodes represent words and edges are co-occurrence relations between two words within a defined window size. The weights of edges are computed by the aforementioned weighting scheme, and a weighted PageRank algorithm is used to compute the final scores of words. Keyphrases are formed in a post-processing stage using heuristics. Our work is evaluated on various publicly available datasets with documents of varying length. We show that the evaluation results are comparable to those of state-of-the-art algorithms, which are typically tuned to a specific corpus to achieve their claimed results.
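As a rough illustration of the pipeline above, the sketch below builds a co-occurrence graph within a sliding window and ranks words with weighted PageRank via networkx. For simplicity, the edge weight is the absolute cosine similarity of the two word vectors, a stand-in for the paper's informativeness/phraseness weighting, and the vectors here are random placeholders for trained embeddings.

```python
# Sketch: embedding-weighted co-occurrence graph + weighted PageRank.
# The cosine-similarity weighting and random vectors are simplifying assumptions.
import numpy as np
import networkx as nx

def rank_words(tokens, vectors, window=4):
    g = nx.Graph()
    for i, u in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            v = tokens[j]
            if u == v or u not in vectors or v not in vectors:
                continue
            a, b = vectors[u], vectors[v]
            w = abs(float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
            prev = g.get_edge_data(u, v, {"weight": 0.0})["weight"]
            g.add_edge(u, v, weight=prev + w)      # accumulate co-occurrence weight
    return nx.pagerank(g, weight="weight") if g.number_of_edges() else {}

rng = np.random.default_rng(0)
tokens = "graph based ranking uses word embedding vectors as background knowledge".split()
vectors = {t: rng.normal(size=50) for t in tokens}
scores = rank_words(tokens, vectors)
print(sorted(scores, key=scores.get, reverse=True)[:3])   # top-ranked candidate words
```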
Towards the Systematic Testing of Aspect-Oriented Programs
The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the “primary functionality” of a software system. Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called “aspects”. To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules. Compiling and running an AOP requires that the aspect code be “woven” into the code. Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults. This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs. The paper also identifies key issues relevant to the systematic testing of AOPs.
Leading a multigenerational nursing workforce: issues, challenges and strategies.
Today's nursing workforce is made up of staff and nursing leaders from four different generational cohorts. Generational diversity, including workforce differences in attitudes, beliefs, work habits, and expectations, has proven challenging for nursing leaders. The purpose of this article is to assist nursing leaders to reframe perceptions about generational differences and to view these differences in attitudes and behaviors as potential strengths. Developing the skill to view generational differences through a different lens will allow leaders to flex their leadership style, enhance quality and productivity, reduce conflict, and maximize the contributions of all staff. This article provides an overview of the generational cohorts and presents strategies that nursing leaders can use to coach, motivate, communicate with, and reduce conflict for each generational cohort of nurses.
Self-Organised Middleware Architecture for the Internet-of-Things
Presently, middleware technologies abound for the Internet-of-Things (IoT), directed at hiding the complexity of underlying technologies and easing the use and management of IoT resources. Today's middleware solutions are capable technologies that provide advanced services and are built on sound architectural models; they nevertheless fall short in some important aspects: existing middleware do not properly establish the link between diverse applications with very different monitoring purposes and the many disparate sensing networks that are heterogeneous in nature and geographically dispersed. Furthermore, current middleware are unfit to provide a system-wide global arrangement (intelligence, routing, data delivery) that emerges from the behaviors of the constituent nodes rather than from the coordination of single elements, i.e. self-organization. This paper presents the SIMPLE self-organized and intelligent middleware platform. SIMPLE innovates over the current state of research precisely by exhibiting self-organization properties, a focus on data dissemination using multi-level subscription processing, and a tiered networking approach able to cope with many disparate, widespread and heterogeneous sensing networks (e.g. WSNs). In this way, the SIMPLE middleware is provided as a robust zero-configuration technology, with no dependence on a central system, immune to failures, and able to efficiently deliver the right data at the right time to the applications that need it.
Illumination-aware faster R-CNN for robust multispectral pedestrian detection
Multispectral images of color-thermal pairs have been shown to be more effective than a single color channel for pedestrian detection, especially under challenging illumination conditions. However, there is still a lack of studies on how to fuse the two modalities effectively. In this paper, we deeply compare six different convolutional network fusion architectures and analyse their adaptations, enabling a vanilla architecture to obtain detection performance comparable to state-of-the-art results. Further, we discover that pedestrian detection confidences from color or thermal images are correlated with illumination conditions. With this in mind, we propose an Illumination-aware Faster R-CNN (IAF R-CNN). Specifically, an Illumination-aware Network is introduced to give an illumination measure of the input image. Then we adaptively merge the color and thermal sub-networks via a gate function defined over the illumination value. The experimental results on the KAIST Multispectral Pedestrian Benchmark validate the effectiveness of the proposed IAF R-CNN.
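As a rough illustration of the gated fusion idea, the sketch below blends a color-stream detection score and a thermal-stream score with a weight produced from a scalar illumination measure. The logistic gate and its parameters are illustrative assumptions, not the trained Illumination-aware Network.

```python
# Sketch: illumination-gated fusion of color and thermal detector scores.
import numpy as np

def gate(illumination, w=6.0, b=-3.0):
    """Map an illumination measure in [0, 1] to a fusion weight in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w * illumination + b)))

def fuse(color_score, thermal_score, illumination):
    g = gate(illumination)                  # high illumination -> trust color more
    return g * color_score + (1.0 - g) * thermal_score

print(fuse(0.9, 0.4, illumination=0.8))     # daytime: dominated by the color score
print(fuse(0.3, 0.7, illumination=0.1))     # night: dominated by the thermal score
```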
TiNA: a scheme for temporal coherency-aware in-network aggregation
This paper presents TiNA, a scheme for minimizing energy consumption in sensor networks by exploiting end-user tolerance to temporal coherency. TiNA utilizes temporal coherency tolerances to both reduce the amount of information transmitted by individual nodes (communication cost dominates power usage in sensor networks), and to improve quality of data when not all sensor readings can be propagated up the network within a given time constraint. TiNA was evaluated against a traditional in-network aggregation scheme with respect to power savings as well as the quality of data for aggregate queries. Preliminary results show that TiNA can reduce power consumption by up to 50% without any loss in the quality of data.
Scheduling Real-Time Transactions: A Performance Evaluation
Managing transactions with real-time requirements presents many new problems. In this paper we address several: How can we schedule transactions with deadlines? How do the real-time constraints affect concurrency control? How should overloads be handled? How does the scheduling of I/O requests affect the timeliness of transactions? How should exclusive and shared locking be handled? We describe a new group of algorithms for scheduling real-time transactions that produce serializable schedules. We present a model for scheduling transactions with deadlines on a single processor disk resident database system, and evaluate the scheduling algorithms through detailed simulation experiments.
Square Root SAM: Simultaneous Localization and Mapping via Square Root Information Smoothing
Solving the SLAM (simultaneous localization and mapping) problem is one way to enable a robot to explore, map, and navigate in a previously unknown environment. Smoothing approaches have been investigated as a viable alternative to extended Kalman filter (EKF)-based solutions to the problem. In particular, approaches have been looked at that factorize either the associated information matrix or the measurement Jacobian into square root form. Such techniques have several significant advantages over the EKF: they are faster yet exact; they can be used in either batch or incremental mode; are better equipped to deal with non-linear process and measurement models; and yield the entire robot trajectory, at lower cost for a large class of SLAM problems. In addition, in an indirect but dramatic way, column ordering heuristics automatically exploit the locality inherent in the geographic nature of the SLAM problem. This paper presents the theory underlying these methods, along with an interpretation of factorization in terms of the graphical model associated with the SLAM problem. Both simulation results and actual SLAM experiments in large-scale environments are presented that underscore the potential of these methods as an alternative to EKF-based approaches. KEY WORDS—mobile robots, SLAM, graphical models
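A toy sketch of the square-root information idea on a linearized least-squares problem ||Ax - b||²: factorize the information matrix (Cholesky) or the measurement Jacobian (QR) and solve by substitution, instead of maintaining a dense covariance as an EKF would. The Jacobian and residuals below are random stand-ins, not a real SLAM problem.

```python
# Sketch: square-root information smoothing via Cholesky or QR factorization.
import numpy as np
from scipy.linalg import cho_factor, cho_solve, qr

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 30))          # measurement Jacobian (m >> n), stand-in
b = rng.normal(size=200)                # stacked measurement residuals, stand-in

# Option 1: Cholesky factor of the information matrix (the "square root" R).
info = A.T @ A
c, low = cho_factor(info)
x_chol = cho_solve((c, low), A.T @ b)

# Option 2: QR of A directly, numerically more robust for ill-conditioned A.
Q, R = qr(A, mode="economic")
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_chol, x_qr))        # both recover the same smoothed estimate
```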
A uniform approach to constraint-solving for lists, multisets, compact lists, and sets
Lists, multisets, and sets are well-known data structures whose usefulness is widely recognized in various areas of computer science. They have been analyzed from an axiomatic point of view with a parametric approach in Dovier et al. [1998], where the relevant unification algorithms have been developed. In this article, we extend these results considering more general constraints, namely, equality and membership constraints and their negative counterparts.
Familial occurrence of imperforate hymen.
Imperforate hymen is uncommon, occurring in 0.1 % of newborn females. Non-syndromic familial occurrence of imperforate hymen is extremely rare and has been reported only three times in the English literature. The authors describe two cases in a family across two generations, one presenting with chronic cyclical abdominal pain and the other acutely. There were no other significant reproductive or systemic abnormalities in either case. Imperforate hymen occurs mostly in a sporadic manner, although rare familial cases do occur. Both the recessive and the dominant modes of transmission have been suggested. However, no genetic markers or mutations have been proven as etiological factors. Evaluating all female relatives of the affected patients at an early age can lead to early diagnosis and treatment in an asymptomatic case.
An efficient closed-form approach to the design of linear-phase FIR digital filters with variable-bandwidth characteristics
This paper deals with the design of variable-bandwidth linear-phase FIR digital filters. Such filters are implemented as a linear combination of fixed-coefficient linear-phase filters, and the variable-bandwidth characteristics are provided by a tuning parameter embedded in the filter structure. These filters are designed in a least-square sense by formulating an error function reflecting the difference between the desired variable-bandwidth filter and the practical filter represented as a linear combination of fixed-coefficient filters in a quadratic form. The filter coefficients are obtained by solving a system of linear equations comprising a block-symmetric positive-definite matrix in which each block is a Toeplitz-plus-Hankel matrix. Consequently, a significant reduction in computational complexity can be achieved in obtaining the entries of this matrix. Moreover, closed-form expressions are provided for both the block-symmetric matrix as well as the vector involved in the system of linear equations.
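To make the Toeplitz-plus-Hankel structure concrete, the sketch below solves one fixed-coefficient least-squares sub-problem of the kind such designs build on: a type-I linear-phase lowpass with amplitude A(w) = sum_n c_n cos(nw), whose normal-equation matrix is exactly a scaled Toeplitz-plus-Hankel matrix. The band edges, the order, and the single-band (non-variable) setting are illustrative assumptions, not the paper's full block formulation.

```python
# Sketch: least-squares type-I linear-phase FIR design; Q = 0.5*(Toeplitz + Hankel).
import numpy as np
from scipy.linalg import toeplitz, hankel

M, wp, ws = 20, 0.3 * np.pi, 0.45 * np.pi       # half-order, passband/stopband edges

def band_integral(k):
    """Integral of cos(k*w) over the passband [0, wp] plus the stopband [ws, pi]."""
    k = np.asarray(k, dtype=float)
    safe = np.where(k == 0, 1.0, k)             # avoid division by zero at k = 0
    return np.where(k == 0, wp + (np.pi - ws),
                    (np.sin(k * wp) - np.sin(k * ws)) / safe)

fk = band_integral(np.arange(2 * M + 1))
Q = 0.5 * (toeplitz(fk[:M + 1]) + hankel(fk[:M + 1], fk[M:]))  # Q[m,n] = 0.5*(f(|m-n|)+f(m+n))
m = np.arange(M + 1)
b = np.where(m == 0, wp, np.sin(m * wp) / np.where(m == 0, 1, m))  # passband target D(w) = 1

c = np.linalg.solve(Q, b)                       # cosine-basis filter coefficients
w = np.linspace(0, np.pi, 512)
A = np.cos(np.outer(w, m)) @ c                  # designed amplitude response
print(np.round(A[:3], 3), np.round(A[-3:], 3))  # ~1 in the passband, ~0 in the stopband
```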
Nd:YAG Laser Treatment of Keloids and Hypertrophic Scars
Pathological cutaneous scars such as keloids and hypertrophic scars (HSs) are characterized by a diffuse redness that is caused by the overgrowth of capillary vessels due to chronic inflammation. Our group has been using long-pulsed, 1064-nm Nd:YAG laser in noncontact mode with low fluence and a submillisecond pulse duration to treat keloids and hypertrophic scars since 2006 with satisfactory results. The present study examined the efficacy of this approach in 22 Japanese patients with keloids (n = 16) or hypertrophic scars (n = 6) who were treated every 3 to 4 weeks. Treatment settings were as follows: 5 mm spot size diameter; 14 J/cm² energy density; 300 μs exposure time per pulse; and 10 Hz repetition rate. The responses of the pathological scars to the treatment were assessed by measuring their erythema, hypertrophy, hardness, itching, and pain or tenderness. Moreover, skin samples from 3 volunteer patients were subjected to histological evaluation and 5 patients underwent thermography during therapy. The average total scar assessment score dropped from 9.86 to 6.34. Hematoxylin and eosin staining and Elastica Masson-Goldner staining showed that laser treatment structurally changed the tissue collagen. This influence reached a depth of 0.5 to 1 mm. Electron microscopy revealed plasma protein leakage, proteoglycan particles, and a change in the collagen fiber fascicles. Further analyses revealed that noncontact mode Nd:YAG laser treatment is highly effective for keloids and hypertrophic scars regardless of patient age, the origin and multiplicity of scarring, the location of the scar(s), or the tension on the scar.
Clustering for Simultaneous Extraction of Aspects and Features from Reviews
This paper presents a clustering approach that simultaneously identifies product features and groups them into aspect categories from online reviews. Unlike prior approaches that first extract features and then group them into categories, the proposed approach combines feature and aspect discovery instead of chaining them. In addition, prior work on feature extraction tends to require seed terms and focus on identifying explicit features, while the proposed approach extracts both explicit and implicit features, and does not require seed terms. We evaluate this approach on reviews from three domains. The results show that it outperforms several state-of-the-art methods on both tasks across all three domains.
A randomized controlled trial of mindfulness meditation versus relaxation training: effects on distress, positive states of mind, rumination, and distraction.
BACKGROUND Although mindfulness meditation interventions have recently shown benefits for reducing stress in various populations, little is known about their relative efficacy compared with relaxation interventions. PURPOSE This randomized controlled trial examines the effects of a 1-month mindfulness meditation versus somatic relaxation training as compared to a control group in 83 students (M age = 25; 16 men and 67 women) reporting distress. METHOD Psychological distress, positive states of mind, distractive and ruminative thoughts and behaviors, and spiritual experience were measured, while controlling for social desirability. RESULTS Hierarchical linear modeling reveals that both meditation and relaxation groups experienced significant decreases in distress as well as increases in positive mood states over time, compared with the control group (p < .05 in all cases). There were no significant differences between meditation and relaxation on distress and positive mood states over time. Effect sizes for distress were large for both meditation and relaxation (Cohen's d = 1.36 and .91, respectively), whereas the meditation group showed a larger effect size for positive states of mind than relaxation (Cohen's d =.71 and .25, respectively). The meditation group also demonstrated significant pre-post decreases in both distractive and ruminative thoughts/behaviors compared with the control group (p < .04 in all cases; Cohen's d = .57 for rumination and .25 for distraction for the meditation group), with mediation models suggesting that mindfulness meditation's effects on reducing distress were partially mediated by reducing rumination. No significant effects were found for spiritual experience. CONCLUSIONS The data suggest that compared with a no-treatment control, brief training in mindfulness meditation or somatic relaxation reduces distress and improves positive mood states. However, mindfulness meditation may be specific in its ability to reduce distractive and ruminative thoughts and behaviors, and this ability may provide a unique mechanism by which mindfulness meditation reduces distress.
Organizational Justice Perceptions in China: Development of the Chinese Organizational Justice Scale
Research analyzing fairness perceptions within organizations has gained the attention of cross-cultural theorists as the criteria used to judge fairness vary across cultures. Review of the literature indicates that researchers use translated Western measures of organizational justice on Eastern samples despite evidence of cultural variation in justice criteria. This dissertation addresses some of the gaps in the current research by developing and validating an indigenous measure of Chinese organizational justice perceptions. A preliminary qualitative study revealed numerous justice rules used by Chinese employees to determine whether a workplace decision was fair. The qualitative results were used to develop the Chinese Organizational Justice Scale (COJS). The COJS and various outcome measures were administered to 307 Chinese employees. The COJS revealed a five-factor model for Chinese organizational justice perceptions, with distributive justice breaking into two factors. The five-factor COJS measurement model indicated excellent fit and psychometric properties and included factors of distributive justice west (equity-based distributions), distributive justice east (distributions based on need, guanxi, and non-performance-related equity criteria), procedural justice, informational justice, and interpersonal justice. Unique Chinese justice criteria were identified for distributive justice and procedural justice.
Representation Learning of Drug and Disease Terms for Drug Repositioning
Drug repositioning (DR) refers to the identification of novel indications for approved drugs. The huge investment of time and money required, together with the risk of failure in clinical trials, has led to a surge of interest in drug repositioning. DR exploits two major aspects associated with drugs and diseases: the existence of similarity among drugs and among diseases due to their shared genes or pathways or common biological effects. Existing methods of identifying drug-disease associations rely mainly on the information available in structured databases. On the other hand, the abundant information available in the form of free text in biomedical research articles is not being fully exploited. Word embedding, i.e., obtaining vector representations of words from a large corpus of free text using neural network methods, has been shown to give significant performance on several natural language processing tasks. In this work we propose a novel representation learning approach to obtain features of drugs and diseases by combining complementary information available in unstructured texts and structured datasets. Next we use a matrix completion approach on these feature vectors to learn a projection matrix between the drug and disease vector spaces. The proposed method shows competitive performance with state-of-the-art methods. Further, case studies on Alzheimer's disease and hypertension show that the predicted associations match existing knowledge.
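A simplified sketch of the projection step: given embedding matrices for drugs and diseases and a known 0/1 association matrix, fit a bilinear map W so that D·W·Sᵀ approximates the associations. The closed-form dense least squares below is a stand-in for the masked matrix-completion objective, and all data are random placeholders.

```python
# Sketch: learn a bilinear projection between drug and disease vector spaces.
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(40, 16))                   # drug feature vectors (hypothetical)
S = rng.normal(size=(25, 16))                   # disease feature vectors (hypothetical)
A = (rng.random(size=(40, 25)) < 0.1).astype(float)   # known drug-disease associations

W = np.linalg.pinv(D) @ A @ np.linalg.pinv(S.T)       # closed-form least squares
scores = D @ W @ S.T                            # predicted association scores
print(scores.shape, round(float(np.abs(scores - A).mean()), 3))
```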
An LDA-Based Approach to Scientific Paper Recommendation
Recommendation of scientific papers is a task aimed at supporting researchers in accessing relevant articles from a large pool of unseen articles. When writing a paper, a researcher focuses on the topics related to her/his scientific domain, using a technical language. The core idea of this paper is to exploit the topics related to the researcher's scientific production (authored articles) to formally define her/his profile; in particular we propose to employ topic modeling to formally represent the user profile, and language modeling to formally represent each unseen paper. The recommendation technique we propose relies on assessing the closeness of the language used in the researcher's papers and the one employed in the unseen papers. The proposed approach exploits a reliable knowledge source for building the user profile, and it alleviates the cold-start problem typical of collaborative filtering techniques. We also present a preliminary evaluation of our approach on DBLP.
B9. Equivalent circuit with frequency-independent lumped elements for plasmonic graphene patch antenna using particle swarm optimization technique
Graphene patch microstrip antennas have been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in the terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-lumped-element equivalent circuit for the graphene patch microstrip antenna has been investigated. The values of the lumped elements are optimized using the particle swarm optimization (PSO) technique. The optimization is performed to minimize the mean square error between the input impedance obtained by the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved equivalent circuit model has been introduced to best fit the input impedance using a rational function and PSO. Cauer's realization method is used to synthesize the new lumped-element equivalent circuits.
Web personalization research: an information systems perspective
Purpose – Web personalization has been studied in different streams of research such as Marketing, Human-Computer Interaction and Computer Science. However, an information systems perspective on web personalization research is scarcely visible in this body of knowledge. This research review seeks to address two important questions: how has web personalization evolved as an integrative discipline? How has web personalization been treated in IS literature and where should researchers focus next? Design/methodology/approach – The paper intently follows an information systems perspective in its thematic classification of web personalization research, which is consistent with the early conceptualization of information systems, by logically mapping IS categories onto web personalization research streams. Articles from 100+ journals were analyzed and important concepts related to web personalization were classified from an information systems perspective. Findings – Surrounding the theme of web personalization, two parallel streams of research evolved. The first stream consisted of internet business models, computer science algorithms and web mining. The second stream focused on human-computer interaction studies, user modelling and targeted marketing. Future information systems researchers in web personalization must focus on four important areas: social media, web development methodologies, emerging Internet-accessing gadgets and domains other than e-Commerce. Originality/value – Web personalization has been studied previously in separate research streams, but no integrated view across these streams exists. Although research interest in web mining has been growing, as evidenced by a growing number of publications, an information systems perspective on web personalization research is scarcely visible in the body of knowledge. The authors intently follow an information systems perspective in their thematic classification of web personalization research, which is consistent with the early conceptualization of information systems, by logically mapping IS categories onto web personalization research streams. This thematic segregation of different research streams into an IS framework makes this study distinct from other early reviews. The authors also identify four important areas where future IS researchers should focus.
Oxidative degradation of chlorophenolic compounds with pyrite-Fenton process.
Batch experiments, in conjunction with chromatographic and spectroscopic measurements, were performed to comparatively investigate the degradation of various chlorophenolic (CP) compounds (e.g., 2-CP, 4-CP, 2,3-DCP, 2,4-DCP, 2,4,6-TCP, 2,3,4,6-TeCP) by a modified Fenton process using pyrite as the catalyst. The batch results show that the CP removal by pyrite-Fenton process was highly dependent on chemical conditions (e.g., pH, CP and pyrite concentration), CP type, number and location of chlorine atoms on the aromatic ring. With the exception of 2,3,4,6-TeCP and 2,3-DCP, the CP removal decreased with increasing the number of chlorine constituents. While the main mechanism responsible for monochlorophenol removal (e.g., 2-CP and 4-CP) was the hydroxyl radical attack on aromatic rings, the CP removal for multichlorophenolic compounds (e.g., 2,3,4,6-TeCP) was driven by both: (1) hydroxyl radical attack on aromatic rings by both solution and surface-bound hydroxyl radicals and (2) adsorption onto pyrite surface sites. The adsorption affinity increased with increasing the number of Cl atoms on the aromatic ring due to enhanced hydrophobic effect. The TOC removal was not 100% complete for all CPs investigated due to formation of chemically less degradable chlorinated intermediate organic compounds as well as low molecular weight organic acids such as formic and acetic acid. Spectroscopic measurements with SEM-EDS, zeta potential and XPS provided evidence for the partial oxidation of pyrite surface Fe(II) and disulfide groups under acidic conditions.
Research on Sustainable Development of Urban Water Conservation
In this paper, the author introduces urban water issues as well as the concept and meaning of sustainable development. The author also proposes countermeasures for the sustainable development of urban water conservation, such as improving the flood control system, strengthening water conservation and water pollution control, and strengthening ecological and environmental protection.
Three-dimensional (3D) modeling generation and provision system based on user-based conditions
The present invention relates to a system for generating and providing three-dimensional (3D) modeling according to user-based conditions. More specifically, the present invention relates to a system which can reduce data waste and time by dynamically providing existing high-precision 3D modeling data as customized modeling data optimized for cultural genres through a database, provides various supplementary services for spatial culture content, and offers broad application expandability to research and development fields requiring real-environment construction, including education, psychiatry, and psychotherapy (such as experiential learning), as well as general cultural content such as movies and games. The system for generating and providing 3D modeling according to user-based conditions comprises: a 3D GIS data store (100); and a data format conversion part (200).
RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection
The early detection and diagnosis of lung cancer are crucial to improving the survival rate of lung cancer patients. Pulmonary nodule detection results have a significant impact on the later diagnosis. In this work, we propose a new network named RUN that completes nodule detection in a single step by bypassing candidate selection. The system introduces the shortcut of the residual network to improve the traditional U-Net, thereby addressing its lack of depth and the resulting poor results. Furthermore, we compare the experimental results with the traditional U-Net. We validate our method on the LUng Nodule Analysis 2016 (LUNA16) Nodule Detection Challenge. We achieve a sensitivity of 90.90% at 2 false positives per scan and therefore achieve better performance than the current state-of-the-art approaches.
A Model-Based Method for Computer-Aided Medical Decision-Making
While MYCIN and PIP were under development at Stanford and Tufts/M.I.T., a group of computer scientists at Rutgers University was developing a system to aid in the evaluation and treatment of patients with glaucoma. The group was led by Professor Casimir Kulikowski, a researcher with extensive background in mathematical and pattern-recognition approaches to computer-based medical decision making (Nordyke et al., 1971), working within the Rutgers Research Resource on Computers in Biomedicine headed by Professor Saul Amarel. Working collaboratively with Dr. Arin Safir, Professor of Ophthalmology, who was then based at the Mt. Sinai School of Medicine in New York City, Kulikowski and Sholom Weiss (a graduate student at Rutgers who went on to become a research scientist there) developed a method of computer-assisted medical decision making that was based on causal-associational network (CASNET) models of disease. Although the work was inspired by the glaucoma domain, the approach had general features that were later refined in the development of the EXPERT system-building tool (see Chapters 18 and 20). A CASNET model consists of three main components: observations of a patient, pathophysiological states, and disease classifications. As observations are recorded, they are associated with the appropriate intermediate states. These states, in turn, are typically causally related, thereby forming a network that summarizes the mechanisms of disease. It is these patterns of states in the network that are linked to individual disease classes.
The Work Design Questionnaire (WDQ): developing and validating a comprehensive measure for assessing job design and the nature of work.
Although there are thousands of studies investigating work and job design, existing measures are incomplete. In an effort to address this gap, the authors reviewed the work design literature, identified and integrated previously described work characteristics, and developed a measure to tap those work characteristics. The resultant Work Design Questionnaire (WDQ) was validated with 540 incumbents holding 243 distinct jobs and demonstrated excellent reliability and convergent and discriminant validity. In addition, the authors found that, although both task and knowledge work characteristics predicted satisfaction, only knowledge characteristics were related to training and compensation requirements. Finally, the results showed that social support incrementally predicted satisfaction beyond motivational work characteristics but was not related to increased training and compensation requirements. These results provide new insight into how to avoid the trade-offs commonly observed in work design research. Taken together, the WDQ appears to hold promise as a general measure of work characteristics that can be used by scholars and practitioners to conduct basic research on the nature of work or to design and redesign jobs in organizations.
Proposal for the integration of decentralised composting of the organic fraction of municipal solid waste into the waste management system of Cuba.
Municipal solid waste (MSW) generation and management in Cuba was studied with a view to integrating composting of the organic fractions of MSW into the system. Composting is already included as part of the environmental strategy of the country as an appropriate waste management solution. However, no programme for area-wide implementation yet exists. The evaluation of studies carried out by some Cuban and international organisations showed that organic matter comprises approximately 60-70% of the MSW, with households being the main source. If all organic waste fractions were considered, the theoretical amount of organic waste produced would be approximately 1 million Mg per year, leading to the production of approximately 0.5 million Mg per year of compost. Composting could, therefore, be a suitable solution for treating the organic waste fractions of the MSW. Composting would best be carried out in decentralised systems, since transportation is a problem in Cuba. Furthermore, low-technology and low-budget composting options should be considered due to the problematic local economic situation. Such decentralised composting units would optimally be located at urban agricultural farms, which can be found all over Cuba. These farms are a unique model for sustainable farming in the world, and have a high demand for organic fertiliser. In this paper, options for collection and impurity separation in urban areas are discussed, and a stepwise introduction of source separation, starting with hotel and restaurant waste, is suggested. For rural areas, the implementation of home composting is recommended.
Semantics-based Graph Approach to Complex Question-Answering
This paper suggests an architectural approach to representing a knowledge graph for complex question-answering. Four kinds of entity relations are added to our knowledge graph: syntactic dependencies, semantic role labels, named entities, and coreference links, which can be effectively applied to answer complex questions. As a proof of concept, we demonstrate how our knowledge graph can be used to solve complex questions such as arithmetic problems. Our experiment shows a promising result on solving arithmetic questions, achieving a 3-fold cross-validation score of 71.75%.
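A toy sketch of such a multi-relation graph: one node per token or entity, edges labelled with their relation type. The annotations below are hand-written stand-ins for the output of a dependency parser, semantic role labeler, NER tagger and coreference resolver, not the paper's pipeline.

```python
# Sketch: knowledge graph with typed edges for simple question answering.
import networkx as nx

g = nx.MultiDiGraph()
sentence = ["Tom", "bought", "3", "apples", "and", "he", "ate", "one"]
g.add_nodes_from(sentence)

g.add_edge("bought", "Tom", relation="dep:nsubj")       # syntactic dependencies
g.add_edge("bought", "apples", relation="dep:obj")
g.add_edge("bought", "Tom", relation="srl:ARG0")        # semantic role labels
g.add_edge("bought", "apples", relation="srl:ARG1")
g.add_edge("3", "3", relation="ner:NUMBER")             # named-entity/number tag
g.add_edge("he", "Tom", relation="coref")               # coreference link

# A question answerer can follow typed edges, e.g. resolve what "he" bought.
antecedent = next(v for _, v, d in g.out_edges("he", data=True) if d["relation"] == "coref")
objects = [v for _, v, d in g.out_edges("bought", data=True) if d["relation"] == "dep:obj"]
print(antecedent, "bought", objects)
```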
The analgesic efficacy of pre-operative bilateral erector spinae plane (ESP) blocks in patients having ventral hernia repair.
Laparoscopic ventral hernia repair is an operation associated with significant postoperative pain, and regional anaesthetic techniques are of potential benefit. The erector spinae plane (ESP) block performed at the level of the T5 transverse process has recently been described for thoracic surgery, and we hypothesised that performing the ESP block at a lower vertebral level would provide effective abdominal analgesia. We performed pre-operative bilateral ESP blocks with 20-30 ml ropivacaine 0.5% at the level of the T7 transverse process in four patients undergoing laparoscopic ventral hernia repair. Median (range) 24-h opioid consumption was 18.7 mg (0.0-43.0 mg) oral morphine. The highest and lowest median (range) pain scores in the first 24 h were 3.5 (3.0-5.0) and 2.5 (0.0-3.0) on an 11-point numerical rating scale. We also performed the block in a fresh cadaver and assessed the extent of injectate spread using computerised tomography. There was radiographic evidence of spread extending cranially to the upper thoracic levels and caudally as far as the L2-L3 transverse processes. We conclude that the ESP block is a promising regional anaesthetic technique for laparoscopic ventral hernia repair and other abdominal surgery when performed at the level of the T7 transverse process. Its advantages are the ability to block both supra-umbilical and infra-umbilical dermatomes with a single-level injection and its relative simplicity.
Quantification of Protein Interactions and Solution Transport Using High-Density GMR Sensor Arrays
Monitoring the kinetics of protein interactions on a high-density sensor array is vital to drug development and proteomic analysis. Label-free kinetic assays based on surface plasmon resonance are the current gold standard, but they have poor detection limits, suffer from non-specific binding, and are not amenable to high-throughput analyses. Here, we show that magnetically responsive nanosensors that have been scaled to over 100,000 sensors per cm² can be used to measure the binding kinetics of various proteins with high spatial and temporal resolution. We present an analytical model that describes the binding of magnetically labelled antibodies to proteins that are immobilized on the sensor surface. This model is able to quantify the kinetics of antibody-antigen binding at sensitivities as low as 20 zeptomoles of solute.
Backing Rich Credentials with a Blockchain PKI
This is the second of a series of papers describing the results of a project whose goal was to identify five remote identity proofing solutions that can be used as alternatives to knowledge-based verification. This paper describes the second solution, which makes use of a rich credential adapted for use on a blockchain and backed by a blockchain PKI. A rich credential, also used in Solution 1, allows the subject to identify him/herself to a remote verifier with which the subject has no prior relationship by presenting verification factors including possession of a private key, knowledge of a password, and possession of one or more biometric features, with selective disclosure of attributes and selective presentation of verification factors. In Solution 2 the issuer is a bank and the biometric verification factor is speaker recognition, which can be combined with face recognition to defeat voice morphing. The paper describes in detail the concept of a blockchain PKI, and shows that it has remarkable advantages over a traditional PKI, notably the fact that revocation checking is performed on the verifier’s local copy of the blockchain without requiring CRLs or OCSP.
Science In The Pleasure Ground: A History of the Arnold Arboretum
One of Boston's most beautiful and treasured outdoor spaces, the Arnold Arboretum is a living museum of trees and shrubs, a public park, and a laboratory for scientific investigation. The unique, intricate garden is admired worldwide as a model for naturalistic landscape architecture. In this generously illustrated volume, Ida Hay provides the first comprehensive history of the Arboretum's pioneering role and contemporary significance in successfully blending scientific endeavors with public recreation and aesthetic display. Her engaging narrative focuses on the lives, contributions, and interrelationships of those who founded and developed the Arboretum, beginning with the grant of land in Jamaica Plain to Harvard University by Benjamin Bussey and the endowment provided by James Arnold. These founding events are set against the background of scientific developments in the biological sciences and popular interest in horticulture and naturalistic landscape design in early nineteenth-century New England. The significant contributions of Charles Sprague Sargent, the first director, and the landscape architect Frederick Law Olmsted provide the heart of the dynamic story behind the design and construction of the Arnold's grounds. Sargent's fifty-year administration established the plan for collecting and displaying its trees and shrubs for the education and pleasure of visitors. He also created and managed the worldwide scientific research of the institution, while simultaneously pursuing an active career in research and writing himself. The interaction of scientific research and public education was embodied in the first director, who created and implemented the dual mission of the Arboretum: "science in the pleasure ground." The leadership of subsequent directors of the Arboretum - Oakes Ames, Elmer D. Merrill, Karl Sax, Richard A. Howard, Peter S. Ashton, Robert E. Cook - reflected new priorities in scientific research and collection policies for the herbarium an
Accelerating Braided B+ Tree Searches on a GPU with CUDA
Previous work has shown that using the GPU as a brute force method for SELECT statements on a SQLite database table yields significant speedups. However, this requires that the entire table be selected and transformed from the B-Tree to row-column format. This paper investigates possible speedups by traversing B+ Trees in parallel on the GPU, avoiding the overhead of selecting the entire table to transform it into row-column format and leveraging the logarithmic nature of tree searches. We experiment with different input sizes, different orders of the B+ Tree, and batch multiple queries together to find optimal speedups for SELECT statements with single search parameters as well as range searches. We additionally make a comparison to a simple GPU brute force algorithm on a row-column version of the B+ Tree.
Neonatal abdominal wall defects.
Gastroschisis and omphalocele are the two most common congenital abdominal wall defects. Both are frequently detected prenatally due to routine maternal serum screening and fetal ultrasound. Prenatal diagnosis may influence timing, mode and location of delivery. Prognosis for gastroschisis is primarily determined by the degree of bowel injury, whereas prognosis for omphalocele is related to the number and severity of associated anomalies. The surgical management of both conditions consists of closure of the abdominal wall defect, while minimizing the risk of injury to the abdominal viscera either through direct trauma or due to increased intra-abdominal pressure. Options include primary closure or a variety of staged approaches. Long-term outcome is favorable in most cases; however, significant associated anomalies (in the case of omphalocele) or intestinal dysfunction (in the case of gastroschisis) may result in morbidity and mortality.
Visual object-action recognition: Inferring object affordances from human demonstration
This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.
An Algorithm for Learning Shape and Appearance Models without Annotations
This paper presents a framework for automatically learning shape and appearance models for medical (and certain other) images. It is based on the idea that having a more accurate shape and appearance model leads to more accurate image registration, which in turn leads to a more accurate shape and appearance model. This leads naturally to an iterative scheme, which is based on a probabilistic generative model that is fit using Gauss-Newton updates within an EM-like framework. It was developed with the aim of enabling distributed privacy-preserving analysis of brain image data, such that shared information (shape and appearance basis functions) may be passed across sites, whereas latent variables that encode individual images remain secure within each site. These latent variables are proposed as features for privacy-preserving data mining applications. The approach is demonstrated qualitatively on the KDEF dataset of 2D face images, showing that it can align images that traditionally require shape and appearance models trained using manually annotated data (manually defined landmarks etc.). It is applied to MNIST dataset of handwritten digits to show its potential for machine learning applications, particularly when training data is limited. The model is able to handle “missing data”, which allows it to be cross-validated according to how well it can predict left-out voxels.
Drugs and drug-like molecules can modulate the function of mucosal-associated invariant T cells
The major-histocompatibility-complex-(MHC)-class-I-related molecule MR1 can present activating and non-activating vitamin-B-based ligands to mucosal-associated invariant T cells (MAIT cells). Whether MR1 binds other ligands is unknown. Here we identified a range of small organic molecules, drugs, drug metabolites and drug-like molecules, including salicylates and diclofenac, as MR1-binding ligands. Some of these ligands inhibited MAIT cells ex vivo and in vivo, while others, including diclofenac metabolites, were agonists. Crystal structures of a T cell antigen receptor (TCR) from a MAIT cell in complex with MR1 bound to the non-stimulatory and stimulatory compounds showed distinct ligand orientations and contacts within MR1, which highlighted the versatility of the MR1 binding pocket. The findings demonstrated that MR1 was able to capture chemically diverse structures, spanning mono- and bicyclic compounds, that either inhibited or activated MAIT cells. This indicated that drugs and drug-like molecules can modulate MAIT cell function in mammals.
Decentralized control: An overview
The paper reviews past and present results in the area of decentralized control of large-scale complex systems. An emphasis is laid on decentralization, decomposition, and robustness. These methodologies serve as effective tools to overcome specific difficulties arising in large-scale complex systems such as high dimensionality, information structure constraints, uncertainty, and delays. Several prospective topics for future research are introduced in this context. The overview is focused on recent decomposition approaches in interconnected dynamic systems due to their potential in providing the extension of decentralized control to networked control systems.
Investigating the Role of Personality Traits and Influence Strategies on the Persuasive Effect of Personalized Recommendations
Recommender systems provide suggestions for products, services, or information that match users’ interests and/or needs. However, not all recommendations persuade users to select or use the recommended item. The Elaboration Likelihood Model (ELM) suggests that individuals with low motivation or ability to process the information provided with a recommended item could eventually get persuaded to select/use the item if appropriate peripheral cues enrich the recommendation. The purpose of this research is to investigate the persuasive effect of certain influence strategies and the role of personality in the acceptance of recommendations. In the present study, a movie recommender system was developed in order to empirically investigate the aforementioned questions by applying certain persuasive strategies in the form of textual messages alongside the recommended item. The statistical method of Fuzzy-Set Qualitative Comparative Analysis (fsQCA) was used for data analysis and the results revealed that motivating messages do change users’ acceptance of the recommended item, but not unconditionally, since the user’s personality differentiates the effect of the persuasive strategies.
A hybrid convolutional neural networks with extreme learning machine for WCE image classification
Wireless Capsule Endoscopy (WCE) is considered a promising technology for non-invasive gastrointestinal disease examination. This paper studies the classification problem of the digestive organs for wireless capsule endoscopy (WCE) images, aiming at saving the review time of doctors. Our previous study has shown that a Convolutional Neural Network (CNN)-based WCE classification system is able to achieve 95% classification accuracy on average, but it is difficult to further improve the classification accuracy owing to the variations among individuals and the complex digestive tract circumstances. Research shows that there are two possible approaches to improve classification accuracy: to extract more discriminative image features and to employ a more powerful classifier. In this paper, we propose to design a WCE classification system by a hybrid CNN with Extreme Learning Machine (ELM). In our approach, we construct the CNN as a data-driven feature extractor and the cascaded ELM as a strong classifier instead of the conventionally used fully-connected classifier in a deep CNN classification system. Moreover, to improve the convergence and classification capability of the ELM under the supervised setting, a new initialization is employed. Our developed WCE image classification system is named HCNN-NELM. With about 1 million real WCE images (25 examinations), intensive experiments are conducted to evaluate its performance. Results illustrate its superior performance compared to traditional classification methods and the conventional CNN-based method, where about 97.25% classification accuracy can be achieved on average.
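A minimal sketch of the ELM classifier stage: a random, untrained hidden projection followed by a closed-form (pseudo-inverse) readout. The 256-dimensional inputs stand in for features taken from a trained CNN; the data, dimensions, and labels below are random placeholders, and the exact fit on random data is only meant to show the mechanics.

```python
# Sketch: Extreme Learning Machine readout on top of (placeholder) CNN features.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 256))                  # CNN feature vectors (stand-in)
y = rng.integers(0, 3, size=500)                 # organ labels (stand-in)
T = np.eye(3)[y]                                 # one-hot targets

n_hidden = 1024
W = rng.normal(size=(256, n_hidden))             # random, untrained input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # hidden-layer activations
beta = np.linalg.pinv(H) @ T                     # output weights in closed form

pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```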
Discovery of Online Shopping Patterns Across Websites
• In the online world, customers can easily navigate to different online stores to make purchases • Market basket analysis is often used to discover associations among products for brick-and-mortar stores, but rarely for online shops • We define online shopping patterns across websites and develop two novel methods to perform market basket analysis across websites • The methods presented in this paper can be applied not only to online shopping, but to other domains as well.
Robust sparse coding for face recognition
Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model actually assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.
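A simplified sketch of iteratively reweighted sparse coding: alternately down-weight samples (pixels) with large residuals and re-solve an l1-regularized coding problem on the reweighted system. The Huber-style weights and the synthetic data below are stand-ins for the weight function and face data used in RSC, and the function name robust_sparse_code is hypothetical.

```python
# Sketch: iteratively reweighted l1 coding for robustness to gross outliers.
import numpy as np
from sklearn.linear_model import Lasso

def robust_sparse_code(y, X, lam=0.01, n_iter=5, delta=1.0):
    w = np.ones_like(y)                          # per-sample weights
    alpha = np.zeros(X.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)[:, None]
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(sw * X, np.sqrt(w) * y)        # weighted l1-regularized fit
        alpha = model.coef_
        r = y - X @ alpha
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))   # Huber-style weights
    return alpha, w

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 40))
alpha_true = np.zeros(40)
alpha_true[:3] = [2.0, -1.5, 1.0]
y = X @ alpha_true + 0.05 * rng.normal(size=100)
y[:5] += 10.0                                    # gross outliers (e.g. occluded pixels)
alpha, w = robust_sparse_code(y, X)
print(np.round(alpha[:5], 2), np.round(w[:5], 2))  # outliers receive small weights
```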
3D Fluid Flow Estimation with Integrated Particle Reconstruction
The standard approach to densely reconstruct the motion in a volume of fluid is to inject high-contrast tracer particles and record their motion with multiple high-speed cameras. Almost all existing work processes the acquired multi-view video in two separate steps: first, a per-frame reconstruction of the particles, usually in the form of soft occupancy likelihoods in a voxel representation; followed by 3D motion estimation, with some form of dense matching between the precomputed voxel grids from different time steps. In this sequential procedure, the first step cannot use temporal consistency considerations to support the reconstruction, while the second step has no access to the original, high-resolution image data. We show, for the first time, how to jointly reconstruct both the individual tracer particles and a dense 3D fluid motion field from the image data, using an integrated energy minimization. Our hybrid Lagrangian/Eulerian model explicitly reconstructs individual particles, and at the same time recovers a dense 3D motion field in the entire domain. Making particles explicit greatly reduces the memory consumption and allows one to use the high-resolution input images for matching, whereas the dense motion field makes it possible to include physical a-priori constraints and account for the incompressibility and viscosity of the fluid. The method exhibits greatly (≈70%) improved results over a recent baseline with two separate steps for 3D reconstruction and motion estimation. Our results with only two time steps are comparable to those of state-of-the-art tracking-based methods that require much longer sequences.
A cross-collection mixture model for comparative text mining
In this paper, we define and study a novel text mining problem, which we refer to as Comparative Text Mining (CTM). Given a set of comparable text collections, the task of comparative text mining is to discover any latent common themes across all collections as well as summarize the similarity and differences of these collections along each common theme. This general problem subsumes many interesting applications, including business intelligence and opinion summarization. We propose a generative probabilistic mixture model for comparative text mining. The model simultaneously performs cross-collection clustering and within-collection clustering, and can be applied to an arbitrary set of comparable text collections. The model can be estimated efficiently using the Expectation-Maximization (EM) algorithm. We evaluate the model on two different text data sets (i.e., a news article data set and a laptop review data set), and compare it with a baseline clustering method also based on a mixture model. Experiment results show that the model is quite effective in discovering the latent common themes across collections and performs significantly better than our baseline mixture model.
High Frequency Trading and Price Discovery
We examine the role of high-frequency traders (HFT) in price discovery and price efficiency. Overall HFT facilitate price efficiency by trading in the direction of permanent price changes and in the opposite direction of transitory pricing errors on average days and the highest volatility days. This is done through their marketable orders. In contrast, HFT liquidity-supplying non-marketable orders are adversely selected in terms of the permanent and transitory components as these trades are in the direction opposite to permanent price changes and in the same direction as transitory pricing errors. HFT predicts price changes in the overall market over short horizons measured in seconds. HFT is correlated with public information, such as macro news announcements, marketwide price movements, and limit order book imbalances. (for internet appendix click: http://goo.gl/vyOEB)
Developmental disorders of the dentition: an update.
Dental anomalies are common congenital malformations that can occur either as isolated findings or as part of a syndrome. This review focuses on genetic causes of abnormal tooth development and the implications of these abnormalities for clinical care. As an introduction, we describe general insights into the genetics of tooth development obtained from mouse and zebrafish models. This is followed by a discussion of isolated as well as syndromic tooth agenesis, including Van der Woude syndrome (VWS), ectodermal dysplasias (EDs), oral-facial-digital (OFD) syndrome type I, Rieger syndrome, holoprosencephaly, and tooth anomalies associated with cleft lip and palate. Next, we review delayed formation and eruption of teeth, as well as abnormalities in tooth size, shape, and form. Finally, isolated and syndromic causes of supernumerary teeth are considered, including cleidocranial dysplasia and Gardner syndrome.
A Method of Preventing Unauthorized Data Transmission in Controller Area Network
There is a strong demand for the security of Controller Area Network (CAN), a major in-vehicle network. A number of methods to detect unauthorized data transmission, such as anomaly detection and misuse detection, have already been proposed. However, all of them have no capability of preventing unauthorized data transmission itself. In this paper, we propose a novel method that realizes the prevention as well as detection. Our method can be effectively implemented with minimal changes in the current architecture of Electronic Control Unit. The method works even in a CAN with multiple buses interconnected by gateways.
The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients
This research examines the case of customers' default payments in Taiwan and compares the predictive accuracy of the probability of default among six data mining methods. From the perspective of risk management, the result of the predictive accuracy of the estimated probability of default is more valuable than the binary result of classifying clients as credible or not credible. Because the real probability of default is unknown, this study presents the novel "Sorting Smoothing Method" to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by the artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and its regression coefficient (B) close to one. Therefore, among the six data mining techniques, the artificial neural network is the only one that can accurately estimate the real probability of default.
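A sketch of a sorting-smoothing style estimate of the "real" probability of default (PD): order validation cases by predicted PD, estimate an actual PD for each case by averaging the observed 0/1 defaults of its 2n+1 sorted neighbours, then regress actual on predicted PD and inspect A, B and R². The simulated data and the window size n are illustrative placeholders, not the paper's dataset or exact settings.

```python
# Sketch: Sorting-Smoothing-style validation of predicted default probabilities.
import numpy as np

rng = np.random.default_rng(5)
pred_pd = np.sort(rng.random(2000))                    # predicted PDs, sorted (stand-in)
defaults = (rng.random(2000) < pred_pd).astype(float)  # simulated 0/1 outcomes

n = 50
kernel = np.ones(2 * n + 1) / (2 * n + 1)
actual_pd = np.convolve(defaults, kernel, mode="same") # smoothed observed default rate

slope, intercept = np.polyfit(pred_pd, actual_pd, 1)   # Y = A + B X
ss_res = np.sum((actual_pd - (intercept + slope * pred_pd)) ** 2)
ss_tot = np.sum((actual_pd - actual_pd.mean()) ** 2)
print(f"A={intercept:.3f}  B={slope:.3f}  R2={1 - ss_res / ss_tot:.3f}")
```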
Differentially Private Releasing via Deep Generative Model (Technical Report).
Privacy-preserving release of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community. Due to the rich semantics of the data and the lack of a priori knowledge about the analysis task, excessive sanitization is often necessary to ensure privacy, leading to significant loss of data utility. In this paper, we present dp-GAN, a general private releasing framework for semantic-rich data. Instead of sanitizing and then releasing the data, the data curator publishes a deep generative model which is trained using the original data in a differentially private manner; with the generative model, the analyst is able to produce an unlimited amount of synthetic data for arbitrary analysis tasks. In contrast to alternative solutions, dp-GAN highlights a set of key features: (i) it provides a theoretical privacy guarantee by enforcing the differential privacy principle; (ii) it retains desirable utility in the released model, enabling a variety of otherwise impossible analyses; and (iii) most importantly, it achieves practical training scalability and stability by employing multi-fold optimization strategies. Through extensive empirical evaluation on benchmark datasets and analyses, we validate the efficacy of dp-GAN. (The source code and the data used in the paper are available at: https://github.com/alps-lab/dpgan)
KNOWROB-MAP - knowledge-linked semantic object maps
Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.
Balancing energy efficiency and quality of aggregate data in sensor networks
In-network aggregation has been proposed as one method for reducing energy consumption in sensor networks. In this paper, we explore two ideas for further reducing energy consumption in the context of in-network aggregation. The first is to influence the construction of the routing trees for sensor networks with the goal of reducing the size of transmitted data. To this end, we propose a group-aware network configuration method that "clusters" along the same path sensor nodes that belong to the same group. The second idea involves imposing a hierarchy of output filters on the sensor network with the goal of both reducing the size of transmitted data and minimizing the number of transmitted messages. More specifically, we propose a framework that uses temporal coherency tolerances in conjunction with in-network aggregation to save energy at the sensor nodes while maintaining a specified quality of data. These tolerances are based on user preferences or can be dictated by the network in cases where the network cannot support the current tolerance level. Our framework, called TiNA, works on top of existing in-network aggregation schemes. We experimentally evaluate our proposed schemes in the context of existing in-network aggregation schemes. We present experimental results measuring energy consumption, response time, and quality of data for Group-By queries. Overall, our schemes provide significant energy savings with respect to communication and a negligible drop in quality of data.
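The temporal coherency tolerance idea can be stated in a few lines of code. The sketch below, with illustrative names and values, suppresses a node's report whenever the new reading is within the tolerance of the last value actually transmitted, which is the general behaviour TiNA-style output filters implement on top of in-network aggregation.

```python
class TemporalCoherencyFilter:
    """Suppress sensor reports whose value is within a tolerance of the last
    value actually transmitted (the general idea behind TiNA-style output
    filters; the class and parameter names are illustrative)."""

    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.last_sent = None

    def update(self, reading):
        """Return the reading if it must be transmitted, otherwise None."""
        if self.last_sent is None or abs(reading - self.last_sent) > self.tolerance:
            self.last_sent = reading
            return reading
        return None

# Example: only 3 of these 6 readings would be transmitted with tolerance 0.5.
f = TemporalCoherencyFilter(tolerance=0.5)
print([f.update(r) for r in [20.0, 20.2, 20.4, 21.1, 21.2, 22.0]])
```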
Accelerating Haskell array codes with multicore GPUs
Current GPUs are massively parallel multicore processors optimised for workloads with a large degree of SIMD parallelism. Good performance requires highly idiomatic programs, whose development is work intensive and requires expert knowledge. To raise the level of abstraction, we propose a domain-specific high-level language of array computations that captures appropriate idioms in the form of collective array operations. We embed this purely functional array language in Haskell with an online code generator for NVIDIA's CUDA GPGPU programming environment. We regard the embedded language's collective array operations as algorithmic skeletons; our code generator instantiates CUDA implementations of those skeletons to execute embedded array programs. This paper outlines our embedding in Haskell, details the design and implementation of the dynamic code generator, and reports on initial benchmark results. These results suggest that we can compete with moderately optimised native CUDA code, while enabling much simpler source programs.
Autonomous Offroad Navigation Under Poor GPS Conditions
This paper describes an approach for autonomous offroad navigation for the Civilian European Landrobot Trial 2009 (C-ELROB). The main focus of this paper is how to cope with poor GPS conditions, such as those occurring in forest environments. The expected and detected problems are stated, and methods for achieving good autonomous driving performance by weighting inaccurate GPS information less heavily are presented. First, an introduction to combining obstacle-avoidance and path-following behavior within the reactive so-called tentacles approach is given. Extensions for utilizing visual information for tentacle evaluation, as well as the integration of geodetic information to gather information on the road network, are presented afterwards. Finally, the results gathered from the successful C-ELROB trials are analyzed.
Granulocyte-macrophage colony-stimulating factor (GM-CSF): what role in bone marrow transplantation?
Infection during the period of bone marrow aplasia remains one of the major risks associated with high-dose chemotherapy and transplantation. Over the past several years, a number of investigators in Europe and North America have evaluated the use of GM-CSF in the setting of autologous bone marrow transplantation. These studies have almost all shown a hastening of myeloid engraftment. This, for the most part, has led to fewer serious infections and a decreased hospital stay for the GM-CSF treated patients. An overall survival advantage has not been noted. There has also not been any consistent multi-lineage effect. Future trials with combinations of sequentially used cytokines may lead to more rapid recovery of red blood cells and platelets in addition to granulocytes.
Abnormal event detection in tourism video based on salient spatio-temporal features and sparse combination learning
With the booming development of tourism, travel security problems are becoming more and more prominent. Congestion, stampedes, fights and other tourism emergency events occur frequently, which should be a wake-up call for tourism security. Therefore, it is of great research value and application prospect to monitor tourists in real time and detect abnormal events in tourism surveillance video by using computer vision and video intelligent processing technology, which can realize the timely forecast and early warning of tourism emergencies. At present, although most video-based abnormal event detection methods work well in simple scenes, they often suffer from low detection rates and high false positive rates in complex motion scenarios, and cannot detect abnormal events in real time. To tackle these issues, we propose an abnormal event detection model for tourism video based on salient spatio-temporal features and sparse combination learning, which has good robustness and timeliness in complex motion scenarios and can be adapted to real-time anomaly detection in practical applications. Specifically, a spatio-temporal gradient model is combined with foreground detection to extract 3D gradient features on the foreground targets of the video sequence as the salient spatio-temporal features, which eliminates interference from the background. A sparse combination learning algorithm is used to establish the abnormal event detection model, which realizes real-time detection of abnormal events. In addition, we construct a new ScenicSpot dataset with 18 video clips (5964 frames) containing both normal and abnormal events. The experimental results on the ScenicSpot dataset and two standard benchmark datasets show that our method can realize the automatic detection and recognition of tourists' abnormal behavior, and has better performance compared with classical methods.
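As a rough illustration of the feature-extraction step, the sketch below computes spatio-temporal (x, y, t) gradient magnitudes over a grayscale video volume and keeps them only on foreground pixels; the function name, the use of np.gradient, and the binary foreground masks are assumptions standing in for the paper's spatio-temporal gradient model and foreground detector.

```python
import numpy as np

def salient_spatiotemporal_gradients(frames, fg_masks):
    """3D gradient magnitudes of a grayscale video volume, masked to the
    foreground so that background motion and texture are ignored."""
    volume = np.stack(frames).astype(np.float32)   # shape (T, H, W)
    gt, gy, gx = np.gradient(volume)               # gradients along t, y, x
    magnitude = np.sqrt(gx ** 2 + gy ** 2 + gt ** 2)
    return magnitude * np.stack(fg_masks).astype(np.float32)

# Toy usage: 4 random frames with a small square marked as foreground.
rng = np.random.default_rng(0)
frames = [rng.uniform(0, 255, (32, 32)) for _ in range(4)]
masks = [np.zeros((32, 32)) for _ in range(4)]
for m in masks:
    m[10:20, 10:20] = 1.0
features = salient_spatiotemporal_gradients(frames, masks)
print(features.shape)  # (4, 32, 32)
```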
Icing flashover characteristics and discharge process of 500 kV AC transmission line suspension insulator strings
The icing flashover characteristics and discharge processes of 500 kV AC transmission line long insulator strings energized during ice accretion were studied. The influence of pollution and grading rings (including a simulated conductor) on the flashover voltage of heavily ice-covered insulators was considered in this paper. Based on the artificial icing test results, it was found that 1) under severe icing conditions and three salt deposit density (SDD) levels of PSDD = 0.1, 0.05 and 0.025 mg/cm2, compared with the 500 kV transmission line rated phase-to-ground voltage, the flashover voltages of 28 units of the XWP2-160 porcelain double-shed insulator are 20.5% lower, 13.1% lower and 2.3% higher, respectively, and the values for the FXBW4-500/160 composite insulator are 18.7% lower, 12.0% lower and 4.3% higher, respectively; 2) equipped with the given grading ring and simulated conductor, the icing flashover stress of 28 units of the XWP2-160 is lower than that without the grading ring and simulated conductor under the three pollution degrees. The discharge process of the icing flashover and the relevant factors were also investigated, which shows that 1) the locations of the air gaps on icing insulators are not completely random; 2) arcs on long insulator strings require more time to reach a critical length, hence arc propagation is more easily influenced by the decrease of melted-water conductivity and by ice shedding than on short insulator strings.
An Empirical Assessment of Refactoring Impact on Software Quality Using a Hierarchical Quality Model
Software refactoring is a collection of reengineering activities that aim to improve software quality. Refactorings are commonly used in agile software processes to improve software quality after a significant software development or evolution. There is a belief that refactoring improves quality factors such as understandability, flexibility, and reusability. However, there is limited empirical evidence to support such assumptions. The aim of this study is to confirm such claims using a hierarchical quality model. We study the effect of software refactoring on software quality. We provide details of our findings as heuristics that can help software developers make more informed decisions about which refactorings to perform in order to improve a particular quality factor. We validate the proposed heuristics in an empirical setting on two open-source systems. We found that the majority of refactoring heuristics do improve quality; however, some heuristics do not have a positive impact on all software quality factors. In addition, we found that the impact analysis of refactorings divides software measures into two categories: high-impact and low-impact measures. These categories help identify the measures best suited to finding refactoring candidates. We validated our findings on two open-source systems, Eclipse and Struts. For both systems, we found consistency between the heuristics and the actual refactorings.
Ethereum transaction graph analysis
Cryptocurrency platforms such as Bitcoin and Ethereum have become more popular due to decentralized control and the promise of anonymity. Ethereum is particularly powerful due to its support for smart contracts, which are implemented through Turing-complete scripting languages, and digital tokens that represent fungible tradable goods. To quantify the promise of anonymity, it is necessary to understand whether de-anonymization is feasible. Cryptocurrencies are increasingly being used in online black markets like Silk Road and in ransomware like CryptoLocker and WannaCry. In this paper, we propose a model for persisting transactions from Ethereum into a graph database, Neo4j. We further propose leveraging graph compute and analytics over the transactions persisted in the graph database.
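A minimal sketch of what persisting Ethereum transactions into Neo4j could look like from Python is shown below, assuming the official neo4j driver (5.x) and an illustrative Account/SENT schema; the paper's exact schema is not given in the abstract, and the addresses and values are placeholders.

```python
from neo4j import GraphDatabase  # official Neo4j Python driver

def persist_transaction(tx, txn):
    """Store one transaction as (:Account)-[:SENT]->(:Account); MERGE keeps
    account nodes unique, CREATE adds one relationship per transaction."""
    tx.run(
        "MERGE (a:Account {address: $sender}) "
        "MERGE (b:Account {address: $recipient}) "
        "CREATE (a)-[:SENT {hash: $hash, value: $value, block: $block}]->(b)",
        sender=txn["from"], recipient=txn["to"],
        hash=txn["hash"], value=txn["value"], block=txn["blockNumber"],
    )

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    session.execute_write(persist_transaction, {
        "from": "0xSenderAddress", "to": "0xRecipientAddress",
        "hash": "0xTxHash", "value": 1.5, "blockNumber": 4000000,
    })
driver.close()
```

With the transactions in the graph, questions such as which addresses transact most often, or whether two addresses are connected by a short path, become ordinary Cypher queries.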
Probabilistic Marching Cubes
In this paper we revisit the computation and visualization of equivalents to isocontours in uncertain scalar fields. We model uncertainty by discrete random fields and, in contrast to previous methods, also take arbitrary spatial correlations into account. Starting with joint distributions of the random variables associated to the sample locations, we compute level crossing probabilities for cells of the sample grid. This corresponds to computing the probabilities that the well-known symmetry-reduced marching cubes cases occur in random field realizations. For Gaussian random fields, only marginal density functions that correspond to the vertices of the considered cell need to be integrated. We compute the integrals for each cell in the sample grid using a Monte Carlo method. The probabilistic ansatz does not suffer from degenerate cases that usually require case distinctions and solutions of ill-conditioned problems. Applications in 2D and 3D, both to synthetic and real data from ensemble simulations in climate research, illustrate the influence of spatial correlations on the spatial distribution of uncertain isocontours.
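For a single grid cell the quantity being computed can be sketched directly: sample the cell's vertex values from their joint Gaussian (so spatial correlation is preserved) and count how often the vertices do not all fall on the same side of the isovalue. The function name and the toy covariance below are illustrative.

```python
import numpy as np

def level_crossing_probability(mean, cov, isovalue, n_samples=10000, rng=None):
    """Monte Carlo estimate of the probability that a cell's vertex values are
    not all on the same side of the isovalue, i.e. that a marching-cubes case
    with a level crossing occurs.  `mean` holds the per-vertex means (4 values
    in 2D, 8 in 3D) and `cov` their joint covariance, so spatial correlation
    between vertices is taken into account."""
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    above = samples > isovalue
    crossing = ~(above.all(axis=1) | (~above).all(axis=1))
    return crossing.mean()

# Toy 2D cell (4 vertices) with positive correlation between vertex values.
mean = np.array([0.2, 0.4, 0.6, 0.8])
cov = 0.25 * np.eye(4) + 0.15 * np.ones((4, 4))
print(level_crossing_probability(mean, cov, isovalue=0.5))
```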
Automated Essay Scoring: A Literature Review
Introduction In recent decades, large-scale English language proficiency testing and testing research have seen an increased interest in constructed-response essay-writing items (Aschbacher, 1991; Powers, Burstein, Chodorow, Fowles, & Kukich, 2001; Weigle, 2002). The TOEFL iBT, for example, includes two constructed-response writing tasks, one of which is an integrative task requiring the test-taker to write in response to information delivered both aurally and in written form (Educational Testing Service, n.d.). Similarly, the IELTS academic test requires test-takers to write in response to a question that relates to a chart or graph that the test-taker must read and interpret (International English Language Testing System, n.d.). Theoretical justification for the use of such integrative, constructed-response tasks (i.e., tasks which require the test-taker to draw upon information received through several modalities in support of a communicative function) dates back to at least the early 1960s. Carroll (1961, 1972) argued that tests which measure linguistic knowledge alone fail to predict the knowledge and abilities that score users are most likely to be interested in, i.e., prediction of actual use of language knowledge for communicative purposes in specific contexts:
DDoS Attacks in Cloud Computing: Issues, Taxonomy, and Future Directions
Security issues related to cloud computing are relevant to various stakeholders for an informed cloud adoption decision. Apart from data breaches, the cyber security research community is revisiting the attack space for cloud-specific solutions, as these issues affect budget, resource management, and service quality. The Distributed Denial of Service (DDoS) attack is one such serious attack in the cloud space. In this paper, we present developments related to DDoS attack mitigation solutions in the cloud. In particular, we present a comprehensive survey with detailed insight into the characterization, prevention, detection, and mitigation mechanisms of these attacks. Additionally, we present a comprehensive solution taxonomy to classify DDoS attack solutions. We also provide a comprehensive discussion of important metrics for evaluating various solutions. This survey concludes that there is a strong requirement for solutions designed with utility computing models in mind. Accurate auto-scaling decisions, multi-layer mitigation, and defense using the profound resources available in the cloud are some of the key requirements of the desired solutions. In the end, we provide definite guidelines on effective solution building and detailed solution requirements to help the cyber security research community in designing defense mechanisms. To the best of our knowledge, this work is a novel attempt to identify the need for DDoS mitigation solutions involving multi-level information flow and effective resource management during an attack.
Modeling visual problem solving as analogical reasoning.
We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level.
Hiding a Needle in a Haystack: Privacy Preserving Apriori algorithm in MapReduce Framework
In the last few years, Hadoop has become a "de facto" standard for processing large-scale data as an open-source distributed system. Combined with data mining techniques, Hadoop improves data analysis utility. Consequently, a substantial amount of research has studied how to apply data mining techniques to the MapReduce framework in Hadoop. However, data mining can cause privacy violations, and this threat is a major obstacle for data mining using Hadoop. Numerous studies have been conducted to solve this problem, but existing studies are insufficient and have several drawbacks. In this paper, we propose a privacy-preserving data mining technique in Hadoop that prevents privacy violations without utility degradation. We focus on association rule mining, a representative data mining algorithm. We validate through experimental results that the proposed technique satisfies performance requirements and preserves data privacy.
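Leaving the paper's privacy mechanism aside, one counting pass of Apriori maps naturally onto MapReduce. The sketch below, with illustrative names and a toy dataset, shows the map phase emitting (candidate itemset, 1) pairs and the reduce phase summing them and applying the minimum-support threshold.

```python
from collections import Counter
from itertools import combinations

def map_phase(transaction, candidates):
    """Emit (candidate_itemset, 1) for every candidate contained in the transaction."""
    items = set(transaction)
    return [(c, 1) for c in candidates if set(c).issubset(items)]

def reduce_phase(emitted, min_support):
    """Sum the counts per candidate and keep only the frequent itemsets."""
    counts = Counter()
    for itemset, one in emitted:
        counts[itemset] += one
    return {c: n for c, n in counts.items() if n >= min_support}

# One Apriori pass over a toy dataset (2-itemset candidates).
transactions = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
items = sorted({i for t in transactions for i in t})
candidates = list(combinations(items, 2))
emitted = [pair for t in transactions for pair in map_phase(t, candidates)]
print(reduce_phase(emitted, min_support=2))
```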
In-session exposure tasks and therapeutic alliance across the treatment of childhood anxiety disorders.
The study examined the shape of therapeutic alliance using latent growth curve modeling and data from multiple informants (therapist, child, mother, father). Children (n = 86) with anxiety disorders were randomized to family-based cognitive-behavioral treatment (FCBT; N = 47) with exposure tasks or to family education, support, and attention (FESA; N = 39). Children in FCBT engaged in exposure tasks in Sessions 9-16, whereas FESA participants did not. Alliance growth curves of FCBT and FESA youths were compared to examine the impact of exposure tasks on the shape of the alliance (between-subjects). Within FCBT, the shape of alliance prior to exposure tasks was compared with the shape of alliance following exposure tasks (within-subjects). Therapist, child, mother, and father alliance ratings indicated significant growth in the alliance across treatment sessions. Initial alliance growth was steep and subsequently slowed over time, regardless of the use of exposure tasks. Data did not indicate a rupture in the therapeutic alliance following the introduction of in-session exposures. Results are discussed in relation to the processes, mediators, and ingredients of efficacious interventions as well as in terms of the dissemination of empirically supported treatments.
A hybrid approach to offloading mobile image classification
Current mobile devices are unable to execute complex vision applications in a timely and power efficient manner without offloading some of the computation. This paper examines the tradeoffs that arise from executing some of the workload onboard and some remotely. Feature extraction and matching play an essential role in image classification and have the potential to be executed locally. Along with advances in mobile hardware, understanding the computation requirements of these applications is essential to realize their full potential in mobile environments. We analyze the ability of a mobile platform to execute feature extraction and matching, and prediction workloads under various scenarios. The best configuration for optimal runtime (11% faster) executes feature extraction with a GPU onboard and offloads the rest of the pipeline. Alternatively, compressing and sending the image over the network achieves lowest data transferred (2.5× better) and lowest energy usage (3.7× better) than the next best option.
A quantitative assessment of heterogeneity for surface-immobilized proteins.
Many biotechnological applications use protein receptors immobilized on solid supports. Although, in solution, these receptors display homogeneous binding affinities and association/dissociation kinetics for their complementary ligand, they often display heterogeneous binding characteristics after immobilization. In this study, a fluorescence-based fiber-optic biosensor was used to quantify the heterogeneity associated with the binding of a soluble analyte, fluorescently labeled trinitrobenzene, to surface-immobilized monoclonal anti-TNT antibodies. The antibodies were immobilized on silica fiber-optic probes via five different immobilization strategies. We used the Sips isotherm to assess and compare the heterogeneity in the antibody binding affinity and kinetic rate parameters for these different immobilization schemes. In addition, we globally analyzed kinetic data with a two-compartment transport-kinetic model to analyze the heterogeneity in the analyte-antibody kinetics. These analyses provide a quantitative tool by which to evaluate the relative homogeneity of different antibody preparations. Our results demonstrate that the more homogeneous protein preparations exhibit more uniform affinities and kinetic constants.
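As a rough illustration of how a Sips analysis proceeds, the sketch below fits the Sips (Langmuir-Freundlich) isotherm to hypothetical equilibrium binding data with scipy; the data values and parameter names are made up, and a heterogeneity index near one indicates an essentially homogeneous (Langmuir-like) antibody population.

```python
import numpy as np
from scipy.optimize import curve_fit

def sips_isotherm(c, q_max, K, a):
    """Sips (Langmuir-Freundlich) isotherm: the heterogeneity index `a` equals
    1 for a homogeneous (Langmuir) surface and drops below 1 as the
    distribution of binding affinities broadens."""
    kc = (K * c) ** a
    return q_max * kc / (1.0 + kc)

# Hypothetical equilibrium binding data: analyte concentration vs. signal.
c = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
signal = np.array([0.05, 0.14, 0.33, 0.55, 0.75, 0.88, 0.95])

(q_max, K, a), _ = curve_fit(sips_isotherm, c, signal, p0=[1.0, 1.0, 1.0])
print(f"q_max={q_max:.2f}, K={K:.2f}, heterogeneity index a={a:.2f}")
```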
Comparing tobacco use among incoming recruits and military personnel on active duty in the United States.
OBJECTIVE To compare the tobacco use profile of recruits with that of military personnel on active duty to determine whether the military environment in some way induces service members to initiate tobacco use. DESIGN AND SETTING Cross-sectional survey of United States armed forces active duty and recruit personnel in 1994-95. SUBJECTS 2711 military recruits and 4603 military personnel on active duty. MAIN OUTCOME MEASURES Comparative cigarette smoking and smokeless tobacco use prevalence between recruits and personnel on active duty controlling for age, sex, and race. Impact of demographic factors on the odds of smoking or using smokeless tobacco. RESULTS Increases in tobacco use in American military personnel occurred exclusively in men. The highest tobacco use resided with white men on active duty (43% cigarette smoking; 24% smokeless tobacco use) and represents a doubling of tobacco use seen among white male recruits. Among non-white men, tobacco use increased 2-4 times between recruits and personnel on active duty. CONCLUSIONS Efforts to reduce tobacco use by American military personnel on active duty should focus more on discouraging the initiation of tobacco use.
Segmentation of stock trading customers according to potential value
In this article, we use three clustering methods (K-means, self-organizing map, and fuzzy K-means) to find properly graded stock market brokerage commission rates based on the three-month total trades of two different transaction modes (representative-assisted and online trading system). Stock traders for both modes are classified in terms of the amount of the total trade as well as the amount of trade of each transaction mode, respectively. Results of our empirical analysis indicate that fuzzy K-means cluster analysis is the most robust approach for segmentation of customers of both transaction modes. We then propose a decision-tree-based rule to classify three groups of customers and suggest different brokerage commission rates of 0.4, 0.45, and 0.5% for the representative-assisted mode and 0.06, 0.1, and 0.18% for the online trading system, respectively.
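A compact version of the two-step procedure, clustering followed by a decision tree that turns the clusters into explicit rules, can be sketched with scikit-learn. Plain K-means stands in here because scikit-learn ships no fuzzy K-means, and the customer features are synthetic placeholders for the three-month trade amounts.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-customer features: [total trade amount, online trade amount,
# representative-assisted trade amount] over three months (arbitrary units).
rng = np.random.default_rng(42)
X = np.vstack([rng.lognormal(mean=m, sigma=0.4, size=(200, 3)) for m in (1.0, 2.0, 3.0)])

# Step 1: segment customers into three potential-value groups.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit a decision tree that reproduces the segmentation as explicit
# rules, which is the form in which graded commission rates can be assigned.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, segments)
print(export_text(tree, feature_names=["total", "online", "assisted"]))
```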
Exploring cross-layer power management for PGAS applications on the SCC platform
High-performance parallel computing architectures are increasingly based on multi-core processors. While current commercially available processors are at 8 and 16 cores, technological and power constraints are limiting the performance growth of the cores and are resulting in architectures with much higher core counts, such as the experimental many-core Intel Single-chip Cloud Computer (SCC) platform. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency. In this paper, we first investigate the power behavior of scientific Partitioned Global Address Space (PGAS) application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance tradeoffs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of insights that can be used to support similar power management for PGAS applications on other many-core platforms.
Spectrum of Retinal Vascular Diseases Associated With Paracentral Acute Middle Maculopathy.
PURPOSE To evaluate the spectrum of retinal diseases that can demonstrate paracentral acute middle maculopathy and isolated ischemia of the intermediate and deep capillary plexus. DESIGN Retrospective, multicenter, observational case series. METHODS This is a retrospective case series review of 9 patients (10 eyes) from 5 centers with paracentral acute middle maculopathy lesions and previously unreported retinal vascular etiologies. Case presentations and multimodal imaging, including color photographs, near-infrared reflectance, fluorescein angiography, spectral-domain optical coherence tomography (SD OCT), and orbital color Doppler imaging, are described. Baseline and follow-up findings are correlated with clinical presentation, demographics, and systemic associations. RESULTS Five men and 4 women, aged 27-66 years, were included. Isolated band-like hyperreflective lesions in the middle retinal layers, otherwise known as paracentral acute middle maculopathy, were observed in all patients at baseline presentation. Follow-up SD OCT analysis of these paracentral acute middle maculopathy lesions demonstrated subsequent thinning of the inner nuclear layer. Novel retinal vascular associations leading to retinal vasculopathy and paracentral acute middle maculopathy include eye compression injury causing global ocular ischemia, sickle cell crisis, Purtscher's retinopathy, inflammatory occlusive retinal vasculitis, post-H1N1 vaccine, hypertensive retinopathy, migraine disorder, and post-upper respiratory infection. CONCLUSION Paracentral acute middle maculopathy lesions may develop in a wide spectrum of retinal vascular diseases. They are best identified with SD OCT analysis and may represent ischemia of the intermediate and deep capillary plexus. These lesions typically result in permanent thinning of the inner nuclear layer and are critical to identify in order to determine the cause of unexplained vision loss.