EvilCoder: automated bug insertion
The art of finding software vulnerabilities has been covered extensively in the literature and there is a huge body of work on this topic. In contrast, the intentional insertion of exploitable, security-critical bugs has received little (public) attention so far. Wanting more bugs seems counterproductive at first sight, but the comprehensive evaluation of bug-finding techniques suffers from a lack of ground truth and the scarcity of bugs. In this paper, we propose EvilCoder, a system to automatically find potentially vulnerable source code locations and modify the source code to be actually vulnerable. More specifically, we leverage automated program analysis techniques to find sensitive sinks which match typical bug patterns (e.g., a sensitive API function with a preceding sanity check) and try to find data-flow connections to user-controlled sources. We then transform the source code such that exploitation becomes possible, for example by removing or modifying input sanitization or other types of security checks. Our tool is designed to randomly pick vulnerable locations and possible modifications, such that it can generate numerous different vulnerabilities on the same software corpus. We evaluated our tool on several open-source projects such as libpng and vsftpd, where we found between 22 and 158 unique connected source-sink pairs per project. This translates to hundreds of potentially vulnerable data-flow paths and hundreds of bugs we can insert. We hope to support future bug-finding techniques by supplying freshly generated, bug-ridden test corpora so that such techniques can (finally) be evaluated and compared in a comprehensive and statistically meaningful way.
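To make the kind of transformation concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's implementation (which, as the abstract notes, relies on automated program analysis rather than text matching), and the function and pattern names are our own; it merely removes a sanity check that guards a sensitive sink, the modification style described above:

```python
import re

# Toy C snippet: a sanity check guarding the sensitive sink memcpy().
c_source = """
void handle_packet(char *payload, size_t len) {
    char buf[64];
    if (len > sizeof(buf))   /* sanity check */
        return;
    memcpy(buf, payload, len);  /* sensitive sink */
}
"""

# Purely illustrative transformation: drop the bounds check that precedes
# the sink, so the attacker-controlled `len` flows into memcpy() unchecked.
GUARD = re.compile(r"[ \t]*if \(len > sizeof\(buf\)\)[^\n]*\n[ \t]*return;\n")
vulnerable = GUARD.sub("", c_source, count=1)

print(vulnerable)  # memcpy() now accepts any len -> potential stack overflow
```

In the real system, candidate checks are located via data-flow analysis between user-controlled sources and sensitive sinks, as the abstract describes, rather than by pattern matching on raw text.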
Health care spending and use of information technology in OECD countries.
In 2003, the United States had fewer practicing physicians, practicing nurses, and acute care bed days per capita than the median country in the Organization for Economic Cooperation and Development (OECD). Nevertheless, U.S. health spending per capita was almost two and a half times the per capita health spending of the median OECD country. One proposal for both lowering health spending and improving quality is the adoption of health information technology (HIT). The United States lags as much as a dozen years behind other industrialized countries in HIT adoption--countries where national governments have played major roles in establishing the rules and health insurers have paid most of the costs.
Discovery of Blog Communities based on Mutual Awareness
The blogosphere contains many fast-growing communities on the Internet. Discovering such communities is important for sustaining and encouraging new blogger participation. We focus on extracting communities based on two key insights: (a) communities form due to individual blogger actions that are mutually observable; (b) the semantics of the hyperlink structure differ from those of traditional web analysis problems. Our approach involves developing computational models of mutual awareness that incorporate the specific action type, frequency, and time of occurrence. We use the mutual awareness feature with a ranking-based community extraction algorithm to discover communities. To validate our approach, four performance measures are used on the WWW2006 Blog Workshop dataset and the NEC focused blog dataset, with excellent quantitative results. The extracted communities also prove to be semantically cohesive with respect to their topics of interest.
Sexual strategies theory: an evolutionary perspective on human mating.
This article proposes a contextual-evolutionary theory of human mating strategies. Both men and women are hypothesized to have evolved distinct psychological mechanisms that underlie short-term and long-term strategies. Men and women confront different adaptive problems in short-term as opposed to long-term mating contexts. Consequently, different mate preferences become activated from their strategic repertoires. Nine key hypotheses and 22 predictions from Sexual Strategies Theory are outlined and tested empirically. Adaptive problems sensitive to context include sexual accessibility, fertility assessment, commitment seeking and avoidance, immediate and enduring resource procurement, paternity certainty, assessment of mate value, and parental investment. Discussion summarizes 6 additional sources of behavioral data, outlines adaptive problems common to both sexes, and suggests additional contexts likely to cause shifts in mating strategy.
Neurobiological Basis of Language Learning Difficulties
In this paper we highlight why there is a need to examine subcortical learning systems in children with language impairment and dyslexia, rather than focusing solely on cortical areas relevant for language. First, behavioural studies find that children with these neurodevelopmental disorders perform less well than peers on procedural learning tasks that depend on corticostriatal learning circuits. Second, fMRI studies in neurotypical adults implicate corticostriatal and hippocampal systems in language learning. Finally, structural and functional abnormalities are seen in the striatum in children with language disorders. Studying corticostriatal networks in developmental language disorders could offer us insights into their neurobiological basis and elucidate possible modes of compensation for intervention.
Microbiology and immunology of fish larvae
Deep Neural Network Compression With Single and Multiple Level Quantization
Network quantization is an effective solution for compressing deep neural networks for practical usage. Existing network quantization methods cannot sufficiently exploit depth information to generate low-bit compressed networks. In this paper, we propose two novel network quantization approaches: single-level network quantization (SLQ) for high-bit quantization and multi-level network quantization (MLQ) for extremely low-bit quantization (ternary). We are the first to consider network quantization at both the width and the depth levels. At the width level, parameters are divided into two parts: one for quantization and the other for re-training to eliminate the quantization loss. SLQ leverages the distribution of the parameters to improve the width-level split. At the depth level, we introduce incremental layer compensation to quantize layers iteratively, which decreases the quantization loss in each iteration. The proposed approaches are validated with extensive experiments based on state-of-the-art neural networks including AlexNet, VGG-16, GoogLeNet, and ResNet-18. Both SLQ and MLQ achieve impressive results.
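As a rough illustration of the width-level split described above, the following toy sketch (our own construction with arbitrary parameters, not the authors' code) quantizes one half of a weight vector to a small k-means codebook while leaving the other half in full precision, standing in for the part that SLQ would re-train to absorb the quantization loss:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means for building a weight codebook."""
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        assign = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = values[assign == j].mean()
    return centers, assign

rng = np.random.default_rng(0)
w = rng.normal(size=1000)              # stand-in for one layer's weights

# Width-level split: one part gets quantized, the other stays full precision
# (in SLQ the full-precision part would then be re-trained; omitted here).
idx = rng.permutation(w.size)
quant_idx, retrain_idx = idx[:500], idx[500:]

centers, assign = kmeans_1d(w[quant_idx], k=16)   # 16 centers ~ 4-bit codes
w_q = w.copy()
w_q[quant_idx] = centers[assign]

print("quantization MSE:", np.mean((w - w_q) ** 2))
```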
Immersion, presence and performance in virtual environments: an experiment with tri-dimensional chess
This paper describes an experiment to assess the influence of immersion on performance in immersive virtual environments. The task involved Tri-Dimensional Chess and required subjects to reproduce on a real chess board the state of the board learned from a sequence of moves witnessed in a virtual environment. Twenty-four subjects were allocated to a factorial design consisting of two levels of immersion (exocentric screen-based and egocentric HMD-based) and two kinds of environment (plain and realistic). The results suggest that egocentric subjects performed better than exocentric subjects, and those in the more realistic environment performed better than those in the less realistic environment. Previous knowledge of chess and amount of virtual practice were also significant, and may be considered as control variables to equalise these factors amongst the subjects. Other things being equal, males remembered the moves better than females, although female performance improved with higher spatial ability test scores. The paper also attempts to clarify the relationship between immersion, presence and performance, and locates the experiment within such a theoretical framework.
Cultural Differences in Spontaneous Trait and Situation Inferences
Previous findings indicated that when people observe someone’s behavior, they spontaneously infer the traits and situations that cause the target person’s behavior. These inference processes are called spontaneous trait inferences (STIs) and spontaneous situation inferences (SSIs). While both patterns of inferences have been observed, no research has examined the extent to which people from different cultural backgrounds produce these inferences when information affords both trait and situation inferences. Based on the theoretical frameworks of social orientations and thinking styles, we hypothesized that European Canadians would be more likely to produce STIs than SSIs because of the individualistic/independent social orientation and the analytic thinking style dominant in North America, whereas Japanese would produce both STIs and SSIs equally because of the collectivistic/interdependent social orientation and the holistic thinking style dominant in East Asia. Employing the savings-in-relearning paradigm, we presented information that affords both STIs and SSIs and examined cultural differences in the extent of both inferences. The results supported our hypotheses. The relationships between culturally dominant styles of thought and the inference processes in impression formation are discussed.
Design of permanent-magnet generators for wind turbines
There has been much interest in, and many studies of, high-efficiency wind generators with permanent magnet excitation due to the increasing availability of permanent magnet materials, especially Nd-Fe-B. The present paper aims to outline the design and analysis of such a PM generator. The generator considered is an 8-pole permanent magnet generator rated at 5 kW and using NdFeB for the field excitation. The details of the permanent magnet generator are also presented.
Scheduling algorithms and operating systems support for real-time systems
This paper summarizes the state of the real-time field in the areas of scheduling and operating system kernels. Given the vast amount of work that has been done by both the operations research and computer science communities in the scheduling area, we discuss four paradigms underlying the scheduling approaches and present several exemplars of each. The four paradigms are: static table-driven scheduling, static priority preemptive scheduling, dynamic planning-based scheduling, and dynamic best-effort scheduling. In the operating system context, we argue that most of the proprietary commercial kernels as well as real-time extensions to timesharing operating system kernels do not fit the needs of predictable real-time systems. We discuss several research kernels that are currently being built to explicitly meet the needs of real-time applications. This material is based upon work supported by the National Science Foundation under grants CDA8922572 and IRI 9208920, and by the Office of Naval Research under grant N00014-92-J-1048.
Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour.
Knowing the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy, a commonly used prostate cancer treatment method. The prostate boundary must be segmented before a dose plan can be obtained. However, manual segmentation is arduous and time consuming. This paper introduces a semi-automatic segmentation algorithm based on the dyadic wavelet transform (DWT) and the discrete dynamic contour (DDC). A spline interpolation method is used to determine the initial contour based on four user-defined initial points. The DDC model then refines the initial contour based on the approximate coefficients and the wavelet coefficients generated using the DWT. The DDC model is executed under two settings. The coefficients used in these two settings are derived using smoothing functions of different sizes. A selection rule is used to choose the best contour from the contours produced under these two settings. The accuracy of the final contour produced by the proposed algorithm is evaluated by comparing it with the manual contour outlined by an expert observer. A total of 114 2D TRUS images taken from six different patients scheduled for brachytherapy were segmented using the proposed algorithm. The average difference between the contour segmented using the proposed algorithm and the manually outlined contour is less than 3 pixels.
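The initialization step can be sketched compactly. The snippet below is a hedged reconstruction (not the authors' code; the point coordinates are invented) of fitting a closed interpolating spline through four user-defined points with SciPy, producing the kind of initial contour the DDC model would then refine:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Four hypothetical user-defined points on the prostate boundary (pixels).
pts = np.array([[200, 120], [260, 200], [200, 280], [140, 200]], float)
closed = np.vstack([pts, pts[:1]])          # repeat first point to close loop
t = np.linspace(0, 1, len(closed))          # parameter value at each knot

# Periodic cubic splines for x(t) and y(t) give a smooth closed curve.
sx = CubicSpline(t, closed[:, 0], bc_type='periodic')
sy = CubicSpline(t, closed[:, 1], bc_type='periodic')

u = np.linspace(0, 1, 128, endpoint=False)
contour = np.column_stack([sx(u), sy(u)])   # initial contour for the DDC
print(contour.shape)                        # (128, 2)
```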
Two-stage ELM for phishing Web pages detection using hybrid features
High volumes of phishing attacks are encountered every day because of attackers' high financial returns. Recently, there has been significant interest in applying machine learning to phishing Web page detection. In contrast to existing work, this paper introduces predicted labels of textual contents as part of the features and proposes a novel framework for phishing Web page detection using hybrid features consisting of URL-based, Web-based, rule-based and textual content-based features. We achieve this framework by developing an efficient two-stage extreme learning machine (ELM). The first stage constructs classification models on the textual contents of Web pages using ELM. In particular, we use Optical Character Recognition (OCR) as an assistant tool to extract textual contents from image-format Web pages in this stage. In the second stage, a classification model on hybrid features is developed by using a linear combination model-based ensemble of ELMs (LC-ELMs), with the weights calculated by the generalized inverse. Experimental results indicate the proposed framework is promising for detecting phishing Web pages.
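The ELM building block used in both stages is compact enough to sketch. The following toy version (our illustration on random data, not the paper's system) draws random hidden-layer weights, computes hidden activations, and solves the output weights in closed form via the Moore-Penrose generalized inverse, the same role the generalized inverse plays in the LC-ELMs stage above:

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, seed=0):
    """Basic extreme learning machine: random hidden layer + pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in data: feature vectors with 0/1 phishing labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 30))
y = rng.integers(0, 2, size=(500, 1)).astype(float)

W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```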
Understanding the behavior of malicious applications in social networks
The World Wide Web has evolved from a collection of static HTML pages to an assortment of Web 2.0 applications. Online social networking in particular is becoming more popular by the day since the establishment of SixDegrees in 1997. Millions of people use social networking web sites daily, such as Facebook, MySpace, Orkut, and LinkedIn. A side-effect of this growth is that possible exploits can turn OSNs into platforms for malicious and illegal activities, like DDoS attacks, privacy violations, disk compromise, and malware propagation. In this article we show that social networking web sites have the ideal properties to become attack platforms. We introduce a new term, antisocial networks, that refers to distributed systems based on social networking web sites which can be exploited to carry out network attacks. An adversary can take control of a visitor's session by remotely manipulating their browsers through legitimate web control functionality such as image-loading HTML tags, JavaScript instructions, and Java applets.
A study on the efficiency of transparent patch antennas designed from conductive oxide films
A study on the efficiency of transparent patch antennas designed from indium tin oxide (ITO) films is presented to provide design guidelines for patch-type transparent antennas. The trade-offs between optical transparency and antenna efficiency are analyzed by considering typical material properties of ITO films. It is shown that the efficiency of a patch antenna designed from ITO films is determined by the electron mobility of the ITO film, the operational frequency, and the substrate material. The study shows that with today's material processing methods, it is feasible to achieve at least 30% efficiency for an ITO antenna with 90% optical transparency at operational frequencies above 5 GHz. While progress in materials science may improve antenna performance, a highly transparent patch antenna with 30% efficiency may already be employed in array designs for practical implementations.
Optical and structural properties of perovskite films prepared with two-step spin-coating method
Perovskite materials have attracted great research interest due to their superior properties for high power conversion efficiency (PCE) solar cells. Recently, a PCE above 18% has already been achieved using the perovskite methylammonium lead iodide (CH3NH3PbI3, MAPbI3) as the active material. Controlling the crystal quality of the perovskite film is crucial for obtaining a high PCE together with environmentally stable solar cells. Spin-coating and dip-coating methods are commonly used to prepare perovskite films for solar cells. However, not all lead(II) iodide (PbI2) is completely converted into perovskite, which causes low PCE and deteriorates the stability of solar cells. Therefore, a suitable and simple method for obtaining high-quality crystalline MAPbI3 films is highly needed. Herein, we report our current study on the optical, structural and morphological properties of perovskite CH3NH3PbI3 films prepared by a two-step spin-coating method. Using this method, no PbI2 peak is observed in the XRD ...
Microstructure of HK40 alloy after high temperature service in oxidizing/carburizing environment
Crack propagation and oxidation phenomena during high temperature service of tubes made from HK40 alloy have been investigated. The materials were characterized using electron microscopy, X-ray diffraction and microanalysis techniques after being used for an extended period (up to 25,000 h) as furnace tubes in ethylene pyrolysis. The service conditions subjected the materials to oxidizing and carburizing conditions on the surfaces leading to the formation of complex oxide structures in both external and internal oxide scales, and of carbide-denuded zones in subsurface regions. A macrocrack in one of the samples provided an opportunity to study the mechanism of crack propagation and the sequence of oxidation of the constituent elements in the material. The observations imply that silicon segregation during carbide coarsening was an important precursor to crack propagation.
Protein structural dynamics revealed by time-resolved X-ray solution scattering.
One of the most important questions in biological science is how a protein functions. When a protein performs its function, it undergoes regulated structural transitions. In this regard, to better understand the underlying principle of a protein function, it is desirable to monitor the dynamic evolution of the protein structure in real time. Probing fast and subtle motions of a protein under physiological conditions demands an experimental tool that is not only equipped with superb spatiotemporal resolution but also applicable to samples in the solution phase. Time-resolved X-ray solution scattering (TRXSS), discussed in this Account, fits all of the requirements needed for probing the movements of proteins in aqueous solution. The technique utilizes a pump-probe scheme employing an optical pump pulse to initiate photoreactions of proteins and an X-ray probe pulse to monitor ensuing structural changes. The technical advances in ultrafast lasers and X-ray sources allow us to achieve superb temporal resolution down to femtoseconds. Because X-rays scatter off all atomic pairs in a protein, an X-ray scattering pattern provides information on the global structure of the protein with subangstrom spatial resolution. Importantly, TRXSS is readily applicable to aqueous solution samples of proteins with the aid of theoretical models and therefore is well suited for investigating the structural dynamics of protein transitions under physiological conditions. In this Account, we demonstrate that TRXSS can be used to probe real-time structural dynamics of proteins in solution, ranging from subtle helix movement to global conformational change. Specifically, we discuss the photoreactions of photoactive yellow protein (PYP) and homodimeric hemoglobin (HbI). For PYP, we revealed the kinetics of structural transitions among four transient intermediates comprising a photocycle and, by applying structural analysis based on ab initio shape reconstruction, showed that the signaling of PYP involves the protrusion of the N-terminus with a significant increase of the overall protein size. For HbI, we elucidated the dynamics of complex allosteric transitions among transient intermediates. In particular, by applying structural refinement analysis based on rigid-body modeling, we found that the allosteric transition of HbI accompanies the rotation of the quaternary structure and the contraction between the two heme domains. By making use of the experimental and analysis methods presented in this Account, we envision that TRXSS can be used to probe the structural dynamics of various proteins, allowing us to decipher the working mechanisms of their functions. Furthermore, when combined with femtosecond X-ray pulses generated from X-ray free electron lasers, TRXSS will gain access to ultrafast protein dynamics on sub-picosecond time scales.
Trends in Microgrid Control
The increasing interest in integrating intermittent renewable energy sources into microgrids presents major challenges from the viewpoints of reliable operation and control. In this paper, the major issues and challenges in microgrid control are discussed, and a review of state-of-the-art control strategies and trends is presented; a general overview of the main control principles (e.g., droop control, model predictive control, multi-agent systems) is also included. The paper classifies microgrid control strategies into three levels: primary, secondary, and tertiary, where primary and secondary levels are associated with the operation of the microgrid itself, and tertiary level pertains to the coordinated operation of the microgrid and the host grid. Each control level is discussed in detail in view of the relevant existing technical literature.
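As a pointer for readers new to the primary-level methods named above, the conventional droop laws can be stated in one line; this is the textbook formulation (not reproduced from the paper), in which each inverter mimics the speed-droop behavior of a synchronous generator:

```latex
\omega = \omega^{*} - m_p\,(P - P^{*}), \qquad
V = V^{*} - n_q\,(Q - Q^{*})
```

Here \omega^{*} and V^{*} are the nominal frequency and voltage, P^{*} and Q^{*} are the active and reactive power set points, and m_p, n_q are the droop slopes; the secondary control level then restores the steady-state frequency and voltage deviations that these proportional laws leave behind.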
POWER8 design methodology innovations for improving productivity and reducing power
The design complexity of modern high-performance processors calls for innovative design methodologies to achieve time-to-market goals. New design techniques are also needed to curtail the power increases that inherently arise from ever-increasing performance targets. This paper describes new design approaches employed by the POWER8 processor design team to address complexity and power consumption challenges. Improvements in productivity are attained by leveraging a new and more synthesis-centric design methodology. New optimization strategies for synthesized macros allow power reduction without sacrificing performance. These methodology innovations contributed to the industry-leading performance of the POWER8 processor. Overall, POWER8 delivers a 2.5x increase in per-socket performance over its predecessor, POWER7+, while maintaining the same power dissipation.
Cannabis - from cultivar to chemovar.
The medicinal use of Cannabis is increasing as countries worldwide are setting up official programs to provide patients with access to safe sources of medicinal-grade Cannabis. An important question that remains to be answered is which of the many varieties of Cannabis should be made available for medicinal use. Drug varieties of Cannabis are commonly distinguished through the use of popular names, with a major distinction being made between Indica and Sativa types. Although more than 700 different cultivars have already been described, it is unclear whether such classification reflects any relevant differences in chemical composition. Some attempts have been made to classify Cannabis varieties based on chemical composition, but they have mainly been useful for forensic applications, distinguishing drug varieties, with high THC content, from the non-drug hemp varieties. The biologically active terpenoids have not been included in these approaches. For a clearer understanding of the medicinal properties of the Cannabis plant, a better classification system, based on a range of potentially active constituents, is needed. The cannabinoids and terpenoids, present in high concentrations in Cannabis flowers, are the main candidates. In this study, we compared cultivars obtained from multiple sources. Based on the analysis of 28 major compounds present in these samples, followed by principal component analysis (PCA) of the quantitative data, we were able to identify the Cannabis constituents that defined the samples into distinct chemovar groups. The study indicates the usefulness of a PCA approach for chemotaxonomic classification of Cannabis varieties.
Development of a high-voltage closed-loop power supply for ozone generation
This paper deals with the design of an AC-220-volt-mains-fed power supply for ozone generation. A power stage consisting of a buck converter to regulate the output power plus a current-fed parallel-resonant push-pull inverter to supply an ozone generator (OG) is proposed and analysed. Closed-loop operation is presented as a method to compensate for variations in the AC source voltage. Step-up transformer issues in the inverter and their effect on the performance of the overall circuit are also studied. The use of a UC3872 integrated circuit is proposed to control both the push-pull inverter and the buck converter, as well as to protect the power supply in case of a short circuit, an open-lamp condition, or any other fault. The implementation of a 100 W prototype and experimental results are shown and discussed.
The effect of statins on survival in patients with stage IV lung cancer.
OBJECTIVES Prior studies have shown an anticancer effect of statins in patients with certain malignancies. However, it is unclear whether statins have a mortality benefit in lung cancer. We compared survival of patients with stage IV non-small cell lung cancer (NSCLC) receiving vs. not receiving statins prior to diagnosis. METHODS Using data from the Surveillance, Epidemiology and End Results registry linked to Medicare claims, we identified 5118 patients  >65 years of age diagnosed with stage IV NSCLC between 2007 and 2009. We used propensity score methods to assess the association of statin use with overall and lung cancer-specific survival while controlling for measured confounders. RESULTS Overall, 27% of patients were on statins at time of lung cancer diagnosis. Median survival in the statin group was 7 months, compared to 4 months in patients not treated with statins (p<0.001). Propensity score analyses found that statin use was associated with improvement in overall (hazard ratio [HR]: 0.76, 95% confidence interval [CI]: 0.73-0.79) and lung cancer-specific survival (HR: 0.77, 95% CI: 0.73-0.81), after controlling for baseline patient characteristics, cancer characteristics, staging work-up and chemotherapy use. CONCLUSIONS Statin use is associated with improved survival among patients with stage IV NSCLC suggesting a potential anticancer effect. Further research should evaluate plausible biological mechanisms as well as test the effect of statins in prospective clinical trials.
A Privacy Preserving System for Cloud Computing
Cloud computing is changing the way that organizations manage their data, due to its robustness, low cost and ubiquitous nature. Privacy concerns arise whenever sensitive data is outsourced to the cloud. This paper introduces a cloud database storage architecture that prevents both the local administrator and the cloud administrator from learning about the outsourced database content. Moreover, machine-readable rights expressions are used in order to limit users of the database to a need-to-know basis. These limitations are not changeable by administrators after the database-related application is launched, since a separate role of rights editor is defined once an application is launched. Furthermore, trusted computing is applied to bind cryptographic key information to trusted states. By limiting the necessary trust in both corporate and external administrators and service providers, we counteract the often-criticized privacy and confidentiality risks of corporate cloud computing.
Alternative G protein coupling and biased agonism: New insights into melanocortin-4 receptor signalling
The melanocortin-4 receptor (MC4R) is a prototypical G protein-coupled receptor (GPCR) that plays a considerable role in controlling appetite and energy homeostasis. Signalling initiated by MC4R is orchestrated by multiple agonists, inverse agonism and by interactions with accessory proteins. The exact molecular events translating MC4R signalling into its physiological role, however, are not fully understood. This review is an attempt to summarize new aspects of MC4R signalling in the context of its recently discovered alternative G protein coupling, and to give a perspective on how future research could improve our knowledge about the intertwining molecular mechanisms that are responsible for the regulation of energy homeostasis by the melanocortin system.
An event-based conceptual model for context-aware movement analysis
Current tracking technologies enable the collection of data describing the movements of various kinds of objects, including people, animals, icebergs, vehicles, containers with goods, etc. Analysis of movement data is now a hot research topic. However, most of the suggested analysis methods deal with movement data alone. Little has been done to support the analysis of movement in its spatio-temporal context, which includes various spatial and temporal objects as well as diverse properties associated with spatial locations and time moments. Comprehensive analysis of movement requires detection and analysis of the relations that occur between moving objects and elements of the context in the process of the movement. We suggest a conceptual model in which movement is considered as a combination of spatial events of diverse types and extents in space and time. Spatial and temporal relations occur between movement events and elements of the spatial and temporal contexts. The model provides the basis for a generic approach based on extracting interesting events from trajectories and treating the events as independent objects. By means of a prototype implementation, we tested the approach on complex real data about the movement of wild animals. The testing showed the validity of the approach.
Measuring User Engagement
User engagement refers to the quality of the user experience that emphasizes the positive aspects of interacting with an online application and, in particular, the desire to use that application longer and repeatedly. User engagement is a key concept in the design of online applications (whether for desktop, tablet or mobile), motivated by the observation that successful applications are not just used, but are engaged with. Users invest time, attention, and emotion in their use of technology, and seek to satisfy pragmatic and hedonic needs. Measurement is critical for evaluating whether online applications are able to successfully engage users, and may inform the design and use of applications. User engagement is a multifaceted, complex phenomenon; this gives rise to a number of potential measurement approaches. Common ways to evaluate user engagement include self-report measures, e.g., questionnaires; observational methods, e.g., facial expression analysis, speech analysis; neuro-physiological signal processing methods, e.g., respiratory and cardiovascular accelerations and decelerations, muscle spasms; and web analytics, e.g., number of site visits, click depth. These methods represent various trade-offs in terms of the setting (laboratory versus “in the wild”), object of measurement (user behaviour, affect or cognition) and scale of data collected. For instance, small-scale user studies are deep and rich, but limited in terms of generalizability, whereas large-scale web analytic studies are powerful but neglect users’ motivation and context. The focus of this book is how user engagement is currently being measured and various considerations for its measurement. Our goal is to leave readers with an appreciation of the various ways in which to measure user engagement, and their associated strengths and weaknesses. We emphasize the multifaceted nature of user engagement and the unique contextual constraints that come to bear upon attempts to measure engagement in different settings, and across different user groups and web domains. At the same time, this book advocates for the development of “good” measures and good measurement practices that will advance the study of user engagement and improve our understanding of this construct, which has become so vital in our wired world.
Manual therapy, physical therapy, or continued care by a general practitioner for patients with neck pain. A randomized, controlled trial.
Context Neck pain is common among primary care patients. Evidence on the effectiveness of therapies for neck pain is limited. A previous randomized, controlled trial suggested benefits from manual therapy and physical therapy. Contribution This randomized, controlled trial of manual therapy, physical therapy, and continued care by a doctor confirms the superiority of manual therapy and physical therapy over continued care. At 7 weeks, 68.3% of patients in the manual therapy group reported resolved or much improved pain, compared with 50.8% of patients in the physical therapy group and 35.9% of patients in the continued care group. Clinical Implications Primary care physicians should consider manual therapy when treating patients with neck pain. The Editors Neck pain is a common problem in the general population, with point prevalences between 10% and 15% (1-3). It is most common at approximately 50 years of age and is more common in women than in men (1, 2, 4-6). Neck pain can be severely disabling and costly, and little is known about its clinical course (7-9). Limited range of motion and a subjective feeling of stiffness may accompany neck pain, which is often precipitated or aggravated by neck movements or sustained neck postures. Headache, brachialgia, dizziness, and other signs and symptoms may also be present in combination with neck pain (10, 11). Although history taking and diagnostic examination can suggest a potential cause, in most cases the pathologic basis for neck pain is unclear and the pain is labeled nonspecific. Conservative treatment methods that are frequently used in general practice include analgesics, rest, or referral to a physical therapist or manual therapist (12, 13). Physical therapy may include passive treatment, such as massage, interferential current, or heat applications, and active treatment, such as exercise therapies. Physical therapists can specialize in passive manual (or hands-on) techniques, including mobilization or manipulation (high-velocity thrust techniques), also referred to as manual therapy (14-19). According to the International Federation of Orthopedic Manipulative Therapies, Orthopedic manipulative (manual) therapy is a specialization within physical therapy and provides comprehensive conservative management for pain and other symptoms of neuro-musculo-articular dysfunction in the spine and extremities (unpublished data). Today, many different manual therapy approaches are applied by various health professionals, including medical doctors, physical therapists, massage therapists, manual therapists, chiropractors, and osteopathic doctors. Reviews of trials involving manual therapy or physical therapy show that most interventions in these categories are characterized by a combination of passive and active components (20-23). Although a combination of manual therapy or physical therapy that includes exercises appears to be effective for neck pain, these therapies have not been studied in sufficient detail to draw firm conclusions, and the methodologic quality of most trials on neck pain is rather low (20-23). Koes and colleagues (24, 25) performed a randomized trial on back and neck pain and found promising results for manual therapy and physical therapy in subgroup analyses of patients with neck pain. In our randomized, controlled trial, we compared the effectiveness of manual therapy, physical therapy, and continued care by a general practitioner in patients with nonspecific neck pain. 
Methods Patients Patients with nonspecific neck pain whose clinical presentation did not warrant referral for further diagnostic screening were referred to one of four research centers by 42 general practitioners for study selection. We excluded patients whose history, signs, and symptoms suggested a potential nonbenign cause (including previous surgery of the neck) or evidence of a specific pathologic condition, such as malignancy, neurologic disease, fracture, herniated disc, or systemic rheumatic disease. Two research assistants who were experienced physical therapists and were blinded to treatment allocation performed physical examinations at baseline and follow-up. They used standardized inclusion and exclusion criteria and performed a short neurologic examination (Appendix Table 1) and range-of-motion assessment. The eligibility criteria were age between 18 and 70 years, pain or stiffness in the neck for at least 2 weeks, neck symptoms reproducible during physical examination, willingness to adhere to treatment and measurement regimens, no physical therapy or manual therapy for neck pain during the previous 6 months, no involvement in litigation, and written informed consent. Patients with concurrent headaches, nonradicular pain in the upper extremities, and low back pain were not excluded, but neck pain had to be the main symptom for all patients. Random Assignment and Data Collection All patient data were collected before randomization. Patients were assigned to a treatment group on the basis of block randomization after prestratification for symptom severity (severity scores <7 points or ≥7 points on a scale of 0 to 10); age (<40 years or ≥40 years); and, mainly for practical reasons, research center (four local centers). Randomized permuted blocks of six patients were generated for each stratum by using a computer-generated random-sequence table. A researcher who was not involved in the project prepared opaque, sequentially numbered sealed envelopes that contained folded cards indicating one of the three interventions. Interventions The intervention period lasted 6 weeks. Patients were allowed to perform exercises at home and to continue medication prescribed at baseline or use over-the-counter analgesics. Other co-interventions were discouraged but were registered if they occurred. Within the boundaries of the protocol, treatment could be reassessed and adapted to the patient's condition. The specific treatment characteristics were registered at each visit. A maximum number of visits was set for each intervention group; however, the patients did not have to complete this maximum number if symptoms had resolved. Manual Therapy Our approach to manual therapy was eclectic and incorporated several techniques used in western Europe, North America, and Australia, including those described by Cyriax, Kaltenborn, Maitland, and Mennel (15, 16, 19). In our trial, manual therapy (defined as the use of passive movements to help restore normal spinal function) included hands-on muscular mobilization techniques (aimed at improving soft tissue function), specific articular mobilization techniques (to improve overall joint function and decrease any restrictions in movement at single or multiple segmental levels in the cervical spine), and coordination or stabilization techniques (to improve postural control, coordination, and movement patterns by using the stabilizing cervical musculature) (26). 
Joint mobilization is a form of manual therapy that involves low-velocity passive movements within or at the limit of joint range of motion (27). Manual therapists must undergo extensive training to be able to skillfully perform mobilization techniques (15, 19). Spinal manipulations (low-amplitude, high-velocity thrust techniques) were not included in this protocol. Forty-five minute treatment sessions were scheduled once per week, for a maximum of six treatments. Six experienced manual therapists acknowledged by the Netherlands Manual Therapy Association performed the treatment. Physical Therapy The physical therapists used a combination of several treatment options, but active exercise therapies were the cornerstone of their strategy. Active exercise therapy involves participation by the patient and includes active exercises (to improve strength or range of motion), postural exercises, stretching, relaxation exercises, and functional exercises. Manual traction or stretching, massage, or physical therapy methods, such as interferential current or heat applications, could precede the exercise therapy. Specific manual mobilization techniques were not included in this protocol. Thirty-minute treatment sessions were scheduled twice per week for a maximum of 12 treatments. The treatment was performed by five experienced physical therapists. We prevented cross-contamination with manual therapy by choosing physical therapists who were not manual therapy specialists. Continued Care by a General Practitioner Each patient in this group received standardized care from his or her general practitioner, including advice on prognosis, advice on psychosocial issues, advice on self-care (heat application, home exercises), advice on ergonomics (for example, size of pillow, work position), and encouragement to await further recovery. The treatment protocol was similar to the practice guidelines for low back pain issued by the Dutch College of General Practitioners (28). Patients received an educational booklet containing ergonomic advice and exercises (29). Medication, including paracetamol or nonsteroidal anti-inflammatory drugs, was prescribed on a time-contingent basis if necessary. Ten-minute follow-up visits, scheduled every 2 weeks, were optional, and referral during the intervention period was discouraged. Outcome Measures Data were collected at the research center after 3 and 7 weeks. At 7 weeks, treatment results were expected to be maximal. The patients were repeatedly asked not to reveal any information about their treatment allocation to the research assistants. The success of blinding was evaluated at 7 weeks. Primary outcome measures focused on perceived recovery, pain, and functional disability. Patients rated perceived recovery on a 6-point ordinal transition scale, ranging from much worse to completely recovered. Success was defined a priori as completely recovered or much improved (30). In addition, on the basis of the systematic assessment of spinal mobility, palpation, and pain reported by the
Dirichlet-Hawkes Processes with Applications to Clustering Continuous-Time Document Streams
Clusters in document streams, such as online news articles, can be induced by their textual contents, as well as by the temporal dynamics of their arriving patterns. Can we leverage both sources of information to obtain a better clustering of the documents, and distill information that is not possible to extract using contents only? In this paper, we propose a novel random process, referred to as the Dirichlet-Hawkes process, to take into account both information in a unified framework. A distinctive feature of the proposed model is that the preferential attachment of items to clusters according to cluster sizes, present in Dirichlet processes, is now driven according to the intensities of cluster-wise self-exciting temporal point processes, the Hawkes processes. This new model establishes a previously unexplored connection between Bayesian nonparametrics and temporal point processes, which makes the number of clusters grow to accommodate the increasing complexity of online streaming contents, while at the same time adapting to the ever-changing dynamics of the respective continuous arrival times. We conducted large-scale experiments on both synthetic and real-world news articles, and show that Dirichlet-Hawkes processes can recover both meaningful topics and temporal dynamics, which leads to better predictive performance in terms of content perplexity and arrival time of future documents.
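A minimal sketch may clarify the mechanism (our reconstruction with hypothetical parameters, not the authors' implementation): an arriving document joins an existing cluster with probability proportional to that cluster's self-exciting Hawkes intensity at its arrival time, or opens a new cluster with probability proportional to the Dirichlet-process concentration parameter:

```python
import math, random

def hawkes_intensity(t, event_times, alpha=0.8, tau=1.0):
    """Exponential-kernel Hawkes intensity from one cluster's past events."""
    return sum(alpha * math.exp(-(t - ti) / tau) for ti in event_times if ti < t)

def assign_cluster(t, clusters, theta=0.5):
    """Dirichlet-Hawkes style assignment: intensity replaces cluster size."""
    weights = [hawkes_intensity(t, ev) for ev in clusters] + [theta]
    r = random.uniform(0, sum(weights))
    for k, w in enumerate(weights):
        r -= w
        if r <= 0:
            return k            # k == len(clusters) means "open a new cluster"
    return len(clusters)

random.seed(0)
clusters = []                    # each cluster = list of its event times
for t in [0.1, 0.3, 0.5, 2.5, 2.6, 9.0]:
    k = assign_cluster(t, clusters)
    if k == len(clusters):
        clusters.append([])      # a new cluster is born
    clusters[k].append(t)
print([len(c) for c in clusters])
```

In the full model each cluster weight is additionally multiplied by the likelihood of the document's text under that cluster's topic; the sketch keeps only the temporal part.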
A Robust Uniaxial Force Sensor for Minimally Invasive Surgery
This paper presents a novel miniature uniaxial force sensor for use within a beating heart during mitral valve annuloplasty. The sensor measures 5.5 mm in diameter and 12 mm in length and provides a hollow core to pass instrumentation. A soft elastomer flexure design maintains a waterproof seal. Fiber optic transduction eliminates electrical circuitry within the heart, and acetal components minimize ultrasound-imaging artifacts. Calibration uses a nonlinear viscoelastic method, and in vitro tests demonstrate a 0-4-N force range with rms errors of 0.13 N (< 3.2%). In vivo tests provide the first endocardial measurements of tissue-minimally invasive surgery instrument interaction forces in a beating heart.
Multiband Planar Vivaldi Antenna for Mobile Communication and Industrial Applications
The proposed antenna topology is an interesting radiating element, characterized by broadband or multiband capabilities. The exponential and soft/tapered design of the edge transitions and feeding makes it a challenging item to design and tune, though it leads to impressive results. The antenna is built on Rogers RO3010 material. The bands in which the antenna works are GPS and Galileo (1.57 GHz), UMTS (1.8–2.17 GHz) and ISM 2.4 GHz (Bluetooth, WiFi). The purpose of such an antenna is to be embedded in an Assisted GPS (A-GPS) reference station. Such a device serves as a fixed GPS reference distributing positioning information to mobile device users while delivering services via GSM network standards or via Wi-Fi/Bluetooth connections.
Perceived CT-Space for Motion Planning in Unknown and Unpredictable Environments
One of the ultimate goals in robotics is to make high-DOF robots work autonomously in unknown, changing environments. However, motion planning in completely unknown environments is largely an open problem and poses many challenges. One challenge is that in such an environment, the configuration-time space (CT-space) of a robot is not known beforehand. This paper describes how guaranteed collision-free regions in the unknown CT-space can be discovered progressively via sensing in real time based on the concept of a dynamic envelope, which is not conservative, i.e., does not assume worst-case scenarios, and is robust to uncertainties in obstacle behaviors. The introduced method can be used in general by real-time motion planners for high-DOF robots to discover the existence of guaranteed collision-free future motions efficiently. Its utility is further confirmed both in simulation and in real-world testing involving a 5-DOF robot manipulator.
Hemodynamic dependence of myocardial oxygen consumption indexes.
We tested the afterload and contractile state dependency of three indexes of myocardial oxygen consumption (MVO2): total energy requirement (Et), pressure work index (PWI), and pressure-volume area (PVA). MVO2 was measured in seven isolated canine hearts at four or five different end-diastolic volumes at each of three settings of afterload resistance and with the hearts contracting isovolumically. In several hearts, contractility was also varied by dobutamine infusion. Measured MVO2 (MMVO2) was compared with values predicted (PMVO2) by each index. There was always a high degree of correlation between MMVO2 and PMVO2 for each of the indexes. However, there was a large degree of variability in the coefficients of the MMVO2-PMVO2 relation from one heart to another. We also observed a statistically significant influence of both afterload and contractile state on the predictive power of each of the indexes. Thus each index that we tested had shortcomings in being able to predict MVO2 accurately over a wide range of hemodynamic conditions.
Silymarin/Silybin and Chronic Liver Disease: A Marriage of Many Years
Silymarin is the extract of Silybum marianum, or milk thistle, and its major active compound is silybin, which has a remarkable biological effect. It is used in different liver disorders, particularly chronic liver diseases, cirrhosis and hepatocellular carcinoma, because of its antioxidant, anti-inflammatory and antifibrotic power. Indeed, the anti-oxidant and anti-inflammatory effect of silymarin is oriented towards the reduction of virus-related liver damages through inflammatory cascade softening and immune system modulation. It also has a direct antiviral effect associated with its intravenous administration in hepatitis C virus infection. With respect to alcohol abuse, silymarin is able to increase cellular vitality and to reduce both lipid peroxidation and cellular necrosis. Furthermore, silymarin/silybin use has important biological effects in non-alcoholic fatty liver disease. These substances antagonize the progression of non-alcoholic fatty liver disease, by intervening in various therapeutic targets: oxidative stress, insulin resistance, liver fat accumulation and mitochondrial dysfunction. Silymarin is also used in liver cirrhosis and hepatocellular carcinoma that represent common end stages of different hepatopathies by modulating different molecular patterns. Therefore, the aim of this review is to examine scientific studies concerning the effects derived from silymarin/silybin use in chronic liver diseases, cirrhosis and hepatocellular carcinoma.
Rural children with asthma: impact of a parent and child asthma education program.
The goal of this study was to determine the effectiveness of an asthma educational intervention in improving asthma knowledge, self-efficacy, and quality of life in rural families. Children 6 to 12 years of age (62% male, 56% white, and 22% Medicaid) with persistent asthma (61%) were recruited from rural elementary schools and randomized into a control standard asthma education (CON) group or an interactive educational intervention (INT) group geared toward rural families. Parent/caregiver and child asthma knowledge, self-efficacy, and quality of life were assessed at baseline and at 10 months post-enrollment. Despite a high frequency of symptom reports, only 18% of children reported an emergency department visit in the prior 6 months. Significant improvement in asthma knowledge was noted for INT parents and young INT children at follow-up (Parent: CON = 16.3, INT = 17.5, p < 0.001; Young children: CON = 10.8, INT = 12.45, p < 0.001). Child self-efficacy significantly increased in the INT group at follow-up; however, there was no significant difference in parent self-efficacy or parent and child quality of life at follow-up. Asthma symptom reports were significantly lower for the INT group at follow-up. For young rural children, an interactive asthma education intervention was associated with increased asthma knowledge and self-efficacy and decreased symptom reports, but not with increased quality of life.
Helical rim advancement flaps for reconstruction.
Principles of reconstruction dictate a number of critical points for successful repair. To achieve aesthetic and functional goals, the dermatologic surgeon should avoid deviation of anatomical landmarks and free margins, maintain shape and symmetry, and repair with skin of similar characteristics. Reconstruction of the ear presents a number of unique challenges based on the limited amount of adjacent lax tissue within the cosmetic unit and the structure of the auricle, which consists of a relatively thin skin surface and flexible cartilaginous framework.
Faster R-CNN-Based Glomerular Detection in Multistained Human Whole Slide Images
The detection of objects of interest in high-resolution digital pathological images is a key part of diagnosis and is a labor-intensive task for pathologists. In this paper, we describe a Faster R-CNN-based approach for the detection of glomeruli in multistained whole slide images (WSIs) of human renal tissue sections. Faster R-CNN is a state-of-the-art general object detection method based on a convolutional neural network, which simultaneously proposes object bounds and objectness scores at each point in an image. The method takes an image obtained from a WSI with a sliding window and classifies and localizes every glomerulus in the image by drawing bounding boxes. We configured Faster R-CNN with a pretrained Inception-ResNet model and retrained it to adapt it to our task, then evaluated it on a large dataset consisting of more than 33,000 annotated glomeruli obtained from 800 WSIs. The results show that the approach produces average F-measures comparable to or higher than those of other recently published approaches across different stains. This approach could have practical application in hospitals and laboratories for the quantitative analysis of glomeruli in WSIs and, potentially, lead to a better understanding of chronic glomerulonephritis.
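The sliding-window pipeline is simple to outline. The sketch below is schematic: detect_glomeruli stands in for the trained Faster R-CNN, and slide.read_region mirrors the OpenSlide-style API one might use to read WSI tiles; neither name comes from the paper.

```python
def detect_on_wsi(slide, detect_glomeruli, tile=1024, stride=768):
    """Tile a whole-slide image, run a detector per tile, pool the boxes.

    Assumes `slide.dimensions` gives (width, height) and
    `slide.read_region((x, y), 0, (tile, tile))` returns an image patch,
    mirroring the OpenSlide API; `detect_glomeruli(patch)` is assumed to
    return [(x0, y0, x1, y1, score), ...] in patch coordinates.
    """
    width, height = slide.dimensions
    boxes = []
    for y in range(0, height - tile + 1, stride):
        for x in range(0, width - tile + 1, stride):
            patch = slide.read_region((x, y), 0, (tile, tile))
            for x0, y0, x1, y1, score in detect_glomeruli(patch):
                # Offset tile-local boxes back into slide coordinates.
                boxes.append((x0 + x, y0 + y, x1 + x, y1 + y, score))
    return boxes
```

Because the stride is smaller than the tile, a glomerulus near a tile border is detected more than once; a final non-maximum suppression pass over the pooled boxes would remove the duplicates.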
Assessing Soft-and Hardware Bottlenecks in PC-based Packet Forwarding Systems
Due to the growing packet-processing capabilities of commodity hardware and the high flexibility of software, the use of such systems as alternatives to expensive dedicated networking devices has gained momentum. However, the performance of such PC-based software systems is still low compared to specialized hardware. In this paper, we analyze the performance of several packet forwarding systems and identify bottlenecks by using profiling techniques. We show that the packet IO in the operating system’s network stack is a significant bottleneck and that a six-fold performance increase can be achieved with user-space networking frameworks like Intel DPDK. Keywords: Linux Router; Intel DPDK; Performance Evaluation; Measurement.
Knowledge Management Strategies: Toward a Taxonomy
This paper draws on primary and secondary data to propose a taxonomy of strategies, or "schools," for knowledge management. The primary purpose of this framework is to guide executives on choices to initiate knowledge management projects according to goals, organizational character, and technological, behavioral, or economic biases. It may also be useful to teachers in demonstrating the scope of knowledge management and to researchers in generating propositions for further study.
Courtship compliance: The effect of touch on women's behavior
Previous research has shown that light tactile contact increases compliance with a wide variety of requests. However, the effect of touch on compliance with a courtship request has never been studied. In this paper, three experiments were conducted in a courtship context. In the first experiment, a young male confederate in a nightclub asked young women to dance with him during the period when slow songs were played. When formulating his request, the confederate touched (or not) the young woman on her forearm for 1 or 2 seconds. In the second experiment, a 20-year-old confederate approached a young woman in the street and asked her for her phone number. The request was again accompanied by a light touch (or not) on the young woman's forearm. In both experiments, it was found that touch increased compliance with the man's request. A replication of the second experiment, accompanied by a survey administered to the women, showed that a high dominance score was associated with tactile contact. The link between touch and the dominant position of the male is used to explain these results theoretically.
FAST POWER LINE DETECTION AND LOCALIZATION USING STEERABLE FILTER FOR ACTIVE UAV GUIDANCE
In this paper we present a fast power line detection and localisation algorithm and propose a high-level guidance architecture for active vision-based Unmanned Aerial Vehicle (UAV) guidance. The detection stage is based on steerable filters for edge and ridge detection, followed by a line-fitting algorithm to refine candidate power lines in images. The guidance architecture assumes a UAV with an onboard gimbal camera. We first control the position of the gimbal such that the power line is in the field of view of the camera. Then its pose is used to generate the appropriate control commands such that the aircraft moves and flies above the lines. We present initial experimental results for the detection stage, which show that the proposed algorithm outperforms two state-of-the-art line detection algorithms for power line detection from aerial imagery.
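To make the detection stage concrete, here is a hedged NumPy/SciPy sketch (our own reconstruction, not the authors' code; sigma and the angle count are arbitrary) of a steerable second-derivative-of-Gaussian filter: three separable basis responses are combined analytically to give the oriented response at any angle, and the maximum over sampled angles highlights thin ridge structures such as power lines:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def steerable_ridge_response(img, sigma=2.0, n_angles=8):
    """Max response of a 2nd-derivative-of-Gaussian filter steered over angles."""
    # Basis filters: second partial derivatives of a Gaussian (axis 0 = y).
    gxx = gaussian_filter(img, sigma, order=(0, 2))
    gyy = gaussian_filter(img, sigma, order=(2, 0))
    gxy = gaussian_filter(img, sigma, order=(1, 1))
    best = np.full(img.shape, -np.inf)
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        # Second directional derivative: u^T H u with u = (cos t, sin t).
        resp = c * c * gxx + 2 * c * s * gxy + s * s * gyy
        best = np.maximum(best, np.abs(resp))
    return best    # large values mark candidate ridge/line pixels

img = np.zeros((64, 64))
img[32, :] = 1.0   # toy image containing a single horizontal "power line"
print(steerable_ridge_response(img).max() > 0)
```

A line-fitting step, such as the one the paper applies after filtering, would then be run on the thresholded response map to extract straight power line candidates.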
Olive (Olea europaea L.) Leaf Polyphenols Improve Insulin Sensitivity in Middle-Aged Overweight Men: A Randomized, Placebo-Controlled, Crossover Trial
BACKGROUND Olive plant leaves (Olea europaea L.) have been used for centuries in folk medicine to treat diabetes, but there are very limited data examining the effects of olive polyphenols on glucose homeostasis in humans. OBJECTIVE To assess the effects of supplementation with olive leaf polyphenols (51.1 mg oleuropein, 9.7 mg hydroxytyrosol per day) on insulin action and cardiovascular risk factors in middle-aged overweight men. DESIGN Randomized, double-blinded, placebo-controlled, crossover trial in New Zealand. 46 participants (aged 46.4 ± 5.5 years, BMI 28.0 ± 2.0 kg/m²) were randomized to receive capsules with olive leaf extract (OLE) or placebo for 12 weeks, crossing over to the other treatment after a 6-week washout. The primary outcome was insulin sensitivity (Matsuda method). Secondary outcomes included glucose and insulin profiles, cytokines, lipid profile, body composition, 24-hour ambulatory blood pressure, and carotid intima-media thickness. RESULTS Treatment evaluations were based on the intention-to-treat principle. All participants took >96% of prescribed capsules. OLE supplementation was associated with a 15% improvement in insulin sensitivity (p = 0.024) compared to placebo. There was also a 28% improvement in pancreatic β-cell responsiveness (p = 0.013). OLE supplementation also led to increased fasting interleukin-6 (p = 0.014), IGFBP-1 (p = 0.024), and IGFBP-2 (p = 0.015) concentrations. There were, however, no effects on interleukin-8, TNF-α, ultra-sensitive CRP, lipid profile, ambulatory blood pressure, body composition, carotid intima-media thickness, or liver function. CONCLUSIONS Supplementation with olive leaf polyphenols for 12 weeks significantly improved insulin sensitivity and pancreatic β-cell secretory capacity in overweight middle-aged men at risk of developing the metabolic syndrome.
Randomised, Double-Blind, Parallel, Placebo-Controlled Study of Oral Glucosamine, Methylsulfonylmethane and their Combination in Osteoarthritis.
OBJECTIVE Glucosamine, classified as a slow-acting drug in osteoarthritis (SADOA), is an efficacious chondroprotective agent. Methylsulfonylmethane (MSM), the oxidised form of dimethyl sulfoxide (DMSO), is an effective natural analgesic and anti-inflammatory agent. The aim of this study was to compare the efficacy and safety of oral glucosamine (Glu), methylsulfonylmethane (MSM), their combination and placebo in osteoarthritis of the knee. PATIENTS AND DESIGN A total of 118 patients of either sex with mild to moderate osteoarthritis were included in the study and randomised to receive either Glu 500 mg, MSM 500 mg, Glu and MSM, or placebo capsules three times daily for 12 weeks. Patients were evaluated at 0 (before drug administration), 2, 4, 8 and 12 weeks post-treatment for efficacy and safety. The efficacy parameters studied were the pain index, the swelling index, visual analogue scale pain intensity, 15 m walking time, the Lequesne index, and consumption of rescue medicine. RESULTS Glu, MSM and their combination significantly improved signs and symptoms of osteoarthritis compared with placebo. There was a statistically significant decrease in mean (± SD) pain index from 1.74 ± 0.47 at baseline to 0.65 ± 0.71 at week 12 with Glu (p < 0.001). MSM significantly decreased the mean pain index from 1.53 ± 0.51 to 0.74 ± 0.65, and combination treatment resulted in a greater decrease in the mean pain index (1.7 ± 0.47 to 0.36 ± 0.33; p < 0.001). After 12 weeks, the mean swelling index significantly decreased with Glu and MSM, while the decrease in swelling index with combination therapy was greater (1.43 ± 0.63 to 0.14 ± 0.35; p < 0.05) after 12 weeks. The combination produced a statistically significant decrease in the Lequesne index. All treatments were well tolerated. CONCLUSION Glu, MSM and their combination produced an analgesic and anti-inflammatory effect in osteoarthritis. Combination therapy showed better efficacy than the individual agents in reducing pain and swelling and in improving the functional ability of joints. The onset of analgesic and anti-inflammatory activity was more rapid with the combination than with Glu. It can be concluded that the combination of MSM with Glu provides better and more rapid improvement in patients with osteoarthritis.
Non-invasive blood glucose measurement
The impedance of blood is affected by blood glucose concentration: the electrical impedance of blood varies with its glucose content. This relationship between glucose and electrical impedance was verified using the four-electrode measurement method. The bioelectrical voltage output differed between fasting and non-fasting blood glucose when measured with a purpose-designed probe of four tin-lead alloy electrodes. Ten test subjects aged 20-25 years, all UniMAP students, participated in the experiment, and blood glucose was measured with both the current clinical method and the designed device. In a preliminary study with the developed device, glucose values in the range of 4-5 mmol/l during fasting corresponded to voltage outputs from 0.500 V to -1.800 V, while outputs of 0.100 V or less were observed in the normal (non-fasting) glucose range of 5-11 mmol/l. The results also suggest that blood glucose prediction with the designed device can approach the accuracy of the gold-standard invasive finger-prick measurement. These early results support the view that there is ample scope for blood electrical impedance studies in non-invasive blood glucose measurement.
The impact of risk stratification on care coordination
Effective care coordination requires risk stratification, but little evidence has been collected about how it impacts clinicians. This care coordination pilot project created a unique opportunity to observe care coordination activities for 10,000 patients over 18 months, before and after risk stratification. Risk stratification feedback increased care coordination contacts with high-risk patients, without decreasing contacts with low-risk patients. The results of this study provide quantitative evidence of the importance of risk stratification in care coordination.
User-click modeling for understanding and predicting search-behavior
Recent advances in search users' click modeling consider both users' search queries and click/skip behavior on documents to infer the user's perceived relevance. Most of these models, including dynamic Bayesian networks (DBN) and user browsing models (UBM), use probabilistic models to understand user click behavior based on individual queries. User behavior is more complex when the actions taken to satisfy an information need form a search session, which may include multiple queries and subsequent click behaviors on various items on search result pages. Previous research is limited to treating each query within a search session in isolation, without paying attention to its dynamic interactions with other queries in the session. Investigating this problem, we consider the sequence of queries and their clicks in a search session as a task and propose a task-centric click model (TCM). TCM characterizes user behavior related to a task as a collective whole. Specifically, we identify and consider two new biases in TCM as the basis for user modeling. The first indicates that users tend to express their information needs incrementally in a task, and thus perform more clicks as their needs become clearer. The other illustrates that users tend to click fresh documents that are not included in the results of previous queries. Using these biases, TCM can more accurately capture user search behavior. Extensive experimental results demonstrate that, by considering all the task information collectively, TCM can better interpret user click behavior and achieve significant improvements in terms of NDCG and perplexity.
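For orientation, a toy example of the probabilistic click-modeling setup that DBN, UBM, and TCM all build on: a click requires both examination of a result and its perceived relevance. The per-position probabilities here are illustrative stand-ins; TCM's task-level biases (incremental query refinement and document freshness) are deliberately omitted.

    import numpy as np

    def click_log_likelihood(clicks, relevance, examine):
        """Log-likelihood of observed 0/1 clicks (sessions x positions) under
        the examination hypothesis: P(click) = P(examine) * P(relevant)."""
        p = np.clip(examine * relevance, 1e-9, 1 - 1e-9)
        return float(np.sum(clicks * np.log(p) + (1 - clicks) * np.log(1 - p)))

    # e.g. two sessions over three result positions
    clicks = np.array([[1, 0, 0], [1, 1, 0]])
    ll = click_log_likelihood(clicks,
                              relevance=np.array([0.8, 0.5, 0.3]),
                              examine=np.array([0.9, 0.6, 0.3]))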
Abnormalities in emotion processing within cortical and subcortical regions in criminal psychopaths: evidence from a functional magnetic resonance imaging study using pictures with emotional content
BACKGROUND The neurobiology of psychopathy is important for our understanding of current neuropsychiatric questions. Despite growing interest in biological research on psychopathy, its neural underpinning remains obscure. METHODS We used functional magnetic resonance imaging to study the influence of affective content on brain activation in psychopaths. Series containing positive and negative pictures from the International Affective Picture System were shown to six male psychopaths and six male control subjects while 100 whole-brain echo-planar-imaging measurements were acquired. Differences in brain activation were evaluated using BrainVoyager software 4.6. RESULTS In psychopaths, increased activation in response to negative content was found right-sided in prefrontal regions and the amygdala. Activation was reduced right-sided in the subgenual cingulate and the temporal gyrus, and left-sided in the dorsal cingulate and the parahippocampal gyrus. Increased activation in response to positive content was found left-sided in orbitofrontal regions. Activation was reduced in right medial frontal and medial temporal regions. CONCLUSIONS These findings support the hypothesis that psychopathy is neurobiologically reflected by dysregulation and disturbed functional connectivity of emotion-related brain regions. They may be interpreted within a framework in which prefrontal regions provide top-down control to, and regulate bottom-up signals from, limbic areas. Because of the small sample size, the results of this study have to be regarded as preliminary.
Neural Approaches to Conversational AI
This tutorial surveys neural approaches to conversational AI that were developed in the last few years. We group conversational systems into three categories: (1) question answering agents, (2) task-oriented dialogue agents, and (3) social bots. For each category, we present a review of state-of-the-art neural approaches, draw the connection between neural approaches and traditional symbolic approaches, and discuss the progress we have made and challenges we are facing, using specific systems and models as case studies.
A Dual Prediction Network for Image Captioning
General captioning practice involves a single forward prediction, with the aim of predicting the word in the next timestep given the word in the current timestep. In this paper, we present a novel captioning framework, namely Dual Prediction Network (DPN), which is end-to-end trainable and addresses the captioning problem with dual predictions. Specifically, the dual predictions consist of a forward prediction to generate the next word from the current input word, as well as a backward prediction to reconstruct the input word using the predicted word. DPN has two appealing properties: 1) By introducing an extra supervision signal on the prediction, DPN can better capture the interplay between the input and the target; 2) Utilizing the reconstructed input, DPN can make another new prediction. During the test phase, we average both predictions to formulate the final target sentence. Experimental results on the MS COCO dataset demonstrate that, benefiting from the reconstruction step, both generated predictions in DPN outperform the predictions of methods based on the general captioning practice (single forward prediction), and averaging them can bring a further accuracy boost. Overall, DPN achieves competitive results with state-of-the-art approaches, across multiple evaluation metrics.
Online Learning of a Memory for Learning Rates
The promise of learning to learn for robotics rests on the hope that by extracting some information about the learning process itself we can speed up subsequent similar learning tasks. Here, we introduce a computationally efficient online meta-learning algorithm that builds and optimizes a memory model of the optimal learning rate landscape from previously observed gradient behaviors. While performing task specific optimization, this memory of learning rates predicts how to scale currently observed gradients. After applying the gradient scaling our meta-learner updates its internal memory based on the observed effect its prediction had. Our meta-learner can be combined with any gradient-based optimizer, learns on the fly and can be transferred to new optimization tasks. In our evaluations we show that our meta-learning algorithm speeds up learning of MNIST classification and a variety of learning control tasks, either in batch or online learning settings.
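A toy stand-in for the scale-then-update loop sketched above. The authors' memory model of the learning-rate landscape is replaced here by a delta-bar-delta-style per-parameter scale that grows when successive gradients agree in sign and shrinks when they flip; this is an illustrative simplification, not the paper's algorithm.

    import numpy as np

    class LearningRateMemory:
        def __init__(self, n_params, base_lr=0.01, up=1.2, down=0.5):
            self.scale = np.ones(n_params)        # per-parameter LR scale
            self.prev_grad = np.zeros(n_params)
            self.base_lr, self.up, self.down = base_lr, up, down

        def step(self, params, grad):
            # predict: scale the currently observed gradient
            params -= self.base_lr * self.scale * grad
            # update memory: grow scales where the gradient direction persisted
            agree = np.sign(grad) == np.sign(self.prev_grad)
            self.scale = np.where(agree, self.scale * self.up,
                                  self.scale * self.down)
            self.scale = np.clip(self.scale, 1e-3, 1e3)
            self.prev_grad = grad.copy()
            return params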
COMPARISON OF SOLAR TERRESTRIAL AND SPACE POWER GENERATION FOR EUROPE
Electricity supply from space is one opportunity to ensure a climatically sustainable energy supply. However, generation in space must compete with terrestrial systems such as photovoltaic or solar thermal power plants. This paper compares electricity supply from Solar Power Satellites in space with two terrestrial generation systems for several European load curves: constant base load with 8760 full load hours per year at several power levels from below 1 GW to full supply, as well as the remaining load where base load is subtracted from real existing load curves. Additionally, several combined space-terrestrial scenarios, optimized for real load curves, have been investigated. The results are reported as levelized electricity costs (LEC) and energy payback times (EPT).
Essential oils: their antibacterial properties and potential applications in foods--a review.
In vitro studies have demonstrated antibacterial activity of essential oils (EOs) against Listeria monocytogenes, Salmonella typhimurium, Escherichia coli O157:H7, Shigella dysenteriae, Bacillus cereus and Staphylococcus aureus at levels between 0.2 and 10 μl ml⁻¹. Gram-negative organisms are slightly less susceptible than gram-positive bacteria. A number of EO components have been identified as effective antibacterials, e.g. carvacrol, thymol, eugenol, perillaldehyde, cinnamaldehyde and cinnamic acid, having minimum inhibitory concentrations (MICs) of 0.05-5 μl ml⁻¹ in vitro. A higher concentration is needed to achieve the same effect in foods. Studies with fresh meat, meat products, fish, milk, dairy products, vegetables, fruit and cooked rice have shown that the concentration needed to achieve a significant antibacterial effect is around 0.5-20 μl g⁻¹ in foods and about 0.1-10 μl ml⁻¹ in solutions for washing fruit and vegetables. EOs comprise a large number of components and it is likely that their mode of action involves several targets in the bacterial cell. The hydrophobicity of EOs enables them to partition into the lipids of the cell membrane and mitochondria, rendering them permeable and leading to leakage of cell contents. Physical conditions that improve the action of EOs are low pH, low temperature and low oxygen levels. Synergism has been observed between carvacrol and its precursor p-cymene and between cinnamaldehyde and eugenol. Synergy between EO components and mild preservation methods has also been observed. Some EO components are legally registered flavourings in the EU and the USA. Undesirable organoleptic effects can be limited by careful selection of EOs according to the type of food.
Urethral Dilatation, Ectopic Testis, Hypoplasia Penis, and Phimosis in a Kilis Goat Kid
A 6-day-old male Kilis goat kid was referred with complaints of a poor sucking reflex, dysuria, and swelling of the scrotal area; it began to urinate when the sac was pressed. On clinical examination of the kid, the urethral orifice and urethral process were observed to be narrowed. The skin between the anus and scrotum had not closed fully along the ventral line. The most important finding was the penile urethral dilatation, which caused the fluctuating swelling in the scrotal region. Phimosis and two ectopic testes were also found, on the right and left sides in front of the preputium. There were no pathological changes in the hematological and urine analyses. The urethral diverticulum was treated by urethrostomy, and hypoplasia of the penis was noted during the operation. No treatment for the penile hypoplasia, phimosis or ectopic testes was performed. Postoperatively, the kid healed, and urination via the urethral fistula was observed without any complications.
Psychological impact of thrombophilia testing in asymptomatic family members.
INTRODUCTION Psychological distress and worry are commonly described as potential consequences of genetic screening for various disorders. Thrombophilia testing may be offered to asymptomatic persons with a family history of venous thrombosis and thrombophilia. Our objectives were to measure the psychological impacts of thrombophilia testing in first degree relatives and to determine if our intervention, a more intensive care strategy to heighten awareness of both risk and symptoms of thrombosis, caused psychological distress. MATERIALS & METHODS First degree relatives of patients with a known thrombophilia and history of venous thrombosis were tested for thrombophilia. The Perceived Risk Questionnaire and validated psychological instruments (POMS-SF; SCL-90-R Somatization subscale) were administered before testing, one week after receiving results and a year later. Thrombophilia carriers were randomized in family clusters to receive Standard Care or the Intensive Care intervention. RESULTS There were 100 carriers who were randomized to Standard (n=48) or Intensive Care (n=52) and 103 non-carriers. One week after receiving results, we did not observe any difference in psychological distress between carriers and non-carriers, with low levels overall. At 1 year, psychological distress scores were similar between the Standard and Intensive Care arms and did not differ from baseline. CONCLUSIONS The results of this pilot study do not support the concern that thrombophilia screening in asymptomatic relatives triggers psychological distress and worry. Furthermore, our intensive educational approach did not appear to induce undue distress. While the positive benefits of thrombophilia screening remain unproven, clinicians should not be deterred from offering screening by the fear of causing psychological harm.
Meet continuous lattices, limit spaces, and L-topological spaces
In this paper, a systematic investigation of the relationship between meet continuous lattices, limit spaces, and L-topological spaces is given. It is a continuation of the investigation of this topic by Höhle (2000, 2001). The relationship between the Lowen functors and the functors introduced by Höhle (2000, 2001) is made clear.
Where is the technology-induced pedagogy? Snapshots from two multimedia EFL classrooms
This research examines two multimedia secondary EFL classrooms to identify what changes, pedagogical or otherwise, have taken place in technologically integrated classroom practice. The research analyses data generated from a range of sources: classroom observations, videotapes and the teacher's lesson plans. It is argued that substantial pedagogical innovations will not come unless there is a perceived change in the understanding of the process of teaching and learning and of the philosophy of language. The research concludes that the traditional Chinese notion of teaching and the role of the teacher in the classroom need to be redefined to allow a learner-centred multimedia language classroom to emerge. New trends in Chinese ELT: a new educational reform, which emphasises the integration of new technologies into the curriculum, is surging in China with increased momentum. EFL professionals are either constantly searching for, or being timetabled into, various computer literacy courses to develop and upgrade technology-based skills. It is not uncommon for some forward-thinking institutions to establish a skill database that contains information about staff skills in using various computer software. On the other hand, the ability to use PowerPoint, Authorware or Flash to preen their courseware and give showcase "Multimedia EFL Lessons" has been admired as an enviable skill.
The ULK1 complex: sensing nutrient signals for autophagy activation.
The Atg1/ULK1 complex plays a central role in starvation-induced autophagy, integrating signals from upstream sensors such as MTOR and AMPK and transducing them to the downstream autophagy pathway. Much progress has been made in the last few years in understanding the mechanisms by which the complex is regulated through protein-protein interactions and post-translational modifications, providing insights into how the cell modulates autophagy, particularly in response to nutrient status. However, how the ULK1 complex transduces upstream signals to the downstream central autophagy pathway is still unclear. Although the protein kinase activity of ULK1 is required for its autophagic function, its protein substrate(s) responsible for autophagy activation has not been identified. Furthermore, examples of potential ULK1-independent autophagy have emerged, indicating that under certain specific contexts, the ULK1 complex might be dispensable for autophagy activation. This raises the question of how the autophagic machinery is activated independent of the ULK1 complex and what are the biological functions of such noncanonical autophagy pathways.
The Influence of Peer Pressure on Criminal Behaviour
Peer pressure is a recurring phenomenon in criminal or deviant behaviour, especially as it pertains to adolescents. It may begin in early childhood, at about 5 years of age, and intensify through childhood to become most intense in the adolescent years. This paper examines how peer pressure operates among adolescents and how it may influence or create the leverage for non-conformity to societal norms and laws. The paper analyses the process and occurrence of peer influence and pressure on individuals and groups within the framework of the social learning and social control theories. Major features of the peer pressure process are identified as group dynamics, delinquent peer subculture, peer approval of delinquent behaviour, and sanctions for non-conformity, which include ridicule, mockery, ostracism and even mayhem or assault in some cases. The paper also highlights acceptance and rejection as key concepts that determine the sway or gravitation of adolescents toward deviant and criminal behaviour. Finally, it concludes that peer pressure exists to enforce conformity, and that in a delinquent subculture the result is conformity to criminal codes and behaviour. The paper recommends more urgent, serious and offensive grass-roots approaches by governments and institutions against this growing threat to the continued peace, orderliness and development of society.
PointNet: A 3D Convolutional Neural Network for real-time object class recognition
During the last few years, Convolutional Neural Networks have slowly but surely become the default method to solve many computer vision related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite all the efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, as an improvement over the existing methods, using density occupancy grid representations for the input data and integrating them into a supervised Convolutional Neural Network architecture. Extensive experimentation was carried out using ModelNet, a large-scale 3D CAD model dataset, to train and test the system, proving that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.
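A minimal VoxNet-style 3D CNN over density occupancy grids, sketched in PyTorch. The 32-cubed grid resolution, channel widths, and layer count are illustrative assumptions; the architecture evaluated in the paper may differ.

    import torch
    import torch.nn as nn

    class Simple3DNet(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 32, kernel_size=5, stride=2),  # 32^3 -> 14^3
                nn.ReLU(),
                nn.Conv3d(32, 32, kernel_size=3),           # 14^3 -> 12^3
                nn.ReLU(),
                nn.MaxPool3d(2),                             # 12^3 -> 6^3
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 6 * 6 * 6, 128),
                nn.ReLU(),
                nn.Linear(128, n_classes),
            )

        def forward(self, x):      # x: (batch, 1, 32, 32, 32) occupancy grid
            return self.classifier(self.features(x))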
The Borsuk-Ulam-property, Tucker-property and constructive proofs in combinatorics
This article is concerned with a general scheme on how to obtain constructive proofs for combinatorial theorems that have topological proofs so far. To this end the combinatorial concept of Tucker-property of a finite group $G$ is introduced and its relation to the topological Borsuk-Ulam-property is discussed. Applications of the Tucker-property in combinatorics are demonstrated.
A Reflection on Call-by-Value
A number of compilers exploit the following strategy: translate a term to continuation-passing style (CPS) and optimize the resulting term using a sequence of reductions. Recent work suggests that an alternative strategy is superior: optimize directly in an extended source calculus. We suggest that the appropriate relation between the source and target calculi may be captured by a special case of a Galois connection known as a reflection. Previous work has focused on the weaker notion of an equational correspondence, which is based on equality rather than reduction. We show that Moggi's monad translation and Plotkin's CPS translation can both be regarded as reflections, and thereby strengthen a number of results in the literature.
COOLCAT: an entropy-based algorithm for categorical clustering
In this paper we explore the connection between clustering categorical data and entropy: clusters of similar points have lower entropy than those of dissimilar ones. We use this connection to design an incremental heuristic algorithm, COOLCAT, which is capable of efficiently clustering large data sets of records with categorical attributes, and data streams. In contrast with other categorical clustering algorithms published in the past, COOLCAT's clustering results are very stable for different sample sizes and parameter settings. Also, the criterion for clustering is a very intuitive one, since it is deeply rooted in the well-known notion of entropy. Most importantly, COOLCAT is well equipped to deal with clustering of data streams (continuously arriving streams of data points), since it is an incremental algorithm capable of clustering new points without having to look at every point that has been clustered so far. We demonstrate the efficiency and scalability of COOLCAT by a series of experiments on real and synthetic data sets.
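A compact sketch of the incremental, entropy-driven assignment at the core of this approach: each incoming record joins the cluster whose expected entropy grows the least. Bootstrapping of initial seed clusters, sampling, and re-clustering refinements are omitted, and the data layout is an illustrative choice.

    import math
    from collections import defaultdict

    def cluster_entropy(counts, n):
        """Sum of per-attribute entropies for a cluster of n records."""
        if n == 0:
            return 0.0
        h = 0.0
        for attr_counts in counts:
            for c in attr_counts.values():
                if c > 0:
                    p = c / n
                    h -= p * math.log(p)
        return h

    def expected_entropy(clusters):
        """Size-weighted mean of cluster entropies."""
        total = sum(n for _, n in clusters)
        return sum((n / total) * cluster_entropy(cnt, n) for cnt, n in clusters)

    def assign(record, clusters):
        """Place `record` (a tuple of categorical values) in the cluster
        that minimizes the expected entropy after insertion."""
        best_k, best_h = 0, float("inf")
        for k in range(len(clusters)):
            counts, n = clusters[k]
            for i, v in enumerate(record):       # trial insertion
                counts[i][v] += 1
            clusters[k] = (counts, n + 1)
            h = expected_entropy(clusters)
            for i, v in enumerate(record):       # undo the trial
                counts[i][v] -= 1
            clusters[k] = (counts, n)
            if h < best_h:
                best_k, best_h = k, h
        counts, n = clusters[best_k]
        for i, v in enumerate(record):           # commit the best choice
            counts[i][v] += 1
        clusters[best_k] = (counts, n + 1)
        return best_k

    # clusters: [(one defaultdict(int) per attribute, cluster size), ...], e.g.
    # clusters = [([defaultdict(int), defaultdict(int)], 0) for _ in range(3)]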
An agent for optimizing airline ticket purchasing
Buying airline tickets is a ubiquitous task in which it is difficult for humans to minimize cost due to insufficient information. Even with historical data available for inspection (a recent addition to some travel reservation websites), it is difficult to assess how purchase timing translates into changes in expected cost. To address this problem, we introduce an agent that is able to optimize purchase timing on behalf of customers. We provide results demonstrating that the method can perform much closer to the optimal purchase policy than existing decision-theoretic approaches for this domain.
Cross-cultural adaptation of research instruments: language, setting, time and statistical considerations
BACKGROUND Research questionnaires are not always translated appropriately before they are used in new temporal, cultural or linguistic settings. The results based on such instruments may therefore not accurately reflect what they are supposed to measure. This paper aims to illustrate the process and required steps involved in the cross-cultural adaptation of a research instrument, using the adaptation process of an attitudinal instrument as an example. METHODS A questionnaire was needed for the implementation of a study in Norway in 2007. There were no appropriate instruments available in Norwegian, so an Australian-English instrument was cross-culturally adapted. RESULTS The adaptation process included investigation of conceptual and item equivalence. Two forward and two back-translations were synthesized and compared by an expert committee. Thereafter the instrument was pretested and adjusted accordingly. The final questionnaire was administered to opioid maintenance treatment staff (n=140) and harm reduction staff (n=180). The overall response rate was 84%. The original instrument failed confirmatory analysis. Instead, a new two-factor scale was identified and found valid in the new setting. CONCLUSIONS The failure of the original scale highlights the importance of adapting instruments to current research settings. It also emphasizes the importance of ensuring that concepts within an instrument are equivalent between the original and target language, time and context. If the described stages in the cross-cultural adaptation process had been omitted, the findings would have been misleading, even if presented with apparent precision. Thus, it is important to consider possible barriers when making a direct comparison between different nations, cultures and times.
Edge-based split-and-merge superpixel segmentation
Superpixels are an oversegmentation of an image and are popularly used as a preprocessing step in many computer vision applications. Many state-of-the-art superpixel segmentation algorithms rely either on minimizing special energy functions or on clustering pixels in an effective distance space. In this paper, we introduce a novel algorithm that produces superpixels from the edge map using a split-and-merge strategy. First, we obtain initial superpixels of uniform size and shape. Second, in the splitting stage, we find all possible splitting contours for each superpixel by overlapping the boundaries of the superpixel with the edge map, and then choose the best one to split it, ensuring that the superpixels produced by the split are dissimilar in color and similar in size. Third, in the merging stage, the Bhattacharyya distance between the RGB color histograms of each pair of adjacent superpixels is computed to evaluate their color similarity for merging. Finally, we iterate the split-and-merge steps until no superpixel changes. Experimental results on the Berkeley Segmentation Dataset (BSD) show that the proposed algorithm achieves good performance compared with state-of-the-art superpixel segmentation algorithms.
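The merging criterion can be made concrete with a short sketch: the Bhattacharyya distance between normalized RGB histograms of two adjacent superpixels, with a merge when the distance falls below a threshold. The bin count and threshold are illustrative assumptions.

    import numpy as np

    def rgb_histogram(pixels, bins=8):
        """Normalized joint RGB histogram of an (N, 3) pixel array."""
        hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / max(len(pixels), 1)

    def bhattacharyya_distance(p, q, eps=1e-12):
        """Distance between two normalized histograms; 0 means identical."""
        return -np.log(np.sum(np.sqrt(p * q)) + eps)

    def should_merge(pixels_a, pixels_b, threshold=0.2):
        """Merge two adjacent superpixels when their colors are similar."""
        d = bhattacharyya_distance(rgb_histogram(pixels_a),
                                   rgb_histogram(pixels_b))
        return d < threshold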
Short-term maximal-intensity resistance training increases volitional function and strength in chronic incomplete spinal cord injury: a pilot study.
BACKGROUND AND PURPOSE Recent research shows that individuals with an incomplete spinal cord injury (SCI) have a reserve of force-generating capability that is observable during repeated intermittent maximal volitional effort contractions. Previous studies suggest that increased neural drive contributes to the enhanced short-term force-generating capabilities. Whether this reserve can be harnessed with repeated training is unclear. The purpose of this pilot study was to investigate the effects of 4 weeks of maximal-intensity resistance training, compared with conventional progressive resistance training, on lower extremity function and strength in chronic incomplete SCI. METHODS Using a randomized crossover design, 5 individuals with chronic (> 1 year postinjury) SCI, American Spinal Injury Association Impairment Scale classification C or D, were tested before and after 4 weeks of both maximal-intensity training and progressive resistance training paradigms. Outcome measures included the 6-Minute Walk Test, the Berg Balance Scale, and peak isometric torque for strength of lower extremity muscles. RESULTS Maximal-intensity resistance training was associated with an average increase of 12.19 ± 8.29 m on the 6-Minute Walk Test, 4 ± 1.9 points on the Berg Balance Scale, and 4 ± 4.5 points on the lower extremity motor score, while no changes in the above scores were seen with conventional training. Furthermore, significant increases in peak volitional isometric torque (mean increase = 20 ± 8 Nm) were observed following maximal-intensity resistance training when compared with conventional training (mean increase = 0.12 ± 3 Nm, P = 0.03). DISCUSSION AND CONCLUSIONS A maximal-intensity training paradigm may facilitate rapid gains in volitional function and strength in persons with chronic motor-incomplete SCI, using a simple short-term training paradigm.
Efficient Methods for Overlapping Group Lasso
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the $\ell_q$ norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
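For reference, the building block under discussion, the proximal operator of the group Lasso penalty, has a closed form when the groups are disjoint: block soft-thresholding, sketched below. For overlapping groups this closed form no longer applies, which is why the paper computes the proximal operator via a smooth convex dual problem instead.

    import numpy as np

    def prox_group_lasso(v, groups, lam):
        """argmin_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2,
        for disjoint groups (block soft-thresholding)."""
        x = np.zeros_like(v, dtype=float)
        for g in groups:
            norm = np.linalg.norm(v[g])
            if norm > lam:                        # shrink the group...
                x[g] = (1.0 - lam / norm) * v[g]  # ...else zero it entirely
        return x

    # usage, with groups given as index arrays:
    # prox_group_lasso(np.array([3.0, 4.0, 0.1]),
    #                  [np.array([0, 1]), np.array([2])], lam=1.0)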
Scheduling Garbage Collector for Embedded Real-Time Systems
This paper proposes a new scheduling method for multiple mutators and a garbage collector running on embedded real-time systems with a single processor and no virtual memory. The hard real-time tasks should reserve a certain amount of heap memory to prevent memory starvation and/or deadline miss. Since the memory requirement depends on the worst-case response time of a garbage collector, the traditional approach in which garbage collection is performed in the background demands large memory space. The proposed scheduling algorithm is based on an aperiodic scheduling technique, sporadic server. This paper also presents a modified copying garbage collection algorithm with hardware support. In order to minimize the worst-case response time of a garbage collector thus reducing the memory requirement, the garbage collector runs as the highest priority task with a preset bandwidth. This paper also investigates the schedulability of a garbage collector and mutator tasks as well as the worst-case memory requirement. Performance analysis shows that the proposed algorithm can provide a considerable reduction in the worst-case memory requirement compared with the background policy. Simulation results demonstrate that the proposed algorithm can produce the feasible memory requirement comparable to the complex on-line scheduling algorithm such as slack stealing.
Billboard, banner, blackboard: Marina Warner’s photographs of the Cultural Revolution
In 2012 the Royal College of Art hosted an international conference, The Shadow of Language, which brought together artists and academics to explore the interplay between image, language and translation by focusing on contemporary Chinese practices. At the conference Marina Warner, the Anglo-Italian writer of fiction, criticism and history, addressed the question of translation through the shadows in animation in her keynote lecture, 'The Ambiguous Life of Shadows'. It was then that I found out that Marina Warner had a large collection of photographs she had taken in China less than a year before Mao Zedong's death. At the tail end of the Cultural Revolution, the country was starting to open after decades of determined isolation. New political and economic relationships were being established, and invitations extended to artists and intellectuals in the West, such as Roland Barthes and Julia Kristeva. The Italian filmmaker Michelangelo Antonioni was also invited, in 1972, by Premier Zhou Enlai to make the film Chung Kuo, Cina, a three-and-a-half-hour-long documentary that focuses on the life of the people in China.
Cooperative neural networks (CoNN): Exploiting prior independence structure for improved classification
We propose a new approach, called cooperative neural networks (CoNN), which uses a set of cooperatively trained neural networks to capture latent representations that exploit prior given independence structure. The model is more flexible than traditional graphical models based on exponential family distributions, but incorporates more domain-specific prior structure than traditional deep networks or variational autoencoders. The framework is very general and can be used to exploit the independence structure of any graphical model. We illustrate the technique by showing that we can transfer the independence structure of the popular Latent Dirichlet Allocation (LDA) model to a cooperative neural network, CoNN-sLDA. Empirical evaluation of CoNN-sLDA on supervised text classification tasks demonstrates that the theoretical advantages of prior independence structure can be realized in practice: we demonstrate a 23% reduction in error on the challenging MultiSent data set compared to the state of the art.
Automated extraction of product comparison matrices from informal product descriptions
Domain analysts, product managers, and customers aim to capture the important features of and differences among a set of related products. Case-by-case review of each product description is a laborious and time-consuming task that fails to deliver a condensed view of a family of products. In this article, we investigate the use of automated techniques for synthesizing a product comparison matrix (PCM) from a set of product descriptions written in natural language. We describe a tool-supported process, based on term recognition, information extraction, clustering, and similarities, capable of identifying and organizing features and values in a PCM, despite the informality and absence of structure in the textual descriptions of products. We evaluate our proposal against numerous categories of products mined from BestBuy. Our empirical results show that the synthesized PCMs contain a wealth of quantitative, comparable information that can potentially complement or even refine technical descriptions of products. A user study shows that our automatic approach is capable of extracting a significant portion of correct features and correct values. The approach has been implemented in MatrixMiner, a web environment with interactive support for automatically synthesizing PCMs from informal product descriptions. MatrixMiner also maintains traceability with the original descriptions and the technical specifications for further refinement or maintenance by users.
NFV enabling network slicing for 5G
Network slicing is one of the most discussed concepts for designing 5G networks. It is intended to enable operators to slice a single physical network into multiple virtual networks optimized according to specific services and business goals. Network Functions Virtualization (NFV), a technology developed by the European Telecommunication Standards Institute (ETSI), will play a prominent role in implementing this concept. This paper reviews the options that the NFV technology offers to enable network slicing and highlights areas that may require further studies and standardization work.
Comparison of upper arm kinematics during a volleyball spike between players with and without a history of shoulder injury.
Volleyball players are at high risk of overuse shoulder injuries, with spike biomechanics a perceived risk factor. This study compared spike kinematics between elite male volleyball players with and without a history of shoulder injuries. Height, mass, maximum jump height, passive shoulder rotation range of motion (ROM), and active trunk ROM were collected on elite players with (13) and without (11) shoulder injury history and were compared using independent samples t tests (P < .05). The average of spike kinematics at impact and range 0.1 s before and after impact during down-the-line and cross-court spike types were compared using linear mixed models in SPSS (P < .01). No differences were detected between the injured and uninjured groups. Thoracic rotation and shoulder abduction at impact and range of shoulder rotation velocity differed between spike types. The ability to tolerate the differing demands of the spike types could be used as return-to-play criteria for injured athletes.
Food Image Recognition Using Very Deep Convolutional Networks
We evaluated the effectiveness of a deep-learning approach based on the specifications of Google's image recognition architecture Inception in classifying food images. The architecture is a deep convolutional neural network (DCNN) with a depth of 54 layers. In this study, we fine-tuned this architecture for classifying food images from three well-known food image datasets: ETH Food-101, UEC FOOD 100, and UEC FOOD 256. On these datasets we achieved, respectively, 88.28%, 81.45%, and 76.17% top-1 accuracy and 96.88%, 97.27%, and 92.58% top-5 accuracy. To the best of our knowledge, these results significantly improve on the best published results obtained on the same datasets, while requiring less computational power, since the number of parameters and the computational complexity are much smaller than those of the competitors'. Because of this, even if it is still rather large, the deep network based on this architecture appears to be at least closer to the requirements for mobile systems.
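A hedged sketch of the general fine-tuning recipe: load an ImageNet-pretrained Inception-style network and replace its classification head for the food classes. torchvision's inception_v3 (recent torchvision versions) stands in here for the paper's 54-layer architecture; the class count and optimizer settings are illustrative.

    import torch
    import torch.nn as nn
    from torchvision import models

    def build_food_classifier(n_classes=101):   # e.g. ETH Food-101
        net = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
        # replace the main and auxiliary classification heads
        net.fc = nn.Linear(net.fc.in_features, n_classes)
        net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, n_classes)
        return net

    net = build_food_classifier()
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)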
Comparison of clinical performance of zirconia implants and titanium implants in animal models: a systematic review.
PURPOSE This study aimed to compare the values of removal torque (RT) and bone-implant contact (BIC) reported in different animal studies for zirconia and titanium implants. MATERIALS AND METHODS A systematic review of the literature was performed to analyze BIC and RT of animal studies in which both zirconia and titanium dental implants were used. To identify the studies to include in this systematic review, an exhaustive search of PubMed was performed of animal studies published in English with reports on the quantification of the osseointegration of both titanium and zirconia implants by means of BIC and/or RT. The results were aggregated and analyzed within each of the animal models (pig, rabbit, rat, monkey, dog, and sheep). RESULTS The selection process resulted in a final sample of 16 studies. In general, no significant differences were found between titanium and zirconia. The significant differences in terms of BIC and RT reported by the authors were attributable to the different surface treatments and microporosities of the implant surfaces studied, not to the materials themselves. Only two articles reported significantly lower BIC for modified zirconia implants as compared to modified titanium implants. Four authors described statistically significant differences in terms of RT between zirconia and titanium implants in the different animal models, regardless of the surface treatment received by the implants. CONCLUSIONS Within the limitations of this study, the values for the BIC and RT of zirconia implants in most of the studies analyzed did not show statistical differences compared with titanium implants. Modified-surface zirconia may have potential as a candidate for a successful implant material, although further clinical studies are necessary.
What would other programmers do: suggesting solutions to error messages
Interpreting compiler errors and exception messages is challenging for novice programmers. Presenting examples of how other programmers have corrected similar errors may help novices understand and correct such errors. This paper introduces HelpMeOut, a social recommender system that aids the debugging of error messages by suggesting solutions that peers have applied in the past. HelpMeOut comprises IDE instrumentation to collect examples of code changes that fix errors; a central database that stores fix reports from many users; and a suggestion interface that, given an error, queries the database for a list of relevant fixes and presents these to the programmer. We report on implementations of this architecture for two programming languages. An evaluation with novice programmers found that the technique can suggest useful fixes for 47% of errors after 39 person-hours of programming in an instrumented environment.
Re-embedding words
We present a fast method for re-purposing existing semantic word vectors to improve performance in a supervised task. Recently, with an increase in computing resources, it became possible to learn rich word embeddings from massive amounts of unlabeled data. However, some methods take days or weeks to learn good embeddings, and some are notoriously difficult to train. We propose a method that takes as input an existing embedding, some labeled data, and produces an embedding in the same space, but with a better predictive performance in the supervised task. We show improvement on the task of sentiment classification with respect to several baselines, and observe that the approach is most useful when the training set is sufficiently small.
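A minimal sketch of the re-embedding idea under stated assumptions: initialize an embedding layer from the existing word vectors, keep the same vector space, and fine-tune jointly with a small supervised classifier. The mean-pooling-plus-linear classifier here is an illustrative choice; the paper's actual model may differ.

    import torch
    import torch.nn as nn

    class ReEmbedClassifier(nn.Module):
        def __init__(self, pretrained, n_classes=2):
            super().__init__()
            # initialize from the existing embedding, then allow updates
            emb = torch.as_tensor(pretrained, dtype=torch.float32)
            self.emb = nn.Embedding.from_pretrained(emb, freeze=False)
            self.out = nn.Linear(emb.shape[1], n_classes)

        def forward(self, token_ids):              # (batch, seq_len)
            doc = self.emb(token_ids).mean(dim=1)  # average word vectors
            return self.out(doc)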
Urban ecosystem and urban continued development
This paper discusses the influence of the urbanization process on urban ecological conditions, introduces the main features of and problems in urban ecosystems, and puts forward an ecology-based perspective on sustainable urban development, in which sustainable cities are built on coordinated social, economic, and natural ecosystems.
Can we consider the policy instruments as cyclical substitutes? Some Empirical Evidence
The main objective of this article is to study how central banks and fiscal authorities interact when they conduct economic policy. Specifically, it is helpful to determine whether these two policymakers act under a complementarity or a substitutability strategy. Our methodology is based on a simple policy-mix model estimated by panel data methods. Our main results show that, for seven OECD countries during the period 1975-1997, the two policy instruments appear to be complementary. Moreover, even though central banks became more and more independent during this period, the estimates show that the reaction functions of central banks depend on output and that governments have continued to care about the inflation rate.
GraphBuilder: scalable graph ETL framework
Graph abstraction is essential for many applications, from finding a shortest path to executing complex machine learning (ML) algorithms like collaborative filtering. Graph construction from raw data for various applications is becoming challenging due to exponential growth in data, as well as the need for large-scale graph processing. Since graph construction is a data-parallel problem, MapReduce is well-suited for the task. We developed GraphBuilder, a scalable framework for graph Extract-Transform-Load (ETL), to offload many of the complexities of graph construction, including graph formation, tabulation, transformation, partitioning, output formatting, and serialization. GraphBuilder is written in Java for ease of programming, and it scales using the MapReduce model. In this paper, we describe the motivation for GraphBuilder, its architecture, its MapReduce algorithms, and a performance evaluation of the framework. Since large graphs must be partitioned across a cluster for storage and processing, and since partitioning methods have significant performance impacts, we develop several graph partitioning methods and evaluate their performance. We have also open-sourced the framework at https://01.org/graphbuilder/.
Population pharmacokinetics and pharmacodynamics of rivaroxaban in patients with acute coronary syndromes.
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT • The population pharmacokinetics and pharmacodynamics of rivaroxaban have been characterized in healthy subjects and in patients treated for venous thromboembolism prevention, deep vein thrombosis or atrial fibrillation. WHAT THIS STUDY ADDS • This article is the first description of the population pharmacokinetics (PK) and pharmacodynamics (PD) of rivaroxaban in patients with acute coronary syndrome (ACS). It is the largest population pharmacokinetic and pharmacodynamic study of rivaroxaban conducted to date (n = 2290). The PK and PK-PD relationship of rivaroxaban in patients with ACS were similar to those in other patient populations. In addition, model-based simulations showed that the influence of renal function and age on the exposure to rivaroxaban in the ACS population was similar to the findings from Phase 1 special population studies. These findings suggest that rivaroxaban has highly predictable PK-PD and may provide a consistent anticoagulant effect across the studied patient populations, which allows an accurate prediction of the dose to control anticoagulation optimally. AIMS The aim of this analysis was to use a population approach to facilitate the understanding of the pharmacokinetics and pharmacodynamics of rivaroxaban in patients with acute coronary syndrome (ACS) and to evaluate the influence of patient covariates on the exposure of rivaroxaban in patients with ACS. METHODS A population pharmacokinetic model was developed using pharmacokinetic samples from 2290 patients in Anti-Xa Therapy to Lower Cardiovascular Events in Addition to Standard Therapy in Subjects with Acute Coronary Syndrome Thrombolysis in Myocardial Infarction 46. The relationship between pharmacokinetics and the primary pharmacodynamic end point, prothrombin time, was evaluated. RESULTS The pharmacokinetics of rivaroxaban in patients with ACS were adequately described by an oral one-compartment model. The estimated absorption rate, apparent clearance and volume of distribution were 1.24 h⁻¹ (interindividual variability, 139%), 6.48 l h⁻¹ (31%) and 57.9 l (10%), respectively. Simulations indicate that the influences of renal function, age and bodyweight on exposure in ACS patients are consistent with the findings of previous Phase 1 studies. Rivaroxaban plasma concentrations exhibit a close-to-linear relationship with prothrombin time in the ACS population, with little interindividual variability. The estimated pharmacokinetic and pharmacodynamic parameters for the ACS patients were comparable to those for venous thromboembolism prevention, deep vein thrombosis and atrial fibrillation patients. CONCLUSIONS The similarity in the pharmacokinetics/pharmacodynamics of rivaroxaban among different patient populations and the low interindividual variability in the exposure-prothrombin time relationship indicate that the anticoagulant effect of rivaroxaban is highly predictable and consistent across all the patient populations studied.
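A worked instance of the reported oral one-compartment model, using the abstract's own population estimates (ka = 1.24 h⁻¹, CL/F = 6.48 l h⁻¹, V/F = 57.9 l). For a single dose D, the plasma concentration is C(t) = D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V; the 10 mg dose in the usage line is hypothetical.

    import math

    def conc_1cmt_oral(t, dose_mg, ka=1.24, cl=6.48, v=57.9):
        """Plasma concentration (mg/l) t hours after a single oral dose,
        one-compartment model with first-order absorption."""
        ke = cl / v  # elimination rate constant, h^-1
        return (dose_mg * ka / (v * (ka - ke))
                * (math.exp(-ke * t) - math.exp(-ka * t)))

    # concentration 2 h after a hypothetical 10 mg dose
    c2 = conc_1cmt_oral(2.0, 10.0)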
Large deep neural networks for MS lesion segmentation
Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which are time consuming and limit the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time-consuming operation of identifying and validating MS lesions. In this paper, a deep neural network for MS lesion segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images was used to train a deep neural network to identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.
Who has undiagnosed dementia? A cross-sectional analysis of participants of the Aging, Demographics and Memory Study.
BACKGROUND delays in diagnosing dementia may lead to suboptimal care, yet around half of those with dementia are undiagnosed. Any strategy for case finding should be informed by understanding the characteristics of the undiagnosed population. We used cross-sectional data from a population-based sample with dementia aged 71 years and older in the United States to describe the undiagnosed population and identify factors associated with non-diagnosis. METHODS the Aging, Demographics and Memory Study (ADAMS) Wave A participants (N = 856) each underwent a detailed neuropsychiatric investigation. Informants were asked whether the participant had ever received a doctor's diagnosis of dementia. We used multiple logistic regression to identify factors associated with informant report of a prior dementia diagnosis among those with a study diagnosis of dementia. RESULTS of those with a study diagnosis of dementia (n = 307), a prior diagnosis of dementia was reported by 121 informants (weighted proportion = 42%). Prior diagnosis was associated with greater clinical dementia rating (CDR), from 26% (CDR = 1) to 83% (CDR = 5). In multivariate analysis, those aged 90 years or older were less likely to be diagnosed (P = 0.008), but prior diagnosis was more common among married women (P = 0.038) and those who had spent more than 9 years in full-time education (P = 0.043). CONCLUSIONS people with dementia who are undiagnosed are older, have fewer years in education, are more likely to be unmarried, male and have less severe dementia than those with a diagnosis. Policymakers and clinicians should be mindful of the variation in diagnosis rates among subgroups of the population with dementia.
Stacking-based deep neural network: Deep analytic network on convolutional spectral histogram features
Stacking-based deep neural network (S-DNN), in general, denotes a deep neural network (DNN) resemblance in terms of its very deep, feedforward network architecture. The typical S-DNN aggregates a variable number of individually learnable modules in series to assemble a DNN-alike alternative to the targeted object recognition tasks. This work likewise devises an S-DNN instantiation, dubbed deep analytic network (DAN), on top of the spectral histogram (SH) features. The DAN learning principle relies on ridge regression, and some key DNN constituents, specifically, rectified linear unit, fine-tuning, and normalization. The DAN aptitude is scrutinized on three repositories of varying domains, including FERET (faces), MNIST (handwritten digits), and CIFAR10 (natural objects). The empirical results unveil that DAN escalates the SH baseline performance over a sufficiently deep layer.
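A minimal sketch of the ridge-regression learning step that such stacking modules rely on: each module's weights are obtained in closed form rather than by backpropagation, and the ReLU output of one module feeds the next. The spectral-histogram front end and the paper's normalization and fine-tuning steps are omitted.

    import numpy as np

    def ridge_fit(X, Y, lam=1.0):
        """Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T Y."""
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

    def module_forward(X, W):
        """One stacking module: linear map learned by ridge, then ReLU."""
        return np.maximum(X @ W, 0.0)

    # Stacking: the ReLU output of one module becomes the input of the next,
    # with each module's W fitted by ridge_fit against one-hot class targets.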
Understanding Citizen's Continuance Intention to Use e-Government Website: a Composite View of Technology Acceptance Model and Computer Self-Efficacy
This study aims to understand the fundamental factors influencing the citizen's continuance intention to use e-Government websites, using the Technology Acceptance Model (TAM) as the base theoretical model. Computer self-efficacy is adopted as an additional factor that influences the citizen's continuance intention to use e-Government websites. To empirically test the proposed research model, a web-based survey was employed. The participants consisted of 614 country-wide citizens with at least a bachelor's degree and experience with e-Government websites. Regression analysis was conducted to test the model. The results revealed that perceived usefulness and perceived ease of use of e-Government websites and citizens' computer self-efficacy directly enhanced citizens' continuance intention to use e-Government websites. In addition, perceived ease of use of e-Government websites indirectly enhanced citizens' continuance intention through perceived usefulness.
Exfoliative cheilitis.
Exfoliative cheilitis is an uncommon condition affecting the vermilion zone of the upper, lower or both lips. It is characterized by the continuous production and desquamation of unsightly, thick scales of keratin; when removed, these leave a normal appearing lip beneath. The etiology is unknown, although some cases may be factitious. Attempts at treatment by a wide variety of agents and techniques have been unsuccessful. Three patients with this disease are reported and its relationship to factitious cheilitis and candidal cheilitis is discussed.
Embodying Art and Art History: An Experiment with a Class Video Happening for the Series 'Access Denied'
A book written in a foreign language that migrated to the US along with its author, an art historian, finds a new communicative dimension by becoming a ready-made for art-making purposes. Starting with an introduction explaining the genesis of the collaborative project Access Denied, this article focuses on one of the series' artworks, namely a video-happening, by exploring its genesis, development, and outcomes. Staged during the day of finals in an advanced art history seminar, the experiment provided an embodied artistic experience and prompted some reflections on art history course content in the debate that followed. The video happening became a basis for further reflection in this essay on the role of performance in stimulating arts-based research at the interstices between biography and scholarly inquiry, between art and art history, between modernism and postmodernism, between object and action, and between creation and destruction as the two opposite poles in modern creativity.

A Neglected Book on American Photorealism Becomes the Ground for Arts-Based Exploration

In 1998-99, while I was still living in Italy, I took on the challenge of writing my Master's degree dissertation on American Photorealism. The severe negative criticism and fast dismissal of this topic, made as early as 1972-73 by the eminent scholars Giulio Carlo Argan and Gillo Dorfles (Carli, 2003; Mercurio, 2003), created barriers that discouraged major attempts towards a deeper understanding of this artistic phenomenon in Italy. The existing superficial and negatively biased dismissal of photorealist artists situated them as incompetent in life drawing, as photography plagiarizers, as revivalists of nineteenth-century realism, and as promoting forms of capitalistic imperialism. My more favorable scholarly approach framed and justified Photorealism as a mature modernist development within the wider context of other artistic manifestations between the sixties and seventies. Two books came out of this dissertation: in 2000 a monograph on the artist Don Eddy, and in 2004 the first book-length scholarship on American Photorealism in Italy, L'Iperrealismo 'Fotografico' Americano in Pittura. Risonanze Storiche nella East e nella West Coast.[1] This self-sponsored book had a relatively modest circulation and was not able to find an appropriate, unprejudiced context in Italy in which to be promoted at the academic level. In 2005, not long after completing my doctoral dissertation, an interdisciplinary study reconstructing the reception of American Photorealism at Documenta 5 in Kassel, I decided to migrate to the United States. My scholarship moved with me; however, once in a new country, its language became another barrier to overcome, as I would not be able to teach my students from a book they were not able to read. For all these reasons, this book on Photorealism came to embody a sense of permanent displacement and in-between-ness that later on I found perfect for arts-based explorations. As a post-Pop legacy, Photorealism is situated at the edge between mature modernism and postmodernism, therefore connecting the timeline of our course on late modernism and our postmodern context. During a class experiment envisioned by one of the two Italian non-traditional students for Access Denied, my students unknowingly came to embody a ritual to metaphorically address art history's fallacies towards this misunderstood artistic style.

[1] The only exception is Italo Mussa's Il Vero più Vero del Vero, a monograph published in 1974. Preceded by a general introduction, the monograph focused on translations of the interviews with some Photorealist artists that appeared in the November-December 1972 issue of Art in America.
The Access Denied, from Conference Abstract to Class Project

On January 16, 2009, I received communication that my paper proposal, submitted earlier in Fall 2008, had been accepted at the 4th International Conference on the Arts in Society under the section "Teaching and Learning the Arts." The abstract I submitted, with the title The Access Denied Series: Restrictions as Liberating Forms of Communication, succinctly recalled the problematic origins of this book and the author's scholarly displacement, wishing for a transformation of the book into: "(...) a series of mixed-media conceptual works, which should creatively redesign a new role for this book, as well as a new educative dimension for art history itself. Indeed, I intend to involve my students into the artistic realization of this concept, in parallel to their art history studies, and have them virtually 'dialoguing' with some of the great masters of XX century art. In this way, the project becomes the occasion for exploring mine, my art students' and the audience's own artistic and intellectual identities. Paradoxically, a denied access to the reading and handling of this book would allow it to acquire a new and rich communicative dimension."[2] The uncertainty on the outcome of my proposal prevented me from inserting the project in any spring course syllabus. However, as soon as I received notice of acceptance, I immediately talked with my advanced visual arts students, enrolled in an art history special topics seminar,[3] to check if we could explore this idea together. I introduced this as an alternative, experimental project to the more traditional intensive-writing art history paper, and allowed students to choose between the two. A little less than half of the class opted to try the Access Denied project. The upper-division art history classes, including this seminar, address studio arts, art education, and graphic design students, who use art history as a complement to their education in the visual arts. Due to the modest size of the art history program, it is not unusual that students come to the advanced seminar after taking only an art appreciation course and the two-part survey, both at the lower-division level. Often, students come to the upper-level art history courses with little understanding and appreciation of art history contents and methods; however, the solid studio background accumulated in their respective majors makes them open to new ideas, especially if an art-making component is involved. The project became partially local and partially dispersed to reflect that I was going to engage both art making and art history.

[2] The full text of my submission can be found on the web: http://a09.cgpublisher.com/proposals/503/index_html
[3] The seminar in question was Pop Art (and Beyond), Spring 2009. The course started from the antecedents to Pop Art (Cubism, Duchamp, the Neo-Dadaism of Rauschenberg and Johns, the Nouveau Réalisme) and expanded on British and American Pop Art.
I teamed with ceramics Professor Mary Harden, a member of the art faculty from my previous affiliation in Oregon, and two non-traditional art students from Italy, Marco Pascarella and Marco Martelli.[4] This provided an additional layer of displacement, once again reflecting my biography: Italy, Oregon, and South Dakota, where the participants live, are all places where I have lived or am currently living. The extension of my biographical component into the series' general concept allowed me to become a conceptual artist. In my newly acquired role, I was able to establish a deeper dialogue with my art and design students. Students were allowed to expand the denying/releasing paradox of the general concept (an art history book not meant to be read acquires a new communicative dimension by being incorporated into art making) through an idea of their own choice and in their own language, whether English or Italian. In a recent study linking arts-based research and curricular practices, James Haywood Rolling (2010) warned against excessive academic structuring as hampering the development of "unpredictable thought, the kinds of metaphorical leaps that charter innovation" (pp. 110-111). Similarly, I am convinced that when an opportunity with great potential surfaces and is agreed upon by students, the faculty should, to a reasonable extent, welcome it and try to accommodate it in the course. Teaching art history through arts-based research requires the art historian to explore the interstices between the claimed "objectivity" of art history and the "subjectivity" of art making. As Rita L. Irwin and Stephanie Springgay (2008) claim, a/r/tographic inquiry finds realization especially when "hybrid communities of artists, educators, and researchers locate themselves in the space of the in-between to create self-sustaining interrelating identities that inform, enhance, evoke, and/or provoke one another" (p. 112). Each student participant was given a copy of the book. The student was required to choose a theme, ranging from the very personal to the more general, and to try to give it concrete visual form in any chosen medium, by incorporating the book in whole, in part, or in fragments into his/her own project. In order to tie his or her work to the general concept of the series Access Denied, the student would need to find a way to make the book inaccessible, in answer to the general question: "by making the book untouchable and unreadable, what do you want to communicate?" The inspiration for the students' projects originated from their study of a major artist, or a combination of artists, within the chronological span of the course. Students' ideas, as well as their techniques and process, were described in a development paper that combined artistic self-reflection on the chosen theme and meditation on art history. By making an artwork and a development paper connected to each other, they were asked to cross boundaries between studio art and art history.

[4] The two Italian non-traditional art students were invited to engage the project on a more mature level, so that their work would become a role model for the undergraduate students.
An Analysis of Video Lecture in MOOC
Video is a content delivery form used for delivering lecture content in a Massive Open Online Course (MOOC). As institutions plan to launch MOOCs on their own platforms or adapt existing ones, there is a need to specify the features required for video lectures in a MOOC. In this paper, we present a checklist of features for video lectures incorporated in MOOCs from the learner's perspective. A use-case-based approach was followed for identifying the features of video lectures in MOOCs. The checklist helps during requirement specification of video in a MOOC, as providers select the desired features from the checklist.
White-Box Traceable Ciphertext-Policy Attribute-Based Encryption Supporting Flexible Attributes
Ciphertext-policy attribute-based encryption (CP-ABE) enables fine-grained access control to encrypted data for commercial applications. There has been significant progress in CP-ABE over recent years because of two properties called traceability and large universe, greatly enriching the commercial applications of CP-ABE. Traceability is the ability of ABE to trace the malicious users or traitors who intentionally leak partial or modified decryption keys for profit. Nevertheless, due to the nature of CP-ABE, it is difficult to identify the original key owner from an exposed key, since the decryption privilege is shared by multiple users who have the same attributes. On the other hand, the property of large universe in ABE enlarges the practical applications by supporting a flexible number of attributes. Several systems have been proposed to obtain either of the above properties. However, none of them achieve the two properties simultaneously in practice, which limits the commercial applications of CP-ABE to a certain extent. In this paper, we propose two practical large universe CP-ABE systems supporting white-box traceability. Compared with existing systems, both proposed systems have two advantages: 1) the number of attributes is not polynomially bounded and 2) malicious users who leak their decryption keys can be traced. Moreover, another remarkable advantage of the second proposed system is that the storage overhead for traitor tracing is constant, which makes it suitable for commercial applications.
Internet Addiction Test (IAT): Which is the Best Factorial Solution?
BACKGROUND The Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results. OBJECTIVE The objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures. METHODS In order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient. RESULTS Data analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT scores from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT scores from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07). CONCLUSIONS Our study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly stronger support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature on the IAT, researchers should exercise caution when using the instrument and when dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.
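For readers who want to reproduce this kind of check, the sketch below computes Cronbach's alpha from scratch and runs an exploratory two-factor analysis with the third-party factor_analyzer package (one common choice, not necessarily the tool used in the study). The item data are a synthetic stand-in for the Italian IAT sample.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer   # third-party package, one option

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert-type item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic stand-in for 485 respondents x 20 IAT items driven by 2 factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(485, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, 20))
X = latent @ loadings + rng.normal(scale=0.6, size=(485, 20))

print("Cronbach alpha:", round(cronbach_alpha(X), 2))

fa = FactorAnalyzer(n_factors=2, rotation="promax")
fa.fit(X)
print(np.round(fa.loadings_, 2))             # item loadings on the 2 factors
```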
Usability Evaluation of Google Classroom: Basis for the Adaptation of GSuite E-Learning Platform
Electronic learning is a learning technology that plays an important role in modern education and training. Its great contribution lies in the fact that content is available at any place and on any device, from fixed desktops to mobile devices. Nowadays, education is accessible everywhere through the use of technology. Several LMS (Learning Management Systems) are available. One of the newest tools was released by Google under GSuite. Pangasinan State University is currently subscribed to GSuite for Education, and Google recently introduced Classroom as an e-learning platform for educational institutions. This research aims to evaluate the new product and its functionalities for the purpose of adoption and deployment. The main objective of this paper is to evaluate the usability of the Learning Management System (LMS) Google Classroom: its functionalities, features, and the satisfaction level of the students. Based on the results, the respondents agreed that Google Classroom can be recommended. The outcome of this study is a proposed e-learning platform for Pangasinan State University, Lingayen Campus, initially to serve the needs of the College of Hospitality Management, Business and Public Administration.
The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously
This paper introduces the Intentional Unintentional (IU) agent. This agent endows the deep deterministic policy gradients (DDPG) agent for continuous control with the ability to solve several tasks simultaneously. Learning to solve many tasks simultaneously has been a long-standing, core goal of artificial intelligence, inspired by infant development and motivated by the desire to build flexible robot manipulators capable of many diverse behaviours. We show that the IU agent not only learns to solve many tasks simultaneously but also learns faster than agents that target a single task at a time. In some cases, where the single-task DDPG method completely fails, the IU agent successfully solves the task. To demonstrate this, we build a playroom environment using the MuJoCo physics engine, and introduce a grounded formal language to automatically generate tasks.
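A conceptual sketch of the shared-torso, one-head-per-task idea follows (this is not the authors' DDPG code; the critic, replay, and training loop are omitted, and all dimensions are placeholders):

```python
import torch
import torch.nn as nn

class MultiTaskActor(nn.Module):
    """Shared torso, one policy head per task; all heads see the same stream."""
    def __init__(self, obs_dim, act_dim, n_tasks, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, act_dim), nn.Tanh())
            for _ in range(n_tasks)
        )

    def forward(self, obs):
        z = self.torso(obs)
        # One action proposal per task from the same observation stream.
        return torch.stack([head(z) for head in self.heads], dim=1)

actor = MultiTaskActor(obs_dim=10, act_dim=4, n_tasks=3)
print(actor(torch.randn(5, 10)).shape)       # torch.Size([5, 3, 4])
```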
The implementation of a full EMV smartcard for a point-of-sale transaction
This paper examines the changes in the payment card environment as they relate to EMV (named after Europay, MasterCard and Visa). This research shows that if the combined dynamic data authentication (CDA) variant of the EMV card is deployed in a full EMV environment, then, given the relevant known vulnerabilities and attacks against EMV technology, cardholder data obtained through unauthorized disclosure is of significantly reduced value to a criminal.
Fast Pedestrian Detection for Mobile Devices
In this paper we present a fast and robust solution for pedestrian detection that can run under real-time conditions even on mobile devices with limited computational power. We propose an optimization of channel-features-based multiscale detection schemes that uses eight detection models, one for each scale within a half octave. The image features have to be computed only once per half octave, and there is no need for feature approximation. We use multiscale square features for training the multiresolution pedestrian classifiers. The proposed solution achieves state-of-the-art detection results on the Caltech pedestrian benchmark at over 100 FPS using a CPU implementation, making it the fastest detection approach on the benchmark. The solution is fast enough to perform under real-time conditions on mobile platforms while preserving its robustness. The full detection process can run at over 20 FPS on a quad-core ARM CPU based smartphone or tablet, making it a suitable solution for mobile devices or embedded platforms with limited computational power.
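The scale bookkeeping can be made explicit with a small sketch: with eight models per half octave, adjacent scales are spaced by a factor of 2^(-1/16), and features only need to be recomputed when a new half octave begins. The numbers below are illustrative, not the paper's exact configuration.

```python
MODELS_PER_HALF_OCTAVE = 8

def plan_scales(n_scales):
    plan = []
    for i in range(n_scales):
        scale = 2 ** (-i / (2 * MODELS_PER_HALF_OCTAVE))  # 8 steps = 1/sqrt(2)
        half_octave = i // MODELS_PER_HALF_OCTAVE  # features computed once here
        model = i % MODELS_PER_HALF_OCTAVE         # trained model applied as-is
        plan.append((round(scale, 3), half_octave, model))
    return plan

for scale, half_octave, model in plan_scales(24):
    print(f"scale={scale}  features@half-octave={half_octave}  model={model}")
```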
Engineering investments — An approach to management education for undergraduate engineers
The questions of whether universities do, and should, provide management education for undergraduate engineers are addressed. Then, the approach to management education used at the State University of New York at Buffalo is described. This approach consists of a senior-level elective engineering course. The central theme of the course, which unifies its many topics of discussion, is the preparation and presentation of a proposal. The course is described in sufficient detail that the reader can identify the various elements of management education woven into the material presented to students.
Credit scoring using the hybrid neural discriminant technique
Credit scoring has become a very important task as the credit industry has experienced double-digit growth rates over the past few decades. The artificial neural network is becoming a very popular alternative in credit scoring models due to its associated memory characteristic and generalization capability. However, the choice of network topology, the relative importance of potential input variables, and the long training process have long been criticized, which has limited the network's application to credit scoring problems. The objective of this study is to explore the performance of credit scoring by integrating backpropagation neural networks with the traditional discriminant analysis approach. To demonstrate that including the credit scoring result from discriminant analysis simplifies the network structure and improves the credit scoring accuracy of the designed neural network model, credit scoring tasks are performed on a bank credit card data set. As the results reveal, the proposed hybrid approach converges much faster than the conventional neural networks model. Moreover, the credit scoring accuracy increases under the proposed methodology and outperforms traditional discriminant analysis and logistic regression approaches. © 2002 Elsevier Science Ltd. All rights reserved.
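A minimal sketch of the hybrid idea follows, assuming scikit-learn stands in for the original backpropagation network and synthetic data stands in for the bank credit card data set: the discriminant-analysis score is appended as one extra input feature before the network is trained.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a bank credit card data set (good/bad applicants).
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

# Hybrid step: append the discriminant score as one extra network input.
X_tr_h = np.column_stack([X_tr, lda.decision_function(X_tr)])
X_te_h = np.column_stack([X_te, lda.decision_function(X_te)])

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr_h, y_tr)
print("hybrid accuracy:", net.score(X_te_h, y_te))
```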
Laying the foundations for a World Wide Argument Web
This paper lays theoretical and software foundations for a World Wide Argument Web (WWAW): a large-scale Web of interconnected arguments posted by individuals to express their opinions in a structured manner. First, we extend the recently proposed Argument Interchange Format (AIF) to express arguments with a structure based on Walton’s theory of argumentation schemes. Then, we describe an implementation of this ontology using the RDF Schema Semantic Web-based ontology language, and demonstrate how our ontology enables the representation of networks of arguments on the Semantic Web. Finally, we present a pilot Semantic Web-based system, ArgDF, through which users can create arguments using different argumentation schemes and can query arguments using a Semantic Web query language. Manipulation of existing arguments is also handled in ArgDF: users can attack or support parts of existing arguments, or use existing parts of an argument in the creation of new arguments. ArgDF also enables users to create new argumentation schemes. As such, ArgDF is an open platform not only for representing arguments, but also for building interlinked and dynamic argument networks on the Semantic Web. This initial public-domain tool is intended to seed a variety of future applications for authoring, linking, navigating, searching, and evaluating arguments on the Web. © 2007 Elsevier B.V. All rights reserved.
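To illustrate the flavor of such a representation (using placeholder terms, not the actual AIF/ArgDF ontology vocabulary), a tiny argument network can be expressed in RDF with rdflib:

```python
from rdflib import Graph, Literal, Namespace, RDF

ARG = Namespace("http://example.org/arg#")  # placeholder, not the AIF namespace
g = Graph()

# Two information nodes and one scheme-application node linking them.
g.add((ARG.claim1, RDF.type, ARG.InformationNode))
g.add((ARG.claim1, ARG.text, Literal("A shared ontology enables a WWAW.")))
g.add((ARG.premise1, RDF.type, ARG.InformationNode))
g.add((ARG.premise1, ARG.text, Literal("Arguments must interoperate across tools.")))
g.add((ARG.inference1, RDF.type, ARG.SchemeApplicationNode))
g.add((ARG.inference1, ARG.hasPremise, ARG.premise1))
g.add((ARG.inference1, ARG.hasConclusion, ARG.claim1))

print(g.serialize(format="turtle"))
```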
Integrated Grasp Planning and Visual Object Localization For a Humanoid Robot with Five-Fingered Hands
In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability to grasp objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is complemented by two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated into a controller architecture to achieve manipulation task goals for the humanoid robot.
PUSH: A Pipelined Reconstruction I/O for Erasure-Coded Storage Clusters
A key design goal of erasure-coded storage clusters is to minimize reconstruction time, which in turn leads to high reliability by reducing the vulnerability window size. PULL-Rep and PULL-Sur are two existing reconstruction schemes based on PULL-type transmission, where a rebuilding node initiates reconstruction by sending a set of read requests to surviving nodes to retrieve surviving blocks. To eliminate the transmission bottleneck of replacement nodes in PULL-Rep and mitigate the extra overhead caused by noncontiguous disk access in PULL-Sur, we incorporate PUSH-type transmissions into node reconstruction, where the reconstruction procedure is divided into multiple tasks accomplished by surviving nodes in a pipelined manner. We also propose two PUSH-based reconstruction schemes (i.e., PUSH-Rep and PUSH-Sur), which can not only exploit the I/O parallelism of PULL-Sur, but also maintain the sequential I/O accesses inherited from PULL-Rep. We build four reconstruction-time models to study the reconstruction process and estimate the reconstruction time of the four schemes in large-scale storage clusters. We implement a proof-of-concept prototype where the four reconstruction schemes are deployed and quantitatively evaluated. Experimental results show that the PUSH-based reconstruction schemes outperform the PULL-based counterparts. In a real-world (9, 6) RS-coded storage cluster, PUSH-Rep speeds up the reconstruction time by a factor of 5.76 compared with PULL-Rep; PUSH-Sur accelerates the reconstruction by a factor of 1.85 relative to PULL-Sur.
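The difference between PULL and PUSH routing can be caricatured in a few lines. In the toy below, XOR stands in for the Galois-field arithmetic of real Reed-Solomon decoding; the point is only that PUSH lets each surviving node fold its block into a forwarded partial result instead of shipping every block to the replacement node.

```python
import functools
import operator

def pull_reconstruct(surviving_blocks):
    # PULL: the rebuilding node fetches every block and combines them itself,
    # making its own network link the bottleneck.
    return functools.reduce(operator.xor, surviving_blocks)

def push_reconstruct(surviving_blocks):
    # PUSH: each surviving node XORs its block into a partial result and
    # forwards it to the next node; only the final partial reaches the
    # replacement node, so the combining work is pipelined across nodes.
    partial = 0
    for block in surviving_blocks:
        partial ^= block
    return partial

blocks = [0b1011, 0b0110, 0b1100]
assert pull_reconstruct(blocks) == push_reconstruct(blocks)
print(bin(push_reconstruct(blocks)))
```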
Inferring causal impact using Bayesian structural time-series models
An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. In order to allocate a given budget optimally, for example, an advertiser must determine the incremental contributions that different advertising campaigns have made to web searches, product installs, or sales. This paper proposes to infer causal impact on the basis of a diffusion-regression state-space model that predicts the counterfactual market response that would have occurred had no intervention taken place. In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on synthetic data. We then demonstrate its practical utility by evaluating the effect of an online advertising campaign on search-related site visits. We discuss the strengths and limitations of our approach in improving the accuracy of causal attribution, power analyses, and principled budget allocation.
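A hedged sketch of the counterfactual logic follows (not the paper's MCMC-based implementation): fit a local-level state-space model with a control-series regressor on the pre-intervention period, forecast the post-period, and read the impact off the gap. statsmodels' UnobservedComponents serves here as a maximum-likelihood stand-in for the fully Bayesian treatment, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, t0 = 100, 70                                   # series length, intervention
x = rng.normal(size=n).cumsum() + 50              # synthetic control series
y = 1.2 * x + rng.normal(scale=0.5, size=n)       # outcome tracks the control
y[t0:] += 5.0                                     # injected "campaign" effect

model = sm.tsa.UnobservedComponents(
    y[:t0], level="local level", exog=x[:t0].reshape(-1, 1)
)
fit = model.fit(disp=False)

# Counterfactual: the model's forecast had no intervention taken place.
forecast = fit.get_forecast(steps=n - t0, exog=x[t0:].reshape(-1, 1))
impact = y[t0:] - forecast.predicted_mean
print("average pointwise impact:", impact.mean())  # ~5 by construction
print("cumulative impact:", impact.sum())
```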