title: string (8-300 chars) | abstract: string (0-10k chars)
Augmented Reality for the Study of Human Heart Anatomy
Augmented reality is increasingly applied in medical education, mainly because educators can share knowledge through virtual objects. This research describes the development of a web application that enhances users' medical knowledge of the anatomy of the human heart by means of augmented reality. Evaluation is conducted in two different facets. In the first, the authors of this paper evaluate the feasibility of a three-dimensional human heart module using one investigator under the supervision of an expert. In the second, evaluation aims at identifying usability issues by means of the cognitive walkthrough method. Three medical students (naive users) are asked to carry out three target tasks in the web application, and task completion is appraised in the light of the standard set of cognitive walkthrough questions. The first evaluation reveals mismatches in the augmented reality content, which are addressed in an effort to enhance the educational utility of the three-dimensional human heart. The cognitive walkthrough provides further improvement points, which may further enhance usability in the next software release. The current piece of work constitutes a pre-pilot evaluation: standardized methodologies are utilized in an effort to improve the application before its wider piloting with proper student populations. Such evaluations are considered important in experiential learning methods aiding the online education of anatomy courses.
Total Recall: System Support for Automated Availability Management
Goal: highly available data storage in large-scale distributed systems in which hosts are transiently inaccessible and individual host failures are common. Current peer-to-peer systems are prime examples: they are highly dynamic, challenging environments in which hosts join and leave frequently in the short term, hosts leave permanently over the long term, and workloads vary in terms of popularity, access patterns, and file size. These systems require automated availability management.
Big data, bigger dilemmas: A critical review
The recent interest in Big Data has generated a broad range of new academic, corporate, and policy practices along with an evolving debate amongst its proponents, detractors, and skeptics. While the practices draw on a common set of tools, techniques, and technologies, most contributions to the debate come either from a particular disciplinary perspective or with an eye on a domain-specific issue. A close examination of these contributions reveals a set of common problematics that arise in various guises in different places. It also demonstrates the need for a critical synthesis of the conceptual and practical dilemmas surrounding Big Data. The purpose of this article is to provide such a synthesis by drawing on relevant writings in the sciences, humanities, policy, and trade literature. In bringing these diverse literatures together, we aim to shed light on the common underlying issues that concern and affect all of these areas. By contextualizing the phenomenon of Big Data within larger socio-economic developments, we also seek to provide a broader understanding of its drivers, barriers, and challenges. This approach allows us to identify attributes of Big Data that need to receive more attention (autonomy, opacity, generativity, disparity, and futurity), leading to questions and ideas for moving beyond dilemmas.
Cognition and Docition in OFDMA-Based Femtocell Networks
We address the coexistence problem between macrocell and femtocell systems by controlling the aggregated interference generated by multiple femtocell base stations at the macrocell receivers in a distributed fashion. We propose a solution based on intelligent and self-organized femtocells implementing a real-time multi-agent reinforcement learning technique, known as decentralized Q-learning. We compare this cognitive approach to a non-cognitive algorithm and to the well-known iterative water-filling, showing the general superiority of our scheme in terms of (non-jeopardized) macrocell capacity. Furthermore, in distributed settings of such femtocell networks, the learning may be complex and slow due to mutually impacting decision making processes, which results in a non-stationary environment. We propose a timely solution (referred to as docition) to improve the learning process based on the concept of teaching and expert knowledge sharing in wireless environments. We demonstrate that such an approach improves the femtocells' learning ability and accuracy. We evaluate the docitive paradigm in the context of a 3GPP compliant OFDMA (Orthogonal Frequency Division Multiple Access) femtocell network modeled as a multi-agent system. We propose different docitive algorithms and we show their superiority to the well-known paradigm of independent learning in terms of speed of convergence and precision.
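The decentralized Q-learning scheme can be illustrated with a minimal sketch (not the authors' implementation): each femtocell independently maintains a Q-table over discretized interference states and power-adjustment actions. The state encoding, action set, and reward shape below are illustrative assumptions; docition would amount to copying part of an expert femtocell's Q-table into a newcomer's table before or during learning.

```python
import random
from collections import defaultdict

# Minimal sketch of independent (decentralized) Q-learning for femtocell
# power control. States, actions and the reward are illustrative assumptions,
# not the paper's exact formulation.

ACTIONS = [-2.0, 0.0, 2.0]        # change in transmit power (dB), assumed
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

class FemtoAgent:
    def __init__(self):
        self.q = defaultdict(lambda: [0.0] * len(ACTIONS))

    def act(self, state):
        if random.random() < EPSILON:           # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        qs = self.q[state]
        return qs.index(max(qs))

    def update(self, s, a, reward, s_next):
        best_next = max(self.q[s_next])
        self.q[s][a] += ALPHA * (reward + GAMMA * best_next - self.q[s][a])

def reward(macro_interference, own_capacity, threshold):
    # Penalize exceeding the macrocell interference threshold,
    # otherwise reward the femtocell's own capacity (assumed reward shape).
    return -1.0 if macro_interference > threshold else own_capacity
```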
TwoStep: An Authentication Method Combining Text and Graphical Passwords
Text-based passwords alone are subject to dictionary attacks, as users tend to choose weak passwords in favor of memorability, as well as to phishing attacks. Many recognition-based graphical password schemes alone, in order to offer sufficient security, require a number of rounds of verification, introducing usability issues. We suggest a hybrid user authentication approach combining text passwords, recognition-based graphical passwords, and a two-step process, to provide increased security with fewer rounds than such graphical passwords alone. A variation of this two-step authentication method, which we have implemented and deployed, is in use in the real world.
Estimating Fingerprint Deformation
Fingerprint matching is affected by the nonlinear distortion introduced in fingerprint impressions during the image acquisition process. This nonlinear deformation causes fingerprint features such as minutiae points and ridge curves to be distorted in a complex manner. In this paper we develop an average deformation model for a fingerprint impression (baseline impression) by observing its relative distortion with respect to several other impressions of the same finger. The deformation is computed using a Thin Plate Spline (TPS) model that relies on ridge curve correspondences between image pairs. The estimated average deformation is used to distort the minutiae template of the baseline impression prior to matching. An index of deformation has been proposed to select the average deformation model with the least variability corresponding to a finger. Preliminary results indicate that the average deformation model can improve the matching performance of a fingerprint matcher.
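As a rough illustration of the thin plate spline (TPS) warping step (a sketch, not the authors' implementation), SciPy's radial basis function interpolator with a thin-plate kernel can map corresponding points between two impressions; averaging several such displacement fields over many impression pairs would yield the average deformation model. The correspondence points below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import Rbf

# Sketch: fit a thin plate spline warp from ridge-curve correspondences
# between a baseline impression and another impression of the same finger.
# The correspondence points below are made up for illustration.
src = np.array([[10, 10], [80, 15], [45, 60], [20, 90], [85, 85]], float)
dst = src + np.array([[1, 2], [-2, 1], [0, 3], [2, -1], [-1, -2]], float)

tps_x = Rbf(src[:, 0], src[:, 1], dst[:, 0], function='thin_plate')
tps_y = Rbf(src[:, 0], src[:, 1], dst[:, 1], function='thin_plate')

def warp(points):
    """Apply the estimated deformation to minutiae coordinates."""
    pts = np.asarray(points, float)
    return np.column_stack([tps_x(pts[:, 0], pts[:, 1]),
                            tps_y(pts[:, 0], pts[:, 1])])

# Averaging the displacement fields of several impressions would give the
# average deformation used to pre-distort the baseline minutiae template.
minutiae = np.array([[30.0, 40.0], [70.0, 20.0]])
print(warp(minutiae))
```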
Analysis of High-Performance Fast Feedthrough Logic Families in CMOS
This brief presents a new CMOS logic family using the feedthrough evaluation concept and analyzes its sensitivity against technology parameters for practical applications. Feedthrough logic (FTL) allows a partial evaluation in a computational block before its input signals are valid, and performs a quick final evaluation as soon as the inputs arrive. FTL is well suited to arithmetic circuits where the critical path is made of a large cascade of inverting gates. Furthermore, FTL-based circuits perform better at high fanout and high switching frequencies due to both lower delay and lower dynamic power consumption. Experimental results for practical circuits demonstrate that low-power FTL provides a smaller propagation delay (4.1 times), lower energy consumption (35.6%), and a similar combined delay, power consumption, and active area product (0.7% worst case), while providing lower sensitivity to power supply, temperature, capacitive load, and process variations than standard CMOS technologies.
Fashion Retail Master Data Model and Business Development
Retailing, and particularly fashion retailing, is changing into a much more technology-driven business model using omni-channel retailing approaches, and analytical, data-driven marketing is on the rise. However, little attention has been paid to the underlying and underpinning data structures, the characteristics specific to fashion retailing, the relationship between static and dynamic data, and the governance of these. This paper analyses and discusses the data dimension of fashion retailing with a focus on data-model development, master data management, and their impact on business development in the form of increased operational effectiveness, better adaptation to the omni-channel environment, and improved alignment between the business strategy and the supporting data. The paper presents a case study of a major European fashion retail and wholesale company that is in the process of reorganising its master data model and master data governance to remove silos of data, connect and utilise data across business processes, and design a global product master data database that integrates data for all existing and expected sales channels. A major finding of this paper is that fashion retailing needs stricter master data governance than general retailing, as products are plentiful, designed products are not necessarily marketed, and product life-cycles are generally short.
NeXt generation/dynamic spectrum access/cognitive radio wireless networks: A survey
Today's wireless networks are characterized by a fixed spectrum assignment policy. However, a large portion of the assigned spectrum is used sporadically, and geographical variations in the utilization of assigned spectrum range from 15% to 85% with a high variance in time. The limited available spectrum and the inefficiency in spectrum usage necessitate a new communication paradigm to exploit the existing wireless spectrum opportunistically. This new networking paradigm is referred to as NeXt Generation (xG) Networks as well as Dynamic Spectrum Access (DSA) and cognitive radio networks. The term xG networks is used throughout the paper. The novel functionalities and current research challenges of the xG networks are explained in detail. More specifically, a brief overview of the cognitive radio technology is provided and the xG network architecture is introduced. Moreover, the xG network functions such as spectrum management, spectrum mobility, and spectrum sharing are explained in detail.
Fe3Al Iron Aluminides Alloyed with High Concentrations of V and Cr: Their Structure and High Temperature Strength
Eight iron aluminide alloys with different contents of V and Cr were prepared up to 25 at. pct of both elements. The effect of the V and Cr concentration on the microstructure and mechanical properties was investigated by several complementary techniques. This investigation revealed that the microstructure of all the investigated alloys was comparable regardless of their chemical composition. All the alloys were in a solid solution condition without any major chemical inhomogeneity. For all alloys, a comparable grain size and D03 crystallographic structure was observed. In situ X-ray diffraction measurement revealed that the crystallographic structure was stable up to 1073 K (800 °C) regardless of the chemical composition. Mechanical testing showed that the compressive yield stress significantly increased with the increasing total sum of V plus Cr. Much higher values of yield stress were measured for symmetric concentrations of V and Cr when compared to non-symmetric ones. Eventually, it was shown that the formation of at least a rough system of the lattice positions occupation by four types of atoms in four sub-lattices derived from D03 is the most probable strengthening factor for alloys with symmetric concentrations of V and Cr.
The Stretta procedure for the treatment of GERD: a registry of 558 patients.
PURPOSE To evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999. METHODS All subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test. RESULTS Surveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability. CONCLUSIONS The Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.
Returns to Buying Winners and Selling Losers : Implications for Stock Market Efficiency
This paper documents that strategies which buy stocks that have performed well in the past and sell stocks that have performed poorly in the past generate significant positive returns over 3- to 12-month holding periods. We find that the profitability of these strategies is not due to their systematic risk or to delayed stock price reactions to common factors. However, part of the abnormal returns generated in the first year after portfolio formation dissipates in the following two years. A similar pattern of returns around the earnings announcements of past winners and losers is also documented. A POPULAR VIEW HELD by many journalists, psychologists, and economists is that individuals tend to overreact to information.¹ A direct extension of this view, suggested by De Bondt and Thaler (1985, 1987), is that stock prices also overreact to information, suggesting that contrarian strategies (buying past losers and selling past winners) achieve abnormal returns. De Bondt and Thaler (1985) show that over 3- to 5-year holding periods stocks that performed poorly over the previous 3 to 5 years achieve higher returns than stocks that performed well over the same period. However, the interpretation of the De Bondt and Thaler results is still being debated. Some have argued that the De Bondt and Thaler results can be explained by the systematic risk of their contrarian portfolios and the size effect.² In addition, since the long-term losers outperform the long-term winners only in Januaries, it is unclear whether their results can be attributed to overreaction. *Jegadeesh is from the Anderson Graduate School of Management, UCLA. Titman is from Hong Kong University of Science and Technology and the Anderson Graduate School of Management, UCLA. We would like to thank Kent Daniel, Ravi Jagannathan, Richard Roll, Hans Stoll, René Stulz, and two referees. We also thank participants of the Johnson Symposium held at the University of Wisconsin at Madison and seminar participants at Harvard, SMU, UBC, UCLA, Penn State, University of Michigan, University of Minnesota, and York University for helpful comments, and Juan Siu and Kwan Ho Kim for excellent research assistance. ¹ See, for example, the academic papers by Kahneman and Tversky (1982), De Bondt and Thaler (1985), and Shiller (1981). ² See, for example, Chan (1988), Ball and Kothari (1989), and Zarowin (1990). For an alternate view, see the recent paper by Chopra, Lakonishok, and Ritter (1992). More recent papers by Jegadeesh (1990) and Lehmann (1990) provide evidence of shorter-term return reversals. These papers show that contrarian strategies that select stocks based on their returns in the previous week or month generate significant abnormal returns. However, since these strategies are transaction intensive and are based on short-term price movements, their apparent success may reflect the presence of short-term price pressure or a lack of liquidity in the market rather than overreaction. Jegadeesh and Titman (1991) provide evidence on the relation between short-term return reversals and bid-ask spreads that supports this interpretation. In addition, Lo and MacKinlay (1990) argue that a large part of the abnormal returns documented by Jegadeesh and Lehmann is attributable to a delayed stock price reaction to common factors rather than to overreaction.
Although contrarian strategies have received a lot of attention in the recent academic literature, the early literature on market efficiency focused on relative strength strategies that buy past winners and sell past losers. Most notably, Levy (1967) claims that a trading rule that buys stocks with current prices that are substantially higher than their average prices over the past 27 weeks realizes significant abnormal returns. Jensen and Bennington (1970), however, point out that Levy had come up with his trading rule after examining 68 different trading rules in his dissertation and because of this express skepticism about his conclusions. Jensen and Bennington analyze the profitability of Levy's trading rule over a long time period that was, for the most part, outside Levy's original sample period. They find that in their sample period Levy's trading rule does not outperform a buy and hold strategy and hence attribute Levy's result to a selection bias. Although the current academic debate has focused on contrarian rather than relative strength trading rules, a number of practitioners still use relative strength as one of their stock selection criteria. For example, a majority of the mutual funds examined by Grinblatt and Titman (1989, 1991) show a tendency to buy stocks that have increased in price over the previous quarter. In addition, the Value Line rankings are known to be based in large part on past relative strength. The success of many of the mutual funds in the Grinblatt and Titman sample and the predictive power of Value Line rankings (see Copeland and Mayers (1982) and Stickel (1985)) provide suggestive evidence that the relative strength strategies may generate abnormal returns. How can we reconcile the success of Value Line rankings and the mutual funds that use relative strength rules with the current academic literature that suggests that the opposite strategy generates abnormal returns? One possibility is that the abnormal returns realized by these practitioners are either spurious or are unrelated to their tendencies to buy past winners. A second possibility is that the discrepancy is due to the difference between the time horizons used in the trading rules examined in the recent academic papers and those used in practice. For instance, the above cited evidence favoring contrarian strategies focuses on trading strategies based on either very short-term return reversals (1 week or 1 month), or very long-term return reversals (3 to 5 years). However, anecdotal evidence suggests that practitioners who use relative strength rules base their selections on price movements over the past 3 to 12 months.³ This paper provides an analysis of relative strength trading strategies over 3- to 12-month horizons. Our analysis of NYSE and AMEX stocks documents significant profits in the 1965 to 1989 sample period for each of the relative strength strategies examined. We provide a decomposition of these profits into different sources and develop tests that allow us to evaluate their relative importance. The results of these tests indicate that the profits are not due to the systematic risk of the trading strategies. In addition, the evidence indicates that the profits cannot be attributed to a lead-lag effect resulting from delayed stock price reactions to information about a common factor similar to that proposed by Lo and MacKinlay (1990). The evidence is, however, consistent with delayed price reactions to firm-specific information.
Further tests suggest that part of the predictable price changes that occur during these 3- to 12-month holding periods may not be permanent. The stocks included in the relative strength portfolios experience negative abnormal returns starting around 12 months after the formation date and continuing up to the thirty-first month. For example, the portfolio formed on the basis of returns realized in the past 6 months generates an average cumulative return of 9.5% over the next 12 months but loses more than half of this return in the following 24 months. Our analysis of stock returns around earnings announcement dates suggests a similar bias in market expectations. We find that past winners realize consistently higher returns around their earnings announcements in the 7 months following the portfolio formation date than do past losers. However, in each of the following 13 months past losers realize higher returns than past winners around earnings announcements. The rest of this paper is organized as follows: Section I describes the trading strategies that we examine and Section II documents their excess returns. Section III provides a decomposition of the profits from relative strength strategies and evaluates the relative importance of the different components. Section IV documents these returns in subsamples stratified on the basis of ex ante beta and firm size and Section V measures these profits across calendar months and over 5-year subperiods. The longer term performance of the stocks included in the relative strength portfolios is examined in Section VI and Section VII back tests the strategy over the 1927 to 1964 period.³ For instance, one of the inputs used by Value Line to assign a timeliness rank for each stock is a price momentum factor computed based on the stock's past 3- to 12-month returns. Value Line reports that the price momentum factor is computed by "dividing the stock's latest 10-week average relative price by its 52-week average relative price." These timeliness ranks, according to Value Line, are "designed to discriminate among stocks on the basis of relative price performance over the next 6 to 12 months" (see Bernard (1984), pp. 52-53). Section VIII examines the returns of past winners and past losers around earnings announcement dates and Section IX concludes the paper. I. Trading Strategies If stock prices either overreact or underreact to information, then profitable trading strategies that select stocks based on their past returns will exist. This study investigates the efficiency of the stock market by examining the profitability of a number of these strategies. The strategies we consider select stocks based on their returns over the past 1, 2, 3, or 4 quarters. We also consider holding periods that vary from 1 to 4 quarters. This gives a total of 16 strategies. In addition, we examine a second set of 16 strategies that skip a week between the portfolio formation period and the holding period. By skipping a week, we avoid some of the bid-ask spread, pri
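For intuition, the J-month formation / K-month holding relative strength strategies described in Section I can be sketched on synthetic data as follows (an illustrative pandas sketch, not the authors' code or their exact decile and overlapping-portfolio construction): rank stocks on past J-month returns, buy the top decile, sell the bottom decile, and hold for K months.

```python
import numpy as np
import pandas as pd

# Sketch of a J-month formation / K-month holding relative strength strategy
# on synthetic monthly returns (rows = months, columns = stocks).
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(0.01, 0.06, size=(120, 50)),
                       columns=[f"stock{i}" for i in range(50)])

def momentum_profits(returns, J=6, K=6, decile=0.1):
    profits = []
    for t in range(J, len(returns) - K):
        past = (1 + returns.iloc[t - J:t]).prod() - 1   # formation-period return
        n = max(1, int(decile * returns.shape[1]))
        winners = past.nlargest(n).index                # past top decile
        losers = past.nsmallest(n).index                # past bottom decile
        hold = (1 + returns.iloc[t:t + K]).prod() - 1   # holding-period return
        profits.append(hold[winners].mean() - hold[losers].mean())
    return float(np.mean(profits))

print(momentum_profits(returns, J=6, K=6))
```

On the random data above the average winner-minus-loser profit is near zero by construction; the paper's point is that on actual NYSE/AMEX data over 1965 to 1989 this difference is significantly positive.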
Cooperative Information Systems: A Manifesto
Information systems technology, computer-supported cooperative work practice, and organizational modeling and planning theories have evolved with only accidental contact to each other. Cooperative information systems is a relatively young research area which tries to systematically investigate the synergies between these research fields, driven by the observation that change management is the central issue facing all three areas today and that all three fields have indeed developed rather similar strategies to cope with change. In this paper, we therefore propose a framework which views cooperative information systems as composed from three interrelated facets, viz. the system facet, the group collaboration facet, and the organizational facet. We present an overview of these facets, emphasizing strategies they have developed over the past few years to accommodate change. We also discuss the propagation of change across the facets, and sketch a basic software architecture intended to support the rapid construction and evolution of cooperative information systems on top of existing organizational and technical legacy.
Social TV: Linking TV Content to Buzz and Sales
“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.
A computational model for biofilm-based microbial fuel cells.
This study describes and evaluates a computational model for microbial fuel cells (MFCs) based on redox mediators with several populations of suspended and attached biofilm microorganisms, and multiple dissolved chemical species. A number of biological, chemical and electrochemical reactions can occur in the bulk liquid, in the biofilm and at the electrode surface. The evolution in time of important MFC parameters (current, charge, voltage and power production, consumption of substrates, suspended and attached biomass growth) has been simulated under several operational conditions. Model calculations evaluated the effect of different substrate utilization yields, standard potential of the redox mediator, ratio of suspended to biofilm cells, initial substrate and mediator concentrations, mediator diffusivity, mass transfer boundary layer, external load resistance, endogenous metabolism, repeated substrate additions and competition between different microbial groups in the biofilm. Two- and three-dimensional model simulations revealed the heterogeneous current distribution over the planar anode surface for younger and patchy biofilms, but becoming uniform in older and more homogeneous biofilms. For uniformly flat biofilms one-dimensional models should give sufficiently accurate descriptions of produced currents. Voltage- and power-current characteristics can also be calculated at different moments in time to evaluate the limiting regime in which the MFC operates. Finally, the model predictions are tested with previously reported experimental data obtained in a batch MFC with a Geobacter biofilm fed with acetate. The potential of the general modeling framework presented here is in the understanding and design of more complex cases of wastewater-fed microbial fuel cells.
Parsing Arabic Dialects
The Arabic language is a collection of spoken dialects with important phonological, morphological, lexical, and syntactic differences, along with a standard written language, Modern Standard Arabic (MSA). Since the spoken dialects are not officially written, it is very costly to obtain adequate corpora to use for training dialect NLP tools such as parsers. In this paper, we address the problem of parsing transcribed spoken Levantine Arabic (LA). We do not assume the existence of any annotated LA corpus (except for development and testing), nor of a parallel LA-MSA corpus. Instead, we use explicit knowledge about the relation between LA and MSA.
Design of CMOS instrumentation amplifier using gm/ID methodology
This paper describes the design of an indirect current feedback Instrumentation Amplifier (IA). Transistor sizing plays a major role in achieving the desired gain, the Common Mode Rejection Ratio (CMRR), and the bandwidth of the Instrumentation Amplifier. A gm/ID based design methodology is employed to design the functional blocks of the IA. It links the design variables of each functional block to its target specifications and is used to develop design charts that are used to accurately size the transistors. The IA thus designed achieves a voltage gain of 31 dB with a bandwidth of 1.2 MHz and a CMRR of 87 dB at 1 MHz. The circuit design is carried out using a 0.18 μm CMOS process.
Toward Future Scenario Generation: Extracting Event Causality Exploiting Semantic Relation, Context, and Association Features
We propose a supervised method of extracting event causalities like "conduct slash-and-burn agriculture → exacerbate desertification" from the web using semantic relation (between nouns), context, and association features. Experiments show that our method outperforms baselines that are based on state-of-the-art methods. We also propose methods of generating future scenarios like "conduct slash-and-burn agriculture → exacerbate desertification → increase Asian dust (from China) → asthma gets worse". Experiments show that we can generate 50,000 scenarios with 68% precision. We also generated the scenario "deforestation continues → global warming worsens → sea temperatures rise → vibrio parahaemolyticus fouls (water)", which appears in no document in our input web corpus crawled in 2007. However, the vibrio risk due to global warming was observed in Baker-Austin et al. (2013). Thus, we "predicted" the future event sequence in a sense.
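A minimal sketch of the scenario-generation step, chaining extracted cause-effect phrase pairs whose endpoints match, is shown below. The toy causal pairs and the exact-match joining rule are assumptions for illustration; the paper's extraction itself relies on semantic relation, context, and association features, which are not reproduced here.

```python
# Sketch: chain extracted (cause, effect) phrase pairs into future scenarios.
# The pairs and the exact-match joining rule are illustrative assumptions.
causal_pairs = [
    ("conduct slash-and-burn agriculture", "exacerbate desertification"),
    ("exacerbate desertification", "increase Asian dust"),
    ("increase Asian dust", "asthma gets worse"),
    ("deforestation continues", "global warming worsens"),
    ("global warming worsens", "sea temperatures rise"),
]

def generate_scenarios(pairs, max_len=4):
    by_cause = {}
    for cause, effect in pairs:
        by_cause.setdefault(cause, []).append(effect)

    def extend(chain):
        last = chain[-1]
        if len(chain) == max_len or last not in by_cause:
            yield chain
            return
        for nxt in by_cause[last]:
            yield from extend(chain + [nxt])

    for cause, effect in pairs:
        yield from extend([cause, effect])

for scenario in generate_scenarios(causal_pairs):
    print(" -> ".join(scenario))
```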
Augmented Virtual Reality : How to Improve Education Systems
This essay presents and discusses the developing role of virtual and augmented reality technologies in education. Addressing the challenges in adapting such technologies to focus on improving students' learning outcomes, the author discusses the inclusion of experiential modes as a vehicle for improving students' knowledge acquisition. Stakeholders in the educational role of technology include students, faculty members, institutions, and manufacturers. While the benefits of such technologies are still under investigation, the technology landscape offers opportunities to enhance face-to-face and online teaching, including contributions in the understanding of abstract concepts and training in real environments and situations. Barriers to technology use involve limited adoption of augmented and virtual reality technologies, and, more directly, necessary training of teachers in using such technologies within meaningful educational contexts. The author proposes a six-step methodology to aid adoption of these technologies as basic elements within regular education: training teachers; developing conceptual prototypes; teamwork involving the teacher, a technical programmer, and an educational architect; and producing the experience, which then provides results in the subsequent two phases wherein teachers are trained to apply augmented- and virtual-reality solutions within their teaching methodology using an available subject-specific experience and then finally implementing the use of the experience in a regular subject with students. The essay concludes with discussion of the business opportunities facing virtual reality in face-to-face education as well as augmented and virtual reality in online education.
Routing Protocols in Wireless Sensor Networks - A Survey
Advances in wireless sensor network (WSN) technology have provided the availability of small and low-cost sensor nodes with the capability of sensing various types of physical and environmental conditions, data processing, and wireless communication. The variety of sensing capabilities results in a profusion of application areas. However, the characteristics of wireless sensor networks require more effective methods for data forwarding and processing. In a WSN, the sensor nodes have a limited transmission range, and their processing and storage capabilities as well as their energy resources are also limited. Routing protocols for wireless sensor networks are responsible for maintaining the routes in the network and have to ensure reliable multi-hop communication under these conditions. In this paper, we give a survey of routing protocols for wireless sensor networks and compare their strengths and limitations.
Unsupervised Learning with Mixed Numeric and Nominal Data
This paper presents a Similarity-Based Agglomerative Clustering (SBAC) algorithm that works well for data with mixed numeric and nominal features. A similarity measure proposed by Goodall for biological taxonomy [15], which gives greater weight to uncommon feature value matches in similarity computations and makes no assumptions about the underlying distributions of the feature values, is adopted to define the similarity between pairs of objects. An agglomerative algorithm is employed to construct a dendrogram, and a simple distinctness heuristic is used to extract a partition of the data. The performance of SBAC has been studied on real and artificially generated data sets. Results demonstrate the effectiveness of this algorithm in unsupervised discovery tasks. Comparisons with other clustering schemes illustrate the superior performance of this approach. Index Terms: Agglomerative clustering, conceptual clustering, feature weighting, interpretation, knowledge discovery, mixed numeric and nominal data, similarity measures, χ² aggregation.
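The flavor of the approach can be sketched as follows; the similarity function below only mimics the spirit of the Goodall measure (matches on uncommon nominal values weigh more, numeric closeness is scaled), not its exact formulation, and SciPy's average-linkage routine stands in for the agglomerative step with a distinctness cut.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Sketch of similarity-based agglomerative clustering for mixed data.
# The similarity below only mimics the spirit of the Goodall measure
# (matches on rare nominal values count more); it is not the exact formula.
data = [
    {"color": "red",   "size": 1.0},
    {"color": "red",   "size": 1.1},
    {"color": "blue",  "size": 5.0},
    {"color": "blue",  "size": 5.2},
    {"color": "green", "size": 1.05},
]

def freq(attr, value):
    return sum(1 for row in data if row[attr] == value) / len(data)

def similarity(a, b):
    # Nominal part: matching on an uncommon value is weighted higher.
    s_nom = (1 - freq("color", a["color"])) if a["color"] == b["color"] else 0.0
    # Numeric part: closeness on a 0-1 scale.
    sizes = [row["size"] for row in data]
    s_num = 1 - abs(a["size"] - b["size"]) / (max(sizes) - min(sizes))
    return 0.5 * (s_nom + s_num)

n = len(data)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1 - similarity(data[i], data[j])

Z = linkage(squareform(dist), method="average")   # agglomerative dendrogram
print(fcluster(Z, t=2, criterion="maxclust"))     # extract a 2-cluster partition
```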
Short-term diagnostic stability of probable headache disorders based on the International Classification of Headache Disorders, 3rd edition beta version, in first-visit patients: a multicenter follow-up study
A "probable headache disorder" is diagnosed when a patient's headache fulfills all but one criterion of a headache disorder in the 3rd beta edition of the International Classification of Headache Disorders (ICHD-3β). We investigated diagnostic changes in probable headache disorders in first-visit patients after at least 3 months of follow-up. This was a longitudinal study using a prospective headache registry from nine headache clinics of referral hospitals. The diagnostic change of probable headache disorders at baseline was assessed at least 3 months after the initial visit using ICHD-3β. Of 216 patients with probable headache disorders at baseline, the initial probable diagnosis remained unchanged for 162 (75.0%) patients, while it progressed to a definite diagnosis within the same headache subtype for 45 (20.8%) by fulfilling the criteria during a median follow-up period of 6.5 months. No significant difference in the proportion of constant diagnoses was found between headache subtypes (P = 0.935): 75.9% for probable migraine, 73.7% for probable tension-type headache (TTH), and 76.0% for probable other primary headache disorders (OPHD). Among patients with headache recurrence, the proportion of constant diagnosis was higher for probable migraine than for probable TTH plus probable OPHD (59.2 vs. 23.1%; P < 0.001). The proportions of constant diagnosis did not significantly differ by follow-up duration (>3 and ≤6 months vs. >6 and ≤10 months) for probable migraine, probable TTH, and probable OPHD, respectively. In this study, a probable headache diagnosis based on ICHD-3β remained in approximately three-quarters of the outpatients; however, diagnostic stability could differ by headache recurrence and subtype. Probable headache management might have to consider these differences.
A Universal Toolkit for Cryptographically Secure Privacy-Preserving Data Mining
The issue of potential data misuse arises whenever data are collected from several sources. In a common setting, a large database is either horizontally or vertically partitioned between multiple entities who want to find global trends in the data. Such tasks can be solved with secure multi-party computation (MPC) techniques. However, practitioners tend to consider such solutions inefficient. Furthermore, there are no established tools for applying secure multi-party computation in real-world applications. In this paper, we describe Sharemind, a toolkit that allows data mining specialists with no cryptographic expertise to develop data mining algorithms with good security guarantees. We list the building blocks needed to deploy a privacy-preserving data mining application and explain the design decisions that make Sharemind applications efficient in practice. To validate the practical feasibility of our approach, we implemented and benchmarked four algorithms for frequent itemset mining.
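Sharemind is built on secret sharing among several servers; the additive three-party sharing below is a generic illustration of that idea only (a sketch, not Sharemind's actual protocols, which additionally provide secure multiplication, comparison, and the primitives needed for frequent itemset mining).

```python
import secrets

MOD = 2 ** 32   # 32-bit additive sharing, chosen here for illustration

def share(value, parties=3):
    """Split a secret into additive shares: their sum mod MOD equals the secret."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

# Each party can add its shares locally; reconstructing the sums reveals
# only the aggregate, not the individual inputs.
a, b = 12345, 67890
sa, sb = share(a), share(b)
sum_shares = [(x + y) % MOD for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % MOD
print(reconstruct(sum_shares))
```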
On-line policy optimisation of Bayesian spoken dialogue systems via human interaction
A partially observable Markov decision process has been proposed as a dialogue model that enables robustness to speech recognition errors and automatic policy optimisation using reinforcement learning (RL). However, conventional RL algorithms require a very large number of dialogues, necessitating a user simulator. Recently, Gaussian processes have been shown to substantially speed up the optimisation, making it possible to learn directly from interaction with human users. However, early studies have been limited to very low dimensional spaces and the learning has exhibited convergence problems. Here we investigate learning from human interaction using the Bayesian Update of Dialogue State system. This dynamic Bayesian network based system has an optimisation space covering more than one hundred features, allowing a wide range of behaviours to be learned. Using an improved policy model and a more robust reward function, we show that stable learning can be achieved that significantly outperforms a simulator trained policy.
Metformin and lactic acidosis in an Australian community setting: the Fremantle Diabetes Study.
OBJECTIVE To determine the incidence of lactic acidosis in community-based patients with type 2 diabetes, with special reference to metformin therapy. DESIGN Substudy within a longitudinal observational study, the Fremantle Diabetes Study (FDS). PARTICIPANTS AND SETTING 1279 patients from a postcode-defined population of 120 097 people in Western Australia. MAIN OUTCOME MEASURES Confirmed hospitalisation with lactic acidosis identified through the WA Data Linkage System during two periods: (1) from study entry, between 1993 and 1996, and study close in November 2001; and (2) from study entry to 30 June 2006. RESULTS At entry, 33.3% of patients were metformin-treated, and 23.1% of these had one or more contraindications to metformin (55.1% and 38.0%, respectively, after 5 years' follow-up). Five confirmed cases of lactic acidosis were identified during 12 466 patient-years of observation; all had at least one other potential cause, such as cardiogenic shock or renal failure. From study entry to close, the incidence was 0/100,000 patient-years in both metformin-treated and non-metformin-treated patients. Between study entry and 30 June 2006, incidence was 57/100,000 patient-years (95% CI, 12-168) in metformin-treated patients and 28/100,000 patient-years (95% CI, 3-100) in the non-metformin-treated group, an incidence rate difference of -30 (-105 to 46) (P=0.4). CONCLUSION The incidence of lactic acidosis in patients with type 2 diabetes is low but increases with age and duration of diabetes, as cardiovascular and renal causes become more prevalent. Metformin does not increase the risk of lactic acidosis, even when other recognised precipitants are present.
Using Convolutional Neural Networks to Perform Classification on State Farm Insurance Driver Images
For the State Farm Photo Classification Kaggle Challenge, we use two different Convolutional Neural Network models to classify pictures of drivers in their cars. The first is a model trained from scratch on the provided dataset, and the other is a model that was first pretrained on the ImageNet dataset and then underwent transfer learning on the provided StateFarm dataset. With the first approach, we achieved a validation accuracy of 10.9%, which is not much better than random. However, with the second approach, we achieved an accuracy of 21.1%. Finally, we explore ways to make these models better based on past models and training techniques.
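A minimal PyTorch/torchvision sketch of the second approach (an ImageNet-pretrained backbone with a new 10-class head) is given below; the choice of ResNet-18, the frozen layers, and the hyperparameters are placeholder assumptions rather than the configuration used in the report.

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of the transfer-learning approach: an ImageNet-pretrained backbone
# with a new classification head for the 10 driver-behavior classes.
# (Older torchvision versions use `pretrained=True` instead of `weights=`.)
model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # new head, trained from scratch

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of (N, 3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors stand in here for batches of the State Farm driver images.
loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,)))
print(loss)
```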
Error detecting and error correcting codes
When a message is transmitted, it has the potential to get scrambled by noise. This is certainly true of voice messages, and is also true of the digital messages that are sent to and from computers. Now even sound and video are being transmitted in this manner. A digital message is a sequence of 0’s and 1’s which encodes a given message. More data will be added to a given binary message that will help to detect if an error has been made in the transmission of the message; adding such data is called an error-detecting code. More data may also be added to the original message so that errors made in transmission may be detected, and also to figure out what the original message was from the possibly corrupt message that was received. This type of code is an error-correcting code.
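As a concrete example of adding such redundant data (not taken from the article itself), the classic Hamming(7,4) code appends three parity bits to four data bits, letting the receiver both detect and correct any single flipped bit.

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits, then detect and
# correct a single flipped bit. Illustrative example, not from the article.

def encode(d):                            # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def correct(c):
    # Each syndrome bit is the parity over the positions it covers.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3      # 0 means no error detected
    if error_pos:
        c = c.copy()
        c[error_pos - 1] ^= 1             # flip the erroneous bit back
    return c, [c[2], c[4], c[5], c[6]]    # corrected codeword and data bits

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[5] ^= 1                          # simulate a single-bit transmission error
fixed, data = correct(received)
assert data == [1, 0, 1, 1]
print(data)
```

A single parity bit alone would only detect (not locate) the error; the extra structure of the parity checks is what makes correction possible.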
A Study on Connected Components Labeling algorithms using GPUs
Connected Components Labeling (CCL) is a well-known problem with many applications in Image Processing. We propose in this article an optimized version of CCL for GPUs using GPGPU (General-Purpose Computing on Graphics Processing Units) techniques and the usual Union-Find algorithm to solve the CCL problem. We compare its performance with an efficient serial algorithm and with Label Equivalence, another method proposed in the literature which uses GPUs as well. Our algorithm presented a 5-10x speedup in relation to the serial algorithm and performed similarly to Label Equivalence, but proved to be more predictable in the sense that its execution time is more image-independent. Keywords: Image processing; Connected Components Labeling; Connected Components Analysis; GPU; CUDA
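For intuition, a serial Union-Find labeling pass over a binary image might look like the sketch below; the paper's contribution is the GPU (CUDA) parallelization of this idea, which is not reproduced here.

```python
import numpy as np

# Serial union-find connected-components labeling (4-connectivity) on a
# binary image. This is only the CPU baseline idea; a GPU version would
# parallelize the neighbor-merge step.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]     # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[max(ra, rb)] = min(ra, rb)

def label(image):
    h, w = image.shape
    parent = {}
    for y in range(h):
        for x in range(w):
            if not image[y, x]:
                continue
            idx = y * w + x
            parent.setdefault(idx, idx)
            if x > 0 and image[y, x - 1]:
                union(parent, idx, idx - 1)       # merge with left neighbor
            if y > 0 and image[y - 1, x]:
                union(parent, idx, idx - w)       # merge with upper neighbor
    labels = np.zeros((h, w), int)
    roots = {}
    for y in range(h):
        for x in range(w):
            if image[y, x]:
                r = find(parent, y * w + x)
                labels[y, x] = roots.setdefault(r, len(roots) + 1)
    return labels

img = np.array([[1, 1, 0, 0],
                [0, 1, 0, 1],
                [0, 0, 0, 1]])
print(label(img))
```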
A Picture of Instagram is Worth More Than a Thousand Words: Workload Characterization and Application
Participatory sensing systems (PSSs) have the potential to become fundamental tools to support the study, at large scale, of urban social behavior and city dynamics. To that end, this work characterizes the photo sharing system Instagram, considered one of the most popular PSSs currently on the Internet. Based on a dataset of approximately 2.3 million shared photos, we characterize users' behavior in the system, showing that there are several advantages and opportunities for large scale sensing, such as global coverage at low cost, but also challenges, such as a very unequal photo sharing frequency, both spatially and temporally. We also observe that the temporal photo sharing pattern is a good indicator of cultural behaviors and also says a lot about certain classes of places. Moreover, we present an application to identify regions of interest in a city based on data obtained from Instagram, which illustrates the promising potential of PSSs for the study of city dynamics.
The role of parenting styles in children's problem behavior.
This study investigated the combination of mothers' and fathers' parenting styles (affection, behavioral control, and psychological control) that would be most influential in predicting their children's internal and external problem behaviors. A total of 196 children (aged 5-6 years) were followed up six times from kindergarten to the second grade to measure their problem behaviors. Mothers and fathers filled in a questionnaire measuring their parenting styles once every year. The results showed that a high level of psychological control exercised by mothers combined with high affection predicted increases in the levels of both internal and external problem behaviors among children. Behavioral control exercised by mothers decreased children's external problem behavior but only when combined with a low level of psychological control.
Easy Questions First? A Case Study on Curriculum Learning for Question Answering
Cognitive science researchers have emphasized the importance of ordering a complex task into a sequence of easy to hard problems. Such an ordering provides an easier path to learning and increases the speed of acquisition of the task compared to conventional learning. Recent works in machine learning have explored a curriculum learning approach called self-paced learning, which orders data samples on an easiness scale so that easy samples can be introduced to the learning algorithm first and harder samples can be introduced successively. We introduce a number of heuristics that improve upon self-paced learning. Then, we argue that incorporating an easy, yet diverse, set of samples can further improve learning. We compare these curriculum learning proposals in the context of four non-convex models for QA and show that they lead to real improvements in each of them.
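The self-paced idea of admitting easy samples first can be sketched as a loss-thresholded training loop; the linear-regression setting, threshold schedule, and ridge step below are generic illustrative choices, not the paper's QA models or its diversity heuristics.

```python
import numpy as np

# Sketch of self-paced learning for linear regression: at each round, only
# samples whose current loss is below a threshold are used, and the
# threshold grows so harder samples are admitted later.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(scale=0.1, size=200)
y[:20] += rng.normal(scale=5.0, size=20)          # some "hard" noisy samples

w = np.zeros(5)
threshold = 0.5
for epoch in range(30):
    losses = (X @ w - y) ** 2
    easy = losses < threshold                     # admit only "easy" samples
    if easy.sum() > 0:
        Xs, ys = X[easy], y[easy]
        # Closed-form ridge step on the selected subset.
        w = np.linalg.solve(Xs.T @ Xs + 1e-3 * np.eye(5), Xs.T @ ys)
    threshold *= 1.3                              # gradually admit harder samples
print(np.round(w, 2))
```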
Progestogens or progestogen-releasing intrauterine systems for uterine fibroids.
BACKGROUND Uterine fibroids are the most common premenopausal benign uterine tumours. Fibroids can cause symptoms including heavy menstrual bleeding, pelvic pressure and pain. Progestogens can be administered by various routes. Intramuscular injection of depot medroxyprogesterone acetate (DMPA) has dual actions (stimulatory or inhibitory) on fibroid cell growth. Progestogen-releasing intrauterine systems (IUS) decrease menstrual blood loss associated with fibroids by inducing endometrial atrophy and reduction of uterine fibroid size. Currently, their effectiveness for the treatment of uterine fibroids has not been evaluated. OBJECTIVES To determine the effectiveness of progestogens or progestogen-releasing intrauterine systems in treating premenopausal women with uterine fibroids. SEARCH METHODS We searched the Menstrual Disorders and Subfertility Group Specialised Register (inception to 17 August 2012), CENTRAL (inception to 17 August 2012) and Database of Abstracts of Reviews of Effects (DARE) in The Cochrane Library, MEDLINE (inception to 17 August 2012), Ovid EMBASE (1 January 2010 to 17 August 2012), Ovid PsycINFO (inception to 17 August 2012), CINAHL database, and trials registers for ongoing and registered trials. SELECTION CRITERIA All identified published or unpublished randomised controlled trials (RCTs) assessing the effect of progestogens or progestogen-releasing intrauterine systems in treating premenopausal women with uterine fibroids. DATA COLLECTION AND ANALYSIS We assessed all potentially eligible studies identified as a result of the search strategy. Two review authors extracted data from each included study using an agreed form and assessed the risk of bias. We resolved discrepancies through discussion. MAIN RESULTS This review included three studies. However, data for progestogen-releasing intrauterine systems were available from only one study that compared 29 women with a levonorgestrel (LNG)-IUS versus 29 women with a combined oral contraceptive (COC) for treating uterine fibroids. There was a significant reduction of menstrual blood loss (MBL) in women receiving the LNG-IUS compared to the COC using the alkaline hematin test (mean difference (MD) 77.5%, 95% CI 71.3% to 83.67%, 58 women) and a pictorial assessment chart (PBAC) (MD 34.5%, 95% CI 14.9% to 54.1%, 58 women). The reduction in uterine fibroid size was significantly greater in the leuprorelin group at 16 weeks compared to the progestogen lynestrenol group (MD -15.93 mm, 95% CI -18.02 to -13.84 mm, 46 women). There was no RCT evaluating the effect of DMPA on uterine fibroids. AUTHORS' CONCLUSIONS Progestogen-releasing intrauterine systems appear to reduce menstrual blood loss in premenopausal women with uterine fibroids. Oral progestogens did not reduce fibroid size or fibroid- related symptoms. However, there was a methodological limitation and the one included study with data had a small sample size. This evidence is insufficient to support the use of progestogens or progestogen-releasing intrauterine systems in treating premenopausal women with uterine fibroids.
Harness patterns for upper-extremity prostheses.
1 Chief, Research Limb Section, Army Prosthetics Research Laboratory, Walter Reed Army Medical Center, Washington, D. C, The comparatively recent development of more functional components for artificial arms has made it necessary to analyze in greater detail the requirements of harnessing the power needed for effective operation. Just as an automobile is helpless without a well-designed and well-built engine and transmission system, so an arm prosthesis is helpless without a welldesigned and well-constructed harness. To build a successful harness system requires not a knowledge of some long-lost art but, instead, a careful appraisal of the wearer, of the device to be worn, and of the available tools to be put to work. Since the modern body harness constitutes a dynamic coupling between a human being and a mechanism designed to replace a living extremity, the problem of devising it is also one of dynamics and of what some call "human engineering." Many illustrations of typical harness patterns are presented later in this article. But it is not enough for the harnessmaker simply to reproduce what is shown in these drawings of typical patterns or to superimpose on an individual amputee a generalized harness pattern of any particular type. He must first understand the purpose of the harness, the requirements of the particular prosthesis involved, and the body motions available, and he must then apply his own skill and judgment in making appropriate modifications to suit the individual case. It is, of course, far more important to produce a harness that will give the desired functional results than it is to produce one that looks exactly like any one of the drawings. The illustrations are therefore intended as general guides only, not as a detailed description applicable to every case of amputation at the indicated level. When planning and making any harness, the prosthetist should examine the location of each element to ensure proper function with the expenditure of minimum effort on the part of the particular wearer concerned. The first and most simple requirement of any harness is that it must hold the prosthesis securely on the stump. The second is that it must be comfortable to the amputee. Generally, suspension, as such, is easily obtained, but to suspend the prosthesis properly and at the same time to assure maximum comfort for its wearer is more difficult. If either of these requirements becomes a matter of choice, then comfort must be the more important consideration. If the harness is not comfortable, or at least tolerable, the person for whom it was intended will soon hang it politely on a suitable nail. Since almost no harness can be constructed satisfactorily without a few compromises at first, it is unwise to promise complete success on the first try. The third and all-important requirement of functional body harness is that it must supply a source of power for the operating components of the prosthesis. This means simply that residual body motions must be harnessed to replace lost functions of the natural member, but to provide controls that are operable in an effective and yet inconspicuous manner poses a complex problem. It requires an examination of the body motions that can be utilized by the harness without detracting from the usefulness of the remaining normal hand and without introducing unduly awkward gyrations of parts of the anatomy not ordinarily involved
A comparative study of teenage pregnancy.
Teenage pregnancy is a global problem and is considered high risk, in spite of conflicting evidence. Our objective was to compare obstetric outcomes of pregnancy in teenagers and older women. This was a retrospective study of case records of pregnancies from August 2000 to July 2001. Outcomes in girls aged ≤19 years were compared with pregnancy outcomes in older women (19-35 years) in the same hospital. The study took place in the Government General Hospital, Sangli, India, a teaching hospital in rural India, with an annual delivery rate of over 3,500. A total of 386 teenage pregnancies were compared with pregnancies in 3,326 older women. Socioeconomic data, age, number of pregnancies, antenatal care and complications, mode of delivery, and neonatal outcomes were considered. The incidence of teenage pregnancy in the study was 10%. A significant proportion of teenage pregnant mothers were in their first pregnancies. The teenage mothers were nearly three times more at risk of developing anaemia (OR = 2.83, 95% CI = 2.2-3.7, p < 0.0001) and delivering pre-term (OR = 2.97, 95% CI = 2.4-3.7, p < 0.0001). Teenage mothers were twice as likely to develop hypertensive problems in pregnancy (OR = 2.2, 95% CI = 1.5-3.2, p < 0.0001) and were more likely to deliver vaginally with no significant increase in the risk of assisted vaginal delivery or caesarean section. Young mothers were nearly twice at risk of delivering low birth weight babies (OR = 1.8, 95% CI = 1.5-2.2, p < 0.0001) and 50% less likely to have normal birth weight babies (OR = 0.5, 95% CI = 1.2-2.9, p < 0.0001). The outcome of this study showed that teenage pregnancies are still a common occurrence in rural India in spite of various legislations and government programmes and that teenage pregnancy is a risk factor for poor obstetric outcome in rural India. Cultural practices, poor socioeconomic conditions, low literacy rates and lack of awareness of the risks are some of the main contributory factors. Early booking, good care during pregnancy and delivery and proper utilisation of contraceptive services can prevent the incidence and complications in this high-risk group.
Euler Spiral for Shape Completion
In this paper we address the curve completion problem, e.g., the geometric continuation of boundaries of objects which are temporarily interrupted by occlusion. Also known as the gap completion or shape completion problem, this problem is a significant element of perceptual grouping of edge elements and has been approached by using cubic splines or biarcs which minimize total curvature squared (elastica), as motivated by a physical analogy. Our approach is motivated by railroad design methods of the early 1900's which connect two rail segments by “transition curves”, and by the work of Knuth on mathematical typography. We propose that in using an energy minimizing solution completion curves should not penalize curvature as in elastica but curvature variation. The minimization of total curvature variation leads to an Euler Spiral solution, a curve whose curvature varies linearly with arclength. We reduce the construction of this curve from a pair of points and tangents at these points to solving a nonlinear system of equations involving Fresnel Integrals, whose solution relies on optimization from a suitable initial condition constrained to satisfy given boundary conditions. Since the choice of an appropriate initial curve is critical in this optimization, we analytically derive an optimal solution in the class of biarc curves, which is then used as the initial curve. The resulting interpolations yield intuitive interpolation across gaps and occlusions, and are extensible, in contrast to the scale-invariant version of elastica. In addition, Euler Spiral segments can be used in other applications of curve completions, e.g., modeling boundary segments between curvature extrema or modeling skeletal branch geometry.
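The defining property, curvature varying linearly with arclength, can be illustrated by tracing a clothoid with SciPy's Fresnel integrals; solving the full two-point boundary problem with the biarc initialization described above is beyond this sketch.

```python
import numpy as np
from scipy.special import fresnel

# Sketch: points on an Euler spiral (clothoid), whose curvature grows
# linearly with arclength. SciPy's Fresnel integrals use the convention
# S(t) = int_0^t sin(pi*u**2/2) du, C(t) = int_0^t cos(pi*u**2/2) du.
s = np.linspace(0.0, 3.0, 200)        # arclength parameter
S, C = fresnel(s)
x, y = C, S                           # spiral coordinates (unit scaling)

# With this parametrization the tangent angle is pi*s**2/2, so the
# curvature is pi*s: linear in arclength, which is the property that
# minimizes total curvature variation.
curvature = np.pi * s
print(x[-1], y[-1], curvature[-1])
```

Fitting a completion curve then amounts to choosing the scaling, starting arclength, and rigid placement of such a segment so that the endpoint positions and tangents match the given boundary conditions.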
Scaling Queries over Big RDF Graphs with Semantic Hash Partitioning
Massive volumes of big RDF data are growing beyond the performance capacity of conventional RDF data management systems operating on a single node. Applications using large RDF data demand efficient data partitioning solutions for supporting RDF data access on a cluster of compute nodes. In this paper we present a novel semantic hash partitioning approach and implement a Semantic HAsh Partitioning-Enabled distributed RDF data management system, called Shape. This paper makes three original contributions. First, the semantic hash partitioning approach we propose extends the simple hash partitioning method through direction-based triple groups and direction-based triple replications. The latter enhances the former by controlled data replication through intelligent utilization of data access locality, such that queries over big RDF graphs can be processed with zero or a very small amount of inter-machine communication cost. Second, we generate locality-optimized query execution plans that are more efficient than those of popular multi-node RDF data management systems by effectively minimizing the inter-machine communication cost for query processing. Third but not least, we provide a suite of locality-aware optimization techniques to further reduce the partition size and cut down on the inter-machine communication cost during distributed query processing. Experimental results show that our system scales well and can process big RDF datasets more efficiently than existing approaches.
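The baseline placement plus direction-based replication can be sketched as follows (an illustration, not the Shape system's implementation): each triple is assigned by hashing its subject, and the outgoing (forward-direction) neighborhood is then replicated onto the same partition so that star and short path queries stay machine-local.

```python
# Sketch of semantic hash partitioning for RDF triples: baseline placement
# hashes the subject; forward direction-based replication copies the
# k-hop outgoing neighborhood onto the same partition. Illustrative only.

NUM_PARTITIONS = 3

triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("bob", "worksAt", "acme"),
    ("carol", "livesIn", "paris"),
]

def base_partition(triple):
    return hash(triple[0]) % NUM_PARTITIONS      # hash on the subject

def forward_expand(partitions, hops=1):
    """Replicate outgoing triples of objects already present on a partition."""
    by_subject = {}
    for t in triples:
        by_subject.setdefault(t[0], []).append(t)
    for _ in range(hops):
        for pid, part in partitions.items():
            for s, p, o in list(part):
                for extra in by_subject.get(o, []):
                    if extra not in part:
                        part.append(extra)       # replicated triple
    return partitions

partitions = {i: [] for i in range(NUM_PARTITIONS)}
for t in triples:
    partitions[base_partition(t)].append(t)
partitions = forward_expand(partitions, hops=1)
for pid, part in partitions.items():
    print(pid, part)
```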
A feature study for classification-based speech separation at very low signal-to-noise ratio
Speech separation is a challenging problem at low signal-to-noise ratios (SNRs). Separation can be formulated as a classification problem. In this study, we focus on the SNR level of -5 dB, at which speech is generally dominated by background noise. In such a low SNR condition, extracting robust features from a noisy mixture is crucial for successful classification. Using a common neural network classifier, we systematically compare the separation performance of many monaural features. In addition, we propose a new feature called the Multi-Resolution Cochleagram (MRCG), which is extracted from four cochleagrams of different resolutions to capture both local information and spectrotemporal context. Comparisons using two non-stationary noises show a range of feature robustness for speech separation, with the proposed MRCG performing the best. We also find that ARMA filtering, a post-processing technique previously used for robust speech recognition, improves speech separation performance by smoothing the temporal trajectories of feature dimensions.
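A rough analogue of the multi-resolution idea: take a base time-frequency representation and stack smoothed copies at coarser resolutions to capture spectrotemporal context. The actual MRCG is built from gammatone-filterbank cochleagrams; a plain spectrogram and uniform smoothing are used here only to keep the sketch self-contained, and all sizes are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import uniform_filter

fs = 16000
x = np.random.randn(fs)                          # stand-in for a noisy mixture
_, _, base = spectrogram(x, fs, nperseg=320, noverlap=160)
base = np.log(base + 1e-8)                       # base time-frequency representation

coarse1 = uniform_filter(base, size=(11, 11))    # local spectrotemporal context
coarse2 = uniform_filter(base, size=(23, 23))    # wider context
features = np.concatenate([base, coarse1, coarse2], axis=0)
print(features.shape)
```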
Potential Strategies to Prevent Ventilator-associated Events.
The Centers for Disease Control and Prevention (CDC) released ventilator-associated event (VAE) definitions in 2013. The new definitions were designed to track episodes of sustained respiratory deterioration in mechanically ventilated patients after a period of stability or improvement. More than 2,000 U.S. hospitals have reported their VAE rates to the CDC, but there has been little guidance to date on how to prevent VAEs. Existing ventilator-associated pneumonia prevention bundles are unlikely to be optimal insofar as pneumonia accounts for only a minority of VAEs. This review proposes a framework and potential intervention set to prevent VAEs on the basis of studies of VAE epidemiology, risk factors, and prevention. Work to date suggests that the majority of VAEs are caused by four conditions: pneumonia, fluid overload, atelectasis, and acute respiratory distress syndrome. Interventions that minimize ventilator exposure and target one or more of these conditions may therefore prevent VAEs. Potential strategies include avoiding intubation, minimizing sedation, paired daily spontaneous awakening and breathing trials, early exercise and mobility, low tidal volume ventilation, conservative fluid management, and conservative blood transfusion thresholds. Interventional studies have thus far affirmed that minimized sedation, paired daily spontaneous awakening and breathing trials, and conservative fluid management can reduce VAE rates and improve patient-centered outcomes. Further studies are needed to evaluate the impact of the other proposed interventions, to identify additional modifiable risk factors for VAEs, and to measure whether combining strategies into VAE prevention bundles confers additional benefits over implementing one or more of these interventions in isolation.
Brain tumor segmentation using thresholding, morphological operations and extraction of features of tumor
The brain is the most important and vital organ of the human body; the control and coordination of all the other vital structures is carried out by the brain. A tumor is formed by uncontrolled cell division. Numerous techniques have been developed to detect and segment brain tumors. In this work, efficient brain tumor segmentation is carried out using thresholding and morphological operations, and features such as the centroid, perimeter and area are calculated from the segmented tumor. To detect the brain tumor, scanned MRI images are given as the input. This work helps the medical field to detect tumors, and the extracted features help in preparing a treatment plan for the patient. The paper is divided into seven sections, which are described in detail below.
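A hedged sketch of the generic thresholding-plus-morphology pipeline described above, using scikit-image; the input array, threshold choice, and structuring-element sizes are illustrative stand-ins rather than the paper's exact settings.

```python
import numpy as np
from skimage import filters, morphology, measure

mri = np.random.rand(256, 256)                               # stand-in for a grayscale MRI slice
mask = mri > filters.threshold_otsu(mri)                     # global threshold
mask = morphology.binary_opening(mask, morphology.disk(3))   # remove small specks
mask = morphology.binary_closing(mask, morphology.disk(3))   # fill small holes

labels = measure.label(mask)
regions = measure.regionprops(labels)
if regions:
    tumor = max(regions, key=lambda r: r.area)               # keep the largest region
    print("area:", tumor.area, "perimeter:", tumor.perimeter,
          "centroid:", tumor.centroid)
```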
Factorized Attention: Self-Attention with Linear Complexities
Recent works have been applying self-attention to various fields in computer vision and natural language processing. However, the memory and computational demands of existing self-attention operations grow quadratically with the spatiotemporal size of the input. This prohibits the application of self-attention on large inputs, e.g., long sequences, high-definition images, or large videos. To remedy this, this paper proposes a novel factorized attention (FA) module, which achieves the same expressive power as previous approaches with substantially less memory and computational consumption. This resource efficiency allows more widespread and flexible application. Empirical evaluations on object recognition demonstrate the effectiveness of these advantages. FA-augmented models achieved state-of-the-art performance for object detection and instance segmentation on MS-COCO. Further, the resource efficiency of FA democratizes self-attention to fields where the prohibitively high costs currently prevent its application. The state-of-the-art result for stereo depth estimation on the Scene Flow dataset exemplifies this.
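A hedged sketch of one way to obtain linear complexity in the sequence length: normalize queries and keys separately and reorder the matrix products so that the d-by-d summary K^T V is formed first, avoiding the n-by-n attention map. This follows the general recipe such modules build on; the exact normalizations and module design in the paper may differ.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

n, d = 4096, 64
Q = np.random.randn(n, d)
K = np.random.randn(n, d)
V = np.random.randn(n, d)

q = softmax(Q, axis=1)     # normalize each query over the feature dimension
k = softmax(K, axis=0)     # normalize keys over positions
context = k.T @ V          # d x d summary, cost independent of n^2
out = q @ context          # n x d output
print(out.shape)
```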
A framework for on-line trend extraction and fault diagnosis
Qualitative trend analysis (QTA) is a process-history-based, data-driven technique that works by extracting important features (trends) from the measured signals and evaluating the trends. QTA has been widely used for process fault detection and diagnosis. Recently, Dash et al. (2001, 2003) presented an interval-halving-based algorithm for off-line automatic trend extraction from a record of data, a fuzzy-logic-based methodology for trend-matching and a fuzzy-rule-based framework for fault diagnosis (FD). In this article, an algorithm for on-line extraction of qualitative trends is proposed. A framework for on-line fault diagnosis using QTA has also been presented. Some of the issues addressed are (i) development of a robust and computationally efficient QTA-knowledge-base, (ii) fault detection, (iii) estimation of the fault occurrence time, (iv) on-line trend-matching and (v) updating the QTA-knowledge-base when a novel fault is diagnosed manually. Some results for FD of the Tennessee Eastman (TE) process using the developed framework are presented.
Generating Impact-Based Summaries for Scientific Literature
In this paper, we present a study of a novel summarization problem, i.e., summarizing the impact of a scientific publication. Given a paper and its citation context, we study how to extract sentences that can represent the most influential content of the paper. We propose language modeling methods for solving this problem, and study how to incorporate features such as authority and proximity to accurately estimate the impact language model. Experimental results on a SIGIR publication collection show that the proposed methods are effective for generating impact-based summaries.
Temporally constrained ICA: an application to artifact rejection in electromagnetic brain signal analysis
Independent component analysis (ICA) is a technique which extracts statistically independent components from a set of measured signals. The technique enjoys numerous applications in biomedical signal analysis in the literature, especially in the analysis of electromagnetic (EM) brain signals. Standard implementations of ICA are restrictive mainly due to the square mixing assumption: for signal recordings which have large numbers of channels, the large number of resulting extracted sources makes the subsequent analysis laborious and highly subjective. There are many instances in neurophysiological analysis where there is strong a priori information about the signals being sought; temporally constrained ICA (cICA) can extract signals that are statistically independent, yet which are constrained to be similar to some reference signal which can incorporate such a priori information. We demonstrate this method on a synthetic dataset and on a number of artifactual waveforms identified in multichannel recordings of EEG and MEG. cICA repeatedly converges to the desired component within a few iterations and subjective analysis shows the waveforms to be of the expected morphologies and with realistic spatial distributions. This paper shows that cICA can be applied with great success to EM brain signal analysis, with an initial application in automating artifact extraction in EEG and MEG.
Feature combination strategies for saliency-based visual attention systems
Bottom-up or saliency-based visual attention allows primates to detect nonspecific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable “spotlight.” We use a model that reproduces the attentional scan paths of this spotlight. Simple multi-scale “feature maps” detect local spatial discontinuities in intensity, color, and orientation, and are combined into a unique “master” or “saliency” map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We here study the problem of combining feature maps, from different visual modalities (such as color and orientation), into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) simple normalized summation, (2) linear combination with learned weights, (3) global nonlinear normalization followed by summation, and (4) local nonlinear competition between salient locations followed by summation. Performance was measured as the number of false detections before the most salient target was found. Strategy (1) always yielded poorest performance and (2) best performance, with a threefold to eightfold improvement in time to find a salient target. However, (2) yielded specialized systems with poor generalization. Interestingly, strategy (4) and its simplified, computationally efficient approximation (3) yielded significantly better performance than (1), with up to fourfold improvement, while preserving generality.
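A hedged sketch of strategy (3): globally normalize each feature map with a nonlinear operator that promotes maps containing a few strong peaks, then sum the maps into a saliency map. The (M - mbar)^2 weighting follows the commonly cited form of this operator; the exact implementation in the paper may differ, and the input maps here are random stand-ins.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize(fmap, neighborhood=8):
    # Scale the map to [0, 1], then weight it by how much its global maximum
    # stands out from the average of its other local maxima.
    fmap = (fmap - fmap.min()) / (np.ptp(fmap) + 1e-12)
    local_max = maximum_filter(fmap, size=neighborhood)
    peaks = fmap[(fmap == local_max) & (fmap > 0)]
    M = peaks.max()
    mbar = peaks[peaks < M].mean() if (peaks < M).any() else 0.0
    return fmap * (M - mbar) ** 2

maps = [np.random.rand(60, 80) for _ in range(3)]   # intensity, color, orientation stand-ins
saliency = sum(normalize(m) for m in maps)
print(saliency.shape)
```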
The separation of anatomic components technique for the reconstruction of massive midline abdominal wall defects: anatomy, surgical technique, applications, and limitations revisited.
Reconstruction of massive abdominal wall defects has long been a vexing clinical problem. A landmark development for the autogenous tissue reconstruction of these difficult wounds was the introduction of "components of anatomic separation" technique by Ramirez et al. This method uses bilateral, innervated, bipedicle, rectus abdominis-transversus abdominis-internal oblique muscle flap complexes transposed medially to reconstruct the central abdominal wall. Enamored with this concept, this institution sought to define the limitations and complications and to quantify functional outcome with the use of this technique. During a 4-year period (July of 1991 to 1995), 22 patients underwent reconstruction of massive midline abdominal wounds. The defects varied in size from 6 to 14 cm in width and from 10 to 24 cm in height. Causes included removal of infected synthetic mesh material (n = 7), recurrent hernia (n = 4), removal of split-thickness skin graft and dense abdominal wall cicatrix (n = 4), parastomal hernia (n = 2), primary incisional hernia (n = 2), trauma/enteric sepsis (n = 2), and tumor resection (abdominal wall desmoid tumor involving the right rectus abdominis muscle) (n = 1). Twenty patients were treated with mobilization of both rectus abdominis muscles, and in two patients one muscle complex was used. The plane of "separation" was the interface between the external and internal oblique muscles. A quantitative dynamic assessment of the abdominal wall was performed in two patients by using a Cybex TEF machine, with analysis of truncal flexion strength being undertaken preoperatively and at 6 months after surgery. Patients achieved wound healing in all cases with one operation. Minor complications included superficial infection in two patients and a wound seroma in one. One patient developed a recurrent incisional hernia 8 months postoperatively. There was one postoperative death caused by multisystem organ failure. One patient required the addition of synthetic mesh to achieve abdominal closure. This case involved a thin patient whose defect exceeded 16 cm in width. There has been no clinically apparent muscle weakness in the abdomen over that present preoperatively. Analysis of preoperative and postoperative truncal force generation revealed a 40 percent increase in strength in the two patients tested on a Cybex machine. Reoperation was possible through the reconstructed abdominal wall in two patients without untoward sequela. This operation is an effective method for autogenous reconstruction of massive midline abdominal wall defects. It can be used either as a primary mode of defect closure or to treat the complications of trauma, surgery, or various diseases.
Implementing shortest path routing mechanism using Openflow POX controller
Network management is a challenging problem of wide impact, with many enterprises suffering significant financial losses. The Software Defined Networking (SDN) approach is a new paradigm that enables the management of networks with low cost and complexity. The goal of SDN is to ensure that all control-level logical decisions are taken in a centralized way, as compared to traditional networking, wherein control-level decisions are taken locally and intelligence is distributed in each switch. The aim of this paper is to present a routing solution based on the SDN architecture, implemented in an OpenFlow environment and providing shortest-path routing. The simulations have been carried out in an emulation environment based on Linux and the POX controller.
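A minimal sketch of the shortest-path computation a controller application could run over its view of the topology; POX-specific event handlers and flow-rule installation are omitted, and the switch graph and link weights are illustrative.

```python
import networkx as nx

# Controller-side view of the topology (switch names and weights are examples).
topology = nx.Graph()
topology.add_weighted_edges_from([
    ("s1", "s2", 1), ("s2", "s3", 1), ("s1", "s3", 5), ("s3", "s4", 1),
])

# Dijkstra shortest path between two switches.
path = nx.shortest_path(topology, source="s1", target="s4", weight="weight")
print(path)  # e.g. ['s1', 's2', 's3', 's4']
```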
Psoriasis: emerging therapeutic strategies
Psoriasis is a chronic inflammatory skin disorder that is characterized by thickened, scaly plaques, and is estimated to affect ∼1–3% of the Caucasian population. Traditional treatments, although effective in patients with limited disease, have numerous shortcomings, including inconvenience and toxicity. These drawbacks mean that many patients experience cycles of disease clearance, in which normal quality of life alternates with active disease and poor quality of life. However, as this review discusses, recent advances have highlighted the key role of the immune system in the pathogenesis of psoriasis, and have provided new defined targets for therapeutic intervention, offering hope for safe and effective psoriasis treatment.
Detecting Anomalous Network Traffic with Self-organizing Maps
Integrated Network-Based Ohio University Network Detective Service (INBOUNDS) is a network based intrusion detection system being developed at Ohio University. The Anomalous Network-Traffic Detection with Self Organizing Maps (ANDSOM) module for INBOUNDS detects anomalous network traffic based on the Self-Organizing Map algorithm. Each network connection is characterized by six parameters and specified as a six-dimensional vector. The ANDSOM module creates a Self-Organizing Map (SOM) having a two-dimensional lattice of neurons for each network service. During the training phase, normal network traffic is fed to the ANDSOM module, and the neurons in the SOM are trained to capture its characteristic patterns. During real-time operation, a network connection is fed to its respective SOM, and a “winner” is selected by finding the neuron that is closest in distance to it. The network connection is then classified as an intrusion if this distance is more than a pre-set threshold.
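A minimal NumPy sketch of the underlying idea: train a small self-organizing map on "normal" six-dimensional connection vectors, then flag a new vector as anomalous if its distance to the best-matching neuron exceeds a preset threshold. Grid size, learning schedule, training data, and threshold are all illustrative, not the ANDSOM settings.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 6
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def bmu(x):
    # Best-matching unit: neuron closest to the input vector.
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), d.shape)

normal_traffic = rng.random((2000, dim)) * 0.5          # stand-in training connections
for t, x in enumerate(normal_traffic):
    lr = 0.5 * (1 - t / len(normal_traffic))             # decaying learning rate
    sigma = 3.0 * (1 - t / len(normal_traffic)) + 0.5     # shrinking neighborhood
    win = np.array(bmu(x))
    dist2 = ((coords - win) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]      # neighborhood function
    weights += lr * h * (x - weights)

threshold = 0.6
test = rng.random(dim)                                    # incoming connection vector
score = np.linalg.norm(weights[bmu(test)] - test)
print("intrusion" if score > threshold else "normal", score)
```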
Object-centric Sampling for Fine-grained Image Classification
This paper proposes to go beyond the state-of-the-art deep convolutional neural network (CNN) by incorporating information from object detection, focusing on fine-grained image classification. Unfortunately, CNNs suffer from over-fitting when trained on existing fine-grained image classification benchmarks, which typically consist of less than a few tens of thousands of training images. Therefore, we first construct a large-scale fine-grained car recognition dataset that consists of 333 car classes with more than 150 thousand training images. With this large-scale dataset, we are able to build a strong baseline for CNN with a top-1 classification accuracy of 81.6%. One major challenge in fine-grained image classification is that many classes are very similar to each other while having large within-class variation. One contributing factor to the within-class variation is cluttered image background. However, existing CNN training takes uniform window sampling over the image, blind to the location of the object of interest. In contrast, this paper proposes an object-centric sampling (OCS) scheme that samples image windows based on the object location information. The challenge in using the location information lies in how to design a powerful object detector and how to handle the imperfection of detection results. To that end, we design a saliency-aware object detection approach specific to the setting of fine-grained image classification, and the uncertainty of detection results is naturally handled in our OCS scheme. Our framework is demonstrated to be very effective, improving top-1 accuracy to 89.3% (from 81.6%) on the large-scale fine-grained car classification dataset.
Towards an Understanding of Business Intelligence
Given the wide recognition of business intelligence (BI) over the last 20 years, we performed a literature review on the concept from a managerial perspective. We analysed 103 articles related to BI in the period 1990 to 2010. We found that BI is defined as a process, a product, a set of technologies, or a combination of these, which involves data, information, knowledge, decision making, related processes and the technologies that support them. Our findings show that the literature focuses mostly on data and information, and less on knowledge and decision making. Moreover, in relation to the processes there is a substantial amount of literature about gathering and storing data and information, but less about analysing and using information and knowledge, and almost nothing about acting (making decisions) based on intelligence. The research literature has mainly focused on technologies and neglected the role of the decision maker. We conclude by synthesizing a unified definition of BI and identifying possible future research streams.
Learning the kernel matrix via predictive low-rank approximations
Efficient and accurate low-rank approximations of multiple data sources are essential in the era of big data. The scaling of kernel-based learning algorithms to large datasets is limited by the O(n^2) computation and storage complexity of the full kernel matrix, which is required by most of the recent kernel learning algorithms. We present the mklaren algorithm to approximate multiple kernel matrices and learn a regression model, which is entirely based on geometrical concepts. The algorithm does not require access to full kernel matrices, yet it accounts for the correlations between all kernels. It uses Incomplete Cholesky decomposition, where pivot selection is based on least-angle regression in the combined, low-dimensional feature space. The algorithm has linear complexity in the number of data points and kernels. When the explicit feature space induced by the kernel can be constructed, a mapping from the dual to the primal Ridge regression weights is used for model interpretation. The mklaren algorithm was tested on eight standard regression datasets. It outperforms contemporary kernel matrix approximation approaches when learning with multiple kernels. It identifies relevant kernels, achieving higher explained variance than other multiple kernel learning methods for the same number of iterations. Test accuracy, equivalent to the one using full kernel matrices, was achieved at significantly lower approximation ranks. A difference in run times of two orders of magnitude was observed when either the number of samples or kernels exceeds 3000.
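A hedged illustration of why low-rank kernel approximations help: a Nystroem approximation (not the paper's mklaren algorithm, which couples pivot selection to least-angle regression across multiple kernels) replaces the full n-by-n kernel matrix with an n-by-k feature map, after which ridge regression runs in the low-dimensional space. The data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=5000)

features = Nystroem(kernel="rbf", gamma=0.5, n_components=50, random_state=0)
Z = features.fit_transform(X)            # n x 50 instead of n x n
model = Ridge(alpha=1.0).fit(Z, y)       # regression in the approximate feature space
print(model.score(Z, y))
```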
Circular markers for camera pose estimation
This paper presents a new system using circular markers to estimate the pose of a camera. Contrary to most marker-based systems using square markers, we advocate the use of circular markers, as we believe that they are easier to detect and provide a pose estimate that is more robust to noise. Unlike existing systems using circular markers, our method computes the exact pose from one single circular marker, and does not need specific points to be explicitly shown on the marker (like the center, or axes orientation). Indeed, the center and orientation are encoded directly in the marker's code. We can thus use the entire marker surface for the code design. After solving the back-projection problem for one conic correspondence, we end up with two possible poses. We show how to find the marker's code, rotation and final pose in one single step, by using a pyramidal cross-correlation optimizer. The marker tracker runs at 100 frames/second on a desktop PC and 30 frames/second on a hand-held UMPC.
A survey of multi-source energy harvesting systems
Energy harvesting allows low-power embedded devices to be powered from naturally occurring or unwanted environmental energy (e.g. light, vibration, or temperature difference). While a number of systems incorporating energy harvesters are now available commercially, they are specific to certain types of energy source. Energy availability can vary over time as well as space. To address this issue, 'hybrid' energy harvesting systems combine multiple harvesters on the same platform, but the design of these systems is not straightforward. This paper surveys their design, including trade-offs affecting their efficiency, applicability, and ease of deployment. This survey, and the taxonomy of multi-source energy harvesting systems that it presents, will be of benefit to designers of future systems. Furthermore, we identify and comment upon the current and future research directions in this field.
"You better not leave me shaming!": Conditional indirect effect analyses of anti-fat attitudes, body shame, and fat talk as a function of self-compassion in college women.
The present investigation provided a theoretically-driven analysis testing whether body shame helped account for the predicted positive associations between explicit weight bias in the form of possessing anti-fat attitudes (i.e., dislike, fear of fat, and willpower beliefs) and engaging in fat talk among 309 weight-diverse college women. We also evaluated whether self-compassion served as a protective factor in these relationships. Robust non-parametric bootstrap resampling procedures adjusted for body mass index (BMI) revealed stronger indirect and conditional indirect effects for dislike and fear of fat attitudes and weaker, marginal effects for the models inclusive of willpower beliefs. In general, the indirect effect of anti-fat attitudes on fat talk via body shame declined with increasing levels of self-compassion. Our preliminary findings may point to useful process variables to target in mitigating the impact of endorsing anti-fat prejudice on fat talk in college women and may help clarify who is at higher risk.
Bcl-xL overexpression does not enhance specific erythropoietin productivity of recombinant CHO cells grown at 33 degrees C and 37 degrees C.
Overexpression of bcl-xL in recombinant Chinese hamster ovary (rCHO) cells has been known to suppress apoptotic cell death and thereby extend culture longevity during batch culture. However, its effect on specific productivity (q) of rCHO cells is controversial. This study attempts to investigate the effect of bcl-xL overexpression on q of rCHO cells producing erythropoietin (EPO). To regulate the bcl-xL expression level, the Tet-off system was introduced in rCHO cells producing EPO (EPO-off-bcl-xL). The bcl-xL expression level was tightly controlled by doxycycline concentration. To evaluate the effect of bcl-xL overexpression on specific EPO productivity (q(EPO)) at different levels, EPO-off-bcl-xL cells were cultivated at the two different culture temperatures, 33 degrees C and 37 degrees C. The q(EPO) at 33 degrees C and 37 degrees C in the presence of 100 ng/mL doxycycline (without bcl-xL overexpression) were 4.89 +/- 0.21 and 3.18 +/- 0.06 microg/10(6)cells/day, respectively. In the absence of doxycycline, bcl-xL overexpression did not affect q(EPO) significantly, regardless of the culture temperature, though it extended the culture longevity. Taken together, bcl-xL overexpression showed no significant effect on the q(EPO) of rCHO cells grown at 33 degrees C and 37 degrees C.
Mixing Coins of Different Quality: A Game-Theoretic Approach
Cryptocoins based on public distributed ledgers can differ in their quality due to different subjective values users assign to coins depending on the unique transaction history of each coin. We apply game theory to study how qualitative differentiation between coins will affect the behavior of users interested in improving their anonymity through mixing services. We present two stylized models of mixing with perfect and imperfect information and analyze them for three distinct quality propagation policies: poison, haircut, and seniority. In the game of perfect information, mixing coins of high quality remains feasible under certain conditions, while imperfect information eventually leads to a mixing market where only coins of the lowest quality are mixed.
A grounded theory of psychological resilience in Olympic champions
Objective: Although it is well-established that the ability to manage stress is a prerequisite of sporting excellence, the construct of psychological resilience has yet to be systematically examined in athletic performers. The study reported here sought to explore and explain the relationship between psychological resilience and optimal sport performance. Design and Method: Twelve Olympic champions (8 men and 4 women) from a range of sports were interviewed regarding their experiences of withstanding pressure during their sporting careers. A grounded theory approach was employed throughout the data collection and analysis, and interview transcripts were analyzed using open, axial and selective coding. Methodological rigor was established by incorporating various verification strategies into the research process, and the resultant grounded theory was also judged using the quality criteria of fit, work, relevance, and modifiability. Results and Conclusions: Results indicate that numerous psychological factors (relating to a positive personality, motivation, confidence, focus, and perceived social support) protect the world's best athletes from the potential negative effect of stressors by influencing their challenge appraisal and meta-cognitions. These processes promote facilitative responses that precede optimal sport performance. The emergent theory provides sport psychologists, coaches and national sport organizations with an understanding of the role of resilience in athletes' lives and the attainment of optimal sport performance.
Atrial fibrillation detection using feature based algorithm and deep convolutional neural network
Aims: Electrocardiographic waveforms (ECG) are recognized as the most reliable method to detect abnormal heart rhythms such as atrial fibrillation. This task is challenging when the signals are distorted by noise. This paper presents an automatic classification algorithm to classify short lead ECGs in terms of abnormality of heart rhythm (AF or alternative rhythms) and quality (noisy recordings). Methods: To this end, baseline wander removal and a Butterworth filter are first applied to each signal as a preprocessing stage. Due to the existence of noise in the recordings, high-quality beats are selected for any further analysis using cycle quality assessment. Then, three sets of features, defined as the correlation coefficient, fractal dimension and variance of R peaks, are extracted to predict noisy recordings. Two separate approaches are employed to classify the other three classes. The first approach is a feature-based methodology and the second applies deep neural networks. In the first approach, features from different domains are extracted. The method for AF detection utilizes and characterizes variability in RR intervals, which are extracted by applying the classic Pan-Tompkins algorithm. To improve the accuracy of AF detection, atrial activity is analyzed by determining whether the P-wave is present in the signal. This is done by investigating the morphology of P-waves. Heart rate abnormality and the existence of premature beats in a signal are regarded as two characteristics to distinguish non-AF rhythms. The whole set of features is fed into a neural network classifier. The second approach uses segments of 600 samples as the input of a one-dimensional convolutional neural network. The outputs obtained from both approaches are combined using a decision table and finally the recordings are classified into three classes. Results: The proposed method is evaluated using the scoring function from the 2017 PhysioNet/CinC Challenge and achieved an overall score of 80% and 71% on the training dataset and hidden test dataset, respectively.
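A hedged sketch of the RR-interval variability idea: detect R peaks in a short single-lead ECG and compute simple irregularity statistics of the RR series (high values are suggestive of AF). Peak detection here uses a generic peak finder rather than the Pan-Tompkins algorithm applied in the paper; the signal, sampling rate, and thresholds are illustrative (the CinC 2017 recordings are sampled at 300 Hz).

```python
import numpy as np
from scipy.signal import find_peaks

fs = 300                                   # Hz
ecg = np.random.randn(30 * fs)             # stand-in for a filtered ECG segment

peaks, _ = find_peaks(ecg, distance=int(0.25 * fs), height=np.std(ecg) * 2)
rr = np.diff(peaks) / fs                   # RR intervals in seconds
if len(rr) > 1:
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
    cv = np.std(rr) / np.mean(rr)                # coefficient of variation of RR intervals
    print(rmssd, cv)
```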
2-Point-based outlier rejection for camera-IMU systems with applications to micro aerial vehicles
This paper presents a novel method to perform the outlier rejection task between two different views of a camera rigidly attached to an Inertial Measurement Unit (IMU). Only two feature correspondences and gyroscopic data from IMU measurements are used to compute the motion hypothesis. By exploiting this 2-point motion parametrization, we propose two algorithms to remove wrong data associations in the feature-matching process for the case of 6DoF motion. We show that in the case of a monocular camera mounted on a quadrotor vehicle, motion priors from the IMU can be used to discard wrong estimations in the framework of a 2-point-RANSAC based approach. The proposed methods are evaluated on both synthetic and real data.
Encrypted accelerated least squares regression
Information that is stored in an encrypted format is, by definition, usually not amenable to statistical analysis or machine learning methods. In this paper we present detailed analysis of coordinate and accelerated gradient descent algorithms which are capable of fitting least squares and penalised ridge regression models, using data encrypted under a fully homomorphic encryption scheme. Gradient descent is shown to dominate in terms of encrypted computational speed, and theoretical results are proven to give parameter bounds which ensure correctness of decryption. The characteristics of encrypted computation are empirically shown to favour a non-standard acceleration technique. This demonstrates the possibility of approximating conventional statistical regression methods using encrypted data without compromising privacy.
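A plaintext sketch of the underlying computation: full-batch gradient descent for ridge regression using only additions and multiplications, the kind of arithmetic circuit that can in principle be evaluated under a fully homomorphic encryption scheme. The encryption layer itself, and the acceleration and parameter bounds analysed in the paper, are omitted; data and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)

lam, lr, n = 0.1, 0.01, len(y)
beta = np.zeros(X.shape[1])
for _ in range(500):
    grad = (X.T @ (X @ beta - y)) / n + lam * beta   # ridge gradient
    beta = beta - lr * grad                          # plain gradient step

print(beta)
```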
Methodological issues in the study of broader consequences
The study of the broader effects of international regimes is just beginning. For a long time, regime analysts operated with a two-fold fiction, namely that a regime could be established largely in isolation from other regimes and that its consequences were confined to its own domain. In the real world, the international system is increasingly densely populated by international governing institutions. A study elaborated for the Rio Summit of 1992 counted more than 125 important multilateral environmental regimes alone, most of which were institutionalized separately from each other (Sand 1992). Every year, states conclude about five new important environmental agreements (Beisheim et al. 1999: 350-51). Against the backdrop of this trend and the sheer number of independently established international regimes, it is difficult to imagine that interaction among regimes is an irrelevant phenomenon.
Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation
An Optimization Framework for Remapping and Reweighting Noisy Relevance Labels
Relevance labels are an essential part of any learning-to-rank framework. The rapid development of crowdsourcing platforms has led to a significant reduction in the cost of manual labeling. This makes it possible to collect very large sets of labeled documents to train a ranking algorithm. However, relevance labels acquired via crowdsourcing are typically coarse and noisy, so certain consensus models are used to measure the quality of labels and to reduce the noise. This noise is likely to affect a ranker trained on such labels, and, since none of the existing consensus models directly optimizes ranking quality, one has to apply some heuristics to utilize the output of a consensus model in a ranking algorithm, e.g., to use majority voting among workers to get consensus labels. The major goal of this paper is to unify existing approaches to consensus modeling and noise reduction within a learning-to-rank framework. Namely, we present a machine learning algorithm aimed at improving the performance of a ranker trained on a crowdsourced dataset by proper remapping of labels and reweighting of samples. In the experimental part, we use several characteristics of workers/labels extracted via various consensus models in order to learn the remapping and reweighting functions. Our experiments on a large-scale dataset demonstrate that we can significantly improve state-of-the-art machine-learning algorithms by incorporating our framework.
Direction matters: hand pose estimation from local surface normals
We present a hierarchical regression framework for estimating hand joint positions from single depth images based on local surface normals. The hierarchical regression follows the tree structured topology of hand from wrist to finger tips. We propose a conditional regression forest, i.e. the Frame Conditioned Regression Forest (FCRF) which uses a new normal difference feature. At each stage of the regression, the frame of reference is established from either the local surface normal or previously estimated hand joints. By making the regression with respect to the local frame, the pose estimation is more robust to rigid transformations. We also introduce a new efficient approximation to estimate surface normals. We verify the effectiveness of our method by conducting experiments on two challenging real-world datasets and show consistent improvements over previous discriminative pose estimation methods.
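A hedged sketch of estimating per-pixel surface normals from a depth image by crossing the local tangent vectors along the image axes; this is a common finite-difference baseline, not the paper's more efficient approximation, and the depth map and focal lengths are illustrative.

```python
import numpy as np

depth = np.random.rand(120, 160).astype(np.float32)   # stand-in depth map (meters)
fx = fy = 100.0                                        # illustrative focal lengths

dzdx = np.gradient(depth, axis=1)                      # depth change along image x
dzdy = np.gradient(depth, axis=0)                      # depth change along image y
# Local tangent vectors under a simplified camera approximation.
tx = np.stack([np.full_like(depth, 1.0 / fx), np.zeros_like(depth), dzdx], axis=-1)
ty = np.stack([np.zeros_like(depth), np.full_like(depth, 1.0 / fy), dzdy], axis=-1)
normals = np.cross(tx, ty)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
print(normals.shape)   # (H, W, 3), unit normals
```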
Handheld or Handsfree?: Remote Collaboration via Lightweight Head-Mounted Displays and Handheld Devices
Emerging wearable and mobile communication technologies, such as lightweight head-mounted displays (HMDs) and handheld devices, promise support for everyday remote collaboration. Despite their potential for widespread use, their effectiveness as collaborative tools is unknown, particularly in physical tasks involving mobility. To better understand their impact on collaborative behaviors, perceptions, and performance, we conducted a two-by-two (technology type: HMD vs. tablet computer; task setting: static vs. dynamic) between-subjects study where participants (n=66) remotely collaborated as "helper" and "worker" pairs in the construction of a physical object. Our results showed that, in the dynamic task, HMD use enabled helpers to offer more frequent directing commands and more proactive assistance, resulting in marginally faster task completion. In the static task, while tablet use helped convey subtle visual information, helpers and workers had conflicting perceptions of how the two technologies contributed to their success. Our findings offer strong design and research implications, underlining the importance of a consistent view of the shared workspace and the differential support collaborators with different roles receive from technologies.
Evolved Policy Gradients
We propose a metalearning approach for learning gradient-based reinforcement learning (RL) algorithms. The idea is to evolve a differentiable loss function, such that an agent, which optimizes its policy to minimize this loss, will achieve high rewards. The loss is parametrized via temporal convolutions over the agent’s experience. Because this loss is highly flexible in its ability to take into account the agent’s history, it enables fast task learning. Empirical results show that our evolved policy gradient algorithm (EPG) achieves faster learning on several randomized environments compared to an off-the-shelf policy gradient method. We also demonstrate that EPG’s learned loss can generalize to out-of-distribution test time tasks, and exhibits qualitatively different behavior from other popular metalearning algorithms.
History of Pediatric Hematology Oncology
Pediatric Hematology Oncology as a specialty was possible because of the evolution of the science of Hematology, which developed microscopy for describing blood cell morphology and methods for quantitation of these elements. Before pediatric blood diseases could be defined, it was necessary to establish the normal blood values of infancy and childhood. The unique features of the blood of the newborn were the focus of many of the early studies. After normal values were established, specific blood disease and hematologic syndromes of children began to be described in Europe and the United States. Pediatric Hematology Oncology is a broad and complex area that encompasses perturbations of the several-formed elements of the blood and their precursors in the bone marrow, as well as the coagulation-fibrinolytic systems in the plasma, the reticuloendothelial system, and malignancies of the blood and solid tissues and organs. The interactions of the blood and nutrition have long been important areas of study. Advances in Pediatric Oncology have been particularly spectacular in the last 50 years. Using multi-modal therapy including combination chemotherapy, more than 80% of children with cancer can now be cured. During the last 50 years, Pediatric Hematology Oncology has increasingly used tools of the “new biology”: immunology, biochemistry, enzymology, genetics and molecular genetics, and others. During the last century, many diseases have been recognized and defined by biochemical and genetic mechanisms, and in some instances they have been prevented or cured.
Algorithms and applications for approximate nonnegative matrix factorization
In this paper we discuss the development and use of low-rank approximate nonnegative matrix factorization (NMF) algorithms for feature extraction and identification in the fields of text mining and spectral data analysis. The evolution and convergence properties of hybrid methods based on both sparsity and smoothness constraints for the resulting nonnegative matrix factors are discussed. The interpretability of NMF outputs in specific contexts is provided, along with opportunities for future work in the modification of NMF algorithms for large-scale and time-varying datasets.
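For concreteness, a minimal sketch of the basic multiplicative-update NMF iteration (the Lee-Seung updates for the Frobenius objective); the hybrid algorithms discussed in the paper add sparsity and smoothness constraints on top of this kind of baseline, and the data and rank here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(100, 40)))     # nonnegative data (e.g., term-document matrix)
k = 5
W = np.abs(rng.normal(size=(100, k)))
H = np.abs(rng.normal(size=(k, 40)))

eps = 1e-9
for _ in range(200):
    H *= (W.T @ A) / (W.T @ W @ H + eps)   # multiplicative update for H
    W *= (A @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W

print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative reconstruction error
```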
On the Factorization of RSA-120
We present data concerning the factorization of the 120-digit number RSA-120, which we factored on July 9, 1993, using the quadratic sieve method. The factorization took approximately 825 MIPS years and was completed within three months real time. At the time of writing RSA-120 is the largest integer ever factored by a general purpose factoring algorithm. We also present some conservative extrapolations to estimate the difficulty of factoring even larger numbers, using either the quadratic sieve method or the number field sieve, and discuss the issue of the crossover point between these two methods. Evaluation of integer factoring algorithms, both from a theoretical and practical point of view, is of great importance for anyone interested in the security of factoring-based public key cryptosystems. In this paper we concentrate on the practical aspects of factoring. Furthermore, we restrict ourselves to general purpose factoring algorithms, i.e., algorithms that do not rely on special properties the numbers to be factored or their factors might have. These are the algorithms that are most relevant for cryptanalysis. Currently the two leading general purpose factoring algorithms are the quadratic sieve (QS) and the number field sieve (NFS), cf. [12] and [2]. Throughout this paper, NFS is the generalized version (from [2]) of the original algorithm; the latter algorithm is much faster, but can only be applied to composites of a very special form, cf. [9]. Let L_x[a, b] = exp((b + o(1)) (log x)^a (log log x)^(1-a)) for real a, b, x, and x -> infinity. To factor an odd integer n > 1 which is not a prime power, QS runs in time L_n[1/2, 1].
Designing Engaging Games Using Bayesian Optimization
We use Bayesian optimization methods to design games that maximize user engagement. Participants are paid to try a game for several minutes, at which point they can quit or continue to play voluntarily with no further compensation. Engagement is measured by player persistence, projections of how long others will play, and a post-game survey. Using Gaussian process surrogate-based optimization, we conduct efficient experiments to identify game design characteristics---specifically those influencing difficulty---that lead to maximal engagement. We study two games requiring trajectory planning; the difficulty of each is determined by a three-dimensional continuous design space. Two of the design dimensions manipulate the game in a user-transparent manner (e.g., the spacing of obstacles), the third in a subtle and possibly covert manner (incremental trajectory corrections). Converging results indicate that overt difficulty manipulations are effective in modulating engagement only when combined with the covert manipulation, suggesting the critical role of a user's self-perception of competence.
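A hedged sketch of Gaussian-process surrogate-based optimization over a continuous design space: fit a GP to observed (design, engagement) pairs and pick the next design by expected improvement over a random candidate set. The engagement function, design bounds, kernel, and candidate scheme are illustrative stand-ins, not the study's actual setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def engagement(x):                      # unknown objective (a toy stand-in here)
    return -np.sum((x - 0.6) ** 2) + 0.05 * rng.normal()

X = rng.random((5, 3))                  # initial designs in [0, 1]^3
y = np.array([engagement(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    cand = rng.random((2000, 3))                         # random candidate designs
    mu, sd = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / (sd + 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)    # expected improvement
    x_next = cand[ei.argmax()]
    X = np.vstack([X, x_next])
    y = np.append(y, engagement(x_next))

print(X[y.argmax()], y.max())
```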
Commonsense Causal Reasoning Using Millions of Personal Stories
The personal stories that people write in their Internet weblogs include a substantial amount of information about the causal relationships between everyday events. In this paper we describe our efforts to use millions of these stories for automated commonsense causal reasoning. Casting the commonsense causal reasoning problem as a Choice of Plausible Alternatives, we describe four experiments that compare various statistical and information retrieval approaches to exploit causal information in story corpora. The top performing system in these experiments uses a simple co-occurrence statistic between words in the causal antecedent and consequent, calculated as the Pointwise Mutual Information between words in a corpus of millions of personal stories.
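A minimal sketch of the co-occurrence statistic described above: pointwise mutual information between words, estimated from story-level co-occurrence counts in a toy corpus. The corpus and the windowing scheme (whole-story co-occurrence) are illustrative simplifications.

```python
import math
from collections import Counter
from itertools import combinations

stories = [
    "i forgot my umbrella so i got wet in the rain",
    "the rain started and everyone got wet",
    "i studied hard and passed the exam",
]

word_counts = Counter()
pair_counts = Counter()
for story in stories:
    words = set(story.split())                 # co-occurrence within a story
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

n = len(stories)
def pmi(w1, w2):
    joint = pair_counts[frozenset((w1, w2))] / n
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((word_counts[w1] / n) * (word_counts[w2] / n)))

print(pmi("rain", "wet"), pmi("rain", "exam"))
```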
Analysis of the Contribution of Agricultural Sector on the Nigerian Economic Development
The agricultural sector is seen as an engine that contributes to the growth of the overall economy of Nigeria; despite this, the sector is still characterized by low yields, low levels of inputs and limited areas under cultivation, due to the government's dependence on a mono-cultural economy based on oil. This paper is an attempt to examine the impact of the agricultural sector on the Nigerian economy. The panel of data used was sourced from the statistical bulletin of the Central Bank of Nigeria and the World Bank's development indicators, and multiple regression was used to analyze the data. The results indicated a positive relationship between Gross Domestic Product (GDP) and domestic saving, government expenditure on agriculture and foreign direct investment over the period 1986-2007. It was also revealed in the study that 81% of the variation in GDP could be explained by domestic savings, government expenditure and foreign direct investment. In order to improve the agricultural sector, it is recommended that the government provide more funding for agricultural universities in Nigeria to carry out research in all areas of agricultural production; this will lead to more exports and improvement in the competitiveness of Nigerian agricultural production in international markets. The Central Bank of Nigeria should also come up with a stable policy for loan disbursement to farmers at a reasonable interest payback.
Dimensionality reduction for large-scale neural recordings
Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.
Memory-augmented Chinese-Uyghur neural machine translation
Neural machine translation (NMT) has achieved notable performance recently. However, this approach has not been widely applied to the translation task between Chinese and Uyghur, partly due to the limited parallel data resource and the large proportion of rare words caused by the agglutinative nature of Uyghur. In this paper, we collect ∼200,000 sentence pairs and show that with this middle-scale database, an attention-based NMT can perform very well on Chinese-Uyghur/Uyghur-Chinese translation. To tackle rare words, we propose a novel memory structure to assist the NMT inference. Our experiments demonstrated that the memory-augmented NMT (M-NMT) outperforms both the vanilla NMT and the phrase-based statistical machine translation (SMT). Interestingly, the memory structure provides an elegant way for dealing with words that are out of vocabulary.
Thermoelectric properties and nonstoichiometry of GaGeTe
Polycrystalline samples of composition Ga1+xGe1-xTe (x = −0.03 to 0.07) were synthesized from elements of 5N purity using solid state reaction. The products of synthesis were identified by X-ray diffraction. The samples for transport measurements were prepared using hot-pressing. They were characterized by measurements of electrical conductivity, Hall coefficient and Seebeck coefficient over the temperature range 80–450 K, and of thermal conductivity over the temperature range 300–500 K. The samples all show p-type conductivity, and we observe an increase in hole concentration with increasing x (Ga content). We discuss the influence of the Ga/Ge ratio on the phase purity of the samples and the free carrier concentration. The investigation of thermoelectric properties shows that the ZT parameter of these samples is low at room temperature but increases distinctly with temperature.
Accelerometer Based Joint Step Detection and Adaptive Step Length Estimation Algorithm Using Handheld Devices
Pedestrian inertial navigation systems are generally based on the Pedestrian Dead Reckoning (PDR) algorithm. Considering the physiological characteristics of pedestrian movement, we use the cyclical characteristics and statistics of the acceleration waveform, together with features associated with the walking speed, to estimate the stride length. Due to the randomness of pedestrian hand-held habits, step events cannot always be detected by using the periods of zero velocity updates (ZUPTs). Furthermore, the signal patterns of the sensor can differ significantly depending on the carrying mode and the user's hand motion. Hence, step detection and an associated adaptive step length model using a handheld device equipped with an accelerometer are required to obtain highly accurate measurements. To achieve this goal, a compositional algorithm combining an empirical formula and a back-propagation neural network on handheld devices is proposed to estimate the step length, while achieving a step-detection accuracy higher than 98%. The proposed joint step detection and adaptive step length estimation algorithm can help much in the development of Pedestrian Navigation Devices (PNDs) based on handheld inertial sensors.
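A hedged sketch of accelerometer-based step detection plus an empirical step length estimate: find peaks in the acceleration magnitude and apply a Weinberg-style formula, step_length ~ K * (a_max - a_min)^(1/4), per step. This is a generic baseline rather than the paper's combined empirical/neural model; the data, sampling rate, thresholds, and K are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                                 # Hz
acc = np.random.randn(10 * fs, 3) * 0.5 + [0, 0, 9.8]    # stand-in accelerometer data
mag = np.linalg.norm(acc, axis=1)                        # magnitude removes orientation dependence

peaks, _ = find_peaks(mag, height=mag.mean() + 0.5 * mag.std(),
                      distance=int(0.3 * fs))            # at most ~3 steps per second
K = 0.45
step_lengths = []
for p in peaks:
    window = mag[max(0, p - fs // 2): p + fs // 2]
    step_lengths.append(K * (window.max() - window.min()) ** 0.25)

print("steps:", len(peaks), "distance (m):", sum(step_lengths))
```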
The Benefit of Additional Opinions
In daily decision making, people often solicit one another’s opinions in the hope of improving their own judgment. According to both theory and empirical results, integrating even a few opinions is beneficial, with the accuracy gains diminishing as the bias of the judges or the correlation between their opinions increases. Decision makers using intuitive policies for integrating others’ opinions rely on a variety of accuracy cues in weighting the opinions they receive. They tend to discount dissenters and to give greater weight to their own opinion than to other people’s opinions. KEYWORDS—judgment and decision making; aggregating opinions; combining information It is common practice to solicit other people’s opinions prior to making a decision. An editor solicits two or three qualified reviewers for their opinions on a manuscript; a patient seeks a second opinion regarding a medical condition; a manager considers several judgmental forecasts of the market before embarking on a new venture. All these situations involve the decision maker in the task of combining other people’s opinions, mostly so as to improve the final decision. People also seek advice when they feel strongly accountable for their decisions. An accountant performing a complex audit might solicit advice to help justify his or her decisions and share the responsibility for the outcome with others. One could justifiably argue, however, that even such reasons for seeking others’ opinions are rooted in the belief that this process could improve decision making. Two main questions arise in the research on combining opinions. One involves the statistical aspects of the combination task: Under what conditions does combining opinions improve decision quality? The other concerns the psychological process of combining judgments: How do judges utilize other people’s opinions? These questions, which have been investigated by students of judgment and decision making, statistics, economics, and management, are intertwined, because the quality of the product is related to the way it is produced. In this review, I discuss what researchers have learned about the process and outcomes of combining opinions. Our focus here is on situations in which a decision maker seeks quantitative estimates, judgments, and forecasts from people possessing the relevant knowledge. The opinions are then combined by the individual decision maker, not by a group (decision making in groups deserves a separate discussion). It is useful to distinguish between two ways in which expert judgments can be combined: (a) intuitively (subjectively) and (b) mechanically (formally), that is, by using a consistent formula, such as simple or weighted averaging. ACCURACY GAINS FROM AGGREGATION Research has demonstrated repeatedly that both mechanical and intuitive methods of combining opinions improve accuracy. For example, in a study of inflation forecasts, the aggregate judgment created by averaging the forecasts of expert economists was more accurate than most of these individual forecasts, though not as good as the best ones (Zarnowitz, 1984). The best forecasts, however, could not be identified before the true value became known. Hence, taking the average was superior to selecting the judgment of any of the individuals. Moreover, a small number of opinions (e.g., three to six) is typically sufficient to realize most of the accuracy gains obtainable by aggregation. 
These fundamental results have been demonstrated in diverse domains, ranging from perception (line lengths) and general-knowledge tasks (historical dates) to business and economics (sales or inflation forecasts), and are an important reason for the broad interest in research on combining estimates (Johnson, Budescu, & Wallsten, 2001; Sorkin, Hayes, & West, 2001; Yaniv & Kleinberger, 2000). How Does Combining Opinions Improve Judgment? The improvement in accuracy is grounded in statistical principles, as well as psychological facts. For quantitative estimates, a common measure of accuracy is the average distance of the prediction from the event predicted. In the special case of judgments made on an arbitrary rating scale (e.g., an interviewer's rating of a job candidate's capability on a 9-point scale), a common measure is the correlation between the judgments and some objective outcome (e.g., the candidate's actual success). In the case of quantitative estimates, it can be outlined in simple terms why improvement is to be expected from combining estimates. A subjective estimate about an objective event can be viewed as the sum of three components: the "truth," random error (random fluctuations in a judge's performance), and constant bias (a consistent tendency to over- or underestimate the event). (More complex methods based on Bayes's theorem are less common in psychological research on combining opinions; hence, they are not treated here.) Statistical principles guarantee that judgments formed by averaging several sources have lower random error than the individual sources on which the averages are based. Therefore, if the bias is small or zero, the average judgment is expected to converge about the truth (Einhorn, Hogarth, & Klempner, 1977). The case of categorical, binary judgments (e.g., a physician inspects a picture of a tumor and estimates whether it is benign or malignant) requires a special mention. Suppose a decision maker polls the judgments of N independent expert judges whose individual accuracy levels (chances of choosing the correct answer) are greater than 50% and then decides according to the majority. For example, three experts might judge whether or not a witness is lying, and the final decision would be the opinion supported by two or more experts. According to a well known 18th-century theorem (known as Condorcet's jury theorem), the accuracy of the majority increases rapidly toward 100% as N increases (e.g., Sorkin et al., 2001). Thus, the majority outperforms the individual judges. For instance, the majority choice of five independent experts who are each correct 65% of the time is expected to be correct approximately 76% of the time. Conditions Under Which Accuracy Gains Are Observed A central condition for obtaining optimal accuracy gains through aggregation is that the experts are independent (e.g., little gain is expected if judge B is essentially a replica of judge A). But gains of appreciable size can be observed even when there are low or moderate positive correlations between the judgments of the experts (Johnson et al., 2001). The gains from aggregating quantitative judgments are also determined by the bias and the random error of the estimates (the lower the better).
If judgments are made on rating scales, then the accuracy gains are related directly to the validity of each judge (i.e., how the judge's ratings correlate with the objective value of what is rated) and indirectly to the correlations between different judges' ratings (Einhorn et al., 1977; Hogarth, 1978; Johnson et al., 2001). Number of Opinions Needed As already noted, as few as three to six judgments might suffice to achieve most of what can be gained from averaging a larger number of opinions. This puzzling result that adding opinions does not contribute much to accuracy is related to my previous comments. Some level of dependence among experts is present in almost any realistic situation (their opinions tend to have some degree of correlation for a variety of reasons: they may rely on similar information sources or have similar backgrounds, or simply consult one another; cf. Soll, 1999). Therefore, the benefits accrued from polling more experts diminish rapidly, with each additional one amounting to "more of the same." Similarly, bias or low judge validity limits the potential accuracy gains and further diminishes the value of added opinions.
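A small simulation of the statistical point made above, under illustrative assumptions: each judge's estimate is truth plus a shared bias plus independent noise, and averaging a handful of judges shrinks the random-error component while leaving the bias untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
truth, bias, noise_sd = 100.0, 2.0, 10.0
trials, n_judges = 10_000, 5

judges = truth + bias + rng.normal(0, noise_sd, size=(trials, n_judges))
single_err = np.abs(judges[:, 0] - truth).mean()        # error of one judge
average_err = np.abs(judges.mean(axis=1) - truth).mean()  # error of the averaged judgment
print(single_err, average_err)   # the averaged judgment is closer to the truth
```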
NMDA hypofunction in the posterior cingulate as a model for schizophrenia: an exploratory ketamine administration study in fMRI
BACKGROUND Based on animal data, NMDA receptor hypofunction has been suggested as a model for positive symptoms in schizophrenia. NMDA receptor hypofunction affects several corticolimbic brain regions, of which the posterior cingulate seems to be the most sensitive. However, empirical support for a crucial role of posterior cingulate NMDA hypofunction in the pathophysiology of positive symptoms is still missing in humans. We therefore conducted an fMRI study using the NMDA antagonist ketamine in healthy human subjects during episodic memory retrieval, which is supposed to activate the posterior cingulate. METHODS We investigated 16 healthy subjects who were assigned to either the placebo (n = 7; saline) group or the ketamine (n = 9; 0.6 mg/kg/h) group in a double-blind study design. All subjects received their infusion while performing an episodic memory retrieval task in the scanner. Immediately after the fMRI session, psychopathological effects of ketamine were measured using the Altered States of Consciousness Questionnaire. RESULTS The placebo group showed BOLD signal increases in the posterior and anterior cingulate during retrieval. Signal increases were significantly lower in the ketamine group. Lower signal increases in the posterior cingulate correlated significantly with positive (i.e. psychosis-like) symptoms induced by ketamine. CONCLUSION The present study demonstrates for the first time a relationship between NMDA receptors, the posterior cingulate and positive (i.e. psychosis-like) symptoms in humans. Confirming findings from animal studies, it supports the hypothesis of a pathophysiological role of NMDA receptor hypofunction in the posterior cingulate in schizophrenia.
LLC resonant converter using a planar transformer with new core shape
In this paper, a low-profile LLC resonant converter with two transformers using a planar core is proposed for a slim switching-mode power supply (SMPS). Design procedures, magnetic modeling and voltage gain characteristics of the proposed planar transformer and converter are described in detail. The two planar-core transformers of the LLC resonant converter are connected in series at the primary and in parallel, through center-tapped windings, at the secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300 W LLC resonant converter is designed and tested.
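For reference, the two characteristic resonant frequencies that govern the gain of a generic LLC tank (standard relations, not specific to the planar design above) are, in LaTeX notation:

    f_{r1} = \frac{1}{2\pi\sqrt{L_r C_r}}, \qquad
    f_{r2} = \frac{1}{2\pi\sqrt{(L_r + L_m)\,C_r}}

where L_r is the resonant (leakage) inductance, L_m the magnetizing inductance, and C_r the resonant capacitance; operating the switching frequency between f_{r2} and f_{r1} is what provides the boost-capable, soft-switched gain region exploited in such designs.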
Occupational therapy effects on visual-motor skills in preschool children.
OBJECTIVE The purpose of this study was to evaluate the assumption that preschool children who receive occupational therapy will demonstrate significant improvement in their visual-motor skills as measured on the Developmental Test of Visual-Motor Integration (VMI) and the two supplemental Visual Perception and Motor Coordination tests. METHOD Preschool children with developmental delays (n = 12) received occupational therapy for a minimum of one individual 30-minute session and one 30-minute group session per week for 1 school year. Their performance was compared to that of two control groups: preschool students without disabilities who received occupational therapy (n = 16) for one 30-minute group session per week, and students without disabilities (n = 15) who received no occupational therapy. The VMI and two supplemental tests were administered three times to each student, at the beginning, middle, and end of the school year. RESULTS Planned comparison tests showed that students with developmental delays demonstrated statistically significant improvement in visual-motor skills and developed skills at a rate faster than expected when compared to typically developing peers on the VMI. On the VMI and the Visual Perception supplemental test, the effect size for preschool students without disabilities who received occupational therapy exceeded that for the preschool students without disabilities who received no therapy, although the difference in the post-test performance of these two groups was not statistically significant. DISCUSSION The results of this study demonstrate that intervention, including occupational therapy, can effectively improve visual-motor skills in preschool-aged children.
Family and youth factors associated with health beliefs and health outcomes in youth with type 1 diabetes.
OBJECTIVE To examine the association of family organization with metabolic control in adolescents with type 1 diabetes through the mechanisms of family self-efficacy for diabetes and disease management. METHOD Data from the baseline assessment of a longitudinal RCT were used, wherein 257 adolescent-parent dyads (adolescents aged 11-14) each completed the family organization subscale of the Family Environment Scale, the self-efficacy for Diabetes Self-Management Scale, the Diabetes Behavior Rating Scale, and 2 24-hr diabetes interviews. RESULTS Structural equation modeling showed greater family organization was associated indirectly with better disease management behaviors via greater family self-efficacy (β = .38, p < .001). Greater self-efficacy was indirectly associated with better metabolic control via better disease management both concurrently (β = -.37, p < .001) and prospectively (β = -.26, p < .001). The full model indicates more family organization is indirectly associated with better metabolic control concurrently and prospectively through greater self-efficacy and better disease management (β = -.13, p < .001). CONCLUSIONS Understanding the mechanisms by which family organization is associated with metabolic control provides insight into possible avenues of prevention/intervention for better diabetes management.
Statistical exponential families: A digest with flash cards
This document concisely describes the ubiquitous class of exponential family distributions met in statistics. The first part recalls definitions and summarizes main properties and duality with Bregman divergences (all proofs are skipped). The second part lists decompositions and related formulas of common exponential family distributions. We recall the Fisher-Rao-Riemannian geometries and the dual affine-connection information geometries of statistical manifolds. It is intended to maintain and update this document and catalog by adding new distribution items. See the jMEF library, a Java package for processing mixtures of exponential families, available for download at http://www.lix.polytechnique.fr/~nielsen/MEF/
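As a reminder of the canonical decomposition the digest is organized around (standard textbook material, reproduced here for convenience), an exponential family density can be written in LaTeX notation as:

    p_F(x;\theta) = \exp\big( \langle t(x), \theta \rangle - F(\theta) + k(x) \big)

where \theta is the natural parameter, t(x) the sufficient statistic, k(x) the carrier term, and F the log-normalizer; the Bregman divergence B_F and the dual (expectation) parameter \eta = \nabla F(\theta) underlie the duality mentioned above.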
Evaluation of an ambulatory system for gait analysis in hip osteoarthritis and after total hip replacement
Spatial and temporal parameters of gait have clinical relevance in the assessment of motor pathologies, particularly in orthopaedics. A new gait analysis system is proposed which consists of (a) an ambulatory device (Physilog®) including a set of miniature gyroscopes and a portable datalogger, and (b) an algorithm for gait analysis. The aim of this study was the validation of this system, for accuracy and clinical applicability. Eleven patients with coxarthrosis, eight patients with total hip arthroplasty and nine control subjects were studied using this portable system and also a reference motion analyzer and force plate. The small differences in the stance period (19 ± 31 ms), stride length and velocity (0.4 ± 9.6 cm and 2.5 ± 8.3 cm/s, respectively), as well as thigh and shank rotations (2.4 ± 4.3° and 0.3 ± 3.3°, respectively), confirmed good agreement of the proposed system with the reference system. In addition, nearly the same accuracy was obtained for all three groups. Gait analysis based on Physilog® was also in agreement with the subjects' Harris Hip Scores (HHS): the subjects with lower scores had a greater limp, a slower walking speed and a shorter stride. This ambulatory gait analysis system provides an easy, reproducible and objective method of quantifying changes in gait after joint replacement surgery for coxarthrosis.
Identifying and Tracking Sentiments and Topics from Social Media Texts during Natural Disasters
We study the problem of identifying the topics and sentiments and tracking their shifts from social media texts in different geographical regions during emergencies and disasters. We propose a location-based dynamic sentiment-topic model (LDST) which can jointly model topic, sentiment, time and geolocation information. The experimental results demonstrate that LDST performs very well at discovering topics and sentiments from social media and tracking their shifts in different geographical regions during emergencies and disasters.
Sperm competition and its evolutionary consequences in humans
Sexual selection is the mechanism that favors an increase in the frequency of alleles associated with reproduction (Darwin, 1871). Darwin distinguished sexual selection from natural selection, but today most evolutionary scientists combine the two concepts under the name natural selection. Sexual selection is composed of intrasexual competition (competition between members of the same sex for sexual access to members of the opposite sex) and intersexual selection (differential mate choice of members of the opposite sex). Because research focused mainly on precopulatory adaptations associated with intrasexual competition and intersexual selection, postcopulatory sexual selection was largely ignored even a century after the presentation of sexual selection theory. Parker (1970) was the first to recognize that male–male competition may continue even after the initiation of copulation, when males compete for fertilizations. More recently, Thornhill (1983) and others (e.g. Eberhard, 1996) recognized that intersexual selection may also continue after the initiation of copulation, when a female biases paternity between two or more males' sperm. The competition between males for fertilization of a single female's ova is known as sperm competition (Parker, 1970), and the selection of sperm from two or more males by a single female is known as cryptic female choice (Eberhard, 1996; Thornhill, 1983). Although sperm competition and cryptic female choice together compose postcopulatory sexual selection (see Table 6.1), sperm competition is often used in reference to both processes (e.g. Baker & Bellis, 1995; Birkhead & Møller, 1998; Simmons, 2001; Shackelford, Pound, & Goetz, 2005). In this chapter, we review the current state of knowledge regarding human sperm competition (and see Shackelford et al., 2005).
Designing culturally situated technologies for the home
As digital technologies proliferate in the home, the Human-Computer Interaction (HCI) community has turned its attention from the workplace and productivity tools towards domestic design environments and non-utilitarian activities. In the workplace, applications tend to focus on productivity and efficiency and involve relatively well-understood requirements and methodologies, but in domestic design environments we are faced with the need to support new classes of activities. While usability is still central to the field, HCI is beginning to address considerations such as pleasure, fun, emotional effect, aesthetics, the experience of use, and the social and cultural impacts of new technologies. These considerations are particularly relevant to the home, where technologies are situated or embedded within an ecology that is rich with meaning and nuance. The aim of this workshop is to explore ways of designing domestic technology by incorporating an awareness of cultural context, accrued social meanings, and user experience.
Brain Activation during Face Perception: Evidence of a Developmental Change
Behavioral studies suggest that children under age 10 process faces using a piecemeal strategy based on individual distinctive facial features, whereas older children use a configural strategy based on the spatial relations among the face's features. The purpose of this study was to determine whether activation of the fusiform gyrus, which is involved in face processing in adults, is greater during face processing in older children (12-14 years) than in younger children (8-10 years). Functional MRI scans were obtained while children viewed faces and houses. A developmental change was observed: Older children, but not younger children, showed significantly more activation in bilateral fusiform gyri for faces than for houses. Activation in the fusiform gyrus correlated significantly with age and with a behavioral measure of configural face processing. Regions believed to be involved in processing basic facial features were activated in both younger and older children. Some evidence was also observed for greater activation for houses versus faces for the older children than for the younger children, suggesting that processing of these two stimulus types becomes more differentiated as children age. The current results provide biological insight into changes in visual processing of faces that occur with normal development.
Interpretable Visual Question Answering by Reasoning on Dependency Trees
Collaborative reasoning for understanding each image-question pair is very critical but underexplored for an interpretable visual question answering system. Although very recent works also attempted to use explicit compositional processes to assemble multiple subtasks embedded in the questions, their models heavily rely on annotations or handcrafted rules to obtain valid reasoning processes, leading to either heavy workloads or poor performance on composition reasoning. In this paper, to better align image and language domains in diverse and unrestricted cases, we propose a novel neural network model that performs global reasoning on a dependency tree parsed from the question, and we thus phrase our model as parse-tree-guided reasoning network (PTGRN). This network consists of three collaborative modules: i) an attention module to exploit the local visual evidence for each word parsed from the question, ii) a gated residual composition module to compose the previously mined evidence, and iii) a parse-tree-guided propagation module to pass the mined evidence along the parse tree. Our PTGRN is thus capable of building an interpretable VQA system that gradually derives the image cues following a question-driven parse-tree reasoning route. Experiments on relational datasets demonstrate the superiority of our PTGRN over current state-of-the-art VQA methods, and the visualization results highlight the explainable capability of our reasoning system.
A Novel, Simple Interpretation of Nesterov's Accelerated Method as a Combination of Gradient and Mirror Descent
First-order methods play a central role in large-scale convex optimization. Even though many variations exist, each suited to a particular problem form, almost all such methods fundamentally rely on two types of algorithmic steps and two corresponding types of analysis: gradient-descent steps, which yield primal progress, and mirror-descent steps, which yield dual progress. In this paper, we observe that the performances of these two types of step are complementary, so that faster algorithms can be designed by coupling the two steps and combining their analyses. In particular, we show how to obtain a conceptually simple interpretation of Nesterov's accelerated gradient method [Nes83, Nes04, Nes05], a cornerstone algorithm in convex optimization. Nesterov's method is the optimal first-order method for the class of smooth convex optimization problems. However, to the best of our knowledge, the proof of the fast convergence of Nesterov's method has not found a clear interpretation and is still regarded by many as crucially relying on an "algebraic trick" [Jud13]. We apply our novel insights to express Nesterov's algorithm as a natural coupling of gradient descent and mirror descent and to write its proof of convergence as a simple combination of the convergence analyses of the two underlying steps. We believe that the complementary view of gradient descent and mirror descent proposed in this paper will prove very useful in the design of first-order methods as it allows us to design fast algorithms in a conceptually easier way. For instance, our view greatly facilitates the adaptation of non-trivial variants of Nesterov's method to specific scenarios, such as packing and covering problems [AO14b, AO14a].
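As a concrete illustration of the coupling idea in the simplest (unconstrained, Euclidean) setting, the Python sketch below interleaves a gradient-descent step and a mirror-descent step taken at a point that mixes the two sequences. The step-size and coupling schedule shown is one standard instantiation assumed for this sketch, not a verbatim transcription of the paper's algorithm:

    import numpy as np

    def coupled_agm(grad, x0, L, T):
        # y tracks gradient-descent (primal) progress, z tracks mirror-descent
        # (dual) progress with a Euclidean mirror map; x couples the two.
        y = z = np.asarray(x0, dtype=float)
        for k in range(T):
            tau = 2.0 / (k + 2)            # coupling weight (assumed schedule)
            alpha = (k + 2) / (2.0 * L)    # mirror step size (assumed schedule)
            x = tau * z + (1.0 - tau) * y
            g = grad(x)
            y = x - g / L                  # gradient-descent step
            z = z - alpha * g              # mirror-descent step
        return y

    # Example on a smooth quadratic f(x) = 0.5 * ||A x - b||^2:
    A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 0.0])
    grad = lambda x: A.T @ (A @ x - b)
    L = np.linalg.norm(A.T @ A, 2)         # Lipschitz constant of the gradient
    print(coupled_agm(grad, np.zeros(2), L, 200))   # approaches [0.4, -0.2]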
Refugee stress and folk belief: Hmong sudden deaths.
Since the first reported death in 1977, scores of seemingly healthy Hmong refugees have died mysteriously and without warning from what has come to be known as Sudden Unexpected Nocturnal Death Syndrome (SUNDS). To date medical research has provided no adequate explanation for these sudden deaths. This study is an investigation into the changing impact of traditional beliefs as they manifest during the stress of traumatic relocation. In Stockton, California, 118 Hmong men and women were interviewed regarding their awareness of and personal experience with a traditional nocturnal spirit encounter. An analysis of this data reveals that the supranormal attack acts as a trigger for Hmong SUNDS.
Efficacy and safety of a hyaluronic acid filler in subjects treated for correction of midface volume deficiency: a 24 month study
BACKGROUND Hyaluronic acid (HA) fillers are an established intervention for correcting facial volume deficiency. Few studies have evaluated treatment outcomes for longer than 6 months. The purpose of this study was to determine the durability of an HA filler in the correction of midface volume deficiency over 24 months, as independently evaluated by physician investigators and subjects. METHODS Subjects received treatment with Juvéderm™ Voluma™ to the malar area, based on the investigators' determination of baseline severity and aesthetic goals. The treatment was administered in one or two sessions over an initial 4-week period. Supplementary treatment was permissible at week 78, based on protocol-defined criteria. A clinically meaningful response was predefined as at least a one-point improvement on the MidFace Volume Deficit Scale (MFVDS) and on the Global Aesthetic Improvement Scale (GAIS). RESULTS Of the 103 subjects enrolled, 84% had moderate or significant volume deficiency at baseline. At the first post-treatment evaluation (week 8), 96% were documented to be MFVDS responders, with 98% and 100% graded as GAIS responders when assessed by the subjects and investigators, respectively. At week 78, 81.7% of subjects were still MFVDS responders, with 73.2% and 78.1% being GAIS responders, respectively. Seventy-two subjects completed the 24-month study, of whom 45 did not receive supplementary Voluma™ at week 78. Forty-three of the 45 (95.6%) subjects were MFVDS responders, with 82.2% and 91.1% being GAIS responders, respectively. At the end of the study, 66/72 subjects were either satisfied or very satisfied with Voluma™, with 70/72 indicating that they would recommend the product to others. Adverse events were transient and infrequent, with injection site bruising and swelling being the most commonly reported. CONCLUSION Voluma™ is safe and effective in the correction of mild to severe facial volume deficiency, achieving long-term clinically meaningful results. There was a high degree of satisfaction with the treatment outcome over the 24 months of the study.
Molecular Properties of 2-Pyrone-4,6-dicarboxylic Acid (PDC) as a Stable Metabolic Intermediate of Lignin Isolated by Fractional Precipitation with Na+ Ion
A chemically stable metabolic intermediate of lignin, 2-pyrone-4,6-dicarboxylic acid (PDC), was isolated, and the molecular properties were comprehensively investigated by using thermal analysis, o...
Extended-State-Observer-Based Output Feedback Nonlinear Robust Control of Hydraulic Systems With Backstepping
In this paper, an output feedback nonlinear control is proposed for a hydraulic system with mismatched modeling uncertainties, in which an extended state observer (ESO) and a nonlinear robust controller are synthesized via the backstepping method. The ESO is designed to estimate not only the unmeasured system states but also the modeling uncertainties. The nonlinear robust controller is designed to stabilize the closed-loop system. The proposed controller accounts for not only the nonlinearities (e.g., nonlinear flow features of the servovalve), but also the modeling uncertainties (e.g., parameter deviations and unmodeled dynamics). Furthermore, the controller theoretically guarantees a prescribed tracking transient performance and final tracking accuracy, while achieving asymptotic tracking performance in the absence of time-varying uncertainties, which is very important for high-accuracy tracking control of hydraulic servo systems. Extensive comparative experimental results are obtained to verify the high-performance nature of the proposed control strategy.
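To illustrate only the observer idea (the hydraulic plant model, gains, and backstepping design in the paper are different and more involved), here is a minimal linear extended state observer sketch for a generic second-order plant x1' = x2, x2' = f + b0*u, where the lumped uncertainty f is estimated as an extra state:

    import numpy as np

    def eso_step(z, y, u, b0, omega_o, dt):
        # One forward-Euler step of a third-order linear ESO with the common
        # bandwidth parameterization l = [3w, 3w^2, w^3]; z = [x1_hat, x2_hat, f_hat].
        z1, z2, z3 = z
        e = y - z1                                    # output estimation error
        l1, l2, l3 = 3 * omega_o, 3 * omega_o**2, omega_o**3
        z1_new = z1 + dt * (z2 + l1 * e)
        z2_new = z2 + dt * (z3 + b0 * u + l2 * e)
        z3_new = z3 + dt * (l3 * e)                   # lumped-uncertainty estimate
        return np.array([z1_new, z2_new, z3_new])

The disturbance estimate z3 is what a controller of this kind would feed back to compensate the mismatched uncertainty before applying the robust stabilizing terms.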
Association of age and response to androgen-deprivation therapy with or without radiotherapy for prostate cancer: data from CaPSURE.
OBJECTIVE To assess whether the response to primary androgen-deprivation therapy (PADT) and radiotherapy (RT) plus adjuvant ADT would be muted in older men, as their tumours might already be relatively androgen insensitive, because serum testosterone levels decline with increasing age. PATIENTS AND METHODS Using the Cancer of the Prostate Strategic Urologic Research Endeavor database, we conducted an observational study evaluating two groups of men treated for prostate cancer from 1995 to 2006. One group of 1748 men was treated with PADT and the second group of 612 men was treated with RT (external beam RT or brachytherapy) with neoadjuvant and/or adjuvant ADT. We tested whether age was a predictor of disease progression in the PADT group and prostate-specific antigen (PSA) recurrence in the RT + ADT group (Phoenix definition). Secondary outcomes were all-cause (ACM) and prostate cancer-specific mortality (PCSM). RESULTS In both univariate and multivariate analysis stratifying by clinical risk group, age (<65, 65-69, 70-74, and ≥75 years) was not associated with the risk of secondary treatment or PSA recurrence for the PADT and the RT + ADT groups, respectively. Age category had no relationship to increased ACM or PCSM for the RT + ADT group. However, for the PADT group the oldest category (≥75 years) had an increased hazard ratio (2.26, 95% confidence interval 1.04-4.88; P = 0.02) for ACM, but a decreased ratio for PCSM (0.29, 0.21-0.42; P < 0.01). CONCLUSION If we assume that age is a valid proxy measure for free available testosterone levels, then these levels do not seem to affect the likelihood of response to ADT, either used alone or combined with RT.
PEN: Parallel English-Persian news corpus
Parallel corpora are necessary resources in many multilingual natural language processing applications, including machine translation and cross-lingual information retrieval. Manual preparation of a large-scale parallel corpus is a very time-consuming and costly procedure. In this paper, the work towards building a sentence-level aligned English-Persian corpus in a semi-automated manner is presented. The design of the corpus and the collection and alignment process of the sentences are described. Two statistical similarity measures were used to find the similarities of sentence pairs. To verify the alignment process automatically, Google Translator was used. The corpus is based on news resources available online and consists of about 30,000 formal sentence pairs.
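The abstract does not name its two statistical similarity measures, so the following Python sketch is only an illustrative stand-in: a length-ratio score in the spirit of Gale and Church combined with a bilingual-lexicon overlap score, which is one common way to rank candidate English-Persian sentence pairs:

    def length_score(src, tgt, c=1.0):
        # Length-ratio similarity: translations tend to have proportional
        # character lengths; c is the corpus-wide mean target/source ratio.
        if not src or not tgt:
            return 0.0
        ls, lt = len(src), len(tgt)
        return 1.0 - abs(ls * c - lt) / max(ls * c, lt)

    def lexicon_score(src_tokens, tgt_tokens, lexicon):
        # Fraction of source tokens with at least one dictionary translation
        # in the target sentence (lexicon: word -> set of translations).
        if not src_tokens:
            return 0.0
        hits = sum(1 for w in src_tokens if lexicon.get(w, set()) & set(tgt_tokens))
        return hits / len(src_tokens)

    def pair_score(src, tgt, lexicon, w=0.5):
        # Hypothetical linear combination; the weights and both measures are
        # assumptions for illustration, not the measures used to build PEN.
        return (w * length_score(src, tgt)
                + (1 - w) * lexicon_score(src.split(), tgt.split(), lexicon))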
Integrated heat air and moisture modeling and simulation
Abstract of the corresponding MSc thesis by P.W.M.H. Steskens, June 2006. The purpose of this research project is to develop a computational model for multidimensional transient Heat, Air and Moisture (HAM) flows in buildings. This project intends to produce a tool that enables the analysis of conditions leading to degradation of building components. It is the intention that with a combination of a transient multidimensional Heat, Air and Moisture (HAM) building model, models describing deterioration in building materials, and systematic collection of empirical knowledge, it may be possible to better predict the degradation processes in building materials. Two problems related to the development of a multidimensional transient HAM building model are studied. The first issue concerns the local temperature and humidity levels and variations in a room. The perspectives and possibilities of modeling a room with its surrounding construction are studied. First of all, the airflow in the room as well as the temperature distribution in the building construction and materials is modeled. The modeling results in a transient multidimensional model of a room, which describes the thermal conditions (heat and air flow) in the room. Second, the model is verified and validated using experimental data. Third, the model is extended by adding moisture flow. Finally, the obtained transient multidimensional HAM model of the room is verified and validated using experimental data obtained from the literature. The second issue concerns the detailed thermodynamic behavior of floor heating. First of all, models describing the thermodynamic behavior of floor heating, documented in the literature and scientific articles, are reviewed. Second, starting from a scientific article that is representative of the state of the art in modeling floor-heating thermal behavior, the results presented in the article have been reproduced. Third, the model is extended and improved for general application in building services engineering and control design. The resulting model is expected to be a transient multidimensional detailed model of the thermal behavior of a floor heating system in a building.
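As a toy illustration of the "transient" part of such a model (a deliberate simplification: one dimension, heat only, constant material properties; nothing here reproduces the coupled HAM model of the thesis), an explicit finite-difference update for conduction through a construction layer could look like:

    import numpy as np

    def heat_step(T, alpha, dx, dt):
        # One explicit (FTCS) step of dT/dt = alpha * d2T/dx2 across a wall
        # layer with fixed surface temperatures at both ends.
        # Stability requires dt <= dx**2 / (2 * alpha).
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        return Tn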
Spread Pattern Formation of H5N1-Avian Influenza and its Implications for Control Strategies
Mechanisms contributing to the spread of avian influenza seem to be well identified, but how their interplay led to the current worldwide spread pattern of H5N1 influenza is still unknown due to the lack of effective global surveillance and relevant data. Here we develop some deterministic models based on the transmission cycle and modes of H5N1 and focusing on the interaction among poultry, wild birds and environment. Some of the model parameters are obtained from existing literatures, and others are allowed to vary in order to assess the effectiveness of various control strategies involving bird migration, agro-ecological environments, live and dead poultry trading, smuggling of wild birds, mechanical movement of infected materials and specific farming practices. Our simulations are carried out for a set of parameters that leads to the basic reproduction number 3.3. We show that by reducing 95% of the initial susceptible poultry population or by killing all infected poultry birds within one day, one may control the disease outbreak in a local setting. Our simulation shows that cleaning the environment is also a feasible and useful control measure, but culling wild birds and destroying their habitat are ineffective control measures. We use a one dimensional PDE model to examine the contribution to the spatial spread rate by the size of the susceptible poultry birds, the diffusion rates of the wild birds and the virus. We notice the diffusion rate of the wild birds with high mortality has very little impact on the spread speed. But for the wild birds who can survive the infection, depending on the direction of convection, their diffusion rate can substantially increase the spread rate, indicating a significant role of the migration of these type of wild birds in the spread of the disease.
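For orientation only, a stripped-down susceptible-infected sketch with a single poultry population (the paper's models additionally include wild birds, the environmental reservoir, and spatial terms) shows how the two simulated interventions act through the basic reproduction number:

    def si_poultry(beta, gamma, S0, I0, days, dt=0.1):
        # dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, where gamma lumps
        # poultry death and culling; R0 = beta * S0 / gamma for this toy model.
        S, I = float(S0), float(I0)
        for _ in range(int(days / dt)):
            new_inf = beta * S * I * dt
            S -= new_inf
            I += new_inf - gamma * I * dt
        return S, I

    # Reducing the initial susceptible flock S0 lowers R0 proportionally, and
    # culling all infected birds within one day corresponds to gamma on the
    # order of 1 per day, either of which can push R0 below 1.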
Testosterone suppression in men with prostate cancer leads to an increase in arterial stiffness and hyperinsulinaemia.
The role of androgens in cardiovascular disease is uncertain. We aimed to determine the vascular effects of androgen suppression in men with prostate cancer. Arterial stiffness (or 'compliance') was measured in 16 men (71+/-9 years, mean+/-S.D.) prior to, and 3 months after, complete androgen suppression with gonadotrophin-releasing hormone analogues as treatment for prostate cancer. Fifteen control men (70+/-7 years) also had arterial stiffness studies at baseline and 3 months later. Two measures of arterial stiffness were employed: systemic arterial compliance (SAC) was measured by simultaneous recording of aortic flow and carotid artery pressure ('area method'), and pulse wave velocities (PWVs) were recorded with the 'Complior' system. The 16 cases underwent glucose-tolerance and fasting-lipids tests on both visits. After 3 months of testosterone suppression, there was a significant fall in SAC, which was not seen in the controls [mean change+/-S.E.M., -0.26+/-0.09 a.c.u. (arbitrary compliance unit) in the cases versus +0.06+/-0.11 in the controls; P =0.03). Central, but not peripheral, PWVs tended to increase in the cases (mean change+/-S.E.M. for aorto-femoral PWV, +0.5+/-0.4 m/s for cases versus -0.3+/-0.3 m/s for controls; P =0.08). After testosterone suppression, fasting insulin levels increased from 6.89+/-4.84 m-units/l to 11.34+/-8.16 m-units/l (mean+/-S.D.), total cholesterol increased from 5.32+/-0.77 mmol/l to 5.71+/-0.82 mmol/l and high-density lipoprotein cholesterol increased from 1.05+/-0.24 mmol/l to 1.26+/-0.36 mmol/l; P <0.005 for all. No significant change occurred in body-mass index, serum glucose, low-density lipoprotein cholesterol or triacylglycerol (triglyceride) levels. Our results indicate that loss of androgens in men leads to an increase in aortic stiffness and serum insulin levels, and may therefore adversely affect cardiovascular risk.
GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices
We propose a multimodal scheme, GazeTouchPass, that combines gaze and touch for shoulder-surfing resistant user authentication on mobile devices. GazeTouchPass allows passwords with multiple switches between input modalities during authentication. This requires attackers to simultaneously observe the device screen and the user's eyes to find the password. We evaluate the security and usability of GazeTouchPass in two user studies. Our findings show that GazeTouchPass is usable and significantly more secure than single-modal authentication against basic and even advanced shoulder-surfing attacks.
A comparison of methods for sketch-based 3D shape retrieval