The Role of the Public Sector in Securing Publicness in Urban Regeneration Projects
This study aims to recommend an appropriate division of roles between the public and private sectors, and to clarify the public sector's role in securing publicness, for successful urban regeneration projects, based on an investigation of the problems and limitations of domestic urban regeneration projects. In foreign urban regeneration projects, the public sector simplifies processes by arranging laws and systems, expands infrastructure and service facilities, and encourages private capital investment through financial support. The public sector is also engaged in improving citizens' residential environments, providing guidelines for urban planning customized to each region, securing publicness in private projects through incentive programs, diversifying project actors through public-private partnerships, and adjusting the interests among those actors. Therefore, for successful urban regeneration projects with publicness, the public sector should share roles appropriately with the private sector so as to make the best use of private capital and creativity, and should build diverse policies for a reasonable division of profits.
mHealth Technologies to Influence Physical Activity and Sedentary Behaviors: Behavior Change Techniques, Systematic Review and Meta-Analysis of Randomized Controlled Trials.
BACKGROUND mHealth programs offer potential for practical and cost-effective delivery of interventions capable of reaching many individuals. PURPOSE To (1) compare the effectiveness of mHealth interventions to promote physical activity (PA) and reduce sedentary behavior (SB) in free-living young people and adults with a comparator exposed to usual care/minimal intervention; (2) determine whether, and to what extent, such interventions affect PA and SB levels; and (3) use the taxonomy of behavior change techniques (BCTs) to describe intervention characteristics. METHODS A systematic review and meta-analysis following PRISMA guidelines was undertaken to identify randomized controlled trials (RCTs) comparing mHealth interventions with usual or minimal care among individuals free from conditions that could limit PA. Total PA, moderate-to-vigorous intensity physical activity (MVPA), walking and SB outcomes were extracted. Intervention content was independently coded following the 93-item taxonomy of BCTs. RESULTS Twenty-one RCTs (1701 participants, 700 with objectively measured PA) met eligibility criteria. SB decreased more following mHealth interventions than after usual care (standardised mean difference (SMD) -0.26, 95% confidence interval (CI) -0.53 to -0.00). Summary effects across studies were small to moderate and non-significant for total PA (SMD 0.14, 95% CI -0.12 to 0.41); MVPA (SMD 0.37, 95% CI -0.03 to 0.77); and walking (SMD 0.14, 95% CI -0.01 to 0.29). BCTs were employed more frequently in intervention (mean = 6.9, range 2 to 12) than in comparator conditions (mean = 3.1, range 0 to 10). Of all 93 BCTs, only 31 were employed in intervention conditions. CONCLUSIONS Current mHealth interventions have small effects on PA/SB. Technological advancements will enable more comprehensive, interactive and responsive intervention delivery. Future mHealth PA studies should ensure that all the active ingredients of the intervention are reported in sufficient detail.
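The pooled effects above are reported as standardised mean differences. The abstract does not state the exact estimator the review used; as a hedged sketch, one common per-study SMD is Cohen's d, the difference in group means divided by the pooled standard deviation (the function name and the example numbers below are illustrative, not from the paper):

```python
import math

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    # Cohen's d: mean difference scaled by the pooled standard deviation.
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative numbers only: intervention vs. control daily MVPA minutes.
d = standardized_mean_difference(32.0, 10.0, 50, 28.0, 10.0, 50)  # 0.4
```

Meta-analyses then pool such per-study values (often after a small-sample correction such as Hedges' g) into the summary SMDs quoted in the abstract.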
Plagiarism Detection Using the Levenshtein Distance and Smith-Waterman Algorithm
Plagiarism in texts is an issue of increasing concern to the academic community. Most text plagiarism now occurs through a variety of minor alterations, including the insertion, deletion, or substitution of words. Detecting such simple changes, however, requires excessive string comparisons. In this paper, we present a hybrid plagiarism detection method. We investigate the use of a diagonal line derived from the Levenshtein distance, together with a simplified Smith-Waterman algorithm, a classical tool for identifying and quantifying local similarities in biological sequences, with a view to their application in plagiarism detection. Our approach avoids global string comparisons and considers psychological factors, which experimental results show can yield significant speed-ups. Based on the results, we indicate the practicality of this improvement using the Levenshtein distance and Smith-Waterman algorithm and illustrate the efficiency gains. In the future, it would be interesting to explore appropriate heuristics in the area of text comparison.
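The abstract does not give the authors' diagonal-line optimization in detail; as a minimal sketch of the underlying building block, a word-level Levenshtein distance counts exactly the insertions, deletions, and substitutions the abstract mentions (the two-row dynamic-programming form shown here is a standard textbook implementation, not the paper's code):

```python
def levenshtein(a, b):
    # Edit distance over token sequences: minimum number of insertions,
    # deletions, and substitutions turning sequence a into sequence b.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        cur = [i]
        for j, y in enumerate(b, start=1):
            cost = 0 if x == y else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution / match
        prev = cur
    return prev[-1]

# One inserted word between two otherwise identical sentences:
d = levenshtein("the cat sat".split(), "the cat quietly sat".split())  # 1
```

Working over word tokens rather than characters keeps the dynamic-programming table small for sentence-length comparisons, which is the regime where minor-rewording plagiarism occurs.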
A 5 GHz Digital Fractional-N PLL Using a 1-bit Delta – Sigma Frequency-to-Digital Converter in 65 nm CMOS
A highly digital two-stage fractional-N phase-locked loop (PLL) architecture utilizing a first-order 1-bit frequency-to-digital converter (FDC) is proposed and implemented in a 65 nm CMOS process. Performance of the first-order 1-bit FDC is improved by using a phase interpolator-based fractional divider that reduces the phase quantizer input span and by using a multiplying delay-locked loop that increases its oversampling ratio. We also describe an analogy between a time-to-digital converter (TDC) and an FDC followed by an accumulator that allows us to leverage TDC-based PLL analysis techniques to study the impact of FDC characteristics on FDC-based fractional-N PLL (FDCPLL) performance. Utilizing the proposed techniques, a prototype PLL achieves 1 MHz bandwidth, −101.6 dBc/Hz in-band phase noise, and 1.22 ps rms (1 kHz–40 MHz) jitter while generating a 5.031 GHz output from a 31.25 MHz reference clock input. For the same output frequency, the stand-alone second-stage fractional-N FDCPLL achieves 1 MHz bandwidth, −106.1 dBc/Hz in-band phase noise, and 403 fs rms jitter with a 500 MHz reference clock input. The two-stage PLL consumes 10.1 mW from a 1 V supply, of which 7.1 mW is consumed by the second-stage FDCPLL.
Racial and ethnic differences in epilepsy classification among probands in the Epilepsy Phenome/Genome Project (EPGP)
Little is known about ethnic and racial differences in the prevalence of generalized and focal epilepsy among patients with non-acquired epilepsies. In this study, we examined epilepsy classification and race/ethnicity in 813 probands from sibling or parent-child pairs with epilepsy enrolled in the Epilepsy Phenome/Genome Project (EPGP). Subjects were classified as having generalized epilepsy (GE), non-acquired focal epilepsy (NAFE), a mixed epilepsy syndrome (both generalized and focal), or unclassifiable epilepsy, based on consensus review of semiology and available clinical, electrophysiology, and neuroimaging data. In this cohort, 628 (77.2%) subjects identified exclusively as Caucasian/white and 65 (8.0%) subjects reported African ancestry, including subjects of mixed race. Of the Caucasian/white subjects, 357 (56.8%) had GE, 207 (33.0%) had NAFE, 32 (5.1%) had a mixed syndrome, and 32 (5.1%) were unclassifiable. Among subjects of African ancestry, 28 (43.1%) had GE, 27 (41.5%) had NAFE, 2 (3.1%) had a mixed syndrome, and 8 (12.3%) were unclassifiable. The proportion of subjects with GE relative to other syndromes was higher among Caucasians/whites than among subjects of African ancestry (OR 1.74, 95% CI: 1.04-2.92, two-tailed Fisher's exact test, p=0.036). There was no difference in the rate of GE among subjects reporting Hispanic ethnicity (7.6% of the total) when adjusted for race (Caucasian/white vs non-Caucasian/white; OR 0.65, 95% CI: 0.40-1.06, p>0.05). The proportion of participants with unclassifiable epilepsy was significantly greater in those of African-American descent. In a group of patients with epilepsy of unknown etiology and an affected first-degree relative, GE is more common among Caucasian/white subjects than among those with African ancestry. These findings suggest there may be geographical differences in the distribution of epilepsy susceptibility genes and an effect of genetic background on epilepsy phenotype.
However, the results should be interpreted with caution because of the low numbers of African-Americans in this cohort and more limited diagnostic data available for epilepsy classification in these subjects compared to Caucasians/whites.
Understanding the Self-Directed Online Learning Preferences, Goals, Achievements, and Challenges of MIT OpenCourseWare Subscribers
This research targeted the learning preferences, goals and motivations, achievements, challenges, and possibilities for life change of self-directed online learners who subscribed to the monthly OpenCourseWare (OCW) e-newsletter from MIT. Data collection included a 25-item survey of 1,429 newsletter subscribers; 613 of whom also completed an additional 15 open-ended survey items. The 25 close-ended survey findings indicated that respondents used a wide range of devices and places to learn for their self-directed learning needs. Key motivational factors included curiosity, interest, and internal need for self-improvement. Factors leading to success or personal change included freedom to learn, resource abundance, choice, control, and fun. In terms of achievements, respondents were learning both specific skills as well as more general skills that help them advance in their careers. Science, math, and foreign language skills were the most desired by the survey respondents. The key obstacles or challenges faced were time, lack of high quality open resources, and membership or technology fees. Several brief stories of life change across different age ranges are documented. Among the chief implications is that learning something new to enhance one’s life or to help others is often more important than course transcript credit or a certificate of completion.
A Tutorial on Planning Graph Based Reachability Heuristics
The primary revolution in automated planning in the last decade has been the very impressive scaleup in planner performance. A large part of the credit for this can be attributed squarely to the invention and deployment of powerful reachability heuristics. Most, if not all, modern reachability heuristics are based on a remarkably extensible data structure called the planning graph, which made its debut as a bit player in the success of GraphPlan, but quickly grew in prominence to occupy the center stage. Planning graphs are a cheap means to obtain informative look-ahead heuristics for search and have become ubiquitous in state of the art heuristic search planners. We present the foundations of planning graph heuristics in classical planning and explain how their flexibility lets them adapt to more expressive scenarios that consider action costs, goal utility, numeric resources, time, and uncertainty.
Power transfer capability and bifurcation phenomena of loosely coupled inductive power transfer systems
Loosely coupled inductive power transfer (LCIPT) systems are designed to deliver power efficiently from a stationary primary source to one or more movable secondary loads over relatively large air gaps via magnetic coupling. In this paper, a general approach is presented to identify the power transfer capability and bifurcation phenomena (multiple operating modes) for such systems. This is achieved using a high order mathematical model consisting of both primary and secondary resonant circuits. The primary compensation is deliberately designed to make the primary zero phase angle frequency equal the secondary resonant frequency to achieve maximum power with minimum VA rating of the supply. A contactless electric vehicle battery charger was used to validate the theory by comparing the measured and calculated operational frequency and power transfer. For bifurcation-free operation, the power transfer capability and controllability are assured by following the proposed bifurcation criteria. Where controllable operation within the bifurcation region is achievable, a significant increase in power is possible.
Vehicle routing with multiple deliverymen: Modeling and heuristic approaches for the VRPTW
In real-life distribution of goods, relatively long service times may make it difficult to serve all requests during regular working hours. These difficulties are even greater if the beginning of service at each demand site must occur within a time window and violations of routing time restrictions are particularly undesirable. We address this situation by considering a variant of the vehicle routing problem with time windows in which, besides routing and scheduling decisions, a number of extra deliverymen can be assigned to each route in order to reduce service times. This problem appears, for example, in the distribution of beverages and tobacco in highly dense Brazilian urban areas. We present a mathematical programming formulation for the problem, as well as tabu search and ant colony optimization heuristics for obtaining minimum-cost routes. The performance of the model and the heuristic approaches is evaluated using instances generated from a set of classic examples from the literature.
Is sexual risk taking behaviour changing in rural south-west Uganda? Behaviour trends in a rural population cohort 1993–2006
OBJECTIVE To describe sexual behaviour trends in a rural Ugandan cohort in the context of an evolving HIV epidemic, 1993-2006. METHODS Sexual behaviour data were collected annually from a population cohort in which HIV serological surveys were also conducted. Behaviour trends were determined using survival analysis and logistic regression. Trends are reported based on the years in which the respective indicators were collected. RESULTS Between 1993 and 2006, median age at first sex increased from 16.7 years to 18.2 years among 17-20-year-old girls and from 18.5 years to 19.9 years among boys. Both sexes reported a dip in age at sexual debut between 1998 and 2001. The proportion of men reporting one or more casual partners in the past 12 months rose from 11.6% in 1997 to 12.7% in 2004 and then declined to 10.2% in 2006. Among women it increased from 1.4% in 1997 to 3.7% in 2004 and then fell back to 1.4% in 2006. The rise in casual partners between 1997 and 2004 was driven mainly by older age groups. Trends in condom use with casual partners varied by age, increasing among those aged 35+ years, declining in the middle age groups, and showing a dip and then a rise in the youngest age group (13-19 years). CONCLUSION Among youth, risky behaviour declined overall but rose temporarily in the late 1990s/early 2000s. Among those aged 35+ years, condom use rose but casual partnerships also rose. Several indicators portrayed a temporary increase in risk-taking behaviour from 1998 to 2002.
High efficiency SRM drive using a Quasi-Current Source Inverter
This paper proposes a high-efficiency driving method for a Switched Reluctance Motor (SRM). Recently, most SRMs have been driven by a Voltage Source Inverter (VSI), with the torque controlled by current hysteresis control. However, when the SRM is driven at high speed, the number of current samples becomes insufficient and the current hysteresis control cannot be achieved, so the load current decreases. This study proposes a Quasi-Current Source Inverter (QCSI) for SRM drive, and the effects of the proposed method are verified by simulations and experiments. Consequently, the load current changes smoothly under variable-speed conditions, so high-efficiency driving can be realized at variable speeds.
Many-objective optimization algorithm applied to history matching
Reservoir model calibration, called history matching in the petroleum industry, is an important task for making more accurate predictions for better reservoir management. Providing an ensemble of well-matched reservoir models from history matching is essential to reproduce the observed production data from a field and to forecast reservoir performance. The nature of history matching is multi-objective because there are multiple match criteria, or misfits, from different production data, wells, and regions in the field. In many cases, these criteria are conflicting and can be handled by a multi-objective approach. Moreover, the multi-objective approach provides faster misfit convergence and greater robustness to the stochastic nature of optimization algorithms. However, reservoir history matching may feature far too many objectives to be efficiently handled by conventional multi-objective algorithms, such as the multi-objective particle swarm optimizer (MOPSO) and the non-dominated sorting genetic algorithm II (NSGA II). Under an increasing number of objectives, the performance of multi-objective history matching by these algorithms deteriorates (lower match quality and slower misfit convergence). In this work, we introduce a recently proposed algorithm for many-objective optimization problems, known as the reference vector-guided evolutionary algorithm (RVEA), to history matching. We apply the algorithm to history matching of a synthetic reservoir model and a real field case study with more than three objectives. The paper demonstrates the superiority of the proposed RVEA over the state-of-the-art multi-objective history matching algorithms, namely MOPSO and NSGA II.
Structural and Functional Progression in the Early Manifest Glaucoma Trial.
PURPOSE To elucidate the temporal relationship between detection of glaucomatous optic disc progression, as assessed by fundus photography, and visual field progression. DESIGN Prospective, randomized, longitudinal trial. PARTICIPANTS Three hundred six study eyes with manifest glaucoma with field loss and 192 fellow eyes without any field defect at the start of the trial, from a total of 249 subjects included in the Early Manifest Glaucoma Trial (EMGT), were assessed. METHODS Evaluation of visual field progression and optic disc progression during an 8-year follow-up period. Three graders independently assessed optic disc progression in optic disc photographs. Visual field progression was assessed using glaucoma change probability maps and the EMGT progression criterion. MAIN OUTCOME MEASURES Time to detection of visual field progression and optic disc progression. RESULTS Among study eyes with manifest glaucoma, progression was detected in the visual field first in 163 eyes (52%) and in the optic disc first in 39 eyes (12%); in 1 eye (0%), it was found simultaneously with both methods. Among fellow eyes with normal fields, progression was detected in the visual field first in 28 eyes (15%) and in the optic disc first in 34 eyes (18%); in 1 eye (1%), it occurred simultaneously. CONCLUSIONS In eyes with manifest glaucoma, progression in the visual field was detected first more than 4 times as often as progression in the optic disc. Among fellow eyes without visual field loss at baseline, progression was detected first as frequently in the optic disc as in the visual field.
Real-Time Adaptive Image Compression
We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
Dynamics of light harvesting in photosynthesis.
We review recent theoretical and experimental advances in the elucidation of the dynamics of light harvesting in photosynthesis, focusing on recent theoretical developments in structure-based modeling of electronic excitations in photosynthetic complexes and critically examining theoretical models for excitation energy transfer. We then briefly describe two-dimensional electronic spectroscopy and its application to the study of photosynthetic complexes, in particular the Fenna-Matthews-Olson complex from green sulfur bacteria. This review emphasizes recent experimental observations of long-lasting quantum coherence in photosynthetic systems and the implications of quantum coherence in natural photosynthesis.
E-recruiting support system based on text mining methods
Since web documents have different formats and contents, it is necessary to use standards to normalise their modelling in order to facilitate the retrieval task. The model must take into consideration both the syntactic structure and the semantic content of the documents. A curriculum vitae (CV) is the document that summarises one's education, skills, accomplishments, and experience. Job seekers submit their CVs via the web. Therefore, in their recruitment process, companies require systems for extracting and analysing information from CVs: identifying specific patterns that match a certain profile. Extracting the essential components of CVs and relating them to users' requirements first needs a study of their most significant elements and a better understanding of CV features. This work focuses on CV analysis. It introduces an approach for analysing and structuring CVs written in French. To this end, we extend the General Architecture for Text Engineering (GATE). The extension essentially consists of formulating logic rules for generating the annotations used for CV handling. The goal is to normalise the CV content according to the structure adopted by the Europass CV, guided by the HR-XML standard. We experimented with the proposed process and showed that there is an improvement in the extraction phase.
Robust control of speed and temperature in a power plant gas turbine.
In this paper, an H(∞) robust controller is designed for an identified model of the MONTAZER GHAEM power plant gas turbine (GE9001E). In the design phase, a linear model (ARX model) obtained from real data is applied. Since the turbine is used in a combined cycle power plant, its speed and the exhaust gas temperature must be adjusted simultaneously by controlling the fuel signal and the compressor inlet guide vane (IGV) position. Considering the limitations on the system inputs, the aim of the control is to maintain the turbine speed and the exhaust gas temperature within desired intervals under uncertainties and load demand disturbances. Simulation results of applying the proposed robust controller to the nonlinear model of the system (NARX model) show that the predefined aims are fairly fulfilled. Simulations also show an improvement in performance compared to MPC and PID controllers under the same conditions.
Ethnicity and neighbourhood deprivation determine the response rate in sexual dysfunction surveys
Self-administered questionnaires provide a better alternative for disclosing sensitive information in sexual health research. We describe the factors that determine positive response (initial recruitment) to an initial invitation and subsequent completion of a postal questionnaire study on sexual dysfunction. South Asians (SA) and Europids with and without diabetes (DM) were recruited from GP clinics in the UK. Men who returned a properly completed consent form (the 'recruited' group) were sent the questionnaire, and those who returned it were considered the 'completed' group. Index of Multiple Deprivation scores (IMDs) were generated using UK postcodes. We calculated the recruitment rate and the completion rate of the recruited and study-completed groups, respectively. The total approached sample was 9100 [DM: 2914 (32%), SA: 4563 (50.1%)]. The recruitment rate was 8.8% and was higher among Europids and among patients with DM. The mean IMDs for the recruited group was 20.9 ± 11.9, and it was higher among recruited SA than among recruited Europids (p < 0.001). Mean IMDs was lower in the recruited group than in the non-recruited (p < 0.01). All four recruited groups (SA/Europid and DM/non-DM) had lower IMDs compared with the non-recruited. The completion rate was 71.5% (n = 544) (SA: 62.3%, Europids: 77.4%; p < 0.05). Recruitment for postal sexual health surveys is positively influenced by presence of the investigated disease, older age, being from less deprived areas, and Europid ethnicity. Furthermore, Europids were more likely to complete the survey than South Asians irrespective of disease status.
Non-Tariff Barriers and the Telecommunications Sector
This paper discusses the nature, importance, and measurement of non-tariff barriers (NTBs) in services trade, with particular reference to telecommunications services. It is shown that although NTBs are more effectively addressed for the telecom sector at the multilateral level than for other service sectors, they are still widespread and would appear to have a large potential for restricting trade in services. The paper reviews the scope and classification of non-tariff barriers to services trade and sets out an alternative typology for their classification, highlighting the fact that NTBs may be government-imposed, may result from non-competitive market structures, or may arise from the absence of appropriate regulation. The latter is shown to constitute one of the most important sources of NTBs in network industries such as telecommunications services. Attempts in the relevant literature to measure NTBs in telecommunications are summarized, and their usefulness in identifying 'appropriate' policy mixes is discussed. Lastly, the paper probes the question of whether existing multilateral and regional instruments and agreements are adequate to deal with the non-tariff phenomenon in the telecom sector in its several dimensions.
Semantic similarity based evaluation for C programs through the use of symbolic execution
Automatic grading of programs has existed in various fields for many years. In this paper, we propose a method for evaluating C programs. Two approaches are distinguished in this context: static and dynamic analysis methods. Unlike dynamic analysis, which requires an executable program, static analysis can evaluate a program even if it is not totally correct. The proposed method is based on static analysis of programs. It consists of comparing the evaluated program with the evaluator-provided program through their Control Flow Graphs. Here, the great challenge is to deal with the multiplicity of solutions that exist for the same programming problem. As a solution to this weakness, we propose an innovative similarity measure that compares two programs according to their semantic executions. In fact, the evaluated program is compared to the evaluator-provided program, called the model program, using the symbolic execution technique. The experiments presented in this work were performed using a basic implementation of the proposed method. The obtained results reveal a promising contribution to the field of automated evaluation of programs. They also show that the proposed method guarantees a considerable approximation to human program evaluation.
Control of 3D limb dynamics in unconstrained overarm throws of different speeds performed by skilled baseball players.
This study investigated how the human CNS organizes complex three-dimensional (3D) ball-throwing movements that require both speed and accuracy. Skilled baseball players threw a baseball to a target at three different speeds. Kinematic analysis revealed that the fingertip speed at ball release was mainly produced by trunk leftward rotation, shoulder internal rotation, elbow extension, and wrist flexion in all speed conditions. The study participants adjusted the angular velocities of these four motions to throw the balls at three different speeds. We also analyzed the dynamics of the 3D multijoint movements using a recently developed method called "nonorthogonal torque decomposition" that can clarify how angular acceleration about a joint coordinate axis (e.g., shoulder internal rotation) is generated by the muscle, gravity, and interaction torques. We found that the study participants utilized the interaction torque to generate larger angular velocities of the shoulder internal rotation, elbow extension, and wrist flexion. To increase the interaction torque acting at these joints, the ball throwers increased muscle torque at the shoulder and trunk but not at the elbow and wrist. These results indicate that skilled ball throwers adopted a hierarchical control in which the proximal muscle torques created a dynamic foundation for the entire limb motion and beneficial interaction torques for distal joint rotations.
The effect of tocilizumab on bone mineral density in patients with methotrexate-resistant active rheumatoid arthritis.
OBJECTIVE The aim of this study was to analyse the effects of therapy with tocilizumab (TCZ), an anti-IL-6 receptor antibody, on BMD of the lumbar spine and femoral neck in patients with RA. METHODS Eighty-six patients with active RA (indicated by a 28-joint DAS ESR >3.2) despite treatment with MTX 12 mg/week were included in this open-label prospective study and started on TCZ (8 mg/kg every 4 weeks). All patients used a stable dosage of MTX and were not allowed to use steroids or bisphosphonates during the study period. BMD of the lumbar spine and femoral neck was measured by dual-energy X-ray absorptiometry at baseline and 52 weeks after initiating TCZ. RESULTS Seventy-eight patients completed this study. BMD of the lumbar spine and femoral neck remained stable after 1 year of TCZ treatment. In 33 patients who had osteopenia at baseline, there was a significant increase in BMD of the lumbar spine [mean 0.022 (s.d. 0.042), P < 0.05] and femoral neck [0.024 (0.0245), P < 0.05]. CONCLUSION TCZ affects BMD in patients with active RA despite treatment with MTX. BMD of the lumbar spine and femoral neck in patients with normal BMD at baseline remained stable, and TCZ increased the BMD of patients who had osteopenia at baseline.
"Chumy Chúmez. Una biografía": Autofiction, Testimony, and Homages
In 1973 the cartoonist and humorist Chumy Chúmez published the graphic novel Chumy Chúmez. Una biografía. Using the collage technique, he recovers nineteenth-century prints and pays explicit homage to the great illustrators. The book, produced in the final years of the Franco regime, must be analyzed in the political context of its production, but also within the literary and artistic system of the 1970s. Both coordinates are visible in a graphic story marked by fragmentation, ellipsis, and suggestion. The story introduces the reader to the moral climate of the dictatorship and, at the same time, reveals unequivocal evidence of a change of cycle, both historical and aesthetic.
Fool's Gold: Extracting Finite State Machines from Recurrent Network Dynamics
Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network to recognize a formal language or predict the next symbol of a sequence, the next logical step is to understand the information processing carried out by the network. Some researchers have begun extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes how sensitivity to initial conditions and discrete measurements can trick these extraction methods into returning illusory finite state descriptions. INTRODUCTION Formal language learning (Gold, 1969) has been a topic of concern for cognitive science and artificial intelligence. It is the task of inducing a computational description of a formal language from a sequence of positive and negative examples of strings in the target language. Neural information processing approaches to this problem involve the use of recurrent networks that embody the internal state mechanisms underlying automata models (Cleeremans et al., 1989; Elman, 1990; Pollack, 1991; Giles et al., 1992; Watrous & Kuhn, 1992). Unlike traditional automata-based approaches, learning systems relying on recurrent networks have an additional burden: we are still unsure as to what these networks are doing. Some researchers have assumed that the networks are learning to simulate finite state machines (FSMs) in their state dynamics and have begun to extract FSMs from the networks' state transition dynamics (Cleeremans et al., 1989; Giles et al., 1992; Watrous & Kuhn, 1992). These extraction methods employ various clustering techniques to partition the internal state space of the recurrent network into a finite number of regions corresponding to the states of a finite state automaton. This assumption of finite state behavior is dangerous on two accounts.
First, these extraction techniques are based on a discretization of the state space which ignores the basic definition of an information processing state. Second, discretization can give rise to incomplete computational explanations of systems operating over a continuous state space. SENSITIVITY TO INITIAL CONDITIONS In this section, I will demonstrate how sensitivity to initial conditions can confuse an FSM extraction system. The basis of this claim rests upon the definition of information processing state. Information processing (IP) state is the foundation underlying automata theory. Two IP states are the same if and only if they generate the same output responses for all possible future inputs (Hopcroft & Ullman, 1979). This definition is the fulcrum for many proofs and techniques, including finite state machine minimization. Any FSM extraction technique should embrace this definition; in fact, it grounds the standard FSM minimization methods and the physical system modelling of Crutchfield and Young (Crutchfield & Young, 1989). Some dynamical systems exhibit exponential divergence for nearby state vectors, yet remain confined within an attractor. This is known as sensitivity to initial conditions. If this divergent behavior is quantized, it appears as nondeterministic symbol sequences (Crutchfield & Young, 1989) even though the underlying dynamical system is completely deterministic (Figure 1). Consider a recurrent network with one output and three recurrent state units. The output unit performs a threshold at zero activation for state unit one. That is, when the activation of the first state unit of the current state is less than zero, the output is A; otherwise, the output is B. Equation 1 presents a mathematical description, where s(t) is the current state of the system and o(t) is the current output.
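The quantization effect described above can be reproduced with a toy dynamical system. The sketch below is my illustration, not the paper's network: it substitutes the logistic map for a trained recurrent network, thresholds a chaotic trajectory into A/B symbols, and shows that two nearly identical initial states eventually emit different symbol sequences.

```python
# Illustration (mine, not the paper's network): quantizing a deterministic
# chaotic system yields diverging symbol sequences from nearly equal states.

def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

def symbolize(x0, steps=40):
    """Iterate the map from x0, emitting 'A' when x < 0.5 and 'B' otherwise."""
    x, symbols = x0, []
    for _ in range(steps):
        symbols.append('A' if x < 0.5 else 'B')
        x = logistic(x)
    return ''.join(symbols)

s1 = symbolize(0.3)
s2 = symbolize(0.3 + 1e-7)  # perturb the start by one part in ten million
# The sequences agree for the first several symbols, then diverge.
first_diff = next((i for i, (a, b) in enumerate(zip(s1, s2)) if a != b), None)
```

Any FSM extracted from a finite sample of such symbol sequences would appear nondeterministic, even though the generating system is fully deterministic.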
A Political History Of Nigeria And The Crisis Of Ethnicity In Nation-Building
The virus of ethnicity has been one of the most definitive causes of social crisis, injustice, inequality and religio-political instability in Nigeria. Ethnicity has been perceived in general as a major obstacle to the overall politico-economic development of the country. Nigeria is marked by underlying ethnic cleavages and inter-ethnic fears and tensions, and is hence a bellicose nation. These are revealed from time to time by conflicting lobbies at moments of competition for shares of the national cake, political appointments to high offices, resource control, the headship of political parties and ministerial positions. Losers in competitions for high national offices often attribute their failures to ethnicity or ethnic marginalization, while winners hardly ever explain their success in terms of the influence of ethnicity; the former are thus not gallant losers, nor are the latter magnanimous in victory. The nation's incessant appeals to ethnicity have clearly showcased the evils inherent in the politicization of ethnicity. Consequently, the ensuing complications of ethnicity have grossly impinged on the development of the country in all ramifications. The paper, a historio-political venture, argues that although the path was colonially charted, the Nigerian political elite have in complicity exacerbated ethnicity in the country. As Nigeria warms to its centennial amalgamation birthday, its political history is summable as a squandered century of nationhood: nation-building remains a close call, in dire need of operational reappraisal.
LMF-Based Control Algorithm for Single Stage Three-Phase Grid Integrated Solar PV System
This paper proposes the use of a least mean fourth (LMF)-based algorithm for a single-stage three-phase grid-integrated solar photovoltaic (SPV) system. It consists of an SPV array, voltage source converter (VSC), three-phase grid, and linear/nonlinear loads. This system has an SPV array coupled with a VSC to provide three-phase active power, and it also acts as a static compensator for reactive power compensation. It also conforms to the IEEE-519 standard on harmonics by improving the quality of power in the three-phase distribution network. Therefore, this system serves to provide harmonics alleviation, load balancing, power factor correction and regulation of the terminal voltage at the point of common coupling. In order to increase the efficiency and extract maximum power from the SPV array under varying environmental conditions, a single-stage system is used along with the perturb-and-observe method of maximum power point tracking (MPPT), integrated with the LMF-based control technique. The proposed system is modeled and simulated in MATLAB/Simulink using the available SimPowerSystems toolbox, and the behaviour of the system under different loads and environmental conditions is verified experimentally on a system developed in the laboratory.
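The core of an LMF-based control scheme is the adaptive weight update derived from minimizing the mean fourth power of the error, w <- w + mu * e^3 * x. The sketch below is a minimal illustration of that update on a synthetic signal; the function names, the signal model, and the step size are my assumptions, not the paper's exact VSC controller.

```python
import math

# Hedged sketch (not the paper's controller): the least mean fourth (LMF)
# update, w <- w + mu * e^3 * x, used here to estimate the in-phase
# fundamental component of a distorted "load current" signal.

def lmf_estimate(load_current, unit_template, mu=1e-3, w0=0.0):
    """Track a weight w so that w * unit_template approximates load_current.

    load_current : samples of the (distorted) load current
    unit_template: in-phase unit sine template (in practice derived from
                   the grid voltage)
    """
    w = w0
    for i_l, u in zip(load_current, unit_template):
        e = i_l - w * u            # estimation error
        w = w + mu * (e ** 3) * u  # LMF: cost is E[e^4], gradient gives e^3
    return w

# Toy signal: fundamental of amplitude 10 plus a 5th-harmonic distortion.
n = 2000  # 20 cycles at 100 samples per cycle
t = [2 * math.pi * k / 100 for k in range(n)]
u_t = [math.sin(x) for x in t]
i_load = [10 * math.sin(x) + 2 * math.sin(5 * x) for x in t]

w_hat = lmf_estimate(i_load, u_t)
# w_hat settles near the fundamental amplitude despite the harmonic.
```

In the grid-connected system, a weight like w_hat (one per phase) would determine the fundamental active component of the reference current handed to the VSC, which is how the scheme rejects load harmonics.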
Measuring and Explaining Management Practices Across Firms and Countries Preliminary
We use an innovative survey tool to collect management practice data from 732 medium-sized manufacturing firms in the US and Europe (France, Germany and the UK). Our measures of managerial best practice are strongly associated with superior firm performance in terms of productivity, profitability, Tobin’s Q, sales growth and survival. We also find significant inter-country variation, with US firms on average better managed than European firms, but a much greater intra-country variation with a long tail of extremely badly managed firms. This presents a dilemma – why do so many firms exist with apparently inferior management practices, and why does this vary so much across countries? We find this is due to a combination of: (i) low product market competition and (ii) family firms passing management control down to the eldest sons (primogeniture). European firms in our sample report facing lower levels of competition, and substantially higher levels of primogeniture. These two factors appear to account for around half of the long tail of badly managed firms and half of the average US-Europe gap in management performance.
Compact 5.8-GHz Rectenna Using Stepped-Impedance Dipole Antenna
This letter describes a compact 5.8-GHz rectenna using a stepped-impedance dipole antenna. In comparison to the conventional uniform half-wavelength dipole antenna, the stepped-impedance dipole antenna shows a 23% length reduction. This stepped-impedance dipole antenna is then used to receive the microwave power for a rectenna. The rectenna shows a maximum conversion efficiency of 76% at 5.8 GHz with a load resistance of 250 Ω. Furthermore, the conversion efficiency is better than 67% from 5.725 to 5.875 GHz, which covers the entire 5.8-GHz ISM band.
Atrial Fibrillation Complexity Parameters Derived From Surface ECGs Predict Procedural Outcome and Long-Term Follow-Up of Stepwise Catheter Ablation for Atrial Fibrillation.
BACKGROUND The success rate of catheter ablation for persistent atrial fibrillation (AF) is still far from satisfactory. Identification of patients who will benefit from ablation is highly desirable. We investigated the predictive value of noninvasive AF complexity parameters derived from standard 12-lead ECGs for AF termination and long-term success of catheter ablation and compared them with clinical predictors. METHODS AND RESULTS The study included a training set (93 patients) and a validation set (81 patients) of patients with persistent AF undergoing stepwise radiofrequency ablation. In the training set, AF terminated during catheter ablation in 81%, and 77% were in sinus rhythm after 6 years and multiple ablations. ECG-derived complexity parameters were determined from a baseline 10-s 12-lead ECG. Prediction of AF termination was similar using only ECG (cross-validated mean area under the curve [AUC], 0.76±0.15) or only clinical parameters (mean AUC, 0.75±0.16). The combination improved prediction to a mean AUC of 0.79±0.13. Using a combined model of ECG and clinical parameters, sinus rhythm at long-term follow-up could be predicted with a mean AUC of 0.71±0.12. In the validation set, AF terminated in 57%, and 61% were in sinus rhythm after 4.6 years. The combined models predicted termination with an AUC of 0.70 and sinus rhythm at long-term follow-up with an AUC of 0.61. Overall, fibrillation-wave amplitude provided the best rhythm prediction. CONCLUSIONS The predictive performance of ECG-derived AF complexity parameters for AF termination and long-term success of catheter ablation in patients with persistent AF is at least as good as that of known clinical predictive parameters, with fibrillation-wave amplitude as the best predictor.
Improving Entity Linking by Modeling Latent Relations between Mentions
In the in-domain setting, when the model reached 91% F1 on the dev set, we reduced the learning rate from 10^-4 to 10^-5. We then stopped training when F1 had not improved for 20 epochs. We did the same for ment-norm, except that the learning rate was changed at 91.5% F1. Note that all the hyper-parameters except K and the turning point for early stopping were set to the values used by Ganea and Hofmann (2017). Systematic tuning is expensive, though it may have further increased the results of our models.
Asset Pricing with Liquidity Risk
This paper studies equilibrium asset pricing with liquidity risk — the risk arising from unpredictable changes in liquidity over time. It is shown that a security’s required return depends on its expected illiquidity and on the covariances of its own return and illiquidity with market return and market illiquidity. This gives rise to a liquidity-adjusted capital asset pricing model. Further, if a security’s liquidity is persistent, a shock to its illiquidity results in low contemporaneous returns and high predicted future returns. Empirical evidence based on cross-sectional tests is consistent with liquidity risk being priced.
Treatment- and Cost-Effectiveness of Early Intervention for Acute Low-Back Pain Patients: A One-Year Prospective Study
In an attempt to prevent acute low-back pain from becoming a chronic disability problem, an earlier study developed a statistical algorithm which accurately identified those acute low-back pain patients who were at high risk for developing such chronicity. The major goal of the present study was to evaluate the clinical effectiveness of employing an early intervention program with these high-risk patients in order to prevent the development of chronic disability at a 1-year follow-up. Approximately 700 acute low-back pain patients were screened for their high-risk versus low-risk status. On the basis of this screening, high-risk patients were then randomly assigned to one of two groups: a functional restoration early intervention group (n = 22), or a nonintervention group (n = 48). A group of low-risk subjects (n = 54) who did not receive any early intervention was also evaluated. All these subjects were prospectively tracked at 3-month intervals starting from the date of their initial evaluation, culminating in a 12-month follow-up. During these follow-up evaluations, pain disability and socioeconomic outcomes (such as return-to-work and healthcare utilization) were assessed. Results clearly indicated that the high-risk subjects who received early intervention displayed significantly fewer indices of chronic pain disability on a wide range of work, healthcare utilization, medication use, and self-report pain variables, relative to the high-risk subjects who did not receive such early intervention. In addition, the high-risk nonintervention group displayed significantly more symptoms of chronic pain disability on these variables relative to the initially low-risk subjects. Cost-comparison savings data were also evaluated. These data revealed that there were greater cost savings associated with the early intervention group versus the no early intervention group.
The overall results of this study clearly demonstrate the treatment- and cost-effectiveness of an early intervention program for acute low-back pain patients.
Suicide attempts and nonsuicidal self-injury in the treatment of resistant depression in adolescents: findings from the TORDIA study.
OBJECTIVE To evaluate the clinical and prognostic significance of suicide attempts (SAs) and nonsuicidal self-injury (NSSI) in adolescents with treatment-resistant depression. METHOD Depressed adolescents who did not improve with an adequate SSRI trial (N = 334) were randomized to a medication switch (SSRI or venlafaxine), with or without cognitive-behavioral therapy. NSSI and SAs were assessed at baseline and throughout the 24-week treatment period. RESULTS Of the youths, 47.4% reported a history of self-injurious behavior at baseline: 23.9% NSSI alone, 14% NSSI+SAs, and 9.5% SAs alone. The 24-week incidence rates of SAs and NSSI were 7% and 11%, respectively; these rates were highest among youths with NSSI+SAs at baseline. NSSI history predicted both incident SAs (hazard ratio [HR]= 5.28, 95% confidence interval [CI] = 1.80-15.47, z = 3.04, p = .002) and incident NSSI (HR = 7.31, z = 4.19, 95% CI = 2.88-18.54, p < .001) through week 24, and was a stronger predictor of future attempts than a history of SAs (HR = 1.92, 95% CI = 0.81-4.52, z = 2.29, p = .13). In the most parsimonious model predicting time to incident SAs, baseline NSSI history and hopelessness were significant predictors, adjusting for treatment effects. Parallel analyses predicting time to incident NSSI through week 24 identified baseline NSSI history and physical and/or sexual abuse history as significant predictors. CONCLUSIONS NSSI is a common problem among youths with treatment-resistant depression and is a significant predictor of future SAs and NSSI, underscoring the critical need for strategies that target the prevention of both NSSI and suicidal behavior. CLINICAL TRIAL REGISTRATION INFORMATION Treatment of SSRI-Resistant Depression in Adolescents (TORDIA). URL: http://www.clinicaltrials.gov. Unique Identifier: NCT00018902.
Scalable mental health analysis in the clinical whitespace via natural language processing
Our increasingly digital lives provide a wealth of data about our behavior, beliefs, mood, and well-being. These data offer some insight into the lives of patients outside the healthcare setting and, in aggregate, can be informative about mental health and emotional crises. Here, we introduce this community to some of the recent advancements in using natural language processing and machine learning to provide insight into the mental health of both individuals and populations. We advocate using these linguistic signals as a supplement to those collected in the healthcare system, filling in some of the so-called “whitespace” between visits.
Business artifacts: An approach to operational specification
Any business, no matter what physical goods or services it produces, relies on business records. It needs to record details of what it produces in terms of concrete information. Business artifacts are a mechanism to record this information in units that are concrete, identifiable, self-describing, and indivisible. We developed the concept of artifacts, or semantic objects, in the context of a technique for constructing formal yet intuitive operational descriptions of a business. This technique, called OpS (Operational Specification), was developed over the course of many business-transformation and business-process-integration engagements for use in IBM’s internal processes as well as for use with customers. Business artifacts (or business records) are the basis for the factorization of knowledge that enables the OpS technique. In this paper we present a comprehensive discussion of business artifacts—what they are, how they are represented, and the role they play in operational business modeling. Unlike the more familiar and popular concept of business objects, business artifacts are pure instances rather than instances of a taxonomy of types. Consequently, the key operation on business artifacts is recognition rather than classification.
A Dynamic Self-Structuring Neural Network
Creating a neural network based classification model is commonly accomplished using the trial-and-error technique. However, the trial-and-error structuring method has several difficulties, such as the time required and the limited availability of experts. In this article, an algorithm that simplifies the structuring of neural network classification models is proposed. The algorithm aims at creating a structure large enough to learn models from the training dataset that generalise well on the testing dataset. Our algorithm dynamically tunes the structure parameters during the training phase, aiming to derive accurate, non-overfitting classifiers. The proposed algorithm has been applied to the phishing websites classification problem, and it shows competitive results with respect to various evaluation measures such as harmonic mean (F1-score), precision, accuracy, etc. Keywords: Classification, Neural Network, Phishing, constructive, pruning.
From kids and horses: equine facilitated psychotherapy for children
Equine facilitated psychotherapy is a developing form of animal-assisted therapy, which primarily incorporates human interaction with horses as guides. The behavior of a sensitive horse provides a vehicle that the therapist can use to teach the patient coping skills. This theoretical study presents to the reader our opinion about the main considerations of equine facilitated psychotherapy for children. In this scenario, the horse can be included as a co-therapist in a team composed of psychologists, occupational therapists, veterinary doctors and horsemen. Horses, by their large, gentle presence, put children therapeutically in touch with their own vitality. Children who ordinarily shun physical and emotional closeness can often accept it from a horse. Beneficial results of a child-horse relationship include care translation, socialization and conversation, self-esteem promotion, and the stimulation of companionship and affection. We conclude that equine facilitated psychotherapy provides well-being and improvement in the quality of life of children with mental health problems.
Energy-efficient dynamic traffic offloading and reconfiguration of networked data centers for big data stream mobile computing: review, challenges, and a case study
Big data stream mobile computing is proposed as a paradigm that relies on the convergence of broadband Internet mobile networking and real-time mobile cloud computing. It aims at fostering the rise of novel self-configuring integrated computing-communication platforms for enabling in real time the offloading and processing of big data streams acquired by resource-limited mobile/wireless devices. This position article formalizes this paradigm, discusses its most significant application opportunities, and outlines the major challenges in performing real-time energy-efficient management of the distributed resources available at both mobile devices and Internet-connected data centers. The performance analysis of a small-scale prototype is also included in order to provide insight into the energy vs. performance tradeoff that is achievable through the optimized design of the resource management modules. Performance comparisons with some state-of-the-art resource managers corroborate the discussion. Hints for future research directions conclude the article.
Outcomes of rigid night splinting and activity modification in the treatment of cubital tunnel syndrome.
PURPOSE To prospectively analyze, using validated outcome measures, symptom improvement in patients with mild to moderate cubital tunnel syndrome treated with rigid night splinting and activity modifications. METHODS Nineteen patients (25 extremities) were enrolled prospectively between August 2009 and January 2011 following a diagnosis of idiopathic cubital tunnel syndrome. Patients were treated with activity modifications as well as a 3-month course of rigid night splinting maintaining 45° of elbow flexion. Treatment failure was defined as progression to operative management. Outcome measures included patient-reported splinting compliance as well as the Quick Disabilities of the Arm, Shoulder, and Hand questionnaire and the Short Form-12. Follow-up included a standardized physical examination. Subgroup analysis included an examination of the association between splinting success and ulnar nerve hypermobility. RESULTS Twenty-four of 25 extremities were available at mean follow-up of 2 years (range, 15-32 mo). Twenty-one of 24 (88%) extremities were successfully treated without surgery. We observed a high compliance rate with the splinting protocol during the 3-month treatment period. Quick Disabilities of the Arm, Shoulder, and Hand scores improved significantly from 29 to 11, Short Form-12 physical component summary score improved significantly from 45 to 54, and Short Form-12 mental component summary score improved significantly from 54 to 62. Average grip strength increased significantly from 32 kg to 35 kg, and ulnar nerve provocative testing resolved in 82% of patients available for follow-up examination. CONCLUSIONS Rigid night splinting when combined with activity modification appears to be a successful, well-tolerated, and durable treatment modality in the management of cubital tunnel syndrome. 
We recommend that patients presenting with mild to moderate symptoms consider initial treatment with activity modification and rigid night splinting for 3 months based on a high likelihood of avoiding surgical intervention. TYPE OF STUDY/LEVEL OF EVIDENCE Therapeutic II.
An analysis of facial expression recognition under partial facial image occlusion
In this paper, the effect of partial occlusion on facial expression recognition is investigated. The classification of partially occluded images into one of the six basic facial expressions is performed using a method based on Gabor wavelet texture information extraction, a supervised image decomposition method based on Discriminant Non-negative Matrix Factorization, and a shape-based method that exploits the geometrical displacement of certain facial features. We demonstrate how partial occlusion affects the above-mentioned methods in the classification of the six basic facial expressions, and indicate the way partial occlusion affects human observers when recognizing facial expressions. An attempt to specify which part of the face (left, right, lower or upper region) contains more discriminant information for each facial expression is also made, and conclusions are drawn regarding the pairs of facial expression misclassifications that each type of occlusion introduces.
Cerecyte coil trial: procedural safety and clinical outcomes in patients with ruptured and unruptured intracranial aneurysms.
BACKGROUND AND PURPOSE This study arose from a need to systematically evaluate the clinical and angiographic outcomes of intracranial aneurysms treated with modified coils. We report the procedural safety and clinical outcomes in a prospective randomized controlled trial of endovascular coiling for ruptured and unruptured intracranial aneurysms, comparing polymer-loaded Cerecyte coils with bare platinum coils in 23 centers worldwide. MATERIALS AND METHODS Five hundred patients between 18 and 70 years of age with a ruptured or unruptured target aneurysm planning to undergo endovascular coiling were randomized to receive Cerecyte or bare platinum coils. Analysis was by intention to treat. RESULTS Two hundred forty-nine patients were allocated to Cerecyte coils and 251 to bare platinum coils. Baseline characteristics were balanced. For ruptured aneurysms, in-hospital mortality was 2/114 (1.8%) with Cerecyte versus 0/119 (0%) with bare platinum coils. There were 8 (3.4%) adverse procedural events resulting in neurological deterioration: 5/114 (4.4%) with Cerecyte versus 3/119 (2.5%) with bare platinum coils (P = .22). A 6-month mRS score of ≤2 was not significantly different between 103/109 (94.5%) patients with Cerecyte and 110/112 (98.2%) patients with bare platinum coils. Poor outcome (mRS score of ≥3 or death) occurred in 6/109 (5.5%) with Cerecyte versus 2/112 (1.8%) with bare platinum coils (P = .070). For unruptured intracranial aneurysms (UIAs), there was no in-hospital mortality. There were 7 (2.7%) adverse procedural events with neurological deterioration: 5/133 (3.8%) with Cerecyte versus 2/131 (1.5%) with bare platinum coils (P = .13). There was a 6-month mRS score of ≤2 in 114/119 (95.8%) patients with Cerecyte versus 123/123 (100%) patients with bare platinum coils. There was poor outcome (mRS score of ≥3, including 1 death) in 5/119 (4.2%) patients with Cerecyte versus 0/123 (0%) patients with bare platinum coils (P = .011).
CONCLUSIONS There was a statistical excess of poor outcomes in the Cerecyte arm at discharge in the ruptured aneurysm group and at 6-month follow-up in the unruptured group. Overall adverse clinical outcomes and in-hospital mortality were exceptionally low in both groups.
A Surge of State Abortion Restrictions Puts Providers — And the Women They Serve — in the Crosshairs
A handful of states have moved to improve access to abortion, and proactive legislation has been introduced in Congress aimed at stemming the tide of restrictive laws designed to place roadblocks in the path of women seeking abortion care. Although this emerging campaign may be more successful and take hold faster in some places than others, it marks an important shift toward reshaping the national debate over what a real agenda to protect women’s reproductive health looks like.
Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach
Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied extensively in data mining research. Most of the previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is still costly, especially when there exist a large number of patterns and/or long patterns. In this study, we propose a novel frequent-pattern tree (FP-tree) structure, which is an extended prefix-tree structure for storing compressed, crucial information about frequent patterns, and develop an efficient FP-tree-based mining method, FP-growth, for mining the complete set of frequent patterns by pattern fragment growth. Efficiency of mining is achieved with three techniques: (1) a large database is compressed into a condensed, smaller data structure, the FP-tree, which avoids costly, repeated database scans; (2) our FP-tree-based mining adopts a pattern-fragment growth method to avoid the costly generation of a large number of candidate sets; and (3) a partitioning-based, divide-and-conquer method is used to decompose the mining task into a set of smaller tasks for mining confined patterns in conditional databases, which dramatically reduces the search space. Our performance study shows that the FP-growth method is efficient and scalable for mining both long and short frequent patterns, and is about an order of magnitude faster than the Apriori algorithm and also faster than some recently reported new frequent-pattern mining methods.
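The three techniques listed in the abstract can be illustrated with a compact, unoptimized sketch of FP-tree construction and recursive FP-growth mining. This is an illustration of the general scheme, not the authors' implementation.

```python
from collections import defaultdict

# Minimal FP-growth sketch: build a prefix tree of frequency-ordered
# transactions, then mine it recursively via conditional pattern bases.

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 0
        self.children = {}

def build_tree(transactions, min_support):
    """Build an FP-tree; return a header table mapping item -> its nodes."""
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[item] += 1
    frequent = {i for i, c in counts.items() if c >= min_support}
    root, header = Node(None, None), defaultdict(list)
    for t in transactions:
        # keep frequent items, ordered by descending global frequency
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-counts[i], i))
        node = root
        for item in items:
            if item not in node.children:
                child = Node(item, node)
                node.children[item] = child
                header[item].append(child)
            node = node.children[item]
            node.count += 1
    return header

def fp_growth(transactions, min_support, suffix=()):
    """Mine all frequent patterns; returns {pattern tuple: support}."""
    header = build_tree(transactions, min_support)
    patterns = {}
    for item, nodes in header.items():
        support = sum(n.count for n in nodes)
        patterns[tuple(sorted(suffix + (item,)))] = support
        # conditional pattern base: prefix paths leading to this item
        conditional = []
        for n in nodes:
            path, p = [], n.parent
            while p is not None and p.item is not None:
                path.append(p.item)
                p = p.parent
            conditional.extend([path] * n.count)
        patterns.update(fp_growth(conditional, min_support, suffix + (item,)))
    return patterns

txns = [['a', 'b', 'c'], ['a', 'b'], ['a', 'c'], ['b', 'c'], ['a']]
result = fp_growth(txns, min_support=2)
# e.g. ('a',) has support 4; ('a', 'b') has support 2; ('a', 'b', 'c') is
# absent because its support (1) falls below min_support.
```

Note how candidate generation never occurs: each recursive call mines the conditional database of one item, which is the divide-and-conquer step the abstract describes.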
Behavioral Patterns of Older Adults in Assisted Living
In this paper, we examine at-home activity rhythms and present a dozen behavioral patterns obtained from an activity monitoring pilot study of 22 residents in an assisted living setting, along with four case studies. Established behavioral patterns have been captured using custom software based on a statistical predictive algorithm that models circadian activity rhythms (CARs) and their deviations. The CAR was statistically estimated based on the average amount of time a resident spent in each room within their assisted living apartment, and also on the activity level given by the average number of motion events per room. A validated in-home monitoring system (IMS) recorded the monitored resident's movement data and established the occupancy period and activity level for each room. Using these data, residents' circadian behaviors were extracted, deviations indicating anomalies were detected, and the latter were correlated with activity reports generated by the IMS as well as with notes of the facility's professional caregivers on the monitored residents. The system could be used to detect deviations in activity patterns and to warn caregivers of such deviations, which could reflect changes in health status, thus providing caregivers with the opportunity to apply standard-of-care diagnostics and to intervene in a timely manner.
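The CAR estimation described above (average room-occupancy time per hour, with deviations flagged as anomalies) can be sketched as follows. The data layout and the L1 deviation score are my simplifications for illustration, not the study's validated IMS software.

```python
from collections import defaultdict

# Illustrative sketch: estimate a circadian activity rhythm (CAR) as the
# average minutes per (hour, room) over baseline days, then score a new
# day by its total absolute deviation from that rhythm.

def build_car(days):
    """days: list of day profiles, each mapping (hour, room) -> minutes.
    Returns the mean minutes per (hour, room) across the baseline days."""
    totals = defaultdict(float)
    for day in days:
        for key, minutes in day.items():
            totals[key] += minutes
    return {key: m / len(days) for key, m in totals.items()}

def deviation(car, day):
    """Sum of absolute differences between a day's profile and the CAR."""
    keys = set(car) | set(day)
    return sum(abs(car.get(k, 0.0) - day.get(k, 0.0)) for k in keys)

baseline = [
    {(8, 'kitchen'): 30, (8, 'bedroom'): 30, (22, 'bedroom'): 60},
    {(8, 'kitchen'): 20, (8, 'bedroom'): 40, (22, 'bedroom'): 60},
]
car = build_car(baseline)  # e.g. kitchen at 8am averages 25 minutes

typical = {(8, 'kitchen'): 25, (8, 'bedroom'): 35, (22, 'bedroom'): 60}
unusual = {(8, 'bathroom'): 55, (22, 'bedroom'): 5}  # possible health change
# deviation(car, unusual) greatly exceeds deviation(car, typical),
# so a caregiver alert could be raised when the score crosses a threshold.
```

A real system would also fold in the per-room motion-event rate the abstract mentions; the same deviation logic applies to that second profile.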
Meta-analysis of the heritability of human traits based on fifty years of twin studies
Despite a century of research on complex traits in humans, the relative importance and specific nature of the influences of genes and environment on human traits remain controversial. We report a meta-analysis of twin correlations and reported variance components for 17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits. Estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation. This study provides the most comprehensive analysis of the causes of individual differences in human traits thus far and will guide future gene-mapping efforts. All the results can be visualized using the MaTCH webtool.
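The "additive genetic variation only" pattern reported above corresponds to the classic Falconer decomposition of twin correlations, in which r_MZ = A + C and r_DZ = A/2 + C for additive genetic (A), shared environment (C), and unique environment (E) variance components. A minimal sketch with hypothetical correlations (not numbers from the paper):

```python
# Falconer's decomposition of twin correlations under the ACE model:
#   r_MZ = A + C,   r_DZ = A/2 + C,   A + C + E = 1

def falconer_ace(r_mz, r_dz):
    a = 2 * (r_mz - r_dz)   # heritability estimate
    c = 2 * r_dz - r_mz     # shared-environment component
    e = 1 - r_mz            # unique environment (plus measurement error)
    return a, c, e

# When r_MZ is about twice r_DZ, the shared-environment term vanishes,
# which is the pattern the meta-analysis reports for most traits.
a, c, e = falconer_ace(r_mz=0.50, r_dz=0.25)
# a = 0.5, c = 0.0, e = 0.5
```

The paper's finding that twin resemblance is mostly consistent with additive genetics alone is, in these terms, the observation that r_MZ ≈ 2 r_DZ across the bulk of the 17,804 traits.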
A phase I multidose study of dacetuzumab (SGN-40; humanized anti-CD40 monoclonal antibody) in patients with multiple myeloma.
This first-in-human, phase I study evaluated the safety, maximum-tolerated dose, pharmacokinetics, and antitumor activity of dacetuzumab in 44 patients with advanced multiple myeloma. Patients received intravenous dacetuzumab, either in 4 uniform weekly doses (first 4 cohorts) or using a 5-week intrapatient dose escalation schedule (7 subsequent cohorts; the last 3 cohorts received steroid pre-medication). An initial dose of 4 mg/kg dacetuzumab exceeded the maximum-tolerated dose for uniform weekly dosing. Intrapatient dose escalation with steroid pre-medication appeared effective in reducing symptoms of cytokine release syndrome, and the maximum-tolerated dose with this dosing schema was 12 mg/kg/week. Adverse events potentially related to dacetuzumab included cytokine release syndrome symptoms, non-infectious ocular inflammation, and elevated hepatic enzymes. Peak dacetuzumab blood levels increased with dose. Nine patients (20%) had a best clinical response of stable disease. The observed safety profile suggested that dacetuzumab may be combined with other multiple myeloma therapies. Two combination trials are ongoing. ClinicalTrials.gov identifier: NCT00079716.
Types of minority class examples and their influence on learning classifiers from imbalanced data
Many real-world applications reveal difficulties in learning classifiers from imbalanced data. Although several methods for improving classifiers have been introduced, identifying the conditions for the efficient use of a particular method remains an open research problem. It is also worthwhile to study the nature of imbalanced data, the characteristics of the minority class distribution, and their influence on classification performance. However, current studies of imbalanced data difficulty factors have mainly been conducted with artificial datasets, and their conclusions are not easily applicable to real-world problems, partly because the methods for identifying these factors are not sufficiently developed. In this paper, we capture the difficulties of class distribution in real datasets by considering four types of minority class examples: safe, borderline, rare and outliers. First, we confirm their occurrence in real data by exploring multidimensional visualizations of selected datasets. Then, we introduce a method for identifying these types of examples, based on analyzing the class distribution in a local neighbourhood of the considered example. Two ways of modeling this neighbourhood are presented: with k-nearest examples and with kernel functions. Experiments with artificial datasets show that these methods are able to re-discover the simulated types of examples. Further contributions of this paper include a comprehensive experimental study with 26 real-world imbalanced datasets, in which (1) we identify new data characteristics based on the analysis of types of minority examples; and (2) we demonstrate that the results of this analysis allow us to differentiate the classification performance of popular classifiers and pre-processing methods and to evaluate their areas of competence. Finally, we highlight directions for exploiting the results of our analysis in developing new algorithms for learning classifiers and new pre-processing methods.
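As an illustration, the k-nearest-neighbour variant of the identification method described above can be sketched in a few lines (a minimal sketch: the function name and thresholds are ours, following the commonly used k = 5 labelling of safe/borderline/rare/outlier examples):

```python
import numpy as np

def minority_types(X, y, minority_label=1, k=5):
    """Label each minority example as safe/borderline/rare/outlier
    according to how many of its k nearest neighbours share its class
    (k = 5 thresholds: >=4 safe, 2-3 borderline, 1 rare, 0 outlier)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    types = {}
    for i in np.where(y == minority_label)[0]:
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        nn = np.argsort(d)[:k]            # indices of k nearest neighbours
        same = int(np.sum(y[nn] == minority_label))
        if same >= 4:
            types[i] = "safe"
        elif same >= 2:
            types[i] = "borderline"
        elif same == 1:
            types[i] = "rare"
        else:
            types[i] = "outlier"
    return types
```

A minority example surrounded mostly by same-class neighbours is thus "safe", while one whose entire neighbourhood belongs to the majority class is an "outlier"; the kernel-function variant replaces the hard k-neighbour count with distance-weighted contributions.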
Design and implementation of continuous integration scheme based on Jenkins and Ansible
As information systems are optimized and upgraded in the era of cloud computing, business requirements keep growing in number and variety, the continuous-integration delivery process becomes increasingly complex, and the amount of repetitive work grows accordingly. Focusing on the continuous integration of specific information systems, this paper proposes a collaborative scheme for continuous integration and delivery based on Jenkins and Ansible. Both theory and practice show that such a collaborative continuous-delivery system can effectively improve the efficiency and quality of the continuous integration and delivery of information systems, with a clear benefit for system optimization and upgrading.
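Conceptually, a continuous-integration delivery chain of the kind described above is a fail-fast sequence of stages (build, test, deploy), with deployment delegated to playbooks. A minimal, tool-agnostic sketch (class and stage names are ours; a real setup would run `ansible-playbook` from a Jenkins pipeline stage rather than Python callables):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Pipeline:
    """Toy CI pipeline: run stages in order, stop at the first
    failure, and record the outcome of each stage."""
    stages: List[Tuple[str, Callable[[], bool]]] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def stage(self, name: str, job: Callable[[], bool]) -> "Pipeline":
        self.stages.append((name, job))
        return self                      # allow chained stage definitions

    def run(self) -> bool:
        for name, job in self.stages:
            ok = job()
            self.log.append(f"{name}: {'ok' if ok else 'FAILED'}")
            if not ok:
                return False             # fail fast, as a Jenkins build does
        return True
```

The fail-fast loop is the essential property: a failed test stage prevents the deploy stage from ever touching the target hosts.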
Traffic signal timing via deep reinforcement learning
In this paper, we propose a set of algorithms to design signal timing plans via deep reinforcement learning. The core idea of this approach is to set up a deep neural network (DNN) to learn the Q-function of reinforcement learning from the sampled traffic state/control inputs and the corresponding traffic system performance output. Based on the obtained DNN, we can find the appropriate signal timing policies by implicitly modeling the control actions and the change of system states. We explain the possible benefits and implementation tricks of this new approach. The relationships between this new approach and some existing approaches are also carefully discussed.
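The Q-function approximated by the DNN obeys the standard update Q(s, a) ← Q(s, a) + α(r + γ max Q(s', a') − Q(s, a)). As a hedged sketch, here is that update in tabular form on a toy two-phase intersection, where the state is the congested approach and the action is which phase receives green (the environment and all names are ours, purely illustrative; the paper itself replaces the table with a deep network):

```python
import random

def q_learning(env_step, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular stand-in for a DNN Q-function:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)
        done = False
        while not done:
            if rng.random() < eps:                      # epsilon-greedy exploration
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = env_step(s, a)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])       # temporal-difference update
            s = s2
    return Q
```

On a one-step environment that rewards giving green to the congested approach, the learned greedy policy selects exactly that phase in each state.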
Understanding, avoiding, and managing dermal filler complications.
BACKGROUND Dermal fillers are increasingly being utilized for multiple cosmetic dermatology indications. The appeal of these products can be partly attributed to their strong safety profiles. Nevertheless, complications can sometimes occur. OBJECTIVE To summarize the complications associated with each available dermal filling agent, strategies to avoid them, and management options if they do arise. METHODS AND MATERIALS Complications with dermal fillers reported in peer-reviewed publications, prescribing information, and recent presentations at professional meetings were reviewed. Recommendations for avoiding and managing complications are provided, based on the literature review and the author's experience. RESULTS Inappropriate or superficial placement is one of the most frequent reasons for patient dissatisfaction. Owing to the reversibility of hyaluronic acid, complications from these fillers can be easily corrected. Sensitivity to any of the currently FDA-approved products is quite rare and can usually be managed with anti-inflammatory agents. Infection is quite uncommon as well and can usually be managed with either antibiotics or antivirals, depending on the clinical features. The most concerning complication is cutaneous necrosis, and a protocol to treat the full spectrum of this process is reviewed. CONCLUSIONS Complications with dermal fillers are infrequent, and strategies to minimize their incidence and impact are easily deployed. Familiarity with each family of soft-tissue augmentation products, potential complications, and their management will optimize the use of these agents.
The combination of hypointense and hyperintense signal changes on T2-weighted magnetic resonance imaging sequences: a specific marker of multiple system atrophy?
OBJECTIVE To compare the frequency and specificity of hypointense magnetic resonance imaging (MRI) signal changes alone with the frequency and specificity of a pathological MRI pattern consisting of a hyperintense lateral rim and a dorsolateral signal attenuation on T2-weighted MRIs in patients with parkinsonism of various origins. PATIENTS Ninety patients with Parkinson disease (PD) (n = 65), progressive supranuclear palsy (PSP) (n = 10), and multiple system atrophy (MSA) of the striatonigral degeneration type (n = 15) underwent MRI. SETTING University medical center. RESULTS Nine of the 15 patients with MSA showed the pattern with hyperintense lateral rim and a dorsolateral hypointense signal attenuation on T2-weighted images within the putamen. This pattern was not found in the 65 patients with PD, nor in the 10 patients with PSP. Only hypointense changes in the putamen were found in 6 patients (9%) with PD, 4 patients (40%) with PSP, and 5 patients (36%) with MSA. CONCLUSIONS Our data suggest that the pattern consisting of hypointense and hyperintense T2 changes within the putamen is a highly specific MRI sign of MSA, while hypointensity alone remains a sensitive, but nonspecific MRI sign of MSA. In clinically doubtful cases, the appearance of a hypointense and hyperintense signal pattern on MRI makes the diagnosis of PD very unlikely, while hypointense signal changes alone do not exclude idiopathic PD.
Artificial neural networks improve the prediction of Kt/V, follow-up dietary protein intake and hypotension risk in haemodialysis patients.
BACKGROUND Artificial neural networks (ANN) represent a promising alternative to classical statistical and mathematical methods to solve multidimensional non-linear problems. The aim of the study was to compare the performance of ANN in predicting the dialysis quality (Kt/V), the follow-up dietary protein intake and the risk of intradialytic hypotension in haemodialysis patients with that predicted by experienced nephrologists. METHODS A combined retrospective and prospective observational study was performed in two Swiss dialysis units (80 chronic haemodialysis patients, 480 monthly clinical observations and biochemical test results). Using mathematical models based on linear and logistic regressions as background, ANN were built and then prospectively compared with the ability of six experienced nephrologists to predict the Kt/V and the follow-up protein catabolic rate (PCR) and to detect a Kt/V < 1.30, a follow-up PCR < 1.00 g/kg/day and the occurrence of hypotension. RESULTS ANN compared with nephrologists gave a more accurate correlation between estimated and calculated Kt/V and follow-up PCR (P<0.001). The same superiority of ANN was also seen in the ability to detect a Kt/V < 1.30, a follow-up PCR < 1.00 g/kg/day and the occurrence of hypotension expressed as a percentage of correct answers, sensitivity, specificity and predictivity. CONCLUSIONS The use of ANN significantly improves the ability of experienced nephrologists to estimate the Kt/V and the follow-up PCR and to detect a Kt/V < 1.30, a follow-up PCR < 1.00 g/kg/day and the occurrence of intradialytic hypotension.
An Environmental Analysis of Machining
This paper presents a system-level environmental analysis of machining. The analysis presented here considers not only the environmental impact of the material removal process itself, but also the impact of associated processes such as material preparation and cutting fluid preparation. This larger system view results in a more complete assessment of machining. Energy analyses show that the energy requirements of actual material removal can be quite small when compared to the total energy associated with machine tool operation. Also, depending on the energy intensity of the materials being machined, the energy of material production can, in some cases, far exceed the energy required for machine tool operation.
Twitter Sentiment Analysis for Large-Scale Data: An Unsupervised Approach
Millions of tweets are generated each day on multifarious issues. Topical diversity in content demands domain-independent solutions for analysing Twitter sentiments. Scalability is another issue when dealing with huge volumes of tweets. This paper presents an unsupervised method for analysing tweet sentiments. The polarity of tweets is evaluated using three sentiment lexicons: SenticNet, SentiWordNet and SentislangNet. SentislangNet is a sentiment lexicon built from SenticNet and SentiWordNet for slangs and acronyms. Experimental results show a fairly good F-score. The method is implemented and tested in a parallel Python framework and is shown to scale well with large volumes of data on multiple cores.
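A lexicon-based polarity scorer of the sort described above can be sketched as follows (the toy lexicon, the negation handling and the function name are ours; the actual method combines SenticNet, SentiWordNet and SentislangNet scores):

```python
# Toy sentiment lexicon; real lexicons assign graded scores per word sense.
LEXICON = {"love": 0.8, "great": 0.7, "good": 0.5,
           "bad": -0.5, "hate": -0.8, "lol": 0.3, "wtf": -0.4}

NEGATORS = {"not", "no", "never"}

def tweet_polarity(tweet: str) -> str:
    """Unsupervised polarity: sum lexicon scores over the tweet's words,
    flipping the sign of a word that immediately follows a negator."""
    score, negate = 0.0, False
    for word in tweet.lower().split():
        w = word.strip("#@!?.,")          # drop hashtags/mentions/punctuation
        if w in NEGATORS:
            negate = True
            continue
        s = LEXICON.get(w, 0.0)           # unknown words contribute nothing
        score += -s if negate else s
        negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Because each tweet is scored independently of every other, the scorer parallelises trivially across cores, which is what makes the lexicon approach attractive at Twitter scale.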
Higher levels of neanderthal ancestry in East Asians than in Europeans.
Neanderthals were a group of archaic hominins that occupied most of Europe and parts of Western Asia from ∼30,000 to 300,000 years ago (KYA). They coexisted with modern humans during part of this time. Previous genetic analyses that compared a draft sequence of the Neanderthal genome with genomes of several modern humans concluded that Neanderthals made a small (1-4%) contribution to the gene pools of all non-African populations. This observation was consistent with a single episode of admixture from Neanderthals into the ancestors of all non-Africans when the two groups coexisted in the Middle East 50-80 KYA. We examined the relationship between Neanderthals and modern humans in greater detail by applying two complementary methods to the published draft Neanderthal genome and an expanded set of high-coverage modern human genome sequences. We find that, consistent with the recent finding of Meyer et al. (2012), Neanderthals contributed more DNA to modern East Asians than to modern Europeans. Furthermore we find that the Maasai of East Africa have a small but significant fraction of Neanderthal DNA. Because our analysis is of several genomic samples from each modern human population considered, we are able to document the extent of variation in Neanderthal ancestry within and among populations. Our results combined with those previously published show that a more complex model of admixture between Neanderthals and modern humans is necessary to account for the different levels of Neanderthal ancestry among human populations. In particular, at least some Neanderthal-modern human admixture must postdate the separation of the ancestors of modern European and modern East Asian populations.
Continuous electromyography monitoring of motor cranial nerves during cerebellopontine angle surgery.
OBJECT Electromyography (EMG) monitoring is expected to reduce the incidence of motor cranial nerve deficits in cerebellopontine angle surgery. The aim of this study was to provide a detailed analysis of intraoperative EMG phenomena with respect to their surgical significance. METHODS Using a system that continuously records facial and lower cranial nerve EMG signals during the entire operative procedure, the authors examined 30 patients undergoing surgery on acoustic neuroma (24 patients) or meningioma (six patients). Free-running EMG signals were recorded from muscles targeted by the facial, trigeminal, and lower cranial nerves, and were analyzed off-line with respect to waveform characteristics, frequencies, and amplitudes. Intraoperative measurements were correlated with typical surgical maneuvers and postoperative outcomes. Characteristic EMG discharges were obtained: spikes and bursts were recorded immediately following the direct manipulation of a dissecting instrument near the cranial nerve, but also during periods when the nerve had not yet been exposed. Bursts could be precisely attributed to contact activity. Three distinct types of trains were identified: A, B, and C trains. Whereas B and C trains are irrelevant with respect to postoperative outcome, the A train--a sinusoidal, symmetrical sequence of high-frequency and low-amplitude signals--was observed in 19 patients and could be well correlated with additional postoperative facial nerve paresis (in 18 patients). CONCLUSIONS It could be demonstrated that the occurrence of A trains is a highly reliable predictor for postoperative facial palsy. Although some degree of functional worsening is to be expected postoperatively, there is a good chance of avoiding major deficits by warning the surgeon early. Continuous EMG monitoring is superior to electrical nerve stimulation or acoustic loudspeaker monitoring alone. 
The detailed analysis of EMG waveform characteristics can thus provide more accurate warning criteria during surgery.
Epitope analysis of the envelope and non-structural glycoproteins of Murray Valley encephalitis virus.
Previous studies have shown that antibodies produced against strategic flavivirus epitopes play an important role in recovery and immunity. Definition of the conformation and location of these epitopes and the degree of their conservation among flaviviruses is important to understanding the humoral response to flavivirus infection. In this study we have examined epitopes recognized by 14 monoclonal antibodies (MAbs) produced to the envelope (E) and non-structural (NS1) proteins of Murray Valley encephalitis virus (MVE). These antibodies were analysed for specificity, neutralization, haemagglutination inhibition (HI) and competitive binding. We have identified six distinct epitopes on the E protein which are located in four non-overlapping domains. MAbs to epitopes in one domain neutralized virus, were specific for MVE and Japanese encephalitis virus, and reacted with epitopes resistant to reduction. Two other E domains, one specific to MVE and the other shared by all flaviviruses, also contained neutralization sites and were stabilized by disulphide bonds. The fourth domain on E was conserved among the flaviviruses, sensitive to SDS denaturation and did not induce neutralizing antibody. Studies with MVE NS1 MAbs revealed that they were mostly type-specific, unreactive with conserved epitopes, and unreactive in HI and neutralization tests. The six epitopes identified on NS1 did not overlap and represent antigenic domains either resistant or sensitive to reduction. Immunoblotting of viral proteins in MVE-infected C6/36 cells revealed two distinct forms of NS1 and high Mr proteins of 97K and 108K that represented disulphide-linked heterodimers of E and NS1.
Advances in Pre-Training Distributed Word Representations
Many Natural Language Processing applications nowadays rely on pre-trained word representations estimated from large text corpora such as news collections, Wikipedia and Web Crawl. In this paper, we show how to train high-quality word vector representations by using a combination of known tricks that are however rarely used together. The main result of our work is the new set of publicly available pre-trained models that outperform the current state of the art by a large margin on a number of tasks.
The evolution and psychology of self-deception.
In this article we argue that self-deception evolved to facilitate interpersonal deception by allowing people to avoid the cues to conscious deception that might reveal deceptive intent. Self-deception has two additional advantages: It eliminates the costly cognitive load that is typically associated with deceiving, and it can minimize retribution if the deception is discovered. Beyond its role in specific acts of deception, self-deceptive self-enhancement also allows people to display more confidence than is warranted, which has a host of social advantages. The question then arises of how the self can be both deceiver and deceived. We propose that this is achieved through dissociations of mental processes, including conscious versus unconscious memories, conscious versus unconscious attitudes, and automatic versus controlled processes. Given the variety of methods for deceiving others, it should come as no surprise that self-deception manifests itself in a number of different psychological processes, and we discuss various types of self-deception. We then discuss the interpersonal versus intrapersonal nature of self-deception before considering the levels of consciousness at which the self can be deceived. Finally, we contrast our evolutionary approach to self-deception with current theories and debates in psychology and consider some of the costs associated with self-deception.
Antibacterial activity of oregano (Origanum vulgare Linn.) against gram positive bacteria.
The present investigation focuses on the antibacterial potential of an infusion, a decoction and the essential oil of oregano (Origanum vulgare) against 111 Gram-positive bacterial isolates belonging to 23 different species related to 3 genera. The infusion and essential oil exhibited antibacterial activity against Staphylococcus saprophyticus, S. aureus, Micrococcus roseus, M. kristinae, M. nishinomiyaensis, M. lylae, M. luteus, M. sedentarius, M. varians, Bacillus megaterium, B. thuringiensis, B. alvei, B. circulans, B. brevis, B. coagulans, B. pumilus, B. laterosporus, B. polymyxa, B. macerans, B. subtilis, B. firmus, B. cereus and B. lichiniformis. The infusion exhibited maximum activity against B. laterosporus (mean zone of inhibition 17.5 mm ± 1.5 SD), followed by B. polymyxa (17.0 mm ± 2.0 SD), and the essential oil of oregano exhibited maximum activity against S. saprophyticus (16.8 mm ± 1.8 SD), followed by B. circulans (14.5 mm ± 0.5 SD). All tested isolates, however, were found to be resistant to the decoction of oregano.
Positive expiratory pressure physiotherapy for airway clearance in people with cystic fibrosis.
BACKGROUND Chest physiotherapy is widely prescribed to assist the clearance of airway secretions in people with cystic fibrosis. Positive expiratory pressure (PEP) devices provide back pressure to the airways during expiration. This may improve clearance by building up gas behind mucus via collateral ventilation and by temporarily increasing functional residual capacity. Given the widespread use of PEP devices, there is a need to determine the evidence for their effect. This is an update of a previously published review. OBJECTIVES To determine the effectiveness and acceptability of PEP devices compared to other forms of physiotherapy as a means of improving mucus clearance and other outcomes in people with cystic fibrosis. SEARCH METHODS We searched the Cochrane Cystic Fibrosis and Genetic Disorders Group Trials Register, comprising references identified from comprehensive electronic database searches and handsearches of relevant journals and abstract books of conference proceedings. The electronic database CINAHL was also searched from 1982 to 2013. Most recent search of the Group's Cystic Fibrosis Trial Register: 02 December 2014. SELECTION CRITERIA Randomised controlled studies in which PEP was compared with any other form of physiotherapy in people with cystic fibrosis. This included postural drainage and percussion, active cycle of breathing techniques, oscillating PEP devices, thoracic oscillating devices, bilevel positive airway pressure (BiPAP) and exercise. Studies also had to include one or more of the following outcomes: change in forced expiratory volume in one second; number of respiratory exacerbations; a direct measure of mucus clearance; weight of expectorated secretions; other pulmonary function parameters; a measure of exercise tolerance; ventilation scans; cost of intervention; and adherence to treatment.
DATA COLLECTION AND ANALYSIS Three authors independently applied the inclusion and exclusion criteria to publications and assessed the risk of bias of the included studies. MAIN RESULTS A total of 26 studies (involving 733 participants) were included in the review. Eighteen studies involving 296 participants were cross-over in design. Data were not published in sufficient detail in most of these studies to perform any meta-analysis. These studies compared PEP to active cycle of breathing techniques (ACBT), autogenic drainage (AD), oral oscillating PEP devices, high-frequency chest wall oscillation (HFCWO), bilevel positive airway pressure (BiPAP) devices and exercise. Forced expiratory volume in one second was the review's primary outcome and the most frequently reported outcome in the studies. Single interventions or series of treatments that continued for up to three months demonstrated no significant difference in effect between PEP and other methods of airway clearance on this outcome. However, long-term studies had equivocal or conflicting results regarding the effect on this outcome. A second primary outcome was the number of respiratory exacerbations. There was a lower exacerbation rate in participants using PEP compared to other techniques when used with a mask for at least one year. Participant preference was reported in 10 studies; in all studies with an intervention period of at least one month, this was in favour of PEP. The results for the remaining outcome measures were not examined or reported in sufficient detail to provide any high-level evidence. The only reported adverse event was in a study in which infants performing either PEP or postural drainage with percussion experienced some gastro-oesophageal reflux. This was more severe in the postural drainage with percussion group. Many studies had a risk of bias as they did not report how the randomisation sequence was either generated or concealed.
Most studies reported the number of dropouts and also reported on all planned outcome measures. AUTHORS' CONCLUSIONS Following meta-analyses of the effects of PEP versus other airway clearance techniques on lung function and patient preference, this Cochrane review demonstrated a significant reduction in pulmonary exacerbations in people using PEP compared to those using HFCWO in the study where exacerbation rate was a primary outcome measure. It is important to note, however, that there may be individual preferences with respect to airway clearance techniques, and that each patient needs to be considered individually when selecting an optimal treatment regimen in the short and long term, throughout life, as circumstances including developmental stage, pulmonary symptoms and lung function change over time. This also applies as conditions vary between baseline function and pulmonary exacerbations. Overall, meta-analysis in this Cochrane review has shown a significant reduction in pulmonary exacerbations in people using PEP in the few studies where exacerbation rate was a primary outcome measure.
Education set design for smart home applications
Along with the increasing number of smart home applications, the need for education about these systems has become unavoidable. A smart home is a building that occupies a large area and contains many specialized control systems, communication systems and wiring; education in smart home systems therefore requires special laboratories. Where such laboratories do not exist, education is carried out only theoretically, which is a problem, since good smart home education requires close collaboration between practice and theory. In this study, an approach to this problem is attempted: an education set has been developed to provide better training than theoretical instruction alone. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 19: 631-638, 2011; DOI 10.1002/cae.20360
William Morris and the European Artistic Connection
The lecture is connected to my research on the influence of the Kelmscott Press on the European private press movement. Among many examples, I chose to put the stress on an obvious connection between late 19th-century French and British private presses. My aim is to bring to light the links between William Morris' Kelmscott Press and Lucien Pissarro's Eragny Press, both in substance and in form. According to William Morris, the medieval and early Renaissance Art of the Book had to be considered a guideline for the production of books. This idea must be regarded as the foundation of Morris' Book Aesthetics, and it spread all over Europe. The problematics of substance and form was a point especially covered in two texts. Firstly, we'll talk about The Ideal Book, a lecture that Morris gave in 1893; his demonstration concerned the strong connection between architecture and book arrangement. The second text – Aims in Founding the Kelmscott Press (1895) – deals with similar ideas and opens this way: « … I began printing books with the hope of producing some which would have a definite claim to beauty, while at the same time they should be easy to read and should not dazzle the eye, or trouble the intellect of the reader by eccentricity of form in the letters. … » These two texts, which initiated the principles of Morris' Book Aesthetics, gave rise to major precepts that have definitely left their mark on late 19th- and early 20th-century private press holders: the will to renew the art of making books, which was dying under all-powerful sales and distribution considerations; the moral obligation to go back to the very source of the Art of the Book in order to appropriate a part of its knowledge and formal aspects; complete control over the production of the book, from the editorial contents to its arrangement, not forgetting the decoration and the use of quality paper and inks; and last, the idea that a good book is made of a scholarly substance and a form that emphasizes it.
The latter, as well as adapting to the literary content, must have a coherent ornamental significance – beautiful and striking in its own right. It cannot be a servile decoration but must be regarded as being as important as the text. At the very end of the 19th century, these ideas circulated around Europe and got mixed with others. Some people openly made them their own and began manufacturing books with the ideas of Beauty, Ethics and Good in mind. Of course, the result was somewhat different, but the philosophical approach surely derived from Morris' principles. We could choose many examples, but being a French member of the William Morris Society I'd rather present a French-British illustration. Therefore, I will talk about Lucien Pissarro's Eragny Press, which really acted at that time as a bridge between French and British aesthetics. What is more, even though the lives of Lucien Pissarro and William Morris hardly had anything in common, we can still establish the convergence of their political ideas, as Pissarro was then very close to French socialist and even anarchist circles.
Large-scale Isolated Gesture Recognition using pyramidal 3D convolutional networks
Human gesture recognition is one of the central research fields of computer vision, and effective gesture recognition remains challenging. In this paper, we present a pyramidal 3D convolutional network framework for large-scale isolated human gesture recognition. 3D convolutional networks are utilized to learn spatiotemporal features from gesture video files. A pyramid input is proposed to preserve the multi-scale contextual information of gestures, and each pyramid segment is uniformly sampled with temporal jitter. Pyramid fusion layers are inserted into the 3D convolutional networks to fuse the features of the pyramid input. This strategy makes the networks recognize human gestures from entire video files, not just from independently segmented clips. We present experimental results on the 2016 ChaLearn LAP Large-scale Isolated Gesture Recognition Challenge, in which we placed third.
Laplacian Auto-Encoders: An explicit learning of nonlinear data manifold
A key factor contributing to the success of many auto-encoder-based deep learning techniques is the implicit consideration of the underlying data manifold in their training criteria. In this paper, we aim to make this consideration more explicit by training auto-encoders entirely from the manifold learning perspective. We propose a novel unsupervised manifold learning method termed Laplacian Auto-Encoders (LAEs). Starting from a general regularized function learning framework, the LAE regularizes the training of auto-encoders so that the learned encoding function has the locality-preserving property for data points on the manifold. By exploiting the analogous relation between the graph Laplacian and the Laplace-Beltrami operator on the continuous manifold, we derive discrete approximations of the first- and higher-order auto-encoder regularizers that can be applied in practical scenarios, where only data points sampled from the distribution on the manifold are available. Our proposed LAE has potentially better generalization capability, due to its explicit respect of the underlying data manifold. Extensive experiments on benchmark visual classification datasets show that LAE consistently outperforms alternative auto-encoders recently proposed in the deep learning literature, especially when training samples are relatively scarce. © 2015 Elsevier B.V. All rights reserved.
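For concreteness, the first-order Laplacian regularizer that the LAE adds to the reconstruction loss is tr(HᵀLH), where L = D − W is the graph Laplacian of a neighbourhood affinity matrix W and row i of H is the encoding of data point i; it equals ½ Σᵢⱼ Wᵢⱼ ‖hᵢ − hⱼ‖², so it penalizes encodings that differ between neighbouring points. A minimal numerical sketch (function name ours):

```python
import numpy as np

def laplacian_penalty(H, W):
    """First-order Laplacian regularizer tr(H^T L H), with
    L = D - W the graph Laplacian of the affinity matrix W and
    row i of H the encoding of data point i.  Equivalent to
    0.5 * sum_ij W_ij * ||h_i - h_j||^2."""
    W = np.asarray(W, dtype=float)
    H = np.asarray(H, dtype=float)
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # graph Laplacian
    return float(np.trace(H.T @ L @ H))
```

Identical encodings of neighbouring points give a zero penalty, which is exactly the locality-preserving property the abstract describes.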
The Rise of Guardians: Fact-checking URL Recommendation to Combat Fake News
A large body of research work and effort has been focused on detecting fake news and building online fact-check systems in order to debunk fake news as soon as possible. Despite the existence of these systems, fake news is still widely shared by online users, which indicates that these systems may not be fully utilized. After detecting fake news, what is the next step to stop people from sharing it? How can we improve the utilization of these fact-check systems? To fill this gap, in this paper we (i) collect and analyze online users, called guardians, who correct misinformation and fake news in online discussions by referring to fact-checking URLs; and (ii) propose a novel fact-checking URL recommendation model to encourage the guardians to engage more in fact-checking activities. We found that the guardians usually took less than one day to reply to claims in online conversations and took another day to spread verified information to hundreds of millions of followers. Our proposed recommendation model outperformed four state-of-the-art models by 11%-33%. Our source code and dataset are available at http://web.cs.wpi.edu/~kmlee/data/gau.html.
A Complete Axiomatisation of the ZX-Calculus for Clifford+T Quantum Mechanics
We introduce the first complete and approximately universal diagrammatic language for quantum mechanics. We make the ZX-Calculus, a diagrammatic language introduced by Coecke and Duncan, complete for the so-called Clifford+T quantum mechanics by adding two new axioms to the language. The completeness of the ZX-Calculus for Clifford+T quantum mechanics -- also called the π/4-fragment of the ZX-Calculus -- was one of the main open questions in categorical quantum mechanics. We prove the completeness of this fragment using the recently studied ZW-Calculus, a calculus dealing with integer matrices. We also prove that the π/4-fragment of the ZX-Calculus represents exactly all the matrices over some finite dimensional extension of the ring of dyadic rationals.
Gazing at games: using eye tracking to control virtual characters
Welcome to the course: Gazing at Games: Using Eye Tracking to Control Virtual Characters. I will start with a short introduction of the course which will give you an idea of its aims and structure. I will also talk a bit about my background and research interests and motivate why I think this work is important.
Atrial fibrillation after taser exposure in a previously healthy adolescent.
We are reporting a previously healthy adolescent who developed atrial fibrillation after being tased. He has a structurally normal heart on echocardiogram, normal electrolyte level and thyroid function test results, and a urine toxicology screen positive for marijuana. The patient ultimately required external defibrillation to convert his cardiac rhythm to normal sinus rhythm and has had no recurrent arrhythmias since hospital discharge (approximately 1 year). This is the first reported case of atrial fibrillation developing after a Taser shot, occurring in an adolescent without other risk factors. This case illustrates the arrhythmogenic potential of a Taser in otherwise healthy young individuals, and further study of occurrence of Taser-induced arrhythmias is warranted.
Neural Machine Translation of Indian Languages
Neural Machine Translation (NMT) is a new technique for machine translation that has led to remarkable improvements over rule-based and statistical machine translation (SMT) techniques by overcoming many of the weaknesses of the conventional approaches. We study and apply NMT techniques to create a system with multiple models, which we then apply to six Indian language pairs. We compare the performance of our NMT models using automatic evaluation metrics such as UNK count, METEOR, F-measure, and BLEU. We find that NMT techniques are very effective for machine translation of Indian language pairs. We then demonstrate that we can achieve good accuracy even using a shallow network; comparing against Google Translate on our test dataset, our best model outperformed it by a margin of 17 BLEU points on Urdu-Hindi, 29 BLEU points on Punjabi-Hindi, and 30 BLEU points on Gujarati-Hindi translations.
Information credibility on twitter
We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. In this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to "trending" topics and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting ("re-tweeting") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments of the credibility of items in a recent sample of Twitter postings. Our results show that there are measurable differences in the way messages propagate that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.
The development of hyperactive-impulsive behaviors during the preschool years: the predictive validity of parental assessments.
The objectives of this study were to establish the different developmental trajectories of hyperactive-impulsive behaviors on the basis of both mother and father ratings at 19, 32, 50, and 63 months, and to examine the predictive validity of these trajectories with respect to later hyperactive-impulsive behaviors, as rated by teachers in the first 2 years of school. Hyperactive-impulsive behaviors were assessed in a population-based sample of 1,112 twins (565 boys and 547 girls) at 19, 32, 50, and 63 months of age. The results revealed a differentiated and consistent view of developmental trajectories of hyperactive-impulsive behaviors derived from these repeated assessments, with 7.1% of children seen by mothers (7% by fathers) as displaying high and stable hyperactive-impulsive behaviors. According to mother ratings, children on a high-chronic trajectory were more likely than other children to display hyperactive-impulsive behaviors at 72 and 84 months according to their teachers. Repeated measures over time and father-based trajectories significantly added to the prediction of teachers' later ratings of hyperactive-impulsive behaviors. These results support the predictive validity of parental assessments of hyperactive-impulsive behaviors during the preschool years and their use to identify children at risk for further evaluation and possible intervention.
The validity of the Hospital Anxiety and Depression Scale. An updated literature review.
OBJECTIVE To review the literature on the validity of the Hospital Anxiety and Depression Scale (HADS). METHOD A review of the 747 identified papers that used the HADS was performed to address the following questions: (I) What are the factor structure, discriminant validity, and internal consistency of the HADS? (II) How does the HADS perform as a case finder for anxiety disorders and depression? (III) How does the HADS agree with other self-rating instruments used to rate anxiety and depression? RESULTS Most factor analyses demonstrated a two-factor solution in good accordance with the HADS subscales for Anxiety (HADS-A) and Depression (HADS-D), respectively. The correlations between the two subscales varied from .40 to .74 (mean .56). Cronbach's alpha for HADS-A varied from .68 to .93 (mean .83) and for HADS-D from .67 to .90 (mean .82). In most studies an optimal balance between sensitivity and specificity was achieved when caseness was defined by a score of 8 or above on both HADS-A and HADS-D. The sensitivity and specificity for both HADS-A and HADS-D of approximately 0.80 were very similar to those achieved by the General Health Questionnaire (GHQ). Correlations between the HADS and other commonly used questionnaires were in the range .49 to .83. CONCLUSIONS The HADS was found to perform well in assessing the symptom severity and caseness of anxiety disorders and depression in somatic, psychiatric, and primary care patients, as well as in the general population.
RoboCupRescue 2014 - Robot League Team Hector Darmstadt (Germany)
This paper describes the approach used by Team Hector Darmstadt for participation in the 2013 RoboCup Rescue League competition. Participating in the RoboCup Rescue competition since 2009, the members of Team Hector Darmstadt focus on exploration of disaster sites using autonomous Unmanned Ground Vehicles (UGVs). The team has been established as part of a PhD program funded by the German Research Foundation at TU Darmstadt and combines expertise from Computer Science and Mechanical Engineering. We give an overview of the complete system used to solve the problem of reliably finding victims in harsh USAR environments. This includes hardware as well as software solutions and diverse topics like locomotion, SLAM, pose estimation, human robot interaction and victim detection. As a contribution to the RoboCup Rescue community, major parts of the used software have been released and documented as open source software for ROS.
Satellite Image Analysis for Disaster and Crisis-Management Support
This paper describes how multisource satellite data and efficient image analysis may successfully be used to conduct rapid-mapping tasks in the domain of disaster and crisis-management support. The German Aerospace Center (DLR) has set up a dedicated crosscutting service, the so-called "Center for Satellite-Based Crisis Information" (ZKI), to facilitate the use of its Earth-observation capacities in the service of national and international response to major disaster situations, humanitarian relief efforts, and civil security issues. This paper describes successful rapid satellite mapping campaigns supporting disaster relief and demonstrates how this technology can be used for civilian crisis-management purposes. In recent years, various international coordination bodies have been established, improving disaster-response-related cooperation within the Earth-observation community worldwide. DLR/ZKI operates in this context, networking closely with public authorities (civil security), nongovernmental organizations (humanitarian relief organizations), satellite operators, and other space agencies. This paper reflects on several of these international activities, such as the International Charter on Space and Major Disasters, describes mapping procedures, and reports on rapid-mapping experiences gained during various disaster-response applications. The example cases presented cover rapid impact assessment after the Indian Ocean tsunami, forest-fire mapping for Portugal, earthquake-damage assessment for Pakistan, and landslide-extent mapping for the Philippines.
An improved algorithm on Viola-Jones object detector
In image processing, the Viola-Jones object detector [1] is one of the most successful and widely used object detectors. A popular implementation used by the community is the one in OpenCV. The detector is powerful for detecting faces, but we found it hard to extend to other kinds of objects: convergence of the training phase depends heavily on the training data, and prediction precision stays low. In this paper, we propose new methods to improve its performance across diverse object categories. We incorporate six different types of feature images into Viola and Jones' framework. The integral image [1] used by the Viola-Jones detector is then computed on each of these feature images instead of only on the gray image, and each stage classifier is trained on one of these feature images. We also present a new stopping criterion for stage training. In addition, we integrate a keypoint-based SVM [2] predictor into the prediction phase to improve the confidence of the detection result.
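The core idea of computing integral images over several feature channels can be sketched as follows. This is a minimal illustration, not the authors' implementation; the gradient-magnitude channel is an assumed example of what one of the six feature images might look like:

```python
import numpy as np

def integral_image(channel):
    # ii[y, x] = sum of channel[:y+1, :x+1], via cumulative sums on both axes.
    return channel.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y0, x0, y1, x1):
    # Sum over the inclusive rectangle [y0..y1, x0..x1] in O(1)
    # using four lookups into the integral image.
    total = ii[y1, x1]
    if y0 > 0:
        total -= ii[y0 - 1, x1]
    if x0 > 0:
        total -= ii[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

gray = np.arange(16, dtype=np.float64).reshape(4, 4)
# One assumed example of a feature image: horizontal gradient magnitude.
grad_x = np.abs(np.gradient(gray, axis=1))
channels = [gray, grad_x]
integrals = [integral_image(c) for c in channels]
# Haar-like features can now be evaluated on any channel, not just gray.
print(box_sum(integrals[0], 0, 0, 1, 1))
```

Each stage classifier would then be trained against rectangle sums drawn from one chosen channel rather than from the gray image alone.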
Can Good Principals Keep Teachers in Disadvantaged Schools ? Linking Principal Effectiveness to Teacher Satisfaction and Turnover in Hard-to-Staff Environments
Background: High rates of teacher turnover likely mean greater school instability, disruption of curricular cohesiveness, and a continual need to hire inexperienced teachers, who typically are less effective, as replacements for teachers who leave. Unfortunately, research consistently finds that teachers who work in schools with large numbers of poor students and students of color feel less satisfied and are more likely to turn over, meaning that turnover is concentrated in the very schools that would benefit most from a stable staff of experienced teachers. Despite the potential challenge that this turnover disparity poses for equity of educational opportunity and student performance gaps across schools, little research has examined the reasons for elevated teacher turnover in schools with large numbers of traditionally disadvantaged students. Purpose: This study hypothesizes that school working conditions help explain both teacher satisfaction and turnover. In particular, it focuses on the role of effective principals in retaining teachers, particularly in disadvantaged schools with the greatest staffing challenges. Research Design: The study conducts quantitative analysis of national data from the 2003-04 Schools and Staffing Survey and 2004-05 Teacher Follow-up Survey. Regression analyses combat the potential for bias from omitted variables by utilizing an extensive set of control variables and employing a school district fixed effects approach that implicitly makes comparisons among principals and teachers within the same local context. Conclusions: Descriptive analyses confirm that observable measures of teachers' work environments, including ratings of the effectiveness of the principal, generally are lower in schools with large numbers of disadvantaged students. Regression results show that principal effectiveness is associated with greater teacher satisfaction and a lower probability that the teacher leaves the school within a year.
Moreover, the positive impacts of principal effectiveness on these teacher outcomes are even greater in disadvantaged schools. These findings suggest that policies focused on getting the best principals into the most challenging school environments may be effective strategies for lowering perpetually high teacher turnover rates in those schools.
CORPORATE ENVIRONMENTAL ORIENTATION:CONCEPTUALIZATION AND THE CASE OF ANDEAN EXPORTERS
As more consumers, firms, and governments become aware of environmental degradation and take measures to abate their negative contribution, green practices will play a more prominent role in corporate management. This paper conducts a literature review of firm greening modes, the proposed benefits of firm greening, implementation approaches, and green marketing in order to propose a framework for analyzing firm greening in the international arena, paying particular attention to behavior within developing countries. A survey administered to general managers of three hundred and seventeen small and medium-sized Colombian, Ecuadorian, and Venezuelan firms determines the managerial perceptions of greening activities in Andean firms. Though Andean managers are environmentally aware, they do not conduct adequate levels of measurement, auditing, and reporting. Also, the concept of stakeholders and peer corporations may limit Andean firms' responsiveness.
An Efficient Signature Verification Method Based on an Interval Symbolic Representation and a Fuzzy Similarity Measure
In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual's handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for verification of the test sample. To evaluate the proposed approach, a benchmark offline English signature data set (GPDS-300) and a large data set (BHSig260) composed of Bangla and Hindi offline signatures were used. We compared our results with recent signature verification methods from the literature in terms of average error rate and found that the proposed method consistently outperforms them when eight or more training samples are available.
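The interval-valued model and fuzzy matching can be illustrated with a small sketch. This is a hedged simplification: plain per-feature [min, max] intervals and an illustrative exponential membership function are assumptions, not the paper's exact formulation:

```python
import numpy as np

def build_interval_model(train_feats):
    # train_feats: (n_samples, n_features) features of one writer's
    # genuine signatures; each feature is summarised as [min, max].
    return np.min(train_feats, axis=0), np.max(train_feats, axis=0)

def fuzzy_similarity(model, test_feat):
    lo, hi = model
    # Features inside their interval get full membership (1.0);
    # outside, membership decays with distance to the nearest bound.
    dist = np.where(test_feat < lo, lo - test_feat,
                    np.where(test_feat > hi, test_feat - hi, 0.0))
    width = np.maximum(hi - lo, 1e-9)
    return float(np.mean(np.exp(-dist / width)))

rng = np.random.default_rng(0)
genuine = rng.normal(0.5, 0.05, size=(8, 16))   # 8 training signatures
model = build_interval_model(genuine)
query = rng.normal(0.5, 0.05, size=16)           # genuine-like test sample
forgery = rng.normal(0.9, 0.05, size=16)         # shifted feature profile
print(fuzzy_similarity(model, query), fuzzy_similarity(model, forgery))
```

A threshold on this similarity score would then accept or reject the test signature.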
Central pattern generators for locomotion control in animals and robots: A review
The problem of controlling locomotion is an area in which neuroscience and robotics can fruitfully interact. In this article, I will review research carried out on locomotor central pattern generators (CPGs), i.e. neural circuits capable of producing coordinated patterns of high-dimensional rhythmic output signals while receiving only simple, low-dimensional, input signals. The review will first cover neurobiological observations concerning locomotor CPGs and their numerical modelling, with a special focus on vertebrates. It will then cover how CPG models implemented as neural networks or systems of coupled oscillators can be used in robotics for controlling the locomotion of articulated robots. The review also presents how robots can be used as scientific tools to obtain a better understanding of the functioning of biological CPGs. Finally, various methods for designing CPGs to control specific modes of locomotion will be briefly reviewed. In this process, I will discuss different types of CPG models, the pros and cons of using CPGs with robots, and the pros and cons of using robots as scientific tools. Open research topics both in biology and in robotics will also be discussed.
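A minimal sketch of a CPG built from coupled phase oscillators follows. Kuramoto-style phase coupling is an assumed illustrative model (one of the oscillator-based CPG families the review covers), not any specific biological circuit:

```python
import math

def step_cpg(phases, freqs, coupling, dt=0.01):
    # One Euler step of coupled phase oscillators:
    # dphi_i/dt = 2*pi*f_i + sum_j w_ij * sin(phi_j - phi_i - bias_ij)
    n = len(phases)
    new = []
    for i in range(n):
        dphi = 2 * math.pi * freqs[i]
        for j, (w, bias) in coupling.get(i, {}).items():
            dphi += w * math.sin(phases[j] - phases[i] - bias)
        new.append(phases[i] + dt * dphi)
    return new

# Two oscillators coupled with a pi phase bias, so they settle into
# anti-phase, e.g. the left/right legs of a walking gait.
coupling = {0: {1: (4.0, math.pi)}, 1: {0: (4.0, math.pi)}}
phases = [0.0, 0.3]
for _ in range(5000):
    phases = step_cpg(phases, [1.0, 1.0], coupling)
diff = (phases[1] - phases[0]) % (2 * math.pi)
print(round(diff, 2))  # phase difference converges to pi (anti-phase)
```

The low-dimensional input here is just the frequency vector; modulating it changes gait speed while the coupling maintains the coordination pattern, which is the property that makes CPGs attractive for robot locomotion control.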
P300 and response time from a manual Stroop task
OBJECTIVES Manual response time (RT) and P300 event-related potential (ERP) measures were recorded in a Stroop color naming task to determine if previous results with vocal responses would be obtained using an arbitrary stimulus-response (S-R) mapping. METHODS Subjects (n = 32) were instructed to respond to the display color of a word but to ignore its meaning. Display color was congruent, neutral, or incongruent with word meaning. RESULTS Stroop facilitation and interference effects were observed, as RT was shortest in the congruent condition, intermediate in the neutral condition, and longest in the incongruent condition. In contrast, P300 latency did not vary across color/word congruence conditions, suggesting that the RT difference between congruence conditions originated after stimulus evaluation. CONCLUSIONS These manual RT/P300 findings support the view that Stroop interference and facilitation originate from response competition between the relevant and irrelevant stimulus attributes. By employing an arbitrary mapping of color words onto buttons, the present results indicate that the disparate effects of Stroop stimuli on RT and P300 latency do not depend on the nature of the S-R translation.
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies
The success of long short-term memory (LSTM) neural networks in language processing is typically attributed to their ability to capture long-distance statistical regularities. Linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by LSTMs, which do not have explicit structural representations? We begin addressing this question using number agreement in English subject-verb dependencies. We probe the architecture’s grammatical competence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. In the strongly supervised settings, the LSTM achieved very high overall accuracy (less than 1% errors), but errors increased when sequential and structural information conflicted. The frequency of such errors rose sharply in the language-modeling setting. We conclude that LSTMs can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured.
SLDR-DL: A Framework for SLD-Resolution with Deep Learning
This paper introduces an SLD-resolution technique based on deep learning. This technique enables neural networks to learn from old and successful resolution processes and to use learnt experiences to guide new resolution processes. An implementation of this technique is named SLDR-DL. It includes a Prolog library of deep feedforward neural networks and some essential functions of resolution. In the SLDR-DL framework, users can define logical rules in the form of definite clauses and teach neural networks to use the rules in reasoning processes.
Design and control of underactuated tendon-driven mechanisms
Many robotic and prosthetic hands have been developed over the last several decades, and many use tendon-driven mechanisms for their transmissions. Robotic hands are now built with underactuated mechanisms, which have fewer actuators than degrees of freedom, to reduce mechanical complexity or to realize biomimetic motion such as flexion of an index finger. Such designs have been heuristic, so it is useful to develop design methods for underactuated mechanisms. This paper classifies tendon-driven mechanisms into three classes and proposes a design method for them. Two of these classes concern underactuated tendon-driven mechanisms, which have so far been used without distinction. An index finger robot with four active tendons and two passive tendons is developed and controlled with the proposed method.
Pluronic lecithin organogel
The purpose of this review is to give a detailed insight into pluronic lecithin organogels (PLOs) as a topical and transdermal drug delivery system. Pluronic lecithin organogel is a microemulsion-based gel that has been effectively used by physicians and pharmacists to deliver hydrophilic and lipophilic drugs topically and transdermally across the stratum corneum. It is a thermodynamically stable, viscoelastic, and biocompatible gel composed of phospholipids (lecithin), an organic solvent, and a polar solvent. Various types of therapeutic agents have been easily incorporated into PLO to improve their topical drug delivery. Pluronic lecithin organogel improves the topical administration of drugs mainly because of desired drug partitioning, biphasic drug solubility, and the modification of the skin barrier system by organogel components. Besides this, it shows low skin irritation, increases patient compliance, reduces side effects, avoids first-pass metabolism, and increases the efficiency of the drug. In addition, PLO has been shown in vivo and in vitro to modulate the release and permeation of drugs applied transdermally. Thus, it offers a wide range of future applications and opportunities to experiment with various drugs in this type of delivery system.
An assessment of advanced mobile services acceptance: Contributions from TAM and diffusion theory models
Today, in addition to traditional mobile services, there are new ones already being used, thanks to the advances in 3G-related technologies. Our work contributed to the emerging body of research by integrating TAM and Diffusion Theory. Based on a sample of 542 Dutch consumers, we found that traditional antecedents of behavioral intention, ease of use and perceived usefulness, can be linked to diffusion-related variables, such as social influence and perceived benefits (flexibility and status).
Circularly polarized array antenna with corporate-feed network and series-feed elements
In this paper, corporate-feed circularly polarized microstrip array antennas are studied. The antenna element is a series-feed slot-coupled structure. Series feeding causes sequential rotation effect at the element level. Antenna elements are then used to form the subarray by applying sequential rotation to their feeding. Arrays having 4, 16, and 64 elements were made. The maximum achieved gains are 15.3, 21, and 25.4 dBic, respectively. All arrays have less than 15 dB return loss and 3 dB axial ratio from 10 to 13 GHz. The patterns are all quite symmetrical.
Neuronal Circuits for Fear Expression and Recovery: Recent Advances and Potential Therapeutic Strategies
Recent technological developments, such as single unit recordings coupled to optogenetic approaches, have provided unprecedented knowledge about the precise neuronal circuits contributing to the expression and recovery of conditioned fear behavior. These data have provided an understanding of the contributions of distinct brain regions such as the amygdala, prefrontal cortex, hippocampus, and periaqueductal gray matter to the control of conditioned fear behavior. Notably, the precise manipulation and identification of specific cell types by optogenetic techniques have provided novel avenues to establish causal links between changes in neuronal activity that develop in dedicated neuronal structures and the short and long-lasting expression of conditioned fear memories. In this review, we provide an update on the key neuronal circuits and cell types mediating conditioned fear expression and recovery and how these new discoveries might refine therapeutic approaches for psychiatric conditions such as anxiety disorders and posttraumatic stress disorder.
Using Argumentative Structure to Interpret Debates in Online Deliberative Democracy and eRulemaking
Governments around the world are increasingly utilising online platforms and social media to engage with, and ascertain the opinions of, their citizens. Whilst policy makers could potentially benefit from such enormous feedback from society, they first face the challenge of making sense out of the large volumes of data produced. In this article, we show how the analysis of argumentative and dialogical structures allows for the principled identification of those issues that are central, controversial, or popular in an online corpus of debates. Although areas such as controversy mining work towards identifying issues that are a source of disagreement, by looking at the deeper argumentative structure, we show that a much richer understanding can be obtained. We provide results from using a pipeline of argument-mining techniques on the debate corpus, showing that the accuracy obtained is sufficient to automatically identify those issues that are key to the discussion, attracting proportionately more support than others, and those that are divisive, attracting proportionately more conflicting viewpoints.
Distributed vector Processing of a new local MultiScale Fourier transform for medical imaging applications
The recently developed S-transform (ST) combines features of the Fourier and Wavelet transforms; it reveals frequency variation over both space and time. It is a potentially powerful tool that can be applied to medical image processing including texture analysis and noise filtering. However, calculation of the ST is computationally intensive, making conventional implementations too slow for many medical applications. This problem was addressed by combining parallel and vector computations to provide a 25-fold reduction in computation time. This approach could help accelerate many medical image processing algorithms.
The genetical evolution of social behaviour. I.
Grounds for thinking that the model described in the previous paper can be used to support general biological principles of social evolution are briefly discussed. Two principles are presented, the first concerning the evolution of social behaviour in general and the second the evolution of social discrimination. Some tentative evidence is given. More general application of the theory in biology is then discussed, particular attention being given to cases where the indicated interpretation differs from previous views and to cases which appear anomalous. A hypothesis is outlined concerning social evolution in the Hymenoptera; but the evidence that at present exists is found somewhat contrary on certain points. Other subjects considered include warning behaviour, the evolution of distasteful properties in insects, clones of cells and clones of zooids as contrasted with other types of colonies, the confinement of parental care to true offspring in birds and insects, fights, the behaviour of parasitoid insect larvae within a host, parental care in connection with monogyny and monandry and multi-ovulate ovaries in plants in connection with wind and insect pollination.
The effectiveness of a 15 minute weekly massage in reducing physical and psychological stress in nurses.
OBJECTIVE To investigate the effectiveness of massage therapy in reducing physiological and psychological indicators of stress in nurses employed in an acute care hospital. DESIGN Randomised controlled trial. SETTING Acute care hospital in Queensland. SUBJECTS Sixty nurses were recruited to the five week study and randomly assigned to two groups. INTERVENTION A 15 minute back massage once a week. The control group did not receive any therapy. MAIN OUTCOME MEASURES Demographic information, a life events questionnaire and a brief medical history of all participants was completed at enrolment. Physiological stress was measured at weeks one, three and five by urinary cortisol and blood pressure readings. Psychological stress levels were measured at weeks one and five with the State-Trait Anxiety Inventory (STAI). RESULTS Differences in the change in urinary cortisol and blood pressure between the two groups did not reach statistical significance. However, STAI scores decreased over the five weeks for those participants who received a weekly massage. The STAI scores of the control group increased over the five week period. These differences between the groups were statistically significant. CONCLUSION The results of this study suggest that massage therapy is a beneficial tool for the health of nurses as it may reduce psychological stress levels. It is recommended that further large studies be conducted to measure the symptoms of stress rather than the physiological signs of stress in nurses.
Bag of Region Embeddings via Local Context Units for Text Classification
Contextual information and word order have proved valuable for text classification. To exploit local word-order information, n-grams are commonly used as features in several models, such as linear models. However, these models commonly suffer from data sparsity and have difficulty representing large regions. The discrete or distributed representations of n-grams can be regarded as region embeddings, i.e., representations of fixed-size regions. In this paper, we propose two novel text classification models that learn task-specific region embeddings without hand-crafted features, thereby overcoming the drawbacks of n-grams. In our model, each word has two attributes: a commonly used word embedding and an additional local context unit that interacts with the word's local context. Both the units and the word embeddings are used to generate representations of regions and are learned as model parameters. Finally, the bag of region embeddings of a document is fed to a linear classifier. Experimental results show that our proposed methods achieve state-of-the-art performance on several benchmark datasets. We provide visualizations and analysis illustrating that the proposed local context unit can capture syntactic and semantic information.
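The word-plus-context-unit interaction can be sketched roughly as below. This is a hedged toy version: the element-wise composition, the max-pooling choice, and all dimensions are assumptions inferred from the abstract rather than the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, dim, region = 10, 4, 3  # toy sizes (assumed)

# Each word has an embedding plus a local context unit: one dim-sized
# vector per relative position inside the region window.
embeddings = rng.normal(size=(vocab, dim))
context_units = rng.normal(size=(vocab, region, dim))

def region_embedding(word_ids, center):
    # Interact each neighbour's embedding with the center word's context
    # unit for that relative position, then max-pool over the region.
    start = center - region // 2
    projected = []
    for offset in range(region):
        w = word_ids[start + offset]
        unit = context_units[word_ids[center], offset]
        projected.append(unit * embeddings[w])  # element-wise interaction
    return np.max(projected, axis=0)            # max-pool over the region

doc = [3, 1, 4, 1, 5, 9, 2]
# Bag of region embeddings: sum the region vectors over all valid centers.
doc_vec = sum(region_embedding(doc, c) for c in range(1, len(doc) - 1))
print(doc_vec.shape)
```

In the actual models both `embeddings` and `context_units` would be trained end-to-end with the linear classifier on top of `doc_vec`.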
A methodology for estimating the value of privacy in information disclosure systems
In many types of information systems, users face an implicit tradeoff between disclosing personal information and receiving benefits, such as discounts from an electronic commerce service that requires users to divulge some personal information. While these benefits are relatively measurable, the value of privacy involved in disclosing the information is much less tangible, making it hard to design and evaluate information systems that manage personal information. Meanwhile, existing methods to assess and measure the value of privacy, such as self-reported questionnaires, are notoriously unrelated to real-world behavior. To overcome this obstacle, we propose a methodology called VOPE (Value of Privacy Estimator), which relies on behavioral economics' Prospect Theory (Kahneman & Tversky, 1979) and evaluates people's privacy preferences in information disclosure scenarios. VOPE is based on an iterative and responsive methodology in which users take or leave a transaction that includes a component of information disclosure. To evaluate the method, we conduct an empirical experiment (n = 195), estimating people's privacy valuations in electronic commerce transactions. We report on the convergence of estimations and validate our results by comparing the values to theoretical projections of existing results (Tsai, Egelman, Cranor, & Acquisti, 2011), and to another independent experiment that required participants to rank the sensitivity of information disclosure transactions. Finally, we discuss how information systems designers and regulators can use VOPE to create and oversee systems that balance privacy and utility.
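An iterative take-it-or-leave-it estimation of this kind can be sketched as a simple bisection over offered compensation. This is an assumed mechanism for illustration only; the paper's actual adaptive procedure and its Prospect-Theory weighting are not reproduced here:

```python
def estimate_privacy_value(accepts_offer, lo=0.0, hi=100.0, rounds=12):
    # accepts_offer(x) -> True if the user would disclose for compensation x.
    # Bisect toward the smallest acceptable offer, which serves as an
    # estimate of the monetary value the user places on the disclosure.
    for _ in range(rounds):
        mid = (lo + hi) / 2
        if accepts_offer(mid):
            hi = mid   # accepted: the valuation is at or below mid
        else:
            lo = mid   # rejected: the valuation is above mid
    return (lo + hi) / 2

# Simulated participant who discloses an e-mail address for >= $12.50.
user = lambda offer: offer >= 12.5
print(round(estimate_privacy_value(user), 2))
```

Each round corresponds to one concrete disclosure transaction the participant takes or leaves, so the estimate is anchored in choices rather than self-reports.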
Islam and Social Change in French West Africa: Abbreviations used in references
PreSET: Improving performance of phase change memories by exploiting asymmetry in write times
Phase Change Memory (PCM) is a promising technology for building future main memory systems. A prominent characteristic of PCM is that its write latency is much higher than its read latency. Servicing such slow writes causes significant contention for read requests. For our baseline PCM system, the slow writes increase the effective read latency by almost 2X, causing significant performance degradation. This paper alleviates the problem of slow writes by exploiting the fundamental property of PCM devices that writes are slow only in one direction (SET operation) and are almost as fast as reads in the other direction (RESET operation). Therefore, a write operation to a line in which all memory cells have been SET prior to the write will incur much lower latency. We propose PreSET, an architectural technique that leverages this property to proactively SET all the bits in a given memory line well in advance of the anticipated write to that memory line. Our proposed design initiates a PreSET request for a memory line as soon as that line becomes dirty in the cache, thereby allowing a large window of time for the PreSET operation to complete. Our evaluations show that PreSET is more effective and incurs lower storage overhead than previously proposed write cancellation techniques. We also describe static and dynamic throttling schemes to limit the rate of PreSET operations. Our proposal reduces effective read latency from 982 cycles to 594 cycles and increases system performance by 34%, while improving the energy-delay-product by 25%.
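The PreSET idea can be captured in a toy timing model. This is a hedged sketch, not the paper's microarchitecture: the latency constants and the single-line granularity are assumptions chosen only to show why pre-SETting a dirty line moves the slow SET off the writeback's critical path.

```python
T_RESET, T_SET = 1, 8  # assumed relative latencies: RESET is read-fast, SET is slow

class PresetMemory:
    """Toy model of PreSET: when a cache line turns dirty, all bits of the
    backing memory line are SET in the background; the eventual writeback
    then only needs fast RESET pulses for the zero bits."""
    def __init__(self):
        self.preset_lines = set()

    def on_cache_dirty(self, addr):
        # Background PreSET request, issued well before the writeback,
        # so there is a large window for it to complete.
        self.preset_lines.add(addr)

    def writeback_latency(self, addr, data_bits):
        if addr in self.preset_lines:
            self.preset_lines.discard(addr)
            return T_RESET            # only RESETs remain: fast writeback
        # No PreSET completed: any 1-bit needs a slow SET on the critical path
        return T_SET if 1 in data_bits else T_RESET

mem = PresetMemory()
mem.on_cache_dirty(0x40)
fast = mem.writeback_latency(0x40, [1, 0, 1])  # pre-SET absorbed the slow SETs
slow = mem.writeback_latency(0x80, [1, 0, 1])  # SET remains on the critical path
```

A real design would additionally model the throttling schemes the abstract mentions, since background PreSET traffic competes with demand requests.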
Selective immobilization of multivalent ligands for surface plasmon resonance and fluorescence microscopy.
Cell surface multivalent ligands, such as proteoglycans and mucins, are often tethered by a single attachment point. In vitro, however, it is difficult to immobilize multivalent ligands at single sites due to their heterogeneity. Moreover, multivalent ligands often lack a single group with reactivity orthogonal to other functionality in the ligand. Biophysical analyses of multivalent ligand-receptor interactions would benefit from the availability of strategies for uniform immobilization of multivalent ligands. To this end, we report the design and synthesis of a multivalent ligand that has a single terminal orthogonal functional group and we demonstrate that this material can be selectively immobilized onto a surface suitable for surface plasmon resonance (SPR) experiments. The polymeric ligand we generated displays multiple copies of 3,6-disulfogalactose, and it can bind to the cell adhesion molecules P- and L-selectin. Using SPR measurements, we found that surfaces displaying our multivalent ligands bind specifically to P- and L-selectin. The affinities of P- and L-selectin for surfaces displaying the multivalent ligand are five- to sixfold better than the affinities for a surface modified with the corresponding monovalent ligand. In addition to binding soluble proteins, surfaces bearing immobilized polymers bound to cells displaying L-selectin. Cell binding was confirmed by visualizing adherent cells by fluorescence microscopy. Together, our results indicate that synthetic surfaces can be created by selective immobilization of multivalent ligands and that these surfaces are capable of binding soluble and cell-surface-associated receptors with high affinity.
Analgesic, anti-inflammatory and anti-platelet activities of the methanolic extract of Acacia modesta leaves
The current study aimed to evaluate Acacia modesta for analgesic, anti-inflammatory, and anti-platelet activities. The analgesic and anti-inflammatory effects were assessed in rodents using acetic acid- and formalin-induced nociception, the hot plate test and the carrageenan-induced rat paw oedema test. Intraperitoneal (i.p.) administration of the methanolic extract (50 and 100 mg/kg) produced significant inhibition (P < 0.01) of acetic acid-induced writhing in mice and suppressed the formalin-induced licking response in both phases of the test. In the hot plate assay the plant extract (100 mg/kg) increased the pain threshold of mice. Naloxone (5 mg/kg i.p.) partially reversed the analgesic effect of the extract in the formalin and hot plate tests. A. modesta (100 and 200 mg/kg i.p.) exhibited a sedative effect in the barbiturate-induced hypnosis test similar to that produced by diazepam (10 mg/kg i.p.). The plant extract (50–200 mg/kg i.p.) produced a marked anti-inflammatory effect in the carrageenan-induced rat paw oedema assay comparable to diclofenac and produced a dose-dependent (0.5–2.5 mg/mL) inhibitory effect against arachidonic acid-induced platelet aggregation. These data suggest that A. modesta possesses peripheral analgesic and anti-inflammatory properties, with the analgesic effects partially associated with the opioid system.
Pathogenesis of the epigastric hernia
Epigastric herniation is a rather common condition with a reported prevalence of up to 10 %. Only a minority of cases are symptomatic, presumably the reason for the scarce literature on this subject. Epigastric hernias have specific characteristics for which several anatomical theories have been developed. Whether these descriptions of pathological mechanisms still hold with regard to the characteristics of epigastric hernia is the subject of this review. A multi-database search was performed to identify relevant literature using the free-text words and subject headings 'epigastric hernia', 'linea alba', 'midline' and 'abdominal wall'. Studies were reviewed on anatomical theories describing the pathological mechanism of epigastric herniation, incidence, prevalence and female-to-male ratio, and possible explanatory factors. Three different theories have been described, of which two have not been confirmed by other studies. The theory still standing is that the attachment of the diaphragm causes extra tension in the epigastric region. Around 1.6–3.6 % of all abdominal hernias and 0.5–5 % of all operated abdominal hernias are epigastric hernias. Epigastric hernias are 2–3 times more common in men, with a higher incidence in patients aged 20 to 50 years. Some cadaver studies show an epigastric hernia rate of 0.5–10 %. These specific features of epigastric hernias (the large asymptomatic proportion, male predominance, occurrence only above umbilical level) are discussed with regard to the general theories. The epigastric hernia is a very common, mostly asymptomatic condition. Together with general factors for hernia formation, the theory of extra tension in the epigastric region caused by the diaphragm is the most likely explanation of epigastric hernia formation.
A Contextual Bandits Framework for Personalized Learning Action Selection
Recent developments in machine learning have the potential to revolutionize education by providing an optimized, personalized learning experience for each student. We study the problem of selecting the best personalized learning action that each student should take next given their learning history; possible actions could include reading a textbook section, watching a lecture video, interacting with a simulation or lab, solving a practice question, and so on. We first estimate each student's knowledge profile from their binary-valued graded responses to questions in their previous assessments using the SPARFA framework. We then employ these knowledge profiles as contexts in the contextual (multi-armed) bandits framework to learn a policy that selects the personalized learning actions that maximize each student's immediate success, i.e., their performance on their next assessment. We develop two algorithms for personalized learning action selection. While one is mainly of theoretical interest, we experimentally validate the other using a real-world educational dataset. Our experimental results demonstrate that our approach achieves performance superior or comparable to existing algorithms in terms of maximizing the students' immediate success.
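The contextual-bandit setup above can be illustrated with a standard LinUCB learner. This is a generic sketch, not the paper's algorithms: the SPARFA knowledge profiles are replaced by random context vectors, and the binary "immediate success" reward is modeled as a noisy linear function, both purely for illustration.

```python
import numpy as np

class LinUCB:
    """Standard LinUCB: one linear reward model per learning action,
    with an upper-confidence exploration bonus."""
    def __init__(self, n_actions, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_actions)]    # per-action Gram matrix
        self.b = [np.zeros(dim) for _ in range(n_actions)]  # per-action reward sums

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            theta = np.linalg.solve(A, b)                   # ridge estimate
            ucb = theta @ x + self.alpha * np.sqrt(x @ np.linalg.solve(A, x))
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, action, x, reward):
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x

rng = np.random.default_rng(1)
bandit = LinUCB(n_actions=3, dim=4)      # 3 candidate learning actions
true_theta = rng.normal(size=(3, 4))     # hidden per-action reward weights
for _ in range(500):
    x = rng.normal(size=4)               # stand-in for a SPARFA knowledge profile
    a = bandit.select(x)
    reward = true_theta[a] @ x + 0.1 * rng.normal()  # proxy for next-assessment success
    bandit.update(a, x, reward)
```

In the paper's setting, each round would correspond to one student interaction, with the context re-estimated from the student's growing response history.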