Open source computer algebra systems: Axiom
This survey looks at Axiom, a free and very powerful computer algebra system. It is a general-purpose CAS useful for symbolic computation, research, and the development of new mathematical algorithms. Axiom is similar in some ways to Maxima, covered in the survey [J1], but differs in many ways as well. Axiom, Maxima, and SAGE [S] are the largest of the general-purpose open-source CASs. If you want to "take a test drive," Axiom can be tried without installation via the web interface [AS] or the SAGE online interface [S].
Adaptation of the Nomophobia Questionnaire (NMP-Q) to Spanish in a sample of adolescents.
INTRODUCTION Nomophobia is the fear of being out of mobile phone contact. People suffering from this anxiety disorder have feelings of stress and nervousness when access to their mobiles or computers is not possible. This work is an adaptation and validation study of the Spanish version of the Nomophobia Questionnaire (NMP-Q). METHODOLOGY The study included 306 students (46.1% males and 53.9% females) with ages ranging from 13 to 19 years (Md=15.41±1.22). RESULTS Exploratory factor analysis revealed four dimensions that accounted for 64.4% of total variance. The ordinal α-value was 0.95, ranging from 0.75 to 0.92 across factors. Stability was assessed by the test-retest method (r=0.823). Indicators of convergence with the Spanish versions of the “Mobile Phone Problem Use Scale” (r=0.654) and the “Generalized Problematic Internet Use Scale” (r=0.531) were identified. Problematic mobile phone use patterns were examined taking the 15th, 80th and 95th percentiles as cut-off points. Scores of 39, 87 and 116 on the NMP-Q corresponded to occasional, at-risk and problematic users, respectively. CONCLUSIONS Psychometric analysis shows that the Spanish version of the NMP-Q is a valid and reliable tool for the study of nomophobia.
Modified bagging of maximal information coefficient for genome-wide identification
A new method, modified Bagging (mBagging) of Maximal Information Coefficient (mBoMIC), was developed for genome-wide identification. Traditional Bagging is inadequate for genome-wide identification in terms of both statistical performance and time cost. To improve statistical performance and reduce time cost, mBagging was developed to introduce the Maximal Information Coefficient (MIC) into genome-wide identification. The mBoMIC overcomes the weaknesses of the original MIC, namely inadequate statistical power and volatile MIC values. The three often-incompatible measures of Bagging, i.e. time cost, statistical power and false positive rate, were all significantly improved simultaneously: compared with traditional Bagging, mBagging reduced time cost by 80%, improved statistical power by 15%, and decreased the false positive rate by 31%. mBoMIC offers both sensitivity and universality in genome-wide identification. The SNPs identified only by mBoMIC have been reported as SNPs associated with cardiac disease.
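As a rough illustration of the bagging idea behind mBoMIC, the sketch below averages a dependence score over bootstrap resamples to stabilise it. The histogram mutual information used here is only a stand-in for the true MIC statistic (a real MIC implementation, e.g. from the minepy package, could be substituted), and all data are synthetic:

```python
# Illustrative sketch only, not the paper's implementation.
import numpy as np

def mi_score(x, y, bins=8):
    """Histogram mutual information: a simple stand-in for MIC."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def bagged_score(x, y, n_bags=30, rng=None):
    """Average the score over bootstrap resamples (the bagging step)."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    scores = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        scores.append(mi_score(x[idx], y[idx]))
    return np.mean(scores), np.std(scores)

rng = np.random.default_rng(1)
snp = rng.integers(0, 3, size=500).astype(float)   # toy genotype coded 0/1/2
trait = snp * 0.5 + rng.normal(size=500)           # phenotype associated with it
print(bagged_score(snp, trait))
```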
Trust establishment in cooperative wireless networks
In cooperative wireless networks, relay nodes are used to improve the channel capacity of the system. However, the presence of malicious relays in the network may severely degrade its performance. More specifically, there exists a possibility that a node refuses to cooperate when it is selected for cooperation or deliberately drops the received packets. Trust establishment is a mechanism to detect misbehaving nodes in a network. In this paper, we propose a trust establishment method for cooperative wireless networks using a Bayesian framework. In contrast with previous schemes, this approach takes the channel state information and the relay selection policy into account to derive a pure trust value for each relay node. The proposed method can be applied to any system with a general relay selection policy whose decisions in each cooperative transmission are independent of the previous ones. Moreover, it does not impose additional communication overhead on the system, as it uses the information already available from the relay selection procedure.
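A minimal sketch of the Bayesian ingredient of such a scheme, assuming a simple Beta-Bernoulli model: each observed cooperation or defection updates a Beta posterior whose mean serves as the trust value. The per-observation weight standing in for channel-state discounting is our own illustrative device, not the paper's exact estimator:

```python
# Minimal Beta-Bernoulli trust sketch (not the paper's exact method).
class RelayTrust:
    def __init__(self):
        self.a, self.b = 1.0, 1.0          # uniform Beta(1, 1) prior

    def update(self, cooperated: bool, weight: float = 1.0):
        # `weight` is a placeholder for how informative the observation is
        # (e.g. discounted when a poor channel could explain a dropped packet).
        if cooperated:
            self.a += weight
        else:
            self.b += weight

    @property
    def trust(self) -> float:
        return self.a / (self.a + self.b)  # posterior mean

relay = RelayTrust()
for outcome in [True, True, False, True]:
    relay.update(outcome, weight=0.8)
print(f"trust = {relay.trust:.3f}")
```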
RISK FACTORS FOR LAMB AND KID MORTALITY IN SHEEP AND GOAT FARMS IN JORDAN
This study was conducted to identify the risk factors associated with neonatal mortality in lambs and kids in Jordan, and to investigate the bacterial causes of these mortalities. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks with a neonatal mortality rate ≥ 1.0% were considered the “case group”, while flocks with a neonatal mortality rate below 1.0% served as the “control group”. The results indicated that the neonatal mortality rate (within 4 weeks of age) in lambs and kids was 3.2%. The early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km; and exposure to starvation-mismothering. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). The bacteria responsible for neonatal mortality were Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. E. coli was the most frequent bacterial species identified as a cause of neonatal mortality in lambs and kids, representing 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, O44 and O26 being the most frequent.
Penile Mondor’s disease
Dear Editor, To our knowledge, we report here the first case in the English-language literature of penile Mondor’s disease (superficial penile vein thrombosis) as an extremely rare complication of inguinal hernia repair. As over 800,000 inguinal hernia operations are performed annually in the USA, surgeons must be aware of the clinical presentation, diagnosis, and treatment options for this self-limited, benign disease. A 32-year-old male with a right-sided inguinal hernia was admitted to our hospital for hernia repair. He was a hard-working mechanic. He had no previous thromboembolic event or hematologic disease. On physical examination, he had an indirect reducible inguinal hernia, and his preoperative blood tests were normal. We performed Lichtenstein hernia repair under local anesthesia in a day-surgery setting, and he was discharged uneventfully on the same day. On the seventh postoperative day, he was readmitted with a painless cord-like induration on the dorsal surface of the penis and pain during erection. He had had his first sexual intercourse on postoperative day 5 and denied any penile trauma. Physical examination revealed subcutaneous cord-like induration of the superficial dorsal penile vein with inflammatory signs but without any sign of venereal infection or inguinal lymphadenopathy. Penile Mondor’s disease was confirmed by the presence of thrombosis and the absence of venous flow in the superficial penile vein on color Doppler ultrasonography. He was treated with a non-steroidal anti-inflammatory drug (NSAID; diclofenac potassium 100 mg bid, p.o.) and organo-heparinoid gel (bid, topical) for 2 weeks, and was advised to avoid sexual intercourse and not to wear a tool belt. He was totally asymptomatic, and Doppler ultrasonography confirmed recanalization of the superficial dorsal penile vein at the end of the fourth week. Henri Mondor reported the first superficial vein thrombosis, localized to the thoracoepigastric vein, in 1939, and Braun-Falco described penile vein participation in 1955 [1, 2]. Afterwards, isolated superficial penile vein thrombosis was first described by Helm and Hodge in 1958 [2]. Penile Mondor’s disease, with an incidence of 1.39%, is a rare but also underdiagnosed disease, mostly published as case reports in the literature [3, 4]. The clinical presentation is characterized by the sudden onset of a painless (seldom painful) cord-like induration and localized inflammatory signs on the dorsal surface of the penis. The pathogenesis of penile Mondor’s disease is controversial: several predisposing factors have been reported, namely penile trauma, excessive sexual intercourse, infection, pelvic surgery, intravenous drug injection, and prolonged venous compression due to a distended bladder, tumoral mass, sexual devices, or the use of tool belts. Furthermore, hypercoagulability disorders, such as deficiency of protein S, protein C, and antithrombin III, are risk factors for penile Mondor’s disease [1, 2, 4]. Briefly, all of the above-mentioned predisposing factors influence Virchow’s triad: vessel wall damage, stasis, and hypercoagulability [4]. With regard to differential diagnosis, sclerosing lymphangitis and Peyronie’s disease must be considered [4]. Sclerosing lymphangitis is a noninfectious acute cord-like lymphangitis of the coronal sulcus; however, the overlying skin is uninvolved and shows no erosion or ulceration.
Benefits of genetically modified crops for the poor: household income, nutrition, and health
The potential impacts of genetically modified (GM) crops on income, poverty and nutrition in developing countries continue to be the subject of public controversy. Here, a review of the evidence is given. As an example of a first-generation GM technology, the effects of insect-resistant Bt cotton are analysed. Bt cotton has already been adopted by millions of small-scale farmers in India, China, and South Africa, among others. On average, farmers benefit from insecticide savings, higher effective yields and sizeable income gains. Insights from India suggest that Bt cotton is employment generating and poverty reducing. As an example of a second-generation technology, the likely impacts of beta-carotene-rich Golden Rice are analysed from an ex ante perspective. Vitamin A deficiency is a serious nutritional problem, causing multiple adverse health outcomes. Simulations for India show that Golden Rice could reduce related health problems significantly, preventing up to 40,000 child deaths every year. These examples clearly demonstrate that GM crops can contribute to poverty reduction and food security in developing countries. To realise such social benefits on a larger scale requires more public support for research targeted to the poor, as well as more efficient regulatory and technology delivery systems.
Investigating Web APIs on the World Wide Web
The world of services on the Web, thus far limited to "classical" Web services based on WSDL and SOAP, has been increasingly marked by the domination of Web APIs, characterised by their relative simplicity and their natural suitability for the Web. Currently, the development of Web APIs is rather autonomous, guided by no established standards or rules, and Web API documentation is commonly not based on an interface description language such as WSDL, but is rather given directly in HTML as part of a web page. As a result, the use of Web APIs requires extensive manual effort, and the wealth of existing work on supporting common service tasks, including discovery, composition and invocation, can hardly be reused or adapted to APIs. Before we can achieve a higher level of automation and make any significant improvement to current practices and technologies, we need to reach a deeper understanding of these practices. Therefore, in this paper we present a thorough analysis of the current landscape of Web API forms and descriptions, which has to date remained unexplored. We base our findings on manually examining a body of publicly available APIs and, as a result, provide conclusions about common description forms, output types, usage of API parameters, invocation support, level of reusability, API granularity and authentication details. The collected data provide a solid basis for identifying deficiencies and realising how we can overcome existing limitations. More importantly, our analysis can be used as a basis for devising common standards and guidelines for Web API development.
Soft Pneumatic Actuator skin with embedded sensors
Soft Pneumatic Actuator skin (SPA-skin) is a novel concept of ultra-thin (< 1 mm) sensor-embedded actuators with distributed actuation points that could cover soft bodies. This highly customizable and flexible SPA-skin is ideal for providing proprioceptive sensing by covering pre-existing structures and robot bodies. Because it places few limitations on the surface quality, dynamics, or shape of the underlying body, its mechanical attributes allow potential applications in autonomous flexible braille, active surface pattern reconfiguration, and distributed actuation and sensing for improved tactile interfaces. In this paper, the authors present a proof-of-concept SPA-skin. The mechanical parameters, design criteria, sensor selection, and actuator construction process are illustrated. Two control schemes, an actuation mode and a force sensing mode, are also demonstrated with the latest prototype.
Deterministic quantum teleportation of atomic qubits
Quantum teleportation provides a means to transport quantum information efficiently from one location to another, without the physical transfer of the associated quantum-information carrier. This is achieved by using the non-local correlations of previously distributed, entangled quantum bits (qubits). Teleportation is expected to play an integral role in quantum communication and quantum computation. Previous experimental demonstrations have been implemented with optical systems that used both discrete and continuous variables, and with liquid-state nuclear magnetic resonance. Here we report unconditional teleportation of massive particle qubits using atomic (9Be+) ions confined in a segmented ion trap, which aids individual qubit addressing. We achieve an average fidelity of 78 per cent, which exceeds the fidelity of any protocol that does not use entanglement. This demonstration is also important because it incorporates most of the techniques necessary for scalable quantum information processing in an ion-trap system.
Justification Narratives for Individual Classifications
Machine learning models are now used extensively for decision making in diverse applications, but for non-experts they are essentially black boxes. While there has been some work on the explanation of classifications, these explanations are targeted at the expert user. For the non-expert, a better model is one of justification: not detailing how the model made its decision, but justifying it to the human user on his or her terms. In this paper we introduce the idea of a justification narrative: a simple model-agnostic mapping of the essential values underlying a classification to a semantic space. We present a package that automatically produces these narratives and realizes them visually or textually.
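To make the idea concrete, here is a toy, model-agnostic sketch of a justification narrative: signed per-feature contributions (from any attribution method) are ranked and verbalised. The function, feature names and thresholds are hypothetical, not the paper's package:

```python
# Toy justification-narrative sketch; names and values are invented.
def narrative(contributions, decision, top_k=2):
    """Verbalise the strongest signed feature contributions."""
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    phrases = []
    for feature, weight in ranked[:top_k]:
        role = "supports" if weight > 0 else "counts against"
        phrases.append(f"{feature} strongly {role} this outcome")
    return f"Classified as '{decision}' mainly because " + " and ".join(phrases) + "."

# e.g. signed per-feature contributions from any attribution method
contribs = {"income": +0.9, "late payments": -0.4, "age": +0.1}
print(narrative(contribs, decision="loan approved"))
```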
Storing and Analyzing Historical Graph Data at Scale
The work on large-scale graph analytics to date has largely focused on the study of static properties of graph snapshots. However, a static view of interactions between entities is often an oversimplification of several complex phenomena like the spread of epidemics, information diffusion, formation of online communities, and so on. Being able to find temporal interaction patterns, visualize the evolution of graph properties, or even simply compare them across time, adds significant value in reasoning over graphs. However, because of lack of underlying data management support, an analyst today has to manually navigate the added temporal complexity of dealing with large evolving graphs. In this paper, we present a system, called Historical Graph Store, that enables users to store large volumes of historical graph data and to express and run complex temporal graph analytical tasks against that data. It consists of two key components: a Temporal Graph Index (TGI), that compactly stores large volumes of historical graph evolution data in a partitioned and distributed fashion; it provides support for retrieving snapshots of the graph as of any timepoint in the past or evolution histories of individual nodes or neighborhoods; and a Spark-based Temporal Graph Analysis Framework (TAF), for expressing complex temporal analytical tasks and for executing them in an efficient and scalable manner. Our experiments demonstrate our system’s efficient storage, retrieval and analytics across a wide variety of queries on large volumes of historical graph data.
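The snapshot-retrieval primitive can be illustrated with a drastically simplified stand-in for the Temporal Graph Index: a log of timestamped edge events replayed up to the requested timepoint:

```python
# Simplified stand-in for temporal snapshot retrieval (not the TGI itself).
from collections import defaultdict

events = [  # (time, op, u, v)
    (1, "add", "a", "b"),
    (2, "add", "b", "c"),
    (5, "del", "a", "b"),
    (7, "add", "a", "c"),
]

def snapshot(events, t):
    """Replay edge additions/deletions up to time t to rebuild the graph."""
    adj = defaultdict(set)
    for time, op, u, v in sorted(events):
        if time > t:
            break
        if op == "add":
            adj[u].add(v); adj[v].add(u)
        else:
            adj[u].discard(v); adj[v].discard(u)
    return dict(adj)

print(snapshot(events, t=6))   # graph as of time 6: only edge b-c remains
```

A real index avoids full replay by partitioning the event log and materialising periodic checkpoints, which is the kind of trade-off the TGI manages at scale.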
IoT architecture proposal for disabled people
We are living in a new communication age, which will radically transform the way we live in our society. A world where anything will be connected to Internet is being created, generating an entirely new dynamic network - The Internet of Things (IoT) - enabling new means of communication between people, things and environment. A suitable architecture for Internet of Things (IoT) demands the implementation of several and distinct technologies that range from computing (e.g. Cloud Computing), communications (e.g. 6LowPAN, 3/4G) to semantic (e.g. data mining). Thus, it is necessary to understand very well all these technologies in order to know which of them is most suitable for a given scenario. Therefore, this paper proposes an IoT architecture for disabled people and intends to identify and describe the most relevant IoT technologies and international standards for the stack of the proposed architecture. In particular, the paper discusses the enabling IoT technologies and its feasibility for people with disabilities. At the end, it presents two use cases that are currently being deployed for this population.
Do empowered stroke patients perform better at self-management and functional recovery after a stroke? A randomized controlled trial
BACKGROUND Self-management after a stroke is a challenge because of multifaceted care needs and complex disabling consequences that cause further hindrance to patient participation. A 13-week stroke patient empowerment intervention (Health Empowerment Intervention for Stroke Self-management [HEISS]) was developed to enhance patients' ability to participate in self-management. PURPOSE To examine the effects of the empowerment intervention on stroke patients' self-efficacy, self-management behavior, and functional recovery. METHODS This is a single-blind randomized controlled trial with stroke survivors assigned to either a control group (CG) receiving usual ambulatory rehabilitation care or an intervention group (IG) receiving the HEISS in addition to usual care. Outcome data were collected at baseline (T0) and at 1 week (T1), 3 months (T2), and 6 months (T3) postintervention. Data were analyzed on the intention-to-treat principle. A generalized estimating equation model was used to assess the differential change in self-efficacy in illness management, self-management behaviors (cognitive symptom management, communication with physician, medication adherence, and self-blood pressure monitoring), and functional recovery (Barthel and Lawton indices) across time points between the two groups. RESULTS A total of 210 (CG = 105, IG = 105) Hong Kong Chinese stroke survivors (mean age = 69 years, 49% women, 72% ischemic stroke, 89% hemiparesis, and 63% tactile sensory deficit) were enrolled in the study. Those in the IG reported better self-efficacy in illness management at 3 months (P=0.011) and 6 months (P=0.012) postintervention, along with better self-management behaviors at all follow-up time points (all P<0.05), apart from medication adherence (P>0.05). Those in the IG had significantly better functional recovery (Barthel, all P<0.05; Lawton, all P<0.001) compared to the CG. The overall dropout rate was 16.7%. CONCLUSION Patient empowerment intervention (HEISS) may influence self-efficacy in illness management and improve self-management behavior and functional recovery of stroke survivors. Furthermore, the HEISS can be conducted in parallel with existing ambulatory stroke rehabilitation services and provides added value in sustaining stroke self-management and functional improvement in the long term.
Making Decisions about Self-Disclosure in Online Social Networks
This paper explores privacy calculus decision-making processes for online social networks (OSN). A content analysis method is applied to analyze data obtained from face-to-face interviews and an online survey with open-ended questions of 96 OSN users from different countries. The factors users consider before self-disclosing are explored. The perceived benefits and risks of using OSN and their impact on self-disclosure are also identified. We determine that the perceived risks of OSN usage hinder self-disclosure. It is not clear, however, whether the perceived benefits offset the impact of the risks on self-disclosure behavior. The findings as a whole do not support privacy calculus in OSN settings.
Marketplace or Reseller?
Intermediaries can choose between functioning as a marketplace (on which suppliers sell their products directly to buyers) or as a reseller (purchasing products from suppliers and selling them to buyers). We model this as a decision about whether control rights over a non-contractible decision variable (the choice of some marketing activity) are better held by suppliers (the marketplace-mode) or by the intermediary (the reseller-mode). Whether the marketplace or the reseller mode is preferred depends on whether independent suppliers or the intermediary have more important information relevant to the optimal tailoring of marketing activities for each specific product. We show that this tradeoff is shifted towards the reseller-mode when marketing activities create spillovers across products and when network effects lead to unfavorable expectations about supplier participation. If the reseller has a variable cost advantage (respectively, disadvantage) relative to the marketplace, then the tradeoff is shifted towards the marketplace for long-tail (respectively, short-tail) products. We thus provide a theory of which products an intermediary should offer in each mode. We also provide some empirical evidence that supports our main results. JEL classification: D4, L1, L5
Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits
We present a new algorithm for the contextual bandit learning problem, where the learner repeatedly takes an action in response to the observed context, observing the reward only for that action. Our method assumes access to an oracle for solving cost-sensitive classification problems and achieves the statistically optimal regret guarantee with only Õ(√T) oracle calls across all T rounds. By doing so, we obtain the most practical contextual bandit learning algorithm amongst approaches that work for general policy classes. We further conduct a proof-of-concept experiment which demonstrates the excellent computational and prediction performance of (an online variant of) our algorithm relative to several baselines.
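For intuition about the protocol (not the paper's oracle-efficient algorithm), here is a simple epsilon-greedy contextual bandit loop in which the "oracle" is approximated by one least-squares reward model per action, on synthetic data:

```python
# Illustrative epsilon-greedy contextual bandit on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
d, K, T, eps = 5, 3, 2000, 0.1
theta = rng.normal(size=(K, d))          # hidden reward model per action
A = [np.eye(d) for _ in range(K)]        # per-action ridge statistics
b = [np.zeros(d) for _ in range(K)]

total = 0.0
for t in range(T):
    x = rng.normal(size=d)               # observed context
    if rng.random() < eps:
        a = int(rng.integers(K))         # explore
    else:                                # exploit current reward estimates
        a = int(np.argmax([np.linalg.solve(A[k], b[k]) @ x for k in range(K)]))
    r = theta[a] @ x + rng.normal(scale=0.1)   # reward seen only for action a
    A[a] += np.outer(x, x)
    b[a] += r * x
    total += r
print(f"average reward: {total / T:.3f}")
```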
Automatic device driver synthesis with Termite
Faulty device drivers cause significant damage through down time and data loss. The problem can be mitigated by an improved driver development process that guarantees correctness by construction. We achieve this by synthesising drivers automatically from formal specifications of device interfaces, thus reducing the impact of human error on driver reliability and potentially cutting down on development costs. We present a concrete driver synthesis approach and tool called Termite. We discuss the methodology, the technical and practical limitations of driver synthesis, and provide an evaluation of non-trivial drivers for Linux, generated using our tool. We show that the performance of the generated drivers is on par with the equivalent manually developed drivers. Furthermore, we demonstrate that device specifications can be reused across different operating systems by generating a driver for FreeBSD from the same specification as used for Linux.
MuxViz: a tool for multilayer analysis and visualization of networks
Multilayer relationships among entities and information about entities must be accompanied by the means to analyse, visualize and obtain insights from such data. We present open-source software (muxViz) that contains a collection of algorithms for the analysis of multilayer networks, which are an important way to represent a large variety of complex systems throughout science and engineering. We demonstrate the ability of muxViz to analyse and interactively visualize multilayer data using empirical genetic, neuronal and transportation networks. Our software is available at https://github.com/manlius/muxViz.
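One standard representation that such multilayer tools operate on is the supra-adjacency matrix: intra-layer adjacency blocks on the diagonal plus an inter-layer coupling between replicas of the same node. A minimal sketch with made-up layers (this illustrates the data structure, not muxViz's own code):

```python
# Supra-adjacency sketch for a node-aligned multilayer network.
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """layers: list of (n x n) adjacency matrices over the same node set."""
    L, n = len(layers), layers[0].shape[0]
    S = np.zeros((L * n, L * n))
    for i, A in enumerate(layers):
        S[i*n:(i+1)*n, i*n:(i+1)*n] = A                      # intra-layer edges
    for i in range(L):
        for j in range(L):
            if i != j:
                S[i*n:(i+1)*n, j*n:(j+1)*n] = omega * np.eye(n)  # replica coupling
    return S

A1 = np.array([[0, 1], [1, 0]])   # layer 1: nodes 0-1 connected
A2 = np.array([[0, 0], [0, 0]])   # layer 2: no edges
print(supra_adjacency([A1, A2], omega=0.5))
```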
Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities for Modeling and Simulation Communities
Grids and peer-to-peer (P2P) networks have emerged as popular platforms for next-generation parallel and distributed computing. In these environments, resources are geographically distributed, managed and owned by various organizations with different policies, and interconnected by wide-area networks or the Internet. This introduces a number of resource management and application scheduling challenges in the domains of security, resource and policy heterogeneity, fault tolerance, and dynamic resource conditions. In these dynamic distributed computing environments, it is challenging to carry out resource management design studies in a repeatable and controlled manner, as resources and users are autonomous and distributed across multiple organizations with their own policies. Therefore, simulation has emerged as the most feasible technique for analyzing policies for resource allocation. This paper presents emerging trends in distributed computing and their promise for revolutionizing the computing field, and identifies distinct characteristics and challenges in building such systems. We motivate opportunities for the modeling and simulation communities and present our discrete-event grid simulation toolkit, called GridSim, used by researchers worldwide for investigating the design of utility-oriented computing systems such as data centers and Grids. We present various case studies on the use of GridSim in the modeling and simulation of business Grids, parallel application scheduling, workflow scheduling, and service pricing and revenue management.
Comprehensively and efficiently protecting the heap
The goal of this paper is to propose a scheme that provides comprehensive security protection for the heap. Heap vulnerabilities are increasingly being exploited for attacks on computer programs. In most implementations, the heap management library keeps the heap meta-data (heap structure information) and the application's heap data interleaved, and does not protect them from each other. Such implementations are inherently unsafe: vulnerabilities in the application can cause the heap library to perform unintended actions that achieve control-flow and non-control attacks. Unfortunately, current heap protection techniques are limited in that they make too many assumptions about how attacks will be performed, require new hardware support, or require too many changes to the software developers' toolchain. We propose Heap Server, a new solution that does not have these drawbacks. Through existing virtual memory and inter-process protection mechanisms, Heap Server prevents the heap meta-data from being illegally overwritten, and heap data from being meaningfully overwritten. We show that, through aggressive optimizations and parallelism, Heap Server protects the heap with nearly negligible performance overhead even on heap-intensive applications. We also verify the protection against several real-world exploits and attack kernels.
A Change in Our Way of Thinking.
This article has three parts: a context for the correctional education (CE) paradigm shift, a declaration of principles to articulate salient issues, and associated curricular impacts. Part One is about the stepped-up pace of change in the CE field, one aspect of a worldwide shift from a mechanistic and competitive orientation to a wholistic and cooperative one. This development is consistent with advice from some of the world's great modern thinkers, and with our aspirations for student learning. Our species is learning to change its way of thinking. The general trajectory of this change in CE parallels humanities and social science course content. Any learning content can be used as a vehicle for change when the learners are ready. Relevant CE changes also include professional and organizational improvements. The humanities are closely linked to reflection about the human condition, and introspection for clarity and personal development. Humanities content is the subject of attention by correctional educators, and the context for the second part of the article: a CE/humanities declaration of principles. Part Three focuses on curricula consistent with the worldwide shift and identified learning needs. The scope of our change aspirations suggests that "we are all in this together." CE students and faculty are both changing their way of thinking. Part One: Three Arenas for Change. After the first atomic bombs were dropped, Albert Einstein warned that it was time to change our way of thinking (Sagan, 1988, p. 6). If current thinking patterns were extended into the future, he said, life on the planet would end. Fritjof Capra studied the ways in which our thinking, as a species, is changing. He traced the development of a "mechanistic" world view during the last few centuries, with emphasis on scientific thinking and competition (militarism, coercion, bureaucratic manipulation, exploitation). By contrast, he cited the emergence of a new "wholistic" world view, characterized by an emphasis on humanistic concerns and cooperation (peace, environment, health, social relationships) (Capra, 1983). Teilhard proposed a new model of our planet, in layers. He labeled the original rocky Earth the "geosphere," the layer of life the "biosphere," and the thinking skin of the planet—mankind—the "noosphere" (noos is Greek for wisdom). Teilhard cited vast, quickening changes in the noosphere: "A new substance...of supreme importance...call it Homo progressivus...the man to whom the...future matters more than the present...[They] are easily recognizable...—all those possessed by the demon (or the angel) of Research...some...attraction...causes them to unite...[If you] take two [of these] men, in any gathering, [they] will gravitate instinctively towards one another in the crowd...[and] recognize one another...this meeting...is not confined to individuals...[of] the same origins...[There is no] racial, social or religious barrier...against this force of attraction...[A] radical process of differentiation...is taking place within the human mass...the spontaneous...separation of that which moves and rises from that which remains immobile...over the whole extent of the globe...[This] grand phenomenon...represents a new and...final division of Mankind, based...on belief in progress...What finally divides the men of today into two camps is...an attitude of mind..." (Teilhard, 1964, pp. 142-144). This planetary shift is operationalized in each of what Teilhard called "compartments."
In each nation, academic discipline, vocation, and social grouping, the planetary shift from the mechanistic and competitive world view to the wholistic and cooperative world view is being experienced. Our species is changing its attitude of mind, changing its way of thinking. In CE, the direction of this planetary shift is easily identified. CE past practice has focused on utilitarian, marketable knowledge for incarcerated students. The traditional "knowledge, skills, and attitudes" formula—in that priority order—has dominated, although it is clear that incarcerated students would be better served by a reverse order (attitudes, skills, and knowledge). The behaviorism which dominates CE (the diagnostic/prescriptive method) and corrections (the medical model) corresponds with the old mechanistic world view. CE research is dominated by mechanistically oriented "number crunching" methodologies, and a bias that supports past practice. The institutions in which we work are managed by coercion and manipulation, and the role of education is tenuous, at best. Correctional educators are caught between institutional missions that do not prioritize education, notions of loyalty that support the status quo, and isolation (from CE history/literature, and from colleagues at other locations). We frequently feel vulnerable and overwhelmed. This is natural for educators in anti-education institutions, with minimal support. Our current task is to improve opportunities for student learning, by improving CE. Many are working to reshape CE, consistent with the wholistic, cooperative global paradigm ("global perspectives, local applications"). The CE trend that parallels the
Complex PTSD, affect dysregulation, and borderline personality disorder
Complex PTSD (cPTSD) was formulated to include, in addition to the core PTSD symptoms, dysregulation in three psychobiological areas: (1) emotion processing, (2) self-organization (including bodily integrity), and (3) relational security. The overlap of diagnostic criteria for cPTSD and borderline personality disorder (BPD) raises questions about the scientific integrity and clinical utility of the cPTSD construct/diagnosis, as well as opportunities to achieve an increasingly nuanced understanding of the role of psychological trauma in BPD. We review clinical and scientific findings regarding comorbidity, clinical phenomenology and neurobiology of BPD, PTSD, and cPTSD, and the role of traumatic victimization (in general and specific to primary caregivers), dissociation, and affect dysregulation. Findings suggest that BPD may involve heterogeneity related to psychological trauma that includes, but extends beyond, comorbidity with PTSD and potentially involves childhood victimization-related dissociation and affect dysregulation consistent with cPTSD. Although BPD and cPTSD overlap substantially, it is unwarranted to conceptualize cPTSD either as a replacement for BPD, or simply as a sub-type of BPD. We conclude with implications for clinical practice and scientific research based on a better differentiated view of cPTSD, BPD and PTSD.
The evolutionary origins of modularity
A central biological question is how natural organisms are so evolvable (capable of quickly adapting to new environments). A key driver of evolvability is the widespread modularity of biological networks--their organization as functional, sparsely connected subunits--but there is no consensus regarding why modularity itself evolved. Although most hypotheses assume indirect selection for evolvability, here we demonstrate that the ubiquitous, direct selection pressure to reduce the cost of connections between network nodes causes the emergence of modular networks. Computational evolution experiments with selection pressures to maximize network performance and minimize connection costs yield networks that are significantly more modular and more evolvable than control experiments that only select for performance. These results will catalyse research in numerous disciplines, such as neuroscience and genetics, and enhance our ability to harness evolution for engineering purposes.
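The selection scheme can be sketched in a few lines: evolve candidate networks under a fitness that rewards performance and penalises the number of connections. The toy objective and all constants below are illustrative, not the paper's experimental tasks:

```python
# Toy sketch of selection for performance minus connection cost.
import random
random.seed(0)

N_EDGES, POP, GENS, LAM = 20, 50, 100, 0.05

def performance(genome):          # toy stand-in for a task objective
    target = [1, 0] * (N_EDGES // 2)
    return sum(g == t for g, t in zip(genome, target)) / N_EDGES

def fitness(genome):
    cost = sum(genome) / N_EDGES  # fraction of possible connections used
    return performance(genome) - LAM * cost

pop = [[random.randint(0, 1) for _ in range(N_EDGES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                              # truncation selection
    children = [[g ^ (random.random() < 0.05) for g in p]  # bit-flip mutation
                for p in parents]
    pop = parents + children

best = max(pop, key=fitness)
print(f"connections used: {sum(best)}/{N_EDGES}, perf: {performance(best):.2f}")
```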
Hippocampal-prefrontal input supports spatial encoding in working memory
Spatial working memory, the caching of behaviourally relevant spatial cues on a timescale of seconds, is a fundamental constituent of cognition. Although the prefrontal cortex and hippocampus are known to contribute jointly to successful spatial working memory, the anatomical pathway and temporal window for the interaction of these structures critical to spatial working memory has not yet been established. Here we find that direct hippocampal–prefrontal afferents are critical for encoding, but not for maintenance or retrieval, of spatial cues in mice. These cues are represented by the activity of individual prefrontal units in a manner that is dependent on hippocampal input only during the cue-encoding phase of a spatial working memory task. Successful encoding of these cues appears to be mediated by gamma-frequency synchrony between the two structures. These findings indicate a critical role for the direct hippocampal–prefrontal afferent pathway in the continuous updating of task-related spatial information during spatial working memory.
A pragmatic investigation into the effects of massage therapy on the self efficacy of multiple sclerosis clients.
OBJECTIVE This research was conducted to examine changes in self self-efficacy, (the perception/belief that one can competently cope with a challenging situation) in multiple sclerosis clients following a series of massage therapy treatments. METHOD This small practical trial investigated the effects of a pragmatic treatment protocol using a prospective randomized pretest posttest waitlist control design. Self-Efficacy scores were obtained before the first treatment, mid-treatment series, after the last treatment in the series, four weeks after the final treatment and again eight weeks after the final treatment had been received. INTERVENTION The intervention involved a series of weekly one hour therapeutic massage treatments conducted over eight weeks and a subsequent eight week follow up period. All treatments were delivered by supervised student therapists in the final term of their two year massage therapy program. OUTCOME MEASURES Self-Efficacy [SE] was the outcome for the study, measured using the Multiple Sclerosis Self-Efficacy survey [MSSE]. Descriptive statistics for SE scores were assessed and inferential analysis involved the testing of between group differences at each of the measurement points noted above. RESULTS Statistically significant improvement in self-efficacy was noted between treatment (n = 8) and control (n = 7) groups at mid treatment series (t = 2.32; p < 0.02), post treatment series (t = 1.81; p < 0.05) and at four week follow up (t = 2.24; p < 0.02). At the eight week follow up self-efficacy scores had decreased and there was no statistically significant difference between groups (t = 0.87; p < 0.2). CONCLUSION Study results support previous findings indicating that massage therapy increases the self-efficacy of clients with multiple sclerosis, potentially resulting in a better overall adjustment to the disease and an improvement in psycho-emotional state. The increase in self-efficacy after 4 weeks of treatment suggests that positive response occurs more rapidly that was previously demonstrated. The improvement in self-efficacy endured 4 weeks after the end of the treatment series, which suggests that massage therapy may have longer term effects on self-efficacy that were not previously noted. Lack of inter group difference at the eight week follow up reinforces the notion that on-going treatment is required in order to maintain the positive changes observed.
Identification, Diagnosis, and Control of a Flexible Robot Arm
The most important factors in manufacturing are quality, cost, and productivity. The trend is towards lighter robots with increased mechanical flexibilities, and therefore there is a need to include the flexibilities in the robot models to obtain good performance of the robot. The core theme in this thesis is modeling and identification of the physical parameters of an ABB IRB 1400 industrial robot. The approximation made is that the robot arm can be described using a finite number of masses connected by springs and dampers. It has been found that a three-mass model gives a reasonably good description of the robot when moving around axis one. The physical parameters of this model are identified using off-line and on-line algorithms. The algorithms are based on prediction error methods. For the off-line identification the MATLAB System Identification Toolbox is used. For the on-line identification the algorithm used is a modified version of a recursive prediction error method to cope with continuous time models. The models are then used in diagnosis and control. Two ways of doing diagnosis using on-line identification are investigated. Estimating some of the physical parameters of the robot arm recursively makes it possible to monitor important aspects of the system such as friction and load. LQG control of the flexible robot arm is also studied with the aim of good disturbance rejection. Aspects that have been studied are unstable regulators and the use of accelerometers.
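A minimal sketch of the grey-box identification idea, reduced from the thesis's three-mass model to a single mass-spring-damper: simulate the dynamics, then recover the physical parameter ratios by least squares on the measured acceleration. All numbers are invented:

```python
# Grey-box identification sketch on a simulated mass-spring-damper.
import numpy as np

m, k, c, dt = 2.0, 50.0, 1.5, 0.001          # true mass, stiffness, damping
rng = np.random.default_rng(0)
x, v = 1.0, 0.0
X, Y = [], []
for _ in range(5000):
    a = (-k * x - c * v) / m                 # acceleration from the dynamics
    X.append([x, v])                         # regressors: position, velocity
    Y.append(a + rng.normal(scale=0.01))     # noisy "measured" acceleration
    v += a * dt
    x += v * dt                              # semi-implicit Euler step

coef, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
k_over_m, c_over_m = -coef                   # model: a = -(k/m)x - (c/m)v
print(f"estimated k/m = {k_over_m:.2f} (true {k/m}), c/m = {c_over_m:.2f} (true {c/m})")
```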
Randomized Pharmacokinetic Study Comparing Subcutaneous and Intravenous Palonosetron in Cancer Patients Treated with Platinum Based Chemotherapy
BACKGROUND Palonosetron is a potent second-generation 5-hydroxytryptamine-3 selective antagonist which can be administered by either intravenous (IV) or oral routes, but subcutaneous (SC) administration of palonosetron has never been studied, even though it could have useful clinical applications. In this study, we evaluate the bioavailability of SC palonosetron. PATIENTS AND METHODS Patients treated with platinum-based chemotherapy were randomized to receive SC or IV palonosetron, followed by the alternative route in a crossover manner, during the first two cycles of chemotherapy. Blood samples were collected at baseline and 10, 15, 30, 45, 60 and 90 minutes and 2, 3, 4, 6, 8, 12 and 24 h after palonosetron administration. Urine was collected for 12 hours following palonosetron administration. We compared pharmacokinetic parameters including AUC0-24h, t1/2, and Cmax observed with each route of administration by analysis of variance (ANOVA). RESULTS From October 2009 to July 2010, 25 evaluable patients were included. AUC0-24h for IV and SC palonosetron were respectively 14.1 and 12.7 ng × h/ml (p=0.160). Bioavailability of SC palonosetron was 118% (95% CI: 69-168). Cmax was lower with the SC than with the IV route and was reached 15 minutes following SC administration. CONCLUSIONS Palonosetron bioavailability was similar when administered by either the SC or IV route. This new route of administration might be especially useful for outpatient management of emesis and for administration of oral chemotherapy. TRIAL REGISTRATION ClinicalTrials.gov NCT01046240.
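For reference, AUC0-24h is obtained from such a sparse sampling schedule by the trapezoidal rule; the sketch below uses the study's time points with hypothetical concentration values:

```python
# Trapezoidal AUC(0-24h) from sparse samples; concentrations are made up.
t = [0, 0.25, 0.5, 1, 2, 4, 8, 12, 24]            # hours post-dose
c = [0.0, 2.1, 1.8, 1.5, 1.2, 0.9, 0.6, 0.4, 0.2] # ng/ml (hypothetical)

auc = sum((t[i+1] - t[i]) * (c[i+1] + c[i]) / 2 for i in range(len(t) - 1))
print(f"AUC0-24h = {auc:.1f} ng*h/ml")
```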
Testing the reliability of frontal sinuses in positive identification.
The use of frontal sinus radiographs in positive identification has become an increasingly applied and accepted technique among forensic anthropologists, radiologists, and pathologists. From an evidentiary standpoint, however, it is important to know whether frontal sinus radiographs are a reliable method for confirming or rejecting an identification, and standardized methods should be applied when making comparisons. The purpose of the following study is to develop an objective, standardized comparison method, and investigate the reliability of that method. Elliptic Fourier analysis (EFA) was used to assess the variation in 808 outlines of frontal sinuses by calculating likelihood ratios and posterior probabilities from EFA coefficients. Results show that using EFA coefficient comparison to estimate the probability of a correct identification is a reliable technique, and EFA comparison of frontal sinus outlines is recommended when it may be necessary to provide quantitative substantiation for a forensic identification based on these structures.
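As a simplified stand-in for elliptic Fourier analysis (which fits per-harmonic a, b, c, d coefficients), the sketch below summarises a closed outline by low-order Fourier coefficients of its complex boundary coordinates, normalised for translation and scale, and compares two outlines by coefficient distance:

```python
# Fourier-descriptor sketch; a simplification of true elliptic Fourier analysis.
import numpy as np

def descriptors(xy, n_harmonics=10):
    z = xy[:, 0] + 1j * xy[:, 1]          # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                              # drop translation component
    F /= np.abs(F[1])                     # normalise scale by first harmonic
    return F[1 : n_harmonics + 1]

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
sinus_a = np.c_[np.cos(theta) * (1 + 0.2 * np.sin(3 * theta)), np.sin(theta)]
sinus_b = np.c_[1.05 * np.cos(theta), np.sin(theta)]   # a near-ellipse

d = np.linalg.norm(descriptors(sinus_a) - descriptors(sinus_b))
print(f"outline distance: {d:.3f}")   # smaller distance -> more similar outlines
```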
Compact and Broadband CB-CPW-to-SIW Transition Using Stepped-Impedance Resonator With 90$^{\circ}$-Bent Slot
In this paper, a compact and broadband conductor-backed coplanar waveguide (CB-CPW) to substrate integrated waveguide (SIW) transition using a stepped-impedance resonator (SIR) with a 90°-bent slot is proposed. The proposed transition can achieve a 36.8% 15-dB fractional bandwidth, which almost covers the S-band (2.6-3.95 GHz). Compared to the CB-CPW-to-SIW transition using the single-section quarter-wavelength transformer, transition using the SIR with the 90°-bent slot can increase the 15-dB fractional bandwidth from 23.44% to 36.8% and reduce the physical length from 14 to 7.2 mm. In order to verify the simulation results, back-to-back transitions are fabricated and measured, and the measurement results are in good agreement with those of simulation.
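For reference, the fractional bandwidth quoted above is computed relative to the band-centre frequency; the band edges below are hypothetical values chosen only to reproduce a figure of that order:

```python
# Fractional bandwidth FBW = (f2 - f1) / f0 for hypothetical 15-dB band edges.
f1, f2 = 2.72e9, 3.95e9          # hypothetical band edges (Hz)
f0 = (f1 + f2) / 2               # band-centre frequency
print(f"FBW = {100 * (f2 - f1) / f0:.1f} %")   # ~36.9 % for these edges
```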
Does "Fans Economy" Work for Chinese Pop Music Industry?
China has become one of the largest entertainment markets in the world in recent years. Due to the success of Xiaomi, many Chinese pop music industry entrepreneurs believe "Fans Economy" works in the pop music industry. "Fans Economy" is based on the assumption that pop music consumer market could be segmented based on artists. Each music artist has its own exclusive loyal fans. In this paper, we provide an insightful study of the pop music artists and fans social network. Particularly, we segment the pop music consumer market and pop music artists respectively. Our results show that due to the Matthew Effect and limited diversity of consumer market, "Fans Economy" does not work for the Chinese pop music industry.
Aroma profiles of five basil (Ocimum basilicum L.) cultivars grown under conventional and organic conditions
A headspace solid-phase microextraction (HS-SPME) method coupled to gas chromatography–ion trap mass spectrometry (GC– ITMS) has been developed and applied for profiling of volatile compounds released from five Ocimum basilicum L. cultivars grown under both organic and conventional conditions. Comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC GC–TOFMS) was employed for confirmation of identity of volatiles extracted from the basil headspace by SPME. Linalool, methyl chavicol, eugenol, bergamotene, and methyl cinnamate were the dominant volatile components, the relative content of which was found to enable differentiating between the cultivars examined. The relative content of some sesquiterpenes, hydrocarbons benzenoid compounds, and monoterpene hydrocarbons was lower in dried and frozen leaves as compared to fresh basil leaves. A sensory analysis of the all examined samples proved the differences between evaluated cultivars. 2007 Elsevier Ltd. All rights reserved.
Neuroanatomy for the Neuroscientist
The brain and spinal cord form the central nervous system. The brain is the part of the central nervous system that is housed in the cranium/skull. It consists of the brain stem, diencephalon, cerebellum, and cerebrum. At the foramen magnum, the highest cervical segment of the spinal cord is continuous with the lowest level of the medulla of the brain stem. The spinal nerves from the sacral, lumbar, thoracic, and cervical levels of the spinal cord form the lower part of the peripheral nervous system and record general sensations of pain, temperature touch, and pressure. The 12 cranial nerves attached to the brain form the upper part of the peripheral nervous system and record general sensations of pain, temperature touch, and pressure, but in addition we now find the presence of the special senses of smell, vision, hearing, balance, and taste. The blood supply to the brain originates from the first major arterial branches from the heart insuring that over 20% of the entire supply of oxygenated blood flows directly into the brain.
Image Splicing Localization Using A Multi-Task Fully Convolutional Network (MFCN)
In this work, we propose a technique that utilizes a fully convolutional network (FCN) to localize image splicing attacks. We first evaluated a single-task FCN (SFCN) trained only on the surface label. Although the SFCN is shown to provide superior performance over existing methods, it still provides a coarse localization output in certain cases. Therefore, we propose the use of a multi-task FCN (MFCN) that utilizes two output branches for multi-task learning. One branch is used to learn the surface label, while the other branch is used to learn the edge or boundary of the spliced region. We trained the networks using the CASIA v2.0 dataset, and tested the trained models on the CASIA v1.0, Columbia Uncompressed, Carvalho, and the DARPA/NIST Nimble Challenge 2016 SCI datasets. Experiments show that the SFCN and MFCN outperform existing splicing localization algorithms, and that the MFCN can achieve finer localization than the SFCN.
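A minimal two-branch network in the spirit of the MFCN, assuming PyTorch: a shared encoder feeds one head for the spliced-surface mask and one for the splice boundary, trained with a joint loss. Layer sizes are illustrative, not the paper's architecture:

```python
# Minimal two-branch FCN sketch (illustrative, not the paper's MFCN).
import torch
import torch.nn as nn

class TinyMFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.surface_head = nn.Conv2d(32, 1, 1)   # per-pixel spliced mask
        self.edge_head = nn.Conv2d(32, 1, 1)      # per-pixel splice boundary

    def forward(self, x):
        h = self.encoder(x)                       # shared features
        return self.surface_head(h), self.edge_head(h)

net = TinyMFCN()
surface, edge = net(torch.randn(1, 3, 64, 64))
# Joint multi-task loss trains both branches through the shared encoder:
loss = (nn.functional.binary_cross_entropy_with_logits(surface, torch.zeros_like(surface))
        + nn.functional.binary_cross_entropy_with_logits(edge, torch.zeros_like(edge)))
loss.backward()
print(surface.shape, edge.shape)
```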
A FAST ELITIST MULTIOBJECTIVE GENETIC ALGORITHM: NSGA-II
NSGA [5] is a popular non-domination-based genetic algorithm for multiobjective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, lack of elitism, and the need to choose an optimal value for the sharing parameter σ_share. A modified version, NSGA-II [3], was developed, which has a better sorting algorithm, incorporates elitism, and requires no sharing parameter to be chosen a priori. NSGA-II is discussed in detail in this paper.
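The core of NSGA-II's non-dominated sort can be sketched as follows, for a minimisation problem; this is a deliberately simple O(n²)-per-front version that omits the crowding-distance and elitism machinery:

```python
# Simple non-dominated sorting sketch (minimisation of all objectives).
def dominates(p, q):
    """p dominates q: no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_sort(points):
    fronts, remaining = [], list(points)
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

pop = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]   # objective vectors
for i, front in enumerate(non_dominated_sort(pop), 1):
    print(f"front {i}: {front}")
```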
Recent progress in road and lane detection: a survey
The problem of road or lane perception is a crucial enabler for advanced driver assistance systems. As such, it has been an active field of research for the past two decades with considerable progress made in the past few years. The problem was confronted under various scenarios, with different task definitions, leading to usage of diverse sensing modalities and approaches. In this paper we survey the approaches and the algorithmic techniques devised for the various modalities over the last 5 years. We present a generic break down of the problem into its functional building blocks and elaborate the wide range of proposed methods within this scheme. For each functional block, we describe the possible implementations suggested and analyze their underlying assumptions. While impressive advancements were demonstrated at limited scenarios, inspection into the needs of next generation systems reveals significant gaps. We identify these gaps and suggest research directions that may bridge them.
Teaching large scale data processing: the five-week course and two years' experiences
We have set up a new course on large-scale data processing using clusters. It introduces the concepts and design of distributed systems. Many newly developed ideas, such as the Google File System and the MapReduce programming framework for processing large-scale data sets, are introduced. Students gain practical experience with distributed programming technologies via several small labs and one large multi-week final project. Labs and projects are completed using Hadoop, an open-source implementation of Google's distributed file system and MapReduce programming model. We have taught this class, named "Mass Data Processing Technology on Large Scale Clusters", for two years. This paper describes the design and delivery of the course as well as the experiences and lessons learned.
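The canonical first lab in such a course is word count in the MapReduce style: the mapper emits key-value pairs, the framework sorts them by key, and the reducer aggregates per key. The sketch below simulates the shuffle locally (a standalone illustration, not the course's actual lab code):

```python
# Local simulation of map -> shuffle/sort -> reduce for word count.
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            yield word.lower(), 1            # emit (word, 1) per occurrence

def reducer(pairs):                          # pairs must arrive sorted by key
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

text = ["the quick brown fox", "the lazy dog"]
shuffled = sorted(mapper(text))              # stand-in for Hadoop's shuffle/sort
for word, total in reducer(shuffled):
    print(word, total)
```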
Why do we care? Notes from the periphery.
the case of alcohol dependence, using a range of characteristics, but—with one exception—without data on outcomes, thereby limiting their clinical usefulness [12]. Similar research on subtypes of illicit drug abuse is scarce, e.g. on cocaine dependence [13]. The deficit in identifying symptom clusters (as a basis for typology) within the DSM-V criteria for substance abuse disorders continues to be a handicap for clinical work. This could be part of a future research agenda [14,15]. What can be expected from the proposed changes? European drug policy has a strong focus on all problematic forms of substance use besides dependent use, and prevention as well as treatment have developed interventions tailored to the type of disorder, differentiating harmful and dependent use. In research, the proposed change will impede comparability of earlier findings. All together, the changes might affect the present preference for this instrument.
The non-convex Burer-Monteiro approach works on smooth semidefinite programs
Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDPs in that class almost never has any spurious local optima. This paper was corrected on April 9, 2018. Theorems 2 and 4 had the assumption that M is a manifold. From this assumption it was stated that T_Y M = {Ẏ ∈ R^{n×p} : A(ẎY^T + YẎ^T) = 0}, which is not true in general. To ensure this identity, the theorems now make the stronger assumption that the gradients of the constraints A(YY^T) = b are linearly independent for all Y in M. All examples treated in the paper satisfy this assumption. Appendix D gives details.
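The Burer-Monteiro idea is easy to sketch on the max-cut SDP: factor X = YY^T with Y ∈ R^{n×p}, keep the constraint diag(YY^T) = 1 by normalising the rows of Y, and locally optimise the resulting non-convex problem. This illustrates the formulation, not a robust solver:

```python
# Burer-Monteiro sketch on max-cut: projected gradient ascent on <C, YY^T>.
import numpy as np

rng = np.random.default_rng(0)
n, p, steps, lr = 20, 3, 500, 0.05
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)  # random graph
C = -W                                   # maximising the cut <-> minimising <W, YY^T>

Y = rng.normal(size=(n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)        # diag(YY^T) = 1
for _ in range(steps):
    grad = 2 * C @ Y                     # gradient of <C, YY^T> in Y
    Y += lr * grad                       # ascent step
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # re-project rows

objective = np.trace(C @ Y @ Y.T)
print(f"<C, YY^T> = {objective:.3f}")    # rounding Y (random hyperplane) gives a cut
```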
Breast screening using 2D-mammography or integrating digital breast tomosynthesis (3D-mammography) for single-reading or double-reading--evidence to guide future screening strategies.
PURPOSE We compared detection measures for breast screening strategies comprising single-reading or double-reading using standard 2D-mammography or 2D/3D-mammography, based on the 'screening with tomosynthesis or standard mammography' (STORM) trial. METHODS STORM prospectively examined screen-reading in two sequential phases, 2D-mammography alone and integrated 2D/3D-mammography, in asymptomatic women participating in Trento and Verona (Northern Italy) population-based screening services. Outcomes were ascertained from assessment and/or excision histology or follow-up. For each screen-reading strategy we calculated the number of detected and non-detected (including interval) cancers, cancer detection rates (CDRs), false positive recall (FPR) measures and incremental CDR relative to a comparator strategy. We estimated the false:true positive (FP:TP) ratio and sensitivity of each mammography screening strategy. Paired binary data were compared using McNemar's test. RESULTS Amongst 7292 screening participants, there were 65 (including six interval) breast cancers; the estimated first-year interval cancer rate was 0.82/1000 screens (95% CI: 0.30 to 1.79/1000). For single-reading, 35 cancers were detected at both 2D and 2D/3D-mammography, 20 cancers were detected only with 2D/3D-mammography compared with none at 2D-mammography alone (p<0.001) and 10 cancers were not detected. For double-reading, 39 cancers were detected at 2D-mammography and 2D/3D-mammography, 20 were detected only with 2D/3D-mammography compared with none detected at 2D-mammography alone (p<0.001) and six cancers were not detected. The incremental CDR attributable to 2D/3D-mammography (versus 2D-mammography) of 2.7/1000 screens (95% CI: 1.6 to 4.2) was evident for single and for double-reading. The incremental CDR attributable to double-reading (versus single-reading) of 0.55/1000 screens (95% CI: −0.02 to 1.4) was evident for 2D-mammography and for 2D/3D-mammography. Estimated FP:TP ratios showed that 2D/3D-mammography screening strategies had a more favourable FP to TP trade-off and higher sensitivity, applying single-reading or double-reading, relative to 2D-mammography screening. CONCLUSION The evidence we report warrants rethinking of breast screening strategies and should be used to inform future evaluations of 2D/3D-mammography that assess whether or not the estimated incremental detection translates into improved screening outcomes such as a reduction in interval cancer rates.
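The paired comparisons above use McNemar's test, which depends only on the discordant counts; a minimal exact version, evaluated on the 20-versus-0 discordant detections from the single-reading comparison in the abstract:

```python
# Exact McNemar test from discordant counts b and c (binomial tail).
from math import comb

def mcnemar_exact(b, c):
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)            # two-sided p-value

print(mcnemar_exact(20, 0))              # ~1.9e-06, i.e. p < 0.001
```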
Specific muscle stabilizing as home exercises for persistent pelvic girdle pain after pregnancy: a randomized, controlled clinical trial.
OBJECTIVE To investigate the efficacy of home-based specific stabilizing exercises focusing on the local stabilizing muscles as the only intervention in the treatment of persistent postpartum pelvic girdle pain. DESIGN A prospective, randomized, single-blinded, clinically controlled study. SUBJECTS Eighty-eight women with pelvic girdle pain were recruited 3 months after delivery. METHODS The treatment consisted of specific stabilizing exercises targeting the local trunk muscles. The reference group had a single telephone contact with a physiotherapist. Primary outcome was disability measured with Oswestry Disability Index. Secondary outcomes were pain, health-related quality of life (EQ-5D), symptom satisfaction, and muscle function. RESULTS No significant differences between groups could be found at 3- or 6-month follow-up regarding primary outcome in disability. Within-group comparisons showed some improvement in both groups in terms of disability, pain, symptom satisfaction and muscle function compared with baseline, although the majority still experienced pelvic girdle pain. CONCLUSION Treatment with this home-training concept of specific stabilizing exercises targeting the local muscles was no more effective in improving consequences of persistent postpartum pelvic girdle pain than the clinically natural course. Regardless of whether treatment with specific stabilizing exercises was carried out, the majority of women still experienced some back pain almost one year after pregnancy.
The relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile value-added services
The purposes of this study are to construct an instrument to evaluate the service quality of mobile value-added services and to further discuss the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model proves effective in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile value-added services.
Adaptive learning framework
In this paper we propose a new adaptive learning framework that classifies learners based on their individual preferences for understanding and processing information. The framework infers a learner's learning style based on the Felder-Silverman learning style model and suggests learning content based on that style. The paper outlines how the system allows instructors to monitor learners' learning styles and how learning content is suggested based on the inferred style. The framework supports both static and automatic student modelling approaches. In this sense, the adaptive learning framework resolves the limitations of traditional learning management systems, thereby improving the learner's learning process.
German women’s writing of the Eighteenth and Nineteenth Centuries: future directions in feminist criticism
Book synopsis: German women writers of the eighteenth and nineteenth centuries have been the subject of feminist literary critical and historical studies for around thirty years. This volume, with contributions from an international group of scholars, takes stock of what feminist literary criticism has achieved in that time and reflects on future trends in the field. Offering both theoretical perspectives and individual case studies, the contributors grapple with the difficulties of appraising ‘non-feminist’ women writers and genres from a feminist perspective and present innovative approaches to research in early women’s writing. This inclusive and cross-disciplinary collection of essays will enrich the study of German women’s writing of the eighteenth and nineteenth centuries and contribute to contemporary debates in feminist literary criticism.
Evaluation of neural network architectures for embedded systems
Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilization of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.
Software Testing Research: Achievements, Challenges, Dreams
Software engineering encompasses several disciplines devoted to preventing and remedying malfunctions and to ensuring adequate behaviour. Testing, the subject of this paper, is a widespread validation approach in industry, but it is still largely ad hoc, expensive, and unpredictably effective. Indeed, software testing is a broad term encompassing a variety of activities along the development cycle and beyond, aimed at different goals. Hence, software testing research faces a collection of challenges. A consistent roadmap of the most relevant challenges to be addressed is proposed here. Its starting point is a set of important past achievements, while its destination consists of four identified goals to which research ultimately tends, but which remain as unreachable as dreams. The routes from the achievements to the dreams are paved by the outstanding research challenges, which are discussed in the paper along with interesting ongoing work.
A 7.663-TOPS 8.2-W Energy-efficient FPGA Accelerator for Binary Convolutional Neural Networks
FPGA-based hardware accelerators for convolutional neural networks (CNNs) have attracted great attention due to their higher energy efficiency compared with GPUs. However, it has been a challenge for FPGA-based solutions to achieve a higher throughput than their GPU counterparts. In this paper, we demonstrate that FPGA acceleration can be a superior solution in terms of both throughput and energy efficiency when a CNN is trained with binary constraints on weights and activations. Specifically, we propose an optimized accelerator architecture tailored for bitwise convolution and normalization that features massive spatial parallelism with deep pipeline (temporal parallelism) stages. Experimental results show that the proposed architecture running at 90 MHz on a Xilinx Virtex-7 FPGA achieves a computing throughput of 7.663 TOPS with a power consumption of 8.2 W regardless of the batch size of the input data. This is 8.3x faster and 75x more energy-efficient than a Titan X GPU for processing online individual requests (in small batch sizes). For processing static data (in large batch sizes), the proposed solution is on a par with a Titan X GPU in terms of throughput while delivering 9.5x higher energy efficiency.
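The bitwise convolution mentioned above rests on a standard identity for binary networks: with weights and activations constrained to {-1, +1}, a dot product becomes an XNOR followed by a population count, which is what makes the workload so FPGA-friendly. The sketch below is a plain-Python illustration of that identity, not the paper's hardware design.

```python
# Toy XNOR/popcount dot product for {-1,+1} vectors packed as bit masks
# (bit=1 encodes +1, bit=0 encodes -1). Illustrative only.
import numpy as np

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agreements minus disagreements

# Check against the ordinary arithmetic dot product.
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=16)
b = rng.choice([-1, 1], size=16)
a_bits = sum(1 << i for i, v in enumerate(a) if v == 1)
b_bits = sum(1 << i for i, v in enumerate(b) if v == 1)
assert binary_dot(a_bits, b_bits, 16) == int(a @ b)
```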
Lighting the path
Leadership comes naturally to Wanda Reder, vice president of the Power Systems Services Division of S&C Electric Company, and the president of the IEEE's Power & Energy Society (PES). Reder grew up on a ranch in a small town in western South Dakota and didn't really know what an engineer was. Though her father was mechanically oriented, she says her freshman high school algebra teacher put her on the engineering path. "My teacher said to me, 'If I had to do it over again, I would.' And so that was the beginning. I knew I was going into engineering."
Are acupoints specific for diseases? A systematic review of the randomized controlled trials with sham acupuncture controls
BACKGROUND The results of many clinical trials and experimental studies regarding acupoint specificity are contradictory. This review aims to investigate whether a difference in efficacy exists between ordinary acupuncture on specific acupoints and sham acupuncture controls on non-acupoints or on irrelevant acupoints. METHODS Databases including Medline, Embase, AMED and Chinese Biomedical Database were searched to identify randomized controlled trials published between 1998 and 2009 that compared traditional body acupuncture on acupoints with sham acupuncture controls on irrelevant acupoints or non-acupoints with the same needling depth. The Cochrane Collaboration's tool for assessing risk of bias was employed to address the quality of the included trials. RESULTS Twelve acupuncture clinical trials with sham acupuncture controls were identified and included in the review. The conditions treated varied. Half of the included trials had positive results on the primary outcomes and demonstrated acupoint specificity. However, among those six trials (total sample size: 985) with low risk of bias, five trials (sample size: 940) showed no statistically significant difference between proper and sham acupuncture treatments. CONCLUSION This review did not demonstrate the existence of acupoint specificity. Further clinical trials with larger sample sizes, optimal acupuncture treatment protocols and appropriate sham acupuncture controls are required to resolve this important issue.
Robotic Grasping and Contact: A Review
In this paper, we survey the field of robotic grasping and the work that has been done in this area over the last two decades, with a slight bias toward the development of the theoretical framework and analytical results.
Abandoning Objectives: Evolution Through the Search for Novelty Alone
In evolutionary computation, the fitness function normally measures progress toward an objective in the search space, effectively acting as an objective function. Through deception, such objective functions may actually prevent the objective from being reached. While methods exist to mitigate deception, they leave the underlying pathology untreated: Objective functions themselves may actively misdirect search toward dead ends. This paper proposes an approach to circumventing deception that also yields a new perspective on open-ended evolution. Instead of either explicitly seeking an objective or modeling natural evolution to capture open-endedness, the idea is to simply search for behavioral novelty. Even in an objective-based problem, such novelty search ignores the objective. Because many points in the search space collapse to a single behavior, the search for novelty is often feasible. Furthermore, because there are only so many simple behaviors, the search for novelty leads to increasing complexity. By decoupling open-ended search from artificial life worlds, the search for novelty is applicable to real world problems. Counterintuitively, in the maze navigation and biped walking tasks in this paper, novelty search significantly outperforms objective-based search, suggesting the strange conclusion that some problems are best solved by methods that ignore the objective. The main lesson is the inherent limitation of the objective-based paradigm and the unexploited opportunity to guide search through other means.
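The behavioural novelty described above is typically scored as sparseness in behaviour space: an individual is novel if its behaviour lies far from those of the current population and an archive of past novel individuals. A minimal sketch follows, with the distance metric and k = 15 as illustrative, problem-dependent choices.

```python
# Novelty as mean distance to the k nearest neighbours in behaviour space.
import numpy as np

def novelty(behavior, population_behaviors, archive, k=15):
    pool = np.vstack([population_behaviors, archive]) if len(archive) \
        else np.asarray(population_behaviors)
    dists = np.linalg.norm(pool - behavior, axis=1)  # distance to every known behaviour
    nearest = np.sort(dists)[:k]                     # k closest (the individual's own
    return nearest.mean()                            # zero distance can be excluded in practice)
```

Individuals in sparse regions of behaviour space score high and are retained in the archive, which is what drives the search toward ever-new behaviours rather than toward the objective.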
On the Feasibility of Predicting News Popularity at Cold Start
We perform a study on cold-start news popularity prediction using a collection of 13,319 news articles obtained from Yahoo News. We characterise the online popularity of news articles by two different metrics and try to predict them using machine learning techniques. Contrary to a prior work on the same topic, our findings indicate that predicting the news popularity at cold start is a difficult task and the previously published results may be superficial.
Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes
The detection of spatially-varying blur without having any information about the blur type is a challenging task. In this paper, we propose a novel effective approach to address this blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Our approach computes blur detection maps based on a novel High-frequency multiscale Fusion and Sort Transform (HiFST) of gradient magnitudes. The evaluations of the proposed approach on a diverse set of blurry images with different blur types, levels, and contents demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods qualitatively and quantitatively.
Design and Implementation of Power Converters for Hybrid Wind-Solar Energy Conversion System with an Implementation of MPPT
This paper presents the design and implementation of power converters for wind energy conversion systems. The power converter not only transfers power from the wind generator but also improves the stability and safety of the system. The proposed system consists of a permanent-magnet synchronous generator (PMSG), a DC/DC boost converter, a bi-directional DC/DC converter and a full-bridge DC/AC inverter. The wind generator is the main power source of the system, and the battery is used for energy storage and power compensation to smooth out the natural irregularity of the wind power. The paper also presents a new configuration of the front-end rectifier stage for a hybrid wind/photovoltaic energy system. The configuration allows the two sources to supply the load separately or simultaneously, depending on the availability of the energy sources. Owing to the inherent nature of this fused Cuk-SEPIC converter, additional input filters are not necessary to filter out high-frequency harmonic content, which is a determinant for generator lifespan, heating and efficiency. The fused multi-input rectifier stage also allows maximum power to be extracted from the wind and the sun when available; an adaptive MPPT algorithm is used for the photovoltaic (PV) system. An operational analysis of the proposed system is discussed, and simulation results are given to highlight the merits of the proposed circuit. Index terms: wind generator, PV and fuel cells, bi-directional DC/DC converter, full-bridge DC/AC inverter, MPPT.
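The adaptive MPPT algorithm is named but not specified in the abstract; the sketch below shows only the baseline perturb-and-observe loop that adaptive variants commonly refine. The read_v, read_i and set_v hooks are hypothetical hardware interfaces, and the numeric defaults are illustrative.

```python
# Generic perturb-and-observe MPPT: perturb the PV operating voltage and
# keep moving in whichever direction increases output power. Sketch only.
def perturb_and_observe(read_v, read_i, set_v, v_ref=30.0, step=0.5, steps=1000):
    set_v(v_ref)
    p_prev = read_v() * read_i()
    for _ in range(steps):
        v_ref += step
        set_v(v_ref)
        p = read_v() * read_i()
        if p < p_prev:   # power fell, so reverse the perturbation direction
            step = -step
        p_prev = p
    return v_ref         # settles into oscillation around the maximum power point
```

An adaptive variant would typically shrink the step size near the maximum power point to reduce this steady-state oscillation.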
A deadlock prevention method for railway networks using monitors for colored Petri nets
The real-time traffic control of railway networks authorizes movements of the trains and imposes safety constraints. This paper deals with real-time traffic control, focusing on the deadlock prevention problem. Colored Petri nets are used to model the dynamics of the railway network system: places represent tracks and stations, and tokens are trains. The prevention policy is expressed by a set of linear inequality constraints, called colored Generalized Mutual Exclusion Constraints, that are enforced by adding appropriate monitor places. Using digraph tools, deadlock situations are characterized and a strategy is established to define off-line a set of Generalized Mutual Exclusion Constraints that prevent deadlock. An example shows in detail the design of the proposed control logic.
End-to-End Text Recognition with Hybrid HMM Maxout Models
The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.
Chinese Grammatical Error Diagnosis with Long Short-Term Memory Networks
Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors: redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRF-based model, an LSTM-based model and an ensemble model using stacking. We also show in detail how we build and train the models. Evaluation includes three levels: detection level, identification level and position level. On the CGED-HSK dataset of the NLP-TEA-3 shared task, our system presents the best F1-scores at all three levels and also the best recall at the last two levels.
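As a rough illustration of the sequence-labelling formulation, the sketch below tags each input position with one of five labels (O for correct plus the four error types R, M, S, W). It is a generic BiLSTM tagger written with PyTorch (assumed available), not the authors' CRF/LSTM/stacking ensemble, and all sizes are illustrative.

```python
# Minimal BiLSTM sequence tagger: one tag score vector per input position.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden=128, num_tags=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)   # {O, R, M, S, W}

    def forward(self, token_ids):                    # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))      # (batch, seq_len, 2*hidden)
        return self.out(h)                           # (batch, seq_len, num_tags)

model = BiLSTMTagger(vocab_size=5000)
scores = model(torch.randint(0, 5000, (2, 20)))
print(scores.shape)  # torch.Size([2, 20, 5])
```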
Slash(dot) and burn: distributed moderation in a large online conversation space
Can a system of distributed moderation quickly and consistently separate high and low quality comments in an online conversation? Analysis of the site Slashdot.org suggests that the answer is a qualified yes, but that important challenges remain for designers of such systems. Thousands of users act as moderators. Final scores for comments are reasonably dispersed and the community generally agrees that moderations are fair. On the other hand, much of a conversation can pass before the best and worst comments are identified. Of those moderations that were judged unfair, only about half were subsequently counterbalanced by a moderation in the other direction. And comments with low scores, not at top-level, or posted late in a conversation were more likely to be overlooked by moderators.
Image guided depth enhancement via deep fusion and local linear regularization
Depth maps captured by RGB-D cameras are often noisy and incomplete at edge regions. Most existing methods assume that there is a co-occurrence of edges in the depth map and its corresponding color image, and improve the quality of the depth map guided by the color image. However, when the color image is noisy or richly detailed, high-frequency artifacts will be introduced into the depth map. In this paper, we propose a deep residual network based on deep fusion and local linear regularization for guided depth enhancement. The presented scheme can effectively extract the correlation between the depth map and the color image in the deep feature space. To reduce the difficulty of training, a specific network layer that introduces a local linear regularization constraint on the output depth is designed. Experiments on various applications, including depth denoising, super-resolution and inpainting, demonstrate the effectiveness and reliability of our proposed approach.
A multi-centre, blinded international trial of the effect of A1 and A2 β-casein variants on diabetes incidence in two rodent models of spontaneous Type I diabetes
Aims/hypothesis. The diabetes-inducing potential of cows' milk is still debated and there is no consensus on the diabetogenicity of individual milk proteins. A1 β-casein has been associated with increased diabetes frequency in ecological studies and in NOD mice. Our aim was to ascertain whether A1 β-casein was more diabetogenic than A2 and to test the diabetogenicity of a milk-free diet in animals representing different forms of spontaneous Type I (insulin-dependent) diabetes mellitus. Methods. Defined diets were coded and shipped to laboratories in New Zealand (NOD/NZ), Canada (BB) and the UK (NOD/Ba). Base diets were Pregestimil (PG) and ProSobee (PS). Purified fractions of whole casein (WC), A1 or A2 β-casein were added at 10%. A milk-free, wheat-predominant, NTP-2000 diet was the control. Animals were fed from weaning up to 150 or 250 days, and insulitis, diabetes frequency and expression of pancreatic cytokines were assessed. Results. Diabetes incidence was highest in all three locations in animals fed NTP-2000. PG and PS diets were protective except for NOD/Ba mice fed PG+WC, where incidence was similar to NTP-2000. A1 and A2 diets were protective in both models, but A1 β-casein was slightly more diabetogenic in PS-fed BB rats. The New Zealand study was confounded by an infection. Conclusion/interpretation. A milk-free, wheat-predominant diet was highly diabetogenic in three widely separated locations in both animal models. A previous result that A1 β-casein was more diabetogenic than A2 β-casein in NOD mice was not confirmed; both β-casein variants were protective in BB rats and NOD mice. Whole casein promoted diabetes in NOD/Ba mice but protected BB rats, showing that unique diabetes haplotypes react differently to dietary proteins. A1 β-casein was more diabetogenic than A2 β-casein only in PS-fed BB rats. Neither the analysis of insulitis nor that of pancreatic cytokine gene expression showed a difference between animals fed A1 or A2 β-casein. Milk caseins are unlikely to be exclusive promoters of Type I diabetes, but could enhance the outcome of diabetes in some cases. Other diet components such as wheat could be more important promoters of Type I diabetes.
From Shapeshifter to Lava Monster: Gender Stereotypes in Disney's Moana
Moana (2016) continues a tradition of Disney princess movies that perpetuate gender stereotypes. The movie contains the usual Electral undercurrent, with Moana seeking to prove her independence to her overprotective father. Moana’s partner in her adventures, Maui, is overtly hypermasculine, a trait epitomized by a phallic fishhook that is critical to his identity. Maui’s struggles with shapeshifting also reflect male anxieties about performing masculinity. Maui violates the Mother Island, first by entering her cave and then by using his fishhook to rob her of her fertility. The repercussions of this act are the basis of the plot: the Mother Island abandons her form as a nurturing, youthful female (Te Fiti) focused on creation to become a vengeful lava monster (Te Kā). At the end, Moana successfully urges Te Kā to get in touch with her true self, a brave but simple act that is sufficient to bring back Te Fiti, a passive, smiling green goddess. The association of youthful, fertile females with good and witch-like infertile females with evil implies that women’s worth and well-being are dependent upon their procreative function. Stereotypical gender tropes that also include female abuse of power and a narrow conception of masculinity merit analysis in order to further progress in recognizing and addressing patterns of gender hegemony in popular Disney films.
Sustainability - a new dimension in information systems evaluation
The paper introduces new dimensions of Information System evaluation. Sustainability issues have increasing importance and their influence is compared with the Internet revolution. Customers, policymakers and business partners increasingly require the monitoring and reporting of the impact of an organisation on sustainability. However, traditional IS evaluation approaches are not able to capture the impact of IT/IS on sustainability, especially in relation to social and environmental dimensions. In order to stimulate discussion and research, the authors propose a framework that was originally developed to focus on supply chain issues. The framework is built upon three dimensions: economic, social and environmental, which are divided further into three sub-dimensions. It was created to evaluate supply chain practices, but it can be used as a starting point to develop a framework for sustainability-oriented IS evaluation. In the paper the sustainability drivers are presented, and the problems related to IT/IS evaluation are discussed using selected examples from IT/IS applications in supply chain management. IT/IS solutions used in supply chains are perceived as having both positive and negative effects on sustainability. Moreover, some business models supported by IT/IS that are perceived as positive for supply chain processes could be considered negative after taking into consideration their external effects on sustainability. For example, Just-In-Time reduces inventory and costs, but it increases road transport and the associated CO2 emissions, as well as potentially increasing road congestion and accidents.
Wild patterns: Ten years after the rise of adversarial machine learning
Learning-based pattern classifiers, including deep networks, have shown impressive performance in several application domains, ranging from computer vision to cybersecurity. However, it has also been shown that adversarial input perturbations carefully crafted either at training or at test time can easily subvert their predictions. The vulnerability of machine learning to such wild patterns (also referred to as adversarial examples), along with the design of suitable countermeasures, have been investigated in the research field of adversarial machine learning. In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed to understand the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks. We report interesting connections between these apparently-different lines of work, highlighting common misconceptions related to the security evaluation of machine-learning algorithms. We review the main threat models and attacks defined to this end, and discuss the main limitations of current work, along with the corresponding future challenges towards the design of more secure learning algorithms.
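For context, the carefully crafted test-time perturbations discussed above can be as simple as a one-step gradient attack. The sketch below is the classic fast gradient sign method (FGSM), written with PyTorch (assumed available); it is a generic illustration of an evasion attack, not any specific attack from the survey.

```python
# One-step FGSM evasion attack: move each input a small step in the
# direction that most increases the classifier's loss. Illustrative only;
# assumes inputs normalized to [0, 1] and a differentiable model.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```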
Scalable network traffic visualization using compressed graphs
The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.
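The abstract does not spell out SEG beyond its name, but structural equivalence is conventionally defined as having identical neighbour sets, so a hedged reading of the core idea is: merge all nodes that talk to exactly the same peers into one super-node. That operation is linear in the size of the graph, consistent with the complexity claim above; the paper's actual SEG may differ in detail.

```python
# Minimal structural-equivalence grouping sketch: nodes with identical
# neighbour sets share a group and can be drawn as one super-node.
from collections import defaultdict

def structural_equivalence_groups(adj):
    """adj: dict mapping node -> set of neighbours."""
    groups = defaultdict(list)
    for node, neighbours in adj.items():
        groups[frozenset(neighbours)].append(node)   # same neighbours -> same group
    return list(groups.values())

# Three hosts that only talk to one server collapse into a single group.
adj = {"h1": {"srv"}, "h2": {"srv"}, "h3": {"srv"}, "srv": {"h1", "h2", "h3"}}
print(structural_equivalence_groups(adj))  # [['h1', 'h2', 'h3'], ['srv']]
```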
Evaluation of a Dry EEG System for Application of Passive Brain-Computer Interfaces in Autonomous Driving
We tested the applicability and signal quality of a 16-channel dry electroencephalography (EEG) system in a laboratory environment and in a car under controlled, realistic conditions. The aim of our investigation was to estimate how well a passive Brain-Computer Interface (pBCI) can work in an autonomous driving scenario. The evaluation considered the speed and accuracy of self-applicability by an untrained person, the quality of the recorded EEG data, shifts of electrode positions on the head after driving-related movements, the usability and complexity of the system as such, and wearing comfort over time. An experiment was conducted inside and outside of a stationary vehicle with running engine, air-conditioning, and muted radio. Signal quality was sufficient for standard EEG analysis in the time and frequency domains as well as for use in pBCIs. While the influence of vehicle-induced interference on data quality was insignificant, driving-related movements led to strong shifts in electrode positions. In general, the EEG system used allowed for fast self-application of cap and electrodes. The assessed usability of the system was still acceptable, while the wearing comfort decreased strongly over time due to friction and pressure on the head. From these results we conclude that the evaluated system should provide the essential requirements for an application in an autonomous driving context. Nevertheless, further refinement is suggested to reduce shifts of the system due to body movements and to increase the headset's usability and wearing comfort.
Cooperative Control of Distributed Multi-Agent Systems
Evaluation on State of Charge Estimation of Batteries With Adaptive Extended Kalman Filter by Experiment Approach
An accurate State-of-Charge (SoC) estimation plays a significant role in battery systems used in electric vehicles due to the arduous operating environments and the requirement of ensuring safe and reliable operation of batteries. Among the conventional methods to estimate SoC, the Coulomb counting method is widely used, but its accuracy is limited by accumulated error. Another commonly used method is model-based online iterative estimation with Kalman filters, which improves the estimation accuracy to some extent. To improve the performance of Kalman filters in SoC estimation, the adaptive extended Kalman filter (AEKF), which employs the covariance matching approach, is applied in this paper. First, we built an implementation flowchart of the AEKF for a general system. Second, we built an online open-circuit voltage (OCV) estimation approach with the AEKF algorithm so that we can obtain the SoC estimate by looking up the OCV-SoC table. Third, we proposed a robust online model-based SoC estimation approach with the AEKF algorithm. Finally, an evaluation of the SoC estimation approaches is performed through experiments, from the aspects of SoC estimation accuracy and robustness. The results indicate that the proposed online SoC estimation with the AEKF algorithm performs optimally, and for different initial error values the maximum SoC estimation error is less than 2%, with closed-loop state estimation characteristics.
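A heavily simplified, single-state sketch of the model-based idea follows: propagate SoC by Coulomb counting, then correct it with a voltage measurement through an OCV-SoC curve. The ocv and docv_dsoc callables and all numeric values are illustrative placeholders, and the covariance-matching adaptation that makes the paper's filter adaptive is omitted.

```python
# One predict/update step of a scalar Kalman filter on SoC. Sketch only:
# no internal-resistance model, no covariance matching (the 'A' in AEKF).
def soc_step(soc, P, current_a, dt, v_meas, ocv, docv_dsoc,
             capacity_as=7200.0, q=1e-7, r=1e-3):
    # Predict: Coulomb counting (discharge current taken as positive).
    soc_pred = soc - current_a * dt / capacity_as
    P = P + q
    # Update: linearize the OCV(SoC) curve at the prediction (the 'E' in EKF).
    H = docv_dsoc(soc_pred)
    K = P * H / (H * P * H + r)                  # Kalman gain
    soc_new = soc_pred + K * (v_meas - ocv(soc_pred))
    P = (1 - K * H) * P
    return soc_new, P
```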
A pipeline inspection robot with a linkage type mechanical clutch
This paper presents a new pipeline inspection robot with a linkage-type mechanical clutch, which is designed for the inspection of pipelines with a 100 mm diameter. The robot has three powered wheel chains, each of which has a mechanical clutch. The mechanical clutch is designed using a parallel linkage mechanism. The kinematic model of the pipeline inspection robot is derived and its prototype has been developed. The performance of this robot system is verified by both simulation and experimentation.
An HIS-Based Spiral Antenna for Pattern Reconfigurable Applications
A single-arm rectangular spiral antenna with four open-circuit switches over a high-impedance surface (HIS) is proposed for pattern reconfigurable applications. The HIS plane without vias is utilized to achieve a low-profile antenna design with a net thickness of 5.08 mm. This is equivalent to ~λ0/17 at the intended operating frequency of 3.3 GHz. By using the sixteen possible switching combinations, a near-360° beam steering is achieved, and the switched beams do not have a polarization variation from one pattern to another. The realized pattern reconfigurable antenna has both tilted (θmax ≥ 25°) and axial (5° < θmax < 10°) beams, which have an average directivity of 6.9 dBi.
Serologic detection of Toxoplasma gondii infection in stray and household cats and its hematologic evaluation
Aims: This study focused on the serologic detection of Toxoplasma gondii infection in two groups of cats: stray and household groups. In addition, a hematologic assessment of seropositive and seronegative cats was done. Methods: Sixty cats were serologically tested for anti-Toxoplasma gondii antibodies using the latex agglutination test. Six collection sites for each group of cats were identified in the urban communities of Sta Rosa and San Pedro, Laguna, Philippines. The 60 cats collected were divided into 30 stray and 30 household cats. Results: Results revealed that 28 (46.67%) of the 60 cats were seropositive. More household cats (28.33%) than stray cats (18.33%) were seropositive; however, the difference was statistically insignificant (p>0.05). Hematologic tests through complete blood count showed a significantly (p<0.05) higher number of seropositive cats with abnormalities in hemoglobin level, red blood cell count, segmenter (neutrophil) and monocyte counts compared with the control. Other parameters such as percent packed cell volume, white blood cell count, eosinophil and lymphocyte counts showed insignificant (p>0.05) results across seropositive cats and the control. Blood chemistry analysis showed significantly (p<0.05) higher potassium level irregularities in seropositive cats relative to the seronegative cats. Other parameters such as amylase, blood sugar, blood uric acid, creatinine and blood urea nitrogen were statistically insignificant (p>0.05). Conclusions: Although Toxoplasma gondii infection is a possible cause of these hematologic abnormalities, it is recommended that further studies be done to provide more basic and clinical research information that would improve cat health management.
Prospective Derivation of a Living Organoid Biobank of Colorectal Cancer Patients
In Rspondin-based 3D cultures, Lgr5 stem cells from multiple organs form ever-expanding epithelial organoids that retain their tissue identity. We report the establishment of tumor organoid cultures from 20 consecutive colorectal carcinoma (CRC) patients. For most, organoids were also generated from adjacent normal tissue. Organoids closely recapitulate several properties of the original tumor. The spectrum of genetic changes within the "living biobank" agrees well with previous large-scale mutational analyses of CRC. Gene expression analysis indicates that the major CRC molecular subtypes are represented. Tumor organoids are amenable to high-throughput drug screens allowing detection of gene-drug associations. As an example, a single organoid culture was exquisitely sensitive to Wnt secretion (porcupine) inhibitors and carried a mutation in the negative Wnt feedback regulator RNF43, rather than in APC. Organoid technology may fill the gap between cancer genetics and patient trials, complement cell-line- and xenograft-based drug studies, and allow personalized therapy design.
High-Speed and Energy-Efficient Carry Skip Adder Operating Under a Wide Range of Supply Voltage Levels
In this paper, we present a carry skip adder (CSKA) structure that has a higher speed yet lower energy consumption compared with the conventional one. The speed enhancement is achieved by applying concatenation and incrementation schemes to improve the efficiency of the conventional CSKA (Conv-CSKA) structure. In addition, instead of utilizing multiplexer logic, the proposed structure makes use of AND-OR-Invert (AOI) and OR-AND-Invert (OAI) compound gates for the skip logic. The structure may be realized with both fixed stage size and variable stage size styles, wherein the latter further improves the speed and energy parameters of the adder. Finally, a hybrid variable latency extension of the proposed structure, which lowers the power consumption without considerably impacting the speed, is presented. This extension utilizes a modified parallel structure for increasing the slack time, and hence, enabling further voltage reduction. The proposed structures are assessed by comparing their speed, power, and energy parameters with those of other adders using a 45-nm static CMOS technology for a wide range of supply voltages. The results that are obtained using HSPICE simulations reveal, on average, 44% and 38% improvements in the delay and energy, respectively, compared with those of the Conv-CSKA. In addition, the power-delay product was the lowest among the structures considered in this paper, while its energy-delay product was almost the same as that of the Kogge-Stone parallel prefix adder with considerably smaller area and power consumption. Simulations on the proposed hybrid variable latency CSKA reveal reduction in the power consumption compared with the latest works in this field while having a reasonably high speed.
Omnidirectional conformal antenna array based on E-shaped patches
An antenna consisting of 4-element subarrays of strongly coupled E-shaped patches placed conformally on a metal cylinder is proposed for use in wireless local area networks (WLAN). The use of this special array configuration in the cylindrical case allows reaching an 8.3% bandwidth and an omnidirectional radiation pattern in the horizontal plane with only 2 subarrays. CST Microwave Studio is used for validation purposes before manufacturing. To the knowledge of the authors, this is the first time that the recently introduced concept of strongly coupled E-shaped patches has been used in a conformal antenna.
Underwater electric field measurement and analysis
The University of Idaho (UI) is developing electromagnetic field sensor systems that are attachable to autonomous underwater vehicles with the intent of taking survey measurements in underwater ocean environments. This paper presents the testing of these sensors and compares measurements to predictions. Testing was conducted off the coast of Florida with a moving artificial electric field source and an AUV equipped with an electric field sensor. At the closest pass, the peak value of the AUV-acquired electric field was consistent with the predicted field of the artificial source when the accuracy of the AUV position was known to within ∼1 m.
Investment, Inflation and Economic Growth Nexus
The paper has two objectives. Firstly, the impact of the inflation rate on economic growth, with the possibility of two threshold levels, is examined for Pakistan using annual data from 1961 to 2008; secondly, the nonlinear relationship between inflation and investment is investigated. The inflation and growth models support the existence of a nonlinear relationship with two thresholds (6 percent and 11 percent). Inflation below the first threshold affects economic growth positively but insignificantly; at moderate rates of inflation, between the two threshold levels, the effect of inflation is significant and strongly negative; and at high rates of inflation, above the second threshold, the marginal impact of additional inflation on economic growth diminishes but is still significantly negative. Investment is one of the possible channels through which inflation influences economic growth, and the analysis indicates a nonlinear relationship between these two variables with only one threshold, at 7 percent. A rate of inflation below the threshold level has a positive but insignificant impact, while above the threshold it has a strong negative and significant impact on investment. Therefore, it is desirable to keep inflation below 6 percent because this may be helpful for the achievement of robust economic growth and investment.
Mayo Clinic validation of the D'Amico risk group classification for predicting survival following radical prostatectomy.
PURPOSE The D'Amico risk group classification was originally developed to estimate the risk of biochemical recurrence following treatment for localized prostate cancer. We externally validated the ability of the risk groups to predict clinical progression, and cancer specific and overall survival following radical prostatectomy, and identify predictors of outcome in patients with high risk disease. MATERIALS AND METHODS We evaluated the records of 7,591 consecutive patients who underwent radical prostatectomy at our institution between 1987 and 2003. Postoperative survival was estimated using the Kaplan-Meier method. Cox proportional hazard regression models were used to analyze the ability of the risk groups to predict survival and to evaluate the impact of clinicopathological factors on outcome in patients at high risk. RESULTS Preoperative risk group stratification predicted the patient risk of biochemical and local recurrence, systemic progression, and cancer specific and overall survival (each p <0.001). The HR of death from prostate cancer after surgery in patients with high or intermediate risk disease was 11.5 (95% CI 5.9 to 22.3, p <0.0001) and 6.3 (95% CI 3.3 to 12.3, p <0.0001), respectively, compared to patients at low risk. In patients in the high risk group biopsy Gleason score (p = 0.006), pathological Gleason score (p = 0.006), pathological tumor stage (p = 0.04), positive lymph nodes (p = 0.02) and positive surgical margins (p = 0.008) predicted death from prostate cancer. CONCLUSIONS We validated the ability of the risk group stratification to predict disease progression and patient survival following radical prostatectomy. Additional prognostic information from surgical staging may assist in individualized postoperative management, particularly for patients at high risk.
Predictive control of an autonomous ground vehicle using an iterative linearization approach
This paper presents the design of a controller for an autonomous ground vehicle. The goal is to track the lane centerline while avoiding collisions with obstacles. A nonlinear model predictive control (MPC) framework is used where the control inputs are the front steering angle and the braking torques at the four wheels. The focus of this work is on the development of a tailored algorithm for solving the nonlinear MPC problem. Hardware-in-the-loop simulations with the proposed algorithm show a reduction in the computational time as compared to general purpose nonlinear solvers. Experimental tests on a passenger vehicle at high speeds on low friction road surfaces show the effectiveness of the proposed algorithm.
Identification and compensation of torque ripple in high-precision permanent magnet motor drives
Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce high-frequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions, which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.
Ideal rectangular cross-section Si-Fin channel double-gate MOSFETs fabricated using orientation-dependent wet etching
Ultranarrow and ideal rectangular cross-section silicon (Si)-Fin channel double-gate MOSFETs (FXMOSFETs) have successfully been fabricated for the first time using [110]-oriented silicon-on-insulator (SOI) wafers and orientation-dependent wet etching. The transconductance (g_m) normalized by 2 × (Fin height) is found to be as high as 700 μS/μm at V_d = 1 V in the fabricated 13-nm-thick and 82-nm-high Si-Fin channel double-gate MOSFET with a 105-nm gate length and a 2.2-nm gate oxide. The almost-ideal S-slope of 64 mV/decade is demonstrated in a 145-nm gate length device. These excellent results show that the Si-Fin channel with smooth [111]-oriented sidewalls is suitable to realize a high-performance FXMOSFET. The short-channel effects (SCEs) are effectively suppressed by reducing the Si-Fin thickness to 23 nm or less.
Compliance with homework assignments in cognitive-behavioral psychotherapy for depression: Relation to outcome and methods of enhancement
Despite the importance attached to homework in cognitive-behavioral therapy for depression, quantitative studies of its impact on outcome have been limited. One aim of the present study was to replicate a previous finding suggesting that improvement can be predicted from the quality of the client's compliance early in treatment. If homework is indeed an effective ingredient in this form of treatment, it is important to know how compliance can be influenced. The second aim of the present study was to examine the effectiveness of several methods of enhancing compliance that have frequently been recommended to therapists. The data were drawn from 235 sessions received by 25 clients. Therapists' ratings of compliance following the first two sessions of treatment contributed significantly to the prediction of improvement at termination (though not at follow-up). However, compliance itself could not be predicted from any of the clients' ratings of therapist behavior in recommending the assignments.
Three-Dimensional-Printing of Bio-Inspired Composites.
Optimized over millions of years, natural materials often outperform synthetic materials due to their hierarchical structures and multifunctional abilities. They usually feature a complex architecture that consists of simple building blocks. Indeed, many natural materials such as bone, nacre, hair, and spider silk have outstanding material properties, making them applicable to engineering applications that may require both mechanical resilience and environmental compatibility. However, such natural materials are very difficult to harvest in bulk, and may be toxic in the way they occur naturally; it is therefore critical to use alternative methods to fabricate materials whose functions are similar to those of their natural counterparts for large-scale applications. Recent progress in additive manufacturing, especially the ability to print multiple materials at upper micrometer resolution, has given researchers an excellent instrument to design and reconstruct nature-inspired materials. The most advanced 3D-printers can now be used to manufacture samples that emulate the geometry and material composition of natural materials with high fidelity. These capabilities, in combination with computational modeling, have provided even more opportunities for designing, optimizing, and testing the function of composite materials, in order to achieve composites of high mechanical resilience and reliability. In this review article, we focus on the advanced material properties of several multifunctional biological materials and discuss how advanced 3D-printing techniques can be used to mimic their architectures and functions. Lastly, we discuss the limitations of 3D-printing, suggest possible future developments, and discuss applications using bio-inspired materials as a tool in bioengineering and other fields.
Mediated and direct effects of the North Atlantic Ocean on winter temperatures in northwest Europe
This study has used a multiple regression model to quantify the importance of wintertime mean North Atlantic sea-surface temperatures (SSTs) for explaining (simultaneous) variations in wintertime mean temperatures in northwestern Europe. Although wintertime temperature variations are primarily determined by atmospheric flow patterns, it has been speculated that North Atlantic SSTs might also provide some additional information. To test this hypothesis, we have attempted to explain 1900–93 variations in wintertime mean central England temperature (CET) by using multiple regression with contemporaneous winter mean North Atlantic sea-level pressures (SLPs) and SSTs as explanatory variables. With no SST information, the leading SLP patterns (including the North Atlantic oscillation) explain 63% of the total variance in winter mean CET; however, SSTs alone are capable of explaining only 16% of the variance in winter mean CET. Much of the SST effect is ‘indirect’ in that it supplies no more significant information than already contained in the mean SLP; e.g. both SLP and SST together can only explain 68% of the variance. However, there is a small (5% variance) direct effect due to SST that is not mediated by mean SLP, which has a spatial pattern resembling the Newfoundland SST pattern identified by Ratcliffe and Murray (1970. Quarterly Journal of the Royal Meteorological Society 96: 226–246). In predictive mode, however, using explanatory variables from preceding seasons, SSTs contain more information than SLP factors. On longer time scales, the variance explained by contemporaneous SST increases, but the SLP explanatory variables still provide a better model than the SST variables.
A Multimodal Analysis of Floor Control in Meetings
The participant in a human-to-human communication who controls the floor bears the burden of moving the communication process along. Change in control of the floor can happen through a number of mechanisms, including interruptions, delegation of the floor, and so on. This paper investigates floor control in multiparty meetings that are both audio- and video-taped; hence, we analyze patterns of speech (e.g., the use of discourse markers) and visual cues (e.g., eye gaze exchanges) that are often involved in floor control changes. Identifying who has control of the floor provides an important focus for information retrieval and summarization of meetings. Additionally, without understanding who has control of the floor, it is impossible to identify important events such as challenges for the floor. In this paper, we analyze multimodal cues related to floor control in two different meetings involving five participants each.
ICP-based pose-graph SLAM
Odometry-like localization solutions can be built upon Light Detection And Ranging (LIDAR) sensors by sequentially registering the point clouds acquired along a robot trajectory. Yet such solutions inherently drift over time: we propose an approach that adds a graphical model layer on top of such a LIDAR odometry layer, using the Iterative Closest Points (ICP) algorithm for registration. Reference frames called keyframes are defined along the robot trajectory, and ICP results are used to build a pose graph, which in turn is used to solve an optimization problem that provides updates for the keyframes upon loop closing, enabling the correction of the path of the robot and of the map of the environment. We present in detail the configuration used to register data from the Velodyne High Definition LIDAR (HDL), and a strategy to build local maps upon which current frames are registered, either when discovering new areas or revisiting previously mapped areas. Experiments show that it is possible to build the graph using data from ICP and that loop closing at the graph level reduces the overall drift of the system.
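As a hedged illustration of the pose-graph layer (not the paper's 3D implementation), the sketch below optimizes a toy 2D pose graph: each pose is (x, y, θ), each edge carries a relative-pose measurement such as an ICP result, and a loop-closure edge pulls the drifted odometry chain back into shape using scipy's least_squares.

```python
# Toy 2D pose-graph optimization. Poses are (x, y, theta); each edge
# (i, j, (dx, dy, dth)) encodes "j as seen from i" (e.g. an ICP result).
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_poses, edges):
    poses = flat_poses.reshape(-1, 3)
    res = []
    for i, j, (dx, dy, dth) in edges:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        c, s = np.cos(thi), np.sin(thi)
        rx = c * (xj - xi) + s * (yj - yi)            # j's position in frame i
        ry = -s * (xj - xi) + c * (yj - yi)
        rth = (thj - thi - dth + np.pi) % (2 * np.pi) - np.pi  # wrapped angle error
        res += [rx - dx, ry - dy, rth]
    res += list(poses[0])                             # anchor the first pose at the origin
    return np.array(res)

# Square trajectory; the last edge (3 -> 0) is the loop closure.
edges = [(0, 1, (1, 0, np.pi / 2)), (1, 2, (1, 0, np.pi / 2)),
         (2, 3, (1, 0, np.pi / 2)), (3, 0, (1, 0, np.pi / 2))]
init = np.array([0, 0, 0, 1, 0, np.pi / 2, 1, 1, np.pi, 0, 1, 1.5 * np.pi])
init += 0.05 * np.random.default_rng(0).standard_normal(12)  # simulated drift
sol = least_squares(residuals, init, args=(edges,))
print(sol.x.reshape(-1, 3).round(2))                  # drift-corrected keyframe poses
```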
THE THEORY OF PROJECT MANAGEMENT: EXPLANATION TO NOVEL METHODS
In a series of prior papers, the authors have explored the theoretical foundation of project management. In this paper, this theoretical foundation is consolidated and used for explaining the novel features of two project management methods that radically deviate from the conventional doctrine of project management: Last Planner and Scrum. Both methods have emerged since the mid-nineties as practical responses to the failure of conventional project management methods, Scrum in the field of software projects, Last Planner in the field of construction projects. It is shown that both methods reject the underlying theoretical foundation of conventional project management and instead subscribe, implicitly or explicitly, to alternative theories, which better match the situation in question.
Segmenting Consumers Based on Luxury Value Perceptions
This study seeks to discover consumer segments by using a multidimensional concept of luxury encompassing functional, individual and social components in the luxury market. Survey data were collected from 1097 consumers in Iran. Eight luxury factors were identified through an exploratory factor analysis. These factors are used for segmenting the consumers with the K-means method. Cluster analysis of the data resulted in four different behavioral style segments, namely: non-luxury consumers, rational consumers, social-seeker consumers and materialistic consumers. Each segment weights the luxury value dimensions differently. This study sheds light on the differences between consumers' perceptions of luxury value, which helps marketers choose their marketing strategies more consistently with the consumers' viewpoint.
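A minimal sketch of the segmentation step, assuming scikit-learn: factor scores from the exploratory factor analysis feed a K-means run with four clusters. The factor-score matrix here is random stand-in data with the abstract's dimensions (1097 respondents, 8 factors), not the survey data.

```python
# K-means segmentation on factor scores. Illustrative stand-in data only.
import numpy as np
from sklearn.cluster import KMeans

factor_scores = np.random.default_rng(0).standard_normal((1097, 8))
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(factor_scores)
print(np.bincount(km.labels_))   # size of each consumer segment
```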
Facial Expression Recognition Based on Deep Evolutional Spatial-Temporal Networks
One key challenging issue of facial expression recognition is to capture the dynamic variation of facial physical structure from videos. In this paper, we propose a part-based hierarchical bidirectional recurrent neural network (PHRNN) to analyze the facial expression information of temporal sequences. Our PHRNN models facial morphological variations and dynamical evolution of expressions, which is effective to extract “temporal features” based on facial landmarks (geometry information) from consecutive frames. Meanwhile, in order to complement the still appearance information, a multi-signal convolutional neural network (MSCNN) is proposed to extract “spatial features” from still frames. We use both recognition and verification signals as supervision to calculate different loss functions, which are helpful to increase the variations of different expressions and reduce the differences among identical expressions. This deep evolutional spatial-temporal network (composed of PHRNN and MSCNN) extracts the partial-whole, geometry-appearance, and dynamic-still information, effectively boosting the performance of facial expression recognition. Experimental results show that this method largely outperforms the state-of-the-art ones. On three widely used facial expression databases (CK+, Oulu-CASIA, and MMI), our method reduces the error rates of the previous best ones by 45.5%, 25.8%, and 24.4%, respectively.
Scheduling for heterogeneous systems using constrained critical paths
A complex computing problem may be efficiently solved on a system with multiple processing elements by dividing its implementation code into several tasks or modules that execute in parallel. The modules may then be assigned to and scheduled on the processing elements so that the total execution time is minimized. Finding an optimal schedule for parallel programs is a non-trivial task and is considered to be NP-complete. For heterogeneous systems having processors with different characteristics, most scheduling algorithms use a greedy approach to assign processors to the modules. This paper suggests a novel approach called constrained earliest finish time (CEFT) to provide better schedules for heterogeneous systems using the concept of constrained critical paths (CCPs). In contrast to other approaches used for heterogeneous systems, the CEFT strategy takes into account a broader view of the input task graph. Furthermore, the statically generated CCPs may be efficiently scheduled in comparison with other approaches. The experimental results show that the CEFT scheduling strategy outperforms the well-known HEFT, DLS and LMT strategies by producing shorter schedules for a diverse collection of task graphs.
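The earliest-finish-time idea underlying this family of heuristics can be sketched compactly. The toy list scheduler below (illustrative only) walks tasks in a precedence-respecting order and assigns each to whichever processor finishes it earliest; unlike HEFT or CEFT, it ignores communication costs and the constrained-critical-path machinery.

```python
# Toy EFT list scheduling for heterogeneous processors. Sketch only.
def eft_schedule(order, cost, preds, n_procs):
    """order: tasks in topological order; cost[t][m]: runtime of task t on
    processor m; preds: dict mapping task -> list of predecessor tasks."""
    proc_free = [0.0] * n_procs
    finish, where = {}, {}
    for t in order:
        ready = max((finish[p] for p in preds.get(t, [])), default=0.0)
        best = min(range(n_procs),                      # earliest-finish processor
                   key=lambda m: max(ready, proc_free[m]) + cost[t][m])
        finish[t] = max(ready, proc_free[best]) + cost[t][best]
        where[t] = best
        proc_free[best] = finish[t]
    return finish, where

cost = {"A": [2, 3], "B": [3, 1], "C": [4, 2]}          # 2 heterogeneous processors
print(eft_schedule(["A", "B", "C"], cost, {"C": ["A", "B"]}, 2))
```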
Design and Analysis of a Halbach Magnetized Magnetic Screw for Artificial Heart
This paper introduces a high-force-density linear actuator for an artificial heart, which is based on the concept of a magnetic screw-nut. The novelty of the proposed magnetic screw is the adoption of a helically disposed Halbach permanent-magnet (PM) array. PMs are placed on both the screw and the nut. The configuration of the newly designed magnetic screw is presented, and its electromagnetic performance is analyzed. The proposed Halbach magnetized magnetic screw is evaluated against a radially magnetized magnetic screw using the time-stepping finite-element method, verifying the advantages of the proposed structure.
Factors affecting service oriented architecture implementation
With the advancement of computing, the software development life cycle and its traditional architectural models are moving towards more flexible and agile techniques. Service oriented architecture is one such modern architectural technique, composed of loosely coupled and flexible software components called services. Service oriented architecture supports usability, scalability, and flexibility, and each of these brings its own implementation complexities. This paper focuses on the factors that affect service oriented architecture design and implementation. The authors conduct a literature review of these factors, accumulating those identified in earlier work on service oriented architecture implementation. The factors affecting service oriented architecture are then categorized with the help of a proposed model, which can serve as a guideline for SOA projects according to the size of the organization.
Traffic generation of IEC 61850 sampled values
The work presented in this paper targets the first phase of the test and measurement product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on developing standards that support new technologies while leaving the scope of implementations as open as possible. Allowing engineers to freely create and invent tools that can quickly simulate or emulate their ideas is paramount. Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values, which will help in evaluating the data models, data acquisition, data fusion, data integration, and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.
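To make the framing concrete, here is a minimal Linux-only Python sketch of pushing a raw IEC 61850-9-2 sampled-value frame onto the wire. The interface name, source MAC, APPID, and placeholder payload are illustrative assumptions; a real SV APDU is an ASN.1/BER-encoded structure, which is elided here, and this sketch is unrelated to the paper's actual traffic generator.

```python
# Craft and send a raw Ethernet frame carrying the IEC 61850-9-2 SV
# EtherType (0x88BA). Requires root privileges and Linux (AF_PACKET).
import socket
import struct

IFACE = "eth0"                          # assumed interface name
DST = bytes.fromhex("010ccd040001")     # SV multicast MAC range 01-0C-CD-04-xx-xx
SRC = bytes.fromhex("020000000001")     # locally administered source MAC
ETHERTYPE = 0x88BA                      # IEC 61850-9-2 Sampled Values

appid = 0x4000
apdu = b"\x60\x00"                      # placeholder bytes, not a valid SV APDU
# IEC 61850 header: APPID, length (= 8 + APDU length), reserved1, reserved2.
payload = struct.pack("!HHHH", appid, 8 + len(apdu), 0, 0) + apdu

frame = DST + SRC + struct.pack("!H", ETHERTYPE) + payload

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
s.bind((IFACE, 0))
s.send(frame)                           # kernel pads to the Ethernet minimum
```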
Steady-State Power Flow Model of Energy Router Embedded AC Network and Its Application in Optimizing Power System Operation
The energy router is an emerging device concept based on advanced power electronics. It can realize flexible and dynamic electric power distribution in power systems, analogous to the function of information routers in the Internet. It is therefore of great interest to investigate how the energy router can be used to optimize power system operation. This paper formulates the steady-state power flow model of a network with embedded energy routers, together with the related optimal power flow formulation. The role of the energy router in providing extra flexibility to optimize system operation is studied. Case studies are carried out on a modified IEEE RTS-79 system and a modified IEEE 118-bus system with the energy router. The results show that the energy router can optimize power system operation by controlling the power injections and voltages at its ports. Operating objectives such as adjusting branch power flows, improving bus voltages, and reducing active power losses of the grid can be met under different objective functions.
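For context, the steady-state backbone of any such formulation is the standard AC power flow equations; a generic way to embed a controllable multi-port device is sketched below. The internal-balance constraint is a common modelling assumption for power-electronic routers, not necessarily the paper's exact formulation:

\[
P_i = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right), \qquad
Q_i = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right),
\]

where the energy-router ports $k \in \mathcal{E}$ contribute controllable injections $P_k, Q_k$ subject to the device's internal power balance

\[
\sum_{k \in \mathcal{E}} P_k + P_{\mathrm{loss}} = 0 .
\]

The optimal power flow then treats the port injections and port voltages as additional decision variables under a chosen objective (e.g., loss minimization).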
Organic solar cells: An overview
Organic solar cell research has developed over the past 30 years, but especially in the last decade it has attracted scientific and economic interest, triggered by a rapid increase in power conversion efficiencies. This was achieved by the introduction of new materials, improved materials engineering, and more sophisticated device structures. Today, solar power conversion efficiencies in excess of 3% have been accomplished with several device concepts. Though the efficiencies of these thin-film organic devices have not yet reached those of their inorganic counterparts (≈10–20%), the prospect of cheap production (employing, e.g., roll-to-roll processes) continues to drive the development of organic photovoltaic devices in a dynamic way. The two competing production techniques used today are wet solution processing and dry thermal evaporation of the organic constituents. The field of organic solar cells has profited greatly from the development of light-emitting diodes based on similar technologies, which have recently entered the market. We review here the current status of the field of organic solar cells, discuss the different production technologies, and examine the parameters important for improving device performance.
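For reference, the power conversion efficiencies quoted above are the standard photovoltaic figure of merit; this is textbook material rather than anything specific to this review. Under illumination with incident power density $P_{in}$,

\[
\eta = \frac{FF \cdot V_{oc} \cdot J_{sc}}{P_{in}}, \qquad FF = \frac{P_{max}}{V_{oc}\,J_{sc}},
\]

where $V_{oc}$ is the open-circuit voltage, $J_{sc}$ the short-circuit current density, and $FF$ the fill factor extracted from the maximum-power point of the current-voltage curve.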
Robust Feature Selection Using Ensemble Feature Selection Techniques
Robustness, or stability, of feature selection techniques is a topic of recent interest and an important issue when selected feature subsets are subsequently analysed by domain experts to gain more insight into the problem being modelled. In this work, we investigate the use of ensemble feature selection techniques, where multiple feature selection methods are combined to yield more robust results. We show that these techniques hold great promise for high-dimensional domains with small sample sizes, providing more robust feature subsets than a single feature selection technique. In addition, we investigate the effect of ensemble feature selection techniques on classification performance, giving rise to a new model selection strategy.
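A common instantiation of this idea is rank aggregation over resamples. The following is a minimal scikit-learn sketch, assuming a univariate F-score ranker, bootstrap resampling, and mean-rank aggregation; these are illustrative choices, not the specific ensembles studied in the paper.

```python
# Ensemble feature selection: rank features on 30 bootstrap resamples
# with a univariate filter, then aggregate by mean rank.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=60, n_features=200, random_state=0)
rng = np.random.default_rng(0)

ranks = []
for _ in range(30):
    idx = rng.integers(0, len(y), len(y))        # bootstrap resample
    scores, _ = f_classif(X[idx], y[idx])
    order = np.argsort(-scores)                  # rank 0 = highest F-score
    r = np.empty_like(order)
    r[order] = np.arange(len(order))
    ranks.append(r)

mean_rank = np.mean(ranks, axis=0)
selected = np.argsort(mean_rank)[:10]            # 10 most stably top-ranked
print(selected)
```

The aggregated subset tends to be less sensitive to perturbations of the small training sample than any single ranking, which is exactly the robustness property the abstract refers to.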
A Study on Software Risk Management Strategies and Mapping with SDLC
In recent years, despite the several risk management models proposed by different researchers, software projects still fail at a high rate. Improper risk assessment during software development is a major reason behind these unsuccessful projects, as risk analysis has typically been performed on the project as a whole. This work attempts to identify the key risk factors and risk types for each development phase of the SDLC, which would help in identifying risks at a much earlier stage of development.
Decision quality and satisfaction: the effects of online information sources and self-efficacy
Purpose – Digital libraries and social media are two sources of online information with different characteristics. This study integrates self-efficacy into the analysis of the relationship between information sources and decision-making, with the aim of exploring the effect of self-efficacy on decision-making, as well as the interacting effect of self-efficacy and information sources on decision-making. Design/methodology/approach – Survey data were collected and partial least squares (PLS) structural equation modelling (SEM) was employed to verify the research model. Findings – The effect of digital library usage for acquiring information on perceived decision quality is larger than that of social media usage. Self-efficacy in acquiring information stands out as the key determinant of perceived decision quality. The effect of social media usage for acquiring information on perceived decision quality is positively moderated by self-efficacy in acquiring information. Practical implications – Decision-making is a fundamental activity for individuals, but human decision-making is often subject to biases. The findings of this study provide useful insights into improving decision quality, highlighting the importance of self-efficacy in acquiring information in the face of information overload. Originality/value – This study integrates self-efficacy into the analysis of the relationship between information sources and decision-making, presenting a new perspective for decision-making research and practice alike.
A Novel Evolutionary Kernel Intuitionistic Fuzzy $C$-means Clustering Algorithm
This study proposes a novel evolutionary kernel intuitionistic fuzzy c-means clustering algorithm (EKIFCM) that combines Atanassov's intuitionistic fuzzy sets (IFSs) with kernel-based fuzzy c-means (KFCM), while genetic algorithms (GAs) are used to simultaneously select the optimal parameters of the EKIFCM. The EKIFCM thus combines the advantages of intuitionistic fuzzy sets, kernel functions, and GAs in practical clustering problems. Experiments on 2-D synthetic datasets and machine learning repository (http://archive.ics.uci.edu/beta/) datasets show that the proposed EKIFCM is more efficient than conventional algorithms such as k-means (KM), FCM, the Gustafson-Kessel (GK) clustering algorithm, the Gath-Geva (GG) clustering algorithm, Chaira's intuitionistic fuzzy c-means (IFCM), and kernel-based fuzzy c-means with Gaussian kernel functions [KFCM(G)] on standard measurement indexes.
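For orientation, below is a minimal NumPy sketch of the KFCM(G) building block mentioned in the comparison: kernel fuzzy c-means with a Gaussian kernel. The intuitionistic-fuzzy extension and the GA-based parameter search that define EKIFCM are omitted, and sigma, m, and the iteration count are illustrative assumptions.

```python
# Kernel fuzzy c-means with a Gaussian kernel: memberships from the
# kernel-induced distance 1 - K(x, v), centers from kernel-weighted means.
import numpy as np

def kfcm_gaussian(X, c=2, m=2.0, sigma=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n).T          # c x n memberships
    V = X[rng.choice(n, c, replace=False)]           # initial centers
    for _ in range(iters):
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)   # c x n
        K = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian kernel values
        dist = 1.0 - K + 1e-12                       # kernel-induced distance
        U = dist ** (-1.0 / (m - 1))
        U /= U.sum(axis=0, keepdims=True)            # normalize memberships
        W = (U ** m) * K                             # center-update weights
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return U, V

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
U, V = kfcm_gaussian(X, c=2)
print(V)   # two recovered cluster centers
```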
Scheduling Processes with Release Times, Deadlines, Precedence, and Exclusion Relations
We present an algorithm that finds an optimal schedule on a single processor for a given set of processes such that each process starts executing after its release time and completes its computation before its deadline, while a given set of precedence relations and a given set of exclusion relations defined on ordered pairs of process segments are satisfied. This algorithm can be applied to the important and previously unsolved problem of automated pre-run-time scheduling of processes with arbitrary precedence and exclusion relations in hard-real-time systems.
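To make the constraint types concrete, the following is a minimal Python sketch of checking a candidate non-preemptive schedule against release times, deadlines, precedence, and exclusion constraints. The data layout is an illustrative simplification (whole processes rather than process segments), and the search for a feasible schedule itself, the NP-hard part the paper addresses, is deliberately omitted.

```python
# process: (release time, deadline, computation time)
procs = {"p1": (0, 10, 3), "p2": (2, 12, 4), "p3": (0, 15, 2)}
precedes = [("p1", "p2")]          # p1 must complete before p2 starts
excludes = [("p2", "p3")]          # p2 and p3 may not overlap in time

def feasible(start):
    for p, (r, d, c) in procs.items():
        if start[p] < r or start[p] + c > d:   # release / deadline windows
            return False
    for a, b in precedes:                      # precedence relations
        if start[a] + procs[a][2] > start[b]:
            return False
    for a, b in excludes:                      # exclusion: no overlap
        if not (start[a] + procs[a][2] <= start[b]
                or start[b] + procs[b][2] <= start[a]):
            return False
    return True

print(feasible({"p1": 0, "p2": 3, "p3": 7}))   # True
```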
Regression Test Selection for Java Software
Regression testing is applied to modified software to provide confidence that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modifications. To reduce the cost of regression testing, test cases are selected from the test suite that was used to test the original version of the software---this process is called regression test selection. A safe regression-test-selection algorithm selects every test case in the test suite that may reveal a fault in the modified software. This paper presents a safe regression-test-selection technique that, based on the use of a suitable representation, handles the features of the Java language. Unlike other safe regression-test-selection techniques, the presented technique also handles incomplete programs. The technique can thus be safely applied in the (very common) case of Java software that uses external libraries of components; the analysis of the external code is not required for the technique to select test cases for such software. The paper also describes RETEST, a regression-test-selection tool that implements the technique, and reports empirical results indicating that the algorithm can be effective in reducing the size of the test suite.
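As a simplified illustration of the selection idea, here is a minimal Python sketch that reruns only the tests whose covered classes intersect the changed classes. This class-granularity view is an assumption made for brevity; a safe technique such as the one described above works on a finer program representation (e.g., control-flow edges), not class names alone.

```python
# Coverage-based regression test selection at class granularity.
coverage = {
    "testLogin":    {"Auth", "Session"},
    "testCheckout": {"Cart", "Payment"},
    "testProfile":  {"Auth", "Profile"},
}
changed = {"Auth"}   # classes modified in the new version

# Select every test whose coverage touches a changed class.
selected = [t for t, classes in coverage.items() if classes & changed]
print(selected)      # ['testLogin', 'testProfile']
```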
Post-infective and chronic fatigue syndromes precipitated by viral and non-viral pathogens: prospective cohort study.
OBJECTIVE To delineate the risk factors, symptom patterns, and longitudinal course of prolonged illnesses after a variety of acute infections. DESIGN Prospective cohort study following patients from the time of acute infection with Epstein-Barr virus (glandular fever), Coxiella burnetii (Q fever), or Ross River virus (epidemic polyarthritis). SETTING The region surrounding the township of Dubbo in rural Australia, encompassing a 200 km geographical radius and 104,400 residents. PARTICIPANTS 253 patients enrolled and followed at regular intervals over 12 months by self report, structured interview, and clinical assessment. OUTCOME MEASURES Detailed medical, psychiatric, and laboratory evaluations at six months to apply diagnostic criteria for chronic fatigue syndrome. Premorbid and intercurrent illness characteristics recorded to define risk factors for chronic fatigue syndrome. Self reported illness phenotypes compared between infective groups. RESULTS Prolonged illness characterised by disabling fatigue, musculoskeletal pain, neurocognitive difficulties, and mood disturbance was evident in 29 (12%) of 253 participants at six months, of whom 28 (11%) met the diagnostic criteria for chronic fatigue syndrome. This post-infective fatigue syndrome phenotype was stereotyped and occurred at a similar incidence after each infection. The syndrome was predicted largely by the severity of the acute illness rather than by demographic, psychological, or microbiological factors. CONCLUSIONS A relatively uniform post-infective fatigue syndrome persists in a significant minority of patients for six months or more after clinical infection with several different viral and non-viral micro-organisms. Post-infective fatigue syndrome is a valid illness model for investigating one pathophysiological pathway to chronic fatigue syndrome.