title | abstract
---|---
Prognostic factors and sites of metastasis in unresectable locally advanced pancreatic cancer | Due to differences in natural history and therapy, clinical trials of patients with advanced pancreatic cancer have recently been subdivided into unresectable locally advanced pancreatic cancer (LAPC) and metastatic disease. We aimed to evaluate prognostic factors in LAPC patients who were treated with first-line chemotherapy and describe patterns of disease progression. Patients with LAPC who initiated first-line palliative chemotherapy between 2001 and 2011 at the BC Cancer Agency were included. A retrospective chart review was conducted to identify clinicopathologic variables, treatment, and subsequent sites of metastasis. Kaplan-Meier and Cox-regression survival analyses were performed. A total of 244 patients were included in this study. For the majority of patients (94.3%), first-line therapy was single-agent gemcitabine. Overall, 144 (59%) patients developed distant metastatic disease, and the most frequent metastatic sites included peritoneum/omentum (42.3%), liver (41%), lungs (13.9%), and distant lymph nodes (9%). Median overall survival (OS) for the entire cohort was 11.7 months (95% CI, 10.6-12.8). Development of distant metastases was associated with significantly inferior survival (HR 3.56, 95% CI 2.57-4.93), as was ECOG 2/3 versus 0/1 (HR 1.69, 95% CI 1.28-2.23), CA 19.9 > 1000 versus ≤ 1000 (HR 1.59, 95% CI 1.19-2.14), and female gender (HR 1.57, 95% CI 1.19-2.08). In this population-based study, 41% of LAPC patients treated with first-line chemotherapy died without evidence of distant metastases. Prognostic factors for LAPC were baseline performance status, elevated CA 19.9, gender, and development of distant metastasis. Results highlight the heterogeneity of LAPC and the importance of locoregional tumor control. |
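The survival figures quoted in this abstract (median OS with confidence interval, hazard ratios) rest on the Kaplan-Meier estimator. A minimal pure-Python sketch of the estimator itself, on invented follow-up data (not the study's cohort), shows how the survival curve drops at each observed event:

```python
from collections import Counter

def kaplan_meier(durations, events):
    """Kaplan-Meier product-limit estimator.
    durations: follow-up times; events: 1 = event observed, 0 = censored.
    Returns (time, survival probability) at each event time: at every
    distinct event time t, S(t) is multiplied by (1 - d/n), where d is
    the number of events at t and n the number still at risk."""
    event_counts = Counter(t for t, e in zip(durations, events) if e == 1)
    leave_counts = Counter(durations)       # subjects leaving the risk set at t
    surv, at_risk, curve = 1.0, len(durations), []
    for t in sorted(leave_counts):
        d = event_counts.get(t, 0)
        if d:
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= leave_counts[t]
    return curve

# Hypothetical mini-cohort: events at t=2, 3, 5; censored at t=3 and 8.
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
```

The median survival reported in such studies is read off the curve as the first time the estimated probability falls to 0.5 or below.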
The impact of an AIDS symposium on attitudes of providers of pediatric health care. | Following reports of concern among health-care workers regarding the occupational risk of infection with the human immunodeficiency virus (HIV), a symposium was designed in 1987 to demonstrate to health-care providers at three hospitals in The Bronx, New York, the low risk of occupational HIV infection and techniques for avoiding infection. After the symposium, 103 of the health-care providers who had attended it completed a questionnaire assessing the impact of the symposium on their attitudes; the responses from 100 of these providers were used in this study. Twenty-nine of the responding providers reported that the symposium had increased their concerns regarding their risk of HIV infection; this group was composed of seven of the 15 medical students who responded, 12 of the 36 housestaff, seven of the 28 faculty, and three of the 21 other medical staff. The findings of the present study suggest that a symposium designed to decrease concerns of occupational HIV infection among health-care workers may have the opposite effect on some of those who attend it, especially medical students. Education alone may be inadequate to reassure some providers. The authors recommend small-group sessions addressing the emotional aspects of health-care providers' concerns. |
Severe disturbance of higher cognition after bilateral frontal lobe ablation: patient EVR. | After bilateral ablation of orbital and lower mesial frontal cortices, a patient had profound changes of behavior that have remained stable for 8 years. Although he could not meet personal and professional responsibilities, his "measurable" intelligence was superior, and he was therefore considered a "malingerer." Neurologic and neuropsychological examinations were otherwise intact. CT, MRI, and SPET revealed a localized lesion of the orbital and lower mesial frontal cortices. All other cerebral areas had normal structure and radioactivity patterns. Such impairments of motivation and complex social behavior were not seen in control cases with superior mesial or dorsolateral frontal lesions. |
[Effects of aroma self-foot reflexology massage on stress and immune responses and fatigue in middle-aged women in rural areas]. | PURPOSE
This study was done to examine the effects of aroma self-foot reflexology massage on stress and immune responses and fatigue in middle-aged women in rural areas.
METHODS
The study was a nonequivalent control group pre-post test design. The participants were 52 middle-aged women from rural areas, of whom 26 were assigned to the experimental group and 26 to the control group. Data were collected from July to September, 2011 and analyzed using the SPSS Win 17.0 program. The intervention was conducted 3 times a week for six weeks.
RESULTS
There were significant differences in reported perceived stress, systolic blood pressure, diastolic blood pressure and fatigue between the two groups. However, differences in salivary cortisol and immune response were not significant.
CONCLUSION
Aroma self-foot reflexology massage can be utilized as an effective intervention for perceived stress, systolic blood pressure, diastolic blood pressure and fatigue in middle-aged women in rural areas. |
Prevention of mother-to-child transmission of HIV-1 through breast-feeding by treating infants prophylactically with lamivudine in Dar es Salaam, Tanzania: the Mitra Study. | OBJECTIVE
To investigate the possibility of reducing mother-to-child transmission (MTCT) of HIV-1 through breast-feeding by prophylactic antiretroviral (ARV) treatment of the infant during the breast-feeding period.
DESIGN
An open-label, nonrandomized, prospective cohort study in Tanzania (Mitra).
METHODS
HIV-1-infected pregnant women were treated according to regimen A of the Petra trial with zidovudine (ZDV) and lamivudine (3TC) from week 36 to 1 week postpartum. Infants were treated with ZDV and 3TC from birth to 1 week of age (Petra arm A) and then with 3TC alone during breast-feeding (maximum of 6 months). Counseling emphasized exclusive breast-feeding. HIV transmission was analyzed using the Kaplan-Meier survival technique. Cox regression was used for comparison with the breast-feeding population in arm A of the Petra trial, taking CD4 cell count and other possible confounders into consideration.
RESULTS
There were 398 infants included in the transmission analysis in the Mitra study. The estimated cumulative proportion of HIV-1-infected infants was 3.8% (95% confidence interval [CI]: 2.0 to 5.6) at week 6 after delivery and 4.9% (95% CI: 2.7 to 7.1) at month 6. The median time of breast-feeding was 18 weeks. High viral load and a low CD4 T-cell count at enrollment were associated with transmission. The Kaplan-Meier estimated risk of HIV-1 infection at 6 months in infants who were HIV-negative at 6 weeks was 1.2% (95% CI: 0.0 to 2.4). The cumulative HIV-1 infection or death rate at 6 months was 8.5% (95% CI: 5.7 to 11.4). No serious adverse events related to the ARV treatment of infants occurred. The HIV-1 transmission rate during breast-feeding in the Mitra study up to 6 months after delivery was more than 50% lower than in the breast-feeding population of Petra arm A (relative hazard = 2.61; P = 0.001; adjusted values). The difference in transmission up to 6 months was also significant in the subpopulation of mothers with CD4 counts ≥200 cells/μL.
CONCLUSIONS
The rates of MTCT of HIV-1 in the Mitra study at 6 weeks and 6 months after delivery are among the lowest reported in a breast-feeding population in sub-Saharan Africa. Prophylactic 3TC treatment of infants to prevent MTCT of HIV during breast-feeding was well tolerated by the infants and could be a useful strategy to prevent breast milk transmission of HIV when mothers do not need ARV treatment for their own health. |
Learning Hierarchical Features for Automated Extraction of Road Markings From 3-D Mobile LiDAR Point Clouds | This paper presents a novel method for automated extraction of road markings directly from three-dimensional (3-D) point clouds acquired by a mobile light detection and ranging (LiDAR) system. First, road surface points are segmented from a raw point cloud using a curb-based approach. Then, road markings are directly extracted from road surface points through multisegment thresholding and spatial density filtering. Finally, seven specific types of road markings are further accurately delineated through a combination of Euclidean distance clustering, voxel-based normalized cut segmentation, large-size marking classification based on trajectory and curb-lines, and small-size marking classification based on deep learning and principal component analysis (PCA). Quantitative evaluations indicate that the proposed method achieves an average completeness, correctness, and F-measure of 0.93, 0.92, and 0.93, respectively. Comparative studies also demonstrate that the proposed method achieves better performance and accuracy than two existing methods. |
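The PCA step in pipelines like this typically serves to recover a marking's dominant orientation from its points. As an illustrative sketch (not the paper's implementation), the principal axis of a 2-D point set can be read directly off its covariance matrix: for covariance [[a, b], [b, c]], the first principal component lies at angle 0.5·atan2(2b, a−c):

```python
import math

def principal_angle(points):
    """Dominant orientation (radians) of a 2-D point set via PCA.
    Builds the 2x2 covariance [[a, b], [b, c]] and returns the angle of
    its leading eigenvector, 0.5 * atan2(2b, a - c) -- e.g. the long
    axis of an elongated road marking."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / n
    c = sum((y - my) ** 2 for _, y in points) / n
    b = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * b, a - c)

# Synthetic "marking" lying along the diagonal y = x:
angle = principal_angle([(i, i) for i in range(10)])
```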
Bayesian Theory of Mind: Modeling Joint Belief-Desire Attribution | We present a computational framework for understanding Theory of Mind (ToM): the human capacity for reasoning about agents’ mental states such as beliefs and desires. Our Bayesian model of ToM (or BToM) expresses the predictive model of belief- and desire-dependent action at the heart of ToM as a partially observable Markov decision process (POMDP), and reconstructs an agent’s joint belief state and reward function using Bayesian inference, conditioned on observations of the agent’s behavior in some environmental context. We test BToM by showing participants sequences of agents moving in simple spatial scenarios and asking for joint inferences about the agents’ desires and beliefs about unobserved aspects of the environment. BToM performs substantially better than two simpler variants: one in which desires are inferred without reference to an agent’s beliefs, and another in which beliefs are inferred without reference to the agent’s dynamic observations in the environment. |
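The core inversion in this kind of model can be sketched in a few lines. The toy below (a 1-D world and a noisily-rational likelihood, both invented for illustration; the paper's actual model is a full POMDP) infers which goal an agent desires from its observed moves by Bayes' rule, P(goal | actions) ∝ P(actions | goal)·P(goal):

```python
import math

def infer_desire(start, actions, goals, beta=2.0):
    """Posterior over which goal an agent desires, given its moves on a
    number line. Likelihood: a noisily rational agent prefers actions
    that reduce distance to its goal, P(a|g) proportional to
    exp(beta * progress toward g)."""
    posterior = {g: 1.0 / len(goals) for g in goals}   # uniform prior
    pos = start
    for a in actions:                                  # each a is +1 or -1
        for g in goals:
            progress = abs(pos - g) - abs(pos + a - g)
            posterior[g] *= math.exp(beta * progress)  # unnormalized likelihood
        pos += a
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# An agent at 0 steps right three times: it almost surely wants goal 5, not -5.
post = infer_desire(start=0, actions=[+1, +1, +1], goals=[5, -5])
```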
A prospective study of different test targets for the near point of convergence. | PURPOSE
To determine whether different test targets including an accommodative target (AT), a transilluminator (TR), and a transilluminator with a red lens (RL), affect the near point of convergence (NPC) value; and to determine which test target is most sensitive to identify convergence insufficiency (CI) in young adults.
METHODS
Subjects were 36 optometry students from the Illinois College of Optometry, including 18 subjects with normal binocular vision (control group) and 18 subjects with CI. None of the subjects had accommodative insufficiency. The NPC break and recovery were measured by three methods: AT, TR, and RL. Each test method was administered by a different examiner and the test sequence was randomized.
RESULTS
The mean NPC break values for AT, TR, and RL in the control group were 4.31, 3.76, and 4.08 cm respectively, compared to 10.05, 11.37 and 13.04 cm in the CI group. The mean recovery values were 6.23, 5.56, and 5.95 cm for AT, TR, and RL respectively in the control group, vs 12.21, 14.37, 16.40 cm in the CI group. Significant differences in NPC break and recovery values were detected in the CI group between RL and AT, but not between AT and TR, or TR and RL. There was no significant difference in NPC values using the three targets in the control group. For an NPC cut point of 6 cm (break) and 9 cm (recovery), RL had higher sensitivity (100%) and specificity (88.9%) as well as lower false positive (10%) and false negative (0%) values compared to AT.
CONCLUSION
NPC with RL is a more sensitive method to identify abnormal findings and assist in diagnosing CI compared to using AT or TR. We recommend that NPC with RL be routinely used to evaluate patients suspected of having CI. |
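The diagnostic figures reported above (sensitivity, specificity, false positive/negative rates) follow from the standard 2×2 confusion-matrix definitions. A sketch with hypothetical counts chosen to roughly mirror the 18-vs-18 design (not the study's actual tabulation):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    false_pos_rate = fp / (fp + tn)
    false_neg_rate = fn / (fn + tp)
    return sensitivity, specificity, false_pos_rate, false_neg_rate

# Hypothetical: all 18 CI subjects flagged (fn=0), 16 of 18 controls cleared.
sens, spec, fpr, fnr = diagnostic_metrics(tp=18, fp=2, tn=16, fn=0)
```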
A Reference Model for Requirements and Specifications | [IEEE Software, May/June 2000] … between the user-requirements specification and the software-requirements specification, mandating complete documentation of each according to various rules. Other cases emphasize this distinction less. For instance, some groups at Microsoft argue that the difficulty of keeping a technical specification consistent with the program is more trouble than the benefit merits [2]. We can find a wide range of views in industry literature and from the many organizations that write software. Is it possible to clarify these various artifacts and study their properties, given the wide variations in the use of terms and the many different kinds of software being written? Our aim is to provide a framework for talking about key artifacts, their attributes, and relationships at a general level, but precisely enough that we can rigorously analyze substantive properties. |
Aboutness: towards foundations for the Information Artifact Ontology | The Information Artifact Ontology (IAO) was created to serve as a domain-neutral resource for the representation of types of information content entities (ICEs) such as documents, databases, and digital images. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Functioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representations and their targets in reality. |
Is e-Learning the Solution for Individual Learning? | Despite the fact that e-Learning has existed for a relatively long time, it is still in its infancy. Current e-Learning systems on the market are limited to technical gadgets and organizational aspects of teaching, instead of supporting the learning itself. As a result, the learner has become deindividualized and demoted to a noncritical, homogeneous user. One way out of this drawback is the creation of individualized e-Learning materials. For this purpose, a flexible multidimensional data model and the generation of individualized content are proposed as the solution. It is likewise necessary to enable interaction between learners and content in e-Learning systems in an equally individualized manner. |
The effect of albendazole treatment on seizure outcomes in patients with symptomatic neurocysticercosis. | BACKGROUND
Randomized controlled trials have found an inconsistent effect of anthelmintic treatment on long-term seizure outcomes in neurocysticercosis. The objective of this study was to further explore the effect of albendazole treatment on long-term seizure outcomes and to determine if there is evidence for a differential effect by seizure type.
METHODS
In this trial, 178 patients with active or transitional neurocysticercosis cysts and new-onset symptoms were randomized to 8 days of treatment with albendazole (n=88) or placebo (n=90), both with prednisone, and followed for 24 months. We used negative binomial regression and logistic regression models to determine the effect of albendazole on the number of seizures and probability of recurrent or new-onset seizures, respectively, over follow-up.
RESULTS
Treatment with albendazole was associated with a reduction in the number of seizures during 24 months of follow-up, but this was only significant for generalized seizures during months 1-12 (unadjusted rate ratio [RR] 0.19; 95% CI: 0.04-0.91) and months 1-24 (unadjusted RR 0.06; 95% CI: 0.01-0.57). We did not detect a significant effect of albendazole on reducing the number of focal seizures or on the probability of having a seizure, regardless of seizure type or time period.
CONCLUSIONS
Albendazole treatment may be associated with some symptomatic improvement; however, this association seems to be specific to generalized seizures. Future research is needed to identify strategies to better reduce long-term seizure burden in patients with neurocysticercosis. |
Syntactic Simplification and Text Cohesion | Syntactic simplification is the process of reducing the grammatical complexity of a text, while retaining its information content and meaning. The aim of syntactic simplification is to make text easier to comprehend for human readers, or process by programs. In this thesis, I describe how syntactic simplification can be achieved using shallow robust analysis, a small set of hand-crafted simplification rules and a detailed analysis of the discourse-level aspects of syntactically rewriting text. I offer a treatment of relative clauses, apposition, coordination and subordination. I present novel techniques for relative clause and appositive attachment. I argue that these attachment decisions are not purely syntactic. My approaches rely on a shallow discourse model and on animacy information obtained from a lexical knowledge base. I also show how clause and appositive boundaries can be determined reliably using a decision procedure based on local context, represented by part-of-speech tags and noun chunks. I then formalise the interactions that take place between syntax and discourse during the simplification process. This is important because the usefulness of syntactic simplification in making a text accessible to a wider audience can be undermined if the rewritten text lacks cohesion. I describe how various generation issues like sentence ordering, cue-word selection, referring-expression generation, determiner choice and pronominal use can be resolved so as to preserve conjunctive and anaphoric cohesive-relations during syntactic simplification. In order to perform syntactic simplification, I have had to address various natural language processing problems, including clause and appositive identification and attachment, pronoun resolution and referring-expression generation. I evaluate my approaches to solving each problem individually, and also present a holistic evaluation of my syntactic simplification system. |
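One of the rewrite rules the thesis treats, splitting a non-restrictive relative clause into its own sentence, can be caricatured in a few lines. The regex-based toy below is purely illustrative (the thesis uses shallow robust analysis, not regexes) and assumes the full subject NP precedes the first comma:

```python
import re

def split_relative_clause(sentence):
    """Toy simplification rule: 'NP, who/which REL, REST.' ->
    'NP REST NP REL.'  Assumes the whole subject NP precedes the comma;
    returns the sentence unchanged when the pattern does not apply."""
    m = re.match(r"^(.+?), (?:who|which) (.+?), (.+)$", sentence)
    if not m:
        return sentence
    np, rel, rest = m.groups()
    # Repeat the NP as a simple referring expression in the split-off clause.
    return f"{np} {rest} {np} {rel}."
```

Note how even this toy raises the cohesion issues the thesis studies: the second sentence needs a referring expression (here the NP is naively copied; a real system might choose a pronoun).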
LLLR Parsing: a Combination of LL and LR Parsing | A new parsing method called LLLR parsing is defined and a method for producing LLLR parsers is described. An LLLR parser uses an LL parser as its backbone and parses as much of its input string using LL parsing as possible. To resolve LL conflicts it triggers small embedded LR parsers. An embedded LR parser starts parsing the remaining input and once the LL conflict is resolved, the LR parser produces the left parse of the substring it has just parsed and passes control back to the backbone LL parser. The LLLR(k) parser can be constructed for any LR(k) grammar. It produces the left parse of the input string without any backtracking and, if used for a syntax-directed translation, it evaluates semantic actions using the top-down strategy just like the canonical LL(k) parser. An LLLR(k) parser is appropriate for grammars where the LL(k) conflicting nonterminals either appear relatively close to the bottom of the derivation trees or produce short substrings. In such cases an LLLR parser can perform significantly better error recovery than an LR parser, since most of the input string is parsed with the backbone LL parser. LLLR parsing is similar to LL(∗) parsing except that it (a) uses LR(k) parsers instead of finite automata to resolve the LL(k) conflicts and (b) does not perform any backtracking. |
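The trigger for an embedded LR parser is an LL conflict, i.e. a nonterminal whose alternatives cannot be distinguished by the lookahead. A hedged sketch (toy grammar and a k=1, no-epsilon simplification, invented here; not the paper's construction) that detects FIRST/FIRST conflicts, exactly the spots where an LLLR parser would hand over to an embedded LR parser:

```python
def first_sets(grammar):
    """FIRST sets for grammar dict {nonterminal: [productions]}, where each
    production is a list of symbols; symbols not in the dict are terminals.
    (Epsilon productions omitted for brevity.)"""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                sym = prod[0]
                new = first[sym] if sym in grammar else {sym}
                if not new <= first[nt]:
                    first[nt] |= new
                    changed = True
    return first

def ll1_conflicts(grammar):
    """Nonterminals whose alternatives have overlapping FIRST sets."""
    first = first_sets(grammar)
    conflicts = set()
    for nt, prods in grammar.items():
        seen = set()
        for prod in prods:
            sym = prod[0]
            f = first[sym] if sym in grammar else {sym}
            if f & seen:
                conflicts.add(nt)
            seen |= f
    return conflicts

# Toy grammar: A -> a b | a c conflicts on lookahead 'a'; B is conflict-free.
g = {"A": [["a", "b"], ["a", "c"]], "B": [["x"], ["y"]]}
```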
Learning structured prediction models: a large margin approach | We consider large margin estimation in a broad range of prediction models where inference involves solving combinatorial optimization problems, for example, weighted graph-cuts or matchings. Our goal is to learn parameters such that inference using the model reproduces correct answers on the training data. Our method relies on the expressive power of convex optimization problems to compactly capture inference or solution optimality in structured prediction models. Directly embedding this structure within the learning formulation produces concise convex problems for efficient estimation of very complex and diverse models. We describe experimental results on a matching task, disulfide connectivity prediction, showing significant improvements over state-of-the-art methods. |
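The learning principle described, adjusting parameters until inference over combinatorial structures reproduces the training answers, can be illustrated with the simplest structured learner, a structured perceptron on a tiny bipartite matching task. This is a sketch of the general idea, not the paper's large-margin convex formulation (inference here is brute force over permutations; all data are invented):

```python
from itertools import permutations

def best_matching(weights, feats, n):
    """Inference: the permutation (bipartite matching) maximizing w . phi,
    where feats[i][j] is the feature vector of pairing left node i with j."""
    def score(perm):
        return sum(w * f
                   for i, j in enumerate(perm)
                   for w, f in zip(weights, feats[i][j]))
    return max(permutations(range(n)), key=score)

def perceptron_train(feats, gold, n, epochs=10):
    """Structured perceptron: while inference disagrees with the gold
    matching, update w += phi(gold) - phi(predicted)."""
    dim = len(feats[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        pred = best_matching(w, feats, n)
        if tuple(pred) == tuple(gold):
            break
        for i in range(n):
            for k in range(dim):
                w[k] += feats[i][gold[i]][k] - feats[i][pred[i]][k]
    return w

# Toy 2x2 task: the gold matching is the one supported by the second feature.
feats = [[(1, 0), (0, 1)], [(0, 1), (1, 0)]]
gold = (1, 0)
w = perceptron_train(feats, gold, 2)
```

The large-margin approach of the paper replaces this mistake-driven update with a convex program that additionally enforces a margin between the gold structure and all alternatives.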
Parental stress when caring for a child with cancer in Jordan: a cross-sectional survey | BACKGROUND
Most studies report that being parents of a child with cancer is a stressful experience, but these have tended to focus on mothers and few have included both parents. Moreover, studies have focussed on families in Western countries and none have been published examining the psychological outcomes for parents living in an Arabic country. This research explores the stress levels of Jordanian parents caring for a child with cancer in order to identify the psychological needs of parents in this environment and to explore how mothers' and fathers' stress levels might differ.
METHODS
The study was carried out in Jordan using the Perceived Stress Scale 10-items (PSS10). The questionnaire was completed by 300 couples with a child who has cancer and a comparison group of 528 couples where the children do not have any serious illness. Multivariate backward regression analysis was carried out.
RESULTS
Analysis adjusting for spousal stress and sociodemographic predictors revealed that stress levels of mothers with a child who had cancer remained significantly higher than mothers whose children did not have any serious illness (p < 0.001). However, having a child with cancer did not show a significant association with the fathers' reported stress scores (p = 0.476) when spousal stress was in the model, but was highly significant once that was removed (p < 0.001). Parental stress was also analysed for those with a child who has cancer: in models which included the spouse's stress scores and sociodemographic and cancer-related predictors, 64% of the variance was explained for mothers (adjusted R2 = 0.64, p < 0.001) and for fathers (adjusted R2 = 0.64, p < 0.001). Models excluding spousal stress scores explained just 26% of the variance for fathers and 22% for mothers.
CONCLUSIONS
This is the first study into the psychological outcomes for parents living in an Arabic country who care for a child with cancer. Both mothers and fathers with a child diagnosed with cancer reported higher stress levels than those from the normal Jordanian parent population. Mothers and fathers of children with cancer reported significantly different levels of stress to each other but models reveal significant contributions of the stress score of fathers upon mothers, and vice versa. The findings provide evidence of the need for psychological support to be developed for families caring for a child with cancer in Jordan. |
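The adjusted R² values quoted in the results follow the standard penalty for model size. A one-line sketch with illustrative numbers (the inputs below are invented, not the study's regression output):

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1):
    R^2 penalized for the number of predictors p given sample size n."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical: raw R^2 of 0.65 with 300 couples and 8 predictors.
adj = adjusted_r2(0.65, 300, 8)
```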
Domain-specific Question Generation from a Knowledge Base | Question generation has been a research topic for a long time, where a big challenge is how to generate deep and natural questions. To tackle this challenge, we propose a system to generate natural language questions from a domain-specific knowledge base (KB) by utilizing rich web information. A small number of question templates are first created based on the KB and instantiated into questions, which are used as a seed set and further expanded through the web to get more question candidates. A filtering model is then applied to select candidates with high grammaticality and domain relevance. The system is able to generate a large number of in-domain natural language questions with considerable semantic diversity and is easily applicable to other domains. We evaluate the quality of the generated questions by human judgment and the results show the effectiveness of our proposed system. |
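The template-instantiation seed step can be sketched simply: each template carries a typed slot, and every KB entity of that type fills it. The KB contents and template strings below are invented for illustration, not the paper's data:

```python
def instantiate(templates, kb):
    """Fill each (template, slot_type) pair with every KB entity of that
    type. kb maps a type name to a list of entity strings."""
    questions = []
    for tmpl, slot_type in templates:
        for entity in kb.get(slot_type, []):
            questions.append(tmpl.format(entity))
    return questions

kb = {"drug": ["aspirin", "ibuprofen"], "disease": ["influenza"]}
templates = [("What are the side effects of {}?", "drug"),
             ("How is {} transmitted?", "disease")]
qs = instantiate(templates, kb)
```

In the full system these seed questions would then be expanded via the web and passed through the grammaticality/relevance filter.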
Extracting trading rules from the multiple classifiers and technical indicators in stock market | This study intends to mine reasonable trading rules by classifying the up/down direction of price fluctuations for Korea Stock Price Index 200 (KOSPI 200) futures. This research consists of two stages. The first stage classifies the fluctuation direction of the price for KOSPI 200 futures from several technical indicators using artificial intelligence techniques. The second stage mines trading rules to resolve conflicts among the outputs of the first stage using inductive learning. To verify the effectiveness of the proposed approach, this study composes four comparable models and performs statistical tests. Experimental results show that the classification performance of the proposed model outperforms that of the other comparable models. In addition, the proposed model yields higher profit than the other comparable models and a buy-and-hold strategy. |
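The conflict-resolution problem the second stage addresses arises whenever independent indicator-based classifiers disagree. A toy sketch (the two indicators and the majority-vote rule are illustrative stand-ins, not the paper's mined rules):

```python
def sma(prices, k):
    """Simple moving average of the last k prices."""
    return sum(prices[-k:]) / k

def vote_signal(prices):
    """Combine two toy technical indicators by majority vote and emit a
    trading signal: +1 buy, -1 sell, 0 hold (indicators in conflict)."""
    crossover = 1 if sma(prices, 2) > sma(prices, 5) else -1   # trend
    momentum = 1 if prices[-1] > prices[-4] else -1            # momentum
    total = crossover + momentum
    return 0 if total == 0 else (1 if total > 0 else -1)
```

A rule-induction stage, as in the paper, would replace the fixed vote with rules learned from cases where the indicators conflict.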
Dynamic binding in a neural network for shape recognition. | Given a single view of an object, humans can readily recognize that object from other views that preserve the parts in the original view. Empirical evidence suggests that this capacity reflects the activation of a viewpoint-invariant structural description specifying the object's parts and the relations among them. This article presents a neural network that generates such a description. Structural description is made possible through a solution to the dynamic binding problem: Temporary conjunctions of attributes (parts and relations) are represented by synchronized oscillatory activity among independent units representing those attributes. Specifically, the model uses synchrony (a) to parse images into their constituent parts, (b) to bind together the attributes of a part, and (c) to bind the relations to the parts to which they apply. Because it conjoins independent units temporarily, dynamic binding allows tremendous economy of representation and permits the representation to reflect the attribute structure of the shapes represented. |
Morphological Analysis and Generation for Arabic Dialects | We present MAGEAD, a morphological analyzer and generator for the Arabic language family. Our work is novel in that it explicitly addresses the need for processing the morphology of the dialects. MAGEAD provides an analysis to a root+pattern representation, it has separate phonological and orthographic representations, and it allows for combining morphemes from different dialects. |
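The root+pattern representation mentioned above refers to Semitic templatic morphology: a consonantal root is interdigitated into a vocalic pattern. A minimal generation sketch (the pattern notation with `C` slots is a common textbook convention, assumed here; MAGEAD's actual representations are richer):

```python
def interdigitate(root, pattern):
    """Combine a consonantal root with a templatic pattern: each 'C' in
    the pattern consumes the next root consonant in order, other pattern
    characters pass through (e.g. root k-t-b + CaCaC -> katab 'wrote')."""
    radicals = iter(root)
    return "".join(next(radicals) if ch == "C" else ch for ch in pattern)

stem1 = interdigitate("ktb", "CaCaC")     # katab
stem2 = interdigitate("ktb", "maCCuuC")   # maktuub
```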
Involvement of gut microbiome in human health and disease: brief overview, knowledge gaps and research opportunities | The commensal, symbiotic, and pathogenic microbial community which resides inside our body and on our skin (the human microbiome) can perturb host energy metabolism and immunity, and thus significantly influence development of a variety of human diseases. Therefore, the field has attracted unprecedented attention in the last decade. Although a large amount of data has been generated, there are still many unanswered questions and no universal agreement on how the microbiome affects human health. Consequently, this review was written to provide an updated overview of the rapidly expanding field, with a focus on revealing knowledge gaps and research opportunities. Specifically, the review covers animal physiology, optimal microbiome standards, health intervention by manipulating the microbiome, knowledge base building by text mining, microbiota community structure and its implications in human diseases, and health monitoring by analyzing the microbiome in the blood. The review should enhance interest in conducting novel microbiota investigations that will further improve health and therapy. |
An automated region-of-interest segmentation for optic disc extraction | Optic disc segmentation in retinal fundus images plays a critical role in diagnosing a variety of pathologies and abnormalities related to the retina. Most of the abnormalities related to the optic disc lead to structural changes in its inner and outer zones. Optic disc segmentation at the level of the whole retina image degrades the detection sensitivity for these zones. In this paper, we present an automated technique for region-of-interest segmentation of the optic disc region in retinal images. Our segmentation technique reduces the processing area required by optic disc segmentation techniques, leading to notable performance enhancement and reducing the computational cost for each retinal image. The DRIVE, DRISHTI-GS and DiaRetDB1 datasets were used to test and validate our proposed pre-processing technique. |
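A very crude stand-in for optic-disc ROI localization, exploiting the fact that the disc is typically the brightest compact region of a fundus image, is a brute-force search for the brightest fixed-size window. This sketch (synthetic data, not the paper's method) just illustrates how an ROI crop shrinks the processing area:

```python
def brightest_roi(image, win):
    """Return (row, col) of the win x win window with the largest summed
    intensity in a 2-D grayscale image given as nested lists."""
    h, w = len(image), len(image[0])
    best, best_pos = float("-inf"), (0, 0)
    for r in range(h - win + 1):
        for c in range(w - win + 1):
            s = sum(image[r + i][c + j]
                    for i in range(win) for j in range(win))
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Synthetic 8x8 image with a bright 2x2 "disc" at rows 2-3, cols 5-6.
img = [[0.0] * 8 for _ in range(8)]
for r in range(2, 4):
    for c in range(5, 7):
        img[r][c] = 10.0
```

Real pipelines would of course use integral images or convolution rather than this quadratic scan, and add the structural cues the paper relies on.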
Parent and Emergency Physician Comfort with a System of On-Line Emergency-Focused Medical Summaries for Infants with Significant Cardiac Disease | Surveys were developed and administered to assess parental comfort with emergency care for children with special health care needs (CSHCN) with cardiac disease, and the impact of a web-based database of emergency-focused clinical summaries (emergency information forms, EIF) called the Midwest Emergency Medical Services for Children Information System (MEMSCIS) on parental attitudes regarding emergency care of their CSHCN. We hypothesized that MEMSCIS would improve the parent and provider outlook regarding emergencies of young children with heart disease in a randomized controlled trial. Children under age 2 were enrolled in MEMSCIS by study nurses associated with pediatric cardiac centers in a metropolitan area. Parents were surveyed at enrollment and at 1 year on a 5-point Likert scale. Validity and reliability of the survey were evaluated. Study nurses formulated the emergency-focused summaries with cardiologists. One hundred seventy parent subjects (94 study, 76 control) were surveyed at baseline and at 1 year. Parents felt that hospital personnel were well-prepared for emergencies of their children, and this improved from 4.07 ± 1.03 at baseline to 4.24 ± 1.04 at 1 year in study parents who had an EIF for their child and participated in the program (p = 0.0114), but not in control parents. Parents perceived an improved comfort level by pre-hospital (p = 0.0256) and hospital (p = 0.0031) emergency personnel related to the MEMSCIS program. The MEMSCIS program with its emergency-focused web-based clinical summary improved comfort levels for study parents. We speculate that the program facilitated normalization for parents even if the EIF was not used in an emergency during the study. The MEMSCIS program helps to prepare the family and the emergency system for care of CSHCN outside of the medical home. |
Trustworthy Medical Device Software | This report summarizes what the computing research community knows about the role of trustworthy software for safety and effectiveness of medical devices. Research shows that problems in medical device software result largely from a failure to apply well-known systems engineering techniques, especially during specification of requirements and analysis of human factors. Recommendations to increase the trustworthiness of medical device software include (1) regulatory policies that specify outcome measures rather than technology, (2) collection of statistics on the role of software in medical devices, (3) establishment of open-research platforms for innovation, (4) clearer roles and responsibility for the shared burden of software, (5) clarification of the meaning of substantial equivalence for software, and (6) an increase in FDA’s access to outside experts in software. This report draws upon material from research in software engineering and trustworthy computing, public FDA data, and accident reports to provide a high-level understanding of the issues surrounding the risks and benefits of medical device software. |
Text Steganography in chat | The invention of the Internet and its spread throughout the world have changed various aspects of human life, including human relations. Chat is one of the new phenomena that emerged after the Internet and has been welcomed by users, especially young people. In chat rooms, people talk with each other using text messages. Because of the need for quick typing and the high volume of sentences exchanged between users, new abbreviations have been invented for various words and phrases; this new language is known as SMS-texting. On the other hand, the need for safety and security of information, and especially for secret communication, has led to the introduction of numerous methods, among which steganography is a relatively new one. The present paper offers a new method for the secret exchange of information through chat by developing abbreviation-based text steganography that uses the SMS-texting language. The method has been implemented in the Java programming language. |
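The core idea, hiding bits in the choice between a word's full form and its SMS abbreviation, can be sketched in a few lines. The abbreviation table below is a small invented sample (the paper's scheme is richer and implemented in Java; this sketch uses Python for consistency with the other examples here):

```python
# Hypothetical abbreviation pairs: choosing the full form encodes bit 0,
# the SMS-texting form encodes bit 1.
PAIRS = [("you", "u"), ("are", "r"), ("to", "2"), ("for", "4")]

def embed(bits, words):
    """Hide bits by choosing the full or abbreviated form of each word
    that has an SMS abbreviation; all other words pass through unchanged."""
    bits = list(bits)
    table = {full: (full, abbr) for full, abbr in PAIRS}
    out = []
    for w in words:
        if w in table and bits:
            out.append(table[w][bits.pop(0)])
        else:
            out.append(w)
    return out

def extract(words):
    """Recover the hidden bits from which form of each carrier word was used."""
    full = {f for f, _ in PAIRS}
    abbr = {a for _, a in PAIRS}
    return [0 if w in full else 1 for w in words if w in full or w in abbr]
```

One caveat this toy ignores: tokens like "2" also occur naturally in chat, so a real scheme needs disambiguation.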
A Dual-ISM-Band Antenna of Small Size Using a Spiral Structure With Parasitic Element | This letter presents a compact, single-feed, dual-band antenna covering both the 433-MHz and 2.45-GHz Industrial Scientific and Medical (ISM) bands. The antenna has small dimensions of 51 ×28 mm2. A square-spiral resonant element is printed on the top layer for the 433-MHz band. The remaining space within the spiral is used to introduce an additional parasitic monopole element on the bottom layer that is resonant at 2.45 GHz. Measured results show that the antenna has a 10-dB return-loss bandwidth of 2 MHz at 433 MHz and 132 MHz at 2.45 GHz, respectively. The antenna has omnidirectional radiation characteristics with a peak realized gain (measured) of -11.5 dBi at 433 MHz and +0.5 dBi at 2.45 GHz, respectively. |
Plan-Based Dialogue Management in a Physics Tutor | This paper describes an application of APE (the Atlas Planning Engine), an integrated planning and execution system at the heart of the Atlas dialogue management system. APE controls a mixed-initiative dialogue between a human user and a host system, where turns in the ‘conversation’ may include graphical actions and/or written text. APE has full unification and can handle arbitrarily nested discourse constructs, making it more powerful than dialogue managers based on finite-state machines. We illustrate this work by describing Atlas-Andes, an intelligent tutoring system built using APE with the Andes physics tutor as the host. |
Retrieval practice does not safeguard memories from interference-based forgetting | Retrieval enhances long-term retention. However, reactivation of a memory also renders it susceptible to modifications as shown by studies on memory reconsolidation. The present study explored whether retrieval diminishes or enhances subsequent retroactive interference (RI) and intrusions. Participants learned a list of objects. Two days later, they were either asked to recall the objects, given a subtle reminder, or were not reminded of the first learning session. Then, participants learned a second list of objects or performed a distractor task. After another two days, retention of List 1 was tested. Although retrieval enhanced List 1 memory, learning a second list impaired memory in all conditions. This shows that testing did not protect memory from RI. While a subtle reminder before List 2 learning caused List 2 items to later intrude into List 1 recall, very few such intrusions were observed in the testing and the no reminder conditions. The findings are discussed in reference to the reconsolidation account and the testing effect literature, and implications for educational practice are outlined. Retrieval practice or testing is one of the most powerful memory enhancers. Testing that follows shortly after learning benefits long-term retention more than studying the to-be-remembered material again (Roediger & Karpicke, 2006a, 2006b). This effect has been shown using a variety of materials and paradigms, such as text passages (e.g., Roediger & Karpicke, 2006a), paired associates (Allen, Mahler, & Estes, 1969), general knowledge questions (McDaniel & Fisher, 1991), and word and picture lists (e.g., McDaniel & Masson, 1985; Wheeler & Roediger, 1992; Wheeler, Ewers, & Buonanno, 2003). 
Testing effects have been observed in traditional lab as well as educational settings (Grimaldi & Karpicke, 2015; Larsen, Butler, & Roediger, 2008; McDaniel, Anderson, Derbish, & Morrisette, 2007). Testing not only improves long-term retention, it also enhances subsequent encoding (Pastötter, Schicker, Niedernhuber, & Bäuml, 2011), protects memories from the buildup of proactive interference (PI; Nunes & Weinstein, 2012; Wahlheim, 2014), and reduces the probability that the tested items intrude into subsequently studied lists (Szpunar, McDermott, & Roediger, 2008; Weinstein, McDermott, & Szpunar, 2011). The reduced PI and intrusion rates are assumed to reflect enhanced list discriminability or improved within-list organization. Enhanced list discriminability in turn helps participants distinguish different sets or sources of information and allows them to circumscribe the search set during retrieval to the relevant list (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). If testing increases list discriminability, then it should also protect the tested list(s) from RI and intrusions from material that is encoded after retrieval practice. However, testing also necessarily reactivates a memory, and according to the reconsolidation account reactivation re-introduces plasticity into the memory trace, making it especially vulnerable to modifications (e.g., Dudai, 2004; Nader, Schafe, & LeDoux, 2000; for a recent review, see e.g., Hupbach, Gomez, & Nadel, 2013). Increased vulnerability to modification would suggest increased rather than reduced RI and intrusions. 
The few studies addressing this issue have yielded mixed results, with some suggesting that retrieval practice diminishes RI (Halamish & Bjork, 2011; Potts & Shanks, 2012), and others showing that retrieval practice can exacerbate the potential negative effects of post-retrieval learning (e.g., Chan & LaPaglia, 2013; Chan, Thomas, & Bulevich, 2009; Walker, Brakefield, Hobson, & Stickgold, 2003). Chan and colleagues (Chan & Langley, 2011; Chan et al., 2009; Thomas, Bulevich, & Chan, 2010) assessed the effects of testing on suggestibility in a misinformation paradigm. After watching a television episode, participants answered cued-recall questions about it (retrieval practice) or performed an unrelated distractor task. Then, all participants read a narrative, which summarized the video but also contained some misleading information. A final cued-recall test revealed that participants in the retrieval practice condition recalled more misleading details and fewer correct details than participants in the distractor condition; that is, retrieval increased the misinformation effect (retrieval-enhanced suggestibility, RES). Chan et al. (2009) discuss two mechanisms that can explain this finding. First, since testing can potentiate subsequent new learning (e.g., Izawa, 1967; Tulving & Watkins, 1974), initial testing might have improved encoding of the misinformation. Indeed, when a modified final test was used, which encouraged the recall of both the correct information and the misinformation, participants in the retrieval practice condition recalled more misinformation than participants in the distractor condition (Chan et al., 2009). Second, retrieval might have rendered the memory more susceptible to interference by misinformation, an explanation that is in line with the reconsolidation account. Indeed, Chan and LaPaglia (2013) found reduced recognition of the correct information when retrieval preceded the presentation of misinformation (cf. 
Walker et al., 2003 for a similar effect in procedural memory). In contrast to Chan and colleagues’ findings, a study by Potts and Shanks (2012) suggests that testing protects memories from the negative influences of post-retrieval encoding of related material. Potts and Shanks asked participants to learn English–Swahili word pairs (List 1, A–B). One day later, one group of participants took a cued recall test of List 1 (testing condition) immediately before learning English–Finnish word pairs with the same English cues as were used in List 1 (List 2, A–C). Additionally, several control groups were implemented: one group was tested on List 1 without learning a second list, one group learned List 2 without prior retrieval practice, and one group did not participate in this session at all. On the third day, all participants took a final cued-recall test of List 1. Although retrieval practice per se did not enhance List 1 memory (i.e., no testing effect in the groups that did not learn List 2), it protected memory from RI (see Halamish & Bjork, 2011 for a similar result in a one-session study). Crucial for assessing the reconsolidation account is the comparison between the groups that learned List 2 either after List 1 recall or without prior List 1 recall. Contrary to the predictions derived from the reconsolidation account, final List 1 recall was enhanced when retrieval of List 1 preceded learning of List 2. While this clearly shows that testing counteracts RI, it would be premature to conclude that testing prevented the disruption of memory reconsolidation, because (a) retrieval practice without List 2 learning led to minimal forgetting between Day 2 and 3, while retrieval practice followed by List 2 learning led to significant memory decline, and (b) a reactivation condition that is independent from retrieval practice is missing. One could argue that repeating the cue words in List 2 likely reactivated memory for the original associations. 
It has been shown that the strength of reactivation (Detre, Natarajan, Gershman, & Norman, 2013) and the specific reminder structure (Forcato, Argibay, Pedreira, & Maldonado, 2009) determine whether or not a memory will be affected by post-reactivation procedures. The current study re-evaluates the question of how testing affects RI and intrusions. It uses a reconsolidation paradigm (Hupbach, Gomez, Hardt, & Nadel, 2007; Hupbach, Hardt, Gomez, & Nadel, 2008; Hupbach, Gomez, & Nadel, 2009; Hupbach, Gomez, & Nadel, 2011) to assess how testing in comparison to other reactivation procedures affects declarative memory. This paradigm will allow for a direct evaluation of the hypotheses that testing makes declarative memories vulnerable to interference, or that testing protects memories from the potential negative effects of subsequently learned material, as suggested by the list-separation hypothesis (e.g., Congleton & Rajaram, 2012; Halamish & Bjork, 2011; Szpunar et al., 2008). This question has important practical implications. For instance, when students test their memory while preparing for an exam, will such testing increase or reduce interference and intrusions from information that is learned afterwards? |
Inhibition of IL-1β Improves Fatigue in Type 2 Diabetes | Several diseases including microbial infection, rheumatoid arthritis, multiple sclerosis, and cancer have been linked to fatigue. They all have in common an upregulation of cytokines, including interleukin (IL)-1β and tumor necrosis factor-α (TNF-α), which may interfere with clock gene functions (1). Increasing evidence associates type 2 diabetes with inflammatory processes characterized by elevated production of proinflammatory cytokines and infiltration of immune cells. Reducing IL-1 activity in prediabetes and diabetes improves insulin secretion, glycemic control, and markers of systemic inflammation (2–4). Given this background, we hypothesized that fatigue levels may be increased in type 2 diabetes and may be improved by IL-1β antagonism. Within a placebo-controlled, double-blind study of IL-1β antagonism with a monoclonal anti–IL-1β antibody, XOMA052, involving 30 patients with type 2 diabetes (4), we evaluated fatigue using the Fatigue Scale for Motor and Cognitive Functions (5). Besides differentiating between cognitive and motor fatigue, this scale offers a subdivision into different grades of fatigue severity. At baseline, according to predefined cutoff values, 47% of the patients had no, 20% had mild, 17% had moderate, and 16% had severe fatigue, meaning that more than half of the patients suffered from considerable fatigue symptoms compared with a healthy population (5). A significant correlation between fatigue and duration of diabetes was evident (R = 0.532, P = 0.002). This correlation was stronger for cognitive fatigue (R = 0.541, P = 0.002) compared with motor fatigue (R = 0.486, P = 0.007). No correlation was found between fatigue and age, HbA1c, body weight, body temperature, or C-reactive protein. 
One month after treatment with XOMA052, a univariate ANOVA with the pre- and 1-month post-medication difference in total fatigue as the dependent variable and dosage as the fixed factor revealed that in the placebo and the lowest dose group (0.01 mg/kg), fatigue was slightly increased; in the two medium dose groups (0.03 mg/kg and 0.1 mg/kg), fatigue was slightly decreased; and in the two highest dose groups (0.3 mg/kg and 1.0 mg/kg), fatigue was decreased markedly. The effect size for this dose-dependent effect was d = 0.3. When assessing motor and cognitive function separately, a nonparametric analysis of pre- and 1-month post-medication effects revealed a meaningful trend (P = 0.07) toward a decrease in motor fatigue for patients under the 1.0 mg/kg dosage of XOMA052. To further evaluate these findings with respect to the small group sizes, effect sizes for pre- and 1-month post-medication comparisons were calculated. Here it could be confirmed, with an effect size of d = 1.05, that a dosage of 1.0 mg/kg of the monoclonal anti–IL-1β antibody had a favorable effect on motor fatigue. To our knowledge, this is the first study assessing fatigue in diabetes by means of a validated fatigue instrument. It shows that type 2 diabetic patients are more prone to fatigue than healthy individuals, with a prevalence of more than 50%. Fatigue appears to be correlated with the duration of diabetes but not with the extent of glycemia or C-reactive protein levels. Moreover, fatigue seems to partly improve following IL-1β blockade. |
Representing Text for Joint Embedding of Text and Knowledge Bases | Models that learn to represent textual and knowledge base relations in the same continuous latent space are able to perform joint inferences among the two kinds of relations and obtain high accuracy on knowledge base completion (Riedel et al., 2013). In this paper we propose a model that captures the compositional structure of textual relations, and jointly optimizes entity, knowledge base, and textual relation representations. The proposed model significantly improves performance over a model that does not share parameters among textual relations with common sub-structure. |
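The core idea of this abstract, placing knowledge-base relations and textual relations in one shared space so either can score any entity pair, can be illustrated with a deliberately tiny sketch. The vectors, the entity-pair parameterization, and the plain dot-product scorer below are our own illustrative assumptions; the paper's actual model learns these representations and composes textual relations from their sub-structure.

```python
# Toy sketch of joint scoring of entity pairs against KB and textual
# relations in a shared latent space. All numbers are invented.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Entity-pair and relation embeddings share one 2-d space, so a KB
# relation ("capital_of") and a textual relation could score the same pair.
pair_emb = {("paris", "france"): [0.9, 0.1], ("lyon", "france"): [0.2, 0.8]}
rel_emb = {"capital_of": [1.0, 0.0], "city_in": [0.3, 0.7]}

def score(pair, relation):
    """Higher score = the relation is more plausible for the pair."""
    return dot(pair_emb[pair], rel_emb[relation])
```

With these toy vectors, ("paris", "france") scores higher under "capital_of" than ("lyon", "france") does, which is the kind of joint inference used for knowledge base completion.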
Clingcon: The Next Generation | We present the third generation of the constraint answer set system clingcon, combining Answer Set Programming (ASP) with finite domain constraint processing (CP). While its predecessors rely on a blackbox approach to hybrid solving by integrating the CP solver gecode, the new clingcon system pursues a lazy approach using dedicated constraint propagators to extend propagation in the underlying ASP solver clasp. No extension is needed for parsing and grounding clingcon’s hybrid modeling language since both can be accommodated by the new generic theory handling capabilities of the ASP grounder gringo. As a whole, clingcon 3 is thus an extension of the ASP system clingo 5, which itself relies on the grounder gringo and the solver clasp. The new approach of clingcon offers a seamless integration of CP propagation into ASP solving that benefits from the whole spectrum of clasp’s reasoning modes, including for instance multi-shot solving and advanced optimization techniques. This is accomplished by a lazy approach that unfolds the representation of constraints and adds it to that of the logic program only when needed. Although the unfolding is usually dictated by the constraint propagators during solving, it can already be partially (or even totally) done during preprocessing. Moreover, clingcon’s constraint preprocessing and propagation incorporate several well established CP techniques that greatly improve its performance. We demonstrate this via an extensive empirical evaluation contrasting, first, the various techniques in the context of CSP solving and, second, the new clingcon system with other hybrid ASP systems. |
Level of Detail for Real-Time Volumetric Terrain Rendering | Terrain rendering is an important component of many GIS applications and simulators. Most methods rely on heightmap-based terrain which is simple to acquire and handle, but has limited capabilities for modeling features like caves, steep cliffs, or overhangs. In contrast, volumetric terrain models, e.g. based on isosurfaces can represent arbitrary topology. In this paper, we present a fast, practical and GPU-friendly level of detail algorithm for large scale volumetric terrain that is specifically designed for real-time rendering applications. Our algorithm is based on a longest edge bisection (LEB) scheme. The resulting tetrahedral cells are subdivided into four hexahedra, which form the domain for a subsequent isosurface extraction step. The algorithm can be used with arbitrary volumetric models such as signed distance fields, which can be generated from triangle meshes or discrete volume data sets. In contrast to previous methods our algorithm does not require any stitching between detail levels. It generates crack free surfaces with a good triangle quality. Furthermore, we efficiently extract the geometry at runtime and require no preprocessing, which allows us to render infinite procedural content with low memory |
APPLYING DATA MINING TECHNIQUES ON ACADEMIC INSTITUTIONAL DATA USING WEKA | These days, educational institutions and organizations generate huge amounts of data, more than a person could read in a lifetime. It is not possible for one person to learn, understand, decode, and interpret all of it to find valuable information. Data mining is one of the most popular methods for identifying hidden patterns in large databases: users can extract historical, hidden, and previously unknown information from large repositories by applying the required mining techniques. Two broad approaches are used to classify and predict: supervised learning and unsupervised learning. Classification is a supervised technique that performs induction on current (existing) data and predicts a future class; its main objective is to assign an unknown class by consulting known class labels, as in the k-nearest neighbor algorithm (k-NN), Naïve Bayes (NB), support vector machines (SVM), and decision trees. Clustering is an unsupervised technique that builds a model to group similar objects into categories without consulting a class label; its main objective is to group objects by their similarities and dissimilarities based on the distances between them, and to detect outliers. In this paper the Weka tool is used to analyze the institutional academic results of undergraduate students of computer science and engineering by applying preprocessing and classification. Keywords— Weka, classifier, supervised learning |
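The supervised/unsupervised distinction this abstract describes can be sketched in a few lines. Weka itself is a Java toolkit; the pure-Python 1-nearest-neighbour classifier and the greedy distance-based grouping below, along with the toy exam-score data, are illustrative assumptions of ours, not the paper's actual pipeline.

```python
# Supervised vs. unsupervised learning on toy student exam scores.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, point):
    """1-NN classification: consult known labels, return the nearest one."""
    return min(train, key=lambda t: dist(t[0], point))[1]

def cluster(points, threshold):
    """Greedy clustering: join the first cluster whose seed is within
    `threshold`, otherwise start a new cluster. No labels consulted."""
    clusters = []
    for p in points:
        for c in clusters:
            if dist(c[0], p) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Toy training data: (exam-score features, known class label).
train = [([85, 90], "pass"), ([80, 75], "pass"), ([30, 40], "fail")]
```

The classifier labels a new student by its nearest labelled neighbour, while `cluster` groups the same feature vectors purely by distance, mirroring the supervised/unsupervised split in the text.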
Hepatic arterial embolization versus chemoembolization in the treatment of liver metastases from well-differentiated midgut endocrine tumors: a prospective randomized study. | BACKGROUND
Liver surgery is the best treatment for endocrine liver metastases, but it is often impossible due to diffuse disease. Systemic chemotherapy is poorly effective. Hepatic arterial embolization (HAE) and chemoembolization (HACE) have shown efficacy but have never been compared.
PATIENTS AND METHODS
Patients with progressive unresectable liver metastases from midgut endocrine tumors were randomly assigned to receive HAE or HACE (two procedures at 3-month interval). The primary end point was the 2-year progression-free survival (PFS) rate. Secondary end points were response rates, overall survival, and safety.
RESULTS
Twelve patients were assigned to receive HACE and 14 to receive HAE. The patient characteristics were well matched across the treatment arms. The 2-year PFS rates were 38 and 44% in the HACE and HAE arms, respectively (p = 0.90). Age, gender, previous resection of the primary tumor or liver metastases, extent of liver involvement, and concomitant treatment with somatostatin analogues were not associated with changes in PFS, whereas elevated baseline urinary 5-HIAA and serum chromogranin A levels were associated with shorter PFS. The 2-year overall survival rates were 80 and 100% in the HACE and HAE arms, respectively (p = 0.16). The disease control rate on CT scan was 95%. Grade 3 toxicity occurred in 19% of patients, with no treatment-related deaths and no differences in the treatment arms.
CONCLUSION
HACE and HAE are safe and permit tumor control in 95% of patients with progressive liver metastases from midgut endocrine tumors. The 2-year PFS was not higher among patients receiving HACE, not favoring the hypothesis of an additive efficacy of arterial chemotherapy or embolization alone. |
Developing Consumers' Brand Loyalty in Companies' Microblogs: The Roles of Social- and Self- Factors | This paper aims to explore how social- and self-factors may affect consumers’ brand loyalty while they follow companies’ microblogs. Drawing upon the commitment-trust theory, social influence theory, and self-congruence theory, we propose that network externalities, social norms, and self-congruence are the key determinants in the research model. The impacts of these factors on brand loyalty will be mediated by brand trust and brand commitment. We empirically test the model through an online survey on an existing microblogging site. The findings illustrate that network externalities and self-congruence can positively affect brand trust, which subsequently leads to brand commitment and brand loyalty. Meanwhile, social norms, together with self-congruence, directly influence brand commitment. Brand commitment is then positively associated with brand loyalty. We believe that the findings of this research can contribute to the literature. We offer new insights regarding how consumers’ brand loyalty develops from the two social-factors and their self-congruence with the brand. Company managers could also apply our findings to strengthen their relationship marketing with consumers on microblogging sites. |
Horner’s syndrome following internal jugular vein cannulation | We present two cases of Horner’s syndrome occurring following uncomplicated internal jugular venous cannulation. An awareness of this potential complication will reduce confusion over the aetiology of anisocoria in critically ill patients. This consideration is important, since lesions in the central nervous system or carotid dissection following trauma might otherwise be suspected. |
Phosphatase activity in temperate pasture soils: Potential regulation of labile organic phosphorus turnover by phosphodiesterase activity. | Phosphatase enzymes regulate organic phosphorus (P) turnover in soil, but a clear understanding remains elusive. To investigate this, phosphomonoesterase and phosphodiesterase activities were determined by using para-nitrophenol (pNP) analogue substrates in a range of temperate pasture soils from England and Wales. Substrate-induced phosphatase activity ranged between 2.62 and 12.19 micromol pNP g-1 soil h-1 for phosphomonoesterase and between 0.25 and 2.24 micromol pNP g-1 soil h-1 for phosphodiesterase. Activities were correlated strongly with soil pH and labile organic P extracted in sodium bicarbonate, although the relationships differed markedly for the two enzymes. Acidic soils contained high phosphomonoesterase activity, low phosphodiesterase activity, and high concentrations of labile organic P, whereas the reverse was true in more neutral soils. As most of the organic P inputs to soil are phosphate diesters, it therefore seems likely that phosphodiesterase activity regulates labile organic P turnover in pasture soils. The low phosphodiesterase activity in acidic soils may be linked to the dominance of fungi or an effect of sorption on the enzyme. These results suggest that greater emphasis should be placed on understanding the role of phosphodiesterase activity in the cycling of soil organic P. |
Getting to the origins of photosynthesis. | One of the most important areas in all of biology is the evolution of photosynthesis. Some species of single-celled cyanobacteria, through photosynthesis, forever changed the atmosphere of the early Earth by filling it with oxygen, allowing a huge expansion in the forms of life possible on the planet. Cardona et al. (2015), in the advance online edition of Molecular Biology and Evolution, examined the evolutionary origins of the D1 protein in cyanobacteria, which forms the heart of Photosystem II, the oxygen-evolving machine of photosynthesis. Photosystem II’s role is to procure electrons for photosynthesis, and it does this by ripping them out of water, releasing oxygen as a byproduct. The research team selected all known D1 sequences from cyanobacteria, along with representatives from algae and plants, to compare the protein sequence variation. They showed that D1 exists in at least five major forms, some of which could have originated before the evolution of water oxidation. This allowed the team to make a detailed evolutionary tree and to propose a sequence of events for the origin of water splitting in Photosystem II at an unprecedented level of detail. The earliest diverging form of D1 has maintained ancestral characteristics and was found in the recently sequenced genome of Gloeobacter kilaueensis JS-1 (found in a lava cave in Hawaii), probably one of the most primitive types of cyanobacteria known. A remarkable evolutionary innovation occurred around 3.2–2.7 Ga in a bacterial ancestor of cyanobacteria, made possible by key transitional forms of D1. Their evidence suggests that water splitting could have evolved relatively fast, after just a few changes to the ancestral D1 protein of Photosystem II. This ancestor contained several forms of D1 and may have been a lot more complex than previously thought, already highly specialized for the solar-powered oxidation of water. 
“I think the most significant implication of the paper is that now the evolution of biological water oxidation can be addressed experimentally,” said Cardona. |
"Cultures in negotiation": teachers' acceptance/resistance attitudes considering the infusion of technology into schools | A teachers’ training project, employing teacher-mentored in-school training approach, has been recently initiated in Greek secondary education for the introduction of Information and Communication Technology (ICT) into the classroom. Data resulting from this project indicate that although teachers express considerable interest in learning how to use technology they need consistent support and extensive training in order to consider themselves able for integrating it into their instructional practice. Teachers are interested in using ICT (a) to attain a better professional profile, and (b) to take advantage of any possible learning benefits offered by ICT but always in the context of the school culture. They are willing to explore open and communicative modes of ICT based teaching whenever school objectives permit, otherwise they appear to cautiously adapt the use of ICT to the traditional teacher-centered mode of teaching (strongly connected to the established student examination system). Teachers’ attitude to adapt ICT mode of use is supported by research evidence that emphasize the situational character of knowledge and expertise. Authors employ a model premised on Perceptual Control Theory to interpret available data and discuss the view that introducing ICT into schools can be understood as a “negotiation” process between cultures. |
Ownership types for safe programming: preventing data races and deadlocks | This paper presents a new static type system for multithreaded programs; well-typed programs in our system are guaranteed to be free of data races and deadlocks. Our type system allows programmers to partition the locks into a fixed number of equivalence classes and specify a partial order among the equivalence classes. The type checker then statically verifies that whenever a thread holds more than one lock, the thread acquires the locks in the descending order.Our system also allows programmers to use recursive tree-based data structures to describe the partial order. For example, programmers can specify that nodes in a tree must be locked in the tree order. Our system allows mutations to the data structure that change the partial order at runtime. The type checker statically verifies that the mutations do not introduce cycles in the partial order, and that the changing of the partial order does not lead to deadlocks. We do not know of any other sound static system for preventing deadlocks that allows changes to the partial order at runtime.Our system uses a variant of ownership types to prevent data races and deadlocks. Ownership types provide a statically enforceable way of specifying object encapsulation. Ownership types are useful for preventing data races and deadlocks because the lock that protects an object can also protect its encapsulated objects. This paper describes how to use our type system to statically enforce object encapsulation as well as prevent data races and deadlocks. The paper also contains a detailed discussion of different ownership type systems and the encapsulation guarantees they provide. |
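The lock-ordering discipline this abstract's type system verifies statically can be illustrated with a small dynamic sketch. The integer "levels" below are an illustrative stand-in for the paper's lock equivalence classes and partial order, and the runtime check stands in for what the type checker proves at compile time; this is not the paper's actual system.

```python
import threading

class OrderedLock:
    """Lock with a fixed level; threads must acquire in descending order,
    which rules out the circular-wait condition behind deadlocks."""

    _held = threading.local()  # per-thread stack of held lock levels

    def __init__(self, level):
        self.level = level
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(OrderedLock._held, "levels", [])
        # Dynamic analogue of the static check: a newly acquired lock
        # must rank strictly below every lock this thread already holds.
        if held and self.level >= held[-1]:
            raise RuntimeError("lock order violation")
        self._lock.acquire()
        OrderedLock._held.levels = held + [self.level]

    def release(self):
        self._lock.release()
        OrderedLock._held.levels = getattr(OrderedLock._held, "levels", [])[:-1]
```

Acquiring a level-2 lock and then a level-1 lock succeeds, while the reverse order is rejected before any wait can occur; the paper's contribution is enforcing this (and mutable tree-based orders) entirely at compile time.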
Effects of Ni-Ti-Cu alloy composition and heat treatment temperature after cold working on phase transformation characteristics | With regard to Ni-50Ti-Cu (at%) shape memory alloys, phase transformation characteristics under no stress were studied at the copper content of 6–9 at% using differential scanning calorimetry and X-ray diffraction analysis. In solution-treated materials, B2-orthorhombic transformation occurs at the copper content of 7.6 at% or more. With the increase in copper content, the temperature for B2-orthorhombic transformation gradually increases, whereas the temperature for the orthorhombic-monoclinic phase transformation resulting from decreasing temperature rapidly falls. The phase transformation temperature in 27% cold-worked materials remains virtually constant, despite the copper content, and increases with increasing heat-treatment temperature. The hysteresis in the B2-orthorhombic transformation stabilized via cold working is as low as ∼18 K. |
Moving towards a new vision: implementation of a public health policy intervention | BACKGROUND
Public health systems in Canada have undergone significant policy renewal over the last decade in response to threats to the public's health, such as severe acute respiratory syndrome. There is limited research on how public health policies have been implemented or what has influenced their implementation. This paper explores policy implementation in two exemplar public health programs -chronic disease prevention and sexually-transmitted infection prevention - in Ontario, Canada. It examines public health service providers', managers' and senior managements' perspectives on the process of implementation of the Ontario Public Health Standards 2008 and factors influencing implementation.
METHODS
Public health staff from six health units representing rural, remote, large and small urban settings were included. We conducted 21 focus groups and 18 interviews between 2010 (manager and staff focus groups) and 2011 (senior management interviews) involving 133 participants. Research assistants coded transcripts and researchers reviewed these; the research team discussed and resolved discrepancies. To facilitate a breadth of perspectives, several team members helped interpret the findings. An integrated knowledge translation approach was used, reflected by the inclusion of academics as well as decision-makers on the team and as co-authors.
RESULTS
Front line service providers often were unaware of the new policies but managers and senior management incorporated them in operational and program planning. Some participants were involved in policy development or provided feedback prior to their launch. Implementation was influenced by many factors that aligned with Greenhalgh and colleagues' empirically-based Diffusion of Innovations in Service Organizations Framework. Factors and related components that were most clearly linked to the OPHS policy implementation were: attributes of the innovation itself; adoption by individuals; diffusion and dissemination; the outer context - interorganizational networks and collaboration; the inner setting - implementation processes and routinization; and, linkage at the design and implementation stage.
CONCLUSIONS
Multiple factors influenced public health policy implementation. Results provide empirical support for components of Greenhalgh et al.'s framework and suggest two additional components - the role of external organizational collaborations and partnerships, as well as planning processes, in influencing implementation. Government and public health organizations should consider these factors when promoting new or revised public health policies as they evolve over time. A successful policy implementation process in Ontario has helped to move public health towards the new vision. |
The role of hotel owners: the influence of corporate strategies on hotel performance. | Purpose – The purpose of this paper is to examine corporate strategic effects on hotel unit performance. Taking a hotel owner’s perspective, the relationship between four types of the owner’s corporate-level strategies and hotel property financial performance is examined. Design/methodology/approach – This study is built on a secondary data set provided by Smith Travel Research. A total of 2,012 hotels across the USA were analyzed for the period 2003-2005. Findings – The findings support the existence of corporate effects in the US lodging industry. It is revealed that a hotel owner’s corporate strategies do influence hotel property-level financial performance. Specifically, a hotel owner’s expertise in implementing superior strategies regarding segment, brand, operator, and location (i.e. state) is critical to hotel unit financial performance. Research limitations/implications – The main limitations of this study include the limited number of years with available data, lack of knowledge of the names of hotel owners, brands and operators, and performance measures focusing only on operating rather than value/return measures. Practical implications – This research shows that a hotel owner can have significant influence on the operating performance of its hotel properties by implementing strategies regarding its properties’ locations, segments, brand affiliations and operators. Specifically, brand affiliation has shown a consistently larger impact on both revenue and profit than other corporate strategies, and consequently should receive particular attention from the owner, who should carefully assess a brand’s potential contribution before engaging in a franchise agreement. 
Originality/value – This research expands the strategy research in the hospitality field by linking two key strategy constructs – corporate effects and corporate strategy – together and by revealing their collective influence on hotel performance. |
Real-Time GPS Spoofing Detection via Correlation of Encrypted Signals | A method for detecting the spoofing of civilian GPS signals has been implemented and successfully tested in a real-time system. GPS signal spoofing is an attack method whereby a third party transmits a signal that appears authentic but induces the receiver under attack to compute an erroneous navigation solution, time, or both. The detection system described herein provides a defense against such attacks. It makes use of correlations between the unknown encrypted GPS L1 P(Y) code signals from two narrow-band civilian receivers to verify the presence or absence of spoofing. One of these receivers is assumed to be at a secure location that is not subject to spoofing. The other receiver is the potential spoofing victim for which the present developments constitute a defense. Successful detection results are presented using a reference receiver in Ithaca, New York, a victim receiver in Austin, Texas, and a spoofer in Austin, Texas. |
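The core check described above, correlating the unknown encrypted P(Y)-code observations of a secure reference receiver against those of a potential victim, can be sketched with a toy baseband model. Everything below (chip count, noise level, the 0.1 decision threshold) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: both authentic receivers observe the same unknown +/-1
# "encrypted" chip sequence buried in independent noise; a spoofed victim
# tracks a counterfeit signal that lacks that hidden sequence.
chips = rng.choice([-1.0, 1.0], size=4096)
reference = chips + rng.normal(0.0, 2.0, size=chips.size)
authentic_victim = chips + rng.normal(0.0, 2.0, size=chips.size)
spoofed_victim = (rng.choice([-1.0, 1.0], size=chips.size)
                  + rng.normal(0.0, 2.0, size=chips.size))

def detection_statistic(a, b):
    # Normalized cross-correlation at zero lag: large when both receivers
    # carry the same hidden chips, near zero otherwise.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

authentic_score = detection_statistic(reference, authentic_victim)
spoofed_score = detection_statistic(reference, spoofed_victim)
print(authentic_score > 0.1, abs(spoofed_score) < 0.1)
```

In the real system the correlation is computed only after the two receivers' data streams are aligned in time and carrier phase; the toy model skips that step.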
Preface-Emerging Technologies and Landmark Systems for Learning Mathematics and Science: Dedicated to the Memory of Erica Melis-Part 2 | It is our distinct pleasure, as co-editors, to introduce this special edition of the International Journal of Artificial Intelligence in Education. Not only do we have four very interesting and high quality papers about emerging educational technologies in mathematics and science to present, but we also have the opportunity to dedicate this volume to our colleague and friend, Erica Melis, who passed away in February 2011 at the age of 61. We all knew Erica well and two of us (McLaren, Sosnovsky) worked closely with her for several years and personally witnessed the bravery with which she fought her illness and faced her mortality. Erica was an important figure in AI in Education (AIED) research, tirelessly working on both new ideas and more established, landmark research in the area of education and technology. This volume focuses on one aspect of Erica’s research life – new ideas. A soon-to-be released companion issue of the International Journal of Artificial Intelligence in Education will focus on the second passion of Erica’s research life – landmark instructional systems for supporting learners and learning. Erica was a dedicated and rigorous researcher, and led a full life. She was born in Cuba in 1949 to a German Jewish family in exile, eventually returning to Germany to live and lead her academic life. In East Germany, where she first lived upon returning to Germany, she studied Kripke Logic. When she moved to the west, her academic interests shifted to a study of analogy and case-based reasoning, subfields of artificial intelligence. She also did seminal proof planning work with Joerg Siekmann, who eventually became her husband. She was always keenly interested in international collaborations, spending time … |
CYC: A Large-Scale Investment in Knowledge Infrastructure | Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 10^5 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 10^6 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology. |
A PWM LLC Type Resonant Converter Adapted to Wide Output Range in PEV Charging Applications | In conventional LLC-based plug-in electric vehicle (PEV) onboard chargers, the battery pack voltage varies in a wide range with the change of state of charge. This makes it difficult to optimally design the pulse frequency modulated LLC resonant converter. Besides, the voltage regulation of the LLC converter is highly dependent on the load conditions. In this paper, a modified pulse width modulated (PWM) LLC type resonant topology (PWM-LLC) is proposed and investigated in PEV charging applications. The switching frequency of the primary LLC network is constant and equal to the resonant frequency. The voltage regulation is achieved by modulating the duty cycle of the secondary side auxiliary MOSFET. Compared with the conventional LLC topology, the proposed topology shrinks the magnetic component size and achieves a wide and fixed voltage gain range independent of load conditions. Meanwhile, zero-voltage-switching and zero-current-switching are realized among all MOSFETs and diodes, respectively. A 100-kHz, 1-kW converter prototype, generating 250–420 V output from the 390-V dc link, is designed and tested to verify the proof of concept. The prototype demonstrates 96.7% peak efficiency and robust performance over wide voltage and load ranges. |
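The fixed switching frequency mentioned above equals the resonant frequency of the LLC tank, f_r = 1/(2π√(L_r C_r)). A minimal check of that relationship is shown below; the tank values are assumed for illustration only (chosen to land near the paper's 100-kHz design point, not the prototype's actual components):

```python
import math

# Assumed resonant-tank values for illustration only.
L_r = 25e-6    # resonant inductance, henries
C_r = 101e-9   # resonant capacitance, farads

# Series-resonant frequency of the L_r / C_r tank.
f_r = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))
print(f"resonant frequency ~ {f_r / 1e3:.1f} kHz")
```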
Geographic aspects of international relations |
Bolzano and the Analytical Tradition | In the course of the last few decades, Bolzano has emerged as an important player in accounts of the history of philosophy. This should be no surprise. Few authors stand at a more central junction in the development of modern thought. Bolzano's contributions to logic and the theory of knowledge alone straddle three of the most important philosophical traditions of the 19th and 20th centuries: the Kantian school, the early phenomenological movement and what has come to be known as analytical philosophy. This paper identifies three Bolzanian theoretical innovations that warrant his inclusion in the analytical tradition: the commitment to ‘logical realism’, the adoption of a substitutional procedure for the purpose of defining logical properties and a new theory of a priori cognition that presents itself as an alternative to Kant's. All three innovations concur to deliver what counts as the most important development of logic and its philosophy between Aristotle and Frege. In the final part of the paper, I defend Bolzano against a common objection and explain that these theoretical innovations are also supported by views on syntax, which though marginal are both workable and philosophically interesting. © 2013 John Wiley & Sons Ltd |
Flawed Self-Assessment: Implications for Health, Education, and the Workplace. | Research from numerous corners of psychological inquiry suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. We review empirical findings on the imperfect nature of self-assessment and discuss implications for three real-world domains: health, education, and the workplace. In general, people's self-views hold only a tenuous to modest relationship with their actual behavior and performance. The correlation between self-ratings of skill and actual performance in many domains is moderate to meager; indeed, at times, other people's predictions of a person's outcomes prove more accurate than that person's self-predictions. In addition, people overrate themselves. On average, people say that they are "above average" in skill (a conclusion that defies statistical possibility), overestimate the likelihood that they will engage in desirable behaviors and achieve favorable outcomes, furnish overly optimistic estimates of when they will complete future projects, and reach judgments with too much confidence. Several psychological processes conspire to produce flawed self-assessments. Research focusing on health echoes these findings. People are unrealistically optimistic about their own health risks compared with those of other people. They also overestimate how distinctive their opinions and preferences (e.g., discomfort with alcohol) are among their peers, a misperception that can have a deleterious impact on their health. Unable to anticipate how they would respond to emotion-laden situations, they mispredict the preferences of patients when asked to step in and make treatment decisions for them. Guided by mistaken but seemingly plausible theories of health and disease, people misdiagnose themselves, a phenomenon that can have severe consequences for their health and longevity. 
Similarly, research in education finds that students' assessments of their performance tend to agree only moderately with those of their teachers and mentors. Students seem largely unable to assess how well or poorly they have comprehended material they have just read. They also tend to be overconfident in newly learned skills, at times because the common educational practice of massed training appears to promote rapid acquisition of skill (as well as self-confidence) but not necessarily the retention of skill. Several interventions, however, can be introduced to prompt students to evaluate their skill and learning more accurately. In the workplace, flawed self-assessments arise all the way up the corporate ladder. Employees tend to overestimate their skill, making it difficult to give meaningful feedback. CEOs also display overconfidence in their judgments, particularly when stepping into new markets or novel projects, for example proposing acquisitions that hurt, rather than help, the price of their company's stock. We discuss several interventions aimed at circumventing the consequences of such flawed assessments; these include training people to routinely make cognitive repairs correcting for biased self-assessments and requiring people to justify their decisions in front of their peers. The act of self-assessment is an intrinsically difficult task, and we enumerate several obstacles that prevent people from reaching truthful self-impressions. We also propose that researchers and practitioners should recognize self-assessment as a coherent and unified area of study spanning many subdisciplines of psychology and beyond. Finally, we suggest that policymakers and other people who make real-world assessments should be wary of self-assessments of skill, expertise, and knowledge, and should consider ways of repairing self-assessments that may be flawed. |
Emotion regulation and decision making under risk and uncertainty. | It is well established that emotion plays a key role in human social and economic decision making. The recent literature on emotion regulation (ER), however, highlights that humans typically make efforts to control emotion experiences. This leaves open the possibility that decision effects previously attributed to acute emotion may be a consequence of acute ER strategies such as cognitive reappraisal and expressive suppression. In Study 1, we manipulated ER of laboratory-induced fear and disgust, and found that the cognitive reappraisal of these negative emotions promotes risky decisions (reduces risk aversion) in the Balloon Analogue Risk Task and is associated with increased performance in the prehunch/hunch period of the Iowa Gambling Task. In Study 2, we found that naturally occurring negative emotions also increase risk aversion in Balloon Analogue Risk Task, but the incidental use of cognitive reappraisal of emotions impedes this effect. We offer evidence that the increased effectiveness of cognitive reappraisal in reducing the experience of emotions underlies its beneficial effects on decision making. |
Treatment of Refractory and Super-refractory Status Epilepticus | Refractory and super-refractory status epilepticus (SE) are serious illnesses with a high risk of morbidity and even fatality. In the setting of refractory generalized convulsive SE (GCSE), there is ample justification to use continuous infusions of highly sedating medications—usually midazolam, pentobarbital, or propofol. Each of these medications has advantages and disadvantages, and the particulars of their use remain controversial. Continuous EEG monitoring is crucial in guiding the management of these critically ill patients: in diagnosis, in detecting relapse, and in adjusting medications. Forms of SE other than GCSE (and its continuation in a “subtle” or nonconvulsive form) should usually be treated far less aggressively, often with nonsedating anti-seizure drugs (ASDs). Management of “non-classic” NCSE in ICUs is very complicated and controversial, and some cases may require aggressive treatment. One of the largest problems in refractory SE (RSE) treatment is withdrawing coma-inducing drugs, as the prolonged ICU courses they prompt often lead to additional complications. In drug withdrawal after control of convulsive SE, nonsedating ASDs can assist; medical management is crucial; and some brief seizures may have to be tolerated. For the most refractory of cases, immunotherapy, ketamine, ketogenic diet, and focal surgery are among several newer or less standard treatments that can be considered. The morbidity and mortality of RSE is substantial, but many patients survive and even return to normal function, so RSE should be treated promptly and as aggressively as the individual patient and type of SE indicate. |
Formation of non-magnetic region in ferromagnetic stainless steel by laser alloying | We studied a method of forming a non-magnetic region in ferritic stainless steel by laser alloying, as well as the cause of cracks in this method. As a result, we obtained a crack-free non-magnetic region (magnetic permeability less than 1.1) by laser alloying. |
Design of Compact Wide Stopband Microstrip Low-pass Filter using T-shaped Resonator | In this letter, a compact microstrip low-pass filter (LPF) using a T-shaped resonator with a wide stopband is presented. The proposed LPF suppresses harmonics up to the eighth and has a low insertion loss of 0.12 dB. A bandstop structure using a stepped-impedance resonator and two open-circuit stubs is used to achieve a wide stopband with an attenuation level better than −20 dB from 3.08 up to 22 GHz. The proposed filter, with a −3-dB cutoff frequency of 2.68 GHz, has been designed, fabricated, and measured. The operation of the LPF is analyzed using an equivalent circuit model. Simulation results are verified by measurements, and excellent agreement between them is observed. |
Ternary Residual Networks | Sub-8-bit representation of DNNs incurs a noticeable loss of accuracy despite rigorous (re)training at low precision. Such loss of accuracy essentially makes them equivalent to a much shallower counterpart, diminishing the power of being deep networks. To address this accuracy drop, we introduce the notion of residual networks, where we add more low-precision edges to sensitive branches of the sub-8-bit network to compensate for the lost accuracy. Further, we present a perturbation theory to identify such sensitive edges. Aided by such an elegant trade-off between accuracy and model size, the 8-2 architecture (8-bit activations, ternary weights), enhanced by residual ternary edges, turns out to be sophisticated enough to achieve similar accuracy as the 8-8 representation (∼1% drop from our FP-32 baseline), despite a ∼1.6× reduction in model size, a ∼26× reduction in the number of multiplications, and a potential ∼2× inference speedup compared to the 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on the ImageNet dataset. Moreover, depending on the varying accuracy requirements in a dynamic environment, the deployed low-precision model can be upgraded/downgraded on the fly by partially enabling/disabling residual connections. For example, disabling the least important residual connections in the above enhanced network yields an accuracy drop of ∼2% (from our FP-32 baseline), despite a ∼1.9× reduction in model size, a ∼32× reduction in the number of multiplications, and a potential ∼2.3× inference speedup compared to the 8-8 representation. Finally, all the ternary connections are sparse in nature, and the residual ternary conversion can be done in a resource-constrained setting without any low-precision (re)training and without accessing the data. |
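The ternary weight sets referred to above are commonly obtained by threshold-based ternarization. The sketch below shows one generic scheme (threshold at 0.7× the mean absolute weight, a value borrowed from the ternary-weight-network literature); the paper's exact conversion procedure may differ:

```python
import numpy as np

def ternarize(w, threshold_factor=0.7):
    """Map full-precision weights to {-s, 0, +s} for a layer-wise scale s."""
    delta = threshold_factor * np.mean(np.abs(w))   # ternarization threshold
    mask = np.abs(w) > delta                        # entries that stay nonzero
    scale = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return scale * np.sign(w) * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, size=1000)   # stand-in for one layer's weights
w_t, mask = ternarize(w)

sparsity = 1.0 - mask.mean()   # fraction of weights quantized to exactly zero
print(len(np.unique(w_t)), round(float(sparsity), 2))
```

The three-valued support and the induced sparsity are what make the multiplication savings quoted in the abstract possible.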
A dual-band circularly polarized stacked microstrip antenna with single-fed for GPS applications | A stacked microstrip antenna fed by a single probe is presented to obtain dual-band circularly polarized (CP) characteristics using double layers of truncated square patches. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz for the global positioning system (GPS). With the optimized design, the measured axial ratio (AR) bandwidths at the centre frequencies of L1 and L2 are both greater than 50 MHz, while the impedance characteristics within the AR bandwidth satisfy the requirement of VSWR less than 2. At the L1 and L2 frequencies, the measured AR is 0.7 dB and 0.3 dB, respectively. |
Integrated online trajectory planning and optimization in distinctive topologies | This paper presents a novel integrated approach for efficient optimization-based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies, thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. |
How Alluring Are Dark Personalities? The Dark Triad and Attractiveness in Speed Dating | Dark Triad traits (narcissism, psychopathy, and Machiavellianism) are linked to the pursuit of short-term mating strategies, but they may have differential effects on actual mating success in naturalistic scenarios: Narcissism may be a facilitator for men’s short-term mating success, while Machiavellianism and psychopathy may be detrimental. To date, little is known about the attractiveness of Dark Triad traits in women. In a speed-dating study, we assessed participants’ Dark Triad traits, Big Five personality traits, and physical attractiveness in N=90 heterosexual individuals (46 women and 44 men). Each participant rated each partner’s mate appeal for short- and long-term relationships. Across both sexes, narcissism was positively associated with mate appeal for short- and long-term relationships. Further analyses indicated that these associations were due to the shared variance among narcissism and extraversion in men and narcissism and physical attractiveness in women, respectively. In women, psychopathy was also positively associated with mate appeal for short-term relationships. Regarding mating preferences, narcissism was found to involve greater choosiness in the rating of others’ mate appeal (but not actual choices) in men, while psychopathy was associated with greater openness towards short-term relationships in women. Copyright © 2016 European Association of Personality Psychology |
EmoSenticSpace: A novel framework for affective common-sense reasoning | Emotions play a key role in natural language understanding and sensemaking. Pure machine learning usually fails to recognize and interpret emotions in text accurately. The need for knowledge bases that give access to semantics and sentics (the conceptual and affective information) associated with natural language is growing exponentially in the context of big social data analysis. To this end, this paper proposes EmoSenticSpace, a new framework for affective common-sense reasoning that extends WordNet-Affect and SenticNet by providing both emotion labels and polarity scores for a large set of natural language concepts. The framework is built by means of fuzzy c-means clustering and support-vector-machine classification, and takes into account a number of similarity measures, including point-wise mutual information and emotional affinity. EmoSenticSpace was tested on three emotion-related natural language processing tasks, namely sentiment analysis, emotion recognition, and personality detection. In all cases, the proposed framework outperforms the state-of-the-art. In particular, the direct evaluation of EmoSenticSpace against psychological features provided in the benchmark ISEAR dataset shows a 92.15% agreement. 2014 Elsevier B.V. All rights reserved. |
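The fuzzy c-means stage named above can be written compactly in NumPy. This is the generic textbook algorithm on toy data, not EmoSenticSpace's actual pipeline, which couples the clustering with support-vector-machine classification and bespoke similarity measures:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Return soft membership matrix u (n x c) and centroids v (c x d)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(iters):
        w = u ** m                               # fuzzified memberships
        v = (w.T @ X) / w.sum(axis=0)[:, None]   # weighted centroid update
        d = np.linalg.norm(X[:, None, :] - v[None], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))         # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return u, v

# Two well-separated toy "concept" clusters in a 2-D feature space.
X = np.vstack([np.zeros((20, 2)), np.full((20, 2), 5.0)])
u, v = fuzzy_c_means(X)
print(u.shape, v.shape)
```

Unlike hard k-means, each concept keeps a graded membership in every cluster, which is what lets a concept carry partial affinity to several emotion labels.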
Late-onset exercise in female rat offspring ameliorates the detrimental metabolic impact of maternal obesity. | Rising rates of maternal obesity/overweight bring the need for effective interventions in offspring. We observed beneficial effects of postweaning exercise, but the question of whether late-onset exercise might benefit offspring exposed to maternal obesity is unanswered. Thus we examined effects of voluntary exercise implemented in adulthood on adiposity, hormone profiles, and genes involved in regulating appetite and metabolism in female offspring. Female Sprague Dawley rats were fed either normal chow or high-fat diet (HFD) ad libitum for 5 weeks before mating and throughout gestation/lactation. At weaning, female littermates received either chow or HFD and, after 7 weeks, half were exercised (running wheels) for 5 weeks. Tissues were collected at 15 weeks. Maternal obesity was associated with increased hypothalamic inflammatory markers, including suppressor of cytokine signaling 3, TNF-α, IL-1β, and IL-6 expression in the arcuate nucleus. In the paraventricular nucleus (PVN), Y1 receptor, melanocortin 4 receptor, and TNF-α mRNA were elevated. In the hippocampus, maternal obesity was associated with up-regulated fat mass and obesity-associated gene and TNF-α mRNA. We observed significant hypophagia across all exercise groups. In female offspring of lean dams, the reduction in food intake by exercise could be related to altered signaling at the PVN melanocortin 4 receptor whereas in offspring of obese dams, this may be related to up-regulated TNF-α. Late-onset exercise ameliorated the effects of maternal obesity and postweaning HFD in reducing body weight, adiposity, plasma leptin, insulin, triglycerides, and glucose intolerance, with greater beneficial effects in offspring of obese dams. Overall, hypothalamic inflammation was increased by maternal obesity or current HFD, and the effect of exercise was dependent on maternal diet. 
In conclusion, even after a significant sedentary period, many of the negative impacts of maternal obesity could be improved by voluntary exercise and healthy diet. |
Mobile collaboration for young children | Social interaction and collaboration are essential to the emotional and cognitive development of young children [40]. Constructionism [32] is a learning theory where children learn as they build or construct a public artifact. Creative activities that promote collaboration, especially those based on principles of constructionism, provide enhanced learning opportunities for young children. Mobile devices can support the learning experience as children can create artifacts in various contexts. The proposed research incorporates collaboration, constructionism, children, stories and mobile technologies; specifically investigating developmentally appropriate interfaces to support mobile collaboration for young children. |
SAMo: experimenting a social accountability web platform | The need for transparency and quality control of public services is crucial for a sustainable development of underserved communities. Information and data collection play a significant role in the efforts that NGOs, governments and international institutions are carrying out in this direction. In this paper we describe a platform to conduct assessment campaigns of the quality of public services and its experimentation in the rural district of Moamba, Mozambique. |
Artificial immune systems - a new computational intelligence paradigm |
A case study on the application of Fuzzy QFD in TRIZ for service quality improvement | The improvement of service quality so as to enhance customer satisfaction has been widely mentioned over the past few decades. However, a creative and systematic way of achieving higher customer satisfaction in terms of service quality is rarely discussed. Recently, TRIZ, a Russian acronym which means “Theory of Inventive Problem Solving,” has been proven to be a well-structured and innovative way to solve problems in both technical and non-technical areas. In this study, a systematic model based on the TRIZ methodology is proposed to generate creative solutions for service quality improvement. This is done by examining first the determinants of service quality based on a comprehensive qualitative study in the electronic commerce sector. Then the correlation between the imprecise requirements from customers and the determinants of service quality is analyzed with Fuzzy Quality Function Deployment (QFD) in order to identify the critical determinants relating to customer satisfaction. After which, the corresponding TRIZ engineering parameters can be effectively applied in the TRIZ contradiction matrix to identify the inventive principles. A case study is illustrated to demonstrate the effectiveness of our approach in an e-commerce company, and its results are presented to show the applicability of the TRIZ methodology in the e-service sector. |
Ischemic preconditioning attenuates portal venous plasma concentrations of purines following warm liver ischemia in man. | BACKGROUND/AIMS
Degradation of adenine nucleotides to adenosine has been suggested to play a critical role in ischemic preconditioning (IPC). Thus, we questioned in patients undergoing partial hepatectomy whether (i) IPC will increase plasma purine catabolites and whether (ii) formation of purines in response to vascular clamping (Pringle maneuver) can be attenuated by prior IPC.
METHODS
75 patients were randomly assigned to three groups: group I underwent hepatectomy without vascular clamping; group II was subjected to the Pringle maneuver during resection, and group III was preconditioned (10 min ischemia and 10 min reperfusion) prior to the Pringle maneuver for resection. Central, portal venous and arterial plasma concentrations of adenosine, inosine, hypoxanthine and xanthine were determined by high-performance liquid chromatography.
RESULTS
Duration of the Pringle maneuver did not differ between patients with or without IPC. Surgery without vascular clamping had only a minor effect on plasma purine concentrations. After IPC, plasma concentrations of purines transiently increased. After the Pringle maneuver alone, purine plasma concentrations were most increased. This strong rise in plasma purines caused by the Pringle maneuver, however, was significantly attenuated by IPC. When portal venous minus arterial concentration difference was calculated for inosine or hypoxanthine, the respective differences became positive in patients subjected to the Pringle maneuver and were completely prevented by preconditioning.
CONCLUSION
These data demonstrate that (i) IPC increases formation of adenosine, and that (ii) the unwanted degradation of adenine nucleotides to purines caused by the Pringle maneuver can be attenuated by IPC. Because IPC also induces a decrease of portal venous minus arterial purine plasma concentration differences, IPC might possibly decrease disturbances in the energy metabolism in the intestine as well. |
Correlates of spontaneous clearance of hepatitis C virus among people with hemophilia. | People with hemophilia were formerly at very high risk of infection with hepatitis C virus (HCV). Approximately 20% of HCV-infected patients spontaneously clear the virus. To identify correlates of spontaneous clearance of HCV, we studied a cohort of HCV-infected hemophilic subjects without human immunodeficiency virus infection who had never been treated with interferon. Plasma HCV RNA was persistently undetectable in 192 (27.0%) of 712 HCV-seropositive subjects. In multivariate analyses, HCV clearance was more likely in subjects infected with HCV at younger age, especially with infection before age 2 years (40.1%) compared with after age 15 years (14.9%, P(trend) < .0001), and with relatively recent infection, especially after 1983 (42.8%) compared with before 1969 (18.2%, P(trend) < .0001). HCV clearance was marginally reduced with African ancestry (19%) and greatly increased with chronic hepatitis B virus (HBV) infection (59.1%, P = .001). Resolved HBV infection, coagulopathy types and severity, types of clotting factor treatment, and sex were not associated with HCV clearance. In conclusion, hemophilic subjects coinfected with chronic HBV and those infected with HCV before age 2 years or after 1983 were significantly more likely to spontaneously clear HCV viremia. These data highlight and clarify the importance of nongenetic determinants in spontaneous recovery from HCV infection. |
Structural aspects of cable-stayed bridge design. |
Detecting anomalies in time series data via a deep learning algorithm combining wavelets, neural networks and Hilbert transform | The quest for more efficient real-time detection of anomalies in time series data is critically important in numerous applications and systems, ranging from intelligent transportation and structural health monitoring to heart disease and earthquake prediction. Although the range of applications is wide, anomaly detection algorithms are usually domain specific and build on experts’ knowledge. Here a new signal processing algorithm – inspired by the deep learning paradigm – is presented that combines wavelets, neural networks, and the Hilbert transform, performs robustly, and is transferable. The proposed neural network structure facilitates learning short- and long-term pattern interdependencies, a task usually hard to accomplish using standard neural network training algorithms. The paper provides guidelines for selecting the neural network's buffer size, training algorithm, and anomaly detection features. The algorithm learns the system’s normal behavior online and does not require anomalous data for assessing statistical significance. This is essential for applications that require customization. The anomalies are detected by hierarchically analyzing the instantaneous frequency and amplitude of the residual signal. Its applicability is demonstrated through the detection of anomalies in Seismic Electric Signal activity, which is potentially important for earthquake prediction, and the automated detection of road anomalies (e.g. potholes, bumps, etc.) using smartphone sensors. The evaluation of the anomaly detection algorithm is based on the statistical significance of the Receiver Operating Characteristic curve. Finally, we propose strategies for decision-making that may increase the efficiency of the application of the algorithm and expedite the evaluation of real-time data. |
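The last detection stage described above – thresholding the instantaneous amplitude of a residual signal – can be sketched as follows. This is a toy illustration only, not the paper's algorithm: the FFT-based analytic signal stands in for scipy.signal.hilbert, and the synthetic signal, burst location, and 5× median threshold are invented for the example.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based analytic signal (a numpy stand-in for scipy.signal.hilbert):
    # zero out negative frequencies and double the positive ones.
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

# Synthetic residual signal with a high-frequency burst around sample 500.
t = np.arange(1000)
resid = 0.1 * np.sin(2 * np.pi * t / 50)
resid[480:520] += np.sin(2 * np.pi * t[480:520] / 8)

amp = np.abs(analytic_signal(resid))               # instantaneous amplitude
anomalies = np.where(amp > 5 * np.median(amp))[0]  # flag large deviations
```

The envelope of the quiet baseline is nearly constant, so the median gives a robust normal level and the burst stands out clearly.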
Black-box α-divergence Minimization | Benchmark results (apparently average test log-likelihoods on regression datasets, ± one standard error; the last column is the average α value):
Dataset    BB-α=BO         BB-α=1          BB-α=10^-6      BB-VB           Avg. α
Boston     -2.549±0.019    -2.621±0.041    -2.614±0.021    -2.578±0.017    0.45±0.04
Concrete   -3.104±0.015    -3.126±0.018    -3.119±0.010    -3.118±0.010    0.72±0.03
Energy     -0.979±0.028    -1.020±0.045    -0.945±0.012    -0.994±0.014    0.72±0.03
Wine       -0.949±0.009    -0.945±0.008    -0.967±0.008    -0.964±0.007    0.86±0.04
Yacht      -1.102±0.039    -2.091±0.067    -1.594±0.016    -1.646±0.017    0.48±0.01
Avg. Rank  1.835±0.065     2.504±0.080     2.766±0.061     2.895±0.057 |
A Global/Local Affinity Graph for Image Segmentation | Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, thereby enabling perceptual grouping laws, including proximity, similarity, and continuity, to enter into action through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving an ℓ0-minimization problem, thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, thus implementing perceptual laws, for example, similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions.
The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error. |
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting | Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches. |
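The GP-UCB rule described above can be sketched in a few lines. This is a minimal illustration only: the RBF kernel, length-scale, beta, candidate grid, and toy payoff are assumptions, and a practical implementation would use Cholesky updates rather than repeated matrix inversion.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between two sets of 1-D inputs.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_ucb(f, candidates, rounds=30, beta=4.0, noise=1e-3, seed=0):
    # Each round, query the candidate maximising mu + sqrt(beta) * sigma.
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(rounds):
        if not X:
            x = candidates[rng.integers(len(candidates))]
        else:
            Xa, ya = np.array(X), np.array(y)
            Kinv = np.linalg.inv(rbf(Xa, Xa) + noise * np.eye(len(Xa)))
            ks = rbf(candidates, Xa)
            mu = ks @ Kinv @ ya
            var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 0.0, None)
            x = candidates[np.argmax(mu + np.sqrt(beta * var))]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())
    return X[int(np.argmax(y))]      # best observed input

# Toy payoff with its maximum at x = 0.7.
best = gp_ucb(lambda x: -(x - 0.7) ** 2, np.linspace(0.0, 1.0, 101))
```

With the prior mean at zero and the payoff non-positive, unexplored regions keep a high upper bound, so the loop explores the whole interval before exploiting the region around the optimum.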
Analysing the Discourse of the 'War on Terror' and its Workings of Power | Since 11 September 2001, the discourse of the 'war on terror' has become one of the most over-used and hegemonic discourses that have shaped domestic politics and international relations worldwide. This paper focuses on the workings of the discourse of 'anti-terrorism' and its linkages to power structures and the institutions that are supported by them. It will look at how the discourse of the 'war on terror' has been used by the governments of Southeast Asia and Western Europe in particular in relation to oppositional forces, both legal and extra-legal; and its wider implications on the development of democracy and democratic spaces in these societies. The aim of this paper is not to study the growth, development or modalities of those movements that are—rightly or wrongly—labelled as 'militant', 'extreme' or 'radical'. Nor does it deny the reality of violent oppositional politics in some societies in both the developed and developing world. What it seeks to do instead is to critically analyse and appraise the political utility of such a discourse when it falls into the hands of ruling elites and state institutions, as well as opposition groups that take an equally instrumental approach to it.
Crucially, the paper will attempt to do several things: first, to demonstrate that the discourse of the 'war on terror' is neither new nor unique, and that its antecedents date back to the earlier security discourses of the Cold War; second, to show how this discourse differs very little from other discourses of identity-construction in its reliance on constitutive oppositional dialectics; and third, to show that as a discourse of containment and control the language of the 'war on terror' is just another manifestation of maximalist political power in an age of uncontrollable variables and political uncertainties, which in turn serves the controlling interests of both anti-democratic regimes and religiously fundamentalist forces alike. |
State-space solutions to standard H2 and H∞ control problems | Simple state-space formulas are presented for a controller solving a standard H∞-problem. The controller has the same state-dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H2) theory. This paper is also intended to be of tutorial value, so a standard H2-solution is developed in parallel. |
Wire Speed Name Lookup: A GPU-based Approach | This paper studies the name lookup issue with longest prefix matching, which is widely used in URL filtering, content routing/switching, etc. Recently Content-Centric Networking (CCN) has been proposed as a clean slate future Internet architecture to naturally fit the content-centric property of today’s Internet usage: instead of addressing end hosts, the Internet should operate based on the identity/name of contents. A core challenge and enabling technique in implementing CCN is exactly to perform name lookup for packet forwarding at wire speed. In CCN, routing tables can be orders of magnitude larger than current IP routing tables, and content names are much longer and more complex than IP addresses. To conquer this challenge, we conduct an implementation-based case study on wire speed name lookup, exploiting the GPU’s massive parallel processing power. Extensive experiments demonstrate that our GPU-based name lookup engine can achieve 63.52M searches per second on large-scale name tables containing millions of name entries, under a strict constraint of no more than the telecommunication-level 100μs per-packet lookup latency. Our solution can be applied to contexts beyond CCN, such as search engines, content filtering, and intrusion prevention/detection. Prof. Qunfeng Dong ([email protected]) and Prof. Bin Liu ([email protected]), listed in alphabetical order, are the corresponding authors of the paper. Yi Wang and Yuan Zu, listed in alphabetical order, are the lead student authors of Tsinghua University and the University of Science and Technology of China, respectively.
This paper is supported by the 863 project (2013AA013502), NSFC (61073171, 61073184), the Tsinghua University Initiative Scientific Research Program (20121080068), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20100002110051), the Ministry of Education (MOE) Program for New Century Excellent Talents (NCET) in University, the Science and Technological Fund of Anhui Province for Outstanding Youth (10040606Y05), the Fundamental Research Funds for the Central Universities (WK0110000007, WK0110000019), and the Jiangsu Provincial Science Foundation (BK2011360). |
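Component-wise longest prefix matching over hierarchical names – the lookup primitive the engine above accelerates – can be sketched with a toy trie. This is an illustrative CPU-side sketch only; the names, ports, and the '_port' sentinel key are invented for the example.

```python
class NameTrie:
    # Toy component-wise trie; '_port' marks a stored prefix (a sentinel
    # chosen for the example, so real components must not be named '_port').
    def __init__(self):
        self.root = {}

    def insert(self, prefix, port):
        node = self.root
        for comp in prefix.strip('/').split('/'):
            node = node.setdefault(comp, {})
        node['_port'] = port

    def lookup(self, name):
        # Walk the components, remembering the deepest stored prefix seen.
        node, best = self.root, None
        for comp in name.strip('/').split('/'):
            if comp not in node:
                break
            node = node[comp]
            best = node.get('_port', best)
        return best

table = NameTrie()
table.insert('/com/example', 1)
table.insert('/com/example/video', 2)
```

A query such as `table.lookup('/com/example/video/seg1.mp4')` matches the longest stored prefix; the GPU engine parallelizes exactly this kind of traversal across millions of entries.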
Pinnacle Point Cave 13B (Western Cape Province, South Africa) in context: The Cape Floral kingdom, shellfish, and modern human origins. | Genetic and anatomical evidence suggests that Homo sapiens arose in Africa between 200 and 100ka, and recent evidence suggests that complex cognition may have appeared between ~164 and 75ka. This evidence directs our focus to Marine Isotope Stage (MIS) 6, when from 195-123ka the world was in a fluctuating but predominantly glacial stage, when much of Africa was cooler and drier, and when dated archaeological sites are rare. Previously we have shown that humans had expanded their diet to include marine resources by ~164ka (±12ka) at Pinnacle Point Cave 13B (PP13B) on the south coast of South Africa, perhaps as a response to these harsh environmental conditions. The associated material culture documents an early use and modification of pigment, likely for symbolic behavior, as well as the production of bladelet stone tool technology, and there is now intriguing evidence for heat treatment of lithics. PP13B also includes a later sequence of MIS 5 occupations that document an adaptation that increasingly focuses on coastal resources. A model is developed that suggests that the combined richness of the Cape Floral Region on the south coast of Africa, with its high diversity and density of geophyte plants and the rich coastal ecosystems of the associated Agulhas Current, combined to provide a stable set of carbohydrate and protein resources for early modern humans along the southern coast of South Africa during this crucial but environmentally harsh phase in the evolution of modern humans. Humans structured their mobility around the use of coastal resources and geophyte abundance and focused their occupation at the intersection of the geophyte rich Cape flora and coastline. The evidence for human occupation relative to the distance to the coastline over time at PP13B is consistent with this model. |
Using Linguistic Cues for the Automatic Recognition of Personality in Conversation and Text | It is well known that utterances convey a great deal of information about the speaker in addition to their semantic content. One such type of information consists of cues to the speaker’s personality traits, the most fundamental dimension of variation between humans. Recent work explores the automatic detection of other types of pragmatic variation in text and conversation, such as emotion, deception, speaker charisma, dominance, point of view, subjectivity, opinion and sentiment. Personality affects these other aspects of linguistic production, and thus personality recognition may be useful for these tasks, in addition to many other potential applications. However, to date, there is little work on the automatic recognition of personality traits. This article reports experimental results for recognition of all Big Five personality traits, in both conversation and text, utilising both self and observer ratings of personality. While other work reports classification results, we experiment with classification, regression and ranking models. For each model, we analyse the effect of different feature sets on accuracy. Results show that for some traits, any type of statistical model performs significantly better than the baseline, but ranking models perform best overall. We also present an experiment suggesting that ranking models are more accurate than multi-class classifiers for modelling personality. In addition, recognition models trained on observed personality perform better than models trained using selfreports, and the optimal feature set depends on the personality trait. A qualitative analysis of the learned models confirms previous findings linking language and personality, while revealing many new linguistic markers. |
Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning | The purpose of this study was to document how teachers and students describe and comprehend the ways in which participating in an augmented reality (AR) simulation aids or hinders teaching and learning. Like the multi-user virtual environment (MUVE) interface that underlies Internet games, AR is a good medium for immersive collaborative simulation, but has different strengths and limitations than MUVEs. Within a design-based research project, the researchers conducted multiple qualitative case studies across two middle schools (6th and 7th grade) and one high school (10th grade) in the northeastern United States to document the affordances and limitations of AR simulations from the student and teacher perspective. The researchers collected data through formal and informal interviews, direct observations, web site posts, and site documents. Teachers and students reported that the technology-mediated narrative and the interactive, situated, collaborative problem solving affordances of the AR simulation were highly engaging, especially among students who had previously presented behavioral and academic challenges for the teachers. However, while the AR simulation provided potentially transformative added value, it simultaneously presented unique technological, managerial, and cognitive challenges to teaching and learning. |
Ontological modelling of form and function for architectural design | Form, function and the relationship between the two are notions that have served a crucial role in design science. Within architectural design, key aspects of the anticipated function of buildings, or of spatial environments in general, are supposed to be determined by their structural form, i.e., their shape, layout, or connectivity. Whereas the philosophy of form and function is a well-researched topic, the practical relations and dependencies between form and function are only known implicitly by designers and architects. Specifically, the formal modelling of structural form and resulting artefactual function within design and design assistance systems remains elusive. In our work, we aim at making these definitions explicit by the ontological modelling of domain entities, their properties and related constraints. We thus have to particularly focus on formal interpretation of the terms “(structural) form” and “(artefactual) function”. We put these notions into practice by formalising ontological specifications accordingly by using modularly constructed ontologies for the architectural design domain. A key aspect of our modelling approach is the use of formal qualitative spatial calculi and conceptual requirements as a link between the structural form of a design and the differing functional capabilities that it affords or leads to. We demonstrate the manner in which our ontological modelling reflects notions of architectural form and function, and how it facilitates the conceptual modelling of requirement constraints for architectural design. |
A Control-Theoretic Approach for Dynamic Adaptive Video Streaming over HTTP | User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) How best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) How well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) How do they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations. |
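The rate-vs-buffer reasoning behind such controllers can be sketched with a toy horizon search. This is an illustrative simplification, not the paper's MPC formulation; the segment length, rebuffering penalty weight, and bitrate ladder are assumptions.

```python
def choose_bitrate(bitrates, buffer_s, throughput_est, horizon=3,
                   seg_s=4.0, rebuf_penalty=4.3):
    # For each candidate rate, simulate downloading the next `horizon`
    # segments and score bitrate utility minus a rebuffering penalty;
    # return the rate with the best predicted score.
    best, best_score = bitrates[0], float('-inf')
    for r in bitrates:
        buf, score = buffer_s, 0.0
        for _ in range(horizon):
            dl = seg_s * r / throughput_est      # predicted download time
            rebuf = max(0.0, dl - buf)           # stall if the buffer drains
            buf = max(0.0, buf - dl) + seg_s     # drain during download, refill
            score += r - rebuf_penalty * rebuf
        if score > best_score:
            best, best_score = r, score
    return best
```

With a full buffer and a fast link the search selects the top rate; with a nearly empty buffer and a slow link it falls back to the lowest rate, which is exactly the throughput/buffer trade-off the paper formalizes.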
Code Churn: A Neglected Metric in Effort-Aware Just-in-Time Defect Prediction | Background: An increasing research effort has been devoted to just-in-time (JIT) defect prediction. A recent study by Yang et al. at FSE'16 leveraged individual change metrics to build unsupervised JIT defect prediction models. They found that many unsupervised models performed similarly to or better than the state-of-the-art supervised models in effort-aware JIT defect prediction. Goal: In Yang et al.'s study, code churn (i.e. the change size of a code change) was neglected when building unsupervised defect prediction models. In this study, we aim to investigate the effectiveness of a code churn based unsupervised defect prediction model in effort-aware JIT defect prediction. Methods: Consistent with Yang et al.'s work, we first use code churn to build a code churn based unsupervised model (CCUM). Then, we evaluate the prediction performance of CCUM against the state-of-the-art supervised and unsupervised models under the following three prediction settings: cross-validation, time-wise cross-validation, and cross-project prediction. Results: In our experiment, we compare CCUM against the state-of-the-art supervised and unsupervised JIT defect prediction models. Based on six open-source projects, our experimental results show that CCUM performs better than all the prior supervised and unsupervised models. Conclusions: The result suggests that future JIT defect prediction studies should use CCUM as a baseline model for comparison when a novel model is proposed. |
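The CCUM ranking itself is tiny; paired with a simple effort-aware recall measure it can be sketched as follows. This is a toy illustration of the "smallest changes first" heuristic; the field names and the recall-at-20%-effort metric are assumptions for the example, not the paper's exact evaluation protocol.

```python
def ccum_rank(changes):
    # CCUM: inspect code changes in ascending order of churn, i.e. rank
    # by 1/churn so the cheapest-to-review changes come first.
    return sorted(changes, key=lambda c: c['churn'])

def recall_at_effort(ranked, effort=0.2):
    # Fraction of defective changes caught when reviewing only `effort`
    # of the total churn (an illustrative effort-aware measure).
    total_churn = sum(c['churn'] for c in ranked)
    total_bugs = sum(c['buggy'] for c in ranked)
    spent = found = 0
    for c in ranked:
        if spent + c['churn'] > effort * total_churn:
            break
        spent += c['churn']
        found += c['buggy']
    return found / total_bugs
```

Because small changes are often as defect-prone as large ones, reviewing them first maximizes defects found per line inspected, which is why churn-based ranking is a strong effort-aware baseline.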
Integrating artificial intelligence with anylogic simulation | Simulation is one of five key technologies that PwC's Artificial Intelligence Accelerator lab uses to build Artificial Intelligence (AI) applications. Application of AI is accelerating rapidly, spawning new sectors, and resulting in unprecedented reach, power, and influence. Simulation explicitly captures the behavior of agents and processes that can either be described by or replaced by AI components. AI components can be embedded into a simulation to provide learning or adaptive behavior. And, simulation can be used to evaluate the impact of introducing AI into a “real world system” such as supply chains or production processes. In this workshop we will demonstrate an Agent-Based Model with Reinforcement Learning for Autonomous Fleet Coordination; demonstrate and describe in detail a version of the AnyLogic Consumer Market Model that has been modified to include adaptive dynamics based on deep learning; and describe approaches to integrating machine learning to the design and development of simulations. |
Twitter Data Analytics | Springer. This effort is dedicated to my family. Thank you for all your support and encouragement. -SK For my parents and Rio. Thank you for everything. -FM To my parents, wife, and sons. -HL Acknowledgments: We would like to thank the following individuals for their help in realizing this book. We would like to thank Daniel Howe and Grant Marshall for helping to organize the examples in the book, Daria Bazzi and Luis Brown for their help in proofreading and suggestions in organizing the book, and Terry Wen for preparing the web site. We appreciate Dr. Ross Maciejewski's helpful suggestions and guidance as our data visualization mentor. We express our immense gratitude to Dr. Rebecca Goolsby for her vision and insight for using social media as a tool for Humanitarian Assistance and Disaster Relief. Finally, we thank all members of the Data Mining and Machine Learning lab for their encouragement and advice throughout this process. This book is the result of projects sponsored, in part, by the Office of Naval Research. With their support, we developed TweetTracker and TweetXplorer, flagship projects that helped us gain the knowledge and experience needed to produce this book. |
Design and Evaluation of Hardware Pseudo-Random Number Generator MT19937 | MT19937 is a kind of Mersenne Twister, which is a pseudo-random number generator. This study presents new designs for a MT19937 circuit suitable for custom computing machinery for high-performance scientific simulations. Our designs can generate multiple random numbers per cycle (multi-port design). The estimated throughput of a 52-port design was 262 Gbps, which is 115 times higher than the software on a Pentium 4 (2.53 GHz) processor. Multi-port designs were proven to be more cost-effective than using multiple single-port designs. The initialization circuit can be included without performance loss in exchange for a slight increase of logic scale. key words: custom circuit, simulation, random number, Mersenne Twister, FPGA |
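The recurrence that such hardware unrolls is compact in software. Below is a single-stream reference sketch of the MT19937 state twist and tempering steps, using the standard published constants; the multi-port parallelization the paper designs is not shown.

```python
def mt19937(seed):
    # Initialize the 624-word state from a 32-bit seed.
    mt = [0] * 624
    mt[0] = seed & 0xFFFFFFFF
    for i in range(1, 624):
        mt[i] = (1812433253 * (mt[i - 1] ^ (mt[i - 1] >> 30)) + i) & 0xFFFFFFFF
    index = [624]                      # force a twist before the first output

    def rand():
        if index[0] >= 624:
            for i in range(624):       # the "twist": regenerate the state in place
                y = (mt[i] & 0x80000000) | (mt[(i + 1) % 624] & 0x7FFFFFFF)
                mt[i] = mt[(i + 397) % 624] ^ (y >> 1)
                if y & 1:
                    mt[i] ^= 0x9908B0DF
            index[0] = 0
        y = mt[index[0]]
        index[0] += 1
        y ^= y >> 11                   # tempering improves equidistribution
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        y ^= y >> 18
        return y

    return rand

rand = mt19937(5489)                   # the reference seed of mt19937ar.c
first = rand()
```

A hardware multi-port design computes several of these twist/temper steps per clock; the state update only reads words at fixed offsets (i+1 and i+397), which is what makes the unrolling straightforward.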
Tangible programming elements for young children | Tangible programming elements offer the dynamic and programmable properties of a computer without the complexity introduced by the keyboard, mouse and screen. This paper explores the extent to which programming skills are used by children during interactions with a set of tangible programming elements: the Electronic Blocks. An evaluation of the Electronic Blocks indicates that children become heavily engaged with the blocks, and learn simple programming with a minimum of adult support. |
Evolution of RNA- and DNA-guided antivirus defense systems in prokaryotes and eukaryotes: common ancestry vs convergence | Complementarity between nucleic acid molecules is central to biological information transfer processes. Apart from the basal processes of replication, transcription and translation, complementarity is also employed by multiple defense and regulatory systems. All cellular life forms possess defense systems against viruses and mobile genetic elements, and in most of them some of the defense mechanisms involve small guide RNAs or DNAs that recognize parasite genomes and trigger their inactivation. The nucleic acid-guided defense systems include prokaryotic Argonaute (pAgo)-centered innate immunity and CRISPR-Cas adaptive immunity as well as diverse branches of RNA interference (RNAi) in eukaryotes. The archaeal pAgo machinery is the direct ancestor of eukaryotic RNAi that, however, acquired additional components, such as Dicer, and enormously diversified through multiple duplications. In contrast, eukaryotes lack any heritage of the CRISPR-Cas systems, conceivably, due to the cellular toxicity of some Cas proteins that would get activated as a result of operon disruption in eukaryotes. The adaptive immunity function in eukaryotes is taken over partly by the PIWI RNA branch of RNAi and partly by protein-based immunity. In this review, I briefly discuss the interplay between homology and analogy in the evolution of RNA- and DNA-guided immunity, and attempt to formulate some general evolutionary principles for this ancient class of defense systems. This article was reviewed by Mikhail Gelfand and Bojan Zagrovic. |
Wireless Sensor Networks for Home Health Care | Sophisticated electronics are within reach of average users. Cooperation between wireless sensor networks and existing consumer electronic infrastructures can assist in the areas of health care and patient monitoring. This will improve the quality of life of patients, provide early detection for certain ailments, and improve doctor-patient efficiency. The goal of our work is to focus on health-related applications of wireless sensor networks. In this paper we detail our experiences building several prototypes and discuss the driving force behind home health monitoring and how current (and future) technologies will enable automated home health monitoring. |
Job insecurity and organizational citizenship behavior: exploring curvilinear and moderated relationships. | This article examined a curvilinear relationship between job insecurity and organizational citizenship behavior (OCB). Drawing from social exchange theory and research on personal control, we developed and tested an explanation for employees' reactions to job insecurity based on their conceptualization of their social exchange relationship with the organization at different levels of job insecurity. Using data from 244 Chinese employees and 102 supervisory ratings of OCB, we found support for a U-shaped relationship between job insecurity and OCB. Moreover, 2 factors--psychological capital and subordinate-supervisor guanxi--moderated the curvilinear relationship, such that the curvilinear relationship is more pronounced among those with lower psychological capital or less positive subordinate-supervisor guanxi. |
Fingerprint Model Based on Fingerprint Image Topology and Ridge Count Values | In this paper the authors propose a mathematical model of fingerprint images based on the vectors of ridge count values. The model takes into account the instability of the values of the ridge count in the region of the lines curvature and possible mutations of minutiae. This increases the stability of fingerprint template and reliability of identification. The vectors of ridge count are formed using topological descriptors, which are constructed for the neighborhood of all fingerprint minutiae. In the proposed model, a template of a fingerprint image keeps the list of minutiae, the list of topological vectors and the list of the ridge count vectors. The value of the ridge count can be represented as a fractional number. |
The effect of team-based learning in medical ethics education. | BACKGROUND
Although now an important aspect of medical education, teaching medical ethics presents challenges, including a perceived lack of value or relevance by students and a dearth of effective teaching methods for faculty. Team-based learning (TBL) was introduced into our medical ethics course to respond to these needs.
AIMS
We evaluated the impact of TBL on student engagement and satisfaction and assessed educational achievements.
METHOD
The medical ethics education using TBL consisted of four 2 h sessions for first-year medical students of Chonnam National University Medical School. The impact of TBL on student engagement and educational achievement was assessed using numerical data, including scores from the IRAT, GRAT, application exercises and final examination, and the students' perceptions of medical ethics education using TBL.
RESULTS
Most students perceived TBL activities to be more engaging, effective and enjoyable than conventional didactics. The GRAT scores were significantly higher than the IRAT scores, demonstrating the effect of cooperative learning. In addition, TBL improved student performance, especially that of academically weaker students.
CONCLUSIONS
The application of TBL to medical ethics education improved student performance and increased student engagement and satisfaction. The TBL method should be considered for broader application in medical education. |
Reliable Crowdsourcing under the Generalized Dawid-Skene Model | Crowdsourcing systems provide scalable and cost-effective human-powered solutions at marginal cost, for classification tasks where humans are significantly better than the machines. Although traditional approaches to aggregating crowdsourced labels have relied on the Dawid-Skene model, this fails to capture how some tasks are inherently more difficult than others. Several generalizations have been proposed, but inference becomes intractable and typical solutions resort to heuristics. To bridge this gap, we study a recently proposed generalized Dawid-Skene model, and propose a linear-time algorithm based on spectral methods. We show near-optimality of the proposed approach by providing an upper bound on the error and comparing it to a fundamental limit. We provide numerical experiments on synthetic data matching our analyses, and also on real datasets demonstrating that the spectral method significantly improves over simple majority voting and is comparable to other methods. |
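The aggregation problem can be illustrated with the majority-voting baseline plus a crude iterative reweighting of workers. This is an EM-flavoured stand-in for, not an implementation of, the paper's spectral method; the binary labels, tasks, and workers are invented for the example.

```python
def aggregate(labels, iters=5):
    # labels: dict task -> dict worker -> binary label in {0, 1}.
    # Start from majority vote, then weight each worker by agreement
    # with the current estimates and re-vote with those weights.
    est = {t: 1 if 2 * sum(v.values()) >= len(v) else 0
           for t, v in labels.items()}
    workers = {w for v in labels.values() for w in v}
    for _ in range(iters):
        rel = {w: sum(v[w] == est[t] for t, v in labels.items() if w in v) /
                  sum(1 for v in labels.values() if w in v)
               for w in workers}
        est = {t: 1 if sum((2 * rel[w] - 1) * (2 * l - 1)
                           for w, l in v.items()) > 0 else 0
               for t, v in labels.items()}
    return est

votes = {'a': {'w1': 1, 'w2': 1, 'w3': 0}, 'b': {'w1': 0, 'w2': 0, 'w3': 1},
         'c': {'w1': 1, 'w2': 1, 'w3': 0}, 'd': {'w1': 0, 'w2': 0, 'w3': 1}}
consensus = aggregate(votes)  # w3 disagrees everywhere and gets down-weighted
```

Spectral methods replace this heuristic loop with an eigen-decomposition of the worker-agreement matrix, which is what yields the paper's provable error bounds.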
An image processing approach for calorie intake measurement | Obesity in the world has spread to epidemic proportions. In 2008 the World Health Organization (WHO) reported that 1.5 billion adults were suffering from some form of overweightness. Obesity treatment requires constant monitoring and a rigorous diet control to measure daily calorie intake. These controls are expensive for the health care system, and patients regularly reject the treatment because of the excessive control imposed on them. Recently, studies have suggested that the use of technology such as smartphones may enhance the treatment of obese and overweight patients; this provides a degree of comfort for the patient, while the dietitian gains a better option for recording the patient's food intake. In this paper we propose a smart system that takes advantage of the technologies available in smartphones to build an application that measures and monitors the daily calorie intake of obese and overweight patients. Via a special technique, the system records a photo of the food before and after eating in order to estimate the calories consumed from the selected food and its nutrient components. Our system offers a new instrument for measuring food intake that can be more useful and effective. |
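The before/after photo technique described above reduces, at the final step, to subtracting the leftover-calorie estimate from the served-portion estimate. A minimal sketch of that step, with hypothetical function and parameter names not taken from the paper:

```python
def estimate_calories_consumed(cal_before, cal_after):
    """Estimate calories consumed from per-photo calorie estimates.

    cal_before: calories estimated from the before-meal photo.
    cal_after: calories estimated from the after-meal (leftover) photo.
    All names here are illustrative, not the paper's API.
    """
    consumed = cal_before - cal_after
    # Clamp at zero: leftovers cannot exceed the served portion,
    # so a negative difference indicates estimation noise.
    return max(consumed, 0.0)
```

The hard part of the system, of course, is producing `cal_before` and `cal_after` from images in the first place; this sketch covers only the consumption arithmetic.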
On the Genuine Bound States of a Non-Relativistic Particle in a Linear Finite Range Potential | We explore the energy spectrum of a non-relativistic particle bound in a linear finite range, attractive potential, envisaged as a quark-confining potential. The intricate transcendental eigenvalue equation is solved numerically to obtain the explicit eigen-energies. The linear potential, which resembles the triangular well, has potential significance in particle physics and exciting applications in electronics c |
Development of the Highly Precise Magnetic Current Sensor Module of +/−300 A Utilizing AMR Element With Bias-Magnet | Ever-higher sensitivity is required of current sensors used in new application areas, such as electric vehicles, smart meters, and electricity usage monitoring systems. To meet these technical needs, a high-precision magnetic current sensor module has been developed. The sensor module features excellent linearity and small magnetic hysteresis. In addition, it offers a 2.5-4.5 V voltage output for 0-300 A positive input current and a 0.5-2.5 V voltage output for 0-300 A negative input current over -40 °C to 125 °C, under the VCC = 5 V condition. |
Traumatic spinal cord injuries in Southeast Turkey: an epidemiological study | In 1994, a retrospective study of new cases of traumatic Spinal Cord Injury (SCI) was conducted in all the hospitals in Southeast Turkey: 75 new traumatic SCIs were identified. The estimated annual incidence was 16.9 per million population. The male/female ratio was 5.8/1. The mean age was 31.3 years, being 31.25 for male patients and 31.36 for female patients; 70.7% of all patients were under the age of 40. The major causes of SCI were falls (37.3%) and gunshot wounds (29.3%), followed by car accidents (25.3%) and stab wounds (1.3%). Thirty-one patients (41.3%) were tetraplegic and 44 (58.7%) paraplegic. In tetraplegic patients the commonest level was C5; in those with paraplegia, L1. The commonest associated injury was head trauma, followed by fractures of the extremity(ies). Severe head trauma, being a major cause of death, may have obscured the actual incidence of SCI. Most of the gunshot-injured SCI patients were young soldiers fighting against the rebels. As there were no available data for the rebels with SCI, the actual incidence of SCI in Southeast (SE) Turkey is likely higher than that found in this study. |
Dynamic selection of generative-discriminative ensembles for off-line signature verification | In practice, each writer provides only a limited number of signature samples to design a signature verification (SV) system. Hybrid generative-discriminative ensembles of classifiers (EoCs) are proposed in this paper to design an off-line SV system from few samples, where the classifier selection process is performed dynamically. To design the generative stage, multiple discrete left-to-right Hidden Markov Models (HMMs) are trained using a different number of states and codebook sizes, allowing the system to learn signatures at different levels of perception. To design the discriminative stage, HMM likelihoods are measured for each training signature, and assembled into feature vectors that are used to train a diversified pool of two-class classifiers through a specialized Random Subspace Method. During verification, a new dynamic selection strategy based on the K-nearest-oracles (KNORA) algorithm and on Output Profiles selects the most accurate EoCs to classify a given input signature. This SV system is suitable for incremental learning of new signature samples. Experiments performed with real-world signature data (comprised of genuine samples, and random, simple and skilled forgeries) indicate that the proposed dynamic selection strategy can significantly reduce the overall error rates, with respect to other EoCs formed using well-known dynamic and static selection strategies. Moreover, the performance of the SV system proposed in this paper is significantly greater than or comparable to that of related systems found in the literature. |
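The dynamic selection stage described above builds on the K-nearest-oracles (KNORA) idea. A minimal sketch of the KNORA-Eliminate variant, under the simplifying assumption that the validation samples are already sorted by distance to the query (all names are illustrative, not the authors' implementation):

```python
def knora_eliminate(pool, validation, k_neighbors):
    """KNORA-Eliminate: keep only the classifiers that correctly
    classify all k validation samples nearest to the query.

    pool: list of predict functions (classifier(x) -> label).
    validation: list of (x, y) pairs, assumed pre-sorted by
    distance to the query sample.
    """
    neighbors = validation[:k_neighbors]
    selected = [clf for clf in pool
                if all(clf(x) == y for x, y in neighbors)]
    # Fall back to the full pool if no classifier is perfect
    # on the local neighborhood.
    return selected or pool
```

In the paper's system, the pool members are the two-class classifiers trained on HMM-likelihood feature vectors, and selection is further refined with output profiles; this sketch shows only the elimination rule.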
SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild' | We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical Lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation. |
Controlling walking behavior of passive dynamic walker utilizing passive joint compliance | The passive dynamic walker (PDW) has the remarkable characteristic that it realizes cyclic locomotion without planning joint trajectories. However, it cannot control its walking behavior because the behavior is dominated by the fixed body dynamics. Observing that human cyclic locomotion emerges from elastic muscles, we add a compliant hip joint to the PDW and propose a "phasic dynamics tuner" that changes the body dynamics by tuning the joint compliance in order to control the walking behavior. The joint compliance is obtained by driving the joint with antagonistic and agonistic McKibben pneumatic actuators. This paper shows that the PDW with the compliant joint and the phasic dynamics tuner achieves better walking performance than the conventional PDW with passive free joints. The phasic dynamics tuner can change the walking velocity by tuning the joint compliance. Experimental results show the effectiveness of the joint compliance and the phasic dynamics tuner. |
A six degrees of freedom haptic interface for laparoscopic training | We present the novel kinematics, workspace characterization, functional prototype and impedance control of a six degrees of freedom haptic interface designed to train surgeons for laparoscopic procedures, through virtual reality simulations. The parallel kinematics of the device is constructed by connecting a 3RRP planar parallel mechanism to a linearly actuated modified delta mechanism with a connecting link. The configuration level forward and inverse kinematics of the device assume analytic solutions, while its workspace can be shaped to enable large end-effector translations and rotations, making it well-suited for laparoscopy operations. Furthermore, the haptic interface features a low apparent inertia with high structural stiffness, thanks to its parallel kinematics with grounded actuators. A model-based open-loop impedance controller with feed-forward gravity compensation has been implemented for the device and various virtual tissue/organ stiffness levels have been rendered. |
A Printed Wide-Slot Antenna With a Modified L-Shaped Microstrip Line for Wideband Applications | A printed wide-slot antenna for wideband applications is proposed and experimentally investigated in this communication. A modified L-shaped microstrip line is used to excite the square slot. It consists of a horizontal line, a square patch, and a vertical line. For comparison, a simple L-shaped feed structure with the same line width is used as a reference geometry. The reference antenna exhibits dual resonance (lower resonant frequency <i>f</i><sub>1</sub>, upper resonant frequency <i>f</i><sub>2</sub>). When the square patch is embedded in the middle of the L-shaped line, <i>f</i><sub>1</sub> decreases, <i>f</i><sub>2</sub> remains unchanged, and a new resonance mode is formed between <i>f</i><sub>1</sub> and <i>f</i><sub>2</sub> . Moreover, if the size of the square patch is increased, an additional (fourth) resonance mode is formed above <i>f</i><sub>2</sub>. Thus, the bandwidth of a slot antenna is easily enhanced. The measured results indicate that this structure possesses a wide impedance bandwidth of 118.4%, which is nearly three times that of the reference antenna. Also, a stable radiation pattern is observed inside the operating bandwidth. The gain variation is found to be less than 1.7 dB. |
Novel trends in automotive networks: A perspective on Ethernet and the IEEE Audio Video Bridging | Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule. |