title | abstract |
---|---|
Pattern Recognition | In this paper a novel reconstruction method is presented that uses the topological relationship of detected image features to create a highly abstract but semantically rich 3D model of the reconstructed scenes. In the first step, a combined image-based reconstruction of points and lines is performed based on current state-of-the-art structure-from-motion methods. Subsequently, connected planar three-dimensional structures are reconstructed by a novel method that uses the topological relationships between the detected image features. The reconstructed 3D models enable a simple extraction of geometric shapes, such as rectangles, in the scene. |
PPP-RTK: Results of CORS Network-Based PPP with Integer Ambiguity Resolution | During the last decade the technique of CORS-based Network RTK has been proven capable of providing cm-level positioning accuracy for rovers receiving GNSS correction data from the CORS network. The technique relies on successful integer carrier-phase ambiguity resolution at both network and user level: at the level of the CORS network, ambiguity resolution is a prerequisite in order to determine very precise atmospheric corrections for (mobile) users, while these users need to resolve their integer ambiguities (with respect to a certain CORS network master reference station) to obtain precise cm-level positioning accuracy. In case of Network RTK a user thus needs corrections from the network, plus the GNSS data of one of the CORS stations. In practice, there exists a variety of implementations of the Network RTK concept, of which VRS, FKP and MAC are best known [1], [2], [3], [4]. In this contribution we discuss a closely related concept, PPP-RTK, and show its performance on two CORS networks. |
CAT: Credibility Analysis of Arabic Content on Twitter | Data generated on Twitter has become a rich source for various data mining tasks. Those data analysis tasks that are dependent on the tweet semantics, such as sentiment analysis, emotion mining, and rumor detection among others, suffer considerably if the tweet is not credible, not real, or spam. In this paper, we perform an extensive analysis on credibility of Arabic content on Twitter. We also build a classification model (CAT) to automatically predict the credibility of a given Arabic tweet. Of particular originality is the inclusion of features extracted directly or indirectly from the author’s profile and timeline. To train and test CAT, we annotated for credibility a data set of 9,000 Arabic tweets that are topic independent. CAT achieved consistent improvements in predicting the credibility of the tweets when compared to several baselines and when compared to the state-of-the-art approach with an improvement of 21% in weighted average F-measure. We also conducted experiments to highlight the importance of the user-based features as opposed to the content-based features. We conclude our work with a feature reduction experiment that highlights the best indicative features of credibility. |
Nonisolated High Gain DC–DC Converter for DC Microgrids | DC microgrids are popular due to the integration of renewable energy sources such as solar photovoltaics and fuel cells. Owing to the low output voltage of these dc power generators, highly efficient, high-gain dc–dc converters are needed to connect these sources to the dc microgrid. In this paper, a nonisolated high gain dc–dc converter is proposed without using the voltage multiplier cell and/or hybrid switched-capacitor technique. The proposed topology utilizes two nonisolated inductors that are connected in series/parallel during discharging/charging mode. The operation of the switches with two different duty ratios is the main advantage of the converter, achieving high voltage gain without resorting to an extreme duty ratio. The steady-state analysis of the proposed converter using two different duty ratios is discussed in detail. In addition, a 100 W, 20/200 V prototype circuit of the high gain dc–dc converter is developed, and the performance is validated using experimental results. |
A Real-Time Approach to Process Control | Contents: Preface; Acknowledgements; Endorsement; About the authors; 1. A brief history of control and simulation. |
Précis of The brain and emotion. | The topics treated in The brain and emotion include the definition, nature, and functions of emotion (Ch. 3); the neural bases of emotion (Ch. 4); reward, punishment, and emotion in brain design (Ch. 10); a theory of consciousness and its application to understanding emotion and pleasure (Ch. 9); and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, with classifying different emotions, and in understanding what information-processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward- and punishment-evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Ch. 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding the brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior. |
The genetics of human tooth agenesis: new discoveries for understanding dental anomalies. | The important role of genetics has been increasingly recognized in recent years with respect to the understanding of dental anomalies, such as tooth agenesis. The lack of any real insight into the cause of this condition has led us to use a human molecular genetics approach to identify the genes perturbing normal dental development. We are reporting a strategy that can be applied to investigate the underlying cause of human tooth agenesis. Starting with a single large family presenting a clearly recognizable and well-defined form of tooth agenesis, we have identified a defective gene that affects the formation of second premolars and third molars. With the use of "the family study" method, evidence is produced showing that other genetic defects also contribute to the wide range of phenotypic variability of tooth agenesis. Identification of genetic mutations in families with tooth agenesis or other dental anomalies will enable preclinical diagnosis and permit improved orthodontic treatment. |
Lower versus higher dose of enteral caloric intake in adult critically ill patients: a systematic review and meta-analysis | BACKGROUND
There is conflicting evidence about the relationship between the dose of enteral caloric intake and survival in critically ill patients. The objective of this systematic review and meta-analysis is to compare the effect of lower versus higher dose of enteral caloric intake in adult critically ill patients on outcome.
METHODS
We reviewed MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Scopus from inception through November 2015. We included randomized and quasi-randomized studies in which there was a significant difference in the caloric intake in adult critically ill patients, including trials in which caloric restriction was the primary intervention (caloric restriction trials) and those with other interventions (non-caloric restriction trials). Two reviewers independently extracted data on study characteristics, caloric intake, and outcomes with hospital mortality being the primary outcome.
RESULTS
Twenty-one trials, mostly with moderate risk of bias, were included (2365 patients in the lower caloric intake group and 2352 patients in the higher caloric group). Lower compared with higher caloric intake was not associated with a difference in hospital mortality (risk ratio (RR) 0.953; 95 % confidence interval (CI) 0.838-1.083), ICU mortality (RR 0.885; 95 % CI 0.751-1.042), total nosocomial infections (RR 0.982; 95 % CI 0.878-1.077), mechanical ventilation duration, or length of ICU or hospital stay. Blood stream infections (11 trials; RR 0.718; 95 % CI 0.519-0.994) and incident renal replacement therapy (five trials; RR 0.711; 95 % CI 0.545-0.928) were lower with lower caloric intake. The associations between lower compared with higher caloric intake and primary and secondary outcomes, including pneumonia, were not different between caloric restriction and non-caloric restriction trials, except for hospital stay, which was longer with lower caloric intake in the caloric restriction trials.
CONCLUSIONS
We found no association between the dose of caloric intake in adult critically ill patients and hospital mortality. Lower caloric intake was associated with lower risk of blood stream infections and incident renal replacement therapy (five trials only). The heterogeneity in the design, feeding route and timing and caloric dose among the included trials could limit our interpretation. Further studies are needed to clarify our findings. |
Value-Relevance of the Outside Corporate Governance Information: A Canadian Study | This study examines whether the corporate governance rankings published by The Globe and Mail, a reputed national Canadian newspaper, are reflected in the values that investors accord to firms. A sample of 796 observations on 289 Canadian companies from 2002 to 2005 inclusive was analyzed using a price model (Cazavan-Jeny and Jeanjean, 2006). Results suggest that the corporate governance rankings published by this market information intermediary are related not only to firm value, but also to accounting results. Thus, the relationship between corporate governance scores and market capitalization can take two forms. First, there may be a direct relationship due to investor interest in good governance practices. Second, there may be an indirect relationship due to the impact of good governance practices on the firms' accounting results. The results of this study should be useful for accounting practitioners and the various organizations involved in the regulation of corporate governance practices and the standardization of relevant data elements. |
Evaluation of anterior talofibular ligament injury with stress radiography, ultrasonography and MR imaging | The purpose of this study was to clarify the efficacy of stress radiography (stress X-P), ultrasonography (US), and magnetic resonance (MR) imaging in the detection of anterior talofibular ligament (ATFL) injury. Thirty-four patients with ankle sprain were involved. In all patients, stress X-P, US, MR imaging, and arthroscopy were performed. The arthroscopic results were considered to be the gold standard. The imaging results were compared with the arthroscopic results, and the accuracy calculated. Arthroscopic findings showed ATFL injury in 30 out of 34 cases. The diagnoses of ATFL injury with stress X-P, US, and MR imaging were made with an accuracy of 67%, 91%, and 97%, respectively. US and MR imaging demonstrated the same location of the injury as arthroscopy in 63% and 93% of cases, respectively. We have clarified the diagnostic value of stress X-P, US, and MR imaging in the diagnosis of ATFL injury. We obtained satisfactory results with US and MR imaging. |
Microscopic Nuclei Classification, Segmentation and Detection with improved Deep Convolutional Neural Network (DCNN) Approaches | Due to cellular heterogeneity, cell nuclei classification, segmentation, and detection from pathological images are challenging tasks. In the last few years, Deep Convolutional Neural Network (DCNN) approaches have shown state-of-the-art (SOTA) performance on histopathological imaging in different studies. In this work, we have proposed different advanced DCNN models and evaluated them for nuclei classification, segmentation, and detection. First, the Densely Connected Recurrent Convolutional Network (DCRN) model is used for nuclei classification. Second, Recurrent Residual U-Net (R2U-Net) is applied for nuclei segmentation. Third, the R2U-Net regression model, named UD-Net, is used for nuclei detection from pathological images. The experiments are conducted with different datasets, including the Routine Colon Cancer (RCC) classification and detection dataset and the Nuclei Segmentation Challenge 2018 dataset. The experimental results show that the proposed DCNN models provide superior performance compared to the existing approaches for nuclei classification, segmentation, and detection tasks. The results are evaluated with different performance metrics including precision, recall, Dice Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall accuracy. We have achieved around 3.4% and 4.5% better F1-score for nuclei classification and detection tasks compared to recently published DCNN-based methods. In addition, R2U-Net shows around 92.15% testing accuracy in terms of DC. These improved methods will help pathological practice with better quantitative analysis of nuclei in Whole Slide Images (WSI), which will ultimately help in better understanding different types of cancer in the clinical workflow. |
The Concept of History in Walter Benjamin’s Critical Theory | The point of departure of this study is Walter Benjamin's last text, "Theses on the Philosophy of History". Benjamin appeals to the significance of theology for historical materialism in order to overcome one of the decisive reasons why Marx's unique theoretical project, in its positivistic interpretations, was not understood with the necessary radicality and had been in danger of losing its explanatory power and revolutionary impulse. The necessity of looking back to the past constitutes the basic theme of the study, and it is analyzed at the epistemological, ontological and political levels. The view backwards is also necessary because the past shows how all its atrocities, which we think have been overcome, may at any time return in a way which we are unable to imagine. |
Genetic origins of the Ainu inferred from combined DNA analyses of maternal and paternal lineages | The Ainu, a minority ethnic group from the northernmost island of Japan, were extensively investigated for DNA polymorphisms from both maternal (mitochondrial DNA) and paternal (Y chromosome) lineages. Other Asian populations inhabiting North, East, and Southeast Asia were also examined for detailed phylogeographic analyses at the mtDNA sequence type as well as Y-haplogroup levels. The maternal and paternal gene pools of the Ainu contained 25 mtDNA sequence types and three Y-haplogroups, respectively. Eleven of the 25 mtDNA sequence types were unique to the Ainu and accounted for over 50% of the population, whereas 14 were widely distributed among other Asian populations. Of the 14 shared types, the most frequently shared type was found in common among the Ainu, Nivkhi in northern Sakhalin, and Koryaks in the Kamchatka Peninsula. Moreover, analysis of genetic distances calculated from the mtDNA data revealed that the Ainu seemed to be related to both the Nivkhi and other Japanese populations (such as mainland Japanese and Okinawans) at the population level. On the paternal side, the vast majority (87.5%) of the Ainu exhibited the Asian-specific YAP+ lineages (Y-haplogroups D-M55* and D-M125), which were distributed only in the Japanese Archipelago in this analysis. On the other hand, the Ainu exhibited no other Y-haplogroups (C-M8, O-M175*, and O-M122*) common in mainland Japanese and Okinawans. It is noteworthy that the rest of the Ainu gene pool was occupied by the paternal lineage (Y-haplogroup C-M217*) from North Asia including Sakhalin. Thus, the present findings suggest that the Ainu retain a certain degree of their own genetic uniqueness, while having higher genetic affinities with other regional populations in Japan and the Nivkhi among Asian populations. |
Inherent Method Variability in Dissolution Testing : The Effect of Hydrodynamics in the USP II Apparatus | Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004). |
Programming by Feedback | This paper advocates a new ML-based programming framework, called Programming by Feedback (PF), which involves a sequence of interactions between the active computer and the user. The latter only provides preference judgments on pairs of solutions supplied by the active computer. The active computer involves two components: the learning component estimates the user’s utility function and accounts for the user’s (possibly limited) competence; the optimization component explores the search space and returns the most appropriate candidate solution. A proof of principle of the approach is proposed, showing that PF requires a handful of interactions in order to solve some discrete and continuous benchmark problems. |
EFFECT OF PROCESS PARAMETERS IN CNC DRILLING OF ALUMINIUM MATRIX COMPOSITE BY USING GRA AND ANOVA | In aluminum matrix composites, the presence of hard particles inside the matrix causes tool wear, poor surface finish, and high cutting forces during machining. This paper discusses the influence of graphite particles and cutting parameters on the drilling characteristics of hybrid aluminum matrix composites (AMCs): Al6063/Alo3 and Al6063/12% SiC/4% Graphite. The composites were fabricated using the stir casting method. Experiments were conducted with TiN coated carbide tools and commercial carbide tools at various cutting speeds, feeds, and workpieces. The MRR and surface roughness of the drilled hole were investigated, with special attention paid to the effects of graphite particles. The optimal process parameters for economical manufacturing of the MMC are also identified using GRA and ANOVA. |
Logic-Based Models for the Analysis of Cell Signaling Networks† | Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. |
User Experience of On-Screen Interaction Techniques: An Experimental Investigation of Clicking, Sliding, Zooming, Hovering, Dragging, and Flipping | |
Neural Message Passing for Quantum Chemistry | Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels. |
Grounded Theory Research: Procedures, Canons, and Evaluative Criteria | Using grounded theory as an example, this paper examines three methodological questions that are generally applicable to all qualitative methods. How should the usual scientific canons be reinterpreted for qualitative research? How should researchers report the procedures and canons used in their research? What evaluative criteria should be used in judging the research products? We propose that the criteria should be adapted to fit the procedures of the method. We demonstrate how this can be done for grounded theory and suggest criteria for evaluating studies following this approach. We argue that other qualitative researchers might be similarly specific about their procedures and evaluative criteria. |
Hypotensive effects of solitary addition of conventional nonfat dairy products to the routine diet: a randomized controlled trial. | BACKGROUND
The high consumption of low-fat and nonfat dairy products is associated with reduced risk of high blood pressure.
OBJECTIVE
We aimed to investigate whether the solitary addition of nonfat dairy products to the normal routine diet was capable of lowering blood pressure in middle-aged and older adults with elevated blood pressure.
DESIGN
With the use of a randomized, crossover intervention-study design, 49 adults (56% women) with elevated blood pressure (mean ± SEM age: 53 ± 2 y; systolic blood pressure: 135 ± 1; diastolic blood pressure: 80 ± 1 mm Hg) underwent a high-dairy condition (+4 servings conventional nonfat dairy products/d) and isocaloric no-dairy condition (+4 servings fruit products/d) in which all dairy products were removed. Both dietary conditions lasted 4 wk with a 2-wk washout before crossing over into the alternate condition.
RESULTS
The high-dairy condition produced reductions in systolic blood pressure (135 ± 1 to 127 ± 1 mm Hg) and pulse pressure (54 ± 1 to 48 ± 1 mm Hg) (both P < 0.05). The hypotensive effects were observed within 3 wk after the initiation of the dietary intervention and in both casual seated and ambulatory (24-h) measurements (P < 0.05). Pulse pressure was increased after the removal of all dairy products in the no-dairy condition (54 ± 1 to 56 ± 1 mm Hg; P < 0.05). There were no changes in diastolic blood pressure after either dietary condition.
CONCLUSION
We concluded that the solitary manipulation of conventional dairy products in the normal routine diet would modulate blood pressure in middle-aged and older adults with prehypertension and hypertension. This trial was registered at clinicaltrials.gov as NCT01577030. |
Keloids and Hypertrophic Scars: Pathophysiology, Classification, and Treatment. | BACKGROUND
Keloid and hypertrophic scars represent an aberrant response to the wound healing process. These scars are characterized by dysregulated growth with excessive collagen formation, and can be cosmetically and functionally disruptive to patients.
OBJECTIVE
The objectives are to describe the pathophysiology of keloid and hypertrophic scars and to compare them with the normal wound healing process. The classification of keloids and hypertrophic scars is then discussed. Finally, various treatment options including prevention, conventional therapies, surgical therapies, and adjuvant therapies are described in detail.
MATERIALS AND METHODS
Literature review was performed identifying relevant publications pertaining to the pathophysiology, classification, and treatment of keloid and hypertrophic scars.
RESULTS
Though the pathophysiology of keloid and hypertrophic scars is not completely known, various cytokines have been implicated, including interleukin (IL)-6, IL-8, and IL-10, as well as various growth factors including transforming growth factor-beta and platelet-derived growth factor. Numerous treatments have been studied for keloid and hypertrophic scars, which include conventional therapies such as occlusive dressings, compression therapy, and steroids; surgical therapies such as excision and cryosurgery; and adjuvant and emerging therapies including radiation therapy, interferon, 5-fluorouracil, imiquimod, tacrolimus, sirolimus, bleomycin, doxorubicin, transforming growth factor-beta, epidermal growth factor, verapamil, retinoic acid, tamoxifen, botulinum toxin A, onion extract, silicone-based camouflage, hydrogel scaffold, and skin tension offloading device.
CONCLUSION
Keloid and hypertrophic scars remain a challenging condition, with potential cosmetic and functional consequences to patients. Several therapies exist which function through different mechanisms. Better understanding into the pathogenesis will allow for development of newer and more targeted therapies in the future. |
Psychometric properties of the WeeFIM in children with cerebral palsy in Turkey. | The Functional Independence Measure for Children (WeeFIM) instrument has recently been adapted and validated for non-disabled children in Turkey. The aim of this study was to validate the instrument in children with cerebral palsy (CP). One hundred and thirty-four children with CP were assessed using the WeeFIM. Reliability was tested by internal consistency, intraclass and interrater correlation coefficients (ICCs), internal construct validity by Rasch analysis, and external construct validity by correlation with the Denver II Development Test (Denver II). Mean age of the participants (70 females, 64 males) was 4y 6mo (SD 3y 8mo, range 6mo-16y). CP type was: diplegia in 37.3%, hemiplegia in 20.2%, quadriplegia in 8.2%, 'baby at risk' (i.e. infants who show neuromotor delay but cannot be classified in a CP type) in 29.9%, and other in 4.5%. Reliability of the WeeFIM was excellent with high Cronbach's alpha and ICC values ranging between 0.91 and 0.98 for the motor and cognitive scales. After collapsing response categories, both motor and cognitive scales met Rasch model expectations. Unidimensionality of the motor scale was confirmed after adjustment for local dependency of items. There was no substantive differential item functioning and strict unidimensionality for both scales was shown by analysis of the residuals. External construct validity was supported by expected high correlations with developmental ages determined by the social, fine motor function, language, and gross motor function domains of the Denver II. We conclude that the WeeFIM is a reliable and valid instrument for evaluating the functional status of Turkish children with CP. |
Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability? | • In parallel to creating new testing methods, it is imperative to develop adaptive regulations that are designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies. Key findings: In the United States, roughly 32,000 people are killed and more than two million injured in crashes every year (Bureau of Transportation Statistics, 2015). U.S. motor vehicle crashes as a whole can pose economic and social costs of more than $800 billion in a single year (Blincoe et al., 2015). And, more than 90 percent of crashes are caused by human errors (National Highway Traffic Safety Administration, 2015)—such as driving too fast and misjudging other drivers’ behaviors, as well as alcohol impairment, distraction, and fatigue. Autonomous vehicles have the potential to significantly mitigate this public health crisis by eliminating many of the mistakes that human drivers routinely make (Anderson et al., 2014; Fagnant and Kockelman, 2015). To begin with, autonomous vehicles are never drunk, distracted, or tired; these factors are involved in 41 percent, 10 percent, and 2.5 percent of all fatal crashes, respectively (National Highway Traffic Safety Administration, 2011; Bureau of Transportation Statistics, 2014b; U.S. Department of Transportation, 2015). Their performance may also be better than human drivers because of better perception (e.g., no blind spots), better decisionmaking (e.g., more-accurate planning of complex driving maneuvers like parallel parking), and better execution (e.g., faster and more-precise control of steering, brakes, and acceleration). However, autonomous vehicles might not eliminate all crashes. For instance, inclement weather and complex driving environments pose challenges for autonomous vehicles, as well as for human drivers, and autonomous vehicles might perform worse than human drivers in some cases (Gomes, 2014). There is also the potential for autonomous vehicles to pose new and |
Thermally oxidized aluminum as catalyst-support layer for vertically aligned single-walled carbon nanotube growth using ethanol | Abstract Characteristics and role of Al oxide (Al-O) films used as catalyst-support layer for vertical growth of single-walled carbon nanotubes (SWCNTs) were studied. EB-deposited Al films (20 nm) were thermally oxidized at 400 °C (10 min, static air) to produce the most appropriate surface structure of Al-O. Al-O catalyst-support layers were characterized using various analytical measurements, i.e., atomic force microscopy (AFM), X-ray photoelectron spectroscopy (XPS), and spectroscopy ellipsometry (SE). The thermally oxidized Al-O has a highly roughened surface, and also has the most suitable surface chemical states compared to other type of Al-O support layers. We suggest that the surface of thermally oxidized Al-O characterized in this work enhanced Co catalyst activity to promote the vertically aligned SWCNT growth. |
Online Biometric Authentication Using Subject-Specific Band Power features of EEG | Biometric recognition of persons based on unique features extracted from brain signals is an emerging area of research nowadays, on account of the subject-specificity of human neural activity. This paper proposes an online Electroencephalogram (EEG) based biometric authentication system using band power features extracted from alpha, beta and gamma bands, when the subject is in relaxed rest state with eyes open or closed. The most distinct band features are chosen specifically for each subject which are then used to generate subject-specific template during enrollment. During online authentication, recorded test EEG pattern is matched with the respective template stored in the database and degree of matching in terms of its correlation coefficient predicts the genuineness of the claimant. A number of client and imposter authentication tests have been conducted in online framework among 6 subjects using the proposed system, and achieves an average recognition rate of 88.33% using 14 EEG channels. Experimental analysis shows the subject-specificity of distinct bands and features, and highlights the utility of subject-specific band power features in EEG-based biometric systems. |
Principles and Safety Measures of Electrosurgery in Laparoscopy | BACKGROUND
Electrosurgical units are the most common type of electrical equipment in the operating room. A basic understanding of electricity is needed to safely apply electrosurgical technology for patient care.
METHODS
We reviewed the literature concerning the essential biophysics, the incidence of electrosurgical injuries, and the possible mechanisms for injury. Various safety guidelines pertaining to avoidance of injuries were also reviewed.
RESULTS
Electrothermal injury may result from direct application, insulation failure, direct coupling, capacitive coupling, and so forth.
CONCLUSION
A thorough knowledge of the fundamentals of electrosurgery by the entire team in the operating room is essential for patient safety and for recognizing potential complications. Newer hemostatic technologies can be used to decrease the incidence of complications. |
A New Approach to Train Convolutional Neural Networks for Real-Time 6-DOF Camera Relocalization | Recently proposed machine learning methods for the regression problem of camera relocalization have achieved satisfactory results by training Convolutional Neural Networks as regressors. However, Convolutional Neural Networks are mainly designed to solve classification problems and perform well when sufficient data are available for each label. In regression problems such as camera relocalization, the floating-point output space has a wide range and the number of training samples for each label of the output space is low. In this paper, a new approach to train Convolutional Neural Networks for regression problems such as 6-DOF camera relocalization is proposed, which converts the problem into a classification one and then generalizes it to regression. |
Intestinal Lesions Are Associated with Altered Intestinal Microbiome and Are More Frequent in Children and Young Adults with Cystic Fibrosis and Cirrhosis | BACKGROUND AND AIMS
Cirrhosis (CIR) occurs in 5-7% of cystic fibrosis (CF) patients. We hypothesized that alterations in intestinal function in CF contribute to the development of CIR.
AIMS
To determine the frequency of macroscopic intestinal lesions, intestinal inflammation, and intestinal permeability, and to characterize the fecal microbiome in CF subjects with cirrhosis (CFCIR) and CF subjects with no liver disease (CFnoLIV).
METHODS
Eleven subjects with CFCIR (6 M, 12.8 ± 3.8 yrs) and 19 matched subjects with CFnoLIV (10 M, 12.6 ± 3.4 yrs) underwent small bowel capsule endoscopy, intestinal permeability testing by the urinary lactulose:mannitol excretion ratio, fecal calprotectin determination, and fecal microbiome characterization.
RESULTS
CFCIR and CFnoLIV did not differ in key demographics or CF complications. CFCIR had higher GGT (59±51 U/L vs 17±4 p = 0.02) and lower platelet count (187±126 vs 283±60 p = 0.04) and weight (-0.86 ± 1.0 vs 0.30 ± 0.9 p = 0.002) z scores. CFCIR had more severe intestinal mucosal lesions on capsule endoscopy (score ≥4, 4/11 vs 0/19 p = 0.01). Fecal calprotectin was similar between CFCIR and CFnoLIV (166 μg/g ±175 vs 136 ± 193 p = 0.58, nl <120). Lactulose:mannitol ratio was elevated in 27/28 subjects and was slightly lower in CFCIR vs CFnoLIV (0.08±0.02 vs 0.11±0.05, p = 0.04, nl ≤0.03). Small bowel transit time was longer in CFCIR vs CFnoLIV (195±42 min vs 167±68 p<0.001, nl 274 ± 41). Bacteroides were decreased in relative abundance in CFCIR and were associated with lower capsule endoscopy score whereas Clostridium were more abundant in CFCIR and associated with higher capsule endoscopy score.
CONCLUSIONS
CFCIR is associated with increased intestinal mucosal lesions, slower small bowel transit time and alterations in fecal microbiome. Abnormal intestinal permeability and elevated fecal calprotectin are common in all CF subjects. Disturbances in intestinal function in CF combined with changes in the microbiome may contribute to the development of hepatic fibrosis and intestinal lesions. |
Architectural Styles, Design Patterns, And Objects | Architectural styles, object-oriented design, and design patterns all hold promise as approaches that simplify software design and reuse by capturing and exploiting system design knowledge. This article explores the capabilities and roles of the various approaches, their strengths, and their limitations. Software system builders increasingly recognize the importance of exploiting design knowledge in the engineering of new systems. Several distinct but related approaches hold promise. One approach is to focus on the architectural level of system design—the gross structure of a system as a composition of interacting parts. Architectural designs illuminate such key issues as scaling and portability, the assignment of functionality to design elements, interaction protocols between elements, and global system properties such as processing rates, end-to-end capacities, and overall performance. Architectural descriptions tend to be informal and idiosyncratic: box-and-line diagrams convey essential system structure, with accompanying prose explaining the meaning of the symbols. Nonetheless, they provide a critical staging point for determining whether a system can meet its essential requirements, and they guide implementers in constructing the system. More recently, architectural descriptions have been used for codifying and reusing design knowledge. Much of their power comes from use of idiomatic architectural terms, such as “client-server system,” “layered system,” or “blackboard organization.” |
A unified distance measurement for orientation coding in palmprint verification | Orientation coding based palmprint verification methods, such as competitive code, palmprint orientation code and robust line orientation code, are state-of-the-art verification algorithms with fast matching speeds. Orientation code makes use of two types of distance measure, SUM_XOR (angular distance) and OR_XOR (Hamming distance), yet little is known about their similarities and differences. In this paper, a unified distance measure is presented of which both SUM_XOR and OR_XOR can be regarded as special cases, and some principles are provided for determining the parameters of the unified distance. Experimental results show that, using the same feature extraction and coding methods, the unified distance measure gets lower equal error rates than the original distance measures. |
Dielectric waveguide with planar multi-mode excitation for high data-rate chip-to-chip interconnects | An all-electrical, low-cost, wideband chip-to-chip link on a multi-mode dielectric waveguide is proposed. The signal is coupled from the silicon chip to the fundamental and polarization-orthogonal degenerate Ex11 and Ey11 waveguide modes using planar electric and slot dipole antennas, respectively. This approach doubles the capacity of a single line without sacrificing robustness or adding implementation cost and complexity. Two independent ultra-wideband 30GHz channels, each from 90 GHz to 120 GHz, are demonstrated. The large available bandwidth will be channelized in frequency for optimal overall efficiency with a CMOS transceiver. Various design aspects of the structure are examined and discussed. The proposed waveguide offers a solution for Terabit-per-second (Tbps) electrical wireline links. |
Randomized Trial of Time-Limited Interruptions of Protease Inhibitor-Based Antiretroviral Therapy (ART) vs. Continuous Therapy for HIV-1 Infection | BACKGROUND
The clinical outcomes of short interruptions of PI-based ART regimens remain undefined.
METHODS
A 2-arm non-inferiority trial was conducted on 53 HIV-1 infected South African participants with viral load <50 copies/ml and CD4 T cell count >450 cells/µl on stavudine (or zidovudine), lamivudine and lopinavir/ritonavir. Subjects were randomized to a) sequential 2, 4 and 8-week ART interruptions or b) continuous ART (cART). Primary analysis was based on the proportion of CD4 count >350 cells(c)/ml over 72 weeks. Adherence, HIV-1 drug resistance, and CD4 count rise over time were analyzed as secondary endpoints.
RESULTS
The proportions of CD4 counts >350 cells/µl were 82.12% for the intermittent arm and 93.73% for the cART arm; the difference of 11.95% was above the defined 10% threshold for non-inferiority (upper limit of 97.5% CI, 24.1%; 2-sided CI: -0.16, 23.1). No clinically significant differences in opportunistic infections, adverse events, adherence or viral resistance were noted; after randomization, long-term CD4 rise was observed only in the cART arm.
CONCLUSION
We are unable to conclude that short PI-based ART interruptions are non-inferior to cART in retention of immune reconstitution; however, short interruptions did not lead to a greater rate of resistance mutations or adverse events than cART suggesting that this regimen may be more forgiving than NNRTIs if interruptions in therapy occur.
TRIAL REGISTRATION
ClinicalTrials.gov NCT00100646. |
Diagnostic performance of anti-β2 glycoprotein I and anticardiolipin assays for clinical manifestations of the antiphospholipid syndrome | The objective of the present study was to analyse the performance of the tests for detection of anti-β2 glycoprotein I (β2 GP I) and anticardiolipin (aCL) antibodies for identification of clinical manifestations of the antiphospholipid syndrome (APS). Patients with systemic lupus erythematosus (SLE) as well as carriers of infectious diseases such as Kala-azar, syphilis and leptospirosis were studied. Particular interest was given to the presence of clinical complications related to APS. Anticardiolipin and anti-β2 GP I antibodies were detected using an enzyme-linked immunosorbent assay (ELISA). Clinical manifestations of APS were observed in 34 of the 152 patients (22.3%) with SLE and no patient with infectious disease had such manifestations. Antibodies to cardiolipin in moderate or high levels and anti-β2 GP I were detected in 55 of 152 (36.1%) and 36 of 152 (23.6%) patients with SLE, respectively, and in 2 of 30 (6.6%) and 16 of 30 (53.3%) patients with Kala-azar, in 9 of 39 (23%) and 6 of 34 (17.6%) patients with leptospirosis, and 14 of 74 (18.9%) and 8 of 70 (11.4%) cases of syphilis, respectively. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and likelihood ratio (LR) of the anti-β2 GP I test for the identification of the clinical manifestations of APS were, respectively, 29% [95% confidence interval (CI)=24%–34%], 78% (95% CI=73–83%), 15% (95% CI=11–19%), 89% (95% CI=85–93%) and 1.38. For the aCL assay, the corresponding figures were 29% (95% CI=24–34%), 76% (95% CI=71–81%), 14% (95% CI=10–18%), 89% (95% CI=86–92%) and 1.26. As the validity and performance of the anti-β2 GP I assay were similar to those of the aCL assay in demonstrating the presence of clinical phenomena associated with APS, and due to the difficulties in performing as well as the lack of standardisation of the anti-β2 GP I test, we suggest that the test for aCL should continue to be the first one performed when the presence of APS is suspected. |
Swan neck deformities in rheumatoid arthritis: a qualitative study on the patients' perspectives on hand function problems and finger splints. | OBJECTIVE
To identify hand function problems and the reasons for choosing a specific finger splint in patients with rheumatoid arthritis (RA) and swan neck deformities.
METHODS
A qualitative study was performed alongside a randomized, controlled cross-over trial comparing the effectiveness of two types of finger splints (the silver ring splint [SRS] and the prefabricated thermoplastic splint [PTS]) in 50 patients with RA and swan neck deformities. Questions on the patients' main hand function problem and their reasons for choosing a specific splint type were asked at baseline and after using each splint. The qualitative analyses included the identification of meaning units and (sub)concepts related to hand function problems and splint preferences.
RESULTS
RA patients with swan neck deformities experience problems with flexion initiation, painful proximal interphalangeal joint hyperextension, grip activities and comprehensive hand function activities. Reasons for preferring or not preferring a specific type of finger splint included: effect, ease of use, appearance, comfort and side effects. Apart from the splint slipping off and a negative attitude towards the appearance of the splint, which appeared to be more frequently mentioned in connection with the SRS, no clear pattern of positive or negative appreciation of either type of splint could be distinguished.
CONCLUSION
RA patients with swan neck deformities experience a variety of problems, including impairments in functions and limitations in daily activities. With the prescription of finger splints, a substantial number of potentially positive and negative consequences of their use need to be taken into account. |
Methods and results of sphincter-preserving surgery for rectal cancer. | BACKGROUND
Sphincter preservation is the goal in the treatment of rectal cancer and should be considered in all patients with an intact sphincter. Sphincter preservation for tumors of the upper rectum is easily achieved, but surgical management of cancer of the mid and lower third of the rectum continues to evolve. Several recent advances may influence future treatment strategies.
METHODS
We reviewed the literature to identify the current methods of sphincter-preserving surgery and their oncologic and functional results.
RESULTS
Proctectomy with total mesorectal excision reduces the incidence of local recurrence to less than 10% while preserving genitourinary function. The use of preoperative radiotherapy may further diminish the risk of local recurrence. In selected patients, partial resection of the anal sphincter may avoid definitive colostomy without compromising oncologic outcome. In contrast, the role of local resection of rectal cancer remains controversial. Restoration of continuity by means of a colonic reservoir reduces stool frequency and urgency and improves continence when compared to a straight coloanal anastomosis. The transverse coloplasty pouch may allow pouch construction in patients in whom it is currently impossible, but long-term follow-up is not yet available.
CONCLUSIONS
Sphincter-preserving surgery is possible for the majority of patients with rectal cancer. Optimal functional results may be obtained by a nerve-sparing operative technique and by use of a colonic reservoir for reconstruction following resection of mid or low rectal cancers. |
Learning Graph Convolution Filters from Data Manifold | The Convolutional Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture local latent features. Recently, there has been an increasing interest in extending convolution operations to non-Euclidean geometry. Although various types of convolution operations have been proposed for graphs or manifolds, their connections with traditional convolution over grid-structured data are not well-understood. In this paper, we show that depthwise separable convolution can be successfully generalized for the unification of both graph-based and grid-based convolution methods. Based on this insight, we propose a novel Depthwise Separable Graph Convolution (DSGC) approach which is compatible with traditional convolution networks and subsumes existing convolution methods as special cases. It is equipped with the combined strengths in model expressiveness, compatibility (relatively small number of parameters), modularity and computational efficiency in training. Extensive experiments show the outstanding performance of DSGC in comparison with strong baselines on multi-domain benchmark datasets. |
Mitigating arcing defect at pad etch | This paper presents a method to mitigate the arcing defect encountered at pad etch. The problem was detected during wafer disposition due to an equipment alarm. Based on observation, this burnt-like defect material is observed to have inhibited the wafer surface, exposing the top metal line. The defect was observed mainly during the main etch step. The wafer is believed to have encountered plasma instability during the transition from the Main Etch (ME) to the Over Etch (OE) step; however, this is only detected through the backside helium leak alarm. The arcing defect was caused by several factors related to recipe, wafer condition, processing tool, and product design. The approach taken to mitigate these issues was to implement recipe optimization and tighter equipment parameter control. A design of experiments was conducted to find the optimal settings for backside helium flow and chucking voltage. Apart from that, the chamber mix run also plays an important role. |
Security in next generation air traffic communication networks | A multitude of wireless technologies are used by air traffic communication systems during different flight phases. From a conceptual perspective, all of them are insecure as security was never part of their design and the evolution of wireless security in aviation did not keep up with the state of the art. Recent contributions from academic and hacking communities have exploited this inherent vulnerability and demonstrated attacks on some of these technologies. However, these inputs revealed that a large discrepancy between the security perspective and the point of view of the aviation community exists. In this thesis, we aim to bridge this gap and combine wireless security knowledge with the perspective of aviation professionals to improve the safety of air traffic communication networks. To achieve this, we develop a comprehensive new threat model and analyse potential vulnerabilities, attacks, and countermeasures. Since not all of the required aviation knowledge is codified in academic publications, we examine the relevant aviation standards and also survey 242 international aviation experts. Besides extracting their domain knowledge, we analyse the awareness of the aviation community concerning the security of their wireless systems and collect expert opinions on the potential impact of concrete attack scenarios using insecure technologies. Based on our analysis, we propose countermeasures to secure air traffic communication that work transparently alongside existing technologies. We discuss, implement, and evaluate three different approaches based on physical and data link layer information obtained from live aircraft. We show that our countermeasures are able to defend against the injection of false data into air traffic control systems and can significantly and immediately improve the security of air traffic communication networks under the existing real-world constraints. Finally, we analyse the privacy consequences of open air traffic control protocols. We examine sensitive aircraft movements to detect large-scale events in the real world and illustrate the futility of current attempts to maintain privacy for aircraft owners. |
On the equivalence between CP-logic and LPADs | We give a detailed proof of the fact that the probabilistic logics of Logic Programs with Annotated Disjunctions (LPADs) and CP-logic are equivalent. This report contains a detailed proof of the fact that Logic Programs with Annotated Disjunctions (LPADs) (6) and CP-logic (5) are equivalent. Before moving on to this proof, we first present some preliminaries from lattice theory and logic programming, and summarize the definition of LPADs and CP-logic. |
Design-for-Assembly (DFA) by Reverse Engineering | "Design-for-Assembly (DFA)" is an engineering concept concerned with improving product designs for easier and less costly assembly operations. Much of the academic or industrial effort in this area has been devoted to the development of analysis tools for measuring the "assemblability" of a design. On the other hand, little attention has been paid to the actual redesign process. The goal of this paper is to develop a computer-aided tool for assisting designers in redesigning a product for DFA. One method of redesign, known as the "replay and modify" paradigm, is to replay a previous design plan, and modify the plan wherever necessary and possible, in accordance with the original design intention, for newly specified design goals [24]. The "replay and modify" paradigm is an effective redesign method because it offers a more global solution than simple local patch-ups. For such a paradigm, design information, such as the design plan and design rationale, must be recorded during design. Unfortunately, such design information is not usually available in practice. To handle the potential absence of the required design information and support the "replay and modify" paradigm, the redesign process is modeled as a reverse engineering activity. Reverse engineering roughly refers to an activity of inferring the process, e.g. the design plan, used in creating a given design, and using the inferred knowledge for design recreation or redesign. In this paper, the development of an interactive computer-aided redesign tool for Design-for-Assembly, called REVENGE (REVerse ENGineering), is presented. The architecture of REVENGE is composed of mainly four activities: design analysis, knowledge acquisition, design plan reconstruction, and case-based design modification. First a DFA analysis is performed to uncover any undesirable aspects of the design with respect to its assemblability. REVENGE then interactively solicits designers for useful design information that might not be available from standard design documents, such as design rationale. Then, a heuristic algorithm reconstructs a default design plan. A default design plan is a sequence of probable design actions that might have led to the original design. DFA problems identified during the analysis stage are mapped to the portion of the design plan from which they might have originated. Problems that originate from the earlier portion of the design plan are attacked first. A case-based approach is used to solve each problem by retrieving a similar redesign case and adapting it to the current situation. REVENGE has been implemented, and has been tested … |
Almost Linear VC-Dimension Bounds for Piecewise Polynomial Networks | We compute upper and lower bounds on the VC dimension and pseudo-dimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as W log W, where W is the number of parameters in the network. This result stands in opposition to the case where the number of layers is unbounded, in which case the VC dimension and pseudo-dimension grow as W^2. We combine our results with recently established approximation error rates and determine error bounds for the problem of regression estimation by piecewise polynomial networks with unbounded weights. |
TaskGenies: Automatically Providing Action Plans Helps People Complete Tasks | People complete tasks more quickly when they have concrete plans. However, they often fail to create such action plans. (How) can systems provide these concrete steps automatically? This article demonstrates that these benefits can also be realized when these plans are created by others or reused from similar tasks. Four experiments test these approaches, finding that people indeed complete more tasks when they receive externally-created action plans. To automatically provide plans, we introduce the Genies workflow that combines benefits of crowd wisdom, collaborative refinement, and automation. We demonstrate and evaluate this approach through the TaskGenies system, and introduce an NLP similarity algorithm for reusing plans. We demonstrate that it is possible for people to create action plans for others, and we show that it can be cost effective. |
Design of encoder and decoder for Golay code | This paper is based on a cyclic redundancy check (CRC) based encoding scheme. High-throughput and high-speed hardware for a Golay code encoder and decoder could be useful in digital communication systems. In this paper, a new algorithm has been proposed for the CRC-based encoding scheme which is devoid of any linear feedback shift registers (LFSR). In addition, efficient architectures have been proposed for both the Golay encoder and decoder, which outperform the existing architectures in terms of speed and throughput. The proposed architecture is implemented on Virtex-4 and evaluated with the Xilinx power estimator. The CRC encoder and decoder are intuitive and easy to implement, and the proposed design reduces the hardware complexity otherwise required. The proposed method improves the transmission system performance. In this architecture, our work is to design a Golay code encoder and decoder architecture using the CRC generation technique. This technique is used to reduce the circuit complexity for the data transmission and reception process. |
Difference in patient's acceptance of early versus late initiation of psychosocial support in breast cancer | The present study was performed to assess the difference in acceptance of psychosocial counseling and resulting benefits between patients with breast cancer with early or late onset. In a prospective randomized controlled study conducted over 6 months, 41 women with a new diagnosis of early breast cancer (group 1) and 43 patients with advanced breast cancer (group 2) received individually tailored psychosocial support and were compared against controls. This therapy was free of charge, and the duration of support was determined by the patients' wishes and needs. Among the patients with new onset of disease acceptance of the psychosocial counseling was high, and these patients experienced significant improvements in their quality of life. In contrast, acceptance of psychosocial counseling was low in the advanced breast cancer group and the therapy did not improve quality of life over the observation period of 6 months. Early psychosocial support in patients with breast cancer meets with a high acceptance rate and improves quality of life. |
Cost-Sensitive Learning of Deep Feature Representations From Imbalanced Data | Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method. |
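As a rough illustration of the cost-sensitive idea, a class-weighted cross-entropy loss in plain NumPy might look like the sketch below. This is not the paper's CoSen formulation (which learns the class-dependent costs jointly with the network parameters); the costs here are fixed and the data are placeholders.

```python
import numpy as np

def class_weighted_cross_entropy(probs, labels, class_costs):
    """Toy cost-sensitive loss: each example is weighted by the cost of its true class.

    probs       : (N, C) predicted class probabilities
    labels      : (N,)   integer class labels
    class_costs : (C,)   misclassification cost per class (e.g. higher for minority classes)
    """
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]   # probability assigned to the true class
    weights = class_costs[labels]                    # per-example cost
    return np.mean(-weights * np.log(picked + eps))

# Hypothetical usage: upweight a rare class (class 2) relative to the majority classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
costs = np.array([1.0, 1.0, 5.0])
print(class_weighted_cross_entropy(probs, labels, costs))
```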
Role of adjuvant chemotherapy in patients with early stage uterine papillary serous cancer. | OBJECTIVE
Uterine papillary serous carcinoma (UPSC) is an aggressive subtype of endometrial cancer. We studied survival outcomes in patients with stages I/II UPSC.
MATERIALS
A retrospective, multi-institutional study of patients with stages I/II UPSC was conducted. Patients underwent surgical staging followed by observation, adjuvant platinum-based chemotherapy (CT), or radiation therapy (RT). Continuous variables were compared via Wilcoxon rank sum test; Fisher exact test was used for the unordered categorical variables. Kaplan-Meier curves were used to estimate survival.
RESULTS
Thirty-nine women were diagnosed with stage I (n = 30) or II (n = 9) UPSC, with a median follow-up of 52 months. Of the 26 patients who did not receive adjuvant CT, 9 developed recurrences and 8 died of their disease. Of the 10 patients with no myometrial invasion who did not receive adjuvant CT, 3 developed recurrences and died. Of the 7 patients who underwent RT, 2 developed distant recurrences and died. Of the 13 patients who underwent CT, 1 developed vaginal recurrence. The 5-year overall (OS) and progression-free survival (PFS) rates for the adjuvant CT group were 100% and 92%, respectively, compared with 69% and 65% for those who did not receive CT (P = 0.002 OS, P = 0.002 PFS). The 5-year OS and PFS rates for RT group were both 71%.
CONCLUSIONS
Patients with stages I/II UPSC are at significant risk for distant recurrence and poor survival. Platinum-based adjuvant CT may decrease recurrence rate and improve survival in women with early and well-staged UPSC. |
A twin study of auditory processing indicates that dichotic listening ability is a strongly heritable trait | We administered tests commonly used in the diagnosis of auditory processing disorders (APDs) to twins recruited from the general population. We observed significant correlations in test scores between co-twins. Our analyses of test score correlations among 106 MZ and 33 DZ twin pairs indicate that dichotic listening ability is a highly heritable trait. Dichotic listening is the ability to identify and distinguish different stimuli presented simultaneously to each ear. Deficits in dichotic listening skills indicate a lesion or defect in interhemispheric information processing. Such defects or lesions can be prominent in elderly listeners, language-impaired children, stroke victims, and individuals with PAX6 mutations. Our data indicates that other auditory processing abilities are influenced by shared environment. These findings should help illuminate the etiology of APDs, and help to clarify the relationships between auditory processing abilities and learning/language disorders associated with APDs. |
Recognizing Semantic Features in Faces using Deep Learning | The human face constantly conveys information, both consciously and subconsciously. However, as basic as it is for humans to visually interpret this information, it is quite challenging for machines. Conventional semantic facial feature recognition and analysis techniques mostly lack robustness and suffer from high computation time. This paper aims to explore ways for machines to learn to interpret semantic information available in faces in an automated manner without requiring manual design of feature detectors, using the approach of Deep Learning. In this study, the effects of various factors and hyper-parameters of deep neural networks are investigated for an optimal network configuration that can accurately recognize semantic facial features such as emotion, age, gender, and ethnicity. Furthermore, the effect of high-level concepts on low-level features is explored through the analysis of the similarities in low-level descriptors of different semantic features. This paper also demonstrates a novel idea of using a deep network to generate 3-D Active Appearance Models of faces from real-world 2-D images. For a more detailed report on this work, please see [1]. |
American Sign Language Posture Understanding with Deep Neural Networks | Sign language is a visually oriented, natural, nonverbal communication medium. Sharing linguistic properties with its respective spoken language, it consists of a set of gestures, postures and facial expressions. Although sign language is a mode of communication among deaf people, most other people cannot interpret it. Therefore, it would be constructive if sign postures could be translated automatically. In this paper, a capsule-based deep neural network sign posture translator for American Sign Language (ASL) fingerspelling (postures) is presented. The performance validation shows that the approach can successfully identify sign language with an accuracy of about 99%. Unlike previous neural network approaches, which mainly used fine-tuning and transfer learning from pre-trained models, the developed capsule network architecture does not require a pre-trained model. The framework uses a capsule network with adaptive pooling, which is the key to its high accuracy. The framework is not limited to sign language understanding; it also has scope for non-verbal communication in Human-Robot Interaction (HRI). |
Increasing trend of wearables and multimodal interface for human activity monitoring: A review. | Activity recognition technology is one of the most important technologies for life-logging and for the care of elderly persons. Elderly people prefer to live in their own houses, within their own locality. If they are able to do so, several societal and economic benefits can follow. However, living alone may carry high risks. Wearable sensors have been developed to mitigate these risks and are expected to be ready for medical use. They can help in monitoring the wellness of elderly persons living alone by unobtrusively monitoring their daily activities. This study aims to review the increasing trend of wearable devices and the need for multimodal recognition for continuous or discontinuous monitoring of human activity, biological signals such as the Electroencephalogram (EEG), Electrooculogram (EOG), Electromyogram (EMG) and Electrocardiogram (ECG), and related parameters along with other symptoms. This can provide necessary assistance in times of dire need, which is crucial for the advancement of disease diagnosis and treatment. A shared control architecture with a multimodal interface can be used for applications in more complex environments where a larger number of commands must be issued, yielding better control results. |
Retrieval Mode Distinguishes the Testing Effect from the Generation Effect. | A series of four experiments examined the effects of generation vs. retrieval practice on subsequent retention. Subjects were first exposed to a list of target words. Then the subjects were shown the targets again intact for Read trials or they were shown fragments of the targets. Subjects in Generate conditions were told to complete the fragments with the first word that came to mind while subjects in Recall conditions were told to use the fragments as retrieval cues to recall words that occurred in the first part of the experiment. The instruction manipulated retrieval mode—the Recall condition involved intentional retrieval while the Generate condition involved incidental retrieval. On a subsequent test of free recall or recognition, initial recall produced better retention than initial generation. Both generation and retrieval practice disrupted retention of order information, but retrieval enhanced retention of item-specific information to a greater extent than generation. There is a distinction between the testing effect and the generation effect and the distinction originates from retrieval mode. Intentional retrieval produces greater subsequent retention than generating targets under incidental retrieval instructions. |
Digital Blur - Creative Practice at the Boundaries of Architecture, Design and Art | Digital Blur brings together some of the world's leading practitioners and thinkers from the fields of art, architecture and design, all of whom share a common desire to exploit the latest computing technologies in their creative practice. |
The Impact of Service Quality on Customer Satisfaction, Customer Loyalty and Brand Image: Evidence from Hotel Industry of Pakistan | This study aims to investigate the impact of service quality on customer satisfaction, customer loyalty and brand image. The primary data were collected from 5- and 8-star hotels of Pakistan. The response rate was 86%. The structural equation modeling (SEM) technique was used to analyze the data. The findings suggest that high quality of services boosts customer satisfaction, and that this satisfaction in turn strengthens customer loyalty; our results also match those of Brodie et al. (2009). Finally, strong customer loyalty is directly related to a strong brand image. |
Aspect radiologique de l'hallux valgus | The diagnosis of hallux valgus is a clinical one. Radiographic examination is involved only secondarily, to define the primary or secondary structural defects responsible for bony and musculotendinous malalignment. This examination should always be made under physiologic conditions, i.e., with the foot taking weight. The frontal radiograph in weight-bearing assesses the category of the foot (Egyptian, Greek, square), the degree of luxation of the sesamoids (stages 1, 2 or 3), the angular values (opening of the foot, intermetatarsal varus, interphalangeal valgus) and the linear values such as the spreading of the forefoot. The lateral radiograph in weight-bearing categorises the foot as cavus, flat or normally oriented. The Guntz Walter view indicates the thrust on the metatarsals and reveals zones of abnormal excessive thrust. Postoperatively, the same examination makes it possible to assess the outcome of the surgical procedure and to detect any over- or under-correction. |
Background Prior-Based Salient Object Detection via Deep Reconstruction Residual | Detection of salient objects from images is gaining increasing research interest in recent years as it can substantially facilitate a wide range of content-based multimedia applications. Based on the assumption that foreground salient regions are distinctive within a certain context, most conventional approaches rely on a number of hand-designed features and their distinctiveness is measured using local or global contrast. Although these approaches have been shown to be effective in dealing with simple images, their limited capability may cause difficulties when dealing with more complicated images. This paper proposes a novel framework for saliency detection by first modeling the background and then separating salient objects from the background. We develop stacked denoising autoencoders with deep learning architectures to model the background where latent patterns are explored and more powerful representations of data are learned in an unsupervised and bottom-up manner. Afterward, we formulate the separation of salient objects from the background as a problem of measuring reconstruction residuals of deep autoencoders. Comprehensive evaluations of three benchmark datasets and comparisons with nine state-of-the-art algorithms demonstrate the superiority of this paper. |
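The core mechanism described above, scoring patches by how poorly a background-trained autoencoder reconstructs them, can be sketched compactly. The snippet below is a minimal stand-in that uses a single shallow scikit-learn autoencoder and synthetic patches rather than the paper's stacked denoising autoencoders and real images; only the reconstruction-residual idea is kept.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: rows are flattened image patches (e.g. 8x8 grayscale -> 64 features).
rng = np.random.default_rng(0)
background_patches = rng.normal(0.0, 0.1, size=(500, 64))   # patches assumed to be background
all_patches = rng.normal(0.0, 0.1, size=(100, 64))
all_patches[:10] += 1.0                                      # a few "salient" outlier patches

# Shallow autoencoder: reconstruct the input through a narrow hidden layer.
ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ae.fit(background_patches, background_patches)

# Saliency score = reconstruction residual; background-like patches reconstruct well.
residual = np.linalg.norm(all_patches - ae.predict(all_patches), axis=1)
print("mean residual (salient patches):   ", residual[:10].mean())
print("mean residual (background patches):", residual[10:].mean())
```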
Perioperative glycemic control: an evidence-based review. | Hyperglycemia in perioperative patients has been identified as a risk factor for morbidity and mortality. Intensive insulin therapy (IIT) has been shown to reduce morbidity and mortality among the critically ill, decrease infection rates and improve survival after cardiac surgery, and improve outcomes in acute neurologic injury and acute myocardial infarction. However, recent evidence of severe hypoglycemia and adverse events associated with IIT brings its safety and efficacy into question. In this article, we summarize the mechanisms and rationale of hyperglycemia and IIT, review the evidence behind the use of IIT in the perioperative period, and discuss the implications of including glycemic control in national quality benchmarks. We conclude that while avoidance of hyperglycemia is clearly beneficial, the appropriate glucose target and specific subpopulations who might benefit from IIT have yet to be identified. Given the potential for harm, inclusion of glucose targets in national quality benchmarks is premature. |
Complex Linguistic Features for Text Classification: A Comprehensive Study | Previous research on advanced representations for document retrieval has shown that statistical state-of-the-art models are not improved by a variety of different linguistic representations. Phrases, word senses and syntactic relations derived by Natural Language Processing (NLP) techniques were observed to be ineffective for increasing retrieval accuracy. For Text Categorization (TC), fewer and less definitive studies on the use of advanced document representations are available, as it is a relatively new research area (compared to document retrieval). In this paper, advanced document representations have been investigated. Extensive experimentation on representative classifiers, Rocchio and SVM, as well as a careful analysis of the literature, has been carried out to study how some NLP techniques used for indexing impact TC. Cross-validation over 4 different corpora in two languages allowed us to gather overwhelming evidence that complex nominals, proper nouns and word senses are not adequate to improve TC accuracy. |
Permutation test for groups of scanpaths using normalized Levenshtein distances and application in NMR questions | This paper presents a permutation test that statistically compares two groups of scanpaths. The test uses normalized Levenshtein distances when the lengths of scanpaths are not the same. This method was applied in a recent eye-tracking experiment in which two groups of chemistry students viewed nuclear magnetic resonance (NMR) spectroscopic signals and chose the corresponding molecular structure from the candidates. A significant difference was detected between the two groups, which is consistent with the fact that students in the expert group showed more efficient scan patterns in the experiment than the novice group. Various numbers of permutations were tested and the results showed that p-values only varied in a small range with different permutation numbers and that the statistical significance was not affected. |
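The statistical machinery described in this abstract is straightforward to sketch. Assuming scanpaths are encoded as strings of AOI (area-of-interest) labels, the following Python sketch computes normalized Levenshtein distances and runs a two-group permutation test; the test statistic used here (mean between-group distance minus mean within-group distance) is an assumption, and the paper's exact statistic may differ.

```python
import itertools
import random

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def norm_lev(a: str, b: str) -> float:
    """Normalize by the longer length so scanpaths of unequal length are comparable."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

def statistic(group1, group2) -> float:
    """Assumed statistic: mean between-group distance minus mean within-group distance."""
    between = [norm_lev(a, b) for a in group1 for b in group2]
    within = [norm_lev(a, b) for g in (group1, group2)
              for a, b in itertools.combinations(g, 2)]
    return sum(between) / len(between) - sum(within) / len(within)

def permutation_test(group1, group2, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = statistic(group1, group2)
    pooled = list(group1) + list(group2)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if statistic(pooled[:len(group1)], pooled[len(group1):]) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)   # observed statistic and p-value

# Hypothetical scanpaths over AOIs labeled A-E (expert vs. novice viewing patterns).
experts = ["ABCE", "ABCDE", "ABCE", "ABDE"]
novices = ["AEDCB", "EDCBA", "AEDB", "EDACB"]
print(permutation_test(experts, novices, n_perm=500))
```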
Long-term consistent use of a vaginal microbicide gel among HIV-1 sero-discordant couples in a phase III clinical trial (MDP 301) in rural south-west Uganda | BACKGROUND
A safe and effective vaginal microbicide could substantially reduce HIV acquisition for women. Consistent gel use is, however, of great importance to ensure continued protection against HIV infection, even with a safe and effective microbicide. We assessed the long-term correlates of consistent gel use in the MDP 301 clinical trial among HIV-negative women in sero-discordant couples in south-west Uganda.
METHODS
HIV-negative women living with an HIV-infected partner were enrolled between 2005 and 2008, in a three-arm phase III microbicide trial and randomized to 2% PRO2000, 0.5% PRO2000 or placebo gel arms. Follow-up visits continued up to September 2009. The 2% arm was stopped early due to futility and the 229 women enrolled in this arm were excluded from this analysis. Data were analyzed on 544 women on the 0.5% and placebo arms who completed at least 52 weeks of follow-up, sero-converted or became pregnant before 52 weeks. Consistent gel use was defined as satisfying all of the following three conditions: (i) reported gel use at the last sex act for at least 92% of the 26 scheduled visits or at least 92% of the visits attended if fewer than 26; (ii) at least one used applicator returned for each visit for which gel use was reported at the last sex act; (iii) attended at least 13 visits (unless the woman sero-converted or became pregnant during follow-up). Logistic regression models were fitted to investigate factors associated with consistent gel use.
RESULTS
Of the 544 women, 473 (86.9%) were followed for at least 52 weeks, 29 (5.3%) sero-converted and 42 (7.7%) became pregnant before their week 52 visit. Consistent gel use was reported by 67.8%. Women aged 25 to 34 years and those aged 35 years or older were both more than twice as likely to have reported consistently using gel compared to women aged 17 to 24 years. Living in a household with three or more rooms used for sleeping compared to one room was associated with a twofold increase in consistent gel use.
CONCLUSION
In rural Uganda younger women and women in houses with less space are likely to require additional support to achieve consistent microbicide gel use.
TRIAL REGISTRATION
Protocol Number ISRCTN64716212. |
Cuffless blood pressure estimation by error-correcting output coding method based on an aggregation of AdaBoost with a photoplethysmograph sensor | This paper presents a novel cuffless and non-invasive technique of Blood Pressure (BP) estimation with a pattern recognition method, using a Photoplethysmograph (PPG) sensor instead of a cuff. The Error-Correcting Output Coding (ECOC) method was adopted as a multi-class classifier machine based on an aggregation of general binary classifiers. AdaBoost was applied as the binary classifier machine. 368 volunteers participated in the experiment. The estimated Systolic Blood Pressure (SBP) was calculated from their individual information and several features of their Pulse Wave (PW). Comparison between measured and estimated SBP gave MD = -1.2 [mmHg] and SD = 11.7 [mmHg]. Hence, this technique could help advance the development of continuous BP monitoring systems, since the single device required to monitor BP is smaller than traditional measurement equipment. |
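The classifier structure described (ECOC built over AdaBoost binary learners) has a close off-the-shelf analogue in scikit-learn. The sketch below is only an illustration of that structure: the features are random placeholders rather than PPG pulse-wave features, the SBP bins are invented, and the authors did not necessarily use scikit-learn.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: rows would be per-subject pulse-wave features plus individual
# information; labels are assumed SBP bins (e.g. <110, 110-130, 130-150, >150 mmHg).
rng = np.random.default_rng(0)
X = rng.normal(size=(368, 12))
y = rng.integers(0, 4, size=368)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# ECOC multi-class machine aggregating AdaBoost binary classifiers.
ecoc = OutputCodeClassifier(
    AdaBoostClassifier(n_estimators=100, random_state=0),
    code_size=2.0,
    random_state=0,
)
ecoc.fit(X_tr, y_tr)
print("held-out accuracy:", ecoc.score(X_te, y_te))
```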
Learning from Delayed Outcomes with Intermediate Observations | Optimizing for long term value is desirable in many practical applications, e.g. recommender systems. The most common approach for long term value optimization is supervised learning using long term value as the target. Unfortunately, long term metrics take a long time to measure (e.g., will customers finish reading an ebook?), and vanilla forecasters cannot learn from examples until the outcome is observed. In practical systems where new items arrive frequently, such delay can increase the training-serving skew, thereby negatively affecting the model’s predictions for new products. We argue that intermediate observations (e.g., if customers read a third of the book in 24 hours) can improve a model’s predictions. We formalize the problem as a semi-stochastic model, where instances are selected by an adversary but, given an instance, the intermediate observation and the outcome are sampled from a factored joint distribution. We propose an algorithm that exploits intermediate observations and theoretically quantify how much it can outperform any prediction method that ignores the intermediate observations. Motivated by the theoretical analysis, we propose two neural network architectures: Factored Forecaster (FF) which is ideal if our assumptions are satisfied, and Residual Factored Forecaster (RFF) that is more robust to model mis-specification. Experiments on two real world datasets, a dataset derived from GitHub repositories and another dataset from a popular marketplace, show that RFF outperforms both FF as well as an algorithm that ignores intermediate observations. |
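The factored idea can be sketched with a discrete intermediate observation z: learn p(z | x) from fresh items (where z is observed quickly) and p(y | z) from older items (where the delayed outcome is known), then combine them. The snippet below is a minimal sketch under these assumptions with synthetic data; it is not the paper's FF/RFF neural architectures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: x = item features, z = intermediate observation
# (e.g. "read a third of the book within 24h"), y = delayed outcome ("finished the book").
X = rng.normal(size=(2000, 5))
z = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
y = ((z == 1) & (X[:, 1] + rng.normal(size=2000) > 0)).astype(int)

# Factored forecaster: p(y | x) = sum_z p(y | z) * p(z | x).
pz_model = LogisticRegression().fit(X, z)                        # p(z | x), learnable quickly
py_given_z = np.array([y[z == 0].mean(), y[z == 1].mean()])      # p(y | z), from mature items

pz = pz_model.predict_proba(X)[:, 1]
py = (1 - pz) * py_given_z[0] + pz * py_given_z[1]
print("factored forecast of mean outcome:", py.mean(), " true mean:", y.mean())
```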
Reinforcement learning can account for associative and perceptual learning on a visual decision task | We recently showed that improved perceptual performance on a visual motion direction–discrimination task corresponds to changes in how an unmodified sensory representation in the brain is interpreted to form a decision that guides behavior. Here we found that these changes can be accounted for using a reinforcement-learning rule to shape functional connectivity between the sensory and decision neurons. We modeled performance on the basis of the readout of simulated responses of direction-selective sensory neurons in the middle temporal area (MT) of monkey cortex. A reward prediction error guided changes in connections between these sensory neurons and the decision process, first establishing the association between motion direction and response direction, and then gradually improving perceptual sensitivity by selectively strengthening the connections from the most sensitive neurons in the sensory population. The results suggest a common, feedback-driven mechanism for some forms of associative and perceptual learning. |
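The learning rule summarized here (a reward prediction error gating updates of the readout weights from sensory to decision units) can be caricatured in a few lines. The sketch below is a toy re-implementation of that idea, not the authors' MT population model; the population statistics, learning rates, and trial counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 50
# Hypothetical MT-like population: each neuron has a preferred direction (+1 or -1) and a
# sensitivity; firing = baseline + sensitivity when the stimulus matches its preference, plus noise.
preferred = rng.choice([-1.0, 1.0], size=N_NEURONS)
sensitivity = rng.uniform(0.1, 1.0, size=N_NEURONS)

w = np.zeros(N_NEURONS)   # readout weights shaped by learning
value = 0.5               # running estimate of expected reward
eta_w, eta_v = 0.05, 0.02

for trial in range(5000):
    stim = rng.choice([-1.0, 1.0])                                   # true motion direction
    rates = 1.0 + sensitivity * (preferred == stim) + rng.normal(0, 0.5, N_NEURONS)
    choice = 1.0 if rates @ w + rng.normal(0, 0.1) > 0 else -1.0
    reward = 1.0 if choice == stim else 0.0
    rpe = reward - value                                             # reward prediction error
    value += eta_v * rpe
    w += eta_w * rpe * choice * rates   # strengthen connections that drove a rewarded choice

# After learning, weights should correlate with each neuron's preferred direction x sensitivity,
# i.e. the most sensitive, correctly tuned neurons dominate the readout.
print("corr(w, preferred*sensitivity) =", np.corrcoef(w, preferred * sensitivity)[0, 1])
```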
Many roads lead to Rome: mapping users' problem solving strategies | Especially in ill-defined problems like complex, real-world tasks more than one way leads to a solution. Until now, the evaluation of information visualizations was often restricted to measuring outcomes only (time and error) or insights into the data set. A more detailed look into the processes which lead to or hinder task completion is provided by analyzing users' problem solving strategies. A study illustrates how they can be assessed and how this knowledge can be used in participatory design to improve a visual analytics tool. In order to provide the users a tool which functions as a real scaffold, it should allow them to choose their own path to Rome. We discuss how evaluation of problem solving strategies can shed more light on the users' "exploratory minds". |
Design and Analysis of a 3-Arm Spiral Antenna | A novel 3-arm spiral antenna structure is presented in this paper. This antenna similar to traditional two-arm or four-arm spiral antennas exhibits wideband radiation characteristic and circular polarization. Advantages offered by the new design are two fold. Unlike the traditional spiral antennas the three-arm spiral can be fed by an unbalanced transmission line, such as a coaxial line or coplanar waveguide, and therefore an external balun is not needed at the feed point. Also by proper choice of arms' dimensions the antenna can be directly matched to any practical transmission line characteristic impedance and therefore external matching networks are not required. This is accomplished by feeding the antenna at the outer radius by a coplanar waveguide (CPW) transmission line and tapering it towards the center. The antenna can also be fed from the center using a coaxial or CPW line perpendicular to the plane of the spiral antenna. A full-wave numerical simulation tool is used to optimize the geometry of the proposed 3-arm spiral to achieve a compact size, wide bandwidth operation, and low axial ratio. The antenna is also designed over a ground plane to achieve a unidirectional radiation and center loading is examined that improves the axial ratio. Simulated results like return loss, radiation pattern, gain, and axial ratio are compared with those obtained from measurements and good agreements are shown. Because of its unique feed structure and compact size, application of the proposed 3-arm spiral antenna for wideband array applications is demonstrated |
ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera | This paper proposes a real-time dynamic scene reconstruction method capable of reproducing the motion, geometry, and segmentation simultaneously given live depth stream from a single RGB-D camera. Our approach fuses geometry frame by frame and uses a segmentationenhanced node graph structure to drive the deformation of geometry in registration step. A two-level node motion optimization is proposed. The optimization space of node motions and the range of physically-plausible deformations are largely reduced by taking advantage of the articulated motion prior, which is solved by an efficient node graph segmentation method. Compared to previous fusion-based dynamic scene reconstruction methods, our experiments show robust and improved reconstruction results for tangential and occluded motions. |
Fast and viewpoint robust human detection for SAR operations | There are many advantages in using UAVs for search and rescue operations. However, detecting people from a UAV remains a challenge: the embedded detector has to be fast enough and viewpoint robust to detect people in a flexible manner from aerial views. In this paper we propose a processing pipeline to 1) reduce the search space using infrared images and to 2) detect people whatever the roll and pitch angles of the UAV's acquisition system. We tested our approach on a multimodal aerial view dataset and showed that it outperforms the Integral Channel Features (ICF) detector in this context. Moreover, this approach allows real-time compatible detection. |
Reactions to ischemic pain: Interactions between individual, situational and naloxone effects | Fifty-two paid volunteers participated in two separate factorial investigations of the effects of naloxone on time tolerance of and affective reactions to ischemia, as a function of the interaction between expectations of involvement in the experimental situation and experimental variables involving stress or suggestions of analgesia. Naloxoneinduced reduction in tolerance to ischemia interacted significantly with the level of involvement expectancies. The suggestion of analgesia provided no significant naloxonesaline discriminations, but there was a significant interaction between variable memory task conditions and drug effects on the time ischemia was tolerated. These findings suggest that naloxone-opiate receptor interactions may depend on individual differences in attitudes to the situation, but may be potentiated by select environmental stimuli. Analyses of the effects of treatment on affective reactions to ischemia failed to show consistent results. |
Synthetic therapeutic peptides: science and market. | The decreasing number of approved drugs produced by the pharmaceutical industry, which has been accompanied by increasing expenses for R&D, demands alternative approaches to increase pharmaceutical R&D productivity. This situation has contributed to a revival of interest in peptides as potential drug candidates. New synthetic strategies for limiting metabolism and alternative routes of administration have emerged in recent years and resulted in a large number of peptide-based drugs that are now being marketed. This review reports on the unexpected and considerable number of peptides that are currently available as drugs and the chemical strategies that were used to bring them into the market. As demonstrated here, peptide-based drug discovery could be a serious option for addressing new therapeutic challenges. |
Why Machine Ethics? | Machine ethics isn't merely science fiction; it's a topic that requires serious consideration, given the rapid emergence of increasingly complex autonomous software agents and robots. Machine ethics is an emerging field that seeks to implement moral decision-making faculties in computers and robots. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. In the case of AI and robotics, fearful scenarios range from the future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots |
HIV patient insight on adhering to medication: a qualitative analysis. | Research on HIV medication adherence has relied mainly on quantitative methods. The objective of this study was to explore factors associated with adherence from the HIV-infected patient's perspective. Six focus groups were convened with treatment-experienced HIV-positive individuals. The discussions focused on issues that make it easy or difficult to adhere to HIV regimens. Thirty-five patients participated in the focus groups, which were conducted in Washington, D.C., and Los Angeles. The mean age was 48; 66% were male; 63% were black; and 40% contracted HIV through heterosexual contact. Six major themes emerged from the data that influenced adherence to medication: regimen complexity/medication features (including number of pills), lifestyle fit, emotional impacts (including worry, anger, stress and anxiety), side effects, medication effectiveness, and communication (including information from friends, physicians, and published sources). The data informed a conceptual framework, illustrating the possible interactions among these themes that can potentially be used by clinicians when discussing HIV treatment options with patients. This is potentially one of the first focus group studies concentrating on HIV medication adherence. The findings highlight specific factors that should be considered when trying to improve adherence and may be helpful in clinical decision-making. |
CMOS Voltage and Current Reference Circuits consisting of Subthreshold MOSFETs | The development of ultra-low power LSIs is a promising area of research in microelectronics. Such LSIs would be suitable for use in power-aware LSI applications such as portable mobile devices, implantable medical devices, and smart sensor networks [1]. These devices have to operate with ultra-low power, i.e., a few microwatts or less, because they will probably be placed under conditions where they have to get the necessary energy from poor energy sources such as microbatteries or energy scavenging devices [2]. As a step toward such LSIs, we first need to develop voltage and current reference circuits that can operate with an ultra-low current, several tens of nanoamperes or less, i.e., sub-microwatt operation. To achieve such low-power operation, the circuits have to be operated in the subthreshold region, i.e., a region at which the gate-source voltage of MOSFETs is lower than the threshold voltage [3; 4]. Voltage and current reference circuits are important building blocks for analog, digital, and mixed-signal circuit systems in microelectronics, because the performance of these circuits is determined mainly by their bias voltages and currents. The circuits generate a constant reference voltage and current for various other components such as operational amplifiers, comparators, AD/DA converters, oscillators, and PLLs. For this purpose, bandgap reference circuits with CMOS-based vertical bipolar transistors are conventionally used in CMOS LSIs [5; 6]. However, they need resistors with a high resistance of several hundred megaohms to achieve low-current, subthreshold operation. Such a high resistance needs a large area to be implemented, and this makes conventional bandgap references unsuitable for use in ultra-low power LSIs. Therefore, modified voltage and current reference circuits for lowpower LSIs have been reported (see [7]-[12], [14]-[17]). However, these circuits have various problems. For example, their power dissipations are still large, their output voltages and currents are sensitive to supply voltage and temperature variations, and they have complex circuits with many MOSFETs; these problems are inconvenient for practical use in ultra-low power LSIs. Moreover, the effect of process variations on the reference signal has not been discussed in detail. To solve these problems, I and my colleagues reported new voltage and current reference circuits [13; 18] that can operate with sub-microwatt power dissipation and with low sensitivity to temperature and supply voltage. Our circuits consist of subthreshold MOSFET circuits and use no resistors. |
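For orientation, the textbook expression for the subthreshold drain current that such nanoampere-level circuits exploit (a standard form, not a formula quoted from this chapter) is:

```latex
I_D = K \, I_0 \exp\!\left(\frac{V_{GS} - V_{TH}}{\eta V_T}\right)
          \left(1 - \exp\!\left(-\frac{V_{DS}}{V_T}\right)\right),
\qquad V_T = \frac{k_B T}{q}
```

where K = W/L is the transistor aspect ratio, η is the subthreshold slope factor, and V_T is the thermal voltage (about 26 mV at room temperature); for V_DS larger than a few V_T the current becomes nearly independent of V_DS, which is what makes exponential-law reference circuits practical in this region.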
Modeling and Control of Three-Phase Gravity Separators in Oil Production Facilities | Oil production facilities exhibit complex and challenging dynamic behavior. A dynamic mathematical modeling study was done to address the tasks of design, control and optimization of such facilities. The focus of this paper is on the three-phase separator, where each phase's dynamics are modeled. The hydrodynamics of liquid-liquid separation are modeled based on the American Petroleum Institute design criteria. Given some simplifying assumptions, the oil and gas phases' dynamic behaviors are modeled assuming vapor-liquid phase equilibrium at the oil surface. In order to validate the developed mathematical model, an oil production facility simulation was designed and implemented based on these models. The simulation model consists of a two-phase separator followed by a three-phase separator. An upset in the oil component of the incoming oil-well stream is introduced to analyze its effect on the different process variables and produced oil quality. The simulation results demonstrate the sophistication of the model in spite of its simplicity. |
Efficient graph computation on hybrid CPU and GPU systems | Graphs are used to model many real objects such as social networks and web graphs. Many real applications in various fields require efficient and effective management of large-scale, graph-structured data. Although distributed graph engines such as GBase and Pregel handle billion-scale graphs, users need to be skilled at managing and tuning a distributed system in a cluster, which is a non-trivial job for ordinary users. Furthermore, these distributed systems need many machines in a cluster in order to provide reasonable performance. Several recent works proposed non-distributed graph processing platforms as complements to distributed platforms. In fact, efficient non-distributed platforms require less hardware resource and can achieve better energy efficiency than distributed ones. GraphChi is a representative non-distributed platform that is disk-based and can process billions of edges on CPUs in a single PC. However, the design drawbacks of GraphChi on I/O and computation model have limited its parallelism and performance. In this paper, we propose a general, disk-based graph engine called gGraph to process billion-scale graphs efficiently by utilizing both CPUs and GPUs in a single PC. GGraph exploits full parallelism and full overlap of computation and I/O processing as much as possible. Experiment results show that gGraph outperforms GraphChi and PowerGraph. In addition, gGraph achieves the best energy efficiency among all evaluated platforms. |
The neuroprotective agent ebselen modifies NMDA receptor function via the redox modulatory site. | Ebselen is a seleno-organic compound currently in clinical trials for the treatment of ischemic stroke and subarachnoid hemorrhage. Its putative mode of action as a neuroprotectant is via cyclical reduction and oxidation reactions, in a manner akin to glutathione peroxidase. For this reason, we have investigated the effects of ebselen on the redox-sensitive NMDA receptor. We have found that ebselen readily reversed dithiothreitol (DTT) potentiation of NMDA-mediated currents in cultured neurons and in Chinese hamster ovary (CHO) cells expressing wild-type NMDA NR1/NR2B receptors. In contrast, ebselen was unable to modulate NMDA-induced currents in neurons previously exposed to the thiol oxidant 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB), or in CHO cells expressing a mutant receptor lacking the NR1 redox modulatory site, suggesting that ebselen oxidizes the NMDA receptor via this site. In addition, ebselen was substantially less effective in modifying NMDA responses in neurons exposed to alkylating agent N-ethylmaleimide (NEM) following DTT treatment. Ebselen also reversed DTT block of carbachol-mediated currents in Cos-7 cells expressing the alpha(2)beta delta epsilon subunits of the acetylcholine receptor, an additional redox-sensitive ion channel. Ebselen was observed to significantly increase cell viability following a 30-min NMDA exposure in cultured neurons. In contrast, other more typical antioxidant compounds did not afford neuroprotection in a similar paradigm. We conclude that ebselen may be neuroprotective in part due to its actions as a modulator of the NMDA receptor redox modulatory site. |
Switching from natalizumab to fingolimod in multiple sclerosis: a French prospective study. | IMPORTANCE
The safety and efficacy of switching from natalizumab to fingolimod have not yet been evaluated in a large cohort of patients with multiple sclerosis (MS) to our knowledge.
OBJECTIVE
To collect data from patients with MS switching from natalizumab to fingolimod.
DESIGN, SETTING, AND PARTICIPANTS
The Enquête Nationale sur l'Introduction du Fingolimod en Relais au Natalizumab (ENIGM) study, a survey-based, observational multicenter cohort study among MS tertiary referral centers. Participants were patients for whom a switch from natalizumab to fingolimod was planned. Clinical data were collected on natalizumab treatment, duration and management of the washout period (WP), and relapse or adverse events during the WP and after the initiation of fingolimod.
MAIN OUTCOMES AND MEASURES
Occurrence of MS relapse during the WP or during a 6-month follow-up period after the initiation of fingolimod.
RESULTS
Thirty-six French MS tertiary referral centers participated. In total, 333 patients with MS switched from natalizumab to fingolimod after a mean of 31 natalizumab infusions (female to male ratio, 2.36; mean age, 41 years; and Expanded Disability Status Scale score at the initiation of natalizumab, 3.6). Seventy-one percent were seropositive for the JC polyomavirus. The Expanded Disability Status Scale score remained stable for patients receiving natalizumab. Twenty-seven percent of patients relapsed during the WP. A WP shorter than 3 months was associated with a lower risk of relapse (odds ratio, 0.23; P = .001) and with less disease activity before natalizumab initiation (P = .03). Patients who stopped natalizumab because of poor tolerance or lack of efficacy also had a higher risk of relapse (odds ratio, 3.20; P = .004). Twenty percent of patients relapsed during the first 6 months of fingolimod therapy. Three percent stopped fingolimod for efficacy, tolerance, or compliance issues. In the multivariate analysis, the occurrence of relapse during the WP was the only significant prognostic factor for relapse during fingolimod therapy (odds ratio, 3.80; P = .05).
CONCLUSIONS AND RELEVANCE
In this study, switching from natalizumab to fingolimod was associated with a risk of MS reactivation during the WP or shortly after fingolimod initiation. The WP should be shorter than 3 months. |
Indirect exposure to client trauma and the impact on trainee clinical psychologists: Secondary traumatic stress or vicarious traumatization? | OBJECTIVES
The study investigated the relationship between exposure to trauma work and well-being (general psychological distress, trauma symptoms, and disrupted beliefs) in trainee clinical psychologists. It also assessed the contribution of individual and situational factors to well-being.
DESIGN
A Web-based survey was employed.
METHODS
The survey comprised the General Health Questionnaire, Secondary Traumatic Stress Scale, Trauma and Attachment Belief Scale, Trauma Screening Questionnaire, and specific questions about exposure to trauma work and other individual and situational factors. The link to the online survey was sent via email to trainee clinical psychologists attending courses throughout the UK RESULTS: Five hundred sixty-four trainee clinical psychologists participated. Most trainees had a caseload of one to two trauma cases in the previous 6 months; the most common trauma being sexual abuse. Exposure to trauma work was not related to general psychological distress or disrupted beliefs but was a significant predictor of trauma symptoms. Situational factors contributed to the variance in trauma symptoms; level of stress of clinical work and quality of trauma training were significant predictors of trauma symptoms. Individual and situational factors were also found to be significant predictors of general psychological distress and disrupted beliefs.
CONCLUSIONS
This study provides support for secondary traumatic stress but lacks evidence to support belief changes in vicarious traumatization or a relationship between exposure to trauma work and general psychological distress. The measurement and validity of vicarious traumatization is discussed along with clinical, theoretical implications, and suggestions for future research.
PRACTITIONER POINTS
Secondary traumatic stress is a potential risk for trainee clinical psychologists. Training courses should (a) focus on quality of trauma training as it may be protective; (b) advocate coping strategies to reduce stress of clinical work, as the level of stress of clinical work may contribute to trauma symptoms.
LIMITATIONS INCLUDE
Exposure to trauma work only uniquely explained a small proportion of variance in trauma symptoms. The study was cross-sectional in nature therefore cannot imply causality. |
A Digital Watermark Algorithm for QR Code | Technology that combines a 2D barcode with a digital watermark is a topic of great interest in current security research. This paper presents a new digital watermark method for the QR Code (Quick Response Code). An invisible watermark is embedded in the QR Code image using watermark technology; during embedding, the DCT intermediate-frequency coefficients are compared. To prevent overflow of the QR Code in the DCT domain of the image, the QR image is fuzzy-processed and noise is added to it. In order to resist image distortion after print and scan operations, the watermark is embedded repeatedly. The watermark is extracted by using the two maximum membership degrees of fuzzy pattern recognition, without the original image. Experimental results have demonstrated the feasibility of the algorithm. |
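A simplified illustration of DCT-domain embedding by comparing two intermediate-frequency coefficients is given below. The coefficient positions, the margin, and the use of SciPy are assumptions made for the sketch; the paper's fuzzy pre-processing, repetition scheme, and fuzzy-membership extraction are not reproduced.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Assumed positions of two intermediate-frequency coefficients in an 8x8 DCT block.
P1, P2 = (3, 1), (1, 3)
MARGIN = 5.0   # enforced gap so the bit survives mild distortion

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Embed one watermark bit by ordering two mid-frequency DCT coefficients of an 8x8 block."""
    c = dctn(block.astype(float), norm="ortho")
    a, b = c[P1], c[P2]
    if bit == 1 and a < b + MARGIN:
        c[P1], c[P2] = max(a, b) + MARGIN, min(a, b)
    elif bit == 0 and b < a + MARGIN:
        c[P1], c[P2] = min(a, b), max(a, b) + MARGIN
    return idctn(c, norm="ortho")

def extract_bit(block: np.ndarray) -> int:
    """Blind extraction: the bit is the sign of the coefficient-pair ordering."""
    c = dctn(block.astype(float), norm="ortho")
    return 1 if c[P1] > c[P2] else 0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in for one QR image block
    marked = embed_bit(block, 1)
    print("recovered bit:", extract_bit(marked))
```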
Multimedia document authentication using on-line signatures as watermarks | Authentication of digital documents is an important concern as digital documents are replacing the traditional paper-based documents for official and legal purposes. This is especially true in the case of documents that are exchanged over the Internet, which could be accessed and modified by intruders. The most popular methods used for authentication of digital documents are public key encryption-based authentication and digital watermarking. Traditional watermarking techniques embed a pre-determined character string, such as the company logo, in a document. We propose a fragile watermarking system, which uses an on-line signature of the author as the watermark in a document. The embedding of a biometric characteristic such as signature in a document enables us to verify the identity of the author using a set of reference signatures, in addition to ascertaining the document integrity. The receiver of the document reconstructs the signature used to watermark the document, which is then used to verify the author’s claimed identity. The paper presents a signature encoding scheme, which facilitates reconstruction by the receiver, while reducing the chances of collusion attacks. |
Learning to Optimize | Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm, which we believe to be the first method that can automatically discover a better algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and/or the final objective value. |
Effect of lisdexamfetamine dimesylate on parent-rated measures in children aged 6 to 12 years with attention-deficit/hyperactivity disorder: a secondary analysis. | UNLABELLED
To evaluate the efficacy of lisdexamfetamine dimesylate (LDX) in school-aged children with attention-deficit/hyperactivity disorder (ADHD), using the Conners' Parent Rating Scale, Revised Short Version (CPRS-R:S) and its subscales.
METHODS
This was a secondary post hoc analysis of data from a placebo-controlled, double-blind, parallel-group, forced dose-escalation trial. Boys and girls aged 6 to 12 years with a primary diagnosis of ADHD were randomly assigned to LDX (30, 50, or 70 mg/d) or placebo. Improvement on the CPRS-R:S and its subscales (ADHD Index, hyperactivity, oppositional, and cognition) at 10:00 AM, 2:00 PM, and 6:00 PM was analyzed. Safety assessments included the identification of adverse events and were conducted throughout the study.
RESULTS
Of the 290 patients randomized, 285 were included in the intent-to-treat population. Parents noted significant improvements at all 3 assessment times on the CPRS-R:S total score and for the CPRS-R:S ADHD Index, hyperactivity, and cognition subscales, regardless of the subject baseline disease severity. For the CPRS-R:S oppositional subscale, significant improvement was noted at 10:00 AM and 2:00 PM (P < 0.01), and overall, significant improvement occurred in subjects who were more severely ill at baseline. The tolerability of LDX was comparable to that of other stimulants.
CONCLUSION
Once-daily treatment with LDX was associated with significant improvement in parent-rated assessments of ADHD-related behavior throughout the day at approximately 10:00 AM, 2:00 PM, and 6:00 PM. Lisdexamfetamine dimesylate was effective and well tolerated in this study population of children aged 6 to 12 years with ADHD. |
Training CNNs with Low-Rank Filters for Efficient Image Classification | We train CNNs with composite layers of oriented low-rank filters, of which the network learns the most effective linear combination. In effect, our networks learn a basis space for filters, based on simpler low-rank filters. We propose an initialization for composite layers of heterogeneous filters, so that such networks can be trained from scratch. Our models are faster and use fewer parameters. With a small number of full filters, our models also generalize better. |
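One common reading of "composite layers of oriented low-rank filters" is a bank of horizontal and vertical 1-D convolutions whose responses are mixed by a learned 1x1 convolution. The PyTorch sketch below reflects that reading; it is our interpretation for illustration, not necessarily the authors' exact layer or initialization.

```python
import torch
import torch.nn as nn

class LowRankConvBlock(nn.Module):
    """Composite layer: oriented low-rank (1xk and kx1) filters followed by a learned
    1x1 linear combination. A sketch of the general idea, not the paper's exact layer."""

    def __init__(self, in_ch: int, mid_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.horizontal = nn.Conv2d(in_ch, mid_ch, kernel_size=(1, k), padding=(0, k // 2))
        self.vertical = nn.Conv2d(in_ch, mid_ch, kernel_size=(k, 1), padding=(k // 2, 0))
        # The 1x1 convolution learns the most effective linear combination of the low-rank responses.
        self.mix = nn.Conv2d(2 * mid_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        responses = torch.cat([self.horizontal(x), self.vertical(x)], dim=1)
        return self.act(self.mix(responses))

if __name__ == "__main__":
    block = LowRankConvBlock(in_ch=3, mid_ch=8, out_ch=16, k=5)
    print(block(torch.randn(1, 3, 32, 32)).shape)   # -> torch.Size([1, 16, 32, 32])
```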
Statistical mechanics of complex networks | Complex networks describe a wide range of systems in nature and society. Frequently cited examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems have been modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. This article reviews the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, the authors discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, the emerging theory of evolving networks, and the interplay between topology and the network’s robustness against failures and attacks. |
Ultrawideband Double Rhombus Antenna With Stable Radiation Patterns for Phased Array Applications | A broadband microstrip-fed printed antenna is proposed for phased antenna array systems. The antenna consists of two parallel-modified dipoles of different lengths. The regular dipole shape is modified to a quasi-rhombus shape by adding two triangular patches. Using two dipoles helps maintain stable radiation patterns close to their resonance frequencies. A modified array configuration is proposed to further enhance the antenna radiation characteristics and usable bandwidth. Scanning capabilities are studied for a four-element array. The proposed antenna provides endfire radiation patterns with high gain, high front-to-back (F-to-B) ratio, low cross-polarization level, wide beamwidth, and wide scanning angles in a wide bandwidth of 103% |
QoS Aware Geographic Opportunistic Routing in Wireless Sensor Networks | QoS routing is an important research issue in wireless sensor networks (WSNs), especially for mission-critical monitoring and surveillance systems which requires timely and reliable data delivery. Existing work exploits multipath routing to guarantee both reliability and delay QoS constraints in WSNs. However, the multipath routing approach suffers from a significant energy cost. In this work, we exploit the geographic opportunistic routing (GOR) for QoS provisioning with both end-to-end reliability and delay constraints in WSNs. Existing GOR protocols are not efficient for QoS provisioning in WSNs, in terms of the energy efficiency and computation delay at each hop. To improve the efficiency of QoS routing in WSNs, we define the problem of efficient GOR for multiconstrained QoS provisioning in WSNs, which can be formulated as a multiobjective multiconstraint optimization problem. Based on the analysis and observations of different routing metrics in GOR, we then propose an Efficient QoS-aware GOR (EQGOR) protocol for QoS provisioning in WSNs. EQGOR selects and prioritizes the forwarding candidate set in an efficient manner, which is suitable for WSNs in respect of energy efficiency, latency, and time complexity. We comprehensively evaluate EQGOR by comparing it with the multipath routing approach and other baseline protocols through ns-2 simulation and evaluate its time complexity through measurement on the MicaZ node. Evaluation results demonstrate the effectiveness of the GOR approach for QoS provisioning in WSNs. EQGOR significantly improves both the end-to-end energy efficiency and latency, and it is characterized by the low time complexity. |
Archimetrix: Improved Software Architecture Recovery in the Presence of Design Deficiencies | Maintaining software systems requires up-to-date models of these systems to systematically plan, analyse and execute the necessary reengineering steps. Often, no or only outdated models of such systems exist. Thus, a reverse engineering step is needed that recovers the system's components, subsystems and connectors. However, reverse engineering methods are severely impacted by design deficiencies in the system's code base, e.g., they lead to wrong component structures. Several approaches exist today for the reverse engineering of component-based systems, however, none of them explicitly integrates a systematic design deficiency removal into the process to improve the quality of the reverse engineered architecture. Therefore, in our Archimetrix approach, we propose to regard the most relevant deficiencies with respect to the reverse engineered component-based architecture and support reengineers by presenting the architectural consequences of removing a given deficiency. We validate our approach on the Common Component Modeling Example and show that we are able to identify relevant deficiencies and that their removal leads to an improved reengineered architecture. |
Analysis and design of average current mode control using describing function-based equivalent circuit model | A small signal model for average current mode control based on equivalent circuit is proposed. The model uses the three-terminal equivalent circuit model based on linearized describing function method to include the feedback effect of the side-band frequency components of inductor current. It extends the results obtained in peak-current model control to average current mode control. The proposed small signal model is accurate up to half switching frequency, predicting the sub-harmonic instability. The proposed model is verified using SIMPLIS simulation and hardware experiments, showing good agreement with the measurement results. Based on the proposed model, a new feedback design guideline is presented. The proposed design guideline is compared with several conventional, widely used design criteria to highlight its virtue. By designing the external ramp following the proposed design guideline, quality factor of the double poles at half of switching frequency in control-to-output transfer function can be precisely controlled. This helps the feedback design to achieve widest control bandwidth and proper damping. |
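Since the abstract extends peak-current-mode results to average current mode control, the widely used peak-current-mode expression for the quality factor of the double pole at half the switching frequency (Ridley's result, cited here only as background, not as the paper's average-current-mode formula) provides orientation:

```latex
Q = \frac{1}{\pi\left(m_c D' - 0.5\right)},
\qquad m_c = 1 + \frac{S_e}{S_n},
\qquad D' = 1 - D
```

where S_e is the external-ramp slope and S_n the on-time inductor-current slope; the abstract indicates that the analogous quantity for average current mode control can be placed precisely by designing the external ramp according to the proposed guideline.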
Toward Predicting Popularity of Social Marketing Messages | Popularity of social marketing messages indicates the effectiveness of the corresponding marketing strategies. This research aims to discover the characteristics of social marketing messages that contribute to different level of popularity. Using messages posted by a sample of restaurants on Facebook as a case study, we measured the message popularity by the number of “likes” voted by fans, and examined the relationship between the message popularity and two properties of the messages: (1) content, and (2) media type. Combining a number of text mining and statistics methods, we have discovered some interesting patterns correlated to “more popular” and “less popular” social marketing messages. This work lays foundation for building computational models to predict the popularity of social marketing messages in the future. |
Outrageous but meaningful coincidences: dependent type-safe syntax and evaluation | Tagless interpreters for well-typed terms in some object language are a standard example of the power and benefit of precise indexing in types, whether with dependent types, or generalized algebraic datatypes. The key is to reflect object language types as indices (however they may be constituted) for the term datatype in the host language, so that host type coincidence ensures object type coincidence. Whilst this technique is widespread for simply typed object languages, dependent types have proved a tougher nut with nontrivial computation in type equality. In their type-safe representations, Danielsson [2006] and Chapman [2009] succeed in capturing the equality rules, but at the cost of representing equality derivations explicitly within terms. This article constructs a type-safe representation for a dependently typed object language, dubbed KIPLING, whose computational type equality just appropriates that of its host, Agda. The KIPLING interpreter example is not merely de rigeur - it is key to the construction. At the heart of the technique is that key component of generic programming, the universe. |
04 PASAJEROS ALEMANES AFICIONADOS A LA ORNITOLOGÍA ESMERALDAS | This topic was prepared with all my goodwill; in it I put the experiences and anecdotes I have gathered from people and the research I have done about my country, Ecuador. Officially the Republic of Ecuador (which literally translates as "Republic of the Equator"), it is a representative democratic republic in South America, bordered by Colombia on the north, Peru on the east and south, and the Pacific Ocean to the west. Ecuador also includes the Galapagos Islands in the Pacific, about 1,000 kilometres (620 mi) west of the mainland. The main spoken language in Ecuador is Spanish (94% of the population). Languages of official use in native communities include Quichua, Shuar, and eleven other languages. Ecuador has a land area of 283,520 km². Its capital city is Quito, which was declared a World Heritage Site by UNESCO in the 1970s for having the best preserved and least altered historic center in Latin America. The country's largest city is Guayaquil. The historic center of Cuenca, the third-largest city in the country in size and economic importance, was also declared a World Heritage Site in 1999 as an outstanding example of a planned, inland Spanish-style colonial city in the Americas. Ecuador currently ranks 71st for Global Competitiveness. |
A Semantic Concordance | A semantic concordance is a textual corpus and a lexicon so combined that every substantive word in the text is linked to its appropriate sense in the lexicon. Thus it can be viewed either as a corpus in which words have been tagged syntactically and semantically, or as a lexicon in which example sentences can be found for many definitions. A semantic concordance is being constructed to use in studies of sense resolution in context (semantic disambiguation). The Brown Corpus is the text and WordNet is the lexicon. Semantic tags (pointers to WordNet synsets) are inserted in the text manually using an interface, ConText, that was designed to facilitate the task. Another interface supports searches of the tagged text. Some practical uses for semantic concordances are proposed. 1. INTRODUCTION We wish to propose a new version of an old idea. Lexicographers have traditionally based their work on a corpus of examples taken from approved usage, but considerations of cost usually limit published dictionaries to lexical entries having only a scattering of phrases to illustrate the usages from which definitions were derived. As a consequence of this economic pressure, most dictionaries are relatively weak in providing contextual information: someone learning English as a second language will find in an English dictionary many alternative meanings for a common word, but little or no help in determining the linguistic contexts in which the word can be used to express those different meanings. Today, however, large computer memories are affordable enough that this limitation can be removed; it would now be feasible to publish a dictionary electronically along with all of the citation sentences on which it was based. The resulting combination would be more than a lexicon and more than a corpus; we propose to call it a semantic concordance. If the corpus is some specific text, it is a specific semantic concordance; if the corpus includes many different texts, it is a universal semantic concordance. We have begun constructing a universal semantic concordance in conjunction with our work on a lexical database. The result can be viewed either as a collection of passages in which words have been tagged syntactically and semantically, or as a lexicon in which illustrative sentences can be found for many definitions. At the present time, the correlation of a lexical meaning with examples in which a word is used to express that meaning must be done by hand. Manual semantic tagging is tedious; it should be done automatically as soon as it is possible to resolve word senses in context automatically. It is hoped that the manual creation of a semantic concordance will provide an appropriate environment for developing and testing those automatic procedures. 2. WORDNET: A LEXICAL DATABASE The lexical component of the universal semantic concordance that we are constructing is WordNet, an on-line lexical resource inspired by current psycholinguistic theories of human lexical memory [1, 2]. A standard, handheld dictionary is organized alphabetically; it puts together words that are spelled alike and scatters words with related meanings. Although on-line versions of such standard dictionaries can relieve a user of alphabetical searches, it is clearly inefficient to use a computer merely as a rapid page-turner. WordNet is an example of a more efficient combination of traditional lexicography and modern computer science.
The most ambitious feature of WordNet is the attempt to organize lexical information in terms of word meanings, rather than word forms. WordNet is organized by semantic relations (rather than by semantic components) within the open-class categories of noun, verb, adjective, and adverb; closed-class categories of words (pronouns, prepositions, conjunctions, etc.) are not included in WordNet. The semantic relations among open-class words include: synonymy and antonymy (which are semantic relations between words and which are found in all four syntactic categories); hyponymy and hypernymy (which are semantic relations between concepts and which organize nouns into a categorical hierarchy); meronymy and holonymy (which represent part-whole relations among noun concepts); and troponymy (manner relations) and entailment relations between verb concepts. These semantic relations were chosen to be intuitively obvious to nonlinguists and to have broad applicability throughout the lexicon. The basic elements of WordNet are sets of synonyms (or synsets), which are taken to represent lexicalized concepts. A synset is a group of words that are synonymous, in the sense that there are contexts in which they can be interchanged without changing the meaning of the statement. For example, WordNet distinguishes between the synsets: |
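As an aside, the sense-tagging workflow described above (linking each substantive word to a pointer into a WordNet synset) can be illustrated with a small sketch. NLTK's WordNet interface stands in for the lexicon here, and the naive first-synset choice is only a placeholder for the manual disambiguation the authors performed with their ConText tool; none of this code reflects the original system.

```python
# Minimal sketch of sense-tagging a sentence against WordNet synsets.
# NLTK's WordNet module is used here purely for illustration; the naive
# "first synset" choice stands in for manual sense resolution.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

sentence = ["The", "committee", "approved", "the", "budget"]
tagged = []
for word in sentence:
    synsets = wn.synsets(word.lower())
    # Record a pointer to a synset when WordNet knows the (open-class) word.
    tagged.append((word, synsets[0].name() if synsets else None))

for word, sense in tagged:
    print(f"{word:10s} -> {sense}")
```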
A model of the relationship between psychological characteristics, mobile phone addiction and use of mobile phones by Taiwanese university female students | While many studies have analyzed the psychological antecedents of mobile phone addiction and mobile phone usage behavior, their relationship with psychological characteristics remains mixed. We investigated the relationship between psychological characteristics, mobile phone addiction and use of mobile phones for 269 Taiwanese female university students who were administered Rosenberg’s self-esteem scale, Lai’s personality inventory, and a mobile phone usage questionnaire and mobile phone addiction scale. The results show that: (1) social extraversion and anxiety have positive effects on mobile phone addiction, and self-esteem has negative effects on mobile phone addiction; (2) mobile phone addiction has a positive predictive effect on mobile phone usage behavior. The results of this study identify personal psychological characteristics of Taiwanese female university students which can significantly predict mobile phone addiction; female university students with mobile phone addiction will make more phone calls and send more text messages. These results are discussed and suggestions for future research for school and university students are provided. |
On the Complexity of Teaching | While most theoretical work in machine learning has focused on the complexity of learning, recently there has been increasing interest in formally studying the complexity of teaching. In this paper we study the complexity of teaching by considering a variant of the on-line learning model in which a helpful teacher selects the instances. We measure the complexity of teaching a concept from a given concept class by a combinatorial measure we call the teaching dimension. Informally, the teaching dimension of a concept class is the minimum number of instances a teacher must reveal to uniquely identify any target concept chosen from the class. A preliminary version of this paper appeared in the Proceedings of the Fourth Annual Workshop on Computational Learning Theory, pages 303-314, August 1991. Most of this research was carried out while both authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108. |
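To make the teaching dimension concrete, here is a small, self-contained sketch that computes it by brute force for a finite concept class; the instance space and concepts are invented for illustration and do not come from the paper.

```python
# Brute-force teaching dimension of a finite concept class over a finite instance space.
# The teaching dimension is the maximum, over concepts c, of the size of the smallest
# set of labelled instances consistent with c and with no other concept in the class.
from itertools import combinations

def teaching_set_size(target, concepts, instances):
    """Smallest number of instances whose labels under `target` rule out every other concept."""
    others = [c for c in concepts if c != target]
    for k in range(len(instances) + 1):
        for subset in combinations(instances, k):
            labels = {x: (x in target) for x in subset}
            # Every other concept must disagree with at least one shown label.
            if all(any((x in c) != lab for x, lab in labels.items()) for c in others):
                return k
    return len(instances)

def teaching_dimension(concepts, instances):
    return max(teaching_set_size(c, concepts, instances) for c in concepts)

# Toy example: instance space {1..4}, concepts given as their sets of positive instances.
instances = [1, 2, 3, 4]
concepts = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
print(teaching_dimension(concepts, instances))  # prints 2: each concept needs two labelled examples
```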
Integrin signaling aberrations in prostate cancer. | Integrins are cell surface receptors for extracellular matrix proteins and play a key role in cell survival, proliferation, migration and gene expression. Integrin signaling has been shown to be deregulated in several types of cancer, including prostate cancer. This review is focused on integrin signaling pathways known to be deregulated in prostate cancer and known to promote prostate cancer progression. |
Rethinking reflective education: What would Dewey have done? | Reflective practice has largely failed to live up to its promise of offering a radical critique of technical rationality and of ushering in a new philosophy of nursing practice and education. I argue in this paper that the failure lies not with the idea of reflective practice itself, but with the way in which it has been misunderstood, misinterpreted and misapplied by managers, theorists, educators and practitioners over the past two decades. I suggest that if reflective practice is to offer a credible alternative to the current technical-rational evidence-based approach to nursing, then it needs to rediscover its radical origins in the work of John Dewey and Donald Schön. In particular, nurses need to look beyond their current fixation with reflection-on-action and engage fully with Schön's notion of the reflective practitioner who reflects in action through on-the-spot experimentation and hypothesis testing. Finally, the implications of this radical approach to reflective practice are developed in relation to the practice of nursing, education and scholarship, where they are applied to the challenge of resolving what Rittel and Webber refer to as 'wicked problems'. |
Bound states of massless fermions as a source for new physics | The contribution of interactions at short and large distances to particle masses is discussed in the framework of the standard model. |
RFID Application in Hospitals: A Case Study on a Demonstration RFID Project in a Taiwan Hospital | After manufacturing and retail marketing, healthcare is considered the next home for Radio Frequency Identification (RFID). Although in its infancy, RFID technology has great potential in healthcare to significantly reduce cost and improve patient safety and medical services. However, the challenge will be to incorporate RFID into medical practice, especially when relevant experience is limited. To explore this issue, we conducted a case study that demonstrated RFID integration into the medical world at one Taiwan hospital. The project that was studied, the Location-based Medicare Service project, was partially subsidized by the Taiwan government and was aimed at containing SARS, a dangerous disease that struck Taiwan in 2003. The project was innovative, produced several significant results, and established an infrastructure and platform that allows for other applications. In this paper we describe the development strategy, design, and implementation of the project. We discuss our findings pertaining to the collaborative development strategy, device management, data management, and value generation. These findings have important implications for developing RFID applications in healthcare organizations. |
Relationship Satisfaction: The influence of Attachment, Love Styles and Religiosity |
Using mixture design and neural networks to build stock selection decision support systems | There are three disadvantages of weighted scoring stock selection models. First, they cannot identify the relations between weights of stock-picking concepts and performances of portfolios. Second, they cannot systematically discover the optimal combination of concept weights to optimize performance. Third, they are unable to meet various investors’ preferences. This study aimed to construct weighted scoring stock selection models more efficiently in order to overcome these disadvantages. Since the weights of stock-picking concepts in a weighted scoring stock selection model can be regarded as components in a mixture, we used the simplex-centroid mixture design to obtain the experimental sets of weights. These sets of weights are simulated with US stock market historical data to obtain their performances. Performance prediction models were built with the simulated performance data set and artificial neural networks. Furthermore, optimization models reflecting investors’ preferences were built, with the performance prediction models employed as their kernel, so that the optimal weight combinations can be found with optimization techniques. The empirical performances of the optimal weighting combinations generated by the optimization models showed that they can meet various investors’ preferences and outperform those of the S&P 500 not only during the training period but also during the testing period. |
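The simplex-centroid design mentioned above has a simple combinatorial form: one design point for every non-empty subset of the mixture components, with the total weight split equally among the components in that subset. A small sketch for generating those weight vectors follows; the four concept names are placeholders, not the stock-picking concepts actually used in the study.

```python
# Generate a simplex-centroid mixture design: for every non-empty subset of the
# q mixture components (here, stock-picking concept weights), one design point
# splitting the total weight equally among the components in the subset.
from itertools import combinations

def simplex_centroid(q):
    """Return the 2**q - 1 design points of a simplex-centroid design with q components."""
    points = []
    for size in range(1, q + 1):
        for subset in combinations(range(q), size):
            point = [0.0] * q
            for i in subset:
                point[i] = 1.0 / size
            points.append(point)
    return points

# Placeholder concept names for illustration only.
concepts = ["value", "growth", "momentum", "size"]
for weights in simplex_centroid(len(concepts)):
    print(dict(zip(concepts, (round(w, 3) for w in weights))))
```

With four components this yields 15 weight combinations (vertices, edge midpoints, face centroids and the overall centroid of the simplex), each of which would then be backtested to provide training data for the performance prediction models.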
Feeling and believing: the influence of emotion on trust. | The authors report results from 5 experiments that describe the influence of emotional states on trust. They found that incidental emotions significantly influence trust in unrelated settings. Happiness and gratitude--emotions with positive valence--increase trust, and anger--an emotion with negative valence--decreases trust. Specifically, they found that emotions characterized by other-person control (anger and gratitude) and weak control appraisals (happiness) influence trust significantly more than emotions characterized by personal control (pride and guilt) or situational control (sadness). These findings suggest that emotions are more likely to be misattributed when the appraisals of the emotion are consistent with the judgment task than when the appraisals of the emotion are inconsistent with the judgment task. Emotions do not influence trust when individuals are aware of the source of their emotions or when individuals are very familiar with the trustee. |