title | abstract |
---|---|
The Use of Amnion-Derived Cellular Cytokine Solution (ACCS) in Accelerating Closure of Interstices in Explanted Meshed Human Skin Grafts | UNLABELLED
Meshed, split-thickness skin grafts, especially when required to be widely spread, do not obtain immediate biologic closure. In patients with burns that cover a large percentage of the body surface area, this leaves the patient at risk for metabolic problems and life-threatening infection.
OBJECTIVE
The purpose of this study was to determine whether amnion-derived cellular cytokine solution could improve epithelialization kinetics and accelerate closure of meshed skin graft interstices.
METHODS
Human meshed, split-thickness skin grafts were explanted to athymic "nude" rats and treated with 3 different regimens of amnion-derived cellular cytokine solution (groups I, II, and III) or normal saline (group IV) as a control. Serial wound tracings of unepithelialized interstitial wound areas were compared over time. Two different preparations of amnion-derived cellular cytokine solution were also compared with one another, one containing animal components and the other free of animal components.
RESULTS
Only 67.03% of interstices in control animals closed by day 9, compared with 92.2% closure for group I, 83.72% for group II, and 90.64% for group III. Interstices in all 3 groups treated with amnion-derived cellular cytokine solution (with or without animal-derived components) closed significantly faster than those in the control animals (P < .05). There were no statistically significant differences among the 3 amnion-derived cellular cytokine solution-treated groups.
CONCLUSIONS
These data suggest that epithelialization kinetics and interstitial closure of meshed skin grafts can be accelerated with the use of amnion-derived cellular cytokine solution, a physiologic cocktail of cytokines, and provide support for a future clinical trial. |
Examining Factors Affecting College Students' Intention to Use Web-Based Instruction Systems: Towards an Integrated Model. | With the accelerated progress of information and communication technologies (ICT), web-based instruction (WBI) is becoming a popular method for distributing and delivering educational resources. This study was conducted to explore what factors influence college students' behavioral intentions to utilize WBI systems. To achieve this aim, a WBI system was developed and employed in a vocational college in Taiwan to support learning in undergraduate courses. Drawing on concepts from the Technology Acceptance Model (TAM), the Theory of Reasoned Action (TRA), and Social Cognitive Theory (SCT), this study proposes a nomological framework and develops an instrument for measuring college students' intention to use the WBI platform. The empirical results indicate that students show great readiness and positive intentions towards the system for their web-based learning activities and suggest a possible long-term benefit from its use. The research findings provide practical suggestions for web-based instruction and may serve as guidelines for designing WBI systems that effectively advance college students' interest and engagement in the virtual learning environment. |
Domain Specialization Is Generally Unnecessary for Accelerators | Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations. |
Two-Factor Authentication Resilient to Server Compromise Using Mix-Bandwidth Devices | Two-factor authentication (TFA), enabled by hardware tokens and personal devices, is gaining momentum. The security of TFA schemes relies upon a human-memorable password p drawn from some implicit dictionary D and a t-bit device-generated one-time PIN z. Compared to password-only authentication, TFA reduces the probability of adversary’s online guessing attack to 1/(|D| ∗ 2^t) (and to 1/2^t if the password p is leaked). However, known TFA schemes do not improve security in the face of offline dictionary attacks, because an adversary who compromises the service and learns a (salted) password hash can still recover the password with O(|D|) amount of effort. This password might be reused by the user at another site employing password-only authentication. We present a suite of efficient novel TFA protocols which improve upon password-only authentication by a factor of 2^t with regards to both the online guessing attack and the offline dictionary attack. To argue the security of the presented protocols, we first provide a formal treatment of TFA schemes in general. The TFA protocols we present enable utilization of devices that are connected to the client over several channel types, formed using manual PIN entry, visual QR code capture, wireless communication (Bluetooth or WiFi), and combinations thereof. Utilizing these various communication settings we design, implement, and evaluate the performance of 13 different TFA mechanisms, and we analyze them with respect to security, usability (manual effort needed beyond typing a password), and deployability (need for additional hardware or software), showing consistent advantages over known TFA schemes. |
Refining Geometry from Depth Sensors using IR Shading Images | We propose a method to refine geometry of 3D meshes from a consumer level depth camera, e.g. Kinect, by exploiting shading cues captured from an infrared (IR) camera. A major benefit to using an IR camera instead of an RGB camera is that the IR images captured are narrow band images that filter out most undesired ambient light, which makes our system robust against natural indoor illumination. Moreover, for many natural objects with colorful textures in the visible spectrum, the subjects appear to have a uniform albedo in the IR spectrum. Based on our analyses on the IR projector light of the Kinect, we define a near light source IR shading model that describes the captured intensity as a function of surface normals, albedo, lighting direction, and distance between light source and surface points. To resolve the ambiguity in our model between the normals and distances, we utilize an initial 3D mesh from the Kinect fusion and multi-view information to reliably estimate surface details that were not captured and reconstructed by the Kinect fusion. Our approach directly operates on the mesh model for geometry refinement. We ran experiments on our algorithm for geometries captured by both the Kinect I and Kinect II, as the depth acquisition in Kinect I is based on a structured-light technique and that of the Kinect II is based on a time-of-flight technology. The effectiveness of our approach is demonstrated through several challenging real-world examples. We have also performed a user study to evaluate the quality of the mesh models before and after our refinements. |
Graphical Models for Probabilistic and Causal Reasoning | This chapter surveys the development of graphical models known as Bayesian networks, summarizes their semantical basis and assesses their properties and applications to reasoning and planning. Bayesian networks are directed acyclic graphs (DAGs) in which the nodes represent variables of interest (e.g., the temperature of a device, the gender of a patient, a feature of an object, the occurrence of an event) and the links represent causal influences among the variables. The strength of an influence is represented by conditional probabilities that are attached to each cluster of parents-child nodes in the network. Figure 1 illustrates a simple yet typical Bayesian network. It describes the causal relationships among the season of the year (X1), whether rain falls (X2) during the season, whether the sprinkler is on (X3) during that season, whether the pavement would get wet (X4), and whether the pavement would be slippery (X5). All variables in this figure are binary, taking a value of either true or false, except the root variable X1 which can take one of four values: Spring, Summer, Fall, or Winter. Here, the absence of a direct link between X1 and X5, for example, captures our understanding that the influence of seasonal variations on the slipperiness of the pavement is mediated by other conditions (e.g., the wetness of the pavement). As this example illustrates, a Bayesian network constitutes a model of the environment rather than, as in many other knowledge representation schemes (e.g., logic, rule-based systems and neural networks), a model of the reasoning process. It simulates, in fact, the causal mechanisms that operate in the environment, and thus allows the investigator to answer a variety of queries, including: associational queries, such as “Having observed A, what can we expect of B?”; abductive queries, such as “What is the most plausible explanation for a given set of observations?”; and control queries; such as “What will happen if we intervene and act on the environment?” Answers to the first type of query depend only on probabilistic |
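As a hedged illustration of the sprinkler network just described, here is a minimal, library-free sketch that encodes the DAG's chain-rule factorization and answers an associational query by brute-force enumeration. All conditional probability values are illustrative assumptions, not taken from the chapter.

```python
# Hedged sketch: the sprinkler Bayesian network (X1 season, X2 rain,
# X3 sprinkler, X4 wet, X5 slippery) with invented CPT values, queried by
# enumerating the joint distribution implied by the DAG.
import itertools

SEASONS = ["spring", "summer", "fall", "winter"]
P_season = {s: 0.25 for s in SEASONS}                                   # P(X1)
P_rain = {"spring": 0.4, "summer": 0.1, "fall": 0.4, "winter": 0.6}     # P(X2=T | X1)
P_sprinkler = {"spring": 0.3, "summer": 0.7, "fall": 0.2, "winter": 0.0}  # P(X3=T | X1)

def p_wet(rain, sprk):                                                  # P(X4=T | X2, X3)
    return {(True, True): 0.99, (True, False): 0.9,
            (False, True): 0.9, (False, False): 0.0}[(rain, sprk)]

def p_slip(wet):                                                        # P(X5=T | X4)
    return 0.7 if wet else 0.0

def joint(x1, x2, x3, x4, x5):
    """Chain-rule factorization implied by the DAG."""
    p = P_season[x1]
    p *= P_rain[x1] if x2 else 1 - P_rain[x1]
    p *= P_sprinkler[x1] if x3 else 1 - P_sprinkler[x1]
    p *= p_wet(x2, x3) if x4 else 1 - p_wet(x2, x3)
    p *= p_slip(x4) if x5 else 1 - p_slip(x4)
    return p

def query(target, evidence):
    """P(target = True | evidence) by brute-force enumeration."""
    num = den = 0.0
    for x1 in SEASONS:
        for x2, x3, x4, x5 in itertools.product([True, False], repeat=4):
            world = {"X1": x1, "X2": x2, "X3": x3, "X4": x4, "X5": x5}
            if any(world[k] != v for k, v in evidence.items()):
                continue
            p = joint(x1, x2, x3, x4, x5)
            den += p
            if world[target]:
                num += p
    return num / den

# Associational query: having observed that the pavement is slippery,
# what can we expect of rain?
print(query("X2", {"X5": True}))
```

Exact enumeration is exponential in the number of variables; the point of the Bayesian-network machinery surveyed in the chapter is to exploit the DAG structure so that such queries scale to far larger models.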
Binary inspiral: finding the right approximation | In searching for and interpreting signals from binary mergers, gravitational wave detectors need information about the features of gravitational wave bursts generated by these strong field events. Numerical relativity will ultimately provide the answers, but not on the time scale needed by the first detectors. We propose here a method in which exact numerical solutions to Einstein's equations, for periodic sources and standing waves, are used as an approximation to the strong field quasistationary epoch of inspiral. We discuss how this approximation changes the mathematical nature of the computation, and we report on progress with this problem and on remaining challenges. |
Modeling of arc welding power supply | This paper presents a method for establishing a small-signal model of a full-bridge inverter arc welding power supply via state-space averaging and linearization. On the basis of the model, the frequency response reflecting the dynamic characteristics of the arc welding power supply is analyzed by means of MATLAB. The simulation results agree with the experimental results with regard to dynamic behavior. The small-signal frequency-domain mathematical model established in this paper reflects the basic characteristics of a practical arc welding power system. |
Tumor Consistency of Pituitary Macroadenomas: Predictive Analysis on the Basis of Imaging Features with Contrast-Enhanced 3D FIESTA at 3T | BACKGROUND AND PURPOSE: Preoperative evaluation of pituitary macroadenoma tumor consistency is important for neurosurgery. Thus, we aimed to retrospectively assess the role of contrast-enhanced FIESTA in predicting the tumor consistency of pituitary macroadenomas. MATERIALS AND METHODS: Twenty-nine patients with pituitary macroadenomas underwent conventional MR imaging sequences and contrast-enhanced FIESTA before surgery. Two neuroradiologists assessed the contrast-enhanced FIESTA, contrast-enhanced T1WI, and T2WI. On the basis of surgical findings, the macroadenomas were classified by the neurosurgeons as either soft or hard. Finally, Fisher exact probability tests and unpaired t tests were used to compare predictions on the basis of the MR imaging findings with the tumor consistency, collagen content, and postoperative tumor size. RESULTS: The 29 pituitary macroadenomas were classified as either solid or mosaic types. Solid type was characterized by a homogeneous pattern of tumor signal intensity without intratumoral hyperintense dots, whereas the mosaic type was characterized by many intratumoral hyperintense dots on each MR image. Statistical analyses revealed a significant correlation between tumor consistency and contrast-enhanced FIESTA findings. Sensitivity and specificity were higher for contrast-enhanced FIESTA (1.00 and 0.88–0.92, respectively) than for contrast-enhanced T1WI (0.80 and 0.25–0.33, respectively) and T2WI (0.60 and 0.38–0.54, respectively). Compared with mosaic-type adenomas, solid-type adenomas tended to have a hard tumor consistency as well as a significantly higher collagen content and lower postoperative tumor size. CONCLUSIONS: Contrast-enhanced FIESTA may provide preoperative information regarding the consistency of macroadenomas that appears to be related to the tumor collagen content. ABBREVIATIONS: CE = contrast-enhanced; PCC = percentage of collagen content; SI = signal intensity. The transsphenoidal approach of removing pituitary adenomas has been widely adopted as a safe and effective method. Recently, the endoscopic transsphenoidal technique has been applied as a minimally invasive surgery to remove pituitary adenomas. Most pituitary adenomas are soft and thus can be adequately removed by aspiration and curettage via the transsphenoidal route. However, 5–15% of pituitary adenomas are firm and fibrous. This can occur quite often, and, unfortunately, there are no preoperative predictors of its occurrence. Thus, preoperative evaluation of tumor consistency is essential for |
DETERMINANTS OF INWARD FDI IN MONGOLIA: AN APPLICATION OF THE ARDL BOUNDS TESTING APPROACH TO COINTEGRATION | The study at hand is the first of its kind that aimed to provide a comprehensive analysis of the determinants of foreign direct investment (FDI) in Mongolia by analyzing their short-run, long-run, and Granger causal relationships. In doing so, we methodically used a series of econometric methods to ensure reliable and robust estimation results that included the augmented Dickey-Fuller and Phillips-Perron unit root tests, the most recently advanced autoregressive distributed lag (ARDL) bounds testing approach to cointegration, fully modified ordinary least squares, and the Granger causality test within the vector error-correction model (VECM) framework. Our findings revealed domestic market size and human capital to have a U-shaped relationship with FDI inflows, with an initial positive impact on FDI in the short-run, which then turns negative in the long-run. Macroeconomic instability was found to deter FDI inflows in the long-run. In terms of the impact of trade on FDI, imports were found to have a complementary relationship with FDI; while exports and FDI were found to be substitutes in the short-run. Financial development was also found to induce a deterring effect on FDI inflows in both the short- and long-run; thereby also revealing a substitutive relationship between the two. Infrastructure level was not found to have a significant impact on FDI on any conventional level, in either the short- or long-run. Furthermore, the results have exhibited significant Granger causal relationships between the variables; thereby, ultimately stressing the significance of policy choice in not only attracting FDI inflows, but also in translating their positive spill-over benefits into long-run economic growth. © 2017 AESS Publications. All Rights Reserved. |
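A hedged sketch of the first step in this kind of analysis: an augmented Dickey-Fuller unit root pre-test with statsmodels before moving on to ARDL bounds testing. The random-walk series and the 5% threshold below are assumptions for demonstration, not the study's Mongolian FDI data.

```python
# Hedged sketch: ADF unit root pre-test, the usual first step before ARDL
# bounds testing. The series below is a synthetic random walk, not the
# Mongolian FDI series used in the study.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))        # a random walk: should be I(1)

adf_stat, p_value, usedlag, nobs, crit, icbest = adfuller(series, autolag="AIC")
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.3f}")
if p_value > 0.05:
    print("Fail to reject a unit root; difference the series and re-test "
          "before proceeding to the ARDL bounds test.")
```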
Highly programmable wavelength selective switch based on liquid crystal on silicon switching elements | We present a novel wavelength selective switch (WSS) based on a liquid crystal on silicon (LCOS) switching element. The unit operates simultaneously at both 50 and 100 GHz channel spacing and is compatible with 40 G transmission requirements. |
Conceptual Business Document Modeling using UN/CEFACT's Core Components | Before two businesses can engage in a business-to-business process, an agreement about the process execution order and the business documents exchanged in the collaborative process must be found. Although several industry groups have started standardization initiatives for business documents, a set of shortcomings still remains: (1) the different standards do not have a common semantic basis, causing interoperability problems between them, and they (2) try to include every possible element any industry might need into the business document standard. (3) Moreover, most of the standards are transfer-specific and (4) do not provide a conceptual representation mechanism. In this article, a new concept for the standardization of business documents called UN/CEFACT's Core Components Technical Specification is presented which solves these shortcomings. Using Core Components, the business document modeler can unambiguously define documents with a common semantic basis on a conceptual level. In order to allow for better integration into UML modeling tools, we introduce the UML Profile for Core Components. With the UML-based Core Components model and an XML schema generator, the modeler can derive XML schema artifacts from the conceptual model. |
Survey and behavioral measurements of interpersonal trust | Although many studies treat trust as a situational construct, individual differences can be used to study and predict trusting behavior. We report two studies, the first showing the psychometric properties of a new trust inventory (the Propensity to Trust Survey or PTS), the second validating this inventory using a standard economic task, the Investment Game. The first study utilized online survey data (N > 8000) to show that the PTS scales were reliable and measured broad constructs related to Big Five personality domains. Trust was positively related to extraversion and negatively related to neuroticism, and trustworthiness was related to agreeableness and conscientiousness. The second study (N = 90) validated the PTS trust scale as a predictor of behavior in the Investment Game. These findings are evidence that trust and trustworthiness are compound personality traits, and that PTS scales are preferable to general Big Five measures for predicting trusting behavior. 2008 Elsevier Inc. All rights reserved. |
Semi-Dense Depth Interpolation using Deep Convolutional Neural Networks | With advances of recent technologies, augmented reality systems and autonomous vehicles gained a lot of interest from academics and industry. Both these areas rely on scene geometry understanding, which usually requires depth map estimation. However, in case of systems with limited computational resources, such as smartphones or autonomous robots, high resolution dense depth map estimation may be challenging. In this paper, we study the problem of semi-dense depth map interpolation along with low resolution depth map upsampling. We present an end-to-end learnable residual convolutional neural network architecture that achieves fast interpolation of semi-dense depth maps with different sparse depth distributions: uniform, sparse grid and along intensity image gradient. We also propose a loss function combining classical mean squared error with perceptual loss widely used in intensity image super-resolution and style transfer tasks. We show that with some modifications, this architecture can be used for depth map super-resolution. Finally, we evaluate our results on both synthetic and real data, and consider applications for autonomous vehicles and creating AR/MR video games. |
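A hedged PyTorch sketch of the kind of combined objective described above: pixel-wise MSE on the depth map plus a VGG-feature (perceptual) term. The VGG slice, the weight alpha, the channel replication, and the omission of ImageNet normalization are illustrative assumptions, not the authors' exact loss or architecture.

```python
# Hedged sketch: combined depth-reconstruction loss = MSE + perceptual term.
# Not the paper's exact formulation; layer choice and weighting are assumed.
import torch
import torch.nn as nn
import torchvision.models as models

class CombinedDepthLoss(nn.Module):
    def __init__(self, alpha: float = 0.1):
        super().__init__()
        # Frozen VGG16 feature extractor (assumes torchvision >= 0.13)
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()
        self.alpha = alpha
        self.mse = nn.MSELoss()

    def forward(self, pred_depth: torch.Tensor, gt_depth: torch.Tensor):
        # Pixel-wise term on the depth maps themselves
        loss = self.mse(pred_depth, gt_depth)
        # Perceptual term: replicate the single depth channel to 3 channels
        pred3 = pred_depth.repeat(1, 3, 1, 1)
        gt3 = gt_depth.repeat(1, 3, 1, 1)
        return loss + self.alpha * self.mse(self.vgg(pred3), self.vgg(gt3))

# usage: criterion = CombinedDepthLoss(); loss = criterion(pred, gt)  # (N,1,H,W)
```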
How and how not to correct for CSF-contamination in diffusion MRI | Diffusion MRI is used extensively to investigate changes in white matter microstructure related to brain development and pathology. Ageing, however, is also associated with significant white and grey matter loss which in turn can lead to cerebrospinal fluid (CSF) based partial volume artefacts in diffusion MRI metrics. This is especially problematic in regions prone to CSF contamination, such as the fornix and the genu of the corpus callosum, structures that pass through or close to the ventricles respectively. The aim of this study was to model the effects of CSF contamination on diffusion MRI metrics, and to evaluate different post-acquisition strategies to correct for CSF-contamination: controlling for whole brain volume and correcting on a voxel-wise basis using the Free Water Elimination (FWE) approach. Using the fornix as an exemplar of a structure prone to CSF-contamination, corrections were applied to tract-specific and voxel-based [tract based spatial statistics (TBSS)] analyses of empirical DT-MRI data from 39 older adults (53-93 years of age). In addition to significant age-related decreases in whole brain volume and fornix tissue volume fraction, age was also associated with a reduction in mean fractional anisotropy and an increase in diffusivity metrics in the fornix. The experimental data agreed with the simulations in that diffusivity metrics (mean diffusivity, axial and radial diffusivity) were more prone to partial volume CSF-contamination errors than fractional anisotropy. After FWE-based voxel-by-voxel partial volume corrections, the significant positive correlations between age and diffusivity metrics, in particular with axial diffusivity, disappeared whereas the correlation with anisotropy remained. In contrast, correcting for whole brain volume had little effect in removing these spurious correlations. Our study highlights the importance of correcting for CSF-contamination partial volume effects in the structures of interest on a voxel-by-voxel basis prior to drawing inferences about underlying changes in white matter structures, and has implications for the interpretation of many recent diffusion MRI results in ageing and disease. |
Extraordinary plasticity of ductile bulk metallic glasses. | Shear bands generally initiate strain softening and result in low ductility of metallic glasses. In this Letter, we report high-resolution electron microscope observations of shear bands in a ductile metallic glass. Strain softening caused by localized shearing was found to be effectively prevented by nanocrystallization that is in situ produced by plastic flow within the shear bands, leading to large plasticity and strain hardening. These atomic-scale observations not only well explain the extraordinary plasticity that was recently observed in some bulk metallic glasses, but also reveal a novel deformation mechanism that can effectively improve the ductility of monolithic metallic glasses. |
A general model of accounting information systems | The accounting information system is the most important part of a management information system because accounting is described as the language of management. Accordingly, accounting information obtained from the accounting information system provides basic knowledge for an efficient management system. In particular, data obtained from a computer-based accounting system makes an extremely important contribution to the success of the management system. In this study, the development of accounting information systems and their significance for management are examined in general terms. |
Deep reinforcement learning for building HVAC control | Buildings account for nearly 40% of the total energy consumption in the United States, about half of which is used by the HVAC (heating, ventilation, and air conditioning) system. Intelligent scheduling of building HVAC systems has the potential to significantly reduce the energy cost. However, the traditional rule-based and model-based strategies are often inefficient in practice, due to the complexity of building thermal dynamics and heterogeneous environment disturbances. In this work, we develop a data-driven approach that leverages the deep reinforcement learning (DRL) technique to intelligently learn an effective strategy for operating building HVAC systems. We evaluate the performance of our DRL algorithm through simulations using the widely-adopted EnergyPlus tool. Experiments demonstrate that our DRL-based algorithm is more effective in energy cost reduction compared with the traditional rule-based approach, while maintaining the room temperature within the desired range. |
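A hedged, toy-scale illustration of the control loop described above: tabular Q-learning on a crude one-zone thermostat model. The thermal dynamics, reward weights, and discretization are invented for demonstration; the paper itself uses deep networks coupled to EnergyPlus co-simulation, which this sketch does not reproduce.

```python
# Hedged toy sketch: tabular Q-learning for a one-zone thermostat.
# The thermal model, reward weights and discretization are illustrative
# assumptions; the paper uses a DRL agent coupled to EnergyPlus instead.
import random

ACTIONS = [0.0, 1.0, 2.0]          # heating power levels (arbitrary units)
TARGET, OUTDOOR = 21.0, 5.0        # desired room and outdoor temperature (C)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(temp, action):
    """Very crude thermal model: heat input vs. loss to the outdoors."""
    new_temp = temp + 1.0 * action - 0.1 * (temp - OUTDOOR)
    reward = -0.5 * action - abs(new_temp - TARGET)   # energy + comfort penalty
    return new_temp, reward

def bucket(temp):
    return int(round(max(10.0, min(30.0, temp))))     # discretize to 10..30 C

Q = {(s, a): 0.0 for s in range(10, 31) for a in range(len(ACTIONS))}

temp = 15.0
for _ in range(50_000):
    s = bucket(temp)
    a = (random.randrange(len(ACTIONS)) if random.random() < EPS
         else max(range(len(ACTIONS)), key=lambda i: Q[(s, i)]))
    temp, r = step(temp, ACTIONS[a])
    s2 = bucket(temp)
    best_next = max(Q[(s2, i)] for i in range(len(ACTIONS)))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# Greedy heating action per temperature bucket after training:
print({s: max(range(len(ACTIONS)), key=lambda i: Q[(s, i)]) for s in range(18, 25)})
```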
Language-based social preferences among children in South Africa. | Monolingual English-speaking children in the United States express social preferences for speakers of their native language with a native accent. Here we explore the nature of children's language-based social preferences through research with children in South Africa, a multilingual nation. Like children in the United States, Xhosa South African children preferred speakers of their first language (Xhosa) to speakers of a foreign language (French). Thus, social preferences based on language are observed not only among children with limited exposure to cultural and linguistic variation but also among children living in a diverse linguistic environment. Moreover, Xhosa children attending school in English expressed social preferences for speakers of English over speakers of Xhosa, even when tested by a Xhosa-speaking experimenter. Thus, children's language-based social preferences do not depend exclusively on preferences for more familiar or intelligible speech but also extend to preferences for speech that may convey higher status in the child's society. |
Locke and Environmental Political Theory | Few philosophers’ ideas have had as significant an impact on the natural world as those of John Locke; in particular, an examination of Locke’s account of property is central to any attempt at revealing environmentally significant themes in the traditional political theory canon. Many readers have taken Locke’s theory to be utterly inimical to the use of state power to further environmentalism; this is taken to be its main advantage by opponents, and a crippling disadvantage by supporters, of environmental regulation. But a more nuanced reading suggests two ways Locke’s theory can be revised to yield more environmentally friendly results. On the one hand, his labor theory of value might be revised, using the notion of natural services to weaken the link between labor and private property in land. On the other hand, his view that there are provisos on the accumulation of property in the state of nature can be extended to civil society, thus providing a basis for regulation of practices that degrade natural services. Finally, I will argue that the labor theory initiates a line of thinking, running through Rousseau, Kant, Hegel, and Marx, which sees economic life as taking place in a setting human beings themselves have created through their labor. Thus, because Locke places property at the center of his political theory, we can read Locke as placing the environment at the center of politics. |
Word Sense Induction for Novel Sense Detection | We apply topic modelling to automatically induce word senses of a target word, and demonstrate that our word sense induction method can be used to automatically detect words with emergent novel senses, as well as token occurrences of those senses. We start by exploring the utility of standard topic models for word sense induction (WSI), with a pre-determined number of topics (=senses). We next demonstrate that a non-parametric formulation that learns an appropriate number of senses per word actually performs better at the WSI task. We go on to establish state-of-the-art results over two WSI datasets, and apply the proposed model to a novel sense detection task. |
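A hedged sketch of the parametric starting point described above: fitting a standard LDA topic model over bag-of-words contexts of a target word and reading the topics as induced senses. The toy contexts and the fixed topic count are assumptions; the paper's non-parametric model, which learns the number of senses, is not reproduced here.

```python
# Hedged sketch: standard LDA as a word-sense induction baseline.
# Each "document" is a small context window around the target word "bank";
# the contexts and n_components=2 are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

contexts = [
    "deposited money at the bank before the loan interest was due",
    "the bank raised its mortgage interest rate for new accounts",
    "we sat on the river bank watching the water and the fish",
    "erosion along the muddy bank of the river after the flood",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(contexts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Sense assignment for each context = most probable topic
print(lda.transform(X).argmax(axis=1))

# Top words characterizing each induced sense
vocab = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(k, [vocab[i] for i in comp.argsort()[-5:][::-1]])
```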
All the news that's fit to post: A profile of news use on social networking sites | Facebook and other social networking sites (SNSs) are altering the way individuals communicate. These online environments allow users to keep up with friends, network with colleagues, and share their personal views and observations with others. Previous work describes typical social networking site users as young, extroverted, and technologically savvy. Little research exists, however, on the emerging role of news in the social network environment. With over 500 million global Facebook users, both print and television based media outlets are making concerted efforts to become part of this important and increasingly ubiquitous virtual world. The present study uses a sample of students, faculty, and staff from a large university to investigate the factors that are related to news use on Facebook. Findings indicate that while news use is still a minor component of overall social network site activity, certain key variables, such as gender and life satisfaction, have a significant impact on how Facebook is used for news-related purposes. Future implications for news in the social networking world are presented and discussed. 2011 Elsevier Ltd. All rights reserved. |
ON THE OPTIMAL DESIGN OF COLUMNS AGAINST BUCKLING | We establish existence, derive necessary conditions, and construct and test an algorithm for the maximization of a column’s Euler buckling load under a variety of boundary conditions over a general class of admissible designs. We prove that symmetric clamped–clamped columns possess a positive first eigenfunction and introduce a symmetric rearrangement that does not decrease the column’s buckling load. Our necessary conditions, expressed in the language of Clarke’s generalized gradient [10], subsume those proposed by Olhoff and Rasmussen [25], Masur [22], and Seiranian [34]. The work of [25], [22], and [34] sought to correct the necessary conditions of Tadjbakhsh and Keller [37] who had not foreseen the presence of a multiple least eigenvalue. This remedy has been hampered by Tadjbakhsh and Keller’s miscalculation of the buckling loads of their clamped-clamped and clamped–hinged columns. We resolve this issue in the appendix. In our numerical treatment of the associated finite dimensional optimization problem we build on the work of Overton [26] in devising an efficient means of extracting an ascent direction from the column’s least eigenvalue. Owing to its possible multiplicity this is indeed a nonsmooth problem and again the ideas of Clarke [10] are exploited. |
A Review of Sentiment Analysis Research in Chinese Language | Research on sentiment analysis in the English language has undergone major developments in recent years. Chinese sentiment analysis research, however, has not evolved significantly despite the exponential growth of Chinese e-business and e-markets. This review paper aims to study the past, present, and future of Chinese sentiment analysis from both monolingual and multilingual perspectives. The construction of sentiment corpora and lexica is first introduced and summarized. Following this, a survey of monolingual sentiment classification in Chinese via three different classification frameworks is conducted. Finally, sentiment classification based on the multilingual approach is introduced. After an overview of the literature, we propose that a more human-like (cognitive) representation of Chinese concepts and their inter-connections could overcome the scarcity of available resources and, hence, improve the state of the art. With the increasing expansion of the Chinese language on the Web, sentiment analysis in Chinese is becoming an increasingly important research field. Concept-level sentiment analysis, in particular, is an exciting yet challenging direction for this research field, one which holds great promise for the future. |
Gambling behavior subtypes among respondents with gambling-related problems in a population-based sample. | Latent class analysis was used to delineate distinctive subgroups of gamblers and examine whether they differed by demographics and gambling severity. Data from three Canadian provinces focused on respondents who reported at least some risk of problem gambling in the past year (N=1,071). Three latent classes were distinguished: a low on most items group (class 1), a behaviorally conditioned/substance abusing impulsive/emotionally vulnerable (or all types) group (class 2), and a familial-genetic/behaviorally conditioned group (class 3). Gamblers in classes 2 and 3 were most likely to be moderate-risk and problem gamblers. Community-based prevention efforts may need to address each subgroup differently but also according to their characteristics. |
Breaking the News: First Impressions Matter on Online News | A growing number of people are changing the way they consume news, replacing the traditional physical newspapers and magazines by their virtual online versions or/and weblogs. The interactivity and immediacy present in online news are changing the way news are being produced and exposed by media corporations. News websites have to create effective strategies to catch people’s attention and attract their clicks. In this paper we investigate possible strategies used by online news corporations in the design of their news headlines. We analyze the content of 69,907 headlines produced by four major global media corporations during a minimum of eight consecutive months in 2014. In order to discover strategies that could be used to attract clicks, we extracted features from the text of the news headlines related to the sentiment polarity of the headline. We discovered that the sentiment of the headline is strongly related to the popularity of the news and also with the dynamics of the posted comments on that particular news. |
Understanding Visual Rhetoric in Digital Writing Environments. | This essay illustrates key features of visual rhetoric as they operate in two professional academic hypertexts and student work designed for the World Wide Web. By looking at features like audience stance, transparency, and hybridity, writing teachers can teach visual rhetoric as a transformative process of design. Critiquing and producing writing in digital environments offers a welcome return to rhetorical principles and an important pedagogy of writing as design. |
The effects of different resistance training protocols on muscular strength and endurance development in children. | BACKGROUND
Previous research has shown that children can increase their muscular strength and muscular endurance as a result of regular participation in a progressive resistance training program. However, the most effective exercise prescription regarding the number of repetitions remains questionable.
OBJECTIVE
To compare the effects of a low repetition-heavy load resistance training program and a high repetition-moderate load resistance training program on the development of muscular strength and muscular endurance in children.
DESIGN
Prospective, controlled trial.
SETTING
Community-based youth fitness center.
SUBJECTS
Eleven girls and 32 boys between the ages of 5.2 and 11.8 years.
INTERVENTION
In twice-weekly sessions of resistance training for 8 weeks, children performed 1 set of 6 to 8 repetitions with a heavy load (n = 15) or 1 set of 13 to 15 repetitions with a moderate load (n = 16) on child-size exercise machines. Children in the control group (n = 12) did not resistance train. One repetition maximum (RM) strength and muscular endurance (repetitions performed posttraining with the pretraining 1-RM load) were determined on the leg extension and chest press exercises.
RESULTS
One RM leg extension strength significantly increased in both exercise groups compared with that in the control subjects. Increases of 31.0% and 40.9%, respectively, for the low repetition-heavy load and high repetition-moderate load groups were observed. Leg extension muscular endurance significantly increased in both exercise groups compared with that in the control subjects, although gains resulting from high repetition-moderate load training (13.1 +/- 6.2 repetitions) were significantly greater than those resulting from low repetition-heavy load training (8.7 +/- 2.9 repetitions). On the chest press exercise, only the high repetition-moderate load exercise group made gains in 1-RM strength (16.3%) and muscular endurance (5.2 +/- 3.6 repetitions) that were significantly greater than gains in the control subjects.
CONCLUSION
These findings support the concept that muscular strength and muscular endurance can be improved during the childhood years and favor the prescription of higher repetition-moderate load resistance training programs during the initial adaptation period. |
Timing Attack against Protected RSA-CRT Implementation Used in PolarSSL | In this paper, we present a timing attack against the RSA-CRT algorithm used in the current version 1.1.4 of PolarSSL, an open-source cryptographic library for embedded systems. This implementation uses a classical countermeasure to avoid two previous attacks by Schindler and another one due to Boneh and Brumley. However, a careful analysis reveals a bias in the implementation of Montgomery multiplication. We theoretically analyse the distribution of output values for Montgomery multiplication when the output is greater than the Montgomery constant, R. In this case, we show that an extra bit is set in the most significant word of the output and a time variance can be observed. We then present proofs, under reasonable assumptions, that explain the bias caused by this extra bit. Moreover, we show that it can be used to mount an attack that reveals the factorisation. We also study another countermeasure and show its resistance in the attacked library. |
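To make the "extra reduction" concrete, here is a hedged, textbook-style sketch of Montgomery multiplication in Python; the final conditional subtraction is the data-dependent step whose timing such attacks exploit. The toy modulus and word size are assumptions, and this is not PolarSSL's word-level implementation.

```python
# Hedged sketch (Python 3.8+): textbook Montgomery multiplication with the
# final conditional subtraction whose data-dependent timing underlies the
# attack described above. Toy parameters; not PolarSSL's word-level code.

def montgomery_setup(n, word_bits=32):
    """Return R (a power of 2^word_bits above n) and n' = -n^-1 mod R."""
    r = 1 << (((n.bit_length() + word_bits - 1) // word_bits) * word_bits)
    r_inv = pow(r, -1, n)
    n_prime = (r * r_inv - 1) // n          # satisfies n * n' == -1 (mod R)
    return r, n_prime

def mont_mul(a_bar, b_bar, n, r, n_prime):
    """Return (a_bar * b_bar * R^-1 mod n, whether the extra subtraction ran)."""
    t = a_bar * b_bar
    m = (t * n_prime) % r
    u = (t + m * n) // r                    # exact division by construction
    extra = u >= n                          # <-- timing-relevant branch
    if extra:
        u -= n
    return u, extra

# Usage: multiply modulo a toy odd modulus via Montgomery form.
n = (1 << 61) - 1                           # odd (Mersenne prime) toy modulus
r, n_prime = montgomery_setup(n)
a, b = 123456789, 987654321
a_bar, b_bar = (a * r) % n, (b * r) % n
c_bar, extra = mont_mul(a_bar, b_bar, n, r, n_prime)
c, _ = mont_mul(c_bar, 1, n, r, n_prime)    # leave Montgomery form
assert c == (a * b) % n
```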
Error detection in content word combinations | This thesis addresses the task of error detection in the choice of content words focusing on adjective–noun and verb–object combinations. We show that error detection in content words is an under-explored area in research on learner language since (i) most previous approaches to error detection and correction have focused on other error types, and (ii) the approaches that have previously addressed errors in content words have not performed error detection proper. We show why this task is challenging for the existing algorithms and propose a novel approach to error detection in content words. We note that since content words express meaning, an error detection algorithm should take the semantic properties of the words into account. We use a compositional distributional semantic framework in which we represent content words using their distributions in native English, while the meaning of the combinations is represented using models of compositional semantics. We present a number of measures that describe different properties of the modelled representations and can reliably distinguish between the representations of the correct and incorrect content word combinations. Finally, we cast the task of error detection as a binary classification problem and implement a machine learning classifier that uses the output of the semantic measures as features. The results of our experiments confirm that an error detection algorithm that uses semantically motivated features achieves good accuracy and precision and outperforms the state-of-the-art approaches. We conclude that the features derived from the semantic representations encode important properties of the combinations that help distinguish the correct combinations from the incorrect ones. The approach presented in this work can naturally be extended to other types of content word combinations. Future research should also investigate how the error correction component for content word combinations could be implemented. |
Style Finder: Fine-Grained Clothing Style Detection and Retrieval | With the rapid proliferation of smartphones and tablet computers, search has moved beyond text to other modalities like images and voice. For many applications like Fashion, visual search offers a compelling interface that can capture stylistic visual elements beyond color and pattern that cannot be as easily described using text. However, extracting and matching such attributes remains an extremely challenging task due to high variability and deformability of clothing items. In this paper, we propose a fine-grained learning model and multimedia retrieval framework to address this problem. First, an attribute vocabulary is constructed using human annotations obtained on a novel fine-grained clothing dataset. This vocabulary is then used to train a fine-grained visual recognition system for clothing styles. We report benchmark recognition and retrieval results on Women's Fashion Coat Dataset and illustrate potential mobile applications for attribute-based multimedia retrieval of clothing items and image annotation. |
Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. | A biomechanical model is presented for the dynamic changes in deoxyhemoglobin content during brain activation. The model incorporates the conflicting effects of dynamic changes in both blood oxygenation and blood volume. Calculations based on the model show pronounced transients in the deoxyhemoglobin content and the blood oxygenation level dependent (BOLD) signal measured with functional MRI, including initial dips and overshoots and a prolonged poststimulus undershoot of the BOLD signal. Furthermore, these transient effects can occur in the presence of tight coupling of cerebral blood flow and oxygen metabolism throughout the activation period. An initial test of the model against experimental measurements of flow and BOLD changes during a finger-tapping task showed good agreement. |
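A hedged sketch of the balloon-model dynamics summarized above, using a common textbook formulation (normalized volume and deoxyhemoglobin driven by a flow stimulus, with a standard BOLD signal expression). The parameter values, stimulus timing, and BOLD coefficients below are typical illustrative choices, not the paper's exact ones.

```python
# Hedged sketch: forward Euler integration of a commonly used form of the
# balloon model. Parameters and the BOLD expression are illustrative.
import numpy as np

tau = 2.0      # mean transit time (s)
alpha = 0.4    # outflow exponent, f_out(v) = v**(1/alpha)
E0 = 0.4       # resting oxygen extraction fraction
V0 = 0.03      # resting venous blood volume fraction
k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2    # assumed BOLD coefficients

def f_in(t):
    """Stimulus-driven inflow: 60% flow increase between 5 s and 25 s."""
    return 1.6 if 5.0 <= t < 25.0 else 1.0

dt, T = 0.01, 40.0
t_axis = np.arange(0.0, T, dt)
v, q = 1.0, 1.0                    # normalized volume and deoxyhemoglobin
bold = np.empty_like(t_axis)
for i, t in enumerate(t_axis):
    fin = f_in(t)
    fout = v ** (1.0 / alpha)
    E = 1.0 - (1.0 - E0) ** (1.0 / fin)        # oxygen extraction at flow fin
    dv = (fin - fout) / tau
    dq = (fin * E / E0 - fout * q / v) / tau
    bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    v, q = v + dt * dv, q + dt * dq

# "bold" traces out the transients discussed above, including a
# post-stimulus undershoot, even with tightly coupled flow and metabolism.
```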
The Academic Foreland and Research Demand of Hydrology and Water Resources | There is a great difference between hydrology and water resources in terms of academic development. A complete scientific system has been formed for hydrology, whereas no integrated scientific system has yet been formed for water resources. Research on the natural attributes of water resources (such as regeneration mechanisms) is an extension of hydrology within the hydrologic cycle, and knowledge of water resources involves hydrology, engineering, environmental science, management science, economics, and other fields. Recognizing this is necessary for conducting research on hydrology and water resources and for promoting the development of hydrology and water resources as academic subjects. |
The development and validation of the Physical Appearance Comparison Scale-Revised (PACS-R). | The Physical Appearance Comparison Scale (PACS; Thompson, Heinberg, & Tantleff, 1991) was revised to assess appearance comparisons relevant to women and men in a wide variety of contexts. The revised scale (Physical Appearance Comparison Scale-Revised, PACS-R) was administered to 1176 college females. In Study 1, exploratory factor analysis and parallel analysis using one half of the sample suggested a single factor structure for the PACS-R. Study 2 utilized the remaining half of the sample to conduct confirmatory factor analysis, item analysis, and to examine the convergent validity of the scale. These analyses resulted in an 11-item measure that demonstrated excellent internal consistency and convergent validity with measures of body satisfaction, eating pathology, sociocultural influences on appearance, and self-esteem. Regression analyses demonstrated the utility of the PACS-R in predicting body satisfaction and eating pathology. Overall, results indicate that the PACS-R is a reliable and valid tool for assessing appearance comparison tendencies in women. |
A Powerful Generative Model Using Random Weights for the Deep Image Representation | To what extent is the success of deep visualization due to the training? Could we do deep visualization using untrained, random weight networks? To address this issue, we explore new and powerful generative models for three popular deep visualization tasks using untrained, random weight convolutional neural networks. First, we invert representations in feature spaces and reconstruct images from white noise inputs. The reconstruction quality is statistically higher than that of the same method applied to well-trained networks with the same architecture. Next, we synthesize textures using scaled correlations of representations in multiple layers, and our results are almost indistinguishable from the original natural texture and the synthesized textures based on the trained network. Third, by recasting the content of an image in the style of various artworks, we create artistic images with high perceptual quality, highly competitive with the prior work of Gatys et al. on pretrained networks. To our knowledge, this is the first demonstration of image representations using untrained deep neural networks. Our work provides a new and fascinating tool to study the representation of deep network architectures and sheds new light on our understanding of deep visualization. |
Lesbian, Gay, Bisexual, and Transgender Families | |
Doxycycline versus ceftriaxone for the treatment of patients with chronic Lyme borreliosis | BACKGROUND: Therapeutic guidelines for treatment of late manifestations of Lyme borreliosis have not yet become well established. Patients with symptoms suggesting central nervous system involvement are often treated with courses of intravenous ceftriaxone. This is an expensive treatment approach with potentially severe side effects. We compared the efficacy, side effects and costs of doxycycline and ceftriaxone in the treatment of such patients. PATIENTS AND METHODS: Adult patients qualified for the study if they had nonspecific symptoms suggesting central nervous system involvement for more than six months (but without overt clinical signs of the involvement), had positive serum borrelial antibody titers and/or erythema migrans prior to the onset of symptoms, had not been previously treated with antibiotics and did not have pleocytosis in the cerebrospinal fluid. Patients were given either 100 mg of oral doxycycline twice daily for 4 weeks (23 patients) or 2 g of intravenous ceftriaxone daily for 2 weeks followed by 100 mg of doxycycline twice daily for another 2 weeks (23 patients). Clinical outcome was assessed during a 12-month follow-up period. RESULTS: Improvement in the frequency and/or the intensity of symptoms was reported by more than two-thirds of the 46 patients enroled in the study. The two treatment regimens were found to be correspondingly effective. Photosensitivity reactions and gastrointestinal symptoms were noted more often among patients receiving doxycycline than in those receiving ceftriaxone. Treatment with doxycycline proved to be much cheaper than with ceftriaxone. CONCLUSIONS: In patients with previously untreated chronic Lyme borreliosis with symptoms suggesting central nervous system involvement but without overt clinical signs of it, and without pleocytosis in the cerebrospinal fluid, treatment with doxycycline is as effective as with ceftriaxone. Treatment with doxycycline is cheap and relatively safe, but gastrointestinal symptoms and photosensitivity reactions can be expected more often than with ceftriaxone. |
Selection of K in K-means clustering | The K-means algorithm is a popular data-clustering algorithm. However, one of its drawbacks is the requirement for the number of clusters, K, to be specified before the algorithm is applied. This paper first reviews existing methods for selecting the number of clusters for the algorithm. Factors that affect this selection are then discussed and a new measure to assist the selection is proposed. The paper concludes with an analysis of the results of using the proposed measure to determine the number of clusters for the K-means algorithm for different data sets. |
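A hedged sketch of two generic ways to pick K for K-means (the inertia "elbow" and the silhouette score). These are standard practice, shown here for context; they are not the specific measure proposed in the paper.

```python
# Hedged sketch: scanning K with inertia (elbow) and silhouette score.
# Synthetic blob data; not the data sets analysed in the paper.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sil = silhouette_score(X, km.labels_)
    print(f"K={k}  inertia={km.inertia_:.1f}  silhouette={sil:.3f}")
# Choose K at the inertia "elbow" or at the maximum silhouette score.
```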
Cloud adoption : a goal-oriented requirements engineering approach | We motivate the need for a new requirements engineering methodology for systematically helping businesses and users to adopt cloud services and for mitigating risks in such transition. The methodology is grounded in goal oriented approaches for requirements engineering. We argue that Goal Oriented Requirements Engineering (GORE) is a promising paradigm to adopt for goals that are generic and flexible statements of users' requirements, which could be refined, elaborated, negotiated, mitigated for risks and analysed for economics considerations. We describe the steps of the proposed process and exemplify the use of the methodology through an example. The methodology can be used by small to large scale organisations to inform crucial decisions related to cloud adoption. |
An Assessment of Name Matching Algorithms | In many computer applications involving the recording and processing of personal data there is a need to allow for variations in surname spelling, caused for example by transcription errors. A number of algorithms have been developed for name matching, i.e. which attempt to identify name spelling variations, one of the best known of which is the Soundex algorithm. This paper describes a comparative analysis of a number of these algorithms and, based on an analysis of their comparative strengths and weaknesses, proposes a new and improved name matching algorithm, which we call the Phonex algorithm. The analysis takes advantage of the recent creation of a large list of “equivalent surnames”, published in the book Family History Knowledge UK [Park1992]. This list is based on data supplied by some thousands of individual genealogists, and can be presumed to be representative of British surnames and their variations over the last two or three centuries. It thus made it possible to perform what we would argue were objective tests of name matching, the results of which provide a solid basis for the analysis that we have performed, and for our claims for the merits of the new algorithm, though these are unlikely to hold fully for surnames emanating largely from other countries. |
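For reference, a hedged sketch of the classic Soundex algorithm named above, in the American variant where H and W do not separate consonants carrying the same code. The paper's improved Phonex algorithm uses different preprocessing rules and is not reproduced here.

```python
# Hedged sketch: classic Soundex as a name-matching baseline.
def soundex(name: str) -> str:
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return "0000"
    first, prev = name[0], codes.get(name[0], "")
    digits = []
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            digits.append(code)
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (first + "".join(digits) + "000")[:4]

# e.g. soundex("Robert") == soundex("Rupert") == "R163",
# so the two spelling variants fall into the same equivalence class.
```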
VELOCITY AND ATTENUATION ANISOTROPY CAUSED BY MICROCRACKS AND MACROFRACTURES IN A MULTIAZIMUTH REVERSE VSP | Previous analyses of crosshole survey (CHS) and reverse vertical seismic profile (RVSP) data sets at the Conoco Borehole Test Facility (CBTF) have shown that fracture parameters can be estimated from seismic data. Continuing this development, we analyse data recorded from shear-wave sources in a shallow multiazimuthal RVSP. The RVSP data cover 160° of azimuth and display symmetry in arrival times and amplitudes about an azimuth of approximately N70°E. They can be interpreted as both velocity anisotropy (azimuthal variation of traveltimes) and attenuation anisotropy (azimuthal variation of amplitudes). Since the direction N70°E has been previously identified as the strike of the dominant macro- and microfractures, as well as the direction of maximum horizontal stress at depth in the area, the observed velocity and attenuation variations can be interpreted in terms of stress-aligned fractures and microcracks. The data also show evidence of scattering characteristic of large-scale fractures. These interpretations are confirmed by matching synthetic seismograms to the field data, where the observed symmetry features in the seismograms are well-reproduced in the synthetic data. Nearby surface exposures suggest there are two approximately perpendicular fracture sets at CBTF. In such media, the polarizations of shear waves are expected to be complicated and we show that under appropriate circumstances it is possible to infer multiple fracture sets if sufficient azimuthal coverage is available in either VSPs or RVSPs. The shear-wave polarizations, when plotted in equal-area polar projections, show two maxima in directions which are approximately orthogonal and almost exactly parallel to the strikes of the two fracture sets at a neighbouring surface outcrop. The observed shear-wave polarizations are well-matched by the synthetic shear-wave seismograms calculated for media with two fracture sets. The remarkable agreement between measured shear-wave polarizations from RVSP data, CHS data and borehole data and from the synthetic seismograms confirms the usefulness of shear waves for mapping subsurface fractures. There is increasing interest in fracture characterization using geophysical methods, particularly shear-wave anisotropy and shear-wave splitting (reviewed by Crampin and Lovell, 1991; Crampin, 1993). The Conoco Borehole Test Facility (CBTF) in Kay County, Oklahoma, is an excellent laboratory for the study of naturally fractured rock. Over the past few years CBTF has been the site of an extensive programme of integrated geological studies (surface mapping, core analysis and well-log analysis), geophysical studies [crosshole surveys (CHSs), vertical seismic profiles (VSPs) and reverse VSPs (RVSPs)] and hydrological studies (fluid flow measurements) conducted around an array of shallow groundwater boreholes about 50 m deep. Results from various studies have been reviewed by Queen et al. (1992). This paper is part of these integrated studies and is a continuation of our early analyses of CHS, VSP and RVSP data sets (Queen and Rizer, 1990; Queen et al., 1990; Rizer, 1990; Liu et al., 1991a, 1991b; Lines et al., 1992). The RVSP data analysed here provide convincing evidence of fracture-related seismic velocity anisotropy and attenuation anisotropy. This paper makes three principal contributions.
Firstly, it shows clear evidence of systematic azimuthal traveltime and amplitude variations due to fractures at shallow depths from a multiazimuthal RVSP at the CBTF site. The data also show evidence of the scattering characteristic of intermediate-length fractures, where the fracture length and spacing are of the same order as the seismic wavelength. The observed variations in traveltime and amplitude are indicative of fracture-related velocity and attenuation anisotropy and are modelled with synthetic seismograms, although the scattering thought to be caused by intermediate-length fractures has not yet been adequately modelled mathematically. Secondly, it illustrates the possibility of using multiazimuth VSPs or RVSPs to map both single and multiple subsurface fracture sets. The measured initial shear-wave polarizations show two clear directions which are almost parallel to the strikes of the two fracture sets mapped from surface outcrops. This result is further supported by a comparison with theoretical predictions. Thirdly, it shows that the results obtained from seismic |
Plant Science View on Biohybrid Development | A biohybrid consists of a living organism or cell and at least one engineered component. Designing robot-plant biohybrids is a great challenge: it requires interdisciplinary reconsideration of capabilities intimately specific to the biology of plants. Envisioned advances should improve agricultural/horticultural/social practice and could open new directions in the utilization of plants by humans. Proper biohybrid cooperation depends upon effective communication. During evolution, plants developed many ways to communicate with each other, with animals, and with microorganisms. The most notable examples are the use of phytohormones, rapid long-distance signaling, gravity, and light perception. These processes can now be intentionally re-shaped to establish plant-robot communication. In this article, we focus on plant physiological and molecular processes that could be used in biohybrids. We present phototropism and biomechanics as promising means of effective communication, resulting in an alteration of plant architecture, and discuss the specifics of plant anatomy, physiology, and development with regard to biohybrids. Moreover, we discuss ways in which robots could influence plant growth and development and present aims, ideas, and realized projects of plant-robot biohybrids. |
The Evolution of Deadly Conflict in Liberia: From 'Paternaltarianism' to State Collapse, and: Beyond Plunder: Toward Democratic Governance in Liberia (review) | Jeremy I. Levitt. The Evolution of Deadly Conflict in Liberia: From 'Paternaltarianism' to State Collapse. Durham, N.C.: Carolina Academic Press, 2005. xvi + 257 pp. Maps. Notes. Appendix. Bibliography. Index. $45.00. Cloth. Amos Sawyer. Beyond Plunder: Toward Democratic Governance in Liberia. Boulder, Colo.: Lynne Rienner Publishers, 2005. xiv + 201 pp. Tables. References. Index. $49.95. Cloth. Amos Sawyer, a seasoned Liberian scholar/activist, and Jeremy Levitt, a young African American scholar, contribute in their respective ways to setting the intellectual if not the political stage for governance reform in Liberia. The timeliness of both publications is evident as Liberia extricates itself from a quarter-century of war and dysfunctional governance. In a way, the two books complement one another. Levitt resurrects deadly conflicts in Liberian history to explain root causes of what he calls the "great war," the conflict of 1989-2003. Sawyer sees the roots of conflict in an overcentralized, autocratic, and predatory state. Each book has a prescriptive intent. Levitt advocates democratic inclusiveness as an antidote to "settler nationalism and authoritarianism," while Sawyer speaks of transformational change from a monocentric to a polycentric governance arrangement. The Evolution of Deadly Conflict in Liberia has a three-fold aim: to address "methodological weakness in conflict studies" as it looks at the origins of the Liberian civil war rather than examining the course of the war; to provide an alternative framework (i.e., to conflict studies literature) for understanding the dynamics of warfare in Liberia; and to offer the first comprehensive study of deadly conflict in Liberia. Levitt adopts a sociopolitical and institutional approach which posits that the "nature of preexisting regime shapes the dynamics and outcomes of political transition" (11-12). Consequently, he identifies and analyzes a "continuum of circular causation between the state of affairs that led to the founding of the Liberian state, the evolution of nationalism and authoritarianism, and deadly conflict" (85). Fifteen conflicts are examined in detail, with a secondary overview of the circumstances that led to the 1980 coup d'etat and the subsequent civil war. He uncovers a system that institutionalized ethnopolitical conflict between immigrants and indigenes from 1822 through 1980, and among all Liberians between 1980 and 2003. Sawyer's purpose is to explain the path to reconstituting order following state collapse and violent conflict. In this quest he employs a variant of the institutional analysis framework of his intellectual mentors (Vincent and Elinor Ostrom) to uncover "how institutions structure incentives and influence choice within ecological and social environments" (4). His analysis leads him to "guideposts" for constructing "a system of democratic governance based on a theory of limited or shared sovereignty as an alternative to monocentric governance derived from a theory of unitary sovereignty" (11). For both Levitt and Sawyer, the essential Liberian problem has been the critical choice of the early Liberian leadership for a unitary rather than a confederal state, for immigrant nationalism and authoritarianism rather than political inclusiveness and democracy. 
Sawyer recalls the loose organization of early county political subdivisions as opposed to concentration of power in Monrovia, while Levitt notes that 65 percent of eligible voters boycotted Liberia's first open preindependence elections, although the leadership proceeded in the absence of a clear mandate. Both suggest no significant departure since then from the founding dispositions. They each recognize half-hearted attempts at political inclusion, Levitt characterizing Tubman's unification policy as "deceptive inclusion" (181), and Sawyer pointing out a "structural flaw" (85) that underpins attempts at governance reform. Among the strengths of Sawyer's well-crafted study are his clear prescriptions for postconflict Liberia. … |
Deep Multimetric Learning for Shape-Based 3D Model Retrieval | Recently, feature-learning-based 3D shape retrieval methods have been receiving more and more attention in the 3D shape analysis community. In these methods, the hand-crafted metrics or the learned linear metrics are usually used to compute the distances between shape features. Since there are complex geometric structural variations with 3D shapes, the single hand-crafted metric or learned linear metric cannot characterize the manifold, where 3D shapes lie well. In this paper, by exploring the nonlinearity of the deep neural network and the complementarity among multiple shape features, we propose a novel deep multimetric network for 3D shape retrieval. The developed multimetric network minimizes a discriminative loss function that, for each type of shape feature, the outputs of the network from the same class are encouraged to be as similar as possible and the outputs from different classes are encouraged to be as dissimilar as possible. Meanwhile, the Hilbert-Schmidt independence criterion is employed to enforce the outputs of different types of shape features to be as complementary as possible. Furthermore, the weights of the learned multiple distance metrics can be adaptively determined in our developed deep metric network. The weighted distance metric is then used as the similarity for shape retrieval. We conduct experiments with the proposed method on the four benchmark shape datasets. Experimental results demonstrate that the proposed method can obtain better performance than the learned deep single metric and outperform the state-of-the-art 3D shape retrieval methods. |
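The abstract above uses the Hilbert-Schmidt independence criterion (HSIC) to keep the outputs of different shape-feature networks complementary. As a hedged illustration only (the kernel, bandwidth, batch shapes and the idea of penalizing a large HSIC are assumptions for this sketch, not details taken from the paper), a biased empirical HSIC estimate between two batches of embeddings can be computed as follows.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    """Pairwise RBF kernel matrix for a batch of row vectors."""
    sq = np.sum(x**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate: trace(K H L H) / (n-1)^2.
    Larger values indicate stronger statistical dependence between the two
    sets of embeddings; a complementarity-style penalty would try to keep
    this small (an assumption about how the criterion is used in a loss)."""
    n = x.shape[0]
    k = rbf_kernel(x, sigma)
    l = rbf_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat_a = rng.normal(size=(32, 64))    # embeddings of one shape descriptor (toy data)
    feat_b = rng.normal(size=(32, 64))    # embeddings of a second descriptor (toy data)
    print("HSIC(a, b) =", hsic(feat_a, feat_b))
```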
Artificial Diversity as Maneuvers in a Control Theoretic Moving Target Defense | Moving target cyber-defense systems encompass a wide variety of techniques in multiple areas of cyber-security. The dynamic system reconfiguration aspect of moving target cyber-defense can be used as a basis for providing an adaptive attack surface. The goal of this research is to develop novel control theoretic mechanisms by which a range of cyber maneuver techniques are provided such that when an attack is detected the environment can select the most appropriate maneuver to ensure a sufficient shift in the attack surface to render the identified attack ineffective. Effective design of this control theoretic cyber maneuver approach requires the development of two additional theories. First, algorithms are required for the estimation of security state. This will identify when a maneuver is required. Second, a theory for the estimation of the cost of performing a maneuver is required. This is critical for selecting the most cost-effective maneuver while ensuring that the attack is rendered fully ineffective. Finally, we present our moving target control loop as well as a detailed case study examining the impact of our proposed cyber maneuver paradigm on DHCP attacks. |
Associativity-Based Routing for Ad Hoc Mobile Networks | This paper presents a new, simple and bandwidth-efficient distributed routing protocol to support mobile computing in a conference size ad-hoc mobile network environment. Unlike the conventional approaches such as link-state and distance-vector distributed routing algorithms, our protocol does not attempt to consistently maintain routing information in every node. In an ad-hoc mobile network where mobile hosts (MHs) are acting as routers and where routes are made inconsistent by MHs’ movement, we employ an associativity-based routing scheme where a route is selected based on nodes having associativity states that imply periods of stability. In this manner, the routes selected are likely to be long-lived and hence there is no need to restart frequently, resulting in higher attainable throughput. Route requests are broadcast on a per need basis. The association property also allows the integration of ad-hoc routing into a BS-oriented Wireless LAN (WLAN) environment, providing the fault tolerance in times of base stations (BSs) failures. To discover shorter routes and to shorten the route recovery time when the association property is violated, the localised-query and quick-abort mechanisms are respectively incorporated into the protocol. To further increase cell capacity and lower transmission power requirements, a dynamic cell size adjustment scheme is introduced. The protocol is free from loops, deadlock and packet duplicates and has scalable memory requirements. Simulation results obtained reveal that shorter and better routes can be discovered during route re-constructions. |
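The core idea above is to prefer routes through nodes whose associativity (a measure of how long neighbouring links have remained stable) indicates longevity. The data layout, tick encoding and threshold below are illustrative assumptions rather than the protocol's actual packet formats; the sketch only shows how a destination might rank candidate routes by their weakest associativity value and, among sufficiently stable routes, prefer the shorter one.

```python
from typing import List, Tuple

# A candidate route is a list of (node_id, associativity_ticks) pairs,
# where the tick count grows while a link stays stable (an assumed encoding).
Route = List[Tuple[str, int]]

def route_stability(route: Route) -> int:
    """A route is only as stable as its least-associated hop."""
    return min(ticks for _, ticks in route)

def select_route(candidates: List[Route], min_ticks: int = 5) -> Route:
    """Pick the most stable sufficiently-associated route; among routes that
    clear the stability threshold, prefer the shorter one."""
    stable = [r for r in candidates if route_stability(r) >= min_ticks]
    pool = stable if stable else candidates
    return max(pool, key=lambda r: (route_stability(r), -len(r)))

if __name__ == "__main__":
    routes = [
        [("A", 9), ("B", 7), ("C", 12)],   # stable but longer
        [("D", 3), ("C", 15)],             # shorter but with one weak link
    ]
    print(select_route(routes))
```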
Output impedance design of parallel-connected UPS inverters | This paper deals with the design of the output impedance of UPS inverters with parallel-connection capability. The inner control loops are considered in the design of the controllers that make power sharing among the UPS modules possible. In these paralleled units, the power-sharing outer control loops are based on the P/Q droop method in order to avoid any communication among the modules. The power-sharing accuracy is highly sensitive to the output impedance of the inverters, making a tight adjustment of this impedance necessary. Novel control loops are proposed to achieve a stable output impedance value, and, therefore, proper power balance is guaranteed when sharing both linear and nonlinear loads. |
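For context, the P/Q droop method mentioned above adjusts each inverter's frequency and voltage amplitude from its locally measured active and reactive power, so no communication link between modules is needed. The coefficients and set points below are arbitrary illustrative values, not the paper's design, and the sketch omits the inner voltage/current loops and the output-impedance shaping that are the paper's actual focus.

```python
def droop_setpoints(p_meas, q_meas,
                    f_nom=50.0, v_nom=230.0,
                    m=1e-4, n=5e-4):
    """Conventional P-f / Q-V droop laws:
         f = f_nom - m * P
         V = V_nom - n * Q
    p_meas, q_meas: measured active/reactive power (W, var).
    Returns the frequency (Hz) and voltage amplitude (V) references
    handed to the inner voltage/current loops."""
    f_ref = f_nom - m * p_meas
    v_ref = v_nom - n * q_meas
    return f_ref, v_ref

if __name__ == "__main__":
    # Two paralleled units sharing a 2 kW / 500 var load unevenly (toy numbers):
    for label, (p, q) in {"unit 1": (1200.0, 300.0), "unit 2": (800.0, 200.0)}.items():
        f, v = droop_setpoints(p, q)
        print(f"{label}: f_ref = {f:.4f} Hz, v_ref = {v:.2f} V")
```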
Evolutionary timeline summarization: a balanced optimization framework via iterative substitution | Classic news summarization plays an important role with the exponential document growth on the Web. Many approaches have been proposed to generate summaries, but they seldom simultaneously consider the evolutionary characteristics of news in addition to traditional summary elements. Therefore, we present a novel framework for the web mining problem named Evolutionary Timeline Summarization (ETS). Given a massive collection of time-stamped web documents related to a general news query, ETS aims to return the evolution trajectory along the timeline, consisting of individual but correlated summaries of each date, emphasizing relevance, coverage, coherence and cross-date diversity. ETS greatly facilitates fast news browsing and knowledge comprehension and hence is a necessity. We formally formulate the task as an optimization problem via iterative substitution from a set of sentences to a subset of sentences that satisfies the above requirements, balancing coherence/diversity measurement and local/global summary quality. The optimized substitution is conducted iteratively by incorporating several constraints until convergence. We develop experimental systems to evaluate on 6 distinctly different datasets which amount to 10,251 documents. Performance comparisons between different system-generated timelines and manually created ones by human editors demonstrate the effectiveness of our proposed framework in terms of ROUGE metrics. |
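To make the iterative-substitution idea concrete, the sketch below greedily swaps sentences in and out of a fixed-size summary while a scoring function keeps improving, stopping at convergence. The scoring function here is a simple word-coverage stand-in, an assumption for illustration only; it is not the paper's objective, which balances relevance, coverage, coherence and cross-date diversity.

```python
from typing import Callable, List, Set

def iterative_substitution(sentences: List[str],
                           score: Callable[[Set[int]], float],
                           k: int, max_rounds: int = 100) -> Set[int]:
    """Start from the first k sentences and repeatedly try swapping one selected
    sentence for one unselected sentence, accepting the first swap that raises
    the summary score; stop when a full round yields no improvement."""
    selected = set(range(min(k, len(sentences))))
    for _ in range(max_rounds):
        improved = False
        current = score(selected)
        for out_idx in list(selected):
            for in_idx in set(range(len(sentences))) - selected:
                candidate = (selected - {out_idx}) | {in_idx}
                if score(candidate) > current:
                    selected, improved = candidate, True
                    break
            if improved:
                break
        if not improved:
            break
    return selected

if __name__ == "__main__":
    docs = ["storm hits coast", "storm weakens inland", "markets rally",
            "coastal damage reported", "storm hits coast again"]
    # toy score: number of distinct words covered by the chosen sentences
    score = lambda idxs: len({w for i in idxs for w in docs[i].split()})
    print(sorted(iterative_substitution(docs, score, k=2)))
```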
Manufacturing Systems Engineering - a review | An academic subject, "Manufacturing Systems Engineering", proposed in 1975, is historically reviewed by defining manufacturing/production systems, and manufacturing systems. The social role and future perspective of manufacturing systems (engineering) are also discussed, stressing "socially appropriate manufacturing" as manufacturing excellence for the 21st century. |
The semantic and stylistic differentiation of synonyms and near-synonyms | If we want to describe the action of someone who is looking out a window for an extended time, how do we choose between the words gazing, staring, and peering? What exactly is the difference between an argument, a dispute, and a row? In this paper, we describe our research in progress on the problem of lexical choice and the representations of world knowledge and of lexical structure and meaning that the task requires. In particular, we wish to deal with nuances and subtleties of denotation and connotation--shades of meaning and of style--such as those illustrated by the examples above. We are studying the task in two related contexts: machine translation, and the generation of multilingual text from a single representation of content. This work brings together several elements of our earlier research: unilingual lexical choice (Miezitis 1988); multilingual generation (Rösner and Stede 1992a,b); representing and preserving stylistic nuances in translation (DiMarco 1990; DiMarco and Hirst 1990; Mah 1991); and, more generally, analyzing and generating stylistic nuances in text (DiMarco and Hirst 1993; DiMarco et al. 1992; Makuta-Giluk 1991; Makuta-Giluk and DiMarco 1993; BenHassine 1992; Green 1992a,b, 1993; Hoyt forthcoming). In the present paper, we concentrate on issues in lexical representation. We describe a methodology, based on dictionary usage notes, that we are using to discover the dimensions along which similar words can be differentiated, and we discuss a two-part representation for lexical differentiation. (Our related work on lexical choice itself and its integration with other components of text generation is discussed by Stede (1993a,b, forthcoming).) aspects of their usage. Such differences can include the collocational constraints of the words (e.g., groundhog and woodchuck denote the same set of animals; yet Groundhog Day, * Woodchuck Day) and the stylistic and interpersonal connotations of the words (e.g., die, pass away, snuff it; slim, skinny; police officer, cop, pig). In addition, many groups of words are plesionyms (Cruse 1986)--that is, nearly synonymous; forest and woods, for example, or stared and gazed, or the German words einschrauben, festschrauben, and festziehen. The notions of synonymy and plesionymy can be made more precise by means of a notion of semantic distance (such as that invoked by Hirst (1987), for example, for lexical disambiguation); but this is troublesome to formalize satisfactorily. In this paper it will suffice to rely on an intuitive understanding. We consider two dimensions along which words can vary: semantic and stylistic, or, equivalently, denotative and connotative. If two words differ semantically (e.g., mist, fog), then substituting one for the other in a sentence or discourse will not necessarily preserve truth conditions; the denotations are not identical. If two words differ (solely) in stylistic features (e.g., frugal, stingy), then intersubstitution does preserve truth conditions, but the connotation--the stylistic and interpersonal effect of the sentence--is changed. Many of the semantic distinctions between plesionyms do not lend themselves to neat, taxonomic differentiation; rather, they are fuzzy, with plesionyms often having an area of overlap. For example, the boundary between forest and wood ‘tract of trees’ is vague, and there are some situations in which either word might be equally appropriate. |
Inter-Rater Agreement in the Clinical Diagnosis of Essential Tremor: Data from the NEDICES-2 Pilot Study | BACKGROUND
Our aim was to assess the diagnostic agreement among the neurologists in the Neurological Disorders in Central Spain 2 (NEDICES-2) study; these neurologists were assigning diagnoses of essential tremor (ET) vs. no ET.
METHODS
Clinical histories and standardized video-taped neurological examinations of 26 individuals (11 ET, seven Parkinson's disease, three diagnostically unclear, four normal, one with a tremor disorder other than ET) were provided to seven consultant neurologists, six neurology residents, and five neurology research fellows (18 neurologists total). For each of the 26 individuals, neurologists were asked to assign a diagnosis of "ET" or "no ET" using diagnostic criteria proposed by the Movement Disorders Society (MDS). Inter-rater agreement was assessed both with percent concordance and non-weighted κ statistics.
RESULTS
Overall κ was 0.61 (substantial agreement), with no differences between consultant neurologists (κ = 0.60), neurology residents (κ = 0.61), and neurology research fellows (κ = 0.66) in subgroup analyses. Subanalyses of agreement only among those 15 subjects with a previous diagnosis of ET (11 patients) and those with a previous diagnosis of being normal (four individuals) showed an overall κ of 0.51 (moderate agreement).
DISCUSSION
In a population-based epidemiological study, substantial agreement was demonstrated for the diagnosis of ET among neurologists of different levels of expertise. However, agreement was lower than that previously reported using the Washington Heights-Inwood Genetic Study of Essential Tremor criteria, and a head-to-head comparison is needed to assess which is the tool of choice in epidemiological research in ET. |
Multi-Task CNN Model for Attribute Prediction | This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets. |
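As a rough numerical illustration of the parameter decomposition described above (the shapes, latent dimension, regularization and the alternating least-squares solver here are assumptions, not the paper's training procedure), the per-attribute classifier weights W can be factored into a shared latent task matrix L and a combination matrix S, which is the mechanism that lets under-sampled attributes borrow statistics through the shared factor.

```python
import numpy as np

def factor_task_weights(W, latent_dim=8, iters=50, lam=1e-2, seed=0):
    """Approximate W (d x T: one weight column per attribute task) as L @ S,
    with L (d x latent_dim) shared across tasks and S (latent_dim x T)
    task-specific, by alternating ridge-regularized least squares on
    ||W - L S||^2 + lam (||L||^2 + ||S||^2)."""
    rng = np.random.default_rng(seed)
    d, T = W.shape
    S = rng.normal(scale=0.1, size=(latent_dim, T))
    I = lam * np.eye(latent_dim)
    for _ in range(iters):
        # solve for L given S:  L = W S^T (S S^T + lam I)^-1
        L = W @ S.T @ np.linalg.inv(S @ S.T + I)
        # solve for S given L:  S = (L^T L + lam I)^-1 L^T W
        S = np.linalg.inv(L.T @ L + I) @ L.T @ W
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(64, 10))          # 10 attribute classifiers on 64-d features (toy)
    L, S = factor_task_weights(W)
    print("relative reconstruction error:", np.linalg.norm(W - L @ S) / np.linalg.norm(W))
```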
A Dataset for Lane Instance Segmentation in Urban Environments | Autonomous vehicles require knowledge of the surrounding road layout, which can be predicted by state-of-the-art CNNs. This work addresses the current lack of data for determining lane instances, which are needed for various driving manoeuvres. The main issue is the time-consuming manual labelling process, typically applied per image. We notice that driving the car is itself a form of annotation. Therefore, we propose a semi-automated method that allows for efficient labelling of image sequences by utilising an estimated road plane in 3D, based on where the car has driven, and projecting labels from this plane into all images of the sequence. The average labelling time per image is reduced to 5 seconds and only an inexpensive dash-cam is required for data capture. We are releasing a dataset of 24,000 images and additionally show experimental semantic segmentation and instance segmentation results. |
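The core geometric step, projecting points that lie on an estimated ground plane into each camera frame, can be sketched with a standard pinhole model. The intrinsics, pose and lane samples below are made-up values for illustration, not calibration data or labels from the released dataset.

```python
import numpy as np

def project_plane_points(points_world, K, R, t):
    """Project 3D points (N x 3, e.g. lane boundary samples on the estimated
    road plane) into pixel coordinates with a pinhole camera: x ~ K (R X + t)."""
    cam = R @ points_world.T + t.reshape(3, 1)        # world -> camera frame
    cam = cam[:, cam[2] > 1e-6]                       # keep points in front of the camera
    uv = K @ cam
    return (uv[:2] / uv[2]).T                         # N x 2 pixel coordinates

if __name__ == "__main__":
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])                   # assumed intrinsics
    R, t = np.eye(3), np.array([0.0, 1.5, 0.0])       # camera assumed 1.5 m above the road
    # lane boundary sampled along the assumed road plane, ahead of the car
    lane = np.array([[1.8, 0.0, z] for z in np.arange(5.0, 50.0, 5.0)])
    print(project_plane_points(lane, K, R, t))
```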
Computerized detection of breast lesions in multi-centre and multi-instrument DCE-MR data using 3D principal component maps and template matching. | In this study, we introduce a novel, robust and accurate computerized algorithm based on volumetric principal component maps and template matching that facilitates lesion detection on dynamic contrast-enhanced MR. The study dataset comprises 24,204 contrast-enhanced breast MR images corresponding to 4034 axial slices from 47 women in the UK multi-centre study of MRI screening for breast cancer and categorized as high risk. The scans analysed here were performed on six different models of scanner from three commercial vendors, sited in 13 clinics around the UK. 1952 slices from this dataset, containing 15 benign and 13 malignant lesions, were used for training. The remaining 2082 slices, with 14 benign and 12 malignant lesions, were used for test purposes. To prevent false positives being detected from other tissues and regions of the body, breast volumes are segmented from pre-contrast images using a fast semi-automated algorithm. Principal component analysis is applied to the centred intensity vectors formed from the dynamic contrast-enhanced T1-weighted images of the segmented breasts, followed by automatic thresholding to eliminate fatty tissues and slowly enhancing normal parenchyma and a convolution and filtering process to minimize artefacts from moderately enhanced normal parenchyma and blood vessels. Finally, suspicious lesions are identified through a volumetric sixfold neighbourhood connectivity search and calculation of two morphological features: volume and volumetric eccentricity, to exclude highly enhanced blood vessels, nipples and normal parenchyma and to localize lesions. This provides satisfactory lesion localization. For a detection sensitivity of 100%, the overall false-positive detection rate of the system is 1.02/lesion, 1.17/case and 0.08/slice, comparing favourably with previous studies. This approach may facilitate detection of lesions in multi-centre and multi-instrument dynamic contrast-enhanced breast MR data. |
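A minimal sketch of the principal-component step described above: each voxel's dynamic intensity curve is treated as a vector, the curves are centred, and the leading components give volumetric maps in which strongly enhancing tissue stands out. The array shapes and the synthetic "enhancement" below are assumptions for illustration, not the study's acquisition protocol or thresholding pipeline.

```python
import numpy as np

def principal_component_maps(dyn, n_components=3):
    """dyn: 4D array (T time points, X, Y, Z). Returns n_components volumes,
    each the projection of every voxel's centred time-intensity curve onto
    one principal direction of the curve population."""
    T = dyn.shape[0]
    curves = dyn.reshape(T, -1).T                 # (n_voxels, T)
    centred = curves - curves.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)   # principal directions
    scores = centred @ vt[:n_components].T        # (n_voxels, n_components)
    return scores.T.reshape((n_components,) + dyn.shape[1:])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dyn = rng.normal(0.0, 0.05, size=(6, 16, 16, 8))       # 6 dynamic volumes (toy data)
    dyn[:, 8, 8, 4] += np.linspace(0.0, 2.0, 6)            # one rapidly enhancing voxel
    pc_maps = principal_component_maps(dyn)
    print("strongest |PC1| voxel:",
          np.unravel_index(np.abs(pc_maps[0]).argmax(), pc_maps[0].shape))
```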
IoT technologies for embedded computing: A survey | Emergence of Internet-of-Things brings a whole new class of applications and higher efficiency for existing services. Application-specific requirements, as well as connectivity and communication ability of devices have introduced new challenges for IoT applications.
This paper provides an overview of IoT technologies required from an embedded design perspective and specific properties associated with IoT in embedded systems' landscape. We investigate essential technologies for development of IoT systems, existing trends, and its distinguishing properties. By discussing the key characteristics, main application domains, and major research issues in IoT, this paper provides a comprehensive IoT perspective for embedded system design. |
Games as Neurofeedback Training for Kids with FASD | Biofeedback games help people maintain specific mental or physical states and are useful to help children with cognitive impairments learn to self-regulate their brain function. However, biofeedback games are expensive and difficult to create and are not sufficiently appealing to hold a child’s interest over the long term needed for effective biofeedback training. We present a system that turns off-the-shelf computer games into biofeedback games. Our approach uses texture-based graphical overlays that vary in their obfuscation of underlying screen elements based on the sensed physiological state of the child. The textures can be visually customized so that they appear to be integrated with the underlying game. Through a 12-week deployment, with 16 children with Fetal Alcohol Spectrum Disorder, we show that our solution can hold a child’s interest over a long term, and balances the competing needs of maintaining the fun of playing, while providing effective biofeedback training. |
Abdominal Tumor in a 14-Year-Old Adolescent: Imperforate Hymen, Resulting in Hematocolpos—A Case Report and Review of the Literature | Background. Abdominal masses in female adolescents are uncommon. A rare cause of this condition is hematocolpos due to imperforate hymen. Case. We present a case of an unusually massive asymptomatic abdominal bulk in a 14-year-old female patient, who sought for medical advice after unusual abdominal pain lasting for few weeks. The patient was otherwise asymptomatic, apart from an unusual dramatic expansion of her abdominal wall during the last month. We describe the surgical management and the follow-up of the patient. Summary and Conclusion. Clinicians should keep in mind that an imperforate hymen can cause abdominal growth due to hematocolpos and include it in the differential diagnosis of such a clinical entity in female adolescents. 2D ultrasound is usually efficient for the confirmation of the diagnosis of hematocolpos, but 3D ultrasound is more accurate. Wide excision should be undertaken, as an initial approach, to avoid recurrence. |
A forward converter topology employing a resonant auxiliary circuit to achieve soft switching and power transformer resetting | This paper presents a forward converter topology that employs a small resonant auxiliary circuit. The advantages of the proposed topology include soft switching in both the main and auxiliary switches, recovery of the leakage inductance energy, simplified power transformer achieving self-reset without using the conventional reset winding, simple gate drive and control circuit, etc. Steady-state analysis is performed herein, and a design procedure is presented for general applications. A 35–75-Vdc to 5 Vdc 100-W prototype converter switched at a frequency of 200 kHz is built to verify the design, and 90% overall efficiency has been obtained experimentally at full load. |
Looking for crossmodal correspondences between classical music and fine wine | Wine writers sometimes compare wines to pieces of music, a particular musical style or artist, or even to specific musical parameters. To date, though, it is unclear whether such comparisons merely reflect the idiosyncratic matches of the writers concerned or whether instead they reflect more general crossmodal matching tendencies that would also be shared by others (e.g., social drinkers). In our first experiment, we looked for any consensual patterns of crossmodal matching across a group of 24 participants who were presented with four distinctive wines to taste. In our second experiment, three of the wines were presented with and without music and 26 participants were asked to rate the perceived sweetness, acidity, alcohol level, fruitiness, tannin level, and their own enjoyment of the wines. The results of experiment 1 revealed the existence of a significant agreement amongst the participants in terms of specific classical music - fine wine pairings that appeared to go particularly well (or badly) together. For example, Tchaikovsky’s String Quartet No 1 in D major turned out to be a very good match for the Château Margaux 2004 (red wine). Meanwhile, Mozart’s Flute Quartet in D major, K285 was found to be a good match for the Pouilly Fumé (white wine). The results of experiment 2 revealed that participants perceived the wine as tasting sweeter and enjoyed the experience more while listening to the matching music than while tasting the wine in silence. Taken together, the results of the two experiments reported here suggest that people (social drinkers) share a number of crossmodal associations when it comes to pairing wines and music. Furthermore, listening to the appropriate classical music can enhance the overall experience associated with drinking wine. As such, our findings provide prima facie evidence to support the claim that comparing a wine to a particular style of music (as documented in the work of a number of wine writers) might provide the social drinker with useful clues about the sensory properties that they should expect to perceive in a wine should they eventually get to taste it. |
Shoot organogenesis and mass propagation of Coleus forskohlii from leaf derived callus | A high frequency shoot organogenesis and plant establishment protocol has been developed for Coleus forskohlii from leaf-derived callus. Optimal callus was developed from mature leaves on Murashige and Skoog (MS) medium supplemented with 2.4 μM kinetin alone. Shoots were regenerated from the callus on MS medium supplemented with 4.6 μM kinetin and 0.54 μM 1-naphthalene acetic acid. The highest rate of shoot multiplication was achieved at the sixth subculture and more than 150 shoots were produced per callus clump. Regenerated shootlets were rooted spontaneously on half-strength MS medium devoid of growth regulators. The in vitro raised plants were established successfully in soil. The amount of forskolin in in vitro-raised plants and wild plants was estimated, and it was found that they produce comparable quantities of forskolin. This in vitro propagation protocol should be useful for conservation as well as mass propagation of this plant. |
Idea of multi cohesive areas - foundation, current status and perspective | The idea of multi cohesive areas (MCA) is a new, theoretical model of quantum particle mass. This model contains a dark-matter sector. Moreover, it can explain the current experimental data on both dark matter and dark energy phenomena. In this work, the current status of this idea from a theoretical and experimental perspective will be shown. It will be done by presenting the motivation behind its creation, its theoretical foundation and how it explains the mentioned current experimental data. The result from this work is a proof that in the further MCA development, quantities like particles or fields have to find a new image in which they are created by the speed of light. The conclusion from this work is that the mentioned development can create a theory for all interactions. Moreover, such a theory will have a practical value. Namely, by using this theory, the “disappearing” matter in the visible world will be made available by changing into dark matter. This, together with the fact that the current dark matter models do not yield any significant outcomes, is a proof that such a development is at least worth considering. |
Metrics for Requirements Engineering and Automated Requirements Tools | Software requirements are the foundation from which quality is measured. Measurement makes it possible to improve the software process; assist in planning, tracking and controlling the software project; and assess the quality of the software thus produced. Quality issues such as accuracy, security and performance are often crucial to the success of a software system. Quality should be maintained from the starting phase of software development. Requirements management plays an important role in maintaining the quality of software. A project can deliver the right solution on time and within budget with proper requirements management. Software quality can be maintained by checking quality attributes in the requirements document. Requirements metrics such as volatility, traceability, size and completeness are used to measure the requirements engineering phase of the software development lifecycle. Manual measurement is expensive, time consuming and prone to error; therefore automated tools should be used. Automated requirements tools are helpful in measuring requirements metrics. The aim of this paper is to study and analyze requirements metrics and automated requirements tools, which will help in choosing the right metrics to measure software development based on an evaluation of automated requirements tools. |
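As a small worked example of the kind of metric discussed above, requirements volatility is commonly computed as the proportion of requirements added, deleted or modified over a period relative to the total, and traceability as the fraction of requirements linked to downstream artifacts. The exact formulas and counts below are illustrative assumptions rather than definitions mandated by any particular tool.

```python
def requirements_volatility(added: int, deleted: int, modified: int, total: int) -> float:
    """Volatility = (added + deleted + modified) / total requirements."""
    if total <= 0:
        raise ValueError("total requirements must be positive")
    return (added + deleted + modified) / total

def traceability_coverage(traced: int, total: int) -> float:
    """Fraction of requirements linked to at least one design/test artifact."""
    return traced / total if total else 0.0

if __name__ == "__main__":
    # e.g. during one iteration: 5 added, 2 deleted, 8 modified out of 120 requirements
    print(f"volatility  : {requirements_volatility(5, 2, 8, 120):.2%}")
    print(f"traceability: {traceability_coverage(102, 120):.2%}")
```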
Processing of co-continuous ceramic composites by reactive penetration method: influence of composition of ceramic preforms and infiltrating alloys | The reactive penetration of Al based alloys in massive silica glass and sintered preforms (made of silica, silica plus silicon carbide or silica plus aluminium nitride) was investigated. The reactions occurring during preform sintering and reactive metal penetration were preliminarily studied by Differential Thermal Analysis. Square bars of co-continuous composites were then processed by using this reactive metal penetration (RMP) method. The effect on infiltration rate of temperature, alloying elements added to aluminium (Mg and Si) and preform microstructure (composition and porosity) was assessed. The microstructure of both sintered preforms and final co-continuous composites was investigated by XRD, SEM and mercury intrusion porosimetry. Vickers, Charpy, bending and tensile tests were used to study the mechanical behaviour of composite bars. The addition of magnesium to molten aluminium (used for infiltration) enhances the penetration rate, while the process speed decreases with the preform porosity increase. The addition of silicon to the molten bath decreases the infiltration rate, but this element is needed in order to avoid the detrimental formation of aluminium carbide, otherwise resulting from the reaction between Al and SiC. The thermal expansion and the mechanical features of the composites greatly change when SiC or AlN are mixed to the silica powders used for preform fabrication. |
Analyzing Intention in Utterances | This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and then use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information than explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts. |
Co-production of lactic acid and chitin using a pelletized filamentous fungus Rhizopus oryzae cultured on cull potatoes and glucose. | AIMS
This paper developed a novel process for the co-production of lactic acid and chitin by fermentation with pelletized Rhizopus oryzae NRRL 395, using underutilized cull potatoes and glucose as the nutrient source.
METHODS AND RESULTS
Whole potato hydrolysate medium was first used to produce the highest pelletized biomass yield, accompanied by the highest chitin content in the biomass. Enhanced lactic acid production then followed, using batch, repeated-batch and fed-batch culture with glucose as the carbon source and a mixture of ammonia and sodium hydroxide as the neutralizer. Lactic acid productivity peaked at 2.8 and 3 g l(-1) h(-1) in repeated-batch culture and batch culture, respectively. The fed-batch culture had the highest lactate concentration of 140 g l(-1).
CONCLUSIONS
Separation of the biomass cultivation and the lactic acid production is able to not only improve lactic acid production, but also enhance the chitin content. Cull potato hydrolysate used as a nutrient source for biomass cultivation can significantly increase both biomass yield and chitin content.
SIGNIFICANCE AND IMPACT OF THE STUDY
The three-step process using pelletized R. oryzae fermentation innovatively integrates the utilization of agricultural residues into the co-production of lactic acid and chitin, so as to improve the efficiency, revenue and cost-effectiveness of fungal lactic acid production. |
Neural Architecture Search: A Survey | Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize them according to three dimensions: search space, search strategy, and performance estimation strategy. |
Inter-twin contact in a case of monochorionic diamniotic twins with acrania of one twin fetus at 10–13 weeks’ gestation | To the Editor, Anencephaly can be reliably diagnosed using ultrasound late in the first trimester of pregnancy [1]. The prevalence of anencephaly in twins is higher than that in singleton pregnancies, and the prevalence of discordance in anencephaly in monochorionic twins is higher than that in dichorionic twins [2]. There have been only two reports on conventional two-dimensional (2D) sonographic assessment of fetal behavior in twins discordant because of anencephaly after 20 weeks of gestation [3, 4]. However, there has been no report on four-dimensional (4D) sonographic assessment of inter-twin contact in twins discordant because of anencephaly in utero. To the best of our knowledge, this is the first report on 4D sonographic assessment of inter-twin contact in a case of monochorionic diamniotic (MD) twins with acrania of one twin fetus late in the first trimester. A 28-year-old Japanese woman, gravida 3, para 1, visited our hospital because of secondary amenorrhea, and MD twin pregnancy at 7 weeks and 4 days was diagnosed. At 11 weeks and 4 days, twin pregnancy with acrania of one twin fetus was diagnosed (Fig. 1). The parents were informed about the lethality of the affected twin fetus, but they elected to continue the pregnancy. At 25 weeks and 1 day, she was admitted to our hospital because of threatened premature labor (short cervix and irritable uterine contractions). At 38 weeks and 4 days, the first of two female infants (anencephalic twin), weighing 1,948 g, and the other infant (second twin), weighing 2,372 g (Apgar score 7 at 1 min and 9 at 5 min; umbilical artery blood pH 7.089), were delivered by elective cesarean section because of two previous cesarean sections. The anencephalic twin died soon after delivery. Permission to conduct an autopsy was not granted by the parents. The second twin is doing well. Detailed descriptions of the data-collecting methods and measurement procedures used in this patient have been presented in a previous publication [5]. In brief, examinations were performed for 30 min with transabdominal 4D sonography at 11 weeks and 4 days and 13 weeks and 1 day of pregnancy, respectively. She was asked whether she would agree to a 30-min observation of fetal movements and inter-twin contact after undergoing routine sonographic examinations. This study was approved by the local ethics committee of Kagawa University School of Medicine, and standardized written informed consent was obtained from the patient. All 4D examinations were performed using Voluson 730 Expert (GE Medical Systems, Milwaukee, WI) with a transabdominal 2–5-MHz transducer. Ten types of inter-twin contact (head to head, head to arm, head to trunk, head to leg, arm to arm, arm to trunk, arm to leg, trunk to trunk, trunk to leg, and leg to leg) were analyzed during playback of the video recordings. The total number of all inter-twin contacts was determined by a single experienced observer (M.S.) and compared to the quartile range obtained from normal MD twins [5]. The frequencies of ten types of inter-twin contact were also compared to the quartile ranges obtained from normal MD twins [6]. The total number of inter-twin contacts in this patient was low compared to those of normal MD twin fetuses at 10–11 and 12–13 weeks’ gestation (Fig. 2). The frequencies of the ten types of inter-twin contact at 10–11 weeks were almost within the quartile ranges (Fig. 3). |
Sleep-wake disturbances and quality of life in patients with advanced lung cancer. | PURPOSE/OBJECTIVES
To examine the scope and severity of subjective sleep-wake disturbances in patients with lung cancer and compare them to a group of healthy adults who were similar in age, gender, and race, and to examine the impact of sleep-wake disturbances on measures of health-related quality of life (QOL).
DESIGN
Descriptive, comparative.
SETTING
University-based and private urban ambulatory care clinics.
SAMPLE
43 patients with advanced non-small cell or small cell lung cancer and 36 healthy adults. All participants were cognitively intact, and none had any known neurologic disorder, polysomnographically diagnosed sleep disorder, mood or anxiety disorders, or cerebral metastasis.
METHODS
Questionnaires, interview, and medical record review.
MAIN RESEARCH VARIABLES
Nocturnal sleep (quality, quantity, and disturbance), daytime sleepiness, and health-related QOL (physical, mental).
FINDINGS
Patients with lung cancer had poor perceived nocturnal sleep quality and excessive daytime sleepiness that differed significantly from the comparison group. Sleep disturbances in the group with lung cancer were characterized by breathing difficulty, cough, nocturia, and frequent awakenings. Sleep-wake disturbances were significantly associated with poorer health-related QOL after controlling for group. Excessive daytime sleepiness was associated most often with decreases in mental health, whereas poor nocturnal sleep was associated most often with decreases in physical health.
CONCLUSIONS
Findings suggest that sleep-wake disturbances are common in patients with lung cancer and that the disturbances are significantly associated with health-related QOL. Patients with lung cancer may be at risk for sleep-disordered breathing.
IMPLICATIONS FOR NURSING
The magnitude of nocturnal sleep disturbance and daytime sleepiness identified in this study reinforces the importance of ongoing screening and effective intervention for sleep-wake disturbances in patients with lung cancer. |
A Generalized Approach to Measure Market Timing Skills of Fund Managers: Theory and Evidence | In this paper the authors extend the analysis in Woodward and Brooks (2010) to derive a generalized form of Merton’s (1981) dual beta market timing model that allows for continuous adjustment of portfolio beta in response to changing market conditions, and also includes the dual beta model as a special case. The model provides a more realistic representation of the fund return generation process. Using this model the authors test the market timing skills of fund managers for a sample of Australian superannuation funds for the period 1990 to 2002. The authors find that managed funds in which investors voluntarily select a given fund (retail funds) experience frequent rebalancing when compared to managed funds in which the investors’ contribution is involuntary (wholesale funds). The authors relate the greater sensitivity to all changes in market conditions of retail funds to higher expenses and poor performance that was found in a recent study by Langford, Faff and Marisetty (2006). The results have important implications for Australian superannuation policy, since the Australian Government, effective from 1st July 2005, has required all funds to introduce voluntary contribution schemes. |
Quantifying interactive user experience on thin clients | We describe an approach to quantifying the impact of network latency on interactive response and show that the adequacy of thin-client computing is highly variable and depends on both the application and available network quality. If near-ideal network conditions (low latency and high bandwidth) can be guaranteed, thin clients offer a good computing experience. As network quality degrades, interactive performance suffers. It is latency - not bandwidth - that is the greater challenge. Tightly coupled tasks such as graphics editing suffer more than loosely coupled tasks such as Web browsing. The combination of worst anticipated network quality and most tightly coupled tasks determines whether a thin-client approach is satisfactory for an organization. |
Foldem: Heterogeneous Object Fabrication via Selective Ablation of Multi-Material Sheets | Foldem, a novel method for the rapid fabrication of objects with multi-material properties, is presented. Our specially formulated Foldem sheet allows users to fabricate and easily assemble objects with rigid, bendable, and flexible properties using a standard laser cutter. The user begins by creating his designs in a vector graphics software package. A laser cutter is then used to fabricate the design by selectively ablating/vaporizing one or more layers of the Foldem sheet to achieve the desired physical properties for each joint. Herein the composition of the Foldem sheet, as well as various design considerations taken into account while building and designing the method, is described. Sample objects made with Foldem are demonstrated, each showcasing the unique attributes of Foldem. Additionally, a novel method for carefully calibrating a laser cutter for precise ablation is presented. |
A Survey of Intrusion Detection Techniques | Intrusion detection is a response to the problem of security violations. Network security mechanisms are necessary to protect the system against threats. There are two types of intruders: external intruders, who are unauthorized users of the machines they attack, and internal intruders, who have permission to access the system with some restrictions. This paper presents a brief overview of various intrusion detection techniques such as fuzzy logic, neural networks, pattern recognition methods, genetic algorithms and related techniques. Among the several soft computing paradigms, fuzzy rule-based classifiers, decision trees, support vector machines and linear genetic programming are used to model fast and efficient intrusion detection systems. Keywords: introduction, intrusion detection methods, misuse detection techniques, anomaly detection techniques, genetic algorithms. |
Habits in everyday life: thought, emotion, and action. | To illustrate the differing thoughts and emotions involved in guiding habitual and nonhabitual behavior, 2 diary studies were conducted in which participants provided hourly reports of their ongoing experiences. When participants were engaged in habitual behavior, defined as behavior that had been performed almost daily in stable contexts, they were likely to think about issues unrelated to their behavior, presumably because they did not have to consciously guide their actions. When engaged in nonhabitual behavior, or actions performed less often or in shifting contexts, participants' thoughts tended to correspond to their behavior, suggesting that thought was necessary to guide action. Furthermore, the self-regulatory benefits of habits were apparent in the lesser feelings of stress associated with habitual than nonhabitual behavior. |
A System for Axiomatic Programming | We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems. |
The RWTH Aachen University open source speech recognition system | We announce the public availability of the RWTH Aachen University speech recognition toolkit. The toolkit includes state-of-the-art speech recognition technology for acoustic model training and decoding. Speaker adaptation, speaker adaptive training, unsupervised training, a finite state automata library, and an efficient tree search decoder are notable components. Comprehensive documentation, example setups for training and recognition, and a tutorial are provided to support newcomers. |
The academic environment: the students' perspective. | Dental education is regarded as a complex, demanding and often stressful pedagogical procedure. Undergraduates, while enrolled in programmes of 4-6 years duration, are required to attain a unique and diverse collection of competences. Despite the major differences in educational systems, philosophies, methods and resources available worldwide, dental students' views regarding their education appear to be relatively convergent. This paper summarizes dental students' standpoint of their studies, showcases their experiences in different educational settings and discusses the characteristics of a positive academic environment. It is a consensus opinion that the 'students' perspective' should be taken into consideration in all discussions and decisions regarding dental education. Moreover, it is suggested that the set of recommendations proposed can improve students' quality of life and well-being, enhance their total educational experience and positively influence their future careers as oral health physicians. The 'ideal' academic environment may be defined as one that best prepares students for their future professional life and contributes towards their personal development, psychosomatic and social well-being. A number of diverse factors significantly influence the way students perceive and experience their education. These range from 'class size', 'leisure time' and 'assessment procedures' to 'relations with peers and faculty', 'ethical climate' and 'extra-curricular opportunities'. Research has revealed that stress symptoms, including psychological and psychosomatic manifestations, are prevalent among dental students. Apparently some stressors are inherent in dental studies. Nevertheless, suggested strategies and preventive interventions can reduce or eliminate many sources of stress and appropriate support services should be readily available. A key point for the Working Group has been the discrimination between 'teaching' and 'learning'. It is suggested that the educational content should be made available to students through a variety of methods, because individual learning styles and preferences vary considerably. Regardless of the educational philosophy adopted, students should be placed at the centre of the process. Moreover, it is critical that they are encouraged to take responsibility for their own learning. Other improvements suggested include increased formative assessment and self-assessment opportunities, reflective portfolios, collaborative learning, familiarization with and increased implementation of information and communication technology applications, early clinical exposure, greater emphasis on qualitative criteria in clinical education, community placements, and other extracurricular experiences such as international exchanges and awareness of minority and global health issues. The establishment of a global network in dental education is firmly supported but to be effective it will need active student representation and involvement. |
Feasibility of closed-loop co-administration of propofol and remifentanil guided by the bispectral index in obese patients: a prospective cohort comparison. | BACKGROUND
We used an automated bispectral index (BIS)-guided dual-loop controller to determine propofol and remifentanil requirements during general anaesthesia in obese and lean surgical patients.
METHODS
Obese patients, BMI>35 kg m(-2), and lean patients (<25 kg m(-2)) having laparoscopic procedures were prospectively evaluated in this multicentre single-blind study. The automated controller targeted BIS between 40 and 60 by adjusting propofol and remifentanil administration. Propofol and remifentanil consumptions were calculated using both total body weight (TBW) and ideal body weight (IBW). Results are expressed as medians (inter-quartile range).
RESULTS
Thirty obese [BMI=43 (40-49) kg m(-2)] and 29 lean [BMI=23 (21-25) kg m(-2)] patients completed the study. BIS was between 40 and 60 during 84 (69-91)% vs 85 (78-92)% of the anaesthetic time, P=0.46. The amount of propofol given during induction [1.2 (1.1-1.6) vs 1.3 (1.0-1.7) mg kg(-1), P=0.47] and maintenance [5.2 (4.1-6) vs 5.3 (4.7-6.4) mg kg(-1) h(-1), P=0.39] calculated using TBW was similar between the two groups. The dual-loop controller delivered half as much remifentanil to the obese patients during induction [1.0 (0.8-1.6) vs 2.2 (1.5-2.7) µg kg(-1), P<0.001] and maintenance [0.12 (0.07-0.16) vs 0.25 (0.17-0.29) µg kg(-1) min(-1), P<0.001] calculated using TBW. But when remifentanil consumption was calculated using IBW, the amounts were similar during induction at 2.2 (1.6-3.5) vs 2.0 (1.6-3.0) µg kg(-1) IBW, P=0.48, and during maintenance at 0.26 (0.16-0.34) vs 0.27 (0.18-0.33 ) µg kg(-1) min(-1), P=0.50.
CONCLUSIONS
The amount of propofol-remifentanil administered by the controller is consistent with current knowledge: propofol is best dosed using TBW, whereas remifentanil is best dosed using IBW.
CLINICAL TRIAL REGISTRATION
NCT00779844. |
Clustering With Multi-Layer Graphs: A Spectral Perspective | Observational data usually comes with a multimodal nature, which means that it can be naturally represented by a multi-layer graph whose layers share the same set of vertices (objects) with different edges (pairwise relationships). In this paper, we address the problem of combining different layers of the multi-layer graph for an improved clustering of the vertices compared to using layers independently. We propose two novel methods, which are based on a joint matrix factorization and a graph regularization framework respectively, to efficiently combine the spectrum of the multiple graph layers, namely the eigenvectors of the graph Laplacian matrices. In each case, the resulting combination, which we call a “joint spectrum” of multiple layers, is used for clustering the vertices. We evaluate our approaches by experiments with several real world social network datasets. Results demonstrate the superior or competitive performance of the proposed methods compared to state-of-the-art techniques and common baseline methods, such as co-regularization and summation of information from individual graphs. |
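A hedged sketch of the general pipeline behind the abstract above (not the paper's specific joint matrix factorization or graph-regularization schemes): build each layer's normalized Laplacian, take its leading eigenvectors, combine the per-layer spectra into a single embedding, and run k-means on the rows. The combination step here is a simple concatenate-and-normalize stand-in, and the toy graphs are invented.

```python
import numpy as np

def layer_spectrum(adj, k):
    """First k eigenvectors of the symmetric normalized Laplacian of one layer."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = np.linalg.eigh(lap)
    return vecs[:, :k]                      # eigenvectors for the k smallest eigenvalues

def joint_embedding(adjacency_list, k):
    """Stack the per-layer spectral embeddings column-wise (a simple surrogate
    for a learned joint spectrum) and row-normalize."""
    emb = np.hstack([layer_spectrum(a, k) for a in adjacency_list])
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    return emb / np.maximum(norms, 1e-12)

def kmeans(x, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

if __name__ == "__main__":
    # two layers over 6 vertices; both weakly suggest clusters {0,1,2} and {3,4,5}
    a1 = np.array([[0,1,1,0,0,0],[1,0,1,0,0,0],[1,1,0,1,0,0],
                   [0,0,1,0,1,1],[0,0,0,1,0,1],[0,0,0,1,1,0]], float)
    a2 = np.array([[0,1,0,0,0,0],[1,0,1,0,0,0],[0,1,0,0,0,0],
                   [0,0,0,0,1,0],[0,0,0,1,0,1],[0,0,0,0,1,0]], float)
    print(kmeans(joint_embedding([a1, a2], k=2), k=2))
```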
Nanofabrication and coloration study of artificial Morpho butterfly wings with aligned lamellae layers | The bright and iridescent blue color of Morpho butterfly wings has attracted worldwide attention, with long-standing efforts to explore its mysterious nature. Although the physics of the structural color produced by the nanophotonic structures built on the wing scales has been well established, replication of the wing structure by standard top-down lithography still remains a challenge. This paper reports a technical breakthrough in mimicking the blue color of Morpho butterfly wings, by developing a novel nanofabrication process, based on electron beam lithography combined with alternate PMMA/LOR development/dissolution, for photonic structures with aligned lamellae multilayers in colorless polymers. The relationship between the coloration and the geometric dimensions as well as shapes is systematically analyzed by solving Maxwell's equations with a finite-difference time-domain simulator. Careful characterization of the mimicked blue by spectral measurements under both normal and oblique angles is carried out. Structural color in blue reflected by the fabricated wing scales is demonstrated and further extended to green as an application exercise of the new technique. The effects of the regularity in the replicas on coloration are analyzed. In principle, this approach establishes a starting point for mimicking structural colors beyond the blue in Morpho butterfly wings. |
Rough sets and Boolean reasoning | In this article, we discuss methods based on the combination of rough sets and Boolean reasoning with applications in pattern recognition, machine learning, data mining and conflict analysis. |
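To anchor the rough-set side of the discussion, the sketch below computes the lower and upper approximations of a target set from the indiscernibility relation induced by a chosen attribute subset; the toy decision table is invented purely for illustration.

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """objects: dict id -> dict of attribute values; attrs: attributes defining
    the indiscernibility relation; target: set of object ids.
    Returns (lower, upper) approximations of the target set. The boundary
    region upper - lower is what Boolean-reasoning techniques such as
    reduct computation try to shrink."""
    blocks = defaultdict(set)
    for oid, values in objects.items():
        key = tuple(values[a] for a in attrs)     # equivalence class signature
        blocks[key].add(oid)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:
            lower |= block                        # certainly in the target
        if block & target:
            upper |= block                        # possibly in the target
    return lower, upper

if __name__ == "__main__":
    table = {1: {"temp": "high", "cough": "yes"},
             2: {"temp": "high", "cough": "yes"},
             3: {"temp": "high", "cough": "no"},
             4: {"temp": "low",  "cough": "no"}}
    flu = {1, 3}
    print(approximations(table, ["temp", "cough"], flu))   # ({3}, {1, 2, 3})
```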
Suboptimal PON network designing algorithm for minimizing deployment cost of optical fiber cables | In order to meet the bandwidth requirements for the access network, many network operators nowadays have been rapidly expanding their fiber-to-the-home (FTTH) service area. In such an area, a passive double star (PDS) network, which shares one optical fiber with multiple subscribers by using a power splitter, is commonly deployed as an infrastructure for the passive optical network (PON) systems. One of the main focuses for PON network planning is to determine the locations of the optical splitters and optical fiber cable routes that connect every splitter to the central offices (COs) based on the forecasted demands, within a limited deployment cost under realistic restrictions. In this paper, we propose and demonstrate a novel suboptimal design algorithm of PON optical distribution network (ODN). Based on the forecasted demand, it can automatically generate a suboptimal PON network in terms of the total cable deployment construction length under realistic restrictions. |
The challenge of addressing Grand Challenges: a think piece on how innovation can be driven towards the "Grand Challenges" as defined under the prospective European Union Framework Programme Horizon 2020 | The orientation towards Grand Challenges creates a challenge for science, technology, and innovation (STI) policies and practices as we know them, because they are of a different kind than usual STI policy concerns. Grand Challenges are sometimes seen as priorities for R&D and innovation stimulation, and treated that way, say, through dedicated public funding. But they should rather be seen as open-ended missions, and missions concerning the socioeconomic system as a whole, even inducing (or requiring) system transformation. Thus, Grand Challenges are ambitious, but not in the way the Manhattan Project (to develop an atom bomb) and the Apollo Project (to put a man on the moon) were. There, the challenge was technical (and organisational), and whether the goals were achieved or not was unambiguous. Grand Challenges, though, pertain to heterogeneous elements and forces, which have to be mobilised, guided and integrated, and include social innovation. Many different actors need to be involved, and the perspectives on what is the problem and what constitutes its resolution differ across various societal groups. Also, we see both “drivers of novelty and innovation as well as processes of capture and co-optation ... involved” (Kallerud et al. 2013, 4), so Grand Challenges policies have to cope with contestation, non-linearity and bifurcations in developments. This is not a message of despair, but it does imply that our present understandings and practices of STI policy are not sufficient to address Grand Challenges and set priorities accordingly. |
Quality of life and related variables in patients with ankylosing spondylitis | To evaluate quality of life (QoL) and related variables in patients with ankylosing spondylitis (AS), a chronic inflammatory disease of the spine. Nine hundred and sixty-two patients with AS from the Turkish League Against Rheumatism AS Registry, who fulfilled the modified New York criteria, were enrolled. The patients were evaluated using the Assessment of SpondyloArthritis International Society core outcome domains, including the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), fatigue (BASDAI question 1), pain (last week/spine/due to AS), the Bath Ankylosing Spondylitis Functional Index (BASFI), the Bath Ankylosing Spondylitis Metrology Index (BASMI), the Bath Ankylosing Spondylitis Radiology Index (BASRI), the Maastricht Ankylosing Spondylitis Enthesitis Score (MASES) and two QoL questionnaires (the disease-specific ASQoL and the generic Short Form-36 [SF-36]). The mean ASQoL score was 7.1 ± 5.7. The SF-36 subscales of general health, physical role and bodily pain had the poorest scores. ASQoL was strongly correlated with disease duration, BASDAI, fatigue, BASFI, BASMI, BASRI, MASES, pain and the SF-36 subscales (P < 0.001). The SF-36 subscales were also strongly correlated with BASDAI and BASFI. Advanced educational status and regular exercise habits positively affected QoL, while smoking negatively affected QoL. In patients with AS, the most significant variables associated with QoL were BASDAI, BASFI, fatigue and pain. The ASQoL was noted to be a short, rapid and simple patient-reported outcome (PRO) instrument that correlates strongly with the SF-36 subscales.
Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States. | Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function. |
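For readers unfamiliar with the framework, the standard Freidlin-Wentzell action functional for an Itô diffusion is shown below; this is the generic textbook form, not the paper's specific chemical-species formulation.

```latex
% Freidlin--Wentzell action for dX = b(X)\,dt + \sqrt{\epsilon}\,\sigma(X)\,dW, with a(x) = \sigma(x)\sigma(x)^{\top}:
S_T[\phi] \;=\; \frac{1}{2}\int_0^T \bigl(\dot\phi(t) - b(\phi(t))\bigr)^{\!\top} a^{-1}(\phi(t))\,\bigl(\dot\phi(t) - b(\phi(t))\bigr)\,dt .
% The minimum-action (most probable) exit path \phi^* sets the small-noise scaling of the mean escape time:
% \mathbb{E}[\tau_{\mathrm{exit}}] \asymp \exp\!\bigl(S_T[\phi^*]/\epsilon\bigr) \quad \text{as } \epsilon \to 0.
```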
Renal and cardiac effects of antihypertensive treatment with ramipril versus metoprolol in autosomal dominant polycystic kidney disease. | Autosomal dominant polycystic kidney disease (ADPKD) affects ∼12.5 million people worldwide. Forty percent of patients are diagnosed by 45 years of age. ADPKD is the fourth most common cause of end-stage renal disease (ESRD) worldwide, and the disease accounts for 5–10% of renal transplant recipients [1,2]. Therefore, any therapeutic modality that can slow the natural history of this disease would have a significant impact on patients’ well-being and would be financially cost-effective. The manuscript by Zeltner et al. in this issue attempts to determine whether there is a difference between an angiotensin-converting enzyme (ACE) inhibitor (ramipril) and a beta blocker (metoprolol) when used as first-line therapy in ADPKD patients with hypertension. The study has a number of limitations, some of them recognized by the authors, that could influence the conclusions; they are noted as follows:
Web metrics for managing quality and auditing Croatian hotel web sites – cluster analysis | Intensive use of e-business can provide a number of opportunities and tangible benefits to companies of all sectors and sizes. In general, through web sites companies can create a global presence and widen their business boundaries. Many organizations now have web sites to complement their other activities, but it is likely that a smaller proportion really know how successful their sites are and to what extent they comply with business objectives. A key enabler of web site measurement is web site analytics and metrics. Web site analytics refers in particular to the use of data collected from a web site to determine which aspects of the site work towards the business objectives. Advanced web analytics must play an important role in overall company strategy and should converge towards web intelligence – a specific part of business intelligence that collects and analyzes information gathered from web sites and applies it in the relevant business context. This paper examines the importance of measuring the web site quality of Croatian hotels. A wide range of web site metrics is discussed, and a set of 8 dimensions and 44 attributes is chosen for the evaluation of Croatian hotels' web site quality. The objective of the survey, conducted on 30 hotels, was to identify different groups of hotel web sites in relation to their quality as measured with specific web metrics. The key research question was: can hotel web sites be placed into meaningful groups by considering variation in web metrics and the number of hotel stars? To confirm the reliability of the chosen metrics, a Cronbach's alpha test was conducted. Apart from descriptive statistics, cluster analysis was conducted to answer the research question, and the characteristics of the three resulting clusters were considered. The experiences and best practices of the hotel web site clusters are taken as the prime source of recommendations for improving web site quality. Key words: web metrics, hotel web sites, web analytics, web site audit, web site quality, cluster analysis
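The paper's 44 attributes and exact clustering setup are not reproduced here; the sketch below, assuming hypothetical 1-5 attribute scores for 30 hotels in a pandas DataFrame, shows how Cronbach's alpha and a three-cluster k-means of the kind described in the abstract can be computed with scikit-learn.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for item scores (rows = hotels, columns = quality attributes)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 30 hotels scored on 44 quality attributes plus a star rating
rng = np.random.default_rng(42)
scores = pd.DataFrame(rng.integers(1, 6, size=(30, 44)),
                      columns=[f"attr_{i}" for i in range(44)])
stars = pd.Series(rng.integers(2, 6, size=30), name="stars")

print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

features = StandardScaler().fit_transform(pd.concat([scores, stars], axis=1))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(pd.Series(labels).value_counts().sort_index())
```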
IoT-Based Configurable Information Service Platform for Product Lifecycle Management | Internet of Things (IoT) software is required not only to handle huge volumes of real-time, heterogeneous data, but also to support complex applications for business purposes. Using an ontology approach, a Configurable Information Service Platform is proposed for the development of IoT-based applications. Based on an abstract information model, information encapsulation, composition, decomposition, transfer, tracing, and interaction in Product Lifecycle Management can be carried out. Combining ontology with representational state transfer (REST)-ful services, the platform provides an information support base for both data integration and intelligent interaction. A case study is given to verify the platform. It is shown that the platform provides a promising way to realize IoT applications at the semantic level.
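The abstract does not specify the platform's API, so the sketch below is only a hypothetical illustration of a REST-ful lifecycle-information endpoint (Flask, with a made-up /products/<pid>/lifecycle resource and example ontology-concept URIs); it is not the platform's actual interface.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store of lifecycle records keyed by product ID,
# each record tagged with an ontology concept URI (illustrative only).
LIFECYCLE = {
    "P-001": [{"phase": "manufacturing", "concept": "http://example.org/plm#Assembly",
               "data": {"station": "A3", "timestamp": "2015-06-01T10:22:00Z"}}]
}

@app.route("/products/<pid>/lifecycle", methods=["GET"])
def get_lifecycle(pid):
    """Return all lifecycle events recorded for a product."""
    return jsonify(LIFECYCLE.get(pid, []))

@app.route("/products/<pid>/lifecycle", methods=["POST"])
def add_event(pid):
    """Append a new lifecycle event (e.g. reported by an RFID/IoT gateway)."""
    LIFECYCLE.setdefault(pid, []).append(request.get_json())
    return jsonify({"status": "stored"}), 201

if __name__ == "__main__":
    app.run(port=8080)
```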
Memory systems in the brain. | The operation of different brain systems involved in different types of memory is described. One is a system in the primate orbitofrontal cortex and amygdala involved in representing rewards and punishers, and in learning stimulus-reinforcer associations. This system is involved in emotion and motivation. A second system in the temporal cortical visual areas is involved in learning invariant representations of objects. A third system in the hippocampus is implicated in episodic memory and in spatial function. Fourth, brain systems in the frontal and temporal cortices involved in short term memory are described. The approach taken provides insight into the neuronal operations that take place in each of these brain systems, and has the aim of leading to quantitative biologically plausible neuronal network models of how each of these memory systems actually operates. |
Henry Sidgwick's Moral Epistemology | In this essay I defend the view that Henry Sidgwick’s moral epistemology is a form of intuitionist foundationalism that grants common-sense morality no evidentiary role. In §1, I outline both the problematic of The Methods of Ethics and the main elements of its argument for utilitarianism. In §§2-4 I provide my interpretation of Sidgwick’s moral epistemology. In §§5-8 I refute rival interpretations, including the Rawlsian view that Sidgwick endorses some version of reflective equilibrium and the view that he is committed to some kind of pluralistic epistemology. In §9 I contend with some remaining objections to my view.
Machine learning in bioinformatics | This article reviews machine learning methods for bioinformatics. It presents modelling methods, such as supervised classification, clustering and probabilistic graphical models for knowledge discovery, as well as deterministic and stochastic heuristics for optimization. Applications in genomics, proteomics, systems biology, evolution and text mining are also shown. |
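As a concrete instance of one method family the review covers (supervised classification), the sketch below trains and cross-validates a random-forest classifier on a synthetic gene-expression-like matrix; the data and model choice are illustrative assumptions, not taken from the article.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a gene-expression matrix: 100 samples x 500 genes,
# with a handful of informative genes separating two phenotype classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # class depends on the first 5 "genes"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```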
Landmarking Manifolds with Gaussian Processes | We present an algorithm for finding landmarks along a manifold. These landmarks provide a small set of locations spaced out along the manifold such that they capture the low-dimensional nonlinear structure of the data embedded in the high-dimensional space. The approach does not select points directly from the dataset, but instead we optimize each landmark by moving along the continuous manifold space (as approximated by the data) according to the gradient of an objective function. We borrow ideas from active learning with Gaussian processes to define the objective, which has the property that a new landmark is “repelled” by those currently selected, allowing for exploration of the manifold. We derive a stochastic algorithm for learning with large datasets and show results on several datasets, including the Million Song Dataset and articles from the New York Times. |
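The paper optimizes landmark positions continuously along the manifold via the gradient of a GP-based objective; as a simpler, dataset-restricted illustration of the underlying "repulsion" idea, the sketch below greedily picks the point with the largest GP posterior variance given the landmarks chosen so far. The kernel, lengthscale, and toy circle data are assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def select_landmarks(X, n_landmarks, lengthscale=1.0, noise=1e-6):
    """Greedy landmark selection: repeatedly pick the point with the largest
    GP posterior variance given the landmarks chosen so far (uncertainty-based repulsion)."""
    idx = [0]                                             # start from an arbitrary point
    for _ in range(n_landmarks - 1):
        L = X[idx]
        K_LL = rbf_kernel(L, L, lengthscale) + noise * np.eye(len(idx))
        K_xL = rbf_kernel(X, L, lengthscale)
        # posterior variance of an RBF GP (unit prior variance) at every candidate point
        var = 1.0 - np.einsum("ij,ji->i", K_xL, np.linalg.solve(K_LL, K_xL.T))
        var[idx] = -np.inf                                # never re-select a landmark
        idx.append(int(np.argmax(var)))
    return idx

# Toy manifold: a noisy 2-D circle embedded in 10-D space
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.zeros((500, 10))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.01 * rng.normal(size=X.shape)
print("Landmark indices:", select_landmarks(X, 8))
```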
A visual navigation system for autonomous flight of micro air vehicles | Many applications of unmanned aerial vehicles (UAVs) require the capability to navigate to some goal and to perform precise and safe landing. In this paper, we present a visual navigation system as an alternative pose estimation method for environments and situations in which GPS is unavailable. The developed visual odometer is an incremental procedure that estimates the vehicle's ego-motion by extracting and tracking visual features, using an onboard camera. For more robustness and accuracy, the visual estimates are fused with measurements from an Inertial Measurement Unit (IMU) and a Pressure Sensor Altimeter (PSA) in order to provide accurate estimates of the vehicle's height, velocity and position relative to a given location. These estimates are then exploited by a nonlinear hierarchical controller for achieving various navigation tasks such as take-off, landing, hovering, target tracking, etc. In addition to the odometer description, the paper presents validation results from autonomous flights using a small quadrotor UAV. |
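The paper's sensor-fusion filter is not specified in the abstract; as a much-simplified stand-in, the sketch below fuses a hypothetical vertical-acceleration stream with a noisy pressure-altimeter reading using a critically damped complementary filter for altitude. The gains and sensor models are illustrative assumptions, not the paper's design.

```python
import random

def fuse_altitude(accel_z, baro_alt, dt=0.01, k=0.5):
    """Minimal complementary filter: integrate IMU vertical acceleration for fast dynamics
    and correct the resulting drift with the slower pressure-altimeter reading."""
    alt, vel = baro_alt[0], 0.0
    fused = []
    for a, z_baro in zip(accel_z, baro_alt):
        vel += a * dt                      # predict with the IMU (fast but drifting)
        alt += vel * dt
        err = z_baro - alt                 # correct with the barometer (slow but drift-free)
        alt += k * err * dt                # critically damped gains: 2*zeta*omega = k, omega^2 = k^2/4
        vel += (k ** 2 / 4.0) * err * dt
        fused.append(alt)
    return fused

# Hypothetical hover at 10 m with noisy sensors
random.seed(0)
N = 2000
accel = [random.gauss(0.0, 0.05) for _ in range(N)]        # near-zero net vertical acceleration
baro = [10.0 + random.gauss(0.0, 0.3) for _ in range(N)]   # noisy altimeter around 10 m
est = fuse_altitude(accel, baro)
print("Final altitude estimate: %.2f m" % est[-1])
```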
GISMO: a Graphical Interactive Student Monitoring Tool for Course Management Systems | This paper presents GISMO, a graphical interactive student monitoring and tracking tool that extracts tracking data from an on-line course maintained with a Course Management System and generates graphical representations that course instructors can explore to examine various aspects of their distance students' activity. GISMO uses techniques from Information Visualisation to render, in an appropriate graphical manner, the complex multidimensional student tracking data provided by the Course Management System. GISMO aims to help instructors become aware of what is happening in their classes and provide better support to their learners.
Adversarial Scene Editing: Automatic Object Removal from Weak Supervision | Better shape priors improve mask accuracy and reduce false removals: moving from the no-prior case to box priors and then to class-specific shape priors from the Pascal dataset makes the masks smaller, improves the mIoU, and reduces the false removal rate. [Figure: qualitative comparison of global vs. local loss on input images.] A local real-fake loss improves the in-painting results, producing sharper, texture-rich images, compared with the smooth, blurry results obtained with the global loss.
Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling | Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks. |
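As a toy illustration of the annealed-Gibbs idea (not the paper's CRF-based system), the sketch below samples a label sequence in which a non-local label-consistency reward for repeated tokens is added to simple per-token scores, with the sampling temperature cooled over sweeps; the token scores and constants are made up.

```python
import math, random

def annealed_gibbs(tokens, labels, local_score, consistency_weight=2.0,
                   sweeps=50, t0=2.0, t_min=0.05):
    """Gibbs sampling with simulated annealing over a sequence labelling.

    local_score(i, y): local compatibility of label y at position i (stand-in for CRF potentials).
    A non-local term rewards assigning the same label to repeated occurrences of a token
    (label consistency), which plain Viterbi over local features cannot enforce.
    """
    random.seed(0)
    state = [random.choice(labels) for _ in tokens]
    T = t0
    for _ in range(sweeps):
        for i in range(len(tokens)):
            weights = []
            for y in labels:
                s = local_score(i, y)
                # non-local feature: agreement with other occurrences of the same token
                s += consistency_weight * sum(1 for j, t in enumerate(tokens)
                                              if j != i and t == tokens[i] and state[j] == y)
                weights.append(math.exp(s / T))
            z = sum(weights)
            r, acc = random.random() * z, 0.0
            for y, w in zip(labels, weights):
                acc += w
                if r <= acc:
                    state[i] = y
                    break
        T = max(t_min, T * 0.9)      # cool the temperature each sweep (annealing)
    return state

tokens = ["Smith", "said", "Smith", "visited", "Paris"]
labels = ["PER", "LOC", "O"]
# Hypothetical local scores mimicking a weak per-token classifier
prefs = {"Smith": "PER", "Paris": "LOC"}
local = lambda i, y: 1.0 if prefs.get(tokens[i], "O") == y else 0.0
print(annealed_gibbs(tokens, labels, local))
```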
Improving Generalization Performance by Switching from Adam to SGD | Despite superior training outcomes, adaptive optimization methods such as Adam, Adagrad or RMSprop have been found to generalize poorly compared to Stochastic gradient descent (SGD). These methods tend to perform well in the initial portion of training but are outperformed by SGD at later stages of training. We investigate a hybrid strategy that begins training with an adaptive method and switches to SGD when appropriate. Concretely, we propose SWATS, a simple strategy which Switches from Adam to SGD when a triggering condition is satisfied. The condition we propose relates to the projection of Adam steps on the gradient subspace. By design, the monitoring process for this condition adds very little overhead and does not increase the number of hyperparameters in the optimizer. We report experiments on several standard benchmarks such as: ResNet, SENet, DenseNet and PyramidNet for the CIFAR-10 and CIFAR-100 data sets, ResNet on the tiny-ImageNet data set and language modeling with recurrent networks on the PTB and WT2 data sets. The results show that our strategy is capable of closing the generalization gap between SGD and Adam on a majority of the tasks. |
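The exact SWATS triggering rule is not reproduced here; the sketch below is a simplified paraphrase of the idea, in PyTorch, estimating a candidate SGD learning rate from the projection of the Adam step onto the gradient (gamma = -(p·g)/(g·g)) and switching once its running average stabilizes. The tolerance, EMA constant, and training-loop shape are assumptions, and all parameters are assumed to receive gradients.

```python
import torch

def train_with_switch(model, loss_fn, data, adam_lr=1e-3, beta=0.9, tol=1e-3, epochs=5):
    """Simplified Adam-to-SGD switching loop (illustrative, not the exact SWATS rule)."""
    opt = torch.optim.Adam(model.parameters(), lr=adam_lr)
    ema, switched = None, False
    for _ in range(epochs):
        for x, y in data:                     # data: iterable of (input, target) batches
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            if not switched:
                prev = [p.detach().clone() for p in model.parameters()]
            opt.step()
            if not switched:
                g = torch.cat([p.grad.flatten() for p in model.parameters()])
                step = torch.cat([(p.detach() - q).flatten()
                                  for p, q in zip(model.parameters(), prev)])
                if torch.dot(g, g) > 0:
                    # scale gamma such that -gamma * g best matches the Adam step along g
                    gamma = -torch.dot(step, g) / torch.dot(g, g)
                    if ema is None:
                        ema = gamma
                    else:
                        ema = beta * ema + (1 - beta) * gamma
                        if abs(ema - gamma) < tol and gamma > 0:
                            opt = torch.optim.SGD(model.parameters(), lr=float(gamma))
                            switched = True   # continue training with SGD at the estimated rate
    return model
```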
Double-sided cooling and thermo-electrical management of power transients for silicon chips on DCB-substrates for converter applications: Design, technology and test | This paper deals with the system design, technology and test of a novel concept for integrating Si and SiC power dies together with thermo-electric coolers in order to manage thermal transients occurring during operation. The concept features double-sided cooling as well as new materials and joining technologies for integrating the dies, such as transient liquid phase bonding/soldering and sintering. Coupled-field simulations are used to predict the thermal performance and are verified, with very good agreement, by specially designed test stands. This paper is the second in a series of publications on the ongoing work.
Anterior dental aesthetics: Dentofacial perspective | The purpose of this series is to convey the principles governing our aesthetic senses. Usually meaning visual perception, aesthetics is not merely limited to the ocular apparatus. The concept of aesthetics encompasses both the time-arts such as music, theatre, literature and film, as well as space-arts such as paintings, sculpture and architecture. |
Dexamethasone in patients with acute lung injury from acute monocytic leukaemia. | The use of steroids is not required in myeloid malignancies and remains controversial in patients with acute lung injury (ALI) or acute respiratory distress syndrome (ARDS). We sought to evaluate dexamethasone in patients with ALI/ARDS caused by acute monocytic leukaemia (AML FAB-M5) via either leukostasis or leukaemic infiltration. Dexamethasone (10 mg every 6 h until neutropenia) was added to chemotherapy and intensive care unit (ICU) management in 20 consecutive patients between 2005 and 2008, whose data were compared with those from 20 historical controls (1994-2002). ICU mortality was the primary criterion. We also compared respiratory deterioration rates, need for ventilation and nosocomial infections. 17 (85%) patients had hyperleukocytosis, 19 (95%) had leukaemic masses, and all 20 had severe pancytopenia. All patients presented with respiratory symptoms and pulmonary infiltrates prior to AML FAB-M5 diagnosis. Compared with historical controls, dexamethasone-treated patients had a significantly lower ICU mortality rate (20% versus 50%; p = 0.04) and a trend for less respiratory deterioration (50% versus 80%; p = 0.07). There were no significant increases in the rates of infections with dexamethasone. In conclusion, in patients with ALI/ARDS related to AML FAB-M5, adding dexamethasone to conventional chemotherapy seemed effective and safe. These results warrant a controlled trial of dexamethasone versus placebo in AML FAB-M5 patients with noninfectious pulmonary infiltrates. |
Evaluating and analyzing the performance of RPL in contiki | To meet the development of the Internet of Things (IoT), the IETF has proposed IPv6 standards that work under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially RPL, the routing protocol at the heart of the protocol stack. In this work, we make an in-depth study of a popular implementation of RPL (the routing protocol for low-power and lossy networks) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and the COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Unlike previous studies, our work is the first effort spanning the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL-based WSNs. This study can also serve as a basis for future enhancements of the proposed standards.