The effect of mannitol on renal function after cardiopulmonary bypass in patients with established renal dysfunction.
The usefulness of mannitol in the priming fluid for cardiopulmonary bypass is uncertain in patients with normal renal function, and has not been studied in patients with established renal dysfunction. We studied 50 patients with serum creatinine between 130 and 250 µmol·l⁻¹ having cardiac surgery. Patients were randomised to receive mannitol 0.5 g·kg⁻¹, or an equivalent volume of Hartmann's solution, in the bypass prime. There were no differences between the groups in plasma creatinine or change in creatinine from baseline, urine output, or fluid balance over the first three postoperative days. We conclude that mannitol has no effect on routine measures of renal function during cardiac surgery in patients with established renal dysfunction.
The Mechanism and Calamitousness of Eutrophication
Eutrophication of waters is a ubiquitous worldwide phenomenon, and its hazards draw increasing attention from humankind. The mechanism and hazards of eutrophication are discussed in terms of ecology, geochemical cycles, global climate change, and human activities. Finally, the attitudes humankind should adopt and the countermeasures the world should take in the face of eutrophication are presented.
Investigation of circularly polarized loop antennas with a parasitic element for bandwidth enhancement
It is demonstrated that the bandwidth of circular polarization (CP) can be significantly increased when one more parasitic loop is added inside the original loop. A single-loop antenna has only one minimum axial ratio (AR) point, while the two-loop antenna can create two minimum AR points. An appropriate combination of the two minimum AR points results in a significant enhancement of the CP bandwidth. A comprehensive study of the new type of broad-band circularly polarized antennas is presented. Several loop configurations, including a circular loop, a rhombic loop, and a dual rhombic loop with a series feed and a parallel feed, are investigated. The AR (≤2 dB) bandwidth of the circular-loop antenna with a parasitic circular loop is found to be 20%, more than three times the AR bandwidth of a single loop. For the rhombic-loop antenna with a parasitic rhombic loop, an AR (≤2 dB) bandwidth of more than 40% can be achieved by changing the rhombus vertex angle. The AR (≤2 dB) bandwidths of the series-fed and parallel-fed dual rhombic-loop antennas with a parasitic element are 30% and 50%, respectively. A broad-band balun is incorporated into the series-fed dual rhombic-loop antenna for impedance matching. The broad-band CP performance of the loop antennas is verified by experimental results.
Protocol for the ADDITION-Plus study: a randomised controlled trial of an individually-tailored behaviour change intervention among people with recently diagnosed type 2 diabetes under intensive UK general practice care
BACKGROUND The increasing prevalence of type 2 diabetes poses both clinical and public health challenges. Cost-effective approaches to prevent progression of the disease in primary care are needed. Evidence suggests that intensive multifactorial interventions including medication and behaviour change can significantly reduce cardiovascular morbidity and mortality among patients with established type 2 diabetes, and that patient education in self-management can improve short-term outcomes. However, existing studies cannot isolate the effects of behavioural interventions promoting self-care from other aspects of intensive primary care management. The ADDITION-Plus trial was designed to address these issues among recently diagnosed patients in primary care over one year. METHODS/DESIGN ADDITION-Plus is an explanatory randomised controlled trial of a facilitator-led, theory-based behaviour change intervention tailored to individuals with recently diagnosed type 2 diabetes. 34 practices in the East Anglia region participated. 478 patients with diabetes were individually randomised to receive (i) intensive treatment alone (n = 239), or (ii) intensive treatment plus the facilitator-led individual behaviour change intervention (n = 239). Facilitators taught patients key skills to facilitate change and maintenance of key behaviours (physical activity, dietary change, medication adherence and smoking), including goal setting, action planning, self-monitoring and building habits. The intervention was delivered over one year at the participant's surgery and included a one-hour introductory meeting followed by six 30-minute meetings and four brief telephone calls. Primary endpoints are physical activity energy expenditure (assessed by individually calibrated heart rate monitoring and movement sensing), change in objectively measured dietary intake (plasma vitamin C), medication adherence (plasma drug levels), and smoking status (plasma cotinine levels) at one year. We will undertake an intention-to-treat analysis of the effect of the intervention on these measures, an assessment of cost-effectiveness, and analyse predictors of behaviour change in the cohort. DISCUSSION The ADDITION-Plus trial will establish the medium-term effectiveness and cost-effectiveness of adding an externally facilitated intervention tailored to support change in multiple behaviours among intensively-treated individuals with recently diagnosed type 2 diabetes in primary care. Results will inform policy recommendations concerning the management of patients early in the course of diabetes. Findings will also improve understanding of the factors influencing change in multiple behaviours, and their association with health outcomes.
ROC Curve, Lift Chart and Calibration Plot
This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.
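To make the threshold-sweep construction behind these charts concrete, here is a minimal numpy sketch (not from the paper; the toy data and the trapezoidal AUC estimate are illustrative assumptions) computing ROC points and lift directly from scores and labels:

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve by sweeping a decision threshold over the scores.
    labels: 1 = positive, 0 = negative.  Returns (FPR, TPR) arrays."""
    order = np.argsort(-scores)            # descending score
    labels = labels[order]
    tps = np.cumsum(labels)                # true positives at each cut
    fps = np.cumsum(1 - labels)            # false positives at each cut
    return fps / max(fps[-1], 1), tps / max(tps[-1], 1)

def lift_chart(scores, labels):
    """Lift = precision within the top-k fraction relative to the base rate."""
    labels = labels[np.argsort(-scores)]
    k = np.arange(1, len(labels) + 1)
    return k / len(labels), (np.cumsum(labels) / k) / labels.mean()

# toy usage with noisy but informative scores
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
scores = labels * 0.5 + rng.random(200) * 0.7
fpr, tpr = roc_points(scores, labels)
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal rule
print("AUC ~", round(float(auc), 3))
```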
Eye Contact Detection via Deep Neural Networks
With the presence of ubiquitous devices in our daily lives, effectively capturing and managing user attention becomes a critical device requirement. While gaze-tracking is typically employed to determine the user's focus of attention, gaze-lock detection to sense eye-contact with a device is proposed in [16]. This work proposes eye contact detection using deep neural networks, and makes the following contributions: (1) with a convolutional neural network (CNN) architecture, we achieve superior eye-contact detection performance compared to [16] with minimal data pre-processing; our algorithm is furthermore validated on multiple datasets; (2) gaze-lock detection is improved by combining head pose and eye-gaze information, consistent with the social attention literature; and (3) we demonstrate gaze-locking on an Android mobile platform via CNN model compression.
Cryptography with chaos at the physical level
In this work, we devise a chaos-based secret-key cryptography scheme for digital communication where the encryption is realized at the physical level, that is, the encrypting transformations are applied to the wave signal instead of to the symbolic sequence. The encryption process consists of transformations applied to a two-dimensional signal composed of the message-carrying signal and an encrypting signal that has to be a chaotic one. The secret key, in this case, is related to the number of times the transformations are applied. Furthermore, we show that due to its chaotic nature, the encrypting signal is able to hide the statistics of the original signal.

In this letter, we present a chaos-based cryptography scheme designed for digital communication. We depart from the traditional approach, where encrypting transformations are applied to the binary sequence (the symbolic sequence) into which the wave signal is encoded [1]. In this work, we devise a scheme where the encryption is realized at the physical level, that is, a scheme that encrypts the wave signal itself. Our chaos-based cryptographic scheme takes advantage of the complexity of a chaotic transformation. This complexity is very desirable for cryptographic schemes, since security increases with the number of possibilities of encryption for a given text unit (a letter, for example). One advantage of using a chaotic transformation is that it can be implemented at the physical level by means of a low-power deterministic electronic circuit which can easily be etched on a chip. Another advantage is that, contrary to a stochastic transformation, a chaotic one allows straightforward decryption. Moreover, as has been shown elsewhere [2], using chaotic transformations for cryptography enables one to introduce powerful analytical methods to analyze the performance of the method, besides satisfying the design axioms that guarantee security.

In order to clarify our goal and the scheme devised, we initially outline the basic ideas of our method. Given a message represented by a sequence {y_i}_{i=1}^l and a chaotic encrypting signal {x_i}_{i=1}^l, with y_i, x_i ∈ R and x_{i+1} = G(x_i), where G is a chaotic transformation, we construct an ordered pair (x_i, y_i). The i-th element of the sequence representing the encrypted message is the y-component of the ordered pair obtained from F_c^n(x_i, y_i). The function F_c : R² → R² is a chaotic transformation, and n is the number of times we apply it to the ordered pair. The n-th iterate of (x_i, y_i) has no inverse if n and x_i are unknown; that is, y_i cannot be recovered if one knows only F_c^n(x_i, y_i). As will become clear below, this changing of the initial condition is one of the factors responsible for the security of the method.

Now we describe how to obtain the sequence {y_i}_{i=1}^l by means of sampling and quantization. These methods play an essential role in digital communication, since they allow us to treat signals varying continuously in time as discrete signals. One instance of the use of continuous-time signals is the encoding of music or speech, where variations in the pressure of the air are represented by a continuous signal such as the voltage in an electric circuit. In the sampling process, a signal varying continuously in time is replaced by a set of measurements (samples) taken at instants separated by a suitable time interval provided by the sampling theorem [3,4]. The signals to which the sampling theorem applies are the band-limited ones. By a band-limited signal, we mean a function of time whose Fourier transform is null for frequencies f such that |f| ≥ W. According to the sampling theorem, it is possible to reconstruct the original signal from samples taken at times that are multiples of the sampling interval T_S ≤ 1/(2W). Thus, at the end of the sampling process, the signal is converted into a sequence {s_1, s_2, ..., s_l} of real values, which we refer to as the s-sequence.

After being sampled, the signal is quantized. In this process, the amplitude range of the signal, say the interval [a, b], is divided into N subintervals R_k = [a_k, a_{k+1}), 1 ≤ k ≤ N, with a_1 = a, a_{k+1} = a_k + d_k, and a_{N+1} = b, where d_k is the length of the k-th subinterval. To each R_k one assigns an appropriate real amplitude value q_k ∈ R_k, its middle point for example. A new sequence, the y-sequence, is generated by replacing each s_i by the q_k associated with the region R_k to which it belongs. So the y-sequence {y_1, y_2, ..., y_l} is a sequence where each y_i ∈ R takes on values from the set {q_1, ..., q_N}.

In traditional digital communication, each member of the y-sequence is encoded into a binary sequence of length log₂ N. Thus, traditional cryptographic schemes, and even recently proposed chaotic ones [1], transform this binary sequence (or any other discrete alphabet) into another binary sequence, which is then modulated and transmitted. In our proposed scheme, we transform the real value y into another real value, and then modulate this new y value in order to transmit it. This scheme deals with signals rather than with symbols, which implies that the required transformations are performed at the hardware or physical level. Instead of applying the encrypting transformations to the binary sequence, we apply them to the y-sequence, the sequence obtained by sampling and quantizing the original wave signal.

Suppose, now, that the amplitude of the wave signal is restricted to the interval [0, 1]. The first step of the process is to obtain the encrypting signal, a sequence {x_1, x_2, ..., x_l} with 0 < x_i < 1. As we will show, this signal is obtained either by sampling a chaotic signal or by a chaotic mapping. The pair (x_i, y_i) localizes a point in the unit square. In order to encrypt y_i, we apply the baker map to the point (x_i, y_i) to obtain (x_i', y_i') = (2x_i − ⌊2x_i⌋, 0.5(y_i + ⌊2x_i⌋)), where ⌊2x_i⌋ is the largest integer equal to or less than 2x_i. The encrypted signal is given by y_i', that is, 0.5(y_i + ⌊2x_i⌋). It is important to notice that y_i' can take 2N different values instead of N, since each y_i may be encoded as either 0.5·y_i < 0.5 or 0.5(y_i + 1) > 0.5, depending on whether x_i falls below or above 0.5. So, in order to digitally modulate the encrypted signal for transmission, 2N pulse amplitudes are necessary, with each binary block being encoded by two different pulses. Therefore, our method has an output format that can be straightforwardly used in digital transmissions.

Suppose, for example, that N = 2, with q_1 = 0.25 and q_2 = 0.75. If s_i < 0.5 then y_i = 0.25 and, if we use n = 1, we have y_i' = 0.125 if x_i < 0.5 or y_i' = 0.625 if x_i ≥ 0.5. On the other hand, if s_i > 0.5, then y_i = 0.75 and we have y_i' = 0.375 if x_i < 0.5 or y_i' = 0.875 if x_i ≥ 0.5. So the encrypted signal takes on values from the set {0.125, 0.375, 0.625, 0.875}, where the first and third values decrypt to 0.25 in the non-encrypted signal while the second and fourth decrypt to 0.75. In the general case, where we apply n iterations of the mapping, y_i' can assume 2^n N different values. In this case, if one wants to digitally transmit the ciphertext, one can encode every ciphertext unit using a binary block of length log₂(2^n N) and then modulate this binary stream using 2^n N pulse amplitudes. Thus, the decryption is straightforward if one knows how many times the baker map was applied during the encryption.

If the baker transformation (the function F_c) is applied n times, there are, for each plaintext unit, 2^n N possible ciphertext units. In this case, the complexity of the ciphertext, that is, its security, can have its upper bound estimated by the Shannon complexity H_S, which is the logarithm of the number of possible ciphertext units produced after the baker map has been applied n times: H_S = n log(2) + log(N). We see that n is much more important for security than N. So, if one wishes to improve security, one could implement a dynamical secret-key schedule for n. By this we mean that, based on some information of the encrypted trajectory (x_i', y_i'), the value of n could be changed whenever a plaintext unit is encrypted. If one allows only m values for n, the number of possible ciphertext units would be given by N^m ∏_{j=1}^m 2^{n_j}, and the complexity of the ciphertext would be ∑_{j=1}^m n_j log 2 + m log N, which can be very high even for small m. Thus, without knowing the number n of applications of the baker map during the encryption, decryption becomes highly improbable. In fact, n is the secret key of our cryptographic scheme, and we can think of the sequence {x_i} as a dynamical secret-key schedule for the x-component of the initial condition represented by the ordered pair (x_i, y_i).

The tools necessary to perform the security analysis are provided by information theory. In this context, information sources are modelled by random processes whose outcome may be either discrete or continuous in time. Since the major interest, and ours too, is in band-limited signals, we restrict ourselves to the discrete case, where the source is modelled by a discrete-time random process. This is a sequence {y_i}_{i=1}^l in which each y_i assumes values within the set A = {q_1, q_2, ..., q_N}. This set is called the alphabet and its elements are the letters. To each letter is assigned a probability mass function p(q_j) = P(y_i = q_j), which gives the probability with which the letter is selected for transmission. In cryptography, one deals with two messages: the plaintext …
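As an illustration, here is a minimal Python sketch of the baker-map encryption step described above. The logistic map as the generator of the x-sequence, and carrying x_i alongside y_i for the decryption demo, are assumptions made for the example; in the scheme itself the receiver regenerates the x-sequence from the shared chaotic system, and n is the secret key.

```python
import math

def baker(x, y):
    """One application of the baker map on the unit square."""
    b = math.floor(2 * x)                  # 0 if x < 0.5, else 1
    return 2 * x - b, 0.5 * (y + b)

def logistic(x):
    """Chaotic map generating the encrypting x-sequence (an assumption;
    the text only requires some chaotic signal)."""
    return 4.0 * x * (1.0 - x)

def encrypt(y_seq, x0=0.4, n=3):
    """Encrypt each quantized sample y_i by n baker-map iterations."""
    cipher, x = [], x0
    for y in y_seq:
        xi, yi = x, y
        for _ in range(n):
            xi, yi = baker(xi, yi)
        cipher.append((xi, yi))            # yi is transmitted; xi kept for the demo
        x = logistic(x)
    return cipher

def decrypt(cipher, n=3):
    """Invert the baker map n times; requires knowing n (the secret key)."""
    plain = []
    for xi, yi in cipher:
        for _ in range(n):
            b = 0 if yi < 0.5 else 1       # top/bottom half reveals the bit
            xi, yi = 0.5 * (xi + b), 2 * yi - b
        plain.append(yi)
    return plain

msg = [0.25, 0.75, 0.75, 0.25]             # N = 2 quantization levels, as in the text
enc = encrypt(msg)
print([round(y, 5) for _, y in enc])       # 2**n * N = 16 possible cipher values
print(decrypt(enc))                        # recovers 0.25 / 0.75 exactly
```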
Query by Image and Video Content: The QBIC System
Picture yourself as a fashion designer needing images of fabrics with a particular mixture of colors, a museum cataloger looking for artifacts of a particular shape and textured pattern, or a movie producer needing a video clip of a red car-like object moving from right to left with the camera zooming. How do you find these images? Even though today's technology enables us to acquire, manipulate, transmit, and store vast on-line image and video collections, the search methodologies used to find pictorial information are still limited due to difficult research problems (see the "Semantic versus nonsemantic" sidebar). Typically, these methodologies depend on file IDs, keywords, or text associated with the images. And, although powerful, they …
Time-Varying Graphs and Social Network Analysis: Temporal Indicators and Metrics
Most instruments for social network analysis (formalisms, concepts, and metrics) fail to capture network dynamics. Typical systems exhibit different scales of dynamics, ranging from the fine-grain dynamics of interactions (which recently led researchers to consider temporal versions of distance, connectivity, and related indicators) to the evolution of network properties over longer periods of time. This paper proposes a general approach to studying that evolution for both atemporal and temporal indicators, based respectively on sequences of static graphs and sequences of time-varying graphs that cover successive time-windows. All the concepts and indicators, some of which are new, are expressed using a time-varying graph formalism recently proposed in [10].
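As a small illustration of a temporal indicator computed over a sequence of static graphs covering successive time-windows, here is a toy Python sketch (my own construction, not the formalism of [10]) of earliest-arrival times, a temporal analogue of distance:

```python
def earliest_arrival(snapshots, source):
    """Earliest time-window in which each node can be reached from `source`,
    where snapshots[t] is the set of directed edges {(u, v), ...} present
    during window t: a toy temporal indicator in the spirit of the paper."""
    arrival = {source: 0}
    for t, edges in enumerate(snapshots):
        frontier = {u for u, a in arrival.items() if a <= t}
        changed = True
        while changed:                          # propagate within this window
            changed = False
            for u, v in edges:
                if u in frontier and v not in frontier:
                    frontier.add(v)
                    arrival.setdefault(v, t)
                    changed = True
    return arrival

# three windows over four nodes: the path a->b->c->d only exists in time order
snapshots = [{("a", "b")}, {("b", "c")}, {("c", "d")}]
print(earliest_arrival(snapshots, "a"))         # {'a': 0, 'b': 0, 'c': 1, 'd': 2}
```

Note that reversing the order of the snapshots would make c and d unreachable, which is exactly the property static (atemporal) indicators cannot express.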
Content-Based, Collaborative Recommendation
The problem of recommending items from some fixed database has been studied extensively, and two main paradigms have emerged. In content-based recommendation one tries to recommend items similar to those a given user has liked in the past, whereas in collaborative recommendation one identifies users whose tastes are similar to those of the given user and recommends items they have liked. Our approach in Fab has been to combine these two methods. Here, we explain how a hybrid system can incorporate the advantages of both methods while inheriting the disadvantages of neither. In addition to what one might call the "generic advantages" inherent in any hybrid system, the particular design of the Fab architecture brings two additional benefits. First, two scaling problems common to all Web services are addressed—an increasing number of users and an increasing number of documents. Second, the system automatically identifies emergent communities of interest in the user population, enabling enhanced group awareness and communications. Here we describe the two approaches for content-based and collaborative recommendation, explain how a hybrid system can be created, and then describe Fab, an implementation of such a system. For more details on both the implemented architecture and the experimental design the reader is referred to [1]. The content-based approach to recommendation has its roots in the information retrieval (IR) community, and employs many of the same techniques. Text documents are recommended based on a comparison between their content and a user profile. Data …
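A minimal sketch of the general idea of blending the two paradigms (the fixed linear blend is an assumption made for illustration; Fab itself couples the methods through shared user profiles rather than a weighting):

```python
import numpy as np

def hybrid_scores(user_profile, item_vectors, peer_ratings, alpha=0.5):
    """Blend a content-based score with a collaborative score.

    user_profile : (d,) term-weight vector built from the user's past likes
    item_vectors : (n_items, d) term-weight vectors of candidate documents
    peer_ratings : (n_peers, n_items) ratings by users with similar tastes
    alpha        : blend weight (illustrative assumption, see note above)
    """
    norm = np.linalg.norm
    # content-based: cosine similarity between profile and document vectors
    content = item_vectors @ user_profile / (
        norm(item_vectors, axis=1) * norm(user_profile) + 1e-9)
    # collaborative: consensus of like-minded users
    collab = peer_ratings.mean(axis=0)

    def squash(s):                        # rescale both signals to [0, 1]
        return (s - s.min()) / (s.max() - s.min() + 1e-9)

    return alpha * squash(content) + (1 - alpha) * squash(collab)

profile = np.array([1.0, 0.2, 0.0])               # user likes topic 0
items = np.array([[0.9, 0.1, 0.0], [0.0, 0.1, 0.9]])
peers = np.array([[4.0, 2.0], [5.0, 1.0]])        # peers also prefer item 0
print(hybrid_scores(profile, items, peers))       # item 0 ranks first
```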
Reliable Client Accounting for P2P-Infrastructure Hybrids
Content distribution networks (CDNs) have started to adopt hybrid designs, which employ both dedicated edge servers and resources contributed by clients. Hybrid designs combine many of the advantages of infrastructure-based and peer-to-peer systems, but they also present new challenges. This paper identifies reliable client accounting as one such challenge. Operators of hybrid CDNs are accountable to their customers (i.e., content providers) for the CDN's performance. Therefore, they need to offer reliable quality of service and a detailed account of content served. Service quality and accurate accounting, however, depend in part on interactions among untrusted clients. Using the Akamai NetSession client network in a case study, we demonstrate that a small number of malicious clients used in a clever attack could cause significant accounting inaccuracies. We present a method for providing reliable accounting of client interactions in hybrid CDNs. The proposed method leverages the unique characteristics of hybrid systems to limit the loss of accounting accuracy and service quality caused by faulty or compromised clients. We also describe RCA, a system that applies this method to a commercial hybrid content-distribution network. Using trace-driven simulations, we show that RCA can detect and mitigate a variety of attacks, at the expense of a moderate increase in logging overhead.
About Face 3: the essentials of interaction design
GPS Digital Tracking Loops Design for High Dynamic Launching Vehicles
This paper describes the design of digital tracking loops for GPS receivers in a high-dynamics environment, without external aiding. We adopted the loop structure of a frequency-locked loop (FLL)-assisted phase-locked loop (PLL) and designed it to track acceleration steps, such as those occurring in launching vehicles. We used a completely digital model of the loop in which the FLL and PLL parts are jointly designed, as opposed to the classical discretized analog model with separately designed FLL and PLL. The new approach does not increase the computational burden. We performed simulations and real RF signal experiments of a fixed-point implementation of the loop, showing that reliable tracking of steps up to 40 g can be achieved.
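To show the loop structure the abstract names, here is a deliberately simplified, separately tuned Python sketch of an FLL-assisted PLL tracking a complex tone. It is not the paper's jointly designed digital loop: the signal model, block length, and gains are all illustrative assumptions.

```python
import numpy as np

def fll_assisted_pll(x, fs, f0, block=1000, ki=1.0, kf=0.3):
    """Toy FLL-assisted PLL.  Per block of samples: correlate against the
    NCO replica; the atan2 of the prompt correlator is the PLL phase
    discriminator, and the block-to-block rotation of the correlator is the
    FLL frequency discriminator.  Both steer the frequency state only
    (rate-only NCO).  FLL pull-in needs |f_true - f0| < fs / (2 * block)."""
    phase, freq = 0.0, f0                    # NCO state: rad, Hz
    prev, history = 0j, []
    m = np.arange(block)
    for k in range(0, len(x) - block + 1, block):
        replica = np.exp(-1j * (2 * np.pi * freq * m / fs + phase))
        p = np.sum(x[k:k + block] * replica)             # prompt correlator
        p_err = np.arctan2(p.imag, p.real)               # phase error, rad
        if prev != 0j:                                   # FLL assist, Hz
            freq += kf * np.angle(p * np.conj(prev)) * fs / (2 * np.pi * block)
        prev = p
        freq += ki * p_err                               # PLL branch, Hz per rad
        phase = (phase + 2 * np.pi * freq * block / fs) % (2 * np.pi)
        history.append(freq)
    return np.array(history)

fs, f_true = 100_000.0, 1234.0
t = np.arange(300_000) / fs
sig = np.exp(1j * 2 * np.pi * f_true * t)                # clean tone for the demo
print(fll_assisted_pll(sig, fs, f0=1200.0)[-1])          # settles near 1234 Hz
```

The FLL branch pulls the frequency state toward the true carrier quickly and coarsely; the PLL branch then removes the residual phase error, which is why the combination tolerates dynamics that a PLL alone would lose lock on.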
Life's last domain.
In what many scientists are calling a tour de force, a team of researchers report in this issue of Science (page 1058) that they've sequenced the entire genome of the archaeon microbe Methanococcus jannaschii. With it, researchers have their first chance to compare the complete genomes of organisms from what seem to be the three major branches of life: the archaea, bacteria, and eukarya (which includes multicellular organisms such as plants and animals). A startling 44% of the archaeon's 1700-odd genes are entirely new to science. Says one researcher: "That's about half of the organism! It shows just how little we know about life."
MovieGEN: A Movie Recommendation System
In this paper we introduce MovieGEN, an expert system for movie recommendation. We implement the system using machine learning and cluster analysis based on a hybrid recommendation approach. The system takes in the users' personal information and predicts their movie preferences using well-trained support vector machine (SVM) models. Based on the SVM prediction it selects movies from the dataset, clusters the movies, and generates questions for the users. Based on the users' answers, it refines its movie set and finally recommends movies for the users. In the process we traverse the parameter space, thus enabling the customizability of the system.
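A toy sketch of the pipeline as described (an interpretation using scikit-learn, not the authors' code; the features, labels, and cluster-to-question mapping are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Step 1: SVM predicts a preference from personal information (synthetic data)
users = rng.random((200, 3))                    # e.g. age, gender, hours watched
likes_action = (users[:, 0] < 0.5).astype(int)  # hypothetical preference label
svm = SVC(kernel="rbf").fit(users, likes_action)

# Step 2: for a predicted preference, cluster candidate movies and ask one
# refining question per cluster (here we just print a cluster exemplar)
new_user = np.array([[0.3, 0.8, 0.5]])
if svm.predict(new_user)[0]:
    movies = rng.random((50, 4))                # feature vectors of the movie set
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(movies)
    for c in range(3):
        exemplar = movies[km.labels_ == c][0]
        print("cluster", c, "exemplar features:", np.round(exemplar, 2))
```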
Calogero model with Yukawa-like interaction
We study an extension of the one-dimensional Calogero model involving strongly coupled and electrically charged particles. Besides the Calogero term g/x², there is an extra factor described by a Yukawa-like coupling modeling short-distance interactions. Mimicking Calogero's analysis and using formal series developments of the wave function Ψ(x), factorized as x^ε Φ(x) with ε(ε − 1) = g, we develop a technique to approach the spectrum of the generalized system and show that information on the full spectrum is captured by Φ(x) and Φ″(x) at the singular point x = 0 of the potential. Convergence of ∫ dx |Ψ(x)|² requires ε > −1/2 and is shown to be sensitive to the zero mode of Φ(x) at x = 0.
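For readers reconstructing the setup, the pieces stated above assemble as follows. The precise form of the Yukawa-like term is an assumption made here for concreteness; the abstract only fixes the g/x² term, the factorization, and the normalizability condition.

```latex
% Hedged reconstruction (the e^{-\mu x}/x form of the Yukawa-like term is
% an assumption; only g/x^2 and the factorization are stated above):
\[
  H = -\frac{d^{2}}{dx^{2}} + \frac{g}{x^{2}}
      + \lambda\,\frac{e^{-\mu x}}{x},
  \qquad
  \Psi(x) = x^{\varepsilon}\,\Phi(x),
  \qquad
  \varepsilon(\varepsilon - 1) = g .
\]
% Near x = 0 one has |\Psi(x)|^2 \sim x^{2\varepsilon}, so
\[
  \int dx\,\lvert\Psi(x)\rvert^{2} < \infty
  \;\Longleftrightarrow\;
  2\varepsilon > -1
  \;\Longleftrightarrow\;
  \varepsilon > -\tfrac{1}{2},
\]
% which is exactly the convergence condition quoted in the abstract.
```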
Phase I and pharmacokinetic study of pazopanib and lapatinib combination therapy in patients with advanced solid tumors
This phase I, open-label, dose-escalation study assessed the maximum-tolerated dose, safety, pharmacokinetics, and preliminary antitumor activity of pazopanib plus lapatinib combination therapy in patients with solid tumors. Patients were to take pazopanib and lapatinib orally once daily in a fasting condition. During the escalation phase, pazopanib and lapatinib doses were escalated in serial patient cohorts, and a limited blood sampling scheme was applied for pharmacokinetic evaluation. In the expansion phase, potential pharmacokinetic interaction between pazopanib and lapatinib was evaluated more extensively. Seventy-five patients were treated. Multiple dosing levels were studied, combining pazopanib up to 800 mg/day with lapatinib up to 1,500 mg/day. Dose-limiting toxicities observed included grade 3 neutropenia, fatigue, asymptomatic decline in left ventricular ejection fraction, diarrhea, and liver enzyme elevations. The most common drug-related adverse events were diarrhea, nausea, anorexia, fatigue, vomiting, rash, hair depigmentation, and hypertension. The dose recommended for further evaluation was pazopanib 800 mg plus lapatinib 1,500 mg (paz-800/lap-1500). No clinically significant drug-drug interaction was observed at the paz-400/lap-1000 level. However, at paz-800/lap-1500, an increase in both the AUC0-t and Cmax of pazopanib was observed. Four partial responses were observed in patients with renal cancer (n = 2), giant-cell tumor of the bone (n = 1), and thyroid cancer (n = 1). Stable disease for ≥18 weeks was seen in 12 patients. Pazopanib and lapatinib can be administered in combination at their respective single-agent doses with an acceptable safety profile. Further evaluation of the combination will be pursued, exploring both paz-800/lap-1500 and paz-400/lap-1000.
Warpage Simulation and Optimization of Fan-Out Wafer-Level Package (FO-WLP) with TMV under Different Processes
Fan-out wafer-level packaging (FO-WLP) has become a mainstream advanced packaging technology, and from the first-generation fan-out package, a second generation of new fan-out packaging technologies, such as FO MCP, FO PoP, and PoP SiP, has gradually been derived. In FO-WLP, wafer warpage caused by the mismatch of coefficients of thermal expansion (CTE) between materials has always been an important issue. This paper addresses a FO PoP package that uses through-mold vias (TMV) to achieve the upper and lower interconnections between packages. The packaging method is chip-first and face-up. The wafer warpage in each thermal process of the package is simulated and analyzed. In addition, the effects of chip spacing, molding compound thickness, TMV diameter, and carrier thickness and material on warpage in the plastic molding process, and of copper layer thickness, copper area ratio, and dielectric layer thickness and material on warpage in the re-distribution layer (RDL) process, are studied. An effective method for reducing wafer warpage in each thermal process is proposed.
Elevated serum neutrophil to lymphocyte and platelet to lymphocyte ratios could be useful in lung cancer diagnosis.
BACKGROUND Lung cancer (LC) is still the primary cause of cancer deaths worldwide, and late diagnosis is a major obstacle to improving lung cancer outcomes. Recently, elevated preoperative or pretreatment neutrophil to lymphocyte ratio (NLR), platelet to lymphocyte ratio (PLR) and mean platelet volume (MPV) detected in peripheral blood were identified as independent prognostic factors associated with poor survival in various cancers, including colon cancer, esophageal cancer, gastric cancer and breast cancer. OBJECTIVE The aim of this study was to examine whether MPV, NLR and PLR could be useful inflammatory markers to differentiate lung cancer patients from healthy controls. The relationship between these markers and other prognostic factors and histopathological subgroups was also investigated. MATERIALS AND METHODS Eighty-one lung cancer patients and 81 age- and sex-matched healthy subjects were retrospectively included in the study. Patients with hypertension, hematological and renal disease, heart failure, chronic infection, hepatic disorders or other cancers were excluded from the study. The preoperative or pretreatment blood count data were obtained from the computerized database of records. RESULTS NLR and PLR values were significantly higher in the LC patients compared to the healthy subjects (NLR: 4.42 vs 2.45, p=0.001; PLR: 245.1 vs 148.2, p=0.002). MPV values were similar in both groups (7.7 vs 7.8). No statistically significant relationship was determined between these markers (MPV, NLR and PLR) and histopathological subgroups or TNM stages. CONCLUSIONS NLR and PLR can be useful biomarkers in LC patients before treatment. Larger prospective studies are required to confirm these findings.
High bandwidth underwater optical communication.
We report error-free underwater optical transmission measurements at 1 Gbit/s (10⁹ bits/s) over a 2 m path in a laboratory water pipe with up to 36 dB of extinction. The source at 532 nm was derived from a 1064 nm continuous-wave laser diode that was intensity modulated, amplified, and frequency doubled in periodically poled lithium niobate. Measurements were made over a range of extinction by the addition of a Mg(OH)₂ and Al(OH)₃ suspension to the water path, and we were not able to observe any evidence of temporal pulse broadening. Results of Monte Carlo simulations over ocean water paths of several tens of meters indicate that optical communication data rates >1 Gbit/s can be supported and are compatible with high-capacity data transfer applications that require no physical contact.
How visual fatigue and discomfort impact 3D-TV quality of experience: a comprehensive review of technological, psychophysical, and psychological factors
The Quality of Experience (QoE) of 3D contents is usually considered to be the combination of the perceived visual quality, the perceived depth quality, and lastly the visual fatigue and comfort. When either fatigue or discomfort are induced, studies tend to show that observers prefer to experience a 2D version of the contents. For this reason, providing a comfortable experience is a prerequisite for observers to actually consider the depth effect as a visualization improvement. In this paper, we propose a comprehensive review of visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of the physiological and psychological processes enabling depth perception. First, we review the multitude of manifestations of visual fatigue and discomfort (near-triad disorders, symptoms of discomfort), as well as means for their detection and evaluation. We then discuss how, in 3D displays, ocular and cognitive conflicts with real-world experience may cause fatigue and discomfort; these include the accommodation-vergence conflict, the inadequacy between presented stimuli and the observer's depth of focus, and the cognitive integration of conflicting depth cues. We also discuss some limits of stereopsis that constrain our ability to perceive depth, in particular the perception of planar and in-depth motion, the limited fusion range, and various stereopsis disorders. Finally, this paper discusses how the different aspects of fatigue and discomfort apply to 3D technologies and contents. We notably highlight the need for respecting a comfort zone and avoiding camera and rendering artifacts. We also discuss the influence of visual attention, exposure duration, and training. Conclusions provide guidance for best practices and future research.
Oat-bran cereal lowers serum total and LDL cholesterol in hypercholesterolemic men.
Oat bran lowers serum lipid concentrations in healthy and hyperlipidemic subjects. To determine the effects of a ready-to-eat oat-bran cereal on lipid concentrations, we fed control (corn flakes) and oat-bran cereal diets for 2 wk to 12 men with undesirably high serum total-cholesterol concentrations. Subjects were randomly assigned to one of the two diets upon admission to the metabolic ward. After completing the first diet, subjects completed 2 wk on the alternate diet. Intakes of carbohydrate, protein, fat, and cholesterol were virtually identical on the two diets. The oat-bran cereal provided 25 g oat bran/d. The oat-bran cereal diet compared with the corn flakes diet lowered serum total-cholesterol and serum LDL-cholesterol concentrations significantly by 5.4% (p < 0.05) and 8.5% (p < 0.025), respectively. Final body weights on each of the diets were similar. Ready-to-eat oat-bran cereal provides a practical means to incorporate soluble fiber into the diet to lower serum cholesterol.
Database Foundations for Scalable RDF Processing
As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase, and potentially one of the biggest challenges, for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples.

1 RDF in centralized relational databases

The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiently evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes:

1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics,
2. Vertically partitioned tables that maintain one table for each property, and
3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented.

In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how RDF triples are mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems have been proposed that build on non-relational technologies; we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of the underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually handled in an additional layer on top. We will explain the different storage variants with the running example from Figure 1: some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph.
Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1.

Fig. 1. Running example for RDF data:
<Katja,teaches,Databases>
<Katja,works_for,MPI Informatics>
<Katja,PhD_from,TU Ilmenau>
<Martin,teaches,Databases>
<Martin,works_for,MPI Informatics>
<Martin,PhD_from,Saarland University>
<Ralf,teaches,Information Retrieval>
<Ralf,PhD_from,Saarland University>
<Ralf,works_for,Saarland University>
<Saarland University,located_in,Germany>
<MPI Informatics,located_in,Germany>
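As a concrete illustration of the first class (triple stores), here is a small sqlite sketch that loads part of the running example into a single relation with two of the usual permutation indexes and answers a SPARQL-style basic graph pattern as a self-join. The index choice and the particular query are illustrative assumptions, not a description of any specific system:

```python
import sqlite3

# One table holds all facts; indexes on (p, o) and (s, p) support lookups.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
db.execute("CREATE INDEX idx_po ON triples (p, o)")
db.execute("CREATE INDEX idx_sp ON triples (s, p)")
db.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("Katja", "teaches", "Databases"),
    ("Katja", "works_for", "MPI Informatics"),
    ("Martin", "teaches", "Databases"),
    ("Martin", "works_for", "MPI Informatics"),
    ("Ralf", "works_for", "Saarland University"),
    ("MPI Informatics", "located_in", "Germany"),
    ("Saarland University", "located_in", "Germany"),
])

# SPARQL pattern:
#   SELECT ?x WHERE { ?x teaches Databases . ?x works_for ?u . ?u located_in Germany }
# Each triple pattern becomes one scan of the triples table, joined on the
# shared variables ?x and ?u.
rows = db.execute("""
    SELECT t1.s
    FROM triples t1
    JOIN triples t2 ON t2.s = t1.s AND t2.p = 'works_for'
    JOIN triples t3 ON t3.s = t2.o AND t3.p = 'located_in' AND t3.o = 'Germany'
    WHERE t1.p = 'teaches' AND t1.o = 'Databases'
""").fetchall()
print(rows)   # Katja and Martin
```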
Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review
The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction. Index Terms: vision-based gesture recognition, gesture analysis, hand tracking, nonrigid motion analysis, human-computer interaction.
Validation of a method to replace frozen section during parathyroid exploration by using the rapid parathyroid hormone assay on parathyroid aspirates.
HYPOTHESIS The parathyroid hormone (PTH) content of tissue aspirates is an accurate indicator of parathyroid tissue and can replace frozen section during parathyroid surgery. DESIGN AND SETTINGS Prospective data collection in a tertiary care hospital with a single surgeon. PATIENTS One hundred sixty-seven consecutive patients completing limited parathyroid explorations. INTERVENTIONS Parathyroid adenomas removed during limited parathyroid exploration were aspirated through a 22-gauge needle into 0.5 mL of isotonic sodium chloride solution, and the solution was held on ice in a purple-top tube. Aspirates of in situ thyroid tissue were also taken for comparison. Samples were then assessed to monitor the physiologic impact of the surgery. MAIN OUTCOME MEASUREMENTS The PTH content of tissue aspirates was compared with histologic identification of removed putative parathyroid tissue. RESULTS Elevated tissue PTH content was associated with identification of the tissue of origin as parathyroid in every case. Tissue aspirates from pathologically proven parathyroid tissue had a mean PTH level of at least 1691 pg/mL, with 160 having values exceeding the upper limit of the assay. This measure was significantly higher than values obtained from thyroid aspirates, which had a mean PTH level of 88 pg/mL (P<.01), reflective of blood levels at the time of aspiration. Using the 99% confidence interval of histologically confirmed parathyroid glands as the lower limit of a positive test result, at 1610 pg/mL, the tissue aspirate PTH assay has a sensitivity of 97% and a specificity of 100%. CONCLUSIONS Positive identification of removed tissue as parathyroid is a necessary adjunct to limited parathyroid exploration, where decreases in false-positive blood PTH levels can result from operative manipulation of the neck. Analysis of tissue PTH content with the same assay that is being used for assessing blood PTH concentration is an efficient and accurate method for identifying the tissue with certainty. This measure also prevents the occasional ambiguity in frozen sections of parathyroid tissue that apparently contain thyroid-like colloid material.
Approximate computing: Challenges and opportunities
Approximate computing is gaining traction as a computing paradigm for data analytics and cognitive applications that aim to extract deep insight from vast quantities of data. In this paper, we demonstrate that multiple approximation techniques can be applied to applications in these domains and can be further combined together to compound their benefits. In assessing the potential of approximation in these applications, we took the liberty of changing multiple layers of the system stack: architecture, programming model, and algorithms. Across a set of applications spanning the domains of DSP, robotics, and machine learning, we show that hot loops in the applications can be perforated by an average of 50% with a proportional reduction in execution time, while still producing acceptable quality of results. In addition, the width of the data used in the computation can be reduced to 10-16 bits from the currently common 32/64 bits with potential for significant performance and energy benefits. For parallel applications we reduced execution time by 50% using relaxed synchronization mechanisms. Finally, our results also demonstrate that benefits compound when these techniques are applied concurrently. Our results across different applications demonstrate that approximate computing is a widely applicable paradigm with potential for compounded benefits from applying multiple techniques across the system stack. In order to exploit these benefits it is essential to re-think multiple layers of the system stack to embrace approximations ground-up and to design tightly integrated approximate accelerators. Doing so will enable moving the applications into a world in which the architecture, programming model, and even the algorithms used to implement the application are all fundamentally designed for approximate computing.
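A minimal sketch of two of the techniques named above, loop perforation combined with narrow 16-bit data (the image-mean task and the stride are invented for illustration; note that a stride of 2 over both axes skips 75% of the work, more aggressive than the 50% perforation reported):

```python
import numpy as np

def perforated_mean(img, stride=2):
    """Loop-perforation sketch: visit every `stride`-th row and column
    instead of all pixels, cutting work by ~1 - 1/stride**2 while the
    answer stays close for statistically smooth inputs."""
    # accumulate in float32: summing many float16 values in float16 overflows
    return img[::stride, ::stride].mean(dtype=np.float32)

rng = np.random.default_rng(0)
img = rng.random((1024, 1024)).astype(np.float16)   # narrow 16-bit data, as above
exact = img.astype(np.float64).mean()
approx = perforated_mean(img)
print(f"exact={exact:.4f}  approx={approx:.4f}  rel err={abs(approx - exact) / exact:.2%}")
```

The quality-of-result tradeoff is visible directly: the relative error stays small while only a quarter of the pixels are ever touched.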
Longitudinal Alveolar Bone Loss in Postmenopausal Osteoporotic/Osteopenic Women
The purpose of this 2-year longitudinal clinical study was to investigate alveolar (oral) bone height and density changes in osteoporotic/osteopenic women compared with women with normal lumbar spine bone mineral density (BMD). Thirty-eight postmenopausal women completed this study; 21 women had normal BMD of the lumbar spine, while 17 women had osteoporosis or osteopenia of the lumbar spine at baseline. All subjects had a history of periodontitis and participated in 3- to 4-month periodontal maintenance programs. No subjects were current smokers. All patients were within 5 years of menopause at the start of the study. Four vertical bitewing radiographs of posterior sextants were taken at baseline and 2-year visits. Radiographs were examined using computer-assisted densitometric image analysis (CADIA) for changes in bone density at the crestal and subcrestal regions of interproximal bone. Changes in alveolar bone height were also measured. Radiographic data were analyzed by the t-test for two independent samples. Osteoporotic/osteopenic women exhibited a higher frequency of alveolar bone height loss (p<0.05) and crestal (p<0.025) and subcrestal (p<0.03) density loss relative to women with normal BMD. Estrogen deficiency was associated with increased frequency of alveolar bone crestal density loss in the osteoporotic/osteopenic women and in the overall study population (p<0.05). These data suggest that osteoporosis/osteopenia and estrogen deficiency are risk factors for alveolar bone density loss in postmenopausal women with a history of periodontitis.
Clickthrough-based latent semantic models for web search
This paper presents two new document ranking models for Web search based upon the methods of semantic representation and the statistical translation-based approach to information retrieval (IR). Assuming that a query is parallel to the titles of the documents clicked on for that query, large amounts of query-title pairs are constructed from clickthrough data; two latent semantic models are learned from this data. One is a bilingual topic model within the language modeling framework. It ranks documents for a query by the likelihood of the query being a semantics-based translation of the documents. The semantic representation is language independent and learned from query-title pairs, with the assumption that a query and its paired titles share the same distribution over semantic topics. The other is a discriminative projection model within the vector space modeling framework. Unlike Latent Semantic Analysis and its variants, the projection matrix in our model, which is used to map from term vectors into semantic space, is learned discriminatively such that the distance between a query and its paired title, both represented as vectors in the projected semantic space, is smaller than that between the query and the titles of other documents which have no clicks for that query. These models are evaluated on the Web search task using a real-world data set. Results show that they significantly outperform their corresponding baseline models, which are state-of-the-art.
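A toy numpy sketch of the discriminative-projection idea described above: learn a projection W so that a query lands closer to its clicked title than to an unclicked one. The plain triplet hinge loss, gradient descent, and synthetic data are assumptions for illustration; the paper's actual training objective, features, and scale differ.

```python
import numpy as np

def train_projection(queries, pos_titles, neg_titles,
                     dim=16, lr=0.05, margin=1.0, epochs=200):
    """Learn W minimizing hinge(margin + d(Wq, Wt+) - d(Wq, Wt-)),
    i.e. pull clicked titles toward their queries, push unclicked away."""
    d = queries.shape[1]
    W = np.random.default_rng(0).normal(scale=0.1, size=(dim, d))
    for _ in range(epochs):
        for q, tp, tn in zip(queries, pos_titles, neg_titles):
            pq, pp, pn = W @ q, W @ tp, W @ tn
            loss = margin + np.sum((pq - pp) ** 2) - np.sum((pq - pn) ** 2)
            if loss > 0:                       # violated triplet: take a step
                grad = 2 * (np.outer(pq - pp, q - tp) - np.outer(pq - pn, q - tn))
                W -= lr * grad
    return W

# synthetic clickthrough data: clicked titles resemble their queries
rng = np.random.default_rng(1)
q = rng.normal(size=(100, 30))
tp = q + rng.normal(scale=0.3, size=(100, 30))   # clicked (positive) titles
tn = rng.normal(size=(100, 30))                  # unclicked (negative) titles
W = train_projection(q, tp, tn)
d_pos = np.linalg.norm(W @ q[0] - W @ tp[0])
d_neg = np.linalg.norm(W @ q[0] - W @ tn[0])
print(d_pos < d_neg)                             # True: clicked title is closer
```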
Seasonal Asset Allocation: Evidence from Mutual Fund Flows
A large body of research studies the relation between past mutual fund performance and investor flows. In this paper, we document a strong seasonality in mutual fund flows. While some of this seasonality is related to other influences, we find a strong relation between investor flows and seasonally varying risk-aversion that is consistent with some investors experiencing seasonal affective disorder (SAD). Specifically, consistent with seasonally varying risk-aversion, our paper shows that substantial money moves from equity to government money market mutual funds in the fall, then back to equity funds in the spring, controlling for the influence of past performance, advertising, and capital gains overhang on fund flows. While prior evidence regarding the influence of SAD relies on seasonal patterns in the returns of asset classes, our paper provides the first direct trade-related evidence. Several prior papers have uncovered strong predictability in mutual fund flows. For example, individuals invest heavily in funds with the highest prior-year returns, and disinvest weakly from funds with the lowest returns (Sirri and Tufano, 1998; Chevalier and Ellison, 1997; and Lynch and Musto, 2003). This return-chasing behavior indicates that individuals infer investment management quality from past performance, especially for past winning funds. More recently, Ben-Rephael, Kandel, and Wohl (2010a) show that net exchanges of money from US bond to US equity funds (exchanges are movements of money between funds within a single fund family) are strongly positively correlated with past equity market excess returns, with a strong return reversal in equities during the following several months. Their findings support the view that fund exchanges reflect small investor sentiment. Indro (2004) also finds evidence consistent with equity fund flows being driven by investor sentiment. Further, Ben-Rephael, Kandel, and Wohl (2010b) examine daily equity fund flows in Israel and find strong autocorrelation in mutual fund flows as well as a strong correlation of flows with lagged market returns, which appear to create a temporary price pressure. Investors also react strongly to advertising of funds (Jain and Wu, 2000; and Gallaher, Kaniel, and Starks, 2006) and to other information that helps to reduce search costs (Huang, Wei, and Yan, 2007). In turn, the mutual fund industry spends more than half a billion dollars a year on advertising to attract investment inflows (see Pozen, 2002). The benefits of attracting capital flows for mutual fund management companies are clear: in 2008, fund shareholders in the United States paid fees and expenses of 1.02 percent on equity funds and 0.79 percent on bond funds, with 6.5 and 1.7 trillion dollars under management in all US-domiciled equity and bond mutual funds, respectively (Investment Company Institute, 2008). In this study, we document a heretofore unknown seasonality in mutual fund flows. Some of this seasonality is related to seasonalities in advertising expenditures, tax avoidance, and liquidity needs. However, we show that flows to funds, controlling for these influences (and others), remain strongly dependent on the season as well as the riskiness of the fund. Investors move money into (relatively) safe funds during the fall, and into risky funds during the spring. This seasonality coincides in timing with risk-aversion driven by a medical condition known as seasonal affective disorder, or SAD, which is a seasonal form of depression. Medical evidence firmly demonstrates that as the number of hours of daylight drops in the fall, up to ten percent of the population suffers from clinical depression associated with SAD, depending on the location and the methodology employed by the researchers. (Some studies find the incidence of SAD is as high as ten percent, such as Rosen et al.'s (1990) analysis of a sample in New Hampshire, while others find it is below two percent, such as Rosen et al.'s study of a sample in Florida.) The …
Best Keyword Cover Search
It is common that the objects in a spatial database (e.g., restaurants/hotels) are associated with keyword(s) to indicate their businesses/services/features. An interesting problem known as Closest Keywords search is to query objects, called a keyword cover, which together cover a set of query keywords and have the minimum inter-object distance. In recent years, we observe the increasing availability and importance of keyword ratings in object evaluation for better decision making. This motivates us to investigate a generic version of Closest Keywords search, called Best Keyword Cover, which considers inter-object distance as well as the keyword rating of objects. The baseline algorithm is inspired by the methods of Closest Keywords search, which are based on exhaustively combining objects from different query keywords to generate candidate keyword covers. When the number of query keywords increases, the performance of the baseline algorithm drops dramatically as a result of the massive number of candidate keyword covers generated. To address this drawback, this work proposes a much more scalable algorithm called keyword nearest neighbor expansion (keyword-NNE). Compared to the baseline algorithm, the keyword-NNE algorithm significantly reduces the number of candidate keyword covers generated. In-depth analysis and extensive experiments on real data sets have justified the superiority of our keyword-NNE algorithm.
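A brute-force sketch of the baseline described above, combining one object per query keyword and scoring each candidate cover. The exact function combining distance and rating is not given in the abstract, so the linear trade-off below is an assumption; what the sketch does show is the exponential blow-up in candidates that keyword-NNE is designed to avoid.

```python
from itertools import product

def best_keyword_cover(objects, query_keywords):
    """Exhaustive baseline: objects are (x, y, keyword, rating) tuples.
    Enumerates one object per keyword (exponential in #keywords)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    groups = [[o for o in objects if o[2] == kw] for kw in query_keywords]
    best, best_score = None, None
    for cover in product(*groups):                 # every candidate keyword cover
        diameter = max(dist(a, b) for a in cover for b in cover)
        rating = min(o[3] for o in cover)          # cover is only as good as its worst object
        score = rating - diameter                  # illustrative linear trade-off
        if best_score is None or score > best_score:
            best, best_score = cover, score
    return best

objects = [(0, 0, "hotel", 4.5), (5, 5, "hotel", 5.0),
           (0, 1, "restaurant", 4.0), (5, 4, "restaurant", 3.0)]
print(best_keyword_cover(objects, ["hotel", "restaurant"]))
# -> the (0,0) hotel with the (0,1) restaurant: close together and well rated
```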
Uproar and Disorder? The Impact of Bible Christianity on the Communities of Nineteenth Century North Devon
Abstract A considerable part of the rural depopulation of North Devon in the first half of the 19th century was attributable to emigration. Examination of individual emigrants shows that the majority were not stereotypical, impoverished agricultural labourers but more substantial tradesmen, many of whom adhered to the Bible Christian faith. Although followers were described as being 'almost without exception of the lowest classes of society' (Wickes 1987: 38) and were accused of behaviour that was 'uproarious and disorderly' (North Devon Journal (NDJ), 9 December 1858), both are statements that warrant scrutiny. This article considers the impact of Bible Christianity on the inhabitants of the hamlet of Bucks Mills and the parish of Buckland Brewer. Using case studies of individuals from these areas, it investigates the characteristics of early nineteenth century Bible Christian emigrants and examines the role of their faith as a motivating factor for migration.
Object-Level Video Advertising: An Optimization Framework
In this paper, we present new models and algorithms for object-level video advertising. A framework that aims to embed content-relevant ads within a video stream is investigated in this context. First, a comprehensive optimization model is designed to minimize intrusiveness to viewers when ads are inserted in a video. For human clothing advertising, we design a deep convolutional neural network using face features to recognize human genders in a video stream. Human parts alignment is then implemented to extract human part features that are used for clothing retrieval. Second, we develop a heuristic algorithm to solve the proposed optimization problem. For comparison, we also employ the genetic algorithm to find solutions approaching the global optimum. Our novel framework is examined in various types of videos. Experimental results demonstrate the effectiveness of the proposed method for object-level video advertising.
Voice-based assessments of trustworthiness, competence, and warmth in blind and sighted adults
The study of voice perception in congenitally blind individuals allows researchers rare insight into how a lifetime of visual deprivation affects the development of voice perception. Previous studies have suggested that blind adults outperform their sighted counterparts in low-level auditory tasks testing spatial localization and pitch discrimination, as well as in verbal speech processing; however, blind persons generally show no advantage in nonverbal voice recognition or discrimination tasks. The present study is the first to examine whether visual experience influences the development of social stereotypes that are formed on the basis of nonverbal vocal characteristics (i.e., voice pitch). Groups of 27 congenitally or early-blind adults and 23 sighted controls assessed the trustworthiness, competence, and warmth of men and women speaking a series of vowels, whose voice pitches had been experimentally raised or lowered. Blind and sighted listeners judged both men's and women's voices with lowered pitch as being more competent and trustworthy than voices with raised pitch. In contrast, raised-pitch voices were judged as being warmer than were lowered-pitch voices, but only for women's voices. Crucially, blind and sighted persons did not differ in their voice-based assessments of competence or warmth, or in their certainty of these assessments, whereas the association between low pitch and trustworthiness in women's voices was weaker among blind than sighted participants. This latter result suggests that blind persons may rely less heavily on nonverbal cues to trustworthiness compared to sighted persons. Ultimately, our findings suggest that robust perceptual associations that systematically link voice pitch to the social and personal dimensions of a speaker can develop without visual input.
On Policies to Reward the Value Added by Educators
One current educational reform seeks to reward the value added by teachers and schools based on the average change in pupil test scores over time. In this paper, we outline the conditions under which the average change in scores is sufficient to rank schools in terms of value added. A key condition is that socioeconomic outcomes be a linear function of test scores. Absent this condition, one can still derive the optimal value-added policy if one knows the relationship between test scores and socioeconomic outcomes, and the distribution of test scores both before and after the intervention. Using the National Longitudinal Survey of Youth, we find a nonlinear relationship between test scores and one important outcome: log wages. We find no consistent pattern in the curvature of log wage returns to test scores (whether percentiles, scaled, or raw scores). This implies that, used alone, the average gain in test scores is an inadequate measure of school performance and current value-added methodology may misdirect school resources.
Sparse Additive Subspace Clustering
In this paper, we introduce and investigate a sparse additive model for subspace clustering problems. Our approach, named SASC (Sparse Additive Subspace Clustering), is essentially a functional extension of the Sparse Subspace Clustering (SSC) of Elhamifar & Vidal [7] to the additive nonparametric setting. To make our model computationally tractable, we express SASC in terms of a finite set of basis functions, and thus the formulated model can be estimated via solving a sequence of grouped Lasso optimization problems. We provide theoretical guarantees on the subspace recovery performance of our model. Empirical results on synthetic and real data demonstrate the effectiveness of SASC for clustering noisy data points into their original subspaces.
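A rough sketch of the SASC recipe under stated assumptions: each data point is regressed on elementwise polynomial basis expansions of the other points, with a group lasso penalty so that entire points enter or leave the representation together. The basis choice, solver, and penalty weight here are illustrative; the paper's formulation is richer.

```python
# Group-lasso-style self-expression for one point, solved by proximal
# gradient descent. The polynomial basis is an assumed stand-in for the
# paper's basis functions.
import numpy as np

def design_matrix(X, j, degree=3):
    """Columns: elementwise powers of every other point; one group per point."""
    cols, groups, idx = [], [], 0
    for k in range(X.shape[1]):
        if k == j:
            continue
        for d in range(1, degree + 1):
            cols.append(X[:, k] ** d)
        groups.append(np.arange(idx, idx + degree))
        idx += degree
    return np.stack(cols, axis=1), groups

def group_lasso(Phi, y, groups, lam=0.5, iters=500):
    """Minimize 0.5 * ||y - Phi w||^2 + lam * sum_g ||w_g||_2."""
    w = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # 1 / Lipschitz constant
    for _ in range(iters):
        z = w - step * (Phi.T @ (Phi @ w - y))     # gradient step
        for g in groups:                           # block soft-thresholding
            norm_g = np.linalg.norm(z[g])
            z[g] = max(0.0, 1 - step * lam / (norm_g + 1e-12)) * z[g]
        w = z
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 30))                      # 10 dims, 30 points
Phi, groups = design_matrix(X, j=0)                # express point 0 by the rest
w = group_lasso(Phi, X[:, 0], groups)
# Nonzero groups define the (soft) neighbourhood used to build the affinity.
active = [g for g, idx in enumerate(groups) if np.linalg.norm(w[idx]) > 1e-6]
print(active)
```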
System level analysis of fast, per-core DVFS using on-chip switching regulators
Portable, embedded systems place ever-increasing demands on high-performance, low-power microprocessor design. Dynamic voltage and frequency scaling (DVFS) is a well-known technique to reduce energy in digital systems, but the effectiveness of DVFS is hampered by slow voltage transitions that occur on the order of tens of microseconds. In addition, the recent trend towards chip-multiprocessors (CMP) executing multi-threaded workloads with heterogeneous behavior motivates the need for per-core DVFS control mechanisms. Voltage regulators that are integrated onto the same chip as the microprocessor core provide the benefit of both nanosecond-scale voltage switching and per-core voltage control. We show that these characteristics provide significant energy-saving opportunities compared to traditional off-chip regulators. However, the implementation of on-chip regulators presents many challenges including regulator efficiency and output voltage transient characteristics, which are significantly impacted by the system-level application of the regulator. In this paper, we describe and model these costs, and perform a comprehensive analysis of a CMP system with on-chip integrated regulators. We conclude that on-chip regulators can significantly improve DVFS effectiveness and lead to overall system energy savings in a CMP, but architects must carefully account for overheads and costs when designing next-generation DVFS systems and algorithms.
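A back-of-envelope model of why per-core regulation helps, using the standard dynamic-power relation E = C·V²·f·t and an assumed conversion efficiency for the on-chip regulator; all numbers below are illustrative, not measurements from the paper.

```python
# Per-core vs. chip-wide DVFS energy under a toy workload, with a
# regulator efficiency penalty for the on-chip case.
V_F = {1.0: 0.8, 2.0: 1.0}  # frequency (GHz) -> supply voltage (V), assumed

def dynamic_energy(freq_ghz, volts, seconds, c_eff=1e-9):
    """E = C * V^2 * f * t (switching energy only)."""
    return c_eff * volts**2 * freq_ghz * 1e9 * seconds

# Four cores alternate between busy (needs 2 GHz) and idle (1 GHz suffices);
# each phase lasts 1 ms.
phases = [[2.0, 1.0, 1.0, 1.0], [1.0, 2.0, 1.0, 1.0]]

def chip_wide(phases):
    # One shared rail: every core runs at the maximum demanded frequency.
    return sum(dynamic_energy(max(p), V_F[max(p)], 1e-3) * len(p) for p in phases)

def per_core(phases, regulator_eff=0.85):
    # Each core gets exactly its demanded V/f; pay the conversion loss.
    e = sum(dynamic_energy(f, V_F[f], 1e-3) for p in phases for f in p)
    return e / regulator_eff

print(f"chip-wide: {chip_wide(phases)*1e3:.2f} mJ, "
      f"per-core: {per_core(phases)*1e3:.2f} mJ")
```

Even with the 15% assumed conversion loss, the per-core scheme wins here because idle cores drop to the low voltage immediately, which is the energy-saving opportunity the paper quantifies.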
Fine Motor Activities Program to Promote Fine Motor Skills in a Case Study of Down's Syndrome.
Children with Down's syndrome have developmental delays, particularly regarding cognitive and motor development. Fine motor skill problems are related to motor development. They affect occupational performance in school-age children with Down's syndrome because they relate to participation in school activities, such as grasping, writing, and carrying out self-care duties. This study aimed to develop a fine motor activities program and to examine its efficiency in promoting fine motor skills in a case study of Down's syndrome. The case study subject was an 8-year-old male called Kai, who had Down's syndrome. He was a first grader in a regular school that provided classrooms for students with special needs. This study used the fine motor activities program with assessment tools that included 3 subtests of the Bruininks-Oseretsky Test of Motor Proficiency, second edition (BOT-2), covering upper-limb coordination, fine motor precision, and manual dexterity, as well as the In-hand Manipulation Checklist and the Jamar Hand Dynamometer Grip Test. The fine motor activities program was implemented separately and consisted of 45 activities delivered in 3 sessions per week for 5 weeks, with each session taking 45 minutes. The results showed obvious improvement of fine motor skills, including bilateral hand coordination, hand prehension, manual dexterity, in-hand manipulation, and hand muscle strength. This positive result provides an example of a fine motor intervention program designed and developed to help therapists and related service providers choose activities that enhance fine motor skills in children with Down's syndrome.
Thermal degradation of DRAM retention time: Characterization and improving techniques
The variation of DRAM retention time and the reliability problems induced by thermal stress were investigated. Most of the DRAM cells exhibited 2-state retention time behaviour under thermal stress. The effects of the hydrogen annealing condition and of fluorine implantation on the variation of retention time and on reliability are discussed.
Applications of big data to smart cities
Many governments are considering adopting the smart city concept in their cities and implementing big data applications that support smart city components to reach the required level of sustainability and improve the living standards. Smart cities utilize multiple technologies to improve the performance of health, transportation, energy, education, and water services leading to higher levels of comfort of their citizens. This involves reducing costs and resource consumption in addition to more effectively and actively engaging with their citizens. One of the recent technologies that has a huge potential to enhance smart city services is big data analytics. As digitization has become an integral part of everyday life, data collection has resulted in the accumulation of huge amounts of data that can be used in various beneficial application domains. Effective analysis and utilization of big data is a key factor for success in many business and service domains, including the smart city domain. This paper reviews the applications of big data to support smart cities. It discusses and compares different definitions of the smart city and big data and explores the opportunities, challenges and benefits of incorporating big data applications for smart cities. In addition it attempts to identify the requirements that support the implementation of big data applications for smart city services. The review reveals that several opportunities are available for utilizing big data in smart cities; however, there are still many issues and challenges to be addressed to achieve better utilization of this technology.
Deep Asymmetric Pairwise Hashing
Recently, deep neural network-based hashing methods have greatly improved multimedia retrieval performance by simultaneously learning feature representations and binary hash functions. Inspired by the latest advances in asymmetric hashing schemes, in this work we propose a novel Deep Asymmetric Pairwise Hashing approach (DAPH) for supervised hashing. The core idea is that two deep convolutional models are jointly trained such that their output codes for a pair of images reveal the similarity indicated by the images' semantic labels. A pairwise loss is elaborately designed to preserve the pairwise similarities between images while incorporating the independence and balance hash-code learning criteria. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to jointly optimize the asymmetric deep hash functions and the high-quality binary codes. Experiments on three image benchmarks show that DAPH achieves state-of-the-art performance on large-scale image retrieval.
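The following sketch shows the general shape of such an asymmetric pairwise objective on relaxed codes, with balance (zero-mean bits) and independence (decorrelated bits) penalties; DAPH's exact loss terms and weights may differ.

```python
# Asymmetric pairwise hashing loss on relaxed codes from two networks.
# Weights gamma/mu and the relaxation are assumptions for illustration.
import numpy as np

def daph_style_loss(U, V, S, gamma=0.1, mu=0.1):
    """U, V: (n, k) relaxed codes in [-1, 1] from the two nets;
    S: (n, n) matrix with +1 for similar pairs, -1 for dissimilar."""
    n, k = U.shape
    fit = np.sum((U @ V.T / k - S) ** 2)            # pairwise similarity fit
    balance = np.sum(U.mean(axis=0) ** 2) + np.sum(V.mean(axis=0) ** 2)
    eye = np.eye(k)
    indep = (np.linalg.norm(U.T @ U / n - eye) ** 2 +
             np.linalg.norm(V.T @ V / n - eye) ** 2)
    return fit + gamma * balance + mu * indep

rng = np.random.default_rng(0)
U = np.tanh(rng.normal(size=(8, 16)))               # net 1 outputs
V = np.tanh(rng.normal(size=(8, 16)))               # net 2 outputs
labels = rng.integers(0, 2, size=8)
S = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
print(daph_style_loss(U, V, S))
```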
Leadership Challenges in the Implementation of ICT in Public Secondary Schools, Kenya.
Many authors argue that school leadership determines how Information Communication Technology (ICT) is implemented and its subsequent impact on teaching and learning. This requires the principal, as school leader, to lead the implementation. A positive attitude of the school leader towards ICT implementation encourages the school community to become actively involved in it. Kenya is in the process of implementing ICT in schools; however, many challenges hinder effective ICT implementation, including the challenge of school leadership. This paper reports that school leaders' interest in, commitment to, and championing of ICT programs in schools positively influence the whole process. The paper recommends that all school leaders consider using ICT in their day-to-day activities of running their schools, and that ICT curriculum and managerial skills be incorporated into the training of school leaders in Kenya. Implementation of ICT is becoming more important to schools, and the success of such implementation is often due to the presence of effective school leadership. To a large extent, school leaders have been relying on government and development partners to equip schools with ICT infrastructure. This paper recommends that, besides sensitizing development partners and waiting for their contributions, school leadership should consider ICT a priority in school and allocate budgets that would promote its implementation. A descriptive survey was used to collect data by administering questionnaires to a selected sample of ICT/curriculum teachers, principals, and Board of Governors (BOG) chairpersons from 105 public secondary schools in Meru County, Kenya.
Water Quality Monitoring System Using IOT
Water is an essential element for human survival, and therefore mechanisms must be put in place to test the quality of drinking water in real time. This paper proposes a water quality monitoring system using IoT. The system consists of several physiochemical sensors that measure physical and chemical parameters of the water, such as temperature, turbidity, pH, and flow; through these sensors, water contaminants are detected. The sensor values are processed by a Raspberry Pi and sent to the cloud. The sensed data are made visible on the cloud using cloud computing, and the flow of water in the pipeline is controlled through IoT.
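A hedged sketch of the sensing loop this describes: poll the four sensors, flag out-of-range readings, and POST the sample to a cloud endpoint. The read_sensors stub and CLOUD_URL are placeholders; a real Raspberry Pi deployment would use the ADC/GPIO libraries for its specific sensors.

```python
# Sensing loop sketch: read -> check thresholds -> push to cloud.
import json, time, urllib.request

THRESHOLDS = {"ph": (6.5, 8.5), "turbidity_ntu": (0.0, 5.0),
              "temp_c": (0.0, 35.0), "flow_lpm": (0.0, 50.0)}
CLOUD_URL = "https://example.invalid/water/readings"  # placeholder endpoint

def read_sensors():
    """Stand-in for reading the pH, turbidity, temperature and flow sensors."""
    return {"ph": 7.1, "turbidity_ntu": 1.2, "temp_c": 22.5, "flow_lpm": 9.8}

def alerts(reading):
    return [k for k, (lo, hi) in THRESHOLDS.items() if not lo <= reading[k] <= hi]

while True:                                  # runs as a simple daemon
    sample = read_sensors()
    sample["alerts"] = alerts(sample)
    req = urllib.request.Request(CLOUD_URL, data=json.dumps(sample).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass                                 # a real system would buffer/retry
    time.sleep(60)                           # one sample per minute
```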
Time series forecasting using Artificial Neural Networks vs. evolving models
Time series forecasting plays an important role in many fields such as economics, finance, business intelligence, the natural sciences, and the social sciences. The forecasting task can be tackled with different techniques, such as statistical methods or Artificial Neural Networks (ANN). In this paper, we present two different approaches to time series forecasting: an evolving Takagi-Sugeno (eTS) fuzzy model and an ANN. The two methods are compared, taking into account the distinct characteristics of each approach.
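As a concrete illustration of the ANN side of such a comparison, the sketch below trains a small multilayer perceptron on sliding windows of a synthetic series for one-step-ahead forecasting; the window length and network size are arbitrary choices, and the eTS fuzzy model is not reproduced here.

```python
# One-step-ahead forecasting with a sliding-window MLP (toy ANN baseline).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, lag=8):
    """Turn a 1-D series into (window, next-value) training pairs."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

t = np.arange(600)
series = np.sin(0.1 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
X, y = make_windows(series)
split = 500                                   # train on the past, test on the future
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```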
Multicenter Comparison of Lung and Oral Microbiomes of HIV-infected and HIV-uninfected Individuals.
RATIONALE Improved understanding of the lung microbiome in HIV-infected individuals could lead to better strategies for diagnosis, therapy, and prophylaxis of HIV-associated pneumonias. Differences in the oral and lung microbiomes in HIV-infected and HIV-uninfected individuals are not well defined. Whether highly active antiretroviral therapy influences these microbiomes is unclear. OBJECTIVES We determined whether oral and lung microbiomes differed in clinically healthy groups of HIV-infected and HIV-uninfected subjects. METHODS Participating sites in the Lung HIV Microbiome Project contributed bacterial 16S rRNA sequencing data from oral washes and bronchoalveolar lavages (BALs) obtained from HIV-uninfected individuals (n = 86), HIV-infected individuals who were treatment naive (n = 18), and HIV-infected individuals receiving antiretroviral therapy (n = 38). MEASUREMENTS AND MAIN RESULTS Microbial populations differed in the oral washes among the subject groups (Streptococcus, Actinomyces, Rothia, and Atopobium), but there were no individual taxa that differed among the BALs. Comparison of oral washes and BALs demonstrated similar patterns from HIV-uninfected individuals and HIV-infected individuals receiving antiretroviral therapy, with multiple taxa differing in abundance. The pattern observed from HIV-infected individuals who were treatment naive differed from the other two groups, with differences limited to Veillonella, Rothia, and Granulicatella. CD4 cell counts did not influence the oral or BAL microbiome in these relatively healthy, HIV-infected subjects. CONCLUSIONS The overall similarity of the microbiomes in participants with and without HIV infection was unexpected, because HIV-infected individuals with relatively preserved CD4 cell counts are at higher risk for lower respiratory tract infections, indicating impaired local immune function.
TAXI at SemEval-2016 Task 13: a Taxonomy Induction Method based on Lexico-Syntactic Patterns, Substrings and Focused Crawling
We present a system for taxonomy construction that reached the first place in all subtasks of the SemEval 2016 challenge on Taxonomy Extraction Evaluation. Our simple yet effective approach harvests hypernyms with substring inclusion and Hearst-style lexicosyntactic patterns from domain-specific texts obtained via language model based focused crawling. Extracted taxonomies are evaluated on English, Dutch, French and Italian for three domains each (Food, Environment and Science). Evaluations against a gold standard and by human judgment show that our method outperforms more complex and knowledge-rich approaches on most domains and languages. Furthermore, to adapt the method to a new domain or language, only a small amount of manual labour is needed.
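A toy rendition of the two cheapest signals the system uses, assuming simplified single-word hypernym patterns; the real system employs many more patterns plus focused crawling and pruning.

```python
# Hearst-style pattern matching and substring inclusion for hypernym
# candidates; the patterns below are deliberately minimal.
import re

HEARST = [
    r"(?P<hyper>\w+) such as (?P<hypo>\w[\w ]*?)(?=[,.;]|$)",
    r"(?P<hyper>\w+),? including (?P<hypo>\w[\w ]*?)(?=[,.;]|$)",
]

def hearst_pairs(text):
    """Return (hyponym, hypernym) pairs found by the patterns."""
    pairs = []
    for pat in HEARST:
        for m in re.finditer(pat, text, flags=re.IGNORECASE):
            pairs.append((m.group("hypo").strip(), m.group("hyper").strip()))
    return pairs

def substring_pairs(terms):
    """'green tea' is taken as a hyponym of 'tea' (head-final heuristic)."""
    return [(a, b) for a in terms for b in terms
            if a != b and a.endswith(" " + b)]

text = "The menu lists beverages such as green tea, and fruits including apples."
print(hearst_pairs(text))                 # [('green tea', 'beverages'), ('apples', 'fruits')]
print(substring_pairs(["green tea", "tea", "apples"]))  # [('green tea', 'tea')]
```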
Conjunctive representation of position, direction, and velocity in entorhinal cortex.
Grid cells in the medial entorhinal cortex (MEC) are part of an environment-independent spatial coordinate system. To determine how information about location, direction, and distance is integrated in the grid-cell network, we recorded from each principal cell layer of MEC in rats that explored two-dimensional environments. Whereas layer II was predominated by grid cells, grid cells colocalized with head-direction cells and conjunctive grid x head-direction cells in the deeper layers. All cell types were modulated by running speed. The conjunction of positional, directional, and translational information in a single MEC cell type may enable grid coordinates to be updated during self-motion-based navigation.
Ensemble of jointly trained deep neural network-based acoustic models for reverberant speech recognition
Distant speech recognition is a challenge, particularly due to the corruption of speech signals by reverberation caused by large distances between the speaker and microphone. In order to cope with a wide range of reverberations in real-world situations, we present novel approaches for acoustic modeling including an ensemble of deep neural networks (DNNs) and an ensemble of jointly trained DNNs. First, multiple DNNs are established, each of which corresponds to a different reverberation time 60 (RT60) in a setup step. Also, each model in the ensemble of DNN acoustic models is further jointly trained, including both feature mapping and acoustic modeling, where the feature mapping is designed for the dereverberation as a front-end. In a testing phase, the two most likely DNNs are chosen from the DNN ensemble using maximum a posteriori (MAP) probabilities, computed in an online fashion by using maximum likelihood (ML)-based blind RT60 estimation and then the posterior probability outputs from two DNNs are combined using the ML-based weights as a simple average. Extensive experiments demonstrate that the proposed approach leads to substantial improvements in speech recognition accuracy over the conventional DNN baseline systems under diverse reverberant conditions.
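The test-time combination step might look like the following sketch: pick the two models whose RT60 best matches the blind estimate, then average their per-frame posteriors with normalized likelihood weights. The data structures and likelihood scores are stand-ins for the paper's MAP selection and ML weighting.

```python
# Combine the two most likely DNNs' posteriors with likelihood weights.
import numpy as np

def combine(posteriors_by_rt60, rt60_likelihoods):
    """posteriors_by_rt60: dict rt60 -> (frames, senones) array;
    rt60_likelihoods: dict rt60 -> score from the blind RT60 estimator."""
    top2 = sorted(rt60_likelihoods, key=rt60_likelihoods.get, reverse=True)[:2]
    w = np.array([rt60_likelihoods[r] for r in top2])
    w = w / w.sum()                        # normalize likelihoods into weights
    return w[0] * posteriors_by_rt60[top2[0]] + w[1] * posteriors_by_rt60[top2[1]]

rng = np.random.default_rng(0)
post = {0.3: rng.dirichlet(np.ones(5), 4),   # toy per-frame posteriors for
        0.6: rng.dirichlet(np.ones(5), 4),   # three RT60-specific models
        0.9: rng.dirichlet(np.ones(5), 4)}
lik = {0.3: 0.2, 0.6: 0.7, 0.9: 0.1}         # blind RT60 estimator output
print(combine(post, lik).shape)              # (4, 5): combined posteriors
```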
Receptors for luteinizing hormone-releasing hormone (GnRH) as therapeutic targets in triple negative breast cancers (TNBC)
Triple negative breast cancers express receptors for gonadotropin-releasing hormone (GnRH) in more than 50 % of the cases, which can be targeted with peptidic analogs of GnRH, such as triptorelin. The current study investigates cytotoxic activity of triptorelin as a monotherapy and in treatment combinations with chemotherapeutic agents and inhibitors of the PI3K and the ERK pathways in in vitro models of triple negative breast cancers (TNBC). GnRH receptor expression of TNBC cell lines MDA-MB-231 and HCC1806 was investigated. Cells were treated with triptorelin, chemotherapeutic agents (cisplatin, docetaxel, AEZS-112), PI3K/AKT inhibitors (perifosine, AEZS-129), an ERK inhibitor (AEZS-134), and dual PI3K/ERK inhibitor AEZS-136 applied as single agent therapies and in combinations. MDA-MB-231 and HCC1806 TNBC cells both expressed receptors for GnRH on messenger (m)RNA and protein level and were found sensitive to triptorelin with a respective median effective concentration (EC50) of 31.21 ± 0.21 and 58.50 ± 19.50. Synergistic effects occurred when triptorelin was combined with cisplatin. In HCC1806 cells, synergy occurred when triptorelin was applied with PI3K/AKT inhibitors perifosine and AEZS-129. In MDA-MB-231 cells, synergy was observed after co-treatment with triptorelin and ERK inhibitor AEZS-134 and dual PI3K/ERK inhibitor AEZS-136. GnRH receptors on TNBC cells can be used for targeted therapy of these cancers with GnRH agonist triptorelin. Treatment combinations based on triptorelin and PI3K and ERK inhibitors and chemotherapeutic agent cisplatin have synergistic effects in in vitro models of TNBC. If confirmed in vivo, clinical trials based on triptorelin and cisplatin could be quickly carried out, as triptorelin is FDA approved for other indications and known to be well tolerated.
A visualization tool for evaluating access control policies in facebook-style social network systems
Understanding the privacy implication of adopting a certain privacy setting is a complex task for the users of social network systems. Users need tool support to articulate potential access scenarios and perform policy analysis. Such a need is particularly acute for Facebook-style Social Network Systems (FSNSs), in which semantically rich topology-based policies are used for access control. In this work, we develop a prototypical tool for Reflective Policy Assessment (RPA) --- a process in which a user examines her profile from the viewpoint of another user in her extended neighbourhood in the social graph. We verify the utility and usability of our tool in a within-subject user study.
Disparity and occlusion estimation in multiocular systems and their coding for the communication of multiview image sequences
An efficient disparity estimation and occlusion detection algorithm for multiocular systems is presented. A dynamic programming algorithm, using a multiview matching cost as well as pure geometrical constraints, is used to estimate disparity and to identify the occluded areas in the extreme left and right views. A significant advantage of the proposed approach is that the exact number of views in which each point appears (is not occluded) can be determined. The disparity and occlusion information obtained may then be used to create virtual images from intermediate viewpoints. Furthermore, techniques are developed for the coding of occlusion and disparity information, which is needed at the receiver for the reproduction of a multiview sequence using the two encoded extreme views. Experimental results illustrate the performance of the proposed techniques.
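A schematic per-scanline dynamic program in the same spirit, reduced to two views with a simple smoothness penalty; the paper's multiview matching cost and geometric occlusion handling are omitted.

```python
# Dynamic-programming disparity estimation along one scanline.
import numpy as np

def scanline_disparity(left, right, max_d=8, smooth=0.5):
    """left, right: 1-D intensity rows; returns per-pixel integer disparity."""
    n = len(left)
    cost = np.full((n, max_d + 1), np.inf)
    for x in range(n):                       # matching cost, valid d <= x
        for d in range(min(max_d, x) + 1):
            cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    dp = np.full_like(cost, np.inf)
    back = np.zeros(cost.shape, dtype=int)
    dp[0] = cost[0]
    for x in range(1, n):                    # forward pass with smoothness
        for d in range(max_d + 1):
            trans = dp[x - 1] + smooth * np.abs(np.arange(max_d + 1) - d)
            back[x, d] = int(np.argmin(trans))
            dp[x, d] = cost[x, d] + trans[back[x, d]]
    disp = np.zeros(n, dtype=int)            # backtrack the optimal path
    disp[-1] = int(np.argmin(dp[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp

left = np.array([10, 10, 50, 50, 50, 10, 10, 10])
right = np.array([10, 50, 50, 50, 10, 10, 10, 10])  # scene shifted by ~1 px
print(scanline_disparity(left, right, max_d=2))
```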
Deep Joint Rain Detection and Removal from a Single Image
In this paper, we address a rain removal problem from a single image, even in the presence of heavy rain and rain streak accumulation. Our core ideas lie in our new rain image model and new deep learning architecture. We add a binary map that provides rain streak locations to an existing model, which comprises a rain streak layer and a background layer. We create a model consisting of a component representing rain streak accumulation (where individual streaks cannot be seen, and thus visually similar to mist or fog), and another component representing various shapes and directions of overlapping rain streaks, which usually happen in heavy rain. Based on the model, we develop a multi-task deep learning architecture that learns the binary rain streak map, the appearance of rain streaks, and the clean background, which is our ultimate output. The additional binary map is critically beneficial, since its loss function can provide additional strong information to the network. To handle rain streak accumulation (again, a phenomenon visually similar to mist or fog) and various shapes and directions of overlapping rain streaks, we propose a recurrent rain detection and removal network that removes rain streaks and clears up the rain accumulation iteratively and progressively. In each recurrence of our method, a new contextualized dilated network is developed to exploit regional contextual information and to produce better representations for rain detection. The evaluation on real images, particularly on heavy rain, shows the effectiveness of our models and architecture.
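The rain image model reads, schematically, as O = (1 - alpha)(B + S*R) + alpha*A: background B, streak layer S gated by the binary location map R, and an accumulation veil with strength alpha. The sketch below just composes a synthetic observation under this assumed form; the multi-task network would then regress (R, S, B) from O.

```python
# Schematic composition of a rainy image under the assumed model.
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
B = rng.uniform(0.2, 0.8, (H, W))               # clean background
R = (rng.random((H, W)) < 0.05).astype(float)   # binary rain-streak map
S = rng.uniform(0.5, 1.0, (H, W)) * R           # streak intensities where R == 1
alpha, A = 0.15, 0.9                            # accumulation strength, airlight
O = (1 - alpha) * (B + S) + alpha * A           # observed rainy image

# The extra supervision on the binary map R is what provides the strong
# detection signal described above.
print(O.shape, R.sum())
```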
20 years old and going strong.
Richard L. Sowell, PhD, RN, FAAN, is Editor Emeritus (1996–2007), Professor and Dean, WellStar College of Health and Human Services, Kennesaw State University, Kennesaw, GA. Congratulations to the Journal of the Association of Nurses in AIDS Care (JANAC) on its 20th anniversary. The significant contributions made by JANAC over the past 20 years are a testimony to the vision and hard work nurses have provided in responding to the HIV/AIDS pandemic. JANAC from its earliest conception has brought a unique perspective to the fight against AIDS. It is the perspective of nurses who are direct care providers, nurses who are researchers, nurses who are educators, and nurses who are policy advocates. JANAC, as the only nursing journal to exclusively address HIV-related issues, continues to provide an outlet for the voices of nurses in AIDS Care—voices that have influenced the course of the pandemic—voices that attest to the role of nurses as the underpinning of health and health care delivery. It was my privilege to have been a part of JANAC’s evolution and growth during 13 years of the journal’s 20-year history. It does not seem possible that it was more than 13 years ago when Cliff Morrison, ANAC president, asked me to serve as the assistant editor for JANAC. I had no idea that 2 years later, in 1996, I would become the editor of JANAC for the next 11 years. I could not have guessed JANAC would become a major part of my professional life for over a decade as well as provide the opportunity to work with some of the most dynamic nurses in the world. If you look back through the pages of JANAC noting the individuals who have served on the editorial board, who have been peer reviewers, and who have been our authors, you cannot help but be impressed. Many of the names seen in pages of JANAC are those of individuals who have shaped the history of the pandemic. What a privilege to have worked with these leaders. The growth of JANAC into a highly respected international AIDS journal is a source of satisfaction
Biochemical and Physiological Mechanisms Related to Vernalization, Atonik and Benzyl Adenine in Pisum sativum
The effects of vernalization at 5°C for 5 days, either singly or in combination with a foliar application of atonik at 250, 500 and 1000 mg/l or 6-benzyl adenine (BA) at 25, 50 and 100 mg/l, were studied on the growth parameters and flowering response, photosynthetic pigments, different carbohydrate and nitrogen fractions, ion contents and endogenous levels of different phytohormones of Pisum sativum (cv. ‘Master Bean’). All determined growth parameters (root and shoot length, fresh weight and dry weight; number of nodes/plant; number of leaves/plant; total leaf area/plant; relative water content; number of flowers/plant) decreased in response to the vernalization treatment. In contrast, vernalization in combination with 1000 mg/l atonik or 50 mg/l BA led to a significant increase in these parameters. Vernalization, alone or in combination with atonik or BA, significantly increased all photosynthetic pigments and generally led to a significant increase in the different carbohydrate and nitrogen fractions and in ion content. On the other hand, vernalization led to a significant decrease in total auxins, gibberellic acid (GA3) and different CK fractions (zeatin, kinetin and BA) in pea plant shoots, while ABA increased significantly. In contrast, vernalization combined with atonik or BA at any concentration led to a progressive increase in total auxins, GA3 and the different CK fractions, while ABA decreased significantly compared with control values.
Advances in real-time flood forecasting.
This paper discusses the modelling of rainfall-flow (rainfall-run-off) and flow-routeing processes in river systems within the context of real-time flood forecasting. It is argued that deterministic, reductionist (or 'bottom-up') models are inappropriate for real-time forecasting because of the inherent uncertainty that characterizes river-catchment dynamics and the problems of model over-parametrization. The advantages of alternative, efficiently parametrized data-based mechanistic models, identified and estimated using statistical methods, are discussed. It is shown that such models are in an ideal form for incorporation in a real-time, adaptive forecasting system based on recursive state-space estimation (an adaptive version of the stochastic Kalman filter algorithm). An illustrative example, based on the analysis of a limited set of hourly rainfall-flow data from the River Hodder in northwest England, demonstrates the utility of this methodology in difficult circumstances and illustrates the advantages of incorporating real-time state and parameter adaption.
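A scalar caricature of the recursive estimation loop described here: one Kalman predict/correct step that updates a rainfall-to-flow state online. All coefficients and noise variances are assumptions for illustration, not values from the paper.

```python
# One predict/correct step of a scalar Kalman filter for flow forecasting.
def kalman_step(x, P, z, u, a=0.95, b=0.1, q=1e-3, r=0.5):
    """x, P: state estimate and variance; z: observed flow; u: rainfall."""
    x_pred = a * x + b * u              # predict from the transfer-function model
    P_pred = a * P * a + q
    k = P_pred / (P_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)   # correct with the new observation
    return x_new, (1 - k) * P_pred

x, P = 0.0, 1.0
for z, u in [(1.2, 3.0), (1.5, 2.0), (1.1, 0.5)]:   # (flow, rainfall) pairs
    x, P = kalman_step(x, P, z, u)
    print(f"flow estimate {x:.2f} (var {P:.3f})")
```

The adaptive forecasting system the paper describes runs essentially this recursion in vector form, with the model parameters themselves also updated recursively.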
From Intrusion Detection to Attacker Attribution: A Comprehensive Survey of Unsupervised Methods
Over the last five years there has been an increase in the frequency and diversity of network attacks. This holds true as more and more organizations admit compromises on a daily basis. Many misuse- and anomaly-based intrusion detection systems (IDSs) that rely on signatures or on supervised or statistical methods have been proposed in the literature, but their trustworthiness is debatable. Moreover, as this paper uncovers, current IDSs are based on obsolete attack classes that do not reflect current attack trends. For these reasons, this paper provides a comprehensive overview of unsupervised and hybrid methods for intrusion detection and discusses their potential in the domain. We also present and highlight the importance of feature engineering techniques that have been proposed for intrusion detection. Furthermore, we argue that current IDSs should evolve from simple detection to correlation and attribution, and we discuss how IDS data could be used to reconstruct and correlate attacks to identify attackers with the use of advanced data analytics techniques. Finally, we show how the present IDS attack classes can be extended to match modern attacks, and we propose three new classes covering outgoing network communication.
Understanding the Effect of Customer Relationship Management Efforts on Customer Retention and Customer Share Development
Scholars have questioned the effectiveness of several customer relationship management strategies. The author investigates the differential effects of customer relationship perceptions and relationship marketing instruments on customer retention and customer share development over time. Customer relationship perceptions are considered evaluations of relationship strength and a supplier’s offerings, and customer share development is the change in customer share between two periods. The results show that affective commitment and loyalty programs that provide economic incentives positively affect both customer retention and customer share development, whereas direct mailings influence customer share development. However, the effect of these variables is rather small. The results also indicate that firms can use the same strategies to affect both customer retention and customer share development.
Training care givers of stroke patients: economic evaluation.
BACKGROUND Training care givers reduces their burden and improves psychosocial outcomes in care givers and patients at one year. However, the cost effectiveness of this approach has not been investigated. OBJECTIVE To evaluate the cost effectiveness of caregiver training by examining health and social care costs, informal care costs, and quality adjusted life years in care givers. DESIGN A single, blind, randomised controlled trial. SETTING Stroke rehabilitation unit. SUBJECTS 300 stroke patients and their care givers. INTERVENTIONS Caregiver training in basic nursing and facilitation of personal care techniques compared with no care giver training. MAIN OUTCOME MEASURES Health and social care costs, informal care costs, and quality adjusted life years in care givers over one year after stroke. RESULTS Total health and social care costs over one year for patients whose care givers received training were significantly lower (mean difference -4043 pounds sterling (7249 dollars; 6072 euros), 95% confidence interval -6544 pounds sterling to -595 pounds sterling). Inclusion of informal care costs, which were similar between the two groups, did not alter this conclusion. The cost difference was largely due to differences in length of hospital stay. The EQ-5D did not detect changes in quality adjusted life years in care givers. CONCLUSION Compared with no training, caregiver training during rehabilitation of patients reduced costs of care while improving overall quality of life in care givers at one year.
DETECTING FRAUD IN THE REAL WORLD
Finding telecommunications fraud in masses of call records is more difficult than finding a needle in a haystack. In the haystack problem, there is only one needle that does not look like hay, the pieces of hay all look similar, and neither the needle nor the hay changes much over time. Fraudulent calls may be rare like needles in haystacks, but they are much more challenging to find. Callers
Environmental context and human memory
Five experiments examined the effects of environmental context on recall and recognition. In Experiment 1, variability of input environments produced higher free recall performance than unchanged input environments. Experiment 2 showed improvements in cued recall when storage and test contexts matched, using a paradigm that unconfounded the variables of context mismatching and context change. In Experiment 3, recall of categories and recall of words within a category were better for same-context than different-context recall. In Experiment 4, subjects given identical input conditions showed strong effects of environmental context when given a free recall test, yet showed no main effects of context on a recognition test. The absence of an environmental context effect on recognition was replicated in Experiment 5, using a cued recognition task to control the semantic encodings of test words. In the discussion of these experiments, environmental context is compared with other types of context, and an attempt is made to identify the memory processes influenced by environmental context.
Simvastatin treatment reduces heat shock protein 60, 65, and 70 antibody titers in dyslipidemic patients: A randomized, double-blind, placebo-controlled, cross-over trial.
OBJECTIVE This study aimed to evaluate the effects of statin therapy on serum levels of antibodies to several specific heat shock proteins (HSPs) in dyslipidemic patients. DESIGN AND METHODS Participants (n=102) were treated with simvastatin (40mg/day), or placebo in a randomized, double-blind, placebo-controlled, cross-over trial. Anti-HSP60, 65, 70, and hs-CRP levels were measured before and after each treatment period. Seventy-seven subjects completed the study. RESULTS Treatment with simvastatin was associated with significant reductions in serum anti-HSP60, 65, and 70 titers in the dyslipidemic patients (10%, 14%, and 15% decrease, respectively) (p<0.001). There have been previous reports of reductions in serum CRP with statin treatment, and although median CRP levels were 9% lower on simvastatin treatment, this did not achieve statistical significance. CONCLUSION While it is unclear whether HSP antibodies are directly involved in atherogenesis, our findings suggest that simvastatin inhibits autoimmune responses that may contribute to the development of cardiovascular disease.
Effect of chemotherapy counseling by pharmacists on quality of life and psychological outcomes of oncology patients in Malaysia: a randomized control trial
BACKGROUND Cancer is now becoming a leading cause of death. Chemotherapy is an important treatment for cancer patients, who also need counseling during their treatment to improve quality of life and decrease psychological disorders. The objectives of the study were to develop, implement and evaluate the effectiveness of a chemotherapy counseling module delivered by pharmacists to oncology patients, measured by its effect on their quality of life and psychological outcomes in Malaysia. METHOD A single-blind randomized controlled trial was carried out among 162 oncology patients undergoing chemotherapy from July 2013 to February 2014 in a government hospital with oncology facilities in Malaysia. Participants were randomized to either the intervention group or the control group. Chemotherapy counseling using the module 'Managing Patients on Chemotherapy' by pharmacists was delivered to the intervention group. The outcome measures were assessed at baseline and at the first, second and third follow-ups post-intervention. Chi-square tests, independent-samples t-tests and two-way repeated measures ANOVA were conducted in the course of the data analyses. RESULTS In assessing the impact of the chemotherapy counseling module, the study revealed that the module along with repetitive counseling showed significant improvement of quality of life in the intervention group as compared to the control group, with a large effect size in physical health (p = 0.001, partial η2 = 0.66), psychological health (p = 0.001, partial η2 = 0.65), social relationships (p = 0.001, partial η2 = 0.30), and environment (p = 0.001, partial η2 = 0.67), and a decrease in anxiety (p < 0.001; partial η2 = 0.23) and depression (p < 0.001; partial η2 = 0.40). CONCLUSION The module 'Managing Patients on Chemotherapy', along with repetitive counseling by pharmacists, has been shown to be effective in improving quality of life and decreasing anxiety and depression among oncology patients undergoing chemotherapy. TRIAL REGISTRATION NUMBER Registered with the National Medical Research Register (NMRR) of Malaysia under registration number NMRR-12-1057-12,363 on 21 December 2012.
Low-pressure pneumoperitoneum combined with intraperitoneal saline washout for reduction of pain after laparoscopic cholecystectomy: a prospective randomized study
We designed a prospective randomized clinical trial to investigate whether intraperitoneal saline washout combined with a low-pressure pneumoperitoneum (LPSW) was superior to low-pressure pneumoperitoneum (LP) alone as a means of reducing postoperative pain and analgesic consumption in the early recovery period after laparoscopic cholecystectomy (LC). A total of 124 consecutive patients undergoing LC for uncomplicated symptomatic gallstones were randomized to the LP or LPSW group. In the LPSW group, normal saline at body temperature (25 ml/kg of body weight) was irrigated under the diaphragm. The fluid was evacuated via the passive-flow method through a 16-F closed drain left under the liver for 24 h. We then assessed the intensity of total abdominal postoperative pain using the Visual Analogue Scale (VAS), the incidence of shoulder-tip pain (STP), the total daily analgesia demand rate, and analgesic consumption. Quality of life (QOL) within 7 days after the operation was assessed using the Medical Outcomes Study Short Form 36 Health Survey (SF-36). A p value of <0.05 was considered significant. The mean postoperative pain score was lower by 2.64 ± 0.86 in the LPSW group; the difference equaled 9.64% (p < 0.05). The incidence of STP was lower in the LPSW group (LP 11.29% vs LPSW 1.6%; p = 0.028). The analgesia demand rate was remarkably lower in LPSW vs LP within 24 and 48 h postoperatively (70.96% vs 90.32%; p = 0.006 and 64.51% vs 83.87%; p = 0.013, respectively). After LPSW vs LP, QOL was better in terms of physical functioning, role limitations due to physical problems, and bodily pain (90.32% vs 77.42%; p = 0.05, 90.32% vs 75.8%; p = 0.03, and 91.93% vs 74.19%; p = 0.008, respectively). In terms of lower postoperative pain and a better QOL within the early recovery period, LPSW is superior to LP alone. The saline washout procedure can be recommended during LC because it is a simple way to reduce pain intensity, even after LP operations.
Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering
In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.
Named Entity Recognition With Parallel Recurrent Neural Networks
We present a new architecture for named entity recognition. Our model employs multiple independent bidirectional LSTM units across the same input and promotes diversity among them by employing an inter-model regularization term. By distributing computation across multiple smaller LSTMs we find a reduction in the total number of parameters. We find our architecture achieves state-of-the-art performance on the CoNLL 2003 NER dataset.
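One plausible form of the inter-model regularization term, under the assumption that it penalizes agreement between the parallel LSTMs' hidden states; the paper's exact regularizer may differ.

```python
# Diversity penalty across parallel taggers: average squared cosine
# similarity between per-token hidden states of every pair of models.
import numpy as np

def diversity_penalty(hidden_states):
    """hidden_states: list of (tokens, dim) arrays, one per parallel LSTM."""
    total, count = 0.0, 0
    for i in range(len(hidden_states)):
        for j in range(i + 1, len(hidden_states)):
            a, b = hidden_states[i], hidden_states[j]
            cos = np.sum(a * b, axis=1) / (
                np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
            total += float(np.mean(cos ** 2))   # punish aligned representations
            count += 1
    return total / count

rng = np.random.default_rng(0)
states = [rng.normal(size=(12, 64)) for _ in range(3)]  # 3 parallel LSTMs
print(diversity_penalty(states))  # added to the NER loss with a small weight
```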
Collision avoidance of unmanned ships based on artificial potential field
The collision avoidance problem of autonomous ships is analyzed in this paper. Firstly, the background and the difficulties are reviewed. Then an improved artificial potential field method is designed to solve the collision avoidance problem of unmanned ships, and the system characteristics are investigated. Finally, simulation results demonstrate the ship's control effectiveness in the obstacle environment.
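A standard artificial potential field step for reference, with the usual attractive and repulsive terms; the gains, influence radius, and the paper's specific improvements to the method are not modeled here.

```python
# Classical APF step: attraction to the goal plus repulsion from obstacles
# inside an influence radius rho0; the ship moves at unit speed along the
# resulting force direction.
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, rho0=20.0, dt=0.5):
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-9 < rho < rho0:                          # inside influence radius
            force += k_rep * (1/rho - 1/rho0) / rho**2 * diff / rho
    return pos + dt * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([100.0, 0.0])
obstacles = [np.array([50.0, 1.0])]                    # obstacle near the path
for _ in range(250):
    pos = apf_step(pos, goal, obstacles)
    if np.linalg.norm(goal - pos) < 1.0:
        break
print(pos)
```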
Evolutionary games and population dynamics: maintenance of cooperation in public goods games.
The emergence and abundance of cooperation in nature poses a tenacious and challenging puzzle to evolutionary biology. Cooperative behaviour seems to contradict Darwinian evolution because altruistic individuals increase the fitness of other members of the population at a cost to themselves. Thus, in the absence of supporting mechanisms, cooperation should decrease and vanish, as predicted by classical models for cooperation in evolutionary game theory, such as the Prisoner's Dilemma and public goods games. Traditional approaches to studying the problem of cooperation assume constant population sizes and thus neglect the ecology of the interacting individuals. Here, we incorporate ecological dynamics into evolutionary games and reveal a new mechanism for maintaining cooperation. In public goods games, cooperation can gain a foothold if the population density depends on the average population payoff. Decreasing population densities, due to defection leading to small payoffs, result in smaller interaction group sizes in which cooperation can be favoured. This feedback between ecological dynamics and game dynamics can generate stable coexistence of cooperators and defectors in public goods games. However, this mechanism fails for pairwise Prisoner's Dilemma interactions, and the population is driven to extinction. Our model represents a natural extension of replicator dynamics to populations of varying densities.
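A caricature of this feedback in code: replicator dynamics for the cooperator share coupled to payoff-driven logistic growth of the density, with the effective group size shrinking as density falls. The functional forms and parameters are simplified assumptions, not the paper's equations.

```python
# Euler integration of a toy eco-evolutionary public goods game.
def pgg_payoffs(x, S, r=3.0, c=1.0):
    """Mean payoffs for defectors/cooperators in groups of size S when the
    cooperator fraction in the population is x."""
    p_d = r * c * x * (S - 1) / S
    p_c = p_d - c * (1 - r / S)       # cooperators also pay the contribution
    return p_c, p_d

def step(x, n, dt=0.01, n_max=1.0, s_max=8, death=0.6):
    S = max(2, round(s_max * n / n_max))     # sparser population, smaller groups
    p_c, p_d = pgg_payoffs(x, S)
    avg = x * p_c + (1 - x) * p_d
    dx = x * (1 - x) * (p_c - p_d)           # replicator dynamics
    dn = n * ((1 - n / n_max) * avg - death) # payoff-driven logistic growth
    return min(1.0, max(0.0, x + dt * dx)), max(0.0, n + dt * dn)

x, n = 0.5, 0.8
for _ in range(5000):
    x, n = step(x, n)
print(f"cooperator share {x:.2f}, density {n:.2f}")
```

The key mechanism survives the simplification: once the group size S falls below the multiplication factor r, cooperators gain the advantage, so falling density rescues cooperation.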
Evaluation of candidate genes MAP2K4, MADH4, ACVR1B, and BRCA2 in familial pancreatic cancer: deleterious BRCA2 mutations in 17%.
It is estimated that familial aggregation and genetic susceptibility play a role in as many as 10% of pancreatic ductal adenocarcinomas. To investigate the role of germ-line mutations in the etiology of pancreatic cancer, we have analyzed samples from patients with pancreatic cancer enrolled in the NFPTR for mutations in four tumor suppressor candidate genes: (a) MAP2K4; (b) MADH4; (c) ACVR1B; and (d) BRCA2 by direct sequencing of constitutional DNA. These genes are mutated in clinically sporadic pancreatic cancer, but germ-line mutations are either not reported or anecdotal in familial pancreatic cancer. Pancreatic cancer patient samples were selected from kindreds in which three or more family members were affected with pancreatic cancer, at least two of which were first-degree relatives. No mutations were identified in mitogen-activated protein kinase kinase 4 (0 of 22), MADH4 (0 of 22), or ACVR1B (0 of 29), making it unlikely that germ-line mutations in these genes account for a significant number of inherited pancreatic cancers. BRCA2 gene sequencing identified five mutations (5 of 29, 17.2%) that are believed to be deleterious and one point mutation (M192T) unreported previously. Three patients harbored the common 6174delT frameshift mutation, one had the splice site mutation IVS 16-2A > G, and one had the splice site mutation IVS 15-1G > A. Two of the five BRCA2 mutation carriers reported a family history of breast cancer, and none reported a family history of ovarian cancer. These findings confirm the increased risk of pancreatic cancer in individuals with BRCA2 mutations and identify germ-line BRCA2 mutations as the most common inherited genetic alteration yet identified in familial pancreatic cancer.
A Survey of Robot Interaction Control Schemes with Experimental Comparison
A great many control schemes for a robot manipulator interacting with the environment have been developed in the literature over the past two decades. This paper presents a survey of robot interaction control schemes for a manipulator whose end effector comes in contact with a compliant surface. A salient feature of the work is the implementation of the schemes on an industrial robot with an open control architecture, equipped with a wrist force sensor. Two classes of control strategies are considered, namely, those based on static model-based compensation and those based on dynamic model-based compensation. The former provide a good steady-state behavior, while the latter enhance the behavior during the transient. The performance of the various schemes is compared in terms of disturbance rejection, and a thorough analysis is developed by means of a number of case studies.
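For concreteness, a minimal discrete-time impedance-control simulation of the contact phase, one representative scheme from the surveyed families: the end effector behaves as a mass-spring-damper around a desired position while pressing on a stiff compliant surface. All gains and the environment stiffness are illustrative assumptions.

```python
# Impedance control against a compliant surface: m*a = k(x_des - x) - d*v + f_ext.
def simulate(steps=400, dt=1e-3, m=1.0, d=40.0, k=400.0,
             x_env=0.0, k_env=5000.0, x_des=0.01):
    x, v, f_ext = -0.02, 0.0, 0.0            # start out of contact
    for _ in range(steps):
        # Surface reaction force once the effector penetrates the surface:
        f_ext = -k_env * (x - x_env) if x > x_env else 0.0
        a = (k * (x_des - x) - d * v + f_ext) / m   # impedance law
        v += a * dt
        x += v * dt
    return x, f_ext

x, f = simulate()
print(f"steady-state position {x*1000:.2f} mm, contact force {-f:.1f} N")
```

At steady state the spring gain k trades position error against contact force, which is exactly the compliance behaviour the surveyed schemes shape.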
Two year follow-up of clinical and inflammation parameters in children monosensitized to mites undergoing subcutaneous and sublingual immunotherapy.
BACKGROUND Both SCIT (subcutaneous immunotherapy) and SLIT (sublingual immunotherapy) have clinical and immunologic efficacy in children with rhinitis and asthma, but comparative studies are scarce. OBJECTIVE To investigate the clinical and immunological efficacy of mite-specific SLIT and SCIT in children with rhinitis and asthma. METHOD Thirty children monosensitized to house dust mite were randomized to receive either active SCIT or SLIT or placebo for 1 yr in a double-blind, double-dummy, placebo-controlled design (Yukselen A et al., Int Arch Allergy Immunol 2012; 157:288-298). Thereafter, the placebo group was randomized to receive SCIT or SLIT, and for 1 yr all patients received active treatment with SCIT or SLIT. Symptom scores, drug usage, titrated skin prick tests, nasal and bronchial allergen provocation doses, and serum house dust mite-specific immunoglobulin E, sIgG4, IL-10 and IFN-γ levels were evaluated. RESULTS The reduction of clinical scores with SLIT was more evident after 2 years of treatment in comparison to both the baseline and the DBPC phase of the study. The change in titrated skin prick tests and nasal provocation doses was more prominent with both SCIT and SLIT at the end of the open phase. Although the increase in bronchial provocation doses was not significant at the end of the first year of treatment with SLIT, it reached statistical significance after two years of treatment. CONCLUSION The clinical efficacy of SLIT is more prominent at the end of the second year, whereas this improvement is observed from the first year of treatment with SCIT in mite-sensitive children.
Preload or afterload reduction: Which is more beneficial for patients with ischemic heart disease?
We studied the acute hemodynamic effects of molsidomine, a selective preload reducing agent, and nifedipine, a selective afterload reducing agent. Thirty-two patients with stable angina pectoris and angiographically significant coronary artery disease were randomized into two groups: group A patients received 4 mg of molsidomine, and group B patients received 20 mg of nifedipine orally. Molsidomine was associated with a significant reduction of the left ventricular end-diastolic pressure and an increase in Vcf. Nifedipine caused a significant reduction of the mean arterial pressure and an increase of the heart rate. Hemodynamic parameters associated with chronic exertional angina pectoris in patients with angiographically significant coronary artery disease improved more with a preload reducing agent, like molsidomine.
Real-time task assignment in hyperlocal spatial crowdsourcing under budget constraints
Spatial Crowdsourcing (SC) is a novel platform that engages individuals in the act of collecting various types of spatial data. This method of data collection can significantly reduce cost and turnover time, and is particularly useful in environmental sensing, where traditional means fail to provide fine-grained field data. In this study, we introduce hyperlocal spatial crowdsourcing, where all workers who are located within the spatiotemporal vicinity of a task are eligible to perform the task, e.g., reporting the precipitation level at their area and time. In this setting, there is often a budget constraint, either for every time period or for the entire campaign, on the number of workers to activate to perform tasks. The challenge is thus to maximize the number of assigned tasks under the budget constraint, despite the dynamic arrivals of workers and tasks as well as their co-location relationship. We study two problem variants in this paper: budget is constrained for every timestamp, i.e. fixed, and budget is constrained for the entire campaign, i.e. dynamic. For each variant, we study the complexity of its offline version and then propose several heuristics for the online version which exploit the spatial and temporal knowledge acquired over time. Extensive experiments with real-world and synthetic datasets show the effectiveness and efficiency of our proposed solutions.
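A greedy heuristic sketch for the fixed (per-timestamp) budget variant, assuming simplified worker/task structures: repeatedly activate the worker covering the most still-open tasks until the budget runs out. The real heuristics also exploit spatiotemporal knowledge learned over time.

```python
# Greedy budgeted worker activation for one timestamp.
def assign(workers, tasks, budget):
    """workers: dict id -> set of reachable task ids; budget: max number of
    activations this timestamp. Returns the chosen worker ids."""
    open_tasks, chosen = set(tasks), []
    while budget > 0 and open_tasks and len(chosen) < len(workers):
        wid, covered = max(((w, cov & open_tasks)
                            for w, cov in workers.items() if w not in chosen),
                           key=lambda p: len(p[1]))
        if not covered:
            break                     # nobody reaches any remaining task
        chosen.append(wid)
        open_tasks -= covered
        budget -= 1
    return chosen

workers = {"w1": {"t1", "t2"}, "w2": {"t2", "t3"}, "w3": {"t4"}}
print(assign(workers, ["t1", "t2", "t3", "t4"], budget=2))  # e.g. ['w1', 'w2']
```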
Ultrasound assessment of increased capsular width in temporomandibular joint internal derangements: relationship with joint pain and magnetic resonance grading of joint effusion.
OBJECTIVE The relationship between radiologic evidence of effusion in the temporomandibular joint (TMJ) and the occurrence of clinical symptoms (e.g., pain) is still unclear. Increased capsular width (CW) measured in ultrasonographic imaging (USI) of the TMJ was considered to be an indirect marker of TMJ effusion. The purpose of this study was to evaluate the relationship between the grades of magnetic resonance imaging (MRI)-depicted joint effusion (JE), increased CW measured in USI, and joint pain in TMJ internal derangement (ID) patients. STUDY DESIGN Over a 4-year period, 91 patients clinically diagnosed with TMJ ID according to the Research Diagnostic Criteria for Temporomandibular Disorders classification were included in the study. Those with mainly myogenic complaints were excluded. In the clinical examination, the severity of pain was assessed by visual analog scale (VAS, 0 to 10). All TMJs (n = 182) were evaluated to detect the presence of joint effusion by means of USI and MRI. MRI-depicted effusion was classified as no effusion, moderate effusion, or severe effusion. Receiver operating characteristic curve analysis was performed to determine the critical cutoff value for TMJ CW. USI sensitivity was evaluated against MRI effusion, and a cutoff value was identified that was considered the threshold for discriminating TMJs with and without effusion. The relationships between joint pain and the USI and MRI findings of effusion were evaluated with Friedman and Wilcoxon tests. RESULTS The average VAS score of the TMJs without effusion was found to be 2.55, with moderate effusion 2.92, and with severe effusion 4.80. A significant positive correlation was found between the VAS scores and the intensity of MRI JE (P = .003). The most accurate cutoff value of CW was found to be 1.65 mm. The average VAS score with CW <1.65 mm was 2.10, and the average VAS score with CW >1.65 mm was 3.75. A significant positive correlation was found between the clinical pain scores and CW measured in USI (P = .001). CONCLUSIONS Both MRI-depicted effusion and USI assessment of CW were found to be related to pain in TMJ ID patients.
Consequences of Bilingualism for Cognitive Development
Research addressing the possible cognitive consequences of bilingualism for children’s development has found mixed results when seeking effects in domains such as language ability and intelligence. The approach in the research reported in this chapter is to investigate the effect that bilingualism might have on specific cognitive processes rather than domains of skill development. Three cognitive domains are examined: concepts of quantity, task-switching and concept formation, and theory of mind. The common finding in these disparate domains is that bilingual children are more advanced than monolinguals in solving problems requiring the inhibition of misleading information. The conclusion is that bilingualism accelerates the development of a general cognitive function concerned with attention and inhibition, and that facilitating effects of bilingualism are found on tasks and processes in which this function is most required. Consequences of bilingualism 3 Consequences of Bilingualism for Cognitive Development A significant portion of children in the world enter the realm of language learning being exposed to multiple languages, required to communicate using different systems and proceed to school where the instructional discourse bears no resemblance to the language at home. Normally, few questions are asked and few concerns are expressed by parents, teachers, or politicians. In many cultures, this quiet acceptance indicates that the experience is either so common that it is not detected as anomalous or so crucial for survival that it is futile to challenge it. Yet, an experience as broad in its impact as the way in which language is learned and used in the first years may well impact on the child’s cognitive development. This chapter explores research that has addressed itself to identifying whether childhood bilingualism alters the typical course of cognitive development, either favorably or deleteriously, for children whose language acquisition has proceeded by building two linguistic systems. The cognitive effect of the linguistic environment in which children are raised appears on the surface to be an issue of psychological and educational relevance but it conceals an underlying dimension that is explosively political. Children who are recipients of this experience, for better or worse, are not randomly chosen, nor are they randomly distributed through the population. They tend to belong to specific ethnic groups, occupy particular social positions, and be members of communities who have recently immigrated. It is not surprising, then, that historically some attempts to investigate the psychological and educational questions that follow from this situation have failed to meet standards of scientific objectivity. Instead, the judgment about the effect of bilingualism on children’s development in early studies was sometimes used to reflect societal attitudes towards such issues as immigration and to reinforce preconceived views of language and its role in education. Consequences of bilingualism 4 In some nontrivial way, bilingual minds cannot resemble the more homogenous mental landscape of a monolingual. Although there is debate about the precise manner in which languages and concepts are interconnected in bilingual minds (discussed below), it is uncontroversial that the configuration is more complex than that of a monolingual for whom concepts and languages ultimately converge in unambiguous and predictable manners. 
Monolinguals may have multiple names for individual concepts, but the relation among those alternatives, as synonyms for example, does not invoke the activation of entire systems of meaning, as the alternative names from different languages is likely to do. From the beginning, therefore, bilingualism has consequence. What is not inevitable, however, is that one of these consequences is to influence the quality or manner of cognitive development. Early research on the cognitive consequences of bilingualism paid virtually no attention to such issues as the nature of bilingual populations tested, their facility in the language of testing, or the interpretation of the tests used. As an apparent default, cognitive ability was taken to be determined by performance on IQ tests, at best a questionable measure of intelligence (see Gould, 1981). For example, Saer (1923) used the Stanford Binet Test and compared Welsh children who were bilingual with monolingual English children and reported the inferiority and “mental confusion” of the bilinguals. Darcy (1963) reviewed many subsequent studies of this type and pointed to their common finding that bilinguals consistently scored lower on verbal tests and were often disadvantaged on performance tests as well. Although Darcy cautioned that multiple factors should be considered, a more salubrious account of this research is offered by Hakuta (1986) who attributes the inferior results of the bilinguals in comparison to their new native-speaking peers to the tests being conducted in a language they were only beginning to learn. Consequences of bilingualism 5 The antidote to the pessimistic research was almost as extreme in its claims. In a watershed study, Peal and Lambert (1962) tested a carefully selected group of French-English bilingual children and hypothesized that the linguistic abilities of the bilinguals would be superior to those of the monolinguals but that the nonverbal skills would be the same. Even the expectation of an absence of a bilingual deficit was radical departure from the existing studies. Not only was the linguistic advantage confirmed in their results, but they also found an unexpected advantage in some of the nonverbal cognitive measures involving symbolic reorganization. Their conclusion was that bilingualism endowed children with enhanced mental flexibility and that this flexibility was evident across all domains of thought. Subsequent research has supported this notion. Ricciardelli (1992), for example, found that few tests in a large battery of cognitive and metalinguistic measures were solved better by bilinguals, but those that were included tests of creativity and flexible thought. In addition, balanced bilinguals have been found to perform better on concept formation tasks (Bain, 1974), divergent thinking and creativity (Torrance, Wu, Gowan, & Alliotti, 1970), and field independence and Piagetian conservation (Duncan & De Avila, 1979). In a particularly well-designed study, Ben-Zeev (1977) reported bilingual advantages on both verbal and nonverbal measures, in spite of a significant bilingual disadvantage in vocabulary. Her explanation was that the mutual interference between languages forces bilinguals to adopt strategies that accelerate cognitive development. Although she did not develop the idea further, it is broadly consistent with the explanation proposed elsewhere (Bialystok, 2001) and below. 
Researchers such as Hakuta, Ferdman, and Diaz (1987), MacNab (1979), and Reynolds (1991) challenged the reliability of many of those studies reporting felicitous cognitive consequences for bilingualism and argued that the data were not yet conclusive. MacNab (1979) was the most critical, but conceded that bilinguals consistently outperformed monolinguals in generating original uses for objects, an ability compatible with the claim of Peal and Lambert for an increase in flexibility of thought. Reynolds’ (1991) reservation depended in part on his requirement that evidence for bilingual superiority should be presented in the context of an explanation for why such effects occur. The purpose of the present review is to describe some selected cognitive processes, to evaluate the evidence for bilingual influences on their development, and to interpret those effects within an explanatory framework. Peal and Lambert’s idea that bilingualism would foster flexibility of thought has persisted, often accompanied by supporting evidence. Their explanation was that the experience of having two ways to describe the world gave bilinguals the basis for understanding that many things could be seen in two ways, leading to a more flexible approach to perception and interpretation. We shall return to this idea in the conclusions. The majority of the more recent literature has focused on the consequences of bilingualism for the development of children’s linguistic and metalinguistic concepts. It is entirely plausible that learning two languages in childhood could alter the course of these developments, but documenting those abilities has revealed unexpected complexity. Bilingualism is often (but not consistently) found to promote more rapid development of metalinguistic concepts. In contrast, oral language proficiency, particularly in terms of early vocabulary development, is usually delayed for bilingual children. Reading and the acquisition of literacy are less well studied, but the existing evidence gives little reason to believe that bilingualism itself significantly affects the manner or ease with which children learn to read. The effects of bilingualism on all these language-related developments are discussed elsewhere and will not be reviewed here (e.g., Bialystok, 2001, 2002). This chapter will examine only the nonverbal cognitive consequences of becoming bilingual in childhood. The possibility that bilingualism can affect nonverbal cognitive development is steeped in an assumption: namely, that linguistic and nonlinguistic knowledge share resources in a domain-general representational system and can influence each other. In some theoretical conceptions of language, language representations and processes are isolated from other cognitive systems (e.g., Pinker, 1994). Although it may be possible in these views to understand that bilingualism would influence linguistic and metalinguistic development, it is difficult to imagine that the effect of constructing two languages would extend b
Effect of lamivudine in HIV-infected persons with prior exposure to zidovudine/didanosine or zidovudine/zalcitabine.
Nucleoside analog-based regimens remain an integral component of combination therapy for use in both antiretroviral treatment-naive and experienced HIV-infected patients. To further define treatment responses to new antiretroviral therapy in patients with long-term experience with dual nucleoside analog therapy (zidovudine [ZDV] plus didanosine [ddI] or ZDV plus zalcitabine [ddC]), 325 subjects derived from the AIDS Clinical Trials Group (ACTG) 175 trial were randomized to three different combination regimens: (1) continuation of ZDV + ddI or ZDV + ddC (continuation arm), (2) addition of 3TC to ZDV + ddI or ZDV + ddC (addition arm), or (3) a switch to ZDV + 3TC therapy (switch arm). Both the addition and switch arms sustained significantly greater short-term (baseline to week 4) mean CD4+ cell count increases compared with the continuation arm (+36, +28 versus -4 cells/mm3; p = 0.012) and long-term CD4+ cell count responses (baseline to weeks 40/48: +32, +19 versus -9 cells/mm3; p = 0.003). Superior short-term (baseline to week 8) mean decreases in plasma HIV RNA (p < 0.001) were achieved by both the addition and switch arms (0.53 log10 and 0.54 log10 copies/ml, respectively) compared with the continuation arm (0.13 log10 copies/ml), whereas no differences in long-term virologic suppression were observed (p = 0.30). At week 48, no differences were observed in the proportions of subjects who had HIV RNA levels below 500 copies/ml: 18% of subjects in each treatment arm (3-way p = 1.0). Overall, the treatments were well tolerated, and only nine subjects (3%) died or developed one or more AIDS-defining events. While this study confirms the intrinsic antiretroviral activity of 3TC, only modest marker changes and limited short-term viral suppression are seen with incremental addition of the drug. The current approach of using 3TC in maximally suppressive regimens is preferred.
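To make the reported log10 changes concrete, a short Python sketch converts them into fold reductions in viral load; the values are the ones quoted above.

```python
# Convert the log10 viral-load drops reported above into fold reductions.
# The conversion is simply 10 ** drop; values are taken from the abstract.
for arm, log_drop in [("addition", 0.53), ("switch", 0.54), ("continuation", 0.13)]:
    fold = 10 ** log_drop
    print(f"{arm} arm: {log_drop} log10 = {fold:.1f}-fold reduction in HIV RNA")
```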
Application of TextRank Algorithm for Credibility Assessment
In this article we examine the use of the TextRank algorithm for identifying web content credibility. TextRank has become a widely applied method for automated text summarization. In our research we apply it to see how well it fares in recognizing credible statements in a given corpus. So far, research into the use of NLP algorithms in credibility assessment has focused more on extracting the most informative statements, or on recognizing the relations between claims within a document. In our paper, we use a collection of 100 websites reviewed by human subjects with regard to their credibility, allowing us to check the algorithm's performance on this task. The data collected showed that the TextRank algorithm can be used for recognizing credibility at the level of aggregated statement credibility.
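As a rough illustration of the approach, the following Python sketch scores sentences with a TextRank-style pipeline: a TF-IDF sentence-similarity graph ranked by PageRank. Treating top-ranked sentences as candidate credible statements is a simplification for illustration; the paper's actual corpus and evaluation protocol are not reproduced here.

```python
# Minimal TextRank-style sentence scoring: build a sentence-similarity graph
# and rank sentences with PageRank. The example sentences are invented.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "The site cites peer-reviewed sources for its claims.",
    "Miracle cure discovered, doctors hate this trick!",
    "Data were collected from a national statistics office.",
    "Click here to win a free prize now.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)          # pairwise sentence similarity
graph = nx.from_numpy_array(sim)        # weighted similarity graph
scores = nx.pagerank(graph, weight="weight")

for idx in sorted(scores, key=scores.get, reverse=True):
    print(f"{scores[idx]:.3f}  {sentences[idx]}")
```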
Equity and justice in climate change adaptation amongst natural-resource-dependent societies
Issues of equity and justice are high on international agendas dealing with the impacts of global climate change. But what are the implications of climate change for equity and justice amongst vulnerable groups at local and sub-national levels? We ask this question for three reasons: (a) there is a considerable literature suggesting that the poorest and most vulnerable groups will disproportionately experience the negative effects of 21st century climate change; (b) such changes are likely to impact significantly on developing world countries, where natural-resource dependency is high; and (c) international conventions increasingly recognise the need to centrally engage resource stakeholders in agendas in order to achieve their desired aims, as part of more holistic approaches to sustainable development. These issues, however, have implications for distributive and procedural justice, particularly when considered within the efforts of the UNFCCC. The issues are examined first through an evaluation of key criteria relating to climate change scenarios and vulnerability in the developing world, and second through two southern African case studies that explore the ways in which livelihoods are differentially impacted by (i) inequitable natural-resource use policies and (ii) community-based natural-resource management programmes. Finally, we consider the placement of climate change amongst the package of factors affecting equity in natural-resource use, and whether this placement creates a case for considering climate change as ‘special’ amongst livelihood-disturbing factors in the developing world.
A Panel Data Analysis of the Brain Gain
In this paper, we revisit the impact of skilled emigration on human capital accumulation using new panel data covering 147 countries over the period 1975-2000. We derive testable predictions from a stylized theoretical model and test them in dynamic regression models. Our empirical analysis predicts conditional convergence of human capital indicators. Our findings also reveal that skilled migration prospects foster human capital accumulation in low-income countries. In these countries, a net brain gain can be obtained if the skilled emigration rate is not too large (i.e. does not exceed 20 to 30 percent, depending on other country characteristics). On the contrary, we find no evidence of a significant incentive mechanism in middle-income and, unsurprisingly, in high-income countries. JEL Classifications: O15, O40, F22, F43. Keywords: human capital, convergence, brain drain.
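A minimal sketch of the convergence test the abstract describes, using ordinary least squares on a simulated panel: a negative coefficient on the lagged (log) human capital level signals conditional convergence. The real model includes country controls and dynamic-panel machinery that are omitted here.

```python
# Toy conditional-convergence regression: growth on lagged log level.
# The 300-observation sample is simulated, not the 147-country dataset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
h_lag = rng.uniform(0.5, 12.0, 300)                       # lagged schooling years
growth = 0.8 - 0.15 * np.log(h_lag) + rng.normal(0, 0.2, 300)  # built-in convergence

X = sm.add_constant(np.log(h_lag))
fit = sm.OLS(growth, X).fit()
print(fit.params)   # slope on log(h_lag) should be near -0.15 (negative => convergence)
```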
Alluvial sediment sources in a glaciated catchment: the Voidomatis basin, northwest Greece
X-ray diffraction (XRD) of the fine matrix component of four alluvial units and modern channel sediments in the Voidomatis River Basin of northwest Greece shows that fine sediment sources have changed considerably during the late Quaternary. The matrix fraction of the modern channel sediments is derived predominantly from erosion of local flysch rocks and soils. During the last glaciation, however, the fine sediment load of the Voidomatis River was dominated by glacially-ground, finely comminuted limestone materials. Limestone-derived fine sediment is not produced in significant amounts under modern climatic conditions. By combining this XRD work with a detailed programme of clast lithological analysis we have reconstructed former bedload and fine sediment load composition. The lithological properties of both the coarse (8-256 mm) and fine (< 63 μm) elements of the sediment load have varied markedly during the late Quaternary. A simple, semiquantitative assessment of fine sediment mineralogy, using diffractogram peak-height data, has provided a valuable complement to the information gathered from more traditional clast lithological techniques. Together, in favourable geological settings, fine fraction mineralogy and clast lithological analysis can provide a valuable tool for the reconstruction of late Quaternary alluvial environments.
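The peak-height arithmetic behind such a semiquantitative assessment reduces to a few lines: each mineral's diffractogram peak height is expressed as a share of the summed heights. The heights below are invented for illustration, not measured data.

```python
# Semiquantitative mineralogy from diffractogram peak heights (illustrative values).
peak_heights = {"calcite": 420.0, "quartz": 150.0, "illite": 90.0, "chlorite": 40.0}
total = sum(peak_heights.values())
for mineral, height in peak_heights.items():
    print(f"{mineral:8s} {100 * height / total:5.1f} % of summed peak height")
```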
Train longer, generalize better: closing the generalization gap in large batch training of neural networks
Background: Deep learning models are typically trained using stochastic gradient descent or one of its variants. These methods update the weights using their gradient, estimated from a small fraction of the training data. It has been observed that when using large batch sizes there is a persistent degradation in generalization performance known as the "generalization gap" phenomenon. Identifying the origin of this gap and closing it had remained an open problem. Contributions: We examine the initial high learning rate training phase. We find that the weight distance from its initialization grows logarithmically with the number of weight updates. We therefore propose a "random walk on a random landscape" statistical model which is known to exhibit similar "ultra-slow" diffusion behavior. Following this hypothesis we conducted experiments to show empirically that the "generalization gap" stems from the relatively small number of updates rather than the batch size, and can be completely eliminated by adapting the training regime used. We further investigate different techniques to train models in the large-batch regime and present a novel algorithm named "Ghost Batch Normalization" which enables a significant decrease in the generalization gap without increasing the number of updates. To validate our findings we conduct several additional experiments on MNIST, CIFAR-10, CIFAR-100 and ImageNet. Finally, we reassess common practices and beliefs concerning training of deep models and suggest they may not be optimal to achieve good generalization.
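A minimal PyTorch sketch of the Ghost Batch Normalization idea follows: a large batch is split into small virtual ("ghost") batches, each normalized with its own statistics. The paper's exact handling of running statistics and momentum is not reproduced here.

```python
# Ghost Batch Normalization sketch: per-chunk BatchNorm inside a large batch.
import torch
import torch.nn as nn

class GhostBatchNorm(nn.Module):
    def __init__(self, num_features: int, ghost_batch_size: int = 32):
        super().__init__()
        self.ghost_batch_size = ghost_batch_size
        self.bn = nn.BatchNorm1d(num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return self.bn(x)  # use running statistics at eval time
        # Normalize each small virtual batch with its own statistics.
        chunks = x.split(self.ghost_batch_size, dim=0)
        return torch.cat([self.bn(chunk) for chunk in chunks], dim=0)

gbn = GhostBatchNorm(num_features=16, ghost_batch_size=32)
out = gbn(torch.randn(128, 16))  # 128-sample batch -> four ghost batches
print(out.shape)                 # torch.Size([128, 16])
```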
High boost ratio hybrid transformer DC-DC converter for photovoltaic module applications
This paper presents a non-isolated, high boost ratio hybrid transformer dc-dc converter with applications for low voltage renewable energy sources. The proposed converter utilizes a hybrid transformer to transfer the inductive and capacitive energy simultaneously, achieving a high boost ratio with a smaller size magnetic component. As a result of incorporating the resonant operation mode into the traditional high boost ratio PWM converter, the turn-off loss of the switch is reduced, increasing the efficiency of the converter under all load conditions. The input current ripple is also reduced because of the linear-sinusoidal hybrid waveforms. The voltage stresses on the active switch and diodes are maintained at a low level and are independent of the changing input voltage over a wide range as a result of the resonant capacitor transferring energy to the output. The effectiveness of the proposed converter was experimentally verified using a 220 W prototype circuit. Utilizing an input voltage ranging from 20 V to 45 V and a load range of 30 W to 220 W, the experimental results show system efficiencies greater than 96%, with a peak efficiency of 97.4% at 35 V input and 160 W output. Because of its high efficiency over a wide output power range and its ability to operate over a wide input voltage range, the proposed converter is an attractive design for alternative low dc voltage energy sources, such as solar photovoltaic (PV) modules.
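As a quick sanity check on what "high boost ratio" means over the stated input range, the snippet below computes the required voltage gain. The 20-45 V input range comes from the abstract; the 400 V DC-link target is an assumed figure typical of grid-tied PV, not a value from the paper.

```python
# Back-of-the-envelope boost-ratio check. V_LINK is an assumption for illustration.
V_LINK = 400.0
for v_in in (20.0, 35.0, 45.0):
    print(f"Vin = {v_in:4.0f} V -> required boost ratio = {V_LINK / v_in:.1f}x")
```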
Markov Chain Monte Carlo Maximum Likelihood
Markov chain Monte Carlo (e.g., the Metropolis algorithm and Gibbs sampler) is a general tool for simulation of complex stochastic processes useful in many types of statistical inference. The basics of Markov chain Monte Carlo are reviewed, including choice of algorithms and variance estimation, and some new methods are introduced. The use of Markov chain Monte Carlo for maximum likelihood estimation is explained, and its performance is compared with maximum pseudolikelihood estimation.
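A minimal sketch of the reviewed machinery: random-walk Metropolis sampling of a toy target, followed by a batch-means estimate of the Monte Carlo standard error, one of the variance-estimation approaches such reviews discuss. The standard-normal target and the tuning are illustrative only.

```python
# Random-walk Metropolis for a toy target, plus a batch-means MCSE estimate.
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x * x       # log density up to a constant (std normal)

n, step = 100_000, 1.0
chain = np.empty(n)
x = 0.0
for i in range(n):
    prop = x + step * rng.normal()        # symmetric proposal
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop                          # accept; otherwise keep current state
    chain[i] = x

est = chain.mean()                        # MCMC estimate of E[X]
n_batches = 100
batch_means = chain.reshape(n_batches, -1).mean(axis=1)
mcse = batch_means.std(ddof=1) / np.sqrt(n_batches)
print(f"estimate {est:.4f} +/- {mcse:.4f} (batch-means MCSE)")
```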
Directional Field Synthesis, Design, and Processing
Direction fields and vector fields play an increasingly important role in computer graphics and geometry processing. The synthesis of directional fields on surfaces, or other spatial domains, is a fundamental step in numerous applications, such as mesh generation, deformation, texture mapping, and many more. The wide range of applications has resulted in definitions for many types of directional fields: from vector and tensor fields, through line and cross fields, to frame and vector-set fields. Depending on the application at hand, researchers have used various notions of objectives and constraints to synthesize such fields. These notions are defined in terms of fairness, feature alignment, symmetry, or field topology, to mention just a few. To facilitate these objectives, various representations, discretizations, and optimization strategies have been developed. These choices come with varying strengths and weaknesses. This report provides a systematic overview of directional field synthesis for graphics applications, the challenges it poses, and the methods developed in recent years to address these challenges.
Combining Verbal and Nonverbal Features to Overcome the "Information Gap" in Task-Oriented Dialogue
Dialogue act modeling in task-oriented dialogue poses significant challenges. It is particularly challenging for corpora consisting of two interleaved communication streams: a dialogue stream and a task stream. In such corpora, information can be conveyed implicitly by the task stream, yielding a dialogue stream with seemingly missing information. A promising approach leverages rich resources from both the dialogue and the task streams, combining verbal and nonverbal features. This paper presents work on dialogue act modeling that leverages body posture, which may be indicative of particular dialogue acts. Combining three information sources (dialogue exchanges, task context, and users’ posture), we compared three types of machine learning frameworks. The results indicate that some models better preserve the structure of task-oriented dialogue than others, and that automatically recognized postural features may help to disambiguate user dialogue moves.
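A toy sketch of the feature-fusion step: bag-of-words utterance features concatenated with posture features and fed to a single classifier. Logistic regression stands in for the three frameworks compared in the paper, and the miniature dataset is fabricated.

```python
# Fusing verbal (bag-of-words) and nonverbal (posture) features for dialogue
# act tagging. All data below are invented for illustration.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

utterances = ["how do I attach this part", "ok done", "is this correct", "yes"]
posture = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])  # e.g. lean-in vs. lean-back scores
acts = ["question", "statement", "question", "statement"]

verbal = CountVectorizer().fit_transform(utterances)
features = hstack([verbal, csr_matrix(posture)])     # fused feature space
clf = LogisticRegression().fit(features, acts)
print(clf.predict(features))
```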
Plasticity of ductile metallic glasses: a self-organized critical state.
We report a close correlation between the dynamic behavior of serrated flow and the plasticity in metallic glasses (MGs) and show that the plastic deformation of ductile MGs can evolve into a self-organized critical state characterized by the power-law distribution of shear avalanches. A stick-slip model considering the interaction of multiple shear bands is presented to reveal complex scale-free intermittent shear-band motions in ductile MGs and quantitatively reproduce the experimental observations. Our studies have implications for understanding the precise plastic deformation mechanism of MGs.
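The power-law characterization can be illustrated with the standard continuous maximum-likelihood estimator for the exponent; the avalanche sizes below are synthetic draws, not experimental serration data.

```python
# MLE for a power-law exponent: alpha_hat = 1 + n / sum(ln(s_i / s_min)).
import numpy as np

rng = np.random.default_rng(1)
alpha_true, s_min = 2.5, 1.0
u = rng.uniform(size=10_000)
sizes = s_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF power-law samples

alpha_hat = 1.0 + len(sizes) / np.sum(np.log(sizes / s_min))
print(f"fitted exponent: {alpha_hat:.3f} (true {alpha_true})")
```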
Working memory deficits and social problems in children with ADHD.
Social problems are a prevalent feature of ADHD and reflect a major source of functional impairment for these children. The current study examined the impact of working memory deficits on parent- and teacher-reported social problems in a sample of children with ADHD and typically developing boys (N=39). Bootstrapped, bias-corrected mediation analyses revealed that the impact of working memory deficits on social problems is primarily indirect. That is, impaired social interactions in children with ADHD reflect, to a significant extent, the behavioral outcome of being unable to maintain a focus of attention on information within working memory while simultaneously dividing attention among multiple, on-going events and social cues occurring within the environment. Central executive deficits impacted social problems through both inattentive and impulsive-hyperactive symptoms, whereas the subsidiary phonological and visuospatial storage/rehearsal systems demonstrated a more limited yet distinct relationship with children's social problems.
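A simplified version of the mediation test is sketched below: bootstrap the indirect effect a*b of working memory deficits on social problems via symptoms. A plain percentile bootstrap is used for brevity (the study used a bias-corrected variant), and all data are simulated.

```python
# Percentile-bootstrap test of an indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(2)
n = 39
wm = rng.normal(size=n)                        # working memory deficit (X)
symptoms = 0.6 * wm + rng.normal(size=n)       # mediator (M)
social = 0.5 * symptoms + 0.1 * wm + rng.normal(size=n)  # outcome (Y)

def indirect(idx):
    a = np.polyfit(wm[idx], symptoms[idx], 1)[0]          # X -> M slope
    # M -> Y slope controlling for X, via least squares on [M, X, 1]
    design = np.column_stack([symptoms[idx], wm[idx], np.ones(len(idx))])
    b = np.linalg.lstsq(design, social[idx], rcond=None)[0][0]
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 => mediation
```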
Coronary surgery for acute coronary syndrome: which determinants of outcome remain?
The mortality risk associated with coronary artery bypass grafting (CABG) after acute myocardial infarction remains controversial. The objective of the present study was therefore to analyze the outcome and predictors of in-hospital mortality in patients (pts) referred for CABG with acute coronary syndrome (ACS). Between January 2003 and May 2005, a total of 3,127 pts underwent primary isolated CABG at our institution, including 220 pts with ACS. Of these, 88 pts had unstable angina pectoris (group I), 97 pts had non-ST-elevation infarction (group II), and 35 pts had ST-elevation infarction (group III). Clinical data, in-hospital morbidity and mortality were recorded and studied retrospectively. Overall in-hospital mortality was 6.4% (n = 14) in the complete cohort: 2.2% in group I (n = 2), 9.2% in group II (n = 9) and 8.5% in group III (n = 3) (P < 0.05). Logistic regression and receiver operating characteristic analyses identified age, NYHA class, ejection fraction < 45%, catecholamine support, cardiogenic shock, renal disease and an additive EuroSCORE > 10 (P < 0.0001) as significant predictors of in-hospital mortality. In the STEMI group, the mean time from the onset of symptoms to revascularization differed significantly between survivors (5.1 ± 2.7 h) and non-survivors (11.4 ± 3.2 h) (P < 0.0007). Preoperative cTnI did not provide any prognostic information. CABG in pts with ACS can be performed with good clinical results. The clinical outcome depends in particular on the ACS subgroup; individual risk stratification of each patient with ACS is therefore necessary. A time interval of 6 h appears to be a crucial prognostic variable in the STEMI group.
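The modeling workflow the abstract describes, logistic regression followed by ROC analysis, looks roughly like the sketch below. The cohort is simulated; only the predictor names echo the study's variables.

```python
# Logistic regression + ROC AUC on a simulated ACS cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 220
X = np.column_stack([
    rng.normal(67, 9, n),          # age (years)
    rng.binomial(1, 0.3, n),       # ejection fraction < 45%
    rng.binomial(1, 0.15, n),      # cardiogenic shock
    rng.normal(8, 3, n),           # additive EuroSCORE
])
# Simulated true risk model used only to generate outcomes.
logit = -8 + 0.06 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] + 0.15 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # in-hospital death indicator

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample ROC AUC: {auc:.2f}")
```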
Radiation effect on boundary layer flow of an Eyring–Powell fluid over an exponentially shrinking sheet
Keywords: Boundary layer flow; Eyring–Powell model; Thermal radiation. The aim of this paper was to examine the steady boundary layer flow of an Eyring–Powell model fluid due to an exponentially shrinking sheet. In addition, the heat transfer process in the presence of thermal radiation is considered. Using the usual similarity transformations, the governing equations have been transformed into non-linear ordinary differential equations. The homotopy analysis method (HAM) is employed for the series solutions. The convergence of the obtained series solutions is carefully analyzed. Numerical values of the temperature gradient are presented and discussed. It is observed that velocity increases with an increase in mass suction S, whereas the temperature profiles show the opposite behavior as suction increases. Moreover, the thermal boundary layer thickness decreases with increases in the Prandtl number Pr and the thermal radiation parameter R.
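To illustrate the similarity-transformation-and-solve step in general terms, the sketch below solves the classical Blasius equation f''' + 0.5 f f'' = 0 by shooting. This is a stand-in, not the Eyring–Powell system from the paper, and the paper used HAM rather than a numerical shooting method.

```python
# Shooting solution of a generic boundary-layer similarity ODE (Blasius).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):                      # y = [f, f', f'']
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def fp_at_infinity(fpp0):
    # Integrate from the wall with guessed curvature f''(0) = fpp0.
    sol = solve_ivp(rhs, [0, 10], [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0         # enforce the far-field condition f'(inf) = 1

fpp0 = brentq(fp_at_infinity, 0.1, 1.0)   # shoot on the wall curvature
print(f"f''(0) = {fpp0:.5f}  (classical Blasius value ~0.33206)")
```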
Clinical Concept Extraction with Contextual Word Embedding
Automatic extraction of clinical concepts is an essential step for turning the unstructured data within a clinical note into structured and actionable information. In this work, we propose a clinical concept extraction model for automatic annotation of clinical problems, treatments and tests in clinical notes utilizing domain-specific contextual word embeddings. A contextual word embedding model is first trained on a corpus with a mixture of clinical reports and relevant Wikipedia pages in the clinical domain. Next, a bidirectional LSTM-CRF model is trained for clinical concept extraction using the contextual word embedding model. We tested our proposed model on the i2b2 2010 challenge dataset. Our proposed model achieved the best performance among reported baseline models and outperformed the state-of-the-art models by 3.4% in terms of F1-score.
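A skeleton of such a tagger is sketched below: embeddings feeding a BiLSTM with per-token scores over BIO concept tags. The contextual embedding model and the CRF output layer described above are deliberately omitted; randomly initialized embeddings and a softmax-style head stand in for both.

```python
# BiLSTM token tagger skeleton for clinical concept extraction (CRF omitted).
import torch
import torch.nn as nn

TAGS = ["O", "B-problem", "I-problem", "B-treatment", "I-treatment", "B-test", "I-test"]

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, len(TAGS))

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)            # (batch, seq_len, num_tags) emission scores

model = BiLSTMTagger(vocab_size=5000)
scores = model(torch.randint(0, 5000, (2, 12)))  # two 12-token notes
print(scores.argmax(-1).shape)        # predicted tag ids: torch.Size([2, 12])
```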
Paraphrasing Out-of-Vocabulary Words with Word Embeddings and Semantic Lexicons for Low Resource Statistical Machine Translation
Out-of-vocabulary (OOV) words are a crucial problem in statistical machine translation (SMT) with low resources. OOV paraphrasing, which augments the translation model for OOV words by using the translation knowledge of their paraphrases, has been proposed to address the OOV problem. In this paper, we propose using word embeddings and semantic lexicons for OOV paraphrasing. Experiments conducted on a low resource setting of the OLYMPICS task of IWSLT 2012 verify the effectiveness of our proposed method.
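A toy version of the paraphrasing step: rank in-vocabulary candidates for an OOV word by embedding cosine similarity, then keep only candidates sanctioned by a semantic lexicon. The vectors and lexicon are fabricated miniatures, not the paper's resources.

```python
# OOV paraphrase selection via embedding similarity plus a lexicon filter.
import numpy as np

emb = {                                # toy 3-d word embeddings
    "snowboarding": np.array([0.9, 0.1, 0.0]),
    "skiing":       np.array([0.8, 0.2, 0.1]),
    "surfing":      np.array([0.6, 0.1, 0.6]),
    "cooking":      np.array([0.0, 0.9, 0.2]),
}
lexicon = {"snowboarding": {"skiing", "surfing"}}   # synonym/related-word sets

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

oov = "snowboarding"                   # assume the SMT system lacks this word
cands = [w for w in emb if w != oov and w in lexicon[oov]]
best = max(cands, key=lambda w: cos(emb[oov], emb[w]))
print(f"translate '{oov}' via its paraphrase '{best}'")
```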
Vitamin D deficiency and depression in adults: systematic review and meta-analysis
Depression is associated with significant disability, mortality and healthcare costs. It is the third leading cause of disability in high-income countries, 1 and affects approximately 840 million people worldwide. 2 Although biological, psychological and environmental theories have been advanced, 3 the underlying pathophysiology of depression remains unknown and it is probable that several different mechanisms are involved. Vitamin D is a unique neurosteroid hormone that may have an important role in the development of depression. Receptors for vitamin D are present on neurons and glia in many areas of the brain including the cingulate cortex and hippocampus, which have been implicated in the pathophysiology of depression. 4 Vitamin D is involved in numerous brain processes including neuroimmuno-modulation, regulation of neurotrophic factors, neuroprotection, neuroplasticity and brain development, 5 making it biologically plausible that this vitamin might be associated with depression and that its supplementation might play an important part in the treatment of depression. Over two-thirds of the populations of the USA and Canada have suboptimal levels of vitamin D. 6,7 Some studies have demonstrated a strong relationship between vitamin D and depression, 8,9 whereas others have shown no relationship. 10,11 To date there have been eight narrative reviews on this topic, 12–19 with the majority of reviews reporting that there is insufficient evidence for an association between vitamin D and depression. None of these reviews used a comprehensive search strategy, provided inclusion or exclusion criteria, assessed risk of bias or combined study findings. In addition, several recent studies were not included in these reviews. 9,10,20,21 Therefore, we undertook a systematic review and meta-analysis to investigate whether vitamin D deficiency is associated with depression in adults in case–control and cross-sectional studies; whether vitamin D deficiency increases the risk of developing depression in cohort studies in adults; and whether vitamin D supplementation improves depressive symptoms in adults with depression compared with placebo, or prevents depression compared with placebo, in healthy adults in randomised controlled trials (RCTs). We searched the databases MEDLINE, EMBASE, PsycINFO, CINAHL, AMED and Cochrane CENTRAL (up to 2 February 2011) using separate comprehensive strategies developed in consultation with an experienced research librarian (see online supplement DS1). A separate search of PubMed identified articles published electronically prior to print publication within 6 months of our search and therefore not available through MEDLINE. The clinical trials registries clinicaltrials.gov and Current Controlled Trials (controlled-trials.com) were searched for unpublished data. The reference lists …
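The pooling machinery behind such a meta-analysis is typically a DerSimonian-Laird random-effects model; a minimal sketch follows, with invented effect sizes rather than results from this review.

```python
# DerSimonian-Laird random-effects pooling (illustrative placeholder data).
import numpy as np

y = np.array([0.30, 0.12, 0.45, 0.05, 0.25])   # per-study effect sizes
v = np.array([0.02, 0.05, 0.04, 0.03, 0.06])   # per-study variances

w = 1.0 / v                                    # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)   # heterogeneity statistic Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance estimate

w_star = 1.0 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled effect {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
```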
On the Principle of Privacy by Design and its Limits: Technology, Ethics and the Rule of Law
In the first edition of The Sciences of the Artificial (1969), Herbert A. Simon lamented the lack of research on “the science of design” that characterized the curricula of both professional schools and universities throughout the three decades after the Second World War. In the phrasing of the Nobel laureate, the reason hinged on academic respectability, because “in terms of the prevailing norms, academic respectability calls for subject matter that is intellectually tough, analytic, formalizable, and teachable. In the past much, if not most, of what we knew about design and about artificial sciences was intellectually soft, intuitive, informal, and cook-booky” (Simon 1996, 112). Thirty years later, in Code and Other Laws of Cyberspace (1999), Lawrence Lessig similarly stressed the lack of research on the impact of design on both social relationships and the functioning of legal systems, that is, on how human behaviour may be shaped by the design of spaces, places and artefacts (op. cit., pp. 91–92). Thenceforth, the scenario has dramatically changed. According to Simon, an academically respectable “science of design” has emerged since the mid-1970s, when the Design Research Centre was founded at Carnegie Mellon University (the institute became the “Engineering Design Research Centre” in 1985). Moreover, over the last 10 years, legal scholars and social scientists have increasingly focused on the ethical and political implications of employing design mechanisms to determine people’s behaviour through the shaping of products, processes, and Information & Communication Technology (ICT) interfaces and platforms. On one hand, let me mention work on the regulatory aspects of technology in such fields as universal usability (Shneiderman 2000); informed consent (Friedman et al. 2002); crime control and architecture (Katyal 2002, 2003); social justice (Borning et al. 2004); allegedly perfect self-enforcement technologies on the internet (Zittrain 2007); and design-based instruments for implementing social policies (Yeung 2007).
Consensus Attention-based Neural Networks for Chinese Reading Comprehension
Reading comprehension has seen a boom in recent NLP research. Several institutes have released Cloze-style reading comprehension data, which has greatly accelerated research on machine comprehension. In this work, we first present Chinese reading comprehension datasets, which consist of a People Daily news dataset and a Children’s Fairy Tale (CFT) dataset. We also propose a consensus attention-based neural network architecture to tackle the Cloze-style reading comprehension problem, which aims to induce a consensus attention over every word in the query. Experimental results show that the proposed neural network significantly outperforms state-of-the-art baselines on several public datasets. Furthermore, we set up a baseline for the Chinese reading comprehension task, which we hope will speed up future research.
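The consensus-attention idea can be sketched in a few lines of numpy: each query word induces an attention distribution over document words, and the per-word distributions are merged (for example by averaging or taking the maximum) into one consensus distribution. The vectors below are random stand-ins for learned hidden states.

```python
# Consensus attention: merge per-query-word attention into one distribution.
import numpy as np

rng = np.random.default_rng(4)
doc = rng.normal(size=(20, 64))        # 20 document-word hidden states
query = rng.normal(size=(5, 64))       # 5 query-word hidden states

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

alpha = softmax(query @ doc.T)         # (5, 20): one distribution per query word

consensus_avg = softmax(alpha.mean(axis=0))   # "avg" merging mode
consensus_max = softmax(alpha.max(axis=0))    # "max" merging mode
print(f"predicted answer position: {int(consensus_avg.argmax())}")
```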
β-globin gene transfer to human bone marrow for sickle cell disease.
Autologous hematopoietic stem cell gene therapy is an approach to treating sickle cell disease (SCD) patients that may result in lower morbidity than allogeneic transplantation. We examined the potential of a lentiviral vector (LV) (CCL-βAS3-FB) encoding a human hemoglobin (HBB) gene engineered to impede sickle hemoglobin polymerization (HBBAS3) to transduce human BM CD34+ cells from SCD donors and prevent sickling of red blood cells produced by in vitro differentiation. The CCL-βAS3-FB LV transduced BM CD34+ cells from either healthy or SCD donors at similar levels, based on quantitative PCR and colony-forming unit progenitor analysis. Consistent expression of HBBAS3 mRNA and HbAS3 protein comprised a fourth of the total β-globin-like transcripts and hemoglobin (Hb) tetramers. Upon deoxygenation, a lower percentage of HBBAS3-transduced red blood cells exhibited sickling compared with mock-transduced cells from sickle donors. Transduced BM CD34+ cells were transplanted into immunodeficient mice, and the human cells recovered after 2-3 months were cultured for erythroid differentiation, which showed levels of HBBAS3 mRNA similar to those seen in the CD34+ cells that were directly differentiated in vitro. These results demonstrate that the CCL-βAS3-FB LV is capable of efficient transfer and consistent expression of an effective anti-sickling β-globin gene in human SCD BM CD34+ progenitor cells, improving physiologic parameters of the resulting red blood cells.
Trends in Arrests of "Online Predators": How the National Juvenile Online Victimization (N-JOV) Study Was Conducted
about " online predators " * – sex of‐ fenders who use the Internet to meet juvenile victims – has raised considerable alarm about the extent to which Internet use may be put‐ ting children and adolescents at risk for sexual abuse and exploitation. Media stories and Internet safety messages have raised fears by describing violent offenders who use the Inter‐ net to prey on naïve children by tricking them into face‐to‐face meetings or tracking them down through information posted online. Law enforcement has mobilized on a number of fronts, setting up task forces to identify and prosecute online predators, developing under‐ cover operations, and urging social networking sites to protect young users. Unfortunately, however, reliable information on the scope and nature of the online predator problem remains scarce. Established criminal justice data collection systems do not gather detailed data on such crimes that could help inform public policy and education. To remedy this information vacuum, the Crimes against Children Research Center at the University of New Hampshire conducted two waves of a The N‐JOV Study collected information from a national sample of law en‐ forcement agencies about the prevalence of arrests for and characteristics of online sex crimes against minors during two 12 month periods: July 1, 2000 through June 30, 2001 (Wave 1) and calendar year 2006 (Wave 2). For both Waves, we used a two‐phase process of mail surveys followed by telephone interviews to collect data from a national sample of the same lo‐ cal, county, state, and federal law enforcement agencies. First, we sent the mail surveys to a national sample of more than 2,500 agencies. These sur‐ veys asked if agencies had made arrests for online sex crimes against minors during the respective one‐year timeframes. Then we conducted detailed telephone interviews with law enforcement investigators about a random sample of arrest cases reported in the mail surveys. For the telephone interviews, we designed a sampling procedure that took into account the number of arrests reported by an agency, so that we would not unduly burden respondents in agencies with many cases. If an agency reported between one and three arrests for online sex crimes, we conducted follow‐up interviews for every case. For agencies that reported more than three arrests, we conducted interviews for all cases that involved youth vic‐ tims (victims who were located and contacted during the investigation), and sampled other arrest cases (i.e., …
The Origins of French Art Criticism: From the Ancien Regime to the Restoration
List of Plates. List of Figures. Photographic Credits. Abbreviations. Introduction. 1: The Salon in Context. 2: The Salon. 3: In Search of an Art Public. 4: Between the Studio and the Salon. 5: Censorship and Diffusion of Criticism during the Ancien Regime. 6: The Status of Criticism. 7: The Language of Art Criticism. 8: The Hierarchy of the Genres. Conclusion. Appendix I: Independent Exhibitions, Lotteries, and Subscription Schemes. Appendix II: Some Extensions to Salon Exhibitions. Appendix III: Production of Salon Pamphlets and Press Reviews. Appendix IV. Bibliography. Index
Major advances associated with environmental effects on dairy cattle.
It has long been known that season of the year has major impacts on dairy animal performance measures including growth, reproduction, and lactation. Additionally, as average production per cow has doubled, the metabolic heat output per animal has increased substantially, rendering animals more susceptible to heat stress. This, in turn, has altered cooling and housing requirements for cattle. Substantial progress has been made in the last quarter-century in delineating the mechanisms by which thermal stress and photoperiod influence performance of dairy animals. Acclimation to thermal stress is now identified as a homeorhetic process under endocrine control. The process of acclimation occurs in 2 phases (acute and chronic) and involves changes in secretion rate of hormones as well as receptor populations in target tissues. The time required to complete both phases is weeks rather than days. The opportunity may exist to modify endocrine status of animals and improve their resistance to heat and cold stress. New estimates of genotype × environment interactions support use of recently available molecular and genomics tools to identify the genetic basis of heat-stress sensitivity and tolerance. Improved understanding of environmental effects on nutrient requirements has resulted in diets tailored for dairy animals during different weather conditions. Demonstration that estrus behavior is adversely affected by heat stress has led to increased use of timed insemination schemes during the warm summer months, which improve conception rates by eliminating the need to detect estrus. Studies evaluating the effects of heat stress on embryonic survival support use of cooling during the immediate postbreeding period and use of embryo transfer to improve pregnancy rates. Successful cooling strategies for lactating dairy cows are based on maximizing available routes of heat exchange: convection, conduction, radiation, and evaporation. Areas in dairy operations in which cooling systems have been used to enhance cow comfort, improve milk production, reproductive efficiency, and profit include both housing and milking facilities. Currently, air movement (fans), wetting (soaking) the cow's body surface, high-pressure mist (evaporation) to cool the air in the cows' environment, and facilities designed to minimize the transfer of solar radiation are used for heat abatement. Finally, improved understanding of photoperiod effects on cattle has allowed producers to maximize beneficial effects of photoperiod length while minimizing negative effects.
Influence of smoking and plasma factors on patency of femoropopliteal vein grafts.
OBJECTIVE To determine the effects of smoking, plasma lipids, lipoproteins, apolipoproteins, and fibrinogen on the patency of saphenous vein femoropopliteal bypass grafts at one year. DESIGN Prospective study of patients with saphenous vein femoropopliteal bypass grafts entered into a multicentre trial. SETTING Surgical wards, outpatient clinics, and home visits coordinated by two tertiary referral centres in London and Birmingham. PATIENTS 157 patients (mean age 66.6 (SD 8.2) years), 113 with patent grafts and 44 with occluded grafts one year after bypass. MAIN OUTCOME MEASURE Cumulative percentage patency at one year. RESULTS Markers for smoking (blood carboxyhaemoglobin concentration (p < 0.05) and plasma thiocyanate concentration (p < 0.01)) and plasma concentrations of fibrinogen (p < 0.001) and apolipoproteins AI (p < 0.04) and (a) (p < 0.05) were significantly higher in patients with occluded grafts. Serum cholesterol concentrations were significantly higher in patients with grafts that remained patent one year after bypass (p < 0.005). Analysis of the smoking markers indicated that a quarter of patients (40) were untruthful in their claims to have stopped smoking. Based on smoking markers, patency of grafts in smokers was significantly lower at one year by life table analysis than in non-smokers (63% v 84%, p < 0.02). Patency was significantly higher by life table analysis in patients with a plasma fibrinogen concentration below the median than in those with a concentration above it (90% v 57%, p < 0.0002). Surprisingly, increased plasma low density lipoprotein cholesterol concentration was significantly associated with improved patency at one year: 85% at values above the median compared with only 68% at values in the lower half of the range (p < 0.02). CONCLUSIONS Plasma fibrinogen concentration was the most important variable predicting graft occlusion, followed by smoking markers. A more forceful approach is needed to stop patients smoking; therapeutic measures to improve patency of vein grafts should focus on decreasing plasma fibrinogen concentration rather than serum cholesterol concentration.
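The life table (Kaplan-Meier) comparisons reported above rest on the product-limit estimator, sketched here with invented follow-up data; 1 marks graft occlusion and 0 censoring.

```python
# Product-limit (Kaplan-Meier) patency estimate on illustrative data.
import numpy as np

t = np.array([2, 3, 5, 6, 6, 8, 10, 12, 12, 12])   # months of follow-up
d = np.array([1, 0, 1, 1, 0, 1, 0,  0,  1,  0])    # 1 = occluded, 0 = censored

surv = 1.0
for ti in np.unique(t[d == 1]):                    # step down at each event time
    at_risk = np.sum(t >= ti)
    events = np.sum((t == ti) & (d == 1))
    surv *= 1.0 - events / at_risk
    print(f"month {ti:2d}: patency = {surv:.2f}")
```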
Substance use and associated factors among Debre Berhan University students, Central Ethiopia
BACKGROUND Being a global burden on youth, substance use is an unhealthy behavior that exposes young people to health and social problems. Knowledge of the prevalence and predictors of substance use behavior among university students is important for designing periodic and locally appropriate interventions. This study was conducted to assess the prevalence and predictors of substance use among Debre Berhan University students. METHODS A cross-sectional quantitative study was employed in May 2016. A stratified two-stage sampling technique was applied to choose 695 students. Substance use behaviors were assessed using tools derived from the World Health Organization Model Students' Substance Use Core Questionnaire. RESULT The lifetime use of alcohol, khat and cigarettes among students was found to be 36.3%, 10.9% and 7.4% respectively. The lifetime use of shisha and cannabis was 4.2% and 4.5% respectively. About 17%, 5.7%, and 3.1% of students currently use alcohol, khat and cigarettes respectively. In multivariate binary logistic regression, being male, eating outside the university cafeteria, coming from a private preparatory school, having a higher monthly income, and having substance-using family members and friends were significantly associated with students' substance use behaviors. CONCLUSIONS The current prevalence of substance use among Debre Berhan University students is low compared with other Ethiopian and African universities. Youth are starting substance use at lower grades, especially in preparatory schools. Substance use behaviors are affected by complex factors at the individual, family, school, social, and environmental levels. Therefore, strategies to alleviate youth substance use problems should focus on changing individual perceptions, knowledge, and intentions towards substances. There is a need for further research with a more powerful sample size and weighted estimates using complex analysis. Reasons for the lower prevalence of substance use relative to other Ethiopian universities should be further explored using qualitative study.
A sentimental journey through France and Italy; with The journal to Eliza; and A political romance
This book is far removed from the conventional travel book, and is fiction rather than reportage. Sterne travelled extensively in the 1760s, and drew on his experiences to write the narrative of Mr Yorick, the Sentimental traveller. The Journal to Eliza and A Political Romance both demonstrate the rare early satire which marked the beginning of the major phase of Sterne's career.