title | abstract |
---|---|
The social context of severe child malnutrition: a qualitative household case study from a rural area of the Democratic Republic of Congo | INTRODUCTION
The magnitude of child malnutrition, including severe child malnutrition, is especially high in rural areas of the Democratic Republic of Congo (the DRC). The aim of this qualitative study is to describe the social context of malnutrition in a rural part of the DRC and to explore how some households succeed in ensuring that their children are well-nourished while others do not.
METHODOLOGY
This study is based on participant observation, key informant interviews, group discussions and in-depth interviews with four households with malnourished children and four with well-nourished children. We apply social field theory to link individual child nutritional outcomes to processes at the local level and to the wider socio-economic environment.
FINDINGS
We identified four social fields that have implications for food security and child nutritional outcomes: 1) household size and composition which determined vulnerability to child malnutrition, 2) inter-household cooperation in the form of 'gbisa work party' which buffered scarcity of labour in peak seasons and facilitated capital accumulation, 3) the village associated with usufruct rights to land, and 4) the local NGO providing access to agricultural support, clean drinking water and health care.
CONCLUSIONS
Households that participated in inter-household cooperation were able to improve food and nutrition security. Children living in households with high pressure on productive members were at risk of food insecurity and malnutrition. Nutrition interventions need to involve local institutions for inter-household cooperation and address the problem of social inequalities in service provision. They should focus in particular on households with few resources in the form of land, labour and capital. |
Mathematical modeling of microRNA-mediated mechanisms of translation repression. | MicroRNAs can affect protein translation through nine mechanistically distinct mechanisms, including repression of initiation and degradation of the transcript. There is a lively debate in the current literature about which mechanism, and in which situations, plays a dominant role in living cells. Worse, experimental systems dealing with the same pairs of mRNA and miRNA can provide ambiguous evidence about which mechanism of translation repression is actually observed in the experiment. We start by reviewing the current knowledge of the various mechanisms of miRNA action and suggest that mathematical modeling can help resolve some of the controversial interpretations. We describe three simple mathematical models of miRNA-mediated translation that can be used as tools for interpreting experimental data on the dynamics of protein synthesis. The most complex model we developed includes all known mechanisms of miRNA action. It allowed us to study the possible dynamical patterns corresponding to different miRNA-mediated mechanisms of translation repression and to suggest concrete recipes for determining the dominant mechanism of miRNA action in the form of kinetic signatures. Using computational experiments and by systematizing existing evidence from the literature, we justify the hypothesis that distinct miRNA-mediated mechanisms of translation repression co-exist. The mechanism actually observed will be the one acting on, or changing, the sensitive parameters of the translation process; this limiting step can vary from one experimental setting to another. The model explains the majority of the controversies reported. |
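The abstract does not give the models themselves; as a hedged illustration of the kind of kinetic model it describes, the following is a minimal two-state sketch of initiation repression (free mRNA is translated, miRNA-bound mRNA is not), integrated with a simple Euler scheme. All rate constants are illustrative assumptions, not the authors' fitted values.

```python
def simulate(k_b, t_end=300.0, dt=0.01):
    """Euler integration of a toy miRNA repression model.

    States: M (free mRNA), Mb (miRNA-bound mRNA, not translated), P (protein).
    k_b is the miRNA binding rate; k_b = 0 means no repression.
    All rate constants below are illustrative, not fitted values.
    """
    k_s, k_u, d_M, d_Mb, k_p, d_P = 1.0, 0.1, 0.05, 0.2, 2.0, 0.1
    M = Mb = P = 0.0
    for _ in range(int(t_end / dt)):
        dM  = k_s - k_b * M + k_u * Mb - d_M * M   # synthesis, binding, unbinding, decay
        dMb = k_b * M - k_u * Mb - d_Mb * Mb       # bound pool, often degraded faster
        dP  = k_p * M - d_P * P                    # only free mRNA is translated
        M, Mb, P = M + dt * dM, Mb + dt * dMb, P + dt * dP
    return M, Mb, P
```

Comparing the protein trajectory with and without binding (`k_b > 0` vs `k_b = 0`) is the kind of kinetic signature the abstract refers to: repression of initiation lowers the steady-state protein level without requiring transcript degradation.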
Bangladeshi dialect recognition using Mel Frequency Cepstral Coefficient, Delta, Delta-delta and Gaussian Mixture Model | Automatic recognition systems are widely and successfully applied in speech processing to categorize observed utterances by speaker identity, dialect and language. A great deal of research has been performed to detect speech, dialects and languages of different regions throughout the world, but work on the dialects of Bangladesh remains scarce. These dialects, in turn, differ quite a bit from each other. In this paper, we present a method to detect different Bangladeshi dialects which utilizes Mel Frequency Cepstral Coefficients (MFCC) and their Delta and Delta-delta derivatives as the main features, and Gaussian Mixture Models (GMM) to classify the characteristics of a specific dialect. In particular, we extract the MFCCs, Deltas and Delta-deltas from the speech signal; these are then merged to form a feature vector for a specific dialect. A GMM is trained using the iterative Expectation Maximization (EM) algorithm, with the feature vectors serving as input. The scheme is tested on 5 databases of 30 speech samples each, containing dialects of the Borishal, Noakhali, Sylhet, Chittagong and Chapai Nawabganj regions of Bangladesh. Experiments show that GMM adaptation gives comparably good performance. |
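The Delta and Delta-delta features mentioned above are standard regression-based time derivatives of the MFCC trajectory. As a sketch (assuming the MFCC matrix itself is already computed, e.g. by a signal-processing library), the usual formula d_t = sum_{n=1..N} n (c_{t+n} - c_{t-n}) / (2 sum_{n=1..N} n^2), with edge padding, looks like:

```python
def delta(feats, N=2):
    """Regression-based delta over a sequence of feature vectors.

    feats: list of T frames, each a list of coefficients (e.g. MFCCs).
    Returns a list of the same shape; apply twice for delta-delta.
    """
    T = len(feats)
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = []
    for t in range(T):
        d = [0.0] * len(feats[0])
        for n in range(1, N + 1):
            plus = feats[min(t + n, T - 1)]    # pad by repeating edge frames
            minus = feats[max(t - n, 0)]
            for i in range(len(d)):
                d[i] += n * (plus[i] - minus[i]) / denom
        out.append(d)
    return out
```

The final feature vector per frame would then concatenate the MFCCs with `delta(mfcc)` and `delta(delta(mfcc))` before GMM training.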
Chip-scale IMU using a folded-MEMS approach | This paper reports a new approach to the design and fabrication of chip-level inertial measurement units (IMUs). The folded-chip method utilizes a 3-D foldable silicon-on-insulator (SOI) backbone suitable for high-aspect-ratio sensor fabrication. Assembly is done at the wafer level, forming a compact rigid 6-axis system of sensors. Accelerometers and gyroscopes are fabricated in parallel with the folded structure on the same substrate, and are electrically and mechanically interfaced through integrated flexible polyimide film hinges and interlocking silicon latches. To demonstrate the feasibility of the approach, folded IMUs were fabricated containing resonant capacitive accelerometers and gyroscopes. The scale factor of the accelerometers is tunable over a range from 1.75 Hz/g to 3.7 Hz/g. Driving the gyroscope on the pyramid sidewall at a 1.5 kHz operational frequency, the rotation rate was characterized in air to demonstrate operation with a scale factor of 0.43 mV/(deg/sec). Structural rigidity was verified by subjecting the IMU to oscillations from 50 Hz to 3.5 kHz over an amplitude range of 5-25 g. The results confirm the feasibility of the proposed folded-MEMS IMU approach, and may enable new integrated architectures for other multi-axis dynamic sensors, such as 3-D microphones, hydrophones, and ultrasonic transducers. |
Fiscal Volatility Shocks and Economic Activity | We study the effects of changes in uncertainty about future fiscal policy on aggregate economic activity. In light of large fiscal deficits and high public debt levels in the U.S., a fiscal consolidation seems inevitable. However, there is notable uncertainty about the policy mix and timing of such a budgetary adjustment. To evaluate the consequences of this increased uncertainty, we first estimate tax and spending processes for the U.S. that allow for time-varying volatility. We then feed these processes into an otherwise standard New Keynesian business cycle model calibrated to the U.S. economy. We find that fiscal volatility shocks can have a sizable adverse effect on economic activity. |
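A time-varying-volatility process of the kind the abstract describes can be sketched as an AR(1) policy rule whose innovation volatility follows its own AR(1) in logs, so that a "fiscal volatility shock" moves the volatility state rather than the tax rate directly. All parameter values below are illustrative assumptions, not the authors' estimates.

```python
import math
import random

def simulate_tax(T=500, rho=0.95, tau_bar=0.20,
                 rho_sig=0.9, sig_bar=-4.0, eta=0.1, seed=0):
    """Simulate a tax-rate path with stochastic volatility.

    tau_t = (1-rho)*tau_bar + rho*tau_{t-1} + exp(sig_t)*eps_t
    sig_t = (1-rho_sig)*sig_bar + rho_sig*sig_{t-1} + eta*u_t
    Parameter values are illustrative only.
    """
    rng = random.Random(seed)
    tau, sig = tau_bar, sig_bar
    path = []
    for _ in range(T):
        sig = (1 - rho_sig) * sig_bar + rho_sig * sig + eta * rng.gauss(0, 1)
        tau = (1 - rho) * tau_bar + rho * tau + math.exp(sig) * rng.gauss(0, 1)
        path.append(tau)
    return path
```

Feeding such a simulated process into a calibrated business cycle model, as the paper does, lets one isolate the effect of volatility innovations (`eta*u_t`) from level innovations.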
Corporate Social Responsibility and Values in Innovation Management | The corporate social responsibility (CSR) viewpoint has challenged the traditional perception of the corporation's position. Production- and managerial-centred views are expanding towards reference-group-centred policies. Consequently, the need for a new kind of knowledge has emerged. In addition to management of the organisation, the idea of CSR emphasises the importance of recognising the value-expectations of the operational environment. It is known that management is often well aware of corporate social responsibilities, but it is less clear how well these high-level goals are understood in practical product design and development work. In this study, the apprehension above proved to be real to some degree: while management was very aware of CSR, it was less familiar to designers. The outcome shows that it is essential to raise ethical values and issues higher in corporate communication if they are to materialize in products as well. Keywords—Corporate social responsibility, management, engineering, values. |
Detection, Tracking, and Interdiction for Amateur Drones | Unmanned aerial vehicles (UAVs), also known as drones, are expected to play major roles in future smart cities, for example by delivering goods and merchandise, serving as mobile hotspots for broadband wireless access, and maintaining surveillance and security. The goal of this survey article is to study various potential cyber and physical threats that may arise from the use of UAVs, and subsequently to review various ways to detect, track, and interdict malicious drones. In particular, we review techniques that rely on ambient radio frequency signals (emitted from UAVs), radars, acoustic sensors, and computer vision for the detection of malicious UAVs. We present some early experimental and simulation results on range estimation of UAVs and receding horizon tracking of UAVs. Finally, we summarize common techniques that are considered for the interdiction of UAVs. |
[Dialysis and the risk of poverty]. | INTRODUCTION
For patients with severe kidney failure, the only alternative to transplantation today is long-term dialysis treatment. But dialysis is associated with manifold physical and social restrictions. The study analyses the psychosocial consequences of chronic renal insufficiency: does chronic kidney failure increase the patients' risk of sinking into poverty?
SAMPLE/METHODS
In 2006, 625 dialysis patients participated in a survey in 77 dialysis centres in Germany. The newly developed questionnaire included 19 items about social situation, treatment conditions, and quality of life. The response rate was 54.3%. The analyses were calculated using descriptive statistics and discriminant analyses.
RESULTS
51.8% of the patients lived in the new German federal states (the former GDR) and 44.9% were female. The mean age of the sample was 62.2 years. 57.5% of the participants were married or cohabiting. There was at least one person aged younger than 18 years in 12% of the households. 54.8% of the respondents had a German CSE, 25.3% had a German Remedial School Certificate of Completion, and 12.4% had a German Abitur (university entrance qualification). At the time of the survey, 60.2% of the patients were below the poverty level (60% of the mean income in Germany). Important factors influencing an existence above the poverty level were the number of persons per household and the age of the participants. The more persons per household, the greater the risk of being below the poverty level: households with more than two persons had a significantly higher risk of being below the poverty level (OR=63.3). Persons aged younger than 50 years had a significantly higher risk of being below the poverty level than those aged 50 years or older (OR=2.0).
CONCLUSIONS
Chronic renal insufficiency is associated with a higher poverty risk if patients have specific attributes. Although living alone is often regarded as a poverty risk, because larger households usually have more opportunities to save costs, our findings show that patients living together with several persons in a household are at higher risk of sinking into poverty. They are younger or middle-aged and are responsible for supporting children or partners. A higher poverty risk results from the fact that they are younger when their dialysis starts and usually receive a lower employment disability pension. The results correspond with data of the German Federal Statistical Office, which show that poverty is more widespread within younger age groups. For prevention, the physician's influence on the possible continued employment of dialysis patients seems relevant. |
Unilateral plantar flexors static-stretching effects on ipsilateral and contralateral jump measures. | The aim of this study was to evaluate the acute effects of unilateral ankle plantar flexor static-stretching (SS) on the passive range of movement (ROM) of the stretched limb, surface electromyography (sEMG) and single-leg bounce drop jump (SBDJ) performance measures of the ipsilateral stretched and contralateral non-stretched lower limbs. Seventeen young men (24 ± 5 years) performed SBDJ before and after unilateral ankle plantar flexor SS (6 sets of 45 s/15 s, 70-90% point of discomfort; stretched limb: immediately post-stretch, 10 and 20 minutes; non-stretched limb: immediately post-stretch). SBDJ performance measures included jump height, impulse, time to reach peak force and contact time, as well as the sEMG integral (IEMG) and pre-activation (IEMGpre-activation) of the gastrocnemius lateralis. Ankle dorsiflexion passive ROM increased in the stretched limb after the SS (pre-test: 21 ± 4° and post-test: 26.5 ± 5°, p < 0.001). Post-stretching decreases were observed in peak force (p = 0.029), IEMG (p < 0.001), and IEMGpre-activation (p = 0.015) in the stretched limb, as well as in impulse (p = 0.03) and jump height (p = 0.032) in the non-stretched limb. In conclusion, SS effectively increased passive ankle ROM of the stretched limb and transiently (less than 10 minutes) decreased muscle peak force and pre-activation. The decrease in jump height and impulse for the non-stretched limb suggests a SS-induced central nervous system inhibitory effect.
Key points:
- When considering whether or not to SS prior to athletic activities, one must weigh the potential positive effects of increased ankle dorsiflexion motion against the potential deleterious effects on power and muscle activity during a simple jumping task or as part of the rehabilitation process.
- Since decreased jump performance measures can persist for 10 minutes in the stretched leg, the timing of SS prior to performance must be taken into consideration.
- Athletes, fitness enthusiasts and therapists should also keep in mind that SS of one limb has generalized effects upon contralateral limbs as well. |
Tracking Players and Estimation of the 3D Position of a Ball in Soccer Games | In soccer games, understanding the movement of players and a ball is essential for the analysis of matches or tactics. In this paper, we present a system to track players and a ball and to estimate their positions from video images. Our system tracks players by extracting shirt and pants regions and can cope with posture change and occlusion by considering their colors, positions, and velocities in the image. The system extracts ball candidates by using the color and motion information, and determines the ball among them based on motion continuity. To determine the player who is holding the ball, the position of players on the field and the 3D position of the ball are estimated. The ball position is estimated by fitting a physical model of movement in the 3D space to the observed ball trajectory. Experimental results on real image sequences show the effectiveness of the system. |
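The physical-model fit described above can be sketched as follows: with gravity g known, the vertical motion z(t) = z0 + v0·t - g·t²/2 becomes linear in the unknowns (z0, v0) once g·t²/2 is added back to the observations, so ordinary least squares suffices. This is a hedged sketch of the idea, not the paper's actual estimator.

```python
def fit_vertical(ts, zs, g=9.81):
    """Fit z(t) = z0 + v0*t - g*t^2/2 with g known.

    Adding g*t^2/2 to each observation leaves y = z0 + v0*t,
    a plain linear least-squares problem in (z0, v0).
    """
    ys = [z + 0.5 * g * t * t for t, z in zip(ts, zs)]
    n = len(ts)
    st, sy = sum(ts), sum(ys)
    stt = sum(t * t for t in ts)
    sty = sum(t * y for t, y in zip(ts, ys))
    v0 = (n * sty - st * sy) / (n * stt - st * st)  # regression slope
    z0 = (sy - v0 * st) / n                         # regression intercept
    return z0, v0
```

In a full system, the fitted parabola would be compared against the observed image trajectory (via the camera model) to recover the ball's 3D position.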
Measurement of Integrated Circuit Conducted Emissions by Using a Transverse Electromagnetic Mode (TEM) Cell | This paper presents a new technique for the measurement of integrated circuit (IC) conducted emissions. In particular, the spectrum of interfering current flowing through an IC port is detected by using a transverse electromagnetic mode (TEM) cell. A structure composed of a matched TEM cell with a transmission line inside is considered. The structure is excited by an interfering source connected to one end of the transmission line. The relationship between the current spectrum of the source and the spectrum of the RF power delivered to the TEM mode of the cell is derived. This relationship is evaluated for one specific structure and the experimental validation is shown. Results of conducted emission measurements performed by using this technique are shown as well and compared with those derived by using the magnetic probe method. |
Chemical cross-linking of hemoglobin H. A possible approach to introduce cooperativity and modification of its oxygen transport properties. | Native and reconstituted hemoglobin H molecules were cross-linked with glutaraldehyde at pH values close to physiological. The Schiff base adducts were analysed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis before and after reduction with sodium borohydride. The major component had a molecular weight of about 31 000, which corresponded to the dimeric species of the beta subunit. In contrast to the native protein, which has very high oxygen affinity and no heme-heme interaction or 2,3-diphosphoglyceric acid effect, the modified hemoglobin H molecules showed cooperative oxygen binding, decreased oxygen affinity and a noticeable 2,3-diphosphoglyceric acid effect. |
On Horizontal Decomposition of the Operating System | As previous OS abstractions and structures fail to explicitly consider the separation between resource users and providers, the shift toward server-side computing poses serious challenges to OS structures, aggravated by the increasing many-core scale and workload diversity. This paper presents the horizontal OS model. We propose a new OS abstraction—subOS—an independent OS instance owning physical resources that can be created, destroyed, and resized swiftly. We horizontally decompose the OS into the supervisor for the resource provider and several subOSes for resource users. The supervisor discovers, monitors, and provisions resources for subOSes, while each subOS independently runs applications. We confine state sharing among subOSes, but allow on-demand state sharing if necessary. We present the first implementation—RainForest, which supports unmodified Linux application binaries. Our comprehensive evaluations using six benchmark suites quantitatively show that RainForest outperforms Linux with three different kernels, LXC, and XEN. The RainForest source code will be available soon. |
Hyperspectral Remote Sensing Data Analysis and Future Challenges | Hyperspectral remote sensing technology has advanced significantly in the past two decades. Current sensors onboard airborne and spaceborne platforms cover large areas of the Earth's surface with unprecedented spectral, spatial, and temporal resolutions. These characteristics enable a myriad of applications requiring fine identification of materials or estimation of physical parameters. Very often, these applications rely on sophisticated and complex data analysis methods. The sources of difficulty are, namely, the high dimensionality and size of the hyperspectral data, the spectral mixing (linear and nonlinear), and the degradation mechanisms associated with the measurement process, such as noise and atmospheric effects. This paper presents a tutorial/overview cross section of some relevant hyperspectral data analysis methods and algorithms, organized in six main topics: data fusion, unmixing, classification, target detection, physical parameter retrieval, and fast computing. In each topic, we describe the state of the art, provide illustrative examples, and point to future challenges and research directions. |
Research Guides: Engineering Design Technology: Professional & Educational Resources | EDT @ GTC prepares students to become CAD design technicians who contribute to the design team by transforming ideas and solutions into engineering models, drawings, and specifications using state-of-the-art tools such as AutoCAD, SolidWorks, and CATIA V5. |
Robust 2D/3D face mask presentation attack detection scheme by exploring multiple features and comparison score level fusion | Face mask presentation attacks pose a serious threat to face recognition systems. Evolving technology for generating both 2D and 3D masks in ever more sophisticated, realistic and cost-effective ways exposes face recognition systems to more challenging vulnerabilities. In this paper, we present a novel Presentation Attack Detection (PAD) scheme that explores both the global (i.e. face) and local (i.e. periocular or eye) regions to accurately identify the presence of both 2D and 3D face masks. The proposed PAD algorithm is based on both Binarized Statistical Image Features (BSIF) and Local Binary Patterns (LBP), which can capture prominent micro-texture features. A linear Support Vector Machine (SVM) is then trained independently on these two features, applied to both the local and global regions, to obtain comparison scores. We then combine these scores using the weighted sum rule before deciding whether a face is normal (real or live) or an artefact (spoof). Extensive experiments carried out on two publicly available databases for 2D and 3D face masks, namely the CASIA face spoof database and 3DMAD, show the efficacy of the proposed scheme compared with well-established state-of-the-art techniques. |
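As a hedged sketch of two building blocks named above (not the authors' exact pipeline): the basic 8-neighbour LBP code computed per pixel, and weighted-sum fusion of the resulting comparison scores.

```python
def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2D grayscale grid (list of lists of intensities)."""
    H, W = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for y in range(1, H - 1):
        row = []
        for x in range(1, W - 1):
            c, code = img[y][x], 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:   # threshold neighbour vs centre
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out

def fuse_scores(scores, weights):
    """Weighted-sum rule over normalized comparison scores
    (e.g. BSIF/LBP scores from the face and periocular regions)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```

In the full scheme, histograms of such LBP codes would be the texture features fed to the SVMs, and the fused score would be thresholded into the real/artefact decision.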
Fast Exact k-Means, k-Medians and Bregman Divergence Clustering in 1D | The k-Means clustering problem on n points is NP-hard for any dimension d ≥ 2; however, for the 1D case there exist exact polynomial-time algorithms. Previous literature reported an O(kn)-time dynamic programming algorithm that uses O(kn) space. It turns out that the problem was considered, under a different name, more than twenty years ago. We present all the existing work that had been overlooked and compare the various solutions theoretically. Moreover, we show how to reduce the space usage for some of them, as well as generalize them to data structures that can quickly report an optimal k-Means clustering for any k. Finally, we generalize all the algorithms to work for the absolute distance (k-Medians) and for any Bregman divergence. We complement our theoretical contributions with experiments that compare the practical performance of the various algorithms. |
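The dynamic program underlying the exact 1D algorithms can be sketched as follows. This is the O(k·n²) textbook version with O(1) interval costs via prefix sums; the faster O(kn) variants the abstract refers to exploit additional structure (e.g. monotonicity of the optimal split points), which this sketch does not implement.

```python
def kmeans_1d(xs, k):
    """Exact 1D k-means cost by dynamic programming over the sorted points.

    D[i][m] = optimal cost of clustering xs[0..i] into m clusters;
    cost(j, i) = sum of squared deviations from the mean of xs[j..i],
    computed in O(1) from prefix sums of x and x^2.
    """
    xs = sorted(xs)
    n = len(xs)
    p1 = [0.0] * (n + 1)   # prefix sums of x
    p2 = [0.0] * (n + 1)   # prefix sums of x^2
    for i, x in enumerate(xs):
        p1[i + 1] = p1[i] + x
        p2[i + 1] = p2[i] + x * x

    def cost(j, i):        # squared error of cluster xs[j..i], inclusive
        s1 = p1[i + 1] - p1[j]
        s2 = p2[i + 1] - p2[j]
        m = i - j + 1
        return s2 - s1 * s1 / m

    INF = float("inf")
    D = [[INF] * (k + 1) for _ in range(n)]
    for i in range(n):
        D[i][1] = cost(0, i)
        for m in range(2, k + 1):
            for j in range(1, i + 1):   # last cluster is xs[j..i]
                c = D[j - 1][m - 1] + cost(j, i)
                if c < D[i][m]:
                    D[i][m] = c
    return D[n - 1][k]
```

Recording the minimizing `j` alongside `D[i][m]` would recover the actual cluster boundaries; swapping the squared-deviation cost for absolute deviations from the median gives the k-Medians variant.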
Automating formal proofs for reactive systems | Implementing systems in proof assistants like Coq and proving their correctness in full formal detail has consistently demonstrated promise for making extremely strong guarantees about critical software, ranging from compilers and operating systems to databases and web browsers. Unfortunately, these verifications demand such heroic manual proof effort, even for a single system, that the approach has not been widely adopted.
We demonstrate a technique to eliminate the manual proof burden for verifying many properties within an entire class of applications, in our case reactive systems, while only expending effort comparable to the manual verification of a single system. A crucial insight of our approach is simultaneously designing both (1) a domain-specific language (DSL) for expressing reactive systems and their correctness properties and (2) proof automation which exploits the constrained language of both programs and properties to enable fully automatic, pushbutton verification. We apply this insight in a deeply embedded Coq DSL, dubbed Reflex, and illustrate Reflex's expressiveness by implementing and automatically verifying realistic systems including a modern web browser, an SSH server, and a web server. Using Reflex radically reduced the proof burden: in previous, similar versions of our benchmarks written in Coq by experts, proofs accounted for over 80% of the code base; our versions require no manual proofs. |
Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews | With the development of e-commerce and web technology, most online merchant sites allow customers to write comments about purchased products. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Extracting opinions about products from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea to find opinion words or phrases for each feature from customer reviews in an efficient way. Our focus is on obtaining the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. |
What Is Disciplinary Literacy and Why Does It Matter | while giving students valuable insights into the nature of science and scientific communication. One of the major benefits attributed to nominalization (and to certain uses of passive voice) is that it shifts the emphasis from social agents to natural agents in the consideration of causation, which is a central premise in most scientific endeavors. In contrast, history texts and literary texts are more likely to use active voice, and they are less likely to focus on nominalized subjects. They, too, are interested in the analysis of causation, but understanding human agency is more central to their purpose. Again, by examining these disciplinary choices, or relatively specialized patterns of language use, students may be better equipped to deal with the learning demands of the particular disciplines. (There are, of course, variations within social and scientific studies: for example, in most sciences human agency is attenuated, while in ecology and environmental sciences human causation is more important.) And these language differences are only a part of what distinguishes the disciplines. Another example of a disciplinary difference with profound implications for literacy has to do with the role of the author. Research has shown clear differences in whether and how those in the various disciplines think about the author during reading (Shanahan, 1992; Shanahan & Shanahan, under review). For example, it has been shown that in history reading, the author is a central construct of interpretation (Wineburg, 1991, 1998). Historians are always asking themselves who this author is and what bias this author brings to the text (somewhat analogous to the lawyer's common probe, "What did he know and when did he know it?").
Consideration of the author is deeply implicated in the process of reading history, and disciplinary literacy experts have hypothesized that "sourcing" (thinking about the implications of the author during interpretation) is an essential history reading process (Wineburg, 1991, 1998); studies show that it can, at least under some circumstances, be taught to students in a way that improves their learning (Hynd-Shanahan, Holschuh, & Hubbard, 2004). However, while historians (and history students) must consider a text's authorial source, research has revealed a very different pattern of reading for scientists (Shanahan & Shanahan, under review). Interviews with chemists have shown that they do rely on the author, but more as a topical or quality screen while determining which texts to read. Chemists admit that they use the author when they are deciding what to read; they consider the lab an author may be associated with to determine whether a text would be worth the time. But once reading begins, unlike the historians, scientists try to focus their attention specifically on the text; considerations of the author, according to these chemists, should play no part in the interpretation of text meaning, something demonstrated in both their think-alouds during reading and in post-reading interviews. This pattern of intentionally ignoring the author was even more evident in the reading done by mathematicians, who explained, almost stridently, that thinking about the author would only be a distraction and that it could help in no way within the process of making sense of the text. And, to bring things full circle, whether the author should be considered interpretively has been a matter of great controversy within the field of literary criticism (English) for more than 50 years.
Literary theorists have worked long and hard to kill off the author, or at least to elbow him or her aside during interpretation (Brooks & Warren, 1938; Fish, 1980; Foucault, 1979; Gadamer, 1975; Rosenblatt, 1978; Wimsatt & Beardsley, 1946). Thus, some literary critics argue for the close reading of "authorless" texts, much in the fashion of the scientific or mathematical readings described above, while other critics allow for some consideration of the author, at least for making sense of the author's ideological stance (à la the historical readings already described). These differences suggest that students must always read history with an eye to the author, while never reading math in that way. Students should use the author sparingly in science reading, though never to make sense of the text. When reading literature, they should sometimes interpret the author along with the text and at other times stay with the words of the literature with no consideration of the author at all. The aim of disciplinary literacy is to identify all such reading- and writing-relevant distinctions among the disciplines, and to find ways of teaching students to successfully negotiate these literacy aspects of the disciplines. It is an effort, ultimately, to transform students into disciplinary insiders who are able to approach literacy tasks with some sense of agency and with a set of responses and moves that are appropriate to the specialized purposes, demands, and mores of the disciplines. For our purposes here, however, the important distinctions are not the ones separating the disciplines, but those that distinguish content area reading from disciplinary literacy. We have described the idea of disciplinary literacy in some detail. Content area literacy, on the other hand, has been around longer and is the focus of dozens of textbooks. We should be able to summarize its agenda more efficiently.
It is evident from examining several decades' worth of content area reading/literacy textbooks that the largely agreed-upon purpose of this approach is to provide students with a collection of generic study skills that will, more or less, boost learning in all disciplines. These approaches help students to preview books (through examinations of tables of contents and indices) and chapters (use of subheadings) and to use various print devices (e.g., italics, bolding, font and point variations) to make sense of text. They promote the use of purpose setting and predicting, along with a rich collection of reading processes or strategies (e.g., visualization, summarization, clarification, questioning), and the use of particular study or teaching devices (e.g., Cornell note-taking, three-level guides, advance organizers). The content area reading agenda aims not so much to help students read history as a historian might, but simply to read history with some grasp of the information, using a set of learning or study tools that may be implemented in any subject. Its focus is less on providing students with an insider's perspective on a discipline (and with ways of coping with the unique properties of the disciplines); rather, it emphasizes students as students or studiers and strives to provide them with the tools of the student.
The Sources of Disciplinary Literacy
The roots of the disciplinary literacy concept are three-fold: they can be found in the historical development of content area reading, cognitive analyses of expert readers, and functional linguistics. The history of content area reading has been excellently described (Moore, Readence, & Rickelman, 1983) and we will rely heavily on that treatment. Moore and his colleagues trace its history to the 1920s, with the recognition of the importance of reading in content subjects.
From the beginning, the emphasis of content area reading was on instructional applications of the relation of reading to content subjects. For instance, the National Committee on Reading explored this topic in the classic 24th Yearbook of the National Society for the Study of Education (Whipple, 1925), which provided guidelines and sample lessons emphasizing how to find answers to questions, follow directions, select major ideas, remember, identify key words, self-question, and make notes. As a result of the recognition of the importance of reading in school subjects accorded by the National Committee on Reading, researchers began exploring the issue. According to Moore et al. (1983), the studies tended to focus on the identification of important vocabulary in the textbooks from the various subjects, studies of the availability and effectiveness of various instructional procedures, and correlations of comprehension measures based on general and subject specific texts. “Although these reports indicated various degrees of similarity between ‘general’ and ‘specific’ comprehension, all concluded that the subjects presented distinct reading demands” (Moore, Readence, & Rickelman, 1983, p. 429). Thus, despite the fact that their methods of research did not permit differences to be discerned, content area reading researchers typically promoted the notion that reading proficiency would be subject-distinct, and this idea of specialized reading has long been rhetorically honored in pedagogical treatments of content area reading, despite the fact that they have mainly endorsed general approaches to reading that were applicable across all subject matters. Thus, the ironic role that content area reading has played in the development of disciplinary literacy has largely been aspirational. 
It has pointed towards a theoretical conception of literacy processes specialized to particular disciplines, while fostering a fundamentally different approach: one based upon highly generalizable learning strategies or processes that can be easily adapted and used across different school subjects. A more substantive source for disciplinary literacy emerges from a series of expert reader studies that have been carried out during the past three decades in various disciplines (Shanahan & Shanahan, under review). Drawing on the expert-novice paradigm from the cognitive sciences, these studies have used observations and think-aloud protocols to identify performance differences. In this paradigm individuals are identified who are particularly proficient in some skill, in this case, in the literacy of a particu |
Visual discrimination learning requires sleep after training | Performance on a visual discrimination task showed maximal improvement 48–96 hours after initial training, even without intervening practice. When subjects were deprived of sleep for 30 hours after training and then tested after two full nights of recovery sleep, they showed no significant improvement, despite normal levels of alertness. Together with previous findings that subjects show no improvement when retested the same day as training, this demonstrates that sleep within 30 hours of training is absolutely required for improved performance. |
Chromatin zigzags | (Figure captions: Gene clusters (red) and deserts (green) group together in characteristic patterns. Dendrites with excess nectin 1 aberrantly touch each other (arrowheads).) Gene-poor chromosomal regions are more often found in the nuclear periphery, and gene-rich regions are more often found in the nuclear interior. But Shopland et al. (page 27) are the first to analyze how multiple gene-poor and gene-rich regions are organized relative to each other. They find that gene-rich regions often cluster together while pushing interspersing genic deserts to the nuclear periphery, even in the absence of active transcription. Shopland et al. studied a 4.3-Mb region of mouse chromosome 14 that has four gene-rich regions interspersed with four gene deserts. FISH probes that distinguished the genic and nongenic regions showed that the chromosome bent into three classifiable patterns: a striped pattern that resembled the linear sequence order; a zigzag pattern with the four coding regions next to one another and the gene deserts displaced to one side; and a clustered "hub" of gene-rich segments with peripherally arranged deserts. Combinations of these three patterns were also evident. The deserts often lined up at the edge of the nucleus, where they might contact the lamin meshwork. The chromosomal arrangements did not appear to depend on transcription at a common site, nor did the gene-rich regions associate with aggregates of RNA splicing factors referred to as speckles. Moreover, the patterns persisted when transcription was blocked by drugs. Given the limited influence that transcription appeared to have on the genome organization, it remains unclear how or why the chromosome bends into these configurations. The researchers speculate that the gene-rich regions share some regulatory proteins, as might the deserts, and thus are drawn together by cross-talk. 
There are genes in the region that act in the same developmental pathways, which might support this idea, but while coexpressed they have not been shown to be coregulated. Whether such associations are the result of passive chromatin wiggling or an active pulling process remains to be seen. Opposites attract: The nectin family's preference for heterophilic interactions prevents one dendrite from forming an attachment to another and leads to proper wiring in the nervous system, according to Togashi et al. (page 141). During embryonic development, neurons send out axons and dendrites. And though dendrites bump into other dendrites, only connections between axons and dendrites mature into synapses. Cadherin and … |
Marginal Release Under Local Differential Privacy | Many analysis and machine learning tasks require the availability of marginal statistics on multidimensional datasets while providing strong privacy guarantees for the data subjects. Applications for these statistics range from finding correlations in the data to fitting sophisticated prediction models. In this paper, we provide a set of algorithms for materializing marginal statistics under the strong model of local differential privacy. We prove the first tight theoretical bounds on the accuracy of marginals compiled under each approach, perform empirical evaluation to confirm these bounds, and evaluate them for tasks such as modeling and correlation testing. Our results show that releasing information based on (local) Fourier transformations of the input is preferable to alternatives based directly on (local) marginals. |
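The abstract does not reproduce the paper's marginal-release algorithms, but the local model it builds on can be illustrated with the classic randomized-response primitive: each user perturbs a single bit before reporting it, and the aggregator debiases the noisy counts. This is a minimal sketch under our own assumptions (the function names and the choice of ε are illustrative, not taken from the paper):

```python
import math
import random

def randomize_bit(bit, epsilon=1.0):
    """Randomized response: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report its flip."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def estimate_frequency(reports, epsilon=1.0):
    """Debias the noisy reports into an unbiased estimate of the
    true frequency of 1-bits across users."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

With ε = 1 and 10,000 users whose true frequency of 1s is 0.3, the aggregate estimate lands close to 0.3 even though every individual report is plausibly deniable; the paper's contribution is doing this accurately for full marginal tables rather than single bits.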
Assessment of dyspnea in acute decompensated heart failure: insights from ASCEND-HF (Acute Study of Clinical Effectiveness of Nesiritide in Decompensated Heart Failure) on the contributions of peak expiratory flow. | OBJECTIVES
This study hypothesized that peak expiratory flow rate (PEFR) would increase with acute heart failure (AHF) treatment over the first 24 h, and that this increase would be related to changes in a Dyspnea Index (DI) and to treatment effect.
BACKGROUND
Dyspnea is a key symptom and clinical trial endpoint in AHF, yet objective assessment is lacking.
METHODS
In a clinical trial substudy, 421 patients (37 sites) underwent PEFR testing at baseline, 1, 6, and 24 h after randomization to nesiritide or placebo. DI (by Likert scale) was collected at hours 6 and 24.
RESULTS
Patients had a median age of 70 years, and 34% were female; there were no significant differences between nesiritide and placebo patients. Median baseline PEFR was 225 l/min (interquartile range [IQR]: 160 to 300 l/min) and increased to 230 l/min (2.2% increase; IQR: 170 to 315 l/min) by hour 1, 250 l/min (11.1% increase; IQR: 180 to 340 l/min) by hour 6, and 273 l/min (21.3% increase; IQR: 200 to 360 l/min) by 24 h (all p < 0.001). The 24-h PEFR change was related to moderate or marked dyspnea improvement by DI (adjusted odds ratio: 1.04 for each 10 l/min improvement [95% confidence interval (CI): 1.07 to 1.10]; p < 0.01). A model incorporating time and treatment over 24 h showed greater PEFR improvement after nesiritide compared with placebo (p = 0.048).
CONCLUSIONS
PEFR increases over the first 24 h in AHF and could serve as an AHF endpoint. Nesiritide had a greater effect than placebo on PEFR, and this predicted patients with moderate/marked improvement in dyspnea, thereby providing an objective metric for assessing AHF. (Acute Study of Clinical Effectiveness of Nesiritide in Decompensated Heart Failure [ASCEND-HF]; NCT00475852). |
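The per-10 l/min odds ratio reported above compounds multiplicatively over larger PEFR changes if the usual log-linear (logit) dose-response is assumed; that assumption is ours, not stated in the abstract. A small illustrative helper:

```python
def odds_multiplier(or_per_10, delta_l_min):
    """Compound a per-10 l/min odds ratio over a PEFR change of
    delta_l_min litres/min, assuming a log-linear logit model."""
    return or_per_10 ** (delta_l_min / 10.0)
```

Under that assumption, a 100 l/min improvement corresponds to an odds multiplier of 1.04**10, roughly 1.48 times the odds of moderate/marked dyspnea relief.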
Combining Haar and MBLBP features for face detection using multi-exit asymmetric boosting | This paper proposes a visual face detection framework that enables fast image processing while achieving high detection rates. The proposed framework combines both multi-block local binary pattern (MBLBP) features and Haar-like features using multi-exit asymmetric boosting for robust face detection. In this framework, the integral image is utilized to facilitate rapid extraction of both the MBLBP features and the Haar-like features. Experimental results showed that combining MBLBP and Haar-like features can achieve a better detection rate than either of them can individually. |
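The integral image mentioned in the abstract makes any rectangular pixel sum an O(1) lookup, which is what lets both Haar-like features (differences of box sums) and MBLBP features (comparisons of nine block averages) be extracted quickly. A minimal plain-Python sketch of the data structure (the paper's own implementation details are not given in the abstract):

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] holds the sum of
    img over rows 0..y-1 and columns 0..x-1."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, x, y, w, h):
    """Sum of pixels in the w*h rectangle with top-left corner (x, y),
    computed from four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

A two-rectangle Haar feature is then just `box_sum(...) - box_sum(...)`, and an MBLBP code compares eight neighboring block sums against the central block's sum.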
Bringing High Performance Computing to Big Data Algorithms | Many ideas of High Performance Computing are applicable to Big Data problems. The more so now, that hybrid, GPU computing gains traction in mainstream computing applications. This work discusses the differences between the High Performance Computing software stack and the Big Data software stack and then focuses on two popular computing workloads, the Alternating Least Squares algorithm and the Singular Value Decomposition, and shows how their performance can be maximized using hybrid computing techniques. |
Four-dimensional control of the cell cycle | The cell-division cycle has to be regulated in both time and space. In the time dimension, the cell ensures that mitosis does not begin until DNA replication is completed and any damaged DNA is repaired, and that DNA replication normally follows mitosis. This is achieved by the synthesis and destruction of specific cell-cycle regulators at the right time in the cell cycle. In the spatial dimension, the cell coordinates dramatic reorganizations of the subcellular architecture at the entrance to and exit from mitosis, largely through the actions of protein kinases and phosphatases that are often localized to specific subcellular structures. Evidence is now accumulating to suggest that the spatial organization of cell-cycle regulators is also important in the temporal control of the cell cycle. Here I will focus on how the locations of the main components of the cell-cycle machinery are regulated as part of the mechanism by which the cell controls when and how it replicates and divides. |
An energy market for trading electricity in smart grid neighbourhoods | The smart grid vision relies on active interaction with all of its stakeholders. As consumers acquire energy generation capabilities and become prosumers (producers and consumers), a meaningful way for them to interact would be to trade over a marketplace. Market-driven interactions have been proposed as a promising interaction method due to the monetary incentives and other benefits involved for the participants [1]. In the Internet era an on-line marketplace is a thriving concept as it overcomes potential accessibility issues; however, it is not clear how such marketplaces should be structured and operated, or what their limits and benefits might be. The design, implementation, and modus operandi, as well as an assessment, of such an energy marketplace for smart grid neighbourhoods are presented. |
Personality traits associated with depressive and anxiety disorders in infertile women and men undergoing in vitro fertilization treatment. | OBJECTIVE
To assess which personality traits are associated with depressive and/or anxiety disorders in infertile women and men undergoing in vitro fertilization (IVF).
DESIGN
Prospective study.
SETTING
A university hospital in Sweden.
POPULATION
A total of 856 eligible women and men (428 couples) were approached to participate. Overall, 643 (75.1%) subjects filled out the Swedish Universities Scales of Personality (SSP) questionnaire: 323 women (75.5%) and 320 men (74.8%).
METHODS
The SSP, a self-rating personality trait questionnaire, was used for evaluation.
MAIN OUTCOME MEASURES
Personality traits associated with depression and/or anxiety disorders.
RESULTS
Higher mean scores on all neuroticism-related personality traits were found in women and men with depressive and/or anxiety disorders compared to women and men with no diagnosis. High scores of neuroticism and a negative pregnancy test after IVF were associated with depressive and/or anxiety disorders among women. Among men, high scores of neuroticism and unexplained or male infertility factor were associated with depressive and/or anxiety disorders. High neuroticism scores were negatively associated with live birth (p < 0.05).
CONCLUSION
High scores on neuroticism-related personality traits were associated with depressive and/or anxiety disorders in women and men undergoing IVF. |
A beaconless Opportunistic Routing based on a cross-layer approach for efficient video dissemination in mobile multimedia IoT applications | Mobile multimedia networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for different applications. Those services run on dynamic topologies due to device mobility or failures and wireless channel impairments, such as mobile robots or Unmanned Aerial Vehicle (UAV) environments for rescue or surveillance missions. In those scenarios, beaconless Opportunistic Routing (OR) increases the robustness of systems by supporting routing decisions in a completely distributed manner. Moreover, the addition of a cross-layer scheme enhances the benefits of a beaconless OR, and also enables multimedia dissemination with Quality of Experience (QoE) support. However, existing beaconless OR approaches do not support a reliable and efficient cross-layer scheme to enable effective multimedia transmission under topology changes, increasing the packet loss rate and thus reducing the video quality level based on the user's experience. This article proposes a Link quality and Geographical beaconless OR protocol for efficient video dissemination in mobile multimedia IoT, called LinGO. This protocol relies on a beaconless OR approach and uses multiple metrics for routing decisions, including link quality, geographic location, and energy. A QoE/video-aware optimisation scheme increases the packet delivery rate in the presence of link errors by adding redundant video packets based on frame importance from the human point of view. Simulation results show that LinGO delivers live video flows with QoE support and robustness in mobile and dynamic topologies, as needed in future IoT environments. |
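LinGO's exact metric combination is not specified in the abstract; the sketch below only illustrates the general idea of scoring candidate forwarders by a weighted mix of link quality, geographic progress toward the destination, and residual energy (the weights, normalization, and function names are hypothetical):

```python
import math

def geo_progress(node, neighbor, dest):
    """Geographic progress: how much closer the neighbor is to the
    destination than the current node."""
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return d(node, dest) - d(neighbor, dest)

def forwarder_score(node, neighbor, dest, link_quality, energy,
                    weights=(0.4, 0.4, 0.2)):
    """Weighted score over link quality [0,1], normalized geographic
    progress, and residual energy [0,1]; the best-scoring neighbor
    elects itself as the next forwarder."""
    w_lq, w_geo, w_e = weights
    max_range = 1.0  # assumed radio range used to normalize progress
    progress = max(geo_progress(node, neighbor, dest), 0.0)
    return w_lq * link_quality + w_geo * min(progress / max_range, 1.0) + w_e * energy
```

In a beaconless scheme each candidate would compute this score locally and set its forwarding delay inversely to it, so the best candidate transmits first and suppresses the others.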
Noninvasive Positive-Pressure Ventilation in Acute Respiratory Distress Syndrome in Patients With Acute Pancreatitis: A Retrospective Cohort Study. | OBJECTIVES
Noninvasive positive-pressure ventilation (NPPV) in acute respiratory distress syndrome (ARDS) is controversial. We aimed to assess the efficacy of NPPV for ARDS in acute pancreatitis (AP).
METHODS
In this retrospective, single-center cohort study, demographic data; clinical and biochemical parameters of AP and ARDS on admission, as well as before and after NPPV use; and clinical outcomes were retrieved from the medical record database. Degrees of ARDS at presentation were retrospectively classified using the Berlin Definition.
RESULTS
Of 379 patients identified, 127 were eligible for inclusion and received NPPV for more than 24 hours. At presentation, 44 patients had mild, 64 moderate, and 19 severe ARDS; endotracheal intubation rates were 0% (0/44), 23.4% (15/64), and 47.4% (9/19), and mortality rates were 0% (0/44), 9.4% (6/64), and 15.8% (3/19), respectively. After NPPV treatment, systolic pressure, heart rate, respiratory rate, and fraction of inspired oxygen decreased, whereas oxygen saturation increased significantly in the NPPV success group compared with the group in which NPPV failed. Similar findings were also observed between survivors and nonsurvivors.
CONCLUSIONS
Noninvasive positive-pressure ventilation may be an effective option for the initial treatment of ARDS patients in AP, but the use of NPPV should be applied prudently in the most severe cases. |
BPM Governance: An Exploratory Study in Public Organizations | Business Process Management is a widely known approach focused on aligning processes of an organization in order to achieve improved efficiency and client satisfaction. Governance is an important requirement to enable successful BPM initiatives. This paper provides a qualitative empirical study to investigate what BPM governance elements are adopted by teams conducting early BPM initiatives in public organizations. The results suggest that early BPM adopters in public sector face several barriers due to difficulties in acquiring professionals with BPM expertise, bureaucracy and legislation rigidity, among others. In particular, committed sponsorship and monitoring were appointed as important BPM governance facilitators by participants of the study. Findings also show that further empirical studies are needed to increase the body of evidence in this field. |
Field-split parallel architecture for high performance multi-match packet classification using FPGAs | Multi-match packet classification is a critical function in network intrusion detection systems (NIDS), where all matching rules for a packet need to be reported. Most of the previous work is based on ternary content addressable memories (TCAMs) which are expensive and are not scalable with respect to clock rate, power consumption, and circuit area. This paper studies the characteristics of real-life Snort NIDS rule sets, and proposes a novel SRAM-based architecture. The proposed architecture is called field-split parallel bit vector (FSBV) where some header fields of a packet are further split into bit-level subfields. Unlike previous multi-match packet classification algorithms which suffer from memory explosion, the memory requirement of FSBV is linear in the number of rules. FPGA technology is exploited to provide high throughput and to support dynamic updates. Implementation results show that our architecture can store on a single Xilinx Virtex-5 FPGA the full set of packet header rules extracted from the latest Snort NIDS and sustains 100 Gbps throughput for minimum size (40 bytes) packets. The design achieves 1.25× improvement in throughput while the power consumption is approximately one fourth that of the state-of-the-art solutions. |
A randomised trial of the feasibility of a low carbohydrate diet vs standard carbohydrate counting in adults with type 1 diabetes taking body weight into account. | BACKGROUND AND OBJECTIVES
To determine the effect of a low carbohydrate diet combined with standard carbohydrate counting on glycaemic control, glucose excursions and daily insulin use, compared with standard carbohydrate counting alone, in participants with type 1 diabetes.
METHODS AND STUDY DESIGN
Participants (n=10) with type 1 diabetes using a basal-bolus insulin regimen, who attended a secondary care clinic, were randomly allocated (1:1) to either a standard carbohydrate counting course or the same course with added information on following a carbohydrate-restricted diet (75 g per day). Participants attended visits at baseline and 12 weeks for measurements of weight, height, blood pressure, HbA1c, lipid profile and creatinine. They also completed a 3-day food diary and had 3 days of continuous subcutaneous glucose monitoring.
RESULTS
The carbohydrate-restricted group had significant reductions in HbA1c (63 to 55 mmol/mol (7.9-7.2%), p<0.05) and daily insulin use (64.4 to 44.2 units/day, p<0.05) and non-significant reductions in body weight (83.2 to 78.0 kg). There were no changes in blood pressure, creatinine or lipid profile and all outcomes in the carbohydrate counting group were unchanged. There was no change in glycaemic variability as measured by the mean amplitude of glycaemic excursion in either group.
CONCLUSIONS
A low carbohydrate diet is a feasible option for people with type 1 diabetes, and may be of benefit in reducing insulin doses and improving glycaemic control, particularly for those wishing to lose weight. |
Interactive Data Analytics for the Humanities | In this vision paper, we argue that current solutions to data analytics are not suitable for complex tasks from the humanities, as they are agnostic of the user and focused on static, predefined tasks with large-scale benchmarks. Instead, we believe that the human must be put into the loop to address small data scenarios that require expert domain knowledge and fluid, incrementally defined tasks, which are common for many humanities use cases. Besides the main challenges, we discuss existing and urgently required solutions to interactive data acquisition, model development, model interpretation, and system support for interactive data analytics. In the envisioned interactive systems, human users not only provide annotations to a machine learner, but train a model by using the system and demonstrating the task. The learning system will actively query the user for feedback, refine its model in real-time, and explain its decisions. Our vision links natural language processing research with recent advances in machine learning, computer vision, and data management systems, as realizing this vision relies on combining expertise from all of these scientific fields. 1 Challenges in Analyzing Humanities Data Automated data analytics, also known as data mining and machine learning, is a key technology for enriching and interpreting data, making informed decisions, and developing new data-driven scientific methods across many disciplines in industry and academia. Although the potential of interactive problem solving was recognized early on [16], this field has not progressed very far beyond the initial work. In particular, interactive machine learning and data analytics have only recently received increased attention [98]. Current data analytics solutions focus predominantly on well-defined tasks that can be solved by processing large, homogeneous datasets available in a structured form. 
Consider for example recommender systems, which suggest new products based on the product’s properties, the products that the customer has previously bought, and the collective behavior of the customer database [43]. The state-of-the-art relies on huge amounts of data—over one billion pairs of users and news items passively gathered—to train a deep neural network [28]. This may explain why data analytics is conceived in a rather impersonal way, with algorithms working autonomously on passively collected data, although practice is quite the opposite. Most of the influence practitioners have comes through interacting with data, including crafting the data and examining results. In the late 1990s, digitized data became widely available in the humanities as well. Since then, there has been a clear demand for data analytics approaches to tap into these textual and visual data, including cultural heritage collections. The research questions and strategies in the humanities are, however, radically different from data analytics tasks in other disciplines. First, despite the large amount of digitized data, there is typically only a tiny fraction that qualifies as training data for machine learning systems, because most of the data lacks cleaning, preprocessing, and gold standard labels. Data preparation tasks are often highly complex in the humanities. For text, they range from transcribing Gothic script or handwriting through the labeling of references to persons and their actions to a manual analysis of the text’s argumentative structure. For images and video, e.g., we need to correct distortions, annotate gestures, or manually describe scenes. Rather than depending on big input data, future data analytics methods for the humanities must therefore be able to cope with small data scenarios, generalize from few input signals, and at the same time avoid overfitting to the idiosyncrasies of the dataset. 
Second, the analysis of humanities data requires highly specific expert knowledge. This may include historical and legal facts, understanding ancient and special languages, or recognizing gestures or architectural properties in images and video. Relying on expert knowledge further limits our possibilities to manually label data, as common annotation procedures, such as crowdsourcing [54] or gamification [2], can only be used for certain subproblems or must be customized for laypeople. An even more severe problem is, however, interpreting the output of a data analytics system, which is only possible with expert domain knowledge. So far, training such a system requires vast machine learning expertise, preventing domain experts from directly participating in the development process. Inspecting and refining a model is particularly challenging in neural network architectures, as there is still little insight into the internal operation and behavior of complex models [109]. Future methods need to communicate directly with domain experts and allow them to steer the data analytics process. Third, most research questions in the humanities are not clearly defined in advance, but developed over time as the research hypothesis evolves. We therefore need data analytics methods that allow for fluid problem definitions. This is particularly true for subjective tasks, for which multiple, partially contradicting theories co-exist. Examples are different schools and traditions in philosophy as well as disparate sources and opinions in history or law. Rather than aiming at a single, universal problem definition, we thus need methods that adapt to particular users or theories and recognize shifting goals. 
Although some of these challenges are relevant for data science tasks in general (e.g., the small data scenario in the biomedical domain [93]), fluid problem definitions are prototypical for the humanities, as researchers have to pursue and develop competing theories and standpoints before judging them according to their merits. The humanities therefore need specific solutions for future data analytics. This requires a close cooperation of natural language processing and computer vision with machine learning and data management systems research. 2 Interactive Data Analytics In this paper, we advocate research on interactive machine learning approaches for data analytics tasks in the humanities. Interactive machine learning is characterized by incremental model updates based on a user’s actions and feedback, yielding a system that is simultaneously developed and used. Rather than teaching a machine learning system with a predefined set of training instances, as is the most common practice today, we envision an intelligent system that a user teaches by using it. This is triggered by the insight that a user will not necessarily start with a pre-defined concept that must be modeled as accurately as possible (as is often assumed in machine learning); the concept sought after is likely to evolve during the discovery process and, hence, during the process of selecting data and training a machine learning system. Indeed, this is akin to active learning—the system may ask the user to label a certain instance while learning—but transgresses it by removing the strong focus on data labeling. Active learning removes the passivity of the learning system which, in the classical setting, only receives data, and allows it to actively pose questions on the data. However, the teacher (i.e., the human expert) is passive in the sense that she has no direct influence on the models that the learner induces from the data. 
Her only way of influencing the results is via the provided data or labels. For that reason Shivaswamy and Joachims [94] extended this towards coactive learning, where the teacher can also correct the learner during learning if necessary, providing a slightly improved but not necessarily optimal example as feedback. In interactive learning, we envision a process where the teacher and the learner not only interact at the data and example level, but also at the model level itself. The user should be enabled to directly interact with the model, to provide feedback on the model that influences the learner, or to even directly modify parts of the learner. This way, learning becomes a fully co-adaptive process, in which a human is changing computer behavior, but the human also adapts to use machine learning more effectively and adjusts his or her data and goals in response to what is learned. This requires on the one hand ways for communicating information or feedback about the models to the learner, and, on the other hand, relies on innovative methods for communicating learned models to a domain expert who is typically inexperienced in machine learning. Thus, we envision future interactive data analytics to essentially consist of four components: Interactive Data Acquisition: The domain expert and the learning system need to interact to acquire the appropriate data as well as for annotating and labeling the data. Interactive Model Development: Besides influencing the learning process by providing suitable training data, the domain expert can interact with the learning algorithm during the model’s construction and use this to continually alter and refine the model. Interactive Model Interpretation: The learned model is not passive and intransparent, but can be actively understood and explored by the domain expert. 
Interactive System Support: To support the iterative learning process and the effective human–computer interaction under real-time constraints, it is essential to link interactive machine learning with data management systems. All four components have, to some extent, been explored in the literature before, but for interactive data analytics it is essential that all four are realized and tightly integrated so that their synthesis facilitates the interaction between the domain expert and the analytics tool at multiple levels. Figure 1 shows how the four components enrich the traditional data analytics process based on explicit feedback in the form of labeled data. F |
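The contrast drawn above between passive supervision and active querying can be made concrete with a toy uncertainty-sampling loop: a one-dimensional threshold learner repeatedly asks the oracle to label the pool point closest to its current decision boundary, then refits. The learner and task here are our illustration only, not the paper's envisioned system:

```python
def fit_threshold(labeled):
    """1-D threshold classifier: midpoint between the largest point
    labeled 0 and the smallest point labeled 1 (assumes separability)."""
    lo = max((x for x, y in labeled if y == 0), default=0.0)
    hi = min((x for x, y in labeled if y == 1), default=1.0)
    return (lo + hi) / 2.0

def active_learning(pool, oracle, budget):
    """Uncertainty sampling: query the unlabeled point closest to the
    current decision boundary, refit, and repeat within the budget."""
    labeled, unlabeled = [], list(pool)
    for _ in range(budget):
        t = fit_threshold(labeled)
        x = min(unlabeled, key=lambda p: abs(p - t))
        unlabeled.remove(x)
        labeled.append((x, oracle(x)))
    return fit_threshold(labeled)
```

With a handful of queries this homes in on the true boundary like a binary search, whereas the same accuracy from passively sampled labels would need far more annotations; coactive and interactive learning then add feedback on the model itself, not just on labels.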
Neural Networks and Wavelet De-Noising for Stock Trading and Prediction | In this chapter, neural networks are used to predict the future stock prices and develop a suitable trading system. Wavelet analysis is used to de-noise the time series and the results are compared with the raw time series prediction without wavelet de-noising. Standard and Poor 500 (S&P 500) is used in experiments. We use a gradual data sub-sampling technique, i.e., training the network mostly with recent data, but without neglecting past data. In addition, effects of NASDAQ 100 are studied on prediction of S&P 500. A daily trading strategy is employed to buy/sell according to the predicted prices and to calculate the directional efficiency and the rate of returns for different periods. There are numerous exchange traded funds (ETF’s), which attempt to replicate the performance of S&P 500 by holding the same stocks in the same proportions as the index, and therefore, giving the same percentage returns as S&P 500. Therefore, this study can be used to help invest in any of the various ETFs, which replicates the performance of S&P 500. The experimental results show that neural networks, with appropriate training and input data, can be used to achieve high profits by investing in ETFs based on S&P 500. |
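The chapter's de-noising pipeline is not reproduced in the abstract; the sketch below shows the generic idea of wavelet de-noising with a one-level orthonormal Haar transform: transform the series, threshold the detail coefficients, and invert (the wavelet choice and the hard threshold are illustrative assumptions):

```python
def haar_forward(x):
    """One level of the orthonormal Haar transform: pairwise scaled
    averages (approximation) and differences (detail)."""
    s = 2 ** 0.5
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level back to the original sample order."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def denoise(x, threshold):
    """Hard-threshold the detail coefficients, then reconstruct."""
    approx, detail = haar_forward(x)
    detail = [d if abs(d) > threshold else 0.0 for d in detail]
    return haar_inverse(approx, detail)
```

Small detail coefficients, which mostly carry high-frequency noise, are zeroed; the smoothed series is then what would be fed to the neural network for prediction.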
Urban growth patterns and growth management boundaries in the Central Puget Sound, Washington, 1986–2007 | Many regions of the globe are experiencing rapid urban growth, the location and intensity of which can have negative effects on ecological and social systems. In some locales, planners and policy makers have used urban growth boundaries to direct the location and intensity of development; however the empirical evidence for the efficacy of such policies is mixed. Monitoring the location of urban growth is an essential first step in understanding how the system has changed over time. In addition, if regulations purporting to direct urban growth to specific locales are present, it is important to evaluate if the desired pattern (or change in pattern) has been observed. In this paper, we document land cover and change across six dates (1986, 1991, 1995, 1999, 2002, and 2007) for six counties in the Central Puget Sound, Washington State, USA. We explore patterns of change by three different spatial partitions (the region, each county, 2000 U.S. Census Tracts), and with respect to urban growth boundaries implemented in the late 1990s as part of the state’s Growth Management Act. Urban land cover increased from 8 to 19% of the study area between 1986 and 2007, while lowland deciduous and mixed forests decreased from 21 to 13% and grass and agriculture decreased from 11 to 8%. Land in urban classes outside of the urban growth boundaries increased more rapidly (by area and percentage of new urban land cover) than land within the urban growth boundaries, suggesting that the intended effect of the Growth Management Act to direct growth to within the urban growth boundaries may not have been accomplished by 2007. Urban sprawl, as estimated by the area of land per capita, increased overall within the region, with the more rural counties within commuting distance to cities having the highest rate of increase observed. 
Land cover data is increasingly available and can be used to rapidly evaluate urban development patterns over large areas. Such data are important inputs for policy makers, urban planners, and modelers alike to manage and plan for future population, land use, and land cover changes. |
Medical messages in the media--barriers and solutions to improving medical journalism. | CONTEXT
Medical issues are widely reported in the mass media. These reports influence the general public, policy makers and health-care professionals. This information should be valid, but is often criticized for being speculative, inaccurate and misleading. An understanding of the obstacles medical reporters meet in their work can guide strategies for improving the informative value of medical journalism.
OBJECTIVE
To investigate constraints on improving the informative value of medical reports in the mass media and elucidate possible strategies for addressing these.
DESIGN
We reviewed the literature and organized focus groups, a survey of medical journalists in 37 countries, and semi-structured telephone interviews.
RESULTS
We identified nine barriers to improving the informative value of medical journalism: lack of time, space and knowledge; competition for space and audience; difficulties with terminology; problems finding and using sources; problems with editors and commercialism. Lack of time, space and knowledge were the most common obstacles. The importance of different obstacles varied with the type of media and experience. Many health reporters feel that it is difficult to find independent experts willing to assist journalists, and also think that editors need more education in critical appraisal of medical news. Almost all of the respondents agreed that the informative value of their reporting is important. Nearly everyone wanted access to short, reliable and up-to-date background information on various topics available on the Internet. A majority (79%) was interested in participating in a trial to evaluate strategies to overcome identified constraints.
CONCLUSIONS
Medical journalists agree that the validity of medical reporting in the mass media is important. A majority acknowledge many constraints. Mutual efforts of health-care professionals and journalists employing a variety of strategies will be needed to address these constraints. |
The Triassic in Thailand | The tectonic movements which occurred in Thailand between the Permian and the Jurassic (“Indosinian Orogeny”) caused the uplifting and subsidence of areas which were dominant paleogeographic features during the Triassic. The strong orogenic movements during the Norian caused an alteration of the paleogeographic conditions. The two sedimentation cycles of the Triassic comprised a marine facies development from the Skythian to the early Norian and a terrestrial one from the later Norian to the Rhaetian/Liassic. The development of the facies and structure of the Triassic sediments, as well as the intrusion and effusion of igneous rocks, are due to the subduction of oceanic crust beginning in the Carboniferous and to the collision of the Eurasian with the Indochina Plate during the Norian. |
Modeling, Control, and Flight Testing of a Small Ducted Fan Aircraft | Small ducted fan autonomous vehicles have potential for several applications, especially for missions in urban environments. This paper discusses the use of dynamic inversion with neural network adaptation to provide an adaptive controller for the GTSpy, a small ducted fan autonomous vehicle based on the Micro Autonomous Systems’ Helispy. This approach allows utilization of the entire low speed flight envelope with a relatively poorly understood vehicle. A simulator model is constructed from a force and moment analysis of the vehicle, allowing for a validation of the controller in preparation for flight testing. Data from flight testing of the system is provided. |
Working with men to prevent intimate partner violence in a conflict-affected setting: a pilot cluster randomized controlled trial in rural Côte d’Ivoire | BACKGROUND
Evidence from armed conflict settings points to high levels of intimate partner violence (IPV) against women. Current knowledge on how to prevent IPV is limited-especially within war-affected settings. To inform prevention programming on gender-based violence in settings affected by conflict, we evaluated the impact of adding a targeted men's intervention to a community-based prevention programme in Côte d'Ivoire.
METHODS
We conducted a two-armed, non-blinded cluster randomized trial in Côte d'Ivoire among 12 pair-matched communities spanning government-controlled, UN buffer, and rebel-controlled zones. The intervention communities received a 16-week IPV prevention intervention using a men's discussion group format. All communities received community-based prevention programmes. Baseline data were collected from couples in September 2010 (pre-intervention) and follow-up in March 2012 (one year post-intervention). The primary trial outcome was women's reported experiences of physical and/or sexual IPV in the last 12 months. We also assessed men's reported intention to use physical IPV, attitudes towards sexual IPV, use of hostility and conflict management skills, and participation in gendered household tasks. An adjusted cluster-level intention to treat analysis was used to compare outcomes between intervention and control communities at follow-up.
RESULTS
At follow-up, reported levels of physical and/or sexual IPV in the intervention arm had decreased compared to the control arm (ARR 0.52, 95% CI 0.18-1.51, not significant). Men participating in the intervention reported decreased intentions to use physical IPV (ARR 0.83, 95% CI 0.66-1.06) and improved attitudes toward sexual IPV (ARR 1.21, 95% CI 0.77-1.91). Significant differences were found between men in the intervention and control arms' reported ability to control their hostility and manage conflict (ARR 1.3, 95% CI 1.06-1.58), and participation in gendered household tasks (ARR 2.47, 95% CI 1.24-4.90).
CONCLUSIONS
This trial points to the value of adding interventions working with men alongside community activities to reduce levels of IPV in conflict-affected settings. The intervention significantly influenced men's reported behaviours related to hostility and conflict management and gender equitable behaviours. The decreased mean level of IPV and the differences between intervention and control arms, while not statistically significant, suggest that IPV in conflict-affected areas can be reduced through concerted efforts to include men directly in violence prevention programming. A larger-scale trial is needed to replicate these findings and further understand the mechanisms of change.
TRIAL REGISTRATION
clinicaltrials.gov NCT01803932. |
Using Topic Modeling and Similarity Thresholds to Detect Events | This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically Latent Dirichlet Allocation (LDA), to generate topics which are later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and finally runs our novel Bump-Detection algorithm. Similar topics labeled as Eventy-Topics are then grouped together. The algorithm is demonstrated on two terabyte-sized corpora: a Reuters news corpus and a Twitter corpus. Our method is evaluated on a human-annotated test set and demonstrates its ability to accurately describe and label events in a temporal text corpus. |
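The topic-comparison step can be sketched as follows: each LDA topic is a probability distribution over words, and topics from consecutive daily models are matched by cosine similarity. The dictionary representation and the example word distributions below are illustrative assumptions, not the paper's data.

```python
import math

def cosine_similarity(topic_a, topic_b):
    """Cosine similarity between two topics represented as
    word -> probability dicts (over the union of their vocabularies)."""
    vocab = set(topic_a) | set(topic_b)
    dot = sum(topic_a.get(w, 0.0) * topic_b.get(w, 0.0) for w in vocab)
    na = math.sqrt(sum(v * v for v in topic_a.values()))
    nb = math.sqrt(sum(v * v for v in topic_b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two hypothetical daily topics tracking the same event.
day1 = {"earthquake": 0.5, "rescue": 0.3, "aid": 0.2}
day2 = {"earthquake": 0.4, "aid": 0.4, "relief": 0.2}
sim = cosine_similarity(day1, day2)
```

Pairs of daily topics whose similarity exceeds a chosen threshold would then be chained across days before the bump-detection stage looks for a burst in their prominence.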
Polyphase P4 code usage for target detection of pulse radar using Matlab and GNU Radio | This paper investigates the use of the polyphase P4 code in pulse radar for target detection, with a focus on range resolution. As one of the essential parameters of pulse radar, range resolution determines whether two or more closely spaced targets can be resolved as separate targets. The generation of the phase-coded pulse signal and its processing are performed in Matlab® and GNU Radio using audio waves. The results show that phase coding with the polyphase P4 code provides good pulse compression. In addition, lengthening the code lowers the sidelobe level: a code of length 2000 yields sidelobes below −40 dB. |
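The P4 phase generation and its pulse-compression behavior can be sketched in pure Python. The standard P4 phase formula φ_k = π(k−1)²/N − π(k−1) is assumed, and the code length 64 is an arbitrary illustration (the paper goes up to length 2000); the autocorrelation peak at zero lag is the compressed main lobe.

```python
import cmath
import math

def p4_code(n):
    """Phases of the length-n polyphase P4 code:
    phi_k = pi*(k-1)^2/n - pi*(k-1), for k = 1..n."""
    return [math.pi * k * k / n - math.pi * k for k in range(n)]

def autocorrelation(phases):
    """Aperiodic autocorrelation magnitude of the unit-amplitude
    phase-coded pulse; lag 0 gives the main-lobe peak (= code length),
    the remaining lags are the range sidelobes."""
    s = [cmath.exp(1j * p) for p in phases]
    n = len(s)
    return [abs(sum(s[i] * s[i + lag].conjugate()
                    for i in range(n - lag)))
            for lag in range(n)]

acf = autocorrelation(p4_code(64))
```

The peak-to-sidelobe ratio improves as the code is lengthened, which is the effect the paper reports for its length-2000 code.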
Sparse and Redundant Representations - From Theory to Applications in Signal and Image Processing |
An Autonomous Mobile Robotic System for Surveillance of Indoor Environments | The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot, and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle autonomously general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, like scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums) and buildings, and monitoring of safety equipment. |
Developmental Trajectories in Siblings of Children with Autism: Cognition and Language from 4 Months to 7 Years | We compared the cognitive and language development at 4, 14, 24, 36, 54 months, and 7 years of siblings of children with autism (SIBS-A) to that of siblings of children with typical development (SIBS-TD) using growth curve analyses. At 7 years, 40% of the SIBS-A, compared to 16% of SIBS-TD, were identified with cognitive, language and/or academic difficulties, identified using direct tests and/or parental reports. This sub-group was identified as SIBS-A-broad phenotype (BP). Results indicated that early language scores (14-54 months), but not cognitive scores of SIBS-A-BP and SIBS-A-nonBP were significantly lower compared to the language scores of SIBS-TD, and that the rate of development was also significantly different, thus pinpointing language as a major area of difficulty for SIBS-A during the preschool years. |
Better than SIFT? | Independent evaluation of the performance of feature descriptors is an important part of the process of developing better computer vision systems. In this paper, we compare the performance of several state-of-the art image descriptors including several recent binary descriptors. We test the descriptors on an image recognition application and a feature matching application. Our study includes several recently proposed methods and, despite claims to the contrary, we find that SIFT is still the most accurate performer in both application settings. We also find that general purpose binary descriptors are not ideal for image recognition applications but perform adequately in a feature matching application. |
Controlling Personality-Based Stylistic Variation with Neural Natural Language Generators | Natural language generators for task-oriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large. |
Polyvinylidene fluoride film based nasal sensor to monitor human respiration pattern: An initial clinical study | The design and development of a piezoelectric polyvinylidene fluoride (PVDF) thin film based nasal sensor to monitor the human respiration pattern (RP) from each nostril simultaneously is presented in this paper. The thin film based PVDF nasal sensor is designed in a cantilever beam configuration. Two cantilevers are mounted on a spectacle frame in such a way that the air flow from each nostril impinges on the sensor, causing bending of the cantilever beams. The voltage signal produced by the air-flow-induced dynamic piezoelectric effect yields the corresponding RP. A group of 23 healthy awake human subjects is studied. The RP, in terms of respiratory rate (RR) and respiratory air-flow changes, obtained from the developed PVDF nasal sensor is compared with the RP obtained from a respiratory inductance plethysmograph (RIP) device. The mean RR of the developed nasal sensor (19.65 ± 4.1) and the RIP (19.57 ± 4.1) are found to be almost the same (difference not significant, p > 0.05), with a correlation coefficient of 0.96, p < 0.0001. Any change in the RIP pattern is followed by the same amount of change in the PVDF nasal sensor pattern, with k = 0.815, indicating strong agreement between the PVDF nasal sensor and RIP respiratory air-flow patterns. The developed sensor is simple in design, non-invasive and patient-friendly, and hence shows promise for routine clinical use. The preliminary results show that this new method can have various applications in respiratory monitoring and diagnosis. |
Gender and the effects of an economic empowerment program on attitudes toward sexual risk-taking among AIDS-orphaned adolescent youth in Uganda. | PURPOSE
This article examines gender differences in attitudes toward sexual risk-taking behaviors of acquired immune deficiency syndrome (AIDS)-orphaned youth participating in a randomized control trial testing an economic empowerment intervention in rural Uganda.
METHODS
Adolescents (average age 13.7 years) who had lost one or both parents to AIDS from 15 comparable schools were randomly assigned to either an experimental (n=135) or a control condition (n=142). Adolescents in the experimental condition, in addition to usual care, also received support and incentives to save money toward secondary education.
RESULTS
Findings indicate that although adolescent boys and girls within the experimental condition saved comparable amounts, the intervention appears to have benefited girls, in regard to the attitudes toward sexual risk-taking behavior, in a different way and to a lesser extent than boys.
CONCLUSIONS
Future research should investigate the possibility that adolescent girls might be able to develop equally large improvements in protective attitudes toward sexual risk taking through additional components that address gendered social norms. |
Impact and Sustainability of E-Government Services in Developing Countries: Lessons Learned from Tamil Nadu, India | We find that the presence of village Internet facilities, offering government to citizen services, is positively associated with the rate at which the villagers obtain some of these services. In a study of a rural Internet project in India, we identify a positive correlation for two such Internet services: obtaining birth certificates for children and applications for old age pensions. Both these government services are of considerable social and economic value to the citizens. Villagers report that the Internet based services saved them time, money, and effort compared with obtaining the services directly from the government office. We also find that these services can reduce corruption in the delivery of these services. After over one year of successful operation, however, the e-government program was not able to maintain the necessary level of local political and administrative support to remain institutionally viable. As government officers shifted from the region, or grew to find the program a threat, the e-government services faltered. We argue that this failure was due to a variety of Critical Failure Factors. We end with a simple sustainability failure model. In summary, we propose that the e-government program failed to be politically and institutionally sustainable due to people, management, cultural, and structural factors. |
Traditional Chinese acupuncture and placebo (sham) acupuncture are differentiated by their effects on μ-opioid receptors (MORs) | Controversy remains regarding the mechanisms of acupuncture analgesia. A prevailing theory, largely unproven in humans, is that it involves the activation of endogenous opioid antinociceptive systems and mu-opioid receptors (MORs). This is also a neurotransmitter system that mediates the effects of placebo-induced analgesia. This overlap in potential mechanisms may explain the lack of differentiation between traditional acupuncture and either non-traditional or sham acupuncture in multiple controlled clinical trials. We compared both short- and long-term effects of traditional Chinese acupuncture (TA) versus sham acupuncture (SA) treatment on in vivo MOR binding availability in chronic pain patients diagnosed with fibromyalgia (FM). Patients were randomized to receive either TA or SA treatment over the course of 4 weeks. Positron emission tomography (PET) with (11)C-carfentanil was performed once during the first treatment session and then repeated a month later following the eighth treatment. Acupuncture therapy evoked short-term increases in MOR binding potential, in multiple pain and sensory processing regions including the cingulate (dorsal and subgenual), insula, caudate, thalamus, and amygdala. Acupuncture therapy also evoked long-term increases in MOR binding potential in some of the same structures including the cingulate (dorsal and perigenual), caudate, and amygdala. These short- and long-term effects were absent in the sham group where small reductions were observed, an effect more consistent with previous placebo PET studies. Long-term increases in MOR BP following TA were also associated with greater reductions in clinical pain. These findings suggest that divergent MOR processes may mediate clinically relevant analgesic effects for acupuncture and sham acupuncture. |
Incremental Global Event Extraction | Event extraction is a difficult information extraction task. Li et al. (2014) explore the benefits of modeling event extraction and two related tasks, entity mention and relation extraction, jointly. This joint system achieves state-of-the-art performance in all tasks. However, as a system operating only at the sentence level, it misses valuable information from other parts of the document. In this paper, we present an incremental approach to make the global context of the entire document available to the intra-sentential, state-of-the-art event extractor. We show that our method robustly increases performance on two datasets, namely ACE 2005 and TAC 2015. |
Comparison study on the effect of prenatal administration of high dose and low dose folic acid. | OBJECTIVE
To evaluate the effect of high dose and low dose folic acid on the levels of hemocysteine (Hcy) concentration during the first trimester of pregnancy and at delivery, and to examine the association of Hcy serum levels and preeclampsia.
METHODS
In a single blinded randomized clinical trial, which was conducted in Tabriz, Iran, from 2005-2008, 246 nulliparous pregnant women in 2 similar groups, received folic acid daily from early pregnancy until delivery (5 mg/day in group one and 0.5 mg/ day in group 2). The incidence of hypertension and laboratory changes in the levels of serum Hcy, lactate dehydrogenase, and uric acid in addition to the levels of urine creatinine and protein were compared between the groups.
RESULTS
There was no presence of any type of hypertension in each group. The systolic blood pressures (BP) (mm Hg) at the first trimester were 114.01 +/- 8.78 for group one, 114.16 +/- 9.05 for group 2, and at delivery, 117.24 +/- 6.91 for group one, and 117.23 +/- 11.48 for group 2 (p=0.32). The diastolic BP at the first trimester were 74.90 +/- 7.45 for group one, 73.30 +/- 8.90 for group 2, and at delivery 76.46 +/- 5.58 for group one, and 76.69 +/- 8.62 for group 2 (p=0.42). Although the level of Hcy (micromol/L) decreased significantly at the delivery time in group one (11.81+/- 3.85 decreased to 6.44 +/- 1.88), and 2 (9.08+/- 3.24, decreased to 7.44 +/- 2.99), this decrement was more significant in the first group (p<0.001).
CONCLUSION
The results show that folic acid supplementation throughout pregnancy, irrespective of dosage, could eliminate hypertensive disorders and decreases the serum level of Hcy, although the reduction was more significant in the first (high-dose) group. |
Wheel Torque Distribution Criteria for Electric Vehicles With Torque-Vectoring Differentials | The continuous and precise modulation of the driving and braking torques of each wheel is considered the ultimate goal for controlling the performance of a vehicle in steady-state and transient conditions. To do so, dedicated torque-vectoring (TV) controllers that allow optimal wheel torque distribution under all possible driving conditions have to be developed. Commonly, vehicle TV controllers are based on a hierarchical approach, consisting of a high-level supervisory controller that evaluates a corrective yaw moment and a low-level controller that defines the individual wheel torque reference values. The problem of the optimal individual wheel torque distribution for a particular driving condition can be solved through an optimization-based control-allocation (CA) algorithm, which must rely on the appropriate selection of the objective function. With a newly developed offline optimization procedure, this paper assesses the performance of alternative objective functions for the optimal wheel torque distribution of a four-wheel-drive (4WD) fully electric vehicle. Results show that objective functions based on the minimum tire slip criterion provide better control performance than functions based on energy efficiency. |
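The minimum-torque-norm allocation discussed above has a closed form for the simplest constraint set. The sketch below minimizes the sum of squared wheel torques subject to a total-torque and a yaw-moment constraint; because the constraint Gramian is diagonal for this pair of constraints, the pseudo-inverse solution reduces to an equal split plus a left/right offset. The track width and wheel radius are illustrative assumptions, and a real TV controller would add actuator and tire-friction limits omitted here.

```python
def allocate_torques(t_total, yaw_moment, track=1.6, wheel_radius=0.3):
    """Minimum-norm (sum of squared torques) wheel torque allocation
    for a 4WD electric vehicle. Constraints: the four torques sum to
    t_total, and the left/right torque difference produces the
    requested yaw moment about the vertical axis.
    Wheel order: [front-left, front-right, rear-left, rear-right]."""
    # c converts a left/right torque difference (Nm) into yaw moment (Nm):
    # moment = (track/2) * (F_right - F_left), with F = T / wheel_radius.
    c = track / (2.0 * wheel_radius)
    base = t_total / 4.0          # equal split satisfies the sum constraint
    delta = yaw_moment / (4.0 * c)  # left/right offset satisfies the moment
    return [base - delta, base + delta, base - delta, base + delta]

# Hypothetical request: 400 Nm total drive torque plus 200 Nm yaw moment.
t = allocate_torques(t_total=400.0, yaw_moment=200.0)
```

This corresponds to the "minimum control effort" family of objective functions; the paper's point is that slip-based objectives, which weight each wheel by its vertical load and tire state, outperform such simple norms.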
Recent trends in hierarchic document clustering: A critical review | This article reviews recent research into the use of hierarchic agglomerative clustering methods for document retrieval. After an introduction to the calculation of interdocument similarities and to clustering methods that are appropriate for document clustering, the article discusses algorithms that can be used to allow the implementation of these methods on databases of nontrivial size. The validation of document hierarchies is described using tests based on the theory of random graphs and on empirical characteristics of document collections that are to be clustered. A range of search strategies is available for retrieval from document hierarchies and the results are presented of a series of research projects that have used these strategies to search the clusters resulting from several different types of hierarchic agglomerative clustering method. It is suggested that the complete linkage method is probably the most effective method in terms of retrieval performance; however, it is also difficult to implement in an efficient manner. Other applications of document clustering techniques are discussed briefly; experimental evidence suggests that nearest neighbor clusters, possibly represented as a network model, provide a reasonably efficient and effective means of including interdocument similarity information in document retrieval systems. |
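A naive complete-linkage agglomerative procedure of the kind reviewed above can be sketched in a few lines: repeatedly merge the pair of clusters whose maximum inter-cluster distance is smallest, until no merge stays below a cut-off. The one-dimensional "documents" and the distance threshold are toy assumptions; real document clustering would use, e.g., cosine distance on term vectors and a precomputed similarity matrix.

```python
def complete_linkage(points, threshold, dist):
    """Naive agglomerative clustering with complete linkage: the
    distance between two clusters is the *maximum* pairwise distance
    between their members, which yields compact, tightly bound
    clusters (at O(n^3) cost in this unoptimized form)."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = max(dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break  # no remaining merge is tight enough
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy 1-D "documents": two obvious groups.
docs = [0.0, 0.1, 0.2, 5.0, 5.1]
groups = complete_linkage(docs, threshold=0.5,
                          dist=lambda a, b: abs(a - b))
```

The efficiency concern the review raises is visible here: the all-pairs maximum makes complete linkage expensive, which is why specialized algorithms are needed for databases of nontrivial size.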
Neuroticism, marital interaction, and the trajectory of marital satisfaction. | Theories of how initially satisfied marriages deteriorate or remain stable over time have been limited by a failure to distinguish between key facets of change. The present study defines the trajectory of marital satisfaction in terms of 2 separate parameters--(a) the initial level of satisfaction and (b) the rate of change in satisfaction over time--and seeks to estimate unique effects on each of these parameters with variables derived from intrapersonal and interpersonal models of marriage. Sixty newlywed couples completed measures of neuroticism, were observed during a marital interaction and provided reports of marital satisfaction every 6 months for 4 years. Neuroticism was associated with initial levels of marital satisfaction but had no additional effects on rates of change. Behavior during marital interaction predicted rates of change in marital satisfaction but was not associated with initial levels. |
Molar incisor hypomineralization: review and recommendations for clinical management. | Molar incisor hypomineralization (MIH) describes the clinical picture of hypomineralization of systemic origin affecting one or more first permanent molars (FPMs) that are associated frequently with affected incisors. Etiological associations with systemic conditions or environmental insults during the child's first 3 years have been implicated. The complex care involved in treating affected children must address their behavior and anxiety, aiming to provide a durable restoration under pain-free conditions. The challenges include adequate anaesthesia, suitable cavity design, and choice of restorative materials. Restorations in hypomineralized molars appear to fail frequently; there is little evidence-based literature to facilitate clinical decisions on cavity design and material choice. A 6-step approach to management is described: (1) risk identification; (2) early diagnosis; (3) remineralization and desensitization; (4) prevention of caries and posteruption breakdown; (5) restorations and extractions; and (6) maintenance. The high prevalence of MIH indicates the need for research to clarify etiological factors and improve the durability of restorations in affected teeth. The purpose of this paper was to describe the diagnosis, prevalence, putative etiological factors, and features of hypomineralized enamel in molar incisor hypomineralization and to present a sequential approach to management. |
Poetry of English Romanticism in Russia: Traditions and Translations | Background. The article is devoted to how English romantic poetry has been received in Russia. The relevance of the investigation lies in the need to examine the numerous facts of the reception of English romantic poetry in Russia established by earlier investigators, and to reveal the typological affinity of the writers and their works, in order to reconstruct the history of Russian poetry more fully. Materials and methods. The analysis was based on the works of the following English romantic poets: G. Byron, T. Moore, W. Scott, R. Southey, S. Coleridge and W. Wordsworth, and their Russian renderings of the 19th and early 20th centuries, together with the works of Russian literary critics and publicists. The methods used were cultural-historical, historical-genetic, and historical-typological analysis. Results. In view of the above-stated problems, three important aspects have been highlighted and analyzed: the traditionally narrow perception of the English writers' heritage in both Russian literature and Russian literary criticism; the great popularity of many works by these English writers in Russia within a short period of time; and the transformation of the perception of their works as their popularity grew in Russia along lines different from those in their homeland. Conclusions. The analysis reveals regularities in the Russian perception of English romanticism depending on the artistic preferences of each author and on the specific social and literary trends developing in Russia at the time. |
The Ever-Changing Social Perception of Autism Spectrum Disorders in the United States | This paper aims to examine the comprehensive social perception of autism spectrum disorders (ASDs) within the United States today. In order to study the broad public view of those with ASDs, this study investigates the evolution of the syndrome in both sociological and scientific realms. By drawing on the scientific progression of the syndrome and the mixture of this research with concurrent social issues and media representations, this study infers why such a significant amount of stigmatization has become attached to those with ASDs and how these stigmatizations have varied throughout history. After studying this evolving social perception of ASDs in the United States, the writer details suggestions for the betterment of this awareness, including boosted and specified research efforts, increased collaboration within those experts in autism, and positive visibility of those with ASDs and their families. Overall, the writer suggests that public awareness has increased and thus negative stigmatization has decreased in recent years; however, there remains much to be done to increase general social understanding of ASDs. “Autism is about having a pure heart and being very sensitive... It is about finding a way to survive in an overwhelming, confusing world... It is about developing differently, in a different pace and with different leaps.” -Trisha Van Berkel The identification of autism, in both sociological and scientific terms, has experienced a drastic evolution since its original definition in the early 20th century. From its original designation by Leo Kanner (1943), public understanding of autism spectrum disorders (ASDs) has been shrouded in mystery and misperception. 
The basic core features of all ASDs include problems with basic socialization and communication, strange intonation and facial expressions, and intense preoccupations or repetitive behaviors; however, one important aspect of what makes autism so complex is the wide variation in expression of the disorder (Lord, 2011). When comparing individuals with the same autism diagnosis, one will undoubtedly encounter many different personalities, strengths and weaknesses. This wide variability between individuals diagnosed with autism, along with the lack of basic understanding of the general public, accounts for a significant amount of social stigma in our society today. Social stigma stemming from this lack of knowledge has been reported in varying degrees since the original formation of the diagnosis. Studies conducted over the past two centuries have shown perceived negative stigma from the view of both the autistic individual and the family or caretakers |
Boxlets: A Fast Convolution Algorithm for Signal Processing and Neural Networks | Signal processing and pattern recognition algorithms make extensive use of convolution. In many cases, computational accuracy is not as important as computational speed. In feature extraction, for instance, the features of interest in a signal are usually quite distorted. This form of noise justifies some level of quantization in order to achieve faster feature extraction. Our approach consists of approximating regions of the signal with low degree polynomials, and then differentiating the resulting signals in order to obtain impulse functions (or derivatives of impulse functions). With this representation, convolution becomes extremely simple and can be implemented quite effectively. The true convolution can be recovered by integrating the result of the convolution. This method yields substantial speed-up in feature extraction and is applicable to convolutional neural networks. |
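The boxlet pipeline described in this abstract — approximate, differentiate into impulses, convolve the sparse impulse trains, then integrate to recover the true result — can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation), using discrete first differences in place of true derivatives and piecewise-constant signals as the simplest polynomial approximation:

```python
def derivative(signal):
    """First difference of a zero-padded signal; a piecewise-constant
    signal yields only a handful of nonzero impulses."""
    impulses, prev = {}, 0
    for n, x in enumerate(signal):
        if x != prev:
            impulses[n] = x - prev
        prev = x
    if prev != 0:
        impulses[len(signal)] = -prev  # closing step back to zero
    return impulses

def sparse_conv(a, b):
    """Convolve two impulse trains stored as {index: value} dicts;
    cost scales with the number of impulses, not the signal length."""
    out = {}
    for i, av in a.items():
        for j, bv in b.items():
            out[i + j] = out.get(i + j, 0) + av * bv
    return out

def integrate_twice(impulses, length):
    """Recover the true convolution with two cumulative sums (one
    integration per differentiation applied to the inputs)."""
    dense = [0.0] * length
    for n, v in impulses.items():
        if n < length:
            dense[n] += v
    for _ in range(2):
        acc = 0.0
        for n in range(length):
            acc += dense[n]
            dense[n] = acc
    return dense

# Convolving two boxcars: conv([1,1,1], [1,1]) == [1, 2, 2, 1]
result = integrate_twice(
    sparse_conv(derivative([1, 1, 1]), derivative([1, 1])), 4)
```

Higher-degree polynomial regions work the same way: each extra polynomial degree costs one more differentiation of the inputs and one more integration of the output.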
Emergent literacy in kindergartners with cochlear implants. | OBJECTIVE
A key ingredient to academic success is being able to read. Deaf individuals have historically failed to develop literacy skills comparable with those of their normal-hearing (NH) peers, but early identification and cochlear implants (CIs) have improved prospects such that these children can learn to read at the levels of their peers. The goal of this study was to examine early, or emergent, literacy in these children.
METHOD
Twenty-seven deaf children with CIs who had just completed kindergarten were tested on emergent literacy and on the cognitive and linguistic skills that support emergent literacy, specifically phonological awareness, executive functioning, and oral language. Seventeen kindergartners with NH and eight with hearing loss who used hearing aids served as controls. Outcomes were compared for these three groups of children, regression analyses were performed to see whether predictor variables for emergent literacy differed for children with NH and those with CIs, and factors related to the early treatment of hearing loss and prosthesis configuration were examined for children with CIs.
RESULTS
The performance of children with CIs was roughly 1 SD or more below the mean performance of children with NH on all tasks, except for syllable counting, reading fluency, and rapid serial naming. Oral language skills explained more variance in emergent literacy for children with CIs than for children with NH. Age of first implant explained moderate amounts of variance for several measures. Having one or two CIs had no effect, but children who had some amount of bimodal experience outperformed children who had none on several measures.
CONCLUSIONS
Even deaf children who have benefitted from early identification, intervention, and implantation are still at risk for problems with emergent literacy that could affect their academic success. This finding means that intensive language support needs to continue through at least the early elementary grades. Also, a period of bimodal stimulation during the preschool years can help boost emergent literacy skills to some extent. |
Foggy: A Framework for Continuous Automated IoT Application Deployment in Fog Computing | The traditional Cloud model is not designed to handle latency-sensitive Internet of Things applications. The new trend consists of moving data processing close to where the data were generated. To this end, the Fog Computing paradigm suggests using the compute and storage power of network elements. In such environments, intelligent and scalable orchestration of thousands of heterogeneous devices in complex environments is critical for IoT service providers. In this vision paper, we present a framework, called Foggy, that facilitates dynamic resource provisioning and automated application deployment in Fog Computing architectures. We analyze several applications and identify the requirements that need to be taken into consideration in the design of the Foggy framework. We implemented a proof of concept of continuous deployment of a simple IoT application using Raspberry Pi boards. |
High-bandwidth and low-energy on-chip signaling with adaptive pre-emphasis in 90nm CMOS | Long on-chip wires pose well-known latency, bandwidth, and energy challenges to the designers of high-performance VLSI systems. Repeaters effectively mitigate wire RC effects but do little to improve their energy costs. Moreover, proliferating repeater farms add significant complexity to full-chip integration, motivating circuits to improve wire performance and energy while reducing the number of repeaters. Such methods include capacitive-mode signaling, which combines a capacitive driver with a capacitive load [1,2]; and current-mode signaling, which pairs a resistive driver with a resistive load [3,4]. While both can significantly improve wire performance, capacitive drivers offer added benefits of reduced voltage swing on the wire and intrinsic driver pre-emphasis. As wires scale, slow slew rates on highly resistive interconnects will still limit wire performance due to inter-symbol interference (ISI) [5]. Further improvements can come from equalization circuits on receivers [2] and transmitters [4] that trade off power for bandwidth. In this paper, we extend these ideas to a capacitively driven pulse-mode wire using a transmit-side adaptive FIR filter and a clockless receiver, and show bandwidth densities of 2.2–4.4 Gb/s/µm over 90nm 5mm links, with corresponding energies of 0.24–0.34 pJ/bit on random data. |
Miniature Continuous Coverage Antenna Array for GNSS Receivers | This letter presents a miniature conformal array that provides continuous coverage and good axial ratio from 1100 to 1600 MHz. Concurrently, it maintains greater than 1.5 dBic RHCP gain and return loss less than -10 dB. The four-element array is comprised of two-arm slot spirals with lightweight polymer substrate dielectric loading and a novel termination resistor topology. Multiple elements provide the capability to suppress interfering signals common in GNSS applications. The array, including the feeding network, is 3.5" × 3.5" × 0.8" in size and fits into the FRPA-3 footprint and radome. |
A Nonlinear-Disturbance-Observer-Based DC-Bus Voltage Control for a Hybrid AC/DC Microgrid | DC-bus voltage control is an important task in the operation of a dc or a hybrid ac/dc microgrid system. To improve the dc-bus voltage control dynamics, traditional approaches attempt to measure and feedforward the load or source power in the dc-bus control scheme. However, in a microgrid system with distributed dc sources and loads, the traditional feedforward-based methods need remote measurement with communications. In this paper, a nonlinear disturbance observer (NDO) based dc-bus voltage control is proposed, which does not need the remote measurement and enables the important “plug-and-play” feature. Based on this observer, a novel dc-bus voltage control scheme is developed to suppress the transient fluctuations of dc-bus voltage and improve the power quality in such a microgrid system. Details on the design of the observer, the dc-bus controller and the pulse-width modulation (PWM) dead-time compensation are provided in this paper. The effects of possible dc-bus capacitance variation are also considered. The performance of the proposed control strategy has been successfully verified in a 30 kVA hybrid microgrid including ac/dc buses, battery energy storage system, and photovoltaic (PV) power generation system. |
Air filter particulate loading detection using smartphone audio and optimized ensemble classification | Automotive engine intake filters ensure clean air delivery to the engine, though over time these filters load with contaminants hindering free airflow. Today’s open-loop approach to air filter maintenance has drivers replace elements at predetermined service intervals, causing costly and potentially harmful over- and under-replacement. The result is that many vehicles consistently operate with reduced power, increased fuel consumption, or excessive particulate-related wear which may harm the catalyst or damage machined engine surfaces. We present a method of detecting filter contaminant loading from audio data collected by a smartphone and a stand microphone. Our machine learning approach to filter supervision uses Mel-Cepstrum, Fourier and Wavelet features as input into a classification model and applies feature ranking to select the best-differentiating features. We demonstrate the robustness of our technique by showing its efficacy for two vehicle types and different microphones, finding a best result of 79.7% accuracy when classifying a filter into three loading states. Refinements to this technique will help drivers supervise their filters and aid in optimally timing their replacement. This will result in an improvement in vehicle performance, efficiency, and reliability, while reducing the cost of maintenance to vehicle owners. |
A lung cancer outcome calculator using ensemble data mining on SEER data | We analyze the lung cancer data available from the SEER program with the aim of developing accurate survival prediction models for lung cancer using data mining techniques. Carefully designed preprocessing steps resulted in removal/modification/splitting of several attributes, and 2 of the 11 derived attributes were found to have significant predictive power. Several data mining classification techniques were used on the preprocessed data along with various data mining optimizations and validations. In our experiments, ensemble voting of five decision tree based classifiers and meta-classifiers was found to result in the best prediction performance in terms of accuracy and area under the ROC curve. Further, we have developed an on-line lung cancer outcome calculator for estimating the risk of mortality after 6 months, 9 months, 1 year, 2 years, and 5 years of diagnosis, for which a smaller non-redundant subset of 13 attributes was carefully selected using attribute selection techniques, while trying to retain the predictive power of the original set of attributes. The on-line lung cancer outcome calculator developed as a result of this study is available at http://info.eecs.northwestern.edu:8080/LungCancerOutcome-Calculator/ |
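The ensemble-voting step this abstract credits with the best performance can be illustrated with a plain majority vote over per-classifier predictions. This is a generic sketch of the voting mechanism only, not the authors' SEER pipeline or their specific decision-tree classifiers:

```python
from collections import Counter

def majority_vote(per_classifier_preds):
    """Combine label predictions from several classifiers by majority
    vote; per_classifier_preds is a list of equally long label lists,
    one list per classifier."""
    n_samples = len(per_classifier_preds[0])
    voted = []
    for i in range(n_samples):
        labels = [preds[i] for preds in per_classifier_preds]
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

# Three classifiers, two samples: votes are (0, 0, 1) -> 0 and (0, 1, 1) -> 1
combined = majority_vote([[0, 0], [0, 1], [1, 1]])
```

Libraries such as scikit-learn wrap the same idea (hard voting) and add probability-weighted soft voting, which is often preferable when classifiers output calibrated probabilities.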
Extracting Appraisal Expressions | Sentiment analysis seeks to characterize opinionated or evaluative aspects of natural language text. We suggest here that appraisal expression extraction should be viewed as a fundamental task in sentiment analysis. An appraisal expression is a textual unit expressing an evaluative stance towards some target. The task is to find and characterize the evaluative attributes of such elements. This paper describes a system for effectively extracting and disambiguating adjectival appraisal expressions in English outputting a generic representation in terms of their evaluative function in the text. Data mining on appraisal expressions gives meaningful and non-obvious insights. |
Modeling and Vector Control of Planar Magnetic Levitator | We designed and implemented a magnetically levitated stage with large planar motion capability. This planar magnetic levitator employs four novel permanent-magnet linear motors. Each motor generates vertical force for suspension against gravity, as well as horizontal force for drive. These linear levitation motors can be used as building blocks in the general class of multi-degree-of-freedom motion stages. In this paper, we discuss electromechanical modeling and real-time vector control of such a permanent-magnet levitator. We describe the dynamics in a dq frame introduced to decouple the forces acting on the magnetically levitated moving part, namely, the platen. A transformation similar to the Blondel–Park transformation is derived for commutation of the stator phase currents. We provide test results on step responses of the magnetically levitated stage. It shows 5-nm rms positioning noise in x and y, which demonstrates the applicability of such stages in next-generation photolithography in semiconductor manufacturing. |
Decomposing Irregularly Sparse Matrices for Parallel Matrix-Vector Multiplication | In this work, we show the deficiencies of the graph model for decomposing sparse matrices for parallel matrix-vector multiplication. Then, we propose two hypergraph models which avoid all deficiencies of the graph model. The proposed models reduce the decomposition problem to the well-known hypergraph partitioning problem widely encountered in circuit partitioning in VLSI. We have implemented fast Kernighan-Lin based graph and hypergraph partitioning heuristics and used the successful multilevel graph partitioning tool (Metis) for the experimental evaluation of the validity of the proposed hypergraph models. We have also developed a multilevel hypergraph partitioning heuristic to test the performance of the multilevel approach on hypergraph partitioning. Experimental results on sparse matrices, selected from the Harwell-Boeing collection and the NETLIB suite, confirm both the validity of our proposed hypergraph models and the appropriateness of the multilevel approach to hypergraph partitioning. |
Biomechanical and histomorphometric evaluation of a thin ion beam bioceramic deposition on plateau root form implants: an experimental study in dogs. | UNLABELLED
The aim of this study was to evaluate the biomechanical fixation, bone-to-implant contact, and bone morphology of an ion beam assisted deposition of a 300-500 nm thick Ca- and P-based bioceramic surface on a previously alumina-blasted/acid-etched Ti-6Al-4V implant surface in a dog model.
MATERIALS AND METHODS
Thirty-six 4.5 x 11 mm plateau root form implants, control (alumina-blasted/acid-etched, AB/AE) and test groups (AB/AE + 300-500 nm bioceramic coating, Nanotite), were placed along the proximal tibia of six beagle dogs and remained for 2 and 4 weeks (n = 3 animals per implantation time). Following euthanization, the implants were torqued to interface fracture at approximately 0.196 radians/sec until a 10% maximum load drop was detected. The implants in bone were processed nondecalcified into approximately 30-μm-thick slides for histomorphologic and bone-to-implant contact (BIC) assessment. Statistical analyses for torque to interface fracture were performed using a mixed-model ANOVA, and BIC was evaluated by the chi-square test at the 95% level of significance.
RESULTS
At 4 weeks, significantly higher torque to interface fracture was observed for the Test implant surface. Histomorphologic analysis showed higher degrees of bone organization for test implants compared to control at 2 and 4 weeks. Significantly higher BIC was observed at 4 weeks compared to 2 weeks (no statistical differences between control and test implants).
CONCLUSION
The higher torque to interface fracture and increased bone maturity obtained in this study support that the surface modification comprising a 300-500 nm Ca- and P-based bioceramic coating positively influenced healing around plateau root form implants. |
Virtual world and biometrics as strongholds for the development of innovative port interoperable simulators for supporting both training and R&D | This paper proposes an integrated solution for port M&S involving distributed interoperable simulators of port cranes; the authors propose the architecture, the description of the model, the containerisation of these simulators for guaranteeing maximum mobility, and their integration with biomedical devices. The proposed system is designed to be used in operational training of gantry crane operators as well as for Research and Development (R&D) support in container terminals, port plants and facilities. |
Privacy policies as decision-making tools: an evaluation of online privacy notices | Studies have repeatedly shown that users are increasingly concerned about their privacy when they go online. In response to both public interest and regulatory pressures, privacy policies have become almost ubiquitous. An estimated 77% of websites now post a privacy policy. These policies differ greatly from site to site, and often address issues that are different from those that users care about. They are in most cases the users' only source of information. This paper evaluates the usability of online privacy policies, as well as the practice of posting them. We analyze 64 current privacy policies, their accessibility, writing, content and evolution over time. We examine how well these policies meet user needs and how they can be improved. We determine that significant changes need to be made to current practice to meet regulatory and usability requirements. |
An efficient lane detection algorithm for lane departure detection | In this paper, we propose an efficient lane detection algorithm for lane departure detection; this algorithm is suitable for low computing power systems like automobile black boxes. First, we extract candidate points, which are support points, to extract a hypothesis as two lines. In this step, Haar-like features are used, and this enables us to use an integral image to remove computational redundancy. Second, our algorithm verifies the hypothesis using defined rules. These rules are based on the assumption that the camera is installed at the center of the vehicle. Finally, if a lane is detected, then a lane departure detection step is performed. As a result, our algorithm has achieved a 90.16% detection rate; the processing time is approximately 0.12 milliseconds per frame without any parallel computing. |
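The integral-image trick that lets such an algorithm evaluate Haar-like features without redundant computation can be sketched as follows. This is a generic illustration of the technique; the paper's exact feature set and rule definitions are not reproduced:

```python
def integral_image(img):
    """Summed-area table with a one-cell zero border, so any
    rectangular sum afterwards costs exactly four lookups."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle [x0..x1] x [y0..y1]."""
    return (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
            - ii[y1 + 1][x0] + ii[y0][x0])

def haar_two_rect(ii, x0, y0, x1, y1):
    """Two-rectangle Haar-like response: left half minus right half.
    A strong response marks a vertical intensity edge, the kind of
    bright-lane-on-dark-road transition a candidate point sits on."""
    xm = (x0 + x1) // 2
    return rect_sum(ii, x0, y0, xm, y1) - rect_sum(ii, xm + 1, y0, x1, y1)

ii = integral_image([[1, 2], [3, 4]])
total = rect_sum(ii, 0, 0, 1, 1)      # 1 + 2 + 3 + 4 = 10
edge = haar_two_rect(ii, 0, 0, 1, 1)  # (1 + 3) - (2 + 4) = -2
```

Because every feature evaluation is constant-time after the single table-building pass, scanning many candidate points per frame stays cheap, which is what makes the approach viable on black-box-class hardware.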
Polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), biphenyls (PCBs), and polycyclic aromatic hydrocarbons (PAHs) and 2,3,7,8-TCDD equivalents (TEQs) in sediment from the Hyeongsan River, Korea. | Sediment, pore water and water samples from the Hyeongsan River, Korea were analyzed for several classes of halogenated aromatic hydrocarbons (HAHs) and their dioxin-like activities were evaluated using the in vitro H4IIE-luc bioassay. Polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs), and biphenyls (PCBs) were detected in sediments from all six sampling locations with mean concentrations of 2.8 × 10² pg/g, 190 pg/g, and 61.4 ng/g, dw, respectively. Polycyclic aromatic hydrocarbons (PAHs) were predominated by 4-6 ring compounds with concentrations in the range of 5.30-7680 ng/g, dw. Chemical profiles of target analytes in sediment and water samples revealed that there was a gradient of concentrations along the river from upstream to downstream, which suggested that the primary source was a wastewater reservoir adjacent to a sewage treatment plant (STP). TEQs derived by summing the product of concentrations of individual congeners by their respective relative potencies (REPs or TEFs) ranged from 4.3 × 10⁻¹ to 1.1 × 10³ pg/g, dw. Raw Soxhlet extracts from all six sampling locations induced significant dioxin-like responses in the H4IIE-luc bioassay. TCDD-EQs derived from the H4IIE bioassay ranged from 7 × 10⁻³ to 1.5 × 10³ pg/g, dw, which were significantly correlated with TEQs (r² = 0.994, p < 0.05). Among the three Florisil fractions tested, PCDD/Fs in fraction (F2) induced the greatest magnitude of response (range: 24-83% of TCDD-max.) in the H4IIE-luc assay. Comparison of the TEQ and TCDD-EQ suggested little non-additive interaction between fractions and between AhR-active and inactive compounds. Concentrations of individual congeners as well as TEQs and TCDD-EQs suggest inputs from the industrial center waste stream in the Hyeongsan River. |
A Model of Fluid Flow in Solid Tumors | Solid tumors consist of a porous interstitium and a neoplastic vasculature composed of a network of capillaries with highly permeable walls. Blood flows across the vasculature from the arterial entrance point to the venous exit point, and enters the tumor by convective and diffusive extravasation through the permeable capillary walls. In this paper, an integrated theoretical model of the flow through the tumor is developed. The flow through the interstitium is described by Darcy's law for an isotropic porous medium, the flow along the capillaries is described by Poiseuille's law, and the extravasation flux is described by Starling's law involving the pressure on either side of the capillaries. Given the arterial, the venous, and the ambient pressure, the problem is formulated in terms of a coupled system of integral and differential equations for the vascular and interstitial pressures. The overall hydrodynamics is described in terms of hydraulic conductivity coefficients for the arterial and venous flow rates whose functional form provides an explanation for the singular behavior of the vascular resistance observed in experiments. Numerical solutions are computed for an idealized case where the vasculature is modeled as a single tube, and charts of the hydraulic conductivities are presented for a broad range of tissue and capillary wall conductivities. The results in the physiological range of conditions are found to be in good agreement with laboratory observations. It is shown that the assumption of uniform interstitial pressure is not generally appropriate, and predictions of the extravasation rate based on it may carry a significant amount of error. |
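The three constitutive laws this model couples each reduce to a one-line formula. The sketch below writes them out with illustrative (not physiological) parameter values; the osmotic term of Starling's law is omitted for brevity, matching the abstract's purely pressure-driven description:

```python
import math

def poiseuille_flow(radius, delta_p, mu, length):
    """Volumetric flow rate along a capillary segment (Poiseuille's law):
    Q = pi * R**4 * dP / (8 * mu * L)."""
    return math.pi * radius**4 * delta_p / (8.0 * mu * length)

def darcy_velocity(k, mu, grad_p):
    """Interstitial filtration velocity (Darcy's law for an isotropic
    porous medium): q = -(k / mu) * grad(p)."""
    return -(k / mu) * grad_p

def starling_flux(Lp, p_vascular, p_interstitial):
    """Extravasation volume flux per unit wall area (Starling's law,
    osmotic terms omitted): Jv = Lp * (Pv - Pi)."""
    return Lp * (p_vascular - p_interstitial)

# Illustrative numbers chosen to make the arithmetic easy to check:
q_cap = poiseuille_flow(radius=1.0, delta_p=8.0, mu=1.0, length=math.pi)  # 1.0
q_int = darcy_velocity(k=2.0, mu=1.0, grad_p=-3.0)                        # 6.0
j_wall = starling_flux(Lp=0.5, p_vascular=10.0, p_interstitial=4.0)       # 3.0
```

In the full model these laws are coupled: Poiseuille pressures along the vasculature set the transmural difference that drives the Starling extravasation, which in turn acts as the source for the Darcy flow through the interstitium; the paper's integral-differential system solves for the two pressure fields simultaneously.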
Treatment for Tobacco Dependence: Effect on Brain Nicotinic Acetylcholine Receptor Density | Cigarette smoking leads to upregulation of brain nicotinic acetylcholine receptors (nAChRs), including the common α4β2* nAChR subtype. Although a substantial percentage of smokers receive treatment for tobacco dependence with counseling and/or medication, the effect of a standard course of these treatments on nAChR upregulation has not yet been reported. In the present study, 48 otherwise healthy smokers underwent positron emission tomography (PET) scanning with the radiotracer 2-FA (for labeling α4β2* nAChRs) before and after treatment with either cognitive-behavioral therapy, bupropion HCl, or pill placebo. Specific binding volume of distribution (VS/fP), a measure proportional to α4β2* nAChR density, was determined for regions known to have nAChR upregulation with smoking (prefrontal cortex, brainstem, and cerebellum). In the overall study sample, significant decreases in VS/fP were found for the prefrontal cortex, brainstem, and cerebellum of −20 (±35), −25 (±36), and −25 (±31)%, respectively, which represented movement of VS/fP values toward values found in non-smokers (mean 58.2% normalization of receptor levels). Participants who quit smoking had significantly greater reductions in VS/fP across regions than non-quitters, and correlations were found between reductions in cigarettes per day and decreases in VS/fP for brainstem and cerebellum, but there was no between-group effect of treatment type. Thus, smoking reduction and cessation with commonly used treatments (and pill placebo) lead to decreased α4β2* nAChR densities across brain regions. Study findings could prove useful in the treatment of smokers by providing encouragement with the knowledge that decreased smoking leads to normalization of specific brain receptors. |
Introduction: Paul, Founder of Churches. Cult Foundations and the Comparative Study of Cult Origins | In this introduction to the discussion on James C. Hanges, Paul, Founder of Churches, the significance of the comparative work on the cult founder-figure and typology of cult foundations is discussed. The essay argues that this serves to ground any interpretation of the cult founding work of the apostle Paul in an understanding of the materiality of religion. This gives impetus to a more concrete conceptualisation of Christian origins. Further reflection on this comparative enterprise is offered by means of three discussion foci, namely Discourse, imperial context, spatiality; Diaspora religion; and New Religious Movements. It is argued that the pervasiveness of imperial discourse and its spatial encoding allows us to see Paul’s cult foundations as sites of imperial resistance. Diasporas and diasporic religions provide key illuminations for understanding the broader context of the foundations of cult groups by Paul. Study of new religious movements will also aid in concrete descriptions and analysis of the making of early Christian groups and their organisation. |
Side effects of adjunct light therapy in patients with major depression | Adjunct bright-light therapy has been suggested to augment antidepressant drug treatment in patients with non-seasonal major depression. Side effects of the combined therapy have not been investigated thus far. Therefore, somatic complaints and side effects of combined therapy were evaluated in 28 patients with major depression (DSM-III-R) randomly assigned to either trimipramine or trimipramine and serially applied adjunct bright-light therapy. Response rates were comparable in both treatment groups and rates of newly emergent side effects during treatment were generally low. The most prominent unfavourable side effects of adjunct bright-light therapy as compared with trimipramine monotherapy were aggravated sedation, persisting restlessness, emerging sleep disturbance and decreased appetite as well as the worsening of vertigo. Discriminant analysis revealed that the combination of trimipramine with bright light results in a different side effect profile compared with drug monotherapy. |
Amorphous SiOx nanowires grown on silicon (100) substrates via rapid thermal process of nanodiamond films | Rapid thermal process (RTP) has been carried out on the deposited nanocrystalline diamond (NCD) films. The RTP treatments performed at 800 and 1200 °C have been shown to exert a prominent influence on the morphology and structure of the NCD films. The loss of material at grain boundaries has been observed for both the 800 and 1200 °C RTP treatments. Large-scale amorphous SiOx nanowires with diameters of 30–50 nm and lengths up to 10 μm were synthesized after RTP treatment at 1200 °C for 60 s. The synthesized nanowires were characterized in detail by scanning electron microscopy, transmission electron microscopy, selected area electron diffraction and energy-dispersive X-ray spectrometry analysis. A possible growth mechanism has been proposed to explain the observed phenomenon. |
Fuzzy Process, Hybrid Process and Uncertain Process | This paper first reviews different types of uncertainty. In order to construct fuzzy counterparts of Brownian motion and stochastic calculus, this paper proposes some basic concepts of fuzzy process, including fuzzy calculus and fuzzy differential equation. Those new concepts are also extended to hybrid process and uncertain process. A basic stock model is presented, thus opening up a way to fuzzy financial mathematics. |
Safety and efficacy of CHOP for treatment of diffuse large B-cell lymphoma with different combination antiretroviral therapy regimens: SCULPT study. | BACKGROUND
Use of combination antiretroviral therapy (cART) and cyclophosphamide, doxorubicin, vincristine and prednisone (CHOP) with or without rituximab for treatment of diffuse large B-cell lymphoma (DLBCL) in HIV substantially increases response rates but may also increase toxicity, possibly due to antiretroviral-antineoplastic drug interactions. The objective of this study was to evaluate the frequency of complete remission (CR) of DLBCL in patients treated with CHOP while receiving a protease inhibitor (PI) versus a non-PI-based cART.
METHODS
A retrospective multicentre pilot study was conducted in HIV-infected patients on cART treated for DLBCL with CHOP between 2002 and 2010 in three academic hospitals.
RESULTS
A total of 34 patients were included with 65% and 35% of patients receiving a PI and non-PI-based cART, respectively. Baseline characteristics between groups were similar; overall 85% were male, median age was 43 years, 50% had an International Prognostic Index (IPI) of 2-3 and median CD4+ T-cell count was 225 cells/mm³. CR was achieved in 77% and 58% of patients in the PI and non-PI groups, respectively (P=0.21), with 65% and 63% of patients achieving 2-year overall survival (P=1.00). A multivariate analysis showed that lower IPI score alone was significantly associated with higher CR rates (P=0.05). Toxicity was similar between both groups.
CONCLUSIONS
Similar efficacy and toxicity of CHOP was observed in patients receiving a PI and non-PI-based cART. |
Industrial Control System Simulation and Data Logging for Intrusion Detection System Research | Industrial control system intrusion detection has been a popular research topic for several years, and many intrusion detection systems (IDS) have been proposed in the literature. IDS researchers lack a common framework to train and test proposed algorithms. This leads to an inability to properly compare proposed IDS and limits research progress. This paper documents two approaches to data sharing for the industrial control system IDS research community. First, a network traffic data log captured from a gas pipeline is presented. The gas pipeline data log was captured in a laboratory and includes artifacts of normal operation and cyberattacks. Second, an expandable virtual gas pipeline is presented which includes a human machine interface, programmable logic controller, Modbus/TCP communication, and a Simulink-based gas pipeline model. The virtual gas pipeline provides the ability to model cyberattacks and normal behavior. IDS solutions can overlay the virtual gas pipeline for training and testing. |
Joint visual denoising and classification using deep learning | Visual restoration and recognition are traditionally addressed in a pipeline fashion, i.e. denoising followed by classification. Instead, observing correlations between the two tasks, for example that a clearer image leads to better categorization and vice versa, we propose a joint framework for visual restoration and recognition of handwritten images, inspired by advances in deep autoencoders and multi-modality learning. Our model is a 3-pathway deep architecture with a hidden-layer representation that is shared by multiple inputs and outputs, and each branch can be composed of a multi-layer deep model. Thus, visual restoration and classification can be unified using a shared representation via non-linear mapping, and model parameters can be learnt via backpropagation. Using MNIST and USPS data corrupted with structured noise, the proposed framework performs at least 20% better in classification than separate pipelines, while also recovering clearer images. |
Linking a domain thesaurus to WordNet and conversion to WordNet-LMF | We present a methodology to link domain thesauri to general-domain lexica. This is applied in the framework of the KYOTO project to link the Species2000 thesaurus to the synsets of the English WordNet. Moreover, we study the formalisation of this thesaurus according to the ISO LMF standard and its dialect WordNet-LMF. This conversion will allow Species2000 to communicate with the other resources available in the KYOTO architecture. |
Safe and effective sedation in endoscopic submucosal dissection for early gastric cancer: a randomized comparison between propofol continuous infusion and intermittent midazolam injection | Endoscopic submucosal dissection (ESD) for early gastric cancer (EGC) generally takes longer to perform than conventional endoscopy and usually requires moderate/deep sedation with close surveillance for patient safety. The aim of this study was to compare the safety profiles and recovery scores of propofol continuous infusion and intermittent midazolam (MDZ) injection as sedation for ESD. Sixty EGC patients scheduled for ESDs between August and November 2008 were included in this prospective study and randomly divided into a propofol (P-group, 28 patients) and an MDZ (M-group, 32 patients) group using an odd–even system. The P-group received a 0.8 mg/kg induction dose and a 3 mg/kg/h maintenance dose of 1% propofol using an infusion pump. All patients received 15 mg pentazocine at the start of the ESD and at 60-min intervals thereafter. We recorded and analyzed blood pressure, oxygen saturation and heart rate during and following the procedure and evaluated post-anesthetic recovery scores (PARS) and subsequent alertness scores. The propofol maintenance and total dose amounts were (mean ± standard deviation) 3.7 ± 0.6 mg/kg/h and 395 ± 202 mg, respectively. The mean total dose of MDZ was 10.3 ± 4.5 mg. There were no cases of de-saturation <90% or hypotension <80 mmHg in either group. Alertness scores 15 and 60 min after the procedures were significantly higher in the P-group (4.9/4.9) than in the M-group (4.6/4.5; p < 0.05). The mean PARS 15 and 30 min after the ESDs were significantly higher in the P-group (9.6/9.9) than in the M-group (8.6/9.2; p < 0.01). Based on our results, the ESDs for EGC performed under sedation using propofol continuous infusion were as safe as those performed using intermittent MDZ injection. 
Propofol-treated patients had a quicker recovery profile than those treated with MDZ. We therefore recommend the use of continuous propofol sedation for ESD, but sedation guidelines for the use of propofol are necessary. |
Ford of Europe's Product Sustainability Index | The automotive industry is facing a multitude of sustainability challenges that can also be partly addressed by product design: o Climate change and oil dependency. The growing weight of evidence holds that manmade greenhouse gas emissions are starting to influence the world's climate in ways that affect all parts of the globe (IPCC 2007) – along with growing concerns over the use and availability of fossil carbon. There is a need for timely action, including in vehicle design. o Air quality and other emissions such as noise. Summer smog situations frequently lead to traffic restrictions for vehicles not compliant with the most recent emission standards. Other emissions such as noise affect up to 80 million citizens – much of it caused by the transport sector (roads, railway, aircraft, etc.) (ERF 2007). o Mobility capability. Fulfilling the societal mobility demand is a key factor enabling (sustainable) development. This is challenged where the infrastructure is not aligned to the mobility demand and where the mobility capability of the individual transport modes (cars, trains, etc.) does not fulfil these needs – leading to unnecessary travel time and emissions (traffic jams, non-direct connections, lack of parking opportunities, etc.). In such areas, insufficient infrastructure is the reason for 38% of CO2 vehicle emissions (SINTEF 2007). Industry also has to consider changing mobility needs in aging societies. o Safety. Road accidents (including all related transport modes as well as pedestrians) result in 1.2 million fatalities globally according to the World Bank. o Affordability. As mobility is an important precondition for any development, it is important that all mobility solutions are affordable for the targeted regions and markets. All these challenges are both risks and business opportunities. |
A Comprehensive Analysis of Knowledge Management Cycles | At present, knowledge and its proper management have become an essential issue for every organization. In the modern globalized world, organizations cannot survive in a sustainable way without efficient knowledge management. A knowledge management cycle (KMC) is a process of transforming information into knowledge within an organization, which explains how knowledge is captured, processed, and distributed in an organization. For better performance, organizations require a practical, coherent strategy and a comprehensive KMC. The aim of this study is to examine KMCs and how they play a vital role in the development of organizations. |
Deeply Learned Rich Coding for Cross-Dataset Facial Age Estimation | We propose a method for leveraging publicly available labeled facial age datasets to estimate age from unconstrained face images at the ChaLearn Looking at People (LAP) challenge 2015 [9]. We first learn discriminative age-related representations on multiple publicly available age datasets using deep Convolutional Neural Networks (CNN). Training the CNN is supervised by rich binary codes, and is thus modeled as a multi-label classification problem. The codes represent different age group partitions at multiple granularities, as well as gender information. We then train a regressor from the deep representation to age on the small training dataset provided by the LAP organizer, fusing random forest and quadratic regression with local adjustment. Finally, we evaluate the proposed method on the provided testing data. It obtains a performance of 0.287 and ranks 3rd in the challenge. The experimental results demonstrate that the proposed deep representation is insensitive to cross-dataset bias, and thus generalizable to new datasets collected from other sources. |
Exploring relationships between racism, housing and child illness in remote indigenous communities. | BACKGROUND
Although racism is increasingly acknowledged as a determinant of health, few studies have examined the relationship between racism, housing and child health outcomes.
METHODS
Cross-sectional data from the Housing Improvement and Child Health study collected in ten remote indigenous communities in the Northern Territory, Australia were analysed using hierarchical logistic regression. Carer and householder self-reported racism was measured using a single item and child illness was measured using a carer report of common childhood illnesses. A range of confounders, moderators and mediators were considered, including socio-demographic and household composition, psychosocial measures for carers and householders, community environment, and health-related behaviour and hygienic state of environment.
RESULTS
Carer self-reported racism was significantly associated with child illness in this sample after adjusting for confounders (OR 1.65; 95% CI 1.09 to 2.48). Carer negative affect balance was identified as a significant mediator of this relationship. Householder self-reported racism was marginally significantly associated with child illness in this sample after adjusting for confounders (OR 1.43; 95% CI 0.94 to 2.18, p=0.09). Householder self-reported drug use was identified as a significant mediator of this relationship.
CONCLUSIONS
Consistent with evidence from adult populations and children from other ethnic minorities, this study found that vicarious racism is associated with poor health outcomes among an indigenous child population. |
PURE-LET Image Deconvolution | We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parameterize the deconvolution process as a linear combination of elementary functions, termed as linear expansion of thresholds. This parameterization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate. Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms the state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images and demonstrate the potential applicability of the proposed method for improving the quality of these images. |
Towards an Agile Process for Building Software Product Lines | Software product lines are sets of software systems that share common features. Product lines are built as if they were a family of products, identifying those features that change and those that can be reused. There is an evident incompatibility between the requirements of software product lines and agile practices. We report on experiments that used Feature-Driven Development to build software product lines, and describe the minor extensions that were useful for developing software product lines. Software product lines (SPL) [4] are collections of software systems that share a common set of features. SPLs are an emerging software paradigm allowing for largescale reuse for companies, since software is built as if it were a family of products rather than an individual product. A family is a set of products that have common aspects and predicted variability [4]. Once an SPL has been developed, the process of software development is one of tailoring and configuring a product line, rather than building a product wholesale. Examples of products that have been considered as SPLs include engine controllers, type managers, and anti-lock braking systems. Noteworthy amongst many of these systems is their embedded nature. SPL development is usually a time-consuming and extremely expensive process. Key challenges include identifying features and variation points, capturing the product line architecture, and managing the configuration process. Approaches used for developing SPLs are typically architecture-based, particularly those for safety critical systems such as aero-engine controllers [5]. Models are considered helpful to assist in the feature identification process and in highlighting configurations. Agile development methods, such as Feature-Driven Development (FDD) [3], have evolved to meet a need for increased productivity, while dealing with challenges such as changing requirements. 
SPL methods have evolved to increase productivity, ideally via increased reuse. However, there is an apparent incompatibility between agile practices, and what is needed to develop SPLs. In particular, • The agile principle of emphasising simplicity, and implementing functionality that satisfies the current instead of future requirements, goes against the requirement to support different variation points and configurations in SPLs. • The agile principle of delivering working software frequently contrasts with the substantial up-front development time for an SPL in order to provide infrastructure, which can thereafter be configured and deployed. Despite these apparent incompatibilities, we believe that the SPL development process can benefit from agile development techniques. To evaluate this, we have carried out several experiments in using an agile process to build an SPL [1]. Our approach was to first assess existing agile processes to determine which might provide suitable practices for identifying SPL features, configurations, and variation points. We selected FDD because of its lightweight modelling capabilities, and because it provided substantial guidance on identifying system features, something that must also be done in SPL development. We then applied FDD directly to building a microwave oven software product line. Variants of a microwave considered included one with only a simple cooking facility, one with a weight sensor to gauge temperature and cooking time, and one with built-in recipes. We encountered two difficulties in applying FDD to building SPLs: integrating SPL architecture design into FDD; and incorporating component development in FDD. An architectural description is important for SPL development since it is a part of the SPL core assets and is reused by product development. 
Architecture and component development were integrated into FDD with minor extensions to the overall process; architecture is considered incrementally, following [6], and SPL variations are generated as a result of the agile refactoring practice. As a result of this case study, we constructed an extension to FDD. Two new phases were added: one for consideration of architecture (based on the Architectural Tradeoff Method [2]) and one for SPL component design. An argument as to why this remains an agile process is laid out in detail in [1], but a key point of note is that the architectural and component models that are produced are the simplest and smallest that help in identifying variation points in SPL development. We then applied the revised process to a further case study, an e-commerce system, in order to validate and further explore the approach. Our observations are that an agile process like FDD, which explicitly considers features as first-class artifacts in system development, is well-suited to SPL development, as long as additional consideration of SPL architecture and SPL component design is added to the approach. Full details of the case studies can be found in [1]. |
Optimization of prosthetic foot stiffness to reduce metabolic cost and intact knee loading during below-knee amputee walking: a theoretical study. | Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. 
The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care. |
Lipoblastoma: Clinical Features, Treatment, and Outcome | Lipoblastoma is a rare, benign, encapsulated tumor arising from embryonic white fat. On histology, it typically contains variably differentiated adipocytes, primitive mesenchymal cells, myxoid matrix, and fibrous trabeculae. The tumor occurs primarily in infancy and early childhood. It often occurs in the extremities and trunk, and rarely develops in the head and neck and other sites. Ten cases of histopathologically proven lipoblastoma presenting to our hospital during a 6-year period (2003–2008) were reviewed retrospectively for their clinical presentations, treatment, postoperative outcome, and follow-up. There were five males and five females ranging in age from 6 months to 20 years. The commonest presentation was a painless, rapidly growing mass. Tumors occurred in an extremity (n = 5), head and neck (n = 3), trunk (n = 1), and retroperitoneum (n = 1). Preoperative diagnosis was accurate in only one case. The largest tumor, measuring 25 × 20 × 7 cm and weighing 1.9 kg, was excised from the retroperitoneum. All patients underwent complete surgical excision. Follow-up, ranging from 9 to 76 months, showed no recurrences and no metastases. Lipoblastoma behaves benignly, occurs in both superficial and deep sites, and occasionally attains large size. Complete surgical excision is the treatment of choice and long-term follow-up is required because there is a reported tendency for these tumors to recur. |
Language Understanding in the Wild: Combining Crowdsourcing and Machine Learning | Social media has led to the democratisation of opinion sharing. A wealth of information about public opinions, current events, and authors' insights into specific topics can be gained by understanding the text written by users. However, there is a wide variation in the language used by different authors in different contexts on the web. This diversity in language makes interpretation an extremely challenging task. Crowdsourcing presents an opportunity to interpret the sentiment, or topic, of free-text. However, the subjectivity and bias of human interpreters raise challenges in inferring the semantics expressed by the text. To overcome this problem, we present a novel Bayesian approach to language understanding that relies on aggregated crowdsourced judgements. Our model encodes the relationships between labels and text features in documents, such as tweets, web articles, and blog posts, accounting for the varying reliability of human labellers. It allows inference of annotations that scales to arbitrarily large pools of documents. Our evaluation using two challenging crowdsourcing datasets shows that by efficiently exploiting language models learnt from aggregated crowdsourced labels, we can provide up to 25% improved classifications when only a small portion, less than 4% of documents has been labelled. Compared to the six state-of-the-art methods, we reduce by up to 67% the number of crowd responses required to achieve comparable accuracy. Our method was a joint winner of the CrowdFlower - CrowdScale 2013 Shared Task challenge at the conference on Human Computation and Crowdsourcing (HCOMP 2013). |
A visual tool for ontology alignment to enable geospatial interoperability | In distributed geospatial applications with heterogeneous databases, an ontology-driven approach to data integration relies on the alignment of the concepts of a global ontology that describe the domain, with the concepts of the ontologies that describe the data in the distributed databases. Once the alignment between the global ontology and each distributed ontology is established, agreements that encode a variety of mappings between concepts are derived. In this way, users can potentially query hundreds of geospatial databases using a single query. Using our approach, querying can be easily extended to new data sources and, therefore, to new regions. In this paper, we describe the AgreementMaker, a tool that displays the ontologies, supports several mapping layers visually, presents automatically generated mappings, and finally produces the agreements. |
Withdrawal from Chronic Phencyclidine Treatment Induces Long-Lasting Depression in Brain Reward Function | Phencyclidine (PCP) is a drug of abuse that has rewarding and dysphoric effects in humans. The complex actions of PCP, and PCP withdrawal in particular, on brain reward function remain unclear. The purpose of the present study was to characterize the effects of withdrawal from acute and chronic PCP treatment on brain reward function in rats. A brain stimulation reward procedure was used to evaluate the effects of acute PCP injection (0, 5, or 10 mg/kg) or chronic PCP treatment (0, 10, 15, or 20 mg/kg/day for 14 days delivered via subcutaneous osmotic minipumps) on brain reward function. Withdrawal from acute administration of 5 and 10 mg/kg PCP produced a decrease in brain reward function as indicated by a sustained elevation in brain reward thresholds. When administered chronically, 10, 15, or 20 mg/kg/day PCP induced a progressive dose-dependent potentiation of brain stimulation reward, while cessation of the treatment resulted in significant elevations in reward thresholds reflecting diminished reward. Specifically, withdrawal from 15 or 20 mg/kg/day PCP induced a depression in brain reward function that lasted for the entire month of observation. These results indicate that prolonged continuous administration of high PCP doses facilitates brain stimulation reward, while withdrawal from acute high PCP doses or chronic PCP treatment results in a protracted depression of brain reward function that may be analogous to the dysphoric and anhedonic symptoms observed in PCP dependence, depression, and schizophrenia. |
Effect of HIV-specific immune-based therapy in subjects infected with HIV-1 subtype E in Thailand. | OBJECTIVE
To examine the effect of treatment with an inactivated, gp120-depleted, HIV-1 immunogen (Remune) in 30 Thai subjects infected with HIV-1 subtype E.
DESIGN
Sixty-week open-label study.
METHODS
Thirty HIV-positive volunteers with CD4 cell counts ≥300 × 10⁶/l were given intramuscular injections of Remune into the triceps muscle on day 1 and then at weeks 4, 8, 12, 24, 36, 48 and 60.
RESULTS
Treatment with Remune was well-tolerated and augmented HIV-1-specific immune responses. Furthermore, subjects had a significant increase in CD4 cell count (P < 0.0001), CD4 cell percentage (P < 0.0001), CD8 cell percentage (P < 0.0001), and body weight (P < 0.0001) compared with pretreatment levels. Fourteen subjects with detectable viral load at day 1 showed a decrease at week 60 (P=0.04). Retrospective Western blot analysis showed 23 subjects with increased intensity of antibody bands and 15 patients showed development of new reactivities to HIV proteins, especially towards p17 and p15.
CONCLUSION
These results indicate that HIV-specific immune-based therapeutic approaches such as Remune should be further examined in countries with different clades of HIV-1 and where access to antiviral drug therapies is limited. |
Vulnerable Narcissism Is (Mostly) a Disorder of Neuroticism. | OBJECTIVE
Increasing attention has been paid to the distinction between the dimensions of narcissistic grandiosity and vulnerability. We examine the degree to which basic traits underlie vulnerable narcissism, with a particular emphasis on the importance of Neuroticism and Agreeableness.
METHOD
Across four samples (undergraduate, online community, clinical-community), we conduct dominance analyses to partition the variance predicted in vulnerable narcissism by the Five-Factor Model personality domains, as well as compare the empirical profiles generated by vulnerable narcissism and Neuroticism.
RESULTS
These analyses demonstrate that the lion's share of variance is explained by Neuroticism (65%) and Agreeableness (19%). Similarity analyses were also conducted in which the extent to which vulnerable narcissism and Neuroticism share similar empirical networks was tested using an array of criteria, including self-, informant, and thin slice ratings of personality; interview-based ratings of personality disorder and pathological traits; and self-ratings of adverse events and functional outcomes. The empirical correlates of vulnerable narcissism and Neuroticism were nearly identical (MrICC = .94). Partial analyses demonstrated that the variance in vulnerable narcissism not shared with Neuroticism is largely specific to disagreeableness-related traits such as distrustfulness and grandiosity.
CONCLUSIONS
These findings demonstrate the parsimony of using basic personality to study personality pathology and have implications for how vulnerable narcissism might be approached clinically. |