The Effect of Cotrimoxazole Prophylactic Treatment on Malaria, Birth Outcomes, and Postpartum CD4 Count in HIV-Infected Women
BACKGROUND Limited data exist on cotrimoxazole prophylactic treatment (CPT) in pregnant women, including protection against malaria versus standard intermittent preventive therapy with sulfadoxine-pyrimethamine (IPTp). METHODS Using observational data we examined the effect of CPT in HIV-infected pregnant women on malaria during pregnancy, low birth weight and preterm birth using proportional hazards, logistic, and log binomial regression, respectively. We used linear regression to assess the effect of CPT on CD4 count. RESULTS Data from 468 CPT-exposed and 768 CPT-unexposed women were analyzed. CPT was associated with protection against malaria versus IPTp (hazard ratio: 0.35, 95% Confidence Interval (CI): 0.20, 0.60). After adjustment for time period this effect was not statistically significant (adjusted hazard ratio: 0.66, 95% CI: 0.28, 1.52). Among women receiving and not receiving CPT, rates of low birth weight (7.1% versus 7.6%) and preterm birth (23.5% versus 23.6%) were similar. CPT was associated with lower CD4 counts 24 weeks postpartum in women receiving (-77.6 cells/μL, 95% CI: -125.2, -30.1) and not receiving antiretrovirals (-33.7 cells/μL, 95% CI: -58.6, -8.8). CONCLUSIONS Compared to IPTp, CPT provided comparable protection against malaria in HIV-infected pregnant women, with similar rates of preterm birth and low birth weight. Possible implications of CPT-associated lower CD4 counts postpartum warrant further examination.
Fluticasone furoate: once-daily evening treatment versus twice-daily treatment in moderate asthma
BACKGROUND Inhaled corticosteroids (ICS) are the recommended first-line treatment for asthma but adherence to therapy is suboptimal. The objectives of this study were to compare the efficacy and safety of once-daily (OD) evening and twice-daily (BD) regimens of the novel inhaled corticosteroid fluticasone furoate (FF) in asthma patients. METHODS Patients with moderate asthma (age ≥ 12 years; pre-bronchodilator forced expiratory volume in 1 second (FEV1) 40-85% predicted; FEV1 reversibility of ≥ 12% and ≥ 200 ml) were randomized to FF or fluticasone propionate (FP) regimens in a double-blind, crossover study. Patients were not permitted to have used any ICS for ≥ 8 weeks prior to enrolment and subsequently received doses of FF or FP 200 μg OD, FF or FP 100 μg BD and matching placebo by inhalation for 28 days each. The primary endpoint was Day 28 evening pre-dose (trough) FEV1; non-inferiority of FF 200 μg OD versus FF 100 μg BD was assessed, as was superiority of all active treatments relative to placebo. Adverse events (AEs) and 24-hour urinary cortisol excretion were assessed. RESULTS The intent-to-treat population comprised 147 (FF) and 43 (FP) patients. On Day 28, pre-dose FEV1 showed FF 200 μg OD to be non-inferior (pre-defined limit -110 ml) to FF 100 μg BD (mean treatment difference 11 ml; 95% CI: -35 to +56 ml); all FF and FP regimens were significantly superior to placebo (p ≤ 0.02). AEs were similar to placebo; no serious AEs were reported. Urinary cortisol excretion at Day 28 for FF was lower than placebo (ratios: 200 μg OD, 0.75; 100 μg BD, 0.84; p ≤ 0.02). CONCLUSIONS FF 200 μg OD in the evening is an efficacious and well tolerated treatment for asthma patients and is not inferior to the same total BD dose. TRIAL REGISTRATION Clinicaltrials.gov; NCT00766090.
pigeo: A Python Geotagging Tool
We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.
Local Wavelet Pattern: A New Feature Descriptor for Image Retrieval in Medical CT Databases
A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize medical computed tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of the center pixel with the local neighboring information. In contrast to the local binary pattern, which only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers their relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of the center value with the range of the local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. The novelty of this paper is twofold: 1) encoding local neighboring information with local wavelet decomposition and 2) computing the LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of precision and recall. We also compared the proposed LWP descriptor with other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms the others for CT image retrieval.
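For reference, the classic local binary pattern (LBP) baseline that the abstract contrasts with can be computed as below. This is only an illustrative sketch of plain LBP on a 3×3 neighborhood; the authors' LWP descriptor itself is not reproduced here.

```python
# Illustrative sketch of the classic local binary pattern (LBP) that the
# abstract contrasts with; the LWP descriptor itself is not reproduced here.
def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch (list of 3 rows of 3 ints).

    Each neighbor is compared against the center pixel; neighbors that are
    >= the center contribute a 1-bit, read clockwise from the top-left.
    """
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 25, 50],
         [60, 70, 80]]
print(lbp_code(patch))  # six neighbors exceed the center of 25
```

A histogram of such codes over all pixels is what LBP-style retrieval compares between images; LWP replaces the raw neighbor values with locally wavelet-decomposed ones.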
Harnessing Twitter "Big Data" for Automatic Emotion Identification
User-generated content on Twitter (produced at an enormous rate of 340 million tweets per day) provides a rich source for gleaning people's emotions, which is necessary for a deeper understanding of people's behaviors and actions. Extant studies on emotion identification lack comprehensive coverage of "emotional situations" because they use relatively small training datasets. To overcome this bottleneck, we have automatically created a large emotion-labeled dataset (of about 2.5 million tweets) by harnessing emotion-related hashtags available in the tweets. We have applied two different machine learning algorithms for emotion identification, to study the effectiveness of various feature combinations as well as the effect of the size of the training data on the emotion identification task. Our experiments demonstrate that a combination of unigrams, bigrams, sentiment/emotion-bearing words, and parts-of-speech information is most effective for gleaning emotions. The highest accuracy (65.57%) is achieved with training data containing about 2 million tweets.
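The hashtag-based distant-supervision idea can be sketched in a few lines. This is a hypothetical simplification (the emotion list and the single-hashtag rule are illustrative assumptions, not the paper's exact pipeline): a tweet carrying an emotion hashtag is labeled with that emotion, and the hashtag itself is stripped from the text so a classifier cannot trivially memorize it.

```python
# Hypothetical sketch of distant supervision via emotion hashtags: a tweet
# containing a hashtag such as #anger is labeled with that emotion, and the
# hashtag is removed so the label cannot leak into the features.
EMOTIONS = {"joy", "anger", "fear", "sadness", "surprise", "love"}

def label_by_hashtag(tweet):
    tokens = tweet.split()
    for tok in tokens:
        if tok.startswith("#") and tok[1:].lower() in EMOTIONS:
            text = " ".join(t for t in tokens if t != tok)
            return text, tok[1:].lower()
    return tweet, None  # no emotion hashtag: leave unlabeled

print(label_by_hashtag("Missed my flight again #anger"))
```

Running this over a large tweet stream yields the kind of emotion-labeled corpus the abstract describes, at the cost of some label noise from sarcastic or ambiguous hashtag use.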
Dual port UWB diversity/MIMO antenna with dual band-notch characteristics
A compact two-port diversity/MIMO antenna with dual band-notch characteristics for ultra-wideband (UWB) applications is presented in this paper. The antennas exhibit a good impedance match over the 3.1–12 GHz frequency band, while exhibiting high isolation. Enhanced isolation is achieved with a decoupling structure having circular slots. The decoupling structure provides isolation of more than 20 dB over the whole UWB spectrum. By introducing a U-shaped slot in the main radiator and horizontal stubs in the ground plane, dual band-notch characteristics are achieved in the proposed MIMO design. Two notches are achieved at 5.150–5.825 GHz (WLAN) and 7.25–7.75 GHz (downlink of X-band satellite). The antenna is miniaturized, having dimensions of 22 mm × 36 mm. The results suggest that the proposed diversity/MIMO antenna design can be suitable for portable UWB applications.
Benford's law behavior of Internet traffic
In this paper, we analyze Internet traffic from a different point of view based on Benford's law, an empirical law describing the distribution of leading digits in collections of numbers arising in naturally occurring phenomena. We claim that Benford's law holds for the inter-arrival times of TCP flows in the case of normal traffic. Consequently, any type of anomaly affecting TCP flows, including intentional intrusions or unintended faults and network failures in general, can be detected by investigating the first-digit distributions of the inter-arrival times of TCP SYN packets. In this paper we apply our findings to the detection of intentional attacks, whereas other types of anomalies can be studied in future work. We support our claim with related research indicating that TCP flow inter-arrival times can be modeled by a Weibull distribution with shape parameter less than one, and show the relation between Weibull-distributed data and Benford's law. Finally, we validate our findings on real traffic and achieve encouraging results.
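The claimed link between heavy-tailed Weibull data and Benford's law is easy to check numerically. The sketch below (scale, shape, and sample size are arbitrary choices, not the paper's measurements) compares the first-digit frequencies of simulated Weibull "inter-arrival times" with the Benford probabilities P(d) = log10(1 + 1/d):

```python
import math
import random

# First-digit probabilities under Benford's law: P(d) = log10(1 + 1/d).
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    # Scientific notation puts the leading digit first, e.g. "3.14e-03".
    return int(f"{x:.10e}"[0])

# Simulated Weibull inter-arrival times with shape < 1, as the traffic
# models cited in the paper suggest (scale and sample size are arbitrary).
random.seed(0)
samples = [random.weibullvariate(1.0, 0.5) for _ in range(100000)]
counts = {d: 0 for d in range(1, 10)}
for x in samples:
    counts[first_digit(x)] += 1

for d in range(1, 10):
    print(d, counts[d] / len(samples), round(benford[d], 4))
```

The empirical frequencies decrease from digit 1 to digit 9 and track the Benford curve closely; an anomaly that distorts the inter-arrival distribution would pull these frequencies away from log10(1 + 1/d).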
Fuzzy Cognitive Maps
Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.
Stationary-phase quorum-sensing signals affect autoinducer-2 and gene expression in Escherichia coli.
Quorum sensing via autoinducer-2 (AI-2) has been identified in different strains, including those from Escherichia, Vibrio, Streptococcus, and Bacillus species, and previous studies have suggested the existence of additional quorum-sensing signals working in the stationary phase of Escherichia coli cultures. To investigate the presence and global effect of these possible quorum-sensing signals other than AI-2, DNA microarrays were used to study the effect of stationary-phase signals on the gene expression of early exponential-phase cells of the AI-2-deficient strain E. coli DH5alpha. For statistically significant differential gene expression (P < 0.05), 14 genes were induced by supernatants from a stationary culture and 6 genes were repressed, suggesting the involvement of indole (induction of tnaA and tnaL) and phosphate (repression of phoA, phoB, and phoU). To study the stability of the signals, the stationary-phase supernatant was autoclaved and was used to study its effect on E. coli gene expression. Three genes were induced by autoclaved stationary-phase supernatant, and 34 genes were repressed. In total, three genes (ompC, ptsA, and btuB) were induced and five genes (nupC, phoB, phoU, argT, and ompF) were repressed by both fresh and autoclaved stationary-phase supernatants. Furthermore, supernatant from E. coli DH5alpha stationary culture was found to repress E. coli K-12 AI-2 concentrations by 4.8-fold +/- 0.4-fold, suggesting that an additional quorum-sensing system in E. coli exists and that gene expression is controlled as a network with different signals working at different growth stages.
Determining employee awareness using the Human Aspects of Information Security Questionnaire (HAIS-Q)
It is increasingly acknowledged that many threats to an organisation’s computer systems can be attributed to the behaviour of computer users. To quantify these human-based information security vulnerabilities, we are developing the Human Aspects of Information Security Questionnaire (HAIS-Q). The aim of this paper was twofold. The first aim was to outline the conceptual development of the HAIS-Q, including validity and reliability testing. The second aim was to examine the relationship between knowledge of policy and procedures, attitude towards policy and procedures, and behaviour when using a work computer. Results from 500 Australian employees indicate that knowledge of policy and procedures had a stronger influence on attitude towards policy and procedures than on self-reported behaviour. This finding suggests that training and education will be more effective if they outline not only what is expected (knowledge) but also provide an understanding of why this is important (attitude). Plans for future research to further develop and test the HAIS-Q are outlined.
Caring for country: The development of a formalised structure for land management on aboriginal lands within the Northern Land Council region of the Northern Territory
In Australia, Aboriginal people constitute 2.1% of the total population and own 14% of the landmass while in the Northern Territory (NT), Aboriginal people constitute 28.5% of the population and own over 40% of the landmass with a further 10% under claim.
Cdv/dt Induced Turn-On in Synchronous Buck Regulators
Cdv/dt induced turn-on of the synchronous MOSFET deteriorates performance in synchronous buck regulators. We will discuss this problem and provide several solutions that can reduce its effects. SYNCHRONOUS BUCK REGULATOR The synchronous buck topology is becoming popular in powering ultra-fast CPU cores. A standard buck circuit is shown in Figure 1(a) and a synchronous buck in Figure 1(b). As shown in Figure 1(b), by replacing the freewheeling diode with a MOSFET, the standard buck regulator is converted into a synchronous buck topology. This topology provides higher efficiency than the standard buck circuit. Typically a Schottky diode is paralleled with MOSFET Q2, but it is omitted from this paper because it is not required to understand and solve the Cdv/dt induced turn-on problem. Ideal synchronous buck regulator waveforms are illustrated in Figure 2(a). The control MOSFET Q1 is used to regulate the output voltage by adjusting its duty factor. When Q1 is turned off, the inductor current of Lout continues to flow through either the synchronous MOSFET Q2 or its body diode. Figure 1. (a) Standard buck topology. (b) Synchronous buck topology. Figure 2. (a) Ideal waveforms in a synchronous buck voltage regulator; (b) waveforms due to Cdv/dt induced turn-on at Q2. Dead-times td12 and td21, as shown in Figure 2(a), are introduced to prevent the cross conduction that would occur if the Q1 and Q2 gate drive signals overlapped. During the dead time, only the body diode of Q2 conducts and the drain voltage of Q2 is clamped to minus one diode drop below ground.
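The mechanism can be sanity-checked with a back-of-the-envelope calculation. A fast rising switch-node edge couples through the gate-drain capacitance Cgd and develops a voltage across the gate resistance; a common first-order estimate of the induced gate spike, with a first-order RC settling term, is shown below. All component values here are hypothetical, chosen only to illustrate the order of magnitude:

```python
# Back-of-the-envelope check for Cdv/dt induced turn-on (component values
# below are hypothetical). A rising switch-node edge drives current
# Cgd * dv/dt into the gate node; with gate resistance Rg and input
# capacitance Ciss = Cgd + Cgs, a first-order estimate of the spike is
#   Vgs_peak ~= (dv/dt) * Rg * Cgd * (1 - exp(-t_r / (Rg * Ciss)))
import math

dv_dt = 1e9      # 1 V/ns switch-node slew rate
t_r = 10e-9      # 10 ns rise time of the edge
Rg = 2.0         # total gate-path resistance, ohms
Cgd = 0.5e-9     # gate-drain (Miller) capacitance, farads
Cgs = 4.5e-9     # gate-source capacitance, farads
Ciss = Cgd + Cgs

vgs_peak = dv_dt * Rg * Cgd * (1 - math.exp(-t_r / (Rg * Ciss)))
print(f"induced gate spike ~ {vgs_peak:.2f} V")  # compare against Vth (~1-2 V)
```

With these illustrative values the spike is about 0.63 V; if it approached the MOSFET threshold voltage, Q2 would momentarily turn on, which is exactly the failure mode the paper addresses (and why lower Rg or a higher Cgs/Cgd ratio helps).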
Color brings relief to human vision
In natural scenes, chromatic variations, and the luminance variations that are aligned with them, mainly arise from surfaces such as flowers or painted objects. Pure or near-pure luminance variations, on the other hand, mainly arise from inhomogeneous illumination such as shadows or shading. Here, I provide evidence that knowledge of these color–luminance relationships is built into the machinery of the human visual system. When a pure-luminance grating is added to a differently oriented chromatic grating, the resulting 'plaid' appears to spring into three-dimensional relief, an example of 'shape-from-shading'. By psychophysical measurements, I found that the perception of shape-from-shading in the plaid was triggered when the chromatic and luminance gratings were not aligned, and suppressed when the gratings were aligned. This finding establishes a new role for color vision in determining the three-dimensional structure of an image: one that exploits the natural relationships that exist between color and luminance in the visual world.
Chinese Word Segmentation by Classification of Characters
During the process of Chinese word segmentation, two main problems occur: segmentation ambiguities and unknown word occurrences. This paper describes a method to solve the segmentation problem. First, we use a dictionary-based approach to segment the text, applying the Maximum Matching algorithm forwards (FMM) and backwards (BMM). Based on the differences between FMM and BMM, and on the context, we apply a classification method based on Support Vector Machines to re-assign the word boundaries. In so doing, we use the output of a dictionary-based approach and then apply a machine-learning-based approach to solve the segmentation problem. Experimental results show that our model achieves an F-measure of 99.0 for overall segmentation under the condition that there are no unknown words in the text, and an F-measure of 95.1 if unknown words exist.
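The two dictionary-based passes can be sketched as follows. This is a toy forward/backward maximum matching implementation over a tiny illustrative dictionary; the SVM re-classification stage described in the abstract is not reproduced here.

```python
# Toy forward/backward maximum matching with a small illustrative
# dictionary; the SVM boundary re-assignment stage is not reproduced.
def fmm(text, vocab, max_len=4):
    """Forward maximum matching: greedily take the longest dictionary
    word starting at the current position (single chars always match)."""
    out, i = [], 0
    while i < len(text):
        for L in range(min(max_len, len(text) - i), 0, -1):
            if L == 1 or text[i:i + L] in vocab:
                out.append(text[i:i + L])
                i += L
                break
    return out

def bmm(text, vocab, max_len=4):
    """Backward maximum matching: same idea, scanning from the end."""
    out, j = [], len(text)
    while j > 0:
        for L in range(min(max_len, j), 0, -1):
            if L == 1 or text[j - L:j] in vocab:
                out.append(text[j - L:j])
                j -= L
                break
    return out[::-1]

vocab = {"研究", "研究生", "生命", "命", "起源"}
text = "研究生命起源"  # classic ambiguous example
print(fmm(text, vocab))  # ['研究生', '命', '起源']
print(bmm(text, vocab))  # ['研究', '生命', '起源']
```

The two passes disagree on exactly the ambiguous span, and such FMM/BMM disagreements are the signal the abstract's classifier uses to decide the final word boundaries.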
FurcaNet: An end-to-end deep gated convolutional, long short-term memory, deep neural networks for single channel speech separation
Deep gated convolutional networks have proved very effective in single-channel speech separation. However, current state-of-the-art frameworks often train the gated convolutional networks in the time-frequency (TF) domain. Such an approach limits perceptual scores such as the signal-to-distortion ratio (SDR) upper bound of the separated utterances and also fails to exploit an end-to-end framework. In this paper we present a simple and effective end-to-end approach to monaural speech separation, which consists of a deep gated convolutional neural network (GCNN) that takes the mixed utterance of two speakers and maps it to two separated utterances, where each utterance contains only one speaker’s voice. In addition, long short-term memory (LSTM) is employed for long-term temporal modeling. For the objective, we propose to train the network by directly optimizing utterance-level SDR in a permutation invariant training (PIT) style. Our experiments on the public WSJ0-2mix corpus demonstrate that this new scheme produces more discriminative separated utterances and leads to performance improvements on the speaker separation task.
Coordination polymers of Fe(iii) and Al(iii) ions with TCA ligand: distinctive fluorescence, CO2 uptake, redox-activity and oxygen evolution reaction.
Fe and Al belong to different groups in the periodic table, one from the p-block and the other from the d-block. In spite of their different groups, they have the similarity of exhibiting a stable 3+ oxidation state. Here we have prepared Fe(iii) and Al(iii) based coordination polymers in the form of metal-organic gels with the 4,4',4''-tricarboxyltriphenylamine (TCA) ligand, namely Fe-TCA and Al-TCA, and evaluated some important physicochemical properties. Specifically, the electrical conductivity, redox-activity, porosity, and electrocatalytic activity (oxygen evolution reaction) of the Fe-TCA system were noted to be remarkably higher than those of the Al-TCA system. As for the photophysical properties, almost complete quenching of the fluorescence originating from TCA was observed in case of the Fe-TCA system, whereas for the Al-TCA system a significant retention of fluorescence with red-shifted emission was observed. Quantum mechanical calculations based on density functional theory (DFT) were performed to unravel the origin of such discriminative behaviour of these coordination polymer systems.
Coverage-based Neural Machine Translation
The attention mechanism advanced state-of-the-art neural machine translation (NMT) by jointly learning to align and translate. However, attentional NMT ignores past alignment information, which leads to over-translation and under-translation problems. To address this, we maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which guides NMT to pay more attention to the untranslated source words. Experiments show that coverage-based NMT significantly improves both alignment and translation quality over NMT without coverage.
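The coverage idea can be illustrated numerically. In the toy sketch below (fixed alignment scores and a hand-set penalty weight are illustrative assumptions, not the paper's learned model), attention weights over three source positions are accumulated into a coverage vector, and already-covered positions are penalized in later steps:

```python
# Minimal numeric sketch of a coverage vector: attention weights over the
# source positions accumulate step by step, and a penalty on already-covered
# positions redirects future attention toward untranslated words.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(scores, coverage, penalty=2.0):
    # Down-weight source positions that have already received attention.
    return softmax([s - penalty * c for s, c in zip(scores, coverage)])

coverage = [0.0, 0.0, 0.0]
alphas = []
for _ in range(3):                                # three decoding steps
    alpha = attend([2.0, 1.0, 0.5], coverage)    # fixed toy alignment scores
    coverage = [c + a for c, a in zip(coverage, alpha)]
    alphas.append(alpha)
    print([round(a, 3) for a in alpha])
```

At the first step almost two thirds of the attention mass lands on the highest-scoring source word; by the third step the coverage penalty has spread the attention toward the positions that were previously under-attended, which is the behavior that mitigates over- and under-translation.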
Scaling Up Programming by Demonstration for Intelligent Tutoring Systems Development: An Open-Access Web Site for Middle School Mathematics Learning
Intelligent tutoring systems (ITSs), which provide step-by-step guidance to students in complex problem-solving activities, have been shown to enhance student learning in a range of domains. However, they tend to be difficult to build. Our project investigates whether the process of authoring an ITS can be simplified while at the same time maintaining the characteristics that make ITSs effective, and also maintaining the ability to support large-scale tutor development. Specifically, our project tests whether authoring tools based on programming-by-demonstration techniques (developed in prior research) can support the development of a large-scale, real-world tutor. We are creating an open-access Web site, called Mathtutor (http://webmathtutor.org), where middle school students can solve math problems with step-by-step guidance from an ITS. The Mathtutor site fields example-tracing tutors, a novel type of ITS built "by demonstration," without programming, using the cognitive tutor authoring tools (CTATs). The project's main contribution is that it represents a stringent test of large-scale tutor authoring through programming by demonstration. A secondary contribution is that it tests whether an open-access site (i.e., a site that is widely and freely available) with software tutors for math learning can attract and sustain user interest and learning on a large scale.
Low side-lobe substrate integrated cavity antenna array using unequal microstrip-ridge gap waveguide feeding network at 94 GHz
In this paper, a 4×32 low side-lobe slot antenna array using unequal microstrip-ridge gap waveguide (GWG) feeding network at 94 GHz is presented. The slot antenna array is built by two double-sided printed circuit boards (PCBs). The top one employs SIW-based 2×2 subarrays and the bottom one is a broadband 2×16 way unequal GWG feeding network. Applying Taylor amplitude weighting in the 16 way GWG feeding network, low side-lobe performance is achieved. The unequal T-junction dividers with phase compensation are proposed and designed for various output ratios. Simulated results show that the antenna array achieves 5 GHz bandwidth with a peak gain of 26.3 dBi at 94 GHz. Across the entire band, low side lobe level (SLL) below −20 dB is realized.
Markov Models for Written Language Identification
The paper presents a Markov chain-based method for automatic written language identification. Given a training document in a specific language, each word can be represented as a Markov chain of letters. Treating the entire training document as a set of Markov chains, the initial and transition probabilities can be calculated and serve as a Markov model for that language. Given a string in an unknown language, the maximum likelihood decision rule is used to identify its language. Experimental results showed that the proposed method achieved a lower error rate and faster identification than the current n-gram method.
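The approach above can be sketched in a few lines. The toy corpora, smoothing constant, and alphabet size below are illustrative assumptions, not the paper's data: each language model stores initial-letter and letter-transition counts, and an unknown word is assigned to the language maximizing its log-likelihood.

```python
# Sketch of letter-Markov-chain language ID with tiny toy corpora: train
# initial/transition counts per language, then apply the maximum
# likelihood decision rule to an unknown string.
import math
from collections import defaultdict

def train(words):
    init, trans, totals = defaultdict(int), defaultdict(int), defaultdict(int)
    for w in words:
        init[w[0]] += 1
        for a, b in zip(w, w[1:]):
            trans[a, b] += 1
            totals[a] += 1
    return init, trans, totals, len(words)

def log_likelihood(word, model, alpha=0.5, alphabet=26):
    init, trans, totals, n = model
    # Additive smoothing so unseen letters/transitions keep nonzero mass.
    ll = math.log((init[word[0]] + alpha) / (n + alpha * alphabet))
    for a, b in zip(word, word[1:]):
        ll += math.log((trans[a, b] + alpha) / (totals[a] + alpha * alphabet))
    return ll

models = {
    "en": train(["the", "and", "that", "with", "this", "then"]),
    "nl": train(["het", "een", "van", "zij", "dat", "niet"]),
}

def identify(word):
    return max(models, key=lambda lang: log_likelihood(word, models[lang]))

print(identify("than"))
print(identify("zien"))
```

Even with six training words per language, the transition statistics (e.g. the frequent English "th" bigram versus the Dutch-only "z" initial) are enough to separate the two toy "languages".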
Levamisole treatment in steroid-sensitive and steroid-resistant nephrotic syndrome
Since 1992 we have treated 11 children with frequently relapsing steroid-sensitive (n=6) or steroid-resistant (n=5) nephrotic syndrome with levamisole. All had been non-responsive to other immunosuppressive medication before levamisole treatment. All steroid-sensitive patients had signs of steroid toxicity. At least 1 kidney biopsy had been performed prior to study in each patient. Five children had minimal glomerular changes and the other 6 focal segmental glomerular sclerosis. The patients were treated with levamisole (2.5 mg/kg per 48 h) for at least 2 months (up to 18 months, median 10 months). Two patients had additional immunosuppression (cyclosporine A) during levamisole treatment. All patients with steroid-sensitive nephrotic syndrome became free of proteinuria within 2 months and have remained in remission after discontinuation of levamisole (follow-up time 8–50 months, median 24 months). None of the children with steroid-resistant nephrotic syndrome experienced a remission. Side effects were observed in 2 patients and included a granulocytopenia and a severe psoriasis-like cutaneous reaction; both were reversible after discontinuation of levamisole. We conclude that levamisole is of benefit in steroid-sensitive nephrotic syndrome but not in steroid-resistant nephrotic syndrome.
"It is always on my mind": women's experiences of their bodies when living with hirsutism.
Many women suffer from excessive hair growth, often in combination with polycystic ovarian syndrome (PCOS). It is unclear how hirsutism influences such women's experiences of their bodies. Our aim is to describe and interpret women's experiences of their bodies when living with hirsutism. Interviews were conducted with 10 women with hirsutism. We used a qualitative latent content analysis. Four closely intertwined themes were disclosed: the body was experienced as a yoke, a freak, a disgrace, and as a prison. Hirsutism deeply affects women's experiences of their bodies in a negative way.
Realtime Dynamic 3D Facial Reconstruction for Monocular Video In-the-Wild
With the increasing amount of video recorded using 2D mobile cameras, techniques for recovering dynamic 3D facial models from these monocular videos have become a necessity for many image and video editing applications. While methods based on parametric 3D facial models can reconstruct the 3D shape in dynamic environments, large structural changes are ignored. Structure-from-motion methods can reconstruct these changes but assume the object to be static. To address this problem, we present a novel method for realtime dynamic 3D facial tracking and reconstruction from videos captured in uncontrolled environments. Our method can track the deforming facial geometry and reconstruct external objects that protrude from the face, such as glasses and hair. It also allows users to move around and perform facial expressions freely without degrading the reconstruction quality.
Boundaries of VP and VNP
One fundamental question in the context of the geometric complexity theory approach to the VP vs. VNP conjecture is whether VP = VP-bar, where VP is the class of families of polynomials that are of polynomial degree and can be computed by arithmetic circuits of polynomial size, and VP-bar is the class of families of polynomials that are of polynomial degree and can be approximated infinitesimally closely by arithmetic circuits of polynomial size. The goal of this article is to study the conjecture in (Mulmuley, FOCS 2012) that VP-bar is not contained in VP. Towards that end, we introduce three degenerations of VP (i.e., sets of points in VP-bar), namely the stable degeneration Stable-VP, the Newton degeneration Newton-VP, and the p-definable one-parameter degeneration VP*. We also introduce analogous degenerations of VNP. We show that Stable-VP ⊆ Newton-VP ⊆ VP* ⊆ VNP, and Stable-VNP = Newton-VNP = VNP* = VNP. The three notions of degenerations and the proof of this result shed light on the problem of separating VP-bar from VP. Although we do not yet construct explicit candidates for the polynomial families in VP-bar \ VP, we prove results which tell us where not to look for such families. Specifically, we demonstrate that the families in Newton-VP \ VP based on semi-invariants of quivers would have to be non-generic by showing that, for many finite quivers (including some wild ones), any Newton degeneration of a generic semi-invariant can be computed by a circuit of polynomial size. We also show that the Newton degenerations of perfect matching Pfaffians, monotone arithmetic circuits over the reals, and Schur polynomials have polynomial-size circuits.
Acoustical Sound Database in Real Environments for Sound Scene Understanding and Hands-Free Speech Recognition
This paper reports on a project for the collection of sound scene data. Such data are necessary for studies of sound source localization, sound retrieval, sound recognition, and hands-free speech recognition in real acoustical environments. There are many kinds of sound scenes in real environments, each characterized by its sound sources and room acoustics. The number of combinations of sound sources, source positions, and rooms is huge in real acoustical environments. However, the sound in such environments can be simulated by convolution of isolated sound sources with impulse responses. As isolated sound sources, a hundred kinds of non-speech sounds as well as speech sounds are collected. The impulse responses are collected in various acoustical environments. In this paper, the progress of our sound scene database project and its application to environmental sound recognition are described.
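The simulation idea above, that a reverberant recording is the dry source convolved with a room impulse response, is plain discrete convolution. A minimal sketch with made-up toy sequences (not the database's actual signals):

```python
# Minimal illustration of the simulation idea: a "room" recording is the
# discrete convolution of a dry source signal with a room impulse response
# (both sequences here are made-up toy data).
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

dry = [1.0, 0.5, -0.25]            # toy source samples
rir = [1.0, 0.0, 0.6, 0.0, 0.3]    # toy impulse response: direct path + echoes
wet = convolve(dry, rir)
print(wet)
```

Because convolution is separable from the source, one recorded impulse response per room lets any of the isolated source recordings be "placed" in that room, which is what makes the combinatorial explosion of sources × positions × rooms tractable.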
Transfiguring portraits
People may look dramatically different by changing their hair color, hair style, when they grow older, in a different era style, or a different country or occupation. Some of those may transfigure appearance and inspire creative changes, some not, but how would we know without physically trying? We present a system that enables automatic synthesis of limitless numbers of appearances. A user inputs one or more photos (as many as they like) of his or her face, text queries an appearance of interest (just like they'd search an image search engine) and gets as output the input person in the queried appearance. Rather than fixing the number of queries or a dataset our system utilizes all the relevant and searchable images on the Internet, estimates a doppelgänger set for the inputs, and utilizes it to generate composites. We present a large number of examples on photos taken with completely unconstrained imaging conditions.
Abortion and medicine: A sociopolitical history
“(T)here is every indication that abortion is an absolutely universal phenomenon, and that it is impossible even to construct an imaginary social system in which no woman would ever feel at least compelled to abort [1].” So concluded an anthropologist after an exhaustive review of materials from 350 ancient and preindustrial societies. Beyond the stark fact of its universality, abortion throughout history exhibits a number of other distinctive features. First is the willingness on the part of women seeking abortion and those aiding them to defy laws and social convention; in every society that has forbidden abortion, a culture of illegal provision has emerged. Second, to a far greater degree than is the case with most other medical procedures, the status of abortion has been inextricably bound up with larger social and political factors, such as changes in women’s political power or in the population objectives of a society. Finally, the mere fact of legality does not necessarily imply universal access to abortion services. Crucial factors in the availability of abortion include the structure of health care services, and especially the willingness of the medical profession to provide abortion. With these points in mind, this chapter presents a brief historical overview of abortion provision, including the role of social movements among physicians and other clinicians in both facilitating and impeding the availability of abortion services.
Tuning of PID Controllers Based on Bode's Ideal Transfer Function
This paper presents a new strategy for tuning PID controllers based on a fractional reference model. The model is represented as an ideal closed-loop system whose open-loop is given by the Bode’s ideal transfer function. The PID controller parameters are determined by the minimization of the integral square error (ISE) between the time responses of the desired fractional reference model and of the system with the PID controller. The resulting closed-loop system (with the PID controller) has the desirable feature of being robust to gain variations with step responses exhibiting an iso-damping property. Several examples are presented that demonstrate the effectiveness and validity of the proposed methodology.
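The ISE criterion the method minimizes is easy to evaluate by simulation. The sketch below is illustrative only (a first-order plant, a PI controller, and hand-picked gains, not the paper's fractional reference model or optimizer): it simulates a unit-step response with Euler integration and accumulates the integral square error.

```python
# Illustrative discrete-time sketch (not the paper's optimization): simulate
# a first-order plant dy/dt = -y + u under a PI controller and evaluate the
# integral square error (ISE) of the step response that tuning methods of
# this kind minimize. Plant, gains, and step size are arbitrary choices.
def step_response_ise(kp, ki, t_end=10.0, dt=0.001):
    y, integ, ise = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                    # error against unit step reference
        integ += e * dt
        u = kp * e + ki * integ        # PI control law
        y += dt * (-y + u)             # Euler step of the plant
        ise += e * e * dt              # accumulate integral square error
    return y, ise

y_final, ise = step_response_ise(kp=2.0, ki=1.0)
print(round(y_final, 3), round(ise, 3))
```

A tuning procedure in the spirit of the paper would wrap this evaluation in a minimizer over the controller parameters, with the error measured against the fractional reference model's response instead of the raw step.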
A Flexible Millimeter-Wave Channel Sounder With Absolute Timing
This paper presents a novel ultrawideband wireless spread spectrum millimeter-wave (mmWave) channel sounder that supports both a wideband sliding correlator mode and a real-time spread spectrum mode, also known as wideband correlation or direct correlation. Both channel sounder modes are capable of absolute propagation delay (time of flight) measurements with up to 1 GHz of radio frequency null-to-null bandwidth, and can measure multipath with a 2-ns time resolution. The sliding correlator configuration facilitates long-distance measurements with angular spread and delay spread for up to 185 dB of maximum measurable path loss. The real-time spread spectrum mode is shown to support short-range, small-scale temporal, and Doppler measurements (minimum snapshot sampling interval of 32.753 μs) with a substantial dynamic fading range of 40 dB for human blockage and dynamic urban scenarios. The channel sounder uses field programmable gate arrays, analog-to-digital converters, digital-to-analog converters, and low-phase-noise rubidium standard references for frequency/time synchronization and absolute time delay measurements. Using propagation theory, several methods are presented here to calibrate and verify the accuracy of the channel sounder, and an improved diffraction model for human blockage, based on the METIS model but now including directional antenna gains, is developed from measurements using the channel sounder. The mmWave channel sounder described here may be used for accurate spatial and temporal ray-tracing calibration, to identify individual multipath components, to measure antenna patterns, for constructing spatial profiles of mmWave channels, and for developing statistical channel impulse response models in time and space.
Supporting Cancer Patients in Illness Management: Usability Evaluation of a Mobile App
BACKGROUND Mobile phones and tablets currently represent a significant presence in people's everyday lives. They enable access to different information and services independent of current place and time. Such widespread connectivity offers significant potential in different app areas including health care. OBJECTIVE Our goal was to evaluate the usability of the Connect Mobile app. The mobile app enables mobile access to the Connect system, an online system that supports cancer patients in managing health-related issues. Along with symptom management, the system promotes better patient-provider communication, collaboration, and shared decision making. The Connect Mobile app enables access to the Connect system over both mobile phones and tablets. METHODS The study consisted of usability tests of a high fidelity prototype with 7 cancer patients where the objectives were to identify existing design and functionality issues and to provide patients with a real look-and-feel of the mobile system. In addition, we conducted semistructured interviews to obtain participants' feedback about app usefulness, identify the need for new system features and design requirements, and measure the acceptance of the mobile app and its features within everyday health management. RESULTS The study revealed a total of 27 design issues (13 for mobile apps and 14 for tablet apps), which were mapped to source events (ie, errors, requests for help, participants' concurrent feedback, and moderator observation). We also applied usability heuristics to identify violations of usability principles. The majority of violations were related to enabling ease of input, screen readability, and glanceability (15 issues), as well as supporting an appropriate match between systems and the real world (7 issues) and consistent mapping of system functions and interactions (4 issues). 
Feedback from participants also showed the cancer patients' requirements for support systems and how these needs are influenced by different context-related factors, such as type of access terminal (eg, desktop computer, tablet, mobile phone) and phases of illness. Based on the observed results, we proposed design and functionality recommendations that can be used for the development of mobile apps for cancer patients to support their health management process. CONCLUSIONS Understanding and addressing users' requirements is one of the main prerequisites for developing useful and effective technology-based health interventions. The results of this study outline different user requirements related to the design of the mobile patient support app for cancer patients. The results will be used in the iterative development of the Connect Mobile app and can also inform other developers and researchers in development, integration, and evaluation of mobile health apps and services that support cancer patients in managing their health-related issues.
Modeling the reaction mechanisms of the amide hydrolysis in an N-(o-carboxybenzoyl)-L-amino acid.
Reaction mechanisms of the amide hydrolysis from the protonated, neutral, and deprotonated forms of N-(o-carboxybenzoyl)-l-amino acid have been investigated by use of the B3LYP density functional method. Our calculations reveal that in the amide hydrolysis the reaction barrier is significantly lower in solution than that in the gas phase, in contrast with the mechanism for imide formation in which the solvent has little influence on the reaction barrier. In the model reactions, the water molecules function both as a catalyst and as a reactant. The reaction mechanism starting from the neutral form of N-(o-carboxybenzoyl)-l-amino acid, which corresponds to pH 0-3, is concluded to be the most favored, and a concerted mechanism is more favorable than a stepwise mechanism. This conclusion is in agreement with experimental observations that the optimal pH range for amide hydrolysis of N-(o-carboxybenzoyl)-l-leucine is pH 0-3 where N-(o-carboxybenzoyl)-l-leucine is predominantly in its neutral form. We suggest that besides the acid-catalyzed mechanism the addition-elimination mechanism is likely to be an alternative choice for cleaving an amide bond. For the reaction mechanism initiated by protonation at the amidic oxygen (hydrogen ion concentration H(0) < -1), the reaction of the model compound with two water molecules lowers the transition barrier significantly compared with that involving a single water molecule.
Preventing Adolescent Suicide.
The adolescent at risk for suicidal preoccupation and behavior has become an increasing concern for schools and communities. This paper presents some of the causes of teen suicide, things adults should know about adolescent suicide prevention, and what can be done to help such youth. The transition to adolescence is a complex time when many values may be questioned. Family dysfunctions, such as poor communication skills, resistance to grieving, difficulties with single parenting, and abusive interactions can further confuse this already difficult period. Likewise, environmental pressures, such as academic achievement, constant mobility, and the availability of drugs, can lead to depression and the inability to cope with stress. It is emphasized that knowledge is the most effective tool in preventing suicide. Adults should be aware of the myths associated with suicide, such as the myth that adolescents who talk about suicide never actually attempt suicide or that suicide is hereditary. Adults also must be able to recognize the profile of the suicidal adolescent so that referral and intervention can take place. Included in this profile are behaviors such as a lack of concern about personal welfare, verbal cues, and altered behavioral patterns and personality traits. Adults can help an adolescent who manifests an interest in suicide by expressing their concern for the child, developing a rapport, and facilitating a meeting with a counselor or crisis team member.
An adaptive coding model of neural function in prefrontal cortex
The prefrontal cortex has a vital role in effective, organized behaviour. Both functional neuroimaging in humans and electrophysiology in awake monkeys indicate that a fundamental principle of prefrontal function might be adaptive neural coding — in large regions of the prefrontal cortex, neurons adapt their properties to carry specifically information that is relevant to current concerns, producing a dense, distributed representation of related inputs, actions, rewards and other information. A model based on such adaptive coding integrates the role of the prefrontal cortex in working memory, attention and control. Adaptive coding points to new perspectives on several basic questions, including mapping of cognitive to neurophysiological functions, the influences of task content and difficulty, and the nature of frontal lobe specializations.
Modulation Scheme for Improved Operation of an RB-IGBT-Based Resonant Inverter Applied to Domestic Induction Heating
Domestic induction appliances require power converters that feature high efficiency and accurate power control in a wide range of operating conditions. To achieve this, modulation techniques play a key role in optimizing the power converter operation. In this paper, a series resonant inverter featuring reverse-blocking insulated gate bipolar transistors and an optimized modulation technique are proposed. An analytical study of the converter operation is performed, and the main simulation results are shown. The proposed topology reduces both conduction and switching losses, significantly increasing the power converter efficiency. Moreover, the proposed modulation technique achieves linear output power control, improving the final appliance performance. The results derived from this analysis are tested by means of an experimental prototype, verifying the feasibility of the proposed converter and modulation technique.
Transient Nanoscopic Phase Separation in Biological Lipid Membranes Resolved by Planar Plasmonic Antennas.
Nanoscale membrane assemblies of sphingolipids, cholesterol, and certain proteins, also known as lipid rafts, play a crucial role in facilitating a broad range of important cell functions. Whereas on living cell membranes lipid rafts have been postulated to have nanoscopic dimensions and to be highly transient, the existence of a similar type of dynamic nanodomains in multicomponent lipid bilayers has been questioned. Here, we perform fluorescence correlation spectroscopy on planar plasmonic antenna arrays with different nanogap sizes to assess the dynamic nanoscale organization of mimetic biological membranes. Our approach takes advantage of the highly enhanced and confined excitation light provided by the nanoantennas together with their outstanding planarity to investigate membrane regions as small as 10 nm in size with microsecond time resolution. Our diffusion data are consistent with the coexistence of transient nanoscopic domains in both the liquid-ordered and the liquid-disordered microscopic phases of multicomponent lipid bilayers. These nanodomains have characteristic residence times between 30 and 150 μs and sizes around 10 nm, as inferred from the diffusion data. Thus, although microscale phase separation occurs on mimetic membranes, nanoscopic domains also coexist, suggesting that these transient assemblies might be similar to those occurring in living cells, which in the absence of raft-stabilizing proteins are poised to be short-lived. Importantly, our work underscores the high potential of photonic nanoantennas to interrogate the nanoscale heterogeneity of native biological membranes with ultrahigh spatiotemporal resolution.
Association between gastrointestinal events and persistence with osteoporosis therapy: analysis of administrative claims of a U.S. managed care population.
BACKGROUND A large proportion of patients do not persist with osteoporosis (OP) therapy. Gastrointestinal (GI) events (e.g., gastroesophageal reflux disease and nausea/vomiting) are common among OP patients receiving OP therapy and may impact persistence with treatment. OBJECTIVE To examine the association of GI events and persistence with OP therapy. METHODS Using a large U.S. administrative claims database, we studied women aged ≥ 55 years who received oral bisphosphonate (BIS) as their first OP therapy from 2002-2009. The index date was the first pharmacy claim date recorded for oral BIS therapy; the baseline period was 12 months pre-index, and follow-up was 12 months post-index. Patients were considered persistent with therapy if they had continuous refills of the index drug class without additional drug therapy from a different drug class from the index date until the end of the follow-up period with no gaps in supply greater than 45 days. Discontinuation was defined as the first gap greater than 45 days during which there was no evidence of refills of OP medication. The association between post-treatment GI events and the risk of discontinuation or switching was modeled with Cox regression stratified by presence of baseline GI events and adjusted for baseline clinical and demographic characteristics. RESULTS Of the 75,593 women who met eligibility criteria, 59.9% discontinued BIS; 39.3% were persistent; and 0.5% switched to non-BIS. GI events were diagnosed in 20,073 patients (26.6%) during baseline and in 21,142 (28.0%) in the post-treatment period (12-month follow-up post-index). Patients with post-treatment GI diagnosis were 35.6% more likely to discontinue or switch treatment (HR = 1.356, 95% CI = 1.318-1.396) during the 12-month follow-up compared with those without post-treatment GI diagnosis. 
GI events that occurred closer to treatment discontinuation or switching were associated with a greater risk of discontinuation or switching: 37.9% (HR = 1.379, 95% CI = 1.338-1.421) for GI events within 6 months of discontinuation or switching and 45.6% (HR = 1.456, 95% CI = 1.408-1.505) for GI events within 3 months of discontinuation or switching. CONCLUSIONS Among women aged 55 years or older in a U.S. managed care population, post-treatment GI events were associated with a higher risk of discontinuation of oral BIS or switching to non-BIS.
Topic Detection and Tracking
The Topic Detection and Tracking (TDT) research program has been running for five years, starting with a pilot study and including yearly open and competitive evaluations since then. In this chapter we define the basic concepts of TDT and provide historical context for the concepts. In describing the various TDT evaluation tasks and workshops, we provide an overview of the technical approaches that have been used and that have succeeded.
Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging.
A retinal imaging instrument that integrates adaptive optics (AO), scanning laser ophthalmoscopy (SLO), and retinal tracking components was built and tested. The system uses a Hartmann-Shack wave-front sensor (HS-WS) and a MEMS-based deformable mirror (DM) for AO correction of high-resolution, confocal SLO images. The system includes a wide-field line-scanning laser ophthalmoscope for easy orientation of the high-magnification SLO raster. The AO system corrected ocular aberrations to <0.1 μm RMS wave-front error. An active retinal tracker with a custom processing board sensed and corrected eye motion with a bandwidth exceeding 1 kHz. We demonstrate tracking accuracy down to 6 μm RMS for some subjects (typical performance: 10-15 μm RMS). The system has the potential to become an important tool for clinicians and researchers for vision studies and the early detection and treatment of retinal diseases.
Comparison of three one-question, post-task usability questionnaires
Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.
Eruption of an impacted canine in an adenomatoid odontogenic tumor treated with combined orthodontic and surgical therapy.
An adenomatoid odontogenic tumor is an uncommon asymptomatic lesion that is often misdiagnosed as a dentigerous cyst. It originates from the odontogenic epithelium. Enucleation and curettage is the usual treatment of choice. Marsupialization may be attempted instead of extraction of the impacted tooth, since it provides an opportunity for tooth eruption. This case report is the first to report on the eruption of an impacted canine in an adenomatoid odontogenic tumor treated with combined orthodontics and marsupialization. The impacted canine erupted uneventfully, with no evidence of recurrence 3 years after the treatment.
Delay-Aware Scheduling and Resource Optimization With Network Function Virtualization
To accelerate the implementation of network functions/middle boxes and reduce the deployment cost, recently, the concept of network function virtualization (NFV) has emerged and become a topic of much interest attracting the attention of researchers from both industry and academia. Unlike the traditional implementation of network functions, a software-oriented approach for virtual network functions (VNFs) creates more flexible and dynamic network services to meet a more diversified demand. Software-oriented network functions bring along a series of research challenges, such as VNF management and orchestration, service chaining, VNF scheduling for low latency and efficient virtual network resource allocation with NFV infrastructure, among others. In this paper, we study the VNF scheduling problem and the corresponding resource optimization solutions. Here, the VNF scheduling problem is defined as a series of scheduling decisions for network services on network functions and activating the various VNFs to process the arriving traffic. We consider VNF transmission and processing delays and formulate the joint problem of VNF scheduling and traffic steering as a mixed integer linear program. Our objective is to minimize the makespan/latency of the overall VNFs' schedule. Reducing the scheduling latency enables cloud operators to service (and admit) more customers, and cater to services with stringent delay requirements, thereby increasing operators' revenues. Owing to the complexity of the problem, we develop a genetic algorithm-based method for solving the problem efficiently. Finally, the effectiveness of our heuristic algorithm is verified through numerical evaluation. We show that dynamically adjusting the bandwidths on virtual links connecting virtual machines, hosting the network functions, reduces the schedule makespan by 15%-20% in the simulated scenarios.
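The genetic-algorithm idea can be illustrated on a stripped-down version of the scheduling problem: assign jobs (stand-ins for VNF processing tasks) to machines so as to minimize the makespan. The job lengths, population size, and operators below are illustrative assumptions, not the paper's formulation, which also models traffic steering and virtual-link bandwidths.

```python
# Hedged sketch: a toy genetic algorithm for makespan minimization, in the
# spirit of the paper's GA-based VNF scheduling heuristic. All constants are
# illustrative assumptions.
import random

random.seed(7)
JOBS = [4, 7, 2, 9, 3, 6, 5, 8]   # processing times of VNF tasks
MACHINES = 3                       # servers hosting VNF instances

def makespan(assign):
    # assign[i] = machine index for job i; makespan = busiest machine's load.
    loads = [0] * MACHINES
    for job, m in zip(JOBS, assign):
        loads[m] += job
    return max(loads)

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(MACHINES) for _ in JOBS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(JOBS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # mutation: move one job
                child[random.randrange(len(JOBS))] = random.randrange(MACHINES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
# The load lower bound here is ceil(44/3) = 15, and a makespan-15 partition
# exists, e.g. {9,6}, {8,7}, {4,5,3,2}.
print("best makespan:", makespan(best))
```

The real problem additionally chains VNFs into services and couples scheduling with link-bandwidth allocation, which is what makes the exact MILP hard and motivates the heuristic.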
Data Compression and Harmonic Analysis
In this article we review some recent interactions between harmonic analysis and data compression. The story goes back of course to Shannon's R(D) theory in the case of Gaussian stationary processes, which says that transforming into a Fourier basis followed by block coding gives an optimal lossy compression technique; practical developments like transform-based image compression (JPEG) have been inspired by this result. In this article we also discuss connections perhaps less familiar to the Information Theory community, growing out of the field of harmonic analysis. Recent harmonic analysis constructions, such as wavelet transforms and Gabor transforms, are essentially optimal transforms for transform coding in certain settings. Some of these transforms are under consideration for future compression standards, like JPEG-2000. We discuss some of the lessons of harmonic analysis in this century. Typically, the problems and achievements of this field have involved goals that were not obviously related to practical data compression, and have used a language not immediately accessible to outsiders. Nevertheless, through an extensive generalization of what Shannon called the 'sampling theorem', harmonic analysis has succeeded in developing new forms of functional representation which turn out to have significant data compression interpretations. We explain why harmonic analysis has interacted with data compression, and we describe some interesting recent ideas in the field that may affect data compression in the future.
GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders
Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sidestep hurdles associated with linearization of such discrete structures by having a decoder output a probabilistic fully-connected graph of a predefined maximum size directly at once. Our method is formulated as a variational autoencoder. We evaluate on the challenging task of molecule generation.
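The decoder's central trick can be sketched without any deep-learning machinery: emit an N x N matrix of independent edge probabilities for a fixed maximum graph size, from which discrete graphs are sampled. The linear "decoder" with random weights below is a stand-in for a trained network; the sizes are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch (not the paper's model): a decoder that outputs a probabilistic
# fully-connected graph of fixed maximum size in one shot, plus sampling of a
# discrete graph from it. Weights are random stand-ins for a trained network.
import math, random

random.seed(0)
N_MAX, LATENT = 5, 3  # maximum graph size and latent dimension (assumptions)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Decoder": a random linear map from latent space to one logit per potential
# undirected edge.
n_edges = N_MAX * (N_MAX - 1) // 2
W = [[random.gauss(0, 1) for _ in range(LATENT)] for _ in range(n_edges)]

def decode(z):
    # Return a symmetric matrix of edge probabilities.
    probs = [[0.0] * N_MAX for _ in range(N_MAX)]
    e = 0
    for i in range(N_MAX):
        for j in range(i + 1, N_MAX):
            p = sigmoid(sum(w * zi for w, zi in zip(W[e], z)))
            probs[i][j] = probs[j][i] = p
            e += 1
    return probs

def sample_graph(probs):
    # Draw one discrete graph (upper-triangular adjacency) from the matrix.
    return [[1 if j > i and random.random() < probs[i][j] else 0
             for j in range(N_MAX)] for i in range(N_MAX)]

z = [random.gauss(0, 1) for _ in range(LATENT)]
P = decode(z)
G = sample_graph(P)
print("edge probabilities, row 0:", ["%.2f" % p for p in P[0]])
print("sampled adjacency:", G)
```

Outputting the whole probabilistic graph at once is what lets the model avoid linearizing the discrete structure; the full method additionally predicts node and edge labels and uses graph matching to align the output with the target during training.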
Dynamic load balancing on single- and multi-GPU systems
The computational power provided by many-core graphics processing units (GPUs) has been exploited in many applications. The programming techniques currently employed on these GPUs are not sufficient to address problems exhibiting irregular and unbalanced workloads. The problem is exacerbated when trying to effectively exploit multiple GPUs concurrently, which are commonly available in many modern systems. In this paper, we propose a task-based dynamic load-balancing solution for single- and multi-GPU systems. The solution allows load balancing at a finer granularity than what is supported in current GPU programming APIs, such as NVIDIA's CUDA. We evaluate our approach using both micro-benchmarks and a molecular dynamics application that exhibits significant load imbalance. Experimental results with a single-GPU configuration show that our fine-grained task solution can utilize the hardware more efficiently than the CUDA scheduler for unbalanced workloads. On multi-GPU systems, our solution achieves near-linear speedup, load balance, and significant performance improvement over techniques based on standard CUDA APIs.
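Why fine-grained dynamic balancing beats static partitioning for irregular workloads can be shown with a toy model: each task goes to whichever worker frees up first (the effect of idle workers pulling from a shared queue), versus splitting the task list into equal contiguous blocks up front. This is only a scheduling illustration with made-up costs; the actual system targets CUDA GPUs.

```python
# Hedged sketch: static block partitioning vs. fine-grained dynamic assignment
# for an irregular workload. Task costs are illustrative.
def static_makespan(costs, n):
    # Static: split the task list into n contiguous blocks up front.
    chunk = (len(costs) + n - 1) // n
    return max(sum(costs[i:i + chunk]) for i in range(0, len(costs), chunk))

def dynamic_makespan(costs, n):
    # Dynamic: each task is taken by the worker that frees up first
    # (greedy list scheduling, the effect of a shared task queue).
    loads = [0] * n
    for c in costs:
        i = loads.index(min(loads))
        loads[i] += c
    return max(loads)

# Irregular workload: a few expensive tasks clustered at the front.
costs = [20, 18, 16, 14] + [1] * 32
print("static makespan :", static_makespan(costs, 4))   # 73: one block gets all big tasks
print("dynamic makespan:", dynamic_makespan(costs, 4))  # 25: loads level out
```

With total work 100 over 4 workers, the dynamic schedule reaches the ideal makespan of 25, while the static split leaves one worker with all four expensive tasks.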
Corruption Cycles
Political corruption has been a persistent phenomenon throughout history and across societies. It is found today in many different forms and degrees in all types of political systems, in developing countries as well as in Western democracies. The phenomenon is not solely or even largely the consequence of a moral fault or cultural backwardness. The root cause of corruption, in Europe as well as in Japan, Latin America and the United States, is found in the mixture of economic and political power. We take corruption to consist in the illegitimate use of public roles and resources for private benefit, where 'private' often refers to large groups such as political parties. This definition has the advantage of subsuming many different kinds of corrupt behaviour, ranging from public officers' use of their position to maximize personal gain by dispensing public benefits to the implementation of policies that violate the common interest in favour of special interests, such as granting large firms the monopoly of their services within a particular sector. Since this definition of corruption is based upon legal norms, not public opinion or social norms, it has the advantage of directing our attention to the contrast between official and social norms that is present in most societies. The literature on corruption is a large corpus of descriptive studies, but a full-fledged theory of corruption is yet to come. As a result, many dimensions of corruption have been left unexplored. One such problem is the life-cycle of corruption. If corruption is endemic, what are the forces that allow it to prevail? What permits it to continue? An answer to this question cannot be separated from an analysis of the political and economic effects of corruption. There has been widespread disagreement among scholars studying the phenomenon regarding the direction of its effects. The so-called Moralists maintained that corruption is harmful, as it impedes development and erodes the legitimacy of institutions.
Revisionists point instead to the possible benefits of corruption: it
Experimental Evidence for pKa-Driven Asynchronicity in C-H Activation by a Terminal Co(III)-Oxo Complex.
C-H activation by transition metal oxo complexes is a fundamental reaction in oxidative chemistry carried out by both biological and synthetic systems. This centrality has motivated efforts to understand the patterns and mechanisms of such reactivity. We have therefore thoroughly examined the C-H activation reactivity of the recently synthesized and characterized late transition metal oxo complex PhB(tBuIm)3CoIIIO. Precise values for the pKa and BDFE(O-H) of the conjugates of this complex have been experimentally determined and provide insight into the observed reactivity. The activation parameters for the reaction between this complex and 9,10-dihydroanthracene have also been measured and compared to previous literature examples. Evaluation of the rates of reaction of PhB(tBuIm)3CoIIIO with a variety of hydrogen atom donors demonstrates that the reactivity of this complex is dependent on the pKa of the substrate of interest rather than the BDE(C-H). This observation runs counter to the commonly cited reactivity paradigm for many other transition metal oxo complexes. Experimental and computational analysis of C-H activation reactions by PhB(tBuIm)3CoIIIO reveals that the transition state for these processes contains significant proton transfer character. Nevertheless, additional experiments strongly suggest that the reaction does not occur via a stepwise process, leading to the conclusion that C-H activation by this CoIII-oxo complex proceeds by a pKa-driven "asynchronous" concerted mechanism. This result supports a new pattern of reactivity that may be applicable to other systems and could result in alternative selectivity for C-H activation reactions mediated by transition metal oxo complexes.
Research issues in symbiotic simulation
Symbiotic simulation is a paradigm in which a simulation system and a physical system are closely associated with each other. This close relationship can be mutually beneficial. The simulation system benefits from real-time measurements about the physical system which are provided by corresponding sensors. The physical system, on the other side, may benefit from the effects of decisions made by the simulation system. An important concept in symbiotic simulation is that of the what-if analysis process which is concerned with the evaluation of a number of what-if scenarios by means of simulation. Symbiotic simulation and related paradigms have become popular in recent years because of their ability to dynamically incorporate real-time sensor data. In this paper, we explain different types of symbiotic simulation and give an overview of the state of the art. In addition, we discuss common research issues that have to be addressed when working with symbiotic simulation. While some issues have been adequately addressed, there are still research issues that remain open.
Serum galactomannan versus a combination of galactomannan and polymerase chain reaction-based Aspergillus DNA detection for early therapy of invasive aspergillosis in high-risk hematological patients: a randomized controlled trial.
BACKGROUND The benefit of the combination of serum galactomannan (GM) assay and polymerase chain reaction (PCR)-based detection of serum Aspergillus DNA for the early diagnosis and therapy of invasive aspergillosis (IA) in high-risk hematological patients remains unclear. METHODS We performed an open-label, controlled, parallel-group randomized trial in 13 Spanish centers. Adult patients with acute myeloid leukemia and myelodysplastic syndrome on induction therapy or allogeneic hematopoietic stem cell transplant recipients were randomized (1:1 ratio) to 1 of 2 arms: "GM-PCR group" (the results of serial serum GM and PCR assays were provided to treating physicians) and "GM group" (only the results of the serum GM assay were provided). Positivity in either assay prompted thoracic computed tomography scan and initiation of antifungal therapy. No antimold prophylaxis was permitted. RESULTS Overall, 219 patients underwent randomization (105 in the GM-PCR group and 114 in the GM group). The cumulative incidence of "proven" or "probable" IA (primary study outcome) was lower in the GM-PCR group (4.2% vs 13.1%; odds ratio, 0.29 [95% confidence interval, .09-.91]). The median interval from the start of monitoring to the diagnosis of IA was shorter in the GM-PCR group (13 vs 20 days; P = .022), as was the use of empirical antifungal therapy (16.7% vs 29.0%; P = .038). Patients in the GM-PCR group had higher proven or probable IA-free survival (P = .027). CONCLUSIONS A combined monitoring strategy based on serum GM and Aspergillus DNA was associated with an earlier diagnosis and a lower incidence of IA in high-risk hematological patients. Clinical Trials Registration: NCT01742026.
Effect of Y2O3 addition on microstructure of Ni-based alloy + Y2O3/substrate laser clad
To improve the hardness and corrosion resistance of an aluminum substrate, a metal composite clad was produced by laser cladding, combining a Ni-based alloy with Y2O3 for grain refinement and dispersion strengthening and with W and Cr for particle hardening. The microstructure and morphology, phase identification, element diffusion, and composition of the Ni-based alloy + Y2O3/substrate laser clad and of the laser clad/aluminum substrate interface were examined using scanning electron microscopy (SEM), electron probe microanalysis (EPMA), and energy-dispersive spectrometry (EDS). The corrosion resistance was also investigated. The results showed that, with the addition of Y2O3, a finer microstructure was obtained, consisting of isometric crystals, white acicular crystals, and fringe crystals; the primary phases were mainly Ni3Al, NiAl, NiAl3, W, α-Al, and CrxC. The corrosion rate of the aluminum substrate was nearly two times that of the laser clad metal with the addition of Y2O3.
Narrowband optical interactions in a plasmonic nanoparticle chain coupled to a metallic film.
We study the coupling of localized surface plasmon (LSP) and surface-plasmon polariton (SPP) modes in a system composed of a metallic nanoparticle chain separated from a thin metallic film by a dielectric spacer. The thickness of such a spacer influences the level of interaction between LSP and SPP modes and controls the electromagnetic enhancement in this system. An enhancement with narrow resonances can be observed for appropriate parameters. The high-resonance quality factor and tunability of this system make it a very promising candidate for biosensing and surface-enhanced spectroscopy applications.
Extensions of compressed sensing
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g. wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure 'random' linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image processing, and note that reconstructions from CS are often visually "noisy". We post-process using translation-invariant de-noising, and find the visual appearance considerably improved. We also consider a multiscale deployment of compressed sensing, in which various scales are segregated and CS applied separately to each; this gives much better quality reconstructions than a literal deployment of the CS methodology. We also report that several workable families of 'random' linear combinations all behave equivalently, including random spherical, random signs, partial Fourier and partial Hadamard. These results show that, when appropriately deployed in a favorable setting, the CS framework is able to save significantly over traditional sampling, and there are many useful extensions of the basic idea.
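The core phenomenon can be illustrated in the simplest possible case: recover a 1-sparse signal from a handful of random-sign measurements by correlating the measurements against each column of the sensing matrix, which is the first step of greedy recovery. This toy is a hedged stand-in for the paper's experiments; full CS solves an ℓ1 minimization over many nonzero coefficients, and the dimensions here are illustrative.

```python
# Hedged toy illustration of compressed sensing: M << N nonadaptive random-sign
# measurements suffice to recover a 1-sparse length-N signal by correlation.
import random

random.seed(1)
N, M = 64, 16            # signal length, number of measurements (M << N)

# Unknown signal: sparse with a single nonzero coefficient.
true_index, true_value = 23, 3.5
x = [0.0] * N
x[true_index] = true_value

# Nonadaptive measurements: random +/-1 linear combinations of the entries.
A = [[random.choice((-1.0, 1.0)) for _ in range(N)] for _ in range(M)]
y = [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Recovery: pick the column most correlated with the measurements, then fit
# its coefficient by least squares (trivial for a single column).
corr = [abs(sum(A[m][j] * y[m] for m in range(M))) for j in range(N)]
j_hat = corr.index(max(corr))
col_energy = sum(A[m][j_hat] ** 2 for m in range(M))       # = M here
v_hat = sum(A[m][j_hat] * y[m] for m in range(M)) / col_energy

print("recovered index %d (true %d), value %.2f (true %.2f)"
      % (j_hat, true_index, v_hat, true_value))
```

With more nonzeros this greedy step is iterated on the residual (matching pursuit), or replaced by the ℓ1 program the abstract describes; the random-signs matrix used here is one of the equivalently behaving measurement families the experiments compare.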
Minimizing the evidence-practice gap – a prospective cohort study incorporating balance training into pulmonary rehabilitation for individuals with chronic obstructive pulmonary disease
BACKGROUND We have recently demonstrated the efficacy of balance training in addition to Pulmonary Rehabilitation (PR) at improving measures of balance associated with an increased risk of falls in individuals with Chronic Obstructive Pulmonary Disease (COPD). Few knowledge translation (KT) projects have been conducted in rehabilitation settings. The goal of this study was to translate lessons learnt from efficacy studies of balance training into a sustainable clinical service. METHODS Health care professionals (HCPs) responsible for delivering PR were given an hour of instruction on the principles and practical application of balance training, and the researchers offered advice regarding prescription and progression, and gave practical demonstrations, during the first week. Balance training was incorporated three times a week into conventional PR programs. Following the program, HCPs participated in a focus group exploring their experiences of delivering balance training alongside PR. Service users completed satisfaction surveys as well as standardized measures of balance control. At six-month follow-up, the sustainability of balance training was explored. RESULTS HCPs considered the training to be effective at improving balance, and the support provided by the researchers was viewed as helpful. HCPs identified a number of strategies to facilitate balance training within PR, including training twice a week, incorporating an interval training program for everyone enrolled in PR, providing visual aids to training, and promoting independence by providing a set program, considering the environment, and initiating a home-based exercise program early. Nineteen service users completed the balance training [ten male; mean (SD) age 73 (6) y]. Sixteen patients (84%) enjoyed balance training and reported that it helped them with everyday activities, and 18 (95%) indicated their wish to continue with it.
Scores on balance measures improved following PR that included balance training (all p < 0.05). At six-month follow-up, balance training was still being routinely assessed and delivered as part of standardised PR. CONCLUSIONS Implementing balance training into PR programs, with support and training for HCPs, is feasible, effective and sustainable. TRIAL REGISTRATION Clinical Trials ID: NCT02080442 (05/03/2014).
Object detection in VHR optical remote sensing images via learning rotation-invariant HOG feature
Object detection in very high resolution (VHR) optical remote sensing images is one of the most fundamental but challenging problems in the field of remote sensing image analysis. As object detection is usually carried out in feature space, effective feature representation is very important for constructing a high-performance object detection system. During the last decades, a great deal of effort has been made to develop various feature representations for the detection of different types of objects. Among the features developed for visual object detection, the histogram of oriented gradients (HOG) feature is perhaps the most popular and has been successfully applied in the computer vision community. However, although the HOG feature has achieved great success on natural scene images, it is problematic to use it directly for object detection in optical remote sensing images because it cannot effectively handle object rotation variations. To explore a possible solution to this problem, this paper proposes a novel method to learn rotation-invariant HOG (RIHOG) features for object detection in optical remote sensing images. This is achieved by learning a rotation-invariant transformation model via optimizing a new objective function, which constrains training samples before and after rotation to share similar features so as to achieve rotation invariance. In the experiments, we evaluate the proposed method on a publicly available 10-class VHR geospatial object detection dataset, and comprehensive comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed method.
Customizing Low-Precision Deep Neural Networks for FPGAs
In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automated toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are removed in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly a 2× speedup in frames per second and a 2× reduction in resource requirements over the original network, with equal or improved accuracy.
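The pruning criterion in this work is hardware-utilization-aware; as a generic stand-in, the basic mechanics of filter pruning can be sketched with a simple magnitude criterion (an illustrative helper, not the paper's toolflow):

```python
import numpy as np

def prune_filters(weights: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Keep the keep_ratio fraction of conv filters with the largest L1 norm.

    weights has shape (out_filters, in_channels, kh, kw); dropping whole
    filters shrinks both the model and the feature maps it produces.
    """
    norms = np.abs(weights).sum(axis=(1, 2, 3))      # one L1 norm per output filter
    k = max(1, int(round(len(norms) * keep_ratio)))
    keep = np.sort(np.argsort(norms)[-k:])           # indices of filters we keep
    return weights[keep]
```

A hardware-aware variant would pick k (and which filters survive) from the FPGA's resource model rather than a fixed ratio.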
Polyhedral patterns
We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns of the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. We define and analyze the deformations these tiles must undergo to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries, yielding a plethora of aesthetic and feasible patterns.
Molecular cloning and sequencing of general odorant-binding proteins GOBP1 and GOBP2 from the tobacco hawk moth Manduca sexta: comparisons with other insect OBPs and their signal peptides.
Odorant-binding proteins (OBPs) are small, water-soluble proteins uniquely expressed in olfactory tissue of insects and vertebrates. OBPs are present in the aqueous fluid surrounding olfactory sensory dendrites and are thought to aid in the capture and transport of hydrophobic odorants into and through this fluid. OBPs may represent the initial biochemical recognition step in olfaction, because they transport odorants to the receptor neurons. Insect OBPs are represented by multiple classes: pheromone-binding proteins (PBPs) and general odorant-binding proteins (GOBP1 and GOBP2). PBPs associate with pheromone-sensitive neurons, while GOBPs associate with general odorant-sensitive neurons. Analysis of N-terminal amino acid sequences of 14 insect OBPs isolated from six species indicated that the PBPs were variable and the GOBPs were highly conserved. However, inferred properties of these proteins were based only on partial sequence data. We now report the full-length sequences of a GOBP1 and GOBP2 from the moth Manduca sexta and compare these sequences with those of PBPs from three species, including M. sexta, Antheraea polyphemus, and A. pernyi. We also compare these with a GOBP2 of A. pernyi, previously identified only as a novel OBP. These comparisons fully support our N-terminal analysis. The signal peptide sequences of seven insect OBPs reveal conserved sequences within OBP classes, but not between OBP classes even within the same animal species. This suggests that multiple OBPs may be coexpressed in the same cell type, but differentially processed in a class-specific manner. Properties of the GOBPs suggest that general olfaction is broadly receptive at the periphery. Properties of the PBPs suggest that pheromone olfaction is discriminatory at the periphery, and that the initial biochemical steps in pheromone detection may play an active role in odor perception.
An efficient and generic reversible debugger using the virtual machine based approach
The reverse execution of programs is a functionality whereby programs are executed backward in time. A reversible debugger is a debugger that provides such functionality. In this paper, we propose a novel reversible debugger that enables reverse execution of programs written in the C language. We take a virtual machine based approach, in which the target program is executed on a special virtual machine. Our contribution in this paper is two-fold. First, we propose an approach that addresses the problems of (1) compatibility and (2) efficiency that exist in previous work. By compatibility, we mean that previous debuggers are not generic, i.e., they support only a special language or special intermediate code. Second, our approach provides two execution modes: the native mode, where the debuggee is directly executed on a real CPU, and the virtual machine mode, where the debuggee is executed on a virtual machine. Currently, our debugger provides four types of trade-off settings (designated by unit and optimization) that balance granularity, accuracy, overhead, and memory requirements. The user can choose the appropriate setting flexibly during debugging without terminating and restarting the debuggee.
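The checkpoint-and-restore idea that underlies virtual-machine-based reverse execution can be sketched with a toy interpreter (illustrative only, not the debugger described here): each forward step snapshots the machine state, and stepping back restores the previous snapshot.

```python
class ReversibleVM:
    """Toy reversible interpreter: snapshot state on each step, restore to go back."""

    def __init__(self):
        self.state = {}       # variable name -> value
        self.history = []     # stack of state checkpoints

    def step(self, var, value):
        self.history.append(dict(self.state))  # checkpoint before mutating
        self.state[var] = value

    def step_back(self):
        if self.history:
            self.state = self.history.pop()    # restore the previous state
```

A real implementation trades snapshot granularity against memory, which is exactly the trade-off the settings above expose.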
Exploitation of Unlabeled Sequences in Hidden Markov Models
This paper presents a method for effectively using unlabeled sequential data in the learning of hidden Markov models (HMMs). With the conventional approach, class labels for unlabeled data are assigned deterministically by HMMs learned from labeled data. Such labeling often becomes unreliable when the amount of labeled data is small. We propose an extended Baum-Welch (EBW) algorithm in which the labeling is undertaken probabilistically and iteratively so that the labeled and unlabeled data likelihoods are improved. Unlike the conventional approach, the EBW algorithm guarantees convergence to a local maximum of the likelihood. Experimental results on gesture data and speech data show that when labeled training data are scarce, by using unlabeled data, the EBW algorithm improves the classification performance of HMMs more robustly than the conventional naive labeling (NL) approach. Keywords: unlabeled data, sequential data, hidden Markov models, extended Baum-Welch algorithm.
On Improving User Response Times in Tableau
The rapid increase in data volumes and the complexity of applied analytical tasks pose a big challenge for visualization solutions. It is important to keep the experience highly interactive, so that users stay engaged and can perform insightful data exploration. Query processing usually dominates the cost of visualization generation. Therefore, in order to achieve acceptable response times, one needs to utilize backend capabilities to the fullest and apply techniques such as caching or prefetching. In this paper we discuss key data processing components in Tableau: the query processor, query caches, the Tableau Data Engine [1, 2] and the Data Server. Furthermore, we cover recent performance improvements related to the number and quality of remote queries, broader reuse of cached data, and the application of inter- and intra-query parallelism.
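The query-cache idea can be illustrated with a generic memoized query function (a toy stand-in, not Tableau's actual cache; `run_query` and its backend are hypothetical):

```python
from functools import lru_cache

CALLS = {"n": 0}                     # counts real backend round-trips

@lru_cache(maxsize=256)
def run_query(sql: str) -> tuple:
    """Stand-in for an expensive backend query; identical queries hit the cache."""
    CALLS["n"] += 1                  # only incremented on a cache miss
    return (sql.upper(),)            # pretend result rows

run_query("select 1")
run_query("select 1")                # second render reuses the cached result
```

The interactive-response benefit comes from the second call never reaching the backend; prefetching extends this by warming the cache before the user asks.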
Query and Output: Generating Words by Querying Distributed Word Representations for Paraphrase Generation
Task: paraphrase generation. Problem: the existing sequence-to-sequence model tends to memorize the words and the patterns in the training dataset instead of learning the meaning of the words; the generated sentences are therefore often grammatically correct but semantically improper. Proposal: a novel model based on the encoder-decoder framework, called Word Embedding Attention Network (WEAN). The proposed model generates words by querying distributed word representations (i.e., neural word embeddings), aiming to capture the meaning of the corresponding words.

Example of an RNN-generated summary. Text: 昨晚,中联航空成都飞北京一架航班被发现有多人吸烟。后因天气原因,飞机备降太原机场。有乘客要求重新安检,机长决定继续飞行,引起机组人员与未吸烟乘客冲突。 (Last night, several people were caught smoking on a China United Airlines flight from Chengdu to Beijing. Later, the flight made an unscheduled landing at Taiyuan Airport due to weather. Some passengers asked for a new security check; the captain decided to continue the flight, which led to a conflict between the crew and the non-smoking passengers.) RNN: 中联航空机场发生爆炸致多人死亡。 (China United Airlines exploded in the airport, leaving several people dead.) Gold: 航班多人吸烟机组人员与乘客冲突。 (Several people smoked on a flight, which led to a conflict between crew and passengers.)

Proposed model: a semantic-relevance-based (SRB) neural model consisting of a decoder, an encoder, and a cosine similarity function.

Experiments. Dataset: the Large Scale Chinese Short Text Summarization Dataset (LCSTS). Results: the proposed models achieve substantial improvements in all ROUGE scores over baseline systems (W: word level; C: character level).

Example of an SRB-generated summary. Text: 仔细一算,上海的互联网公司不乏成功案例,但最终成为BAT一类巨头的几乎没有,这也能解释为何纳税百强的榜单中鲜少互联网公司的身影。有一类是被并购,比如:易趣、土豆网、PPS、PPTV、一号店等;有一类是数年偏安于细分市场。 (On careful reckoning, Shanghai has no shortage of successful Internet companies, but almost none have become giants like BAT, which also explains why few Internet companies appear among the top hundred taxpayers. Some were acquired, for example Ebay, Tudou, PPS, PPTV, and Yihaodian; others have settled into niche markets for years.) Gold: 为什么上海出不了互联网巨头? (Why does Shanghai produce no Internet giants?) RNN: 上海的互联网巨头。 (Shanghai's Internet giants.) SRB: 上海鲜少互联网巨头的身影。 (Shanghai has few Internet giants.)

Model details. Source text representation: V_t = h_N; generated summary representation: V_s = s_M. Semantic relevance is measured by the cosine similarity cos(V_s, V_t) = (V_t · V_s) / (‖V_t‖ ‖V_s‖). Training objective: L = −p(y|x; θ) − λ cos(V_s, V_t).

Conclusion: this work aims at improving the semantic relevance between generated summaries and source texts for Chinese social media text summarization. The model transforms the text and the summary into dense vectors and encourages high similarity between their representations. Experiments show that the model outperforms baseline systems and that the generated summaries have higher semantic relevance.
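The semantic-relevance objective L = −p(y|x; θ) − λ cos(V_s, V_t) can be sketched directly (the vector values and λ here are illustrative, not from the paper):

```python
import numpy as np

def cosine_similarity(v_s: np.ndarray, v_t: np.ndarray) -> float:
    """cos(Vs, Vt) = (Vt . Vs) / (||Vt|| ||Vs||)."""
    return float(v_s @ v_t / (np.linalg.norm(v_s) * np.linalg.norm(v_t)))

def srb_loss(gen_loss: float, v_s: np.ndarray, v_t: np.ndarray,
             lam: float = 0.5) -> float:
    """Total loss = generation loss minus a semantic-relevance reward,
    so summaries whose representation aligns with the source are favored."""
    return gen_loss - lam * cosine_similarity(v_s, v_t)
```

Minimizing this loss pushes the summary vector V_s toward the source vector V_t while still fitting the generation model.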
Prevalence of dental erosion among young competitive swimmers: a pilot study.
The objective of the presented study was to determine the prevalence of oral problems (e.g., dental erosion, rough surfaces, pain) among young competitive swimmers in India, because no such studies have been reported. The design was a cross-sectional study with a questionnaire and clinical examination protocols. It was conducted in a community setting on those who swam regularly in pools. Questionnaires were distributed to swimmers at the 25th State Level Swimming Competition, held at Thane Municipal Corporation's Swimming Pool, India. Those who returned completed questionnaires were also clinically examined. Questionnaires were analyzed, and clinical examinations focused on the presence or absence of dental erosion and rough surfaces. Results were reported on 100 swimmers who met the inclusion criteria. They included 75 males with a mean age of 18.6 ± 6.3 years and 25 females with a mean age of 15.3 ± 7.02 years. Among them, 90% showed dental erosion, 94% exhibited rough surfaces, and 88% were found to have tooth pain of varying severity. Erosion and rough surfaces were found to be directly proportional to the duration of swimming. The authors concluded that dental erosion, rough surfaces, and pain are very common among competitive swimmers. They recommend that swimmers practice good preventive measures and that clinicians evaluate them for possible swimmer's erosion.
Generalized velocity obstacles
We address the problem of real-time navigation in dynamic environments for car-like robots. We present an approach to identify controls that will lead to a collision with a moving obstacle at some point in the future. Our approach generalizes the concept of velocity obstacles, which have been used for navigation among dynamic obstacles, and takes into account the constraints of a car-like robot. We use this formulation to find controls that will allow collision free navigation in dynamic environments. Finally, we demonstrate the performance of our algorithm on a simulated car-like robot among moving obstacles.
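The core predicate behind velocity obstacles (does a candidate control lead to a future collision with a moving obstacle?) can be sketched for the simple constant-velocity case; the paper's generalization additionally accounts for car-like kinematic constraints, which this illustrative helper omits:

```python
import numpy as np

def leads_to_collision(p_r, v_r, p_o, v_o, radius=0.5, horizon=5.0, dt=0.05):
    """Forward-simulate a candidate (constant-velocity) robot control and a
    moving obstacle; report whether their distance ever drops below radius."""
    p_r, v_r, p_o, v_o = map(np.asarray, (p_r, v_r, p_o, v_o))
    for t in np.arange(0.0, horizon, dt):
        if np.linalg.norm((p_r + v_r * t) - (p_o + v_o * t)) < radius:
            return True
    return False
```

Collision-free navigation then amounts to searching the control space for controls where this predicate is false, which is what the velocity-obstacle construction characterizes in closed form.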
Feeling cybervictims' pain-The effect of empathy training on cyberbullying.
As the world's population increasingly relies on the use of modern technology, cyberbullying becomes an omnipresent risk for children and adolescents and demands counteraction to prevent negative (online) experiences. The classroom-based German preventive intervention "Medienhelden" (engl.: "Media Heroes") builds on previous knowledge about links between cognitive empathy, affective empathy, and cyberbullying, among others. For an evaluation study, longitudinal data were available from 722 high school students aged 11-17 years (M = 13.36, SD = 1.00, 51.8% female) before and six months after the implementation of the program. A 10-week version and a 1-day version were conducted and compared with a control group (controlled pre-long-term-follow-up study). Schools were asked to randomly assign their participating classes to the intervention conditions. Multi-group structural equation modeling (SEM) showed a significant effect of the short intervention on cognitive empathy and significant effects of the long intervention on affective empathy and cyberbullying reduction. The results suggest the long-term intervention to be more effective in reducing cyberbullying and promoting affective empathy. Without any intervention, cyberbullying increased and affective empathy decreased across the study period. Empathy change was not generally directly linked to change in cyberbullying behavior. "Media Heroes" provides effective teaching materials and empowers schools to address the important topic of cyberbullying in classroom settings without costly support from the outside.
New Algorithms for SIMD Alignment
Optimizing programs for modern multiprocessor or vector platforms is a major challenge for compilers today. In this work, we focus on one challenging aspect: the SIMD ALIGNMENT problem. Previously, only heuristics were used to solve this problem, without guarantees on the number of shifts in the obtained solution. We study two interesting and realistic special cases of the SIMD ALIGNMENT problem and present two novel and efficient algorithms that provide optimal solutions for these two cases. The new algorithms employ dynamic programming and a MIN-CUT/MAX-FLOW algorithm as subroutines. We also discuss the relation between the SIMD ALIGNMENT problem and the MULTIWAY CUT and NODE MULTIWAY CUT problems, and we show how to derive an approximate solution to the SIMD ALIGNMENT problem based on approximation algorithms for these two known problems.
Anisotropic diffusion-based detail-preserving smoothing for image restoration
It is important in image restoration to remove noise while preserving meaningful details such as edges and fine features. Existing edge-preserving smoothing methods may inevitably take fine detail as noise, or vice versa. In this paper, we propose a new edge-preserving smoothing technique based on a modified anisotropic diffusion. The proposed method can simultaneously preserve edges and fine details while filtering out noise in the diffusion process. Since fine details in a small image window generally have a gray-level variance larger than that of the noisy background, the proposed diffusion model incorporates both the local gradient and the gray-level variance to preserve edges and fine details while effectively removing noise. Experimental results show that the proposed anisotropic diffusion scheme can effectively smooth noisy backgrounds, yet preserve edges and fine details in the restored image. The proposed method achieved the best restoration results among the edge-preserving methods compared.
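The diffusion mechanism can be sketched with a standard Perona-Malik-style scheme; note this sketch uses only the gradient-based conductance, whereas the paper's modification additionally weights by local gray-level variance (parameters are illustrative):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik-style diffusion: conductance shrinks near strong gradients,
    so flat noisy regions are smoothed while edges are preserved."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # conductance function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u         # differences to the 4 neighbors
        ds = np.roll(u, 1, axis=0) - u          # (periodic boundary for brevity)
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The paper's variant would replace g(d) with a conductance that also depends on the local gray-level variance, so high-variance fine detail diffuses less than low-variance noisy background.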
Probabilistic neural networks
By replacing the sigmoid activation function often used in neural networks with an exponential function, a probabilistic neural network (PNN) that can compute nonlinear decision boundaries which approach the Bayes optimal is formed. Alternate activation functions having similar properties are also discussed. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The decision boundaries can be modified in real time using new data as they become available, and can be implemented using artificial hardware "neurons" that operate entirely in parallel. Provision is also made for estimating the probability and reliability of a classification as well as making the decision. The technique offers a tremendous speed advantage for problems in which the incremental adaptation time of back-propagation is a significant fraction of the total computation time. For one application, the PNN paradigm was 200,000 times faster than back-propagation. Keywords: neural network, probability density function, parallel processor, "neuron", pattern recognition, Parzen window, Bayes strategy, associative memory.
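The PNN's decision rule reduces to comparing class-conditional Parzen density estimates built from one Gaussian "pattern unit" per training sample. A minimal sketch (illustrative σ and data, not the original implementation):

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Probabilistic neural network: one Gaussian pattern unit per training
    sample; pick the class whose Parzen density estimate at x is largest."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)              # squared distances
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))  # class density at x
    return max(scores, key=scores.get)
```

"Training" is just storing the samples, which is why incremental adaptation is essentially free compared with back-propagation.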
Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach
Our lives are heavily influenced by persuasive communication, which is essential in almost any type of social interaction, from business negotiation to conversation with friends and family. With the rapid growth of social multimedia websites, it is becoming ever more important and useful to understand persuasiveness in the context of social multimedia content online. In this paper, we introduce our newly created multimedia corpus of 1,000 movie review videos obtained from a social multimedia website called ExpoTV.com, which will be made freely available to the research community. Our research results presented here revolve around three main research hypotheses. First, we show that computational descriptors derived from verbal and nonverbal behavior can be predictive of persuasiveness. We further show that combining descriptors from multiple communication modalities (audio, text and visual) improves prediction performance compared to using descriptors from a single modality alone. Second, we investigate whether having prior knowledge of a speaker expressing a positive or negative opinion helps better predict the speaker's persuasiveness. Lastly, we show that it is possible to make comparable predictions of persuasiveness by looking only at thin slices (shorter time windows) of a speaker's behavior.
The Art and Science of Synthetic Character Design
Drawing from ideas in both traditional animation and modern philosophy, we present a methodology for designing synthetic characters. The goal of our approach is to construct intentional characters that are both compelling, in the sense that people can empathize with them, and understandable, in that their actions can be seen as attempts to satisfy their desires given their beliefs. We also present a simple, value-based framework that has the flexibility to implement the subsystems necessary for the construction of intentional characters.
Bilingualism protects anterior temporal lobe integrity in aging
Cerebral gray-matter volume (GMV) decreases in normal aging, but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor, and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases: the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole-brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in the left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region-of-interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect may nonetheless be bilateral, because the effect of naming performance on GMV in the right temporal pole was not significantly different. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and a specific protective factor against GMV decreases in healthy aging.
Robust camera calibration of soccer video using genetic algorithm
This paper proposes an exact and robust genetic algorithm-based method for the calibration of soccer cameras. According to the FIFA official soccer field layout, we define a field model for the soccer court. Camera calibration is done by finding the homography transform between the field model and the input frame, followed by DLT decomposition. The intersections of lines in the field model and the input frame form feature points, and by means of a genetic algorithm we find the correspondence between those points. Our algorithm was applied to a number of soccer video frames, and the achieved results demonstrate its robustness, accuracy and high performance.
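The homography-estimation step can be sketched with a plain Direct Linear Transform from point correspondences (the genetic-algorithm correspondence search and the subsequent camera-parameter decomposition are not shown; point values are illustrative):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous) from
    four or more point correspondences, via the Direct Linear Transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]                # fix the scale ambiguity
```

In the pipeline above, `src` would be line intersections of the field model and `dst` the matched intersections detected in the frame.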
Elaboration and validation of instrument to assess adherence to hypertension treatment
OBJECTIVE To elaborate and validate an instrument of adherence to treatment for systemic arterial hypertension, based on Item Response Theory. METHODS The process of developing this instrument involved theoretical, empirical and analytical procedures. The theoretical procedures included defining the construct of adherence to systemic arterial hypertension treatment, identifying the areas involved and preparing the instrument. The instrument underwent semantic and conceptual analysis by experts. The empirical procedure involved the application of the instrument to 1,000 users with systemic arterial hypertension treated at a referral center in Fortaleza, CE, Northeastern Brazil, in 2012. The analytical phase validated the instrument through psychometric analysis and statistical procedures. The Item Response Theory model used in the analysis was the Samejima Graded Response model. RESULTS Twelve of the 23 items of the original instrument were calibrated and remained in the final version. Cronbach's alpha coefficient (α) was 0.81. Items related to the use of medication when presenting symptoms and to the consumption of fat showed good performance, as they were more capable of discriminating individuals who adhered to treatment. Having ever stopped taking the medication and the consumption of white meat showed less discriminating power. Items related to physical exercise and to routinely following the non-pharmacological treatment had the highest difficulty. The instrument was more suitable for measuring low adherence to hypertension treatment than high adherence. CONCLUSIONS The instrument proved to be an adequate tool to assess adherence to treatment for systemic arterial hypertension, since it manages to differentiate individuals with high adherence from those with low adherence. Its use could facilitate the identification and verification of compliance with prescribed therapy, besides allowing the establishment of goals to be achieved.
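The Samejima Graded Response model used for calibration can be sketched as follows (the discrimination a and thresholds b here are illustrative, not the study's estimates):

```python
import numpy as np

def graded_response_probs(theta, a, bs):
    """Samejima's graded response model: probability of each ordered response
    category given latent adherence theta, discrimination a, and ordered
    category thresholds bs. Returns len(bs) + 1 category probabilities."""
    # cumulative P(response >= k) via 2-parameter logistic curves
    p_ge = [1.0] + [1.0 / (1.0 + np.exp(-a * (theta - b))) for b in bs] + [0.0]
    # category probability = difference of adjacent cumulative probabilities
    return [p_ge[k] - p_ge[k + 1] for k in range(len(bs) + 1)]
```

An item's "difficulty" corresponds to its thresholds bs: the higher they sit on the latent scale, the harder the item is to endorse, which is the sense in which the exercise items above were the most difficult.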
Towards Release Strategy Optimization for Apps in Google Play
In the appstore-centric ecosystem, app developers have an urgent need to optimize their release strategy to maximize user adoption of their apps. To address this problem, we introduce an approach to assist developers in selecting the proper release opportunity based on the purpose of the update and the current condition of the app. We first propose the update interval to characterize release patterns of apps, and establish the significance of updates through empirical analysis. We mined the release-history data of 17,820 apps from 33 categories in Google Play over a period of 105 days. With 41,028 releases identified from these apps, we reveal important characteristics of update intervals and how these factors can influence update effects. We suggest that developers jointly consider app ranking, rating trend, and update purpose in addition to the timing of releasing an app version. We propose a Multinomial Naive Bayes model to help decide an optimal release opportunity to gain better user adoption.
Situation Awareness in Ambient Assisted Living for Smart Healthcare
The success of providing smart healthcare services in ambient assisted living (AAL) largely depends on effective prediction of situations in the environment. Situation awareness in AAL means determining the smartness of the environment by perceiving information about the surroundings and about human behavioral changes. In an AAL environment, there are plenty of ways to collect data about its inhabitants, such as through cameras, microphones, and other sensors. The collected data are complex, and efficient processing is required to perceive the situation. This paper gives an overview of existing research results on multimodal data analysis in AAL environments to improve the living environment of seniors, and it attempts to bring efficiency to complex event processing for real-time situation awareness. The paper thus considers multimodal sensing for the detection of current situations as well as the prediction of future situations using decision-tree and association-analysis algorithms. To illustrate the proposed approach, we consider elderly activity recognition in the AAL environment.
The age of gossip: spatial mean field regime
Disseminating a piece of information, or updates for a piece of information, has been shown to benefit greatly from simple randomized procedures, sometimes referred to as gossiping, or epidemic algorithms. Similarly, in a network where mobile nodes occasionally receive updated content from a base station, gossiping using opportunistic contacts allows for recent updates to be efficiently maintained, for a large number of nodes. In this case, however, gossiping depends on node mobility. For this reason, we introduce a new gossip model, with mobile nodes moving between different classes that can represent locations or states, which determine gossiping behavior of the nodes. Here we prove that, when the number of mobile nodes becomes large, the age of the latest updates received by mobile nodes approaches a deterministic mean-field regime. More precisely, we show that the occupancy measure of the process constructed, with the ages defined above, converges to a deterministic limit that can be entirely characterized by differential equations. This major simplification allows us to characterize how mobility, source inputs and gossiping influence the age distribution for low and high ages. It also leads to a scalable numerical evaluation of the performance of mobile update systems, which we validate (using a trace of 500 taxicabs) and use to propose infrastructure deployment.
Towards parameter-free data mining
Most data mining algorithms require the setting of many input parameters. Two main dangers of working with parameter-laden algorithms are the following. First, incorrect settings may cause an algorithm to fail in finding the true patterns. Second, a perhaps more insidious problem is that the algorithm may report spurious patterns that do not really exist, or greatly overestimate the significance of the reported patterns. This is especially likely when the user fails to understand the role of parameters in the data mining process.Data mining algorithms should have as few parameters as possible, ideally none. A parameter-free algorithm would limit our ability to impose our prejudices, expectations, and presumptions on the problem at hand, and would let the data itself speak to us. In this work, we show that recent results in bioinformatics and computational theory hold great promise for a parameter-free data-mining paradigm. The results are motivated by observations in Kolmogorov complexity theory. However, as a practical matter, they can be implemented using any off-the-shelf compression algorithm with the addition of just a dozen or so lines of code. We will show that this approach is competitive or superior to the state-of-the-art approaches in anomaly/interestingness detection, classification, and clustering with empirical tests on time series/DNA/text/video datasets.
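The "dozen or so lines of code" around an off-the-shelf compressor can be sketched with the Compression-based Dissimilarity Measure CDM(x, y) = C(xy) / (C(x) + C(y)), where C(·) is compressed size (the choice of zlib and the example strings below are illustrative):

```python
import zlib

def c(s: bytes) -> int:
    """Approximate description length by compressed size."""
    return len(zlib.compress(s, 9))

def cdm(a: str, b: str) -> float:
    """Compression-based Dissimilarity Measure: C(xy) / (C(x) + C(y)).
    Values near 0.5 mean the strings share most of their structure;
    values near 1 mean the compressor finds no shared regularity."""
    x, y = a.encode(), b.encode()
    return c(x + y) / (c(x) + c(y))
```

Because the compressor supplies the model, no distance thresholds, window sizes, or other parameters need to be set, which is the parameter-free property the abstract argues for.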
Spatio-temporal trajectory analysis of mobile objects following the same itinerary
More and more mobile objects are now equipped with sensors allowing real-time monitoring of their movements, and the data produced by these sensors can be stored in spatio-temporal databases. The main goal of this article is to mine a huge quantity of mobile objects' positions, recorded while moving in an open space, in order to deduce their behaviour. New tools must be defined to ease the detection of outliers. First, a zone graph is set up in order to define itineraries. Then, trajectories of mobile objects following the same itinerary are extracted from the spatio-temporal database and clustered. A statistical analysis of this set of trajectories leads to spatio-temporal patterns such as the main route and the spatio-temporal channel followed by most trajectories of the set. Using these patterns, unusual situations can be detected. Furthermore, a mobile object's behaviour can be characterized by comparing its positions with these spatio-temporal patterns. In this article, the technique is applied to ships' movements in an open maritime area. Unusual behaviours, such as being ahead of schedule or delayed, or veering to the left or to the right of the main route, are detected. A case study illustrates these processes using ships' positions recorded over two years around the Brest area. The method can be extended to almost all kinds of mobile objects (pedestrians, aircraft, hurricanes, ...) moving in an open area.
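A toy sketch of the "main route" idea (a hypothetical resample-then-median construction; the article's zone graphs and spatio-temporal channels are richer than this): each trajectory is resampled to a common number of points by arc length, the pointwise median gives a main route, and a trajectory straying too far from it is flagged as unusual.

```python
import numpy as np

def resample(traj, m=50):
    """Arc-length parameterize a polyline and linearly resample to m points."""
    traj = np.asarray(traj, float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(traj, axis=0), axis=1))]
    t = np.linspace(0.0, d[-1], m)
    return np.c_[np.interp(t, d, traj[:, 0]), np.interp(t, d, traj[:, 1])]

def main_route(trajs, m=50):
    """Pointwise median of resampled trajectories: a robust 'main route'."""
    return np.median(np.stack([resample(t, m) for t in trajs]), axis=0)

def is_outlier(traj, route, thresh):
    """Flag a trajectory whose maximum deviation from the route exceeds thresh."""
    r = resample(traj, len(route))
    return bool(np.max(np.linalg.norm(r - route, axis=1)) > thresh)
```

The median makes the route robust to a few deviant trajectories; a per-point spread statistic around the route would give a simple analogue of the article's spatio-temporal channel.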
Recent Developments on Insertion-Deletion Systems
This article gives an overview of recent developments in the study of the operations of insertion and deletion. It presents the origin of these operations, their formal definition, and a series of results concerning language properties, decidability, and computational completeness of families of languages generated by insertion-deletion systems and their extensions with graph control. The basic proof methods are presented and the proofs of the most important results are sketched.
1 € Filter: a Simple Speed-based Low-pass Filter for Noisy Input in Interactive Systems
The 1 € filter ("one Euro filter") is a simple algorithm for filtering noisy signals with high precision and responsiveness. It uses a first-order low-pass filter with an adaptive cutoff frequency: at low speeds, a low cutoff stabilizes the signal by reducing jitter, but as speed increases, the cutoff is raised to reduce lag. The algorithm is easy to implement, uses very few resources, and, with two easily understood parameters, is simple to tune. In a comparison with other filters, the 1 € filter shows less lag for a reference amount of jitter reduction.
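The description above translates almost directly into code. The sketch below follows the standard formulation (smoothing factor alpha derived from the cutoff frequency and sampling period; adaptive cutoff equal to a minimum cutoff plus beta times the smoothed speed); the parameter defaults are illustrative, not the authors' recommended values.

```python
import math

class OneEuroFilter:
    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq              # sampling frequency (Hz)
        self.min_cutoff = min_cutoff  # cutoff at zero speed (Hz)
        self.beta = beta              # speed coefficient: higher = less lag
        self.d_cutoff = d_cutoff      # cutoff for the derivative filter (Hz)
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # exponential-smoothing factor for a first-order low-pass filter
        tau = 1.0 / (2.0 * math.pi * cutoff)
        te = 1.0 / self.freq
        return 1.0 / (1.0 + tau / te)

    def __call__(self, x):
        if self.x_prev is None:       # first sample passes through unchanged
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq          # raw speed estimate
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev  # smoothed speed
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)  # adaptive cutoff
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev     # filtered value
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

The two parameters to tune are `min_cutoff` (decrease to remove jitter in a slow signal) and `beta` (increase to cut lag during fast motion).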
Analysis of Fashion Consumers ’ Motives to Engage in Electronic Word-of-Mouth Communication through Social Media
The purpose of this paper is to analyse consumers' interactions with fashion brands on social networking sites, focusing on consumers' motives to engage in electronic word-of-mouth (eWOM) communication. Existing WOM motivation frameworks are synthesised (e.g. Dichter, 1966; Hennig-Thurau et al., 2004) in order to identify seven potential motives that influence consumers to engage in eWOM on Facebook and Twitter. The motives are then incorporated into an extended Theory of Reasoned Action (TRA) model (Ajzen and Fishbein, 1980) and tested in the context of fashion-related eWOM, utilising pre-existing measures in a quantitative empirical study. Based on correlation and ANOVA analyses, the results demonstrate that all the hypothesised motivations influence consumers' eWOM engagement in the fashion context. Interestingly, the results indicate that consumers are motivated by economic incentives irrespective of their level of eWOM engagement. In addition, the main tenets of the TRA are confirmed in the online social media context, with both attitude and subjective norms mediating fashion brand-related eWOM engagement. For the purposes of this study, 'engaging in eWOM communication' refers to all of the following: 'writing', 'liking', 'sharing', 'recommending', commenting on, and 'tweeting' fashion brand-related messages on Facebook and/or Twitter. A 'fashion brand-related message' includes a post, link, comment or 'tweet'.
Universal genome in the origin of metazoa: thoughts about evolution.
Recent advances in paleontology, genome analysis, genetics and embryology raise a number of questions about the origin of the Animal Kingdom. These questions include: (1) the seemingly simultaneous appearance of diverse Metazoan phyla in the Cambrian period, (2) the similarities of genomes among Metazoan phyla of diverse complexity, (3) the seemingly excessive complexity of the genomes of lower taxons, and (4) similar genetic switches of functionally similar but non-homologous developmental programs. Here I propose an experimentally testable hypothesis of a Universal Genome that addresses these questions. According to this model, (a) the Universal Genome, which encodes all major developmental programs essential for the various phyla of Metazoa, emerged in a unicellular or a primitive multicellular organism shortly before the Cambrian period; (b) the Metazoan phyla, all having similar genomes, are nonetheless so distinct because they utilize specific combinations of developmental programs. This model makes two major predictions: first, that a significant fraction of the genetic information in lower taxons must be functionally useless there but becomes useful in higher taxons; and second, that it should be possible to turn on in lower taxons some of the complex latent developmental programs, e.g., a program of eye development or antibody synthesis in sea urchin. An example of the natural turning on of a complex latent program in a lower taxon is discussed.
Critical Thinking: Why Is It So Hard to Teach?
Virtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson's terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: that's what we mean when we refer to "thinking like a scientist" or "thinking like a historian." This proper and commonsensical goal has very often been translated into calls to teach "critical thinking skills" and "higher-order thinking skills"—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students' critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the "'higher-order' intellectual skills" this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only one-fifth could write a persuasive essay.
Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I'll explain, the programs were not very effective—and today we still lament students' lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it's time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to "look at an issue from multiple perspectives" often enough, he will learn that he ought to do so, but if he doesn't know much about …
Baby Talk: Understanding and Generating Image Descriptions
We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.
On the Geology of the Port Nicholson District, New Zealand
In the southern end of the Northern Island, metamorphic or greywacke rocks bound the shores of the harbour of Port Nicholson, and constitute the range, from 500 to perhaps 2500 feet high, running to the N.N.E. from Port Nicholson, and forming abrupt mountains with precipitous and narrow gorges. The stratification is irregular, contorted, and sometimes nearly perpendicular. The rock is frequently siliceous, sometimes argillaceous, and is often traversed by veins of quartz and by igneous rocks. This range is continued also to the S.S.W. in the Middle Island, on the other side of Cook's Straits, and appears to form its central ridge. Between the greywacke range at Port Nicholson and the western coast are tertiary sands and clays, probably resting on the greywacke rocks. This country is low and level, until reaching Wanganni, 110 miles from Wellington. The level of the country from thence to Taranahi is higher, forming cliffs, in which the strata are distinct and fossil trees are found protruding from them.
A Reconfigurable Radio Architecture for Cognitive Radio in Emergency Networks
Cognitive radio has been proposed as a promising technology to solve today's spectrum scarcity problem. A cognitive radio is able to sense the spectrum to find free bands, which it can use opportunistically without causing interference to the licensed users. In the scope of the adaptive ad-hoc freeband (AAF) project, an emergency network built on top of cognitive radio is proposed. New functional requirements and system specifications for cognitive radio have to be supported by a reconfigurable architecture. In this paper, we propose a heterogeneous reconfigurable system-on-chip (SoC) architecture to enable the evolution from the traditional software-defined radio to cognitive radio.
Effect of Abaloparatide vs Placebo on New Vertebral Fractures in Postmenopausal Women With Osteoporosis: A Randomized Clinical Trial.
IMPORTANCE Additional therapies are needed for prevention of osteoporotic fractures. Abaloparatide is a selective activator of the parathyroid hormone type 1 receptor. OBJECTIVE To determine the efficacy and safety of abaloparatide, 80 μg, vs placebo for prevention of new vertebral fracture in postmenopausal women at risk of osteoporotic fracture. DESIGN, SETTING, AND PARTICIPANTS The Abaloparatide Comparator Trial In Vertebral Endpoints (ACTIVE) was a phase 3, double-blind randomized clinical trial (March 2011-October 2014) at 28 sites in 10 countries. Postmenopausal women with a bone mineral density (BMD) T score ≤-2.5 and >-5.0 at the lumbar spine or femoral neck, and radiological evidence of ≥2 mild or ≥1 moderate lumbar or thoracic vertebral fractures or a history of low-trauma nonvertebral fracture within the past 5 years, were eligible. Postmenopausal women (>65 y) with fracture criteria and a T score ≤-2.0 and >-5.0, or without fracture criteria and a T score ≤-3.0 and >-5.0, could also enroll. INTERVENTIONS Blinded, daily subcutaneous injections of placebo (n = 821); abaloparatide, 80 μg (n = 824); or open-label teriparatide, 20 μg (n = 818) for 18 months. MAIN OUTCOMES AND MEASURES The primary end point was the percentage of participants with new vertebral fracture in the abaloparatide vs placebo groups. Sample size was set to detect a 4% difference (57% risk reduction) between treatment groups. Secondary end points included change in BMD at the total hip, femoral neck, and lumbar spine in abaloparatide-treated vs placebo participants and time to first incident nonvertebral fracture. Hypercalcemia was a prespecified safety end point in abaloparatide-treated vs teriparatide participants. RESULTS Among 2463 women (mean age, 69 years [range, 49-86]), 1901 completed the study. New morphometric vertebral fractures occurred less frequently in the active treatment groups vs placebo. The Kaplan-Meier estimated event rate for nonvertebral fracture was lower with abaloparatide vs placebo.
BMD increases were greater with abaloparatide than placebo (all P < .001). The incidence of hypercalcemia was lower with abaloparatide (3.4%) vs teriparatide (6.4%) (risk difference [RD], −2.96 [95% CI, −5.12 to −0.87]; P = .006). [table: see text]. CONCLUSIONS AND RELEVANCE Among postmenopausal women with osteoporosis, the use of subcutaneous abaloparatide, compared with placebo, reduced the risk of new vertebral and nonvertebral fractures over 18 months. Further research is needed to understand the clinical importance of the RD, the risks and benefits of abaloparatide treatment, and the efficacy of abaloparatide vs other osteoporosis treatments. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT01343004.
Building a practically useful theory of goal setting and task motivation. A 35-year odyssey.
The authors summarize 35 years of empirical research on goal-setting theory. They describe the core findings of the theory, the mechanisms by which goals operate, moderators of goal effects, the relation of goals and satisfaction, and the role of goals as mediators of incentives. The external validity and practical significance of goal-setting theory are explained, and new directions in goal-setting research are discussed. The relationships of goal setting to other theories are described as are the theory's limitations.
Neighborhood rough set and SVM based hybrid credit scoring classifier
The development of credit scoring models has become a very important issue, as the credit industry is highly competitive. Considerable work on credit scoring models has therefore been carried out in statistics over the past few years to improve scoring accuracy. This study constructs a hybrid SVM-based credit scoring model to evaluate an applicant's credit score from the applicant's input features: (1) using a neighborhood rough set to select input features; (2) using grid search to optimize the RBF kernel parameters; (3) using the optimal input features and model parameters to solve the credit scoring problem with 10-fold cross-validation; and (4) comparing the accuracy of the proposed method with other methods. Experimental results demonstrate that the neighborhood rough set and SVM based hybrid classifier has the best credit scoring capability compared with other hybrid classifiers. It also outperforms linear discriminant analysis, logistic regression and neural networks.
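A minimal sketch of the neighborhood-rough-set feature-selection step (step 1 above), under simplifying assumptions: a sample lies in the positive region if every sample within distance delta of it shares its label, the dependency is the fraction of such samples, and features are added greedily while dependency improves. The delta value and the toy data are hypothetical, and the SVM and grid-search stages are omitted.

```python
import numpy as np

def dependency(X, y, delta=0.3):
    """Fraction of samples whose delta-neighborhood is label-consistent."""
    pos = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        if np.all(y[d <= delta] == y[i]):
            pos += 1
    return pos / len(X)

def forward_select(X, y, delta=0.3):
    """Greedy forward selection: add the feature with the largest
    dependency gain until dependency stops improving."""
    remaining = list(range(X.shape[1]))
    selected, best = [], 0.0
    while remaining:
        gains = [(dependency(X[:, selected + [f]], y, delta), f)
                 for f in remaining]
        g, f = max(gains)
        if g <= best:          # no improvement: stop
            break
        best = g
        selected.append(f)
        remaining.remove(f)
    return selected, best
```

On a toy dataset where only one feature separates the classes, the procedure keeps that feature and discards the noise feature; the selected subset would then feed the SVM stage.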
Performance Comparison between Five NoSQL Databases
Recently, NoSQL databases and their related technologies have been developing rapidly and are widely applied in many scenarios thanks to their BASE (Basically Available, Soft state, Eventually consistent) properties. At present, there are more than 225 kinds of NoSQL databases. However, the overwhelming number of databases and their constantly updated versions make it challenging to compare their performance and choose an appropriate one. This paper evaluates the performance of five NoSQL clusters (Redis, MongoDB, Couchbase, Cassandra, HBase) using the measurement tool YCSB (Yahoo! Cloud Serving Benchmark), explains the experimental results by analyzing each database's data model and mechanisms, and provides advice to NoSQL developers and users.
A Tutorial on Bayesian Nonparametric Models
A key problem in statistical modeling is model selection: how to choose a model at an appropriate level of complexity. This problem appears in many settings, most prominently in choosing the number of clusters in mixture models or the number of factors in factor analysis. In this tutorial we describe Bayesian nonparametric methods, a class of methods that sidesteps this issue by allowing the data to determine the complexity of the model. This tutorial is a high-level introduction to Bayesian nonparametric methods and contains several examples of their application.
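One canonical example of letting the data determine complexity is the Chinese restaurant process prior used in Dirichlet-process mixture models: the number of occupied tables (clusters) is not fixed in advance but grows with the data. A minimal sampler sketch:

```python
import random

def crp(n, alpha, seed=0):
    """Sample a partition of n customers from a Chinese restaurant process.

    Customer i joins an existing table with probability proportional to its
    size, or opens a new table with probability proportional to alpha.
    Returns (table assignment per customer, customer count per table).
    """
    rng = random.Random(seed)
    tables = []   # number of customers at each table
    assign = []   # table index for each customer
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for k, c in enumerate(tables):
            acc += c
            if r < acc:
                tables[k] += 1
                assign.append(k)
                break
        else:                       # open a new table
            tables.append(1)
            assign.append(len(tables) - 1)
    return assign, tables
```

The concentration parameter alpha controls how readily new clusters appear; in a full mixture model each table would carry its own component parameters, and inference (e.g. Gibbs sampling) would resample these assignments given the data.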
Beyond Seattle: Globalization and Public Opinion
The Open Economy and Its Enemies: Public Attitudes in East Asia and Eastern Europe. By Jane Duckett, William L. Miller Cambridge: Cambridge University Press, 2007. 284 pp., $85.00 cloth (ISBN: 0-521-86406-0), $34.99 paper (ISBN: 0-521-68255-8). Globalization—in its various forms—remains a much studied topic in international and comparative political economy. However, despite the likelihood of wide ranging effects brought about by the integration of goods, peoples, and ideas, scholarly attention has been limited, surprisingly, to but a few of globalization's potential consequences. Most attention has been given to questions of whether greater market integration matters for domestic policy outcomes, with “outcomes” defined in terms of public spending and the health of the national economy. Far fewer studies consider the influence of openness on the health of representative democracy. Moreover, those extant studies that examine the economic impact of globalization focus primarily on developed welfare states—that is, on those economies that are best equipped to cushion the volatile effects of opening up (see, for example, Scheve and Slaughter 2001). Very little is known about what is arguably a far more critical question: the degree of public support for globalization in lower- and middle-income democracies. In this regard, Jane Duckett and William Miller's The Open Economy and Its Enemies fills a large hole in the globalization literature. Based on an exhaustive data collection effort aimed at assessing mass and elite attitudes in two “newer democracies” (the Czech Republic and South Korea) and two “semi-democracies” (Ukraine and Vietnam), the book easily offers the most comprehensive account to date of attitudes toward the world capitalist economy in lesser-developed countries. 
To sift through the many issues surrounding attitudes toward economic and cultural openness, Duckett and Miller conducted a series of focus group studies (involving 130 participants in all) and representative surveys of mass publics and public officials (totaling over ten thousand interviews) in these four countries during 2002 and 2003. The utility of Duckett and Miller's approach …
A Multilevel Mixture-of-Experts Framework for Pedestrian Classification
Notwithstanding many years of progress, pedestrian recognition is still a difficult but important problem. We present a novel multilevel Mixture-of-Experts approach to combine information from multiple features and cues with the objective of improved pedestrian classification. On pose-level, shape cues based on Chamfer shape matching provide sample-dependent priors for a certain pedestrian view. On modality-level, we represent each data sample in terms of image intensity, (dense) depth, and (dense) flow. On feature-level, we consider histograms of oriented gradients (HOG) and local binary patterns (LBP). Multilayer perceptrons (MLP) and linear support vector machines (linSVM) are used as expert classifiers. Experiments are performed on a unique real-world multi-modality dataset captured from a moving vehicle in urban traffic. This dataset has been made public for research purposes. Our results show a significant performance boost of up to a factor of 42 in reduction of false positives at constant detection rates of our approach compared to a baseline intensity-only HOG/linSVM approach.
Hybrid VAE: Improving Deep Generative Models using Partial Observations
Deep neural network models trained on large labeled datasets are the state-of-the-art in a large variety of computer vision tasks. In many applications, however, labeled data is expensive to obtain or requires a time-consuming manual annotation process. In contrast, unlabeled data is often abundant and available in large quantities. We present a principled framework to capitalize on unlabeled data by training deep generative models on both labeled and unlabeled data. We show that such a combination is beneficial because the unlabeled data acts as a data-driven form of regularization, allowing generative models trained on few labeled samples to reach the performance of fully-supervised generative models trained on much larger datasets. We call our method Hybrid VAE (H-VAE) as it contains both generative and discriminative parts. We validate H-VAE on three large-scale datasets of different modalities: two face datasets (MultiPIE, CelebA) and a hand pose dataset (NYU Hand Pose). Our qualitative visualizations further support the improvements achieved by using partial observations.
Can Structural Models Price Default Risk? Evidence from Bond and Credit Derivative Markets
Using a set of structural models, we evaluate the price of default protection for a sample of US corporations. In contrast to previous evidence from corporate bond data, CDS premia are not systematically underestimated. In fact, one of our studied models has little difficulty on average in predicting their level. For robustness, we perform the same exercise for bond spreads by the same issuers on the same trading date. As expected, bond spreads relative to the Treasury curve are systematically underestimated. This is not the case when the swap curve is used as a benchmark, suggesting that previously documented underestimation results may be sensitive to the choice of risk-free rate.
Child and adolescent mental health worldwide: evidence for action
Mental health problems affect 10-20% of children and adolescents worldwide. Despite their relevance as a leading cause of health-related disability in this age group and their long-lasting effects throughout life, the mental health needs of children and adolescents are neglected, especially in low-income and middle-income countries. In this report we review the evidence and the gaps in the published work in terms of prevalence, risk and protective factors, and interventions to prevent and treat childhood and adolescent mental health problems. We also discuss barriers to, and approaches for, the implementation of such strategies in low-resource settings. Action is imperative to reduce the burden of mental health problems in future generations and to allow for the full development of vulnerable children and adolescents worldwide.
A cross-cultural study of mobile music: retrieval, management and consumption
This paper reports a user study on retrieving, consuming and managing digital music content related to mobile music consumption. We study the personal relationship people have with music entertainment technology and content, and explore how music is enjoyed on the move. We also look at the typical actions related to personal music management, how they are accomplished, and where they take place. The study was carried out in New York City and Hong Kong, and the paper also reports the differences found in mobile music consumption between these cultural settings.
CoolStreaming/DONet: a data-driven overlay network for peer-to-peer live media streaming
This paper presents DONet, a data-driven overlay network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available data to partners. We emphasize three salient features of this data-driven design: 1) easy to implement, as it does not have to construct and maintain a complex global structure; 2) efficient, as data forwarding is dynamically determined according to data availability while not restricted by specific directions; and 3) robust and resilient, as the partnerships enable adaptive and quick switching among multi-suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges for realizing DONet, and propose an efficient member and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time and continuous distribution of streaming contents. We have extensively evaluated the performance of DONet over the PlanetLab. Our experiments, involving almost all the active PlanetLab nodes, demonstrate that DONet achieves quite good streaming quality even under formidable network conditions. Moreover, its control overhead and transmission delay are both kept at low levels. An Internet-based DONet implementation, called CoolStreaming v.0.9, was released on May 30, 2004, which has attracted over 30000 distinct users with more than 4000 simultaneously being online at some peak times. We discuss the key issues toward designing CoolStreaming in this paper, and present several interesting observations from these large-scale tests; in particular, the larger the overlay size, the better the streaming quality it can deliver.
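The paper's own scheduling algorithm is not reproduced here, but a common heuristic for this kind of data-driven fetching (assign the scarcest missing segment first, to the capable partner with the most remaining bandwidth) can be sketched as follows; all names and the bandwidth model are illustrative assumptions, not DONet's actual implementation.

```python
def schedule(missing, availability, bandwidth):
    """Assign each missing stream segment to a partner for this round.

    missing: segment ids still needed before their playback deadlines
    availability: dict partner -> set of segment ids that partner holds
    bandwidth: dict partner -> number of segments it can send this round
    Returns a dict segment -> partner; unassignable segments are skipped.
    """
    budget = dict(bandwidth)

    def suppliers(seg):
        # partners that hold the segment and still have sending capacity
        return [p for p, segs in availability.items()
                if seg in segs and budget[p] > 0]

    plan = {}
    # fewest-supplier-first: scarce segments are assigned before popular ones
    for seg in sorted(missing, key=lambda s: len(suppliers(s))):
        cands = suppliers(seg)
        if cands:
            partner = max(cands, key=lambda p: budget[p])  # least-loaded partner
            plan[seg] = partner
            budget[partner] -= 1
    return plan
```

A real streaming scheduler would additionally weight each segment by its playback deadline and each partner by measured throughput; this sketch keeps only the scarcity-first core of such heuristics.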
Avoiding plagiarism, self-plagiarism, and other questionable writing practices: A guide to ethical writing
In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.
Mapping Words to the World in Infancy: Infants' Expectations for Count Nouns and Adjectives
Three experiments document that 14-month-old infants' construal of objects (e.g., purple animals) is influenced by naming, that they can distinguish between the grammatical forms noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., "This one is a blicket") specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants' extensions of novel adjectives (e.g., "This one is blickish") were more fragile: they extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.
3D seabed: 3D modeling and visualization platform for the seabed
We are building a 'virtual world' of a real seabed for visual analysis. The sub-bottom profile is imported into the 3D environment, and a "section-drilling" three-dimensional model is designed according to the characteristics of the multi-source data describing the seabed. In this model, the seabed stratigraphic profile obtained by seismic reflection is digitized into discrete points and interpolated with a Kriging algorithm to produce a uniform grid in every stratigraphic layer. A Delaunay triangular model is then constructed for every layer and calibrated against the drilling data to rectify the depth values of the dataset within the buffer. Finally, the constructed 3D seabed stratigraphic model is rendered layer by layer by a GPU shader engine. Two state-of-the-art applications, in a web browser and on a smartphone, demonstrate the model's ubiquitous availability. The resulting '3D Seabed' is used for simulation, visualization, and analysis through a set of interlinked, real-time layers of information about the seabed and its analysis results.
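The interpolation step can be illustrated with a minimal ordinary-kriging sketch (a generic Gaussian covariance model with assumed sill, range, and nugget values; the abstract does not specify the platform's exact Kriging formulation): weights for the scattered depth samples are obtained by solving the kriging system with a Lagrange multiplier enforcing that the weights sum to one.

```python
import numpy as np

def ordinary_kriging(xy, z, q, sill=1.0, rng_=2.0, nugget=1e-6):
    """Predict the value at query point q from scattered samples (xy, z)
    using ordinary kriging with a Gaussian covariance model."""
    def cov(h):
        return sill * np.exp(-(h / rng_) ** 2)

    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    K = cov(d) + nugget * np.eye(n)       # sample-to-sample covariances
    A = np.zeros((n + 1, n + 1))          # kriging system with the
    A[:n, :n] = K                         # unbiasedness constraint
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.r_[cov(np.linalg.norm(xy - q, axis=1)), 1.0]
    w = np.linalg.solve(A, b)             # weights (w[n] is the multiplier)
    return float(w[:n] @ z)
```

Evaluating this predictor on a regular grid of query points, layer by layer, yields the uniform grid that the Delaunay triangulation stage would then consume.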
Phthriasis Palpebrarum Mimicking Lid Eczema and Blepharitis
Phthiriasis palpebrarum (PP) is a rare eyelid infestation caused by Phthirus pubis. We report a case of PP mimicking lid eczema and blepharitis. A 68-year-old woman had moderate itching in both eyes. Her initial diagnosis was considered to be lid eczema or blepharitis because of findings resembling exfoliative lesions and color changes on the eyelids, and excretions over the eyelashes. Careful observation revealed many lice and translucent nits, protuberances and hyperpigmentary changes, and buried lice in both eyelids. No hyperemia or secretion was observed on the lids or in the conjunctiva of either eye. The patient was treated with pilocarpine hydrochloride 4% drops. At the end of the first week, no louse or nit was present. Although PP is known as a rare cause of blepharoconjunctivitis, it may present as an isolated infestation of the eyelids, and this condition can easily be misdiagnosed as lid eczema or blepharitis.