Wireless Power and Data Transfer via a Common Inductive Link Using Frequency Division Multiplexing
For wireless power transfer (WPT) systems, communication between the primary side and the pickup side is a challenge because of the large air gap and magnetic interference. A novel method that integrates bidirectional data communication into a high-power WPT system is proposed in this paper. The power and data transfer share the same inductive link between coreless coils. A power/data frequency-division multiplexing technique is applied, so the power and data are transmitted on different frequency carriers and controlled independently. The circuit model of the multiband system is provided to analyze the transmission gain of the communication channel, as well as the power delivery performance. The crosstalk interference between the two carriers is discussed. In addition, the signal-to-noise ratios of the channels are estimated, providing a guideline for the design of the modulation/demodulation circuits. Finally, a 500-W WPT prototype has been built to demonstrate the effectiveness of the proposed system.
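A toy numerical sketch of the frequency-division-multiplexing idea: a strong low-frequency power carrier and a weak high-frequency data carrier share one link, and the pickup separates them with a band-pass filter. The carrier frequencies, amplitudes, and on-off-keying modulation below are illustrative assumptions, not the paper's operating points.

```python
# Illustrative power/data FDM on one shared link (assumed frequencies).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50e6                                  # sample rate (Hz), assumed
t = np.arange(0, 200e-6, 1 / fs)
f_power, f_data = 85e3, 2.5e6              # assumed power / data carriers

power_carrier = 10.0 * np.sin(2 * np.pi * f_power * t)       # high-amplitude power carrier
bits = np.repeat(np.random.randint(0, 2, 50), len(t) // 50)  # crude OOK data stream
data_carrier = 0.1 * bits[: len(t)] * np.sin(2 * np.pi * f_data * t)

link = power_carrier + data_carrier        # both share the same inductive link

# Pickup side: band-pass around the data carrier to recover the weak data signal.
b, a = butter(4, [2.0e6 / (fs / 2), 3.0e6 / (fs / 2)], btype="band")
recovered = filtfilt(b, a, link)
print("residual power-carrier leakage:", np.max(np.abs(filtfilt(b, a, power_carrier))))
```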
The Mediator subunit MED23 couples H2B mono-ubiquitination to transcriptional control and cell fate determination.
The Mediator complex orchestrates multiple transcription factors with the Pol II apparatus for precise transcriptional control. However, its interplay with the surrounding chromatin remains poorly understood. Here, we analyze differential histone modifications between WT and MED23(-/-) (KO) cells and identify H2B mono-ubiquitination at lysine 120 (H2Bub) as a MED23-dependent histone modification. Using tandem affinity purification and mass spectrometry, we find that MED23 associates with the RNF20/40 complex, the enzyme for H2Bub, and show that this association is critical for the recruitment of RNF20/40 to chromatin. In a cell-free system, Mediator directly and substantially increases H2Bub on recombinant chromatin through its cooperation with RNF20/40 and the PAF complex. Integrative genome-wide analyses show that MED23 depletion specifically reduces H2Bub on a subset of MED23-controlled genes. Importantly, MED23-coupled H2Bub levels are oppositely regulated during myogenesis and lung carcinogenesis. In sum, these results establish a mechanistic link between the Mediator complex and a critical chromatin modification in coordinating transcription with cell growth and differentiation.
The oxidation state of the mantle and the extraction of carbon from Earth’s interior
Determining the oxygen fugacity of Earth’s silicate mantle is of prime importance because it affects the speciation and mobility of volatile elements in the interior and has controlled the character of degassing species from the Earth since the planet’s formation. Oxygen fugacities recorded by garnet-bearing peridotite xenoliths from Archaean lithosphere are of particular interest, because they provide constraints on the nature of volatile-bearing metasomatic fluids and melts active in the oldest mantle samples, including those in which diamonds are found. Here we report the results of experiments to test garnet oxythermobarometry equilibria under high-pressure conditions relevant to the deepest mantle xenoliths. We present a formulation for the most successful equilibrium and use it to determine an accurate picture of the oxygen fugacity through cratonic lithosphere. The oxygen fugacity of the deepest rocks is found to be at least one order of magnitude more oxidized than previously estimated. At depths where diamonds can form, the oxygen fugacity is not compatible with the stability of either carbonate- or methane-rich liquid but is instead compatible with a metasomatic liquid poor in carbonate and dominated by either water or silicate melt. The equilibrium also indicates that the relative oxygen fugacity of garnet-bearing rocks will increase with decreasing depth during adiabatic decompression. This implies that carbon in the asthenospheric mantle will be hosted as graphite or diamond but will be oxidized to produce carbonate melt through the reduction of Fe3+ in silicate minerals during upwelling. The depth of carbonate melt formation will depend on the ratio of Fe3+ to total iron in the bulk rock. This ‘redox melting’ relationship has important implications for the onset of geophysically detectable incipient melting and for the extraction of carbon dioxide from the mantle through decompressive melting.
Regular consumption of vitamin D-fortified yogurt drink (Doogh) improved endothelial biomarkers in subjects with type 2 diabetes: a randomized double-blind clinical trial
BACKGROUND Endothelial dysfunction has been proposed as the underlying cause of diabetic angiopathy that eventually leads to cardiovascular disease, the major cause of death in diabetes. We recently demonstrated the ameliorating effect of regular vitamin D intake on the glycemic status of patients with type 2 diabetes (T2D). In this study, we investigated the effects of improved vitamin D status on glycemic status, lipid profile and endothelial biomarkers in T2D subjects. METHODS Subjects with T2D were randomly allocated to one of two groups to receive either plain yogurt drink (PYD; containing 170 mg calcium and no vitamin D per 250 mL, n1 = 50) or vitamin D3-fortified yogurt drink (FYD; containing 170 mg calcium and 500 IU vitamin D3 per 250 mL, n2 = 50) twice a day for 12 weeks. Anthropometric measures, glycemic status, lipid profile, body fat mass (FM) and endothelial biomarkers including serum endothelin-1, E-selectin and matrix metalloproteinase (MMP)-9 were evaluated at the beginning and after the 12-week intervention period. RESULTS The intervention resulted in a significant improvement in fasting glucose, the Quantitative Insulin Sensitivity Check Index (QUICKI), glycated hemoglobin (HbA1c), triacylglycerols, high-density lipoprotein cholesterol (HDL-C), endothelin-1, E-selectin and MMP-9 in FYD compared to PYD (P < 0.05 for all). Interestingly, the between-group differences in changes of endothelin-1, E-selectin and MMP-9 concentrations (FYD versus PYD: -0.35 ± 0.63 versus -0.03 ± 0.55, P = 0.028; -3.8 ± 7.3 versus 0.95 ± 8.3, P = 0.003; and -2.3 ± 3.7 versus 0.44 ± 7.1 ng/mL, P < 0.05, respectively), even after controlling for changes of QUICKI, FM and waist circumference, remained significant for endothelin-1 and MMP-9 (P = 0.009 and P = 0.005, respectively) but disappeared for E-selectin (P = 0.092). In contrast, after controlling for serum 25(OH)D, the differences disappeared for endothelin-1 (P = 0.066) and MMP-9 (P = 0.277) but remained significant for E-selectin (P = 0.011). CONCLUSIONS Improved vitamin D status was accompanied by improved glycemic status, lipid profile and endothelial biomarkers in T2D subjects. Our findings suggest both direct and indirect ameliorating effects of vitamin D on the endothelial biomarkers. TRIAL REGISTRATION ClinicalTrials.gov: NCT01236846.
Randomized trial of filgrastim versus chemotherapy and filgrastim mobilization of hematopoietic progenitor cells for rescue in autologous transplantation.
Peripheral blood cell (PBC) rescue has become the mainstay for autologous transplantation in patients with lymphoma, multiple myeloma, and solid tumors. Different methods of hematopoietic progenitor cell (HPC) mobilization are in use without an established standard. Forty-seven patients with relapsed or refractory lymphoma received salvage chemotherapy and were randomized to have HPC mobilization using filgrastim [granulocyte colony-stimulating factor (G-CSF)] alone for 4 days at 10 μg/kg per day (arm A) or cyclophosphamide (5 g/m^2) and G-CSF at 10 μg/kg per day until hematologic recovery (arm B). Engraftment and ease of PBC collection were primary outcomes. All patients underwent the same high-dose chemotherapy followed by reinfusion of PBCs. There were no differences in median time to neutrophil engraftment (11 days in both arms; P = .5) or platelet engraftment (14 days in arm A, 13 days in arm B; P = .35). Combined chemotherapy and G-CSF resulted in higher CD34+ cell collection than G-CSF alone (median, 7.2 vs 2.5 × 10^6 cells/kg; P = .004), but this did not impact engraftment. No differences were found in other PBC harvest outcomes or resource utilization measures. A high degree of tumor contamination, as studied by consensus CDR3 polymerase chain reaction of the mobilized PBCs, was present in both arms (92% in arm A vs 90% in arm B; P = 1). No differences were found in overall survival or progression-free survival at a median follow-up of 21 months. This randomized trial provides clinical evidence that the use of G-CSF alone is adequate for HPC mobilization, even in heavily pretreated patients with relapsed lymphoma.
Adaptive Kalman Filtering for anomaly detection in software appliances
Availability and reliability are often important features of key software appliances such as firewalls, web servers, etc. In this paper we seek to go beyond the simple heartbeat monitoring that is widely used for failover control. We do this by integrating finer-grained measurements that are readily available on most platforms to detect possible faults or the onset of failures. In particular, we evaluate the use of adaptive Kalman filtering for automated CPU usage prediction, which is then used to detect abnormal behaviour. Examples from experimental tests are given.
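As a concrete illustration of the approach, here is a minimal scalar Kalman filter that predicts CPU usage and flags samples whose normalized innovation exceeds a threshold. The random-walk model, fixed noise parameters, and 3-sigma test are simplifying assumptions; the paper's adaptive variant tunes the filter online.

```python
# Minimal sketch of Kalman-filter-based CPU-usage anomaly detection.
import numpy as np

def detect_anomalies(cpu, q=1e-3, r=1.0, thresh=3.0):
    x, p = cpu[0], 1.0              # state estimate and its variance
    flags = []
    for z in cpu[1:]:
        p += q                      # predict: random-walk model, x stays the same
        s = p + r                   # innovation variance
        innov = z - x
        flags.append(abs(innov) / np.sqrt(s) > thresh)  # normalized innovation test
        k = p / s                   # Kalman gain
        x += k * innov              # update state
        p *= (1 - k)                # update variance
    return flags

usage = np.clip(30 + np.cumsum(np.random.randn(500)), 0, 100)  # synthetic CPU trace
usage[300] += 40                                               # injected fault
print([i for i, f in enumerate(detect_anomalies(usage)) if f])
```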
Credit-Risk Ranking of Banks' Natural (Individual) Customers Using Data Mining. A Case Study: Branches of Mellat Bank of Iran
Introduction. As the core of the financial system, banks face a variety of risks, chief among them credit risk. The significant volume of non-performing and outstanding loans in the banking network indicates a lack of appropriate models for credit-risk measurement and of risk-management systems [1, 3]. One of the most important tools banks need to manage and control credit risk is a customer credit-rating system. By applying data mining to the information banks hold about their customers, loan applicants can be scored and classified as creditworthy or not on the basis of their account behavior, without subjective judgment. Banks can thus identify the characteristics of their customers and grant credit facilities accordingly; such validation reduces bank risk, including credit risk. Credit scoring refers to the practice in which financial institutions measure the creditworthiness of natural and legal persons from the information received from them. It offers a better understanding of applicants' financial situation and ability to repay loans, and enables more services to be provided. With this method, credit risk is measured and customers are classified and graded according to their credit risk [7, 11]. Credit scoring is a risk-management tool that uses data and statistical techniques to rank loan applicants: it examines different attributes of applicants separately, calculates each applicant's probability of default, and ranks applicants by credit risk [16, 17]. Credit-scoring models divide applicants into two groups, good credit and bad credit. The good-credit group repays its debts on time, while the bad-credit group has a high probability of default. Owing to the dynamic development of the credit industry, it now plays an important role in the economy: growing credit demand, increased competition and new channels in the new economic environment have accompanied the expansion of trade and business throughout the world, leading to wider financial dealings, the growth of banks' business activities and the establishment of new banks. In banking and financial systems, credit-risk management is a major problem, so assessing applicants' ability to repay is an important process, and many methods have been proposed for it. This study uses data mining techniques to identify the credit risk of banks' natural-person customers. The selected data set consists of 1000 records with both numeric and non-numeric data and contains 20 features: 7 numerical and 13 nominal attributes. These 20 attributes form the input. There is also a feature called Class that indicates the category of each record (i.e., the output of the problem), with 2 classes, good and bad: 300 customers are bad (Y = 1) and 700 are proper, good customers (Y = 0).
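A sketch of the kind of classifier such a study might build on this data set (1000 records, 7 numeric and 13 nominal attributes, a good/bad Class label). The file name, column handling, and model choice below are placeholders, not the study's actual pipeline.

```python
# Hypothetical good/bad credit classifier over a 20-attribute data set.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("credit.csv")               # hypothetical path
X, y = df.drop(columns="Class"), df["Class"] # Class: 1 = bad, 0 = good
nominal = X.select_dtypes("object").columns  # the 13 nominal attributes

model = make_pipeline(
    ColumnTransformer([("oh", OneHotEncoder(handle_unknown="ignore"), nominal)],
                      remainder="passthrough"),           # numeric attributes pass through
    DecisionTreeClassifier(max_depth=5, class_weight="balanced"),
)
print(cross_val_score(model, X, y, cv=5).mean())          # good/bad classification accuracy
```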
Data-free parameter pruning for Deep Neural Networks
Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove up to 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet, without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network.
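One plausible minimal reading of neuron-level pruning, for illustration only: find the two neurons of a dense layer with the most similar incoming weights, remove one, and fold its outgoing weights into the survivor. The paper's actual saliency criterion is omitted here.

```python
# Sketch: merge the two most similar neurons of a dense layer (NumPy only).
import numpy as np

def prune_one_neuron(W_in, b, W_out):
    """W_in: (n_in, n_hidden), b: (n_hidden,), W_out: (n_hidden, n_out)."""
    cols = W_in.T                                   # incoming weight vector per neuron
    n = cols.shape[0]
    d = np.linalg.norm(cols[:, None] - cols[None, :], axis=2)
    d[np.diag_indices(n)] = np.inf                  # ignore self-distances
    i, j = np.unravel_index(np.argmin(d), d.shape)  # most similar pair
    W_out[i] += W_out[j]                            # survivor absorbs j's output weights
    keep = np.arange(n) != j
    return W_in[:, keep], b[keep], W_out[keep]

rng = np.random.default_rng(0)
W1, b1, W2 = rng.normal(size=(784, 100)), rng.normal(size=100), rng.normal(size=(100, 10))
W1, b1, W2 = prune_one_neuron(W1, b1, W2)
print(W1.shape, W2.shape)   # (784, 99) (99, 10)
```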
Accurate Camera Calibration from Multi-View Stereo and Bundle Adjustment
The advent of high-resolution digital cameras and sophisticated multi-view stereo algorithms offers the promise of unprecedented geometric fidelity in image-based modeling tasks, but it also puts unprecedented demands on camera calibration to fulfill these promises. This paper presents a novel approach to camera calibration where top-down information from rough camera parameter estimates and the output of a multi-view-stereo system on scaled-down input images is used to effectively guide the search for additional image correspondences and significantly improve camera calibration parameters using a standard bundle adjustment algorithm (Lourakis and Argyros 2008). The proposed method has been tested on six real datasets including objects without salient features for which image correspondences cannot be found in a purely bottom-up fashion, and objects with high curvature and thin structures that are lost in visual hull construction even with small errors in camera parameters. Three different methods have been used to qualitatively assess the improvements of the camera parameters. The implementation of the proposed algorithm is publicly available at Furukawa and Ponce (2008b).
Hydrogen-exchange study of the conformational stability of human carbonic-anhydrase B and its metallocomplexes.
In the range of pH 4.6–8.8, at 25 °C, the apoenzyme of carbonic anhydrase B shows no evidence of any gross conformational changes, as studied by the hydrogen–deuterium exchange method. At pH 4.6 the addition of Co(II), Cd(II) or Mn(II) to the apoenzyme results in a destabilization of the native protein conformation, but in the range of pH 5.5–8.8 these metal ions, and Zn(II), slightly increase the conformational stability of the protein, in so far as they reduce the probability φ of solvent exposure of the peptide groups. In comparison with other proteins studied, native carbonic anhydrase is characterized by a rather compact conformation; for half of the peptide groups the probability of solvent exposure is less than 10^-4, corresponding to changes in standard free energy larger than 5.5 kcal mol^-1 (23 kJ mol^-1) following the conformational transitions by which these groups are exposed to the solvent.
RUNE-Tag: A high accuracy fiducial marker with strong occlusion resilience
Over the last decades, fiducial markers have provided widely adopted tools to add reliable model-based features into an otherwise general scene. Given their central role in many computer vision tasks, countless different solutions have been proposed in the literature. Some designs focus on the accuracy of the recovered camera pose with respect to the tag; others concentrate on reaching high detection speed or on recognizing a large number of distinct markers in the scene. In such a crowded area, both the researcher and the practitioner are licensed to wonder whether there is any need to introduce yet another approach. Nevertheless, with this paper we present a general-purpose fiducial marker system that adds some valuable features to the pack. Specifically, by exploiting the projective properties of a circular set of sizeable dots, we propose a detection algorithm that is highly accurate. Further, a dot pattern scheme derived from error-correcting codes allows for robustness with respect to very large occlusions. In addition, the design of the marker itself is flexible enough to accommodate different requirements in terms of pose accuracy and number of patterns. The overall performance of the marker system is evaluated in an extensive experimental section, where a comparison with a well-known baseline technique is presented.
Towards Energy-Efficient Scheduling for Real-Time Tasks under Uncertain Cloud Computing Environment
Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for real-time tasks running in uncertain cloud environments, because they assume that cloud computing environments are deterministic and that pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically combines proactive and reactive scheduling methods for scheduling real-time, aperiodic, independent tasks. To improve energy efficiency, we propose three strategies to scale the system's computing resources up and down according to workload, improving resource utilization and reducing energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms and can effectively improve the performance of a cloud data center.
A passive approach to wireless device fingerprinting
We propose a passive blackbox-based technique for determining the type of access point (AP) connected to a network. Essentially, a stimulant (i.e., a packet train) that emulates normal data transmission is sent through the access point. Since access points from different vendors are architecturally heterogeneous (e.g., chipset, firmware, driver), each AP acts upon the packet train differently. By applying wavelet analysis to the resultant packet train, a distinct but reproducible pattern is extracted, allowing a clear classification of different AP types. This has two important applications: (1) as a system administrator, this technique can be used to determine whether a rogue access point has connected to the network; and (2) as an attacker, fingerprinting the access point is necessary to launch driver/firmware-specific attacks. Extensive experiments were conducted (over 60 GB of data was collected) to differentiate 6 APs. We show that this technique can classify APs with high accuracy (in some cases, 100% of the time) with as few as 100,000 packets. Further, we illustrate that this technique is independent of the stimulant traffic type (e.g., TCP or UDP). Finally, we show that the AP profile is stable across multiple models of the same AP.
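A sketch of the fingerprinting step under stated assumptions: decompose the packet train's inter-arrival-time series with a discrete wavelet transform and use per-level energies as the AP signature. The wavelet family, level count, and nearest-profile matching are illustrative choices, not the paper's exact pipeline.

```python
# Wavelet-energy signature of a packet train's inter-arrival times (illustrative).
import numpy as np
import pywt

def ap_signature(inter_arrival_times, wavelet="db4", level=5):
    """Decompose the timing series and return the energy at each scale."""
    coeffs = pywt.wavedec(inter_arrival_times, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # one energy per level

def classify(signature, profiles):
    """profiles: {ap_model: reference signature}; nearest reference wins."""
    return min(profiles, key=lambda k: np.linalg.norm(signature - profiles[k]))

# Usage: build reference profiles from known APs, then classify a new capture.
# profiles = {"vendorA": ap_signature(train_a), "vendorB": ap_signature(train_b)}
# print(classify(ap_signature(unknown_train), profiles))
```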
Development of superficial white matter and its structural interplay with cortical gray matter in children and adolescents.
The healthy human brain undergoes significant changes during development. The developmental trajectory of superficial white matter (SWM) is less understood relative to cortical gray matter (GM) and deep white matter. In this study, a multimodal imaging strategy was applied to map SWM microstructure and cortical thickness vertex-wise, to characterize their developmental pattern and elucidate SWM-GM associations in children and adolescents. Microscopic changes in SWM were evaluated with water diffusion parameters including fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity (AD), and radial diffusivity (RD) in 133 healthy subjects aged 10-18 years. Results demonstrated distinct maturational patterns in SWM and GM. SWM showed increasing FA and decreasing MD and RD underneath bilateral motor sensory cortices and superior temporal auditory cortex, suggesting increasing myelination. A second developmental pattern in SWM was increasing FA and AD in bilateral orbitofrontal regions and insula, suggesting improved axonal coherence. These SWM patterns diverge from the more widespread GM maturation, suggesting that cortical thickness changes in adolescence are not explained by the encroachment of SWM myelin into the GM-WM boundary. Interestingly, the age-independent intrinsic association between SWM and cortical GM seems to follow the functional organization of polymodal and unimodal brain regions. Unimodal sensory areas showed a positive correlation between GM thickness and FA, whereas polymodal regions showed a negative correlation. Axonal coherence and differences in interstitial neuron composition between unimodal and polymodal regions may account for these SWM-GM association patterns. Intrinsic SWM-GM relationships unveiled by neuroimaging in vivo can be useful for examining psychiatric disorders with known WM/GM disturbances.
The chi-square test for independence
In one of the most frequent empirical scenarios in applied linguistics, a researcher's empirical results can be summarized in a two-dimensional table, in which
− the rows list the levels of a nominal/categorical variable;
− the columns list the levels of another nominal/categorical variable;
− the cells in the table defined by these row and column levels provide the frequencies with which combinations of row and column levels were observed in some data set.
An example of data from a study of disfluencies in speech is shown in Table 1, which shows the parts of speech of 335 words following three types of disfluencies. Both the part of speech and the disfluency markers represent categorical variables.

            Noun   Verb   Conjunction   Totals
  uh          30     70            90      190
  uhm         50     20            40      110
  silence     20      5            10       35
  Totals     100     95           140      335

Table 1 shows that 30 uh's were followed by a noun, 20 uhm's were followed by a verb, etc. One question a researcher may be interested in exploring is whether there is a correlation between the kind of disfluency produced – the variable in the rows – and the part of speech of the word following the disfluency – the variable in the columns. An exploratory glance at the data suggests that uh mostly precedes conjunctions while silences mostly precede nouns, but an actual statistical test is required to determine (i) whether the distribution of the parts of speech after the disfluencies is in fact significantly different from chance and (ii) what preferences and dispreferences this data set reflects. The most frequent statistical test used to analyze two-dimensional frequency tables such as Table 1 is the chi-square test for independence, shown in the sketch below.
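The test can be run directly on Table 1; a minimal sketch with SciPy:

```python
# Chi-square test for independence applied to Table 1.
from scipy.stats import chi2_contingency

observed = [[30, 70, 90],    # uh:      Noun, Verb, Conjunction
            [50, 20, 40],    # uhm
            [20,  5, 10]]    # silence

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4g}")
# Comparing observed with expected counts reveals the preferences: e.g., uh is
# followed by conjunctions more often, and by nouns less often, than chance predicts.
```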
Expanding Technological Frames Towards Mediated Collaboration - Groupware Adoption in Virtual Learning Teams
This paper provides an in-depth analysis of the technological and social factors that led to the successful adoption of groupware by a virtual team in an educational setting. Drawing on a theoretical framework based on the concept of technological frames, we conducted an action research study to analyse the chronological sequence of events in groupware adoption. We argue that groupware adoption can be conceptualised as a three-step process of expanding and aligning individual technological frames towards groupware. The first step comprises activities that bring knowledge of new technological opportunities to the participants. The second step involves facilitating the participants to articulate and evaluate their work practices and their use of technology. The third and final step deals with the participants' commitment to, and practical enactment of, groupware technology. The alignment of individual technological frames requires the articulation and re-evaluation of experience with collaborative practice and with the use of technology. One of the key findings is that this activity cannot take place at the outset of groupware adoption.
Abdominal obesity and the spectrum of global cardiometabolic risks in US adults
Objective: To compare the association of obesity and abdominal obesity with cardiometabolic risk factor burden and global estimated coronary heart disease (CHD) risk among multiethnic US adults. Design: Cross-sectional, survey study. Subjects: A total of 4456 participants (representing 194.9 million adults) aged 20–79 years in the 2003–2004 National Health and Nutrition Examination Survey (NHANES). Measurements: Body mass index (BMI) and waist circumference (WC) measures, CHD risk factors and a 10-year estimated CHD risk based on Framingham algorithms. Obesity was defined as a BMI ⩾30 kg/m2 and abdominal obesity as a WC >88 cm in women and >102 cm in men. High CHD risk status included diabetes, cardiovascular disease (CVD) or a 10-year Framingham risk score of >20%. Results: Overall, abdominal obesity was present in 42.3% of men and 62.5% of women and in 53.6% of whites, 56.9% of blacks and 50.5% of Hispanics (P<0.001 across gender and ethnicity). However, using International Diabetes Federation (IDF)-recommended WC cut points for Hispanics, the prevalence of abdominal obesity was 78.3%. Mean levels of low-density lipoprotein cholesterol (LDL-C), systolic and diastolic blood pressure, fasting glucose and C-reactive protein increased, and high-density lipoprotein cholesterol (HDL-C) decreased (P<0.001), according to BMI and WC categories, although these associations were attenuated in blacks for blood pressure, LDL-C, HDL-C and triglycerides. Of those with high WC, 25–35% had ⩾3 cardiometabolic risk factors. High CHD risk among those with high WC was most common in men (27.9%) and non-Hispanic whites (23.9%). Persons with a high vs normal WC, adjusted for age, gender, ethnicity and BMI, were more likely to have ⩾3 cardiometabolic risk factors (odds ratio (OR)=5.1, 95% confidence interval (CI)=3.9–6.6) and to be classified as high CHD risk (OR=1.5, 95% CI=1.1–2.0). Conclusion: The association of abdominal obesity with risk factors varies by ethnicity and is independently associated with high CHD risk status, further validating its clinical significance.
Frequency format diagram and probability chart for breast cancer risk communication: a prospective, randomized trial
BACKGROUND Breast cancer risk education enables women to make informed decisions regarding their options for screening and risk reduction. We aimed to determine whether patient education regarding breast cancer risk using a bar graph, with or without a frequency format diagram, improved the accuracy of risk perception. METHODS We conducted a prospective, randomized trial among women at increased risk for breast cancer. The main outcome measurement was patients' estimation of their breast cancer risk before and after education with a bar graph (BG group) or bar graph plus a frequency format diagram (BG+FF group), which was assessed by previsit and postvisit questionnaires. RESULTS Of 150 women in the study, 74 were assigned to the BG group and 76 to the BG+FF group. Overall, 72% of women overestimated their risk of breast cancer. The improvement in accuracy of risk perception from the previsit to the postvisit questionnaire (BG group, 19% to 61%; BG+FF group, 13% to 67%) was not significantly different between the 2 groups (P = .10). Among women who inaccurately perceived very high risk (> or = 50% risk), inaccurate risk perception decreased significantly in the BG+FF group (22% to 3%) compared with the BG group (28% to 19%) (P = .004). CONCLUSION Breast cancer risk communication using a bar graph plus a frequency format diagram can improve the short-term accuracy of risk perception among women perceiving inaccurately high risk.
General Road Detection From a Single Image
Given a single image of an arbitrary road, that may not be well-paved, or have clearly delineated edges, or some a priori known color or texture distribution, is it possible for a computer to find this road? This paper addresses this question by decomposing the road detection process into two steps: the estimation of the vanishing point associated with the main (straight) part of the road, followed by the segmentation of the corresponding road area based upon the detected vanishing point. The main technical contributions of the proposed approach are a novel adaptive soft voting scheme based upon a local voting region using high-confidence voters, whose texture orientations are computed using Gabor filters, and a new vanishing-point-constrained edge detection technique for detecting road boundaries. The proposed method has been implemented, and experiments with 1003 general road images demonstrate that it is effective at detecting road regions in challenging conditions.
Effectiveness of peer support in reducing readmissions of persons with multiple psychiatric hospitalizations.
OBJECTIVE The study examined the feasibility and effectiveness of using peer support to reduce recurrent psychiatric hospitalizations. METHODS A randomized controlled design was used, with follow-up at nine months after an index discharge from an academically affiliated psychiatric hospital. Patients were 18 years or older with major mental illness and had been hospitalized three or more times in the prior 18 months. Seventy-four patients were recruited, randomly assigned to usual care (N=36) or to a peer mentor plus usual care (N=38), and assessed at nine months. RESULTS Participants who were assigned a peer mentor had significantly fewer rehospitalizations (.89 ± 1.35 versus 1.53 ± 1.54; p=.042, one-tailed) and fewer hospital days (10.08 ± 17.31 versus 19.08 ± 21.63 days; p<.03, one-tailed). CONCLUSIONS Despite the study's limitations, findings suggest that use of peer mentors is a promising intervention for reducing recurrent psychiatric hospitalizations for patients at risk of readmission.
Application of smart antenna technologies in simultaneous wireless information and power transfer
Simultaneous wireless information and power transfer (SWIPT) is a promising solution to increase the lifetime of wireless nodes and hence alleviate the energy bottleneck of energy constrained wireless networks. As an alternative to conventional energy harvesting techniques, SWIPT relies on the use of radio frequency signals, and is expected to bring some fundamental changes to the design of wireless communication networks. This article focuses on the application of advanced smart antenna technologies to SWIPT, including multiple-input multiple-output and relaying techniques. These smart antenna technologies have the potential to significantly improve the energy efficiency and also the spectral efficiency of SWIPT. Different network topologies with single and multiple users are investigated, along with some promising solutions to achieve a favorable trade-off between system performance and complexity. A detailed discussion of future research challenges for the design of SWIPT systems is also provided.
Where do helpers look?: gaze targets during collaborative physical tasks
This study used eye-tracking technology to assess where helpers look as they are providing assistance to a worker during collaborative physical tasks. Gaze direction was coded into one of six categories: partner's head, partner's hands, task parts and tools, the completed task, and instruction manual. Results indicated that helpers rarely gazed at their partners' faces, but distributed gaze fairly evenly across the other targets. The results have implications for the design of video systems to support collaborative physical tasks.
11.5 A time-correlated single-photon-counting sensor with 14GS/s histogramming time-to-digital converter
Time-correlated single photon counting (TCSPC) is a photon-efficient technique to record ultra-fast optical waveforms found in numerous applications such as time-of-flight (ToF) range measurement (LIDAR) [1], ToF 3D imaging [2], scanning optical microscopy [3], diffuse optical tomography (DOT) and Raman sensing [4]. Typical instrumentation consists of a pulsed laser source, a discrete detector such as an avalanche photodiode (APD) or photomultiplier tube (PMT), time-to-digital converter (TDC) card and a FPGA or PC to assemble and compute histograms of photon time stamps. Cost and size restrict the number of channels of TCSPC hardware. Having few detection and conversion channels, the technique is limited to processing optical waveforms with low intensity, with less than one returned photon per laser pulse, to avoid pile-up distortion [4]. However, many ultra-fast optical waveforms exhibit high dynamic range in the number of photons emitted per laser pulse. Examples are signals observed at close range in ToF with multiple reflections, diffuse reflected photons in DOT or local variations in fluorescent dye concentration in microscopy. This paper provides a single integrated chip that reduces conventional TCSPC pile-up mechanisms by an order of magnitude through ultra-parallel realizations of both photon detection and time-resolving hardware. A TDC architecture is presented which combines the two step iterated TCSPC process of time-code generation, followed by memory lookup, increment and write, into one parallel direct-to-histogram conversion. The sensor achieves 71.4ps resolution, over 18.85ns dynamic range, with 14GS/s throughput. The sensor can process 1.7Gphoton/s and generate 21k histograms/s (with 4.6μs readout time), each capturing a total of 1.7kphotons in a 1μs exposure.
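A software sketch of the histogramming step at the numbers quoted above (71.4 ps bins over an 18.85 ns range, i.e., 264 bins); the photon timestamps below are synthetic stand-ins for detector output.

```python
# TCSPC histogram accumulation at the abstract's resolution and range.
import numpy as np

BIN = 71.4e-12                     # TDC resolution (s)
RANGE = 18.85e-9                   # dynamic range (s)
edges = np.arange(0, RANGE + BIN, BIN)

# Synthetic photon timestamps: an exponential decay plus uniform background.
rng = np.random.default_rng(1)
stamps = np.concatenate([rng.exponential(2e-9, 1500), rng.uniform(0, RANGE, 200)])
stamps = stamps[stamps < RANGE]

hist, _ = np.histogram(stamps, bins=edges)   # the per-laser-cycle accumulation
print(len(hist), "bins, peak bin:", hist.argmax())
```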
Design of multimedia network courseware for Building Materials in civil engineering
This paper illustrates the design of multimedia network courseware for Building Materials, a civil engineering course, from four aspects: development aims, structure design, design standards and main characteristics of the courseware. According to the characteristics of the course, we established design standards for the courseware and introduce an innovative experiment-simulation section to address the lack of practical training in this engineering course.
A review of RF and microwave techniques for dielectric measurements on polar liquids
The requirements for dielectric measurements on polar liquids lie largely in two areas. First, there is scientific interest in revealing the structure of and interactions between the molecules; this can be studied through dielectric spectroscopy. Secondly, polar liquids are widely used as dielectric reference and tissue-equivalent materials for biomedical studies and for mobile telecommunications health- and safety-related measurements. This review discusses these roles for polar liquids and surveys the techniques available for the measurement of their complex permittivity at RF and microwave frequencies. One aim of the review is to guide researchers and metrologists in the choice of measurement methods and in their optimization. Particular emphasis is placed on the importance of traceability of these measurements to international standards.
Thin Structures in Image Based Rendering
We propose a novel method to handle thin structures in Image-Based Rendering (IBR), specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g., railings, fences, oven grills, etc., are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multi-view consistency of labels. Occlusion of the background can challenge reprojection-error calculation, so we use multi-view median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth peeling to allow free-viewpoint navigation of multi-layer, semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared to previous image-based rendering solutions.
Estimating Human Movement Parameters Using a Software Radio-based Radar
Radar is an attractive technology for long-term monitoring of human movement as it operates remotely, can be placed behind walls and is able to monitor a large area depending on its operating parameters. A radar signal reflected off a moving person carries rich information on his or her activity pattern in the form of a set of Doppler frequency signatures produced by the specific combination of limb and torso movements. To enable classification and efficient storage and transmission of movement data, unique parameters have to be extracted from the Doppler signatures. Two of the most important human movement parameters for activity identification and classification are the velocity profile and the fundamental cadence frequency of the movement pattern. However, the complicated pattern of limb and torso movement, worsened by multipath propagation in indoor environments, poses a challenge for the extraction of these parameters. In this paper, three new approaches for the estimation of the human walking velocity profile in indoor environments are proposed and discussed. The first two methods are based on spectrogram estimates, whereas the third method is based on phase-difference computation. In addition, a method to estimate the fundamental cadence frequency of the gait is suggested and discussed. The accuracy of the methods is evaluated and compared in an indoor experiment using a flexible and low-cost software-defined radar platform. The results obtained indicate that the velocity estimation methods are able to estimate the velocity profile of the person’s translational motion with an error of less than 10%. The results also show that the fundamental cadence is estimated with an error of 7%.
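A minimal sketch of the spectrogram-based estimation: track the dominant Doppler line over time and convert it to radial velocity via v = f_d·c/(2·f_c). The 2.4 GHz carrier and the synthetic echo below are assumptions for illustration, not the paper's platform parameters.

```python
# Spectrogram-based walking-velocity estimation from a synthetic radar return.
import numpy as np
from scipy.signal import spectrogram

C, FC = 3e8, 2.4e9                            # speed of light, assumed carrier (Hz)
fs = 1000.0                                   # slow-time sample rate (Hz)
t = np.arange(0, 5, 1 / fs)
v_true = 1.2 + 0.3 * np.sin(2 * np.pi * 0.9 * t)         # walking-speed profile
phase = 2 * np.pi * np.cumsum(2 * FC * v_true / C) / fs  # Doppler phase history
echo = np.exp(1j * phase) + 0.1 * (np.random.randn(len(t)) + 1j * np.random.randn(len(t)))

f, tt, S = spectrogram(echo, fs, nperseg=256, noverlap=192, return_onesided=False)
f_d = f[np.abs(S).argmax(axis=0)]             # dominant Doppler per time slice
v_est = f_d * C / (2 * FC)                    # back to radial velocity (m/s)
print("mean velocity estimate:", v_est.mean(), "m/s")
```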
Wikipedia Research and Tools: Review and Comments
I here give an overview of Wikipedia and wiki research and tools. Well over 1,000 reports have been published in the field and there exist dedicated scientific meetings for Wikipedia research. It is not possible to give a complete review of all material published. This overview serves to describe some key areas of research.
Pottery Workshops in the Coastal Area of Roman Dalmatia: landscape, spatial organization, ownership
The paper’s aim is to assess the pottery and ceramics production models present in the Roman province of Dalmatia, more specifically in its northernmost part (Liburnia), by summarising known data on production facilities, location and landscape exploitation, as well as products and their distribution. A wide array of typologically different data, spanning from archaeological and historical to geological and palynological, is used to reconstruct the onset and the chronology of pottery and ceramic production in Dalmatia and Liburnia, and to link this industry to other branches of the ancient economy. Though still in progress, recent research shows that some general models can be discerned, helping us to understand rural settlement organisation, urban production and market demands, and finally aiding the reconstruction of all those cultural changes and social processes that marked the early Imperial period on the eastern Adriatic, as well as the economic developments occurring in later periods.
SPECTRE: Seedless Network Alignment via Spectral Centralities
Network alignment consists of finding a correspondence between the nodes of two networks. From aligning proteins in computational biology, to de-anonymization of social networks, to recognition tasks in computer vision, this problem has applications in many diverse areas. The current approaches to network alignment mostly focus on the case where prior information is available, either in the form of a seed set of correctly-matched nodes or attributes on the nodes and/or edges. Moreover, those approaches which assume no such prior information tend to be computationally expensive and do not scale to large-scale networks. However, many real-world networks are very large in size, and prior information can be expensive, if not impossible, to obtain. In this paper we introduce SPECTRE, a scalable, accurate algorithm able to solve the network alignment problem with no prior information. SPECTRE makes use of spectral centrality measures and percolation techniques to robustly align nodes across networks, even if those networks exhibit only moderate correlation. Through extensive numerical experiments, we show that SPECTRE is able to recover high-accuracy alignments on both synthetic and real-world networks, and outperforms other algorithms in the seedless case.
Different clinical outcomes in patients with asymptomatic severe aortic stenosis according to the stage classification: Does the aortic valve area matter?
BACKGROUND The ACC/AHA guidelines introduced a new classification of severe aortic stenosis (AS) mainly based on maximum jet velocity (Vmax) and mean pressure gradient (mPG), but not on aortic valve area (AVA). However, prognostic value of this new classification has not yet been fully evaluated. METHODS AND RESULTS We studied 1512 patients with asymptomatic severe AS enrolled in the CURRENT AS registry in whom surgery was not initially planned. Patients were divided into 2 groups: Group 1 (N=122) comprised patients who met the recommendation for surgery; high-gradient (HG)-AS (Vmax≥4.0m/s or mPG≥40mmHg) with ejection fraction (EF)<50%, or very HG-AS (Vmax≥5.0m/s or mPG≥60mmHg), and Group 2 (N=1390) comprised patients who did not meet this recommendation. Group 2 was further subdivided into HG-AS with preserved EF (HGpEF-AS, N=498) and low-gradient (LG)-AS, but AVA<1.0cm2 (N=892). The excess risk of Group 1 relative to Group 2 for the primary outcome measure (a composite of aortic valve-related death or heart failure hospitalization) was significant (adjusted HR: 1.92, 95%CI: 1.37-2.68, P<0.001). The excess risk of HGpEF-AS relative to LG-AS for the primary outcome measure was also significant (adjusted HR: 1.45, 95%CI: 1.11-1.89, P=0.006). Among LG-AS patients, patients with reduced EF (<50%) (LGrEF-AS, N=103) had extremely high cumulative 5-year incidence of all-cause death (85.5%). CONCLUSION Trans-aortic valve gradient in combination with EF was a good prognostic marker in patients with asymptomatic AS. However, patients with LGrEF-AS had extremely poor prognosis when managed conservatively.
Optimal Policies for Multiechelon Inventory Problems with Markov-Modulated Demand
This paper considers a multistage serial inventory system with Markov-modulated demand. Random demand arises at Stage 1, Stage 1 orders from Stage 2, etc., and Stage N orders from an outside supplier with unlimited stock. The demand distribution in each period is determined by the current state of an exogenous Markov chain. Excess demand is backlogged. Linear holding costs are incurred at every stage, and linear backorder costs are incurred at Stage 1. The ordering costs are also linear. The objective is to minimize the long-run average costs in the system. The paper shows that the optimal policy is an echelon base-stock policy with state-dependent order-up-to levels. An efficient algorithm is also provided for determining the optimal base-stock levels. The results can be extended to serial systems in which there is a fixed ordering cost at Stage N and to assembly systems with linear ordering costs.
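The structure of a state-dependent order-up-to policy can be illustrated with a single-stage simulation under Markov-modulated demand; the demand states, transition matrix, and base-stock levels below are invented for illustration and are not the paper's optimal values.

```python
# Single-stage base-stock policy under Markov-modulated demand (illustrative).
import numpy as np

P = np.array([[0.9, 0.1],      # state 0 = low demand, state 1 = high demand
              [0.2, 0.8]])
demand_mean = [2.0, 6.0]       # Poisson demand mean per state
base_stock = [8, 20]           # state-dependent order-up-to levels (assumed)

rng = np.random.default_rng(2)
state, inv, cost = 0, 8, 0.0
h, b = 1.0, 9.0                # linear holding / backorder costs
for _ in range(10_000):
    inv = max(inv, base_stock[state])             # order up to the level (zero lead time)
    d = rng.poisson(demand_mean[state])
    inv -= d                                      # negative inventory = backlog
    cost += h * max(inv, 0) + b * max(-inv, 0)
    state = rng.choice(2, p=P[state])             # exogenous Markov chain transition
print("average cost per period:", cost / 10_000)
```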
Temporal Readout Noise Analysis and Reduction Techniques for Low-Light CMOS Image Sensors
In this paper, an analytical noise calculation is presented to derive the impact of process and design parameters on 1/f and thermal noise for a low-noise CMOS image sensor (CIS) readout chain. It is shown that dramatic noise reduction is obtained by using a thin-oxide transistor as the source follower of a typical 4T pixel. This approach is confirmed by a test chip designed in a 180-nm CIS process and embedding small arrays of the proposed new pixels together with state-of-the-art 4T pixels for comparison. The new pixels feature a pitch of 7.5 μm and a fill factor of 66%. A 0.4 e⁻ rms input-referred noise and a 185 μV/e⁻ conversion gain are obtained. Compared with the state-of-the-art pixels also present on the test chip, the rms noise is divided by more than 2 and the conversion gain is multiplied by 2.2.
The learning brain: lessons for education: a précis.
This book highlights the importance of anchoring education in an evidence base derived from neuroscience. For far too long, the brain has been neglected in discussions on education, and information about neuroscientific research is often not easy to access. Our aim was to provide a source book that conveys the excitement of neuroscience research that is relevant to learning and education. This research has largely, but not exclusively, been carried out using neuroimaging methods in the past decade or so, ranging from investigations of brain structure and function in dyslexia and dyscalculia to investigations of the changes in the hippocampus of London taxi drivers. To speak to teachers who might not have scientific backgrounds, we have tried to use nontechnical language as far as possible and have provided an appendix illustrating the main methods and techniques currently used and a glossary, defining terms from Acetylcholine, Action Potentials and ADHD to White Matter, Word Form Area and Working Memory. We start with the idea that the brain has evolved to educate and to be educated, often instinctively and effortlessly. We believe that understanding the brain mechanisms that underlie learning and teaching could transform educational strategies and enable us to design educational programmes that optimize learning for people of all ages and of all needs. For this reason the first two-thirds of the book follows a developmental framework. The rest of the book focuses on learning in the brain at all ages. There is a vast amount of brain research of direct relevance to education practice and policy. And yet neuroscience has had little impact on education. This might in part be due to a lack of interaction between educators and brain scientists. This in turn might be because of difficulties in translating neuroscience knowledge about how learning takes place in the brain into information of value to teachers. It is here that we try to fill a gap. Interdisciplinary dialogue needs a mediator to prevent one or other discipline dominating, and, notwithstanding John Bruer’s remarks that it is cognitive psychology that ‘bridges the gap’ between neuroscience and education (Bruer, 1997), we feel that now is the time to explore the implications of brain science itself for education.
Design of Low-Offset Voltage Dynamic Latched Comparator
The offset voltage of the dynamic latched comparator is analyzed in detail, and the comparator design is optimized for minimal offset voltage based on this analysis. As a result, the offset voltage was reduced from 0.87 μV (in the conventional double-tail latched comparator) to 0.3 μV (in the proposed comparator). The simulated results for both the conventional and the proposed comparators were obtained with PSpice (OrCAD 9.2).
Effect of vancomycin minimal inhibitory concentration on the outcome of methicillin-susceptible Staphylococcus aureus endocarditis.
BACKGROUND Staphylococcus aureus endocarditis has a high mortality rate. Vancomycin minimum inhibitory concentration (MIC) has been shown to affect the outcome of methicillin-resistant S. aureus bacteremia, and recent data point to a similar effect on methicillin-susceptible S. aureus bacteremia. We aimed to evaluate the effect of vancomycin MIC on left-sided S. aureus infective endocarditis (IE) treated with cloxacillin. METHODS We analyzed a prospectively collected cohort of patients with IE in a single tertiary-care hospital. Vancomycin, daptomycin, and cloxacillin MIC was determined by E-test. S. aureus strains were categorized as low vancomycin MIC (<1.5 µg/mL) and high vancomycin MIC (≥1.5 µg/mL). The primary endpoint was in-hospital mortality. RESULTS We analyzed 93 patients with left-sided IE treated with cloxacillin, of whom 53 (57%) had a vancomycin MIC < 1.5 µg/mL and 40 (43%) a vancomycin MIC ≥ 1.5 µg/mL. In-hospital mortality was 30% (n = 16/53) in patients with a low vancomycin MIC and 53% (n = 21/40) in those with a high vancomycin MIC (P = .03). No correlation was found between oxacillin MIC and vancomycin or daptomycin MIC. Logistic regression analysis showed that higher vancomycin MIC increased in-hospital mortality 3-fold (odds ratio, 3.1; 95% confidence interval, 1.2-8.2) after adjustment for age, year of diagnosis, septic complications, and nonseptic complicated endocarditis. CONCLUSIONS Our results indicate that vancomycin MIC could be used to identify a subgroup of patients with methicillin-susceptible S. aureus IE at risk of higher mortality. The worse outcome of staphylococcal infections with a higher vancomycin MIC cannot be explained solely by suboptimal pharmacokinetics of antibiotics.
Effect of co-existing Co2+ ions on the aggregation of humic acid in aquatic environment: Aggregation kinetics, dynamic properties and fluorescence spectroscopic study.
The fate and transport of humic substances in aquatic environments depend significantly on their interactions with co-existing ions. Herein, we employed dynamic light scattering (DLS) measurement, molecular dynamics (MD) simulation and fluorescence spectrometry to investigate the aggregation of humic acid (HA) in the presence of Co2+ ions. The aggregation kinetics was characterized by the hydrodynamic diameter (Dh) and the attachment efficiency (α) of HA aggregates. α increases gradually in the reaction-limited (slow) regime owing to the decrease of the double-layer repulsion, and the energy barrier is largely eliminated in the diffusion-limited regime, where α is close to unity. The complexation between functional groups (i.e., carboxylic and phenolic groups) of HA and Co2+ ions contributes significantly to the aggregation process of HA. MD simulation and density functional theory (DFT) calculation demonstrate that the aggregation of HA can be promoted by Co2+ through several inter- or intra-molecular interactions between HA and the Co2+ ions. The results provide a pathway for insight into the interactions between HA and metal ions, which is important for a deeper understanding of the environmental behavior of HA in natural aqueous systems.
A plant-based diet and coronary artery disease: a mandate for effective therapy
A 1999 autopsy study of young adults in the US between the ages of 17 and 34 years who died from accidents, suicides, and homicides confirmed that coronary artery disease (CAD) is ubiquitous in this age group. The disease process at this stage is too early to cause coronary events but heralds their onset in the decades to follow. These data are similar to those reported in an earlier postmortem analysis of US combat casualties during the Korean conflict, which found early CAD in nearly 80% of soldiers at an average age of 20 years. From these reports, which are 17 and 63 years old, respectively, it is clear that the foundation of CAD is established by the end of high school. Yet, medicine and public health leaders have not taken any steps to forestall or eliminate the early onset of this epidemic. Smoking cessation, a diet with lean meat and low-fat dairy, and exercise are generally advised, but cardiovascular disease (CVD) remains the number one killer of women and men in the US. The question is, why? Unfortunately, such dietary gestures do not treat the primary cause of CVD. The same can be said of commonly prescribed cardiovascular medications such as beta-blockers, angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, anticoagulants, aspirin, and cholesterol-lowering drugs, and of medical interventions such as bare metal stents, drug-eluting stents, and coronary artery bypass surgery. It is increasingly a shameful national embarrassment for the United States to have constructed a billion-dollar cardiac healthcare industry surrounding an illness that does not even exist in more than half of the planet. If you, as a cardiologist or a cardiac surgeon, decided to hang your shingle in Okinawa, the Papua Highlands of New Guinea, rural China, Central Africa, or with the Tarahumara Indians of Northern Mexico, you had better plan on a different profession, because these populations do not have cardiovascular disease. The common thread is that they all thrive on whole food, plant-based nutrition (WFPBN) with minimal intake of animal products. By way of contrast, in the United States, we ignore CVD inception initiated by progressive endothelial injury, inflammatory oxidative stress, decreased nitric oxide production, foam cell formation, diminished endothelial progenitor cell production and development of plaque that may rupture and cause myocardial infarction or stroke. This series of events is primarily set in motion, and worsened, by the Western diet, which consists of added oils, dairy, meat, fish, fowl, and sugary foods and drinks—all of which injure endothelial function after ingestion, making food a major, if not the major, cause of CAD. In overlooking disease causation, we implement therapies that have high morbidity and mortality. The side effects of a plethora of cardiovascular drugs include the risk of diabetes, neuromuscular pain, brain fog, liver injury, chronic cough, fatigue, hemorrhage, and erectile dysfunction. Surgical interventions are fatal for tens of thousands of patients annually. Each year approximately 1.2 million stents are placed with a 1% mortality rate, causing 12,000 deaths, and 500,000 bypass surgeries are performed with a 3% mortality rate, resulting in another 15,000 deaths. In total, 27,000 patients die annually from these two procedures.
It is as though in ignoring this dairy, oil, and animal-based illness, we are wedded to providing futile attempts at temporary symptomatic relief with drugs and interventional therapy, which employs an unsuccessful mechanical approach to a biological illness with no hope for cure. Patients continue to consume the very foods that are destroying them. This disastrous illness and ineffective treatments need never happen if we follow the lessons of plant-based cultures where CVD is virtually nonexistent.
A novel evolutionary data mining algorithm with applications to churn prediction
Classification is an important topic in data mining research. Given a set of data records, each of which belongs to one of a number of predefined classes, the classification problem is concerned with the discovery of classification rules that allow records with unknown class membership to be correctly classified. Many algorithms have been developed to mine large data sets for classification models, and they have been shown to be very effective. However, many of them are not designed to determine the likelihood of each classification they make, and are therefore not readily applicable to problems such as churn prediction. For such an application, the goal is not only to predict whether or not a subscriber will switch from one carrier to another; it is also important to predict the likelihood of the subscriber's doing so, because a carrier can then choose to provide special personalized offers and services to those subscribers predicted to be most likely to churn. Given its importance, we propose a new data mining algorithm, called data mining by evolutionary learning (DMEL), to handle classification problems in which the accuracy of each prediction made has to be estimated. In performing its tasks, DMEL searches the possible rule space using an evolutionary approach with the following characteristics: 1) the evolutionary process begins with the generation of an initial set of first-order rules (i.e., rules with one conjunct/condition) using a probabilistic induction technique, and based on these rules, rules of higher order (two or more conjuncts) are obtained iteratively; 2) when identifying interesting rules, an objective interestingness measure is used; 3) the fitness of a chromosome is defined in terms of the probability that the attribute values of a record can be correctly determined using the rules it encodes; and 4) the likelihoods of the predictions (or classifications) made are estimated so that subscribers can be ranked according to their likelihood to churn. Experiments with different data sets showed that DMEL is able to effectively discover interesting classification rules. In particular, it is able to predict churn accurately under different churn rates when applied to real telecom subscriber data.
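A compressed sketch of the rule-growing idea, assuming a pandas DataFrame with categorical attributes and a binary churn column: first-order rules are scored by empirical precision, the best are grown by one conjunct per generation, and a subscriber's churn likelihood is the precision of the best matching rule. The paper's interestingness measure and chromosome encoding are omitted here.

```python
# Illustrative higher-order rule growth with precision-based scoring.
import pandas as pd

def grow_rules(df, target="churn", max_order=2, min_support=20, beam=10):
    """Score conjunctive rules ((attr, val), ...) by empirical precision."""
    attrs = [c for c in df.columns if c != target]
    cands = [((a, v),) for a in attrs for v in df[a].unique()]  # first-order rules
    rules = []
    for order in range(1, max_order + 1):
        scored = []
        for rule in cands:
            mask = pd.Series(True, index=df.index)
            for a, v in rule:
                mask &= df[a] == v
            if mask.sum() >= min_support:
                scored.append((rule, df.loc[mask, target].mean()))  # precision
        rules += scored
        if order < max_order:                     # grow the best rules by one conjunct
            best = [r for r, _ in sorted(scored, key=lambda x: -x[1])[:beam]]
            cands = [r + ((a, v),) for r in best for a in attrs
                     if a not in {c for c, _ in r} for v in df[a].unique()]
    return rules

def churn_likelihood(row, rules):
    """Rank subscribers by the precision of the best matching rule."""
    matches = [p for rule, p in rules if all(row[a] == v for a, v in rule)]
    return max(matches, default=0.0)
```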
The Impact of Perceived Channel Utilities, Shopping Orientations, and Demographics on the Consumer's Online Buying Behavior
This study proposed and tested a model of consumer online buying behavior. The model posits that consumer online buying behavior is affected by demographics, channel knowledge, perceived channel utilities, and shopping orientations. Data were collected by a research company using an online survey of 999 U.S. Internet users, and were cross-validated with other similar national surveys before being used to test the model. Findings of the study indicated that education, convenience orientation, experience orientation, channel knowledge, perceived distribution utility, and perceived accessibility are robust predictors of the online buying status (frequent online buyer, occasional online buyer, or non-online buyer) of Internet users. Implications of the findings and directions for future research are discussed.
Gated Neural Networks for Targeted Sentiment Analysis
Targeted sentiment analysis classifies the sentiment polarity towards each target entity mention in given text documents. Seminal methods extract manually designed discrete features from automatic syntactic parse trees in order to capture semantic information of the enclosing sentence with respect to a target entity mention. Recently, it has been shown that competitive accuracies can be achieved without using syntactic parsers, which can be highly inaccurate on noisy text such as tweets. This is achieved by applying distributed word representations and rich neural pooling functions over a simple and intuitive segmentation of tweets according to target entity mentions. In this paper, we extend this idea by proposing a sentence-level neural model to address the limitation of pooling functions, which do not explicitly model tweet-level semantics. First, a bi-directional gated neural network is used to connect the words in a tweet so that pooling functions can be applied over the hidden layer instead of words, for better representing the target and its contexts. Second, a three-way gated neural network structure is used to model the interaction between the target mention and its surrounding contexts. Experiments show that our proposed model gives significantly higher accuracies than the current best method for targeted sentiment analysis.
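A minimal PyTorch sketch of the idea follows. It is not the paper's exact architecture: the dimensions, the mean-pooling, and the softmax gating over (left context, target, right context) are illustrative assumptions chosen to show how a gate can weigh the three segments.

```python
# Sketch: BiGRU over a tweet, pooled per segment, combined by a 3-way gate.
import torch
import torch.nn as nn

class GatedTargetedSentiment(nn.Module):
    def __init__(self, vocab, emb=50, hid=32, classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.gate = nn.Linear(6 * hid, 3)   # one weight per segment
        self.out = nn.Linear(2 * hid, classes)

    def forward(self, tokens, t_start, t_end):
        h, _ = self.gru(self.embed(tokens))            # (1, seq, 2*hid)
        left = h[:, :t_start].mean(dim=1) if t_start > 0 else h.mean(dim=1)
        target = h[:, t_start:t_end].mean(dim=1)
        right = h[:, t_end:].mean(dim=1) if t_end < h.size(1) else h.mean(dim=1)
        # Three-way gate: softmax weights over (left, target, right).
        g = torch.softmax(self.gate(torch.cat([left, target, right], -1)), -1)
        mixed = g[:, 0:1] * left + g[:, 1:2] * target + g[:, 2:3] * right
        return self.out(mixed)

model = GatedTargetedSentiment(vocab=100)
tokens = torch.randint(0, 100, (1, 12))      # a 12-token "tweet"
logits = model(tokens, t_start=4, t_end=6)   # target mention = tokens 4-5
print(logits.shape)                          # torch.Size([1, 3])
```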
Comparison of long-term outcomes between children with aplastic anemia and refractory cytopenia of childhood who received immunosuppressive therapy with antithymocyte globulin and cyclosporine.
The 2008 World Health Organization classification proposed a new entity in childhood myelodysplastic syndrome, refractory cytopenia of childhood. However, it is unclear whether this morphological classification reflects clinical outcomes. We retrospectively reviewed bone marrow morphology in 186 children (median age 8 years; range 1-16 years) who were enrolled in the prospective study and received horse antithymocyte globulin and cyclosporine between July 1999 and November 2008. The median follow-up period was 87 months (range 1-146 months). Out of 186 patients, 62 (33%) were classified with aplastic anemia, 94 (49%) with refractory cytopenia of childhood, and 34 (18%) with refractory cytopenia with multilineage dysplasia. Aplastic anemia patients received granulocyte colony-stimulating factor more frequently and for longer durations than other patients (P<0.01). After six months, response rates to immunosuppressive therapy were not significantly different among the 3 groups. Acquisition of chromosomal abnormalities was observed in 5 patients with aplastic anemia, 4 patients with refractory cytopenia of childhood, and 3 patients with refractory cytopenia with multilineage dysplasia. Although the cumulative incidence of total clonal evolution at ten years was not significantly different among the 3 groups, the cumulative incidence of monosomy 7 development was significantly higher in aplastic anemia than in the other groups (P=0.02). Multivariate analysis revealed that only granulocyte colony-stimulating factor administration duration of 40 days or more was a significant risk factor for monosomy 7 development (P=0.02). These findings suggest that even the introduction of a strict morphological distinction from hypoplastic myelodysplastic syndrome cannot eradicate clonal evolution in children with aplastic anemia.
A 236 nW, −56.5 dBm-Sensitivity Bluetooth Low-Energy Wakeup Receiver with Energy Harvesting in 65 nm CMOS
Batteryless operation and ultra-low-power (ULP) wireless communication will be two key enabling technologies as the IC industry races to keep pace with the IoE projection of one trillion connected sensors by 2025. Bluetooth Low-Energy (BLE) is used in many consumer IoE devices now because it offers the lowest average power for a radio that can communicate directly with a mobile device [1]. The BLE standard requires that the IoE device continuously advertise, which initiates the connection to a mobile device. Sub-1s advertisement intervals are common to minimize latency. However, this continuous advertising results in a typical minimum average power of tens of microwatts even at low duty cycles. This leads to the quoted 1-year lifetimes of event-driven IoE devices (e.g., tracking tags, iBeacons) that operate from coin-cell batteries. This minimum power is too high for robust, batteryless operation in a small form factor.
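The lifetime arithmetic behind these claims can be checked with assumed numbers (the CR2032 figure of roughly 225 mAh at 3 V is our assumption, not a value from the paper):

```python
# Back-of-the-envelope coin-cell lifetime vs. average radio power.
CAPACITY_J = 0.225 * 3 * 3600          # ~2430 J usable energy in a CR2032

def lifetime_years(avg_power_w):
    return CAPACITY_J / avg_power_w / (3600 * 24 * 365)

# Tens of microwatts of average advertising power gives roughly one year,
# matching the quoted lifetimes of coin-cell tracking tags.
print(f"{lifetime_years(75e-6):.1f} years at 75 uW")    # ~1.0 year
# A 236 nW wakeup receiver would extend that by orders of magnitude.
print(f"{lifetime_years(236e-9):.0f} years at 236 nW")  # ~300+ years
```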
A meta-analysis of the evidence on the impact of prenatal and early infancy exposures to mercury on autism and attention deficit/hyperactivity disorder in childhood.
Although a considerable number of epidemiological studies have been conducted to clarify the associations between mercury exposure in utero or during early infancy and later incidence of autism spectrum disorders (ASD) or attention-deficit hyperactivity disorder (ADHD), the conclusions remain unclear. A meta-analysis was conducted for two major exposure sources: thimerosal-containing vaccines, which contain ethylmercury (clinical exposure), and environmental sources, using relevant literature published before April 2014. While thimerosal exposures did not show any material associations with an increased risk of ASD or ADHD (summary odds ratio (OR) 0.99, 95% confidence interval (CI) 0.80-1.24 for ASD; OR 0.91, 95% CI 0.70-1.13 for ADHD/ADD), significant associations were observed for environmental exposures for both ASD (OR 1.66, 95% CI 1.14-2.17) and ADHD (OR 1.60, 95% CI 1.10-2.33). The summary ORs were similar after excluding studies not adjusted for confounders. Moderate adverse effects were observed only between environmental inorganic or organic mercury exposures and ASD/ADHD. However, these results should be interpreted with caution, since the number of epidemiological studies on this issue is limited and still at an early stage. Further studies focused on subjects with genetic vulnerabilities to developmental disorders are warranted for a better understanding of the effects of such environmental exposures.
GTP but not GDP analogues promote association of ADP-ribosylation factors, 20-kDa protein activators of cholera toxin, with phospholipids and PC-12 cell membranes.
ADP-ribosylation factors (ARFs) are a family of approximately 20-kDa guanine nucleotide-binding proteins initially identified by their ability to enhance cholera toxin ADP-ribosyltransferase activity in the presence of GTP. ARFs have been purified from both membrane and cytosolic fractions. ARF purified from bovine brain cytosol requires phospholipid plus detergent for high affinity guanine nucleotide binding and for optimal enhancement of cholera toxin ADP-ribosyltransferase activity. The phospholipid requirements, combined with a putative role for ARF in vesicular transport, suggested that the soluble protein might interact reversibly with membranes. A polyclonal antibody against purified bovine ARF (sARF II) was used to detect ARF by immunoblot in membrane and soluble fractions from rat pheochromocytoma (PC-12) cell homogenates. ARF was predominantly cytosolic but increased in membranes during incubation of homogenates with nonhydrolyzable GTP analogues guanosine 5'-O-(3-thiotriphosphate), guanylyl-(beta gamma-imido)-diphosphate, and guanylyl-(beta gamma-methylene)-diphosphate, and to a lesser extent, adenosine 5'-O-(3-thiotriphosphate). GTP, GDP, GMP, and ATP were inactive. Cytosolic ARF similarly associated with added phosphatidylserine, phosphatidylinositol, or cardiolipin in GTP gamma S-dependent fashion. ARF binding to phosphatidylserine was reversible and coincident with stimulation of cholera toxin-catalyzed ADP-ribosylation. These observations may reflect a mechanism by which ARF could cycle between soluble and membrane compartments in vivo.
The Carnegie Department of Embryology at 100: Looking Forward.
Biological research has a realistic chance within the next 50 years of discovering the basic mechanisms by which metazoan genomes encode the complex morphological structures and capabilities that characterize life as we know it. However, achieving those goals is now threatened by researchers who advocate an end to basic research on nonmammalian organisms. For the sake of society, medicine, and the science of biology, the focus of biomedical research should place more emphasis on basic studies guided by the underlying evolutionary commonality of all major animals, as manifested in their genes, pathways, cells, and organs.
Deriving Verb Predicates By Clustering Verbs with Arguments
Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data, such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g. “marry(person, person)”, “feel(person, emotion)”. We make use of a novel low-dimensional embedding of verbs and their arguments to produce high-quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.
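The key representational move, clustering verbs typed by their argument categories rather than bare verbs, can be sketched as follows. This is not VerbKB's method; the random vectors are stand-ins for the learned low-dimensional embeddings:

```python
# Sketch: cluster typed verbs so "feel(person,emotion)" and
# "marry(person,person)" can land in different clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
typed_verbs = ["marry(person,person)", "feel(person,emotion)",
               "divorce(person,person)", "sense(person,emotion)",
               "wed(person,person)", "experience(person,emotion)"]
X = rng.normal(size=(len(typed_verbs), 8))   # stand-in 8-dim embeddings

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for verb, label in zip(typed_verbs, labels):
    print(label, verb)
```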
On the de-facto Standard of Event-driven Process Chains: How EPC is defined in Literature
The Business Process Modelling Notation (BPMN) and the Event-driven Process Chain (EPC) are both frequently used modelling languages for creating business process models. While there is a well-defined standard for BPMN, such a standard is missing for EPC. As a standard would be beneficial for improving interoperability among different vendors, this paper aims at providing the means for future EPC standardization. To this end, we have conducted a structured literature review of the most common EPC variants in IS research. We provide a structured overview of the evolution of different EPC variants, describe their means and capabilities, and elaborate criteria for deciding whether to include EPC variants in a standardization process.
Patent Mining: A Survey
Patent documents are important intellectual resources for protecting the interests of individuals, organizations, and companies. Unlike general web documents, patent documents have a well-defined format including a front page, description, claims, and figures. However, they are lengthy and rich in technical terms, which requires enormous human effort to analyze. Hence, a new research area, called patent mining, has emerged in recent years, aiming to assist patent analysts in investigating, processing, and analyzing patent documents. Despite the recent advances in patent mining, it is still far from being well explored in research communities. To help patent analysts and interested readers obtain a big picture of patent mining, we provide a systematic summary of existing research efforts along this direction. In this survey, we first present an overview of the technical trend in patent mining. We then investigate multiple research questions related to patent documents, including patent retrieval, patent classification, and patent visualization, and provide summaries and highlights for each question by delving into the corresponding research efforts.
ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation
We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in paraphrase identification corpora, and it distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with both paraphrase and negation. Both new resources contribute to a better understanding of the paraphrase phenomenon and allow for studying the relationship between paraphrasing and negation. To developers of Paraphrase Identification systems, the ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and the ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.
Minutia-based enhancement of fingerprint samples
Image enhancement is a common pre-processing step before the extraction of biometric features from a fingerprint sample. It can be essential, especially for images of low quality. An ideal fingerprint image enhancement should aim to improve the end-to-end biometric performance, i.e. the performance achieved on biometric features extracted from enhanced fingerprint samples. We use a deep learning model for the task of image enhancement. This work's main contribution is a dedicated cost function, optimized during training, that takes the biometric feature extraction into account. Our approach aims to improve the accuracy and reliability of the biometric feature extraction process: no feature should be missed, and all features should be extracted as precisely as possible. In this way, the loss function forces the image enhancement to learn how to improve the suitability of a fingerprint sample for a biometric comparison process. The effectiveness of the cost function is demonstrated for two different biometric feature extraction algorithms.
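One plausible way to make a reconstruction loss "feature-aware" is to up-weight pixels near annotated minutiae. This is a hedged sketch of the general idea only; the paper's actual cost function, which couples to the feature extractor itself, is not reproduced here, and the weighting scheme below is an assumption:

```python
# Sketch: reconstruction loss that penalizes errors near minutiae the most.
import torch

def minutia_weighted_loss(enhanced, target, minutia_mask, weight=10.0):
    """enhanced/target: (B,1,H,W) images; minutia_mask: 1.0 near minutiae."""
    per_pixel = (enhanced - target) ** 2
    weights = 1.0 + (weight - 1.0) * minutia_mask
    return (weights * per_pixel).mean()

enhanced = torch.rand(2, 1, 64, 64, requires_grad=True)
target = torch.rand(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.95).float()   # sparse minutia regions
loss = minutia_weighted_loss(enhanced, target, mask)
loss.backward()
print(loss.item())
```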
Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
Knowledge bases are employed in a variety of applications, from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness. Embedding models attain state-of-the-art accuracy in knowledge base completion, but their predictions are notoriously hard to interpret. In this paper, we adapt “pedagogical approaches” (from the literature on neural networks) so as to interpret embedding models by extracting weighted Horn rules from them. We show how pedagogical approaches have to be adapted to take on the large-scale relational aspects of knowledge bases, and we show experimentally their strengths and weaknesses.
AssocGEN: Engine for analyzing metadata based associations in digital evidence
Traditionally, sources of digital evidence are analyzed by individually examining the various artifacts contained therein and using the artifact metadata to validate authenticity and sequence them. However, when artifacts from forensic images, folders, log files, and network packet dumps have to be analyzed, the examination of the artifacts and the metadata in isolation presents a significant challenge. Ideally, when a source is examined, it is valuable to determine correlations between the artifacts and group the related artifacts. Such a grouping can simplify the task of analysis by minimizing the need for human intervention. By virtue of the value that metadata bring to an investigation and their ubiquitous nature, metadata-based association is the first step in realizing such correlations automatically during analysis. In this paper, we present the AssocGEN analysis engine, which uses metadata to determine associations between artifacts that belong to files, logs, and network packet dumps and identifies metadata associations to group the related artifacts. A metadata association can represent any type of value match or relationship that is deemed relevant in the context of an investigation. We have conducted a preliminary evaluation of AssocGEN on the classical ownership problem to highlight the benefits of incorporating this approach in existing forensic tools.
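Grouping artifacts by metadata value matches can be sketched with a union-find pass, so that transitive matches (A relates to B via owner, B to C via hash) merge into one group. The artifact schema and field names below are hypothetical, not AssocGEN's actual data model:

```python
# Sketch: metadata-association grouping across heterogeneous artifacts.
from collections import defaultdict

artifacts = [
    {"id": 1, "source": "disk",   "owner": "alice", "md5": "abc"},
    {"id": 2, "source": "log",    "owner": "alice", "md5": "def"},
    {"id": 3, "source": "pcap",   "owner": "bob",   "md5": "abc"},
    {"id": 4, "source": "folder", "owner": "carol", "md5": "xyz"},
]
ASSOC_FIELDS = ["owner", "md5"]   # fields deemed relevant to the case

# Union-find so transitive matches (1~2 via owner, 1~3 via md5) merge.
parent = {a["id"]: a["id"] for a in artifacts}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

index = defaultdict(list)
for a in artifacts:
    for f in ASSOC_FIELDS:
        index[(f, a[f])].append(a["id"])
for ids in index.values():
    for other in ids[1:]:
        parent[find(other)] = find(ids[0])

groups = defaultdict(list)
for a in artifacts:
    groups[find(a["id"])].append(a["id"])
print(list(groups.values()))   # [[1, 2, 3], [4]]
```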
An Online Health Prevention Intervention for Youth with Addicted or Mentally Ill Parents: Experiences and Perspectives of Participants and Providers from a Randomized Controlled Trial
BACKGROUND Mental illnesses affect many people around the world, either directly or indirectly. Families of persons suffering from mental illness or addiction suffer too, especially their children. In the Netherlands, 864,000 parents meet the diagnostic criteria for a mental illness or addiction. Evidence shows that offspring of mentally ill or addicted parents are at risk for developing mental disorders or illnesses themselves. The Kopstoring course is an online 8-week group course with supervision by 2 trained psychologists or social workers, aimed to prevent behavioral and psychological problems for children (aged 16 to 25 years) of parents with mental health problems or addictions. The course addresses themes such as roles in the family and mastery skills. An online randomized controlled trial (RCT) was conducted to assess the effectiveness of the Kopstoring course. OBJECTIVE The aim was to gain knowledge about expectations, experiences, and perspectives of participants and providers of the online Kopstoring course. METHODS A process evaluation was performed to evaluate the online delivery of Kopstoring and the experiences and perspectives of participants and providers of Kopstoring. Interviews were performed with members from both groups. Participants were drawn from a sample from the Kopstoring RCT. RESULTS Thirteen participants and 4 providers were interviewed. Five main themes emerged from these interviews: background, the requirements for the intervention, experience with the intervention, technical aspects, and research aspects. Overall, participants and providers found the intervention to be valuable because it was online; therefore, protecting their anonymity was considered a key component. Most barriers existed in the technical sphere. Additional barriers existed with conducting the RCT, namely gathering informed consent and gathering parental consent in the case of minors. CONCLUSIONS This study provides valuable insight into participants' and providers' experiences and expectations with the online preventive intervention Kopstoring. It also sheds light on the process of the online provision of Kopstoring and the accompanying RCT. The findings of this study may partly explain dropout rates when delivering online interventions. The change in the (financial) structure of the youth mental health care system in the Netherlands has financial implications for the delivery of prevention programs for youth. Lastly, there are few RCTs that assess the effectiveness and cost-effectiveness of online prevention programs in the field of (youth) mental health care and not many process evaluations of these programs exist. This hampers a good comparison between online interventions and the expectations and experiences of the participants and providers. TRIAL REGISTRATION Nederlands Trial Register: NTR1982; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=1982 (Archived by WebCite® at http://www.webcitation.org/6d8xYDQbB).
Predictive Blacklisting as an Implicit Recommendation System
A widely used defense practice against malicious traffic on the Internet is through blacklists: lists of prolific attack sources are compiled and shared. The goal of blacklists is to predict and block future attack sources. Existing blacklisting techniques have focused on the most prolific attack sources and, more recently, on collaborative blacklisting. In this paper, we formulate the problem of forecasting attack sources (also referred to as "predictive blacklisting") based on shared attack logs, as an implicit recommendation system. We compare the performance of existing approaches against the upper bound for prediction and we demonstrate that there is much room for improvement. Inspired by the recent NetFlix competition, we propose a multi-level collaborative filtering model that is adjusted and tuned specifically for the attack forecasting problem. Our model captures and combines various factors namely: attacker-victim history (using time-series) and attackers and/or victims interactions (using neighborhood models). We evaluate our combined method on one month of logs from Dshield.org and demonstrate that it improves significantly the prediction rate over state-of-the-art methods as well as the robustness against poisoning attacks.
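The two main ingredients, attacker-victim history with temporal decay and a victim-neighborhood term, can be sketched as follows. This is a simplified stand-in, not the paper's tuned multi-level model, and the toy data and blending weight are invented:

```python
# Sketch: EWMA attack history blended with a victim-similarity neighborhood.
import numpy as np

# attacks[t, a, v] = 1 if attacker a hit victim v on day t (toy data).
rng = np.random.default_rng(1)
attacks = (rng.random((30, 50, 10)) > 0.95).astype(float)

# Time-series factor: recent attacks count more (EWMA over days).
decay = 0.8 ** np.arange(30)[::-1]                    # oldest day weighted least
history = np.tensordot(decay, attacks, axes=(0, 0))   # (attackers, victims)

# Neighborhood factor: victim-victim similarity from co-attack patterns.
sim = np.nan_to_num(np.corrcoef(history.T))           # (victims, victims)
np.fill_diagonal(sim, 0)
neighborhood = history @ np.clip(sim, 0, None)        # borrow from similar victims

score = history + 0.5 * neighborhood                  # blended forecast
print(np.argsort(score[:, 0])[::-1][:5])   # top predicted attackers of victim 0
```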
Knowledge-sharing and influence in online social networks via viral marketing
Online social networks are increasingly being recognized as an important source of information influencing the adoption and use of products and services. Viral marketing—the tactic of creating a process where interested people can market to each other—is therefore emerging as an important means to spread the word and stimulate the trial, adoption, and use of products and services. Consider the case of Hotmail, one of the earliest firms to tap the potential of viral marketing. Based predominantly on publicity from word-of-mouse [4], the Web-based email service provider garnered one million registered subscribers in its first six months, hit two million subscribers two months later, and passed the eleven million mark in eighteen months [7]. Wired magazine put this growth in perspective in its December 1998 issue: “The Hotmail user base grew faster than [that of] any media company in history—faster than CNN, faster than AOL, even faster than Seinfeld's audience.” By mid-2000, Hotmail had over 66 million users, with 270,000 new accounts being established each day. While the potential of viral marketing to efficiently reach out to a broad set of potential users is attracting considerable attention, the value of this approach is also being questioned [5]. There needs to be a greater understanding of the contexts in which this strategy works and the characteristics of products and services for which it is most effective. This is particularly important because the inappropriate use of viral marketing can be counterproductive, creating unfavorable attitudes towards products. Work examining this phenomenon currently provides either descriptive accounts of particular initiatives [8] or advice based on anecdotal evidence [2]. What is missing is an analysis of viral marketing that highlights systematic patterns in the nature of knowledge-sharing and persuasion by influencers and responses by recipients in online social networks. To this end, we propose an organizing framework for viral marketing that draws on prior theory and highlights different behavioral mechanisms underlying knowledge-sharing, influence, and compliance in online social networks. Though the framework is descrip-
Geological Conditions and Prospecting Directions for Copper Mineralization in the East Zone of the Zhongtiao Rift: Wangwushan Mountains, Henan Province
There are favorable geological conditions and prospecting directions for copper mineralization in the Wangwushan area of Henan. The area can be compared with the Tongchangyu and Hubu copper deposits in Zhongtiaoshan with respect to geological setting, host lithology, magma formation, tectonic form, and mineralization style. The favorable mineralization conditions include: the stepwise evolution of the Zhongtiao rift; large mantle-connected faults supplying abundant ore material and emplacement space; frequent magmatic activity supplying heat energy and additional ore matter; a detachment (stripping) fault system providing space to host ore; and strong deformation and metamorphism reworking the ore-bearing formation through activation and transfer of ore components. The authors point out the prospecting targets in the Wangwushan region.
Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups.
BACKGROUND Qualitative research explores complex phenomena encountered by clinicians, health care providers, policy makers and consumers. Although partial checklists are available, no consolidated reporting framework exists for any type of qualitative design. OBJECTIVE To develop a checklist for explicit and comprehensive reporting of qualitative studies (in depth interviews and focus groups). METHODS We performed a comprehensive search in Cochrane and Campbell Protocols, Medline, CINAHL, systematic reviews of qualitative studies, author or reviewer guidelines of major medical journals and reference lists of relevant publications for existing checklists used to assess qualitative studies. Seventy-six items from 22 checklists were compiled into a comprehensive list. All items were grouped into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. Duplicate items and those that were ambiguous, too broadly defined and impractical to assess were removed. RESULTS Items most frequently included in the checklists related to sampling method, setting for data collection, method of data collection, respondent validation of findings, method of recording data, description of the derivation of themes and inclusion of supporting quotations. We grouped all items into three domains: (i) research team and reflexivity, (ii) study design and (iii) data analysis and reporting. CONCLUSIONS The criteria included in COREQ, a 32-item checklist, can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis and interpretations.
Netazepide, a Gastrin Receptor Antagonist, Normalises Tumour Biomarkers and Causes Regression of Type 1 Gastric Neuroendocrine Tumours in a Nonrandomised Trial of Patients with Chronic Atrophic Gastritis
INTRODUCTION Autoimmune chronic atrophic gastritis (CAG) causes hypochlorhydria and hypergastrinaemia, which can lead to enterochromaffin-like (ECL) cell hyperplasia and gastric neuroendocrine tumours (type 1 gastric NETs). Most behave indolently, but some larger tumours metastasise. Antrectomy, which removes the source of the hypergastrinaemia, usually causes tumour regression. Non-clinical and healthy-subject studies have shown that netazepide (YF476) is a potent, highly selective and orally active gastrin/CCK-2 receptor antagonist. Also, it is effective in animal models of ECL-cell tumours induced by hypergastrinaemia. AIM To assess the effect of netazepide on tumour biomarkers, number and size in patients with type 1 gastric NETs. METHODS We studied 8 patients with multiple tumours and raised circulating gastrin and chromogranin A (CgA) concentrations in an open trial of oral netazepide for 12 weeks, with follow-up 12 weeks later. At 0, 6, 12 and 24 weeks, we carried out gastroscopy, counted and measured tumours, and took biopsies to assess abundances of several ECL-cell constituents. At 0, 3, 6, 9, 12 and 24 weeks, we measured circulating gastrin and CgA and assessed safety and tolerability. RESULTS Netazepide was safe and well tolerated. Abundances of CgA (p<0.05), histidine decarboxylase (p<0.05) and matrix metalloproteinase-7 (p<0.10) were reduced at 6 and 12 weeks, but were raised again at follow-up. Likewise, plasma CgA was reduced at 3 weeks (p<0.01), remained so until 12 weeks, but was raised again at follow-up. Tumours were fewer and the size of the largest one was smaller (p<0.05) at 12 weeks, and remained so at follow-up. Serum gastrin was unaffected. CONCLUSION The reduction in abundances, plasma CgA, and tumour number and size by netazepide shows that type 1 NETs are gastrin-dependent tumours. Failure of netazepide to increase serum gastrin further is consistent with achlorhydria. Netazepide is a potential new treatment for type 1 NETs. Longer, controlled trials are justified. TRIAL REGISTRATION European Union EudraCT database 2007-002916-24 https://www.clinicaltrialsregister.eu/ctr-search/search?query=2007-002916-24 ClinicalTrials.gov NCT01339169 http://clinicaltrials.gov/ct2/show/NCT01339169?term=yf476&rank=5.
Dynamic Self-modifying Code Detection Based on Backward Analysis
Self-modifying code (SMC) is widely used in obfuscated programs to increase the difficulty of reverse engineering. The typical mode of self-modifying code is restore-execute-hide: it drives the program to conceal its real behavior most of the time, and only at actual run time is the real code restored and executed. In order to locate SMC and further recover the original logic of the code to guide program analysis, a dynamic self-modifying code detection method based on backward analysis is proposed. Our method first extracts an execution trace, including instructions and status, through dynamic analysis. We then maintain a memory set storing the memory addresses of executed instructions; the memory set is updated dynamically while searching the trace backwards, and at the same time each memory-write address is checked against the current memory set in order to identify the "modify then execute" pattern. By validating the self-modifying code identified through the above procedure, we can deobfuscate a program that uses self-modifying code and recover its original logic. A prototype for self-modifying code detection is designed and implemented. The evaluation results show that our method can trace the execution of a program effectively and can reduce time and space consumption.
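The backward "modify then execute" check can be illustrated in a few lines: walk the recorded trace in reverse, accumulate executed instruction addresses, and flag any earlier write into that set. The trace entry format below is a hypothetical simplification of a real instruction trace:

```python
# Sketch: backward scan of an execution trace for write-then-execute sites.
trace = [
    {"ip": 0x1000, "write_addr": None},
    {"ip": 0x1001, "write_addr": 0x2000},   # writes the byte it later runs
    {"ip": 0x1002, "write_addr": None},
    {"ip": 0x2000, "write_addr": None},     # executes the modified byte
]

executed = set()
smc_sites = []
for entry in reversed(trace):
    executed.add(entry["ip"])
    if entry["write_addr"] in executed:
        smc_sites.append((hex(entry["ip"]), hex(entry["write_addr"])))

print(smc_sites)   # [('0x1001', '0x2000')]: self-modifying write detected
```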
Automated prediction of early blood transfusion and mortality in trauma patients.
BACKGROUND Prediction of blood transfusion needs and mortality for trauma patients in near real time is an unrealized goal. We hypothesized that analysis of pulse oximeter signals could predict blood transfusion and mortality as accurately as conventional vital signs (VSs). METHODS Continuous VS data were recorded for direct admission trauma patients with abnormal prehospital shock index (SI = heart rate [HR] / systolic blood pressure) greater than 0.62. Predictions of transfusion during the first 24 hours and in-hospital mortality using logistical regression models were compared with DeLong's method for areas under receiver operating characteristic curves (AUROCs) to determine the optimal combinations of prehospital SI and HR, continuous photoplethysmographic (PPG), oxygen saturation (SpO2), and HR-related features. RESULTS We enrolled 556 patients; 37 received blood within 24 hours; 7 received more than 4 U of red blood cells in less than 4 hours or "massive transfusion" (MT); and 9 died. The first 15 minutes of VS signals, including prehospital HR plus continuous PPG, and SpO2 HR signal analysis best predicted transfusion at 1 hour to 3 hours, MT, and mortality (AUROC, 0.83; p < 0.03) and no differently (p = 0.32) from a model including blood pressure. Predictions of transfusion based on the first 15 minutes of data were no different using 30 minutes to 60 minutes of data collection. SI plus PPG and SpO2 signal analysis (AUROC, 0.82) predicted 1-hour to 3-hour transfusion, MT, and mortality no differently from pulse oximeter signals alone. CONCLUSION Pulse oximeter features collected in the first 15 minutes of our trauma patient resuscitation cohort, without user input, predicted early MT and mortality in the critical first hours of care better than the currently used VS such as combinations of HR and systolic blood pressure or prehospital SI alone. LEVEL OF EVIDENCE Therapeutic/prognostic study, level II.
The factors influencing members' continuance intentions in professional virtual communities - a longitudinal study
The advance of internet technology has stimulated the rise of professional virtual communities (PVCs). The objective of PVCs is to encourage people to exploit or explore knowledge through websites. However, many virtual communities have failed due to the reluctance of members to continue their participation in these PVCs. Motivated by such concerns, this study formulates and tests a theoretical model to explain the factors influencing individuals’ intention to continue participating in PVCs’ knowledge activities. Drawing from the information system and knowledge management literatures, two academic perspectives related to PVC continuance are incorporated in the integrated model. This model posits that an individual’s intention to stay in a professional virtual community is influenced by a contextual factor and technological factors. Specifically, the antecedents of PVC members’ intention to continue sharing knowledge include social interaction ties capital and satisfaction at post-usage stage. These variables, in turn, are adjusted based on the confirmation of pre-usage expectations. A longitudinal study is conducted with 360 members of a professional virtual community. Results indicate that the contextual factor and technological factors both exert significant impacts on PVC participants’ continuance intentions.
Nail Psoriasis, the unknown burden of disease.
BACKGROUND Psoriasis can be found at several different localizations, which may have varying impact on patients' quality of life (QoL). One of the easily visible and difficult-to-conceal localizations is the nails. OBJECTIVE To gain more insight into the QoL of psoriatic patients with nail psoriasis, and to characterize the patients with nail involvement who are more prone to the impact of the nail alterations caused by psoriasis. METHOD A self-administered questionnaire was distributed to all members (n = 5400) of the Dutch Psoriasis Association. The Dermatology Life Quality Index (DLQI) and the Nail Psoriasis Quality of Life 10 (NPQ10) score were included as QoL measures. Severity of cutaneous lesions was determined using the self-administered psoriasis area and severity index (SAPASI). RESULTS Patients with nail psoriasis scored significantly higher mean scores on the DLQI (4.9 vs. 3.7, P < 0.001) and showed more severe psoriasis (SAPASI, 6.6 vs. 5.3, P < 0.001). Patients with coexistence of nail bed and nail matrix features showed higher DLQI scores compared with patients with involvement of one of the two localizations exclusively (5.3 vs. 4.2 vs. 4.3, P = 0.003). Patients with only nail bed alterations scored significantly higher NPQ10 scores compared with patients with only nail matrix features. Patients with psoriatic arthritis (PsA) and nail psoriasis experienced more impairment compared with nail psoriasis patients without PsA (DLQI 5.5 vs. 4.3, NPQ10 13.3 vs. 7.0). Females scored higher mean scores on all QoL measures. CONCLUSION Greater attention should be paid to the possible impact of nail abnormalities on patients with nail psoriasis, who can be identified by nail psoriasis-specific questionnaires such as the NPQ10. As improving the severity of disease may have a positive influence on QoL, the outcome of QoL measurements should be taken into account when deciding on treatment strategies.
Intellectual property rights in transition: legal structures and concepts in adaptation to technological challenges towards an intellectual property system for the 21st century. A Nordic-European research programme
The following is a presentation of a research project that will be undertaken over the coming years in co-operation between legal faculties primarily in Denmark, Finland, Sweden and the Netherlands, and the Max Planck Institute in Germany; additional participants from other universities in Europe will soon be invited to take part. The programme has been developed in collaboration with Professor Niklas Bruun, Helsinki, and Dr. Annette Kur of the Max Planck Institute for Foreign and International Patent, Copyright and Competition Law in Munich. The programme aims at investigating new tendencies in the field of intellectual property and at formulating solutions for adapting the system as a whole, in the form of a new outward framework if needed.
Adam Smith's Conceptualization of Power, Markets, and Politics
This paper argues that Adam Smith is a, if not the, "founding father figure" of modern social/political economy as well as economics. Smith wrote extensively and insightfully on the subject of power, and thereby on class and stratification in society. This paper explicates four main types of power relations in Smith's analysis, drawing notably on the Wealth of Nations: wealth power, monopoly power, employer power, and political power. Smith's focus on power helps to differentiate his broader vision and rich discourse from that of many contemporary neoclassical writers and sharpens our appreciation of his contributions to social and political economy.
Integrating the Internet of Things with Business Process Management: A Process-aware Framework for Smart Objects
Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.
Securing RFID Applications: Issues, Methods, and Controls
Radio frequency identification (RFID) is an automatic identification (auto-ID) technology developed by the Auto-ID Center at the Massachusetts Institute of Technology, relying on storing and remotely retrieving data using devices called RFID tags and readers (Auto-ID Center, 2002; Doyle, 2004; EPC, 2004b; Finkenzeller, 2000; Shepard, 2005). With RFID technology, physical assets have embedded intelligence that allows them to communicate with each other and with tracking points (Auto-ID Center, 2002; IBM, 2003; VeriSign, 2004). An RFID tag is a small object that can be attached to or incorporated into a physical asset, such as a book, animal, or person. When an RFID tag passes through an electromagnetic zone, it detects the reader's activation signal. The reader decodes the data encoded in the tag's integrated circuit (silicon chip), and the data is passed to the host computer for further processing. As shown in Table 1, there are six classes of RFID tags designed for different applications, including active tags, which require a battery to operate, and passive tags, which have no battery (Auto-ID Center, 2002). RFID tag technology generally dictates the operating parameters of an RFID system. As set forth in ABI, Inc. (2002), besides tag power source, operating frequency is another main factor influencing the type of RFID application; applications can be generally categorized as: (1) low frequency (LF), for access control or point-of-sale (POS) applications; (2) high frequency (HF), for handling baggage or library items in asset management applications; (3) ultra high frequency (UHF), for SCM applications; and (4) microwave frequency, for electronic toll collection applications.
Design of HyQ – a Hydraulically and Electrically Actuated Quadruped Robot
A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90 kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes the design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. Results from the first walking experiments are presented, along with test studies using a previously built prototype leg. 1 INTRODUCTION The development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals have encouraged researchers over the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability, and is therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. Significant examples of such legged robots are: the biped series of HRP robots [2], the Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and LittleDog [7]. In combination with high position gain control and …
Conformal Field Theory
Errata on the second printing. These errata (and any additional ones) are listed on the following Web page: The sign before the last term should be +, not −. The third term within braces on the first line should read $-\eta^{\lambda\nu}\sigma^{\mu\rho}$ ($\mu$ and $\nu$ should be interchanged). In addition, the term on the last line of the equation should change sign.
Integrating Gaussian mixtures into deep neural networks: Softmax layer with hidden variables
In the hybrid approach, neural network output directly serves as hidden Markov model (HMM) state posterior probability estimates. In contrast to this, in the tandem approach neural network output is used as input features to improve classic Gaussian mixture model (GMM) based emission probability estimates. This paper shows that GMM can be easily integrated into the deep neural network framework. By exploiting its equivalence with the log-linear mixture model (LMM), GMM can be transformed to a large softmax layer followed by a summation pooling layer. Theoretical and experimental results indicate that the jointly trained and optimally chosen GMM and bottleneck tandem features cannot perform worse than a hybrid model. Thus, the question “hybrid vs. tandem” simplifies to optimizing the output layer of a neural network. Speech recognition experiments are carried out on a broadcast news and conversations task using up to 12 feed-forward hidden layers with sigmoid and rectified linear unit activation functions. The evaluation of the LMM layer shows recognition gains over the classic softmax output.
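The claimed equivalence is easy to verify numerically for a shared-covariance diagonal GMM: its log-likelihood equals a linear ("softmax") layer followed by log-sum-exp pooling, up to a term independent of the component. The sketch below checks this algebra and is an illustration, not the paper's training setup:

```python
# Check: log-GMM = linear layer + log-sum-exp (summation pooling in log domain).
import numpy as np

def logsumexp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

rng = np.random.default_rng(0)
D, K = 5, 3
x = rng.normal(size=D)
mu = rng.normal(size=(K, D))                    # component means
sigma2 = 0.7                                    # shared isotropic variance
log_prior = np.log(np.array([0.5, 0.3, 0.2]))   # mixture weights

# Direct GMM evaluation.
log_comp = (-0.5 * D * np.log(2 * np.pi * sigma2)
            - 0.5 * ((x - mu) ** 2).sum(axis=1) / sigma2)
direct = logsumexp(log_prior + log_comp)

# Log-linear form: weights and biases of the equivalent linear layer.
W = mu / sigma2
b = log_prior - 0.5 * (mu ** 2).sum(axis=1) / sigma2
const = -0.5 * D * np.log(2 * np.pi * sigma2) - 0.5 * (x ** 2).sum() / sigma2
linear = logsumexp(W @ x + b) + const

print(np.allclose(direct, linear))   # True
```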
Tag, you can see it!: using tags for access control in photo sharing
Users often have rich and complex photo-sharing preferences, but properly configuring access control can be difficult and time-consuming. In an 18-participant laboratory study, we explore whether the keywords and captions with which users tag their photos can be used to help users more intuitively create and maintain access-control policies. We find that (a) tags created for organizational purposes can be repurposed to create efficient and reasonably accurate access-control rules; (b) users tagging with access control in mind develop coherent strategies that lead to significantly more accurate rules than those associated with organizational tags alone; and (c) participants can understand and actively engage with the concept of tag-based access control.
Behavioral Effects of Probation Periods: An Analysis of Worker Absenteeism
The theoretical probation literature shows that individuals have incentives to mimic "good workers" during periods of employment probation. Using absence behavior as an example, this study empirically tests whether such behavioral responses to the incentives of probation periods exist. We find significant responses of white-collar employees and public-sector workers to probation periods: once employment probation is completed and individuals enter regular employment contracts, the probability of work absences takes a discrete jump and is significantly above previous levels.
Real-time pen-and-ink illustration of landscapes
Non-photorealistic rendering has proven to be particularly efficient in conveying and transmitting selected visual information. Our paper presents an NPR rendering pipeline that supports pen-and-ink illustration for, but not limited to, complex landscape scenes in real time. It encompasses a clustering-based simplification framework which enables new approaches to efficient and coherent rendering of stylized silhouettes, hatching and abstract shading. Silhouette stylization is performed in image space, which avoids the explicit computation of connected lines. Further, coherent hatching of the tree foliage is performed using an approximate view-dependent parameterization computed on the fly within the same simplification framework. All NPR algorithms are integrated with photorealistic rendering, allowing seamless transition and combination between a variety of photorealistic and non-photorealistic drawing styles.
ROAR: An architecture for Real-Time Opportunistic Spectrum Access in Cloud-assisted Cognitive Radio Networks
The need for radio frequency (RF) spectrum is growing with the increase in wireless subscriptions and devices, causing scarcity in the total available RF spectrum. Opportunistic spectrum access in cognitive radio networks is emerging as a way of maximizing RF spectrum efficiency, whereby unlicensed Secondary Users (SUs) access idle spectrum bands without causing harmful interference to licensed Primary Users (PUs). All SUs are required to scan/sense the RF spectrum to find idle bands, or to search for idle bands in a spectrum database, so as not to interfere with PUs while using those idle bands. In this paper, we propose a Real-time Opportunistic Spectrum Access in Cloud-assisted Cognitive Radio Networks (ROAR) architecture in which SUs (i.e., USRP wireless devices) are equipped with wide-band (50 MHz - 6 GHz) antennas and GPS units. ROAR uses a cloud computing platform for real-time processing of wide-band data, since SUs' performance is considerably constrained by their limited power, memory and computational capacity. The ROAR architecture has two parts: spectrum sensing to create a database of idle channels, and dynamic spectrum access for opportunistic SU communications using idle channels. For the spectrum database, RF sensors scan/sense RF bands to find idle bands and report the geo-location of each idle band, its channel frequency and a time stamp to the database installed on the distributed cloud platform. For opportunistic spectrum access, each SU interested in opportunistic communication queries the spectrum database to find idle channels. Distributed cloud computing is used to find idle channels for the SU, where geo-location and other demands (e.g., data rate) are checked to determine whether the SU is admissible for a given geo-location and time. If the SU is admissible based on the admissibility criteria, the spectrum server sends the list of channels available for the given location and time to the SU. The SU then chooses the best-suited channel for opportunistic communication. We evaluate the ROAR architecture using numerical results obtained from extensive experiments.
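The query flow can be sketched as a simple geo-temporal filter over sensor reports. The data model below (tuples of frequency, latitude, longitude, expiry) is a hypothetical simplification of ROAR's spectrum database:

```python
# Sketch: SU queries the spectrum database for idle channels near it, now.
import math, time

spectrum_db = [   # (freq_MHz, lat, lon, expires_at) reported by RF sensors
    (512.0, 40.01, -75.00, time.time() + 60),
    (518.0, 40.02, -75.01, time.time() + 60),
    (512.0, 41.50, -75.90, time.time() + 60),
]

def query_idle_channels(lat, lon, radius_km=5.0, now=None):
    now = now or time.time()
    hits = []
    for freq, clat, clon, expires in spectrum_db:
        if expires < now:
            continue  # stale sensing report
        dist = math.hypot(clat - lat, clon - lon) * 111.0  # rough deg -> km
        if dist <= radius_km:
            hits.append(freq)
    return hits

channels = query_idle_channels(40.015, -75.005)
print(channels or "SU not admissible here/now")   # [512.0, 518.0]
```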
Sonoliards: Rendering Audible Sound Spots by Reflecting the Ultrasound Beams
This paper proposes a dynamic acoustic field generation system for spot audio directed toward a particular person indoors. Spot audio techniques have been explored by generating ultrasound beams toward a target person in a certain area; however, everyone in this area can hear the sound. Our system recognizes the position of each person indoors using motion capture and 3D model data of the room. It then controls the direction of a parametric speaker in real time so that sound reaches only a particular person, by calculating the reflection of sound on surfaces such as walls and the ceiling. We calculate the direction of the parametric speaker using a beam tracing method. We present methods for generating the dynamic acoustic field in our system, and we conducted human-factor experiments to evaluate the performance of the proposed system.
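The core reflection geometry can be illustrated with a first-order bounce off a single flat surface (a sketch under that assumption; function and variable names are invented, and the paper's beam tracing handles more general room geometry): aim the speaker at the wall point on the line from the speaker to the listener's mirror image across the wall plane.

```python
# Sketch: first-order reflection aim point for a parametric speaker.
import numpy as np

def reflection_aim_point(speaker, listener, wall_point, wall_normal):
    n = wall_normal / np.linalg.norm(wall_normal)
    # Mirror the listener across the wall plane.
    d = np.dot(listener - wall_point, n)
    image = listener - 2 * d * n
    # Intersect the segment speaker -> image with the wall plane.
    direction = image - speaker
    t = np.dot(wall_point - speaker, n) / np.dot(direction, n)
    return speaker + t * direction

speaker = np.array([0.0, 0.0, 1.5])
listener = np.array([3.0, 2.0, 1.2])
aim = reflection_aim_point(speaker, listener,
                           wall_point=np.array([0.0, 4.0, 0.0]),   # wall y=4
                           wall_normal=np.array([0.0, 1.0, 0.0]))
print(aim)   # point on the wall to steer the ultrasound beam toward
```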
Leakage Current Elimination of Four-Leg Inverter for Transformerless Three-Phase PV Systems
Eliminating the leakage current is one of the most important issues for transformerless three-phase photovoltaic (PV) systems. In this paper, the leakage current elimination of a three-phase four-leg PV inverter is investigated. With the common-mode loop model established, the generation mechanism of the leakage current is clearly identified. Different typical carrier-based modulation methods and their corresponding common-mode voltages are discussed. A new modulation strategy with Boolean logic function is proposed to achieve the constant common-mode voltage for the leakage current reduction. Finally, the different modulation methods are implemented and tested on the TMS320F28335 DSP +XC3S400 FPGA digital control platform. The experimental results verify the effectiveness of the proposed solution.
Desmoglein 1 deficiency results in severe dermatitis, multiple allergies and metabolic wasting
The relative contribution of immunological dysregulation and impaired epithelial barrier function to allergic diseases is still a matter of debate. Here we describe a new syndrome featuring severe dermatitis, multiple allergies and metabolic wasting (SAM syndrome) caused by homozygous mutations in DSG1. DSG1 encodes desmoglein 1, a major constituent of desmosomes, which connect the cell surface to the keratin cytoskeleton and have a crucial role in maintaining epidermal integrity and barrier function. Mutations causing SAM syndrome resulted in lack of membrane expression of DSG1, leading to loss of cell-cell adhesion. In addition, DSG1 deficiency was associated with increased expression of a number of genes encoding allergy-related cytokines. Our deciphering of the pathogenesis of SAM syndrome substantiates the notion that allergy may result from a primary structural epidermal defect.
Hypnotic use for insomnia management in chronic obstructive pulmonary disease.
Chronic obstructive pulmonary disease (COPD) is one of the leading causes of mortality and morbidity worldwide. Because of the chronic nature of the disease, optimal care for patients includes successful treatment of comorbidities that accompany COPD, including insomnia. Insomnia symptoms and associated disruption of sleep are prevalent in COPD patients but treatment with traditional benzodiazepines may compromise respiratory function. This review summarizes the efficacy and safety consideration of current drugs available for the treatment of insomnia in COPD patients including benzodiazepines, non-benzodiazepine receptor agonists such as eszopiclone, zolpidem, and zaleplon, sedating antidepressants such as trazodone, and the melatonin receptor agonist ramelteon.
Combining active and semi-supervised learning for spoken language understanding
In this paper, we describe active and semi-supervised learning methods for reducing the labeling effort for spoken language understanding. In a goal-oriented call routing system, understanding the intent of the user can be framed as a classification problem. State-of-the-art statistical classification systems are trained using a large number of human-labeled utterances, the preparation of which is labor intensive and time consuming. Active learning aims to minimize the number of labeled utterances by automatically selecting the utterances that are likely to be most informative for labeling. The method for active learning we propose, inspired by certainty-based active learning, selects the examples that the classifier is least confident about. The examples that are classified with higher confidence scores (hence not selected by active learning) are exploited using two semi-supervised learning methods. The first method augments the training data by using the machine-labeled classes for the unlabeled utterances. The second method instead augments the classification model trained on the human-labeled utterances with the machine-labeled ones in a weighted manner. We then combine active and semi-supervised learning using selectively sampled and automatically labeled data. This enables us to exploit all collected data and alleviates the data imbalance problem caused by employing only active or only semi-supervised learning. We have evaluated these active and semi-supervised learning methods with a call classification system used for AT&T customer care. Our results indicate that it is possible to reduce human labeling effort significantly.
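A compact sketch of the combination follows. The model, thresholds, and synthetic data are illustrative assumptions, not the paper's call-classification system: least-confident examples go to human labelers, while confidently machine-labeled ones are folded back into training.

```python
# Sketch: certainty-based active learning + semi-supervised augmentation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_lab, y_lab, X_unlab, y_hidden = X[:100], y[:100], X[100:], y[100:]

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
conf = clf.predict_proba(X_unlab).max(axis=1)

# Active learning: the least-confident examples go to human labelers.
to_label = np.argsort(conf)[:50]
X_lab = np.vstack([X_lab, X_unlab[to_label]])
y_lab = np.hstack([y_lab, y_hidden[to_label]])   # oracle labels

# Semi-supervised: confident machine labels augment the training data.
confident = conf > 0.9
X_aug = np.vstack([X_lab, X_unlab[confident]])
y_aug = np.hstack([y_lab, clf.predict(X_unlab)[confident]])

print(LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
      .score(X_unlab, y_hidden))
```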
Relaxed Wasserstein with Applications to GANs
We propose a novel class of statistical divergences called Relaxed Wasserstein (RW) divergence. RW divergence generalizes Wasserstein divergence and is parametrized by a class of strictly convex and differentiable functions. We establish for RW divergence several probabilistic properties, which are critical for the success of Wasserstein divergence. In particular, we show that RW divergence is dominated by Total Variation (TV) and Wasserstein-L divergence, and that RW divergence has continuity, differentiability and duality representation. Finally, we provide a non-asymptotic moment estimate and a concentration inequality for RW divergence. Our experiments on image generation demonstrate that RW divergence is a suitable choice for GANs. The performance of RWGANs with Kullback-Leibler (KL) divergence is competitive with other state-of-the-art GANs approaches. Moreover, RWGANs possess better convergence properties than the existing WGANs with competitive inception scores. To the best of our knowledge, this new conceptual framework is the first to provide not only the flexibility in designing effective GANs scheme, but also the possibility in studying different loss functions under a unified mathematical framework.
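For readers who want the gist in symbols, a plausible form of the construction is sketched below, assuming the relaxation replaces the metric transport cost with the Bregman divergence of the convex function $\phi$; the paper's precise definition should be consulted for details.

```latex
% Hedged sketch: Relaxed Wasserstein via a Bregman transport cost.
% For strictly convex, differentiable phi:
\[
  B_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle,
\]
\[
  W_\phi(\mu, \nu) = \inf_{\gamma \in \Pi(\mu,\nu)}
      \int B_\phi(x, y)\, \mathrm{d}\gamma(x, y),
\]
% where Pi(mu, nu) is the set of couplings with marginals mu and nu.
% Choosing phi(x) = ||x||^2 / 2 recovers the squared-Euclidean transport
% cost, while an entropy-type phi yields a KL-flavored cost.
```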
Predicting Dropout-Prone Students in E-Learning Education System
A high rate of student dropout from courses has been a major problem for many universities and educational institutions that offer online education. If dropout-prone students can be identified at an early stage, the dropout rate can be reduced by providing individualized care to the students at risk. Due to the electronic nature of e-learning courses, various attributes of student progress can be monitored and analyzed automatically over time. In this paper, a technique for predicting students who are prone to dropping out of online courses is proposed; it progressively analyzes a set of per-learner attributes of the students' activities over time. Since a single machine learning technique may fail to accurately identify some dropout-prone students where others may succeed, this technique uses a combination of multiple classifiers (an ensemble of classifiers) for the analysis. The results of the validation found the technique to be promising in predicting dropout-prone students.
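The ensemble idea can be sketched with scikit-learn's soft-voting combiner. The features and the three base models are illustrative stand-ins for the per-learner activity attributes and classifier mix described above:

```python
# Sketch: ensemble of classifiers for dropout-prone student prediction.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Columns could be weekly login counts, quiz scores, forum posts, etc.
X, y = make_classification(n_samples=500, n_features=8, weights=[0.7],
                           random_state=0)   # y=1: dropout-prone
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft")   # combine models that may fail individually
print(ensemble.fit(X_tr, y_tr).score(X_te, y_te))
```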
Influence of benthic and pelagic environmental factors on the distribution of dinoflagellate cysts in surface sediments along the Swedish west coast
The abundance and frequency of dinoflagellate cysts in 19 surface sediment samples from the northern part of the Swedish west coast have been related to physical and chemical characteristics of the sediment, the hydrography of the overlying water column, and plankton species data from the area. The density of cysts varied between 5000 and 101,000 cysts g⁻¹ dw, and the most commonly encountered species were Lingulodinium polyedrum and Protoceratium reticulatum. In all, 46 environmental variables were tested for their relation to dinoflagellate cyst densities, the proportion of autotrophic and heterotrophic taxa, and individual species distribution and frequency. The outcomes of the multivariate analyses, projection to latent structures (PLS) and canonical correspondence analysis (CCA), were consistent with each other and with the actual cyst counts. The density of the total cyst assemblage (>90% autotrophic taxa) was primarily related to surface temperature, macronutrients, and inversely to phytoplankton competitors such as diatoms. The abundance of heterotrophic taxa was governed by the preferences of their prey, i.e. diatom-favourable conditions, and, in most cases, higher proportions of heterotrophic taxa were found at well-mixed sites. Some possible effects of anthropogenic contaminants were also noted. Several taxa showed distinct distribution patterns with respect to the environmental variables. A discrepancy between the species constituting the planktonic and the benthic communities was revealed when data from 6 yr of plankton monitoring were compared with the data on the distribution of dinoflagellate cysts. In particular, cyst-forming species were only a minor part of the plankton, suggesting that these dinoflagellates spend much of their life in the sediments.
Dude, where's my card?: RFID positioning that works with multipath and non-line of sight
RFIDs are emerging as a vital component of the Internet of Things. As of 2012, billions of RFIDs had been deployed to locate equipment, track drugs, tag retail goods, etc. Current RFID systems, however, can only identify whether a tagged object is within radio range (which could be up to tens of meters), but cannot pinpoint its exact location. Past proposals for addressing this limitation rely on a line-of-sight model and hence perform poorly when faced with multipath effects or non-line-of-sight conditions, which are typical in real-world deployments. This paper introduces the first fine-grained RFID positioning system that is robust to multipath and non-line-of-sight scenarios. Unlike past work, which considers multipath as detrimental, our design exploits multipath to accurately locate RFIDs. The intuition underlying our design is that nearby RFIDs experience a similar multipath environment (e.g., reflectors in the environment) and thus exhibit similar multipath profiles. We capture and extract these multipath profiles by using a synthetic aperture radar (SAR) created via antenna motion. We then adapt dynamic time warping (DTW) techniques to pinpoint a tag's location. We built a prototype of our design using USRP software radios. Results from a deployment of 200 commercial RFIDs in our university library demonstrate that the new design can locate misplaced books with a median accuracy of 11 cm.
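A toy rendering of the matching step (our own simplification: real multipath profiles come from SAR measurements over antenna motion; here they are plain 1-D arrays compared with a basic DTW distance):

```python
# Toy sketch of profile matching via dynamic time warping (DTW):
# locate an unknown tag by finding the reference tag whose multipath
# profile warps onto it most cheaply. Profiles here are synthetic 1-D
# arrays standing in for SAR-derived power profiles.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

reference_profiles = {  # known tag locations -> measured profiles
    (0.0, 0.0): np.sin(np.linspace(0.0, 3.0, 50)),
    (0.5, 0.2): np.sin(np.linspace(0.4, 3.4, 50)),
}
unknown = np.sin(np.linspace(0.38, 3.38, 50)) + 0.01  # near the second tag
best = min(reference_profiles,
           key=lambda p: dtw_distance(unknown, reference_profiles[p]))
print("estimated location:", best)
```

DTW tolerates small shifts and stretches between profiles, which is why it suits comparing multipath signatures measured at slightly different positions.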
Optimization of astaxanthin production by Phaffia rhodozyma through factorial design and response surface methodology.
A sequential methodology based on the application of three types of experimental designs was used to optimize astaxanthin production by the mutant strain 25-2 of Phaffia rhodozyma in shake-flask cultures. The first design employed was a 2⁵ factorial design in which the factors studied were pH, temperature, percentage of inoculum, and carbon and nitrogen concentrations, each at two levels. This design was performed in two medium types: rich YM medium and a minimal medium based on date juice (Yucca medium). This first design identified the most important factors (carbon concentration and temperature), which were used in the second experimental strategy: the method of steepest ascent, applied in order to rapidly approach the optimum. Finally, a second-order response surface design was applied using temperature and carbon concentration as factors. The optimal conditions stimulating the highest astaxanthin production were: temperature of 19.7 °C, carbon concentration of 11.25 g l⁻¹, pH of 6.0, 5% inoculum, and nitrogen concentration of 0.5 g l⁻¹. Under these conditions the astaxanthin production was 8,100 μg l⁻¹, 92% higher than production under the initial conditions.
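A compact sketch of the two modelling steps, enumeration of the 2⁵ factorial and a second-order surface fit (the response values are synthetic placeholders, not the paper's data):

```python
# Sketch: enumerate a 2^5 full factorial, then fit a second-order
# response surface in the two surviving factors (temperature, carbon).
import itertools
import numpy as np

# 2^5 factorial: each factor at low (-1) / high (+1) coded levels
factors = ["pH", "temperature", "inoculum", "carbon", "nitrogen"]
design = list(itertools.product([-1, 1], repeat=5))
print(len(design), "runs")  # 32 runs in the full factorial

# Second-order surface y = b0 + b1*T + b2*C + b3*T^2 + b4*C^2 + b5*T*C
T = np.array([-1, -1, 0, 1, 1, 0, -1, 1, 0], float)   # coded temperature
C = np.array([-1, 1, 0, -1, 1, -1, 0, 0, 1], float)   # coded carbon
y = 8.0 - 0.5 * T**2 - 0.3 * C**2 + 0.1 * T * C       # synthetic yield
A = np.column_stack([np.ones_like(T), T, C, T**2, C**2, T * C])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```

The negative quadratic coefficients make the fitted surface concave, so setting its gradient to zero locates the interior optimum, as done for temperature and carbon concentration in the study.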
Cardiopulmonary resuscitation
Diffuse hypoxic brain damage is rather common in patients successfully resuscitated in the prehospital setting by the Emergency Medical System. The extent of reversible and irreversible neuropsychological lesions associated with hypoxia depends on age, preexisting diseases (e.g. cerebrovascular insufficiency), and the duration of cardiorespiratory arrest. In patients regaining consciousness despite persistent neuropsychological lesions, the process of personality reorganization has been little studied. Personality reorganization was analysed in 20 patients with initially deep coma after successful prehospital resuscitation (estimated mean duration of cardiac arrest 18 min), followed by slow improvement of cerebral function and regaining of "consciousness". Common features were reversible loss of short-term memory and various kinds of acute exogenic reaction. Patients with possibly preexisting cerebral atrophy showed an apparently improved neuropsychological recovery. In 2 patients, personality reorganization showed striking similarities with the earliest stages of childhood development. Careful analysis of personality reorganization may have important implications for individualized rehabilitation strategies in patients after resuscitation.
A review on transfer learning for brain-computer interface classification
Due to the non-stationary nature and poor signal-to-noise ratio (SNR) of brain signals, repeated time-consuming calibration is one of the biggest problems for today's brain-computer interfaces (BCIs). In order to reduce calibration time, many transfer learning methods have been proposed to extract discriminative or stationary information from other subjects or prior sessions for the target classification task. In this paper, we review the existing transfer learning methods used for BCI classification problems and organize them into three cases based on their transfer strategies. In addition, we list the datasets used in these BCI studies.
A variable buoyancy system and a recovery system developed for a deep-sea AUV Qianlong I
Qianlong I is a new-generation deep-sea Autonomous Underwater Vehicle (AUV): a 6000 m rated AUV for the detection of deep-sea manganese nodules, based on the CR01 and CR02 deep-sea AUVs and developed by the Shenyang Institute of Automation, Chinese Academy of Sciences, beginning in 2010. Qianlong I was tested in Thousand Island Lake in Zhejiang Province, China, from November 2012 to March 2013; sea trials followed in the South China Sea during April 20 to May 2, 2013; and the first ocean application was completed in October 2013. This paper describes two key problems encountered in the development of Qianlong I: the development of the launch and recovery system and of the variable buoyancy system. Results from the recent lake and sea trials are presented, and future missions and development plans are discussed.
A Simple Equivalent Circuit Model for Plasma Dipole Antenna
The plasma antenna is an emerging technology that utilizes ionized gas, rather than metal, as the conducting medium. It is often convenient to represent the input impedance of such an antenna by a lumped-element equivalent circuit. The input impedance of the plasma dipole antenna is deduced using the finite integration technique. A five-lumped-element equivalent circuit for the plasma dipole antenna, as it varies with plasma frequency, is investigated and optimized using a genetic algorithm (GA). The effects of the plasma frequency and the collision frequency of the ionized gas on the input impedance of the plasma dipole antenna are studied with the help of the equivalent circuit model. Another equivalent circuit is synthesized using a rational function and the GA, and the Cauer realization method is used to deduce a new lumped-element equivalent circuit.
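To make the fitting step concrete, here is a hedged sketch that uses scipy's differential evolution (a GA-like stochastic optimizer, standing in for the paper's GA) on an assumed five-element topology; both the topology and the "measured" impedance data are illustrative assumptions, not the paper's model:

```python
# Sketch: fit a five-element equivalent circuit to sampled antenna
# input impedance with a stochastic optimizer. The series R-L branch
# in parallel with a series R-L-C branch is an assumed topology.
import numpy as np
from scipy.optimize import differential_evolution

w = 2 * np.pi * np.linspace(50e6, 500e6, 40)  # angular frequencies

def z_model(p, w):
    R0, L0, R1, L1, C1 = p
    za = R0 + 1j * w * L0                      # series R-L branch
    zb = R1 + 1j * w * L1 + 1 / (1j * w * C1)  # series R-L-C branch
    return za * zb / (za + zb)                 # two branches in parallel

true_p = [30.0, 40e-9, 5.0, 20e-9, 8e-12]
z_meas = z_model(true_p, w)                    # stand-in for measurements

def cost(p):
    return np.sum(np.abs(z_model(p, w) - z_meas) ** 2)

bounds = [(1, 100), (1e-9, 100e-9), (0.1, 50), (1e-9, 100e-9), (1e-12, 50e-12)]
res = differential_evolution(cost, bounds, seed=1, tol=1e-10)
print("recovered elements:", res.x)
```

The same least-squares cost over a frequency sweep is what any population-based optimizer, GA included, would minimize when matching the circuit's impedance to the antenna's.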
Biological Water or Rather Water in Biology?
There has been a lot of discussion about biological water recently. Books [1-2] and reviews [3-5] were written in the past years, and a special issue of the Journal of Chemical Physics was dedicated to the topic in 2014 [6]. Interestingly, papers on biological water are mostly confined to chemistry and physics journals, being remarkably rare in the biological or biochemical literature. So what actually is the biological water we physical chemists are so concerned about? Definitions vary from "soft" to "hard" ones. A soft definition describes biological water as any water around a biomolecule (i.e., protein, DNA or RNA, or a piece of a cellular membrane) that has properties distinct from those of the aqueous bulk. Harder definitions operate with mutual tailoring of the thermodynamic and dynamic properties of the biomolecule and the surrounding waters [11-15], and even with covering of the protein by a shell of functional water molecules which can "slave" its motions [5] and propagate to a considerable distance [16]. Finally, the hardest recent interpretations invoke, in an extreme form that can hardly be considered strictly scientific any more, the notion of cellular water as a distinct species which is itself able to carry the biological functionalities [17-20]. There is little doubt that a layer of non-bulk water exists around a biomolecule [7]. The relevant question is how thick such a layer is and to what extent its properties differ from those of the aqueous bulk. Let us first get a semi-quantitative estimate of the thickness, focusing on electrostatic interactions, which dominate in water. The range of these interactions is governed by the Debye screening length [21], which amounts to less than 1 nm at the physiological ionic strength of about 150 mM. The physiological solution thus has a remarkable ability to screen out electrostatic interactions, which could hardly propagate beyond some 1-3 water molecules from the surface of the biomolecule. Still, several solvent layers can represent a non-negligible fraction of the available water in the crowded cellular environment [4]. Moreover, individual biomolecular functional groups can come close enough to each other that topological characteristics of the protein or DNA surface can in principle combine with properties of the interfacial water molecules in between these groups [22-23]. For the above reasons, the relevant question is not only how many water molecules are influenced by the biomolecule, but also how much. Leaving aside the small …
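The screening estimate quoted above is easy to reproduce. For a monovalent (1:1) electrolyte in water at room temperature, the Debye length reduces to a familiar rule of thumb:

$$
\lambda_D = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_B T}{2 N_A e^2 I}} \;\approx\; \frac{0.304\ \text{nm}}{\sqrt{I\,[\text{M}]}},
$$

so at physiological ionic strength $I \approx 0.15$ M one gets $\lambda_D \approx 0.304/\sqrt{0.15} \approx 0.8$ nm, i.e. roughly the width of two to three water molecules, consistent with the 1-3 molecular layers cited in the text.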
Automatic Text Decomposition Using Text Segments and Text Themes
With the widespread use of full-text information retrieval, passage-retrieval techniques are becoming increasingly popular. Larger texts can then be replaced by important text excerpts, thereby simplifying the retrieval task and improving retrieval effectiveness. Passage-level evidence about the use of words in local contexts is also useful for resolving language ambiguities and improving retrieval output. Two main text decomposition strategies are introduced in this study, including a chronological decomposition into text segments, and a semantic decomposition into text themes. The interaction between text segments and text themes is then used to characterize text structure, and to formulate specifications for information retrieval, text traversal, and text summarization. KEYWORDS: text structuring, text decomposition, segments, themes, information retrieval, passage retrieval, text summarization. [Figure: text relationship map for six encyclopedia articles on nuclear energy — 22387 Thermonuclear Fusion, 19199 Radioactive Fallout, 17016 Nuclear Weapons, 17012 Nuclear Energy, 11830 Hydrogen Bomb, 8907 Fission, Nuclear — drawn with similarity threshold 0.01; branch similarities 0.33, 0.38, 0.57, 0.54.]
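The relationship map summarized in the figure placeholder is built from pairwise vector similarities between weighted term vectors; a minimal sketch (toy texts and plain term-frequency weights, standing in for the paper's automatically indexed vectors):

```python
# Sketch of a text relationship map: represent each text as a vector
# of weighted terms, link pairs whose normalized inner-product
# similarity exceeds a threshold (0.01 in the paper's example map).
from collections import Counter
import math

docs = {  # toy stand-ins for encyclopedia articles
    "Nuclear Energy": "nuclear energy fission reactor power",
    "Nuclear Weapons": "nuclear weapons fission bomb fallout",
    "Hydrogen Bomb": "hydrogen bomb fusion thermonuclear weapons",
}

def vectorize(text):
    return Counter(text.split())  # term -> weight (plain term frequency)

def similarity(u, v):
    dot = sum(u[t] * v[t] for t in u)            # inner product of weights
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv)                       # 0 = disjoint, 1 = identical

names = list(docs)
vecs = {n: vectorize(docs[n]) for n in names}
for i, a in enumerate(names):
    for b in names[i + 1:]:
        s = similarity(vecs[a], vecs[b])
        if s > 0.01:                             # draw a branch in the map
            print(f"{a} -- {b}: {s:.2f}")
```

Each printed pair corresponds to one branch of the map; raising the threshold prunes weak links and exposes the text's thematic structure.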
Debiasing Decisions : Improved Decision Making With a Single Training Intervention
From failures of intelligence analysis to misguided beliefs about vaccinations, biased judgment and decision making contributes to problems in policy, business, medicine, law, education, and private life. Early attempts to reduce decision biases with training met with little success, leading scientists and policy makers to focus on debiasing by using incentives and changes in the presentation and elicitation of decisions. We report the results of two longitudinal experiments that found medium to large effects of one-shot debiasing training interventions. Participants received a single training intervention, played a computer game or watched an instructional video, which addressed biases critical to intelligence analysis (in Experiment 1: bias blind spot, confirmation bias, and fundamental attribution error; in Experiment 2: anchoring, representativeness, and social projection). Both kinds of interventions produced medium to large debiasing effects immediately (games ≥ −31.94% and videos ≥ −18.60%) that persisted at least 2 months later (games ≥ −23.57% and videos ≥ −19.20%). Games that provided personalized feedback and practice produced larger effects than did videos. Debiasing effects were domain general: bias reduction occurred across problems in different contexts, and problem formats that were taught and not taught in the interventions. The results suggest that a single training intervention can improve decision making. We suggest its use alongside improved incentives, information presentation, and nudges to reduce costly errors associated with biased judgments and decisions.
Has estimation of numbers of cases of pandemic influenza H1N1 in England in 2009 provided a useful measure of the occurrence of disease?
BACKGROUND Surveillance indicators of influenza activity have generally provided robust comparative trend data for England. These indicators became less reliable, however, for monitoring trends in activity, or for comparisons with previous years, during the influenza pandemic in 2009 because of changes in the perception of risk and changes in the systems of healthcare delivery. An approach was developed to estimate the number of cases of influenza-like illness (ILI) occurring because of infection with the pandemic influenza virus. METHODS AND FINDINGS The number of cases was estimated each week in England on the basis of the total number of patients consulting healthcare services with ILI, estimates of the proportion of individuals in the community experiencing an ILI who sought health care, and the proportion of those tested who were positive on laboratory testing. Almost 800,000 symptomatic ILI cases (range 375,000-1.6 million) were estimated to have occurred over the course of the two waves of pandemic activity in England, with more cases estimated in the second wave than in the first. CONCLUSIONS These results underestimate the total number of infections, as they include neither asymptomatic infections nor mild illnesses not meeting the definition of a case of ILI. Nevertheless, the case number estimates provide a useful indicator of the trend in influenza activity, and the weekly data were extensively used in media reports. Although surveillance methods differ between countries, the approach of synthesising available data sources to produce an overall estimate of case numbers could be applied more widely to provide comparative data.
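The estimation logic reduces to simple arithmetic each week; a hedged sketch with made-up inputs (the report's actual weekly figures differ):

```python
# Weekly case estimate: consultations, scaled up by the fraction of
# ill people who seek care, scaled down by the fraction of tested ILI
# patients positive for pandemic flu. All numbers are illustrative.
consultations = 40_000   # ILI consultations recorded that week
p_seek_care = 0.5        # proportion of community ILI cases consulting
p_positive = 0.30        # proportion of tested ILI positive for H1N1

estimated_cases = consultations / p_seek_care * p_positive
print(f"estimated symptomatic cases: {estimated_cases:,.0f}")  # 24,000
```

Summing these weekly estimates over both waves is what yields the season total quoted in the abstract.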
A noise optimization formulation for CMOS low-noise amplifiers with on-chip low-Q inductors
A noise optimization formulation for a CMOS low-noise amplifier (LNA) with on-chip low-Q inductors is presented, which incorporates the series resistances of the on-chip low-Q inductors into the noise optimization procedure explicitly. A 10-GHz LNA is designed and implemented in a standard mixed-signal/RF bulk 0.18-μm CMOS technology based on this formulation. The measurement results, with a power gain of 11.25 dB and a noise figure (NF) of 2.9 dB, show the lowest NF among LNAs using bulk 0.18-μm CMOS at this frequency.
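To see why the series resistance must enter the optimization, note that an on-chip inductor of inductance $L$ and quality factor $Q$ at the operating frequency $\omega_0$ has an equivalent series resistance, and when that inductor sits in series with the signal path its thermal noise adds directly to the noise factor. Schematically (our simplification, not the paper's exact formulation):

$$
R_L = \frac{\omega_0 L}{Q}, \qquad \Delta F \approx \frac{R_L}{R_s}.
$$

For example, a hypothetical 1 nH inductor with $Q = 5$ at 10 GHz gives $R_L \approx 12.6\ \Omega$, i.e. $\Delta F \approx 0.25$ for a 50 $\Omega$ source, far too large to neglect, which motivates folding the inductor resistances into the optimization rather than treating the inductors as ideal.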
Labial fusion and asymptomatic bacteriuria
Thirty-three female children with labial fusion were screened for bacteriuria, which was defined as the growth of a single micro-organism with ≥100,000 colony-forming units/ml (≥100 × 10⁶ colonies/l) in a properly collected urine specimen. Six girls were found to have bacteriuria. In contrast, none of the 33 girls in a control group had bacteriuria. We recommend that a urine culture be performed in girls with labial fusion and that all girls with bacteriuria be checked for labial fusion.
The role of taurine in improving neural stem cells proliferation and differentiation.
OBJECTIVES Taurine is one of the most abundant amino acids in the central nervous system and has important functions in the promotion of brain development. This study aimed to determine the mechanistic role of taurine in improving neuronal proliferation, stem cell proliferation, and neural differentiation. METHODS The data for this review were primarily retrieved from the PubMed database from 1985 to 2015 in English. The search string included the keywords taurine, brain development, neuronal, stem cell, proliferation, differentiation, and others. Relevant publications were identified, retrieved, and reviewed. RESULTS This review introduces the source, function, and mechanisms of taurine in brain development and provides additional detail regarding the mechanistic role of taurine in improving neuronal proliferation, stem cell proliferation, and neural differentiation. Many studies concerning these aspects are discussed. CONCLUSIONS Taurine plays an important role in brain development, including neuronal proliferation, stem cell proliferation, and differentiation, via several mechanisms. Taurine can be directly used in clinical applications to improve brain development because it has no toxic effects on humans.
Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening
Purpose: To develop a computer-based method for the automated assessment of image quality in the context of diabetic retinopathy (DR) to guide the photographer. Methods: A deep learning framework was trained to grade the images automatically. A large representative set of 7000 color fundus images, obtained from EyePACS (http://www.eyepacs.com/) and made available by the California Healthcare Foundation, was used for the experiment. Three retinal image analysis experts were employed to categorize these images into 'accept' and 'reject' classes based on a precise definition of image quality in the context of DR. The deep learning framework was trained using 3428 images. Results: A total of 3572 images were used for the evaluation of the proposed method. The method shows an accuracy of 100% in successfully categorizing 'accept' and 'reject' images. Conclusion: Image quality is an essential prerequisite for the grading of DR. In this paper we have proposed a deep learning based automated image quality assessment method in the context of DR.
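A minimal sketch of a binary accept/reject quality classifier of the kind described (the architecture below is an illustrative assumption, not the paper's network):

```python
# Sketch: tiny CNN producing accept/reject logits for fundus images,
# trained with cross-entropy. Shapes and layers are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                     # logits: accept vs. reject
)
images = torch.randn(8, 3, 128, 128)      # batch of fundus crops (random)
labels = torch.randint(0, 2, (8,))        # expert accept/reject labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                           # one training step's gradients
print(float(loss))
```

In deployment such a classifier runs at capture time, so the photographer can immediately retake a 'reject' image.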
A brain-controlled exoskeleton with cascaded event-related desynchronization classifiers
This paper describes a brain-machine interface for the online control of a powered lower-limb exoskeleton based on electroencephalogram (EEG) signals recorded over the user's sensorimotor cortical areas. We train a binary decoder that can distinguish two different mental states, which is applied in a cascaded manner to efficiently control the exoskeleton in three different directions: walk front, turn left, and turn right. This is realized by first classifying the user's intention to walk front or to change direction. If the user decides to change direction, a subsequent classification is performed to decide between turning left and right. The user's mental command is conditionally executed considering the possibility of obstacle collision. All five subjects were able to successfully complete the 3-way navigation task using brain signals while mounted in the exoskeleton. We observed on average a 10.2% decrease in overall task completion time compared to the baseline protocol.
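The cascade itself is simple control logic once the binary decoder exists; a hedged sketch (decoder internals are abstracted behind a placeholder, and the obstacle handling is our simplification of the conditional execution described above):

```python
# Sketch of the cascaded decision rule: one binary decoder applied
# twice. A real decoder would classify event-related desynchronization
# features from sensorimotor EEG; here it is a random stub.
import random

def decode(eeg_window, question):
    """Binary decoder stub: True/False answer to the given question."""
    return random.random() > 0.5

def command(eeg_window, path_blocked=False):
    if decode(eeg_window, "walk_vs_turn"):          # stage 1: walk or turn?
        return "stop" if path_blocked else "walk front"  # obstacle check
    # stage 2: reached only when the user chose to change direction
    if decode(eeg_window, "left_vs_right"):
        return "turn left"
    return "turn right"

print(command(eeg_window=None))
```

Cascading two binary decisions avoids training a harder three-class decoder while still covering all three movement directions.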
Arguments about Arguments
We survey the contents of Finocchiaro's papers collected in Arguments about Arguments, pointing out, where appropriate, their expected interest for readers of Philosophy of the Social Sciences. The papers include essays about argument theory and reasoning, the nature of fallacies and fallaciousness, critiques of noteworthy contributions to argumentation theory, and historical essays on scientific thinking.
Architectural Mismatch or Why it's hard to build systems out of existing parts
Many would argue that future breakthroughs in software productivity will depend on our ability to combine existing pieces of software to produce new applications. An important step towards this goal is the development of new techniques to detect and cope with mismatches in the assembled parts. Some problems of composition are due to low-level issues of interoperability, such as mismatches in programming languages or database schemas. However, in this paper we highlight a different, and in many ways more pervasive, class of problem: architectural mismatch. Specifically, we use our experience in building a family of software design environments from existing parts to illustrate a variety of types of mismatch that center around the assumptions a reusable part makes about the structure of the application in which it is to appear. Based on this experience we show how an architectural view of the mismatch problem exposes some fundamental, thorny problems for software composition and suggests possible research avenues needed to solve them.
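A tiny illustration (ours, not the paper's) of one classic mismatch of this kind: two reusable parts that each assume they own the main control loop, so neither can simply call the other as a subroutine.

```python
# Two parts with clashing assumptions about who drives execution.
class EventLoopToolkit:
    def run(self, on_event):          # assumes it owns control forever
        for event in ["click", "key", "quit"]:
            on_event(event)

class BatchAnalyzer:
    def run(self, items):             # also assumes it drives execution
        return [item.upper() for item in items]

# Composing them forces an adapter: the toolkit keeps control and the
# analyzer is demoted to per-event callbacks, i.e., one part's
# architectural assumption is rewritten to fit the other's.
analyzer = BatchAnalyzer()
EventLoopToolkit().run(lambda e: print(analyzer.run([e])))
```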
Ulcerated midline nodule of the hard palate.
CLINICAL PRESENTATION A 14-year-old boy was referred to the Oral Medicine Clinic, School of Dentistry, Universidade Federal de Minas Gerais, for evaluation of a red nodular lesion of the palate. The lesion, which measured 10 × 10 × 5 mm, was located in the midline of the hard palate and was firm to palpation. A focal area of ulceration was noted in the center of the lesion (Fig. 1). The patient reported that the lesion had been present for 2 months and was somewhat painful, presumably secondary to the ulceration. No lymph nodes were palpable. The patient's medical history was otherwise noncontributory. No osseous alterations were noted on occlusal radiograph (Fig. 2) or computerized tomography scanning.