Health Care Utilization Patterns Among High-Cost VA Patients With Mental Health Conditions.
OBJECTIVE To inform development of intensive management programs for high-cost patients, this study investigated the relationship between psychiatric diagnoses and patterns of health care utilization among high-cost patients in the Department of Veterans Affairs (VA) health care system. METHODS The costliest 5% of patients who received care in the VA in fiscal year 2010 were assigned to five mutually exclusive hierarchical groups on the basis of diagnosis codes: no mental health condition, serious mental illness, substance use disorder, posttraumatic stress disorder (PTSD), and depression. Multivariable linear regression was used to examine associations between diagnostic groups and use of mental health and non-mental health care and costs of care, with adjustment for sociodemographic characteristics. The proportion of costs generated by mental health care was estimated for each group. RESULTS Among 261,515 high-cost VA patients, rates of depression, substance use disorder, PTSD, and serious mental illness were 29%, 20%, 17%, and 13%, respectively. Individuals in the serious mental illness and substance use disorder groups were younger and had fewer chronic general medical conditions and higher adjusted rates of mental health care utilization; they also had a greater proportion of costs generated by mental health care (41% and 31%, respectively) compared with individuals in the PTSD and depression groups (18% and 11%, respectively). CONCLUSIONS Optimal management of high-risk, high-cost patients may require stratification by psychiatric diagnoses, with integrated care models for patients with multiple chronic conditions and comorbid mental health conditions and intensive mental health services for patients whose primary needs stem from mental health conditions.
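The mutually exclusive hierarchical grouping described in the methods can be sketched in a few lines; note that the precedence order among the four psychiatric groups below is an assumption chosen for illustration, not taken from the study.

```python
# Hypothetical sketch of assigning each patient to exactly one diagnostic
# group. The precedence order (serious mental illness > substance use
# disorder > PTSD > depression) is an assumption, not the paper's stated rule.
HIERARCHY = [
    "serious mental illness",
    "substance use disorder",
    "posttraumatic stress disorder",
    "depression",
]

def assign_group(diagnoses):
    """Return the single hierarchical group for a patient's diagnosis set."""
    for group in HIERARCHY:
        if group in diagnoses:
            return group
    return "no mental health condition"
```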
Interstitial solute transport in 3D reconstructed neuropil occurs by diffusion rather than bulk flow.
The brain lacks lymph vessels and must rely on other mechanisms for clearance of waste products, including amyloid [Formula: see text] that may form pathological aggregates if not effectively cleared. It has been proposed that flow of interstitial fluid through the brain's interstitial space provides a mechanism for waste clearance. Here we compute the permeability and simulate pressure-mediated bulk flow through 3D electron microscope (EM) reconstructions of interstitial space. The space was divided into sheets (i.e., space between two parallel membranes) and tunnels (where three or more membranes meet). Simulation results indicate that even for larger extracellular volume fractions than what is reported for sleep and for geometries with a high tunnel volume fraction, the permeability was too low to allow for any substantial bulk flow at physiological hydrostatic pressure gradients. For two different geometries with the same extracellular volume fraction the geometry with the most tunnel volume had [Formula: see text] higher permeability, but the bulk flow was still insignificant. These simulation results suggest that even large molecule solutes would be more easily cleared from the brain interstitium by diffusion than by bulk flow. Thus, diffusion within the interstitial space combined with advection along vessels is likely to substitute for the lymphatic drainage system in other organs.
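The pressure-mediated bulk flow simulated above is governed by Darcy's law for porous media; as a reminder of why a low permeability rules out substantial flow, a minimal statement (symbol names chosen here, not taken from the paper):

```latex
% Darcy's law for flow through a porous medium:
%   q   : volumetric flux (bulk flow velocity)
%   k   : permeability of the interstitial geometry
%   \mu : viscosity of the interstitial fluid
%   p   : hydrostatic pressure
\mathbf{q} = -\frac{k}{\mu}\,\nabla p
```

For fixed physiological pressure gradient and viscosity, the flux scales linearly with the permeability, so a permeability that is orders of magnitude too small makes bulk flow negligible regardless of the sheet/tunnel geometry.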
Improved accuracy in estimation of left ventricular function parameters from QGS software with Tc-99m tetrofosmin gated-SPECT: a multivariate analysis.
UNLABELLED The purpose of this study was to verify whether multivariate analysis could improve the accuracy of left ventricular function parameters estimated from gated-SPECT. METHODS Ninety-six patients with cardiovascular diseases were studied. Gated-SPECT with the QGS software and left ventriculography (LVG) were performed to obtain left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV). Multivariate analyses were then performed to derive empirical formulas for predicting these parameters. The calculated values of the left ventricular parameters were compared with those obtained directly from the QGS software and LVG. RESULTS Multivariate analysis improved the accuracy of estimating LVEF, EDV and ESV. A statistically significant improvement was seen in LVEF (from r = 0.6965 to r = 0.8093, p < 0.05). Although not statistically significant, improvements in correlation coefficients were seen in EDV (from r = 0.7199 to r = 0.7595, p = 0.2750) and ESV (from r = 0.5694 to r = 0.5871, p = 0.4281). CONCLUSION The empirical equations derived with multivariate analysis improved the accuracy of estimating LVEF from gated-SPECT with the QGS software.
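The before/after correlation comparison in the abstract can be illustrated with a small sketch; the data below are synthetic and the covariate hypothetical, intended only to show how a multivariable fit is compared against the direct estimate.

```python
# Illustrative sketch (synthetic data, NOT the study's): fit a multivariable
# linear model for LVEF and compare the Pearson r of the raw estimate vs.
# the multivariable prediction against the reference (LVG) values.
import numpy as np

rng = np.random.default_rng(0)
n = 96
lvg_ef = rng.uniform(30, 75, n)          # reference LVEF from LVG
qgs_ef = lvg_ef + rng.normal(0, 8, n)    # noisy single-predictor estimate
edv = rng.uniform(60, 200, n)            # hypothetical extra covariate

def pearson_r(a, b):
    return np.corrcoef(a, b)[0, 1]

# Multivariable least squares: LVEF ~ QGS-EF + EDV + intercept
X = np.column_stack([qgs_ef, edv, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, lvg_ef, rcond=None)
pred = X @ coef

r_before = pearson_r(qgs_ef, lvg_ef)
r_after = pearson_r(pred, lvg_ef)
# In-sample, the fitted model can only match or improve the correlation.
```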
Towards the ictalurid catfish transcriptome: generation and analysis of 31,215 catfish ESTs
EST sequencing is one of the most efficient means for gene discovery and molecular marker development, and can be additionally utilized in both comparative genome analysis and evaluation of gene duplications. While much progress has been made in catfish genomics, large-scale EST resources have been lacking. The objectives of this project were to construct primary cDNA libraries, to conduct initial EST sequencing to generate catfish EST resources, and to obtain baseline information about highly expressed genes in various catfish organs to provide a guide for the production of normalized and subtracted cDNA libraries for large-scale transcriptome analysis in catfish. A total of 17 cDNA libraries were constructed including 12 from channel catfish (Ictalurus punctatus) and 5 from blue catfish (I. furcatus). A total of 31,215 ESTs, with average length of 778 bp, were generated including 20,451 from the channel catfish and 10,764 from blue catfish. Cluster analysis indicated that 73% of channel catfish and 67% of blue catfish ESTs were unique within the project. Over 53% and 50% of the channel catfish and blue catfish ESTs, respectively, had significant similarities to known genes. All ESTs have been deposited in GenBank. Evaluation of the catfish EST resources demonstrated their potential for molecular marker development, comparative genome analysis, and evaluation of ancient and recent gene duplications. Subtraction of abundantly expressed genes in a variety of catfish tissues, identified here, will allow the production of low-redundancy libraries for in-depth sequencing. The sequencing of 31,215 ESTs from channel catfish and blue catfish has significantly increased the EST resources in catfish. The EST resources should provide the potential for microarray development, polymorphic marker identification, mapping, and comparative genome analysis.
Testing a model for the genetic structure of personality: a comparison of the personality systems of Cloninger and Eysenck.
Genetic analysis of data from 2,680 adult Australian twin pairs demonstrated significant genetic contributions to variation in scores on the Harm Avoidance, Novelty Seeking, and Reward Dependence scales of Cloninger's Tridimensional Personality Questionnaire (TPQ), accounting for between 54% and 61% of the stable variation in these traits. Multivariate genetic triangular decomposition models were fitted to determine the extent to which the TPQ assesses the same dimensions of heritable variation as the revised Eysenck Personality Questionnaire. These analyses demonstrated that the personality systems of Eysenck and Cloninger are not simply alternative descriptions of the same dimensions of personality, but rather that each provides an incomplete description of the structure of heritable personality differences.
Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features
Observational methods are fundamental to the study of human behavior in the behavioral sciences. For example, in the context of research on intimate relationships, psychologists’ hypotheses are often empirically tested by video recording interactions of couples and manually coding relevant behaviors using standardized coding systems. This coding process can be time-consuming, and the resulting coded data may have a high degree of variability because of a number of factors (e.g., inter-evaluator differences). These challenges provide an opportunity to employ engineering methods to aid in automatically coding human behavioral data. In this work, we analyzed a large corpus of married couples’ problem-solving interactions. Each spouse was manually coded with multiple session-level behavioral observations (e.g., level of blame toward other spouse), and we used acoustic speech features to automatically classify extreme instances for six selected codes (e.g., “low” vs. “high” blame). Specifically, we extracted prosodic, spectral, and voice quality features to capture global acoustic properties for each spouse and trained gender-specific and gender-independent classifiers. The best overall automatic system correctly classified 74.1% of the instances, an improvement of 3.95% absolute (5.63% relative) over our previously reported best results. We compare performance for the various factors: across codes, gender, classifier type, and feature type.
Broadband and High-Gain SIW-Fed Antenna Array for 5G Applications
A broadband and high-gain substrate integrated waveguide (SIW)-fed antenna array is demonstrated for 5G applications. Aperture coupling is adopted as the interconnecting method in the two-layer SIW feeding network. Two pairs of metallic posts act as dipoles and are surrounded by an air-filled cavity. The height of the cavity is designed to be higher than that of the posts to broaden the bandwidth. The measured impedance bandwidth of the proposed 8 × 8 antenna array is 16.3%, from 35.4 to 41.7 GHz for |S11| ≤ −10 dB, and the measured peak gain is 26.7 dBi at 40 GHz. The measured radiation efficiency of the antenna array is 83.2% at 40 GHz. The proposed antenna array, which is suitable for 5G applications in the 37 and 39 GHz bands, shows stable radiation patterns, high gain, and broad bandwidth.
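The quoted 16.3% figure is the standard fractional bandwidth of the measured band edges, which is easy to verify:

```python
# Fractional bandwidth check: 2 * (f_hi - f_lo) / (f_hi + f_lo).
f_lo, f_hi = 35.4, 41.7  # GHz, measured |S11| <= -10 dB band edges
fbw_percent = 2 * (f_hi - f_lo) / (f_hi + f_lo) * 100
# fbw_percent is approximately 16.3
```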
CUDA accelerated iris template matching on Graphics Processing Units (GPUs)
In this paper we develop a parallelized iris template matching implementation on inexpensive Graphics Processing Units (GPUs) with Nvidia's CUDA programming model to achieve matching rates of 44 million iris template comparisons per second without rotation invariance. With tolerance to head tilt, we achieve 4.2 million matches per second and compare our implementation to state-of-the-art prior work on GPUs and FPGAs, emphasizing our improvements. Additionally, a comparison to highly optimized CPU implementations of iris template matching is performed, showing a 14X speedup using our approach. In contrast to other published work, we develop an implementation for parallel iris template matching that incorporates iris code shifting for rotation invariance and provide timing data showing that our proposed architecture is efficiently implemented, capitalizing on shared and texture memory to speed up the bit-shifting process beyond the current prior art.
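A minimal sketch of the matching step (not the paper's CUDA kernel): iris-code comparison uses fractional Hamming distance, and rotation invariance is obtained by taking the minimum over circular shifts of one code.

```python
# Sketch of iris-code matching with bit shifting for rotation invariance.
# A real system operates on packed bit words with mask bits; this NumPy
# version over unpacked bits illustrates only the shift-and-compare idea.
import numpy as np

def hamming_with_shifts(code_a, code_b, max_shift=8):
    """Fractional Hamming distance, minimized over circular bit shifts."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s)
        hd = np.count_nonzero(code_a != shifted) / code_a.size
        best = min(best, hd)
    return best

a = np.random.default_rng(1).integers(0, 2, 2048).astype(np.uint8)
b = np.roll(a, 3)  # same iris, rotated within tolerance
assert hamming_with_shifts(a, b) == 0.0
```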
Radioprotective, anticarcinogenic and antioxidant properties of the Indian holy basil, Ocimum sanctum (Tulasi).
Ocimum sanctum (Sanskrit: Tulasi; English: holy basil; family: Labiatae) is found throughout the semitropical and tropical parts of India. It is grown in gardens and is worshipped as a sacred plant by the Hindus. The medicinal value of this plant has been known for millennia. The different parts of the plant are traditionally used in the Ayurveda and Siddha systems for the treatment of diverse ailments like infections, skin diseases, hepatic disorders, common cold and cough, malarial fever, and as an antidote for snake bite and scorpion sting [1]. The Santal tribe uses the plant in fever, dropsy, vomiting, constipation, cough, postnatal complaints, etc. O. sanctum is also used as an ingredient of herbal preparations with other medicinal plants. Investigations on its pharmacological and therapeutic properties during the past decades have led to a number of publications. Literature on the ethnobotany, pharmacognosy and pharmacology of Ocimum sp. has been extensively reviewed by Satyavati and co-workers. Water extract of the leaves was reported to have a hypotensive action in dogs [3]. A 50% ethanolic extract of O. sanctum leaves showed hypoglycemic effect in rats and antispasmodic activity in guinea pigs [4]. The ether extract and essential oil of the leaves exhibited antibacterial activity against a number of bacterial species [5-8]. The essential oil also exhibited antifungal activity. Crude extracts from the young leaves and plant showed antiviral activity against some common plant viruses like papaya leaf reduction virus [10], top necrosis virus of pea [11] and bean mosaic virus [12]. The acetone extract of the plant showed insecticidal activity against Spodoptera litura [13], while the water extract has nematicidal activity [14]. The anti-stress effect of the plant has been reported in mice and rats [15-19]. Hepatoprotective activity of the extract against carbon tetrachloride toxicity [20] and paracetamol-induced damage [21] has been demonstrated.
A methanol extract and an aqueous suspension of O. sanctum leaves were found to have anti-inflammatory and analgesic [22] and immunostimulatory [23] properties. Aqueous extract of the leaves and pure eugenol, which is a constituent of the extract, were reported to reduce the biochemical and membrane changes induced by restraint stress in rats [24]. Ocimum sanctum extract also protected against experimental ulcers, which was related to its ability to reduce acid secretion and increase mucus secretion [25]. The essential oil of O. sanctum was observed to have anti-inflammatory [26, 27] and anti-ulcer [28] activities in rats. Oral feeding of Ocimum leaf powder for one month was found to reduce fasting blood sugar and cholesterol levels in blood, liver and heart of rats; the results indicated a hypoglycemic and hypolipidemic effect in diabetic rats [29]. A preliminary clinical trial showed that treatment with the aqueous extract of O. sanctum leaves gave a higher survival rate among patients suffering from viral encephalitis than in patients given steroid treatment [30]. Oral administration of 250 g of dried leaf powder daily for 6 weeks reduced the blood pressure in hypertensive patients [31]. There has been high interest in the radioprotective, anticarcinogenic and antioxidant properties of this plant in recent years. The present paper attempts to review the publications on these topics.
The VASOGRADE: A Simple Grading Scale for Prediction of Delayed Cerebral Ischemia After Subarachnoid Hemorrhage.
BACKGROUND AND PURPOSE Patients are classically at risk of delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage. We validated a grading scale, the VASOGRADE, for prediction of DCI. METHODS We used data from 3 phase II randomized clinical trials and a single hospital series to assess the relationship between the VASOGRADE and DCI. The VASOGRADE was derived from previously published risk charts and consists of 3 categories: VASOGRADE-Green (modified Fisher scale 1 or 2 and World Federation of Neurosurgical Societies scale [WFNS] 1 or 2); VASOGRADE-Yellow (modified Fisher 3 or 4 and WFNS 1-3); and VASOGRADE-Red (WFNS 4 or 5, irrespective of modified Fisher grade). The relation between the VASOGRADE and DCI was assessed by logistic regression models. The predictive accuracy of the VASOGRADE was assessed by receiver operating characteristic curve and calibration plots. RESULTS In a cohort of 746 patients, the VASOGRADE significantly predicted DCI (P<0.001). VASOGRADE-Yellow had a tendency toward increased risk of DCI (odds ratio [OR], 1.31; 95% CI, 0.77-2.23) when compared with VASOGRADE-Green; those with VASOGRADE-Red had a 3-fold higher risk of DCI (OR, 3.19; 95% CI, 2.07-4.50). The individual studies were not a significant confounding factor between the VASOGRADE and DCI. The VASOGRADE had adequate discrimination for prediction of DCI (area under the receiver operating characteristic curve=0.63) and good calibration. CONCLUSIONS The VASOGRADE results validated previously published risk charts in a large and diverse sample of subarachnoid hemorrhage patients, which allows DCI risk stratification on presentation after subarachnoid hemorrhage. It could help to select patients at high risk of DCI, as well as standardize treatment protocols and research studies.
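The three VASOGRADE categories form a simple decision rule. The sketch below is a direct transcription of the criteria quoted above; note that, as stated, modified Fisher 1-2 with WFNS 3 falls outside the three categories, so the sketch flags it explicitly.

```python
# Direct transcription of the VASOGRADE criteria quoted in the abstract.
def vasograde(modified_fisher, wfns):
    """Return 'Red', 'Green', 'Yellow', or 'unclassified'."""
    if wfns in (4, 5):                                 # Red: WFNS 4-5, any mFisher
        return "Red"
    if modified_fisher in (1, 2) and wfns in (1, 2):   # Green
        return "Green"
    if modified_fisher in (3, 4) and wfns in (1, 2, 3):  # Yellow
        return "Yellow"
    return "unclassified"  # e.g. mFisher 1-2 with WFNS 3, not covered above
```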
Effects of aerobic training during hemodialysis on heart rate variability and left ventricular function in end-stage renal disease patients.
INTRODUCTION Decreased heart rate variability (HRV) in patients with end-stage renal disease (ESRD) undergoing hemodialysis is predictive of cardiac death, especially due to sudden death. OBJECTIVE To evaluate the effects of aerobic training during hemodialysis on HRV and left ventricular function in ESRD patients. METHODS Twenty-two patients were randomized into two groups: exercise (n = 11; 49.6 ± 10.6 years; 4 men) and control (n = 11; 43.5 ± 12.8 years; 4 men). Patients assigned to the exercise group underwent aerobic training, performed during the first two hours of hemodialysis, three times weekly, for 12 weeks. HRV and left ventricular function were assessed by 24-hour Holter monitoring and echocardiography, respectively. RESULTS After 12 weeks of the protocol, no significant differences were observed in time- and frequency-domain measures of HRV in either group. The ejection fraction improved non-significantly in the exercise group (67.5 ± 12.6% vs. 70.4 ± 12%) and decreased non-significantly in the control group (73.6 ± 8.4% vs. 71.4 ± 7.6%). CONCLUSION A 12-week aerobic training program performed during hemodialysis did not modify HRV and did not significantly improve left ventricular function.
An open-label, two-stage, phase II study of bevacizumab and lapatinib in children with recurrent or refractory ependymoma: a collaborative ependymoma research network study (CERN)
Co-expression of ERBB2 and ERBB4, reported in 75% of pediatric ependymomas, correlates with worse overall survival. Lapatinib, a selective ERBB1 and ERBB2 inhibitor, produced prolonged disease stabilization in patients with ependymoma in a phase I study. Bevacizumab exposure in ependymoma xenografts leads to ablation of tumor self-renewing cells, arresting growth. Thus, we conducted an open-label, phase II study of bevacizumab and lapatinib in children with recurrent ependymomas. Patients ≤21 years of age with recurrent ependymoma received lapatinib orally twice daily (900 mg/m2/dose for the first 10 patients, then 700 mg/m2/dose) and bevacizumab 10 mg/kg intravenously on days 1 and 15 of a 28-day course. Lapatinib serum trough levels were analyzed prior to each course. Total and phosphorylated VEGFR2 expression was measured in peripheral blood mononuclear cells (PBMCs) before doses 1 and 2 of bevacizumab and 24–48 h following dose 2 of bevacizumab. Twenty-four patients with a median age of 10 years (range 2–21 years) were enrolled; 22 were eligible and 20 evaluable for response. Thirteen had anaplastic ependymoma. There were no objective responses; 4 patients had stable disease for ≥4 courses (range 4–14). Grade 3 toxicities included rash, elevated ALT, and diarrhea. Grade 4 toxicities included peri-tracheostomy hemorrhage (n = 1) and elevated creatine phosphokinase (n = 1). The median lapatinib pre-dose trough concentration was 3.72 µM. Although the combination of bevacizumab and lapatinib was well tolerated in children with recurrent ependymoma, it proved ineffective.
Maternal care and birth outcomes among ethnic minority women in Finland
BACKGROUND Care during pregnancy and labour is of great importance in every culture. Studies show that people of migrant origin face barriers to obtaining accessible and good-quality care compared to people in the host society. The aim of this study is to compare the access to and use of maternity services, and their outcomes, among ethnic minority women having a singleton birth in Finland. METHODS The study is based on data from the Finnish Medical Birth Register in 1999-2001 linked with information from Statistics Finland on the woman's country of birth, citizenship and mother tongue. Our study data included 6,532 women of foreign origin (3.9% of all singletons) giving singleton birth in Finland during 1999-2001 (compared to 158,469 Finnish-origin singletons). RESULTS Most women had migrated during the last fifteen years, mainly from Russia, the Baltic countries, Somalia and Eastern Europe. Migrant-origin women participated substantially in prenatal care. Interventions performed or needed during pregnancy and childbirth varied between ethnic groups. Women of African and Somali origin had the most health problems, resulting in the highest perinatal mortality rates. Women from Eastern Europe, the Middle East, North Africa and Somalia had a significant risk of low birth weight and small-for-gestational-age newborns. Most premature newborns were found among women from the Middle East, North Africa and South Asia. Primiparous women from Africa, Somalia, and Latin America and the Caribbean had the most caesarean sections, while newborns of Latin American origin had more interventions after birth. CONCLUSION Despite good general coverage of maternal care among migrant-origin women, there were clear variations in the type of treatment given to them or needed by them. African-origin women had the most health problems during pregnancy and childbirth and the worst perinatal outcomes, indicating an urgent need for targeted preventive and special care. These results do not confirm either a healthy migrant effect or the epidemiological paradox, according to which migrant-origin women have comparatively good birth outcomes.
Papilledema: clinical clues and differential diagnosis.
The term "papilledema" describes optic disc swelling resulting from increased intracranial pressure. A complete history and direct funduscopic examination of the optic nerve head and adjacent vessels are necessary to differentiate papilledema from optic disc swelling due to other conditions. Signs of optic disc swelling include elevation and blurring of the disc and its margins, venous congestion, and retinal hard exudates, splinter hemorrhages and infarcts. Patients with papilledema usually present with signs or symptoms of elevated intracranial pressure, such as headache, nausea, vomiting, diplopia, ataxia or altered consciousness. Causes of papilledema include intracranial tumors, idiopathic intracranial hypertension (pseudotumor cerebri), subarachnoid hemorrhage, subdural hematoma and intracranial inflammation. Optic disc edema may also occur from many conditions other than papilledema, including central retinal artery or vein occlusion, congenital structural anomalies and optic neuritis.
Automatic Mutation Test Case Generation via Dynamic Symbolic Execution
Automatic test case generation is a principal issue of the software testing activity. Dynamic symbolic execution appears to be a promising approach to this matter, as it has been shown to be quite powerful in producing the sought tests. Despite its power, it has only been effectively applied to the entry-level criteria of the structural criteria hierarchy, such as branch testing. In this paper, an extension of this technique is proposed in order to effectively generate test data based on mutation testing. The proposed approach conjoins program transformation and dynamic symbolic execution techniques in order to successfully automate the test generation process. The propositions made in this paper have been incorporated into an automated framework for producing mutation-based test cases. Its evaluation on a set of benchmark programs suggests that it is able to produce tests capable of killing most of the non-equivalent introduced mutants. The same study also provides some evidence that, by employing efficient heuristics, it is possible to perform mutation testing with reasonable resources.
Unveiling the Longitudinal Association between Short Sleep Duration and the Incidence of Obesity: the Penn State Cohort
Objective: Several epidemiologic, longitudinal studies have reported that short sleep duration is a risk factor for the incidence of obesity. However, the vast majority of these studies used self-reported measures of sleep duration and did not examine the role of objective short sleep duration, subjective sleep disturbances and emotional stress. Design: Longitudinal, population-based study. Subjects: We studied a random sample of 815 non-obese adults from the Penn State Cohort in the sleep laboratory for one night using polysomnography (PSG) and followed them up for a mean of 7.5 years. Subjective and objective measures of sleep as well as emotional stress were obtained at baseline. Obesity was defined as a body mass index (BMI) ≥30 kg/m². Results: The incidence of obesity was 15%, and it was significantly higher in women and in individuals who reported sleep disturbances, shorter sleep duration and higher emotional stress. Significant mediating effects showed that individuals with subjective sleep disturbances who developed obesity reported the shortest sleep duration and the highest emotional stress, and that subjective sleep disturbances and emotional stress were independent predictors of incident obesity. Further analyses revealed that the association of short sleep duration, subjective sleep disturbances and emotional stress with incident obesity was stronger in young and middle-aged adults. Objective short sleep duration was not associated with a significantly increased risk of incident obesity. Conclusion: Self-reported short sleep duration in non-obese individuals at risk of developing obesity is a surrogate marker of emotional stress and subjective sleep disturbances. Objective short sleep duration is not associated with a significantly increased risk of incident obesity. The detection and treatment of sleep disturbances and emotional stress should become a target of our preventive strategies against obesity.
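The obesity definition used in the study is a simple BMI cutoff; a small helper for reference:

```python
# BMI = weight / height^2; obesity cutoff is BMI >= 30 kg/m^2 as in the study.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def is_obese(weight_kg, height_m):
    return bmi(weight_kg, height_m) >= 30.0
```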
Modeling The Intensity Function Of Point Process Via Recurrent Neural Networks
Event sequences, asynchronously generated with random timestamps, are ubiquitous among applications. The precise and arbitrary timestamps carry important clues about the underlying dynamics and make event data fundamentally different from time series, in which the series is indexed at fixed, equal time intervals. One expressive mathematical tool for modeling events is the point process. The intensity functions of many point processes involve two components: the background and the effect of the history. Due to its inherent spontaneousness, the background can be treated as a time series, while the other component must handle the history of events. In this paper, we model the background by a Recurrent Neural Network (RNN) whose units are aligned with the time series indexes, while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture long-range dynamics. The whole model, with event type and timestamp prediction output layers, can be trained end-to-end. Our approach takes an RNN perspective on point processes, modeling both their background and history effect. For utility, our method allows a black-box treatment of the intensity, which is often a pre-defined parametric form in point processes. Meanwhile, end-to-end training opens the venue for reusing existing rich techniques in deep networks for point process modeling. We apply our model to the predictive maintenance problem using a log dataset from more than 1,000 ATMs operated by a global bank headquartered in North America.
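A toy sketch of the two-RNN construction described above: one vanilla RNN consumes the evenly indexed background series, another consumes the asynchronous event history (inter-event gap, event type), and the combined hidden states are mapped through a softplus so the intensity stays positive. Dimensions, weights, and the combination rule are illustrative assumptions, not the paper's architecture.

```python
# Toy forward pass of a two-RNN intensity model. Random, fixed weights;
# this sketches the wiring only, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden size for both RNNs
Wb, Ub = rng.normal(0, 0.1, (H, 1)), rng.normal(0, 0.1, (H, H))  # background RNN
We, Ue = rng.normal(0, 0.1, (H, 2)), rng.normal(0, 0.1, (H, H))  # event RNN
v = rng.normal(0, 0.1, 2 * H)                                    # output weights

def rnn(W, U, inputs):
    """Vanilla tanh RNN; returns the final hidden state."""
    h = np.zeros(H)
    for x in inputs:
        h = np.tanh(W @ x + U @ h)
    return h

def intensity(series, events):
    """series: background samples; events: (inter-event gap, type id) pairs."""
    h_b = rnn(Wb, Ub, [np.array([s]) for s in series])
    h_e = rnn(We, Ue, [np.array(e, dtype=float) for e in events])
    z = v @ np.concatenate([h_b, h_e])
    return np.log1p(np.exp(z))  # softplus keeps the intensity positive

lam = intensity([0.1, 0.3, 0.2], [(0.5, 1), (1.2, 0)])
```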
Cryptography based digital image watermarking algorithm to increase security of watermark data
Digital watermarking is one of the proposed solutions for copyright protection of multimedia data. This technique is better suited than digital signatures and other methods because it does not increase overhead. Digital watermarking describes methods and technologies that hide information, for example a number or text, in digital media such as images, video or audio. The embedding takes place by manipulating the content of the digital data, which means the information is not embedded in a frame around the data. In this paper, a cryptography-based blind image watermarking technique is presented that can embed more watermark bits in a grayscale cover image without affecting imperceptibility, while increasing the security of the watermarks.
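The idea of encrypting the watermark before blind embedding can be sketched as follows; the keyed XOR keystream and LSB embedding below are stand-ins chosen for illustration, not the scheme proposed in the paper.

```python
# Illustrative sketch: encrypt watermark bits with a keyed keystream, then
# embed them blindly in the least significant bits of a grayscale image.
import hashlib
import numpy as np

def keystream(key, n):
    """Deterministic 0/1 stream seeded from a hash of the key (toy cipher)."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return np.random.default_rng(seed).integers(0, 2, n, dtype=np.uint8)

def embed(image, bits, key):
    flat = image.flatten()                       # copy of the cover image
    enc = bits ^ keystream(key, bits.size)       # "encrypt" the watermark
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | enc
    return flat.reshape(image.shape)

def extract(image, n_bits, key):
    enc = image.flatten()[:n_bits] & 1           # blind: no cover image needed
    return enc ^ keystream(key, n_bits)          # decrypt

img = np.full((8, 8), 128, dtype=np.uint8)
wm = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
marked = embed(img, wm, b"secret")
assert np.array_equal(extract(marked, wm.size, b"secret"), wm)
```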
Mobile Data Offloading: How Much Can WiFi Deliver?
This paper presents a quantitative study on the performance of 3G mobile data offloading through WiFi networks. We recruited 97 iPhone users from metropolitan areas and collected statistics on their WiFi connectivity during a two-and-a-half-week period in February 2010. Our trace-driven simulation using the acquired whole-day traces indicates that WiFi already offloads about 65% of the total mobile data traffic and saves 55% of battery power without using any delayed transmission. If data transfers can be delayed with some deadline until users enter a WiFi zone, substantial gains can be achieved only when the deadline is substantially larger than tens of minutes. With 100-s delays, the achievable gain is only 2%-3%, whereas with 1 h or longer deadlines, traffic and energy saving gains increase beyond 29% and 20%, respectively. These results are in contrast to the substantial gain (20%-33%) reported by the existing work even for 100-s delayed transmission using traces taken from transit buses or war-driving. In addition, a distribution model-based simulator and a theoretical framework that enable analytical studies of the average performance of offloading are proposed. These tools are useful for network providers to obtain a rough estimate of the average performance of offloading for a given WiFi deployment condition.
Multiple-Shot Person Re-identification by HPE Signature
In this paper, we propose a novel appearance-based method for person re-identification that condenses a set of frames of the same individual into a highly informative signature, called the Histogram Plus Epitome (HPE). It incorporates complementary global and local statistical descriptions of the human appearance, focusing on the overall chromatic content via histogram representation and on the presence of recurrent local patches via epitome estimation. The matching of HPEs provides strong performance against low resolution, occlusions, and pose and illumination variations, defining new state-of-the-art results on all the datasets considered.
Rickets vs. abuse: a national and international epidemic
In the May 2007 issue of Pediatric Radiology, the article “Can classic metaphyseal lesions follow uncomplicated caesarean section?” [1] suggested that enough trauma could occur under these circumstances to produce fractures previously described as “highly specific for child abuse” [2]. However, the question of whether the metaphyses were normal to begin with was not raised. Why should this be an issue? Vitamin D deficiency (DD), initially believed to primarily affect the elderly and dark-skinned populations in the US, is now being demonstrated in otherwise healthy young adults, children, and infants of all races. In a review article on vitamin D published in the New England Journal of Medicine last year [3], Holick reviewed some of the recent literature, showing deficiency and insufficiency rates of 52% among Hispanic and African-American adolescents in Boston, 48% among white preadolescent females in Maine, 42% among African American females between 15 and 49 years of age, and 32% among healthy white men and women 18 to 29 years of age in Boston. A recent study of healthy infants and toddlers aged 8 to 24 months in Boston found an insufficiency rate of 40% and a deficiency rate of 12.1% [4]. In September 2007, a number of articles about congenital rickets were published in the Archives of Disease in Childhood, including an international perspective of mother and newborn DD reported from around the world [5]. Concentrations of 25-hydroxyvitamin D [25(OH)D] less than 25 nmol/l (10 ng/ml) were found in 18%, 25%, 80%, 42% and 61% of pregnant women in the UK, UAE, Iran, northern India and New Zealand, respectively, and in 60 to 84% of non-western women in the Netherlands. Currently, most experts in the US define DD as a 25(OH)D level less than 50 nmol/l (20 ng/ml). Levels between 20 and 30 ng/ml are considered to indicate insufficiency, reflecting increasing parathyroid hormone (PTH) levels and decreasing calcium absorption [3].
With such high prevalence of DD in our healthy young women, congenital deficiency is inevitable, since neonatal 25(OH)D concentrations are approximately two-thirds the maternal level [6]. Bodnar et al. [7] at the University of Pittsburgh, in the largest US study of mother and newborn infant vitamin D levels, found deficient or insufficient levels in 83% of black women and 92% of their newborns, as well as in 47% of white women and 66% of their newborns. The deficiencies were worse in the winter than in the summer. Over 90% of these women were on prenatal vitamins. Research is currently underway to formulate more appropriate recommendations for vitamin D supplementation during pregnancy (http://clinicaltrials.gov, ID: R01 HD043921). The obvious question is, “Why has DD once again become so common?” Multiple events have led to the high rates of DD. In the past, many foods were fortified with vitamin D.
Pediatr Radiol (2008) 38:1210–1216 DOI 10.1007/s00247-008-1001-z
Early Access to the Cardiac Catheterization Laboratory for Patients Resuscitated From Cardiac Arrest Due to a Shockable Rhythm: The Minnesota Resuscitation Consortium Twin Cities Unified Protocol
BACKGROUND In 2013 the Minnesota Resuscitation Consortium developed an organized approach for the management of patients resuscitated from shockable rhythms to gain early access to the cardiac catheterization laboratory (CCL) in the metro area of Minneapolis-St. Paul. METHODS AND RESULTS Eleven hospitals with 24/7 percutaneous coronary intervention capabilities agreed to provide early (within 6 hours of arrival at the Emergency Department) access to the CCL, with the intention to perform coronary revascularization, for out-of-hospital patients who were successfully resuscitated from ventricular fibrillation/ventricular tachycardia (VF/VT) arrest. Other inclusion criteria were age >18 and <76 and presumed cardiac etiology. Patients with other rhythms, known do-not-resuscitate/do-not-intubate orders, noncardiac etiology, significant bleeding, and terminal disease were excluded. The primary outcome was survival to hospital discharge with favorable neurological outcome. Of 331 patients resuscitated from VT/VF and transferred alive to the Emergency Department, 315 had complete medical records. Of those, 231 (73.3%) were taken to the CCL per the Minnesota Resuscitation Consortium protocol, while 84 (26.6%) were not taken to the CCL (protocol deviations). Overall, 197 (63%) patients survived to hospital discharge with good neurological outcome (cerebral performance category of 1 or 2). Of the patients who followed the Minnesota Resuscitation Consortium protocol, 121 (52%) underwent percutaneous coronary intervention, and 15 (7%) underwent coronary artery bypass grafting. In this group, 151 (65%) survived with good neurological outcome, whereas in the group that did not follow the Minnesota Resuscitation Consortium protocol, 46 (55%) survived with good neurological outcome (adjusted odds ratio: 1.99 [1.07-3.72], P=0.03).
CONCLUSIONS Early access to the CCL after cardiac arrest due to a shockable rhythm in a selected group of patients is feasible in a large metropolitan area in the United States and is associated with a 65% survival rate to hospital discharge with a good neurological outcome.
Assessing structural changes in axial spondyloarthritis using a low-dose biplanar imaging system.
OBJECTIVES Patients with axial SpA experience repeated spine imaging. EOS is a new low-dose imaging system with significantly lower irradiation than conventional radiography (CR). The objective was to explore the performance of EOS compared with CR for the classification and follow-up of SpA. METHODS We performed an observational, cross-sectional, single-centre study including SpA patients (definite diagnosis by expert opinion) and control patients [definite chronic mechanical low back pain (cLBP)]. All patients underwent pelvic and frontal and lateral CR of the entire spine and two-dimensional (2D) EOS imaging on the same day. Images were blindly assessed for sacroiliitis [modified New York criteria (mNY)] and for ankylosis of the spine [modified Stoke AS Spine Score (mSASSS)]. Global ease of interpretation was rated on a scale of 0-10. The primary outcome was intermodality agreement, with an a priori defined non-inferiority limit of 0.7. Interobserver, intra-observer and intermodality agreement were measured by kappa, weighted kappa, intraclass correlation coefficient and Bland-Altman plots. RESULTS Forty-eight SpA patients [mean age 47.6 years (s.d. 14.9), symptom duration 21.4 years (s.d. 13.3), 35 (70%) men] and 48 cLBP controls [mean age 49.1 years (s.d. 10.7), 9 (22.5%) men] were included. Intermodality agreement between EOS and CR was 0.50 (95% CI 0.26, 0.75) and 0.97 (95% CI 0.95, 0.98) for sacroiliitis and mSASSS, respectively. Ease of interpretation was greater for CR [8.2 (s.d. 0.9)] compared with EOS [7.2 (s.d. 0.8), P < 0.0001]. CONCLUSION Our results suggest that EOS could replace CR for the follow-up of structural damage of the spine, but its place in the classification of sacroiliitis needs to be further explored.
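For readers unfamiliar with the agreement statistics reported above, Cohen's kappa corrects raw rater agreement for agreement expected by chance. A minimal illustrative sketch in Python (not the study's analysis code; the study also used weighted kappa and intraclass correlation):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' label lists of equal length."""
    n = len(a)
    labels = sorted(set(a) | set(b))
    # Observed agreement: fraction of items both raters labeled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)

# Two raters disagree on one of four items:
print(cohens_kappa([0, 0, 1, 1], [0, 1, 1, 1]))  # → 0.5
```

Values near 1 indicate near-perfect agreement (as for mSASSS above, 0.97), while 0.50 (sacroiliitis) indicates only moderate agreement beyond chance.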
Emotion Analysis on Twitter: The Hidden Challenge
In this paper, we present an experiment to detect emotions in tweets. Unlike much previous research, we draw the important distinction between the tasks of emotion detection in a closed world assumption (i.e. every tweet is emotional) and the complicated task of identifying emotional versus non-emotional tweets. Given an apparent lack of appropriately annotated data, we created two corpora for these tasks. We describe two systems, one symbolic and one based on machine learning, which we evaluated on our datasets. Our evaluation shows that a machine learning classifier performs best on emotion detection, while a symbolic approach is better for identifying relevant (i.e. emotional) tweets.
Anti-Aβ Oligomer IgG and Surface Sialic Acid in Intravenous Immunoglobulin: Measurement and Correlation with Clinical Outcomes in Alzheimer’s Disease Treatment
The fraction of IgG antibodies with anti-oligomeric Aβ affinity and surface sialic acid was compared between Octagam and Gammagard intravenous immunoglobulin (IVIG) using two complementary surface plasmon resonance methods. These comparisons were performed to identify if an elevated fraction existed in Gammagard, which reported small putative benefits in a recent Phase III clinical trial for Alzheimer's Disease. The fraction of anti-oligomeric Aβ IgG was found to be higher in Octagam, for which no cognitive benefits were reported. The fraction and location of surface-accessible sialic acid in the Fab domain was found to be similar between Gammagard and Octagam. These findings indicate that anti-oligomeric Aβ IgG and total surface sialic acid alone cannot account for reported clinical differences in the two IVIG products. A combined analysis of sialic acid in anti-oligomeric Aβ IgG did reveal a notable finding that this subgroup exhibited a high degree of surface sialic acid lacking the conventional α2,6 linkage. These results demonstrate that the IVIG antibodies used to engage oligomeric Aβ in both Gammagard and Octagam clinical trials did not possess α2,6-linked surface sialic acid at the time of administration. Anti-oligomeric Aβ IgG with α2,6 linkages remains untested as an AD treatment.
Beauty in a smile: the role of medial orbitofrontal cortex in facial attractiveness
The attractiveness of a face is a highly salient social signal, influencing mate choice and other social judgements. In this study, we used event-related functional magnetic resonance imaging (fMRI) to investigate brain regions that respond to attractive faces displaying either a neutral or mildly happy facial expression. Attractive faces produced activation of medial orbitofrontal cortex (OFC), a region involved in representing stimulus-reward value. Responses in this region were further enhanced by a smiling facial expression, suggesting that the reward value of an attractive face, as indexed by medial OFC activity, is modulated by a perceiver-directed smile.
Collaborative navigation with ground vehicles and personal navigators
An integrated positioning solution termed `collaborative positioning' employs multiple location sensors of different accuracies on different platforms to share their absolute and relative localizations. Typical application scenarios are dismounted soldiers, swarms of UAVs, teams of robots, emergency crews and first responders. The stakeholders of the solution (i.e., mobile sensors, users, fixed stations and external databases) are involved in an iterative algorithm to estimate or improve the accuracy of each node's position based on statistical models. This paper studies the challenges of realizing a public and low-cost solution based on mass users of multiple-sensor platforms. The investigation centered on field experiments in collaborative navigation, including partially indoor navigation. For this purpose, different sensor platforms were fitted with similar types of sensors, such as geodetic and low-cost high-sensitivity GNSS receivers, tactical-grade IMUs, MEMS-based IMUs, miscellaneous sensors (including magnetometers, barometric pressure and step sensors), image sensors such as digital cameras and Flash LiDAR, and ultra-wideband (UWB) receivers. The platforms employed in the tests included a train on a building roof, mobile mapping vans, a personal navigator and a foot-tracker unit, with data from the different platforms recorded simultaneously. Several field experiments conducted over a week at the University of Nottingham are described and investigated in the paper. The personal navigator and foot-tracker unit moved on the building roof, then through the building and down, logging data simultaneously with the vans, all of them moving together and relative to each other. The platforms then logged data simultaneously covering various accelerations, dynamics, etc. over longer trajectories.
Promising preliminary results of the field experiments showed that positioning accuracy at the few-meter level can be achieved for the navigation of the different platforms.
Australia - PaintMyTrip Travel
Among the seven continents of the world, Australia is the smallest by landmass and the second smallest by population. Compared to the other continents, Australia is small, but it is the sixth largest country in the world by landmass. Because of the concentrated..
General Survey on Security Issues on Internet of Things
The Internet of Things is the integration of a variety of technologies. It transparently and seamlessly incorporates a large number of assorted end systems, providing open access to selected data for digital services. The Internet of Things is a promising research area with applications in commerce, industry, and education. The abundance of sensors and actuators motivates sensing and actuating devices in communication scenarios, enabling the sharing of information in the Internet of Things. Advances in sensor data collection technology and Radio Frequency Identification technology have led to a large number of smart devices being connected to the Internet, continuously transmitting data over time. In the context of security, due to differing communication overheads and standards, conventional security services are not applicable to the Internet of Things; the resulting technological loopholes lead to the generation of malicious data, the compromise of devices, and so on. Hence a flexible mechanism is needed to deal with the security threats in the dynamic environment of the Internet of Things, and continuous research and new ideas need to be pursued for the various upcoming challenges. This paper covers the security issues and challenges of the Internet of Things along with a
OSCAR WILDE'S GREAT FARCE THE IMPORTANCE OF BEING EARNEST
It is generally agreed that The Importance of Being Earnest is Oscar Wilde's masterpiece, but there is little agreement on why it should be thought so or on how it works as a play. Though we can sense a solid substance beneath the frothy surface, the nature of that substance remains an enigma. Surprisingly little real criticism has been written about the play, and much of that which has is sketchy or tedious. One of the few critics whose mind seems to have been genuinely engaged by the play is Mary McCarthy, but she has written about it only briefly, and despite her admiration clearly finds it repugnant. "It has the character of a ferocious idyll," she says, and complains that "Selfishness and servility are the moral alternatives presented."1 Most of what she says about the play cannot be denied, yet there is a wrong note somewhere. Though it is almost always feeble to complain about critics using the wrong standards, I think we have to do so here. The Importance of Being Earnest does not tackle problems of moral conduct in the way that most plays do. In it, Wilde expresses a comic vision of the human condition by deliberately distorting actuality and having most of the characters behave as if that vision were all but universal. It is fair enough to complain about the vision entire, but to complain simply about the selfishness, without asking what it suggests, is on a par with complaining about the immorality of Tom Jones. Though McCarthy uses the wrong standards, and therefore sees the play through a distorting lens, what she sees is there and needs to be
Using scalable game design to teach computer science from middle school to graduate school
A variety of approaches exist to teach computer science concepts to students from K-12 to graduate school. One such approach involves using the mass appeal of game design and creation to introduce students to programming and computational thinking. Specifically, Scalable Game Design enables students with varying levels of expertise to learn important concepts relative to their experience. This paper presents our observations using Scalable Game Design over multiple years to teach middle school students, college-level students, graduate students, and even middle school teachers computer science and education concepts ranging from fundamental to complex. Results indicate that Scalable Game Design appeals broadly to students, regardless of background, and is a powerful teaching tool for getting students of all ages exposed to and interested in computer science. Furthermore, it is observed that many student projects exhibit transfer, enabling their games to explain complex ideas, from all disciplines, to the general public.
Molecular and genetic advances to understanding evolution and biodiversity in the polar regions.
In September 2011, Aberdeen (UK) hosted the World Conference on Marine Biodiversity (WCMB). Within this Conference, the multidisciplinary international Science Research Programme (SRP) of the Scientific Committee on Antarctic Research (SCAR) “Evolution and Biodiversity in the Antarctic — The Response of Life to Change (EBA)” was granted a generous platform, which included a full-day Side Meeting on Advances in Evolution and Biodiversity in Marine Antarctic Environments. To mark the importance of this subject, as an outcome of the WCMB we decided to promote an EBA Special Issue of Marine Genomics, focussing on marine biodiversity in the polar regions. The contributions in this issue address the role of environmental change, variability and extreme events in the biological processes of marine organisms, including subjects such as phylogeography, phylogeny, phenotypic plasticity and molecular/ physiological adaptations in the vertebrates and invertebrates of both polar oceans. Polar habitats influence the pace and nature of global environmental changes, and respond to these changes in an integrated system of biologically modulated connections. In contrast with the terrestrial environment, Antarctic marine biodiversity is high. Perhaps the most pristine region on Earth, the Antarctic marine realm is fragile with undescribed organisms, continuous speciation and high endemism. “Polar amplification” of global anthropogenic warming causes acceleration of glacier retreat, sea-ice thinning and loss, and permafrost degradation, and raises the possibility that human influence may cause a major extinction event over the next 50 years. There is unprecedented urgency and importance to better understand the patterns, diversity and controlling processes of organisms, ecosystems and habitats in the Antarctic. 
This requires not only knowledge of physical and ecological processes, but also the physiological and molecular responses to ecosystem changes, as genomics underlies organisms' ability (or their lack of it) to adapt and survive. Better knowledge of Antarctic marine communities at their various levels of organisation (molecular, cellular, individual, population, etc.) may reveal crucial warnings of the impacts of current environmental changes. The largest challenge facing mankind today is the management of the Earth System to ensure a sustainable future. The concept of a “sustainable world” is strongly linked to that of biodiversity. All life becomes possible because biodiversity makes the planet what it is. Biodiversity depends on the diversity of genes, individuals, species, habitats, as well as their interconnections and relationships. Long-term warming, ocean acidification, and other anthropogenic impacts may present major challenges to the survival and biodiversity of Antarctic marine communities, as well as their terrestrial and limnetic counterparts. Present patterns of biodiversity are a consequence of processes operating on physiological, ecological and
A Low-Profile High-Gain and Wideband Filtering Antenna With Metasurface
A low-profile, high-gain, and wideband metasurface (MS)-based filtering antenna with high selectivity is investigated in this communication. The planar MS consists of nonuniform metallic patch cells, and it is fed by two separated microstrip-coupled slots from the bottom. The separation between the two slots, together with a shorting via, is used to provide good filtering performance in the lower stopband, whereas the MS is elaborately designed to provide a sharp roll-off rate at the upper band edge for the filtering function. The MS also simultaneously works as a highly efficient radiator, enhancing the impedance bandwidth and antenna gain of the feeding slots. To verify the design, a prototype operating at 5 GHz has been fabricated and measured. The reflection coefficient, radiation pattern, antenna gain, and efficiency are studied, and reasonable agreement between the measured and simulated results is observed. The prototype, with dimensions of 1.3 λ0 × 1.3 λ0 × 0.06 λ0, has a 10-dB impedance bandwidth of 28.4%, an average gain of 8.2 dBi within the passband, and an out-of-band suppression level of more than 20 dB within a very wide stopband.
News Article Teaser Tweets and How to Generate Them
We define the task of teaser generation and provide an evaluation benchmark and baseline systems for it. A teaser is a short reading suggestion for an article that is illustrative and includes curiosity-arousing elements to entice potential readers to read the news item. Teasers are one of the main vehicles for transmitting news to social media users. We compile a novel dataset of teasers by systematically accumulating tweets and selecting ones that conform to the teaser definition. We compare a number of neural abstractive architectures on the task of teaser generation and the overall best performing system is See et al. (2017)’s seq2seq with pointer network.
Green Marketing and Its Impact on Consumer Buying Behavior
This study aims to provide information about the effect of green marketing on customers' purchasing behaviors. First, the environment and environmental problems, one of the reasons why green marketing emerged, are discussed, and the concepts of green marketing and the green consumer are explained. A literature review, together with the hypotheses developed, then follows, covering the studies conducted on this subject to date. In the last section, the results of a questionnaire administered to 540 consumers in Istanbul are evaluated statistically. According to the results of the analysis, environmental awareness, green product features, green promotion activities and green price positively affect the green purchasing behaviors of consumers. Demographic characteristics have a moderate effect on the model.
Super-Resolution with Deep Convolutional Sufficient Statistics
Inverse problems in image and audio, and super-resolution in particular, can be seen as high-dimensional structured prediction problems, where the goal is to characterize the conditional distribution of a high-resolution output given its low-resolution corrupted observation. When the scaling ratio is small, point estimates achieve impressive performance, but soon they suffer from the regression-to-the-mean problem, a result of their inability to capture the multi-modality of this conditional distribution. Modeling high-dimensional image and audio distributions is a hard task, requiring both the ability to model complex geometrical structures and textured regions. In this paper, we propose to use as conditional model a Gibbs distribution, where its sufficient statistics are given by deep convolutional neural networks. The features computed by the network are stable to local deformation, and have reduced variance when the input is a stationary texture. These properties imply that the resulting sufficient statistics minimize the uncertainty of the target signals given the degraded observations, while being highly informative. The filters of the CNN are initialized by multiscale complex wavelets, and then we propose an algorithm to fine-tune them by estimating the gradient of the conditional log-likelihood, which bears some similarities with Generative Adversarial Networks. We evaluate experimentally the proposed approach in the image super-resolution task, but the approach is general and could be used in other challenging ill-posed problems such as audio bandwidth extension.
Selective photothermolysis to target sebaceous glands: theoretical estimation of parameters and preliminary results using a free electron laser.
BACKGROUND AND OBJECTIVES The success of permanent laser hair removal suggests that selective photothermolysis (SP) of sebaceous glands, another part of hair follicles, may also have merit. About 30% of sebum consists of fats with copious CH(2) bond content. SP was studied in vitro, using free electron laser (FEL) pulses at an infrared CH(2) vibrational absorption wavelength band. METHODS Absorption spectra of natural and artificially prepared sebum were measured from 200 to 3,000 nm, to determine wavelengths potentially able to target sebaceous glands. The Jefferson National Accelerator superconducting FEL was used to measure photothermal excitation of aqueous gels, artificial sebum, pig skin, human scalp, and forehead skin (sebaceous sites). In vitro skin samples were exposed to FEL pulses from 1,620 to 1,720 nm, spot diameter 7-9.5 mm, with exposure through a cold 4°C sapphire window in contact with the skin. Exposed and control tissue samples were stained using H&E, and nitroblue tetrazolium chloride (NBTC) staining was used to detect thermal denaturation. RESULTS Natural and artificial sebum both had absorption peaks near 1,210, 1,728, 1,760, 2,306 and 2,346 nm. Laser-induced heating of artificial sebum was approximately twice that of water at 1,710 and 1,720 nm, and about 1.5× higher in human sebaceous glands than in water. Thermal camera imaging showed transient focal heating near sebaceous hair follicles. Histologically, skin samples exposed to ~1,700 nm, ~100-125 millisecond pulses showed evidence of selective thermal damage to sebaceous glands. Sebaceous glands were positive for NBTC staining, without evidence of selective loss in samples exposed to the laser. Epidermis was undamaged in all samples. CONCLUSIONS SP of sebaceous glands appears to be feasible. Potentially, optical pulses at ~1,720 or ~1,210 nm delivered with a large beam diameter and appropriate skin cooling in approximately 0.1 seconds may provide an alternative treatment for acne.
Seismic data acquisition techniques on loess hills in the ordos basin
High-resolution exploration for lithologic targets is confronted with difficulties arising from the geophysical and geologic characteristics of the loess hills and very thick desert cover in the Ordos basin. Scientific research since the mid-1990s has developed three acquisition techniques, namely high-resolution crooked-line surveys in valleys, high-resolution multiple straight-line surveys, and 3D surveys, for different surface conditions and different geological targets.
Long-term effectiveness of lithium in bipolar disorder: a multicenter investigation of patients with typical and atypical features.
OBJECTIVE Poor response to long-term lithium treatment has been reported to be associated with atypical features of bipolar disorder. The purpose of this study was to investigate the influence of atypical symptoms on the effectiveness and stability of long-term lithium treatment in a prospective, multicenter cohort of bipolar patients in a naturalistic setting. METHOD Patients were initially selected according to International Classification of Diseases, 8th Revision, criteria for bipolar disorder and required long-term treatment. Their diagnoses were reconfirmed according to DSM-IV upon its publication. They were prospectively followed for an approximately 20-year period ending in 2004 in 5 centers participating in the International Group for the Study of Lithium-Treated Patients. Examinations included a comprehensive psychiatric evaluation, an assessment of typical and atypical features on an 8-item scale, and an evaluation of clinical course using the morbidity index. Unbalanced repeated-measures regression models with structured covariance matrices were used to assess the extent to which the morbidity index was influenced by atypical symptoms, duration of treatment, and pretreatment features. RESULTS A total of 242 patients were followed for a mean period of 10 years. In 142 patients, the number of typical features was greater than the number of atypical features, whereas in 100 patients the number of atypical features was greater than or equal to the number of typical features. The mean morbidity index remained stable over a period of 20 years in both groups of patients and was not significantly associated with the presence of atypical features, the duration of lithium treatment, the number or frequency of episodes, or latency from the onset of bipolar disorder to the start of lithium treatment. CONCLUSION Our study suggests that long-term response to lithium maintenance treatment is stable both in patients with typical and in patients with atypical features. 
The predominance of either typical or atypical features did not result in different responses to long-term lithium treatment in this sample of bipolar patients.
MMQA: A Multi-domain Multi-lingual Question-Answering Framework for English and Hindi
In this paper, we assess the challenges of multi-domain, multi-lingual question answering, create the necessary resources for benchmarking, and develop a baseline model. We curate 500 articles in six different domains from the web. These articles form comparable corpora of 250 English documents and 250 Hindi documents. From these comparable corpora, we have created 5,495 question-answer pairs, with both the questions and answers in English and Hindi. Questions can be either factoid or short-descriptive type. The answers are categorized into 6 coarse and 63 finer types. To the best of our knowledge, this is the very first attempt at creating a multi-domain, multi-lingual question answering evaluation resource involving English and Hindi. We develop a deep learning based model for classifying an input question into the coarse and finer categories depending upon the expected answer. Answers are extracted through similarity computation and subsequent ranking. For factoid questions, we obtain an MRR value of 49.10%, and for short-descriptive questions, we obtain a BLEU score of 41.37%. Evaluation of the question classification model shows accuracies of 90.12% and 80.30% for coarse and finer classes, respectively.
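The MRR figure reported above is the mean reciprocal rank: for each question, the reciprocal of the rank at which the first correct answer appears in the system's ranked list, averaged over all questions. A minimal sketch of the metric (illustrative only; the function name and toy data are our own, not from the paper):

```python
def mean_reciprocal_rank(ranked_answers, gold):
    """MRR over queries: 1/rank of the first correct answer, 0 if absent."""
    total = 0.0
    for candidates, correct in zip(ranked_answers, gold):
        for rank, cand in enumerate(candidates, start=1):
            if cand == correct:
                total += 1.0 / rank
                break
    return total / len(ranked_answers)

# Correct answer at rank 2 for query 1, rank 1 for query 2:
print(mean_reciprocal_rank([["a", "b"], ["x", "y"]], ["b", "x"]))  # → 0.75
```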
Image registration by local histogram matching
We previously presented an image registration method, referred to as the hierarchical attribute matching mechanism for elastic registration (HAMMER), which demonstrated relatively high accuracy in inter-subject registration of MR brain images. However, the HAMMER algorithm requires the pre-segmentation of brain tissues, since the attribute vectors used to hierarchically match the corresponding pairs of points are defined from the segmented image. In many applications, the segmentation of tissues might be difficult, unreliable or even impossible to complete, which potentially limits the use of the HAMMER algorithm in more generalized applications. To overcome this limitation, we have used local spatial intensity histograms to design a new type of attribute vector for each point in an intensity image. The histogram-based attribute vector is rotationally invariant, and importantly it also captures spatial information by integrating a number of local intensity histograms from multi-resolution images of the original intensity image. The new attribute vectors are able to determine the corresponding points across individual images. Therefore, by hierarchically matching new attribute vectors, the proposed method can perform as successfully as the previous HAMMER algorithm did in registering MR brain images, while providing more generalized applications in registering images of various organs. Experimental results show good performance of the proposed method in registering MR brain images, DTI brain images, CT pelvis images, and MR mouse images.
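A hedged sketch of the kind of histogram-based attribute vector described above. This is our own simplification: a single image scale with hypothetical neighborhood radii and bin counts, rather than the paper's multi-resolution construction. Using circular neighborhoods is what makes the descriptor rotationally invariant:

```python
import numpy as np

def histogram_attribute_vector(image, y, x, radii=(2, 4, 8), bins=16):
    """Concatenate normalized local intensity histograms over circular
    neighborhoods of several radii around pixel (y, x).
    Assumes image intensities are scaled to [0, 1]."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (yy - y) ** 2 + (xx - x) ** 2  # squared distance to center
    vec = []
    for r in radii:
        patch = image[dist2 <= r * r]      # pixels inside the disc
        hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
        vec.append(hist / max(hist.sum(), 1))  # normalize to a distribution
    return np.concatenate(vec)
```

Points in two images can then be matched by comparing these vectors (e.g. by Euclidean distance), without any prior tissue segmentation.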
Multi-Grained Chinese Word Segmentation
Traditionally, word segmentation (WS) adopts the single-granularity formalism, where a sentence corresponds to a single word sequence. However, Sproat et al. (1996) show that the inter-native-speaker consistency ratio over Chinese word boundaries is only 76%, indicating single-grained WS (SWS) imposes unnecessary challenges on both manual annotation and statistical modeling. Moreover, WS results of different granularities can be complementary and beneficial for high-level applications. This work proposes and addresses multi-grained WS (MWS). First, we build a large-scale pseudo MWS dataset for model training and tuning by leveraging the annotation heterogeneity of three SWS datasets. Then we manually annotate 1,500 test sentences with true MWS annotations. Finally, we propose three benchmark approaches by casting MWS as constituent parsing and sequence labeling. Experiments and analysis lead to many interesting findings.
High isolation and compact size microstrip hairpin diplexer
Since conventional microstrip hairpin filters and diplexers are inherently formed by coupled-line resonators, spurious responses and poor isolation performance are unavoidable. This letter presents a simple technique, suitable for an inhomogeneous structure such as microstrip, to remedy this poor performance. The technique is based on the stepped-impedance coupled-line resonator and is verified by the experimental results of the designed 0.9 GHz/1.8 GHz microstrip hairpin diplexer.
Towards Miniaturization of a MEMS-Based Wearable Motion Capture System
This paper presents a modular architecture to develop a wearable system for real-time human motion capture. The system is based on a network of smart inertial measurement units (IMUs) distributed on the human body. Each of these modules is provided with a 32-bit RISC microcontroller (MCU) and miniaturized MEMS sensors: a three-axis accelerometer, a three-axis gyroscope, and a three-axis magnetometer. The MCU collects measurements from the sensors and implements the sensor fusion algorithm, a quaternion-based extended Kalman filter, to estimate the attitude and the gyroscope biases. The design of the proposed IMU, in order to overcome the problems of commercial solutions, aims to improve performance and to reduce size and weight. In this way, it can be easily embedded in a tracksuit for total body motion reconstruction with considerable enhancement of wearability and comfort. Furthermore, the main achievements will be presented with a performance comparison between the proposed IMU and some commercial platforms.
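The quaternion-based attitude estimation at the core of such sensor fusion rests on the standard quaternion kinematic equation driven by the gyroscope rates. The following is a simplified sketch of the gyro-propagation step only (plain Euler integration under a [w, x, y, z] convention), not the authors' extended Kalman filter, which additionally corrects drift and estimates gyro biases from the accelerometer and magnetometer:

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One Euler step of quaternion attitude propagation.
    q = [w, x, y, z] unit quaternion; omega = (wx, wy, wz) body rates in rad/s."""
    wx, wy, wz = omega
    # Skew-symmetric rate matrix of the quaternion kinematics q_dot = 0.5 * Omega @ q
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * dt * (Omega @ q)
    return q / np.linalg.norm(q)   # renormalize to stay a unit quaternion

# Rotating about z at pi/2 rad/s for 1 s should end near a 90° yaw:
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(10000):
    q = integrate_gyro(q, (0.0, 0.0, np.pi / 2), 1e-4)
# q is now close to [cos(pi/4), 0, 0, sin(pi/4)]
```

In a full filter, this propagation forms the prediction step, with accelerometer and magnetometer readings supplying the measurement updates.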
The Muse Object Architecture: A New Operating System Structuring Concept
A next generation operating system should accommodate an ultra large-scale, open, self-advancing, and distributed environment. This environment is dynamic and versatile in nature. In it, an unlimited number of objects, ranging from fine to coarse-grained, are emerging, vanishing, evolving, and being replaced; computers of various processing capacities are dynamically connected and disconnected to networks; systems can optimize object execution by automatically detecting the user's and/or programmer's requirements. In this paper, we investigate several structuring concepts in existing operating systems. These structuring concepts include layered structuring, hierarchical structuring, policy/mechanism separation, collective kernel structuring, object-based structuring, open operating system structuring, virtual machine structuring, and proxy structuring. We argue that these structuring concepts are not sufficient to support the environment described above because they lack the abilities to handle dynamic system behavior and transparency and to control dependency. Thus, we propose a new operating system structuring concept which we call the Muse object architecture. In this architecture, an object is a single abstraction of a computing resource in the system. Each object has a group of meta-objects which provide an execution environment. These meta-objects constitute a meta-space which is represented within the meta-hierarchy. An object is causally connected with its meta-objects: the internal structure of an object is represented by meta-objects; an object can make a request of meta-computing; a meta-object can reflect the results of meta-computing to its object. We discuss object/meta-object separation, the meta-hierarchy, and reflective computing of the architecture.
We then compare the Muse object architecture with the existing structuring concepts. We also demonstrate that the Muse object architecture is suitable for structuring future operating systems by presenting several system services of the Muse operating system such as class systems, a real-time scheduler with hierarchical policies, and fine-grained object management. Class systems facilitate programming in several classes of programming languages. A real-time scheduler with hierarchical policies can meet various types of real-time constraints presented by applications. Fine-grained object management can suit the object granularity to the application, so that an object is efficiently managed according to its granularity. Finally, we present the implementation of the Muse operating system, which is designed based on the Muse object architecture. Version 0.3 of the Muse kernel is running on MC68030-based Sony NEWS workstations.
A PDTB-Styled End-to-End Discourse Parser
Since the release of the large discourse-level annotation of the Penn Discourse Treebank (PDTB), research work has been carried out on certain subtasks of this annotation, such as disambiguating discourse connectives and classifying Explicit or Implicit relations. We see a need to construct a full parser on top of these subtasks and propose a way to evaluate the parser. In this work, we have designed and developed an end-to-end discourse parser to parse free texts in the PDTB style in a fully data-driven approach. The parser consists of multiple components joined in a sequential pipeline architecture, which includes a connective classifier, argument labeler, explicit classifier, non-explicit classifier, and attribution span labeler. Our trained parser first identifies all discourse and non-discourse relations, locates and labels their arguments, and then classifies the sense of the relation between each pair of arguments. For the identified relations, the parser also determines the attribution spans, if any, associated with them. We introduce novel approaches to locate and label arguments, and to identify attribution spans. We also significantly improve on the current state-of-the-art connective classifier. We propose and present a comprehensive evaluation from both component-wise and error-cascading perspectives, in which we illustrate how each component performs in isolation, as well as how the pipeline performs with errors propagated forward. The parser gives an overall system F1 score of 46.80 percent for partial matching utilizing gold standard parses, and 38.18 percent with full automation.
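The sequential pipeline idea can be sketched as follows. The three stages below are toy stand-ins (a real PDTB parser would use trained classifiers and more stages), but they show how each component consumes the previous component's possibly erroneous output, which is exactly what the error-cascading evaluation measures.

```python
# Toy sketch of a sequential discourse-parsing pipeline. Stage names
# follow the paper; the stage logic is placeholder, not the real system.
def connective_classifier(text):
    # Placeholder: mark every 'because'/'but' as a discourse connective.
    return [w for w in text.split() if w.lower() in {"because", "but"}]

def argument_labeler(text, connectives):
    # Placeholder: attach the whole clause on each side of a connective.
    return [(c, text.split(c)[0].strip(), text.split(c)[1].strip())
            for c in connectives if c in text]

def sense_classifier(relations):
    # Placeholder: map each connective to a coarse PDTB-style sense.
    senses = {"because": "Contingency.Cause", "but": "Comparison.Contrast"}
    return [(arg1, conn, arg2, senses[conn]) for conn, arg1, arg2 in relations]

def parse(text):
    conns = connective_classifier(text)     # stage 1
    rels = argument_labeler(text, conns)    # stage 2 sees stage-1 output
    return sense_classifier(rels)           # stage 3 sees stage-2 output

print(parse("He stayed home because it rained"))
```

Because every stage trusts its predecessor, a mistake in the connective classifier propagates through argument labeling and sense classification, so pipeline-level scores are lower than component-wise scores on gold inputs.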
Food and life cycle energy inputs: consequences of diet and ways to increase efficiency
Food consumption is one of the most polluting everyday activities when impacts during product life cycles are considered. Greenhouse gas emissions from the food sector are substantial and need to be lowered to stabilise climate change. Here, we present an inventory of life cycle energy inputs for 150 food items available in Sweden and discuss how energy efficient meals and diets can be composed. Energy inputs in food life cycles vary from 2 to 220 MJ per kg due to a multitude of factors related to animal or vegetable origin, degree of processing, choice of processing and preparation technology and transportation distance. Daily total life cycle energy inputs for diets with a similar dietary energy consumed by one person can vary by a factor of four, from 13 to 51 MJ. Current Swedish food consumption patterns result in life cycle energy inputs ranging from 6900 to 21,000 MJ per person and year. Choice of ingredients and gender differences in food consumption patterns explain the differences. Up to a third of the total energy inputs is related to snacks, sweets and drinks, items with little nutritional value. It is possible to compose a diet compatible with goals for energy efficiency and equal global partition of energy resources. However, such a diet is far from the Swedish average and not in line with current trends.
NOSER: An algorithm for solving the inverse conductivity problem
The inverse conductivity problem is the mathematical problem that must be solved in order for electrical impedance tomography systems to be able to make images. Here we show how this inverse conductivity problem is related to a number of other inverse problems. We then explain the workings of an algorithm that we have used to make images from electrical impedance data measured on the boundary of a circle in two dimensions. This algorithm is based on the method of least squares. It takes one step of a Newton's method, using a constant conductivity as an initial guess. Most of the calculations can therefore be done analytically. The resulting code is named NOSER, for Newton's One-Step Error Reconstructor. It provides a reconstruction with 496 degrees of freedom. The code does not reproduce the conductivity accurately (unless it differs very little from a constant), but it yields useful images. This is illustrated by images reconstructed from numerical and experimental data, including data from a human chest. THE PROBLEM AND ITS CONNECTION WITH OTHER INVERSE PROBLEMS Electrical impedance imaging systems apply currents to the surface S of a body B, measure the induced voltages at the surface, and from this information reconstruct an approximation to the conductivity in the interior [1-3]. The reader interested in background literature is referred to Ref. 4. Mathematically, this can be formulated as follows [5]: if u is the electric potential and σ the conductivity, then u satisfies ∇·(σ(p)∇u(p)) = 0 for p in B (1) and σ ∂u/∂ν(p) = j(p) for p on S (2), where ν denotes the outward unit normal to the body and j denotes the current density applied to the surface of the body. The amount of current leaving the body must be the same as the amount entering, which implies ∫_S j(p) dS = 0 (3). Electrical impedance imaging systems not only apply current, but also measure voltages on the boundary: u(p) = V(p) for p on S (4). For convenience, we choose the ground or reference potential so that ∫_S V dS = 0 (5). The idealized inverse conductivity problem is this: given all possible current densities j and their corresponding voltage distributions V, find the conductivity σ in the interior of the body. This problem is closely related to a number of other inverse problems. For example, if we make the change of variables ψ = σ^{1/2} u, then ψ satisfies the Schrödinger equation −∇²ψ(p) + q(p)ψ(p) = 0 in B, where q(p) = ∇²σ^{1/2}(p)/σ^{1/2}(p). The relations between the inverse conductivity problem and an inverse boundary value problem for the Schrödinger equation are studied in Refs. 6-10. This inverse boundary value problem is, in turn, related to inverse scattering problems.
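The one-step least-squares reconstruction described in the abstract can be illustrated on a toy problem. The sketch below is not the EIT forward model: A is a made-up linear stand-in for the Jacobian of the forward map at the constant initial guess, and lam is an arbitrary regularization weight. It only shows the shape of a single regularized Newton (Gauss-Newton) step from a constant conductivity.

```python
import numpy as np

# Toy one-step Newton reconstruction in the spirit of NOSER: linearize
# the forward map F about a constant conductivity sigma0 and take a
# single regularized least-squares step.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))          # stand-in Jacobian of F at sigma0
sigma_true = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
sigma0 = np.ones(5)                       # constant initial guess
V = A @ sigma_true                        # "measured" boundary data

residual = A @ sigma0 - V                 # F(sigma0) - V
lam = 1e-3                                # regularization weight
# One Newton step: sigma1 = sigma0 - (J^T J + lam*I)^{-1} J^T residual
step = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ residual)
sigma1 = sigma0 - step
```

In the real algorithm the Jacobian at a constant conductivity can be computed analytically, which is what makes the one-step approach cheap.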
Importance of Online Product Reviews from a Consumer ’ s Perspective
Product reviews and ratings are popular tools to support buying decisions of consumers. These tools are also valuable for online retailers, who use rating systems in order to build trust and reputation in the online market. Many online shops offer quantitative ratings, textual reviews or a combination of both. This paper examines the acceptance and usage of ratings and reviews in the context of e-commerce transactions. A survey among 104 German online shoppers was conducted to examine how consumer reviews and ratings are used to support buying decisions. The survey shows that reviews and ratings are an important source of information for consumers. However, qualitative feedback from the survey indicates that the perceived helpfulness of rating systems varies. In particular, comparing user reviews is a very time-consuming process for the customer because of the unstructured nature of textual user reviews. In this paper we summarize these and similar problems and illustrate them with corresponding examples. This will give new insight for future research in the area of user ratings and reviews.
Geolocation Prediction in Social Media Data by Finding Location Indicative Words
Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominantly, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.
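A minimal sketch of scoring one candidate word by information gain ratio, assuming documents are reduced to word sets with a location label (the documents, words, and labels below are made up for illustration). A location-indicative word splits the corpus into location-pure partitions and thus scores high.

```python
import math
from collections import Counter

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def info_gain_ratio(docs, word):
    # docs: list of (set_of_words, location_label).
    # IGR = information gain / split information, where the binary split
    # is "word present" vs "word absent".
    labels = [loc for _, loc in docs]
    base = entropy(list(Counter(labels).values()))
    present = [loc for words, loc in docs if word in words]
    absent = [loc for words, loc in docs if word not in words]
    ig = base
    split_counts = []
    for part in (present, absent):
        if part:
            ig -= len(part) / len(docs) * entropy(list(Counter(part).values()))
            split_counts.append(len(part))
    split_info = entropy(split_counts)
    return ig / split_info if split_info else 0.0

docs = [({"yinz", "today"}, "Pittsburgh"), ({"yinz"}, "Pittsburgh"),
        ({"today"}, "London"), ({"cheers", "today"}, "London")]
print(info_gain_ratio(docs, "yinz"), info_gain_ratio(docs, "today"))
```

Here the regional word "yinz" perfectly separates the two locations, while the common word "today" scores much lower, which is the intuition behind pruning non-LIW features.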
Intelligent Tutoring System-Bayesian Student Model
Nowadays different approaches are coming forth to tutor students using computers. In this paper, a computer-based intelligent tutoring system (ITS) is presented. It presents a new approach to diagnosis in student modeling that emphasizes Bayesian networks (for decision making) and item response theory (for adaptive question selection). The advantage of such an approach through Bayesian networks (a formal framework for uncertainty) is that this structural model allows substantial simplification when specifying parameters (conditional probabilities) that measure student ability at different levels of granularity. In addition, the probabilistic student model is shown to be quicker, more accurate, and more efficient. Since most tutoring systems are static HTML web pages of class textbooks, our intelligent system can help a student navigate through online course materials and recommended learning goals.
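The core probabilistic update in such a student model can be sketched for the single-skill special case. This is a hedged simplification: the paper uses full Bayesian networks over many skills, while the snippet below shows only Bayes' rule applied to one mastery variable with made-up slip and guess probabilities.

```python
# Bayesian update of P(skill mastered) after observing an answer,
# with slip (mastered but answers wrong) and guess (not mastered but
# answers right) noise parameters. Values are illustrative only.
def update_mastery(p_mastered, correct, slip=0.1, guess=0.2):
    p_correct_if_m = 1 - slip
    p_correct_if_not = guess
    if correct:
        num = p_mastered * p_correct_if_m
        den = num + (1 - p_mastered) * p_correct_if_not
    else:
        num = p_mastered * slip
        den = num + (1 - p_mastered) * (1 - p_correct_if_not)
    return num / den

p = 0.5                                 # uninformative prior
for answer in (True, True, False):      # two correct, one incorrect
    p = update_mastery(p, answer)
print(round(p, 3))
```

Two correct answers raise the mastery estimate sharply; a single incorrect answer pulls it back down, but the slip parameter keeps the drop from being catastrophic.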
Baduk (the Game of Go) Improved Cognitive Function and Brain Activity in Children with Attention Deficit Hyperactivity Disorder
OBJECTIVE Attention deficit hyperactivity disorder (ADHD) symptoms are associated with deficits in executive functions. Playing Go involves many aspects of cognitive function, and we hypothesized that it would be effective for children with ADHD. METHODS Seventeen drug-naïve children with ADHD and seventeen age- and sex-matched comparison subjects participated. Participants played Go under an instructor's guidance for 2 hours/day, 5 days/week. Before and at the end of the Go period, clinical symptoms, cognitive functions, and brain EEG were assessed with DuPaul's ADHD scale (ARS), the Children's Depression Inventory (CDI), digit span, the Children's Color Trails Test (CCTT), and an 8-channel QEEG system (LXE3208, Laxtha Inc., Daejeon, Korea). RESULTS There were significant improvements in ARS total score (z=2.93, p<0.01) and inattention score (z=2.94, p<0.01) in children with ADHD. However, there was no significant change in hyperactivity score (z=1.33, p=0.18). There were improvements in digit span total score (z=2.60, p<0.01; z=2.06, p=0.03) and digit span forward score (z=2.21, p=0.02; z=2.02, p=0.04) in both the ADHD and healthy comparison groups. In addition, children with ADHD showed a decreased completion time on the CCTT-2 (z=2.21, p=0.03). The change in the theta/beta ratio of the right prefrontal cortex over the 16 weeks was greater in children with ADHD than in healthy comparisons (F=4.45, p=0.04). The change in the right prefrontal theta/beta ratio had a positive correlation with the ARS inattention score in children with ADHD (r=0.44, p=0.03). CONCLUSION We suggest that playing Go may be effective for children with ADHD by activating hypoaroused prefrontal function and enhancing executive function.
QuickScorer: A Fast Algorithm to Rank Documents with Additive Ensembles of Regression Trees
Learning-to-Rank models based on additive ensembles of regression trees have proven to be very effective for ranking query results returned by Web search engines, a scenario where quality and efficiency requirements are very demanding. Unfortunately, the computational cost of these ranking models is high. Thus, several works have already proposed solutions aiming at improving the efficiency of the scoring process by exploiting features and peculiarities of modern CPUs and memory hierarchies. In this paper, we present QuickScorer, a new algorithm that adopts a novel bitvector representation of the tree-based ranking model, and performs an interleaved traversal of the ensemble by means of simple logical bitwise operations. The performance of the proposed algorithm is unprecedented, due to its cache-aware approach, both in terms of data layout and access patterns, and to a control flow that entails very low branch mis-prediction rates. The experiments on real Learning-to-Rank datasets show that QuickScorer is able to achieve speedups over the best state-of-the-art baseline ranging from 2x to 6.5x.
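The bitvector idea can be sketched on a single tiny tree. This is a hedged simplification: the real algorithm adds cache-aware data layouts and visits nodes in feature order across the whole ensemble, which this sketch omits. Each internal node carries a precomputed mask that zeroes out the leaves unreachable when its test x[feature] <= threshold is false; ANDing the masks of all false nodes leaves the exit leaf as the lowest surviving bit.

```python
# Simplified bitvector scoring for one regression tree.
def quickscore(x, nodes, leaf_values):
    v = (1 << len(leaf_values)) - 1            # all leaves candidate
    for feature, threshold, false_mask in nodes:
        if not (x[feature] <= threshold):      # node is a "false node"
            v &= false_mask                    # drop its left subtree's leaves
    exit_leaf = (v & -v).bit_length() - 1      # lowest set bit = exit leaf
    return leaf_values[exit_leaf]

# A 3-leaf tree: root tests x[0] <= 5; its left child tests x[1] <= 2.
# Bit i of a mask corresponds to leaf i (left-to-right).
nodes = [(0, 5, 0b100),   # root false -> leaves 0,1 unreachable
         (1, 2, 0b110)]   # left child false -> leaf 0 unreachable
leaf_values = [1.0, 2.0, 3.0]
print(quickscore([1, 1], nodes, leaf_values),   # reaches leaf 0
      quickscore([1, 3], nodes, leaf_values),   # reaches leaf 1
      quickscore([6, 0], nodes, leaf_values))   # reaches leaf 2
```

Because the per-node work is a comparison and an AND, no pointer-chasing traversal is needed, which is what makes the approach branch-prediction and cache friendly.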
A Drift Detection Method Based on Active Learning
Several real-world prediction problems are subject to changes over time due to their dynamic nature. These changes, named concept drift, usually lead to an immediate and disastrous loss in classifier performance. In order to cope with such a serious problem, drift detection methods have been proposed in the literature. However, current methods cannot be widely used since they are based either on performance monitoring or on fully labeled data, or even both. Focusing on overcoming these drawbacks, in this work we propose using density variation of the most significant instances as an explicit unsupervised trigger for concept drift detection. Here, density variation is based on Active Learning, and it is calculated from virtual margins projected onto the input space according to classifier confidence. In order to investigate the performance of the proposed method, we have carried out experiments on six databases (four synthetic and two real), focusing on setting up all parameters involved in our method and on comparing it to three baselines, including two supervised drift detectors and one Active Learning-based strategy. The obtained results show that our method, when compared to the supervised baselines, reached better recognition rates in the majority of the investigated databases, while keeping similar or higher detection rates. In terms of the Active Learning-based strategies comparison, our method outperformed the baseline taking into account both recognition and detection rates, even though the baseline employed much fewer labeled samples. Therefore, the proposed method established a better trade-off between amount of labeled samples and detection capability, as well as recognition rate.
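An unsupervised confidence-based trigger of this flavor can be sketched as follows. This is a hedched stand-in, not the paper's method: the margin width and the drift factor below are made-up thresholds, and the paper additionally ties the margin to Active Learning queries; the sketch only shows the idea of monitoring the density of low-confidence samples.

```python
# Unsupervised drift trigger: monitor the fraction of incoming samples
# falling inside a "virtual margin" (low classifier confidence) and flag
# drift when that density rises well above its reference level.
def margin_density(scores, margin=0.5):
    # scores: signed classifier confidences; |score| < margin means the
    # sample lies inside the virtual margin.
    return sum(abs(s) < margin for s in scores) / len(scores)

def drift_detected(reference_scores, window_scores, factor=2.0):
    return margin_density(window_scores) > factor * margin_density(reference_scores)

reference = [1.2, -0.9, 1.5, -1.1, 0.4, -1.3, 0.8, -1.0]   # mostly confident
drifted   = [0.2, -0.1, 0.3, -0.4, 0.1, -0.2, 1.1, -0.3]   # mostly uncertain
print(drift_detected(reference, reference), drift_detected(reference, drifted))
```

No labels are consulted at detection time: a surge of uncertain predictions alone is treated as evidence that the data distribution has moved.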
Maternal parenting stress and its correlates in families with a young child with cerebral palsy.
OBJECTIVE To investigate factors predicting parenting stress in mothers of pre-school children with cerebral palsy. METHOD Eighty mothers and children participated. Mothers completed the Parenting Stress Index (PSI) and the following measures of family functioning: family support, family cohesion and adaptability, coping strategies, family needs and locus of control. Children were assessed using the Griffiths Scales and the Gross Motor Function measure. The child's home environment was assessed using Home Observation for Measuring the Environment. RESULTS Mothers had higher mean total PSI scores than the means for the typical sample; 43% had total PSI scores above the threshold for clinical assessment. Cluster analysis demonstrated five distinct clusters of families, more than half of whom were coping well. High stress items were role restriction, isolation and poor spouse support, and having a child who was perceived as less adaptable and more demanding. Lower stress items indicated that this sample of mothers found their children emotionally reinforcing and had close emotional bonds. Regression analysis showed that the factors most strongly related to parenting stress levels were high family needs, low family adaptability and cognitive impairment in the child. CONCLUSIONS The results confirmed the individuality of families, and that individual characteristics of coping and feeling in control, together with family support and cohesion, are associated with variation in amount of stress experienced in parenting a child with cerebral palsy.
Populism: Demand and Supply
Using individual data on voting and political party manifestos in European countries, we empirically characterize the drivers of voting for populist parties (the demand side) as well as the presence of populist parties (the supply side). We show that the economic insecurity drivers of the demand for populism are significant, especially when considering the key interactions with turnout incentives, neglected in previous studies. Once turnout effects are taken into account, economic insecurity drives consensus for populist policies directly and through indirect negative effects on trust and attitudes towards immigrants. On the supply side, populist parties are more likely to emerge when countries are faced with a systemic crisis of economic security. The orientation choice of populist parties, i.e., whether they arise on the left or right of the political spectrum, is determined by the availability of political space. The typical mainstream-party response is to reduce the distance of their platform from that of successful populist entrants, amplifying the aggregate supply of populist policies.
Gene-expression-based prognostic assays for breast cancer
Several gene-expression-based reference laboratory tests are now available for prognostication of patients diagnosed with breast cancer. For clinical oncologists, it is important to understand the clinical contexts for which these assays were developed in order to use them properly. This Review is aimed at providing a conceptual and technical overview of the steps involved in the development of gene-expression profiling-based prognostic assays. MammaPrint® and Oncotype DX®, two widely utilized assays, are compared with respect to differences in the clinical contexts for their development, technologies used, and clinical utilities with the aim of providing a guide to clinical oncologists for utilization of these assays.
Deep predictive policy training using reinforcement learning
Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small subnetwork with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.
Expressive Singing Synthesis Based on Unit Selection for the Singing Synthesis Challenge 2016
Sample and statistically based singing synthesizers typically require a large amount of data for automatically generating expressive synthetic performances. In this paper we present a singing synthesizer that, using two rather small databases, is able to generate expressive synthesis from an input consisting of notes and lyrics. The system is based on unit selection and uses the Wide-Band Harmonic Sinusoidal Model for transforming samples. The first database focuses on expression and consists of less than 2 minutes of free expressive singing using solely vowels. The second one is the timbre database, which for the English case consists of roughly 35 minutes of monotonic singing of a set of sentences, one syllable per beat. The synthesis is divided in two steps. First, an expressive vowel singing performance of the target song is generated using the expression database. Next, this performance is used as input control of the synthesis using the timbre database and the target lyrics. A selection of synthetic performances has been submitted to the Interspeech Singing Synthesis Challenge 2016, in which they are compared to other competing systems.
Measurement-Theoretic Foundations of Probabilistic Model of JND-Based Vague Predicate Logic
Vagueness is a ubiquitous feature that we know from many expressions in natural languages. It can invite a serious problem: the Sorites Paradox. The aim of this paper is to propose a new version of complete logic for vague predicates - JND-based vague predicate logic (JVL) which can avoid the Sorites Paradox and give answers to all of the Semantic Question, the Epistemological Question and the Psychological Question given by Graff. To accomplish this aim, we provide JVL with a probabilistic model by means of measurement theory.
DATA ANALYSIS IN PUBLIC SOCIAL NETWORKS
Public social networks affect a significant number of people with different professional and personal backgrounds. This paper deals with data analysis and, in addition, with the safety of information managed by online social networks. We show methods for data analysis in social networks based on their scale-free characteristics. Experimental results are discussed for the biggest social network in Slovakia, which has been popular for more than 10 years.
The present and future of event correlation: A need for end-to-end service fault localization
Fault localization is a process of isolating faults responsible for the observable malfunctioning of the managed system. Until recently, fault localization efforts concentrated mostly on diagnosing faults related to the availability of network resources in the lowest layers of the protocol stack. Modern enterprise environments require that fault diagnosis be performed in integrated fashion in multiple layers of the protocol stack and that it include diagnosing performance problems. This paper reviews the existing approaches to fault localization and presents its new facets revealed by the demands of modern enterprise systems. We also present end-to-end service failure diagnosis as a critical step towards multi-layer fault localization in an enterprise environment.
Skew detection of scanned document images
Skewing of the scanned image is an inevitable process and its detection is an important issue for document recognition systems. The skew of a scanned document image specifies the deviation of the text lines from the horizontal or vertical axis. This paper surveys methods to detect this skew in scanned document images.
Bioinformatics: a new field in engineering education
Bioinformatics is a new engineering field poorly served by traditional engineering curricula. Bioinformatics can be defined in several ways, but the emphasis is always on the use of computer and statistical methods to understand biological data, such as the voluminous data produced by high-throughput biological experimentation including gene sequencing and gene chips. As the demand has outpaced the supply of bioinformaticians, the UCSC School of Engineering is establishing undergraduate and graduate degrees in bioinformatics. Although many schools have or are proposing graduate programs in bioinformatics, few are creating undergraduate programs. In this paper, we explore the blend of mathematics, engineering, science, and bioinformatics topics and courses needed for an undergraduate degree in bioinformatics.
Attachment style and subjective motivations for sex.
The relation of attachment style to subjective motivations for sex was investigated in an Internet survey of 1999 respondents. The relations of attachment anxiety and avoidance to overall sexual motivation and to the specific motives for emotional closeness, reassurance, self-esteem enhancement, stress reduction, partner manipulation, protection from partner's negative affect and behavior, power exertion, physical pleasure, nurturing one's partner, and procreation were explored. As predicted, attachment anxiety was positively related to overall sexual motivation and to all specific motives for sex, with the exception of physical pleasure. Avoidance was negatively related to emotional closeness and reassurance as goals of sex and positively related to manipulative use of sex but minimally related to most other motives. Sexual passion was positively related to attachment anxiety and negatively related to avoidance, and anxiety was related to the maintenance of passion over time, whereas avoidance was related to loss of passion over time.
The beginning of infinity: explanations that transform the world
and beautiful, is immensely appealing. The reviewer is tempted to summarize it in a
Microenvironment and photosynthesis of zooxanthellae in scleractinian corals studied with microsensors for O2, pH and light
During experimental light-dark cycles, O2 in the tissue of the colonial scleractinian corals Favia sp. and Acropora sp. reached >250% of air saturation after a few minutes in light. Immediately after darkening, O2 was depleted rapidly, and within 5 min the O2 concentration at the tissue surface reached <2% of air saturation. The pH of the tissue changed within 10 min from about 8.5 in the light to 7.3 in the dark. Oxygen and pH profiles revealed a diffusive boundary layer of flow-dependent thickness, which limited coral respiration in the dark. The light field at the tissue surface (measured as scalar irradiance, E0) differed strongly with respect to light intensity and spectral composition from the incident collimated light (measured as downwelling irradiance, Ed). Scalar irradiance reached up to 180% of Ed at the coral tissue surface for wavelengths subject to less absorption by the coral tissue (600 to 650 nm and >680 nm). The scalar irradiance spectra exhibited bands of chlorophyll a (chl a) (675 nm), chl c (630 to 640 nm) and peridinin (540 nm) absorption and a broad absorption band due to chlorophylls and carotenoids between 400 and 550 nm. The shape of both action spectra and photosynthesis vs irradiance (P vs I) curves depended on the choice of the light intensity parameter. Calculations of initial slopes and onset of light saturation, Ik, showed that P vs E0 curves exhibit a lower initial slope and a higher Ik than corresponding P vs Ed curves. Coral respiration in light was calculated as the difference between the measured gross and net photosynthesis, and was found to be >6 times higher at a saturating irradiance of 350 µE m-2 s-1 than the dark respiration measured under identical hydrodynamic conditions (flow rate of 5 to 6 cm s-1).
Myths die hard-Why?
The author defines a myth as a falsehood perpetuated by a population that chooses not to investigate the underlying truth. He points out that the myth that aerospace engineers are nonproductive in nondefense work hurts the profession. He examines the mechanism used in promoting myths and finds that some myth perpetuators, including the ones that perpetuated the aerospace-engineer myth, are well-compensated for their services. Substantial financial rewards are also seen to be available to those who can correctly recognize the falsehood in myths.
Evolving Use of Distributed Semantics to Achieve Net-centricity
For the US Department of Defense (DoD)'s efforts to achieve net-centricity, more intelligent ways of handling information must be pursued, in particular using machine-interpretable semantic models, i.e., ontologies. One approach, which we've adopted in current and emerging research projects, is to combine Semantic Web technologies with logic programming, thereby utilizing standards-based ontologies and rules and yet ensuring that the runtime automated reasoning over these is efficient. In this paper, we discuss our current Semantic Environment for Enterprise Reasoning (SEER) architecture, which combines an Enterprise Service Bus (ESB) with our Semantic Web Ontologies and Rules for Interoperability with Efficient Reasoning (SWORIER) system. SWORIER converts OWL ontologies and SWRL rules into logic programming, thereby enabling efficient runtime reasoning using Prolog. We also briefly discuss potential enhancements to such an environment, including the use of constraint logic, metareasoning, and hybrid logic.
Efficacy of acellular dermal matrices in revisionary aesthetic breast surgery: a 6-year experience.
BACKGROUND Augmentation mammaplasty and augmentation mastopexy are associated with a substantial primary and secondary revision rate. Capsular contracture (CC), implant malposition, ptosis, asymmetry, and rippling are the main reasons for revisionary surgery in these patients. Traditional corrective techniques have not been completely reliable in preventing or treating these complications. Recently, acellular dermal matrices (ADM) have been used to assist with revisionary surgery with promising results. OBJECTIVE The authors review their 6-year experience using ADM for revisionary surgery in aesthetic patients and evaluate long-term outcomes with this approach. METHODS Patients who underwent revisionary breast augmentation or augmentation mastopexy with ADM in conjunction with standard techniques over a 6-year period between October 2005 and December 2011 were retrospectively reviewed. Only patients with at least 1 year of follow-up were included in the analysis. RESULTS A total of 197 revisions were performed (197 patients). Reasons for revision included CC (61.8%), implant malposition (31.2%), rippling (4.8%), ptosis (4.8%), implant exposure (1.6%), and breast wound (0.5%). The mean ± SD follow-up period was 3.1 ± 1.1 years (range, 0.1-6.1 years). The complication rate was 4.8%, including Baker grade III/IV CC (1.6%), infection (1.6%), implant malposition (0.5%), hematoma (0.5%), and seroma (0.5%). Most (98%) revisions were successful, with no recurrence of the presenting complaint. CONCLUSIONS The use of ADM in conjunction with standard techniques for the reinforcement of weak tissue in revision augmentation and augmentation mastopexy patients appears to be effective.
Presentation and Clinical Outcomes of Choledochal Cysts in Children and Adults: A Multi-institutional Analysis.
IMPORTANCE Choledochal cysts (CCs) are rare, with risk of infection and cancer. OBJECTIVE To characterize the natural history, management, and long-term implications of CC disease. DESIGN, SETTING, AND PARTICIPANTS A total of 394 patients who underwent resection of a CC between January 1, 1972, and April 11, 2014, were identified from an international multi-institutional database. Patients were followed up through September 27, 2014. Clinicopathologic characteristics, operative details, and outcome data were analyzed from May 1, 2014, to October 14, 2014. INTERVENTION Resection of CC. MAIN OUTCOMES AND MEASURES Management, morbidity, and overall survival. RESULTS Among 394 patients, there were 135 children (34.3%) and 318 women (80.7%). Adults were more likely to present with abdominal pain (71.8% vs 40.7%; P < .001) and children were more likely to have jaundice (31.9% vs 11.6%; P < .001). Preoperative interventions were more commonly performed in adults (64.5% vs 31.1%; P < .001), including endoscopic retrograde pancreatography (55.6% vs 27.4%; P < .001), percutaneous transhepatic cholangiography (17.4% vs 5.9%; P < .001), and endobiliary stenting (18.1% vs 4.4%; P < .001). Type I CCs were more often seen in children vs adults (79.7% vs 64.9%; P = .003); type IV CCs predominated in the adult population (23.9% vs 12.0%; P = .006). Extrahepatic bile duct resection with hepaticoenterostomy was the most frequently performed procedure in both age groups (80.3%). Perioperative morbidity was higher in adults (35.1% vs 16.3%; P < .001). On pathologic examination, 10 patients (2.5%) had cholangiocarcinoma. After a median follow-up of 28 months, 5-year overall survival was 95.5%. On follow-up, 13 patients (3.3%) presented with biliary cancer. CONCLUSIONS AND RELEVANCE Presentation of CC varied between children and adults, and resection was associated with a degree of morbidity. Although concomitant cancer was uncommon, it occurred in 3.0% of the patients.
Long-term surveillance is indicated given the possibility of future development of biliary cancer after CC resection.
Challenges, Methodologies, and Issues in the Usability Testing of Mobile Applications
Usability testing of software applications developed for mobile devices is an emerging research area that faces a variety of challenges due to the unique features of mobile devices, limited bandwidth, unreliable wireless networks, and changing contexts (environmental factors). Traditional guidelines and methods used in usability testing of desktop applications may not be directly applicable in a mobile environment. It is therefore essential to develop and adopt research methodologies suited to evaluating the usability of mobile applications. The contribution of this paper is a generic framework for conducting usability tests of mobile applications, developed through a discussion of research questions, methodologies, and usability attributes. The paper provides an overview of existing usability studies and discusses the major research questions that have been investigated; it then proposes the generic framework and provides detailed guidelines on how to conduct such usability studies.
Large-Vocabulary Speech Recognition Algorithms
Providing the computer with a natural interface, including the ability to understand human speech, has been a research goal for almost 40 years. Speech recognition research started with an attempt to decode isolated words from a small vocabulary. As time progressed, the research community began working on large-vocabulary and continuous-speech tasks. Practical versions of such systems have become moderately usable and commercially successful only in the past few years, however. Even now, these commercial applications either restrict the vocabulary to a few thousand words, in the case of banking or airline reservation systems, or require high-bandwidth, high-feedback situations such as dictation, which requires modifying the user’s speech to minimize recognition errors. Early attempts at speech recognition tried to apply expert knowledge about speech production and perception processes, but researchers found that such knowledge was inadequate for capturing the complexities of continuous speech. To date, statistical modeling techniques trained from hundreds of hours of speech have provided most speech recognition advancements. Speech researchers have combined these modeling techniques with the massive increase in available computing power over the past several years to explore complex models with hundreds of thousands of parameters. Multisite speech recognition research cooperation and competition, supported through government agencies such as DARPA, have also fueled advancements in this field. In addition to participating in government-sponsored competitions, industrial labs, universities, and other companies have fostered rapid advances in speech recognition technology by sharing data and algorithms (http://www.nist.gov/speech/publications/tw00). A by-product of these cooperative efforts has been that most successful systems share roughly the same architecture and algorithms because each site immediately copies other sites’ successful algorithms.
To enable next-generation applications such as speech recognition over cellular phones, transcription of call center interactions, and recognition of broadcast news, researchers continue to work on the advanced speech recognition (ASR) problem. An understanding of today’s ASR systems architecture provides a basis for exploring the recent advances motivated by next-generation applications.
Power Factor Correction Boost Converter Based on the Three-State Switching Cell
The need for solid-state ac-dc converters to improve power quality in terms of power factor correction, reduced total harmonic distortion at the input ac mains, and precisely regulated dc output has motivated the investigation of several topologies based on classical converters such as buck, boost, and buck-boost converters. Boost converters operating in continuous-conduction mode have become particularly popular because their utilization results in reduced electromagnetic interference levels. Within this context, this paper introduces a bridgeless boost converter based on a three-state switching cell (3SSC), whose distinct advantages are reduced conduction losses and the use of magnetic elements with minimized size, weight, and volume. The approach also employs the principle of interleaved converters, since it can be extended to a generic number of legs per autotransformer winding and to high power levels. A literature review of boost converters based on the 3SSC is initially presented so that key aspects are identified. The theoretical analysis of the proposed converter is then developed, and a comparison with a conventional boost converter is performed. An experimental prototype rated at 1 kW is implemented to validate the proposal, and relevant issues regarding the novel converter are discussed.
International Conference on Radiation Effects in Semiconductors.
Abstract: Selected papers given at this Conference are briefly reviewed. The meeting emphasized how much is still unknown concerning the nature of defects in semiconductors other than silicon, as well as the need to develop other microscopic probes for investigating materials that are not readily amenable to ESR studies. (Author)
A Task Allocation Schema Based on Response Time Optimization in Cloud Computing
Cloud computing is a newly emerging form of distributed computing that evolved from grid computing. Task scheduling is a core research problem in cloud computing: how to allocate tasks among physical nodes so that the allocation is balanced, each task's execution cost is minimized, or overall system performance is optimal. Unlike previous models, in which the slices of an independent task execute sequentially and the optimization target is processing time, we build a model that targets response time and in which task slices execute in parallel. We then solve it with a method based on an improved adjusting entropy function, and we design a new task-scheduling algorithm. Experimental results show that the response time of the proposed algorithm is much lower than that of the game-theoretic and balanced-scheduling algorithms; moreover, although the game-theoretic algorithm achieves a better objective-function value than the balanced-scheduling algorithm, it is not necessarily superior under parallel execution.
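The scheduling objective described above, minimizing response time when task slices run in parallel on several nodes, can be illustrated with a simple greedy heuristic. This is a hypothetical sketch, not the paper's entropy-function method; the function name and the slice values are invented:

```python
# Greedy sketch of parallel task-slice allocation: since slices execute in
# parallel, the response time is the finish time of the busiest node, so we
# repeatedly place the longest remaining slice on the least-loaded node.

def allocate(slices, num_nodes):
    """Assign each slice (longest first) to the currently least-loaded node.
    Returns the assignment and the response time (maximum node load)."""
    loads = [0.0] * num_nodes
    assignment = {}
    for s in sorted(slices, reverse=True):
        node = loads.index(min(loads))  # least-loaded node so far
        loads[node] += s
        assignment.setdefault(node, []).append(s)
    return assignment, max(loads)

assignment, response_time = allocate([5, 4, 3, 2], num_nodes=2)
print(response_time)  # 7.0 (e.g. {5, 2} on one node, {4, 3} on the other)
```

For this small instance the greedy heuristic happens to reach the optimal response time of 7; in general it is only an approximation, which is why more refined objective functions such as the paper's are of interest.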
Coping strategies in mothers and fathers of preschool and school-age children with autism.
Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.
Introductory Lectures on Contact Geometry
Though contact topology was born over two centuries ago, in the work of Huygens, Hamilton and Jacobi on geometric optics, and has been studied by many great mathematicians, such as Sophus Lie, Elie Cartan and Darboux, it has only recently moved into the foreground of mathematics. The last decade has witnessed many remarkable breakthroughs in contact topology, resulting in a beautiful theory with many potential applications. More specifically, as a coherent – though sketchy – picture of contact topology has developed, a surprisingly subtle relationship has emerged between contact structures and 3- (and 4-)dimensional topology. In addition, the applications of contact topology have extended far beyond geometric optics to include non-holonomic dynamics, thermodynamics and more recently Hamiltonian dynamics [25, 40] and hydrodynamics [12]. Despite its long history and all the recent work in contact geometry, the subject is not overly accessible to those trying to get into the field for the first time. There are a few books giving a brief introduction to the more geometric aspects of the theory, most notably the last chapter in [1], part of Chapter 3 in [34] and an appendix to the book [2]. There have not, however, been many books or survey articles (with the notable exception of [20]) giving an introduction to the more topological aspects of contact geometry. It is this topological approach that has led to many of the recent breakthroughs in contact geometry and to which this paper is devoted. I planned these lectures when asked to give an introduction to contact geometry at the Georgia International Topology Conference in the summer of 2001. My idea was to give an introduction to the “classical” theory of contact topology, in which the characteristic foliation plays a central role, followed by a hint at the more modern trends, where specific foliations take a back seat to dividing curves.
This was much too ambitious for the approximately one and a half hours I had for these lectures, but I nonetheless decided to follow this outline in preparing these lecture notes. These notes begin with an introduction to contact structures in Section 2, where all the basic definitions are given and many examples are discussed. In the following section we consider contact structures near a point and near a surface. It is in
Learning to live with a permanent intestinal ostomy: impact on everyday life and educational needs.
PURPOSE The aim of the study was to explore the impact of a permanent stoma on patients' everyday lives and to gain further insight into their need for ostomy-related education. SUBJECTS AND SETTING The sample population comprised 15 persons with permanent ostomies created to manage colorectal cancer or inflammatory bowel disease. The research setting was the surgical department at a hospital in the Capital Region of Denmark associated with the University of Copenhagen. METHODS Focus group interviews were conducted using a phenomenological hermeneutic approach. Data were collected and analyzed using qualitative content analysis. RESULTS Stoma creation led to feelings of stigma, worries about disclosure, a need for control, and self-imposed limits. Furthermore, patients experienced difficulties reconciling their new lives with their lives before surgery. Participants stated that they need to be seen as whole persons, to have close contact with health care professionals, and to receive trustworthy information about life with an ostomy. Respondents proposed group sessions conducted after hospital discharge and recommended that these sessions be delivered by lay teachers who have a stoma themselves. CONCLUSIONS Self-imposed isolation was often selected as a strategy for avoiding disclosure of the stoma. Patient education using health-promotion methods should take into account the setting and the patient's capacity for effective knowledge transfer; involving lay teachers who have a stoma and using group-based learning processes are recommended when planning and conducting patient education.
An analysis of model-based Interval Estimation for Markov Decision Processes
Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents a theoretical analysis of MBIE and a new variation called MBIE-EB, proving their efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less “online” cousins from the literature.
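The balance between exploration and exploitation that MBIE strikes can be glimpsed in a one-state (bandit) setting: the well-known idea behind MBIE-EB is to act greedily on an empirical value estimate inflated by an exploration bonus proportional to 1/√n. The function below is an illustrative simplification of that idea, not the paper's full MDP algorithm:

```python
import math

# Hedged bandit-style sketch of the MBIE-EB exploration bonus: choose the
# action maximizing (empirical mean + beta / sqrt(visit count)), so rarely
# tried actions look optimistic and get explored.

def mbie_eb_action(means, counts, beta=1.0):
    """Return the index of the action with the highest optimistic score."""
    scores = [m + beta / math.sqrt(max(n, 1)) for m, n in zip(means, counts)]
    return scores.index(max(scores))

# Action 0 looks slightly better on average, but action 1 has barely been
# tried, so the exploration bonus makes it the optimistic choice.
print(mbie_eb_action(means=[0.6, 0.5], counts=[100, 1]))  # 1
```

Once both actions are well sampled the bonuses shrink and the greedy choice takes over, which is the mechanism behind the efficiency guarantees the paper analyzes.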
Yoga ameliorates performance anxiety and mood disturbance in young professional musicians.
Yoga and meditation can alleviate stress, anxiety, mood disturbance, and musculoskeletal problems, and can enhance cognitive and physical performance. Professional musicians experience high levels of stress, performance anxiety, and debilitating performance-related musculoskeletal disorders (PRMDs). The goal of this controlled study was to evaluate the benefits of yoga and meditation for musicians. Young adult professional musicians who volunteered to participate in a 2-month program of yoga and meditation were randomized to a yoga lifestyle intervention group (n = 15) or to a group practicing yoga and meditation only (n = 15). Additional musicians were recruited to a no-practice control group (n = 15). Both yoga groups attended three Kripalu Yoga or meditation classes each week. The yoga lifestyle group also experienced weekly group practice and discussion sessions as part of their more immersive treatment. All participants completed baseline and end-program self-report questionnaires that evaluated music performance anxiety, mood, PRMDs, perceived stress, and sleep quality; many participants later completed a 1-year follow-up assessment using the same questionnaires. Both yoga groups showed a trend towards less music performance anxiety and significantly less general anxiety/tension, depression, and anger at end-program relative to controls, but showed no changes in PRMDs, stress, or sleep. Similar results in the two yoga groups, despite psychosocial differences in their interventions, suggest that the yoga and meditation techniques themselves may have mediated the improvements. Our results suggest that yoga and meditation techniques can reduce performance anxiety and mood disturbance in young professional musicians.
A segmentation-free approach for skeletonization of gray-scale images via anisotropic vector diffusion
In this paper we describe a method for skeletonization of gray-scale images without segmentation. Our method is based on anisotropic vector diffusion. The skeleton strength map, calculated from the diffused vector field, provides a measure of how likely each pixel is to lie on a skeleton. The final skeletons are traced from the skeleton strength map, in a manner that mimics edge detection from the edge strength map of the original image. Results on real and synthesized images demonstrate the performance of our algorithm.
A Multi-Cloud Framework for Measuring and Describing Performance Aspects of Cloud Services Across Different Application Types
Cloud services have emerged as an innovative IT provisioning model in recent years. However, serious concerns have arisen regarding their varying performance, caused by multitenancy and resource-sharing issues. These issues make it very difficult to provide any kind of performance estimation at application design or deployment time. The aim of this paper is to present a mechanism and process for measuring the performance of various Cloud services and describing this information in a machine-understandable format. The framework is responsible for organizing the execution and can support multiple Cloud providers. Furthermore, we present approaches for measuring service performance using specialized metrics and for ranking the services according to a weighted combination of cost, performance, and workload.
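The weighted ranking step mentioned at the end of the abstract can be sketched as follows. The service names, metric fields, and weights here are invented for illustration and are not the paper's actual framework or API:

```python
# Hypothetical sketch of ranking cloud services by a weighted combination of
# cost, performance and workload. Each metric is min-max normalized to [0, 1]
# so the weights are comparable; lower cost/load and higher perf are better.

def rank_services(services, weights):
    """services: {name: {'cost': .., 'perf': .., 'load': ..}}.
    Returns service names sorted best-first by weighted score."""
    def norm(values, higher_is_better):
        lo, hi = min(values.values()), max(values.values())
        span = (hi - lo) or 1.0  # avoid division by zero when all equal
        return {k: ((v - lo) / span if higher_is_better else (hi - v) / span)
                for k, v in values.items()}

    cost = norm({n: s['cost'] for n, s in services.items()}, False)
    perf = norm({n: s['perf'] for n, s in services.items()}, True)
    load = norm({n: s['load'] for n, s in services.items()}, False)
    score = {n: weights['cost'] * cost[n] + weights['perf'] * perf[n]
             + weights['load'] * load[n] for n in services}
    return sorted(services, key=lambda n: score[n], reverse=True)

services = {'A': {'cost': 10, 'perf': 90, 'load': 30},
            'B': {'cost': 5, 'perf': 60, 'load': 20},
            'C': {'cost': 8, 'perf': 95, 'load': 60}}
ranked = rank_services(services, {'cost': 0.3, 'perf': 0.5, 'load': 0.2})
print(ranked)  # ['C', 'A', 'B']
```

Shifting weight toward cost would promote the cheap service B, which is exactly the kind of application-dependent trade-off the framework is meant to expose.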
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker
Artificial intelligence has seen a number of breakthroughs in recent years, with games often serving as significant milestones. A common feature of games with these successes is that they involve information symmetry among the players, where all players have identical information. This property of perfect information, though, is far more common in games than in real-world problems. Poker is the quintessential game of imperfect information, and it has been a longstanding challenge problem in artificial intelligence. In this paper we introduce DeepStack, a new algorithm for imperfect information settings such as poker. It combines recursive reasoning to handle information asymmetry, decomposition to focus computation on the relevant decision, and a form of intuition about arbitrary poker situations that is automatically learned from self-play games using deep learning. In a study involving dozens of participants and 44,000 hands of poker, DeepStack becomes the first computer program to beat professional poker players in heads-up no-limit Texas hold’em. Furthermore, we show this approach dramatically reduces worst-case exploitability compared to the abstraction paradigm that has been favored for over a decade.
Performance analysis of space time block code in MIMO-OFDM systems
The MIMO-OFDM system, which combines MIMO and OFDM technology, can achieve a high data transmission rate with reliability through diversity. MIMO-OFDM with space-time block coding (STBC) performs well against multipath effects and frequency-selective fading; moreover, both its BER and its coding complexity are low. In this paper, a simulation model of a STBC-based MIMO-OFDM system is built and its transmission performance under different channels is analyzed. The simulation results show that the MIMO-OFDM system based on STBC outperforms MIMO-OFDM systems without STBC in BER performance.
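The STBC at the heart of such systems is typically the classic Alamouti 2x1 code. The sketch below shows it for a single flat-fading subcarrier with no noise (the full OFDM modulation chain and the paper's simulation setup are omitted); two symbols are sent from two antennas over two time slots, and linear combining at the receiver recovers both with a diversity gain of |h1|² + |h2|²:

```python
# Noise-free Alamouti 2x1 STBC on one flat-fading subcarrier.
# Slot 1 transmits (s1 from antenna 1, s2 from antenna 2);
# slot 2 transmits (-conj(s2) from antenna 1, conj(s1) from antenna 2).

def alamouti(s1, s2, h1, h2):
    r1 = h1 * s1 + h2 * s2                           # received in slot 1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()  # received in slot 2
    gain = abs(h1) ** 2 + abs(h2) ** 2               # diversity gain
    # Linear combining yields gain * s_i, so divide it back out.
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
    return s1_hat, s2_hat

s1_hat, s2_hat = alamouti(1 + 1j, -1 + 1j, h1=0.8 - 0.3j, h2=0.2 + 0.9j)
print(abs(s1_hat - (1 + 1j)) < 1e-9, abs(s2_hat - (-1 + 1j)) < 1e-9)  # True True
```

In a real MIMO-OFDM chain this combining runs per subcarrier with noise present, which is where the BER advantage over an uncoded system shows up.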
Appropriateness of early breast cancer management in relation to patient and hospital characteristics: a population based study in Northern Italy
Administrative data may provide valuable information for monitoring the quality of care at population level and offer an efficient way of gathering data on individual patterns of care, and also to shed light on inequalities in access to appropriate medical care. The aim of the study was to investigate the role of patient and hospital characteristics in the initial treatment of early breast cancer using administrative data. Incident breast cancer patients were identified from hospital discharge records and linked to the radiotherapy outpatient database during 2000–2004 in the Piedmont region of Northwestern Italy. Women treated with breast-conserving surgery followed by radiotherapy (BCS + RT) were compared to those treated with BCS without radiotherapy (BCS w/o RT) or mastectomy using multinomial logistic regression models. Out of 16,022 incident cases, 46.2% received BCS + RT, 20.3% received BCS w/o RT, and 33.5% received a mastectomy. Compared to BCS + RT, the factors associated with BCS w/o RT were: increased age (OR = 1.54; 95% CI = 1.29–1.85, for ages 70–79 vs. <50), being unmarried (1.24; 1.13–1.36), presence of co-morbidities (1.32; 1.10–1.58), being treated at hospitals with low surgical volume (1.31; 1.07–1.60 for hospitals with less than 50 vs. ≥150 interventions/year), and living far from radiotherapy facilities (1.75; 1.39–2.20 for those at a distance of >45 min). These same factors were also associated with mastectomy. During the 5-year period observed, there was a trend of reduced probability of receiving a mastectomy (0.70; 0.56–0.88 for 2004 vs. 2000). The presence or absence of nodal involvement was positively associated with mastectomy (2.28; 1.83–2.85) and negatively associated with BCS w/o RT (0.65; 0.56–0.76). After adjustment for potential confounders, education level did not show any association with the type of treatment. 
Social and geographical factors, in addition to hospital specialization, should be considered to reduce inappropriateness of care for breast cancer.
Prospects for Mobile Health in Pakistan and Other Developing Countries
Pakistan is a developing country with more than half of its population located in rural areas. These areas have neither sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has created an unprecedented opportunity to deliver health care facilities and infrastructure to these rural areas of Pakistan, as well as to other developing countries. Mobile Health (mHealth), the provision of health care services through mobile telephony, will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. This paper highlights the growth of the ICT sector and the status of health care facilities in developing countries, and explores the prospects of mHealth as a transformer of health systems and service delivery, especially in remote rural areas.
A Survey on Consensus Mechanisms and Mining Management in Blockchain Networks
The past decade has witnessed a rapid evolution in blockchain technologies, which have attracted tremendous interest from both research communities and industry. The blockchain network originated in the Internet financial sector as a decentralized, immutable ledger system for ordering transactional data. Nowadays, it is envisioned as a powerful backbone/framework for decentralized data processing and data-driven self-organization in flat, open-access networks. In particular, the plausible characteristics of decentralization, immutability, and self-organization are primarily owing to the unique decentralized consensus mechanisms introduced by blockchain networks. This survey is motivated by the lack of a comprehensive literature review on the development of decentralized consensus mechanisms in blockchain networks. In this survey, we provide a systematic vision of the organization of blockchain networks. Emphasizing the unique characteristics of incentivized consensus in blockchain networks, our in-depth review of state-of-the-art consensus protocols covers both the perspective of distributed consensus system design and the perspective of incentive mechanism design. From a game-theoretic point of view, we also provide a thorough review of the strategies adopted for self-organization by individual nodes in blockchain backbone networks. We then provide a comprehensive survey of the emerging applications of blockchain networks in a wide range of areas, highlighting how the consensus mechanisms impact these applications. Finally, we discuss several open issues in the design of blockchain consensus protocols and related potential research directions.
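The most widely deployed incentivized consensus mechanism in this family, Nakamoto-style proof of work, can be sketched in a few lines. The header string is invented, and the difficulty is kept tiny so the search finishes instantly:

```python
import hashlib

# Minimal proof-of-work sketch: search for a nonce whose SHA-256 digest of
# (header + nonce) starts with a required number of zero hex digits. Finding
# the nonce is expensive; verifying it takes a single hash.

def mine(header, difficulty):
    target = '0' * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f'{header}{nonce}'.encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine('block-42', difficulty=3)
# Any peer can re-hash (header, nonce) once to verify the work.
print(digest.startswith('000'))  # True
```

Raising the difficulty by one hex digit multiplies the expected search cost by 16, which is the knob real networks tune to keep block intervals stable; the incentive-design questions the survey reviews arise on top of this primitive.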
Recovering traceability links in software artifact management systems using information retrieval methods
The main drawback of existing software artifact management systems is the lack of automatic or semi-automatic traceability link generation and maintenance. We have improved an artifact management system with a traceability recovery tool based on Latent Semantic Indexing (LSI), an information retrieval technique. We have assessed LSI to identify the strengths and limitations of using information retrieval techniques for traceability recovery, and we identified the need for an incremental approach. The method and the tool have been evaluated during the development of seventeen software projects involving about 150 students. We observed that although tools based on information retrieval provide useful support for the identification of traceability links during software development, they are still far from supporting a complete semi-automatic recovery of all links. The results of our experience have also shown that such tools can help to identify quality problems in the textual descriptions of traced artifacts.
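The retrieval step behind such traceability tools can be sketched with a plain term-frequency vector-space model. LSI additionally factors the term-document matrix with an SVD to capture latent topics; that step is omitted here, and the artifact names and texts are invented:

```python
import math
from collections import Counter

# Vector-space sketch of IR-based traceability recovery: represent each
# artifact as a bag of term frequencies and rank candidate targets by cosine
# similarity to the source artifact.

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def trace_links(source, targets):
    """Return target artifact names sorted by similarity to `source`."""
    sv = Counter(source.lower().split())
    sims = {name: cosine(sv, Counter(text.lower().split()))
            for name, text in targets.items()}
    return sorted(sims, key=sims.get, reverse=True)

targets = {'UC1': 'user login password authentication',
           'UC2': 'report export pdf printing'}
ranked = trace_links('class LoginManager checks user password', targets)
print(ranked)  # ['UC1', 'UC2']
```

A real tool would apply a similarity threshold to propose candidate links rather than rank everything, which is where the false positives that limit full automation come from.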
Modeling trial by trial and block feedback in perceptual learning
Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but through different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model drive performance improvement: external trial-by-trial feedback when available, the self-generated output serving as internal feedback when no external feedback is available, and adaptive criterion control based on block feedback. By simulating a comprehensive feedback study (Herzog & Fahle, 1997), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning.
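The role of the external trial-by-trial teaching signal can be illustrated with a toy delta-rule reweighting sketch, far simpler than the full AHRM (one weight, hand-picked samples and learning rate, all invented for illustration):

```python
# Toy reweighting sketch: a single weight is nudged by the mismatch between
# the teaching signal and the model's decision. With external trial-by-trial
# feedback the teaching signal is the true label; the AHRM substitutes the
# model's own output as an internal teaching signal when none is given.

def train(samples, w=-0.5, lr=0.5, epochs=2):
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if w * x > 0 else -1  # model's decision
            w += lr * (label - pred) * x   # external-feedback update
    return w

samples = [(0.2, 1), (-0.3, -1), (0.8, 1), (-0.6, -1)]  # label = sign(x)
w = train(samples)
print(all((1 if w * x > 0 else -1) == y for x, y in samples))  # True
```

With the internal (self-generated) signal, `label` would be replaced by `pred`, so the update vanishes and only criterion shifts or stimulus statistics can change behavior, mirroring why external feedback matters in some conditions but not others.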
Food prices and obesity: evidence and policy implications for taxes and subsidies.
CONTEXT Pricing policies have been posited as potential policy instruments to address the increasing prevalence of obesity. This article examines whether altering the cost of unhealthy, energy-dense foods, compared with healthy, less-dense foods through the use of fiscal pricing (tax or subsidy) policy instruments would, in fact, change food consumption patterns and overall diet enough to significantly reduce individuals' weight outcomes. METHODS This article examined empirical evidence regarding the food and restaurant price sensitivity of weight outcomes based on a literature search to identify peer-reviewed English-language articles published between 1990 and 2008. Studies were identified from the Medline, PubMed, Econlit, and PAIS databases. The fifteen search combinations used the terms obesity, body mass index, and BMI each in combination with the terms price, prices, tax, taxation, and subsidy. FINDINGS The studies reviewed showed that when statistically significant associations were found between food and restaurant prices (taxes) and weight outcomes, the effects were generally small in magnitude, although in some cases they were larger for low-socioeconomic status (SES) populations and for those at risk for overweight or obesity. CONCLUSIONS The limited existing evidence suggests that small taxes or subsidies are not likely to produce significant changes in BMI or obesity prevalence but that nontrivial pricing interventions may have some measurable effects on Americans' weight outcomes, particularly for children and adolescents, low-SES populations, and those most at risk for overweight. Additional research is needed to be able to draw strong policy conclusions regarding the effectiveness of fiscal-pricing interventions aimed at reducing obesity.
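The review's central point, that small price changes translate into small weight changes, is simple elasticity arithmetic. The elasticity value below is hypothetical, chosen only to fall in the "small in magnitude" range the review describes, and is not a figure taken from the reviewed studies:

```python
# Back-of-the-envelope illustration of why small taxes move weight little:
# percent change in BMI = (price elasticity of BMI) x (percent price change).

def bmi_change(price_increase_pct, elasticity):
    """Percent change in BMI implied by a percent price increase."""
    return elasticity * price_increase_pct

# A 10% tax on energy-dense food with a hypothetical BMI price elasticity
# of -0.05 implies about half a percent lower BMI:
print(round(bmi_change(10, -0.05), 6))  # -0.5
```

Even doubling the assumed elasticity leaves the implied effect around one percent of BMI, consistent with the conclusion that only nontrivial pricing interventions would yield measurable population effects.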
A robust automatic crack detection method from noisy concrete surfaces
In the maintenance of concrete structures, crack detection is important for inspection and diagnosis. However, it is difficult to detect cracks automatically. In this paper, we propose a robust automatic crack-detection method for noisy concrete surface images. The proposed method includes two preprocessing steps and two detection steps. The first preprocessing step is a subtraction process using the median filter to remove slight variations, such as shading, from concrete surface images; only the original image is used in this preprocessing. In the second preprocessing step, a multi-scale line filter with the Hessian matrix is used both to emphasize cracks against blebs or stains and to adapt to the width variation of cracks. After preprocessing, probabilistic relaxation is used to detect cracks coarsely and to suppress noise; no parameters need to be optimized in the relaxation. Finally, using the results of the relaxation process, locally adaptive thresholding is performed to detect cracks more finely. We evaluate the robustness and accuracy of the proposed method quantitatively on 60 actual noisy concrete surface images.
Phase I study of the orally administered butyrate prodrug, tributyrin, in patients with solid tumors.
Butyrates have been studied as cancer differentiation agents in vitro and as a treatment for hemoglobinopathies. Tributyrin, a triglyceride with butyrate molecules esterified at the 1, 2, and 3 positions, induces differentiation and/or growth inhibition of a number of cell lines in vitro. When given p.o. to rodents, tributyrin produces substantial plasma butyrate concentrations. We treated 13 patients with escalating doses of tributyrin from 50 to 400 mg/kg/day. Doses were administered p.o. after an overnight fast, once daily for 3 weeks, followed by a 1-week rest. Intrapatient dose escalation occurred after two courses without toxicity greater than grade 2. The time course of butyrate in plasma was assessed on days 1 and 15 and after any dose escalation. Grade 3 toxicities consisted of nausea, vomiting, and myalgia. Grades 1 and 2 toxicities included diarrhea, headache, abdominal cramping, nausea, anemia, constipation, azotemia, lightheadedness, fatigue, rash, alopecia, odor, dysphoria, and clumsiness. There was no consistent increase in hemoglobin F with tributyrin treatment. Peak plasma butyrate concentrations occurred between 0.25 and 3 h after dose, increased with dose, and ranged from 0 to 0.45 mM. Peak concentrations did not increase in three patients who had dose escalation. Butyrate pharmacokinetics were not different on days 1 and 15. Because peak plasma concentrations near those effective in vitro (0.5-1 mM) were achieved, but butyrate disappeared from plasma by 5 h after dose, we are now pursuing dose escalation with dosing three times daily, beginning at a dose of 450 mg/kg/day.
Learning Gaussian Graphical Models With Fractional Marginal Pseudo-likelihood
We propose a Bayesian approximate inference method for learning the dependence structure of a Gaussian graphical model. Using pseudo-likelihood, we derive an analytical expression to approximate the marginal likelihood for an arbitrary graph structure without invoking any assumptions about decomposability. The majority of the existing methods for learning Gaussian graphical models are either restricted to decomposable graphs or require specification of a tuning parameter that may have a substantial impact on learned structures. By combining a simple sparsity-inducing prior for the graph structures with a default reference prior for the model parameters, we obtain a fast and easily applicable scoring function that works well even for high-dimensional data. We demonstrate the favourable performance of our approach by large-scale comparisons against the leading methods for learning non-decomposable Gaussian graphical models. A theoretical justification for our method is provided by showing that it yields a consistent estimator of the graph structure.
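The pseudo-likelihood factorization at the heart of such methods scores a candidate graph by the product, over nodes, of each variable's conditional Gaussian likelihood given its neighbours. The plug-in maximum-likelihood sketch below (pure NumPy, assuming centered data) is only a simplified stand-in for the fractional marginal pseudo-likelihood derived in the paper:

```python
import numpy as np

def gaussian_pseudo_loglik(X, graph):
    """Pseudo-log-likelihood of a Gaussian graphical model: the sum over
    nodes of the conditional Gaussian log-likelihood of each variable
    given its neighbours. X is an (n, d) matrix of centered data; graph
    is a d x d symmetric 0/1 adjacency structure."""
    n, d = X.shape
    total = 0.0
    for j in range(d):
        nbrs = [k for k in range(d) if graph[j][k] and k != j]
        if nbrs:
            A = X[:, nbrs]
            # Regress node j on its neighbours (ML plug-in estimate).
            coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ coef
        else:
            resid = X[:, j]
        sigma2 = resid @ resid / n
        total += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return total
```

Graphs whose edges capture real conditional dependence obtain higher scores, which is the signal a structure-learning search exploits.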
Revisiting the commons: local lessons, global challenges.
In a seminal paper, Garrett Hardin argued in 1968 that users of a commons are caught in an inevitable process that leads to the destruction of the resources on which they depend. This article discusses new insights about such problems and the conditions most likely to favor sustainable uses of common-pool resources. Some of the most difficult challenges concern the management of large-scale resources that depend on international cooperation, such as fresh water in international basins or large marine ecosystems. Institutional diversity may be as important as biological diversity for our long-term survival.
Analysis of preprocessing methods on classification of Turkish texts
Preprocessing is an important and critical step in information retrieval and text mining. The objective of this study is to analyze the effect of preprocessing methods on the classification of Turkish texts. We compiled two large datasets from Turkish newspapers using a crawler. On these two compiled datasets, together with two additional datasets, we perform a detailed analysis of preprocessing methods such as stemming, stopword filtering, and word weighting for Turkish text classification, and we report the results of extensive experiments.
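A minimal sketch of such a pipeline is shown below. The tiny stopword list and crude suffix stripper are illustrative placeholders, not the actual Turkish linguistic resources used in the study; the weighting is plain tf-idf:

```python
import math
from collections import Counter

TURKISH_STOPWORDS = {"ve", "bir", "bu", "da", "de", "ile"}  # toy list
SUFFIXES = ("lerin", "ların", "ler", "lar")  # crude stemmer stand-in

def preprocess(text):
    # Lowercase, drop stopwords, strip one plural-style suffix per token.
    tokens = [t for t in text.lower().split() if t not in TURKISH_STOPWORDS]
    stemmed = []
    for t in tokens:
        for suf in SUFFIXES:
            if t.endswith(suf) and len(t) > len(suf) + 2:
                t = t[: -len(suf)]
                break
        stemmed.append(t)
    return stemmed

def tfidf(docs):
    # docs: list of token lists; returns one {term: weight} dict per doc.
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]
```

After stemming, inflected forms collapse to a shared stem, and tf-idf down-weights terms that occur in every document, which is exactly the effect whose impact on classification accuracy the study measures.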
Marriage, honesty, and stability
Many centralized two-sided markets form a matching between participants by running a stable marriage algorithm. It is a well-known fact that no matching mechanism based on a stable marriage algorithm can guarantee truthfulness as a dominant strategy for participants. However, as we will show in this paper, in a probabilistic setting where the preference lists of one side of the market are composed of only a constant (independent of the size of the market) number of entries, each drawn from an arbitrary distribution, the number of participants that have more than one stable partner is vanishingly small. This proves (and generalizes) a conjecture of Roth and Peranson [23]. As a corollary of this result, we show that, with high probability, the truthful strategy is the best response for a given player when the other players are truthful. We also analyze equilibria of the deferred acceptance stable marriage game. We show that the game with complete information has an equilibrium in which a (1 - o(1)) fraction of the strategies are truthful in expectation. In the more realistic setting of a game of incomplete information, we will show that the set of truthful strategies forms a (1 + o(1))-approximate Bayesian-Nash equilibrium. Our results have implications in many practical settings and were inspired by the work of Roth and Peranson [23] on the National Residency Matching Program.
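For reference, the deferred acceptance (Gale-Shapley) algorithm underlying such mechanisms can be sketched as follows (man-proposing variant, complete preference lists on both sides assumed):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Man-proposing deferred acceptance (Gale-Shapley).
    men_prefs[m] and women_prefs[w] are complete preference lists,
    most preferred first. Returns a stable matching as {man: woman}."""
    # rank[w][m] = position of m in w's list (lower is better).
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}  # woman -> man
    while free:
        m = free.pop()
        # m proposes to the best woman he has not yet proposed to.
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])  # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)           # w rejects m; he stays free
    return {m: w for w, m in engaged.items()}
```

The proposing side receives its optimal stable partners, which is precisely why truthfulness fails to be dominant for the receiving side in the worst case; the paper's probabilistic result shows this incentive to misreport vanishes with short random preference lists.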
Pressurized liquid extraction in determination of polychlorinated biphenyls and organochlorine pesticides in fish samples
Pressurized liquid extraction (PLE) is a relatively new technique applicable for the extraction of persistent organic pollutants from various matrices. The main advantages of this method are short extraction time and low consumption of extraction solvent. The effects of various operational parameters (i.e. extraction temperature, number of static cycles, and extraction solvent mixtures) on the PLE efficiency were investigated in this study. Fish muscle tissue containing 3.2% (w/w) lipids and native polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs) and other related compounds was used for testing. Purification of crude extracts was carried out by gel permeation chromatography employing Bio-Beads S-X3. Identification and quantitation of target indicator PCBs and OCPs was performed by high-resolution gas chromatography (HRGC) with two parallel electron-capture detectors (ECDs). Results obtained by the optimized PLE procedure were compared with conventional Soxhlet extraction (the same extraction solvent mixtures, hexane–dichloromethane (1:1 v/v) and hexane–acetone (4:1 v/v), were used). The recoveries obtained by PLE operated at 90–120 °C were either comparable to "classic" Soxhlet extraction (for higher-chlorinated PCB congeners and the DDT group) or even better (for lower-chlorinated analytes). The highest recoveries were obtained for three static 5-min extraction cycles.
SemanticPaint: Interactive 3D Labeling and Learning at your Fingertips
We present a new interactive and online approach to 3D scene understanding. Our system, SemanticPaint, allows users to simultaneously scan their environment whilst interactively segmenting the scene simply by reaching out and touching any desired object or surface. Our system continuously learns from these segmentations and labels new, unseen parts of the environment. Unlike offline systems, where capture, labeling, and batch learning often take hours or even days to perform, our approach is fully online. This provides users with continuous live feedback of the recognition during capture, allowing them to immediately correct errors in the segmentation and/or learning, a feature that has so far been unavailable to batch and offline methods. This leads to models that are tailored or personalized specifically to the user's environments and object classes of interest, opening up the potential for new applications in augmented reality, interior design, and human/robot navigation. It also provides the ability to capture substantial labeled 3D datasets for training large-scale visual recognition systems.
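The flavour of this continuous online learning can be conveyed with a toy sketch: a running-mean classifier that is updated each time the user labels a region and is immediately usable for predicting labels of new observations. This is only an illustrative stand-in under our own simplifying assumptions; the actual SemanticPaint pipeline is considerably more involved:

```python
import numpy as np

class OnlineNearestMean:
    """Toy online classifier: keeps a running mean feature vector per
    label and assigns new observations to the nearest class mean."""

    def __init__(self):
        self.sums = {}    # label -> running sum of feature vectors
        self.counts = {}  # label -> number of labeled observations

    def update(self, label, feat):
        # Incorporate one user-labeled observation in O(d) time, so
        # learning keeps pace with live capture.
        feat = np.asarray(feat, dtype=float)
        self.sums[label] = self.sums.get(label, 0.0) + feat
        self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, feat):
        # Assign the label whose mean feature vector is closest.
        feat = np.asarray(feat, dtype=float)
        return min(self.sums,
                   key=lambda l: np.linalg.norm(self.sums[l] / self.counts[l]
                                                - feat))
```

Because each update is constant-time per labeled sample, predictions reflect the user's corrections immediately, mirroring the live-feedback property the system emphasizes.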
Is parenting style a predictor of suicide attempts in a representative sample of adolescents?
BACKGROUND Suicidal ideation and suicide attempts are serious but not rare conditions in adolescents. However, several research and practical suicide-prevention initiatives discuss the possibility of preventing serious self-harm, and profound knowledge about risk and protective factors is therefore necessary. The aim of this study is a) to clarify the role of parenting behavior and parenting styles in adolescents' suicide attempts and b) to identify other statistically significant and clinically relevant risk and protective factors for suicide attempts in a representative sample of German adolescents. METHODS In 2007/2008, a representative written survey of N = 44,610 students in the 9th grade of different school types in Germany was conducted. The survey assessed the lifetime prevalence of suicide attempts as well as potential predictors including parenting behavior. A three-step statistical analysis was carried out: I) As a basic model, the association between parenting and suicide attempts was explored via binary logistic regression controlled for age and sex. II) The predictive value of 13 additional potential risk/protective factors was analyzed with a separate binary logistic regression for each predictor; non-significant predictors were excluded in Step III. III) In a multivariate binary logistic regression analysis, all significant predictor variables from Step II and the parenting styles were included after testing for multicollinearity. RESULTS Three parental variables showed a relevant association with suicide attempts in adolescents (all protective): mother's warmth in childhood, father's warmth in childhood, and mother's control in adolescence (Step I). In the full model (Step III), Authoritative parenting (protective; OR = 0.79) and Rejecting-Neglecting parenting (risk; OR = 1.63) were identified as significant predictors (p < .001) of suicide attempts.
Seven further variables were statistically significant and clinically relevant: ADHD, female sex, smoking, binge drinking, absenteeism/truancy, migration background, and parental separation events. CONCLUSIONS Parenting style does matter: while children of Authoritative parents profit, children of Rejecting-Neglecting parents are put at risk, as we were able to show for suicide attempts in adolescence. Some of the identified risk factors contribute new knowledge and point to potential areas of intervention for specific groups such as migrants or children diagnosed with ADHD.
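The multivariate binary logistic regression of Step III can be illustrated on synthetic data. The data below are simulated purely for illustration (they are not the study's data), with true effect sizes chosen to loosely echo the reported odds ratios of 0.79 and 1.63:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two dummy predictors loosely mirroring the parenting-style indicators;
# true log-odds effects log(0.79) ~ -0.24 and log(1.63) ~ 0.49.
authoritative = rng.integers(0, 2, n)
rejecting = rng.integers(0, 2, n)
logit = -2.0 - 0.24 * authoritative + 0.49 * rejecting
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit the logistic model by plain gradient ascent on the average
# log-likelihood (a minimal stand-in for a statistics package).
X = np.column_stack([np.ones(n), authoritative, rejecting])
beta = np.zeros(3)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p) / n

# Exponentiated coefficients are odds ratios: OR < 1 protective,
# OR > 1 indicating elevated risk.
odds_ratios = np.exp(beta[1:])
```

With enough data, the recovered odds ratios fall on the correct side of 1 for the protective and risk factors, which is how the reported ORs of 0.79 and 1.63 are interpreted.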