Attitudes of adult 46,XY intersex persons to clinical management policies.
PURPOSE We surveyed a clinic sample of adult 46,XY intersex patients regarding attitudes to clinical management policies. MATERIALS AND METHODS All adult former patients of 1 pediatric endocrine clinic in the eastern United States whose addresses could be obtained and who consented to participation were surveyed by a comprehensive written followup questionnaire. Three questions on attitudes concerning the desirability of a third gender category and the age at which genital surgery should be done were presented in the context of ratings of satisfaction with gender, genital status and sexual functioning. RESULTS A total of 72 English-speaking patients with a 46,XY karyotype, including 32 men and 40 women 18 to 60 years old, completed the questionnaire. The majority of respondents stated that they were mainly satisfied with being the assigned gender, did not have a time in life when they felt unsure about gender, did not agree with a third gender policy, did not think that the genitals looked unusual (although the majority of men rated their penis as too small), were somewhat or mainly satisfied with sexual functioning, did not agree that corrective genital surgery should be postponed to adulthood and stated that their genital surgeries should have been performed before adulthood, although there were some significant and important differences among subgroups. CONCLUSIONS The majority of adult patients with intersexuality appeared to be satisfied with gender and genital status, and did not support major changes in the prevailing policy. However, a significant minority was dissatisfied and endorsed policy changes.
Ten-fold Improvement in Visual Odometry Using Landmark Matching
Our goal is to create a visual odometry system for robots and wearable systems such that localization accuracies of centimeters can be obtained for hundreds of meters of distance traveled. Existing systems have achieved approximately a 1% to 5% localization error rate, whereas our proposed system achieves close to a 0.1% error rate, a ten-fold reduction. Traditional visual odometry systems drift over time as the frame-to-frame errors accumulate. In this paper, we propose to improve visual odometry using visual landmarks in the scene. First, a dynamic local landmark tracking technique is proposed to track a set of local landmarks across image frames and select an optimal set of tracked local landmarks for pose computation. As a result, the error associated with each pose computation is minimized to reduce the drift significantly. Second, a global landmark based drift correction technique is proposed to recognize previously visited locations and use them to correct drift accumulated during motion. At each visited location along the route, a set of distinctive visual landmarks is automatically extracted and inserted into a landmark database dynamically. We integrate the landmark based approach into a navigation system with 2 stereo pairs and a low-cost inertial measurement unit (IMU) for increased robustness. We demonstrate that a real-time visual odometry system using local and global landmarks can precisely locate a user within 1 meter over 1000 meters in unknown indoor/outdoor environments with challenging situations such as climbing stairs, opening doors, and moving foreground objects.
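The global-landmark drift-correction bookkeeping can be sketched in a few lines. Below is a hypothetical toy, not the authors' implementation: poses are simplified to 2-D positions, descriptors to plain vectors, and a matched global landmark simply snaps the integrated pose back to the pose recorded at the first visit.

```python
# Toy sketch of global-landmark drift correction (assumed details throughout).
import numpy as np

class LandmarkDB:
    def __init__(self, match_thresh=0.3):
        self.descriptors = []   # one feature vector per stored landmark
        self.poses = []         # pose recorded when the landmark was first seen
        self.match_thresh = match_thresh

    def insert(self, desc, pose):
        self.descriptors.append(np.asarray(desc, float))
        self.poses.append(np.asarray(pose, float))

    def match(self, desc):
        """Return the stored pose of the nearest landmark, or None."""
        if not self.descriptors:
            return None
        d = np.linalg.norm(np.asarray(self.descriptors) - desc, axis=1)
        i = int(np.argmin(d))
        return self.poses[i] if d[i] < self.match_thresh else None

rng = np.random.default_rng(0)
db = LandmarkDB()
pose = np.zeros(2)        # integrated, drifting odometry pose
true_pose = np.zeros(2)
# Walk out and back along the same corridor so places are revisited.
waypoints = [np.array([1.0, 0.0])] * 5 + [np.array([-1.0, 0.0])] * 5
for step in waypoints:
    true_pose = true_pose + step
    pose = pose + step + rng.normal(0, 0.05, 2)   # odometry step with drift
    desc = np.round(true_pose, 0)                 # toy "descriptor" of the place
    known = db.match(desc)
    if known is not None:
        pose = known.copy()                       # revisit: cancel accumulated drift
    else:
        db.insert(desc, pose)                     # new place: remember it
print("final position error: %.3f m" % np.linalg.norm(pose - true_pose))
```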
FairAccess: a new Blockchain-based access control framework for the Internet of Things
Security and privacy are huge challenges in Internet of Things (IoT) environments, but unfortunately, harmonization of IoT-related standards and protocols remains slow and far from widespread. In this paper, we propose a new framework for access control in IoT based on blockchain technology. Our first contribution consists in providing a reference model for our proposed framework within the Objectives, Models, Architecture and Mechanism specification in IoT. In addition, we introduce FairAccess as a fully decentralized pseudonymous and privacy-preserving authorization management framework that enables users to own and control their data. To implement our model, we use and adapt the blockchain into a decentralized access control manager. Unlike financial bitcoin transactions, FairAccess introduces new types of transactions that are used to grant, get, delegate, and revoke access. As a proof of concept, we establish an initial implementation with a Raspberry Pi device and a local blockchain. Finally, we discuss some limitations and propose further opportunities. Copyright © 2017 John Wiley & Sons, Ltd.
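To make the transaction types concrete, here is a minimal, hypothetical sketch of what a FairAccess-style authorization transaction could look like. All field names are invented for illustration, and the paper's actual wallet and blockchain machinery (signatures, scripts, validation) is not shown.

```python
# Hypothetical access-control transaction (invented field names; a sketch,
# not the FairAccess implementation).
import hashlib, json, time
from dataclasses import dataclass, asdict

@dataclass
class AccessTransaction:
    tx_type: str        # "grant", "get", "delegate", or "revoke"
    resource: str       # identifier of the IoT resource
    grantee: str        # pseudonymous address receiving the right
    grantor: str        # pseudonymous address issuing the right
    timestamp: float

    def txid(self) -> str:
        # Transactions are identified by a hash of their content,
        # mirroring how blockchain transactions are referenced.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

tx = AccessTransaction("grant", "sensor/42/temperature",
                       grantee="addr_bob", grantor="addr_alice",
                       timestamp=time.time())
print(tx.txid()[:16], tx.tx_type, tx.resource)
```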
A novel feature selection approach for biomedical data classification
This paper presents a novel feature selection approach to deal with issues of high dimensionality in biomedical data classification. Extensive research has been performed in the field of pattern recognition and machine learning. Dozens of feature selection methods have been developed in the literature, which can be classified into three main categories: filter, wrapper and hybrid approaches. Filter methods apply an independent test without involving any learning algorithm, while wrapper methods require a predetermined learning algorithm for feature subset evaluation. Filter and wrapper methods each have their respective drawbacks and are complementary to each other, in that filter approaches have low computational cost but insufficient reliability in classification, while wrapper methods tend to achieve superior classification accuracy but require great computational power. The approach proposed in this paper integrates filter and wrapper methods into a sequential search procedure with the aim of improving the classification performance of the selected features. The proposed approach is characterized by (1) adding a pre-selection step to improve the effectiveness of searching for feature subsets with improved classification performance and (2) using Receiver Operating Characteristic (ROC) curves to characterize the performance of individual features and feature subsets in classification. Compared with the conventional Sequential Forward Floating Search (SFFS), which has been considered one of the best feature selection methods in the literature, experimental results demonstrate that (i) the proposed approach is able to select feature subsets with better classification performance than the SFFS method and (ii) the integrated feature pre-selection mechanism, by means of a new selection criterion and filter method, helps to avoid over-fitting and reduces the chances of reaching a local optimum.
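A minimal sketch of the two-stage idea, under assumed details (scikit-learn, a plain logistic regression as the wrapper's learner, top-10 pre-selection, and simple forward search rather than the paper's floating variant): each feature is first scored by its individual ROC AUC, then a greedy wrapper search evaluates subsets by cross-validation.

```python
# Filter pre-selection by per-feature ROC AUC, then a wrapper-style
# sequential forward search (a sketch, not the paper's exact procedure).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Filter step: rank features by single-feature ROC AUC, keep the top 10.
aucs = [max(roc_auc_score(y, X[:, j]), roc_auc_score(y, -X[:, j]))
        for j in range(X.shape[1])]
candidates = list(np.argsort(aucs)[::-1][:10])

# Wrapper step: greedy forward selection using cross-validated accuracy.
selected, best = [], 0.0
clf = LogisticRegression(max_iter=5000)
while candidates:
    scores = {j: cross_val_score(clf, X[:, selected + [j]], y, cv=5).mean()
              for j in candidates}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best:
        break                      # no candidate improves the subset: stop
    best = scores[j_best]
    selected.append(j_best)
    candidates.remove(j_best)
print("selected features:", selected, "cv accuracy: %.3f" % best)
```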
Automatically Extracting Word Relationships as Templates for Pun Generation
Computational models can be built to capture the syntactic structures and semantic patterns of human punning riddles. Such a model can then be used as rules by a computer to generate its own puns. This paper presents T-PEG, a system that utilizes phonetic and semantic linguistic resources to automatically extract word relationships in puns and store the knowledge in template form. Given a set of training examples, it is able to extract 69.2% usable templates, resulting in computer-generated puns that received an average score of 2.13, compared to 2.70 for human-generated puns, in user feedback.
pdf2table: A Method to Extract Table Information from PDF Files
Tables are a common structuring element in many documents, such as PDF files. To reuse such tables, appropriate methods need to be developed, which capture the structure and the content information. We have developed several heuristics which together recognize and decompose tables in PDF files and store the extracted data in a structured data format (XML) for easier reuse. Additionally, we implemented a prototype, which gives the user the ability of making adjustments on the extracted data. Our work shows that purely heuristic-based approaches can achieve good results, especially for lucid tables.
VeTrack: Real Time Vehicle Tracking in Uninstrumented Indoor Environments
Although location awareness and turn-by-turn instructions are prevalent outdoors due to GPS, we are back into the darkness in uninstrumented indoor environments such as underground parking structures. We get confused and disoriented when driving in these mazes, and frequently forget where we parked, ending up circling back and forth upon return. In this paper, we propose VeTrack, a smartphone-only system that tracks the vehicle's location in real time using the phone's inertial sensors. It does not require any environment instrumentation or cloud backend. It uses a novel "shadow" tracing method to accurately estimate the vehicle's trajectories despite arbitrary phone/vehicle poses and frequent disturbances. We develop algorithms in a Sequential Monte Carlo framework to represent vehicle states probabilistically, and harness constraints from the garage map and detected landmarks to robustly infer the vehicle location. We also develop landmark (e.g., speed bump and turn) recognition methods that remain reliable against noise and disturbances from bumpy rides and even hand-held movements. We implement a highly efficient prototype and conduct extensive experiments in multiple parking structures of different sizes and layouts, with multiple vehicles and drivers. We find that VeTrack can estimate the vehicle's real-time location with almost negligible latency, with an error of 2-4 parking spaces at the 80th percentile.
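The Sequential Monte Carlo machinery can be illustrated with a toy 1-D particle filter (assumed details, not the VeTrack implementation): particles carry position hypotheses, the motion update integrates a noisy speed estimate, and a detected landmark such as a speed bump reweights particles near the bump's known map position.

```python
# Toy particle filter: a speed bump at x = 50 m acts as a map landmark.
import numpy as np

rng = np.random.default_rng(1)
N = 1000
particles = rng.uniform(0, 100, N)          # position hypotheses (m)
weights = np.full(N, 1.0 / N)

def motion_update(p, v, dt=1.0, sigma=0.5):
    return p + v * dt + rng.normal(0, sigma, p.shape)

def landmark_update(p, w, landmark_x, sigma=2.0):
    # Likelihood: a bump was just felt, so particles near the bump get weight.
    like = np.exp(-0.5 * ((p - landmark_x) / sigma) ** 2)
    w = w * like
    return w / w.sum()

def resample(p, w):
    idx = rng.choice(len(p), size=len(p), p=w)
    return p[idx], np.full(len(p), 1.0 / len(p))

for t in range(10):
    particles = motion_update(particles, v=5.0)
    if t == 9:                               # inertial sensors flag a bump
        weights = landmark_update(particles, weights, landmark_x=50.0)
        particles, weights = resample(particles, weights)
print("estimated position: %.1f m" % np.average(particles, weights=weights))
```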
Natural doubling of the apparent switching frequency using three-level ANPC converter
The 3L-NPC (three-level neutral-point-clamped) converter is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performance of the 3L-NPC structure was improved by developing the 3L-ANPC (Active-NPC) converter, which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies. A new PWM strategy is also proposed in the paper. It has numerous advantages: (a) natural doubling of the apparent switching frequency without using the flying-capacitor concept, (b) dead times do not influence the operating mode at 50% of the duty cycle, (c) operation at both high and low switching frequencies without structural modifications and (d) better balancing of loss distribution among the switches. PSIM simulation results are shown in order to validate the proposed PWM strategy and the analysis of the switching states.
Corpus callosum and cingulum tractography in Parkinson's disease.
BACKGROUND In Parkinson's disease (PD) cell loss in the substantia nigra is known to result in motor symptoms; however, widespread pathological changes occur and may be associated with non-motor symptoms such as cognitive impairment. Diffusion tensor imaging is a quantitative imaging method sensitive to the micro-structure of white matter tracts. OBJECTIVE To measure fractional anisotropy (FA) and mean diffusivity (MD) values in the corpus callosum and cingulum pathways, defined by diffusion tensor tractography, in patients with PD, PD with dementia (PDD) and controls, and to determine if these measures correlate with Mini-Mental Status Examination (MMSE) scores in parkinsonian patients. METHODS Patients with PD (17 Males [M], 12 Females [F]), mild PDD (5 M, 1 F) and controls (8 M, 7 F) underwent cognitive testing and MRI scans. The corpus callosum was divided into four regions and the cingulum into two regions bilaterally to define tracts with the program DTIstudio (Johns Hopkins University), using the fiber assignment by continuous tracking algorithm. Volumetric MRI scans were used to measure white and gray matter volumes. RESULTS Groups did not differ in age or education. There were no overall FA or MD differences between groups in either the corpus callosum or cingulum pathways. In PD subjects the MMSE score correlated with MD within the corpus callosum. These findings were independent of age, sex and total white matter volume. CONCLUSIONS The data suggest that the corpus callosum or its cortical connections are associated with cognitive impairment in PD patients.
Ethnobotanical classification system and medical ethnobotany of the Eastern Band of the Cherokee Indians
The Eastern Band of the Cherokee Indians live in one of the planet’s most floristically diverse temperate zone environments. Their relationship with the local flora was initially investigated by James Mooney and revisited by subsequent researchers such as Frans Olbrechts, John Witthoft, and William Banks, among others. This work interprets the collective data recorded by Cherokee ethnographers, much of it in the form of unpublished archival material, as it reflects the Cherokee ethnobotanical classification system and their medical ethnobotany. Mooney’s proposed classification system for the Cherokee is remarkably similar to contemporary models of folk biological classification systems. His recognition of this inherent system, 60 years before contemporary models were proposed, provides evidence for their universality in human cognition. Examination of the collective data concerning Cherokee medical ethnobotany provides a basis for considering change in Cherokee ethnobotanical knowledge, for reevaluation of the statements of the various researchers, and a means to explore trends that were not previously apparent. Index Words: Eastern Band of the Cherokee Indians, Ethnobiological Classification Systems, Ethnohistory, Ethnomedicine, Historical Ethnobotany, Medical Ethnobotany, Native American Medicine, Traditional Botanical Knowledge.
Effects of a green tea extract, Polyphenon E, on systemic biomarkers of growth factor signalling in women with hormone receptor-negative breast cancer.
BACKGROUND Observational and experimental data support a potential breast cancer chemopreventive effect of green tea. METHODS We conducted an ancillary study using archived blood/urine from a phase IB randomised, placebo-controlled dose escalation trial of an oral green tea extract, Polyphenon E (Poly E), in breast cancer patients. Using an adaptive trial design, women with stage I-III breast cancer who completed adjuvant treatment were randomised to Poly E 400 mg (n = 16), 600 mg (n = 11) and 800 mg (n = 3) twice daily or matching placebo (n = 10) for 6 months. Blood and urine collection occurred at baseline, and at 2, 4 and 6 months. Biological endpoints included growth factor [serum hepatocyte growth factor (HGF), vascular endothelial growth factor (VEGF)], lipid (serum cholesterol, triglycerides), oxidative damage and inflammatory biomarkers. RESULTS From July 2007-August 2009, 40 women were enrolled and 34 (26 Poly E, eight placebo) were evaluable for biomarker endpoints. At 2 months, the Poly E group (all dose levels combined) compared to placebo had a significant decrease in mean serum HGF levels (-12.7% versus +6.3%, P = 0.04). This trend persisted at 4 and 6 months but was no longer statistically significant. For the Poly E group, serum VEGF decreased by 11.5% at 2 months (P = 0.02) and 13.9% at 4 months (P = 0.05) but did not differ compared to placebo. At 2 months, there was a trend toward a decrease in serum cholesterol with Poly E (P = 0.08). No significant differences were observed for other biomarkers. CONCLUSIONS Our findings suggest potential mechanistic actions of tea polyphenols in growth factor signalling, angiogenesis and lipid metabolism.
Post-operative cognitive functions after general anesthesia with sevoflurane and desflurane in South Asian elderly.
BACKGROUND The duration of the recovery of cognition after anesthesia and surgery is multifactorial and is dependent on the type of anesthesia used, the type of surgery, and the patient. The present study compared the speed of recovery in elderly patients undergoing general anesthesia with sevoflurane or desflurane and the incidence and duration of cognitive impairment in them. METHODS The prospective study was conducted at a tertiary care centre in Bangalore from November 2008 to March 2010. Patients aged above 65 years with American Society of Anaesthesiology (ASA) physical status I, II, or III undergoing surgeries under general anesthesia lasting from 45 min up to 3 hours were included in the study. The times from discontinuing nitrous oxide to eye opening, tracheal extubation, obeying commands, and orientation to name and place were assessed at 30-60 s intervals. At 1, 3, and 6 h after the end of anesthesia, the patients' cognitive functions were assessed by asking them to repeat the Mini Mental Score Examination. STATISTICAL ANALYSIS USED Student t-test, Chi-square test. RESULTS The time to eye opening, time until extubation, time to follow commands, and orientation to time and place were significantly better with desflurane compared to sevoflurane (p < .001). One hundred percent of patients in the desflurane group and 97% in the sevoflurane group demonstrated completely normal cognitive function at 6 h postoperatively (p = 0.31). CONCLUSION Desflurane was associated with a faster early recovery than sevoflurane in elderly patients. However, postoperative recovery of cognitive function was similar with both volatile anaesthetics.
Continuous User Authentication on Mobile Devices
Recent developments in sensing and communication technologies have led to an explosion in the use of mobile devices such as smartphones and tablets. With the increase in the use of mobile devices, one has to constantly worry about the security and privacy as the loss of a mobile device could compromise personal information of the user. To deal with this problem, continuous authentication (also known as active authentication) systems have been proposed in which users are continuously monitored after the initial access to the mobile device. In this paper, we provide an overview of different continuous authentication methods on mobile devices. We discuss the merits and drawbacks of available approaches and identify promising avenues of research in this rapidly evolving field.
The Visual Display of Quantitative Information
This is a surprisingly entertaining and informative book and should be of interest to all who employ the services of specialists for the presentation of graphics, as well as the producers of such visual aids. It makes a strong case for properly designed and interesting graphics of a statistical nature, where content and integrity rather than artistic appeal should prevail. The initial chapter, “Graphical Excellence,” establishes basic ground rules which should be observed in the design and execution of statistical graphics, to facilitate the viewer’s understanding of complex data sets. The examples provided cover a broad spectrum of subject matter and treatment. Several of these were produced over a century ago, and are excellent examples in terms of ease of comprehension of complex statistical data. For the reader not familiar with the history of this topic, the graphics provided include some of the work of C. J. Minard, France, published in the mid-1800s. These are quite impressive, treating subjects involving rather complex statistics with a quality of design and clarity that is rarely found today. I believe the inclusion of additional good examples of a more contemporary nature might possibly improve the value of this chapter, if a subsequent edition is ever considered. Two chapters are dedicated to the subject of graphical integrity, and there are lessons here for everyone, including the viewer/user of data graphics. While, unfortunately, there will always be a market for “Lying Graphics,” the employment of visual and statistical tricks for intentional false impressions, the author’s position is that this practice is detrimental and has inhibited acceptance of data graphics for applications where their full value lies. Another, and more widespread fault, which the author singles out, is the practice of leaving the basic graphic design to those with artistic backgrounds whose priorities are biased in favor of data beautification rather than statistical integrity. The remaining chapters deal with the theory, design, and execution of data graphics, and the treatment continues to be interesting, with both good and bad examples provided. Some of the specific issues addressed are the necessity for simplicity and clarity of presentation, the elimination of “chart junk,” maintaining the proper data-to-ink ratio, and the value of the tabular versus graphic presentations when limited data are to be shown. It appears that one of the author’s major goals is to increase the respectability of visual displays of statistical data, thereby broadening their acceptance and utilization for applications which are virtually unlimited. This book provides the rationale and the tools for the effective treatment and display of complex multivariate data, for those responsible for concepts as well as execution of statistical graphics.
Carbonized Silk Fabric for Ultrastretchable, Highly Sensitive, and Wearable Strain Sensors.
A carbonized plain-weave silk fabric is fabricated into wearable and robust strain sensors, which can be stretched up to 500% and show high sensitivity in a wide strain range. This sensor can be assembled into wearable devices for detection of both large and subtle human activities, showing great potential for monitoring human motions and personal health.
Slow dynamics in glasses
We review some of the theoretical progress that has recently been made in the study of the slow dynamics of glassy systems: the general techniques used for studying the dynamics in the mean-field approximation and the emergence of a pure dynamical transition in some of these systems. We show how the results obtained for a random Hamiltonian may also be applied to a given Hamiltonian. These two results open the way to a better understanding of the glassy transition in real systems.
Deep Belief Networks with Feature Selection for Sentiment Classification
Due to the complexity of human languages, most sentiment classification algorithms suffer from a huge-scale vocabulary dimension that is mostly noisy and redundant. Deep Belief Networks (DBN) tackle this problem by learning useful information from the input corpus with their several hidden layers. Unfortunately, training a DBN is a time-consuming and computationally expensive process for large-scale applications. In this paper, a semi-supervised learning algorithm, called Deep Belief Networks with Feature Selection (DBNFS), is developed. Using our chi-squared based feature selection, the complexity of the vocabulary input is decreased since some irrelevant features are filtered out, which makes the learning phase of the DBN more efficient. The experimental results show that the proposed DBNFS achieves higher classification accuracy and faster training compared with other well-known semi-supervised learning algorithms.
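A minimal sketch of the chi-squared filtering step, under assumed pipeline details: scikit-learn's chi2 scorer prunes the vocabulary before training, and a plain multinomial naive Bayes stands in for the DBN classifier that the paper trains instead.

```python
# Chi-squared vocabulary pruning before classification (a sketch; the DBN
# itself is replaced here by a simple stand-in classifier).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

data = fetch_20newsgroups(subset="train",
                          categories=["rec.autos", "sci.med"])
pipe = make_pipeline(
    CountVectorizer(stop_words="english"),   # huge, noisy vocabulary
    SelectKBest(chi2, k=2000),               # keep the 2000 most relevant terms
    MultinomialNB(),                         # stand-in for the DBN classifier
)
print("cv accuracy: %.3f" %
      cross_val_score(pipe, data.data, data.target, cv=5).mean())
```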
Opinion observer: analyzing and comparing opinions on the Web
The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. With a single glance at the system's visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. A potential customer can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperforms existing methods significantly.
Case records of the Massachusetts General Hospital. Case 30-2005. A 56-year-old man with fever and axillary lymphadenopathy.
N Engl J Med 2005;353:1387-94. Copyright © 2005 Massachusetts Medical Society. A 56-year-old man was referred to the transplantation infectious-disease clinic because of a low-grade fever and left axillary lymphadenopathy. The patient had received a cadaveric kidney transplant five years earlier for polycystic kidney disease. He had been in his usual state of health until three weeks before the referral to the infectious-disease clinic, when he discovered palpable, tender lymph nodes in the left epitrochlear region and axilla. Ten days later a low-grade fever, dry cough, nasal congestion, and night sweats developed, for which trimethoprim–sulfamethoxazole was prescribed, without benefit. He was referred to a specialist in infectious diseases. The patient did not have headache, sore throat, chest or abdominal pain, dyspnea, diarrhea, or dysuria. He had hypertension, gout, nephrolithiasis, gastroesophageal reflux disease, and prostate cancer, which had been treated with radiation therapy two years earlier. He was a policeman who worked in an office. He had not traveled outside of the United States recently. He had acquired a kitten several months earlier and recalled receiving multiple scratches on his hands when he played with it. His medications were cyclosporine (325 mg daily), mycophenolate mofetil (2 g daily), amlodipine, furosemide, colchicine, doxazosin, and pravastatin. Prednisone had been discontinued one year previously. He reported no allergies to medications. The temperature was 36.0°C and the blood pressure 105/75 mm Hg. On physical examination, the patient appeared well. The head, neck, lungs, heart, and abdomen were unremarkable. On the dorsum of the left hand was a single, violaceous nodule with a flat, necrotic eschar on top (Fig. 1); there was no erythema, fluctuance, pus, or other drainage, and there was no sinus tract. The patient said that this lesion had nearly healed, but that he had been scratching it and thought that this irritation prevented it from healing. There was a tender left epitrochlear lymph node, 2 cm by 2 cm, and a mass of matted, tender lymph nodes, 5 cm in diameter, in the left axilla. There was no lymphangitic streaking or cellulitis. The results of a complete blood count revealed no abnormalities (Table 1). Additional laboratory studies were obtained, and clarithromycin (500 mg, twice a day) was prescribed. Within a day of starting treatment, the patient’s temperature rose to 39.4°C, and the fever was accompanied by shaking chills. He was admitted to the hospital. The temperature was 38.6°C, the pulse was 78 beats per minute, and the blood pressure was 100/60 mm Hg. The results of a physical examination were unchanged.
A Broadband Dual-Element Folded Dipole Antenna With a Reflector
A wideband dual-element planar folded dipole antenna is proposed in this letter. It is composed of a pair of asymmetric coplanar strip folded dipoles. The dual-element folded dipole antenna is fed by a microstrip feedline. With suitable design of the asymmetric coplanar strip folded dipole and the microstrip feedline, a 10-dB return-loss operating bandwidth of 107% (1.23 ~ 4.07 GHz) is achieved. By adding a metal plate, a wideband unidirectional antenna can be obtained, with a measured impedance bandwidth of 113% (1.15 ~ 4.07 GHz) with better than 10 dB return loss. The fabricated prototype is also measured, showing stable unidirectional radiation patterns in the 1.13 ~ 3.3 GHz working band, with a gain in the +z-direction higher than 8.2 dBi and a front-to-back ratio better than 11.5 dB.
PSANet: Point-wise Spatial Attention Network for Scene Parsing
We notice that information flow in convolutional neural networks is restricted to local neighborhood regions due to the physical design of convolutional filters, which limits the overall understanding of complex scenes. In this paper, we propose the point-wise spatial attention network (PSANet) to relax the local neighborhood constraint. Each position on the feature map is connected to all the other ones through a self-adaptively learned attention mask. Moreover, bi-directional information propagation for scene parsing is enabled: information at other positions can be collected to help the prediction of the current position and, vice versa, information at the current position can be distributed to assist the prediction of other ones. Our proposed approach achieves top performance on various competitive scene parsing datasets, including ADE20K, PASCAL VOC 2012 and Cityscapes, demonstrating its effectiveness and generality.
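A numpy sketch of the point-wise attention idea follows. It is simplified: PSANet learns the per-position masks with a convolutional branch and uses separate collect and distribute paths, whereas here the mask logits come from a single random projection standing in for the learned branch.

```python
# Every spatial position attends to all H*W positions via a softmax mask.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

H, W, C = 8, 8, 16
feat = np.random.rand(H, W, C).astype(np.float32)
x = feat.reshape(H * W, C)                 # one row per spatial position

# Fake the mask logits with a random projection (in PSANet these are
# predicted by a learned convolutional branch).
proj = np.random.randn(C, H * W).astype(np.float32)
mask = softmax(x @ proj, axis=1)           # (H*W, H*W) attention per position

collected = mask @ x                       # every position aggregates all others
out = collected.reshape(H, W, C)
print(out.shape)                           # (8, 8, 16)
```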
Gingerols and shogaols: Important nutraceutical principles from ginger.
Gingerols are the major pungent compounds present in the rhizomes of ginger (Zingiber officinale Roscoe) and are renowned for their contribution to human health and nutrition. Medicinal properties of ginger, including the alleviation of nausea, arthritis and pain, have been associated with the gingerols. Gingerol analogues are thermally labile and easily undergo dehydration reactions to form the corresponding shogaols, which impart the characteristic pungent taste to dried ginger. Both gingerols and shogaols exhibit a host of biological activities, ranging from anticancer, anti-oxidant, antimicrobial, anti-inflammatory and anti-allergic to various central nervous system activities. Shogaols are important biomarkers used for the quality control of many ginger-containing products, due to their diverse biological activities. In this review, a large body of available knowledge on the biosynthesis, chemical synthesis and pharmacological activities, as well as on the structure-activity relationships, of various gingerols and shogaols has been collated, coherently summarised and discussed. The manuscript highlights convincing evidence indicating that these phenolic compounds could serve as important lead molecules for the development of therapeutic agents to treat various life-threatening human diseases, particularly cancer. Inclusion of ginger or ginger extracts in nutraceutical formulations could provide valuable protection against diabetes, cardiac and hepatic disorders.
Autonomous exploration of mobile robots through deep neural networks
The exploration problem for mobile robots aims to allow mobile robots to explore an unknown environment. We describe an indoor exploration algorithm for mobile robots using a hierarchical structure that fuses several convolutional neural network layers with the decision-making process. The whole system is trained end to end by taking only visual information (RGB-D information) as input and generates a sequence of main moving directions as output, so that the robot achieves autonomous exploration ability. The robot is a TurtleBot with a Kinect mounted on it. The model is trained and tested in a real-world environment, and the training dataset is provided for download. The outputs on the test data are compared with human decisions. We use a Gaussian process latent variable model to visualize the feature map of the last convolutional layer, which demonstrates the effectiveness of this deep convolutional neural network model. We also present a novel and lightweight deep-learning library, libcnn, designed especially for deep-learning processing of robotics tasks.
A Double-Stage Kalman Filter for Orientation Tracking With an Integrated Processor in 9-D IMU
This paper presents an application-specific integrated processor for an angular estimation system that works with 9-D inertial measurement units. The application-specific instruction-set processor (ASIP) was implemented on a field-programmable gate array and interfaced with a gyro-plus-accelerometer 6-D sensor and with a magnetic compass. Output data were recorded on a personal computer and also used to perform a live demo. During system modeling and design, it was chosen to represent angular position data with a quaternion and to use an extended Kalman filter as the sensor fusion algorithm. For this purpose, a novel two-stage filter was designed: The first stage uses accelerometer data, and the second one uses magnetic compass data for angular position correction. This allows flexibility, lower computational requirements, and robustness to magnetic field anomalies. The final goal of this work is to realize an upgraded application-specific integrated circuit that controls the microelectromechanical systems (MEMS) sensor and integrates the ASIP. This will allow the MEMS gyro-plus-accelerometer sensor and the angular estimation system to be contained in a single package; this system might optionally work with an external magnetic compass.
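The two-stage structure can be sketched with a complementary-filter-style stand-in for the paper's extended Kalman filter (assumed details; Euler angles instead of the quaternion state): stage 1 corrects roll and pitch from the accelerometer, stage 2 corrects only yaw from the magnetometer, which is why magnetic anomalies cannot corrupt the tilt estimate.

```python
# Two-stage correction sketch (complementary-filter stand-in, not the EKF).
import numpy as np

def predict(euler, gyro, dt):
    return euler + gyro * dt                 # small-angle gyro integration

def stage1_accel(euler, accel, gain=0.05):
    # Gravity direction gives absolute roll/pitch references.
    roll = np.arctan2(accel[1], accel[2])
    pitch = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    euler[0] += gain * (roll - euler[0])
    euler[1] += gain * (pitch - euler[1])
    return euler

def stage2_mag(euler, mag, gain=0.05):
    # Magnetic north gives an absolute yaw reference; keeping this update
    # separate isolates roll and pitch from magnetic field anomalies.
    yaw = np.arctan2(-mag[1], mag[0])
    euler[2] += gain * (yaw - euler[2])
    return euler

euler = np.zeros(3)                          # roll, pitch, yaw (rad)
for _ in range(100):                         # fake a stationary, level sensor
    euler = predict(euler, gyro=np.array([0.01, -0.01, 0.02]), dt=0.01)
    euler = stage1_accel(euler, accel=np.array([0.0, 0.0, 9.81]))
    euler = stage2_mag(euler, mag=np.array([1.0, 0.0, 0.0]))
print(np.round(euler, 3))                    # gyro bias held in check
```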
Effect of antecedent hypertension and follow-up blood pressure on outcomes after high-risk myocardial infarction.
The influence of blood pressure on outcomes after high-risk myocardial infarction is not well characterized. We studied the relationship between blood pressure and the risk of cardiovascular events in 14 703 patients with heart failure, left ventricular systolic dysfunction, or both after acute myocardial infarction in the Valsartan in Myocardial Infarction Trial. We assessed the relationship between antecedent hypertension and outcomes and the association between elevated (systolic: >140 mm Hg) or low blood pressure (systolic: <100 mm Hg) in 2 of 3 follow-up visits during the first 6 months and subsequent cardiovascular events over a median 24.7 months of follow-up. Antecedent hypertension independently increased the risk of heart failure (hazard ratio [HR]: 1.19; 95% CI: 1.08 to 1.32), stroke (HR: 1.27; 95% CI: 1.02 to 1.58), cardiovascular death (HR: 1.11; 95% CI: 1.01 to 1.22), and the composite of death, myocardial infarction, heart failure, stroke, or cardiac arrest (HR: 1.13; 95% CI: 1.06 to 1.21). While low blood pressure in the postmyocardial infarction period was associated with increased risk of adverse events, patients with elevated blood pressure (n=1226) were at significantly higher risk of stroke (adjusted HR: 1.64; 95% CI: 1.17 to 2.29) and combined cardiovascular events (adjusted HR: 1.14; 95% CI: 1.00 to 1.31). Six months after a high-risk myocardial infarction, elevated systolic blood pressure, a potentially modifiable risk factor, is associated with an increased risk of subsequent stroke and cardiovascular events. Whether aggressive antihypertensive treatment can reduce this risk remains unknown.
Security Enhanced (SE) Android: Bringing Flexible MAC to Android
The Android software stack for mobile devices defines and enforces its own security model for apps through its application-layer permissions model. However, at its foundation, Android relies upon the Linux kernel to protect the system from malicious or flawed apps and to isolate apps from one another. At present, Android leverages Linux discretionary access control (DAC) to enforce these guarantees, despite the known shortcomings of DAC. In this paper, we motivate and describe our work to bring flexible mandatory access control (MAC) to Android by enabling the effective use of Security Enhanced Linux (SELinux) for kernel-level MAC and by developing a set of middleware MAC extensions to the Android permissions model. We then demonstrate the benefits of our security enhancements for Android through a detailed analysis of how they mitigate a number of previously published exploits and vulnerabilities for Android. Finally, we evaluate the overheads imposed by our security enhancements.
Universal Semantic Parsing
Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.
A comparison of minutia triplet based features for fingerprint indexing
We compare the sensitivity of features extracted from minutia triplets in fingerprint indexing studies against distortions of impression images. We show that the geometric features of triangles can be made robust against elastic distortions, whereas features used in other studies, namely minutia type, ridge counts and ridge pattern representation, decrease indexing performance.
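Typical rotation- and translation-invariant triplet geometry can be sketched as follows (assumed details; this mirrors the kind of triangle features compared in such studies, not necessarily the paper's exact feature set).

```python
# Invariant triangle features from one minutia triplet.
import numpy as np

def triplet_features(p1, p2, p3):
    pts = np.array([p1, p2, p3], dtype=float)
    # Side lengths are invariant to rotation and translation; sorting them
    # also removes the dependence on the order of the three minutiae.
    sides = np.sort([np.linalg.norm(pts[i] - pts[(i + 1) % 3])
                     for i in range(3)])
    # Largest interior angle (opposite the longest side) via the law of
    # cosines; also rotation- and translation-invariant.
    a, b, c = sides
    cos_large = (a**2 + b**2 - c**2) / (2 * a * b)
    return sides, float(np.degrees(np.arccos(np.clip(cos_large, -1, 1))))

sides, angle = triplet_features((10, 20), (40, 25), (22, 60))
print("sorted sides:", np.round(sides, 1), "largest angle: %.1f deg" % angle)
```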
SINGLE PRECISION FLOATING POINT DIVISION
Binary division is one of the most crucial and silicon-intensive operations and is of immense importance in the field of hardware implementation. A divider is one of the key hardware blocks in most applications, such as digital signal processing, encryption and decryption algorithms in cryptography, and other logical computations. Being a sequential type of operation, it is more prominent in terms of computational complexity and latency. This paper presents a novel division algorithm for single precision floating point division. Verilog code is written and implemented on the Virtex-5 FPGA series. Power dissipation has been reduced, and significant improvement has been observed in terms of area utilisation and latency bounds. Keywords: Single precision, Binary Division, Long Division, Vedic, Virtex, FPGA, IEEE-754.
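The arithmetic behind such a divider can be sketched in software (illustrative only; the paper implements its algorithm in Verilog, and special cases such as zeros, infinities, NaNs and subnormals are ignored here, with truncation instead of round-to-nearest): XOR the signs, subtract the biased exponents, and long-divide the mantissas.

```python
# IEEE-754 single-precision division, sketched bit by bit.
import struct

def fields(x: float):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF            # biased exponent (bias = 127)
    frac = bits & 0x7FFFFF               # 23-bit fraction
    return sign, exp, frac | (1 << 23)   # restore the implicit leading 1

def fp32_divide(a: float, b: float) -> float:
    sa, ea, ma = fields(a)
    sb, eb, mb = fields(b)
    sign = sa ^ sb                       # XOR of sign bits
    exp = ea - eb + 127                  # subtract exponents, re-bias
    q = (ma << 24) // mb                 # fixed-point long division of mantissas
    if q < (1 << 24):                    # normalize quotient into [1, 2)
        q <<= 1
        exp -= 1
    frac = (q >> 1) & 0x7FFFFF           # drop implicit 1, truncate to 23 bits
    bits = (sign << 31) | (exp << 23) | frac
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(fp32_divide(6.0, 3.0), 6.0 / 3.0)  # 2.0 2.0
```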
Adrenal hyperplasia and adenomas are associated with inhibition of phosphodiesterase 11A in carriers of PDE11A sequence variants that are frequent in the population.
Several types of adrenocortical tumors that lead to Cushing syndrome may be caused by aberrant cyclic AMP (cAMP) signaling. We recently identified patients with micronodular adrenocortical hyperplasia who were carriers of inactivating mutations in the 2q-located phosphodiesterase 11A (PDE11A) gene. We now studied the frequency of two missense substitutions, R804H and R867G, in conserved regions of the enzyme in several sets of normal controls, including 745 individuals enrolled in a longitudinal cohort study, the New York Cancer Project. In the latter, we also screened for the presence of the previously identified PDE11A nonsense mutations. R804H and R867G were frequent among patients with adrenocortical tumors; although statistical significance was not reached, these variants significantly affected enzymatic function in vitro, with variable increases in cAMP and/or cyclic guanosine 3',5'-monophosphate levels in HeLa and HEK293 cells. Adrenocortical tissues carrying the R804H mutation showed 2q allelic losses and higher cyclic nucleotide levels and cAMP-responsive element binding protein phosphorylation. We conclude that missense mutations of the PDE11A gene that affect enzymatic activity in vitro are present in the general population; protein-truncating PDE11A mutations may also contribute to a predisposition to other tumors, in addition to their association with adrenocortical hyperplasia. We speculate that PDE11A genetic defects may be associated with adrenal pathology in a wider than previously suspected clinical spectrum that includes asymptomatic individuals.
Efficiency-Oriented Design of ZVS Half-Bridge Series Resonant Inverter With Variable Frequency Duty Cycle Control
The efficiency of the zero-voltage-switching half-bridge series resonant inverter can decrease under certain load conditions due to the high switching frequencies required. The proposed variable frequency duty cycle (VFDC) control is intended to improve the efficiency at medium and low output power levels because of the decreased switching frequencies. The study performed in this letter includes, as a first step, a theoretical analysis of the power balance as a function of the control parameters. In addition, restrictions due to snubber capacitors and deadtime, and the variability of the loads, have been considered. Afterward, an efficiency analysis has been carried out to determine the optimum operating point. Switching and conduction losses have been calculated to examine the overall efficiency improvement. The efficiency improvement of the VFDC strategy is achieved by means of a switching-frequency reduction, mainly in the low-to-medium power range and with low-quality-factor loads. Domestic induction heating is a suitable application for the VFDC strategy due to its special load characteristics. For this reason, the simulation results have been validated using an induction heating inverter with a specially designed load.
Sleep Quality and Academic Performance in University Students : A Wake-Up Call for College Psychologists
Both sleep deprivation and poor sleep quality are prominent in American society, especially in college student populations. Sleep problems are often a primary disorder rather than secondary to depression. The purpose of the present study was to determine if sleep deprivation and/or poor sleep quality in a sample of nondepressed university students was associated with lower academic performance. A significant negative correlation between the Global Sleep Quality score (GSQ) on the Pittsburgh Sleep Quality Index and grade point average supports the hypothesis that poor sleep quality is associated with lower academic performance for nondepressed students. Implications for both the remedial (assessment and treatment) and preventive (outreach) work of college and university counseling centers are discussed.
Survey of Contrast Enhancement Techniques based on Histogram Equalization
Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the most common methods used for improving contrast in digital images and has proved to be a simple and effective image contrast enhancement technique. However, conventional histogram equalization methods usually result in excessive contrast enhancement, which causes an unnatural look and visual artifacts in the processed image. This paper presents a review of new forms of histogram equalization for image contrast enhancement. The major difference among the methods in this family is the criterion used to divide the input histogram. Brightness preserving Bi-Histogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness Preserving Dynamic Histogram Equalization (BPDHE) method is an extension of both MPHEBP and DHE. The Weighting Mean-Separated Sub-Histogram Equalization (WMSHE) method performs effective contrast enhancement of the digital image. Keywords: image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement; histogram partition.
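A minimal BBHE sketch, under assumed details: the histogram is split at the mean intensity and each sub-image is equalized within its own output range, which preserves the mean brightness better than plain global HE.

```python
# Brightness preserving Bi-Histogram Equalization (BBHE), toy version.
import numpy as np

def equalize_range(img, mask, lo, hi):
    out = np.zeros_like(img, dtype=float)
    vals = img[mask]
    if vals.size == 0:
        return out
    hist, _ = np.histogram(vals, bins=256, range=(0, 256))
    cdf = hist.cumsum() / vals.size          # per-sub-image cumulative histogram
    out[mask] = lo + (hi - lo) * cdf[img[mask]]
    return out

def bbhe(img):
    mean = int(img.mean())
    low, high = img <= mean, img > mean      # split at the mean intensity
    return (equalize_range(img, low, 0, mean) +
            equalize_range(img, high, mean + 1, 255)).astype(np.uint8)

img = (np.random.beta(2, 5, (64, 64)) * 255).astype(np.uint8)  # dark test image
print("mean before: %.1f  after: %.1f" % (img.mean(), bbhe(img).mean()))
```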
Joint Inference for Fine-grained Opinion Extraction
This paper addresses the task of fine-grained opinion extraction: the identification of opinion-related entities (the opinion expressions, the opinion holders, and the targets of the opinions) and of the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.
A Text Line Detection Method for Mathematical Formula Recognition
Text line detection is a prerequisite procedure of mathematical formula recognition; however, many incorrectly segmented text lines are often produced, due to the two-dimensional structures of mathematics, when using existing segmentation methods such as Projection Profiles Cutting (PPC) or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning-based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line, and those between successive lines, are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines in mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly by adopting the proposed text line detection method.
Semantic Role Labeling via FrameNet, VerbNet and PropBank
This article describes a robust semantic parser that uses a broad knowledge base created by interconnecting three major resources: FrameNet, VerbNet and PropBank. The FrameNet corpus contains the examples annotated with semantic roles whereas the VerbNet lexicon provides the knowledge about the syntactic behavior of the verbs. We connect VerbNet and FrameNet by mapping the FrameNet frames to the VerbNet Intersective Levin classes. The PropBank corpus, which is tightly connected to the VerbNet lexicon, is used to increase the verb coverage and also to test the effectiveness of our approach. The results indicate that our model is an interesting step towards the design of more robust semantic parsers.
Action learning and leadership
Action learning has quickly emerged as one of the most powerful and effective tools employed by organizations worldwide to develop and build their leaders. Companies such as Boeing, DuPont, Motorola, Alcoa, and Nokia have recently turned to action learning to solve their critical, complex problems as well as to grow the competencies and attributes needed by their leaders if they are to succeed in the 21st century.
Radiofrequency ablation vs antiarrhythmic drugs as first-line treatment of paroxysmal atrial fibrillation (RAAFT-2): a randomized trial.
IMPORTANCE Atrial fibrillation (AF) is the most common rhythm disorder seen in clinical practice. Antiarrhythmic drugs are effective for reduction of recurrence in patients with symptomatic paroxysmal AF. Radiofrequency ablation is an accepted therapy in patients for whom antiarrhythmic drugs have failed; however, its role as a first-line therapy needs further investigation. OBJECTIVE To compare radiofrequency ablation with antiarrhythmic drugs (standard therapy) as a first-line therapy in treating patients with paroxysmal AF. DESIGN, SETTING, AND PATIENTS A randomized clinical trial in which 127 treatment-naive patients with paroxysmal AF were randomized at 16 centers in Europe and North America to receive either antiarrhythmic therapy or ablation. The first patient was enrolled July 27, 2006; the last patient, January 29, 2010. The last follow-up was February 16, 2012. INTERVENTIONS Sixty-one patients in the antiarrhythmic drug group and 66 in the radiofrequency ablation group were followed up for 24 months. MAIN OUTCOMES AND MEASURES The time to the first documented atrial tachyarrhythmia of more than 30 seconds (symptomatic or asymptomatic AF, atrial flutter, or atrial tachycardia), detected by either scheduled or unscheduled electrocardiogram, Holter, transtelephonic monitor, or rhythm strip, was the primary outcome. Secondary outcomes included symptomatic recurrences of atrial tachyarrhythmias and quality of life measures assessed by the EQ-5D tool. RESULTS Forty-four patients (72.1%) in the antiarrhythmic group and 36 patients (54.5%) in the ablation group experienced the primary efficacy outcome (hazard ratio [HR], 0.56 [95% CI, 0.35-0.90]; P = .02). For the secondary outcomes, 59% in the drug group and 47% in the ablation group experienced the first recurrence of symptomatic AF, atrial flutter, or atrial tachycardia (HR, 0.56 [95% CI, 0.33-0.95]; P = .03). No deaths or strokes were reported in either group; 4 cases of cardiac tamponade were reported in the ablation group. In the standard treatment group, 26 patients (43%) underwent ablation after 1 year. Quality of life was moderately impaired at baseline in both groups and improved at the 1-year follow-up. However, the improvement was not significantly different between groups. CONCLUSIONS AND RELEVANCE Among patients with paroxysmal AF without previous antiarrhythmic drug treatment, radiofrequency ablation compared with antiarrhythmic drugs resulted in a lower rate of recurrent atrial tachyarrhythmias at 2 years. However, recurrence was frequent in both groups. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT00392054.
Collaborative Filtering beyond the User-Item Matrix: A Survey of the State of the Art and Future Challenges
Over the past two decades, a large amount of research effort has been devoted to developing algorithms that generate recommendations. The resulting research progress has established the importance of the user-item (U-I) matrix, which encodes the individual preferences of users for items in a collection, for recommender systems. The U-I matrix provides the basis for collaborative filtering (CF) techniques, the dominant framework for recommender systems. Currently, new recommendation scenarios are emerging that offer promising new information that goes beyond the U-I matrix. This information can be divided into two categories related to its source: rich side information concerning users and items, and interaction information associated with the interplay of users and items. In this survey, we summarize and analyze recommendation scenarios involving information sources and the CF algorithms that have been recently developed to address them. We provide a comprehensive introduction to a large body of research, more than 200 key references, with the aim of supporting the further development of recommender systems exploiting information beyond the U-I matrix. On the basis of this material, we identify and discuss what we see as the central challenges lying ahead for recommender system technology, both in terms of extensions of existing techniques as well as of the integration of techniques and technologies drawn from other research areas.
Vibrotactile Display: Perception, Technology, and Applications
This paper reviews the technology and applications of vibrotactile display, an effective information transfer modality for the emerging area of haptic media. Our emphasis is on summarizing foundational knowledge in this area and providing implementation guidelines for application designers who do not yet have a background in haptics. Specifically, we explain the relevant human vibrotactile perceptual capabilities, detail the main types of commercial vibrotactile actuators, and describe how to build both monolithic and localized vibrotactile displays. We then identify exemplary vibrotactile display systems in application areas ranging from the presentation of physical object properties to broadcasting vibrotactile media content.
Leveraging Multi-Domain Prior Knowledge in Topic Models
Topic models have been widely used to identify topics in text corpora. It is also known that purely unsupervised models often result in topics that are not comprehensible in applications. In recent years, a number of knowledge-based models have been proposed, which allow the user to input prior knowledge of the domain to produce more coherent and meaningful topics. In this paper, we go one step further to study how prior knowledge from other domains can be exploited to help topic modeling in a new domain. This problem setting is important from both the application and the learning perspectives because knowledge is inherently accumulative. We human beings gain knowledge gradually and use the old knowledge to help solve new problems. To achieve this objective, existing models face some major difficulties. In this paper, we propose a novel knowledge-based model, called MDK-LDA, which is capable of using prior knowledge from multiple domains. Our evaluation results demonstrate its effectiveness.
Compiling Abstract Scientific Workflows into Web Service Workflows
Scientists from many different fields and disciplines, e.g., life sciences, earth sciences, and environmental sciences, increasingly rely on web-accessible information sources and computational tools in order to support or even drive their scientific research. Simple access methods to these scientific resources via form-based interfaces and generic web browsers have quickly led to their popularity and widespread use. However, the serious limitations of the “copy-paste-click” mode of end-users interacting directly with different web sites have now become apparent, and this error-prone and inefficient manual approach to data and process integration is not adequate for the goal of automated, high throughput scientific workflows.
Hybrid feedforward-feedback active noise control
This work presents an architecture for single-source, single-point noise cancellation that seeks adequate gain margin and high performance for both stationary and nonstationary noise sources by combining feedforward and feedback control. Gain margins and noise reduction performance of the hybrid control architecture are validated experimentally using an earcup from a circumaural hearing protector. Results show that the hybrid system provides 5 to 30 dB of active performance in the frequency range 50-800 Hz for tonal noise and 18-27 dB of active performance in the same frequency range for nonstationary noise, such as aircraft or helicopter cockpit noise, improving low-frequency (< 100 Hz) performance by up to 15 dB over either control component acting individually.
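The adaptive core of the feedforward path can be illustrated with a plain LMS canceller (assumed details; a real system of this kind would use FxLMS with a measured secondary path, and adds the feedback loop on top).

```python
# Toy LMS noise canceller: adaptively model the path from the reference
# microphone to the ear and subtract the predicted noise.
import numpy as np

rng = np.random.default_rng(2)
n, taps, mu = 20000, 32, 0.005
ref = rng.normal(size=n)                      # reference microphone signal
h_primary = rng.normal(size=taps) * 0.2       # unknown acoustic path to the ear
noise_at_ear = np.convolve(ref, h_primary)[:n]

w = np.zeros(taps)                            # adaptive filter weights
err = np.zeros(n)
for i in range(taps - 1, n):
    x = ref[i - taps + 1:i + 1][::-1]         # x[k] = ref[i - k]
    y = w @ x                                 # anti-noise estimate
    err[i] = noise_at_ear[i] - y              # residual at the error microphone
    w += mu * err[i] * x                      # LMS weight update

before = np.mean(noise_at_ear[-2000:] ** 2)
after = np.mean(err[-2000:] ** 2)
print("attenuation: %.1f dB" % (10 * np.log10(before / after)))
```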
Ultrasound-guided bilateral transversus abdominis plane block for postoperative analgesia after breast reconstruction by DIEP flap.
BACKGROUND Autologous breast reconstruction by deep inferior epigastric perforator (DIEP) flap provides higher postoperative pain at the abdominal donor site than at the thoracic one. The authors evaluated the analgesic efficacy of ultrasound-guided transverse abdominis plane block for postoperative analgesia after immediate breast reconstruction by DIEP flap. METHODS The authors conducted an open prospective study of 30 consecutive women undergoing immediate DIEP flap breast reconstruction after modified radical mastectomy for cancer. The last 15 patients received a bilateral ultrasound-guided block with 1.5 mg/kg ropivacaine on each side after DIEP flap harvesting, under general anesthesia. All patients received postoperative acetaminophen and patient-controlled intravenous morphine and were assessed for morphine use, satisfaction with pain relief, and adverse effects. RESULTS Morphine requirements were significantly lower in the block group than in the control group for the 0- to 12-hour (17.7 mg versus 22.7 mg, p = 0.0047) and 12- to 24-hour (14.2 mg versus 17.4 mg, p = 0.01) intervals but not for the 24- to 36-hour (11.3 mg versus 12.2 mg, p = 0.30) and 36- to 48-hour (8.6 mg versus 8.4 mg, p = 0.65) intervals. Cumulative morphine use was lower in the block group than in the control group for the first 24 hours (32.0 mg versus 40.2 mg, p = 0.0057) and the first 48 hours (51.7 mg versus 60.5 mg, p = 0.03). There was no complication attributable to the block, with an average follow-up of 9 months. CONCLUSIONS Bilateral ultrasound-guided transversus abdominis plane block after breast reconstruction by DIEP flap reduces the interval and cumulative morphine requirements for the first 24 and 48 hours, respectively. CLINICAL QUESTION/LEVEL OF EVIDENCE: Therapeutic, II.
High rates of psychosis for black inpatients in Padua and Montreal: different contexts, similar findings
This study tested the hypothesis that despite differences in setting, specifically in Padua or Montreal, black psychiatric inpatients will have higher rates of assigned diagnosis of psychosis than their non-black counterparts. Data on psychotic patients admitted to the psychiatry ward were extracted from records of general hospitals in Padua and Montreal. Logistic regression analyses were conducted separately for each site to determine the relation between being black and receiving a diagnosis of psychosis, while controlling for sex and age. Most black patients at both sites received a diagnosis of psychosis (76% in Padua and 81% in Montreal). Being black was independently and positively associated with being diagnosed with psychosis compared to patients from other groups. Black patients admitted to psychiatry, whether in Padua or Montreal, were more likely to be assigned a diagnosis of psychosis than were other patients.
Prognostic importance of temporal changes in resting heart rate in heart failure patients: an analysis of the CHARM program.
BACKGROUND Resting heart rate (HR) is a predictor of adverse outcome in patients with heart failure (HF). Whether changes in HR over time in patients with chronic HF are also associated with adverse outcome is unknown. We explored the relationship between changes in HR from a preceding visit, time-updated HR (i.e. the most recent available HR value from a clinic visit) and subsequent outcomes in patients with chronic HF. METHODS AND RESULTS We studied 7599 patients enrolled in the Candesartan in Heart failure: Assessment of Reduction in Mortality and morbidity (CHARM) program. We calculated the change in HR from the preceding visit and explored its association with outcomes in Cox proportional hazards models, as well as the association between time-updated HR and outcome. An increase in HR from the preceding visit was associated with a higher risk of all-cause mortality and the composite endpoint of cardiovascular death or hospitalization for HF (adjusted hazard ratio 1.06, 95% confidence interval, CI: 1.05-1.08, P < 0.001, per 5 b.p.m. higher HR), with lowering of HR being associated with lower risk, adjusting for covariates, including time-updated β-blocker dose and baseline HR. Time-updated resting HR at each visit was also associated with risk (adjusted hazard ratio 1.07, 95% CI: 1.06-1.09; P < 0.001, per 5 b.p.m. higher HR). CONCLUSIONS Change in HR over time predicts outcome in patients with chronic HF, as does time-updated HR during follow-up. These data suggest that frequent outpatient monitoring of HR, and identification of changes over time, possibly with remote technologies, may identify patients with HF who are at increased risk of rehospitalization or death.
Forensic Application-Fingerprinting Based on File System Metadata
While much work has been invested in tools for acquisition and extraction of digital evidence, there are only a few tools that allow for automatic event reconstruction. In this paper, we present a generic approach for forensic event reconstruction based on digital evidence from file systems. Our approach applies the idea of fingerprinting to changes made by applications in file system metadata. We present a system with which it is possible to automatically compute file system fingerprints of individual actions. Using NTFS timestamps as an example, we show that with our approach it is possible to automatically reconstruct actions performed by different applications even if the sets of files accessed by those actions overlap.
Dynamic circular work-stealing deque
The non-blocking work-stealing algorithm of Arora, Blumofe, and Plaxton (henceforth ABP work-stealing) is on its way to becoming the multiprocessor load balancing technology of choice in both industry and academia. This highly efficient scheme is based on a collection of array-based double-ended queues (deques) with low cost synchronization among local and stealing processes. Unfortunately, the algorithm's synchronization protocol is strongly based on the use of fixed size arrays, which are prone to overflows, especially in the multiprogrammed environments for which they are designed. We present a work-stealing deque that does not have the overflow problem. The only ABP-style work-stealing algorithm that eliminates the overflow problem is the list-based one presented by Hendler, Lev and Shavit. Their algorithm indeed deals with the overflow problem, but it is complicated and introduces a trade-off between space and time complexity, due to the extra work required to maintain the list. Our new algorithm presents a simple lock-free work-stealing deque, which stores the elements in a cyclic array that can grow when it overflows. The algorithm has no limit other than integer overflow (and the system's memory size) on the number of elements that may be on the deque, and the total memory required is linear in the number of elements in the deque.
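For readers unfamiliar with the construction, the Python sketch below shows the growable cyclic-array indexing at the heart of such a deque. It deliberately omits the memory fences and the compare-and-swap on `top` that make the published algorithm lock-free, so it illustrates the data structure, not the synchronization protocol.

```python
# Illustrative sketch only: indices bottom/top increase monotonically and
# map to physical slots via index % capacity, so growing the array just
# copies the live logical range [top, bottom).
class CircularArray:
    def __init__(self, log_size=3):
        self.log_size = log_size
        self.slots = [None] * (1 << log_size)

    def size(self):
        return 1 << self.log_size

    def get(self, i):
        return self.slots[i % self.size()]

    def put(self, i, item):
        self.slots[i % self.size()] = item

    def grow(self, bottom, top):
        bigger = CircularArray(self.log_size + 1)
        for i in range(top, bottom):       # copy only live entries
            bigger.put(i, self.get(i))
        return bigger

class WorkStealingDeque:
    def __init__(self):
        self.array = CircularArray()
        self.bottom = 0    # owner pushes/pops here
        self.top = 0       # thieves steal here

    def push_bottom(self, item):
        if self.bottom - self.top >= self.array.size():
            self.array = self.array.grow(self.bottom, self.top)
        self.array.put(self.bottom, item)
        self.bottom += 1

    def pop_bottom(self):
        if self.bottom == self.top:
            return None
        self.bottom -= 1
        return self.array.get(self.bottom)

    def steal(self):                        # needs a CAS on top in reality
        if self.top >= self.bottom:
            return None
        item = self.array.get(self.top)
        self.top += 1
        return item
```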
A compensated radiolucent electrode array for combined EIT and mammography.
Electrical impedance tomography (EIT), a non-invasive technique used to image the electrical conductivity and permittivity within a body from measurements taken on the body's surface, could be used as an indicator for breast cancer. Because of the low spatial resolution of EIT, combining it with other modalities may enhance its utility. X-ray mammography, the standard screening technique for breast cancer, is the first choice for that other modality. Here, we describe a radiolucent electrode array that can be attached to the compression plates of a mammography unit enabling EIT and mammography data to be taken simultaneously and in register. The radiolucent electrode array is made by depositing thin layers of metal on a plastic substrate. The structure of the array is presented along with data showing its x-ray absorbance and electrical properties. The data show that the electrode array has satisfactory radiolucency and sufficiently low resistance.
Hydrolysis of mixed Ni(2+)-Fe(3+) and Mg(2+)-Fe(3+) solutions and mechanism of formation of layered double hydroxides.
The hydrolytic behavior of mixed metallic solutions containing Ni(2+)-Fe(3+) and Mg(2+)-Fe(3+) has been studied with respect to the relative proportion of the divalent and trivalent cations in solution as well as the quantity of NaOH added. The combination of X-ray diffraction and vibrational spectroscopy provides deep insight into both the nature of the phases and the structure of the formed LDH. The relative abundance of each phase is determined by using a mass balance diagram and is in good agreement with the solid characterization. We showed that the slow hydrolysis of mixed metallic solutions involves first the precipitation of Fe(3+) to form an akaganeite phase, and then the formation of a precursor on the iron oxyhydroxide surface, which transforms into LDH by diffusion of Fe(III) species from the akaganeite phase to the precursor. Interestingly, regardless of the iron content in solution, the same fraction of Fe(III) is incorporated into the LDH phase, which is correlated with the nature of the formed precursor. For the Ni(2+)-Fe(3+) solution, the precursor is an α-Ni hydroxide, which forms an LDH phase with a very low iron content (x(layer) = 0.1) but a high charge density provided by structural hydroxyl defects. This result unambiguously demonstrates that the LDH phase is formed from the precursor structure. For the Mg(2+)-Fe(3+) solution, the precursor is structurally equivalent to a β-Mg(OH)2 phase, leading to an LDH with a higher x(layer) value of ~0.2. In both cases, at the end of the titration experiments, a mixture of different phases was systematically observed. Hydrothermal treatment allows the recovery of a pure LDH phase exclusively for the Ni(2+)-Fe(3+) solution.
Nrityakosha: Preserving the intangible heritage of Indian classical dance
Preservation of intangible cultural heritage, such as music and dance, requires encoding of background knowledge together with digitized records of the performances. We present an ontology-based approach for designing a cultural heritage repository for that purpose. Since dance and music are recorded in multimedia format, we use Multimedia Web Ontology Language (MOWL) to encode the domain knowledge. We propose an architectural framework that includes a method to construct the ontology with a labeled set of training data and use of the ontology to automatically annotate new instances of digital heritage artifacts. The annotations enable creation of a semantic navigation environment in a cultural heritage repository. We have demonstrated the efficacy of our approach by constructing an ontology for the cultural heritage domain of Indian classical dance, and have developed a browsing application for semantic access to the heritage collection of Indian dance videos.
Block cipher based separable reversible data hiding in encrypted images
While most reversible data hiding in encrypted images (RDH-EI) schemes are based on stream ciphers, this paper presents an alternative method feasible for block-enciphered images. Before uploading data to a remote server, the content owner encrypts the original image with a block cipher algorithm using an encryption key. Then, the server embeds additional bits into the encrypted image with an embedding key to generate the marked encrypted image. On the recipient side, the additional bits can be extracted if the receiver has the embedding key. In case the receiver has only the encryption key, the marked encrypted image can be directly deciphered to a plaintext image with good quality. When both the embedding and encryption keys are available to the receiver, the original image can be recovered without any errors. Compared with the existing block cipher based RDH-EI method, drawbacks in the encryption and the recovery are avoided, and good embedding payloads are achieved.
Toolkits and Libraries for Deep Learning
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
A Fast KNN Algorithm for Text Categorization
The KNN algorithm applied to text categorization is a simple, valid, and non-parametric method. Traditional KNN has a serious drawback: similarity computation is extremely time-consuming. Its practicality is lost when the algorithm is applied to text categorization with high dimensionality and huge numbers of samples. In this paper, a method called TFKNN (Tree-Fast-K-Nearest-Neighbor) is presented, which can search for the exact k nearest neighbors quickly. In this method, an SSR tree for searching the k nearest neighbors is created, in which all child nodes of each non-leaf node are ranked according to the distances between their central points and the central point of their parent. The searching scope is then reduced based on the tree, and the time spent on similarity computation is largely decreased.
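The abstract does not spell out the SSR-tree construction or its pruning rules, so the sketch below only illustrates the general idea it builds on: a best-first search that expands subtrees in order of the distance between their central points and the query. The node layout and the absence of pruning are simplifying assumptions.

```python
# Minimal sketch of best-first exact k-NN over a center-annotated tree.
import heapq
from itertools import count
import numpy as np

class Node:
    def __init__(self, center, children=(), points=()):
        self.center = np.asarray(center, dtype=float)
        self.children = list(children)   # non-leaf: child subtrees
        self.points = list(points)       # leaf: stored vectors

def knn(root, query, k):
    query = np.asarray(query, dtype=float)
    tie = count()                        # tie-breaker so heapq never compares Nodes
    frontier = [(np.linalg.norm(query - root.center), next(tie), root)]
    best = []                            # max-heap of k nearest via negated distances
    while frontier:
        _, _, node = heapq.heappop(frontier)
        for p in node.points:            # leaf: score its vectors
            d = np.linalg.norm(query - np.asarray(p, dtype=float))
            heapq.heappush(best, (-d, next(tie), p))
            if len(best) > k:
                heapq.heappop(best)
        for child in node.children:      # expand nearest subtrees first
            dist = np.linalg.norm(query - child.center)
            heapq.heappush(frontier, (dist, next(tie), child))
    return [p for _, _, p in sorted(best, reverse=True)]   # nearest first
```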
Phosphor-Free Monolithic White-Light LED
Phosphor-free monolithic InGaN-based white-light LED has the advantages of simpler device process and potentially higher efficiency. Several techniques have been developed for implementing such white-light LEDs. Among them, the key issue is the growth of a high-quality high-indium InGaN/GaN quantum well (QW). An underlying InGaN layer growth technique is introduced for enhancing the crystal quality of a high-indium QW. To demonstrate the superior properties of a QW grown with this technique, a green LED is fabricated based on the underlying growth technique to compare with another LED of the same emission wavelength based on the conventional growth method. Then, the underlying growth technique is used to grow three yellow-emitting QWs of high efficiency. The yellow photons mix with blue light from an overgrown blue-emitting QW to produce white light. The improved properties of the phosphor-free monolithic white-light LED are discussed in detail.
Evaluation of Web Service Recommendation Performance via Sparsity Alleviating by Specificity-Aware Ontology-Based Clustering
With the development of information technology, a considerable number of web services have been published on the Internet in the last few years. It has become a challenging task to recommend applicable web services to users, and service recommendation has become an influential approach to guide users in discovering suitable services. In this situation, rating-based Collaborative Filtering (CF) is one of the most powerful approaches for service recommendation, but it suffers from data sparsity and cold-start problems due to insufficient user-service information. In this paper, we present a novel ontology-based clustering approach, based on term specificity and similarity, to overcome those limitations. We alleviate the sparsity problem using this clustering approach, and service user similarity is then calculated using the Pearson Correlation Coefficient (PCC). Finally, user ratings are predicted based on the alleviated user ratings and PCC values, and recommendation is based on these predictions. We evaluated the approach from several viewpoints based on our previous work, and the results show that it successfully alleviates the sparsity and cold-start problems and works effectively, with lower prediction error compared with existing approaches.
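The prediction step named above follows the usual PCC-based collaborative-filtering recipe. A minimal sketch is shown below; the ontology-based sparsity alleviation itself is not reproduced here, so the input matrix stands in for the already-alleviated ratings.

```python
# Pearson similarity between users, then weighted deviation-from-mean
# prediction; NaN marks a missing rating.
import numpy as np

def pearson(u, v):
    mask = ~np.isnan(u) & ~np.isnan(v)        # items both users rated
    if mask.sum() < 2:
        return 0.0
    du = u[mask] - u[mask].mean()
    dv = v[mask] - v[mask].mean()
    denom = np.sqrt((du ** 2).sum() * (dv ** 2).sum())
    return float(du @ dv / denom) if denom else 0.0

def predict(ratings, user, item):
    """Predict ratings[user, item] from similar users who rated the item."""
    mu = np.nanmean(ratings[user])
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or np.isnan(ratings[other, item]):
            continue
        w = pearson(ratings[user], ratings[other])
        num += w * (ratings[other, item] - np.nanmean(ratings[other]))
        den += abs(w)
    return mu + num / den if den else mu
```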
Algorithmic Prediction of Health-Care Costs
The rising cost of health care is one of the world’s most important problems. Accordingly, predicting such costs with accuracy is a significant first step in addressing this problem. Since the 1980s, there has been research on the predictive modeling of medical costs based on (health insurance) claims data using heuristic rules and regression methods. These methods, however, have not been appropriately validated using populations that the methods have not seen. We utilize modern data-mining methods, specifically classification trees and clustering algorithms, along with claims data from over 800,000 insured individuals over three years, to provide rigorously validated predictions of health-care costs in the third year, based on medical and cost data from the first two years. We quantify the accuracy of our predictions using unseen (out-of-sample) data from over 200,000 members. The key findings are: (a) our data-mining methods provide accurate predictions of medical costs and represent a powerful tool for prediction of health-care costs, (b) the pattern of past cost data is a strong predictor of future costs, and (c) medical information only contributes to accurate prediction of medical costs of high-cost members.
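As a rough illustration of this kind of claims-based modeling, the sketch below trains a classification tree to predict year-3 cost buckets from two years of per-member features. The feature set, bucket boundaries, and data are invented stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 10_000
# Toy stand-ins for two years of per-member features: prior costs + diagnoses.
X = np.column_stack([
    rng.gamma(2.0, 1500.0, n),         # year-1 cost
    rng.gamma(2.0, 1500.0, n),         # year-2 cost
    rng.integers(0, 2, (n, 5)),        # five binary diagnosis flags
])
year3_cost = 0.6 * X[:, 1] + 0.2 * X[:, 0] + rng.gamma(2.0, 800.0, n)
# Discretize year-3 cost into five buckets at the quintile boundaries.
y = np.digitize(year3_cost, np.quantile(year3_cost, [0.2, 0.4, 0.6, 0.8]))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=50).fit(X_tr, y_tr)
print("out-of-sample bucket accuracy:", tree.score(X_te, y_te))
```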
A Broadband Commonly Fed Dual-Polarized Antenna
A broadband commonly fed antenna with dual polarization is proposed in this letter. The main radiator of the antenna is designed as a loop formed by four staircase-like branches. In this structure, the 0° polarization and 90° polarization share the same radiator and reflector. Measurement shows that the proposed antenna obtains a broad impedance bandwidth of 70% (1.5–3.1 GHz) with |S11| < −10 dB and a high port-to-port isolation of 35 dB. The antenna gain within the operating frequency band is between 7.2 and 9.5 dBi, which indicates a stable broadband radiation performance. Moreover, a high cross-polarization discrimination of 25 dB is achieved across the whole operating frequency band.
Learning experience from the Coventry Community Personality Disorder Service
The Coventry Community Personality Disorder Service is a new service that has been developed successfully over the past two years, despite several challenges such as real difficulties with recruitment. This is a description of our model, its implementation and some of the emergent issues we encountered along the way.
Plasmonic beaming and active control over fluorescent emission.
Nanometallic optical antennas are rapidly gaining popularity in applications that require exquisite control over light concentration and emission processes. The search is on for high-performance antennas that offer facile integration on chips. Here we demonstrate a new, easily fabricated optical antenna design that achieves an unprecedented level of control over fluorescent emission by combining concepts from plasmonics, radiative decay engineering and optical beaming. The antenna consists of a nanoscale plasmonic cavity filled with quantum dots coupled to a miniature grating structure that can be engineered to produce one or more highly collimated beams. Electromagnetic simulations and confocal microscopy were used to visualize the beaming process. The metals defining the plasmonic cavity can be utilized to electrically control the emission intensity and wavelength. These findings facilitate the realization of a new class of active optical antennas for use in new optical sources and a wide range of nanoscale optical spectroscopy applications.
Management of fulminant dissecting cellulitis of the scalp in the pediatric population: Case report and literature review.
A case of fulminant dissecting cellulitis of the scalp in a fifteen-year-old African American male is reported. The presentation was refractory to standard medical treatment such that treatment required radical subgaleal excision of the entire hair-bearing scalp. Reconstruction was in the form of split-thickness skin grafting at the level of the pericranium following several days of vacuum-assisted closure dressing to promote an acceptable wound bed for skin grafting and to ensure appropriate clearance of infection. Numerous nonsurgical modalities have been described for the treatment of dissecting cellulitis of the scalp, with surgical intervention reserved for patients refractory to medical treatment. The present paper reports a fulminant form of the disease in an atypical age of presentation, adolescence. The pathophysiology, etiology, natural history, complications and treatment options for dissecting cellulitis of the scalp are reviewed, and the authors suggest this method of treatment to be efficacious for severe presentations refractory to medical therapy.
Risk factors for suicide in later life
Suicide rates are higher in later life than in any other age group. The design of effective suicide prevention strategies hinges on the identification of specific, quantifiable risk factors. Methodological challenges include the lack of systematically applied terminology in suicide and risk factor research, the low base rate of suicide, and its complex, multidetermined nature. Although variables in mental, physical, and social domains have been correlated with completed suicide in older adults, controlled studies are necessary to test hypothesized risk factors. Prospective cohort and retrospective case control studies indicate that affective disorder is a powerful independent risk factor for suicide in elders. Other mental illnesses play less of a role. Physical illness and functional impairment increase risk, but their influence appears to be mediated by depression. Social ties and their disruption are significantly and independently associated with risk for suicide in later life, associations that may be moderated by a rigid, anxious, and obsessional personality style. Affective illness is a highly potent risk factor for suicide in later life with clear implications for the design of prevention strategies. Additional research is needed to define more precisely the interactions between emotional, physical, and social factors that determine risk for suicide in the older adult.
Amitriptyline or not, that is the question: pharmacogenetic testing of CYP2D6 and CYP2C19 identifies patients with low or high risk for side effects in amitriptyline therapy.
BACKGROUND Amitriptyline has been replaced in many countries by alternative and more expensive drugs based on claims of improved tolerability and toxicity and despite slightly reduced efficacy. Preliminary studies indicate that adverse effects could be linked to polymorphisms of drug-metabolizing enzymes, but information on their clinical impact remains scanty and includes mainly case reports. We conducted a prospective blinded two-center study seeking correlations between CYP2C19 and CYP2D6 genotypes, drug concentrations, adverse events, and therapy response. METHODS Fifty Caucasian inpatients with at least medium-grade depressive disorder received amitriptyline at a fixed dose of 75 mg twice a day. Blood samples for concentration monitoring of amitriptyline and nortriptyline were taken weekly until discharge along with evaluations of depression (Hamilton Depression Scale and Clinical Global Impression Scale) and side effect (Dosage Record and Treatment Emergent Symptoms Scale; DOTES) scores. RESULTS In a ROC analysis, nortriptyline but not amitriptyline concentrations correlated with side effects (DOTES sum score ≥5; area under the curve, 0.733; P = 0.008). Carriers of two functional CYP2D6 alleles had a significantly lower risk of side effects than carriers of only one functional allele (12.1% vs 76.5%; P = 0.00001). The lowest risk was observed for carriers of two functional CYP2D6 alleles combined with only one functional CYP2C19 allele [0 of 13 (0%) vs 9 of 11 (81.8%) for the high-risk group; P = 0.00004]. We found no correlations between drug concentrations or genotypes and therapeutic response. CONCLUSIONS Combined pharmacogenetic testing for CYP2D6 and CYP2C19 identifies patients with low risk for side effects in amitriptyline therapy and could possibly be used to individualize antidepressive regimens and reduce treatment cost. Identification of genotypes associated with slightly reduced intermediate metabolism may be more important than currently anticipated. It could also be the key to demonstrating cost-effectiveness for CYP2D6 genotyping in critical dose drugs.
Performance comparison of FPGA, GPU and CPU in image processing
Many applications in image processing have high inherent parallelism. FPGAs have shown very high performance in spite of their low operational frequency by fully extracting this parallelism. In recent microprocessors, it has also become possible to exploit the parallelism using multi-cores that support improved SIMD instructions, though programmers have to use them explicitly to achieve high performance. Recent GPUs support a large number of cores and have great potential for high performance in many applications. However, the cores are grouped, and data transfer between the groups is very limited. Programming tools for FPGAs, SIMD instructions on CPUs, and the large number of cores on GPUs have been developed, but it is still difficult to achieve high performance on these platforms. In this paper, we compare the performance of FPGA, GPU and CPU using three applications in image processing: two-dimensional filters, stereo vision and k-means clustering, and we make clear which platform is faster under which conditions.
''Black'' Cultural Capital, Status Positioning, and Schooling Conflicts for Low-Income African American Youth
Previous literature has failed to empirically demonstrate the conceptual distinction that social scientists make between “dominant” and “non-dominant” cultural capital. This article provides evidence of the coexistence of these two forms of capital within the social and academic lives of poor ethnic minority students. Using in-depth interviews with 44 low-income African American youth, I illustrate how these students negotiate their perceptions of the differential values placed by educators on these two forms of capital. Often, scholars research the effects of (dominant) cultural capital in social reproduction across various social classes, but not the influence of (non-dominant) cultural capital on status relations within socially marginalized communities. By taking into account the interplay between these two forms of capital in the lives of low-income minority students, researchers might develop a more complete and nuanced understanding of how culture ultimately affects the prospects of mobility for lower status social groups.
Sacrococcygeal teratoma with anorectal malformation.
A 7-month-old child presented with imperforate anus, penoscrotal hypospadias and transposition, and a midline mucosa-lined perineal mass. At surgery the mass was found to be supplied by the median sacral artery. It was excised and the anorectal malformation was repaired by posterior sagittal anorectoplasty. Histologically the mass revealed well-differentiated colonic tissue. The final diagnosis was well-differentiated sacrococcygeal teratoma in association with anorectal malformation.
Adaptive Speed Control for Permanent-Magnet Synchronous Motor System With Variations of Load Inertia
Considering the variations of inertia in real applications, an adaptive control scheme for the permanent-magnet synchronous motor speed-regulation system is proposed in this paper. First, a composite control method, i.e., the extended-state-observer (ESO)-based control method, is employed to ensure the performance of the closed-loop system. The ESO can estimate both the states and the disturbances simultaneously so that the composite speed controller can have a corresponding part to compensate for the disturbances. Then, considering the case of variations of load inertia, an adaptive control scheme is developed by analyzing the control performance relationship between the feedforward compensation gain and the system inertia. By using inertia identification techniques, a fuzzy-inferencer-based supervisor is designed to automatically tune the feedforward compensation gain according to the identified inertia. Simulation and experimental results both show that the proposed method achieves a better speed response in the presence of inertia variations.
53Gbps native GF((2^4)^2) composite-field AES-encrypt/decrypt accelerator for content-protection in 45nm high-performance microprocessors
An on-die, reconfigurable AES encrypt/decrypt hardware accelerator is fabricated in 45nm CMOS, targeted for content-protection in high-performance microprocessors. Compared to conventional AES implementations, this design computes the entire AES round in the native GF((2^4)^2) composite field, with the one-time GF(2^8)-to-GF((2^4)^2) mapping cost amortized over multiple AES iterations. This approach, along with a fused Mix/InvMixColumns circuit and folded ShiftRow datapath, results in 20% area savings and a 67% reduction in worst-case interconnect length, enabling AES-128/192/256 ECB block throughput of 53/44/38 Gbps and 125 mW power measured at 1.1 V, 50°C.
High aquatic niche overlap in the newts Triturus cristatus and T. marmoratus (Amphibia, Urodela)
We studied spatial niche metrics of large-bodied newts (Triturus cristatus and T. marmoratus) in three breeding ponds in western France. Adults and larvae were sampled with underwater funnel traps. Larvae were identified to the species with diagnostic microsatellite DNA markers. The distribution of adult T. cristatus and T. marmoratus across pond regions differed in one out of six cases, no differences were observed between larvae (two ponds studied). Niche overlap and niche breadth indices across resource states defined as pond regions or individual traps were high (Schoener's C: pond regions 0.60–0.98, traps 0.35–0.71; Levins' B: pond regions 0.71–0.98, traps 0.35–0.76). Adults of large-bodied newts significantly differed in resource use from small-bodied newts (T. helveticus). The results are discussed in view of the occurrence of interspecific breeding attempts, and the unpredictable ecological characteristics of newt breeding ponds.
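Assuming the authors used the standard definitions, the two indices quoted above take the following forms, with p_xi the proportion of species x's resource use falling in state i over n states; the 0-1 Levins values in the abstract presumably reflect the standardized form B_A.

```latex
% Schoener's overlap C between species x and y, and Levins' niche
% breadth B; B ranges over [1, n] and is often rescaled to B_A in [0, 1].
C_{xy} = 1 - \tfrac{1}{2}\sum_{i=1}^{n}\bigl|p_{xi}-p_{yi}\bigr|, \qquad
B_x = \frac{1}{\sum_{i=1}^{n} p_{xi}^{2}}, \qquad
B_A = \frac{B_x - 1}{n - 1}
```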
Hand Gesture Recognition using Multi-Scale Colour Features, Hierarchical Models and Particle Filtering
This paper presents algorithms and a prototype system for hand tracking and hand posture recognition. Hand postures are represented in terms of hierarchies of multi-scale colour image features at different scales, with qualitative inter-relations in terms of scale, position and orientation. In each image, detection of multi-scale colour features is performed. Hand states are then simultaneously detected and tracked using particle filtering, with an extension of layered sampling referred to as hierarchical layered sampling. Experiments are presented showing that the performance of the system is substantially improved by performing feature detection in colour space and including a prior with respect to skin colour. These components have been integrated into a real-time prototype system, applied to a test problem of controlling consumer electronics using hand gestures. In a simplified demo scenario, this system has been successfully tested by participants at two fairs during 2001.
VoCo: text-based insertion and replacement in audio narration
Editing audio narration using conventional software typically involves many painstaking low-level manipulations. Some state-of-the-art systems allow the editor to work in a text transcript of the narration, and perform select, cut, copy and paste operations directly in the transcript; these operations are then automatically applied to the waveform in a straightforward manner. However, an obvious gap in the text-based interface is the ability to type new words not appearing in the transcript, for example inserting a new word for emphasis or replacing a misspoken word. While high-quality voice synthesizers exist today, the challenge is to synthesize the new word in a voice that matches the rest of the narration. This paper presents a system that can synthesize a new word or short phrase such that it blends seamlessly in the context of the existing narration. Our approach is to use a text-to-speech synthesizer to say the word in a generic voice, and then use voice conversion to convert it into a voice that matches the narration. Offering a range of degrees of control to the editor, our interface supports fully automatic synthesis, selection among a candidate set of alternative pronunciations, fine control over edit placements and pitch profiles, and even guidance by the editor's own voice. The paper presents studies showing that the output of our method is preferred over baseline methods and often indistinguishable from the original voice.
Bioavailability of phenolic acids
Two large classes of phenolic acids are covered in this review: benzoic acid derivatives and cinnamic acid derivatives. They are widely distributed in fruits and vegetables at different concentrations; for example, hydroxycinnamic acid concentrations are higher than those of hydroxybenzoic acids. Concerning consumption, hydroxycinnamic acids contribute more to total polyphenol intake than benzoic acid derivatives or flavonoids. This phenolic acid intake is driven largely by coffee, which is very rich in hydroxycinnamic acids. Moreover, several experimental and epidemiological studies report the protection of phenolic acids against various degenerative diseases. However, despite these interesting attributes, and even though phenolic acids are the main polyphenols consumed, their bioavailability has not received as much attention as that of flavonoids. Bioavailability is an essential concept for understanding the health-promoting properties of phenolic acids and serves as a tool for designing in vivo and in vitro experiments on their biological properties. Therefore, a compilation of bioavailability data for phenolic acids is presented here, paying attention to the two routes of phenolic acid bioavailability: direct, derived from phenolic acid consumption, and indirect, derived from flavonoid consumption. A new relevant concept, which may be named the total bioavailability of phenolic acids, then combines the direct absorption and metabolism of phenolic acids from food with the phenolic acids made bioavailable when the gut microflora cleaves the main skeleton ring of flavonoids.
Transmit/receive isolator for UHF RFID reader with wideband balanced directional coupler
A wideband balanced directional coupler for a UHF RFID reader is proposed. The wideband balanced coupler has a high-isolation feature that is insensitive to load variations. The transmit/receive isolation characteristic is achieved for the UHF RFID band (860–960 MHz). The proposed structure maintains an isolation of more than 45 dB in the UHF RFID band across variations in matching.
Mobile web and cloud services enabling Internet of Things
The Internet of Things (IoT) represents a comprehensive environment that consists of a large number of sensors and mediators interconnecting heterogeneous physical objects to the Internet. IoT applications are prominent in many areas such as smart city, smart workplace, smart plants, smart agriculture and various ubiquitous computing areas. The research roadmap of IoT spans vast domains such as mobile computing, wireless and sensor networks, service-oriented computing, middleware, cloud computing and big data analytics, taking advantage of several recent breakthroughs in the respective domains. Primarily, the challenges associated with the realization of IoT scenarios can be summarized across three layers: the sensing and smart devices layer, the connectivity layer and the cloud layer. The first layer deals with the physical objects, including energy-efficient communication of the devices and developing the associated standards so that the interaction among the devices is seamless. The connectivity layer deals with sensor data acquisition and provisioning, through gateways and sinks. The top cloud layer deals with resource provisioning for storage and processing of the acquired data, in extracting domain-specific information. The participation of smartphones, both as sensors and as gateways, brings mobile web services and mobile cloud services into this cloud-based IoT architecture. The paper takes a cross-layered approach and addresses the primary challenges of IoT through mobile web and cloud services. The paper also discusses the state of the art of each of the respective research domains, along with the scope for extensions and recent trends.
Why Heuristics Work.
The adaptive toolbox is a Darwinian-inspired theory that conceives of the mind as a modular system that is composed of heuristics, their building blocks, and evolved capacities. The study of the adaptive toolbox is descriptive and analyzes the selection and structure of heuristics in social and physical environments. The study of ecological rationality is prescriptive and identifies the structure of environments in which specific heuristics either succeed or fail. Results have been used for designing heuristics and environments to improve professional decision making in the real world.
Opportunistic Computation Offloading in Mobile Edge Cloud Computing Environments
The dynamic mobility and limitations in computational power, battery resources, and memory availability are the main bottlenecks in fully harnessing mobile devices as data mining platforms. Therefore, mobile devices are augmented with cloud resources in mobile edge cloud computing (MECC) environments to seamlessly execute data mining tasks. MECC infrastructures provide compute, network, and storage services within one-hop wireless distance of mobile devices to minimize communication latency, and provide localized computation to reduce the burden on federated cloud systems. However, when and how to offload the computation is a hard problem. In this paper, we present an opportunistic computation offloading scheme to efficiently execute data mining tasks in MECC environments. The scheme selects the suitable execution mode after analyzing the amount of unprocessed data, privacy configurations, contextual information, and available on-board local resources (memory, CPU, and battery power). We develop a mobile application for online activity recognition and evaluate the proposed scheme using an event data stream of 5 million activities collected from 12 users over 15 days. The experiments show significant improvements in execution time and battery power consumption, resulting in 98% data reduction.
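The abstract names the inputs the offloading scheme weighs but not its decision rule, so the following sketch is an invented heuristic illustrating how such inputs could be combined into an execution-mode choice; all thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    pending_mb: float        # unprocessed activity data
    battery_pct: float
    cpu_load_pct: float
    privacy_sensitive: bool
    edge_reachable: bool     # a one-hop MECC node is available

def choose_execution_mode(s: DeviceState) -> str:
    """Illustrative rule only; the paper's actual policy is not given."""
    if s.privacy_sensitive or not s.edge_reachable:
        return "local"                  # never ship restricted data
    if s.battery_pct < 20 or s.cpu_load_pct > 80:
        return "offload"                # the device is the bottleneck
    if s.pending_mb > 50:
        return "offload"                # large batches favor the edge
    return "local"                      # small jobs: avoid network cost

print(choose_execution_mode(DeviceState(120, 35, 40, False, True)))  # offload
```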
Safe Adaptive Importance Sampling
Importance sampling has become an indispensable strategy to speed up optimization algorithms for large-scale applications. Improved adaptive variants—using importance values defined by the complete gradient information, which changes during optimization—enjoy favorable theoretical properties, but are typically computationally infeasible. In this paper we propose an efficient approximation of gradient-based sampling, which is based on safe bounds on the gradient. The proposed sampling distribution is (i) provably the best sampling with respect to the given bounds, (ii) always better than uniform sampling and fixed importance sampling and (iii) can efficiently be computed—in many applications at negligible extra cost. The proposed sampling scheme is generic and can easily be integrated into existing algorithms. In particular, we show that coordinate descent (CD) and stochastic gradient descent (SGD) can enjoy a significant speed-up under the novel scheme. The proven efficiency of the proposed sampling is verified by extensive numerical testing.
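As a simplified illustration of bound-based sampling: the paper derives a provably optimal distribution from both lower and upper gradient bounds, whereas sampling proportionally to the upper bound alone, as below, is a cruder proxy. The importance reweighting keeps the gradient estimate unbiased regardless of the distribution used.

```python
import numpy as np

def sampling_distribution(upper_bounds):
    """Sample coordinates proportionally to safe upper gradient bounds."""
    u = np.maximum(np.asarray(upper_bounds, dtype=float), 1e-12)
    return u / u.sum()

def sgd_step(w, grad_fn, upper_bounds, lr, rng):
    """One coordinate-sampled step; grad_fn(w, i) is the i-th partial."""
    p = sampling_distribution(upper_bounds)
    i = rng.choice(len(w), p=p)
    g = np.zeros_like(w)
    g[i] = grad_fn(w, i) / p[i]     # E[g] equals the full gradient
    return w - lr * g
```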
Development of functional biscuit from soy flour & rice bran
The research explored the possibility of fortifying wheat flour with soy flour and rice bran to formulate a functional biscuit, as these ingredients can improve the quality of food products through their various functional properties. Supplementation of wheat flour with soy flour and rice bran was tried at the 10%, 15%, 20% and 25% levels each. The prepared biscuits were subjected to physical, sensory and nutritional analysis to evaluate their suitability for consumption. Biscuit width decreased from 44 to 36.2 with increasing substitution of the composite flour of rice bran and soy; a similar trend was shown by the spread ratio. However, biscuit thickness increased from 9.2 to 10.6 with increasing level of substitution. A nine-point hedonic score system was used for sensory evaluation of the prepared biscuits; scores generally decreased with increasing level of substitution. On overall acceptability, the biscuit incorporating 15% soy flour + 15% rice bran obtained the highest rating compared to the other treatments. At p ≤ 0.05, there was no significant difference between the control treatment and the best-rated supplemented biscuit (70:15:15) in general sensory preference. Nutritional evaluation of the best-rated supplemented biscuit gave protein 15.7%, fat 19.5%, fiber 2.2% and moisture 3.6%. Thus, supplementation with soy flour and rice bran at the 15% level each would improve nutritional quality without adversely affecting sensory parameters.
A Graph Derivation Based Approach for Measuring and Comparing Structural Semantics of Ontologies
Ontology reuse offers great benefits through measuring and comparing ontologies. However, state-of-the-art approaches for measuring ontologies neglect the problems of both the polymorphism of ontology representation and the addition of implicit semantic knowledge. One way to tackle these problems is to devise a mechanism for ontology measurement that is stable, the basic criterion for automatic measurement. In this paper, we present a graph derivation representation (GDR) based approach for stable semantic measurement, which captures the structural semantics of ontologies and addresses the problems that cause unstable measurement of ontologies. This paper makes three original contributions. First, we introduce and define the concepts of semantic measurement and stable measurement, and we present the GDR-based approach, a three-phase process that transforms an ontology into its GDR. Second, we formally analyze important properties of GDRs based on which stable semantic measurement and comparison can be achieved successfully. Third, we compare our GDR-based approach with existing graph-based methods using a dozen real-world exemplar ontologies. Our experimental comparison is conducted on nine ontology measurement entities and a distance metric that stably compares the similarity of two ontologies in terms of their GDRs.
Treatment of Lennox-Gastaut syndrome: overview and recent findings
Lennox-Gastaut syndrome (LGS) is a rare, age-related syndrome, characterized by multiple seizure types, a specific electro-encephalographic pattern, and mental regression. However, published data on the etiology, evolution, and therapeutic approach of LGS are contradictory, partly because the precise definition of LGS used in the literature varies. In the most recent classification, LGS belongs to the epileptic encephalopathies and is highly refractory to all antiepileptic drugs. Numerous treatments, medical and non-medical, have been proposed and results mostly from open studies or case series have been published. Sometimes, patients with LGS are included in a more global group of patients with refractory epilepsy. Only 6 randomized double-blind controlled trials of medical treatments, which included patients with LGS, have been published. Overall, treatment is rarely effective and the final prognosis remains poor in spite of new therapeutic strategies. Co-morbidities need specific treatment. This paper summarizes the definition, diagnosis and therapeutic approach to LGS, including not only recognized antiepileptic drugs, but also "off label" medications, immune therapy, diet, surgery and some perspectives for the future.
An entropy based classification scheme for land applications of polarimetric SAR
In this paper we outline a new scheme for parameterizing polarimetric scattering problems, which has application in the quantitative analysis of polarimetric SAR data. The method relies on an eigenvalue analysis of the coherency matrix and employs a three-level Bernoulli statistical model to generate estimates of the average target scattering matrix parameters from the data. The scattering entropy is a key parameter in determining the randomness in this model and is seen as a fundamental parameter in assessing the importance of polarimetry in remote sensing problems. We show application of the method to some important classical random media scattering problems and apply it to POLSAR data from the NASA/JPL AIRSAR database.
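For reference, the entropy in this scheme is the standard quantity computed from the eigenvalues λ1, λ2, λ3 of the 3×3 coherency matrix; the abstract only names it, so the formula is supplied here.

```latex
% Pseudo-probabilities from the coherency-matrix eigenvalues, and the
% scattering entropy (log base 3, so that 0 <= H <= 1).
p_i = \frac{\lambda_i}{\sum_{k=1}^{3}\lambda_k}, \qquad
H = -\sum_{i=1}^{3} p_i \log_3 p_i
```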
A G-Band (140–220 GHz) planar stubbed branch-line balun in BCB technology
A G-band planar stubbed branch-line balun is designed and fabricated in 3-μm-thick BCB technology. This balun topology needs no through-substrate via holes or thin-film resistors, which makes it extremely suitable for realization on the single-layer high-resistivity substrates commonly used at millimeter-wave frequencies or on post-processed BCB layers on top of standard semi-insulating wafers. The design is simulated and validated by measurements. Measurement results on two fabricated back-to-back baluns show better than 10 dB input and output return loss and 3.2 dB insertion loss from 140 to 220 GHz.
Abstract commensurators of profinite groups
In this paper we initiate a systematic study of the abstract commensurators of profinite groups. The abstract commensurator of a profinite group G is a group Comm(G) which depends only on the commensurability class of G. We study various properties of Comm(G); in particular, we find two natural ways to turn it into a topological group. We also use Comm(G) to study topological groups which contain G as an open subgroup (all such groups are totally disconnected and locally compact). For instance, we construct a topologically simple group which contains the pro-2 completion of the Grigorchuk group as an open subgroup. On the other hand, we show that some profinite groups cannot be embedded as open subgroups of compactly generated topologically simple groups. Several celebrated rigidity theorems, such as Pink's analogue of Mostow's strong rigidity theorem for simple algebraic groups defined over local fields and the Neukirch-Uchida theorem, can be reformulated as structure theorems for the commensurators of certain profinite groups.
A Q-band (40–45 GHz) 16-element phased-array transmitter in 0.18-μm SiGe BiCMOS technology
A 16-element phased-array transmitter based on 4-bit RF phase shifters is designed in 0.18-μm SiGe BiCMOS technology for Q-band applications. The phased array shows 12.5±1.5 dB of power gain per channel at 42.5 GHz for all phase states, and the 3-dB gain bandwidth is 40–45.6 GHz. The input and output return loss is less than -10 dB at 37.5–50 GHz. The transmitter also results in ≤8.8° of RMS phase error and ≤1.2 dB of RMS gain error for all phase states at 30–50 GHz. The maximum saturated output power is -2.5±1.5 dBm per channel at 42.5 GHz. The RMS gain variation and RMS phase mismatch between all 16 channels are ≤0.5 dB and ≤4.5°, respectively. The chip consumes 720 mA from a 5 V supply voltage and the overall chip size is 2.6×3.2 mm². To our knowledge, this is the first implementation of a 16-element phased array on a silicon chip with the RF phase-shifting architecture at any frequency.
Number of people required for usability evaluation: the 10±2 rule
Usability evaluation is essential to make sure that newly released software products are easy to use, efficient and effective in reaching goals, and satisfactory to users. For example, when a software company wants to develop and sell a new product, the company needs to evaluate the usability of the new product before launching it on the market, to avoid the possibility that the new product contains usability problems, which span from cosmetic problems to severe functional problems. Three widely used methods for usability evaluation are Think Aloud (TA), Heuristic Evaluation (HE) and Cognitive Walkthrough (CW). The TA method is commonly employed with lab-based user testing, though there are variants, including thinking aloud at the user's workplace instead of in the lab. What we discuss here is the TA method combined with lab-based user testing, in which test users use products while simultaneously and continuously thinking aloud, and experimenters record users' behaviors and verbal protocols in the laboratory. HE is a usability inspection method, in which a small number of evaluators find usability problems in a user interface design by examining the interface and judging its compliance with well-known usability principles, called heuristics. CW is a theory-based method, in which evaluators evaluate every step necessary to perform a scenario-based task and look for usability problems that would interfere with learning by exploration. These three methods have their own advantages and disadvantages. For instance, the TA method provides good qualitative data from a small number of test users, but the laboratory environment may influence test users' behaviors. HE is a cheap, fast and easy-to-use method, but it often finds overly specific and low-priority usability problems, including some that are not real problems. CW helps find mismatches between users' and designers' conceptualizations of a task, but it requires extensive knowledge of cognitive psychology and technical details to apply. However, even though these advantages and disadvantages show the overall characteristics of the three major usability evaluation methods, they do not let us compare the methods quantitatively or see their efficiency clearly. Because one of the reasons why so-called discount methods, such as HE and CW, were developed is to save the costs of usability evaluation, cost-related criteria for comparing usability evaluation methods are meaningful to usability practitioners as well as usability researchers. One of the most disputed issues related to the cost of usability evaluation is sample size: how many users or evaluators are needed to achieve a targeted usability evaluation performance, for example an 80% overall discovery rate? The required sample size is known to depend on an estimate of the problem discovery rate across participants. The overall discovery rate is a common quantitative measure used to show the effectiveness of a specific usability evaluation method in most usability evaluation studies. It is also called the overall detection rate or thoroughness measure, and is the ratio of the number of unique usability problems detected by all experiment participants to the number of usability problems that exist in the evaluated system, ranging between 0 and 1. Overall discovery rates have been reported more often than any other criterion measure in usability evaluation experiments and are also a key component for projecting the required sample size of a usability evaluation study. Thus, how many test users or evaluators should participate in a usability evaluation is a critical issue, considering its cost-effectiveness.
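The sample-size projection discussed above rests on the standard problem-discovery model: if each participant finds a given problem with probability p, the expected proportion found by n participants, and the n needed to reach a target proportion P, are:

```latex
% Expected discovery proportion and the implied sample size
% (the standard Nielsen/Virzi problem-discovery model).
\mathrm{Found}(n) = 1 - (1 - p)^{n}, \qquad
n \ \geq\ \frac{\ln(1 - P)}{\ln(1 - p)}
```

For example, with p = 0.15, reaching P = 0.80 requires n ≥ ln(0.2)/ln(0.85) ≈ 9.9, i.e., about ten participants, consistent with the 10±2 rule in the title.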
NewsReader: Using knowledge resources in a cross-lingual reading machine to generate more knowledge from massive streams of news
In this article, we describe a system that reads news articles in four different languages and detects what happened, who was involved, and where and when it happened. This event-centric information is represented as episodic situational knowledge on individuals in an interoperable RDF format that allows for reasoning on the implications of the events. Our system covers the complete path from unstructured text to structured knowledge, for which we defined a formal model that links interpreted textual mentions of things to their representation as instances. The model forms the skeleton for interoperable interpretation across different sources and languages. The real content, however, is defined using multilingual and cross-lingual knowledge resources, both semantic and episodic. We explain how these knowledge resources are used in the processing of text and ultimately define the actual content of the episodic situational knowledge that is reported in the news. The knowledge and model in our system can be seen as an example of how the Semantic Web helps NLP. However, our system also generates massive episodic knowledge of the same type that the Semantic Web is built on. We thus envision a cycle of knowledge acquisition and NLP improvement on a massive scale. This article reports on the details of the system as well as on the performance of various high-level components. We demonstrate that our system performs at state-of-the-art level for various subtasks in the four languages of the project, and we also consider the full integration of these tasks in an overall system with the purpose of reading text. We applied our system to millions of news articles, generating billions of triples expressing formal semantic properties. This shows the capacity of the system to perform at an unprecedented scale.
Deficits in AIDS/HIV knowledge among physicians and nurses at a Minnesota public teaching hospital.
We administered a questionnaire pertaining to recent gains in knowledge about HIV/AIDS treatment and natural history in mid-1990 to all physicians and nurses at a 455-bed public teaching hospital. Surveys were returned by 127 physicians (46%) and 541 nurses (77%). Responses indicated that only 37% of physicians and 18% of nurses knew that the risk for an AIDS-related opportunistic infection becomes significant when the T-helper cell count falls below 200 cells per cubic millimeter. One-fourth of physicians (23%) and more than one-half of nurses (55%) were not aware that the HIV enzyme immunoassay test alone is insufficient to properly determine a patient's HIV serostatus. The survey results revealed a broad deficit in knowledge about the natural history and treatment of HIV infection and demonstrated the need for a clinically relevant core HIV/AIDS knowledge curriculum and for strategies to better educate health care providers to improve care given to HIV-infected patients.
The Impact of Stigma and Personal Experiences on the Help-Seeking Behaviors of Medical Students With Burnout.
PURPOSE Because of the high prevalence of burnout among medical students and its association with professional and personal consequences, the authors evaluated the help-seeking behaviors of medical students with burnout and compared their stigma perceptions with those of the general U.S. population and age-matched individuals. METHOD The authors surveyed students at six medical schools in 2012. They measured burnout, symptoms of depression, and quality of life using validated instruments and explored help-seeking behaviors, perceived stigma, personal experiences, and attitudes toward seeking mental health treatment. RESULTS Of 2,449 invited students, 873 (35.6%) responded. A third of respondents with burnout (154/454; 33.9%) sought help for an emotional/mental health problem in the last 12 months. Respondents with burnout were more likely than those without burnout to agree or strongly agree with 8 of 10 perceived stigma items. Respondents with burnout who sought help in the last 12 months were twice as likely to report having observed supervisors negatively judge students who sought care (odds ratio [OR] 2.06 [95% confidence interval (CI) 1.25-3.39], P < .01). They also were more likely to have observed peers reveal a student's emotional/mental health problem to others (OR 1.63 [95% CI 1.08-2.47], P = .02). A smaller percentage of respondents would definitely seek professional help for a serious emotional problem (235/872; 26.9%) than of the general population (44.3%) and age-matched individuals (38.8%). CONCLUSIONS Only a third of medical students with burnout seek help. Perceived stigma, negative personal experiences, and the hidden curriculum may contribute.
Energy management in a parallel hybrid electric vehicle with a continuously variable transmission
This paper describes a control strategy for the energy management of a post-transmission parallel hybrid electric vehicle (PHEV) equipped with a continuously variable transmission (CVT). Results are presented highlighting the dynamic behavior of the model, as well as the fuel consumption normalized to a base case. A dynamic, forward simulation of a complete compact class vehicle including driver model and computer controller was written in Matlab / Simulink. Modeled vehicle components include: internal combustion engine, engine clutch, CVT, electric motor, lead-acid battery, vehicle driveline, hydraulic brakes, and the vehicle and tire dynamics.
Sevelamer is cost effective versus calcium carbonate for the first-line treatment of hyperphosphatemia in new patients to hemodialysis: a patient-level economic evaluation of the INDEPENDENT-HD study.
BACKGROUND The recent multicenter, randomized, open-label INDEPENDENT study demonstrated that sevelamer improves survival in new to hemodialysis (HD) patients compared with calcium carbonate. The objective of this study was to determine the cost-effectiveness of sevelamer versus calcium carbonate for patients new to HD, using patient-level data from the INDEPENDENT study. STUDY DESIGN Cost-effectiveness analysis. SETTING AND POPULATION Adult patients new to HD in Italy. MODEL, PERSPECTIVE, TIMEFRAME A patient-level cost-effectiveness analysis was conducted from the perspective of the Servizio Sanitario Nazionale, Italy's national health service. The analysis was conducted for a 3-year time horizon. The cost of dialysis was excluded from the base case analysis. INTERVENTION Sevelamer was compared to calcium carbonate. OUTCOMES Total life years (LYs), total costs, and the incremental cost per LY gained were calculated. Bootstrapping was used to estimate confidence intervals around LYs, costs, and cost-effectiveness and to calculate the cost-effectiveness acceptability curve. RESULTS Sevelamer was associated with a gain of 0.26 LYs compared to calcium carbonate over the 3-year time horizon. Total drug costs were €3,282 higher for sevelamer versus calcium carbonate, while total hospitalization costs were €2,020 lower for sevelamer versus calcium carbonate. The total incremental cost of sevelamer versus calcium carbonate was €1,262, resulting in a cost per LY gained of €4,897. The bootstrap analysis demonstrated that sevelamer was cost-effective compared with calcium carbonate in 99.4% of 10,000 bootstrap replicates, assuming a willingness-to-pay threshold of €20,000 per LY gained. LIMITATIONS Data on hospitalizations were taken from a post hoc retrospective chart review of the patients included in the INDEPENDENT study. Patient quality of life or health utility was not included in the analysis. CONCLUSIONS Sevelamer is a cost-effective alternative to calcium carbonate for the first-line treatment of hyperphosphatemia in new to HD patients in Italy.
Visual social attention in autism spectrum disorder: Insights from eye tracking studies
We review different aspects of visual social attention in autism spectrum disorders (ASD) from infancy to adulthood in light of the eye-tracking literature. We first assess the assumption that individuals with ASD demonstrate a deficit in social orienting together with decreased attention to socially relevant stimuli such as faces compared with typically developing (TD) individuals. Results show that social orienting is actually not qualitatively impaired and that decreased attention to faces does not generalize across contexts. We also assess the assumption that individuals with ASD demonstrate excess mouth gaze and diminished eye gaze compared with TD individuals. We find that this assumption receives little support across ages and discuss some factors that might initially have led to this conjecture. We report that the ability to follow the direction of another person's gaze needs further examination, and that eye-tracking studies add to the evidence that individuals with ASD have difficulty interpreting gaze cues. Finally, we highlight innovative data acquisition and analysis methods that are increasingly shedding light on the more subtle nature of the profound social difficulties experienced by individuals with ASD.
Plasma concentrations of estradiol in women suffering from schizophrenia treated with conventional versus atypical antipsychotics
BACKGROUND Low estrogen levels leading to an elevated rate of menstrual dysfunctions such as amenorrhea and irregular menstruation have been described in women with schizophrenia and have often been attributed to antipsychotic-induced hyperprolactinemia. However, there is some evidence that "hypoestrogenism" in schizophrenic women does not occur exclusively under medication with hyperprolactinemia-inducing antipsychotics. While the precise mechanism of low estrogen levels in schizophrenic women has not yet been elucidated, "hypoestrogenism" is of clinical relevance because estrogen seems to exert an antipsychotic-like effect in schizophrenia and thus to positively affect the course of illness in schizophrenic women. In addition, low levels of estrogen might have a negative effect on bone mineral density and on the cardiovascular system. METHODS To test the "hypoestrogenism hypothesis", hormone levels in 75 women with schizophrenia diagnosed according to DSM-IV and ICD-10 were determined in the follicular, periovulatory, and luteal phases of the menstrual cycle. Levels of estradiol, prolactin, luteinizing hormone (LH), follicle-stimulating hormone (FSH), progesterone, and testosterone were assessed. RESULTS The serum levels of estradiol were generally reduced throughout the entire menstrual cycle compared with normal reference values. Given the low levels of LH over the entire cycle and of progesterone in the luteal phase, anovulatory cycles were assumed. Hypoestrogenism was found in about 60% of the patients according to a strict definition (estradiol serum level below 30 pg/ml in the follicular phase and below 100 pg/ml in the periovulatory phase). To rule out the possibility that hyperprolactinemia from treatment with conventional ("typical") antipsychotics suppresses the gonadal axis and thereby lowers estradiol levels, serum estradiol levels of patients treated with certain atypical antipsychotics known to induce only a mild increase in prolactin, or no increase at all, were compared with those of patients treated with conventional antipsychotics. The data clearly indicate high prolactin levels in the latter group but low levels in the group treated with atypical antipsychotics. In both groups, however, estradiol levels were low compared with normal reference values. CONCLUSIONS The present findings provide evidence that hypoestrogenism in schizophrenia occurs in women with and without antipsychotic-induced hyperprolactinemia. Further research should clarify the cause of hypoestrogenism in schizophrenic women and focus on possible clinical implications.
Efficient Algorithms for Mining Outliers from Large Data Sets
In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions for finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets and then prunes entire partitions as soon as it is determined that they cannot contain outliers, resulting in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database reveal several expected and unexpected aspects of the data. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.
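For reference, the kth-nearest-neighbor ranking itself is easy to state in code. The sketch below is the naive index-based variant built on scikit-learn's NearestNeighbors, not the partition-based pruning algorithm that gives the paper its efficiency gains; the function name and parameters are ours.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def top_n_outliers(X, k=5, n=10):
    # Rank points by the distance to their k-th nearest neighbor and
    # return the indices of the n points with the largest such distance.
    # k + 1 neighbors are queried because each point is its own nearest
    # neighbor at distance zero.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)          # per-row distances, ascending
    dk = dists[:, k]                     # distance to the k-th neighbor
    return np.argsort(-dk)[:n], dk       # top-n outlier indices, all scores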
Efficient Tree-Based Topic Modeling
Topic modeling with a tree-based prior has been used for a variety of applications because it can encode correlations between words that traditional topic modeling cannot. However, its expressive power comes at the cost of more complicated inference. We extend the SPARSELDA (Yao et al., 2009) inference scheme for latent Dirichlet allocation (LDA) to tree-based topic models. This sampling scheme computes the exact conditional distribution for Gibbs sampling much more quickly than enumerating all possible latent variable assignments. We further improve performance by iteratively refining the sampling distribution only when needed. Experiments show that the proposed techniques dramatically improve the computation time.
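For context, the flat-LDA SPARSELDA scheme that this work extends rests on splitting the Gibbs conditional into three additive buckets, two of which are sparse (standard LDA notation assumed; the tree-based generalization in the paper is more involved):

\[
p(z = k \mid \cdot) \propto \frac{(\alpha_k + n_{k|d})(\beta + n_{w|k})}{\beta V + n_{\cdot|k}}
= \underbrace{\frac{\alpha_k \beta}{\beta V + n_{\cdot|k}}}_{s_k}
+ \underbrace{\frac{n_{k|d}\,\beta}{\beta V + n_{\cdot|k}}}_{r_k}
+ \underbrace{\frac{(\alpha_k + n_{k|d})\,n_{w|k}}{\beta V + n_{\cdot|k}}}_{q_k}
\]

Here \(n_{k|d}\) and \(n_{w|k}\) are the document-topic and word-topic counts, so \(r_k\) is nonzero only for topics present in the document and \(q_k\) only for topics in which the word occurs; most of the sampling mass can therefore be handled without enumerating all K topics.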
Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions.
Intrinsic and extrinsic types of motivation have been widely studied, and the distinction between them has shed important light on both developmental and educational practices. In this review we revisit the classic definitions of intrinsic and extrinsic motivation in light of contemporary research and theory. Intrinsic motivation remains an important construct, reflecting the natural human propensity to learn and assimilate. However, extrinsic motivation is argued to vary considerably in its relative autonomy and thus can either reflect external control or true self-regulation. The relations of both classes of motives to basic human needs for autonomy, competence and relatedness are discussed. Copyright 2000 Academic Press.
Generic and unified model of Switched Capacitor Converters
A generic modeling methodology that analyzes the losses in Switched Capacitor Converters (SCC) was developed and verified by simulation and experiments. The proposed analytical approach is unified, covering both hard- and soft-switched SCC topologies. The major advantage of the proposed model is that it expresses the losses as a function of the currents passing through each flying capacitor. Since these currents are linearly proportional to the output current, the model is also applicable to SCC with multiple capacitors. The proposed model provides insight into the expected losses in SCC and the effects of their operational conditions such as duty cycle. As such, the model can help in the optimization of SCC systems and their control to achieve the desired regulation.
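As a concrete instance of expressing losses through flying-capacitor currents, the well-known slow-switching-limit (SSL) equivalent resistance can be written as below; this is a standard formulation consistent with, though not necessarily identical to, the paper's unified model:

\[
P_{\mathrm{loss}} \approx R_{\mathrm{eq}} I_{\mathrm{out}}^2, \qquad
R_{\mathrm{SSL}} = \sum_i \frac{a_{c,i}^2}{C_i f_{\mathrm{sw}}},
\]

where \(a_{c,i} = q_i / q_{\mathrm{out}}\) is the charge multiplier of flying capacitor \(i\), \(C_i\) its capacitance, and \(f_{\mathrm{sw}}\) the switching frequency.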
Homomorphic Lower Digits Removal and Improved FHE Bootstrapping
Bootstrapping is a crucial operation in Gentry's breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There have been a couple of implementations of bootstrapping, among which HElib arguably marks the state of the art in terms of throughput, ciphertext/message size ratio, and support for large plaintext moduli. In this work, we applied a family of "lowest digit removal" polynomials to design an improved homomorphic digit extraction algorithm, which is a crucial part of bootstrapping for both the FV and BGV schemes. When the secret key has 1-norm h = ||s||_1 and the plaintext modulus is t = p^r, we achieved bootstrapping depth log h + log(log_p(ht)) in the FV scheme. In the case of the BGV scheme, we brought the depth down from log h + 2 log t to log h + log t. We implemented bootstrapping for FV in the SEAL library. We also introduced a "slim mode", which restricts the plaintexts to batched vectors in Z_{p^r}. The slim mode has similar throughput to the full mode, while each individual run is much faster and uses much less memory. For example, bootstrapping takes 6.75 seconds for vectors over GF(127) with 64 slots and 1,381 seconds for vectors over GF(257) with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.
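A quick worked comparison makes the BGV depth saving concrete (base-2 logarithms; the parameter values are illustrative, not taken from the paper):

\[
h = \lVert s \rVert_1 = 64,\ t = 2^8:\qquad
\log h + 2\log t = 6 + 16 = 22
\quad\longrightarrow\quad
\log h + \log t = 6 + 8 = 14,
\]

i.e. eight fewer multiplicative levels consumed by digit extraction under these assumptions.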
Dagestan today: archaization, or the protracted crisis of traditional society
In this article, processes taking place in modern Dagestan are analyzed in the context of global trends in social development and of internal dynamics in Russian society. The key problems of modern Dagestan society are examined from the standpoint of the crisis of traditional society. The author emphasizes changes in the social spheres of Dagestan society, which have significantly altered the professional, gender, demographic, ethnic, and territorial structure of the society. In the author's opinion, the crisis condition of Dagestan society is mainly connected with the fact that these changes have not been reflected in the political system of the society.
Multimodal Word Distributions
Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective. We show that the resulting approach captures uniquely expressive semantic information, and outperforms alternatives, such as word2vec skip-grams, and Gaussian embeddings, on benchmark datasets such as word similarity and entailment.
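As a minimal numpy sketch, the expected-likelihood energy between two spherical Gaussian mixtures, the kind of similarity a max-margin objective of this type would push apart for negative pairs, can be computed in closed form; the function names and the restriction to spherical covariances are our simplifications, not the paper's code.

import numpy as np
from scipy.special import logsumexp

def log_gauss_overlap(mu1, var1, mu2, var2):
    # Log of the expected likelihood kernel between two spherical Gaussians:
    # log ∫ N(x; mu1, var1*I) N(x; mu2, var2*I) dx = log N(mu1; mu2, (var1+var2)*I)
    d = mu1.shape[0]
    v = var1 + var2
    return -0.5 * (d * np.log(2.0 * np.pi * v) + np.sum((mu1 - mu2) ** 2) / v)

def mixture_energy(w_a, mu_a, var_a, w_b, mu_b, var_b):
    # Log expected-likelihood kernel between two Gaussian mixtures,
    # log sum_ij w_a[i] w_b[j] <N_i, N_j>, computed stably via logsumexp.
    terms = [np.log(pa) + np.log(pb) + log_gauss_overlap(ma, va, mb, vb)
             for pa, ma, va in zip(w_a, mu_a, var_a)
             for pb, mb, vb in zip(w_b, mu_b, var_b)]
    return logsumexp(terms)

# A max-margin objective would then favor
#   mixture_energy(w, c_pos) >= mixture_energy(w, c_neg) + margin
# for observed co-occurrence pairs against sampled negatives.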
Culture fusion and English language teaching
This paper first exemplifies the problems caused by cultural difference. It then surveys Chinese-Western cultural differences and their main aspects, followed by those differences as they appear at the level of language. In the second part, a discussion of the relationship between language and culture reveals the importance of cultural education in language teaching. The author holds that one cannot truly master a target language without adequate knowledge of the culture related to that language. The paper concludes with several suggestions about useful ways to achieve culture fusion in ELT and points out that human beings do not live in the objective world alone, nor alone in the world of social activity as ordinarily understood, but are very much at the mercy of the particular language that has become the medium of expression for their society. It is quite an illusion that language is merely an incidental means of solving specific problems of communication or reflection.