title | abstract
---|---|
Approximate querying of RDF graphs via path alignment | A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits excellent behavior compared with other approaches in terms of both efficiency and effectiveness. |
Restoration and Enhancement of Underwater Image based on Wavelength Compensation and Image Dehazing Technique | Obtaining clear images in underwater environments is an important issue in ocean engineering, and the quality of underwater images plays an important role in scientific work. Capturing images underwater is difficult, mainly because of deflection and reflection by water particles and the color change caused by light travelling through water at different wavelengths. Light dispersion and color shift result in contrast loss and color deviation in images acquired underwater, and restoring and enhancing an underwater object from an image distorted by moving water waves is a very challenging task. This paper proposes a wavelength compensation and image dehazing technique to correct the color change and light scattering, respectively. It also removes artificial light using a depth map technique: water depth is estimated from the background color, and color change compensation is performed with a residual energy ratio method. The wavelength compensation and dehazing technique is applied together with artificial light removal to analyze the raw image sequences and recover the true object. We test our approach on both simulated and real-world data. Such a technique has wide-ranging applications. |
DTN routing in body sensor networks with dynamic postural partitioning | This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBAN) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra short range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols is evaluated experimentally and via simulation, and compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with a delay lower bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature. |
Middle ear application of a sodium hyaluronate gel loaded with neomycin in a Guinea pig model. | OBJECTIVE
Establishing methods for topical administration of drugs to the inner ear has great clinical relevance and potential, even in a relatively short perspective. The aim was to evaluate the efficacy of sodium hyaluronate (HYA) as a vehicle for drugs that could be used for treatment of inner ear disorders.
METHODS
The cochlear hair cell loss and round window membrane (RWM) morphology were investigated after topical application of neomycin and HYA into the middle ear. Sixty-five albino guinea pigs were used and divided into groups depending on the type of treatment. Neomycin was chosen as a tracer for drug release and pharmacodynamic effect. HYA loaded with 3 different concentrations of neomycin was injected into the middle ear cavity of guinea pigs. Phalloidin-stained surface preparations of the organ of Corti were used to estimate hair cell loss induced by neomycin. The thickness of the midportion of the RWM was measured and compared with that of controls using light and electron microscopy. All animal procedures were performed in accordance with the ethical standards of Karolinska Institutet.
RESULTS
Neomycin induced a considerable hair cell loss in guinea pigs receiving a middle ear injection of HYA loaded with the drug, demonstrating that neomycin was released from the gel and delivered to the inner ear. The resulting hair cell loss showed a clear dose-dependence. Only small differences in hair cell loss were noted between animals receiving neomycin solution and animals exposed to neomycin in HYA suggesting that the vehicle neither facilitated nor hindered drug transport between the middle ear cavity and the inner ear. One week after topical application, the thickness of the RWM had increased and was dependent upon the concentration of neomycin administered to the middle ear. At 4 weeks the thickness of the RWM had returned to normal.
CONCLUSION
HYA is a safe vehicle for drugs aimed to pass into the inner ear through the RWM. Neomycin was released from HYA and transported into the inner ear as evidenced by hair cell loss. |
Automated long-term recording and analysis of neural activity in behaving animals | Addressing how neural circuits underlie behavior is routinely done by measuring electrical activity from single neurons in experimental sessions. While such recordings yield snapshots of neural dynamics during specified tasks, they are ill-suited for tracking single-unit activity over longer timescales relevant for most developmental and learning processes, or for capturing neural dynamics across different behavioral states. Here we describe an automated platform for continuous long-term recordings of neural activity and behavior in freely moving rodents. An unsupervised algorithm identifies and tracks the activity of single units over weeks of recording, dramatically simplifying the analysis of large datasets. Months-long recordings from motor cortex and striatum made and analyzed with our system revealed remarkable stability in basic neuronal properties, such as firing rates and inter-spike interval distributions. Interneuronal correlations and the representation of different movements and behaviors were similarly stable. This establishes the feasibility of high-throughput long-term extracellular recordings in behaving animals. |
Facial Signs of Emotional Experience | Spontaneous facial expressions were found to provide accurate information about more specific aspects of emotional experience than just the pleasant versus unpleasant distinction. Videotape records were gathered while subjects viewed motion picture films and then reported on their subjective experience. A new technique for measuring facial movement isolated a particular type of smile that was related to differences in reported happiness between those who showed this action and those who did not, to the intensity of happiness, and to which of two happy experiences was reported as happiest. Those who showed a set of facial actions hypothesized to be signs of various negative affects reported experiencing more negative emotion than those who did not show these actions. How much these facial actions were shown was related to the reported intensity of negative affect. Specific facial actions associated with the experience of disgust were identified. |
A game theoretic approach to provide incentive and service differentiation in P2P networks | Traditional peer-to-peer (P2P) networks do not provide service differentiation and incentive for users. Consequently, users can obtain services without themselves contributing any information or service to a P2P community. This leads to the "free-riding" and "tragedy of the commons" problems, in which the majority of information requests are directed towards a small number of P2P nodes willing to share their resources. The objective of this work is to enable service differentiation in a P2P network based on the amount of services each node has provided to its community, thereby encouraging all network nodes to share resources. We first introduce a resource distribution mechanism between all information sharing nodes. The mechanism is driven by a distributed algorithm which has linear time complexity and guarantees Pareto-optimal resource allocation. Besides giving incentive, the mechanism distributes resources in a way that increases the aggregate utility of the whole network. Second, we model the whole resource request and distribution process as a competition game between the competing nodes. We show that this game has a Nash equilibrium and is collusion-proof. To realize the game, we propose a protocol in which all competing nodes interact with the information providing node to reach Nash equilibrium in a dynamic and efficient manner. Experimental results are reported to illustrate that the protocol achieves its service differentiation objective and can induce productive information sharing by rational network nodes. Finally, we show that our protocol can properly adapt to different node arrival and departure events, and to different forms of network congestion. |
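The abstract does not spell out the allocation rule itself, so the following is only an illustrative sketch of a contribution-proportional scheme consistent with its description: a providing node divides its upload capacity among competing requesters in proportion to each requester's past contribution, which is what produces service differentiation and starves free-riders. The node names and numbers are hypothetical.

```python
def allocate_capacity(capacity, contributions):
    """Split a provider's upload capacity among competing requesters in
    proportion to each requester's past contribution to the community.
    Nodes that have contributed nothing receive nothing, which is the
    service-differentiation / anti-free-riding effect."""
    total = sum(contributions.values())
    if total == 0:
        return {node: 0.0 for node in contributions}
    return {node: capacity * c / total for node, c in contributions.items()}

# Hypothetical example: 100 units of capacity, three competing nodes.
print(allocate_capacity(100.0, {"A": 50.0, "B": 30.0, "C": 0.0}))
# {'A': 62.5, 'B': 37.5, 'C': 0.0}
```

In this sketch the whole capacity is always handed out, so the allocation is Pareto optimal in the simple sense that no requester can be given more without another receiving less.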
Impact of Internet Use on Loneliness and Contact with Others Among Older Adults: Cross-Sectional Analysis | BACKGROUND
Older adults are at increased risk of experiencing loneliness and depression, particularly as they move into different types of care communities. Information and communication technology (ICT) usage may help older adults to maintain contact with social ties. However, prior research is not consistent about whether ICT use increases or decreases isolation and loneliness among older adults.
OBJECTIVE
The purpose of this study was to examine how Internet use affects perceived social isolation and loneliness of older adults in assisted and independent living communities. We also examined the perceptions of how Internet use affects communication and social interaction.
METHODS
One wave of data from an ongoing study of ICT usage among older adults in assisted and independent living communities in Alabama was used. Regression analysis was used to determine the relationship between frequency of going online and isolation and loneliness (n=205) and perceptions of the effects of Internet use on communication and social interaction (n=60).
RESULTS
After controlling for the number of friends and family, physical/emotional social limitations, age, and study arm, a 1-point increase in the frequency of going online was associated with a 0.147-point decrease in loneliness scores (P=.005). Going online was not associated with perceived social isolation (P=.14). Among the measures of perception of the social effects of the Internet, each 1-point increase in the frequency of going online was associated with an increase in agreement that using the Internet had: (1) made it easier to reach people (b=0.508, P<.001), (2) contributed to the ability to stay in touch (b=0.516, P<.001), (3) made it easier to meet new people (b=0.297, P=.01), (4) increased the quantity of communication with others (b=0.306, P=.01), (5) made the respondent feel less isolated (b=0.491, P<.001), (6) helped the respondent feel more connected to friends and family (b=0.392, P=.001), and (7) increased the quality of communication with others (b=0.289, P=.01).
CONCLUSIONS
Using the Internet may be beneficial for decreasing loneliness and increasing social contact among older adults in assisted and independent living communities. |
Sampling for qualitative research. | The probability sampling techniques used for quantitative studies are rarely appropriate when conducting qualitative research. This article considers and explains the differences between the two approaches and describes three broad categories of naturalistic sampling: convenience, judgement and theoretical models. The principles are illustrated with practical examples from the author's own research. |
Sportsman hernia: what can we do? | Sportsman (sports) hernia is a medially located bulge in the posterior wall of the inguinal canal that is common in football players. About 90% of cases occur in males. The injury is also found in the general population. The presenting symptom is chronic groin pain which develops during exercise, aggravated by sudden movements, accompanied by subtle physical examination findings and a medial inguinal bulge on ultrasound. Pain persists after a game, abates during a period of lay-off, but returns on the resumption of sport. Frequently, sports hernia is one component of a more extensive pattern of injury known as ‘groin disruption injury’ consisting of osteitis pubis, conjoint tendinopathy, adductor tendinopathy and obturator nerve entrapment. Certain risk factors have been identified, including reduced hip range of motion and poor muscle balance around the pelvis, limb length discrepancy and pelvic instability. The suggested aetiology of the injury is repetitive athletic loading of the symphysis pubis disc, leading to accelerated disc degeneration with consequent pelvic instability and vulnerability to micro-fracturing along the pubic osteochondral junction, periosteal stripping of the pubic ligaments and para-symphyseal tendon tears, causing tendon dysfunction. Diagnostic imaging includes an erect pelvic radiograph (X-ray) with flamingo stress views of the symphysis pubis, real-time ultrasound and, occasionally, computed tomography (CT) scanning and magnetic resonance imaging (MRI), but seldom contrast herniography. Other imaging tests occasionally performed can include nuclear bone scan, limb length measurement and test injections of local anaesthetic/corticosteroid. The injury may be prevented by the detection and monitoring of players at risk and by correcting significant limb length inequality. Groin reconstruction operation consists of a Maloney darn hernia repair technique, repair of the conjoint tendon, transverse adductor tenotomy and obturator nerve release. Rehabilitation involves core stabilisation exercises and the maintenance of muscle control and strength around the pelvis. Using this regimen of groin reconstruction and post-operative rehabilitation, a player would be anticipated to return to their pre-injury level of activity approximately 3 months after surgery. |
Correlation and Simple Linear Regression | From the Department of Radiology, Brigham and Women’s Hospital (K.H.Z., K.T., S.G.S.) and Department of Health Care Policy (K.H.Z.), Harvard Medical School, 180 Longwood Ave, Boston, MA 02115. Received September 10, 2001; revision requested October 31; revision received December 26; accepted January 21, 2002. Address correspondence to K.H.Z. (e-mail: [email protected]). © RSNA, 2003 |
Effect of a Proton Pump Inhibitor or an H2-Receptor Antagonist on Prevention of Bleeding From Ulcer After Endoscopic Submucosal Dissection of Early Gastric Cancer: A Prospective Randomized Controlled Trial | OBJECTIVES: With conventional methods of endoscopic mucosal resection for early gastric cancer (EGC), proton pump inhibitors (PPIs) and H2-receptor antagonists (H2RAs) have a similar effect on preventing bleeding from artificial ulcers. An objective of this study is to investigate whether a stronger acid suppressant (i.e., PPI) more effectively prevents bleeding after the recent advanced technique of endoscopic submucosal dissection (ESD) for EGC. METHODS: This was a prospective randomized controlled trial performed in a referral cancer center. A total of 143 patients with EGC who underwent ESD were randomly assigned to the treatment groups. They received either rabeprazole 20 mg (PPI group) or cimetidine 800 mg (H2RA group) on the day before ESD and continued for 8 wk. The primary end point was the incidence of bleeding that was defined as hematemesis or melena that required endoscopic hemostasis and decreased the hemoglobin count by more than 2 g/dL. RESULTS: In baseline data, the endoscopists who performed the ESD were significantly different between the groups. Finally, 66 of 73 patients in the PPI group and 64 of 70 in the H2RA group were analyzed. Bleeding occurred in four patients in the PPI group and 11 in the H2RA group (P = 0.057). Multivariate analysis revealed that treatment with the PPI significantly reduced the risk of bleeding: adjusted hazard ratio 0.47, 95% confidence interval 0.22–0.92, P = 0.028. One delayed perforation was experienced in the H2RA group. CONCLUSIONS: PPI therapy more effectively prevented delayed bleeding from the ulcer created after ESD than did H2RA treatment. |
Central obesity and high blood pressure in pediatric patients with atopic dermatitis. | IMPORTANCE
Atopic dermatitis (AD) is associated with multiple potential risk factors for obesity and high blood pressure (BP), including chronic inflammation, sleep disturbance, and mental health comorbidity. Previous studies found associations between general obesity and AD. However, it is unknown whether AD is associated with central obesity and/or high BP.
OBJECTIVES
To determine whether central obesity and high BP are increased in pediatric AD.
DESIGN, SETTING, AND PARTICIPANTS
This case-control study performed in multicenter pediatric dermatology practices in the United States recruited 132 children (age range, 4-17 years) with active moderate to severe AD and 143 healthy controls from April 1, 2009, through December 31, 2012.
EXPOSURES
Diagnosis and severity of AD assessed by a pediatric dermatologist.
MAIN OUTCOMES AND MEASURES
Body mass index, waist circumference, waist to height ratio, systolic BP, and diastolic BP.
RESULTS
Moderate to severe AD was associated with body mass index for age and sex of 97th percentile or greater (logistic regression; odds ratio [OR], 2.64; 95% CI, 1.15-6.06), International Obesity Task Force obesity cutoffs (OR, 2.38; 95% CI, 1.06-5.34), waist circumference in the 85th percentile or greater (OR, 3.92; 95% CI, 1.50-10.26), and waist to height ratio of 0.5 or greater (OR, 2.22; 95% CI, 1.10-4.50). Atopic dermatitis was associated with higher BP for age, sex, and height percentiles (systolic BP: OR, 2.94; 95% CI, 1.04-8.36; diastolic BP: OR, 3.68; 95% CI, 1.19-11.37), particularly a systolic BP in the 90th percentile or higher (OR, 2.06; 95% CI, 1.09-3.90), in multivariate models that controlled for demographics, body mass index and waist circumference percentiles, and history of using prednisone or cyclosporine. Atopic dermatitis was associated with higher systolic BP in Hispanics/Latinos (general linear model; β, .23; 95% CI, .04-.43) and Asians (β, .16; 95% CI, .03-.30). Severe to very severe AD was associated with systolic BP in the 90th percentile or higher (adjusted OR, 3.14; 95% CI, 1.13-8.70). Atopic dermatitis was associated with a family history of hypertension (adjusted OR, 1.88; 95% CI, 1.14-3.10) and type 2 diabetes mellitus (adjusted OR, 1.64; 95% CI, 1.02-2.68) but not obesity or hyperlipidemia.
CONCLUSIONS AND RELEVANCE
Moderate to severe pediatric AD may be associated with central obesity and increased systolic BP. |
Effect of application of benzyl benzoate on house dust mite allergen levels. | BACKGROUND
Several acaricides have become available for reducing house dust mite allergen levels.
OBJECTIVE
The purpose of this study was to assess whether the use of benzyl benzoate (Acarosan) provides additional benefit to the usual mite control measures including encasement of mattress and pillows with vinyl covers.
METHODS
A randomized controlled trial was carried out in 26 homes (14 control versus 12 treatment) of asthmatic patients in two cities (Vancouver and Winnipeg). The control group had the usual house dust mite control measures including the use of vinyl covers for mattresses and pillows while the treatment group had application of benzyl benzoate to mattresses and carpets in the bedroom and the most commonly used room, in addition to the above control measures. Mite allergen levels were measured 3 months and immediately before, 1 week, and 1 and 3 months after the application of house dust mite control measures. Patients kept diary cards on asthma symptoms and peak expiratory flow rates morning and evening one month before and three months after the onset of mite allergen control measures.
RESULTS
A reduction of mite allergen level was found in mattress samples in both groups, statistically significant at all times in the treatment group and at one and three months in the control group. Mite allergen levels on floor carpets also showed progressive reduction in both groups, but were significantly different in the treatment group (compared with controls) at 1 week, and were lower compared with baseline in the treatment group up to 3 months. No significant changes in asthma symptoms, peak expiratory flow rates, spirometric measurements, or bronchial hyperresponsiveness were observed among treatment or control group subjects.
CONCLUSION
The addition of benzyl benzoate to conventional house dust mite control measures resulted in a significant reduction in floor carpet dust mite levels that persisted for 3 months. The results of this study should be confirmed in a larger and longer study. |
Ubiquitin-Dependent Sorting into the Multivesicular Body Pathway Requires the Function of a Conserved Endosomal Protein Sorting Complex, ESCRT-I | The multivesicular body (MVB) pathway is responsible for both the biosynthetic delivery of lysosomal hydrolases and the downregulation of numerous activated cell surface receptors which are degraded in the lysosome. We demonstrate that ubiquitination serves as a signal for sorting into the MVB pathway. In addition, we characterize a 350 kDa complex, ESCRT-I (composed of Vps23, Vps28, and Vps37), that recognizes ubiquitinated MVB cargo and whose function is required for sorting into MVB vesicles. This recognition event depends on a conserved UBC-like domain in Vps23. We propose that ESCRT-I represents a conserved component of the endosomal sorting machinery that functions in both yeast and mammalian cells to couple ubiquitin modification to protein sorting and receptor downregulation in the MVB pathway. |
A hierarchical type-2 fuzzy logic control architecture for autonomous mobile robots | Autonomous mobile robots navigating in changing and dynamic unstructured environments like the outdoor environments need to cope with large amounts of uncertainties that are inherent of natural environments. The traditional type-1 fuzzy logic controller (FLC) using precise type-1 fuzzy sets cannot fully handle such uncertainties. A type-2 FLC using type-2 fuzzy sets can handle such uncertainties to produce a better performance. In this paper, we present a novel reactive control architecture for autonomous mobile robots that is based on type-2 FLC to implement the basic navigation behaviors and the coordination between these behaviors to produce a type-2 hierarchical FLC. In our experiments, we implemented this type-2 architecture in different types of mobile robots navigating in indoor and outdoor unstructured and challenging environments. The type-2-based control system dealt with the uncertainties facing mobile robots in unstructured environments and resulted in a very good performance that outperformed the type-1-based control system while achieving a significant rule reduction compared to the type-1 system. |
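As a rough illustration of the interval type-2 machinery referred to above (not the paper's actual controller or rule base), the sketch below evaluates an input against interval type-2 antecedents whose membership grade is bounded by a lower and an upper function, then applies a simple Nie-Tan style type reduction (averaging the two bounds) before a weighted-average defuzzification. The membership parameters, the obstacle-distance input, and the crisp rule consequents are invented for the example.

```python
def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

class IT2Set:
    """Interval type-2 fuzzy set: lower/upper membership functions bound
    the footprint of uncertainty around a shared triangular shape."""
    def __init__(self, a, b, c, spread):
        self.lower = (a + spread, b, c - spread)
        self.upper = (a - spread, b, c + spread)

    def firing_interval(self, x):
        return tri(x, *self.lower), tri(x, *self.upper)

# Hypothetical antecedents on a normalised obstacle-distance input in [0, 1].
near = IT2Set(-0.5, 0.0, 0.5, 0.05)
far = IT2Set(0.5, 1.0, 1.5, 0.05)
# Hypothetical crisp consequents (e.g. a normalised turning command).
rules = [(near, 0.8), (far, 0.1)]

def control_output(distance):
    # Nie-Tan style type reduction: average lower/upper firing strengths,
    # then take the weighted average of the rule consequents.
    num = den = 0.0
    for antecedent, consequent in rules:
        lo, up = antecedent.firing_interval(distance)
        strength = 0.5 * (lo + up)
        num += strength * consequent
        den += strength
    return num / den if den else 0.0

print(control_output(0.2), control_output(0.9))
```

The width of the footprint of uncertainty (the `spread` parameter here) is what lets a type-2 controller absorb sensor noise and environment variation that a type-1 design would have to capture with many more rules.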
Automated refactoring of objects for application partitioning | Distributed infrastructures are becoming more and more diverse in nature. An application may often need to be redeployed in various scenarios. Ideally, given an application designed for one deployment scenario, one should be able to generate an application version for a new scenario through an automated refactoring process. For this to happen, one of the principal requirements is that application components should be amenable to partitioning. To achieve this: (i) We use a structurally simple and slightly modified object model, called the breakable object (BoB), for structuring such applications. A BoB can be treated as an object that is designed to be well disposed towards automated refactoring. We also devise a programming model for BoBs in Java called Java_BoB. (ii) We provide algorithms for automated refactoring of a Java_BoB based program. |
Style-based reuse for software architectures | Although numerous mechanisms for promoting software reuse have been proposed and implemented over the years, most have focused on the reuse of implementation code. There is much conjecture and some empirical evidence, however, that the most effective forms of reuse are generally found at more abstract levels of software design. In this paper we discuss software reuse at the architectural level of design. Specifically, we argue that the concept of "architectural style" is useful for supporting the classification, storage, and retrieval of reusable architectural design elements. We briefly describe the Aesop system's Software Shelf, a tool that assists designers in selecting appropriate design elements and patterns based on stylistic information and design constraints. |
An enhanced test case generation technique based on activity diagrams | Test case generation is a core phase in any testing process, so automating it plays a tremendous role in reducing the time and effort spent during testing. This paper proposes an enhanced XML-based automated approach for generating test cases from activity diagrams. The proposed architecture creates a special table called the Activity Dependency Table (ADT) for each XML file. The ADT covers all the functionalities in the activity diagram and handles decisions, loops, fork, join, merge, object and conditional threads. It then automatically generates a directed graph called the Activity Dependency Graph (ADG) that is used in conjunction with the ADT to extract all the possible final test cases. The proposed model validates the generated test paths during the generation process to ensure they meet a hybrid coverage criterion. The generated test cases can be sent to any requirements management tool to be traced against the requirements. The proposed model is prototyped on 30 differently sized activity diagrams in different domains, and an experimental evaluation of the proposed model is presented as well. It saves time and effort and increases the quality of the generated test cases, thereby optimizing the overall performance of the testing process. Moreover, the generated test cases can be executed on the system under test using any automatic test execution tool. |
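The ADT construction and the hybrid coverage criterion are specific to the paper, so the following is only a minimal sketch of the underlying idea: model the activity diagram as a directed graph and enumerate start-to-end paths, bounding loop repetitions so the enumeration stays finite. The node names and the revisit bound are assumptions.

```python
from collections import defaultdict

def test_paths(edges, start, end, max_visits=2):
    """Enumerate start-to-end paths of a directed activity graph as
    candidate test paths, bounding how often any node may be revisited
    so that loops yield finitely many paths."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph[node]:
            if path.count(nxt) < max_visits:  # bounded loop coverage
                stack.append((nxt, path + [nxt]))
    return paths

# Hypothetical diagram: a decision node with two branches and a loop back.
edges = [("start", "login"), ("login", "valid?"),
         ("valid?", "show_dashboard"), ("valid?", "login"),
         ("show_dashboard", "end")]
for p in test_paths(edges, "start", "end"):
    print(" -> ".join(p))
```

Each printed path corresponds to one candidate test case; a coverage criterion then decides which of these paths must be kept.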
Disability Insurance and the Dynamics of the Incentive Insurance Trade-Off. | We provide a life-cycle framework for comparing insurance and disincentive effects of disability benefits. The risks that individuals face and the parameters of the Disability Insurance (DI) program are estimated from consumption, health, disability insurance, and wage data. We characterize the effects of disability insurance and study how policy reforms impact behavior and welfare. DI features high rejection rates of disabled applicants and some acceptance of healthy applicants. Despite worse incentives, welfare increases as programs become less strict or generosity increases. Disability insurance interacts with welfare programs: making unconditional means-tested programs more generous improves disability insurance targeting and increases welfare. |
Smart Cars on Smart Roads: An IEEE Intelligent Transportation Systems Society Update | To promote tighter collaboration between the IEEE Intelligent Transportation Systems Society and the pervasive computing research community, the authors introduce the ITS Society and present several pervasive computing-related research topics that ITS Society researchers are working on. This department is part of a special issue on Intelligent Transportation. |
Screening for pre-eclampsia by using maternal serum inhibin A, activin A, human chorionic gonadotropin, unconjugated estriol, and alpha-fetoprotein levels and uterine artery Doppler in the second trimester of pregnancy. | AIMS
To analyse the predictive power of maternal serum inhibin A, activin A, human chorionic gonadotropin (hCG), unconjugated estriol (uE(3)), alpha-fetoprotein (AFP) levels and uterine artery Doppler in the second trimester of pregnancy in screening for pre-eclampsia.
METHODS
Maternal serum inhibin A, activin A, hCG, uE(3), and AFP levels and uterine artery Doppler were determined in 178 healthy, pregnant women in the second trimester of pregnancy. Serum samples were collected between the 16th and 18th weeks of gestation, and Doppler investigation was performed between the 24th and 26th weeks of gestation. Receiver operating characteristic curves were created to analyse the predictive powers of the above parameters in screening for pre-eclampsia. Different combinations also were analysed.
RESULTS
The rate of pre-eclampsia was 7.9% (14/178). Maternal serum inhibin A, activin A, hCG, AFP levels, the rate of presence of the prediastolic notch and uterine artery resistance index (RI) values in pre-eclamptic pregnancies were significantly higher than those in healthy pregnancies. Presence of the prediastolic notch, uterine artery RI, maternal serum activin A and inhibin A levels had high predictive efficacy, and each had a sensitivity between 70 and 93% and a specificity between 87% and 98%. The addition of inhibin A or activin A measurement to the Doppler velocimetry improved the specificity to 99-100%.
CONCLUSIONS
Maternal serum inhibin A and activin A levels and uterine artery Doppler appear to be useful screening tests during the second trimester for pre-eclampsia. However, addition of these hormonal markers to Doppler velocimetry only slightly improves the predictive efficacy, which appears clinically insignificant. |
Whose Space? Differences Among Users and Non-Users of Social Network Sites | Are there systematic differences between people who use social network sites and those who stay away, despite a familiarity with them? Based on data from a survey administered to a diverse group of young adults, this article looks at the predictors of SNS usage, with particular focus on Facebook, MySpace, Xanga, and Friendster. Findings suggest that use of such sites is not randomly distributed across a group of highly wired users. A person's gender, race and ethnicity, and parental educational background are all associated with use, but in most cases only when the aggregate concept of social network sites is disaggregated by service. Additionally, people with more experience and autonomy of use are more likely to be users of such sites. Unequal participation based on user background suggests that differential adoption of such services may be contributing to digital inequality. |
Throughput Maximization for UAV-Enabled Mobile Relaying Systems | In this paper, we consider a novel mobile relaying technique, where the relay nodes are mounted on unmanned aerial vehicles (UAVs) and hence are capable of moving at high speed. Compared with conventional static relaying, mobile relaying offers a new degree of freedom for performance enhancement via careful relay trajectory design. We study the throughput maximization problem in mobile relaying systems by optimizing the source/relay transmit power along with the relay trajectory, subject to practical mobility constraints (on the UAV's speed and initial/final relay locations), as well as the information-causality constraint at the relay. It is shown that for the fixed relay trajectory, the throughput-optimal source/relay power allocations over time follow a “staircase” water filling structure, with non-increasing and non-decreasing water levels at the source and relay, respectively. On the other hand, with given power allocations, the throughput can be further improved by optimizing the UAV's trajectory via successive convex optimization. An iterative algorithm is thus proposed to optimize the power allocations and relay trajectory alternately. Furthermore, for the special case with free initial and final relay locations, the jointly optimal power allocation and relay trajectory are derived. Numerical results show that by optimizing the trajectory of the relay and power allocations adaptive to its induced channel variation, mobile relaying is able to achieve significant throughput gains over the conventional static relaying. |
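The paper's exact formulation is not reproduced here, but the information-causality constraint it refers to can be sketched as follows for a decode-and-forward relay over N time slots, writing p_s[n], p_r[n] for the source and relay transmit powers and h_s[n], h_r[n] for the channel gains induced by the UAV trajectory (notation assumed for illustration):

```latex
% Sketch (assumed notation): throughput of a decode-and-forward mobile relay
% over N slots, where the relay can only forward data it has already received.
\max_{\{p_s[n]\},\{p_r[n]\}} \;\; \sum_{n=2}^{N} \log_2\bigl(1 + p_r[n]\,h_r[n]\bigr)
\quad \text{s.t.} \quad
\sum_{i=2}^{n}\log_2\bigl(1 + p_r[i]\,h_r[i]\bigr)
  \;\le\; \sum_{i=1}^{n-1}\log_2\bigl(1 + p_s[i]\,h_s[i]\bigr),
  \quad n = 2,\dots,N,
\qquad
\frac{1}{N}\sum_{n=1}^{N} p_s[n] \le \bar{P}_s, \qquad
\frac{1}{N}\sum_{n=1}^{N} p_r[n] \le \bar{P}_r .
```

With the trajectory, and hence h_s[n] and h_r[n], fixed, the problem is convex in the powers, which is what gives rise to the staircase water-filling structure mentioned above; the trajectory itself is then refined by successive convex optimization, alternating with the power allocation step.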
The use of Continuous Subcutaneous Insulin Infusion (CSII): parental and professional perceptions of self-care mastery and autonomy in children and adolescents. | OBJECTIVE
To describe parent-perceived mastery of Continuous Subcutaneous Insulin Infusion (CSII) specific skills and level of autonomy for these tasks among youth with type 1 diabetes.
METHODS
One hundred and sixty-three parents of youth using CSII and 142 diabetes clinicians participated. Parents reported their child's mastery and autonomy of CSII-specific skills. Clinicians indicated the age at which 50% of their patients mastered these skills.
RESULTS
Parents report CSII skill mastery between 10.9 and 12.8 years. Very few achieved skill mastery on all CSII-related tasks. Parent and clinician expectations for age of skill acquisition were consistent with one another. Parents shared CSII task responsibility with their children even after their children had attained skill mastery.
CONCLUSION
The recent emphasis on maintaining parental involvement in diabetes care seems to have been translated into clinical practice. Parents remain involved in their child's CSII care even after they believe their child has mastered these skills. |
A Machine Learning Approach for Cryptanalysis | The paper introduces a novel approach of using neural networks to perform cryptanalysis on a lightweight cipher like the Simon cipher. The neural network takes in plaintexts and their corresponding ciphertexts to predict the key that was used to encrypt the plaintext. The neural network is tested with various rounds of the cipher system to see how it fares, and is also tested with various configurations of the neural network to determine which configuration could give us the highest accuracy at determining the key correctly. |
Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer’s Disease Diagnosis | Accurate and early diagnosis of Alzheimer’s disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance images (MRI) and positron emission tomography (PET), provide powerful imaging modalities to help understand the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multi-modality data and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn the generic multi-level and multimodal features from multiple imaging modalities for classification, which are robust to scale and rotation variations to some extent. No image segmentation and rigid registration are required in pre-processing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects including 93 AD patients, 204 mild cognitive impairment (MCI, 76 pMCI + 128 sMCI) and 100 normal controls (NC) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Experimental results show that the proposed method achieves an accuracy of 93.26% for classification of AD vs. NC and 82.95% for classification of pMCI vs. NC, demonstrating promising classification performance. |
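The abstract describes patch-wise 3D CNNs whose outputs are ensembled by an upper network; the sketch below is only a schematic PyTorch rendering of that idea, not the authors' architecture. The patch size, feature widths, and the fully connected fusion stage (standing in for the paper's upper 2D-CNN and final softmax classifier) are assumptions.

```python
import torch
import torch.nn as nn

class Patch3DCNN(nn.Module):
    """Small 3D CNN applied to one local image patch of one modality."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1))
        self.fc = nn.Linear(16, out_dim)

    def forward(self, x):                       # x: (B, 1, D, H, W)
        return self.fc(self.features(x).flatten(1))   # (B, out_dim)

class CascadedNet(nn.Module):
    """Patch-level 3D CNNs for MRI and PET patches, fused by an upper
    classifier; the paper's upper 2D-CNN ensembling stage is simplified
    here to a fully connected fusion over concatenated patch features."""
    def __init__(self, n_patches=4, feat=64, n_classes=2):
        super().__init__()
        self.mri_cnns = nn.ModuleList([Patch3DCNN(feat) for _ in range(n_patches)])
        self.pet_cnns = nn.ModuleList([Patch3DCNN(feat) for _ in range(n_patches)])
        self.classifier = nn.Sequential(
            nn.Linear(2 * n_patches * feat, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, mri_patches, pet_patches):  # lists of (B,1,D,H,W) tensors
        feats = [net(p) for net, p in zip(self.mri_cnns, mri_patches)]
        feats += [net(p) for net, p in zip(self.pet_cnns, pet_patches)]
        return self.classifier(torch.cat(feats, dim=1))  # class logits

model = CascadedNet()
mri = [torch.randn(2, 1, 16, 16, 16) for _ in range(4)]
pet = [torch.randn(2, 1, 16, 16, 16) for _ in range(4)]
print(model(mri, pet).shape)  # torch.Size([2, 2])
```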
Optimization of process parameters in drilling of GFRP composite using Taguchi method | The objective of the present work is to optimize process parameters, namely cutting speed, feed, point angle and chisel edge width, in drilling of glass fiber reinforced polymer (GFRP) composites. In this work, experiments were carried out as per the Taguchi experimental design, and an L9 orthogonal array was used to study the influence of various combinations of process parameters on hole quality. An analysis of variance (ANOVA) test was conducted to determine the significance of each process parameter on drilling. The results indicate that feed rate is the most significant factor influencing the thrust force, followed by speed, chisel edge width and point angle; cutting speed is the most significant factor affecting the torque and the circularity of the hole, followed by feed, chisel edge width and point angle. This work is useful in selecting optimum values of the various process parameters that would not only minimize the thrust force and torque but also reduce delamination and improve the quality of the drilled hole. |
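To make the design concrete, here is a small sketch of a standard Taguchi L9(3^4) orthogonal array for four factors at three levels, together with the smaller-the-better signal-to-noise ratio usually used to rank factor levels for responses such as thrust force. Only the array itself is standard; the factor ordering and the measured responses below are invented for illustration.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels (1..3).
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])
factors = ["cutting speed", "feed", "point angle", "chisel edge width"]

# Hypothetical thrust-force measurements (N), one per experimental run.
thrust = np.array([62.0, 70.5, 81.2, 58.3, 66.1, 74.8, 55.9, 63.4, 72.0])

# Smaller-the-better S/N ratio per run: -10 * log10(mean(y^2)).
sn = -10.0 * np.log10(thrust ** 2)

# Mean S/N ratio per factor level; the level with the largest mean S/N is
# preferred, and the spread (delta) across levels hints at factor significance,
# which ANOVA then tests formally.
for j, name in enumerate(factors):
    means = [sn[L9[:, j] == level].mean() for level in (1, 2, 3)]
    print(f"{name:18s} level means: {np.round(means, 2)}  delta: {max(means) - min(means):.2f}")
```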
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition with a clinical sample. | Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. |
Comparing the Usability of Cryptographic APIs | Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples. |
How to write a patient case report. | PURPOSE
Guidelines for writing patient case reports, with a focus on medication-related reports, are provided.
SUMMARY
The format of a patient case report encompasses the following five sections: an abstract, an introduction and objective that contain a literature review, a description of the case report, a discussion that includes a detailed explanation of the literature review, a summary of the case, and a conclusion. The abstract of a patient case report should succinctly include the four sections of the main text of the report. The introduction section should provide the subject, purpose, and merit of the case report. It must explain why the case report is novel or merits review, and it should include a comprehensive literature review that corroborates the author's claims. The case presentation section should describe the case in chronological order and in enough detail for the reader to establish his or her own conclusions about the case's validity. The discussion section is the most important section of the case report. It ought to evaluate the patient case for accuracy, validity, and uniqueness; compare and contrast the case report with the published literature; derive new knowledge; summarize the essential features of the report; and draw recommendations. The conclusion section should be brief and provide a conclusion with evidence-based recommendations and applicability to practice.
CONCLUSION
Patient case reports are valuable resources of new and unusual information that may lead to vital research. |
A Sequential Importance Sampling Algorithm for Generating Random Graphs with Prescribed Degrees | Random graphs with a given degree sequence are a useful model capturing several features absent in the classical Erdős-Rényi model, such as dependent edges and non-binomial degrees. In this paper, we use a characterization due to Erdős and Gallai to develop a sequential algorithm for generating a random labeled graph with a given degree sequence. The algorithm is easy to implement and allows surprisingly efficient sequential importance sampling. Applications are given, including simulating a biological network and estimating the number of graphs with a given degree sequence. |
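The sketch below follows the spirit of such a sequential construction, with details like the exact candidate weighting simplified: the Erdős-Gallai condition tests graphicality, an edge is only added if the residual degree sequence stays graphical, and the log of the inverse sampling probability is accumulated so that an importance weight can be formed.

```python
import math
import random

def erdos_gallai(deg):
    """Return True if the non-negative integer sequence is graphical."""
    d = sorted(deg, reverse=True)
    if sum(d) % 2:
        return False
    for k in range(1, len(d) + 1):
        if sum(d[:k]) > k * (k - 1) + sum(min(x, k) for x in d[k:]):
            return False
    return True

def sample_graph(degrees, rng=None):
    """Sequentially build a simple labelled graph with the given degree
    sequence, keeping the residual sequence graphical at every step.
    Returns (edge set, log of the inverse sampling probability)."""
    rng = rng or random.Random(0)
    residual = list(degrees)
    edges, log_inv_q = set(), 0.0
    while any(residual):
        # Work on a vertex with the smallest positive residual degree.
        i = min((v for v in range(len(residual)) if residual[v] > 0),
                key=lambda v: residual[v])
        while residual[i] > 0:
            candidates = []
            for j in range(len(residual)):
                if j == i or residual[j] == 0 or (min(i, j), max(i, j)) in edges:
                    continue
                trial = list(residual)
                trial[i] -= 1
                trial[j] -= 1
                if erdos_gallai(trial):   # edge keeps the sequence graphical
                    candidates.append(j)
            weights = [residual[j] for j in candidates]
            j = rng.choices(candidates, weights=weights)[0]
            log_inv_q -= math.log(weights[candidates.index(j)] / sum(weights))
            edges.add((min(i, j), max(i, j)))
            residual[i] -= 1
            residual[j] -= 1
    return edges, log_inv_q

degrees = [3, 2, 2, 2, 1]
assert erdos_gallai(degrees)
edges, log_weight = sample_graph(degrees)
print(sorted(edges), round(log_weight, 3))
```

Averaging the (appropriately normalized) inverse probabilities over many samples is what gives the estimate of the number of graphs with the prescribed degree sequence.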
On collocations and topic models | We investigate the impact of preextracting and tokenizing bigram collocations on topic models. Using extensive experiments on four different corpora, we show that incorporating bigram collocations in the document representation creates more parsimonious models and improves topic coherence. We point out some problems in interpreting test likelihood and test perplexity to compare model fit, and suggest an alternate measure that penalizes model complexity. We show how the Akaike information criterion is a more appropriate measure, which suggests that using a modest number (up to 1000) of top-ranked bigrams is the optimal topic modelling configuration. Using these 1000 bigrams also results in improved topic quality over unigram tokenization. Further increases in topic quality can be achieved by using up to 10,000 bigrams, but this is at the cost of a more complex model. We also show that multiword (bigram and longer) named entities give consistent results, indicating that they should be represented as single tokens. This is the first work to explicitly study the effect of n-gram tokenization on LDA topic models, and the first work to make empirical recommendations to topic modelling practitioners, challenging the standard practice of unigram-based tokenization. |
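As a small illustration of the preprocessing step studied here (not the paper's exact pipeline), the sketch below ranks bigram collocations with NLTK and merges the top-ranked ones into single tokens before topic modelling; the toy corpus, the likelihood-ratio ranking, and the whitespace tokenizer are assumptions, while the cutoff of roughly 1000 bigrams echoes the recommendation above.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

def merge_collocations(docs, top_n=1000, min_freq=2):
    """Rank bigram collocations over the corpus and rewrite each document so
    that the top-ranked bigrams become single tokens (e.g. 'new_york'),
    which is then the vocabulary fed to the topic model."""
    # Naive whitespace tokenization keeps the sketch dependency-free.
    tokenized = [doc.lower().split() for doc in docs]
    finder = BigramCollocationFinder.from_words(
        [w for doc in tokenized for w in doc])
    finder.apply_freq_filter(min_freq)
    best = set(finder.nbest(BigramAssocMeasures().likelihood_ratio, top_n))

    merged = []
    for words in tokenized:
        out, i = [], 0
        while i < len(words):
            if i + 1 < len(words) and (words[i], words[i + 1]) in best:
                out.append(words[i] + "_" + words[i + 1])
                i += 2
            else:
                out.append(words[i])
                i += 1
        merged.append(out)
    return merged

# Hypothetical toy corpus.
docs = ["new york is large and new york never sleeps",
        "topic models such as latent dirichlet allocation need tokenized text",
        "we visited new york and ran topic models on the text"]
print(merge_collocations(docs, top_n=5))
```

The complexity penalty argued for above is the usual Akaike information criterion, AIC = 2k - 2 ln L, computed from the model's log-likelihood ln L and its number of free parameters k, rather than relying on test perplexity alone.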
Situated Anonymity: Impacts of Anonymity, Ephemerality, and Hyper-Locality on Social Media | Anonymity, ephemerality, and hyper-locality are an uncommon set of features in the design of online communities. However, these features were key to Yik Yak's initial success and popularity. In an interview-based study, we found that these three features deeply affected the identity of the community as a whole, the patterns of use, and the ways users committed to this community. We conducted interviews with 18 Yik Yak users on an urban American university campus and found that these three focal design features contributed to casual commitment, transitory use, and emergent community identity. We describe situated anonymity, which is the result of anonymity, ephemerality, and hyper-locality coexisting as focal design features of an online community. This work extends our understanding of use and identity-versus-bond based commitment, which has implications for the design and study of other atypical online communities. |
AdaBoost-Based Algorithm for Network Intrusion Detection | Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data. |
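The paper defines its own decision-stump rules directly on categorical and continuous features; as a rough stand-in, the sketch below one-hot encodes the categorical connection attributes and boosts scikit-learn's default weak learner, which is a depth-1 decision tree (a stump). The feature names and toy records are invented and are not the benchmark data used in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier

# Toy connection records: two continuous and two categorical features.
df = pd.DataFrame({
    "duration": [0.2, 12.0, 0.1, 0.3, 7.5, 0.2],              # continuous (s)
    "src_bytes": [1500, 300, 90000, 120, 80000, 200],          # continuous
    "protocol": ["tcp", "udp", "tcp", "icmp", "tcp", "udp"],   # categorical
    "service": ["http", "dns", "http", "echo", "ftp", "dns"],  # categorical
})
y = np.array([0, 0, 1, 1, 1, 0])   # 1 = attack, 0 = normal use

# One-hot encode categorical features; continuous features stay as-is.
X = pd.get_dummies(df, columns=["protocol", "service"])

# AdaBoost with its default weak learner, a depth-1 decision tree (stump).
model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new connection, aligning its one-hot columns with the training set.
query = pd.DataFrame([{"duration": 0.2, "src_bytes": 85000,
                       "protocol": "tcp", "service": "http"}])
query = pd.get_dummies(query).reindex(columns=X.columns, fill_value=0)
print(model.predict(query))
```

The paper's own weak learners avoid the one-hot step by branching on category membership directly, which is what lets it mix the two feature types without forced conversions; the boosting loop itself is the same.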
A German Twitter Snapshot | We present a new corpus of German tweets. Due to the relatively small number of German messages on Twitter, it is possible to collect a virtually complete snapshot of German twitter messages over a period of time. In this paper, we present our collection method which produced a 24 million tweet corpus, representing a large majority of all German tweets sent in April, 2013. Further, we analyze this representative data set and characterize the German twitterverse. While German Twitter data is similar to other Twitter data in terms of its temporal distribution, German Twitter users are much more reluctant to share geolocation information with their tweets. Finally, the corpus collection method allows for a study of discourse phenomena in the Twitter data, structured into discussion threads. |
Pedicle screw-rod fixation: a feasible treatment for dogs with severe degenerative lumbosacral stenosis | BACKGROUND
Degenerative lumbosacral stenosis is a common problem in large breed dogs. For severe degenerative lumbosacral stenosis, conservative treatment is often not effective and surgical intervention remains as the last treatment option. The objective of this retrospective study was to assess the middle to long term outcome of treatment of severe degenerative lumbosacral stenosis with pedicle screw-rod fixation with or without evidence of radiological discospondylitis.
RESULTS
Twelve client-owned dogs with severe degenerative lumbosacral stenosis underwent pedicle screw-rod fixation of the lumbosacral junction. During long term follow-up, dogs were monitored by clinical evaluation, diagnostic imaging, force plate analysis, and by using questionnaires to owners. Clinical evaluation, force plate data, and responses to questionnaires completed by the owners showed resolution (n = 8) or improvement (n = 4) of clinical signs after pedicle screw-rod fixation in 12 dogs. There were no implant failures; however, no interbody vertebral bone fusion of the lumbosacral junction was observed in the follow-up period. Four dogs developed mild recurrent low back pain that could easily be controlled by pain medication and an altered exercise regime.
CONCLUSIONS
Pedicle screw-rod fixation offers a surgical treatment option for large breed dogs with severe degenerative lumbosacral stenosis with or without evidence of radiological discospondylitis in which no other treatment is available. Pedicle screw-rod fixation alone does not result in interbody vertebral bone fusion between L7 and S1. |
Dynalog: an automated dynamic analysis framework for characterizing android applications | Android is becoming ubiquitous and currently has the largest share of the mobile OS market with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware that are becoming more sophisticated to evade state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques in order to avoid detection and this may defeat static analysis based approaches. Dynamic analysis on the other hand may be used to overcome this limitation. Hence in this paper we propose DynaLog, a dynamic analysis based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open source tools to extract and log high level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications demonstrating its capabilities for effective analysis and detection of malicious applications. |
Lifetime-Aware Scheduling and Power Control for M2M Communications in LTE Networks | In this paper the scheduling and transmit power control are investigated to minimize the energy consumption for battery-driven devices deployed in LTE networks. To enable efficient scheduling for a massive number of machine-type subscribers, a novel distributed scheme is proposed to let machine nodes form local clusters and communicate with the base-station through the cluster-heads. Then, uplink scheduling and power control in LTE networks are introduced and lifetime-aware solutions are investigated to be used for the communication between cluster-heads and the base-station. Beside the exact solutions, low-complexity suboptimal solutions are presented in this work which can achieve near optimal performance with much lower computational complexity. The performance evaluation shows that the network lifetime is significantly extended using the proposed protocols. |
Prevalence and causative fungal species of tinea capitis among schoolchildren in Gabon. | Tinea capitis is endemic among schoolchildren in tropical Africa. The objective was to determine the prevalence of symptomatic tinea capitis in schoolchildren in Gabon. A cross-sectional study was conducted with 454 children aged 4-17 years, attending a rural school and an urban school. The diagnosis of tinea capitis was based on clinically manifest infection, direct microscopic examination using 20% potassium hydroxide (KOH) solution and fungal culture. Based on clinical examination, 105 (23.1%) of 454 children had tinea capitis. Seventy-four (16.3%) children were positive by direct examination (KOH) and/or fungal culture. The prevalence of tinea capitis depended on the school studied and ranged from 20.4% in the urban school with a higher socioeconomic status to 26.3% in the rural school with a lower socioeconomic status. Similarly, the spectrum of causative species varied between the different schools. Taking the schools together, Trichophyton soudanense (29.4%) was the most prominent species, followed by Trichophyton tonsurans (27.9%) and Microsporum audouinii (25.0%). Clinically manifest tinea capitis is endemic among schoolchildren in the Lambaréné region in Gabon. The prevalence of tinea capitis and the causative species depended on the type of school that was investigated. |
Target-driven visual navigation in indoor scenes using deep reinforcement learning | Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment. |
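A goal-conditioned actor-critic of the kind described above can be sketched as follows: the policy and value heads are functions of a joint embedding of the current observation and the target observation, which is what lets a single network generalize across goals. The layer sizes and the use of flat feature vectors (instead of the paper's raw images, scene-specific layers, and AI2-THOR training loop) are simplifications.

```python
import torch
import torch.nn as nn

class GoalConditionedActorCritic(nn.Module):
    """Actor-critic whose policy and value depend on (state, goal) jointly."""
    def __init__(self, obs_dim=512, hidden=256, n_actions=4):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.goal_enc = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state-goal value

    def forward(self, obs, goal):
        joint = self.fusion(torch.cat([self.state_enc(obs),
                                       self.goal_enc(goal)], dim=-1))
        return self.policy_head(joint), self.value_head(joint)

# Hypothetical pre-extracted visual features for the current view and target.
model = GoalConditionedActorCritic()
obs, goal = torch.randn(1, 512), torch.randn(1, 512)
logits, value = model(obs, goal)
action = torch.distributions.Categorical(logits=logits).sample()
print(action.item(), value.item())
```

Because the goal enters the forward pass rather than being baked into the weights, navigating to a new target only requires feeding a new goal embedding, not retraining the whole policy.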
Autonomous underwater vehicle navigation | This paper surveys the problem of navigation for autonomous underwater vehicles (AUVs). Marine robotics technology has undergone a phase of dramatic increase in capability in recent years. Navigation is one of the key challenges that limits our capability to use AUVs to address problems of critical importance to society. Good navigation information is essential for safe operation and recovery of an AUV. For the data gathered by an AUV to be of value, the location from which the data has been acquired must be accurately known. The three primary methods for navigation of AUVs are (1) dead-reckoning and inertial navigation systems, (2) acoustic navigation, and (3) geophysical navigation techniques. The current state-of-the-art in each of these areas is summarized, and topics for future research are suggested. |
Positional match demands of professional rugby league competition. | The purpose of this study was to examine the differences in physical performance and game-specific skill demands between 5 positional groups in a professional rugby league team. Positional groups consisted of the backs (n = 8), forwards (n = 8), fullback (n = 7), hooker (n = 8), and service players (n = 8). Time-motion analysis was used to determine physical performance measures (exercise intensity, distance travelled, time, frequency, and speed measures) and game-specific skill measures (ball carries, supports, ball touches, play the balls, and tackling indices) per minute of playing time. The main finding was that the fullback completed more very high-intensity running (VHIR) because of more support runs when compared to all other positional groups (p = 0.017). The VHIR (p = 0.004) and sprinting indices (p < 0.002) were also greater in the second half of a match for the fullback than in any other positional group. The hooker spent more time jogging than the backs and forwards (p < 0.001) and touched the ball on more occasions than any other positional group (p < 0.001). The backs spent more time walking than the forwards, hooker, and service players (p < 0.001). The forwards, hooker, and service players completed more tackles per minute during a match than the backs and fullback (p < 0.001). The fullback and forwards also ran the ball on more occasions than the backs, hooker, and service players did (p < 0.001). These results show that positional roles play an important part in determining the amount of physical and game-specific skill involvement during match play. |
Use Dynamic Code Loading ( DCL ) ? Yes Provenance / entity Identification Dynamic Analysis App Execution Engine | Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps. |
A Critical Review of Recent Progress in Mid-Range Wireless Power Transfer | Starting from Tesla's principles of wireless power transfer a century ago, this critical review outlines recent magneto-inductive research activities on wireless power transfer with the transmission distance greater than the transmitter coil dimension. It summarizes the operating principles of a range of wireless power research into 1) the maximum power transfer and 2) the maximum energy efficiency principles. The differences and the implications of these two approaches are explained in terms of their energy efficiency and transmission distance capabilities. The differences between the system energy efficiency and the transmission efficiency are also highlighted. The review covers the two-coil systems, the four-coil systems, the systems with relay resonators and the wireless domino-resonator systems. Related issues including human exposure issues and reduction of winding resistance are also addressed. The review suggests that the use of the maximum energy efficiency principle in the two-coil systems is suitable for short-range rather than mid-range applications; the use of the maximum power transfer principle in the four-coil systems is good for maximizing the transmission distance, but is under a restricted system energy efficiency (<50%); the use of the maximum energy efficiency principle in relay or domino systems may offer a good compromise for good system energy efficiency and transmission distance on the condition that relay resonators can be placed between the power source and the load. |
Dual-Polarized Tapered Slot-Line Antenna Array Fed by Rotman Lens Air-Filled Ridge-Port Design | A novel ridge-port Rotman-lens is described, which operates as a lens with tapered slot-line ports. The lens parallel-plates mirror the ridge-ports to tapered slot-line ports. The lens height is half the height of the antenna array row, and two lenses can be stacked and feed one dual-polarized antenna array row, thus yielding a compact antenna system. The lens is air-filled, so it is easy to manufacture and repeatable in performance with no dielectric tolerances and losses, and it is lightweight compared to a dielectric lens. The lens with elongated tapered ports operates down to the antenna array low frequency, thus utilizing the large antenna bandwidth. These features make the ridge-port air-filled lens more useful than a conventional microstrip Rotman lens. |
Photonic chirped radio-frequency generator with ultra-fast sweeping rate and ultra-wide sweeping range. | A high-performance photonic sweeping-frequency (chirped) radio-frequency (RF) generator has been demonstrated. By use of a novel wavelength-sweeping distributed-feedback (DFB) laser, which is operated based on the linewidth enhancement effect, a fixed-wavelength narrow-linewidth DFB laser, and a wideband (dc to 50 GHz) photodiode module for heterodyne beating RF signal generation, a very clear chirped RF waveform can be captured by a fast real-time scope. A very high frequency sweeping rate (10.3 GHz/μs) with an ultra-wide RF frequency sweeping range (~40 GHz) has been demonstrated. The high repeatability (~97%) in sweeping frequency has been verified by analyzing tens of repetitive chirped waveforms. |
Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence | Hybrid intelligence systems combine machine and human intelligence to overcome the shortcomings of existing AI systems. This paper reviews recent research efforts towards developing hybrid systems focusing on reasoning methods for optimizing access to human intelligence and on gaining comprehensive understanding of humans as helpers of AI systems. It concludes by discussing short and long term research directions. |
Anti-forensics: The Next Step in Digital Forensics Tool Testing | We classify and present established and new attacks on digital forensics tools. In particular, we present the first and surprisingly simple code injection attack on a commercial analysis tool that potentially allows an attacker to infiltrate the analysis system. We argue that digital forensics tool testing must mature to cater for malicious adversaries. We also discuss possible countermeasures. |
How can machine learning help stock investment? | The million-dollar question for stock investors is whether the price of a stock will rise or not. Stock market fluctuations are violent, and there are many complicated financial indicators. Only people with extensive experience and knowledge can understand the meaning of the indicators and use them to make good predictions and profit from them; most other people can only rely on luck to earn money from stock trading. Machine learning is an opportunity for ordinary people to gain steady returns from the stock market, and it can also help experts identify the most informative indicators and make better predictions. |
The effect of annealing on the structural and magnetic properties of zinc-substituted Ni-ferrite nanocrystals | Nanosized Ni0.5Zn0.5Fe2O4 was prepared by a chemical co-precipitation technique using the chlorides of Ni, Zn, and Fe(III) and oleic acid. The precursors were annealed at different temperatures: 500, 700, and 900 °C. The XRD patterns of the samples show the presence of a single-phase cubic spinel structure. The grain size of the samples has been determined using the Scherrer formula and SEM. The particle size, lattice parameter, and X-ray density were estimated from the X-ray diffraction data. The particle size is found to vary from 13 nm to 31 nm and largely depends on the annealing temperature. Magnetization measurements have been carried out on the nanosized Ni-Zn ferrite, and it was found that the saturation magnetization (Ms), remanence (Mr), and coercivity were lower compared to bulk materials. |
Hepatitis C Virus Resistance to Direct-Acting Antiviral Drugs in Interferon-Free Regimens. | Treatment of hepatitis C virus (HCV) infection has progressed considerably with the approval of interferon-free, direct-acting antiviral (DAA)-based combination therapies. Although most treated patients achieve virological cure, HCV resistance to DAAs has an important role in the failure of interferon-free treatment regimens. The presence of viral variants resistant to NS5A inhibitors at baseline is associated with lower rates of virological cure in certain groups of patients, such as those with genotype 1a or 3 HCV, those with cirrhosis, and/or prior nonresponders to pegylated interferon-based regimens. DAA-resistant HCV is generally dominant at virological failure (most often relapse). Viruses resistant to NS3-4A protease inhibitors disappear from peripheral blood in a few weeks to months, whereas NS5A inhibitor-resistant viruses persist for years. Re-treatment options are available, but first-line treatment strategies should be optimized to efficiently prevent treatment failure due to HCV resistance. |
Parallel auto-encoder for efficient outlier detection | Detecting outliers from big data plays an important role in network security. Previous outlier detection algorithms are generally incapable of handling big data. In this paper we present a parallel outlier detection method for big data, based on a new parallel auto-encoder method. Specifically, we build a replicator model of the input data to obtain the representation of sample data. Then, the replicator model is used to measure the replicability of test data, where records having higher reconstruction errors are classified as outliers. Experimental results show the performance of the proposed parallel algorithm. |
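A minimal, single-machine sketch of the replicator idea (the paper's contribution is the parallel implementation, which is not reproduced here; the network size and the 1% threshold are assumptions):

```python
# Train an auto-encoder-style regressor to reproduce its input, then flag records
# with large reconstruction error as outliers.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # mostly "normal" records
X[:10] += 8.0                            # a few injected outliers

X_std = StandardScaler().fit_transform(X)
replicator = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
replicator.fit(X_std, X_std)             # learn to reconstruct the input

errors = np.mean((replicator.predict(X_std) - X_std) ** 2, axis=1)
threshold = np.percentile(errors, 99)    # assumption: top 1% treated as outliers
outliers = np.where(errors > threshold)[0]
print(outliers)
```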
How transferable are features in deep neural networks? | Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset. |
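A small sketch of the transfer procedure the paper studies (the architecture below is a toy stand-in, not the paper's ImageNet network): copy the first k layers from a base-task network into a target-task network, then either freeze them or allow fine-tuning.

```python
# Transfer the leading layers of a trained "base" network to a "target" network.
import torch.nn as nn

def make_convnet(n_classes):
    # Assumes 32x32 RGB inputs so that the classifier sees 64 * 8 * 8 features.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # block 0
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # block 1
        nn.Flatten(), nn.Linear(64 * 8 * 8, n_classes),               # classifier
    )

base = make_convnet(n_classes=100)     # pretend this was trained on the base task
target = make_convnet(n_classes=10)    # new target task

k = 3  # number of leading layers to transfer (the first conv block here)
for i in range(k):
    target[i].load_state_dict(base[i].state_dict())
    for p in target[i].parameters():
        p.requires_grad = False        # freeze; set True to fine-tune instead
```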
Testosterone Levels Following Decreases in Serum Osteocalcin | Recent preclinical studies suggest that osteoblasts are able to induce testosterone production by the testis, a process mediated by osteocalcin. Bisphosphonates substantially reduce osteocalcin levels. If osteocalcin is an important regulator of testosterone levels in adult men, it would be expected that the substantial reductions in osteocalcin induced by zoledronate would impact negatively on testosterone levels. Previously, we carried out a 2-year randomized, controlled trial of annual 4 mg zoledronate in 43 HIV-infected men. To explore the relationship between osteocalcin and testosterone further, we measured serum testosterone at baseline, 3 months, and 2 years; luteinizing hormone at 3 months and 2 years; and total osteocalcin at 2 years in 28 trial participants with available blood samples. At 2 years, total osteocalcin was 39 % lower in the zoledronate group than the placebo group (zoledronate mean 10.1 [SD 3.0] μg/L, placebo 16.5 [SD 4.9] μg/L, P = 0.003). Despite these substantial differences in osteocalcin levels, testosterone levels did not change over time in either group and there were no between-group differences over time, P = 0.4 (mean change at 2 years [adjusted for baseline levels] in zoledronate group −0.4 nmol/L, 95 % CI −2.5 to 1.6; placebo group 0.4 nmol/L, 95 % CI −1.6 to 2.5). Luteinizing hormone was within the normal range and did not differ between the groups at either 3 months or 2 years. Thus, the absence of a change in testosterone despite a substantial reduction in osteocalcin following zoledronate treatment argues against a biologically significant role for osteocalcin in the regulation of testosterone in adult men. This provides reassurance that men receiving potent antiresorptive drugs are not at risk of iatrogenic hypogonadism. |
Millimeter-Wave Receiver Concepts for 77 GHz Automotive Radar in Silicon-Germanium Technology |
A Solution to the Next Best View Problem for Automated Surface Acquisition | A solution to the "next best view" (NBV) problem for automated surface acquisition is presented. The NBV problem is to determine which areas of a scanner's viewing volume need to be scanned to sample all of the visible surfaces of an a priori unknown object and where to position/control the scanner to sample them. It is argued that solutions to the NBV problem are constrained by the other steps in a surface acquisition system and by the range scanner's particular sampling physics. A method for determining the unscanned areas of the viewing volume is presented. In addition, a novel representation, positional space (PS), is presented which facilitates a solution to the NBV problem by representing what must be and what can be scanned in a single data structure. The number of costly computations needed to determine if an area of the viewing volume would be occluded from some scanning position is decoupled from the number of positions considered for the NBV, thus reducing the computational cost of choosing one. An automated surface acquisition system designed to scan all visible surfaces of an a priori unknown object is demonstrated on real objects. |
Barriers to Physicians’ Adoption of Healthcare Information Technology: An Empirical Study on Multiple Hospitals | Prior research on technology usage has largely overlooked the issue of user resistance or barriers to technology acceptance. Prior research on Electronic Medical Records has largely focused on technical issues but rarely on managerial issues. Such oversight prevented a better understanding of users’ resistance to new technologies and the antecedents of technology rejection. Incorporating the enablers and the inhibitors of technology usage intention, this study explores physicians’ reactions towards the electronic medical record. The main focus is on two barriers: perceived threat and perceived inequity. 115 physicians from 6 hospitals participated in the questionnaire survey. Structural Equation Modeling was employed to verify the measurement scale and research hypotheses. According to the results, perceived threat shows a direct and negative effect on perceived usefulness and behavioral intentions, as well as an indirect effect on behavioral intentions via perceived usefulness. Perceived inequity reveals a direct and positive effect on perceived threat, and it also shows a direct and negative effect on perceived usefulness. In addition, perceived inequity reveals an indirect effect on behavioral intentions via perceived usefulness, with perceived threat as the inhibitor. The research findings present better insight into physicians’ rejection and its antecedents. For the healthcare industry, understanding the factors contributing to physicians’ technology acceptance is important to ensure a smooth implementation of any new technology. The results of this study can also give change managers a reference for a smooth introduction of IT into an organization. In addition, our proposed measurement scale can be applied as a diagnostic tool to better understand the status quo within their organizations and users’ reactions to technology acceptance. By doing so, barriers to physicians’ acceptance can be identified earlier and more effectively, before they lead to technology rejection. |
Stable and Efficient Representation Learning with Nonnegativity Constraints | Orthogonal matching pursuit (OMP) is an efficient approximation algorithm for computing sparse representations. However, prior research has shown that the representations computed by OMP may be of inferior quality, as they deliver suboptimal classification accuracy on several image datasets. We have found that this problem is caused by OMP’s relatively weak stability under data variations, which leads to unreliability in supervised classifier training. We show that by imposing a simple nonnegativity constraint, this nonnegative variant of OMP (NOMP) can mitigate OMP’s stability issue and is resistant to noise overfitting. In this work, we provide extensive analysis and experimental results to examine and validate the stability advantage of NOMP. In our experiments, we use a multi-layer deep architecture for representation learning, where we use K-means for feature learning and NOMP for representation encoding. The resulting learning framework is not only efficient and scalable to large feature dictionaries, but also is robust against input noise. This framework achieves the state-of-the-art accuracy on the STL-10 dataset. |
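A sketch of a nonnegative orthogonal matching pursuit encoder of the kind described (greedy atom selection plus nonnegative least-squares refitting; the stopping rule and the toy dictionary are assumptions):

```python
# Nonnegative OMP (NOMP) encoding sketch: atoms are selected greedily by largest
# positive correlation with the residual, and coefficients are re-fit with
# nonnegative least squares at every step.
import numpy as np
from scipy.optimize import nnls

def nomp_encode(D, x, n_nonzero):
    """D: dictionary with unit-norm columns, x: signal, returns a sparse code >= 0."""
    support, coef = [], None
    residual = x.copy()
    for _ in range(n_nonzero):
        correlations = D.T @ residual
        j = int(np.argmax(correlations))
        if j in support or correlations[j] <= 0:   # nonnegativity: stop if no positive match
            break
        support.append(j)
        coef, _ = nnls(D[:, support], x)           # refit all selected atoms, coef >= 0
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    if coef is not None:
        code[support] = coef
    return code

D = np.linalg.qr(np.random.randn(64, 64))[0]       # toy orthonormal dictionary
x = np.abs(np.random.randn(64))
print(np.count_nonzero(nomp_encode(D, x, n_nonzero=5)))
```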
Fludarabine, cyclophosphamide, and rituximab chemoimmunotherapy is highly effective treatment for relapsed patients with CLL. | Optimal management of patients with relapsed/refractory chronic lymphocytic leukemia (CLL) is dictated by patient characteristics, prior therapy, and response to prior therapy. We report the final analysis of combined fludarabine, cyclophosphamide, and rituximab (FCR) for previously treated patients with CLL and identify patients who benefit most from this therapy. We explore efficacy of FCR in patients beyond first relapse, patients with prior exposure to fludarabine and alkylating agent combinations, and patients with prior exposure to rituximab. The FCR regimen was administered to 284 previously treated patients with CLL. Patients were assessed for response and progression by 1996 National Cancer Institute-Working Group (NCI-WG) criteria for CLL and followed for survival. The overall response rate was 74%, with 30% complete remission. The estimated median overall survival was 47 months and median progression-free survival for all patients was 21 months. Subgroup analyses indicated that the following patients were most suitable for FCR treatment: patients with up to 3 prior treatments, fludarabine-sensitive patients irrespective of prior rituximab exposure, and patients without chromosome 17 abnormalities. FCR is an active and well-tolerated therapy for patients with relapsed CLL. The addition of rituximab to FC improved quality and durability of response in this patient population. |
Material-Efficient Permanent-Magnet Shape for Torque Pulsation Minimization in SPM Motors for Automotive Applications | This paper focuses on the design and analysis of a novel material-efficient permanent-magnet (PM) shape for surface-mounted PM (SPM) motors used in automotive actuators. Most such applications require smooth torque with minimum pulsation for accurate position control. The proposed PM shape is designed to be sinusoidal and symmetrical in the axial direction, minimizing the amount of rare-earth magnet material as well as providing a balanced axial electromagnetic force, which results in a more sinusoidal electromotive force, less cogging torque, and, consequently, smoother electromagnetic torque. The contribution of the novel PM shape to motor characteristics is first estimated by the 3-D finite-element method, and all of the simulation results are compared with those of SPM motors with two conventional arched PM shapes: one previously reported sinusoidal PM shape and one step-skewed PM shape. Finally, some finite-element analysis results are confirmed by experimental results. |
Utopias of Participation: Feminism, Design, and the Futures | This essay addresses the question of how participatory design (PD) researchers and practitioners can pursue commitments to social justice and democracy while retaining commitments to reflective practice, the voices of the marginal, and design experiments “in the small.” I argue that contemporary feminist utopianism has, on its own terms, confronted similar issues, and I observe that it and PD pursue similar agendas, but with complementary strengths. I thus propose a cooperative engagement between feminist utopianism and PD at the levels of theory, methodology, and on-the-ground practice. I offer an analysis of a case—an urban renewal project in Taipei, Taiwan—as a means of exploring what such a cooperative engagement might entail. I argue that feminist utopianism and PD have complementary strengths that could be united to develop and to propose alternative futures that reflect democratic values and procedures, emerging technologies and infrastructures as design materials, a commitment to marginalized voices (and the bodies that speak them), and an ambitious, even literary, imagination. |
Treatment of elderly patients with isolated systolic hypertension with isosorbide dinitrate in an asymmetric dosing schedule | Nitrates decrease pulse pressure more than mean arterial pressure (MAP) and are advocated for the treatment of isolated systolic hypertension (ISH). Nitrates show drug tolerance during chronic treatment, so an asymmetric dosing regimen may prevent loss of effect of nitrates. This study investigates the anti-hypertensive effect of isosorbide dinitrate (ISDN) given in a twice daily asymmetric dosing regimen in elderly patients with ISH. After a 6-week placebo run-in period, patients entered the double-blind study. Ten patients received placebo and 11 patients ISDN 20 mg b.i.d. for 8 weeks. This dose could be doubled once. Office systolic and diastolic blood pressures (SBP/DBP) and ambulatory BP were measured. Pulse pressure was calculated as SBP–DBP. Office pulse pressure was more reduced during ISDN (17.9%) than with placebo (5%; P < 0.05). SBP and MAP decreased compared to baseline, but the changes were not statistically significant between the two groups. DBP tended to increase with ISDN compared to placebo. Mean 24-h, mean daytime and mean night-time pulse pressure decreased after treatment with ISDN (10.7%, 12.1%, 7.9%, respectively). Pulse pressure tended to decrease more during the day than during the night with ISDN. No changes could be demonstrated with placebo. In conclusion, pulse pressure decreased with ISDN, resulting in a lower SBP without a decrease in DBP. The latter may preserve coronary perfusion in ISH. With the asymmetric dosing regimen the decrease in pulse pressure was not clear at night. Whether a decrease in nocturnal BP, in addition to the spontaneous decrease, is advisable in ISH remains a matter of debate. |
Can We Gain More from Orthogonality Regularizations in Training Deep CNNs? | OVERVIEW • We develop novel orthogonality regularizations on training deep CNNs, by borrowing ideas and tools from sparse optimization. • These plug-and-play regularizations can be conveniently incorporated into training almost any CNN without extra hassle. • The proposed regularizations can consistently improve the performances of baseline deep networks on CIFAR-10/100, ImageNet and SVHN datasets, based on intensive empirical experiments, as well as accelerate/stabilize the training curves. • The proposed orthogonal regularizations outperform existing competitors. |
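As a concrete instance of this family of regularizers, a plain "soft orthogonality" penalty ||WWᵀ − I||_F² can be added to the task loss; the sketch below is illustrative only, and the λ weight is a hypothetical hyper-parameter rather than a value from the paper:

```python
# Soft orthogonality penalty applied to every Linear/Conv2d weight matrix.
import torch
import torch.nn as nn

def soft_orthogonality_penalty(model):
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            W = m.weight.reshape(m.weight.shape[0], -1)   # rows = output units
            gram = W @ W.t()
            eye = torch.eye(W.shape[0], device=W.device)
            penalty = penalty + ((gram - eye) ** 2).sum()  # ||W W^T - I||_F^2
    return penalty

# Usage inside a training step (lambda_orth is a hypothetical hyper-parameter):
# loss = task_loss + lambda_orth * soft_orthogonality_penalty(model)
```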
Deep Learning for Object Saliency Detection and Image Segmentation | In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects. |
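A rough sketch of the core mechanism (using an assumed off-the-shelf classifier rather than the paper's trained saliency-specific network): repeatedly push the input image down the class-score gradient and read the saliency map off the accumulated change.

```python
# Gradient-descent modification of the input image; the per-pixel change serves
# as a saliency map. The model, step size, and iteration count are illustrative.
import torch
import torchvision.models as models

model = models.resnet18().eval()               # untrained here; use a trained classifier in practice
image = torch.rand(1, 3, 224, 224)
target_class = 7                               # hypothetical class of interest

x = image.clone().requires_grad_(True)
for _ in range(30):
    score = model(x)[0, target_class]          # class-specific score
    score.backward()                           # pixel-wise gradients via backprop
    with torch.no_grad():
        x -= 0.5 * x.grad                      # step that reduces the class score
        x.grad.zero_()

saliency = (x.detach() - image).abs().sum(dim=1, keepdim=True)  # per-pixel change
```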
Cities and complexity - understanding cities with cellular automata, agent-based models, and fractals |
Automatic information extraction from semi-structured Web pages by pattern discovery | The World Wide Web is now undeniably the richest and most dense source of information; yet, its structure makes it difficult to make use of that information in a systematic way. This paper proposes a pattern discovery approach to the rapid generation of information extractors that can extract structured data from semi-structured Web documents. Previous work in wrapper induction aims at learning extraction rules from user-labeled training examples, which, however, can be expensive in some practical applications. In this paper, we introduce IEPAD (an acronym for Information Extraction based on PAttern Discovery), a system that discovers extraction patterns from Web pages without user-labeled examples. IEPAD applies several pattern discovery techniques, including PAT-trees, multiple string alignments and pattern matching algorithms. Extractors generated by IEPAD can be generalized over unseen pages from the same Web data source. We empirically evaluate the performance of IEPAD on an information extraction task from 14 real Web data sources. Experimental results show that with the extraction rules discovered from a single page, IEPAD achieves 96% average retrieval rate, and with less than five example pages, IEPAD achieves 100% retrieval rate for 10 of the sample Web data sources. © 2002 Elsevier Science B.V. All rights reserved. |
Robust stock trading using fuzzy decision trees | Stock market analysis has traditionally been proven to be difficult due to the large amount of noise present in the data. Different approaches have been proposed to predict stock prices including the use of computational intelligence and data mining techniques. Many of these methods operate on closing stock prices or on known technical indicators. Limited studies have shown that Japanese candlestick analysis serve as rich information sources for the market. In this paper decision trees based on the ID3 algorithm are used to derive short-term trading decisions from candlesticks. To handle the large amount of uncertainty in the data, both inputs and output classifications are fuzzified using well-defined membership functions. Testing results of the derived decision trees show significant gains compared to ideal mid and long-term trading simulations both in frictionless and realistic markets. |
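A toy illustration of the fuzzification step (the membership functions and breakpoints below are invented for illustration, not taken from the paper): a candlestick's relative body size is mapped to overlapping fuzzy classes before being fed to a fuzzy ID3-style tree.

```python
# Fuzzify a candlestick feature with triangular membership functions.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_body(open_price, close_price):
    body = abs(close_price - open_price) / open_price  # relative body size
    return {
        "small":  triangular(body, -0.001, 0.0, 0.01),
        "medium": triangular(body, 0.005, 0.015, 0.03),
        "large":  triangular(body, 0.02, 0.05, 0.10),
    }

print(fuzzify_body(open_price=100.0, close_price=101.2))
```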
Cognitive style predicts entry into physical sciences and humanities: Questionnaire and performance tests of empathy and systemizing | It is often asked why fewer women enter science. This study assesses whether a cognitive style characterized by systemizing being at a higher level than empathizing (SNE) is better than sex in predicting entry into the physical sciences compared to humanities. 415 students in both types of discipline (203 males, 212 females) were given questionnaire and performance measures of systemizing and empathy. 59.1% of the science students were male and 70.1% of the humanities students were female. There were significant sex differences on the Empathy Quotient (EQ) (females on average scoring higher) and on the Systemizing Quotient (SQ) (males on average scoring higher), confirming earlier studies. Scientists also scored higher on the SQ, and scored lower on the EQ, compared to those in the humanities. Thus, independent of sex, SQ was a significant predictor of entry into the physical sciences. Results from questionnaire data and performance data indicate an SNE profile for physical science students as a group, and an ENS profile for humanities students as a group, regardless of sex. We interpret this as evidence that whilst on average males show stronger systemizing and females show stronger empathizing, individuals with a strong systemizing drive are more likely to enter the physical sciences, irrespective of their sex. © 2007 Elsevier Inc. All rights reserved. |
The acceptability of assistive technology to older people | Assistive technology (AT) is defined in this paper as 'any device or system that allows an individual to perform a task that they would otherwise be unable to do, or increases the ease and safety with which the task can be performed' (Cowan and Turner-Smith 1999). Its importance in contributing to older people's independence and autonomy is increasingly recognised, but there has been little research into the viability of extensive installations of AT. This paper focuses on the acceptability of AT to older people, and reports one component of a multidisciplinary research project that examined the feasibility, acceptability, costs and outcomes of introducing AT into their homes. Sixty-seven people aged 70 or more years were interviewed in-depth during 2001 to find out about their use and experience of a wide range of assistive technologies. The findings suggest a complex model of acceptability, in which a 'felt need' for assistance combines with 'product quality'. The paper concludes by considering the tensions that may arise in the delivery of acceptable assistive technology. |
Wheelchair tilt-in-space and recline does not reduce sacral skin perfusion as changing from the upright to the tilted and reclined position in people with spinal cord injury. | OBJECTIVE
To investigate the effect of various wheelchair tilt-in-space and recline angles on sacral skin perfusion in wheelchair users with spinal cord injury.
DESIGN
Repeated-measures, intervention and outcomes measure design.
SETTING
University research laboratory.
PARTICIPANTS
Power wheelchair users with spinal cord injury (N=11).
INTERVENTIONS
Six protocols of various wheelchair tilt-in-space and recline angles were randomly assigned to the participants: (1) 15° tilt-in-space and 100° recline, (2) 25° tilt-in-space and 100° recline, (3) 35° tilt-in-space and 100° recline, (4) 15° tilt-in-space and 120° recline, (5) 25° tilt-in-space and 120° recline, and (6) 35° tilt-in-space and 120° recline. Each protocol consisted of a 5-minute upright sitting and a 5-minute tilted and reclined period.
MAIN OUTCOME MEASURES
Skin perfusion over the sacrum (midpoint between the right posterior superior iliac spine and the adjacent spinous process) and right ischial tuberosity was measured using laser Doppler flowmetry.
RESULTS
Sacral skin perfusion did not show a significant difference in all 6 protocols of various tilt-in-space and recline angles when changing from an upright to a tilted and reclined position (not significant). However, as previously reported, skin perfusion over the ischial tuberosity showed a significant increase at 15°, 25°, and 35° tilt-in-space when combined with 120° recline and at 35° tilt-in-space when combined with 100° recline (P<.008).
CONCLUSIONS
Our results indicate that wheelchair tilt-in-space and recline enhances skin perfusion over the ischial tuberosities without reducing sacral skin perfusion when changing from an upright to a tilted and reclined position. |
A Holistic View of the Challenges and Social Implications of Online Distribution: the Case of Pensions | The market for individual and company pensions has been identified as uncompetitive and inefficient, resulting in consumer confusion and apathy (Sandler, 2002; Pickering, 2002). Recommended solutions are large-scale product rationalisation and process simplification. Sandler (2002) suggests end-to-end electronic processing as a key means to achieving improvement whilst noting that “success will necessitate very broad take-up” (p. 217), requiring collective action and co-ordination. In response, pension providers are seeking to develop the Internet as a low-cost distribution and communication channel. |
Imitating human playing styles in Super Mario Bros | We describe and compare several methods for generating game character controllers that mimic the playing style of a particular human player, or of a population of human players, across video game levels. Similarity in playing style is measured through an evaluation framework, that compares the play trace of one or several human players with the punctuated play trace of an AI player. The methods that are compared are either hand-coded, direct (based on supervised learning) or indirect (based on maximising a similarity measure). We find that a method based on neuroevolution performs best both in terms of the instrumental similarity measure and in phenomenological evaluation by human spectators. A version of the classic platform game “Super Mario Bros” is used as the testbed game in this study but the methods are applicable to other games that are based on character movement in space. |
R 47 Pixel continues to shrink ... . Small Pixels for Novel CMOS Image Sensors | This paper presents recent results of small pixel development for different applications and discusses optical and electrical characteristics of small pixels along with their respective images. Presented are basic optical and electrical characteristics of pixels with sizes in the range from 2.2μm to 1.1μm,. The paper provides a comparison of front side illumination (FSI) with back side illumination (BSI) technology and considers tradeoffs and applicability of each technology for different pixel sizes. Additional functionalities that can be added to pixel arrays with small pixel, in particular high dynamic range capabilities are also discussed. 1. FSI and BSI technology development Pixel shrinking is the common trend in image sensors for all areas of consumer electronics, including mobile imaging, digital still and video cameras, PC cameras, automotive, surveillance, and other applications. In mobile and digital still camera (DSC) applications, 1.75μm and 1.4μm pixels are widely used in production. Designers of image sensors are actively working on super-small 1.1μm and 0.9um pixels. In high-end DSC cameras with interchangeable lenses, pixel size reduces from the range of 5 – 6 μm to 3 – 4 μm, and even smaller. With very high requirements for angular pixel performance, this results in similar or even bigger challenges as for sub 1.4μm pixels. Altogether, pixel size reduction in all imaging areas has been the most powerful driving force for new technologies and innovations in pixel development. Aptina continues to develop FSI AptinaTM A-PixTM technology for pixel sizes of 1.4μm and bigger. Figures 1a and 1b illustrate a comparison of a regular pixel for a CMOS imager with Aptina’s A-Pix technology. Adding a light guide (LG) and extending the depth of the photodiode (PD) allow significant reduction of both optical and electrical crosstalk, thus significantly boosting pixel performance [1]. A-Pix technology has become a mature manufacturing process that provides high pixel performance with lower wafer cost compared to BSI technology. The latest efforts in developing A-Pix technology were focused on improving symmetry of the pixel, which resulted in extremely low optical cross-talk, reduced green imbalance and color shading. Improvements stem from improvements in the design and manufacturing of LG, along with the structure of Si PD. LG allows one to compensate for pixel asymmetry (at least its optical part) thus providing both optimal utilization of Si area, and minimal green imbalance / color shading. Figure 2 shows an example of green imbalance for 5Mpix sensors with 1.4μm pixels size designed for 27degree max CRA of the lens. Improvement of the LG design reduces green imbalance by more than 7x. BSI technology allows further reduction of pixel size to extremely small 1.1μm and 0.9μm, and more symmetrical pixel design for larger pixel nodes. Similar to A-Pix, the use of back side illumination in pixel design allows significant reduction of optical and electrical crosstalk, as illustrated in Figure 1c. Both BSI and Aptina Apix technology use the 90nm gate and 65nm pixel manufacturing process. Aptina’s BSI technology uses cost-effective P-EPI on P+ bulk silicon as starting wafers. The wafers receive normal FSI CMOS process with skipping some FSI p modules. Front side alignment marks are added for later backside alignments. 
The device wafers are bonded to BSI carrier wafers, and are thinned down to a few microns thick through wafer back side grinding, selective wet etch, and chemical-mechanical planarization process. The wafer thickness is matched to front side PD depth to reduce cross-talk. Finally, anti-reflective coatings are applied to backside silicon surface and micro-lens to increase pixel QE. Figure 3 shows normalized quantum efficiency spectral characteristics of 1.1μm BSI pixels. Pixels exhibit high QE for all 3 colors and small crosstalk that benefit overall image quality. Figure 4 presents luminance SNR plots for 1.4μm FSI and BSI pixels and 1.1μm BSI pixel. Due to advances of A-Pix technology, characteristics of FSI and BSI 1.4μm pixel are close, with the BSI pixel slightly outperforming FSI pixel, especially at very high CRA. However, the difference in performance is much smaller compared to conventional FSI pixel. For 1.1μm pixels, BSI technology definitely plays a key role in achieving high pixel performance. Major pixel photoelectrical characteristics are presented in Table 1. 2. Image quality of sensors with equal optical format Figure 5 presents SNR10 metrics for different pixel size inversely normalized per pixel area scene illumination at which luminance SNR is equal to 10x for specified lens conditions, integration time, and color correction matrix. As can be seen from the plot, the latest generation of pixels provides SNR10 performance that is scaled to the area, and as a result, provides the same image quality at the same optical format for the mid level of exposures. The latest generation of pixels with the size of (1.1μm – 2.2μm) in Figure 5 uses advances of A-pix technology to boost pixel performance. Many products for mobile and DSC applications use 1.4μm pixel; the latest generations of 1.75μm, 1.9μm, and 2.2μm are in mass production both for still shot and video-centric 2D and 3D applications. Bringing the latest technology to the large 5.6μm pixel has allowed us to significantly boost performance of that pixel (shown as a second bar of Figure 5 for 5.6μm pixel) for automotive applications. As was mentioned earlier, BSI technology furthers the extension of array size for the optical formats. The latest addition to the mainstream mobile cameras with 1⁄4‖ optical format is 8Mpix image sensor with 1.1μm pixels size. Figure 6 compares images from the previous 5Mpix sensor with 1⁄4‖ optical format with 1.4μm pixel size with images from the new 8Mpix sensor with 1.1μm pixel that fits into the same 1/4‖ optical format. Images were taken from the scene with ~100 lux illumination at 67ms integration time and typical f/2.8 lens for mobile applications. Zoomed fragments of the images with 100% zoom for 5Mpix sensor show very comparable quality of the images and confirm that similar image quality for a given optical format results when pixel performance that is scaled to the area continues to be the same. Figure 4 shows also the lowest achievable SNR10 for 1.4μm pixel at similar conditions for the ideal case of QE equal to 100% for all colors and no optical or electrical crosstalk – color overlaps are defined only by color filters. The shape of color filters is taken from large pixel sensor for high-end DSC application and assumes very good color reproduction. It is interesting to see that current 1.4μm pixel has only 40% lower SNR at conditions close to first acceptable image, SNR10 [2]. 3. 
Additional functionality for arrays with small pixels With the diffraction limits of imaging lenses, the minimum resolvable feature size (green light, Rayleigh limit) for an fnumber 2.8 lens is around 1.8 microns [3]. As pixel sizes continue to shrink below 1.8 microns, the image field produced from the optics is oversampled and system MTF does not continue to show scaled improvement based on increased frequency pixel sampling. How can we take advantage of increased frequency pixel sampling then? High Dynamic Range. Humans have the ability to gaze upon a fixed scene and clearly see very bright and dark objects simultaneously. The typical maximum brightness range visible by humans within a fixed scene is about 10,000 to 1 or 80dB [4]. Mobile and digital still cameras often struggle to match the intra-scene dynamic range of the human visual system and can’t capture high range scenes (50-80dB) primarily because the pixels in the camera’s sensors have a linear response and limited well capacities. HDR image capture technology can address the problem of limited dynamic range in today’s camera. However, a low cost technique that provides adequate performance for still and video applications is needed. Frame Multi-exposure HDR. The frame multi-exposure technique, otherwise known as exposure bracketing, is widely used in the industry to capture several photos of a scene and combine them into an HDR photo. Although this technique is simple, effective, and available to anyone with a camera with exposure control, the drawbacks relegate this technique to still scene photography and frame buffer-based post processing. If an HDR camera system is desired that doesn’t require frame memory and can reduce motion artifacts to a level where video capture is possible, the common image sensor architecture used in most cameras today must be changed. Can we use smaller pixels to provide multi-exposure HDR that doesn’t require frame memory for photos and reduces motion artifacts and allows video capture? Interleaved HDR Capture. With pixel size reduction there is an opportunity to take advantage of the diffraction limits of camera optical systems by spatially interleaving pixels with differing exposure time controls to achieve multi-exposure capture. Figure 7 shows an example of a dual exposure capture system using interleaved exposures within a standard Bayer pattern. This form of intra-frame multi-exposure HDR capture can be easily incorporated into standard CMOS sensors and doesn’t require the additional readout speed or large memories. The tradeoff of interleaving the exposures is that fewer pixels are available for each exposure image and can affect the overall captured image resolution. This is where the advantage of small pixels comes into play: as pixels shrink below the diffraction limit, the system approaches being oversampled such that the MTF doesn’t improve proportionally to pixel size. We propose that greater gain in overall image quality may be achieved by spatially sampling different exposures to capture higher scene quality rather than oversampling the image. In Figure 7, pairs of rows are used for each exposure to ens |
Feasibility and safety of virtual-reality-based early neurocognitive stimulation in critically ill patients | BACKGROUND
Growing evidence suggests that critical illness often results in significant long-term neurocognitive impairments in one-third of survivors. Although these neurocognitive impairments are long-lasting and devastating for survivors, rehabilitation rarely occurs during or after critical illness. Our aim is to describe an early neurocognitive stimulation intervention based on virtual reality for patients who are critically ill and to present the results of a proof-of-concept study testing the feasibility, safety, and suitability of this intervention.
METHODS
Twenty critically ill adult patients undergoing or having undergone mechanical ventilation for ≥24 h received daily 20-min neurocognitive stimulation sessions when awake and alert during their ICU stay. The difficulty of the exercises included in the sessions progressively increased over successive sessions. Physiological data were recorded before, during, and after each session. Safety was assessed through heart rate, peripheral oxygen saturation, and respiratory rate. Heart rate variability analysis, an indirect measure of autonomic activity sensitive to cognitive demands, was used to assess the efficacy of the exercises in stimulating attention and working memory.
RESULTS
Patients successfully completed the sessions on most days. No sessions were stopped early for safety concerns, and no adverse events occurred. Heart rate variability analysis showed that the exercises stimulated attention and working memory. Critically ill patients considered the sessions enjoyable and relaxing without being overly fatiguing.
CONCLUSIONS
The results in this proof-of-concept study suggest that a virtual-reality-based neurocognitive intervention is feasible, safe, and tolerable, stimulating cognitive functions and satisfying critically ill patients. Future studies will evaluate the impact of such interventions on neurocognitive outcomes. Trial registration: ClinicalTrials.gov identifier: NCT02078206. |
Beamforming for MIMO-OFDM Wireless Systems | Smart antennas are widely used for wireless communication because of their ability to increase the coverage and capacity of a communication system. A smart antenna performs two main functions: direction of arrival (DOA) estimation and beamforming. Using a beamforming algorithm, the smart antenna is able to form a main beam towards the desired user and nulls in the direction of interfering signals. In this project the direction of arrival (DOA) is estimated by using the MUSIC algorithm. Receive beamforming is performed by using the LMS and LLMS algorithms. In this paper, in order to perform secure transmission of the signal over wireless communication, we have used chaotic sequences. This paper evaluates the performance of beamforming with and without the LMS and LLMS algorithms for a MIMO-OFDM wireless system. The simulations are carried out using MATLAB. |
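For reference, a self-contained sketch of the LMS beamforming update used above (the array geometry, training symbols, and step size are illustrative assumptions):

```python
# LMS adaptive beamformer for a uniform linear array (numpy only).
import numpy as np

def steering_vector(angle_deg, n_elements, spacing=0.5):
    theta = np.deg2rad(angle_deg)
    return np.exp(-2j * np.pi * spacing * np.arange(n_elements) * np.sin(theta))

n, n_snapshots, mu = 8, 2000, 0.01
rng = np.random.default_rng(0)
desired = rng.choice([-1, 1], n_snapshots) + 0j          # known training symbols
interf = rng.choice([-1, 1], n_snapshots) + 0j

a_des, a_int = steering_vector(20, n), steering_vector(-40, n)
X = np.outer(a_des, desired) + np.outer(a_int, interf)   # received array snapshots
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

w = np.zeros(n, dtype=complex)
for k in range(n_snapshots):                              # LMS weight update
    x_k = X[:, k]
    y = np.vdot(w, x_k)                                   # beamformer output w^H x
    e = desired[k] - y                                    # error against training symbol
    w += mu * np.conj(e) * x_k                            # steepest-descent step

print(abs(np.vdot(w, a_des)), abs(np.vdot(w, a_int)))     # gain toward user vs interferer
```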
TDMA-ASAP: Sensor Network TDMA Scheduling with Adaptive Slot-Stealing and Parallelism | TDMA has been proposed as a MAC protocol for wireless sensor networks (WSNs) due to its efficiency in high WSN load. However, TDMA is plagued with shortcomings; we present modifications to TDMA that will allow for the same efficiency of TDMA, while allowing the network to conserve energy during times of low load (when there is no activity being detected). Recognizing that aggregation plays an essential role in WSNs, TDMA-ASAP adds to TDMA: (a) transmission parallelism based on a level-by-level localized graph-coloring, (b) appropriate sleeping between transmissions ("napping"), (c) judicious and controlled TDMA slot stealing to avoid empty slots to be unused and (d) intelligent scheduling/ordering transmissions. Our results show that TDMA-ASAP's unique combination of TDMA, slot-stealing, napping, and message aggregation significantly outperforms other hybrid WSN MAC algorithms and has a performance that is close to optimal in terms of energy consumption and overall delay. |
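A toy sketch of the slot-assignment idea (not the protocol implementation): within one level of the routing tree, greedily color the interference graph so that non-interfering nodes share a slot and can transmit in parallel.

```python
# Greedy graph coloring of one level's interference graph for TDMA slot assignment.
def color_level(nodes, interferes):
    """interferes(u, v) -> True if u and v cannot share a slot."""
    slot_of = {}
    for u in sorted(nodes):
        taken = {slot_of[v] for v in slot_of if interferes(u, v)}
        slot = 0
        while slot in taken:        # smallest free slot
            slot += 1
        slot_of[u] = slot
    return slot_of

# Toy example: nodes 0-4 on one level, neighbours within distance 1 interfere.
positions = {0: 0.0, 1: 0.8, 2: 2.0, 3: 2.5, 4: 4.0}
print(color_level(positions, lambda u, v: abs(positions[u] - positions[v]) <= 1.0))
```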
Statistical mechanics of semiflexible bundles of wormlike polymer chains. | We demonstrate that a semiflexible bundle of wormlike chains exhibits a state-dependent bending stiffness that alters fundamentally its scaling behavior with respect to the standard wormlike chain. We explore the equilibrium conformational and mechanical behavior of wormlike bundles in isolation, in cross-linked networks, and in solution. |
Weakly-supervised Learning of Mid-level Features for Pedestrian Attribute Recognition and Localization | State-of-the-art methods treat pedestrian attribute recognition as a multi-label image classification problem. The location information of person attributes is usually eliminated or simply encoded in the rigid splitting of the whole body in previous work. In this paper, we formulate the task in a weakly-supervised attribute localization framework. Based on GoogLeNet, firstly, a set of mid-level attribute features are discovered by newly designed detection layers, where a max-pooling based weakly-supervised object detection technique is used to train these layers with only image-level labels, without the need of bounding box annotations of pedestrian attributes. Secondly, attribute labels are predicted by regression of the detection response magnitudes. Finally, the locations and rough shapes of pedestrian attributes can be inferred by performing clustering on a fusion of activation maps of the detection layers, where the fusion weights are estimated as the correlation strengths between each attribute and its relevant mid-level features. Extensive experiments are performed on the two currently largest pedestrian attribute datasets, i.e. the PETA dataset and the RAP dataset. Results show that the proposed method has achieved competitive performance on attribute recognition, compared to other state-of-the-art methods. Moreover, the results of attribute localization are visualized to understand the characteristics of the proposed method. |
Relative Entropy Policy Search | Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. Introduction Policy search is a reinforcement learning approach that attempts to learn improved policies based on information observed in past trials or from observations of another agent’s actions (Bagnell and Schneider 2003). However, policy search, as most reinforcement learning approaches, is usually phrased in an optimal control framework where it directly optimizes the expected return. As there is no notion of the sampled data or a sampling policy in this problem statement, there is a disconnect between finding an optimal policy and staying close to the observed data. In an online setting, many methods can deal with this problem by staying close to the previous policy (e.g., policy gradient methods allow only small incremental policy updates). Hence, approaches that allow stepping further away from the data are problematic, particularly off-policy approaches. Directly optimizing a policy will automatically result in a loss of data, as an improved policy needs to forget experience to avoid the mistakes of the past and to aim at the observed successes. However, choosing an improved policy purely based on its return favors biased solutions that eliminate states in which only bad actions have been tried out. This problem is known as optimization bias (Mannor et al. 2007). Optimization biases may appear in most on- and off-policy reinforcement learning methods due to undersampling (e.g., if we cannot sample all state-action pairs prescribed by a policy, we will overfit the taken actions), model errors, or even the policy update step itself. Policy updates may often result in a loss of essential information due to the policy improvement step. For example, a policy update that eliminates most exploration by taking the best observed action often yields fast but premature convergence to a suboptimal policy. This problem was observed by Kakade (2002) in the context of policy gradients. There, it can be attributed to the fact that the policy parameter update δθ was maximizing its collinearity δθᵀ∇θJ with the policy gradient while only regularized by fixing the Euclidean length of the parameter update, δθᵀδθ = ε, to a stepsize ε. Kakade (2002) concluded that the identity metric of the distance measure was the problem, and that the usage of the Fisher information metric F(θ) in a constraint δθᵀF(θ)δθ = ε leads to a better, more natural gradient. Bagnell and Schneider (2003) clarified that the constraint introduced in (Kakade 2002) can be seen as a Taylor expansion of the loss of information or relative entropy between the path distributions generated by the original and the updated policy. Bagnell and Schneider’s (2003) clarification serves as a key insight to this paper.
In this paper, we propose a new method based on this insight, that allows us to estimate new policies given a data distribution both for off-policy or on-policy reinforcement learning. We start from the optimal control problem statement subject to the constraint that the loss in information is bounded by a maximal step size. Note that the methods proposed in (Bagnell and Schneider 2003; Kakade 2002; Peters and Schaal 2008) used a small fixed step size instead. As we do not work in a parametrized policy gradient framework, we can directly compute a policy update based on all information observed from previous policies or exploratory sampling distributions. All sufficient statistics can be determined by optimizing the dual function that yields the equivalent of a value function of a policy for a data set. We show that the method outperforms the previous policy gradient algorithms (Peters and Schaal 2008) as well as SARSA (Sutton and Barto 1998). Background & Notation We consider the regular reinforcement learning setting (Sutton and Barto 1998; Sutton et al. 2000) of a stationary Markov decision process (MDP) with n states s and m actions a. When an agent is in state s, he draws an action a ∼ π(a|s) from a stochastic policy π. |
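A rough sketch of the problem REPS addresses, in a simplified form that omits the stationarity (feature-matching) constraints of the full MDP treatment (so this is an illustration, not the paper's exact program): given samples from a reference distribution q, find the new state-action distribution p that maximizes expected reward subject to a bound on the information loss,

```latex
\max_{p}\; \sum_{s,a} p(s,a)\, r(s,a)
\qquad \text{s.t.} \qquad
\sum_{s,a} p(s,a) \log \frac{p(s,a)}{q(s,a)} \le \epsilon,
\qquad
\sum_{s,a} p(s,a) = 1 .
```

Solving this with Lagrange multipliers gives the exponential reweighting p(s,a) ∝ q(s,a) exp(r(s,a)/η), where the temperature η ≥ 0 is obtained by minimizing the corresponding dual function — the "dual function" role mentioned in the abstract above.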
A hybrid dynamic time warping-deep neural network architecture for unsupervised acoustic modeling | We report on an architecture for the unsupervised discovery of talker-invariant subword embeddings. It is made out of two components: a dynamic-time warping based spoken term discovery (STD) system and a Siamese deep neural network (DNN). The STD system clusters word-sized repeated fragments in the acoustic streams while the DNN is trained to minimize the distance between time aligned frames of tokens of the same cluster, and maximize the distance between tokens of different clusters. We use additional side information regarding the average duration of phonemic units, as well as talker identity tags. For evaluation we use the datasets and metrics of the Zero Resource Speech Challenge. The model shows improvement over the baseline in subword unit modeling. |
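A minimal sketch of the Siamese training objective described above (the embedding size, margin, and contrastive form are assumptions; the paper's exact loss may differ): pull together frames from the same discovered cluster and push apart frames from different clusters.

```python
# Contrastive (Siamese) loss over pairs of acoustic frames.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(39, 100), nn.Sigmoid(), nn.Linear(100, 39))

def siamese_loss(x1, x2, same, margin=1.0):
    d = torch.norm(embed(x1) - embed(x2), dim=-1)            # distance in embedding space
    pull = same * d ** 2                                      # same-cluster pairs: minimize distance
    push = (1 - same) * torch.clamp(margin - d, min=0) ** 2   # different clusters: enforce a margin
    return (pull + push).mean()

x1, x2 = torch.randn(32, 39), torch.randn(32, 39)             # e.g. 39-dim filterbank frames
same = torch.randint(0, 2, (32,)).float()                     # 1 if the pair shares a cluster
loss = siamese_loss(x1, x2, same)
loss.backward()
```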
VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry | Semantic understanding and localization are fundamental enablers of robot autonomy that have, for the most part, been tackled as disjoint problems. While deep learning has enabled recent breakthroughs across a wide spectrum of scene understanding tasks, its applicability to state estimation tasks has been limited due to the direct formulation that renders it incapable of encoding scene-specific constraints. In this letter, we propose the VLocNet++ architecture, which employs a multitask learning approach to exploit the inter-task relationship between learning semantics, regressing the 6-DoF global pose, and estimating odometry, for the mutual benefit of each of these tasks. Our network overcomes the aforementioned limitation by simultaneously embedding geometric and semantic knowledge of the world into the pose regression network. We propose a novel adaptive weighted fusion layer to aggregate motion-specific temporal information and to fuse semantic features into the localization stream based on region activations. Furthermore, we propose a self-supervised warping technique that uses the relative motion to warp intermediate network representations in the segmentation stream for learning consistent semantics. Finally, we introduce a first-of-a-kind urban outdoor localization dataset with pixel-level semantic labels and multiple loops for training deep networks. Extensive experiments on the challenging Microsoft 7-Scenes benchmark and our DeepLoc dataset demonstrate that our approach exceeds the state of the art, outperforming local feature-based methods while simultaneously performing multiple tasks and exhibiting substantial robustness in challenging scenarios.
An event-based data fusion algorithm for smart cities | The last decade has seen a considerable increase in the number of sensors we interact with on a daily basis. However, it is not always possible for a single sensing system to capture the complete story. While statically mounted infrastructure sensors typically capture the what, where, and how much aspects of a detected event (e.g., what appliance was used and how much energy it consumed), they do not always answer the who question. On the other hand, the advent of wearables has helped answer the what and who aspects (e.g., who used the appliance). Fusing such sensor streams, which observe the same event but different attributes of it, opens up many interesting applications. In this paper, we present a globally optimal data fusion algorithm for such pairs of systems, and show why traditional bipartite algorithms do not work. We evaluate our algorithm against two greedy baselines and show that our algorithm has lower variance in the presence of time skew, false positives, and false negatives.
Symbol Acquisition for Probabilistic High-Level Planning | We introduce a framework that enables an agent to autonomously learn its own symbolic representation of a low-level, continuous environment. Propositional symbols are formalized as names for probability distributions, providing a natural means of dealing with uncertain representations and probabilistic plans. We determine the symbols that are sufficient for computing the probability with which a plan will succeed, and demonstrate the acquisition of a symbolic representation in a computer game domain. |
FraudMiner: A Novel Credit Card Fraud Detection Model Based on Frequent Itemset Mining | This paper proposes an intelligent credit card fraud detection model for detecting fraud from highly imbalanced and anonymous credit card transaction datasets. The class imbalance problem is handled by finding legal as well as fraud transaction patterns for each customer using frequent itemset mining. A matching algorithm is also proposed to determine which pattern (legal or fraud) an incoming transaction from a particular customer is closer to, and a decision is made accordingly. In order to handle the anonymous nature of the data, no preference is given to any of the attributes and each attribute is considered equally when finding the patterns. The performance evaluation of the proposed model is done on the UCSD Data Mining Contest 2009 Dataset (anonymous and imbalanced), and it is found that the proposed model achieves a very high fraud detection rate, balanced classification rate, and Matthews correlation coefficient, together with a much lower false-alarm rate than other state-of-the-art classifiers.
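The matching step can be pictured as comparing an incoming transaction against the customer's mined legal and fraud patterns and choosing whichever pattern it agrees with on more attributes, with every attribute weighted equally. The attribute names, values, and the overlap-counting rule below are illustrative assumptions, not the paper's exact matching algorithm.

```python
def classify_transaction(txn, legal_pattern, fraud_pattern):
    """txn and patterns are dicts of attribute -> value; all attributes count equally."""
    legal_hits = sum(1 for k, v in legal_pattern.items() if txn.get(k) == v)
    fraud_hits = sum(1 for k, v in fraud_pattern.items() if txn.get(k) == v)
    return "fraud" if fraud_hits > legal_hits else "legal"

# Patterns would come from per-customer frequent itemset mining; placeholder values shown here
legal = {"merchant_type": "grocery", "amount_band": "low", "country": "home"}
fraud = {"merchant_type": "electronics", "amount_band": "high", "country": "abroad"}
txn = {"merchant_type": "electronics", "amount_band": "high", "country": "home"}
print(classify_transaction(txn, legal, fraud))   # -> "fraud" (matches the fraud pattern on 2 of 3 attributes)
```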
Data-Efficient Hierarchical Reinforcement Learning | Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher- and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.
Adsorption of CO2 on CoII3[CoIII(CN)6]2 using DRIFTS. | Adsorption of CO2 on the dehydrated Prussian blue analogue Co(II)3[Co(III)(CN)6]2 was studied using diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS). An infrared peak at 2340 cm⁻¹ assigned to adsorbed CO2 was identified and used semi-quantitatively to construct an isotherm at 298 K that followed the Langmuir-Freundlich equation in the low-coverage Henry's law limit with CO2 pressure below about 25 kPa. Temperature dependence at 6.8 kPa CO2 was used to determine ΔH_ad = −23 ± 3 kJ mol⁻¹ in this limit as well. Deviation from the Langmuir-Freundlich model was significant at temperatures above 298 K and attributed primarily to a loss of reliability of the DRIFT spectra at higher CO2 pressures, particularly at higher temperatures, and the accompanying uncertainties in the difference spectra when correcting for the presence of gaseous CO2. Based on this work, the application of DRIFTS to study CO2 adsorption on Prussian blue analogues and other adsorbents is promising, although the range of conditions over which it can be applied appears to be more limited than with other techniques.
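For reference, the Langmuir-Freundlich (Sips) isotherm mentioned above is commonly written as q(p) = q_max (Kp)^n / (1 + (Kp)^n), which reduces to Henry's-law behavior at low coverage. The snippet below shows how such a model might be fitted to band-area-versus-pressure data with SciPy; the arrays are placeholders, not the measured values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_freundlich(p, q_max, K, n):
    """Coverage (or band area) as a function of CO2 pressure p."""
    x = (K * p) ** n
    return q_max * x / (1.0 + x)

# Placeholder data: pressure in kPa vs integrated 2340 cm^-1 band area (substitute real measurements)
p_kPa = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0])
band_area = np.array([0.8, 1.7, 2.9, 3.6, 4.1, 4.4])

(q_max, K, n), _ = curve_fit(langmuir_freundlich, p_kPa, band_area,
                             p0=[5.0, 0.1, 1.0], bounds=(0, np.inf))
print(f"q_max = {q_max:.2f}, K = {K:.3f} kPa^-1, n = {n:.2f}")
```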
Detecting anomalous behavior of PLC using semi-supervised machine learning | Industrial Control Systems (ICS) are used to monitor and control critical infrastructures. Programmable logic controllers (PLCs) are major components of ICS and are used to form automation systems. It is important to protect PLCs from attacks and undesired incidents. However, it is not easy to apply traditional tools and techniques to PLCs for security protection and forensics because of their unique architectures. The semi-supervised machine learning algorithm One-Class Support Vector Machine (OCSVM) has been applied successfully to many anomaly detection problems. This paper proposes a novel methodology to detect anomalous events of a PLC by using OCSVM. The methodology was applied to a simulated traffic light control system to illustrate its effectiveness and accuracy. Our results show that anomalous PLC operations are identified with high accuracy, which can help investigators perform PLC forensics efficiently and effectively.
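A minimal sketch of the semi-supervised setup described above, using scikit-learn's one-class SVM trained only on feature vectors extracted from normal PLC traffic. The feature extraction itself and the nu/gamma values are assumptions; the placeholder arrays stand in for real traffic features.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# X_normal: feature vectors from known-good PLC traffic (placeholder random data here)
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 8))
X_new = rng.normal(size=(20, 8))

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)   # nu bounds the training outlier fraction
detector.fit(X_normal)

pred = detector.predict(X_new)            # +1 = consistent with normal operation, -1 = anomalous
anomalies = np.where(pred == -1)[0]
print("anomalous events at indices:", anomalies)
```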
Iontophoretic beta-adrenergic stimulation of human sweat glands: possible assay for cystic fibrosis transmembrane conductance regulator activity in vivo. | With the advent of numerous candidate drugs for therapy in cystic fibrosis (CF), there is an urgent need for easily interpretable assays for testing their therapeutic value. Defects in the cystic fibrosis transmembrane conductance regulator (CFTR) abolished beta-adrenergic but not cholinergic sweating in CF. Therefore, the beta-adrenergic response of the sweat gland may serve both as an in vivo diagnostic tool for CF and as a quantitative assay for testing the efficacy of new drugs designed to restore CFTR function in CF. Hence, with the objective of defining optimal conditions for stimulating beta-adrenergic sweating, we have investigated the components and pharmacology of sweat secretion using cell cultures and intact sweat glands. We studied the electrical responses and ionic mechanisms involved in beta-adrenergic and cholinergic sweating. We also tested the efficacy of different beta-adrenergic agonists. Our results indicated that in normal subjects the cholinergic secretory response is mediated by activation of a Ca²⁺-dependent Cl⁻ conductance as well as K⁺ conductances. In contrast, the beta-adrenergic secretory response is mediated exclusively by activation of a cAMP-dependent CFTR Cl⁻ conductance without a concurrent activation of a K⁺ conductance. Thus, the electrochemical driving forces generated by beta-adrenergic agonists are significantly smaller compared with those generated by cholinergic agonists, which in turn is reflected in smaller beta-adrenergic secretory responses compared with cholinergic secretory responses. Furthermore, the beta-adrenergic agonists isoprenaline and salbutamol induced sweat secretion only when applied in combination with an adenylyl cyclase activator (forskolin) or a phosphodiesterase inhibitor (3-isobutyl-1-methylxanthine, aminophylline, or theophylline). We surmise that to obtain consistent beta-adrenergic sweat responses, levels of intracellular cAMP above that achievable with a beta-adrenergic agonist alone are essential. Beta-adrenergic secretion can be stimulated in vivo by concurrent iontophoresis of these drugs in normal, but not in CF, subjects.
Compiling Comp Ling: practical weighted dynamic programming and the Dyna language | Weighted deduction with aggregation is a powerful theoretical formalism that encompasses many NLP algorithms. This paper proposes a declarative specification language, Dyna; gives general agenda-based algorithms for computing weights and gradients; briefly discusses Dyna-to-Dyna program transformations; and shows that a first implementation of a Dyna-to-C++ compiler produces code that is efficient enough for real NLP research, though still several times slower than hand-crafted code.
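Dyna's own syntax is not reproduced here, but the flavor of agenda-based weighted deduction with aggregation can be sketched in plain Python: items are path weights, weights combine by addition along an edge, and alternative derivations of the same item are aggregated with min, so the agenda-driven solver below computes shortest paths.

```python
import heapq

def min_cost_paths(edges, source):
    """edges: iterable of (u, v, weight); returns the best derivable weight for each reachable node."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    best = {source: 0.0}
    agenda = [(0.0, source)]                  # priority agenda of provisional item weights
    while agenda:
        cost, u = heapq.heappop(agenda)
        if cost > best.get(u, float("inf")):
            continue                          # stale agenda entry; a better derivation was already popped
        for v, w in adj.get(u, []):
            new = cost + w                    # combine weights along the edge (u, v)
            if new < best.get(v, float("inf")):
                best[v] = new                 # min aggregation over alternative derivations
                heapq.heappush(agenda, (new, v))
    return best

print(min_cost_paths([("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)], "a"))   # {'a': 0.0, 'b': 1.0, 'c': 3.0}
```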
Distributed Spatial Data Clustering as a New Approach for Big Data Analysis | The analysis of big data requires powerful, scalable, and accurate data analytics techniques that traditional data mining and machine learning, taken as a whole, do not provide. Therefore, new data analytics frameworks are needed to deal with big data challenges such as the volume, velocity, veracity, and variety of the data. Distributed data mining constitutes a promising approach for big data sets, as they are usually produced in distributed locations, and processing them at their local sites significantly reduces response times, communication costs, etc. In this paper, we propose to study the performance of a distributed clustering technique called Dynamic Distributed Clustering (DDC). DDC has the ability to remotely generate clusters and then aggregate them using an efficient aggregation algorithm. The technique is developed for spatial datasets. We evaluated DDC using two types of communication (synchronous and asynchronous) and tested it under various load distributions. The experimental results show that the approach has super-linear speed-up, scales up very well, and can take advantage of recent programming models, such as the MapReduce model, as its results are not affected by the type of communication.
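A common way to realize the "cluster locally, then aggregate" idea is to cluster each site's data independently and send only the cluster representatives to a coordinator, which clusters them again. The k-means-based sketch below is a generic stand-in under that assumption, not the DDC aggregation algorithm itself.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_then_global(partitions, k_local=5, k_global=3):
    """partitions: list of 2-D arrays, one per site; returns global cluster centres."""
    local_centres = []
    for X in partitions:
        km = KMeans(n_clusters=k_local, n_init=10, random_state=0).fit(X)
        local_centres.append(km.cluster_centers_)        # only the representatives leave each site
    merged = np.vstack(local_centres)
    return KMeans(n_clusters=k_global, n_init=10, random_state=0).fit(merged).cluster_centers_

rng = np.random.default_rng(1)
sites = [rng.normal(loc=c, scale=0.3, size=(200, 2)) for c in ([0, 0], [5, 5], [0, 5])]
print(local_then_global(sites))
```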
Conserving Tropical Tree Diversity and Forest Structure: The Value of Small Rainforest Patches in Moderately-Managed Landscapes | Rainforests are undergoing severe deforestation and fragmentation worldwide. A huge number of small forest patches are being created, but their value in conserving biodiversity and forest structure is still controversial. Here, we demonstrate that in a species-rich and moderately-managed Mexican tropical landscape, small rainforest patches (<100 ha) can be highly valuable for the conservation of tree diversity and forest structure. These patches showed diverse communities of native plants, including endangered species, and a new record for the country. Although the number of logged trees increased in smaller patches, patch size was a poor indicator of basal area, stem density, number of species, genera and families, and community evenness. Cumulative species-area curves indicated that all patches had a similar contribution to the regional species diversity. This idea also was supported by the fact that patches strongly differed in floristic composition (high β-diversity), independently of patch size. Thus, in agreement with the land-sharing approach, our findings support that small forest patches in moderately-managed landscapes should be included in conservation initiatives to maintain landscape heterogeneity, species diversity, and ecosystem services.
Fuzzy pre-compensated fuzzy self-tuning fuzzy PID controller of 3 DOF planar robot manipulators | Control of an industrial robot involves nonlinearities, uncertainties, and external perturbations that should be considered in the design of control laws. The proportional-integral-derivative (PID)-type fuzzy controller is a well-known conventional motion control strategy for manipulators which ensures global asymptotic stability. To enhance the performance of the PID-type fuzzy controller for the control of rigid planar robot manipulators, this paper proposes a fuzzy pre-compensation of a fuzzy self-tuning fuzzy PID controller. The proposed control scheme consists of a fuzzy logic-based pre-compensator followed by a fuzzy self-tuning fuzzy PID controller. In the fuzzy self-tuning fuzzy PID controller, a supervisory hierarchical fuzzy controller (SHFC) is used for tuning the input scaling factors of the fuzzy PID controller according to the actual tracking position error and the actual tracking velocity error. Numerical simulations using the dynamic model of a three-DOF planar rigid robot manipulator with uncertainties show the effectiveness of the approach in set-point tracking problems. Our results show that the proposed controller has superior performance compared to a conventional fuzzy PID controller.
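To make the control structure concrete, here is a heavily simplified single-joint sketch of a pre-compensated PID loop: a toy pre-compensator adjusts the reference before the PID law is applied to a placeholder double-integrator joint. The fuzzy membership functions, rule bases, and the supervisory tuning of scaling factors are omitted, so this is an assumption-laden illustration rather than the controller proposed in the paper.

```python
def precompensate(reference, error):
    """Toy stand-in for the fuzzy pre-compensator: nudge the reference harder when the error is large."""
    gain = 0.2 if abs(error) > 0.5 else 0.05
    return reference + gain * error

def pid_step(error, state, kp=8.0, ki=2.0, kd=1.5, dt=0.01):
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

# One-joint simulation with placeholder double-integrator dynamics (q_ddot = u)
q, q_dot, target, dt = 0.0, 0.0, 1.0, 0.01
state = {"integral": 0.0, "prev_error": 0.0}
for _ in range(2000):
    ref = precompensate(target, target - q)
    u = pid_step(ref - q, state, dt=dt)
    q_dot += u * dt
    q += q_dot * dt
print(f"final joint position: {q:.3f}")
```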
INFORMAL LANGUAGE LEARNING SETTING: TECHNOLOGY OR SOCIAL INTERACTION? | Based on the informal language learning theory, language learning can occur outside the classroom setting unconsciously and incidentally through interaction with the native speakers or exposure to authentic language input through technology. However, an EFL context lacks the social interaction which naturally occurs in an ESL context. To explore which source of language input would have a greater impact, this study investigated the effect of exposure on speaking proficiency. Two types of exposure were provided: audiovisual mass media as a source of language input in an EFL context and social interaction as a source of language input in an ESL context. A sample speaking test was administered to one hundred language learners in an EFL context (Iran) and another one hundred language learners in an ESL context (Malaysia). Then, thirty participants from each context who scored one standard deviation above and below the mean were selected as homogenous language learners. During the experiment, EFL participants had exposure to audiovisual mass media while the ESL participants were exposed to social interaction as a source of language input. At the end, both groups took another sample speaking test. The post-test showed that the EFL group performed better which was indicative of the fact that exposure to technology promotes speaking proficiency. |
Support vector machine-based classification of Alzheimer’s disease from whole-brain anatomical MRI | We present and evaluate a new automated method based on support vector machine (SVM) classification of whole-brain anatomical magnetic resonance imaging to discriminate between patients with Alzheimer’s disease (AD) and elderly control subjects. We studied 16 patients with AD [mean age ± standard deviation (SD) = 74.1 ± 5.2 years, Mini-Mental State Examination (MMSE) = 23.1 ± 2.9] and 22 elderly controls (72.3 ± 5.0 years, MMSE = 28.5 ± 1.3). Three-dimensional T1-weighted MR images of each subject were automatically parcellated into regions of interest (ROIs). Based upon the characteristics of gray matter extracted from each ROI, we used an SVM algorithm to classify the subjects and statistical procedures based on bootstrap resampling to ensure the robustness of the results. We obtained 94.5% mean correct classification for AD and control subjects (mean specificity, 96.6%; mean sensitivity, 91.5%). Our method has the potential to distinguish patients with AD from elderly controls and therefore may help in the early diagnosis of AD.
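A compact sketch of the classification-with-bootstrap evaluation described: ROI-wise gray-matter features per subject feed a linear SVM, and accuracy is estimated on out-of-bootstrap subjects over many resamples. Feature extraction from the parcellated images is out of scope here, and the placeholder arrays merely stand in for the real ROI features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(38, 90))       # placeholder: 38 subjects x 90 ROI gray-matter features
y = np.array([1] * 16 + [0] * 22)   # 16 AD patients, 22 elderly controls

accuracies = []
for b in range(200):                                # bootstrap resampling for robust estimates
    idx = resample(np.arange(len(y)), random_state=b)
    oob = np.setdiff1d(np.arange(len(y)), idx)      # out-of-bootstrap subjects form the test set
    if len(np.unique(y[idx])) < 2 or len(oob) == 0:
        continue
    clf = SVC(kernel="linear", C=1.0).fit(X[idx], y[idx])
    accuracies.append(clf.score(X[oob], y[oob]))
print(f"mean bootstrap accuracy: {np.mean(accuracies):.3f}")
```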
A Study on the Participation Factors of the Tourism e-Marketplace | An e-marketplace is a kind of B2B e-business system that supports business transactions between companies. If e-marketplaces could be revitalized, we might expect not only the development of related industries but also a decrease in transaction costs between companies. Likewise, it is necessary to introduce and revitalize e-marketplaces in the tourism industry. Participants in a tourism e-marketplace might be tourism-related companies such as travel agencies, lodging companies, and shipping companies. Tourists also want to search for a variety of tourism products and content. So a tourism e-marketplace might have to include characteristics of B2C e-business systems as well as B2B e-business systems. Factors were found by analyzing existing papers related to B2B e-marketplaces and B2C web sites. The purpose of this paper is to enumerate and assess significant factors that might influence participation in the tourism e-marketplace, through a statistical survey.
Topics and Label Propagation: Best of Both Worlds for Weakly Supervised Text Classification | We propose a Label Propagation based algorithm for weakly supervised text classification. We construct a graph where each document is represented by a node and edge weights represent similarities among the documents. Additionally, we discover underlying topics using Latent Dirichlet Allocation (LDA) and enrich the document graph by including the topics in the form of additional nodes. The edge weights between a topic and a text document represent the level of “affinity” between them. Our approach does not require document level labelling; instead, it expects manual labels only for topic nodes. This significantly minimizes the level of supervision needed, as only a few topics are observed to be enough for achieving sufficiently high accuracy. The Label Propagation Algorithm is employed on this enriched graph to propagate labels among the nodes. Our approach combines the advantages of Label Propagation (through document-document similarities) and Topic Modelling (for minimal but smart supervision). We demonstrate the effectiveness of our approach on various datasets and compare with state-of-the-art weakly supervised text classification approaches.
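One standard way to propagate labels over the enriched document-topic graph is the normalized-diffusion update F ← αSF + (1 − α)Y, where S is the symmetrically normalized affinity matrix and only the topic nodes carry seed labels in Y. The weights, α, and the tiny example graph below are assumptions chosen for illustration, not the paper's exact construction.

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.85, iters=100):
    """W: symmetric affinity matrix over doc+topic nodes; Y: one-hot seed labels (zero rows if unlabeled)."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))           # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # diffuse, while pulling seed nodes back to their labels
    return F.argmax(axis=1)

# 4 documents + 2 topic nodes; only the topic nodes (rows 4 and 5) are manually labelled
W = np.array([[0, 1, 0, 0, 2, 0],
              [1, 0, 0, 0, 2, 0],
              [0, 0, 0, 1, 0, 2],
              [0, 0, 1, 0, 0, 2],
              [2, 2, 0, 0, 0, 0],
              [0, 0, 2, 2, 0, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[4, 0] = 1.0   # topic node 4 seeded with class 0
Y[5, 1] = 1.0   # topic node 5 seeded with class 1
print(propagate_labels(W, Y))   # expected labels: docs 0,1 -> 0; docs 2,3 -> 1
```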
Outcomes of traumatic brain injury in Hong Kong: Validation with the TRISS, CRASH, and IMPACT models | We aimed to test prognostic models (the Trauma Injury Severity Score, International Mission for Prognosis and Analysis of Clinical Trials in Traumatic Brain Injury, and Corticosteroid Randomisation After Significant Head Injury models) for 14-day mortality, 6-month mortality, and 6-month unfavorable outcome in a cohort of trauma patients with traumatic brain injury (TBI) in Hong Kong. We analyzed 661 patients with significant TBI treated in a regional trauma centre in Hong Kong over a 3-year period. The discriminatory power of the models was assessed as the area under the receiver operating characteristic curve. One-sample t-tests were used to compare actual outcomes in the cohort against predicted outcomes. All three prognostic models were shown to have good discriminatory power and no significant systematic over-estimation or under-estimation. In conclusion, all three predictive models are applicable to eligible TBI patients in Hong Kong. These predictive models can be utilized to audit TBI management outcomes for trauma service development in the future.
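Discriminatory power of this kind of prognostic model is typically quantified as the area under the ROC curve of predicted risk against observed outcome; a minimal sketch follows, with placeholder arrays standing in for the cohort's model predictions and 6-month outcomes.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholders: predicted probability of 6-month mortality and the observed outcome (1 = died)
predicted_risk = np.array([0.05, 0.10, 0.40, 0.35, 0.80, 0.90, 0.20, 0.75])
observed = np.array([0, 0, 0, 1, 1, 1, 0, 1])

auc = roc_auc_score(observed, predicted_risk)
print(f"AUC = {auc:.2f}")   # values close to 1.0 indicate good discriminatory power
```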
Metaphor Identification in Large Texts Corpora | Identifying metaphorical language use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of the scope of metaphorical phrases and annotated corpora size. The algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and a 27% average improvement in prediction over the base rate of metaphors in the corpus.
Graph-Level Operations: A High-Level Interface for Graph Visualization Technique Specification | Social networks collected by historians or sociologists typically have a large number of actors and edge attributes. Applying social network analysis (SNA) algorithms to these networks produces additional attributes such as degree, centrality, and clustering coefficients. Understanding the effects of this plethora of attributes is one of the main challenges of multivariate SNA. We present the design of GraphDice, a multivariate network visualization system for exploring the attribute space of edges and actors. GraphDice builds upon the ScatterDice system for its main multidimensional navigation paradigm, and extends it with novel mechanisms to support network exploration in general and SNA tasks in particular. Novel mechanisms include visualization of attributes of interval type and projection of numerical edge attributes to node attributes. We show how these extensions to the original ScatterDice system make it possible to support complex visual analysis tasks on networks with hundreds of actors and up to 30 attributes, while providing a simple and consistent interface for interacting with network data.