title: string (8-300 characters)
abstract: string (0-10k characters)
Translocation (9;22) is associated with extremely poor prognosis in intensively treated children with acute lymphoblastic leukemia.
The prognostic implications of t(9;22)(q34;q11) were assessed at a median follow-up of 3.5 years in 434 children receiving intensive treatment for acute lymphoblastic leukemia (ALL). Four-year event-free and overall survivals were 81% and 88%, respectively, in 419 children lacking t(9;22), but were 0% and 20%, respectively, in 15 children with t(9;22) (P < .001). Poor outcome for children with t(9;22)-positive ALL was particularly notable because we have reported improved survival in other historically poor-prognosis ALL cytogenetic categories when treated with similarly intensive therapy. We recommend that very intensive treatment approaches, including bone marrow transplantation in first remission, be considered for all children with t(9;22)-positive ALL.
House Price Modeling over Heterogeneous Regions with Hierarchical Spatial Functional Analysis
Online real-estate information systems such as Zillow and Trulia have gained increasing popularity in recent years. One important feature offered by these systems is the online home price estimate through automated data-intensive computation based on housing information and comparative market value analysis. State-of-the-art approaches model house prices as a combination of a latent land desirability surface and a regression from house features. However, by using uniformly damping kernels, they are unable to handle irregularly shaped regions or capture land value discontinuities within the same region due to the existence of implicit sub-communities, which are common in real-world scenarios. In this paper, we explore the novel application of recent advances in spatial functional analysis to house price modeling and propose the Hierarchical Spatial Functional Model (HSFM), which decomposes house values into land desirability at both the global scale and hidden local scales as well as the feature regression component. We propose statistical learning algorithms based on finite-element spatial functional analysis and spatial constrained clustering to train our model. Extensive evaluations based on housing data in a major Canadian city show that our proposed approach can reduce the mean relative house price estimation error down to 6.60%.
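As a hedged illustration of the decomposition described above (a latent land-desirability surface plus a regression on house features), the sketch below backfits the two components. A k-nearest-neighbour spatial smoother stands in for the finite-element functional smoother of the actual HSFM, and every function name and parameter here is an illustrative assumption, not the paper's implementation.
```python
# Minimal sketch of the "land desirability + feature regression" decomposition.
# A k-NN spatial smoother stands in for the finite-element functional smoother of
# the actual HSFM; all names and parameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

def fit_price_model(coords, features, log_prices, n_iter=10, k=25):
    """Backfit log(price) = land_surface(lat, lon) + beta' * features."""
    reg = LinearRegression()
    smoother = KNeighborsRegressor(n_neighbors=k, weights="distance")
    land = np.zeros_like(log_prices, dtype=float)
    for _ in range(n_iter):
        # 1) regress the residual (price minus land value) on house features
        reg.fit(features, log_prices - land)
        feat_part = reg.predict(features)
        # 2) smooth the remaining residual over space to update the land surface
        smoother.fit(coords, log_prices - feat_part)
        land = smoother.predict(coords)
    return reg, smoother

def mean_relative_error(y_true, y_pred):
    """Evaluation metric of the kind reported above (mean relative price error)."""
    return np.mean(np.abs(y_pred - y_true) / y_true)
```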
Benchmarking face tracking
Face tracking is an active area of computer vision research and an important building block for many applications. However, unlike face detection, it has no common benchmark data set for evaluating a tracker's performance, which makes it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well-defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.
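For intuition, a generic IoU-based per-frame evaluation of the kind tracking benchmarks commonly use is sketched below; it is not the specific protocol proposed in the challenge, and the (x, y, width, height) box format is an assumption.
```python
# Illustrative per-frame evaluation of a face tracker against ground-truth boxes.
# Generic IoU-based protocol sketch, not the challenge's actual protocol;
# boxes are assumed to be (x, y, width, height) tuples.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def success_rate(predicted, ground_truth, threshold=0.5):
    """Fraction of frames where the tracked box overlaps ground truth by >= threshold."""
    hits = sum(iou(p, g) >= threshold for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)
```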
Stiffness and hysteresis properties of some prosthetic feet.
A prosthetic foot is an important element of a prosthesis, although it is not always fully recognized that the properties of the foot, along with the prosthetic knee joint and the socket, are in part responsible for the stability and metabolic energy cost during walking. The stiffness and the hysteresis, which are the topics of this paper, are not properly prescribed, but could be adapted to improve prosthetic walking performance. The shape is strongly related to the cosmetic appearance and so cannot be altered to effect these improvements. Because detailed comparable data on foot stiffness and hysteresis, which are necessary to quantify the differences between different types of feet, are absent in the literature, these properties were measured by the authors in a laboratory setup for nine different prosthetic feet, bare and with two different shoes. One test cycle consisted of measurements of load-deformation curves in 66 positions, representing the range from heel strike to toe-off. The hysteresis is defined as the energy loss expressed as a fraction of the total deformation energy. Without shoes, significant differences in hysteresis between the feet exist, while with sport shoes the differences in hysteresis between the feet largely vanish. Applying a leather shoe leads to an increase of hysteresis loss for all tested feet. The stiffness turned out to be non-constant, so mean stiffness is used.
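A small sketch of the hysteresis measure defined above (energy loss as a fraction of the total deformation energy) is given below, computed from one measured load-deformation cycle by trapezoidal integration; the function names and input layout are illustrative assumptions.
```python
# Sketch of the hysteresis measure described above: energy lost over one
# load-deformation cycle divided by the total deformation energy.
# The arrays hold one loading branch and one unloading branch of the measured curve.
import numpy as np

def hysteresis_fraction(defl_load, force_load, defl_unload, force_unload):
    """Energy lost in the cycle as a fraction of the energy stored during loading."""
    e_loading = np.trapz(force_load, defl_load)        # work done on the foot
    e_unloading = np.trapz(force_unload, defl_unload)  # work returned by the foot
    return (e_loading - abs(e_unloading)) / e_loading

def mean_stiffness(defl_load, force_load):
    """Slope of a least-squares line through the loading curve (stiffness is non-constant)."""
    return np.polyfit(defl_load, force_load, 1)[0]
```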
Large-Signal Analysis of MOS Varactors in CMOS −Gm LC VCOs
MOS varactors are used extensively as tunable elements in the tank circuits of RF voltage-controlled oscillators (VCOs) based on submicrometer CMOS technologies. MOS varactor topologies include the conventional D=S=B connected, inversion-mode (I-MOS), and accumulation-mode (A-MOS) structures. When incorporated into the VCO tank circuit, the large-signal swing of the VCO output oscillation modulates the varactor capacitance in time, resulting in a VCO tuning curve that deviates from the dc tuning curve of the particular varactor structure. This paper presents a detailed analysis of this large-signal effect. Simulated results are compared to measurements for an example 2.5-GHz complementary LC VCO using I-MOS varactors implemented in 0.35-μm CMOS technology.
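As a first-order illustration of why the large swing makes the tuning curve deviate from the dc C-V characteristic, the sketch below approximates the effective tank capacitance by time-averaging a small-signal C(V) curve over one oscillation period. This is a simplified picture, not the paper's detailed analysis, and the tanh-shaped C(V) with its parameter values is purely illustrative.
```python
# First-order illustration (not the paper's full analysis) of large-signal
# capacitance modulation in an I-MOS varactor: the effective capacitance seen by
# the tank is approximated by the time average of the small-signal C(V) curve over
# one period of the swing. The tanh-shaped C(V) and its values are illustrative.
import numpy as np

def c_small_signal(v, c_min=0.5e-12, c_max=1.5e-12, v_mid=0.0, slope=0.3):
    return c_min + (c_max - c_min) * 0.5 * (1 + np.tanh((v - v_mid) / slope))

def c_effective(v_tune, amplitude, n=1000):
    """Time-averaged capacitance for a sinusoidal gate swing around v_tune."""
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.mean(c_small_signal(v_tune + amplitude * np.cos(theta)))

# dc tuning curve vs. large-signal tuning curve for a 1 V swing
v = np.linspace(-1.5, 1.5, 61)
dc_curve = c_small_signal(v)
ls_curve = np.array([c_effective(vt, amplitude=1.0) for vt in v])
```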
Investigation of Speech Separation as a Front-End for Noise Robust Speech Recognition
Recently, supervised classification has been shown to work well for the task of speech separation. We perform an in-depth evaluation of such techniques as a front-end for noise-robust automatic speech recognition (ASR). The proposed separation front-end consists of two stages. The first stage removes additive noise via time-frequency masking. The second stage addresses channel mismatch and the distortions introduced by the first stage; a non-linear function is learned that maps the masked spectral features to their clean counterpart. Results show that the proposed front-end substantially improves ASR performance when the acoustic models are trained in clean conditions. We also propose a diagonal feature discriminant linear regression (dFDLR) adaptation that can be performed on a per-utterance basis for ASR systems employing deep neural networks and HMM. Results show that dFDLR consistently improves performance in all test conditions. Surprisingly, the best average results are obtained when dFDLR is applied to models trained using noisy log-Mel spectral features from the multi-condition training set. With no channel mismatch, the best results are obtained when the proposed speech separation front-end is used along with multi-condition training using log-Mel features followed by dFDLR adaptation. Both these results are among the best on the Aurora-4 dataset.
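The first stage described above (additive-noise removal via time-frequency masking) can be illustrated with the short sketch below. The mask here is an oracle-style ratio mask computed from hypothetical clean and noise spectrograms purely to show the operation; in the actual system the mask is estimated by a supervised classifier, and the second stage (learned spectral mapping) is not shown.
```python
# Sketch of the first front-end stage: removing additive noise by time-frequency
# masking. The ratio mask below is oracle-style and for illustration only; the
# actual system estimates the mask with supervised classification.
import numpy as np

def ratio_mask(clean_mag, noise_mag, eps=1e-8):
    """Per time-frequency-unit soft mask in [0, 1]."""
    return clean_mag**2 / (clean_mag**2 + noise_mag**2 + eps)

def apply_mask(noisy_stft, mask, floor=0.0):
    """Attenuate noise-dominated T-F units of the noisy STFT."""
    return noisy_stft * np.maximum(mask, floor)

def log_compress(mag, eps=1e-8):
    """Log compression of masked magnitudes, in the spirit of the log-Mel features used for ASR."""
    return np.log(mag + eps)
```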
An Intercountry Comparison of Textbooks Used for Introductory Psychology Courses
The preliminary results seem to indicate that the content areas emphasized by different countries in their textbooks or lectures reflect their interests and development in the field of psychology. It seems that Japan, South Africa, Germany, and to some extent the United States, place high emphasis on areas of experimental psychology such as learning, perception, motivation, emotion, and thinking. The United States is probably the leader in teaching the abnormal, guidance and therapy, and personality areas in the introductory course. France stands unique with her philosophical orientation, while Russia stands out with her emphasis on conditioning. Germany, of course, places high emphasis on theory; however, this emphasis is not as high as it was at one time.
Opportunities and Limits of Remote Timing Attacks
Many algorithms can take a variable amount of time to complete depending on the data being processed. These timing differences can sometimes disclose confidential information. Indeed, researchers have been able to reconstruct an RSA private key purely by querying an SSL Web server and timing the results. Our work analyzes the limits of attacks based on accurately measuring network response times and jitter over a local network and across the Internet. We present the design of filters to significantly reduce the effects of jitter, allowing an attacker to measure events with 15-100μs accuracy across the Internet, and as good as 100ns over a local network. Notably, security-related algorithms on Web servers and other network servers need to be carefully engineered to avoid timing channel leaks at the accuracy demonstrated in this article.
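A generic sketch of the measurement idea is shown below: because network jitter is mostly additive and positive, repeating a probe many times and keeping a low percentile of the observed round-trip times suppresses much of it. This is an illustration of the general approach, not the specific filter design presented in the article, and the repetition count and percentile are assumptions.
```python
# Illustrative jitter-reduction filter in the spirit of the measurement approach
# above. Generic sketch, not the article's specific filter; repetitions and the
# percentile are illustrative choices.
import time

def timed_probe(request_fn, repetitions=200, percentile=5):
    """Return a low-percentile estimate of the response time of request_fn()."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        request_fn()
        samples.append(time.perf_counter() - start)
    samples.sort()
    index = max(0, int(len(samples) * percentile / 100) - 1)
    return samples[index]

def timing_difference(request_a, request_b, **kwargs):
    """Estimated processing-time difference between two inputs to the same server."""
    return timed_probe(request_a, **kwargs) - timed_probe(request_b, **kwargs)
```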
Topology optimization method for microstrips using boundary condition representation and adjoint analysis
This paper presents a structural optimization method for microstrip components based on the topology optimization method. Topology optimization optimizes structures via a characteristic function that indicates the presence of material at any point. In the proposed method, microstrips are modeled using an appropriate boundary condition of the numerical method employed. By controlling the parameter of this boundary condition locally, it is possible to represent any configuration in the given design domain. The optimization utilizes sensitivity information obtained through adjoint analysis. Thanks to the efficiency of the adjoint sensitivity method, the proposed method finds a design solution within reasonable computational cost compared with a variety of meta-heuristic methods. Numerical and experimental results are provided to illustrate our design approach.
Enhancing the Network Embedding Quality with Structural Similarity
Neural network techniques are widely used in network embedding, boosting the results of node classification, link prediction, visualization and other tasks in terms of both efficiency and quality. All state-of-the-art algorithms focus on neighborhood information and try to make full use of it. However, it is hard to recognize core-periphery structures based on neighborhood alone. In this paper, we first discuss the influence of random-walk based sampling strategies on the embedding results. Theoretical and experimental evidence shows that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embedding using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraph similarity to learn node embeddings. To the best of our knowledge, this is the first framework that combines these two aspects, merging two important areas in graph mining and machine learning. Moreover, we investigate which local-subgraph features matter most for the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.
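To make the idea of combining neighborhood embeddings with local-subgraph structure concrete, a minimal sketch is given below. Triangle counts, degree and clustering coefficient stand in for the full graphlet statistics used by SNS, and the neighborhood embedding matrix is assumed to come from any random-walk based method; this is not the SNS algorithm itself.
```python
# Minimal sketch of augmenting neighborhood-based node embeddings with
# local-subgraph structural features, in the spirit of the approach above.
# Degree, triangle counts and clustering stand in for full graphlet statistics.
import numpy as np
import networkx as nx

def structural_features(graph):
    """Per-node structural descriptors, standardized to zero mean / unit variance."""
    nodes = list(graph.nodes())
    deg = np.array([graph.degree(n) for n in nodes], dtype=float)
    tri = np.array([nx.triangles(graph, n) for n in nodes], dtype=float)
    clu = np.array([nx.clustering(graph, n) for n in nodes], dtype=float)
    feats = np.stack([deg, tri, clu], axis=1)
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)

def combine(neighborhood_embeddings, graph, weight=0.5):
    """Concatenate neighborhood embeddings with weighted structural features."""
    return np.concatenate(
        [neighborhood_embeddings, weight * structural_features(graph)], axis=1)
```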
Transparent Offloading and Mapping (TOM): Enabling Programmer-Transparent Near-Data Processing in GPU Systems
Main memory bandwidth is a critical bottleneck for modern GPU systems due to limited off-chip pin bandwidth. 3D-stacked memory architectures provide a promising opportunity to significantly alleviate this bottleneck by directly connecting a logic layer to the DRAM layers with high bandwidth connections. Recent work has shown promising potential performance benefits from an architecture that connects multiple such 3D-stacked memories and offloads bandwidth-intensive computations to a GPU in each of the logic layers. An unsolved key challenge in such a system is how to enable computation offloading and data mapping to multiple 3D-stacked memories without burdening the programmer such that any application can transparently benefit from near-data processing capabilities in the logic layer. Our paper develops two new mechanisms to address this key challenge. First, a compiler-based technique that automatically identifies code to offload to a logic-layer GPU based on a simple cost-benefit analysis. Second, a software/hardware cooperative mechanism that predicts which memory pages will be accessed by offloaded code, and places those pages in the memory stack closest to the offloaded code, to minimize off-chip bandwidth consumption. We call the combination of these two programmer-transparent mechanisms TOM: Transparent Offloading and Mapping. Our extensive evaluations across a variety of modern memory-intensive GPU workloads show that, without requiring any program modification, TOM significantly improves performance (by 30% on average, and up to 76%) compared to a baseline GPU system that cannot offload computation to 3D-stacked memories.
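In the spirit of the compiler-based cost-benefit analysis mentioned above, the sketch below marks a candidate block for offloading when the off-chip traffic it would save exceeds the traffic needed to ship its inputs and results to the 3D-stacked logic layer. All fields and the threshold are hypothetical simplifications of the mechanism described in the abstract.
```python
# Illustrative cost-benefit test in the spirit of TOM's offload decision.
# All fields and thresholds are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class CandidateBlock:
    bytes_loaded: int      # memory traffic the block generates from the DRAM stacks
    bytes_stored: int
    live_in_bytes: int     # values that must be shipped to the memory-stack GPU
    live_out_bytes: int    # results that must be shipped back

def should_offload(block: CandidateBlock, benefit_margin: float = 1.5) -> bool:
    """Offload only when the saved off-chip traffic clearly outweighs the offload cost."""
    traffic_saved = block.bytes_loaded + block.bytes_stored
    offload_cost = block.live_in_bytes + block.live_out_bytes
    return traffic_saved > benefit_margin * offload_cost
```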
Improving the quality of life of multiple sclerosis patients through coping strategies in routine medical practice
Multiple sclerosis (MS) has a major impact on quality of life (QoL). Coping strategies which may influence QoL have not been identified. Furthermore, there is no coping scale designed to measure coping in MS patients that is concise enough for routine medical practice. We used 46 items and 7 coping dimensions; we successively reduced the number of dimensions through confirmatory factor analysis (CFA) and Rasch modelling. The resulting scale was submitted to psychometric validation via an independent cross-sectional analysis. After administration to 331 MS patients, we eliminated 10 of the 46 initial items; a CFA iterative algorithm identified a positive coping (PC) group and a negative coping (NC) group; an iterative reduction algorithm led to a final 10-item questionnaire, which was tested in an independent, new cross-sectional sample of 457 patients. Psychometric tests, including the Rasch model and CFA, successfully validated the scale, confirming the two dimensions and the absence of differential item functioning. The correlation between coping and QoL increased to 0.59 and 0.62 for NC and PC, respectively, compared with 0.33 found with existing scales. Our findings justify a one-dimensional overall coping scale (PC + NC). The effect of coping on QoL can be evaluated simply by adding together a positive and a negative coping strategy score, for which we developed a short 10-item scale, which can be considered an effective means of measuring the impact of coping on QoL and is ideal for routine medical practice.
The effect of midazolam on transient insomnia
We have assessed the effect of midazolam on sleep in a model of transient insomnia in healthy adults using polysomnographic recordings. The subjects were randomly assigned to one of three treatment groups (placebo or midazolam 7.5 or 15 mg) and spent a single night in the sleep laboratory. Midazolam or placebo was given double-blind immediately before the lights were turned off. Midazolam 15 mg was effective in inducing sleep, while both 7.5 mg and 15 mg produced improvement in the maintenance of sleep. Subjectively, sleep latency and the number of awakenings were reduced dose-dependently. Midazolam did not impair psychomotor performance on the morning after administration.
Mechanical Design of a Rover for Mobile Manipulation in Uneven Terrain in the Context of the SpaceBot Cup
This paper describes the development and test of a mobile manipulation platform intended for a terrestrial robotic competition. While current space missions are planned to minimize complex manipulation tasks, plans for future space missions go beyond these restrictions. Infrastructure deployment, human-robot cooperative missions and complex sample collection require increasingly complex manipulation capabilities. To meet this need, the SpaceBot Cup consists of several complex manipulation tasks in unstructured terrain. These requirements were the main design driver for the presented system. The presented rover consists of a three-bogie chassis designed to increase the maximum step size it can traverse, flexible rubber wheels to increase the maximum climbing inclination on loose surfaces, and a small six-degree-of-freedom manipulator to handle objects within the competition. The iterative simulation and experiment process used to develop the flexible rubber wheels is presented. Furthermore, experiments are presented which allow a performance comparison between flexible and rigid wheels on loose surfaces.
Deformable Classifiers
Geometric variations of objects, which do not modify the object class, pose a major challenge for object recognition. These variations could be rigid as well as non-rigid transformations. In this paper, we design a framework for training deformable classifiers, where latent transformation variables are introduced, and a transformation of the object image to a reference instantiation is computed in terms of the classifier output, separately for each class. The classifier outputs for each class, after transformation, are compared to yield the final decision. As a by-product of the classification this yields a transformation of the input object to a reference pose, which can be used for downstream tasks such as the computation of object support. We apply a two-step training mechanism for our framework, which alternates between optimizing over the latent transformation variables and the classifier parameters to minimize the loss function. We show that multilayer perceptrons, also known as deep networks, are well suited for this approach and achieve state of the art results on the rotated MNIST and the Google Earth dataset, and produce competitive results on MNIST and CIFAR-10 when training on smaller subsets of training data.
NATURAL CAPACITOR VOLTAGE BALANCE IN MULTILEVEL FLYING CAPACITOR CONVERTERS: A REVIEW OF RESEARCH ACHIEVEMENTS
The flying-capacitor (FC) topology is one of the more well-established multilevel conversion concepts, typically applied as an inverter. One of the biggest advantages of the FC converter is its ability to naturally balance the capacitor voltages. When natural balancing occurs, neither measurements nor additional control are needed to maintain the required capacitor voltage sharing. However, in order to achieve natural voltage balancing, suitable conditions must be met regarding the topology, the number of levels, the modulation strategy, and the impedance of the output circuitry. Nevertheless, this method is effectively applied in various classes of converter such as inverters, multicell DC-DC, switch-mode DC-DC and AC-AC converters, as well as rectifiers. The next important issue related to the natural balancing process is its dynamics. Furthermore, in order to reinforce the balancing mechanism, an auxiliary resonant balancing circuit is utilized in the converter, which can also be critical in AC-AC converters or switch-mode DC-DC converters. This paper also discusses the choice of modulation strategy for the FC converter, since the natural balancing process is well established for phase-shifted PWM whilst other types of modulation can be more favorable for power quality.
Semantic search engine: A survey
Semantic search engines have some advantages over traditional web search engines from the user's point of view. Users need answers to their queries quickly, and in this scenario semantic search engines are helpful because they deal with the actual meaning of the queries. The tremendous growth in the volume of data and information leads traditional search engines to return answers that are syntactically correct but very large in number. This motivates semantic search engines, which return selected results that match what the user is actually searching for. In this paper, a survey of semantic search engines (SSE) is presented to reveal their promising features, together with descriptions of some of the best semantic search engines.
Continuous noninvasive glucose monitoring technology based on "occlusion spectroscopy".
BACKGROUND A truly noninvasive glucose-sensing device could revolutionize diabetes treatment by leading to improved compliance with recommended glucose levels, thus reducing the long-term complications and cost of diabetes. Herein, we present the technology and evaluate the efficacy of a truly noninvasive device for continuous blood glucose monitoring, the NBM (OrSense Ltd.). METHODS In vitro analysis was used to validate the technology and algorithms. A clinical study was performed to quantify the in vivo performance of the NBM device. A total of 23 patients with type 1 (n = 12) and type 2 (n = 11) diabetes were enrolled in the clinical study and participated in 111 sessions. Accuracy was assessed by comparing NBM data with paired self-monitoring of blood glucose meter readings. RESULTS In vitro experiments showed a strong correlation between calculated and actual glucose concentrations. The clinical trial produced a total of 1690 paired glucose values (NBM vs reference). In the paired data set, the reference glucose range was 40-496 mg/dl. No systematic bias was found at any of the glucose levels examined (70, 100, 150, and 200 mg/dl). The mean relative absolute difference was 17.2%, and a Clarke error grid analysis showed that 95.5% of the measurements fall within the clinically acceptable A&B regions (zone A, 69.7%; and zone B, 25.7%). CONCLUSIONS This study indicates the potential use of OrSense's NBM device as a noninvasive sensor for continuous blood glucose evaluation. The device was safe and well tolerated.
Synthetic Sampling for Multi-Class Malignancy Prediction
We explore several oversampling techniques for an imbalanced multi-label classification problem, a setting often encountered when developing models for Computer-Aided Diagnosis (CADx) systems. While most CADx systems aim to optimize classifiers for overall accuracy without considering the relative distribution of each class, we look into using synthetic sampling to increase per-class performance when predicting the degree of malignancy. Using low-level image features and a random forest classifier, we show that using synthetic oversampling techniques increases the sensitivity of the minority classes by an average of 7.22 percentage points, with as much as a 19.88-percentage-point increase in sensitivity for a particular minority class. Furthermore, the analysis of low-level image feature distributions for the synthetic nodules reveals that these nodules can provide insights on how to preprocess image data for better classification performance or how to supplement the original datasets when more data acquisition is feasible.
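A compact sketch of this kind of pipeline (synthetic oversampling of low-level features followed by a random forest, evaluated by per-class sensitivity) is shown below. SMOTE is used as one representative oversampling technique of the several explored, and the feature matrix and labels are placeholders rather than the paper's data.
```python
# Sketch of an oversampling + random forest pipeline like the one described above.
# X (low-level image features) and y (malignancy grades) are placeholders; SMOTE is
# one of several possible oversampling techniques.
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

def train_with_oversampling(X_train, y_train, X_test, y_test, seed=0):
    X_res, y_res = SMOTE(random_state=seed).fit_resample(X_train, y_train)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_res, y_res)
    y_pred = clf.predict(X_test)
    # per-class sensitivity (recall), the metric emphasized for the minority classes
    return recall_score(y_test, y_pred, average=None)
```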
A modular manipulator for industrial applications: Design and implementation
In order to address the long design cycle and the repetitive work involved in designing industrial robots, a modular manipulator system developed for general industrial applications is introduced in this paper. When the application scenario changes, the corresponding robotic modules can be selected to assemble a new robot configuration that meets the requirements. The modules can be divided into two categories: joint modules and link modules. Joint modules consist of three types of revolute joint modules with different torque ratings, and link modules mainly contain a T link module and an L link module. By connecting different types of modules, a variety of configurations can be achieved. Considering that traditional 6-DoF manipulators have difficulty meeting the needs of unstructured industrial applications, a 7-DoF redundant manipulator prototype is designed on the basis of these robotic modules.
Screen Time Tantrums: How Families Manage Screen Media Experiences for Toddlers and Preschoolers
Prior work shows that setting limits on young children's screen time is conducive to healthy development but can be a challenge for families. We investigate children's (age 1-5) transitions to and from screen-based activities to understand the boundaries families have set and their experiences living within them. We report on interviews with 27 parents and a diary study with a separate group of 28 families examining these transitions. These families turn on screens primarily to facilitate parents' independent activities. Parents feel this is appropriate but self-audit and express hesitation, as they feel they are benefiting from an activity that can be detrimental to their child's well-being. We found that families turn off screens when parents are ready to give their child their full attention and technology presents a natural stopping point. Transitioning away from screens is often painful, and predictive factors determine how painful a transition will be. Technology-mediated transitions are significantly more successful than parent-mediated transitions, suggesting that the design community has the power to make this experience better for parents and children by creating technologies that facilitate boundary-setting and respect families' self-defined limits.
Understanding Visual Ads by Aligning Symbols and Objects using Co-Attention
We tackle the problem of understanding visual ads where given an ad image, our goal is to rank appropriate human generated statements describing the purpose of the ad. This problem is generally addressed by jointly embedding images and candidate statements to establish correspondence. Decoding a visual ad requires inference of both semantic and symbolic nuances referenced in an image and prior methods may fail to capture such associations especially with weakly annotated symbols. In order to create better embeddings, we leverage an attention mechanism to associate image proposals with symbols and thus effectively aggregate information from aligned multimodal representations. We propose a multihop co-attention mechanism that iteratively refines the attention map to ensure accurate attention estimation. Our attention based embedding model is learned end-to-end guided by a max-margin loss function. We show that our model outperforms other baselines on the benchmark Ad dataset and also show qualitative results to highlight the advantages of using multihop co-attention.
Chess masters show a hallmark of face processing with chess.
Face processing has several distinctive hallmarks that researchers have attributed either to face-specific mechanisms or to extensive experience distinguishing faces. Here, we examined the face-processing hallmark of selective attention failure--as indexed by the congruency effect in the composite paradigm--in a domain of extreme expertise: chess. Among 27 experts, we found that the congruency effect was equally strong with chessboards and faces. Further, comparing these experts with recreational players and novices, we observed a trade-off: Chess expertise was positively related to the congruency effect with chess yet negatively related to the congruency effect with faces. These and other findings reveal a case of expertise-dependent, facelike processing of objects of expertise and suggest that face and expert-chess recognition share common processes.
Overview of Autonomous Vehicle Sensors and Systems
This paper will identify the application of various technologies that enable autonomous vehicles and also explain the advantages and disadvantages associated with each autonomous vehicle sensor. Specific sensors and systems may show favorable results, but various factors can affect their real-world use. For this reason, each system will be reviewed for an understanding of its practical application.
A Neural Probabilistic Structured-Prediction Method for Transition-Based Natural Language Processing
We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. When evaluated on chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on both tasks.
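To illustrate the global, sequence-level decoding described above, a generic beam-search decoder for a transition-based system is sketched below. The transition system (next_actions, apply_action, is_final) and the neural scoring function are placeholders, and the contrastive training against search errors is not shown.
```python
# Generic beam-search decoder sketch for a transition-based system, illustrating
# global (sequence-level) scoring. next_actions(), apply_action(), score() and
# is_final() are placeholders; next_actions() is assumed non-empty for any
# non-final state.
def beam_search(initial_state, next_actions, apply_action, score, is_final, beam_size=8):
    beam = [(0.0, initial_state)]              # (accumulated model score, state)
    while not all(is_final(state) for _, state in beam):
        candidates = []
        for total, state in beam:
            if is_final(state):                # finished analyses are carried over unchanged
                candidates.append((total, state))
                continue
            for action in next_actions(state):
                candidates.append((total + score(state, action),
                                   apply_action(state, action)))
        candidates.sort(key=lambda item: item[0], reverse=True)
        beam = candidates[:beam_size]          # keep the highest-scoring partial analyses
    return max(beam, key=lambda item: item[0])[1]
```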
Autonomous land vehicle project at CMU
1. Introduction
This paper provides an overview of the Autonomous Land Vehicle (ALV) Project at CMU. The goal of the CMU ALV Project is to build vision and intelligence for a mobile robot capable of operating in the real world outdoors. We are attacking this on a number of fronts: building appropriate research vehicles, exploiting high-speed experimental computers, and building software for reasoning about the perceived world. Research topics include:
• Construction of research vehicles
• Perception systems to perceive natural outdoor scenes by means of multiple sensors, including cameras (color, stereo, and motion), sonar sensors, and a 3D range finder
• Path planning for obstacle avoidance
• Use of topological and terrain maps
• System architecture to facilitate system integration
• Utilization of parallel computer architectures
Our current research vehicle is the Terregator, built at CMU, which is equipped with a sonar ring, a color camera, and the ERIM laser range finder. Its initial task is to follow roads and sidewalks in the park and on the campus, and to avoid obstacles such as trees, humans, and traffic cones.
2. Vehicle, Sensors, and Host Computers
The primary vehicle of the CMU ALV Project has been the Terregator, designed and built at CMU. The Terregator, short for Terrestrial Navigator, is designed to provide a clean separation between the vehicle itself and its sensor payload. As shown in
[Teeth whitening with 6% hydrogen peroxide vs. 35% hydrogen peroxide, a comparative controlled study].
INTRODUCTION In light of recent changes in European regulations regarding teeth whitening, the use of 6% hydrogen peroxide with a dedicated device has become the first-choice treatment option. The purpose of this controlled, randomized, parallel, blinded six-month prospective study was to compare this method of teeth whitening with the in-office method using 35% hydrogen peroxide. MATERIALS AND METHODS 75 healthy American individuals, ages 18-62, participated in this study. The participants were divided into 3 groups: a 6% hydrogen peroxide group, a 35% hydrogen peroxide group and a placebo control group. Whitening procedures were performed on intact frontal teeth with a color shade of A3 or higher. A controlled color measurement was performed before, immediately after, and three and six months post treatment. Clinical periodontal indices, oral mucosa changes, side effects and participant satisfaction were recorded. RESULTS In the 6% group, the changes in color shade immediately after treatment, and three and six months after treatment, were 2.37, 2.17 and 1.95, respectively. Tooth color changes in the 35% group immediately after completion of treatment, and three and six months after treatment, were 3.68, 2.60 and 1.70, respectively. Statistically significant differences were found in both treatment groups between the baseline color shade and the post-treatment color shade. The results were stable three and six months after treatment. A statistically significant difference was found between the groups immediately after treatment (p < 0.0001). No statistically significant difference was found between the two groups three and six months after treatment (p > 0.5000). Side effects such as oral mucosa irritation, burns or sensitive teeth were mild and resolved without intervention. A high satisfaction level was recorded. CONCLUSIONS Tooth color shade can be substantially improved using a dedicated device with 6% hydrogen peroxide only. This whitening method can be helpful to the dentist for continuing home treatment after in-office whitening, especially in cases with severe staining, for maintenance of in-office whitening treatment outcomes, or as an OTC home whitening procedure for patients with a limited budget.
Anticancer Activity of Amauroderma rude
Medicinal mushrooms are increasingly widely used for health promotion, especially by cancer patients. Here we report the screening of thirteen mushrooms for anti-cancer activity in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushroom with anti-cancer activity, and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, a decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represents a powerful medicinal mushroom with anti-cancer activities.
Improving the fingerprint verification performance of set partitioning coders at low bit rates
The wavelet transform combined with set partitioning coders (SPC) is the most widely used fingerprint image compression approach. Many different SPC coders have been proposed in the literature to encode the wavelet transform coefficients, a common feature of which is trying to maximize the global peak signal-to-noise ratio (PSNR) at a given bit rate. Unfortunately, they have not considered the local variations of SNR within the compressed fingerprint image; therefore, different regions in the compressed image will have different ridge-valley qualities. This degrades the verification performance because minutiae and other useful features cannot be extracted precisely from low-bit-rate-compressed fingerprint images. Contrast variation within the original image worsens the problem. This paper deals with those applications of fingerprint image compression in which high compression ratios and preserving or improving the verification performance of the compressed images are the main concern. We propose a compression scheme in which the local SNR (signal-to-noise ratio) variations within the compressed image are minimized (and thus, general quality is maximized everywhere) by means of an iterative procedure. The proposed procedure can be utilized in conjunction with any SPC coder without the need to modify the SPC coder's algorithm. In addition, we used image enhancement to further improve the ridge-valley quality as well as the verification performance of the compressed fingerprint images by alleviating the leakage effect. We evaluated the compression and verification performances of some conventional and modern SPC coders including STW, EZW, SPIHT, WDR, and ASWDR combined with the proposed scheme. This evaluation was performed on the FVC2004 dataset with respect to measures including the average PSNR curve versus bit rate, verification accuracy, detection error trade-off (DET) curve, and correlation of matching scores versus the average quality of the involved fingerprint images. Simulation results showed considerable improvement in the verification performance of all examined SPC coders, especially the SPIHT coder, by using the proposed scheme.
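A small sketch of the quantity the proposed scheme tries to equalize is given below: block-wise PSNR across a compressed fingerprint image, whose spread captures the local SNR variation discussed above. The block size and 8-bit peak value are illustrative choices, not the paper's settings.
```python
# Block-wise PSNR of a compressed fingerprint image, whose spread measures the
# local SNR variation discussed above. Block size and 8-bit peak are illustrative.
import numpy as np

def blockwise_psnr(original, compressed, block=32, peak=255.0):
    h, w = original.shape
    values = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            diff = (original[r:r+block, c:c+block].astype(float)
                    - compressed[r:r+block, c:c+block].astype(float))
            mse = np.mean(diff ** 2)
            values.append(10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf)
    return np.array(values)

def local_psnr_spread(original, compressed, **kwargs):
    """Standard deviation of the local PSNR values (lower = more uniform quality)."""
    vals = blockwise_psnr(original, compressed, **kwargs)
    return vals[np.isfinite(vals)].std()
```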
Circuit Elements With Memory: Memristors, Memcapacitors, and Meminductors
We extend the notion of memristive systems to capacitive and inductive elements, namely, capacitors and inductors whose properties depend on the state and history of the system. All these elements typically show pinched hysteretic loops in the two constitutive variables that define them: current-voltage for the memristor, charge-voltage for the memcapacitor, and current-flux for the meminductor. We argue that these devices are common at the nanoscale, where the dynamical properties of electrons and ions are likely to depend on the history of the system, at least within certain time scales. These elements and their combination in circuits open up new functionalities in electronics and are likely to find applications in neuromorphic devices to simulate learning, adaptive, and spontaneous behavior.
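Written out in the standard memristive-systems notation (with x denoting the internal state variables of each device), the three classes of elements listed above take the generic state-dependent form below; this is the generic form rather than any particular device model.
```latex
% Generic memristive-system form and its capacitive/inductive generalizations,
% matching the constitutive variable pairs listed in the abstract
% (x denotes the internal state variables of the device).
\begin{aligned}
\text{memristive:}    \quad & I(t) = G\bigl(x, V, t\bigr)\, V(t), & \dot{x} &= f(x, V, t),\\
\text{memcapacitive:} \quad & q(t) = C\bigl(x, V, t\bigr)\, V(t), & \dot{x} &= f(x, V, t),\\
\text{meminductive:}  \quad & \varphi(t) = L\bigl(x, I, t\bigr)\, I(t), & \dot{x} &= f(x, I, t).
\end{aligned}
```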
Potential for electronic health records and online social networking to redefine medical research.
BACKGROUND Recent legislation in the US requires that all medical records become electronic over the next decade. In addition, ongoing developments in patient-oriented care, most notably with the advent of health social networking and personal health records, provide a plethora of new information sources for research. CONTENT Electronic health records (EHRs) show great potential for use in observational studies to examine drug safety via pharmacovigilance methods that can find adverse drug events as well as expand drug safety profiles. EHRs also show promise for head-to-head comparative effectiveness trials and could play a critical role in secondary and tertiary diabetes prevention efforts. A growing subset of EHRs, personal health records (PHRs), opens up the possibility of engaging patients in their care, as well as new opportunities for participatory research and personalized medicine. Organizations nationwide, from providers to employers, are already investing heavily in PHR systems. Additionally, the explosive use of online social networking sites and mobile technologies will undoubtedly play a role in future research efforts by making available a veritable flood of information, such as real-time exercise monitoring, to health researchers. SUMMARY The future confluence of health information technologies will enable researchers and clinicians to reveal novel therapies and insights into treatments and disease management, as well as environmental and genomic interactions, at an unprecedented population scale.
Architectural support for SWAR text processing with parallel bit streams: the inductive doubling principle
Parallel bit stream algorithms exploit the SWAR (SIMD within a register) capabilities of commodity processors in high-performance text processing applications such as UTF-8 to UTF-16 transcoding, XML parsing, string search and regular expression matching. Direct architectural support for these algorithms in future SWAR instruction sets could further increase performance as well as simplifying the programming task. A set of simple SWAR instruction set extensions are proposed for this purpose based on the principle of systematic support for inductive doubling as an algorithmic technique. These extensions are shown to significantly reduce instruction count in core parallel bit stream algorithms, often providing a 3X or better improvement. The extensions are also shown to be useful for SWAR programming in other application areas, including providing a systematic treatment for horizontal operations. An implementation model for these extensions involves relatively simple circuitry added to the operand fetch components in a pipelined processor.
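The inductive doubling principle referenced above can be illustrated with the classic SWAR population count, which sums counts in 2-bit fields, then 4-bit, then 8-bit fields and so on, doubling the field width at each step. The sketch below emulates a 64-bit register with explicit masking in Python; it illustrates the principle only and is not one of the proposed instruction set extensions.
```python
# Classic illustration of the inductive doubling principle: a SWAR population
# count whose field width doubles at each step. Explicit 64-bit masking emulates
# a fixed-width register; the proposed ISA extensions would support such
# field-width doubling directly in hardware.
MASK64 = (1 << 64) - 1

def popcount64(x: int) -> int:
    x &= MASK64
    x = (x & 0x5555555555555555) + ((x >> 1) & 0x5555555555555555)    # 2-bit fields
    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333)    # 4-bit fields
    x = (x & 0x0F0F0F0F0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F0F0F0F0F)    # 8-bit fields
    x = (x & 0x00FF00FF00FF00FF) + ((x >> 8) & 0x00FF00FF00FF00FF)    # 16-bit fields
    x = (x & 0x0000FFFF0000FFFF) + ((x >> 16) & 0x0000FFFF0000FFFF)   # 32-bit fields
    return (x + (x >> 32)) & 0x7F                                     # final 64-bit sum

assert popcount64(0xFFFFFFFFFFFFFFFF) == 64
assert popcount64(0b101101) == 4
```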
Patient comfort during treatment with heated humidified high flow nasal cannulae versus nasal continuous positive airway pressure: a randomised cross-over trial.
OBJECTIVE To compare patient comfort in preterm infants treated with heated humidified high flow nasal cannulae (HHHFNC) versus nasal continuous positive airway pressure (NCPAP). DESIGN Randomised cross-over trial (2×24 h). SETTING Single tertiary neonatal unit. PATIENTS 20 infants less than 34 weeks postmenstrual age treated with NCPAP due to mild respiratory illness. INTERVENTIONS After parental consent, infants were randomised to 24 h of treatment with NCPAP or HHHFNC followed by 24 h of the alternate therapy. MAIN OUTCOME MEASURES Primary outcome was patient comfort assessed by the EDIN (neonatal pain and discomfort) scale. Secondary outcomes were respiratory parameters (respiratory rate, FiO2, SpO2, TcPCO2), ambient noise, salivary cortisol and parental assessments of their child. RESULTS We found no differences between HHHFNC and NCPAP in mean cumulative EDIN score (10.7 vs 11.1, p=0.25) or ambient noise (70 vs 74 dBA, p=0.18). Parents assessed HHHFNC treatment as significantly better in three domains: 1) child satisfied, 2) parental contact and interaction, and 3) possibility to take part in care. Mean respiratory rate over 24 h was lower during HHHFNC than during NCPAP (41 vs 46, p=0.001). Other respiratory parameters were similar. CONCLUSIONS Using the EDIN scale, we found no difference in patient comfort with HHHFNC versus NCPAP. However, parents preferred HHHFNC, and during HHHFNC the respiratory rate was lower than during NCPAP. TRIAL REGISTRATION ClinicalTrials.gov number NCT01526226.
Depressive Symptoms and Their Correlates with Locus of Control and Satisfaction with Life among Jordanian College Students
Objective: To establish estimates of the prevalence of depressive symptoms and their correlates with locus of control and satisfaction with life among undergraduate students at the Hashemite University (HU), Jordan. Method: A randomized sample of college students (N=492), of whom 67 (33.9%) were male, completed the Multidimensional Health Locus of Control Scale (MHLC), the Satisfaction with Life Scale (SLS), and the Center for Epidemiologic Studies Depression Scale (CES-D). Results: Study outcomes showed a high rate of depressive symptoms among HU students; almost half of the college-aged individuals had major depression. Statistical analyses showed no relationship between externality of locus of control (Powerful Others) and depression, while externality of locus of control (Chance) was found to be significantly positively related to depression; in line with previous studies, a significant negative relationship was found between internality of locus of control and depression. Additionally, a significant negative relationship was found between satisfaction with life (SLS) and depression. Satisfaction with life was found to be the best predictor of depressive symptoms, and Chance was found to be the second-best predictor. Conclusion: The findings of this study hold implications for interventions targeting depressive symptoms, such as expanding psychoeducation courses to include strategies for enhancing and maintaining a sense of personal control and self-actualization.
DeepFruits: A Fruit Detection System Using Deep Neural Networks
This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, with the F1 score (which takes into account both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding-box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit.
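The general transfer-learning recipe (start from a pretrained Faster R-CNN and retrain its detection head for a new fruit class) can be sketched with torchvision as below. This covers only the single-modality RGB case and is not the paper's multi-modal early/late fusion model; the dummy image and box annotation are placeholders, and a recent torchvision (0.13 or later) is assumed for the weights argument.
```python
# Sketch of transfer learning with torchvision's Faster R-CNN: load a pretrained
# detector and replace its box predictor for a new fruit class. RGB-only sketch;
# the paper's RGB+NIR fusion variants are not shown. Assumes torchvision >= 0.13.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_fruit_detector(num_classes=2):          # background + one fruit class
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_fruit_detector()
model.train()
# one illustrative training step on a dummy bounding-box annotation
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100., 120., 200., 260.]]),
            "labels": torch.tensor([1])}]
losses = model(images, targets)                   # dict of detection losses
total_loss = sum(losses.values())
total_loss.backward()
```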
A four-component decomposition of POLSAR images based on the coherency matrix
A four-component decomposition scheme of the coherency matrix is presented here for the analysis of polarimetric synthetic aperture radar (SAR) images. The coherency matrix is used to deal with the nonreflection-symmetric scattering case, which is an extension of the covariance matrix approach. The same decomposition results have been obtained. The advantage of this approach is that it yields explicit expressions for the four scattering powers in terms of scattering matrix elements, which serves the quantitative interpretation of polarimetric SAR data.
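For reference, the quantity being decomposed (the locally averaged 3x3 coherency matrix built from the Pauli scattering vector) can be computed as sketched below; the boxcar averaging window and array layout are illustrative assumptions, and the four-component decomposition itself is not shown.
```python
# Sketch of the 3x3 coherency matrix T = <k_p k_p^H> built from the Pauli
# scattering vector and averaged over a local window. s_hh, s_hv, s_vv are
# complex 2-D arrays; the boxcar window is an illustrative choice.
import numpy as np
from scipy.ndimage import uniform_filter

def pauli_vector(s_hh, s_hv, s_vv):
    """Pauli scattering vector per pixel, shape (..., 3)."""
    k = np.stack([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv], axis=-1)
    return k / np.sqrt(2.0)

def coherency_matrix(s_hh, s_hv, s_vv, win=5):
    """Locally averaged (multilooked) coherency matrix, shape (rows, cols, 3, 3)."""
    k = pauli_vector(s_hh, s_hv, s_vv)
    outer = k[..., :, None] * np.conj(k[..., None, :])   # per-pixel 3x3 outer product
    t = np.empty_like(outer)
    for i in range(3):                                   # boxcar-average each element
        for j in range(3):
            t[..., i, j] = (uniform_filter(outer[..., i, j].real, size=win)
                            + 1j * uniform_filter(outer[..., i, j].imag, size=win))
    return t
```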
Grain Boundary Segregation and Conductivity in Yttria‐Stabilized Zirconia
Grain boundary segregation of Y, Mn, La and Fe in doped and undoped 3YSZ and 8YSZ ceramics and its influence on the intra- and intergranular conductivity is studied by spatially resolved analytical transmission electron microscopy, electron energy loss spectroscopy (EELS) and impedance spectroscopy. EELS profiles reveal significant enrichment of di- and trivalent ions in a 1-2 nm wide grain boundary core, with total concentrations of Mn2+, Fe2+, La3+ and Y3+ reaching up to 40 cat.%. The segregation factors depend on the crystallographic structure of the boundary. Enrichment of di- and trivalent dopants produces a negative grain boundary core charge. EELS scans also show the space charge layer. It extends over a few nanometers and is characterized by an increased O K absorption edge height compared to the grain core and by characteristic O K ELNES features. The O K edge height and fine structure directly indicate the oxygen vacancy depletion in the space charge layer and allow local changes in chemical bonding to be tracked. Compared to 3YSZ ceramics, cation segregation is less pronounced in 8YSZ ceramics. The inter- and intragranular resistivities of doped and undoped ceramics are measured by impedance spectroscopy in the temperature range of 200-500 °C. While the oxygen ion bulk conductivity of YSZ ceramics remains unaffected by the dopants, the grain boundary resistance increases significantly. This behavior is explained by strong interactions of oxygen vacancies with di- and trivalent cations in the grain boundary core and by the lowered oxygen vacancy concentration in the space charge layer.
Russian corporate capitalism from Peter the Great to perestroika
Professor Owen examines corporate capitalism under the Tsarist and late Soviet regimes. Covering two hundred years from the Tsarist period through perestroika and into the present, he demonstrates the historical obstacles that have confronted Russian corporate entrepreneurs and the continuity of Russian attitudes toward corporate capitalism.
Second Language Learners' Theories on the Use of English Articles: An Analysis of the Metalinguistic Knowledge Used by Japanese Students in Acquiring the English Article System.
Although it is well known that many second language (L2) learners have trouble using articles "properly," the primary causes of their difficulties remain unclear. This study addresses this problem by examining the metalinguistic knowledge of the English article system that learners employ when selecting articles in a given situation. By doing this, the present study attempts to better understand the process of "making sense" of the English article system by learners who are at different stages in their interlanguage development. Eighty Japanese college students with varying levels of English proficiency participated in this study. Immediately after completing a fill-in-the-article test, a structured interview was conducted to investigate the reasons for their article choices. The quantitative and qualitative analyses reveal a number of conceptual differences with regard to learners' considerations of the hearer's knowledge, specific reference, and countability, which may account for learners' errors in article use across different proficiency groups.
Tracing Information Flow from Erk to Target Gene Induction Reveals Mechanisms of Dynamic and Combinatorial Control.
Cell signaling networks coordinate specific patterns of protein expression in response to external cues, yet the logic by which signaling pathway activity determines the eventual abundance of target proteins is complex and poorly understood. Here, we describe an approach for simultaneously controlling the Ras/Erk pathway and monitoring a target gene's transcription and protein accumulation in single live cells. We apply our approach to dissect how Erk activity is decoded by immediate early genes (IEGs). We find that IEG transcription decodes Erk dynamics through a shared band-pass filtering circuit; repeated Erk pulses transcribe IEGs more efficiently than sustained Erk inputs. However, despite highly similar transcriptional responses, each IEG exhibits dramatically different protein-level accumulation, demonstrating a high degree of post-transcriptional regulation by combinations of multiple pathways. Our results demonstrate that the Ras/Erk pathway is decoded by both dynamic filters and logic gates to shape target gene responses in a context-specific manner.
Hematopoietic stem cell transplantation for refractory or recurrent non-Hodgkin lymphoma in children and adolescents.
We examined the role of hematopoietic stem cell transplantation (HSCT) for patients aged ≤18 years with refractory or recurrent Burkitt (n=41), lymphoblastic (n=53), diffuse large B cell (DLBCL; n=52), and anaplastic large cell lymphoma (n=36), receiving autologous (n=90) or allogeneic (n=92; 43 matched sibling and 49 unrelated donor) HSCT in 1990-2005. Risk factors affecting event-free survival (EFS) were evaluated using stratified Cox regression. Characteristics of allogeneic and autologous HSCT recipients were similar. Allogeneic donor HSCT was more likely to use irradiation-containing conditioning regimens, bone marrow (BM) stem cells, be performed in more recent years, and for lymphoblastic lymphoma. EFS rates were lower for patients not in complete remission at HSCT, regardless of donor type. After adjusting for disease status, 5-year EFS was similar after allogeneic and autologous HSCT for DLBCL (50% vs 52%), Burkitt (31% vs 27%), and anaplastic large cell lymphoma (46% vs 35%). However, EFS was higher for lymphoblastic lymphoma after allogeneic HSCT (40% vs 4%; P < .01). Predictors of EFS for progressive or recurrent disease after HSCT included disease status at HSCT and use of an allogeneic donor for lymphoblastic lymphoma. These data were unable to demonstrate a difference in outcome by donor type for the other histological subtypes.
RISE: Randomized Input Sampling for Explanation of Black-box Models
Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches.
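A minimal RISE-style sketch of the probing procedure is given below: randomly mask the image, query the black-box scorer, and accumulate the masks weighted by the resulting class score. Mask generation and normalization are simplified relative to the paper, and score_fn is a placeholder for the black-box model's class probability.
```python
# Minimal RISE-style saliency sketch: probe a black-box scorer with randomly
# masked images and accumulate masks weighted by the output score. Mask generation
# and normalization are simplified; score_fn(image) is a placeholder returning the
# class probability of interest; image is assumed HxWxC with values in [0, 1].
import numpy as np

def random_mask(h, w, grid=8, p_keep=0.5, rng=None):
    rng = rng or np.random.default_rng()
    coarse = (rng.random((grid, grid)) < p_keep).astype(float)
    up = np.kron(coarse, np.ones((h // grid + 1, w // grid + 1)))
    return up[:h, :w]                               # nearest-neighbour upsampled mask

def rise_saliency(image, score_fn, n_masks=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w))
    total = 0.0
    for _ in range(n_masks):
        m = random_mask(h, w, rng=rng)
        weight = score_fn(image * m[..., None])     # black-box probe, no gradients needed
        saliency += weight * m
        total += weight
    return saliency / max(total, 1e-8)
```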
Contribution of knowledge management activities to organisational business performance
Purpose – Within the construction industry there is a growing awareness of the need for linking knowledge management (KM) to business strategy, organisational objectives, and existing performance measures. This study was undertaken within the context of construction organisations, and attempts to provide empirical evidence about the relationships between KM activities and organisational business performance. Design/methodology/approach – A questionnaire survey was administered to a sample of construction contractors operating in Hong Kong to investigate the opinions of construction professionals regarding the intensity of KM activities and business performance within their organisations. In parallel to the survey, semi-structured interviews were undertaken to provide qualitative insights that helped to clarify and deepen understanding of the KM process within the context of the research target. Findings – The investigation shows that knowledge utilisation is the strongest contributor to general business performance. In addition, the impact of KM activities on the lagging performance indicators of the BSC, such as 'financial performance', is indirect, operating through leading indicators such as performance from the 'internal process' and 'learning and growth' perspectives. Originality/value – The study empirically established the linkage between the intensity of KM activities and business performance, and demonstrated that KM strategies need to be explicitly formulated and measured according to organisational business objectives.
Comparison of the effectiveness of six and two acupuncture point regimens in osteoarthritis of the knee: a randomised trial.
BACKGROUND Although substantial data have supported the effectiveness of acupuncture for treating knee osteoarthritis (OA), the number of points used has varied. The objective of this study was to compare the effectiveness of six and two acupuncture points in the treatment of knee OA. METHODS A randomised trial of knee OA patients was conducted. Patients were randomly allocated into two groups of 35. The "six point group" received treatment at six acupuncture points, ST35, EX-LE4 (Neixiyan), ST36, SP9, SP10 and ST34, while the "two point group" received treatment at just the first pair of points, ST35 and EX-LE4. Both groups received twice weekly electroacupuncture on 10 occasions. Electrical stimulation was carried out at low-frequency of 3 Hz to all points, with the intensity as high as tolerable. Both groups were allowed to take a 200 mg celecoxib capsule per day for intolerable pain. Patients were assessed at baseline, week 5, week 9 and week 13, using a Thai language version of the Western Ontario and McMaster Osteoarthritis Index (WOMAC). Global assessment of change after 10 treatments was also recorded. RESULTS Acupuncture at both six and two acupuncture points was associated with significant improvement. Mean total WOMAC score at weeks 5 and 13 of patients in both groups showed no significant difference statistically (p = 0.75 and p = 0.51). Moreover, the number of celecoxib capsules taken, global assessment of global change and body weight change of both groups also showed no statistical difference. CONCLUSION This evidence suggests that electroacupuncture to two local points may be sufficient to treat knee OA, but in view of some limitations to this study further research is necessary before this can be stated conclusively.
Student Interactions in Online Discussion Forum: Empirical Research from "Media Richness Theory" Perspective.
The present study contributes to the understanding of the effectiveness of online discussion forums in student learning. A conceptual model based on the 'theory of online learning' and 'media richness theory' was proposed and empirically tested. We extend the current understanding of media richness theory to suggest that the use of multiple media can enrich the communication context and perceived learning. Hierarchical regression was applied to investigate the relationships between antecedent factors, interaction and perceived learning. The results show that the perceived richness of an online discussion forum has a significant positive effect on student participation, interaction, and learning when used along with traditional classroom lectures. Implications of these findings are discussed as they provide important guidelines for management educators.
Introduction
Over the past few years, management education has been experiencing an increased interest in using internet and communication technology tools (ICT). While ICT has been widely used in distance and online modes of education, it is being increasingly used along with face-to-face lectures to augment and support classroom learning. The use of the online discussion forum (ODF) has emerged as a common tool and an effective way of engaging students outside the classroom. An ODF is an e-learning platform that allows students to post messages to discussion threads, interact and receive feedback from other students and the instructor, and foster deeper understanding of the subject under study. In an ODF there is no loss of data, as the students' written messages are stored in the virtual space and can be retrieved and reviewed anytime. The use of online instructional tools can remove some of the communication impediments associated with face-to-face lectures, providing a forum to address issues through argumentative and collaborative discourse (Karacapilidis and Papadias, 2001). For students, the online environment is less intimidating, less prone to be dominated by a single participant and less bounded by convention (Redmon and Burger, 2004). It also provides students the flexibility of time and place to reflect on previous postings to the discussion thread (Anderson and Kanuka, 1997) and thus actively engages them in a meaningful and intellectual experience. Biggs (1999) suggested that active teaching methods which involve learning through active experimentation and reflective thinking encourage a high level of student participation in the learning process. This, as opposed to a passive teaching approach like the traditional classroom lecture (Ebert-May, Brewer, and Allred, 1997), challenges students to construct knowledge (Struyven et al., 2006), leading to higher cognitive outcomes. The various active teaching methods include case studies, panel discussions, simulation games, project studies and problem-based learning (Lantis, Kuzma and Boehrer, 2000; Reichard, 2002; Lamagna and Selim, 2005). Employing these active teaching methods increases the academic performance of the students and generates a more positive attitude towards learning (Felder and Silverman, 1998; Struyven et al., 2006). As Thomas (2002) noted, the online discussion forum provides significant opportunities for students to actively engage in their learning process through active participation.
Studies investigating technology-rich classrooms found that students demonstrated superior attitudes, involvement and engagement with the course content (Dorman and Fraser, 2009). Using technology tools in support of lectures can reinforce course information through multiple modes of knowledge representation and comprehension. This improves learning outcomes by contributing to students' intellectual growth and critical thinking (Pena-Shaff and Nicholls, 2004). Other important payoffs of using technology tools include flexibility, convenience and accessibility for students to complete their learning anytime and anyplace. However, studies have shown that motivating students to actively participate and contribute in online discussions is challenging. A perceived lack of relevance and usefulness seems to hinder student motivation, as students assume an 'invisible' online role, posting discussions with minimal content (Beaudoin, 2002). Confusion, anxiety, apprehension about writing, difficulty in phrasing, and time constraints are other reasons attributed to student passivity or non-participation in ODFs. Another potential problem has been the evaluation of students' contributions to the online discussions. Pena-Shaff, Altman and Stephenson (2005) reported that some students rebelled when discussions were graded, resulting in a negative impact on their participation. Some students found it difficult to interact when a human interface was not present; this was reflected in their communication in the ODF environment (Bullen, 1998). Much prior research has compared learning in face-to-face lectures and threaded discussions (Meyer, 2003), the role of the instructor in web-based forums (Mazzolini and Maddison, 2007), student interactions in the virtual environment (Pena-Shaff and Nicholls, 2004) and assessment strategies for discussion content (Gaytan and McEwen, 2007). The area that requires further exploration is the linkage between participation, interaction and learning when ODFs are used as an adjunct to traditional classroom lectures. Furthermore, the majority of research studies in this stream have taken a qualitative research approach to understanding students' participation in an online discussion forum (De Wever et al., 2006; Jiang and Ting, 2000). While the findings obtained from this research approach have been valuable, further empirical research is required to identify the important factors that influence interaction and learning in an ODF (Brook and Oliver, 2003). Accordingly, the objective of the present investigation was to study the antecedents and outcomes of using ODFs with traditional classroom lectures. Theoretical background This paper deals with using online discussion forums with the traditional classroom environment. The conceptual framework is drawn in particular from Anderson's (2004) "Theory of Online Learning" and "Media Richness Theory" (Daft and Lengel, 1986). The 'Theory of Online Learning', as proposed by Anderson (2004), argues that an effective learning environment affords many modalities of interaction between three macro components, namely students, instructors and content (Anderson, 2004).
Anderson and Garrison (1998) present six typologies of interaction, namely student-student, student-instructor, student-content, instructor-instructor, instructor-content and content-content interactions, that serve as the basis of the educational process in an online learning environment. These interactions are described as critical to effective learning and take place when the learning environment is learner-centered, knowledge-centered, assessment-centered, and community-centered (Anderson, 2004). Interactivity has been considered a central tenet of 'online learning theory'. Using online instructional tools provides unique opportunities for the instructor to engage students in various activities and offers a new dimension of interaction – active and higher-order. It changes the way students interact, motivating them to be more attentive and participative, and encourages the process of learning. The role of the instructor in facilitating discourse becomes decisive in overcoming restrictions due to individual characteristics (e.g. personality traits) and leads to enhanced communication. Additionally, students have to demonstrate strong internet efficacy for active participation and interaction. Using online resources expands the opportunities for students to reflect upon their thinking and to experience discourse with other students and the instructor. It individualizes their learning experience, facilitating the development of deep-level learning and "new knowledge structures" (Anderson, 2004, p. 37). Asynchronous communication facilitates personalization by allowing students to learn at their own pace and according to their interests, previous knowledge and style. This represents the knowledge-centered view of online learning theory. Anderson (2004) notes that assessment determines whether the learning objectives of using online tools have been accomplished. The instructor needs to structure the online discussions to align with classroom lectures, create experiences leading to outcomes, and discuss and use assessment to improve learning. Additionally, feedback is an important part of this assessment-centered learning and influences the approach to learning. The last perspective of online learning theory is the community or social component of online learning. The interactions in the online forum promote a sense of community or social connectivity between the learners and instructors. The level of connectedness among the students results in the formation of productive relationships among class members and in collaborative exploration of the subject matter. Previous research has shown that learning communities exhibit increased student learning and course satisfaction (Rovai, 2002). As suggested by this theory, it is proposed that learning effectiveness in using ODFs is influenced by interaction and communication. The level of interaction depends on the learning environment (facilitating discourse, reflective thinking, assessment and feedback, and sense of community), the learning process (personalization) and learner characteristics (personality and internet efficacy). Media Richness Theory (MRT), a widely known theory of media use, posits that communication efficiency will be improved by matching media to the students’ task info
Average-Voice-Based Speech Synthesis
This thesis describes a novel speech synthesis framework, "Average-Voice-based Speech Synthesis." Using this framework, synthetic speech for arbitrary target speakers can be obtained robustly and stably even if the speech samples available for the target speaker are very limited. The framework consists of a speaker normalization algorithm for parameter clustering, a speaker normalization algorithm for parameter estimation, a transformation/adaptation part, and a part that modifies the rough transformation. In parameter clustering using decision-tree-based context clustering techniques for the average voice model, the nodes of the decision tree do not always have training data from all speakers, and some nodes have data from only one speaker. Such speaker-biased nodes degrade the quality of the average voice and of synthetic speech after speaker adaptation, especially in prosody. Therefore, we first propose a new context clustering technique, named "shared-decision-tree-based context clustering," to overcome this problem. Using this technique, every node of the decision tree always has training data from all speakers included in the training speech database. As a result, we can construct a decision tree common to all training speakers, and each distribution at a node always reflects the statistics of all speakers. However, when the amounts of training data of the training speakers differ widely, the distributions at the nodes often have a bias depending on speaker and/or gender, and this degrades the quality of synthetic speech. Therefore, we incorporate "speaker adaptive training" into the parameter estimation procedure of the average voice model to reduce the influence of speaker dependence. In speaker adaptive training, the speaker difference between a training speaker's voice and the average voice is assumed to be expressed as a simple linear regression function of the mean vector of the distribution, and a canonical average voice model is estimated under this assumption. In speaker adaptation for speech synthesis, it is desirable to convert both voice characteristics and prosodic features such as F0 and phone duration. Therefore, we utilize the framework of the "hidden semi-Markov model" (HSMM), an HMM having explicit state duration distributions, and we propose an HSMM-based model adaptation algorithm to simultaneously transform both state output and state duration distributions. Furthermore, we also propose an HSMM-based speaker adaptive training algorithm to normalize both state output and state duration distributions of the average voice model at the same time. Finally, we explore several speaker adaptation algorithms to transform more effectively the average voice …
Sedimentation in a dilute polydisperse system of interacting spheres. Part 1. General theory
Small rigid spherical particles are settling under gravity through Newtonian fluid, and the volume fraction of the particles (φ) is small, although sufficiently large for the effects of interactions between pairs of particles to be significant. Two neighbouring particles interact both hydrodynamically (with low-Reynolds-number flow about each particle) and through the exertion of a mutual force of molecular or electrical origin which is mainly repulsive; and they also diffuse relative to each other by Brownian motion. The dispersion contains several species of particle which differ in radius and density. The purpose of the paper is to derive formulae for the mean velocity of the particles of each species correct to order φ, that is, with allowance for the effect of pair interactions. The method devised for the calculation of the mean velocity in a monodisperse system (Batchelor 1972) is first generalized to give the mean additional velocity of a particle of species i due to the presence of a particle of species j in terms of the pair mobility functions and the probability distribution p_ij(r) for the relative position of an i and a j particle. The second step is to determine p_ij(r) from a differential equation of Fokker-Planck type representing the effects of relative motion of the two particles due to gravity, the interparticle force, and Brownian diffusion. The solution of this equation is investigated for a range of special conditions, including large values of the Péclet number (negligible effect of Brownian motion); small values of the Péclet number; and extreme values of the ratio of the radii of the two spheres. There are found to be three different limits for p_ij(r) corresponding to different ways of approaching the state of equal sphere radii, equal sphere densities, and zero Brownian relative diffusivity. Consideration of the effect of relative diffusion on the pair-distribution function shows the existence of an effective interactive force between the two particles and consequently a contribution to the mean velocity of the particles of each species. The direct contributions to the mean velocity of particles of one species due to Brownian diffusion and to the interparticle force are non-zero whenever the pair-distribution function is non-isotropic, that is, at all except large values of the Péclet number. The forms taken by the expression for the mean velocity of the particles of one species in the various cases listed above are examined. Numerical values will be presented in Part 2.
Accelerating recurrent neural network training using sequence bucketing and multi-GPU data parallelization
An efficient algorithm for recurrent neural network training is presented. The approach increases the training speed for tasks where the length of the input sequence may vary significantly. The proposed approach is based on optimal batch bucketing by input sequence length and data parallelization on multiple graphics processing units. The baseline training performance without sequence bucketing is compared with the proposed solution for different numbers of buckets. An example is given for the online handwriting recognition task using an LSTM recurrent neural network. The evaluation is performed in terms of the wall clock time, number of epochs, and validation loss value.
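To make the bucketing idea above concrete, the following minimal Python sketch (not the paper's exact algorithm; all names and constants are illustrative) groups variable-length sequences into buckets by length and pads each mini-batch only up to the longest sequence in its bucket, which is what reduces wasted computation when sequence lengths vary widely.

```python
import random
from collections import defaultdict

def make_buckets(sequences, bucket_boundaries):
    """Group variable-length sequences into buckets so that each mini-batch
    contains sequences of similar length and needs little padding."""
    buckets = defaultdict(list)
    for seq in sequences:
        for bound in bucket_boundaries:          # first bucket that fits
            if len(seq) <= bound:
                buckets[bound].append(seq)
                break
        else:
            buckets[bucket_boundaries[-1]].append(seq)   # overflow bucket
    return buckets

def batches_from_buckets(buckets, batch_size):
    """Yield mini-batches drawn from one bucket at a time, padded only to the
    longest sequence in the batch rather than in the whole dataset."""
    for bound, seqs in buckets.items():
        random.shuffle(seqs)
        for i in range(0, len(seqs), batch_size):
            batch = seqs[i:i + batch_size]
            max_len = max(len(s) for s in batch)
            yield [s + [0] * (max_len - len(s)) for s in batch]  # 0 = pad token

# toy usage: sequences of token ids with widely varying lengths
data = [[1] * n for n in (3, 5, 12, 40, 41, 7, 90)]
for batch in batches_from_buckets(make_buckets(data, [8, 16, 32, 64, 128]), 2):
    print(len(batch), len(batch[0]))
```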
Big Data Analytics in Intelligent Transportation Systems: A Survey
Big data is becoming a research focus in intelligent transportation systems (ITS), which can be seen in many projects around the world. Intelligent transportation systems will produce a large amount of data. The produced big data will have profound impacts on the design and application of intelligent transportation systems, which makes ITS safer, more efficient, and profitable. Studying big data analytics in ITS is a flourishing field. This paper first reviews the history and characteristics of big data and intelligent transportation systems. The framework of conducting big data analytics in ITS is discussed next, where the data source and collection methods, data analytics methods and platforms, and big data analytics application categories are summarized. Several case studies of big data analytics applications in intelligent transportation systems, including road traffic accidents analysis, road traffic flow prediction, public transportation service plan, personal travel route plan, rail transportation management and control, and assets maintenance are introduced. Finally, this paper discusses some open challenges of using big data analytics in ITS.
Exploring motivations for contributing to open source initiatives: The roles of contribution context and personal values
We explore contextual and dispositional correlates of the motivation to contribute to open source initiatives. We examine how the context of the open source project, and the personal values of contributors, are related to the types of motivations for contributing. A web-based survey was administered to 300 contributors in two prominent open source contexts: software and content. As hypothesized, software contributors placed a greater emphasis on reputation-gaining and self-development motivations, compared with content contributors, who placed a greater emphasis on altruistic motives. Furthermore, the hypothesized relationships were found between contributors’ personal values and their motivations for contributing.
Reinforcement Learning of Theorem Proving
We introduce a theorem proving algorithm that uses practically no domain heuristics for guiding its connection-style proof search. Instead, it runs many Monte Carlo simulations guided by reinforcement learning from previous proof attempts. We produce several versions of the prover, parameterized by different learning and guiding algorithms. The strongest version of the system is trained on a large corpus of mathematical problems and evaluated on previously unseen problems. The trained system solves within the same number of inferences over 40% more problems than a baseline prover, which is an unusually high improvement in this hard AI domain. To our knowledge this is the first time reinforcement learning has been convincingly applied to solving general mathematical problems on a large scale.
A Hardware Gaussian Noise Generator for Channel Code Evaluation
Hardware simulation of channel codes offers the potential of improving code evaluation speed by orders of magnitude over workstation- or PC-based simulation. We describe a hardware-based Gaussian noise generator used as a key component in a hardware simulation system, for exploring channel code behavior at very low bit error rates (BERs) in the range of 10^-9 to 10^-10. The main novelty is the design and use of non-uniform piecewise linear approximations in computing trigonometric and logarithmic functions. The parameters of the approximation are chosen carefully to enable rapid computation of coefficients from the inputs, while still retaining extremely high fidelity to the modelled functions. The output of the noise generator accurately models a true Gaussian PDF even at very high σ values. Its properties are explored using: (a) several different statistical tests, including the chi-square test and the Kolmogorov-Smirnov test, and (b) an application for decoding of low density parity check (LDPC) codes. An implementation at 133MHz on a Xilinx Virtex-II XC2V4000-6 FPGA produces 133 million samples per second, which is 40 times faster than a 2.13GHz PC; another implementation on a Xilinx Spartan-IIE XC2S300E-7 FPGA at 62MHz is capable of a 20 times speedup. The performance can be improved by exploiting parallelism: an XC2V4000-6 FPGA with three parallel instances of the noise generator at 126MHz can run 100 times faster than a 2.13GHz PC. We illustrate the deterioration of clock speed with the increase in the number of instances.
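As a rough software analogue of the design described above (the actual work is a hardware implementation whose segment layout is chosen for FPGA arithmetic), the sketch below applies the Box-Muller transform but replaces the exact sqrt(-2 ln u) factor with a piecewise-linear table lookup whose breakpoints are denser near u = 0, where the function is steep. All constants are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Non-uniform breakpoints: geometrically spaced so that the steep region of
# sqrt(-2 ln u) near u = 0 gets far more segments than the flat region near 1.
xp = np.concatenate(([1e-10], np.geomspace(1e-9, 1.0, 255)))
fp = np.sqrt(-2.0 * np.log(xp))          # exact values tabulated at breakpoints

def box_muller_pwl(n, rng=np.random.default_rng(0)):
    """Box-Muller transform with the sqrt(-2 ln u) factor replaced by a
    piecewise-linear table lookup, mimicking a hardware-friendly design."""
    u1 = rng.random(n)
    u2 = rng.random(n)
    r = np.interp(u1, xp, fp)            # linear interpolation between segments
    return r * np.cos(2.0 * np.pi * u2)

samples = box_muller_pwl(1_000_000)
print(samples.mean(), samples.std())     # close to 0 and 1 for a standard normal
```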
Safety, tolerability, pharmacokinetics, and efficacy of an interleukin-2 agonist among HIV-infected patients receiving highly active antiretroviral therapy.
We sought to determine the safety, maximum tolerated dose, optimal dose, and preliminary dose efficacy of intermittent subcutaneously (s.c.) administered BAY 50-4798 among patients with HIV infection receiving highly active antiretroviral therapy (HAART) compared with patients receiving HAART alone. A phase I/II randomized, double-blind, dose-escalation study was conducted of the safety, tolerability, pharmacokinetics, and efficacy of s.c. BAY 50-4798 administered to HIV-infected patients already receiving stable HAART. There were no unexpected safety findings in a population of HIV-infected patients receiving HAART plus s.c. BAY 50-4798 as adjunctive therapy. BAY 50-4798 exhibited nearly dose-proportional pharmacokinetics, and accumulation was minimal during multiple-dose treatment. Limited efficacy data indicated that treatment with BAY 50-4798 caused at least a transient increase in CD4(+) T cell counts in some recipients, particularly at the early time points. In general, this effect appeared to increase with increasing dose. BAY 50-4798 was generally well tolerated across the dose range tested, but a lack of potent, sustained immunologic activity suggests that further optimization of dose and schedule will be necessary.
Fast Pedestrian Detection Based on HOG-PCA and Gentle AdaBoost
Pedestrian detection is a major difficulty in the field of object detection. In order to achieve a balance between speed and accuracy, we propose a new framework for pedestrian detection based on HOG-PCA and Gentle AdaBoost. First, each block-based feature of the image is encoded using histograms of oriented gradients (HOG); then Principal Component Analysis (PCA) is used to reduce the dimensionality of the HOG feature set. Finally, Gentle AdaBoost is used to perform the classification. HOG-PCA descriptors reduce the computational complexity in contrast to state-of-the-art algorithms, and using Gentle AdaBoost to train the pedestrian classifier improves the efficiency of training. Experimental results demonstrate the robustness of our proposed algorithm.
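A minimal sketch of such a detection pipeline is shown below, assuming scikit-image for HOG extraction and scikit-learn for PCA and boosting. Note that scikit-learn does not ship Gentle AdaBoost, so its standard AdaBoostClassifier is used here only as a stand-in, and the random arrays merely illustrate the expected data shapes for a 128x64 detection window.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

def hog_features(images):
    """Extract block-based HOG descriptors for a list of grayscale windows."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# hypothetical stand-ins for a real training set:
# X_windows: grayscale 128x64 windows, y: 1 = pedestrian, 0 = background
X_windows = np.random.rand(200, 128, 64)
y = np.random.randint(0, 2, 200)

clf = make_pipeline(
    PCA(n_components=100),                 # compress the high-dimensional HOG vectors
    AdaBoostClassifier(n_estimators=200),  # stand-in for Gentle AdaBoost
)
clf.fit(hog_features(X_windows), y)
print(clf.predict(hog_features(X_windows[:5])))
```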
Representation of ideals of relational structures
The \textit{age} of a relational structure $\mathfrak A$ of signature $\mu$ is the set $age(\mathfrak A)$ of its finite induced substructures, considered up to isomorphism. This is an ideal in the poset $\Omega_\mu$ consisting of finite structures of signature $\mu$ and ordered by embeddability. If the structures are made of infinitely many relations and if, among those, infinitely many are at least binary then there are ideals which do not come from an age. We provide many examples. We particularly look at metric spaces and offer several problems. We also provide an example of an ideal $I$ of isomorphism types of at most countable structures whose signature consists of a single ternary relation symbol. This ideal does not come from the set $age_{\mathfrak I}(\mathfrak A)$ of isomorphism types of substructures of $\mathfrak A$ induced on the members of an ideal $\mathfrak I$ of sets. This answers a question due to R. Cusin and J.F. Pabion (1970).
Type Theory and its Meaning Explanations
At the heart of intuitionistic type theory lies an intuitive semantics called the "meaning explanations"; crucially, when meaning explanations are taken as definitive for type theory, the core notion is no longer "proof" but "verification". We'll explore how type theories of this sort arise naturally as enrichments of logical theories with further judgements, and contrast this with modern proof-theoretic type theories which interpret the judgements and proofs of logics, not their propositions and verifications.
The specificity of prescription patterns in secondary stroke prevention.
We would like to comment on the important report by Landi and colleagues about the factors associated with a reduced likelihood of receiving secondary stroke prevention treatment1 and present our own data. We have demonstrated that in community-dwelling patients with chronic atrial fibrillation, living alone or in rural areas, history of previous falls, and cognitive and functional impairments are independent factors that result in physicians prescribing aspirin instead of anticoagulants, thus disregarding the common guidelines for stroke prevention.2,3 We have also shown that in some cases it does not mean malpractice.3 In elderly patients, a geriatric assessment including a shrewd evaluation of the psychosocial conditions can guide physicians in the selection of the correct treatment, thus avoiding …
Self-as-doer for diabetes: development and validation of a diabetes-specific measure of doer identification.
BACKGROUND The purpose of this study was to develop and validate a scale to measure the level of self-care behavior "doer identity" in persons with diabetes. METHODS Persons with diabetes (N = 355) completed questionnaires assessing self-as-doer identity and other related constructs. Principal components and parallel analyses and tests of reliability and validity were performed. RESULTS A 7-factor solution explained 55.24% of the total variance on behaviors. Cronbach's alpha was .93 for the overall scale. Extracted components moderately correlated with one another and with theoretically similar constructs. Self-as-doer identity significantly predicted all self-care behaviors (except for blood glucose monitoring) and glycemic control over and above related variables for persons with type 1 diabetes. Self-as-doer identity also predicted diet behaviors for persons with type 2 diabetes. CONCLUSIONS Evidence for a reliable and valid factor structure of the Self-as-doer-Diabetes measure was demonstrated.
Baseband Attacks: Remote Exploitation of Memory Corruptions in Cellular Protocol Stacks
Published attacks against smartphones have concentrated on software running on the application processor. With numerous countermeasures like ASLR, DEP and code signing being deployed by operating system vendors, practical exploitation of memory corruptions on this processor has become a time-consuming endeavor. At the same time, the cellular baseband stack of most smartphones runs on a separate processor and is significantly less hardened, if at all. In this paper we demonstrate the risk of remotely exploitable memory corruptions in cellular baseband stacks. We analyze two widely deployed baseband stacks and give exemplary cases of memory corruptions that can be leveraged to inject and execute arbitrary code on the baseband processor. The vulnerabilities can be triggered over the air interface using a rogue GSM base station, for instance using OpenBTS together with a USRP software defined radio.
Epithelial ovarian cancer: testing the 'androgens hypothesis'.
In 1998, Risch proposed a hypothesis for the pathogenesis of ovarian cancer relating to the role of androgens in stimulating epithelial cell proliferation. Although this hypothesis has been widely discussed, direct evidence to support it is scant. To address this issue, we have conducted a detailed analysis of factors possibly associated with high circulating levels of androgens, including polycystic ovary syndrome (PCOS), hirsutism and acne (all clinically associated with hyperandrogenism) using the data collected in an Australia-wide, population-based case-control study. Cases aged 18-79 years with a new diagnosis of invasive epithelial ovarian cancer (n=1276) or borderline malignant tumour (n=315) were identified through a network of clinics and cancer registries throughout Australia. Controls (n=1508) were selected from the National Electoral Roll. Women self-reported a history of PCOS, acne, hirsutism and also use of testosterone supplements or the androgenic medication Danazol. We found no evidence that a history of PCOS, acne or hirsutism was associated with ovarian cancer overall, or with specific subtypes, with the exception of serous borderline tumours that were positively associated with a history of PCOS (OR 2.6; 95% CI 1.0-6.1). Women who had ever used testosterone supplements had an increased risk of ovarian cancer (OR 3.7; 95% CI 1.1-12.0); however, use of the androgenic medication Danazol did not increase risk (OR 1.0; 95% CI 0.4-2.9). Overall, our results do not support the hypothesis that androgen-related disorders increase the risk of ovarian cancer.
Miniaturized Circularly Polarized Patch Antenna With Low Back Radiation for GPS Satellite Communications
A new size-reduction technique for circularly polarized (CP) patch antenna with the use of parasitic shorting strips is presented. The operating frequency of the CP patch antenna can be controlled by those parasitic shorting strips. Rectangular, U-shaped, and meandering shorting strips are studied. It is found that the size and the back radiation of the patch antenna can be reduced simultaneously. Good impedance matching and CP radiation characteristics can be achieved with different shapes of the shorting strips. The proposed antenna with meandering shorting strip is measured. Good agreement between analytical and experimental results is obtained.
From machine learning to machine reasoning An essay
A plausible definition of “reasoning” could be “algebraically manipulating previously acquired knowledge in order to answer a new question”. This definition covers first-order logical inference or probabilistic inference. It also includes much simpler manipulations commonly used to build large learning systems. For instance, we can build an optical character recognition system by first training a character segmenter, an isolated character recognizer, and a language model, using appropriate labelled training sets. Adequately concatenating these modules and fine-tuning the resulting system can be viewed as an algebraic operation in a space of models. The resulting model answers a new question, that is, converting the image of a text page into computer-readable text. This observation suggests a conceptual continuity between algebraically rich inference systems, such as logical or probabilistic inference, and simple manipulations, such as the mere concatenation of trainable learning systems. Therefore, instead of trying to bridge the gap between machine learning systems and sophisticated “all-purpose” inference mechanisms, we can instead algebraically enrich the set of manipulations applicable to training systems, and build reasoning capabilities from the ground up.
Tryptophan depletion in SSRI-recovered depressed outpatients
Rationale: Recently, a number of studies have challenged the finding that acute tryptophan depletion (TD) increases depressive symptoms in medicated, formerly depressed patients. The present study examined the effects of acute nutritional TD on remitted depressed patients currently treated with selective serotonin reuptake inhibitors. In an attempt to clarify conflicting earlier findings, the effects of a number of clinical variables on outcome were also investigated. Methods: Ten patients underwent TD in a double-blind, controlled, balanced crossover fashion. The control session followed the procedure of Krahn et al. (1996 Neuropsychopharmacology 15:325–328). Sessions were 5–8 days apart. Results: TD was significantly related to increased scores on clinician-rated depression and anxiety scales, and on self-rated depression, anxiety, and somatic symptoms. The control challenge had no effect, despite the fact that the reductions in plasma tryptophan during the control session were unexpectedly high. Some evidence was found for a threshold in the relationship between reduction of plasma tryptophan and mood response. Conclusions: The mood effect of TD in medicated, formerly depressed patients was confirmed. A threshold may exist for mood effects following TD, implying that recent negative findings may have been caused by insufficient depletion. No other predicting or mediating factors were identified, although the variable "history of response pattern to medication" deserves further study.
Integrative Literature Reviews : Guidelines and Examples
The integrative literature review is a distinctive form of research that generates new knowledge about the topic reviewed. Little guidance is available on how to write an integrative literature review. This article discusses how to organize and write an integrative literature review and cites examples of published integrative literature reviews that illustrate how this type of research has made substantive contributions to the knowledge base of human resource development.
Performance of Rotating Biological Contactor in Wastewater Treatment – A Review
The management of medium and small-scale industries finds it burdensome to treat waste when the cost involved is high. Hence there is broad scope for cheaper and more compact unit processes or ideal solutions to such issues. The rotating biological contactor is popular due to its simplicity, low energy consumption and small land requirement. Rotating biological contactors are fixed-film, moving-bed aerobic treatment processes which are able to sustain shock loadings. Unlike the activated sludge process (ASP), trickling filters, etc., the rotating biological contactor does not require recirculation of secondary sludge, and its hydraulic retention time is also low. This review paper focuses on work done by various investigators at different operating parameters using various kinds of industrial wastewater.
Audio event and scene recognition: A unified approach using strongly and weakly labeled data
In this paper we propose a novel learning framework called Supervised and Weakly Supervised Learning where the goal is to learn simultaneously from weakly and strongly labeled data. Strongly labeled data can be simply understood as fully supervised data where all labeled instances are available. In weakly supervised learning, the data is only weakly labeled, which prevents one from directly applying supervised learning methods. Our proposed framework is motivated by the fact that a small amount of strongly labeled data can give considerable improvement over only weakly supervised learning. The primary problem domain focus of this paper is acoustic event and scene detection in audio recordings. We first propose a naive formulation for leveraging labeled data in both forms. We then propose a more general framework for Supervised and Weakly Supervised Learning (SWSL). Based on this general framework, we propose a graph-based approach for SWSL. Our main method is based on manifold regularization on graphs, in which we show that the unified learning can be formulated as a constrained optimization problem that can be solved by the iterative concave-convex procedure (CCCP). Our experiments show that our proposed framework can address several concerns of audio content analysis using weakly labeled data.
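The core graph-smoothness idea can be illustrated with a much simpler special case than the full SWSL formulation: plain manifold regularization with a squared loss on the strongly labeled instances, which has a closed-form solution and needs no CCCP. The sketch below uses a toy graph and illustrative names, and omits the additional constraints handled in the paper.

```python
import numpy as np

def manifold_regularized_scores(W, y, labeled_mask, lam=1.0):
    """Propagate label scores over a similarity graph.

    W            : (n, n) symmetric similarity (adjacency) matrix
    y            : (n,) targets; entries outside labeled_mask are ignored
    labeled_mask : (n,) boolean, True for strongly labeled instances
    lam          : weight of the graph (manifold) smoothness penalty

    Minimizes  sum_labeled (f_i - y_i)^2 + lam * f^T L f,  where L is the graph
    Laplacian; the minimizer satisfies (J + lam*L) f = J y with J the labeled
    indicator, solved directly below.
    """
    L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian
    J = np.diag(labeled_mask.astype(float))   # indicator of labeled instances
    return np.linalg.solve(J + lam * L, J @ y)

# toy graph: 4 instances forming a chain, with only the two end points labeled
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, -1.0])
labeled = np.array([True, False, False, True])
print(manifold_regularized_scores(W, y, labeled, lam=0.5))
```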
Optimal Power Flow by Black Hole Optimization Algorithm
In this paper, a black hole optimization algorithm (BH) is utilized to solve the optimal power flow problem, considering the generation fuel cost, reduction of voltage deviation and improvement of voltage stability as objective functions. The black hole algorithm simulates the black hole phenomenon and relies on two operations, star absorption and star sucking. The IEEE 30-bus and IEEE 57-bus systems are used to illustrate the performance of the proposed algorithm, and results are compared with those in the literature.
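A generic sketch of the BH metaheuristic on a simple box-constrained test function is given below. It illustrates only the star absorption and star sucking (event horizon) operations; the power-flow constraints and the paper's objective functions are not modelled, and all constants are illustrative.

```python
import numpy as np

def black_hole_optimize(f, dim, n_stars=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimal Black Hole (BH) metaheuristic for box-constrained minimization.
    The best candidate (the "black hole") attracts all other candidates (the
    "stars"); any star that crosses the event horizon is absorbed and replaced
    by a newly generated star (star sucking)."""
    rng = np.random.default_rng(seed)
    stars = rng.uniform(lo, hi, size=(n_stars, dim))
    fitness = np.array([f(s) for s in stars])
    bh_idx = int(np.argmin(fitness))
    for _ in range(iters):
        bh, bh_fit = stars[bh_idx].copy(), fitness[bh_idx]
        # star absorption: move every star towards the black hole
        stars += rng.random((n_stars, 1)) * (bh - stars)
        fitness = np.array([f(s) for s in stars])
        # if a star becomes better than the black hole, they swap roles
        if fitness.min() < bh_fit:
            bh_idx = int(np.argmin(fitness))
            bh, bh_fit = stars[bh_idx].copy(), fitness[bh_idx]
        # event horizon radius (fitness-based, as in the usual BH formulation)
        radius = bh_fit / (fitness.sum() + 1e-12)
        inside = np.linalg.norm(stars - bh, axis=1) < radius
        inside[bh_idx] = False                       # never absorb the black hole
        if inside.any():                             # star sucking: re-initialize
            stars[inside] = rng.uniform(lo, hi, size=(int(inside.sum()), dim))
            fitness[inside] = np.array([f(s) for s in stars[inside]])
        stars[bh_idx], fitness[bh_idx] = bh, bh_fit  # keep the black hole fixed
    return stars[bh_idx], fitness[bh_idx]

best_x, best_val = black_hole_optimize(lambda x: float(np.sum(x**2)), dim=5)
print(best_val)   # close to 0 for the sphere test function
```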
Iododerma following sitz bath with povidone-iodine.
A 50-year-old man developed numerous pustules and bullae on the trunk and limbs 15 days after anal fissure surgery. The clinicopathological diagnosis was iododerma induced by topical povidone-iodine sitz baths postoperatively. Complete resolution occurred within 3 weeks using systemic corticosteroids and forced diuresis.
Not All Dialogues are Created Equal: Instance Weighting for Neural Conversational Models
Neural conversational models require substantial amounts of dialogue data to estimate their parameters and are therefore usually learned on large corpora such as chat forums, Twitter discussions or movie subtitles. These corpora are, however, often challenging to work with, notably due to their frequent lack of turn segmentation and the presence of multiple references external to the dialogue itself. This paper shows that these challenges can be mitigated by adding a weighting model into the neural architecture. The weighting model, which is itself estimated from dialogue data, associates each training example to a numerical weight that reflects its intrinsic quality for dialogue modelling. At training time, these sample weights are included into the empirical loss to be minimised. Evaluation results on retrieval-based models trained on movie and TV subtitles demonstrate that the inclusion of such a weighting model improves the model performance on unsupervised metrics.
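The weighting mechanism itself is simple to state: each training example's loss term is scaled by its quality weight before averaging. The sketch below shows this for a toy negative log-likelihood; in the paper the weights come from a learned weighting model, whereas here they are just given numbers for illustration.

```python
import numpy as np

def weighted_nll(log_probs, targets, weights):
    """Negative log-likelihood where each training example carries its own
    quality weight, included in the empirical loss as described above.

    log_probs : (n, vocab) log-probabilities predicted by the model
    targets   : (n,) index of the reference token/response per example
    weights   : (n,) per-example quality weights in [0, 1]
    """
    nll = -log_probs[np.arange(len(targets)), targets]
    return np.sum(weights * nll) / np.sum(weights)

# toy numbers: the second example is judged low-quality and down-weighted
log_probs = np.log(np.array([[0.7, 0.2, 0.1],
                             [0.1, 0.6, 0.3],
                             [0.2, 0.2, 0.6]]))
targets = np.array([0, 2, 2])
weights = np.array([1.0, 0.2, 0.9])
print(weighted_nll(log_probs, targets, weights))
```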
Superlattice-based thin-film thermoelectric modules with high cooling fluxes
In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.
A Feasibility Study of Autism Behavioral Markers in Spontaneous Facial, Visual, and Hand Movement Response Data
Autism spectrum disorder (ASD) is a neurodevelopmental disability with atypical traits in behavioral and physiological responses. These atypical traits in individuals with ASD may be too subtle and subjective to measure visually using tedious methods of scoring. Alternatively, the use of intrusive sensors in the measurement of psychophysical responses in individuals with ASD may likely cause inhibition and bias. This paper proposes a novel experimental protocol for non-intrusive sensing and analysis of facial expression, visual scanning, and eye-hand coordination to investigate behavioral markers for ASD. An institutional review board approved pilot study is conducted to collect the response data from two groups of subjects (ASD and control) while they engage in the tasks of visualization, recognition, and manipulation. For the first time in the ASD literature, the facial action coding system is used to classify spontaneous facial responses. Statistical analyses reveal significantly (p <0.01) higher prevalence of smile expression for the group with ASD with the eye-gaze significantly averted (p<0.05) from viewing the face in the visual stimuli. This uncontrolled manifestation of smile without proper visual engagement suggests impairment in reciprocal social communication, e.g., social smile. The group with ASD also reveals poor correlation in eye-gaze and hand movement data suggesting deficits in motor coordination while performing a dynamic manipulation task. The simultaneous sensing and analysis of multimodal response data may provide useful quantitative insights into ASD to facilitate early detection of symptoms for effective intervention planning.
Updated Three-Column Concept in surgical treatment for tibial plateau fractures - A prospective cohort study of 287 patients.
PURPOSE This study introduces an updated Three-Column Concept for the classification and treatment of complex tibial plateau fractures. A combined preoperative assessment of fracture morphology and injury mechanism is utilized to determine surgical approach, implant placement and fixation sequence. The effectiveness of this updated concept is demonstrated through evaluation of both clinical and radiographic outcome measures. PATIENTS AND METHODS From 2008 to 2012, 355 tibial plateau fractures were treated using the updated Three-Column Concept. Standard radiographic and computed tomography imaging were used to systematically assess and classify fracture patterns as follows: (1) identify column(s) injured and locate associated articular depression or comminution, (2) determine injury mechanism including varus/valgus and flexion/extension forces, and (3) determine surgical approach(es) as well as the location and function of applied fixation. Quality and maintenance of reduction and alignment, fracture healing, complications, and functional outcomes were assessed. RESULTS 287 treated fractures were followed up for a mean period of 44.5 months (range: 22-96). The mean time to radiographic bony union and full weight-bearing was 13.5 weeks (range: 10-28) and 14.8 weeks (range: 10-26) respectively. The average functional Knee Society Score was 93.0 (range: 80-95). The average range of motion of the affected knees was 1.5-121.5°. No significant difference was found in knee alignment between immediate and 18-month post-operative measurements. Additionally, no significant difference was found in functional scores and range of motion between one-, two- and three-column fracture groups. Twelve patients suffered superficial infection, one had limited skin necrosis and two had wound dehiscence, all of which healed with nonoperative management. Intraoperative vascular injury occurred in two patients. Fixation failure was not observed in any of the fractures treated. CONCLUSION An updated Three-Column Concept assessing fracture morphology and injury mechanism in tandem can be used to guide surgical treatment of tibial plateau fractures. Limited results demonstrate successful application of biologically friendly fixation constructs while avoiding fixation failure and associated complications of both simple and complex tibial plateau fractures. LEVEL OF EVIDENCE Level II, prospective cohort study.
Short text understanding through lexical-semantic analysis
Understanding short texts is crucial to many applications, but challenges abound. First, short texts do not always observe the syntax of a written language. As a result, traditional natural language processing methods cannot be easily applied. Second, short texts usually do not contain sufficient statistical signals to support many state-of-the-art approaches for text processing such as topic modeling. Third, short texts are usually more ambiguous. We argue that knowledge is needed in order to better understand short texts. In this work, we use lexical-semantic knowledge provided by a well-known semantic network for short text understanding. Our knowledge-intensive approach disrupts traditional methods for tasks such as text segmentation, part-of-speech tagging, and concept labeling, in the sense that we focus on semantics in all these tasks. We conduct a comprehensive performance evaluation on real-life data. The results show that knowledge is indispensable for short text understanding, and our knowledge-intensive approaches are effective in harvesting semantics of short texts.
Generic Virtual Memory Management for Operating System Kernels
We discuss the rationale and design of a Generic Memory management Interface, for a family of scalable operating systems. It consists of a general interface for managing virtual memory, independently of the underlying hardware architecture (e.g. paged versus segmented memory), and independently of the operating system kernel in which it is to be integrated. In particular, this interface provides abstractions for support of a single, consistent cache for both mapped objects and explicit I/O, and control of data caching in real memory. Data management policies are delegated to external managers. A portable implementation of the Generic Memory management Interface for paged architectures, the Paged Virtual Memory manager, is detailed. The PVM uses the novel history object technique for efficient deferred copying. The GMI is used by the Chorus Nucleus, in particular to support a distributed version of Unix. Performance measurements compare favorably with other systems.
Biodiesel production from Jatropha curcas: A review
Biodiesel has attracted considerable attention during the past decade as a renewable, biodegradable and non-toxic fuel alternative to fossil fuels. Biodiesel can be obtained from vegetable oils (both edible and non-edible) and from animal fat. Jatropha curcas Linnaeus, a multipurpose plant, contains a high amount of oil in its seeds which can be converted to biodiesel. J. curcas is probably the most highly promoted oilseed crop in the world at present. The availability and sustainability of sufficient supplies of less expensive feedstock in the form of vegetable oils, particularly J. curcas, and efficient processing technology will be crucial determinants of delivering a competitive biodiesel. The oil content, physicochemical properties and fatty acid composition of J. curcas reported in the literature are provided in this review. The fuel properties of Jatropha biodiesel are comparable to those of fossil diesel and conform to the American and European standards. The objective of this review is to give an update on the J. curcas L. plant, the production of biodiesel from the seed oil, research attempts to improve the technology of converting vegetable oil to biodiesel, and the fuel properties of Jatropha biodiesel. The technological methods that can be used to produce biodiesel are presented together with their advantages and disadvantages. The use of lipase as a biotechnological alternative to alkali and acid catalysis of transesterification, and its advantages, is discussed. There is a need to carry out research on the detoxification of the seed cake to increase the benefits from J. curcas. There is also a need to carry out life-cycle assessments and to study the environmental impacts of introducing large-scale plantations. There is still a dearth of research on the influence of various cultivation-related factors, their interactions, and their influence on seed yield. Many other areas of Jatropha curcas L. that need to be researched are pointed out in this review.
A survey based on Smart Homes system using Internet-of-Things
Internet-of-Things (IoT) is the expansion of internet services, and its applications are increasing rapidly. IoT has already been deployed in industrial wireless sensor networks (WSNs). The smart home is also one of the applications of IoT. Rapid growth in technologies and improvements in architecture raise many problems, such as how to manage and control the whole system, security at the server, security in smart homes, etc. This paper presents the architecture of IoT. Smart homes are those in which household devices and home appliances can be monitored and controlled remotely. When these household devices connect to the internet using a proper network architecture and standard protocols, the whole system can be called a smart home in an IoT environment, or an IoT-based smart home. Smart homes ease the home automation task. This paper presents not only the problems and challenges that arise in IoT and IoT-based smart home systems, but also some solutions that would help to overcome some of these problems and challenges.
The effect of a yoga intervention on alcohol and drug abuse risk in veteran and civilian women with posttraumatic stress disorder.
BACKGROUND Individuals with posttraumatic stress disorder (PTSD) often exhibit high-risk substance use behaviors. Complementary and alternative therapies are increasingly used for mental health disorders, although evidence is sparse. OBJECTIVES Investigate the effect of a yoga intervention on alcohol and drug abuse behaviors in women with PTSD. Secondary outcomes include changes in PTSD symptom perception and management and initiation of evidence-based therapies. MATERIALS AND METHODS The current investigation analyzed data from a pilot randomized controlled trial comparing a 12-session yoga intervention with an assessment control for women age 18 to 65 years with PTSD. The Alcohol Use Disorder Identification Test (AUDIT) and Drug Use Disorder Identification Test (DUDIT) were administered at baseline, after the intervention, and a 1-month follow-up. Linear mixed models were used to test the significance of the change in AUDIT and DUDIT scores over time. Treatment-seeking questions were compared by using Fisher exact tests. RESULTS The mean AUDIT and DUDIT scores decreased in the yoga group; in the control group, mean AUDIT score increased while mean DUDIT score remained stable. In the linear mixed models, the change in AUDIT and DUDIT scores over time did not differ significantly by group. Most yoga group participants reported a reduction in symptoms and improved symptom management. All participants expressed interest in psychotherapy for PTSD, although only two participants, both in the yoga group, initiated therapy. CONCLUSIONS Results from this pilot study suggest that a specialized yoga therapy may play a role in attenuating the symptoms of PTSD, reducing risk of alcohol and drug use, and promoting interest in evidence-based psychotherapy. Further research is needed to confirm and evaluate the strength of these effects.
Characterization of immunogenic properties of polyclonal T cell vaccine intended for the treatment of rheumatoid arthritis
Two-staged technology for obtaining polyclonal T cell vaccine intended for the treatment of rheumatoid arthritis is described. Stage 1 includes antigen-dependent cultural selection of patient’s T cells and stage 2 consists in their reproduction in the needed amounts by nonspecific mitogenic stimulation. T cell vaccination induces an effective specific anti-idiotypic immune response against T cells reactive to joint antigens. Vaccine therapy significantly reduces plasma level of IFN-γ and increases IL-4 level. The results indicate immunological efficiency and safety of polyclonal T cell vaccine in patients with rheumatoid arthritis.
Modeling the Detection of Textual Cyberbullying
The scourge of cyberbullying has assumed alarming proportions with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often resulting in tragic outcomes. We decompose the overall detection problem into detection of sensitive topics, lending itself into text classification sub-problems. We experiment with a corpus of 4500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers.
GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking
Model compression is essential for serving large deep neural nets on devices with limited resources or in applications that require real-time responses. As a case study, a neural language model usually consists of one or more recurrent layers sandwiched between an embedding layer used for representing input tokens and a softmax layer for generating output tokens. For problems with a very large vocabulary size, the embedding and the softmax matrices can account for more than half of the model size. For instance, the bigLSTM model achieves great performance on the One-Billion-Word (OBW) dataset with a vocabulary of around 800k words; its word embedding and softmax matrices use more than 6 GB of space and are responsible for over 90% of the model parameters. In this paper, we propose GroupReduce, a novel compression method for neural language models, based on vocabulary-partition (block) based low-rank matrix approximation and the inherent frequency distribution of tokens (the power-law distribution of words). The experimental results show our method can significantly outperform traditional compression methods such as low-rank approximation and pruning. On the OBW dataset, our method achieved a 6.6 times compression rate for the embedding and softmax matrices, and when combined with quantization, our method can achieve a 26 times compression rate, which translates to a factor of 12.8 times compression for the entire model with very little degradation in perplexity.
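A simplified sketch of the vocabulary-partitioned low-rank idea is given below: rows of the embedding matrix are sorted by word frequency, split into blocks, and each block is approximated by a truncated SVD whose rank shrinks for rarer words. This is only illustrative; the actual GroupReduce method refines the partition and per-block ranks jointly, and all sizes below are assumed toy values.

```python
import numpy as np

def blockwise_lowrank(E, word_freq, n_blocks=4, min_rank=8, max_rank=64):
    """Compress an embedding matrix by partitioning rows into frequency blocks
    and giving frequent blocks a higher SVD rank than rare ones."""
    order = np.argsort(-word_freq)                     # most frequent words first
    blocks = np.array_split(order, n_blocks)
    ranks = np.linspace(max_rank, min_rank, n_blocks).astype(int)
    factors = []
    for rows, r in zip(blocks, ranks):
        U, s, Vt = np.linalg.svd(E[rows], full_matrices=False)
        r = min(r, len(s))
        factors.append((rows, U[:, :r] * s[:r], Vt[:r]))   # rows ~ (U s) @ Vt
    return factors

def reconstruct(factors, shape):
    E_hat = np.zeros(shape)
    for rows, A, B in factors:
        E_hat[rows] = A @ B
    return E_hat

E = np.random.randn(1000, 128)                        # toy embedding matrix
freq = np.random.zipf(1.5, size=1000).astype(float)   # power-law word frequencies
factors = blockwise_lowrank(E, freq)
print(np.linalg.norm(E - reconstruct(factors, E.shape)) / np.linalg.norm(E))
```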
Social Brand Value and the Value Enhancing Role of Social Media Relationships for Brands
Due to the social media revolution and the emergence of communities, social networks, and user-generated content portals, prevalent branding concepts need to catch up with this reality. Given the importance of social ties, social interactions and social identity in the new media environment, there is a need to account for a relationship measure in marketing and branding. Based on the concept of social capital, we introduce the concept of social brand value, defined as the perceived value derived by exchange and interactions with other users of the brand within a community. In a qualitative study, marketing experts were interviewed; they highlighted the importance of social media activities but also indicated that they do not have a clear picture of what strategies should look like and how their success can be measured. A second, quantitative study was conducted which demonstrates the influence the social brand value construct has on consumers' brand evangelism and willingness to pay a price premium, and hence the value contribution of social brand value for consumers.
Complex PTSD - a better description for borderline personality disorder?
OBJECTIVE To consider the use of the diagnostic category 'complex posttraumatic stress disorder' (c-PTSD) as detailed in the forthcoming ICD-11 classification system as a less stigmatising, more clinically useful term, instead of the current DSM-5 defined condition of 'borderline personality disorder' (BPD). CONCLUSIONS Trauma, in its broadest definition, plays a key role in the development of both c-PTSD and BPD. Given this current lack of differentiation between these conditions, and the high stigma faced by people with BPD, it seems reasonable to consider using the diagnostic term 'complex posttraumatic stress disorder' to decrease stigma and provide a trauma-informed approach for BPD patients.
Distance protection of transmission line with infeed based on real-time simulator
In this paper, real-time simulation, analysis, and validation of the conventional distance relay based on MATLAB/Simulink and Real-Time LABoratory (RT-LAB) is presented. In addition to the detailed model of the six impedance measuring units of the distance relay, a power system model is implemented in Opal-RT's RT-LAB simulator. Some cases are highlighted to illustrate the modelling performance. Moreover, the effect of infeed current, which is an emerging issue for power system protection using the conventional distance relay, is evaluated in a real-time environment.
Self-Supervised Generative Adversarial Networks
Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labelled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, to close the gap between conditional and unconditional GANs. In particular, we allow the networks to collaborate on the task of representation learning, while being adversarial with respect to the classic GAN game. The role of self-supervision is to encourage the discriminator to learn meaningful feature representations which are not forgotten during training. We test empirically both the quality of the learned image representations, and the quality of the synthesized images. Under the same conditions, the self-supervised GAN attains a similar performance to state-of-the-art conditional counterparts. Finally, we show that this approach to fully unsupervised learning can be scaled to attain an FID of 33 on unconditional IMAGENET generation.
Emerging roles for telemedicine and smart technologies in dementia care
Demographic aging of the world population contributes to an increase in the number of persons diagnosed with dementia (PWD), with corresponding increases in health care expenditures. In addition, fewer family members are available to care for these individuals. Most care for PWD occurs in the home, and family members caring for PWD frequently suffer negative outcomes related to the stress and burden of observing their loved one's progressive memory and functional decline. Decreases in cognition and self-care also necessitate that the caregiver takes on new roles and responsibilities in care provision. Smart technologies are being developed to support family caregivers of PWD in a variety of ways, including provision of information and support resources online, wayfinding technology to support independent mobility of the PWD, monitoring systems to alert caregivers to changes in the PWD and their environment, navigation devices to track PWD experiencing wandering, and telemedicine and e-health services linking caregivers and PWD with health care providers. This paper will review current uses of these advancing technologies to support care of PWD. Challenges unique to widespread acceptance of technology will be addressed and future directions explored.
Acoustic Modeling Using Deep Belief Networks
Gaussian mixture models are currently the dominant technique for modeling the emission distribution of hidden Markov models for speech recognition. We show that better phone recognition on the TIMIT dataset can be achieved by replacing Gaussian mixture models by deep neural networks that contain many layers of features and a very large number of parameters. These networks are first pre-trained as a multi-layer generative model of a window of spectral feature vectors without making use of any discriminative information. Once the generative pre-training has designed the features, we perform discriminative fine-tuning using backpropagation to adjust the features slightly to make them better at predicting a probability distribution over the states of monophone hidden Markov models.
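The generative pre-training stage rests on training a stack of restricted Boltzmann machines, one layer at a time. The sketch below shows one-step contrastive divergence (CD-1) for a single binary RBM on toy data; real acoustic inputs would use Gaussian visible units over windows of spectral features, and the discriminative fine-tuning stage is not shown. All sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=64, epochs=10, lr=0.05, seed=0):
    """One-step contrastive divergence (CD-1) for a binary RBM. Greedily
    stacking several such layers gives the generative pre-training stage."""
    rng = np.random.default_rng(seed)
    n_visible = V.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden activations driven by the data
        h_prob = sigmoid(V @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # negative phase: one step of Gibbs sampling (reconstruction)
        v_prob = sigmoid(h_sample @ W.T + b_v)
        h_prob_neg = sigmoid(v_prob @ W + b_h)
        # CD-1 parameter updates
        W += lr * (V.T @ h_prob - v_prob.T @ h_prob_neg) / len(V)
        b_v += lr * (V - v_prob).mean(axis=0)
        b_h += lr * (h_prob - h_prob_neg).mean(axis=0)
    return W, b_v, b_h, sigmoid(V @ W + b_h)   # last item: features for the next layer

# toy binary "frames"; real inputs would be windows of spectral feature vectors
V = (np.random.rand(500, 100) > 0.5).astype(float)
W, b_v, b_h, H = train_rbm(V)
print(H.shape)   # (500, 64) -- hidden features to feed into the next RBM
```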
Subsets of symptomatic hand osteoarthritis in community-dwelling older adults in the United Kingdom: prevalence, inter-relationships, risk factor profiles and clinical characteristics at baseline and 3-years
OBJECTIVE To compare the population prevalence, inter-relationships, risk factor profiles and clinical characteristics of subsets of symptomatic hand osteoarthritis (OA) with a view to understanding their relative frequency and distinctiveness. METHOD 1076 community-dwelling adults with hand symptoms (60% women, mean age 64.7 years) were recruited and classified into pre-defined subsets using physical examination and standardised hand radiographs, scored with the Kellgren & Lawrence (K&L) and Verbruggen-Veys grading systems. Detailed information on selected risk factors was obtained from direct measurement (Body Mass Index (BMI)), self-complete questionnaires (excessive use of hands, previous hand injury) and medical record review (hypertension, dyslipidaemia, type 2 diabetes). Hand pain and disability were self-reported at baseline and 3-year follow-up using Australian/Canadian Osteoarthritis Hand Index (AUSCAN). RESULTS Crude population prevalence estimates for symptomatic hand OA subsets in the adult population aged 50 years and over were: thumb base OA (22.4%), nodal interphalangeal joint (IPJ) OA (15.5%), generalised hand OA (10.4%), non-nodal IPJ OA (4.9%), erosive OA (1.0%). Apart from thumb base OA, there was considerable overlap between the subsets. Erosive OA appeared the most distinctive with the highest female: male ratio, and the most disability at baseline and 3-years. A higher frequency of obesity, hypertension, dyslipidaemia, and metabolic syndrome was observed in this subset. CONCLUSION Overlap in the occurrence of hand OA subsets poses conceptual and practical challenges to the pursuit of distinct phenotypes. Erosive OA may nevertheless provide particular insight into the role of metabolic and cardiovascular risk factors in the pathogenesis of OA.
Multi-automated vehicle coordination using decoupled prioritized path planning for multi-lane one- and bi-directional traffic flow control
Within the context of autonomous driving, this paper presents a method for the coordination of multiple automated vehicles using priority schemes for decoupled motion planning for multi-lane one- and bi-directional traffic flow control. The focus is on tube-like roads and non-zero velocities (no complete standstill maneuvers). We assume inter-vehicular communication (car-2-car) and a centralized or decentralized coordination service. We distinguish between different driving modes including adaptive cruise control (ACC) and obstacle avoidance (OA) for the handling of dynamic driving situations. We further assume that any controllable vehicle is equipped with proprioceptive and exteroceptive sensors for environment perception within a particular range field. In case of failure of the inter-vehicle communication system, the controllable vehicles can act as autonomous vehicles. The motivation is the control of a) one-directional multi-lane roads available for automated as well as unautomated objects with potentially, but not necessarily, varying reference speeds, and b) bi-directional traffic flow control making use of all available lanes, allowing, in general, object- and direction-wise variable reference speeds. For the one-directional case, we discuss a suitable deterministic priority scheme for throughput maximization and for quickly reaching a platooning state. For the bi-directional scenario, we derive a binary integer linear program (BILP) for the assignment of lanes to one of the two road traversal directions that can be solved optimally via linear programming (LP). The approach is evaluated on three numerical simulation scenarios.
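To make the lane-assignment idea more tangible, here is a deliberately simplified toy version of a binary lane-to-direction assignment solved through its LP relaxation with SciPy; the throughput weights, the single pair of constraints, and the integrality of the LP vertices apply only to this toy polytope and are not the BILP formulated in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy lane-direction assignment: x[l] = 1 if lane l serves direction A, 0 if direction B.
# demand_A[l], demand_B[l]: hypothetical throughput gained by giving lane l to A or B.
demand_A = np.array([5.0, 4.0, 1.0, 0.5])
demand_B = np.array([1.0, 2.0, 3.0, 4.0])
n = len(demand_A)

# Maximize sum(demand_A*x + demand_B*(1-x))  <=>  minimize -(demand_A - demand_B) @ x.
c = -(demand_A - demand_B)

# Keep at least one lane per direction: 1 <= sum(x) <= n - 1.
A_ub = np.vstack([np.ones((1, n)), -np.ones((1, n))])
b_ub = np.array([n - 1.0, -1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
assignment = np.round(res.x).astype(int)  # integral at an LP vertex for this toy polytope
print("lane serves direction A?", assignment)
```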
Sequential bortezomib, dexamethasone, and thalidomide maintenance therapy after single autologous peripheral stem cell transplantation in patients with multiple myeloma.
We report feasibility and response results of a phase II study investigating prolonged weekly bortezomib and dexamethasone followed by thalidomide and dexamethasone as maintenance therapy after single autologous stem cell transplantation (ASCT) in patients with multiple myeloma. Within 4 to 8 weeks of ASCT, patients received weekly bortezomib and dexamethasone for six cycles, followed by thalidomide and dexamethasone for six more cycles. Thalidomide alone was continued until disease progression. Forty-five patients underwent ASCT. Forty patients started maintenance therapy; of these, 36 patients received four cycles, and 32 completed six cycles of maintenance bortezomib. Of these 40 patients, nine (22%) were in complete response (CR) before ASCT, 13 (32%) achieved CR after ASCT but before bortezomib maintenance therapy, and 21 (53%) achieved CR after bortezomib maintenance therapy. Nine patients not previously in CR (33%) upgraded their response to CR with bortezomib maintenance. At 1 year post-ASCT, 20 patients achieved CR, and two achieved very good partial response. Twenty-seven patients experienced peripheral neuropathy during bortezomib therapy, all grade 1 or 2. Our findings indicate that prolonged sequential weekly bortezomib, dexamethasone, and thalidomide maintenance therapy after single ASCT is feasible and well tolerated. Bortezomib maintenance treatment upgraded post-ASCT CR responses with no severe grade 3/4 peripheral neuropathy.
Recognition and Retrieval Processes in Free Recall
A model of free recall is described which identifies two processes in free recall: a retrieval process by which the subject accesses the words, and a recognition process by which the subject decides whether an implicitly retrieved word is a to-be-recalled word. Submodels for the recognition process and the retrieval process are described. The recognition model assumes that during the study phase, the subject associates "list markers" to the to-be-recalled words. The establishment of such associates is postulated to be an all-or-none stochastic process. In the test phase, the subject recognizes to-be-recalled words by deciding which words have relevant list markers as associates. A signal detectability model is developed for this decision process. The retrieval model is introduced as a computer program that tags associative paths between list words. In several experiments, subjects studied and were tested on a sequence of overlapping sublists sampled from a master set of common nouns. The two-process model predicts that the subject's ability to retrieve the words should increase as more overlapping sublists are studied, but his ability to differentiate the words on the most recent list should deteriorate. Experiments confirmed this predicted dissociation of recognition and retrieval. Further predictions derived from the free recall model were also supported.
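For readers less familiar with the signal-detectability framework invoked for the recognition decision, its generic form (not the paper's specific parameterization) is

$$ d' = z(H) - z(F), \qquad \text{respond "to-be-recalled"} \iff \frac{f_S(x)}{f_N(x)} \ge \beta, $$

where $H$ and $F$ are the hit and false-alarm rates, $z(\cdot)$ is the inverse standard normal distribution function, $x$ is the familiarity (list-marker strength) of an implicitly retrieved word, and $f_S$, $f_N$ are its densities for to-be-recalled and other words.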
Differential regulation of lipoprotein kinetics by atorvastatin and fenofibrate in subjects with the metabolic syndrome.
The metabolic syndrome is characterized by insulin resistance and abnormal apolipoprotein AI (apoAI) and apolipoprotein B-100 (apoB) metabolism that may collectively accelerate atherosclerosis. The effects of atorvastatin (40 mg/day) and micronised fenofibrate (200 mg/day) on the kinetics of apoAI and apoB were investigated in a controlled cross-over trial of 11 dyslipidemic men with the metabolic syndrome. ApoAI and apoB kinetics were studied following intravenous d(3)-leucine administration using gas-chromatography mass spectrometry with data analyzed by compartmental modeling. Compared with placebo, atorvastatin significantly decreased (P < 0.001) plasma concentrations of cholesterol, triglyceride, LDL cholesterol, VLDL apoB, intermediate-density lipoprotein (IDL) apoB, and LDL apoB. Fenofibrate significantly decreased (P < 0.001) plasma triglyceride and VLDL apoB and elevated HDL(2) cholesterol (P < 0.001), HDL(3) cholesterol (P < 0.01), apoAI (P = 0.01), and apoAII (P < 0.001) concentrations, but it did not significantly alter LDL cholesterol. Atorvastatin significantly increased (P < 0.002) the fractional catabolic rate (FCR) of VLDL apoB, IDL apoB, and LDL apoB but did not affect the production of apoB in any lipoprotein fraction or in the turnover of apoAI. Fenofibrate significantly increased (P < 0.01) the FCR of VLDL, IDL, and LDL apoB but did not affect the production of VLDL apoB. Relative to placebo and atorvastatin, fenofibrate significantly increased the production (P < 0.001) and FCR (P = 0.016) of apoAI. Both agents significantly lowered plasma triglycerides and apoCIII concentrations, but only atorvastatin significantly lowered (P < 0.001) plasma cholesteryl ester transfer protein activity. Neither treatment altered insulin resistance. In conclusion, these differential effects of atorvastatin and fenofibrate on apoAI and apoB kinetics support the use of combination therapy for optimally regulating dyslipoproteinemia in the metabolic syndrome.
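For context on the kinetic quantities above, compartmental tracer analyses relate the fractional catabolic rate (FCR), the plasma pool size, and the production rate through the steady-state identity (a standard relation, not a result specific to this study):

$$ \text{Production rate} = \mathrm{FCR} \times \frac{[\text{apolipoprotein}] \times V_{\text{plasma}}}{\text{body weight}}, $$

so that, at steady state, a rise in FCR without a change in production lowers the plasma apolipoprotein concentration.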
Approximating the Colorful Caratheodory Theorem (システム数理と応用)
Let $P_1, \dots, P_{d+1} \subset \mathbb{R}^d$ be d-dimensional point sets such that the convex hull of each $P_i$ contains the origin. We call the sets $P_i$ color classes, and we think of the points in $P_i$ as having color i. A colorful choice is a set with at most one point from each color class. The colorful Carathéodory theorem guarantees the existence of a colorful choice whose convex hull contains the origin. So far, the computational complexity of finding such a colorful choice is unknown. An m-colorful choice is a set that contains at most m points from each color class. We present an approximation algorithm that, for any constant $\varepsilon > 0$, computes an $\lceil \varepsilon(d+1) \rceil$-colorful choice containing the origin in its convex hull in polynomial time. This notion of approximation has not been studied before, and it is motivated by the applications of the colorful Carathéodory theorem in the literature. Second, we show that the exact problem can be solved in $d^{O(\log d)}$ time if $\Theta(d \log d)$ color classes are available, improving over the trivial $d^{O(d)}$-time algorithm.
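The core geometric predicate behind both the exact and the approximate problem, namely testing whether the origin lies in the convex hull of a candidate (colorful) choice, reduces to a small linear feasibility problem; the SciPy sketch below illustrates this test on a hypothetical 2-D choice and is not the approximation algorithm of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def origin_in_convex_hull(points):
    """Check 0 in conv(points) via LP feasibility:
    find lambda >= 0 with sum(lambda) = 1 and points.T @ lambda = 0."""
    pts = np.asarray(points, dtype=float)            # shape (n, d)
    n, d = pts.shape
    A_eq = np.vstack([pts.T, np.ones((1, n))])       # d equality rows + convexity row
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0                           # status 0 means a feasible point was found

# Hypothetical 2-D colorful choice: one point per color class, surrounding the origin.
choice = [(1.0, 0.0), (-0.5, 1.0), (-0.5, -1.0)]
print(origin_in_convex_hull(choice))                 # True
```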
Reaction intermediates of CO oxidation on gas phase Pd4 clusters: a density functional study.
Density functional theory (DFT) studies have revealed the energetically favorable reaction paths for oxidation of CO on the Pd(4) cluster. Adsorption of various species such as O(2), 2O, O, CO, and CO(2), and of coadsorbate combinations, including O(2)+CO, 2O+CO, O+CO, and O+CO(2), on neutral, cationic, and anionic Pd(4) clusters was investigated. The results indicate that Pd(4)(+) and Pd(4) are more effective for catalyzing CO oxidation than Pd(4)(-). It is further observed that dissociated oxygen is a better oxidant for CO oxidation on Pd(4)(q) (q = 0, 1, -1) than molecular or atomic oxygen.
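The relative stabilities behind such DFT comparisons are usually expressed through adsorption energies of the form (a standard definition; the functional, basis set, and reference energies of the study are not reproduced here):

$$ E_{\mathrm{ads}} = E(\mathrm{Pd}_4^{\,q}\text{--}X) - E(\mathrm{Pd}_4^{\,q}) - E(X), \qquad X \in \{\mathrm{O}_2,\ \mathrm{O},\ \mathrm{CO},\ \mathrm{CO}_2,\ \dots\}, $$

with more negative values indicating stronger binding to the cluster.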
The DARE study of relapse prevention in depression: design for a phase 1/2 translational randomised controlled trial involving mindfulness-based cognitive therapy and supported self monitoring
BACKGROUND Depression is a common condition that typically has a relapsing course. Effective interventions targeting relapse have the potential to dramatically reduce the point prevalence of the condition. Mindfulness-based cognitive therapy (MBCT) is a group-based intervention that has shown efficacy in reducing depressive relapse. While trials of MBCT to date have met the core requirements of phase 1 translational research, there is a need now to move to phase 2 translational research - the application of MBCT within real-world settings with a view to informing policy and clinical practice. The aim of this trial is to examine the clinical impact and health economics of MBCT under real-world conditions and where efforts have been made to assess for and prevent resentful demoralization among the control group. Secondary aims of the project involve extending the phase 1 agenda to an examination of the effects of co-morbidity and mechanisms of action. METHODS/DESIGN This study is designed as a prospective, multi-site, single-blind, randomised controlled trial using a group comparison design that compares the intervention, MBCT, with a self-monitoring comparison condition, Depression Relapse Active Monitoring (DRAM). Follow-up is over 2 years. The study is designed to recruit, from primary and secondary care, 204 participants who have a history of 3 or more episodes of Major Depression but who are currently well. Measures assessing depressive relapse/recurrence, time to first clinical intervention, treatment expectancy and a range of secondary outcomes and process variables are included. A health economics evaluation will be undertaken to assess the incremental cost of MBCT. DISCUSSION The results of this trial, including an examination of clinical, functional and health economic outcomes, will be used to assess the role that this treatment approach may have in recommendations for treatment of depression in Australia and elsewhere. If the findings are positive, we expect that this research will consolidate the evidence base to guide the decision to fund MBCT and to seek to promote its availability to those who have experienced at least 3 episodes of depression. TRIAL REGISTRATION Australian New Zealand Clinical Trials Registry: ACTRN12607000166471.
A breast cancer nutrition adjuvant study (NAS): Protocol design and initial patient adherence
To evaluate the feasibility of using a reduction in dietary fat intake as a component of treatment regimens for patients with resected breast cancer, a multi-disciplinary cooperative group protocol was developed. Females 50 to 75 years of age with stage II breast cancer who completed primary local therapy were eligible for randomization to a Control Dietary Group in which dietary fat intake was to remain unchanged from baseline level (at approximately 38% of calories derived from fat) and an Intensive Intervention Dietary Group designed to reduce dietary fat intake. Both Dietary Groups were given tamoxifen 20 mg/day. To facilitate early experience with dietary regimen delivery, patients entered during an initial pilot phase could receive any chemotherapy and/or hormonal treatment. A prerandomization nutrition ‘run-in’ of clinically eligible patients assessed adherence to nutrition data collection procedures and screened patients for nutrition eligibility criteria. Of 59 patients beginning ‘run-in’, 49 were randomized and, at present, 32 have completed at least three months follow-up. The change in dietary fat intake (as assessed by Four Day Food Records) seen in both arms is outlined below.
SSNet: Scale Selection Network for Online 3D Action Prediction
In action prediction (early action recognition), the goal is to predict the class label of an ongoing action using its observed part so far. In this paper, we focus on online action prediction in streaming 3D skeleton sequences. A dilated convolutional network is introduced to model the motion dynamics in temporal dimension via a sliding window over the time axis. As there are significant temporal scale variations of the observed part of the ongoing action at different progress levels, we propose a novel window scale selection scheme to make our network focus on the performed part of the ongoing action and try to suppress the noise from the previous actions at each time step. Furthermore, an activation sharing scheme is proposed to deal with the overlapping computations among the adjacent steps, which allows our model to run more efficiently. The extensive experiments on two challenging datasets show the effectiveness of the proposed action prediction framework.
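As a rough sketch of the temporal modelling ingredient described above, the NumPy function below implements a plain 1-D dilated causal convolution over a stream of frame features; the feature dimensions, kernel length, and dilation are invented for illustration, and the window scale selection and activation sharing schemes of SSNet are not reproduced.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Minimal 1-D dilated causal convolution.
    x: (T, C_in) frame features; w: (K, C_in, C_out) kernel; output: (T, C_out).
    Each output step t only sees frames t, t-dilation, ..., t-(K-1)*dilation."""
    T, C_in = x.shape
    K, _, C_out = w.shape
    pad = (K - 1) * dilation                      # left-pad so no future frame is accessed
    xp = np.vstack([np.zeros((pad, C_in)), x])
    y = np.zeros((T, C_out))
    for t in range(T):
        for k in range(K):
            y[t] += xp[t + pad - k * dilation] @ w[K - 1 - k]
    return y

# Toy skeleton-feature stream: 20 frames of 8-dimensional features, kernel size 2.
x = np.random.default_rng(0).standard_normal((20, 8))
w = np.random.default_rng(1).standard_normal((2, 8, 16)) * 0.1
y = dilated_causal_conv1d(x, w, dilation=4)       # receptive field spans 5 frames
print(y.shape)                                    # (20, 16)
```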
The Scientific Association of the Lake of Constance Psychiatrists 1919-1932: Psychoanalytic Contributions to the Treatment of Psychoses
From 1919 until 1932, psychiatrists from the mental hospitals near Lake Constance and from the Bellevue-Sanatorium in Kreuzlingen/Switzerland organized regular scientific meetings, where well-known representatives of all the relevant psychiatric and psychotherapeutic schools and important phenomenological philosophers came together. Getting in touch with psychoanalytical and social psychiatric trends led to a quite lively and fruitful exchange in theory and practice. Through the Reichenau psychiatrist Alfred Schwenninger, Ludwig Binswanger got to know personally the phenomenological philosophers Alexander Pfänder and Edmund Husserl, who also held lectures at meetings there. Hans Wolfgang Maier from the Burghölzli in Zurich and his colleagues from Herisau and Münsterlingen concentrated on the mainly social psychiatric and forensic work they did. Quite a few of the lectures were published in the Zeitschrift für die gesamte Neurologie und Psychiatrie. The paper concentrates on contributions related to...
Learning to Accept New Classes without Training
Classic supervised learning makes the closed-world assumption, meaning that classes seen in testing must have been seen in training. However, in the dynamic world, new or unseen class examples may appear constantly. A model working in such an environment must be able to reject unseen classes (not seen or used in training). If enough data is collected for the unseen classes, the system should incrementally learn to accept/classify them. This learning paradigm is called open-world learning (OWL). Existing OWL methods all need some form of re-training to accept or include the new classes in the overall model. In this paper, we propose a meta-learning approach to the problem. Its key novelty is that it only needs to train a meta-classifier, which can then continually accept new classes when they have enough labeled data for the meta-classifier to use, and also detect/reject future unseen classes. No re-training of the meta-classifier or a new overall classifier covering all old and new classes is needed. In testing, the method only uses the examples of the seen classes (including the newly added classes) on-the-fly for classification and rejection. Experimental results demonstrate the effectiveness of the new approach.
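The flavour of on-the-fly classification with rejection can be conveyed by a deliberately simplified stand-in for the meta-classifier, shown below: classes are represented by prototypes computed from their few labeled examples at test time, and a query whose best similarity falls below a threshold is rejected as unseen. The embedding dimension, threshold, and prototype rule are assumptions for illustration, not the method proposed in the paper.

```python
import numpy as np

def classify_or_reject(query, class_examples, threshold=0.5):
    """Simplified stand-in for open-world classification with rejection:
    compare the query to the mean embedding of each currently seen class and
    reject ("unseen") if the best cosine similarity falls below a threshold.
    A new class is accepted simply by adding its examples to `class_examples`."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best_label, best_sim = None, -1.0
    for label, examples in class_examples.items():
        proto = np.mean(examples, axis=0)          # class prototype from its few examples
        sim = cosine(query, proto)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else "unseen"

rng = np.random.default_rng(0)
seen = {"cat": rng.standard_normal((5, 32)) + 2.0,
        "dog": rng.standard_normal((5, 32)) - 2.0}
print(classify_or_reject(rng.standard_normal(32) + 2.0, seen))   # likely "cat"
seen["bird"] = rng.standard_normal((5, 32)) * 5.0                # accepting a new class: no re-training
```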
A Robust and Efficient Approach to License Plate Detection
This paper presents a robust and efficient method for license plate detection with the purpose of accurately localizing vehicle license plates from complex scenes in real time. A simple yet effective image downscaling method is first proposed to substantially accelerate license plate localization without sacrificing detection performance compared with that achieved using the original image. Furthermore, a novel line density filter approach is proposed to extract candidate regions, thereby significantly reducing the area to be analyzed for license plate localization. Moreover, a cascaded license plate classifier based on linear support vector machines using color saliency features is introduced to identify the true license plate from among the candidate regions. For performance evaluation, a data set consisting of 3977 images captured from diverse scenes under different conditions is also presented. Extensive experiments on the widely used Caltech license plate data set and our newly introduced data set demonstrate that the proposed approach substantially outperforms state-of-the-art methods in terms of both detection accuracy and run-time efficiency, increasing the detection ratio from 91.09% to 96.62% while decreasing the run time from 672 to 42 ms for processing an image with a resolution of $1082\times 728$. The executable code and our collected data set are publicly available.
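The intuition behind filtering candidate regions by edge density can be sketched with a few lines of NumPy, shown below; this crude row-wise heuristic stands in for, but is not, the line density filter proposed in the paper, and the window size and threshold are arbitrary illustrative choices.

```python
import numpy as np

def candidate_rows(gray, win=20, keep_ratio=0.3):
    """Crude illustration of an edge-density heuristic for plate candidates
    (not the paper's line density filter): license plates tend to produce many
    vertical edges, so rows whose windowed vertical-edge energy is high are
    kept as candidate bands for further classification."""
    g = gray.astype(float)
    vert_edges = np.abs(np.diff(g, axis=1)).sum(axis=1)       # per-row vertical-edge energy
    kernel = np.ones(win) / win
    density = np.convolve(vert_edges, kernel, mode="same")    # smooth over a row window
    return np.where(density >= density.max() * keep_ratio)[0] # indices of candidate rows

# Toy "image": flat background with one high-contrast striped band (a fake plate).
img = np.full((120, 200), 128, dtype=np.uint8)
img[60:75, 50:150] = np.tile(np.array([0, 255], dtype=np.uint8), 50)
print(candidate_rows(img)[:5])
```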
Design of high PSRR folded cascode operational amplifier for LDO applications
This paper presents a novel CMOS folded cascode operational amplifier that achieves high PSRR and provides gain nearly equal to that of a two-stage op-amp. The proposed design is implemented in GPDK 0.18 μm CMOS technology. The op-amp uses a folded cascode structure in the output stage combined with a differential amplifier having PMOS input transistors to achieve good input common-mode range and lower flicker noise. An important feature is that it allows the input common-mode level to approach the supply voltage. The proposed topology improves the PSRR of the op-amp, making it suitable for LDO applications. Simulations in Cadence at 1.8 V show a DC gain of 72.0404 dB and a phase margin of 62.4636 degrees at a unity-gain bandwidth of 13.33 MHz, with a power consumption below 0.13 mW and a PSRR of 72.0966 dB. The layout shows that the chip area occupied by the design is approximately 8897.27 μm².
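For reference, the reported power-supply rejection ratio follows the usual textbook definition (stated here generically, not derived from this particular design):

$$ \mathrm{PSRR} = 20 \log_{10} \left| \frac{A_{dm}}{A_{ps}} \right| \ \mathrm{dB}, $$

where $A_{dm}$ is the differential-mode gain from the op-amp inputs to the output and $A_{ps}$ is the small-signal gain from supply ripple to the output, so a large PSRR means supply noise is strongly attenuated relative to the signal path.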