title | abstract |
---|---|
Effect of homeopathic Lycopodium clavatum on memory functions and cerebral blood flow in memory-impaired rats. | BACKGROUND
Lycopodium clavatum (Lyc) is a widely used homeopathic medicine for liver, urinary and digestive disorders. Recently, acetylcholinesterase (AChE) inhibitory activity has been found in Lyc alkaloid extract, which could be beneficial in dementia disorders. However, the effect of Lyc has not yet been explored in an animal model of memory impairment or on cerebral blood flow.
AIM
The present study was planned to explore the effect of Lyc on learning and memory function and cerebral blood flow (CBF) in rats with memory impairment induced by intracerebroventricularly (ICV) administered streptozotocin (STZ).
MATERIALS AND METHODS
Memory deficit was induced by ICV administration of STZ (3 mg/kg) in rats on the 1st and 3rd days. Male SD rats were treated with Lyc Mother Tincture (MT) and with Lyc 30, 200 and 1000 potencies for 17 days. Learning and memory were evaluated by the Morris water maze test on the 14th, 15th and 16th days. CBF was measured by laser Doppler flowmetry on the 17th day.
RESULTS
STZ (ICV)-treated rats showed impairment in learning and memory along with reduced CBF. Lyc MT and Lyc 200 produced improvement in learning and memory, and CBF was increased in STZ (ICV)-treated rats at all the Lyc potencies studied.
CONCLUSION
The above study suggests that Lyc may be used as a drug of choice in conditions of memory impairment due to its beneficial effect on CBF. |
Development of An Android Application for Object Detection Based on Color, Shape, or Local Features | Object detection and recognition is an important task in many computer vision applications. In this paper, an Android application was developed using the Eclipse IDE and the OpenCV3 library. The application detects objects in an image loaded from the mobile gallery based on color, shape, or local features. The image is processed in the HSV color domain for better color detection. Circular shapes are detected using the Circular Hough Transform, and other shapes are detected using the Douglas-Peucker algorithm. BRISK (binary robust invariant scalable keypoints) local features are used to match an object image against a scene image. The steps of the proposed detection algorithms are described, and the interfaces of the application are illustrated. The application was ported and tested on Galaxy S3, S6, and Note 1 smartphones. Based on the experimental results, the application is capable of detecting eleven different colors, detecting two-dimensional geometrical shapes including circles, rectangles, triangles, and squares, and correctly matching local features of object and scene images under different conditions. The application can be used standalone or as part of a larger application such as robot systems, traffic systems, e-learning applications, and information retrieval. |
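To make the detection pipeline above concrete, here is a minimal sketch of its two simplest steps, HSV color masking and Circular Hough Transform circle detection, written in Python with OpenCV rather than the paper's Java/Android stack; the threshold values and the input file name are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def detect_red_mask(bgr_image):
    """Mask pixels in an assumed HSV range for red (hue wraps around 0)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))
    return low | high

def detect_circles(bgr_image):
    """Find circular shapes with the Circular Hough Transform."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before Hough voting
    return cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                            param1=100, param2=30, minRadius=10, maxRadius=200)

if __name__ == "__main__":
    img = cv2.imread("gallery_image.jpg")  # hypothetical input path
    if img is not None:
        print(detect_red_mask(img).mean(), detect_circles(img))
```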
Common and rare variants in SCN10A modulate the risk of atrial fibrillation. | BACKGROUND
Genome-wide association studies have shown that the common single nucleotide polymorphism rs6800541, located in SCN10A, which encodes the voltage-gated Nav1.8 sodium channel, is associated with PR-interval prolongation and atrial fibrillation (AF). Single nucleotide polymorphism rs6800541 is in high linkage disequilibrium with the nonsynonymous variant in SCN10A, rs6795970 (V1073A, r^2 = 0.933). We therefore sought to determine whether common and rare SCN10A variants are associated with early-onset AF.
METHODS AND RESULTS
SCN10A was sequenced in 225 AF patients in whom there was no evidence of other cardiovascular disease or dysfunction (lone AF). In an association study of the rs6795970 single nucleotide polymorphism, we included 515 AF patients and 2 control cohorts of 730 individuals free of AF and 6161 randomly sampled individuals. Functional characterization of SCN10A variants was performed by whole-cell patch-clamping. In the lone AF cohort, 9 rare missense variants and 1 splice site donor variant were detected. Interestingly, AF patients were found to have a higher G allele frequency of rs6795970, which encodes the alanine variant at position 1073 (described from here on as A1073; odds ratio = 1.35 [1.16-1.54]; P = 2.3×10^-5). Both of the common variants, A1073 and P1092, induced a gain of channel function, whereas the rare missense variants, V94G and R1588Q, resulted in a loss of channel function.
CONCLUSIONS
The common variant A1073 is associated with increased susceptibility to AF. Both rare and common variants affect the function of the channel, indicating that these variants influence susceptibility to AF. Hence, our study suggests that SCN10A variations are involved in the genesis of AF. |
Plasma vitamin D and prostate cancer risk: results from the Selenium and Vitamin E Cancer Prevention Trial. | BACKGROUND
In vitro, animal, and ecological studies suggest that inadequate vitamin D intake could increase prostate cancer risk, but results of biomarker-based longitudinal studies are inconsistent.
METHODS
Data for this case (n = 1,731) and cohort (n = 3,203) analysis are from the Selenium and Vitamin E Cancer Prevention Trial. Cox proportional hazards models were used to test whether baseline plasma vitamin D (25-hydroxy) concentration, adjusted for season of blood collection, was associated with the risk of total and Gleason score 2-6, 7-10, and 8-10 prostate cancer (a sketch of this type of model follows this abstract).
RESULTS
There was a U-shaped association of vitamin D with total cancer risk: compared with the first quintile, HRs were 0.83 [95% confidence interval (CI), 0.66-1.03; P = 0.092], 0.74 (95% CI, 0.59-0.92; P = 0.008), 0.86 (95% CI, 0.69-1.07; P = 0.181), and 0.98 (95% CI, 0.78-1.21; P = 0.823) for the second through fifth quintiles, respectively. For Gleason 7-10 cancer, corresponding HRs were 0.63 (95% CI, 0.45-0.90; P = 0.010), 0.66 (95% CI, 0.47-0.92; P = 0.016), 0.79 (95% CI, 0.56-1.10; P = 0.165), and 0.88 (95% CI, 0.63-1.22; P = 0.436). Among African American men (n = 250 cases), higher vitamin D was associated with reduced risk of Gleason 7-10 cancer only: in the a posteriori contrast of quintiles 1-2 versus 3-5, the HR was 0.55 (95% CI, 0.31-0.97; P = 0.037), with no evidence of dose-response or a U-shaped association.
CONCLUSIONS
Both low and high vitamin D concentrations were associated with increased risk of prostate cancer, and more strongly for high-grade disease.
IMPACT
The optimal range of circulating vitamin D for prostate cancer prevention may be narrow. Supplementation of men with adequate levels may be harmful. Cancer Epidemiol Biomarkers Prev; 23(8); 1494-504. ©2014 AACR. |
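As a hedged illustration of the season-adjusted Cox proportional hazards analysis described in the METHODS above, the following Python snippet fits such a model with the lifelines library; the column names and the synthetic data are assumptions for illustration only, not the trial's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "followup_years": rng.exponential(5.0, n),   # time to event or censoring
    "prostate_cancer": rng.integers(0, 2, n),    # event indicator
    "vitd_quintile": rng.integers(1, 6, n),      # plasma 25(OH)D quintile
    "season": rng.integers(0, 4, n),             # season of blood collection
})
# Dummy-code quintiles and season so each HR is relative to a reference level
# (the first quintile, as in the abstract's comparisons).
df = pd.get_dummies(df, columns=["vitd_quintile", "season"],
                    drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="prostate_cancer")
cph.print_summary()  # hazard ratios appear in the exp(coef) column
```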
Location and Time Aware Social Collaborative Retrieval for New Successive Point-of-Interest Recommendation | In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task that regards the POI a user currently visits as a POI-related query and recommends new POIs the user has not visited before. While carefully designed methods have been proposed to solve this problem, they ignore the essence of the task, which involves a retrieval and a recommendation problem simultaneously, and fail to employ social relations or temporal information adequately to improve the results.
To solve this problem, we propose a new model called the location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model, which leverages the weighted approximately ranked pairwise (WARP) loss to achieve better top-n ranking results, just as the new successive POI recommendation task needs. We conducted comprehensive experiments on publicly available datasets and demonstrated the power of the proposed method, with 46.6% growth in Precision@5 and 47.3% improvement in Recall@5 over the best previous method. |
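Since the model above hinges on the WARP loss, a minimal numpy sketch of the core WARP idea follows: sample negative items until one violates the margin, then weight the update by a log transform of the estimated rank of the positive item. The dot-product scoring, learning rate, and array shapes are illustrative assumptions, not LTSCR itself.

```python
import numpy as np

def warp_step(user_vec, item_vecs, pos_idx, lr=0.05, margin=1.0, rng=None):
    """One SGD step of a WARP-style ranking update for a (user, positive-item) pair."""
    rng = rng or np.random.default_rng()
    n_items = item_vecs.shape[0]
    pos_score = user_vec @ item_vecs[pos_idx]
    for trials in range(1, n_items):
        neg_idx = int(rng.integers(n_items))
        if neg_idx == pos_idx:
            continue
        if user_vec @ item_vecs[neg_idx] > pos_score - margin:  # margin violated
            # Rank estimate ~ (n_items - 1) / trials: a violation found quickly
            # implies a badly ranked positive, so the update is weighted more.
            weight = np.log1p((n_items - 1) // trials)
            diff = item_vecs[pos_idx] - item_vecs[neg_idx]
            item_vecs[pos_idx] += lr * weight * user_vec  # raise positive score
            item_vecs[neg_idx] -= lr * weight * user_vec  # lower negative score
            user_vec += lr * weight * diff
            return weight
    return 0.0  # no violating negative found: positive already near the top

# toy usage with random embeddings
rng = np.random.default_rng(0)
users = rng.normal(size=(10, 16))
items = rng.normal(size=(200, 16))
warp_step(users[0], items, pos_idx=5, rng=rng)
```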
DL-SFA: Deeply-Learned Slow Feature Analysis for Action Recognition | Most previous work on video action recognition uses complex hand-designed local features, such as SIFT, HOG and SURF, but these approaches are sophisticated to implement and difficult to extend to other sensor modalities. Recent studies have found that there is no universally best hand-engineered feature for all datasets, and that learning features directly from the data may be more advantageous. One such endeavor is Slow Feature Analysis (SFA), proposed by Wiskott and Sejnowski [33]. SFA can learn invariant and slowly varying features from input signals and has proven valuable in human action recognition [34]. It has also been observed that multi-layer feature representations have succeeded remarkably in widespread machine learning applications. In this paper, we propose to combine SFA with deep learning techniques to learn hierarchical representations from the video data itself. Specifically, we use a two-layered SFA learning structure with 3D convolution and max pooling operations to scale the method up to large inputs and capture abstract and structural features from the video. Thus, the proposed method is suitable for action recognition. At the same time, sharing the merits of deep learning, the proposed method is generic and fully automated. Our classification results on Hollywood2, KTH and UCF Sports are competitive with previously published results. Notably, on the KTH dataset, our recognition rate shows approximately 1% improvement over state-of-the-art methods, even without supervision or dense sampling. |
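As background for the slowness principle the paper builds on, here is a minimal numpy sketch of plain linear SFA: whiten the signal, then take the directions whose temporal derivative has the smallest variance. The deep, convolutional two-layer structure of DL-SFA is not shown, and the toy data are assumptions.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """X: (T, D) signal ordered in time; returns the n slowest linear features."""
    X = X - X.mean(axis=0)
    # Whiten the input so every direction has unit variance.
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (eigvec / np.sqrt(eigval + 1e-12))   # small eps guards rank deficiency
    # Slowness: directions minimizing the variance of the temporal difference.
    dval, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return Z @ dvec[:, :n_features]              # smallest eigenvalues = slowest

# toy usage: the slow sine is recovered as the slowest feature of a 2-D mixture
t = np.linspace(0, 20, 2000)
sources = np.column_stack([np.sin(0.5 * t), np.sin(9.0 * t)])
X = sources @ np.random.default_rng(0).normal(size=(2, 2))  # random mixing
slow = linear_sfa(X)
```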
An OAuth based authentication mechanism for IoT networks | The Internet of things (IoT) has rapidly become one of the most familiar and perhaps most discussed topics in the research field. The attention given to the Internet of Things is mainly due to new connected products intended to bring greater efficiency and simplicity to life. The variety of IoT applications leads to an equally wide variety of security issues. In this paper, we propose an approach to provide a secure authentication mechanism for an IoT network consisting of various kinds of constrained devices, using a security manager. The proposed approach protects the IoT network from unauthenticated users through the security manager using the OAuth 2.0 protocol. Moreover, this approach provides flexibility in managing IoT networks. The security manager provides authentication service for multiple IoT networks, which can also help to reduce the cost overhead of maintaining a secure database in IoT networks. |
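A minimal sketch, using the Python requests library, of the OAuth 2.0 client-credentials flow that a constrained device (or its security manager) could use to obtain and present an access token. The endpoint URLs, client id/secret, and resource path are hypothetical placeholders, not values from the paper.

```python
import requests

TOKEN_URL = "https://security-manager.example/oauth/token"   # hypothetical
CLIENT_ID, CLIENT_SECRET = "sensor-42", "s3cret"             # hypothetical

def fetch_token():
    """Obtain an access token via the OAuth 2.0 client-credentials grant."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),   # HTTP Basic client auth per RFC 6749
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def read_protected_resource(token):
    """Present the bearer token to a protected IoT gateway resource."""
    resp = requests.get(
        "https://iot-gateway.example/devices/42/telemetry",  # hypothetical
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```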
Improving TCP Throughput over Two-Way Asymmetric Links: Analysis and Solutions | The sharing of a common buffer by TCP data segments and acknowledgments in a network or internet has been known to produce the effect of ack compression, often causing dramatic reductions in throughput. We study several schemes for improving the performance of two-way TCP traffic over asymmetric links where the bandwidths in the two directions may differ substantially, possibly by many orders of magnitude. These approaches reduce the effect of ack compression by carefully controlling the flow of data packets and acknowledgments. We first examine a scheme where acknowledgments are transmitted at a higher priority than data. By analysis and simulation, we show that prioritizing acks can lead to starvation of the low-bandwidth connection. Next, we introduce and analyze a connection-level backpressure mechanism designed to limit the maximum amount of data buffered in the outgoing IP queue of the source of the low-bandwidth connection. We show that this approach, while minimizing the queueing delay for acks, results in unfair bandwidth allocation on the slow link. Finally, our preferred solution separates the acks from data packets in the outgoing queue, and makes use of a connection-level bandwidth allocation mechanism to control their bandwidth shares. We show that this scheme overcomes the limitations of the previous approaches, provides isolation, and enables precise control of the connection throughputs. We present analytical models of the dynamic behavior of each of these approaches, derive closed-form expressions for the expected connection efficiencies in each case, and validate them with simulation results. |
An extended VIKOR method based on prospect theory for multiple attribute decision making under interval type-2 fuzzy environment | Interval type-2 fuzzy sets (IT2FSs) offer an interesting avenue for handling high-order information and uncertainty in decision support systems (DSS) when dealing with both extrinsic and intrinsic aspects of uncertainty. Recently, multiple attribute decision making (MADM) problems with interval type-2 fuzzy information have received increasing attention from both researchers and practitioners, and a number of interval type-2 fuzzy MADM methods have been developed. In this paper, we extend the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje, in Serbian) method based on prospect theory to accommodate interval type-2 fuzzy circumstances. First, we propose a new distance measure for IT2FSs, which is a sound alternative to the existing interval type-2 fuzzy distance measures. Then, a decision model integrating the VIKOR method and prospect theory is proposed. A case study concerning high-tech risk evaluation is provided to illustrate the applicability of the proposed method. In addition, a comparative analysis with the interval type-2 fuzzy TOPSIS method is also presented. |
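For orientation, here is a minimal numpy sketch of the classical crisp VIKOR ranking steps (group utility S, individual regret R, and compromise index Q). The paper's interval type-2 fuzzy extension, its new distance measure, and the prospect-theory value function are not shown, and the toy data are assumptions.

```python
import numpy as np

def vikor(scores, weights, v=0.5):
    """scores: (m alternatives, n criteria), larger is better; weights sum to 1."""
    f_best = scores.max(axis=0)                    # ideal value per criterion
    f_worst = scores.min(axis=0)                   # anti-ideal value per criterion
    norm = (f_best - scores) / (f_best - f_worst)  # normalized distance to ideal
    S = (weights * norm).sum(axis=1)               # group utility
    R = (weights * norm).max(axis=1)               # individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q                                 # rank by ascending Q

# toy usage: 4 alternatives scored on 3 criteria
scores = np.array([[7, 8, 6], [8, 6, 7], [6, 9, 5], [9, 7, 8]], float)
S, R, Q = vikor(scores, weights=np.array([0.5, 0.3, 0.2]))
print(np.argsort(Q))  # compromise ranking, best alternative first
```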
An integrated design of power converters for electric vehicles | This paper proposes a novel integrated power system for electric vehicles (EVs) to improve power density and efficiency at lower cost. It integrates the on-board charger (OBC) and the low-voltage DC/DC converter (LDC) through common use of semiconductor devices, capacitors and magnetic components. The volume and cost of the integrated system are therefore reduced compared with the two conventional independent converters. The proposed system can operate in three modes: OBC standalone mode, LDC standalone mode and OBC-LDC simultaneous mode. Phase-shift control is the main control strategy for this system; at light load, PWM plus phase-shift (PPS) control is added to extend the soft-switching range. As the most important integrated component, the transformer is designed with the soft-switching ranges in mind. The theoretical analysis of the topology, operating modes, control strategy and transformer design is presented. A prototype integrating a 3.3-kW OBC and a 2-kW LDC was built to demonstrate the validity and advantages of the proposed system. |
Neighbor number, valley seeking and clustering | This paper proposes a novel nonparametric clustering algorithm capable of identifying shape-free clusters. This algorithm is based on a nonparametric estimation of the normalized density derivative (NDD) and the local convexity of the density distribution function, both of which are represented in a very concise form in terms of neighbor numbers. We use NDD to measure the dissimilarity between each pair of observations in a local neighborhood and to build a connectivity graph. Combined with the local convexity, this similarity measure can detect observations in local minima (valleys) of the density function, which separate observations in different major clusters. We demonstrate that this algorithm has a close relationship with the single-linkage hierarchical clustering and can be viewed as its extension. The performance of the algorithm is tested with both synthetic and real datasets. An example of color image segmentation is also given. Comparisons with several representative existing algorithms show that the proposed method can robustly identify major clusters even when there are complex configurations and/or large overlaps. |
Vision based smoke detection system using image energy and color information | Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety in urban areas. Many commercial smoke detection sensors exist, but most of them cannot be applied in open spaces or outdoor scenarios. With this aim, the paper presents a smoke detection system that uses a common CCD camera sensor to detect smoke in images and trigger alarms. First, a proper background model is proposed to reliably extract smoke regions and avoid over-segmentation and false positives in outdoor scenarios where many distractors are present, such as moving trees or light reflections. A novel Bayesian approach is adopted to detect smoke regions in the scene by analyzing image energy, by means of Wavelet Transform coefficients, and color information. A statistical model of image energy is built, using a temporal Gaussian Mixture, to analyze the energy decay that typically occurs when smoke covers the scene; the detection is then strengthened by evaluating the color blending between a reference smoke color and the input frame. The proposed system is capable of rapidly detecting smoke events in both night and day conditions with a reduced number of false alarms, and hence is particularly suitable for monitoring large outdoor scenarios where common sensors would fail. An extensive experimental campaign on both recorded videos and live cameras evaluates the efficacy and efficiency of the system in many real-world scenarios, such as outdoor storages and forests. |
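A minimal Python sketch of the image-energy cue described above, using the PyWavelets library: wavelet detail-coefficient energy drops when smoke smooths the scene. The temporal Gaussian mixture model and the Bayesian color-blending step are omitted, and the synthetic frames are assumptions.

```python
import numpy as np
import pywt

def detail_energy(gray_frame):
    """Energy of first-level wavelet detail coefficients of a grayscale frame."""
    _, (cH, cV, cD) = pywt.dwt2(gray_frame.astype(float), "db1")
    return (cH**2 + cV**2 + cD**2).sum()

def energy_ratio(background_frame, current_frame):
    """Ratio < 1 indicates the high-frequency energy decay typical of smoke."""
    return detail_energy(current_frame) / (detail_energy(background_frame) + 1e-9)

# toy usage: blending a frame toward its mean lowers detail energy, as smoke would
rng = np.random.default_rng(0)
bg = rng.random((64, 64))
smoky = 0.5 * bg + 0.5 * bg.mean()   # crude stand-in for a smoke-covered frame
print(energy_ratio(bg, smoky))       # < 1
```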
Unethical computer using behavior scale: A study of reliability and validity on Turkish university students | This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine unethical computer use behavior. A factor analysis of the related items revealed that the items could be grouped under five headings: intellectual property, social impact, safety and quality, net integrity and information integrity. |
Neuro-fuzzy control of a robotic exoskeleton with EMG signals | We have been developing robotic exoskeletons to assist the motion of physically weak persons such as elderly, disabled, and injured persons. The robotic exoskeleton is controlled primarily on the basis of electromyogram (EMG) signals, since the EMG signals of human muscles are important for understanding how the user intends to move. Even though the EMG signals contain very important information, it is not easy to predict the user's upper-limb motion (elbow and shoulder motion) from the EMG signals in real time, because of the difficulty of using the EMG signals as controller input signals. In this paper, we propose a robotic exoskeleton for human upper-limb motion assist, a hierarchical neuro-fuzzy controller for the robotic exoskeleton, and its adaptation method. |
Mechanical design of the humanoid robot platform, HUBO | The Korea Advanced Institute of Science and Technology (KAIST) humanoid robot 1 (KHR-1) was developed for the purpose of researching the walking action of bipeds. KHR-1, which has no hands or head, has 21 degrees of freedom (DOF): 12 DOF in the legs, 1 DOF in the torso, and 8 DOF in the arms. The second version of this humanoid robot, KHR-2, (which has 41 DOF) can walk on a living-room floor; it also moves and looks like a human. The third version, KHR-3 (HUBO), has more human-like features, a greater variety of movements, and a more human-friendly character. We present the mechanical design of HUBO, including the design concept, the lower body design, the upper body design, and the actuator selection of joints. Previously we developed and published details of KHR-1 and KHR-2. The HUBO platform, which is based on KHR-2, has 41 DOF, stands 125 cm tall, and weighs 55 kg. From a mechanical point of view, HUBO has greater mechanical stiffness and a more detailed frame design than KHR-2. The stiffness of the frame was increased and the detailed design around the joints and link frame were either modified or fully redesigned. We initially introduced an exterior art design concept for KHR-2, and that concept was implemented in HUBO at the mechanical design stage. |
Longitudinal Assessment of Carotid Atherosclerosis after Radiation Therapy using Computed Tomography: A Case-Control Study | To study carotid artery plaque composition and its volume changes in a group of patients at baseline and 2 years after head and neck radiation therapy (HNXRT). In this retrospective study, 62 patients (41 males; mean age 63 years; range 52–81) who underwent HNXRT and 40 patients (24 males; mean age 65) who underwent surgical resection of a neoplasm and did not undergo HNXRT were assessed, with 2-year follow-up. The carotid artery plaque volumes, as well as the volumes of the sub-components (fatty, mixed, calcified), were semiautomatically quantified. Mann-Whitney and Wilcoxon tests were used to test the hypothesis. In the HNXRT group, there was a statistically significant increase in the total volume of the carotid artery plaques (from 533 to 746 mm³; p = 0.001), in the fatty plaque volume (103 vs. 202 mm³; p = 0.001) and in the mixed plaque component volume (328 vs. 419 mm³; p = 0.034). A statistically significant variation (from 21.8% to 27.6%) in the percentage of fatty tissue was found. The results of this preliminary study suggest that HNXRT promotes increased carotid artery plaque volume, particularly of the fatty plaque component. • HNXRT increases carotid plaque volume. • The plaque volume increase is mainly due to an increase in the fatty plaque component. • Patients who undergo HNXRT have a progression of carotid artery disease. |
Mapping Crime: Understanding Hot Spots | This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement. Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophistication: chapter 1 is for novices to crime mapping, chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a companion to another crime mapping report. Key points: identifying hot spots requires multiple techniques, since no single method is sufficient to analyze all types of crime; current mapping technologies have significantly improved the ability of crime analysts and researchers to understand crime patterns and victimization; and crime hot spot maps can most effectively guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood). |
Atrophic and a mixed pattern of acne scars improved with a 1320-nm Nd:YAG laser. | BACKGROUND
Acne scar correction remains a challenge to the dermatologic surgeon. With nonablative laser resurfacing, this correction is attributed to dermal collagen remodeling and acne scar reorganization. Although atrophic acne scars tend to respond to laser treatment, the deeper ice pick and boxcar scars tend to be laser resistant.
OBJECTIVE
To investigate the treatment of atrophic and a mixed pattern of facial acne scars, we evaluated a 1320-nm Nd:YAG laser. Twelve subjects with atrophic facial acne scars (N=6) or a combination of atrophic and pitted, sclerotic, or boxcar scars (N=6) received three laser treatments. Physician and patient acne scar ratings were performed at baseline and at 6 months after the last treatment. Acne scars were rated with a 10-point severity scale.
RESULTS
Mean acne scar improvement was 1.5 points on physician assessments (P=0.002) and 2.2 points on patient assessments (P=0.01). Acne scars were rated more severely by patients than by the physician at all intervals. There were no noted complications at 6 months.
CONCLUSION
The 1320-nm Nd:YAG laser is a safe and effective nonablative modality for the improvement of atrophic and a mixed pattern of facial acne scars. |
International interdependence and regulatory power: Authority, mobility, and markets | This article revisits a fundamental question of international political economy: when does cross-border economic interdependence become a source of power? The view that economic interdependence is a source of potential power, not just mutual benefits, has a long lineage traceable to political realism, organizational economics, Ricardian trade theory, and structural Marxism, and researchers typically focus on preferred causal variables in isolation. Despite important contributions, little attention has been paid to understanding the interactions of multiple perspectives on asymmetric interdependence, or to making sense of the contradictory expectations of the various models. As a consequence, scholars engaged in globalization debates, such as those about policy convergence or private actor governance, frequently talk past one another. To deduce expectations about the relationship between power and interdependence, we build a model synthesizing standard approaches that analyze the effects of market size and mark... |
Generating Long and Diverse Responses with Neural Conversation Models | Building general-purpose conversation agents is a very challenging task, but necessary on the road toward intelligent agents that can interact with humans in natural language. Neural conversation models – purely data-driven systems trained end-to-end on dialogue corpora – have shown great promise recently, yet they often produce short and generic responses. This work presents new training and decoding methods that improve the quality, coherence, and diversity of long responses generated using sequence-to-sequence models. Our approach adds self-attention to the decoder to maintain coherence in longer responses, and we propose a practical approach, called the glimpse model, for scaling to large datasets. We introduce a stochastic beam-search algorithm with segment-by-segment reranking which lets us inject diversity earlier in the generation process. We trained on a combined data set of over 2.3B conversation messages mined from the web. In human evaluation studies, our method produces longer responses overall, with a higher proportion rated as acceptable and excellent as length increases, compared to baseline sequence-to-sequence models with explicit length promotion. A backoff strategy produces better responses overall, across the full spectrum of lengths. |
Rational biosynthetic engineering for optimization of geldanamycin analogues. | A rational biosynthetic engineering approach was applied to the optimization of the pharmacological properties of the benzoquinone ansamycin, geldanamycin. Geldanamycin and its natural or semisynthetic derivatives have the potential to serve as anticancer chemotherapeutic agents. However, these first-generation Hsp90 inhibitors share an unfavorable structural feature that causes both reduced efficacy and toxicity during clinical evaluation. We report the rationally designed biosynthesis of C15 hydroxylated non-quinone geldanamycin analogues by site-directed mutagenesis of the geldanamycin polyketide synthase (PKS), together with a combination of post-PKS tailoring genes. A 15-hydroxyl-17-demethoxy non-quinone analogue, DHQ3, exhibited stronger inhibition of Hsp90 ATPase activity (4.6-fold) than geldanamycin. Taken together, the results of the present study indicate that rational biosynthetic engineering allows the generation of derivatives of geldanamycin with superior pharmacological properties. |
Probabilistic Inference for Cold Start Knowledge Base Population with Prior World Knowledge | Building knowledge bases (KB) automatically from text corpora is crucial for many applications such as question answering and web search. The problem is very challenging and has been divided into sub-problems such as mention and named entity recognition, entity linking and relation extraction. However, combining these components has been shown to be under-constrained and often produces KBs with supersized entities and common-sense errors in relations (e.g., a person with multiple birthdates). The errors are difficult to resolve solely with IE tools but become obvious with world knowledge at the corpus level. By analyzing Freebase and a large text collection, we found that per-relation cardinality and the popularity of entities follow power-law distributions favoring flat long tails with low-frequency instances. We present a probabilistic joint inference algorithm to incorporate this world knowledge during KB construction. Our approach yields state-of-the-art performance on the TAC Cold Start task, and 42% and 19.4% relative improvements in F1 over our baseline on Cold Start hop-1 and all-hop queries, respectively. |
Can maturity models support cyber security? | We are living in a cyber space undergoing an unprecedentedly rapid expansion of both the space and its elements. All interactive information is processed and exchanged via this space, so well-built cyber security is vital to ensuring the security of the cyber space. However, the definitions and scopes of both cyber space and cyber security are still not well defined, and this makes it difficult to establish sound security models and mechanisms for protecting this space. Among existing models, maturity models offer a manageable approach for assessing the security level of a system or organization. The paper first provides a review of various definitions of cyber space and cyber security in order to ascertain a common understanding of the space and its security. The paper then investigates existing security maturity models, focusing on their defining characteristics and identifying their strengths and weaknesses. Finally, the paper discusses and suggests measures for a sound and applicable cyber security model. |
Ratiometric BJT-based thermal sensor in 32nm and 22nm technologies | Thermal sensors are used in modern microprocessors to provide information for: 1) throttling at the maximum temperature of operation, and 2) fan regulation at temperatures down to 50°C. Today's microprocessors are thermally limited in many applications, so accurate temperature readings are essential in order to maximize performance. There are fairly large thermal gradients across the core, which vary for different instructions, so it is necessary to position thermal sensors near hot-spots. In addition, the locations of the hot-spots may not be predictable during the design phase. Thus it is necessary for hot-spot sensors to be small enough to be moved late in the design cycle or even after first silicon. |
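For context, the ratiometric BJT sensing named in the title typically rests on the following standard device relation (textbook background, not taken from the paper): biasing a bipolar transistor at two collector-current densities gives a base-emitter voltage difference that is proportional to absolute temperature and largely independent of process parameters,

```latex
\[
  \Delta V_{BE} = V_{BE,1} - V_{BE,2}
                = \frac{kT}{q}\,\ln\!\left(\frac{J_1}{J_2}\right),
\]
```

so digitizing this difference against a reference voltage yields a ratio that maps directly to temperature.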
The Neuronal Workspace Model: Conscious Processing and Learning | |
An information-centric energy infrastructure: The Berkeley view | We describe an approach for designing an essentially more scalable, flexible and resilient electric power infrastructure – one that encourages efficient use, integrates local generation, and manages demand through omnipresent awareness of energy availability and use over time. We are inspired by how the Internet has revolutionized communications infrastructure, by pushing intelligence to the edges while hiding the diversity of underlying technologies through well-defined interfaces. Any end device is a traffic source or sink, and intelligent endpoints adapt their traffic to what the infrastructure can deliver. |
Schema Independent Relational Learning | Learning novel relations from relational databases is an important problem with many applications. Relational learning algorithms learn the definition of a new relation in terms of existing relations in the database. Nevertheless, the same database may be represented under different schemas for various reasons, such as data quality, efficiency and usability. The output of current relational learning algorithms tends to vary quite substantially over the choice of schema. This variation complicates their off-the-shelf application. We introduce and formalize the property of schema independence of relational learning algorithms, and study both the theoretical and empirical dependence of existing algorithms on the common class of (de)composition schema transformations. We show that current algorithms are not schema independent. We propose Castor, a relational learning algorithm that achieves schema independence by leveraging data dependencies. |
The Penn Chinese TreeBank: Phrase structure annotation of a large corpus | With growing interest in Chinese Language Processing, numerous NLP tools (e.g., word segmenters, part-of-speech taggers, and parsers) for Chinese have been developed all over the world. However, since no large-scale bracketed corpora are available to the public, these tools are trained on corpora with different segmentation criteria, part-of-speech tagsets and bracketing guidelines, and therefore, comparisons are difficult. As a first step towards addressing this issue, we have been preparing a large bracketed corpus since late 1998. The first two installments of the corpus, 250 thousand words of data, fully segmented, POS-tagged and syntactically bracketed, have been released to the public via LDC (www.ldc.upenn.edu). In this paper, we discuss several Chinese linguistic issues and their implications for our treebanking efforts and how we address these issues when developing our annotation guidelines. We also describe our engineering strategies to improve speed while ensuring annotation quality. |
"Alexa is my new BFF": Social Roles, User Satisfaction, and Personification of the Amazon Echo | Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo. |
The Study on the Efficacy of Fibrin Glue in Preventing Post-traumatic Focal Pancreatitis (PTFP) After Radical Gastrectomy | This study was intended to evaluate the efficacy of fibrin glue (FG) in preventing post-traumatic focal pancreatitis (PTFP) after radical gastrectomy by examining the drainage fluids over 7 days post-op. Ninety-five patients who underwent D2 radical gastrectomy for gastric cancer were randomly assigned to a fibrin glue group (n = 48) receiving fibrin glue on the raw surface of the pancreas during surgery and a control group (n = 47), which did not receive fibrin glue. We found no significant difference in operation time and intraoperative blood loss between groups (p > 0.05); no deaths occurred during surgery. The volume of ascitic fluid containing blood cells in the fibrin glue group was significantly lower than that in the control group (p < 0.001) at all times observed. Amylase levels in the drained fluids were highest at 24 h postoperatively in both groups, suggesting pancreatitis, but gradually decreased to normal levels within 7 days. The amylase in the drains in the control group was significantly higher than that in the FG group (p < 0.001) at all times observed, but it returned to normal 72 h postoperatively in the FG group. One death by hemorrhagic shock associated with PTFP was recorded in the control group. Fibrin glue is safe and effective in preventing PTFP following gastric surgery and shortens the clinical course of the disease. |
Framewise phoneme classification with bidirectional LSTM and other neural network architectures | In this paper, we present bidirectional Long Short-Term Memory (LSTM) networks and a modified, full-gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and that Long Short-Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it. |
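A minimal PyTorch sketch of the framewise architecture evaluated above: a bidirectional LSTM whose concatenated forward and backward states feed a per-frame classifier over phoneme classes. The feature size (26) and the 61-phoneme output are conventional TIMIT-style assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FramewiseBLSTM(nn.Module):
    """Bidirectional LSTM emitting one phoneme prediction per input frame."""
    def __init__(self, n_features=26, n_hidden=128, n_phonemes=61):
        super().__init__()
        self.blstm = nn.LSTM(n_features, n_hidden, batch_first=True,
                             bidirectional=True)
        # Forward and backward hidden states are concatenated: 2 * n_hidden.
        self.out = nn.Linear(2 * n_hidden, n_phonemes)

    def forward(self, frames):              # frames: (batch, time, features)
        hidden, _ = self.blstm(frames)      # (batch, time, 2 * n_hidden)
        return self.out(hidden)             # per-frame phoneme logits

# toy usage: a batch of 4 utterances, 100 frames each
model = FramewiseBLSTM()
logits = model(torch.randn(4, 100, 26))    # -> (4, 100, 61)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 61), torch.randint(61, (400,)))
```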
A Co-Regularization Approach to Semi-supervised Learning with Multiple Views | The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least Squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach. |
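A sketch of the kind of two-view co-regularized least-squares objective such a framework optimizes; the notation is mine, not necessarily the paper's. The first two sums fit each view's predictor to the l labeled examples, the norms are standard regularizers, and the final term couples the views by penalizing disagreement on the u unlabeled examples:

```latex
\[
  \min_{f^1,\,f^2}\;
  \sum_{i=1}^{l}\bigl(y_i - f^1(x_i^1)\bigr)^2
  + \sum_{i=1}^{l}\bigl(y_i - f^2(x_i^2)\bigr)^2
  + \gamma_1\|f^1\|^2 + \gamma_2\|f^2\|^2
  + \mu \sum_{j=l+1}^{l+u}\bigl(f^1(x_j^1) - f^2(x_j^2)\bigr)^2
\]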
ESTIMATION OF RETURN ON INVESTMENT IN SHARE MARKET THROUGH ANN | The stock market is one of the most popular investment venues because of its expected high profit. Traditionally, the technical analysis approach, which predicts stock prices based on historical prices and volume, basic concepts of trends, price patterns and oscillators, is commonly used by stock investors to aid investment decisions. Advanced intelligent techniques, ranging from pure mathematical models and expert systems to fuzzy logic networks, have also been used by many financial trading systems for investing and predicting stock prices. In recent years, most researchers have concentrated on the future prediction of share market prices using neural networks. In this chapter, we propose a methodology in which a neural network is applied to the investor's financial decision making, to invest in all types of shares irrespective of the high/low index value of the scrips, in a continuous-time framework; it is further extended to obtain the expected return on investment through the neural network, which is finally compared with the actual value. The proposed network has been tested with stock data obtained from the Indian Share Market BSE Index. Finally, the design, implementation and performance of the proposed neural network are described. |
Human female orgasm and mate fluctuating asymmetry | Human, Homo sapiens, female orgasm is not necessary for conception; hence it seems reasonable to hypothesize that orgasm is an adaptation for manipulating the outcome of sperm competition resulting from facultative polyandry. If heritable differences in male viability existed in the evolutionary past, selection could have favoured female adaptations (e.g. orgasm) that biased sperm competition in favour of males possessing heritable fitness indicators. Accumulating evidence suggests that low fluctuating asymmetry is a sexually selected male feature in a variety of species, including humans, possibly because it is a marker of genetic quality. Based on these notions, the proportion of a woman's copulations associated with orgasm is predicted to be associated with her partner's fluctuating asymmetry. A questionnaire study of 86 sexually active heterosexual couples supported this prediction. Women with partners possessing low fluctuating asymmetry and their partners reported significantly more copulatory female orgasms than were reported by women with partners possessing high fluctuating asymmetry and their partners, even with many potential confounding variables controlled. The findings are used to examine hypotheses for female orgasm other than selective sperm retention. The human female orgasm has attracted great interest from many evolutionary behavioural scientists. Several hypotheses propose that female orgasm is an adaptation. First, human female orgasm has been claimed to create and maintain the pair bond between male and female by promoting female intimacy through sexual pleasure (e.g. Morris 1967; Eibl-Eibesfeldt 1989). Second, a number of evolutionists have suggested that human female orgasm functions in selective bonding with males by promoting affiliation primarily with males who are willing to invest time or material resources in the female (Alexander 1979; Alcock 1987) and/or males of genotypic quality (Smith 1984; Alcock 1987). Third, female orgasm has been said to motivate a female to pursue multiple males to prevent male infanticide of the female's offspring and/or to gain material benefits from multiple mates (Hrdy 1981). Fourth, Morris (1967) proposed that human female orgasm functions to induce fatigue, sleep and a prone position, and thereby passively acts to retain sperm. Additional adaptational hypotheses suggest a more active process by which orgasm retains sperm. The 'upsuck' hypothesis proposes that orgasm actively retains sperm by sucking sperm into the uterus (Fox et al. 1970; see also Singer 1973). Smith (1984) modified this hypothesis into one based on sire choice; he argued that the evolved function of female orgasm is control over paternity of offspring by assisting the sperm of preferred sires and handicapping the sperm of non-preferred mates. Also, Baker & Bellis (1993; see also Baker et al. 1989) speculated that timing of the human female orgasm plays a role in sperm retention. Baker & Bellis (1993) showed that orgasm occurring near the time of male ejaculation results in greater sperm retention, as does orgasm up to 45 min after ejaculation. Orgasm occurring more than a minute before male ejaculation appears not to enhance sperm retention.
Baker & Bellis (1993) furthermore argued that orgasms occurring at one time may hinder retention of sperm from subsequent copulations up to 8 days later. In addition, a number of theorists have argued that human female orgasm has not been selected for because of its own functional significance and … |
Nuclear phytochrome A signaling promotes phototropism in Arabidopsis. | Phototropin photoreceptors (phot1 and phot2 in Arabidopsis thaliana) enable responses to directional light cues (e.g., positive phototropism in the hypocotyl). In Arabidopsis, phot1 is essential for phototropism in response to low light, a response that is also modulated by phytochrome A (phyA), representing a classical example of photoreceptor coaction. The molecular mechanisms underlying promotion of phototropism by phyA remain unclear. Most phyA responses require nuclear accumulation of the photoreceptor, but interestingly, it has been proposed that cytosolic phyA promotes phototropism. By comparing the kinetics of phototropism in seedlings with different subcellular localizations of phyA, we show that nuclear phyA accelerates the phototropic response, whereas in the fhy1 fhl mutant, in which phyA remains in the cytosol, phototropic bending is slower than in the wild type. Consistent with these data, we find that transcription factors needed for full phyA responses are needed for normal phototropism. Moreover, we show that phyA is the primary photoreceptor promoting the expression of phototropism regulators in low light (e.g., PHYTOCHROME KINASE SUBSTRATE1 [PKS1] and ROOT PHOTOTROPISM2 [RPT2]). Although phyA remains cytosolic in fhy1 fhl, induction of PKS1 and RPT2 expression still occurs in fhy1 fhl, indicating that a low level of nuclear phyA signaling is still present in fhy1 fhl. |
The Prevalence and Risk Factors of Functional Dyspepsia in a Multiethnic Population in the United States | BACKGROUND AND AIMS
The prevalence of functional dyspepsia (FD) in the general population is not known. The aim of this study is to measure the prevalence of FD and its risk factors in a multiethnic volunteer sample of the U.S. population.
METHODS
One thousand employees at the Houston VA Medical Center were targeted with a symptom questionnaire asking about upper abdominal symptoms, followed by a request to undergo endoscopy. Dyspepsia was defined by the presence of epigastric pain, fullness, nausea, or vomiting, and FD was defined as dyspepsia in the absence of esophageal erosions, gastric ulcers, or duodenal ulcers or erosions. The presence of dyspepsia and FD was examined in multiple logistic regression analyses.
RESULTS
A total of 465 employees completed the relevant questions, and of those 203 had an endoscopic examination. The age-adjusted prevalence rate of dyspepsia was 31.9 per 100 (95% CI: 26.7–37.1), and 15.8 per 100 (95% CI: 9.6–22.0) if participants with concomitant heartburn or acid regurgitation were excluded. Subjects with dyspepsia were more likely than participants without dyspepsia to report smoking, using antacids, aspirin or nonsteroidal anti-inflammatory drugs (NSAIDs), and consulting a physician for their symptoms (p < 0.05). Most (64.5%) participants with dyspepsia who underwent endoscopy had FD. The age-adjusted prevalence rate of FD was 29.2 per 100 (95% CI: 21.9–36.5), and 15.0 per 100 (6.7–23.3) if subjects with GERD were excluded. Apart from a trend towards association with older age in the multiple regression analysis, there were no significant predictors of FD among participants with dyspepsia.
CONCLUSIONS
Most subjects with dyspepsia have FD. The prevalence of FD is high, but predictors of FD remain poorly defined. |
A Preliminary Study of Hand Hygiene Compliance Characteristics with Machine Learning Methods | Increasing hospital re-admission rates due to Hospital Acquired Infections (HAIs) are a concern at many healthcare facilities. To prevent the spread of HAIs, caregivers are expected to comply with recommended hand hygiene guidelines, which require reliable and timely hand hygiene compliance measurement systems. The current standard practice of monitoring compliance involves the direct observation of caregivers' hand cleaning as they enter or exit a patient room by a trained observer, which can be time-consuming, resource-intensive, and subject to bias. To alleviate the heavy manual effort and reduce errors, this paper studies the characteristics of compliance that could be used to assist the direct observation approach by deciding when and where to station manual auditors, and to improve compliance by providing just-in-time alerts or potentially recommending training materials to non-compliant staff. The paper analyzes location and handwashing station activation data from a 30-bed intensive care unit (ICU) and uses machine learning to assess whether location, time-based factors, or other data can be used to predict handwashing non-compliance events. The results show that a care provider's entry compliance is highly indicative of the same provider's exit compliance, and that compliance on the most recent patient room visit can also predict entry compliance on a staff member's current patient room visit. |
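As a hedged illustration of the kind of analysis described above, the following scikit-learn sketch predicts exit compliance from entry compliance, room, and time-of-day features; the synthetic data, feature names, and model choice are assumptions for illustration, not the study's ICU data or method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
visits = pd.DataFrame({
    "entry_compliant": rng.integers(0, 2, n),   # washed hands on entry?
    "room_id": rng.integers(0, 30, n),          # 30-bed ICU, as in the study
    "hour_of_day": rng.integers(0, 24, n),
})
# Synthetic label correlated with entry compliance, mimicking the reported
# finding that entry compliance is highly indicative of exit compliance.
exit_compliant = ((visits["entry_compliant"] == 1)
                  | (rng.random(n) < 0.2)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
auc = cross_val_score(model, visits, exit_compliant, cv=5, scoring="roc_auc")
print(auc.mean())  # well above 0.5 because of the built-in correlation
```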
Towards the Next Generation of Tabletop Gaming Experiences | In this paper we present a novel hardware and software platform (STARS) to realize computer augmented tabletop games that unify the strengths of traditional board games and computer games. STARS game applications preserve the social situation of traditional board games and provide a tangible interface with physical playing pieces to facilitate natural interaction. The virtual game components offer exciting new opportunities for game design and provide richer gaming experiences impossible to realize with traditional media. This paper describes STARS in terms of the hardware setup and the software platform used to develop and play STARS games. The interaction design within STARS is discussed, and sample games are presented with regard to their contributions to enhancing user experience. Finally, real-world experiences with the platform are reported. |
Use of Social Media for Disaster Management: A Prescriptive Framework | Social media is emerging as an important information-based communication tool for disaster management. Yet there are many relief organizations that are not able to develop strategies and allocate resources to effectively use social media for disaster management. The reason behind this inability may be a lack of understanding regarding the different functionalities of social media. In this paper, we examine the literature using content analysis to understand the current usage of social media in disaster management. We draw on the honeycomb framework and the results of our content analysis to suggest a new framework that can help in utilizing social media more effectively during the different phases of disaster management. We also discuss the implications of our study. |
Entropy-based anomaly detection for in-vehicle networks | Due to increased connectivity and the seamless integration of information technology into modern vehicles, one trend of research in the automotive domain is the development of holistic IT security concepts. Within the scope of this development, vehicular attack detection is one concept that is gaining increased attention, because its reactive nature allows responding to threats during runtime. In this paper we explore the applicability of entropy-based attack detection for in-vehicle networks. We illustrate the crucial aspects of adapting such an approach to the automotive domain. Moreover, we show first exemplary results by applying the approach to measurements derived from a standard vehicle's CAN-Body network. |
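A minimal Python sketch of the entropy-based detection idea above: compute the Shannon entropy of the CAN identifier distribution over fixed windows and flag windows that deviate from a learned baseline. The IDs, window composition, and threshold are illustrative assumptions, not the paper's measurements.

```python
import math
from collections import Counter

def window_entropy(can_ids):
    """Shannon entropy (bits) of the CAN identifier distribution in one window."""
    counts = Counter(can_ids)
    total = len(can_ids)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_anomalous(window, baseline_mean, baseline_std, k=3.0):
    """Flag windows whose entropy deviates k standard deviations from baseline."""
    return abs(window_entropy(window) - baseline_mean) > k * baseline_std

# toy usage: a flood of one ID (as in an injection/DoS attack) shifts the entropy
normal = [0x1A0, 0x2B4, 0x0F3, 0x1A0, 0x2B4, 0x3C1] * 20
attack = normal + [0x666] * 300
print(window_entropy(normal), window_entropy(attack))
```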
[French version of screening questionnaire for high-functioning autism or Asperger syndrome in adolescent: Autism Spectrum Quotient, Empathy Quotient and Systemizing Quotient. Protocol and questionnaire translation]. | AIM
No tools are currently available in France for the detection of autism without mental retardation (high-functioning autism and Asperger syndrome, here referred to as TED SDI). Use of screening tests by first-line clinicians would allow better detection of children who are likely to display such difficulties and would improve patient care. In England, 3 questionnaires have been evaluated: the Autism Spectrum Quotient (AQ), the Empathy Quotient (EQ), and the Systemizing Quotient (SQ). This study concerns the translation of these 3 questionnaires and their evaluation in France with TED SDI and control adolescents.
METHODS
The translation of the questionnaires into French required two simultaneous translations, two back-translations and two consensus meetings. This is a cross-sectional study comparing scores obtained with the three questionnaires, AQ, EQ and SQ. The questionnaires were completed by the parents of four groups of adolescents aged 11-18 years: 100 TED SDI adolescents (50 with IQ ≥ 85 and 50 with 70 ≤ IQ < 85), 50 adolescents with another psychiatric disorder (TP) and 200 control adolescents (T).
RESULTS
580 questionnaires were sent to 40 recruiting centres. By the 28th of February, 2010, 277 completed questionnaires had been received (TED SDI: 70 (70%); TP: 25 (50%); and T: 182 (91%)). In the control group, 92 girls (mean age 14.4±1.7 years) and 66 boys (14.5±1.7 years) were recruited. In the TED SDI group, 4 girls (14.3±2.4 years) and 42 boys (14.5±1.7 years) were recruited; one girl (IQ 81) and 6 boys (mean IQ 72.2±7.7) had an IQ between 70 and 85, and 3 girls (mean IQ 95.3±4.2) and 36 boys (mean IQ 102.9±12) had an IQ of 85 or higher. In the TP group, 9 girls (15.9±1.7 years) and 4 boys (15.8±1.9 years) were recruited.
CONCLUSION
The aim of this study is to make the AQ, EQ and SQ questionnaires available in French for French-speaking clinicians. This study will allow a rigorous evaluation of the usefulness of the AQ questionnaire in the screening of TED SDI in adolescents. |
Effect of home-based HIV counselling and testing on stigma and risky sexual behaviours: serial cross-sectional studies in Uganda | BACKGROUND
A large, district-wide, home-based HIV counselling and testing (HBHCT) programme was implemented in Bushenyi district of Uganda from 2004 to 2007. This programme provided free HBHCT services to all consenting adults of Bushenyi district and had a very high uptake and acceptability. We measured population-level changes in knowledge of HIV status, stigma and HIV-risk behaviours before and after HBHCT to assess whether widespread HBHCT had an effect on trends of risky sexual behaviours and on stigma and discrimination towards HIV.
METHODS
Serial cross-sectional surveys were carried out before and after the implementation of HBHCT programme in Bushenyi district of Uganda. A total of 1402 randomly selected adults (18 to 49 years) were interviewed in the baseline survey. After the implementation, a different set of randomly selected 1562 adults was interviewed using the same questionnaire. Data was collected on socio-demographic characteristics, sexual behaviour, whether respondents had ever tested for HIV and stigma and discrimination towards HIV/AIDS.
RESULTS
The proportion of people who had ever tested for HIV increased from 18.6% to 62% (p<0.001). Among people who had ever tested, the proportion of people who shared HIV test result with a sexual partner increased from 41% to 57% (p<0.001). The proportion of persons who wanted infection status of a family member not to be revealed decreased from 68% to 57% (p<0.001). Indicators of risk behaviour also improved; the proportion of people who exchanged money for sex reduced from 12% to 4% (p<0.001), who used a condom when money was exchanged during a sexual act increased from 39% to 80% (p<0.001) and who reported genital ulcer/discharge decreased from 22% to 10% (p<0.001).
CONCLUSION
These data suggest that HBHCT rapidly increased the uptake of HCT and may have led to a reduction in high-risk behaviours at the population level, as well as a reduction in stigma and discrimination. Because HBHCT programmes are cost-effective, they should be considered in the delivery of HIV services, especially in areas where access to HCT is low.
Hybrid thoracoscopic surgical and transvenous catheter ablation of atrial fibrillation. | OBJECTIVES
The purpose of this study was to evaluate the feasibility, safety, and clinical outcomes up to 1 year in patients undergoing combined simultaneous thoracoscopic surgical and transvenous catheter atrial fibrillation (AF) ablation.
BACKGROUND
The combination of the transvenous endocardial approach with the thoracoscopic epicardial approach in a single AF ablation procedure overcomes the limitations of both techniques and should result in better outcomes.
METHODS
A cohort of 26 consecutive patients with AF who underwent hybrid thoracoscopic surgical and transvenous catheter ablation was followed for up to 1 year.
RESULTS
Twenty-six patients (42% with persistent AF) underwent successful hybrid procedures. There were no complications. The mean follow-up period was 470 ± 154 days. In 23% of the patients, the epicardial lesions were not transmural, and endocardial touch-up was necessary. One-year success, defined according to the Heart Rhythm Society, European Heart Rhythm Association, and European Cardiac Arrhythmia Society consensus statement for the catheter and surgical ablation of AF, was 93% for patients with paroxysmal AF and 90% for patients with persistent AF. Two patients underwent catheter ablation for recurrent AF or left atrial flutter after the hybrid procedure.
CONCLUSIONS
A combined transvenous endocardial and thoracoscopic epicardial ablation procedure for AF is feasible and safe, with a single-procedure success rate of 83% at 1 year. |
CELIA: A Device and Architecture Co-Design Framework for STT-MRAM-Based Deep Learning Acceleration | A large variety of applications rely on deep learning to process big data, learn sophisticated features, and perform complicated tasks. Utilizing emerging non-volatile memory (NVM)'s unique characteristics, including the crossbar array structure and gray-scale cell resistances, to perform neural network (NN) computation is a well-studied approach to accelerating deep learning tasks. Compared to other NVM technologies, STT-MRAM has unique advantages in performing NN computation. However, state-of-the-art research has not utilized STT-MRAM for deep learning acceleration due to its device- and architecture-level challenges. Consequently, this paper enables STT-MRAM, for the first time, as an effective and practical deep learning accelerator. In particular, it proposes a full-stack solution across multiple design layers, including device-level fabrication, circuit-level enhancement, architecture-level data quantization, and system-level accelerator design. The proposed framework significantly mitigates the model accuracy loss due to reduced data precision in a cohesive manner, constructing a comprehensive STT-MRAM accelerator system for fast NN computation with high energy efficiency and low cost.
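To make the architecture-level data quantization concrete, here is a minimal sketch of symmetric uniform weight quantization, the generic technique for mapping float weights onto a small set of levels such as NVM cell resistances; the bit width, scaling rule, and function names are illustrative assumptions, not CELIA's published scheme.

```python
import numpy as np

def quantize_uniform(weights, n_bits=4):
    """Symmetric uniform quantization of a float weight array to n_bits
    signed levels. Illustrative only: CELIA's actual mapping of weights
    onto STT-MRAM cell states is not specified in this abstract."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax + 1e-12    # avoid divide-by-zero
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q * scale, q, scale                        # dequantized, codes, step

w = np.random.randn(256, 256).astype(np.float32)
w_hat, codes, step = quantize_uniform(w)
print("mean |error|:", float(np.mean(np.abs(w - w_hat))))
```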
Prognostic significance of bcl-2 protein expression in aggressive non-Hodgkin's lymphoma. Groupe d'Etude des Lymphomes de l'Adulte (GELA). | Little is known about the expression of bcl-2 protein in intermediate and high grade non-Hodgkin's lymphoma (NHL) and its clinical and prognostic significance. We performed immunohistochemical analysis of bcl-2 expression in tumoral tissue sections of 348 patients with high or intermediate grade NHL. These patients were uniformly treated with adriamycin, cyclophosphamide, vindesine, bleomycin, and prednisone (ACVBP) in the induction phase of the LNH87 protocol. Fifty eight cases were excluded due to inadequate staining. Of the 290 remaining patients, 131 (45%) disclosed homogeneous positivity (high bcl-2 expression) in virtually all tumor cells, whereas 65 (23%) were negative and 94 (32%) exhibited intermediate staining. High bcl-2 expression was more frequent in B-cell NHL (109 of 214, 51%) than in T-cell NHL (6 of 35, 17%) (P = .0004), and was heterogeneously distributed among the different histological subtypes. Further analysis was performed on the 151 patients with diffuse large B-cell lymphoma (centroblastic and immunoblastic) to assess the clinical significance and potential prognostic value of bcl-2 expression in the most frequent and homogeneous immunohistological subgroup. High bcl-2 expression, found in 44% of these patients (67 of 151), was more frequently associated with III-IV stage disease (P = .002). Reduced disease-free survival (DFS) (P < .01) and overall survival (P < .05) were demonstrated in the patients with high bcl-2 expression. Indeed, the 3-year estimates of DFS and overall survival were 60% and 61%, respectively (high bcl-2 expression) versus 82% and 78%, respectively (negative/intermediate bcl-2 expression). A multivariate regression analysis confirmed the independent effect of bcl-2 protein expression on DFS. Thus bcl-2 protein expression, as demonstrated in routinely paraffin-embedded tissue, appears to be predictive of poor DFS, in agreement with the role of bcl-2 in chemotherapy-induced apoptosis. It might be considered as a new independent biologic prognostic parameter, which, especially in diffuse large B-cell NHL, could aid in the identification of patient risk groups. |
A Survey of Network Function Virtualization Security | Network Function Virtualization (NFV) provides many benefits to consumers because it is a cost-efficient evolution of legacy networks, allowing networks to be enhanced and extended in a quick and low-cost manner since network functions are provided through virtualization. However, security issues are a major concern for NFV users. This paper reviews different security threats to NFV and some of the countermeasures to mitigate them. Also, we define patterns as a way of describing security solutions in an abstract and generic way. Lastly, we outline some security research directions in this area.
Effectiveness and underlying mechanisms of a group-based cognitive behavioural therapy-based indicative prevention program for children with elevated anxiety levels | BACKGROUND
Anxiety is a problem for many children, particularly because of its negative consequences not only on the wellbeing of the child, but also on society. Adequate prevention and treatment might be the key in tackling this problem. Cognitive behavioural therapy (CBT) has been found effective for treating anxiety disorders. "Coping Cat" is one of the few evidence-based CBT programs designed to treat anxiety symptoms in children. The main aim of this project is to conduct a Randomized Controlled Trial (RCT) to evaluate the effectiveness of a Dutch version of Coping Cat as an indicative group-based prevention program. The second aim is to gain insight into the mechanisms underlying its effectiveness.
METHODS/DESIGN
Coping Cat will be tested in Dutch primary school children in grades five through eight (ages 7 to 13) with elevated levels of anxiety. This RCT has two conditions: 130 children will be randomly assigned to the experimental (N=65, Coping Cat) and control (N=65, no program) groups. All children and their mothers will be asked to complete baseline, post-intervention, and 3-month follow-up assessments. In addition, children in both the experimental and control groups will be asked to complete 12 weekly questionnaires matched to the treatment sessions. The main outcome measure will be the child's anxiety symptom level (SCAS). Four potential mediators will be examined, namely active coping, positive cognitive restructuring, self-efficacy, and cognitions about one's coping ability (hereafter, coping cognitions).
DISCUSSION
It is hypothesized that children in the experimental condition will experience reduced levels of anxiety in comparison with the control group. Further, active coping, positive cognitive restructuring, and coping cognitions are expected to mediate program effectiveness. If Coping Cat proves effective as a prevention program and working mechanisms can be found, this group-based approach might lead to the development of a cost-effective program suitable for prevention purposes that would be easily implemented on a large scale.
TRIAL REGISTRATION
Nederlands Trial Register NTR3818. |
Commitment of hematopoietic stem cells in avian and mammalian embryos: an ongoing story. | During ontogeny, hematopoietic stem cells (HSC) become committed outside of hematopoietic organs. Once held to emerge from the yolk sac, they are currently thought to originate from the early aorta. However we now show that the allantois in the avian embryo and the placenta in the mouse embryo produce HSC in very large numbers. These sites thus appear to have a central role in the process of HSC emergence. |
X-Tag: A Fiducial Tag for Flexible and Accurate Bundle Adjustment | In this paper we design a novel planar 2D fiducial marker and develop a fast detection algorithm aimed at easy camera calibration and precise 3D reconstruction at the marker locations via bundle adjustment. Even though an abundance of planar fiducial markers have been made and used in various tasks, none of them has the properties necessary to solve the aforementioned tasks. Our marker, X-tag, enjoys a novel design coupled with a very efficient and robust detection scheme, resulting in a reduced number of false positives. This is achieved by constructing markers with random circular features in the image domain and encoding them using two true perspective invariants: cross-ratios and intersection preservation constraints. To detect the markers, we developed an effective search scheme, similar to Geometric Hashing and Hough Voting, in which marker decoding is cast as a retrieval problem. We apply our system to the tasks of camera calibration and bundle adjustment. With qualitative and quantitative experiments, we demonstrate the robustness and accuracy of X-tag in spite of blur, noise, perspective and radial distortions, and showcase camera calibration, bundle adjustment and 3D fusion of depth data from precise extrinsic camera poses.
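For background, the cross-ratio the encoding relies on can be computed in a few lines; the point-ordering convention below is an assumption, since the paper may use a different one.

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear points, here taken as
    (|p1 p3| * |p2 p4|) / (|p2 p3| * |p1 p4|). This quantity is preserved
    by perspective projection, which is what lets ratios of feature points
    act as a viewpoint-independent code."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))
    return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))

# Four collinear points; any perspective image of them yields the same value.
print(cross_ratio((0, 0), (1, 0), (3, 0), (7, 0)))   # ~1.2857
```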
Human performance limitations (communication, stress, prospective memory and fatigue). | A key role in anaesthetic practice is gathering and assimilating information from a variety of sources to construct and maintain an accurate mental model of what is happening to the patient, a model that will influence subsequent decisions made by the anaesthetist on the patient's behalf, as part of a larger team. Effective performance of this role requires a set of mental functions that place great demands upon the physiology and psychology of anaesthetists, functions that are vulnerable to a wide range of factors including those affecting team performance and those affecting the anaesthetist specifically. The number of tasks, their complexity, the physical and mental demands of the job, the underlying health and well-being of the anaesthetist and the environment and context within which the team attempt to meet the demands placed on them will influence the outcome of patient care. |
In vitro regulatory models for systems biology. | The reductionist approach has revolutionized biology in the past 50 years. Yet its limits are being felt as the complexity of cellular interactions is gradually revealed by high-throughput technology. In order to make sense of the deluge of "omic data", a hypothesis-driven view is needed to understand how biomolecular interactions shape cellular networks. We review recent efforts aimed at building in vitro biochemical networks that reproduce the flow of genetic regulation. We highlight how those efforts have culminated in the rational construction of biochemical oscillators and bistable memories in test tubes. We also recapitulate the lessons learned about in vivo biochemical circuits such as the importance of delays and competition, the links between topology and kinetics, as well as the intriguing resemblance between cellular reaction networks and ecosystems. |
DeepProduct: Mobile Product Search With Portable Deep Features | Features extracted by deep networks have been popular in many visual search tasks. This article studies deep network structures and training schemes for mobile visual search. The goal is to learn an effective yet portable feature representation that is suitable for bridging the domain gap between mobile user photos and (mostly) professionally taken product images while keeping the computational cost acceptable for mobile-based applications. The technical contributions are twofold. First, we propose an alternative of the contrastive loss popularly used for training deep Siamese networks, namely robust contrastive loss, where we relax the penalty on some positive and negative pairs to alleviate overfitting. Second, a simple multitask fine-tuning scheme is leveraged to train the network, which not only utilizes knowledge from the provided training photo pairs but also harnesses additional information from the large ImageNet dataset to regularize the fine-tuning process. Extensive experiments on challenging real-world datasets demonstrate that both the robust contrastive loss and the multitask fine-tuning scheme are effective, leading to very promising results with a time cost suitable for mobile product search scenarios. |
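A hedged sketch of what a relaxed contrastive loss can look like: the classic formulation pulls every matching pair to zero distance and pushes every non-matching pair past a margin, while the slack term below stops easy pairs from being over-penalized. The margins and the exact relaxation rule are assumptions, not the paper's published loss.

```python
import torch

def robust_contrastive_loss(dist, same, m_pos=0.3, m_neg=1.0):
    """Contrastive loss with relaxed penalties: matching pairs are only
    penalized beyond a slack margin m_pos, non-matching pairs only inside
    margin m_neg. The slack on positives is a guess at the 'relax the
    penalty' idea; the paper's exact formulation may differ.
    dist: pairwise embedding distances; same: 1.0 for matching pairs."""
    pos = same * torch.clamp(dist - m_pos, min=0.0) ** 2
    neg = (1.0 - same) * torch.clamp(m_neg - dist, min=0.0) ** 2
    return (pos + neg).mean()

dist = torch.rand(8)
same = torch.randint(0, 2, (8,)).float()
print(robust_contrastive_loss(dist, same))
```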
Architecture optimization of a 3-DOF translational parallel mechanism for machining applications, the orthoglide | This paper addresses the architecture optimization of a 3-DOF translational parallel mechanism designed for ma chining applications. The design optimization is conducte d on the basis of a prescribed Cartesian workspace with prescrib ed kinetostatic performances. The resulting machine, the Ort hoglide, features three fixed parallel linear joints which are mounted orthogonally and a mobile platform which moves in the Cartesian x-y-z space with fixed orientation. The interesting features of the Orthoglide are a regular Cartesian workspace shape, unifor m performances in all directions and good compactness. A smal lscale prototype of the Orthoglide under development is pres ented at the end of this paper. |
Experimenting at scale with google chrome's SSL warning | Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions. |
Make it real: Effective floating-point reasoning via exact arithmetic | Floating-point arithmetic is widely used in scientific computing. While many programmers are subliminally aware that floating-point numbers only approximate the reals, few are cognizant of the dangers this entails for programming. Such dangers range from tolerable rounding errors in sequential programs, to unexpected, divergent control flow in parallel code. To address these problems, we present a decision procedure for floating-point arithmetic (FPA) that exploits the proximity to real arithmetic (RA), via a loss-less reduction from FPA to RA. Our procedure does not involve any form of bit-blasting or bit-vectorization, and can thus generate much smaller back-end decision problems, albeit in a more complex logic. This tradeoff is beneficial for the exact and reliable analysis of parallel scientific software, which tends to give rise to large but benignly structured formulas. We have implemented a prototype decision engine and present encouraging results analyzing such software for numerical accuracy. |
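The gap between floating-point arithmetic and real arithmetic that motivates the reduction shows up in two lines of Python, with exact rationals standing in for the reals:

```python
from fractions import Fraction

# In IEEE binary floating point, 0.1, 0.2, and 0.3 are only approximated,
# so the familiar identity fails; over the rationals (a stand-in for real
# arithmetic) it holds exactly.
print(0.1 + 0.2 == 0.3)                                       # False
print(0.1 + 0.2)                                              # 0.30000000000000004
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```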
Strategic Alignment of Cloud-Based Architectures for Big Data | Big Data is an increasingly significant topic for management and IT departments. In the beginning, Big Data applications were large on-premise installations. Today, cloud services are increasingly used to implement Big Data applications. This can be done in different ways, supporting different strategic enterprise goals. Therefore, we develop a framework that enumerates the alternatives for implementing Big Data applications using cloud services and identifies the strategic goals supported by these alternatives. The created framework clarifies the options for Big Data initiatives using cloud computing and thus improves the strategic alignment of Big Data applications.
Local graph sparsification for scalable clustering | In this paper we look at how to sparsify a graph, i.e., how to reduce the edge set while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.
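A minimal sketch of the edge-ranking idea, estimating neighborhood Jaccard similarity from minhash signature agreement and keeping each node's top-ranked edges; the hash construction, signature length, and `keep` parameter are illustrative, not the paper's configuration.

```python
import random
from collections import defaultdict

def minhash_signature(neighbors, hashes):
    """Minhash signature of a node's neighbor set."""
    return tuple(min(h(v) for v in neighbors) for h in hashes)

def sparsify(adj, keep=2, num_hashes=20, seed=0):
    rng = random.Random(seed)
    masks = [rng.getrandbits(32) for _ in range(num_hashes)]
    hashes = [lambda v, m=m: hash(v) ^ m for m in masks]
    sig = {u: minhash_signature(adj[u], hashes) for u in adj}
    sparse = defaultdict(set)
    for u in adj:
        # Rank u's edges by how many signature components the endpoints
        # share, an estimate of the Jaccard similarity of their neighborhoods.
        ranked = sorted(adj[u],
                        key=lambda v: -sum(a == b for a, b in zip(sig[u], sig[v])))
        for v in ranked[:keep]:
            sparse[u].add(v)
            sparse[v].add(u)
    return sparse

adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3, 5}, 5: {4}}
print({u: sorted(vs) for u, vs in sparsify(adj).items()})
```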
Community-based classification of noun phrases in twitter | Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects, identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base. |
Scabies incognito presenting as urticaria pigmentosa in an infant. | Misdiagnosis is frequent in scabies of infants and children because of a low index of suspicion, secondary eczematous changes, and inappropriate therapy. Topical or systemic corticosteroids may modify the clinical presentation of scabies and that situation is referred to as scabies incognito. We describe a 10-month-old infant with scabies incognito mimicking urticaria pigmentosa. |
The Challenges of Global-Scale Data Management | Global-scale data management (GSDM) empowers systems by providing higher levels of fault-tolerance, read availability, and efficiency in utilizing cloud resources. This has led to the emergence of global-scale data management and event processing. However, the Wide-Area Network (WAN) latency separating data is orders of magnitude larger than conventional network latencies, and this requires a reevaluation of many of the traditional design trade-offs of data management systems. Therefore, data management problems must be revisited to account for the new design space. In this tutorial we survey recent developments in GSDM focusing on identifying fundamental challenges and advancements in addition to open research opportunities. |
Double emulsion solvent evaporation techniques used for drug encapsulation. | Double emulsions are complex systems, also called "emulsions of emulsions", in which the droplets of the dispersed phase contain one or more types of smaller dispersed droplets themselves. Double emulsions have the potential for encapsulation of both hydrophobic as well as hydrophilic drugs, cosmetics, foods and other high value products. Techniques based on double emulsions are commonly used for the encapsulation of hydrophilic molecules, which suffer from low encapsulation efficiency because of rapid drug partitioning into the external aqueous phase when using single emulsions. The main issue when using double emulsions is their production in a well-controlled manner, with homogeneous droplet size by optimizing different process variables. In this review special attention has been paid to the application of double emulsion techniques for the encapsulation of various hydrophilic and hydrophobic anticancer drugs, anti-inflammatory drugs, antibiotic drugs, proteins and amino acids and their applications in theranostics. Moreover, the optimized ratio of the different phases and other process parameters of double emulsions are discussed. Finally, the results published regarding various types of solvents, stabilizers and polymers used for the encapsulation of several active substances via double emulsion processes are reported. |
DeeperBind: Enhancing prediction of sequence specificities of DNA binding proteins | Transcription factors (TFs) are macromolecules that bind to cis-regulatory specific sub-regions of DNA promoters and initiate transcription. Finding the exact location of these binding sites (aka motifs) is important in a variety of domains such as drug design and development. To address this need, several in vivo and in vitro techniques have been developed so far that try to characterize and predict the binding specificity of a protein to different DNA loci. The major problem with these techniques is that they are not accurate enough in predicting binding affinity and characterizing the corresponding motifs. As a result, downstream analysis is required to uncover the locations where proteins of interest bind. Here, we propose DeeperBind, a long short-term recurrent convolutional network for prediction of protein binding specificities with respect to DNA probes. DeeperBind can model the positional dynamics of probe sequences and hence reckons with the contributions made by individual sub-regions in DNA sequences in an effective way. Moreover, it can be trained and tested on datasets containing varying-length sequences. We apply our pipeline to datasets derived from protein binding microarrays (PBMs), an in vitro high-throughput technology for quantification of protein-DNA binding preferences, and present promising results. To the best of our knowledge, this is the most accurate pipeline that can predict binding specificities of DNA sequences from the data produced by high-throughput technologies through utilization of the power of deep learning for feature generation and positional dynamics modeling.
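A hedged sketch of the kind of recurrent convolutional architecture described: a convolution scans one-hot DNA for motif-like features, an LSTM models their positional dynamics, and a linear head scores binding. Layer sizes and names are assumptions, not DeeperBind's exact configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMBinder(nn.Module):
    """Sketch of a recurrent convolutional binding-preference model in the
    spirit of DeeperBind; all hyperparameters here are illustrative."""
    def __init__(self, motif_len=11, n_filters=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(4, n_filters, motif_len)   # 4 = A,C,G,T channels
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, 4, seq_len), one-hot
        h = torch.relu(self.conv(x))      # (batch, n_filters, L')
        out, _ = self.lstm(h.transpose(1, 2))
        return self.head(out[:, -1])      # score from the last position's state

probe = torch.zeros(2, 4, 40)
probe[:, 0, :] = 1.0                      # dummy all-A probes
print(ConvLSTMBinder()(probe).shape)      # torch.Size([2, 1])
```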
A novel broadband AMC surface for lowering the height of planar antennas | A novel broadband artificial magnetic conductor (AMC) surface is proposed for lowering the height of planar antennas. By replacing the conventional perfect electric conductor (PEC) ground plane with the AMC surface, the height of a planar antenna can be significantly reduced. Three planar antennas for LTE700/GSM900 base stations are investigated. The antenna height is lowered from 0.263λ to 0.083λ, where λ is the free-space wavelength at 0.83 GHz. The height-lowered planar antennas maintain a wide bandwidth: 38% for a linearly polarized antenna, 35% for a dual-polarized antenna, and 46% for a circularly polarized antenna. All these planar antennas above the AMC surface have a gain higher than 7 dBi.
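As a worked check of what those electrical heights mean physically (standard free-space wavelength arithmetic applied to the quoted ratios):

```latex
\[
  \lambda_{0.83\,\mathrm{GHz}} \;=\; \frac{c}{f}
  \;=\; \frac{3\times 10^{8}\ \mathrm{m/s}}{0.83\times 10^{9}\ \mathrm{Hz}}
  \;\approx\; 361\ \mathrm{mm},
\]
\[
  0.263\,\lambda \approx 95\ \mathrm{mm}
  \qquad\text{versus}\qquad
  0.083\,\lambda \approx 30\ \mathrm{mm}.
\]
```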
Transformational leadership: relations to the five-factor model and team performance in typical and maximum contexts. | This study examined the 5-factor model of personality, transformational leadership, and team performance under conditions similar to typical and maximum performance contexts. Data were collected from 39 combat teams from an Asian military sample (N = 276). Results found that neuroticism and agreeableness were negatively related to transformational leadership ratings. Team performance ratings correlated at only .18 across the typical and maximum contexts. Furthermore, transformational leadership related more strongly to team performance in the maximum rather than the typical context. Finally, transformational leadership fully mediated the relationship between leader personality and team performance in the maximum context but only partially mediated the relationship between leader personality and team performance in the typical context. The Discussion section focuses on how these findings, although interesting, need to be replicated with different designs, contexts, and measures.
WebCrow: A Web-Based System for Crossword Solving | Language games represent one of the most fascinating challenges of research in artificial intelligence. In this paper we give an overview of WebCrow, a system that tackles crosswords using the Web as a knowledge base. This appears to be a novel approach with respect to the available literature. It is also the first solver for non-English crosswords and it has been designed to be potentially multilingual. Although WebCrow has been implemented only in a preliminary version, it already displays very interesting results, reaching the performance of a human beginner: crosswords that are "easy" for expert humans are solved, within competition time limits, with 80% of words correct and over 90% of letters correct. Introduction: Crosswords are probably the most played language puzzles worldwide. Until recently, research in artificial intelligence focused only on the NP-complete crossword generation problem (Mazlack 1976) (Ginsberg et al. 1990), which can now be solved in a few seconds for reasonable dimensions (Ginsberg 1993). Conversely, problems like solving crosswords from clues have been defined as AI-complete (Littman 2000) and are extremely challenging for machines, since there is no closed-world assumption and they require human-level knowledge. Interestingly, AI nowadays has the opportunity to exploit a stable nucleus of technology, such as search engines, information retrieval and machine learning techniques, that can enable computers to handle the semantics of real-life concepts. We will present here a software system, called WebCrow, whose major assumption is to attack crosswords making use of the Web as its primary source of knowledge, this being an extremely rich and self-updating repository of human knowledge. (The WebCrow project, with its emphasis on the power of searching the Web, was briefly described in Nature (News-Nature 2004) and mentioned on the AAAI Web site concerning games and puzzles, http://www.aaai.org/AITopics/html/crosswd.html.) This paper gives a general overview of the modules and the subproblems present in the project. To the best of our knowledge, the only significant attempt reported in the literature to tackle this problem is the Proverb system, which has reached human-like performance (Keim, Shazeer, & Littman 1999). Unlike Proverb, WebCrow does not possess any knowledge-specific expert module, nor a large crossword database. Instead, in order to stress the generality of its knowledge and its language independence, WebCrow has at its core a module performing a special form of question answering on the Web that we shall call clue-answering. The second goal of the system, filling the crossword grid with the best set of candidate answers, has been tackled as a Probabilistic-CSP. Italian crosswords are structurally similar to American ones, but they differ in two aspects: two-letter words (name initials, acronyms, abbreviations, or portions of a word) are allowed and, more importantly, they contain stronger and more ambiguous references to socio-cultural and political topics. The latter phenomenon is especially present in newspapers and introduces an additional degree of complexity in crossword solving, since it requires knowledge that is extremely broad, fresh, and robust to deliberate vagueness and ambiguity. Neologisms and compound words are often present.
We have collected 685 examples of Italian crosswords. The majority were obtained from two sources: the main Italian crossword magazine La Settimana Enigmistica and the crossword section of the main on-line newspaper La Repubblica. 60 puzzles were randomly chosen to form the experimental test suite. The remaining part constituted a database of clue-answer pairs used as an aid in the answering process. In this paper the challenge of WebCrow is to solve Italian crosswords within a 15-minute time limit (running the program on a common laptop), as in real competitions. The version of WebCrow that is discussed here is preliminary, but it has already given very promising results. WebCrow averaged around 70% words correct and 80% letters correct on the overall test set. On the examples that expert players consider "easy", WebCrow performed with 80% words correct (100% in one case) and over 90% letters correct. Architectural overview: WebCrow has been designed as a modular and data-driven system.
The public health impact of obesity. | The increase in obesity worldwide will have an important impact on the global incidence of cardiovascular disease, type 2 diabetes mellitus, cancer, osteoarthritis, work disability, and sleep apnea. Obesity has a more pronounced impact on morbidity than on mortality. Disability due to obesity-related cardiovascular diseases will increase particularly in industrialized countries, as patients survive cardiovascular diseases in these countries more often than in nonindustrialized countries. Disability due to obesity-related type 2 diabetes will increase particularly in industrializing countries, as insulin supply is usually insufficient in these countries. As a result, in these countries, an increase in disabling nephropathy, arteriosclerosis, neuropathy, and retinopathy is expected. Increases in the prevalence of obesity will potentially lead to an increase in the number of years that subjects suffer from obesity-related morbidity and disability. A 1% increase in the prevalence of obesity in such countries as India and China leads to 20 million additional cases of obesity. Prevention programs will stem the obesity epidemic more efficiently than weight-loss programs. However, only a few prevention programs have been developed or implemented, and the success rates reported to date have been low. Obesity prevention programs should be high on the scientific and political agenda in both industrialized and industrializing countries. |
Rapid synthesis of monodispersed highly porous spinel nickel cobaltite (NiCo2O4) electrode material for supercapacitors | Monodispersed highly porous spinel nickel cobaltite electrode material was successfully synthesized in a short time using a combustion technique. The single-phase cubic nature of the spinel nickel cobaltite, with an average crystallite size of 24 nm, was determined from X-ray diffraction. Functional groups present in the compound were determined by FTIR, further confirming spinel formation. FESEM images reveal the porous nature of the prepared material and the uniform size distribution of the particles. Electrochemical evaluation was performed using cyclic voltammetry (CV), chronopotentiometry (CP), and electrochemical impedance spectroscopy (EIS). The results reveal the typical pseudocapacitive behaviour of the material. A maximum capacitance of 754 F/g was calculated at a scan rate of 5 mV/s; this high capacitance was attributed to the unique porous morphology of the electrode. The Nyquist plot indicates the low resistance and good electrical conductivity of nickel cobaltite. Nickel cobaltite prepared by this method is thus a promising electrode material for supercapacitor applications.
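The quoted specific capacitance is the kind of value usually extracted from the CV curve area; a sketch of the standard relation (the abstract does not state the exact formula used), where $I$ is the current, $m$ the active mass, $\nu$ the scan rate, and $\Delta V$ the potential window:

```latex
% Standard CV-derived specific capacitance (assumed, not quoted from the paper):
\[
  C_{sp} \;=\; \frac{\int I \,\mathrm{d}V}{2\, m\, \nu\, \Delta V}
\]
```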
LEA: A 128-Bit Block Cipher for Fast Encryption on Common Processors | We propose a new block cipher, LEA, which has a 128-bit block size and a 128-, 192-, or 256-bit key size. It provides high-speed software encryption on general-purpose processors. Our experiments show that LEA is faster than AES on Intel, AMD, ARM, and ColdFire platforms. LEA can also be implemented with a tiny code size. Its hardware implementation has a competitive throughput per area. It is secure against all the existing attacks on block ciphers.
2.4 GHz DOA finder using modified butler matrix for 2×2 array antennas | Techniques for finding the direction of arrival (DOA) can be promisingly constructed with smart antenna technology, which consists of array antennas and a signal processing unit. Because of the limited space in commercial products, the antenna array should be designed with the smallest possible dimensions, and the processing unit is expected to have low complexity and cost. Most recently proposed DOA finders employ linear arrays, whose size increases linearly with the number of antenna elements, and their processing units require a high level of computation. In this paper, a low-profile 2.4 GHz DOA finder is proposed. The DOA finder is designed in the compact size of a 2×2 antenna array. Using a modified Butler matrix, the processing unit is simple enough to be handled by any economical microprocessor. The experimental results confirm the success of the proposed DOA finder.
Off-Road Obstacle Avoidance through End-to-End Learning | We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50 cm off-road truck, with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at a speed of 2 m/s.
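A minimal sketch of such an end-to-end steering regressor: a stacked left/right camera pair (six input channels) maps directly to a steering command. The paper's network has six layers; the layer shapes below are guesses for illustration.

```python
import torch
import torch.nn as nn

# End-to-end steering: raw stereo pixels in, steering angle out. Layer
# shapes are illustrative, not the paper's actual architecture.
net = nn.Sequential(
    nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1), nn.Tanh(),           # steering angle in [-1, 1]
)
stereo_pair = torch.rand(1, 6, 60, 80)     # low-resolution left/right images
print(net(stereo_pair))                    # supervised target: human steering
```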
Creating interactive physics education books with augmented reality | Augmented Books show three-dimensional animated educational content and provide a means for students to interact with this content in an engaging learning experience. In this paper we present a framework for creating educational Augmented Reality (AR) books that overlay virtual content over real book pages. The framework features support for certain types of user interaction, model and texture animations, and an enhanced marker design suitable for educational books. Three books teaching electromagnetism concepts were created with this framework. To evaluate the effectiveness in helping students learn, we conducted a small pilot study with ten secondary school students, studying electromagnetism concepts using the three books. Half of the group used the books with the diagrams augmented, while the other half used the books without augmentation. Participants completed a pre-test, a test after the learning session and a retention test administered 1 month later. Results suggest that AR has potential to be effective in teaching complex 3D concepts. |
SPORC: Group Collaboration using Untrusted Cloud Resources | Cloud-based services are an attractive deployment model for user-facing applications like word processing and calendaring. Unlike desktop applications, cloud services allow multiple users to edit shared state concurrently and in real-time, while being scalable, highly available, and globally accessible. Unfortunately, these benefits come at the cost of fully trusting cloud providers with potentially sensitive and important data. To overcome this strict tradeoff, we present SPORC, a generic framework for building a wide variety of collaborative applications with untrusted servers. In SPORC, a server observes only encrypted data and cannot deviate from correct execution without being detected. SPORC allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. We demonstrate SPORC’s flexibility through two prototype applications: a causally-consistent key-value store and a browser-based collaborative text editor. Conceptually, SPORC illustrates the complementary benefits of operational transformation (OT) and fork* consistency. The former allows SPORC clients to execute concurrent operations without locking and to resolve any resulting conflicts automatically. The latter prevents a misbehaving server from equivocating about the order of operations unless it is willing to fork clients into disjoint sets. Notably, unlike previous systems, SPORC can automatically recover from such malicious forks by leveraging OT’s conflict resolution mechanism. |
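To illustrate the operational-transformation half of the design, here is the classic transform rule for two concurrent text insertions; SPORC's actual operation types and tie-breaking are richer, so treat this as a sketch.

```python
def transform_insert(op_a, op_b):
    """Rewrite insertion op_a so it applies after concurrent insertion op_b
    (the classic OT rule for two inserts, with a deterministic tie-break).
    SPORC's real operation set and ordering rules are richer than this."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a < pos_b or (pos_a == pos_b and text_a <= text_b):
        return (pos_a, text_a)
    return (pos_a + len(text_b), text_a)

doc = "shared doc"
a = (7, "text ")                       # client A inserts at offset 7
b = (0, "our ")                        # client B concurrently inserts at offset 0
doc = doc[:b[0]] + b[1] + doc[b[0]:]   # apply B first
pos, text = transform_insert(a, b)     # shift A past B's insertion
doc = doc[:pos] + text + doc[pos:]
print(doc)                             # "our shared text doc" on every replica
```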
Permanent molar pulpotomy with a new endodontic cement: A case series | The aim of this case series was to determine the clinical and radiographic success rate of pulpotomy with a new endodontic cement (NEC) in human mature permanent molar teeth. Twelve molars with established irreversible pulpitis were selected from patients aged 14-62 years. The selection criteria included carious pulp exposure with a positive history of lingering pain. After isolation, caries removal, and pulp exposure, pulpotomy with NEC was performed and a permanent restoration was immediately placed. At the first recall (+1 day) no patients reported postoperative pain. One wisdom tooth was extracted after two months because of failure of the coronal restoration. Eleven patients were available for the second recall, at a mean time of 15.8 months. Clinical and radiographic examination revealed that all teeth were functional and free of signs and symptoms. Histological examination of the extracted tooth revealed complete dentin bridge formation and a normal pulp. Although the results favored the use of NEC, more studies with larger samples and a longer recall period were suggested to justify the use of this novel material for the treatment of irreversible pulpitis in human permanent molar teeth.
Surface microstructures and high-temperature high-pressure corrosion behavior of N18 zirconium alloy induced by high current pulsed electron beam irradiation | The aim of the present paper was to study the microstructural evolution and the high-temperature high-pressure corrosion behavior of N18 zirconium alloy before and after high-current pulsed electron beam (HCPEB) surface irradiation. Microstructural observations showed that after HCPEB irradiation the surface layer of the sample was melted; within this layer, progressively refined grains, martensite, and internal twin substructures were obtained. Meanwhile, second-phase particles (SPPs) were dissolved into the matrix within the modified layer, and the alloying elements Fe, Cr, and Nb were supersaturated in the Zr-based solid solution. In addition, high residual compressive stress accumulated continuously in the surface layer as the number of pulses increased. After exposure to 500 °C/10.3 MPa superheated steam for 30 days, the corrosion resistance of the multiple-pulsed samples, particularly the 25-pulsed one, was obviously higher than that of the initial sample. The combined action of high residual compressive stress, supersaturated alloying elements, a selective purification effect, and abundant structural defects in the modified layer helped improve the corrosion resistance.
Handbook for Shipboard Sedimentologists |
Acquiring COTS Software Selection Requirements | Today's need for rapid software development has generated great interest in employing Commercial-Off-The-Shelf (COTS) software products as a way of managing cost, development time, and effort. With an abundance of COTS software packages to choose from, the problem now is how to systematically evaluate, rank, and select a COTS product that best meets the software project requirements and at the same time can leverage the current corporate information technology architectural environment. This paper describes a systematic process for decision support in evaluating and ranking COTS software. Performed right after the requirements analysis, this process provides evaluators with concise, structured, step-by-step activities for determining the best COTS software product with manageable risk. In addition, the process is presented in phases that are flexible enough to allow customization or tailoring to meet various projects' requirements.
Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing |
BotHunter: Detecting Malware Infection Through IDS-Driven Dialog Correlation | We present a new kind of network perimeter monitoring strategy, which focuses on recognizing the infection and coordination dialog that occurs during a successful malware infection. BotHunter is an application designed to track the two-way communication flows between internal assets and external entities, developing an evidence trail of data exchanges that match a state-based infection sequence model. BotHunter consists of a correlation engine that is driven by three malware-focused network packet sensors, each charged with detecting specific stages of the malware infection process, including inbound scanning, exploit usage, egg downloading, outbound bot coordination dialog, and outbound attack propagation. The BotHunter correlator then ties together the dialog trail of inbound intrusion alarms with those outbound communication patterns that are highly indicative of successful local host infection. When a sequence of evidence is found to match BotHunter’s infection dialog model, a consolidated report is produced to capture all the relevant events and event sources that played a role during the infection process. We refer to this analytical strategy of matching the dialog flows between internal assets and the broader Internet as dialog-based correlation, and contrast this strategy to other intrusion detection and alert correlation methods. We present our experimental results using BotHunter in both virtual and live testing environments, and discuss our Internet release of the BotHunter prototype. BotHunter is made available both for operational use and to help stimulate research in understanding the life cycle of malware infections. |
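A toy dialog-based correlator in the spirit described: flag a host once inbound infection evidence is followed by outbound bot behavior. The stage names and threshold rule are simplifications, not BotHunter's actual scoring.

```python
# Toy infection-dialog correlator; BotHunter's real model uses weighted
# evidence thresholds across its sensor alarms, which this omits.
def infected(events, min_outbound=1):
    """events: list of (stage, host). Flags a host showing inbound
    evidence (exploit or egg download) plus outbound bot behavior."""
    seen = {}
    for stage, host in events:
        seen.setdefault(host, set()).add(stage)
    flagged = []
    for host, stages in seen.items():
        inbound = {"exploit", "egg_download"} & stages
        outbound = {"cnc_dialog", "outbound_scan"} & stages
        if inbound and len(outbound) >= min_outbound:
            flagged.append(host)
    return flagged

print(infected([("inbound_scan", "10.0.0.5"), ("exploit", "10.0.0.5"),
                ("cnc_dialog", "10.0.0.5"), ("inbound_scan", "10.0.0.9")]))
# ['10.0.0.5'] -> only the host with a full infection dialog is flagged
```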
A coding approach to event correlation | This paper describes a novel approach to event correlation in networks based on coding techniques. Observable symptom events are viewed as a code that identifies the problems that caused them; correlation is performed by decoding the set of observed symptoms. The coding approach has been implemented in the SMARTS Event Management System (SEMS), as a server running under Sun Solaris 2.3. Preliminary benchmarks of the SEMS demonstrate that the coding approach provides a speedup of at least two orders of magnitude over other published correlation systems. In addition, it is resilient to high rates of symptom loss and false alarms. Finally, the coding approach scales well to very large domains involving thousands of problems.
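The decoding idea can be shown with a toy codebook: rows are problems, columns are symptoms, and diagnosis picks the row nearest the observed symptom vector in Hamming distance, which is what makes the approach robust to lost symptoms and false alarms. The matrix below is invented for illustration, not the SEMS format.

```python
import numpy as np

# Toy codebook: rows = problems, columns = symptoms; a 1 means the problem
# is expected to produce that symptom. Decoding = nearest row in Hamming
# distance to the observed symptom vector.
codebook = np.array([
    [1, 1, 0, 0],   # problem A
    [0, 1, 1, 0],   # problem B
    [0, 0, 1, 1],   # problem C
])
observed = np.array([1, 0, 0, 0])            # one symptom of A was lost
dist = (codebook != observed).sum(axis=1)    # Hamming distance per problem
print("diagnosis: problem", "ABC"[int(dist.argmin())])   # still decodes to A
```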
Fundamental models for fuel cell engineering. | Section outline: experimental diagnostics and validation for polymer electrolyte fuel cells (temperature distribution measurements, two-phase visualization, experimental validation, modeling the catalyst layer at pore level); direct methanol fuel cells (technical challenges: methanol oxidation kinetics, methanol crossover, water management, heat management; DMFC modeling; experimental diagnostics; model validation); and solid oxide fuel cells (SOFC models); with summary and outlook subsections and closing remarks.
Characterizing the scalability of a large web-based shopping system | This article presents an analysis of five days of workload data from a large Web-based shopping system. The multitier environment of this Web-based shopping system includes Web servers, application servers, database servers, and an assortment of load-balancing and firewall appliances. We characterize user requests and sessions and determine their impact on system performance scalability. The purpose of our study is to assess scalability and support capacity planning exercises for the multitier system. We find that horizontal scalability is not always an adequate mechanism for supporting increased workloads and that personalization and robots can have a significant impact on system scalability. |
A Security Threats Measurement Model for Reducing Cloud Computing Security Risk | Cloud computing is the trend in information resource sharing and an important tool for enterprises and organizations to enhance competitiveness. Users have the flexibility to adjust their requests and configurations. However, network security and resource sharing issues continue to threaten the operation of cloud computing, putting cloud computing security to a serious test. How to evaluate cloud computing security has become a topic worthy of further exploration. Combining security factors at three levels (system, management, and technique), this paper proposes a Security Threats Measurement Model (STMM). Applying the STMM, the security of a cloud computing environment can be effectively evaluated and security defects and problems concretely identified. Based on the evaluation results, the user can choose a more secure service provider or request security improvement actions from the provider.
Genetic liability to fractures in the elderly. | BACKGROUND
The genetic impact on the causation of osteoporotic fractures is unclear. A large twin study is ideally suited to determine the genetic liability to categories of fracture at various ages.
METHODS
A cohort of all 33 432 Swedish twins born from 1896 to 1944 was used to evaluate the genetic liability to fracture occurrence in the elderly. The Swedish Inpatient Registry and computer-assisted telephone interviews enabled us to identify 6021 twins with any fracture, 3599 with an osteoporotic fracture, and 1055 with a hip fracture after the age of 50 years.
RESULTS
Genetic variation in liability to fracture differed considerably by type of fracture and age. Less than 20% of the overall age-adjusted fracture variance was explained by genetic variation. The age-adjusted heritability of any osteoporotic fracture was slightly greater (0.27; 95% confidence interval [CI], 0.09-0.28), and for hip fracture alone, it was 0.48 (95% CI, 0.28-0.57). Heritability was not attenuated after further adjustment for several known osteoporotic covariates but was considerably greater for first hip fractures before the age of 69 years (0.68; 95% CI, 0.41-0.78) and between 69 and 79 years (0.47; 95% CI, 0.04-0.62) than for hip fractures after 79 years of age (0.03; 95% CI, 0.00-0.26).
CONCLUSIONS
The importance of genetic factors in propensity to fractures depends on fracture site and age. The search for susceptibility genes and environmental factors that may modulate expression of these genes in younger elderly patients with hip fracture, the most devastating osteoporotic fracture, should be encouraged. Prevention of fractures in the oldest elderly should focus on lifestyle interventions. |
Maintenance of antifracture efficacy over 10 years with strontium ranelate in postmenopausal osteoporosis | In an open-label extension study, BMD increased continuously with strontium ranelate over 10 years in osteoporotic women (P < 0.01). Vertebral and nonvertebral fracture incidence was lower between 5 and 10 years than in a matched placebo group over 5 years (P < 0.05). Strontium ranelate's antifracture efficacy appears to be maintained long term. Strontium ranelate has proven efficacy against vertebral and nonvertebral fractures, including hip, over 5 years in postmenopausal osteoporosis. We explored long-term efficacy and safety of strontium ranelate over 10 years. Postmenopausal osteoporotic women participating in the double-blind, placebo-controlled phase 3 studies SOTI and TROPOS to 5 years were invited to enter a 5-year open-label extension, during which they received strontium ranelate 2 g/day (n = 237, 10-year population). Bone mineral density (BMD) and fracture incidence were recorded, and FRAX® scores were calculated. The effect of strontium ranelate on fracture incidence was evaluated by comparison with a FRAX®-matched placebo group identified in the TROPOS placebo arm. The patients in the 10-year population had baseline characteristics comparable to those of the total SOTI/TROPOS population. Over 10 years, lumbar BMD increased continuously and significantly (P < 0.01 versus previous year) with 34.5 ± 20.2% relative change from baseline to 10 years. The incidence of vertebral and nonvertebral fracture with strontium ranelate in the 10-year population in years 6 to 10 was comparable to the incidence between years 0 and 5, but was significantly lower than the incidence observed in the FRAX®-matched placebo group over 5 years (P < 0.05); relative risk reductions for vertebral and nonvertebral fractures were 35% and 38%, respectively. Strontium ranelate was safe and well tolerated over 10 years. Long-term treatment with strontium ranelate is associated with sustained increases in BMD over 10 years, with a good safety profile. Our results also support the maintenance of antifracture efficacy over 10 years with strontium ranelate. |
Research Guides: ENGS 7: Changing Climate: What to Keep in Mind | This course explores the published scientific literature on the nature and cause of climate change, potential impacts on us, and the implications for our nation's energy issues. |
Control of single-phase cascaded H-bridge multilevel inverter with modified MPPT for grid-connected photovoltaic systems | This paper presents a control technique for a cascaded H-bridge multilevel voltage source inverter (CHB-MLI) in a grid-connected photovoltaic system (GCPVS). The proposed control technique is a modified ripple-correlation control maximum power point tracking (MRCC-MPPT) algorithm. It has been developed using the mean-function concept to continuously correct the maximum power point (MPP) of the power transferred from each PV string and to reach the MPP quickly under rapidly changing irradiance. Additionally, it can reduce the PV voltage harmonic filtering needed in the dc-link voltage controller. For injecting high-quality current into the utility grid, a current control technique based on the rotating reference frame is proposed. This method generates sinusoidal current and independently controls the injection of active and reactive power into the utility grid. Simulation results for a two-cell 4000 W/220 V/50 Hz CHB-MLI GCPVS are presented to validate the proposed control scheme.
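A generic, textbook-style ripple-correlation update for reference, not the paper's modified algorithm: dP/dV is positive below the MPP and negative above it, so the mean product of the power and voltage ripples tells the controller which way to move.

```python
import numpy as np

def rcc_mppt_step(v, p, duty, k=0.5):
    """One generic ripple-correlation MPPT update (not the paper's exact
    modified law). v, p: PV voltage and power sampled over one ripple
    window; the sign of mean(dp*dv) steers the duty cycle toward the MPP.
    Whether increasing duty raises the PV voltage depends on the converter."""
    dv, dp = np.diff(v), np.diff(p)
    return float(np.clip(duty + k * np.mean(dp * dv), 0.0, 1.0))

# Synthetic PV string: P(V) peaks at 30 V; ripple samples around 25 V.
v = 25 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 50))
p = -(v - 30) ** 2 + 100
print(rcc_mppt_step(v, p, duty=0.5))   # nudged upward, toward the MPP
```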
The Politics of Paradox: Metaphysics Beyond “Political Ontology” | Since the onset of the Enlightenment much of modern thought has celebrated the end of metaphysics and the death of God. The project of ‘political ontology’, which combines post-metaphysical with post-theistic thinking, underpins the scientific rationalism that pervades contemporary philosophy and politics. Faced with the secular slide into skepticism, relativism and nihilism, this essay argues that the only genuine alternative to ‘political ontology’ is a metaphysical politics of paradox.
Philosophically, modernity and post-modernity invented and intensified the onto-theological science of transcendental ontology that can be traced to Scotus, Ockham, Machiavelli and Suarez. They bequeathed three currents – possibilism, transcendentalism and absolutism – that flow through figures such as Descartes, Wolff and Clauberg to Kant, Hegel and Comte.
Politically, all the modern binary opposites such as state versus market or left versus right are grounded in a logic of dualism – the aporia between unalterable nature (the originally violent ‘state of nature’) and human artifice (the social contract). This logic reduces real relations among people or between humanity and the natural world to nominal connections that take the form of constitutional-legal rights or economic-contractual ties. Such nominal connections undermine the social bonds of reciprocity and mutuality and the intermediary institutions of civil society upon which vibrant democracies and market economies depend.
By contrast, the alternative logic of paradox eschews the dualistic categories in favor of a ‘radical center’ – the metaphysical-political realm of real relations and the common good in which all can share through diverse forms of association that hold society together. |
FAST - An Automatic Generation System for Grammar Tests | Testing has long been acknowledged as an integral part of language teaching; however, manually designed tests are not only time consuming but also labor intensive. Lately, due to the remarkable progress of computer technology, computer-assisted item generation (CAIG) has drawn considerable attention and become one of the core subjects in CALL (Computer Assisted Language Learning). CAIG provides an alternative way to automatically generate questions in a relatively short time, effectively establish item banks on a large scale, and possibly support adaptive testing for incremental language learning, solving the underlying problems of time and expenditure. Previous work has explored the generation of reading comprehension, vocabulary, and cloze questions, but very little has been done on grammar tests. The purpose of this paper is to address the automatic creation of English grammar tests. We introduce a method to automatically generate grammar test items by applying Natural Language Processing (NLP) techniques. Based on test patterns adapted from TOEFL (Test of English as a Foreign Language), sentences gathered from the Web are transformed into tests of grammaticality. The method involves representing test-writing knowledge as structural patterns, acquiring authentic sentences from the Web, and applying patterns to transform sentences into items. At runtime, sentences are converted into two types of TOEFL-style question: multiple-choice and error detection. A prototype system, FAST (Free Assessment of Structural Tests), with an initial evaluation on a set of generated questions, indicates that the proposed method has great potential as an educational application. Our methodology provides a promising approach and offers significant potential for computer-assisted language learning and assessment. Keywords: computer-assisted item generation, computer-assisted language learning, grammar. Introduction: Language testing, aiming to assess learners' language ability, is an essential part of both language teaching and learning. Language tests provide information about the results of learning and instruction and can be used to make decisions about individuals (e.g., placement, achievement, and diagnostic tests). Among all kinds of tests, grammar tests are commonly used in every educational setting and are always included in well-established standardized tests like TOEFL.
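A toy version of the pattern-based transformation for the multiple-choice item type; the distractor list and helper names are invented for illustration.

```python
import random

def make_grammar_item(sentence, target, distractors, seed=0):
    """Turn an authentic sentence into a TOEFL-style multiple-choice item
    by blanking a grammatical target and shuffling distractor forms.
    A toy rendition of the pattern-based transformation FAST describes."""
    rng = random.Random(seed)
    stem = sentence.replace(target, "____", 1)
    options = [target] + distractors
    rng.shuffle(options)
    answer = "ABCD"[options.index(target)]
    return stem, options, answer

stem, opts, ans = make_grammar_item(
    "She has lived in Paris since 2010.",
    "has lived", ["lived", "is living", "lives"])
print(stem)
for label, opt in zip("ABCD", opts):
    print(f"  ({label}) {opt}")
print("answer:", ans)
```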
SPRINT: A Scalable Parallel Classifier for Data Mining | Classification is an important data mining problem. Although classification is a well-studied problem, most current classification algorithms require that all or a portion of the entire dataset remain permanently in memory. This limits their suitability for mining over large databases. We present a new decision-tree-based classification algorithm, called SPRINT, that removes all of the memory restrictions and is fast and scalable. The algorithm has also been designed to be easily parallelized, allowing many processors to work together to build a single consistent model. This parallelization, also presented here, exhibits excellent scalability as well. The combination of these characteristics makes the proposed algorithm an ideal tool for data mining. |
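The split-selection step that a SPRINT-style tree builder repeats at every node can be illustrated with the in-memory toy below; it uses the gini index over a sorted numeric attribute list, but does not model the disk-resident attribute lists or the parallelization that are the paper's actual contributions.

```python
# Toy illustration of SPRINT's per-node split search: scan a sorted
# numeric attribute list and pick the threshold minimizing weighted gini.
# Real SPRINT keeps attribute lists on disk and parallelizes this scan;
# neither aspect is modeled here.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def best_numeric_split(values, labels):
    """Return (threshold, weighted gini) of the best binary split."""
    pairs = sorted(zip(values, labels))
    classes = set(labels)
    below = {c: 0 for c in classes}
    above = {c: labels.count(c) for c in classes}
    best_threshold, best_gini = None, float("inf")
    for i in range(len(pairs) - 1):
        value, label = pairs[i]
        below[label] += 1
        above[label] -= 1
        n_lo, n_hi = i + 1, len(pairs) - i - 1
        g = (n_lo * gini(below.values()) + n_hi * gini(above.values())) / len(pairs)
        if g < best_gini:
            best_threshold, best_gini = (value + pairs[i + 1][0]) / 2, g
    return best_threshold, best_gini

# Ages vs. a binary class; the best split cleanly separates at 35.0.
print(best_numeric_split([23, 30, 40, 45, 55], ["hi", "hi", "lo", "lo", "lo"]))
```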
Invited review: Exercise training-induced changes in insulin signaling in skeletal muscle. | This review will provide insight on the current understanding of the intracellular signaling mechanisms by which exercise training increases glucose metabolism and gene expression in skeletal muscle. Participation in regular exercise programs can have important clinical implications, leading to improved health in insulin-resistant persons. Evidence is emerging that insulin signal transduction at the level of insulin receptor substrates 1 and 2, as well as phosphatidylinositol 3-kinase, is enhanced in skeletal muscle after exercise training. This is clinically relevant because insulin signaling is impaired in skeletal muscle from insulin-resistant Type 2 diabetic and obese humans. The molecular mechanism for enhanced insulin-stimulated glucose uptake after exercise training may be partly related to increased expression and activity of key proteins known to regulate glucose metabolism in skeletal muscle. Exercise also leads to an insulin-independent increase in glucose transport, mediated in part by AMP-activated protein kinase. Changes in protein expression may be related to increased signal transduction through the mitogen-activated protein kinase signaling cascades, a pathway known to regulate transcriptional activity. Understanding the molecular mechanism for the activation of insulin signal transduction pathways after exercise training may provide novel entry points for new strategies to enhance glucose metabolism and for improved health in the general population. |
Supervised tensor learning | Tensor representation is helpful in reducing the small sample size problem in discriminative subspace selection. As pointed out in this paper, this is mainly because the structural information of objects in computer vision research provides a reasonable constraint to reduce the number of unknown parameters used to represent a learning model. We therefore apply this information to vector-based learning and generalize it to tensor-based learning in the supervised tensor learning (STL) framework, which accepts tensors as input. To obtain the solution of STL, an alternating projection optimization procedure is developed. The STL framework combines convex optimization with operations in multilinear algebra. The tensor representation helps reduce the overfitting problem in vector-based learning. Based on STL and its alternating projection optimization procedure, we generalize support vector machines, the minimax probability machine, Fisher discriminant analysis, and distance metric learning to support tensor machines, the tensor minimax probability machine, tensor Fisher discriminant analysis, and multiple distance metrics learning, respectively. We also study the iterative procedure for feature extraction within STL. To examine the effectiveness of STL, we implement the tensor minimax probability machine for image classification. Compared with the minimax probability machine, the tensor version reduces the overfitting problem. |
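The alternating projection idea can be sketched on the simplest case, a rank-1 weight tensor W = u v^T fitted by least squares to matrix-shaped inputs; the objective and data below are illustrative stand-ins, not the paper's support tensor machine or tensor MPM formulations.

```python
# A minimal sketch of STL's alternating projection on the simplest case:
# a rank-1 weight tensor W = u v^T fitted to matrix inputs X_i by least
# squares (illustrative objective; not the paper's SVM/MPM variants).
import numpy as np

def rank1_fit(X, y, iters=50):
    """Fit y_i ~ u^T X_i v by alternating between solving for u and for v."""
    n, p, q = X.shape
    rng = np.random.default_rng(0)
    u, v = rng.normal(size=p), rng.normal(size=q)
    for _ in range(iters):
        # Fix v: each sample collapses to the vector X_i v, a linear problem in u.
        u = np.linalg.lstsq(X @ v, y, rcond=None)[0]
        # Fix u: each sample collapses to u^T X_i, a linear problem in v.
        v = np.linalg.lstsq(np.einsum("p,npq->nq", u, X), y, rcond=None)[0]
    return u, v

# Toy data with a planted rank-1 signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5, 6))
u0, v0 = rng.normal(size=5), rng.normal(size=6)
y = np.einsum("p,npq,q->n", u0, X, v0)
u, v = rank1_fit(X, y)
print("residual:", np.linalg.norm(np.einsum("p,npq,q->n", u, X, v) - y))
```

Note how fixing one factor reduces each bilinear step to an ordinary least-squares problem; this is the reduction in unknown parameters (p + q instead of p x q) that the abstract credits for the smaller overfitting problem.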
Cubrick: A Scalable Distributed MOLAP Database for Fast Analytics | This paper describes the architecture and design of Cubrick, a distributed multidimensional in-memory database that enables real-time data analysis of large dynamic datasets. Cubrick has a strictly multidimensional data model composed of dimensions, dimensional hierarchies and metrics, supporting sub-second MOLAP operations such as slice and dice, roll-up and drill-down over terabytes of data. All data stored in Cubrick is chunked in every dimension and stored within containers called bricks in an unordered and sparse fashion, providing high data ingestion ratios and indexed access through every dimension. In this paper, we describe Cubrick's internal data structures, distributed model, and query execution engine, along with a few details of the current implementation. Finally, we present some experimental results from a first Cubrick deployment inside Facebook. |
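How chunking in every dimension yields a brick address can be illustrated with a toy row-major scheme; the chunk sizes and the id layout below are assumptions made for illustration, not Cubrick's actual encoding.

```python
# Hypothetical sketch of brick addressing: chunk every dimension by a
# fixed range and combine the per-dimension chunk indices row-major.
# Chunk sizes and the id layout are assumptions, not Cubrick's encoding.
def brick_id(coords, chunk_sizes, chunks_per_dim):
    """Map a cell's dimension coordinates to its containing brick's id."""
    bid = 0
    for coord, size, n_chunks in zip(coords, chunk_sizes, chunks_per_dim):
        bid = bid * n_chunks + coord // size
    return bid

# Two dimensions: region id chunked by 4, day-of-year chunked by 30.
# Cell (9, 200) falls in chunk (2, 6), hence brick 2 * 13 + 6 = 32.
print(brick_id((9, 200), chunk_sizes=(4, 30), chunks_per_dim=(8, 13)))
```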
Evaluation of Consumer Understanding of Different Front-of-Package Nutrition Labels, 2010–2011 | INTRODUCTION
Governments throughout the world are using or considering various front-of-package (FOP) food labeling systems to provide nutrition information to consumers. Our web-based study tested consumer understanding of different FOP labeling systems.
METHODS
Adult participants (N = 480) were randomized to 1 of 5 groups to evaluate FOP labels: 1) no label; 2) multiple traffic light (MTL); 3) MTL plus daily caloric requirement icon (MTL+caloric intake); 4) traffic light with specific nutrients to limit based on food category (TL+SNL); or 5) the Choices logo. Total percentage-correct quiz scores were computed, reflecting participants' ability to select the healthier of 2 foods and to estimate the amounts of saturated fat, sugar, and sodium in foods. Participants also rated products on taste, healthfulness, and how likely they were to purchase the product. Quiz scores and product perceptions were compared with 1-way analysis of variance followed by post-hoc Tukey tests (see the analysis sketch after this abstract).
RESULTS
The MTL+caloric intake group (mean [standard deviation], 73.3% [6.9%]) and Choices group (72.5% [13.2%]) significantly outperformed the no label group (67.8% [10.3%]) and the TL+SNL group (65.8% [7.3%]) on the healthier-product quiz. The MTL and MTL+caloric intake groups achieved average scores of more than 90% on the saturated fat, sugar, and sodium quizzes, significantly better than the no label and Choices groups, whose average scores were between 34% and 47%.
CONCLUSION
An MTL+caloric intake label and the Choices symbol hold promise as FOP labeling systems and require further testing in different environments and population subgroups. |
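The stated analysis pattern, one-way ANOVA followed by post-hoc Tukey HSD, can be reproduced on synthetic scores; the group means echo the abstract where reported, but the MTL mean and all raw scores below are invented placeholders, not the study's data.

```python
# Synthetic re-creation of the stated analysis: one-way ANOVA over the
# five label groups followed by post-hoc Tukey HSD. Group means echo the
# abstract where reported; the MTL mean and all raw scores are invented.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "no_label": rng.normal(67.8, 10.3, 96),
    "MTL": rng.normal(70.0, 9.0, 96),          # mean not reported; placeholder
    "MTL_caloric": rng.normal(73.3, 6.9, 96),
    "TL_SNL": rng.normal(65.8, 7.3, 96),
    "Choices": rng.normal(72.5, 13.2, 96),
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.2g}")

scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels))  # pairwise group comparisons
```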
Improved automatic caricature by feature normalization and exaggeration | This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature Generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal mean face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricature-related issues in face perception [Rhodes 1997]. |
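The EDFM rule itself reduces to one line of arithmetic, v' = mean + k(v - mean); the landmarks and exaggeration factor in this sketch are made up, and the variance-normalization refinement this abstract proposes is not reproduced here.

```python
# The EDFM rule in code: exaggerate a face by scaling each landmark's
# offset from the mean shape, v' = mean + k * (v - mean). Landmarks and
# k are made up; the sketch's variance-normalization refinement is omitted.
import numpy as np

def edfm(shape, mean_shape, k=1.5):
    """Exaggerate the Difference From the Mean; k > 1 amplifies distinctiveness."""
    return mean_shape + k * (shape - mean_shape)

mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.2]])  # mean landmarks
face = np.array([[0.1, -0.1], [0.9, 0.0], [0.5, 1.5]])       # a particular face
print(edfm(face, mean_shape))  # distinctive offsets grow by 50%
```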
Hyaluronic acid gel (Restylane) filler for facial rhytids: lessons learned from American Society of Ophthalmic Plastic and Reconstructive Surgery member treatment of 286 patients. | PURPOSE
To review injection techniques and patient satisfaction with injection of Restylane in various facial areas by American Society of Ophthalmic Plastic and Reconstructive Surgery members.
METHODS
Data from 286 patients treated with Restylane in nine American Society of Ophthalmic Plastic and Reconstructive Surgery practices were abstracted to a spreadsheet for analysis.
RESULTS
Nine practices performed Restylane injections for 8.8 months on average (range, 2 to 28 months). Average practice volume per patient was 1.2 ml (range, 0.7 to 2.1 ml). Nine of nine practices injected the nasolabial and melolabial folds, 9 of 9 practices injected the lips, and 6 of 9 injected the glabella. Only 2 of 9 practices injected other fillers concurrently. Botox was injected concurrently by 8 of 9 practices. On a scale of 1 to 10, physicians rated average patient discomfort during Restylane injection 4.6 with topical anesthesia and 2.1 with injectable lidocaine, with or without topical anesthesia. The end point for injection was determined by visual cues, volume of injection, extrusion of the product, and palpation. "Problematic" complications, including bruising, swelling, bumpiness, and redness each had an incidence of 5% or less. Patient satisfaction on a scale of 1 to 10 had an average rating of 8.1, compared with that of Botox injection (8.9), upper blepharoplasty (8.9), and collagen injection (6.6). The source of Restylane patients was estimated to be existing Botox patients (45%); existing non-Botox patients (18%); word of mouth (14%); and new patients for other services (13%).
CONCLUSIONS
Injection techniques, volume, end points, and anesthesia vary for different facial areas and between practices. Patients experience mild to moderate injection discomfort that is lessened with injectable lidocaine. Self-limited problems occur in about 5% of patients. Physician-determined patient satisfaction is perceived to be higher than that of collagen injection but slightly lower than that of botulinum toxin injection. The major source of Restylane patients was from existing practice patients, especially botulinum toxin patients. |
Neuroleptic malignant syndrome: a review for neurohospitalists. | Neuroleptic malignant syndrome (NMS) is a life-threatening idiosyncratic reaction to antipsychotic drugs characterized by fever, altered mental status, muscle rigidity, and autonomic dysfunction. It has been associated with virtually all neuroleptics, including newer atypical antipsychotics, as well as a variety of other medications that affect central dopaminergic neurotransmission. Although uncommon, NMS remains a critical consideration in the differential diagnosis of patients presenting with fever and mental status changes because it requires prompt recognition to prevent significant morbidity and death. Treatment includes immediately stopping the offending agent and implementing supportive measures, as well as pharmacological interventions in more severe cases. Maintaining vigilant awareness of the clinical features of NMS to diagnose and treat the disorder early, however, remains the most important strategy by which physicians can keep mortality rates low and improve patient outcomes. |
Large-scale, AST-based API-usage analysis of open-source Java projects | Research on API migration and language conversion can be informed by empirical data about API usage. For instance, such data may help with designing and defending mapping rules for API migration in terms of relevance and applicability. We describe an approach to large-scale API-usage analysis of open-source Java projects, which we also instantiate for the SourceForge open-source repository. Our approach covers checkout, building, tagging with metadata, fact extraction, analysis, and synthesis with a large degree of automation. Fact extraction relies on resolved (type-checked) ASTs. We describe a few examples of API-usage analysis; they are motivated by API migration. These examples are concerned with analysing API footprint (such as the number of distinct APIs used in a project), API coverage (such as the percentage of methods of an API used in a corpus), and framework-like vs. class-library-like usage. |
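Two of the metrics named here, API footprint and API coverage, are straightforward to compute once facts have been extracted; the (project, API, method) fact format and the API sizes below are invented for illustration, since real facts would come from resolved, type-checked ASTs.

```python
# Illustrative computation of two metrics from the abstract: API footprint
# (distinct APIs per project) and API coverage (share of an API's methods
# used anywhere in the corpus). The fact tuples and API sizes are invented;
# real facts would be extracted from resolved, type-checked ASTs.
from collections import defaultdict

facts = [  # (project, api, method) usage facts
    ("proj-a", "java.util", "List.add"),
    ("proj-a", "java.util", "Map.get"),
    ("proj-a", "java.io", "File.exists"),
    ("proj-b", "java.util", "List.add"),
]
api_method_counts = {"java.util": 3000, "java.io": 800}  # assumed totals

footprint = defaultdict(set)     # project -> distinct APIs used
used_methods = defaultdict(set)  # api -> methods used anywhere
for project, api, method in facts:
    footprint[project].add(api)
    used_methods[api].add(method)

print("footprint of proj-a:", len(footprint["proj-a"]))
for api, methods in used_methods.items():
    print(f"{api} coverage: {len(methods) / api_method_counts[api]:.4%}")
```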
Risks and benefits of dietary isoflavones for cancer. | A high intake of fruits and vegetables is associated with a lower risk of cancer. In this context, considerable attention is paid to Asian populations who consume high amounts of soy and soy-derived isoflavones, and have a lower risk for several cancer types such as breast and prostate cancers than populations in Western countries. Hence, interest focuses on soyfoods, soy products, and soy ingredients such as isoflavones with regard to their possible beneficial effects that were observed in numerous experiments and studies. The outcomes of the studies are not always conclusive, are often contradictory depending on the experimental conditions, and are, therefore, difficult to interpret. Isoflavone research revealed not only beneficial but also adverse effects, for instance, on the reproductive system. This is also the case with tumor-promoting effects on, for example, breast tissue. Isoflavone extracts and supplements are often used for the treatment of menopausal symptoms and for the prevention of age-associated conditions such as cardiovascular diseases and osteoporosis in postmenopausal women. In relation to this, questions about the effectiveness and safety of isoflavones have to be clarified. Moreover, there are concerns about the maternal consumption of isoflavones due to the development of leukemia in infants. In contrast, men may benefit from the intake of isoflavones with regard to reducing the risk of prostate cancer. Therefore, this review examines the risks but also the benefits of isoflavones with regard to various kinds of cancer, which can be derived from animal and human studies as well as from in vitro experiments. |
Current Established Risk Assessment Methodologies and Tools | The technology behind information systems evolves at an exponential rate, while at the same time becoming more and more ubiquitous. This brings with it an implicit rise in the average complexity of systems as well as the number of external interactions. In order to allow a proper assessment of the security of such (sub)systems, a whole arsenal of methodologies, methods and tools has been developed in recent years. However, most security auditors commonly use a very small subset of this collection that best suits their needs. This thesis aims at uncovering the differences and limitations of the most common Risk Assessment frameworks, the conceptual models that support them, as well as the tools that implement them. This is done in order to gain a better understanding of the applicability of each method and/or tool and to suggest guidelines for picking the most suitable one. |
Empathy: implications of three ways of knowing in counseling | From a humanistic orientation, Carl Rogers (1964) described 3 ways of knowing with reference to empathic understanding: subjective, interpersonal, and objective. In the context of a threefold perspective of knowledge, the author expands on Rogers's conception of empathy. As a consequence of a conceptual change in the direction of empathy, implications for counseling are affected. Almost 50 years have passed since Carl Rogers wrote a provocative article on the "necessary and sufficient" conditions for personality change in the therapeutic process (Rogers, 1957). Central to Rogers's formulations was his definition of empathy, which emphasized the accurate perception of the emotional components and meanings of an individual's internal frame of reference. In sensing the private world of a client, the counselor or therapist attempts to convey an empathic understanding of the person's experiencing. In the years since Rogers proposed his definition of empathy, the meaning of the construct has markedly shifted from one that reflected an attitudinal and internal state to one that reflects a communication and perceptual process (Bozarth, 1984; Hackney, 1978). Rather than empathy being viewed as a way of being that attempts to capture the perceptual field of an individual, various operationalized definitions have altered and oversimplified the original meaning of the construct (Bozarth, 1984; Pearson, 1999). Hackney (1978) pointed out that by 1968, there were 21 definitions of empathy that had been documented in the counseling literature; from a more contemporary perspective, definitions and mechanisms of empathy continue to vary, and they remain unclear (Crutchfield, Baltimore, Felfeli, & Worth, 2000; Duan & Hill, 1996; Hartley, 1995). The operational definitions presented by Carkhuff and Truax and their associates (Berenson & Carkhuff, 1967; Carkhuff, 1969; Carkhuff & Berenson, 1977; Truax & Carkhuff, 1967) and others are seen as differing from Rogers's empathic sensing of a client's experiencing (Corcoran, 1981; Hackney, 1978). A focus on explicit communication methods has also resulted in the technique of reflection being equated with empathy, which tends to diminish the potential role of a broad range of other empathic counseling interventions (Bozarth, 1984; Feller & Cottone, 2003). At the same time, the emphasis on conceptualizing empathy in observable interpersonal terms has enabled counselors-in-training and practitioners to clarify what is primarily an abstract construct and thereby implement its technical aspects. Bozarth also recognized that focusing on communication exchanges and associated clarifying techniques limits the empathic intuitive functioning of counselors and therapists. In this regard, Rogers's (1957) definition of empathy does not seem to include the subjective or intuitive dimension of the counselor's experience and, in this respect, may be subject to criticism. Despite the proliferation of publications and subsequent debate on the topic of empathy, it is interesting that scant attention has been given to an illuminating article written by Rogers (1964), in which he addressed empathy in the context of ways of knowing. In that incisive work, Rogers discussed the capacity of individuals to use empathy as a way of knowing from a three-fold perspective: subjective, interpersonal, and objective. From a subjective point of view, one directs one's own empathic attunement toward him- or herself. 
In the interpersonal mode, a person's empathic orientation is directed toward grasping the internal frame of reference of another individual. From an objective point of view, an empathic understanding of another person centers on an external source in the form of a reference group. It was also apparent to Rogers (1964) that the three ways of knowing must be appropriately interwoven in order to confirm or disconfirm evolving hypotheses. The purposes of this article are (a) to discuss Rogers's conception of three ways of knowing with respect to empathy and (b) using the three ways of knowing, to expand on Rogers's view of empathy, presenting implications for counseling. … |