title | abstract |
---|---|
Effects of drought on child health in Marsabit District, Northern Kenya. | This study uses five years of panel data (2009-2013) for Northern Kenya's Marsabit district to analyze the levels and extent of malnutrition among children aged five and under in that area. We measure drought based on the standardized normalized difference vegetation index (NDVI) and assess its effect on child health using mid-upper arm circumference (MUAC). The results show that approximately 20 percent of the children in the study area are malnourished and a one standard deviation increase in NDVI z-score decreases the probability of child malnourishment by 12-16 percent. These findings suggest that remote sensing data can be usefully applied to develop and evaluate new interventions to reduce drought effects on child malnutrition, including better coping strategies and improved targeting of food aid. |
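The drought measure described above is simple to compute from a remote-sensing time series. Below is a minimal sketch, with made-up NDVI values and years, of standardizing a location's seasonal NDVI against its own history; negative z-scores then flag below-normal vegetation (drought):

```python
import numpy as np

# Hypothetical seasonal mean NDVI for one grid cell, one value per year
# (e.g., from satellite composites); the numbers are illustrative only.
ndvi_by_year = np.array([0.42, 0.38, 0.45, 0.29, 0.33, 0.41, 0.36, 0.26])

# Standardize against the cell's own history: z = (x - mean) / std.
z_scores = (ndvi_by_year - ndvi_by_year.mean()) / ndvi_by_year.std(ddof=1)

for year, z in zip(range(2006, 2014), z_scores):
    flag = "  <- drought" if z < -1 else ""
    print(f"{year}: NDVI z-score = {z:+.2f}{flag}")
```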
E-mail and the unexpected power of interaction | The LFKN protocol, interactive proofs, complexity classes, relativized separation, arithmetization of Boolean formulas, program verification, multiple provers, circuit reductions and publishable proofs, and space-bounded interactive proofs are discussed. An examination is also made of e-mail ethics. |
Exploiting Perceptual Illusions to Enhance Passive Haptics | Passive haptic feedback is very compelling, but a different physical object is needed for each virtual object requiring haptic feedback. I propose to enhance passive haptics by exploiting visual dominance, enabling a single physical object to provide haptic feedback for many differently shaped virtual objects. Potential applications include virtual prototyping, redirected walking, entertainment, art, and training. |
Improve Face Recognition Rate Using Different Image Pre-Processing Techniques | Face recognition has become one of the most important and fastest-growing areas of image analysis over the last several years, and it is broadly used in security systems. It remains a challenging, interesting, and fast-moving area for real-time applications. The proposed method is tested on the benchmark ORL database, which contains 400 images of 40 persons. Pre-processing techniques are applied to the ORL database to increase the recognition rate. The best recognition rate is 97.5%, obtained with 9 training images and 1 testing image per person. Increasing the brightness of the image database is efficient and increases the recognition rate, as is resizing the images by a scale factor of 0.3. PCA is used for feature extraction and dimensionality reduction, and Euclidean distance is used for matching. |
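As a concrete illustration of the pipeline this abstract describes (ORL images, PCA features, Euclidean matching), here is a sketch using scikit-learn, whose bundled Olivetti faces are the AT&T/ORL database. The component count is an assumption, and the brightness/resizing pre-processing steps are omitted:

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.metrics import pairwise_distances_argmin

faces = fetch_olivetti_faces()           # 400 images, 40 subjects, 10 each
X, y = faces.data, faces.target

# The abstract's best split: 9 training images and 1 test image per person.
train_idx = [i for i in range(400) if i % 10 != 9]
test_idx = [i for i in range(400) if i % 10 == 9]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]

# PCA for feature extraction and dimensionality reduction
# (n_components=60 is an assumption; the abstract does not state one).
pca = PCA(n_components=60).fit(X_train)
train_proj, test_proj = pca.transform(X_train), pca.transform(X_test)

# Euclidean-distance nearest-neighbour matching in the reduced space.
nearest = pairwise_distances_argmin(test_proj, train_proj)
print(f"recognition rate: {np.mean(y_train[nearest] == y_test):.1%}")
```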
Assessment of QRS duration and presence of fragmented QRS in patients with Behçet's disease. | BACKGROUND
QRS prolongation and the presence of QRS fragmentation in 12-lead ECG are associated with increased mortality and sudden cardiac death in the long term. In this study we aimed to assess QRS duration and fragmentation in patients with Behçet's disease (BD).
METHODS
A total of 50 patients (mean age 42.7±12.0 years) previously diagnosed with BD were recruited. In addition, a control group consisting of 50 healthy people (mean age 39.4±12.5 years) was formed. The longest QRS duration was measured in surface 12-lead ECG and QRS complexes were evaluated in terms of fragmentation. Serum C-reactive protein levels were also obtained.
RESULTS
QRS duration and corrected QT duration were significantly longer in patients with BD compared with controls (102.75±11.91 vs. 96.99±10.91 ms, P=0.007; 438.55±30.80 vs. 420.23±28.06 ms, P=0.003, respectively). Fragmented QRS (fQRS) pattern was more common in patients with BD than controls [n=27 (54%) vs. n=16 (32%), P=0.026]. Disease duration was longer in patients with BD with fQRS compared with those without (12.67±8.68 vs. 7.09±7.06 years, P=0.010). Furthermore, C-reactive protein level was higher in patients with BD with fQRS compared with those without (6.53±4.11 vs. 4.97±6.32 mg/dl, P=0.043). Correlation analysis revealed no association between disease duration and QRS duration (r=0.219, P=0.126).
CONCLUSION
QRS duration is greater and fQRS complexes are more frequent in patients with BD. These findings may indicate subclinical cardiac involvement in BD. Given the prognostic significance of these ECG parameters, it is reasonable to evaluate patients with BD who have prolonged or fragmented QRS complexes in more detail, for example with late potentials in signal-averaged ECG, to assess cardiac involvement. |
Resource-Efficient Neural Architect | Neural Architecture Search (NAS) is a laborious process. Prior work on automated NAS mainly targets improving accuracy but gives little consideration to computational resource use. We propose the Resource-Efficient Neural Architect (RENA), an efficient resource-constrained NAS using reinforcement learning with network embedding. RENA uses a policy network to process the network embeddings to generate new configurations. We demonstrate RENA on image recognition and keyword spotting (KWS) problems. RENA can find novel architectures that achieve high performance even with tight resource constraints. For CIFAR10, it achieves 2.95% test error when compute intensity is greater than 100 FLOPs/byte, and 3.87% test error when model size is less than 3M parameters. For the Google Speech Commands Dataset, RENA achieves state-of-the-art accuracy without resource constraints, and it outperforms the optimized architectures with tight resource constraints. |
Polynomial Algorithms in Computer Algebra | Chinese remainder problem. Given: remainders $r_1, \ldots, r_n \in R$ and ideals $I_1, \ldots, I_n$ in $R$ (the moduli) such that $I_i + I_j = R$ for all $i \neq j$; find $r \in R$ such that $r \equiv r_i \bmod I_i$ for $1 \leq i \leq n$. The abstract Chinese remainder problem can be treated basically in the same way as the CRP over Euclidean domains. Again there is a Lagrangian and a Newtonian approach, and one can show that the problem always has a solution; if $r$ is a solution, then the set of all solutions is given by $r + I_1 \cap \cdots \cap I_n$. That is, the map $\phi: r \mapsto (r + I_1, \ldots, r + I_n)$ is a homomorphism from $R$ onto $\prod_{j=1}^{n} R/I_j$ with kernel $I_1 \cap \cdots \cap I_n$. However, in the absence of the Euclidean algorithm it is not possible to compute a solution of the abstract CRP; see Lauer (1983). A preconditioned Chinese remainder algorithm: if the CRA is applied in a setting where many conversions w.r.t. a fixed set of moduli have to be computed, it is reasonable to precompute all partial results depending on the moduli alone. This idea leads to a preconditioned CRA, as described in Aho et al. (1974). Theorem 3.1.7. Let $r_1, \ldots, r_n$ and $m_1, \ldots, m_n$ be the remainders and moduli, respectively, of a CRP in the Euclidean domain $D$. Let $m$ be the product of all the moduli, $c_i = m/m_i$, and $d_i = c_i^{-1} \bmod m_i$ for $1 \leq i \leq n$. Then $r = \sum_{i=1}^{n} c_i d_i r_i \bmod m$ (3.1.1) is a solution to the corresponding CRP. Proof. Since $c_i$ is divisible by $m_j$ for $j \neq i$, we have $c_i d_i r_i \equiv 0 \bmod m_j$ for $j \neq i$. Therefore $\sum_{i=1}^{n} c_i d_i r_i \equiv c_j d_j r_j \equiv r_j \bmod m_j$ for all $1 \leq j \leq n$. $\square$ A more detailed analysis of (3.1.1) reveals many common factors among the expressions $c_i d_i r_i$. Assume that $n$ is a power of 2, $n = 2^t$. Obviously, $m_1 \cdots m_{n/2}$ is a factor of $c_i d_i r_i$ for all $i > n/2$, and $m_{n/2+1} \cdots m_n$ is a factor of $c_i d_i r_i$ for all $i \leq n/2$. So we could write (3.1.1) as |
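Theorem 3.1.7 translates directly into code. A minimal sketch over the integers, with the weights $c_i d_i$ precomputed once for a fixed set of moduli and then reused for every conversion:

```python
from math import prod

def precondition(moduli):
    """Precompute m and the weights c_i * d_i of equation (3.1.1),
    where c_i = m / m_i and d_i = c_i^{-1} mod m_i."""
    m = prod(moduli)
    c = [m // mi for mi in moduli]
    d = [pow(ci, -1, mi) for ci, mi in zip(c, moduli)]  # modular inverses
    return m, [ci * di for ci, di in zip(c, d)]

def crt_solve(remainders, m, weights):
    """r = sum_i c_i d_i r_i mod m."""
    return sum(w * r for w, r in zip(weights, remainders)) % m

m, weights = precondition([3, 5, 7])      # done once per set of moduli
print(crt_solve([2, 3, 2], m, weights))   # 23: 23%3==2, 23%5==3, 23%7==2
```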
Among the Machines: Human-Bot Interaction on Social Q&A Websites | With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in STACK OVERFLOW, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions. |
GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data | Motivating example: "Walk east on Flinders St/State Route 30 towards Market St; turn right onto St Kilda Rd/Swanston St" vs. "Walk east on Flinders St/State Route 30 towards Market St; turn right onto St Kilda Rd/Swanston St after Flinders Street Station, a yellow building with a green dome." Input triples: T1: <Flinders Street Station, front, Federation Square>; T2: <Flinders Street Station, color, yellow>; T3: <Flinders Street Station, has, green dome>. Generated sentence: "Flinders Street Station is a yellow building with a green dome roof located in front of Federation Square." |
Particle Swarm Optimization (Proceedings of the 1995 IEEE International Conference on Neural Networks) | A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described. |
Spectral Normalization for Generative Adversarial Networks | One of the challenges in the study of generative adversarial networks is the instability of their training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR10, STL-10, and ILSVRC2012 datasets, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection. |
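The spectral norm used by this technique is typically estimated with a few steps of power iteration rather than a full SVD. A minimal numpy sketch of that estimator (not the authors' Chainer code):

```python
import numpy as np

def spectral_norm(W, u=None, n_iters=1):
    """Estimate the largest singular value of W by power iteration.
    Reusing u across training steps makes one iteration per step enough."""
    if u is None:
        u = np.random.randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    return u @ W @ v, u

W = np.random.randn(64, 128)
sigma, u = spectral_norm(W, n_iters=5)
W_sn = W / sigma          # spectrally normalized weight
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # largest singular value ~1.0
```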
A New Approach for Ridge Preservation: The Socket Shield Technique: A Review | Tooth loss and subsequent ridge collapse continue to burden restorative implant treatment. Careful management of the post-extraction tissues is needed to preserve the alveolar ridge. In lieu of surgical augmentation to correct a ridge defect, the socket-shield technique offers a promising solution. As the root submergence technique retains the periodontal attachment and maintains the alveolar ridge for pontic site development, this review article examines the hypothesis that retention of a prepared tooth root section as a socket shield prevents the recession of tissues buccofacial to an immediately placed implant. Keywords: alveolar bone preservation, buccal bone, esthetic zone, extraction socket, immediate implant, socket shield |
Overcoming browser cookie churn with clustering | Many large Internet websites are accessed by users anonymously, without requiring registration or logging in. However, to provide personalized service these sites build anonymous, yet persistent, user models based on repeated user visits. Cookies, issued when a web browser first visits a site, are typically employed to anonymously associate a website visit with a distinct user (web browser). However, users may reset cookies, making such associations short-lived and noisy. In this paper we propose a solution to the cookie churn problem: a novel algorithm for grouping similar cookies into clusters that are more persistent than individual cookies. Such clustering could potentially allow more robust estimation of the number of unique visitors of the site over a long time period, and also better user modeling, which is key to many web applications such as advertising and recommender systems.
We present a novel method to cluster browser cookies into groups that are likely to belong to the same browser based on a statistical model of browser visitation patterns. We address each step of the clustering as a binary classification problem estimating the probability that two different subsets of cookies belong to the same browser. We observe that our clustering problem is a generalized interval graph coloring problem, and propose a greedy heuristic algorithm for solving it. The scalability of this method allows us to cluster hundreds of millions of browser cookies and provides significant improvements over baselines such as constrained K-means. |
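To make the greedy clustering step concrete, here is a toy sketch. The same_browser_prob stub stands in for the paper's learned binary classifier (which models browser visitation patterns); the function names, threshold, and site-overlap score are all invented for illustration:

```python
def same_browser_prob(cluster, cookie):
    # Stand-in for the learned classifier: score by overlap of visited sites.
    sites = set().union(*(c["sites"] for c in cluster))
    return len(sites & cookie["sites"]) / max(len(cookie["sites"]), 1)

def greedy_cluster(cookies, threshold=0.5):
    """Greedily assign each cookie, in order of first appearance, to the
    best-matching existing cluster, or open a new one."""
    clusters = []
    for cookie in sorted(cookies, key=lambda c: c["first_seen"]):
        scored = [(same_browser_prob(cl, cookie), cl) for cl in clusters]
        prob, best = max(scored, default=(0.0, None), key=lambda t: t[0])
        if prob >= threshold:
            best.append(cookie)
        else:
            clusters.append([cookie])
    return clusters

cookies = [
    {"id": 1, "first_seen": 0, "sites": {"news", "mail", "maps"}},
    {"id": 2, "first_seen": 5, "sites": {"news", "mail"}},    # same browser?
    {"id": 3, "first_seen": 7, "sites": {"games", "video"}},  # different
]
print([[c["id"] for c in cl] for cl in greedy_cluster(cookies)])  # [[1, 2], [3]]
```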
Securing digital identities in the cloud by selecting an apposite Federated Identity Management from SAML, OAuth and OpenID Connect | Access to computer systems and the information held on them, be it commercially or personally sensitive, is naturally, strictly controlled by both legal and technical security measures. One such method is digital identity, which is used to authenticate and authorize users to provide access to IT infrastructure to perform official, financial or sensitive operations within organisations. However, transmitting and sharing this sensitive information with other organisations over insecure channels always poses a significant security and privacy risk. An example of an effective solution to this problem is the Federated Identity Management (FIdM) standard adopted in the cloud environment. The FIdM standard is used to authenticate and authorize users across multiple organisations to obtain access to their networks and resources without transmitting sensitive information to other organisations. By using the same authentication and authorization details among multiple organisations in one federated group, it protects the identities and credentials of users in the group. This protection is a balance, mitigating security risk whilst maintaining a positive experience for users. Three of the most popular FIdM standards are Security Assertion Markup Language (SAML), Open Authentication (OAuth), and OpenID Connect (OIDC). This paper presents an assessment of these standards, considering their architectural design, operation, security strengths, and security vulnerabilities, in order to ascertain their effective use for protecting digital identities and credentials. Firstly, it explains the architectural design and operation of these standards. Secondly, it proposes several assessment criteria and compares the functionality of these standards based on the proposed criteria. Finally, it presents a comprehensive analysis of their security vulnerabilities to aid in selecting an apposite FIdM. This analysis of security vulnerabilities is of great significance because improper or erroneous deployment of these standards may be exploited for attacks. |
Can FPGAs Beat GPUs in Accelerating Next-Generation Deep Neural Networks? | Current-generation Deep Neural Networks (DNNs), such as AlexNet and VGG, rely heavily on dense floating-point matrix multiplication (GEMM), which maps well to GPUs (regular parallelism, high TFLOP/s). Because of this, GPUs are widely used for accelerating DNNs. Current FPGAs offer superior energy efficiency (Ops/Watt), but they do not offer the performance of today's GPUs on DNNs. In this paper, we look at upcoming FPGA technology advances, the rapid pace of innovation in DNN algorithms, and consider whether future high-performance FPGAs will outperform GPUs for next-generation DNNs. The upcoming Intel® 14-nm Stratix® 10 FPGAs will have thousands of hard floating-point units (DSPs) and on-chip RAMs (M20K memory blocks). They will also have high bandwidth memories (HBMs) and improved frequency (HyperFlex™ core architecture). This combination of features brings FPGA raw floating point performance within striking distance of GPUs. Meanwhile, DNNs are quickly evolving. For example, recent innovations that exploit sparsity (e.g., pruning) and compact data types (e.g., 1-2 bit) result in major leaps in algorithmic efficiency. However, these innovations introduce irregular parallelism on custom data types, which are difficult for GPUs to handle but would be a great fit for FPGA's extreme customizability.
This paper evaluates a selection of emerging DNN algorithms on two generations of Intel FPGAs (Arria 10, Stratix 10) against the latest highest performance Titan X Pascal GPU. We created a customizable DNN accelerator template for FPGAs and used it in our evaluations. First, we study various GEMM operations for next-generation DNNs. Our results show that Stratix 10 FPGA is 10%, 50%, and 5.4x better in performance (TOP/sec) than Titan X Pascal GPU on GEMM operations for pruned, Int6, and binarized DNNs, respectively. Then, we present a detailed case study on accelerating Ternary ResNet which relies on sparse GEMM on 2-bit weights (i.e., weights constrained to 0, +1, -1) and full-precision neurons. The Ternary ResNet accuracy is within ~1% of the full-precision ResNet which won the 2015 ImageNet competition. On Ternary ResNet, the Stratix 10 FPGA can deliver 60% better performance over Titan X Pascal GPU, while being 2.3x better in performance/watt. Our results indicate that FPGAs may become the platform of choice for accelerating next-generation DNNs. |
Topic Extraction and Sentiment Classification by using Latent Dirichlet Markov Allocation and SentiWordNet | Nowadays, the internet has an immense impact on human life and helps people make important decisions. Since plenty of knowledge and valuable information is available on the internet, many users read review information on the web to make decisions such as buying products, watching movies, or going to restaurants. Reviews contain user opinions about a product, service, event, or topic. It is difficult for web users to read and understand the contents of a large number of reviews. Probabilistic topic models address this need: a topic model provides a generative model for documents, defining a probabilistic scheme by which documents can be produced. In such a model, knowledge is integrated into themes, where a theme is a mixture of terms. We describe Latent Dirichlet Markov Allocation (LDMA), a four-level hierarchical Bayesian model built on Latent Dirichlet Allocation (LDA) and the Hidden Markov Model (HMM), which focuses on extracting multiword topics from text data. To retrieve the sentiment of the reviews, we use SentiWordNet along with LDMA, and we compare our results to LDMA with feature extraction by a baseline method of sentiment analysis. |
A Cognitive Adopted Framework for IoT Big-Data Management and Knowledge Discovery Prospective | In future IoT big-data management and knowledge discovery for large-scale industrial automation applications, the importance of the industrial internet is increasing day by day. Several diversified technologies, such as IoT (Internet of Things), computational intelligence, machine-type communication, big data, and sensor technology, can be incorporated together to improve the data management and knowledge discovery efficiency of large-scale automation applications. In this work we propose a Cognitive Oriented IoT Big-data Framework (COIB-framework) along with an implementation architecture, an IoT big-data layering architecture, and a data organization and knowledge exploration subsystem for effective data management and knowledge discovery, well suited to large-scale industrial automation applications. The discussion and analysis show that the proposed framework and architectures offer a reasonable solution for implementing IoT big-data based smart industrial applications. |
Steganographic Generative Adversarial Networks | Steganography is a collection of methods for hiding secret information ("payload") within non-secret information ("container"). Its counterpart, steganalysis, is the practice of determining whether a message contains a hidden payload, and recovering it if possible. The presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows generating more steganalysis-secure message embeddings using standard steganography algorithms. Experimental results demonstrate that the new model successfully deceives the steganalyzer, and for this reason can be used in steganographic applications. |
AtDELFI: automatically designing legible, full instructions for games | This paper introduces a fully automatic method for generating video game tutorials. The AtDELFI system (Automatically DEsigning Legible, Full Instructions for games) was created to investigate procedural generation of instructions that teach players how to play video games. We present a representation of game rules and mechanics using a graph system as well as a tutorial generation method that uses said graph representation. We demonstrate the concept by testing it on games within the General Video Game Artificial Intelligence (GVG-AI) framework; the paper discusses tutorials generated for eight different games. Our findings suggest that a graph representation scheme works well for simple arcade style games such as Space Invaders and Pacman, but it appears that tutorials for more complex games might require higher-level understanding of the game than just single mechanics. |
Effects of assistance timing on metabolic cost, assistance power, and gait parameters for a hip-type exoskeleton | There are many important factors in developing an exoskeleton for assisting human locomotion. For example, the weight should be sufficiently light, the assist torque should be high enough to assist joint motion, and the assistance timing should be just right. Understanding how these design parameters affect the overall performance of a complex human-machine system is critical for the development of these types of systems. The present study introduces an assistance timing controller that regulates assistance timing such that peak joint velocity and peak assistance power are offset by a reference value for our hip-type exoskeleton. We then measure how various assistance timing references affect an important performance metric, namely metabolic cost. The results indicate that net metabolic cost exhibits a concave-up pattern, with the greatest reduction, 21% compared to walking without the exoskeleton, occurring at a 0% assistance timing reference. The study also examines the effect of assistance timing on gait parameters: increasing the assistance timing reference increases step length, decreases cadence, and increases walk ratio (i.e., step length/cadence) during treadmill walking. |
Hierarchical Multi-label Conditional Random Fields for Aspect-Oriented Opinion Mining | A common feature of many online review sites is the use of an overall rating that summarizes the opinions expressed in a review. Unfortunately, these document-level ratings do not provide any information about the opinions contained in the review that concern a specific aspect (e.g., cleanliness) of the product being reviewed (e.g., a hotel). In this paper we study the finer-grained problem of aspect-oriented opinion mining at the sentence level, which consists of predicting, for all sentences in the review, whether the sentence expresses a positive, neutral, or negative opinion (or no opinion at all) about a specific aspect of the product. For this task we propose a set of increasingly powerful models based on conditional random fields (CRFs), including a hierarchical multi-label CRFs scheme that jointly models the overall opinion expressed in the review and the set of aspect-specific opinions expressed in each of its sentences. We evaluate the proposed models against a dataset of hotel reviews (which we here make publicly available) in which the set of aspects and the opinions expressed concerning them are manually annotated at the sentence level. We find that both hierarchical and multi-label factors lead to improved predictions of aspect-oriented opinions. |
Beyond View Transformation: Cycle-Consistent Global and Partial Perception Gan for View-Invariant Gait Recognition | Cross-view gait recognition is a challenging problem when the view interval and pose variation are relatively large. In this paper, we propose Cycle-consistent Attentive Generative Adversarial Networks (CA-GAN) to map gait images from different views to view-consistent and photorealistic gait images for cross-view gait recognition. In CA-GAN, the generative network is composed of two branches, which simultaneously perceive the human's global context and local body-part information, respectively. Moreover, we design a novel Attentive Adversarial Network (AAN) to adaptively learn different weights for the discriminator's receptive fields with an attention mechanism. Furthermore, as it is hard to collect pose-aligned gait image pairs from different views for training CA-GAN, we combine a forward cycle-consistency loss and an adversarial loss to learn the transformation relationship from source views to the target view. The combined loss function also preserves the discriminative gait structures of different identities at the training stage. Finally, we directly exploit the synthesized view-consistent gait images for the cross-view gait recognition task. Experimental results on CASIA-B demonstrate that our method not only outperforms the state-of-the-art methods in cross-view gait recognition, but also produces compelling perceptual results even across large view intervals. |
Gated Recursive Neural Network for Chinese Word Segmentation | Recently, neural network models for natural language processing tasks have received increasing attention for their ability to alleviate the burden of manual feature engineering. However, previous neural models cannot extract complicated feature compositions as traditional methods with discrete features can. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate complicated combinations of context characters. Since the GRNN is relatively deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on benchmark datasets show that our model outperforms previous neural network models as well as the state-of-the-art methods. |
Loss Analysis and Efficiency Improvement of an Axial-Flux PM Amorphous Magnetic Material Machine | This paper presents research work on a 12-slot ten-pole tapered axial-flux permanent-magnet machine utilizing amorphous magnetic material in the stator core. Novel loss separation techniques are described, including mechanical loss and locked-rotor tests. Mechanical loss estimation is based on a combination of experimental tests and 3-D finite-element model analysis using an uncut stator. The locked-rotor test is introduced to separate the stator and rotor losses by eliminating the uncertainty associated with mechanical loss. High rotor yoke losses were identified in the baseline design. The rotor design was modified and a significant improvement in efficiency was demonstrated. |
Topic2Vec: Learning distributed representations of topics | Latent Dirichlet Allocation (LDA), which mines the thematic structure of documents, plays an important role in natural language processing and machine learning. However, the probability distributions produced by LDA only describe the statistical relationships of occurrences in the corpus, and in practice probabilities are often not the best choice for feature representations. Recently, embedding methods such as Word2Vec and Doc2Vec have been proposed to represent words and documents by learning essential concepts and representations. These embedded representations have proven more effective than LDA-style representations in many tasks. In this paper, we propose Topic2Vec, an approach that learns topic representations in the same semantic vector space as words, as an alternative to probability distributions. Experimental results show that Topic2Vec achieves interesting and meaningful results. |
Classical verification of quantum proofs | We present a classical interactive protocol that verifies the validity of a quantum witness state for the local Hamiltonian problem. It follows from this protocol that approximating the non-local value of a multi-player one-round game to inverse polynomial precision is QMA-hard. Our work makes an interesting connection between the theory of QMA-completeness and Hamiltonian complexity on one hand and the study of non-local games and Bell inequalities on the other. |
Capability Challenges in Transforming Government through Open and Big Data: Tales of Two Cities | Hyper-connected and digitized governments are increasingly advancing a vision of data-driven government as producers and consumers of big data in the big data ecosystem. Despite the growing interest in the potential power of big data, we found a paucity of empirical research on big data use in government. This paper explores organizational capability challenges in transforming government through big data use. Using a systematic literature review approach, we developed an initial framework for examining the impacts of socio-political, strategic-change, analytical, and technical capability challenges in enhancing public policy and services through big data. We then applied the framework in case study research on two large city governments' big data use. The findings indicate the framework's usefulness, shedding new insights into the unique government context. Consequently, the framework was revised by adding big data public policy, political leadership structure, and organizational culture to further explain the impacts of organizational capability challenges in transforming government. |
From prediction to experimental validation: desmoglein 2 is a functionally relevant substrate of matriptase in epithelial cells and their reciprocal relationship is important for cell adhesion. | Accurate identification of substrates of a protease is critical in defining its physiological functions. We previously predicted that Dsg-2 (desmoglein-2), a desmosomal protein, is a candidate substrate of the transmembrane serine protease matriptase. The present study is an experimental validation of this prediction. As demanded by our published method PNSAS [Prediction of Natural Substrates from Artificial Substrate of Proteases; Venkatraman, Balakrishnan, Rao, Hooda and Pol (2009) PLoS ONE 4, e5700], this enzyme-substrate pair shares a common subcellular distribution and the predicted cleavage site is accessible to the protease. Matriptase knock-down cells showed enhanced immunoreactive Dsg-2 at the cell surface and formed larger cell clusters. When matriptase was mobilized from intracellular storage deposits to the cell surface there was a decrease in the band intensity of Dsg-2 in the plasma membrane fractions with a concomitant accumulation of a cleaved product in the conditioned medium. The exogenous addition of pure active recombinant matriptase decreased the surface levels of immunoreactive Dsg-2, whereas the levels of CD44 and E-cadherin were unaltered. Dsg-2 with a mutation at the predicted cleavage site is resistant to cleavage by matriptase. Thus Dsg-2 seems to be a functionally relevant physiological substrate of matriptase. Since breakdown of cell-cell contact is the first major event in invasion, this reciprocal relationship is likely to have a profound role in cancers of epithelial origin. Our algorithm has the potential to become an integral tool for discovering new protease-substrate pairs. |
Soy isoflavones augment the effect of TRAIL-mediated apoptotic death in prostate cancer cells. | Prostate cancer represents an ideal disease for chemopreventive intervention. Genistein, daidzein and equol, the predominant soy isoflavones, have been reported to lower the risk of prostate cancer. Isoflavones exert their chemopreventive properties by affecting apoptosis signalling pathways in cancer cells. Tumour necrosis factor-related apoptosis-inducing ligand (TRAIL) is an endogenous anticancer agent that induces apoptosis selectively in tumour cells. Soluble or expressed in immune cells, TRAIL molecules play an important role in immune surveillance and defense mechanisms against tumour cells. However, various types of cancer cells are resistant to TRAIL-mediated apoptosis. We examined the cytotoxic and apoptotic effects of genistein, daidzein and equol in combination with TRAIL in LNCaP cells. Cytotoxicity was measured by MTT and LDH assays. Apoptosis was analyzed by flow cytometry and fluorescence microscopy using Annexin V-FITC. Mitochondrial membrane potential (ΔΨm) was evaluated by fluorescence microscopy using DePsipher staining. Flow cytometry detected the expression of death receptor TRAIL-R1 (DR4) and TRAIL-R2 (DR5) on cell surfaces. The soy isoflavones sensitized TRAIL-resistant prostate cancer cells to apoptotic death. The isoflavones did not alter death receptor expression, but significantly augmented TRAIL-induced disruption of ΔΨm in the LNCaP cells. We showed for the first time that the chemopreventive effects of soy foods on prostate cancer are associated with isoflavone-induced support of TRAIL-mediated apoptotic death. |
Predictors of long-term outcomes in patients with significant myxomatous mitral regurgitation undergoing exercise echocardiography. | BACKGROUND
Significant myxomatous mitral regurgitation leads to progressive left ventricular (LV) decline, resulting in congestive heart failure and death. Such patients benefit from mitral valve surgery. Exercise echocardiography aids in risk stratification and helps decide surgical timing. We sought to assess predictors of outcomes in such patients undergoing exercise echocardiography.
METHODS AND RESULTS
This is an observational study of 884 consecutive patients (age, 58 ± 14 years; 67% men) with grade III+ or greater myxomatous mitral regurgitation who underwent exercise echocardiography between January 2000 and December 2011 (excluding functional mitral regurgitation, prior valvular surgery, hypertrophic cardiomyopathy, rheumatic valvular disease, or greater than mild mitral stenosis). Clinical and echocardiographic data (mitral regurgitation, LV ejection fraction, LV dimensions, right ventricular systolic pressure) and exercise variables (metabolic equivalents, heart rate recovery at 1 minute after exercise) were recorded. Composite events of death, myocardial infarction, stroke, and progression to congestive heart failure were recorded. Mean LV ejection fraction, indexed LV end-systolic dimension, resting right ventricular systolic pressure, peak stress right ventricular systolic pressure, metabolic equivalents achieved, and heart rate recovery were 58 ± 5%, 1.6 ± 0.4 mm/m², 31 ± 12 mm Hg, 46 ± 17 mm Hg, 9.6 ± 3, and 33 ± 14 beats, respectively. During 6.4 ± 4 years of follow-up, there were 87 events. On stepwise multivariable Cox analysis, percent of age/sex-predicted metabolic equivalents (hazard ratio, 0.99; 95% confidence interval, 0.98-0.99; P=0.005), heart rate recovery (hazard ratio, 0.29; 95% confidence interval, 0.17-0.50; P<0.001), resting right ventricular systolic pressure (hazard ratio, 1.03; 95% confidence interval, 1.004-1.05; P=0.02), atrial fibrillation (hazard ratio, 1.91; 95% confidence interval, 1.07-3.41; P=0.03), and LV ejection fraction (hazard ratio, 0.96; 95% confidence interval, 0.92-0.99; P=0.04) predicted outcomes.
CONCLUSIONS
In patients with grade III+ or greater myxomatous mitral regurgitation undergoing exercise echocardiography, lower percent of age/sex-predicted metabolic equivalents, lower heart rate recovery, atrial fibrillation, lower LV ejection fraction, and high resting right ventricular systolic pressure predicted worse outcomes. |
Population‐based analysis of pathological correlates of dementia in the oldest old | OBJECTIVE
The aim of this study was to analyze brain pathologies which cause dementia in the oldest old population.
METHODS
All 601 persons aged ≥85 years living in the city of Vantaa (Finland) on April 1st, 1991, formed the study population of the Vantaa 85+ study, 300 of whom were autopsied during follow-up (79.5% females, mean age at death 92 ± 3.7 years). Alzheimer's disease (AD) pathology (tau and beta-amyloid [Aβ]), cerebral amyloid angiopathy (CAA) and Lewy-related pathologies were analyzed. Brain infarcts were categorized by size (<2 mm, 2-15 mm, >15 mm) and by location. Brain hemorrhages were classified as microscopic (<2 mm) and macroscopic.
RESULTS
195/300 (65%) were demented. 194/195 (99%) of the demented had at least one neuropathology. Three independent contributors to dementia were identified: AD-type tau pathology (Braak stage V-VI), neocortical Lewy-related pathology, and cortical anterior 2-15 mm infarcts. These were found in 34%, 21%, and 21% of the demented, respectively, with multivariate odds ratios (OR) for dementia of 5.5, 4.5, and 3.4. Factor analysis investigating the relationships between different pathologies identified three separate factors: (1) the AD spectrum, which included neurofibrillary tau, Aβ plaque, and neocortical Lewy-related pathologies as well as CAA; (2) >2 mm cortical and subcortical infarcts; and (3) <2 mm cortical microinfarcts and microhemorrhages. Multipathology was common and increased the risk of dementia significantly.
INTERPRETATION
These results indicate that AD-type neurodegenerative processes play the most prominent role in cognitive decline in the oldest old. The high prevalence of both neurodegenerative and vascular pathologies indicates that multiple preventive and therapeutic approaches are needed to protect the brains of the oldest old. |
LOADED: link-based outlier and anomaly detection in evolving data sets | In this paper, we present LOADED, an algorithm for outlier detection in evolving data sets containing both continuous and categorical attributes. LOADED is a tunable algorithm, wherein one can trade off computation for accuracy so that domain-specific response times are achieved. Experimental results show that LOADED provides very good detection and false positive rates, which are several times better than those of existing distance-based schemes. |
Learning Globally-Consistent Local Distance Functions for Shape-Based Image Retrieval and Classification | We address the problem of visual category recognition by learning an image-to-image distance function that attempts to satisfy the following property: the distance between images from the same category should be less than the distance between images from different categories. We use patch-based feature vectors common in object recognition work as a basis for our image-to-image distance functions. Our large-margin formulation for learning the distance functions is similar to formulations used in the machine learning literature on distance metric learning; however, we differ in that we learn local distance functions (a different parameterized function for every image of our training set), whereas typically a single global distance function is learned. This was a novel approach first introduced in Frome, Singer, & Malik, NIPS 2006. In that work we learned the local distance functions independently, and the outputs of these functions could not be compared at test time without the use of additional heuristics or training. Here we introduce a different approach that has the advantage that it learns distance functions that are globally consistent, in that they can be directly compared for purposes of retrieval and classification. The output of the learning algorithm is a set of weights assigned to the image features, which is intuitively appealing in the computer vision setting: some features are more salient than others, and which ones are more salient depends on the category, or image, being considered. We train and test using the Caltech 101 object recognition benchmark. |
Mindfulness training and stress reactivity in substance abuse: results from a randomized, controlled stage I pilot study. | Stress is important in substance use disorders (SUDs). Mindfulness training (MT) has shown promise for stress-related maladies. No studies have compared MT to empirically validated treatments for SUDs. The goals of this study were to assess MT compared to cognitive behavioral therapy (CBT) in substance use and treatment acceptability, and specificity of MT compared to CBT in targeting stress reactivity. Thirty-six individuals with alcohol and/or cocaine use disorders were randomly assigned to receive group MT or CBT in an outpatient setting. Drug use was assessed weekly. After treatment, responses to personalized stress provocation were measured. Fourteen individuals completed treatment. There were no differences in treatment satisfaction or drug use between groups. The laboratory paradigm suggested reduced psychological and physiological indices of stress during provocation in MT compared to CBT. This pilot study provides evidence of the feasibility of MT in treating SUDs and suggests that MT may be efficacious in targeting stress. |
Closure of defects of the malar region. | BACKGROUND
Reconstruction of the skin defects of malar region poses some challenging problems including obvious scar formation, dog-ear formation, trapdoor deformity and displacement of surrounding anatomic landmarks such as the lower eyelid, oral commissure, ala nasi, and sideburn.
PURPOSE
Here, a new local flap procedure, namely the reading man procedure, for reconstruction of large malar skin defects is described.
MATERIALS AND METHODS
In this technique, 2 flaps designed in an unequal Z-plasty manner are used. The first flap is transposed to the defect area, whereas the second flap is used for closure of the first flap's donor site. In the last 5 years, this technique has been used for closure of large malar defects in 18 patients (11 men and 7 women) aged 21 to 95 years. The defect size ranged between 3 and 8.5 cm in diameter.
RESULTS
A tension-free defect closure was obtained in all patients. There was no patient with dog-ear formation, ectropion, or distortion of the surrounding anatomic structures. No tumor recurrence was observed. A mean follow-up of 26 months (range, 5 mo to 3.5 y) revealed a cosmetically acceptable scar formation in all patients.
CONCLUSIONS
The reading man procedure was found to be a useful and straightforward technique for the closure of malar defects, allowing defect closure without any additional excision of surrounding healthy tissue. It provides a tension-free closure of considerably large malar defects without creating distortions of the mobile anatomic structures. |
A safety/security risk analysis approach of Industrial Control Systems: A cyber bowtie - combining new version of attack tree with bowtie analysis | The introduction of connected systems and digital technology in process industries creates new cyber-security vulnerabilities that can be exploited by sophisticated threats and lead to undesirable safety accidents. Thus, identifying these vulnerabilities during risk analysis becomes an important part for effective industrial risk evaluation. However, nowadays, safety and security are analyzed separately when they should not be. This is because a security threat can lead to the same dangerous phenomenon as a safety incident. In this paper, a new method that considers safety and security together during industrial risk analysis is proposed. This approach combines bowtie analysis, commonly used for safety analysis, with a new extended version of attack tree analysis, introduced for security analysis of industrial control systems. The combined use of bowtie and attack tree provides an exhaustive representation of risk scenarios in terms of safety and security. We then propose an approach for evaluating the risk level based on two-term likelihood parts, one for safety and one for security. The application of this approach is demonstrated using the case study of a risk scenario in a chemical facility. |
Scalable and High Performance Betweenness Centrality on the GPU | Graphs that model social networks, numerical simulations, and the structure of the Internet are enormous and cannot be manually inspected. A popular metric used to analyze these networks is betweenness centrality, which has applications in community detection, power grid contingency analysis, and the study of the human brain. However, these analyses come with a high computational cost that prevents the examination of large graphs of interest.
Prior GPU implementations suffer from large local data structures and inefficient graph traversals that limit scalability and performance. Here we present several hybrid GPU implementations, providing good performance on graphs of arbitrary structure rather than just scale-free graphs as was done previously. We achieve up to 13x speedup on high-diameter graphs and an average of 2.71x speedup overall over the best existing GPU algorithm. We observe near linear speedup and performance exceeding tens of GTEPS when running betweenness centrality on 192 GPUs. |
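For readers who want a CPU reference point for the metric itself, networkx computes exact betweenness centrality with Brandes' algorithm, which is also what the GPU kernels parallelize over source vertices; the k parameter samples sources for an approximation:

```python
import networkx as nx

G = nx.karate_club_graph()
bc_exact = nx.betweenness_centrality(G)                   # O(V*E), exact
bc_approx = nx.betweenness_centrality(G, k=10, seed=42)   # sampled sources

top = sorted(bc_exact, key=bc_exact.get, reverse=True)[:3]
print("most central nodes:", top)
```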
Local Fiedler vector centrality for detection of deep and overlapping communities in networks | In this paper, a new centrality called local Fiedler vector centrality (LFVC) is proposed to analyze the connectivity structure of a graph. It is associated with the sensitivity of algebraic connectivity to node or edge removals and features distributed computations via the associated graph Laplacian matrix. We prove that LFVC can be related to a monotonic submodular set function that guarantees that greedy node or edge removals come within a factor 1-1/e of the optimal non-greedy batch removal strategy. Due to the close relationship between graph topology and community structure, we use LFVC to detect deep and overlapping communities on real-world social network datasets. The results offer new insights on community detection by discovering new significant communities and key members in the network. Notably, LFVC is also shown to significantly outperform other well-known centralities for community detection. |
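A small sketch of the idea, assuming the common formulation LFVC(v) = Σ_{u∈N(v)} (y_v − y_u)², where y is the Fiedler vector (the eigenvector of the graph Laplacian for its second-smallest eigenvalue); the dense eigendecomposition here ignores the paper's distributed-computation aspect:

```python
import numpy as np
import networkx as nx

def local_fiedler_vector_centrality(G):
    L = nx.laplacian_matrix(G).toarray().astype(float)
    _, eigvecs = np.linalg.eigh(L)
    y = eigvecs[:, 1]                 # Fiedler vector (assumes G is connected)
    idx = {n: i for i, n in enumerate(G.nodes())}
    return {v: sum((y[idx[v]] - y[idx[u]]) ** 2 for u in G.neighbors(v))
            for v in G.nodes()}

G = nx.karate_club_graph()
lfvc = local_fiedler_vector_centrality(G)
print(sorted(lfvc, key=lfvc.get, reverse=True)[:5])  # candidate bridge nodes
```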
Made-to-Measure Technologies for an Online Clothing Store | The Internet along with the rapidly growing power of computing has emerged as a compelling channel for sale of garments. A number of initiatives have arisen recently across the world [1][2][3], revolving around the concepts of Made-to-Measure manufacturing and shopping via the Internet. These initiatives are fueled by the current Web technologies available, providing an exciting and aesthetically pleasing interface to the general public. |
Chip Level Techniques for EMI Reduction in LCD Panels | This paper presents chip level techniques to improve electro-magnetic interference (EMI) characteristics of LCD-TV panels. A timing controller (TCON) uses over-driving algorithms to improve the response time of liquid crystals (LC). Since this algorithm needs previous frame data, external memory such as double data rate synchronous DRAM (DDR SDRAM) is widely used as a frame buffer. A TTL interface between the TCON and memory is used for read and write operations, generating EMI noise. For reduction of this EMI, three methods are described. The first approach is to reduce the driving current of data I/O buffers. The second is to adopt spread spectrum clock generation (SSCG), and the third is to apply a proposed algorithm which minimizes data transitions. EMI measurement of a 32" LCD-TV panel shows that these approaches are very effective for reduction of EMI, achieving 20dB reduction at 175MHz. |
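The abstract does not spell out the transition-minimizing algorithm, so as an illustration of the general idea here is classic bus-invert coding, a standard scheme that halves worst-case transitions on a parallel interface (not necessarily the paper's method):

```python
def bus_invert(prev_bus, data, width=8):
    """If sending `data` would toggle more than half the bus lines,
    send its complement plus an 'invert' flag instead."""
    if bin(prev_bus ^ data).count("1") > width // 2:
        return (~data) & ((1 << width) - 1), 1
    return data, 0

prev = 0b00000000
for word in (0b11111110, 0b11111111, 0b00000001):
    sent, inv = bus_invert(prev, word)
    print(f"data={word:08b} sent={sent:08b} invert={inv}")
    prev = sent
```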
Model Agnostic Supervised Local Explanations | Model interpretability is an increasingly important component of practical machine learning. Some of the most common forms of interpretability systems are example-based, local, and global explanations. One of the main challenges in interpretability is designing explanation systems that can capture aspects of each of these explanation types, in order to develop a more thorough understanding of the model. We address this challenge in a novel model called MAPLE that uses local linear modeling techniques along with a dual interpretation of random forests (both as a supervised neighborhood approach and as a feature selection method). MAPLE has two fundamental advantages over existing interpretability systems. First, while it is effective as a black-box explanation system, MAPLE itself is a highly accurate predictive model that provides faithful self explanations, and thus sidesteps the typical accuracy-interpretability trade-off. Specifically, we demonstrate, on several UCI datasets, that MAPLE is at least as accurate as random forests and that it produces more faithful local explanations than LIME, a popular interpretability system. Second, MAPLE provides both example-based and local explanations and can detect global patterns, which allows it to diagnose limitations in its local explanations. |
Efficient Hyperparameter Optimization for Deep Learning Algorithms Using Deterministic RBF Surrogates | Automatically searching for optimal hyperparameter configurations is of crucial importance for applying deep learning algorithms in practice. Recently, Bayesian optimization has been proposed for optimizing hyperparameters of various machine learning algorithms. Those methods adopt probabilistic surrogate models like Gaussian processes to approximate and minimize the validation error as a function of hyperparameter values. However, probabilistic surrogates require accurate estimates of sufficient statistics (e.g., covariance) of the error distribution, and thus need many function evaluations when the number of hyperparameters is sizeable. This makes them inefficient for optimizing hyperparameters of deep learning algorithms, which are highly expensive to evaluate. In this work, we propose a new deterministic and efficient hyperparameter optimization method that employs radial basis functions as error surrogates. The proposed mixed-integer algorithm, called HORD, searches the surrogate for the most promising hyperparameter values through dynamic coordinate search and requires many fewer function evaluations. HORD does well in low dimensions, and it is exceptionally better in higher dimensions. Extensive evaluations on MNIST and CIFAR-10 for four deep neural networks demonstrate that HORD significantly outperforms well-established Bayesian optimization methods such as GP, SMAC, and TPE. For instance, on average, HORD is more than 6 times faster than GP-EI in obtaining the best configuration of 19 hyperparameters. |
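A rough sketch of the surrogate loop, using scipy's deterministic RBF interpolator and random coordinate perturbations loosely in the spirit of dynamic coordinate search; the objective, bounds, and all constants are invented stand-ins for an expensive deep-learning evaluation, not HORD's actual rules:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def validation_error(hp):
    # Toy stand-in: pretend the optimum (log-lr, dropout) is (-3.0, 0.3).
    return (hp[0] + 3.0) ** 2 + 4 * (hp[1] - 0.3) ** 2

lo, hi = np.array([-6.0, 0.0]), np.array([-1.0, 0.9])
X = rng.uniform(lo, hi, size=(8, 2))                 # initial design
y = np.array([validation_error(x) for x in X])

for _ in range(20):
    surrogate = RBFInterpolator(X, y)                # deterministic surrogate
    incumbent = X[np.argmin(y)]
    cands = np.clip(incumbent + rng.normal(0, 0.3, size=(200, 2)), lo, hi)
    nxt = cands[np.argmin(surrogate(cands))]         # most promising candidate
    X = np.vstack([X, nxt])
    y = np.append(y, validation_error(nxt))

print("best hyperparameters:", X[np.argmin(y)], "error:", y.min())
```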
A Visual Approach to Sketched Symbol Recognition | There is increasing interest in building systems that can automatically interpret hand-drawn sketches. However, many challenges remain in terms of recognition accuracy, robustness to different drawing styles, and ability to generalize across multiple domains. To address these challenges, we propose a new approach to sketched symbol recognition that focuses on the visual appearance of the symbols. This allows us to better handle the range of visual and stroke-level variations found in freehand drawings. We also present a new symbol classifier that is computationally efficient and invariant to rotation and local deformations. We show that our method exceeds state-of-the-art performance on all three domains we evaluated, including handwritten digits, PowerPoint shapes, and electrical circuit symbols. |
Securing smart maintenance services: Hardware-security and TLS for MQTT | Increasing the efficiency of production and manufacturing processes is a key goal of initiatives like Industry 4.0. Within the context of the European research project ARROWHEAD, we enable and secure smart maintenance services. An overall goal is to proactively predict and optimize the Maintenance, Repair and Operations (MRO) processes carried out by a device maintainer, for industrial devices deployed at the customer. Therefore it is necessary to centrally acquire maintenance relevant equipment status data from remotely located devices over the Internet. Consequently, security and privacy issues arise from connecting devices to the Internet, and sending data from customer sites to the maintainer's back-end. In this paper we consider an exemplary automotive use case with an AVL Particle Counter (APC) as device. The APC transmits its status information by means of a fingerprint via the publish-subscribe protocol Message Queue Telemetry Transport (MQTT) to an MQTT Information Broker in the remotely located AVL back-end. In a threat analysis we focus on the MQTT routing information asset and identify two elementary security goals in regard to client authentication. Consequently we propose a system architecture incorporating a hardware security controller that processes the Transport Layer Security (TLS) client authentication step. We validate the feasibility of the concept by means of a prototype implementation. Experimental results indicate that no significant performance impact is imposed by the hardware security element. The security evaluation confirms the advanced security of our system, which we believe lays the foundation for security and privacy in future smart service infrastructures. |
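For orientation, here is a sketch of what the TLS-secured MQTT publish looks like from the device side, using the paho-mqtt client (v1.x API). The broker hostname, topic, and certificate paths are hypothetical, and in the paper's architecture the private-key operation would happen inside the hardware security controller rather than with a key file:

```python
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="apc-0001")
client.tls_set(
    ca_certs="ca.pem",            # trust anchor for the broker certificate
    certfile="device-cert.pem",   # device certificate for TLS client auth
    keyfile="device-key.pem",     # private key (HW-protected in the paper)
    tls_version=ssl.PROTOCOL_TLS_CLIENT,
)
client.connect("broker.example-backend.com", port=8883)  # MQTT over TLS
client.publish("devices/apc-0001/fingerprint", '{"status": "ok"}', qos=1)
client.disconnect()
```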
Low-cost foods: how do they compare with their brand name equivalents? A French study. | OBJECTIVE
Consumers are increasingly relying on low-cost foods, although it is not clear if the nutritional quality of these foods is fully maintained. The aim of the present work was to analyse the relationship between cost and quality within a given food category.
DESIGN AND SETTING
The relationship between nutritional quality and cost was analysed for 220 food products belonging to seventeen different categories, controlling for package type and package size. Because a summary of nutrient information was not available on the product labels, a novel ingredient quality score was developed based on the listed product ingredients.
RESULTS
Within a given category, the lowest-priced foods were not different from the equivalent branded products in terms of overall energy or total fat content. Nevertheless, a positive relationship, small but significant, was observed between the price and the ingredient quality score. On average, the branded products cost 2.5 times more than the low-cost products, for an equivalent energy and lipid content, and had a slightly higher (1.3 times) ingredient quality score.
CONCLUSIONS
More studies are necessary to evaluate the nutritional quality of low-cost foods. This evaluation would be facilitated if nutrition labelling was mandatory. Yet in view of the present results, it does not seem to be justified to divert consumers, especially the poorest, from low-cost foods because this may have an adverse effect on the nutritional quality of their diet, by reducing further the fraction of their food budget spent on fresh fruit and vegetables. |
A novel binary particle swarm optimization | Particle swarm optimization (PSO) as a novel computational intelligence technique, has succeeded in many continuous problems. But in discrete or binary version there are still some difficulties. In this paper a novel binary PSO is proposed. This algorithm proposes a new definition for the velocity vector of binary PSO. It will be shown that this algorithm is a better interpretation of continuous PSO into discrete PSO than the older versions. Also a number of benchmark optimization problems are solved using this concept and quite satisfactory results are obtained. |
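The abstract does not reproduce the paper's new velocity definition, so for context the sketch below implements the standard sigmoid-based binary PSO that it improves on, applied to a toy OneMax problem (all parameter values are conventional choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_bits, iters = 20, 30, 100
w, c1, c2, vmax = 0.7, 1.5, 1.5, 4.0

X = rng.integers(0, 2, (n_particles, n_bits))      # bit-string positions
V = np.zeros((n_particles, n_bits))
pbest, pbest_f = X.copy(), X.sum(axis=1)           # fitness = number of ones
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = np.clip(w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X),
                -vmax, vmax)
    # Sigmoid of the velocity gives the probability that a bit becomes 1.
    X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)
    f = X.sum(axis=1)
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best fitness:", pbest_f.max(), "/", n_bits)
```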
Effects of inhaled furosemide on platelet-activating factor challenge in mild asthma. | Furosemide (Fur) may have an anti-inflammatory effect on airways in patients with asthma although its intrinsic mechanism remains elusive. Platelet-activating factor (PAF) is a potent proinflammatory mediator that induces systemic and respiratory effects in normal control subjects and asthmatics. The aim of this study was to assess whether pretreatment with nebulized Fur (40 mg) was able to modulate PAF-induced systemic and respiratory effects in asthma. Eleven patients were studied (mean±SEM 22±0.8 yrs) with mild asthma (forced expiratory volume in one second, 95±4%) in a randomized, double-blind, placebo-controlled, cross-over fashion, one week apart. PAF challenge (18 μg) was carried out 15 min after administration of Fur or placebo. Peripheral blood neutrophils, respiratory system resistance, and arterial blood gases were measured at baseline, and 5, 15 and 45 min after PAF; urinary cysteinyl leukotriene E4 (uLTE4) was also measured, at baseline and 120 min after PAF challenge. Although Fur did not alter PAF-induced systemic and respiratory effects, it did partially inhibit (63%; p<0.04) the increments of uLTE4 levels shown after PAF inhalation. It is concluded that furosemide is not effective in protecting against platelet-activating factor challenge in patients with asthma despite its potential inhibition of leukotriene synthesis. These findings reinforce the view that the pulmonary effects of platelet-activating factor are mediated through different pathways. |
An Experimental Comparison of RGB, YIQ, LAB, HSV, and Opponent Color Models | The increasing availability of affordable color raster graphics displays has made it important to develop a better understanding of how color can be used effectively in an interactive environment. Most contemporary graphics displays offer a choice of some 16 million colors; the user's problem is to find the right color.
Folklore has it that the RGB color space arising naturally from color display hardware is user-hostile and that other color models such as the HSV scheme are preferable. Until now there has been virtually no experimental evidence addressing this point.
We describe a color matching experiment in which subjects used one of two tablet-based input techniques, interfaced through one of five color models, to interactively match target colors displayed on a CRT.
The data collected show small but significant differences between models in the ability of subjects to match the five target colors used in this experiment. Subjects using the RGB color model matched quickly but inaccurately compared with those using the other models. The largest speed difference occurred during the early convergence phase of matching. Users of the HSV color model were the slowest in this experiment, both during the convergence phase and in total time to match, but were relatively accurate. There was less variation in performance during the second refinement phase of a match than during the convergence phase.
Two-dimensional use of the tablet resulted in faster but less accurate performance than did strictly one-dimensional usage.
Significant learning occurred for users of the Opponent, YIQ, LAB, and HSV color models, and not for users of the RGB color model. |
Designing Sketches for Similarity Filtering | The volume of data currently produced emphasizes the importance of techniques for efficient data processing. Searching big data collections by similarity corresponds well to human perception. This paper focuses on similarity search using the concept of sketches: compact bit-string representations of data objects compared by Hamming distance, which can be used to filter big datasets. The object-to-sketch transformation is a form of dimensionality reduction, and thus there are two contradictory requirements: (1) the length of the sketches should be small for efficient manipulation, but (2) longer sketches retain more information about the data objects. First, we study various sketching methods for data modeled by a metric space and analyse their quality. Specifically, we study the importance of several sketch properties for similarity search and propose a high-quality sketching technique. Further, we focus on the length of sketches by studying the mutual influence of sketch properties such as the correlation of their bits and the intrinsic dimensionality of a set of sketches. The outcome is an equation that allows us to estimate a suitable length of sketches for an arbitrary given dataset. Finally, we empirically verify the proposed approach on two real-life datasets.
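To make the filtering idea concrete, here is a minimal sketch: objects are mapped to bit strings, candidates are pruned by Hamming distance on the sketches, and only survivors are ranked by the exact (expensive) distance. The random-hyperplane transform used here is one standard object-to-sketch choice for Euclidean data, not necessarily the technique proposed in the paper.

```python
import numpy as np

def make_sketches(X, n_bits, seed=0):
    """One standard object-to-sketch transform: random-hyperplane signs.
    The same seed reproduces the same hyperplanes for queries."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((X.shape[1], n_bits))
    return (X @ H >= 0).astype(np.uint8)   # one bit per hyperplane

def hamming(a, b):
    return int(np.count_nonzero(a != b))

def filtered_search(query, X, sketches, q_sketch, radius_bits, k=10):
    """Cheap Hamming comparisons prune the dataset; the survivors are
    ranked by the exact Euclidean distance."""
    cand = [i for i, s in enumerate(sketches) if hamming(s, q_sketch) <= radius_bits]
    cand.sort(key=lambda i: np.linalg.norm(X[i] - query))
    return cand[:k]

X = np.random.default_rng(1).standard_normal((1000, 64))
S = make_sketches(X, n_bits=64)
q = X[0]
qs = make_sketches(q[None, :], n_bits=64)[0]
nearest = filtered_search(q, X, S, qs, radius_bits=20)
```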
Topological structural analysis of digitized binary images by border following | Two border following algorithms are proposed for the topological analysis of digitized binary images. The first one determines the surroundness relations among the borders of a binary image. Since the outer borders and the hole borders have a one-to-one correspondence to the connected components of 1-pixels and to the holes, respectively, the proposed algorithm yields a representation of a binary image, from which one can extract some sort of features without reconstructing the image. The second algorithm, which is a modified version of the first, follows only the outermost borders (i.e., the outer borders which are not surrounded by holes). These algorithms can be effectively used in component counting, shrinking, and topological structural analysis of binary images, when a sequential digital computer is used.
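A practical note: this border-following method (Suzuki and Abe, 1985) is the algorithm behind OpenCV's findContours, so both variants can be illustrated directly. The toy image below is only an example.

```python
import cv2
import numpy as np

img = np.zeros((100, 100), np.uint8)
cv2.rectangle(img, (10, 10), (80, 80), 255, -1)   # a filled component...
cv2.rectangle(img, (30, 30), (60, 60), 0, -1)     # ...with a hole inside

# First algorithm: all borders plus their surroundness relations.
# RETR_CCOMP organizes contours into outer borders and hole borders.
# (Return signature shown is for OpenCV >= 4.)
contours, hierarchy = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

# Second algorithm: only the outermost borders.
outer, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

print(len(contours), len(outer))   # 2 borders in total, 1 outermost
```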
Minimum Text Corpus Selection for Limited Domain Speech Synthesis | This paper concerns a limited domain TTS system based on the concatenative method, and presents an algorithm capable of extracting the minimal domain-oriented text corpus from real data of the given domain while still reaching maximum coverage of the domain. The proposed approach ensures that the least amount of text is extracted, containing the most common phrases and (possibly) all the words from the domain. At the same time, it ensures that appropriate phrase overlapping is kept, allowing smooth concatenations to be found in the overlapped regions and high-quality synthesized speech to be reached. In addition, several recommendations allowing a speaker to record the corpus more fluently and comfortably are presented and discussed. The corpus building is tested and evaluated on several domains differing in size and nature, and the authors present the results of the algorithm and demonstrate the advantages of using the domain-oriented corpus for speech synthesis.
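The abstract leaves the extraction algorithm at a high level; a minimal greedy set-cover sketch conveys the core coverage idea of repeatedly selecting the sentence that covers the most still-uncovered domain units (plain words here; the paper additionally enforces phrase overlap for smooth concatenation).

```python
def greedy_corpus(sentences, target_units=None):
    """Greedy minimal-corpus selection: repeatedly pick the sentence that
    covers the most still-uncovered units (here, words). A sketch of the
    coverage idea only; the paper also enforces phrase overlapping."""
    units = lambda s: set(s.lower().split())
    to_cover = set(target_units) if target_units else set().union(*(units(s) for s in sentences))
    chosen = []
    while to_cover:
        best = max(sentences, key=lambda s: len(units(s) & to_cover))
        gain = units(best) & to_cover
        if not gain:
            break                      # remaining units cannot be covered
        chosen.append(best)
        to_cover -= gain
    return chosen

corpus = greedy_corpus([
    "set the alarm for seven",
    "set a timer for ten minutes",
    "cancel the alarm",
])
```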
Chapter Fifteen. Homer's Phoenicians: History, Ethnography, or Literary Trope? [A Perspective on Early Orientalism] | This chapter explores the texts of the Iliad and the Odyssey and examines how to account for anomalies between the historical evidence and the way Phoenicians are represented. It considers two aspects of how to account for the picture(s) drawn: first, a double-pronged inquiry into the role the Phoenicians can be seen to play within the texts, along with an analysis of the narrative goals of those portions of the text in which Phoenicians appear, that would account for the way they were represented; and second, a consideration of the role the Homeric texts themselves may be said to have played within the larger context of eighth/seventh-century Greece, the presumed time of their writing. "Homer's Phoenicians" present a masterful literary construct, at once produced by and working to produce the broader social, political, economic, and symbolic fabric of the early state in Archaic Greece.
Does the Fragrance of Essential Oils Alleviate the Fatigue Induced by Exercise? A Biochemical Indicator Test in Rats | Objective
To study the effect of the essential oils of Citrus sinensis L., Mentha piperita L., Syzygium aromaticum L., and Rosmarinus officinalis L. on physical exhaustion in rats.
Methods
Forty-eight male Wistar rats were randomly divided into a control group, a fatigue group, an essential oil mixture (EOM) group, and a peppermint essential oil (PEO) group. Loaded swimming to exhaustion was used as the rat fatigue model. Two groups were nebulized with EOM and PEO after swimming, and the others were nebulized with distilled water. After continuous inhalation for 3 days, the swimming time, blood glucose, blood lactic acid (BLA), blood urea nitrogen (BUN), superoxide dismutase (SOD), glutathione peroxidase (GSH-PX), and malondialdehyde (MDA) in blood were determined.
Results
While an increased time to exhaustion and increased SOD activity were apparent in both the EOM and PEO groups, BLA and MDA were lower in both groups in comparison with the fatigue group, and the changes in the EOM group were more dramatic. Additionally, the EOM group also showed marked changes: a rise in blood glucose and decreases in BUN and GSH-PX.
Conclusion
The results suggested that the inhalation of an essential oil mixture could powerfully relieve exercise-induced fatigue. |
Endoscopic Surgery: The History, the Pioneers | The introduction of endoscopy into surgical practice is one of the biggest success stories in the history of medicine. Endoscopy has its roots in the nineteenth century and was initially developed by urologists and internists. During the 1960s and 1970s gynecologists took the lead in the development of endoscopic surgery while most of the surgical community continued to ignore the possibilities of the new technique. This was due in part to the introduction of ever more sophisticated drugs, the impressive results of intensive care medicine, and advances in anesthesia, which led to the development of more radical and extensive operations, or “major surgery.” The idea that large problems require large incisions so deeply dominated surgical thinking that there was little room to appreciate the advances of “key-hole” surgery. Working against this current, some general surgeons took up the challenge. In 1976 the Surgical Study Group on Endoscopy and Ultrasound (CAES) was formed in Hamburg. Five years later, on the other side of the Atlantic, the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) was called into being. In 1987 the first issue of the journal Surgical Endoscopy was published, and the following year the First World Congress on Surgical Endoscopy took place in Berlin. The sweeping success of the “laparoscopic revolution” (1989–1990) marked the end of traditional open surgery and encouraged surgeons to consider new perspectives. By the 1990s the breakthrough had been accomplished: endoscopy was incorporated into surgical thinking. |
Multiple Light Source Estimation in a Single Image | Many high-level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image-based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet, accurate and robust illumination estimation, particularly from a single image, is a highly ill-posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground-truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis. |
Hardware design experiences in ZebraNet | The enormous potential for wireless sensor networks to make a positive impact on our society has spawned a great deal of research on the topic, and this research is now producing environment-ready systems. Current technology limits coupled with widely-varying application requirements lead to a diversity of hardware platforms for different portions of the design space. In addition, the unique energy and reliability constraints of a system that must function for months at a time without human intervention mean that demands on sensor network hardware are different from the demands on standard integrated circuits. This paper describes our experiences designing sensor nodes and low level software to control them.
In the ZebraNet system we use GPS technology to record fine-grained position data in order to track long term animal migrations [14]. The ZebraNet hardware is composed of a 16-bit TI microcontroller, 4 Mbits of off-chip flash memory, a 900 MHz radio, and a low-power GPS chip. In this paper, we discuss our techniques for devising efficient power supplies for sensor networks, methods of managing the energy consumption of the nodes, and methods of managing the peripheral devices including the radio, flash, and sensors. We conclude by evaluating the design of the ZebraNet nodes and discussing how it can be improved. Our lessons learned in developing this hardware can be useful both in designing future sensor nodes and in using them in real systems. |
Cardiovascular and metabolic effects of 48-h glucagon-like peptide-1 infusion in patients with compensated chronic heart failure. | The incretin hormone glucagon-like peptide-1 (GLP-1) and its analogs are currently emerging as antidiabetic medications. GLP-1 improves left ventricular ejection fraction (LVEF) in dogs with heart failure (HF) and in patients with acute myocardial infarction. We studied the metabolic and cardiovascular effects of 48-h GLP-1 infusions in patients with congestive HF. In a randomized, double-blind crossover design, 20 patients without diabetes and with HF with ischemic heart disease, EF of 30 ± 2%, New York Heart Association class II and III (n = 14 and 6), received 48-h GLP-1 (0.7 pmol·kg⁻¹·min⁻¹) and placebo infusions. At 0 and 48 h, LVEF, diastolic function, tissue Doppler regional myocardial function, exercise testing, noninvasive cardiac output, and brain natriuretic peptide (BNP) were measured. Blood pressure, heart rate, and metabolic parameters were recorded. Fifteen patients completed the protocol. GLP-1 increased insulin (90 ± 17 pmol/l vs. 69 ± 12 pmol/l; P = 0.025) and lowered glucose levels (5.2 ± 0.1 mmol/l vs. 5.6 ± 0.1 mmol/l; P < 0.01). Heart rate (67 ± 2 beats/min vs. 65 ± 2 beats/min; P = 0.016) and diastolic blood pressure (71 ± 2 mmHg vs. 68 ± 2 mmHg; P = 0.008) increased during GLP-1 treatment. Cardiac index (1.5 ± 0.1 l·min⁻¹·m⁻² vs. 1.7 ± 0.2 l·min⁻¹·m⁻²; P = 0.54) and LVEF (30 ± 2% vs. 30 ± 2%; P = 0.93), tissue Doppler indexes, body weight, and BNP remained unchanged. Hypoglycemic events related to GLP-1 treatment were observed in eight patients. GLP-1 infusion increased circulating insulin levels and reduced plasma glucose concentration but had no major cardiovascular effects in patients without diabetes but with compensated HF. The impact of minor increases in heart rate and diastolic blood pressure during GLP-1 infusion requires further studies. Hypoglycemia was frequent and calls for caution in patients without diabetes but with HF.
Logistics supply chain management based on business ecology theory | Within the same commercial ecosystem, the main parties in logistics services, such as transport providers, suppliers, and purchasers, pursue their interests differently, yet all stakeholders serving the same business or consumers coexist and share resources with one another. On this basis, this paper constructs a model of bonded logistics supply chain management based on commercial ecology theory, focusing on the choice of transportation mode and a multi-attribute behavioral decision-making model based on risk preference toward the mode of transporting goods. After the criteria weights are assigned, the paper solves the model with the ELECTRE-II algorithm, providing a scientific basis for bonded logistics supply chain management decisions through the decision model and the ELECTRE-II algorithm.
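ELECTRE-II is only named in the abstract; as a reading aid, the sketch below computes the concordance matrix that all ELECTRE variants build on. The transport-mode scores and weights are hypothetical, and the full method additionally applies discordance tests and strong/weak outranking thresholds.

```python
import numpy as np

def concordance_matrix(scores, weights):
    """Concordance index of ELECTRE methods: C[a, b] is the weight share
    of criteria on which alternative a does at least as well as b."""
    w = np.asarray(weights, float) / np.sum(weights)
    n = scores.shape[0]
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                C[a, b] = w[scores[a] >= scores[b]].sum()
    return C

# Hypothetical transport modes scored on cost, speed, risk (higher = better).
scores = np.array([[0.7, 0.4, 0.9],    # rail
                   [0.5, 0.9, 0.4],    # road
                   [0.9, 0.2, 0.6]])   # water
C = concordance_matrix(scores, weights=[0.5, 0.3, 0.2])
# ELECTRE-II would then combine C with discordance tests and
# strong/weak outranking thresholds to rank the alternatives.
```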
Brentuximab vedotin in relapsed/refractory Hodgkin's lymphoma: the Italian experience and results of its use in daily clinical practice outside clinical trials. | Clinical trial results indicate that brentuximab vedotin brings considerable promise for the treatment of patients with relapsed or refractory Hodgkin's lymphoma. A retrospective multicenter study was conducted on 65 heavily pretreated patients who underwent therapy through a Named Patient Program in Italy (a non-trial setting). The primary study endpoint was the objective response rate; secondary endpoints were safety, overall survival, and progression-free survival. The best overall response rate (70.7%), including 21.5% complete responses, was observed at the first restaging after the third cycle of treatment. After a median follow-up of 13.2 months, the overall survival rate at 20 months was 73.8%, while the progression-free survival rate at 20 months was 24.2%. Overall, nine patients are in continuous complete response with a median follow-up of 14 months (range, 10-19 months). Four patients proceeded to autotransplantation and nine to allotransplantation. The most frequent extra-hematologic toxicity was peripheral neuropathy, observed in 21.5% of cases (9 patients with grade 1/2 and 5 patients with grade 3/4); neurological toxicity led to discontinuation of treatment in three patients and to dose reduction in four. In general, the treatment was well tolerated and toxicities, both hematologic and extra-hematologic, were manageable. This report indicates and confirms that brentuximab vedotin as a single agent is effective and safe also when used in standard, everyday clinical practice outside a clinical trial. Best overall responses were recorded after three or four cycles and showed that brentuximab vedotin provides an effective bridge to further therapeutic interventions.
Maintenance scheduling of geographically distributed assets with prognostics information | Maintenance scheduling for high-value assets has been studied for decades and is still a crucial area of research with new technological advancements. The main dilemma of maintenance scheduling is to avoid failures while preventing unnecessary maintenance. Technological advancements in real-time monitoring and computational science make it possible to track asset health and forecast asset failures. The usage and maintenance of assets can be planned more efficiently with the forecasted failure probability and remaining useful life (i.e., prognostic information). The prognostic information is time sensitive. Geographically distributed assets such as off-shore wind farms and railway switches add another complexity to the maintenance scheduling problem: the travel time required to reach these assets. Thus, the travel time between geographically distributed assets should be incorporated in the maintenance scheduling when one technician (or team) is responsible for the maintenance of multiple assets. This paper presents a methodology to schedule the maintenance of geographically distributed assets using their prognostic information. A Genetic Algorithm-based solution incorporating the daily work duration of the maintenance team is also presented in the paper.
Regulation of hematopoietic cell clusters in the placental niche through SCF/Kit signaling in embryonic mouse. | Hematopoietic stem cells (HSCs) emerge from and expand in the mouse placenta at mid-gestation. To determine their compartment of origin and define extrinsic signals governing their commitment to this lineage, we identified hematopoietic cell (HC) clusters in mouse placenta, defined as cells expressing the embryonic HSC markers CD31, CD34 and Kit, by immunohistochemistry. HC clusters were first observed in the placenta at 9.5 days post coitum (dpc). To determine their origin, we tagged the allantoic region with CM-DiI at 8.25 dpc, prior to placenta formation, and cultured embryos in a whole embryo culture (WEC) system. CM-DiI-positive HC clusters were observed 42 hours later. To determine how clusters are extrinsically regulated, we isolated niche cells using laser capture micro-dissection and assayed them for expression of genes encoding hematopoietic cytokines. Among a panel of candidates assayed, only stem cell factor (SCF) was expressed in niche cells. To define niche cells, endothelial and mesenchymal cells were sorted by flow cytometry from dissociated placenta and hematopoietic cytokine gene expression was investigated. The endothelial cell compartment predominantly expressed SCF mRNA and protein. To determine whether SCF/Kit signaling regulates placental HC cluster proliferation, we injected anti-Kit neutralizing antibody into 10.25 dpc embryos and assayed cultured embryos for expression of hematopoietic transcription factors. Runx1, Myb and Gata2 were downregulated in the placental HC cluster fraction relative to controls. These observations demonstrate that placental HC clusters originate from the allantois and are regulated by endothelial niche cells through SCF/Kit signaling. |
Robotic surgery: changing the surgical approach for endometrial cancer in a referral cancer center. | STUDY OBJECTIVE
To study the effect of robotic surgery on the surgical approach to endometrial cancer in a gynecologic oncology center over a short time.
DESIGN
Prospective analysis of patients with early-stage endometrial cancer who underwent robotic surgery.
SETTING
Teaching hospital.
PATIENTS
Eighty patients who underwent robotic surgery.
INTERVENTIONS
Between November 2006 and October 2008, 80 consecutive patients with an initial diagnosis of endometrial cancer consented to undergo robotic surgery at the European Institute of Oncology, Milan, Italy.
MEASUREMENTS AND MAIN RESULTS
We collected all patient data for demographics, operating time, estimated blood loss, histologic findings, lymph node count, analgesic-free postoperative day, length of stay, and intraoperative and early postoperative complications. Mean (SD) patient age was 58.3 (11.5) years (95% confidence interval [CI], 55.7-60.9). Body mass index was 25.2 (6.1) kg/m(2) (95% CI, 23.6-26.7). In 3 patients (3.7%), conversion to conventional laparotomy was required. Mean operative time was 181.1 (63.1) minutes (95% CI, 166.7-195.5). Mean docking time was 4.5 (1.1) minutes (95% CI, 2.2-2.7). Mean hospital stay was 2.5 (1.1) days (95% CI, 2.2-2.7), and 93% of patients were analgesic-free on postoperative day 2.
CONCLUSIONS
Over a relatively short time using the da Vinci surgical system, we observed a substantial change in our surgical activity. For endometrial cancer, open surgical procedures decreased from 78% to 35%. Moreover, our preliminary data confirm that surgical robotic staging for early-stage endometrial cancer is feasible and safe. Age, obesity, and previous surgery do not seem to be contraindications. |
Learning classifier system equivalent with reinforcement learning with function approximation | We present an experimental comparison of the reinforcement process between a Learning Classifier System (LCS) and Reinforcement Learning (RL) with function approximation (FA), with regard to their generalization mechanisms. To validate our previous theoretical analysis, which derived the equivalence of the reinforcement process between LCS and RL, we introduce a simple test environment named Gridworld, which can be applied to both LCS and RL with three different classes of generalization: (1) tabular representation; (2) state aggregation; and (3) linear approximation. In the simulation experiments comparing LCS with its GA-inactivated variant and the corresponding RL method, all cases across the generalization classes showed identical results under the criteria of performance and temporal-difference (TD) error, thereby verifying the equivalence predicted by the theory.
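For reference, the TD error used as an evaluation criterion above is the standard one-step Q-learning form; the three generalization classes correspond to different representations of Q (tabular representation and state aggregation are both special cases of linear approximation with indicator features):

```latex
\delta_t \;=\; r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t),
\qquad
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\, \delta_t
```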
Swearing, Euphemisms, and Linguistic Relativity | Participants read aloud swear words, euphemisms of the swear words, and neutral stimuli while their autonomic activity was measured by electrodermal activity. The key finding was that autonomic responses to swear words were larger than to euphemisms and neutral stimuli. It is argued that the heightened response to swear words reflects a form of verbal conditioning in which the phonological form of the word is directly associated with an affective response. Euphemisms are effective because they replace the trigger (the offending word form) by another word form that expresses a similar idea. That is, word forms exert some control on affect and cognition in turn. We relate these findings to the linguistic relativity hypothesis, and suggest a simple mechanistic account of how language may influence thinking in this context. |
Next step recommendation and prediction based on process mining in adaptive case management | Adaptive Case Management (ACM) is a new paradigm that facilitates the coordination of knowledge work through case handling. Current ACM systems, however, lack support for sophisticated user guidance such as next-step recommendations and predictions about a case's future. In recent years, process mining research has developed approaches to make recommendations and predictions based on event logs readily available in process-aware information systems. This paper builds upon those approaches and integrates them into an existing ACM solution. The research goal is to design and develop a prototype that gives next-step recommendations and predictions based on process mining techniques in ACM systems. The proposed models recommend actions that shorten the case running time, mitigate deadline transgressions, support case goals, and have been used in former cases with similar properties. They further give case predictions about the remaining time, possible deadline violations, and whether the current case path supports the given case goals. A final evaluation proves that the prototype is indeed capable of making proper recommendations and predictions. In addition, starting points for further improvement are discussed.
Light load efficiency improvement for PFC | With fast-growing information technologies, high-efficiency AC-DC front-end power supplies are increasingly desired in all kinds of distributed power system applications for energy conservation. For the power factor correction (PFC) stage, the conventional constant-frequency average current mode control has very low efficiency at light load due to switching-frequency-related losses. Constant on-time control for PFC features automatic reduction of the switching frequency at light load, resulting in improved light-load efficiency. However, lower heavy-load efficiency of the constant on-time control is observed because of the very high frequency in Continuous Conduction Mode (CCM). By carefully comparing the on-time and frequency profiles of constant on-time and constant-frequency control, a novel adaptive on-time control is proposed to improve the light-load efficiency without sacrificing the heavy-load efficiency. The performance of the adaptive on-time control is verified by experiment.
A vision-based method for weeds identification through the Bayesian decision theory | One of the objectives of precision agriculture is to minimize the volume of herbicides applied to fields through the use of site-specific weed management systems. This paper outlines an automatic computer vision-based approach for the detection and differential spraying of weeds in corn crops. The method is designed for post-emergence herbicide applications, where weeds and corn plants display similar spectral signatures and the weeds appear irregularly distributed within the crop field. The proposed strategy involves two processes: image segmentation and decision making. Image segmentation combines suitable basic image processing techniques to extract cells from the image as the low-level units. Each cell is described by two area-based measures relating crop and weed coverage. The decision making determines the cells to be sprayed based on the computation of a posterior probability under a Bayesian framework. The a priori probability in this framework is computed taking into account the dynamics of the physical system (the tractor) in which the method is embedded. The main contributions of this paper are: (1) the combination of the image segmentation and decision making processes and (2) the decision making itself, which exploits previous knowledge mapped to the a priori probability. The performance of the method is illustrated by comparative analysis against some existing strategies.
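As a reading aid, here is a minimal numeric sketch of the Bayesian decision step described above: the posterior probability that a cell should be sprayed given an observed weed-coverage feature. The Gaussian likelihoods and prior value are hypothetical placeholders; in the paper the a priori probability is derived from the tractor dynamics.

```python
import numpy as np

def posterior_spray(x, prior_spray, like_spray, like_clean):
    """Bayes rule for the per-cell decision: P(spray | x), where x is the
    cell's weed-coverage feature. Likelihood models are placeholders."""
    num = like_spray(x) * prior_spray
    den = num + like_clean(x) * (1.0 - prior_spray)
    return num / den

# Hypothetical Gaussian likelihoods over a weed-coverage ratio x in [0, 1].
gauss = lambda mu, sd: (lambda x: np.exp(-0.5 * ((x - mu) / sd) ** 2)
                        / (sd * np.sqrt(2 * np.pi)))
p = posterior_spray(x=0.35, prior_spray=0.2,
                    like_spray=gauss(0.5, 0.15), like_clean=gauss(0.1, 0.1))
spray = p > 0.5     # minimum-error-rate Bayesian decision
```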
Digital LDO with analog-assisted dynamic reference correction for fast and accurate load regulation | Low-dropout voltage regulators (LDOs) have been extensively used on-chip to supply voltage for various circuit blocks. Digital LDOs (DLDOs) have recently attracted circuit designers for their low-voltage operating capability and load current scalability. Existing DLDO techniques suffer from either poor transient performance due to a slow digital control loop or poor DC load regulation due to low loop gain. A dual-loop architecture to improve the DC load regulation and transient performance is proposed in this work. The proposed regulator uses a fast control loop for improved transient response and an analog-assisted dynamic reference correction loop for improved DC load regulation. The design achieved a DC load regulation of 0.005 mV/mA and a settling time of 139 ns while regulating loads up to 200 mA. The proposed DLDO is designed in 28 nm FD-SOI technology with a 0.027 mm² active area.
IPv6 over LoRaWAN™ | Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies supporting Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility of integrating the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
Online Learning: Beyond Regret | We study online learnability of a wide class of problems, extending the results of Rakhlin et al. (2010a) to general notions of performance measure well beyond external regret. Our framework simultaneously captures such well-known notions as internal and general Φ-regret, learning with non-additive global cost functions, Blackwell's approachability, calibration of forecasters, and more. We show that learnability in all these situations is due to control of the same three quantities: a martingale convergence term, a term describing the ability to perform well if the future is known, and a generalization of sequential Rademacher complexity, studied in Rakhlin et al. (2010a). Since we directly study the complexity of the problem instead of focusing on efficient algorithms, we are able to improve and extend many known results which have previously been derived via an algorithmic construction.
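For reference, the sequential Rademacher complexity mentioned above is defined (up to normalization conventions) over trees rather than fixed samples:

```latex
\mathfrak{R}_T(\mathcal{F}) \;=\;
\sup_{\mathbf{x}} \; \mathbb{E}_{\epsilon}
\left[ \sup_{f \in \mathcal{F}} \sum_{t=1}^{T}
\epsilon_t \, f\big(\mathbf{x}_t(\epsilon_{1:t-1})\big) \right]
```

where the ε_t are i.i.d. Rademacher (±1) signs and x_t maps the first t-1 signs to the instance presented at round t.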
PACE BOM chemotherapy: a 12-week regimen for advanced Hodgkin's disease. | BACKGROUND
This study was designed to evaluate the efficacy and toxicity of a 12-week alternating weekly chemotherapy regimen for advanced Hodgkin's disease. Consolidative irradiation of residual masses was used in selected cases.
PATIENTS AND METHODS
Eighty-three patients with newly diagnosed advanced Hodgkin's disease (bulky stage IIA, stage IIB-IVB) or with progressive disease after extended field radiotherapy for early stage disease were included in this study. The patients were treated for 12 weeks with PACE BOM comprising oral prednisolone together with intravenous doxorubicin, cyclophosphamide and etoposide alternating weekly with intravenous bleomycin, vincristine and methotrexate. Limited field adjuvant radiotherapy was also given to 21 patients with localised persistent radiological abnormalities visible on chest X-ray after chemotherapy. The study end points were overall survival, failure free survival (FFS) and toxicity, particularly with respect to reproductive function.
RESULTS
With a median post-treatment follow-up of 52 months, the actuarial 5-year overall survival is 90% (confidence interval 81%-95%) and FFS is 64% (52%-74%). The treatment was well tolerated and fertility was maintained in a high proportion of young adults.
CONCLUSIONS
The brief duration PACE BOM regimen with or without radiotherapy appears to be comparable in efficacy to other doxorubicin containing regimens, with a favourable toxicity profile. Randomised clinical trials are now needed to evaluate the role of this and comparable initial treatment approaches to advanced Hodgkin's disease. |
DPICO: a high speed deep packet inspection engine using compact finite automata | Deep Packet Inspection (DPI) has been widely adopted in detecting network threats such as intrusions, viruses, and spam. It is challenging, however, to achieve high-speed DPI due to expanding rule sets and ever-increasing line rates. A key issue is that the size of the finite automata exceeds the capacity of on-chip memory, incurring expensive off-chip accesses. In this paper we present DPICO, a hardware-based DPI engine that utilizes novel techniques to minimize the storage requirements of finite automata. The proposed techniques are a modified content-addressable memory (mCAM), interleaved memory banks, and data packing. The experimental results show that DPICO scales to a throughput of up to 17.7 Gbps using a contemporary FPGA chip. Experimental data also show that a DPICO-based accelerator can improve the pattern-matching performance of a DPI server by up to 10 times.
A MODIS-Based Novel Method to Distinguish Surface Cyanobacterial Scums and Aquatic Macrophytes in Lake Taihu | Satellite remote sensing can be an effective alternative for mapping cyanobacterial scums and aquatic macrophyte distribution over large areas, compared with traditional ship-based site-specific sampling. However, similar optical spectral characteristics between aquatic macrophytes and cyanobacterial scums in red and near-infrared (NIR) wavebands create a barrier to discriminating between them when they co-occur. We developed a new cyanobacteria and macrophytes index (CMI) based on a blue, a green, and a shortwave infrared band to separate waters with cyanobacterial scums from those dominated by aquatic macrophytes, and a turbid water index (TWI) to avoid interference from the highly turbid waters typical of shallow lakes. Combining CMI, TWI, and the floating algae index (FAI), we used a novel classification approach to discriminate lake water, cyanobacteria blooms, submerged macrophytes, and emergent/floating macrophytes using MODIS imagery in the large, shallow, and eutrophic Lake Taihu (China). Thresholds for CMI, TWI, and FAI were determined by statistical analysis of a 2010-2016 MODIS Aqua time series. We validated the accuracy of our approach with in situ reflectance spectra, field investigations, and high spatial resolution HJ-CCD data. The overall classification accuracy was 86%, and the user's accuracy was 88%, 79%, 85%, and 93% for submerged macrophytes, emergent/floating macrophytes, cyanobacterial scums, and lake water, respectively. The estimated aquatic macrophyte distributions were consistent with those based on HJ-CCD data. This new approach allows for the coincident determination of the distributions of cyanobacteria blooms and aquatic macrophytes in eutrophic shallow lakes. We also discuss the utility of the approach with respect to masking clouds, black waters, and atmospheric effects, and its mixed-pixel effects.
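The abstract describes a threshold-based per-pixel decision scheme combining CMI, TWI, and FAI, but does not give the band formulas or calibrated thresholds. The sketch below therefore only illustrates the shape of such a decision rule; every threshold, branch direction, and class split is a hypothetical placeholder, and the paper's actual tree additionally separates submerged from emergent/floating macrophytes.

```python
def classify_pixel(fai, cmi, twi, t_fai=0.0, t_cmi=0.0, t_twi=0.0):
    """Illustrative per-pixel decision combining the three indices.
    All thresholds (t_*) and branch directions are hypothetical,
    not the calibrated values from the paper."""
    if twi > t_twi:
        return "highly turbid water (screened out)"
    if fai <= t_fai:
        return "lake water"              # no floating/near-surface signal
    if cmi > t_cmi:
        return "aquatic macrophytes"     # CMI separates the two look-alikes
    return "cyanobacterial scum"

label = classify_pixel(fai=0.05, cmi=-0.02, twi=-0.1)
```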
Correlation between EEG-EMG coherence during isometric contraction and its imaginary execution. | To assess the similarity between cortical activities observed during actual and imaginary motor tasks, we evaluated electroencephalography-electromyography (EEG-EMG) coherence during motor task execution (ME) and the same task-related EEG power increase (TRPI) during kinesthetic motor imagery (MI). EEGs recorded at the vertex and EMGs recorded at the right tibialis anterior muscle (TA) were analyzed in 13 healthy subjects. Subjects were requested to perform: (1) isometric TA contraction, (2) imagery of the same movement without overt motor behavior, and (3) rest without MI. The results show significant EEG-EMG coherence during ME, as well as TRPI during both ME and MI tasks within a similar 14-30 Hz band. The magnitude of EEG-EMG coherence and TRPI varied among the subjects. Intersubject analysis revealed a significant correlation between EEG-EMG coherence and TRPI. These results support the hypothesis that ME and MI tasks involve overlapping neural networks in the perirolandic cortical areas.
High Altitude Wind Energy Generation Using Controlled Power Kites | This paper presents simulation and experimental results regarding a new class of wind energy generators, denoted as KiteGen, which employ power kites to capture high-altitude wind power. A realistic kite model, which includes the kite aerodynamic characteristics and the effects of line weight and drag forces, is used to describe the system dynamics. Nonlinear model predictive control techniques, together with an efficient implementation based on set membership function approximation theory, are employed to maximize the energy obtained by KiteGen while satisfying input and state constraints. Two different kinds of KiteGen are investigated through numerical simulations: the yo-yo configuration and the carousel configuration. For each configuration, a generator with the same kite and nominal wind characteristics is considered. A novel control strategy for the carousel configuration, with respect to previous works, is also introduced. The simulation results show that the power generation potentials of the yo-yo and carousel configurations are very similar. Thus, the choice between these two configurations for further development of a medium-to-large scale generator will be made on the basis of technical implementation issues and of other criteria such as construction costs and generated power density with respect to land occupation. Experimental data, collected using the small-scale KiteGen prototype built at Politecnico di Torino, are compared to simulation results. The good match between simulated and measured data increases confidence in the presented simulation results, which show that energy generation with controlled power kites can represent a quantum leap in wind power technology, promising renewable energy from a source largely available almost everywhere, with production costs lower than those of fossil sources.
Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images | Automated detection of blood vessel structures is becoming of crucial interest for better management of vascular disease. In this paper, we propose a new infinite active contour model that uses hybrid region information of the image to approach this problem. More specifically, an infinite perimeter regularizer, provided by using the L2 Lebesgue measure of the γ-neighborhood of boundaries, allows for better detection of small oscillatory (branching) structures than the traditional models based on the length of a feature's boundaries (i.e., the H1 Hausdorff measure). Moreover, for better general segmentation performance, the proposed model takes advantage of different types of region information, such as the combination of intensity information and a local phase-based enhancement map. The local phase-based enhancement map is used for its superiority in preserving vessel edges, while the image intensity information guarantees correct segmentation of the feature. We evaluate the performance of the proposed model by applying it to three public retinal image datasets (two datasets of color fundus photography and one fluorescein angiography dataset). The proposed model outperforms its competitors when compared with other widely used unsupervised and supervised methods. For example, the sensitivity (0.742), specificity (0.982) and accuracy (0.954) achieved on the DRIVE dataset are very close to those of the second observer's annotations.
Distance measures for layout-based document image retrieval | Most methods for document image retrieval rely solely on text information to find similar documents. This paper describes a way to use layout information for document image retrieval instead. A new class of distance measures is introduced for documents with Manhattan layouts, based on a two-step procedure: first, the distances between the blocks of two layouts are calculated; then, the blocks of one layout are assigned to the blocks of the other layout in a matching step. Different block distances and matching methods are compared and evaluated using the publicly available MARG database. On this dataset, the layout type can be determined successfully in 92.6% of the cases using the best distance measure in a nearest-neighbor classifier. The experiments show that the best distance measure for this task is the overlapping area combined with the Manhattan distance of the corner points as the block distance, together with minimum-weight edge cover matching.
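To make the two-step procedure concrete, the sketch below computes pairwise block distances and then matches blocks. It uses one-to-one Hungarian assignment via SciPy as a simpler stand-in for the paper's minimum-weight edge cover (unmatched blocks are simply ignored here), and corner-point Manhattan distance as the example block distance; all data are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def layout_distance(blocks_a, blocks_b, block_dist):
    """Two-step layout distance: (1) pairwise block distances,
    (2) match blocks and sum the matched costs."""
    D = np.array([[block_dist(a, b) for b in blocks_b] for a in blocks_a])
    rows, cols = linear_sum_assignment(D)   # one-to-one matching
    return D[rows, cols].sum()

# Blocks as (x0, y0, x1, y1); Manhattan distance of the corner points.
corner_manhattan = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
d = layout_distance([(0, 0, 10, 5), (0, 6, 10, 12)],
                    [(1, 1, 11, 6), (0, 7, 9, 13)], corner_manhattan)
```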
The influence of antihypertensive drugs on mineral status in hypertensive patients. | OBJECTIVES
Long-term therapy of hypertension may influence mineral status in patients. However, drug-micronutrient interactions are largely unexplored in practice. This study intended to evaluate the effect of hypotensive monotherapy on iron, zinc, and copper levels, as well as on selected biochemical parameters, in newly diagnosed patients with hypertension, and to assess the influence of diet with optimal mineral levels on the mineral balance in these subjects.
PATIENTS AND METHODS
Forty-five patients, aged 18-65 years, with diagnosed essential hypertension, beginning monotherapy with diuretics, calcium antagonists, angiotensin-converting enzyme inhibitors, or β-blockers, were enrolled. Over three months, the patients underwent monotherapy (stage II). Next, patients were randomly divided into a diet group (27 subjects) and a control group (18 subjects). In the following stage, which lasted one month, patients continued the same drug while following a diet with optimal mineral content (diet group) or continued drug use without any change in diet (control group) (stage III). Lipids, glucose, ceruloplasmin, and ferritin, along with superoxide dismutase and catalase activities, were assayed in serum. Iron, zinc, and copper concentrations in serum, erythrocytes, and urine were determined using flame atomic absorption spectrometry. Blood pressure was measured. Diet intake was monitored at each stage.
RESULTS
It was found that the serum zinc level significantly decreased following treatment, and that following the optimal-mineral diet during antihypertensive treatment markedly increased serum zinc concentration. After treatment, a significant increase in urinary zinc excretion was observed. Serum glucose levels in patients in stage II were significantly higher than at baseline. In patients in the diet group, glucose levels markedly decreased. Moreover, a negative correlation was found between serum glucose and zinc levels in patients.
CONCLUSIONS
Antihypertensive treatments should include monitoring of mineral status. It seems that the zinc balance of patients on long-term therapy with hypotensive drugs may benefit from an optimal-mineral diet. |
The New Frontier of Smart Grids | The power grid is a massive interconnected network used to deliver electricity from suppliers to consumers and has been a vital energy supply. To minimize the impact of climate change while at the same time maintaining social prosperity, smart energy must be embraced to ensure balanced economic growth and environmental sustainability. Therefore, in the last few years, the new concept of a smart grid (SG) became a critical enabler in the contemporary world and has attracted increasing attention of policy makers and engineers. This article introduces the main concepts and technological challenges of SGs and presents the authors' views on some required challenges and opportunities presented to the IEEE Industrial Electronics Society (IES) in this new and exciting frontier.
The Heterotaxy gene, GALNT11, glycosylates Notch to orchestrate cilia type and laterality | Heterotaxy is a disorder of left–right body patterning, or laterality, that is associated with major congenital heart disease. The aetiology and mechanisms underlying most cases of human heterotaxy are poorly understood. In vertebrates, laterality is initiated at the embryonic left–right organizer, where motile cilia generate leftward flow that is detected by immotile sensory cilia, which transduce flow into downstream asymmetric signals. The mechanism that specifies these two cilia types remains unknown. Here we show that the N-acetylgalactosamine-type O-glycosylation enzyme GALNT11 is crucial to such determination. We previously identified GALNT11 as a candidate disease gene in a patient with heterotaxy, and now demonstrate, in Xenopus tropicalis, that galnt11 activates Notch signalling. GALNT11 O-glycosylates human NOTCH1 peptides in vitro, thereby supporting a mechanism of Notch activation either by increasing ADAM17-mediated ectodomain shedding of the Notch receptor or by modification of specific EGF repeats. We further developed a quantitative live imaging technique for Xenopus left–right organizer cilia and show that Galnt11-mediated Notch1 signalling modulates the spatial distribution and ratio of motile and immotile cilia at the left–right organizer. galnt11 or notch1 depletion increases the ratio of motile cilia at the expense of immotile cilia and produces a laterality defect reminiscent of loss of the ciliary sensor Pkd2. By contrast, Notch overexpression decreases this ratio, mimicking the ciliopathy primary ciliary dyskinesia. Together our data demonstrate that Galnt11 modifies Notch, establishing an essential balance between motile and immotile cilia at the left–right organizer to determine laterality, and reveal a novel mechanism for human heterotaxy. |
Fatigue Life and Frequency Response of Braided Pneumatic Actuators | Although braided pneumatic actuators are capable of producing phenomenal forces compared to their weight, they have yet to see mainstream use due to their relatively short fatigue lives. By improving manufacturing techniques, actuator lifetime was extended by nearly an order of magnitude. Another concern is that their response times may be too long for control of legged robots. In addition, the frequency response of these actuators was found to be similar to that of human muscle. |
Children’s imitation of causal action sequences is influenced by statistical and pedagogical evidence | Children are ubiquitous imitators, but how do they decide which actions to imitate? One possibility is that children rationally combine multiple sources of information about which actions are necessary to cause a particular outcome. For instance, children might learn from contingencies between action sequences and outcomes across repeated demonstrations, and they might also use information about the actor's knowledge state and pedagogical intentions. We define a Bayesian model that predicts children will decide whether to imitate part or all of an action sequence based on both the pattern of statistical evidence and the demonstrator's pedagogical stance. To test this prediction, we conducted an experiment in which preschool children watched an experimenter repeatedly perform sequences of varying actions followed by an outcome. Children's imitation of sequences that produced the outcome increased, in some cases resulting in production of shorter sequences of actions that the children had never seen performed in isolation. A second experiment established that children interpret the same statistical evidence differently when it comes from a knowledgeable teacher versus a naïve demonstrator. In particular, in the pedagogical case children are more likely to "overimitate" by reproducing the entire demonstrated sequence. This behavior is consistent with our model's predictions, and suggests that children attend to both statistical and pedagogical evidence in deciding which actions to imitate, rather than obligately imitating successful action sequences. |
Comment on Gunning | Although there are elements of this paper with which I shall take issue, I want to begin by enthusiastically welcoming its publication. Rothbard and Mises are giants in the field of laissez-faire studies; any focus on their vastly underappreciated work is thus presumptively an important contribution to the literature. Second, it is very important to reconcile advocacy of the system of economic freedom, or laissez-faire capitalism, with value freedom, or Wertfreiheit, a necessary dimension of economics as a discipline. Is it possible both to favor the market system and to be a value-free economist? This is the vitally important question to which Jones addresses himself, and he is to be congratulated for bringing it to our attention. Third, this paper is welcome on the ground that Mises and Rothbard agreed with each other on virtually all areas of political economy except for this one. Prof. Jones has unearthed a man-bites-dog story wherein these two Castor-and-Pollux economists strongly diverge in their perspectives. His bringing this situation to our attention is therefore alone worth the price of admission.
Visual landmarks sharpen grid cell metric and confer context specificity to neurons of the medial entorhinal cortex | Neurons of the medial entorhinal cortex (MEC) provide spatial representations critical for navigation. In this network, the periodic firing fields of grid cells act as a metric element for position. The location of the grid firing fields depends on interactions between self-motion information, geometrical properties of the environment and nonmetric contextual cues. Here, we test whether visual information, including nonmetric contextual cues, also regulates the firing rate of MEC neurons. Removal of visual landmarks caused a profound impairment in grid cell periodicity. Moreover, the speed code of MEC neurons changed in darkness and the activity of border cells became less confined to environmental boundaries. Half of the MEC neurons changed their firing rate in darkness. Manipulations of nonmetric visual cues that left the boundaries of a 1D environment in place caused rate changes in grid cells. These findings reveal context specificity in the rate code of MEC neurons. |
The role of kinesiotaping combined with botulinum toxin to reduce plantar flexors spasticity after stroke. | PURPOSE
To evaluate the effect of kinesiotaping as an adjuvant therapy to botulinum toxin A (BTX-A) injection in lower extremity spasticity.
METHODS
This is a single-center, randomized, and double-blind study. Twenty hemiplegic patients with spastic equinus foot were enrolled into the study and randomized into 2 groups. The first group (n=10) received BTX-A injection and kinesiotaping, and the second group (n=10) received BTX-A injection and sham-taping. Clinical assessment was done before injection and at 2 weeks and 1, 3, and 6 months. Outcome measures were modified Ashworth scale (MAS), passive ankle dorsiflexion, gait velocity, and step length.
RESULTS
Improvement was recorded in both kinesiotaping and sham groups for all outcome variables. No significant difference was found between groups other than passive range of motion (ROM), which was found to have increased more in the kinesiotaping group at 2 weeks.
CONCLUSION
There is no clear benefit in adjuvant kinesiotaping application with botulinum toxin for correction of spastic equinus in stroke. |
Ultrasound in the diagnosis of peripheral neuropathy: structure meets function in the neuromuscular clinic. | Peripheral nerve ultrasound (US) has emerged as a promising technique for the diagnosis of peripheral nerve disorders. While most experience with US has been reported in the context of nerve entrapment syndromes, the role of US in the diagnosis of peripheral neuropathy (PN) has recently been explored. Distinctive US findings have been reported in patients with hereditary, immune-mediated, infectious and axonal PN; US may add complementary information to neurophysiological studies in the diagnostic work-up of PN. This review describes the characteristic US findings in PN reported to date and a classification of abnormal nerve US patterns in PN is proposed. Closer scrutiny of nerve abnormalities beyond assessment of nerve calibre may allow for more accurate diagnostic classification of PN, as well as contribute to the understanding of the intersection of structure and function in PN. |
The bubble algebra: structure of a two-colour Temperley–Lieb Algebra | We define new diagram algebras providing a sequence of multiparameter generalizations of the Temperley–Lieb algebra, suitable for the modelling of dilute lattice systems of two-dimensional statistical mechanics. These algebras give a rigorous foundation to the various 'multi-colour algebras' of Grimm, Pearce and others. We determine the generic representation theory of the simplest of these algebras, and locate the nongeneric cases (at roots of unity of the corresponding parameters). We show by this example how the method used (Martin's general procedure for diagram algebras) may be applied to a wide variety of such algebras occurring in statistical mechanics. We demonstrate how these algebras may be used to solve the Yang–Baxter equations. |
Handy broker: an intelligent product-brokering agent for m-commerce applications with user preference tracking | One of the potential applications for agent-based systems is m-commerce. Much research has been done on making such systems intelligent enough to personalize their services for users. In most systems, user-supplied keywords are used to help generate profiles for users. In this paper, an evolutionary, ontology-based product-brokering agent is designed for m-commerce applications. It uses an evaluation function to represent a user's preference instead of the usual keyword-based profile. By using genetic algorithms, the agent tracks the user's preferences for a particular product by tuning parameters inside its evaluation function. A prototype called "Handy Broker" has been implemented in Java, and the results obtained from our experiments look promising for m-commerce use.
An Intelligent System for Traffic Control in Smart Cities: A Case Study | Current traffic light systems use a fixed time delay for different traffic directions and follow a fixed cycle while switching from one signal to another. This creates unwanted congestion during peak hours, loss of man-hours, and eventually a decline in productivity. In addition, current traffic light systems encourage extortion by corrupt traffic officials, as commuters often violate traffic rules because of the insufficient time allocated to their lanes, or to avoid a long waiting period for their lanes to come up. This research is aimed at tackling the aforementioned problems by adopting a density-based traffic control approach, using Jakpa Junction, one of the busiest junctions in Delta State, Nigeria, as a case study. The developed system uses a PIC89C51 microcontroller interfaced with sensors. The signal timing changes automatically based on the traffic density at the junction, thereby avoiding unnecessary waiting time. The sensors used in this project were infra-red (IR) sensors and photodiodes placed in a line-of-sight configuration across the roads to detect traffic density. The density of vehicles is measured in three zones, i.e., low, medium, and high, based on which timings are allotted accordingly. The developed system has proven to be smart and intelligent, capable of curbing the traffic malpractices and inefficiencies that have been the bane of current traffic congestion control systems in emerging cities of the third world.
Explainable User Clustering in Short Text Streams | User clustering has been studied from different angles: behavior-based, to identify similar browsing or search patterns, and content-based, to identify shared interests. Once user clusters have been found, they can be used for recommendation and personalization. So far, content-based user clustering has mostly focused on static sets of relatively long documents. Given the dynamic nature of social media, there is a need to dynamically cluster users in the context of short text streams. User clustering in this setting is more challenging than in the case of long documents as it is difficult to capture the users' dynamic topic distributions in sparse data settings. To address this problem, we propose a dynamic user clustering topic model (or UCT for short). UCT adaptively tracks changes of each user's time-varying topic distribution based both on the short texts the user posts during a given time period and on the previously estimated distribution. To infer changes, we propose a Gibbs sampling algorithm where a set of word-pairs from each user is constructed for sampling. The clustering results are explainable and human-understandable, in contrast to many other clustering algorithms. For evaluation purposes, we work with a dataset consisting of users and tweets from each user. Experimental results demonstrate the effectiveness of our proposed clustering model compared to state-of-the-art baselines. |
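The proposed sampler operates over word pairs drawn from each user's short texts. A minimal sketch of that construction step might look as follows; the tokenization is deliberately naive and this is not the authors' code:

```python
# Sketch of the word-pair construction step described in the abstract: for
# each short text a user posts, emit all unordered word pairs, which the
# Gibbs sampler would then assign to topics.
from itertools import combinations

def word_pairs(posts):
    """Collect unordered word pairs from each short text of one user."""
    pairs = []
    for post in posts:
        tokens = sorted(set(post.lower().split()))
        pairs.extend(combinations(tokens, 2))
    return pairs

print(word_pairs(["rain in spain", "rain again"]))
# [('in', 'rain'), ('in', 'spain'), ('rain', 'spain'), ('again', 'rain')]
```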
Maximum Achievable Efficiency in Near-Field Coupled Power-Transfer Systems | Wireless power transfer is commonly realized by means of near-field inductive coupling and is critical to many existing and emerging applications in biomedical engineering. This paper presents a closed-form analytical solution for the optimum load that achieves the maximum possible power efficiency under arbitrary input impedance conditions, based on the general two-port parameters of the network. The two-port approach allows one to predict the power transfer efficiency at any frequency, for any coil geometry, and through any type of media surrounding the coils. Moreover, the results are applicable to any form of passive power transfer, such as that provided by inductive or capacitive coupling. Our results generalize several well-known special cases. The formulation allows the design of an optimized wireless power transfer link through biological media using readily available EM simulation software. The proposed method effectively decouples the design of the inductive-coupling two-port from the problems of loading and power amplifier design. Several case studies are provided for typical applications.
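One widely cited special case that such a two-port treatment generalizes is the maximum efficiency of a purely inductive link with coupling coefficient k and coil quality factors Q1 and Q2, stated here as standard background rather than as the paper's general result:

```latex
% Well-known special case: maximum efficiency of a purely inductive link
% with coupling coefficient k and coil quality factors Q_1, Q_2.
\eta_{\max} \;=\; \frac{k^2 Q_1 Q_2}{\left(1 + \sqrt{1 + k^2 Q_1 Q_2}\right)^2}
```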
Laparoscopic Placement of Peritoneal Dialysis Catheter (Same Day Dialysis) | BACKGROUND AND OBJECTIVE: Peritoneal dialysis (PD) remains a generally accepted method for the management of both chronic and acute renal failure. Despite the rapidly increasing use of continuous ambulatory peritoneal dialysis (CAPD) since its introduction, controversy persists as to the efficacy and exact role of the modality in the treatment of end-stage renal failure. The aim of this paper is to present our experience with laparoscopic placement of a peritoneal dialysis catheter and initiation of peritoneal dialysis on the same day. METHODS: Laparoscopic placement of a peritoneal dialysis catheter was performed on 11 patients (10 males and 1 female) with an average age of 35 years over a 12-month period. The procedure was done using two 5 mm abdominal trocars, and the precise position of the catheter in the pelvis was verified laparoscopically. A one- to two-liter exchange dialysis was used for every patient, and no leakage was recorded. RESULTS: The patients tolerated the procedure well, and peritoneal dialysis was started immediately. Patients were discharged after an overnight stay, and PD was carried out routinely. CONCLUSION: Laparoscopic placement of a peritoneal dialysis catheter offers the following advantages: a minimal incision, less surgical trauma, and an early start of peritoneal dialysis; no complications were recorded in this series.
A topological coverage algorithm for mobile robots | In applications such as vacuum cleaning, painting, demining and foraging, a mobile robot must cover an unknown surface. The efficiency and completeness of coverage is improved via the construction of a map of covered regions while the robot covers the surface. Existing methods generally use grid maps, which are susceptible to odometry error and may require considerable memory and computation. This paper proposes a topological map and presents a coverage algorithm in which natural landmarks are added as nodes in a partial map. The completeness of the algorithm is argued. Simulation tests show over 99% of the surface is covered; 85% for real (Khepera) robot tests. The path length is about 10% worse than optimal in simulation tests, and about 20% worse than optimal for the real robot, which are within theoretical upper bounds for approximate solutions to travelling salesman based coverage problems. The proposed algorithm generates shorter paths and covers a wider variety of environments than topological coverage based on Morse decompositions. |
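The abstract does not give implementation details, but a minimal sketch of coverage over a growing landmark graph might look as follows; `sweep_region` and `detect_new_landmarks` are hypothetical placeholders for the robot's motion and sensing primitives, not the paper's algorithm:

```python
# Illustrative sketch only: a partial topological map built from natural
# landmarks, as the abstract describes, covered with a depth-first loop.

class Landmark:
    def __init__(self, name):
        self.name = name
        self.neighbours = []   # adjacent landmark nodes in the partial map
        self.covered = False

def cover(start, detect_new_landmarks, sweep_region):
    """Depth-first coverage over the growing landmark graph."""
    stack = [start]
    while stack:
        node = stack.pop()
        if node.covered:
            continue
        sweep_region(node)                 # cover the region near this landmark
        node.covered = True
        for new in detect_new_landmarks(node):   # extend the partial map
            node.neighbours.append(new)
            new.neighbours.append(node)
        stack.extend(n for n in node.neighbours if not n.covered)
```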
THE BLACK-WHITE ACHIEVEMENT GAP IN THE FIRST COLLEGE YEAR: EVIDENCE FROM A NEW LONGITUDINAL CASE STUDY | In the United States, an achievement gap between whites and blacks persists at all levels of schooling, from elementary school to higher education. Definitive reasons and remedies for minority underperformance remain unclear. This study examines how students acquire and utilize "collegiate capital," which, in turn, relates to their academic achievement in the first year of college. Results indicate that significant black-white differences in academic achievement emerge as early as the first semester of students' first year in college. Controls for family background, parental involvement, prior ability, cultural capital acquired during the middle- and high-school years, and other factors produce a moderate reduction in the achievement gap, but over half of the gap remains unexplained. The study is part of a larger research project that involves a longitudinal study of two cohorts, the graduating classes of 2005 and 2006, at a major private university. Through the assessment of pre-college differences and extensive data collected via student surveys and academic records during the college years, the goal of the larger project is to illuminate the factors underlying race-based variations in a range of academic outcomes such as educational performance and attainment, as well as several new measures of collegiate intellectual development such as students' ecological integration, perceptions of other groups, and satisfaction with college.
Trust: An Element of Information Security | Information security is no longer restricted to technical issues; it incorporates all facets of securing the systems that produce a company's information. Among the most important information systems are those that produce financial data and information. Besides securing the technical aspects of these systems, one must consider the human aspect: people who may 'corrupt' this information for personal gain. Opportunistic behaviour has contributed to recent corporate scandals such as Enron, WorldCom, and Parmalat. However, trust and controls help curtail opportunistic behaviour; together they foster confidence in information security management. Trust- and security-based mechanisms are classified as safeguard protective measures and together allow stakeholders to have confidence in a company's published financial statements. This paper discusses the concept of trust and predictability as an element of information security and of restoring stakeholder confidence. It also argues that assurances build trust and that controls safeguard trust.
Textual and Visual Content-Based Anti-Phishing: A Bayesian Approach | A novel framework using a Bayesian approach for content-based phishing web page detection is presented. Our model takes both textual and visual content into account to measure the similarity between the protected web page and suspicious web pages. A text classifier, an image classifier, and an algorithm fusing the results from the two classifiers are introduced. An outstanding feature of this paper is the exploration of a Bayesian model to estimate the matching threshold, which the classifier requires to determine the class of a web page, i.e., whether or not it is phishing. In the text classifier, the naive Bayes rule is used to calculate the probability that a web page is phishing. In the image classifier, the earth mover's distance is employed to measure visual similarity, and our Bayesian model is designed to determine the threshold. In the data fusion algorithm, Bayes theory is used to synthesize the classification results from textual and visual content. The effectiveness of the proposed approach was examined on a large-scale dataset collected from real phishing cases. Experimental results demonstrate that the text classifier and the image classifier deliver promising results, that the fusion algorithm outperforms either individual classifier, and that the model can be adapted to different phishing cases.
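To make the fusion step concrete, here is a minimal sketch under a naive conditional-independence assumption; the paper's exact fusion rule may differ, and this is an illustration of Bayes-theorem-based combination rather than the authors' algorithm:

```python
# Sketch: fuse the text and image classifiers' posteriors into one phishing
# probability, assuming the two evidence sources are conditionally independent.
def fuse(p_text, p_image, prior=0.5):
    """p_text, p_image: each classifier's P(phishing | its own evidence)."""
    prior_odds = prior / (1 - prior)
    # Each posterior already contains the prior once, so divide it out once
    # when multiplying the two posterior odds together.
    odds = (p_text / (1 - p_text)) * (p_image / (1 - p_image)) / prior_odds
    return odds / (1 + odds)

print(round(fuse(0.9, 0.7), 3))  # 0.955: agreeing classifiers reinforce
```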
The planum temporale as a computational hub | It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it. |
Late Orogenic doming in the eastern Betic Cordilleras: Final exhumation of the Nevado-Filabride complex and its relation to basin genesis | The geometry, timing, and kinematics of late orogenic extension in the Betic Cordilleras pose the problem of a decoupling of upper crustal and lower crustal deformation regimes. Perpendicular directions of extension in metamorphic domes and nearby sedimentary basins remain unexplained. This paper places kinematic constraints on the final exhumation of the Nevado-Filabride complex, focusing on the formation of metamorphic domes and their relations with the adjacent basins. Structural fabrics and kinematic indicators below the main shear zones, as well as their relations with published metamorphic P-T conditions and geochronological data, were studied. Our approach describes (1) a consistent top-to-the-west shear parallel to dome axes during D2 (i.e., during decompression), with distributed ductile flow and the onset of strain localization along major shear zones; (2) further strain localization along the major shear zones under greenschist facies conditions during D3, leading to the formation of S-C′ mylonites accompanied by a strong reduction in rock thickness; (3) a divergence of shear directions on either limb of the domes during D3, marking the appearance of the dome geometry; and (4) a local evolution toward N-S brittle extension (D4) in the upper plate and the formation of sedimentary basins. Continuous ductile-to-brittle top-to-the-west shear is compatible with the slab-retreat hypothesis from the Miocene onward; the formation of domes, which adds gravitational forces driving the final stages of exhumation, is thus characterized by important kinematic changes that are necessary to explain the coeval N-S-opening basins. Later, from the upper Tortonian, a contractional event (D5) amplified the earlier domal structures, forming the present north-vergent folds.
A survey on sentiment analysis of scientific citations | Sentiment analysis of scientific citations has received much attention in recent years because of the increased availability of scientific publications. Scholarly databases are valuable sources of publications and citation information, where researchers can publish their ideas and results. Sentiment analysis of scientific citations aims to analyze authors' sentiments within scientific citations. During the last decade, several review papers have been published in the field of sentiment analysis; however, as far as we know, no one has carried out an in-depth survey of the specific area of sentiment analysis of scientific citations. This paper presents a comprehensive survey of sentiment analysis of scientific citations. In this review, the process of scientific citation sentiment analysis is introduced, and recently proposed methods, together with the main challenges, are presented, analyzed, and discussed. Further, we cover related fields, such as citation function classification and citation recommendation, that have recently gained considerable attention. Our contributions include identifying the most important challenges as well as analyzing and classifying recent methods used in scientific citation sentiment analysis. We also describe the typical processing pipeline, which includes citation context extraction, public data sources, and feature selection. We found that most papers use classical machine learning methods; however, given the performance limitations and manual feature selection inherent in such methods, we believe that hybrid and deep learning methods may handle the problems of scientific citation sentiment analysis more efficiently and reliably in the future.
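As a concrete illustration of the classical machine learning approach the survey reports as dominant, a minimal citation-context sentiment classifier in scikit-learn might look like this; the three labelled contexts are invented solely for illustration:

```python
# Minimal sketch of the classical pipeline: TF-IDF features over citation
# contexts plus a linear classifier. Toy data, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

contexts = [
    "Their method significantly outperforms prior baselines [12].",
    "Unfortunately, the approach of [7] fails on noisy data.",
    "We follow the experimental setup of [3].",
]
labels = ["positive", "negative", "neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, labels)
print(clf.predict(["The technique in [5] gives excellent results."]))
```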
Postural support by a standing aid alleviating subjective discomfort among cooks in a forward-bent posture during food preparation. | In this study, we evaluated the effects of a standing aid, developed to alleviate the workload on the low back in a forward-bent posture, on subjective discomfort among cooks during food preparation. Twelve female cooks who worked in a kitchen in a nursing home were asked to prepare food in two working postures: (a) supported by the standing aid (Aid) and (b) without the aid (No aid). They were instructed to evaluate discomfort in 13 body regions during food preparation and the degree of fatigue at the end of the day, and to enter their ratings after the end of the workday. Since a significant correlation was observed between body height and the reduction of discomfort achieved with the standing aid, the cooks were divided into two groups according to height, and the ratings were analyzed within each group. Among the tall cooks, subjective discomfort in the low back and the front and back of the thighs was significantly lower in the Aid posture than in the No aid posture. In the short cooks, however, these values tended to increase in the Aid posture compared with the No aid posture. The results suggest that the standing aid was effective in alleviating the tall cooks' low-back workload in a forward-bent posture.