title | abstract |
---|---|
Effect of insulin feedback on closed-loop glucose control: a crossover study. | BACKGROUND
Closed-loop (CL) insulin delivery systems utilizing proportional-integral-derivative (PID) controllers have demonstrated susceptibility to late postprandial hypoglycemia because of delays between insulin delivery and blood glucose (BG) response. An insulin feedback (IFB) modification to the PID algorithm has been introduced to mitigate this risk. We examined the effect of IFB on CL BG control.
METHODS
Using the Medtronic ePID CL system, four subjects were studied for 24 h on PID control and 24 h during a separate admission with the IFB modification (PID + IFB). Target glucose was 120 mg/dl; meals were served at 8:00 AM, 1:00 PM, and 6:00 PM and were identical for both admissions. No premeal manual boluses were given. Reference BG excursions, defined as incremental glucose rise from premeal to peak, and postprandial BG area under the curve (AUC; 0-5 h) were compared. Results are reported as mean ± standard deviation.
RESULTS
The PID + IFB control resulted in higher mean BG levels compared with PID alone (153 ± 54 versus 133 ± 56 mg/dl; p < .0001). Postmeal BG excursions (114 ± 28 versus 114 ± 47 mg/dl) and AUCs (285 ± 102 versus 255 ± 129 mg/dl/h) were similar under both conditions. Total insulin delivery averaged 57 ± 20 U with PID versus 45 ± 13 U with PID + IFB (p = .18). Notably, eight hypoglycemic events (BG < 60 mg/dl) occurred during PID control versus none during PID + IFB.
CONCLUSIONS
Addition of IFB to the PID controller markedly reduced the occurrence of hypoglycemia without increasing meal-related glucose excursions. Higher average BG levels may be attributable to differences in the determination of system gain (Kp) in this study. The prevention of postprandial hypoglycemia suggests that the PID + IFB algorithm may allow for lower target glucose selection and improved overall glycemic control. |
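As a hedged illustration of the control idea described in this abstract, the sketch below implements a toy discrete PID law with an insulin-feedback term subtracted in proportion to an estimated plasma insulin level. The gains, the feedback coefficient `gamma`, the clearance constant, and the units are invented placeholders, not the Medtronic ePID parameters.

```python
# Hedged sketch of a PID controller with insulin feedback (IFB).
# All gains and kinetics below are illustrative, not the ePID values.

def pid_ifb_step(glucose, target, state, kp=0.05, ti=90.0, td=60.0,
                 gamma=0.5, k_clear=0.02, dt=5.0):
    """One control step (dt in minutes). Returns an insulin delivery rate (U/h)."""
    error = glucose - target
    state["integral"] += error * dt / ti
    derivative = (glucose - state["prev_glucose"]) * td / dt
    state["prev_glucose"] = glucose

    pid_output = kp * (error + state["integral"] + derivative)

    # Insulin feedback: subtract a term proportional to estimated plasma insulin,
    # which rises with recent delivery and decays with first-order clearance.
    delivery = max(pid_output - gamma * state["plasma_insulin"], 0.0)
    state["plasma_insulin"] += (delivery - k_clear * state["plasma_insulin"]) * dt
    return delivery

state = {"integral": 0.0, "prev_glucose": 120.0, "plasma_insulin": 0.0}
for bg in [120, 150, 190, 230, 210, 180, 150, 130]:
    print(f"BG {bg:4d} mg/dl -> delivery {pid_ifb_step(bg, 120, state):.3f} U/h")
```

The key point the abstract makes is visible in the feedback term: as estimated plasma insulin accumulates, commanded delivery is attenuated, which is what reduces late postprandial hypoglycemia.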
Black-box data-efficient policy search for robotics | The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot). |
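A minimal sketch of the core loop this abstract describes, under strong simplifying assumptions: a stub stands in for a learned probabilistic dynamics model, Monte Carlo rollouts estimate the expected return, and plain random search stands in for the parallel black-box optimizer (CMA-ES-like) used by Black-DROPS. All names and dynamics below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_dynamics(state, action):
    # Stand-in for a learned probabilistic model: returns a next state drawn
    # from a predictive distribution (mean dynamics plus model uncertainty).
    mean_next = state + 0.1 * action - 0.05 * state
    return mean_next + rng.normal(0.0, 0.05, size=state.shape)

def rollout_return(policy_params, horizon=30):
    state = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(horizon):
        action = np.tanh(policy_params @ state)   # black-box policy
        state = sampled_dynamics(state, action)
        total += -np.sum(state ** 2)              # black-box reward
    return total

def expected_return(policy_params, n_rollouts=20):
    # Monte Carlo average over model uncertainty: the quantity a Black-DROPS-style
    # method hands to a parallel black-box optimizer such as CMA-ES.
    return np.mean([rollout_return(policy_params) for _ in range(n_rollouts)])

# Random search as a minimal stand-in for the black-box optimizer.
best_params, best_score = None, -np.inf
for _ in range(200):
    candidate = rng.normal(0.0, 1.0, size=2)
    score = expected_return(candidate)
    if score > best_score:
        best_params, best_score = candidate, score
print("best expected return:", round(best_score, 3))
```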
MMX-Accelerated Real-Time Hand Tracking System | We describe a system for tracking real-time hand gestures captured by a cheap web camera and a standard Intel Pentium-based personal computer with no specialized image processing hardware. To attain the necessary processing speed, the system exploits the Multi-Media Instruction set (MMX) extensions of the Intel Pentium chip family through software, including the Microsoft DirectX SDK and the Intel Image Processing and Open Source Computer Vision (OpenCV) libraries. The system is based on the Camshift algorithm (from OpenCV) and a compound constant-acceleration Kalman filter. Tracking is robust and efficient, and can follow hand motion at 30 fps. Keywords: Real-Time, Camshift algorithm, Kalman filter, moment, HSV color, Gesture Recognition |
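A minimal sketch of a Camshift-plus-Kalman tracking loop using OpenCV's Python bindings rather than the C libraries and MMX path described above; the webcam index, the initial hand ROI, and a constant-velocity (rather than constant-acceleration) Kalman model are simplifying assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumed webcam index
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera available")

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (200, 150, 80, 80)              # assumed initial hand ROI (x, y, w, h)

# Hue histogram of the initial ROI, back-projected onto each frame for Camshift.
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Constant-velocity Kalman filter: state = (x, y, dx, dy), measurement = (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)

    # Camshift locates the hand; the Kalman filter smooths its center.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    cx, cy = rot_rect[0]
    kf.predict()
    smoothed = kf.correct(np.array([[cx], [cy]], np.float32))
    cv2.circle(frame, (int(smoothed[0, 0]), int(smoothed[1, 0])), 5, (0, 255, 0), -1)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break
cap.release()
```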
Unsupervised Representation Adversarial Learning Network: from Reconstruction to Generation | A good representation for arbitrarily complicated data should have the capability of semantic generation, clustering, and reconstruction. Previous research has achieved impressive performance on each of these individually. This paper aims at learning a disentangled representation effective for all three in an unsupervised way. To achieve the three tasks together, we learn the forward and inverse mappings between data and representation on the basis of a symmetric adversarial process. In theory, we minimize the upper bounds of the two conditional entropy losses between the latent variables and the observations together to achieve cycle consistency. The newly proposed RepGAN is tested on the MNIST, fashionMNIST, CelebA, and SVHN datasets to perform unsupervised or semi-supervised classification, generation, and reconstruction tasks. The results demonstrate that RepGAN is able to learn a useful and competitive representation. To the authors' knowledge, our work is the first to achieve both a high unsupervised classification accuracy and a low reconstruction error on MNIST. |
Designing inflatable structures | We propose an interactive, optimization-in-the-loop tool for designing inflatable structures. Given a target shape, the user draws a network of seams defining desired segment boundaries in 3D. Our method computes optimally-shaped flat panels for the segments, such that the inflated structure is as close as possible to the target while satisfying the desired seam positions. Our approach is underpinned by physics-based pattern optimization, accurate coarse-scale simulation using tension field theory, and a specialized constraint-optimization method. Our system is fast enough to warrant interactive exploration of different seam layouts, including internal connections, and their effects on the inflated shape. We demonstrate the resulting design process on a varied set of simulation examples, some of which we have fabricated, demonstrating excellent agreement with the design intent. |
The brainweb: Phase synchronization and large-scale integration | The emergence of a unified cognitive moment relies on the coordination of scattered mosaics of functionally specialized brain regions. Here we review the mechanisms of large-scale integration that counterbalance the distributed anatomical and functional organization of brain activity to enable the emergence of coherent behaviour and cognition. Although the mechanisms involved in large-scale integration are still largely unknown, we argue that the most plausible candidate is the formation of dynamic links mediated by synchrony over multiple frequency bands. |
Map merging of Multi-Robot SLAM using Reinforcement Learning | Using 'Simultaneous Localization and Mapping' (SLAM), mobile robots can become truly autonomous in the exploration of their environment. However, once these environments become too large, Multi-Robot SLAM becomes a requirement. This paper outlines how a mobile robot should decide when best to merge its maps with another robot's upon rendezvous, as opposed to doing so immediately. This decision is based on the current status of the mapping particle filters and the current status of the environment. Using Reinforcement Learning, a model can be established and then trained to determine a policy capable of deciding when best to merge. This allows the robot to incur less error during a merge compared with merging immediately. The policy is trained and validated using simulated mobile robot datasets. |
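As a hedged illustration of the kind of policy this abstract describes (not the authors' implementation), the sketch below trains a tabular Q-learning agent whose state is a coarse bin of particle-filter confidence and whose actions are "merge now" or "wait"; the toy environment, reward, and binning are invented placeholders.

```python
import random

random.seed(1)
ACTIONS = ("merge_now", "wait")
N_BINS = 5                      # binned particle-filter confidence (0 = poor, 4 = good)
q_table = {(s, a): 0.0 for s in range(N_BINS) for a in ACTIONS}

def simulate_step(state, action):
    """Toy environment: merging with a confident filter yields low map error."""
    if action == "merge_now":
        reward = -(N_BINS - 1 - state)          # less error when confidence is high
        return reward, None                     # episode ends at the merge
    # Waiting usually improves the filter a little, at a small exploration cost.
    next_state = min(state + (random.random() < 0.6), N_BINS - 1)
    return -0.2, next_state

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for _ in range(5000):
    state = random.randrange(N_BINS)
    while state is not None:
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: q_table[(state, a)]))
        reward, next_state = simulate_step(state, action)
        target = reward if next_state is None else reward + gamma * max(
            q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (target - q_table[(state, action)])
        state = next_state

for s in range(N_BINS):
    print(f"confidence bin {s}: {max(ACTIONS, key=lambda a: q_table[(s, a)])}")
```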
Structural equations modeling : Fit Indices , sample size , and advanced topics | This article is the second of two parts intended to serve as a primer for structural equations models for the behavioral researcher. The first article introduced the basics: the measurement model, the structural model, and the combined, full structural equations model. In this second article, advanced issues are addressed, including fit indices and sample size, moderators, longitudinal data, mediation, and so forth. © 2009 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. |
Solving the Credit Assignment Problem : The interaction of Explicit and Implicit learning with Internal and External State Information | In most problem-solving activities, feedback is received at the end of an action sequence. This creates a credit-assignment problem where the learner must associate the feedback with earlier actions, and the interdependencies of actions require the learner to either remember past choices of actions (internal state information) or rely on external cues in the environment (external state information) to select the right actions. We investigated the nature of explicit and implicit learning processes in the credit-assignment problem using a probabilistic sequential choice task with and without external state information. We found that when explicit memory encoding was dominant, subjects were faster to select the better option in their first choices than in the last choices; when implicit reinforcement learning was dominant subjects were faster to select the better option in their last choices than in their first choices. However, implicit reinforcement learning was only successful when distinct external state information was available. The results suggest the nature of learning in credit assignment: an explicit memory encoding process that keeps track of internal state information and a reinforcement-learning process that uses state information to propagate reinforcement backwards to previous choices. However, the implicit reinforcement learning process is effective only when the valences can be attributed to the appropriate states in the system – either internally generated states in the cognitive system or externally presented stimuli in the environment. |
Enhanced Fraud Miner: Credit Card Fraud Detection using Clustering Data Mining Techniques | This paper aims to build a unified pattern per customer that represents not only normal behavior but also fraud patterns derived from previously confirmed fraudulent transactions, which facilitates studying fraudster behavior. An enhancement to the Fraud Miner algorithm is proposed: the LINGO clustering data mining algorithm replaces the Apriori algorithm used in Fraud Miner for frequent pattern creation, making it easier to summarize a customer's previous behavior across both legal and fraudulent transactions. Because fraudsters tend to imitate customer behavior, this approach improves the chance of detecting fraud: rather than studying fraudster behavior directly, the customer's frequent behavior is identified from his legal transactions and from transactions previously confirmed as fraud. A performance comparison with other algorithms has been carried out. |
MATSUMOTO AND EKMAN'S JAPANESE AND CAUCASIAN FACIAL EXPRESSIONS OF EMOTION (JACFEE): RELIABILITY DATA AND CROSS-NATIONAL DIFFERENCES | Substantial research has documented the universality of several emotional expressions. However, recent findings have demonstrated cultural differences in level of recognition and ratings of intensity. When testing cultural differences, stimulus sets must meet certain requirements. Matsumoto and Ekman's Japanese and Caucasian Facial Expressions of Emotion (JACFEE) is the only set that meets these requirements. The purpose of this study was to obtain judgment reliability data on the JACFEE, and to test for possible cross-national differences in judgments as well. Subjects from Hungary, Japan, Poland, Sumatra, the United States, and Vietnam viewed the complete JACFEE photo set, judged which emotions were portrayed in the photos, and rated the intensity of those expressions. Results revealed high agreement across countries in identifying the emotions portrayed in the photos, demonstrating the reliability of the JACFEE. Despite high agreement, cross-national differences were found in the exact level of agreement for photos of anger, contempt, disgust, fear, sadness, and surprise. Cross-national differences were also found in the level of intensity attributed to the photos. No systematic variation due to either preceding emotion or presentation order of the JACFEE was found. Also, we found that grouping the countries into a Western/Non-Western dichotomy was not justified according to the data. Instead, the cross-national differences are discussed in terms of possible sociopsychological variables that influence emotion judgments. Cross-cultural research has documented high agreement in judgments of facial expressions of emotion in over 30 different cultures (Ekman, 1994), including preliterate cultures (Ekman, Sorensen, & Friesen, 1969; Ekman & Friesen, 1971). Recent research, however, has reported cultural differences in judgment as well. Matsumoto (1989, 1992a), for example, found that American and Japanese subjects differed in their rates of recognition. Differences have also been found in ratings of intensity (Ekman et al., 1987). Examining cultural differences requires a different methodology than studying similarities. Matsumoto (1992a) outlined such requirements: (1) cultures must view the same expressions; (2) the facial expressions must meet criteria for validly and reliably portraying the universal emotions; (3) each poser must appear only once; (4) expressions must include posers of more than one race. Matsumoto and Ekman's (1988) Japanese and Caucasian Facial Expressions of Emotion (JACFEE) was designed to meet these requirements.
JACFEE was developed by photographing over one hundred posers who voluntarily moved muscles that correspond to the universal expressions (Ekman & Friesen, 1975, 1986). From the thousands of photographs taken, a small pool of photos was coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). A final pool of photos was then selected to ensure that each poser contributed only one photo to the final set, which comprises 56 photos, including eight photos each of anger, contempt, disgust, fear, happiness, sadness, and surprise. Four photos of each emotion depict posers of either Japanese or Caucasian descent (2 males, 2 females). Two published studies have reported judgment data on the JACFEE, but only with American and Japanese subjects. Matsumoto and Ekman (1989), for example, asked their subjects to make scalar ratings (0-8) on seven emotion dimensions for each photo. The judgments of the Americans and Japanese were similar with respect to the strongest emotion depicted in the photos and the relative intensity among the photographs. Americans, however, gave higher absolute intensity ratings on photos of happiness, anger, sadness, and surprise. In the second study (Matsumoto, 1992a), high agreement was found in the recognition judgments, but the level of recognition differed for anger, disgust, fear, and sadness. While data from these and other studies seem to indicate the dual existence of universal and culture-specific aspects of emotion judgment, the methodology used in many previous studies has recently been questioned on several grounds, including the previewing of slides, judgment context, presentation order, preselection of slides, the use of posed expressions, and type of response format (Russell, 1994; see Ekman, 1994, and Izard, 1994, for reply). Two of these, judgment context and presentation order, are especially germane to the present study and are addressed here. |
Habitat Loss and Extinction in the Hotspots of Biodiversity | Nearly half the world's vascular plant species and one-third of terrestrial vertebrates are endemic to 25 "hotspots" of biodiversity, each of which has at least 1500 endemic plant species. None of these hotspots have more than one-third of their pristine habitat remaining. Historically, they covered 12% of the land's surface, but today their intact habitat covers only 1.4% of the land. As a result of this habitat loss, we expect many of the hotspot endemics to have either become extinct or—because much of the habitat loss is recent—to be threatened with extinction. We used World Conservation Union [IUCN] Red Lists to test this expectation. Overall, between one-half and two-thirds of all threatened plants and 57% of all threatened terrestrial vertebrates are hotspot endemics. For birds and mammals, in general, predictions of extinction in the hotspots based on habitat loss match numbers of species independently judged extinct or threatened. In two classes of hotspots the match is not as close. On oceanic islands, habitat loss underestimates extinction because introduced species have driven extinctions beyond those caused by habitat loss on these islands. In large hotspots, conversely, habitat loss overestimates extinction, suggesting scale dependence (this effect is also apparent for plants). For reptiles, amphibians, and plants, many fewer hotspot endemics are considered threatened or extinct than we would expect based on habitat loss. This mismatch is small in temperate hotspots, however, suggesting that many threatened endemic species in the poorly known tropical hotspots have yet to be included on the IUCN Red Lists. We then asked in which hotspots the consequences of further habitat loss (either absolute or given current rates of deforestation) would be most serious. Our results suggest that the Eastern Arc and Coastal Forests of Tanzania-Kenya, Philippines, and Polynesia-Micronesia can least afford to lose more habitat and that, if current deforestation rates continue, the Caribbean, Tropical Andes, Philippines, Mesoamerica, Sundaland, Indo-Burma, Madagascar, and Chocó–Darién–Western Ecuador will lose the most species in the near future. Without urgent conservation intervention, we face mass extinctions in the hotspots. Introduction: Four general principles combine to present one of the greatest challenges for conservation. First, most species have small range sizes relative to the mean range size, increasing their probability of extinction by chance alone (Gaston 1994). Second, species with small ranges also tend to be scarce within those ranges (Brown 1984), so their probability of extinction is increased on two counts. Third, the consistent mechanisms underlying the evolution of small range (Fjeldså & Lovett 1997) mean that most such species co-occur. Stattersfield et al. (1998) demonstrated this convincingly for birds. Finally, most of these areas of co-occurrence of species with small ranges—call them centers of endemism—are disproportionately threatened by human activity (Cincotta et al. 2000). Myers et al. (2000) quantified this. Twenty-five biogeographically distinctive "hotspots" each have 0.5% or more of the world's flora completely restricted to their boundaries and have already lost 70% or more of their original geographic extent. The hotspots are therefore both irreplaceable and vulnerable (Margules & Pressey 2000). In combination, these hotspots hold the entire ranges of 44% of the world's plants and 35% of terrestrial vertebrates in just 1.4% of the land area. Nature has put many of her eggs in a few baskets, and we are in danger of dropping even these. Where are these hotspots? We plotted the distribution and relative densities of endemics across the 25 hotspots as presented by Myers et al. (2000) (Fig. 1). We cannot simply map numbers of endemic species, because the hotspots (and countries) are different sizes and larger areas tend to hold more endemics. Furthermore, this relationship is not linear; it is probably a power function (Harte & Kinzig 1997). Therefore, we factored out area by plotting numbers of endemics against original hotspot area, fitting a power function and taking residuals about this line (Balmford & Long 1995). These residuals represent the relative density of endemics, which we plotted onto the map. The residuals are qualitatively similar if based on remaining hotspot area. Most (15) of the hotspots hold predominantly tropical rainforest; five hold Mediterranean-type vegetation, three temperate forest, one tropical dry forest (Brazil's Cerrado), and one semidesert (Succulent Karoo). Given these enormous concentrations of small-ranged species in places where most natural habitat has already been cleared, we expect that many of the hotspot endemics will have become extinct (Pimm et al. 1995). Habitat loss accurately predicts species loss in regions where the habitat loss occurred a long time ago (Dial 1994; Pimm & Askins 1995). However, there is a time lag between habitat loss and species loss (Brooks et al. 1999a). For well-known taxa, one can detect this time lag in the form of population declines toward extinction. Such information is compiled in the World Conservation Union (IUCN) Red Data books (Baillie & Groombridge 1996; Walter & Gillett 1998). The very act of listing a taxon as threatened should stimulate conservation measures to preempt its decline to extinction (Collar 1996). Nevertheless, for both birds and mammals, the proportion of deforestation in both insular Southeast Asia (Magsalay et al. 1995; Brooks et al. 1997, 1999b) and Brazil's Atlantic forest (Brooks & Balmford 1996; Brooks et al. 1999c; Grelle et al. 1999) consistently pre… |
Leukemia Detection using Digital Image Processing Techniques | This paper discusses methods for the detection of leukemia. Various image processing techniques are used for the identification of red blood cells and immature white cells. Diseases such as anemia, leukemia, malaria, and vitamin B12 deficiency can be diagnosed accordingly. The objective is to detect and count leukemia-affected cells. Based on the detection of immature blast cells, leukemia can be identified and classified as either chronic or acute. To detect immature cells, a number of methods are used, such as histogram equalization, linear contrast stretching, and morphological techniques like area opening, area closing, erosion, and dilation. The watershed transform, K-means, histogram equalization with linear contrast stretching, and shape-based features achieve accuracies of 72.2%, 72%, 73.7%, and 97.8%, respectively. |
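A compact sketch of the preprocessing-and-counting steps listed above (histogram equalization, contrast stretching, morphological opening/closing, and a simple component count) using OpenCV; the image path, thresholds, and kernel sizes are placeholders, and this is not the authors' exact pipeline.

```python
import cv2
import numpy as np

img = cv2.imread("blood_smear.png", cv2.IMREAD_GRAYSCALE)   # placeholder path

# Contrast enhancement: histogram equalization followed by linear contrast stretching.
equalized = cv2.equalizeHist(img)
stretched = cv2.normalize(equalized, None, 0, 255, cv2.NORM_MINMAX)

# Segment dark-stained (candidate blast) cells with Otsu thresholding.
_, mask = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening and closing to remove noise and fill small holes.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Count connected components above a minimum area as candidate cells.
n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
cells = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 100]
print(f"candidate cells counted: {len(cells)}")
```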
Soft-tissue lesions: when can we exclude sarcoma? | OBJECTIVE
A wide spectrum of space-occupying soft-tissue lesions may be discovered on MRI studies, either as incidental findings or as palpable or symptomatic masses. Characterization of a lesion as benign or indeterminate is the most important step toward optimal treatment and avoidance of unnecessary biopsy or surgical intervention.
CONCLUSION
The systematic MRI interpretation approach presented in this article enables the identification of cases in which sarcoma can be excluded. |
CRM and customer-centric knowledge management: an empirical research | Current competitive challenges induced by globalization and advances in information technology have forced companies to focus on managing customer relationships, and in particular customer satisfaction, in order to efficiently maximize revenues. This paper reports exploratory research based on a mail survey addressed to the largest 1,000 Greek organizations. The objectives of the research were: to investigate the extent of the usage of customer- and market-related knowledge management (KM) instruments and customer relationship management (CRM) systems by Greek organizations and their relationship with demographic and organizational variables; to investigate whether enterprises systematically carry out customer satisfaction and complaining behavior research; and to examine the impact of the type of the information system used and managers' attitudes towards customer KM practices. In addition, a conceptual model of CRM development stages is proposed. The findings of the survey show that about half of the organizations of the sample do not adopt any CRM philosophy. The remaining organizations employ instruments to conduct customer satisfaction and other customer-related research. However, according to the proposed model, they are positioned in the first, preliminary CRM development stage. The findings also suggest that managers hold positive attitudes towards CRM and that there is no significant relationship between the type of the transactional information system used and the extent to which customer satisfaction research is performed by the organizations. The paper concludes by discussing the survey findings and proposing future research. |
Longitudinal treatment mediation of traditional cognitive behavioral therapy and acceptance and commitment therapy for anxiety disorders. | OBJECTIVE
To assess the relationship between session-by-session putative mediators and treatment outcomes in traditional cognitive behavioral therapy (CBT) and acceptance and commitment therapy (ACT) for mixed anxiety disorders.
METHOD
Session-by-session changes in anxiety sensitivity and cognitive defusion were assessed in 67 adult outpatients randomized to CBT (n = 35) or ACT (n = 32) for a DSM-IV anxiety disorder.
RESULTS
Multilevel mediation analyses revealed significant changes in the proposed mediators during both treatments (p < .001, d = .90-1.93), with ACT showing borderline greater improvements than CBT in cognitive defusion (p = .05, d = .82). Anxiety sensitivity and cognitive defusion both significantly mediated post-treatment worry; cognitive defusion more strongly predicted worry reductions in CBT than in ACT. In addition, cognitive defusion significantly mediated quality of life, behavioral avoidance, and (secondary) depression outcomes across both CBT and ACT (p < .05, R² change = .06-.13), whereas anxiety sensitivity did not significantly mediate other outcomes.
CONCLUSIONS
Cognitive defusion represents an important source of therapeutic change across both CBT and ACT. The data offered little evidence for substantially distinct treatment-related mediation pathways. |
Vertical Slit Transistor Based Integrated Circuits (VeSTICs): Overview and Highlights of Feasibility Study | In this note, the concept of Vertical Slit Transistor Based Integrated Circuits (VeSTICs) is introduced and its feasibility discussed. The VeSTICs paradigm has been conceived in response to the rapidly growing complexity and cost of the traditional bulk-CMOS-based approach and to the challenges posed by the nano-scale era. This paradigm is based on strictly regular layouts. The central element of the proposed vision is a new junctionless Vertical Slit Field Effect Transistor (JL VeSFET) with twin independent gates. It is expected that VeSTICs will enable ICs that are much denser and much easier to design, test, and manufacture, as well as being 3D-extendable and OPC-free. |
SUPPLY CHAIN ANALYSIS: SPREADSHEET OR SIMULATION? | In the last few decades, a lot of company effort has been spent in the optimization of internal efficiency, aiming at cost reduction and competitiveness. Especially over the last decade, there has been a consensus that not only the company, but the whole supply chain in which it fits, is responsible for the success or failure of any business. Therefore, supply chain analysis tools and methodologies have become more and more important. From all tools, spreadsheets are by far the most widely used technique for scenario analysis. Other techniques such as optimization, simulation or both (simulation-optimization) are alternatives for in-depth analysis. While spreadsheet-based analysis is mainly a static-deterministic approach, simulation is a dynamic-stochastic tool. The purpose of this paper is to compare spreadsheet-based and simulation-based tools showing the impacts of using these two different approaches on the analysis of a real (yet simplified) supply chain case study. |
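To make the static-deterministic versus dynamic-stochastic contrast concrete, here is a toy, hypothetical inventory example: a spreadsheet-style calculation that assumes average demand every day versus a Monte Carlo simulation of the same reorder policy with random daily demand. All quantities are invented for illustration.

```python
import random

random.seed(42)
MEAN_DEMAND, REORDER_POINT, ORDER_QTY, DAYS = 20, 25, 100, 365

# Spreadsheet-style (static, deterministic): demand is always its average,
# so this policy appears to carry no stockout risk at all.
def deterministic_stockout_days():
    stock, stockouts = ORDER_QTY, 0
    for _ in range(DAYS):
        stock -= MEAN_DEMAND
        if stock < 0:
            stockouts, stock = stockouts + 1, 0
        if stock <= REORDER_POINT:
            stock += ORDER_QTY          # instant replenishment for simplicity
    return stockouts

# Simulation-style (dynamic, stochastic): daily demand varies, and the very
# same policy now shows a non-zero stockout risk.
def simulated_stockout_days():
    stock, stockouts = ORDER_QTY, 0
    for _ in range(DAYS):
        stock -= random.randint(5, 35)  # uncertain daily demand
        if stock < 0:
            stockouts, stock = stockouts + 1, 0
        if stock <= REORDER_POINT:
            stock += ORDER_QTY
    return stockouts

print("deterministic stockout days:", deterministic_stockout_days())
print("simulated stockout days (one run):", simulated_stockout_days())
```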
A nutrition and conditioning intervention for natural bodybuilding contest preparation: case study | Bodybuilding competitions are becoming increasingly popular. Competitors are judged on their aesthetic appearance and usually exhibit a high level of muscularity and symmetry and low levels of body fat. Commonly used techniques to improve physique during the preparation phase before competitions include dehydration, periods of prolonged fasting, severe caloric restriction, excessive cardiovascular exercise and inappropriate use of diuretics and anabolic steroids. In contrast, this case study documents a structured nutrition and conditioning intervention followed by a 21-year-old amateur bodybuilding competitor to improve body composition, resting and exercise fat oxidation, and muscular strength that does not involve use of any of the above-mentioned methods. Over a 14-week period, the Athlete was provided with a scientifically designed nutrition and conditioning plan that encouraged him to (i) consume a variety of foods; (ii) not neglect any macronutrient groups; (iii) exercise regularly but not excessively; and (iv) incorporate rest days into his conditioning regime. This strategy resulted in a body mass loss of 11.7 kg, corresponding to a 6.7 kg reduction in fat mass and a 5.0 kg reduction in fat-free mass. Resting metabolic rate decreased from 1993 kcal/d to 1814 kcal/d, whereas resting fat oxidation increased from 0.04 g/min to 0.06 g/min. His capacity to oxidize fat during exercise increased more than two-fold from 0.24 g/min to 0.59 g/min, while there was a near 3-fold increase in the corresponding exercise intensity that elicited the maximal rate of fat oxidation, from 21% V̇O2max to 60% V̇O2max. Hamstring concentric peak torque decreased (1.7 to 1.5 Nm/kg), whereas hamstring eccentric (2.0 Nm/kg to 2.9 Nm/kg), quadriceps concentric (3.4 Nm/kg to 3.7 Nm/kg) and quadriceps eccentric (4.9 Nm/kg to 5.7 Nm/kg) peak torque all increased. Psychological mood-state (BRUMS scale) was not negatively influenced by the intervention and all values relating to the Athlete's mood-state remained below average over the course of the study. This intervention shows that a structured and scientifically supported nutrition strategy can be implemented to improve parameters relevant to bodybuilding competition and importantly the health of competitors, therefore questioning the conventional practices of bodybuilding preparation. |
Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering | Answer selection and knowledge base question answering (KBQA) are two important tasks of question answering (QA) systems. Existing methods solve these two tasks separately, which requires a large amount of repetitive work and neglects the rich correlation information between the tasks. In this paper, we tackle answer selection and KBQA simultaneously via multi-task learning (MTL), motivated by two observations. First, both answer selection and KBQA can be regarded as ranking problems, one at the text level and the other at the knowledge level. Second, these two tasks can benefit each other: answer selection can incorporate external knowledge from a knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection. To jointly learn these two tasks, we propose a novel multi-task learning scheme that utilizes multi-view attention learned from various perspectives to enable the tasks to interact with each other as well as learn more comprehensive sentence representations. Experiments conducted on several real-world datasets demonstrate the effectiveness of the proposed method, improving the performance of both answer selection and KBQA. The multi-view attention scheme also proves effective in assembling attentive information from different representational perspectives. |
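A toy sketch of the general idea of shared representations with two-way attention and task-specific heads, assuming nothing about the paper's actual architecture: word embeddings are shared across tasks, each input pair attends to the other in both directions, and a separate linear head scores candidates for answer selection and for KBQA. All dimensions and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16

# Shared word embeddings used by both tasks (the "shared" part of MTL).
embeddings = rng.normal(0, 0.1, (VOCAB, DIM))

def encode(token_ids):
    return embeddings[np.array(token_ids)]          # (len, DIM)

def attention_pool(query_vecs, key_vecs):
    """Cross attention from one view onto the other, then mean-pool."""
    scores = query_vecs @ key_vecs.T                # (lq, lk)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights @ key_vecs).mean(axis=0)        # (DIM,)

# Task-specific output layers on top of the shared representation.
w_answer_selection = rng.normal(0, 0.1, DIM)
w_kbqa = rng.normal(0, 0.1, DIM)

def score(question_ids, candidate_ids, task):
    q, c = encode(question_ids), encode(candidate_ids)
    view_q_to_c = attention_pool(q, c)              # question attends to candidate
    view_c_to_q = attention_pool(c, q)              # candidate attends to question
    rep = 0.5 * (view_q_to_c + view_c_to_q)         # combine the two views
    w = w_answer_selection if task == "answer_selection" else w_kbqa
    return float(rep @ w)

print(score([1, 2, 3], [4, 5], "answer_selection"))
print(score([1, 2, 3], [6], "kbqa"))
```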
Replication and validation of higher order models demonstrated that a summary score for the EORTC QLQ-C30 is robust. | OBJECTIVE
To further evaluate the higher order measurement structure of the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire Core 30 (QLQ-C30), with the aim of generating a summary score.
STUDY DESIGN AND SETTING
Using pretreatment QLQ-C30 data (N = 3,282), we conducted confirmatory factor analyses to test seven previously evaluated higher order models. We compared the summary score(s) derived from the best performing higher order model with the original QLQ-C30 scale scores, using tumor stage, performance status, and change over time (N = 244) as grouping variables.
RESULTS
Although all models showed acceptable fit, in the interest of parsimony we continued with known-groups validity and responsiveness analyses using a summary score derived from the single higher order factor model. The validity and responsiveness of this QLQ-C30 summary score was equal to, and in many cases superior to, the original underlying QLQ-C30 scale scores.
CONCLUSION
Our results provide empirical support for a measurement model for the QLQ-C30 yielding a single summary score. The availability of this summary score can avoid problems with potential type I errors that arise because of multiple testing when making comparisons based on the 15 outcomes generated by this questionnaire and may reduce sample size requirements for health-related quality of life studies using the QLQ-C30 questionnaire when an overall summary score is a relevant primary outcome. |
Energy-driven computing: Rethinking the design of energy harvesting systems | Energy harvesting computing has been gaining increasing traction over the past decade, fueled by technological developments and rising demand for autonomous and battery-free systems. Energy harvesting introduces numerous challenges to embedded systems but, arguably the greatest, is the required transition from an energy source that typically provides virtually unlimited power for a reasonable period of time until it becomes exhausted, to a power source that is highly unpredictable and dynamic (both spatially and temporally, and with a range spanning many orders of magnitude). The typical approach to overcome this is the addition of intermediate energy storage/buffering to smooth out the temporal dynamics of both power supply and consumption. This has the advantage that, if correctly sized, the system ‘looks like’ a battery-powered system; however, it also adds volume, mass, cost and complexity and, if not sized correctly, unreliability. In this paper, we consider energy-driven computing, where systems are designed from the outset to operate from an energy harvesting source. Such systems typically contain little or no additional energy storage (instead relying on tiny parasitic and decoupling capacitance), alleviating the aforementioned issues. Examples of energy-driven computing include transient systems (which power down when the supply disappears and efficiently continue execution when it returns) and power-neutral systems (which operate directly from the instantaneous power harvested, gracefully modulating their consumption and performance to match the supply). In this paper, we introduce a taxonomy of energy-driven computing, articulating how power-neutral, transient, and energy-driven systems present a different class of computing to conventional approaches. |
Non-static nature of patient consent: shifting privacy perspectives in health information sharing | The purpose of the study is to explore how chronically ill patients and their specialized care network have viewed their personal medical information privacy and how it has impacted their perspectives of sharing their records with their network of healthcare providers and secondary use organizations. Diabetes patients and specialized diabetes medical care providers in Eastern England were interviewed about their sharing of medical information and their privacy concerns to inform a descriptive qualitative and exploratory thematic analysis. From the interview data, we see that diabetes patients shift their perceived privacy concerns and needs throughout their lifetime due to persistence of health data, changes in health, technology advances, and experience with technology that affect one's consent decisions. From these findings, we begin to take a translational research approach in critically examining current privacy enhancing technologies for secondary use consent management and motivate the further exploration of both temporally-sensitive privacy perspectives and new options in consent management that support shifting privacy concerns over one's lifetime. |
Ondansetron has similar clinical efficacy against both nausea and vomiting. | Ondansetron is widely believed to prevent postoperative vomiting more effectively than nausea. We analysed data from 5161 patients undergoing general anaesthesia who were randomly stratified to receive a combination of six interventions, one of which was 4 mg ondansetron vs placebo. For the purpose of this study a 20% difference in the relative risks for the two outcomes was considered clinically relevant. Nausea was reduced from 38% (969/2585) in the control to 28% (715/2576) in the ondansetron group, corresponding to a relative risk of 0.74, or a relative risk reduction of 26%. Vomiting was reduced from 17% (441/2585) to 11% (293/2576), corresponding to a relative risk of 0.67, or a relative risk reduction of 33%. The relative risks of 0.67 and 0.74 were clinically similar and the difference between them did not reach statistical significance. We thus conclude that ondansetron prevents postoperative nausea and postoperative vomiting equally well. |
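The relative risks quoted above can be reproduced directly from the reported counts; the short check below uses only the numbers given in the abstract.

```python
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Risk in the treated group divided by risk in the control group."""
    return (events_treated / n_treated) / (events_control / n_control)

rr_nausea = relative_risk(715, 2576, 969, 2585)     # expected ~0.74
rr_vomiting = relative_risk(293, 2576, 441, 2585)   # expected ~0.67
print(f"nausea:   RR = {rr_nausea:.2f}, relative risk reduction = {1 - rr_nausea:.0%}")
print(f"vomiting: RR = {rr_vomiting:.2f}, relative risk reduction = {1 - rr_vomiting:.0%}")
```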
Deep Robust Kalman Filter | A Robust Markov Decision Process (RMDP) is a sequential decision making model that accounts for uncertainty in the parameters of dynamic systems. This uncertainty introduces difficulties in learning an optimal policy, especially for environments with large state spaces. We propose two algorithms, RTD-DQN and Deep-RoK, for solving large-scale RMDPs using nonlinear approximation schemes such as deep neural networks. The RTD-DQN algorithm incorporates the robust Bellman temporal difference error into a robust loss function, yielding robust policies for the agent. The Deep-RoK algorithm is a robust Bayesian method, based on the Extended Kalman Filter (EKF), that accounts for both the uncertainty in the weights of the approximated value function and the uncertainty in the transition probabilities, improving the robustness of the agent. We provide theoretical results for our approach and test the proposed algorithms on a continuous state domain. |
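A hedged, tabular sketch of the robust temporal-difference idea suggested by the RTD-DQN description: the Bellman target takes a worst case over a small uncertainty set of transition models instead of a single estimate. The random models, the uncertainty set, and the tabular update are illustrative simplifications, not the paper's deep or EKF-based algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA, ALPHA = 4, 2, 0.9, 0.1

# Nominal transition model plus a small uncertainty set of perturbed models.
def random_model():
    p = rng.random((N_STATES, N_ACTIONS, N_STATES))
    return p / p.sum(axis=2, keepdims=True)

nominal = random_model()
uncertainty_set = [nominal] + [0.9 * nominal + 0.1 * random_model() for _ in range(5)]
rewards = rng.random((N_STATES, N_ACTIONS))

q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(2000):
    s = rng.integers(N_STATES)
    a = rng.integers(N_ACTIONS)
    # Robust target: worst-case expected next-state value over the uncertainty set.
    robust_next_value = min(p[s, a] @ q.max(axis=1) for p in uncertainty_set)
    target = rewards[s, a] + GAMMA * robust_next_value
    q[s, a] += ALPHA * (target - q[s, a])

print("robust Q-values:\n", np.round(q, 3))
```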
Amphetamine-type central nervous system stimulants release norepinephrine more potently than they release dopamine and serotonin. | A large body of evidence supports the hypothesis that mesolimbic dopamine (DA) mediates, in animal models, the reinforcing effects of central nervous system stimulants such as cocaine and amphetamine. The role DA plays in mediating amphetamine-type subjective effects of stimulants in humans remains to be established. Both amphetamine and cocaine increase norepinephrine (NE) via stimulation of release and inhibition of reuptake, respectively. If increases in NE mediate amphetamine-type subjective effects of stimulants in humans, then one would predict that stimulant medications that produce amphetamine-type subjective effects in humans should share the ability to increase NE. To test this hypothesis, we determined, using in vitro methods, the neurochemical mechanism of action of amphetamine, 3,4-methylenedioxymethamphetamine (MDMA), (+)-methamphetamine, ephedrine, phentermine, and aminorex. As expected, their rank order of potency for DA release was similar to their rank order of potency in published self-administration studies. Interestingly, the results demonstrated that the most potent effect of these stimulants is to release NE. Importantly, the oral doses of these stimulants that produce amphetamine-type subjective effects in humans correlated with their potency in releasing NE, not DA, and did not decrease plasma prolactin, an effect mediated by DA release. These results suggest that NE may contribute to the amphetamine-type subjective effects of stimulants in humans. |
CRM ADOPTION IN A HIGHER EDUCATION INSTITUTION | More and more organisations, from private to public sectors, are pursuing higher levels of customer satisfaction, loyalty and retention. With this intent, higher education institutions (HEI) have adopted CRM – Customer Relationship Management. In order to analyse some of the interesting aspects of this phenomenon, we conducted action research in a European institute. The main research question we answered is "how to adopt a CRM strategy in a Higher Education Institution?" Some of the main findings of this study are (1) even though an HEI's main customer is the student, there are other stakeholders that a CRM project must consider; (2) universities can use their internal resources to implement a CRM project successfully; and (3) using an Agile software methodology is an effective way to define clearer, more objective and more assertive technical requirements, which result in CRM software that meets end users' expectations and organizational strategic goals. These findings can help other HEIs planning to adopt CRM as a strategic tool to improve their relationship with the stakeholders' community and expand their student body. |
Osteonecrosis of the jaw induced by orally administered bisphosphonates: incidence, clinical features, predisposing factors and treatment outcome | Osteonecrosis of the jaw (ONJ) is a well-known devastating side effect of bisphosphonate therapy for cancer. Several ONJ cases of patients using oral bisphosphonates have been reported in the literature. The present study analyzed the clinical features, predisposing factors, and treatment outcome of 11 patients with oral bisphosphonate-related ONJ. Osteonecrosis of the jaw (ONJ) is a well-known side effect of parenteral bisphosphonate therapy. Although ONJ has been reported in patients using oral bisphosphonates, documentation of this entity is sparse. It was hypothesized that the clinical features, predisposing factors, and treatment outcome of this population are different from those of oncologic patients. This retrospective bi-central study involved 98 ONJ patients, 13 of whom were treated with oral bisphosphonates. Two patients were excluded because of previous use of intravenous bisphosphonates. The profiles of 11 patients were analyzed. The mean duration of alendronate use before developing ONJ was 4.1 years. ONJ was triggered by dental surgery in 9 patients and by ill-fitting dentures in 2. Heavy smokers were the most recalcitrant subjects. Among the nine patients with at least 6 months of follow-up, ONJ healed completely in three, partially in four, and not at all in two. ONJ is a rare devastating side effect of oral bisphosphonates associated with patient morbidity and high financial burden. Clinicians must be aware of this entity and inform patients of the risks of dental surgery. The synergistic effect of smoking in the pathogenesis of ONJ should be further investigated. |
Social news, citizen journalism and democracy | This article aims to contribute to a critical research agenda for investigating the democratic implications of citizen journalism and social news. The article calls for a broad conception of ‘citizen journalism’ which is (1) not an exclusively online phenomenon, (2) not confined to explicitly ‘alternative’ news sources, and (3) includes ‘metajournalism’ as well as the practices of journalism itself. A case is made for seeing democratic implications not simply in the horizontal or ‘peer-to-peer’ public sphere of citizen journalism networks, but also in the possibility of a more ‘reflexive’ culture of news consumption through citizen participation. The article calls for a research agenda that investigates new forms of gatekeeping and agenda-setting power within social news and citizen journalism networks and, drawing on the example of three sites, highlights the importance of both formal and informal status differentials and of the software ‘code’ structuring these new modes of news |
Autophagy in Cancer Stem Cells: A Potential Link Between Chemoresistance, Recurrence, and Metastasis | Cancer cells require an uninterrupted nutritional supply for maintaining their proliferative needs, and this high demand, in concurrence with an inadequate supply of blood and nutrition, induces stress in these cells. These cells utilize various strategies, such as high glycolytic flux, redox signaling, and modulation of autophagy, to avoid cell death and overcome nutritional deficiency. Autophagy allows the cell to generate ATP and other essential biochemical building blocks necessary under such adverse conditions. It is emerging as a decisive process in the development and progression of pathophysiological conditions that are associated with increased cancer risk. However, the precise role of autophagy in tumorigenesis is still debatable. Autophagy is a novel cytoprotective process that augments tumor cell survival under nutrient or growth factor starvation, metabolic stress, and hypoxia. The tumor hypoxic environment may provide a site for the enrichment and expansion of cancer stem cells (CSCs) and subsequent rapid tumor progression. CSCs are characteristically resistant to conventional anticancer therapy, which may contribute to treatment failure and tumor relapse. CSCs have the potential to regenerate for an indefinite period, which can drive tumor metastatic invasion. Over the last decade, preclinical research has focused on the diversity in CSC content within tumors, which could affect their chemo- or radio-sensitivity by interfering with mechanisms of DNA repair and cell cycle progression. This review focuses predominantly on recent developments concerning CSCs during cancer treatment, the role of autophagy in the maintenance of CSC populations, and their implications for the development of promising new cancer treatment options in the future. |
Sol–gel synthesis, structural and dielectric properties of Y-doped BaTiO3 ceramics | Nano-polycrystalline BaTiO3 ceramics containing 20, 40, 60 and 80 mol.% Y were prepared by the sol–gel method. In the BaTiO3 perovskite structure, Ba atoms are partially replaced by Y. Careful X-ray diffraction analysis showed the presence of the tetragonal phase. The effect of the Y3+ content in BaTiO3 ceramic materials was investigated. The incorporation of yttrium into the BaTiO3 unit cell smoothly altered the bond vibrations of the crystal lattice. From the Raman-active modes, it was observed that the tetragonal phase is present in all synthesized samples. The morphology of the obtained ceramics is nanosized. Y-doped BaTiO3 ceramics exhibit high dielectric constants and low dielectric losses; these properties are combined with the microstructural development arising from the influence of the sintering temperature. Fits based on the Curie-Weiss law confirm that all samples are normal ferroelectrics with a first-order transition accompanied by a displacive one. The increase in conductivity is linked to the formation of oxygen vacancies arising from the dissociation of molecules during the synthesis process. |
Towards a Self-Deploying and Gliding Robot | Strategies for hybrid locomotion such as jumping and gliding are used in nature by many different animals for traveling over rough terrain. This combination of locomotion modes also allows small robots to overcome relatively large obstacles at a minimal energetic cost compared to wheeled or flying robots. In this chapter we describe the development of a novel palm sized robot of 10 g that is able to autonomously deploy itself from ground or walls, open its wings, recover in midair and subsequently perform goal-directed gliding. In particular, we focus on the subsystems that will in the future be integrated such as a 1.5 g microglider that can perform phototaxis; a 4.5 g, bat-inspired, wing folding mechanism that can unfold in only 50 ms; and a locust-inspired, 7 g robot that can jump more than 27 times its own height. We also review the relevance of jumping and gliding for living and robotic systems and we highlight future directions for the realization of a fully integrated robot. |
Inferring the score of a tennis match from in-play betting exchange markets | Over the past few years, betting exchanges have attracted a large variety of customers ranging from casual human participants to sophisticated automated trading agents. Professional tennis matches give rise to a number of betting exchange markets, which vary in liquidity. A problem faced by participants in tennis-related betting exchange markets is the lack of a reliable, low-latency and low-cost point-to-point score feed. Using existing quantitative tennis models, this paper demonstrates that it is possible to infer the score of a tennis match solely from live Match Odds betting exchange market data, assuming it has sufficient liquidity. By comparing the implied odds generated by our quantitative model during play with market data retrieved from the betting exchange, we devise an algorithm that detects when a point is scored by either player. This algorithm is further refined by identifying scenarios where false positives or misses occur and heuristically correcting for them. Testing this algorithm using live matches, we demonstrate that this idea is not only feasible but in fact is also capable of deducing the score of entire sets with few errors. While errors are still common and can lead to a derailment of the detection algorithm, with more work as well as improved data collection, the system has the potential of becoming a precise tool for score inference. |
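A toy sketch of the detection idea, assuming a pre-computed stream of market-implied win probabilities: a point is flagged whenever the market moves by more than a typical single-point swing. The jump threshold, the probability stream, and the attribution rule are placeholders rather than the paper's calibrated model.

```python
def detect_points(market_probs, point_jump=0.03):
    """Flag indices where the server's implied win probability jumps by more
    than the typical single-point swing predicted by a quantitative model."""
    points = []
    for i in range(1, len(market_probs)):
        if abs(market_probs[i] - market_probs[i - 1]) >= point_jump:
            winner = "server" if market_probs[i] > market_probs[i - 1] else "returner"
            points.append((i, winner))
    return points

# Hypothetical stream of Match Odds implied probabilities sampled during a game.
stream = [0.60, 0.60, 0.64, 0.64, 0.61, 0.61, 0.65, 0.69, 0.69]
for tick, winner in detect_points(stream):
    print(f"tick {tick}: point detected for the {winner}")
```

In the paper's setting the threshold would itself come from the quantitative tennis model (the predicted odds change for a single point at the current score), and the false-positive/miss heuristics the abstract mentions would sit on top of this basic comparison.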
Improving the Coverage and Spectral Efficiency of Millimeter-Wave Cellular Networks Using Device-to-Device Relays | The susceptibility of millimeter-wave propagation to blockages limits the coverage of millimeter-wave (mmWave) signals. To overcome blockages, we propose to leverage two-hop device-to-device (D2D) relaying. Using stochastic geometry, we derive expressions for the downlink coverage probability of relay-assisted mmWave cellular networks when the D2D links are implemented in either uplink mmWave or uplink microwave bands. We further investigate the spectral efficiency (SE) improvement in the cellular downlink, and the effect of D2D transmissions on the cellular uplink. For mmWave links, we derive the coverage probability using dominant interferer analysis while accounting for both blockages and beamforming gains. For microwave D2D links, we derive the coverage probability considering both line-of-sight and non-line-of-sight (NLOS) propagation. Numerical results show that downlink coverage and SE can be improved using two-hop D2D relaying. Specifically, microwave D2D relays achieve better coverage because D2D connections can be established under NLOS conditions. However, mmWave D2D relays achieve better coverage when the density of interferers is large because blockages eliminate interference from NLOS interferers. The SE on the downlink depends on the relay mode selection strategy, and mmWave D2D relays use a significantly smaller fraction of uplink resources than microwave D2D relays. |
Divisive Normalization in Olfactory Population Codes | In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli. |
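The two-variable account above can be written as a divisive-normalization equation; the sketch below implements a generic form in which a PN's response grows with its own feedforward ORN input and is divided by a term that grows with total ORN activity. The exponents and constants are arbitrary illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def pn_response(orn_input, total_orn_activity,
                r_max=165.0, sigma=12.0, m=0.05, n=1.5):
    """Divisive normalization: PN firing grows with its own ORN input but is
    suppressed as summed ORN activity (lateral inhibition) increases."""
    drive = orn_input ** n
    return r_max * drive / (drive + sigma ** n + (m * total_orn_activity) ** n)

orn = np.array([5.0, 20.0, 80.0, 200.0])
print("weak lateral input:  ", np.round(pn_response(orn, total_orn_activity=50.0), 1))
print("strong lateral input:", np.round(pn_response(orn, total_orn_activity=800.0), 1))
```

The two printed rows illustrate the abstract's two complementary transformations: the saturating channel nonlinearity boosts weak inputs, while raising total ORN activity scales all responses down.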
Biological network analysis and comparison: mining new biological knowledge | The mechanisms underlying life machinery are still not completely understood. Something is known, something is “probably” known, other things are still unknown. Scientists all over the world are working very hard to clarify the processes regulating the cell life cycle and bioinformaticians try to support them by developing specialized automated tools. Within the plethora of applications devoted to the study of life mechanisms, tools for the analysis and comparison of biological networks are catching the attention of many researchers. It is interesting to investigate why. |
Electromagnetic design considerations for a 50,000 rpm 1kW Switched Reluctance Machine using a flux bridge | The Switched Reluctance Machine (SRM) is a robust machine and a candidate for ultra high speed applications. Until now, the area of ultra high speed machines has been dominated by permanent magnet (PM) machines. The PM machine has a higher torque density and some other advantages compared to SRMs. However, the soaring prices of rare earth materials are driving efforts to find an alternative to PM machines without significantly impacting performance. At the same time, significant progress has been made in the design and control of the SRM. This paper reviews the progress of the SRM as a high speed machine and proposes a novel rotor structure design to resolve the challenge of high windage losses at ultra high speed. It then elaborates on how the design is modified to achieve optimal performance. The simulation results of the final design are verified with FEA software. Finally, a prototype machine with a similar design is built and tested to verify the simulation model. The experimental waveforms show good agreement with the simulation results, and the performance of the prototype machine is analyzed and presented at the end of this paper. |
Beyond the Tragic Vision: The Quest for Identity in the Nineteenth Century | Introduction: the problems of the historian Part I. The End of Ancient Thinking Part II. The Alienated Vision Part III. The Heroic Redeemer Part IV. Illusion and Reality Part V. Style and Value Index. |
A large multicenter outcome study of female genital plastic surgery. | INTRODUCTION
Female Genital Plastic Surgery, a relatively new entry in the field of Cosmetic and Plastic Surgery, has promised sexual enhancement and functional and cosmetic improvement for women. Are the vulvovaginal aesthetic procedures of Labiaplasty, Vaginoplasty/Perineoplasty ("Vaginal Rejuvenation") and Clitoral Hood Reduction effective, and do they deliver on that promise? For what reason do women seek these procedures? What complications are evident, and what effects are noted regarding sexual function for women and their partners? Who should be performing these procedures, what training should they have, and what are the ethical considerations?
AIM
This study was designed to produce objective, utilizable outcome data regarding female genital plastic surgery (FGPS).
MAIN OUTCOME MEASURES
1) Reasons for considering surgery from both patient's and physician's perspective; 2) Pre-operative sexual functioning per procedure; 3) Overall patient satisfaction per procedure; 4) Effect of procedure on patient's sexual enjoyment, per procedure; 5) Patient's perception of effect on her partner's sexual enjoyment, per procedure; 6) Complications.
METHODS
This cross-sectional study, including 258 women and encompassing 341 separate procedures, comes from a group of twelve gynecologists, gynecologic urologists and plastic surgeons from ten centers in eight states nationwide. 104 labiaplasties, 24 clitoral hood reductions, 49 combined labiaplasty/clitoral hood reductions, 47 vaginoplasties and/or perineoplasties, and 34 combined labiaplasty and/or reduction of the clitoral hood plus vaginoplasty/perineoplasty procedures were studied retrospectively, analyzing both patient's and physician's perception of surgical rationale, pre-operative sexual function and several outcome criteria.
RESULTS
Combining the three groups, 91.6% of patients were satisfied with the results of their surgery after a 6-42 month follow-up. Significant subjective enhancement in sexual functioning for both women and their sexual partners was noted (p = 0.0078), especially in patients undergoing vaginal tightening/perineal support procedures. Complications were acceptable and not of major consequence.
CONCLUSIONS
While emphasizing that these female genital plastic procedures are not performed to correct "abnormalities," as there is a wide range of normality in the external and internal female genitalia, both parous and nulliparous, many women chose to modify their vulvas and vaginas. From the results of this large study pooling data from a diverse group of experienced genital plastic surgeons, outcome in both general and sexual satisfaction appear excellent. |
Elevated methylmalonic acid and total homocysteine levels show high prevalence of vitamin B12 deficiency after gastric surgery. | Elderly persons [1, 2] and persons who have had gastric surgery [3-11] are at increased risk for developing vitamin B12 (cobalamin) deficiency. The hematologic and neurologic manifestations of vitamin B12 deficiency have been well described; however, this deficiency often remains undetected, and some patients receive a misdiagnosis of Alzheimer disease, spinal cord compression, amyotrophic lateral sclerosis, or diabetic or alcoholic peripheral neuropathy [12]. Although megaloblastic anemia is usually reversible with vitamin B12 treatment, the neurologic injuries are reversible only if they are treated soon after their onset [12, 13]. In addition, as do patients with folate deficiency [14], patients with untreated vitamin B12 deficiency have elevated total homocysteine levels. Substantial biochemical and epidemiologic evidence now suggests that an elevated serum total homocysteine level contributes to the development of carotid artery stenosis, coronary artery disease, and peripheral vascular disease [15-18]. Thus, in theory at least, patients with untreated vitamin B12 deficiency may be at increased risk for developing atherosclerotic vascular disease. In the future, the prevalence of vitamin B12 deficiency in the aging population may be expected to increase. Among persons at major risk are those who had subtotal gastrectomy for ulcer disease between the 1930s [19] and 1974 [5, 7] (in 1974, the first histamine-2 blocker, cimetidine, was released [20]). It is not possible to determine how many Americans had gastric surgery during this period, but representative data from the University of Minnesota Hospital suggest that the number is large. At that hospital alone, 1550 patients had subtotal gastrectomy between 1938 and 1950 [4]. Throughout the United States, therefore, hundreds of thousands of patients probably had this surgery. A new operation, gastric bypass for obesity, is currently creating another cohort at risk for developing vitamin B12 deficiency [11]. As these cohorts age, an unknown number of persons will develop vitamin B12 deficiency, and clinicians caring for such persons currently have no accurate guidelines on which to base screening decisions. Previous prevalence estimates are unreliable because clinical manifestations are insensitive and radiodilution vitamin B12 assays were nonspecific [21, 22]. The Schilling test is unreliable after gastrectomy [8, 9], anemia is often absent in vitamin B12-deficient patients [2, 12, 23], and macrocytosis may be masked by coexisting iron deficiency [7, 24]. Recently, however, measurements of the metabolites from two vitamin B12-dependent pathways (Figure 1), serum methylmalonic acid [25] and total homocysteine [14], were shown to be highly sensitive detectors of vitamin B12 deficiency [26]. Two enzymes have a known requirement for vitamin B12: L-methylmalonyl-CoA mutase and methionine synthase [22]. Methionine synthase requires folate in addition to vitamin B12 for normal functioning. If the conversion of L-methylmalonyl-CoA to succinyl-CoA is impaired by a deficiency of the vitamin B12 cofactor adenosylcobalamin, the excess methylmalonyl-CoA is cleaved to methylmalonic acid, and methylmalonic acid levels in the serum and urine are elevated [25].
Similarly, if the methylation of homocysteine to methionine is impaired by a deficiency of methylcobalamin or methyltetrahydrofolate, serum total homocysteine levels are elevated [14]. The metabolic pathways in which these two enzymes function are not always equally affected by vitamin B12 deficiency. At the time vitamin B12 deficiency is diagnosed, therefore, levels of methylmalonic acid, total homocysteine, or both may be elevated [22]. Figure 1. The two vitamin B12-dependent enzymes, L-methylmalonyl-CoA mutase (left) and methionine synthase (right). In vitamin B12-deficient patients, elevated levels of both serum methylmalonic acid and total homocysteine decrease promptly with adequate vitamin B12 therapy [22, 26]. However, in folate-deficient patients, total homocysteine levels return to normal only after folate replacement [22]. Therefore, in addition to serum vitamin B12 levels, we used methylmalonic acid, total homocysteine, and folate levels to determine whether the prevalence of vitamin B12 deficiency differed between persons who had had gastric surgery and those who had not. Methods: Between September 1991 and March 1993, 65 patients who had had gastric surgery were identified at the Philadelphia Veterans Affairs Medical Center. These patients were identified either by review of gastrointestinal radiographs, surveys of the house-staff assigned to the medicine and surgery inpatient services, or referral of outpatients from physicians in the medical clinic. Four of the 65 patients were excluded: Three were receiving vitamin B12 therapy, and one had a hepatoma. Hepatoma can produce increased levels of vitamin B12-binding protein, which may complicate interpretation of serum vitamin B12 levels. Patients who had not had gastric surgery (controls) were drawn from 127 consecutive patients attending one author's Philadelphia Veterans Affairs Medical Center clinic between November 1992 and March 1993. One hundred seven controls participated, and 20 either declined to participate or did not complete the required blood tests. We determined the type of gastric surgery that had been done either from patient reporting or by reviewing radiologic, endoscopic, or surgical records. In most patients (51 of 61), we determined the year surgery had been done from patient report or chart review. For patients who could not provide the year of surgery but could specify the decade, we used the mid-decade year. For example, if the patient said that the surgery had been done in the 1950s, we recorded the year as 1955. Serum vitamin B12 and folate levels were determined at the Philadelphia Veterans Affairs Medical Center using a commercially available radioligand kit (Bio-Rad, Diagnostics Group, Hercules, California). In the hospital's laboratory, normal values for vitamin B12 and folate levels were 171 to 840 pmol/L and 5 to 39 nmol/L, respectively. The remaining serum samples were frozen at -20 °C and were shipped to Denver so that serum methylmalonic acid and total homocysteine levels could be analyzed by the stable isotope dilution gas chromatography-mass spectrometry method [27-30]. The normal range for serum methylmalonic acid levels (determined in 50 normal blood donors 18 to 65 years of age) is 73 to 271 nmol/L, and the normal range for serum total homocysteine levels is 5.4 to 16.2 µmol/L [22].
Vitamin B12 deficiency was defined as one of the following: 1) a serum vitamin B12 level less than 221 pmol/L and an elevated methylmalonic acid level; 2) a serum vitamin B12 level less than 221 pmol/L and a total homocysteine level that decreased after vitamin B12 therapy; or 3) in patients unavailable for treatment, a serum vitamin B12 level less than 221 pmol/L, a folate level greater than 9 nmol/L, and an elevated total homocysteine level. Hemoglobin level, hematocrit, and mean corpuscular volume were measured by automatic devices. Macrocytosis was defined as a mean corpuscular volume greater than 95 fL. The peripheral smears of 71% of patients (43 of 61) and 88% of controls (94 of 107) were reviewed by one hematologist who was blinded to each participant's vitamin B12 level, hemoglobin level, hematocrit, and gastric surgery status. Hypersegmentation was defined as five neutrophils with five or more lobes or one neutrophil with six lobes per 100 cells counted. Treatment: Vitamin B12 treatment generally consisted of daily intramuscular injections of 1000 µg of vitamin B12 for 5 days, followed by monthly injections. Folic acid was given orally, 1 mg/d. Serum vitamin B12, folate, methylmalonic acid, and total homocysteine levels were measured 1 to 6 weeks after treatment. Statistical Analysis: Data were examined to determine whether the variables were suitable for parametric analyses. Although relatively modest, the skew for the numeric variables necessitated that several variables be transformed to logs for entry into two-way analysis of variance or be subjected to nonparametric analyses. The comparison between patients and controls for levels of vitamin B12, folate, methylmalonic acid, and total homocysteine was done by two-factor analysis of variance on log-transformed variables. In each analysis of variance, race was included as a factor (along with study group) to control for possible race-by-group interactions. We used unpaired t-tests or Mann-Whitney U tests to do comparisons of other numeric variables, such as hemoglobin and mean corpuscular volume; comparisons between other groups, such as patients with a positive and patients with a negative peripheral blood smear; and comparisons between deficient and nondeficient patients. We used chi-square tests to compare groups on dichotomous variables (such as white patients compared with black patients). Spearman correlations were used to assess the association between the time since surgery and other variables. The Human Studies Subcommittee and the Research and Development Committee of the Philadelphia Veterans Affairs Medical Center approved the study. Results: Clinical Characteristics: The 61 patients (who had had gastric surgery) and 107 controls (who had not) were similar in the ratio of men to women (60:1 compared with 104:3), age, and race (Table 1). The indications for surgery included peptic ulcer disease (56 patients), obesity (3 patients), gastric cancer (3 patients [2 of whom had previously had surgery for peptic ulcer disease]), and gastric lymphoma (1 patient). The type of gastric surgery could be determined in 36 of 61 patients (59%). The types of surgery were Billroth II (23 patients), repair of perforated ulcer (6 patients), vagotomy and pyloroplasty (2 patients), gastric bypass or gastric banding for obesity (3 patients), Billroth I (1 patient), |
Tiresias: Predicting Security Events Through Deep Learning | With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However, this is still an open research problem, and previous research on predicting malicious events only looked at binary outcomes (e.g., whether an attack would happen or not), not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task. |
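As a rough illustration of the kind of model involved, here is a minimal recurrent next-event classifier in PyTorch. It is only a sketch: Tiresias's actual architecture, event vocabulary, features, and training setup are not reproduced here, and all sizes and tensors below are illustrative.

```python
import torch
import torch.nn as nn

class NextEventPredictor(nn.Module):
    """Minimal sketch: embed a sequence of security-event IDs and predict
    the ID of the next event from the final LSTM hidden state."""
    def __init__(self, n_event_types, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_event_types, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_event_types)

    def forward(self, event_ids):                 # (batch, seq_len) int64
        h, _ = self.lstm(self.emb(event_ids))
        return self.out(h[:, -1, :])              # logits over the next event

model = NextEventPredictor(n_event_types=5000)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.randint(0, 5000, (32, 50))          # 32 machines, 50 past events each
target = torch.randint(0, 5000, (32,))            # the event that actually came next
loss = loss_fn(model(batch), target)
loss.backward()
opt.step()
```

Monitoring precision on recent data and retraining when it drops, as the abstract describes, would sit outside this model code.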
The colonial context of Filipino American immigrants' psychological experiences. | Because of the long colonial history of Filipinos and the highly Americanized climate of postcolonial Philippines, many scholars from various disciplines have speculated that colonialism and its legacies may play major roles in Filipino emigration to the United States. However, there are no known empirical studies in psychology that specifically investigate whether colonialism and its effects have influenced the psychological experiences of Filipino American immigrants prior to their arrival in the United States. Further, there is no existing empirical study that specifically investigates the extent to which colonialism and its legacies continue to influence Filipino American immigrants' mental health. Thus, using interviews (N = 6) and surveys (N = 219) with Filipino American immigrants, two studies found that colonialism and its consequences are important factors to consider when conceptualizing the psychological experiences of Filipino American immigrants. Specifically, the findings suggest that (a) Filipino American immigrants experienced ethnic and cultural denigration in the Philippines prior to their U.S. arrival, (b) ethnic and cultural denigration in the Philippines and in the United States may lead to the development of colonial mentality (CM), and (c) that CM may have negative mental health consequences among Filipino American immigrants. The two studies' findings suggest that the Filipino American immigration experience cannot be completely captured by the voluntary immigrant narrative, as they provide empirical support to the notion that the Filipino American immigration experience needs to be understood in the context of colonialism and its most insidious psychological legacy- CM. |
Ethics of abortion: the arguments for and against. | In England, Scotland and Wales legislation has facilitated the process of procuring an abortion to the point at which, in 2007, it appears to have been effectively assimilated into contemporary life. However, despite the legal acceptance of abortion it remains an ethically contentious subject. Arguments in favour of, or in opposition to, abortion can arouse vociferous and, on occasions, extreme reactions. At the heart of the abortion debate lie questions concerning rights, autonomy and the way in which society views disability (if a pregnancy is terminated for this reason alone). It is important that health professionals comprehend the basis of the abortion debate, from the perspective of their profession, society as a whole and the individual woman who may have had or is considering an abortion or has been affected by the subject in some way. This article examines some of the key ethical issues concerning abortion. |
Dialogue act segmentation for Vietnamese human-human conversational texts | Dialog act identification plays an important role in understanding conversations. It has been widely applied in many fields such as dialogue systems, automatic machine translation, and automatic speech recognition, and is especially useful in systems with human-computer natural language dialogue interfaces such as virtual assistants and chatbots. The first step in dialog act identification is identifying dialog act boundaries in utterances. In this paper, we focus on segmenting utterances according to dialog act boundaries, i.e., functional segment identification, for Vietnamese. We carefully investigate functional segment identification with two approaches: (1) a machine learning approach using maximum entropy (ME) and conditional random fields (CRFs); (2) a deep learning approach using bidirectional Long Short-Term Memory (LSTM) with a CRF layer (Bi-LSTM-CRF), on two different conversational datasets: (1) Facebook messages (Message data); (2) transcriptions of phone conversations (Phone data). To the best of our knowledge, this is the first work that applies a deep learning based approach to dialog act segmentation. As the results show, the deep learning approach performs appreciably better than traditional machine learning approaches. Moreover, this is also the first study that tackles dialog act and functional segment identification for Vietnamese. |
Generic Text Summarization Using Probabilistic Latent Semantic Indexing | This paper presents a strategy to generate a generic summary of documents using Probabilistic Latent Semantic Indexing (PLSI). Generally a document contains several topics rather than a single one. Summaries created by human beings tend to cover several topics to give the readers an overall idea about the original document. Hence we can expect that a summary containing sentences from a broader part of the topic spectrum should make a better summary. PLSI has proven to be an effective method for topic detection. In this paper we present a method for creating an extractive summary of a document by using PLSI to analyze features of the document such as term frequency and graph structure. We also show our results, which were evaluated using ROUGE, and compare them with other techniques proposed in the past. |
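To illustrate how topic coverage can drive sentence selection, here is a small sketch that assumes the sentence-topic distributions P(topic|sentence) have already been estimated (e.g., by PLSI's EM procedure) and then greedily picks sentences that add the most uncovered topic mass. The scoring is a simplification, not the paper's exact criterion.

```python
import numpy as np

def summarize(p_topic_given_sentence, k=3):
    """Greedy extractive selection: repeatedly pick the sentence that adds the
    most topic mass not yet covered (input: n_sentences x n_topics matrix)."""
    p = np.asarray(p_topic_given_sentence, dtype=float)
    n_sent, n_topics = p.shape
    covered = np.zeros(n_topics)
    chosen = []
    for _ in range(min(k, n_sent)):
        gain = np.maximum(p - covered, 0.0).sum(axis=1)   # new topic mass per sentence
        gain[chosen] = -np.inf                            # never re-pick a sentence
        best = int(np.argmax(gain))
        chosen.append(best)
        covered = np.maximum(covered, p[best])
    return sorted(chosen)

# toy example: 4 sentences, 3 latent topics
P = [[0.8, 0.1, 0.1],
     [0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
print(summarize(P, k=2))   # picks sentences covering two different topics
```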
Mining distance-based outliers in near linear time with randomization and a simple pruning rule | Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set. |
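A compact version of this idea, a nested loop over randomly ordered data with a cutoff-based pruning rule, can be sketched as follows; block processing and the other optimizations from the paper are omitted, and the parameters are illustrative.

```python
import heapq
import numpy as np

def top_outliers(X, k=5, n_out=10, seed=0):
    """Distance-based outlier scores (distance to the k-th nearest neighbour).
    Nested loop over randomly ordered data with the simple pruning rule: stop
    scoring a candidate as soon as its running k-NN distance drops below the
    weakest score currently held by the top-n outlier set."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    order = rng.permutation(len(X))          # random order defeats unlucky scan orders
    top = []                                 # min-heap of (score, index)
    cutoff = 0.0
    for i in order:
        neigh = []                           # heap of negated distances (k current nearest)
        pruned = False
        for j in order:
            if i == j:
                continue
            d = float(np.linalg.norm(X[i] - X[j]))
            if len(neigh) < k:
                heapq.heappush(neigh, -d)
            elif d < -neigh[0]:
                heapq.heapreplace(neigh, -d)
            if len(neigh) == k and -neigh[0] < cutoff:
                pruned = True                # cannot enter the top-n outliers any more
                break
        if pruned:
            continue
        score = -neigh[0]                    # distance to the k-th nearest neighbour
        heapq.heappush(top, (score, int(i)))
        if len(top) > n_out:
            heapq.heappop(top)
            cutoff = top[0][0]               # weakest score among the current top-n
    return sorted(top, reverse=True)

X = np.vstack([np.random.default_rng(1).normal(size=(300, 3)),
               [[8, 8, 8], [-9, 0, 9]]])     # two obvious outliers appended
print(top_outliers(X, k=5, n_out=2))
```

Because most points are pruned after a handful of distance computations once the cutoff rises, the average cost per non-outlier stays roughly constant, which is the intuition behind the near linear scaling reported above.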
Fabrication of metal matrix composites by metal injection molding: A review | Metal injection molding (MIM) is a near net-shape manufacturing technology that is capable of mass-producing complex parts cost-effectively. The unique features of the process make it an attractive route for the fabrication of metal matrix composite materials. In this paper, the status of research and development in fabricating metal matrix composites by MIM is reviewed, with a major focus on material systems, fabrication methods, and the resulting material properties and microstructures. Limitations and needs of the technique for composite fabrication, as identified in the literature, are also presented. The full potential of the MIM process for fabricating metal matrix composites is yet to be explored. |
Needs assessment of cancer survivors in Connecticut. | INTRODUCTION
There are knowledge gaps regarding the needs of cancer survivors in Connecticut and their utilization of supportive services.
METHODS
A convenience sample of cancer survivors residing in Connecticut were invited to complete a self-administered (print or online) needs assessment (English or Spanish). Participants identified commonly occurring problems and completed a modified version of the Supportive Care Needs Survey Short Form (SCNS-SF34) assessing needs across five domains (psychosocial, health systems/information, physical/daily living, patient care/support, and sexuality).
RESULTS
The majority of the 1,516 cancer survivors (76.4%) were women, 47.5% had completed high school or some college, 66.1% were diagnosed ≤5 years ago, and 87.7% were non-Hispanic white. Breast was the most common cancer site (47.6%), followed by prostate, colorectal, lung, and melanoma. With multivariate adjustment, need on the SCNS-SF34 was greatest among women, younger survivors, those diagnosed within the past year, those not free of cancer, and Hispanics/Latinos. We also observed some differences by insurance and education status. In addition, we assessed the prevalence of individual problems, with the most common being weight gain/loss, memory changes, paying for care, communication, and not being told about services.
CONCLUSIONS
Overall and domain specific needs in this population of cancer survivors were relatively low, although participants reported a wide range of problems. Greater need was identified among cancer survivors who were female, younger, Hispanic/Latino, and recently diagnosed.
IMPLICATIONS FOR CANCER SURVIVORS
These findings can be utilized to target interventions and promote access to available resources for Connecticut cancer survivors. |
J-Sim: a simulation and emulation environment for wireless sensor networks | Wireless sensor networks (WSNs) have gained considerable attention in the past few years. They have found application in domains such as battlefield communication, homeland security, pollution sensing, and traffic monitoring. As such, there has been an increasing need to define and develop simulation frameworks for carrying out high-fidelity WSN simulation. In this article we present a modeling, simulation, and emulation framework for WSNs in J-Sim - an open source, component-based compositional network simulation environment developed entirely in Java. This framework is built on the autonomous component architecture and extensible internetworking framework of J-Sim, and provides an object-oriented definition of target, sensor, and sink nodes, sensor and wireless communication channels, and physical media such as seismic channels, mobility models, and power models (both energy-producing and energy-consuming components). Application-specific models can be defined by subclassing classes in the simulation framework and customizing their behaviors. We also include in J-Sim a set of classes and mechanisms to realize network emulation. We demonstrate the use of the proposed WSN simulation framework by implementing several well-known localization, geographic routing, and directed diffusion protocols, and compare performance (in terms of execution time and memory usage) when simulating WSN scenarios in J-Sim and ns-2. The simulation study indicates that the WSN framework in J-Sim is much more scalable than ns-2 (especially in memory usage). We also demonstrate the use of the WSN framework in carrying out real-life, full-fledged Future Combat System (FCS) simulation and emulation. |
FPGA-Based Test-Bench for Resonant Inverter Load Characterization | Resonant converters often require accurate load characterization in order to ensure appropriate and safe control. Besides, for systems with a highly variable load, such as induction heating systems, real-time load estimation is mandatory. This paper presents the development of an FPGA-based test-bench aimed at extracting the electrical equivalent of induction heating loads. The proposed test-bench comprises a resonant power converter, sigma-delta ADCs, and an embedded system implemented in an FPGA. The characterization algorithm is based on the discrete-time Fourier series computed directly from the ΔΣ ADC bit-streams, and the FPGA implementation has been partitioned into hardware and software platforms to optimize performance and resource utilization. Analytical and simulation results are verified through experimental measurements with the proposed test-bench. As a result, the proposed platform can be used as a load identification tool for either stand-alone or PC-hosted operation. |
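The core computation, estimating the load's electrical equivalent from the fundamental discrete-time Fourier-series component of the ΔΣ bit-streams, can be sketched in a few lines. This is a behavioural illustration only: a first-order software modulator stands in for the hardware ΔΣ ADCs, all frequencies and amplitudes are made up, and it is not the FPGA implementation described in the paper.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator: ±1 bit-stream whose low-frequency
    content tracks the input (assumes |x| < 1)."""
    s, out = 0.0, np.empty(len(x))
    for k, xk in enumerate(x):
        b = 1.0 if s >= 0.0 else -1.0
        out[k] = b
        s += xk - b
    return out

def fundamental(bits, fs, f0):
    """Fundamental discrete-time Fourier-series coefficient of a bit-stream,
    computed directly on the ±1 samples (no decimation filter needed)."""
    n = np.arange(len(bits))
    return 2.0 / len(bits) * np.sum(bits * np.exp(-2j * np.pi * f0 * n / fs))

fs, f0 = 20e6, 100e3                          # illustrative sampling / resonant frequencies
t = np.arange(int(50 * fs / f0)) / fs
v_bits = sigma_delta(0.9 * np.sin(2 * np.pi * f0 * t))          # inverter output voltage
i_bits = sigma_delta(0.7 * np.sin(2 * np.pi * f0 * t - 0.5))    # lagging load current
Z = fundamental(v_bits, fs, f0) / fundamental(i_bits, fs, f0)   # complex load impedance
print(Z.real, Z.imag / (2 * np.pi * f0))      # series R and L of the equivalent load
```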
Official digital currency | In ancient times, goods and services were exchanged through a barter system. Gold, other valuable metals, and tangibles such as stones and shells were also used as media of exchange. Today, Paper Currency (PC) is the commonly accepted, country-wide medium of trade. It has three major flaws. First, the holder of currency is always at risk of theft and robbery, common in most societies of the world. Second, counterfeit currency is a challenge for currency-issuing authorities. Third, printing and transferring PC incurs heavy costs. Different organizations have introduced and implemented digital currency systems, but none of them is governed by any government. In this paper we introduce an Official digital currency System (ODCS). Our proposed digital currency is issued and controlled by the state/central bank of a country, which is why we name it Official digital currency (ODC). The process of issuing ODC is almost the same as that of Conventional Paper Currency (CPC), but the controlling system is different. The proposal also explains the country-wide process of day-to-day trade transactions through the ODCS. ODC is more secure, reliable, economical, and easy to use. Here we introduce just the idea and the compulsory modules of the ODC system, not an implementable framework. We will present the implementable framework in a separate forthcoming publication. |
Generative Visual Manipulation on the Natural Image Manifold | [Abstract not recoverable: the source contains only poster/figure residue for an interactive image generation demo (repeated "User edits" / "Generated images" panel captions) and reference entries for Zhu et al., ICCV 2015; Goodfellow et al., NIPS 2014; and Radford et al., ICLR 2016.] |
On the Past and Future of American Immigration and Ethnic History: A Sociologist’s Reflections on a Silver Jubilee | To a historian, twenty five years may not seem like much: a span scarcely the measure of a generation, within the frame of “current affairs,” it may not yet qualify as “history”; it might even be said, paraphrasing Gertrude Stein, that there is no long duree there. But to a sociologist, it is a span that packs a wallop. This essay examines the past and future of American immigration and ethnic history in the light of rapid changes marking the 25-year period between 1981 and 2006. Although much of American sociology, led by the scholars of what came to be called the Chicago School, gained its impetus and its disciplinary identity a century ago via the empirical study of mass immigration and the adaptations in American cities of an unprecedented diversity of newcomers, twenty-five years ago there was little scholarly work being done in the sociology of immigration and ethnicity. Doctoral students at leading universities were advised by their mentors as late as the 1980s to avoid writing their dissertations on such topics, since immigration was not a “field” or even a recognized section of the American Sociological Association. There was no there there, then. On the other hand, immigration became a field of specialization in American history in the 1926-40 period, it “erupted” in the late 1960s, and by the 1970s an astounding 1,813 doctoral dissertations in history focused on immigration or ethnicity. It was when immigration became “a thing of the past” that historians surged to study it, while sociologists turned to more contemporary concerns (including what would become glossed as “race and ethnic relations”). Still, historians and sociologists alike have kindred interests in understanding and explaining the common and endlessly fascinating phenomena that delimit our respective fields. While the recovery, if not the discovery, of the past may be the historian’s raison d’etre, it is in part the social scientist’s conceit to examine the patterned present in order to predict the future. Whether our glance is backward or forward, our knowledge is ineluctably shaped by our present predicaments, so that as often as not we see through our respective prisms darkly. And history, in any case, does not obligingly repeat itself, whether as tragedy or as familiar farce. If we can learn something from the chaotic, checkered past of the last era of mass migration to the United States a century ago, it may be to harbor few illusions about crystal-ball gazing, even in increments of twenty-five years. Historians and sociologists alike, in our own Janus-faced ways, seek to contribute to human understanding, to enlightenment, and to the tolerance and humility that comes with it – and in that way we make our most important contributions to the long duree of humanity, all the more in a present suffused by a climate of fear. To bring these two disciplines together in the study of American immigration and ethnicity has been one of the pioneering contributions of the JAEH in its first quarter century. |
Wireless ad hoc sensor and actuator networks on the farm | Agriculture accounts for a significant portion of the GDP in most developed countries. However, managing farms, particularly large-scale extensive farming systems, is hindered by lack of data and increasing shortage of labour. We have deployed a large heterogeneous sensor network on a working farm to explore sensor network applications that can address some of the issues identified above. Our network is solar powered and has been running for over 6 months. The current deployment consists of over 40 moisture sensors that provide soil moisture profiles at varying depths, weight sensors to compute the amount of food and water consumed by animals, electronic tag readers, up to 40 sensors that can be used to track animal movement (consisting of GPS, compass and accelerometers), and 20 sensor/actuators that can be used to apply different stimuli (audio, vibration and mild electric shock) to the animal. The static part of the network is designed for 24/7 operation and is linked to the Internet via a dedicated high-gain radio link, also solar powered. The initial goals of the deployment are to provide a testbed for sensor network research in programmability and data handling while also being a vital tool for scientists to study animal behavior. Our longer term aim is to create a management system that completely transforms the way farms are managed. |
EdgeChain: An Edge-IoT Framework and Prototype Based on Blockchain and Smart Contracts | The emerging Internet of Things (IoT) is facing significant scalability and security challenges. On the one hand, IoT devices are “weak” and need external assistance. Edge computing provides a promising direction addressing the deficiency of centralized cloud computing in scaling massive numbers of devices. On the other hand, IoT devices are also relatively “vulnerable” facing malicious hackers due to resource constraints. The emerging blockchain and smart contracts technologies bring a series of new security features for IoT and edge computing. In this paper, to address the challenges, we design and prototype an edge-IoT framework named “EdgeChain” based on blockchain and smart contracts. The core idea is to integrate a permissioned blockchain and its internal currency or “coin” system to link the edge cloud resource pool with each IoT device’s account and resource usage, and hence the behavior of the IoT devices. EdgeChain uses a credit-based resource management system to control how many resources IoT devices can obtain from edge servers, based on pre-defined rules on priority, application types and past behaviors. Smart contracts are used to enforce the rules and policies to regulate the IoT device behavior in a non-deniable and automated manner. All the IoT activities and transactions are recorded in the blockchain for secure data logging and auditing. We implement an EdgeChain prototype and conduct extensive experiments to evaluate the ideas. The results show that while gaining the security benefits of blockchain and smart contracts, the cost of integrating them into EdgeChain is within a reasonable and acceptable range. |
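The credit-based idea can be illustrated with a toy, off-chain policy function like the one below. This is not EdgeChain's smart-contract code: the fields, weights, costs, and thresholds are invented purely to show how priority, application type, and past behavior could gate resource grants.

```python
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    priority: int          # 1 (low) .. 3 (high), fixed at registration
    app_type: str          # e.g. "sensor", "camera", "actuator"
    credit: float          # spendable credit ("coins") held on the permissioned chain
    violations: int        # past misbehaviour recorded on the ledger

APP_COST = {"sensor": 1.0, "camera": 5.0, "actuator": 2.0}   # credit cost per resource unit

def grant_resources(dev: Device, requested_units: float) -> float:
    """Return how many edge-resource units to grant and deduct the credit,
    mirroring the idea of rule-based, automatically enforced allocation."""
    if dev.violations > 3:                      # misbehaving devices are throttled
        return 0.0
    unit_cost = APP_COST.get(dev.app_type, 2.0) / dev.priority
    granted = min(requested_units, dev.credit / unit_cost)
    dev.credit -= granted * unit_cost           # a real deployment would log this on-chain
    return granted

cam = Device("cam-17", priority=2, app_type="camera", credit=40.0, violations=0)
print(grant_resources(cam, requested_units=20.0))   # grants 16.0 units, credit drops to 0
```

In the actual framework the equivalent rules would live in smart contracts so that the grant, the charge, and the behavioral record are all non-deniable.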
Holistic Approach to Scientific Traditions | There are at least two perspectives which must be taken into consideration when evaluating a scientific tradition; epistemological, because it is the result of an activity of acquiring knowledge, and sociological, because as a tradition it is the outcome of a community of scholars actively involved in acquiring that knowledge. If both of these perspectives are held together in all their aspects, then we shall have a holistic approach in evaluating a scientific tradition. Unfortunately, most explanations offered for the decline of science in the Muslim world neglect both of these perspectives. In this paper I attempt to explain from these perspectives that Islam has a viable relationship with science and secondly, offer my view concerning how these two perspectives solve the problem of revival of science in the Muslim world. Keywords: Islam and science; sociology of science; epistemology of science; scientific community; scientific tradition; science and religion; scientific progress; scientific process; worldview; holistic approach; contextual causes. Introduction: A perspective is the position of an investigator who views the subject under investigation from that particular position. As human beings we cannot be "perspectiveless". Since it is also our perspective that provides us with a view to look at things, and since we cannot be "perspectiveless", whatever activity we do, we will necessarily put it within that perspective. If our activity is scientific, for example, it will be from our own perspective that is gradually developed through our education, which in turn has a broader perspective of our past educational and scientific perspectives. This broader perspective can be called "scientific tradition" if it involves our scientific past and present. If we define our perspective with clear borders, then we draw a "framework". Depending on the nature of our activity, both our perspective and our framework can be abstract or concrete, with varying degrees of intensity between these extremes. Since science is a conceptual activity, both perspective and framework will also be conceptual. From this theoretical background we can define a scientific tradition as that accumulation of scientific knowledge and practices, in its civilization, of the scholars and the practitioners of science from the perspective of its worldview but within the framework of its epistemology. On the basis of this definition, Western scientific tradition is, for example, the accumulated scientific knowledge and practices carried out through the epistemology of the Western worldview, together with a set of cultural values and mores that grew out through time among the network of scholars actively engaged in scientific activities in the West. The same is valid for Islamic scientific tradition and all other such traditions. Two factors stand out in this definition: a perspective and an environment. The perspective is necessarily epistemological, and it is, in fact, the epistemology of science; whereas the second is sociological and can also be expressed as the sociology of science. Both of these factors have many intricate issues and phenomena involved in the complex structure of scientific activities. If we try at least to keep in view all these aspects, issues and phenomena in general, then we can develop a holistic attitude in evaluating a scientific tradition.
Any other evaluation is to be held only as a scholarly examination of an aspect of such a tradition and nothing more. In this brief attempt we shall try to offer here only certain aspects of the epistemology and the sociology of Islamic scientific tradition. This way we hope that we shall be treating Islamic scientific tradition with the holistic approach as defined above. The Epistemological Framework: That mental framework which is followed naturally and/or actively by human activity can be defined as the 'epistemic ground' of that activity. … |
Correlating Synthetic Aperture Radar (CoSAR) | This paper presents the correlating synthetic aperture radar (CoSAR) technique, a novel radar imaging concept to observe statistical properties of fast decorrelating surfaces. A CoSAR system consists of two radars with a relative motion in the along-track (cross-range) dimension. The spatial autocorrelation function of the scattered signal can be estimated by combining quasi-simultaneously received radar echoes. By virtue of the Van Cittert-Zernike theorem, estimates of this autocorrelation function for different relative positions can be processed to generate images of several properties of the scene, including the normalized radar cross section, Doppler velocities, and surface topography. Aside from the geometric performance, a central aspect of this paper is a theoretical derivation of the radiometric performance of CoSAR. The radiometric quality is proportional to the number of independent samples available for the estimation of the spatial correlation, and to the ratio between the CoSAR azimuth resolution and the real-aperture resolution. A CoSAR mission concept is provided where two geosynchronous radar satellites fly at opposing sides of a quasi-circular trajectory. Such a mission could provide bidaily images of the ocean backscatter, mean Doppler, and surface topography at resolutions on the order of 500 m over wide areas. |
Privacy Loss in Apple's Implementation of Differential Privacy on MacOS 10.12 | In June 2016, Apple made a bold announcement that it would deploy local differential privacy for some of its user data collection in order to ensure privacy of user data, even from Apple [21, 23]. The details of Apple’s approach remained sparse. Although several patents [17–19] have since appeared hinting at the algorithms that may be used to achieve differential privacy, they did not include a precise explanation of the approach taken to privacy parameter choice. Such choice and the overall approach to privacy budget use and management are key questions for understanding the privacy protections provided by any deployment of differential privacy. In this work, through a combination of experiments and static and dynamic code analysis of the macOS Sierra (Version 10.12) implementation, we shed light on the choices Apple made for privacy budget management. We discover and describe Apple’s set-up for differentially private data processing, including the overall data pipeline, the parameters used for differentially private perturbation of each piece of data, and the frequency with which such data is sent to Apple’s servers. We find that although Apple’s deployment ensures that the (differential) privacy loss per datum submitted to its servers is 1 or 2, the overall privacy loss permitted by the system is significantly higher, as high as 16 per day for the four initially announced applications of Emojis, New words, Deeplinks and Lookup Hints [21]. Furthermore, Apple renews the privacy budget available every day, which leads to a possible privacy loss of 16 times the number of days since user opt-in to differentially private data collection for those four applications. We applaud Apple’s deployment of differential privacy for its bold demonstration of feasibility of innovation while guaranteeing rigorous privacy. However, we argue that in order to claim the full benefits of differentially private data collection, Apple must give full transparency of its implementation and privacy loss choices, enable user choice in areas related to privacy loss, and set meaningful defaults on the daily and device-lifetime privacy loss permitted. |
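The budget arithmetic reported above is simple enough to state as a worked example: with a renewed daily cap of 16 (and per-datum epsilon of 1 or 2), the worst-case accumulated loss grows linearly in the number of opted-in days.

```python
DAILY_EPSILON_CAP = 16      # per-day privacy loss across the four applications (as reported)
PER_DATUM_EPSILON = (1, 2)  # per-datum epsilon values observed in the deployment

def worst_case_privacy_loss(days_since_opt_in: int) -> int:
    """Worst-case accumulated epsilon if the renewed daily budget is fully
    spent every day since the user opted in (no carry-over between days)."""
    return DAILY_EPSILON_CAP * days_since_opt_in

print(worst_case_privacy_loss(30))   # 480 after one month of opted-in use
```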
Common mode EMI reduction technique for interleaved MHz critical mode PFC converter with coupled inductor | Coupled inductors have been widely adopted in VR applications for many years because of benefits such as reduced current ripple and improved transient performance. In this paper, the coupled inductor concept is applied to an interleaved MHz totem-pole CRM PFC converter with GaN devices. The coupled inductor in a CRM PFC converter can reduce switching frequency variation, help achieve ZVS, and reduce circulating energy, and can therefore improve the efficiency of the PFC converter. In addition, the balance technique is applied to help minimize CM noise. This paper introduces how to achieve balance with a coupled inductor, and a novel PCB-winding inductor design is provided. The PCB-winding coupled inductor has loss similar to a litz-wire inductor but requires much less labor to manufacture. |
Maximum Power Extraction From a Partially Shaded PV Array Using Shunt-Series Compensation | Under partially shaded conditions, the current through a photovoltaic (PV) string is limited to the current produced by its most shaded module, reducing the overall PV array output power. A current compensation-based distributed maximum power point tracking (DMPPT) scheme may typically be used to maximize power output under these conditions. However, because of nonuniform irradiation during partial shading, different modules across a given string, as well as across the various strings, experience different conditions. Yet, the module voltages in all the strings should add up to the same voltage because all strings are connected in parallel. This causes the individual modules in a string to readjust their voltages and operate away from their actual MPP. In other words, the DMPPT scheme using shunt current compensation alone is not effective. This paper describes a novel method comprising shunt (current) and series (voltage) compensation of modules and strings, respectively. Per this technique, a current-compensating converter is connected in shunt with each module, and a voltage-compensating converter is connected in series with each string. This facilitates each PV module to operate at its exact MPP and deliver maximum power. All the analytical, simulation, and experimental results of this study are included. |
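A small numerical sketch of the shunt-plus-series idea follows. It is only an illustration under simplifying assumptions (the string carries the largest module MPP current, shunt converters make up each module's deficit, and a series converter absorbs the string-to-bus voltage mismatch); the module data, sign conventions, and losses here are invented and do not come from the paper.

```python
import numpy as np

# MPP operating points of the modules in two parallel strings (illustrative values)
i_mpp = [np.array([8.0, 8.0, 5.0]),        # string 1: one shaded module
         np.array([8.0, 7.0, 7.0])]        # string 2
v_mpp = [np.array([30.0, 30.0, 28.0]),
         np.array([30.0, 29.5, 29.5])]

bus_voltage = 92.0                         # common voltage of the parallel bus

for s, (i_m, v_m) in enumerate(zip(i_mpp, v_mpp), start=1):
    string_current = i_m.max()                       # assumed string current
    shunt_injection = string_current - i_m           # per-module shunt compensation (A)
    series_voltage = bus_voltage - v_m.sum()         # per-string series compensation (V)
    power = float(np.sum(v_m * i_m))                 # every module held at its own MPP
    print(f"string {s}: shunt currents {shunt_injection}, "
          f"series voltage {series_voltage:+.1f} V, harvested {power:.0f} W")
```

The point of the exercise is that each module's MPP power appears in the harvested total, which is exactly what shunt compensation alone cannot guarantee when the strings must share one bus voltage.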
User interface development environment based on components: assets for mobility | In this paper we present our UI development environment based on components. The UI is considered a technical service of a business component, just like security or persistence. The dialog between UI and business components is managed by an interaction/coordination service that allows the reconfiguration of components without modifying them. A UI component merging service handles the dynamic assembly of corresponding UI components. |
A Feature Subset Selection Method based on Conditional Mutual Information and Ant Colony Optimization | Feature subset selection is one of the key problems in the area of pattern recognition and machine learning. It refers to the problem of selecting only those features that are useful in predicting a target concept, i.e., the class. Most data acquired from different sources are not screened for any specific task, e.g., classification, clustering, or anomaly detection. When such data are fed to a learning algorithm, its results deteriorate. The proposed method is a pure filter-based feature subset selection technique that incurs low computational cost and is highly efficient in terms of classification accuracy. Moreover, along with high accuracy, the proposed method requires fewer features in most cases. The proposed method addresses the issues of feature ranking and threshold value selection, and adaptively selects the number of features according to the worth of each individual feature in the dataset. Extensive experimentation is performed on a number of benchmark datasets with three well-known classification algorithms. Empirical results confirm the efficiency and effectiveness of the proposed method. |
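The filter part of such a method can be sketched with a greedy, CMIM-style selection rule based on (conditional) mutual information over discrete features; the paper's ant colony optimization search and adaptive threshold are not reproduced here, and the criterion below is a standard stand-in rather than the authors' exact formulation.

```python
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """I(X;Y) for discrete sequences, in nats."""
    x, y = np.asarray(x).tolist(), np.asarray(y).tolist()
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z) for discrete sequences."""
    x, y, z = np.asarray(x), np.asarray(y), np.asarray(z)
    return sum((z == zv).mean() * mutual_info(x[z == zv], y[z == zv])
               for zv in np.unique(z))

def select_features(X, y, k=3):
    """Greedy CMIM-style selection: pick the feature whose worst-case information
    about the class, conditioned on any already-selected feature, is largest."""
    X, selected = np.asarray(X), []
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            score = mutual_info(X[:, j], y) if not selected else \
                    min(cond_mutual_info(X[:, j], y, X[:, s]) for s in selected)
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

X = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 0, 0]])
y = np.array([0, 1, 0, 1, 0, 1])
print(select_features(X, y, k=2))   # feature 0 predicts y perfectly and is selected first
```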
A full-3D voxel-based dynamic obstacle detection for urban scenario using stereo vision | Autonomous Ground Vehicles designed for dynamic environments require a reliable perception of the real world in terms of obstacle presence, position, and speed. In this paper we present a flexible technique to build, in real time, a dense voxel-based map from a 3D point cloud that is able to: (1) discriminate between stationary and moving obstacles; (2) provide an approximation of each detected obstacle's absolute speed using the vehicle's egomotion computed through a visual odometry approach. The point cloud is first sampled into a full 3D voxel-based map to preserve the three-dimensional information, with the egomotion information making voxel creation computationally efficient; the voxels are then processed using a flood fill approach to segment them into a cluster structure; finally, using the egomotion information, the resulting clusters are labeled as stationary or moving obstacles and an estimate of their speed is provided. The algorithm runs in real time; it has been tested on one of VisLab's AGVs using a modified SGM-based stereo system as the 3D data source. |
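The voxelization and flood-fill clustering steps can be illustrated compactly as below; ego-motion compensation, the stationary/moving labelling, and speed estimation are omitted, and the voxel size and points are arbitrary.

```python
from collections import deque

def voxelize(points, size=0.2):
    """Map 3-D points to occupied voxel keys (integer grid indices)."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // size), int(y // size), int(z // size)))
    return occupied

def flood_fill_clusters(occupied):
    """Group occupied voxels into clusters using 26-connectivity (BFS flood fill)."""
    neighbours = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    unvisited, clusters = set(occupied), []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), {seed}
        while queue:
            cx, cy, cz = queue.popleft()
            for dx, dy, dz in neighbours:
                v = (cx + dx, cy + dy, cz + dz)
                if v in unvisited:
                    unvisited.remove(v)
                    cluster.add(v)
                    queue.append(v)
        clusters.append(cluster)
    return clusters

points = [(0.05, 0.05, 0.0), (0.15, 0.1, 0.0), (2.0, 2.0, 0.0), (2.1, 2.05, 0.1)]
print([len(c) for c in flood_fill_clusters(voxelize(points))])   # two separate clusters
```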
The anatomy of the medial part of the knee. | BACKGROUND
While the anatomy of the medial part of the knee has been described qualitatively, quantitative descriptions of the attachment sites of the main medial knee structures have not been reported. The purpose of the present study was to verify the qualitative anatomy of medial knee structures and to perform a quantitative evaluation of their anatomic attachment sites as well as their relationships to pertinent osseous landmarks.
METHODS
Dissections were performed and measurements were made for eight nonpaired fresh-frozen cadaveric knees with use of an electromagnetic three-dimensional tracking sensor system.
RESULTS
In addition to the medial epicondyle and the adductor tubercle, a third osseous prominence, the gastrocnemius tubercle, which corresponded to the attachment site of the medial gastrocnemius tendon, was identified. The average length of the superficial medial (tibial) collateral ligament was 94.8 mm. The superficial medial collateral ligament femoral attachment was 3.2 mm proximal and 4.8 mm posterior to the medial epicondyle. The superficial medial collateral ligament had two separate attachments on the tibia. The distal attachment of the superficial medial collateral ligament on the tibia was 61.2 mm distal to the knee joint. The deep medial collateral ligament consisted of meniscofemoral and meniscotibial portions. The posterior oblique ligament femoral attachment was 7.7 mm distal and 6.4 mm posterior to the adductor tubercle and 1.4 mm distal and 2.9 mm anterior to the gastrocnemius tubercle. The medial patellofemoral ligament attachment on the femur was 1.9 mm anterior and 3.8 mm distal to the adductor tubercle.
CONCLUSIONS
The medial knee ligament structures have a consistent attachment pattern. |
Oral L-arginine in patients with coronary artery disease on medical management. | BACKGROUND
Vascular nitric oxide (NO) bioavailability is reduced in patients with coronary artery disease (CAD). We investigated whether oral L-arginine, the substrate for NO synthesis, improves homeostatic functions of the vascular endothelium in patients maintained on appropriate medical therapy and thus might be useful as adjunctive therapy.
METHODS AND RESULTS
Thirty CAD patients (29 men; age, 67+/-8 years) on appropriate medical management were randomly assigned to L-arginine (9 g) or placebo daily for 1 month, with crossover to the alternate therapy after 1 month off therapy, in a double-blind study. Nitrogen oxides in serum (as an index of endothelial NO release), flow-mediated brachial artery dilation (as an index of vascular NO bioactivity), and serum cell adhesion molecules (as an index of NO-regulated markers of inflammation) were measured at the end of each treatment period. L-Arginine significantly increased arginine levels in plasma (130+/-53 versus 70+/-17 micromol/L, P<0.001) compared with placebo. However, there was no effect of L-arginine on nitrogen oxides (19.3+/-7.9 versus 18.6+/-6.7 micromol/L, P=0.546), on flow-mediated dilation of the brachial artery (11.9+/-6.3% versus 11.4+/-7.9%, P=0.742), or on the cell adhesion molecules E-selectin (47.8+/-15.2 versus 47.2+/-14.4 ng/mL, P=0.601), intercellular adhesion molecule-1 (250+/-57 versus 249+/-57 ng/mL, P=0.862), and vascular cell adhesion molecule-1 (567+/-124 versus 574+/-135 ng/mL, P=0.473).
CONCLUSIONS
Oral L-arginine therapy does not improve NO bioavailability in CAD patients on appropriate medical management and thus may not benefit this group of patients. |
Temporal trends in patient-reported angina at 1 year after percutaneous coronary revascularization in the stent era: a report from the National Heart, Lung, and Blood Institute-sponsored 1997-2006 dynamic registry. | BACKGROUND
Percutaneous coronary intervention (PCI) has witnessed rapid technological advancements, resulting in improved safety and effectiveness over time. Little, however, is known about the temporal impact on patient-reported symptoms and quality of life after PCI.
METHODS AND RESULTS
Temporal trends in post-PCI symptoms were analyzed using 8879 consecutive patients enrolled in the National Heart, Lung, and Blood Institute-sponsored Dynamic Registry (wave 1: 1997 [bare metal stents], wave 2: 1999 [uniform use of stents], wave 3: 2001 [brachytherapy], wave 4, 5: 2004, 2006 [drug eluting stents]). Patients undergoing PCI in the recent waves were older and more often reported comorbidities. However, fewer patients across the waves reported post-PCI angina at one year (wave 1 to 5: 24%, 23%, 18%, 20%, 20%; P(trend)<0.001). The lower risk of angina in recent waves was explained by patient characteristics including use of antianginal medications at discharge (relative risk [95% CI] for waves 2, 3, 4 versus 1: 1.0 [0.9 to 1.2], 0.9 [0.7 to 1.1], 1.0 [0.8 to 1.3], 0.9 [0.7 to 1.1]). Similar trend was seen in the average quality of life scores over time (adjusted mean score for waves 1 to 5: 6.2, 6.5, 6.6 and 6.6; P(trend)=0.01). Other factors associated with angina at 1 year included younger age, female gender, prior revascularization, need for repeat PCI, and hospitalization for myocardial infarction over 1 year.
CONCLUSIONS
Favorable temporal trends are seen in patient-reported symptoms after PCI in routine clinical practice. Specific subgroups, however, remain at risk for symptoms at 1 year and thus warrant closer attention. |
Update on botulinum toxin for facial aesthetics. | The use of botulinum toxin has revolutionized the treatment of facial lines, with an incomparable safety record over the past 14 years. The most commonly used injection sites are shown in Fig. 9. With the recent FDA approval of Botox for the treatment of glabellar lines, its use will likely increase dramatically. It is essential that practitioners have a detailed and specific knowledge of the facial and neck musculature to be injected to minimize untoward side effects, especially in the early days of new users' learning curves. The specifics of the dilutions and units used for the various commercial forms of botulinum toxin types A and B need to be fully understood and standardized, together with the potential for antigenicity associated with the higher protein load of type B. In addition, specific indications for the use of botulinum toxin as adjunctive therapy for specific facial surgical procedures (i.e., blepharoplasty, surgical brow lift, and laser resurfacing) will become better understood. Finally, even though the anatomy of the facial musculature is well described, individual differences in men and women, in different population groups, and in tissue qualities, such as turgor and elasticity [87], are important factors to be considered before undertaking botulinum toxin injections. It is likely that the use of specific measuring devices, such as digital imaging, will further help define the use of botulinum toxin for different muscle groups and facial aesthetic indications. |
Comparison of the Coding Efficiency of Video Coding Standards—Including High Efficiency Video Coding (HEVC) | The compression capability of several generations of video coding standards is compared by means of peak signal-to-noise ratio (PSNR) and subjective testing results. A unified approach is applied to the analysis of designs, including H.262/MPEG-2 Video, H.263, MPEG-4 Visual, H.264/MPEG-4 Advanced Video Coding (AVC), and High Efficiency Video Coding (HEVC). The results of subjective tests for WVGA and HD sequences indicate that HEVC encoders can achieve equivalent subjective reproduction quality as encoders that conform to H.264/MPEG-4 AVC when using approximately 50% less bit rate on average. The HEVC design is shown to be especially effective for low bit rates, high-resolution video content, and low-delay communication applications. The measured subjective improvement somewhat exceeds the improvement measured by the PSNR metric. |
Jet drive mechanisms in edge tones and organ pipes | Measurements of the phases of free jet waves relative to an acoustic excitation, and of the pattern and time phase of the sound pressure produced by the same jet impinging on an edge, provide a consistent model for Stage I frequencies of edge tones and of an organ pipe with identical geometry. Both systems are explained entirely in terms of volume displacement of air by the jet. During edge-tone oscillation, 180° of phase delay occur on the jet. Peak positive acoustic pressure on a given side of the edge occurs at the instant the jet profile crosses the edge and starts into that side. For the pipe, additional phase shifts occur that depend on the driving points for the jet current, the Q of the pipe, and the frequency of oscillation. Introduction of this additional phase shift yields an accurate prediction of the frequencies of a blown pipe and the blowing pressure at which mode jumps will occur. |
Time-based language models | We explore the relationship between time and relevance using TREC ad-hoc queries. A type of query is identified that favors very recent documents. We propose a time-based language model approach to retrieval for these queries. We show how time can be incorporated into both query-likelihood models and relevance models. These models were used for experiments comparing time-based language models to heuristic techniques for incorporating document recency in the ranking. Our results show that time-based models perform as well as or better than the best of the heuristic techniques. |
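To illustrate the general idea behind the abstract above, the following Python sketch combines a Dirichlet-smoothed query-likelihood score with an exponential recency prior on the document date. The toy corpus, the smoothing constant mu, and the decay rate are illustrative assumptions, not the experimental setup of the paper.

```python
import math
from collections import Counter

def query_likelihood(query, doc_tokens, collection_counts, collection_len, mu=2000.0):
    """Dirichlet-smoothed unigram log P(query | document)."""
    doc_counts = Counter(doc_tokens)
    doc_len = len(doc_tokens)
    score = 0.0
    for term in query:
        p_coll = (collection_counts[term] + 1) / (collection_len + len(collection_counts))
        p_doc = (doc_counts[term] + mu * p_coll) / (doc_len + mu)
        score += math.log(p_doc)
    return score

def time_based_score(query, doc, collection_counts, collection_len,
                     current_time, rate=0.01):
    """log P(q|d) plus the log of an exponential recency prior on the document date."""
    age = current_time - doc["time"]          # e.g. document age in days
    log_prior = math.log(rate) - rate * age   # log of rate * exp(-rate * age)
    return query_likelihood(query, doc["tokens"], collection_counts, collection_len) + log_prior

# Toy example with two documents of different ages (illustrative data only).
docs = [
    {"id": "d1", "tokens": "election results announced today".split(), "time": 998},
    {"id": "d2", "tokens": "election results from last decade".split(), "time": 640},
]
collection_counts = Counter(t for d in docs for t in d["tokens"])
collection_len = sum(collection_counts.values())
query = ["election", "results"]
ranked = sorted(docs, key=lambda d: time_based_score(query, d, collection_counts,
                                                     collection_len, current_time=1000),
                reverse=True)
print([d["id"] for d in ranked])  # the more recent document is ranked first
```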
A FRAMEWORK OF A MECHANICAL TRANSLATION BETWEEN JAPANESE AND ENGLISH BY ANALOGY PRINCIPLE | Problems inherent in current machine translation systems are reviewed and shown to reflect a contradiction built into the systems themselves. The present paper defines a model based on human language processing, in particular the use of analogical thinking. Machine translation systems developed so far have a kind of inherent contradiction in themselves. The more detailed a system becomes through additional improvements, the clearer the limits of its translation ability become. To break through this difficulty we have to think about the mechanism of human translation, and have to build a model based on the fundamental function of language processing in the human brain. The following is an attempt to do this based on the human ability to find analogies. |
Detecting Domain-Specific Ambiguities: An NLP Approach Based on Wikipedia Crawling and Word Embeddings | In the software process, unresolved natural language (NL) ambiguities in the early requirements phases may cause problems in later stages of development. Although methods exist to detect domain-independent ambiguities, ambiguities are also influenced by the domain-specific background of the stakeholders involved in the requirements process. In this paper, we aim to estimate the degree of ambiguity of typical computer science words (e.g., system, database, interface) when used in different application domains. To this end, we apply a natural language processing (NLP) approach based on Wikipedia crawling and word embeddings, a novel technique to represent the meaning of words through compact numerical vectors. Our preliminary experiments, performed on five different domains, show promising results. The approach allows an estimate of the variation of meaning of the computer science words when used in different domains. Further validation of the method will indicate the words that need to be carefully defined in advance by the requirements analyst to avoid misunderstandings when editing documents and dealing with experts in the considered domains. |
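As a rough sketch of how word embeddings could flag domain-specific ambiguity, the Python snippet below trains a separate gensim Word2Vec model per domain corpus and scores a word by how little its nearest-neighbour sets overlap across domains. The toy corpora, the neighbour-set comparison, and all parameter values are illustrative assumptions rather than the authors' exact pipeline.

```python
from gensim.models import Word2Vec

def neighbour_set(model, word, topn=20):
    """Top-n nearest neighbours of a word in one domain-specific embedding space."""
    if word not in model.wv:
        return set()
    return {w for w, _ in model.wv.most_similar(word, topn=topn)}

def ambiguity_score(word, domain_models, topn=20):
    """1 - mean pairwise Jaccard overlap of the word's neighbour sets across domains.

    Higher values suggest the word's meaning shifts more between domains.
    """
    sets = [neighbour_set(m, word, topn) for m in domain_models]
    overlaps = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            union = sets[i] | sets[j]
            overlaps.append(len(sets[i] & sets[j]) / len(union) if union else 0.0)
    return 1.0 - (sum(overlaps) / len(overlaps) if overlaps else 0.0)

# Tiny corpora standing in for Wikipedia crawls of two domains (illustrative only).
medical = [["patient", "record", "system", "stores", "clinical", "data"],
           ["the", "hospital", "database", "system", "tracks", "patients"]]
automotive = [["the", "braking", "system", "controls", "the", "wheels"],
              ["engine", "control", "system", "reads", "sensor", "data"]]

models = [Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50, seed=1)
          for corpus in (medical, automotive)]
print("ambiguity('system') =", round(ambiguity_score("system", models, topn=5), 3))
```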
Effects of sleep deprivation on performance and EEG spectral analysis in young adults. | Nine totally sleep deprived (TSD) and nine control subjects were evaluated with a complete battery for attention and memory performance. Frontal and temporal EEGs (5 min, eyes closed) were also recorded before and after the night. TSD subjects exhibited three performance deficits: learning the Pursuit Rotor Task, implicit recall of paired words, and distractibility on the Brown-Peterson Test. Relative to evening recordings, control subjects showed decreased morning absolute powers in all electrodes for all frequencies except for Frontal delta; TSD subjects showed increased Frontal and Temporal theta and Frontal beta. These results show that motor procedural, implicit memory, and working memory are sensitive to one night of TSD, and that Frontal and Temporal theta spectral power seems to discriminate a night with sleep from a night without. |
Method construction - a core approach to organizational engineering | Dominated by the behavioral science approach for a long time, information systems research increasingly acknowledges design science as a complementary approach. While information systems instantiations, and to a lesser extent constructs and models, have been discussed quite comprehensively, the design of methods is rarely addressed. But methods appear to be of utmost importance, particularly for organizational engineering. This paper justifies method construction as a core approach to organizational engineering. Based on a discussion of fundamental scientific positions in general and approaches to information systems research in particular, appropriate conceptualizations of 'method' and 'method construction' are presented. These conceptualizations are then discussed regarding their capability of supporting organizational engineering. Our analysis is located on a meta level: Method construction is conceptualized and integrated from a large number of references. Method instantiations and method engineering approaches, however, are only referenced and not described in detail. |
Psychometric development of the retinopathy treatment satisfaction questionnaire (RetTSQ). | Objectives were to evaluate the psychometric properties and to determine optimal scoring of the retinopathy treatment satisfaction questionnaire (RetTSQ) in a cross-sectional study of 207 German patients with diabetic retinopathy and a wide range of treatment experience. Forty patients (19%) also had clinically significant macular oedema. Principal components analysis was used to identify factor structures and Cronbach's alpha to assess internal consistency reliabilities. Two highly reliable subscales represented negative versus positive aspects of treatment (both alpha = 0.85). A highly reliable total score can be calculated (alpha = 0.90). Construct validity was examined by testing expected relationships of RetTSQ scores with visual impairment, stage of diabetic retinopathy, additional impact of macular oedema, SF-12 scores and scores of the RetDQoL measure of quality of life in diabetic retinopathy. Worse impairment, worse diabetic retinopathy and macular oedema were associated with less treatment satisfaction. RetTSQ scores correlated moderately with SF-12 scores (r: 0.33-0.53, p < 0.001) and RetDQoL scores (r: 0.43-0.51, p < 0.001). Answers to an open-ended question indicated no need for additional items. Repeating the analyses in a subsample with experience of more intense treatment showed very similar results. It can be concluded that the RetTSQ is valid and reliable for people with diabetic retinopathy with or without macular oedema who have experienced different treatments. |
Low-memory GEMM-based convolution algorithms for deep neural networks | Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temporary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W requires O(K²CHW) additional space using the classical im2col approach. More recently, memory-efficient approaches requiring just O(KCHW) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW) and O(KW) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our low-memory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality, which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low-memory algorithms often outperform the best patch-building algorithms using multiple threads. |
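For context on the memory costs discussed above, here is a minimal NumPy sketch of the classical im2col + GEMM convolution that the paper uses as its baseline; the patch matrix it builds has C*K*K rows per output pixel, which is the O(K²CHW) overhead. Stride 1, no padding, and the toy shapes are illustrative assumptions; the paper's low-memory algorithms are not shown.

```python
import numpy as np

def im2col_conv(input_chw, kernels_mckk):
    """Classical im2col + GEMM convolution (stride 1, no padding).

    input_chw:    (C, H, W) input feature map
    kernels_mckk: (M, C, K, K) convolution kernels
    Returns an (M, H-K+1, W-K+1) output; the patch matrix uses O(K^2*C*H*W) extra space.
    """
    C, H, W = input_chw.shape
    M, _, K, _ = kernels_mckk.shape
    out_h, out_w = H - K + 1, W - K + 1

    # Build the patch ("column") matrix: one column per output pixel.
    cols = np.empty((C * K * K, out_h * out_w), dtype=input_chw.dtype)
    idx = 0
    for y in range(out_h):
        for x in range(out_w):
            patch = input_chw[:, y:y + K, x:x + K]   # (C, K, K) window
            cols[:, idx] = patch.ravel()
            idx += 1

    # One big GEMM: (M, C*K*K) x (C*K*K, out_h*out_w)
    out = kernels_mckk.reshape(M, -1) @ cols
    return out.reshape(M, out_h, out_w)

# Tiny example to show the shapes involved.
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))
w = rng.standard_normal((4, 3, 3, 3))
y = im2col_conv(x, w)
print(y.shape)  # (4, 6, 6)
```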
A 0.1–2-GHz Quadrature Correction Loop for Digital Multiphase Clock Generation Circuits in 130-nm CMOS | A 100-MHz–2-GHz closed-loop analog in-phase/quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop-type architecture for quadrature error correction. The circuit corrects the phase error to within 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-μm mixed-mode CMOS with an active area of 102 μm × 95 μm. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified. |
Adaptive LQR stabilization control of reaction wheel for satellite systems | Spacecraft used for stereoscopic mapping, imaging, and telecommunication applications require fine attitude and stabilization control, which plays an important role in high-precision pointing and accurate stabilization. The conventional techniques for attitude and stabilization control are thrusters, reaction wheels, control moment gyroscopes (CMG) and magnetic torquers. Since reaction wheels generate relatively small torques, they provide very fine stabilization and attitude control. Although the conventional PID framework solves many stabilization problems, it is reported that many PID feedback loops are poorly tuned. In this paper, a model reference adaptive LQR control for the reaction wheel stabilization problem is implemented. The tracking performance and disturbance rejection capability of the proposed controller are found to give smooth motion after abnormal disruptions. |
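As a hedged illustration of the LQR building block mentioned above, the SciPy sketch below computes a continuous-time LQR gain for a toy single-axis rigid-body model standing in for a reaction-wheel-controlled spacecraft axis. The plant matrices, weights, and inertia value are assumptions for illustration; the model-reference adaptive layer from the paper is not included.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Continuous-time LQR: solve the algebraic Riccati equation, return K = R^-1 B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy single-axis rigid-body model: state = [attitude angle, angular rate],
# input = reaction-wheel torque (illustrative values, not the paper's plant).
J = 0.05                                  # axis inertia [kg m^2] (assumed)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / J]])
Q = np.diag([100.0, 1.0])                 # penalize pointing error more than rate
R = np.array([[0.1]])                     # penalize control torque

K = lqr_gain(A, B, Q, R)
x = np.array([0.1, 0.0])                  # 0.1 rad initial pointing error
u = -K @ x                                # stabilizing state-feedback torque command
print("K =", K, "u =", u)
```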
Changes in connective tissue in patients with pelvic organ prolapse—a review of the current literature | Little is known about the pathophysiology of pelvic organ prolapse (POP). In 1996, Jackson presented a hypothesis on pelvic floor connective tissue that tried to explain the development of POP on a molecular level. The objective of this review is to test the hypothesis against recent literature. The method used was a review of literature. The association between POP and connective tissue metabolism is well established. However, the causality of this association is unclear. The characteristics of the pelvic floor connective tissue of POP patients relate to tissue repair. To resolve the question of cause and effect, the role of fibroblasts in producing the extracellular matrix should be clarified. With these data, the use of autologous or allogenic stem cells in the treatment of POP may come in sight. Recent literature supports the hypothesis of Jackson but does not resolve long-standing questions on the aetiology of POP. |
Masquerade Detection Using Enriched Command Lines | A masquerade attack, in which one user impersonates another, is among the most serious forms of computer abuse, largely because such attacks are often mounted by insiders, and can be very difficult to detect. Automatic discovery of masqueraders is sometimes undertaken by detecting significant departures from normal user behavior, as represented by user profiles based on users’ command histories. A series of experiments performed by Schonlau et al. [12] achieved moderate success in masquerade detection based on a data set comprised of truncated command lines, i.e., single commands, stripped of any accompanying flags, arguments or elements of shell grammar such as pipes or semi-colons. Using the same data, Maxion and Townsend [8] improved on the Schonlau et al. results by 56%, raising the detection rate from 39.4% to 61.5% at false-alarm rates near 1%. The present paper extends this work by testing the hypothesis that a limitation of these approaches is the use of truncated command-line data, as opposed to command lines enriched with flags, shell grammar, arguments and information about aliases. Enriched command lines were found to facilitate correct detection at the 82% level, far exceeding previous results, with a corresponding 30% reduction in the overall cost of errors, and only a small increase in false alarms. Descriptions of pathological cases illustrate strengths and limitations of both the data and the detection algorithm. |
On the Order Fill Rate in a Multi-Item, Base-Stock Inventory System | A customer order to a multi-item inventory system typically consists of several different items in different amounts. The probability of satisfying an arbitrary demand within a prespecified time window, termed the order fill rate, is an important measure of customer satisfaction in industry. This measure, however, has received little attention in the inventory literature, partly because its evaluation is considered a hard problem. In this paper, we study this performance measure for a base-stock system in which the demand process forms a multivariate compound Poisson process and the replenishment leadtimes are constant. We show that the order fill rate can be computed through a series of convolutions of one-dimensional compound Poisson distributions and the batch-size distributions. This procedure makes the exact calculation faster and much more tractable. We also develop simpler bounds to estimate the order fill rate. These bounds require only partial order-based information or merely the item-based information. Finally, we investigate the impact of the standard independent demand assumption when the demand is actually correlated across items. |
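A quick way to get intuition for the order fill rate is Monte Carlo simulation rather than the exact convolution approach described above. The sketch below assumes a base-stock policy with constant lead time, an illustrative compound Poisson order process, and immediate (zero time window) fills; all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_order(p_item=(0.7, 0.5, 0.3), max_qty=3):
    """Sample one customer order: each item is requested independently with some
    probability and, if requested, in a uniform quantity (illustrative order model)."""
    qty = np.array([rng.integers(1, max_qty + 1) if rng.random() < p else 0
                    for p in p_item])
    return qty if qty.any() else sample_order(p_item, max_qty)

def lead_time_demand(order_rate, lead_time, n_items=3):
    """Compound-Poisson demand over one replenishment lead time:
    a Poisson number of orders, each adding its item quantities."""
    n_orders = rng.poisson(order_rate * lead_time)
    demand = np.zeros(n_items, dtype=int)
    for _ in range(n_orders):
        demand += sample_order()
    return demand

def order_fill_rate(base_stock, order_rate=2.0, lead_time=1.0, n_trials=20000):
    """Monte Carlo estimate of P(an arriving order is filled immediately):
    by PASTA the order sees stationary lead-time demand, and it is filled if
    every requested quantity fits within the remaining on-hand stock."""
    filled = 0
    for _ in range(n_trials):
        on_hand = np.maximum(base_stock - lead_time_demand(order_rate, lead_time), 0)
        if np.all(sample_order() <= on_hand):
            filled += 1
    return filled / n_trials

print(order_fill_rate(base_stock=np.array([8, 6, 4])))
```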
Blood pressure is reduced and insulin sensitivity increased in glucose-intolerant, hypertensive subjects after 15 days of consuming high-polyphenol dark chocolate. | Flavanols from chocolate appear to increase nitric oxide bioavailability, protect vascular endothelium, and decrease cardiovascular disease (CVD) risk factors. We sought to test the effect of flavanol-rich dark chocolate (FRDC) on endothelial function, insulin sensitivity, beta-cell function, and blood pressure (BP) in hypertensive patients with impaired glucose tolerance (IGT). After a run-in phase, 19 hypertensives with IGT (11 males, 8 females; 44.8 +/- 8.0 y) were randomized to receive isocalorically either FRDC or flavanol-free white chocolate (FFWC) at 100 g/d for 15 d. After a wash-out period, patients were switched to the other treatment. Clinical and 24-h ambulatory BP was determined by sphygmometry and oscillometry, respectively, flow-mediated dilation (FMD), oral glucose tolerance test, serum cholesterol and C-reactive protein, and plasma homocysteine were evaluated after each treatment phase. FRDC but not FFWC ingestion decreased insulin resistance (homeostasis model assessment of insulin resistance; P < 0.0001) and increased insulin sensitivity (quantitative insulin sensitivity check index, insulin sensitivity index (ISI), ISI(0); P < 0.05) and beta-cell function (corrected insulin response CIR(120); P = 0.035). Systolic (S) and diastolic (D) BP decreased (P < 0.0001) after FRDC (SBP, -3.82 +/- 2.40 mm Hg; DBP, -3.92 +/- 1.98 mm Hg; 24-h SBP, -4.52 +/- 3.94 mm Hg; 24-h DBP, -4.17 +/- 3.29 mm Hg) but not after FFWC. Further, FRDC increased FMD (P < 0.0001) and decreased total cholesterol (-6.5%; P < 0.0001), and LDL cholesterol (-7.5%; P < 0.0001). Changes in insulin sensitivity (Delta ISI - Delta FMD: r = 0.510, P = 0.001; Delta QUICKI - Delta FMD: r = 0.502, P = 0.001) and beta-cell function (Delta CIR(120) - Delta FMD: r = 0.400, P = 0.012) were directly correlated with increases in FMD and inversely correlated with decreases in BP (Delta ISI - Delta 24-h SBP: r = -0.368, P = 0.022; Delta ISI - Delta 24-h DBP r = -0.384, P = 0.017). Thus, FRDC ameliorated insulin sensitivity and beta-cell function, decreased BP, and increased FMD in IGT hypertensive patients. These findings suggest flavanol-rich, low-energy cocoa food products may have a positive impact on CVD risk factors. |
Dosage effects of diabetes self-management education for Mexican Americans: the Starr County Border Health Initiative. | OBJECTIVE
The objective of this study was to compare two diabetes self-management interventions designed for Mexican Americans: "extended" (24 h of education, 28 h of support groups) and "compressed" (16 h of education, 6 h of support groups). Both interventions were culturally competent regarding language, diet, social emphasis, family participation, and incorporation of cultural beliefs.
RESEARCH DESIGN AND METHODS
We recruited 216 persons between 35 and 70 years of age diagnosed with type 2 diabetes ≥1 year. Intervention groups of eight participants and eight support persons were randomly assigned to the compressed or extended conditions. The interventions differed in total number of contact hours over the year-long intervention period, with the major difference being the number of support group sessions held. The same information provided in the educational sessions of the extended intervention was compressed into fewer sessions, thus providing more information during each group meeting.
RESULTS
The interventions were not statistically different in reducing HbA(1c); however, both were effective. A "dosage effect" of attendance was detected with the largest HbA(1c) reductions achieved by those who attended more of the extended intervention. For individuals who attended ≥50% of the intervention, baseline to 12-month HbA(1c) change was -0.6 percentage points for the compressed group and -1.7 percentage points for the extended group.
CONCLUSIONS
Both culturally competent diabetes self-management education interventions were effective in promoting improved metabolic control and diabetes knowledge. A dosage effect was evident; attending more sessions resulted in greater improvements in metabolic control. |
Horse-race model simulations of the stop-signal procedure. | In the stop-signal paradigm, subjects perform a standard two-choice reaction task in which, occasionally and unpredictably, a stop-signal is presented requiring the inhibition of the response to the choice signal. The stop-signal paradigm has been successfully applied to assess the ability to inhibit under a wide range of experimental conditions and in various populations. The current study presents a set of evidence-based guidelines for using the stop-signal paradigm. The evidence was derived from a series of simulations aimed at (a) examining the effects of experimental design features on inhibition indices, and (b) testing the assumptions of the horse-race model that underlies the stop-signal paradigm. The simulations indicate that, under most conditions, the latency, but not variability, of response inhibition can be reliably estimated. |
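The following Python sketch simulates the independent horse-race model underlying the stop-signal paradigm and recovers the stop-signal reaction time (SSRT) with the standard integration method. The normal distributions and parameter values are illustrative assumptions, not the simulation settings of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_stop_signal(n_stop=4000, n_go=12000, ssd=250.0,
                         go_mu=500.0, go_sigma=100.0,
                         ssrt_mu=200.0, ssrt_sigma=30.0):
    """Independent horse-race model: on a stop trial the response is inhibited
    whenever the stop runner (SSD + SSRT) finishes before the go runner."""
    go_rt_no_signal = rng.normal(go_mu, go_sigma, n_go)       # observed go-trial RTs
    go_rt_stop_trials = rng.normal(go_mu, go_sigma, n_stop)   # latent go finishing times
    ssrt = rng.normal(ssrt_mu, ssrt_sigma, n_stop)            # latent stop latencies
    responded = go_rt_stop_trials < ssd + ssrt                # go runner wins the race

    # Integration-method SSRT estimate: the go-RT quantile at p(respond | signal),
    # minus the stop-signal delay.
    p_respond = responded.mean()
    ssrt_hat = np.quantile(go_rt_no_signal, p_respond) - ssd
    return p_respond, ssrt_hat

p_resp, ssrt_hat = simulate_stop_signal()
print(f"p(respond | stop) = {p_resp:.2f}, estimated SSRT = {ssrt_hat:.0f} ms (true mean 200 ms)")
```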
Remembering the earthquake: direct experience vs. hearing the news. | Three groups of informants--two in California, one in Atlanta--recalled their experiences of the 1989 Loma Prieta earthquake shortly after the event and again 1.5 years later. The Californians' recalls of their own earthquake experiences were virtually perfect. Even their recalls of hearing the news of an earthquake-related event were very good: much higher than Atlantan recalls of hearing about the quake itself. Atlantans who had relatives in the affected area remembered significantly more than those who did not. These data show that personal involvement in the quake led to greatly improved recall, but do not show why. Many Californian informants reported low levels of stress/arousal during the event; arousal ratings were not significantly correlated with recall. The authors suggest that repeated narrative rehearsals may have played an important role. |
Gas Sensors Based on Conducting Polymers | The gas sensors fabricated by using conducting polymers such as polyaniline (PAni), polypyrrole (PPy) and poly (3,4-ethylenedioxythiophene) (PEDOT) as the active layers have been reviewed. This review discusses the sensing mechanism and configurations of the sensors. The factors that affect the performances of the gas sensors are also addressed. The disadvantages of the sensors and a brief prospect in this research field are discussed at the end of the review. |
A Biographical Précis | Nicholas Rescher is University Professor of Philosophy at the University of Pittsburgh, where he also serves as Research Professor in the Center for the Philosophy of Science. He has published some two hundred articles in scholarly journals, has contributed to various encyclopedias and reference works, and has written over thirty books in the areas of epistemology, metaphysics, logic, the philosophy of science, social philosophy, and the history of philosophy. His characteristic method in philosophical inquiry consists in a distinctive fusion of historical research with the techniques of logical and conceptual analysis. |
Software Pipelined Execution of Stream Programs on GPUs | The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU. |
Facial nerve preservation with preoperative identification and intraoperative monitoring in large vestibular schwannoma surgery | Microsurgery is an option of choice for large vestibular schwannomas (VSs). Anatomical and functional preservation of the facial nerve (FN) is still a challenge in these surgeries. FNs are often displaced and morphologically changed by large VSs. Preoperative identification of the FN with magnetic resonance (MR) diffusion tensor tracking (DTT) and intraoperative identification with facial electromyography (EMG) may be desirable for improving the functional results of FN preservation. In this retrospective study, eight consecutive cases with large VS (≥30 mm in maximal extrameatal diameter) were reviewed. FN DTT was performed in each case preoperatively. All the cases underwent microsurgical resection of the tumor with intraoperative FN EMG monitoring. Correctness of the FN location predicted by DTT was verified by the surgeon’s inspection. Postoperative FN function of each patient was followed up. Preoperative identification of the FN was possible in 7 of 8 (87.5 %) cases. The FN location predicted by preoperative DTT agreed with the surgical finding in all 7 cases. FN EMG was helpful for locating and protecting the FN. Total resection was achieved in 7 of 8 (87.5 %). All FNs were anatomically preserved. All cases had excellent facial nerve function (House–Brackmann Grade I–II). FN DTT is a powerful technique for preoperative identification of the FN in large VS cases. Continuous intraoperative FN EMG monitoring contributes to locating and protecting FNs. Radical resection of large VSs as well as a favorable postoperative FN outcome is achievable with the application of these techniques. |
Parameterization of Three-Phase Electric Machine Models for EMI Simulation | A systematic and practical method to parameterize three-phase electric machine models for electromagnetic interference (EMI) simulation is presented. The proposed behavioral model consists of multiple sections of linear RLC circuits and is intended for time-domain simulation with inverters and other power electronic circuits found in typical motor drive systems. The proposed parameterization method uses a differential-mode (DM) and a common-mode (CM) impedance measurement of the machine and takes advantage of the separation among different parallel and series resonant frequencies of each impedance to determine the parameters of each stage in a noniterative manner. The proposed method can also be applied to model three-phase cables and transformers. |
Let Your Photos Talk: Generating Narrative Paragraph for Photo Stream via Bidirectional Attention Recurrent Neural Networks | Automatic generation of natural language description for individual images (a.k.a. image captioning) has attracted extensive research attention. In this paper, we take one step further to investigate the generation of a paragraph to describe a photo stream for the purpose of storytelling. This task is even more challenging than individual image description due to the difficulty in modeling the large visual variance in an ordered photo collection and in preserving the long-term language coherence among multiple sentences. To deal with these challenges, we formulate the task as a sequence-to-sequence learning problem and propose a novel joint learning model by leveraging the semantic coherence in a photo stream. Specifically, to reduce visual variance, we learn a semantic space by jointly embedding each photo with its corresponding contextual sentence, so that the semantically related photos and their correlations are discovered. Then, to preserve language coherence in the paragraph, we learn a novel Bidirectional Attention-based Recurrent Neural Network (BARNN) model, which can attend on the discovered semantic relation to produce a sentence sequence and maintain its consistency with the photo stream. We integrate the two-step learning components into one single optimization formulation and train the network in an end-to-end manner. Experiments on three widely-used datasets (NYC/Disney/SIND) show that the proposed approach outperforms state-of-the-art methods with large margins for both retrieval and paragraph generation tasks. We also show the subjective preference of the machine-generated stories by the proposed approach over the baselines through a user study with 40 human subjects. |
Real-time super-resolution Sound Source Localization for robots | Sound Source Localization (SSL) is an essential function for robot audition and yields the location and number of sound sources, which are utilized for post-processes such as sound source separation. SSL for a robot in a real environment mainly requires noise-robustness, high resolution and real-time processing. A technique using microphone array processing, that is, Multiple Signal Classification based on Standard EigenValue Decomposition (SEVD-MUSIC), is commonly used for localization. We improved its robustness against high-power noise by incorporating Generalized EigenValue Decomposition (GEVD). However, GEVD-based MUSIC (GEVD-MUSIC) has mainly two issues: 1) the resolution of pre-measured Transfer Functions (TFs) determines the resolution of SSL, 2) its computational cost is too expensive for real-time processing. For the first issue, we propose a TF interpolation method integrating time-domain-based and frequency-domain-based interpolation. The interpolation achieves super-resolution SSL, whose resolution is higher than that of the pre-measured TFs. For the second issue, we propose two methods, MUSIC based on Generalized Singular Value Decomposition (GSVD-MUSIC), and Hierarchical SSL (H-SSL). GSVD-MUSIC drastically reduces the computational cost while maintaining noise-robustness in localization. H-SSL also reduces the computational cost by introducing a hierarchical search algorithm instead of using greedy search in localization. These techniques are integrated into an SSL system using a robot-embedded microphone array. The experimental results showed that the proposed interpolation achieved approximately 1 degree resolution although only TFs at 30 degree intervals were available; GSVD-MUSIC required 46.4% and 40.6% of the computational cost of SEVD-MUSIC and GEVD-MUSIC, respectively; and H-SSL reduced the computational cost of localizing a single sound source by 59.2%. |
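For reference, the NumPy sketch below implements the narrowband SEVD-MUSIC baseline that the abstract starts from: it eigendecomposes the spatial correlation matrix and scans candidate steering vectors against the noise subspace for a uniform linear array. The array geometry, source angle, and noise level are illustrative assumptions; the GEVD/GSVD variants, TF interpolation, and hierarchical search are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def steering_vector(angle_deg, n_mics, spacing, wavelength):
    """Far-field steering vector of a uniform linear array."""
    k = 2 * np.pi / wavelength
    delays = np.arange(n_mics) * spacing * np.sin(np.radians(angle_deg))
    return np.exp(-1j * k * delays)

def music_spectrum(snapshots, n_sources, angles, spacing, wavelength):
    """SEVD-MUSIC pseudospectrum: project candidate steering vectors onto the
    noise subspace of the spatial correlation matrix."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    noise_subspace = eigvecs[:, :-n_sources]      # vectors of the smallest eigenvalues
    spectrum = []
    for ang in angles:
        a = steering_vector(ang, R.shape[0], spacing, wavelength)
        spectrum.append(1.0 / np.linalg.norm(noise_subspace.conj().T @ a) ** 2)
    return np.array(spectrum)

# Simulate one source at +20 degrees hitting an 8-mic array (illustrative setup).
n_mics, spacing, wavelength, true_angle = 8, 0.04, 0.08, 20.0
a_true = steering_vector(true_angle, n_mics, spacing, wavelength)
signal = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.1 * (rng.standard_normal((n_mics, 200)) + 1j * rng.standard_normal((n_mics, 200)))
snapshots = np.outer(a_true, signal) + noise

angles = np.arange(-90, 91, 1)
spec = music_spectrum(snapshots, n_sources=1, angles=angles,
                      spacing=spacing, wavelength=wavelength)
print("estimated DOA:", angles[np.argmax(spec)], "degrees")
```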
Using representation learning and out-of-domain data for a paralinguistic speech task | In this work, we study the paralinguistic speech task of eating condition classification and present our submitted classification system for the INTERSPEECH 2015 Computational Paralinguistics challenge. We build upon a deep learning language identification system, which we repurpose for general audio sequence classification. The main idea is that we train local convolutional neural network classifiers that automatically learn representations on smaller windows of the full sequence’s spectrum and to aggregate multiple local classifications towards a full sequence classification. A particular challenge of the task is training data scarcity and the resulting overfitting of neural network methods, which we tackle with dropout, synthetic data augmentation and transfer learning with out-of-domain data from a language identification task. Our final submitted system achieved an UAR score of 75.9% for 7-way eating condition classification, which is a relative improvement of 15% over the baseline. |
A study on threat model for federated identities in federated identity management system | Federated Identity Management (FIM) based on standards allows and facilitates participating federated organizations to share users' identity attributes, facilitate authentication, and grant or deny service access requests. Using the single sign-on facility, users authenticate only once to their home identity provider and are then logged in to access successive service providers within the federation. Identity theft, misuse of user identity information via the single sign-on facility at identity providers and service providers, and the trustworthiness of subjects, identity providers, and service providers are active concerns in federated identity management systems. In addition, we explore trusted computing technology, which covers Trusted Platform Module security features such as Trusted Platform Module Identity, Integrity Measurement and Key certification, as well as Trusted Network Connect. In this paper, we present a conceptual threat model for inter-domain web single sign-on in a federated identity management system. For this, we define identity theft, misuse of identity information, and trust relationship scenarios, and in the end we discuss how the use of trusted computing technology can effectively resolve identity theft, misuse of identity information, and trust relationship concerns in a federated identity management system. |
Molecular de-novo design through deep reinforcement learning | This work introduces a method to tune a sequence-based generative model for molecular de novo design that, through augmented episodic likelihood, can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model or the activity prediction model. |
I/Q-channel regeneration in 5-port junction based direct receivers | This paper presents a new I/Q regeneration technique for direct receivers which employ quadrature modulation. The technique is based on the property that the I/Q components of the received signal are uncorrelated, and is applicable to any quadrature modulation. A K-band 5-port junction based direct receiver prototype has been developed which supports a data rate of 89.472 Mbps with QPSK modulation. The bit error rate performance has been simulated and measured, and the results are presented. |
The application of mutual information-based feature selection and fuzzy LS-SVM-based classifier in motion classification | This paper presents an effective mutual information-based feature selection approach for the EMG-based motion classification task. The wavelet packet transform (WPT) is exploited to decompose the four-class motion EMG signals into successive, non-overlapping sub-bands. The energy of each sub-band is adopted to construct the initial full feature set. To reduce the computational complexity, mutual information (MI) theory is utilized to obtain a reduced feature set without compromising classification accuracy. Compared with extensively used feature reduction methods such as principal component analysis (PCA), sequential forward selection (SFS) and backward elimination (BE), the comparison experiments demonstrate its superiority in terms of computation time and classification accuracy. The proposed strategy of feature extraction and reduction is a filter-based algorithm that is independent of the classifier design. Considering that classification performance will vary with different classifiers, we compare fuzzy least squares support vector machines (LS-SVMs) with the conventional, widely used neural network classifier. Our further experiments show that the combination of MI-based feature selection and SVM techniques outperforms other commonly used combinations, for example, PCA and NN. The experimental results show that the diverse motions can be identified with high accuracy by the combination of MI-based feature selection and SVM techniques. Compared with the combination of PCA-based feature selection and the classical neural network classifier, the superior performance of the proposed classification scheme illustrates the potential of SVM techniques combined with WPT and MI in EMG motion classification. |
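Below is a minimal end-to-end sketch of the filter-style scheme described above, using PyWavelets for wavelet-packet energy features, scikit-learn's mutual_info_classif for MI-based feature ranking, and a standard RBF SVM in place of the paper's fuzzy LS-SVM. The synthetic four-class signals and all parameter choices are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def wpt_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet-packet node (frequency-ordered sub-bands)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    return np.array([np.sum(node.data ** 2) for node in wp.get_level(level, order="freq")])

def synthetic_emg(label, n_samples=512):
    """Toy stand-in for an EMG channel: modulated noise whose dominant band
    depends on the motion class (illustrative data only)."""
    t = np.arange(n_samples)
    carrier = np.sin(2 * np.pi * (0.05 + 0.04 * label) * t)
    return carrier * rng.standard_normal(n_samples) + 0.3 * rng.standard_normal(n_samples)

# Build a small 4-class dataset of WPT energy features (40 samples per class).
X = np.array([wpt_energy_features(synthetic_emg(lbl)) for lbl in range(4) for _ in range(40)])
y = np.repeat(np.arange(4), 40)

# Filter-style MI feature selection followed by an SVM classifier.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(mutual_info_classif, k=4),
                    SVC(kernel="rbf", C=10.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```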
Applications of transcranial direct current stimulation for understanding brain function | In recent years there has been an exponential rise in the number of studies employing transcranial direct current stimulation (tDCS) as a means of gaining a systems-level understanding of the cortical substrates underlying behaviour. These advances have allowed inferences to be made regarding the neural operations that shape perception, cognition, and action. Here we summarise how tDCS works, and show how research using this technique is expanding our understanding of the neural basis of cognitive and motor training. We also explain how oscillatory tDCS can elucidate the role of fluctuations in neural activity, in both frequency and phase, in perception, learning, and memory. Finally, we highlight some key methodological issues for tDCS and suggest how these can be addressed. |
An Introduction to Physiological Player Metrics for Evaluating Games | Do you remember insult sword fighting in Monkey Island? The moment when you got off the elevator in the fourth mission of Call of Duty: Modern Warfare 2? Your romantic love affair with Leliana or Alistair in Dragon Age? Dancing as Madison for Paco in his nightclub in Heavy Rain? Climbing and fighting Cronos in God of War 3? Some of the most memorable moments from successful video games have a strong emotional impact on us. It is only natural that game designers and user researchers are seeking methods to better understand the positive and negative emotions that we feel when we are playing games. While game metrics provide excellent methods and techniques to infer behavior from the interaction of the player in the virtual game world, they cannot infer or see emotional signals of a player. Emotional signals are observable changes in the state of the human player, such as facial expressions, body posture, or physiological changes in the player’s body. The human eye can observe facial expression, gestures or human sounds that could tell us how a player is feeling, but covert physiological changes are only revealed to us when using sensor equipment, such as |
Generalized second price auction with probabilistic broad match | Generalized Second Price (GSP) auctions are widely used by search engines today to sell their ad slots. Most search engines have supported the broad match between queries and bid keywords when executing the GSP auctions, however, it has been revealed that the GSP auction with the standard broad-match mechanism they are currently using (denoted as SBM-GSP) has several theoretical drawbacks (e.g., its theoretical properties are known only for the single-slot case and full-information setting, and even in this simple setting, the corresponding worst-case social welfare can be rather bad). To address this issue, we propose a novel broad-match mechanism, which we call the Probabilistic Broad-Match (PBM) mechanism. Different from SBM that puts together the ads bidding on all the keywords matched to a given query for the GSP auction, the GSP with PBM (denoted as PBM-GSP) randomly samples a keyword according to a predefined probability distribution and only runs the GSP auction for the ads bidding on this sampled keyword. We perform a comprehensive study on the theoretical properties of the PBM-GSP. Specifically, we study its social welfare in the worst equilibrium, in both full-information and Bayesian settings. The results show that PBM-GSP can generate larger welfare than SBM-GSP under mild conditions. Furthermore, we also study the revenue guarantee for PBM-GSP in Bayesian setting. To the best of our knowledge, this is the first work on broad-match mechanisms for GSP that goes beyond the single-slot case and the full-information setting. |
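To make the mechanism concrete, the Python sketch below implements the simplest rank-by-bid GSP pricing rule and wraps it in a probabilistic broad-match step that samples one matched keyword from a predefined distribution and runs the auction only on the ads bidding on that keyword, as described in the abstract. The bids, keyword distribution, and single-query setup are illustrative assumptions.

```python
import random

def gsp_auction(bids, n_slots):
    """Rank-by-bid GSP: the i-th highest bidder wins slot i and pays the
    (i+1)-th highest bid (0 if there is no such bidder). Simplest unweighted variant."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for i in range(min(n_slots, len(ranked))):
        bidder, _ = ranked[i]
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        outcome.append((i, bidder, price))
    return outcome

def pbm_gsp(query, keyword_dist, bids_per_keyword, n_slots, rng=random):
    """Probabilistic broad match: sample one keyword matched to the query from a
    predefined distribution, then run GSP only on the ads bidding on that keyword."""
    keywords, probs = zip(*keyword_dist[query])
    kw = rng.choices(keywords, weights=probs, k=1)[0]
    return kw, gsp_auction(bids_per_keyword[kw], n_slots)

# Illustrative bids on two keywords broadly matched to the query "running shoes".
bids_per_keyword = {
    "running shoes": {"adA": 2.5, "adB": 1.8, "adC": 1.2},
    "sneakers":      {"adD": 2.0, "adB": 1.5},
}
keyword_dist = {"running shoes": [("running shoes", 0.7), ("sneakers", 0.3)]}

random.seed(1)
kw, outcome = pbm_gsp("running shoes", keyword_dist, bids_per_keyword, n_slots=2)
print("sampled keyword:", kw)
for slot, bidder, price in outcome:
    print(f"slot {slot}: {bidder} pays {price:.2f}")
```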