title | abstract
---|---|
Effects of 6 months exercise training on ventricular remodelling and autonomic tone in patients with acute myocardial infarction and percutaneous coronary intervention. | OBJECTIVE
To study the effects of 6 months exercise training on ventricular remodelling and autonomic tone in patients with acute myocardial infarction and percutaneous coronary intervention.
DESIGN
Single-blinded randomized controlled trial.
PARTICIPANTS
Sixty patients with acute myocardial infarction who had undergone percutaneous coronary intervention.
METHODS
The exercise group followed a 6-month supervised exercise programme, while the control group received routine recommendations. All patients underwent an incremental cardiopulmonary exercise test and Doppler echocardiography at baseline and after 6 months.
RESULTS
Three patients in the exercise group did not complete the programme. At 6 months follow-up, an improvement was seen in the exercise group compared with the control group regarding peak VO2 (p<0.01), Powermax (p<0.05), VO2 at anaerobic threshold (p<0.01), time to reach anaerobic threshold (p<0.05), heart rate recovery (p<0.01), left ventricular end-diastolic diameter (p<0.01) and left ventricular ejection fraction (p<0.05).
CONCLUSION
Six months of exercise training in patients with acute myocardial infarction and percutaneous coronary intervention with mild ventricular systolic dysfunction could prevent ventricular remodelling to a certain extent; favourable modulation of the sympatho-vagal balance may be an important underlying mechanism. |
Temporal Dynamics of Avian Populations during Pleistocene Revealed by Whole-Genome Sequences | Global climate fluctuations have significantly influenced the distribution and abundance of biodiversity. During unfavorable glacial periods, many species experienced range contraction and fragmentation, expanding again during interglacials. An understanding of the evolutionary consequences of both historical and ongoing climate changes requires knowledge of the temporal dynamics of population numbers during such climate cycles. Variation in abundance should have left clear signatures in the patterns of intraspecific genetic variation in extant species, from which historical effective population sizes (N(e)) can be estimated. We analyzed whole-genome sequences of 38 avian species in a pairwise sequentially Markovian coalescent (PSMC, [5]) framework to quantitatively reveal changes in N(e) from approximately 10 million to 10 thousand years ago. Significant fluctuations in N(e) over time were evident for most species. The most pronounced pattern observed in many species was a severe reduction in N(e) coinciding with the beginning of the last glacial period (LGP). Among species, N(e) varied by at least three orders of magnitude, exceeding 1 million in the most abundant species. Several species on the IUCN Red List of Threatened Species showed long-term reduction in population size, predating recent declines. We conclude that cycles of population expansions and contractions have been a common feature of many bird species during the Quaternary period, likely coinciding with climate cycles. Population size reduction should have increased the risk of extinction but may also have promoted speciation. Species that have experienced long-term declines may be especially vulnerable to recent anthropogenic threats. |
Chemical and Physicochemical Pretreatment of Lignocellulosic Biomass: A Review | Overcoming the recalcitrance (resistance of plant cell walls to deconstruction) of lignocellulosic biomass is a key step in the production of fuels and chemicals. The recalcitrance is due to the highly crystalline structure of cellulose which is embedded in a matrix of polymers-lignin and hemicellulose. The main goal of pretreatment is to overcome this recalcitrance, to separate the cellulose from the matrix polymers, and to make it more accessible for enzymatic hydrolysis. Reports have shown that pretreatment can improve sugar yields to higher than 90% theoretical yield for biomass such as wood, grasses, and corn. This paper reviews different leading pretreatment technologies along with their latest developments and highlights their advantages and disadvantages with respect to subsequent hydrolysis and fermentation. The effects of different technologies on the components of biomass (cellulose, hemicellulose, and lignin) are also reviewed with a focus on how the treatment greatly enhances enzymatic cellulose digestibility. |
Miniaturized Substrate Integrated Waveguide Slot Antennas Based on Negative Order Resonance | Miniaturized waveguide slot antennas using negative order resonance are proposed and developed in this paper. The design of these novel resonant-type antennas is based on the study of a composite right/left-handed (CRLH) substrate integrated waveguide (SIW) starting from the dispersion diagram and the equivalent circuit. These proposed CRLH resonators are realized by etching the interdigital slot on the surface of the SIW. The slot acts as a series capacitor as well as a radiator leading to a CRLH antenna application. Two types of slot antennas, which are open-ended and short-ended, respectively, are proposed and discussed. The short-ended antenna, which represents a quasi-quarter-wavelength resonator, is characterized by a cavity-backed slot antenna thus providing a high gain. On the other hand, the open-ended antenna exhibits a small size owing to a quasi-half-wavelength operation in the left-handed (LH) region. Four antennas with one or two unit cells are fabricated and measured. Compared with the existing CRLH patch antennas and waveguide slot antennas, our antennas show advantages in terms of simplicity, low-profile, high efficiency, easy fabrication and integration with other circuits. |
A stable compensation scheme for low dropout regulator in the absence of ESR | A new compensation scheme for a low dropout regulator (LDR) that relaxes the constraint on the equivalent series resistance (ESR) down to zero, from no-load to full-load current conditions, is presented. This is achieved by introducing a load-tracking zero that tracks the moving output pole as the load current varies, while a level-shift buffer reduces the size and capacitance of the power transistor. The load-tracking circuit ensures stability over the full load current range even in the absence of ESR. The proposed LDR is fabricated in a standard 0.35 µm CMOS technology. Measurement results at a very small ESR (10 mΩ) and output capacitor (0.47 µF), where the equivalent ESR zero is far beyond the compensation region of the LDR, confirm stability at an equivalent ESR value of zero. The proposed compensation scheme also allows a wide range of ESR, from an equivalent 0 Ω to 20 Ω, and of load capacitance, from 0.47 µF to 22 µF, with a load current of 0 µA to 100 mA at a dropout voltage of 0.2 V. |
Throwable tetrahedral robot with transformation capability | In this paper, a tetrahedral mobile robot with a central axis for transformation into a flat vehicle is presented. The throwable robot, with the ability to enter narrow spaces when it is in flat-vehicle mode, is explained in detail. A prototype has been developed to illustrate the concept. Motion experiments confirm the novel properties of this mechanism: a mode-changing function and omnidirectional motion. Basic motion experiments with a test vehicle are also presented. |
A Complexity-Invariant Distance Measure for Time Series | The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While there is a plethora of classification algorithms that can be applied to time series, all of the current empirical evidence suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping. In this work we make a surprising claim. There is an invariance that the community has missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where complex objects are incorrectly assigned to a simpler class. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series classification experiments ever attempted, and show that complexity-invariant distance measures can produce improvements in accuracy in the vast majority of cases. |
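A minimal sketch of a complexity-invariant distance of the kind described follows; the complexity estimate (the length of the "stretched-out" series of successive differences) and the max/min correction factor follow the intuition in the abstract, but treat this as an illustrative reconstruction rather than the paper's exact definition.

```python
import numpy as np

def complexity_estimate(t):
    # Complexity proxy: length of the polyline traced by the series,
    # i.e. the root of summed squared successive differences (an assumption).
    return np.sqrt(np.sum(np.diff(t) ** 2))

def cid_distance(q, c):
    # Euclidean distance rescaled by a complexity correction factor, so that
    # pairs of complex series are not pushed further apart than simple ones.
    ed = np.linalg.norm(q - c)
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    return ed * max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)

smooth = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = smooth + 0.3 * np.random.default_rng(0).standard_normal(100)
print(cid_distance(smooth, noisy))
```

Note that because the correction factor is at least 1, the raw Euclidean distance lower-bounds this measure, which is consistent with the indexing claim in the abstract.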
The burden of physical activity-related ill health in the UK. | BACKGROUND
Despite evidence that physical inactivity is a risk factor for a number of diseases, only a third of men and a quarter of women are meeting government targets for physical activity. This paper provides an estimate of the economic and health burden of disease related to physical inactivity in the UK. These estimates are examined in relation to current UK government policy on physical activity.
METHODS
Information from the World Health Organisation global burden of disease project was used to calculate the mortality and morbidity costs of physical inactivity in the UK. Diseases attributable to physical inactivity included ischaemic heart disease, ischaemic stroke, breast cancer, colon/rectum cancer and diabetes mellitus. Population attributable fractions for physical inactivity for each disease were applied to the UK Health Service cost data to estimate the financial cost.
RESULTS
Physical inactivity was directly responsible for 3% of disability adjusted life years lost in the UK in 2002. The estimated direct cost to the National Health Service is £1.06 billion.
CONCLUSION
There is a considerable public health burden due to physical inactivity in the UK. Accurately establishing the financial cost of physical inactivity and other risk factors should be the first step in developing a national public health strategy. |
Dynamic TXOP HCCA reclaiming scheduler with transmission time estimation for IEEE 802.11e real-time networks | IEEE 802.11e HCCA reference scheduler guarantees Quality of Service only for Constant Bit Rate traffic streams, whereas its assignment of scheduling parameters (transmission time TXOP and polling period) is too rigid to serve Variable Bit Rate (VBR) traffic.
This paper presents a new scheduling algorithm, Dynamic TXOP HCCA (DTH). Its scheduling scheme, integrated with the centralized scheduler, uses both a statistical estimation of the needed transmission duration and a bandwidth reclaiming mechanism, with the aim of improving resource management and providing an instantaneous dynamic Transmission Opportunity (TXOP) tailored to multimedia applications with variable bit rate. Performance evaluation through simulation, confirmed by the scheduling analysis, shows that DTH reduces transmission queue length. This positively impacts the delay and the packet drop rate experienced by VBR traffic streams. |
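As a rough illustration of the "statistical estimation of needed transmission duration" idea, one common pattern is to provision the next TXOP for the recent mean plus a margin of standard deviations; the window length and margin below are hypothetical parameters, not DTH's actual scheme.

```python
from collections import deque
import statistics

class TxopEstimator:
    def __init__(self, window=16, margin=1.0):
        # Recent per-poll transmission durations (ms); window size is an assumption.
        self.samples = deque(maxlen=window)
        self.margin = margin

    def observe(self, duration_ms):
        self.samples.append(duration_ms)

    def next_txop(self):
        # Provision mean + margin * stdev to absorb VBR fluctuations.
        if len(self.samples) < 2:
            return self.samples[-1] if self.samples else 0.0
        mu = statistics.fmean(self.samples)
        sd = statistics.stdev(self.samples)
        return mu + self.margin * sd

est = TxopEstimator()
for d in [1.2, 1.5, 0.9, 2.0, 1.1]:
    est.observe(d)
print(round(est.next_txop(), 3))
```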
Fast Group Recommendations by Applying User Clustering | Recommendation systems have received significant attention, with most of the proposed methods focusing on personal recommendations. However, there are contexts in which the items to be suggested are not intended for a single user but for a group of people. For example, assume a group of friends or a family that is planning to watch a movie or visit a restaurant. In this paper, we propose an extensive model for group recommendations that exploits recommendations for items that similar users to the group members liked in the past. We do not exhaustively search for similar users in the whole user base, but we pre-partition users into clusters of similar ones and use the cluster members for recommendations. We efficiently aggregate the single user recommendations into group recommendations by leveraging the power of a top-k algorithm. We evaluate our approach in a real dataset of movie ratings. |
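A minimal sketch of the pipeline described above (pre-clustering users, predicting from cluster peers, aggregating into a group score); the toy ratings, cluster count, and plain average aggregation are illustrative assumptions rather than the paper's top-k algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy user-item rating matrix (0 = unrated).
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 0],
    [1, 0, 4, 5, 3],
    [0, 0, 3, 4, 5],
], dtype=float)

# Pre-partition users into clusters of similar ones.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)

def group_recommend(group, k=2):
    # For each member, predict scores from the mean ratings of cluster peers,
    # then aggregate the members' predictions into a single group score.
    scores = np.zeros(ratings.shape[1])
    for u in group:
        peers = ratings[clusters == clusters[u]]
        scores += peers.mean(axis=0)
    scores /= len(group)
    # Suggest the k items with the highest aggregated score.
    return np.argsort(-scores)[:k]

print(group_recommend(group=[0, 3]))
```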
Stars and Misfits: Self-Employment and Labor Market Frictions | Recent evidence has shown that entrants into self-employment are disproportionately drawn from the tails of the earnings and ability distributions. This observation is explained by a multi-task model of occupational choice in which frictions in the labor market induce mismatches between firms and workers, and mis-assignment of workers to tasks. The model also yields distinctive predictions relating prior work histories to earnings and to the probability of entry into self-employment. These predictions are tested with the Korean Labor and Income Panel Study, from which we find considerable support for the model. |
Business Process Modeling: Current Issues and Future Challenges | Business process modeling has undoubtedly emerged as a popular and relevant practice in Information Systems. Despite being an actively researched field, anecdotal evidence and experiences suggest that the focus of the research community is not always well aligned with the needs of industry. The main aim of this paper is, accordingly, to explore the current issues and the future challenges in business process modeling, as perceived by three key stakeholder groups (academics, practitioners, and tool vendors). We present the results of a global Delphi study with these three groups of stakeholders, and discuss the findings and their implications for research and practice. Our findings suggest that the critical areas of concern are standardization of modeling approaches, identification of the value proposition of business process modeling, and model-driven process execution. These areas are also expected to persist as business process modeling roadblocks in the future. |
Introducing LETOR 4.0 Datasets | LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us ([email protected]). Our goal is to make the dataset reliable and useful for the community. |
Real time health monitoring of industrial machine using multiclass support vector machine | Most failures in industrial systems are due to motor faults, which can be catastrophic and cause major downtimes. Hence, continuous health monitoring, precise fault detection and advance failure warning for motors are pivotal and cost-effective. The identification of motor faults requires sophisticated signal processing techniques for quick fault detection and isolation. This paper presents a real time health monitoring technique for induction motors using a pattern recognition method. The proposed fault detection and isolation scheme comprises three stages: data acquisition, feature extraction and a multiclass support vector machine classifier. This paper investigates single and multiple faults in a single-phase induction motor, including bearing fault, load fault and their combination. The testbed consists of a 1/2 hp, 220 V squirrel cage induction motor with load, a vibration sensor, a current sensor, a data acquisition system and a controller. Two features, standard deviation and average value, are computed for each sensor's data. The multiclass support vector machine classifier is implemented on a low-cost Arduino controller for fault detection and isolation. A performance analysis of the classifier with real time sensor data is presented, which demonstrates the superior capabilities of the developed method. |
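A minimal sketch of the described classification stage (the two named features per window feeding a multiclass SVM), using synthetic vibration windows; the signal model, labels, and kernel are placeholders, and the on-device Arduino implementation is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def features(window):
    # The two features named in the abstract: standard deviation and average.
    return [window.std(), window.mean()]

# Synthetic sensor windows for three conditions: healthy, bearing fault, load fault.
X, y = [], []
for label, (mu, sigma) in enumerate([(0.0, 1.0), (0.5, 2.0), (1.5, 1.2)]):
    for _ in range(100):
        X.append(features(mu + sigma * rng.standard_normal(256)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)  # multiclass handled one-vs-one internally
print(clf.predict([features(0.5 + 2.0 * rng.standard_normal(256))]))
```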
Subpixel Photometric Stereo | Conventional photometric stereo recovers one normal direction per pixel of the input image. This fundamentally limits the scale of recovered geometry to the resolution of the input image, and cannot model surfaces with subpixel geometric structures. In this paper, we propose a method to recover subpixel surface geometry by studying the relationship between the subpixel geometry and the reflectance properties of a surface. We first describe a generalized physically-based reflectance model that relates the distribution of surface normals inside each pixel area to its reflectance function. The distribution of surface normals can be computed from the reflectance functions recorded in photometric stereo images. A convexity measure of subpixel geometry structure is also recovered at each pixel, through an analysis of the shadowing attenuation. Then, we use the recovered distribution of surface normals and the surface convexity to infer subpixel geometric structures on a surface of homogeneous material by spatially arranging the normals among pixels at a higher resolution than that of the input image. Finally, we optimize the arrangement of normals using a combination of belief propagation and MCMC based on a minimum description length criterion on 3D textons over the surface. The experiments demonstrate the validity of our approach and show superior geometric resolution for the recovered surfaces. |
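For context on the conventional baseline mentioned in the first sentence, here is a minimal sketch of classic Lambertian photometric stereo with known light directions; this illustrates per-pixel normal recovery only, not the paper's subpixel method, and the light directions and normal below are made-up test values.

```python
import numpy as np

# Three known light directions (rows of L) and one pixel's true normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
n_true = np.array([0.2, 0.1, 0.97])
n_true = n_true / np.linalg.norm(n_true)
albedo = 0.8

# Lambertian intensities observed at this pixel under each light.
I = albedo * L @ n_true

# Least-squares recovery: solve L g = I, then factor g into albedo and normal.
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)
n_est = g / rho
print(rho, n_est)
```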
Mediterranean diet reduces 24-hour ambulatory blood pressure, blood glucose, and lipids: one-year randomized, clinical trial. | UNLABELLED
The PREvención con DIeta MEDiterránea (PREDIMED) trial showed that Mediterranean diets (MedDiets) supplemented with either extravirgin olive oil or nuts reduced cardiovascular events, particularly stroke, compared with a control, lower fat diet. The mechanisms of cardiovascular protection remain unclear. We evaluated the 1-year effects of supplemented MedDiets on 24-hour ambulatory blood pressure (BP), blood glucose, and lipids. A randomized, parallel-design, controlled trial was conducted at 2 PREDIMED sites. Diets were ad libitum, and no advice on increasing physical activity or reducing sodium intake was given. Participants were 235 subjects (56.5% women; mean age, 66.5 years) at high cardiovascular risk (85.4% with hypertension). Adjusted changes from baseline in mean systolic BP were -2.3 (95% confidence interval [CI], -4.0 to -0.5) mm Hg and -2.6 (95% CI, -4.3 to -0.9) mm Hg in the MedDiets with olive oil and the MedDiets with nuts, respectively, and 1.7 (95% CI, -0.1 to 3.5) mm Hg in the control group (P<0.001). Respective changes in mean diastolic BP were -1.2 (95% CI, -2.2 to -0.2), -1.2 (95% CI, -2.2 to -0.2), and 0.7 (95% CI, -0.4 to 1.7) mm Hg (P=0.017). Daytime and nighttime BP followed similar patterns. Mean changes from baseline in fasting blood glucose were -6.1, -4.6, and 3.5 mg/dL (P=0.016) in the MedDiets with olive oil, MedDiets with nuts, and control diet, respectively; those of total cholesterol were -11.3, -13.6, and -4.4 mg/dL (P=0.043), respectively. In high-risk individuals, most with treated hypertension, MedDiets supplemented with extravirgin olive oil or nuts reduced 24-hour ambulatory BP, total cholesterol, and fasting glucose.
CLINICAL TRIAL REGISTRATION URL
http://www.clinicaltrials.gov. Unique identifier: ISRCTN35739639. |
Applying social learning analytics to message boards in online distance learning: A case study | Social learning analytics introduces tools and methods that help improve the learning process by providing useful information about the actors and their activity in the learning system. This study examines the relation between SNA parameters and student outcomes, and between network parameters and global course performance, and it shows how visualizations of social learning analytics can help in observing the visible and invisible interactions occurring in online distance education. The findings from our empirical study show that future research should further investigate whether there are conditions under which social network parameters are reliable predictors of academic performance, but they also advise against relying exclusively on social network parameters for predictive purposes. The findings also show that data visualization is a useful tool for social learning analytics, and that it may provide additional information about actors and their behaviors for decision making in online distance education. |
Perceived racial discrimination and nonadherence to screening mammography guidelines: results from the race differences in the screening mammography process study. | The study objective was to determine whether perceived racial discrimination influenced nonadherence to screening mammography guidelines. Enrolled in this prospective study were 1,451 women aged 40-79 years who obtained an "index" screening mammogram at one of five urban hospitals in Connecticut between October 1996 and January 1998. This logistic regression analysis included 1,229 women (484 African American (39%), 745 White (61%)) who completed telephone interviews at baseline and follow-up (on average 29 months later). Perceived racial discrimination was measured as lifetime experience in seven possible situations. Approximately 42% of African-American women and 10% of White women reported lifetime racial discrimination. Perceived racial discrimination was not associated with nonadherence to age-specific mammography screening guidelines in unadjusted or multivariate-adjusted analyses. Although these negative findings may reflect the well-recognized problems associated with measurement of perceived discrimination, it is possible that women who recognize and report racial discrimination develop compensatory characteristics that enable positive health prevention behavior, in spite of their past experiences. |
The Evolution to Modern Phased Array Architectures | Phased array technology has been evolving steadily with advances in solid-state microwave integrated circuits, analysis and design tools, and reliable fabrication practices. With significant government investments, the technologies have matured to a point where phased arrays are widely used in military systems. Next-generation phased arrays will employ high levels of digitization, which enables a wide range of improvements in capability and performance. Digital arrays leverage the rapid commercial evolution of digital processor technology. The cost of phased arrays can be minimized by utilizing high-volume commercial microwave manufacturing and packaging techniques. Dramatic cost reductions are achieved by employing a tile array architecture, which greatly reduces the number of printed circuit boards and connectors in the array. |
Comparing Relational and Ontological Triple Stores in Healthcare Domain | Today's technological improvements have made possible ubiquitous healthcare systems that converge into smart healthcare applications designed to solve patients' problems, to communicate effectively with patients, and to improve healthcare service quality. The first step in building a smart healthcare information system is representing the healthcare data as connected, reachable, and sharable. To achieve this representation, ontologies are used to describe the healthcare data. Combining ontological healthcare data with the used and obtained data can be maintained by storing the entire health domain data inside big data stores that support both relational and graph-based ontological data. There are several big data stores and different types of big data sets in the healthcare domain. The goal of this paper is to determine the most applicable ontology data store for storing big healthcare data. For this purpose, AllegroGraph and Oracle 12c data stores are compared based on their infrastructural capacity, loading time, and query response times. Hence, healthcare ontologies (GENE Ontology, Gene Expression Ontology (GEXO), Regulation of Transcription Ontology (RETO), Regulation of Gene Expression Ontology (REXO)) are used to measure the ontology loading time. Thereafter, various queries are constructed and executed for the GENE ontology in order to measure the capacity and query response times for the performance comparison between the AllegroGraph and Oracle 12c triple stores. |
Intravenous anaesthesia and the rat microcirculation: the dorsal microcirculatory chamber. | The use of the dorsal microcirculatory chamber in male Wistar rats (n=7) to study the effects of induction and maintenance of anaesthesia on the microcirculation is described. Different patterns of responses were observed. At induction, arteriolar dilation was found following propofol and thiopental but ketamine produced constriction. During maintenance, constriction of arterioles was seen with ketamine and thiopental but dilation persisted with propofol. The dorsal microcirculatory chamber appears to be a useful tool for the study of microcirculatory changes related to anaesthesia. |
Wireless network for health monitoring: heart rate and temperature sensor | In the field of human health, collecting real-time data is vital. A system that can remotely monitor heart rate and body temperature is presented in this paper. To test the system, data were collected from a group of volunteers using sensors developed by the research team. An Arduino micro-controller is programmed to transmit the data securely to a remote PC station over an XBee wireless network for display and storage. Power consumption is minimized by activating the sensors only when a command from the remote PC is received. |
Factors influencing a nurse's decision to question medication administration in a neonatal clinical care unit. | AIMS AND OBJECTIVES
The aim of this study was to identify factors that influence nurses' decisions to question concerning aspects of medication administration within the context of a neonatal clinical care unit.
BACKGROUND
Medication error rates in the neonatal setting can be high in this particularly vulnerable population. As the caregivers responsible for medication administration, nurses are deemed accountable for most errors; however, they are also recognised as being at the forefront of prevention. Minimal evidence is available on the reasoning, decision making and questioning surrounding medication administration. Therefore, this study focuses on addressing the gap in knowledge around what nurses believe influences their decision to question.
DESIGN
A critical incident design was employed where nurses were asked to describe clinical incidents around their decision to question a medication issue. Nurses were recruited from a neonatal clinical care unit and participated in an individual digitally recorded interview.
RESULTS
One hundred and three nurses participated between December 2013 and August 2014. Use of the constant comparative method revealed commonalities within the transcripts. Thirty-six categories were grouped into three major themes: 'Working environment', 'Doing the right thing' and 'Knowledge about medications'.
CONCLUSIONS
Findings highlight factors that influence nurses' decision to question issues around medication administration. Nurses feel it is their responsibility to do the right thing and speak up for their vulnerable patients to enhance patient safety. Negative dimensions within the themes will inform planning of educational strategies to improve patient safety, whereas positive dimensions must be reinforced within the multidisciplinary team.
RELEVANCE TO CLINICAL PRACTICE
The working environment must support nurses to question and ultimately provide safe patient care. Clear and up to date policies, formal and informal education, role modelling by senior nurses, effective use of communication skills and a team approach can facilitate nurses to appropriately question aspects around medication administration. |
Review of "Computer selection by Joslin, Edward O." Technology Press, Fairfax Station, Va., 1977, 216 pp. | The author has packaged a wealth of otherwise hard to come by information on computer selection in the book's 216 pages. As pointed out in the Foreword, the original work published in 1968 [Addison-Wesley, Reading, Mass.; see CR 10, 3 (March 1969), Rev. 16,340], a classic, with the basic ideas of Part I still valid after almost a decade of rapid change. In fact, in today's environment, where more and more is being demanded from computer systems, having the major selection criteria readily at hand is a major contribution and, indeed, a necessary foundation for fielding an efficient and effective computer system. |
Missing value estimation methods for DNA microarrays | MOTIVATION
Gene expression microarray experiments can generate data sets with multiple missing expression values. Unfortunately, many algorithms for gene expression analysis require a complete matrix of gene array values as input. For example, methods such as hierarchical clustering and K-means clustering are not robust to missing data, and may lose effectiveness even with a few missing values. Methods for imputing missing data are needed, therefore, to minimize the effect of incomplete data sets on analyses, and to increase the range of data sets to which these algorithms can be applied. In this report, we investigate automated methods for estimating missing data.
RESULTS
We present a comparative study of several methods for the estimation of missing values in gene microarray data. We implemented and evaluated three methods: a Singular Value Decomposition (SVD) based method (SVDimpute), weighted K-nearest neighbors (KNNimpute), and row average. We evaluated the methods using a variety of parameter settings and over different real data sets, and assessed the robustness of the imputation methods to the amount of missing data over the range of 1-20% missing values. We show that KNNimpute appears to provide a more robust and sensitive method for missing value estimation than SVDimpute, and both SVDimpute and KNNimpute surpass the commonly used row average method (as well as filling missing values with zeros). We report results of the comparative experiments and provide recommendations and tools for accurate estimation of missing microarray data under a variety of conditions. |
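A minimal sketch of weighted K-nearest-neighbour imputation in the spirit of KNNimpute; the Euclidean neighbour search over observed columns and the inverse-distance weighting are common choices assumed here, not details quoted from the paper.

```python
import numpy as np

def knn_impute(X, k=3):
    # X: genes x arrays matrix with np.nan marking missing entries.
    Xf = X.copy()
    for i, j in zip(*np.where(np.isnan(X))):
        obs = ~np.isnan(X[i])  # columns observed for gene i
        # Candidate neighbours: genes that do have a value in column j.
        cand = [g for g in range(X.shape[0]) if g != i and not np.isnan(X[g, j])]
        d = np.array([np.sqrt(np.nanmean((X[i, obs] - X[g, obs]) ** 2)) for g in cand])
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] + 1e-12)  # inverse-distance weights (an assumption)
        Xf[i, j] = np.sum(w * X[np.array(cand)[nn], j]) / np.sum(w)
    return Xf

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.0],
              [0.9, 1.9, 2.8],
              [5.0, 5.0, 5.0]])
print(knn_impute(X))
```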
PEEK dental implants: a review of the literature. | The insertion of dental implants containing titanium can be associated with various complications (eg, hypersensitivity to titanium). The aim of this article is to evaluate whether there are existing studies reporting on PEEK (polyetheretherketone) as an alternative material for dental implants. A systematic literature search of PubMed until December 2010 yielded 3 articles reporting on dental implants made from PEEK. One article analyzed stress distribution in carbon fiber-reinforced PEEK (CFR-PEEK) dental implants by the 3-dimensional finite element method, demonstrating higher stress peaks due to a reduced stiffness compared to titanium. Two articles reported on investigations in mongrel dogs. The first article compared CFR-PEEK to titanium-coated CFR-PEEK implants, which were inserted into the femurs and evaluated after 4 and 8 weeks. The titanium-coated implants showed significantly higher bone-implant contact (BIC) rates. In a second study, implants of pure PEEK were inserted into the mandibles beside implants made from titanium and zirconia and evaluated after 4 months, where PEEK presented the lowest BIC. The existing articles reporting on PEEK dental implants indicate that PEEK could represent a viable alternative material for dental implants. However, further experimental studies on the chemical modulation of PEEK seem to be necessary, mainly to increase the BIC ratio and to minimize the stress distribution to the peri-implant bone. |
A Survey on Recommendations in Location-based Social Networks | Recent advances in position localization techniques have fundamentally enhanced social networking services, allowing users to share their locations and location-related content, such as geo-tagged photos and notes. We refer to these social networks as location-based social networks (LBSNs). Location data both bridges the gap between the physical and digital worlds and enables a deeper understanding of user preferences and behavior. This addition of vast geospatial datasets has stimulated research into novel recommender systems that seek to facilitate users' travels and social interactions. In this paper, we offer a systematic review of this research, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges that location brings to recommendation systems for LBSNs. We present a comprehensive survey of recommender systems for LBSNs, analyzing 1) the data source used, 2) the methodology employed to generate a recommendation, and 3) the objective of the recommendation. We propose three taxonomies that partition the recommender systems according to the properties listed above. First, we categorize the recommender systems by the objective of the recommendation, which can include locations, users, activities, or social media. Second, we categorize the recommender systems by the methodologies employed, including content-based, link analysis-based, and collaborative filtering-based methodologies. Third, we categorize the systems by the data sources used, including user profiles, user online histories, and user location histories. For each category, we summarize the goals and contributions of each system and highlight one representative research effort. Further, we provide comparative analysis of the recommendation systems within each category. Finally, we discuss methods of evaluation for these recommender systems and point out promising research topics for future work. This article presents a panorama of the recommendation systems in location-based social networks with a balanced depth, facilitating research into this important research theme. |
Google Scholar: the pros and the cons | It may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush, but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citation-based searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Perhaps because he used a non-scientific outlet, the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general-interest magazines are not source journals in either the Web of Science (WoS) or Scopus databases. However, records for items which cite the 'As We May Think' article of Bush (also known as the 'Memex' paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945–2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush's impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable, and is worth quoting as a reminder. Interestingly, he brings up the terms 'web of trails' and 'association of thoughts', which establishes the link between him and Garfield. |
GIS Cloud Computing Methodology | Geographic Information System (GIS) is the process of managing, manipulating, analyzing, updating and presenting metadata according to its geographic location, to be effectively used in different aspects of life [1]. Cloud Computing allows the utilization of all computing resources and software as required through the web in a virtual computing environment [2], [3]. The different application software and data are provided at the (virtual) server side to be used. GIS Cloud is the future of Web GIS, offering easy and fast collection, processing, analysis, updating, rectification, and publishing of geospatial data through the internet. It is a web-based GIS application that enables fast, easy, informed decision-making for all users at a fair price. It provides the power of desktop GIS on a web-based platform at fair cost anywhere. It offers a JavaScript Application Programming Interface (API) and a REST API to provide GIS functionality to an application or website, which can be hosted by GIS Cloud or by a third party. In this work we present a GIS Cloud application in the AL-Kamaliah region, a small town in the suburbs of Amman, to demonstrate its practicality and functionality. |
Action Recognition with Spatio-Temporal Visual Attention on Skeleton Image Sequences | Action recognition with 3D skeleton sequences has become popular due to its speed and robustness. Recently proposed Convolutional Neural Network (CNN) based methods have shown good performance in learning spatio-temporal representations for skeleton sequences. Despite the good recognition accuracy achieved by previous CNN based methods, two problems potentially limited the performance. First, previous skeleton representations were generated by chaining joints in a fixed order. The corresponding semantic meaning was unclear and the structural information among the joints was lost. Second, previous models did not have an ability to focus on informative joints. The attention mechanism is important for skeleton based action recognition because different joints contribute unequally towards the correct recognition. To solve these two problems, we propose a novel CNN based method for skeleton based action recognition. We first redesign the skeleton representations with a depth-first tree traversal order, which enhances the semantic meaning of skeleton images and better preserves the associated structural information. We then propose a general two-branch attention architecture that automatically focuses on spatio-temporal key stages and filters out unreliable joint predictions. Based on the proposed general architecture, we design a Global Long-sequence Attention Network (GLAN) with refined branch structures. Furthermore, in order to adjust the kernel's spatio-temporal aspect ratios and better capture long-term dependencies, we propose a Sub-Sequence Attention Network (SSAN) that takes sub-image sequences as inputs. We show that the two-branch attention architecture can be combined with the SSAN to further improve the performance. Our experimental results on the NTU RGB+D dataset and the SBU Kinetic Interaction dataset outperform the state-of-the-art. The model is further validated on noisy estimated poses from subsets of the UCF101 dataset and the Kinetics dataset. |
Modular array-based GPU computing in a dynamically-typed language | Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility.
This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code. |
Quark condensate in a magnetic field | We study the dependence of the quark condensate $\Sigma$ on an external magnetic field. For weak fields, it rises linearly: $\Sigma(H) = \Sigma(0)\left[1 + \frac{eH \ln 2}{16 \pi^2 F_\pi^2} + \dots\right]$. $M_\pi$ and $F_\pi$ are also shifted so that the Gell-Mann-Oakes-Renner relation is satisfied. In the strong field region, $\Sigma(H) \propto (eH)^{3/2}$. |
Hypoxia: a key player in antitumor immune response. A Review in the Theme: Cellular Responses to Hypoxia. | The tumor microenvironment is a complex system, playing an important role in tumor development and progression. Besides cellular stromal components, extracellular matrix fibers, cytokines, and other metabolic mediators are also involved. In this review we outline the potential role of hypoxia, a major feature of most solid tumors, within the tumor microenvironment and how it contributes to immune resistance and immune suppression/tolerance and can be detrimental to antitumor effector cell functions. We also outline how hypoxic stress influences immunosuppressive pathways involving macrophages, myeloid-derived suppressor cells, T regulatory cells, and immune checkpoints and how it may confer tumor resistance. Finally, we discuss how microenvironmental hypoxia poses both obstacles and opportunities for new therapeutic immune interventions. |
Social Media Mining to Understand Public Mental Health | In this paper, we apply text mining and topic modelling to understand public mental health. We focus on identifying common mental health topics across two anonymous social media platforms: Reddit and a mobile journalling/mood-tracking app. Furthermore, we analyze journals from the app to uncover relationships between topics, journal visibility (private vs. visible to other users of the app), and user-labelled sentiment. Our main findings are that 1) anxiety and depression are shared on both platforms; 2) users of the journalling app keep routine topics such as eating private, and these topics rarely appear on Reddit; and 3) sleep was a critical theme on the journalling app and had an unexpectedly negative sentiment. |
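A minimal sketch of the kind of topic-modelling step described above, using scikit-learn's LDA on a handful of made-up posts; the corpus, vectorizer settings, and topic count are illustrative assumptions, not the study's pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder posts standing in for Reddit submissions / journal entries.
posts = [
    "could not sleep again, anxious all night",
    "feeling depressed and tired today",
    "ate breakfast then journaled about my mood",
    "anxiety before the exam, heart racing",
    "slept badly, woke up at 4am worrying",
]

counts = CountVectorizer(stop_words="english").fit(posts)
X = counts.transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top words per discovered topic.
vocab = counts.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"topic {t}:", [vocab[i] for i in top])
```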
Celebratory technology: new directions for food research in HCI | Food is a central part of our lives. Fundamentally, we need food to survive. Socially, food is something that brings people together; individuals interact through and around it. Culturally, food practices reflect our ethnicities and nationalities. Given the importance of food in our daily lives, it is important to understand what role technology currently plays and the roles it can be imagined to play in the future. In this paper we describe the existing and potential design space for HCI in the area of human-food interaction. We present ideas for future work on designing technologies in the area of human-food interaction that celebrate the positive interactions that people have with food as they eat and prepare foods in their everyday lives. |
An adaptive write buffer management scheme for flash-based SSDs | Solid State Drives (SSDs) have shown promise as a candidate to replace traditional hard disk drives. The benefits of SSDs over HDDs include better durability, higher performance, and lower power consumption, but due to certain physical characteristics of the NAND flash that comprises SSDs, there are some challenging areas of improvement and further research. We focus on the layout and management of the small amount of RAM that serves as a cache between the SSD and the system that uses it. Among the techniques that have previously been proposed to manage this cache, we identify several sources of inefficient cache space management due to the way pages are clustered in blocks and the limited replacement policy. We find that in many traces hot pages reside in otherwise cold blocks, and that the spatial locality of most clusters can be fully exploited in a limited time period, so we develop a hybrid page/block architecture along with an advanced replacement policy, called BPAC, or Block-Page Adaptive Cache, to exploit both temporal and spatial locality. Our technique involves adaptively partitioning the SSD on-disk cache to separately hold pages with high temporal locality in a page list and clusters of pages with low temporal but high spatial locality in a block list. In addition, we have developed a novel mechanism for flash-based SSDs to characterize the spatial locality of the disk I/O workload, and an approach to dynamically identify the set of low spatial locality clusters. We run trace-driven simulations to verify our design and find that it outperforms other popular flash-aware cache schemes under different workloads. For instance, compared to the popular flash-aware cache algorithm BPLRU, BPAC reduces the number of cache evictions by up to 79.6% and by 34% on average. |
Stochastic reachability based motion planning for multiple moving obstacle avoidance | One of the many challenges in designing autonomy for operation in uncertain and dynamic environments is the planning of collision-free paths. Roadmap-based motion planning is a popular technique for identifying collision-free paths, since it approximates the often infeasible space of all possible motions with a networked structure of valid configurations. We use stochastic reachable sets to identify regions of low collision probability, and to create roadmaps which incorporate likelihood of collision. We complete a small number of stochastic reachability calculations with individual obstacles a priori. This information is then associated with the weight, or preference for traversal, given to a transition in the roadmap structure. Our method is novel, and scales well with the number of obstacles, maintaining a relatively high probability of reaching the goal in a finite time horizon without collision, as compared to other methods. We demonstrate our method on systems with up to 50 dynamic obstacles. |
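A minimal sketch of how collision likelihood can be folded into roadmap edge weights, in the spirit of the approach described: if each edge carries a survival (collision-free) probability, here assumed given (e.g. precomputed from stochastic reachable sets), then the most-likely-safe path maximizes the product of survival probabilities, equivalently minimizing the sum of their negative logs. The graph and probabilities below are placeholders.

```python
import math
import networkx as nx

# Toy roadmap: nodes are configurations; p_safe is the assumed probability of
# traversing an edge without collision (e.g. derived from stochastic
# reachable sets of each obstacle).
G = nx.Graph()
edges = [("start", "a", 0.99), ("a", "goal", 0.60),
         ("start", "b", 0.95), ("b", "goal", 0.90)]
for u, v, p_safe in edges:
    # Maximizing the product of survival probabilities is equivalent to
    # minimizing the sum of -log(p_safe), which Dijkstra handles directly.
    G.add_edge(u, v, weight=-math.log(p_safe))

path = nx.shortest_path(G, "start", "goal", weight="weight")
p = math.exp(-nx.path_weight(G, path, weight="weight"))
print(path, p)  # prefers the slightly longer but much safer route via "b"
```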
Effects of a short term supplementation of a fermented papaya preparation on biomarkers of diabetes mellitus in a randomized Mauritian population. | OBJECTIVE
Clinical evidence and cellular models have shown an inverse relationship between the intakes of plant and fruit based diets and oxidative stress, suggesting the suitability of natural antioxidants in the management of diabetes mellitus and its complications.
METHOD
A randomized controlled clinical trial was conducted at the Cardiac Centre, SSRN Hospital, Pamplemousses (Mauritius) to determine the effect of a short term supplementation of a fermented papaya preparation (FPP®) on biomarkers of diabetes and antioxidant status in a multi-ethnic neo-diabetic population from November 2010 to March 2011.
RESULT
Supplementation of 6g FPP®/day for a period of 14 weeks could improve the general health status of several organs targeted by oxidative stress during diabetes. When comparing the experimental to the control group with an independent-samples t-test, C-reactive protein levels significantly decreased (P=0.018), the LDL/HDL ratio was considerably changed (P=0.042), and uric acid levels were significantly improved (P=0.001). ANOVA results also validated the same findings, with significant differences in C-reactive protein, LDL/HDL ratio, uric acid and serum ferritin levels.
CONCLUSION
FPP® may present a novel, economically feasible nutraceutical supplement for the management of diabetes and for those at risk for cardiovascular disease, neurological disease and other conditions worsened by overt inflammation and oxidative stress. |
Context-Aware Tourist Trip Recommendations | Mobile and web-based services solving common tourist trip design problems are available, but only a few solutions consider context for the recommendation of point of interest (POI) sequences. In this paper, we present a novel approach to incorporating context into a tourist trip recommendation algorithm. In addition to traditional context factors in tourism, such as location, weather or opening hours, we focus on two context factors that are highly relevant when recommending a sequence of POIs: time of the day and previously visited point of interest. We conducted an online questionnaire to determine the influence of the context factors on the user's decision of visiting a POI and the ratings of the POIs under these conditions. We integrated our approach into a web application recommending context-aware tourist trips across the world. In a user study, we verified the results of our novel approach as well as the application's usability. The study proves a high usability of our system and shows that our context-aware approach outperforms a baseline algorithm. |
L0 sparse graphical modeling | Graphical models are well established in providing compact conditional probability descriptions of complex multivariable interactions. In the Gaussian case, graphical models are determined by zeros in the precision or concentration matrix, i.e. the inverse of the covariance matrix. Hence, there has been much recent interest in sparse precision matrices in areas such as statistics, machine learning, computer vision, pattern recognition and signal processing. In this paper we propose a simple new algorithm for constructing a sparse estimator for the precision matrix from multivariate data where the sparsity is enforced by an l0 penalty. We compare and test the quality of our method on a synthetic graphical model. |
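For concreteness, a generic form of the $\ell_0$-penalized Gaussian log-likelihood objective that such estimators target is (standard notation, not quoted from the paper):

```latex
\hat{\Theta} \;=\; \arg\max_{\Theta \succ 0}\;
\log\det\Theta \;-\; \operatorname{tr}(S\Theta) \;-\; \lambda\,\lVert\Theta\rVert_0
```

where $S$ is the empirical covariance matrix, $\Theta$ the precision matrix, and $\lVert\Theta\rVert_0$ counts the nonzero off-diagonal entries; the $\ell_0$ penalty directly prizes sparsity of the conditional-independence graph, in contrast to the convex $\ell_1$ relaxation used by the graphical lasso.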
Game theory based mitigation of Interest flooding in Named Data Network | Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet, where packets from a large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet architecture Named Data Networking (NDN), the architecture itself has prevention measures that reduce the overload on the servers. On the other hand, this increases the work and the security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN known as the Interest flooding attack, mitigating its consequences, and providing service to legitimate users. We have developed a game model for these DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward, redirect, or drop the incoming interest packets, thereby reducing PIT table consumption. This helps in identifying the nodes that send malicious interest packets and in stopping their further transmission of malicious interests. The main highlight of this work is that we have implemented the game theory model in the NDN architecture, whereas it was primarily proposed for the IP Internet architecture. |
The clinical utility of normal findings on noninvasive cardiac assessment in the prediction of atrial fibrillation. | BACKGROUND
The absence of abnormalities on noninvasive cardiac assessment possibly confers a reduced risk of atrial fibrillation (AF) despite the presence of traditional risk factors.
HYPOTHESIS
Normal findings on noninvasive cardiac assessment are associated with a lower risk of AF development.
METHODS
We examined the clinical utility of normal findings on routine noninvasive cardiac assessment in 5331 participants (85% white; 57% women) from the Cardiovascular Health Study who were free of baseline AF. The combination of a normal electrocardiogram (ECG) + normal echocardiogram was assessed for the development of AF events. A normal ECG was defined as the absence of major or minor Minnesota code abnormalities. A normal echocardiogram was defined as the absence of contractile dysfunction, wall motion abnormalities, or abnormal left ventricular mass. Cox regression was used to compute the 10-year risk of developing AF.
RESULTS
During the 10-year study period, a total of 951 (18%) AF events were detected. A normal ECG (multivariable hazard ratio [HR]: 0.80, 95% confidence interval [CI]: 0.69-0.92) and normal echocardiogram (multivariable HR: 0.75, 95% CI: 0.65-0.87) were associated with a reduced risk of AF in isolation. This association improved in those with normal ECG + normal echocardiogram (multivariable HR: 0.66, 95% CI: 0.55-0.79) compared with participants who had abnormal ECG + abnormal echocardiogram (referent).
CONCLUSIONS
Normal findings on routine noninvasive cardiac assessment identify persons in whom the risk of AF is low. Further studies are needed to explore the utility of this profile regarding the decision to implement certain risk factor modification strategies in older adults to reduce AF burden. |
An Entropy Model for Loiterer Retrieval across Multiple Surveillance Cameras | Loitering is a suspicious behavior that often leads to criminal actions, such as pickpocketing and illegal entry. Tracking methods can determine suspicious behavior based on trajectory, but require continuous appearance and are difficult to scale up to multi-camera systems. Using the duration of appearance of features works on multiple cameras, but does not consider major aspects of loitering behavior, such as repeated appearance and the trajectory of candidates. We introduce an entropy model that maps the locations of a person's features onto a heatmap. It can be used as an abstraction of trajectory tracking across multiple surveillance cameras. We evaluate our method on several datasets and compare it to other loitering detection methods. The results show that our approach performs comparably to the state of the art, but can provide additional interesting candidates. |
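A minimal sketch of an entropy score over a location heatmap, consistent with the idea described above; the grid resolution and the reading of the score (a confined, repeatedly visited region yields low entropy, while positions spread across the scene yield high entropy) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def location_entropy(points, grid=(8, 8), bounds=((0, 1), (0, 1))):
    # Histogram a person's detected positions into a coarse heatmap,
    # then score the spread of that distribution with Shannon entropy.
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=grid, range=bounds)
    p = h.ravel() / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
loiterer = 0.5 + 0.03 * rng.standard_normal((200, 2))  # stays near one spot
passerby = rng.uniform(0, 1, (200, 2))                 # spread across scene
print(location_entropy(loiterer), location_entropy(passerby))
```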
Threats, Countermeasures and Attribution of Cyber Attacks on Critical Infrastructures | As Critical National Infrastructures are becoming more vulnerable to cyber attacks, their protection becomes a significant issue for any organization as well as a nation. Moreover, the ability to attribute is a vital element of avoiding impunity in cyberspace. In this article, we present main threats to critical infrastructures along with protective measures that one nation can take, and which are classified according to legal, technical, organizational, capacity building, and cooperation aspects. Finally we provide an overview of current methods and practices regarding cyber attribution and cyber peace keeping. |
Critical success factors in enterprise wide information management systems projects | In the past several years many organizations have initiated enterprise-wide information management systems projects, using such packages as SAP, PeopleSoft, and Oracle. These projects often represent the single largest investment in an information systems project in the history of these companies, and in many cases the largest single investment in any corporate-wide project. These enterprise-wide information management systems projects bring about a host of new questions. Some of these questions and issues are:
• What is the purpose and scope of the enterprise-wide information management systems project?
• What are the project objectives and outcomes to date?
• How was the investment in the integrated system justified? What were the tangible and intangible business benefits that were considered?
• What was the role and importance of top management support?
• How were business processes affected by the software?
• What investments in training, support, and maintenance were needed to assure project success?
• Was external vendor expertise used to accomplish certain aspects of the project?
• What was the role of end-users in project management and systems development? |
Neurohormonal and clinical sex differences in heart failure. | AIMS
Despite disparities in pathophysiology and disease manifestation between male and female patients with heart failure, studies focusing on sex differences in biomarkers are scarce. The purpose of this study was to assess sex-specific variation in clinical characteristics and biomarker levels to gain more understanding of the potential pathophysiological mechanisms underlying sex differences in heart failure.
METHODS AND RESULTS
Baseline demographic and clinical characteristics, multiple biomarkers, and outcomes were compared between men and women in 567 patients. The mean age of the study group was 71 ± 11 years and 38% were female. Women were older, had a higher body mass index and left ventricular ejection fraction, more hypertension, and received more diuretic and antidepressant therapy, but less ACE-inhibitor therapy compared with men. After 3 years, all-cause mortality was lower in women than men (37.0 vs. 43.9%, multivariable hazard ratio = 0.64; 95% confidence interval 0.45-0.92, P = 0.016). Levels of biomarkers related to inflammation [C-reactive protein, pentraxin 3, growth differentiation factor 15 (GDF-15), and interleukin 6] and extracellular matrix remodelling (syndecan-1 and periostin) were significantly lower in women compared with men. N-terminal pro-brain natriuretic peptide, TNF-αR1a, and GDF-15 showed the strongest interaction between sex and mortality.
CONCLUSION
Female heart failure patients have a distinct clinical presentation and better outcomes compared with male patients. The lower mortality was independent of differences in clinical characteristics, but differential sex associations between several biomarkers and mortality might partly explain the survival difference. |
Music listening enhances cognitive recovery and mood after middle cerebral artery stroke. | We know from animal studies that a stimulating and enriched environment can enhance recovery after stroke, but little is known about the effects of an enriched sound environment on recovery from neural damage in humans. In humans, music listening activates a widespread bilateral network of brain regions related to attention, semantic processing, memory, motor functions, and emotional processing. Music exposure also enhances emotional and cognitive functioning in healthy subjects and in various clinical patient groups. The potential role of music in neurological rehabilitation, however, has not been systematically investigated. This single-blind, randomized, and controlled trial was designed to determine whether everyday music listening can facilitate the recovery of cognitive functions and mood after stroke. In the acute recovery phase, 60 patients with a left or right hemisphere middle cerebral artery (MCA) stroke were randomly assigned to a music group, a language group, or a control group. During the following two months, the music and language groups listened daily to self-selected music or audio books, respectively, while the control group received no listening material. In addition, all patients received standard medical care and rehabilitation. All patients underwent an extensive neuropsychological assessment, which included a wide range of cognitive tests as well as mood and quality of life questionnaires, one week (baseline), 3 months, and 6 months after the stroke. Fifty-four patients completed the study. Results showed that recovery in the domains of verbal memory and focused attention improved significantly more in the music group than in the language and control groups. The music group also experienced less depressed and confused mood than the control group. These findings demonstrate for the first time that music listening during the early post-stroke stage can enhance cognitive recovery and prevent negative mood. The neural mechanisms potentially underlying these effects are discussed.
Histopathological grading of breast ductal carcinoma In Situ: validation of a web-based survey through intra-observer reproducibility analysis | BACKGROUND
Histopathological grading diagnosis of ductal carcinoma in situ (DCIS) of the breast may be very difficult even for experts, yet it is important for therapeutic decisions. The difficulty may be due to inaccurate and/or subjective application of the diagnostic criteria. The aim of this study was to investigate the intra-observer agreement between a traditional method and a newly developed web-based questionnaire for scoring breast DCIS.
METHODS
A cross-sectional study was carried out to evaluate the diagnostic agreement of an electronic questionnaire and its point scoring system with the subjective reading of digital images for 3 different DCIS grading systems: Holland, Van Nuys and the modified Black nuclear grade system. Three pathologists analyzed the same set of digitized images from 43 DCIS cases using two different web-based programs. In the first phase, they accessed a website with a newly created questionnaire and scoring system developed to allow determination of the histological grade of the cases. After at least 6 months, the pathologists read the same images again, but without the help of the questionnaire, indicating their diagnoses subjectively. Intra-observer agreement analysis was employed to validate this innovative web-based survey.
RESULTS
Overall, diagnostic reproducibility was similar for all histologic grading classification systems, with kappa values of 0.57 ± 0.10, 0.67 ± 0.09 and 0.67 ± 0.09 for the Holland, Van Nuys and modified Black nuclear grade systems, respectively. Only two 2-step diagnostic disagreements were found, one for Holland and another for Van Nuys. Both cases were overestimated by the web-based survey.
CONCLUSION
The diagnostic agreement between the web-based questionnaire and a traditional method, both using digital images, is moderate to good for the Holland, Van Nuys and modified Black nuclear grade systems. The use of a point scoring system does not appear to pose a major risk of large (2-step) diagnostic disagreements. These findings indicate that the use of this point scoring system in this web-based survey to objectively grade DCIS lesions is a useful diagnostic tool.
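For readers unfamiliar with the agreement statistic reported above, a minimal sketch of computing Cohen's kappa between two reading sessions follows; the paired gradings are made-up stand-ins, not study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired gradings of 10 cases by one pathologist: first with
# the web-based scoring questionnaire, later subjectively (values stand in
# for Van Nuys groups 1-3).
questionnaire = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
subjective    = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
print(cohen_kappa_score(questionnaire, subjective))             # unweighted
print(cohen_kappa_score(questionnaire, subjective, weights="linear"))
```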
Fast and accurate classification of echocardiograms using deep learning | Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, keeping echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent reserved for validation and testing on never-before-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep learning assisted echocardiographic interpretation.
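A deliberately small stand-in for a view classifier of this kind can be written in tf.keras; the input size, depth, and training setup below are illustrative assumptions, not the authors' architecture.

```python
import tensorflow as tf

# Grayscale echo frames downsampled to 64x64, softmax over 15 standard views.
def build_view_classifier(n_views=15):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(n_views, activation="softmax"),
    ])

model = build_view_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Per-clip prediction can average softmax outputs over several frames:
# clip_probs = model.predict(frames).mean(axis=0)
```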
Towards a unified Media-User Typology (MUT): A meta-analysis and review of the research literature on media-user typologies | Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide.
Key Risk Indicators – Their Role in Operational Risk Management and Measurement | In comparison with other operational risk management and measurement tools such as loss data (internal, public and consortium), risk-and-control assessments, capital allocation and performance measurement, key risk indicators (KRIs) remain one of the outstanding action items on most firms’ to-do lists, along with scenario analysis. KRIs are not new, but, up until recently, attempts to implement such programmes have often been characterised as less than effective. We believe that there are emerging best practices for KRI programmes that overcome some of the traditional challenges in KRI implementations. Further, we believe that investment in KRI programmes will reap many benefits, including the ability to clearly convey risk appetite, optimise risk and return, and improve the likelihood of achieving primary business goals through more effective operational risk management. In this chapter, we will seek to demystify KRIs, understand the basic fundamentals in identifying, specifying, selecting and implementing quality indicators, and consider how to monitor and report on them, in conjunction with other useful operational risk management information, to create powerful management reporting. We will also examine the potential for more advanced applications of KRIs, touching on the measurement of risk for correlation purposes, and the potential for composite indicators as well as KRI benchmarking. Finally, we will overview an industry KRI
Techniques, Applications and Challenging Issue in Text Mining | Text mining is a very exciting research area as it tries to discover knowledge from unstructured texts. These texts can be found on a computer desktop, intranets and the internet. The aim of this paper is to give an overview of text mining in the contexts of its techniques, application domains and the most challenging issue. The focus is given to fundamental methods of text mining, which include natural language processing and information extraction. This paper also gives a short review of domains which have employed text mining. The challenging issue in text mining, which is caused by the complexity of natural language, is also addressed in this paper.
Implementation of deep-learning based image classification on single board computer | In this paper, a deep-learning algorithm based on a convolutional neural network is implemented using Python and tflearn for image classification. A large number of different images containing two types of animals, namely cats and dogs, are used for classification. Two different CNN structures are used, with two and five layers respectively. It is shown that the CNN with more layers performs the classification with much higher accuracy. The best CNN model, with high accuracy and a small loss function, is deployed on a single-board computer.
Hardware Fingerprinting Using HTML5 | Device fingerprinting over the web has received much attention both by the research community and the commercial market alike. Almost all the fingerprinting features proposed to date depend on software run on the device. All of these features can be changed by the user, thereby thwarting the device’s fingerprint. In this position paper we argue that the recent emergence of the HTML5 standard gives rise to a new class of fingerprinting features that are based on the hardware of the device. Such features are much harder to mask or change, thus providing a higher degree of confidence in the fingerprint. We propose several possible fingerprint methods that allow an HTML5 web application to identify a device’s hardware. We also present an initial experiment to fingerprint a device’s GPU.
UK Back pain Exercise And Manipulation (UK BEAM) trial – national randomised trial of physical treatments for back pain in primary care: objectives, design and interventions [ISRCTN32683578] | BACKGROUND
Low back pain has major health and social implications. Although there have been many randomised controlled trials of manipulation and exercise for the management of low back pain, the role of these two treatments in its routine management remains unclear. A previous trial comparing private chiropractic treatment with National Health Service (NHS) outpatient treatment, which found a benefit from chiropractic treatment, has been criticised because it did not take treatment location into account. There are data to suggest that general exercise programmes may have beneficial effects on low back pain. The UK Medical Research Council (MRC) has funded this major trial of physical treatments for back pain, based in primary care. It aims to establish if, when added to best care in general practice, a defined package of spinal manipulation and a defined programme of exercise classes (Back to Fitness) improve participant-assessed outcomes. Additionally the trial compares outcomes between participants receiving the spinal manipulation in NHS premises and in private premises.
DESIGN
Randomised controlled trial using a 3 x 2 factorial design.
METHODS
We sought to randomise 1350 participants with simple low back pain of at least one month's duration. These came from 14 locations across the UK, each with a cluster of 10-15 general practices that were members of the MRC General Practice Research Framework (GPRF). All practices were trained in the active management of low back pain. Participants were randomised to this form of general practice care only, or this general practice care plus manipulation, or this general practice care plus exercise, or this general practice care plus manipulation followed by exercise. Those randomised to manipulation were further randomised to receive treatment in either NHS or private premises. Follow-up was by postal questionnaire at one, three and 12 months after randomisation. The primary analysis will consider the main treatment effects before interactions between the two treatment packages. Economic analysis will estimate the cost per unit of health utility gained by adding either or both of the treatment packages to general practice care.
Blocks and Fuel: Frameworks for deep learning | We introduce two Python frameworks to train neural networks on large datasets: Blocks and Fuel. Blocks is based on Theano, a linear algebra compiler with CUDA-support (Bastien et al., 2012; Bergstra et al., 2010). It facilitates the training of complex neural network models by providing parametrized Theano operations, attaching metadata to Theano’s symbolic computational graph, and providing an extensive set of utilities to assist training the networks, e.g. training algorithms, logging, monitoring, visualization, and serialization. Fuel provides a standard format for machine learning datasets. It allows the user to easily iterate over large datasets, performing many types of pre-processing on the fly. |
The Meaning of “Place” to Older Adults | Social workers are well-equipped to work with older adults and their families. The life course perspective provides a framework for seeing older adulthood as a stage of life in the continuum of life as well as a stage with its own characteristics and tasks. All the roles within social work practice can be adapted to this population. In addition, social workers working with older adults and their families must be cognizant of the specific issues that are associated with aging and older adulthood. The issue of loss on many levels is a frequent topic. One area of loss that is not frequently addressed is the loss associated with where one lives. The word place can have several meanings. One meaning has to do with where one lives. The second meaning has to do with one's status and role: one's place in society. For older adults both meanings become important issues as they and their families navigate the decisions that have to be made. While residence is based on the level of independence and competence of the older adult, the issue of place-as-status is a constant frustration for older adults. Issues of leaving one's place and losing status in the eyes of others evoke a myriad of feelings depending on the particular older adult. But given that as one ages there are naturally some physical and mental acuity losses, every older adult is subject to feelings of sadness, depression, hopelessness, and even anger. These feelings are natural responses to loss. Among the roles of social workers working with older adults is one of helping a mourning process move to a healthy acceptance of one's aging and planning rather than devolving into major depression.
Dynamic ferrofluid sculpture: organic shape-changing art forms | From ancient times, standing sculptures in Japan and elsewhere were made of materials such as clay, stone, wood, or metal. Materials were formed, modeled, modified, cut, and reshaped using processes appropriate for them, and the forms and textures of sculptures made from the materials did not change except by abrasion or surface corrosion. The invention of photography changed this world of unchanging art. Modern materials and electric and machine technology came to be used in artworks, inspiring kinetic art such as that created by Naum Gabo and László Moholy-Nagy. Since then, numerous artists, designers, and architects have created moving, kinetic works. Since the introduction of the computer (for example, in the cybernetic art proposed by Nicolas Schöffer), a number of artworks have been produced by processing external
The Body Appreciation Scale: development and psychometric evaluation. | Body image has been conceptualized and assessed almost exclusively in terms of its negative dimensions. Therefore, a measure reflecting body appreciation, an aspect of positive body image, was developed and evaluated via four independent samples of college women. Study 1 (N = 181) supported the Body Appreciation Scale's (BAS) unidimensionality and construct validity, as it was related as expected to body esteem, body surveillance, body shame, and psychological well-being. Study 2 (N = 327) cross-validated its unidimensionality. Study 3 (N = 424) further upheld the construct validity of the BAS, as it was: (a) related as expected to appearance evaluation, body preoccupation, body dissatisfaction, and eating disorder symptomatology and (b) unrelated to impression management. Studies 1 and 3 also indicated that the BAS predicted unique variance in psychological well-being above and beyond extant measures of body image. Study 4 (N = 177) demonstrated that its scores were stable over a 3-week period. All studies supported the internal consistency reliability of its scores. The BAS should prove useful for researchers and clinicians interested in positive body image assessment. |
Unsupervised cross-modal retrieval through adversarial learning | The core of existing cross-modal retrieval approaches is to close the gap between different modalities either by finding a maximally correlated subspace or by jointly learning a set of dictionaries. However, the statistical characteristics of the transformed features have not previously been considered. Inspired by recent advances in adversarial learning and domain adaptation, we propose a novel Unsupervised Cross-modal retrieval method based on Adversarial Learning, namely UCAL. In addition to maximizing the correlations between modalities, we add a further regularization by introducing adversarial learning. In particular, we introduce a modality classifier to predict the modality of a transformed feature. This can be viewed as a regularization of the statistical aspect of the feature transforms, which ensures that the transformed features are statistically indistinguishable across modalities. Experiments on popular multimodal datasets show that UCAL achieves competitive performance compared to state-of-the-art supervised cross-modal retrieval methods.
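A minimal PyTorch sketch of the adversarial regularizer described above; the feature dimensions, projector shapes, and loss bookkeeping are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

# Project each modality into a shared space; a modality classifier learns to
# tell image features from text features while the projectors learn to fool it.
img_proj = nn.Linear(4096, 256)      # image features -> shared space
txt_proj = nn.Linear(300, 256)       # text features -> shared space
modality_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
ce = nn.CrossEntropyLoss()

img, txt = torch.randn(8, 4096), torch.randn(8, 300)
z = torch.cat([img_proj(img), txt_proj(txt)])
labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()

d_loss = ce(modality_clf(z), labels)        # trains the modality classifier
g_loss = ce(modality_clf(z), 1 - labels)    # trains the projectors to confuse it
# In training, d_loss updates modality_clf while g_loss (together with a
# correlation or retrieval loss) updates img_proj and txt_proj.
```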
A Genome-Wide Analysis of Small Regulatory RNAs in the Human Pathogen Group A Streptococcus | The coordinated regulation of gene expression is essential for pathogens to infect and cause disease. A recently appreciated mechanism of regulation is that afforded by small regulatory RNA (sRNA) molecules. Here, we set out to assess the prevalence of sRNAs in the human bacterial pathogen group A Streptococcus (GAS). Genome-wide identification of candidate GAS sRNAs was performed through a tiling Affymetrix microarray approach and identified 40 candidate sRNAs within the M1T1 GAS strain MGAS2221. Together with a previous bioinformatic approach this brings the number of novel candidate sRNAs in GAS to 75, a number that approximates the number of GAS transcription factors. Transcripts were confirmed by Northern blot analysis for 16 of 32 candidate sRNAs tested, and the abundance of several of these sRNAs were shown to be temporally regulated. Six sRNAs were selected for further study and the promoter, transcriptional start site, and Rho-independent terminator identified for each. Significant variation was observed between the six sRNAs with respect to their stability during growth, and with respect to their inter- and/or intra-serotype-specific levels of abundance. To start to assess the contribution of sRNAs to gene regulation in M1T1 GAS we deleted the previously described sRNA PEL from four clinical isolates. Data from genome-wide expression microarray, quantitative RT-PCR, and Western blot analyses are consistent with PEL having no regulatory function in M1T1 GAS. The finding that candidate sRNA molecules are prevalent throughout the GAS genome provides significant impetus to the study of this fundamental gene-regulatory mechanism in an important human pathogen. |
Measuring ADHD behaviors in children with symptomatic accommodative dysfunction or convergence insufficiency: a preliminary study. | BACKGROUND
Accommodative dysfunction and convergence insufficiency (CI) are common pediatric vision problems that have been associated with an increase in the frequency and severity of vision-specific symptoms that affect children when doing schoolwork. However, the relationships between accommodative dysfunction or CI and other learning problems, such as attention deficit hyperactivity disorder (ADHD), are not well understood. The purpose of this study was to evaluate the frequency of ADHD behaviors in school-aged children with symptomatic accommodative dysfunction or CI.
METHODS
Children 8 to 15 years of age with symptomatic accommodative dysfunction or CI were recruited from the teaching clinic at the Southern California College of Optometry. Children with learning disabilities or ADHD were excluded. One parent of each child completed the Conners Parent Rating Scale-Revised Short Form (CPRS-R:S). The children's scores on the CPRS-R:S were compared with the normative sample.
RESULTS
Twenty-four children (9 boys and 15 girls) participated in the study, with a mean age of 10.93 years (SD = 1.75). On the CPRS-R:S, the cognitive problems/inattention, hyperactivity, and ADHD index scores were significantly different from normative values (p ≤ .001 for all tests).
CONCLUSIONS
The results from this preliminary study suggest that school-aged children with symptomatic accommodative dysfunction or CI have a higher frequency of behaviors related to school performance and attention as measured by the CPRS-R:S. |
Stability and Dissipativity Analysis of Static Neural Networks With Time Delay | This paper is concerned with the problems of stability and dissipativity analysis for static neural networks (NNs) with time delay. Some improved delay-dependent stability criteria are established for static NNs with time-varying or time-invariant delay using the delay partitioning technique. Based on these criteria, several delay-dependent sufficient conditions are given to guarantee the dissipativity of static NNs with time delay. All the given results in this paper are not only dependent upon the time delay but also upon the number of delay partitions. Some examples are given to illustrate the effectiveness and reduced conservatism of the proposed results. |
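As a concrete instance of the delay partitioning technique referred to above, a representative Lyapunov-Krasovskii functional with the delay split into m segments might look as follows; the system model and the specific functional are illustrative, not the paper's exact construction.

```latex
% Static NN with constant delay: \dot{x}(t) = -Ax(t) + f(Wx(t-\tau)) + J.
% Partition [t-\tau, t] into m equal segments and take
V(x_t) = x^{\top}(t) P x(t)
       + \sum_{i=1}^{m} \int_{t-\frac{i\tau}{m}}^{t-\frac{(i-1)\tau}{m}}
             x^{\top}(s)\, Q_i\, x(s)\, \mathrm{d}s
       + \frac{\tau}{m} \sum_{i=1}^{m}
             \int_{-\frac{i\tau}{m}}^{-\frac{(i-1)\tau}{m}} \int_{t+\theta}^{t}
             \dot{x}^{\top}(s)\, R_i\, \dot{x}(s)\, \mathrm{d}s\, \mathrm{d}\theta
% with P, Q_i, R_i > 0; requiring \dot{V} \le 0 along trajectories yields LMI
% conditions whose conservatism typically decreases as m grows.
```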
State Space Model based Trust Evaluation over Wireless Sensor Networks: An Iterative Particle Filter Approach | In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures the extent to which the data transmitted from this node should be trusted by the server node. Given the SSTM, we translate the trust evaluation problem into a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter; the resulting algorithm is termed the iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linear in the dimension of the state. This property is desirable especially for high-dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis.
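A toy sketch of the component-wise idea follows: one coordinate of the trust state is reweighted and resampled at a time, so a sweep costs time linear in the state dimension. The observation model and all numbers are made-up stand-ins, not the paper's SSTM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_particles = 5, 200
particles = rng.uniform(0, 1, size=(n_particles, n_nodes))  # trust in [0, 1]

def likelihood(trust, obs):
    # Toy observation model: reported data quality ~ N(trust, 0.1).
    return np.exp(-0.5 * ((obs - trust) / 0.1) ** 2)

obs = np.array([0.9, 0.8, 0.2, 0.85, 0.7])    # hypothetical per-node evidence
for _ in range(3):                            # a few IPF-style sweeps
    for d in range(n_nodes):                  # one dimension at a time
        w = likelihood(particles[:, d], obs[d])
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles[:, d] = particles[idx, d] + rng.normal(0, 0.02, n_particles)

print(particles.mean(axis=0))  # posterior-mean trust estimate per node
```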
Block-Matching Convolutional Neural Network for Image Denoising | There are two main streams in up-to-date image denoising algorithms: non-local self-similarity (NSS) prior based methods and convolutional neural network (CNN) based methods. The NSS based methods are favorable on images with regular and repetitive patterns, while the CNN based methods perform better on irregular structures. In this paper, we propose a block-matching convolutional neural network (BMCNN) method that combines the NSS prior and CNN. Initially, similar local patches in the input image are integrated into a 3D block. In order to prevent the noise from corrupting the block matching, we first apply an existing denoising algorithm to the noisy image. The denoised image is employed as a pilot signal for the block matching, and the denoising function for the block is then learned by a CNN structure. Experimental results show that the proposed BMCNN algorithm achieves state-of-the-art performance. In particular, BMCNN can restore both repetitive and irregular structures.
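The block-matching step described above can be sketched in a few lines; the patch size, search radius, and number of matched patches are illustrative choices, and the exhaustive search shown here is the naive version of what BM3D-style implementations accelerate.

```python
import numpy as np

# Locations of the patches most similar to a reference patch are found on
# the pilot (pre-denoised) image, and patches at those locations are
# stacked into the 3D block that the CNN denoises.
def match_block(pilot, noisy, y0, x0, patch=8, search=16, k=8):
    ref = pilot[y0:y0 + patch, x0:x0 + patch]
    scored = []
    for y in range(max(0, y0 - search), min(pilot.shape[0] - patch, y0 + search) + 1):
        for x in range(max(0, x0 - search), min(pilot.shape[1] - patch, x0 + search) + 1):
            d = np.sum((pilot[y:y + patch, x:x + patch] - ref) ** 2)
            scored.append((d, y, x))
    scored.sort(key=lambda t: t[0])
    return np.stack([noisy[y:y + patch, x:x + patch] for _, y, x in scored[:k]])

noisy = np.random.rand(64, 64)
pilot = noisy  # stand-in; in practice the output of a conventional denoiser
print(match_block(pilot, noisy, 20, 20).shape)  # (8, 8, 8)
```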
Ontology semantic approach to extraction of knowledge from holy quran | With the continued demand for Islamic knowledge, which is mainly based on the Quran as a source of knowledge and wisdom, systems that facilitate an easy search of the content of the Quran remain a considerable challenge. Although in recent years there have been tools for Quran search, most of these tools are based on keyword search, meaning that the user needs to know the correct keywords before being able to retrieve the content of al-Quran. In this paper, we propose a system that supports the end user in querying and exploring the Quran ontology. The system comprises user query reformulation against the Quran ontology stored and annotated in the knowledge base. The Quran ontology consists of noun concepts identified in al-Quran, and the relationship that exists between these concepts. The user writes a query in the natural language and the proposed system reformulates the query to match the content found in the knowledge base in order to retrieve the relevant answer. The answer is represented by the Quranic verse related to the user query. |
Effect of over- and under-expression of glyceraldehyde 3-phosphate dehydrogenase on tolerance of plants to water-deficit stress | While involved in a functional genomics program, we found that overexpression of the potato (Solanum tuberosum) glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene in yeast improves its water-deficit stress (drought) tolerance. However, the effect of altered (under- and over-) expression of GAPDH on the water-deficit stress tolerance of higher plants has not yet been studied. In this study, we used a versatile reverse genetics tool called virus-induced gene silencing to down-regulate the expression of the GAPDH gene in tobacco (Nicotiana benthamiana) and examine the effect of underexpression of GAPDH on the drought tolerance of higher plants. Leaf discs made from silenced and nonsilenced tobacco plants were subjected to water-deficit stress. Cell viability and chlorophyll content in stressed and nonstressed leaf discs were determined to quantify the effect of stress. Leaf discs made from the gene-silenced plants were more severely affected by the stress than those made from nonsilenced plants, implying the importance of the GAPDH gene in the drought tolerance of plants. Furthermore, to reiterate the involvement of GAPDH in the drought tolerance of plants, transgenic potato plants constitutively overexpressing the GAPDH gene were generated and their performance under drought conditions was analyzed. Transgenic potato plants showed improved drought tolerance when compared to wild-type potato. On the whole, our results confirm that the GAPDH gene plays an important role in the drought tolerance of higher plants, and its constitutive overexpression by genetic engineering can be used to improve the drought tolerance of crop plants like potato.
Body Space in Social Interactions: A Comparison of Reaching and Comfort Distance in Immersive Virtual Reality | BACKGROUND
Do peripersonal space for acting on objects and interpersonal space for interacting with conspecifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.
METHODOLOGY
Participants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).
PRINCIPAL FINDINGS
Comfort-distance was larger than in the other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs. males, expansion with a cylinder vs. a robot) and by the gender of participants.
CONCLUSIONS
These findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space. |
TADAM: Task dependent adaptive metric for improved few-shot learning | Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important for improving the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements of up to 14% in accuracy for certain metrics on the mini-ImageNet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state-of-the-art performance on mini-ImageNet. We confirm these results on another few-shot dataset that we introduce in this paper, based on CIFAR100.
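A minimal sketch of metric scaling in a prototypical-network-style classifier; the learnable temperature alpha and the squared-Euclidean metric are one concrete instantiation, not the full TADAM model with task conditioning.

```python
import torch
import torch.nn.functional as F

def scaled_proto_logits(queries, prototypes, alpha):
    """Negative squared Euclidean distances, rescaled by a learned alpha."""
    d2 = torch.cdist(queries, prototypes) ** 2   # (n_query, n_class)
    return -alpha * d2                           # logits for cross-entropy

alpha = torch.nn.Parameter(torch.tensor(5.0))    # learned with the embedding net
queries, protos = torch.randn(10, 64), torch.randn(5, 64)
loss = F.cross_entropy(scaled_proto_logits(queries, protos, alpha),
                       torch.randint(0, 5, (10,)))
loss.backward()     # alpha receives gradients like any other parameter
```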
The Impact of Internet Banking on Bank Performance and Risk: The Indian Experience | The paper describes the current state of Internet banking in India and discusses its implications for the Indian banking industry. In particular, it seeks to examine the impact of Internet banking on banks' performance and risk. Using information drawn from a survey of 85 scheduled commercial banks' websites during June 2007, the results show that nearly 57 percent of Indian commercial banks provide transactional Internet banking services. The univariate analysis indicates that Internet banks are larger banks and have better operating efficiency ratios and profitability than non-Internet banks. Internet banks rely more heavily on core deposits for funding than non-Internet banks do. However, the multiple regression results reveal that profitability and the offering of Internet banking do not have any significant association; on the other hand, Internet banking has a significant and negative association with the risk profile of
"And We Will Fight For Our Race!" A Measurement Study of Genetic Testing Conversations on Reddit and 4chan | Rapid progress in genomics has enabled a thriving market for “direct-to-consumer” genetic testing, whereby people have access to their genetic information without the involvement of a healthcare provider. Companies like 23andMe and AncestryDNA, which provide affordable health, genealogy, and ancestry reports, have already tested tens of millions of customers. At the same time, alas, far-right groups have also taken an interest in genetic testing, using them to attack minorities and prove their genetic “purity.” However, the relation between genetic testing and online hate has not really been studied by the scientific community. To address this gap, we present a measurement study shedding light on how genetic testing is discussed on Web communities in Reddit and 4chan. We collect 1.3M comments posted over 27 months using a set of 280 keywords related to genetic testing. We then use Latent Dirichlet Allocation, Google’s Perspective API, Perceptual Hashing, and word embeddings to identify trends, themes, and topics of discussion. Our analysis shows that genetic testing is discussed frequently on Reddit and 4chan, and often includes highly toxic language expressed through hateful, racist, and misogynistic comments. In particular, on 4chan’s politically incorrect board (/pol/), content from genetic testing conversations involves several alt-right personalities and openly antisemitic memes. Finally, we find that genetic testing appears in a few unexpected contexts, and that users seem to build groups ranging from technology enthusiasts to communities using it to promote fringe political views. |
Circularly Polarized U-Slot Antenna | A circularly polarized single-layer U-slot microstrip patch antenna is proposed. The suggested asymmetrical U-slot can generate the two orthogonal modes for circular polarization without chamfering any corner of the probe-fed square patch microstrip antenna. A parametric study has been carried out to investigate the effects caused by different arm lengths of the U-slot. The thickness of the foam substrate is about 8.5% of the wavelength at the operating frequency. The 3 dB axial ratio bandwidth of the antenna is 4%. Both experimental and theoretical results of the antenna are presented and discussed.
MULTI-MODAL BACKGROUND SUBTRACTION USING GAUSSIAN MIXTURE MODELS | Background subtraction is a common first step in the field of video processing, used to reduce the effective image size in subsequent processing steps by segmenting the mostly static background from the moving or changing foreground. In this paper, previous approaches to background modeling are extended to handle videos accompanied by information gained from a novel 2D/3D camera. This camera contains a color chip and a PMD chip, which operates on the Time-of-Flight principle. The background is estimated using the widely used Gaussian mixture model in color as well as in depth and amplitude modulation. A new matching function is presented that allows for better treatment of shadows and noise and reduces block artifacts. Problems and limitations in fusing high-resolution color information with low-resolution depth data are addressed, and the approach is tested with different parameters on several scenes; the results are compared to common and widely accepted methods.
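The per-pixel Gaussian mixture background model is available off the shelf in OpenCV for color video; a sketch of the baseline usage follows (the input filename is hypothetical). The paper's contribution extends the same mixture idea with depth and amplitude channels and a new matching function, which this baseline does not include.

```python
import cv2

# Per-pixel Gaussian mixture background model on the color stream only.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("scene_2d3d.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow, 0 = background
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) == 27:         # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```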
VANET security surveys | Vehicular ad hoc networks (VANETs), a subset of Mobile Ad hoc NETworks (MANETs), refer to a set of smart vehicles used on the road. These vehicles provide communication services among one another or with Road Side Infrastructure (RSU) based on wireless Local Area Network (LAN) technologies. The main benefits of VANETs are that they enhance road safety and vehicle security while protecting drivers’ privacy from attacks perpetrated by adversaries. Security is one of the most critical issues related to VANETs since the information transmitted is distributed in an open access environment. VANETs face many challenges. This paper presents a survey of the security issues and the challenges they generate. The various categories of applications in VANETs are introduced, as well as some security requirements, threats and certain architectures are proposed to solve the security problem. Finally, global security architecture for VANETs is proposed.
Learning to share visual appearance for multiclass object detection | We present a hierarchical classification model that allows rare objects to borrow statistical strength from related objects that have many training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. Our experimental results on the challenging object localization and detection task demonstrate that the proposed model substantially improves the accuracy of the standard single object detectors that ignore hierarchical structure altogether. |
Low Temperature Sintering of Silicon Carbide through a Liquid Polymer Precursor | There exists a need for a more developed and advanced cladding material for operational nuclear reactors. The current cladding, a Zirconium alloy, is quickly approaching the pinnacle of its ability to handle the increasing fuel load demands. Also, as evidenced by the incident at Fukushima, it reacts violently with steam at high temperatures, which creates hydrogen gas and led to the subsequent explosions there. Therefore, the material leading the investigation to replace the alloy is silicon carbide. Silicon carbide has been processed and manufactured through several techniques; however, this study focuses on the development of silicon carbide through the use of the polymer infiltration and pyrolysis (PIP) technique. This technique utilizes a polymer precursor, AHPCS, in accordance with low temperature sintering to enhance the density of silicon carbide. This procedure is environmentally friendly and makes use of low temperature and pressure processing parameters. The density achieved through this study was found to be 95 percent of fully dense silicon carbide. In an effort to enhance sintering and further increase density, nickel nanoparticles were added to the polymer precursor in two different proportions, 5 and 10 weight percent. The obtained densities were 96 and 97 percent, respectively. Hardness values were also obtained for the pure silicon carbide sample and the 5 and 10 weight percent nickel samples. They were 2600, 2700 and 2730 HV, respectively. In addition to the recorded densities and hardness values, scanning electron and optical microscopy images were also utilized to properly characterize the samples. From the images obtained through these instruments, it is seen that the samples appear quite dense with minimal open pores. Lastly, x-ray diffraction patterns were recorded to appropriately characterize the phase of each sample and assess any additional product formation with the addition of the nickel nanoparticles. It is apparent that the pure silicon carbide samples are crystalline in phase from the XRD examination. As for the nickel-added samples, there appears to be a formation of nickel carbide. The obtained results are a promising outlook on the advancement of silicon carbide as a potential cladding material for operational nuclear reactors.
Learning for serving deadline-constrained traffic in multi-channel wireless networks | We study the problem of serving randomly arriving and delay-sensitive traffic over a multi-channel communication system with time-varying channel states and unknown statistics. This problem deviates from the classical exploration-exploitation setting in that the design and analysis must accommodate the dynamics of packet availability and urgency as well as the cost of each channel use at the time of decision. To that end, we have developed and investigated two policies, one index-based (UCB-Deadline) and the other Bayesian (TS-Deadline), both of which perform dynamic channel allocation decisions that incorporate these traffic requirements and costs. Under symmetric channel conditions, we have proved that the UCB-Deadline policy can achieve bounded regret in the likely case where the cost of using a channel is not too high to prevent all transmissions, and logarithmic regret otherwise. In our numerical studies, we also show that TS-Deadline achieves superior performance over its UCB counterpart, making it a potentially useful alternative when fast convergence to optimal is important. |
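A sketch of the index computation underlying a UCB-style channel choice with per-use costs, in the spirit of UCB-Deadline; the exact policy in the paper also weighs packet deadlines and queue dynamics, which this toy index omits, and all numbers are hypothetical.

```python
import numpy as np

def ucb_channel(successes, tries, t, cost):
    """Pick the channel with the best optimistic net value, or skip (-1)."""
    mean = successes / np.maximum(tries, 1)
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(tries, 1))
    value = np.minimum(mean + bonus, 1.0) - cost   # optimistic rate minus cost
    best = int(np.argmax(value))
    return best if value[best] > 0 else -1         # transmit only if worthwhile

# Hypothetical running statistics over 3 channels after 100 decision slots:
print(ucb_channel(np.array([40, 10, 5]), np.array([50, 20, 30]), t=100, cost=0.2))
```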
spatstat: An R Package for Analyzing Spatial Point Patterns | spatstat is a package for analyzing spatial point pattern data. Its functionality includes exploratory data analysis, model-fitting, and simulation. It is designed to handle realistic datasets, including inhomogeneous point patterns, spatial sampling regions of arbitrary shape, extra covariate data, and ‘marks’ attached to the points of the point pattern. A unique feature of spatstat is its generic algorithm for fitting point process models to point pattern data. The interface to this algorithm is a function ppm that is strongly analogous to lm and glm. This paper is a general description of spatstat and an introduction for new users. |
Power Control in Ad-Hoc Networks: Theory, Architecture, Algorithm and Implementation of the COMPOW Protocol | We present a new protocol for power control in ad hoc networks. We describe the issues in conceptualizing the power control problem, and provide an architecturally simple as well as theoretically well founded solution. The solution is shown to simultaneously satisfy the three objectives of maximizing the traffic carrying capacity of the entire network, extending battery life through providing low power routes, and reducing the contention at the MAC layer. Further, the protocol has the plug and play feature that it can be employed in conjunction with any routing protocol that pro-actively maintains a routing table. The protocol, called COMPOW, has been implemented in the Linux kernel and we describe the software architecture and implementation details. |
Quantitative trait loci predicting circulating sex steroid hormones in men from the NCI-Breast and Prostate Cancer Cohort Consortium (BPC3). | Twin studies suggest a heritable component to circulating sex steroid hormones and sex hormone-binding globulin (SHBG). In the NCI-Breast and Prostate Cancer Cohort Consortium, 874 SNPs in 37 candidate genes in the sex steroid hormone pathway were examined in relation to circulating levels of SHBG (N = 4720), testosterone (N = 4678), 3α-androstanediol-glucuronide (N = 4767) and 17β-estradiol (N = 2014) in Caucasian men. rs1799941 in SHBG is highly significantly associated with circulating levels of SHBG (P = 4.52 × 10^-21), consistent with previous studies, and with testosterone (P = 7.54 × 10^-15), with mean differences of 26.9 and 14.3%, respectively, comparing wild-type to homozygous variant carriers. Further noteworthy novel findings were observed between SNPs in ESR1 and testosterone levels (rs722208, mean difference = 8.8%, P = 7.37 × 10^-6) and SRD5A2 and 3α-androstanediol-glucuronide (rs2208532, mean difference = 11.8%, P = 1.82 × 10^-6). Genetic variation in genes in the sex steroid hormone pathway is associated with differences in circulating SHBG and sex steroid hormones.
Restoration of depressed immune function in spinal cord injury patients receiving rehabilitation therapy | Both natural and adaptive immune responses were strikingly decreased 2 weeks after injury in 49 spinal cord injury patients, 28 tetraplegic and 21 paraplegic, compared to age-matched controls. All values are expressed as means. NK cell function decreased to 21.0% 2 weeks after spinal cord injury compared to 48.6% in controls. At 2 weeks, plasma ACTH values increased to 17.0 pg/ml in patients compared to 11.2 pg/ml in controls, and urine free cortisol levels were elevated to 162.4 μg/24 h in patients compared to 53.6 μg/24 h in controls. T cell function decreased to 40.2% of normal (lymphocyte transformation) by 3 months post injury. T cell activation (IL-2R) was diminished, i.e., 183.4 μg/ml compared to 328.2 μg/ml in controls. With rehabilitation therapy, NK cell function increased to 41.6% by 7 months post injury. NK cell-mediated lysis diminished sharply between 7 and 9 months, decreasing to 22.8% at 10 months and ultimately returning to the 2-week post-injury level. Rehabilitation therapy contributed to the restoration of T cell function to 92.0% of normal by 6 months post injury, where it remained for 6+ months. IL-2R values improved in parallel with lymphocyte transformation. Whereas NK cell-induced lysis remained depressed, i.e., 11.8% at 6 months and 11.4% at 12+ months, in patients not receiving therapy, the restoration of NK cell function to 40.6% at 6 months in rehabilitated patients decreased to 23.0% with cessation of treatment. NK cell-mediated lysis values in cervical injury patients were significantly less than those in the thoracic injury group. FIM scores of the two groups paralleled their NK cell function. With rehabilitation therapy, NK cell-mediated lysis in the cervical group increased from 15.2% to 28.4%, whereas it improved in the thoracic group from 26.8% to 43.7%. With rehabilitation therapy, lymphocyte transformation in the cervical group increased from 37.3% to 85.6% and improved in the thoracic group from 48.4% to 88.9%. With rehabilitation therapy, FIM scores improved from 49.7 to 74.0 in the cervical group and from 79.8 to 97.3 in thoracic patients, compared to 126 in healthy age-matched controls.
STAR: articulation training for young children | The Speech Training, Assessment, and Remediation (STAR) system is intended to assist Speech and Language Pathologists in treating children with articulation problems. The system is embedded in an interactive video game that is set in a spaceship and involves teaching aliens to “understand” selected words by spoken example. The sequence of events leads children through a series of successively more difficult speech production tasks, beginning with CV syllables and progressing to words/phrases. Word selection is further tailored to emphasize the contrastive nature of phonemes by the use of minimal pairs (e.g., run/won) in production sets. To assess children’s speech, a discrete hidden Markov model recognition engine is used [1]. Phone models were trained on the CMU Kids database [2]. Performance of the HMM recognizer was compared to perceptual ratings of speech recorded from children who substitute /w/ for /r/. The difference in log likelihood between /r/ and /w/ models correlates well with perceptual ratings of utterances containing substitution errors, but very poorly for correctly articulated examples. The poor correlation between perceptual and machine ratings for correctly articulated utterances may be due to very restricted variance in the perceptual data for those utterances.
From trust to intimacy: A new inventory for examining Erikson's stages of psychosocial development. | A new inventory for examining the first six of Erikson's psychosocial stages is described. The self-report questionnaire, developed in a pilot study of 97 adolescents and tested in a study of 622 adolescents, has 12 items for each subscale. Measures of reliability and validity are reported. It is concluded that the Erikson Psychosocial Stage Inventory (EPSI) is a useful measure for researchers interested in development from early adolescence and in mapping changes as a function of life events.
Closed-form Solution for IMU based LSD-SLAM Point Cloud Conversion into the Scaled 3D World Environment | SLAM is a very active research area in computer vision and robotics. For more effective SLAM implementations it is necessary to have reliable information about the environment, and the data should be aligned and scaled according to the real-world coordinate system. Monocular SLAM is an attractive sub-stream because of the low equipment cost, size and weight. In this paper we present a way to build a conversion from the LSD-SLAM coordinate space to real-world coordinates with a true metric scale, using IMU sensor data. The causes of the differences between the real and calculated spaces are explained, and the possibility of conversion between the spaces is proved. Additionally, a closed-form solution for the inter-space transformation is presented. A synthetic method of generating highly accurate and well-controlled input data for the LSD-SLAM algorithm is presented. Finally, the reconstructed 3D environment representation is delivered as an output of the implemented conversion.
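The closed-form inter-space transformation can be illustrated with the standard Umeyama-style similarity alignment between SLAM and metrically scaled point sets; this is a generic least-squares solution under stated assumptions, not necessarily the paper's exact derivation.

```python
import numpy as np

def align_similarity(P, Q):
    """Closed-form s, R, t minimizing sum ||Q_i - (s R P_i + t)||^2."""
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - muP, Q - muQ
    cov = Y.T @ X / len(P)                  # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                      # guard against reflections
    R = U @ D @ Vt
    var_P = (X ** 2).sum() / len(P)
    s = np.trace(np.diag(S) @ D) / var_P
    t = muQ - s * R @ muP
    return s, R, t

# Synthetic check: a known scale/rotation/translation is recovered.
rng = np.random.default_rng(1)
P = rng.normal(size=(100, 3))               # stand-in SLAM keyframe positions
c, s_ = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1]])
Q = 2.5 * P @ R_true.T + np.array([1.0, -2.0, 0.5])  # IMU-scaled positions
s, R, t = align_similarity(P, Q)
print(round(s, 3))                          # -> 2.5
```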
Geochronological and geochemical constraints on the heat source of thermal activity in the Rausu geothermal field, Shiretoko Peninsula, Hokkaido, Japan | The Rausu geothermal field is situated in the middle part of the Shiretoko Peninsula, east Hokkaido, Japan. It has been classified into “A-type” (characterized by high temperature/depth ratio) in terms of its Geothermal Activity Index. The geology of the area consists of Tertiary volcaniclastic rocks, a hornblende dacite intrusion and overlying Tertiary to Quaternary lavas. The dacite is 63 wt.% SiO2 and belongs to the calc-alkaline suite. The solidus and subsolidus temperatures of the dacite magma, calculated from mineral equilibrium, are 830°C and 741°C, respectively. The K-Ar age obtained from hornblende phenocryst is 3.0±0.8 Ma. The geological, geochronological and petrological data suggest that the heat source of the Rausu geothermal field is a long-lived, high-temperature granodioritic pluton existing beneath the intrusion. It is suggested that the thermal activity is caused by the upwelling of high-temperature fluids related to the pluton, through fracture networks within or around the intrusion. |
Business ecosystem as the new approach to complex adaptive business environments | This paper discusses the concept of the business ecosystem. The business ecosystem is a relatively new concept in the field of business research, and there is still a lot of work to be done to establish it. First, the subject is approached by examining biological ecosystems, especially how they are defined, how they evolve and how they are classified and structured. Second, different analogies of the biological ecosystem are reviewed, including the industrial ecosystem, the economy as an ecosystem, the digital business ecosystem and the social ecosystem. Third, the business ecosystem concept is outlined by discussing the views of its main contributors and then presenting the authors' own definition. Fourth, the emerging research field of complexity in the social sciences is brought in, because the authors consider ecosystems and business ecosystems to be complex, adaptive systems. The focal complexity aspects appearing in business ecosystems are presented: self-organization, emergence, co-evolution and adaptation. By connecting the business ecosystem concept to complexity research, it is possible to bring new insights to changing business environments.
How deep learning works - The geometry of deep learning | Why and how deep learning works well on different tasks remains a mystery from a theoretical perspective. In this paper we draw a geometric picture of the deep learning system by finding its analogies with two existing geometric structures, the geometry of quantum computations and the geometry of diffeomorphic template matching. In this framework, we give the geometric structures of different deep learning systems including convolutional neural networks, residual networks, recursive neural networks, recurrent neural networks and the equilibrium propagation framework. We also analyze the relationship between the geometric structures of different networks and their performance at an algorithmic level, so that the geometric framework may guide the design of the structures and algorithms of deep learning systems.
Huffman Coding-Based Adaptive Spatial Modulation | An antenna switch enables multiple antennas to share a common RF chain. It also offers an additional spatial dimension, i.e., the antenna index, that can be utilized for data transmission via both the signal space and the spatial dimension. In this paper, we propose a Huffman coding-based adaptive spatial modulation that generalizes both conventional spatial modulation and transmit antenna selection. Through Huffman coding, i.e., designing variable-length prefix codes, the transmit antennas can be activated with different probabilities. When the input signal is Gaussian distributed, the optimal antenna activation probability is derived by optimizing the channel capacity. To make the optimization tractable, closed-form upper and lower bounds are derived as effective approximations of the channel capacity. When the input is a discrete QAM signal, the optimal antenna activation probability is derived by minimizing the symbol error rate. Numerical results show that the proposed adaptive transmission offers considerable performance improvement over conventional spatial modulation and transmit antenna selection.
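A sketch of the Huffman construction implied above: building variable-length prefix codes over the antennas makes antenna i active with probability 2^-l_i (l_i the code length) when the mapped input bits are i.i.d. uniform. The target weights below are hypothetical, not derived from the paper's capacity optimization.

```python
import heapq
import itertools

def huffman_codes(weights):
    """Prefix codes for the given target activation weights per antenna;
    returns {antenna_index: bitstring}."""
    counter = itertools.count()          # tie-breaker so dicts never compare
    heap = [(w, next(counter), {i: ""}) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {k: "0" + v for k, v in c1.items()}
        merged.update({k: "1" + v for k, v in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(counter), merged))
    return heap[0][2]

codes = huffman_codes([0.5, 0.25, 0.125, 0.125])   # hypothetical targets
for ant, code in sorted(codes.items()):
    print(ant, code, "activation prob =", 2.0 ** -len(code))
```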
Study on application modes of military Internet of Things (MIOT) | Recently, the Internet of Things (IOT) has developed rapidly and has had a significant impact on the military field. This paper first proposes the conception of a military Internet of Things (MIOT) and analyzes the architecture of MIOT in detail. Then, three modes of MIOT, i.e., information sensing, information transmission and information serving, are respectively studied to show various military domain applications. Finally, an application scenario of MIOT in weapon control is given to validate the proposed application modes.
Cognitive radio: making software radios more personal | Software radios arc cmcrging as platforms for multiband multimode personal communications systems. Radio etiqucttc is the set of RF bands, air interfaces, protocols, and spatial and temporal patterns that modcrate the use of the radio spectrum. Cognitive radio extends the software radio with radio-domain model-based rcasoning about such ctiquettes. Cognitivc radio enhances the flexibility of personal services through a Radio Knowlcdge Representation Language. This language reprcsents knowledge of radio etiquettc, devices, software modules, propagation, nctworks, user needs, and application scenarios in a way that supports automated reasoning about thc needs of the user. This empowers software radios to conduct expressivc ncgotiations among peers about the use of radio spectrum across fluents of space, time, and uscr context. With RKRL, cognitivc radio agents may actively manipulate the protocol stack to adapt known etiquettes to better satis$ the LISCT’S nccds. This transforms radio nodes from blind executors of predefined protocols to radio-domain-aware intclligent agents that scarch out ways to dclivcr the services thc uscr wants even if that uscr docs not know how to obtain them. Softwarc radio [l] provides an idcal platform for thc rcalization of cognitive radio. Cognitive Radio: Making Software Radios More Personal J O S E P H MITOLA 111 A N D GERALD Q. M A G U I R E , J R . R O Y A L INSTITUTE OF TECHNOLOGY Global System for Mobile Communications (GSM) radio’s equalizer taps reflcct the channel multipath structure. A network might want to ask a handset, “How many distinguishable multipath components are you seeing?” Knowledge of thc internal states of the equalizer could be useful bccause in some reception areas, thcrc may be little or no multipath and 20 dB of extra signal-to-noisc ratio (SNR). Software radio processing capacity is wasted running a computationally intcnsive equalizer algorithm when no cqualizer is necessary. That processing capacity could be diverted to better use, or part of the processor might be put to sleep, saving battery life. In addition, the radio and network could agree to put data bits in the superfluous embedded training scquence, enhancing the payload data rate accordingly.’ Two problems arise. First, the network has no standard language with which to posc a question about cqualizer taps. Sccand, the handset has the answer in the time-domain structure of its equalizer taps, but cannot access this information. It has no computational description of its own structure. Thus, it does not “know what it knows.” Standards-setting bodies have been gradually making such internal data available to networks through specific air interfaces, as the needs of the technology dictate. This labor-intensive process takes ycars to accomplish. Radio Knowledge Represcntation Language (RKRL), on the other hand, provides a standard languagc within which such unanticipated data exchanges can be defined dynamically. Why might the need for such unanticipatcd exchanges arise? Debugging new software radio downloads might require access to internal software parameters. Creating personal services that diffcrentiate one servicc provider from another might be enhanced if the provider does not need to expose new ideas to the competition in the standards-setting process. And the time to deploy those personalized services could be reduced. 
Cognitive radio, through RKRL, knows that the natural language phrase equalizer taps refers to specific parameters of a tapped delay-line structure. This structure may be implemented in an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or an algorithm in a software radio. Since a cognitive radio has a model of its own internal structure, it can check the model to find out how the equalizer has been implemented. It then may retrieve the register values from the ASIC (e.g., using a JTAG port) or find the taps in the proper memory location of its software implementation. A radio that knows its own internal structure to this degree does not have to wait for a consortium, forum, or standards body to define a level H33492.x7 radio as one that can access its equalizer taps. The network can pose such an unanticipated question in (a standard) RKRL, and any RKRL-capable radio can answer it. To enable such a scenario, cognitive radio has an RKRL model of itself that includes the equalizer's structure and function, as illustrated in Fig. 1. In this example, the radio hardware consists of the antenna, the radio frequency (RF) conversion module, the modem, and the other modules shown in the hardware part of the figure. The baseband processor includes a baseband modem and a back-end control protocol stack. In addition, this processor contains a cognition engine and a set of computational models. The models consist of RKRL frames that describe the radio itself, including the equalizer, in the context of a comprehensive ontology, also written in RKRL. Using this ontology, the radio can track the user's environment over time and space. Cognitive radio, then, matches its internal models to external observations to understand what it means to commute to and from work, take a business trip to Europe, go on vacation, and so on. Clearly, significant memory, computational resources, and communications bandwidth are needed for cognitive radio, so this technology might not be deployable for some time. In addition, a collection of cognitive radios may not require human intervention to develop their own protocols. This raises a host of questions about the control of such complex adaptive agents, network stability, and the like. Initially, intervention will be required in order to ensure that networks of such radios remain stable (or that we know who to blame if this is not the case). Networks of such radios are complex … |
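To make the "know what it knows" idea concrete, here is an illustrative toy, not RKRL's actual syntax: the radio holds a computational model of its own structure and consults it to answer an unanticipated query about its equalizer taps. The query and field names below are assumptions for illustration only.

```python
# Illustrative toy of a radio self-model answering an unanticipated
# query; RKRL's real syntax and these query/field names are assumptions.
class CognitiveRadio:
    def __init__(self):
        # Computational description of the radio's own structure,
        # including where the equalizer lives and how to read its taps.
        self.self_model = {
            "equalizer": {
                "implementation": "software",         # or "asic", "fpga"
                "taps_location": "dsp_mem[0x200:0x210]",
                "read": lambda: [0.91, -0.12, 0.04],  # stand-in memory read
            }
        }

    def answer(self, query):
        """Consult the self-model instead of a hard-wired air interface."""
        if query == "equalizer.taps":
            return self.self_model["equalizer"]["read"]()
        raise KeyError(f"no model entry for {query!r}")

taps = CognitiveRadio().answer("equalizer.taps")
# Few significant taps -> little multipath -> the equalizer could sleep.
significant = sum(abs(t) > 0.05 for t in taps)
```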
The ratio of CRP to prealbumin levels predicts mortality in patients with hospital-acquired acute kidney injury | BACKGROUND
Animal and human studies suggest that inflammation and malnutrition are common in patients with acute kidney injury (AKI). However, only a few studies have reported that CRP, a marker of inflammation, or albumin, prealbumin and cholesterol, markers of nutritional status, are associated with the prognosis of AKI patients, and no study has examined whether a combination of inflammatory and nutritional markers could predict the mortality of AKI patients.
METHODS
155 patients with hospital-acquired AKI, defined according to the RIFLE (Risk, Injury, Failure, Loss, End-stage kidney disease) criteria, were recruited to this prospective cohort study. C-reactive protein (CRP) and the nutritional markers (albumin, prealbumin and cholesterol) measured at nephrology consultation were analyzed in relation to all-cause mortality. In addition, CRP and prealbumin were also measured in healthy controls (n = 45) and in patients on maintenance hemodialysis (n = 70) or peritoneal dialysis (n = 50), and compared with the AKI patients.
RESULTS
Compared with healthy controls and end-stage renal disease patients on maintenance hemodialysis or peritoneal dialysis, patients with AKI had significantly higher CRP/prealbumin ratios (p < 0.001). Significantly higher levels of serum CRP and lower levels of albumin, prealbumin and cholesterol were found in the patients with AKI who died within 28 days than in those who survived beyond 28 days. Similarly, the combined markers, i.e. the ratio of CRP to albumin (CRP/albumin), CRP/prealbumin and CRP/cholesterol, were also significantly higher in the former group (p < 0.001 for all). Multivariate analysis (Cox regression; a brief sketch follows this abstract) revealed that CRP/prealbumin was independently associated with mortality after adjustment for age, gender, sepsis and Sequential Organ Failure Assessment score (SOFA; p = 0.027), whereas the others (CRP, albumin, prealbumin, cholesterol, CRP/albumin and CRP/cholesterol) were not. The hazard ratio was 1.00 (reference), 1.85, 2.25 and 3.89 across increasing quartiles of CRP/prealbumin (p = 0.01 for trend).
CONCLUSIONS
Inflammation and malnutrition were common in patients with AKI. A higher ratio of CRP to prealbumin was associated with mortality in AKI patients independent of the severity of illness, and it may be a valuable addition to the SOFA score for predicting the prognosis of AKI patients. |
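A minimal sketch of the analysis described above, assuming a `lifelines` Cox model; the file name, column names and covariate coding are hypothetical, since the abstract does not specify them.

```python
# Hedged sketch: quartiles of CRP/prealbumin entered into a Cox model
# adjusted for age, gender, sepsis and SOFA. File and column names are
# illustrative assumptions, not the study's actual data dictionary.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("aki_cohort.csv")                 # hypothetical cohort file
df["crp_prealb"] = df["crp"] / df["prealbumin"]    # the combined marker
df["crp_prealb_q"] = pd.qcut(df["crp_prealb"], 4, labels=False)  # quartiles 0-3

cph = CoxPHFitter()
cph.fit(df[["time_days", "died_28d", "crp_prealb_q",
            "age", "male", "sepsis", "sofa"]],
        duration_col="time_days", event_col="died_28d")
cph.print_summary()   # hazard ratio per one-quartile increase in CRP/prealbumin
```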
Ideology and discourse analysis | Contrary to most traditional approaches, ideologies are defined here within a multidisciplinary framework that combines a social, cognitive and discursive component. As 'systems of ideas', ideologies are sociocognitively defined as shared representations of social groups, and more specifically as the 'axiomatic' principles of such representations. As the basis of a social group's self-image, ideologies organize its identity, actions, aims, norms and values, and resources, as well as its relations to other social groups. Ideologies are distinct from the sociocognitive basis of broader cultural communities, within which different ideological groups share fundamental beliefs such as their cultural knowledge. Ideologies are expressed and generally reproduced in the social practices of their members, and more particularly acquired, confirmed, changed and perpetuated through discourse. Although general properties of language and discourse are not, as such, ideologically marked, systematic discourse analysis offers powerful methods to study the structures and functions of underlying ideologies. The ideological polarization between ingroups and outgroups, a prominent feature of the structure of ideologies, may also be systematically studied at all levels of text and talk, e.g. by analysing how members of ingroups typically emphasize their own good deeds and properties and the bad ones of the outgroup, and mitigate or deny their own bad ones and the good ones of the outgroup. |
Implementation of DC/DC converter with high frequency transformer (DHFT) in hybrid AC/DC microgrid | The hybrid AC/DC microgrid is a compromise solution that caters for the increasing penetration of DC-compatible energy sources, storage and loads. In this paper, a DC/DC converter with a high-frequency transformer (DHFT) is proposed to replace the conventional bulky transformer for bus voltage matching and galvanic isolation. Various DHFT topologies are compared, and the CLLC type is recommended for its capabilities of bidirectional power flow, seamless transition and low switching loss. Different operating scenarios of the hybrid AC/DC microgrid are analyzed, and open-loop DHFT control is selected to simplify system-wide coordination. The DHFT is designed to maximize conversion efficiency and minimize output voltage variation across loading conditions. Lab-scale prototypes of the DHFT and the hybrid AC/DC microgrid have been developed for experimental verification, and the performance of both the DHFT and the overall system has been confirmed in steady and transient states. |
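As a small numeric aside, not from the paper: CLLC converters of this kind are typically operated near the series-resonant frequency of the tank, where the voltage gain is close to unity and soft switching is easiest to maintain. The component values below are examples, not the prototype's.

```python
# Series-resonant frequency of a CLLC tank: f_r = 1 / (2*pi*sqrt(L_r*C_r)).
# Component values are illustrative examples only.
import math

def resonant_frequency(L_r: float, C_r: float) -> float:
    """Return the series-resonant frequency in Hz for resonant inductance
    L_r (henries) and resonant capacitance C_r (farads)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))

print(resonant_frequency(L_r=60e-6, C_r=47e-9))   # ~95 kHz for these values
```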
Drug treatment or alleviating the negative consequences of imprisonment? A critical view of prison-based drug treatment in Denmark. | BACKGROUND
The availability of prison-based drug treatment has increased markedly throughout Europe over the last 15 years in terms of both volume and programme diversity. However, prison drug treatment faces problems and challenges because of the tension between ideologies of rehabilitation and punishment.
METHODS
This article reports on a study of four cannabis treatment programmes and four psychosocial drug treatment programmes in four Danish prisons during 2007. The data include the transcripts of 22 semi-structured qualitative interviews with counsellors and prison employees, prison statistics, and information about Danish laws and regulations.
RESULTS
These treatment programmes reflect the 'treatment guarantee' in Danish prisons. However, they are simultaneously embedded in a new policy of zero tolerance and intensified disciplinary sanctions. This ambivalence is reflected in the experiences of treatment counsellors: in the eyes of the prisoners they are, however reluctantly, associated with the prison institution; they experience severe opposition from prison officers; and the official goals of the programmes, such as making clients drug free and preparing them for a life without crime, are replaced by more pragmatic aims such as alleviating the pain of imprisonment felt by programme clients.
CONCLUSION
The article concludes that at a time when prison-based drug treatment is growing, it is crucial that we thoroughly research and critically discuss its content and the restrictions facing such treatment programmes. One way of doing this is through research with counsellors involved in delivering drug treatment services. By so doing, the programmes can become more pragmatic and focused, and alternatives to prison-based drug treatment can be seriously considered. |
Blue Screen Matting | A classical problem of imaging—the matting problem—is separation of a non-rectangular foreground image from a (usually) rectangular background image—for example, in a film frame, extraction of an actor from a background scene to allow substitution of a different background. Of the several attacks on this difficult and persistent problem, we discuss here only the special case of separating a desired foreground image from a background of constant, or almost constant, backing color. This backing color has often been blue, so the problem, and its solution, have been called blue screen matting. However, other backing colors, such as yellow or (increasingly) green, have also been used, so we often generalize to constant color matting. The mathematics of constant color matting is presented and proven to be unsolvable as generally practiced. This, of course, flies in the face of the fact that the technique is commonly used in film and video, so we demonstrate constraints on the general problem that lead to solutions, or at least significantly prune the search space of solutions. We shall also demonstrate that an algorithmic solution is possible by allowing the foreground object to be shot against two constant backing colors—in fact, against two completely arbitrary backings so long as they differ everywhere. |
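The two-backing result lends itself to a short worked example. Assuming premultiplied colors so that a composite obeys C = F + (1 − α)B, shooting the same foreground against two backings B1 and B2 that differ everywhere gives C1 − C2 = (1 − α)(B1 − B2), from which α follows by least squares over the color channels. A minimal sketch; the array names are illustrative.

```python
# Two-backing ("triangulation") matting sketch, assuming premultiplied
# colors: C = F + (1 - alpha) * B. All arrays are HxWx3 floats in [0, 1].
import numpy as np

def triangulation_matte(C1, C2, B1, B2, eps=1e-6):
    """Recover alpha and the premultiplied foreground F from two shots
    (C1, C2) of one foreground against two known backings (B1, B2)."""
    dC = C1 - C2                  # composite difference, per pixel
    dB = B1 - B2                  # backing difference; nonzero everywhere
    # Least squares over the three channels:
    #   alpha = 1 - sum_k dC_k * dB_k / sum_k dB_k^2
    num = np.sum(dC * dB, axis=-1)
    den = np.maximum(np.sum(dB * dB, axis=-1), eps)
    alpha = np.clip(1.0 - num / den, 0.0, 1.0)
    F = C1 - (1.0 - alpha)[..., None] * B1   # foreground from either shot
    return alpha, np.clip(F, 0.0, 1.0)
```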
Double circuit transmission line Fault Distance Location using Artificial Neural Network | Distance relays used for the protection of transmission lines suffer from under-reach, over-reach and maloperation due to high-impedance faults, and the problem is compounded on double-circuit transmission lines by the effect of zero-sequence mutual coupling. Every type of fault on a protected transmission line should be located correctly. This paper presents a single neural network for fault distance location for all ten fault types (3 LG, 3 LLG, 3 LL, 1 LLL) in both circuits of a double-circuit transmission line fed from sources at both ends. The technique uses data from one end only, and accurate fault distance location is achieved within one cycle of fault inception. The proposed Artificial Neural Network (ANN) based fault distance locator uses the fundamental components of the three-phase current signals of both circuits and the three-phase voltage signals to learn the hidden relationships in the input patterns. Once suitably trained, the network performs correctly under varying system parameters and conditions, i.e. fault type, fault location, fault resistance, fault inception angle, presence of mutual coupling and remote-source infeed. |
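To make the input/output shape concrete, here is a hedged sketch of such a one-end fault-distance locator; the network size, file names and training settings are placeholders, not the paper's.

```python
# Sketch of an ANN fault-distance locator for a double-circuit line.
# Inputs: fundamental phasors (real, imag) of the 3 phase voltages and
# the 3 phase currents of each circuit, e.g. from a full-cycle DFT
# -> 9 phasors x 2 components = 18 features per fault case.
import numpy as np
from sklearn.neural_network import MLPRegressor

X_train = np.load("fault_features.npy")   # shape (n_cases, 18); hypothetical
y_train = np.load("fault_distances.npy")  # distance (km) from the relaying end

locator = MLPRegressor(hidden_layer_sizes=(20, 10),
                       activation="tanh", max_iter=2000)
locator.fit(X_train, y_train)             # trained over all ten fault types

# At runtime: one cycle of post-fault samples -> the same 18 features
# -> estimated fault distance, regardless of fault type or circuit.
```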
The trend of historical and cultural processes in the forest-steppe of Western Siberia | The physical and geographic peculiarities of Western Siberia are its plain landscape (the largest lowland in the world), its latitudinal landscape zones, the limited amount and, in some regions, complete absence of stone material for tool production, and the absence of material for non-ferrous metal production. During the early stages of ancient history, these conditions restrained the peopling of Western Siberian territories. Mechanisms and processes of adaptation created the historical and cultural singularity of the territory of Western Siberia, which sets it apart from the other regions of North Asia. Another peculiarity is associated with the development of archaeological science in Western Siberia: it was based on the academic centers of Moscow and Leningrad (Saint Petersburg) and, since the 1960s, Novosibirsk as well; on the organization of local centers in universities and specialized higher education institutions; and on scientific schools. The new level of archaeological knowledge allows us to argue that, within the Asian territory of Russia, Western Siberia stands apart in terms of its historical and cultural content, its diversity and the dynamics of its processes. It also allows the development of archaeological cultures and their interactions to be considered in a new way, and the general and local trends of historical and cultural development to be traced across broad chronological limits and within different landscape zones. This work presents research aimed at these issues. The cartography of the Early Holocene finds of Western Siberia and their absolute chronology allow the conclusion that the peopling of the Western Siberian lowland was dispersed and depended on the changing natural environment and on the necessary resources. The Neolithic demonstrates another state of historical and cultural development: by that time the whole territory of Western Siberia was populated, archaeological cultures had been created, and the population had passed to a sedentary lifestyle. During the transition period between the ages, historical development was quite unstable; during the Bronze Age, the situation stabilized. The Seymino-Turbino phenomenon can be considered a manifestation of globalization, and the Andronovo culture had a globalization element as well. The Late Bronze Age was a time of cultural traditions developing from the Urals to the Yenissei. This changed during the transition from the Bronze to the Iron Age. In Scythian times, the trend toward global development revived. Thus, in the forest-steppe of Western Siberia, the trend of historical and cultural processes had a cyclic nature. |
A Novel Approach of an Absolute Encoder Coding Pattern | This paper presents a novel approach to the coding patterns of absolute rotary and linear optical encoders. The concept is based on the analysis of 2-D images to find a unique sequence that allows the angular shaft position to be determined unambiguously. The adopted coding method achieves a very high data density for a specified number of tracks on the code pattern disc. Encoders based on the proposed solution can be produced from readily available and inexpensive components, yet retain the high measuring accuracy and high dependability obtained with the classical approach. The optical device and a design of the processing system are proposed. Finally, the feasibility of the pattern coding method is proved with an encoder prototype. |
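The abstract does not give the code construction itself; one standard basis for "every window is unique" single-track codes is a binary De Bruijn sequence, sketched here for illustration only (the window size is an example, not the paper's parameter).

```python
# Illustrative only: a De Bruijn sequence B(2, n) contains every n-bit
# window exactly once (read cyclically), so one n-bit camera reading
# pins down the absolute shaft angle.
def de_bruijn(n):
    """Standard Lyndon-word construction of the binary B(2, n) sequence."""
    a = [0] * (n + 1)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

code = de_bruijn(8)                        # 256-bit track, 8-bit reading window
angle_of = {tuple(code[(i + k) % len(code)] for k in range(8)):
            i * 360.0 / len(code) for i in range(len(code))}
# len(angle_of) == 256: every 8-bit window maps to exactly one shaft angle.
```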
Adaptive bidding for display advertising | Motivated by the emergence of auction-based marketplaces for display ads such as the Right Media Exchange, we study the design of a bidding agent that implements a display advertising campaign by bidding in such a marketplace. The bidding agent must acquire a given number of impressions with a given target spend, when the highest external bid in the marketplace is drawn from an unknown distribution P. The quantity and spend constraints arise from the fact that display ads are usually sold on a CPM basis. We consider both the full information setting, where the winning price in each auction is announced publicly, and the partially observable setting where only the winner obtains information about the distribution; these differ in the penalty incurred by the agent while attempting to learn the distribution. We provide algorithms for both settings, and prove performance guarantees using bounds on uniform closeness from statistics, and techniques from online learning. We experimentally evaluate these algorithms: both algorithms perform very well with respect to both target quantity and spend; further, our algorithm for the partially observable case performs nearly as well as that for the fully observable setting despite the higher penalty incurred during learning. |
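The abstract describes the setting rather than the algorithms; as a toy illustration of the full-information case only, and not the paper's method, a bidder can pace itself against schedule using the publicly announced winning prices. All numbers below are illustrative.

```python
# Toy pacing bidder for the full-information setting (not the paper's
# algorithm): win about n_target impressions over n_rounds second-price
# auctions while spending about spend_target in total.
import random

def run_campaign(n_target, spend_target, n_rounds, sample_price):
    bid = spend_target / n_target          # start at the target CPM
    won, spend = 0, 0.0
    for t in range(n_rounds):
        price = sample_price()             # highest external bid ~ unknown P
        if bid > price:                    # second-price: the winner pays price
            won += 1
            spend += price
        # Multiplicative pacing: bid up if behind schedule and under budget.
        behind = won < n_target * (t + 1) / n_rounds
        under = spend < spend_target * (t + 1) / n_rounds
        bid *= 1.02 if (behind and under) else 0.98
    return won, spend

won, spend = run_campaign(500, 1000.0, 5000,
                          sample_price=lambda: random.lognormvariate(0.5, 0.6))
```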
Brain-computer interfaces for control of neuroprostheses: from synchronous to asynchronous mode of operation. | Transferring a brain-computer interface (BCI) from the laboratory environment into real world applications is directly related to the problem of identifying user intentions from brain signals without any additional information in real time. From the perspective of signal processing, the BCI has to have an uncued or asynchronous design. Based on the results of two clinical applications, where 'thought' control of neuroprostheses based on movement imagery in tetraplegic patients with a high spinal cord injury has been established, the general steps from a synchronous or cue-guided BCI to an internally driven asynchronous brain-switch are discussed. The future potential of BCI methods for various control purposes, especially for functional rehabilitation of tetraplegics using neuroprosthetics, is outlined. |
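As a concrete illustration of the synchronous-to-asynchronous shift described here: an asynchronous brain-switch must watch a continuous feature stream and decide on its own when an intention occurred. A common recipe is a threshold plus a dwell time plus a refractory period; the sketch below uses illustrative values, not the clinical systems' parameters.

```python
# Minimal asynchronous "brain-switch": monitor a band-power feature
# continuously; fire only when it stays above threshold for a dwell
# period, then hold off for a refractory period. Values are illustrative.
import numpy as np

def brain_switch(power_stream, threshold, dwell, refractory):
    """Yield the sample indices at which the switch triggers."""
    above, last = 0, -refractory
    for i, p in enumerate(power_stream):
        above = above + 1 if p > threshold else 0
        if above >= dwell and i - last >= refractory:
            last, above = i, 0
            yield i                        # one discrete switch event

feature = np.abs(np.random.randn(5000))    # stand-in for real EEG band power
events = list(brain_switch(feature, threshold=2.5, dwell=8, refractory=250))
```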