title: string (8 to 300 characters)
abstract: string (0 to 10k characters)
Clustering Sensors in Wireless Ad Hoc Networks Operating in a Threat Environment
Sensors in a data fusion environment over hostile territory are geographically dispersed and change location with time. In order to collect and process data from these sensors an equally flexible network of fusion beds (i.e., clusterheads) is required. To account for the hostile environment, we allow communication links between sensors and clusterheads to be unreliable. We develop a mixed integer linear programming (MILP) model to determine the clusterhead location strategy that maximizes the expected data covered minus the clusterhead reassignments, over a time horizon. A column generation (CG) heuristic is developed for this problem. Computational results show that CG performs much faster than a standard commercial solver and the typical optimality gap for large problems is less than 5%. Improvements to the basic model in the areas of modeling link failure, consideration of bandwidth capacity, and clusterhead changeover cost estimation are also discussed.
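For concreteness, a minimal sketch of how such an objective can be written as a MILP. The notation here is assumed for illustration, not taken from the paper: x_{jt} = 1 if a clusterhead is located at site j in period t, y_{ijt} = 1 if sensor i is assigned to clusterhead j in period t, p_{ijt} the probability that the sensor-clusterhead link survives, d_i the data volume at sensor i, c the reassignment penalty, and K the number of available clusterheads.

\[
\max \;\; \sum_{t}\sum_{i}\sum_{j} d_i\, p_{ijt}\, y_{ijt} \;-\; c \sum_{t\ge 2}\sum_{i}\sum_{j} z_{ijt}
\]
\[
\text{s.t.}\quad y_{ijt} \le x_{jt},\qquad \sum_{j} y_{ijt} \le 1,\qquad \sum_{j} x_{jt} \le K,\qquad z_{ijt} \ge y_{ijt} - y_{ij,t-1},\qquad x,\,y,\,z \in \{0,1\}.
\]

A column generation heuristic would then price out clusterhead location/assignment columns against the LP relaxation of a master problem of this kind.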
Treatment failure in pneumonia: impact of antibiotic treatment and cost analysis.
The aim of this study was to investigate treatment failure (TF) in hospitalised community-acquired pneumonia (CAP) patients with regard to initial antibiotic treatment and economic impact. CAP patients were included in two open, prospective multicentre studies assessing the direct costs for in-patient treatment. Patients received treatment either with moxifloxacin (MFX) or a nonstandardised antibiotic therapy. Any change in antibiotic therapy after >72 h of treatment to a broadened antibiotic spectrum was considered as TF. Overall, 1,236 patients (mean ± SD age 69.6 ± 16.8 yrs, 691 (55.9%) male) were included. TF occurred in 197 (15.9%) subjects and led to longer hospital stay (15.4 ± 7.3 days versus 9.8 ± 4.2 days; p < 0.001) and increased median treatment costs (€2,206 versus €1,284; p < 0.001). In total, 596 (48.2%) patients received MFX and experienced TF less often (10.9% versus 20.6%; p < 0.001). After controlling for confounders in multivariate analysis, the adjusted risk of TF was clearly reduced with MFX compared with β-lactam monotherapy (adjusted OR for MFX 0.43, 95% CI 0.27-0.68) and was more comparable with a β-lactam plus macrolide combination (BLM) (OR 0.68, 95% CI 0.38-1.21). In hospitalised CAP, TF is frequent and leads to prolonged hospital stay and increased treatment costs. Initial treatment with MFX or BLM is a possible strategy to prevent TF, and may thus reduce treatment costs.
Climatic and topographic controls on the style and timing of Late Quaternary glaciation throughout Tibet and the Himalaya defined by 10Be cosmogenic radionuclide surface exposure dating
Temporal and spatial changes in glacier cover throughout the Late Quaternary in Tibet and the bordering mountains are poorly defined because of the inaccessibility and vastness of the region, and the lack of numerical dating. To help reconstruct the timing and extent of glaciation throughout Tibet and the bordering mountains, we use geomorphic mapping and 10Be cosmogenic radionuclide (CRN) surface dating in study areas in southeastern (Gonga Shan), southern (Karola Pass) and central (Western Nyainqentanggulha Shan and Tanggula Shan) Tibet, and we compare these with recently determined numerical chronologies in other parts of the plateau and its borderlands. Each of the study regions receives its precipitation mainly during the south Asian summer monsoon when it falls as snow at high altitudes. Gonga Shan receives the most precipitation (>2000 mm a⁻¹) while, near the margins of monsoon influence, the Karola Pass receives moderate amounts of precipitation (500–600 mm a⁻¹) and, in the interior of the plateau, little precipitation falls on the western Nyainqentanggulha Shan (~300 mm a⁻¹) and the Tanggula Shan (400–700 mm a⁻¹). The higher precipitation values for the Tanggula Shan are due to strong orographic effects. In each region, at least three sets of moraines and associated landforms are preserved, providing evidence for multiple glaciations. The 10Be CRN surface exposure dating shows that the formation of moraines in Gonga Shan occurred during the early–mid Holocene, Neoglacial and Little Ice Age, on the Karola Pass during the Lateglacial, Early Holocene and Neoglacial, in the Nyainqentanggulha Shan during the early part of the last glacial cycle, global Last Glacial Maximum and Lateglacial, and on the Tanggula Shan during the penultimate glacial cycle and the early part of the last glacial cycle. The oldest moraine succession in each of these regions varies from the early Holocene (Gonga Shan), Lateglacial (Karola Pass), early Last Glacial (western Nyainqentanggulha Shan), and penultimate glacial cycle (Tanggula Shan). We believe that the regional patterns and timing of glaciation reflect temporal and spatial variability in the south Asian monsoon and, in particular, in regional precipitation gradients. In zones of greater aridity, the extent of glaciation has become increasingly restricted throughout the Late Quaternary, leading to the preservation of old (>100 ka) glacial landforms. In contrast, in regions that are very strongly influenced by the monsoon (>1600 mm a⁻¹), the preservation potential of pre-Lateglacial moraine successions is generally extremely poor. This is possibly because Lateglacial and Holocene glacial advances may have been more extensive than earlier glaciations and hence may have destroyed any landform or sedimentary evidence of earlier glaciations. Furthermore, the intense denudation, mainly by fluvial and mass movement processes, which characterizes these wetter environments, results in rapid erosion and re-sedimentation of glacial and associated landforms, which also contributes to their poor preservation potential.
Fibrinogen-induced perivascular microglial clustering is required for the development of axonal damage in neuroinflammation
Blood-brain barrier disruption, microglial activation and neurodegeneration are hallmarks of multiple sclerosis. However, the initial triggers that activate innate immune responses and their role in axonal damage remain unknown. Here we show that the blood protein fibrinogen induces rapid microglial responses toward the vasculature and is required for axonal damage in neuroinflammation. Using in vivo two-photon microscopy, we demonstrate that microglia form perivascular clusters before myelin loss or paralysis onset and that, of the plasma proteins, fibrinogen specifically induces rapid and sustained microglial responses in vivo. Fibrinogen leakage correlates with areas of axonal damage and induces reactive oxygen species release in microglia. Blocking fibrin formation with anticoagulant treatment or genetically eliminating the fibrinogen binding motif recognized by the microglial integrin receptor CD11b/CD18 inhibits perivascular microglial clustering and axonal damage. Thus, early and progressive perivascular microglial clustering triggered by fibrinogen leakage upon blood-brain barrier disruption contributes to axonal damage in neuroinflammatory disease.
Contextual LSTM (CLSTM) models for Large scale NLP tasks
Documents exhibit sequential structure at multiple levels of abstraction (e.g., sentences, paragraphs, sections). These abstractions constitute a natural hierarchy for representing the context in which to infer the meaning of words and larger fragments of text. In this paper, we present CLSTM (Contextual LSTM), an extension of the recurrent neural network LSTM (Long-Short Term Memory) model, where we incorporate contextual features (e.g., topics) into the model. We evaluate CLSTM on three specific NLP tasks: word prediction, next sentence selection, and sentence topic prediction. Results from experiments run on two corpora, English documents in Wikipedia and a subset of articles from a recent snapshot of English Google News, indicate that using both words and topics as features improves performance of the CLSTM models over baseline LSTM models for these tasks. For example on the next sentence selection task, we get relative accuracy improvements of 21% for the Wikipedia dataset and 18% for the Google News dataset. This clearly demonstrates the significant benefit of using context appropriately in natural language (NL) tasks. This has implications for a wide variety of NL applications like question answering, sentence completion, paraphrase generation, and next utterance prediction in dialog systems.
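A minimal sketch of how a topic signal can be fed into an LSTM next-word model in the spirit described above: the segment-level topic id is embedded and concatenated with each word embedding before the recurrent layer. The architecture, names and dimensions below are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical "contextual" LSTM: topic embedding concatenated to word embeddings.
import torch
import torch.nn as nn

class ContextualLSTM(nn.Module):
    def __init__(self, vocab_size, topic_count, word_dim=128, topic_dim=32, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.topic_emb = nn.Embedding(topic_count, topic_dim)
        self.lstm = nn.LSTM(word_dim + topic_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # next-word prediction head

    def forward(self, words, topics):
        # words, topics: (batch, seq_len) integer ids; topics typically repeat the
        # segment-level topic id at every position of the segment.
        x = torch.cat([self.word_emb(words), self.topic_emb(topics)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # logits for the next word at each position

model = ContextualLSTM(vocab_size=10000, topic_count=50)
logits = model(torch.randint(0, 10000, (2, 12)), torch.randint(0, 50, (2, 12)))
```

The same trick carries over to the sentence-selection and topic-prediction tasks by swapping the output head; only the input concatenation is essential to the idea.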
STREAMER: A distributed framework for incremental closeness centrality computation
Networks are commonly used to model traffic patterns, social interactions, or links between web pages. The nodes in a network do not all possess the same characteristics: some nodes are naturally more connected and some nodes can be more important. Closeness centrality (CC) is a global metric that quantifies how important a given node is in the network. When the network is dynamic and keeps changing, the relative importance of the nodes also changes. Even with the best known algorithm, recomputing the CC scores from scratch after each modification is impractical. In this paper, we propose Streamer, a distributed-memory framework for incrementally maintaining the closeness centrality scores of a network upon changes. It leverages pipelined and replicated parallelism and takes NUMA effects into account. It speeds up the maintenance of the CC scores of a real graph with 916K vertices and 4.3M edges by a factor of 497 using a 64-node cluster.
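As a rough illustration of the incremental idea (a toy sketch, not STREAMER's distributed, pipelined implementation; caching of BFS results is elided here), recomputation can be skipped for every source whose shortest-path distances are provably unchanged by an edge insertion:

```python
# Closeness centrality with a simple "unaffected source" filter: after inserting
# edge (u, v), a source s with |d(s, u) - d(s, v)| <= 1 keeps all its distances.
from collections import deque

def bfs_distances(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def closeness(adj, s):
    dist = bfs_distances(adj, s)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total > 0 else 0.0

def insert_edge_and_update(adj, cc, u, v):
    stale = []
    for s in adj:
        d = bfs_distances(adj, s)        # a real system would keep these cached
        du, dv = d.get(u), d.get(v)
        if du is None and dv is None:
            continue                     # s reaches neither endpoint: unaffected
        if du is None or dv is None or abs(du - dv) > 1:
            stale.append(s)
    adj[u].add(v)
    adj[v].add(u)
    for s in stale:                      # only the affected sources are redone
        cc[s] = closeness(adj, s)
    return cc

adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: set()}
cc = {s: closeness(adj, s) for s in adj}
cc = insert_edge_and_update(adj, cc, 2, 3)
```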
A Psychometric Study of Empowerment and Confidence among Veterans with Psychiatric Disabilities
The word "recovery" as used in every day language is taken by most people to mean a cure, or the complete absence of illness. In the mental health field, the term has increasingly been given a broader meaning that addresses the multi-faceted process of living a full and meaningful life with a mental illness (Resnick, Fontana, Lehman, & Rosenheck, 2005). With the release of prominent commission reports such as the President's New Freedom Commission report (President's New Freedom Commission on Mental Health, 2003) and SAMHSA's National Consensus Statement on Recovery (e.g., SAMHSA, 2004), as well as the growing recognition of the importance of broader conceptualizations of living with mental illness, identifying tools for reliably and validly measuring recovery has become increasingly necessary. As Mancini (2008) has mused, based on the high level of interest in the recovery concept, one might expect the field to have developed empirically-supported definitions of the term and to have identified well-defined recovery-oriented practices supported by scientific data. Yet there is little consistency or consensus across recovery definitions (Resnick et al., 2005; Silverstein & Bellack, 2008), with the same terms sometimes used to describe different constructs, and different terms used to describe similar constructs, making it difficult to generalize across studies. For example, Figure 1 is an illustration of some potential recovery domains. In this figure self-esteem and optimism are included twice, representing different theoretical perspectives on their placement in a recovery definition. Empowerment is an often cited recovery domain that has been linked empirically with participation in peer support (Burti et al., 2005; Dumont & Jones, 2002; Resnick & Rosenheck, 2008; Rogers et al., 2007), working for pay, and participation in family psycho-education (Resnick, Rosenheck, & Lehman, 2004). Rogers et al. (1997) using a mixed-methods approach, created a tool to measure empowerment, and identified five subordinate factors: self-esteem/self-efficacy, power-powerlessness, community activism and autonomy, optimism and control over the future, and righteous anger. Carpinello et al. (2000) identified a related concept, confidence, with similar components to those identified by Rogers et al.: optimism, coping, and advocacy, suggesting overlap between the operationalization of empowerment by Rogers et al. and that of confidence by Carpinello. The current study is an evaluation of the psychometric properties of these two measures and their interrelationships. We examine the internal consistency and test-retest reliability for each measure and examine convergent and discriminant validity of both the total scale and subscales in a sample of veterans receiving community-based outpatient mental health services. [FIGURE 1 OMITTED] Method Subjects Participants consisted of 296 veterans with severe mental illness who were admitted to the Community Reintegration Program, a community-based outpatient program at the Errera Community Care Center of the V.A. Connecticut Healthcare System between 2002 and 2006, and who agreed to participate in a quasi-experimental efficacy study of a peer education program for veterans. In the parent study, participants were recruited in two cohorts, but are pooled into a single sample for the present study, and thus represent both control and experimental groups (Resnick & Rosenheck, 2008). 
As summarized in Table 1, respondents were predominantly male (95%) and white (66%), averaging 48.5 years of age and 12.6 years of education. One-third (36%) indicated regular full- or part-time employment. Although a comparable number (34%) reported unemployment due to disability, only one in five (19%) were receiving service-connected disability payments from the Veterans Administration for either medical or psychiatric reasons. PTSD symptom severity was high (mean ± SD = 46. …
DECAF: Detecting and Characterizing Ad Fraud in Mobile Apps
Ad networks for mobile apps require inspection of the visual layout of their ads to detect certain types of placement frauds. Doing this manually is error prone, and does not scale to the sizes of today’s app stores. In this paper, we design a system called DECAF to automatically discover various placement frauds scalably and effectively. DECAF uses automated app navigation, together with optimizations to scan through a large number of visual elements within a limited time. It also includes a framework for efficiently detecting whether ads within an app violate an extensible set of rules that govern ad placement and display. We have implemented DECAF for Windows-based mobile platforms, and applied it to 1,150 tablet apps and 50,000 phone apps in order to characterize the prevalence of ad frauds. DECAF has been used by the ad fraud team in Microsoft and has helped find many instances of ad frauds.
Short-term plasticity of the human auditory cortex
Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music "notched" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.
Multifunctional and compact 3D FMCW MIMO radar system with rectangular array for medium-range applications
Ever since the era of autonomous systems started, multisensor platforms have become a key topic in aerospace, automotive, and robotics industries. Multifunctional systems integrate more than one sensor and/or actuator in one device and are capable of creating synergies between them. As a consequence, their performance is superior compared to single-sensor systems, for different reasons [1]-[3].
Towards Automated Classification of Firmware Images and Identification of Embedded Devices
Embedded systems, as opposed to traditional computers, bring an incredible diversity. The number of devices manufactured is constantly increasing and each runs dedicated software, commonly known as firmware. Full firmware images are often delivered as multiple releases, correcting bugs and vulnerabilities, or adding new features. Unfortunately, there is no centralized or standardized firmware distribution mechanism. It is therefore difficult to track which vendor or device a firmware package belongs to, or to identify which firmware version is used in deployed embedded devices. At the same time, discovering devices that run vulnerable firmware packages on public and private networks is crucial to the security of those networks. In this paper, we address these problems with two different, yet complementary approaches: firmware classification and embedded web interface fingerprinting. We use supervised machine learning on a database subset of real-world firmware files. For this, we first tell firmware images apart from other kinds of files and then we classify firmware images per vendor or device type. Next, we fingerprint embedded web interfaces of both physical and emulated devices. This allows recognition of web-enabled devices connected to the network. In some cases, this complementary approach makes it possible to logically link web-enabled online devices with the corresponding firmware package that is running on the devices. Finally, we test the firmware classification approach on 215 images with an accuracy of 93.5%, and the device fingerprinting approach on 31 web interfaces with 89.4% accuracy.
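A hypothetical sketch of the first stage (telling firmware images apart from other files) with supervised learning. The byte-level features below are illustrative assumptions, not the paper's feature set, and the two training blobs are synthetic placeholders.

```python
# Toy firmware-vs-other classifier on simple byte statistics (illustrative only).
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def byte_features(blob: bytes):
    counts = Counter(blob)
    n = max(len(blob), 1)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    printable = sum(c for b, c in counts.items() if 32 <= b < 127) / n
    zeros = counts.get(0, 0) / n
    return [len(blob), entropy, printable, zeros]

# X: feature rows for labelled sample files, y: 1 = firmware image, 0 = other file
X = [byte_features(b) for b in (b"\x00" * 512 + b"\x7fELF" * 64, b"plain text " * 100)]
y = [1, 0]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([byte_features(b"\x00" * 256 + b"\xde\xad\xbe\xef" * 32)]))
```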
A Survey of Automated Text Simplification
Text simplification modifies syntax and lexicon to improve the understandability of language for an end user. This survey identifies and classifies simplification research within the period 1998-2013. Simplification has many applications, including support for second-language learners, preprocessing in NLP pipelines, and assistive technology. There are many approaches to the simplification task, including lexical, syntactic, statistical machine translation and hybrid techniques. This survey also explores the current challenges which this field faces. Text simplification is a non-trivial task which is rapidly growing into its own field. This survey gives an overview of contemporary research whilst taking into account the history that has brought text simplification to its current state. Keywords—Text Simplification, Lexical Simplification, Syntactic Simplification
Lease or Buy? A Structural Model of the Vehicle Acquisition Decision
Despite the growing popularity of leasing as an alternative to purchasing a vehicle, there is very little research on how consumers choose among various leasing and financing (namely buying) contracts and how this choice affects the brand they choose. In this paper therefore, we develop a structural model of the consumer's choice of automobile brand and the related decision of whether to lease or buy it. We conceptualize the leasing and buying of the same vehicle as two different goods, each with its own costs and benefits. The differences between the two types of contracts are summarized along three dimensions: (i) the "net price" or financial cost of the contract, (ii) maintenance and repair costs and (iii) operating costs, which depend on the consumer's driving behavior. Based on consumer utility maximization, we derive a nested logit of brand and contract choice that captures the tradeoffs among all three costs. The model is estimated on a dataset of new car purchases from the near-luxury segment of the automobile market. The optimal choice of brand and contract is determined by the consumer's implicit interest rate and the number of miles she expects to drive, both of which are estimated as parameters of the model. The empirical results yield several interesting findings. We find that (i) cars that deteriorate faster are more likely to be leased than bought, (ii) the estimated implicit interest rate is higher than the market rate, which implies that consumers do not make efficient tradeoffs between the net price and operating costs and may often incorrectly choose to lease and (iii) the estimate of the annual expected mileage indicates that most consumers would incur substantial penalties if they lease, which explains why buying or financing continues to be more popular than leasing. This research also provides several interesting managerial insights into the effectiveness of various promotional instruments. We examine this issue by looking at (i) sales response to a promotion, (ii) the ability of the promotion to draw sales from other brands and (iii) its overall profitability. We find, for example, that although the sales response to a cash rebate on a lease is greater than an equivalent increase in the residual value, under certain conditions and for certain brands, a residual value promotion yields higher profits. These findings are of particular value to manufacturers in the prevailing competitive environment, which is marked by the extensive use of large rebates and 0% APR offers.
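For readers unfamiliar with the model class, a generic two-level nested logit of the kind described, with brands as nests and contracts (lease vs. buy/finance) within each nest. The notation is illustrative, not the paper's: V_{bc} is the deterministic utility of contract c for brand b, W_b a brand-level utility, and \lambda_b the nesting (dissimilarity) parameter.

\[
P(c \mid b) = \frac{e^{V_{bc}/\lambda_b}}{\sum_{c'} e^{V_{bc'}/\lambda_b}},\qquad
IV_b = \lambda_b \ln \sum_{c'} e^{V_{bc'}/\lambda_b},\qquad
P(b) = \frac{e^{W_b + IV_b}}{\sum_{b'} e^{W_{b'} + IV_{b'}}},\qquad
P(b,c) = P(b)\,P(c \mid b).
\]

In this framework the net price, maintenance and operating costs enter V_{bc}, which is how the tradeoffs among the three cost dimensions are captured.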
[Late-onset lupus in the elderly after 65 years: retrospective study of 18 cases].
OBJECTIVE The aim of our study was to investigate the characteristics of late-onset lupus, with onset after 65 years of age, compared with younger patients. METHOD Patients with lupus diagnosed after 65 years were investigated in four French hospitals between 1985 and 2013. Patients with 4 ACR criteria or more were included. Clinical and biological characteristics, prognosis, treatment and comorbidities were described retrospectively and compared with the cohort of 1000 lupus patients of Cervera et al. RESULTS Eighteen patients were included (14 women and 4 men). The most frequent features were arthritis (13/18) and skin involvement (9/18). Hemolytic anemia and thrombosis were more frequently found in elderly lupus (p < 0.05). During the disease course, only cutaneous involvement was less frequent than in younger subjects (p < 0.05). Corticosteroids were often used (16/18), but iatrogenic complications were frequent (10/16). CONCLUSION Diagnosis is difficult because of non-specific clinical features. Treatment requires rigorous follow-up because of iatrogenic complications.
Boosting bit rates in noninvasive EEG single-trial classifications by feature combination and multiclass paradigms
Noninvasive electroencephalogram (EEG) recordings provide for easy and safe access to human neocortical processes which can be exploited for a brain-computer interface (BCI). At present, however, the use of BCIs is severely limited by low bit-transfer rates. We systematically analyze and develop two recent concepts, both capable of enhancing the information gain from multichannel scalp EEG recordings: 1) the combination of classifiers, each specifically tailored for different physiological phenomena, e.g., slow cortical potential shifts, such as the premovement Bereitschaftspotential or differences in spatio-spectral distributions of brain activity (i.e., focal event-related desynchronizations) and 2) behavioral paradigms inducing the subjects to generate one out of several brain states (multiclass approach) which all bear a distinctive spatio-temporal signature well discriminable in the standard scalp EEG. We derive information-theoretic predictions and demonstrate their relevance in experimental data. We will show that a suitably arranged interaction between these concepts can significantly boost BCI performances.
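For reference, the bit-rate estimate most commonly used in the BCI literature (often attributed to Wolpaw) for a classifier with N classes and accuracy P; analyses of the kind mentioned above typically build on this quantity, though the paper's exact derivation may differ:

\[
B = \log_2 N + P \log_2 P + (1-P)\,\log_2\!\frac{1-P}{N-1} \quad \text{bits per selection.}
\]

Multiplying B by the selection rate gives the bit-transfer rate, which is why both higher accuracy and more discriminable classes (the multiclass approach) raise throughput.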
Crustal fracturing and chert dike formation triggered by large meteorite impacts, ca. 3.260 Ga, Barberton greenstone belt, South Africa
The ca. 3260 Ma contact between the largely volcanic Onverwacht Group and overlying largely sedimentary Fig Tree Group in the Barberton greenstone belt, South Africa, is widely marked by chert dikes that extend downward for up to 100 m into underlying sedimentary and volcanic rocks of the Mendon Formation (Onverwacht Group). In the Barite Valley area, these dikes formed as open fractures that were filled by both precipitative fill and the downward flowage of liquefied carbonaceous sediments and ash at the top of the Mendon Formation. Spherules that formed during a large meteorite or asteroid impact event occur in a wave- and/or current-deposited unit, spherule bed S2, which widely marks the Onverwacht–Fig Tree contact, and as loose grains and masses within some chert dikes up to 50 m below the contact. Four main types of chert dikes and veins are recognized: (Type 1) irregular dikes up to 8 m wide that extend downward across as much as 100 m of stratigraphy; (Type 2) small vertical dikes, most <1 m wide, which are restricted to the lower half of the Mendon chert section; (Type 3) small crosscutting veins, most <50 cm across, filled with precipitative silica; and (Type 4) small bedding-parallel to irregular veins, mostly <10 cm wide, filled with translucent precipitative silica. Type 2 dikes formed first and reflect a short-lived seismic event that locally decoupled the sedimentary section at the top of the Mendon Formation from underlying volcanic rocks and opened narrow vertical tension fractures in the lower, lithified part of the sedimentary section. Later seismic events triggered formation of the larger type 1 fractures throughout the sedimentary and upper volcanic section, widespread liquefaction of soft, uppermost Mendon sediments, and flowage of the liquefied sediments and loose impact-generated spherules into the open fractures. Late-stage tsunamis everywhere eroded and reworked the spherule layer. The coincidence of crustal disruption, dike formation, spherule deposition, and tsunami activity suggests that all were related to the S2 impact or impact cluster. Crustal disruption at this time also formed local relief that provided clastic sediment to the postimpact Fig Tree Group, including a small conglomeratic fan delta in the Barite Valley area. Remobilization and further movement of debris in the subsurface continued for some time. Locally, the deposition of dense baritic sediments over soft dike materials induced remobilization of material in the dike, causing foundering of S2 and ∼1–2 m of overlying baritic sediments into the dike. Spherule beds occur at the base of the Fig Tree Group over wide areas of the Barberton belt, marking the abrupt change from ∼300 m.y. of predominantly anorogenic, mafic, and komatiitic volcanism of the Onverwacht Group to orogenic clastic sedimentation and associated felsic volcanism of the Fig Tree Group. This area never again returned to Onverwacht-style mafic and ultramafic volcanism but evolved ∼100 m.y. later into the Kaapvaal craton. These results indicate that this major transition in crustal evolution coincided with and was perhaps triggered by major impact events ca. 3260–3240 Ma.
Real time video stabilization for handheld devices
This paper proposes a method and a robust algorithm for real-time 2D video stabilization on portable devices. The BSC (Boundary Signal Computation) chip from TI (Texas Instruments) is used (or emulated herein) to search for correlations between the horizontal and vertical 1D integral projections using a SAD (Sum of Absolute Differences) approach. The proposed method is based on an accurate vector model that allows interpretations of increasing complexity for the transformations between frames. Experiments conducted on test video clips are very promising for future research and development of the method.
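A simplified sketch of the projection-and-SAD idea (it emulates the general technique, not TI's BSC chip or the paper's full vector model): each frame is collapsed into horizontal and vertical 1D integral projections, and the inter-frame shift is the offset with minimum SAD. Frames are assumed to be 2D grayscale numpy arrays.

```python
# Estimate global inter-frame translation from 1D integral projections + SAD.
import numpy as np

def projections(frame):
    # Row/column sums collapse the frame into two 1D "boundary signals".
    return frame.sum(axis=1).astype(np.float64), frame.sum(axis=0).astype(np.float64)

def best_shift(ref, cur, max_shift=16):
    # Slide cur over ref and keep the offset with the smallest mean SAD.
    best, best_sad = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a, b = (ref[s:], cur[:len(cur) - s]) if s >= 0 else (ref[:s], cur[-s:])
        sad = np.abs(a - b).mean()
        if sad < best_sad:
            best, best_sad = s, sad
    return best

def global_motion(prev_frame, cur_frame):
    pv, ph = projections(prev_frame)
    cv, ch = projections(cur_frame)
    return best_shift(pv, cv), best_shift(ph, ch)   # (vertical, horizontal)

prev = np.zeros((120, 160)); prev[40:80, 60:100] = 1.0
cur = np.zeros((120, 160)); cur[43:83, 55:95] = 1.0   # block moved down 3, left 5
print(global_motion(prev, cur))   # -> (-3, 5) under this sign convention
```

A stabilizer would then smooth the estimated motion over time and compensate each frame by the residual jitter.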
[The Post-Traumatic Stress Syndrome 14-Questions Inventory (PTSS-14) - Translation of the UK-PTSS-14 and validation of the German version].
BACKGROUND Hospitalization may represent a stressor that can lead to post-traumatic stress disorder (PTSD). METHODS Translation of the UK-PTSS-14, conducted in accordance with ISPOR principles, and validation against the PDS (86 patients). RESULTS The ROC analysis showed that the German version of the PTSS-14 is a valid instrument with high sensitivity (82%) and specificity (92%), with the optimal cut-off at 40 points. The translation process was authorized by the author of the UK-PTSS-14. CONCLUSION The validated German version of the PTSS-14 is now ready for use as an efficient and reliable screening tool for PTSD in a clinical setting.
Smile Line and Periodontium Visibility
A patient’s smile expresses a feeling of joy, success, sensuality, affection and courtesy, and reveals self-confidence and kindness. A smile is more than a method of communicating; it is a means of socialization and attraction whose advantages are taken for granted in the media, both publicly and politically (Gandet, 1987). This media portrayal of the stereotyped smile leads to a standardization of the smile and consequently to an increase in aesthetic demand from patients (Morley and Eubank, 2001). The harmony of the smile is determined not only by the shape, the position and the color of teeth but also by the gingival tissues. The gingival margin should be healthy and harmonious, and today both patients and dentists are more aware of the impact of the gingiva on the beauty of the smile (Towsend, 1993). In particular, the periodontist can influence the appearance of the patient’s smile (Garber and Salama, 1996; Dolt and Robbins, 1997; Glise et al., 1999; Borghetti and Monnet-Corti, 2000). Periodontium visibility depends on the position of the smile line, which is defined as the relationship between the upper lip and the visibility of gingival tissues and teeth. The smile line is an imaginary line following the lower margin of the upper lip and usually has a convex appearance (Towsend, 1993; Morley and Eubank, 2001) (Fig. 1). Few publications exist regarding the relationship between teeth and periodontium visibility during the smile. In a study of 425 students, Crispin and Watson (1981) reported that the gingival margin was visible in 66% of the participants in a natural smile. With maximal smiling, 84% of the participants revealed their gingival margin.
Blind equalization and multiuser detection in dispersive CDMA channels
The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multiple-input multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users’ signature waveforms (including the desired user’s signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user’s dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zero-forcing detector and a minimum mean-square-error (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in dispersive CDMA channels.
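As a point of reference, a generic linear MMSE detector for user k takes the following form (notation assumed here; the paper builds its closed-form detector from the identified subspace parameters and channel response rather than from sample statistics):

\[
\mathbf{m}_k = \arg\min_{\mathbf{m}} \mathbb{E}\big[\,|b_k[n] - \mathbf{m}^{H}\mathbf{r}[n]|^2\,\big]
= \mathbf{C}_r^{-1}\,\mathbf{p}_k,\qquad
\mathbf{C}_r = \mathbb{E}\big[\mathbf{r}[n]\,\mathbf{r}[n]^{H}\big],\quad
\mathbf{p}_k = \mathbb{E}\big[\mathbf{r}[n]\,b_k^{*}[n]\big],
\]

with the symbol decision \(\hat b_k[n] = \operatorname{sign}\big(\Re\{\mathbf{m}_k^{H}\mathbf{r}[n]\}\big)\) for BPSK symbols. The appeal of the subspace construction is that \(\mathbf{m}_k\) can be obtained without training data once the signal subspace and the desired user's effective signature are known.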
Moral Responsibility and Determinism : The Cognitive Science of Folk Intuitions
The dispute between compatibilists and incompatibilists must be one of the most persistent and heated deadlocks in Western philosophy. Incompatibilists maintain that people are not fully morally responsible if determinism is true, i.e., if every event is an inevitable consequence of the prior conditions and the natural laws. By contrast, compatibilists maintain that even if determinism is true our moral responsibility is not undermined in the slightest, for determinism and moral responsibility are perfectly consistent.1 The debate between these two positions has invoked many different resources, including quantum mechanics, social psychology, and basic metaphysics. But recent discussions have relied heavily on arguments that draw on people’s intuitions about particular cases. Some philosophers have claimed that people have incompatibilist intuitions (e.g., Kane 1999, 218; Strawson 1986, 30; Vargas 2006); others have challenged this claim and suggested that people’s intuitions actually fit with compatibilism (Nahmias et al. 2005). But although philosophers have constructed increasingly sophisticated arguments about the implications of people’s intuitions, there has been remarkably little discussion about why people have the intuitions they do. That is to say, relatively little has been said about the specific psychological processes that generate or sustain people’s intuitions. And yet, it seems clear that questions about the sources of people’s intuitions could have a major impact on debates
Using Mise-En-Scène Visual Features based on MPEG-7 and Deep Learning for Movie Recommendation
Item features play an important role in movie recommender systems, where recommendations can be generated by using explicit or implicit preferences of users on traditional features (attributes) such as tag, genre, and cast. Typically, movie features are human-generated, either editorially (e.g., genre and cast) or by leveraging the wisdom of the crowd (e.g., tag), and as such, they are prone to noise and are expensive to collect. Moreover, these features are often rare or absent for new items, making it difficult or even impossible to provide good quality recommendations. In this paper, we show that users’ preferences on movies can be better described in terms of the mise-en-scène features, i.e., the visual aspects of a movie that characterize design, aesthetics and style (e.g., colors, textures). We use both MPEG-7 visual descriptors and Deep Learning hidden layers as examples of mise-en-scène features that can visually describe movies. Interestingly, mise-en-scène features can be computed automatically from video files or even from trailers, offering more flexibility in handling new items, avoiding the need for costly and error-prone human-based tagging, and providing good scalability. We have conducted a set of experiments on a large catalogue of 4K movies. Results show that recommendations based on mise-en-scène features consistently provide the best performance compared with richer sets of more traditional features, such as genre and tag.
Mining Console Logs for Large-Scale System Problem Detection
The console logs generated by an application contain messages that the application developers believed would be useful in debugging or monitoring the application. Despite the ubiquity and large size of these logs, they are rarely exploited in a systematic way for monitoring and debugging because they are not readily machine-parsable. In this paper, we propose a novel method for mining this rich source of information. First, we combine log parsing and text mining with source code analysis to extract structure from the console logs. Second, we extract features from the structured information in order to detect anomalous patterns in the logs using Principal Component Analysis (PCA). Finally, we use a decision tree to distill the results of PCA-based anomaly detection to a format readily understandable by domain experts (e.g. system operators) who need not be familiar with the anomaly detection algorithms. As a case study, we distill over one million lines of console logs from the Hadoop file system to a simple decision tree that a domain expert can readily understand; the process requires no operator intervention and we detect a large portion of runtime anomalies that are commonly overlooked.
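A compact sketch of PCA-based anomaly scoring of the kind referred to above. The feature construction here (per-sample count vectors and a synthetic outlier) is hypothetical, not the paper's exact pipeline; the score is the squared residual after projecting onto the principal subspace that captures "normal" behaviour.

```python
# PCA residual (SPE) anomaly scoring over feature vectors extracted from logs.
import numpy as np

def pca_anomaly_scores(X, var_kept=0.95):
    # X: (n_samples, n_features) matrix, e.g. message-type counts per time window.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(S**2) / np.sum(S**2), var_kept)) + 1
    P = Vt[:k].T                      # principal subspace ("normal" directions)
    residual = Xc - Xc @ P @ P.T      # component in the residual subspace
    return np.square(residual).sum(axis=1)   # large value => anomalous sample

rng = np.random.default_rng(0)
base = rng.standard_normal((200, 1))
X = base @ np.ones((1, 10)) + 0.05 * rng.standard_normal((200, 10))
X[5, 3] += 5.0                        # break the usual correlation pattern in one row
scores = pca_anomaly_scores(X)
print(int(scores.argmax()))           # row 5 should receive by far the largest score
```

The decision-tree step described in the abstract would then be trained on the flagged/unflagged samples to give operators a human-readable summary of what distinguishes them.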
Motor Imagery in Mental Rotation: An fMRI Study
Twelve right-handed men performed two mental rotation tasks and two control tasks while whole-head functional magnetic resonance imaging was applied. Mental rotation tasks implied the comparison of different sorts of stimulus pairs, viz. pictures of hands and pictures of tools, which were either identical or mirror images and which were rotated in the plane of the picture. Control tasks were equal except that stimuli pairs were not rotated. Reaction time profiles were consistent with those found in previous research. Imaging data replicate classic areas of activation in mental rotation for hands and tools (bilateral superior parietal lobule and visual extrastriate cortex) but show an important difference in premotor area activation: pairs of hands engender bilateral premotor activation while pairs of tools elicit only left premotor brain activation. The results suggest that participants imagined moving both their hands in the hand condition, while imagining manipulating objects with their hand of preference (right hand) in the tool condition. The covert actions of motor imagery appear to mimic the "natural way" in which a person would manipulate the object in reality, and the activation of cortical regions during mental rotation seems at least in part determined by an intrinsic process that depends on the afforded actions elicited by the kind of stimuli presented.
WG-8: A Lightweight Stream Cipher for Resource-Constrained Smart Devices
Lightweight cryptographic primitives are essential for securing pervasive embedded devices like RFID tags, smart cards, and wireless sensor nodes. In this paper, we present a lightweight stream cipher WG-8, which is tailored from the well-known Welch-Gong (WG) stream cipher family, for resource-constrained devices. WG-8 inherits the good randomness and cryptographic properties of the WG stream cipher family and is resistant to the most common attacks against stream ciphers. The software implementations of the WG-8 stream cipher on two popular low-power microcontrollers, as well as the extensive comparison with other lightweight cryptography implementations, highlight that, in the context of securing lightweight embedded applications, WG-8 has favorable performance and low energy consumption.
Brain structure links trait creativity to openness to experience.
Creativity is crucial to the progression of human civilization and has led to important scientific discoveries. Especially, individuals are more likely to have scientific discoveries if they possess certain personality traits of creativity (trait creativity), including imagination, curiosity, challenge and risk-taking. This study used voxel-based morphometry to identify the brain regions underlying individual differences in trait creativity, as measured by the Williams creativity aptitude test, in a large sample (n = 246). We found that creative individuals had higher gray matter volume in the right posterior middle temporal gyrus (pMTG), which might be related to semantic processing during novelty seeking (e.g. novel association, conceptual integration and metaphor understanding). More importantly, although basic personality factors such as openness to experience, extroversion, conscientiousness and agreeableness (as measured by the NEO Personality Inventory) all contributed to trait creativity, only openness to experience mediated the association between the right pMTG volume and trait creativity. Taken together, our results suggest that the basic personality trait of openness might play an important role in shaping an individual's trait creativity.
Geomodification in Query Rewriting
Web searchers signal their geographic intent by using placenames in search queries. They also indicate their flexibility about geographic specificity by reformulating their queries. We conducted experiments on geomodification in query rewriting. We examine both deliberate query rewriting, conducted in user search sessions, and automated query rewriting, with users evaluating the relevance of geo-modified queries. We find geo-specification in 12.7% of user query rewrites in search sessions, and show the breakdown into sub-classes such as same-city, same-state, same-country and different-country. We also measure the dependence between US-state-name and distance-of-modified-location-from-original-location, finding that Vermont web searchers modify their locations greater distances than California web searchers. We also find that automatically-modified queries are perceived as much more relevant when the geographic component is unchanged.
Multi-User Energy Consumption Monitoring and Anomaly Detection with Partial Context Information
Anomaly detection is an important problem in building energy management in order to identify energy theft and inefficiencies. However, it is hard to differentiate actual anomalies from the genuine changes in energy consumption due to seasonal variations and changes in personal settings such as holidays. One of the important drawbacks of existing anomaly detection algorithms is that various unknown context variables, such as seasonal variations, can affect the energy consumption of users in ways that appear anomalous to existing time series based anomaly detection algorithms. In this paper, we present a system for monitoring the energy consumption of multiple users within a neighborhood and a novel algorithm for detecting anomalies by combining data from multiple users. For each user, the neighborhood is defined as the set of all other users that have similar characteristics (function, location or demography), and are therefore likely to react and consume energy in a similar way in response to the external conditions. The neighborhood can be predefined based on prior customer information, or can be identified through an analysis of historical energy consumption. The proposed algorithm works as a two-step process. In the first step, the algorithm periodically computes an anomaly score for each user by considering only their own energy consumption and its past variation. In the second step, the anomaly score for a user is adjusted by analyzing the energy consumption data in the neighborhood. The collation of data within the neighborhood allows the proposed algorithm to differentiate between these genuine effects and real anomalous behavior of users. Unlike multivariate time series anomaly detection algorithms, the proposed algorithm can identify specific users that are exhibiting anomalous behavior. The capabilities of the algorithm are demonstrated using several year-long real-world data sets, for commercial as well as residential consumers.
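An illustrative two-step scheme in the spirit described above (the scoring rules and data are assumptions, not the paper's exact algorithm): step 1 scores each user against their own history, step 2 discounts deviations that the whole neighborhood shares, such as a cold snap raising everyone's consumption.

```python
# Two-step anomaly scoring: per-user z-score, then neighborhood adjustment.
import numpy as np

def own_history_score(series, window=24):
    # z-score of the latest reading against the user's own trailing window
    hist = np.asarray(series[-window - 1:-1], dtype=float)
    mu, sigma = hist.mean(), hist.std() + 1e-9
    return abs(series[-1] - mu) / sigma

def neighborhood_adjusted_scores(consumption, window=24):
    # consumption: dict user -> list of readings at the same timestamps
    raw = {u: own_history_score(s, window) for u, s in consumption.items()}
    adjusted = {}
    for u, score in raw.items():
        peers = [raw[v] for v in raw if v != u]
        peer_level = float(np.median(peers)) if peers else 0.0
        # Deviations shared by the whole neighborhood are treated as context,
        # not anomalies, and are subtracted out.
        adjusted[u] = max(score - peer_level, 0.0)
    return adjusted

readings = {
    "meter_a": [10, 11] * 12 + [11],   # ordinary variation
    "meter_b": [12, 13] * 12 + [13],   # ordinary variation
    "meter_c": [11, 12] * 12 + [35],   # individually anomalous reading
}
print(neighborhood_adjusted_scores(readings))  # meter_c stands out; others near 0
```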
GNSS Spoofing and Detection
Global navigation satellite signals can be spoofed by false signals, but special receivers can provide defenses against such attacks. The development of good spoofing defenses requires an understanding of the possible attack modes of a spoofer and the properties of those modes that can be exploited for defense purposes. Sets of attack methods and defense methods are described in detail. An attack/defense matrix is developed that documents which defense techniques are effective against the various attack techniques. Recommendations are generated to improve the offerings of commercial off-the-shelf receivers from the current situation, a complete lack of spoofing defenses, to a situation in which various levels of defense are present, some that add significant security for relatively little additional cost and others that add more security at costs that start to become appreciable.
Bayesian Network Models in Cyber Security: A Systematic Review
Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security especially due to their capability to overcome data limitations. This is also exemplified by the growth of BN models development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 8 different criteria and identify important patterns in the use of these models. A key outcome is that standard BNs are noticeably used for problems especially associated with malicious insiders. This study points out the core range of problems that were tackled using standard BN models in cyber security, and illuminates key research gaps.
Diagnostic accuracy, reliability and validity of Childhood Autism Rating Scale in India.
BACKGROUND Since there is no established measure for autism in India, we evaluated the diagnostic accuracy, reliability and validity of the Childhood Autism Rating Scale (CARS). METHODS Children and adolescents suspected of having autism were identified from the unit's database. Scale and item level scores of CARS were collected and analyzed. Sensitivity, specificity, likelihood ratios and predictive values for various CARS cut-off scores were calculated. Test-retest reliability and inter-rater reliability of CARS were examined. The dichotomized CARS score was correlated with the ICD-10 clinical diagnosis of autism to establish the criterion validity of CARS as a measure of autism. Convergent and divergent validity was calculated. The factor structure of CARS was demonstrated by principal components analysis. RESULTS A CARS score of ≥33 (sensitivity = 81.4%, specificity = 78.6%; area under the curve = 81%) was suggested for diagnostic use in Indian populations. The inter-rater reliability (ICC=0.74) and test-retest reliability (ICC=0.81) for CARS were good. Besides the adequate face and content validity, CARS demonstrated good internal consistency (Cronbach's alpha=0.79) and item-total correlation. There was moderate convergent validity with the Binet-Kamat Test of Intelligence or Gesell's Developmental Schedule (r=0.42; P=0.01), divergent validity (r=-0.18; P=0.4) with the ADD-H Comprehensive Teacher Rating Scale, and a high concordance rate with the reference standard, ICD-10 diagnosis (82.52%; Cohen's kappa=0.40, P=0.001) in classifying autism. A 5-factor structure explained 65.34% of the variance. CONCLUSION The CARS has strong psychometric properties and is now available for clinical and research work in India.
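A small worked sketch of how sensitivity and specificity at a cut-off such as ≥33 are computed; the scores and labels below are toy values, not study data.

```python
# Sensitivity/specificity at a score cut-off (toy data, illustrative only).
def sens_spec(scores, has_autism, cutoff=33):
    tp = sum(s >= cutoff and y for s, y in zip(scores, has_autism))
    fn = sum(s < cutoff and y for s, y in zip(scores, has_autism))
    tn = sum(s < cutoff and not y for s, y in zip(scores, has_autism))
    fp = sum(s >= cutoff and not y for s, y in zip(scores, has_autism))
    return tp / (tp + fn), tn / (tn + fp)

scores     = [36, 31, 40, 29, 34, 27, 38, 30]
has_autism = [True, False, True, False, True, False, True, True]
print(sens_spec(scores, has_autism))   # -> (0.8, 1.0) for this toy data
```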
Hiding Transaction Amounts and Balances in Bitcoin
Bitcoin is gaining increasing adoption and popularity nowadays. In spite of its reliance on pseudonyms, Bitcoin raises a number of privacy concerns due to the fact that all of the transactions that take place in the system are publicly announced. The literature contains a number of proposals that aim at evaluating and enhancing user privacy in Bitcoin. To the best of our knowledge, ZeroCoin (ZC) is the first proposal which prevents the public tracing of coin expenditure in Bitcoin by leveraging zero-knowledge proofs of knowledge and one-way accumulators. While ZeroCoin hardens the traceability of coins, it does not hide the amount per transaction, nor does it prevent the leakage of the balances of Bitcoin addresses. In this paper, we propose, EZC, an extension of ZeroCoin which (i) enables the construction of multi-valued ZCs whose values are only known to the sender and recipient of the transaction and (ii) supports the expenditure of ZCs among users in the Bitcoin system, without the need to convert them back to Bitcoins. By doing so, EZC hides transaction values and address balances in Bitcoin, for those users who opt-out from exchanging their coins to BTCs. We performed a preliminary assessment of the performance of EZC; our findings suggest that EZC improves the communication overhead incurred in ZeroCoin.
Soluble CD14: genomewide association analysis and relationship to cardiovascular risk and mortality in older adults.
OBJECTIVE CD14 is a glycosylphosphatidylinositol-anchored membrane glycoprotein expressed on neutrophils and monocytes/macrophages that also circulates as a soluble form (sCD14). Despite the well-recognized role of CD14 in inflammation, relatively little is known about the genetic determinants of sCD14 or the relationship of sCD14 to vascular- and aging-related phenotypes. METHODS AND RESULTS We measured baseline levels of sCD14 in >5000 European-American and black adults aged 65 years and older from the Cardiovascular Health Study, who were well characterized at baseline for atherosclerotic risk factors and subclinical cardiovascular disease, and who have been followed for clinical cardiovascular disease and mortality outcomes up to 20 years. At baseline, sCD14 generally showed strong positive correlations with traditional cardio-metabolic risk factors and with subclinical measures of vascular disease such as carotid wall thickness and ankle-brachial index (independently of traditional cardiovascular disease risk factors), and was also inversely correlated with body mass index. In genomewide association analyses of sCD14, we (1) confirmed the importance of the CD14 locus on chromosome 5q21 in European-Americans; (2) identified a novel African ancestry-specific allele of CD14 associated with lower sCD14 in blacks; and (3) identified a putative novel association in European-Americans of a nonsynonymous variant of PIGC, which encodes an enzyme required for the first step in glycosylphosphatidylinositol anchor biosynthesis. Finally, we show that, like other acute phase inflammatory biomarkers, sCD14 predicts incident cardiovascular disease, and strongly and independently predicts all-cause mortality in older adults. CONCLUSIONS sCD14 independently predicts mortality risk in older adults.
Empirical Study of Artificial Fish Swarm Algorithm
Artificial fish swarm algorithm (AFSA) is one of the swarm intelligence optimization algorithms that works based on population and stochastic search. In order to achieve acceptable results, many parameters need to be adjusted in AFSA. Among these parameters, visual and step are particularly significant, since the artificial fish essentially move based on them. In the standard AFSA, these two parameters remain constant until the algorithm terminates. Large values of these parameters increase the capability of the algorithm in global search, while small values improve its local search ability. In this paper, we empirically study the performance of AFSA and test different approaches to balancing local and global exploration, based on the adaptive modification of visual and step during algorithm execution. The proposed approaches have been evaluated on four well-known benchmark functions. Experimental results show a considerable positive impact on the performance of AFSA.
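A sketch of the adaptive idea: start with large visual and step values for global exploration and shrink them over iterations to sharpen local search. The linear-decay schedule below is an assumption for illustration, not the paper's exact rule.

```python
# Adaptive visual/step schedule for an AFSA-style search (illustrative schedule).
def adaptive_visual_step(iteration, max_iter, visual0=2.0, step0=0.5, floor=0.05):
    # Linear decay from the initial values down to a small floor.
    frac = 1.0 - iteration / max_iter
    visual = max(visual0 * frac, floor)
    step = max(step0 * frac, floor)
    return visual, step

for it in (0, 50, 99):
    print(it, adaptive_visual_step(it, 100))  # large early, small late
```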
ADPfusion: Efficient Dynamic Programming over Sequence Data
ADPfusion combines the usual high-level, terse notation of Haskell with an underlying fusion framework. The result is a parsing library that allows the user to write algorithms in a style very close to the notation used in formal languages and reap the performance benefits of automatic program fusion. Recent developments in natural language processing and computational biology have led to a number of works that implement algorithms that process more than one input at the same time. We provide an extension of ADPfusion that works on extended index spaces and multiple input sequences, thereby increasing the number of algorithms that are amenable to implementation in our framework. This allows us to implement even complex algorithms with a minimum of overhead, while enjoying all the guarantees that algebraic dynamic programming provides to the user.
Performance of the IOTA ADNEX model in preoperative discrimination of adnexal masses in a gynecological oncology center.
OBJECTIVE To evaluate the performance of the International Ovarian Tumor Analysis (IOTA) ADNEX model in the preoperative discrimination between benign ovarian (including tubal and para-ovarian) tumors, borderline ovarian tumors (BOT), Stage I ovarian cancer (OC), Stage II-IV OC and ovarian metastasis in a gynecological oncology center in Brazil. METHODS This was a diagnostic accuracy study including 131 women with an adnexal mass invited to participate between February 2014 and November 2015. Before surgery, pelvic ultrasound examination was performed and serum levels of tumor marker CA 125 were measured in all women. Adnexal masses were classified according to the IOTA ADNEX model. Histopathological diagnosis was the gold standard. Receiver-operating characteristics (ROC) curve analysis was used to determine the diagnostic accuracy of the model to classify tumors into different histological types. RESULTS Of 131 women, 63 (48.1%) had a benign ovarian tumor, 16 (12.2%) had a BOT, 17 (13.0%) had Stage I OC, 24 (18.3%) had Stage II-IV OC and 11 (8.4%) had ovarian metastasis. The area under the ROC curve (AUC) was 0.92 (95% CI, 0.88-0.97) for the basic discrimination between benign vs malignant tumors using the IOTA ADNEX model. Performance was high for the discrimination between benign vs Stage II-IV OC, BOT vs Stage II-IV OC and Stage I OC vs Stage II-IV OC, with AUCs of 0.99, 0.97 and 0.94, respectively. Performance was poor for the differentiation between BOT vs Stage I OC and between Stage I OC vs ovarian metastasis with AUCs of 0.64. CONCLUSION The majority of adnexal masses in our study were classified correctly using the IOTA ADNEX model. On the basis of our findings, we would expect the model to aid in the management of women with an adnexal mass presenting to a gynecological oncology center.
Risk of congenital anomalies in pregnant users of statin drugs
AIMS Evidence from animal studies suggests that statin medications should not be taken during pregnancy. Our aim was to examine the association between the use of statins in early pregnancy and the incidence of congenital anomalies. METHODS A population-based pregnancy registry was built. Three study groups were assembled: women prescribed statins in the first trimester (group A), fibrate/nicotinic acid in the first trimester (group B) and statins between 1 year and 1 month before conception, but not during pregnancy (group C). Among live-born infants, we selected as cases infants with any congenital anomaly diagnosed in the first year of life. Controls were defined as infants with no congenital anomalies. The rate of congenital anomalies in the respective groups was calculated. Adjusted odds ratios (OR) and 95% confidence intervals (CI) were also calculated. RESULTS Our study group consisted of 288 pregnant women. Among women with a live birth, the rate of congenital anomalies was 3/64 (4.69%; 95% CI 1.00, 13.69) in group A, 3/14 in group B (21.43%; 95% CI 4.41, 62.57) and 7/67 in group C (10.45%; 95% CI 4.19, 21.53). The adjusted OR for congenital anomalies in group A compared with group C was 0.36 (95% CI 0.06, 2.18). CONCLUSION We did not detect a pattern in fetal congenital anomalies or evidence of an increased risk in the live-born infants of women filling prescriptions for statins in the first trimester of pregnancy. Conclusions, however, remain uncertain in the absence of data from non-live births.
Blind user wearable audio assistance for indoor navigation based on visual markers and ultrasonic obstacle detection
This paper presents an indoor navigation wearable system based on visual marker recognition and ultrasonic obstacle perception, used as audio assistance for blind people. In this prototype, visual markers identify the points of interest in the environment; additionally, this location status is enriched with information obtained in real time from other sensors. A map lists these points and indicates the distance and direction between neighboring points, building a virtual path. The blind users also wear glasses fitted with sensors such as an RGB camera, ultrasonic sensors, a magnetometer, a gyroscope, and an accelerometer, enhancing the amount and quality of the available information. The user navigates freely in the prepared environment, identifying the location markers. Based on the origin point or the current location marker and on the gyroscope value, the path to the next marker (target) is calculated. To improve perception of the environment and avoid possible obstacles, a pair of ultrasonic sensors is used. The audio assistance provided to the user makes use of an audio bank with simple, known instructions to indicate precisely the desired route and obstacles. Ten blind users tested and evaluated the system. The results showed about 94.92% successful recognition of the markers using only 26 frames per second and 98.33% ultrasonic perception of obstacles placed between 0.50 meters and 4.0 meters.
Centrality in valued graphs: A measure of betweenness based on network flow
A new measure of centrality, C_F, is introduced. It is based on the concept of network flows. While conceptually similar to Freeman's original betweenness measure, C_B, the new measure differs from the original in two important ways. First, C_F is defined for both valued and non-valued graphs. This makes C_F applicable to a wider variety of network datasets. Second, the computation of C_F is not based on geodesic paths, as C_B is, but on all the independent paths between all pairs of points in the network.
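To make the contrast with geodesic betweenness concrete, the following is a minimal Python sketch of a flow-based betweenness score for a valued graph, assuming NetworkX is available and that tie values are stored as edge capacities; it follows the common max-flow formulation rather than the paper's exact computation.

```python
# Sketch: a flow-based betweenness score on a valued (capacitated) graph.
# For each node k we add up, over all unordered pairs (s, t), the amount of
# maximum flow that is lost when k is removed, i.e. the flow forced through k.
import itertools
import networkx as nx

def flow_betweenness(G):
    scores = {k: 0.0 for k in G}
    for s, t in itertools.combinations(G.nodes, 2):
        total = nx.maximum_flow_value(G, s, t, capacity="capacity")
        for k in G:
            if k in (s, t):
                continue
            H = G.copy()
            H.remove_node(k)
            scores[k] += total - nx.maximum_flow_value(H, s, t, capacity="capacity")
    return scores

if __name__ == "__main__":
    G = nx.DiGraph()
    for u, v, c in [("a", "b", 3), ("b", "c", 1), ("a", "d", 2), ("d", "c", 2)]:
        G.add_edge(u, v, capacity=c)   # valued ties become edge capacities
        G.add_edge(v, u, capacity=c)   # symmetric ties modeled as two arcs
    print(flow_betweenness(G))
```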
Inclusive growth: an imperative for African agriculture
Since 2000, Africa has been experiencing remarkable economic growth accompanied by an improving democratic environment. Real GDP growth has risen by more than twice its pace in the last decade. Telecommunications, financial services and banking, construction and private-investment inflows have also increased substantially. However, most of the benefits of the high growth rates achieved over the last few years have not reached the rural poor. For this to happen, substantial growth in the agriculture sector will need to be stimulated and sustained, as the sector is key to inclusive growth, given its proven record of contributing to more robust reduction of poverty. This is particularly important when juxtaposed with the fact that the majority of Africa's poor are engaged in agriculture, a sector which supports the livelihoods of 90 percent of Africa's population. The sector also provides employment for about 60 percent of the economically active population, and 70 percent of the continent's poorest communities. In spite of agriculture being an acknowledged leading growth driver for Africa, the potential of the sector's contribution to growth and development has been underexploited, mainly due to a variety of challenges, including the widening technology divide, weak infrastructure and declining technical capacity. These challenges have been exacerbated by weak input and output marketing systems and services, slow progress in regional integration, land access and rights issues, limited access to affordable credit, challenging governance issues in some countries, conflicts, effects of climate change, and the scourge of HIV/AIDS and other diseases. Green growth is critical to Africa because of the fragility of the …
A Review on Security Evaluation for Pattern Classifier against Attack
Pattern classification systems are used in adversarial applications, for example spam filtering, network intrusion detection and biometric authentication. Exploitation of such adversarial scenarios may affect their performance and limit their practical utility. Conceiving and designing pattern classification methods for adversarial environments is a novel and relevant research direction that has not yet been pursued in a systematic way. This review addresses one main open issue: evaluating, at the design phase, the security of pattern classifiers, that is, the performance degradation they may incur under potential attacks during operation. A framework for the evaluation of classifier security is discussed, and this framework can be applied to different classifiers in one of the applications mentioned above, such as spam filtering, network intrusion detection or biometric authentication.
Investigating Content Selection for Language Generation using Machine Learning
The content selection component of a natural language generation system decides which information should be communicated in its output. We use information from reports on the game of cricket. We first describe a simple factoid-to-text alignment algorithm, then treat content selection as a collective classification problem and demonstrate that simple 'grouping' of statistics at various levels of granularity yields substantially improved results over a probabilistic baseline. We additionally show that holding back specific types of input data and linking database structures through commonality further increase performance.
Deep Embedding Forest: Forest-based Serving with Deep Embedding Features
Deep Neural Networks (DNN) have demonstrated superior ability to extract high-level embedding vectors from low-level features. Despite this success, serving time is still the bottleneck due to the expensive run-time computation of multiple layers of dense matrices. GPGPU, FPGA, or ASIC-based serving systems require additional hardware that is not in the mainstream design of most commercial applications. In contrast, tree or forest-based models are widely adopted because of their low serving cost, but they depend heavily on carefully engineered features. This work proposes a Deep Embedding Forest model that benefits from the best of both worlds. The model consists of a number of embedding layers and a forest/tree layer. The former maps high-dimensional (hundreds of thousands to millions) and heterogeneous low-level features to lower-dimensional (thousands) vectors, and the latter ensures fast serving. Built on top of a representative DNN model called Deep Crossing, and two forest/tree-based models including XGBoost and LightGBM, a two-step Deep Embedding Forest algorithm is demonstrated to achieve on-par or slightly better performance as compared with the DNN counterpart, with only a fraction of the serving time on conventional hardware. After comparing with a joint optimization algorithm called partial fuzzification, also proposed in this paper, it is concluded that the two-step Deep Embedding Forest has achieved near-optimal performance. Experiments based on large-scale data sets (up to 1 billion samples) from a major sponsored search engine prove the efficacy of the proposed model.
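As a toy illustration of the two-step idea (train embedding layers, then hand the learned vectors to a tree ensemble for cheap serving), here is a hedged sketch using PyTorch and scikit-learn; the synthetic data, layer sizes and the choice of GradientBoostingClassifier are illustrative stand-ins for the paper's Deep Crossing / XGBoost / LightGBM pipeline, not a reproduction of it.

```python
# Toy two-step sketch of the embedding-then-forest idea:
# 1) train a small neural network, 2) reuse its penultimate activations
#    as features for a gradient-boosted tree ensemble used at serving time.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Synthetic high-dimensional input (stand-in for raw heterogeneous features).
X = rng.normal(size=(2000, 200)).astype("float32")
y = (X[:, :5].sum(axis=1) > 0).astype("int64")

embed = nn.Sequential(nn.Linear(200, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU())
head = nn.Linear(16, 2)
model = nn.Sequential(embed, head)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
Xt, yt = torch.from_numpy(X), torch.from_numpy(y)

# Step 1: train the embedding layers end-to-end with a temporary classification head.
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(Xt), yt)
    loss.backward()
    opt.step()

# Step 2: freeze the embedding, export the 16-d vectors, fit a tree ensemble.
with torch.no_grad():
    Z = embed(Xt).numpy()
forest = GradientBoostingClassifier(n_estimators=100, max_depth=3)
forest.fit(Z, y)
print("train accuracy of forest on embeddings:", forest.score(Z, y))
```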
Knowledge Management as a way to gain a competitive advantage in firms (evidence from manufacturing companies)
Knowledge is the most important basis of competition, and knowledge, innovation and technology are recognized as the most important factors for the survival of knowledge-based companies. Knowledge-based entrepreneurship is the creation of knowledge and its conversion into products and services through innovation. The most basic feature of the intelligent organizations of the twenty-first century is their emphasis on knowledge and information. Unlike earlier organizations, today's technologically advanced organizations require the acquisition, management and exploitation of knowledge and information in order to improve efficiency and to manage and keep pace with endless change. Knowledge is a powerful tool that can change the world and make innovation possible. Knowledge management is an interdisciplinary business model concerned with all aspects of knowledge creation, codification, sharing and use to enhance learning and innovation within the company. In this study a questionnaire was developed and sent to the managers of companies located in an industrial town. Knowledge management, competitive advantage, innovation, organizational performance and customer satisfaction were the study variables, and the hypotheses were tested in 2013. The results indicate that knowledge management makes a significant contribution to competitive advantage.
Multiple-attribute decision making methods for plant layout design problem
The layout design problem is a strategic issue and has a significant impact on the efficiency of a manufacturing system. Much of the existing layout design literature that uses a surrogate function for flow distance or simplified objectives may become trapped in a local optimum and subsequently lead to a poor layout design, owing to the multiple-attribute decision making (MADM) nature of a layout design decision. The present study explores the use of MADM approaches in solving a layout design problem. The proposed methodology is illustrated through a practical application from an IC packaging company. Two methods are proposed for solving the case study problem: the technique for order preference by similarity to ideal solution (TOPSIS) and fuzzy TOPSIS. Empirical results showed that the proposed methods are viable approaches for solving a layout design problem. TOPSIS is a viable approach for the case study problem and is suitable for precise-value performance ratings. When the performance ratings are vague and imprecise, fuzzy TOPSIS is the preferred solution method. © 2006 Elsevier Ltd. All rights reserved.
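For readers unfamiliar with the crisp TOPSIS procedure referred to above, the following is a minimal NumPy sketch; the decision matrix, weights and criterion directions are illustrative, and the fuzzy variant is not shown.

```python
# Classical TOPSIS: rank alternatives by relative closeness to the ideal solution.
import numpy as np

def topsis(D, weights, benefit):
    """D: alternatives x criteria matrix; weights sum to 1;
    benefit[j] is True if criterion j is 'larger is better'."""
    D = np.asarray(D, dtype=float)
    norm = D / np.linalg.norm(D, axis=0)          # vector normalization
    V = norm * weights                            # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness coefficient

# Illustrative layout alternatives scored on flow distance (cost),
# adjacency satisfaction (benefit) and flexibility (benefit).
D = [[120, 0.8, 0.6],
     [100, 0.7, 0.7],
     [140, 0.9, 0.5]]
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, True, True])
score = topsis(D, weights, benefit)
print("ranking (best first):", np.argsort(-score))
```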
Blockchain access control Ecosystem for Big Data security
In recent years, the advancement in modern technologies has not only resulted in an explosion of huge data sets being captured and recorded in different fields, but also given rise to concerns in the security and protection of data during storage, transmission, processing, and access. The blockchain is a distributed ledger that records transactions in a secure, flexible, verifiable and permanent way. Transactions in a blockchain can be an exchange of an asset, the execution of the terms of a smart contract, or an update to a record. In this paper, we have developed a blockchain access control ecosystem that gives asset owners the sovereign right to effectively manage access control of large data sets and protect against data breaches. The Linux Foundation's Hyperledger Fabric blockchain is used to run the business network while the Hyperledger Composer modeling tool is used to implement the smart contracts or transaction processing functions that run on the blockchain network.
Song recommendation with non-negative matrix factorization and graph total variation
This work formulates a novel song recommender system as a matrix completion problem that benefits from collaborative filtering through Non-negative Matrix Factorization (NMF) and content-based filtering via total variation (TV) on graphs. The graphs encode both playlist proximity information and song similarity, using a rich combination of audio, meta-data and social features. As we demonstrate, our hybrid recommendation system is very versatile and incorporates several well-known methods while outperforming them. In particular, we show on real-world data that, with respect to two evaluation metrics, our model outperforms recommendations from models based solely on low-rank information, on graph-based information, or on a combination of both.
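A minimal sketch of the matrix-completion core is given below: masked multiplicative-update NMF on a toy playlist-song matrix. The graph total-variation regularizer that distinguishes the actual model is omitted, and all shapes and hyperparameters are illustrative.

```python
# Matrix completion with NMF fitted on observed entries only (multiplicative updates).
# The graph total-variation regularizer from the paper is not included here.
import numpy as np

def masked_nmf(X, mask, rank=10, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        R = mask * (W @ H)
        W *= ((mask * X) @ H.T) / (R @ H.T + eps)
        R = mask * (W @ H)
        H *= (W.T @ (mask * X)) / (W.T @ R + eps)
    return W, H

# Toy playlist x song play-count matrix with missing entries.
rng = np.random.default_rng(1)
true = rng.random((30, 4)) @ rng.random((4, 50))
mask = (rng.random(true.shape) < 0.3).astype(float)   # ~30% of entries observed
W, H = masked_nmf(true * mask, mask, rank=4)
pred = W @ H
rmse = np.sqrt(np.mean((pred - true)[mask == 0] ** 2))
print("RMSE on held-out entries:", round(rmse, 3))
```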
State Transition Graph (STG)
SIS is an interactive tool for synthesis and optimization of sequential circuits. Given a state transition table, a signal transition graph, or a logic-level description of a sequential circuit, it produces an optimized net-list in the target technology while preserving the sequential input-output behavior. Many different programs and algorithms have been integrated into SIS, allowing the user to choose among a variety of techniques at each stage of the process. It is built on top of MISII [5] and includes all (combinational) optimization techniques therein as well as many enhancements. SIS serves as both a framework within which various algorithms can be tested and compared, and as a tool for automatic synthesis and optimization of sequential circuits. This paper provides an overview of SIS. The first part contains descriptions of the input specification, STG (state transition graph) manipulation, new logic optimization and verification algorithms, ASTG (asynchronous signal transition graph) manipulation, and synthesis for PGA’s (programmable gate arrays). The second part contains a tutorial example illustrating the design process using SIS.
The Efficacy and Limitations of Sirolimus Conversion in Liver Transplant Patients Who Develop Renal Dysfunction on Calcineurin Inhibitors
This study evaluates sirolimus in preserving renal function in 28 patients who developed renal insufficiency after liver transplantation. Patients with a creatinine level higher than 1.8 mg/dl were eligible for conversion. Of the 28 patients, 7 (25%) did not tolerate sirolimus, 6 (21%) progressed to end-stage renal disease (ESRD), and 14 (50%) have been maintained on sirolimus with stable renal function. The 28 patients overall had a decline in creatinine of 0.38 mg/dl (P = 0.029) at week 4, with a small increase by week 24. However, the subset of 14 patients who did not develop ESRD had a decline in creatinine that persisted to week 48. While the differences between those who developed ESRD and those with stable renal function were not statistically significant, the patients who developed ESRD had a higher creatinine at conversion (2.8 vs 2.3) and a lower creatinine clearance (36 vs 53 ml/min). Patients receiving sirolimus had a persistent rise in cholesterol (P < 0.05). The use of sirolimus to preserve renal function was limited by patients unable to tolerate the drug (25%) and patients who developed ESRD (21%). A subgroup of patients (50%) had an improvement in creatinine that persisted for 48 weeks.
Happiness Economics, Eudaimonia and Positive Psychology: From Happiness Economics to Flourishing Economics
A remarkable current development, happiness economics focuses on the relevance of people’s happiness in economic analyses. As this theory has been criticised for relying on an incomplete notion of happiness, this paper intends to support it with richer philosophical and psychological foundations. Specifically, it suggests that happiness economics should be based on Aristotle’s philosophical eudaimonia concept and on a modified version of ‘positive psychology’ that stresses human beings’ relational nature. First, this analysis describes happiness economics and its shortcomings. Next, it introduces Aristotle’s eudaimonia and takes a look at positive psychology with this lens, elaborating on the need to develop a new approach that goes beyond the economics of happiness: the economics of flourishing. Finally, the paper specifies some possible socio-economic objectives of a eudaimonic economics of happiness.
Systematic Review of Object Oriented Metric Tools
Tools that extract metrics from object oriented code are widely used as part of static code analysis, which acts as a feedback mechanism for managers, developers and other stakeholders to improve software quality. The software industry and academic research have confirmed the necessity of such tools and the impact they have on ensuring quality software. There is a transition of tools from measuring traditional software metrics to object oriented metrics, as the focus has shifted to object oriented design and development. This paper presents a systematic review of both commercial and open source object oriented metric tools, highlighting the features supported and their extensibility. The results are useful for arriving at the most suitable tool depending on the requirements of the stakeholder. The results also identify the potential for an object oriented metric tool that can work effectively across many object oriented languages and is flexible enough to be extended to different languages and metrics.
Towards a software-defined Network Operating System for the IoT
The heterogeneity characterizing the Internet of Things (IoT) landscape can be addressed by leveraging network operating systems (NOSs). Current NOS solutions have been developed for traditional infrastructure-based networks: they do not take into account the specific features of fundamental IoT components such as wireless sensor and actuator networks. Accordingly, in this paper an innovative integrated network operating system for the IoT is proposed, obtained as an evolution of the Open Network Operating System (ONOS). ONOS has been appropriately extended to enhance the recently proposed SDN-WISE platform for supporting SDN in wireless sensor networks and to let it interact with standard OpenFlow (OF) switches. The proposed integrated system has been prototyped.
A Novel Compact Balanced-to-Unbalanced Low-Temperature Co-Fired Ceramic Bandpass Filter With Three Coupled Lines Configuration
This paper proposes a novel compact balanced-to-unbalanced bandpass filter. First, a pre-design circuit is presented, which is composed of an inductive coupled-line bandpass filter and an out-of-phase capacitive coupled-line bandpass filter. A novel compact circuit with a three-coupled-line configuration, derived from the pre-design circuit, is then proposed for miniaturizing the balanced-to-unbalanced bandpass filter. A 2.4-GHz multilayer ceramic chip type balanced-to-unbalanced bandpass filter with a size of 2.0 mm × 1.2 mm × 0.7 mm is developed to validate the feasibility of the proposed structure. The filter is designed using circuit simulation as well as full-wave electromagnetic simulation software, and fabricated using low-temperature co-fired ceramic technology. The measured results agree quite well with the simulated ones. According to the measurement results, the maximum insertion loss is 1.65 dB, the maximum in-band phase imbalance is within 3°, and the maximum in-band magnitude imbalance is less than 0.32 dB.
Capsaicin and Dihydrocapsaicin Determination in Chili Pepper Genotypes Using Ultra-Fast Liquid Chromatography
Research was carried out to estimate the levels of capsaicin and dihydrocapsaicin that may be found in some heat tolerant chili pepper genotypes and to determine the degree of pungency as well as percentage capsaicin content of each of the analyzed peppers. A sensitive, precise, and specific ultra fast liquid chromatographic (UFLC) system was used for the separation, identification and quantitation of the capsaicinoids and the extraction solvent was acetonitrile. The method validation parameters, including linearity, precision, accuracy and recovery, yielded good results. Thus, the limit of detection was 0.045 µg/kg and 0.151 µg/kg for capsaicin and dihydrocapsaicin, respectively, whereas the limit of quantitation was 0.11 µg/kg and 0.368 µg/kg for capsaicin and dihydrocapsaicin. The calibration graph was linear from 0.05 to 0.50 µg/g for UFLC analysis. The inter- and intra-day precisions (relative standard deviation) were <5.0% for capsaicin and <9.9% for dihydrocapsaicin while the average recoveries obtained were quantitative (89.4%-90.1% for capsaicin, 92.4%-95.2% for dihydrocapsaicin), indicating good accuracy of the UFLC method. AVPP0705, AVPP0506, AVPP0104, AVPP0002, C05573 and AVPP0805 showed the highest concentration of capsaicin (12,776, 5,828, 4,393, 4,760, 3,764 and 4,120 µg/kg) and the highest pungency level, whereas AVPP9703, AVPP0512, AVPP0307, AVPP0803 and AVPP0102 recorded no detection of capsaicin and hence were non-pungent. All chili peppers studied except AVPP9703, AVPP0512, AVPP0307, AVPP0803 and AVPP0102 could serve as potential sources of capsaicin. On the other hand, only genotypes AVPP0506, AVPP0104, AVPP0002, C05573 and AVPP0805 gave a % capsaicin content that falls within the pungency limit that could make them recommendable as potential sources of capsaicin for the pharmaceutical industry.
How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation
Recently a few systems for automatically solving math word problems have reported promising results. However, the datasets used for evaluation have limitations in both scale and diversity. In this paper, we build a large-scale dataset which is more than 9 times the size of previous ones, and contains many more problem types. Problems in the dataset are semiautomatically obtained from community question-answering (CQA) web pages. A ranking SVM model is trained to automatically extract problem answers from the answer text provided by CQA users, which significantly reduces human annotation cost. Experiments conducted on the new dataset lead to interesting and surprising results.
Melatonin as a chronobiotic.
Melatonin, hormone of the pineal gland, is concerned with biological timing. It is secreted at night in all species and in ourselves is thereby associated with sleep, lowered core body temperature, and other night time events. The period of melatonin secretion has been described as 'biological night'. Its main function in mammals is to 'transduce' information about the length of the night, for the organisation of daylength dependent changes, such as reproductive competence. Exogenous melatonin has acute sleepiness-inducing and temperature-lowering effects during 'biological daytime', and when suitably timed (it is most effective around dusk and dawn) it will shift the phase of the human circadian clock (sleep, endogenous melatonin, core body temperature, cortisol) to earlier (advance phase shift) or later (delay phase shift) times. The shifts induced are sufficient to synchronise to 24 h most blind subjects suffering from non-24 h sleep-wake disorder, with consequent benefits for sleep. Successful use of melatonin's chronobiotic properties has been reported in other sleep disorders associated with abnormal timing of the circadian system: jetlag, shiftwork, delayed sleep phase syndrome, some sleep problems of the elderly. No long-term safety data exist, and the optimum dose and formulation for any application remains to be clarified.
FingerScanner: Embedding a Fingerprint Scanner in a Raspberry Pi
Nowadays, researchers are paying increasing attention to embedded systems. Cost reduction has led to an increase in the number of platforms supporting the Linux operating system, jointly with the Raspberry Pi motherboard. Thus, embedding devices in Raspberry-Linux systems is a goal for building competitive commercial products. This paper presents a low-cost fingerprint recognition system embedded into a Raspberry Pi with Linux.
Retinal Arteriolar Morphometry Based on Full Width at Half Maximum Analysis of Spectral-Domain Optical Coherence Tomography Images
OBJECTIVES In this study, we develop a microdensitometry method using full width at half maximum (FWHM) analysis of the retinal vascular structure in a spectral-domain optical coherence tomography (SD-OCT) image and present the application of this method in the morphometry of arteriolar changes during hypertension. METHODS Two raters using manual and FWHM methods measured retinal vessel outer and lumen diameters in SD-OCT images. Inter-rater reproducibility was measured using coefficients of variation (CV), the intraclass correlation coefficient and a Bland-Altman plot. OCT images from forty-three eyes of 43 hypertensive patients and 40 eyes of 40 controls were analyzed using an FWHM approach; wall thickness, wall cross-sectional area (WCSA) and wall to lumen ratio (WLR) were subsequently calculated. RESULTS Mean difference in inter-rater agreement ranged from -2.713 to 2.658 μm when using the manual method, and ranged from -0.008 to 0.131 μm when using the FWHM approach. The inter-rater CVs were significantly less for the FWHM approach versus the manual method (P < 0.05). Compared with controls, the wall thickness, WCSA and WLR of retinal arterioles were increased in the hypertensive patients, particularly in diabetic hypertensive patients. CONCLUSIONS The microdensitometry method using an FWHM algorithm markedly improved inter-rater reproducibility of arteriolar morphometric analysis, and SD-OCT may represent a promising noninvasive method for in vivo arteriolar morphometry.
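As an illustration of the FWHM idea applied to a one-dimensional intensity profile (for example, a line scan across a vessel in an SD-OCT B-scan), here is a small NumPy sketch; the profile is synthetic and the half level is taken between the local baseline and the peak, which may differ from the exact procedure used in the study.

```python
# Full width at half maximum (FWHM) of a 1-D intensity profile,
# with linear interpolation at the half-maximum crossings.
import numpy as np

def fwhm(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    baseline = y.min()
    half = baseline + (y.max() - baseline) / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the left and right crossings of the half level
    left = x[i] if i == 0 else np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = x[j] if j == len(y) - 1 else np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

# Synthetic Gaussian-like vessel profile: FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma.
x = np.linspace(-50, 50, 1001)          # position in micrometres (illustrative)
sigma = 10.0
y = np.exp(-x**2 / (2 * sigma**2))
print(fwhm(x, y), 2 * np.sqrt(2 * np.log(2)) * sigma)
```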
Ambulatory assessment of human body kinematics and kinetics
Ground reaction force (GRF) measurement is important in the analysis of human body movements. The main drawback of the existing measurement systems is the restriction to a laboratory environment. This study proposes an ambulatory system for assessing the dynamics of ankle and foot, which integrates the measurement of the GRF with the measurement of human body movement. The GRF and the center of pressure (CoP) are measured using two 6D force/moment sensors mounted beneath the shoe. The movement of the foot and the lower leg is measured using three miniature inertial sensors, two rigidly attached to the shoe and one to the lower leg. The proposed system is validated using a force plate and an optical position measurement system as a reference. The results show good correspondence between both measurement systems, except for the ankle power. The root mean square (rms) difference of the magnitude of the GRF over 10 evaluated trials was 0.012 ± 0.001 N/N (mean ± standard deviation), being 1.1 ± 0.1 % of the maximal GRF magnitude. It should be noted that the forces, moments, and powers are normalized with respect to body weight. The CoP estimation using both methods shows good correspondence, as indicated by the rms difference of 5.1± 0.7 mm, corresponding to 1.7 ± 0.3 % of the length of the shoe. The rms difference between the magnitudes of the heel position estimates was calculated as 18 ± 6 mm, being 1.4 ± 0.5 % of the maximal magnitude. The ankle moment rms difference was 0.004 ± 0.001 Nm/N, being 2.3 ± 0.5 % of the maximal magnitude. Finally, the rms difference of the estimated power at the ankle was 0.02 ± 0.005 W/N, being 14 ± 5 % of the maximal power. This power difference is caused by an inaccurate estimation of the angular velocities using the optical reference measurement system, which is due to considering the foot as a single segment. The ambulatory system considers separate heel and forefoot segments, thus allowing an additional foot moment and power to be estimated. Based on the results of this research, it is concluded that the combination of the instrumented shoe and inertial sensing is a promising tool for the assessment of the dynamics of foot and ankle in an ambulatory setting.
Patterns of mortality after prolonged follow-up of a randomised controlled trial using granulocyte colony-stimulating factor to maintain chemotherapy dose intensity in non-Hodgkin's lymphoma
The effect of utilising granulocyte colony-stimulating factor (G-CSF) to maintain chemotherapy dose intensity in non-Hodgkin's lymphoma (NHL) on long-term mortality patterns has not been formally evaluated. We analysed prolonged follow-up data from the first randomised controlled trial investigating this approach. Data on 10-year overall survival (OS), progression-free survival (PFS), freedom from progression (FFP) and incidence of second malignancies were collected for 80 patients with aggressive subtypes of NHL, who had been randomised to receive either VAPEC-B chemotherapy or VAPEC-B+G-CSF. Median follow-up was 15.7 years for surviving patients. No significant differences were found in PFS or OS. However, 10-year FFP was better in the G-CSF arm (68 vs 47%, P=0.037). Eleven deaths from causes unrelated to NHL or its treatment occurred in the G-CSF arm compared to five in controls. More deaths occurred from second malignancies (4 vs 2) and cardiovascular causes (5 vs 0) in the G-CSF arm. Although this pharmacovigilance study has insufficient statistical power to draw conclusions and is limited by the lack of data on smoking history and other cardiovascular risk factors, these unique long-term outcome data generate hypotheses that warrant further investigation.
Politics and Housing Markets - Four Normative Arguments
The normative question of markets and politics in housing is discussed in relation to theories of welfare economics and political philosophy. The point of departure is a general presumption in favour of market solutions, based on both procedural ("negative freedom") and instrumental ("maximum utility") arguments. Four types of counter‐arguments are discussed against the background of the specific conditions of housing. The procedural arguments based on negative freedom or democracy are not found to be conclusive. The existence of transaction costs and externalities makes it questionable whether market solutions in housing could maximize consumer utility. Alternative values to utility have certain paternalistic implications, though political intervention may sometimes be justified in terms of physiological needs, positive freedom or social citizenship. From an empirical point of view the presumption in favour of market solutions may still be defensible, since housing provision in the Western world is ultim...
Node, Node-Link, and Node-Link-Group Diagrams: An Evaluation
Effectively showing the relationships between objects in a dataset is one of the main tasks in information visualization. Typically there is a well-defined notion of distance between pairs of objects, and traditional approaches such as principal component analysis or multi-dimensional scaling are used to place the objects as points in 2D space, so that similar objects are close to each other. In another typical setting, the dataset is visualized as a network graph, where related nodes are connected by links. More recently, datasets are also visualized as maps, where in addition to nodes and links, there is an explicit representation of groups and clusters. We consider these three techniques, characterized by a progressive increase of the amount of encoded information: node diagrams, node-link diagrams and node-link-group diagrams. We assess these three types of diagrams with a controlled experiment that covers nine different tasks falling broadly into three categories: node-based tasks, network-based tasks and group-based tasks. Our findings indicate that adding links, or links and group representations, does not negatively impact performance (time and accuracy) of node-based tasks. Similarly, adding group representations does not negatively impact the performance of network-based tasks. Node-link-group diagrams outperform the others on group-based tasks. These conclusions contradict results in other studies, in similar but subtly different settings. Taken together, however, such results can have significant implications for the design of standard and domain-specific visualization tools.
Optimal Fan Speed Control for Thermal Management of Servers
Improving the cooling efficiency of servers has become an essential requirement in data centers today, as the power used to cool the servers has become an increasingly large component of the total power consumption. Additionally, fan speed control has emerged in recent years as a critical part of system thermal architecture. However, the state of the art in server fan control often results in over-provisioning of air flow that leads to high fan power consumption. This can be exacerbated in server architectures that share cooling resources among server components, where a single hot spot can often drive the operation of a multiplicity of fans. To address this problem, this paper presents a novel multi-input multi-output (MIMO) fan controller that utilizes thermal models developed from first principles to manipulate the operation of fans. The controller tunes the speeds of individual fans proactively based on prediction of the server temperatures. Experimental results show that, with fans controlled by the optimal controller, over-provisioning of cooling air is eliminated, temperatures are more tightly controlled and fan energy consumption is reduced by up to 20% compared to that with a zone-based feedback controller.
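A schematic one-step, model-based MIMO fan-control loop is sketched below in NumPy, assuming a linear discrete-time thermal model T[k+1] = A T[k] + B u[k] + d identified from first principles; the matrices, setpoints and limits are illustrative and are not the controller or model of the paper.

```python
# One-step model-based MIMO fan control sketch: predict next temperatures from
# a linear thermal model and choose the least-squares fan-speed vector that
# drives every monitored component toward its setpoint, subject to speed limits.
import numpy as np

A = np.array([[0.90, 0.02],            # illustrative thermal coupling between zones
              [0.03, 0.92]])
B = np.array([[-5.0, -1.0],            # fan speeds cool both components,
              [-1.5, -6.0]])           # each fan mostly affecting "its" zone
d = np.array([8.0, 7.0])               # heat load / ambient drive per step (deg C)
T_set = np.array([70.0, 70.0])         # temperature targets (deg C)
u_min, u_max = 0.0, 1.0                # normalized fan speed limits

def fan_speeds(T):
    """Solve B u = T_set - A T - d in the least-squares sense, then clip."""
    u, *_ = np.linalg.lstsq(B, T_set - A @ T - d, rcond=None)
    return np.clip(u, u_min, u_max)

T = np.array([80.0, 76.0])
for step in range(15):
    u = fan_speeds(T)
    T = A @ T + B @ u + d              # plant update (here the plant is the model itself)
    print(step, np.round(T, 2), np.round(u, 2))
```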
3D printing: the new industrial revolution
The MIT - Cornell Collision and Why It Happened
Mid-way through the 2007 DARPA Urban Challenge, MIT’s autonomous Land Rover LR3 ‘Talos’ and Team Cornell’s autonomous Chevrolet Tahoe ‘Skynet’ collided in a low-speed accident, one of the first well-documented collisions between two full-size autonomous vehicles. This collaborative study between MIT and Cornell examines the root causes of the collision, which are identified in both teams’ system designs. Systems-level descriptions of both autonomous vehicles are given, and additional detail is provided on sub-systems and algorithms implicated in the collision. A brief summary of robot–robot interactions during the race is presented, followed by an in-depth analysis of both robots’ behaviors leading up to and during the Skynet–Talos collision. Data logs from the vehicles are used to show the gulf between autonomous and human-driven vehicle behavior at low speeds and close proximities. Contributing factors are shown to be: (1) difficulties in sensor data association leading to phantom obstacles and an inability to detect slow moving vehicles, (2) failure to anticipate vehicle intent, and (3) an over emphasis on lane constraints versus vehicle proximity in motion planning. Eye contact between human road users is a crucial communications channel for slow-moving close encounters between vehicles. Inter-vehicle communication may play a similar role for autonomous vehicles; however, there are availability and denial-of-service issues to be addressed.
A high-linearity, LC-Tuned, 24-GHz T/R switch in 90-nm CMOS
This paper presents an LC-tuned, 24-GHz single-pole double-throw (SPDT) transmit/receive (T/R) switch implemented in 90-nm CMOS. The design focuses on the techniques to increase the power handling capability in the transmit (Tx) mode under 1.2-V operation. The switch achieves a measured P-1dB of 28.7 dBm, which represents the highest linearity, reported to date, for CMOS millimeter-wave T/R switches. The transmit and receive (Rx) branches employ different switch topologies to minimize the power leakage into the Rx path during Tx mode, and hence improve the linearity. To accommodate large signal swing, AC floating bias is applied using large bias resistors to all terminals of the switch devices. Triple-well devices are utilized to effectively float the substrate terminals. The switch uses a single 1.2-V digital control signal for T/R mode selection and for source/drain bias. The measured insertion loss is 3.5 dB and return loss is better than -10 dB at 24 GHz.
Polynomial texture maps
In this paper we present a new form of texture mapping that produces increased photorealism. Coefficients of a biquadratic polynomial are stored per texel, and used to reconstruct the surface color under varying lighting conditions. Like bump mapping, this allows the perception of surface deformations. However, our method is image based, and photographs of a surface under varying lighting conditions can be used to construct these maps. Unlike bump maps, these Polynomial Texture Maps (PTMs) also capture variations due to surface self-shadowing and interreflections, which enhance realism. Surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. We have also found PTMs useful for producing a number of other effects such as anisotropic and Fresnel shading models and variable depth of focus. Lastly, we present several reflectance function transformations that act as contrast enhancement operators. We have found these particularly useful in the study of ancient archeological clay and stone writings.
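The per-texel model can be made concrete with a short sketch: fit the six biquadratic coefficients from samples under several light directions, then evaluate the polynomial for a new direction. The coefficient values and sampling below are synthetic; only the standard six-term PTM form is assumed.

```python
# Polynomial Texture Map sketch: per texel, luminance is modeled as a
# biquadratic in the projected light direction (lu, lv):
#   L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
import numpy as np

def design(lu, lv):
    return np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=-1)

rng = np.random.default_rng(0)

# Synthetic "photographs": one texel observed under 50 light directions.
lu = rng.uniform(-0.8, 0.8, 50)
lv = rng.uniform(-0.8, 0.8, 50)
true_coeffs = np.array([-0.3, -0.2, 0.1, 0.4, 0.2, 0.6])   # made-up texel
observed = design(lu, lv) @ true_coeffs + rng.normal(0, 0.01, 50)

# Fit the six PTM coefficients for this texel by least squares.
coeffs, *_ = np.linalg.lstsq(design(lu, lv), observed, rcond=None)

# Reconstruct the texel's luminance under a new light direction.
new_lu, new_lv = 0.3, -0.5
L = design(np.float64(new_lu), np.float64(new_lv)) @ coeffs
print(np.round(coeffs, 3), float(np.clip(L, 0.0, 1.0)))
```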
A randomized and open-label trial evaluating the addition of pazopanib to lapatinib as first-line therapy in patients with HER2-positive advanced breast cancer
This phase II study (VEG20007; NCT00347919) with randomized and open-label components evaluated first-line lapatinib plus pazopanib therapy and/or lapatinib monotherapy in patients with human epidermal growth factor receptor type 2 (HER2)-positive advanced/metastatic breast cancer. Patients were enrolled sequentially into two cohorts: Cohort 1, patients were randomly assigned to lapatinib 1,000 mg plus pazopanib 400 mg or lapatinib 1,500 mg monotherapy; Cohort 2, patients received lapatinib 1,500 mg plus pazopanib 800 mg. The primary endpoint was week-12 progressive disease rate (PDR) for Cohort 1. The principal secondary endpoint was week-12 response rate (RR) for Cohort 2. Efficacy was assessed in patients with centrally confirmed HER2 positivity (modified intent-to-treat population [MITT]). The study enrolled 190 patients (Cohort 1, combination n = 77, lapatinib n = 73; Cohort 2, n = 40). The MITT population comprised n = 141 (Cohort 1) and n = 36 (Cohort 2). In Cohort 1, week-12 PDRs were 36.2 % (combination) versus 38.9 % (lapatinib; P = 0.37 for the difference). Week-12 RRs were 36.2 % (combination) versus 22.2 % (lapatinib). In Cohort 2, week-12 RR was 33.3 %. In Cohort 1, grade 3/4 adverse events (AEs) included diarrhea (combination, 9 %; lapatinib, 5 %) and hypertension (combination, 5 %; lapatinib, 0 %). Grades 3/4 AEs in Cohort 2 included diarrhea (40 %), hypertension (5 %), and fatigue (5 %). Alanine aminotransferase elevations >5 times the upper limit of normal occurred in Cohort 1 (combination, 18 %; lapatinib, 5 %) and Cohort 2 (20 %). Upon conclusion, the combination of lapatinib plus pazopanib did not improve PDR compared with lapatinib monotherapy, although RR was increased. Toxicity was higher with the combination, including increased diarrhea and liver enzyme elevations.
Recombinate study
In the Recombinate study of previously treated patients, we have followed 67 patients (the majority of them adults or adolescents) for as long as 5.5 years, although a majority of patients have been on study for a shorter period of time. As part of the research protocol, T-cell phenotype analyses have been carried out prospectively from baseline through to the present time. A superficial analysis had been carried out on these data some time back, but a somewhat more detailed analysis has now been done. Overall there are 67 individuals, with a mean age of 26.5 years. There were 23 HIV negatives, the rest being HIV positive. At the time of entry no patients were moving towards end-stage HIV disease; a positive prognosis was a criterion for enrollment. There were 59 patients tested for CD4s at baseline. Testing was done by local laboratories rather than at a central lab. Of this group 19 HIV-negative patients were studied, but they are not the subject of this paper. There were 40 HIV-positive patients analyzed, with mean CD4s at baseline of 428 and a range in absolute numbers as wide as 27-1136. Looking at the group of 40 patients as a whole, there was a general change, a drop in CD4 numbers over the period of the study. CD8s tended to increase very slightly, and the CD4/CD8 ratio tended to be the most stable of these three parameters monitored. However, on looking at this in more detail it became apparent that age is a co-factor for HIV disease progression; we know that children and adolescents move more slowly through the HIV disease process, and that elderly individuals move more rapidly; this has been reported by a number of groups. In addition, it appears that CD4 numbers may also be associated with rapidity of progression. We therefore decided to look at our numbers and compare them with those reported by de Biasi and his group in the October, 1991, issue of Blood. They had selected their patients, 20 individuals, according to age and CD4 numbers at entry, and then randomized them to receive either the Hemofil-M high-purity product or an intermediate-purity concentrate. So we had a look at our HIV-positive Recombinate patients, in particular patients between 12 and 36 years of age: there were 32,
FOSSIL EVIDENCE FOR THE DONGTUJINHE FORMATION OF YISHENJILIKE MOUNTAIN, WESTERN TIANSHAN
Lamellibranchiata (bivalve) fossils and coral fossils of the Upper Carboniferous were discovered in the detrital and carbonate rocks during the geological survey of the Yishenjilike mountain area. The rock association of the strata that contain these fossils is similar to that of the Dongtujinhe Formation in the Boluohuoluoshan minor stratigraphic area. The Dongtujinhe Formation was first established in the Yining minor stratigraphic area; it not only has important value for stratigraphic correlation, but also shows that the Yining basin is similar to the Boluohuoluoshan block in its sedimentation and evolutionary process.
Learners' acceptance of e-learning in South Korea: Theories and results
One of the most significant changes in the field of education in this information age is the paradigm shift from teacher-centered to learner-centered education. Along with this paradigm shift, understanding of students' e-learning adoption behavior among various countries is urgently needed. South Korea's dense student population and high educational standards made investment in e-learning very cost-effective. However, despite the fact that South Korea is one of the fastest growing countries in e-learning, not much of the research results have been known to the globalized world. By investigating critical factors on e-learning adoption in South Korea, our study attempts to fill a gap in the individual country-level e-learning research. Based on the extensive literature review on flow theory, service quality, and the Technology Acceptance Model, our study proposes a research model which consists of four independent variables (instructor characteristics, teaching materials, design of learning contents, and playfulness), two belief variables (perceived usefulness and perceived ease of use), and one dependent variable (intention to use e-learning). Results of regression analyses are presented. Managerial implications of the findings and future research directions are also discussed. © 2009 Elsevier Ltd. All rights reserved.
ERNN: A Biologically Inspired Feedforward Neural Network to Discriminate Emotion From EEG Signal
Emotions play an important role in human cognition, perception, decision making, and interaction. This paper presents a six-layer biologically inspired feedforward neural network to discriminate human emotions from EEG. The neural network comprises a shift register memory after spectral filtering for the input layer, and the estimation of coherence between each pair of input signals for the hidden layer. EEG data are collected from 57 healthy participants from eight locations while subjected to audio-visual stimuli. Discrimination of emotions from EEG is investigated based on valence and arousal levels. The accuracy of the proposed neural network is compared with various feature extraction methods and feedforward learning algorithms. The results showed that the highest accuracy is achieved when using the proposed neural network with a type of radial basis function.
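A sketch of the kind of pairwise-coherence features described for the hidden layer is shown below, using scipy.signal.coherence on synthetic multichannel data; the sampling rate, band edges and windowing are illustrative rather than the study's settings.

```python
# Pairwise magnitude-squared coherence between EEG channels, averaged over a band.
# This mirrors the coherence features described above; parameters are illustrative.
import itertools
import numpy as np
from scipy.signal import coherence

fs = 256                      # sampling rate (Hz), illustrative
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)

# Synthetic 8-channel recording: shared 10 Hz component plus channel noise.
shared = np.sin(2 * np.pi * 10 * t)
eeg = np.stack([shared + 0.8 * rng.normal(size=t.size) for _ in range(8)])

def band_coherence(x, y, lo=8.0, hi=13.0):
    f, cxy = coherence(x, y, fs=fs, nperseg=512)
    band = (f >= lo) & (f <= hi)
    return cxy[band].mean()

features = [band_coherence(eeg[i], eeg[j])
            for i, j in itertools.combinations(range(eeg.shape[0]), 2)]
print(len(features), np.round(features[:5], 3))   # 28 pairwise features for 8 channels
```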
Deterministic Word Segmentation Using Maximum Matching with Fully Lexicalized Rules
We present a fast algorithm of word segmentation that scans an input sentence in a deterministic manner just one time. The algorithm is based on simple maximum matching which includes execution of fully lexicalized transformational rules. Since the process of rule matching is incorporated into dictionary lookup, fast segmentation is achieved. We evaluated the proposed method on word segmentation of Japanese. Experimental results show that our segmenter runs considerably faster than the state-of-the-art systems and yields a practical accuracy when a more accurate segmenter or an annotated corpus is available.
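The greedy longest-match core of maximum matching can be written in a few lines; the sketch below uses a toy dictionary and omits the fully lexicalized transformational rules that the paper folds into dictionary lookup.

```python
# Deterministic left-to-right maximum matching word segmentation.
# The paper additionally applies fully lexicalized transformational rules
# during dictionary lookup; this sketch shows only the greedy matching core.

def max_match(sentence, dictionary, max_len=6):
    words = []
    i = 0
    while i < len(sentence):
        # try the longest dictionary entry starting at position i,
        # falling back to a single character if nothing matches
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words

dictionary = {"東京", "都", "に", "住む", "東京都"}
print(max_match("東京都に住む", dictionary))   # ['東京都', 'に', '住む']
```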
Fitness determinants of success in men's and women's football.
In this study, we examined gender and age differences in physical performance in football. Thirty-four elite female and 34 elite male players (age 17 +/- 1.6 to 24 +/- 3.4 years) from a professional football club were divided into four groups (n=17 each) according to gender and competitive level (senior males, senior females, junior males, and junior females). Players were tested for specific endurance (Yo-YoIR1), sprint over 15 m (Sprint-15 m), vertical jump without (CMJ) or with (ACMJ) arm swing, agility (Agility-15 m), and ball dribbling over 15 m (Ball-15 m). The Yo-YoIR1 and Agility-15m performances showed both a gender and competitive level difference (P < 0.001). Senior and junior males covered 97 and 153% more distance during the Yo-YoIR1 than senior and junior females, respectively (P < 0.001). Gender but not age differences were found for Sprint-15 m performance (P < 0.001). No difference in vertical jump and Ball-15 m performances were found between senior and junior males (P > 0.05). More marked gender differences were evident in endurance than in anaerobic performance in female players. These results show major fitness differences by gender for a given competitive level in football players. It is suggested that training and talent identification should focus on football-specific endurance and agility as fitness traits in post-adolescent players of both sexes.
Reading the mind's eye: Decoding category information during mental imagery
Category information for visually presented objects can be read out from multi-voxel patterns of fMRI activity in ventral-temporal cortex. What is the nature and reliability of these patterns in the absence of any bottom-up visual input, for example, during visual imagery? Here, we first ask how well category information can be decoded for imagined objects and then compare the representations evoked during imagery and actual viewing. In an fMRI study, four object categories (food, tools, faces, buildings) were either visually presented to subjects, or imagined by them. Using pattern classification techniques, we could reliably decode category information (including for non-special categories, i.e., food and tools) from ventral-temporal cortex in both conditions, but only during actual viewing from retinotopic areas. Interestingly, in temporal cortex when the classifier was trained on the viewed condition and tested on the imagery condition, or vice versa, classification performance was comparable to within the imagery condition. The above results held even when we did not use information in the specialized category-selective areas. Thus, the patterns of representation during imagery and actual viewing are in fact surprisingly similar to each other. Consistent with this observation, the maps of "diagnostic voxels" (i.e., the classifier weights) for the perception and imagery classifiers were more similar in ventral-temporal cortex than in retinotopic cortex. These results suggest that in the absence of any bottom-up input, cortical back projections can selectively re-activate specific patterns of neural activity.
A Sharp PageRank Algorithm with Applications to Edge Ranking and Graph Sparsification
We give an improved algorithm for computing personalized PageRank vectors with tight error bounds which can be as small as O(n^-p) for any fixed positive integer p. The improved PageRank algorithm is crucial for computing a quantitative ranking of edges in a given graph. We will use the edge ranking to examine two interrelated problems: graph sparsification and graph partitioning. We can combine the graph sparsification and the partitioning algorithms using PageRank vectors to derive an improved partitioning algorithm.
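For reference, a baseline power-iteration computation of a personalized PageRank vector is sketched below; this is the quantity the improved algorithm approximates with sharper error control, not the improved algorithm itself, and the teleportation constant and tolerance are illustrative.

```python
# Baseline personalized PageRank by power iteration:
#   pr = alpha * s + (1 - alpha) * pr @ P,  with P the row-stochastic walk matrix.
import numpy as np

def personalized_pagerank(A, seed, alpha=0.15, tol=1e-10, max_iter=10000):
    A = np.asarray(A, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
    s = np.zeros(A.shape[0])
    s[seed] = 1.0                               # teleport back to the seed node
    pr = s.copy()
    for _ in range(max_iter):
        new = alpha * s + (1 - alpha) * pr @ P
        if np.abs(new - pr).sum() < tol:
            return new
        pr = new
    return pr

# Small example graph (adjacency matrix); edge scores for ranking could then be
# derived from the resulting PageRank values.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(np.round(personalized_pagerank(A, seed=0), 4))
```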
Algorithmic Complexity in Coding Theory and the Minimum Distance Problem
We start with an overview of algorithmic complexity problems in coding theory. We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg, dating back to 1978. Extensions and applications of this result to other problems in coding theory are discussed.
The role of the sympathetic nervous system in hypoglycaemia-stimulated gastric secretion.
The gastric secretory response to insulin is mediated predominantly by the vagus. The associated hypoglycaemia stress response is mediated by the sympathetic nervous system. Inhibition of the sympathetic response by simultaneous alpha and beta receptor blockade was studied in five healthy young adults. No appreciable modification of gastric secretory output resulted.
VANET security: Issues, challenges and solutions
Vehicular Ad-hoc Networks (VANETs) are infrastructure-less networks. They provide enhancements in safety-related techniques and in comfort while driving, and enable vehicles to share information regarding safety and traffic analysis. The scope of VANET applications has increased with recent advances in technology and the development of smart cities across the world. VANETs provide a self-aware system that has a major impact on enhancing traffic services and reducing road accidents. Information shared in this system is time sensitive and requires robust, quickly formed network connections. VANET, being a wireless ad hoc network, serves this purpose well but is prone to security attacks. Highly dynamic connections, sensitive information sharing and the time sensitivity of this network make it an attractive target for attackers. This paper presents a literature survey on VANET with a primary focus on its security issues and challenges. The features of VANET, its architecture, security requisites, attacker types and possible attacks are covered in this survey.
Structured-light 3D surface imaging: a tutorial
We provide a review of recent advances in 3D surface imaging technologies. We focus particularly on noncontact 3D surface measurement techniques based on structured illumination. The high-speed and high-resolution pattern projection capability offered by the digital light projection technology, together with the recent advances in imaging sensor technologies, may enable new generation systems for 3D surface measurement applications that will provide much better functionality and performance than existing ones in terms of speed, accuracy, resolution, modularization, and ease of use. Performance indexes of 3D imaging systems are discussed, and various 3D surface imaging schemes are categorized, illustrated, and compared. Calibration techniques are also discussed, since they play critical roles in achieving the required precision. Numerous applications of 3D surface imaging technologies are discussed with several examples. © 2011 Optical Society of America
Control of esophageal and intragastric pH with compounded and manufactured omeprazole in patients with reflux esophagitis: a pilot study.
BACKGROUND Proton pump inhibitors (PPI) are the drugs of choice for treatment of gastroesophageal reflux disease (GERD). Omeprazole, the first PPI commercialized, is now available in different formulations. OBJECTIVES To compare the efficacy of different omeprazole formulations on gastric acid secretion measured by intragastric and esophageal pH monitoring in patients with reflux esophagitis. METHODS Prospective, open, randomized clinical trial involving H. pylori negative patients with typical symptoms of GERD. Patients were submitted to 24-h intragastric and esophageal pH studies during use of six different formulations of compounded and manufactured omeprazole. RESULTS Thirty patients, 19 female, median age 55 years were studied. The intragastric pH was maintained below 4.0 for a median of 36.7% of total time in compounded group and 47.7% in manufactured group (p>0.05). There was also no statistical difference between the median percentage of time of pH below 4.0 in orthostatic and supine position in compounded and manufactured groups (30.1% and 49.6% and 28.8% and 55.2%, respectively). The esophageal pH was maintained below 4.0 for a median of 0.1% of total time in compounded group and 0.4% in manufactured group (p>0.05). In orthostatic position the median percentage of time of esophageal pH below 4.0 was 0.0% in both groups (p>0.05). In supine position, the median percentage of time of esophageal pH below 4.0 was 0.1% and 0.3% in compounded and manufactured groups, respectively (p>0.05). CONCLUSION The omeprazole formulations studied (compounded and manufactured) showed similar control of gastric acid secretion and esophageal acid exposure in patients with reflux esophagitis.
Efficient Ranking from Pairwise Comparisons
The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.
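One of the "much simpler" estimators can be illustrated as follows: score each object by its empirical win rate over the noisy comparisons it appears in and sort; the noise model and the O(n log n) sample budget below are illustrative, not the paper's exact setup.

```python
# Rank n objects from noisy pairwise comparisons by empirical win rate.
import numpy as np

rng = np.random.default_rng(0)
n = 50
true_rank = rng.permutation(n)              # position 0 = best
p_correct = 0.8                             # chance a comparison agrees with the truth

# Sample roughly n*log(n) random noisy comparisons.
m = int(4 * n * np.log(n))
wins = np.zeros(n)
games = np.zeros(n)
for _ in range(m):
    i, j = rng.choice(n, size=2, replace=False)
    i_better = true_rank[i] < true_rank[j]
    observed_i_wins = i_better if rng.random() < p_correct else not i_better
    winner = i if observed_i_wins else j
    wins[winner] += 1
    games[i] += 1
    games[j] += 1

score = wins / np.maximum(games, 1)
estimated = np.argsort(-score)              # best first
est_pos = np.empty(n, dtype=int)
est_pos[estimated] = np.arange(n)
# Kendall-tau-style disagreement with the ground-truth ordering:
disagree = sum((true_rank[i] < true_rank[j]) != (est_pos[i] < est_pos[j])
               for i in range(n) for j in range(i + 1, n))
print("pairwise disagreements:", disagree, "of", n * (n - 1) // 2)
```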
Riemannian Geometry Applied to BCI Classification
In brain-computer interfaces based on motor imagery, covariance matrices are widely used through spatial filter computation and other signal processing methods. Covariance matrices lie in the space of Symmetric Positive-Definite (SPD) matrices and therefore fall within the Riemannian geometry domain. Using a differential geometry framework, we propose different algorithms for classifying covariance matrices in their native space.
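A small sketch of the underlying tools is given below: the affine-invariant Riemannian distance between SPD covariance matrices and a minimum-distance-to-mean classifier, with a log-Euclidean mean used as a simple stand-in for the iterative Riemannian (Karcher) mean; the data and channel counts are synthetic.

```python
# Riemannian tools for SPD covariance matrices (sketch).
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def riemann_dist(A, B):
    """Affine-invariant distance: || log(A^(-1/2) B A^(-1/2)) ||_F."""
    A_isqrt = inv(sqrtm(A))
    return float(np.linalg.norm(logm(A_isqrt @ B @ A_isqrt), "fro"))

def log_euclidean_mean(covs):
    """Simple stand-in for the iterative Riemannian (Karcher) mean."""
    return expm(np.mean([logm(C) for C in covs], axis=0)).real

def classify(trial_cov, class_means):
    """Minimum distance to mean: assign the class whose mean is closest."""
    return int(np.argmin([riemann_dist(trial_cov, M) for M in class_means]))

# Toy example: covariance matrices from two synthetic 4-channel classes.
rng = np.random.default_rng(0)

def random_cov(scale):
    X = rng.normal(size=(200, 4)) * scale
    return np.cov(X, rowvar=False)

class0 = [random_cov([1.0, 1.0, 1.0, 1.0]) for _ in range(20)]
class1 = [random_cov([2.0, 1.0, 1.0, 0.5]) for _ in range(20)]
means = [log_euclidean_mean(class0), log_euclidean_mean(class1)]
print(classify(random_cov([2.0, 1.0, 1.0, 0.5]), means))   # expected: 1
```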
A Geotectonic Model of the Svecofennidic Orogeny
The nowadays generally accepted history of the Archaean Svecofennidic orogeny seems to fit very well into a cyclic pattern with four cycles. Each cycle begins with the eruption of acid rocks and ends with basic ones. As an explanation of this cyclic development, mobilisation or melting in the border region between the sial and the underlying basic layer or mantle is proposed. The acid mobilised rock or magma, being lighter than the underlying basic one, prevents the latter from moving upwards in the crust until almost all acid magma has left the magma chamber. The result of this is the observed sequence of eruption beginning with acid rocks and ending with basic ones. The strong movements during the first folding phase of the orogeny, however, mix to some extent the acid and basic magmas, forming the calc-alkaline suite of eruptive rocks. This cycle begins with basic intrusions and is followed by acid ones because the basic magmas move faster than the acid ones owing to their lower viscosity. At t...
Blockchains and Smart Contracts for the Internet of Things
Motivated by the recent explosion of interest around blockchains, we examine whether they make a good fit for the Internet of Things (IoT) sector. Blockchains allow us to have a distributed peer-to-peer network where non-trusting members can interact with each other without a trusted intermediary, in a verifiable manner. We review how this mechanism works and also look into smart contracts-scripts that reside on the blockchain that allow for the automation of multi-step processes. We then move into the IoT domain, and describe how a blockchain-IoT combination: 1) facilitates the sharing of services and resources leading to the creation of a marketplace of services between devices and 2) allows us to automate in a cryptographically verifiable manner several existing, time-consuming workflows. We also point out certain issues that should be considered before the deployment of a blockchain network in an IoT setting: from transactional privacy to the expected value of the digitized assets traded on the network. Wherever applicable, we identify solutions and workarounds. Our conclusion is that the blockchain-IoT combination is powerful and can cause significant transformations across several industries, paving the way for new business models and novel, distributed applications.
Exploring the heat-induced structural changes of the β-lactoglobulin-linoleic acid complex by fluorescence spectroscopy and molecular modeling techniques
Linoleic acid (LA) is the precursor of bioactive oxidized linoleic acid metabolites and arachidonic acid, and is therefore essential for human growth and plays an important role in good health in general. Because of its low water solubility and sensitivity to oxidation, new ways of delivering LA without compromising the sensory attributes of the enriched products need to be identified. The major whey protein, β-lactoglobulin (β-Lg), is a natural carrier for hydrophobic molecules. The thermally induced changes of the β-Lg-LA complex were investigated in the temperature range from 25 to 85 °C using fluorescence spectroscopy techniques in combination with a molecular modeling study, and the results were compared with those obtained for β-Lg alone. Experimental results indicated that, regardless of LA binding, polypeptide chain rearrangements at temperatures higher than 75 °C lead to greater exposure of hydrophobic residues, causing an increase in fluorescence intensity. The phase diagram indicated an all-or-none transition between two conformations. The LA surface involved in the interaction with β-Lg was about 497 Å², indicating a good affinity between the two components even at high temperatures. The results obtained in this study provide important details about heat-induced changes in the conformation of the β-Lg-LA complex. Thermal treatment at high temperature does not affect the LA binding and carrier functions of β-Lg.
EEG oscillations: From correlation to causality.
Already in his first report on the discovery of the human EEG in 1929, Berger showed great interest in further elucidating the functional roles of the alpha and beta waves for normal mental activities. Meanwhile, most cognitive processes have been linked to at least one of the traditional frequency bands in the delta, theta, alpha, beta, and gamma range. Although the existing wealth of high-quality correlative EEG data has led many researchers to the conviction that brain oscillations subserve various sensory and cognitive processes, a causal role can only be demonstrated by directly modulating such oscillatory signals. In this review, we highlight several methods to selectively modulate neuronal oscillations, including EEG-neurofeedback, rhythmic sensory stimulation, repetitive transcranial magnetic stimulation (rTMS), and transcranial alternating current stimulation (tACS). In particular, we discuss tACS as the most recent technique to directly modulate oscillatory brain activity. Studies demonstrating the effectiveness of tACS comprise reports of purely behavioral or purely electrophysiological effects, of combinations of behavioral effects with offline EEG measurements, or of simultaneous (online) tACS-EEG recordings. Whereas most tACS studies are designed to modulate ongoing rhythmic brain activity at a specific frequency, recent evidence suggests that tACS may also modulate cross-frequency interactions. Taken together, the modulation of neuronal oscillations makes it possible to demonstrate causal links between brain oscillations and cognitive processes and to obtain important insights into human brain function.
Multi-Level Security Embedded With Surveillance System
Robust security is an essential component of any system or organization in an environment where hacking is increasingly common, and layers of protection are necessary. This paper presents a model to develop a multilevel security system. To reach or access the innermost circle, three stages of security endorsement are necessary, forming the primary level of security: a hex keypad, Bluetooth, and RFID. The valuables in the inner vault are further secured with a secondary system, completely separate from the primary, consisting of a fingerprint scanner. Any detected security breach alerts the authorities with the help of a GSM shield, so that the necessary response can be taken immediately. Continuous surveillance with online streaming is also demonstrated using a Raspberry Pi and a digital camera, further safeguarding the valuables.
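A minimal sketch of the staged-access logic described above. The keypad, Bluetooth, RFID, fingerprint and GSM interfaces are hypothetical stubs standing in for real drivers on the embedded hardware; only the control flow (primary three-stage check, independent secondary check, alert on failure) mirrors the description.

```python
# Staged multi-level access logic (sketch); all hardware calls are stubs.
def hex_keypad_ok():      return True   # stub: compare the entered code with a stored hex code
def bluetooth_token_ok(): return True   # stub: verify a token from a paired device
def rfid_tag_ok():        return True   # stub: match the scanned tag UID against a whitelist
def fingerprint_ok():     return True   # stub: query the fingerprint scanner module
def send_gsm_alert(msg):  print("ALERT (via GSM):", msg)   # stub: SMS through the GSM shield

def primary_access():
    """All three primary-level checks must pass; any failure raises an alert."""
    for stage, check in (("keypad", hex_keypad_ok),
                         ("bluetooth", bluetooth_token_ok),
                         ("rfid", rfid_tag_ok)):
        if not check():
            send_gsm_alert(f"Primary-level breach attempt at {stage} stage")
            return False
    return True

def vault_access():
    """Secondary, independent check guarding the inner vault."""
    if not fingerprint_ok():
        send_gsm_alert("Vault breach attempt: fingerprint mismatch")
        return False
    return True

if __name__ == "__main__":
    if primary_access() and vault_access():
        print("Access granted to inner vault.")
```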
Nanoscale plasmonic memristor with optical readout functionality.
We experimentally demonstrate for the first time a nanoscale resistive random access memory (RRAM) electronic device integrated with a plasmonic waveguide providing the functionality of optical readout. The device fabrication is based on a silicon-on-insulator, CMOS-compatible approach of local oxidation of silicon, which enables the realization of the RRAM and a low optical loss channel photonic waveguide in the same fabrication step. The plasmonic device operates at the telecom wavelength of 1.55 μm and can be used to optically read the logic state of the memory by measuring two distinct levels of optical transmission. The experimental characterization of the device shows optical bistable behavior between these levels of transmission in addition to well-defined hysteresis. We attribute the changes in the optical transmission to the creation of a nanoscale absorbing and scattering metallic filament in the amorphous silicon layer, where the plasmonic mode resides.
Cryptographic Cloud Storage
We consider the problem of building a secure cloud storage service on top of a public cloud infrastructure where the service provider is not completely trusted by the customer. We describe, at a high level, several architectures that combine recent and non-standard cryptographic primitives in order to achieve our goal. We survey the benefits such an architecture would provide to both customers and service providers and give an overview of recent advances in cryptography motivated specifically by cloud storage.
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs
Recurrent Neural Networks (RNNs) are powerful models that achieve exceptional performance on several pattern recognition problems. However, the training of RNNs is a computationally difficult task owing to the well-known "vanishing/exploding" gradient problem. Algorithms proposed for training RNNs either exploit no (or limited) curvature information and have cheap per-iteration complexity, or attempt to gain significant curvature information at the cost of increased per-iteration cost. The former set includes diagonally-scaled first-order methods such as ADAGRAD and ADAM, while the latter consists of second-order algorithms like Hessian-Free Newton and K-FAC. In this paper, we present adaQN, a stochastic quasi-Newton algorithm for training RNNs. Our approach retains a low per-iteration cost while allowing for non-diagonal scaling through a stochastic L-BFGS updating scheme. The method uses a novel L-BFGS scaling initialization scheme and is judicious in storing and retaining L-BFGS curvature pairs. We present numerical experiments on two language modeling tasks and show that adaQN is competitive with popular RNN training algorithms.
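A minimal sketch of the standard L-BFGS two-loop recursion that stochastic quasi-Newton methods such as adaQN build on. This is not the full adaQN algorithm (which adds its own scaling initialization and curvature-pair screening for the stochastic setting); the toy quadratic and step-size choices are illustrative assumptions.

```python
# L-BFGS two-loop recursion (sketch) applied to a toy quadratic.
import numpy as np

def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
    """Return -H*grad, with H built from curvature pairs (s_k, y_k)."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append((alpha, rho, s, y))
    r = gamma * q                         # initial inverse-Hessian scaling H0 = gamma*I
    for alpha, rho, s, y in reversed(alphas):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r

if __name__ == "__main__":
    # Toy quadratic f(x) = 0.5 x^T A x with eigenvalues in (0, 2), so a unit
    # step along the (quasi-)Newton direction is stable without a line search.
    rng = np.random.default_rng(0)
    A = np.diag(np.linspace(0.1, 1.9, 20))
    x = rng.standard_normal(20)
    g = A @ x
    s_list, y_list = [], []
    for _ in range(30):
        gamma = 1.0
        if s_list:
            gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
        d = lbfgs_direction(g, s_list[-10:], y_list[-10:], gamma)
        x_new = x + d
        g_new = A @ x_new
        s_list.append(x_new - x)
        y_list.append(g_new - g)
        x, g = x_new, g_new
    print("final gradient norm:", np.linalg.norm(g))
```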
Feature Selection for SVMs
We introduce a method of feature selection for Support Vector Machines. The method is based upon finding those features which minimize bounds on the leave-one-out error. This search can be efficiently performed via gradient descent. The resulting algorithms are shown to be superior to some standard feature selection algorithms on both toy data and real-life problems of face recognition, pedestrian detection and analyzing DNA microarray data.
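A minimal related sketch, not the leave-one-out-bound gradient method of the abstract above: recursive feature elimination with a linear SVM, which ranks features by the magnitude of the learned weights. Dataset, regularization constant and feature counts are illustrative assumptions.

```python
# SVM-based feature selection via recursive feature elimination (sketch).
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic problem: 50 features, only 5 of which are informative.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           n_redundant=0, random_state=0)

# Repeatedly fit a linear SVM and drop the 5 lowest-|weight| features.
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000),
               n_features_to_select=5, step=5)
selector.fit(X, y)

print("selected feature indices:",
      [i for i, kept in enumerate(selector.support_) if kept])
```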
The effects of psyllium on lipoproteins in type II diabetic patients
We examined the effects of 2 months of psyllium treatment in optimizing metabolic control and lipoprotein profile, and its postprandial effects on lipids in type II diabetes. We recruited 40 type II diabetic patients who were on sulfonylureas and a controlled diet, sequentially assigning them to psyllium treatment (G1) or to a control group (G2) treated with dietary measures alone. After 2 months of treatment, body mass index, waist circumference, HbA1c (hemoglobin A1c) and fasting plasma glucose levels had significantly decreased in both groups. There were no postprandial differences in the lipoprotein profile between the two groups. Triglycerides were significantly lower in G1, but not in G2. Our study contributes toward elucidating the effects of psyllium on serum lipids, and suggests that psyllium treatment may help in reducing triglycerides (a known risk factor for cardiovascular disease) in type II diabetic patients.
Learning Segmentation Masks with the Independence Prior
An instance with a bad mask can make a composite image that uses it look fake. This encourages us to learn segmentation by generating realistic composite images. To achieve this, we propose a novel framework, based on Generative Adversarial Networks (GANs), that exploits a newly proposed prior called the independence prior. The generator produces an image with multiple category-specific instance providers, a layout module and a composition module. First, each provider independently outputs a category-specific instance image with a soft mask. Then the provided instances' poses are corrected by the layout module. Finally, the composition module combines these instances into a final image. Trained with an adversarial loss and a penalty on mask area, each provider learns a mask that is as small as possible yet large enough to cover a complete category-specific instance. Weakly supervised semantic segmentation methods widely use grouping cues that model the association between image parts; these cues are either artificially designed, learned with costly segmentation labels, or only modeled on local pairs. Unlike them, our method automatically models the dependence between arbitrary parts and learns instance segmentation. We apply our framework in two cases: (1) foreground segmentation on category-specific images with box-level annotation, and (2) unsupervised learning of instance appearances and masks with only one image of a homogeneous object cluster (HOC). We obtain appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that it is possible to learn instance masks without supervision from only one image.
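A minimal sketch of the composition step described above: instance images with soft masks are alpha-blended over a background to form the composite. The shapes and the painter's-order convention here are illustrative assumptions, not the paper's exact composition module.

```python
# Soft-mask composition of instance layers over a background (sketch).
import numpy as np

def compose(background, instances):
    """background: (H, W, 3) array; instances: list of (image (H, W, 3), mask (H, W))."""
    out = background.astype(float)
    for image, mask in instances:                 # later instances are drawn on top
        m = np.clip(mask, 0.0, 1.0)[..., None]    # soft mask as per-pixel alpha
        out = m * image + (1.0 - m) * out
    return out

if __name__ == "__main__":
    H = W = 64
    background = np.zeros((H, W, 3))
    instance = np.ones((H, W, 3))
    mask = np.zeros((H, W))
    mask[16:48, 16:48] = 1.0                      # a simple box-shaped mask
    composite = compose(background, [(instance, mask)])
    print("mean composite intensity:", composite.mean())
```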
Genetic variation of alcohol dehydrogenase type 1C (ADH1C), alcohol consumption, and metabolic cardiovascular risk factors: results from the IMMIDIET study.
INTRODUCTION Moderate alcohol consumption is protective against cardiovascular disease (CAD). ADHs are major enzymes of alcohol metabolism. A polymorphism in the alcohol dehydrogenase 1C gene (ADH1C) has reportedly been associated with the protective effect of alcohol consumption on CAD risk and risk factor levels. AIMS The aim of our study was to investigate whether the association of alcohol consumption with metabolic risk factors for CAD is related to ADH1C variants. METHODS IMMIDIET is a cross-sectional study of 974 healthy male-female pairs living together, randomly recruited in Belgium, Italy and England. The rs698 ADH1C polymorphism was genotyped. A 1-year recall food frequency questionnaire was used to estimate alcohol intake. RESULTS Alcohol intake did not vary in relation to ADH1C genotypes. BMI, waist circumference (WC), waist-to-hip ratio, blood pressure, HDL or total cholesterol, triglycerides and FVII:ag levels were positively associated with alcohol intake in men (multivariate ANOVA). The regression coefficient for alcohol and BMI or WC was progressively higher in heterozygotes and gamma 2 homozygotes as compared with gamma 1 homozygotes (p=0.006 and p=0.03 for interaction, respectively). No interaction was found for other risk factors. In women, alcohol intake was positively associated with HDL, LDL and FVII:ag levels, but no interaction was found between the ADH1C polymorphism and any risk factor. CONCLUSION A modulating effect of ADH1C genotype on the association between alcohol consumption and BMI and WC was found in men from different European countries. In men homozygous for the gamma 2 allele, alcohol intake was positively associated with both BMI and WC values.
The Finite Volume-Complete Flux Scheme for Advection-Diffusion-Reaction Equations
We present a new finite volume scheme for the advection-diffusion-reaction equation. The scheme is second order accurate in the grid size, both for dominant diffusion and dominant advection, and has only a three-point coupling in each spatial direction. Our scheme is based on a new integral representation for the flux of the one-dimensional advection-diffusion-reaction equation, which is derived from the solution of a local boundary value problem for the entire equation, including the source term. The flux therefore consists of two parts, corresponding to the homogeneous and particular solution of the boundary value problem. Applying suitable quadrature rules to the integral representation gives the complete flux scheme. Extensions of the complete flux scheme to two-dimensional and time-dependent problems are derived, containing the cross flux term or the time derivative in the inhomogeneous flux, respectively. The resulting finite volume-complete flux scheme is validated for several test problems.
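A minimal sketch of the homogeneous-flux part of such a finite volume scheme for a 1D, constant-coefficient model problem u·phi' - eps·phi'' = s with Dirichlet boundary conditions. The homogeneous flux reduces to the exponentially fitted (Scharfetter-Gummel-type) expression; the complete flux scheme described above additionally adds an inhomogeneous flux term derived from the source, which this sketch omits. Grid size, coefficients and the source are illustrative choices.

```python
# Vertex-centred finite volumes with homogeneous (exponentially fitted) fluxes.
import math
import numpy as np

def bernoulli(z):
    """B(z) = z / (exp(z) - 1), with the removable singularity B(0) = 1."""
    return 1.0 if abs(z) < 1e-12 else z / math.expm1(z)

def solve_adr_1d(u, eps, source, n_nodes, phi_left, phi_right):
    """Solve u*phi' - eps*phi'' = s on [0, 1] with Dirichlet boundary values."""
    h = 1.0 / (n_nodes - 1)
    x = np.linspace(0.0, 1.0, n_nodes)
    P = u * h / eps                       # grid Peclet number
    bp, bm = bernoulli(P), bernoulli(-P)  # flux F = (eps/h) * (bm*phi_L - bp*phi_R)
    n = n_nodes - 2                       # number of interior unknowns
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for k in range(n):
        A[k, k] = bp + bm
        if k > 0:
            A[k, k - 1] = -bm
        if k < n - 1:
            A[k, k + 1] = -bp
        rhs[k] = h * h * source(x[k + 1]) / eps
    rhs[0] += bm * phi_left               # fold Dirichlet values into the RHS
    rhs[-1] += bp * phi_right
    phi = np.empty(n_nodes)
    phi[0], phi[-1] = phi_left, phi_right
    phi[1:-1] = np.linalg.solve(A, rhs)
    return x, phi

if __name__ == "__main__":
    x, phi = solve_adr_1d(u=1.0, eps=0.01, source=lambda x: 1.0,
                          n_nodes=41, phi_left=0.0, phi_right=0.0)
    print(phi[::10])                      # boundary layer forms near x = 1
```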
Aggregating User Input in Ecology Citizen Science Projects
Camera traps (remote, automatic cameras) are revolutionizing large-scale studies in ecology. The Serengeti Lion Project has used camera traps to produce over 1.5 million pictures of animals in the Serengeti. To analyze these pictures, the Project created Snapshot Serengeti, a citizen science website where volunteers can help classify animals. To increase accuracy, each photo is shown to multiple users, and a critical step is aggregating individual classifications. In this paper, we present a new aggregation algorithm which achieves an accuracy of 98.6%, better than many human experts. Our algorithm also requires fewer users per photo than existing methods. The algorithm is intuitive and designed so that non-experts can understand the end results. Ecology seeks to understand the interrelationships of species with one another and with their environment. Monitoring many species of animals simultaneously has traditionally been very difficult. Camera traps (remote, automatic cameras) are revolutionizing ecological research by providing a non-invasive, cost-effective approach for large-scale monitoring. Ecologists are currently using these traps in the Serengeti National Park, one of the world's last large intact natural areas, to understand the dynamics of its dozens of large mammals (Swanson et al. 2014). As of November 2013, the ecologists have spent 3 years using more than 200 cameras spread over 1,125 square kilometers to take more than 1.5 million photos. In order to process so many images, the ecologists, along with Zooniverse (a citizen science platform), created Snapshot Serengeti, a website where over 35,000 volunteers helped classify the species in the photos (Zooniverse 2014a). Since volunteers can make mistakes, each photo is shown to multiple users. A critical step is to combine these classifications into one aggregate classification: e.g., if 4 out of 5 users classify a photo as containing a zebra, we might decide that the photo does indeed contain a zebra. In this paper, we develop an aggregation algorithm for Snapshot Serengeti. Classification aggregation is an active area in machine learning; however, we show that much of the existing literature is based on assumptions which do not apply to Snapshot Serengeti, and we must therefore develop a novel approach. In addition, current machine learning work on classification aggregation often draws on ideas such as expectation maximization and Bayesian reasoning. While powerful, these methods obscure the connection between input and results, making it hard for non-machine-learning experts to understand the end results. Thus, our algorithm must be both accurate and intuitive. Our paper proceeds as follows. We begin by discussing Snapshot Serengeti and previous machine learning literature on classifier aggregation. We then discuss why much of this existing work is not applicable to Snapshot Serengeti. We next introduce a new classifier aggregation algorithm for Snapshot Serengeti and compare it against the current algorithm. Finally, we conclude and discuss possible future work.
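A minimal sketch of classification aggregation for this kind of task: a plurality vote with a per-photo agreement score. This is a simple baseline in the spirit of the example above (4 of 5 users saying "zebra"), not the paper's algorithm; photo IDs and labels are illustrative.

```python
# Plurality-vote aggregation of volunteer classifications (sketch).
from collections import Counter

def aggregate(classifications):
    """classifications: dict photo_id -> list of species labels from users."""
    results = {}
    for photo_id, labels in classifications.items():
        counts = Counter(labels)
        species, votes = counts.most_common(1)[0]
        # Store the winning label and the fraction of users who agreed with it.
        results[photo_id] = (species, votes / len(labels))
    return results

if __name__ == "__main__":
    votes = {"IMG_001": ["zebra", "zebra", "wildebeest", "zebra", "zebra"],
             "IMG_002": ["lion", "lion", "lion"]}
    print(aggregate(votes))
```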
Estimating Continuous Distributions in Bayesian Classifiers
When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional distribution with a single Gaussian; and using nonparametric kernel density estimation. We observe large reductions in error on several natural and artificial data sets, which suggests that kernel estimation is a useful tool for learning Bayesian models.
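A minimal sketch of the nonparametric alternative discussed above: a naive Bayes classifier in which each class-conditional feature density is estimated with a Gaussian kernel density estimate rather than a single Gaussian. The synthetic data (bimodal in one class) and the default bandwidth are illustrative assumptions.

```python
# Naive Bayes with per-feature kernel density estimates (sketch).
import numpy as np
from scipy.stats import gaussian_kde

class KDENaiveBayes:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        # One univariate KDE per (class, feature) pair.
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        preds = []
        for x in X:
            scores = {}
            for c in self.classes_:
                logp = np.log(self.priors_[c])
                for j, kde in enumerate(self.kdes_[c]):
                    logp += np.log(kde(x[j])[0] + 1e-300)  # guard against log(0)
                scores[c] = logp
            preds.append(max(scores, key=scores.get))
        return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Class 0 is bimodal in feature 0, so a single Gaussian would fit it poorly.
    X0 = np.column_stack([np.concatenate([rng.normal(-2, 0.5, 100),
                                          rng.normal(2, 0.5, 100)]),
                          rng.normal(0, 1, 200)])
    X1 = np.column_stack([rng.normal(0, 0.5, 200), rng.normal(1, 1, 200)])
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)
    model = KDENaiveBayes().fit(X, y)
    print("training accuracy:", np.mean(model.predict(X) == y))
```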
OpenCV based disease identification of mango leaves
This paper aims at classifying and identifying the diseases of mango leaves for Indian agriculture. The k-means algorithm is chosen for disease segmentation, and disease classification and identification are carried out using the SVM classifier. Disease identification based on the analysis of patches or discoloration of the leaf works well for some plant diseases, but other diseases that deform the leaf shape cannot be identified with the same method. In such cases, leaf-shape-based disease identification has to be performed. Based on this analysis, two topics are addressed in this paper: (1) disease identification using the OpenCV libraries, and (2) leaf-shape-based disease identification. Keywords: k-means, Principal Component Analysis (PCA), feature extraction, shape detection, disease identification, elliptic Fourier analysis, Support Vector Machine (SVM), Artificial Neural Network (ANN)
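A minimal sketch of the first pipeline described above: k-means colour segmentation of a leaf image with OpenCV, followed by an SVM on simple cluster-based features. The number of clusters, the "greenness" feature and the toy images are illustrative assumptions; the authors' exact features may differ.

```python
# OpenCV k-means segmentation + SVM classification of leaf images (sketch).
import cv2
import numpy as np

def segment_kmeans(image_bgr, k=3):
    """Cluster pixel colours with OpenCV k-means; return per-pixel labels and centres."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.reshape(image_bgr.shape[:2]), centers

def patch_features(image_bgr, k=3):
    """Feature vector: fraction of pixels per colour cluster, ordered from the
    least to the most green cluster centre (a crude diseased-patch proxy)."""
    labels, centers = segment_kmeans(image_bgr, k)
    greenness = centers[:, 1] - 0.5 * (centers[:, 0] + centers[:, 2])  # BGR order
    return np.array([(labels == c).mean() for c in np.argsort(greenness)])

if __name__ == "__main__":
    # Toy stand-ins for real leaf photos; in practice, load a labelled image
    # set with cv2.imread and compute features for many samples per class.
    rng = np.random.default_rng(0)
    noise = rng.integers(-10, 10, (64, 64, 3))
    healthy = np.clip(np.full((64, 64, 3), (30, 160, 40)) + noise, 0, 255).astype(np.uint8)
    diseased = healthy.copy()
    diseased[20:40, 20:40] = (20, 60, 120)          # a brown-ish lesion patch
    X = np.array([patch_features(img) for img in (healthy, diseased)], dtype=np.float32)
    y = np.array([0, 1], dtype=np.int32)
    svm = cv2.ml.SVM_create()
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.train(X, cv2.ml.ROW_SAMPLE, y)
    print("predicted labels:", svm.predict(X)[1].ravel())
```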
Learning Image Representations Tied to Egomotion from Unlabeled Video
Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new "embodied" visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.
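A minimal sketch of the equivariance idea described above: for frame pairs related by a given egomotion, a learned linear map should carry the features of the first frame onto those of the second, and the squared error of that mapping is the equivariance objective. The random feature vectors and the closed-form ridge fit are placeholders; a real system would use CNN features, binned odometry and joint training.

```python
# Equivariance objective for one egomotion bin (sketch).
import numpy as np

def equivariance_loss(M, z_before, z_after):
    """Mean squared error between M @ z(x) and z(g x) over paired frames."""
    pred = z_before @ M.T
    return np.mean(np.sum((pred - z_after) ** 2, axis=1))

def fit_motion_map(z_before, z_after, reg=1e-3):
    """Ridge-regularized least-squares fit of the map M_g for one egomotion bin."""
    d = z_before.shape[1]
    A = z_before.T @ z_before + reg * np.eye(d)
    B = z_before.T @ z_after
    return np.linalg.solve(A, B).T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 32, 500
    # Synthetic "features": frames after the motion are a fixed linear map
    # of frames before it, plus a little noise.
    true_M = np.eye(d) + 0.1 * rng.standard_normal((d, d))
    z_before = rng.standard_normal((n, d))
    z_after = z_before @ true_M.T + 0.01 * rng.standard_normal((n, d))
    M = fit_motion_map(z_before, z_after)
    print("equivariance loss:", equivariance_loss(M, z_before, z_after))
```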