_id: string (40 characters)
text: string (0–10k characters)
6ae2dd7824f45d9c721c1e0fdc79250b85a598d2
The recent interest in big data has led many companies to develop big data analytics capability (BDAC) in order to enhance firm performance (FPER). However, BDAC pays off for some companies but not for others, and it appears that very few have achieved a big impact through big data. To address this challenge, this study proposes a BDAC model drawing on the resource-based theory (RBT) and the entanglement view of sociomaterialism. The findings show BDAC as a hierarchical model consisting of three primary dimensions (i.e., management, technology, and talent capability) and 11 subdimensions (i.e., planning, investment, coordination, control, connectivity, compatibility, modularity, technology management knowledge, technical knowledge, business knowledge, and relational knowledge). The findings confirm the value of the entanglement conceptualization of the higher-order BDAC model and its impact on FPER. The results also illuminate the significant moderating effect of analytics capability–business strategy alignment on the BDAC–FPER relationship.
b8f2fb3975e15d13d12715ce53b37821a6214b9e
Understanding scientific phenomena in terms of complex systems principles is both scientifically and pedagogically important. Situations from different disciplines of science are often governed by the same principle, so promoting knowledge transfer across disciplines makes valuable cross-fertilization and scientific unification possible. Although evidence for this kind of transfer has been historically controversial, experiments and observations of students suggest pedagogical methods to promote transfer of complex systems principles. One powerful strategy is for students to actively interpret the elements and interactions of perceptually grounded scenarios. Such interpretation can be facilitated through the presentation of cases alongside general principles, and by students exploring and constructing computational models of cases. The resulting knowledge can be concretely grounded yet highly perspective-dependent and generalizable. We discuss methods for coordinating computational and mental models of complex systems, the roles of idealization and concreteness in fostering understanding and generalization, and other complementary theoretical approaches to transfer. Promoting Transfer through Complex Systems Principles: When and how do students transfer what they have learned to new situations? This is one of the most important questions confronting education and cognitive science. Addressing it has crucial practical consequences, while also touching on deep basic research issues related to learning, analogical reasoning, and conceptual representation. Considerable research suggests that students do not spontaneously transfer what they have learned, at least not to superficially dissimilar domains (Detterman, 1993; Gick & Holyoak, 1980; 1983). This is disturbing because teachers choose content with the hope that students will apply what they have learned to relevant new situations. We believe that students can transfer scientific principles across superficially dissimilar domains, and we are not alone in this belief (Bransford & Schwartz, 1999; Jacobson, 2001; Judd, 1908; Simon, 1980). To present our case, we will describe kinds of transfer that are worth "fighting for." Identifying these turns out to be not only an educational question, but a scientific question as well. Accordingly, we will describe a novel approach toward science that seeks to unite phenomena from disparate domains according to general principles that govern complex systems. This complex systems approach to science offers unique educational opportunities for imparting scientific understanding that is concretely grounded yet transportable. The notion of a grounded generalization may sound like an oxymoron, but it is key to our account of transfer. The time-honored method for conveying generalizations has been to use symbolic formalisms such as predicate logic or algebra. These formalisms can enable a student to transcend the specifics of a situation, but they also run the risk of disconnecting the resulting abstraction from an intuitive understanding of the situation. Instead, we propose learning and teaching methods that promote situation construals that are concrete insofar as they are perceptually, temporally, and spatially grounded. However, they are still idealizations in that many elements of a situation are ignored or highly simplified.
In this paper, we will develop an approach to achieving grounded generalizations through the following steps (see also the notion of 'situated abstraction' in the section "Comparison to Other Approaches to Transfer"): 1) describe the nature of complex systems accounts of science, 2) provide examples of general complex systems principles that appear in several case studies, 3) describe pedagogical benefits of teaching science through complex systems, 4) discuss the importance of transfer and generalization in relation to complex systems, 5) present a method for achieving generalization through perceptually grounded yet interpreted simulations, 6) compare generalization from grounded simulations to formalism-centered strategies, and 7) draw parallels between computational and mental models, with the aim of identifying design principles that allow the two kinds of models to mesh. Connecting Science with Complex Systems Principles: One way to advance science is to progressively flesh out theories, adding experimental details and elaborating mechanistic accounts. By this account, the "devil is in the details" and the proper occupation of scientists is to pursue these details. This vision of science was most emphatically painted by John Horgan in his 1996 book "The End of Science." He argued that the age of fundamental scientific theorizing and discoveries has passed, and that all that is left to be done is refining the details of theories already laid down by the likes of Einstein, Darwin, and Newton. The rapid rate of scientific specialization seems to support Horgan's argument. We have gone from an era when the only major scientific journals were "Nature" and "Science" to an era with specialized journals such as "Journal of Contaminant Hydrology" and "Journal of Shoulder and Elbow Surgery," each an umbrella outlet for several distinct sub-specializations. Well over half of the respondents to a Science Advisory Board poll believe that biologists are losing track of the big picture by specializing in narrowly defined biological fields. Yet, many scientists feel compelled to specialize to still further degrees due to the sheer volume of scholarly output that competes for their eyes and minds. A small subset of scientists has chosen to reverse the trend toward increasing specialization. They have instead pursued principles that apply to many scientific domains, from physics to biology to social sciences. The principles are not applied to these domains in a metaphoric or vague fashion, as the terms "chaos" and "fractal" often were applied to art or interpersonal relations. Rather, the claim of complex systems researchers is that the same specific principle, sometimes expressible as equations or as sets of computational rules, can describe seemingly different phenomena. By complex systems, we refer to systems that contain numerous distinct elements that interact locally, resulting in a system globally changing over time. Over the past few decades, the field of complex systems theory has been rapidly developing (Bar-Yam, 1997; Holland, 1995; Kauffman, 1993; Wolfram, 2002). Complex systems theory and methods are now endemic to the sciences and can be found in just about every academic discipline and profession (Amaral & Ottino, 2004; Barabasi, 2002; Diermeir & Merlo, 2000; Epstein & Axtell 1996; Wolfram, 1986). It may at first seem that complex systems represent a small fraction of natural and social phenomena.
But this would be a misunderstanding of the field. What makes something a complex system or complex phenomenon is a matter of perspective. If you take a complex systems perspective, focusing on the interaction of system elements, then almost every system in nature and in society can be described as a complex system. Seen from this perspective, the system can then be analyzed using complex systems principles and methods. Complex systems theory has described a number of quite general principles that can describe natural and social systems across a wide variety of traditional disciplines. A few examples are in order. A commonly found system architecture in nature is: Pattern 1: An entity causes more entities like itself to be produced. At the same time, it causes another kind of entity to be produced that eliminates the
4215c25c3757f5ac542bf0449ffd1ad55a11f630
41ab8a3c6088eb0576ba65e114ebd37340c2bae1
842301714c2513659a6814a7e9b5ae761136f9d8
In this chapter, we survey methods that perform keyword search on graph data. Keyword search provides a simple but user-friendly interface to retrieve information from complicated data structures. Since many real-life datasets are represented by trees and graphs, keyword search has become an attractive mechanism for data of a variety of types. In this survey, we discuss methods of keyword search on schema graphs, which are abstract representations of XML data and relational data, and methods of keyword search on schema-free graphs. In our discussion, we focus on three major challenges of keyword search on graphs. First, what is the semantics of keyword search on graphs, or, what qualifies as an answer to a keyword search; second, what constitutes a good answer, or, how to rank the answers; third, how to perform keyword search efficiently. We also discuss some unresolved challenges and propose some new research directions.
21813c61601a8537136488ce55a2c15669365ef9
We give an improved algorithm for computing personalized PageRank vectors with tight error bounds which can be as small as O(n^{-p}) for any fixed positive integer p. The improved PageRank algorithm is crucial for computing a quantitative ranking of edges in a given graph. We will use the edge ranking to examine two interrelated problems – graph sparsification and graph partitioning. We can combine the graph sparsification and the partitioning algorithms using PageRank vectors to derive an improved partitioning algorithm.
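As a point of reference for the kind of computation this abstract describes, the following is a minimal sketch of the standard approximate personalized PageRank "push" procedure (in the spirit of Andersen, Chung, and Lang), not the improved algorithm of the paper; the adjacency representation and parameter names are illustrative assumptions.

```python
from collections import defaultdict, deque

def approximate_ppr(neighbors, seed, alpha=0.15, eps=1e-6):
    """Push-style approximation of a personalized PageRank vector.

    neighbors: dict mapping each node to a list of its neighbors.
    eps controls the per-node residual tolerance (and hence the error bound).
    """
    p = defaultdict(float)   # current PageRank estimate
    r = defaultdict(float)   # probability mass not yet distributed
    r[seed] = 1.0
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        deg = max(len(neighbors[u]), 1)
        if r[u] < eps * deg:
            continue                     # residual at u is below tolerance
        p[u] += alpha * r[u]             # keep an alpha fraction at u
        share = (1.0 - alpha) * r[u] / deg
        r[u] = 0.0
        for v in neighbors[u]:           # spread the rest to the neighbors
            r[v] += share
            if r[v] >= eps * max(len(neighbors[v]), 1):
                queue.append(v)
    return p
```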
71c3182fa122a1d6ccd4aa8eb9dccd95314b848b
The pervasiveness of Cyber-Physical Systems (CPS) in various aspects of modern society is growing rapidly. This makes CPS increasingly attractive targets for various kinds of attacks. We consider cyber-security an integral part of CPS security. Additionally, it is necessary to investigate the CPS-specific aspects that are out of scope of cyber-security. Most importantly, attacks capable of crossing the cyber-physical domain boundary should be analyzed. The vulnerability of CPS to such cross-domain attacks has been practically proven by numerous examples, e.g., by the currently most famous Stuxnet attack. In this paper, we propose a taxonomy for the description of attacks on CPS. The proposed taxonomy is capable of representing both conventional cyber-attacks and cross-domain attacks on CPS. Furthermore, based on the proposed taxonomy, we define an attack categorization. Several possible application areas of the proposed taxonomy are discussed extensively. Among others, it can be used to establish a knowledge base about attacks on CPS known in the literature. Furthermore, the proposed description structure will foster the quantitative and qualitative analysis of these attacks, both of which are necessary to improve CPS security.
237292e08fe45320e954377ebe2b7e08d08f1979
9f7d7dc88794d28865f28d7bba3858c81bdbc3db
Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.
de81d968a660df67a8984df6aa77cf88df77259f
In this paper, a power decoupling method without additional components is proposed for a dc to single-phase ac converter, which consists of a flying capacitor dc/dc converter (FCC) and a voltage source inverter (VSI). In particular, a small flying capacitor in the FCC is used both for a boost operation and for double-line-frequency power ripple reduction. Thus, the dc-link capacitor value can be minimized in order to avoid the use of a large electrolytic capacitor. In addition, the design of components such as the boost inductor and the flying capacitor is clarified for the case when the proposed control is applied. Experiments were carried out using a 1.5-kW prototype in order to verify the validity of the proposed control. The experimental results revealed that the proposed control reduced the dc-link voltage ripple by 74.5%, and the total harmonic distortion (THD) of the inverter output current was less than 5%. Moreover, a maximum system efficiency of 95.4% was achieved at a load of 1.1 kW. Finally, the high-power-density design is evaluated by Pareto front optimization. The power densities of three power decoupling topologies (a boost topology, a buck topology, and the proposed topology) are compared. As a result, the proposed topology achieves the highest power density (5.3 kW/dm³) among the topologies considered herein.
470a6b517b36ed5c8125f93bb8a82984e8835c55
In this paper, we propose an image super-resolution approach using a novel generic image prior, the gradient profile prior, which is a parametric prior describing the shape and sharpness of image gradients. Using the gradient profile prior learned from a large number of natural images, we can provide a constraint on image gradients when estimating a high-resolution image from a low-resolution image. With this simple but very effective prior, we are able to produce state-of-the-art results. The reconstructed high-resolution image is sharp while exhibiting few ringing or jaggy artifacts.
857176d022369e963d3ff1be2cb9e1ca2f674520
We study the problem of learning to reason in large-scale knowledge graphs (KGs). More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a policy-based agent with continuous states based on knowledge graph embeddings, which reasons in a KG vector space by sampling the most promising relation to extend its path. In contrast to prior work, our approach includes a reward function that takes accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method outperforms a path-ranking based algorithm and knowledge graph embedding methods on the Freebase and Never-Ending Language Learning datasets.
8597da33970c02df333d9d6520884c1ba3f5fb17
We present a new motion tracking technique to robustly reconstruct non-rigid geometries and motions from single-view depth input recorded by a consumer depth sensor. The idea is based on the observation that most non-rigid motions (especially human-related motions) are intrinsically involved in an articulate motion subspace. To take advantage of this, we propose a novel $L_0$-based motion regularizer with an iterative solver that implicitly constrains local deformations with articulate structures, leading to a reduced solution space and physically plausible deformations. The $L_0$ strategy is integrated into the available non-rigid motion tracking pipeline, and gradually extracts articulate joint information online during tracking, which corrects tracking errors in the results. The articulate joint information is used in the subsequent tracking procedure to further improve tracking accuracy and prevent tracking failures. Extensive experiments over complex human body motions with occlusions, facial motions, and hand motions demonstrate that our approach substantially improves the robustness and accuracy of motion tracking.
f5bbdfe37f5a0c9728c9099d85f0799e67e3d07d
7d553ced76a668120cf524a0b3e633edfea426df
0b277244b78a172394d3cbb68cc068fb1ebbd745
As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys, which subsumes Hierarchical Identity-Based Encryption (HIBE).
48b7ca5d261de75f66bc41c1cc658a00c4609199
While much work has recently focused on the analysis of social media in order to get a feel for what people think about current topics of interest, there are still many challenges to be faced. Text mining systems originally designed for more regular kinds of texts, such as news articles, may need to be adapted to deal with Facebook posts, tweets, etc. In this paper, we discuss a variety of issues related to opinion mining from social media, and the challenges they impose on a Natural Language Processing (NLP) system, along with two example applications we have developed in very different domains. In contrast with the majority of opinion mining work, which uses machine learning techniques, we have developed a modular rule-based approach which performs shallow linguistic analysis and builds on a number of linguistic subcomponents to generate the final opinion polarity and score.
2b7d5931a08145d9a501af9839fb9f8954c82c3c
In order to simplify the configuration of a photovoltaic (PV) grid-connection system, this paper proposes to adopt a buck-boost dc-dc converter and then develop a single-phase inverter by connecting it to a line-commutated H-bridge unfolding circuit. Depending on the dc input-voltage and ac output-voltage conditions, the proposed circuit can function as either a step-down or a step-up inverter, making it suitable for applications with a wide voltage-variation range. Since only one switch operates at high frequency, the switching loss can be significantly reduced to improve efficiency. Finally, a laboratory prototype with 110 Vrms / 60 Hz output voltage is simulated and implemented to verify the feasibility of the proposed inverter.
373cf414cc038516a2cff11d7caafa3ff1031c6d
Recent work has shown how denoising and contractive autoencoders implicitly capture the structure of the data-generating density, in the case where the corruption noise is Gaussian, the reconstruction error is the squared error, and the data is continuous-valued. This has led to various proposals for sampling from this implicitly learned density function, using Langevin and Metropolis-Hastings MCMC. However, it remained unclear how to connect the training procedure of regularized autoencoders to the implicit estimation of the underlying data-generating distribution when the data are discrete, or when using other forms of corruption process and reconstruction errors. Another issue is that the mathematical justification is only valid in the limit of small corruption noise. We propose here a different attack on the problem, which deals with all these issues: arbitrary (but noisy enough) corruption, arbitrary reconstruction loss (seen as a log-likelihood), handling both discrete and continuous-valued variables, and removing the bias due to non-infinitesimal corruption noise (or non-infinitesimal contractive penalty).
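To make the setting concrete, here is a minimal PyTorch sketch of the classical setup this paper generalizes: a denoising autoencoder trained with an explicit corruption process and a reconstruction loss read as a log-likelihood (binary cross-entropy, suiting discrete data). The architecture, corruption level, and input size are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

def corrupt(x, level=0.3):
    """Illustrative corruption C(x_tilde | x): salt-and-pepper noise."""
    flip = torch.rand_like(x) < level
    noise = (torch.rand_like(x) < 0.5).float()
    return torch.where(flip, noise, x)

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
decoder = nn.Linear(256, 784)            # logits for Bernoulli pixels
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
bce = nn.BCEWithLogitsLoss()             # -log P(x | x_tilde) for binary x

def train_step(x):                       # x: batch of vectors in {0,1}^784
    x_tilde = corrupt(x)                 # sample the corruption process
    logits = decoder(encoder(x_tilde))
    loss = bce(logits, x)                # reconstruction as log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

In the paper's framework, sampling then alternates corruption and sampling from the learned reconstruction distribution, forming a Markov chain whose stationary distribution estimates the data-generating distribution.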
ee8c779e7823814a5f1746d883ca77b26671b617
5732afb98a2e5b2970344b255b7af10f5c363873
554f6cc9cb9c64a25670eeb12827b803f3db2f71
8db9e3f2384b032278ed9e9113021538ef4b9b94
Sarcasm is a sophisticated form of speech act widely used in online communities. Automatic recognition of sarcasm is, however, a novel task. Sarcasm recognition could contribute to the performance of review summarization and ranking systems. This paper presents SASI, a novel Semi-supervised Algorithm for Sarcasm Identification that recognizes sarcastic sentences in product reviews. SASI has two stages: semi-supervised pattern acquisition, and sarcasm classification. We experimented on a dataset of about 66,000 Amazon reviews for various books and products. Using a gold standard in which each sentence was tagged by 3 annotators, we obtained precision of 77% and recall of 83.1% for identifying sarcastic sentences. We found some strong features that characterize sarcastic utterances. However, a combination of more subtle pattern-based features proved more promising in identifying the various facets of sarcasm. We also speculate on the motivation for using sarcasm in online communities and social networks.
7a953aaf29ef67ee094943d4be50d753b3744573
94f8a728a072b9b48b043a87b16619a052340421
The most recent Wireless Sensor Network technologies can provide viable solutions for automatic monitoring of the water grid and smart metering of water consumption. However, sensor nodes located along water pipes cannot access power grid facilities to obtain the energy their working conditions require. It is therefore of fundamental importance to design the network architecture in such a way as to require the minimum possible power. This paper investigates the suitability of the Wireless Metering Bus protocol for possible adoption in future smart water grids, by evaluating its transmission performance through simulations and experimental tests executed by means of prototype sensor nodes.
25931b74f11f0ffdd18c3f81d3899c0efa710223
This paper analyzes mutual-fund performance from an investor's perspective. We study the portfolio-choice problem for a mean-variance investor choosing among a risk-free asset, index funds, and actively managed mutual funds. To solve this problem, we employ a Bayesian method of performance evaluation; a key innovation in our approach is the development of a flexible set of prior beliefs about managerial skill. We then apply our methodology to a sample of 1,437 mutual funds. We find that some extremely skeptical prior beliefs nevertheless lead to economically significant allocations to active managers. Actively managed equity mutual funds have trillions of dollars in assets, collect tens of billions in management fees, and are the subject of enormous attention from investors, the press, and researchers. For years, many experts have been saying that investors would be better off in low-cost passively managed index funds. Notwithstanding the recent growth in index funds, active managers still control the vast majority of mutual-fund assets. Are any of these active managers worth their added expenses? Should investors avoid all actively managed mutual funds? Since Jensen (1968), most studies have found that the universe of mutual funds does not outperform its benchmarks after expenses. This evidence indicates that the average active mutual fund should be avoided. On the other hand, recent studies have found that future abnormal returns ("alphas") can be forecast using past returns or alphas, past fund
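For reference, the performance-evaluation tradition this passage cites (Jensen 1968) measures skill as the intercept of a market-model regression; in a standard form (not reproduced from this paper):

```latex
% Jensen-style performance regression: \alpha_i is fund i's abnormal return
r_{it} - r_{ft} = \alpha_i + \beta_i \, (r_{mt} - r_{ft}) + \varepsilon_{it}
```

The Bayesian approach described in the abstract places prior beliefs on the skill parameter $\alpha_i$ and updates them with the fund's track record before solving the portfolio-choice problem.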
f23ecb25c3250fc6e2d3401dc2f54ffd6135ae2e
Significant advances in the development of millimeter-wave and terahertz (30–10,000 GHz) technologies have been made to cope with the increasing interest in this still not fully explored electromagnetic spectrum. The nature of electromagnetic waves over this frequency range is well suited for the development of high-resolution imaging applications, molecular-sensitive spectroscopic devices, and ultrabroadband wireless communications. In this paper, millimeter-wave and terahertz antenna technologies are overviewed, including conventional and nonconventional planar/nonplanar antenna structures based on different platforms. As a promising technological platform, substrate-integrated circuits (SICs) attract more and more attention. Various substrate-integrated waveguide (SIW) schemes and other synthesized guide techniques have been widely employed in the design of antennas and arrays. Different types of substrate-integrated antennas and beamforming networks are discussed with respect to theoretical and experimental results in connection with electrical and mechanical performance.
e2d634ee9e9abaca804b69af69a40cf00897b2d0
36246ff904be7044ecd536d072b1388ea59aaf43
Childhood sexual abuse (CSA) is widespread amongst South African (SA) children, yet data on risk factors and psychiatric consequences are limited and mixed. Traumatised children and adolescents referred to our Youth Stress Clinic were interviewed to obtain demographic, sexual abuse, lifetime trauma and psychiatric histories. Data for 94 participants (59 female, 35 male; mean age 14.25 [range 8.25–19] years) exposed to at least one lifetime trauma were analysed. Sexual abuse was reported by 53% of participants (42.56% females, 10.63% males), with 64% of violations committed by perpetrators known to them. Multinomial logistic regression analysis revealed female gender (P = 0.002) and single-parent families (P = 0.01) to be significant predictors of CSA (62.5%). CSA did not predict exposure to other traumas. Sexually abused children had significantly higher physical and emotional abuse subscale scores and total CTQ scores than non-abused children. Depression (33%, χ² = 10.89, P = 0.001) and PTSD (63.8%, χ² = 4.79, P = 0.034) were the most prevalent psychological consequences of trauma, and both were significantly associated with CSA. High rates of CSA predicted high rates of PTSD in this traumatised sample. The associations we found appear consistent with international studies of CSA and should be used to focus future social awareness, prevention and treatment strategies in developing countries.
af82f1b9fdee7e6f92fccab2e6f02816965bf937
Alzheimer's disease (AD) is one of the most frequent types of dementia. Currently there is no cure for AD, and early diagnosis is crucial to the development of treatments that can delay disease progression. Brain imaging can be a biomarker for Alzheimer's disease. This has been shown in several works with MR images, but in the case of functional imaging such as PET, further investigation is still needed to determine its ability to diagnose AD, especially at the early stage of Mild Cognitive Impairment (MCI). In this paper we study the use of PET images from the ADNI database for the diagnosis of AD and MCI. We adopt a Boosting classification method, a technique based on a mixture of simple classifiers, which performs feature selection concurrently with the segmentation and is thus well suited to high-dimensional problems. The Boosting classifier achieved an accuracy of 90.97% in the detection of AD and 79.63% in the detection of MCI.
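A hedged sketch of the kind of classifier the abstract names: boosting over decision stumps, where each round effectively selects a single feature, so feature selection happens jointly with training. Data shapes and names below are invented stand-ins for real ADNI PET features, and the parameter choices are illustrative (recent scikit-learn assumed).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: one row of PET-derived voxel features per subject,
# label 1 for AD (or MCI) and 0 for healthy controls.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5000))
y = rng.integers(0, 2, size=120)

# Depth-1 trees (stumps) make each boosting round test one feature, which
# is what makes the method workable with far more features than subjects.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=200)
print(cross_val_score(clf, X, y, cv=5).mean())
```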
c23136a48527a20d6bfef019337ba4494077f7c5
753c0ecd5b4951b94fcba2fbc30ede5499ae00f5
This paper presents the design and simulation of a wide-band antenna for a base station in the TV White Space (TVWS) spectrum band operating between 470 MHz and 700 MHz. The concept of a printed Log Periodic Dipole Array (LPDA), which provides wide bandwidth, has been used to realize the wide-band antenna. The antenna elements are printed on a low-cost FR4 substrate with εr = 4.4 and tan δ = 0.02. These elements are printed alternately on both sides of the substrate. The total volume of the antenna is 303 × 162.3 × 1.6 mm³. To reduce the size, the scaling factor (τ) for this design is chosen as 0.89 and the relative spacing (σ) is chosen as 0.054. The antenna is fed at the base of the smallest element. The antenna shows an impedance bandwidth for VSWR ≤ 2 in the frequency range of 470 MHz–700 MHz. The gain of this antenna is between 5.3 dB and 6.5 dB over the entire band of operation. The radiation pattern shows end-fire behavior with uniform radiation patterns in both E- and H-planes, with a maximum front-to-back ratio (F/B) of 30 dB.
426ccb6a700ac1dbe21484735fc182127783670b
In recent years, opinion mining has attracted a great deal of research attention. However, limited work has been done on detecting opinion spam (or fake reviews). The problem is analogous to spam in Web search [1, 9–11]. However, review spam is harder to detect because it is very hard, if not impossible, to recognize fake reviews by manually reading them [2]. This paper deals with a restricted problem, i.e., identifying unusual review patterns which can represent suspicious behaviors of reviewers. We formulate the problem as finding unexpected rules. The technique is domain independent. Using the technique, we analyzed an Amazon.com review dataset and found many unexpected rules and rule groups which indicate spam activities.
0f0387d7207390dec305e09cdbbf4847e3c948e7
Recent advances in AutoML have led to automated tools that can compete with machine learning experts on supervised learning tasks. In this work, we present two versions of Auto-Net, which provide automatically-tuned deep neural networks without any human intervention. The first version, Auto-Net 1.0, builds upon ideas from the competition-winning system Auto-sklearn by using the Bayesian Optimization method SMAC and uses Theano as the underlying deep learning (DL) framework. The more recent Auto-Net 2.0 builds upon a recent combination of Bayesian Optimization and HyperBand, called BOHB, and uses PyTorch as DL framework. To the best of our knowledge, Auto-Net 1.0 was the first automatically-tuned neural network to win competition datasets against human experts (as part of the first AutoML challenge). Further empirical results show that ensembling Auto-Net 1.0 with Auto-sklearn can perform better than either approach alone, and that Auto-Net 2.0 can perform better yet.
1ff3ebd402e29c3af7226ece7f1d716daf1eb4a9
This paper presents a 64 GHz transmit/receive communication link between two 32-element SiGe-based phased arrays. The antenna element is a series-fed patch array, which provides directivity in the elevation plane. The transmit array results in an EIRP of 42 dBm, while the receive array provides an electronic gain of 33 dB and a system NF < 8 dB including the T/R switch and antenna losses. The arrays can be scanned to ±50° in azimuth using a 5-bit phase shifter on the SiGe chip, while keeping very low sidelobes and a near-ideal pattern. The communication link uses one array on the transmit side and another array on the receive side, together with external mixers and IF amplifiers. A Keysight M8195A arbitrary waveform generator is used to generate the modulated waveforms on the transmit side and a Keysight DSO804A oscilloscope is used to demodulate the received IF signal. The link performance was measured for different scan angles and modulation formats. Data rates of 1 Gbps using 16-QAM and 2 Gbps using QPSK are demonstrated at 300 m. The system also achieves a data rate of > 4 Gbps at 100 m and ~500 Mbps at 800 m.
2fb03a66f250a2c51eb2eb30344a13a5e4d8a265
This paper discusses a fabrication approach and experimental validation of a very large, planar active electronically scanned array (AESA). The planar AESA architecture employs a monolithic printed circuit board (PCB) with 768 active antenna elements at X-Band. Manufacturing physically large arrays with high element counts is discussed in relation to construction, assembly and yield considerations. Measured active array patterns of the ESA are also presented.
5ec3ee90bbc5b23e748d82cb1914d1c45d85bdd9
This paper demonstrates a 16-element phased-array transmitter in a standard 0.18-μm SiGe BiCMOS technology for Q-band satellite applications. The transmitter array is based on the all-RF architecture with 4-bit RF phase shifters and a corporate-feed network. A 1:2 active divider and two 1:8 passive tee-junction dividers constitute the corporate-feed network, and three-dimensional shielded transmission lines are used for the passive divider to minimize area. All signals are processed differentially inside the chip except for the input and output interfaces. The phased-array transmitter results in 12.5 dB of average power gain per channel at 42.5 GHz with a 3-dB gain bandwidth of 39.9–45.6 GHz. The RMS gain variation is < 1.3 dB and the RMS phase variation is < for all 4-bit phase states at 35–50 GHz. The measured input and output return losses are < -10 dB at 36.6–50 GHz and < -10 dB at 37.6–50 GHz, respectively. The measured peak-to-peak group delay variation is ±20 ps at 40–45 GHz. The output P1dB is -5 ± 1.5 dBm and the maximum saturated output power is -2.5 ± 1.5 dBm per channel at 42.5 GHz. The transmitter shows < 1.8 dB of RMS gain mismatch and < 7° of RMS phase mismatch between the 16 different channels over all phase states. A -30 dB worst-case port-to-port coupling is measured between adjacent channels at 30–50 GHz, and the measured RMS gain and phase disturbances due to the inter-channel coupling are < 0.15 dB and < 1°, respectively, at 35–50 GHz. All measurements are obtained without any on-chip calibration. The chip consumes 720 mA from a 5 V supply voltage and the chip size is 2.6 × 3.2 mm².
a1b40af260487c00a2031df1ffb850d3bc368cee
Developing next-generation cellular technology (5G) in the mm-wave bands will require low-cost phased-array transceivers [1]. Even with the benefit of beamforming, due to space constraints in the mobile form factor, increasing TX output power while maintaining acceptable PA PAE, LNA NF, and overall transceiver power consumption is important for maximizing the allowable path loss in the link budget and minimizing handset case temperature. Further, the phased-array transceiver will need to support dual-polarization communication. An IF interface to the analog baseband is desired for low power consumption in the handset or user equipment (UE) active antenna, and to enable the use of arrays of transceivers for customer premises equipment (CPE) or base-station (BS) antenna arrays with a low-loss IF power-combining/splitting network implemented on an antenna backplane carrying multiple tiled antenna modules.
c1efd29ddb6cb5cf82151ab25fbfc99e20354d9e
The Global Vectors for word representation (GloVe) model, introduced by Jeffrey Pennington et al. [3], is reported to be an efficient and effective method for learning vector representations of words. State-of-the-art performance is also provided by skip-gram with negative sampling (SGNS) [2], implemented in the word2vec tool. In this note, we explain the similarities between the training objectives of the two models, and show that the objective of SGNS is similar to the objective of a specialized form of GloVe, though their cost functions are defined differently.
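For reference, the two objectives under comparison can be written side by side (standard forms from the cited papers; $f$ is GloVe's weighting function, $X_{ij}$ the co-occurrence count, $b_i, \tilde b_j$ the biases, $\sigma$ the logistic function, and $k$ the number of negative samples):

```latex
% GloVe: weighted least squares over co-occurrence counts
J_{\text{GloVe}} = \sum_{i,j} f(X_{ij})
    \left( w_i^{\top} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2

% SGNS: per word-context pair (w, c), with negatives drawn from P_n
\ell_{\text{SGNS}}(w, c) = \log \sigma(w^{\top} c)
    + k \, \mathbb{E}_{c_N \sim P_n}\!\left[ \log \sigma(-w^{\top} c_N) \right]
```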
b8090b7b7efa0d971d3e1174facea60129be09c6
Data centers are increasing in number and size at astounding rates, while operational cost, thermal management, size, and performance continue to be the driving metrics for the power subsystems in the associated computing equipment. This paper presents a SiC-based phase-shifted full bridge (PSFB) converter designed for 10 kW output power and targeted at data center applications. The design approach focused on tuning the converter efficiency and minimizing the thermal management system, resulting in a high-density converter. A unique thermal management system has also been employed, contributing to both increased power density and better thermal behavior. In this paper, the implementation of this converter is described in detail, along with empirical results, both electrical and thermal.
4ddbeb946a4ff4853f2e98c547bb0b39cc6a4480
A brief review of metamaterials and their applications to antenna systems is given. Artificial magnetic conductors and electrically small radiating and scattering systems are emphasized. Single negative, double negative, and zero-index metamaterial systems are discussed as a means to manipulate their size, efficiency, bandwidth, and directivity characteristics. key words: metamaterials, electrically small antennas, complex media, artificial magnetic conductors
967972821567b8a34dc058c9fbf60c4054dc3b69
242377d7e76ad3371ed1814cf6f5249139e4b830
Open innovation has become one of the hottest topics in innovation management. This article intends to explore the limits of our understanding of the open innovation concept. In doing so, I address the questions of what (the content of open innovation), when (the context dependency) and how (the process). Open innovation is a rich concept that can be implemented in many different ways. The context dependency of open innovation is one of the least understood topics; more research is needed on the internal and external environment characteristics affecting performance. The open innovation process relates both to the transition towards open innovation and to the various open innovation practices. As with any new concept, initial studies focus on successful and early adopters, are based on case studies, and are descriptive. However, not all lessons learned from the early adopters may be applicable to following firms. Case study research increases our understanding of how things work and enables us to identify important phenomena. It should be followed by quantitative studies involving large samples to determine the relative importance of factors, to build path models to understand chains of effects, and to formally test for context dependencies. However, the evidence shows that open innovation has been a valuable concept for so many firms and in so many contexts that it is on its way to finding its final place in innovation management.
8fcffa267ed01e38e2280891c9f33bfa41771cad
Sorting is at the core of many database operations, such as index creation, sort-merge joins, and user-requested output sorting. As GPUs are emerging as a promising platform to accelerate various operations, sorting on GPUs becomes a viable endeavour. Over the past few years, several improvements have been proposed for sorting on GPUs, leading to the first radix sort implementations that achieve a sorting rate of over one billion 32-bit keys per second. Yet, state-of-the-art approaches are heavily memory bandwidth-bound, as they require substantially more memory transfers than their CPU-based counterparts. Our work proposes a novel approach that almost halves the amount of memory transfers and, therefore, considerably lifts the memory bandwidth limitation. Being able to sort two gigabytes of eight-byte records in as little as 50 milliseconds, our approach achieves a 2.32-fold improvement over the state-of-the-art GPU-based radix sort for uniform distributions, sustaining a minimum speed-up of no less than a factor of 1.66 for skewed distributions. To address inputs that either do not reside on the GPU or exceed the available device memory, we build on our efficient GPU sorting approach with a pipelined heterogeneous sorting algorithm that mitigates the overhead associated with PCIe data transfers. Comparing the end-to-end sorting performance to the state-of-the-art CPU-based radix sort running 16 threads, our heterogeneous approach achieves a 2.06-fold and a 1.53-fold improvement for sorting 64 GB key-value pairs with a skewed and a uniform distribution, respectively.
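To make the memory-traffic argument concrete, here is a minimal CPU sketch of least-significant-digit radix sort with 8-bit digits: each pass reads the whole array for the histogram and reads and writes it again for the scatter, which is exactly the traffic GPU implementations are bound by. This is illustrative Python, not the paper's GPU kernel.

```python
import numpy as np

def lsd_radix_sort(keys, bits=8):
    """LSD radix sort of uint32 keys; 32/bits passes over the full array."""
    radix = 1 << bits
    for shift in range(0, 32, bits):
        digit = (keys >> shift) & (radix - 1)
        counts = np.bincount(digit, minlength=radix)        # histogram pass
        starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
        out = np.empty_like(keys)
        pos = starts.copy()
        for i, k in enumerate(keys):                        # stable scatter pass
            d = digit[i]
            out[pos[d]] = k
            pos[d] += 1
        keys = out
    return keys

keys = np.random.randint(0, 2**32, size=1000, dtype=np.uint32)
assert np.array_equal(lsd_radix_sort(keys), np.sort(keys))
```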
b0c38ce8350927dd9cf3920f33f17b7bfc009c3b
81e0f458a894322baf170fa4d6fa8099bd055c39
284db8df66ef94594ee831ff2b36f546e023953a
We consider visual category recognition in the framework of measuring similarities, or equivalently perceptual distances, to prototype examples of categories. This approach is quite flexible, and permits recognition based on color, texture, and particularly shape, in a homogeneous framework. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use support vector machines, but they involve time-consuming optimization and computation of pairwise distances. We propose a hybrid of these two methods which deals naturally with the multiclass setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice. The basic idea is to find close neighbors to a query sample and train a local support vector machine that preserves the distance function on the collection of neighbors. Our method can be applied to large, multiclass data sets for which it outperforms nearest neighbor and support vector machines, and remains efficient when the problem becomes intractable for support vector machines. A wide variety of distance functions can be used, and our experiments show state-of-the-art performance on a number of benchmark data sets for shape and texture classification (MNIST, USPS, CUReT) and object recognition (Caltech-101). On Caltech-101 we achieved a correct classification rate of 59.05% (±0.56%) at 15 training images per class, and 66.23% (±0.48%) at 30 training images.
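A minimal sketch of the hybrid described, using scikit-learn: retrieve the k nearest training examples for each query, fall back to their common label if they agree, and otherwise train a small SVM on just those neighbors. The kernel choice and parameters are simplifications (the paper builds the local kernel from the chosen perceptual distance), and all names are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

class SVMKNN:
    """Per-query local SVM trained on the k nearest neighbors."""
    def __init__(self, k=50, metric='euclidean'):
        self.k, self.metric = k, metric

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.nn = NearestNeighbors(n_neighbors=self.k,
                                   metric=self.metric).fit(self.X)
        return self

    def predict(self, Q):
        preds = []
        for q in np.asarray(Q):
            idx = self.nn.kneighbors([q], return_distance=False)[0]
            labels = self.y[idx]
            if len(np.unique(labels)) == 1:
                preds.append(labels[0])   # unanimous neighbors: no SVM needed
            else:
                local = SVC(kernel='rbf', gamma='scale')
                local.fit(self.X[idx], labels)
                preds.append(local.predict([q])[0])
        return np.array(preds)
```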
e5536c3033153fd18de13ab87428c204bb15818f
The next generation of embedded computing systems will have to meet new challenges. The systems are expected to act mainly autonomously, to dynamically adapt to changing environments, and to interact with one another if necessary. Such systems are called organic. Organic Computing systems are similar to autonomic computing systems; in addition, Organic Computing systems often behave in a life-like manner and are inspired by nature and biological phenomena. The design and construction of such systems bring new challenges for the software engineering process. In this paper we present a framework for the design, construction and analysis of organic computing systems. It can facilitate design and construction, and it can also be used to (semi-)formally define organic properties like self-configuration or self-adaptation. We illustrate the framework on a real-world case study from production automation.
0c881ea63ff12d85bc3192ce61f37abf701fdf38
We propose a semi-supervised model which segments and annotates images using very few labeled images and a large unaligned text corpus to relate image regions to text labels. Given photos of a sports event, all that is necessary to provide a pixel-level labeling of objects and background is a set of newspaper articles about this sport and one to five labeled images. Our model is motivated by the observation that words in text corpora share certain context and feature similarities with visual objects. We describe images using visual words, a new region-based representation. The proposed model is based on kernelized canonical correlation analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. Kernels are derived from context and adjective features inside the respective visual and textual domains. We apply our method to a challenging dataset and rely on articles of the New York Times for textual features. Our model outperforms the state-of-the-art in annotation. In segmentation it compares favorably with other methods that use significantly more labeled training data.
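The core mapping step can be sketched with scikit-learn's linear CCA (the paper uses a kernelized variant built from context and adjective features); the feature matrices below are invented stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Paired training data: visual-word features for image regions and textual
# features for the words that describe them (illustrative random stand-ins).
V = np.random.rand(200, 50)   # visual view
T = np.random.rand(200, 40)   # textual view

cca = CCA(n_components=10).fit(V, T)
V_lat, T_lat = cca.transform(V, T)   # both views in the shared latent space

def annotate(region_feats, label_latents, vocabulary):
    """Project one region and return the label closest in the latent space."""
    v = cca.transform(region_feats.reshape(1, -1))
    dists = np.linalg.norm(label_latents - v, axis=1)
    return vocabulary[int(np.argmin(dists))]
```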
a6af22306492b0830dd1002ad442cc0b53f14b25
This paper presents a 5.8-GHz radar sensor chip for non-contact vital sign detection. The sensor chip is designed and fabricated in TSMC 0.18 μm CMOS 1P6M process. Except for the low-noise amplifier, all the active and passive components of the radar system are fully integrated in a single CMOS chip. With packaging on the printed-circuit board and connecting transmitting and receiving antennas, the radar sensor chip has been successfully demonstrated to detect the respiration and heart beat rates of a human adult.
1a6c9d9165fe77b3b8b974f57ca1e11d0326903a
In this paper a triangular fractal patch antenna with a slit is designed for IRNSS and GAGAN applications using ADS software. India intends to develop a satellite-based navigation system known as the Indian Regional Navigational Satellite System (IRNSS) for positioning applications. The design of an IRNSS antenna for the user sector is indispensable. GPS Aided and Geo Augmented Navigation (GAGAN), a satellite-based augmentation system for India built over the GPS system, is anticipated to provide flawless navigation support over the Asia-Pacific region. The proposed antenna has been designed on a substrate with dielectric constant εr = 4.8 and thickness h = 3.05 mm. The feed location of the antenna has been selected to produce circular polarization. The self-similar property of the antenna exhibits multi-band resonant frequencies. These specifications should be satisfied at the frequencies L5 (1175 MHz), L1 (1575.42 MHz) and S (2492.08 MHz).
0d3de784c0a418d2c6eefdfcbc8f5a93da97af7e
The increasing interest in bi-directional mobile high-data-rate satellite communications in Ka-band necessitates the development of dedicated antenna tracking systems and feeds. In this paper we describe a compact feed structure based on printed circuit boards for a mobile satellite communications ground terminal with a Cassegrain reflector antenna. The novel structure provides a dual circular polarisation communication mode as well as the TM01 mode for multimode monopulse tracking. This coupler, based on carefully matched transitions from grounded coplanar lines to circular waveguides, is operational at 20 GHz and 30 GHz, covering the downlink and uplink frequency ranges in Ka-band. This work contributes to the development of a satellite terminal for land-mobile communications in disaster scenarios.
9ab4883c0ee0db114a81eb3f47bde38a1270c590
Change is accelerating, and the complexity of the systems in which we live is growing. Increasingly change is the result of humanity itself. As complexity grows so do the unanticipated side effects of human action, further increasing complexity in a vicious cycle. Many scholars call for the development of 'systems thinking' to improve our ability to manage wisely. But how do people learn in and about complex dynamic systems? Learning is a feedback process in which our decisions alter the real world, we receive information feedback about the world, and using the new information we revise the decisions we make and the mental models that motivate those decisions. Unfortunately, in the world of social action various impediments slow or prevent these learning feedbacks from functioning, allowing erroneous and harmful behaviors and beliefs to persist. The barriers to learning include the dynamic complexity of the systems themselves, inadequate and ambiguous outcome feedback, systematic 'misperceptions of feedback' where our cognitive maps omit important feedback processes, delays, stocks and flows, and nonlinearities that characterize complex systems, inability to simulate mentally the dynamics of our cognitive maps, poor interpersonal and organizational inquiry skills, and poor scientific reasoning skills. To be successful methods to enhance learning about complex systems must address all these impediments. Effective methods for learning in and about complex dynamic systems must include (1) tools to elicit participant knowledge, articulate and reframe perceptions, and create maps of the feedback structure of a problem from those perceptions; (2) simulation tools and management flight simulators to assess the dynamics of those maps and test new policies; and (3) methods to improve scientific reasoning skills, strengthen group process and overcome defensive routines for individuals and teams.
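The point about stocks, flows, and delays is easy to demonstrate in a few lines. The sketch below simulates a simple negative feedback loop in which decisions act on a delayed perception of a stock; the delay turns smooth correction into the oscillation people typically fail to simulate mentally. All parameters are invented for illustration.

```python
# Inventory-control loop with a first-order perception delay.
dt, target, delay = 0.25, 100.0, 4.0
inventory, perceived = 50.0, 50.0
for step in range(201):
    perceived += dt * (inventory - perceived) / delay  # delayed information feedback
    orders = max(0.0, (target - perceived) / 2.0)      # decision rule
    shipments = 10.0                                   # constant outflow
    inventory += dt * (orders - shipments)             # the stock integrates its flows
    if step % 40 == 0:
        print(f"t={step * dt:5.1f}  inventory={inventory:7.2f}")
```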
c0a2293809917839047c7ec98d942777ca426e57
Enterprise systems (ES) that capture the most advanced developments of information technology are becoming common fixtures in most organisations. However, how ES affect organisational agility (OA) has been less researched, and the existing research remains equivocal. From the perspective that ES can positively contribute to OA, this research, via theory-based model development and rigorous empirical investigation of the proposed model, has bridged significant research gaps and provided empirical evidence for, and insights into, the effect of ES on OA. The empirical results, based on data collected from 179 large organisations in Australia and New Zealand that have implemented and used ES for at least one year, show that organisations can achieve agility from their ES in two ways: by developing ES technical competences to build ES-enabled capabilities that digitise their key sensing and responding processes; and when ES-enabled sensing and responding capabilities are aligned in a relatively turbulent environment.
2e64b370a86bcdaac392ca078f41f5bbe8d0307f
This paper presents a comparative picture of the irrigation costs of different Bangladeshi crops for diesel, grid electricity and solar powered irrigation systems. The study was conducted on 27 types of crops. Data were collected on the volume of water required for those crops. Three different types of pump (solar, diesel, electric) were then chosen with the same power rating, i.e., 5 hp. The specific area covered for each crop was calculated from the obtained water volumes. Finally, the 10-year cost in taka was calculated. The study found that, for all the crops, grid-powered irrigation costs less than solar-powered irrigation, because the latter is associated with a huge initial investment [12]. The study also discovered that irrigation with solar power is not beneficial at all for most of the crops, such as onion, carrot, chilli, tomato, maize, garlic, gourd, ginger, turmeric, pumpkin, cabbage, cauliflower, lady finger, banana, papaya and groundnut; rather, it costs the most among all three types of irrigation system [5]. It is also evident that irrigation with solar power is very lucrative compared to diesel-powered irrigation for certain crops like potato, cotton, soybean, sunflower, strawberry, lentil and mustard.
4420fca3cb722ad0478030c8209b550cd7db8095
Driven by the growing aging population, prevalence of chronic diseases, and continuously rising healthcare costs, the healthcare system is undergoing a fundamental transformation, from the conventional hospital-centered system to an individual-centered system. Current and emerging developments in wearable medical systems will have a radical impact on this paradigm shift. Advances in wearable medical systems will enable the accessibility and affordability of healthcare, so that physiological conditions can be monitored not only at sporadic snapshots but also continuously for extended periods of time, making early disease detection and timely response to health threats possible. This paper reviews recent developments in the area of wearable medical systems for p-Health. Enabling technologies for continuous and noninvasive measurements of vital signs and biochemical variables, advances in intelligent biomedical clothing and body area networks, approaches for motion artifact reduction, strategies for wearable energy harvesting, and the establishment of standard protocols for the evaluation of wearable medical devices are presented in this paper with examples of clinical applications of these technologies.
5c3fb7e2ffc8b312b20bae99c822d427d0dc003d
This work introduces the concept of a Wireless Underground Sensor Network (WUSN). WUSNs can be used to monitor a variety of conditions, such as soil properties for agricultural applications and toxic substances for environmental monitoring. Unlike existing methods of monitoring underground conditions, which rely on buried sensors connected via wire to the surface, WUSN devices are deployed completely belowground and do not require any wired connections. Each device contains all necessary sensors, memory, a processor, a radio, an antenna, and a power source. This makes their deployment much simpler than existing underground sensing solutions. Wireless communication within a dense substance such as soil or rock is, however, significantly more challenging than through air. This factor, combined with the necessity to conserve energy due to the difficulty of unearthing and recharging WUSN devices, requires that communication protocols be redesigned to be as efficient as possible. This work provides an extensive overview of applications and design challenges for WUSNs, challenges for the underground communication channel including methods for predicting path losses in an underground link, and challenges at each layer of the communication protocol stack.
45adc16c4111bcc5201f721d2d573dc8206f8a79
Automation does not mean humans are replaced; quite the opposite. Increasingly, humans are asked to interact with automation in complex and typically large-scale systems, including aircraft and air traffic control, nuclear power, manufacturing plants, military systems, homes, and hospitals. This is not an easy or error-free task for either the system designer or the human operator/automation supervisor, especially as computer technology becomes ever more sophisticated. This review outlines recent research and challenges in the area, including taxonomies and qualitative models of human-automation interaction; descriptions of automation-related accidents and studies of adaptive automation; and social, political, and ethical issues.
848644136ee9190ce8098615e5dd60c70a660628
In this paper, we describe the architecture and performance of a fine-pitch multiple-chip heterogeneous integration solution using a ceramic interconnect bridge on an organic substrate package. We show that increased IO density and improved high-speed electrical performance in terms of signal integrity are achievable through this novel integration scheme, in which dense copper routings on small ceramic elements serve as interconnect bridges. The cost and signal attenuation of the ceramic bridge are far better than those of a silicon bridge or wafer interposer on substrate.
0690ba31424310a90028533218d0afd25a829c8d
Deep Convolutional Neural Networks (DCNNs) have recently shown state-of-the-art performance in high-level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called "semantic image segmentation"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high-level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our "DeepLab" system is able to localize segment boundaries at a level of accuracy beyond previous methods. Quantitatively, our method sets the new state of the art on the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6% IOU accuracy on the test set. We show how these results can be obtained efficiently: careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.
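The 'hole' (atrous) trick credited for the dense 8-frames-per-second computation corresponds to what frameworks now call dilated convolution. A minimal sketch (in PyTorch, purely as an illustration rather than the authors' implementation) shows how dilation enlarges the receptive field without shrinking the feature map; the fully connected CRF refinement is a separate post-processing step not shown here.

```python
import torch
import torch.nn as nn

# A 3x3 convolution with dilation=2 covers a 5x5 receptive field while
# producing an output of the same spatial size (padding = dilation).
x = torch.randn(1, 3, 128, 128)          # dummy image batch
atrous = nn.Conv2d(3, 64, kernel_size=3, dilation=2, padding=2)
y = atrous(x)
print(y.shape)  # torch.Size([1, 64, 128, 128]) -- resolution preserved
```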
bfb5f142d0eb129fa66616685a84ce055ad8f071
We present a system to assist users in dietary logging habits, which performs food recognition from pictures snapped on their phone in two different scenarios. In the first scenario, called "Food in context", we exploit the GPS information of a user to determine which restaurant they are having a meal at, therefore restricting the categories to recognize to the set of items in the menu. Such context allows us to also report precise calories information to the user about their meal, since restaurant chains tend to standardize portions and provide the dietary information of each meal. In the second scenario, called "Foods in the wild" we try to recognize a cooked meal from a picture which could be snapped anywhere. We perform extensive experiments on food recognition on both scenarios, demonstrating the feasibility of our approach at scale, on a newly introduced dataset with 105K images for 500 food categories.
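The "Food in context" scenario reduces to restricting the label set by location. A minimal sketch under assumed data structures (a hypothetical `restaurants` list of (lat, lon, menu) tuples; the real system presumably uses a proper places database):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def candidate_menu(lat, lon, restaurants, radius_km=0.1):
    """Union of menu items from restaurants within radius_km of the user."""
    items = set()
    for rlat, rlon, menu in restaurants:
        if haversine_km(lat, lon, rlat, rlon) <= radius_km:
            items |= set(menu)
    return items  # empty set -> fall back to the full "in the wild" label set

restaurants = [(37.7749, -122.4194, {"burrito", "taco"}),
               (37.7755, -122.4190, {"ramen", "gyoza"})]
print(candidate_menu(37.7750, -122.4193, restaurants))
```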
1f9ede76dbbd6caf7e3877918fae0d421c6f180c
3a932716ed323b247a828bd0fd8ae9b2ee0197b2
Discovery of association rules is an important database mining problem. Current algorithms for finding association rules require several passes over the analyzed database, and obviously the role of I/O overhead is very significant for very large databases. We present new algorithms that reduce the database activity considerably. The idea is to pick a random sample, to find using this sample all association rules that probably hold in the whole database, and then to verify the results with the rest of the database. The algorithms thus produce exact association rules, not approximations based on a sample. The approach is, however, probabilistic, and in those rare cases where our sampling method does not produce all association rules, the missing rules can be found in a second pass. Our experiments show that the proposed algorithms can find association rules very efficiently in only one database pass.
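A hedged sketch of the sample-then-verify idea (simplified to frequent itemsets of size at most 2, and omitting the paper's machinery for detecting missed itemsets that would trigger the second pass):

```python
import random
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_sup, max_len=2):
    """Itemsets (size <= max_len) whose relative support meets min_sup."""
    counts = Counter()
    for t in transactions:
        for k in range(1, max_len + 1):
            counts.update(combinations(sorted(t), k))
    n = len(transactions)
    return {i for i, c in counts.items() if c / n >= min_sup}

def sample_then_verify(db, min_sup, sample_frac=0.1, slack=0.8):
    sample = random.sample(db, max(1, int(len(db) * sample_frac)))
    # A lowered threshold on the sample reduces the chance of missing itemsets.
    candidates = frequent_itemsets(sample, min_sup * slack)
    # One full pass verifies exact supports -- the result is exact, not sampled.
    n = len(db)
    exact = Counter(c for t in db for c in candidates if set(c) <= t)
    return {c for c in candidates if exact[c] / n >= min_sup}

db = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "b"}] * 50
print(sorted(sample_then_verify(db, min_sup=0.5)))
```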
780e2631adae2fb3fa43965bdeddc0f3b885e20d
We introduce the problem of mining association rules in large relational tables containing both quantitative and categorical attributes. An example of such an association might be "10% of married people between age 50 and 60 have at least 2 cars". We deal with quantitative attributes by fine-partitioning the values of the attribute and then combining adjacent partitions as necessary. We introduce measures of partial completeness which quantify the information lost due to partitioning. A direct application of this technique can generate too many similar rules. We tackle this problem by using a "greater-than-expected-value" interest measure to identify the interesting rules in the output. We give an algorithm for mining such quantitative association rules. Finally, we describe the results of using this approach on a real-life dataset.
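A hedged sketch of the fine-partition-then-merge step for one quantitative attribute: equi-depth base intervals are merged greedily until each merged interval meets a support threshold. The paper's partial-completeness measures and interest-based filtering are not reproduced here.

```python
import numpy as np

def partition_and_merge(values, n_base=10, min_sup=0.15):
    """Fine-partition a numeric attribute, then combine adjacent partitions
    until each merged interval holds at least min_sup of the records."""
    vals = np.asarray(values, dtype=float)
    edges = np.quantile(vals, np.linspace(0, 1, n_base + 1))  # equi-depth bins
    counts, _ = np.histogram(vals, bins=edges)
    n, intervals, start, acc = len(vals), [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc / n >= min_sup:                 # interval now frequent enough
            intervals.append((edges[start], edges[i + 1]))
            start, acc = i + 1, 0
    if acc:                                    # fold leftovers into a last bin
        intervals.append((edges[start], edges[-1]))
    return intervals

ages = np.random.default_rng(0).integers(20, 80, size=1000)
print(partition_and_merge(ages))
```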
586ed71b41362ef55e92475ff063a753e8536afe
Association mining may often derive an undesirably large set of frequent itemsets and association rules. Recent studies have proposed an interesting alternative: mining frequent closed itemsets and their corresponding rules, which has the same power as association mining but substantially reduces the number of rules to be presented. In this paper, we propose an efficient algorithm, CLOSET, for mining closed itemsets, with the development of three techniques: (1) applying a compressed frequent pattern tree (FP-tree) structure for mining closed itemsets without candidate generation, (2) developing a single prefix path compression technique to identify frequent closed itemsets quickly, and (3) exploring a partition-based projection mechanism for scalable mining in large databases. Our performance study shows that CLOSET is efficient and scalable over large databases, and is faster than the previously proposed methods.
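For intuition about what "closed" means, independent of CLOSET's FP-tree machinery: an itemset is closed iff no proper superset has the same support. A brute-force sketch of that definition (which CLOSET is designed to avoid):

```python
def support(itemset, db):
    s = set(itemset)
    return sum(1 for t in db if s <= t)

def closed_itemsets(frequent, db):
    """Keep only itemsets with no proper superset of equal support.
    Brute force for illustration only."""
    return {i for i in frequent
            if not any(set(i) < set(j) and support(j, db) == support(i, db)
                       for j in frequent)}

db = [{"a", "b"}, {"a", "b", "c"}, {"a", "b"}]
frequent = [("a",), ("b",), ("a", "b"), ("a", "b", "c")]
print(closed_itemsets(frequent, db))  # {('a', 'b'), ('a', 'b', 'c')}
```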
0728ea2e21c8a24b51d14d5878c9485c5b11b52f
fb4e92e1266898152058f9b1f24acd8226ed9249
In this paper we describe a method to discover frequent behavioral patterns in event logs. We express these patterns as local process models. Local process model mining can be positioned in-between process discovery and episode / sequential pattern mining. The technique presented in this paper is able to learn behavioral patterns involving sequential composition, concurrency, choice and loop, as in process mining. However, we do not look at start-to-end models, which distinguishes our approach from process discovery and creates a link to episode / sequential pattern mining. We propose an incremental procedure for building local process models capturing frequent patterns based on so-called process trees. We propose five quality dimensions and corresponding metrics for local process models, given an event log. We show monotonicity properties for some quality dimensions, enabling a speedup of local process model discovery through pruning. We demonstrate through a real-life case study that mining local patterns allows us to gain insights into processes where regular start-to-end process discovery techniques are only able to learn unstructured, flower-like models.
606b2c57cfed7328dedf88556ac657e9e1608311
The traditional internet has many central points of failure and trust, like (a) the Domain Name System (DNS) servers, (b) public-key infrastructure, and (c) end-user data stored on centralized data stores. We present the design and implementation of a new internet, called Blockstack, where users don’t need to trust remote servers. We remove any trust points from the middle of the network and use blockchains to secure critical data bindings. Blockstack implements services for identity, discovery, and storage and can survive failures of underlying blockchains. The design of Blockstack is informed by three years of experience from a large blockchain-based production system. Blockstack gives comparable performance to traditional internet services and enables a much-needed security and reliability upgrade to the traditional internet.
317072c8b7213d884f5b2d4d3133368d17c412ab
A novel design technique for a broadband substrate integrated waveguide cavity-backed slot antenna is demonstrated in this letter. Instead of using a conventional narrow rectangular slot, a bow-tie-shaped slot is implemented to get broader bandwidth performance. The modification of the slot shape helps to induce a strong loading effect in the cavity and generates two closely spaced hybrid modes that help to get a broadband response. The slot antenna incorporates thin cavity backing (height < 0.03λ0) in a single substrate and thus retains a low-profile planar configuration while showing unidirectional radiation characteristics with moderate gain. A fabricated prototype is also presented that shows a bandwidth of 1.03 GHz (9.4%), a gain of 3.7 dBi over the bandwidth, 15 dB front-to-back ratio, and cross-polarization level below -18 dB.
4255bbd10e2a1692b723f8b40f28db7e27b06de9
Though there is a growing literature on fairness for supervised learning, incorporating fairness into unsupervised learning has been less well-studied. This paper studies fairness in the context of principal component analysis (PCA). We first define fairness for dimensionality reduction, and our definition can be interpreted as saying a reduction is fair if information about a protected class (e.g., race or gender) cannot be inferred from the dimensionality-reduced data points. Next, we develop convex optimization formulations that can improve the fairness (with respect to our definition) of PCA and kernel PCA. These formulations are semidefinite programs, and we demonstrate their effectiveness using several datasets. We conclude by showing how our approach can be used to perform a fair (with respect to age) clustering of health data that may be used to set health insurance rates.
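A hedged cvxpy sketch in the spirit of the paper's convex formulations (not its exact program): maximize the variance captured by a rank-r projection, relaxed to a PSD matrix P with 0 ⪯ P ⪯ I and tr(P) = r, while capping the difference in variance explained between two protected groups. The data, the fairness cap, and the group statistics are all illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d, r, eps = 5, 2, 0.05
Xa = rng.normal(size=(200, d))                               # group A (synthetic)
Xb = rng.normal(size=(200, d)) @ np.diag([2, 1, 1, 1, 1])    # group B (synthetic)
Sigma_a, Sigma_b = np.cov(Xa.T), np.cov(Xb.T)
Sigma = 0.5 * (Sigma_a + Sigma_b)

P = cp.Variable((d, d), symmetric=True)        # relaxed projection matrix
constraints = [
    P >> 0, np.eye(d) - P >> 0,                # 0 <= P <= I spectrally
    cp.trace(P) == r,                          # "rank budget" of r dimensions
    cp.abs(cp.trace((Sigma_a - Sigma_b) @ P)) <= eps,   # fairness-style cap
]
prob = cp.Problem(cp.Maximize(cp.trace(Sigma @ P)), constraints)
prob.solve()
print("captured variance:", prob.value)
# A rank-r projection can then be recovered from the top-r eigenvectors of P.
```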
4e85e17a9c74cd0dbf66c6d673eaa9161e280b18
Random projection is another class of methods used for low-rank matrix approximation. A random projection algorithm projects data points from a high-dimensional space $\mathbb{R}^n$ onto a lower-dimensional subspace $\mathbb{R}^r$ ($r \ll n$) using a random matrix $S \in \mathbb{R}^{r \times n}$. The key idea of random mapping comes from the Johnson-Lindenstrauss lemma [7] (explained later in detail), which says that "if points in a vector space are projected onto a randomly selected subspace of suitably high dimension, then the distances between points are approximately preserved". Random projection methods are computationally efficient and sufficiently accurate in practice for dimensionality reduction of high-dimensional datasets. Moreover, since the complexity of many geometric algorithms depends significantly on the dimension, applying random projection as pre-processing is a common task in many data mining applications. As opposed to column sampling methods, which need to access the data to approximate the low-rank subspace, random projections are data-oblivious, as their computation involves only a random matrix $S$. We start with the basic concepts and definitions required to understand random projection techniques.

Definition 1 (Column Space). Consider a matrix $A \in \mathbb{R}^{n \times d}$ ($n > d$). As $x$ ranges over all vectors in $\mathbb{R}^d$, $Ax$ ranges over all linear combinations of the columns of $A$ and therefore defines a $d$-dimensional subspace of $\mathbb{R}^n$, which we refer to as the column space of $A$ and denote by $C(A)$.

Definition 2 ($\ell_2$-Subspace Embedding). A matrix $S \in \mathbb{R}^{r \times n}$ provides a subspace embedding for $C(A)$ if $\|SAx\|_2^2 = (1 \pm \varepsilon)\|Ax\|_2^2$ for all $x \in \mathbb{R}^d$. Such a matrix $S$ provides a low-distortion embedding and is called a $(1 \pm \varepsilon)$ $\ell_2$-subspace embedding.

Using an $\ell_2$-subspace embedding, one can work with $SA \in \mathbb{R}^{r \times d}$ instead of $A \in \mathbb{R}^{n \times d}$. Typically $r \ll n$, so we are working with a smaller matrix that reduces the time/space complexity of many algorithms. Note, however, that $r$ needs to be larger than $d$ if we are talking about the whole subspace $\mathbb{R}^d$ [11]. Note that a subspace embedding does not depend on a particular basis for $C(A)$: if $U$ is an orthonormal basis for $C(A)$, then $Ux$ gives the same subspace as $Ax$, so if $S$ is an embedding for $A$, it will be an embedding for $U$ too. Let's consider …
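A quick numerical check of the distortion in Definition 2 using a scaled Gaussian sketch (a sketch of the idea only; choosing $r$ to guarantee a given $\varepsilon$ is exactly the Johnson-Lindenstrauss machinery referenced above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 10_000, 20, 500
A = rng.normal(size=(n, d))
S = rng.normal(size=(r, n)) / np.sqrt(r)   # scaled Gaussian sketch matrix

for _ in range(3):
    x = rng.normal(size=d)
    ratio = np.linalg.norm(S @ (A @ x)) ** 2 / np.linalg.norm(A @ x) ** 2
    print(f"||SAx||^2 / ||Ax||^2 = {ratio:.3f}")   # close to 1 for each x
```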
01dfe1868e8abc090b1485482929f65743e23743
Exploration in an unknown environment is a core functionality for mobile robots. Learning-based exploration methods, including convolutional neural networks, provide excellent strategies without human-designed logic for feature extraction [1]. However, conventional supervised learning algorithms inevitably require substantial effort for labeling datasets, and scenes not included in the training set mostly go unrecognized. We propose a deep reinforcement learning method for the exploration of mobile robots in an indoor environment using only the depth information from an RGB-D sensor. Based on the Deep Q-Network framework [2], the raw depth image is taken as the only input to estimate the Q values corresponding to all moving commands. The training of the network weights is end-to-end. In arbitrarily constructed simulation environments, we show that the robot can quickly adapt to unfamiliar scenes without any man-made labeling. Moreover, through analysis of the receptive fields of the feature representations, deep reinforcement learning motivates the convolutional networks to estimate the traversability of the scenes. The test results are compared with exploration strategies based separately on deep learning [1] or reinforcement learning [3]. Even though it is trained only in the simulated environment, experimental results in a real-world environment demonstrate that the cognitive ability of the robot controller is dramatically improved compared with the supervised method. We believe this is the first time that raw sensor information has been used to build a cognitive exploration strategy for mobile robots through end-to-end deep reinforcement learning.
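A minimal sketch of a depth-only Q-network in the Deep Q-Network style (PyTorch here purely for illustration; the input size, layer shapes, and action count are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class DepthQNet(nn.Module):
    """Q-values for discrete moving commands from a single depth image."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(            # classic DQN-style conv stack
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(depth))

q = DepthQNet(n_actions=5)             # e.g., forward / turn-left / turn-right ...
depth = torch.randn(1, 1, 84, 84)      # one normalized 84x84 depth frame
print(q(depth).shape)                  # torch.Size([1, 5])
```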
20a773041aa5667fbcf5378ac87cad2edbfd28b7
The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.
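Each DBpedia identifier can be dereferenced into an RDF description as the article describes. A small sketch (assuming network access and the current dbpedia.org endpoints, which may differ from those available at the time of the article):

```python
import requests

# Dereference the entity "Berlin" as JSON-serialized RDF.
resource = "Berlin"
resp = requests.get(f"https://dbpedia.org/data/{resource}.json", timeout=30)
data = resp.json()

# The RDF description keys triples by subject URI.
subject = f"http://dbpedia.org/resource/{resource}"
triples = data.get(subject, {})
print(len(triples), "predicates for", subject)
for predicate in list(triples)[:5]:
    print(" ", predicate)
```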
744eacc689e1be16de6ca1f386ea3088abacad49
We describe our method for benchmarking Semantic Web knowledge base systems with respect to use in large OWL applications. We present the Lehigh University Benchmark (LUBM) as an example of how to design such benchmarks. The LUBM features an ontology for the university domain, synthetic OWL data scalable to an arbitrary size, fourteen extensional queries representing a variety of properties, and several performance metrics. The LUBM can be used to evaluate systems with different reasoning capabilities and storage mechanisms. We demonstrate this with an evaluation of two memory-based systems and two systems with persistent storage.
92862e13ceb048d596d05b5c788765649be9d851
Due to the distributed nature of Denial-of-Service attacks, it is tremendously challenging to identify such malicious behavior using traditional intrusion detection systems in Wireless Sensor Networks (WSNs). In the current paper, a bio-inspired method is introduced, namely the cooperative-based fuzzy artificial immune system (Co-FAIS). It is a modular-based defense strategy derived from the danger theory of the human immune system. The agents synchronize and work with one another to calculate the abnormality of sensor behavior in terms of context antigen value (CAV) or attackers and update the fuzzy activation threshold for security response. In such a multi-node circumstance, the sniffer module adapts to the sink node to audit data by analyzing the packet components and sending the log file to the next layer. The fuzzy misuse detector module (FMDM) integrates with a danger detector module to identify the sources of danger signals. The infected sources are transmitted to the fuzzy Q-learning vaccination modules (FQVM) so that specific required actions can be taken to enhance system abilities. The Cooperative Decision Making Modules (Co-DMM) incorporate the danger detector module with the fuzzy Q-learning vaccination module to produce optimum defense strategies. To evaluate the performance of the proposed model, the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol was simulated using a network simulator. The model was subsequently compared against other existing soft computing methods, such as fuzzy logic controller (FLC), artificial immune system (AIS), and fuzzy Q-learning (FQL), in terms of detection accuracy, counter-defense, network lifetime and energy consumption, to demonstrate its efficiency and viability. The proposed method improves detection accuracy and successful defense rate performance against attacks compared to conventional empirical methods.
c286ba73f645535d19e085bdaa713a0bb9cb1ddc
In this paper, an X-band 1×3 substrate integrated waveguide (SIW) power divider design is presented. The designed SIW power divider provides equal amplitude with uniform phase distribution at each output port. It also has a satisfactory operating bandwidth and low insertion loss. Moreover, the return loss is approximately 25 dB at the design frequency, as shown in the EM simulation results.
81d51bf638a6a7c405e1e1d461ae979f83fd929b
9f27c7cd7a66f612c4807ec6e9a90d6aafd462e8
Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they only use low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are only combined at high levels but rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets together with depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
1986349b2df8b9d4064453d169d69ecfde283e27
Conflict is an essential element of interesting stories. In this paper, we operationalize a narratological definition of conflict and extend established narrative planning techniques to incorporate this definition. The conflict partial order causal link planning algorithm (CPOCL) allows narrative conflict to arise in a plan while maintaining causal soundness and character believability. We also define seven dimensions of conflict in terms of this algorithm's knowledge representation. The first three-participants, reason, and duration-are discrete values which answer the “who?” “why?” and “when?” questions, respectively. The last four-balance, directness, stakes, and resolution-are continuous values which describe important narrative properties that can be used to select conflicts based on the author's purpose. We also present the results of two empirical studies which validate our operationalizations of these narrative phenomena. Finally, we demonstrate the different kinds of stories which CPOCL can produce based on constraints on the seven dimensions.
2766913aabb151107b28279645b915a3aa86c816
This article outlines explanation-based learning (EBL) and its role in improving problem-solving performance through experience. Unlike inductive systems, which learn by abstracting common properties from multiple examples, EBL systems explain why a particular example is an instance of a concept. The explanations are then converted into operational recognition rules. In essence, the EBL approach is analytical and knowledge-intensive, whereas inductive methods are empirical and knowledge-poor. This article focuses on extensions of the basic EBL method and their integration with the PRODIGY problem-solving system. PRODIGY's EBL method is specifically designed to acquire search control rules that are effective in reducing total search time for complex task domains. Domain-specific search control rules are learned from successful problem-solving decisions, costly failures, and unforeseen goal interactions. The ability to specify multiple learning strategies in a declarative manner enables EBL to serve as a general technique for performance improvement. PRODIGY's EBL method is analyzed, illustrated with several examples and performance results, and compared with other methods for integrating EBL and problem solving.
516bd2e2bfc7405568f48560e02154135616374c
Narrative, and in particular storytelling, is an important part of the human experience. Consequently, computational systems that can reason about narrative can be more effective communicators, entertainers, educators, and trainers. One of the central challenges in computational narrative reasoning is narrative generation, the automated creation of meaningful event sequences. There are many factors – logical and aesthetic – that contribute to the success of a narrative artifact. Central to this success is its understandability. We argue that the following two attributes of narratives are universal: (a) the logical causal progression of plot, and (b) character believability. Character believability is the perception by the audience that the actions performed by characters do not negatively impact the audience’s suspension of disbelief. Specifically, characters must be perceived by the audience to be intentional agents. In this article, we explore the use of refinement search as a technique for solving the narrative generation problem – to find a sound and believable sequence of character actions that transforms an initial world state into a world state in which goal propositions hold. We describe a novel refinement search planning algorithm – the Intent-based Partial Order Causal Link (IPOCL) planner – that, in addition to creating causally sound plot progression, reasons about character intentionality by identifying possible character goals that explain their actions and creating plan structures that explain why those characters commit to their goals. We present the results of an empirical evaluation that demonstrates that narrative plans generated by the IPOCL algorithm support audience comprehension of character intentions better than plans generated by conventional partial-order planners.
444e8aacda5f06d2a6c5197c89567638eaccb677
With the increasing significance of information technology, there is an urgent need for adequate measures of information security. Systematic information security management is one of the most important initiatives in IT management. At least since reports about privacy and security breaches, fraudulent accounting practices, and attacks on IT systems appeared in public, organizations have recognized their responsibility to safeguard physical and information assets. Security standards can be used as a guideline or framework to develop and maintain an adequate information security management system (ISMS). The standards ISO/IEC 27000, 27001 and 27002 are international standards that are receiving growing recognition and adoption. They are referred to as the "common language of organizations around the world" for information security [1]. With ISO/IEC 27001, companies can have their ISMS certified by a third-party organization and thus show their customers evidence of their security measures.
54eed22ff377dcb0472c8de454b1261988c4a9ac
Traffic safety is a severe problem around the world. Many road accidents are related to the driver's unsafe driving behavior, e.g., eating while driving. In this work, we propose a vision-based solution to recognize the driver's behavior based on convolutional neural networks. Specifically, given an image, skin-like regions are extracted by a Gaussian Mixture Model and passed to a deep convolutional neural network model, namely R*CNN, to generate action labels. The skin-like regions are able to provide abundant semantic information with sufficient discriminative capability. Also, R*CNN is able to select the most informative regions from the candidates to facilitate the final action recognition. We tested the proposed methods on the Southeast University Driving-posture Dataset and achieve a mean Average Precision (mAP) of 97.76%, which proves the proposed method is effective in driver action recognition.
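A hedged sketch of the GMM stage: fit a two-component mixture on color values and keep the component that looks skin-like. Synthetic pixels and a chromatic-mean heuristic stand in for the paper's actual training data and selection criteria.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-ins: "skin" pixels cluster around one color, background
# around another (purely illustrative values).
skin = rng.normal(loc=[150, 120, 90], scale=10, size=(500, 3))
background = rng.normal(loc=[60, 60, 60], scale=25, size=(1500, 3))
pixels = np.vstack([skin, background])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(pixels)

# Heuristic: the component whose mean is closest to a reference skin color
# is treated as the skin-like region (an assumption for this sketch).
ref = np.array([150, 120, 90])
skin_comp = np.argmin(np.linalg.norm(gmm.means_ - ref, axis=1))
mask = labels == skin_comp
print(f"{mask.sum()} of {len(pixels)} pixels classified as skin-like")
```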
79f026f743997ab8b5251f6e915a0a427576b142
Millimeter-wave communications are expected to play a key role in future 5G mobile networks to overcome the dramatic traffic growth expected over the next decade. Such systems will severely challenge antenna technologies used at mobile terminal, access point or backhaul/fronthaul levels. This paper provides an overview of the authors' recent achievements in the design of integrated antennas, antenna arrays and high-directivity quasi-optical antennas for high data-rate 60-GHz communications.
04ee77ef1143af8b19f71c63b8c5b077c5387855
Most tasks in natural language processing can be cast into question answering (QA) problems over language input. We introduce the dynamic memory network (DMN), a unified neural network framework which processes input sequences and questions, forms semantic and episodic memories, and generates relevant answers. Questions trigger an iterative attention process which allows the model to condition its attention on the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN can be trained end-to-end and obtains state of the art results on several types of tasks and datasets: question answering (Facebook’s bAbI dataset), sequence modeling for part of speech tagging (WSJ-PTB), coreference resolution (Quizbowl dataset) and text classification for sentiment analysis (Stanford Sentiment Treebank). The model relies exclusively on trained word vector representations and requires no string matching or manually engineered features.
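A minimal sketch of the iterative episodic attention at the heart of the DMN. This is a simplification: the published model uses an attention-gated GRU and a richer fact/question interaction feature set, and the shapes and scoring function here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EpisodicMemory(nn.Module):
    """Repeated passes of attention over facts, updating a memory vector."""
    def __init__(self, dim: int, passes: int = 3):
        super().__init__()
        self.passes = passes
        self.score = nn.Linear(3 * dim, 1)   # simplified fact/question/memory scorer
        self.update = nn.GRUCell(dim, dim)

    def forward(self, facts: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # facts: (n_facts, dim); question: (dim,)
        memory = question
        for _ in range(self.passes):
            q = question.expand_as(facts)
            m = memory.expand_as(facts)
            attn = F.softmax(self.score(torch.cat([facts, q, m], -1)).squeeze(-1), 0)
            episode = (attn.unsqueeze(-1) * facts).sum(0)
            memory = self.update(episode.unsqueeze(0), memory.unsqueeze(0)).squeeze(0)
        return memory   # each pass conditions attention on the previous result

dmn_mem = EpisodicMemory(dim=32)
print(dmn_mem(torch.randn(10, 32), torch.randn(32)).shape)  # torch.Size([32])
```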
165db9e093be270d38ac4a264efff7507518727e
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems currently cannot solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks.
17357530b7aae622162da73d3b796c63b557b3b3
The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real systems the driver nodes tend to avoid the high-degree nodes.
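The paper's structural-controllability result reduces driver-node counting to maximum matching: the minimum number of driver nodes is max(N − |M*|, 1), where M* is a maximum matching of the bipartite graph whose two sides are out-copies and in-copies of the nodes. A sketch of that reduction with networkx:

```python
import networkx as nx

def driver_node_count(G: nx.DiGraph) -> int:
    """Minimum driver nodes via the maximum-matching reduction."""
    B = nx.Graph()
    out_side = [("out", u) for u in G]
    B.add_nodes_from(out_side, bipartite=0)
    B.add_nodes_from((("in", v) for v in G), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
    matching = nx.bipartite.maximum_matching(B, top_nodes=out_side)
    matched = len(matching) // 2          # the dict stores both directions
    return max(len(G) - matched, 1)

G = nx.DiGraph([(1, 2), (2, 3), (1, 3)])
print(driver_node_count(G))   # 1: a single driver node suffices here
```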
8ff18d710813e5ea50d05ace9f07f48006430671
We present an extended analysis of our previous work on the HydroSense technology, which is a low-cost and easily installed single-point sensor of pressure for automatically disaggregating water usage activities in the home (Froehlich et al., 2009 [53]). We expand upon this work by providing a survey of existing and emerging water disaggregation techniques, a more comprehensive description of the theory of operation behind our approach, and an expanded analysis section that includes hot versus cold-water valve usage classification and a comparison between two classification approaches: the template-based matching scheme used in Froehlich et al. (2009) [53] and a new stochastic approach using a Hidden Markov Model. We show that both are successful in identifying valve- and fixture-level water events with greater than 90% accuracies. We conclude with a discussion of the limitations in our experimental methodology and open problems going forward.
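A hedged sketch of the Hidden Markov Model alternative using hmmlearn. The feature extraction from the pressure stream is assumed; synthetic one-dimensional features stand in for real valve-event data.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Synthetic 1-D pressure-derived features for three latent fixture states
# (illustrative stand-ins, not HydroSense data).
X = np.concatenate([
    rng.normal(0.0, 0.1, 200),    # idle
    rng.normal(1.0, 0.2, 100),    # e.g., one valve type
    rng.normal(2.5, 0.3, 100),    # e.g., another valve type
]).reshape(-1, 1)

model = GaussianHMM(n_components=3, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(X)
states = model.predict(X)          # most likely latent state per sample
print("learned state means:", model.means_.ravel())
```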
d4d5a73c021036dd548f5fbe71dbdabcad378e98
This paper describes a transient event classification scheme, system identification techniques, and implementation for use in nonintrusive load monitoring. Together, these techniques form a system that can determine the operating schedule and find parameters of physical models of loads that are connected to an AC or DC power distribution system. The monitoring system requires only off-the-shelf hardware and recognizes individual transients by disaggregating the signal from a minimal number of sensors that are installed at a central location in the distribution system. Implementation details and field tests for AC and DC systems are presented.
3556c846890dc0dbf6cd15ebdcd8932f1fdef6a2
A key aspect of pervasive computing is using computers and sensor networks to effectively and unobtrusively infer users' behavior in their environment. This includes inferring which activity users are performing, how they're performing it, and its current stage. Recognizing and recording activities of daily living is a significant problem in elder care. A new paradigm for ADL inferencing leverages radio-frequency-identification technology, data mining, and a probabilistic inference engine to recognize ADLs, based on the objects people use. We propose an approach that addresses these challenges and shows promise in automating some types of ADL monitoring. Our key observation is that the sequence of objects a person uses while performing an ADL robustly characterizes both the ADL's identity and the quality of its execution. So, we have developed Proactive Activity Toolkit (PROACT).
fdec38019625fbcffc9debb804544cce6630c3ac
There is growing recognition that firms in the contemporary business environment derive substantial and sustained competitive advantage from a bundle of intangible assets such as knowledge, networks and innovative capability. Measuring the return on such intangible assets has now become imperative for managers. The present manuscript focuses on the measurement of the return on marketing. We first discuss the conditions that make this task a high managerial priority. We then discuss measurement efforts to date, both in general management and marketing. We then offer a conceptual framework that places measurement efforts in a historical perspective. We conclude with a discussion on where the future of marketing metrics lies.
0cfd5a7c6610e0eff2d277b419808edb32d93b78
592fc3377e8b590a457d8ffaed60b71730114347
The past twenty years has seen a rapid growth of interest in stochastic search algorithms, particularly those inspired by natural processes in physics and biology. Impressive results have been demonstrated on complex practical optimisation problems and related search applications taken from a variety of fields, but the theoretical understanding of these algorithms remains weak. This results partly from the insufficient attention that has been paid to results showing certain fundamental limitations on universal search algorithms, including the so-called “No Free Lunch” Theorem. This paper extends these results and draws out some of their implications for the design of search algorithms, and for the construction of useful representations. The resulting insights focus attention on tailoring algorithms and representations to particular problem classes by exploiting domain knowledge. This highlights the fundamental importance of gaining a better theoretical grasp of the ways in which such knowledge may be systematically exploited as a major research agenda for the future.
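The No Free Lunch limitation admits a tiny finite demonstration: averaged over *all* functions on a small domain, any two non-repeating deterministic search orders need the same expected number of evaluations to locate a maximum. A sketch (the domain size and the two search orders are arbitrary illustrative choices):

```python
from itertools import product

X = range(4)                             # search space of four points
ORDERS = [(0, 1, 2, 3), (3, 1, 0, 2)]    # two fixed non-repeating strategies

def evals_to_max(f, order):
    """Evaluations used before a global maximum of f is first seen."""
    best = max(f)
    for i, x in enumerate(order, start=1):
        if f[x] == best:
            return i

# Average over every function f: X -> {0, 1}; the two averages coincide.
for order in ORDERS:
    total = sum(evals_to_max(f, order) for f in product((0, 1), repeat=len(X)))
    print(order, "average evaluations:", total / 2 ** len(X))
```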
c4ea9066db2e73a7ddfa8643277bfd2948eebfe0
7758a1c9a21e0b8635a5550cfdbebc40b22a41a6
bf48f1d556fdb85d5dbe8cfd93ef13c212635bcf
In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., a video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment. We validate our method in three robot manipulation tasks. NTP achieves strong generalization across sequential tasks that exhibit hierarchical and compositional structures. The experimental results show that NTP learns to generalize well towards unseen tasks with increasing lengths, variable topologies, and changing objectives. stanfordvl.github.io/ntp/
bbe657fbc16cbf0ceaebd596cea5b3915f4eb39c
A wide-band "corners-truncated rectangular" stacked patch antenna for use in circular polarization applications is proposed. For the antenna proposed in this paper, an axial ratio of less than 3 dB and a VSWR of less than 2:1 were shown to be achievable over a 25% bandwidth for use in wireless communication applications, and the antenna achieves higher gain, lower side lobes and wider bandwidth compared to the traditional microstrip patch antenna.
15a2c58b29c5a84a134d1504faff528101321f21
Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input data, and taking a weighted majority vote of the sequence of classifiers thereby produced. We show that this seemingly mysterious phenomenon can be understood in terms of well known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multi-class generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multi-class generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large scale data mining applications.
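The reweighting-plus-weighted-vote loop the abstract analyzes fits in a few lines. A sketch of discrete AdaBoost with decision stumps, where (per the additive-logistic view above) the aggregate score F(x) is the additive model and sign(F) is the weighted majority vote; the toy dataset is an illustrative assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, rounds=50):
    """Discrete AdaBoost with decision stumps; y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)     # reweight: upweight the mistakes
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_score(stumps, alphas, X):
    """F(x): the additive model; sign(F) is the weighted majority vote, and
    1 / (1 + exp(-2F)) estimates P(y = 1 | x) on the logistic scale."""
    return sum(a * s.predict(X) for s, a in zip(stumps, alphas))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # diagonal boundary: stumps must combine
stumps, alphas = adaboost_fit(X, y)
print("train accuracy:", (np.sign(adaboost_score(stumps, alphas, X)) == y).mean())
```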
35b86ef854c728da5f905ae9fb09dbbcf59a0cdd
This paper describes the design and modeling of CMOS transistors, integrated passives, and circuit blocks at millimeter-wave (mm-wave) frequencies. The effects of parasitics on the high-frequency performance of 130-nm CMOS transistors are investigated, and a peak f/sub max/ of 135 GHz has been achieved with optimal device layout. The inductive quality factor (Q/sub L/) is proposed as a more representative metric for transmission lines, and for a standard CMOS back-end process, coplanar waveguide (CPW) lines are determined to possess a higher Q/sub L/ than microstrip lines. Techniques for accurate modeling of active and passive components at mm-wave frequencies are presented. The proposed methodology was used to design two wideband mm-wave CMOS amplifiers operating at 40 GHz and 60 GHz. The 40-GHz amplifier achieves a peak |S/sub 21/| = 19 dB, output P/sub 1dB/ = -0.9 dBm, IIP3 = -7.4 dBm, and consumes 24 mA from a 1.5-V supply. The 60-GHz amplifier achieves a peak |S/sub 21/| = 12 dB, output P/sub 1dB/ = +2.0 dBm, NF = 8.8 dB, and consumes 36 mA from a 1.5-V supply. The amplifiers were fabricated in a standard 130-nm 6-metal layer bulk-CMOS process, demonstrating that complex mm-wave circuits are possible in today's mainstream CMOS technologies.
0a06201d7d0f60d775b2e8d3b100026190081db8
Agriculture has become much more than simply a means to feed ever-growing populations. It is especially important in India, where more than 70% of the population depends on agriculture; it therefore feeds a great number of people. Plant diseases affect humans directly or indirectly, through health or economic losses. To detect these plant diseases we need a fast, automated method. Diseases are analyzed by different digital image processing techniques. In this paper, we survey different digital image processing techniques for detecting plant diseases.
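As one concrete instance of the kind of technique such surveys cover, a hedged color-threshold sketch: segment healthy (green) versus lesion-like (yellow/brown) leaf tissue in HSV space and report a severity ratio. The threshold values and input path are illustrative assumptions, not taken from any surveyed method.

```python
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")                    # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Illustrative HSV ranges -- real systems calibrate these per crop and camera.
healthy = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # green tissue
lesion = cv2.inRange(hsv, (10, 40, 40), (34, 255, 255))    # yellow/brown spots

leaf_area = np.count_nonzero(healthy) + np.count_nonzero(lesion)
severity = np.count_nonzero(lesion) / max(leaf_area, 1)
print(f"approximate diseased fraction of leaf: {severity:.1%}")
```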