Columns: title (string, 8–300 characters); abstract (string, 0–10k characters)
Faster Transit Routing by Hyper Partitioning
We present a preprocessing-based acceleration technique for computing bi-criteria Pareto-optimal journeys in public transit networks, building on the well-known RAPTOR algorithm [16]. Our key idea is to first partition a hypergraph into cells, in which vertices correspond to routes (e.g., bus lines) and hyperedges to stops, and to then mark routes sufficient for optimal travel across cells. The query can then be restricted to marked routes and those in the source and target cells. This results in a practical approach, suitable for networks that are too large to be efficiently handled by the basic RAPTOR algorithm.
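To make the query-restriction step concrete, here is a minimal sketch in Python (not the authors' code; `stop_to_routes`, `route_to_cell`, and `marked_routes` are hypothetical preprocessing outputs):

```python
# Illustrative sketch (not the paper's implementation): restrict a RAPTOR query to
# routes marked during preprocessing plus routes local to the source/target cells.
def candidate_routes(source_stop, target_stop, stop_to_routes, route_to_cell, marked_routes):
    """Return the set of routes a partition-pruned RAPTOR query may scan."""
    source_cells = {route_to_cell[r] for r in stop_to_routes[source_stop]}
    target_cells = {route_to_cell[r] for r in stop_to_routes[target_stop]}
    allowed = set(marked_routes)  # routes sufficient for optimal travel across cells
    for route, cell in route_to_cell.items():
        if cell in source_cells or cell in target_cells:
            allowed.add(route)    # local routes in the source and target cells
    return allowed
```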
Efficiency Study of a 2.2 kV, 1 ns, 1 MHz Pulsed Power Generator Based on a Drift-Step-Recovery Diode
Drift-step-recovery diodes (DSRDs) are used in pulsed-power generators to produce nanosecond-scale pulses with a rise rate of the order of 1 kV/ns. A 2.2 kV, 1 ns pulsed power circuit is presented. The circuit features a single prime switch that utilizes a low-voltage dc power supply to pump and pulse the DSRD in the forward and reverse directions. An additional low-current dc power supply is used to provide a voltage bias in order to balance the DSRD forward with respect to its reverse charge. The DSRD was connected in parallel to the load. In order to study the circuit's efficiency, it was operated over a wide range of operating parameters, including the main and bias source voltages, and the trigger duration of the prime switch. A peak voltage of 2.2 kV with a rise time of less than 1 ns and a rise rate of 3 kV/ns was obtained, where the efficiency was 24%. A higher efficiency of 52% was obtained when the circuit was optimized to an output peak voltage of 1.15 kV. The circuit was operated in single-shot mode as well as in bursts of up to 100 pulses at a repetition rate of 1 MHz. The experimental results are supported by a PSPICE simulation of the circuit. An analysis of the circuit input and output energies with respect to the MOSFET and DSRD losses is provided.
Virtual Virgins and Vamps: The Effects of Exposure to Female Characters' Sexualized Appearance and Gaze in an Immersive Virtual Environment
This experiment exposed a sample of U.S. undergraduates (43 men, 40 women) to suggestively or conservatively clad virtual females who exhibited either responsive, high eye gaze or nonresponsive, low gaze in an immersive virtual environment. Outside the virtual world, men and women who encountered a highly stereotypical character—a suggestively clad, high gaze agent (“vamp”) or conservatively clad, low gaze character (“virgin”)— demonstrated more sexism and greater rape myth acceptance than participants who saw a suggestively clad nonresponsive or conservatively clad, responsive character. Results suggest that gender-stereotypical virtual females enhance negative attitudes toward women, whereas those that violate expectations and break stereotypes do not.
Fighting against phishing attacks: state of the art and future challenges
In the last few years, phishing scams have grown rapidly, posing a huge threat to global Internet security. Today, phishing is one of the most common and serious threats on the Internet, where cyber attackers try to steal users' personal or financial credentials using either malware or social engineering. Detecting phishing attacks with high accuracy has always been an issue of great interest. Recent work has produced a variety of new techniques specially designed for phishing detection, where accuracy is extremely important. The phishing problem is broad, as there are several ways to carry out such an attack, which implies that no single solution can adequately address it. Two main issues are addressed in our paper. First, we discuss in detail phishing attacks, the history of phishing attacks, and the attacker's motivation for performing them. In addition, we provide a taxonomy of the various types of phishing attacks. Second, we provide a taxonomy of the various solutions proposed in the literature to detect and defend against phishing attacks. We also discuss the issues and challenges faced in dealing with phishing attacks and spear phishing, and how phishing is now targeting the emerging domain of IoT. We discuss the tools and datasets that researchers use to evaluate their approaches. This provides a better understanding of the problem, the current solution space, and the future research scope for dealing efficiently with such attacks.
Kylix: A Sparse Allreduce for Commodity Clusters
Allreduce is a basic building block for parallel computing. Our target here is "Big Data" processing on commodity clusters (mostly sparse power-law data). Allreduce can be used to synchronize models, to maintain distributed datasets, and to perform operations on distributed data such as sparse matrix multiply. We first review a key constraint on cluster communication, the minimum efficient packet size, which hampers the use of direct all-to-all protocols on large networks. Our allreduce network is a nested, heterogeneous-degree butterfly. We show that communication volume in the lower layers is typically much less than in the top layer, and that total communication across all layers is only a small constant factor larger than the top layer, which is close to optimal. A chart of network communication volume across layers has a characteristic "Kylix" shape, which gives the method its name. For optimum performance, the butterfly degrees also decrease down the layers. Furthermore, to efficiently route sparse updates to the nodes that need them, the network must be nested. While the approach is amenable to various kinds of sparse data, almost all "Big Data" sets show power-law statistics, and from the properties of these we derive methods for optimal network design. Finally, we present experiments with Kylix on Amazon EC2 demonstrating significant improvements over existing systems such as PowerGraph and Hadoop.
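As a rough illustration of the layered, mixed-radix butterfly structure (not Kylix itself; the per-layer degrees and the dense reduction below are simplifying assumptions), a simulated allreduce might look like:

```python
# Illustrative sketch: a mixed-radix butterfly allreduce with heterogeneous per-layer
# degrees. At layer l, every node exchanges with the d_l - 1 other members of its group
# and reduces their partial values; after all layers, every node holds the global sum.
import numpy as np

def butterfly_allreduce(values, degrees):
    """Simulated allreduce over len(values) nodes; prod(degrees) must equal node count."""
    n = len(values)
    assert int(np.prod(degrees)) == n
    vals = [np.asarray(v, dtype=float) for v in values]
    stride = 1
    for d in degrees:                       # one communication layer per radix digit
        new_vals = [None] * n
        for m in range(n):
            digit = (m // stride) % d
            group = [m + (k - digit) * stride for k in range(d)]  # peers at this layer
            new_vals[m] = sum(vals[g] for g in group)             # reduce within group
        vals = new_vals
        stride *= d
    return vals                             # every node now holds the total

# Example: 6 nodes, degrees decreasing down the layers (3 at the top, then 2).
print(butterfly_allreduce([np.array([i]) for i in range(6)], degrees=[3, 2]))
```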
Language embeddings that preserve staging and safety
We study embeddings of programming languages into one another that preserve what reductions take place at compile-time, i.e., staging. A certain condition -- what we call a `Turing complete kernel' -- is sufficient for a language to be stage-universal in the sense that any language may be embedded in it while preserving staging. A similar line of reasoning yields the notion of safety-preserving embeddings, and a useful characterization of safety-universality. Languages universal with respect to staging and safety are good candidates for realizing domain-specific embedded languages (DSELs) and `active libraries' that provide domain-specific optimizations and safety checks.
The effects of an oral preparation containing hyaluronic acid (Oralvisc®) on obese knee osteoarthritis patients determined by pain, function, bradykinin, leptin, inflammatory cytokines, and heavy water analyses
The purpose of this study was to determine the effects of an oral preparation containing hyaluronic acid on osteoarthritic knee joint pain and function, as well as changes in inflammatory cytokines, bradykinin, and leptin. We also used heavy water to determine the turnover rates of glycosaminoglycans in synovial fluid. This was a double-blind, randomized, placebo-controlled study of 40 subjects over a period of 3 months. Visual analog scale scores and Western Ontario and McMaster Universities (WOMAC) pain and function scores were recorded. Serum and synovial fluid were assayed by enzyme-linked immunosorbent assays for inflammatory cytokines, bradykinin, and leptin. In 20 subjects, terminal heavy water ingestion was used for spectral analyses of serum and joint fluid samples. There were statistically significant improvements in pain and function. Both serum and synovial fluid samples showed significant decreases in a majority of inflammatory cytokines, leptin, and bradykinin in the oral hyaluronic acid preparation group. Heavy water analyses revealed a significant decrease in hyaluronic acid turnover in the synovial fluid of the treatment group. A preparation containing hyaluronic acid and other glycosaminoglycans holds promise as a safe and effective agent for the treatment of patients with knee osteoarthritis who are overweight. Further studies will be required to see whether this is a disease-modifying agent.
Mining electronic health records: towards better research applications and clinical care
Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.
Building Robust Systems an essay
It is hard to build robust systems: systems that have acceptable behavior over a larger class of situations than was anticipated by their designers. The most robust systems are evolvable: they can be easily adapted to new situations with only minor modification. How can we design systems that are flexible in this way? Observations of biological systems tell us a great deal about how to make robust and evolvable systems. Techniques originally developed in support of symbolic Artificial Intelligence can be viewed as ways of enhancing robustness and evolvability in programs and other engineered systems. By contrast, common practice of computer science actively discourages the construction of robust systems.
Improv: An Input Framework for Improvising Cross-Device Interaction by Demonstration
As computing devices become increasingly ubiquitous, it is now possible to combine the unique capabilities of different devices or Internet of Things to accomplish a task. However, there is currently a high technical barrier for creating cross-device interaction. This is especially challenging for end users who have limited technical expertise—end users would greatly benefit from custom cross-device interaction that best suits their needs. In this article, we present Improv, a cross-device input framework that allows a user to easily leverage the capability of additional devices to create new input methods for an existing, unmodified application, e.g., creating custom gestures on a smartphone to control a desktop presentation application. Instead of requiring developers to anticipate and program these cross-device behaviors in advance, Improv enables end users to improvise them on the fly by simple demonstration, for their particular needs and devices at hand. We showcase a range of scenarios where Improv is used to create a diverse set of useful cross-device input. Our study with 14 participants indicated that on average it took a participant 10 seconds to create a cross-device input technique. In addition, Improv achieved 93.7% accuracy in interpreting user demonstration of a target UI behavior by looking at the raw input events from a single example.
BUSINESS INTELLIGENCE SYSTEMS: STATE-OF-THE-ART REVIEW AND CONTEMPORARY APPLICATIONS
Recently, business intelligence (BI) applications have been the primary agenda for many CIOs. However, the concept of BI is fairly new, and to date there is no commonly agreed definition of BI. This paper explores the nebulous definitions and the various applications of BI through a comprehensive review of academic as well as practitioners' literature. As a result, three main perspectives of BI have been identified, namely the management aspect, the technological aspect, and the product aspect. This categorization gives researchers, practitioners, and BI vendors a better idea of how different parties have approached BI thus far and is valuable in their design, planning, and implementation of a contemporary BI system in the future. The categorization may even be a first effort towards a commonly agreed definition of BI.
Chapter 1. Artificial Neurogenesis: An Introduction and Selective Review
In this introduction and review—like in the book which follows—we explore the hypothesis that adaptive growth is a means of producing brain-like machines. The emulation of neural development can incorporate desirable characteristics of natural neural systems into engineered designs. The introduction begins with a review of neural development and neural models. Next, artificial development—the use of a developmentally-inspired stage in engineering design—is introduced. Several strategies for performing this "meta-design" for artificial neural systems are reviewed. This work is divided into three main categories: bio-inspired representations; developmental systems; and epigenetic simulations. Several specific network biases and their benefits to neural network design are identified in these contexts. In particular, several recent studies show a strong synergy, sometimes interchangeability, between developmental and epigenetic processes—a topic that has remained largely under-explored in the literature.
Half A Century of Desalination With Electrodialysis
On February 21, 1952, the New York Times ran an interesting cover story. It described how a young company called Ionics (which was purchased by GE Water & Process Technologies in 2005) had invented a new technology that could change the world. The invention was electrodialysis, or ED, and with its use of salt-transfer membranes, ED offered, for the first time, a truly practical and less expensive way to desalt brackish water. Distillation had been the only way; it was a cumbersome process, best suited to treating sea water, and it required large amounts of energy to operate. The US Congress saw the promise of ED and passed the Saline Water Bill in 1953 to fund additional research into desalination. In December 1953, ED became commercially viable when Ionics supplied an oil field campsite in Saudi Arabia with their first ED system. Many more ED units followed that one. ED used electricity to generate a DC field across a stack of flat-sheet ion exchange membranes arranged in a cation–anion configuration. The DC field pulled unwanted salts across the membranes, creating both a product and a recirculating brine water flow.
Cross-Cultural Software Production and Use: A Structurational Analysis
This paper focuses on cross-cultural software production and use, which is increasingly common in today's more globalized world. A theoretical basis for analysis is developed, using concepts drawn from structuration theory. The theory is illustrated using two cross-cultural case studies. It is argued that structurational analysis provides a deeper examination of cross-cultural working and IS than is found in the current literature, which is dominated by Hofstede-type studies. In particular, the theoretical approach can be used to analyze cross-cultural conflict and contradiction, cultural heterogeneity, detailed work patterns, and the dynamic nature of culture. The paper contributes to the growing body of literature that emphasizes the essential role of cross-cultural understanding in contemporary society. Introduction: There has been much debate over the last decade about the major social transformations taking place in the world, such as the increasing interconnectedness of different societies, the compression of time and space, and an intensification of consciousness of the world as a whole (Robertson 1992). Such changes are often labeled with the term globalization, although the precise nature of this phenomenon is highly complex on closer examination. For example, Beck (2000) distinguishes between globality, the change in consciousness of the world as a single entity, and globalism, the ideology of neoliberalism which argues that the world market eliminates or supplants the importance of local political action. Despite the complexity of the globalization phenomena, all commentators would agree that information and communication technologies (ICTs) are deeply implicated in the changes that are taking place through their ability to enable new modes of work, communication, and organization across time and space. For example, the influential work of Castells (1996, 1997, 1998) argues that we are in the "information age" where information generation, processing, and transformation are fundamental to societal functioning and societal change, and where ICTs enable the pervasive expansion of networking throughout the social structure. However, does globalization, and the related spread of ICTs, imply that the world is becoming a homogeneous arena for global business and global attitudes, with differences between organizations and societies disappearing? There are many authors who take exception to this conclusion. For example, Robertson (1992) discussed the way in which imported themes are indigenized in particular societies, with local culture constraining receptivity to some ideas rather than others, and adapting them in specific ways. He cited Japan as a good example of these glocalization processes. While accepting the idea of time-space compression facilitated by ICTs, Robertson argued that one of its main consequences is an exacerbation of collisions between global, societal, and communal attitudes. Similarly, Appadurai (1997), coming from a non-Western background, argued against the global homogenization thesis on the grounds that different societies will appropriate the "materials of modernity" differently depending on their specific geographies, histories, and languages.
Walsham (2001) developed a related argument, with a specific focus on the role of ICTs, concluding that global diversity needs to be a key focus when developing and using such technologies. If these latter arguments are broadly correct, then working with ICTs in and across different cultures should prove to be problematic, in that there will be different views of the relevance, applicability, and value of particular modes of working and use of ICTs which may produce conflict. For example, technology transfer from one society to another involves the importing of that technology into an "alien" cultural context where its value may not be similarly perceived to that in its original host culture. Similarly, cross-cultural communication through ICTs, or cross-cultural information systems (IS) development teams, are likely to confront issues of incongruence of values and attitudes. The purpose of this paper is to examine a particular topic within the area of cross-cultural working and ICTs, namely that of software production and use; in particular, where the software is not developed in and for a specific cultural group. A primary goal is to develop a theoretical basis for analysis of this area. Key elements of this basis, which draws on structuration theory, are described in the next section of the paper. In order to illustrate the theoretical basis and its value in analyzing real situations, the subsequent sections draw on the field data from two published case studies of cross-cultural software development and application. There is an extensive literature on cross-cultural working and IS, and the penultimate section of the paper reviews key elements of this literature and shows how the analysis of this paper makes a new contribution. In particular, it will be argued that the structurational analysis enables a more sophisticated and detailed consideration of issues in cross-cultural software production under four specific headings: cross-cultural contradiction and conflict; cultural heterogeneity; detailed work patterns in different cultures; and the dynamic, emergent nature of culture. The final section of the paper will summarize some theoretical and practical implications. Structuration Theory, Culture and IS: The theoretical basis for this paper draws on structuration theory (Giddens 1979, 1984). This theory has been highly influential in sociology and the social sciences generally since Giddens first developed the ideas some 20 years ago. In addition, the theory has received considerable attention in the IS field (for a good review, see Jones 1998). The focus here, however, will be on how structuration theory can offer a new way of looking
Lumbar stabilization: a review of core concepts and current literature, part 2.
Lumbar-stabilization exercise programs have become increasingly popular as a treatment for low-back pain. In this article, we outline an evidence-based medicine approach to evaluating patients for a lumbar-stabilization program. We also discuss typical clinical components of this type of program and the rationale for including these particular features based on the medical literature.
A New Prime and Probe Cache Side-Channel Attack for Cloud Computing
Cloud computing is considered one of the most dominant paradigms in the Information Technology (IT) industry nowadays. It supports multi-tenancy to fulfil future increasing demands for accessing and using resources provisioned over the Internet. However, multi-tenancy in cloud computing has unique vulnerabilities such as clients' co-residence and virtual machine physical co-residency. Physical co-residency of virtual machines can give attackers the ability to interfere with another virtual machine running on the same physical machine due to insufficient logical isolation. In the worst scenario, attackers can exfiltrate sensitive information of victims on the same physical machine by using hardware side-channels. There are various types of side-channel attacks, classified according to the hardware medium they target and exploit, for instance, cache side-channel attacks. CPU caches are among the hardware components most targeted by adversaries because they support high-rate interactions and sharing between processes. This paper presents a new Prime and Probe cache side-channel attack, which primes physical addresses translated from the virtual addresses used by a virtual machine. The time to access these addresses is then measured and varies according to where the data is located: if it is in the CPU cache, the access time is less than if it is in main memory. The attack was implemented on a server machine comparable to cloud environment servers. The results show that the attack needs less effort and time than other types and is easy to launch.
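The core decision rule behind Prime+Probe is a timing threshold. The sketch below is purely conceptual: a real attack requires native code, eviction sets aligned to cache sets, and cycle-accurate timers; the Python timing and the threshold value are illustrative assumptions only.

```python
# Conceptual illustration only: the timing-threshold logic behind Prime+Probe.
# Python cannot observe cache behavior reliably; this merely shows the decision rule
# applied after priming a cache set and letting the victim run.
import time

CACHE_HIT_THRESHOLD_NS = 100  # hypothetical boundary between cache and DRAM latency

def probe(access_fn):
    """Time one access to a primed address and classify it by threshold."""
    start = time.perf_counter_ns()
    access_fn()                               # touch the primed address
    elapsed = time.perf_counter_ns() - start
    if elapsed < CACHE_HIT_THRESHOLD_NS:
        return "hit: victim likely did not touch this cache set"
    return "miss: victim likely evicted this set, i.e. accessed congruent data"
```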
Changing the Software Engineering Education: A Report from Current Situation in Mexico
Nowadays, software engineering education in Mexico faces two problems in satisfying the needs of the software industry: the quantity of young, skilled students and the quality of their training. In this sense, it is necessary to improve education at the undergraduate level. We have identified five deficiencies in the current state of software engineering education. Correcting these problems requires the design of a module-oriented curriculum, the creation of networks with local industry, federal and state funding, and alternative educational paradigms for software engineering. This paper presents the strategy and federal programs established to modernize the software engineering curriculum.
Safety assessment of cyclomethicone, cyclotetrasiloxane, cyclopentasiloxane, cyclohexasiloxane, and cycloheptasiloxane.
Cyclomethicone (mixture) and the specific chain length cyclic siloxanes (n = 4-7) reviewed in this safety assessment are cyclic dimethyl polysiloxane compounds. These ingredients have the skin/hair conditioning agent function in common. Minimal percutaneous absorption was associated with these ingredients and the available data do not suggest skin irritation or sensitization potential. Also, it is not likely that dermal exposure to these ingredients from cosmetics would cause significant systemic exposure. The Cosmetic Ingredient Review Expert Panel concluded that these ingredients are safe in the present practices of use and concentration.
Dual three-phase permanent magnet synchronous machine supplied by two independent voltage source inverters
This paper investigates a dual three-phase permanent magnet synchronous machine supplied by two independent three-phase voltage source inverters (VSIs). Dual three-phase machines have many important advantages compared with their conventional three-phase counterparts. Instead of six-phase converters and special vector controls, it would be a very interesting alternative to supply these machines by two conventional three-phase VSIs since they are readily commercially available. This paper shows that the proposed supply method can be used successfully although it suffers from a decrease in the dynamic performance and an error in the estimation of torque. On the other hand, two independent VSIs do not cause low-frequency current harmonics and guarantee balanced current sharing between the winding sets thus avoiding the two most common problems with dual three-phase machines. Experimental results are provided to verify the conclusions. The results suggest that the simple supply method of two conventional VSIs could be a feasible alternative for many industrial applications.
Phase II trial combining docetaxel and doxorubicin as neoadjuvant chemotherapy in patients with operable breast cancer.
BACKGROUND This study was conducted to assess the antitumour activity of docetaxel in combination with doxorubicin for neoadjuvant therapy of patients with breast cancer. PATIENTS AND METHODS Forty-eight women were treated with intravenous doxorubicin 50 mg/m(2) over 15 min followed by a 1-h infusion of docetaxel 75 mg/m(2) every 3 weeks for six cycles. Dexamethasone or prednisolone premedication was allowed. Granulocyte colony-stimulating factor was not allowed as primary prophylaxis. The primary end point was the pathologically documented complete response rate (pathological response). RESULTS The mean relative dose intensity calculated for four or more cycles was 0.99 for doxorubicin and 0.99 for docetaxel. Overall, the pathological response rate was 13%. There were 11 complete and 29 partial clinical responses for an overall response rate of 85% [95% confidence interval (CI) 75% to 95%] in the evaluable population (n = 47). Disease-free and overall survival rates were 85% (95% CI 71% to 94%) and 96% (95% CI 85% to 99%), respectively, after a median follow-up of 36.6 months. Grade 3/4 neutropenia was observed in 65% of patients and 17% reported grade 4 febrile neutropenia. CONCLUSIONS Docetaxel and doxorubicin is an effective and well-tolerated combination in the neoadjuvant therapy of breast cancer. Future controlled trials are warranted to investigate the best schedules and to correlate response with biological factors.
Bidding algorithms for simultaneous auctions
This paper introduces RoxyBot, one of the top-scoring agents in the First International Trading Agent Competition. A TAC agent simulates one vision of future travel agents: it represents a set of clients in simultaneous auctions, trading complementary (e.g., airline tickets and hotel reservations) and substitutable (e.g., symphony and theater tickets) goods. RoxyBot faced two key technical challenges in TAC: (i) allocation---assigning purchased goods to clients at the end of a game instance so as to maximize total client utility, and (ii) completion---determining the optimal quantity of each resource to buy and sell given client preferences, current holdings, and market prices. For the dimensions of TAC, an optimal solution to the allocation problem is tractable, and RoxyBot uses a search algorithm based on A* to produce optimal allocations. An optimal solution to the completion problem is also tractable, but in the interest of minimizing bidding cycle time, RoxyBot solves the completion problem using beam search, producing approximately optimal completions. RoxyBot's completer relies on an innovative data structure called a priceline.
Simple, fast and accurate hyper-parameter tuning in Gaussian-kernel SVM
We consider the parameter tuning problem for Gaussian-kernel support vector machines, i.e., how to set its two hyperparameters — σ (bandwidth) and C (tradeoff). Among the many methods in the literature, the majority handle this task by maximizing the cross validation accuracy over the first quadrant of the (σ, C) plane. However, they are all computationally expensive because the objective function has no explicit formula, so one has to resort to numerical methods (which require training and testing the classifier many times). Additionally, these methods ignore the intrinsic geometry of the training data and always operate over a large set, thus being computationally inefficient. In this paper we propose a two-step procedure for efficient parameter selection: First, we use a nearest neighbor method to directly set the value of σ based on the data geometry; afterwards, for the tradeoff parameter C we employ an elbow method that finds the smallest C leading to "nearly" the highest validation accuracy. By slightly sacrificing validation accuracy, our method gains additional attractive properties such as (1) faster training (i.e., far fewer candidate points to examine) and (2) better generalizability (due to larger class margins). We conduct extensive experiments to show that such a combination of simple techniques achieves excellent performance — the classification accuracy of our method is comparable to its competitors in most cases, but it is much faster.
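A minimal sketch of the two-step idea follows (assumptions: the median nearest-neighbor distance as the σ rule and a 1% tolerance for the elbow; the paper's exact rules may differ):

```python
# Sketch of the two-step procedure: (1) sigma from the data geometry,
# (2) the smallest C whose validation accuracy is "nearly" the best (elbow rule).
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def tune_gaussian_svm(X_train, y_train, X_val, y_val, C_grid=(0.1, 1, 10, 100, 1000)):
    # Step 1: sigma from nearest-neighbor distances (median rule is an assumption).
    nn = NearestNeighbors(n_neighbors=2).fit(X_train)
    dists, _ = nn.kneighbors(X_train)
    sigma = np.median(dists[:, 1])          # column 0 is the zero self-distance
    gamma = 1.0 / (2.0 * sigma ** 2)

    # Step 2: smallest C within 1% of the best validation accuracy.
    accs = [SVC(kernel="rbf", gamma=gamma, C=C).fit(X_train, y_train).score(X_val, y_val)
            for C in C_grid]
    best = max(accs)
    C_star = next(C for C, a in zip(C_grid, accs) if a >= best - 0.01)
    return sigma, C_star
```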
Generation Mechanism of Linear and Angular Ball Velocity in Baseball Pitching
The purpose of this study was to quantify the functional roles of the whole body's joint torques, including the finger joints, in the generation of ball speed and spin during the baseball fastball pitching motion. The dynamic contributions of the joint torque term, the gravitational term, and the motion-dependent term (MDT), consisting of centrifugal and Coriolis forces, to the generation of the ball variables were calculated using two types of models. Motion and ground reaction forces of a baseball pitcher, who was instructed to throw a fastball at a target, were measured with a motion capture system and two force platforms. The results showed that (1) the MDT is the largest contributor to ball speed (e.g., about 31 m/s prior to ball release) when using the 16-segment model, and (2) the horizontal adduction torque of the pitching-side shoulder joint plays a crucial role in generating ball speed, with conversion of the MDT into other terms using a recurrence formula.
(Im)maturity of judgment in adolescence: why adolescents may be less culpable than adults.
A crucial step in the establishment of effective policies and regulations concerning legal decisions involving juveniles is the development of a complete understanding of the many factors-psychosocial as well as cognitive-that affect the evolution of judgment over the course of adolescence and into adulthood. This study examines the influence of three psychosocial factors (responsibility, perspective, and temperance) on maturity of judgment in a sample of over 1,000 participants ranging in age from 12 to 48 years. Participants completed assessments of their psychosocial maturity in the aforementioned domains and responded to a series of hypothetical decision-making dilemmas about potentially antisocial or risky behavior. Socially responsible decision making is significantly more common among young adults than among adolescents, but does not increase appreciably after age 19. Individuals exhibiting higher levels of responsibility, perspective, and temperance displayed more mature decision-making than those with lower scores on these psychosocial factors, regardless of age. Adolescents, on average, scored significantly worse than adults, but individual differences in judgment within each adolescent age group were considerable. These findings call into question recent arguments, derived from studies of logical reasoning, that adolescents and adults are equally competent and that laws and social policies should treat them as such.
AHA Guidelines for Primary Prevention of Cardiovascular Disease and Stroke: 2002 Update: Consensus Panel Guide to Comprehensive Risk Reduction for Adult Patients Without Coronary or Other Atherosclerotic Vascular Diseases. American Heart Association Science Advisory and Coordinating Committee.
Understanding responses to political conflict: interactive effects of the need for closure and salient conflict schemas.
Two studies examined the relationship between the need for cognitive closure and preferences for conflict-resolution strategies in 2 different samples of elite political actors. Although research has suggested that high need for closure should be associated with competitiveness, the authors argue that this relationship should be strongest among political actors with a hostile conflict schema, or representation of what a conflict is and how it should be dealt with. The authors provide evidence for this hypothesis using archival survey data on American foreign-policy officials' attitudes toward international conflict at the height of the Cold War (Study 1) and their own data on the relationship between the need for closure and conflict-strategy preferences among samples of activists from 2 political parties in Poland: a centrist party with a reputation for cooperativeness and an extremist party with a reputation for confrontation (Study 2). The broader implications of these findings are discussed.
Feedback Networks
Currently, the most successful learning models in computer vision are based on learning successive representations followed by a decision layer. This is usually actualized through feedforward multilayer neural networks, e.g. ConvNets, where each layer forms one of such successive representations. However, an alternative that can achieve the same goal is a feedback-based approach in which the representation is formed in an iterative manner, based on feedback received from the previous iteration's output. We establish that a feedback-based approach has several core advantages over feedforward: it enables making early predictions at query time, its output naturally conforms to a hierarchical structure in the label space (e.g. a taxonomy), and it provides a new basis for Curriculum Learning. We observe that feedback develops a considerably different representation compared to feedforward counterparts, in line with the aforementioned advantages. We provide a general feedback-based learning architecture, instantiated using existing RNNs, with endpoint results on par with or better than existing feedforward networks, plus the addition of the above advantages.
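A toy version of the feedback loop, assuming a plain GRU cell in PyTorch rather than the paper's convolutional recurrent architecture, could look like this:

```python
# Minimal sketch of the feedback idea: the same cell is applied for several iterations,
# its previous prediction is fed back as part of the next input, and a prediction can
# be read out after every iteration (enabling early, coarse-to-fine predictions).
import torch
import torch.nn as nn

class FeedbackNet(nn.Module):
    def __init__(self, in_dim=128, hidden=256, n_classes=10, n_iters=4):
        super().__init__()
        self.cell = nn.GRUCell(in_dim + n_classes, hidden)
        self.readout = nn.Linear(hidden, n_classes)
        self.n_iters, self.n_classes, self.hidden = n_iters, n_classes, hidden

    def forward(self, x):                      # x: (batch, in_dim), same input each step
        h = x.new_zeros(x.size(0), self.hidden)
        prev = x.new_zeros(x.size(0), self.n_classes)
        predictions = []
        for _ in range(self.n_iters):          # representation refined iteratively
            h = self.cell(torch.cat([x, prev], dim=1), h)
            prev = self.readout(h)             # early prediction available here
            predictions.append(prev)
        return predictions                     # per-iteration logits
```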
Quantum-inspired evolutionary algorithm for a class of combinatorial optimization
This paper proposes a novel evolutionary algorithm inspired by quantum computing, called a quantum-inspired evolutionary algorithm (QEA), which is based on the concept and principles of quantum computing, such as a quantum bit and superposition of states. Like other evolutionary algorithms, QEA is also characterized by the representation of the individual, the evaluation function, and the population dynamics. However, instead of binary, numeric, or symbolic representation, QEA uses a Q-bit, defined as the smallest unit of information, for the probabilistic representation and a Q-bit individual as a string of Q-bits. A Q-gate is introduced as a variation operator to drive the individuals toward better solutions. To demonstrate its effectiveness and applicability, experiments are carried out on the knapsack problem, which is a well-known combinatorial optimization problem. The results show that QEA performs well, even with a small population, without premature convergence as compared to the conventional genetic algorithm.
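A minimal sketch of one QEA generation follows (assumptions: each Q-bit is parameterized by a single rotation angle and the Q-gate uses a fixed 0.01π step; fitness evaluation and repair for the knapsack constraint are omitted):

```python
# Sketch: a Q-bit individual is a vector of angles theta, with amplitudes
# (cos theta, sin theta); observing it yields a binary solution, and a rotation Q-gate
# nudges each Q-bit toward the best solution found so far.
import numpy as np

def observe(theta):
    """Collapse each Q-bit: P(bit = 1) = sin(theta)^2."""
    return (np.random.rand(theta.size) < np.sin(theta) ** 2).astype(int)

def rotate(theta, solution, best_solution, delta=0.01 * np.pi):
    """Rotate each Q-bit toward the corresponding bit of the best solution."""
    direction = np.where(best_solution > solution, 1,
                         np.where(best_solution < solution, -1, 0))
    return np.clip(theta + delta * direction, 0.0, np.pi / 2)

# Example: 10-item knapsack, Q-bits initialized to the equal superposition (theta = pi/4).
theta = np.full(10, np.pi / 4)
best = observe(theta)
for _ in range(100):
    sol = observe(theta)
    # ... evaluate knapsack profit / repair infeasible solutions here,
    #     and keep `best` as the better of `best` and `sol` ...
    theta = rotate(theta, sol, best)
```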
The international personality item pool and the future of public-domain personality measures
Seven experts on personality measurement here discuss the viability of public-domain personality measures, focusing on the International Personality Item Pool (IPIP) as a prototype. Since its inception in 1996, the use of items and scales from the IPIP has increased dramatically. Items from the IPIP have been translated from English into more than 25 other languages. Currently over 80 publications using IPIP scales are listed at the IPIP Web site (http://ipip.ori.org), and the rate of IPIP-related publications has been increasing rapidly. The growing popularity of the IPIP can be attributed to five factors: (1) it is cost free; (2) its items can be obtained instantaneously via the Internet; (3) it includes over 2000 items, all easily available for inspection; (4) scoring keys for IPIP scales are provided; and (5) its items can be presented in any order, interspersed with other items, reworded, translated into other languages, and administered on the World Wide Web without asking permission of anyone. The unrestricted availability of the IPIP raises concerns about possible misuse by unqualified persons, and the freedom of researchers to use the IPIP in idiosyncratic ways raises the possibility of fragmentation rather than scientific unification in personality research.
A Semi-Supervised AUC Optimization Method with Generative Models
This paper presents a semi-supervised learning method for improving the performance of AUC-optimized classifiers by using both labeled and unlabeled samples. In actual binary classification tasks, there is often an imbalance between the numbers of positive and negative samples. For such imbalanced tasks, the area under the ROC curve (AUC) is an effective measure with which to evaluate binary classifiers. The proposed method utilizes generative models to assist the incorporation of unlabeled samples in AUC-optimized classifiers. The generative models provide prior knowledge that helps learn the distribution of unlabeled samples. To evaluate the proposed method in text classification, we employed naive Bayes models as the generative models. Our experimental results using three test collections confirmed that the proposed method provided better classifiers for imbalanced tasks than supervised AUC-optimized classifiers and semi-supervised classifiers trained to maximize the classification accuracy of labeled samples. Moreover, the proposed method improved the effect of using unlabeled samples for AUC optimization especially when we used appropriate generative models.
Malware Detection by HTTPS Traffic Analysis
In order to evade detection by network-traffic analysis, a growing proportion of malware uses the encrypted HTTPS protocol. We explore the problem of detecting malware on client computers based on HTTPS traffic analysis. In this setting, malware has to be detected based on the host IP address, ports, timestamp, and data volume information of TCP/IP packets that are sent and received by all the applications on the client. We develop a scalable protocol that allows us to collect network flows of known malicious and benign applications as training data, and derive a malware-detection method based on neural networks and sequence classification. We study the method's ability to detect known and new, unknown malware in a large-scale empirical study.
A Differential Clapp-VCO in 0.13 $\mu{\rm m}$ CMOS Technology
A new differential voltage-controlled oscillator (VCO) is designed and implemented in a 0.13 μm CMOS 1P8M process. The designed circuit topology is an all-nMOS LC-tank Clapp-VCO using a series-tuned resonator. At a supply voltage of 0.9 V, the output phase noise of the VCO is -110.5 dBc/Hz at 1 MHz offset from the carrier frequency of 18.78 GHz, and the figure of merit is -188.67 dBc/Hz. The core power consumption is 5.4 mW. The tuning range is about 3.43 GHz, from 18.79 to 22.22 GHz, as the control voltage is tuned from 0 to 1.3 V.
Lidar-Based Relative Position Estimation and Tracking for Multi-robot Systems
Relative positioning systems play a vital role in current multirobot systems. We present a self-contained detection and tracking approach, where a robot estimates a distance (range) and an angle (bearing) to another robot using measurements extracted from the raw data provided by two laser range finders. We propose a method based on the detection of circular features with least-squares fitting and filtering out outliers using a map-based selection. We improve the estimate of the relative robot position and reduce its uncertainty by feeding measurements into a Kalman filter, resulting in an accurate tracking system. We evaluate the performance of the algorithm in a realistic indoor environment to demonstrate its robustness and reliability.
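A minimal sketch of the circle-fitting step, assuming an algebraic (Kasa) least-squares fit and omitting the map-based outlier rejection and Kalman-filter tracking, might look like:

```python
# Sketch: fit a circle to a cluster of 2-D laser points and report the detected
# robot's range and bearing in the sensor frame. Not the authors' implementation.
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit: returns center (a, b) and radius r."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)   # c = r^2 - a^2 - b^2
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r

def range_bearing(points):
    """Relative position (range, bearing) of the detected robot."""
    a, b, _ = fit_circle(points)
    return np.hypot(a, b), np.arctan2(b, a)
```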
Exercise motion classification from large-scale wearable sensor data using convolutional neural networks
The ability to accurately identify human activities is essential for developing automatic rehabilitation and sports training systems. In this paper, large-scale exercise motion data obtained from a forearm-worn wearable sensor are classified with a convolutional neural network (CNN). Time-series data consisting of accelerometer and orientation measurements are formatted as images, allowing the CNN to automatically extract discriminative features. A comparative study on the effects of image formatting and different CNN architectures is also presented. The best performing configuration classifies 50 gym exercises with 92.1% accuracy.
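As an illustration only (PyTorch, with an assumed 9-channel, 100-sample window and a small architecture that is not the paper's), the image-style formatting and CNN classifier could be sketched as:

```python
# Sketch: a window of wearable-sensor time series (channels x time) is treated as a
# one-channel "image" and mapped by a small CNN to one of 50 exercise classes.
import torch
import torch.nn as nn

class ExerciseCNN(nn.Module):
    def __init__(self, n_channels=9, window=100, n_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (n_channels // 4) * (window // 4), n_classes)

    def forward(self, x):                      # x: (batch, 1, n_channels, window)
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 8 windows, 9 sensor channels (accelerometer + orientation), 100 samples.
logits = ExerciseCNN()(torch.randn(8, 1, 9, 100))  # -> shape (8, 50)
```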
Diet, exercise, and endothelial function in obese adolescents.
BACKGROUND AND OBJECTIVES Endothelial dysfunction is the first, although reversible, sign of atherosclerosis and is present in obese adolescents. The primary end point of this study was to investigate the influence of a multicomponent treatment on microvascular function. Additional objectives and end points were a reduced BMI SD score, improvements in body composition, exercise capacity, and cardiovascular risk factors, an increase in endothelial progenitor cells (EPCs), and a decrease in endothelial microparticles (EMPs). METHODS We used a quasi-randomized study with 2 cohorts of obese adolescents: an intervention group (n = 33; 15.4 ± 1.5 years, 24 girls and 9 boys) treated residentially with supervised diet and exercise and a usual care group (n = 28; 15.1 ± 1.2 years, 22 girls and 6 boys), treated in an outpatient setting. Changes in body mass, body composition, cardiorespiratory fitness, microvascular endothelial function, and circulating EPCs and EMPs were evaluated after 5 months and at the end of the 10-month program. RESULTS Residential intervention decreased BMI and body fat percentage, whereas it increased exercise capacity (P < .001 after 5 and 10 months). Microvascular endothelial function also improved in the intervention group (P = .04 at 10 months; + 0.59 ± 0.20 compared with + 0.01 ± 0.12 arbitrary units). Furthermore, intervention produced a significant reduction in traditional cardiovascular risk factors, including high-sensitivity C-reactive protein (P = .012 at 10 months). EPCs were increased after 5 months (P = .01), and EMPs decreased after 10 months (P = .004). CONCLUSIONS A treatment regimen consisting of supervised diet and exercise training was effective in improving multiple adolescent obesity-related end points.
Different effect of the hyperons Λ and Ξ on the nuclear core
We demonstrate the different effects of strange impurities (Λ and Ξ) on the static properties of nuclei within the framework of the relativistic mean-field model. Systematic calculations show that the glue-like role of the Λ-hyperon is universal for all Λ-hypernuclei considered. However, the Ξ⁻ hyperon has a glue-like role only for the proton distribution in nuclei, while for the neutron distribution it plays a repulsive role. On the other hand, the Ξ⁰ hyperon attracts the surrounding neutrons and exerts a repulsive force on the protons. Possible explanations of the above observations are discussed.
Ekiden: A Platform for Confidentiality-Preserving, Trustworthy, and Performant Smart Contract Execution
Smart contracts are applications that execute on blockchains. Today they manage billions of dollars in value and motivate visionary plans for pervasive blockchain deployment. While smart contracts inherit the availability and other security assurances of blockchains, however, they are impeded by blockchains’ lack of confidentiality and poor performance. We present Ekiden, a system that addresses these critical gaps by combining blockchains with Trusted Execution Environments (TEEs). Capable of operating on any desired blockchain, Ekiden permits concurrent, off-chain execution of smart contracts within TEE-backed compute nodes, yielding high performance, low cost, and confidentiality for sensitive data. Ekiden enforces a strong set of security and availability properties. By maintaining on-chain state, it achieves consistency, meaning a single authoritative sequence of state transitions, and availability, meaning contracts can survive the failure of compute nodes. Ekiden is anchored in a formal security model expressed as an ideal functionality. We prove the security of the corresponding implemented protocol in the UC framework. Our implementation of Ekiden supports contract development in Rust and the Ethereum Virtual Machine (EVM). We present experiments for applications including machine learning models, poker, and cryptocurrency tokens. Ekiden is designed to support multiple underlying blockchains. We have built one end-to-end instantiation of our system, Ekiden-BT, with a blockchain extending from Tendermint. Ekiden-BT achieves example performance of 600x more throughput and 400x less latency at 1000x less cost than on the Ethereum mainnet. When used with Ethereum as the backing blockchain, Ekiden still costs less than on-chain execution and supports contract confidentiality.
Optimal patient education for cancer pain: a systematic review and theory-based meta-analysis
Previous systematic reviews have found patient education to be moderately efficacious in decreasing the intensity of cancer pain, but variation in results warrants analysis aimed at identifying which strategies are optimal. A systematic review and meta-analysis was undertaken using a theory-based approach to classifying and comparing educational interventions for cancer pain. The reference lists of previous reviews and MEDLINE, PsycINFO, and CENTRAL were searched in May 2012. Studies had to be published in a peer-reviewed English language journal and compare the effect on cancer pain intensity of education with usual care. Meta-analyses used standardized effect sizes (ES) and a random effects model. Subgroup analyses compared intervention components categorized using the Michie et al. (Implement Sci 6:42, 2011) capability, opportunity, and motivation behavior (COM-B) model. Fifteen randomized controlled trials met the criteria. As expected, meta-analysis identified a small-moderate ES favoring education versus usual care (ES, 0.27 [−0.47, −0.07]; P = 0.007) with substantial heterogeneity (I² = 71 %). Subgroup analyses based on the taxonomy found that interventions using “enablement” were efficacious (ES, 0.35 [−0.63, −0.08]; P = 0.01), whereas those lacking this component were not (ES, 0.18 [−0.46, 0.10]; P = 0.20). However, the subgroup effect was nonsignificant (P = 0.39), and heterogeneity was not reduced. Factoring in the variable of individualized versus non-individualized influenced neither efficacy nor heterogeneity. The current meta-analysis follows a trend in using theory to understand the mechanisms of complex interventions. We suggest that future efforts focus on interventions that target patient self-efficacy. Authors are encouraged to report comprehensive details of interventions and methods to inform synthesis, replication, and refinement.
Dimensional effects in controlled structure supported catalysts derived from layered synthetic microstructures. Final progress report for period March 1, 1997 - February 28, 2000
Several heterogeneous catalytic reactions show size dependence, whereby the specific rate changes with the average diameter of supported metal particles in the nanometer range. Geometric arguments relating the size dependence to the relative concentration of active sites on idealized crystal particles cannot account for all the observed results. In an effort to overcome the geometric limitations of supported particles, the authors had previously created novel supported metal catalysts called Layered Synthetic Microstructures (LSMs) by the physical vapor deposition of alternating thin films of Ni and silica onto 3-in. Si wafer substrates. Subsequent lithography followed by wet etching left an array of micron-sized towers. Relative catalytic rate measured for ethane hydrogenolysis showed that LSMs produced similar size effects as previously found with supported particles. In the current work, experiments were accomplished using LSMs with a wider range of metals (Ni, Pt, Ir, Rh, Ru, etc.) and supports (SiO₂ and Al₂O₃). Dry etching with Ar ions was used. It was found that a distinction can be made between several types of size effects due to the well-defined geometry of LSM catalysts. Rates in some systems are truly size dependent, while in other systems rates are clearly dependent on the metal-support interface. In addition, a lift-off process was developed for fabrication of all kinds of LSMs without resorting to either wet or dry etch techniques.
On Bitcoin and red balloons
In this letter we present a brief report of our recent research on information distribution mechanisms in networks [Babaioff et al. 2011]. We study scenarios in which all nodes that become aware of the information compete for the same prize, and thus have an incentive not to propagate information. Examples of such scenarios include the 2009 DARPA Network Challenge (finding red balloons), and raffles. We give special attention to one application domain, namely Bitcoin, a decentralized electronic currency system. We propose reward schemes that will remedy an incentives problem in Bitcoin in a Sybil-proof manner, with little payment overhead.
DNA Barcoding of Marine Copepods: Assessment of Analytical Approaches to Species Identification
More than 2,500 species of copepods (Class Maxillopoda; Subclass Copepoda) occur in the marine planktonic environment. The exceptional morphological conservation of the group, with numerous sibling species groups, makes the identification of species challenging, even for expert taxonomists. Molecular approaches to species identification have allowed rapid detection, discrimination, and identification of species based on DNA sequencing of single specimens and environmental samples. Despite the recent development of diverse genetic and genomic markers, the barcode region of the mitochondrial cytochrome c oxidase subunit I (COI) gene remains a useful and - in some cases - unequaled diagnostic character for species-level identification of copepods. This study reports 800 new barcode sequences for 63 copepod species not included in any previous study and examines the reliability and resolution of diverse statistical approaches to species identification based upon a dataset of 1,381 barcode sequences for 195 copepod species. We explore the impact of missing data (i.e., species not represented in the barcode database) on the accuracy and reliability of species identifications. Among the tested approaches, the best close match analysis resulted in accurate identification of all individuals to species, with no errors (false positives), and out-performed automated tree-based or BLAST based analyses. This comparative analysis yields new understanding of the strengths and weaknesses of DNA barcoding and confirms the value of DNA barcodes for species identification of copepods, including both individual specimens and bulk samples. Continued integrative morphological-molecular taxonomic analysis is needed to produce a taxonomically-comprehensive database of barcode sequences for all species of marine copepods.
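A best-close-match style assignment rule can be sketched as follows (a simplified illustration, not the study's exact pipeline; the distance threshold below is an assumed placeholder):

```python
# Sketch of a "best close match" rule: a query specimen is assigned the species of its
# closest reference barcode only if that match falls within a distance threshold; ties
# among equally close references from different species are reported as ambiguous.
def best_close_match(query_dists, ref_species, threshold=0.01):
    """query_dists: distances from the query to each reference; ref_species: their labels."""
    d_min = min(query_dists)
    if d_min > threshold:
        return "no identification"          # nearest reference is too distant
    closest = {sp for d, sp in zip(query_dists, ref_species) if d == d_min}
    return closest.pop() if len(closest) == 1 else "ambiguous"
```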
Self-esteem, locus of control, hippocampal volume, and cortisol regulation in young and old adulthood
Self-esteem, the value we place on ourselves, has been associated with effects on health, life expectancy, and life satisfaction. Correlated with self-esteem is internal locus of control, the individual's perception of being in control of his or her outcomes. Recently, variations in self-esteem and internal locus of control have been shown to predict the neuroendocrine cortisol response to stress. Cumulative exposure to high levels of cortisol over the lifetime is known to be related to hippocampal atrophy. We therefore examined hippocampal volume and cortisol regulation, to investigate potential biological mechanisms related to self-esteem. We investigated 16 healthy young subjects (age range 20-26 years) and 23 healthy elderly subjects (age range 60-84 years). The young subjects were exposed to a psychosocial stress task, while the elderly subjects were assessed for their basal cortisol regulation. Structural Magnetic Resonance Images were acquired from all subjects, and volumetric analyses were performed on medial temporal lobe structures, and whole brain gray matter. Standardized neuropsychological assessments in the elderly were performed to assess levels of cognitive performance, and to exclude the possibility of neurodegenerative disease. Self-esteem and internal locus of control were significantly correlated with hippocampal volume in both young and elderly subjects. In the young, the cortisol response to the psychosocial stress task was significantly correlated with both hippocampal volume and levels of self-esteem and locus of control, while in the elderly, these personality traits moderated age-related patterns of cognitive decline, cortisol regulation, and global brain volume decline.
A fully integrated 77GHz FMCW radar system in 65nm CMOS
Millimeter-wave anti-collision radars have been widely investigated in advanced CMOS technologies recently. This paper presents a fully integrated 77GHz FMCW radar system in 65nm CMOS. The FMCW radar transmits a continuous wave which is triangularly modulated in frequency and receives the wave reflected from objects. As illustrated in Fig. 11.2.1, for a moving target the received frequency is shifted (i.e. Doppler shift), resulting in two different offset frequencies f+ and f− for the falling and rising ramps. Denoting the modulation range and period as B and Tm, respectively, we can derive the distance R and the relative velocity VR as listed in Fig. 11.2.1, where fc represents the center frequency and c the speed of light.
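Under one common convention (triangular sweep of bandwidth B over period Tm, so each ramp lasts Tm/2; the paper's exact definitions of B, Tm, f+, and f− may differ), the range and relative-velocity relations can be written out as:

```python
# Worked sketch of the standard FMCW relations: the range beat is the mean of the two
# offset frequencies and the Doppler shift is half their difference.
C = 3.0e8  # speed of light, m/s

def fmcw_range_velocity(f_plus, f_minus, B, Tm, fc):
    f_range   = (f_plus + f_minus) / 2.0        # beat frequency due to distance
    f_doppler = (f_plus - f_minus) / 2.0        # beat frequency due to relative motion
    R  = C * (Tm / 2.0) * f_range / (2.0 * B)   # i.e. R  = c * Tm * (f+ + f-) / (8 * B)
    VR = C * f_doppler / (2.0 * fc)             # i.e. VR = c * (f+ - f-) / (4 * fc)
    return R, VR
```

The sign of VR depends on which ramp is labeled f+; the derivation above assumes the convention stated in the lead-in.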
Aesthetic Surgery of the Female Genitalia
The first cosmetic vaginal surgery was reported in the literature by Hodgkinson and Hait in 1984 [1]. Recently, there has been an increased interest in cosmetic surgical procedures of the female genitalia [2-5]. The National Health Service (NHS) reported a doubling of the number of labia reductions carried out in the UK in 2004 compared to 1998 [6]. The indications for labia reduction are generally organized in the literature into three categories: women who suffer from physical or functional complaints associated with a genital abnormality, women without physical complaints who want surgical intervention for cosmetic reasons, and women who seek surgery for both functional and aesthetic reasons [7]. In the physical complaint group, specific complaints included vulvar pain and irritation when riding a bike, horseback riding, or wearing tight underwear or clothes, and superficial dyspareunia. Enlarged labia can significantly impair a woman's quality of life, causing constant irritation, difficulty maintaining hygiene, discomfort or embarrassment with clothing, and impairment or pain with exercise or sexual activity [8, 9]. Many women feel emotional embarrassment with enlarged labia. Women often compare themselves with others, and protrusion of the labia minora past the labia majora is considered by many women to be unattractive [10]. Miklos et al. found that 93% of patients sought surgery for purely personal reasons and 7% admitted to being influenced by a male or female partner, spouse, or friend [7].
Edge Adaptive Image Steganography Based on LSB Matching Revisited
The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.
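The region-selection idea can be illustrated in a few lines of code: keep only pixel pairs whose difference exceeds a threshold, and relax the threshold until the payload fits. The threshold schedule, pairing scheme, and capacity estimate below are illustrative assumptions, not the parameters of the scheme described above.

```python
import numpy as np

def select_embedding_pairs(cover, n_bits, thresholds=(32, 24, 16, 8, 4, 2, 0)):
    """Illustrative edge-adaptive region selection.

    Scans horizontally adjacent pixel pairs and keeps only those whose absolute
    difference exceeds a threshold t; t is relaxed step by step until enough
    pairs are available to hold roughly n_bits of payload (2 bits per pair here).
    The threshold schedule and capacity estimate are assumptions for illustration.
    """
    img = cover.astype(np.int32)
    pairs = [(r, c) for r in range(img.shape[0]) for c in range(0, img.shape[1] - 1, 2)]
    for t in thresholds:
        selected = [(r, c) for (r, c) in pairs if abs(img[r, c] - img[r, c + 1]) >= t]
        if 2 * len(selected) >= n_bits:
            return t, selected
    return 0, pairs  # fall back to all pairs if the payload is very large

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    t, regions = select_embedding_pairs(cover, n_bits=1000)
    print(f"threshold {t}, {len(regions)} candidate pairs")
```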
Instability, stabilization, and formulation of liquid protein pharmaceuticals.
One of the most challenging tasks in the development of protein pharmaceuticals is to deal with physical and chemical instabilities of proteins. Protein instability is one of the major reasons why protein pharmaceuticals are administered traditionally through injection rather than taken orally like most small chemical drugs. Protein pharmaceuticals usually have to be stored under cold conditions or freeze-dried to achieve an acceptable shelf life. To understand and maximize the stability of protein pharmaceuticals or any other usable proteins such as catalytic enzymes, many studies have been conducted, especially in the past two decades. These studies have covered many areas such as protein folding and unfolding/denaturation, mechanisms of chemical and physical instabilities of proteins, and various means of stabilizing proteins in aqueous or solid state and under various processing conditions such as freeze-thawing and drying. This article reviews these investigations and achievements in recent years and discusses the basic behavior of proteins, their instabilities, and stabilization in aqueous state in relation to the development of liquid protein pharmaceuticals.
Electrical engineering in the Postwar world: I — Influence of wartime developments
It is not the purpose of this article to deal with any specific branch of electrical engineering or science, nor to present a detailed discussion of types of improvement in equipment design, since these various fields of activity will be covered adequately by specialists in later articles in this series. Rather I wish to draw a general picture of what we may hope will happen in the future.
Adoption of Telemedicine - Challenges and Opportunities
The U.S. health care system is plagued by rising costs and limited access. While the cost of care is increasing faster than the rate of inflation, people living in rural areas have very limited access to quality health care due to a shortage of physicians and facilities in these areas. Information and communication technologies in general, and telemedicine in particular, offer great promise to extend quality care to underserved rural communities at an affordable cost. However, adoption of telemedicine among the various stakeholders of the health care system has not been very encouraging. Based on an analysis of the extant research literature, this study identifies critical factors that impede the adoption of telemedicine, and offers suggestions to mitigate these challenges.
Dual-polarized broad-band microstrip antennas fed by proximity coupling
This work presents a novel broad-band dual-polarized microstrip patch antenna, which is fed by proximity coupling. A microstrip line with a slotted ground plane is used at two ports to feed the patch antenna. By using only one patch, the prototype antenna yields bandwidths of 22% and 21.3% at input ports 1 and 2, respectively. The isolation between the two input ports is below -34 dB across the bandwidth. Good broadside radiation patterns are observed, and the cross-polar levels are below -21 dB in both the E and H planes. Due to its simple structure, it is easy to form arrays by using this antenna as an element.
The kinetics and mechanism of the silver ion-promoted hydrolysis of thiolurethanes in aqueous solution
The hydrolysis of thiolurethanes p-RC6H4NHCOSEt (1, R = Cl, H, OMe) in dilute aqueous acid is promoted by silver ions. The promoted hydrolysis involves the slow decomposition of low concentrations of complexes of 2 Ag+:1-thiolurethane stoichiometry, which are formed from relatively stable 1:1-complexes. The (positively charged) 1:1-complex (2) undergoes N-H dissociation to give a formally neutral 1:1-complex, 3. The 2:1-complexes are formed from both 2 and 3. The formation constants, K1, and the acid dissociation constants, Ka, of (2, R = Cl, H, MeO) have been obtained from a kinetic analysis using a range of silver and hydrogen ion concentrations, and different temperatures. The results suggest that the 2:1-complexes undergo nucleophilic attack by water in an overall A2-like process. Thiolurethanes 1 are ca. 10^5-fold more reactive in Ag+ ion-promoted hydrolysis than are the corresponding ethyl thiolbenzoates.
MultiSE: multi-path symbolic execution using value summaries
Dynamic symbolic execution (DSE) has been proposed to effectively generate test inputs for real-world programs. Unfortunately, DSE techniques do not scale well for large realistic programs, because often the number of feasible execution paths of a program increases exponentially with the increase in the length of an execution path. In this paper, we propose MultiSE, a new technique for merging states incrementally during symbolic execution, without using auxiliary variables. The key idea of MultiSE is based on an alternative representation of the state, where we map each variable, including the program counter, to a set of guarded symbolic expressions called a value summary. MultiSE has several advantages over conventional DSE and conventional state merging techniques: value summaries enable sharing of symbolic expressions and path constraints along multiple paths and thus avoid redundant execution. MultiSE does not introduce auxiliary symbolic variables, which enables it to 1) make progress even when merging values not supported by the constraint solver, 2) avoid expensive constraint solver calls when resolving function calls and jumps, and 3) carry out most operations concretely. Moreover, MultiSE updates value summaries incrementally at every assignment instruction, which makes it unnecessary to identify the join points and to keep track of variables to merge at join points. We have implemented MultiSE for JavaScript programs in a publicly available open-source tool. Our evaluation of MultiSE on several programs shows that 1) value summaries are an effective technique for taking advantage of the sharing of values along multiple execution paths, 2) MultiSE can run significantly faster than traditional dynamic symbolic execution, and 3) MultiSE saves a substantial number of state merges compared to conventional state-merging techniques.
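The value-summary representation can be sketched as a mapping from a variable to a set of (guard, symbolic expression) pairs that is updated at each assignment and merged across paths. The toy code below uses strings for symbolic expressions and frozensets of predicate names for guards; it is a simplified illustration of the idea, not the MultiSE implementation.

```python
# Toy sketch of a MultiSE-style "value summary": each variable maps to a set of
# (guard, symbolic expression) pairs. Guards are modelled here as frozensets of
# atomic predicate strings; real implementations use proper path constraints.

def update(summary, guard, expr):
    """Record that the variable holds `expr` whenever `guard` is true."""
    return [(g, e) for (g, e) in summary if g != guard] + [(guard, expr)]

def merge(summaries):
    """Union the guarded values coming from several paths into one summary."""
    merged = {}
    for summary in summaries:
        for guard, expr in summary:
            merged.setdefault(expr, set()).add(guard)
    return [(guards, expr) for expr, guards in merged.items()]

if __name__ == "__main__":
    # if (b) x = y + 1 else x = y - 1  -- two paths, one incremental summary for x
    then_branch = update([], frozenset({"b"}), "y+1")
    else_branch = update([], frozenset({"!b"}), "y-1")
    print(merge([then_branch, else_branch]))
```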
A machine learning approach to predict movie box-office success
Predicting society's reaction to a new product in the sense of popularity and adoption rate has become an emerging field of data analysis. The motion picture industry is a multi-billion-dollar business, and a massive amount of data related to movies is available over the internet. This study proposes a decision support system for the movie investment sector using machine learning techniques. This research helps investors associated with this business avoid investment risks. The system predicts an approximate success rate of a movie based on its profitability by analyzing historical data from different sources like IMDb, Rotten Tomatoes, Box Office Mojo and Metacritic. Using Support Vector Machine (SVM), Neural Network and Natural Language Processing techniques, the system predicts a movie's box office profit based on pre-release and post-release features. This paper shows that the Neural Network gives an accuracy of 84.1% for pre-release features and 89.27% for all features, while the SVM has 83.44% and 88.87% accuracy for pre-release features and all features, respectively, when one-away prediction is considered. Moreover, we find that budget, IMDb votes and number of screens are the most important features, playing a vital role in predicting a movie's box-office success.
Effective field theory for nuclear physics
I review the current status of the application of effective field theory to nuclear physics, and its present implications for nuclear astrophysics.
Leveraging mid-level deep representations for predicting face attributes in the wild
Predicting facial attributes from faces in the wild is very challenging due to pose and lighting variations in the real world. The key to this problem is to build proper feature representations to cope with these unfavourable conditions. Given the success of Convolutional Neural Network (CNN) in image classification, the high-level CNN feature, as an intuitive and reasonable choice, has been widely utilized for this problem. In this paper, however, we consider the mid-level CNN features as an alternative to the high-level ones for attribute prediction. This is based on the observation that face attributes are different: some of them are locally oriented while others are globally defined. Our investigations reveal that the mid-level deep representations outperform the prediction accuracy achieved by the (fine-tuned) high-level abstractions. We empirically demonstrate that the mid-level representations achieve state-of-the-art prediction performance on CelebA and LFWA datasets. Our investigations also show that by utilizing the mid-level representations one can employ a single deep network to achieve both face recognition and attribute prediction.
Music and social bonding: “self-other” merging and neurohormonal mechanisms
It has been suggested that a key function of music during its development and spread amongst human populations was its capacity to create and strengthen social bonds amongst interacting group members. However, the mechanisms by which this occurs have not been fully discussed. In this paper we review evidence supporting two thus far independently investigated mechanisms for this social bonding effect: self-other merging as a consequence of inter-personal synchrony, and the release of endorphins during exertive rhythmic activities including musical interaction. In general, self-other merging has been experimentally investigated using dyads, which provide limited insight into large-scale musical activities. Given that music can provide an external rhythmic framework that facilitates synchrony, explanations of social bonding during group musical activities should include reference to endorphins, which are released during synchronized exertive movements. Endorphins (and the endogenous opioid system (EOS) in general) are involved in social bonding across primate species, and are associated with a number of human social behaviors (e.g., laughter, synchronized sports), as well as musical activities (e.g., singing and dancing). Furthermore, passively listening to music engages the EOS, so here we suggest that both self-other merging and the EOS are important in the social bonding effects of music. In order to investigate possible interactions between these two mechanisms, future experiments should recreate ecologically valid examples of musical activities.
Modified β-Cyclodextrin Inclusion Complex to Improve the Physicochemical Properties of Albendazole. Complete In Vitro Evaluation and Characterization
The potential use of natural cyclodextrins and their synthetic derivatives has been studied extensively in pharmaceutical research and development to modify certain properties of hydrophobic drugs. The ability of these host molecules to include guest molecules within their cavities notably improves the physicochemical properties of poorly soluble drugs, such as albendazole, the first-choice drug to treat gastrointestinal helminthic infections. Thus, the aim of this work was to synthesize a beta cyclodextrin citrate derivative, to analyze its ability to form complexes with albendazole and to evaluate its solubility and dissolution rate. The synthesis progress of the cyclodextrin derivative was followed by electrospray mass spectrometry and the acid-base titration of the product. The derivative exhibited an important drug affinity. Nuclear magnetic resonance experiments demonstrated that the tail and the aromatic ring of the drug were inside the cavity of the cyclodextrin derivative. The inclusion complex was prepared by spray drying and fully characterized. The drug dissolution rate displayed exceptional results, achieving 100% drug release after 20 minutes. The studies indicated that the inclusion complex with the cyclodextrin derivative remarkably improved the physicochemical properties of albendazole, making it a suitable excipient for the design of oral dosage forms.
Algorithms for Hyper-Parameter Optimization
Several recent advances to the state of the art in image classification benchmarks have come from better configurations of existing techniques rather than novel approaches to feature learning. Traditionally, hyper-parameter optimization has been the job of humans because they can be very efficient in regimes where only a few trials are possible. Presently, computer clusters and GPU processors make it possible to run more trials and we show that algorithmic approaches can find better results. We present hyper-parameter optimization results on tasks of training neural networks and deep belief networks (DBNs). We optimize hyper-parameters using random search and two new greedy sequential methods based on the expected improvement criterion. Random search has been shown to be sufficiently efficient for learning neural networks for several datasets, but we show it is unreliable for training DBNs. The sequential algorithms are applied to the most difficult DBN learning problems from [1] and find significantly better results than the best previously reported. This work contributes novel techniques for making response surface models P (y|x) in which many elements of hyper-parameter assignment (x) are known to be irrelevant given particular values of other elements.
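Random search, the simplest of the approaches mentioned, needs only a sampler over the hyper-parameter space and an objective to minimize. The search space, objective and trial budget in the sketch below are made-up placeholders; the sequential expected-improvement methods from the paper are not reproduced here.

```python
import math
import random

# Minimal random-search loop over a made-up hyper-parameter space. The space,
# objective and trial budget are illustrative assumptions only.

def sample_config(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-5, -1),   # log-uniform
        "n_hidden": rng.choice([64, 128, 256, 512]),
        "dropout": rng.uniform(0.0, 0.8),
    }

def objective(cfg):
    # Stand-in for the validation error obtained after training with this configuration.
    return (math.log10(cfg["learning_rate"]) + 3) ** 2 + cfg["dropout"] + 0.001 * cfg["n_hidden"]

def random_search(n_trials=50, seed=0):
    rng = random.Random(seed)
    trials = [(objective(cfg), cfg) for cfg in (sample_config(rng) for _ in range(n_trials))]
    return min(trials, key=lambda t: t[0])

if __name__ == "__main__":
    best_loss, best_cfg = random_search()
    print(best_loss, best_cfg)
```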
Random Erasing Data Augmentation
In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
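The erasing step itself is only a few lines: sample an area and aspect ratio, pick a location, and fill the rectangle with random values. The probability and sampling ranges below follow commonly used defaults and are assumptions here rather than the paper's exact settings.

```python
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3), rng=None):
    """Erase a random rectangle of `img` (H x W x C) with random values.

    The probability p, area range and aspect-ratio range follow commonly used
    defaults and are assumptions here, not necessarily the paper's settings.
    """
    rng = rng or np.random.default_rng()
    if rng.random() > p:
        return img
    h, w = img.shape[:2]
    for _ in range(100):  # retry until the sampled rectangle fits inside the image
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh, ew = int(round(np.sqrt(area * aspect))), int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top, left = rng.integers(0, h - eh), rng.integers(0, w - ew)
            img = img.copy()
            img[top:top + eh, left:left + ew] = rng.integers(0, 256, size=(eh, ew) + img.shape[2:])
            return img
    return img

if __name__ == "__main__":
    image = np.zeros((32, 32, 3), dtype=np.uint8)
    print(random_erasing(image, p=1.0).sum() > 0)  # some pixels were filled with random values
```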
Thinking in Terms of Design Decisions When Developing Maturity Models
To measure dedicated aspects of “maturity”, a range of maturity models have been developed in the field of information systems by practitioners and academics over the past years. Despite its broad proliferation, the concept has not escaped criticism. Unnecessary bureaucracy, poor theoretical foundation, and the impression of a falsified certainty to achieve success are a few examples. As there is a significant lack of knowledge on how to design theoretically sound and widely accepted maturity models, in this paper, the author opens the discussion on design decisions when developing these models. Based on analogy and informed arguments, the author synthesizes a generic but adjuvant framework that consists of five common design steps and eighteen decision parameters that help practitioners as well as researchers in the development of maturity models.
Deterministic RF nulling in phased arrays for the next generation of radio telescopes
A requirement of the next generation of radio telescopes for astronomy is the ability to cope with the ever-increasing problem of Radio Frequency Interference (RFI). Unlike the conventional fixed parabolic receivers used currently in astronomy, the application of phased-array beamforming techniques opens the possibility to spatially null RFI in the RF domain prior to signal digitisation. This paper presents results from the second phased-array experimental demonstrator, the One Square Metre Array, on calibration and RF nulling performance. The approach is to deterministically null known RFI in the RF beamforming domain and adaptively remove the remaining RFI in the digital beamforming domain. A novel array calibration technique called the Multi-Element Phase toggle technique (MEP) is presented, which allows a fast and very accurate calibration of wide-band phased-array antennas. Array calibration is shown to determine the extent to which RFI can be removed, by experimental verification of simulated null depths.
The metabolic syndrome is frequent in Klinefelter's syndrome and is associated with abdominal obesity and hypogonadism.
OBJECTIVE Klinefelter's syndrome is associated with an increased prevalence of diabetes, but the pathogenesis is unknown. Accordingly, the aim of this study was to investigate measures of insulin sensitivity, the metabolic syndrome, and sex hormones in patients with Klinefelter's syndrome and an age-matched control group. RESEARCH DESIGN AND METHODS: In a cross-sectional study, we examined 71 patients with Klinefelter's syndrome, of whom 35 received testosterone treatment, and 71 control subjects. Body composition was evaluated using dual-energy X-ray absorptiometry scans. Fasting blood samples were analyzed for sex hormones, plasma glucose, insulin, C-reactive protein (CRP), and adipocytokines. We analyzed differences between patients with untreated Klinefelter's syndrome and control subjects and subsequently analyzed differences between testosterone-treated and untreated Klinefelter's syndrome patients. RESULTS Of the patients with Klinefelter's syndrome, 44% had metabolic syndrome (according to National Cholesterol Education Program/Adult Treatment Panel III criteria) compared with 10% of control subjects. Insulin sensitivity (assessed by homeostasis model assessment 2 modeling), androgen, and HDL cholesterol levels were significantly decreased, whereas total fat mass and LDL cholesterol, triglyceride, CRP, leptin, and fructosamine levels were significantly increased in untreated Klinefelter's syndrome patients. In treated Klinefelter's syndrome patients, LDL cholesterol and adiponectin were significantly decreased, whereas no difference in body composition was found in comparison with untreated Klinefelter's syndrome patients. Multivariate analyses showed that truncal fat was the major determinant of metabolic syndrome and insulin sensitivity. CONCLUSIONS The prevalence of metabolic syndrome was greatly increased, whereas insulin sensitivity was decreased in Klinefelter's syndrome. Both correlated with truncal obesity. Hypogonadism in Klinefelter's syndrome may cause an unfavorable change in body composition, primarily through increased truncal fat and decreased muscle mass. Testosterone treatment in Klinefelter's syndrome only partly corrected the unfavorable changes observed in untreated Klinefelter's syndrome, perhaps due to insufficient testosterone doses.
Effect of errors in the sequence of optical densities from the Roche AMPLICOR HIV-1 MONITOR assay on the validity of assay results.
Specifications for the AMPLICOR HIV-1 MONITOR kit indicate that the results are invalid if the optical densities (ODs) from the PCR-amplified sample that are between 0.1 and 2.3 units are out of sequence. However, among 11,904 assays, results were biased only when ODs of 0.2 to 2.0 units were out of sequence, reducing the rate of invalid results from 3.2 to 0.59%.
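The out-of-sequence rule amounts to a monotonicity check restricted to ODs inside a band. The sketch below parameterizes the band so that the original 0.1-2.3 specification can be compared with the 0.2-2.0 band the study found to matter; the direction of the expected ordering and the example readings are illustrative assumptions.

```python
def ods_out_of_sequence(ods, low, high):
    """Return True if the optical densities within [low, high] break monotonic order.

    Illustrative only: the assay reads a dilution series whose ODs are expected to
    decrease monotonically; the expected direction and the exact band come from the
    kit's specification and are passed in as parameters here.
    """
    window = [od for od in ods if low <= od <= high]
    return any(a < b for a, b in zip(window, window[1:]))

if __name__ == "__main__":
    # Band from the original specification (0.1-2.3) vs. the band the study found
    # to actually matter (0.2-2.0); the OD values themselves are made up.
    readings = [2.5, 1.9, 2.1, 0.8, 0.15, 0.05]
    print(ods_out_of_sequence(readings, 0.1, 2.3))  # True: 1.9 followed by 2.1
    print(ods_out_of_sequence(readings, 0.2, 2.0))  # False: the 2.1 falls outside the band
```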
Using Images to Improve Machine-Translating E-Commerce Product Listings
In this paper we study the impact of using images to machine-translate user-generated e-commerce product listings. We study how a multi-modal Neural Machine Translation (NMT) model compares to two text-only approaches: a conventional state-of-the-art attentional NMT and a Statistical Machine Translation (SMT) model. User-generated product listings often do not constitute grammatical or well-formed sentences. More often than not, they consist of the juxtaposition of short phrases or keywords. We train our models end-to-end as well as use text-only and multimodal NMT models for re-ranking n-best lists generated by an SMT model. We qualitatively evaluate our user-generated training data and also analyse how adding synthetic data impacts the results. We evaluate our models quantitatively using BLEU and TER and find that (i) additional synthetic data has a general positive impact on text-only and multi-modal NMT models, and that (ii) using a multi-modal NMT model for re-ranking n-best lists improves TER significantly across different n-
A Hebbian/Anti-Hebbian Neural Network for Linear Subspace Learning: A Derivation from Multidimensional Scaling of Streaming Data
Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
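The flavour of such a network can be sketched with a feedforward Hebbian matrix W, a lateral anti-Hebbian matrix M, and output activity computed as the fixed point of the recurrent dynamics. The specific update rules, learning rates, and data below are illustrative assumptions and do not reproduce the authors' derivation from the multidimensional scaling cost.

```python
import numpy as np

def hebbian_antihebbian(X, k, eta=0.01, epochs=5, rng=None):
    """Sketch of an online subspace learner with local Hebbian/anti-Hebbian rules."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(k, d))   # Hebbian feedforward weights
    M = np.zeros((k, k))                     # anti-Hebbian lateral weights
    for _ in range(epochs):
        for x in X:
            # output activity: fixed point of the recurrent dynamics y = W x - M y
            y = np.linalg.solve(np.eye(k) + M, W @ x)
            # local plasticity: Hebbian for W (with an Oja-style decay),
            # anti-Hebbian for M (no self-inhibition)
            W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
            M += eta * (np.outer(y, y) - M)
            np.fill_diagonal(M, 0.0)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    latent = rng.normal(size=(2000, 2))                 # 2-D signal ...
    mixing = rng.normal(size=(2, 10)) / np.sqrt(10)     # ... embedded in 10-D
    X = latent @ mixing + 0.05 * rng.normal(size=(2000, 10))
    W = hebbian_antihebbian(X, k=2)
    print(np.round(np.linalg.norm(W, axis=1), 2))       # learned filters, roughly unit norm
```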
A deep reinforcement learning based framework for power-efficient resource allocation in cloud RANs
Cloud Radio Access Networks (RANs) have become a key enabling technique for the next generation (5G) wireless communications, which can meet requirements of massively growing wireless data traffic. However, resource allocation in cloud RANs still needs to be further improved in order to reach the objective of minimizing power consumption and meeting demands of wireless users over a long operational period. Inspired by the success of Deep Reinforcement Learning (DRL) on solving complicated control problems, we present a novel DRL-based framework for power-efficient resource allocation in cloud RANs. Specifically, we define the state space, action space and reward function for the DRL agent, apply a Deep Neural Network (DNN) to approximate the action-value function, and formally formulate the resource allocation problem (in each decision epoch) as a convex optimization problem. We evaluate the performance of the proposed framework by comparing it with two widely-used baselines via simulation. The simulation results show it can achieve significant power savings while meeting user demands, and it can well handle highly dynamic cases.
Fractal analysis of subchondral bone changes of the hand in rheumatoid arthritis
Rheumatoid arthritis (RA) is a chronic and systemic inflammatory disorder. Conventional radiography, a widely available and cost-effective examination method, remains the standard of reference for the detection and quantification of joint involvement in RA. Fractal dimension (FD) of the trabecular bone structure has been proven to correlate with the bone's physical properties. The present study was designed to use fractal analysis to validate radiographic changes in the hands of RA patients. This study retrospectively evaluated the hand radiographs of 108 subjects. Fifty-four patients were suffering from RA, of whom 18 were men and 36 were women. Their ages ranged from 25 to 90 years. The hand radiographs of 54 healthy patients, 18 men and 36 women (age range 23-88 years), were used as the control group. Bone structure value (BSV) is a critical parameter for the assessment and analysis of bone microarchitecture. The BSVs were calculated over the fractal dimension using the Brownian motion. The BSV calculated for each ROI showed a significant difference in ROI5 (0.210 ± 0.045), ROI6 (0.186 ± 0.066), and ROI11 (0.201 ± 0.056) in patients with RA, in comparison to the control group (P < 0.05). A significant correlation was observed between anti-CCP and ROI4, ROI5, ROI6, ROI9, and ROI12 in seropositive RA patients (post hoc test (Bonferroni), P < 0.001). This study demonstrates that the bone textural image analysis technique can be used to quantify the radiographic changes in RA hands, based on comparisons of FDs.
The effect of high-intensity progressive resistance training on adiposity in children: a randomized controlled trial
Background: High-intensity progressive resistance training (PRT) improves adiposity and metabolic risk in adults, but has not been investigated in children within a randomized controlled trial (RCT). Objective: We hypothesized that high-intensity PRT (8 weeks, twice a week) would decrease central adiposity in children, as assessed via waist circumference. Methods (Design/Setting/Participants): Concealed randomization stratified by age and gender was used to allocate rural New Zealand school students to the wait-list control or PRT group. Intervention: Participants were prescribed two sets (eight repetitions per set) of 11 exercises targeting all the major muscle groups at high intensity. Primary Outcome: Waist circumference; secondary outcomes included whole body fat, muscular fitness (one repetition maximum), cardiorespiratory fitness (peak oxygen consumption during a treadmill test), lipids, insulin sensitivity and fasting glucose. Results: Of the 78 children (32 girls and 46 boys; mean age 12.2 (1.3) years), 51% were either overweight (33%) or obese (18%). High-intensity PRT significantly improved waist circumference (mean change PRT −0.8 (2.2) cm vs +0.5 (1.7) cm control; F=7.59, P=0.008), fat mass (mean change PRT +0.2 (1.4) kg vs +1.0 (1.2) kg control; F=6.00, P=0.017), percent body fat (mean change PRT −0.3 (1.8)% vs +1.2 (2.1)% control; F=9.04, P=0.004), body mass index (mean change PRT −0.01 (0.8) kg m^-2 vs +0.4 (0.7) kg m^-2 control; F=6.02, P=0.017), upper body strength (mean change PRT +11.6 (6.1) kg vs +2.9 (3.7) kg control; F=48.6, P<0.001) and lower body strength (mean change PRT +42.9 (26.6) kg vs +28.5 (26.6) kg control; F=4.72, P=0.034) compared to the control group. Waist circumference decreased the most in those with the greatest baseline relative strength (r=−0.257, P=0.036), and greatest relative (r=−0.400, P=0.001) and absolute (r=0.340, P=0.006) strength gains during the intervention. Conclusion: Isolated high-intensity PRT significantly improves central and whole body adiposity in association with muscle strength in normal-weight and overweight children. The clinical relevance and sustainability of these changes in adiposity should be addressed in future long-term studies.
Relation Schema Induction using Tensor Factorization with Side Information
Given a set of documents from a specific domain (e.g., medical research journals), how do we automatically build a Knowledge Graph (KG) for that domain? Automatic identification of relations and their schemas, i.e., type signature of arguments of relations (e.g., undergo(Patient, Surgery)), is an important first step towards this goal. We refer to this problem as Relation Schema Induction (RSI). In this paper, we propose Schema Induction using Coupled Tensor Factorization (SICTF), a novel tensor factorization method for relation schema induction. SICTF factorizes Open Information Extraction (OpenIE) triples extracted from a domain corpus along with additional side information in a principled way to induce relation schemas. To the best of our knowledge, this is the first application of tensor factorization for the RSI problem. Through extensive experiments on multiple real-world datasets, we find that SICTF is not only more accurate than state-of-the-art baselines, but also significantly faster (about 14x faster).
Intrusion Detection in Cyber-Physical Systems: Techniques and Challenges
Cyber-physical systems (CPSs) integrate computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. CPS was identified as one of the eight research priority areas in the August 2007 report of the President's Council of Advisors on Science and Technology, as CPS will be the core component of many critical infrastructures and industrial control systems in the near future. However, a variety of random failures and cyber attacks exist in CPS, which greatly restrict their growth. Fortunately, an intrusion detection mechanism can be effective in protecting CPS. When a misbehavior is found by the intrusion detector, the appropriate action can be taken immediately so that any harm to the system will be minimized. As CPSs are yet to be defined universally, the question of how to apply the intrusion detection mechanism remains open at present. As a result, this paper discusses how to appropriately apply the intrusion detection mechanism to CPS. By examining the unique properties of CPS, it first defines the specific requirements. Then, the design outline of the intrusion detection mechanism in CPS is introduced in terms of the layers of the system and specific detection techniques. Finally, some significant research problems are identified to guide subsequent studies.
Advances in natural language processing
Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today’s researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.
Smart exploration in reinforcement learning using absolute temporal difference errors
Exploration is still one of the crucial problems in reinforcement learning, especially for agents acting in safety-critical situations. We propose a new directed exploration method, based on a notion of state controllability. Intuitively, if an agent wants to stay safe, it should seek out states where the effects of its actions are easier to predict; we call such states more controllable. Our main contribution is a new notion of controllability, computed directly from temporal-difference errors. Unlike other existing approaches of this type, our method scales linearly with the number of state features, and is directly applicable to function approximation. Our method converges to correct values in the policy evaluation setting. We also demonstrate significantly faster learning when this exploration strategy is used in large control problems.
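A tabular caricature of the idea is easy to write down: maintain a running estimate of the absolute TD error per state and bias action selection toward states where that estimate is small. The toy environment, update rates, and epsilon-greedy bias below are illustrative assumptions; the method described above additionally handles linear function approximation.

```python
import random
from collections import defaultdict

# Tabular sketch of exploration guided by absolute temporal-difference errors:
# states whose value predictions are repeatedly surprising (large running |delta|)
# are treated as less "controllable". The controllability update and action bias
# are illustrative assumptions, not the paper's exact formulation.

random.seed(0)
N = 10                      # chain of states 0..N-1; actions move left/right
NOISY = {6, 7, 8}           # states where transitions are unpredictable

def step(s, a):
    s_next = min(max(s + a, 0), N - 1)
    if s_next in NOISY and random.random() < 0.5:
        s_next = random.randrange(N)          # unpredictable jump
    reward = 1.0 if s_next == N - 1 else 0.0
    return s_next, reward

V = defaultdict(float)       # state-value estimates
U = defaultdict(float)       # running absolute TD error per state
alpha, beta, gamma = 0.1, 0.1, 0.95

s = 0
for t in range(5000):
    # epsilon-greedy over a "controllability" preference: move toward states whose
    # recent |TD error| is small
    if random.random() < 0.2:
        a = random.choice((-1, 1))
    else:
        a = min((-1, 1), key=lambda act: U[min(max(s + act, 0), N - 1)])
    s_next, r = step(s, a)
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    U[s_next] += beta * (abs(delta) - U[s_next])
    s = s_next

print({k: round(v, 3) for k, v in sorted(U.items())})  # noisy states accumulate larger |delta|
```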
Marriage Payments: a fundamental reconsideration
This paper is a constructive critique of the well-known book by Jack Goody and Stanley Tambiah (1973), Bridewealth and Dowry. Given the general acceptance of Goody’s framework in contemporary studies of marriage and marriage payments, it is essential that we refer to this framework as we advance new theoretical concepts of marriage-related socio-economic processes. As some reviews of this paper have observed, this critique is certainly overdue. In the course of this discussion we shall set forth analytical conceptions of wealth and consumption goods that we find to be foundational to an understanding of marriage payments and other economic processes; and we provide consistent criteria for studying the cross-cultural incidence of payments, gifts, bequests and inheritance that are often associated with marriage. For cross-cultural analysis it is important that the dimensions of social process be clearly delineated, in spite of confusion that arises at the level of common discourse. Unfortunately, the Goody-Tambiah presentation amplifies this confusion in the interest of an ethnocentric evolutionary scheme. And in the context of the study of marriage payments in China, Goody’s construction of the “indirect dowry” is particularly unfortunate.
Buying or browsing ? An exploration of shopping orientations and online purchase intention
Consumer selection of retail patronage mode has been widely researched by marketing scholars. Several researchers have segmented consumers by shopping orientation. However, few have applied such methods to the Internet shopper. Despite the widespread belief that Internet shoppers are primarily motivated by convenience, the authors show empirically that consumers' fundamental shopping orientations have no significant impact on their proclivity to purchase products online. Factors that are more likely to influence purchase intention include product type, prior purchase, and, to a lesser extent, gender. Literature examining electronic commerce tends either to discuss the size and potential of the phenomenon or to indicate problems associated with it. For example, Forrester Research recently reported that worldwide Internet commerce, both business to business (B2B) and business to consumer (B2C), would reach $6.8 trillion in 2004. At the same time, reports of business failures are increasing, as it is evident that the corporate sector is not satisfied with Internet performance (Wolff, 1998). Despite these two apparently contradictory positions, many observers note an absence of research into consumer motivation to purchase via the Internet and other aspects of consumer behaviour with regard to the medium (Donthu and Garcia, 1999; Hagel and Armstrong, 1997; Korgaonkar and Wolin, 1999). Literature falls into two categories: usage (by which we mean rate, purpose or quantity bought) and advertising response (McDonald, 1993). Common to these two streams is the flow research of Hoffman and Novak (1996), which suggests that the Internet is a very different medium requiring new means of segmentation.
Mental health in youth infected with and affected by HIV: the role of caregiver HIV.
OBJECTIVE To examine the association of youth and caregiver HIV status, and other contextual and social regulation factors with youth mental health. METHOD Data were from two longitudinal studies of urban youth perinatally infected, affected, and unaffected by HIV (N = 545; 36% PHIV+ youth; 45.7% HIV+ caregivers). Youth mental health was measured using the Child Behavior Checklist, the Child Depression Inventory, and the State-Trait Anxiety Inventory for Children. RESULTS HIV+ youth reported elevated scores on the CDI compared with HIV- youth. HIV+ caregivers reported fewer symptoms and were less likely to report scores in the clinical range for their children on the CBCL compared with HIV- caregivers. Caregiver mental health and parent-child communication and involvement were also associated with youth mental health. CONCLUSIONS Youth who resided with HIV+ caregivers had better mental health. Future research needs to further explore the role of caregiver HIV infection in youth mental health. Understanding and building upon strengths of HIV-affected families may be an effective focus of interventions for this population.
AREPS and TEMPER - Getting Familiar with these Powerful Propagation Software Tools
AREPS and TEMPER are two powerful software tools developed by the government that allow the determination of radar coverage under real-world propagation conditions. TEMPER was developed by the Applied Physics Laboratory (APL) and AREPS by the Space and Naval Warfare Systems Command (SPAWAR), with mutual cooperation between the two. Both tools allow determination of coverage in ducted conditions and in the presence of sub-refraction and super-refraction. Both tools account for multi-path and take into account actual terrain height variations with range. When modelling propagation over water, both tools take into account surface roughness. This paper gives a discussion of each of these tools with the intention of providing the user with a better understanding of them and of how to use them.
DV Based Positioning in Ad Hoc Networks
Many ad hoc network protocols and applications assume the knowledge of geographic location of nodes. The absolute position of each networked node is an assumed fact by most sensor networks which can then present the sensed information on a geographical map. Finding position without the aid of GPS in each node of an ad hoc network is important in cases where GPS is either not accessible, or not practical to use due to power, form factor or line of sight conditions. Position would also enable routing in sufficiently isotropic large networks, without the use of large routing tables. We are proposing APS – a localized, distributed, hop by hop positioning algorithm, that works as an extension of both distance vector routing and GPS positioning in order to provide approximate position for all nodes in a network where only a limited fraction of nodes have self positioning capability.
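A DV-hop-style sketch of the positioning step: flood hop counts from landmarks, convert hops to distances using an average hop length estimated between landmarks, and solve a small least-squares system. The toy topology, the single global hop-length correction, and the solver are simplifying assumptions rather than the full APS protocol.

```python
import numpy as np

def hop_counts(adj, source):
    """BFS hop counts from `source` over adjacency lists."""
    hops, frontier = {source: 0}, [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in hops:
                    hops[v] = hops[u] + 1
                    nxt.append(v)
        frontier = nxt
    return hops

def dv_hop_estimate(landmarks, hops_to_landmarks, avg_hop_len):
    """Least-squares position from distances d_i = hops_i * avg_hop_len."""
    (x1, y1), d1 = landmarks[0], hops_to_landmarks[0] * avg_hop_len
    A, b = [], []
    for (xi, yi), hi in zip(landmarks[1:], hops_to_landmarks[1:]):
        di = hi * avg_hop_len
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return tuple(sol)

if __name__ == "__main__":
    # toy 6-node network: node indices 0..5, landmarks are nodes 0, 2 and 5
    adj = {0: [1], 1: [0, 2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
    coords = {0: (0, 0), 2: (2, 0), 5: (4, 2)}          # known landmark positions
    hop_tables = {lm: hop_counts(adj, lm) for lm in coords}
    # average hop length from landmark-to-landmark hop counts (very rough in this tiny graph)
    avg_hop = np.mean([np.hypot(coords[a][0] - coords[b][0], coords[a][1] - coords[b][1])
                       / hop_tables[a][b]
                       for a in coords for b in coords if a != b])
    node = 3
    est = dv_hop_estimate(list(coords.values()),
                          [hop_tables[lm][node] for lm in coords], avg_hop)
    print("estimated position of node", node, "~", np.round(est, 2))
```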
Capabilities , Processes , and Performance of Knowledge Management : A Structural Approach
The purpose of this study is to examine structural relationships among the capabilities, processes, and performance of knowledge management, and to suggest strategic directions for the successful implementation of knowledge management. To serve this purpose, the authors conducted an extensive survey of 68 knowledge management-adopting Korean firms in diverse industries and collected 215 questionnaires. Analyzing hypothesized structural relationships with the data collected, they found that there exist statistically significant relationships among knowledge management capabilities, processes, and performance. The empirical results of this study also support the well-known strategic hypothesis of the balanced scorecard (BSC).
Compact Generalized Non-local Network
The non-local module [27] is designed for capturing long-range spatio-temporal dependencies in images and videos. Although having shown excellent performance, it lacks the mechanism to model the interactions between positions across channels, which are of vital importance in recognizing fine-grained objects and actions. To address this limitation, we generalize the non-local module and take the correlations between the positions of any two channels into account. This extension utilizes the compact representation for multiple kernel functions with Taylor expansion that makes the generalized non-local module in a fast and low-complexity computation flow. Moreover, we implement our generalized non-local method within channel groups to ease the optimization. Experimental results illustrate the clear-cut improvements and practical applicability of the generalized non-local module on both fine-grained object recognition and video classification. Code is available at: https://github.com/KaiyuYue/cgnl-network.pytorch.
Factors Affecting Intent to Purchase Virtual Goods in Online Games
Online games increasingly sell virtual goods to generate real income. As a result, it is increasingly important to identify factors and theory of consumption values that affect intent to purchase virtual goods in online games. However, very little research has been devoted to the topic. This study is an empirical investigation of the factors and theory of consumption values that affect intent to purchase virtual goods in online games. The study determines the effects of game type, satisfaction with the game, identification with the character, and theory of consumption values on intent to purchase virtual goods. The study used a survey to collect information from 523 virtual game users. Study results showed that game type is a moderating variable that affects intent to purchase virtual goods. And it demonstrated that role-playing game users are affected by theory of consumption values: functional quality, playfulness, and social relationship support. Moreover, war-strategy game users are affected by satisfaction with the game, identification with the character, and theory of consumption values: price, utility, and playfulness. The study also presents conclusions, proposes applications, and describes opportunities for further research.
Commonsense computing: using student sorting abilities to improve instruction
We examine students' commonsense understanding of computer science concepts before they receive any formal instruction in the field. For this study, we asked students on the first day of a CS1 class to describe in English how they would arrange a set of numbers in ascending, sorted order; we then repeated the experiment asking students to sort a list of dates (in mm/dd/yyyy format). We found that a majority of students described a coherent algorithm; some described versions of insertion or selection sort, but many gave unexpected algorithms. We also found significant differences between responses given for sorting numbers versus dates. Based on our analysis of the data, we suggest that beginning-programming instructors more explicitly discuss data types, begin loop instruction with post-test loops, assist students in recognizing implicit conditional and iteration use in natural language solutions to problems, and recognize that novices and experts focus on different aspects of the problem in even basic problem-solving tasks.
Differentially Private Federated Learning: A Client Level Perspective
Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client’s contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients’ contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.
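The server-side aggregation commonly used for client-level differential privacy can be sketched as clip, average, and add Gaussian noise scaled to the clipping bound. The clipping bound, noise multiplier, and plain averaging below are illustrative assumptions; a real deployment must also perform the privacy accounting, which is not shown.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Client-level DP aggregation sketch: clip each update, average, add Gaussian noise.

    `client_updates` is a list of 1-D parameter-update vectors. The clipping bound,
    noise multiplier and plain averaging are illustrative choices: they bound each
    client's influence and mask it with noise, but the privacy accounting (e.g. via
    a moments accountant) is not included here.
    """
    rng = rng or np.random.default_rng()
    m = len(client_updates)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm / m, size=avg.shape)
    return avg + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=100) for _ in range(50)]
    print(np.linalg.norm(dp_federated_average(updates, rng=rng)))
```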
Gelling mechanism and interactions of polysaccharides from Mesona blumes: Role of urea and calcium ions.
In this study, the relaxation modulus was used to elucidate the gelling mechanism of polysaccharides from Mesona blumes. Adjusting the pH of Mesona blumes polysaccharides (MBP) could significantly change the relaxation modulus of MBP. The results showed that hydrogen bonds and electrostatic interactions are present during the formation of the MBP gel. The addition of salt ions (sodium ions and calcium ions), EDTA and urea had different effects on the relaxation modulus of MBP. The results showed that hydrogen bonding was the main force maintaining the MBP gel network structure, followed by calcium ions, and that electrostatic interaction did not play the decisive role in gel formation. Small molecules with active hydrogen-bond donors and/or acceptors were added to MBP, which showed that -COOH was involved in the hydrogen bond formation of the MBP gel. In addition, the entanglement network number (ENN) results quantitatively assessed the contributions of the interactions: hydrogen bonding > calcium ions (calcium bridges) > electrostatic interaction.
Tempo And Beat Estimation Of Musical Signals
Tempo estimation is fundamental in automatic music processing and in many multimedia applications. This paper presents an automatic tempo tracking system that processes audio recordings and determines the beats per minute and temporal beat location. The concept of spectral energy flux is defined and leads to an efficient note onset detector. The algorithm involves three stages: a frontend analysis that efficiently extracts onsets, a periodicity detection block and the temporal estimation of beat locations. The performance of the proposed method is evaluated using a large database of 489 excerpts from several musical genres. The global recognition rate is 89.7 %. Results are discussed and compared to other tempo estimation systems.
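A minimal onset detector in the spirit of the front end described above computes a spectral-flux-style novelty curve (half-wave rectified magnitude increase per frame) and picks peaks. The frame sizes, threshold, and synthetic click track below are illustrative assumptions, not the paper's exact spectral energy flux definition or its periodicity-detection stage.

```python
import numpy as np

def onset_novelty(signal, sr, frame=1024, hop=512):
    """Spectral-flux-style onset novelty curve (illustrative parameters)."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    flux = np.maximum(mag[1:] - mag[:-1], 0.0).sum(axis=1)   # half-wave rectified increase
    return flux / (flux.max() + 1e-12), hop / sr              # novelty curve, frame period

def pick_onsets(novelty, period, threshold=0.3):
    """Very simple peak picking: local maxima above a fixed relative threshold."""
    idx = [i for i in range(1, len(novelty) - 1)
           if novelty[i] > threshold and novelty[i] >= novelty[i - 1] and novelty[i] > novelty[i + 1]]
    return np.array(idx) * period

if __name__ == "__main__":
    sr = 22050
    t = np.arange(0, 4.0, 1.0 / sr)
    # synthetic "clicks" every 0.5 s (120 BPM) on top of low-level noise
    x = 0.01 * np.random.default_rng(0).normal(size=t.size)
    for k in np.arange(0.0, 4.0, 0.5):
        x[int(k * sr):int(k * sr) + 200] += np.hanning(200)
    nov, period = onset_novelty(x, sr)
    onsets = pick_onsets(nov, period)
    ioi = np.diff(onsets)
    print("estimated tempo ~", round(60.0 / np.median(ioi), 1), "BPM")
```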
Complex problem solving: a case for complex cognition?
Complex problem solving (CPS) emerged in the last 30 years in Europe as a new part of the psychology of thinking and problem solving. This paper introduces into the field and provides a personal view. Also, related concepts like macrocognition or operative intelligence will be explained in this context. Two examples for the assessment of CPS, Tailorshop and MicroDYN, are presented to illustrate the concept by means of their measurement devices. Also, the relation of complex cognition and emotion in the CPS context is discussed. The question if CPS requires complex cognition is answered with a tentative “yes.”
Wired Academia: Why Social Science Scholars Are Using Social Media
Social media websites are having a significant impact on how collaborative relationships are formed and information is disseminated throughout society. While there is a large body of literature devoted to the ways in which the general public is making use of social media, there is little research regarding how such trends are impacting scholarly practices. This paper presents the results of a study on how academics, primarily in social sciences, are adopting these new sites.
The Gridfit algorithm: an efficient and effective approach to visualizing large amounts of spatial data
In a large number of applications, data is collected and referenced by their spatial locations. Visualizing large amounts of spatially referenced data on a limited-size screen display often results in poor visualizations due to the high degree of overplotting of neighboring datapoints. We introduce a new approach to visualizing large amounts of spatially referenced data. The basic idea is to intelligently use the unoccupied pixels of the display instead of overplotting data points. After formally describing the problem, we present two solutions which are based on: placing overlapping data points on the nearest unoccupied pixel; and shifting data points along a screen-filling curve (e.g., Hilbert-curve). We then develop a more sophisticated approach called Gridfit, which is based on a hierarchical partitioning of the data space. We evaluate all three approaches with respect to their efficiency and effectiveness and show the superiority of the Gridfit approach. For measuring the effectiveness, we not only present the resulting visualizations but also introduce mathematical effectiveness criteria measuring properties of the generated visualizations with respect to the original data such as distance- and position-preservation.
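The first and simplest of the strategies above, placing each overlapping point on the nearest unoccupied pixel, can be sketched with a ring search over the pixel grid. The screen size, search order, and data below are illustrative; the hierarchical Gridfit partitioning itself is not implemented here.

```python
import numpy as np

def place_nearest_free(points, width, height):
    """Place each point on its own pixel, moving overlaps to the nearest free pixel.

    Implements only the simplest strategy discussed above (nearest unoccupied
    pixel, found by scanning growing square rings); the hierarchical Gridfit
    partitioning is not reproduced.
    """
    occupied = np.zeros((height, width), dtype=bool)
    placed = []
    for x, y in points:
        px, py = int(round(x)), int(round(y))
        for radius in range(max(width, height)):
            ring = [(px + dx, py + dy)
                    for dx in range(-radius, radius + 1)
                    for dy in range(-radius, radius + 1)
                    if max(abs(dx), abs(dy)) == radius]
            free = [(cx, cy) for cx, cy in ring
                    if 0 <= cx < width and 0 <= cy < height and not occupied[cy, cx]]
            if free:
                cx, cy = min(free, key=lambda c: (c[0] - x) ** 2 + (c[1] - y) ** 2)
                occupied[cy, cx] = True
                placed.append((cx, cy))
                break
    return placed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(loc=(16, 16), scale=2.0, size=(200, 2))  # heavily overlapping cluster
    placed = place_nearest_free(pts, width=32, height=32)
    print(len(placed), "points placed,", len(set(placed)), "distinct pixels")
```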
Laparoscopic or open surgery for cancer of the middle and lower rectum: short-term outcomes of a comparative non-randomised study
The study compares the short-term results of the laparoscopic and open approach for the surgical treatment of rectal cancer. Consecutive cases with rectal cancer operated upon with laparoscopy from 2004 to 2007 were compared to open rectal cancer cases. Total mesorectal excision (TME) was attempted in all cases. Forty-two cases were included in the OPEN and 45 in the LAP group and were matched for age, gender, disease stage and operation type. Duration of surgery was longer and blood transfusion requirements were less in the LAP group. Higher blood loss was observed in patients with neoadjuvant treatment in both groups. Patients with neoadjuvant treatment in the OPEN group had higher operation time, but that was not the case in the LAP group. There were three conversions (7%). Overall morbidity was higher in the OPEN group. LAP group patients were found to recover faster. R0 resection was achieved in 88% in the OPEN and 94% in the LAP group. Less morbidity and faster recovery is offered after laparoscopic TME. Quality of surgery assessed by histopathology is similar between the approaches. Neoadjuvant chemoradiation seems to have significant impact on blood loss but results in longer operation times of the OPEN group.
Design of an under-voltage lock-out circuit with bandgap structure
Motivated by the essential function of Under Voltage Lock Out (UVLO) in DC-DC power management systems, an improved UVLO circuit is proposed. The circuit stabilizes parameters such as the threshold voltage and the hysteresis range of the comparator without utilizing an extra bandgap reference voltage source for comparison. The UVLO circuit is implemented in the CSMC 0.5µm BCD process. Hspice simulation results show that the UVLO circuit offers advantages such as a simple topology, sensitive response, low temperature drift, and low power consumption.
Scaling agile methods to regulated environments: An industry case study
Agile development methods are growing in popularity, with a recent survey reporting that more than 80% of organizations now follow an agile approach. Agile methods were seen initially as best suited to small, co-located teams developing non-critical systems. The first two constraining characteristics (small and co-located teams) have been addressed as research has emerged describing successful agile adoption involving large teams and distributed contexts. However, the applicability of agile methods for developing safety-critical systems in regulated environments has not yet been demonstrated unequivocally, and very little rigorous research exists in this area. Some of the essential characteristics of agile approaches appear to be incompatible with the constraints imposed by regulated environments. In this study we identify these tension points and illustrate through a detailed case study how an agile approach was implemented successfully in a regulated environment. Among the interesting concepts to emerge from the research are the notions of continuous compliance and living traceability.
POSTER: Watch Out Your Smart Watch When Paired
We coin the term "data transfusion" for the phenomenon that a user experiences when pairing a wearable device with a host device. A large amount of data stored in the host device (e.g., a smartphone) is forcibly copied to the wearable device (e.g., a smart watch) due to pairing, while the wearable device is usually less attended. To the best of our knowledge, there is no previous work that investigates how sensitive data is transfused even without the user's consent and how users perceive and behave regarding such a phenomenon for smart watches. We tackle this problem in two parts: an experimental study of data extraction from commodity devices on the Android Wear, watchOS, and Tizen platforms, and a follow-up survey study with 205 smart watch users. The experimental studies have shown that a large amount of sensitive data was transfused, but there was not enough user notification. The survey results have shown that users have a lower perception of security and privacy risks on smart watches than on smartphones, but they tend to set the same passcode on both devices when needed. Based on the results, we perform a risk assessment and discuss possible mitigations that involve volatile transfusion.
Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics
Simultaneous tracking of multiple persons in real-world environments is an active research field and several approaches have been proposed, based on a variety of features and algorithms. Recently, there has been a growing interest in organizing systematic evaluations to compare the various techniques. Unfortunately, the lack of common metrics for measuring the performance of multiple object trackers still makes it hard to compare their results. In this work, we introduce two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time. These metrics have been extensively used in two large-scale international evaluations, the 2006 and 2007 CLEAR evaluations, to measure and compare the performance of multiple object trackers for a wide variety of tracking tasks. Selected performance results are presented and the advantages and drawbacks of the presented metrics are discussed based on the experience gained during the evaluations.
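Once the per-frame correspondences have been established, the two metrics reduce to compact formulas: MOTP averages the matching error over matched object-hypothesis pairs, and MOTA discounts misses, false positives, and identity switches against the number of ground-truth objects. The sketch below takes those tallies as given, since the association step is the involved part of the protocol; the example numbers are made up.

```python
def clear_mot(num_gt, misses, false_positives, id_switches, matched_dists):
    """Compute MOTP and MOTA from per-sequence tallies.

    Assumes the frame-by-frame correspondence between hypotheses and ground-truth
    objects has already been established (that association step is the involved
    part of the CLEAR protocol and is not reproduced here).
    MOTP: average distance (or overlap error) over all matched object-hypothesis pairs.
    MOTA: 1 - (misses + false positives + identity switches) / total ground-truth objects.
    """
    motp = sum(matched_dists) / len(matched_dists) if matched_dists else float("nan")
    mota = 1.0 - (misses + false_positives + id_switches) / float(num_gt)
    return motp, mota

if __name__ == "__main__":
    # made-up tallies for a short sequence
    motp, mota = clear_mot(num_gt=500, misses=40, false_positives=25, id_switches=5,
                           matched_dists=[0.21, 0.35, 0.18, 0.27])
    print(f"MOTP = {motp:.3f}, MOTA = {mota:.3f}")
```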
Glucocorticoids reduce phobic fear in humans.
Phobias are characterized by excessive fear, cued by the presence or anticipation of a fearful situation. Whereas it is well established that glucocorticoids are released in fearful situations, it is not known whether these hormones, in turn, modulate perceived fear. As extensive evidence indicates that elevated glucocorticoid levels impair the retrieval of emotionally arousing information, they might also inhibit retrieval of fear memory associated with phobia and, thereby, reduce phobic fear. Here, we investigated whether acutely administered glucocorticoids reduced phobic fear in two double-blind, placebo-controlled studies in 40 subjects with social phobia and 20 subjects with spider phobia. In the social phobia study, cortisone (25 mg) administered orally 1 h before a socio-evaluative stressor significantly reduced self-reported fear during the anticipation, exposure, and recovery phases of the stressor. Moreover, the stress-induced release of cortisol in placebo-treated subjects correlated negatively with fear ratings, suggesting that endogenously released cortisol in the context of a phobic situation buffers fear symptoms. In the spider phobia study, repeated oral administration of cortisol (10 mg), but not placebo, 1 h before exposure to a spider photograph induced a progressive reduction of stimulus-induced fear. This effect was maintained when subjects were exposed to the stimulus again 2 days after the last cortisol administration, suggesting that cortisol may also have facilitated the extinction of phobic fear. Cortisol treatment did not reduce general, phobia-unrelated anxiety. In conclusion, the present findings in two distinct types of phobias indicate that glucocorticoid administration reduces phobic fear.
The influence of tobacco marketing on adolescent smoking intentions via normative beliefs.
Using cross-sectional data from three waves of the Youth Tobacco Policy Study, which examines the impact of the UK's Tobacco Advertising and Promotion Act (TAPA) on adolescent smoking behaviour, we examined normative pathways between tobacco marketing awareness and smoking intentions. The sample comprised 1121 adolescents in Wave 2 (pre-ban), 1123 in Wave 3 (mid-ban) and 1159 in Wave 4 (post-ban). Structural equation modelling was used to assess the direct effect of tobacco advertising and promotion on intentions at each wave, and also the indirect effect, mediated through normative influences. Pre-ban, higher levels of awareness of advertising and promotion were independently associated with higher levels of perceived sibling approval which, in turn, was positively related to intentions. Independent paths from perceived prevalence and benefits fully mediated the effects of advertising and promotion awareness on intentions mid- and post-ban. Advertising awareness indirectly affected intentions via the interaction between perceived prevalence and benefits pre-ban, whereas the indirect effect on intentions of advertising and promotion awareness was mediated by the interaction of perceived prevalence and benefits mid-ban. Our findings indicate that policy measures such as the TAPA can significantly reduce adolescents' smoking intentions by signifying smoking to be less normative and socially unacceptable.
Medicaid patient asthma-related acute care visits and their associations with ozone and particulates in Washington, DC, from 1994-2005.
The primary objective of this ecologic and contextual study is to determine statistically significant short-term associations between air quality (daily ozone and particulate concentrations) and Medicaid patient general acute care daily visits for asthma exacerbations over 11 years for Washington, DC residents, and to identify regions and populations that may experience increased asthma exacerbations related to air quality. After removing long-term trends and day-of-week effects in the Medicaid data, Poisson regression was applied to daily time series data. Significant associations were found between asthma-related general acute care visits and ozone concentrations. Significant associations with both ozone and PM2.5 concentrations were observed for 5- to 12-year-olds. While poor air quality was closely associated with asthma exacerbations observed in acute care visits in areas where Medicaid enrollment was high, the strongest associations between asthma-related visits and air quality were not always for the areas with the highest Medicaid enrollment.
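To make the modelling step concrete, here is a hedged sketch, not the study's actual code or variables, of how daily visit counts could be regressed on same-day pollutant levels with a Poisson GLM in Python (statsmodels), adjusting for day-of-week effects. The synthetic data and column names (visits, ozone, pm25, dow) are assumptions for illustration only, and the study's removal of long-term trends is omitted here.

```python
# Illustrative sketch: Poisson regression of daily asthma-related visit counts
# on same-day ozone and PM2.5, with day-of-week adjustment. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60  # about two months of daily observations
df = pd.DataFrame({
    "ozone": rng.uniform(20, 80, n),   # daily ozone (e.g., ppb)
    "pm25":  rng.uniform(5, 25, n),    # daily PM2.5 (e.g., ug/m3)
    "dow":   np.tile(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], 9)[:n],
})
# Synthetic daily visit counts with a small built-in ozone effect
df["visits"] = rng.poisson(np.exp(1.5 + 0.01 * df["ozone"]))

model = smf.glm("visits ~ ozone + pm25 + C(dow)",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())
print(np.exp(model.params["ozone"]))  # rate ratio per unit increase in ozone
```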
Deep Memory Networks for Attitude Identification
We consider the task of identifying attitudes towards a given set of entities from text. Conventionally, this task is decomposed into two separate subtasks: target detection that identifies whether each entity is mentioned in the text, either explicitly or implicitly, and polarity classification that classifies the exact sentiment towards an identified entity (the target) into positive, negative, or neutral. Instead, we show that attitude identification can be solved with an end-to-end machine learning architecture, in which the two subtasks are interleaved by a deep memory network. In this way, signals produced in target detection provide clues for polarity classification, and conversely, the predicted polarity provides feedback to the identification of targets. Moreover, the treatments for the set of targets also influence each other -- the learned representations may share the same semantics for some targets but vary for others. The proposed deep memory network, the AttNet, outperforms methods that do not consider the interactions between the subtasks or those among the targets, including conventional machine learning methods and state-of-the-art deep learning models.
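The abstract does not detail AttNet's architecture, so the snippet below is only a generic sketch of a single attention "hop" over a memory, in the style of deep memory networks; it is not the authors' model, and the module name, dimensions, and tensors are illustrative assumptions.

```python
# Generic sketch of one attention "hop" over a memory, in the style of deep
# memory networks. This is NOT the authors' AttNet; it only illustrates how a
# target/query representation can attend over encoded text (the memory).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryHop(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # transforms the query between hops

    def forward(self, query, memory):
        # query:  (batch, dim)         e.g., an embedding of the target entity
        # memory: (batch, slots, dim)  e.g., encoded words/sentences of the text
        scores = torch.bmm(memory, query.unsqueeze(2)).squeeze(2)  # (batch, slots)
        attn = F.softmax(scores, dim=1)                            # attention weights
        read = torch.bmm(attn.unsqueeze(1), memory).squeeze(1)     # (batch, dim)
        return self.proj(query + read)                             # updated query

# Toy usage: 2 examples, 5 memory slots, 16-dim embeddings
hop = MemoryHop(16)
q = torch.randn(2, 16)
m = torch.randn(2, 5, 16)
out = hop(q, m)    # could feed into target-detection / polarity heads
print(out.shape)   # torch.Size([2, 16])
```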
Intake of copper has no effect on cognition in patients with mild Alzheimer’s disease: a pilot phase 2 clinical trial
Disturbed copper (Cu) homeostasis may be associated with the pathological processes in Alzheimer's disease (AD). In the present report, we evaluated the efficacy of oral Cu supplementation in the treatment of AD in a prospective, randomized, double-blind, placebo-controlled phase 2 clinical trial in patients with mild AD for 12 months. Sixty-eight subjects were randomized. The treatment was well-tolerated. There were, however, no significant differences in primary outcome measures (Alzheimer's Disease Assessment Scale, Cognitive subscale; Mini-Mental State Examination) between the verum group [Cu-(II)-orotate-dihydrate; 8 mg Cu daily] and the placebo group. Despite a number of findings supporting the hypothesis of environmental Cu modulating AD, our results demonstrate that oral Cu intake has neither a detrimental nor a promoting effect on the progression of AD.
Modeling Multidimensional Databases
Multidimensional databases have recently gained widespread acceptance in the commercial world for supporting on-line analytical processing (OLAP) applications. We propose a hypercube-based data model and a few algebraic operations that provide a semantic foundation to multidimensional databases and extend their current functionality. The distinguishing feature of the proposed model is the symmetric treatment not only of all dimensions but also of measures. The model is also very flexible in that it provides support for multiple hierarchies along each dimension and support for ad hoc aggregates. The proposed operators are composable, reorderable, and closed in application. These operators are also minimal in the sense that none can be expressed in terms of the others, nor can any one be dropped without sacrificing functionality. They make possible the declarative specification and optimization of multidimensional database queries that are currently specified operationally. The operators have been designed to be translated to SQL and can be implemented either on top of a relational database system or within a special-purpose multidimensional database engine. In effect, they provide an algebraic application programming interface (API) that allows the separation of the frontend from the backend. Finally, the proposed model provides a framework in which to study multidimensional databases and opens several new research problems.
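As a rough illustration of the claim that cube-style operators can be translated to SQL over a relational backend, the sketch below expresses a simple roll-up style aggregation as a GROUP BY query. The sales schema (product and region as dimensions, amount as a measure) is invented for this example and is not taken from the paper.

```python
# Minimal illustration: a roll-up along the 'region' dimension, aggregating the
# 'amount' measure per product, translated to a SQL GROUP BY over a relational
# backend (here sqlite3 in memory). The schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("pen", "east", 10.0), ("pen", "west", 7.5),
    ("ink", "east", 3.0),  ("ink", "west", 4.5),
])

for row in conn.execute(
        "SELECT product, SUM(amount) FROM sales GROUP BY product"):
    print(row)   # e.g. ('ink', 7.5) and ('pen', 17.5)
```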
Flanerie and Writing the City in Iain Sinclair's Lights Out for the Territory, Edmund White's The Flaneur, and Jose Cardoso Pires's Lisboa: Livro de Bordo
Graeme Gilloch, commenting on the unlikely re-emergence of the figure of the flâneur in contemporary theory, a figure he himself had written off as obsolete seven years earlier (see Gilloch 1992), declares that "the flâneur returns as the perspicacious pedestrian, a figure with privileged insights into, and precious knowledge of, the modern city. Resurrected and recast, the flâneur-as-historical-figure becomes the flâneur-as-heuristic fiction" (1999: 105). In short, the flâneur re-emerges as a way in which to expound a vision of the city. This essay will examine the way in which three contemporary works use flânerie, the activity of the flâneur, as a way of both investigating metropolitan space and constructing a textual representation of it. These works are Iain Sinclair's Lights Out for the Territory, José Cardoso Pires's Lisboa: Livro de Bordo and Edmund White's The Flâneur, dealing with London, Lisbon and Paris respectively. Before moving on to the primary texts, it is important to delineate the features comprising the concept of flânerie in modern social and literary theory. The flâneur originally arose as an historical figure in the early nineteenth century. Paradigmatically, he was an aloof dandy of independent means (the flâneur, as the French noun indicates, is a male figure), ostensibly idling away his time ambling through the Arcades of Second Empire Paris. However, beneath his cover of disinterest and detachment, the flâneur was an ever-observant artist who used his perambulations to amass knowledge of the metropolis, as material for the production of literary, or journalistic, texts for later publication. His flickering existence was eventually extinguished in the mid-nineteenth century, when the increasingly unfashionable Arcades became obsolescent. In addition to the disappearance of the flâneur's favourite haunts, according to Benjamin (1974), it was a combination of the historical figure's increasing identification with the commodity, together with the quickening pace and spatio-temporal reorganisation of metropolitan existence, that eventually led to the extinction of the flâneur. The passing of the historical figure paved the way for the resurrection of the flâneur as a methodological persona, adopted in order to pursue the exploration of the city. Stripped to its basic characteristics and used as a modus operandi for the writer, flânerie, as a scopic methodology, involves mobile observation on the part of an individual consciousness from the supposed viewpoint of a pedestrian city dweller. As Shields (Tester, 1994: 65) says, "…"
Detecting Nastiness in Social Media
Although social media has made it easy for people to connect on a virtually unlimited basis, it has also opened doors to people who misuse it to undermine, harass, humiliate, threaten and bully others. There is a lack of adequate resources to detect and hinder such abuse. In this paper, we present our initial NLP approach to detecting invective posts as a first step towards eventually detecting and deterring cyberbullying. We crawl posts containing profanity and then determine whether or not each post contains invective. Annotations of this data are improved iteratively through in-lab annotation and crowdsourcing. We pursue several NLP approaches, combining typical and some newer techniques, to distinguish neutral uses of swear words from instances in which they are used in an insulting way. We also show that our model not only works on our data set but can also be successfully applied to other data sets.
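As a rough illustration of the kind of typical techniques mentioned, here is a minimal baseline sketch, not the authors' model, that separates insulting from neutral uses of profanity with TF-IDF features and logistic regression; the tiny example texts and labels are invented purely for illustration.

```python
# Generic baseline sketch (not the authors' model): TF-IDF + logistic regression
# to separate insulting from neutral uses of profanity. Example data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are a damn genius, nice work",       # profanity used neutrally/positively
    "damn, that concert was amazing",
    "you damn idiot, nobody wants you here",  # profanity used as invective
    "shut up, you worthless damn loser",
]
labels = [0, 0, 1, 1]  # 0 = not invective, 1 = invective

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what a damn great day"]))  # prediction on a new post
```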
Hippocampal memory consolidation during sleep: a comparison of mammals and birds.
The transition from wakefulness to sleep is marked by pronounced changes in brain activity. The brain rhythms that characterize the two main types of mammalian sleep, slow-wave sleep (SWS) and rapid eye movement (REM) sleep, are thought to be involved in the functions of sleep. In particular, recent theories suggest that the synchronous slow-oscillation of neocortical neuronal membrane potentials, the defining feature of SWS, is involved in processing information acquired during wakefulness. According to the Standard Model of memory consolidation, during wakefulness the hippocampus receives input from neocortical regions involved in the initial encoding of an experience and binds this information into a coherent memory trace that is then transferred to the neocortex during SWS where it is stored and integrated within preexisting memory traces. Evidence suggests that this process selectively involves direct connections from the hippocampus to the prefrontal cortex (PFC), a multimodal, high-order association region implicated in coordinating the storage and recall of remote memories in the neocortex. The slow-oscillation is thought to orchestrate the transfer of information from the hippocampus by temporally coupling hippocampal sharp-wave/ripples (SWRs) and thalamocortical spindles. SWRs are synchronous bursts of hippocampal activity, during which waking neuronal firing patterns are reactivated in the hippocampus and neocortex in a coordinated manner. Thalamocortical spindles are brief 7-14 Hz oscillations that may facilitate the encoding of information reactivated during SWRs. By temporally coupling the readout of information from the hippocampus with conditions conducive to encoding in the neocortex, the slow-oscillation is thought to mediate the transfer of information from the hippocampus to the neocortex. Although several lines of evidence are consistent with this function for mammalian SWS, it is unclear whether SWS serves a similar function in birds, the only taxonomic group other than mammals to exhibit SWS and REM sleep. Based on our review of research on avian sleep, neuroanatomy, and memory, although involved in some forms of memory consolidation, avian sleep does not appear to be involved in transferring hippocampal memories to other brain regions. Despite exhibiting the slow-oscillation, SWRs and spindles have not been found in birds. Moreover, although birds independently evolved a brain region--the caudolateral nidopallium (NCL)--involved in performing high-order cognitive functions similar to those performed by the PFC, direct connections between the NCL and hippocampus have not been found in birds, and evidence for the transfer of information from the hippocampus to the NCL or other extra-hippocampal regions is lacking. Although based on the absence of evidence for various traits, collectively, these findings suggest that unlike mammalian SWS, avian SWS may not be involved in transferring memories from the hippocampus. Furthermore, it suggests that the slow-oscillation, the defining feature of mammalian and avian SWS, may serve a more general function independent of that related to coordinating the transfer of information from the hippocampus to the PFC in mammals. Given that SWS is homeostatically regulated (a process intimately related to the slow-oscillation) in mammals and birds, functional hypotheses linked to this process may apply to both taxonomic groups.