title | abstract
---|---|
An Introduction to Modeling and Analyzing Complex Product Development Processes Using the Design Structure Matrix (DSM) Method | The design and development of complex engineering products require the efforts and collaboration of hundreds of participants from diverse backgrounds, resulting in complex relationships among both people and tasks. Many of the traditional project management tools (PERT, Gantt and CPM methods) do not address problems stemming from this complexity. While these tools allow the modeling of sequential and parallel processes, they fail to address interdependency (feedback and iteration), which is common in complex product development (PD) projects. To address this issue, a matrix-based tool called the Design Structure Matrix (DSM) has evolved. This method differs from traditional project-management tools because it focuses on representing information flows rather than work flows. The DSM method is an information exchange model that allows the representation of complex task (or team) relationships in order to determine a sensible sequence (or grouping) for the tasks (or teams) being modeled. This article will cover how the basic method works and how you can use the DSM to improve the planning, execution, and management of complex PD projects using different algorithms (i.e., partitioning, tearing, banding, clustering, simulation, and eigenvalue analysis). Introduction: matrices and projects. Consider a system (or project) that is composed of two elements/sub-systems (or activities/phases): element "A" and element "B". A graph may be developed to represent this system pictorially. The graph is constructed by allowing a vertex/node on the graph to represent a system element and an edge joining two nodes to represent the relationship between two system elements. The directionality of influence from one element to another is captured by an arrow instead of a simple link. The resultant graph is called a directed graph or simply a digraph. There are three basic building blocks for describing the relationship amongst system elements: parallel (or concurrent), sequential (or dependent) and coupled (or interdependent). [Fig. 1: Three configurations that characterize a system relationship: parallel, sequential, and coupled.] |
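The coupled (interdependent) case is exactly what shows up as a feedback mark when the digraph is written as a matrix. The following minimal Python sketch, which is my own illustration rather than code from the article and uses made-up tasks, writes a small task DSM as a binary matrix and flags coupled task pairs:

```python
import numpy as np

# Hypothetical 4-task DSM: rows depend on columns.
# dsm[i, j] = 1 means task i needs information from task j.
tasks = ["A", "B", "C", "D"]
dsm = np.array([
    [0, 0, 0, 0],   # A depends on nothing
    [1, 0, 1, 0],   # B needs A and C
    [0, 1, 0, 0],   # C needs B  -> B and C feed each other (coupling)
    [0, 0, 1, 0],   # D needs C (sequential)
])

# Coupled (interdependent) pairs: i depends on j AND j depends on i.
coupled = [(tasks[i], tasks[j])
           for i in range(len(tasks))
           for j in range(i + 1, len(tasks))
           if dsm[i, j] and dsm[j, i]]
print("coupled task pairs:", coupled)   # -> [('B', 'C')]
```

Partitioning algorithms then reorder the rows and columns so that such coupled pairs end up in blocks along the diagonal and the remaining marks fall below it.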
A 42mW 26–28 GHz phased-array receive channel with 12 dB gain, 4 dB NF and 0 dBm IIP3 in 45nm CMOS SOI | This paper presents a low-power 26-28 GHz phased-array receive channel in 45nm CMOS SOI. The design alternates cascode amplifiers with switched-LC phase-shifter cells, resulting in 5-bit phase control with gain and rms phase error < 0.6 dB and 4°, respectively, over 32 phase states. The measured gain, noise figure (NF) and IIP3 are 12.2 dB, 4 dB and 0 dBm, respectively, and are achieved at a DC power of 42 mW. A gain control of 6 dB is also available without affecting the system NF. To our knowledge, this represents the state of the art in mm-wave phased-arrays, with the best published linearity at low NF. Application areas include 5G base-stations and hand-held units. |
Using the Contingent Valuation Method to Measure Patron Benefits of Reference Desk Service in an Academic Library | The concept of “just-in-case” reference (the librarian waits at the desk just in case the patron has a question) is receiving increasing scrutiny by library administrators and reference librarians.1,2 An entire symposium in the Journal of Academic Librarianship was dedicated to a discussion of the viability of reference services and the possible need for a new model.3 But this discussion is occurring without information on the value of reference desk services to patrons. The library literature is replete with cost studies, but few studies consider the benefit to patrons of library services. This is particularly true of reference services, where librarians have thoroughly studied the cost of reference transactions. |
Approximating Polyhedra with Spheres for Time-Critical Collision Detection | This article presents a method for approximating polyhedral objects to support a time-critical collision-detection algorithm. The approximations are hierarchies of spheres, and they allow the time-critical algorithm to progressively refine the accuracy of its detection, stopping as needed to maintain the real-time performance essential for interactive applications. The key to this approach is a preprocess that automatically builds tightly fitting hierarchies for rigid and articulated objects. The preprocess uses medial-axis surfaces, which are skeletal representations of objects. These skeletons guide an optimization technique that gives the hierarchies accuracy properties appropriate for collision detection. In a sample application, hierarchies built this way allow the time-critical collision-detection algorithm to have acceptable accuracy, improving significantly on that possible with hierarchies built by previous techniques. The performance of the time-critical algorithm in this application is consistently 10 to 100 times better than a previous collision-detection algorithm, maintaining low latency and a nearly constant frame rate of 10 frames per second on a conventional graphics workstation. The time-critical algorithm maintains its real-time performance as objects become more complicated, even as they exceed previously reported complexity levels by a factor of more than 10. |
The blocker tag: selective blocking of RFID tags for consumer privacy | We propose the use of "selective blocking" by "blocker tags" as a way of protecting consumers from unwanted scanning of RFID tags attached to items they may be carrying or wearing. While an ordinary RFID tag is a simple, cheap (e.g. five-cent) passive device intended as an "electronic bar-code" for use in supply-chain management, a blocker tag is a cheap passive RFID device that can simulate many ordinary RFID tags simultaneously. When carried by a consumer, a blocker tag thus "blocks" RFID readers. It can do so universally by simulating all possible RFID tags. Or a blocker tag can block selectively by simulating only selected subsets of ID codes, such as those by a particular manufacturer, or those in a designated "privacy zone." We believe that this approach, when used with appropriate care, provides a very attractive alternative for addressing privacy concerns raised by the potential (and likely) widespread use of RFID tags in consumer products. We also discuss possible abuses arising from blocker tags, and means for detecting and dealing with them. |
All Silicon Marx-bank Topology for High-voltage, High-frequency Rectangular Pulses | This paper discusses the operation of a fully integrated solid-state Marx generator circuit, which has been developed for high-frequency (kHz), high-voltage (kV) applications needing rectangular pulses. The conventional Marx generator, used for high-voltage pulsed applications, uses inductors, or resistors, to supply the charging voltage to the capacitors, which has the disadvantages of size, power loss and frequency limitation. The proposed circuit takes advantage of the intensive use of power semiconductor switches, replacing the passive elements in the conventional circuit, to increase the performance, strongly reducing losses and increasing the pulse repetition frequency. Also, the proposed topology enables the use of typical half-bridge semiconductor structures, while ensuring that the maximum voltage blocked by the semiconductors is the voltage of each capacitor (i.e. the power supply voltage), even with mismatches in the synchronized switching, and with fault conditions. A five-stage, 5 kW peak-power laboratory prototype of this all-silicon Marx generator circuit was constructed using 1200 V IGBTs and diodes, operating with a 1000 V DC input voltage at a 10 kHz repetition frequency, giving 5 kV pulses with 10 μs width and 50 ns rise time. |
Contextual domain classification in spoken language understanding systems using recurrent neural network | In a multi-domain, multi-turn spoken language understanding session, information from the history often greatly reduces the ambiguity of the current turn. In this paper, we apply the recurrent neural network (RNN) to exploit contextual information for query domain classification. The Jordan-type RNN directly sends the vector of output distribution to the next query turn as additional input features to the convolutional neural network (CNN). We evaluate our approach against SVM with and without contextual features. On our contextually labeled dataset, we observe a 1.4% absolute (8.3% relative) improvement in classification error rate over the non-contextual SVM, and 0.9% absolute (5.5% relative) improvement over the contextual SVM. |
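As a rough illustration of the Jordan-type feedback described above, the sketch below is my own, not the authors' code: a plain linear layer stands in for the CNN, the dimensions and data are made up, and the only point being shown is that the classifier for turn t receives the domain distribution predicted for turn t-1 as extra input features.

```python
import numpy as np

# Hypothetical sketch of Jordan-style context for turn-level domain classification.
n_domains, n_feats = 4, 16
rng = np.random.default_rng(1)
W = rng.normal(size=(n_domains, n_feats + n_domains))  # stand-in for the CNN classifier
b = np.zeros(n_domains)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

prev_dist = np.full(n_domains, 1.0 / n_domains)       # uniform before the first turn
for turn_feats in rng.normal(size=(3, n_feats)):       # three query turns in a session
    x = np.concatenate([turn_feats, prev_dist])        # feed back previous output as input
    prev_dist = softmax(W @ x + b)
    print(prev_dist.round(3))
```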
Anthropometry of elderly residents in the city of São Paulo, Brazil. | The article presents gender and age-specific selected anthropometric data for a representative sample of elderly Brazilians in the city of São Paulo. This was a cross-sectional, population-based household survey. A total of 1,894 older adults (men and women, > 60 years) were examined from January to March 2001. Data were presented as means and percentiles for body mass (BM); height or stature (ST); body mass index (BMI); waist (WC), hip (HC), arm (AC), and calf (CC) circumferences; triceps skinfold thickness (TST); and arm muscle circumference (AMC), and differences were described according to age (all variables) and gender (BMI). Except for HC (men), all anthropometric variables were lower in the oldest than in the youngest individuals (p < 0.01) in both genders. BMI was significantly higher (p < 0.01) in women than men (all age groups). The observations suggest that there is loss of muscle mass and redistribution and reduction of fat mass with age (both genders). The data can be used in clinical practice and epidemiological studies based on interpretation of anthropometric measurements in the elderly in São Paulo. |
Local sleep homeostasis in the avian brain: convergence of sleep function in mammals and birds? | The function of the brain activity that defines slow wave sleep (SWS) and rapid eye movement (REM) sleep in mammals is unknown. During SWS, the level of electroencephalogram slow wave activity (SWA or 0.5-4.5 Hz power density) increases and decreases as a function of prior time spent awake and asleep, respectively. Such dynamics occur in response to waking brain use, as SWA increases locally in brain regions used more extensively during prior wakefulness. Thus, SWA is thought to reflect homeostatically regulated processes potentially tied to maintaining optimal brain functioning. Interestingly, birds also engage in SWS and REM sleep, a similarity that arose via convergent evolution, as sleeping reptiles and amphibians do not show similar brain activity. Although birds deprived of sleep show global increases in SWA during subsequent sleep, it is unclear whether avian sleep is likewise regulated locally. Here, we provide, to our knowledge, the first electrophysiological evidence for local sleep homeostasis in the avian brain. After birds stayed awake watching David Attenborough's The Life of Birds with only one eye, SWA and the slope of slow waves (a purported marker of synaptic strength) increased only in the hyperpallium, a primary visual processing region neurologically connected to the stimulated eye. Asymmetries were specific to the hyperpallium, as the non-visual mesopallium showed a symmetric increase in SWA and wave slope. Thus, hypotheses for the function of mammalian SWS that rely on local sleep homeostasis may also apply to birds. |
Epoetin delta in the management of renal anaemia: results of a 6-month study. | BACKGROUND
Epoetin delta is an epoetin that, unlike existing agents, is produced in a human cell line. The present study investigated the efficacy and tolerability of intravenous (i.v.) epoetin delta compared with i.v. epoetin alfa.
METHODS
This was a 6-month, multicentre, randomized, double-blind trial in haemodialysis patients previously receiving epoetin alfa. Haematological parameters were assessed, and adverse events monitored. Equivalent efficacy was defined as a difference in mean haemoglobin between the two agents over weeks 12-24 of ≤ 1 g/dl, with a 90% confidence interval (CI) within the range -1 to 1 g/dl.
RESULTS
In total, 560 patients received epoetin delta while 192 received epoetin alfa, and 76.8% and 79.7% of patients, respectively, completed the study. Both agents showed similar efficacy in controlling anaemia: the point estimate for the difference in mean haemoglobin over weeks 12-24 was 0.01 g/dl (90% CI, -0.13, 0.15 g/dl), confirming equivalence. Adverse events were those expected in dialysis patients. Events possibly related to treatment occurred in 9.2% of patients receiving epoetin delta and 8.4% receiving epoetin alfa. Serious adverse events (SAEs) occurred in 33.0% and 26.7% of patients in the epoetin delta and epoetin alfa groups, respectively. Six patients in the epoetin delta group experienced an SAE considered possibly related to treatment (mostly access-related clotting), compared with no patient in the epoetin alfa group. None of these SAEs were life threatening.
CONCLUSIONS
Epoetin delta was shown to have an equivalent efficacy and safety profile to epoetin alfa in this 6-month study. |
Partial entity structure: a compact non-manifold boundary representation based on partial topological entities | Non-manifold boundary representations have gained a great deal of popularity in recent years, and various representation schemes have been proposed because they allow an even wider range of objects to be modeled for various applications than conventional manifold representations. However, since these schemes are mainly concerned with describing sufficient adjacency relationships of topological entities, the models represented in these schemes occupy redundant storage space, although they are very efficient in answering queries on topological adjacency relationships. The storage requirement can become a crucial problem for models in which topological data is more dominant than geometric data, such as tessellated or mesh models.
To solve this problem, in this paper, we propose a compact non-manifold boundary representation, called the partial entity structure, which reduces the storage size to half that of the radial edge structure, known as a time-efficient non-manifold data structure, while still allowing full topological adjacency relationships to be derived without loss of efficiency. This representation contains not only the conventional primitive entities, such as the region, face, edge, and vertex, but also partial topological entities, such as the partial-face, partial-edge, and partial-vertex, for describing non-manifold conditions at vertices, edges, and faces. In order to verify the time and storage efficiency of the partial entity structure, the time complexity of basic query procedures and the storage requirement for typical geometric models are derived and compared with those of existing schemes. Furthermore, a set of generalized Euler operators and typical high-level modeling capabilities, such as Boolean operations, are also implemented to confirm that our data structure is sound and easy to manipulate.
An improved analytical IGBT model for loss calculation including junction temperature and stray inductance | An improved analytical model suitable for IGBT modules is proposed in this paper to calculate the power losses with high accuracy and short calculation time. In this model, the parameters that vary with the junction temperature of the modules, such as di/dt in the turn-on period and dv/dt in the turn-off period, are discussed and derived from several equivalent models. In addition, the parasitic inductances in the circuit, including the emitter and collector inductances in the power circuit and the gate inductance in the driving loop, are considered in this model. Based on this proposed model, simulated switching waveforms of collector currents and collector-emitter voltages are provided to verify the model. Meanwhile, the calculated power losses are confirmed to be precise by comparison with measurement results. |
Data Poisoning Attacks in Contextual Bandits | We study offline data poisoning attacks in contextual bandits, a class of reinforcement learning problems with important applications in online recommendation and adaptive medical treatment, among others. We provide a general attack framework based on convex optimization and show that by slightly manipulating rewards in the data, an attacker can force the bandit algorithm to pull a target arm for a target contextual vector. The target arm and target contextual vector are both chosen by the attacker. That is, the attacker can hijack the behavior of a contextual bandit. We also investigate the feasibility and the side effects of such attacks, and identify future directions for defense. Experiments on both synthetic and real-world data demonstrate the efficiency of the attack algorithm. |
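To make the "convex optimization" framing above concrete, here is a minimal sketch of one such attack. It is my illustration, not the paper's code: it assumes the bandit estimates each arm with per-arm ridge regression (as in LinUCB-style algorithms), uses cvxpy, and all data, the margin eps, and the variable names are made up.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, n_arms, n = 5, 3, 40                       # context dim, arms, data points per arm
lam, eps = 1.0, 0.05                          # ridge parameter, attack margin

X = [rng.normal(size=(n, d)) for _ in range(n_arms)]    # historical contexts per arm
r0 = [rng.normal(size=n) for _ in range(n_arms)]        # original rewards
x_star, target = rng.normal(size=d), 0                  # attacker's target context and arm

r = [cp.Variable(n) for _ in range(n_arms)]             # poisoned rewards (decision variables)
# Ridge estimate theta_a = (X_a^T X_a + lam*I)^-1 X_a^T r_a is linear in r_a.
A = [np.linalg.solve(X[a].T @ X[a] + lam * np.eye(d), X[a].T) for a in range(n_arms)]
theta = [A[a] @ r[a] for a in range(n_arms)]

cost = sum(cp.sum_squares(r[a] - r0[a]) for a in range(n_arms))   # keep perturbation small
constraints = [x_star @ theta[target] >= x_star @ theta[a] + eps
               for a in range(n_arms) if a != target]
cp.Problem(cp.Minimize(cost), constraints).solve()
print("total squared reward perturbation:", cost.value)
```

Because each arm's estimate is linear in its rewards, the constraint that the target arm dominates at the target context is linear, so the whole problem is a convex quadratic program.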
Single-Port Reconfigurable Magneto-Electric Dipole Antenna With Quad-Polarization Diversity | We propose a magneto-electric (ME) dipole antenna which is able to provide four states of polarization control, namely, linear polarization (LP) in the $x$-direction, LP in the $y$-direction, left-hand circular polarization (CP), and right-hand CP. Under each of the two LP states, the reconfigurable ME dipole can be further tuned to work in two adjacent frequency ranges (i.e., a relatively higher band and a lower band). The uniqueness of this design arises from the fact that all the polarization and frequency agilities can be realized with a single port and without a complicated feeding network, thus eliminating the insertion loss and the considerable increase in design complexity associated with a reconfigurable feeding network. Other advantages that make the proposed design attractive compared with conventional reconfigurable microstrip antennas include the comparatively stable and high gain, nearly identical gain for both linear and circular polarizations, a symmetrical radiation pattern, and a low back-radiation level at the operating frequencies. |
Insights from Deploying See-Through Augmented Reality Signage in the Wild | Typically, the key challenges with interactive digital signage are that (1) interaction times are short (usually on the order of seconds), (2) interaction needs to be very easy to understand, and (3) interaction needs to provide a benefit that justifies the effort to engage. To tackle these challenges, we propose a see-through augmented reality application for digital signage that enables passersby to observe the area behind the display, augmented with useful data. We report on the development and deployment of our application in two public settings: a public library and a supermarket. Based on observations of 261 (library) and 661 (supermarket) passersby and 14 interviews, we provide early insights and implications for application designers. Our results show a significant increase in attention: the see-through signage was noticed by 46% of the people, compared to 14% with the non-see-through version. Furthermore, findings indicate that to best benefit the passersby, the AR displays should clearly communicate their purpose. |
Predictive characteristics of patients achieving glycaemic control with insulin after sulfonylurea failure. | AIM
We investigated the clinical and metabolic parameters in type 2 diabetic patients who were inadequately controlled on sulfonylurea (SU) before initiating insulin therapy to characterise patients who are likely to achieve target glycaemic control with insulin analogues.
METHODS
A total of 120 Korean patients aged ≥ 40 years with insulin-naïve, poorly controlled, SU-treated type 2 diabetes were randomised, stratified by SU dose and obesity, in a 1:1 ratio to insulin detemir (long-acting analogue; LAA) or 70% insulin aspart protamine plus 30% insulin aspart (biphasic insulin analogue; BIA). Patients who failed to reach a glycated albumin (GA) level of ≤ 20% at 3 weeks were switched to therapy with twice-daily BIA for 16 weeks.
RESULTS
Mean HbA1c, GA, and fasting and stimulated plasma glucose levels were significantly reduced after 16 weeks compared with baseline in all groups, and 40% of patients reached the target HbA1c (≤ 7%). Compared with responders, non-responders had a significantly longer duration of diabetes and a higher dose of glimepiride. However, there was no significant difference in insulin secretory profiles between responders and non-responders. Clinical factors such as diabetes duration, SU dose and BMI were independently associated with an inadequate response to insulin analogues in patients with secondary failure.
CONCLUSIONS
In type 2 diabetics with secondary SU failure, clinical parameters such as duration of diabetes (< 10 years), SU dose (≤ 4 mg) and BMI should be considered more important factors than laboratory indices related to β-cell function when predicting the response to insulin analogues. |
Otto refrigerator based on a superconducting qubit: Classical and quantum performance | We analyse a quantum Otto refrigerator based on a superconducting qubit coupled to two LC resonators, each including a resistor acting as a reservoir. We find various operation regimes: nearly adiabatic (low driving frequency), ideal Otto cycle (intermediate frequency), and non-adiabatic coherent regime (high frequency). In the nearly adiabatic regime, the cooling power is quadratic in frequency, and we find a substantially enhanced coefficient of performance compared to that of an ideal Otto cycle. Quantum coherent effects invariably lead to a decrease in both cooling power and coefficient of performance as compared to purely classical dynamics. In the non-adiabatic regime we observe strong coherent oscillations of the cooling power as a function of frequency. We investigate various driving waveforms: compared to the standard sinusoidal drive, a truncated trapezoidal drive with optimized rise and dwell times yields higher cooling power and efficiency. |
Face and hands localization and tracking for sign language recognition | We develop face and hand detection and tracking for a sign language recognition system. We first perform a preliminary evaluation of several color spaces to find the most suitable one using a nonparametric model approach. Then, we propose to use an elliptical model in the CbCr color space to lower the complexity of the detection algorithm and to better model the skin color. After the skin regions in the input video have been segmented, the facial features and hands of interest are detected using luminance differences and skeleton features, respectively. In the tracking stage, each blob determines its search region and finds the minimum mean square error (MMSE) match for its own blob, using block matching between the previous and current frames. Experimental results show that our proposed system is able to detect and track the face and hands in sign language video sequences. |
Shear Wave Splitting and Mantle Anisotropy: Measurements, Interpretations, and New Directions | Measurements of the splitting or birefringence of seismic shear waves that have passed through the Earth’s mantle yield constraints on the strength and geometry of elastic anisotropy in various regions, including the upper mantle, the transition zone, and the D″ layer. In turn, information about the occurrence and character of seismic anisotropy allows us to make inferences about the style and geometry of mantle flow because anisotropy is a direct consequence of deformational processes. While shear wave splitting is an unambiguous indicator of anisotropy, the fact that it is typically a near-vertical path-integrated measurement means that splitting measurements generally lack depth resolution. Because shear wave splitting yields some of the most direct constraints we have on mantle flow, however, understanding how to make and interpret splitting measurements correctly and how to relate them properly to mantle flow is of paramount importance to the study of mantle dynamics. In this paper, we review the state of the art and recent developments in the measurement and interpretation of shear wave splitting—including new measurement methodologies and forward and inverse modeling techniques—provide an overview of data sets from different tectonic settings, show how they help us relate mantle flow to surface tectonics, and discuss new directions that should help to advance the shear wave splitting field. |
A Blockchain Based New Secure Multi-Layer Network Model for Internet of Things | This paper presents a multi-layer secure IoT network model based on blockchain technology. The model reduces the difficulty of actually deploying blockchain technology by dividing the Internet of Things into a multi-level decentralized network and adopting blockchain technology at every level of the network, while retaining the high security and credibility assurance that blockchain provides. It offers a wide-area networking solution for the Internet of Things. |
Efficient Classifier for Classification of Prognostic Breast Cancer Data through Data Mining Techniques | Data mining involves the process of recovering related, significant and credible information from a large collection of aggregated data. A major area of current research in data mining is the field of clinical investigations that involve disease diagnosis, prognosis and drug therapy. The objective of this paper is to identify an efficient classifier for prognostic breast cancer data. This research work involves designing a data mining framework that incorporates the task of learning patterns and rules that will facilitate the formulation of decisions in new cases. The machine learning techniques employed to train the proposed system are based on feature relevance analysis and classification algorithms. Wisconsin Prognostic Breast Cancer (WPBC) data from the UCI machine learning repository is utilized by means of data mining techniques to completely train the system on 198 individual cases, each comprising 33 predictor values. This paper highlights the performance of feature reduction and classification algorithms on the training dataset. We evaluate the number of attributes for splitting in the Random Tree algorithm, and the confidence level and minimum leaf size in the C4.5 algorithm, to produce 100 percent classification accuracy. Our results demonstrate that the Random Tree and Quinlan's C4.5 classification algorithms produce 100 percent accuracy in the training and test phases of classification with proper tuning of algorithmic parameters. |
Linking online news and social media | Much of what is discussed in social media is inspired by events in the news and, vice versa, social media provide us with a handle on the impact of news events. We address the following linking task: given a news article, find social media utterances that implicitly reference it. We follow a three-step approach: we derive multiple query models from a given source news article, which are then used to retrieve utterances from a target social media index, resulting in multiple ranked lists that we then merge using data fusion techniques. Query models are created by exploiting the structure of the source article and by using explicitly linked social media utterances that discuss the source article. To combat query drift resulting from the large volume of text, either in the source news article itself or in social media utterances explicitly linked to it, we introduce a graph-based method for selecting discriminative terms.
For our experimental evaluation, we use data from Twitter, Digg, Delicious, the New York Times Community, Wikipedia, and the blogosphere to generate query models. We show that different query models, based on different data sources, provide complementary information and manage to retrieve different social media utterances from our target index. As a consequence, data fusion methods manage to significantly boost retrieval performance over individual approaches. Our graph-based term selection method is shown to help improve both effectiveness and efficiency. |
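As a minimal illustration of the fusion step described above (my own sketch, not the authors' implementation; CombSUM with per-run min-max normalization is just one standard choice of fusion method):

```python
# Hypothetical CombSUM-style fusion of ranked lists from several query models.
from collections import defaultdict

def fuse(runs, k=10):
    """runs: list of {utterance_id: retrieval_score} dicts, one per query model."""
    fused = defaultdict(float)
    for run in runs:
        if not run:
            continue
        lo, hi = min(run.values()), max(run.values())
        for doc, score in run.items():
            # min-max normalize each run before summing so no single model dominates
            fused[doc] += (score - lo) / (hi - lo) if hi > lo else 1.0
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:k]

run_a = {"tweet1": 2.3, "tweet2": 1.1, "tweet3": 0.4}
run_b = {"tweet2": 9.0, "tweet4": 5.5}
print(fuse([run_a, run_b]))
```

The fused ranking rewards utterances that several query models retrieve, which matches the complementarity between query models that the authors report.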
From value chain to value constellation: designing interactive strategy. | In today's fast-changing competitive environment, strategy is no longer a matter of positioning a fixed set of activities along that old industrial model, the value chain. Successful companies increasingly do not just add value, they reinvent it. The key strategic task is to reconfigure roles and relationships among a constellation of actors--suppliers, partners, customers--in order to mobilize the creation of value by new combinations of players. What is so different about this new logic of value? It breaks down the distinction between products and services and combines them into activity-based "offerings" from which customers can create value for themselves. But as potential offerings grow more complex, so do the relationships necessary to create them. As a result, a company's strategic task becomes the ongoing reconfiguration and integration of its competencies and customers. The authors provide three illustrations of these new rules of strategy. IKEA has blossomed into the world's largest retailer of home furnishings by redefining the relationships and organizational practices of the furniture business. Danish pharmacies and their national association have used the opportunity of health care reform to reconfigure their relationships with customers, doctors, hospitals, drug manufacturers, and with Danish and international health organizations to enlarge their role, competencies, and profits. French public-service concessionaires have mastered the art of conducting a creative dialogue between their customers--local governments in France and around the world--and a perpetually expanding set of infrastructure competencies. |
Information-Theoretic Measures for Anomaly Detection | Anomaly detection is an essential component of the protection mechanisms against novel attacks. In this paper, we propose to use several information-theoretic measures, namely, entropy, conditional entropy, relative conditional entropy, information gain, and information cost for anomaly detection. These measures can be used to describe the characteristics of an audit data set, suggest the appropriate anomaly detection model(s) to be built, and explain the performance of the model(s). We use case studies on Unix system call data, BSM data, and network tcpdump data to illustrate the utilities of these measures. |
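As a concrete, heavily simplified sketch of how two of these measures can be computed over an audit event stream (my own code, with a toy system-call trace standing in for real audit data such as the Unix system call or tcpdump data mentioned above):

```python
import math
from collections import Counter

def entropy(events):
    """Empirical Shannon entropy H(X) of a sequence of audit events."""
    counts = Counter(events)
    n = len(events)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def conditional_entropy(events, k=1):
    """H(next event | previous k events) = H(joint) - H(prefix)."""
    seqs = [tuple(events[i:i + k + 1]) for i in range(len(events) - k)]
    prefixes = [s[:-1] for s in seqs]
    return entropy(seqs) - entropy(prefixes)

calls = ["open", "read", "read", "write", "close", "open", "read", "write", "close"]
print(entropy(calls), conditional_entropy(calls, k=2))
```

Low conditional entropy means the next event is highly predictable from recent history, which is what makes such measures useful for choosing and explaining an anomaly detection model.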
Cities, productivity, and quality of life. | Technological changes and improved electronic communications seem, paradoxically, to be making cities more, rather than less, important. There is a strong correlation between urbanization and economic development across countries, and within-country evidence suggests that productivity rises in dense agglomerations. But urban economic advantages are often offset by the perennial urban curses of crime, congestion and contagious disease. The past history of the developed world suggests that these problems require more capable governments that use a combination of economic and engineering solutions. Though the scope of urban challenges can make remaining rural seem attractive, agrarian poverty has typically also been quite costly. |
On Test Sets for Checking Morphism Equivalence on Languages with Fair Distribution of Letters | A test set for a language L is a finite subset T of L with the property that each pair of morphisms that agrees on T also agrees on L. Some results concerning test sets for languages with fair distribution of letters are presented. The first result is that every D0L language with fair distribution of letters has a test set. The second result shows that every language L with fair distribution has a test set relative to morphisms g, h which have bounded balance on L. These results are generalizations of results of Culik II and Karhumaki (1983). |
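Restating the opening definition in symbols (my notation, not the paper's):

```latex
T \subseteq L \text{ is a test set for } L
\quad\Longleftrightarrow\quad
\forall \text{ morphisms } g,h:\;
\bigl(\forall w \in T:\ g(w)=h(w)\bigr)
\;\Longrightarrow\;
\bigl(\forall w \in L:\ g(w)=h(w)\bigr)
```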
Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes Processes for Risk Prognosis | Critically ill patients in regular wards are vulnerable to unanticipated adverse events which require prompt transfer to the intensive care unit (ICU). To allow for accurate prognosis of deteriorating patients, we develop a novel continuous-time probabilistic model for a monitored patient’s temporal sequence of physiological data. Our model captures “informatively sampled” patient episodes: the clinicians’ decisions on when to observe a hospitalized patient’s vital signs and lab tests over time are represented by a marked Hawkes process, with intensity parameters that are modulated by the patient’s latent clinical states, and with observable physiological data (mark process) modeled as a switching multi-task Gaussian process. In addition, our model captures “informatively censored” patient episodes by representing the patient’s latent clinical states as an absorbing semi-Markov jump process. The model parameters are learned from offline patient episodes in the electronic health records via an EM-based algorithm. Experiments conducted on a cohort of patients admitted to a major medical center over a 3-year period show that risk prognosis based on our model significantly outperforms the currently deployed medical risk scores and other baseline machine learning algorithms. |
Quo vadis, PUF?: Trends and challenges of emerging physical-disorder based security | The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, on hardware realization and analysis of a few initially proposed structures, and on conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures. |
Double secret key based medical image watermarking for secure telemedicine in cloud environment | This paper presents a highly secure new image watermarking scheme that uses two secret keys to prevent unauthorized access to information stored in a cloud environment. The proposed scheme is applied to secure medical information transmission and cloud storage. The scheme is implemented in two steps. In the first step, the patient information is randomized using a password secret key before being embedded into the cover medical image. In the second step, the watermarked image is hidden inside a secret key image before cloud storage, so that even if the stored image is hacked or leaked, the cover medical image and the embedded patient information cannot be retrieved by the hacker. The scheme thus aims to protect sensitive information in transit and in cloud storage from unauthorized access. Extraction of the embedded information at the receiver end is also done in two steps. First, non-blind detection is carried out, in which the cover medical image is extracted with the help of the secret key image. Second, the binary image watermark is extracted by analyzing the DCT (Discrete Cosine Transform) coefficient values. Performance of the proposed algorithm is analyzed by varying the JPEG compression quality factor and the sub-band decomposition levels. |
Automatic modeling of personality states in small group interactions | In this paper, we target the automatic recognition of personality states in a meeting scenario employing visual and acoustic features. The social psychology literature has coined the name personality state to refer to a specific behavioral episode wherein a person behaves as more or less introvert/extrovert, neurotic or open to experience, etc. Personality traits can then be reconstructed as density distributions over personality states. Different machine learning approaches were used to test the effectiveness of the selected features in modeling the dynamics of personality states. |
Comparison with Parametric Optimization in Credit Card Fraud Detection | We apply five classification methods, neural nets (NN), Bayesian nets (BN), naive Bayes (NB), artificial immune systems (AIS) and decision trees (DT), to credit card fraud detection. For a fair comparison, we fine-tune the parameters for each method either through exhaustive search or through a genetic algorithm (GA). Furthermore, we compare these classification methods in two training modes: a cost-sensitive training mode, where different costs for false positives and false negatives are considered in the training phase, and a plain training mode. The exploration of possible cost-sensitive metaheuristics to be applied is not in the scope of this work, and all executions are run using Weka, a publicly available software package. Although NN is claimed to be widely used in the market today, the evaluated implementation of NN in plain training leads to quite poor results. Our experiments are consistent with the early result of Maes et al. (2002), which concludes that BN is better than NN. Cost-sensitive training substantially improves the performance of all classification methods apart from NB and, independently of the training mode, DT and AIS, with optimized parameters, are the best methods in our experiments. |
A Tutorial on Machine Learning and Data Science Tools with Python | In this tutorial, we will provide an introduction to the main Python software tools used for applying machine learning techniques to medical data. The focus will be on open-source software that is freely available and is cross platform. To aid the learning experience, a companion GitHub repository is available so that you can follow the examples contained in this paper interactively using Jupyter notebooks. The notebooks will be more exhaustive than what is contained in this chapter, and will focus on medical datasets and healthcare problems. Briefly, this tutorial will first introduce Python as a language, and then describe some of the lower level, general matrix and data structure packages that are popular in the machine learning and data science communities, such as NumPy and Pandas. From there, we will move to dedicated machine learning software, such as SciKit-Learn. Finally we will introduce the Keras deep learning and neural networks library. The emphasis of this paper is readability, with as little jargon used as possible. No previous experience with machine learning is assumed. We will use openly available medical datasets throughout. |
Standardizing Power Monitoring and Control at Exascale | Power API-the result of collaboration among national laboratories, universities, and major vendors-provides a range of standardized power management functions, from application-level control and measurement to facility-level accounting, including real-time and historical statistics gathering. Support is already available for Intel and AMD CPUs and standalone measurement devices. |
Optimal design of a rotating transverse flux motor (TFM) with permanent magnets in rotor | An optimal design procedure, based on the Hooke-Jeeves method, applied to a permanent magnet transverse flux motor (PMTFM) is presented in the paper. Different objective functions, such as minimum cost, minimum cogging torque and maximum efficiency, were considered. The results obtained for a low-power sample PMTFM are given and discussed. |
Resource Allocation for Cloud-Assisted Mobile Applications | Mobile devices such as netbooks, smartphones, and tablets have made computing ubiquitous. However, such battery-powered devices often have limited computing power for the benefit of an extended runtime. Nevertheless, despite the reduced processing power, users expect to perform the same types of operations as they could do using their desktop or laptop computers. We address mobile devices' lack of computing power by leveraging cloud computing resources. We present a middleware that relocates computing-intensive parts of Java applications to cloud resources. Consequently, our middleware enables the execution of computing-intensive applications on mobile devices. We present a case study in which we adapt Sunflow, an open-source ray tracing application, to use our middleware and show the results obtained by deploying it on Amazon EC2. We show, via simulations, a cost analysis of using the different resource allocation strategies available in our solution. |
Evaluation of brain perfusion with technetium-99m bicisate single-photon emission tomography in patients with depressive disorder before and after drug treatment | Depression is one of the most common psychiatric illnesses. Its influence on brain perfusion has been demonstrated, but conflicting data exist on follow-up after drug treatment. The aim of our study was to evaluate the effects of antidepressant drugs on regional cerebral blood flow (rCBF) in patients with depression after 3 weeks and 6 months of drug therapy. Clinical criteria for depression without psychosis were met according to psychiatric evaluation. Severity of depression was evaluated with the Hamilton Depression Rating Scale (HAMD) before every scintigraphic study. rCBF was assessed using technetium-99m bicisate (Neurolite) brain single-photon emission tomography in nine patients with severe depression before the beginning of antidepressant drug therapy and 3 weeks and six months after initiation of therapy. Only patients with no change in antidepressant medication during the study were included. No antipsychotic drugs were used. Cerebellum was used as the reference region. rCBF was evaluated for eight regions in each study in three consecutive transversal slices. Follow-up studies were compared with the baseline study. The mean HAMD score was 25.5 points initially, 16 at the second examination and 8.8 after 6 months. Global CBF was decreased compared with the reference region in drug-free patients. Perfusion of left frontal and temporal regions was significantly lower (P<0.005) in comparison with the contralateral side. After therapy, a moderate decrease in perfusion was seen in the right frontal region (P<0.05). Perfusion decreased further after 6 months in the right frontal (P<0.005) and temporal regions (P<0.01). The highly significant asymmetry in perfusion between the left and right frontal and temporal lobes almost disappeared during treatment. Our findings implicate dysfunction of the frontal and temporal cortex in clinically depressed patients before specific drug treatment. Clinical improvement and decreases in HAMD score after 3 weeks and after 6 months reflect the treatment effect on mood-related rCBF changes. |
A Mixed Fragmentation Methodology For Initial Distributed Database Design | We define mixed fragmentation as a process of simultaneously applying horizontal and vertical fragmentation to a relation. It can be achieved in one of two ways: by performing horizontal fragmentation followed by vertical fragmentation, or by performing vertical fragmentation followed by horizontal fragmentation. The need for mixed fragmentation arises in distributed databases because database users usually access subsets of data which are vertical and horizontal fragments of global relations, and there is a need to process queries or transactions that access these fragments optimally. We present algorithms for generating candidate vertical and horizontal fragmentation schemes and propose a methodology for distributed database design using these fragmentation schemes. When applied together these schemes form a grid. This grid, consisting of cells, is then merged to form mixed fragments so as to minimize the number of disk accesses required to process the distributed transactions. We have implemented the vertical and horizontal fragmentation algorithms and are developing a Distributed Database Design Tool to support the proposed methodology. |
A Rule-Based Approach For Effective Sentiment Analysis | The success of Web 2.0 applications has made online social media websites tremendous assets for supporting critical business intelligence applications. The knowledge gained from social media can potentially lead to the development of novel services that are better tailored to users’ needs and at the same time meet the objectives of the businesses offering them. Online consumer reviews are one of the critical forms of social media content. Proper analysis of consumer reviews not only provides valuable information to facilitate the purchase decisions of customers but also helps merchants or product manufacturers better understand general responses of customers to their products for marketing campaign improvement. This study aims at designing an approach for supporting the effective analysis of the huge volume of online consumer reviews and, at the same time, addressing the major limitations of existing approaches. Specifically, the proposed rule-based sentiment analysis (R-SA) technique employs the class association rule mining algorithm to automatically discover interesting and effective rules capable of extracting product features or opinion sentences for a specific product feature of interest. According to our preliminary evaluation results, the R-SA technique performs well in comparison with its benchmark technique. |
Rover localization results for the FIDO rover | This paper describes the development of a two-tier state estimation approach for NASA/JPL’s FIDO Rover that utilizes wheel odometry, inertial measurement sensors, and a sun sensor to generate accurate estimates of the rover’s position and attitude throughout a rover traverse. The state estimation approach makes use of a linear Kalman filter to estimate the rate sensor bias terms associated with the inertial measurement sensors and then uses these estimated rate sensor bias terms to compute the attitude of the rover during a traverse. The estimated attitude terms are then combined with the wheel odometry to determine the rover’s position and attitude through an extended Kalman filter approach. Finally, the absolute heading of the vehicle is determined via a sun sensor which is then utilized to initialize the rover’s heading prior to the next planning cycle for the rover’s operations. This paper describes the formulation, implementation, and results associated with the two-tier state estimation approach for the FIDO rover. |
Basic Approaches for the Evaluation of IT-Investments in E-Government: A Literature Review | Electronic government (e-government) investments are made to help governments transform service delivery in a way that fulfills their obligations to stakeholders in the most efficient and cost-effective way. During this process, governments are under constant pressure to deliver better public services with fewer resources, which necessitates the use of confidential evaluation methods for IT investments in e-government in order to guide policy and decision makers and to raise public approval. Despite the various evaluation methods available, there is still a lack of an appropriate method that takes all perspectives and dimensions into account. The aim of this paper is to investigate the existing state of the art in the evaluation of e-government investments through a comprehensive review of the normative literature, and to contribute to these works by suggesting guidelines for developing an appropriate model. |
Handbook of Research on Advanced Intelligent Control Engineering and Automation | In industrial engineering and manufacturing, control of individual processes and systems is crucial to developing a quality final product. Rapid developments in technology are pioneering new techniques of research in control and automation with multi-disciplinary applications in electrical, electronic, chemical, mechanical, aerospace, and instrumentation engineering. The Handbook of Research on Advanced Intelligent Control Engineering and Automation presents the latest research into intelligent control technologies with the goal of advancing knowledge and applications in various domains. This text will serve as a reference book for scientists, engineers, and researchers, as it features many applications of new computational and mathematical tools for solving complicated problems of mathematical modeling, simulation, and control. |
Time-release Protocol from Bitcoin and Witness Encryption for SAT | We propose a new time-release protocol based on the bitcoin protocol and witness encryption. We derive a “public key” from the bitcoin block chain for encryption. The decryption key is the unpredictable information in the future blocks (e.g., transactions, nonces) that will be computed by the bitcoin network. We build this protocol using witness encryption and encrypt with respect to the bitcoin proof-of-work constraints. The novelty of our protocol is that the decryption key will become automatically and publicly available in the bitcoin block chain when the time is due. Witness encryption was originally proposed by Garg, Gentry, Sahai and Waters. It provides a means to encrypt to an instance, x, of an NP language and to decrypt by a witness w that x is in the language. Encoding CNF-SAT in the existing witness encryption schemes generates poly(n · k) group elements in the ciphertext, where n is the number of variables and k is the number of clauses of the CNF formula. We design a new witness encryption scheme for CNF-SAT which achieves a ciphertext size of 2n + 2k group elements. Our witness encryption is based on an intuitive reduction from SAT to the Subset-Sum problem. Our scheme uses the framework of multilinear maps, but it is independent of the implementation details of multilinear maps. |
Intersubject Differences in False Nonmatch Rates for a Fingerprint-Based Authentication System | The intersubject dependencies of false nonmatch rates were investigated for a minutiae-based biometric authentication process using single enrollment and verification measurements. A large number of genuine comparison scores were subjected to statistical inference tests that indicated that the number of false nonmatches depends on the subject and finger under test. This result was also observed if subjects associated with failures to enroll were excluded from the test set. The majority of the population (about 90%) showed a false nonmatch rate that was considerably smaller than the average false nonmatch rate of the complete population. The remaining 10% could be characterized as “goats” due to their relatively high probability for a false nonmatch. The image quality reported by the template extraction module only weakly correlated with the genuine comparison scores. When multiple verification attempts were investigated, only a limited benefit was observed for “goats,” since the conditional probability for a false nonmatch given earlier nonsuccessful attempts increased with the number of attempts. These observations suggest that (1) there is a need for improved identification of “goats” during enrollment (e.g., using dedicated signal-driven analysis and classification methods and/or the use of multiple enrollment images) and (2) there should be alternative means for identity verification in the biometric system under test in case of two subsequent false nonmatches. |
A Comparative Study of Off-Line Deep Learning Based Network Intrusion Detection | Network intrusion detection systems (NIDS) are essential security building-blocks for today's organizations to ensure safe and trusted communication of information. In this paper, we study the feasibility of off-line deep learning based NIDSes by constructing the detection engine with multiple advanced deep learning models and conducting a quantitative and comparative evaluation of those models. We first introduce the general deep learning methodology and its potential implication on the network intrusion detection problem. We then review multiple machine learning solutions to two network intrusion detection tasks (NSL-KDD and UNSW-NB15 datasets). We develop a TensorFlow-based deep learning library, called NetLearner, and implement a handful of cutting-edge deep learning models for NIDS. Finally, we conduct a quantitative and comparative performance evaluation of those models using NetLearner. |
Community Discovery in Dynamic Networks: A Survey | Several research studies have shown that complex networks modeling real-world phenomena are characterized by striking properties: (i) they are organized according to community structure, and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and fascinating problem has recently started capturing researchers' interest: the identification of evolving communities. Dynamic networks can be used to model the evolution of a system: nodes and edges are mutable, and their presence, or absence, deeply impacts the community structure that composes them.
This survey aims to present the distinctive features and challenges of dynamic community discovery and propose a classification of published approaches. As a “user manual,” this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale, and their specific instantiation. Given a definition of network dynamics, desired community characteristics, and analytical needs, this survey will support researchers to identify the set of approaches that best fit their needs. The proposed classification could also help researchers choose in which direction to orient their future research. |
When to use broader internalising and externalising subscales instead of the hypothesised five subscales on the Strengths and Difficulties Questionnaire (SDQ): data from British parents, teachers and children. | The Strengths and Difficulties Questionnaire (SDQ) is a widely used child mental health questionnaire with five hypothesised subscales. There is theoretical and preliminary empirical support for combining the SDQ's hypothesised emotional and peer subscales into an 'internalizing' subscale and the hypothesised behavioral and hyperactivity subscales into an 'externalizing' subscale (alongside the fifth prosocial subscale). We examine this using parent, teacher and youth SDQ data from a representative sample of 5-16 year olds in Britain (N = 18,222). Factor analyses generally supported second-order internalizing and externalizing factors, and the internalizing and externalizing subscales showed good convergent and discriminant validity across informants and with respect to clinical disorder. By contrast, discriminant validity was poorer between the emotional and peer subscales and between the behavioral, hyperactivity and prosocial subscales. This applied particularly to children with low scores on those subscales. We conclude that there are advantages to using the broader internalizing and externalizing SDQ subscales for analyses in low-risk samples, while retaining all five subscales when screening for disorder. |
Modeling Local Geometric Structure of 3D Point Clouds using Geo-CNN | Recent advances in deep convolutional neural networks (CNNs) have motivated researchers to adapt CNNs to directly model points in 3D point clouds. Modeling local structure has been proven to be important for the success of convolutional architectures, and researchers have exploited the modeling of local point sets in the feature extraction hierarchy. However, limited attention has been paid to explicitly modeling the geometric structure amongst points in a local region. To address this problem, we propose Geo-CNN, which applies a generic convolution-like operation dubbed GeoConv to each point and its local neighborhood. Local geometric relationships among points are captured when extracting edge features between the center and its neighboring points. We first decompose the edge feature extraction process onto three orthogonal bases, and then aggregate the extracted features based on the angles between the edge vector and the bases. This encourages the network to preserve the geometric structure in Euclidean space throughout the feature extraction hierarchy. GeoConv is a generic and efficient operation that can be easily integrated into 3D point cloud analysis pipelines for multiple applications. We evaluate Geo-CNN on ModelNet40 and KITTI and achieve state-of-the-art performance. |
Crowd analysis using visual and non-visual sensors, a survey | This paper proposes a critical survey of crowd analysis techniques using visual and non-visual sensors. Automatic crowd understanding has a massive impact on several applications including surveillance and security, situation awareness, crowd management, public space design, and intelligent and virtual environments. In emergencies, it enables practical safety applications by identifying crowd situational context information. This survey identifies different approaches as well as relevant work on crowd analysis by means of visual and non-visual techniques. Multidisciplinary research groups are addressing the crowd phenomenon and its dynamics, ranging from social and psychological aspects to computational perspectives. The possibility of using smartphones as sensing devices and fusing this information with video sensor data allows crowd dynamics and behaviors to be described more accurately. Finally, challenges and further research opportunities with reference to crowd analysis are presented. |
Dynamic Virtual Machine Migration in a vehicular cloud | Vehicular clouds are formed by incorporating cloud-based services into vehicular ad hoc networks. Amongst the several challenges in a vehicular cloud network, virtual machine migration (VMM) may be one of the most crucial issues that need addressing. In this paper, a novel solution for VMM in a vehicular cloud is presented. The vehicular cloud is modeled as a small corporate data center with mobile hosts, equipped with limited computational and storage capacities. The proposed scheme is called Vehicular Virtual Machine Migration (VVMM). VVMM aims to handle frequent changes in data center topology and host heterogeneity efficiently, while requiring minimal Roadside Unit (RU) intervention. Three modes of VVMM are studied. The first mode, VVMM-U, uniformly selects the destinations for VM migrations, which take place shortly before a vehicle departs from the coverage of the RU. The second mode, VVMM-LW, migrates the VM to the vehicle with the least workload, and the third mode, VVMM-MA, incorporates mobility awareness by migrating the VM to the vehicle with the least workload among those forecast to remain within the geographic boundaries of the vehicular cloud. We evaluate the performance of our proposed framework through simulations. Simulation results show that VVMM-MA significantly reduces unsuccessful migration attempts and increases fairness in vehicle capacity utilization across the vehicular cloud system. |
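A rough sketch of how the three destination-selection policies could look in code is given below. The field names (workload, position, speed), the linear position forecast and the boundary value are illustrative assumptions, not details taken from the paper.

```python
import random

def pick_destination(vehicles, mode="MA", cloud_boundary=1000.0, horizon=30.0):
    """Toy destination selection for the three VVMM modes described above.

    vehicles: list of dicts with 'workload', 'position' and 'speed' fields
    (field names, the linear position forecast and the boundary are illustrative).
    """
    if mode == "U":        # VVMM-U: uniform random choice of destination
        return random.choice(vehicles)
    if mode == "LW":       # VVMM-LW: vehicle with the least workload
        return min(vehicles, key=lambda v: v["workload"])
    # VVMM-MA: least workload among vehicles forecast to stay inside the cloud
    staying = [v for v in vehicles
               if v["position"] + v["speed"] * horizon <= cloud_boundary]
    candidates = staying or vehicles       # fall back if nobody is forecast to stay
    return min(candidates, key=lambda v: v["workload"])

fleet = [{"workload": 0.7, "position": 200.0, "speed": 20.0},
         {"workload": 0.2, "position": 950.0, "speed": 25.0},
         {"workload": 0.4, "position": 100.0, "speed": 10.0}]
# MA mode skips the least-loaded vehicle because it is forecast to leave the cloud
print(pick_destination(fleet, mode="MA"))
```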
Computational thinking and tinkering: Exploration of an early childhood robotics curriculum | By engaging in construction-based robotics activities, children as young as four can play to learn a range of concepts. The TangibleK Robotics Program paired developmentally appropriate computer programming and robotics tools with a constructionist curriculum designed to engage kindergarten children in learning computational thinking, robotics, programming, and problem-solving. This paper documents three kindergarten classrooms’ exposure to computer programming concepts and explores learning outcomes. Results point to strengths of the curriculum and areas where further redesign of the curriculum and technologies would be appropriate. Overall, the study demonstrates that kindergartners were both interested in and able to learn many aspects of robotics, programming, and computational thinking with the TangibleK curriculum design. |
Data-driven enterprise architecture and the TOGAF ADM phases | This paper investigates how Data as a disruptive technology could be integrated into TOGAF. Given the recent attention to Big Data and Data Science as disruptors, this paper investigates what the impact on the enterprise could be and how Enterprise Architecture (EA) should accommodate data to enable data-driven EA. No model is currently available that describes how Big Data can be incorporated into data-driven EA solutions. This study specifically focuses on how the TOGAF ADM could support a data-driven enterprise. Through document analysis and a systematic literature review, a specific adaptation of the TOGAF ADM is proposed that indicates the influence that Data and Big Data have on each phase within the ADM. |
A Task-based Approach for Ontology Evaluation | The need for evaluation methods that can measure improvements or degradations of ontological models, e.g. those yielded by a precursory ontology population stage, is undisputed. We propose an evaluation scheme that allows a number of different ontologies to be employed and their performance on specific tasks to be measured. In this paper we present the resulting task-based approach for quantitative ontology evaluation, which also allows for a bootstrapping approach to ontology population. Benchmark tasks commonly feature a so-called gold standard defining perfect performance. By selecting ontology-based approaches for the respective tasks, the ontology-dependent part of the performance can be measured. Following this scheme, we present the results of an experiment for testing and incrementally augmenting ontologies using a well-defined benchmark problem based on an evaluation gold standard. |
Knowledge Graph Embedding with Diversity of Structures | In recent years, different web knowledge graphs, both free and commercial, have been created. Knowledge graphs use relations between entities to describe facts in the world. We address the problem of embedding a large-scale knowledge graph into a continuous vector space. TransE, TransH, TransR and TransD are promising methods proposed in recent years that achieve state-of-the-art predictive performance. In this paper, we argue that graph structures should be considered in embedding and propose to embed substructures called "one-relation-circle" (ORC) to further improve the performance of the above methods, as they are unable to encode ORC substructures. Some complex models are capable of handling ORC structures but sacrifice efficiency in the process. To make a good trade-off between model capacity and efficiency, we propose a method to decompose ORC substructures by using two vectors to represent an entity in its roles as head or tail entity of the same relation. In this way, we can encode the ORC structure properly when applying the method to TransH, TransR and TransD, with almost the same model complexity as the original models. We conduct experiments on link prediction with the benchmark dataset WordNet. Our experiments show that applying our method improves the results compared with the corresponding original results of TransH, TransR and TransD. |
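The following toy NumPy sketch illustrates why a self-loop, the simplest one-relation-circle, is hard for a translation-based model with a single vector per entity, and how giving an entity separate head and tail vectors (our reading of the decomposition described above) lets the circle close without collapsing the relation vector. It is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(h + r - t, ord=1))

rng = np.random.default_rng(0)
e, r = rng.normal(size=8), rng.normal(size=8)

# A self-loop (h, r, h) is a minimal one-relation-circle: with a single vector per
# entity, TransE can only make it plausible by driving r toward the zero vector.
print("self-loop, one vector per entity:", round(transe_score(e, r, e), 3))   # equals ||r||_1

# With separate head/tail vectors for the entity under this relation, the circle
# can close while r stays non-zero.
e_head = rng.normal(size=8)
e_tail = e_head + r            # training would push the tail vector toward e_head + r
print("self-loop, head/tail vectors:", round(transe_score(e_head, r, e_tail), 3))   # ~ 0
```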
A virtual reality based exercise system for hand rehabilitation post-stroke: transfer to function | This paper presents preliminary results from a virtual reality (VR)-based system for hand rehabilitation that uses a CyberGlove and a Rutgers Master II-ND haptic glove. This computerized system trains finger range of motion, finger flexion speed, independence of finger motion, and finger strength using specific VR simulation exercises. A remote Web-based monitoring station was developed to allow telerehabilitation interventions. The remote therapist observes simplified versions of the patient exercises that are updated in real time. Patient data is stored transparently in an Oracle database, which is also Web accessible through a portal GUI. Thus the remote therapist or attending physician can graph exercise outcomes and evaluate patient outcomes at a distance. Data from the VR simulations is complemented by clinical measurements of hand function and strength. Eight chronic post-stroke subjects participated in a pilot study of the above system. In keeping with variability in both their lesion size and site and in their initial upper extremity function, each subject showed improvement on a unique combination of movement parameters in VR training. Importantly, these improvements transferred to gains on clinical tests, as well as to significant reductions in task-completion times for the prehension of real objects. These results are indicative of the potential feasibility of this exercise system for rehabilitation in patients with hand dysfunction resulting from neurological impairment. |
How Habit Limits the Predictive Power of Intention: The Case of Information Systems Continuance | Past research in the area of information systems acceptance has primarily focused on initial adoption under the implicit assumption that IS usage is mainly determined by intention. While plausible in the case of initial IS adoption, this assumption may not be as readily applicable to continued IS usage behavior since it ignores that frequently performed behaviors tend to become habitual and thus automatic over time. This paper is a step forward in defining and incorporating the "habit" construct into IS research. Specifically, the purpose of this study is to explore the role of habit and its antecedents in the context of continued IS usage. Building on previous work in other disciplines, we define habit in the context of IS usage as the extent to which people tend to perform behaviors (use IS) automatically because of learning. Using recent work on the continued usage of IS (IS continuance), we have developed a model suggesting that continued IS usage is not only a consequence of intention, but also of habit. In particular, in our research model, we propose IS habit to moderate the influence of intention such that its importance in determining behavior decreases as the behavior in question takes on a more habitual nature. Integrating past research on habit and IS continuance further, we suggest how antecedents of behavior/behavioral intention as identified by IS continuance research relate to drivers of habitualization. We empirically tested the model in the context of voluntary continued WWW usage. Our results support the argument that habit acts as a moderating variable of the relationship between intentions and IS continuance behavior, which may put a boundary condition on the explanatory power of intentions in the context of continued IS usage. The data also support that satisfaction, frequency of past behavior, and comprehensiveness of usage are key to habit formation and thus relevant in the context of IS continuance behavior. Implications of these findings are discussed and managerial guidelines presented. |
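The moderation hypothesis lends itself to a compact worked example. The sketch below fits an ordinary moderated regression with an intention x habit interaction on synthetic data; it is only an illustration of the hypothesised pattern (a negative interaction term), not the measurement model or estimation procedure used in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data built to exhibit the hypothesised pattern: the effect of intention
# on continued usage weakens as habit grows (a negative interaction coefficient).
rng = np.random.default_rng(42)
n = 500
habit = rng.uniform(0, 1, n)
intention = rng.normal(0, 1, n)
usage = 0.8 * intention * (1 - habit) + 0.9 * habit + rng.normal(0, 0.3, n)
df = pd.DataFrame({"usage": usage, "intention": intention, "habit": habit})

# Moderated regression: a significant negative intention:habit term is the signature
# of habit acting as a boundary condition on the predictive power of intention.
model = smf.ols("usage ~ intention * habit", data=df).fit()
print(model.params[["intention", "habit", "intention:habit"]])
```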
Introducing HIV/AIDS education into the electrical engineering curriculum at the University of Pretoria | This paper describes how HIV/AIDS education is being introduced into the curriculum of the Department of Electrical, Electronic, and Computer Engineering at the University of Pretoria, Pretoria, South Africa. Third- and fourth-year students were provided with an HIV/AIDS Educational CD developed at the university. Their knowledge of the subject was tested via two quizzes: one written before they were exposed to the material on the CD and one after. In addition, a mathematical HIV/AIDS model is being incorporated into a third-year control systems course. This model is used to illustrate standard control systems engineering concepts, such as linearization, system stability, feedback, and dynamic compensation. This paper is an example of how topical nonengineering material can effectively be made part of a high-level undergraduate engineering course. Students benefit not only from the topical nature of the subject, but also from an improved understanding of control engineering concepts which can be applied to many different fields. |
Transverse flux machines with distributed windings for in-wheel applications | A transverse flux machine (TFM) useful for in-wheel motor applications is presented. This transverse flux permanent magnet motor is designed to achieve a high torque-to-weight ratio and is suitable for direct-drive wheel applications. In a conventional TFM, the phases are stacked under each other, which increases the axial length of the machine. The idea of this design is to reduce the axial length of the TFM by placing the windings around the stator and shifting them from each other by 120° or 90° electrical, for a three- or two-phase machine, respectively. Therefore, a remarkable reduction in the total axial length of the machine is achieved while keeping the torque density high. This TFM is compared to another similar TFM, in which the three phases have been divided into two halves and placed opposite each other to ensure the mechanical balance and stability of the stator. The corresponding mechanical phase shifts between the phases have accordingly been taken into account. The motors are modelled in the finite-element method (FEM) program Flux3D and designed to meet the specifications of an optimisation scheme, subject to certain constraints, such as construction dimensions and electric and magnetic loading. Based on this comparison study, many recommendations are made to achieve optimum results. |
Data Imbalance Problem in Text Classification | Aiming at the ever-present problem of imbalanced data in text classification, the authors study several forms of imbalance, such as text number, class size, subclass and class fold. Several useful conclusions are drawn from a series of experiments: first, when the two classes contain almost the same number of texts, the difference in word counts becomes the major factor affecting classification accuracy; second, the improvement in classification accuracy obtainable by increasing the size of the small class is limited; third, in the case of imbalanced data, words that appear in both classes often carry strong class information, that is, class overlap will not affect the classification accuracy. |
Using a discourse-intensive pedagogy and android's app inventor for introducing computational concepts to middle school students | Past research on children and programming from the 1980s called for deepening the study of the pedagogy of programming in order to help children build better cognitive models of foundational concepts of CS. More recently, computing education researchers are beginning to recognize the need to apply the learning sciences to develop age- and grade-appropriate curricula and pedagogies for developing computational competencies among children. This paper presents the curriculum of an exploratory workshop that employed a discourse-intensive pedagogy to introduce middle school children to programming and foundational concepts of computer science through programming mobile apps in App Inventor for Android (AIA). |
Automated valet parking as part of an integrated travel assistance | The integration of automated valet parking (automated search of a parking space and execution of the parking maneuver) in a comprehensive travel assistance approach promises great benefits for a traveler. In particular, it aims at increasing comfort, optimizing travel time and improving energy efficiency for changing means of transport (where one of them is a car) within a whole multimodal travel chain. In this paper an automated valet parking system as part of a travel assistant is presented. Besides giving an overview of the overall system, the main components (namely the environment perception and automation modules of the fully automated vehicle, a mobile phone application as human machine interface and a parking space occupancy detection camera as part of the parking area infrastructure) are described. The system was successfully tested on three different parking areas, where one of these areas was located at Braunschweig main station. |
Trajectory Space: A Dual Representation for Nonrigid Structure from Motion | Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes. These bases are object dependent and therefore have to be estimated anew for each video sequence. In contrast, we propose a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. We describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. We further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera. The principal advantage of expressing deforming 3D structure in trajectory space is that we can define an object independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. We propose the use of the Discrete Cosine Transform (DCT) as the object independent basis and empirically demonstrate that it approaches Principal Component Analysis (PCA) for natural motions. We report the performance of the proposed method, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions, including piecewise rigid motion, partially nonrigid motion (such as facial expressions), and highly nonrigid motion (such as a person walking or dancing). |
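A small sketch of the object-independent DCT basis is given below: a truncated DCT-II basis over F frames is built explicitly, and a one-dimensional trajectory is represented by K coefficients instead of F unknowns. The DCT convention and the toy trajectory are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def dct_basis(F, K):
    """First K orthonormal DCT-II basis vectors over F frames (returned as columns)."""
    t = np.arange(F)
    basis = np.array([np.cos(np.pi * (2 * t + 1) * k / (2 * F)) for k in range(K)]).T
    basis[:, 0] *= 1.0 / np.sqrt(F)
    basis[:, 1:] *= np.sqrt(2.0 / F)
    return basis                       # shape (F, K)

F, K = 50, 8
Theta = dct_basis(F, K)

# a smooth 1D trajectory over F frames, represented by K DCT coefficients instead of F unknowns
trajectory = np.sin(np.linspace(0, 3, F)) + 0.05 * np.random.default_rng(0).normal(size=F)
coeffs = Theta.T @ trajectory          # projection onto the object-independent basis
reconstruction = Theta @ coeffs
print("coefficients:", K, "| reconstruction error:",
      round(float(np.linalg.norm(trajectory - reconstruction)), 3))
```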
Estimates of Insulin Sensitivity Using Glucose and C-Peptide From the Hyperglycemia and Adverse Pregnancy Outcome Glucose Tolerance Test | OBJECTIVE To determine if glucose and C-peptide values obtained as part of the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study could be used to estimate insulin sensitivity during late pregnancy. RESEARCH DESIGN AND METHODS A total of 78 women enrolled in the HAPO study were recruited for this ancillary study. Venous plasma samples were drawn after an 8- to 10-h fast (time 0) and at 30, 60, 90, and 120 min after a 75-g glucose challenge, which was performed at 24-32 weeks' gestation. Samples were analyzed for plasma glucose, insulin, and C-peptide. Insulin sensitivity was estimated using the established Matsuda and DeFronzo insulin sensitivity index for oral glucose tolerance tests (IS(OGTT)). Insulin sensitivity was also calculated from two other commonly used indexes of insulin sensitivity (that for homeostasis model assessment [IS(HOMA)] and that for quantitative insulin sensitivity check index [IS(QUICKI)]). A new insulin sensitivity index was calculated using the glucose and C-peptide concentrations at 0 and 60 min to derive IS(HOMA C-pep), IS(QUICKI C-pep), and IS(OGTT C-pep). These indexes were then correlated with insulin sensitivity estimated from the IS(OGTT). RESULTS The strongest correlation with the IS(OGTT) was obtained for IS(OGTT C-pep) (r = 0.792, P < 0.001). Further, the correlations of IS(HOMA C-pep) and IS(QUICKI C-pep) with IS(OGTT) were also significant (r = 0.676, P < 0.001 and r = 0.707, P < 0.001, respectively). CONCLUSIONS These data suggest that calculated IS(OGTT C-pep) is an excellent predictor of insulin sensitivity in pregnancy and can be used to estimate insulin sensitivity in over 25,000 women participating in the HAPO study. |
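For reference, the sketch below computes the standard glucose/insulin forms of the indexes named above (the Matsuda-DeFronzo IS(OGTT), the reciprocal of HOMA-IR, and QUICKI); the study's C-peptide variants substitute C-peptide for insulin at 0 and 60 min. The units (mg/dL and uU/mL) and the example values are conventional assumptions for illustration, not HAPO data.

```python
import numpy as np

def matsuda_isogtt(glucose, insulin):
    """Matsuda-DeFronzo whole-body insulin sensitivity index from OGTT samples.

    glucose in mg/dL and insulin in uU/mL, ordered in time with the fasting value first
    (units and sampling are conventional assumptions, not HAPO specifics).
    """
    g0, i0 = glucose[0], insulin[0]
    return 10000.0 / np.sqrt(g0 * i0 * np.mean(glucose) * np.mean(insulin))

def is_homa(g0, i0):
    """Insulin sensitivity as the reciprocal of HOMA-IR = (G0 * I0) / 405."""
    return 405.0 / (g0 * i0)

def quicki(g0, i0):
    """QUICKI = 1 / (log10(fasting insulin) + log10(fasting glucose))."""
    return 1.0 / (np.log10(i0) + np.log10(g0))

# the study's C-peptide variants substitute C-peptide for insulin at 0 and 60 min
glucose = np.array([85.0, 150.0, 160.0, 140.0, 120.0])   # mg/dL at 0/30/60/90/120 min
insulin = np.array([8.0, 60.0, 80.0, 55.0, 40.0])        # uU/mL at the same times
print(round(matsuda_isogtt(glucose, insulin), 2),
      round(is_homa(glucose[0], insulin[0]), 2),
      round(quicki(glucose[0], insulin[0]), 3))
```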
Learning Adversary-Resistant Deep Neural Networks | Deep neural networks (DNNs) have proven to be quite effective in a vast array of machine learning tasks, with recent examples in cyber security and autonomous vehicles. Despite the superior performance of DNNs in these applications, it has been recently shown that these models are susceptible to a particular type of attack that exploits a fundamental flaw in their design. This attack consists of generating particular synthetic examples referred to as adversarial samples. These samples are constructed by slightly manipulating real datapoints in order to "fool" the original DNN model, forcing it to misclassify previously correctly classified samples with high confidence. Addressing this flaw in the model is essential if DNNs are to be used in critical applications such as those in cyber security. Previous work has provided various defense mechanisms by either augmenting the training set or enhancing model complexity. However, after a thorough analysis, we discover that DNNs protected by these defense mechanisms are still susceptible to adversarial samples, indicating that there are no theoretical guarantees of resistance provided by these mechanisms. To the best of our knowledge, we are the first to investigate this issue shared across previous research work and to propose a unifying framework for protecting DNN models by integrating a data transformation module with the DNN. More importantly, we provide a theoretical guarantee for protection under our proposed framework. We evaluate our method and several other existing solutions on MNIST, CIFAR-10, and a malware dataset, to demonstrate the generality of our proposed method and its potential for handling cyber security applications. The results show that our framework provides better resistance compared to state-of-the-art solutions while experiencing negligible degradation in accuracy. |
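Adversarial samples of the kind described above are commonly generated with the fast gradient sign method; the minimal PyTorch sketch below shows that construction on a toy classifier. FGSM is used here only as a standard illustration of the attack, not as the specific attack or defense evaluated in the paper.

```python
import torch
import torch.nn as nn

def fgsm_example(model, x, y, epsilon=0.1):
    """One standard way to craft the adversarial samples described above:
    perturb the input by epsilon in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# usage with a toy classifier on 28x28 single-channel "images"
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_example(model, x, y)
print("max perturbation:", float((x_adv - x).abs().max()))   # bounded by epsilon
```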
Policy Evaluation with Temporal Differences: A Survey and Comparison (Extended Abstract) | |
Can we predict the failure of electrical cardioversion of acute atrial fibrillation? The FinCV study. | BACKGROUND
Data on predictors of failure of electrical cardioversion of acute atrial fibrillation are scarce.
METHODS
We explored 6,906 electrical cardioversions of acute (<48 hours) atrial fibrillation in 2,868 patients in a retrospective multicenter study.
RESULTS
The success rate of electrical cardioversion was 94.2%. In 26% of unsuccessful cardioversions, the cardioversion was performed successfully later. Antiarrhythmic drug therapy, short (<12 hours) duration of atrial fibrillation episode, advanced age, permanent pacemaker, history of atrial fibrillation episodes within 30 days before cardioversion, and β-blockers were independent predictors of unsuccessful electrical cardioversion. In the subgroup of patients with cardioversion of the first atrial fibrillation episode (N = 1,411), the short duration of episode (odds ratio [OR] = 2.28; 95% confidence interval [CI] 1.34-3.90, P = 0.003) and advanced age (OR = 1.03; 95% CI 1.02-1.05, P < 0.001) were the only independent predictors of unsuccessful cardioversion. After successful cardioversion, the rate of early (<30 days) clinical recurrence of atrial fibrillation was 17.3%. The index cardioversion being performed due to the first atrial fibrillation episode was the only predictor of remaining in the sinus rhythm.
CONCLUSION
A short (<12 hours) duration of acute atrial fibrillation is a significant predictor of unsuccessful cardioversion, especially during the first attack. A first atrial fibrillation episode was the only predictor of remaining in sinus rhythm. |
Evaluating WAP News Sites: The Webqual/m Approach | This paper reports on the evaluation of wireless Internet news sites using the WebQual/m instrument. From initial application in the domain of traditional Internet Web sites, the instrument has been adapted for sites delivered using the wireless application protocol (WAP). The WebQual approach is to assess the Web-site quality from the perspective of the 'voice of the customer', an approach adopted in quality function deployment. The WebQual/m instrument is used to assess customer perceptions of information, site and user-oriented qualities. In particular, the qualities of three UK-based WAP news sites are assessed via an online questionnaire. The results are reported and analysed and demonstrate considerable variations in the offerings of the news sites. The findings and their implications for mobile commerce are discussed and some conclusions and directions for further research are provided. |
Toward Naturalistic 2D-to-3D Conversion | Natural scene statistics (NSS) models have been developed that make it possible to impose useful perceptually relevant priors on the luminance, colors, and depth maps of natural scenes. We show that these models can be used to develop 3D content creation algorithms that can convert monocular 2D videos into statistically natural 3D-viewable videos. First, accurate depth information on key frames is obtained via human annotation. Then, both forward and backward motion vectors are estimated and compared to decide the initial depth values, and a compensation process is applied to further improve the depth initialization. Then, the luminance/chrominance and initial depth map are decomposed by a Gabor filter bank. Each subband of depth is modeled to produce an NSS prior term. The statistical color-depth priors are combined with the spatial smoothness constraint in the depth propagation target function as a prior regularizing term. The final depth map associated with each frame of the input 2D video is optimized by minimizing the target function over all subbands. In the end, stereoscopic frames are rendered from the color frames and their associated depth maps. We evaluated the quality of the generated 3D videos using both subjective and objective quality assessment methods. The experimental results obtained on various sequences show that the presented method outperforms several state-of-the-art 2D-to-3D conversion methods. |
The importance of the label hierarchy in hierarchical multi-label classification | We address the task of hierarchical multi-label classification (HMC). HMC is a task of structured output prediction where the classes are organized into a hierarchy and an instance may belong to multiple classes. In many problems, such as gene function prediction or prediction of ecological community structure, classes inherently follow these constraints. The potential for application of HMC was recognized by many researchers and several such methods were proposed and demonstrated to achieve good predictive performances in the past. However, there is no clear understanding of when it is favorable to consider such relationships (hierarchical and multi-label) among classes, and when this presents an unnecessary burden for classification methods. To this end, we perform a detailed comparative study over 8 datasets that have HMC properties. We investigate two important influences in HMC: the multiple labels per example and the information about the hierarchy. More specifically, we consider four machine learning tasks: multi-label classification, hierarchical multi-label classification, single-label classification and hierarchical single-label classification. To construct the predictive models, we use predictive clustering trees (a generalized form of decision trees), which are able to tackle each of the modelling tasks listed. Moreover, we investigate whether the influence of the hierarchy and the multiple labels carries over for ensemble models. For each of the tasks, we construct a single tree and two ensembles (random forest and bagging). The results reveal that the hierarchy and the multiple labels do help to obtain a better single tree model, while this is not preserved for the ensemble models. |
Fusion of laser and monocular camera data in object grid maps for vehicle environment perception | Occupancy grid maps provide a reliable vehicle environmental model and usually process data from range finding sensors. Object grid maps additionally contain information about the classes of objects, which is crucial for applications like autonomous driving. Unfortunately, they lack the precision of occupancy grid maps, since they mostly process classification results from camera data by projecting the corresponding images onto the ground plane. This paper proposes a modular framework to create precise object grid maps. The presented algorithm creates classical occupancy grid maps and object grid maps. In a combination step, it transforms both maps into the same frame of discernment based on the Dempster-Shafer theory of evidence. This allows fusing the maps to one object grid map, which contains valuable object information and at the same time benefits from the precision of the occupancy grid map. |
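The combination step rests on Dempster's rule, which the sketch below applies to a single grid cell: one mass function from a range sensor (occupied vs. free) and one from a camera-based classifier are fused over a common frame of discernment. The frame {car, pedestrian, free} and the mass values are illustrative assumptions, not quantities from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses (each summing to 1).
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# one grid cell: range-sensor evidence for "occupied" vs camera evidence for a class;
# the frame {car, pedestrian, free} and the mass values are illustrative only
anything = frozenset({"car", "pedestrian", "free"})
occupied = frozenset({"car", "pedestrian"})
m_laser = {occupied: 0.7, frozenset({"free"}): 0.1, anything: 0.2}
m_camera = {frozenset({"car"}): 0.6, anything: 0.4}
print(dempster_combine(m_laser, m_camera))
```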
A one-year longitudinal study of English and Japanese vowel production by Japanese adults and children in an English-speaking setting | The effect of age of acquisition on first- and second-language vowel production was investigated. Eight English vowels were produced by Native Japanese (NJ) adults and children as well as by age-matched Native English (NE) adults and children. Productions were recorded shortly after the NJ participants' arrival in the USA and then one year later. In agreement with previous investigations [Aoyama, et al., J. Phon. 32, 233-250 (2004)], children were able to learn more, leading to higher accuracy than adults in a year's time. Based on the spectral quality and duration comparisons, NJ adults had more accurate production at Time 1, but showed no improvement over time. The NJ children's productions, however, showed significant differences from the NE children's for English "new" vowels /ɪ/, /ε/, /ɑ/, /ʌ/ and /ʊ/ at Time 1, but produced all eight vowels in a native-like manner at Time 2. An examination of NJ speakers' productions of Japanese /i/, /a/, /u/ over time revealed significant changes for the NJ Child Group only. Japanese /i/ and /a/ showed changes in production that can be related to second language (L2) learning. The results suggest that L2 vowel production is affected importantly by age of acquisition and that there is a dynamic interaction, whereby the first and second language vowels affect each other. |
The Energy Efficiency of IRAM Architectures | Portable systems demand energy efficiency in order to maximize battery life. IRAM architectures, which combine DRAM and a processor on the same chip in a DRAM process, are more energy efficient than conventional systems. The high density of DRAM permits a much larger amount of memory on-chip than a traditional SRAM cache design in a logic process. This allows most or all IRAM memory accesses to be satisfied on-chip. Thus there is much less need to drive high-capacitance off-chip buses, which contribute significantly to the energy consumption of a system. To quantify this advantage we apply models of energy consumption in DRAM and SRAM memories to results from cache simulations of applications reflective of personal productivity tasks on low power systems. We find that IRAM memory hierarchies consume as little as 22% of the energy consumed by a conventional memory hierarchy for memory-intensive applications, while delivering comparable performance. Furthermore, the energy consumed by a system consisting of an IRAM memory hierarchy combined with an energy efficient CPU core is as little as 40% of that of the same CPU core with a traditional memory hierarchy. |
Opiates for the Matches: Matching Methods for Causal Inference | In recent years, there has been a burst of innovative work on methods for estimating causal effects using observational data. Much of this work has extended and brought a renewed focus on old approaches such as matching, which is the focus of this review. The new developments highlight an old tension in the social sciences: a focus on research design versus a focus on quantitative models. This realization, along with the renewed interest in field experiments, has marked the return of foundational questions as opposed to a fascination with the latest estimator. I use studies of get-out-the-vote interventions to exemplify this development. Without an experiment, natural experiment, a discontinuity, or some other strong design, no amount of econometric or statistical modeling can make the move from correlation to causation persuasive. INTRODUCTION Although the quantitative turn in the search for causal inferences is more than a century old in the social sciences, in recent years there has been a renewed interest in the problems associated with making causal inferences using such methods. These recent developments highlight tensions in the quantitative tradition that have been present from the beginning. There are a number of conflicting approaches, which overlap but have important distinctions. I focus here on three of them: the experimental, the model-based, and the design-based. The first is the use of randomized experiments, which in political science may go back to Gosnell (1927).1 Whether Gosnell randomized or not, Eldersveld (1956) certainly did when he conducted a randomized field experiment to study the effectiveness of canvassing by mail, telephone, and house-to-house visits on voter mobilization. But even with randomization, there is ample disagreement and confusion about exactly how such data should be analyzed—for example, is adjustment by multivariate regression unbiased? There are also concerns about external validity and whether experiments can be used to answer "interesting" or "important" questions. This latter concern appears to be common among social scientists and is sometimes harshly put. One early and suspicious reviewer of experimental methods in the social sciences recalled the words of Horace: "Parturiunt montes, nascetur ridiculus mus" (Mueller 1945).2 For observational data analysis, however, the disagreements are sharper. 1Gosnell may not have actually used randomization (Green & Gerber 2002). His 1924 get-out-the-vote experiment, described in his 1927 book, was conducted one year before Fisher's 1925 book and 11 years before Fisher's famous 1935 book on experimental design. Therefore, unsurprisingly, Gosnell's terminology is nonstandard and leads to some uncertainty about exactly what was done. A definitive answer requires a close examination of Gosnell's papers at the University of Chicago. 2"The mountains are in labor, a ridiculous mouse will be brought forth," from Horace's Epistles, Book II, Ars Poetica (The Art of Poetry). Horace is observing that some poets make great promises that result in little. 
By far the dominant method of making causal inferences in the quantitative social sciences is model-based, and the most popular model is multivariate regression. This tradition is also surprisingly old; the first use of regression to estimate treatment effects (as opposed to simply fitting a line through data) was Yule's (1899) investigation into the causes of changes in pauperism in England. By that time the understanding of regression had evolved from what Stigler (1990) calls the Gauss-Laplace synthesis. The third tradition focuses on design. Examples abound, but they can be broadly categorized as natural experiments or regression-discontinuity (RD) designs. They share in common an assumption that found data, not part of an actual field experiment, have some "as if random" component: that the assignment to treatment can be regarded as if it were random, or can be so treated after some covariate adjustment. From the beginning, some natural experiments were analyzed as if they were actual experiments (e.g., difference of means), others by matching methods (e.g., Chapin 1938), and yet others—many, many others—by instrumental variables (e.g., Yule 1899). [For an interesting note on who invented instrumental variable regression, see Stock & Trebbi (2003).] A central criticism of natural experiments is that they are not randomized experiments. In most cases, the "as if random" assumption is implausible (for reviews see Dunning 2008 and Rosenzweig & Wolpin 2000). Regression-discontinuity was first proposed by Thistlethwaite & Campbell (1960). They proposed RD as an alternative to what they called "ex post facto experiments," or what we today would call natural experiments analyzed by matching methods. More specifically, they proposed RD as an alternative to matching methods and other "as if " (conditionally) random experiments outlined by Chapin (1938) and Greenwood (1945), where the assignment mechanism is not well understood. In the case of RD, the researcher finds a sharp breakpoint that makes seemingly random distinctions between units that receive treatment and those that do not. Where does matching fit in? As we shall see, it depends on how it is used. One of the innovative intellectual developments over the past few years has been to unify all of these methods into a common mathematical and conceptual language, that of the Neyman-Rubin model (Neyman 1990 [1923], Rubin 1974). Although randomized experiments and matching estimators have long been tied to the model, recently instrumental variables (Angrist et al. 1996) and RD (Lee 2008) have also been so tied. This leads to an interesting unity of thought that makes clear that the Neyman-Rubin model is the core of the causal enterprise, and that the various methods and estimators consistent with it, although practically important, are of secondary interest. These are fighting words, because all of these techniques, particularly the clearly algorithmic ones such as matching, can be used without any ties to the Neyman-Rubin model or causality. In such cases, matching becomes nothing more than a nonparametric estimator, a method to be considered alongside CART (Breiman et al. 1984), BART (Chipman et al. 2006), kernel estimation, and a host of others. 
Matching becomes simply a way to lessen model dependence, not a method for estimating causal effects per se. For causal inference, issues of design are of utmost importance; a lot more is needed than just an algorithm. Like other methods, matching algorithms can always be used, and they usually are, even when design issues are ignored in order to obtain a nonparametric estimate from the data. Of course, in such cases, what exactly has been estimated is unclear. The Neyman-Rubin model has radical implications for work in the social sciences given current practices. According to this framework, much of the quantitative work that claims to be causal is not well posed. The questions asked are too vague, and the design is hopelessly compromised by, for example, conditioning on posttreatment variables (Cox 1958, Section 4.2; Rosenbaum 2002, pp. 73–74). The radical import of the Neyman-Rubin model may be highlighted by using it to determine how regression estimators behave when fitted to data from randomized experiments. Randomization does not justify the regression assumptions (Freedman 2008b,c). Without additional assumptions, multiple regression is not unbiased. The variance estimates from multiple regression may be arbitrarily too large or too small, even asymptotically. And for logistic regression, matters only become worse (Freedman 2008d). These are fearful conclusions. These pathologies occur even with randomization, which is supposed to be the easy case. Although the Neyman-Rubin model is currently the most prominent, and I focus on it in this review, there have obviously been many other attempts to understand causal inference (reviewed by Brady 2008). An alternative whose prominence has been growing in recent years is Pearl's (2000) work on nonparametric structural equations models (for a critique see Freedman 2004). Pearl's approach is a modern reincarnation of an old enterprise that has a rich history, including foundational work on causality in systems of structural equations by the political scientist Herbert Simon (1953). Haavelmo (1943) was the first to precisely examine issues of causality in the context of linear structural equations with random errors. As for matching itself, there is no consensus on how exactly matching ought to be done, how to measure the success of the matching procedure, and whether or not matching estimators are sufficiently robust to misspecification so as to be useful in practice (Heckman et al. 1998). To illuminate issues of general interest, I review a prominent exchange in the political science literature involving a set of get-out-the-vote (GOTV) field experiments and the use of matching estimators (Arceneaux et al. 2006; Gerber & Green 2000, 2005; Hansen & Bowers 2009; Imai 2005). The matching literature is growing rapidly, so it is impossible to summarize it in a brief review. I focus on design issues more than the technical details of exactly how matching should be done, although the basics are reviewed. Imbens & Wooldridge (2008) have provided an excellent review of recent |
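As a concrete illustration of the kind of matching estimator discussed in this review, the sketch below performs one-to-one nearest-neighbor matching on an estimated propensity score and compares the matched treatment-effect estimate with the naive difference in means, on synthetic data where treatment depends only on observed covariates. It is a textbook-style sketch under those assumptions, not the GOTV analyses reviewed here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                                   # observed covariates
p_treat = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5, 0.3]))))
treated = rng.binomial(1, p_treat).astype(bool)
y = x @ np.array([1.0, 0.5, -0.2]) + 2.0 * treated + rng.normal(size=n)   # true effect = 2.0

# propensity score: probability of treatment given covariates
pscore = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# one-to-one nearest-neighbor matching (with replacement) of treated units to controls
controls = np.flatnonzero(~treated)
nearest = np.abs(pscore[controls][None, :] - pscore[treated][:, None]).argmin(axis=1)
matches = controls[nearest]
att = np.mean(y[treated] - y[matches])
print("naive difference in means:", round(y[treated].mean() - y[~treated].mean(), 2))
print("matched estimate of the treatment effect:", round(att, 2))
```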
Chapter 3 Recent Developments in Auxiliary Particle Filtering | State space models (SSMs; sometimes termed hidden Markov models, particularly in the discrete case) are very popular statistical models for time series. Such models describe the trajectory of some system of interest as an unobserved E-valued Markov chain, known as the signal process. Let X_1 ∼ ν and X_n | (X_{n-1} = x_{n-1}) ∼ f(·|x_{n-1}) denote this process. Indirect observations are available via an observation process, {Y_n}_{n∈N}. Conditional upon X_n, Y_n is independent of the remainder of the observation and signal processes, with Y_n | (X_n = x_n) ∼ g(·|x_n). For any sequence {z_n}_{n∈N}, we write z_{i:j} = (z_i, z_{i+1}, ..., z_j). In numerous applications, we are interested in estimating, recursively in time, an analytically intractable sequence of posterior distributions {p(x_{1:n} | y_{1:n})}_{n∈N}, of the form: p(x_{1:n} | y_{1:n}) ∝ ν(x_1) g(y_1 | x_1) ∏_{k=2}^{n} f(x_k | x_{k-1}) g(y_k | x_k). |
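The bootstrap particle filter on which auxiliary variants build can be written in a few lines; the sketch below runs it on a simulated linear-Gaussian state space model using the ν, f, g structure defined above. The model parameters and particle count are illustrative choices, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 100, 500                 # time steps, particles
phi, q, r = 0.9, 0.5, 1.0       # AR(1) signal coefficient, process and observation noise std

# simulate x_t = phi * x_{t-1} + q * eps_t (signal), y_t = x_t + r * eta_t (observation)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

# bootstrap particle filter: propagate with f, weight with g, resample
particles = rng.normal(size=N)
filtered_mean = np.zeros(T)
for t in range(T):
    particles = phi * particles + q * rng.normal(size=N)      # sample from f(. | x_{t-1})
    logw = -0.5 * ((y[t] - particles) / r) ** 2               # log g(y_t | x_t) up to a constant
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]         # multinomial resampling

print("RMSE of filtered mean:", round(float(np.sqrt(np.mean((filtered_mean - x) ** 2))), 3))
```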
Modelling Exchange Rate Volatility using GARCH Models: Empirical Evidence from Arab Countries | This paper considers the generalized autoregressive conditional heteroscedastic approach to modelling exchange rate volatility in a panel of nineteen Arab countries, using daily observations over the period from 1 January 2000 to 19 November 2011. The paper applies both symmetric and asymmetric models that capture the most common stylized facts about exchange rate returns, such as volatility clustering and the leverage effect. Based on the GARCH(1,1) model, the results show that for ten out of nineteen currencies the sum of the estimated persistence coefficients exceeds one, implying that volatility is an explosive process; in contrast, volatility is quite persistent for seven currencies, a condition required for a mean-reverting variance process. Furthermore, the asymmetric EGARCH(1,1) results provide evidence of a leverage effect for the majority of currencies, indicating that negative shocks imply higher next-period volatility than positive shocks. Finally, the paper concludes that exchange rate volatility can be adequately modelled by the class of GARCH models. |
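The GARCH(1,1) recursion underlying the persistence discussion above is short enough to write out. The sketch below filters a return series through sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1} with illustrative (not estimated) parameters and reports the persistence alpha + beta, which the paper compares against one.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of GARCH(1,1):
    sigma2_t = omega + alpha * eps_{t-1}**2 + beta * sigma2_{t-1}."""
    eps = returns - returns.mean()
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                      # initialise at the sample variance
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# illustrative parameters: alpha + beta < 1 gives a mean-reverting variance process,
# alpha + beta >= 1 corresponds to the explosive/integrated case discussed above
omega, alpha, beta = 0.02, 0.08, 0.90
returns = np.random.default_rng(7).normal(0.0, 1.0, 1000)     # placeholder return series
sigma2 = garch11_variance(returns, omega, alpha, beta)
print("persistence alpha + beta =", alpha + beta)
print("mean conditional variance:", round(float(sigma2.mean()), 3))
```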
Slower progression of atherosclerosis in vein grafts harvested with 'no touch' technique compared with conventional harvesting technique in coronary artery bypass grafting: an angiographic and intravascular ultrasound study. | OBJECTIVES
In a long-term randomised coronary artery bypass grafting (CABG) study, the patency rate using a new 'no touch' (NT) vein-graft preparation technique was superior to the conventional (C) technique. This cineangiographic and intravascular ultrasound (IVUS) substudy examined possible mechanisms.
METHODS
A total of 45 patients (118 grafts) in the NT group and 46 patients (112 grafts) in the C group had patent grafts at short-term follow-up after 18 months. Thirty-seven patients (91 grafts) in the NT group and 37 patients (77 grafts) in the C group had patent grafts at long-term follow-up after 8.5 years, and were evaluated on a scale from 0 (normal) to 2 (significant stenosis) by cineangiogram. IVUS was performed in 15 NT grafts and 14 C grafts in the short-term follow-up, and 27 NT grafts and 26 C grafts in the long-term follow-up, in grafts considered normal by the cineangiogram. The grafts were evaluated with respect to lumen volume, intimal thickness, incidence of plaque and plaque components.
RESULTS
In the short-term follow-up, the cineangiogram showed more normal grafts (89.0% in the NT group compared with 75.0% in the C group), and the number of grafts with stenosis was 11.0% in the NT group compared with 25.0% in the C group (p=0.006). IVUS showed less mean intimal thickness (0.43 (0.07)mm vs 0.52 (0.08)mm; p=0.03), less grafts with considerable intimal hyperplasia (≥ 0.9 mm; 20% vs 78.6%; p=0.011) and fewer patients with considerable hyperplasia (≥ 0.9 mm; 25% vs 100%; p=0.007). In the long-term follow-up, the cineangiogram showed more normal grafts, with 91.2% in the NT group compared with 83.1% in the C group; there were fewer grafts with significant stenosis, with 7.7% in the NT group compared with 15.6% in the C group (p=0.14). IVUS showed fewer grafts containing multiple plaques (14.8% vs 50%; p=0.008), less advanced plaque with lipid (11.8% vs 63.9%; p=0.0004) and less maximal plaque thickness (1.04 (0.23)mm vs 1.32 (0.25)mm; p=0.02) in the NT group compared with the C group.
CONCLUSION
The superior long-term patency rate using the NT vein-graft technique at CABG could be explained by a significantly slower progression of atherosclerosis. |
Improving malware classification: bridging the static/dynamic gap | Malware classification systems have typically used some machine learning algorithm in conjunction with either static or dynamic features collected from the binary. Recently, more advanced malware has introduced mechanisms to avoid detection in these views by using obfuscation techniques to avoid static detection and execution-stalling techniques to avoid dynamic detection. In this paper we construct a classification framework that is able to incorporate both static and dynamic views into a unified framework in the hopes that, while a malicious executable can disguise itself in some views, disguising itself in every view while maintaining malicious intent will prove to be substantially more difficult. Our method uses kernels to place a similarity metric on each distinct view and then employs multiple kernel learning to find a weighted combination of the data sources which yields the best classification accuracy in a support vector machine classifier. Our approach opens up new avenues of malware research which will allow the research community to elegantly look at multiple facets of malware simultaneously, and which can easily be extended to integrate any new data sources that may become popular in the future. |
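A minimal sketch of the combination idea, with fixed rather than learned kernel weights, is shown below: one kernel per view, a convex combination of the two Gram matrices, and an SVM on the precomputed kernel. The features and labels are synthetic, and the 0.6/0.4 weights stand in for what multiple kernel learning would optimise.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_static = rng.normal(size=(n, 20))    # stand-in for features from static analysis
X_dynamic = rng.normal(size=(n, 30))   # stand-in for features from dynamic traces
y = (X_static[:, 0] + X_dynamic[:, 0] > 0).astype(int)   # synthetic benign/malicious labels

# one kernel per view, then a convex combination; real MKL would learn the weights
K_static = rbf_kernel(X_static)
K_dynamic = rbf_kernel(X_dynamic)
w_static, w_dynamic = 0.6, 0.4         # illustrative weights, not learned here
K = w_static * K_static + w_dynamic * K_dynamic

train, test = np.arange(150), np.arange(150, n)
clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
print("test accuracy:", clf.score(K[np.ix_(test, train)], y[test]))
```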
A Real Little Game: The Pinocchio Effect in Pervasive Play | Mobile digital technologies and networks have fueled a recent proliferation of opportunities for pervasive play in everyday spaces. In this paper, I examine how players negotiate the boundary between these pervasive games and real life. I trace the emergence of what I call “the Pinocchio effect” – the desire for a game to be transformed into real life, or conversely, for everyday life to be transformed into a "real little game.” Focusing on two examples of pervasive play – the 2001 immersive game known as the Beast, and the Go Game, an ongoing urban superhero game — I argue that gamers maximize their play experience by performing belief, rather than actually believing, in the permeability of the game-reality boundary. |
The True Face of Myelodysplastic Syndromes and Related Neoplasms in the Netherlands: Studies based on population-based registries | Background Studies with long-term follow-up of patients with myelodysplastic syndromes (MDS) based on data from nationwide population-based cancer registries are lacking. We conducted a nationwide population-based study to assess trends in incidence, initial treatment and survival in MDS patients diagnosed in the Netherlands from 2001-2010. Methods We identified 5,144 MDS patients (median age, 74 years) from the Netherlands Cancer Registry (NCR). The NCR only includes MDS cases that were confirmed by bone marrow examinations. Information regarding initial treatment decisions was available in the NCR. Results The age-standardized incidence rate of MDS was 2.3/100,000 in 2001-2005 and 2.8/100,000 in 2006-2010. The incidence increased with older age, with the highest incidence among those aged ≥80 years (32.1/100,000 in 2006-2010). Forty-nine percent of all MDS cases were unspecified. Of all patients, 89% receive no treatment or only supportive care and 8% were started on intensive therapy as initial treatment. Survival did not improve over time. Five-year relative survival was 53%, 58%, 48%, 38% and 18% in patients with refractory anemia (RA), RA with ringed sideroblasts, 5qsyndrome, refractory cytopenia with multilineage dysplasia, and RA with excess blasts, respectively. Conclusion The incidence of MDS increased over time due to improved notification and better disease awareness, and has stabilized since 2007. The classification of MDS seems challenging as almost half of the pathologically confirmed cases were unspecified. The lack of improvement in survival might be explained by the limited availability of therapeutic agents. Therefore, ameliorated management and new treatment options are warranted. INTRODUCTION The myelodysplastic syndromes (MDS) constitute a heterogeneous group of clonal hematopoietic stem cell disorders characterized by ineffective hematopoiesis and an increased risk of leukemic transformation.1 At the beginning of the new millennium, the World Health Organization (WHO) classified MDS as malignant myeloid neoplasms,2,3 and consequently MDS became reportable malignancies to population-based cancer registries as of 2001. The age-standardized incidence rate (ASR) of MDS is currently 2.0 to 3.4 per 100,000 in Western countries and the incidence increases sharply with older age.4-8 Life expectancy of patients with MDS is variable and is dependent on the MDS subtype, and several clinical and prognostic parameters.9-13 Treatment decisions also rely on clinical and prognostic parameters.14-16 Recent clinical studies have reported favorable outcomes in patients with MDS after treatment with immunomodulatory agents (e.g. lenalidomide17) or hypomethylating agents (i.e. azacitidine18 and decitabine19). Survival data derived from clinical trials can be biased, however, because of patient selection (e.g. exclusion of elderly patients with comorbidities);20 therefore, inference about the general patient population might not be made. The availability of nationwide population-based studies with long-term follow-up on incidence and survival in an unselected MDS population are lacking. 
In the few reported population-based studies on incidence and survival in MDS, the period of patient inclusion was short and the follow-up period was limited.4-6,21 Furthermore, population-based studies regarding treatment decision in the entire MDS population have not been reported previously. We have performed a nationwide population-based study in more than 5,000 newly diagnosed patients with MDS in the Netherlands from 2001 to 2010 reported to the Netherlands Cancer Registry (NCR). The aim of this study was to assess trends in incidence, initial treatment and survival among these MDS patients. PATIENTS AND METHODS The Netherlands Cancer Registry Established in 1989, the population-based nationwide NCR is maintained and hosted by the Comprehensive Cancer Centres. The NCR is based on notifications of all newly diagnosed malignancies in the Netherlands by the automated nationwide archive of histopathology and cytopathology (PALGA), to which all pathological laboratories report. The NCR also receives notifications from the national registry of hospital discharges and various hematology departments. Information on date of birth, sex, date of diagnosis, morphology, and initial treatment decision is routinely collected by trained registrars from the medical records. The registrars register the diagnosis that is given by the treating physician. Initial treatment is recorded in four categories by the NCR, namely no therapy or only supportive care, chemotherapy, chemotherapy followed by a stem cell transplantation (SCT), and other therapy. Diagnostic criteria and study population MDS was included in the NCR as of January 1, 2001 when the International Classification of Diseases for Oncology Third Edition (ICD-O-3) was implemented for case ascertainment.2 Notification of MDS is possibly incomplete in the first years after implementation of the ICD-O-3 seeing that implementation of the new WHO classification into clinical practice and notification sources of the NCR will have been delayed. Cases of MDS classified as non-malignant after 2000 will not have been notified to the NCR. The NCR exclusively includes MDS cases that were confirmed by bone marrow examinations. All MDS subtypes according to the ICD-O-3 morphology codes are included in the NCR, namely refractory anemia (RA; 9980), RA with ringed sideroblasts (RARS; 9982), RA with excess blasts (RAEB; 9983), refractory cytopenia with multilineage dysplasia (RCMD; 9985), MDS with isolated deletion 5q (5qsyndrome; 9986) and MDS not otherwise specified (MDS NOS; 9989). The ICD-O-3 is developed by the WHO and is in accordance with the disease definitions according to the third edition of the WHO classification of hematological malignancies.3 All patients diagnosed with MDS between 2001 and 2010 were identified from the NCR. Patients were observed from date of diagnosis to date of death, date of emigration or end of follow-up (i.e. February 1, 2012). Death dates were retrieved from the nationwide population registries network, which holds vital statistics of all Dutch residents. Statistical analysis ASRs of MDS were calculated per 100,000 person-years for the entire study period (2001-2010), two calendar periods (2001-2005 and 2006-2010) and year of diagnosis, using the annual mid-year population size as obtained from Statistics Netherlands. Incidence rates were age-standardized to the European standard population. ASRs were also calculated according to sex and MDS subtype. 
Besides, we calculated the age-specific incidence for five age categories. Relative survival rates (RSRs) with 95% confidence intervals (CIs) were calculated as a measure of disease-specific survival. The RSR is defined as the ratio between the observed survival in the group of patients and the expected survival of a comparable group from the general population. Expected survival was calculated using the Hakulinen method from Dutch population life tables according to age, sex and period.22 RSRs were calculated for the entire study period, the two abovementioned calendar periods and year of diagnosis. Furthermore, RSRs were calculated by MDS subtype, age category and sex. Median Kaplan-Meier overall survival (OS) was calculated during the entire study period for the latter three characteristics. Patients aged <18 years at diagnosis (n=53) and patients diagnosed at autopsy (n=3) were excluded from the survival analysis. All statistical analyses were performed with STATA version 12.0 (College Station, TX). |
What's the difference? Learning collaboratively using iPads in conventional classrooms | Since its release in 2010, Apple's iPad has attracted much attention as an affordable and flexible learning tool for all levels of education. A number of trials have been undertaken exploring the device's efficacy for specific purposes, such as improving delivery of course content and learning resources at tertiary level, and the performance of apps for meeting specialised learning needs. However, with increased mainstreaming of these devices through iPad-supported modern learning environment (MLE) and Bring Your Own Device (BYOD) programmes, data are becoming available that provide insight into how these devices function as part of regular classroom environments. This article reports an analysis of data collected over almost 3 years from nearly 100 New Zealand primary (elementary) students of different ages, who used iPads daily for most curriculum tasks. Specifically, it uses different data sources to explore how observed and recorded device design and app attributes affected the students' ability to work collaboratively. Results suggest that fundamental differences exist between iPads and other digital devices that helped these students collaborate, and that, when combined with cloud-based apps and services such as Google Docs, the iPads extended this collaboration to much wider audiences well beyond the school gate. It concludes that beyond the hype and rhetoric, exciting potential exists for this tool to support a 'blurring in the line' between learning in formal school and informal environments. |
Effects of substrate temperature and RF power on the formation of aligned nanorods in ZnO thin films | We report on the effects of substrate temperature and RF power on the formation of aligned nanorod-like morphology in ZnO thin films. ZnO thin films were sputter-deposited in a mixed Ar and N2 gas ambient at various substrate temperatures and RF powers. We find that the substrate temperature plays a more important role than RF power in the formation of ZnO nanorod-like morphology. At low substrate temperatures (below 300°C), the ZnO nanorod-like morphology does not form regardless of RF power. High RF power helps to promote the formation of aligned ZnO nanorod-like morphology. However, lower RF powers usually lead to ZnO films with better crystallinity at the same substrate temperatures in a mixed Ar and N2 gas ambient, and therefore a better photoelectrochemical response.
Functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain. | During rest, multiple cortical brain regions are functionally linked forming resting-state networks. This high level of functional connectivity within resting-state networks suggests the existence of direct neuroanatomical connections between these functionally linked brain regions to facilitate the ongoing interregional neuronal communication. White matter tracts are the structural highways of our brain, enabling information to travel quickly from one brain region to another region. In this study, we examined both the functional and structural connections of the human brain in a group of 26 healthy subjects, combining 3 Tesla resting-state functional magnetic resonance imaging time-series with diffusion tensor imaging scans. Nine consistently found functionally linked resting-state networks were retrieved from the resting-state data. The diffusion tensor imaging scans were used to reconstruct the white matter pathways between the functionally linked brain areas of these resting-state networks. Our results show that well-known anatomical white matter tracts interconnect at least eight of the nine commonly found resting-state networks, including the default mode network, the core network, primary motor and visual network, and two lateralized parietal-frontal networks. Our results suggest that the functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain. |
Load balancing system for IPTV web application virtualization | Web applications are easy to develop and maintain. It is desirable to provide Web application services to IPTV. However, many legacy set-top boxes lack sufficient computing power to run Web applications. Desktop virtualization technologies can be used to provide Web application services to legacy set-top boxes. Multiple servers and a load balancing system are required to serve many clients. This paper describes a load balancing system for IPTV Web application virtualization. The load balancing system distributes requests based on server loads and supports high availability.
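As a purely illustrative sketch of the dispatch idea described in this abstract, the snippet below routes each incoming session to the healthy server reporting the lowest load. The load metric (active sessions) and the health flag are assumptions for illustration; the paper's actual metric and high-availability mechanism are not specified here.

```python
# Illustrative sketch only: dispatch each incoming session to the server
# reporting the lowest current load, skipping servers marked unhealthy.
class Server:
    def __init__(self, name):
        self.name = name
        self.load = 0          # e.g. number of active virtualized Web-app sessions (assumption)
        self.healthy = True

def pick_server(servers):
    candidates = [s for s in servers if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    return min(candidates, key=lambda s: s.load)

servers = [Server("app-1"), Server("app-2"), Server("app-3")]
for client in range(7):
    target = pick_server(servers)
    target.load += 1
    print(f"client {client} -> {target.name} (load now {target.load})")
```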
Unregistered Multiview Mammogram Analysis with Pre-trained Deep Learning Models | We show two important findings on the use of deep convolutional neural networks (CNNs) in medical image analysis. First, we show that CNN models that are pre-trained using computer vision databases (e.g., Imagenet) are useful in medical image applications, despite the significant differences in image appearance. Second, we show that multiview classification is possible without the pre-registration of the input images. Rather, we use the high-level features produced by the CNNs trained on each view separately. Focusing on the classification of mammograms using craniocaudal (CC) and mediolateral oblique (MLO) views and their respective mass and micro-calcification segmentations of the same breast, we initially train a separate CNN model for each view and each segmentation map using an Imagenet pre-trained model. Then, using the features learned from each segmentation map and unregistered view, we train a final CNN classifier that estimates the patient's risk of developing breast cancer using the Breast Imaging-Reporting and Data System (BI-RADS) score. We test our methodology on two publicly available datasets (InBreast and DDSM), containing hundreds of cases, and show that it produces a volume under the ROC surface of over 0.9 and an area under the ROC curve (for the two-class benign versus malignant problem) of over 0.9. In general, our approach shows state-of-the-art classification results and demonstrates a new comprehensive way of addressing this challenging classification problem.
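The sketch below illustrates the general idea of the abstract (one Imagenet pre-trained feature extractor per unregistered view, high-level features concatenated, then a small joint classifier). It is not the authors' implementation: the ResNet-18 backbone, feature sizes, two-class head and dummy inputs are assumptions for illustration only.

```python
# Sketch of the general idea, not the authors' implementation: one Imagenet
# pre-trained backbone per unregistered view, high-level features concatenated,
# then a small joint classifier. Backbone choice and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def make_feature_extractor():
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    return nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer

cc_net  = make_feature_extractor()   # craniocaudal (CC) view
mlo_net = make_feature_extractor()   # mediolateral oblique (MLO) view

classifier = nn.Sequential(
    nn.Linear(512 * 2, 128), nn.ReLU(),
    nn.Linear(128, 2),                # e.g. benign vs. malignant (assumption)
)

cc_img  = torch.randn(4, 3, 224, 224)   # dummy batch standing in for CC images
mlo_img = torch.randn(4, 3, 224, 224)   # dummy batch standing in for MLO images

f_cc  = cc_net(cc_img).flatten(1)       # (4, 512) high-level CC features
f_mlo = mlo_net(mlo_img).flatten(1)     # (4, 512) high-level MLO features
logits = classifier(torch.cat([f_cc, f_mlo], dim=1))
print(logits.shape)                     # torch.Size([4, 2])
```

Note that no spatial registration between the two views is attempted anywhere: the fusion happens only at the level of the pooled feature vectors, which is the point the abstract emphasizes.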
On the Generalization of Equivariance and Convolution in Neural Networks to the Action of Compact Groups | Convolutional neural networks have been extremely successful in the image recognition domain because they ensure equivariance to translations. There have been many recent attempts to generalize this framework to other domains, including graphs and data lying on manifolds. In this paper we give a rigorous, theoretical treatment of convolution and equivariance in neural networks with respect to not just translations, but the action of any compact group. Our main result is to prove that (given some natural constraints) convolutional structure is not just a sufficient, but also a necessary condition for equivariance to the action of a compact group. Our exposition makes use of concepts from representation theory and noncommutative harmonic analysis and derives new generalized convolution formulae. |
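For orientation, the block below states the standard group convolution on a compact group and the left-translation equivariance property it guarantees; this is the textbook form from noncommutative harmonic analysis and is not necessarily identical to the paper's own generalized convolution formulae.

```latex
% Standard group convolution on a compact group $G$ with Haar measure $\mu$;
% ordinary planar convolution is the special case $G = (\mathbb{R}^2, +)$.
(f \ast g)(u) \;=\; \int_{G} f\big(u v^{-1}\big)\, g(v)\, d\mu(v), \qquad u \in G.
% Equivariance to left translation by $t \in G$, where $(L_t f)(u) = f(t^{-1}u)$:
\big(L_t f \ast g\big)(u) \;=\; \big(L_t (f \ast g)\big)(u).
```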
Benefit from the inclusion of self-treatment guidelines to a self-management programme for adults with asthma. | This study assessed the long-term efficacy of adding self-treatment guidelines to a self-management programme for adults with asthma. In this prospective randomized controlled trial, 245 patients with stable, moderate to severe asthma were included. They were randomized into a self-treatment group (group S) and a control group (group C). Both groups received self-management education. Additionally, group S received self-treatment guidelines based on peak expiratory flow (PEF) and symptoms. Outcome parameters included: asthma symptoms, quality of life, pulmonary function, and exacerbation rate. The 2-yr study was completed by 174 patients. Both groups showed an improvement in the quality of life of 7%. PEF variability decreased by 32% and 29%, and the number of outpatient visits by 25% and 18% in groups S and C, respectively. No significant differences in these parameters were found between the two groups. After 1 yr, patients in both groups perceived better control of asthma and had more self-confidence regarding their asthma. The latter improvements were significantly greater in group S as compared to group C. There were no other differences in outcome parameters between the groups. Individual self-treatment guidelines for exacerbations on top of a general self-management programme do not seem to be of additional benefit in terms of improvements in the clinical outcome of asthma. However, patients in the self-treatment group had better scores in subjective outcome measures such as perceived control of asthma and self-confidence than patients in the control group.
Implementation of continuous VLC modulation schemes on commercial LED spotlights | When modulated with a continuous signal, the intensity of light emitted by a Light Emitting Diode needs to be adjusted such that linearity is maintained between the modulating voltage and the intensity of the emitted light. This paper presents a low-cost solution that is able to maintain linearity over virtually the entire intensity range. Extending the range of linearity is accomplished by incorporating a feedback loop that compensates for the non-linear parameters of the circuit's active components, including the LED. The technique is applied to convert a commercially available LED spotlight into a device that allows for the transmission of continuous Visible Light Communication schemes. |
Improved Ring Oscillator PUF: An FPGA-friendly Secure Primitive | In this paper, we analyze ring oscillator (RO) based physical unclonable function (PUF) on FPGAs. We show that the systematic process variation adversely affects the ability of the RO-PUF to generate unique chip-signatures, and propose a compensation method to mitigate it. Moreover, a configurable ring oscillator (CRO) technique is proposed to reduce noise in PUF responses. Our compensation method could improve the uniqueness of the PUF by an amount as high as 18%. The CRO technique could produce nearly 100% error-free PUF outputs over varying environmental conditions without post-processing while consuming minimum area. |
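The toy sketch below illustrates the basic RO-PUF response mechanism the abstract builds on: pairwise comparison of ring-oscillator frequencies, followed by a generic removal of a chip-wide systematic gradient before comparison. The linear-trend removal here is only a stand-in to show why systematic variation matters; it is not the authors' specific compensation method, and all frequencies are synthetic.

```python
# Illustrative sketch: derive PUF response bits by comparing neighbouring ring-
# oscillator (RO) frequencies. The "compensation" step removes a fitted linear
# trend as a generic stand-in for systematic process variation; it is not the
# authors' specific method. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_ros = 16
positions = np.arange(n_ros)

# Hypothetical measured frequencies (MHz): random device variation plus a
# systematic across-die gradient.
device_variation = rng.normal(0, 0.5, n_ros)
systematic_gradient = 0.3 * positions
freqs = 200 + device_variation + systematic_gradient

def response_bits(f):
    # Compare adjacent ROs: bit i = 1 if RO_i is faster than RO_{i+1}.
    return (f[:-1] > f[1:]).astype(int)

raw_bits = response_bits(freqs)

# Remove the fitted linear trend before comparison (generic compensation).
trend = np.polyval(np.polyfit(positions, freqs, 1), positions)
compensated_bits = response_bits(freqs - trend)

print("raw bits:         ", raw_bits)
print("compensated bits: ", compensated_bits)
```

With the gradient present, the raw comparison is biased toward one outcome (later ROs are systematically faster), which degrades uniqueness across chips; removing the systematic component lets the device-specific variation dominate the response.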
SupportNet: solving catastrophic forgetting in class incremental learning with support data | A plain well-trained deep learning model often does not have the ability to learn new knowledge without forgetting the previously learned knowledge, which is known as catastrophic forgetting. Here we propose a novel method, SupportNet, to efficiently and effectively solve the catastrophic forgetting problem in the class incremental learning scenario. SupportNet combines the strengths of deep learning and the support vector machine (SVM), where the SVM is used to identify the support data from the old data, which are fed to the deep learning model together with the new data for further training so that the model can review the essential information of the old data when learning the new information. Two powerful consolidation regularizers are applied to stabilize the learned representation and ensure the robustness of the learned model. We validate our method with comprehensive experiments on various tasks, which show that SupportNet drastically outperforms the state-of-the-art incremental learning methods and even reaches performance similar to that of the deep learning model trained from scratch on both old and new data. Our program is accessible at: https://github.com/lykaust15/SupportNet
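The snippet below sketches only the support-data selection step described in the abstract: fit an SVM on features of the old classes and keep the support vectors as the replay set for the next incremental stage. The synthetic features stand in for the deep model's learned representation, and the kernel and regularization choices are assumptions, not the repository's settings.

```python
# Minimal sketch of the support-data selection step: fit an SVM on features of
# the old classes and keep only the support vectors as the replay set. Features
# here are synthetic stand-ins for the deep model's learned representation.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
old_features = np.vstack([rng.normal(0, 1, (200, 64)),
                          rng.normal(2, 1, (200, 64))])   # two old classes
old_labels = np.array([0] * 200 + [1] * 200)

svm = SVC(kernel="linear", C=1.0).fit(old_features, old_labels)
support_idx = svm.support_                     # indices of the support vectors

replay_x = old_features[support_idx]
replay_y = old_labels[support_idx]
print(f"kept {len(support_idx)} of {len(old_labels)} old samples as support data")
# replay_x / replay_y would then be mixed with the new-class data for training.
```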
Multicenter evaluation of the NucliSens EasyQ HIV-1 v1.1 assay for the quantitative detection of HIV-1 RNA in plasma. | The NucliSens EasyQ HIV-1 v1.1 assay (Biomerieux) is a real-time detection method combined with NASBA technology designed to measure plasma HIV-RNA. Its performance was assessed in 1008 clinical specimens collected from individuals infected with clade B (774) and non-B (234) HIV-1 variants at four European laboratories. The results were compared with those obtained using three other commercial viral load assays: Cobas Amplicor Monitor HIV-1 v1.5 (Roche), Versant HIV-1 RNA assay (Bayer) and Nuclisens HIV-1 QT (Biomerieux). Overall, the linearity, specificity and reproducibility of the EasyQ assay were comparable with those of the other tests. The correlation coefficient (R) between methodologies was 0.85 for Amplicor, 0.87 for Versant, and 0.91 for Nuclisens. The specificity of the assay was 99.4%. Of note, Versant missed 17% of specimens with non-B subtypes that could be detected by EasyQ, while Amplicor provided results similar to those of EasyQ. HIV-1 group O specimens were only detected by the EasyQ assay. In conclusion, the performance of the EasyQ assay seems to be similar to that of other HIV-1 viral load tests currently on the market, but it is more sensitive than Versant for HIV-1 non-B subtypes and shows a wider dynamic range than Amplicor. Moreover, as it incorporates the advantage of real-time detection procedures, it facilitates high throughput and short turnaround time.
Three-Phase (LC)(L)-Type Series-Resonant Converter With Capacitive Output Filter | This paper presents a three-phase (LC)(L)-type dc-dc series-resonant converter with capacitive output filter. Operation of the converter is presented using the operating waveforms and equivalent-circuit diagrams during different intervals. An approximate analysis is used, and a design procedure is presented with a design example. Intusoft simulation results for the designed converter are given for input voltage and load variations. Experimental results obtained with a 300-W converter are presented. Major advantages of this converter are that the leakage and magnetizing inductances of the high-frequency transformer are used as part of the resonant circuit and that the output-rectifier voltage is clamped to the output voltage. Also, the inverter switches operate with soft switching over a narrow frequency control range, and the tank current decreases with the load current.
SIPHON: Towards Scalable High-Interaction Physical Honeypots | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture, a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called "wormholes" distributed around the world. The resulting architecture allows a small number of physical devices to be exposed over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large-scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting about 700 MB of traffic per day over a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web interface of our devices using a total of 1826 distinct credentials, of which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports, some of which used credentials found in the recently disclosed Mirai malware.
Regulation of Mitochondrial Genome Inheritance by Autophagy and Ubiquitin-Proteasome System: Implications for Health, Fitness, and Fertility | Mitochondria, the energy-generating organelles, play a role in numerous cellular functions including adenosine triphosphate (ATP) production, cellular homeostasis, and apoptosis. Maternal inheritance of mitochondria and mitochondrial DNA (mtDNA) is universally observed in humans and most animals. In general, high levels of mitochondrial heteroplasmy might contribute to a detrimental effect on fitness and disease resistance. Therefore, a disposal of the sperm-derived mitochondria inside fertilized oocytes assures normal preimplantation embryo development. Here we summarize the current research and knowledge concerning the role of autophagic pathway and ubiquitin-proteasome-dependent proteolysis in sperm mitophagy in mammals, including humans. Current data indicate that sperm mitophagy inside the fertilized oocyte could occur along multiple degradation routes converging on autophagic clearance of paternal mitochondria. The influence of assisted reproductive therapies (ART) such as intracytoplasmic sperm injection (ICSI), mitochondrial replacement (MR), and assisted fertilization of oocytes from patients of advanced reproductive age on mitochondrial function, inheritance, and fitness and for the development and health of ART babies will be of particular interest to clinical audiences. Altogether, the study of sperm mitophagy after fertilization has implications in the timing of evolution and developmental and reproductive biology and in human health, fitness, and management of mitochondrial disease. |
Fast Adaptation in Generative Models with Generative Matching Networks | Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. We develop a new generative model called the Generative Matching Network, which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on an additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict the diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data becomes available and outperform existing state-of-the-art conditional generative models.
Context-awareness and the smart grid: Requirements and challenges | New intelligent power grids (smart grids) will be an essential way of improving efficiency in power supply and power consumption, facilitating the use of distributed and renewable resources on the supply side and providing consumers with a range of tailored services on the consumption side. The delivery of efficiencies and advanced services in a smart grid will require both a comprehensive overlay communications network and flexible software platforms that can process data from a variety of sources, especially electronic sensor networks. Parallel developments in autonomic systems, pervasive computing and context-awareness (relating in particular to data fusion, context modelling, and semantic data) could provide key elements in the development of scalable smart grid data management systems and applications that utilise a multi-technology communications network. This paper describes: (1) the communications and data management requirements of the emerging smart grid, (2) state-of-the-art techniques and systems for context-awareness and (3) a future direction towards devising a context-aware middleware platform for the smart grid, as well as associated requirements and challenges. Smart grids will transform the methods of generating electric power and the monitoring and billing of consumption. The drivers behind the development of smart grids include economic, political and technical elements. Major initiatives have been launched in Europe by the EU Commission [1] and the European Electricity Grid Initiative [2]. In the US, overall policies are set out by the National Science and Technology Council [3] while grid modernisation is specifically described in a report by the GridWise Alliance [4]. The main policy drivers for power grid development are as follows: Promote the integration of distributed renewable power sources (e.g. wind, solar, wave and tidal power, geothermal, biofuel); Provide significant reductions in carbon dioxide (CO2) emissions through the phasing-out of fossil fuel power plants. This is to help meet agreed world targets in reducing greenhouse gases and combating climate change; Promote the use of electric vehicles as an alternative to fossil-fuelled transport systems; Renew and upgrade older grid transmission infrastructure to provide greater efficiency and security of supply; Introduce two-way "smart" metering to facilitate both power saving and power production by consumers. Apart from these policy drivers, power production and distribution will also have to operate in an increasingly deregulated and competitive market environment.
Relationships between Transformational and Active Transactional Leadership and Followers' Organizational Identification: The Role of Psychological Empowerment | We examined the underlying processes through which transformational and active transactional leadership affect followers' organizational identification in a survey study. Using a sample of managers across different industries, we found that followers' psychological empowerment, including competence, impact, meaning, and self-determination, partially mediated the effect of transformational leadership and active transactional leadership on followers' organizational identification. Furthermore, transformational leadership explained variance in followers' organizational identification and psychological empowerment above and beyond active transactional leadership. These findings provide additional support for transformational leadership theory by demonstrating a motivational mechanism through which followers identify with their organizations. Theoretical contributions and practical implications are discussed.
In Vivo 3D 19F Fast Spectroscopic Imaging (F-uTSI) of Angiogenesis on Vx-2 Tumors in Rabbits Using Targeted Perfluorocarbon Emulsions | Quantitative molecular MR imaging of angiogenesis may fulfill an unmet clinical need for patient stratification by increasing efficacy of antiangiogenic therapy. In particular, perfluorocarbons targeted to αvβ3 have been used to image angiogenesis through paramagnetic markers and also direct detection by 19F MR imaging or spectroscopy [1,2,3]. 19F offers several advantages including absolute quantification, high intrinsic specificity, and no need for pre-contrast imaging. However, one of the major drawbacks of more clinically relevant 19F compounds is the large chemical shift dispersion of the multiple resonances. If imaged by a gradient or spin echo technique, the resulting images will display a large chemical shift artifact in both the read-out direction and the slice selection direction. Fluorine ultra-fast Turbo Spectroscopic Imaging (F-uTSI) has been developed to overcome these drawbacks, without sacrificing sensitivity [4,5]. Additionally, F-uTSI offers, without increasing scan time, the advantage of distinguishing various 19F compounds based on chemical shift differences, allowing for 'multi-color' imaging. Beyond the preliminary, non-targeted in vivo results already shown, herein we demonstrate with in vivo tumor models the sensitive detection of angiogenesis with the F-uTSI technique.
Aberrant global methylation patterns affect the molecular pathogenesis and prognosis of multiple myeloma. | We used genome-wide methylation microarrays to analyze differences in CpG methylation patterns in cells relevant to the pathogenesis of myeloma plasma cells (B cells, normal plasma cells, monoclonal gammopathy of undetermined significance [MGUS], presentation myeloma, and plasma cell leukemia). We show that methylation patterns in these cell types are capable of distinguishing nonmalignant from malignant cells and the main reason for this difference is hypomethylation of the genome at the transition from MGUS to presentation myeloma. In addition, gene-specific hypermethylation was evident at the myeloma stage. Differential methylation was also evident at the transition from myeloma to plasma cell leukemia with remethylation of the genome, particularly of genes involved in cell-cell signaling and cell adhesion, which may contribute to independence from the bone marrow microenvironment. There was a high degree of methylation variability within presentation myeloma samples, which was associated with cytogenetic differences between samples. More specifically, we found methylation subgroups were defined by translocations and hyperdiploidy, with t(4;14) myeloma having the greatest impact on DNA methylation. Two groups of hyperdiploid samples were identified, on the basis of unsupervised clustering, which had an impact on overall survival. Overall, DNA methylation changes significantly during disease progression and between cytogenetic subgroups. |
Development of Torque Sensor with High Sensitivity for Joint of Robot Manipulator Using 4-Bar Linkage Shape | The torque sensor is used to measure the joint torque of a robot manipulator. Previous research showed that the sensitivity and the stiffness of torque sensors have trade-off characteristics. Stiffness has to be sacrificed to increase the sensitivity of the sensor. In this research, a new torque sensor with high sensitivity (TSHS) is proposed in order to resolve this problem. The key idea of the TSHS comes from its 4-bar linkage shape in which the angular displacement of a short link is larger than that of a long link. The sensitivity of the torque sensor with a 4-bar link shape is improved without decreasing stiffness. Optimization techniques are applied to maximize the sensitivity of the sensor. An actual TSHS is constructed to verify the validity of the proposed mechanism. Experimental results show that the sensitivity of TSHS can be increased 3.5 times without sacrificing stiffness. |
A Laplacian Framework for Option Discovery in Reinforcement Learning | Representation learning and option discovery are two of the biggest challenges in reinforcement learning (RL). Proto-RL is a well-known approach for representation learning in MDPs. The representations learned with this framework are called proto-value functions (PVFs). In this paper we address the option discovery problem by showing how PVFs implicitly define options. We do so by introducing eigenpurposes, intrinsic reward functions derived from the learned representations. The options discovered from eigenpurposes traverse the principal directions of the state space. They are useful for multiple tasks because they are independent of the agents' intentions. Moreover, by capturing the diffusion process of a random walk, different options act at different time scales, making them helpful for exploration strategies. We demonstrate features of eigenpurposes in traditional tabular domains as well as in Atari 2600 games.
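The toy sketch below illustrates the eigenpurpose construction described in this abstract: eigenvectors of the graph Laplacian over the state space serve as proto-value functions, and each eigenvector e defines an intrinsic reward of the form e^T(phi(s') - phi(s)). The 6-state chain, tabular one-hot features and the choice of the first non-constant eigenvector are placeholders for illustration, not an environment or setting from the paper.

```python
# Toy sketch of the eigenpurpose idea: take eigenvectors of the graph Laplacian
# over the state space (proto-value functions) and define the intrinsic reward
# r_e(s, s') = e^T (phi(s') - phi(s)). The 6-state chain below is a placeholder.
import numpy as np

n_states = 6
A = np.zeros((n_states, n_states))
for s in range(n_states - 1):            # undirected chain: s <-> s+1
    A[s, s + 1] = A[s + 1, s] = 1

D = np.diag(A.sum(axis=1))
L = D - A                                 # combinatorial graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)      # PVFs = Laplacian eigenvectors
e = eigvecs[:, 1]                         # first non-constant eigenvector

def phi(s):
    one_hot = np.zeros(n_states)
    one_hot[s] = 1.0
    return one_hot                        # tabular (one-hot) features

def intrinsic_reward(s, s_next):
    return float(e @ (phi(s_next) - phi(s)))

for s in range(n_states - 1):
    print(f"{s} -> {s + 1}: r_e = {intrinsic_reward(s, s + 1):+.3f}")
```

An option trained to maximize this intrinsic reward moves the agent along the principal direction captured by the chosen eigenvector (here, from one end of the chain toward the other); choosing the negated eigenvector yields the option for the opposite direction.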
Connecting with Tort Law | This work guides and supports the development of each student's knowledge of the law of torts, problem-solving skills, communication skills, professional development, and attitude towards personal development and lifelong learning. This book is not a traditional practitioners' treatise on the law of torts; rather, it has been written to help students to understand and connect with their torts courses.