Dataset schema:
abstract: string, lengths 8 to 10.1k
authors: string, lengths 9 to 1.96k
title: string, lengths 6 to 367
__index_level_0__: int64, 13 to 1,000k
We present a parametric extension of the Bayesian Rough Set (BRS) model. Its properties are investigated in relation to the non-parametric BRS model, the classical Rough Set (RS) model, and the Variable Precision Rough Set (VPRS) model.
['Dominik Ślęzak', 'Wojciech Ziarko']
Variable precision Bayesian rough set model
303,670
Improved Usability through Internationalization
['Carsten Witte']
Improved Usability through Internationalization
588,716
Automatic detection and recognition of ships in satellite images is very important and has a wide array of applications. This paper concentrates on optical satellite sensors, which provide an important approach for ship monitoring. A graph-based fore/background segmentation scheme is used to extract ship candidates from optical satellite image chips after the detection step, proceeding from coarse to fine. Shadows on the ship are extracted in a CFAR scheme. Because all the parameters in the graph-based algorithm and CFAR are determined adaptively by the algorithms, no parameter-tuning problem exists in our method. Experiments on measured optical satellite images show that our method achieves a good balance between computation speed and ship extraction accuracy.
['Feng Chen', 'Wenxian Yu', 'Xingzhao Liu', 'Kaizhi Wang', 'Lin Gong', 'Wentao Lv']
Graph-based ship extraction scheme for optical satellite image
44,618
Decision trees and randomized forests are widely used in computer vision and machine learning. Standard algorithms for decision tree induction optimize the split functions one node at a time according to some splitting criteria. This greedy procedure often leads to suboptimal trees. In this paper, we present an algorithm for optimizing the split functions at all levels of the tree jointly with the leaf parameters, based on a global objective. We show that the problem of finding optimal linear-combination (oblique) splits for decision trees is related to structured prediction with latent variables, and we formulate a convex-concave upper bound on the tree's empirical loss. Computing the gradient of the proposed surrogate objective with respect to each training exemplar is O(d^2), where d is the tree depth, and thus training deep trees is feasible. The use of stochastic gradient descent for optimization enables effective training with large datasets. Experiments on several classification benchmarks demonstrate that the resulting non-greedy decision trees outperform greedy decision tree baselines.
['Mohammad Norouzi', 'Maxwell D. Collins', 'Matthew A. Johnson', 'David J. Fleet', 'Pushmeet Kohli']
Efficient non-greedy optimization of decision trees
554,638
Foreword to the Special Section on SIGGRAPH Asia 2015 Symposium on Education
['Miho Aoki']
Foreword to the Special Section on SIGGRAPH Asia 2015 Symposium on Education
854,489
An open, extensible, administrative data schema that is based on electronic design interchange format (EDIF) is presented. The schema closely reflects the hardware design process. Thus, it enables the realization of central design management services. The use of a common schema implemented with the help of an object management system allows the integrated tools to share the data covered by this schema during runtime. Hence, online consistency management is supported.
['Maria Brielmann', 'Elisabeth Kupitz']
Representing the hardware design process by a common data schema
504,176
This paper presents quantized color pack eXtension (QCPX) ISA to accelerate performance of pixel-oriented media processing applications. The QCPX ISA (with a 32 bit word size) supports two packed, quantized (reduced) 16-bit color pixels represented in a YCbCr (Y: luminance, Cr and Cb: chrominance) color format. Unlike typical multimedia instruction set extensions (e.g., MDMX, MMX, ALTIVEC), QCPX obtains substantial performance and code density improvements through implicit support for color pixel processing rather than depending solely on generic subword parallelism. To fully measure its impact, QCPX is evaluated in the context of a massively data-parallel SIMD execution platform where data parallelism is harnessed by an orthogonal mechanism. Simulation results indicate that the 32-bit QCPX ISA achieves an overall average speedup of 584% over the non-QCPX and 88% over the 32-bit MDMX-like ISA with four media applications in a same machine platform. In addition, QCPX results in a higher system utilization in excess of 95% due to a significant reduction of conditional instructions.
['Jongmyon Kim', 'D.S. Wills']
Quantized color instruction set for media-on-demand applications
116,480
Artificial Intelligence in Engineering: Diagnosis and Learning, edited by J.S. Gero, Elsevier Science Publishers, Amsterdam 1988, 421 pages, no index. Price £59.00.
['Tony Owen']
Artificial Intelligence in Engineering: Diagnosis and Learning, edited by J.S. Gero, Elsevier Science Publishers, Amsterdam 1988, 421 pages, no index. Price £59.00.
389,648
There are many uncertainty factors when an electric vehicle (EV) drives in complicated, variable circumstances, which makes the control effect of some control modes non-ideal. To ensure the robustness of the closed-loop system in the presence of uncertainties, such as parameter perturbation and unknown model dynamics, and to minimize the effect of disturbances, μ synthesis robust control is introduced in this paper and applied to the EV driving control system. A μ synthesis robust controller for EV driving is designed, ensuring robustness and performance (such as response speed and steady-state error) under perturbations of the EV battery voltage and the driving mode. The experimental results with different battery voltages and driving modes show that the μ synthesis robust controllers are superior to the traditional Proportional Integral Derivative (PID) controller in reducing steady-state tracking error and accelerating the response rate.
['Wenxin Yang', 'Xiaomin Zhou', 'Baodong Ju', 'Peng Xu']
Robust control of electric vehicle's driving system
917,458
For years, agile methods have been considered the most promising route toward successful software development, and a considerable number of published studies report on the successful use of agile methods and on the benefits companies gain from adopting them. Yet, since the world is not black or white, the question of what happened to the traditional models arises. Are traditional models replaced by agile methods? How is the transformation toward Agile managed, and, moreover, where did it start? With this paper we close a gap in the literature by studying general process use over time to investigate how traditional and agile methods are used. Is there coexistence, or do agile methods accelerate the traditional processes' extinction? The findings of our literature study comprise two major results: First, studies and reliable numbers on general process model use are rare, i.e., we lack quantitative data on actual process use and, thus, we often lack the ability to ground process-related research in practically relevant issues. Second, despite the assumed dominance of agile methods, our results clearly show that companies enact context-specific hybrid solutions in which traditional and agile development approaches are used in combination.
['Georgios Theocharis', 'Marco Kuhrmann', 'Jürgen Münch', 'Philipp Diebold']
Is Water-Scrum-Fall Reality? On the Use of Agile and Traditional Development Practices
994,342
This paper focuses on the development of a decoupling mechanism and a speed control scheme based on total sliding-mode control (TSMC) theory for a direct rotor field-oriented (DRFO) induction motor (IM). First, a robust decoupling mechanism including an adaptive flux observer and a sliding-mode current estimator is investigated to decouple the complicated flux and torque dynamics of an IM. The acquired flux angle is utilized for the DRFO object such that the dynamic behavior of the IM is like that of a separately excited dc motor. However, the control performance of the IM is still influenced seriously by the system uncertainties including electrical and mechanical parameter variation, external load disturbance, nonideal field-oriented transient responses, and unmodeled dynamics in practical applications. In order to enhance the robustness of the DRFO IM drive for high-performance applications, a TSMC scheme is constructed without the reaching phase in conventional sliding-mode control (CSMC). The control strategy is derived in the sense of Lyapunov stability theorem such that the stable tracking performance can be ensured under the occurrence of system uncertainties. In addition, numerical simulations as well as experimental results are provided to validate the effectiveness of the developed methodologies in comparison with a model reference adaptive system flux observer and a CSMC system.
['Rong-Jong Wai', 'Kuo-Min Lin']
Robust decoupled control of direct field-oriented induction motor drive
158,502
Using techniques of convex analysis, we provide a direct proof of a recent characterization of convexity given in the setting of Banach spaces in [J. Saint Raymond, J. Nonlinear Convex Anal., 14 (2013), pp. 253--262]. Our results also extend this characterization to locally convex spaces under weaker conditions.
['Rafael Correa', 'Abderrahim Hantoute', 'Pedro Pérez-Aros']
On the Klee--Saint Raymond's Characterization of Convexity
816,154
Multi-view image-based rendering consists in generating a novel view of a scene from a set of source views. In general, this works by first doing a coarse 3D reconstruction of the scene, and then using this reconstruction to establish correspondences between source and target views, followed by blending the warped views to get the final image. Unfortunately, discontinuities in the blending weights, due to scene geometry or camera placement, result in artifacts in the target view. In this paper, we show how to avoid these artifacts by imposing additional constraints on the image gradients of the novel view. We propose a variational framework in which an energy functional is derived and optimized by iteratively solving a linear system. We demonstrate this method on several structured and unstructured multi-view datasets, and show that it numerically outperforms state-of-the-art methods, and eliminates artifacts that result from visibility discontinuities.
['Grégoire Nieto', 'Frédéric Devernay', 'James L. Crowley']
Variational image-based rendering with gradient constraints
935,210
In this paper, we address the problem of computing the probability that r out of n interfering signals can be correctly received in a random access wireless system with capture. We extend previous results on the capture probability computation, and provide an expression for the distribution of the number of captured packets that is scalable with n and r. We also provide an approximate expression, that is much easier to compute and provides good results for r = 0 and r = n. Finally, we study the dependence of the system throughput performance on the multi-packet reception capabilities of the receiver.
['Andrea Zanella', 'Ramesh R. Rao', 'Michele Zorzi']
Capture analysis in wireless radio systems with multi-packet reception capabilities
453,552
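To make the quantity studied above concrete, here is a minimal Monte Carlo sketch under an assumed SIR-threshold capture model: packet i is captured when its received power exceeds b times the combined power of the other interfering signals. The threshold b, the exponential (Rayleigh-fading) power model, and all parameters are illustrative assumptions; the paper's analytical and approximate expressions are not reproduced here.

```python
# Monte Carlo sketch of an assumed capture model (illustrative, not the paper's).
import numpy as np

rng = np.random.default_rng(1)
n, b, trials = 5, 0.5, 100_000        # interfering signals, capture threshold, runs

counts = np.zeros(n + 1)
for _ in range(trials):
    p = rng.exponential(1.0, n)       # Rayleigh fading gives exponential powers
    captured = np.sum(p * (1 + b) > b * p.sum())  # p_i > b * (sum of the others)
    counts[captured] += 1

print("empirical P(r captured):", counts / trials)
```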
Service and Science
['Jim Spohrer', 'Haluk Demirkan', 'Vikas Krishna']
Service and Science
600,389
Implicit surfaces are often created by summing a collection of radial basis functions. Researchers have begun to create implicit surfaces that exactly interpolate a given set of points by solving a simple linear system to assign weights to each basis function. Due to their ability to interpolate, these implicit surfaces are more easily controllable than traditional "blobby" implicits. There are several additional forms of control over these surfaces that make them attractive for a variety of applications. Surface normals may be directly specified at any location over the surface, and this allows the modeller to pivot the normal while still having the surface pass through the constraints. The degree of smoothness of the surface can be controlled by changing the shape of the basis functions, allowing the surface to be pinched or smooth. On a point-by-point basis the modeller may decide whether a constraint point should be exactly interpolated or approximated. Applications of these implicits include shape transformation, creating surfaces from computer vision data, creation of an implicit surface from a polygonal model, and medical surface reconstruction.
['Greg Turk', 'Huong Quynh Dinh', "James F. O'Brien", 'Gary Yngve']
Implicit surfaces that interpolate
106,353
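The linear-system construction described above is easy to sketch. The following is a minimal, hedged 2-D version (not the authors' code): on-curve points are constrained to value 0, points offset along each normal to a small positive value, and thin-plate weights come from one dense solve. The kernel choice, the offset eps, and the ridge term are illustrative assumptions.

```python
# Hedged sketch: an interpolating implicit curve in 2-D via radial basis functions.
import numpy as np

def rbf_implicit(points, normals, eps=0.01):
    """Return f(x) whose zero set passes exactly through `points`."""
    centers = np.vstack([points, points + eps * normals])  # on- and off-curve constraints
    values = np.concatenate([np.zeros(len(points)), eps * np.ones(len(points))])

    def phi(r):  # thin-plate basis r^2 log r; other shapes pinch or smooth the surface
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r**2 * np.log(r), 0.0)

    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d) + 1e-9 * np.eye(len(centers)), values)  # one dense solve
    return lambda x: phi(np.linalg.norm(x - centers, axis=-1)) @ w

# Usage: constraints on the unit circle; f is ~0 at every constraint point.
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = rbf_implicit(pts, pts)  # a unit circle's outward normals equal its points
print(f(np.array([1.0, 0.0])), f(np.array([0.0, 0.0])))  # ~0 on the curve
```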
Multi-sensor diode for magnetic field and photo detection
['Chalin Sutthinet', 'Toempong Phetchakul', 'Wittaya Luanatikomkul', 'Amporn Poyai']
Multi-sensor diode for magnetic field and photo detection
669,128
Elastic graph matching has been proposed as a practical implementation of dynamic link matching, which is a neural network with dynamically evolving links between a reference model and an input image. Each node of the graph contains features that characterize the neighborhood of its location in the image. The elastic graph matching usually consists of two consecutive steps, namely a matching with a rigid grid, followed by a deformation of the grid, which is actually the elastic part. The deformation step is introduced in order to allow for some deformation, rotation, and scaling of the object to be matched. This method is applied here to the authentication of human faces where candidates claim an identity that is to be checked. The matching error as originally suggested is not powerful enough to provide satisfying results in this case. We introduce an automatic weighting of the nodes according to their significance. We also explore the significance of the elastic deformation for an application of face-based person authentication. We compare performance results obtained with and without the second matching step. Results show that the deformation step slightly increases the performance, but has lower influence than the weighting of the nodes. The best results are obtained with the combination of both aspects. The results provided by the proposed method compare favorably with two methods that require a prior geometric face normalization, namely the synergetic and eigenface approaches.
['Benoît Duc', 'Stefan Fischer', 'Josef Bigun']
Face authentication with Gabor information on deformable graphs
306,536
In this article we highlight temporal effects in information and communication technology-enabled organizational change. Examples of temporal effects are explored in the context of one organization's efforts to implement an enterprise-wide information system. Temporality is presented as having two aspects, with the first being the well-recognized, linear and measured clock time. The second aspect of time is that which is perceived--often as nonlinear--and socially defined. We find that temporal effects arise both in changes to the structure of work and in differences among groups in how time is perceived. Evidence suggests that both specific characteristics of the implementation and of the enterprise systems' technologies further exacerbate these temporal effects. We conclude with suggestions for how to incorporate a temporally reflective perspective into analysis of technology-enabled organizational change and how a temporal perspective provides insight into both the social and technical aspects of the s...
['Steve Sawyer', 'Richard Southwick']
Temporal Issues in Information and Communication Technology-Enabled Organizational Change: Evidence From an Enterprise Systems Implementation
172,630
Separation logic formalizes the idea of local reasoning for heap-manipulating programs via the frame rule and the separating conjunction P * Q, which describes states that can be split into separate parts, with one satisfying P and the other satisfying Q. In standard separation logic, separation means physical separation. In this paper, we introduce fictional separation logic, which includes more general forms of fictional separating conjunctions P * Q, where * does not require physical separation, but may also be used in situations where the memory resources described by P and Q overlap. We demonstrate, via a range of examples, how fictional separation logic can be used to reason locally and modularly about mutable abstract data types, possibly implemented using sophisticated sharing. Fictional separation logic is defined on top of standard separation logic, and both the meta-theory and the application of the logic is much simpler than earlier related approaches.
['Jonas Braband Jensen', 'Lars Birkedal']
Fictional separation logic
369,752
In this paper, we provide a formal logical account of the burden of proof and proof standards in legal reasoning. As opposed to the usual argument-based model we use a hybrid model for Inference to the Best Explanation, which uses stories or explanations as well as arguments. We use examples of real cases to show that our hybrid reasoning model allows for a natural modeling of burdens and standards of proof.
['Floris Bex', 'Douglas Walton']
Burdens and Standards of Proof for Inference to the Best Explanation
79,873
Lung cancer is the major cause of death among patients with cancer worldwide. This work is intended to develop a methodology for the diagnosis of lung nodules using images from the Image Database Consortium and Image Database Resource Initiative (LIDC–IDRI). The proposed methodology uses image processing and pattern recognition techniques. To differentiate the patterns of malignant and benign forms, we used a Minkowski functional, distance measures, representation of the vector of points measures, triangulation measures, and Feret diameters. Finally, we applied a genetic algorithm to select the best model and a support vector machine for classification. In the test stage, we applied the proposed methodology to 1405 (394 malignant and 1011 benign) nodules from the LIDC–IDRI database. The proposed methodology shows promising results for diagnosis of malignant and benign forms, achieving accuracy of 93.19 %, sensitivity of 92.75 %, and specificity of 93.33 %. The results are promising and demonstrate a good rate of correct detections using the shape features. Because early detection allows faster therapeutic intervention, and thus a more favorable prognosis for the patient, herein we propose a methodology that contributes to the area.
['Antonio Oseas de Carvalho Filho', 'Aristófanes Corrêa Silva', 'Anselmo Cardoso de Paiva', 'Rodolfo Acatauassú Nunes', 'Marcelo Gattass']
Computer-aided diagnosis system for lung nodules based on computed tomography using shape analysis, a genetic algorithm, and SVM
900,411
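Since the abstract names the model-selection machinery (a genetic algorithm choosing features for an SVM) without detail, here is a hedged, generic sketch of that pattern: a tiny GA (elitist selection plus bit-flip mutation, no crossover) evolves feature-subset masks scored by SVM cross-validation accuracy. The synthetic dataset, population size, and mutation rate are placeholder assumptions, not the paper's configuration.

```python
# Generic GA-based feature selection wrapped around an SVM (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((20, X.shape[1])) < 0.5           # random initial feature masks
for gen in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()
    children ^= rng.random(children.shape) < 0.05  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "cv acc:", round(fitness(best), 3))
```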
While motion estimation has been extensively studied in the computer vision literature, the inherent information redundancy in an image sequence has not been well utilised. In particular as many as N(N-1)/2 pairwise relative motions can be estimated efficiently from a sequence of N images. This highly redundant set of observations can be efficiently averaged resulting in fast motion estimation algorithms that are globally consistent. In this paper we demonstrate this using the underlying Lie-group structure of motion representations. The Lie-algebras of the Special Orthogonal and Special Euclidean groups are used to define averages on the Lie-group which in turn gives statistically meaningful, efficient and accurate algorithms for fusing motion information. Using multiple constraints also controls the drift in the solution due to accumulating error. The performance of the method in estimating camera motion is demonstrated on image sequences.
['Venu Madhav Govindu']
Lie-algebraic averaging for globally consistent motion estimation
373,063
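A minimal version of the Lie-algebraic averaging primitive named above can be written directly with exponential and logarithm maps on SO(3). The sketch below computes an intrinsic mean of noisy rotations by iteratively averaging residuals in the tangent space; it illustrates the averaging idea only, not the paper's full pairwise-motion fusion pipeline.

```python
# Karcher-mean sketch on SO(3): average residuals in the Lie algebra, map back.
import numpy as np
from scipy.spatial.transform import Rotation as R

def so3_mean(rotations, iters=20, tol=1e-12):
    mean = rotations[0]
    for _ in range(iters):
        # log map: express each rotation in the tangent space at the current mean
        tangent = np.mean([(mean.inv() * r).as_rotvec() for r in rotations], axis=0)
        mean = mean * R.from_rotvec(tangent)  # exp map back onto the group
        if np.linalg.norm(tangent) < tol:
            break
    return mean

true = R.from_rotvec([0.3, -0.2, 0.5])
noisy = [true * R.from_rotvec(0.05 * np.random.randn(3)) for _ in range(50)]
err = (so3_mean(noisy).inv() * true).magnitude()
print("angular error (deg):", np.degrees(err))  # small if the averaging works
```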
New Distinguishers for Reduced Round Trivium and Trivia-SC using Cube Testers.
['Anubhab Baksi', 'Subhamoy Maitra', 'Santanu Sarkar']
New Distinguishers for Reduced Round Trivium and Trivia-SC using Cube Testers.
745,638
Modeling of terrain topography is crucial for vegetated areas given that even small slopes impact and alter the radar wave interactions between the ground and the overlying vegetation. Current missions either exclude pixels with large topographic slopes or disregard the terrain topography entirely, potentially accumulating substantial modeling errors and therefore impacting the retrieval performance over such sloped pixels. The underlying terrain topography needs to be considered and modeled to obtain a truly general and accurate radar scattering model. In this paper, a flexible and modular model is developed: the vegetation is considered by a multilayered multispecies vegetation model capable of representing a wide range of vegetation cover types ranging from bare soil to dense forests. The ground is incorporated with the stabilized extended boundary condition method, allowing the representation of an $N$-layered soil structure with rough interfaces. Terrain topography is characterized by a 2-D slope with two tilt angles $(\alpha, \beta)$. Simulation results for an evergreen forest show the impact of a 2-D slope for a range of tilt angles: a $10^{\circ}$ tilt in the plane of incidence translates to a change of up to 15 dB in HH, 10 dB in VV, and 1.5 dB in HV for the total radar backscatter. Terrain topography is shown to be crucial for accurate forward modeling, especially over forested areas.
['Mariko Burgin', 'Uday K. Khankhoje', 'Xueyang Duan', 'Mahta Moghaddam']
Generalized Terrain Topography in Radar Scattering Models
701,988
The use of the lambda calculus in richer settings, possibly involving parallelism, is examined in terms of its effect on the equivalence between lambda terms. We concentrate here on Abramsky's lazy lambda calculus and we follow two directions. First, the lambda calculus is studied within a process calculus by examining the equivalence induced by Milner's encoding into the π-calculus. We give exact operational and denotational characterizations for this equivalence. Secondly, we examine Abramsky's applicative bisimulation when the lambda calculus is augmented with (well-formed) operators, i.e. symbols equipped with reduction rules describing their behaviour. Then, maximal discrimination is obtained when all operators are considered; we show that this discrimination coincides with the one given by the encoding equivalence and that the adoption of certain non-deterministic operators is sufficient and necessary to induce it.
['Davide Sangiorgi']
The Lazy Lambda Calculus in a Concurrency Scenario (Extended Abstract)
35,834
Editorial: Information fusion as a tool for forecasting/prediction - An overview
['Belur V. Dasarathy']
Editorial: Information fusion as a tool for forecasting/prediction - An overview
700,737
Traditionally, object-oriented software adopts the Observer pattern to implement reactive behavior. Its drawbacks are well-documented, and two families of alternative approaches have been proposed, extending object-oriented languages with concepts from functional reactive and dataflow programming on the one hand, and from event-driven programming on the other. The former hardly escape the functional setting; the latter do not achieve the declarativeness of more functional approaches. In this paper, we present REScala, a reactive language which integrates concepts from event-based and functional-reactive programming into the object-oriented world. REScala supports the development of reactive applications by fostering a functional declarative style which complements the advantages of object-oriented design.
['Guido Salvaneschi', 'Gerold Hintz', 'Mira Mezini']
REScala: bridging between object-oriented and functional style in reactive applications
506,983
Purpose: Wireless Multimedia Sensor Network (WMSN) is expected to be a key technology for future networks. The multimodal information, along with the very low-cost availability of camera sensor nodes, is promoting the extensive use of audio, image and video in various real-time implementations. The purpose of this paper is to study various routing issues and the effect of mobility on existing solutions for applications in the future internet.

Design/methodology/approach: This paper conducts a survey of the various methodologies for routing and the vital issues in the design of routing protocols for WMSN, and it also discusses the effect of mobility on various routing methodologies of WMSN. WMSN ubiquitously performs data acquisition, processing and routing for scalar and multimedia data in a mobile environment. The routing protocols should be adaptive in nature and should have a dynamic approach to serve effectively in the future network. Many authors have proposed sink mobility to improve the lifetime of the network. This paper discusses some effective approaches for networks where not only the sink node but also some of the sensor nodes are mobile.

Findings: During the survey, the performance and lifetime of the network are discussed; other parameters like delay, packet loss, energy consumption and the simulators used for implementation are also discussed.

Originality/value: The techniques in this paper represent a considerable solution for mobility issues in future internet applications.
['Rachana Borawake-Satao', 'Rajesh Shardanand Prasad']
Comprehensive survey on effect of mobility over routing issues in wireless multimedia sensor networks
903,709
Labor Saving and Labor Making of Value in Online Congratulatory Messages
['Jennifer G. Kim', 'Stephany Park', 'Karrie Karahalios', 'Michael B. Twidale']
Labor Saving and Labor Making of Value in Online Congratulatory Messages
666,273
There are few conceptual tools available to analyse physical spaces in terms of their support for social interactions and their potential for technological augmentation. In this paper, we describe how we used Adam Kendon's characterisation of the F-formation system of spatial organisation as a conceptual lens to analyse the social interactions between visitors and staff in a tourist information centre. We describe how the physical structures in the space encouraged and discouraged particular kinds of interactions and discuss how F-formations might be used to think about augmenting physical spaces.
['Paul Marshall', 'Yvonne Rogers', 'Nadia Pantidi']
Using F-formations to analyse spatial patterns of interaction in physical environments
221,379
This paper describes a model of the immunologic response of latent viruses and a donor kidney in a renal transplant recipient. An optimal control problem with state variable inequality constraints is considered to maintain the balance between over-suppression where latent viruses are reactivated and under-suppression where the transplanted kidney is rejected. A feedback methodology based on the model predictive control (MPC) method is proposed to design (sub)optimal treatment regimes. In addition, the problem of implementing the MPC methodology and nonlinear Kalman filter with inaccurate or incomplete observation data and long measurement periods is addressed. The results of numerical simulations show that a (sub)optimal dynamic immunosuppression therapy method can help strike a balance between the over-suppression and under-suppression of the immune system.
['Hee-Dae Kwon', 'Jeehyun Lee', 'Myoungho Yoon']
Feedback control of the immune response of renal transplant recipients with inequality constraints
613,079
Matching of Complex Scenes Based on Constrained Clustering.
['Alexander C. Loui', 'Madirakshi Das']
Matching of Complex Scenes Based on Constrained Clustering.
760,015
A major challenge for many machine learning and data mining applications is that the amounts of data and features are very large, so that low-rank approximations of the original data are often required for efficient computation. We propose new multi-level clustering based low-rank matrix approximations which are comparable to and even more compact than the Singular Value Decomposition (SVD). We utilize the cluster indicators of data clustering results to form the subspaces, hence our decomposition results are more interpretable. We further generalize our clustering based matrix decompositions to tensor decompositions that are useful in high-order data analysis. We also provide an upper bound for the approximation error of our tensor decomposition algorithm. In all experimental results, our methods significantly outperform traditional decomposition methods such as SVD and high-order SVD.
['Dijun Luo', 'Chris H. Q. Ding', 'Heng Huang']
Multi-level cluster indicator decompositions of matrices and tensors
588,727
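A single-level, simplified version of the cluster-indicator idea above can be sketched in a few lines: cluster the rows, build the indicator matrix, and project the data onto its column space. This is an illustration of the construction only, not the paper's multi-level or tensor algorithms; on generic data the rank-k SVD will usually have lower error, and the abstract's reported gains come from the full method.

```python
# Hedged sketch: low-rank approximation from k-means cluster indicators.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 50))  # toy data

k = 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
F = np.eye(k)[labels]                                  # n x k indicator matrix
X_cluster = F @ np.linalg.lstsq(F, X, rcond=None)[0]   # project X onto span(F)

U, s, Vt = np.linalg.svd(X, full_matrices=False)       # rank-k SVD baseline
X_svd = (U[:, :k] * s[:k]) @ Vt[:k]

for name, Xa in [("cluster indicator", X_cluster), ("rank-k SVD", X_svd)]:
    print(name, "relative error:", np.linalg.norm(X - Xa) / np.linalg.norm(X))
```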
In this paper we present a new approach for the automatic reconstruction of seismic horizons and the generation of a pseudo-geological time cube. Our method can accommodate user constraints and relies on the computation of a local Riemannian metric on the seismic image, whose geodesic lines correspond to seismic horizons. The parameterization chosen in our method eases some of the restrictions imposed by currently available algorithms and can thus be used with a greater variety of seismic data, such as roll-over folds and salt domes.
['Moctar Monniron', 'S. Frambati', 'Sebastien Quillon', 'Yannick Berthoumieu', 'Marc Donias']
Seismic horizon and pseudo-geological time cube extraction based on a Riemannian geodesic search
859,986
In this paper we introduce a Relaxed Dimensional Factorization (RDF) preconditioner for saddle point problems. Properties of the preconditioned matrix are analyzed and compared with those of the closely related Dimensional Splitting (DS) preconditioner recently introduced by Benzi and Guo [7]. Numerical results for a variety of finite element discretizations of both steady and unsteady incompressible flow problems indicate very good behavior of the RDF preconditioner with respect to both mesh size and viscosity.
['Michele Benzi', 'Michael K. Ng', 'Qiang Niu', 'Zhen Wang']
A Relaxed Dimensional Factorization preconditioner for the incompressible Navier-Stokes equations
196,542
Controlling cellular microenvironment to induce neural differentiation of embryonic stem cells (ESCs) remains a major challenge. We address this need by introducing a micro-engineered co-culture system that resembles embryonic development in terms of direct intercellular interactions and induces neural differentiation of ESCs. A polymeric aqueous two-phase system (ATPS)-mediated robotic microprinting technology allows precise localization of mouse ESCs (mESCs) over a layer of supporting stromal cells. mESCs proliferate over a 2-week culture period into a single colony of defined size. Physical and chemical cues from the stromal cells guide mESCs to differentiate toward specific neural lineages. We generated mESC colonies of three different sizes from 100, 250 and 500 single cells and showed that size of mESC colonies is an important factor determining the yield of neural cells. Expression of early neural cell markers nestin denoting neural stem cells, NCAM specifying neural progenitors, and β-III tubulin (TuJ) indicating post mitotic neurons escalated from day 4. Differentiation into specific neural cells astrocytes marked by GFAP, oligodendrocytes indicated by CNPase, and TH-positive dopaminergic neurons was observed during the second week of culture. Unexpectedly, analysis of protein expression revealed a disproportionate increase in neural differentiation of mESCs by increase in the colony size. For the first time, our study establishes colony size as an important regulator of fate of ESCs in this heterocellular niche. This approach of deriving neural cells may make a major impact on stem cell research for treating neurodegenerative diseases.
['Ramila Joshi', 'James Carlton Buchanan', 'Hossein Tavana']
Colony size effect on neural differentiation of embryonic stem cells microprinted on stromal cells
911,692
We present a reduction from a new logic extending van der Meyden's dynamic logic of permission (DLP) into propositional dynamic logic (PDL), providing a 2EXPTIME decision procedure and showing that all the machinery for PDL can be reused for reasoning about dynamic policies. As a side-effect, we establish that DLP is EXPTIME-complete. The logic we introduce extends the logic DLP so that the policy set can be updated depending on its current value and such an update corresponds to add/delete transitions in the model, showing similarities with van Benthem's sabotage modal logic.
['Stéphane Demri']
A Reduction from DLP to PDL
315,441
It has been shown previously that systems based on local features and relatively complex generative models, namely 1D hidden Markov models (HMMs) and pseudo-2D HMMs, are suitable for face recognition (here we mean both identification and verification). Recently a simpler generative model, namely the Gaussian mixture model (GMM), was also shown to perform well. In this paper we first propose to increase the performance of the GMM approach (without sacrificing its simplicity) through the use of local features with embedded positional information; we show that the performance obtained is comparable to 1D HMMs. Secondly, we evaluate different training techniques for both GMM and HMM based systems. We show that the traditionally used maximum likelihood (ML) training approach has problems estimating robust model parameters when there are only a few training images available; we propose to tackle this problem through the use of maximum a posteriori (MAP) training, where the lack-of-data problem can be effectively circumvented; we show that models estimated with MAP are significantly more robust and are able to generalize to adverse conditions present in the BANCA database.
['Fabien Cardinaux', 'Conrad Sanderson', 'Samy Bengio']
Face verification using adapted generative models
354,727
Design a fast Non‐Technical Loss fraud detector for smart grid
['Wenlin Han', 'Yang Xiao']
Design a fast Non‐Technical Loss fraud detector for smart grid
925,196
Recently, the World Wide Web (WWW) has been attracting much attention as the predominant method of sharing information using multimedia contents. However, when users want to provide multimedia contents on the Web, they must learn how to use authoring software or how to write HTML documents. In this paper, we propose a Web information-sharing system called "Big Blackboard", which is based on a large Web page. The large Web page can act as a metaphor for real-world bulletin boards to realize not only a one-dimensional message board but also a two-dimensional message board on the Web. Any user can easily provide multimedia contents using unlimited layout through a Web browser without needing to know HTML. In the case of displaying on a large Web page, loading Web page contents onto a Web browser may require users to wait a long time. We propose a novel method that reduces transmission cost by only loading the contents within the scope at which a user is looking at an entire Web page. The experimental results demonstrate that our method can effectively reduce user waiting time for loading a Web page.
['Takeshi Konagaya', 'Toramatsu Shintani', 'Tadachika Ozono', 'Takayuki Ito', 'Kentaro Nishi']
Big Blackboard: On a Large Web Page by Using a Fast Page Loading Method
142,123
Improving Programming Instruction with Subgoal Labeled Instructional Text
['Lauren E. Margulieux', 'Richard Catrambone']
Improving Programming Instruction with Subgoal Labeled Instructional Text
735,095
With increasing demands for indoor GIS, indoor routing and analysis are attracting attention from both the GIS and architecture worlds. This paper aims to provide executable methods in GIS software (e.g., ArcGIS) for indoor path generation and to explore the possibilities for further analysis. In this paper, two methods are proposed and implemented: Mesh and TIN. The Mesh method uses a standard-sized grid graph as the referencing network for a floor and subsequently maps movement on a 2D plane to movement along grid edges. The TIN method, on the other hand, uses a TIN as the base and generates a usable path network by applying two customized TINs. The results show value both in applying the methods in real use and in research analysis of networks in indoor environments: TIN provides a network suitable for indoor navigation and needs less storage space, compared to the Mesh method, which provides a more accurate network but requires extra storage space.
['Xin Li', 'Ihab Hamzi Hijazi', 'Mengchao Xu', 'Haibin Lv', 'Rani El Meouche']
Implementing two methods in GIS software for indoor routing: an empirical study
583,777
Local binary pattern (LBP) is widely adopted for efficient image feature description and simplicity. To describe the color images, it is required to combine the LBPs from each channel of the image. The traditional way of binary combination is to simply concatenate the LBPs from each channel, but it increases the dimensionality of the pattern. In order to cope with this problem, this paper proposes a novel method for image description with multichannel decoded LBPs. We introduce adder- and decoder-based two schemas for the combination of the LBPs from more than one channel. Image retrieval experiments are performed to observe the effectiveness of the proposed approaches and compared with the existing ways of multichannel techniques. The experiments are performed over 12 benchmark natural scene and color texture image databases, such as Corel-1k, MIT-VisTex, USPTex, Colored Brodatz, and so on. It is observed that the introduced multichannel adder- and decoder-based LBPs significantly improve the retrieval performance over each database and outperform the other multichannel-based approaches in terms of the average retrieval precision and average retrieval rate.
['Shiv Ram Dubey', 'Satish K. Singh', 'Rajat Kumar Singh']
Multichannel Decoded Local Binary Patterns for Content-Based Image Retrieval
809,777
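For readers unfamiliar with the underlying descriptor, the single-channel LBP code that the adder/decoder schemes above combine can be computed as follows. This is a generic textbook sketch (3x3 neighborhood, fixed clockwise bit order); the paper's multichannel combination rules are not reproduced.

```python
# Minimal single-channel 3x3 LBP and its 256-bin histogram.
import numpy as np

def lbp_3x3(img):
    """8-bit LBP codes for the interior pixels of a 2-D grayscale array."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                 img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):   # threshold each neighbor against center
        code |= (n >= c).astype(np.uint8) << bit
    return code

img = np.random.randint(0, 256, (8, 8))
hist = np.bincount(lbp_3x3(img).ravel(), minlength=256)  # the retrieval feature
print(hist.sum())  # 36 codes: one per interior pixel of an 8x8 image
```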
This paper presents a backward reasoning approach based on fuzzy Petri nets (FPNs). This approach takes full advantage of the structural and behavioral properties of an FPN. It can identify middle places in a vector-computational manner rather than by conventional search, improving inference efficiency. To reduce the complexity and scale of an FPN, man-machine interaction is introduced. It is suggested that the high efficiency and low costs that an inference method brings in practice play a more important role than the operational efficiency of the method itself. An example illustrates that the reasoning approach is feasible and effective.
['Jie Yuan', 'Haibo Shi', 'Chang Liu', 'Wenli Shang']
Backward concurrent reasoning based on fuzzy Petri nets
6,129
Each year, consumers carry an increasing number of gadgets on their person: mobile phones, tablets, smartwatches, etc. As a result, users must remember to recharge each device, every day. Wireless charging promises to free users from this burden, allowing devices to remain permanently unplugged. Today's wireless charging, however, is either limited to a single device, or is highly cumbersome, requiring the user to remove all of her wearable and handheld gadgets and place them on a charging pad. This paper introduces MultiSpot, a new wireless charging technology that can charge multiple devices, even as the user is wearing them or carrying them in her pocket. A MultiSpot charger acts as an access point for wireless power. When a user enters the vicinity of the MultiSpot charger, all of her gadgets start to charge automatically. We have prototyped MultiSpot and evaluated it using off-the-shelf mobile phones, smartwatches, and tablets. Our results show that MultiSpot can charge 6 devices at distances of up to 50cm.
['Lixin Shi', 'Zachary Kabelac', 'Dina Katabi', 'David J. Perreault']
Wireless Power Hotspot that Charges All of Your Devices
448,750
IP Traceback Based on Chinese Remainder Theorem.
['Lih-Chyau Wuu', 'Chi-Hsiang Hung', 'Jyun-Yan Yang']
IP Traceback Based on Chinese Remainder Theorem.
792,162
Introduction: The rapid advancement of information technology has led to the widespread adoption of educational technology known as e-learning. An increasing number of courses now incorporate digitalized learning materials and an e-learning approach (Jung, Wong, Li, Baigaltugs, & Belawati, 2011; Simelane, 2009). However, there is a difference between online learning and traditional physical classroom learning. For example, e-learning classes lack face-to-face interaction, and the learning materials they use are often designed for self-study. These differences may lead to undesirable learning outcomes (Stewart, Goodson, Miertschin, Norwood, & Ezell, 2013). Thus, it is critical to develop standards for the quality of digital learning materials to enhance learning and teaching. In terms of the development of digital materials, a framework incorporating a sequence of steps including analysis, design, development, implementation, and evaluation (ADDIE) can be used to systematically create high-quality digital material. This model can also be used to certify digital learning materials and select outstanding examples. The mechanism by which digital content gains accreditation should include a variety of standards to ensure that instructional designs meet relevant requirements. The process should also be rigorous, yielding objective, fair, and consistent outcomes (Berta, 2013). Such an accreditation process should meet the following goals: (1) Recognize outstanding and high-quality digital content. (2) Provide criteria for the evaluation of instructional designs and principles to guide instructors and curriculum development. These standards will enable instructors who do not know how to improve their online teaching to adjust their teaching methods and create appropriate digital content (Jung, 2011). (3) Systematically format digital content. Because digital content is usually designed in various formats, it is not easy to share. However, the accreditation process can systematize digital content, thus making it available to other schools and even to the community. This practice can also prevent the development of identical and overlapping materials by different schools. (4) Promote the development of e-learning courses that can be easily implemented in rural areas to overcome problems related to geography and the availability of space (Ehlers, 2012). The process of accrediting e-learning materials can be implemented by the government or by private organizations. Since 2006, the Taiwanese government has accredited e-learning materials using clear standards for online course production and for digital content. During the past decade, this certification process has been recognized by the majority of universities as well as the community in Taiwan. It also has had positive effects on the quality of e-learning. For example, national projects to promote teaching excellence have adopted the accreditation of e-learning as one of the criteria used to evaluate project outcomes. Although the availability of e-learning accreditation has elicited numerous applications for accreditation, these data have not yet been analyzed for research purposes. In terms of accredited digital content, there are 13 datasets with an average of two sets from each year between 2006 and 2014; these data were drawn from 308 courses.
The aim of this study was to analyze these data in terms of the following research questions: (a) what are the general findings about and difficulties encountered during the development of digital content? (b) does the success of application for accreditation differ significantly among subject areas? (c) are the various dimensions relevant to e-learning, i.e., teaching content and architecture, the design of teaching material, the use of computer-aided design, the media and interface design of accredited digital content, correlated with accreditation success? and (d) what are the key standards used in determinations of accreditation? …
['Tony C. T. Kuo', 'Hong-Ren Chen', 'Wu-Yuin Hwang', 'Nian-Shing Chen']
The Factors and Impacts of Large-Scale Digital Content Accreditations
552,337
Definition of a Model for Measuring the Effectiveness of Information Technology Governance: a Study of the Moderator Effect of Organizational Culture Variables
['Guilherme Wiedenhöft', 'Edimara Mezzomo Luciano', 'Mauricio Gregianin Testa']
Definition of a Model for Measuring the Effectiveness of Information Technology Governance: a Study of the Moderator Effect of Organizational Culture Variables
982,945
Real-time image transmission is crucial to an emerging class of distributed embedded systems operating in open network environments. Examples include avionics mission replanning over Link-16, security systems based on wireless camera networks, and online collaboration using camera phones. Meeting image transmission deadlines is a key challenge in such systems due to unpredictable network conditions. In this paper, we present CAMRIT, a Control-based Adaptive Middleware framework for Real-time Image Transmission in distributed real-time embedded systems. CAMRIT features a distributed feedback control loop that meets image transmission deadlines by dynamically adjusting the quality of image tiles. We derive an analytic model that captures the dynamics of a distributed middleware architecture. A control-theoretic methodology is applied to systematically design a control algorithm with analytic assurance of system stability and performance, despite uncertainties in network bandwidth. Experimental results demonstrate that CAMRIT can provide robust real-time guarantees for a representative application scenario.
['Xiaorui Wang', 'Ming Chen', 'Huang-Ming Huang', 'Venkita Subramonian', 'Chenyang Lu', 'Christopher D. Gill']
Control-Based Adaptive Middleware for Real-Time Image Transmission over Bandwidth-Constrained Networks
356,753
Stronger Approximate Singular Value Decomposition via the Block Lanczos and Power Methods
['Cameron Musco', 'Christopher Musco']
Stronger Approximate Singular Value Decomposition via the Block Lanczos and Power Methods
789,596
The SystemC language is widely used for the development of hardware modules and whole systems on chip. Hardware development for low-power applications requires power management techniques like power gating, clock gating and memory power control. SystemC does not support power-related features, which leaves a gap between power intent at the architecture level and the power specification implemented in RTL. We suggest the SCPower extension, which allows power specifications to be injected into SystemC designs and automatically generates a UPF file. The SCPower extension provides power-aware SystemC simulation that is equivalent to RTL simulation with a power specification in UPF.
['Kirill Gagarski', 'Maxim S. Petrov', 'Mikhail J. Moiseev', 'Ilya Klotchkov']
Power specification, simulation and verification of SystemC designs
976,167
Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. (2016) showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227x227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models "Plug and Play Generative Networks". PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable "condition" network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization, which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.
['Anh Nguyen', 'Jason Yosinski', 'Yoshua Bengio', 'Alexey Dosovitskiy', 'Jeff Clune']
Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space
947,982
The proliferation of social media usage has not resulted in significant social change.
['Manuel Cebrian', 'Iyad Rahwan', 'Alex Sandy Pentland']
Beyond viral
711,675
Provisioning QoS-enabled VPN services over packet-switched networks is increasingly important for service providers. Some prior works adopted a proactive service setup approach, such as measurement-based service admission control, but little attention has been given to controlling congestion and maintaining QoS after the service has been instantiated. Congestion control is a key issue for network service providers managing QoS-assured VPN services. Most studies in congestion control have focused on buffer management and tried to maintain the queue length at a preset value. This paper proposes a rate-based feedback control system that maintains preset packet loss targets for instantiated VPN services in the provider's backbone network. Specifically, the system utilizes an estimate of the packet loss probability as the feedback signal, and then applies a PI controller to throttle ingress customers' traffic rates. Through a number of simulation experiments, the transient and steady-state performance of the controller is evaluated. The numeric results show that the control system is effective in maintaining the network operation within a prescribed loss range.
['Dongli Zhang', 'Dan Ionescu']
Feedback control of packet loss for VPN services
86,491
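The control loop described above pairs naturally with a short numeric illustration. The sketch below is a toy, self-contained stand-in for the paper's system: a discrete PI controller throttles an ingress rate so that a crude loss measurement settles near a preset target. The loss model, gains, and constants are all illustrative assumptions, not the paper's network model.

```python
# Toy PI loop: drive measured packet loss toward a target by throttling ingress.
import random

target_loss, kp, ki = 0.01, 2_000.0, 20.0
capacity = 1_000.0           # bottleneck service rate, packets/s
rate, integral = 800.0, 0.0  # controlled ingress rate and integrator state

for step in range(300):
    # Crude loss model: loss appears once the offered load exceeds capacity.
    overload = max(0.0, rate - capacity) / max(rate, 1.0)
    loss = min(1.0, overload + random.uniform(0.0, 0.002))  # measurement noise
    error = loss - target_loss
    integral += error
    rate = max(100.0, rate - kp * error - ki * integral)    # PI update, clamped

print(f"final rate ~{rate:.0f} pkts/s, last measured loss {loss:.4f}")
```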
The instability of PMSM current control systems in the overmodulation range of an inverter is a well-known problem; however, its cause and mechanism have not yet been investigated thoroughly. The main contributing factors, namely the large amount of harmonic components and the voltage saturation of the inverter in this range, are already known. However, the effects of the current controller, such as the PI controller gain and the control period, have not been well considered. In this paper, we focus on the effects of the current controller on this instability. Moreover, we also propose a simple calculation method to predict this unstable phenomenon. The calculation results from our proposed method match the simulation results very well.
['Smith Lerdudomsak', 'Shinji Doki', 'Shigeru Okuma']
Analysis for Unstable Problem of PMSM Current Control System in Overmodulation Range
500,084
This paper studies a layered erasure model for two-user interference channels, which can be viewed as a simplified version of Gaussian fading interference channel. It is assumed that channel state information (CSI) is only available at receivers but not at transmitters. Under such assumption, an outer bound is derived for the capacity region of such interference channel. The new outer bound is tight in many circumstances. For the remaining open cases, the outer bound extends previous results in [1].
['Yan Zhu', 'Cong Shen']
On layered erasure interference channels without CSI at transmitters
628,426
This paper proposes, for a fixed demand traffic network problem, a route travel choice adjustment process formulated as a projected dynamical system, whose stationary points correspond to the traffic equilibria. Stability analysis is then conducted in order to investigate conditions under which the route travel choice adjustment process approaches equilibria. We also propose a discrete time algorithm, the Euler method, for the computation of the traffic equilibrium and provide convergence results. The notable feature of the algorithm is that it decomposes the traffic problem into network subproblems of special structure, each of which can then be solved simultaneously and in closed form using exact equilibration. Finally, we illustrate the computational performance of the Euler method through various numerical examples.
['Anna Nagurney', 'Ding Zhang']
Projected Dynamical Systems in the Formulation, Stability Analysis, and Computation of Fixed-Demand Traffic Network Equilibria
188,111
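The Euler method referred to above is the projected iteration x_{k+1} = P_K(x_k - a_k F(x_k)), where P_K is the projection onto the feasible set. The sketch below runs it on a hypothetical two-route fixed-demand network (our own toy cost functions, not from the paper); the projection onto the demand simplex is computed in closed form, echoing the abstract's point that the subproblems can be solved exactly.

```python
# Projected Euler iteration for a toy two-route fixed-demand network.
import numpy as np

def project_simplex(v, d=1.0):
    """Euclidean projection onto {x >= 0, sum(x) = d}, in closed form."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - d
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

demand = 10.0
cost = lambda x: np.array([1.0 + 2.0 * x[0], 5.0 + 1.0 * x[1]])  # toy route costs

x = np.array([demand, 0.0])                 # start with all flow on route 1
for k in range(1, 500):
    x = project_simplex(x - (1.0 / k) * cost(x), d=demand)  # Euler step + projection

print("flows:", x, "route costs:", cost(x))
```

At equilibrium both route costs are equal (here about 10.33, with flows near 4.67 and 5.33), which is the usual user-equilibrium condition the stationary points encode.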
Electronic voting provides accuracy and efficiency to the electoral processes. World democracies would benefit from a secure e-voting system not only to improve voter participation and trust but also to prevent electoral fraud. However, current e-voting systems are complex and have security weaknesses. In this paper, we describe a secure e-voting system for national and local elections (S-Vote). This system satisfies the important requirements of an e-voting system through state-of-the-art technologies and secure processes. S-Vote relies on homomorphic cryptography, zero-knowledge proofs, biometrics, smartcards, open-source software, and secure computers for securely and efficiently implementing the system processes over the various stages of the electoral process, without relying on online network connections. We outline the main conclusions of the pilot implementations of S-Vote that tested the main technologies and processes used. We also explain how the used technologies and processes achieve the system requirements. In conclusion, we recommend adopting S-Vote for its security, flexibility, economy, and scalability features.
['Gheith A. Abandah', 'Khalid A. Darabkh', 'Tawfiq Ammari', 'Omar Qunsul']
Secure National Electronic Voting System
567,518
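Of the building blocks listed above, the homomorphic tallying step is the easiest to illustrate. The sketch below is a textbook additive-homomorphic (Paillier-style) tally with deliberately tiny primes: multiplying ciphertexts adds the underlying votes, so the total is revealed without decrypting any single ballot. This is an insecure toy illustrating the principle, not S-Vote's actual cryptographic stack.

```python
# Toy Paillier tally (insecure parameters; illustration only).
import math, random

p, q = 293, 433                       # toy primes; real systems use ~2048-bit moduli
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                  # valid because we take g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

votes = [1, 0, 1, 1, 0, 1]            # 1 = yes, 0 = no
tally = 1
for c in (encrypt(v) for v in votes):
    tally = (tally * c) % n2          # ciphertext product = plaintext sum
print("yes votes:", decrypt(tally))   # prints 4
```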
Social virtual worlds (SVWs) have become increasingly popular spaces for social interaction. To be attractive to engage with, maintaining a sufficient base of active users is a sine qua non. Using Habbo as an example, this paper develops a framework for investigating the continuous use of social virtual worlds. Based on a detailed review of literature, we propose that a decomposed theory of planned behavior complemented with critical mass and allure of competitors would be an applicable theoretical lens to explain why users continuously engage with a social virtual world. We suggest that the social aspects are of particular importance in determining the continuous use of SVWs. This research attempts to build a theoretical foundation for further studies empirically investigating the phenomenon.
['Jani Merikivi', 'Matti Mantymaki']
Explaining the Continuous Use of Social Virtual Worlds: An Applied Theory of Planned Behavior Approach
184,881
Close Encounters of the Agent Kind: Designing Agents for Effective Training.
['Frank Dignum', 'Virginia Dignum', 'Catholijn M. Jonker']
Close Encounters of the Agent Kind: Designing Agents for Effective Training.
735,645
An Unsupervised Strategy for Event Prediction Using Social Networks.
['Derick M. de Oliveira', 'Roberto C. S. N. P. Souza', 'Denise E. F. de Brito', 'Wagner Meira', 'Gisele L. Pappa']
An Unsupervised Strategy for Event Prediction Using Social Networks.
979,704
Over the last few years a large number of security patterns have been proposed. However, this large number of patterns has created a problem in selecting patterns that are appropriate for different security requirements. In this paper, we present a selection approach for security patterns, which allows us to understand in depth the trade-offs involved in the patterns and the implications of a pattern to various security requirements. Moreover, our approach supports the search for a combination of security patterns that will meet given security requirements.
['Michael Weiss', 'Haralambos Mouratidis']
Selecting Security Patterns that Fulfill Security Requirements
125,586
In this paper we present a novel design for an efficient FPGA architecture of fast Walsh transform (FWT) for hardware implementation of pattern analysis techniques such as projection kernel calculation and feature extraction. The proposed architecture is based on distributed arithmetic (DA) principles using ROM accumulate (RAC) technique and sparse matrix factorisation. The implementation has been carried out using a hybrid design approach based on Celoxica Handel-C which is used as a wrapper for highly optimised VHDL cores. The algorithm has been implemented and verified on the Xilinx Virtex-2000E FPGA. An evaluation has also been reported based on maximum system frequency and chip area for different system parameters, and have been shown to outperform existing work in all key performance measures. Additionally, a novel functional level power analysis and modelling (FLPAM) methodology has been proposed to enable a high level estimation of power consumption.
['Shrutisagar Chandrasekaran', 'Abbes Amira']
FPGA Implementation and Power Modelling of the Fast Walsh Transform
18,987
Cluster administration toolkits simplify the deployment and management of large commodity clusters by incorporating scalable installers with management and cataloguing interfaces. Although these toolkits facilitate operating-system-level cluster administration, they typically lack tightly integrated frameworks for hardware management. Administrators who wish to leverage the standards-based hardware management capabilities commonly available in commodity clusters are left to their own devices. Most opt to write their own integration scripts, which can be a costly and inefficient process. This paper presents a case study for resolving this problem by integrating standards-based hardware management utilities into a typical open source deployment and administration framework.
['Jacob Liberman', 'Garima Kochhar', 'Arun Rajan', 'Munira Hussain', 'Onur Celebioglu']
Integrating hardware management with cluster administration toolkits
173,757
25 Gb/s 150-m Multi-Mode Fiber Transmission Using a CMOS-Driven 1.3-µm Lens-Integrated Surface-Emitting Laser
['Daichi Kawamura', 'Toshiaki Takai', 'Yong Lee', 'Kenji Kogo', 'Koichiro Adachi', 'Yasunobu Matsuoka', 'Norio Chujo', 'Reiko Mita', 'Saori Hamamura', 'Satoshi Kaneko', 'Kinya Yamazaki', 'Yoshiaki Ishigami', 'Toshiki Sugawara', 'Shinji Tsuji']
25 Gb/s 150-m Multi-Mode Fiber Transmission Using a CMOS-Driven 1.3-µm Lens-Integrated Surface-Emitting Laser
389,508
This paper presents a method for online control reconfiguration of fault-tolerant control for a dynamic positioning (DP) vessel using disturbance decoupling methods after the occurrence of thruster failures. First, a suitable linear state-space model of the DP vessel is derived, covering motions in 3 DOF. Control reconfiguration after thruster failures is then achieved by combining the faulty DP vessel model with a reconfiguration block. The paper shows that this reconfiguration problem is equivalent to a disturbance decoupling problem. To solve it, the paper follows the geometric approach. The resulting reconfiguration block generates suitable inputs for the faulty vessel based on the output of the nominal controllers. Finally, the feasibility and effectiveness of this method are demonstrated by a simulation of a DP vessel.
['Mingyu Fu', 'Jipeng Ning', 'Yushi Wei']
Fault-tolerant control of dynamic positioning vessel after thruster failures using disturbance decoupling methods
93,221
A program analysis tool can play an important role in helping users understand and improve OpenMP codes. Array privatization is one of the most effective ways to improve the performance and scalability of OpenMP programs. In this paper we present an extension to the Open64 compiler and the Dragon tool, a program analysis tool built on top of this compiler, to enable them to collect and represent information on the manner in which threads access the elements of shared arrays at run time. This information can be useful to the programmer for restructuring their code to maximize data locality, reducing false sharing, identifying program errors (as a result of unintended true sharing) or accomplishing aggressive privatization.
['Oscar R. Hernandez', 'Chunhua Liao', 'Barbara M. Chapman']
A tool to display array access patterns in OpenMP programs
825,703
In this paper we investigate the existence of attractive and uniformly locally attractive solutions for a functional nonlinear integral equation with a general kernel. We use methods and techniques of fixed point theorems and properties of measure of noncompactness. We extend and generalize results obtained by other authors in the context of fractional functional differential equations.
['Edgardo Alvarez', 'Carlos Lizama']
Attractivity for functional Volterra integral equations of convolution type
630,322
Motivation: The importance of studying biology at the system level has been well recognized, yet there is no well-defined process or consistent methodology to integrate and represent biological information at this level. To overcome this hurdle, a blending of disciplines such as computer science and biology is necessary. Results: By applying an adapted, sequential software engineering process, a complex biological system (severe acute respiratory syndrome-coronavirus viral infection) has been reverse-engineered and represented as an object-oriented software system. The scalability of this object-oriented software engineering approach indicates that we can apply this technology for the integration of large complex biological systems. Availability: A navigable web-based version of the system is freely available at http://people.musc.edu/~zhengw/SARS/Software-Process.htm Contact: [email protected] Supplementary information: Supplemental data: Table 1 and Figures 1--16.
['Daniel Shegogue', 'W. Jim Zheng']
Object-oriented biological system integration: a SARS coronavirus example
515,626
In this paper, we propose an efficient deblocking algorithm that uses block boundary characteristics and adaptive filtering in the spatial domain. In the proposed algorithm, we detect block boundaries with blocking artifacts and classify the detected block boundaries into smooth regions or complex regions based on the statistical characteristics of neighboring blocks. Thereafter, spatially adaptive filtering is performed. In smooth regions, the blocking artifact resembles a step function, so it can be reduced by a simple nonlinear 1-D 8-tap filter. In complex regions, both blocking and ringing artifacts exist, so we propose adaptive filtering based on a feedforward neural network to reduce the blocking and ringing artifacts simultaneously. The horizontal and vertical block boundaries are processed separately with different neural networks. Experimental results show that the proposed algorithm gives better results than conventional algorithms from both subjective and objective viewpoints.
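A minimal sketch of the two-step idea, under the assumption of a simple activity measure and a stand-in smoothing filter; the paper's 8-tap filter and neural network are not specified in the abstract, so both are replaced by simple placeholders here.

```python
# Classify each vertical block boundary as smooth/complex from local
# activity, then soften only the smooth boundaries (toy smoothing filter).
import numpy as np

def deblock_rows(img, block=8, thresh=4.0):
    out = img.astype(float).copy()
    for x in range(block, img.shape[1], block):
        # Activity of the pixels straddling the block boundary at column x.
        left, right = out[:, x - 2:x], out[:, x:x + 2]
        activity = np.abs(np.diff(np.hstack([left, right]), axis=1)).mean(axis=1)
        smooth = activity < thresh            # per-row smooth/complex decision
        # Smooth rows: pull the two boundary pixels toward their mean.
        avg = (out[smooth, x - 1] + out[smooth, x]) / 2.0
        out[smooth, x - 1] = (out[smooth, x - 1] + avg) / 2.0
        out[smooth, x] = (out[smooth, x] + avg) / 2.0
    return out

img = np.tile(np.arange(16) // 8 * 10, (4, 1))   # one sharp 8x8 boundary
print(deblock_rows(img)[0, 6:10])                 # step 0->10 is softened
```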
['Kee-Koo Kwon', 'Sung-Ho Im', 'Dong-Sun Lim']
Deblocking algorithm in MPEG-4 video coding using block boundary characteristics and adaptive filtering
346,265
We present a novel VLSI co-processor for real-time multiprocessor scheduling. The co-processor can be used for sophisticated static scheduling as well as for online scheduling using many different algorithms such as earliest deadline first, highest value first, or the Spring scheduling algorithm. When such an algorithm is used online it is important to assess the performance impact of the interface of the co-processor to the host system, in this case, the Spring kernel. We focus on the interface and its implications for overall scheduling performance. We show that the current VLSI chip speeds up the main portion of the scheduling operation by over three orders of magnitude and speeds up the overall scheduling operation 30 fold. The parallel VLSI architecture for scheduling is briefly presented. This architecture can be scaled for different numbers of tasks, resources, and internal word lengths. The implementation uses an advanced clocking scheme to allow further scaling using future IC technologies.
['Douglas Niehaus', 'Krithi Ramamritham', 'John A. Stankovic', 'Gary Wallace', 'Charles C. Weems', 'Wayne Burleson', 'Jason Ko']
The Spring scheduling co-processor: Design, use, and performance
144,223
Accurate monitoring of heavy metal stress in crops is of great importance for assuring agricultural productivity and food security, and remote sensing is an effective tool to address this problem. However, given that Earth observation instruments provide data at multiple scales, the choice of scale for use in such monitoring is challenging. This study focused on identifying the characteristic scale for effectively monitoring heavy metal stress in rice using the dry weight of roots (WRT) as the representative characteristic, obtained by assimilating GF-1 data into the World Food Studies (WOFOST) model. We explored and quantified the effect of the important state variable LAI (leaf area index) at various spatial scales on the simulated rice WRT to find the critical scale for heavy metal stress monitoring using statistical characteristics. Furthermore, a ratio analysis based on varied heavy metal stress levels was conducted to identify the characteristic scale. Results indicated that the critical threshold for investigating rice WRT in monitoring studies of heavy metal stress is larger than 64 m but smaller than 256 m. This finding represents a useful guideline for choosing the most appropriate imagery.
['Zhi Huang', 'Xiangnan Liu', 'Ming Jin', 'Chao Ding', 'Jiale Jiang', 'Ling Wu']
Deriving the Characteristic Scale for Effectively Monitoring Heavy Metal Stress in Rice by Assimilation of GF-1 Data with the WOFOST Model
670,531
Traffic simulation tools are becoming increasingly popular for evaluating transportation planning and traffic operations management strategies. Planners and managers may choose from a range of available simulation tools that differ in the accuracy (i.e. fidelity) of their representation of real-world demand and supply phenomena. These tools can generally be classified as microscopic, mesoscopic or macroscopic, representing a decreasing order in level of detail and realism. In this paper, we showcase TransModeler, a simulation tool that can seamlessly handle all three fidelity levels simultaneously, on the same network. This hybrid simulation approach (released first in TransModeler in 2006 and subsequently used extensively by consultants) allows agencies to simulate very large urban networks while choosing the links, segments or corridors that require a detailed, microscopic focus. We present some methodological concepts behind TransModeler, its dynamic equilibrium capabilities, and illustrative case studies on real-world networks.
['Ramachandran Balakrishna', 'Daniel Morgan', 'Howard Slavin', 'Qi Yang']
Large-scale simulation tools for transportation planning and traffic operations management
126,774
To apply semiotics to organizational analysis and information systems design, it is essential to unite two basic concepts: the sign and the norm. A sign is anything that stands for something else for some community. A norm is a generalized disposition to the world shared by members of a community. When its condition is met, a norm generates a propositional attitude which may, but not necessarily will, affect the subject's behaviour. Norms reflect regularities in the behaviour of members in an organization, allowing them to coordinate their actions. Organized behaviour is norm-governed behaviour. Signs trigger the norms, leading to more signs being produced. Both signs and norms lend themselves to empirical study. The focus in this paper is on the properties of norms, since those of signs are relatively well known. The paper discusses a number of different taxonomies of norms: formal, informal, technical; evaluative, perceptual, behavioural, cognitive; structure, action; substantive, communication and control. A semiotic analysis of information systems is adduced in this paper from the social, pragmatic, semantic, syntactic, empiric and physical perspectives. The paper finally presents a semiotic approach to information systems design by discussing the method of information modelling and systems architecture. This approach shows advantages over other traditional ones in a higher degree of separation of knowledge, and hence in the consistency, integrity and maintainability of systems.
['Ronald K. Stamper', 'Kecheng Liu', 'Mark Hafkamp', 'Yasser Ades']
Understanding the roles of signs and norms in organizations - a semiotic approach to information systems design
117,083
Implementing a Maximum Flow Algorithm: Experiments with Dynamic Trees.
['Tamás Badics', 'Endre Boros']
Implementing a Maximum Flow Algorithm: Experiments with Dynamic Trees.
770,899
Previous methods for accelerating Tanimoto queries have been based on using bit strings to represent molecules. No work has examined accelerating Tanimoto queries on real-valued descriptors, even though these offer a much more fine-grained measure of similarity between molecules. This study utilises a recently discovered reduction from Tanimoto queries to distance queries in Euclidean space to accelerate Tanimoto queries using standard metric data structures. The presented experiments show that it is possible to gain a significant speedup and that general metric data structures are better suited than a data structure tailored for Euclidean space on vectors generated from molecular data.
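The paper's exact reduction to Euclidean distance queries is not reproduced in the abstract; as a self-contained stand-in that conveys the same pruning flavour, the sketch below bounds the Tanimoto coefficient by a function of the two vector norms (since the coefficient is increasing in the dot product, and the dot product is at most the product of the norms) and skips candidates that cannot reach the threshold.

```python
# Threshold Tanimoto query over real-valued vectors with a norm-based
# pruning bound: T(x, y) <= ab / (a^2 + b^2 - ab) for a = |x|, b = |y|.
import numpy as np

def tanimoto(x, y):
    s = float(np.dot(x, y))
    return s / (np.dot(x, x) + np.dot(y, y) - s)

def tanimoto_query(query, db, t):
    """Return indices of db rows with Tanimoto(query, row) >= t."""
    a = np.linalg.norm(query)
    hits = []
    for i, y in enumerate(db):
        b = np.linalg.norm(y)
        if a * b / (a * a + b * b - a * b) < t:
            continue                     # norm bound says t is unreachable
        if tanimoto(query, y) >= t:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
db = rng.random((1000, 64))
print(tanimoto_query(db[0], db, 0.9))    # the query itself scores T = 1
```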
['T. Kristensen', 'Christian N. S. Pedersen']
Data structures for accelerating Tanimoto queries on real valued vectors
110,471
Proactive latency-aware adaptation is an approach for self-adaptive systems that improves over reactive adaptation by considering both the current and anticipated adaptation needs of the system, and taking into account the latency of adaptation tactics so that they can be started with the necessary lead time. Making an adaptation decision with these characteristics requires solving an optimization problem to select the adaptation path that maximizes an objective function over a finite look-ahead horizon. Since this is a problem of selecting adaptation actions in the context of the probabilistic behavior of the environment, Markov decision processes (MDP) are a suitable approach. However, given all the possible interactions between the different and possibly concurrent adaptation tactics, the system, and the environment, constructing the MDP is a complex task. Probabilistic model checking can be used to deal with this problem since it takes as input a formal specification of the stochastic system, which is internally translated into an MDP, and solved. One drawback of this solution is that the MDP has to be constructed every time an adaptation decision has to be made to incorporate the latest predictions of the environment behavior. In this paper we present an approach that eliminates that run-time overhead by constructing most of the MDP offline, also using formal specification. At run time, the adaptation decision is made by solving the MDP through stochastic dynamic programming, weaving in the stochastic environment model as the solution is computed. Our experimental results show that this approach reduces the adaptation decision time by an order of magnitude compared to the probabilistic model checking approach, while producing the same results.
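A minimal sketch of the run-time step, assuming toy states, tactics, and rewards; in the approach described above, most of the MDP structure is built offline from a formal specification, and the stochastic environment model is woven in while this backward sweep runs.

```python
# Finite look-ahead MDP solved by backward induction (stochastic dynamic
# programming). States, actions, transitions and rewards are invented.
def solve_horizon(states, actions, P, R, H):
    """P[s][a] -> list of (prob, next_state); R[s][a] -> immediate reward."""
    V = {s: 0.0 for s in states}          # value beyond the horizon
    policy = []
    for _ in range(H):                    # sweep backward over the horizon
        Q = {s: {a: R[s][a] + sum(p * V[ns] for p, ns in P[s][a])
                 for a in actions} for s in states}
        V = {s: max(Q[s].values()) for s in states}
        policy.insert(0, {s: max(Q[s], key=Q[s].get) for s in states})
    return V, policy

# Toy self-adaptation example: keep current capacity vs. add a server
# (a latency-incurring tactic) under a stochastically rising load.
states, actions = ["low", "high"], ["keep", "add_server"]
P = {"low":  {"keep": [(0.7, "low"), (0.3, "high")],
              "add_server": [(1.0, "low")]},
     "high": {"keep": [(1.0, "high")],
              "add_server": [(0.6, "low"), (0.4, "high")]}}
R = {"low":  {"keep": 10, "add_server": 6},
     "high": {"keep": 2,  "add_server": 1}}
V, policy = solve_horizon(states, actions, P, R, H=5)
print(policy[0], V)                       # first-step decision per state
```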
['Gabriel A. Moreno', 'Javier Cámara', 'David Garlan', 'Bradley R. Schmerl']
Efficient Decision-Making under Uncertainty for Proactive Self-Adaptation
890,437
Network information multicast has been considered extensively, either as a routing problem or, more recently, in the context of network coding. Most of the Internet bandwidth, however, is consumed by multimedia content that is amenable to lossy reconstruction. In this paper, we investigate the following fundamental question: how does one communicate media content from nodes (servers) that observe/supply the content to a set of sink nodes (clients) to realize the best possible reconstruction of the content in a rate-distortion sense? While this problem remains essentially open, this paper takes the first step by exploring the intricate entanglement of source coding and network communication within an optimization framework. In particular, we investigate the joint optimization of network communication strategies (e.g., routing or network coding) and common source coding schemes (e.g., progressive coding or more general multiple description coding). We formulate several such problems for which we are able to develop efficient polynomial-time solutions. In particular, we consider layered multicast of progressively encoded source code streams using network coding and optimal routing of balanced multiple description codes. Finally, the improvement in the overall quality of source reconstruction from using the proposed schemes is verified through simulations.
['Nima Sarshar', 'Xiaolin Wu']
Rate-Distortion Optimized Network Communication
200,746
Research on Cloud Datacenter Interconnect Technology
['Nan Chen', 'Yongbing Fan', 'Xiaowu He', 'Yi Liu', 'Qiaoling Li']
Research on Cloud Datacenter Interconnect Technology
669,561
Background: Zinc Finger Nucleases (ZFNs) are man-made restriction enzymes useful for manipulating genomes by cleaving target DNA sequences. ZFNs allow therapeutic gene correction or creation of genetically modified model organisms. ZFN specificity is not absolute; therefore, it is essential to select ZFN target sites without similar genomic off-target sites. It is important to assay for off-target cleavage events at sites similar to the target sequence.
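As a hedged illustration of the underlying search, the sketch below scans a sequence for windows within a Hamming-distance budget of a target site; the actual ZFN-site tool additionally handles paired half-sites, spacer lengths, and both strands, none of which are modelled here.

```python
# Enumerate candidate (off-)target sites: all windows of a sequence within
# max_mm mismatches of the target site (simple Hamming scan, forward strand).
def find_sites(genome, target, max_mm):
    n, m = len(genome), len(target)
    hits = []
    for i in range(n - m + 1):
        mm = sum(1 for a, b in zip(genome[i:i + m], target) if a != b)
        if mm <= max_mm:
            hits.append((i, genome[i:i + m], mm))
    return hits

seq = "ACGTGGATCCAGTACGTTGGATCAAGT"
print(find_sites(seq, "GGATCC", max_mm=1))
# -> exact site at position 4 and a 1-mismatch site at position 18
```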
['Thomas J Cradick', 'Giovanna Ambrosini', 'Christian Iseli', 'Philipp Bucher', 'Anton P. McCaffrey']
ZFN-site searches genomes for zinc finger nuclease target sites and off-target sites.
76,906
Medical devices are nowadays more and more software dependent, and software malfunctioning can lead to injuries or death for patients. Several standards have been proposed for the development and the validation of medical devices, but they establish general guidelines on the use of common software engineering activities without any indication regarding methods and techniques to assure safety and reliability. This paper takes advantage of the Hemodialysis machine case study to present a formal development process supporting most of the engineering activities required by the standards, and provides rigorous approaches for system validation and verification. The process is based on the Abstract State Machine formal method and its model refinement principle.
['Paolo Arcaini', 'Silvia Bonfanti', 'Angelo Michele Gargantini', 'Elvinia Riccobene']
How to Assure Correctness and Safety of Medical Software: The Hemodialysis Machine Case Study
861,487
In this paper, we propose an accurate sampling scheme for defeating SYN flooding attacks as well as TCP portscan activity. The scheme examines TCP segments to find at least one of multiple ACK segments coming from the server. The method is simple and scalable because it achieves good detection performance, with a false positive rate close to zero, even for very low sampling rates. Our trace-based simulations show that the effectiveness of the proposed scheme relies only on the sampling rate, regardless of the sampling method.
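A rough sketch of the detection idea under simplified assumptions: packets are plain tuples rather than captures, there is a single server, and the sampling rate and alert threshold are invented numbers, not the paper's calibrated values.

```python
# For sampled TCP segments, look for at least one ACK/SYN-ACK coming back
# from the server; clients accumulating unanswered SYNs are flagged.
import random

def detect(segments, sample_rate=0.1, syn_threshold=20, seed=1):
    rng = random.Random(seed)
    pending = {}                                # client -> unanswered SYNs
    for src, dst, flags in segments:            # flags: 'SYN'/'ACK'/'SYN-ACK'
        if rng.random() > sample_rate:
            continue                            # segment not sampled
        if flags == 'SYN':
            pending[src] = pending.get(src, 0) + 1
        elif flags in ('ACK', 'SYN-ACK'):
            pending[dst] = 0                    # server answered this client
    return [c for c, n in pending.items() if n >= syn_threshold]

# A benign client completing handshakes vs. a spoofed flooder.
traffic  = [('10.0.0.5', 'srv', 'SYN'), ('srv', '10.0.0.5', 'SYN-ACK')] * 300
traffic += [('6.6.6.6', 'srv', 'SYN')] * 3000
print(detect(traffic))   # typically -> ['6.6.6.6']
```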
['Maciej Korczynski', 'Lucjan Janowski', 'Andrzej Duda']
An Accurate Sampling Scheme for Detecting SYN Flooding Attacks and Portscans
60,420
The World Wide Web has become the primary means for information dissemination. Due to the limited resources of network bandwidth, users often suffer from long waiting times. Web prefetching and Web caching are the primary approaches to reducing user-perceived access latency and improving the quality of service. In this paper, a SPN (Stochastic Petri Net) model of a Web prefetching and caching system is constructed, and based on this model, a performance analysis of the integrated Web prefetching and caching model is carried out. The performance metrics of latency and throughput are compared and analyzed theoretically. Simulations show that, compared with the caching mechanism alone, the Web prefetching mechanism can further reduce access latency and efficiently improve throughput and the hit ratio. The performance evaluation based on the SPN model provides an implementation basis for Web prefetching and caching.
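The SPN model itself is analytical; as an illustrative companion rather than the paper's method, this toy trace-driven simulation shows why prefetching can raise the hit ratio over caching alone. The cache size, trace, and one-step-ahead prefetch policy are all invented for the example.

```python
# Compare hit ratios: plain LRU cache vs. the same cache with a simple
# one-step prefetcher (on access to page p, also fetch p + 1).
from collections import OrderedDict

def simulate(trace, capacity, prefetch=False):
    cache, hits = OrderedDict(), 0
    def touch(page):
        if page in cache:
            cache.move_to_end(page)
            return True
        cache[page] = True
        if len(cache) > capacity:
            cache.popitem(last=False)          # evict least recently used
        return False
    for page in trace:
        hits += touch(page)                    # only real accesses count
        if prefetch:
            touch(page + 1)                    # speculative next-page fetch
    return hits / len(trace)

trace = [i % 50 for i in range(1, 2000)]       # mostly sequential accesses
print(simulate(trace, 10), simulate(trace, 10, prefetch=True))
```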
['Lei Shi', 'Yingjie Han', 'Xiaoguang Ding', 'Lin Wei', 'Zhimin Gu']
SPN Model for Web Prefetching and Caching
224,650
Appropriately defining and efficiently calculating similarities from large data sets are often essential in data mining, both for gaining an understanding of the data and their generating processes and for building tractable representations. Given a set of objects and their correlations, we here rely on the premise that each object is characterized by its context, i.e., its correlations to the other objects. The similarity between two objects can then be expressed in terms of the similarity between their contexts. In this way, similarity pertains to the general notion that objects are similar if they are exchangeable in the data. We propose a scalable approach for calculating all relevant similarities among objects by relating them in a correlation graph that is transformed to a similarity graph. These graphs can express rich structural properties among objects. Specifically, we show that concepts—abstractions of objects—are constituted by groups of similar objects that can be discovered by clustering the objects in the similarity graph. These principles and methods are applicable in a wide range of fields and will be demonstrated here in three domains: computational linguistics, music, and molecular biology, where the numbers of objects and correlations range from small to very large.
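A compact sketch of the core transformation, assuming cosine similarity between context rows and an invented edge threshold; the paper's actual similarity measure and clustering step may differ.

```python
# Each object's context is its row of correlations to all other objects;
# the similarity graph connects objects whose contexts are close.
import numpy as np

def similarity_graph(C, threshold=0.9):
    """C: symmetric object-by-object correlation matrix."""
    ctx = C.astype(float).copy()
    np.fill_diagonal(ctx, 0.0)         # a context excludes the object itself
    norms = np.linalg.norm(ctx, axis=1, keepdims=True)
    unit = ctx / np.where(norms == 0, 1, norms)
    S = unit @ unit.T                   # cosine similarity of contexts
    edges = [(i, j, S[i, j])
             for i in range(len(S)) for j in range(i + 1, len(S))
             if S[i, j] >= threshold]
    return S, edges

# Objects 0 and 1 never co-occur, but share a context (both correlate with
# 2 and 3), so they are exchangeable -- and end up strongly connected.
C = np.array([[1.0, 0.0, 0.8, 0.7],
              [0.0, 1.0, 0.8, 0.7],
              [0.8, 0.8, 1.0, 0.0],
              [0.7, 0.7, 0.0, 1.0]])
S, edges = similarity_graph(C)
print(edges)   # expect strong 0-1 and 2-3 edges
```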
['Olof Görnerup', 'Daniel Gillblad', 'Theodore Vasiloudis']
Domain-agnostic discovery of similarities and concepts at scale
870,619
Commercial enterprises employ data mining techniques to recommend products to their customers. Most prior research is focused on a specific domain such as movies or books, where recommendation algorithms using similarities between users and/or similarities between products usually perform reasonably well. However, when the domain isn't as specific, recommendation becomes much more difficult, because the data can be too sparse to find similar users or similar products based on purchasing history alone. To solve this problem, we propose using social network data, along with rating history, to enhance product recommendations. This paper exploits state-of-the-art collaborative filtering and social-network-based recommendation algorithms for the task of open domain recommendation. We show that when a social network can be applied, it is a strong indicator of user preference for product recommendations. However, the high precision is achieved at the cost of recall. Although the sparseness of the data may suggest that the social network is not always applicable, we present a solution to utilize the network in these cases.
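One plausible reading of the hybrid scoring, sketched with invented data and an assumed linear blend of the two signals; the paper's actual algorithms are more sophisticated, and the blend weight here is arbitrary.

```python
# Blend a collaborative-filtering score (mean rating across all raters)
# with a social score (mean rating among the user's friends).
def recommend(user, ratings, friends, alpha=0.5, top_k=3):
    # ratings: user -> {item: rating}; friends: user -> set of users
    def cf_score(item):
        vals = [r[item] for r in ratings.values() if item in r]
        return sum(vals) / len(vals) if vals else 0.0
    def social_score(item):
        vals = [ratings[f][item] for f in friends.get(user, ())
                if item in ratings.get(f, {})]
        return sum(vals) / len(vals) if vals else 0.0
    seen = set(ratings.get(user, {}))
    items = {i for r in ratings.values() for i in r} - seen
    scored = {i: alpha * cf_score(i) + (1 - alpha) * social_score(i)
              for i in items}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

ratings = {"ann": {"book": 5, "lamp": 2}, "bob": {"book": 4, "game": 5},
           "eve": {"game": 2, "lamp": 4}}
friends = {"ann": {"bob"}}
print(recommend("ann", ratings, friends))   # -> ['game']
```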
['Sarah K. Tyler', 'Yi Zhang']
Open Domain Recommendation: Social Networks and Collaborative Filtering
378,055
This study reports the design and implementation of a pattern recognition algorithm aimed at classifying electroencephalographic (EEG) signals based on a class of dynamic neural networks (NNs) described by time-delay differential equations. This kind of NN incorporates the signal windowing process used in different pattern classification methods. The development of the classifier included a new set of learning laws that considered the impact of delayed information on the classifier structure. Both the training and the validation processes were completely designed and evaluated in this study. The training method for this kind of NN was obtained by applying Lyapunov stability analysis. The accuracy of the training process was characterized in terms of the number of delays. A parallel structure (similar to an associative memory) with fixed weights (obtained after training) was used to execute the validation stage. Two methods were considered to validate the pattern classification method: a generalization-regularization process and k-fold cross validation (k = 5). Two different classes were considered: normal EEG and patients with a previously confirmed neurological diagnosis. The first class contains the EEG signals of 100 healthy patients, while the second contains information on epileptic seizures from the same number of patients. The pattern classification algorithm achieved a correct classification percentage of 92.12% using the information of the entire database. In comparison with similar pattern classification methods that considered the same database, the proposed continuous neural network (CNN) proved to achieve the same or even better correct classification results without pre-treating the raw EEG signal. This new type of classifier, working in continuous time but using delayed information from the input, seems to be a reliable option for developing an accurate classification of windowed EEG signals.
['M. Alfaro-Ponce', 'A. Argüelles', 'Isaac Chairez']
Windowed electroencephalographic signal classifier based on continuous neural networks with delays in the input
893,284
Use of extrinsic information transfer (EXIT) functions, characterizing the amplification of mutual information between the input and output of the maximum a posteriori (MAP) decoder, significantly facilitates the analysis of iterative coding schemes. Previously, EXIT functions derived for binary erasure channels (BECs) were used as an approximation for other channels. Here, we improve on this approach by introducing more accurate methods to construct EXIT functions for binary-input memoryless symmetric (BMS) channels. By defining an alternative pseudo-MAP decoder coinciding with the MAP decoder over the BEC, we provide an expression for the EXIT functions of block codes over the BEC. Furthermore, we draw a connection between the EXIT function over the BEC and the EXIT function over the BMS channel under certain conditions. This is used for deriving accurate or approximate expressions of EXIT functions over BMS channels in certain scenarios.
['Eran Sharon', 'Alexei E. Ashikhmin', 'Simon Litsyn']
EXIT functions for binary input memoryless symmetric channels
185,090
Highlights: AHP applied to decision making for automotive industry supplier selection. Use of AHP in supplier selection gives the decision maker confidence in consistency. Sensitivity analysis checks the robustness of the supplier selection decision. The proposed approach divides complex decision making into a simpler hierarchy. Purpose: The purpose of this paper is to propose a decision support model for supplier selection based on the analytic hierarchy process (AHP), using a case from the automotive industry in a developing country (Pakistan), and to perform sensitivity analysis to check the robustness of the supplier selection decision. Methodology: The model starts by identifying the main criteria (price, quality, delivery and service) using a literature review and ranking the main criteria based on experts' opinions using AHP. The second stage is the identification of sub-criteria and their ranking on the basis of the main criteria. Lastly, sensitivity analysis is performed to check the robustness of the decision using the Expert Choice software. Findings: The suppliers are selected and ranked based on the sub-criteria. Sensitivity analysis suggests the effects of changes in the main criteria on the supplier ranking. The use of AHP in supplier selection gives the decision maker confidence in the consistency and robustness of the process. Practical implications: The AHP methodology adopted in this study provides managers in the automotive industry in Pakistan with insights into the various factors that need to be considered while selecting suppliers for their organizations. The selected approach also aids them in prioritizing the criteria. Managers can utilize the hierarchical structure of the supplier selection methodology suggested in this study to rank suppliers on the basis of various factors/criteria. Originality/value: This study makes three novel contributions to the supplier selection area. First, AHP is applied to the automotive industry, and use of AHP in supplier selection gives the decision maker confidence in consistency. Second, sensitivity analysis enables understanding the effects of changes in the main criteria on the supplier ranking and helps the decision maker check robustness throughout the process. Last, we find it important to provide a simple methodology for managers in the automotive industry so that they can select the best suppliers. Moreover, this approach helps managers divide a complex decision-making problem into a simpler hierarchy.
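The AHP machinery the paper applies can be sketched directly: criterion weights come from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio (CR) check gives the decision maker the confidence the highlights mention. The judgment matrix below is illustrative, not the paper's elicited data.

```python
# AHP: weights from the principal eigenvector of a Saaty-scale pairwise
# comparison matrix, plus the consistency ratio (CR < 0.1 is the usual rule).
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # random consistency index

def ahp_weights(A):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)            # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                        # normalized priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)   # consistency index
    cr = ci / RI[n] if RI[n] else 0.0
    return w, cr

# Pairwise judgments over (price, quality, delivery, service).
A = np.array([[1,   2,   4,   5],
              [1/2, 1,   3,   4],
              [1/4, 1/3, 1,   2],
              [1/5, 1/4, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(np.round(w, 3), "CR =", round(cr, 3))   # CR < 0.1 -> acceptable
```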
['Fikri Dweiri', 'Sameer Kumar', 'Sharfuddin Ahmed Khan', 'Vipul Jain']
Designing an integrated AHP based decision support system for supplier selection in automotive industry
820,409
Optimizing Process Model Redesign
['Akhil Kumar', 'Paronkasom Indradat']
Optimizing Process Model Redesign
891,553
This paper presents a robust and accurate method for joint head pose and facial action tracking, even under challenging conditions such as varying lighting, large head movements, and fast motion. This is made possible by the combination of two types of facial features. We use locations sampled from the facial texture, whose appearance is initialized on the first frame and adapted over time, and also illumination-invariant patches located on characteristic points of the face such as the corners of the eyes or of the mouth. The first type of feature contains rich information about the global appearance of the face and thus leads to accurate tracking, while the second type guarantees robustness and stability by avoiding drift. We demonstrate our system on the Boston University Face Tracking benchmark and show that it outperforms state-of-the-art methods.
['Stéphanie Lefèvre', 'Jean-Marc Odobez']
Structure and appearance features for robust 3D facial actions tracking
76,055
For the new signal acquisition methodology of compressive sensing (CS), a challenge is to find a space in which the signal is sparse and hence recoverable faithfully. Given the nonstationarity of many natural signals such as images, the sparse space varies in the time or spatial domain. As such, CS recovery should be conducted in locally adaptive, signal-dependent spaces to counter the fact that the CS measurements are global and irrespective of signal structures. In contrast, existing CS reconstruction methods use a fixed set of bases (e.g., wavelets, DCT, and gradient spaces) for the entirety of a signal. To rectify this problem, we propose a new model-based framework to facilitate the use of adaptive bases in CS recovery. In a case study, we integrate a piecewise stationary autoregressive model into the recovery process for CS-coded images, and are able to increase the reconstruction quality by 2 to 7 dB over existing methods. The new CS recovery framework can readily incorporate prior knowledge to boost reconstruction quality.
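For contrast with the adaptive-basis idea, here is the fixed-basis baseline that such frameworks improve on, sketched as generic iterative soft-thresholding (ISTA) for recovering a sparse signal from compressive measurements. This is a standard algorithm, not the paper's model-guided method.

```python
# ISTA: recover sparse x from y = A x by gradient steps on 0.5*|Ax - y|^2
# followed by soft-thresholding (the proximal step for the l1 penalty).
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(3)
n, m, k = 200, 60, 5                       # length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) * 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small residual
```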
['Xiaolin Wu', 'Xiangjun Zhang', 'Jia Wang']
Model-Guided Adaptive Recovery of Compressive Sensing
187,996
Despite the wide use of agent-based applications in different areas of human activity, little attention has been paid to understanding how these applications are possible, taking into account that they are built by people coming from such conceptually distant fields of study as, for example, law, artificial intelligence, and software engineering. This paper aims to fill this gap by addressing the different approaches to software agents—understood as building blocks of agent-based applications—adopted in each of these fields of study and suggesting that the way to understand how these fields manage to work together in building a single agent-based application resides in seeing these agents as boundary objects.
['Migle Laukyte']
Software agents as boundary objects
26,035
We obtain a lower bound on the linear complexity profile of the power generator of pseudo-random numbers modulo a Blum integer. A different method is also proposed to estimate the linear complexity profile of the Blum-Blum-Shub (1986) generator. In particular, these results imply that lattice reduction attacks on such generators are not feasible.
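A concrete companion to the result: generate bits with the Blum-Blum-Shub power generator, u_{i+1} = u_i^2 mod M with M = pq a Blum integer (p and q congruent to 3 mod 4), and measure the linear complexity profile of the output with the Berlekamp-Massey algorithm over GF(2). The parameters are toy-sized for illustration only.

```python
# Blum-Blum-Shub bit generator plus Berlekamp-Massey over GF(2), which
# returns the linear complexity of every prefix (the "profile").
def bbs_bits(seed, p=10007, q=10039, n=256):
    M = p * q                   # both primes are 3 mod 4 -> Blum integer
    u, out = seed % M, []
    for _ in range(n):
        u = (u * u) % M         # the power generator with exponent 2
        out.append(u & 1)       # emit the least significant bit
    return out

def berlekamp_massey(s):
    C, B = [1], [1]             # current and previous connection polynomials
    L, m, profile = 0, 1, []
    for n in range(len(s)):
        d = s[n]                # discrepancy of the next symbol
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d:
            T = C[:]
            C += [0] * (len(B) + m - len(C))
            for i, b in enumerate(B):
                C[i + m] ^= b
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
        else:
            m += 1
        profile.append(L)
    return profile

bits = bbs_bits(seed=123456)
print(berlekamp_massey(bits)[-1], "of", len(bits))  # expect close to n/2
```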
['Frances Griffin', 'Igor E. Shparlinski']
On the linear complexity profile of the power generator
51,460
Ontologies are a basic building block for enterprise interoperability services. As of today, ontologies are created by highly specialized ontology engineers. In order to utilize the full knowledge available in modern organizations, new approaches to ontology creation are necessary to bridge the "ontology gap" and enable domain experts to create formalized knowledge. In our work, we reviewed existing methodologies for ontology creation and assessed their suitability for use by novice users and domain experts. We analyzed and structured their major characteristics to provide a unified view. From there on, we formulated criteria for the success of an ontology creation methodology for domain experts and empirically evaluated them in cooperation with practitioners from both industry and academia.
['Nikolai Dahlem', 'Jianfeng Guo', 'Axel Hahn', 'Matthias Reinel']
Towards an User-Friendly Ontology Design Methodology
16,720
We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, so as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures, since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on other novel applications of this technique, such as type dispatching and enforcing type equality invariants on GADT values.
['Thomas van Noort', 'Peter Achten', 'Rinus Plasmeijer']
A typical synergy: dynamic types and generalised algebraic datatypes
847,116
The development of architectures and strategies that allow for rapid and cost-effective deployment of large-scale, geographically distributed, sensor-based systems that organize themselves in an ad hoc fashion to improve monitoring and detection capabilities is becoming more a requirement than a desire. In this paper, we first provide a model that characterizes the sensor connectivity distribution of a sensor networking system, and based on this model we gain some insight into the tradeoffs among node connectivity, power consumption, data rate, etc. The impact of node connectivity on system reliability is discussed. Furthermore, in order to reduce sensor power consumption, we analyze the relationship between periodic sleeping strategies and the achieved power conservation. Several results and tradeoffs among various sleeping strategies, transmission scenarios and power gains, for given connectivity requirements, are also presented and evaluated.
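The energy-latency tradeoff of periodic sleeping can be sketched in a few lines: a node that is awake a fraction d of each period draws power roughly in proportion to d, while a packet arriving at a random time waits on average half the sleep span. The power figures below are illustrative stand-ins, not values from the paper.

```python
# Back-of-the-envelope duty-cycle tradeoff for a sleeping sensor node.
def duty_cycle_tradeoff(d, period_s=1.0, p_active_mw=60.0, p_sleep_mw=0.03):
    avg_power = d * p_active_mw + (1 - d) * p_sleep_mw
    avg_latency = (1 - d) * period_s / 2   # mean wait for the node to wake
    return avg_power, avg_latency

for d in (1.0, 0.5, 0.1, 0.01):
    p, lat = duty_cycle_tradeoff(d)
    print(f"duty {d:>5}: {p:7.2f} mW, avg extra latency {lat * 1000:6.1f} ms")
```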
['Jin Zhu', 'Symeon Papavassiliou']
On the connectivity modeling and the tradeoffs between reliability and energy efficiency in large scale wireless sensor networks
27,896
This article focuses on the high-performance motion control problem of the hydraulic press. Smooth and precise motion control of the hydraulic press is hard to achieve due to complex external disturbances, which typically consist of the deformation force and the friction force. An extended fuzzy disturbance observer is first constructed to estimate and compensate for the hard-to-model deformation force. The proposed extended fuzzy disturbance observer differs from the fuzzy disturbance observer in its parameter adaptation; the fuzzy disturbance observer is commonly driven by the disturbance observer error, while the designed extended fuzzy disturbance observer is driven by the disturbance observer error and the motion tracking error together. A nonlinear cascade controller is further applied to synthesize the motion controller, considering the particular working principle of the separate meter-in separate meter-out drive system adopted in the hydraulic press. The outer motion tracking loop of the nonlinear cascade ...
['Jianhua Wei', 'Qiang Zhang', 'Mingjie Li', 'Wenzhuo Shi']
High-performance motion control of the hydraulic press based on an extended fuzzy disturbance observer
874,790
Exploring end users' system requirements for a handheld computer supporting both sepsis test workflow and current IT solutions.
['Lasse Lefevre Samson', 'Louise Pape-Haugaard', 'Mette Søgaard', 'Henrik Carl Schønheyder', 'Ole K. Hejlesen']
Exploring end users' system requirements for a handheld computer supporting both sepsis test workflow and current IT solutions.
557,403
A flux-splitting method is proposed for the hyperbolic-equation system (HES) of magnetized electron fluids in quasi-neutral plasmas. The numerical fluxes are split into four categories, which are computed using an upwind method that incorporates flux-vector splitting (FVS) and the advection upstream splitting method (AUSM). The method is applied to a test calculation condition of uniformly distributed and angled magnetic lines of force. All of the pseudo-time advancement terms converge monotonically, and the conservation laws are strictly satisfied in the steady state. The calculation results are compared with those computed using the elliptic-parabolic-equation system (EPES) approach with a magnetic-field-aligned mesh (MFAM). Both qualitative and quantitative comparisons yield good agreement, indicating that the HES approach with the flux-splitting method attains high computational accuracy.
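A scalar toy version of the flux-vector splitting ingredient: for a 1-D linear advection flux F(u) = a*u, FVS writes F = F+ + F- with F+ = (a + |a|)u/2 and F- = (a - |a|)u/2, and each part is upwinded from its own side. The actual HES is a coupled system and the AUSM part is omitted, so this only shows the mechanism.

```python
# First-order upwind scheme for u_t + a u_x = 0 built from split fluxes,
# on a periodic domain (np.roll implements the periodic neighbours).
import numpy as np

def fvs_advection(u0, a=1.0, dx=0.01, dt=0.005, steps=100):
    u = u0.copy()
    ap, am = 0.5 * (a + abs(a)), 0.5 * (a - abs(a))
    for _ in range(steps):
        fp = ap * u                    # right-going part, taken from the left
        fm = am * u                    # left-going part, taken from the right
        flux = fp + np.roll(fm, -1)    # interface flux F_{i+1/2}
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0, 1, 100, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)     # Gaussian pulse starting at x = 0.3
print(x[np.argmax(fvs_advection(u0))]) # pulse centre advected to about 0.8
```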
['Rei Kawashima', 'Kimiya Komurasaki', 'Tony Schönherr']
A flux-splitting method for hyperbolic-equation system of magnetized electron fluids in quasi-neutral plasmas
598,069