name | title | abstract | keywords |
---|---|---|---|
train_1273 | Towards an ontology of approximate reason | This article introduces structural aspects in an ontology of approximate reason. The basic assumption in this ontology is that approximate reason is a capability of an agent. Agents are designed to classify information granules derived from sensors that respond to stimuli in the environment of an agent or received from other agents. Classification of information granules is carried out in the context of parameterized approximation spaces and a calculus of granules. Judgment in agents is a faculty of thinking about (classifying) the particular relative to decision rules derived from data. Judgment in agents is reflective, but not in the classical philosophical sense (e.g., the notion of judgment in Kant). In an agent, a reflective judgment itself is an assertion that a particular decision rule derived from data is applicable to an object (input). That is, a reflective judgment by an agent is an assertion that a particular vector of attribute (sensor) values matches to some degree the conditions for a particular rule. In effect, this form of judgment is an assertion that a vector of sensor values reflects a known property of data expressed by a decision rule. Since the reasoning underlying a reflective judgment is inductive and surjective (not based on a priori conditions or universals), this form of judgment is reflective, but not in the sense of Kant. Unlike Kant's notion, a reflective judgment here is surjective in the sense that it maps experimental attribute values onto the most closely matching descriptors (conditions) in a derived rule. Again, unlike Kant's notion of judgment, a reflective judgment is not the result of searching for a universal that pertains to a particular set of values of descriptors. Rather, a reflective judgment by an agent is a form of recognition that a particular vector of sensor values pertains to a particular rule to some degree. This recognition takes the form of an assertion that a particular descriptor vector is associated with a particular decision rule. These considerations can be repeated for other forms of classifiers besides those defined by decision rules | reflective judgment;ontology;granules;parameterized approximation spaces;approximate reason;decision rules;pattern recognition;rough sets;information granules |
train_1274 | Bounded model checking for the universal fragment of CTL | Bounded Model Checking (BMC) has recently been introduced as an efficient verification method for reactive systems. BMC based on SAT methods consists of searching for a counterexample of a particular length and generating a propositional formula that is satisfiable iff such a counterexample exists. This technique was introduced by E. Clarke et al. for model checking of linear time temporal logic (LTL). Our paper shows how the concept of bounded model checking can be extended to ACTL (the universal fragment of CTL). The implementation of the algorithm for Elementary Net Systems is described together with the experimental results | reactive systems;verification method;universal fragment;linear time temporal logic;bounded semantics;sat methods;bounded model checking;model checking;elementary net systems;propositional formula |
train_1275 | Modeling dynamic objects in distributed systems with nested Petri nets | Nested Petri nets (NP-nets) are a Petri net extension allowing tokens in a net marking to be represented by marked nets themselves. The paper discusses the applicability of NP-nets for modeling task planning systems, multi-agent systems and recursive-parallel systems. A comparison of NP-nets with some other formalisms, such as OPNs of R. Valk (2000), recursive parallel programs of O. Kushnarenko and Ph. Schnoebelen (1997) and process algebras, is given. Some aspects of decidability for object-oriented Petri net extensions are also discussed | nested petri nets;object-oriented petri net;recursive-parallel systems;multi-agent systems;decidability;process algebras;distributed systems;dynamic objects modelling |
train_1276 | A comparative study of some generalized rough approximations | In this paper we focus upon a comparison of some generalized rough approximations of sets, where the classical indiscernibility relation is generalized to any binary reflexive relation. We aim at finding the best of several candidates for generalized rough approximation mappings, where both definability of sets by elementary granules of information as well as the issue of distinction among positive, negative, and border regions of a set are taken into account | generalized rough approximations;binary reflexive relation;classical indiscernibility relation;generalized rough approximation mappings;elementary granules |
train_1277 | Dynamic modification of object Petri nets. An application to modelling protocols with fork-join structures | In this paper we discuss possibilities of modelling protocols by objects in object-based high-level Petri nets. Some advantages of dynamically modifying the structure of token objects are discussed, and the need for further investigations into mathematically rigorous foundations of object net formalisms incorporating facilities for such operations on their token nets is emphasised | object petri nets;fork-join structures;dynamic modification;object net formalisms;token objects;mathematically rigorous foundations;protocols |
train_1278 | Verification of timed automata based on similarity | The paper presents a modification of the standard partitioning technique to generate abstract state spaces preserving similarity for Timed Automata. Since this relation is weaker than bisimilarity, most of the obtained models (state spaces) are smaller than bisimilar ones, but still preserve the universal fragments of branching time temporal logics. The theoretical results are exemplified for strong, delay, and observational simulation relations | timed automata verification;branching time temporal logics;universal fragments;partitioning technique;bisimilarity;observational simulation relations;abstract state spaces |
train_1279 | Place/Transition Petri net evolutions: recording ways, analysis and synthesis | Four semantic domains for Place/Transition Petri nets and their relationships are considered. They are monoids of, respectively, firing sequences, processes, traces and dependence graphs. For each of them the analysis and synthesis problem is stated and solved. The monoid of processes is defined in a non-standard way. Nets under consideration involve weights of arrows and capacities (finite or infinite) of places. However, the analysis and synthesis tasks require nets to be pure, i.e. each of their transitions must have the pre-set and post-set disjoint | monoids;post-set disjoint;firing sequences;semantic domains;dependence graphs;pre-set disjoint;place/transition petri net evolutions |
train_128 | A new result on the global convergence of Hopfield neural networks | In this work, we discuss Hopfield neural networks, investigating their global stability. Some sufficient conditions for a class of Hopfield neural networks to be globally stable and globally exponentially stable are given | sufficient conditions;hopfield neural networks;global stability;globally exponentially stable networks |
train_1280 | Products and polymorphic subtypes | This paper is devoted to a comprehensive study of polymorphic subtypes with products. We first present a sound and complete Hilbert style axiomatization of the relation of being a subtype in the presence of the →, × type constructors and the ∀ quantifier, and we show that such an axiomatization is not encodable in the system with →, ∀ only. In order to give a logical semantics to such a subtyping relation, we propose a new form of sequent which plays a key role in natural deduction and Gentzen style calculi. Interestingly enough, the sequent must have the form E ⇒ T, where E is a non-commutative, non-empty sequence of typing assumptions and T is a finite binary tree of typing judgements, each of them behaving like a pushdown store. We study basic metamathematical properties of the two logical systems, such as subject reduction and cut elimination. Some decidability/undecidability issues related to the presented subtyping relation are also explored: as expected, subtyping over →, ×, ∀ is undecidable, being already undecidable for the →, ∀ fragment (as proved in [15]), but for the ×, ∀ fragment it turns out to be decidable | gentzen style calculi;finite binary tree;decidability;polymorphic subtypes;metamathematical properties;hilbert style axiomatization;pushdown store;products subtypes;logical semantics |
train_1281 | A notion of non-interference for timed automata | The non-interference property of concurrent systems is a security property concerning the flow of information among different levels of security of the system. In this paper we introduce a notion of timed non-interference for real-time systems specified by timed automata. The notion is presented using an automata based approach and then it is characterized also by operations and equivalence between timed languages. The definition is applied to an example of a time-critical system modeling a simplified control of an airplane | concurrent systems;timed automata;real-time systems;time-critical system;security property;noninterference notion |
train_1282 | Completeness of timed mu CRL | Previously, a straightforward extension of the process algebra mu CRL was proposed to explicitly deal with time. The process algebra mu CRL has been especially designed to deal with data in a process algebraic context. Using the features for data, only a minor extension of the language was needed to obtain a very expressive variant of time. The previous work contained syntax, operational semantics and axioms characterising timed mu CRL, but no in-depth analysis of the theory of timed mu CRL. This paper fills that gap by providing soundness and completeness results. The main tool to establish these is a mapping of timed to untimed mu CRL, employing the completeness results obtained for untimed mu CRL | timed mu crl;operational semantics;completeness;process algebra |
train_1283 | UPSILON: universal programming system with incomplete lazy object notation | This paper presents a new model of computation that differs from prior models in that it emphasizes data over flow control, has no named variables and has an object-oriented flavor. We prove that this model is a complete and confluent acceptable programming system and has a usable type theory. A new data synchronization primitive is introduced in order to achieve the above properties. Subtle variations of the model are shown to fall short of having all these necessary properties | universal programming system;incomplete lazy object notation;programming system;usable type theory;upsilon;data synchronization primitive;object-oriented flavor |
train_1284 | A linear time special case for MC games | MC games are infinite duration two-player games played on graphs. Deciding the winner in MC games is equivalent to the modal mu-calculus model checking problem. In this article we provide a linear time algorithm for a class of MC games. We show that, if all cycles in each strongly connected component of the game graph have at least one common vertex, the winner can be found in linear time. Our results hold also for parity games, which are equivalent to MC games | two-player games;linear time special case;linear time algorithm;mc games;modal mu-calculus model checking |
train_1285 | On fractal dimension in information systems. Toward exact sets in infinite information systems | The notions of an exact as well as a rough set are well-grounded as basic notions in rough set theory. They are, however, defined in the setting of a finite information system, i.e. an information system having finite numbers of objects as well as attributes. In theoretical studies, e.g. of topological properties of rough sets, one has to go beyond this limitation and consider information systems with a potentially unbounded number of attributes. In such a setting, the notions of rough and exact sets may be defined in terms of the topological operators of interior and closure with respect to an appropriate topology, following the ideas from the finite case, where it is noticed that the rough-set-theoretic operators of lower and upper approximation are identical with, respectively, the interior and closure operators in the topology induced by equivalence classes of the indiscernibility relation. Extensions of finite information systems are also desirable from the application point of view in the area of knowledge discovery and data mining, when demands of e.g. mass collaboration and/or huge experimental data call for working with large data tables; the sound theoretical generalization of these cases is an information system with a number of attributes not bound in advance by a fixed integer, i.e. an information system with countably infinitely many attributes. In large information systems, a need arises for parameter-free qualitative measures of complexity of the concepts involved, cf. e.g. applications of the Vapnik-Chervonenkis dimension. We study here, in the theoretical setting of an infinite information system, a proposal to apply suitably modified fractal dimensions as measures of concept complexity | data mining;qualitative measures;rough set;closure operators;equivalence classes;complexity;knowledge discovery;topological properties;infinite information systems;information systems;exact sets;fractal dimension |
train_1286 | Self-describing Turing machines | After a sketchy historical account on the question of self-describeness and self-reproduction, and after discussing the definition of suitable encodings for self-describeness, we give the construction of several self-describing Turing machines, namely self-describing machines with, respectively, 350, 267, 224 and 206 instructions | self-describing turing machines;self-reproduction;encodings;self-describeness |
train_1287 | On average depth of decision trees implementing Boolean functions | The article considers the representation of Boolean functions in the form of decision trees. It presents bounds on the average time complexity of decision trees for all classes of Boolean functions that are closed under substitution, and the insertion and deletion of unessential variables. The obtained results are compared with the results developed by M.Ju. Moshkov (1995) that describe the worst case time complexity of decision trees | decision trees;worst case time complexity;average depth;boolean functions;average time complexity |
train_1288 | A modal logic for indiscernibility and complementarity in information systems | In this paper, we study indiscernibility relations and complementarity relations in information systems. The first-order characterization of indiscernibility and complementarity is obtained through a duality result between information systems and certain structures of relational type characterized by first-order conditions. The modal analysis of indiscernibility and complementarity is performed through a modal logic whose modalities correspond to indiscernibility relations and complementarity relations in information systems | modal logic;indiscernibility;duality result;complementarity;first-order characterization;first-order conditions;relational type;information systems |
train_1289 | Combining PC control and HMI | Integrating PC-based control with human machine interface (HMI) technology can benefit a plant floor system. However, before one decides on PC-based control, there are many things to consider, especially when using a soft programmable logic controller (PLC) to command the input/output. There are three strategies for integrating a PC-based control system with an HMI: treat the PC running the control application as if it were a PLC; integrate the system using standard PC interfaces; or use application programming interfaces | pc-based control system;application programming interfaces;pc interfaces;shop floor system;programmable logic controller;human machine interface |
train_129 | Phase conditions for Schur polynomials | The rate of change of phase of a real or complex Schur polynomial, evaluated along the unit circle traversed counterclockwise, is strictly positive. For polynomials with real coefficients, this bound can be tightened. These and some other fundamental bounds on the rate of change of phase are derived here, using the Tchebyshev representation of the image of a real polynomial evaluated on the unit circle | phase monotonicity;discrete-time control systems;tchebyshev representation;stabilization;phase conditions;robust stability;rate of change of phase;schur polynomial;real coefficients |
train_1290 | Making the MIS integration process work | Focused, cross-functional teams that implement flexible and scalable information systems (IS) can deliver a smooth, lean manufacturing process. When integrating new technology into an existing facility, one should always consider three things: the hard infrastructure, the soft infrastructure, and information flow. Hard infrastructure includes client and server hardware and network infrastructure. Soft infrastructure includes operating systems, existing or legacy software, needed code customizations, and the human resources to run/support the system. Information flow includes how data in the new system interacts with legacy systems and what legacy data the new system will require, as well as who will want to receive/access the information that is held by the system | legacy software;scalable information systems;network infrastructure;human resources;client server hardware;information flow;lean manufacturing process;management information systems |
train_1291 | Taking back control [SCADA system] | The most common way to implement a SCADA system is to go outside. However, in the author's opinion, to truly take control of a SCADA project, in-house personnel should handle as much of the job as possible. This includes design, equipment specification, installation, and programming. The more of these tasks one does in-house, the more control and ownership one has. To accomplish this, we first evaluated the existing SCADA system and investigated new technologies to establish a list of features the new system needed to incorporate | compatibility;data acquisition;scada;in-house integration;programmable logic controllers;supervisory control |
train_1292 | The heat is on [building automation systems] | Integrating building automation systems (BASs) can result in systems that have the ability to sense changes in the air temperature through a building's heating, ventilation, and air conditioning (HVAC) systems. Taking advantage of the Internet, using remote monitoring, and building interoperability through open protocol systems are some of the issues discussed throughout the BAS/HVAC community. By putting information over the Internet, facility managers get real-time data on energy usage and performance issues | internet;interoperability;hvac;remote monitoring;real-time data;building automation systems;heating;ventilation;air conditioning |
train_1293 | Truss topology optimization by a modified genetic algorithm | This paper describes the use of a stochastic search procedure based on genetic algorithms for developing near-optimal topologies of load-bearing truss structures. In most existing publications, the truss topology is expressed as a combination of members. These methods, however, have the disadvantage that the resulting topology may include needless members or members which overlap others. In addition to these problems, the generated structures are not necessarily structurally stable. A new method, which resolves these problems by expressing the truss topology as a combination of triangles, is proposed in this paper. Details of the proposed methodology are presented as well as the results of numerical examples that clearly show the effectiveness and efficiency of the method | load-bearing truss structures;stochastic search procedure;near-optimal topologies;modified genetic algorithm;triangles;truss topology optimization |
train_1294 | Multicriterion optimization of composite laminates for maximum failure margins with an interactive descent algorithm | An interactive multicriterion optimization method for composite laminates subjected to multiple loading conditions is introduced. Laminate margins to initial failure (first ply failure, FPF) with respect to the applied loading conditions are treated as criteria. The original problem is reduced to a bicriterion problem by introducing parameters to combine criteria in a linear manner. The problem is solved by using an interactive descent algorithm. Both the conditions required for a discrete procedure to converge towards a Pareto optimum and numerical examples are given | interactive descent algorithm;first ply failure;composite laminates;interactive multicriterion optimization;multiple loading conditions;convergence;pareto optimum;maximum failure margins;discrete procedure;bicriterion problem |
train_1295 | Development of visual design steering as an aid in large-scale multidisciplinary design optimization. II. Method validation | For pt. I see ibid., pp. 412-24. Graph morphing, the first concept developed under the newly proposed paradigm of visual design steering (VDS), is applied to optimal design problems. Graph morphing, described in Part I of this paper, can be used to provide insights to a designer to improve efficiency, reliability, and accuracy of an optimal design in less cycle time. It is demonstrated in this part of the paper that graph morphing can be used to provide insights into design variable impact, constraint redundancy, reasonable values for constraint allowable limits, and function smoothness, that otherwise might not be attainable | function smoothness;visual design steering;reliability;design variable impact;optimal design problems;large-scale multidisciplinary design optimization;accuracy;graph morphing;constraint redundancy;constraint allowable limits;method validation;cycle time |
train_1296 | Development of visual design steering as an aid in large-scale multidisciplinary design optimization. I. Method development | A modified paradigm of computational steering (CS), termed visual design steering (VDS), is developed in this paper. The VDS paradigm is applied to optimal design problems to provide a means for capturing and enabling designer insights. VDS allows a designer to make decisions before, during or after an analysis or optimization via a visual environment, in order to effectively steer the solution process. The objective of VDS is to obtain a better solution in less time through the use of designer knowledge and expertise. Using visual representations of complex systems in this manner enables human experience and judgement to be incorporated into the optimal design process at appropriate steps, rather than having traditional black box solvers return solutions from a prescribed input set. Part I of this paper focuses on the research issues pertaining to the Graph Morphing visualization method created to represent an n-dimensional optimization problem using 2-dimensional and 3-dimensional visualizations. Part II investigates the implementation of the VDS paradigm, using the graph morphing approach, to improve an optimal design process. Specifically, the following issues are addressed: impact of design variable changes on the optimal design space; identification of possible constraint redundancies; impact of constraint tolerances on the optimal solution; and smoothness of the objective function contours. It is demonstrated that graph morphing can effectively reduce the complexity and computational time associated with some optimization problems | graph morphing visualization method;visual design steering;n-dimensional optimization;visual representations;design variable changes;3d visualizations;complexity;complex systems;constraint redundancies;optimal design problems;2d visualizations;objective function contour smoothness;large-scale multidisciplinary design optimization;computational time;designer decision making;constraint tolerances;computational steering |
train_1297 | Stochastic optimization of acoustic response - a numerical and experimental comparison | The objective of the work presented is to compare results from numerical optimization with experimental data and to highlight and discuss the differences between two fundamentally different optimization methods. The problem domain is minimization of acoustic emission and the structure used in the work is a closed cylinder with forced vibration of one end. The optimization method used in this paper is simulated annealing (SA), a stochastic method. The results are compared with those from a gradient-based method used on the same structure in an earlier paper (Tinnsten, 2000) | simulated annealing;forced vibration;gradient-based method;acoustic response;numerical optimization;acoustic emission minimization;closed cylinder;structure;stochastic optimization |
train_1298 | An analytical model for a composite adaptive rectangular structure using the Heaviside function | The objective of this article is to describe a mathematical model, based on the Heaviside function and on the delta-Dirac distribution, for a composite adaptive rectangular structure with embedded and/or bonded piezoelectric actuators and sensors. In the adopted structure model, the laminae are made up of a configuration of rectangular nonpiezoelectric and piezoelectric patches. The laminae do not all have the same area nor do they present the same configuration, such that there are points where there is no material. The equations of motion and the boundary conditions, which describe the electromechanical coupling, are based on the Mindlin displacement field, on the linear theory of piezoelectricity, and on the Hamilton principle | hamilton principle;virtual kinetic energy;delta-dirac distribution;electromechanical coupling;embedded actuators;heaviside function;mindlin displacement field;linear piezoelectricity;constitutive relations;closed-form solution;piezoelectric sensors;lagrangian functions;equations of motion;rectangular composite plate;piezoelectric actuators;finite-element method;mathematical model;bonded sensors;embedded sensors;bonded actuators;piezoelectric patches;composite adaptive rectangular structure;boundary conditions;nonpiezoelectric patches |
train_1299 | How much should publishers spend on technology? | A study confirms that spending on publishing-specific information technology (IT) resources is growing much faster than IT spending for general business activities, at least among leading publishers in the scientific, technical and medical (STM) market. The survey asked about information technology funding and staffing levels (past, present and future) and also inquired about activities in content management, Web delivery, computer support and customer relationship management. The results provide a starting point for measuring information technology growth and budget allocations in this publishing segment | content management;customer relationship management;budget;publishing;computer support;web delivery;it spending |
train_13 | Stability analysis of the characteristic polynomials whose coefficients are polynomials of interval parameters using monotonicity | We analyze the stability of the characteristic polynomials whose coefficients are polynomials of interval parameters via monotonicity methods. Our stability conditions are based on Frazer-Duncan's theorem and all conditions can be checked using only endpoint values of interval parameters. These stability conditions are necessary and sufficient under the monotonicity assumptions. When the monotonicity conditions do not hold on the whole parameter region, we present an interval division method and a transformation algorithm in order to apply the monotonicity conditions. Then, our stability analysis methods can be applied to all characteristic polynomials whose coefficients are polynomials of interval parameters | necessary and sufficient conditions;interval division method;monotonicity;interval parameters;transformation algorithm;characteristic polynomials;frazer-duncan theorem;stability analysis;endpoint values |
train_130 | Resolution of a current-mode algorithmic analog-to-digital converter | Errors limiting the resolution of current-mode algorithmic analog-to-digital converters are mainly related to current mirror operation. While systematic errors can be minimized by proper circuit techniques, random sources are unavoidable. In this paper a statistical analysis of the resolution of a typical converter is carried out taking into account process tolerances. To support the analysis, a 4-bit ADC, realized in a 0.35 μm CMOS technology, was exhaustively simulated. Results were found to be in excellent agreement with theoretical derivations | 0.35 micron;cmos technology;a/d converters;resolution;algorithmic adc;error analysis;analog-to-digital converters;circuit analysis;4 bit;current-mode adc;statistical analysis;circuit techniques;tolerance analysis |
train_1300 | Will CPXe save the photofinishing market? | A consortium of film suppliers and electronics firms has proposed the Common Picture Exchange environment. It will let diverse providers cooperate via the Internet to sell digital-photo prints | photofinishing market;kodak;web-services standards;fujifilm;common picture exchange environment;cpxe;hp |
train_1301 | Integrate-and-fire neurons driven by correlated stochastic input | Neurons are sensitive to correlations among synaptic inputs. However, analytical models that explicitly include correlations are hard to solve analytically, so their influence on a neuron's response has been difficult to ascertain. To gain some intuition on this problem, we studied the firing times of two simple integrate-and-fire model neurons driven by a correlated binary variable that represents the total input current. Analytic expressions were obtained for the average firing rate and coefficient of variation (a measure of spike-train variability) as functions of the mean, variance, and correlation time of the stochastic input. The results of computer simulations were in excellent agreement with these expressions. In these models, an increase in correlation time in general produces an increase in both the average firing rate and the variability of the output spike trains. However, the magnitude of the changes depends differentially on the relative values of the input mean and variance: the increase in firing rate is higher when the variance is large relative to the mean, whereas the increase in variability is higher when the variance is relatively small. In addition, the firing rate always tends to a finite limit value as the correlation time increases toward infinity, whereas the coefficient of variation typically diverges. These results suggest that temporal correlations may play a major role in determining the variability as well as the intensity of neuronal spike trains | correlated binary variable;integrate-and-fire neurons;computer simulation;synaptic input correlations;coefficient of variation;spike-train variability;output spike trains;temporal correlations;firing times;correlated stochastic input |
train_1302 | Dynamics of the firing probability of noisy integrate-and-fire neurons | Cortical neurons in vivo undergo a continuous bombardment due to synaptic activity, which acts as a major source of noise. We investigate the effects of the noise filtering by synapses with various levels of realism on integrate-and-fire neuron dynamics. The noise input is modeled by white (for instantaneous synapses) or colored (for synapses with a finite relaxation time) noise. Analytical results for the modulation of firing probability in response to an oscillatory input current are obtained by expanding a Fokker-Planck equation for small parameters of the problem, i.e. when both the amplitude of the modulation is small compared to the background firing rate and the synaptic time constant is small compared to the membrane time constant. We report the detailed calculations showing that if a synaptic decay time constant is included in the synaptic current model, the firing-rate modulation of the neuron due to an oscillatory input remains finite in the high-frequency limit with no phase lag. In addition, we characterize the low-frequency behavior and the behavior of the high-frequency limit for intermediate decay times. We also characterize the effects of introducing a rise time to the synaptic currents and the presence of several synaptic receptors with different kinetics. In both cases, we determine, using numerical simulations, an effective decay time constant that describes the neuronal response completely | colored noise;white noise;membrane time constant;noise filtering;numerical simulation;synaptic activity;firing probability;fokker-planck equation;synaptic time constant;phase lag;cortical neurons;noisy integrate-and-fire neurons;synaptic receptors |
|
train_1303 | Reply to Carreira-Perpinan and Goodhill [mathematics in biology] | In a paper by Carreira-Perpinan and Goodhill (see ibid., vol.14, no.7, p.1545-60, 2002) the authors apply mathematical arguments to biology. Swindale et al. think it is inappropriate to apply the standards of proof required in mathematics to the acceptance or rejection of scientific hypotheses. To give some examples, showing that data are well described by a linear model does not rule out an infinity of other possible models that might give better descriptions of the data. Proving in a mathematical sense that the linear model was correct would require ruling out all other possible models, a hopeless task. Similarly, to demonstrate that two DNA samples come from the same individual, it is sufficient to show a match between only a few regions of the genome, even though there remains a very large number of additional comparisons that could be done, any one of which might potentially disprove the match. This is unacceptable in mathematics, but in the real world, it is a perfectly reasonable basis for belief | cortical maps;mathematical arguments;biology;dna;scientific hypotheses;genome;hypothesis testing;neural nets;linear model |
|
train_1304 | Center-crossing recurrent neural networks for the evolution of rhythmic behavior | A center-crossing recurrent neural network is one in which the null(hyper)surfaces of each neuron intersect at their exact centers of symmetry, ensuring that each neuron's activation function is centered over the range of net inputs that it receives. We demonstrate that relative to a random initial population, seeding the initial population of an evolutionary search with center-crossing networks significantly improves both the frequency and the speed with which high-fitness oscillatory circuits evolve on a simple walking task. The improvement is especially striking at low mutation variances. Our results suggest that seeding with center-crossing networks may often be beneficial, since a wider range of dynamics is more likely to be easily accessible from a population of center-crossing networks than from a population of random networks | random initial population;random networks;center-crossing recurrent neural networks;evolutionary algorithm;activation function;high-fitness oscillatory circuits;learning;null surfaces;low mutation variance;rhythmic behavior evolution;evolutionary search;symmetry |
|
train_1305 | Learning nonregular languages: a comparison of simple recurrent networks and LSTM | Rodriguez (2001) examined the learning ability of simple recurrent nets (SRNs) (Elman, 1990) on simple context-sensitive and context-free languages. In response to Rodriguez's (2001) article, we compare the performance of simple recurrent nets and long short-term memory recurrent nets on context-free and context-sensitive languages | context-free languages;performance;nonregular language learning;lstm;short-term memory recurrent nets;recurrent neural networks;context-sensitive languages |
|
train_1306 | Scalable hybrid computation with spikes | We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. Third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation. First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. Third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured | frequent discrete signal restoration;spike-time codes;learning;distributed analog computation;digital carry interactions;feedback interactions;silicon circuits;spikes;finite-state-machine;hybrid analog-digital scheme;error-correcting analog-to-digital conversion;pattern recognition;scalable hybrid computation;analog noise;spike-count codes;vector quantization;two neuron hybrid state machine;binary control vector;moderate-precision analog units |
|
train_1307 | Law librarians' survey: are academic law librarians in decline? | The author reports on the results of one extra element in the BIALL/SPTL survey, designed to acquire further information about academic law librarians. The survey has fulfilled the aim of providing a snapshot of the academic law library profession and has examined the concerns that have been raised. Perhaps most importantly, it has shown that more long-term work needs to be done to monitor the situation effectively. We hope that BIALL will take on this challenge and help to maintain the status of academic law librarians and aid them in their work | academic law librarians;biall/sptl;academic law library;survey |
|
train_1308 | SPTL/BIALL academic law library survey 2000/2001 | The paper outlines the activities and funding of academic law libraries in the UK and Ireland in the academic year 2000/2001. The figures have been taken from the results of a postal questionnaire undertaken by information services staff at Cardiff University on behalf of BIALL | funding;information services;uk;ireland;survey;sptl/biall;postal questionnaire;cardiff university;academic law libraries |
|
train_1309 | Transcripts: bane or boon? [law reporting] | Because judge-made law, by its very nature, is less immediately accessible than the law of codified, statutory systems, it calls for an efficient system of law reporting. Of necessity, any such system will be selective, the majority of decisions going unreported. Considerable power thereby comes to repose in the hands of the law reporters. The author shares his invaluable perception and extensive research on the difficulties which arise from the excess of access to judgments | judge-made law;law reporting;transcripts;judgments |
|
train_131 | On biorthogonal nonuniform filter banks and tree structures | This paper concerns biorthogonal nonuniform filter banks. It is shown that a tree structured filter bank is biorthogonal if it is equivalent to a tree structured filter bank whose matching constituent levels on the analysis and synthesis sides are themselves biorthogonal pairs. We then show that a stronger statement can be made about dyadic filter banks in general: That a dyadic filter bank is biorthogonal if both the analysis and synthesis banks can be decomposed into dyadic trees. We further show that these decompositions are stability and FIR preserving. These results, derived for filter banks having filters with rational transfer functions, thus extend some of the earlier comparable results for orthonormal filter banks | dyadic filter banks;rational transfer functions;stability preserving;fir preserving;tree structured filter bank;biorthogonal pairs;dyadic trees;biorthogonal nonuniform filter banks |
|
train_1310 | Cat and class: what use are these skills to the new legal information professional? | This article looks at the cataloguing and classification skills taught on information studies courses and the use these skills are to new legal information professionals. The article is based on the opinions of nine new legal information professionals from both academic and law firm libraries | information studies courses;classification;legal information professional;law firm libraries;academic libraries;cataloguing |
|
train_1311 | Blended implementation of block implicit methods for ODEs | In this paper we further develop a new approach for naturally defining the nonlinear splittings needed for the implementation of block implicit methods for ODEs, which has been considered by Brugnano [J. Comput. Appl. Math. 116 (2000) 41] and by Brugnano and Trigiante [in: Recent Trends in Numerical Analysis, Nova Science, New York, 2000, pp. 81-105]. The basic idea is that of defining the numerical method as the combination (blending) of two suitable component methods. By carefully choosing such methods, it is shown that very efficient implementations can be obtained. Moreover, some of them, characterized by a diagonal splitting, are well suited for parallel computers. Numerical tests comparing the performances of the proposed implementation with existing ones are also presented, in order to make evident the potential of the approach | blended implementation;odes;numerical tests;parallel computers;numerical method;nonlinear splittings;diagonal splitting;block implicit methods |
|
train_1312 | Stability in the numerical solution of the heat equation with nonlocal boundary conditions | This paper deals with numerical methods for the solution of the heat equation with integral boundary conditions. Finite differences are used for the discretization in space. The matrices specifying the resulting semidiscrete problem are proved to satisfy a sectorial resolvent condition, uniformly with respect to the discretization parameter. Using this resolvent condition, unconditional stability is proved for the fully discrete numerical process generated by applying A( theta )-stable one-step methods to the semidiscrete problem. This stability result is established in the maximum norm; it improves some previous results in the literature in that it is not subject to various unnatural restrictions which were imposed on the boundary conditions and on the one-step methods | integral boundary conditions;fully discrete numerical process;maximum norm;matrices;numerical solution;one-step methods;sectorial resolvent condition;finite differences;semidiscrete problem;space discretization;stability;heat equation;nonlocal boundary conditions |
|
train_1313 | A collocation formulation of multistep methods for variable step-size extensions | Multistep methods are classically constructed by specially designed difference operators on an equidistant time grid. To make them practically useful, they have to be implemented by varying the step-size according to some error-control algorithm. It is well known how to extend Adams and BDF formulas to a variable step-size formulation. In this paper we present a collocation approach to construct variable step-size formulas. We make use of piecewise polynomials to show that every k-step method of order k+1 has a variable step-size polynomial collocation formulation | variable step-size polynomial collocation formulation;equidistant time grid;multistep methods;collocation formulation;k-step method;error-control algorithm;variable step-size extensions;piecewise polynomials;difference operators |
|
train_1314 | Multi-timescale Internet traffic engineering | The Internet is a collection of packet-based hop-by-hop routed networks. Internet traffic engineering is the process of allocating resources to meet the performance requirements of users and operators for their traffic. Current mechanisms for doing so, exemplified by TCP's congestion control or the variety of packet marking disciplines, concentrate on allocating resources on a per-packet basis or at data timescales. This article motivates the need for traffic engineering in the Internet at other timescales, namely control and management timescales, and presents three mechanisms for this. It also presents a scenario to show how these mechanisms increase the flexibility of operators' service offerings and potentially also ease problems of Internet management | packet-based hop-by-hop routed networks;multi-timescale internet traffic engineering;internet management;operator services;control timescale;bgp routing protocol;tcp congestion control;ecn proxy;admission control;resource allocation;packet marking disciplines |
|
train_1315 | Traffic engineering with traditional IP routing protocols | Traffic engineering involves adapting the routing of traffic to network conditions, with the joint goals of good user performance and efficient use of network resources. We describe an approach to intradomain traffic engineering that works within the existing deployed base of interior gateway protocols, such as Open Shortest Path First and Intermediate System-Intermediate System. We explain how to adapt the configuration of link weights, based on a networkwide view of the traffic and topology within a domain. In addition, we summarize the results of several studies of techniques for optimizing OSPF/IS-IS weights to the prevailing traffic. The article argues that traditional shortest path routing protocols are surprisingly effective for engineering the flow of traffic in large IP networks | intradomain traffic engineering;traffic routing;shortest path routing protocols;network conditions;ip routing protocols;network topology;tcp;open shortest path first protocol;interior gateway protocols;ip networks;network resources;intermediate system-intermediate system protocol;transmission control protocol;user performance;link weights configuration;ospf/is-is weights |
|
train_1316 | Understanding Internet traffic streams: dragonflies and tortoises | We present the concept of network traffic streams and the ways they aggregate into flows through Internet links. We describe a method of measuring the size and lifetime of Internet streams, and use this method to characterize traffic distributions at two different sites. We find that although most streams (about 45 percent of them) are dragonflies, lasting less than 2 seconds, a significant number of streams have lifetimes of hours to days, and can carry a high proportion (50-60 percent) of the total bytes on a given link. We define tortoises as streams that last longer than 15 minutes. We point out that streams can be classified not only by lifetime (dragonflies and tortoises) but also by size (mice and elephants), and note that stream size and lifetime are independent dimensions. We submit that ISPs need to be aware of the distribution of Internet stream sizes, and the impact of the difference in behavior between short and long streams. In particular, any forwarding cache mechanisms in Internet routers must be able to cope with a high volume of short streams. In addition ISPs should realize that long-running streams can contribute a significant fraction of their packet and byte volumes-something they may not have allowed for when using traditional "flat rate user bandwidth consumption" approaches to provisioning and engineering | internet stream size measurement;internet stream lifetime measurement;packet volume;tortoises;network traffic streams;isp;elephants;traffic engineering;long-running streams;mice;traffic provisioning;internet routers;dragonflies;traffic distributions;forwarding cache mechanisms;internet traffic streams;byte volume |
|
train_1317 | Dynamic spectrum management for next-generation DSL systems | The performance of DSL systems is severely constrained by crosstalk due to the electromagnetic coupling among the multiple twisted pairs making up a phone cable. In order to reduce performance loss arising from crosstalk, DSL systems are currently designed under the assumption of worst-case crosstalk scenarios leading to overly conservative DSL deployments. This article presents a new paradigm for DSL system design, which takes into account the multi-user aspects of the DSL transmission environment. Dynamic spectrum management (DSM) departs from the current design philosophy by enabling transceivers to autonomously and dynamically optimize their communication settings with respect to both the channel and the transmissions of neighboring systems. Along with this distributed optimization, when an additional degree of coordination becomes available for future DSL deployment, DSM will allow even greater improvement in DSL performance. Implementations are readily applicable without causing any performance degradation to the existing DSLs under static spectrum management. After providing an overview of the DSM concept, this article reviews two practical DSM methods: iterative water-filling, an autonomous distributed power control method enabling great improvement in performance, which can be implemented through software options in some existing ADSL and VDSL systems; and vectored-DMT, a coordinated transmission/reception technique achieving crosstalk-free communication for DSL systems, which brings within reach the dream of providing universal Internet access at speeds close to 100 Mb/s to 500 m on 1-2 lines and beyond 1 km on 2-4 lines. DSM-capable DSL thus enables the broadband age | static spectrum management;coordinated transmission/reception;adsl systems;vectored-dmt;dsl systems performance;crosstalk-free communication;universal internet access;data transmission;twisted pairs;iterative water-filling;phone cable;dsl system design;electromagnetic coupling;transceivers;dynamic spectrum management;autonomous distributed power control method;500 m;vdsl systems;distributed optimization;broadband networks;100 mbit/s;software options |
|
train_1318 | Network intrusion and fault detection: a statistical anomaly approach | With the advent and explosive growth of the global Internet and electronic commerce environments, adaptive/automatic network/service intrusion and anomaly detection in wide area data networks and e-commerce infrastructures is fast gaining critical research and practical importance. We present and demonstrate the use of a general-purpose hierarchical multitier multiwindow statistical anomaly detection technology and system that operates automatically, adaptively, and proactively, and can be applied to various networking technologies, including both wired and wireless ad hoc networks. Our method uses statistical models and multivariate classifiers to detect anomalous network conditions. Some numerical results are also presented that demonstrate that our proposed methodology can reliably detect attacks with traffic anomaly intensity as low as 3-5 percent of the typical background traffic intensity, thus promising to generate an effective early warning | background traffic intensity;fault detection;computer network attacks;early warning systems;traffic anomaly intensity;adaptive/automatic network/service intrusion;wireless ad hoc networks;multiwindow anomaly detection;neural network classification;internet;multivariate classifiers;e-commerce infrastructure;network intrusion;ad hoc wireless experiments;electronic commerce environment;backpropagation;perceptron-back propagation hybrid;denial of service;statistical models;wired ad hoc networks;wide area data networks;hierarchical multitier statistical anomaly detection |
|
train_1319 | Routing security in wireless ad hoc networks | A mobile ad hoc network consists of a collection of wireless mobile nodes that are capable of communicating with each other without the use of a network infrastructure or any centralized administration. MANET is an emerging research area with practical applications. However, wireless MANET is particularly vulnerable due to its fundamental characteristics, such as open medium, dynamic topology, distributed cooperation, and constrained capability. Routing plays an important role in the security of the entire network. In general, routing security in wireless MANETs appears to be a problem that is not trivial to solve. In this article we study the routing security issues of MANETs, and analyze in detail one type of attack-the "black hole" problem-that can easily be employed against the MANETs. We also propose a solution for the black hole problem for ad hoc on-demand distance vector routing protocol | on-demand distance vector routing protocol;distributed cooperation;mobile ad hoc network;dynamic topology;wireless ad hoc networks;home wireless personal area networks;wireless mobile nodes;routing security;open medium;wireless manet;satellite transmission |
|
train_132 | A unified view for vector rotational CORDIC algorithms and architectures based on angle quantization approach | Vector rotation is the key operation employed extensively in many digital signal processing applications. In this paper, we introduce a new design concept called Angle Quantization (AQ). It can be used as a design index for vector rotational operation, where the rotational angle is known in advance. Based on the AQ process, we establish a unified design framework for cost-effective low-latency rotational algorithms and architectures. Several existing works, such as conventional COordinate Rotational Digital Computer (CORDIC), AR-CORDIC, MVR-CORDIC, and EEAS-based CORDIC, can be fitted into the design framework, forming a Vector Rotational CORDIC Family. Moreover, we address four searching algorithms to solve the optimization problem encountered in the proposed vector rotational CORDIC family. The corresponding scaling operations of the CORDIC family are also discussed. Based on the new design framework, we can realize high-speed/low-complexity rotational VLSI circuits, whereas without degrading the precision performance in fixed-point implementations | scaling operations;high-speed rotational vlsi circuits;dsp applications;fixed-point implementations;vector rotational cordic algorithms;low-latency rotational architectures;angle quantization;trellis-based searching algorithm;low-latency rotational algorithms;vector rotational operation;searching algorithms;greedy searching algorithm;low-complexity rotational vlsi circuits;design index;optimization problem;digital signal processing applications;unified design framework |
|
train_1320 | Securing the Internet routing infrastructure | The unprecedented growth of the Internet over the last years, and the expectation of an even faster increase in the numbers of users and networked systems, resulted in the Internet assuming its position as a mass communication medium. At the same time, the emergence of an increasingly large number of application areas and the evolution of the networking technology suggest that in the near future the Internet may become the single integrated communication infrastructure. However, as the dependence on the networking infrastructure grows, its security becomes a major concern, in light of the increased attempt to compromise the infrastructure. In particular, the routing operation is a highly visible target that must be shielded against a wide range of attacks. The injection of false routing information can easily degrade network performance, or even cause denial of service for a large number of hosts and networks over a long period of time. Different approaches have been proposed to secure the routing protocols, with a variety of countermeasures, which, nonetheless, have not eradicated the vulnerability of the routing infrastructure. In this article, we survey the up-to-date secure routing schemes. that appeared over the last few years. Our critical point of view and thorough review of the literature are an attempt to identify directions for future research on an indeed difficult and still largely open problem | networking infrastructure;countermeasures;routing infrastructure;secure routing schemes;network performance;networked systems;routing protocols;preventive security mechanisms;networking technology;integrated communication infrastructure;link state protocols;internet routing infrastructure security;research;false routing information |
|
train_1323 | Editorial system vendors focus on Adobe and the future | Looking over the newspaper-system market, we note that the Mac is getting new respect. Adobe InDesign has established itself as a solid alternative to Quark XPress for pagination. Positioning themselves for the long run, developers are gradually shifting to new software architectures | newspaper-system market;publishing;macintosh;adobe indesign;pagination |
|
train_1324 | A look at MonacoProfiler 4 | The newest profiling program from Monaco Software adds some valuable features: support for up to 8-color printing, profiling for digital cameras, fine-tuning of black generation and tweaking of profile transforms. We tested its ease of use and a few of the advanced functions. In all, it's pretty good | color-correction;commercial printers;monacoprofiler 4;pantone hexachrome |
|
train_1325 | X-Rite: more than a graphic arts company | Although it is well known as a maker of densitometers and spectrophotometers, X-Rite is active in measuring light and shape in many industries. Among them are automobile finishes, paint and home improvements, scientific instruments, optical semiconductors and even cosmetic dentistry | colour measurement;graphic arts;x-rite |
|
train_1326 | Verona Lastre: consolidation provides opening for a new plate vendor | Fewer companies than ever are manufacturing CTP plates. The market has become globalized, with just four big firms dominating the picture. To the Samor Group, however, globalization looked like an opportunity; it reasoned that many a national and local distributor would welcome a small, competitive, regional manufacturer. A couple of years ago it formed a company, Verona Lastre, to exploit that opportunity. Now Vela, as it's familiarly called, has launched its line of high-quality thermal plates and is busily lining up dealers in Europe and the Americas | verona lastre;ctp plates;vela |
|
train_1328 | Tablet PCs on the way [publishing markets] | Previews of hardware and software look promising for publishing markets | publishing markets;tablet pc |
|
train_1329 | PageFlex + MediaRich = PageRich | Layout and graphics innovators collaborate on fully variable combination. Pageflex and Equilibrium have melded their respective EDIT and MediaRich technologies to make a variable-data composition engine with a Web interface. Though a first-generation effort, it shows substantial promise | mediarich;layout;pagerich;pageflex;graphics;composition;software houses |
|
train_133 | L/sub p/ stability and linearization | A theorem by Hadamard gives a two-part condition under which a map from one Banach space to another is a homeomorphism. The theorem, while often very useful, is incomplete in the sense that it does not explicitly specify the family of maps for which the condition is met. Recently, under a typically weak additional assumption on the map, it was shown that Hadamard's condition is met if and only if the map is a homeomorphism with a Lipschitz continuous inverse. Here, an application is given concerning the relation between the L/sub p/ stability (with 1 <or= p < infinity ) of a nonlinear system and the stability of related linear systems. We also give a result that directs attention to a fundamental limitation concerning what can be proved about linearization and stability for a related familiar family of feedback systems | banach space;nonlinear system;feedback systems;lipschitz continuous inverse;l/sub p/ stability;linear systems;hadamard's condition |
|
train_1330 | Strobbe Graphics' next frontier: CTP for commercial printers | Strobbe is one of the more successful makers of newspaper platesetters, which are sold by Agfa under the Polaris name. But the company also has a growing presence in commercial printing markets, where it sells under its own name | strobbe graphics;polaris;commercial printing;punch international;platesetters;workflow;agfa |
|
train_1331 | Enterprise content integration III: Agari Mediaware's Media Star | Since we introduced the term Enterprise Content Integration (ECI) in January, the concept has gained momentum in the market. In addition to Context Media's Interchange Platform and Savantech's Photon Commerce, Agari Mediaware's Media Star is in the fray. It is a middleware platform that allows large media companies to integrate their digital systems with great flexibility | agari mediaware media star;enterprise content integration;middleware |
|
train_1332 | Personal cards for on-line purchases | Buying presents over the Web has advantages for a busy person: lots of choices, 24-hour accessibility, quick delivery, and you don't even have to wrap the gift. But many people like to select a card or write a personal note to go with their presents, and the options for doing that have been limited. Two companies have seen this limitation as an opportunity: 4YourSoul.com and CardintheBox.com | personalized printing;cardinthebox.com;personal cards;online purchases;4yoursoul.com |
|
train_1333 | The crossing number of P(N, 3) | It is proved that the crossing number of the generalized Petersen graph P(3k + h, 3) is k + h if h in {0, 2} and k + 3 if h = 1, for each k >or= 3, with the single exception of P(9,3), whose crossing number is 2 | crossing number;generalized petersen graph |
|
train_1334 | A shy invariant of graphs | Moving from a well known result of P.L. Hammer et al. (1982), we introduce a new graph invariant, say lambda (G) referring to any graph G. It is a non-negative integer which is non-zero whenever G contains particular induced odd cycles or, equivalently, admits a particular minimum clique-partition. We show that lambda (G) can be efficiently evaluated and that its determination allows one to reduce the hard problem of computing a minimum clique-cover of a graph to an identical problem of smaller size and special structure. Furthermore, one has alpha (G) <or= theta (G) - lambda (G), where alpha (G) and theta (G) respectively denote the cardinality of a maximum stable set of G and of a minimum clique-partition of G | minimum clique-cover;graph invariant;minimum clique-partition;cardinality;induced odd cycles;maximum stable set |
|
train_1335 | Arranging solid balls to represent a graph | By solid balls, we mean a set of balls in R/sup 3/ no two of which can penetrate each other. Every finite graph G can be represented by arranging solid balls in the following way: Put red balls in R/sup 3/, one for each vertex of G, and connect two red balls by a chain when they correspond to a pair of adjacent vertices of G, where a chain means a finite sequence of blue solid balls in which each consecutive balls are tangent. (We may omit the chain if the two red balls are already tangent.) The ball number b(G) of G is the minimum number of balls (red and blue) necessary to represent G. If we put the balls and chains on a table so that all balls sit on the table, then the minimum number of balls for G is denoted by bT(G). Among other things, we prove that b(K/sub 6/) = 8, b(K/sub 7/) = 13 and b/sub T/(K/sub 5/) = 8,b/sub T/(K/sub 6/) = 14. We also prove that c/sub 1/n/sup 3/ < b(K/sub n/) < c/sub 2/n/sup 3/ log n, c/sub 3/n/sup 4//log n < b/sub T/(K/sub n/) < c/sub 4/n/sup 4/ | adjacent vertices;graph representation;finite graph;finite sequence;solid balls |
|
train_1336 | On abelian branched coverings of the sphere | We obtain an enumeration formula for the number of weak equivalence classes of the branched (A * B)-covering of the sphere with m-branch points, when A and B are finite abelian groups with (|A|, |B|) = 1. From this, we can deduce an explicit formula for enumerating the weak equivalence classes of pseudofree spherical (Zp * Zq)-actions on a given surface, when p and q are distinct primes | explicit formula;enumeration formula;abelian branched coverings;pseudofree spherical;weak equivalence classes;finite abelian groups |
|
train_1337 | Some properties of Hadamard matrices coming from dihedral groups | H. Kimura (1996) introduced a method to construct Hadamard matrices of degree 8n + 4 from the dihedral group of order 2n. In this paper we study some properties of this construction | dihedral groups;hadamard matrices |
|
train_1338 | The chromatic spectrum of mixed hypergraphs | A mixed hypergraph is a triple H = (X, C, D), where X is the vertex set, and each of C, D is a list of subsets of X. A strict k-coloring of H is a surjection c : X -> {1,..., k} such that each member of C has two vertices assigned a common value and each member of D has two vertices assigned distinct values. The feasible set of H is {k: H has a strict k-coloring}. Among other results, we prove that a finite set of positive integers is the feasible set of some mixed hypergraph if and only if it omits the number 1 or is an interval starting with 1. For the set {s, t} with 2 <or= s <or= t - 2, the smallest realization has 2t - s vertices. When every member of C union D is a single interval in an underlying linear order on the vertices, the feasible set is also a single interval of integers | positive integers;mixed hypergraph;vertex set;mixed hypergraphs;chromatic spectrum;strict k-coloring
|
train_1339 | Edge-colorings with no large polychromatic stars | Given a graph G and a positive integer r, let f/sub r/(G) denote the largest number of colors that can be used in a coloring of E(G) such that each vertex is incident to at most r colors. For all positive integers n and r, we determine f/sub r/(K/sub n,n/) exactly and f/sub r/(K/sub n/) within 1. In doing so, we disprove a conjecture by Y. Manoussakis et al. (1996) | positive integer;positive integers;edge colorings;polychromatic stars |
|
train_134 | A model of periodic oscillation for genetic regulatory systems | In this paper, we focus on modeling and explaining periodic oscillations in gene-protein systems with a simple nonlinear model and on analyzing effects of time delay on the stability of oscillations. Our main model of genetic regulation comprises a two-gene system with an autoregulatory feedback loop. We exploit multiple time scales and hysteretic properties of the model to construct periodic oscillations with jumping dynamics and analyze the possible mechanism according to the singular perturbation theory. As shown in this paper, periodic oscillations are mainly generated by nonlinearly negative and positive feedback loops in gene regulatory systems, whereas the jumping dynamics is generally caused by time scale differences among biochemical reactions. This simple model may actually act as a genetic oscillator or switch in gene-protein networks because the dynamics are robust for parameter perturbations or environment variations. We also explore effects of time delay on the stability of the dynamics, showing that the time delay generally increases the stability region of the oscillations, thereby making the oscillations robust to parameter changes. Two examples are also provided to numerically demonstrate our theoretical results | bifurcation;nonlinearly positive feedback loops;time delay;circadian rhythm;genetic regulation;modeling;autoregulatory feedback loop;genetic regulatory system;two-gene system;jumping dynamics;singular perturbation theory;stability region;biochemical reactions;nonlinear model;relaxation oscillator;hysteretic properties;nonlinearly negative feedback loops;periodic oscillations;gene-protein systems;oscillations stability
|
train_1340 | Orthogonal decompositions of complete digraphs | A family G of isomorphic copies of a given digraph G is said to be an orthogonal decomposition of the complete digraph D/sub n/ by G, if every arc of D/sub n/ belongs to exactly one member of G and the union of any two different elements from G contains precisely one pair of reverse arcs. Given a digraph H, an H-family mH is the vertex-disjoint union of m copies of H. In this paper, we consider orthogonal decompositions by H-families. Our objective is to prove the existence of such an orthogonal decomposition whenever certain necessary conditions hold and m is sufficiently large | vertex-disjoint union;isomorphic copies;orthogonal decompositions;necessary conditions;complete digraphs
|
train_1341 | STEM: Secure Telephony Enabled Middlebox | Dynamic applications, including IP telephony, have not seen wide acceptance within enterprises because of problems caused by the existing network infrastructure. Static elements, including firewalls and network address translation devices, are not capable of allowing dynamic applications to operate properly. The Secure Telephony Enabled Middlebox (STEM) architecture is an enhancement of the existing network design to remove the issues surrounding static devices. The architecture incorporates an improved firewall that can interpret and utilize information in the application layer of packets to ensure proper functionality. In addition to allowing dynamic applications to function normally, the STEM architecture also incorporates several detection and response mechanisms for well-known network-based vulnerabilities. This article describes the key components of the architecture with respect to the SIP protocol | stem;ip telephony;response mechanisms;network-based vulnerabilities;network address translation devices;static devices;network design;network infrastructure;detection mechanisms;firewalls;dynamic applications;sip protocol;stem architecture;secure telephony enabled middlebox;application layer |
|
train_1342 | Defending against flooding-based distributed denial-of-service attacks: a tutorial | Flooding-based distributed denial-of-service (DDoS) attack presents a very serious threat to the stability of the Internet. In a typical DDoS attack, a large number of compromised hosts are amassed to send useless packets to jam a victim, or its Internet connection, or both. In the last two years, it was discovered that DDoS attack methods and tools are becoming more sophisticated, effective, and also more difficult to trace to the real attackers. On the defense side, current technologies are still unable to withstand large-scale attacks. The main purpose of this article is therefore twofold. The first one is to describe various DDoS attack methods, and to present a systematic review and evaluation of the existing defense mechanisms. The second is to discuss a longer-term solution, dubbed the Internet-firewall approach, that attempts to intercept attack packets in the Internet core, well before reaching the victim | internet stability;internet firewall;distributed attack detection;reflector attacks;ddos attack tools;attack packets interception;ddos attack methods;large-scale attacks;flooding-based distributed denial-of-service attacks;tutorial
|
train_1343 | Estimating the intrinsic dimension of data with a fractal-based method | In this paper, the problem of estimating the intrinsic dimension of a data set is investigated. A fractal-based approach using the Grassberger-Procaccia algorithm is proposed. Since the Grassberger-Procaccia algorithm (1983) performs badly on sets of high dimensionality, an empirical procedure that improves the original algorithm has been developed. The procedure has been tested on data sets of known dimensionality and on time series of Santa Fe competition | fractal-based method;time series;santa fe competition;pattern recognition;data intrinsic dimension estimation |
|
train_1344 | Restoration of archival documents using a wavelet technique | This paper addresses a problem of restoring handwritten archival documents by recovering their contents from the interfering handwriting on the reverse side caused by the seeping of ink. We present a novel method that works by first matching both sides of a document such that the interfering strokes are mapped with the corresponding strokes originating from the reverse side. This facilitates the identification of the foreground and interfering strokes. A wavelet reconstruction process then iteratively enhances the foreground strokes and smears the interfering strokes so as to strengthen the discriminating capability of an improved Canny edge detector against the interfering strokes. The method has been shown to restore the documents effectively with average precision and recall rates for foreground text extraction at 84 percent and 96 percent, respectively | archival documents restoration;handwritten archival documents;wavelet reconstruction process;iterative stroke enhancement;ink seepage;wavelet technique;canny edge detector |
|
train_1345 | Infrared-image classification using hidden Markov trees | An image of a three-dimensional target is generally characterized by the visible target subcomponents, with these dictated by the target-sensor orientation (target pose). An image often changes quickly with variable pose. We define a class as a set of contiguous target-sensor orientations over which the associated target image is relatively stationary with aspect. Each target is in general characterized by multiple classes. A distinct set of Wiener filters are employed for each class of images, to identify the presence of target subcomponents. A Karhunen-Loeve representation is used to minimize the number of filters (templates) associated with a given subcomponent. The statistical relationships between the different target subcomponents are modeled via a hidden Markov tree (HMT). The HMT classifier is discussed and example results are presented for forward-looking-infrared (FLIR) imagery of several vehicles | karhunen-loeve representation;vehicles;hmt;target pose;3d target image;hidden markov trees;target-sensor orientation;infrared-image classification;minimization;ir image classification;contiguous target-sensor orientations;wiener filters;flir imagery;forward-looking-infrared imagery |
|
train_1346 | Automatic multilevel thresholding for image segmentation by the growing time adaptive self-organizing map | In this paper, a Growing TASOM (Time Adaptive Self-Organizing Map) network called "GTASOM" along with a peak finding process is proposed for automatic multilevel thresholding. The proposed GTASOM is tested for image segmentation. Experimental results demonstrate that the GTASOM is a reliable and accurate tool for image segmentation and its results outperform other thresholding methods | growing tasom;image segmentation;growing time adaptive self-organizing map;gtasom;automatic multilevel thresholding;peak finding process
|
train_1347 | A maximum-likelihood surface estimator for dense range data | Describes how to estimate 3D surface models from dense sets of noisy range data taken from different points of view, i.e., multiple range maps. The proposed method uses a sensor model to develop an expression for the likelihood of a 3D surface, conditional on a set of noisy range measurements. Optimizing this likelihood with respect to the model parameters provides an unbiased and efficient estimator. The proposed numerical algorithms make this estimation computationally practical for a wide variety of circumstances. The results from this method compare favorably with state-of-the-art approaches that rely on the closest-point or perpendicular distance metric, a convenient heuristic that produces biased solutions and fails completely when surfaces are not sufficiently smooth, as in the case of complex scenes or noisy range measurements. Empirical results on both simulated and real ladar data demonstrate the effectiveness of the proposed method for several different types of problems. Furthermore, the proposed method offers a general framework that can accommodate extensions to include surface priors, more sophisticated noise models, and other sensing modalities, such as sonar or synthetic aperture radar | surface fitting;simulated ladar data;noisy range measurements;3d surface models;bayesian estimation;unbiased estimator;synthetic aperture radar;calibration;real ladar data;complex scenes;noisy range data;parameter estimation;heuristic;optimal estimation;maximum-likelihood surface estimator;biased solutions;dense range data;registration;sonar;surface reconstruction;sensor model |
|
train_1348 | Reconstructing surfaces by volumetric regularization using radial basis functions | We present a new method of surface reconstruction that generates smooth and seamless models from sparse, noisy, nonuniform, and low resolution range data. Data acquisition techniques from computer vision, such as stereo range images and space carving, produce 3D point sets that are imprecise and nonuniform when compared to laser or optical range scanners. Traditional reconstruction algorithms designed for dense and precise data do not produce smooth reconstructions when applied to vision-based data sets. Our method constructs a 3D implicit surface, formulated as a sum of weighted radial basis functions. We achieve three primary advantages over existing algorithms: (1) the implicit functions we construct estimate the surface well in regions where there is little data, (2) the reconstructed surface is insensitive to noise in data acquisition because we can allow the surface to approximate, rather than exactly interpolate, the data, and (3) the reconstructed surface is locally detailed, yet globally smooth, because we use radial basis functions that achieve multiple orders of smoothness | 3d point sets;vision-based data sets;sparse range data;data acquisition techniques;surfaces reconstruction;noisy data;computer vision;low resolution range data;radial basis functions;stereo range images;nonuniform data;weighted radial basis functions;3d implicit surface;space carving;volumetric regularization
|
train_1349 | Efficient simplicial reconstructions of manifolds from their samples | An algorithm for manifold learning is presented. Given only samples of a finite-dimensional differentiable manifold and no a priori knowledge of the manifold's geometry or topology except for its dimension, the goal is to find a description of the manifold. The learned manifold must approximate the true manifold well, both geometrically and topologically, when the sampling density is sufficiently high. The proposed algorithm constructs a simplicial complex based on approximations to the tangent bundle of the manifold. An important property of the algorithm is that its complexity depends on the dimension of the manifold, rather than that of the embedding space. Successful examples are presented in the cases of learning curves in the plane, curves in space, and surfaces in space; in addition, a case when the algorithm fails is analyzed | sampling density;simplicial reconstructions;true manifold;learned manifold;finite-dimensional differentiable manifold;manifold learning;simplicial complex |
|
train_135 | Hysteretic threshold logic and quasi-delay insensitive asynchronous design | We introduce the class of hysteretic linear-threshold (HLT) logic functions as a novel extension of linear threshold logic, and prove their general applicability for constructing state-holding Boolean functions. We then demonstrate a fusion of HLT logic with the quasi-delay insensitive style of asynchronous circuit design, complete with logical design examples. Future research directions are also identified | cmos implementation;hlt logic;state-holding boolean functions;asynchronous circuit design;quasi-delay insensitive style;digital logic;hysteretic linear-threshold logic functions;logic design |
|
train_1350 | Generalized mosaicing: wide field of view multispectral imaging | We present an approach to significantly enhance the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This is an additional dimension of the generalized mosaic paradigm, which has demonstrated yielding high radiometric dynamic range images in a wide field of view, using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. The image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate, and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra. We are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black/white video camera and a fixed spatially varying spectral (interference) filter | scene sampling rate;scene illumination;registration method;image fusion;wide field of view multispectral imaging;physics-based vision;image-based rendering;color balance;image acquisition;spatially varying spectral bands;generalized mosaicing;scene rendering;hyperspectral imaging;simulated illumination spectra;spatially varying density filter |
|
train_1351 | Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object | We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations | principal eigenvalues;irradiance;lighting variability;surface normal vectors;five-dimensional subspace;radiance;spherical harmonics;principal eigenmodes;analytic principal component analysis;convex lambertian object
|
train_1352 | Elastically adaptive deformable models | We present a technique for the automatic adaptation of a deformable model's elastic parameters within a Kalman filter framework for shape estimation applications. The novelty of the technique is that the model's elastic parameters are not constant, but spatio-temporally varying. The variation of the elastic parameters depends on the distance of the model from the data and the rate of change of this distance. Each pass of the algorithm uses physics-based modeling techniques to iteratively adjust both the geometric and the elastic degrees of freedom of the model in response to forces that are computed from the discrepancy between the model and the data. By augmenting the state equations of an extended Kalman filter to incorporate these additional variables, we are able to significantly improve the quality of the shape estimation. Therefore, the model's elastic parameters are always initialized to the same value and they are subsequently modified depending on the data and the noise distribution. We present results demonstrating the effectiveness of our method for both two-dimensional and three-dimensional data | elastically adaptive deformable models;physics-based modeling techniques;elastic degrees of freedom;extended kalman filter;geometric degrees of freedom;state equations;kalman filter framework;shape estimation;elastic parameters;automatic adaptation |
|
train_1353 | Generalized spatio-chromatic diffusion | A framework for diffusion of color images is presented. The method is based on the theory of thermodynamics of irreversible transformations which provides a suitable basis for designing correlations between the different color channels. More precisely, we derive an equation for color evolution which comprises a purely spatial diffusive term and a nonlinear term that depends on the interactions among color channels over space. We apply the proposed equation to images represented in several color spaces, such as RGB, CIELAB, Opponent colors, and IHS | vector-valued diffusion;opponent colors;generalized spatio-chromatic diffusion;cielab;diffusion;thermodynamics;color evolution;color images;rgb;ihs;irreversible transformations;nonlinear term;color channels;spatial diffusive term;scale-space |
|
train_1354 | Design and analysis of optimal material distribution policies in flexible manufacturing systems using a single AGV | Modern automated manufacturing processes employ automated guided vehicles (AGVs) for material handling, which serve several machine centres (MC) in a factory. Optimal scheduling of AGVs can significantly help to increase the efficiency of the manufacturing process by minimizing the idle time of MCs waiting for the raw materials. We analyse the requirements for an optimal schedule and then provide a mathematical framework for an efficient schedule of material delivery by an AGV. A mathematical model is developed and then a strategy for optimal material distribution of the available raw material to the MCs is derived. With this model, the optimal number of MCs to be utilized is also determined. Finally, the material delivery schedule employing multiple journeys to the MCs by the AGV is carried out. Through rigorous analysis and simulation experiments, we show that such a delivery strategy will optimize the overall performance | manufacturing lead time;optimal scheduling;material delivery;automated guided vehicle;idle time minimization;machine centres;waiting time;agv;flexible manufacturing systems;optimal material distribution policies;material handling
|
train_1355 | Comparison of push and pull systems with transporters: a metamodelling approach | Analyses push and pull systems with transportation consideration. A multiproduct, multiline, multistage production system was used to compare the two systems. The effects of four factors (processing time variation, demand variation, transporters, batch size) on throughput rate, average waiting time in the system and machine utilization were studied. The study uses metamodels to compare the two systems. They serve a dual purpose of expressing system performance measures in the form of a simple equation and reducing computational time when comparing the two systems. Research shows that the number of transporters used and the batch size have a significant effect on the performance measures of both systems | performance measures;multiproduct multiline multistage production system;throughput rate;average waiting time;pull systems;machine utilization;metamodelling approach;push systems;batch size;demand variation;processing time variation;transporters |
|
train_1356 | Five-axis NC milling of ruled surfaces: optimal geometry of a conical tool | The side milling of ruled surfaces using a conical milling cutter was studied. This is a field that has largely been ignored by research scientists, but it is much used in industry, especially to machine turbine blades. We first suggest an improved positioning with respect to the directrices of the ruled surface. As compared with the methods already developed for the cylindrical cutter, this positioning enables the error between the cutter and the work-piece to be reduced. An algorithm is then introduced to calculate error so one can determine the cutter dimensions (cone radius and angle) in order to respect the tolerance interval imposed by the design office. This study provides an opportunity to determine cutters with greater dimensions, thus alleviating bending problems during milling | optimal geometry;conical milling cutter;conical;side milling;tolerance interval;positioning;five-axis nc milling;cutter dimensions;ruled surfaces |
|
train_1357 | Work sequencing in a manufacturing cell with limited labour constraints | This study focuses on the analysis of group scheduling heuristics in a dual-constrained, automated manufacturing cell, where labour utilization is limited to setups, tear-downs and loads/unloads. This scenario is realistic in today's automated manufacturing cells. The results indicate that policies for allocating labour to tasks have very little impact in such an environment. Furthermore, the performance of efficiency oriented, exhaustive, group scheduling heuristics deteriorated while the performance of the more complex, non-exhaustive heuristics improved. Thus, it is recommended that production managers use the simplest labour scheduling policy, and instead focus their efforts to activities such as job scheduling and production planning in such environments | efficiency oriented exhaustive group scheduling heuristics;automated manufacturing cells;job scheduling;dual-constrained automated manufacturing cell;nonexhaustive heuristics;manufacturing cell;group scheduling heuristics;limited labour constraints;work sequencing;labour allocation policies;production planning |
|
train_1358 | Analysis of the surface roughness and dimensional accuracy capability of fused deposition modelling processes | Building up materials in layers poses significant challenges from the viewpoint of material science, heat transfer and applied mechanics. However, numerous aspects of the use of these technologies have yet to be studied. One of these aspects is the characterization of the surface roughness and dimensional precision obtainable in layered manufacturing processes. In this paper, a study of roughness parameters obtained through the use of these manufacturing processes was made. Prototype parts were manufactured using FDM techniques and an experimental analysis of the resulting roughness average (R/sub a/) and rms roughness (R/sub q/) obtained through the use of these manufacturing processes was carried out. Dimensional parameters were also studied in order to determine the capability of the Fused Deposition Modelling process for manufacturing parts | cnc-controlled robot;rms roughness;surface roughness;dimensional accuracy capability;layered manufacturing processes;fused deposition modelling processes;prototype parts;dimensional precision;cad model;rapid prototyping;three-dimensional solid objects;extrusion head;roughness average
|
train_1359 | On fuzzy and probabilistic control charts | In this article, different procedures of constructing control charts for linguistic data, based on fuzzy and probability theory, are discussed. Three sets of membership functions, with different degrees of fuzziness, are proposed for fuzzy approaches. A comparison between fuzzy and probability approaches, based on the Average Run Length and samples under control, is conducted for real data. Contrary to the conclusions of Raz and Wang (1990) the choice of degree of fuzziness affected the sensitivity of control charts | control chart construction;membership functions;fuzzy control charts;probabilistic control charts;average run length;fuzzy subsets;porcelain products;fuzziness degree;linguistic data;sensitivity |
|
train_136 | Design of 1-D and 2-D variable fractional delay allpass filters using weighted least-squares method | In this paper, a weighted least-squares method is presented to design one-dimensional and two-dimensional variable fractional delay allpass filters. First, each coefficient of the variable allpass filter is expressed as the polynomial of the fractional delay parameter. Then, the nonlinear phase error is approximated by a weighted equation error such that the cost function can be converted into a quadratic form. Next, by minimizing the weighted equation error, the optimal polynomial coefficients can be obtained iteratively by solving a set of linear simultaneous equations at each iteration. Finally, the design examples are demonstrated to illustrate the effectiveness of the proposed approach | cost function;fractional delay parameter;variable fractional delay allpass filters;linear simultaneous equations;nonlinear phase error approximation;1d allpass filters;2d allpass filters;weighted least-squares method;optimal polynomial coefficients;weighted equation error
|
train_1360 | Automated post bonding inspection by using machine vision techniques | Inspection plays an important role in the semiconductor industry. In this paper, we focus on the inspection task after wire bonding in packaging. The purpose of wire bonding (W/B) is to connect the bond pads with the lead fingers. Two major types of defects are (1) bonding line missing and (2) bonding line breakage. The numbers of bonding lines and bonding balls are used as the features for defect classification. The proposed method consists of image preprocessing, orientation determination, connection detection, bonding line detection, bonding ball detection, and defect classification. The proposed method is simple and fast. The experimental results show that the proposed method can detect the defects effectively | defect classification;wire bonding;orientation determination;lead fingers;bonding line detection;machine vision;packaging;connection detection;semiconductor industry;bonding line missing;ic manufacturing;bonding line breakage;bonding balls;automated post bonding inspection;bonding ball detection;image preprocessing;bond pad connection |
|
train_1361 | Adaptive scheduling of batch servers in flow shops | Batch servicing is a common way of benefiting from economies of scale in manufacturing operations. Good examples of production systems that allow for batch processing are ovens found in the aircraft industry and in semiconductor manufacturing. In this paper we study the issue of dynamic scheduling of such systems within the context of multi-stage flow shops. So far, a great deal of research has concentrated on the development of control strategies, which only address the batch stage. This paper proposes an integral scheduling approach that also includes succeeding stages. In this way, we aim for shop optimization, instead of optimizing performance for a single stage. Our so-called look-ahead strategy adapts its scheduling decision to shop status, which includes information on a limited number of near-future arrivals. In particular, we study a two-stage flow shop, in which the batch stage is succeeded by a serial stage. The serial stage may be realized by a single machine or by parallel machines. Through an extensive simulation study it is demonstrated how shop performance can be improved by the proposed strategy relative to existing strategies | flow shops;control strategies;two-stage flow shop;batch servers;simulation study;parallel machines;manufacturing operations;production systems;integral scheduling approach;near-future arrivals;dynamic scheduling;batch servicing;look-ahead strategy;aircraft industry;shop optimization;single machine;ovens;semiconductor manufacturing;adaptive scheduling;multi-stage flow shops |
|
train_1362 | Process planning for reliable high-speed machining of moulds | A method of generating NC programs for the high-speed milling of moulds is investigated. Forging dies and injection moulds, whether plastic or aluminium, have a complex surface geometry. In addition they are made of steels of hardness as much as 30 or even 50 HRC. Since 1995, high-speed machining has been much adopted by the die-making industry, which with this technology can reduce its use of Sinking Electrodischarge Machining (SEDM). EDM, in general, calls for longer machining times. The use of high-speed machining makes it necessary to redefine the preliminary stages of the process. In addition, it affects the methodology employed in the generation of NC programs, which requires the use of high-level CAM software. The aim is to generate error-free programs that make use of optimum cutting strategies in the interest of productivity and surface quality. The final result is a more reliable manufacturing process. There are two risks in the use of high-speed milling on hardened steels. One of these is tool breakage, which may be very costly and may furthermore entail marks on the workpiece. The other is collisions between the tool and the workpiece or fixtures, the result of which may be damage to the ceramic bearings in the spindles. In order to minimize these risks it is necessary that new control and optimization steps be included in the CAM methodology. There are three things that the firm adopting high-speed methods should do. It should redefine its process engineering, it should systematize access by its CAM programmers to high-speed know-how, and it should take up the use of process simulation tools. In the latter case, it will be very advantageous to use tools for the estimation of cutting forces. The new work methods proposed in this article have made it possible to introduce high-speed milling (HSM) into the die industry. Examples are given of how the technique has been applied with CAM programming re-engineered as here proposed, with an explanation of the novel features and the results | forging dies;cam methodology;process engineering redefinition;optimum cutting strategies;complex surface geometry;cutting strategies;injection moulds;error-free programs;moulds;process simulation tools;cam programming re-engineering;productivity;tool workpiece collisions;tool breakage;nc programs;process planning;high-speed milling;surface quality;ceramic bearings;hardened steels;reliable high-speed machining |
|
train_1363 | Heuristics for single-pass welding task sequencing | Welding task sequencing is a prerequisite in the offline programming of robot arc welding. Single-pass welding task sequencing can be modelled as a modified travelling salesman problem. Owing to the difficulty of the resulting arc-routing problems, effective local search heuristics are developed. Computational speed becomes important because robot arc welding is often part of an automated process-planning procedure. Generating a reasonable solution in an acceptable time is necessary for effective automated process planning. Several different heuristics are proposed for solving the welding task-sequencing problem considering both productivity and the potential for welding distortion. Constructive heuristics based on the nearest neighbour concept and tabu search heuristics are developed and enhanced using improvement procedures. The effectiveness of the heuristics developed is tested and verified on actual welded structure problems and random problems | random problems;nearest neighbour concept;welding distortion;welded structure problems;tabu search heuristics;single-pass welding task sequencing;local search heuristics;offline programming;modified travelling salesman problem;productivity;computational speed;automated process-planning procedure;robot arc welding;constructive heuristics |
|
train_1364 | An adaptive sphere-fitting method for sequential tolerance control | The machining of complex parts typically involves a logical and chronological sequence of n operations on m machine tools. Because manufacturing datums cannot always match design constraints, some of the design specifications imposed on the part are usually satisfied by distinct subsets of the n operations prescribed in the process plan. Conventional tolerance control specifies a fixed set point for each operation and a permissible variation about this set point to insure compliance with the specifications, whereas sequential tolerance control (STC) uses real-time measurement information at the completion of one stage to reposition the set point for subsequent operations. However, it has been shown that earlier sphere-fitting methods for STC can lead to inferior solutions when the process distributions are skewed. This paper introduces an extension of STC that uses an adaptive sphere-fitting method that significantly improves the yield in the presence of skewed distributions as well as significantly reducing the computational effort required by earlier probabilistic search methods | real-time measurement information;compliance;skewed distributions;sequential tolerance control;yield improvement;adaptive sphere-fitting method;computational effort;machine tools;design constraints |
|
train_1365 | Deadlock-free scheduling in flexible manufacturing systems using Petri nets | This paper addresses the deadlock-free scheduling problem in Flexible Manufacturing Systems. An efficient deadlock-free scheduling algorithm was developed, using timed Petri nets, for a class of FMSs called Systems of Sequential Systems with Shared Resources (S/sup 4/ R). The algorithm generates a partial reachability graph to find the optimal or near-optimal deadlock-free schedule in terms of the firing sequence of the transitions of the Petri net model. The objective is to minimize the mean flow time (MFT). An efficient truncation technique, based on the siphon concept, has been developed and used to generate the minimum necessary portion of the reachability graph to be searched. It has been shown experimentally that the developed siphon truncation technique enhances the ability to develop deadlock-free schedules of systems with a high number of deadlocks, which cannot be achieved using standard Petri net scheduling approaches. It may be necessary, in some cases, to relax the optimality condition for large FMSs in order to make the search effort reasonable. Hence, a User Control Factor (UCF) was defined and used in the scheduling algorithm. The objective of using the UCF is to achieve an acceptable trade-off between the solution quality and the search effort. Its effect on the MFT and the CPU time has been investigated. Randomly generated examples are used for illustration and comparison. Although UCF did not affect the mean flow time, it was shown that increasing it reduces the search effort (CPU time) significantly | optimal deadlock-free schedule;siphon truncation technique;systems of sequential systems with shared resources;optimality condition relaxation;mean flow time minimization;user control factor;randomly generated examples;deadlock-free scheduling;near-optimal deadlock-free schedule;petri net model transitions firing sequence;flexible manufacturing systems;partial reachability graph;cpu time;petri nets |