title | abstract |
---|---|
Design, measurement and equivalent circuit synthesis of high power HF transformer for three-phase composite dual active bridge topology | The high-voltage, high-frequency (HF) transformer provides the isolation between the high and low DC-link voltages in dual active bridge (DAB) converters. Such DAB converters are finding wide application as the intermediate DC-DC converter in the transformerless intelligent power substation (TIPS), which has been proposed as an alternative to the conventional distribution transformer connecting 13.8 kV and 480 V grids. The design of the HF transformer used in the DAB stage of such an application is very challenging considering the required isolation, size, and cost. In this paper, the specification generation, design, characterization, and test and measurement results for a 10 kHz HF transformer are presented, highlighting the above challenges. |
Thinking with Mahatma Gandhi: Beyond Liberal Democracy | Gandhi's case against the West looks... infinitely stronger than it looked, to us Westerners, thirty years ago. G. D. H. Cole |
Survey of Fog Computing: Fundamental, Network Applications, and Research Challenges | Fog computing is an emerging paradigm that extends computation, communication, and storage facilities toward the edge of a network. Compared to traditional cloud computing, fog computing can support delay-sensitive service requests from end-users (EUs) with reduced energy consumption and low traffic congestion. Fog networks can be viewed as offloading computation and storage that would otherwise burden the core: each fog node decides either to process a service request with its own available resources or to forward it to the cloud server. Fog computing thus helps to achieve efficient resource utilization and better performance in terms of delay, bandwidth, and energy consumption. This survey starts by providing an overview of the fundamentals of the fog computing architecture. Service and resource allocation approaches are then summarized, addressing several critical issues such as latency, bandwidth, and energy consumption in fog computing. Afterward, compared to other surveys, this paper provides an extensive overview of state-of-the-art network applications and the major research aspects of designing these networks. In addition, this paper highlights ongoing research efforts, open challenges, and research trends in fog computing. |
Data-driven risk-averse stochastic optimization with Wasserstein metric | The traditional two-stage stochastic programming approach is to minimize the total expected cost under the assumption that the distribution of the random parameters is known. In most practical settings, however, the actual distribution of the random parameters is unknown, and only a series of historical data is available. Thus, the solution obtained from the traditional two-stage stochastic program can be biased and suboptimal for the true problem if the estimated distribution of the random parameters is inaccurate, which is usually the case when only a limited amount of historical data is available. In this paper, we propose a data-driven risk-averse stochastic optimization approach. Based on the observed historical data, we construct a confidence set for the ambiguous distribution of the random parameters and develop a risk-averse stochastic optimization framework that minimizes the total expected cost under the worst-case distribution within the constructed confidence set. We introduce the Wasserstein metric to construct the confidence set; using this metric, we can reformulate the risk-averse two-stage stochastic program into a tractable counterpart. In addition, we derive the worst-case distribution and develop efficient algorithms to solve the reformulated problem. Moreover, we perform a convergence analysis to show that the risk averseness of the proposed formulation vanishes as the amount of historical data grows to infinity and, accordingly, that the corresponding optimal objective value converges to that of the traditional risk-neutral two-stage stochastic program. We further derive the convergence rate precisely, which indicates the value of data. Finally, numerical experiments on risk-averse stochastic facility location and stochastic unit commitment problems verify the effectiveness of the proposed framework. |
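To make the setup concrete, here is a sketch of the formulation in standard distributionally robust optimization notation; the symbols are illustrative and not taken verbatim from the paper. With $N$ historical samples forming the empirical distribution $\hat{P}_N$, the confidence set is a Wasserstein ball of radius $\theta_N$ around $\hat{P}_N$, and the first-stage decision $x$ hedges against the worst distribution in that ball:

$$\mathcal{D}_N = \left\{ P : W\big(P, \hat{P}_N\big) \le \theta_N \right\}, \qquad \min_{x \in X} \; c^\top x + \max_{P \in \mathcal{D}_N} \mathbb{E}_P\left[ Q(x, \xi) \right],$$

where $Q(x, \xi)$ is the second-stage cost under scenario $\xi$. The convergence result in the abstract corresponds to $\theta_N \to 0$ as $N \to \infty$: the ball shrinks to the empirical distribution and the risk-neutral problem is recovered.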
Transitioning from federated avionics architectures to Integrated Modular Avionics | This paper identifies considerations for transitioning from a federated avionics architecture to an integrated modular avionics (IMA) architecture. Federated avionics architectures make use of distributed avionics functions that are packaged as self-contained units (LRUs and LRMs). IMA architectures employ a high-integrity, partitioned environment that hosts multiple avionics functions of different criticalities on a shared computing platform. This provides for weight and power savings since computing resources can be used more efficiently. This paper establishes the benefits of transitioning to IMA. To aid in the planning process, the paper also identifies factors to consider before transitioning to IMA. The approach to resource management (computing, communication, and I/O) is identified as the fundamental architectural difference between federated and IMA systems. The paper describes how this difference changes the development process and benefits the systems integrator. This paper also addresses misconceptions about the resource management mechanisms that can occur during a transition to IMA and concludes that resources are not inherently constrained by IMA architectures. Guidance is provided for transitioning to both "open" and "closed" IMA architectures. Open IMA architectures utilize open interface standards that are available in the public domain. Closed IMA architectures utilize proprietary interfaces that can be customized. The analysis of these avionics architectures is based upon the authors' experience in developing platform computing systems at GE Aviation. GE Aviation has developed open system IMA architectures for commercial aircraft (Boeing 787 Dreamliner), as well as military aircraft (Boeing C-130 combat aircraft, and Boeing KC-767 Tanker). |
Refining Word Embeddings Using Intensity Scores for Sentiment Analysis | Word embeddings that provide continuous low-dimensional vector representations of words have been extensively used for various natural language processing tasks. However, existing context-based word embeddings such as Word2vec and GloVe typically fail to capture sufficient sentiment information, which may result in words with similar vector representations having opposite sentiment polarities (e.g., good and bad), thus degrading sentiment analysis performance. To tackle this problem, recent studies have suggested learning sentiment embeddings that incorporate sentiment polarity (positive and negative) information from labeled corpora. This study adopts another strategy to learn sentiment embeddings. Instead of creating a new word embedding from labeled corpora, we propose a word vector refinement model that refines existing pretrained word vectors using real-valued sentiment intensity scores provided by sentiment lexicons. The idea of the refinement model is to adjust each word vector so that it is closer to words in the lexicon that are both semantically and sentimentally similar (i.e., those with similar intensity scores) and further away from sentimentally dissimilar words (i.e., those with dissimilar intensity scores). An obvious advantage of the proposed method is that it can be applied to any pretrained word embeddings. In addition, the intensity scores can provide more fine-grained real-valued sentiment information than binary polarity labels to guide the refinement process. Experimental results show that the proposed refinement model can improve both conventional word embeddings and previously proposed sentiment embeddings for binary, ternary, and fine-grained sentiment classification on the SemEval and Stanford Sentiment Treebank datasets. |
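A minimal numpy sketch of this refinement idea, assuming a lexicon mapping words to intensity scores; the neighbor count, rank-based weighting, and update rate are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def refine_embeddings(emb, lexicon, k=10, beta=0.1, iters=5):
    """Nudge each lexicon word's vector toward its semantic neighbors,
    weighting neighbors whose sentiment intensity is closest more heavily."""
    words = [w for w in lexicon if w in emb]
    V = np.stack([emb[w] for w in words]).astype(float)
    s = np.array([lexicon[w] for w in words])
    for _ in range(iters):
        U = V / np.linalg.norm(V, axis=1, keepdims=True)
        sim = U @ U.T                      # cosine similarity between lexicon words
        np.fill_diagonal(sim, -np.inf)
        for i in range(len(words)):
            nn = np.argsort(sim[i])[-k:]   # k nearest semantic neighbors
            order = np.argsort(np.abs(s[nn] - s[i]))  # rank by intensity closeness
            ranks = np.empty(k)
            ranks[order] = np.arange(k)
            w = 1.0 / (1.0 + ranks)        # closest-intensity neighbor weighs most
            w /= w.sum()
            V[i] = (1 - beta) * V[i] + beta * (w @ V[nn])  # move toward weighted mean
    return {word: V[i] for i, word in enumerate(words)}

# usage: emb = {"good": vec, ...}; lexicon = {"good": 7.9, "bad": 2.1, ...}
```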
LineScout Technology Opens the Way to Robotic Inspection and Maintenance of High-Voltage Power Lines | Historically, the inspection and maintenance of high-voltage power lines have been performed by linemen using various traditional means. In recent years, the use of robots has emerged as a new and complementary method of performing such tasks, and several initiatives have been explored around the world. Among them is the teleoperated robotic platform called LineScout Technology, developed by Hydro-Québec, which has the capacity to clear most obstacles found on the grid. Since its introduction into operations in 2006, it has been considered by many utilities as the pioneering project in the domain. This paper's purpose is to present the mobile platform design and its main mechatronic subsystems to support a comprehensive description of the main functions and application modules it offers. This includes sensors and a compact modular arm equipped with tools to repair cables and broken conductor strands. The system has now been used on many occasions to assess the condition of power line infrastructure, and some results are presented. Finally, future developments and a potential technology roadmap are briefly discussed. |
Stress sensitivity of fault seismicity: a comparison between limited-offset oblique and major strike-slip faults | We present a new three-dimensional inventory of the southern San Francisco Bay area faults and use it to calculate stress applied principally by the 1989 M=7.1 Loma Prieta earthquake, and to compare fault seismicity rates before and after 1989. The major high-angle right-lateral faults exhibit a different response to the stress change than do minor oblique (right-lateral/thrust) faults. Seismicity on oblique-slip faults in the southern Santa Clara Valley thrust belt increased where the faults were unclamped. The strong dependence of seismicity change on normal stress change implies a high coefficient of static friction. In contrast, we observe that faults with significant offset (>50-100 km) behave differently; microseismicity on the Hayward fault diminished where right-lateral shear stress was reduced and where it was unclamped by the Loma Prieta earthquake. We observe a similar response on the San Andreas fault zone in southern California after the Landers earthquake sequence. Additionally, the offshore San Gregorio fault shows a seismicity rate increase where right-lateral/oblique shear stress was increased by the Loma Prieta earthquake, despite also being clamped by it. These responses are consistent with either a low coefficient of static friction or high pore fluid pressures within the fault zones. We can explain the different behavior of the two styles of faults if those with large cumulative offset become impermeable through gouge buildup; coseismically pressurized pore fluids could be trapped and negate imposed normal stress changes, whereas in more limited-offset faults fluids could rapidly escape. The difference in behavior between minor and major faults may explain why frictional failure criteria that apply intermediate coefficients of static friction can be effective in describing the broad distributions of aftershocks that follow large earthquakes, since many of these events occur both inside and outside major fault zones. |
Penetration testing: Concepts, attack methods, and defense strategies | Penetration testing helps to secure networks and highlights their security issues. In this paper, we investigate different aspects of penetration testing, including tools, attack methodologies, and defense strategies. More specifically, we performed different penetration tests using private networks, devices, and virtualized systems and tools, predominantly from the Kali Linux suite. The attacks we performed included: smartphone penetration testing, hacking phone Bluetooth, traffic sniffing, hacking WPA-protected WiFi, man-in-the-middle attacks, spying (accessing a PC microphone), and hacking a remote PC via IP and open ports using an advanced port scanner. The results are then summarized and discussed. The paper also outlines the detailed steps and methods used while conducting these attacks. |
Knowledge Discovery and Data Mining Applications in the Healthcare Industry: A Comprehensive Study | The healthcare industry is one of the most attractive domains for realizing actionable knowledge discovery objectives. This chapter reviews recent research on knowledge discovery and data mining applications in the healthcare industry and proposes a new classification of these applications. The studies show that knowledge discovery and data mining applications in the healthcare industry can be classified into three major classes, namely the patient view, the market view, and the system view. The patient view includes papers that performed pure data mining on healthcare industry data. The market view includes papers that treat patients as customers. The system view includes papers that developed a decision support system. The goal of this classification is to identify research opportunities and gaps for researchers interested in this context. |
A new method for designing and simulating CNTFET-based ternary gates and arithmetic circuits | This paper presents a new design of three-valued logic gates based on carbon nanotube transistors. The proposed circuit is compared with existing circuit models. The simulation results, obtained using HSPICE, indicate that the proposed model outperforms the existing models in terms of power and delay. Moreover, using the load-transistor design yields a 48.23% reduction in power-delay product (PDP) compared with the best previously reported model. Furthermore, the implementation of ternary arithmetic circuits, such as the half adder and multiplier, using the load-transistor design has been investigated. A common term shared between the Sum and Carry outputs of the half adder (HA), and between the Carry and Product outputs of the lower multiplier (MUL), is exploited in the arithmetic circuits. Because a better-performing proposal has been utilized for the implementation of the three-valued gates, our model yields PDP reductions of 9.95% and 23.87% in the HA and MUL, respectively, compared with the best models reported. |
Cyclostationary Feature Detection in Cognitive Radio using Different Modulation Schemes | The various spectrum sensing schemes used in cognitive radio have long been researched and discussed. An ideal detection scheme should be fast, accurate, and efficient. Cyclostationary feature detection is a detection scheme that satisfies all these criteria. The method also possesses the ability to distinguish between noise and the primary user signal. One major advantage of the cyclostationary feature detection method is that, in addition to identifying the primary user signal, it also identifies the modulation scheme used by the primary user. This paper investigates the cyclostationary feature detection method under different modulation schemes. In this paper, cooperative spectrum sensing is performed, which involves cooperation among various cognitive relay nodes. Cooperative spectrum sensing is thus found to be an effective technique for improving detection performance by exploiting the spatial diversity of the various relay nodes. |
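At the heart of such a detector is an estimate of the cyclic autocorrelation, which peaks at nonzero cycle frequencies for modulated signals but not for stationary noise. A toy numpy sketch follows; the signal parameters are illustrative, and a real detector would scan many cycle frequencies and apply a threshold. For carrier-modulated BPSK, a feature appears at twice the carrier frequency:

```python
import numpy as np

def cyclic_autocorr(x, alpha, tau, fs):
    """Estimate R_x^alpha(tau): mean of x[n+tau] * conj(x[n]) * exp(-j2*pi*alpha*n/fs)."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n / fs))

rng = np.random.default_rng(0)
fs, fc, sps = 16.0, 2.0, 16                  # sample rate, carrier, samples per symbol
bits = rng.choice([-1.0, 1.0], 512)
t = np.arange(bits.size * sps)
bpsk = np.repeat(bits, sps) * np.cos(2 * np.pi * fc * t / fs)   # BPSK on a carrier
noise = rng.standard_normal(bpsk.size)

for sig, name in [(bpsk, "BPSK"), (noise, "noise")]:
    feat = abs(cyclic_autocorr(sig, alpha=2 * fc, tau=0, fs=fs))
    print(f"{name}: |R^(2fc)(0)| = {feat:.3f}")  # ~0.25 for BPSK, ~0 for noise
```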
Lower Health Literacy is Associated with Poorer Health Status and Outcomes in Chronic Obstructive Pulmonary Disease | Limited health literacy is associated with poor outcomes in many chronic diseases, but little is known about health literacy in chronic obstructive pulmonary disease (COPD). To examine the associations between health literacy and both outcomes and health status in COPD. Structured interviews were administered to 277 subjects with self-report of physician-diagnosed COPD, recruited through US random-digit telephone dialing. Health literacy was measured with a validated three-item battery. Multivariable linear regression, controlling for sociodemographics including income and education, determined the cross-sectional associations between health literacy and COPD-related health status: COPD Severity Score, COPD Helplessness Index, and Airways Questionnaire-20R [measuring respiratory-specific health-related quality of life (HRQoL)]. Multivariable logistic regression estimated associations between health literacy and COPD-related hospitalizations and emergency department (ED) visits. Taking socioeconomic status into account, poorer health literacy (lowest tertile compared to highest tertile) was associated with: worse COPD severity (+2.3 points; 95 % CI 0.3–4.4); greater COPD helplessness (+3.7 points; 95 % CI 1.6–5.8); and worse respiratory-specific HRQoL (+3.5 points; 95 % CI 1.8–4.9). Poorer health literacy, also controlling for the same covariates, was associated with higher likelihood of COPD-related hospitalizations (OR = 6.6; 95 % CI 1.3–33) and COPD-related ED visits (OR = 4.7; 95 % CI 1.5–15). Analyses for trend across health literacy tertiles were statistically significant (p < 0.05) for all above outcomes. Independent of socioeconomic status, poor health literacy is associated with greater COPD severity, greater COPD helplessness, worse respiratory-specific HRQoL, and higher odds of COPD-related emergency health-care utilization. These results underscore that COPD patients with poor health literacy may be at particular risk for poor health-related outcomes. |
Cell-Adhesive Bioinspired and Catechol-Based Multilayer Freestanding Membranes for Bone Tissue Engineering | Mussels are marine organisms that have been widely mimicked due to their exceptional adhesion to all kinds of surfaces, including rocks, under wet conditions. The proteins present in the mussel's foot contain 3,4-dihydroxy-l-alanine (DOPA), an amino acid of the catechol family reported for its adhesive character. We therefore synthesized a mussel-inspired conjugated polymer by modifying the backbone of hyaluronic acid with dopamine via carbodiimide chemistry. Ultraviolet-visible (UV-vis) spectroscopy and nuclear magnetic resonance (NMR) techniques confirmed the success of this modification. Different techniques have been reported to produce two-dimensional (2D) or three-dimensional (3D) systems capable of supporting cells and tissue regeneration; among them, multilayer systems allow the construction of hierarchical structures from the nano- to the macroscale. In this study, the layer-by-layer (LbL) technique was used to produce freestanding multilayer membranes made exclusively of chitosan and dopamine-modified hyaluronic acid (HA-DN). Electrostatic interactions were found to be the main forces involved in the film construction. The surface morphology, chemistry, and mechanical properties of the freestanding membranes were characterized, confirming the enhancement of the adhesive properties in the presence of HA-DN. The MC3T3-E1 cell line was cultured on the surface of the membranes, demonstrating the potential of these freestanding multilayer systems for bone tissue engineering. |
Life expectancy, economic inequality, homicide, and reproductive timing in Chicago neighbourhoods. | In comparisons among Chicago neighbourhoods, homicide rates in 1988-93 varied more than 100-fold, while male life expectancy at birth ranged from 54 to 77 years, even with the effects of homicide mortality removed. This "cause-deleted" life expectancy was highly correlated with homicide rates; a measure of economic inequality added significant additional prediction, whereas median household income did not. Deaths from internal causes (diseases) show similar age patterns, despite different absolute levels, in the best and worst neighbourhoods, whereas deaths from external causes (homicide, accident, suicide) do not. As life expectancy declines across neighbourhoods, women reproduce earlier; by age 30, however, neighbourhood no longer affects age-specific fertility. These results support the hypothesis that life expectancy itself may be a psychologically salient determinant of risk taking and the timing of life transitions. |
A Fast Algorithm for Finding Dominators in a Flowgraph | A fast algorithm for finding dominators in a flowgraph is presented. The algorithm uses depth-first search and an efficient method of computing functions defined on paths in trees. A simple implementation of the algorithm runs in O(m log n) time, where m is the number of edges and n is the number of vertices in the problem graph. A more sophisticated implementation runs in O(m α(m, n)) time, where α(m, n) is a functional inverse of Ackermann's function.
Both versions of the algorithm were implemented in Algol W, a Stanford University version of Algol, and tested on an IBM 370/168. The programs were compared with an implementation by Purdom and Moore of a straightforward O(mn)-time algorithm, and with a bit vector algorithm described by Aho and Ullman. The fast algorithm beat the straightforward algorithm and the bit vector algorithm on all but the smallest graphs tested. |
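For contrast with the paper's fast algorithm, here is a minimal sketch of the straightforward iterative data-flow formulation of dominators (the style of baseline the paper compares against); it assumes every vertex is reachable from the root and makes no attempt at the O(m log n) bound:

```python
def dominators(succ, root):
    """dom[v] = {v} union the intersection of dom[p] over predecessors p,
    iterated to a fixed point (the classic O(mn)-style formulation)."""
    nodes = list(succ)
    pred = {v: set() for v in nodes}
    for u in nodes:
        for v in succ[u]:
            pred[v].add(u)
    dom = {v: set(nodes) for v in nodes}
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v == root:
                continue
            new = set(nodes)
            for p in pred[v]:
                new &= dom[p]
            new.add(v)
            if new != dom[v]:
                dom[v], changed = new, True
    return dom

g = {"r": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(dominators(g, "r"))  # c is dominated only by r and itself
```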
Full 3D reconstruction of transparent objects | Numerous techniques have been proposed over the past decades for reconstructing 3D models of opaque objects. However, none of them can be directly applied to transparent objects. This paper presents a fully automatic approach for reconstructing complete 3D shapes of transparent objects. By positioning an object on a turntable, its silhouettes and light refraction paths under different viewing directions are captured. Then, starting from an initial rough model generated by space carving, our algorithm progressively optimizes the model under three constraints: surface and refraction normal consistency, surface projection and silhouette consistency, and surface smoothness. Experimental results on both synthetic and real objects demonstrate that our method can successfully recover the complex shapes of transparent objects and faithfully reproduce their light refraction properties. |
Group Equivariant Capsule Networks | We present group equivariant capsule networks, a framework to introduce guaranteed equivariance and invariance properties to the capsule network idea. Our work can be divided into two contributions. First, we present a generic routing by agreement algorithm defined on elements of a group and prove that equivariance of output pose vectors, as well as invariance of output activations, hold under certain conditions. Second, we connect the resulting equivariant capsule networks with work from the field of group convolutional networks. Through this connection, we provide intuitions of how both methods relate and are able to combine the strengths of both approaches in one deep neural network architecture. The resulting framework allows sparse evaluation of the group convolution operator, provides control over specific equivariance and invariance properties, and can use routing by agreement instead of pooling operations. In addition, it is able to provide interpretable and equivariant representation vectors as output capsules, which disentangle evidence of object existence from its pose. |
Simplified Adaptive Spatial Modulation for Limited-Feedback MIMO Systems | Adaptive spatial modulation (ASM), where the modulation orders assigned to the transmit antennas are dynamically adapted to the changing channel conditions, is a new limited-feedback multiple-input-multiple-output (MIMO) transmission technique. However, the exhaustive search for the optimal modulation order vector in ASM requires very high computational complexity and feedback load. In this paper, we propose two simplified ASM (SASM) schemes that reduce the high computational complexity and feedback load with negligible performance loss. More specifically, a candidate-reduction-based ASM (CR-ASM) scheme that exploits the candidate selection probability is first developed to shrink the search space; CR-ASM only considers the candidates with high probability. To further reduce the complexity, another SASM scheme that uses a one-bit reallocation (OBRA) algorithm for modulation order assignment is proposed. Regardless of the number of transmit antennas, OBRA only needs to consider the transmit antenna pair with the highest correlation and select their optimum orders among four possible candidates, which significantly reduces the search complexity. Simulation results show that the proposed schemes substantially reduce computational cost and feedback load while maintaining the benefits of conventional ASM, particularly for a high number of transmit antennas. |
Does Governance Travel Around the World? Evidence from Institutional Investors | We examine whether institutional investors affect corporate governance by analyzing portfolio holdings of institutions in companies from 23 countries during the period 2003–2008. We find that firm-level governance is positively associated with international institutional investment. Changes in institutional ownership over time positively affect subsequent changes in firm-level governance, but the opposite is not true. Foreign institutions and institutions from countries with strong shareholder protection play a role in promoting governance improvements outside of the U.S. Institutional investors affect not only which corporate governance mechanisms are in place, but also outcomes. Firms with higher institutional ownership are more likely to terminate poorly performing Chief Executive Officers (CEOs) and exhibit improvements in valuation over time. Our results suggest that international portfolio investment by institutional investors promotes good corporate governance practices around the world. |
Dorsal capsular imbrication for the treatment of dorsal instability of the distal radioulnar joint | Objective: to stabilize the distal radioulnar joint (DRUJ) by performing dorsal capsular imbrication in patients presenting with dorsal instability, with the goals of reducing pain and preventing posttraumatic arthrosis. Indications: posttraumatic dorsal instability of the DRUJ with a clinically missing endpoint on translation in the DRUJ, or subluxation during active forearm rotation; also indicated in cases in which other stabilizing techniques, such as sutures of the triangular fibrocartilage complex, have failed. Contraindications: DRUJ arthrosis and previous surgical interventions in the capsule area of the DRUJ; instabilities due to osseous causes (malunited fractures and pseudarthroses) should already have been treated. Surgical technique: dorsal approach and opening of the 5th extensor compartment to expose the dorsal joint capsule of the DRUJ. The capsule is divided longitudinally, retaining sufficient tissue at the radial and ulnar borders for imbrication, and 2 U-shaped sutures of FiberWire suture material are placed. The malposition is reduced and the forearm positioned in supination. The prepared capsule sutures are tightened and the retinaculum is closed with a running resorbable suture. Postoperative management: a long-arm cast with the forearm in supination for 4 weeks, followed by a forearm splint limiting pronation/supination to 45° for a further 4 weeks. Full loading of the wrist is allowed after 12 weeks. Results: the subjective and functional outcomes of 20 patients who received capsular imbrication with this technique were good, with no significant complications. The mean postoperative DASH score ("disabilities of the arm, shoulder and hand") was 15.8 points. Of the 20 patients, 17 (85%) had a reduction of pain. Symptoms of DRUJ instability were reduced in 18 patients (90%). Pronation/supination of the wrist was not restricted postoperatively. |
Exploring aesthetical gameplay design patterns: camaraderie in four games | This paper explores how a vocabulary supporting design-related discussions of gameplay preferences can be developed. Using the preference for experiencing camaraderie as an example, we have analyzed four games: the board games Space Alert and Battlestar Galactica, the massively multiplayer online game World of Warcraft, and the cooperative FPS series Left 4 Dead. Through a combination of the MDA model, which describes how game mechanics give rise to game aesthetics via game dynamics, and the concept of aesthetic ideals in gameplay, we present gameplay design patterns related to achieving camaraderie. We argue that some of these patterns can be seen as aesthetic gameplay design patterns in that they are closely related to aesthetic ideals. As a consequence, gameplay design pattern collections that include patterns related to all levels of the MDA model can be used as design tools when aiming for certain gameplay aesthetics. |
Design of a 3D printed hand prosthesis actuated by nylon 6-6 polymer based artificial muscles | The majority of arm amputees live in developing countries and cannot afford prostheses beyond cosmetic hands with simple grippers. Customized hand prostheses with high performance are too expensive for the average arm amputee. Currently, commercially available hand prostheses use costly and heavy DC motors for actuation. This paper presents an inexpensive hand prosthesis that uses a 3D-printable design to reduce the cost of customizable parts, together with a novel electro-thermal actuator based on nylon 6-6 polymer muscles. The prosthetic hand was tested and found to be able to grasp a variety of shapes (sphere, cylinder, cube, and card) and other commonly used tools in 100% of the trials. Grip times for each object were repeatable, with small standard deviations. With a low estimated actuation material cost of $170, this prosthesis has the potential to serve as a low-cost, high-performance system. |
Factorization tricks for LSTM networks | We present two simple ways of reducing the number of parameters and accelerating the training of large Long Short-Term Memory (LSTM) networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to near state-of-the-art perplexity while using significantly fewer RNN parameters. |
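A minimal numpy sketch of the first trick; the layer sizes and inner rank are illustrative assumptions, not the paper's settings. A standard LSTM maps the concatenated input and previous hidden state to four gate pre-activations with one large matrix; factorizing that matrix into two thin ones cuts the parameter count:

```python
import numpy as np

n, m, r = 1024, 1024, 256   # hidden size, input size, inner rank (illustrative)

full_params = 4 * n * (n + m)          # standard LSTM: W is (4n) x (n + m)
fact_params = r * (n + m) + 4 * n * r  # factorized: W ~= W2 @ W1 with inner rank r

def factorized_gates(x, h, W1, W2, b):
    """Gate pre-activations with the factorized matrix: W2 @ (W1 @ [x; h]) + b."""
    z = W1 @ np.concatenate([x, h])    # project into the small r-dimensional bottleneck
    return W2 @ z + b                  # expand to the 4n gate pre-activations

print(full_params, fact_params)        # 8,388,608 vs. 1,572,864 parameters
```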
Macro to Micro: Country Exposures, Firm Fundamentals and Stock Returns | We outline a systematic approach to incorporating macroeconomic information into firm-level forecasting from the perspective of an equity investor. Using a global sample of 198,315 firm-years over the 1998-2010 time period, we find that combining firm-level exposures to countries (via geographic segment data) with forecasts of country-level performance generates superior forecasts of firm fundamentals. This result is particularly evident for purely domestic firms. We further find that this forecasting benefit is associated with future excess stock returns. These relations are stronger after periods of higher dispersion in expected country-level performance. JEL classification: G12; G14; M41 |
Estimating customer service in a two-location continuous review inventory model with emergency transshipments | In this paper, an approximate analytical two-location inventory transshipment model is developed that combines the popular order-quantity, reorder-point (Q, R) continuous review ordering policy with a third parameter, the hold-back amount, which limits the level of outgoing transshipments. The degree to which transshipments improve both Type I (no-stockout probability) and Type II (fill rate) customer service levels can be calculated using the model. Simulation studies conducted to test the validity of the approximations in the analytical model indicate that it performs very well over a wide range of inputs. |
Automatic Generation of Game Level Solutions as Storyboards | Game programmers rely on artificial intelligence techniques to encode characters' behaviors initially specified by game designers. Although significant efforts have been made to assist their collaboration, the formalization of behaviors remains a time-consuming process during the early stages of game development. We propose an authoring tool allowing game designers to formalize, visualize, modify, and validate game level solutions in the form of automatically generated storyboards. This system uses planning techniques to produce a level solution consistent with gameplay constraints. The main planning agent corresponds to the player character, and the system uses the game actions as planning operators and level objectives as goals to plan the level solutions. Generated solutions are presented as 2-D storyboards similar to comic strips. We present in this paper the first version of a fully implemented prototype as well as examples of generated storyboards, adapted from the original design documents of the blockbuster game Hitman. |
Tandem dosing of samarium-153 ethylenediamine tetramethylene phosphoric acid with stem cell support for patients with high-risk osteosarcoma. | BACKGROUND
Samarium-153 ethylenediamine tetramethylene phosphoric acid (153Sm-EDTMP) is a radiopharmaceutical that has been used to treat osteosarcoma. The authors conducted a phase 2 study to test safety and response of high-risk osteosarcoma to tandem doses of 153Sm-EDTMP and to determine correlation between radiation delivered by low and high administered activities.
METHODS
Patients with recurrent, refractory osteosarcoma detectable on standard 99mTc bone scan received a low dose of 153Sm-EDTMP (37.0-51.8 MBq/kg), followed upon count recovery by a second, higher dose (222 MBq/kg). Fourteen days later, patients were rescued with autologous hematopoietic stem cells. The authors assessed response to therapy, performed dosimetry to determine the relationship between administered activity and tumor absorbed dose, and investigated whether changes in 2-(fluorine-18) fluoro-2-deoxy-d-glucose (18F-FDG) tumor uptake upon hematologic recovery reflected disease response.
RESULTS
Nine patients were given tandem doses of 153Sm-EDTMP; 2 received only the initial dose because of disease progression. Six patients experienced radiographic disease stabilization, but this was not considered a response, so the study was terminated early. There was a linear relationship between administered activity and tumor absorbed dose, but there was no correlation between change in 18F-FDG positron emission tomography tumor uptake and tumor absorbed dose or time to progression. The median time to progression for the entire group was 79 days.
CONCLUSIONS
Tandem doses of 153Sm-EDTMP were safe for this cohort of heavily pretreated patients with very high-risk disease. The strong correlation between absorbed dose and administered activity within each evaluable patient provides a methodology to individually tailor tandem doses of this agent. |
Maritime Traffic Monitoring Based on Vessel Detection, Tracking, State Estimation, and Trajectory Prediction | Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper. |
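As a concrete illustration of the state-estimation and trajectory-prediction part, here is a minimal Kalman filter sketch for a constant-velocity vessel model. The paper proposes an extended Kalman filter, presumably with a nonlinear vessel model, so this linear special case and all noise parameters are illustrative assumptions:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],      # constant-velocity model, state = [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1.]])
H = np.array([[1, 0, 0, 0],       # only position is observed (e.g., from the detector)
              [0, 1, 0, 0.]])
Q = 0.01 * np.eye(4)              # process noise covering mild maneuvers
R = 0.5 * np.eye(2)               # measurement noise

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with position measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

def predict_trajectory(x, steps):
    """Propagate the motion model forward to predict future vessel positions."""
    pts = []
    for _ in range(steps):
        x = F @ x
        pts.append(x[:2].copy())
    return pts
```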
A STUDY OF TRANSLATION ERROR RATE WITH TARGETED HUMAN ANNOTATION | We define a new, intuitive measure for evaluating machine translation output that avoids the knowledge intensiveness of more meaning-based approaches, and the labor-intensiveness of human judgments. Translation Error Rate (TER) measures the amount of editing that a human would have to perform to change a system output so it exactly matches a reference translation. We also compute a human-targeted TER (or HTER), where the minimum TER of the translation is computed against a human ‘targeted reference’ that preserves the meaning (provided by the reference translations) and is fluent, but is chosen to minimize the TER score for a particular system output. We show that: (1) The single-reference variant of TER correlates as well with human judgments of MT quality as the four-reference variant of BLEU; (2) The human-targeted HTER yields a 33% error-rate reduction and is shown to be very well correlated with human judgments; (3) The four-reference variant of TER and the single-reference variant of HTER yield higher correlations with human judgments than BLEU; (4) HTER yields higher correlations with human judgments than METEOR or its human-targeted variant (HMETEOR); and (5) The four-reference variant of TER correlates as well with a single human judgment as a second human judgment does, while HTER, HBLEU, and HMETEOR correlate significantly better with a human judgment than a second human judgment does. This work has been supported, in part, by BBNT contract number 9500006806. |
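A toy sketch of the TER computation on whitespace tokens. Real TER also allows block shifts of word sequences at unit cost; the sketch below omits shifts and computes only the insertion/deletion/substitution core, normalized by reference length:

```python
def ter_no_shifts(hyp, ref):
    """Levenshtein distance over tokens, divided by reference length."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i                       # delete all hypothesis tokens
    for j in range(len(r) + 1):
        d[0][j] = j                       # insert all reference tokens
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            sub = d[i - 1][j - 1] + (h[i - 1] != r[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(h)][len(r)] / max(len(r), 1)

print(ter_no_shifts("the cat sat on mat", "the cat sat on the mat"))  # 1/6 ~= 0.167
```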
A customized appliance for molar uprighting and space regaining. | Adjunctive orthodontic treatment is defined as tooth movement carried out to facilitate other dental procedures that may be required to control disease, restore function, or enhance appearance. As an example, an adult patient with a missing first molar is commonly treated by uprighting the adjacent second molar for eventual prosthetic replacement.1-5 Differential diagnosis, force selection, and appliance design are key factors in successful molar uprighting.6 Although early mechanics tended to cause molar extrusion or premolar intrusion, more recently developed springs can upright molars without these undesirable side effects.7,8 The need for simpler appliances to manage relatively limited adjunctive-orthodontic cases cannot be overemphasized. This article presents a quick and esthetic option for second-molar uprighting with a custom-designed space regainer. [Fig. 1. A, 25-year-old male patient with mesially tipped upper right second molar before treatment. B, after removal of lower left third molar.] |
A Spoofing Detection Method for Civilian L1 GPS and the E1-B Galileo Safety of Life Service | The work presented here describes an effective method for signal authentication and spoofing detection for civilian GNSS receivers using the GPS L1 C/A signal and the Galileo E1-B Safety of Life service. The paper discusses various spoofing attack profiles and how the proposed method is able to detect these attacks. The method is relatively low cost and can be suitable for numerous mass-market applications. |
Finer Grained Entity Typing with TypeNet | We consider the challenging problem of entity typing over an extremely fine grained set of types, wherein a single mention or entity can have many simultaneous and often hierarchically-structured types. Despite the importance of the problem, there is a relative lack of resources in the form of fine-grained, deep type hierarchies aligned to existing knowledge bases. In response, we introduce TypeNet, a dataset of entity types consisting of over 1941 types organized in a hierarchy, obtained by manually annotating a mapping from 1081 Freebase types to WordNet. We also experiment with several models comparable to state-of-the-art systems and explore techniques to incorporate a structure loss on the hierarchy with the standard mention typing loss, as a first step towards future research on this dataset. |
Markov Logic Networks for Optical Chemical Structure Recognition | Optical chemical structure recognition is the problem of converting a bitmap image containing a chemical structure formula into a standard structured representation of the molecule. We introduce a novel approach to this problem based on the pipelined integration of pattern recognition techniques with probabilistic knowledge representation and reasoning. Basic entities and relations (such as textual elements, points, lines, etc.) are first extracted by a low-level processing module. A probabilistic reasoning engine based on Markov logic, embodying chemical and graphical knowledge, is subsequently used to refine these pieces of information. An annotated connection table of atoms and bonds is finally assembled and converted into a standard chemical exchange format. We report a successful evaluation on two large image data sets, showing that the method compares favorably with the current state-of-the-art, especially on degraded low-resolution images. The system is available as a web server at http://mlocsr.dinfo.unifi.it. |
Deep Structured Energy Based Models for Anomaly Detection | In this paper, we attack the anomaly detection problem by directly modeling the data distribution with deep architectures. We propose deep structured energy-based models (DSEBMs), where the energy function is the output of a deterministic deep neural network with structure. We develop novel model architectures that integrate EBMs with different types of data, such as static data, sequential data, and spatial data, and apply the appropriate model architecture to match the data structure. Our training algorithm is built upon the recent development of score matching (Hyvärinen, 2005), which connects an EBM with a regularized autoencoder, eliminating the need for complicated sampling methods. Statistically sound decision criteria for anomaly detection can be derived from the perspective of the energy landscape of the data distribution. We investigate two such criteria: the energy score and the reconstruction error. Extensive empirical studies on benchmark tasks demonstrate that our proposed model consistently matches or outperforms all the competing methods. |
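To illustrate the two decision criteria, here is a toy numpy sketch that substitutes a closed-form linear autoencoder (PCA) for the paper's deep energy-based model; the data, the Gaussian-residual energy, and all constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0]])   # inliers near a line
X += 0.05 * rng.normal(size=X.shape)

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:1]                                               # 1-D code (linear autoencoder)

def reconstruction_error(x):
    """Criterion 1: squared error between x and its reconstruction."""
    z = (x - mu) @ W.T
    return np.sum((x - (z @ W + mu)) ** 2, axis=-1)

def energy_score(x, sigma2=0.05 ** 2):
    """Criterion 2 (sketch): a Gaussian-residual energy, ~ -log p(x) up to constants;
    in a DSEBM the energy would be the network output instead."""
    return reconstruction_error(x) / (2 * sigma2)

anomaly = np.array([[3.0, -1.0]])
print(reconstruction_error(X[:3]), reconstruction_error(anomaly))  # anomaly scores far higher
```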
Low-Cost CPW Meander Inductors Utilizing Ink-Jet Printing on Flexible Substrate for High-Frequency Applications | This paper describes the design and fabrication of low-cost coplanar waveguide (CPW) miniature meander inductors. Inductors are fabricated on a flexible plastic polyimide foil in ink-jet printed technology with silver nanoparticle ink in a single layer. For the first time, the detailed characterization and simulation of CPW inductors in this technology is reported. The inductors are developed with impressive measured self-resonance frequency up to 18.6 GHz. The 2.107-nH inductor measures only 1 mm × 1.7 mm × 0.075 mm and demonstrates a high level of miniaturization in ink-jet printing technology. The measured response characteristics are in excellent agreement with the predicted simulation response. |
Neural Machine Translation of Text from Non-Native Speakers | Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data. In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors. In combination with an automatic grammar error correction system, we can recover 1.9 BLEU out of 3.1 BLEU lost due to grammatical errors. We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors. |
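A toy sketch of the data-augmentation idea: inject artificial grammatical errors into clean source sentences before training. The specific error types and rates here are illustrative assumptions, not the paper's exact error model:

```python
import random

ARTICLES = {"a", "an", "the"}

def add_grammar_noise(sentence, p=0.15, seed=None):
    """Randomly drop articles, duplicate a word, or swap adjacent words."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        r = rng.random()
        if tok.lower() in ARTICLES and r < p:
            continue                        # article-deletion error
        out.append(tok)
        if r > 1 - p / 2:
            out.append(tok)                 # word-duplication error
    if len(out) > 2 and rng.random() < p:
        i = rng.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]   # adjacent-word swap
    return " ".join(out)

print(add_grammar_noise("the cat sat on the mat", seed=3))
```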
Destruction of Fermion Zero Modes on Cosmic Strings | I examine the existence of zero-energy fermion solutions (zero modes) on cosmic strings in an SO(10) grand unified theory. The current-carrying capability of a cosmic string formed at one phase transition can be modified at subsequent phase transitions. I show that the zero modes may be destroyed and the conductivity of the string altered. I discuss the cosmological implications of this, and show that it allows vorton bounds to be relaxed. |
Evaluating the Effectiveness of China's Financial Reform: The Efficiency of China's Domestic Banks | This paper estimates the cost and profit efficiency of the Chinese domestic banking sector to evaluate the effectiveness of China's financial reforms since 1978. We use the performance of foreign banks as the benchmark because foreign banks, subject to intensive worldwide competition, are perceived as possessing superior governance structures and organization, more advanced technologies, and a better-trained labor force. On the other hand, competition in China's banking sector takes place mainly through non-price measures, putting foreign banks at a disadvantage. We find that domestic banks have gradually caught up with the cost advantage of foreign banks in a manner consistent with the increased competitive pressure. At the same time, the profit advantage of domestic banks over foreign banks is widening because of institutional arrangements, cultural and social networks, and economies of profit scope and revenue scale. |
Complementary Cooperation Algorithm Based on DEKF Combined With Pattern Recognition for SOC/Capacity Estimation and SOH Prediction | Differences in electrochemical characteristics among Li-ion batteries result in erroneous state-of-charge (SOC)/capacity estimation and state-of-health (SOH) prediction when using the existing dual extended Kalman filter (DEKF) algorithm. This paper presents a complementary cooperation algorithm based on the DEKF combined with pattern recognition, applying a Hamming neural network to the identification of suitable battery model parameters for improved SOC/capacity estimation and SOH prediction. Two kinds of patterns, the discharging/charging voltage pattern (DCVP) and the capacity pattern (CP), were measured, together with the battery parameters, as representative patterns. Through statistical analysis, the Hamming network identifies the representative DCVP and CP that most closely match those of the arbitrary battery to be measured. The model parameters of the representative battery are then applied to SOC/capacity estimation and SOH prediction of the arbitrary battery using the DEKF. This avoids the need for repeated parameter measurement. |
Safety of sublingual-swallow immunotherapy in children aged 3 to 7 years. | BACKGROUND
The minimum age to start specific immunotherapy with inhalant allergens in children has not been clearly established, and position papers discourage its use in children younger than 5 years.
OBJECTIVE
To assess the safety of high-dose sublingual-swallow immunotherapy (SLIT) in a group of children younger than 5 years.
METHODS
Sixty-five children (51 boys and 14 girls; age range, 38-80 months; mean +/- SD age, 60 +/- 10 months; median age, 60 months) were included in this observational study. They were treated with SLIT with a build-up phase of 11 days, culminating in a top dose of 300 IR (index of reactivity), and a maintenance phase of 300 IR 3 times a week. The allergens used were house dust mites in 42 patients, grass pollen in 11 patients, olive pollen in 5 patients, Parietaria pollen in 4 patients, and cypress pollen in 3 patients. All adverse reactions and changes in the treatment schedule were compared in 2 subgroups: children 38 to 60 months old and children 61 to 80 months old.
RESULTS
The average cumulative dose of SLIT was 36,900 IR. Adverse reactions were observed in 11 children, none of them severe enough to require discontinuation of immunotherapy. Six reactions occurred in the 60 months or younger age group and 7 in the older than 60 months age group, with no differences between these 2 groups.
CONCLUSION
High-dose immunotherapy in children younger than 5 years does not cause more adverse reactions than in children aged 5 to 7 years. There is no reason to refrain from studies on the safety and efficacy of these preparations in young children. |
Penile squamous cell carcinoma with urethral extension treated with Mohs micrographic surgery. | Penile squamous cell carcinoma (SCC) with considerable urethral extension is uncommon and difficult to manage. It often is resistant to less invasive and nonsurgical treatments and frequently results in partial or total penectomy, which can lead to cosmetic disfigurement, functional issues, and psychological distress. We report a case of penile SCC in situ with considerable urethral extension with a focus of cells suspicious for moderately well-differentiated and invasive SCC that was treated with Mohs micrographic surgery (MMS). A review of the literature on penile tumors treated with MMS also is provided. |
Scalable OFDMA Physical Layer in IEEE 802.16 WirelessMAN | The concept of scalability was introduced to the IEEE 802.16 WirelessMAN Orthogonal Frequency Division Multiplexing Access (OFDMA) mode by the 802.16 Task Group e (TGe). A scalable physical layer enables standard-based solutions to deliver optimum performance in channel bandwidths ranging from 1.25 MHz to 20 MHz with fixed subcarrier spacing for both fixed and portable/mobile usage models, while keeping the product cost low. The architecture is based on a scalable subchannelization structure with variable Fast Fourier Transform (FFT) sizes according to the channel bandwidth. In addition to variable FFT sizes, the specification supports other features such as Advanced Modulation and Coding (AMC) subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency uplink subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, and coverage enhancing safety channels, as well as other OFDMA default features such as different subcarrier allocations and diversity schemes. The purpose of this paper is to provide a brief tutorial on the IEEE 802.16 WirelessMAN OFDMA with an emphasis on scalable OFDMA. INTRODUCTION The IEEE 802.16 WirelessMAN standard [1] provides specifications for an air interface for fixed, portable, and mobile broadband wireless access systems. The standard includes requirements for high data rate Line of Sight (LOS) operation in the 10-66 GHz range for fixed wireless networks as well as requirements for Non Line of Sight (NLOS) fixed, portable, and mobile systems operating in sub-11 GHz licensed and licensed-exempt bands. Because of its superior performance in multipath fading wireless channels, Orthogonal Frequency Division Multiplexing (OFDM) signaling is recommended in the OFDM and WirelessMAN OFDMA Physical (PHY) layer modes of the 802.16 standard for operation in sub-11 GHz NLOS applications. OFDM technology has been recommended in other wireless standards such as Digital Video Broadcasting (DVB) [2] and Wireless Local Area Networking (WLAN) [3]-[4], and it has been successfully implemented in the compliant solutions. Amendments for PHY and Medium Access Control (MAC) layers for mobile operation are being developed (working drafts [5] are being debated at the time of publication of this paper) by TGe of the 802.16 Working Group. The task group's responsibility is to develop enhancement specifications to the standard to support Subscriber Stations (SS) moving at vehicular speeds and thereby specify a system for combined fixed and mobile broadband wireless access. Functions to support optional PHY layer structures, mobile-specific MAC enhancements, higher-layer handoff between Base Stations (BS) or sectors, and security features are among those specified. Operation in mobile mode is limited to licensed bands suitable for mobility between 2 and 6 GHz. Unlike many other OFDM-based systems such as WLAN, the 802.16 standard supports variable bandwidth sizes between 1.25 and 20 MHz for NLOS operations. This feature, along with the requirement for support of combined fixed and mobile usage models, makes the need for a scalable design of OFDM signaling inevitable.
More specifically, neither one of the two OFDM-based modes of the 802.16 standard, WirelessMAN OFDM and OFDMA (without the scalability option), can deliver the kind of performance required for operation in vehicular-mobility multipath fading environments for all bandwidths in the specified range without scalability enhancements that guarantee fixed subcarrier spacing for OFDM signals. The concept of scalable OFDMA is introduced to the IEEE 802.16 WirelessMAN OFDMA mode by the 802.16 TGe and has been the subject of many contributions to the standards committee [6]-[9]. Other features such as AMC subchannels, Hybrid Automatic Repeat Request (H-ARQ), high-efficiency Uplink (UL) subchannel structures, Multiple-Input-Multiple-Output (MIMO) diversity, enhanced Advanced Antenna Systems (AAS), and coverage enhancing safety channels were introduced [10]-[14] simultaneously to enhance the coverage and capacity of mobile systems while providing the tools to trade off mobility with capacity. The rest of the paper is organized as follows. In the next section we cover multicarrier system requirements, drivers of scalability, and design tradeoffs. We follow that with a discussion in the following six sections of the OFDMA frame structure, subcarrier allocation modes, Downlink (DL) and UL MAP messaging, diversity options, ranging in OFDMA, and channel coding options. Note that although the IEEE P802.16-REVd was ratified shortly before the submission of this paper, the IEEE P802.16e was still in draft stage at the time of submission, and the contents of this paper therefore are based on proposed contributions to the working group. MULTICARRIER DESIGN REQUIREMENTS AND TRADEOFFS A typical early step in the design of an Orthogonal Frequency Division Multiplexing (OFDM)-based system is a study of subcarrier design and the size of the Fast Fourier Transform (FFT), where the optimal operational point balancing protection against multipath, Doppler shift, and design cost/complexity is determined. For this, we use Wide-Sense Stationary Uncorrelated Scattering (WSSUS), a widely used method to model time-varying fading wireless channels both in the time and frequency domains using stochastic processes. Two main elements of the WSSUS model are briefly discussed here: Doppler spread and coherence time of the channel; and multipath delay spread and coherence bandwidth. A maximum speed of 125 km/hr is used here in the analysis for support of mobility. With the exception of high-speed trains, this provides a good coverage of vehicular speeds in the US, Europe, and Asia. The maximum Doppler shift [15] corresponding to operation at 3.5 GHz (selected as a middle point in the 2-6 GHz frequency range) is given by Equation (1): $f_m = \nu / \lambda = (35\ \mathrm{m/s}) / (0.086\ \mathrm{m}) \approx 408\ \mathrm{Hz}$. The worst-case Doppler shift value for 125 km/hr (35 m/s) would be ~700 Hz for operation at the 6 GHz upper limit specified by the standard. Using a 10 kHz subcarrier spacing, the Inter Channel Interference (ICI) power corresponding to the Doppler shift calculated in Equation (1) can be shown [16] to be limited to ~-27 dB. The coherence time of the channel, a measure of time variation in the channel, corresponding to the Doppler shift specified above, is calculated in Equation (2) [15]. |
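The Doppler arithmetic in this passage is easy to reproduce; the short Python check below recomputes the two worked numbers from the text (the 35 m/s rounding of 125 km/hr is taken from the text itself):

```python
C = 3e8                                   # speed of light, m/s

def max_doppler(speed_mps, carrier_hz):
    """Maximum Doppler shift: f_m = v / lambda = v * f_c / c."""
    return speed_mps * carrier_hz / C

v = 35.0                                  # 125 km/hr, rounded as in the text
print(max_doppler(v, 3.5e9))              # ~408 Hz at the 3.5 GHz midpoint
print(max_doppler(v, 6.0e9))              # ~700 Hz at the 6 GHz upper limit
```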
A potentiostat circuit for multiple implantable electrochemical sensors | This work proposes a potentiostat circuit for multiple implantable sensor applications. Implantable sensors play a vital role in continuous in situ monitoring of biological phenomena in a real-time health care monitoring system. In the proposed work a three-electrode electrochemical sensing system has been employed. In this system a fixed potential difference between the working and the reference electrodes is maintained using a potentiostat to generate a current signal in the counter electrode which is proportional to the concentration of the analyte. This potential difference between the working and the reference electrodes can be changed to detect different analytes. The designed low-power potentiostat consumes only 66 µW with a 2.5 V power supply, which makes it highly suitable for low-power implantable sensor applications. All the circuits are designed and fabricated in a 0.35-micron standard CMOS process.
On the Comprehension of Program Comprehension | Research in program comprehension has evolved considerably over the past decades. However, only little is known about how developers practice program comprehension in their daily work. This article reports on qualitative and quantitative research to comprehend the strategies, tools, and knowledge used for program comprehension. We observed 28 professional developers, focusing on their comprehension behavior, strategies followed, and tools used. In an online survey with 1,477 respondents, we analyzed the importance of certain types of knowledge for comprehension and where developers typically access and share this knowledge.
We found that developers follow pragmatic comprehension strategies depending on context. They try to avoid comprehension whenever possible and often put themselves in the role of users by inspecting graphical interfaces. Participants confirmed that standards, experience, and personal communication facilitate comprehension. The team size, its distribution, and open-source experience influence their knowledge sharing and access behavior. While face-to-face communication is preferred for accessing knowledge, knowledge is frequently shared in informal comments.
Our results reveal a gap between research and practice, as we did not observe any use of comprehension tools and developers seem to be unaware of them. Overall, our findings call for reconsidering the research agendas towards context-aware tool support. |
HIGH MANEUVERING TARGET TRACKING USING A NOVEL HYBRID KALMAN FILTER-FUZZY LOGIC ARCHITECTURE | In this paper, a fast target maneuver detection technique and a highly accurate tracking scheme are proposed using a new hybrid Kalman filter-fuzzy logic architecture. Due to the stressful conditions of the target tracking problem, such as inaccurate detection and target maneuvers, most existing trackers do not deliver the desired performance in all situations. In practice, while conventional Kalman filters (KF) perform well in tracking a target with constant velocity, their performance may be seriously degraded in the presence of maneuvers. To obtain an accurate target tracking system in such a stressful environment, fuzzy logic-based algorithms with intelligent adaptation capabilities have recently been proposed. Although these methods yield reasonable performance in tracking maneuvering targets, their accuracy in non-maneuvering mode is not satisfactory. In this research, based on information about the target maneuver dynamics, a new hybrid tracker (HT) is introduced. The proposed algorithm combines the two methodologies into one architecture synergistically: the KF is used when the target velocity is approximately constant, whereas the fuzzy estimator is used when the target maneuvers. Simulation results show that the proposed method is superior to some conventional approaches in tracking accuracy.
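A minimal sketch of the hybrid switching idea follows. The paper's fuzzy maneuver detector is replaced here by a simple normalized-innovation test, and all matrices and thresholds are illustrative assumptions, not the authors' design:

```python
import numpy as np

# Constant-velocity Kalman filter with a residual-based maneuver test that
# stands in for the paper's fuzzy estimator. Values are illustrative only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 0.01 * np.eye(2)                    # process noise
R = np.array([[1.0]])                   # measurement noise

def kf_step(x, P, z):
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                  # innovation (residual)
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    nis = (y.T @ np.linalg.inv(S) @ y).item()   # normalized innovation squared
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred, nis

x, P = np.zeros((2, 1)), np.eye(2)
for z in ([[1.0]], [[2.1]], [[8.0]]):   # last measurement suggests a maneuver
    x, P, nis = kf_step(x, P, np.array(z))
    mode = "fuzzy estimator (maneuver)" if nis > 9.0 else "KF (constant velocity)"
    print(f"NIS = {nis:5.1f} -> switch to {mode}")
```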
Symbol Recognition: Current Advances and Perspectives | The recognition of symbols in graphic documents is an intensive research activity in the community of pattern recognition and document analysis. A key issue in the interpretation of maps, engineering drawings, diagrams, etc. is the recognition of domain-dependent symbols according to a symbol database. In this work we first review the most outstanding symbol recognition methods from two different points of view: application domains and pattern recognition methods. In the second part of the paper, open and unaddressed problems involved in symbol recognition are described, analyzing their current state of the art and discussing future research challenges. Thus, issues such as symbol representation, matching, segmentation, learning, scalability of recognition methods and performance evaluation are addressed in this work. Finally, we discuss the perspectives of symbol recognition concerning new paradigms such as user interfaces in handheld computers or document database and WWW indexing by graphical content.
Initializing Convolutional Filters with Semantic Features for Text Classification | The crux of our initialization technique is n-gram selection, which helps neural networks extract important n-gram features from the beginning of the training process. In the following tables, we illustrate the selected n-grams for different classes and datasets to convey the technique intuitively. Since MR, SST-1, SST-2, CR, and MPQA are all sentiment classification datasets, we only report the selected n-grams of SST-1 (Table 1). N-grams selected by our method on SUBJ and TREC are shown in Table 2 and Table 3.
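As a rough illustration of what n-gram selection can look like, the sketch below ranks n-grams by a smoothed log-odds score for one class; the paper's actual scoring criterion may differ, so treat this purely as an assumed stand-in:

```python
from collections import Counter
from math import log

# Sketch: pick the n-grams most indicative of a target class, one plausible
# way to seed convolutional filters with "semantic" n-gram features.
def top_ngrams(docs, labels, target, n=2, k=3):
    pos, neg = Counter(), Counter()
    for doc, y in zip(docs, labels):
        toks = doc.lower().split()
        grams = [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]
        (pos if y == target else neg).update(grams)
    vocab = set(pos) | set(neg)
    score = {g: log((pos[g] + 1) / (neg[g] + 1)) for g in vocab}  # smoothed log-odds
    return sorted(vocab, key=score.get, reverse=True)[:k]

docs = ["a truly moving film", "dull and truly tedious", "moving and heartfelt"]
print(top_ngrams(docs, labels=[1, 0, 1], target=1))   # positive-class bigrams
```

A convolutional filter could then be initialized from the embeddings of each selected n-gram rather than from random noise.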
Velocity based artificial bee colony algorithm for high dimensional continuous optimization problems | Artificial bee colony (ABC) is a swarm optimization algorithm which has been shown to be more effective than other population-based algorithms such as the genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). Since it was invented, it has received significant interest from researchers in different fields because it has few control parameters, high global search ability and is easy to implement. Although ABC is good at exploration, its main drawback is poor exploitation, which slows convergence in some cases. Inspired by particle swarm optimization, we propose a modified ABC algorithm called VABC to overcome this insufficiency by applying a new search equation in the onlooker phase, which uses the PSO search strategy to guide the search for candidate solutions. The experimental results on numerical benchmark functions show that VABC performs well compared with PSO and ABC. Moreover, the performance of the proposed algorithm is also compared with those of state-of-the-art hybrid methods, and the results demonstrate that the proposed method has a higher convergence speed and better search ability for almost all functions. © 2014 Elsevier Ltd. All rights reserved.
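One plausible reading of a PSO-guided onlooker update is sketched below; the coefficient ranges are illustrative assumptions, not the exact VABC equation:

```python
import random

# Sketch of a PSO-style onlooker update: perturb one dimension as in classic
# ABC, plus an attraction term toward the global best solution.
def onlooker_candidate(x_i, x_k, gbest, j):
    phi = random.uniform(-1.0, 1.0)     # classic ABC perturbation weight
    psi = random.uniform(0.0, 1.5)      # assumed PSO-style attraction weight
    v = list(x_i)
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v

x_i, x_k, gbest = [0.5, -1.2], [0.1, 0.3], [0.0, 0.0]
print(onlooker_candidate(x_i, x_k, gbest, j=1))   # nudged toward gbest in dim 1
```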
DDoS defense system for web services in a cloud environment | Recently, a new kind of vulnerability has surfaced: application layer Denial-of-Service (DoS) attacks targeting web services. These attacks aim at consuming resources by sending Simple Object Access Protocol (SOAP) requests that contain malicious XML content. These requests cannot be detected on the network or transport (TCP/IP) layer, as they appear as legitimate packets. Until now, there is no web service security specification that addresses this problem. Moreover, the current WS-Security standard induces crucial additional vulnerabilities threatening the availability of certain web service implementations. First, this paper introduces an attack-generating tool to test and confirm previously reported vulnerabilities. The results indicate that the attacks have a devastating impact on the web service availability, even whilst utilizing an absolute minimum of attack resources. Since these highly effective attacks can be mounted with relative ease, it is clear that defending against them is essential, given the growth of cloud and web services. Second, this paper proposes an intelligent, fast and adaptive system for detecting XML and HTTP application layer attacks. The intelligent system works by extracting several features and using them to construct a model for typical requests. Finally, outlier detection can be used to detect malicious requests. Furthermore, the intelligent defense system is capable of detecting spoofing and regular flooding attacks. The system is designed to be inserted in a cloud environment where it can transparently protect the cloud broker and even cloud providers. For testing its effectiveness, the defense system was deployed to protect web services running on WSO2 with Axis2: the de facto standard for open source web service deployment. The proposed defense system demonstrates its capability to effectively filter out the malicious requests, whilst generating a minimal amount of overhead for the total response time. © 2014 Elsevier B.V. All rights reserved.
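The paper's exact feature set and detector are not spelled out in the abstract; the sketch below assumes a few simple per-request features and uses scikit-learn's IsolationForest as a stand-in outlier detector:

```python
from sklearn.ensemble import IsolationForest

# Features per SOAP request (assumed): [body size in bytes, XML element count,
# maximum nesting depth]. Train only on requests considered typical.
normal_requests = [[820, 40, 5], [900, 42, 6], [760, 38, 5], [880, 41, 6]]
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_requests)

incoming = [[870, 41, 6],               # looks like a typical request
            [9_000_000, 50_000, 120]]   # oversized, deeply nested XML payload
print(detector.predict(incoming))       # +1 = accept, -1 = flag as malicious
```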
Air-light estimation using haze-lines | Outdoor images taken in bad weather conditions, such as haze and fog, look faded and have reduced contrast. Recently there has been great success in single image dehazing, i.e., improving the visibility and restoring the colors from a single image. A crucial step in these methods is the calculation of the air-light color, the color of an area of the image with no objects in line-of-sight. We propose a new method for calculating the air-light. The method relies on the haze-lines prior that was recently introduced. This prior is based on the observation that the pixel values of a hazy image can be modeled as lines in RGB space that intersect at the air-light. We use Hough transform in RGB space to vote for the location of the air-light. We evaluate the proposed method on an existing dataset of real world images, as well as some synthetic and other real images. Our method performs on-par with current state-of-the-art techniques and is more computationally efficient. |
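A heavily simplified sketch of the voting idea follows: for each candidate air-light on a coarse grid, count the pixels whose direction to the candidate aligns with some sampled haze-line orientation. The grid, direction samples, and tolerance are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Coarse Hough-style voting for the air-light A in RGB space.
def vote_airlight(pixels, candidates, directions, tol=0.02):
    best, best_votes = None, -1
    for A in candidates:
        d = pixels - A                                   # pixel-to-candidate vectors
        d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        align = np.abs(d @ directions.T).max(axis=1)     # best haze-line alignment
        votes = int((align > 1.0 - tol).sum())
        if votes > best_votes:
            best, best_votes = A, votes
    return best, best_votes

rng = np.random.default_rng(0)
pixels = rng.random((500, 3))                            # stand-in hazy-image pixels
candidates = [np.full(3, g) for g in np.linspace(0.7, 1.0, 4)]  # grayish candidates
directions = rng.standard_normal((50, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
print(vote_airlight(pixels, candidates, directions))
```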
Detecting human activities in retail surveillance using hierarchical finite state machine | Cashiers in retail stores usually exhibit certain repetitive and periodic activities when processing items. Detecting such activities plays a key role in most retail fraud detection systems. In this paper, we propose a highly efficient, effective and robust vision technique to detect checkout-related primitive activities, based on a hierarchical finite state machine (FSM). Our deterministic approach uses visual features and prior spatial constraints on the hand motion to capture particular motion patterns performed in primitive activities. We also apply our approach to the problem of retail fraud detection. Experimental results on a large set of video data captured from retail stores show that our approach, while much simpler and faster, achieves significantly better results than state-of-the-art machine learning-based techniques both in detecting checkout-related activities and in detecting checkout-related fraudulent incidents. |
Economic Globalization and Transnational Terrorism | The effect of economic globalization on the number of transnational terrorist incidents within countries is analyzed statistically, using a sample of 112 countries from 1975 to 1997. Results show that trade, foreign direct investment (FDI), and portfolio investment have no direct positive effect on transnational terrorist incidents within countries and that economic developments of a country and its top trading partners reduce the number of terrorist incidents inside the country. To the extent that trade and FDI promote economic development, they have an indirect negative effect on transnational terrorism. |
BLAST Application on the GPE/UnicoreGS Grid | Sequence analysis is one of the most fundamental tasks in molecular biology. Because of the increasing number of sequences, we still need more computing power. One solution is grid environments, which make use of computing centers. In this paper we present a plug-in which enables the use of BLAST software for sequence analysis within grid environments such as UNICORE (Uniform Interface to Computing Resources) and GPE (Grid Programming Environment).
Comprehension Using Memory Networks | In this work, we investigate the task of building a Question Answering system using deep neural networks augmented with a memory component. Our goal is to implement the MemNN and its extensions described in [10] and [8] and apply it to the bAbI QA tasks introduced in [9]. While the vanilla MemNN system works on simulated datasets like bAbI, it is not sufficient to achieve satisfactory performance on real-world QA datasets like Wiki QA [6] and MCTest [5]. We explore extensions to the proposed MemNN systems to make them work on these complex datasets.
Reversibility of Fenofibrate Therapy–Induced Renal Function Impairment in ACCORD Type 2 Diabetic Participants | OBJECTIVE
To assess the reversibility of the elevation of serum creatinine levels in patients with diabetes after 5 years of continuous on-trial fenofibrate therapy.
RESEARCH DESIGN AND METHODS
An on-drug/off-drug ancillary study to the Action to Control Cardiovascular Risk in Diabetes (ACCORD) Lipid Trial to investigate posttrial changes in serum creatinine and cystatin C. Eligible participants were recruited into a prospective, nested, three-group study based on retrospective on-trial serum creatinine levels: fenofibrate case subjects (n = 321, ≥ 20% increase after 3 months of therapy); fenofibrate control subjects (n = 175, ≤ 2% increase); and placebo control subjects (n = 565). Serum creatinine and cystatin C were measured at trial end and 6-8 weeks after discontinuation of trial therapy.
RESULTS
At trial end, case subjects had the highest adjusted serum creatinine (± SE) mg/dL (1.11 ± 0.02) and the lowest adjusted estimated glomerular filtration rate (eGFR) (± SE) mL/min/1.73 m(2) (68.4 ± 1.0) versus control subjects (1.01 ± 0.02; 74.8 ± 1.3) and placebo subjects (0.98 ± 0.01; 77.8 ± 0.7). After 51 days off-drug, serum creatinine in case subjects was still higher (0.97 ± 0.02) and eGFR still lower (77.8 ± 1.0) than in control subjects (0.90 ± 0.02; 81.8 ± 1.3) but not different from placebo subjects (0.99 ± 0.01; 76.6 ± 0.7). Changes in serum cystatin C recapitulated the serum creatinine changes.
CONCLUSIONS
Participants with significant initial on-trial increases in serum creatinine (≥ 20%) returned to the same level of renal function as participants receiving placebo while participants who had ≤ 2% increase in serum creatinine had net preservation of renal function compared with the same unselected placebo reference group. The fenofibrate-associated on-trial increases in serum creatinine were reversible, and the reversal was complete after 51 days off-drug. The similarity of the cystatin C results suggests that the mechanism of this change is not specific for serum creatinine. |
Chemical composition of tomatoes depending on the stage of ripening | Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titratable acidity, vitamin C, lycopene), the taste index and maturity of three cherry tomato varieties (Sakura, Sunstream, Mathew) grown in a greenhouse and collected at different stages of ripening. The analyses showed significant differences in the mean values of the analysed parameters according to the stage of ripening and the variety. During ripening, the content of soluble solids roughly doubles in all analysed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of the Sunstream variety at the red stage. The highest total acidity, expressed as g of citric acid per 100 g, was observed at the pink stage (variety Sakura) or the breaker stage (varieties Sunstream and Mathew). The taste index of the Sakura variety was higher at all analysed ripening stages in comparison with the other varieties. This shows that the ripening stage has a significant effect on tomato biochemical composition, along with the variety.
The unitary-model-operator approach to nuclear many-body problems | Microscopic nuclear structure calculations have been performed within the framework of the unitary-model-operator approach. Ground-state and single-particle energies are calculated for nuclei around 14C, 16O and 40Ca with modern nucleon–nucleon interactions. |
An investigation on the design and performance assessment of double-PID and LQR controllers for the inverted pendulum | The widespread application of inverted pendulum principles requires better dynamic performance and steady-state performance of the inverted pendulum system. The objective of this paper is to design inverted pendulum controllers and investigate their time specification performance. Two control methods are proposed in this paper: an innovative double-PID control method and a modern LQR (linear quadratic regulator) control method. The dynamic performance and steady-state performance of the two controllers are investigated and compared. This paper shows that the LQR controller gives the inverted pendulum a faster and smoother stabilization process, with better robustness and less oscillation, than the double-PID controller. The novelty of this paper lies in the design of the two controllers and in the adoption of limit cycles as the performance assessment method for the inverted pendulum, which not only makes steady-state performance assessment available, but also provides an effective way to evaluate any equilibrium control problem with friction involved.
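For reference, computing an LQR gain boils down to solving a continuous-time algebraic Riccati equation; a minimal SciPy sketch is shown below with placeholder cart-pole matrices, not the plant model used in the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearized cart-pole: state = [cart pos, cart vel, pole angle,
# pole angular vel]. A and B are placeholder values, not the paper's model.
A = np.array([[0, 1,  0,   0],
              [0, 0, -1.0, 0],
              [0, 0,  0,   1],
              [0, 0, 11.0, 0]], dtype=float)
B = np.array([[0], [1.0], [0], [-1.0]])
Q = np.diag([10.0, 1.0, 100.0, 1.0])    # penalize pole angle most heavily
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)    # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.inv(R) @ B.T @ P          # state-feedback law u = -K x
print(np.round(K, 2))
```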
Conditional Affordance Learning for Driving in Urban Environments | Most existing approaches to autonomous driving fall into one of two categories: modular pipelines, that build an extensive model of the environment, and imitation learning approaches, that map images directly to control outputs. A recently proposed third paradigm, direct perception, aims to combine the advantages of both by using a neural network to learn appropriate low-dimensional intermediate representations. However, existing direct perception approaches are restricted to simple highway situations, lacking the ability to navigate intersections, stop at traffic lights or respect speed limits. In this work, we propose a direct perception approach which maps video input to intermediate representations suitable for autonomous navigation in complex urban environments given high-level directional inputs. Compared to state-of-the-art reinforcement and conditional imitation learning approaches, we achieve an improvement of up to 68 % in goal-directed navigation on the challenging CARLA simulation benchmark. In addition, our approach is the first to handle traffic lights and speed signs by using image-level labels only, as well as smooth car-following, resulting in a significant reduction of traffic accidents in simulation. |
Web Video Popularity Prediction using Sentiment and Content Visual Features | Hundreds of hours of video are uploaded every minute to YouTube and other video sharing sites: some will be viewed by millions of people and others will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. The proposed approach outperforms current state-of-the-art methods on two publicly available datasets.
Real-Time Big Data Processing Framework: Challenges and Solutions | The type and amount of data in human society are growing at an amazing speed, driven by emerging new services such as cloud computing, the internet of things and location-based services; the era of big data has arrived. As data has become a fundamental resource, how to better manage and utilize big data has attracted much attention. Especially with the development of the internet of things, how to process large amounts of real-time data has become a great challenge in research and applications. Recently, cloud computing technology has attracted much attention for its high performance, but how to use cloud computing technology for large-scale real-time data processing has not been studied. This paper first studies the challenges of big data and distills them into six issues. In order to improve the performance of real-time processing of large data, this paper builds a real-time big data processing (RTDP) architecture based on cloud computing technology and then proposes the four layers of the architecture and a hierarchical computing model. This paper also proposes a multi-level storage model and an LMA-based application deployment method to meet the real-time and heterogeneity requirements of an RTDP system. We use DSMS, CEP, batch-based MapReduce and other processing modes, and FPGA, GPU, CPU and ASIC technologies, to process the data differently at the terminal of data collection. We structure the data and then upload it to the cloud server, where it is processed with MapReduce combined with the powerful computing capabilities of the cloud architecture. Finally, this paper points out the general framework and calculation methods for future RTDP systems.
An Overview to Customer Relationship Management | Marketing historically has undergone various shifts in emphasis, from production through sales to marketing orientation. However, the various orientations have failed to engage customers in meaningful relationships mutually beneficial to organisations and customers, with all forms of the shift still exhibiting the transactional approach inherent in traditional marketing (Kubil & Doku, 2010). However, Coltman (2006) indicates that in the strategy and marketing literature, scholars have long suggested that a customer-centred strategy is fundamental to competitive advantage and that customer relationship management (CRM) programmes are increasingly being used by organisations to support the type of customer understanding and interdepartmental connectedness required to effectively execute a customer strategy.
A Study on Deep Learning Based Sauvegrain Method for Measurement of Puberty Bone Age | This study applies a data augmentation technique to expand the number of images to a level that allows deep learning, and investigates the applicability of the Sauvegrain method through deep learning with relatively few elbow X-rays. The study was composed of processes similar to physicians' bone age assessment procedures. The selected reference images were learned without being included in the evaluation data, and at the same time the data were extended to cover the number of cases. In addition, we enhanced the X-ray images using U-Net and selected the ROI with an RPN so that bone age estimation could be performed through a CNN. The mean absolute error of the deep learning-based Sauvegrain method is 2.8 months and the mean absolute percentage error (MAPE) is 0.018. This result shows that deep learning-based X-ray analysis using the Sauvegrain method achieves high accuracy for the puberty age group. It means that, with the X-ray images extended by the data augmentation technique, deep learning of the Sauvegrain method can measure bone age at a level similar to that of an expert. Finally, we applied the Sauvegrain method to deep learning for accurate measurement of bone age at puberty. Compared with experts' evaluations, the present deep learning approach overcomes the limitations that machine learning-based bone age measurement with the TW3 or Greulich & Pyle methods faced due to a lack of X-ray images, and shows that the Sauvegrain method is applicable to adolescents as well.
Target Classification Using the Deep Convolutional Networks for SAR Images | The algorithm of synthetic aperture radar automatic target recognition (SAR-ATR) is generally composed of the extraction of a set of features that transform the raw input into a representation, followed by a trainable classifier. The feature extractor is often hand designed with domain knowledge and can significantly impact the classification accuracy. By automatically learning hierarchies of features from massive training data, deep convolutional networks (ConvNets) have recently obtained state-of-the-art results in many computer vision and speech recognition tasks. However, when ConvNets are directly applied to SAR-ATR, they yield severe overfitting due to the limited number of training images. To reduce the number of free parameters, we present new all-convolutional networks (A-ConvNets), which consist only of sparsely connected layers, without any fully connected layers. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) benchmark data set illustrate that A-ConvNets can achieve an average accuracy of 99% on classification of ten-class targets and are significantly superior to traditional ConvNets on the classification of target configuration and version variants.
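A minimal PyTorch sketch of the all-convolutional idea is shown below; layer sizes are illustrative and not the paper's exact A-ConvNets configuration:

```python
import torch
import torch.nn as nn

# All-convolutional classifier: no fully connected layers; the final conv
# maps feature maps directly to 10 class scores, cutting free parameters.
net = nn.Sequential(
    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Dropout2d(0.5),
    nn.Conv2d(64, 10, 3),                  # 10-way scores, no FC layer anywhere
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
logits = net(torch.randn(4, 1, 88, 88))   # batch of single-channel SAR chips
print(logits.shape)                        # torch.Size([4, 10])
```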
Modified two-stage procedures for the treatment of gingival recession. | BACKGROUND
Unfavorable conditions at the soft tissues adjacent to a recession defect may preclude performing pedicle flaps (advanced or rotational) both as a root coverage procedure, and as a covering flap for a connective tissue graft. Free gingival grafts may not be recommended because of the low root coverage predictability and the poor esthetic outcome. The goal of the present case report is to suggest modifications of the two-stage surgical technique aimed at improving root coverage and esthetic outcomes, and reducing patient morbidity.
METHOD
In the first case report, a Miller class II gingival recession, associated with a deep buccal probing depth, affecting a lower central incisor was treated. In the first step of the surgery an epithelized graft with an apical-coronal dimension equal to the keratinized tissue height of the adjacent teeth was sutured apical to the bone dehiscence. Four months later, a coronally advanced flap was performed to cover the root exposure. In the second case report, a Miller class III gingival recession, complicated with a deep buccal probing depth affecting the mesial root of the first lower molar was treated. In the first step of the surgery, a free gingival graft was positioned mesially to the root exposure to create keratinized tissue lateral to the recession defect. This was adequate to perform the laterally moved, coronally advanced flap that was used as a second-step root coverage surgical procedure.
RESULTS
In the first case report complete root coverage, an increase (4 mm) in keratinized tissue height and realignment of the mucogingival line were achieved 1 year after the surgery. The reduced dimension of the graft made it possible to minimize the patient's discomfort and to obtain good esthetics of the mucogingival tissues. These successful outcomes were well maintained for 5 years. In the second case report successful root coverage, an increase (3 mm) in keratinized tissue height and good harmony of the mucogingival tissues were achieved 1 year after the surgery. These outcomes were well maintained 5 years after the surgery.
CONCLUSIONS
The present study suggested that modifications of the two-stage procedure by minimizing the dimension of the graft and by standardizing the surgical techniques allowed successful results to be achieved in the treatment of gingival recessions characterized by local conditions that otherwise preclude the use of one-step root coverage surgical techniques. |
Exquisite Sensitivity of TP53 Mutant and Basal Breast Cancers to a Dose-Dense Epirubicin−Cyclophosphamide Regimen | BACKGROUND
In breast cancers, only a minority of patients fully benefit from the different chemotherapy regimens currently in use. Identification of markers that could predict the response to a particular regimen would thus be critically important for patient care. In cell lines or animal models, tumor protein p53 (TP53) plays a critical role in modulating the response to genotoxic drugs. TP53 is activated in response to DNA damage and triggers either apoptosis or cell-cycle arrest, which have opposite effects on cell fate. Yet, studies linking TP53 status and chemotherapy response have so far failed to unambiguously establish this paradigm in patients. Breast cancers with a TP53 mutation were repeatedly shown to have a poor outcome, but whether this reflects poor response to treatment or greater intrinsic aggressiveness of the tumor is unknown.
METHODS AND FINDINGS
In this study we analyzed 80 noninflammatory breast cancers treated by frontline (neoadjuvant) chemotherapy. Tumor diagnoses were performed on pretreatment biopsies, and the patients then received six cycles of a dose-dense regimen of 75 mg/m(2) epirubicin and 1,200 mg/m(2) cyclophosphamide, given every 14 days. After completion of chemotherapy, all patients underwent mastectomies, thus allowing for a reliable assessment of chemotherapy response. The pretreatment biopsy samples were used to determine the TP53 status through a highly efficient yeast functional assay and to perform RNA profiling. All 15 complete responses occurred among the 28 TP53-mutant tumors. Furthermore, among the TP53-mutant tumors, nine out of ten of the highly aggressive basal subtypes (defined by basal cytokeratin [KRT] immunohistochemical staining) experienced complete pathological responses, and only TP53 status and basal subtype were independent predictors of a complete response. Expression analysis identified many mutant TP53-associated genes, including CDC20, TTK, CDKN2A, and the stem cell gene PROM1, but failed to identify a transcriptional profile associated with complete responses among TP53 mutant tumors. In patients with unresponsive tumors, mutant TP53 status predicted significantly shorter overall survival. The 15 patients with responsive TP53-mutant tumors, however, had a favorable outcome, suggesting that this chemotherapy regimen can overcome the poor prognosis generally associated with mutant TP53 status.
CONCLUSIONS
This study demonstrates that, in noninflammatory breast cancers, TP53 status is a key predictive factor for response to this dose-dense epirubicin-cyclophosphamide regimen and further suggests that the basal subtype is exquisitely sensitive to this association. Given the well-established predictive value of complete responses for long-term survival and the poor prognosis of basal and TP53-mutant tumors treated with other regimens, this chemotherapy could be particularly suited for breast cancer patients with a mutant TP53, particularly those with basal features. |
The Effects of Business-to-Business E-Commerce on Transaction Costs: Description, Examples, and Implications | In this paper, we study the changes in transaction costs from the introduction of the Internet in transactions between firms (i.e., business-to-business (B2B) e-commerce). We begin with a conceptual framework to organize the changes in transaction costs that are likely to result when a transaction is transferred from a physical marketplace to an Internet-based one. Following Milgrom and Roberts (1992), we differentiate between the impact on coordination costs and motivation costs. We argue that it is likely that B2B e-commerce reduces coordination costs and increases efficiency. We classify these efficiencies into three broad categories: (1) process improvements; (2) marketplace benefits; and (3) indirect improvements. At the same time, B2B e-commerce affects motivation costs. In particular, we discuss the impact of the introduction of e-commerce on informational asymmetries. Second, we present some early examples of such effects from three companies from which we have been able to obtain data. We measure the process improvements for all three companies. We measure marketplace benefits for one company. We then attempt to measure motivation costs for one company by comparing the performance of an Internet marketplace to a physical marketplace. We are able to document potentially large process improvements. We document one example of large marketplace benefits. We find mixed evidence of motivation costs.
SqueezedText: A Real-Time Scene Text Recognition by Binary Convolutional Encoder-Decoder Network | A new approach for real-time scene text recognition is proposed in this paper. It combines a novel binary convolutional encoder-decoder network (B-CEDNet) with a bidirectional recurrent neural network (Bi-RNN). The B-CEDNet is engaged as a visual front-end to provide elaborated character detection, and a back-end Bi-RNN performs character-level sequential correction and classification based on learned contextual knowledge. The front-end B-CEDNet can process multiple regions containing characters using a one-off forward operation, and is trained under binary constraints with significant compression. Hence it leads to both remarkable inference run-time speedup as well as memory usage reduction. With the elaborated character detection, the back-end Bi-RNN merely processes a low-dimension feature sequence with category and spatial information of extracted characters for sequence correction and classification. By training with over 1,000,000 synthetic scene text images, the B-CEDNet achieves a recall rate of 0.86, precision of 0.88 and F-score of 0.87 on ICDAR-03 and ICDAR-13. With the correction and classification by the Bi-RNN, the proposed real-time scene text recognition achieves state-of-the-art accuracy while consuming less than 1 ms of inference run-time. The whole processing flow is realized on GPU with a small network size of 1.01 MB for B-CEDNet and 3.23 MB for Bi-RNN, which is much faster and smaller than existing solutions. Introduction The success of convolutional neural networks (CNN) has resulted in a potential general machine learning engine for various computer vision applications (LeCun et al. 1998; Krizhevsky, Sutskever, and Hinton 2012), such as text detection, recognition and interpretation from images. Applications such as Advanced Driver Assistance Systems (ADAS) for road signs with text, however, require a real-time processing capability that is beyond the existing approaches (Jaderberg et al. 2014; Jaderberg, Vedaldi, and Zisserman 2014) in terms of processing functionality, efficiency and latency. For a real-time scene text recognition application, one needs a method with memory efficiency and fast processing time. In this paper, we reveal that binary features (Courbariaux and Bengio 2016) can effectively and efficiently represent the scene text image. Combining with the deconvolution technique, we introduce a binary convolutional encoder-decoder network (B-CEDNet) for real-time one-shot character detection and recognition. The scene text recognition is further enhanced with a back-end character-level sequential correction and classification, based on a bidirectional recurrent neural network (Bi-RNN). Instead of detecting characters sequentially (Bissacco et al. 2013; Wang et al. 2012; Shi, Bai, and Yao 2015), our proposed method, called SqueezedText, can detect multiple characters simultaneously and extract a length-variable character sequence with corresponding spatial information. This sequence is subsequently fed into a Bi-RNN, which then learns the detection error characteristics from the previous stage to provide character-level correction and classification based on the spatial and contextual cues.
By training with over 1,000,000 synthetic scene text images, the proposed SqueezedText can achieve a recall rate of 0.86, precision of 0.88 and F-score of 0.87 on the ICDAR-03 (Lucas et al. 2003) dataset. More importantly, it achieves state-of-the-art accuracy of 93.8%, 92.7%, 94.3%, 96.1% and 83.6% on the ICDAR-03, ICDAR-13, IIIT5K, SVT and Synth90K datasets. SqueezedText is realized on GPU with a small network size of 1.01 MB for B-CEDNet and 3.23 MB for Bi-RNN, and consumes less than 1 ms inference run-time on average. It is up to 4× faster and 6× smaller than state-of-the-art work. The contributions of this paper are summarized as follows: • We propose a novel binary convolutional encoder-decoder neural network model, which acts as a visual front-end module to provide unconstrained scene text detection and recognition. It effectively detects individual characters with a high recall rate, realizing an extremely fast run-time speed and small memory consumption. • We reveal that the text features can be learned and encoded in binary format without loss of discriminative information. This information can be further decoded and recovered to perform multi-character detection and recognition in parallel. • We further design a back-end bidirectional RNN (Bi-RNN) to provide fast and robust scene text recognition with correction and classification.
Design and analysis of a cable-driven manipulator with variable stiffness | A manipulator with variable stiffness can adjust its stiffness to fulfill different task requirements. In this paper, a cable-driven manipulator with the ability to significantly regulate its stiffness through tension manipulation is introduced. Variable stiffness is achieved by attaching a novel variable stiffness device along each driving cable, in which the stiffness of the device is a function of the cable tension. As a cable-driven manipulator has actuation redundancy, the tension distribution can be manipulated even at a stationary pose. This property allows the cable-driven manipulator to adjust the stiffness of each variable stiffness device, thereby changing the stiffness of the manipulator. The design and analysis of the variable stiffness device are presented. The variable stiffness device uses commercial torsion springs and has a compact and light-weight design. Experimental and simulation results verified that a cable-driven manipulator with such variable stiffness devices is able to achieve significant stiffness regulation.
The Computational Rise and Fall of Fairness | The fair division of indivisible goods has long been an important topic in economics and, more recently, computer science. We investigate the existence of envy-free allocations of indivisible goods, that is, allocations where each player values her own allocated set of goods at least as highly as any other player’s allocated set of goods. Under additive valuations, we show that even when the number of goods is larger than the number of agents by a linear fraction, envy-free allocations are unlikely to exist. We then show that when the number of goods is larger by a logarithmic factor, such allocations exist with high probability. We support these results experimentally and show that the asymptotic behavior of the theory holds even when the number of goods and agents is quite small. We demonstrate that there is a sharp phase transition from nonexistence to existence of envy-free allocations, and that on average the computational problem is hardest at that transition. |
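Under additive valuations, envy-freeness is easy to state in code: each player must value her own bundle at least as much as every other player's bundle. A brute-force existence check for small instances (illustrative only; the problem is intractable at scale) is sketched below:

```python
from itertools import product

# values[i][g] = player i's (additive) value for good g.
def envy_free_exists(values):
    n, m = len(values), len(values[0])
    for assign in product(range(n), repeat=m):       # good g goes to assign[g]
        # bundle[i][j] = player i's value for player j's bundle
        bundle = [[sum(values[i][g] for g in range(m) if assign[g] == j)
                   for j in range(n)] for i in range(n)]
        if all(bundle[i][i] >= max(bundle[i]) for i in range(n)):
            return True
    return False

print(envy_free_exists([[3, 1, 2], [1, 3, 2]]))   # True: complementary tastes
print(envy_free_exists([[5, 1], [5, 1]]))         # False: both covet good 0
```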
Sex differences in the administration-time-dependent effects of low-dose aspirin on ambulatory blood pressure in hypertensive subjects. | Previous studies have revealed sex differences in blood pressure (BP) regulation, the pathophysiology of hypertension, and treatment responses to medication. On the other hand, low-dose aspirin has been shown to reduce BP when administered at bedtime, as opposed to upon awakening, in hypertensive subjects and pregnant women at risk for preeclampsia. The purpose of this research was to investigate the potential sex differences in the administration-time-dependent influence of aspirin on BP. We studied 130 men and 186 women with untreated mild hypertension, 44.1 +/- 13.2 yrs of age, randomized to receive aspirin (ASA, 100 mg/day) either on awakening or at bedtime daily for three months. BP was measured for 48 h before and after treatment. With ASA on awakening, ambulatory BP was unchanged in men and slightly but significantly elevated in women (1.7/1.4 mmHg in the 48 h SBP/DBP means, respectively; p < 0.023). BP was significantly reduced after ASA at bedtime, and to a larger extent in women (-8.0/-5.6 mmHg in SBP/DBP) than in men (-5.5/-3.4 mmHg, respectively; p < 0.009 between men and women). Factors influencing a stronger response of BP to aspirin at bedtime included female sex, elevated fasting glucose, and high glomerular filtration rate. This study corroborates the significant administration-time-dependent effect of low-dose aspirin on ambulatory BP in subjects with untreated mild hypertension, while documenting significant sex differences in the BP response to aspirin. Accordingly, the results indicate that bedtime is the optimal time for aspirin ingestion in both men and women. This timed administration of low-dose aspirin could provide a cost-effective valuable approach for BP control and potential added cardiovascular protection, especially in hypertensive women.
Adjustable speed generators for wind turbines based on doubly-fed induction machines and 4-quadrant IGBT converters linked to the rotor | Wind turbines are being built at power levels above 1.5 MW. Higher power levels are anticipated for off-shore applications. To limit mechanical stresses and power surges in these high power systems, speed control is necessary. The doubly-fed induction generator (DFIG) system is investigated as a viable alternative to adjust speed over a wide range while keeping the cost of the power converters minimal. A four-quadrant IGBT ac-to-ac converter is used to feed power bi-directionally to the rotor circuit. Decoupled control of active and reactive power can be realized using the dynamic model of the DFIG. Simulations and measurements confirm the validity of the model and the viability of the DFIG wind turbine.
Deep Convolutional Neural Network for Natural Image Matting Using Initial Alpha Mattes | We propose a deep convolutional neural network (CNN) method for natural image matting. Our method takes multiple initial alpha mattes of the previous methods and normalized RGB color images as inputs, and directly learns an end-to-end mapping between the inputs and reconstructed alpha mattes. Among the various existing methods, we focus on using two simple methods as initial alpha mattes: the closed-form matting and KNN matting. They are complementary to each other in terms of local and nonlocal principles. A major benefit of our method is that it can “recognize” different local image structures and then combine the results of local (closed-form matting) and nonlocal (KNN matting) mattings effectively to achieve higher quality alpha mattes than both of the inputs. Furthermore, we verify extendability of the proposed network to different combinations of initial alpha mattes from more advanced techniques such as KL divergence matting and information-flow matting. On the top of deep CNN matting, we build an RGB guided JPEG artifacts removal network to handle JPEG block artifacts in alpha matting. Extensive experiments demonstrate that our proposed deep CNN matting produces visually and quantitatively high-quality alpha mattes. We perform deeper experiments including studies to evaluate the importance of balancing training data and to measure the effects of initial alpha mattes and also consider results from variant versions of the proposed network to analyze our proposed DCNN matting. In addition, our method achieved high ranking in the public alpha matting evaluation dataset in terms of the sum of absolute differences, mean squared errors, and gradient errors. Also, our RGB guided JPEG artifacts removal network restores the damaged alpha mattes from compressed images in JPEG format. |
Business process modelling: Review and framework | A business process is the combination of a set of activities within an enterprise with a structure describing their logical order and dependence, whose objective is to produce a desired result. Business process modelling enables a common understanding and analysis of a business process. A process model can provide a comprehensive understanding of a process. An enterprise can be analysed and integrated through its business processes. Hence the importance of correctly modelling its business processes. Using the right model involves taking into account the purpose of the analysis and knowledge of the available process modelling techniques and tools. The number of references on business modelling is huge, thus making it very time consuming to get an overview and understand many of the concepts and vocabulary involved. The primary concern of this paper is to make that job easier, i.e. to review the business process modelling literature and describe the main process modelling techniques. Also, a framework for classifying business process modelling techniques according to their purpose is proposed and discussed. © 2003 Elsevier B.V. All rights reserved.
Integrity attestation in military IoT | Trust in the correct operation (“bona fide”) of a transaction is sometimes required in order to trust the validity of exchanged data. Authentication of users/subjects does give some trust in the intent of a transaction, but not in its conduct. Malware may cause the other end to send corrupted data or misbehave in other ways. This paper discusses different mechanisms through which nodes can prove to each other that their software stack is clean from unwarranted modifications, called integrity attestation. For IoT applications, integrity assurance can lead to higher trust in the exchanged data, e.g., sensor readings. |
DeepLogic: Towards End-to-End Differentiable Logical Reasoning. | Combining machine learning with logic-based expert systems in order to get the best of both worlds is becoming increasingly popular. However, to what extent machine learning can already learn to reason over rule-based knowledge is still an open problem. In this paper, we explore how symbolic logic, defined as logic programs at a character level, can be represented in a high-dimensional vector space using RNN-based iterative neural networks to perform reasoning. We create a new dataset that defines 12 classes of logic programs exemplifying increasing levels of complexity of logical reasoning, and train the networks in an end-to-end fashion to learn whether a logic program entails a given query. We analyse how learning the inference algorithm gives rise to representations of atoms, literals and rules within logic programs, and evaluate against increasing lengths of predicate and constant symbols as well as increasing steps of multi-hop reasoning.
Human Capital and the Adoption of Information and Communications Technologies: Evidence from Investment Climate Survey of Pakistan | This paper studies the impact of human capital on the adoption and diffusion of Information and Communications Technologies (ICT) in the Pakistani firms using the World Bank Enterprise Survey 2002-07. The paper considers various indicators of human capital and measures of ICT adoption and diffusion. On-the-job training, manager's level of qualification and production workers' level of education are found to positively determine the use of emails, website and other means of communication in a firm. The results are robust to the inclusion of geographical, sectoral and structural control variables. Firm size, sales and workers' compensation are also positively associated with the use of ICT. The findings show the importance of accumulation and development of human capital in the productivity growth in the era of skill-biased technical change. A concerted national effort for the enhancement of the workforce's computing skills is therefore a must if a developing economy such as Pakistan is to improve its competitiveness. |
Self-Indexing Inverted Files for Fast Text Retrieval | Query-processing costs on large text databases are dominated by the need to retrieve and scan the inverted list of each query term. Retrieval time for inverted lists can be greatly reduced by the use of compression, but this adds to the CPU time required. Here we show that the CPU component of query response time for conjunctive Boolean queries and for informal ranked queries can be similarly reduced, at little cost in terms of storage, by the inclusion of an internal index in each compressed inverted list. This method has been applied in a retrieval system for a collection of nearly two million short documents. Our experimental results show that the self-indexing strategy adds less than 20% to the size of the compressed inverted file, which itself occupies less than 10% of the indexed text, yet can reduce processing time for Boolean queries of 5-10 terms to under one fifth of the previous cost. Similarly, ranked queries of 40-50 terms can be evaluated in as little as 25% of the previous time, with little or no loss of retrieval effectiveness. |
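The core trick can be illustrated with a toy, uncompressed version: store a small skip entry (the largest document id) per block so lookups can bypass whole blocks. The block size and plain-list layout below are illustrative simplifications of the compressed structure described in the paper:

```python
# Toy self-indexed inverted list: one (max doc id, block) skip entry per block.
BLOCK = 4

def build(postings):                       # postings: sorted document ids
    blocks = [postings[i:i + BLOCK] for i in range(0, len(postings), BLOCK)]
    return [(b[-1], b) for b in blocks]

def member(index, doc):
    for max_id, block in index:
        if doc <= max_id:                  # only this block can contain doc
            return doc in block            # decode/scan just one block
    return False

idx = build([2, 5, 8, 11, 14, 19, 23, 40, 77])
print(member(idx, 19), member(idx, 20))    # True False
```

In a conjunctive Boolean query, the shortest list drives the intersection, and the skip entries let the longer lists avoid decoding most of their blocks.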
Beyond Exchangeability: The Chinese Voting Process | Many online communities present user-contributed responses such as reviews of products and answers to questions. User-provided helpfulness votes can highlight the most useful responses, but voting is a social process that can gain momentum based on the popularity of responses and the polarity of existing votes. We propose the Chinese Voting Process (CVP) which models the evolution of helpfulness votes as a self-reinforcing process dependent on position and presentation biases. We evaluate this model on Amazon product reviews and more than 80 StackExchange forums, measuring the intrinsic quality of individual responses and behavioral coefficients of different communities. |
Probabilistic Typology: Deep Generative Models of Vowel Inventories | Linguistic typology studies the range of structures present in human language. The main goal of the field is to discover which sets of possible phenomena are universal, and which are merely frequent. For example, all languages have vowels, while most—but not all—languages have an [u] sound. In this paper we present the first probabilistic treatment of a basic question in phonological typology: What makes a natural vowel inventory? We introduce a series of deep stochastic point processes, and contrast them with previous computational, simulation-based approaches. We provide a comprehensive suite of experiments on over 200 distinct languages. |
Concolic Execution on Small-Size Binaries: Challenges and Empirical Study | Concolic execution has achieved great success in many binary analysis tasks. However, it is still not a primary option for industrial usage. A well-known reason is that concolic execution cannot scale up to large-size programs. Many research efforts have focused on improving its scalability. Nonetheless, we find that, even when processing small-size programs, concolic execution suffers a great deal from accuracy and scalability issues. This paper systematically investigates the challenges that can be introduced even by small-size programs, such as symbolic arrays and symbolic jumps. We further verify that the proposed challenges are non-trivial via real-world experiments with the three most popular concolic execution tools: BAP, Triton, and Angr. Among a set of 22 logic bombs we designed, Angr can solve only four cases correctly, while BAP and Triton perform much worse. The results imply that current tools are still too primitive for practical industrial usage. We summarize the reasons and release the bombs as open source to facilitate further study.
The responsibilities of the state and businesses in the creation of social capital | According to various studies conducted in recent years, social rather than natural factors will constitute a major barrier in the process of achieving sustainable development. The barrier will be noticed most quickly in countries and regions with low social capital. The purpose of this article is to present the responsibilities of state authorities and enterprises in the creation of social capital. The literature study conducted here has enabled the creation of a set of methods through which these entities can influence the nature of relationships and the level of trust within certain social structures. The work presented here may serve as a reference point for entities implementing similar solutions in their environment. The author cites numerous examples which oppose the opinion that the level of social capital is only a result of inheritance. The current generation can take the initiative in this regard, although it is a long-term process. The author also points out that the effectiveness of the measures taken depends on adapting them to the specific context in which the group operates. He also draws attention to the role of society, which, like the state and commercial entities, is responsible for raising the level of social capital.
Securing SMS Based One Time Password Technique from Man in the Middle Attack | Security of financial transactions in e-commerce is difficult to implement, and there is a risk that a user's confidential data sent over the internet may be accessed by hackers. Unfortunately, interacting with an online service such as a banking web application often requires a certain degree of technical sophistication that not all Internet users possess. For the last couple of years such naive users have been increasingly targeted by phishing attacks that are launched by miscreants who are aiming to make an easy profit by means of illegal financial transactions. In this paper, we propose an idea for securing e-commerce transactions from phishing attacks. An approach already exists where a phishing attack is prevented using a one time password (OTP) which is sent to the user's registered mobile via SMS for authentication. But this method can be counter-attacked by a man in the middle. In our paper, a new idea is proposed which is more secure compared to the existing online payment system using OTP. In this mechanism the OTP is combined with a secure key and is then passed through the RSA algorithm to generate the transaction password. A copy of this password is maintained at the server side and is generated at the user side using a mobile application, so that it is never transferred over the insecure network, preventing a fraudulent transaction. Keywords—Phishing, Replay attack, MITM attack, RSA, Random Generator.
Visual reaction time and high-speed ball games. | Laboratory measures of visual reaction time suggest that some aspects of high-speed ball games such as cricket are 'impossible' because there is insufficient time for the player to respond to unpredictable movements of the ball. Given the success with which some people perform these supposedly impossible acts, it has been assumed by some commentators that laboratory measures of reaction time are not applicable to skilled performers. An analysis of high-speed film of international cricketers batting on a specially prepared pitch which produced unpredictable movement of the ball is reported, and it is shown that, when batting, highly skilled professional cricketers show reaction times of around 200 ms, times similar to those found in traditional laboratory studies. Furthermore, professional cricketers take roughly as long as casual players to pick up ball flight information from film of bowlers. These two sets of results suggest that the dramatic contrast between the ability of skilled and unskilled sportsmen to act on the basis of visual information does not lie in differences in the speed of operation of the perceptual system. It lies in the organisation of the motor system that uses the output of the perceptual system. |
Evaluation of the Shear Strength of Dapped Ended Beam | Dapped end beams are precast members of concrete structures which are widely used in buildings and bridges. The re-entrant corner is the weakest portion of the beam, where stress concentration develops; such regions are known as disturbed regions. These regions cannot be analyzed with ordinary flexural analysis theory; instead, another method, the Strut and Tie Model (STM), is used. The same approach has been used in this research. In this research four reinforced concrete dapped end beams, divided into two groups G-1 and G-2 having depths of 18" (457 mm) and 12" (305 mm) respectively, were designed for the assumed external load. The beams were later tested under monotonic loads to study the shear strength of the dapped ends and compare it with the assumed external design loads. In the case of G-1, failure loads were observed to be much higher than the design loads, which shows that STM gives a very conservative solution for the design of dapped ended beams of greater depth. Actual values of strut forces were observed to be quite close to the values proposed by ACI 318-08 for both diagonal bottle-shaped struts and horizontal prismatic struts. The experimental values of the strength reduction factor for struts βs are also close to the values specified in the ACI code. In the case of the beams of G-2, failure loads were observed to be lower than the design loads, which shows that STM gives an unrealistic solution for the design of dapped end beams of lower depth. Actual values of strut forces and the strength reduction factor βs are also much smaller than the values proposed by the ACI code. [Ahmad S, Elahi A, Hafeez J. Evaluation of the Shear Strength of Dapped Ended Beam. Life Sci J 2013;10(3):1038-1044] (ISSN:1097-8135). http://www.lifesciencesite.com. 151
IPv4-v6 configured tunnel and 6to4 transition mechanisms network performance evaluation on Linux operating systems | The last few decades have brought many fundamental changes to data communications and the Internet. The Internet has its roots in an ARPA networking project that connected four computers; it now spans the globe and has become the default communication mechanism for businesses and individuals. IPv4 addresses will soon be exhausted, which motivated the development of IPv6. The new IP version solves problems that were inherent in the earlier version and also offers additional opportunities. However, transition to the new version has been remarkably slow, so in the interim various transition mechanisms can be employed. In this paper two such mechanisms, the configured tunnel and the 6to4 transition mechanism, are empirically evaluated for performance. Both mechanisms are implemented on two different Linux distributions, and performance metrics such as throughput, delay, jitter and CPU usage of the transition end nodes are measured. The results obtained on the test-bed show that TCP/UDP throughput and jitter values of the two mechanisms are similar, but delay differs significantly depending on the choice of transition mechanism and operating system. |
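For context on the 6to4 mechanism evaluated here: per RFC 3056, each IPv4 endpoint deterministically owns the IPv6 prefix 2002:V4ADDR::/48. A small self-contained sketch of that mapping (standard protocol behavior, not code from the paper):

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Embed the 32-bit IPv4 address after 2002::/16 to get the 6to4 /48 prefix."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixto4_prefix("192.0.2.1"))  # -> 2002:c000:201::/48
```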
Clinical profile and prognostic value of low systolic blood pressure in patients hospitalized for heart failure with reduced ejection fraction: insights from the Efficacy of Vasopressin Antagonism in Heart Failure: Outcome Study with Tolvaptan (EVEREST) trial. | BACKGROUND
Systolic blood pressure (SBP) is related to the pathophysiologic development and progression of heart failure (HF) and is inversely associated with adverse outcomes during hospitalization for HF (HHF). The prognostic value of SBP after initiating in-hospital therapy and the mode of death and etiology of cardiovascular readmissions based on SBP have not been well characterized in HHF.
METHODS
A post hoc analysis was performed of the placebo group (n = 2061) of the EVEREST trial, which enrolled patients within 48 hours of admission for worsening HF with an ejection fraction (EF) ≤40% and an SBP ≥90 mm Hg, for a median follow-up of 9.9 months. Systolic blood pressure was measured at baseline, daily during hospitalization, and at discharge/day 7. Patients were divided into the following quartiles by SBP at baseline: ≤105, 106 to 119, 120 to 130, and ≥131 mm Hg. Outcomes were all-cause mortality (ACM) and the composite of cardiovascular mortality or HHF (CVM + HHF). The associations between baseline, discharge, and inhospital change in SBP and ACM and CVM + HHF were assessed using multivariable Cox proportional hazards regression models adjusted for known covariates.
RESULTS
Median (25th, 75th) SBP at baseline was 120 (105, 130) mm Hg and ranged from 82 to 202 mm Hg. Patients with a lower SBP were younger and more likely to be male; had a higher prevalence of prior revascularization and ventricular arrhythmias; had a lower EF, worse renal function, higher natriuretic peptide concentrations, and wider QRS durations; and were more likely to require intravenous inotropes during hospitalization. Lower SBP was associated with increased mortality, driven by HF and sudden cardiac death, and cardiovascular hospitalization, primarily caused by HHF. After adjusting for potential confounders, SBP was inversely associated with risk of the coprimary end points both at baseline (ACM: hazard ratio [HR] 1.15 per 10-mm Hg decrease, 95% CI 1.08-1.22; CVM + HHF: HR 1.09 per 10-mm Hg decrease, 95% CI 1.04-1.14) and at the time of discharge/day 7 (ACM: HR 1.15 per 10-mm Hg decrease, 95% CI 1.08-1.22; CVM + HHF: HR 1.07 per 10-mm Hg decrease, 95% CI 1.02-1.13), but the association with in-hospital SBP change was not significant.
CONCLUSION
Systolic blood pressure is an independent clinical predictor of morbidity and mortality after initial therapy during HHF with reduced EF. |
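Hazard ratios like those above come from multivariable Cox proportional hazards models; below is a minimal sketch of that style of analysis with the open-source lifelines package, on invented toy data rather than the EVEREST dataset:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical toy data, not the EVEREST dataset: follow-up time in months,
# death indicator, and covariates. SBP is coded per 10-mm Hg *decrease* so
# that HR > 1 means lower pressure carries higher risk, as in the abstract.
df = pd.DataFrame({
    "time_months": [9.9, 4.2, 8.1, 2.5, 7.3, 5.8],
    "death":       [0,   1,   0,   1,   0,   1],
    "sbp_per10_decrease": [-11.8, -9.6, -13.1, -10.4, -12.0, -9.9],
    "age":         [64, 71, 58, 69, 62, 75],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to those reported
```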
MRI of female genital and pelvic organs during sexual arousal. | We utilized contrast enhanced magnetic resonance imaging (MRI) to delineate the anatomy of the female genital and pelvic organs during sexual arousal. Eleven healthy pre-menopausal women and eight healthy post-menopausal women underwent MRI of the pelvis while watching an erotic video. A 1.5 Tesla MR system was used to produce T1-weighted images following administration of MS-325, a gadolinium-based blood pool contrast agent. Selected structural dimensions and enhancement were measured prior to and during sexual arousal. In both pre- and post-menopausal subjects, vestibular bulb and labia minora width increased with arousal. Enhancement measurements increased in the bulb, labia minora and clitoris in both pre- and post-menopausal subjects, and in the vagina in pre-menopausal subjects. There were no marked changes in size or enhancement of the labia majora, urethra, cervix, or rectum during sexual arousal in pre- or post-menopausal subjects. Using MRI, we observed specific changes in the female genitalia and pelvic organs with sexual arousal, in both pre- and post-menopausal women. MRI can potentially provide detailed anatomical information in the assessment of female sexual function, particularly with regard to changes in blood flow. |
A Simple and Practical Approach to Unit Testing: The JML and JUnit Way | Writing unit test code is labor-intensive, hence it is often not done as an integral part of programming. However, unit testing is a practical approach to increasing the correctness and quality of software; for example, the Extreme Programming approach relies on frequent unit testing. In this paper we present a new approach that makes writing unit tests easier. It uses a formal specification language’s runtime assertion checker to decide whether methods are working correctly, thus automating the writing of unit test oracles. These oracles can be easily combined with hand-written test data. Instead of writing testing code, the programmer writes formal specifications (e.g., pre- and postconditions). This makes the programmer’s task easier, because specifications are more concise and abstract than the equivalent test code, and hence more readable and maintainable. Furthermore, by using specifications in testing, specification errors are quickly discovered, so the specifications are more likely to provide useful documentation and inputs to other tools. We have implemented this idea using the Java Modeling Language (JML) and the JUnit testing framework, but the approach could be easily implemented with other combinations of formal specification languages and unit test tools. |
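The paper's tooling is JML plus JUnit for Java; the sketch below is only a Python analogue of the core idea, using runtime-checked pre- and postconditions as the test oracle so that tests reduce to supplying input data:

```python
# Python analogue of the JML+JUnit idea (the paper itself targets Java):
# executable pre/postconditions serve as the test oracle.
import functools

def contract(pre, post):
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args):
            assert pre(*args), f"precondition violated: {args}"
            result = fn(*args)
            assert post(*args, result), f"postcondition violated: {result}"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda x, r: abs(r * r - x) < 1e-9)
def my_sqrt(x: float) -> float:
    return x ** 0.5

# The "test" is just data; the contract decides pass/fail automatically.
for sample in [0.0, 2.0, 144.0]:
    my_sqrt(sample)
```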
Physicians' knowledge of future vascular disease in women with preeclampsia. | OBJECTIVE
Preeclampsia, a hypertensive disorder of pregnancy, affects 5-8% of women. Large studies demonstrate a strong association between preeclampsia and future cardiovascular disease (CVD). Despite CVD being the leading cause of mortality for women, there has been little education for internal medicine physicians or obstetrician-gynecologists (ob-gyns) about this association; published guidelines do not include preeclampsia as a risk factor for future CVD. Therefore, women with a history of preeclampsia may not receive adequate risk-reduction counseling for CVD. It is unclear whether primary care physicians are aware of the association; thus, we sought to determine whether primary care providers at our institution were aware of preeclampsia's association with future CVD and whether they were providing appropriate counseling.
METHODS
An anonymous online survey was sent to all internists and ob-gyns at our hospital.
RESULTS
Although most internists (95%) and ob-gyns (70%) provide routine cardiovascular risk-reduction counseling, a substantial proportion of them were unaware of any health risk associated with a history of preeclampsia. Many internists were unsure or did not know whether preeclampsia is associated with ischemic heart disease (56%), stroke (48%), and decreased life expectancy (79%); the corresponding proportions for ob-gyns were 23, 38, and 77%, respectively. Only 9% of internists and 38% of ob-gyns were providing cardiovascular risk-reduction counseling to women with a history of preeclampsia.
CONCLUSION
There is limited knowledge of the association between preeclampsia and future CVD; this deficiency may limit the application of this risk factor to clinical care. |
Abstractive Summarization of Reddit Posts with Multi-level Memory Networks | Byeongchang Kim, Hyunwoo Kim, Gunhee Kim; Department of Computer Science and Engineering & Center for Superintelligence, Seoul National University, Seoul, Korea. Project page: http://vision.snu.ac.kr/projects/reddit-tifu |
When is a metal not a metal? | Small clusters of naked (ligand-free) atoms, or single atoms, are seen as the ultimate catalysts for achieving various types of hydrocarbon reactions (e.g. Fischer-Tropsch reactions) without catalyzing the production of undesirable materials. Recently published work aimed at producing single atoms or clusters of naked atoms, and at characterizing them, is surveyed. Attempts to answer the question of how many atoms of a specific material must cluster together to bring about the transition to bulk material properties are also reported; this transition occurs at different cluster sizes for physical and for chemical properties. |
Finding Overlapping Communities Using Disjoint Community Detection Algorithms | Many algorithms have been designed to discover community structure in networks. Most of these detect disjoint communities, while a few can find communities that overlap. We propose a new, two-phase method of detecting overlapping communities. In the first phase, a network is transformed to a new one by splitting vertices, using the idea of split betweenness; in the second phase, the transformed network is processed by a disjoint community detection algorithm. This approach has the potential to convert any disjoint community detection algorithm into an overlapping community detection algorithm. Our experiments, using several “disjoint” algorithms, demonstrate that the method works, producing solutions, and execution times, that are often better than those produced by specialized “overlapping” algorithms. |
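A minimal sketch of the two-phase pipeline, assuming networkx as the disjoint detector; the split-betweenness vertex splitting of phase 1 is the paper's contribution and is left as a stub here:

```python
# Sketch of the two-phase shape only; split_vertices is a stub standing in
# for the authors' split-betweenness vertex splitting.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def split_vertices(g: nx.Graph) -> nx.Graph:
    """Phase 1 stub: should duplicate vertices with high split betweenness;
    here the graph is returned unchanged, so no overlap will be produced."""
    return g.copy()

def overlapping_communities(g: nx.Graph):
    transformed = split_vertices(g)                     # phase 1: vertex splitting
    parts = greedy_modularity_communities(transformed)  # phase 2: any disjoint algorithm
    # In the full method, split copies map back to their original vertex;
    # a vertex whose copies land in different parts belongs to both.
    return [set(p) for p in parts]

print(overlapping_communities(nx.karate_club_graph()))
```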
Deep Co-attention based Comparators For Relative Representation Learning in Person Re-identification | Person re-identification (re-ID) requires rapid, flexible yet discriminant representations to quickly generalize to unseen observations on-the-fly and recognize the same identity across disjoint camera views. Recent effective methods adopt a pair-wise similarity learning framework that detects a fixed set of features from distinct regions and maps them to vector embeddings for distance measurement. However, the most relevant and crucial parts of each image are detected independently, without modeling the dependency between the two images; moreover, these region-based methods rely on spatial manipulation to align local features for comparable similarity measurement. To overcome these limitations, in this paper we introduce the Deep Co-attention based Comparators (DCCs), which fuse the co-dependent representations of the paired images so as to focus on the relevant parts of both images and produce their relative representations. Given a pair of pedestrian images to be compared, the proposed model mimics the foveation of human eyes to detect distinct regions concurrently on both images, namely co-dependent features, and alternately attends to relevant regions to fuse them into the similarity learning. Our comparator is capable of producing dynamic representations relative to a particular sample every time, and is thus well suited to re-identifying pedestrians on-the-fly. We perform extensive experiments to provide insights and demonstrate the effectiveness of the proposed DCCs in person re-ID. Moreover, our approach achieves state-of-the-art performance on three benchmark data sets: DukeMTMC-reID [1], CUHK03 [2], and Market-1501 [3]. |
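The DCC architecture itself is specified in the paper; as a minimal, generic illustration of the co-attention fusion it describes (not the authors' exact design), the sketch below computes a shared affinity between two flattened feature maps and summarizes each image's features at every location of the other:

```python
# Minimal generic co-attention between two feature maps (illustrative only).
import torch
import torch.nn.functional as F

def co_attend(fa: torch.Tensor, fb: torch.Tensor):
    """fa, fb: (B, C, N) flattened conv features of the two images."""
    affinity = torch.bmm(fa.transpose(1, 2), fb)    # (B, Na, Nb) location pairs
    attn_a = F.softmax(affinity, dim=1)             # per fb location: weights over fa
    attn_b = F.softmax(affinity, dim=2)             # per fa location: weights over fb
    fa_ctx = torch.bmm(fb, attn_b.transpose(1, 2))  # fb summarized at each fa location
    fb_ctx = torch.bmm(fa, attn_a)                  # fa summarized at each fb location
    return fa_ctx, fb_ctx  # relative representations, fused downstream

a, b = torch.randn(2, 256, 49), torch.randn(2, 256, 49)
print([t.shape for t in co_attend(a, b)])
```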
Genetic Alterations in Thyroid Carcinoma Associated with Familial Adenomatous Polyposis: Clinical Implications and Suggestions for Early Detection | Germ-line mutations of the adenomatous polyposis coli (APC) gene, responsible for familial adenomatous polyposis (FAP), were analyzed in 15 patients with FAP-associated papillary thyroid carcinomas: 13 had the mutation between codons 778 and 1309 (exon 15), 1 at codon 593 (exon 14), and 1 at codon 140 (exon 3). APC gene mutations therefore clustered in the genomic region associated with congenital hypertrophy of the retinal pigment epithelium (CHRPE) (codons 463–1387). Ocular patches were documented in 12 patients. In particular, 4 of the 15 patients, all women with a mean age of 23.5 years (range 20–32), were found during the study of 15 consecutive kindreds who had undergone systematic screening for extracolonic manifestations. Three of them belonged to the same kindred and were asymptomatic. These four patients were also screened for loss of heterozygosity of APC in the thyroid tumoral tissue. No biallelic inactivation of the APC gene was found. In contrast, three of these four patients had activation of the ret-PTC oncogene, specifically the ret-PTC1 isoform, a chimeric gene resulting from fusion of a gene named H4 with the RET gene. On histologic examination, three of the four patients showed Hashimoto-like lymphocytic infiltration. The present data suggest that: (1) the incidence of FAP-associated thyroid cancer has probably been underestimated in the past; (2) intensive screening could detect a larger than expected number of thyroid carcinomas; (3) systematic screening is recommended in patients with ocular patches and a genetic mutation in exon 15; (4) Hashimoto-like findings do not exclude carcinoma but are a frequent accompanying finding; (5) despite frequent multicentricity and early lymph node involvement, FAP-associated thyroid tumors seem to have an excellent prognosis, particularly those showing ret-PTC activation. |
Shape and Pose Space Deformation for Subject Specific Animation | In this paper we present a framework for generating arbitrary human models and animating them realistically given a few intuitive parameters. Shape and pose space deformation (SPSD) is introduced as a technique for modeling subject specific pose induced deformations from whole-body registered 3D scans. By exploiting examples of different people in multiple poses we are able to realistically animate a novel subject by interpolating and extrapolating in a joint shape and pose parameter space. Our results show that we can produce plausible animations of new people and that greater detail is achieved by incorporating subject specific pose deformations. We demonstrate the application of SPSD to produce subject specific animation sequences driven by RGB-Z performance capture. |
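The abstract does not give SPSD's exact formulation; the sketch below shows the generic pose-space-deformation style of interpolation it builds on (RBF-weighted blending of example corrections in a joint shape and pose parameter space), with all array shapes and names hypothetical:

```python
# Generic pose-space-deformation interpolation (textbook PSD, not the exact
# SPSD formulation): blend per-vertex example offsets with RBF weights
# computed in the joint shape+pose parameter space.
import numpy as np

def rbf_weights(query: np.ndarray, examples: np.ndarray, sigma: float = 1.0):
    d2 = ((examples - query) ** 2).sum(axis=1)   # squared distance to each example
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

def blended_offsets(query_params, example_params, example_offsets):
    """example_params: (K, P) shape+pose vectors; example_offsets: (K, V, 3)."""
    w = rbf_weights(query_params, example_params)
    return np.tensordot(w, example_offsets, axes=1)  # (V, 3) corrective offsets

params = np.random.rand(5, 8)           # 5 example scans, 8 joint parameters
offsets = np.random.randn(5, 100, 3)    # per-vertex corrections for 100 vertices
print(blended_offsets(np.random.rand(8), params, offsets).shape)  # (100, 3)
```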
Online Verification of Automated Road Vehicles Using Reachability Analysis | An approach for formally verifying the safety of automated vehicles is proposed. Due to the uniqueness of each traffic situation, we verify safety online, i.e., during the operation of the vehicle. The verification is performed by predicting the set of all possible occupancies of the automated vehicle and other traffic participants on the road. In order to capture all possible future scenarios, we apply reachability analysis to consider all possible behaviors of mathematical models considering uncertain inputs (e.g., sensor noise, disturbances) and partially unknown initial states. Safety is guaranteed with respect to the modeled uncertainties and behaviors if the occupancy of the automated vehicle does not intersect that of other traffic participants for all times. The applicability of the approach is demonstrated by test drives with an automated vehicle at the Robotics Institute at Carnegie Mellon University. |
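The paper's verification relies on set-based reachability with sophisticated vehicle models and tooling; as a hedged illustration of the underlying idea only, the toy sketch below propagates interval bounds for a one-dimensional system with bounded input, and safety would amount to checking that no reachable interval intersects another participant's occupancy:

```python
# Toy interval reachability for xdot = a*x + u, u in [u_lo, u_hi], via Euler
# steps on the bounds. Sound here because 1 + dt*a >= 0 keeps the bound
# ordering; real tools use richer set representations (e.g. zonotopes).
def reach_intervals(x_lo, x_hi, a, u_lo, u_hi, dt, steps):
    sets = [(x_lo, x_hi)]
    for _ in range(steps):
        x_lo, x_hi = (x_lo + dt * (a * x_lo + u_lo),
                      x_hi + dt * (a * x_hi + u_hi))
        sets.append((x_lo, x_hi))
    return sets

# Safety check would be: no interval intersects another vehicle's occupancy.
for lo, hi in reach_intervals(0.0, 0.5, a=-1.0, u_lo=-0.1, u_hi=0.1, dt=0.1, steps=5):
    print(f"[{lo:.3f}, {hi:.3f}]")
```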
Exploring the knowledge landscape: four emerging views of knowledge | Purpose – The concept of “knowledge” is presented in diverse and sometimes even controversial ways in the knowledge management (KM) literature. The aim of this paper is to identify the emerging views of knowledge and to develop a framework to illustrate the interrelationships of the different knowledge types. Design/methodology/approach – This paper is a literature review exploring how “knowledge” as a central concept is presented and understood in a selected range of KM publications (1990-2004). Findings – The exploration of the knowledge landscape showed that “knowledge” is viewed in four emerging and complementary ways. The ontological, epistemological, commodity, and community views of knowledge are discussed in this paper. The findings show that KM is still a young discipline, and it is therefore natural to have different, sometimes even contradicting, views of “knowledge” side by side in the literature. Practical implications – These emerging views of knowledge can be seen as opportunities for researchers to provide new contributions. However, this diversity and complexity call for careful and specific clarification of the researchers’ standpoint, for a clear statement of their view of knowledge. Originality/value – This paper offers a framework as a compass to help researchers orient themselves in the confusing and ever-changing landscape of knowledge. |
Antibody–drug conjugates as novel anti-cancer chemotherapeutics | Over the past couple of decades, antibody-drug conjugates (ADCs) have revolutionized the field of cancer chemotherapy. Unlike conventional treatments that damage healthy tissues upon dose escalation, ADCs utilize monoclonal antibodies (mAbs) to specifically bind tumour-associated target antigens and deliver a highly potent cytotoxic agent. The synergistic combination of mAbs conjugated to small-molecule chemotherapeutics, via a stable linker, has given rise to an extremely efficacious class of anti-cancer drugs with an already large and rapidly growing clinical pipeline. The primary objective of this paper is to review current knowledge and the latest developments in the field of ADCs. Upon intravenous administration, ADCs bind to their target antigens and are internalized through receptor-mediated endocytosis. This facilitates the subsequent release of the cytotoxin, which eventually leads to apoptotic death of the cancer cell. All three components of an ADC (mAb, linker and cytotoxin) affect its efficacy and toxicity; optimizing each one, while enhancing the functionality of the ADC as a whole, has been one of the major considerations in ADC design and development. In addition, the choice of clinically relevant targets and the position and number of linkages have been key determinants of ADC efficacy. The only marketed ADCs, brentuximab vedotin and trastuzumab emtansine (T-DM1), have demonstrated efficacy against haematological and solid malignancies, respectively. The success of future ADCs relies on improving target selection, increasing cytotoxin potency, developing innovative linkers and overcoming drug resistance. As more research is conducted to tackle these issues, ADCs are likely to become part of the future of targeted cancer therapeutics. |